Is the tensor algebra a Lie algebra? I found that the converse is true. To show that it is a Lie algebra I just have to show that the Jacobi identity holds, so I was wondering whether it could be true either for the tensor algebra of a vector space or for a subalgebra of that space.
As Torsten says in the comments, every associative algebra is a Lie algebra with bracket the commutator bracket $[a, b] = ab - ba$. The tensor algebra $T(V)$, as a Lie algebra, has a subalgebra generated by $V$ (which is not all of $T(V)$), and $T(V)$ has a special relationship to this subalgebra: at least in characteristic $0$, it is precisely the universal enveloping algebra of this subalgebra, and moreover this subalgebra is the free Lie algebra $L(V)$ on $V$. This mostly follows from the fact that the two have the same universal property, except that there's a little work to be done explaining why the natural map $L(V) \to U(L(V))$ is injective. But this follows from the Poincare-Birkhoff-Witt theorem.
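The commutator bracket mentioned above can be sanity-checked concretely. The following sketch (plain Python, with hypothetical example matrices chosen just for illustration) verifies the Jacobi identity for $[a,b]=ab-ba$ on $3\times 3$ integer matrices, where the arithmetic is exact.

```python
# Sketch: check the Jacobi identity for the commutator bracket [a, b] = ab - ba
# on concrete 3x3 integer matrices (an associative algebra), using plain
# Python lists so the arithmetic is exact.

def mm(A, B):
    """Matrix product of two 3x3 matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def sub(A, B):
    return [[A[i][j] - B[i][j] for j in range(3)] for i in range(3)]

def brk(A, B):
    """Commutator bracket [A, B] = AB - BA."""
    return sub(mm(A, B), mm(B, A))

# Arbitrary (hypothetical) integer matrices
a = [[1, 2, 0], [0, 1, 3], [4, 0, 1]]
b = [[0, 1, 1], [2, 0, 0], [1, 1, 2]]
c = [[3, 0, 1], [1, 1, 0], [0, 2, 1]]

# Jacobi identity: [a,[b,c]] + [b,[c,a]] + [c,[a,b]] = 0
jacobi = [[brk(a, brk(b, c))[i][j] + brk(b, brk(c, a))[i][j] + brk(c, brk(a, b))[i][j]
           for j in range(3)] for i in range(3)]
assert all(entry == 0 for row in jacobi for entry in row)
```

Since the identity holds in any associative algebra, the check passes for any choice of matrices; the fixed integer entries just make the verification exact rather than floating-point.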
{ "language": "en", "url": "https://math.stackexchange.com/questions/3925089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to interpret $|x|^{-1}$ as a distribution Currently I'm reading a little about regularization of distributions which usually have annoying singularities like $x^{-1}$ or $|x|^{-1}$. To the best of my understanding, $$ \frac{1}{x} $$ is interpreted as a distribution by considering its Cauchy principal value instead of doing the brute-force integration. This way, it's possible to recover a meaningful result by carefully cancelling contributions to the singularity. So if $g$ is, say, a function from Schwartz space, then $$ \left\langle \text{p.v.}\, \frac{1}{x}, \, g(x) \right\rangle = \int _{0}^{\infty} \frac{g(x) - g(-x)}{x} \, dx. $$ Intuitively, this makes total sense to me. However, when reading about $|x|^{-1}$ I get slightly confused. The definition states that $|x|^{-1}$ (or, rather, its regularized cousin $\mathcal{P}|x|^{-1}$) acts on test functions as per $$ \left\langle \mathcal{P}\frac{1}{|x|}, \, g(x) \right\rangle = \int _{|x|<1} \frac{g(x)-g(0)}{|x|} \, dx + \int_{|x|>1} \frac{g(x)}{|x|} \, dx $$ The second integral is pretty straightforward; no singularities to sneak around. But the first one... Does it solve the singularity problem? Yes, I can see that, since $$ \frac{g(x)-g(0)}{|x|} $$ remains bounded as $x \rightarrow 0$. But how do we get to it? What's the reasoning behind it? I've been struggling to see it for a little while. Any and all help will be appreciated :).
Since $u(x):=\operatorname{sign}(x)\ln|x| \in L^{1}_{\text{loc}}(\mathbb{R})$ we can take $$ \frac{1}{|x|}:=u'(x), $$ i.e. for $\varphi\in C_c^\infty(\mathbb{R})$ we set $$\begin{align} \left< \frac{1}{|x|}, \varphi(x) \right> &:= - \int_{-\infty}^{\infty} \operatorname{sign}(x)\ln|x| \, \varphi'(x) \, dx \\&= \lim_{\epsilon\to 0} \left( \int_{-\infty}^{-\epsilon} \ln|x|\,\varphi'(x)\,dx - \int_{\epsilon}^{\infty} \ln|x|\,\varphi'(x)\,dx \right) \\&= \lim_{\epsilon\to 0} \left( \left[\ln|x|\,\varphi(x)\right]_{-\infty}^{-\epsilon} - \int_{-\infty}^{-\epsilon} \frac{1}{x} \varphi(x)\,dx - \left[\ln|x|\,\varphi(x)\right]_{\epsilon}^{\infty} + \int_{\epsilon}^{\infty} \frac{1}{x}\,\varphi(x)\,dx \right) \\&= \lim_{\epsilon\to 0} \left( \ln\epsilon\,\varphi(-\epsilon) - \int_{-\infty}^{-\epsilon} \frac{1}{x} \varphi(x)\,dx + \ln\epsilon\,\varphi(\epsilon) + \int_{\epsilon}^{\infty} \frac{1}{x}\,\varphi(x)\,dx \right) \\&= \lim_{\epsilon\to 0} \left( \ln\epsilon\,(\varphi(-\epsilon)+\varphi(\epsilon)) + \int_{|x|>\epsilon} \frac{1}{|x|}\,\varphi(x)\,dx \right) \\&= \lim_{\epsilon\to 0} \left( 2\ln\epsilon\,\varphi(0) + \int_{|x|>\epsilon} \frac{1}{|x|}\,\varphi(x)\,dx \right) .\end{align}$$
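This regularization can be checked numerically. The sketch below assumes the rapidly decaying Gaussian $\varphi(x)=e^{-x^2}$ is an acceptable stand-in for a compactly supported test function; for this $\varphi$, the limit above can be evaluated in closed form via the exponential integral $E_1$ and equals $-\gamma$ (the Euler-Mascheroni constant), which the midpoint-rule computation at a small fixed $\epsilon$ reproduces.

```python
import math

# Sketch: numerically evaluate the regularized pairing
#   <1/|x|, phi> = lim_{eps->0} ( 2 ln(eps) phi(0) + int_{|x|>eps} phi(x)/|x| dx )
# for the (rapidly decaying, hypothetical) test function phi(x) = exp(-x^2).
# Via the exponential integral E_1, the exact value is -gamma.

EULER_GAMMA = 0.5772156649015329
eps = 1e-3

# By symmetry, int_{|x|>eps} e^{-x^2}/|x| dx = 2 int_eps^inf e^{-x^2}/x dx.
# Substituting x = e^u turns the integrand into the smooth 2*exp(-e^{2u}),
# which we integrate by the midpoint rule (the tail beyond x = 10 is negligible).
a, b, n = math.log(eps), math.log(10.0), 200_000
h = (b - a) / n
integral = sum(2.0 * math.exp(-math.exp(2 * (a + (i + 0.5) * h))) for i in range(n)) * h

pairing = 2.0 * math.log(eps) * 1.0 + integral  # phi(0) = 1
assert abs(pairing + EULER_GAMMA) < 1e-3
```

The large negative term $2\ln\epsilon\,\varphi(0)$ and the divergent part of the integral cancel, leaving a finite value, which is exactly the mechanism of the regularization described above.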
{ "language": "en", "url": "https://math.stackexchange.com/questions/3925197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Use the Jacobian matrix to solve an equation. Our teacher told us that Jacobian matrices can be used to solve a system of nonlinear equations, and I am wondering exactly how this works (he never actually showed us an example). If I have the following system to solve ($u,v$ are $C^1$): (local inversion of $f(x,y)$ near the point $(x,y,u,v)= (1,1,-2,2)$, and the origin?) $$ u =x^3-3xy^2 $$ $$v=3x^2y-y^3$$ Here's my attempt: Let $f=x^3-3xy^2$ and $g = 3x^2y-y^3$. I found $f_x = 3x^2-3y^2 = g_y$ and also $f_y = 6xy = g_x$, and I used the Jacobian to find the matrix determinant: $\frac{\partial(f,g) }{\partial (x,y)}=9x^4-54x^2y^2+9y^4$. Then I used the given point $(1,1)$ to find it equal to $-36 \neq 0$. But I don't know what the next steps are; since the result is not $0$, can we now use the inverse function theorem? * *I'm really not too sure about my steps *And for the point at the origin should we take $(0,0,-2,2)$ or $(0,0,0,0)$? Thanks in advance, any help would be a lot appreciated.
Your notation is clumsy so let's clean up the problem and see what we are doing. You have a function $F:\mathbb{R}^2\times \mathbb{R}^2\to \mathbb{R}^2$ given by $F(x,y,u,v)=(f_1(x,y,u,v),f_2(x,y,u,v))^T$, where \begin{align} f_1(x,y,u,v)&=x^3-3xy^2-u \\ f_2(x,y,u,v)&=3x^2y-y^3-v \end{align} (The order of the $x,y$ terms and the $u,v$ terms may be flipped.) Then we have $F(1, 1, -2, 2)=(0,0)^T$ and $F(0,0,0,0)=(0,0)^T$. By the implicit function theorem, we may write $F(x(u,v),y(u,v),u,v)=(0,0)^T$ locally around those points (i.e. there exists a neighborhood on which this is valid), provided the Jacobian with respect to $x$ and $y$ has nonzero determinant. Alternatively, if you wanted to write $F(x,y,u(x,y),v(x,y))=(0,0)^T$, you would have to show that the Jacobian with respect to $u$ and $v$ has nonzero determinant. In this problem, this is really easy because you can solve for $u$ and $v$ explicitly.
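The determinant condition in this answer can be checked numerically. The sketch below (an illustrative check, not part of the original answer) approximates the partial derivatives of $f_1,f_2$ with respect to $x,y$ by central differences: the determinant is $36\neq 0$ at $(1,1,-2,2)$, so the implicit function theorem applies there, while it vanishes at the origin.

```python
# Sketch (illustrative numerical check): the Jacobian of (f1, f2) with respect
# to (x, y), approximated by central differences, is nonsingular at (1, 1)
# and singular at the origin.

def f1(x, y, u, v):
    return x**3 - 3*x*y**2 - u

def f2(x, y, u, v):
    return 3*x**2*y - y**3 - v

def jac_det_xy(x, y, u, v, h=1e-5):
    """Determinant of the 2x2 Jacobian d(f1,f2)/d(x,y) via central differences."""
    fx1 = (f1(x + h, y, u, v) - f1(x - h, y, u, v)) / (2*h)
    fy1 = (f1(x, y + h, u, v) - f1(x, y - h, u, v)) / (2*h)
    fx2 = (f2(x + h, y, u, v) - f2(x - h, y, u, v)) / (2*h)
    fy2 = (f2(x, y + h, u, v) - f2(x, y - h, u, v)) / (2*h)
    return fx1 * fy2 - fy1 * fx2

# At (x, y, u, v) = (1, 1, -2, 2): det = (3x^2-3y^2)^2 + (6xy)^2 = 36 != 0.
assert abs(jac_det_xy(1.0, 1.0, -2.0, 2.0) - 36.0) < 1e-3
# At the origin the determinant vanishes, so the theorem gives no conclusion there.
assert abs(jac_det_xy(0.0, 0.0, 0.0, 0.0)) < 1e-6
```

Note that the exact determinant is $(3x^2-3y^2)^2+36x^2y^2\ge 0$, a sum of squares; it vanishes only at $x=y=0$.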
{ "language": "en", "url": "https://math.stackexchange.com/questions/3925473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the limit of $a_1 = 1 $ , $ a_{n+1} = \frac{\sqrt{1+a_n^2}-1}{a_n}$ , $n \in \mathbb{N}$ $a_1 = 1 $ , $ a_{n+1} = \frac{\sqrt{1+a_n^2}-1}{a_n}$ , $n \in \mathbb{N}$ It is easy to prove that the limit exists: the sequence is bounded below by $0$ and is monotonically decreasing. The problem is to actually find the limit (it is $0$), because if I apply the arithmetic of limits: $ \lim_{n \to +\infty} a_{n+1} = \frac{\sqrt{1+ \lim_{n \to +\infty} a_n^2}-1}{\lim_{n \to +\infty} a_n}$ $g = \frac{\sqrt{1+g^2}-1}{g} \implies g^2 = {\sqrt{1+g^2}-1} \implies g^2 + 1 = {\sqrt{1+g^2}} \implies 0 = 0$
You can prove it directly like this $$a_{n+1} = \frac{\sqrt{1+a_n^2}-1}{a_n} < \frac{\sqrt{1+a_n^2+\frac{1}{4}a_n^4}-1}{a_n} = \frac{1+\frac{1}{2}a_n^2-1}{a_n} = \frac 12 a_n$$ Therefore $a_n \to 0$, $n\to \infty$.
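A quick numerical check of this bound (a sketch; the rewritten iteration formula below is algebraically equivalent to the original one but numerically stable for tiny $a_n$):

```python
import math

# Sketch: iterate the recursion and check the bound a_{n+1} <= a_n / 2 from the
# answer.  We use the algebraically equivalent form a/(sqrt(1+a^2)+1), which
# avoids the catastrophic cancellation of sqrt(1+a^2)-1 for tiny a.
a = 1.0
values = [a]                      # values[k] holds a_{k+1}
for _ in range(30):
    a = a / (math.sqrt(1.0 + a * a) + 1.0)
    values.append(a)

assert all(values[n + 1] <= values[n] / 2 for n in range(30))
assert values[30] < 1e-8          # a_n <= 2^{-(n-1)} -> 0

# Incidentally, a_1 = tan(pi/4) and the half-angle identity
# (sec t - 1)/tan t = tan(t/2) give the closed form a_n = tan(pi / 2^{n+1}).
assert abs(values[4] - math.tan(math.pi / 64)) < 1e-12
```

The closed form $a_n=\tan(\pi/2^{n+1})$ (an observation, not needed for the proof) makes the limit $0$ transparent as well.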
{ "language": "en", "url": "https://math.stackexchange.com/questions/3925616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
An attempt at proving that $A=(0,1)$ is not compact on the real line with the usual topology. I am supposed to show that the open interval $(0,1)$ on the real line $\mathbb{R}$ (with the usual topology) is not compact. I know that one of the examples of an open cover without a finite subcover, in this case, would be $\left\{\left(\dfrac{1}{n},1\right)\bigg\rvert n\in\mathbb{N}\right\}$, thus proving that $(0,1)$ is not compact. However, I am trying to prove the same using a different example of an open cover, $$\mathscr{F}=\left\{\left(\dfrac{1}{2^n},\dfrac{3}{2^n}\right)\bigg\rvert n\in\mathbb{N}\right\}.$$ If it can be shown that the family of sets $\mathscr{F}$ forms an open cover for $(0,1)$, i.e., $(0,1)\subseteq\bigcup\limits_{n\in\mathbb{N}}G_n$, where $G_n=\left(\dfrac{1}{2^n},\dfrac{3}{2^n}\right)$, it is straightforward to check that any $x=\dfrac{1}{2^n}$, where $n\in\mathbb{N}$, lies in only one of these $G_n$'s. So, removing any one of these open sets from our open cover, in an attempt to find a finite subcover, the family of sets won't form a finite subcover of $(0,1)$. Thus, $(0,1)$ is not compact. Where I am getting stuck is in actually formally proving that any $x\in(0,1)$ lies in one of these $G_n$'s. My intuition here is that by the Archimedean principle, for any $x\in(0,1)$, there exists some $k\in\mathbb{N}$ such that $x>\dfrac{1}{k}$. So, let us choose $k$, for a given $x$, such that $\dfrac{1}{k}<x<\dfrac{1}{k-1}$. Again, since this $k>1$, there exists some $n_0\in\mathbb{N}$ such that $2^{n_0-1}<k<2^{n_0}\implies \dfrac{1}{2^{n_0}}<\dfrac{1}{k}<\dfrac{1}{2^{n_0-1}}$. My claim is that $x\in \left(\dfrac{1}{2^{n_0}},\dfrac{3}{2^{n_0}}\right)$, and my approach was to try to prove that $\left(\dfrac{1}{k},\dfrac{1}{k-1}\right)\subseteq \left(\dfrac{1}{2^{n_0}},\dfrac{3}{2^{n_0}}\right)$, but I am unable to proceed ahead from this point.
Any help would be highly appreciated, in either proving or disproving the claim that I make in the last sentence.
Let $0<x<1$. Consider the interval $(\frac {\ln (\frac 1 x)} {\ln 2},\frac {\ln (\frac 3 x)} {\ln 2})$. The length of this interval is easily seen to be $\frac {\ln 3} {\ln 2}$. Since any interval of length greater than $1$ contains an integer, there exists an integer $n$ in this interval. You can now verify that $x \in (\frac 1 {2^{n}},\frac 3 {2^{n}})$. Note that since $0 <x<1$ and $x \in (\frac 1 {2^{n}},\frac 3 {2^{n}})$, the integer $n$ is necessarily positive.
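This argument is easy to exercise numerically. In the sketch below (sample points $k/997$ are an arbitrary illustrative grid, chosen non-dyadic so no $x$ sits exactly on an interval endpoint), the integer $n=\lfloor \log_2(1/x)\rfloor+1$ always lands strictly inside the interval, and the claimed membership follows.

```python
import math

# Sketch: for sample x in (0, 1), the interval (log2(1/x), log2(3/x)) has
# length log2(3) > 1, so n = floor(log2(1/x)) + 1 lies strictly inside it,
# and then x is in (1/2^n, 3/2^n) as the answer claims.
for k in range(1, 997):
    x = k / 997.0                       # non-dyadic sample points
    lo = math.log(1.0 / x) / math.log(2.0)
    hi = math.log(3.0 / x) / math.log(2.0)
    n = math.floor(lo) + 1
    assert lo < n < hi
    assert 1.0 / 2**n < x < 3.0 / 2**n
```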
{ "language": "en", "url": "https://math.stackexchange.com/questions/3925796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
An asymptotic bound for an integral of Legendre polynomials I want to show the following asymptotic estimate for an integral involving a Legendre polynomial: for $0<t<\theta<\frac\pi2$, $$\Big|\int_0^\theta \frac{1}{\sqrt{\theta}+\sqrt{\theta-t}} \frac{1-P_n(\cos t)}{t}dt \Big| \leq C \theta^{-\frac12} \cdot \log (n\theta),$$ where $C$ is a constant, $P_n(x)$ is the Legendre polynomial, and $n$ is a positive integer. I am trying to use Bernstein's inequality for trigonometric polynomials: Since $P_n(1)=1$ and $|P_n(\cos t)|\leq 1$, $$|P_n(\cos t)-1|=|P_n(\cos t)-P_n(\cos 0)|\leq nt||P_n ||_{\infty}\leq nt.$$ Then I can only get the estimate below, but I have no further ideas: $$\Big|\int_0^\theta \frac{1}{\sqrt{\theta}+\sqrt{\theta-t}} \frac{1-P_n(\cos t)}{t}dt \Big| \leq \int_0^\theta \frac{1}{\sqrt{\theta}+\sqrt{\theta-t}} \frac{|1-P_n(\cos t)|}{t}dt \leq \int_0^\theta \frac{1}{\sqrt{\theta}+\sqrt{\theta-t}} \frac{nt}{t}dt=O(n).$$ Any suggestions are welcome! Thank you for your help!
First, suppose that $\frac{1}{n} <\theta <\frac{\pi}{2}$. We split the range of integration into two parts: $0<t<\frac{1}{n}$ and $\frac{1}{n}<t<\theta$, respectively. On the first interval, we use the known limiting relation between the Legendre polynomials and the Bessel function of the first kind $J_0$: \begin{align*} &\int_0^{1/n} {\frac{1}{{\sqrt \theta + \sqrt {\theta - t} }}\frac{{1 - P_n (\cos t)}}{t}dt} = \int_0^1 {\frac{1}{{\sqrt \theta + \sqrt {\theta - s/n} }}\frac{{1 - P_n \left( {\cos \left( {\frac{s}{n}} \right)} \right)}}{s}ds} \\ & \sim \int_0^1 {\frac{1}{{\sqrt \theta + \sqrt {\theta - s/n} }}\frac{{1 - J_0 (s)}}{s}ds} \le K\int_0^1 {\frac{1}{{\sqrt \theta + \sqrt {\theta - s/n} }} s ds} \le \frac{K}{2}\theta ^{ - 1/2} \end{align*} with a suitable positive constant $K$ and large $n$. On the remaining interval, we have \begin{align*} \left| {\int_{1/n}^\theta {\frac{1}{{\sqrt \theta + \sqrt {\theta - t} }}\frac{{1 - P_n (\cos t)}}{t}dt} } \right| & \le \theta ^{ - 1/2} \int_{1/n}^\theta {\left| {\frac{{1 - P_n (\cos t)}}{t}} \right|dt} \\ & \le 2\theta ^{ - 1/2} \int_{1/n}^\theta \frac{{dt}}{t} = 2\theta ^{ - 1/2} \log (n\theta ) . \end{align*} Thus, there is a constant $C >0$ such that $$ \left| {\int_0^\theta {\frac{1}{{\sqrt \theta + \sqrt {\theta - t} }}\frac{{1 - P_n (\cos t)}}{t}dt} } \right| \le C \theta ^{ - 1/2} \max(1,\log (n\theta )). $$ If $0 <\theta <\frac{1}{n}$, then \begin{align*} & \int_0^\theta {\frac{1}{{\sqrt \theta + \sqrt {\theta - t} }}} \frac{{1 - P_n (\cos t)}}{t}dt \le \theta ^{ - 1/2} \int_0^\theta {\frac{{1 - P_n (\cos t)}}{t}dt} \\ & \le \theta ^{ - 1/2} \int_0^{1/n} {\frac{{1 - P_n (\cos t)}}{t}dt} \sim \theta ^{ - 1/2} \int_0^1 {\frac{{1 - J_0 (s)}}{s}ds} \\ & \le K\theta ^{ - 1/2} \int_0^1 {sds} = \frac{K}{2}\theta ^{ - 1/2} . 
\end{align*} In summary, there is a constant $C >0$ such that $$ \left| {\int_0^\theta {\frac{1}{{\sqrt \theta + \sqrt {\theta - t} }}\frac{{1 - P_n (\cos t)}}{t}dt} } \right| \le C \theta ^{ - 1/2} \max(1,\log (n\theta )). $$
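The limiting relation between Legendre polynomials and $J_0$ used at the start of this answer (the Mehler-Heine asymptotic $P_n(\cos(s/n))\to J_0(s)$) can be sanity-checked numerically; the sketch below evaluates $P_n$ through its three-term recurrence and $J_0$ through its power series.

```python
import math

# Sketch: sanity-check the limiting relation used in the answer (Mehler-Heine),
#   P_n(cos(s/n)) -> J_0(s) as n -> infinity,
# at s = 1 and a large but finite n.

def legendre(n, x):
    """P_n(x) via the recurrence (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

def j0(s, terms=25):
    """Bessel J_0 by its power series sum_m (-1)^m (s/2)^{2m} / (m!)^2."""
    return sum((-1)**m * (s / 2.0)**(2*m) / math.factorial(m)**2 for m in range(terms))

n, s = 2000, 1.0
assert abs(legendre(n, math.cos(s / n)) - j0(s)) < 1e-2
```

The forward Legendre recurrence is stable for $|x|\le 1$, so even $n=2000$ iterations pose no numerical difficulty.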
{ "language": "en", "url": "https://math.stackexchange.com/questions/3925891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
example where $E(E(X \mid \mathcal{F}_1) \mid \mathcal{F}_2) \neq E(E(X \mid \mathcal{F}_2) \mid \mathcal{F}_1)$ Given $\Omega = \{ a,b,c\}$, I want to construct an example where $E(E(X \mid \mathcal{F}_1) \mid \mathcal{F}_2) \neq E(E(X \mid \mathcal{F}_2) \mid \mathcal{F}_1)$. This is from Durrett's Probability: Theory and Examples. Let $\mathcal{F}_1 = \sigma(\{a\}) = \{ \{a\}, \{b,c\}, \{a,b,c\}, \emptyset\}$ and let $\mathcal{F}_2 = \sigma(\{ c\}) = \{\{c\}, \{a,b\}, \{a,b,c\}, \emptyset \}$. Let $X(a) = X(c) = 0$ and $X(b) = 1$, and $P(a) = P(b)= P(c) = 1/3$. Then in the text, we apparently get $$E(X\mid \mathcal{F}_1)(a) = 0,\quad E(X\mid \mathcal{F}_1)(b) = 1/2, \quad E(X\mid \mathcal{F}_1)(c) = 1/2 $$ $$E(E(X\mid \mathcal{F}_1)\mid \mathcal{F}_2)(a) = 1/4, \quad E(E(X\mid \mathcal{F}_1)\mid \mathcal{F}_2)(b) = 1/4, \quad E(E(X\mid \mathcal{F}_1)\mid \mathcal{F}_2)(c) = 1/2$$ I think what the book did here is use $E(X \mid \mathcal{F}) = {E(X ; \Omega_i) \over P(\Omega_i)}$ on $\Omega_i$, so for $\{b,c\}$, $$E(X\mid \mathcal{F}_1) (\{b,c\}) = {E(X ; \{b,c\}) \over P(\{b,c\})} = {1/3 \over 2/3} = {1 \over 2}$$ But how are $E(X\mid \mathcal{F}_1)(b) = 1/2$, $E(X\mid \mathcal{F}_1)(c) = 1/2$ when $\{b\}$ and $\{c\}$ are not even in $\mathcal{F}_1$? Even if we assume $E(X\mid \mathcal{F}_1)(c)$ can be computed, shouldn't $E(X\mid \mathcal{F}_1)(c) = 0$ since $E(X; \{c\}) = 0$?
$E(X|\mathcal{F}_1)$ is by definition $\mathcal{F}_1$- measurable, so we have $E(X|\mathcal{F}_1)(\omega)=\alpha {1}_{\{a\}}(\omega) + \beta 1_{\{b,c\}}(\omega)$ for all $\omega \in \Omega$ and some $\alpha, \beta\in \mathbb{R}$. Moreover, it easily follows from the definition of the conditional expectation that $\beta=\frac{E(X \cdot 1_{\{b,c\}} )}{P(\{b,c\})}=\frac{1\cdot \frac{1}{3}+0\cdot \frac{1}{3}}{2/3}=\frac{1}{2}$. Therefore, $E(X|\mathcal{F}_1)(b)=E(X|\mathcal{F}_1)(c)=\beta=\frac{1}{2}$.
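The cell-averaging description in this answer can be turned into a direct computation. The sketch below computes both iterated conditional expectations on the three-point space in exact rational arithmetic and confirms they disagree (at the point $a$: $1/4$ versus $1/2$).

```python
from fractions import Fraction

# Sketch: compute both iterated conditional expectations on the three-point
# space by averaging over the cells of each partition (exact arithmetic).

P = {'a': Fraction(1, 3), 'b': Fraction(1, 3), 'c': Fraction(1, 3)}
X = {'a': Fraction(0), 'b': Fraction(1), 'c': Fraction(0)}
F1 = [{'a'}, {'b', 'c'}]        # partition generating sigma({a})
F2 = [{'c'}, {'a', 'b'}]        # partition generating sigma({c})

def cond_exp(f, partition):
    """E(f | sigma(partition)): on each cell, the P-weighted average of f."""
    out = {}
    for cell in partition:
        avg = sum(f[w] * P[w] for w in cell) / sum(P[w] for w in cell)
        for w in cell:
            out[w] = avg
    return out

lhs = cond_exp(cond_exp(X, F1), F2)   # E(E(X|F1)|F2)
rhs = cond_exp(cond_exp(X, F2), F1)   # E(E(X|F2)|F1)

assert lhs == {'a': Fraction(1, 4), 'b': Fraction(1, 4), 'c': Fraction(1, 2)}
assert rhs == {'a': Fraction(1, 2), 'b': Fraction(1, 4), 'c': Fraction(1, 4)}
assert lhs != rhs
```

The first assertion reproduces exactly the values quoted from Durrett in the question.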
{ "language": "en", "url": "https://math.stackexchange.com/questions/3926206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $f(x)=a$ for $x\in A$, and $f(x)=b\neq a$ otherwise, where $A$ and $\Bbb R-A$ are dense in $\Bbb R$, show that there is no limit as $x\to\infty$ Let $A\subseteq \Bbb{R}$ and let $a,b\in \Bbb{R}$ such that $a\neq b$. $A$ and $\Bbb{R}-A$ are dense in $\Bbb{R}$. Define $f:\Bbb{R}\to\Bbb{R}$: $$f(x) = \begin{cases} a, &\text{if } x \in A \\ b, & \text{otherwise}\end{cases}$$ Prove that there is no limit as $x\to\infty$. I tried proving by contradiction, but I couldn't find any claim that contradicts the assumption that there's a limit. I assumed for contradiction that there's a limit and defined $\epsilon = 0.5$. Then from the definition of limit: $|f(x) - L| = | a - L| < \epsilon$ $|f(y) - L| = | b - L| < \epsilon$ $a-0.5 < L < a + 0.5$ $b-0.5 < L < b + 0.5$ Then I got stuck because I don't see any contradiction here.
Try $\epsilon = 0.5 |b - a|$. Then use the density of $A$ and $\Bbb R\setminus A$ to see $L \in (a-\epsilon, a+\epsilon)$ and $L \in (b-\epsilon, b+\epsilon)$, but $\epsilon$ is half of the distance between $a$ and $b$, so these two open intervals won't overlap. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3926305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I solve a first-order differential equation of the following form? If I had the differential equation $x'(t) + \frac{x(t)}{t} = e^{t^2}$ and a question saying "solve the following first-order ODE for $x(t)$", how would I go about doing this?
This is a linear differential equation, so first let's find the solution to the homogeneous equation. We see that $x(t)=A\exp(-\ln(t))=\frac{A}{t}$ where $A$ is a constant. Using variation of parameters, we suppose that $A=A(t)$, so we have $x(t)=\frac{A(t)}{t}$. Plugging this into the equation, we get$$ A'(t)=t \exp(t^2) $$ so $A(t) = \frac{\exp(t^2)}{2}+C$ where $C$ is a constant. Thus $$ x(t)=\frac{\exp(t^2)}{2t}+\frac{C}{t}. $$
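The solution can be verified numerically. The sketch below picks an arbitrary constant $C$ and checks, with a central-difference derivative, that $x'(t)+x(t)/t-e^{t^2}$ vanishes at several sample points.

```python
import math

# Sketch: numerically confirm that x(t) = exp(t^2)/(2t) + C/t satisfies
# x'(t) + x(t)/t = exp(t^2), using a central-difference derivative.
C = 3.0   # arbitrary choice of the integration constant

def x(t):
    return math.exp(t * t) / (2.0 * t) + C / t

def residual(t, h=1e-6):
    x_prime = (x(t + h) - x(t - h)) / (2.0 * h)
    return x_prime + x(t) / t - math.exp(t * t)

for t in (0.5, 1.0, 2.0):
    assert abs(residual(t)) < 1e-4
```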
{ "language": "en", "url": "https://math.stackexchange.com/questions/3926449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
find limit of $\frac{1+\sqrt{2}+\sqrt[3]{3}+...+\sqrt[n]{n}}{n}$ with squeeze theorem I'm trying to prove with the squeeze theorem that the limit of the following sequence equals 1: $$\frac{1+\sqrt{2}+\sqrt[3]{3}+...+\sqrt[n]{n}}{n}$$ For the left side of the inequality I did: $$\frac{1+\sqrt{1}+\sqrt[3]{1}+...+\sqrt[n]{1}}{n} < \frac{1+\sqrt{2}+\sqrt[3]{3}+...+\sqrt[n]{n}}{n}$$ For the right side, at first I did the following: $$\frac{1+\sqrt{2}+\sqrt[3]{3}+...+\sqrt[n]{n}}{n} < \frac{n\sqrt[n]{n}}{n}$$ But then I realized it wasn't true and that the direction of this inequality is the opposite. Do you have any idea which sequence with limit 1 is bigger than the original one? Thanks!
This is not a full answer to the question, but many answers are implying that the function $n\mapsto n^{1/n}$ is strictly increasing. This is not the case. To see this: Let $y=x^{1/x}$. Then $\ln y=\frac 1x \ln x$ so $\frac{y'}{y}=\frac{1}{x^2}(1-\ln x)$. Since $y>0$, this implies that $y$ is increasing on $(0,e)$ and decreasing on $(e,\infty)$. Hence, do not use $n^{1/n}$ as an upper bound for every term.
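A few sample values illustrate the non-monotonicity described here:

```python
# Sketch: x^(1/x) is not monotone -- it increases up to x = e and decreases after.
f = lambda x: x ** (1.0 / x)

assert f(2) < f(3)               # f peaks at x = e, which lies between 2 and 3
assert f(3) > f(4) > f(10)       # decreasing past e
assert abs(f(2) - f(4)) < 1e-12  # in fact 2^(1/2) = 4^(1/4)
```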
{ "language": "en", "url": "https://math.stackexchange.com/questions/3926580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Is Language of All Binary-Digit Strings Not Containing Substring 0100 or Suffix 010 Context-Free? Let $\Sigma$ be the alphabet consisting of the symbols {0,1}, let $\Sigma^{*}$ be its Kleene closure, and let $R$ be defined by $R = \{w \in \Sigma^{*} \mid (0100 \text{ is not a substring of }w) \wedge (010 \text{ is not a suffix of }w)\}$. Is $R$ a context-free language, and if so, what would be a grammar or FSM describing it? Thank you...
$R$ is not just context-free: it’s regular. Here is the transition table for a DFA that recognizes $R$; $q_0$ is the initial state, and $q_0,q_1$, and $q_2$ are the acceptor states. $$\begin{array}{c|c|c} q&a&\delta(q,a)\\\hline q_0&0&q_1\\ q_0&1&q_0\\ q_1&0&q_1\\ q_1&1&q_2\\ q_2&0&q_3\\ q_2&1&q_0\\ q_3&0&q_4\\ q_3&1&q_2\\ q_4&0&q_4\\ q_4&1&q_4 \end{array}$$ Draw the automaton and check that it is at $q_3$ if and only if the current input ends in $010$, and it reaches $q_4$ if and only if the input has a substring $0100$.
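The transition table can be simulated directly. The sketch below encodes the table as a dictionary and spot-checks a few strings against the definition of $R$.

```python
# Sketch: simulate the DFA from the table above and spot-check some strings.
delta = {
    (0, '0'): 1, (0, '1'): 0,
    (1, '0'): 1, (1, '1'): 2,
    (2, '0'): 3, (2, '1'): 0,
    (3, '0'): 4, (3, '1'): 2,
    (4, '0'): 4, (4, '1'): 4,
}
ACCEPT = {0, 1, 2}  # q3 ("ends in 010") and q4 ("saw 0100") are rejecting

def accepts(w):
    state = 0
    for ch in w:
        state = delta[(state, ch)]
    return state in ACCEPT

assert accepts("")               # no bad substring, no bad suffix
assert accepts("0101")           # contains 010, but not as a suffix or in 0100
assert accepts("1100")
assert not accepts("010")        # forbidden suffix
assert not accepts("0100")       # forbidden substring
assert not accepts("110100111")  # 0100 occurring in the middle
```

Note that `"0101"` is accepted even though it contains $010$: only the suffix condition and the substring $0100$ are forbidden.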
{ "language": "en", "url": "https://math.stackexchange.com/questions/3926763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is this method right to solve some equations of degree 2? Given this equation $-x^2+x+6=4$ We can write it as $(3-x)(x+2) = 4$ So, weirdly, if I take $(3-x) = 4$ and $(x+2) = 4$, I get $x=-1$ and $x=2$, which are indeed the correct solutions. Why does this happen? And in which cases can this occur?
Note that the given equation can be rewritten as $$(x+1)(x-2)=-[(3-x)-4][(x+2)-4]=0$$ which leads to $3-x=4$ and $x+2=4$.
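A quick check of both the "weird" roots and the identity behind this answer:

```python
# Sketch: check the factorization identity and that the "weird" roots really
# solve the original equation -x^2 + x + 6 = 4.
for x in (-1, 2):
    assert (3 - x) * (x + 2) == 4      # the factored form equals 4
    assert -x**2 + x + 6 == 4          # so x solves the original equation

# The answer's rewriting: (x+1)(x-2) = -[(3-x)-4][(x+2)-4] holds for all x
for x in range(-10, 11):
    assert (x + 1) * (x - 2) == -((3 - x) - 4) * ((x + 2) - 4)
```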
{ "language": "en", "url": "https://math.stackexchange.com/questions/3926948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Compute $\frac{4}{5}\pi \int_0^{2\pi}\int_0^1\sqrt{v^2-2 v\cos\theta+1}\>v{\rm d}v\, {\rm d}\theta$ The integral $$I=\frac{4}{5}\pi \int_0^{2\pi}\int_0^1\sqrt{v^2-2 v\cos\theta+1}\>v{\rm d}v\, {\rm d}\theta$$ arises from the calculation of the expected distance between two random points inside a unit circle. I can't figure out the change of variables to transform it to $$I=\frac{4}{5}\pi \int_{-\pi/2}^{\pi/2}\int_0^{\sqrt{2(1+\cos2\theta')}} v'^2\,{\rm d}v'\,{\rm d}\theta'$$ [figure: change of variables]
Referring to the graph above, the variable changes used are $$v = \sqrt{{v'}^2-2{v'}\cos\theta'+1},\>\>\>\>\>\>\>\frac{\sin\theta}{\sin\theta'}=\frac{v'}{v} $$ Then, according to the Jacobian determinant $$dv d\theta = \frac{v'}{v}dv' d\theta'$$ with the ranges $\theta'\in[-\frac\pi2, \frac\pi2]$ and $v'\in [0, \sqrt{2(1+\cos2\theta')}]$. Thus, the integral is transformed to $$I=\frac{4}{5}\pi \int_{-\pi/2}^{\pi/2}\int_0^{\sqrt{2(1+\cos2\theta')}} v'^2\,{\rm d}v'\,{\rm d}\theta'=\frac{128\pi}{45}$$
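As a numerical cross-check (a sketch, not part of the derivation): the inner double integral equals $32/9$, so that $I=\frac{4}{5}\pi\cdot\frac{32}{9}=\frac{128\pi}{45}$ as stated. A midpoint rule on a modest grid reproduces this.

```python
import math

# Sketch: midpoint-rule check that the inner double integral equals 32/9,
# which gives I = (4/5) * pi * (32/9) = 128*pi/45 as in the answer.
Nv, Nt = 400, 400
dv, dt = 1.0 / Nv, 2.0 * math.pi / Nt
J = 0.0
for i in range(Nv):
    v = (i + 0.5) * dv
    for j in range(Nt):
        theta = (j + 0.5) * dt
        J += math.sqrt(v * v - 2.0 * v * math.cos(theta) + 1.0) * v
J *= dv * dt

assert abs(J - 32.0 / 9.0) < 0.02
assert abs((4.0 / 5.0) * math.pi * J - 128.0 * math.pi / 45.0) < 0.1
```

The integrand $\sqrt{(v-\cos\theta)^2+\sin^2\theta}\,v$ vanishes only at the single point $(v,\theta)=(1,0)$, so the midpoint rule converges without trouble.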
{ "language": "en", "url": "https://math.stackexchange.com/questions/3927074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Behaviour of the Gamma function near zero The Gamma function on the positive real half-line is defined via the well-known formula $$ \Gamma(z)=\int_0^\infty x^{z-1}e^{-x}dx, \quad z>0. $$ A classical result is Stirling's formula, describing the behaviour of $\Gamma(z)$ as $z$ diverges to infinity, $$ \Gamma(z)\sim \sqrt{\frac{2\pi}{z}} \left( \frac{z}{e}\right)^z, \quad z \to \infty. $$ Is there any such approximation formula for $z \downarrow 0$, describing the speed at which the Gamma function diverges near the origin?
Around $z=0$ $$\Gamma(z)=\frac{1}{z}-\gamma +\sum_{n=1} a_n z^n$$ $$a_1=\frac{1}{12} \left(6 \gamma ^2+\pi ^2\right)\qquad a_2=-\frac{1}{6} \left(2 \zeta (3)+\gamma ^3+\frac{\gamma \pi ^2}{2}\right)$$ $$a_3=\frac{1}{24} \left(8 \gamma \zeta (3)+\gamma ^4+\gamma ^2 \pi ^2+\frac{3 \pi ^4}{20}\right)$$ Edit A quite good approximation is given by $$\Gamma(z)\sim \frac 1 z \frac{1+\frac{\left(\pi ^2-6 \gamma ^2\right) }{12 \gamma }z } {1+\frac{\left(\pi ^2+6 \gamma ^2\right) }{12 \gamma }z }$$
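The truncated expansion is easy to test against `math.gamma`. In the sketch below, the error of the one-correction-term approximation $1/z-\gamma+a_1 z$ is bounded by roughly $|a_2|z^2$ with $|a_2|\approx 0.91$, as the series predicts.

```python
import math

# Sketch: compare math.gamma near 0 with the expansion 1/z - gamma + a1*z.
EULER_GAMMA = 0.5772156649015329
a1 = (6 * EULER_GAMMA**2 + math.pi**2) / 12   # a1 = (6*gamma^2 + pi^2)/12

for z in (0.05, 0.01, 0.001):
    approx = 1.0 / z - EULER_GAMMA + a1 * z
    # the omitted a2*z^2 term dominates the error (|a2| ~ 0.91)
    assert abs(math.gamma(z) - approx) < 1.5 * z * z
```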
{ "language": "en", "url": "https://math.stackexchange.com/questions/3927226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Irrationality of $\sum_{n=2}^\infty\frac{1}{\sqrt{n}(n-1)}$ I was looking at the following series $$S = \sum_{n=2}^\infty\frac{\sqrt{n}}{n(n-1)}$$ which WA says converges to some number (WA1: 2.18401, WA2: 2.0839...). I noticed that it is (almost) equal to $$\sum_{p=3,5,...}^\infty \zeta\left(\frac{p}{2}\right) = \zeta\left(\frac{3}{2}\right) + \zeta\left(\frac{5}{2}\right) + \cdots,$$ i.e., the sum of "all halves" (from 3/2) of Riemann zeta values. Here is my thinking: $$\frac{\sqrt{n}}{n(n-1)} = \left(\frac{1}{n}\right)^\frac{3}{2}+\left(\frac{1}{n}\right)^\frac{5}{2}+\left(\frac{1}{n}\right)^\frac{7}{2}+\dots$$ Sum it over $n$ from 2 to $\infty$, and you get, e.g. for the first part, $\sum_{n=2}^{\infty}\frac{1}{n^{3/2}},$ which is almost $\zeta\left(\frac{3}{2}\right)$, except that the summation does not start at $n=1$. To fix this, I do $\sum_{n=2}^\infty\frac{1}{n^{3/2}}=\zeta\left(\frac{3}{2}\right) - 1.$ Thus, I conclude that $$\sum_{p=3,5,...}^\infty \left(\zeta\left(\frac{p}{2}\right) - 1\right) = \sum_{n=1}^\infty\left(\zeta\left(n+\frac{1}{2}\right) - 1\right)= \sum_{n=2}^\infty\frac{\sqrt{n}}{n(n - 1)}.$$ It is also known (see https://en.wikipedia.org/wiki/Riemann_zeta_function) that $\sum_{n=2}^\infty\left(\zeta(n) - 1\right) = 1$. We can thus combine those sums, leading to $$\sum_{p=3}^\infty\left(\zeta\left(\frac{p}{2}\right) - 1\right)=\sum_{n=2}^\infty\frac{\sqrt{n}}{n(n-1)} + 1$$ My question is: if this is correct, is the identity known? If so, can we conclude that $\sum_{n=2}^\infty\frac{\sqrt{n}}{n(n-1)}$ is an irrational number (a sum of square roots "feels" like an irrational number) and thus also that the sum of "Riemann halves" is irrational? Also, is anything known about $\sum_{p=1,3,5,...}^\infty \zeta\left(\frac{p}{2}\right)$? Its convergence, etc. Thanks.
Answer to the question of whether it is known: it seems that the same equality has been mentioned lately in The Spiral of Theodorus and Sums of Zeta-values at the Half-integers
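The term-by-term step in the question is just a geometric series, which can be verified numerically; the sketch below checks $\frac{\sqrt{n}}{n(n-1)}=\sum_{j\ge 1} n^{-(j+1/2)}$ for several $n$, which is what lets one rewrite the series as a sum of zeta values at half-integers (minus $1$ for the missing $n=1$ term).

```python
import math

# Sketch: verify the geometric-series identity behind the rearrangement,
#   sqrt(n)/(n(n-1)) = sum_{j>=1} n^{-(j + 1/2)}.
for n in (2, 3, 5, 10, 100):
    lhs = math.sqrt(n) / (n * (n - 1))
    rhs = sum(n ** -(j + 0.5) for j in range(1, 200))  # truncated; tail is negligible
    assert abs(lhs - rhs) < 1e-12
```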
{ "language": "en", "url": "https://math.stackexchange.com/questions/3927373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\lim\limits_{n \to\infty}\dfrac{n^{2}-n+2}{3n^{2}+2n-4}=\dfrac{1}{3}.$ Replace the $n$'s by $x$ and find $M$. $\lim\limits_{n\to\infty}\dfrac{n^2-n+2}{3n^2+2n-4}=\dfrac{1}{3}$. With the epsilon definition I get my answer as $N=\left[ \dfrac{5}{9\varepsilon }\right] +1$. But then I wondered how I could handle this as a function, using the $M$-$\varepsilon$ analogue of the $\delta$-$\varepsilon$ definition, by substituting $x$ for $n$; I can't make it work. So here's my question: what can I do if I have a function like this one? $\lim\limits_{x\to+\infty}\dfrac{x^2-x+2}{3x^2+2x-4}=\dfrac{1}{3}$. I need to show: for every $\varepsilon>0$ there exists $M>0$ such that $x>M$ implies $| f\left( x\right) -l|<\varepsilon$. For this function: $\left| \dfrac{x^{2}-x+2}{3x^{2}+2x-4}-\dfrac{1}{3}\right|= \left| \dfrac{-5x+10}{9x^{2}+6x-12}\right|<\left| \dfrac{-5x+10}{9x^{2}}\right|$ And this is where I am stuck: because $x\in \mathbb{R}$, I think we can't simply remove the absolute-value brackets. Isn't that so? Thanks to all of your help I solved it; thank you all!
If you use the long division, you have $$\frac{n^{2}-n+2}{3n^{2}+2n-4}=\frac{1}{3}-\frac{5}{9 n}+\frac{40}{27 n^2}+O\left(\frac{1}{n^3}\right)$$ Therefore $$ \frac{1}{3}-\frac{5}{9 n}<\frac{n^{2}-n+2}{3n^{2}+2n-4}<\frac{1}{3}-\frac{5}{9 n}+\frac{40}{27 n^2}$$
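The two-sided bound from this long division can be checked exactly with rational arithmetic; the sketch below confirms it strictly for a range of $n$.

```python
from fractions import Fraction

# Sketch: check the two-sided bound from the long division for a range of n:
#   1/3 - 5/(9n)  <  (n^2 - n + 2)/(3n^2 + 2n - 4)  <  1/3 - 5/(9n) + 40/(27 n^2)
for n in range(2, 2001):
    f = Fraction(n * n - n + 2, 3 * n * n + 2 * n - 4)
    lower = Fraction(1, 3) - Fraction(5, 9 * n)
    upper = lower + Fraction(40, 27 * n * n)
    assert lower < f < upper
```

Squeezing between the two bounds then gives both the limit $\frac13$ and an explicit $M$ of order $\frac{5}{9\varepsilon}$.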
{ "language": "en", "url": "https://math.stackexchange.com/questions/3927502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show the inequality: $a^{(2(1-a))}+b^{(2(1-b))}+c^{(2(1-c))}+c\leq 1$ Claim: Let $0.5\geq a \geq b \geq 0.25\geq c\geq 0$ such that $a+b+c=1$; then we have $$a^{(2(1-a))}+b^{(2(1-b))}+c^{(2(1-c))}+c\leq 1$$ To prove it I have tried Bernoulli's inequality. For $0\leq x\leq 0.25$ we have: $$x^{2(1-x)}\leq 2x^2$$ As in my previous posts, we have the inequality for $x\in[0,0.5]$: $$x^{2(1-x)}\leq 2^{2x+1}x^2(1-x)$$ Applying this to each of the variables $a,b$, we want to show: $$2^{2a+1}a^2(1-a)+2^{2b+1}b^2(1-b)+2c^2+c\leq 1$$ Now by Bernoulli's inequality we have: $$2^{2x+1}\leq 2(1+2x)$$ It remains to show: $$2(1+2a)a^2(1-a)+2(1+2b)b^2(1-b)+2c^2+c\leq 1\quad(1)$$ The function $$f(x)=2(1+2x)x^2(1-x)$$ is concave for $x\in [\frac{1}{8}+\frac{\sqrt{\frac{19}{3}}}{8},0.5]$, so we can use Jensen's inequality; it remains to show: $$2\left(2(1+a+b)\left(\frac{a+b}{2}\right)^2\left(1-\left(\frac{a+b}{2}\right)\right)\right)+2c^2+c\leq 1$$ So it reduces to a one-variable inequality, and using derivatives it's not hard to show that $$g(c)=2f\left(\frac{1-c}{2}\right)+2c^2+c\leq 1$$ for $c\in[0,1-2\left(\frac{1}{8}+\frac{\sqrt{\frac{19}{3}}}{8}\right)]$. This handles the equality case $a=b=0.5$ and $c=0$, but inequality $(1)$ is false for the other equality case $a=0.5$ and $b=c=0.25$. We also have the inequality for $x\in[0.25,0.5]$ (we can prove it using the logarithm and then the derivative) $$x^{(2(1-x))}\leq x^22^{-5(x-0.25)(x-0.5)+1}$$ Using Bernoulli's inequality: $$x^22^{-5(x-0.25)(x-0.5)+1}\leq 2(x^2+x^2(-5(x-0.25)(x-0.5)))$$ So it remains to show: $$2(a^2+a^2(-5(a-0.25)(a-0.5)))+2(b^2+b^2(-5(b-0.25)(b-0.5)))+2c^2+c\leq 1\quad (2)$$ Question: Do you have a proof? How can one show $(2)$? Thanks in advance!
Some thoughts We first give some auxiliary results (Facts 1 through 4). The proofs are not difficult and thus omitted. Fact 1: If $\frac{1}{2} \ge x \ge \frac{1}{4}$, then $x^{2(1-x)} \le \frac{528x^2+572x-93}{650}$. Fact 2: If $0\le x \le \frac{1}{4}$, then $x^{2(1-x)} \le 3x^2 - 2x^3$. (Hint: Use Bernoulli inequality.) Fact 3: If $\frac{1}{2} \ge x \ge \frac{1}{4}$, then $x^{2(1-x)} \le \frac{752x^2-24x-1}{320}$. Fact 4: If $0\le x \le \frac{1}{4}$, then $x^{2(1-x)} \le 3x^2 - 4x^3$. (Hint: Use Bernoulli inequality.) Now, we split into two cases: * *$c \le \frac{1}{5}$: By Facts 1-2, it suffices to prove that $$\frac{528a^2+572a-93}{650} + \frac{528b^2+572b-93}{650} + 3c^2 - 2c^3 + c \le 1$$ or $$650c^3-264a^2-264b^2-975c^2-286a-286b-325c+418 \ge 0.$$ It is verified by Mathematica. *$\frac{1}{5} < c \le \frac{1}{4}$: By Facts 1, 3, 4, it suffices to prove that $$\frac{528a^2+572a-93}{650} + \frac{752b^2-24b-1}{320} + 3c^2 - 4c^3 + c \le 1$$ or $$83200c^3-16896a^2-48880b^2-62400c^2-18304a+1560b-20800c+23841 \ge 0.$$ It is verified by Mathematica.
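The auxiliary Facts 1-4, whose proofs are omitted in this answer, can at least be spot-checked numerically; the sketch below tests each on a grid (equality holds at some endpoints, e.g. Facts 1 and 3 at $x=0.5$ and $x=0.25$ respectively, so a tiny slack is allowed for floating-point rounding).

```python
# Sketch: numerically spot-check Facts 1-4 from the answer on grids.
EPS = 1e-9   # slack for endpoint equality cases under floating point

for k in range(0, 1001):
    x = 0.25 + 0.25 * k / 1000.0        # x in [1/4, 1/2]
    lhs = x ** (2.0 * (1.0 - x))
    assert lhs <= (528 * x**2 + 572 * x - 93) / 650 + EPS   # Fact 1
    assert lhs <= (752 * x**2 - 24 * x - 1) / 320 + EPS     # Fact 3

for k in range(0, 1001):
    x = 0.25 * k / 1000.0               # x in [0, 1/4]
    lhs = x ** (2.0 * (1.0 - x))
    assert lhs <= 3 * x**2 - 2 * x**3 + EPS                 # Fact 2
    assert lhs <= 3 * x**2 - 4 * x**3 + EPS                 # Fact 4
```

A grid check is of course no proof (the margins of Fact 3 near $x=0.25$ are quite thin), but it is consistent with the Facts as stated.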
{ "language": "en", "url": "https://math.stackexchange.com/questions/3927668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is the area of a line really $0$? Let's take a square with side $A$. The area of this is defined as $A\times A$. The way I explained this to myself is by repeatedly dividing the square until I reached a single line, then stacking lines one on top of the other $A$ times. But if the area of a line were really $0$, then no matter how many lines were stacked up, they would always have an area of $0$ (they would exist in one dimension only). Another way to see this is by defining a function $V(F)$ which gives the volume of a two-dimensional figure $F$. If this volume became $0$, then no matter how many such figures I stack on top of each other they would never have a volume. When this is combined with physics it makes a little more sense. Even if I were to chop up a cube infinitely, I could never reduce its height to less than the Planck length. This means that the volume would be $\text{Area} \times \ell_P$ (the Planck length). So the question is: for a three-dimensional object to exist, must every two-dimensional object have a non-zero volume?
Your interpretation of the area of the square makes sense if each of your "lines" has a width of $1$ (that is why you have exactly $A$ such lines). Usual lines have zero width and you need to have uncountably infinitely many of them to obtain a height of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3927841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Cyclic normal subgroup of perfect group is in the center I've been trying for a while to solve an exercise/prove a proposition, which at first seemed elementary, but now I even doubt whether it's a true proposition. The proposition is: Let $G$ be a perfect group and let $K$ be a cyclic, normal subgroup of $G$. Show that $K$ is contained in the center of $G$ (i.e. $Z(G)$). Obviously it's enough to show that the generator of $K$ is in the center, i.e. it commutes with every element of the group, but I can't figure out why that's true. I thought about using Grün's lemma and showing that if the generator of $K$ weren't in the center, then its $Z(G)$-coset would be in the center of $G/Z(G)$, but it turned out to be the same approach as the first one. Then I thought about showing that the orbit (under the action of conjugation) of the generator is a singleton, and I found that $\sqrt{|G|}$ is a lower bound for the size of the stabilizer $|Stab(x)|$, where $x$ is the generator, but that's just for a finite group $G$, and I couldn't actually find an upper bound (it would be fine if I could show that $x$ is actually fixed under conjugation by any $g\in G$). Thank you in advance for any help.
You need to take advantage of the fact that the normal subgroup is cyclic somehow. So let $H$ be cyclic and normal in $G$, where $G$ is perfect. Then by the $N/C$ Theorem, $G/C_G(H)$ is isomorphic to a subgroup of $\operatorname{Aut}(H)$. But since $H$ is cyclic, $\operatorname{Aut}(H)$ is abelian. Thus $G/C_G(H)$ is abelian which implies that $C_G(H) \geq G'=G$. Therefore, $C_G(H)=G$ and that is equivalent to $H \leq \operatorname{Z}(G)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3928270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is $-(\Delta)^{-1}$ a compact operator? Assume $$ A= -(\Delta)^{-1}: L^2(\Omega)\rightarrow L^2(\Omega), $$ I know $A$ is self-adjoint. Then why can the space $H^1_0(\Omega)$ be compactly embedded in $L^2(\Omega)$ by means of the fact that $A$ is a compact operator? From the point of view of notation, by $H^1_0(\Omega)$ I mean the Sobolev space $\mathring W^{1,2}(\Omega)$.
I suppose that $\Omega$ is a bounded open subset of $\mathbb R^n$ and $\Delta$ is defined as the Friedrichs extension of the Laplace operator $\Delta: \mathscr D(\Omega) \rightarrow L_2(\Omega)$. In this case $-\Delta$ is a positive densely defined operator with $D_\Delta \subset H^1_0( \Omega)$. The operator $A = \Delta^{-1}$ is defined and is continuous in the following sense: $A: L_2(\Omega) \rightarrow H^1_0(\Omega)$. Now we discuss the compactness of $A:L_2(\Omega) \rightarrow L_2(\Omega)$. If you assume the Rellich theorem ($j:H^1_0(\Omega) \rightarrow L_2(\Omega)$ is a compact embedding), then $A:L_2(\Omega) \rightarrow L_2(\Omega)$ is equal to a composition of continuous operators $L_2(\Omega) \xrightarrow{A} H^1_0(\Omega) \xrightarrow{j} L_2(\Omega)$, where the second operator is compact. Thus $A$ is compact. If you assume that $A:L_2(\Omega) \rightarrow L_2(\Omega)$ is compact, then $\sqrt{A}:L_2(\Omega) \rightarrow L_2(\Omega)$ is also compact. It is known that $\sqrt{A}$ is an isomorphism between $L_2(\Omega)$ and $H^1_0(\Omega)$. Therefore $(\sqrt{A})^{-1}:H^1_0(\Omega) \rightarrow L_2(\Omega)$ is a continuous operator. Now you can represent the embedding $H^1_0(\Omega) \rightarrow L_2(\Omega)$ as the composition $H^1_0(\Omega) \xrightarrow{(\sqrt{A})^{-1}} L_2(\Omega) \xrightarrow{\sqrt{A}} L_2(\Omega)$, where the second operator is compact. Thus, the embedding is compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3928459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Infinitely many $ n \in \mathbb{N} $ such that $ n^2+1 $ has two divisors $ a,b $ such that $a-b=n $ Prove that there are infinitely many $ n \in \mathbb{N} $ such that $ n^2+1 $ has two divisors $ a,b $ with $a-b=n $. It is obvious that if $ p\mid n^2+1 $ then $\gcd(p,n)=1$. I tried to use the Chinese remainder theorem, but I got nothing. Please help me.
It works for $b,a$ consecutive terms in the sequence $x_j$ beginning $$ 1, 2, 5, 13, 34, ..$$ with $$ x_{j+2} = 3 x_{j+1} - x_j $$ which are (every second) Fibonacci numbers. Although the problem does not require this, the product is precisely the given number $n^2 + 1.$ Thus the hint that $a^2 - 3ab + b^2 = -1$ really did finish the matter.
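A quick numerical check of this construction (not part of the original argument; the recurrence and seeds are exactly as stated above):

```python
# Generate the sequence 1, 2, 5, 13, 34, ... via x_{j+2} = 3 x_{j+1} - x_j
# and check that consecutive terms b, a give n = a - b with a*b = n^2 + 1,
# so both a and b divide n^2 + 1.
xs = [1, 2]
for _ in range(10):
    xs.append(3 * xs[-1] - xs[-2])

checks = []
for b, a in zip(xs, xs[1:]):
    n = a - b
    checks.append(a * b == n * n + 1 and (n * n + 1) % a == 0 and (n * n + 1) % b == 0)
```

Every consecutive pair passes, confirming $a^2 - 3ab + b^2 = -1$ propagates along the recurrence.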
{ "language": "en", "url": "https://math.stackexchange.com/questions/3928602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How do I find the residues of this problem? Find all the singularities in the finite plane and the corresponding residues. Show the details. $$\frac{8}{1+z^2}$$ I am a bit stuck on this problem: So I know the residue is going to be the coefficient associated with the first negative exponent in the Laurent series. Why do I have to find both the singularities and the residues? They seem like different questions, no? So the singularities are at $z = \pm i$ because $(\pm i)^2 = -1$ and that's when the denominator will equal 0. But how do I find the residues? So I remember that: $$\frac{1}{1+z} = 1 - z + z^2 - ...$$ So I now multiply each term by 8: $$\frac{8}{1+z} = 8(1 - z + z^2 - ...)$$ But none of these terms has an exponent of $-1$ so I cannot find the coefficient... Sigh, what am I doing wrong? Next problem, same as above but a different function: $$\frac{1}{1-e^z}$$
$f(z)=\frac{8}{z^2+1}$ has simple poles at $z=\pm i$ $$\text{Res}(f(z),i)=\underset{z\to i}{\text{lim}}\frac{8 (z-i)}{(z-i) (z+i)}=\frac{8}{2i}=-4i$$ in a similar way it can be proved that $$\text{Res}(f(z),-i)=4i$$ For the second question $$\frac{z}{1-e^z}=-1+\frac{z}{2}-\frac{z^2}{12}+O\left(z^3\right)$$ then dividing by $z$ we get $$g(z)=\frac{1}{1-e^z}=-\frac{1}{z}+\frac{1}{2}-\frac{z}{12}+O\left(z^2\right)$$ Therefore $$\text{Res}(g(z),0)=-1$$
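These values can be sanity-checked numerically via $\operatorname{Res}(f,z_0)=\frac{1}{2\pi i}\oint f\,dz$ over a small circle around each pole. A rough sketch (the helper function, radius and point count are my own choices):

```python
import cmath
from math import pi

def residue(f, z0, r=0.3, n=4000):
    # Riemann sum of (1/2*pi*i) * contour integral of f over z = z0 + r*e^{i theta};
    # for a periodic analytic integrand this converges very quickly.
    total = 0j
    for k in range(n):
        theta = 2 * pi * k / n
        z = z0 + r * cmath.exp(1j * theta)
        dz = 1j * r * cmath.exp(1j * theta) * (2 * pi / n)
        total += f(z) * dz
    return total / (2j * pi)

res_i = residue(lambda z: 8 / (1 + z * z), 1j)         # expect -4i
res_minus_i = residue(lambda z: 8 / (1 + z * z), -1j)  # expect +4i
res_g = residue(lambda z: 1 / (1 - cmath.exp(z)), 0)   # expect -1
```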
{ "language": "en", "url": "https://math.stackexchange.com/questions/3929003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How valid is this concept or does this already have a name? I was going through my school papers and found an interesting question, so I experimented a bit more and found out a pattern, so I made a formula for such matrices. $$A = \begin{bmatrix}x&-(x-1)\\x+1&-x\end{bmatrix}$$ where $x > 0$ is an integer $$A^n = \begin{cases} I, & \text{if $n$ is even} \\ A, & \text{if $n$ is odd} \end{cases}$$ where $I$ is identity matrix of order 2. I just wanted to know if this has been found before or whether it has a name too or if there are some cases that does not obey this. Hope someone can format my question properly, I'm new to this community. Hope this is the correct way of putting things together too. Thanks!!
Thank you for addressing our comments and clarifying your question! Any square matrix which satisfies your equation $$A^n = \begin{cases} I, & \text{if $n$ is even} \\ A, & \text{if $n$ is odd} \end{cases}$$ is its own inverse, i.e. it is an involutory matrix. This is because if $n = 2$ you have $AA = I$, which by definition means that $A$ is its own inverse. From that simple statement you can extrapolate your formula, because if $n$ is even then $$A^{n} = A^{2m} = (AA)^{m} = I^{m} = I$$ for some integer $m$, and if $n$ is odd then $$A^{n} = A^{2k+1}=A^{2k}A=(AA)^{k}A = I^{k}A=IA=A$$ for some integer $k$. In fact, for any $2\times 2$ matrix $$\begin{pmatrix} a & b\\c & -a\end{pmatrix},$$ of which your matrix is an example, such a matrix will be involutory if $a^{2} + bc = 1$. We can verify this for your matrix $A$: $$x^{2} - (x-1)(x+1) = 1.$$
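A small sketch confirming the pattern for the matrices in the question (pure Python, no libraries assumed):

```python
def matmul2(A, B):
    # 2x2 integer matrix product
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

identity = [[1, 0], [0, 1]]
ok = []
for x in range(1, 20):
    # a = x, b = -(x-1), c = x+1 satisfies a^2 + bc = x^2 - (x-1)(x+1) = 1
    A = [[x, -(x - 1)], [x + 1, -x]]
    ok.append(matmul2(A, A) == identity)
```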
{ "language": "en", "url": "https://math.stackexchange.com/questions/3929200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Find $P(X+Y > \frac{3}{2})$ given that X is uniformly distributed on [0,1] and the conditional distribution of Y given X=x is uniform on [1-x,1]. Suppose $X$ has the uniform distribution on the interval [0,1] and that the distribution of random variable $Y$ given that $X=x$ is uniform on the interval $[1-x,1].$ Find the probability that $P(X+Y > \frac{3}{2})$ given to 4 decimal places. So far this is what I have: $P(X+Y > \frac{3}{2}) = 1 - P(X+Y < \frac{3}{2})= 1 - P(Y < \frac{3}{2} - X)$ $P(Y < \frac{3}{2} - X) = \int_0^1 \int_{1-x}^{\frac{3}{2}-x} f_{XY}(x,y) \,dy \,dx = \int_0^1 \int_{1-x}^{\frac{3}{2}-x} \frac{1}{x} \,dy \,dx $ $= \int_0^1 [\frac{y}{x}]_{1-x}^{\frac{3}{2}-x} \,dx = \int_0^1 \frac{1}{2x} \, dx = [\frac{1}{2}ln(2x)]_0^1$ As $ln(2x)$ can't be evaluated at $0$ I can't go any further. Have I done anything wrong and what is a better method to answer this question so that this problem doesn't occur? Note: I calculated $f_{XY}(x,y)$ from the distributions of $f_X(x)$ and $f_{Y|X}(y|x)$ as $f_{XY}(x,y)=f_{Y|X}(y|x)f_X(x).$
I get \begin{eqnarray} P[X+Y > {3 \over 2}] &=& \int_{x={1 \over 2}}^1 \int_{y={3 \over 2}-x}^1 {1 \over x} dy dx \\ &=& \int_{x={1 \over 2}}^1 (1-{1 \over 2x}) dx \\ &=& (x-{ 1\over 2} \ln x) \mid_{1 \over 2}^1 \\ &=& {1 \over 2}(1+\ln {1 \over 2}) \\ &\approx& 0.1534 \end{eqnarray} Where did the bounds come from? First, note that the distribution is supported in $[0,1]^2$. So we are interested in the set $D=\{(x,y)| x+y > {3 \over 2} \} \cap [0,1]^2$. Note that if $x \le {1 \over 2}$, then $x+y \le {3 \over 2}$ so we can write $D = \{(x,y)| x+y > {3 \over 2} , x \in ({1 \over 2}, 1], y \in [0,1] \} = \{(x,y)| x+y > {3 \over 2} , x \in ({1 \over 2}, 1], y \in ({3 \over 2} -x,1] \} $. Just as a check, if $x \in ({1 \over 2}, 1]$ and $y \in ({3 \over 2} -x,1]$ then $y \in (1-x,1]$, so for $(x,y) \in D$, the conditional pdf for $Y$ given $X=x$ is $ {1 \over 1-(1-x)} = { 1\over x}$ and the marginal pdf for $X$ is $1$.
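As a sanity check of the closed form ${1 \over 2}(1+\ln {1 \over 2})$, here is a sketch comparing it against a midpoint-rule evaluation of $\int_{1/2}^1 (1-{1\over 2x})\,dx$ (the grid size is an arbitrary choice):

```python
import math

# Closed form from the computation above: P = (1 + ln(1/2)) / 2
closed_form = 0.5 * (1 + math.log(0.5))

# Midpoint-rule approximation of the integral over [1/2, 1]
N = 200000
a, b = 0.5, 1.0
h = (b - a) / N
numeric = sum((1 - 1 / (2 * (a + (k + 0.5) * h))) * h for k in range(N))
```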
{ "language": "en", "url": "https://math.stackexchange.com/questions/3929387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Convergence of independent random variables with different distributions Let $Y_1,Y_2,\ldots$ be independent r.v. with distribution $P(Y_k=k)=P(Y_k=-k)=1/2$ for all $k$. Here, $S_n=Y_1+Y_2+\cdots+Y_n$ for all $n\ge 1$. Does ${S_n/n^{3/2}}$ converge in distribution? If yes, please write the limit distribution. Note: each $Y_i$ has a different distribution. I know that the Law of Large Numbers requires i.i.d. random variables but I don't know how to start given this condition.
A simple calculation shows $E\Big(\frac{S_n}{n\sqrt{n}}\Big)=0$ while $$V\bigg(\frac{S_n}{n\sqrt{n}}\bigg)=\frac{n(n+1)(2n+1)}{6n^3}\longrightarrow\frac{1}{3}$$ as $n \longrightarrow \infty$. The latter calculation suggests $\frac{S_n}{n\sqrt{n}}$ won't converge in distribution to the constant random variable $0$ since its limiting variance is non-vanishing. Moreover, since $\frac{S_n}{n\sqrt{n}}$ resembles an average of some sort, we may hypothesize that $\frac{S_n}{n\sqrt{n}}$ converges in distribution to a $N(0,1/3)$ random variable. Let's use the moment generating function to see this. The moment generating function of $\frac{S_n}{n\sqrt{n}}$ is $$E\bigg(e^{\frac{tS_n}{n\sqrt{n}}}\bigg)=E\bigg(e^{\frac{tY_1}{n\sqrt{n}}}\bigg)\times \ldots \times E\bigg(e^{\frac{tY_n}{n\sqrt{n}}}\bigg)=\prod_{k=1}^n\cosh\bigg(\frac{kt}{n\sqrt{n}}\bigg)$$ You can see here that the right hand side converges to $e^{t^2/6}$, which is precisely the moment generating function of a $N(0,1/3)$ random variable.
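A numerical sketch of the moment-generating-function limit (the sample values of $n$ and $t$ are arbitrary choices of mine):

```python
import math

def log_mgf(t, n):
    # log E[exp(t S_n / n^{3/2})] = sum_{k=1}^n log cosh(k t / n^{3/2})
    s = n ** 1.5
    return sum(math.log(math.cosh(k * t / s)) for k in range(1, n + 1))

t = 1.0
vals = [log_mgf(t, n) for n in (100, 1000, 10000)]
limit = t * t / 6  # log-mgf of N(0, 1/3) at t
```

The values approach $t^2/6$ as $n$ grows, consistent with the $N(0,1/3)$ limit.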
{ "language": "en", "url": "https://math.stackexchange.com/questions/3929541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does $1-ab\geq (1-a)(1-b)$ hold, for $a,b\in [0,1]$? Does this inequality hold, for any $a,b\in[0,1]$ $$1-ab\geq (1-a)(1-b)?$$ I'm don't have idea to conclude $1-ab\geq 1-a-b+ab = (1-a)(1-b)$. Anyone can prove (or disprove) it?
$1-ab \ge 1-a \ge (1-a)(1-b).\blacksquare$ Update: I found this is equivalent to that of @Albus Dumbledore and @Tony Ip.
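A brute-force grid check of the inequality (the grid resolution is arbitrary); note $(1-ab)-(1-a)(1-b) = a+b-2ab$, which vanishes exactly at the corners $a=b=0$ and $a=b=1$:

```python
diffs = []
N = 100
for i in range(N + 1):
    for j in range(N + 1):
        a, b = i / N, j / N
        diffs.append((1 - a * b) - (1 - a) * (1 - b))
worst = min(diffs)  # should never be negative on [0,1]^2
```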
{ "language": "en", "url": "https://math.stackexchange.com/questions/3929735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
What are some examples of "visually beautiful" math texts? Are there good examples of math textbooks that seamlessly combines art, cartoon drawing or computer graphics with the texts? I know that there are a few books on the market, which are not text-book per se, that are geared towards general audience, where there are plenty of artistic figures naturally generated by the phenomena under study, usually things of geometric nature, Escher tiling, or things like chaos. Wolfram's a New Kind of Science comes to mind. What are some other examples of "visually beautifu" or "artistic" math textbooks or texts.
Visual Complex Analysis by Tristan Needham
{ "language": "en", "url": "https://math.stackexchange.com/questions/3929882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 8, "answer_id": 5 }
3D polyhedron with no axis of symmetry In learning to work with inertia tensors I came to wonder if there were any polyhedra with no axis of symmetry; and if so, are there any regular polyhedra ? My workings so far : the only shape I can think of in 3D that has no axis of symmetry is of course an ellipsoid; so I was wondering if this is due to the fact that it has no angles... Thanks and sorry if this is too much of an open question; I'll delete it if needed.
The following polyhedra all have some symmetry – either a rotation axis or a mirror plane (which has an associated normal axis): * *Platonic solids and Kepler–Poinsot solids *Prisms and antiprisms, including star forms *Archimedean solids and uniform star polyhedra *Johnson solids So no regular polyhedron, by any reasonable definition, is wholly asymmetric. However, it is very easy to make such a polyhedron. Just take six distinct edges that differ by any sufficiently small amount – $7,8,9,10,11,12$ will do – and build a tetrahedron with them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3930274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculating $\sum_{k=0}^{n} {n\choose k} \sqrt{k}$ I am trying to calculate the following expression but not sure how this can be solved yet. $$\sum_{k=0}^{n} {n\choose k} \sqrt{k}$$ The wiki page (https://en.wikipedia.org/wiki/Binomial_coefficient) contains a similar expression with $\sqrt{k}$ replaced by $k$ or $k^2$. If the above expression can be solved, I also wonder if we can solve it for any $k^j$ with any real number $j>0$?
Let $X$ be a binomial random variable, with $n$ trials and probability $1/2$. Your expression is exactly $2^nE[X^j]$. Using the central limit theorem, for large $n$, $X$ is well approximated by $\frac{\sqrt{n}}2Z+\frac{n}2$, where $Z$ is a standard normal random variable. I will then use the approximation \begin{align} E[X^j] &\approx E\left[\left(\frac{\sqrt{n}}2Z+\frac{n}2\right)^j\right]\tag 1 \end{align} to get an approximation for $E[X^j]$. I am not at this time able to quantify the error of the approximation in $(1)$, which is necessary to determine how good of an approximation the final answer is. \begin{align} 2^nE[X^j] &\approx 2^nE\left[\left(\frac{\sqrt{n}}2Z+\frac{n}2\right)^j\right] \\& =2^{n}(n/2)^jE\left[(1+Z/\sqrt{n})^j\right] \\&\stackrel{1}=2^{n}(n/2)^j\sum_{i=0}^\infty \binom{j}{i}E[Z^i]n^{-i/2} \\&\stackrel{2}=\boxed{2^{n}(n/2)^j\sum_{i=0}^\infty \binom{j}{2i}(2i-1)!!\cdot n^{-i}} \end{align} * *In $1$, we use the Taylor series for $f(x)=(1+x)^j=\sum_{i\ge 0}\binom{j}ix^i$, where the binomial coefficient is defined for non-integer $j$ as $$\binom{j}i=\frac{j(j-1)\cdots(j-i+1)}{i!}.$$ *In $2$, we reindex $i\gets 2i$, and use the known moments of the standard normal random variable. We can then further approximate by only taking the first several terms of the infinite summation. For example, when $j=1/2$, you get $$ \sum_{k=0}^n \binom{n}kk^{1/2}=2^nE[X^{1/2}]\approx 2^n(n/2)^{1/2}\left(1-\frac18n^{-1}-\frac{15}{128}n^{-2}-\dots\right), $$ so that $$ \sum_{k=0}^n \binom{n}kk^{1/2}\approx 2^n(n/2)^{1/2}(1+O(1/n)) $$ Again, use this with caution, because the goodness of this approximation depends on the the quality of the approximation $(1)$, which I am not able to quantify.
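To get a feel for the quality of this approximation, here is a sketch comparing the exact sum with the first three terms of the series at $n=50$ (the choice of $n$ and the tolerance are mine, and the error of approximation $(1)$ is, as noted, not rigorously quantified):

```python
import math

n = 50
exact = sum(math.comb(n, k) * math.sqrt(k) for k in range(n + 1))
# first three terms of the series derived above (i = 0, 1, 2)
approx = 2 ** n * math.sqrt(n / 2) * (1 - 1 / (8 * n) - 15 / (128 * n * n))
ratio = exact / approx
```

Empirically the ratio is extremely close to $1$, consistent with a remaining error of order $n^{-3}$.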
{ "language": "en", "url": "https://math.stackexchange.com/questions/3930413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Inverse Laplace transformation of $\frac{s^2}{(s^2+1)^2}$ I am a bit lost how to compute the inverse Laplace transformation of $$ \frac {s^2}{(s^2+1)^2}$$ I think it will be some combination of sine and cosine (oscillation-like), however I ran into some weird expressions when I tried to expand the expression as $$ \frac {s}{(s^2+1)} \frac {s}{(s^2+1)}$$ Could someone please give some suggestions? Thanks in advance!
$$ F(s)=\frac{s^2}{(s^2 + 1)^2} = \frac{s^2 + 1}{(s^2 + 1)^2} - \frac{1}{(s^2 + 1)^2}$$ $$ F(s)= \frac 1{s^2+1} +\dfrac {1}{2s}\dfrac {d}{ds}\frac 1{(s^2 + 1)}. $$ Apply the inverse Laplace transform: $$f(t)=\sin t -\dfrac 12 \int_0^t \tau \sin \tau \ d\tau$$ $$f(t)=\dfrac 12 (\sin t +t \cos t)$$
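A sketch verifying the result by numerically evaluating the forward transform of $\frac12(\sin t + t\cos t)$ at the sample point $s=2$, where $\frac{s^2}{(s^2+1)^2}=\frac{4}{25}$ (truncation point and step size are arbitrary choices of mine):

```python
import math

def laplace(f, s, T=30.0, N=200000):
    # trapezoidal approximation of the integral of e^{-s t} f(t) over [0, T];
    # for s = 2 the tail beyond T = 30 is negligible (e^{-60})
    h = T / N
    total = 0.5 * (f(0.0) + math.exp(-s * T) * f(T))
    for k in range(1, N):
        t = k * h
        total += math.exp(-s * t) * f(t)
    return total * h

f = lambda t: 0.5 * (math.sin(t) + t * math.cos(t))
s = 2.0
numeric = laplace(f, s)
expected = s * s / (s * s + 1) ** 2  # = 4/25
```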
{ "language": "en", "url": "https://math.stackexchange.com/questions/3930910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Exercise Baby Rudin 4.8 Let $f$ be a real uniformly continuous function on the bounded set $E$ in $\mathbb{R}^1$. Prove that $f$ is bounded on $E.$ I am struggling to show that $E$ is compact (I know how to resolve the problem after that). I was originally going to say that it is a subset of a 1-cell since it's bounded but I know that $(0,1)$ isn't compact when $[0,1]$ is. Is it possible to make an argument about limit points?
Why does $E$ need to be compact? You only know that it is bounded, it need not be closed. You can solve it using the Bolzano-Weierstrass Theorem like this. Suppose otherwise: without loss of generality, let $f$ be not bounded above. Then for any $n\in\mathbb{N}$, there exists $x_n\in E$ such that $f(x_n)>n$. Because $(x_n)_{n\in\mathbb{N}}$ is a sequence in a bounded subset of $\mathbb{R}$, it has a convergent, and thus Cauchy, subsequence $(x_{n_k})_{k\in\mathbb{N}}$. Since $f$ is uniformly continuous, we can fix some $\delta>0$ such that for all $x,y\in E$, $$|x-y|<\delta\implies |f(x)-f(y)|<1.$$ As $(x_{n_k})_{k\in\mathbb{N}}$ is Cauchy, there exists some $N\in\mathbb{N}$ such that for all $l,m>N$, $|x_{n_l}-x_{n_m}|<\delta$. Then for all $l,m>N$, $|f(x_{n_l})-f(x_{n_m})|<1$. This contradicts the unboundedness of $(f(x_{n_k}))_{k\in\mathbb{N}}$ (Why?) and therefore, $f$ is bounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3931091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Constructing projective resolution of a chain complex I am trying to construct the projective resolution in the category of chain complexes of $\dots \to 0 \to M \to 0 \to \dots$ It seems like it should be possible to do this in terms of the projective resolution of $M$ but I am completely stuck. I know a projective chain complex is split exact and formed by projectives, so if we think of the resolution as a half plane double complex, the column with $M$ must be a projective resolution of $M$. I was trying to use the trick of $0 \to P \to P \to 0$ is a projective complex whenever $P$ is projective, but if I put that on top of our complex we don't necessarily get exactness.
In this case, you are in the category of bounded above complexes, where a $\textit{projective resolution}$ of a complex (in this case $\bar{M}:\cdots\rightarrow 0\rightarrow M\rightarrow0\rightarrow\cdots$) means a bounded-above complex of projectives $P$ with a quasi-isomorphism $P\rightarrow \bar{M}$. So, if you take the usual projective resolution of $M$ as a module, $$\cdots\rightarrow P^{-n}\rightarrow P^{-n+1}\rightarrow\cdots\rightarrow P^{-1}\rightarrow P^{0}\rightarrow M\rightarrow0\rightarrow\cdots$$ we can construct the projective resolution of $\bar{M}$ as follows $\require{AMScd}$ \begin{CD} \cdots @>>>P^{-1} @>>>P^{0} @>>> 0 @>>>\cdots\\ @V{f^{-2}}VV @V{f^{-1}}VV @V{f^{0}}VV @V{f^{1}}VV @V{f^{1}}VV\\ \cdots@>>>0@>>>M @>>> 0 @>>> \cdots \end{CD} where the arrow $f:\bar{P}\rightarrow \bar{M}$ is obviously a quasi-isomorphism. In the homotopic category $K(\mathscr{A})$ (where $\mathscr{A}$ is an abelian category such as the category of modules over a ring ) you can generalize this and talk about $K$-projective resolutions, complexes $X$ in $K(\mathscr{A})$ which verify that $Hom(X,Z)=0\ ,\ \forall Z\in\mathscr{Z}=\lbrace Z\in K(\mathscr{A})\ \text{such that}\ H^{n}(Z)=0\ \forall \ n\in\ \mathbb{N} \rbrace $. The good thing is that if $P$ is a bounded above complex of projectives, then is $K$-projective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3931232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Getting specified correlation matrix from i.i.d. zero mean Gaussian random variables Say there are two zero mean, unit variance Gaussian random variables, $X_1$ and $X_2$. The covariance matrix of the vector $\bf{X} = [X_1, X_2]^T$ is then \begin{align*} C_X = \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}. \end{align*} Now I would like to get a new vector $\bf{Y}$ from $\bf{X}$ using linear transformations, such that the covariance matrix of $\bf{Y}$ is $$ C_Y = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix}. $$ I tried the following linear transformation $$ \bf{Y} = \begin{bmatrix} 1 & \rho \\ \rho & 1 \end{bmatrix} \bf{X}. $$ This gives me the covariance matrix $$ C_Y = \begin{bmatrix} 1+\rho^2 & 2\rho \\ 2\rho & 1+\rho^2 \end{bmatrix}. $$ I can reduce the variance of $X_1$ and $X_2$ to $\frac{1}{2}$ so that I can have $\rho$ in the off diagonal entries. Under this adjustment $Y_1$ and $Y_2$ would each have variance $\frac{1+\rho^2}{2}$, which I can normalize to get $1$ in the diagonal entries. So now I have two questions * *Is this the best way to get the required covariance matrix? *If the dimensionality of $\bf{X}$ is $n$, I would need to divide $\bf{X}$ by $\sqrt{n}$. Is that right? How would I normalize the diagonal entries of $\bf{Y}$ to be $1$? In this case I would require the covariance matrix to have $\rho$ in all the off-diagonal entries and $1$ along diagonal entries. Any help is appreciated.
Choose $Y = C_Y^{\frac{1}{2}}X$, $$ \begin{align*} \mathbb{E}[YY^T] &= \mathbb{E}[C_Y^\frac{1}{2} X X^T C_Y^\frac{1}{2}] \\ &= C_Y^\frac{1}{2} \mathbb{E}[X X^T] C_Y^\frac{1}{2} \\ &= C_Y^\frac{1}{2} C_X C_Y^\frac{1}{2} \\ &= C_Y \end{align*}$$
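A sketch of this construction with NumPy, computing $C_Y^{1/2}$ through an eigendecomposition (the helper name is mine; this assumes $C_Y$ is positive semidefinite, which holds for a valid correlation matrix):

```python
import numpy as np

def sqrtm_psd(C):
    # symmetric square root of a positive semidefinite matrix via eigendecomposition
    w, V = np.linalg.eigh(C)
    return V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T

rho = 0.7
C_Y = np.array([[1.0, rho], [rho, 1.0]])
R = sqrtm_psd(C_Y)

# With C_X = I, the covariance of Y = R X is R C_X R^T = R R^T = C_Y
C_X = np.eye(2)
cov_Y = R @ C_X @ R.T
```

Since $R$ is symmetric, $RR^T = R^2 = C_Y$ exactly, with no need for the ad hoc rescaling in the question.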
{ "language": "en", "url": "https://math.stackexchange.com/questions/3931633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find maxima and minima of $f(x,y) = x^3 + y^3 -3x -3y$ in $x + 2y=3$ I need to find max and min of $f(x,y)=x^3 + y^3 -3x -3y$ with the following restriction: $x + 2y = 3$. I used the multiplier's Lagrange theorem and found $(1,1)$ is the minima of $f$. Apparently, the maxima is $(-13/7, 17/7)$ but I could not find it via Lagrange's theorem. Here's what I did: I put up the linear system: $\nabla f(x,y) = \lambda \, \nabla g(x,y)$ $g(x,y) = 0$ then, $(3x^2 -3, 3y^2 -3) = \lambda (1,2)$ $x + 2y -3 = 0$ Solving for $\lambda$, I got $\lambda = 0$, which gave me $x = 1$ and $y = 1$. How can I find the maxima if lambda only gives one value which is $0$?
As others have said, you don't necessarily need to use Lagrange multipliers. But since you've set up the system, we can see what happens: $$ 3x^2-3=\lambda\\ 3y^2-3=2\lambda\\ x+2y-3=0 $$ From the first two equations, we have $3y^2-3=2(3x^2-3)$, which simplifies to $y^2-1=2(x^2-1)$. Rearranging the linear constraint, we have $x=3-2y$. Putting this information together leads to $$7y^2-24y+17=0.$$ You could solve this using the quadratic formula, but it is quicker to observe that $-24 = -7-17$: $$7y^2-7y-17y+17=0$$ and so $(7y-17)(y-1)=0$. This will give you the two local extrema.
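A quick check of both critical points (a sketch; nothing here beyond the algebra above):

```python
# The quadratic 7y^2 - 24y + 17 = 0 factors as (7y - 17)(y - 1) = 0,
# giving the two critical points on the line x + 2y = 3.
f = lambda x, y: x ** 3 + y ** 3 - 3 * x - 3 * y

crit = []
for y in (1.0, 17.0 / 7.0):
    x = 3 - 2 * y                      # the constraint x + 2y = 3
    resid = 7 * y * y - 24 * y + 17    # should vanish at a critical point
    crit.append((x, y, f(x, y), resid))
```

This recovers $(1,1)$ with $f=-4$ (the minimum) and $(-13/7, 17/7)$ with the larger value $f = 2128/343$ (the maximum).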
{ "language": "en", "url": "https://math.stackexchange.com/questions/3931807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Prove $\sin x < x < \tan x$ for $0 < x < \pi/2$ algebraically I know there are some proofs showing that, for $0 < x < \pi/2$: $\sin x < x < \tan x $ with the use of the unit circle, however, can I prove this algebraically in some way as well?
One simple way could be to demonstrate that, over $[0, \pi/2)$, $\sin$ is concave while $\tan$ is convex, and that $\sin(0)=0=\tan(0)$ and $\sin '(0) = 1 =\tan'(0)$.
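Whatever proof route one takes, the inequality itself is easy to spot-check numerically on a grid strictly inside $(0,\pi/2)$ (grid size arbitrary):

```python
import math

N = 1000
ok = True
for k in range(1, N):
    x = (math.pi / 2) * k / N  # strictly inside (0, pi/2)
    ok = ok and (math.sin(x) < x < math.tan(x))
```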
{ "language": "en", "url": "https://math.stackexchange.com/questions/3931964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Expectation of $\sqrt{r_t}$ in $\textrm{d}r_t=\kappa(\theta-r_t)\,\textrm{d}t+\sigma\sqrt{r_t}\,\textrm{d}B_t$ I'm currently looking at the CIR model and trying to calculate the expectation of $r_t$. I know that by Fubini's theorem, $\mathbb{E}\left(\int_0^t\kappa(\theta-r_s)\,\textrm{d}s\right)=\kappa\theta t-\int_0^t\mathbb{E}(r_s)\,\textrm{d}s$ and we can create an ODE from it, but as for the second term I know I can't use Fubini's theorem, so what can I do? I know one way is to multiply the SDE by $\textrm{e}^{\kappa t}$ but I haven't seen how it helps with the square root term, and Shreve claims it is an Ito integral but I don't see how $\sqrt{r_t}\in\mathcal{L}^2_{\mathcal{F}}$. All help is appreciated, thanks!
We need the following Lemma. Lemma. Consider the SDE $$dX_t = f(X_t)dt + g(X_t)dB_t, \quad X_0 = x_0 \in \mathbb R^{+}. \tag{1}$$ with solution $\{X_t; 0 \leq t \leq T\}.$ Assume that the linear growth condition holds, i.e. there exists a constant $K \in \mathbb R$ such that for all $x \in \mathbb R$ $$|f(x)|^2 \bigvee |g(x)|^2 \leq K(1+x^2).$$ Then there exists a constant $C$ such that $$E[\sup_{0 \leq t \leq T} X_t^2] \leq C.$$ (For a proof of this lemma see for example lemma 3.2 - page 51 in Xuerong Mao's book) Now going to your problem: your SDE is of the SDE (1) type with $f(x)= \kappa(\theta -x)$ and $g(x) = \sigma \sqrt{x}.$ We can see that clearly the linear growth condition holds, so the lemma applies and there exists $C>0$ such that $E[\sup_{0 \leq t \leq T} r_t^2] \leq C.$ This implies that $E\int_0^T (\sigma \sqrt{r_t})^2dt < \infty.$ Thus the process $I_t:= \int_0^t (\sigma \sqrt{r_t})dB_t, 0 \leq t \leq T$ is a martingale and has constant expectation. Therefore, $E[I_T]=E[I_0]=0.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3932091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $H\unlhd G$ and $a\in G$. If the coset $aH$ has order $3$ in $G/H$ and $|H| = 10$, what are the possible $|a|$ in $G$? Context I am an undergrad student taking Abstract Algebra I, and this is a problem on a homework assignment. Problem Let $H$ be a normal subgroup of $G$, and let $a ∈ G$. If the coset $aH$ has order 3 in the factor group $G/H$, and $|H| = 10$, what are the possible orders of $a$ in $G$? What I know * *Since $H$ is a normal subgroup, its left coset should equal its right coset. *Since $H$ has 10 elements, the highest order of an element should be 10.
Hint: The order of $a^3$ is a divisor of the order of $H$ (Lagrange's theorem). On the other hand, if the order of $a$ is $d$, the order of $a^3$ is $\dfrac d{\gcd(d,3)}$. Can you deduce the possibilities for $d$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3932263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Motivation for Open Mapping Theorem In complex analysis, the open mapping theorem says: A holomorphic function carries open sets to open sets. I keep hearing that this is a deep, profound theorem. But I struggle to see why it is significant. How do I interpret it intuitively? For example, does it suggest something about how a holomorphic function locally stretches space? Does it place some sort of bound on the rate of growth of a holomorphic function near a point? The closest I have to an intuition comes from the real analysis example, $f(x) = x^2$, which is continuous but not open (since $f$ maps the open interval $(-1,1)$ to the interval $[0,1)$, which is not open). This happens because $f$ is not injective at $x=0$, but I'm not sure if this intuition carries over to the complex case. I think I am misinterpreting the theorem here, so any insight would be much appreciated. Thanks!
I came across a satisfactory intuition a few days back. A holomorphic function can never map something $2$-dimensional (like an open set) onto something $1$-dimensional (like a line or a curve). The function either maps the $2$-dimensional object to a single point (in which case, the function is constant), or it maps it to something else that's $2$ dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3932438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can a function be differentiable everywhere on its domain, but not Lipschitz on its domain? If we suppose that a function is differentiable at every point on its domain (but place no more restrictions than that), does it follow that the function is Lipschitz on its domain? I think that it does not, and I suspect that it is because we do not require that the function is continuously differentiable. But I'm not sure how to prove my thought - specifically, I can't come up with a counterexample. If I am right that it is not Lipschitz, can we say if it will be locally Lipschitz at every point?
If $f\colon\Bbb R\longrightarrow\Bbb R$ is the function defined by $f(x)=x^2$, then $f$ is differentiable and even a $C^\infty$ function, but it is not Lipschitz continuous. On the other hand, if you define$$\begin{array}{rccc}f\colon&\Bbb R&\longrightarrow&\Bbb R\\&x&\mapsto&\begin{cases}x^2\sin\left(\frac1{x^2}\right)&\text{ if }x\ne0\\0&\text{ if }x=0,\end{cases}\end{array}$$then $f$ is not Lipschitz continuous on any neighborhood of $0$. This has to do with the fact that $f'$ is unbounded (which is stronger than being discontinuous).
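A sketch illustrating the unboundedness of $f'$ near $0$ using the exact derivative $f'(x)=2x\sin(x^{-2})-\frac{2}{x}\cos(x^{-2})$ for $x\ne 0$, sampled along $x_n = 1/\sqrt{2\pi n}$ where $f'(x_n) = -2\sqrt{2\pi n}$:

```python
import math

def fprime(x):
    # exact derivative of f(x) = x^2 sin(1/x^2) for x != 0
    return 2 * x * math.sin(x ** -2) - (2 / x) * math.cos(x ** -2)

# Along x_n = 1/sqrt(2 pi n): sin(1/x_n^2) = 0, cos(1/x_n^2) = 1,
# so |f'(x_n)| = 2 sqrt(2 pi n), which blows up as n grows.
samples = [abs(fprime(1 / math.sqrt(2 * math.pi * n))) for n in (1, 10, 100, 1000)]
```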
{ "language": "en", "url": "https://math.stackexchange.com/questions/3932589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A combinatorial identity involving Stirling numbers of the second kind Answering a recent question I came across the following interesting identity: $$ \sum_{k=0}^m\binom mk{n+k+1 \brace k+1}k!=\sum_{k=0}^m\binom mk (-k)^{m-k}(k+1)^{n+k}. $$ Is there a simple way to prove it?
Consider the following problem. You want to count the number of functions $f$ from $[n+m]$ to $[m+1]$ such that for $y\in [m]\subseteq [m+1],$ if $f^{-1}(y)= \emptyset,$ then $f(y)=m+1$ (such elements $y$ will be called special). In the LHS you have this counted in the following way. Either you will have $m+1$ in the image of the non-special elements or not. In both cases, select $k$ special elements $y\in [m]$ which can be done in $\binom{m}{k}$ ways, and then make a surjective function of the other elements in ${n+m-k\brace m-k}(m-k)!$ or ${n+m-k\brace m-k+1}(m-k+1)!$ ways (depending on $m+1$ being or not being in the image). You will get $$LHS = \underbrace{\sum _{k=0}^m\binom{m}{k}{n+m-k\brace m-k}(m-k)!}_{m+1\text{ not in the image}}+\underbrace{\sum _{k=0}^m\binom{m}{k}{n+m-k\brace m-k+1}(m-k+1)!}_{m+1\text{ in the image}}.$$ Using the recursion of Stirling numbers, this can be seen to be equal to $$LHS =\sum _{k=0}^m\binom{m}{k}\left ({n+k\brace k}k!+{n+k\brace k+1}(k+1)!\right )=\sum _{k=0}^m\binom{m}{k}k!{n+k+1\brace k+1}.$$ Meanwhile, in the alternative universe of the RHS, you can think of it as $$RHS = \sum _{k=0}^m\binom{m}{k}(-1)^{m-k}k^{m-k}(k+1)^{n+k}=\sum _{k=0}^m\binom{m}{k}(-1)^{k}(m-k)^{k}(m-k+1)^{n+m-k},$$ and if you put the first term apart it looks like $$RHS = (m+1)^{n+m}-\sum _{k=1}^m\binom{m}{k}(-1)^{k-1}(m-k)^{k}(m-k+1)^{n+m-k},$$ which you can think of as functions from $[n+m]$ to $[m+1]$ with some particular property. The property being the same as for the LHS. Here, you will then construct sets $$A_i = \{f:[n+m]\rightarrow [m+1]:i\text{ is not in the image and }f(i)\neq m+1\}.$$ So, by the PIE principle, you can think of the RHS as $$\left |[m+1]^{[n+m]}\setminus \bigcup _{i=1}^mA_i\right |.$$
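Independent of either counting argument, the identity can be machine-checked for small $n,m$ (a sketch; the Stirling recursion ${n\brace k}=k{n-1\brace k}+{n-1\brace k-1}$ is standard):

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def S(n, k):
    # Stirling numbers of the second kind via S(n,k) = k S(n-1,k) + S(n-1,k-1)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * S(n - 1, k) + S(n - 1, k - 1)

def lhs(n, m):
    return sum(comb(m, k) * S(n + k + 1, k + 1) * factorial(k) for k in range(m + 1))

def rhs(n, m):
    # note (-0)**m evaluates to 0 for m > 0 and 1 for m = 0, as required
    return sum(comb(m, k) * (-k) ** (m - k) * (k + 1) ** (n + k) for k in range(m + 1))

pairs_ok = all(lhs(n, m) == rhs(n, m) for n in range(6) for m in range(6))
```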
{ "language": "en", "url": "https://math.stackexchange.com/questions/3932757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Describe the $n \times n$ matrices $A$ such that $A+A^T = 2A^{-1}$? $A+A^T = 2A^{-1} \implies (A+A^T)A= 2I$ $$\sum_{k=1}^n a_{kj}(a_{ik}+a_{ki}) = 2I_{ij}$$ So if $i = j$: $$\sum_{k=1}^na_{ki}(a_{ik}+a_{ki}) = 2$$ And if $i \neq j$: $$\sum_{k=1}^n a_{kj}(a_{ik}+a_{ki}) = 0$$ I am now stuck at this stage and can't figure out how to simplify the conditions.
Claim: $A$ must be symmetric. Proof: We have $$ A + A^T = 2A^{-1} \implies A^{-1} = \frac 12[A + A^T]. $$ We see that $A^{-1}$ is symmetric since $$ [A^{-1}]^T = \left(\frac 12[A + A^T]\right)^T = \frac 12[A^T + A^{TT}] = \frac 12[A^T + A] = A^{-1}. $$ It follows that $$ A^T = [(A^{-1})^{-1}]^T = [(A^{-1})^{T}]^{-1} = (A^{-1})^{-1} = A. \quad \square $$ From there, we see that $$ A + A^T = 2A^{-1} \implies 2A = 2A^{-1} \implies A^2 = I. $$ $A$ will satisfy your condition iff $A$ is symmetric with $A^2 = I$.
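As a concrete sanity check (a sketch I added, in plain Python): any symmetric matrix with $A^2=I$, for instance the reflection that swaps the two coordinates, satisfies the original equation.

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

# a symmetric matrix with A^2 = I: the reflection swapping the two coordinates
A = [[0, 1], [1, 0]]
I = [[1, 0], [0, 1]]

assert A == transpose(A)    # symmetric
assert matmul(A, A) == I    # involution, so A^{-1} = A
# hence A + A^T = 2A = 2A^{-1}, which is the original equation
S = [[A[i][j] + transpose(A)[i][j] for j in range(2)] for i in range(2)]
assert S == [[2 * A[i][j] for j in range(2)] for i in range(2)]
```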
{ "language": "en", "url": "https://math.stackexchange.com/questions/3932984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Integrating $1/\theta$ over a range My textbook is telling me: $$\int_{x}^{1} \frac{1}{\theta} \, d\theta = {|\log x|}$$ while I arrive at: $$\int_{x}^{1} \frac{1}{\theta} \, d\theta = |\log(1)| - {|\log(x)|} = - {|\log(x)|}$$ Why am I wrong? (Here $0 < x < 1$.)
The answer in the book is not wrong, but it is over-complicated. The antiderivative of $1/\theta$ on the positive real line is $\ln(\theta)$. Therefore $$\int_x^1 \frac{1}{\theta}\, d\theta = \ln(1)-\ln(x)= -\ln(x).$$ However, since $0<x<1$ (I suppose), $\ln(x)$ is negative, and so $-\ln(x) = |\ln(x)|$.
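A quick numerical check of the sign (a sketch I added, for a sample $x\in(0,1)$): a midpoint Riemann sum of the integral agrees with both $-\ln x$ and $|\ln x|$.

```python
from math import log

def integral_one_over_theta(x, n=100_000):
    # midpoint Riemann sum of the integral of 1/theta from x to 1
    h = (1 - x) / n
    return sum(h / (x + (i + 0.5) * h) for i in range(n))

x = 0.25
approx = integral_one_over_theta(x)
assert abs(approx - (-log(x))) < 1e-6    # equals -ln(x)
assert abs(approx - abs(log(x))) < 1e-6  # which is |ln(x)| since 0 < x < 1
```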
{ "language": "en", "url": "https://math.stackexchange.com/questions/3933188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of a simple Integral Need to show that $$\lim_{t\to 1^-}(1-t)\int_0^t\frac{g(s)}{(1-s)^2}ds=g(1)$$ for any continuous function $g(s)$. I tried the variable change $u=1/(1-s)$, which gives $du=ds/(1-s)^2$ and $$(1-t)\int_1^{\frac{1}{1-t}}g(1-\frac{1}{u})du.$$ I cannot seem to figure out how to get to the limit from here, assuming that what I have so far is correct.
According to Theorem 5.13 in Rudin's "Principles of mathematical analysis" (slightly restated) If $f$ and $h$ are real and differentiable in $(0,1)$ and if $h'(x)\neq 0$ in a neighborhood of $1$, and $h(x)\to +\infty$ as $x\to 1^-$, and in addition $f'(x)/h'(x)\to A$ as $x\to 1^-$, then $f(x)/h(x)\to A$ as $x\to 1^-$. Applying this to $$ f(x)=\int_0^x\frac{g(s)}{(1-s)^2}\,ds\quad\text{and}\quad h(x)=\frac{1}{1-x}, $$ we get (here we use the fundamental theorem of calculus and the property that $g$ is continuous) $$ \frac{f'(x)}{h'(x)}=\frac{g(x)/(1-x)^2}{1/(1-x)^2}=g(x)\to g(1)\quad (x\to 1^-). $$ Hence, the limit you look for (check the conditions) is also $g(1)$.
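The limit can also be checked numerically using the asker's own substitution $u = 1/(1-s)$. Below is a sketch I added (with an arbitrary sample $g(s)=e^s$, so $g(1)=e$), approximating $(1-t)\int_1^{1/(1-t)} g(1-\tfrac1u)\,du$ by a midpoint rule:

```python
from math import e, exp

def g(s):
    return exp(s)   # sample continuous function, g(1) = e

def F(t, n=200_000):
    # (1-t) * integral of g(s)/(1-s)^2 over [0,t], after u = 1/(1-s)
    U = 1.0 / (1.0 - t)
    h = (U - 1.0) / n
    integral = sum(h * g(1.0 - 1.0 / (1.0 + (i + 0.5) * h)) for i in range(n))
    return (1.0 - t) * integral

assert abs(F(0.9999) - e) < abs(F(0.99) - e)   # gets closer as t -> 1^-
assert abs(F(0.9999) - e) < 0.05
```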
{ "language": "en", "url": "https://math.stackexchange.com/questions/3933308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Uniformly distributed sequences Suppose $X_n$ is uniformly distributed on $[0,x_{n-1}]$ given $X_{n-1}=x_{n-1}$ for $n>0$, with $X_0=1$. How do we show that the sequence $S_n=(1+a)^n X_n^a$ is a martingale?
$E(S_{n+1}|S_1,S_2,\dots,S_n)=(1+a)^{n+1} E(X_{n+1}^{a}| X_1,X_2,\dots,X_n)$ since $\sigma (S_1,S_2,\dots,S_n)=\sigma (X_1,X_2,\dots,X_n)$. Hence $E(S_{n+1}|S_1,S_2,\dots,S_n)=(1+a)^{n+1} \frac {\int_0^{X_n} t^{a}dt} {X_n}=(1+a)^{n} X^{a}_n=S_n$, so $\{S_n\}$ is a martingale.
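A small simulation supports the martingale property (a sketch I added, with sample values of $a$ and trial counts of my choosing): the mean of $S_n=(1+a)^n X_n^a$ stays at $S_0=1$.

```python
import random

random.seed(0)

def simulate_S(n, a, trials=200_000):
    # S_n = (1+a)^n X_n^a where X_k ~ U[0, X_{k-1}] and X_0 = 1
    total = 0.0
    for _ in range(trials):
        x = 1.0
        for _ in range(n):
            x *= random.random()   # U[0, x] = x * U[0, 1]
        total += (1 + a) ** n * x ** a
    return total / trials

# E[S_n] should stay at S_0 = 1 for a martingale
assert abs(simulate_S(3, 1.0) - 1.0) < 0.05
assert abs(simulate_S(5, 2.0) - 1.0) < 0.05
```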
{ "language": "en", "url": "https://math.stackexchange.com/questions/3933613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determine the linear function $f: R \rightarrow R$ There's a linear function $f$ whose correspondence rule is $f(x)=|ax^2-3ax+a-2|+ax^2-ax+3$. State the values of the parameter $a$ that fully define the function $f$. According to the problem condition, the function $f(x)$ is linear, so the coefficient of $x^{2}$ must equal $0$. This is met when: \begin{align*} ax^2-3ax+a-2 & < 0\\ a(x^2-3x) &<2-a \\ a\underbrace{\left(x^2-3x+\frac{9}{4}\right)}_{ \left(x-\frac{3}{2}\right)^2\ge 0} &<2-a +\frac{9}{4}a\\ a \left(x-\frac{3}{2}\right)^2 &<2+\frac{5}{4}a\\ 0 & <2+\frac{5}{4}a \\ -\frac{8}{5} & <a \end{align*} What else do I need to analyze?
For any $a$, in order for the $x^2$ term to vanish, the inequality $ax^2-3ax+a-2\le0$ must hold for all values of $x$. Using your analysis, this is equivalent to $$a\left(x-\frac32\right)^2\le2+\frac54a$$ If $a > 0$, there is some $x$ large/small enough such that LHS > RHS. Hence $a \le 0$. Moreover, if $2 + \dfrac54 a < 0$, the above inequality fails for $x = \dfrac32$. Hence $a \ge -\dfrac 85$. This gives the range $-\dfrac 8 5 \le a \le 0$; indeed in this range, LHS is nonpositive for all $x$ while RHS is nonnegative, so our initial inequality holds, and the function is linear.
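A quick numerical sketch of the conclusion (sample values chosen by me, not from the answer): for $a\in[-\tfrac85,0]$ the quadratic $Q(x)=ax^2-3ax+a-2$ is nonpositive everywhere, so $|Q|=-Q$, the $x^2$ terms cancel, and $f(x)=2ax-a+5$ is linear; outside that range $Q$ becomes positive somewhere.

```python
def Q(a, x):
    return a * x * x - 3 * a * x + a - 2

def f(a, x):
    return abs(Q(a, x)) + a * x * x - a * x + 3

xs = [i / 10 - 20 for i in range(401)]   # grid on [-20, 20]

for a in (-1.6, -1.0, -0.5, 0.0):
    # Q <= 0 everywhere, so |Q| = -Q and the x^2 terms cancel
    assert all(Q(a, x) <= 1e-12 for x in xs)
    assert all(abs(f(a, x) - (2 * a * x - a + 5)) < 1e-9 for x in xs)

# outside [-8/5, 0] the quadratic term survives somewhere
assert Q(-2.0, 1.5) > 0
assert Q(0.5, 10.0) > 0
```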
{ "language": "en", "url": "https://math.stackexchange.com/questions/3933777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Normalise rest of list I have a list of fractions, as such: sum(3/10, 1/2, 1/5) = 1 and I multiply some element (the second in this case) by some known factor (1/2 here): sum(3/10, 1/4, 1/5) != 1 So the answer would be: sum(????, 1/4, ???) == 1, where arr[0] and arr[2] are proportionate to their original values. How can I normalise the rest of the array, so that the array still sums to 1, the second element remains the same (1/4 in this case), and the rest of the elements are changed proportionately? I've tried adding/multiplying the rest of the elements by the factor, but it doesn't work.
A general solution that gives the scaling ratio in terms of the array entries. Let the 3 numbers in your array be $x,\ y,\ z$. Condition 1: $$x+y+z = 1$$ where $y= \frac{1}{2}$ Condition 2: You have scaled $y$ by a ratio of $0.5$; let us scale the other 2 numbers by $\alpha$: $$\alpha x+ 0.25 + \alpha z = 1$$ So the 2 equations are: $$x+z = \frac{1}{2}$$ $$\alpha x+\alpha z = \alpha(x+z) = \frac{3}{4} $$ $$\alpha = \frac{3\times2}{4}= \frac{3}{2}$$ Hope this helps...
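As code, the same idea (a sketch I added; the helper name `rescale_others` is mine): hold the fixed entry, then scale the rest by $\alpha = (1 - \text{fixed})/(\text{sum of the rest})$.

```python
from fractions import Fraction

def rescale_others(arr, fixed_idx):
    # scale every element except arr[fixed_idx] so the total returns to 1
    rest = sum(x for i, x in enumerate(arr) if i != fixed_idx)
    alpha = (1 - arr[fixed_idx]) / rest
    return [x if i == fixed_idx else alpha * x for i, x in enumerate(arr)]

arr = [Fraction(3, 10), Fraction(1, 4), Fraction(1, 5)]  # after halving 1/2 -> 1/4
out = rescale_others(arr, 1)
assert sum(out) == 1
assert out[1] == Fraction(1, 4)
# the untouched elements keep their original ratio 3/10 : 1/5
assert out[0] / out[2] == Fraction(3, 10) / Fraction(1, 5)
```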
{ "language": "en", "url": "https://math.stackexchange.com/questions/3933916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability die question on Bayes' Theorem Suppose a girl throws a die. If she gets a 5 or 6, she tosses a coin three times and notes the number of heads. If she gets 1, 2, 3 or 4, she tosses a coin once and notes whether a head or tail is obtained. If she obtained exactly one head, what is the probability that she threw 1, 2, 3 or 4 with the die? My doubt: Let $E_1$ be the event that the outcome on the die is 5 or 6 and $E_2$ be the event that the outcome on the die is 1, 2, 3 or 4. Let $A$ be the event of exactly one head. I am not able to understand why the probability $4/6$ of the die showing $\{1,2,3,4\}$ is not multiplied by the probability of exactly one head, i.e. $1/2$, to get $P(A \mid E_2)$.
The probability you mentioned is $$P[E_2 \cap A] = P[E_2] \cdot P[A|E_2] = \left(\frac{2}{3}\right) \left(\frac{1}{2}\right) = \frac{1}{3}$$ However, the question is asking for $P[E_2|A]$, which, by Bayes' Theorem, is $$\begin{align} P[E_2|A] &= \frac{P[A|E_2] \cdot P[E_2]}{P[A]} \\ \\ &= \frac{P[A|E_2] \cdot P[E_2]}{P[A|E_1] \cdot P[E_1] + P[A|E_2] \cdot P[E_2] } \\ \\ &= \frac{\left(\frac{2}{3}\right) \left(\frac{1}{2}\right)}{\left(\frac{1}{3}\right) \left(\frac{3}{8}\right) + \left(\frac{2}{3}\right) \left(\frac{1}{2}\right)} \\ \\ &= \frac{8}{11} \end{align}$$
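The computation can be verified both exactly and by simulation (a sketch I added; the seed and trial count are arbitrary):

```python
import random
from fractions import Fraction

# exact Bayes computation
pE1, pE2 = Fraction(1, 3), Fraction(2, 3)
pA_E1 = Fraction(3, 8)       # exactly one head in three tosses
pA_E2 = Fraction(1, 2)       # one head in a single toss
posterior = pA_E2 * pE2 / (pA_E1 * pE1 + pA_E2 * pE2)
assert posterior == Fraction(8, 11)

# Monte Carlo sanity check
random.seed(1)
hits = cond = 0
for _ in range(300_000):
    die = random.randint(1, 6)
    tosses = 3 if die >= 5 else 1
    heads = sum(random.random() < 0.5 for _ in range(tosses))
    if heads == 1:
        cond += 1
        hits += die <= 4
assert abs(hits / cond - 8 / 11) < 0.01
```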
{ "language": "en", "url": "https://math.stackexchange.com/questions/3934182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $\sum_{p=0}^{k}(-1)^p\binom{k}{p}p^k=\sum_{p=0}^{k}(-1)^p\binom{k}{p}(p+a)^k, a \in \mathbb{R}$ true? If so, how? I come here from this answer actually: https://math.stackexchange.com/a/3743805/765852 Due to re-indexing, $$\sum_{p=1}^{k+1}(-1)^p\binom{k}{p-1}p^{k+1}$$ should equal $$-\sum_{p=0}^{k}(-1)^p\binom{k}{p}(p+1)^k,$$ yet it's written as $$-\sum_{p=0}^{k}(-1)^p\binom{k}{p}p^k.$$ I tested some results with my calculator and it seems that this is not only true for $(p+1)^k$, but also for $(p+a)^k$, where $a \in \mathbb{R}$. Is this the case? If so, how would one go about proving this? I actually tried my hand at proving this (it's just using simple algebra). However it requires a property which I also tried proving but I am not sure about its proof. I'm going to shamelessly plug that in here (it's open): How would I prove $\sum_{p=1}^{n}\binom{n}{p}p^m(-1)^p=0, m \in \mathbb{N}, 1\leq m < n $ via induction? Ok here's the proof: Statement: $$\sum_{p=0}^{k}(-1)^p\binom{k}{p}(p+a)^k=\sum_{p=0}^{k}(-1)^p\binom{k}{ p}p^k, a \in \mathbb{R}$$ Proof: \begin{align*} \sum_{p=0}^{k}(-1)^p\binom{k}{p}(p+a)^k &= \sum_{p=0}^{k}(-1)^p\binom{k}{p}(p^k+kp^{k-1}a+\binom{k}{2}p^{k-2}a^2+...kpa^{k-1}+a^k) \\ &= \sum_{p=0}^{k}(-1)^p \binom{k}{p}p^k+ (-1)^p\binom{k}{p}kp^{k-1}a + (-1)^p\binom{k}{p} \binom{k}{2}p^{k-2}a^2... \\ &= \sum_{p=0}^{k} (-1)^p\binom{k}{p}p^k+ \sum_{p=0}^{k} (-1)^p\binom{k}{p}kp^{k-1}a+ \sum_{p=0}^{k} (-1)^p\binom{k}{p} \binom{k}{2}p^{k-2}a^2... \\ &= \sum_{p=0}^{k} (-1)^p\binom{k}{p}p^k+ka \sum_{p=0}^{k} (-1)^p\binom{k}{p}p^{k-1}+a^2 \binom{k}{2} \sum_{p=0}^{k} (-1)^p\binom{k}{p}p^{k-2}... \end{align*} Then, by the property linked above, it should be $$\sum_{p=0}^{k} (-1)^p\binom{k}{p}p^k+0+0+....+0+ \sum_{p=0}^{k} (-1)^p\binom{k}{p}$$ Simply by the binomial theorem, the last term should also be zero. This completes the proof.
Using \begin{eqnarray*} [x^k]: k! e^{(a+p)x} = (a+p)^k. \end{eqnarray*} We have \begin{eqnarray*} \sum_{p=0}^{k} (-1)^p \binom{k}{p}(a+p)^k &=& [x^k]: k! \sum_{p=0}^{k} (-1)^p \binom{k}{p} e^{ax} (e^x)^p \\ &=& [x^k]:k! e^{ax} (1-e^x)^k= (-1)^k k! \\ \end{eqnarray*} and so the sum is independent of $a$.
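A quick check with exact rational arithmetic (a sketch I added, not part of the answer) confirms that the sum is independent of $a$ and always equals $(-1)^k k!$:

```python
from math import comb, factorial
from fractions import Fraction

def finite_diff_sum(k, a):
    return sum((-1) ** p * comb(k, p) * (a + p) ** k for p in range(k + 1))

for k in range(1, 8):
    vals = {finite_diff_sum(k, Fraction(n, 3)) for n in range(-6, 7)}
    assert vals == {(-1) ** k * factorial(k)}   # same value for every a
```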
{ "language": "en", "url": "https://math.stackexchange.com/questions/3934383", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $\angle p + \angle q + \angle r + \angle s + \angle t = 500^\circ$, find $\angle A + \angle B + \angle C + \angle D + \angle E$. Consider the following star figure. If $\angle p + \angle q + \angle r + \angle s + \angle t = 500^\circ$, find $\angle A + \angle B + \angle C + \angle D + \angle E$. What I Tried: Here is the figure :- Some might already know that the sum of the interior angles of an $n$-angled star is $(n - 4)180^\circ$. Using that, one might too quickly conclude that the answer is $180^\circ$, but that is not the case here. That formula applies to a star drawn from a single starting point in one continuous stroke, without lifting the pencil, and this figure is not of that kind. From my point of view that makes this question harder. The answer was given as $140^\circ$. Joining lines and introducing variables would take a long time, and in some cases the extended lines will not meet at good places, making it harder. Can anyone help me see why that is the answer?
Considering the angle sum of the star figure, which is actually a 10-gon: $$(360^\circ - p)+\dots + (360^\circ -t)+\angle A + \dots + \angle E = (10-2)\times 180^\circ $$ $$360^\circ \times 5 - (p+q+r+s+t)+\angle A + \dots + \angle E = 8\times 180^\circ $$ Now solve for $\angle A + \dots + \angle E$.
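Carrying out the remaining arithmetic (a quick check I added, confirming the stated answer of $140^\circ$):

```python
p_sum = 500                     # given: p + q + r + s + t
reflex_total = 5 * 360          # the five terms (360 - p), ..., (360 - t) sum to this minus p_sum
polygon_sum = (10 - 2) * 180    # interior angle sum of the 10-gon
S = polygon_sum - (reflex_total - p_sum)
assert S == 140
```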
{ "language": "en", "url": "https://math.stackexchange.com/questions/3934499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Transforming a Weibull Distribution X follows a Weibull distribution (k, $\lambda$), but I want to find the expected value of Y where $$y = A e^{(-Bx)}$$ So I'm trying to solve: $$E[Y] = {\int_0^\infty}A e^{(-Bx)}{f(x)}{dx}$$ where $f(x)$ is the pdf of the Weibull distribution. $$f(x)=\frac {k}{\lambda} (\frac {x}{\lambda})^{k-1}e^{-(x/\lambda)}$$ This is as far as I've gotten: $$E[Y] = A \frac {k}{\lambda^k} {\int_0^\infty} x^{k-1}(e^{-x})^{(\frac {1}{\lambda}+B)}{dx}$$ I can see the form of the Gamma function in there, but I'm not sure how to isolate it. Note: I actually know the expected value of Y that I want (as well as the values of A, k, and $\lambda$). I'm hoping the solution here will be easy enough to invert and then solve for B.
For the time being, let $a=B+\frac 1 \lambda$ $$\int_0^\infty x^{k-1}\,e^{-a x}\,dx=a^{-k} \Gamma (k)\quad \text{if} \quad \Re(k)>0\land \Re(a)>0$$ So, you want to solve for $a$ the equation $$a^{-k} \Gamma (k)=\frac{\lambda ^k}{A k}\mathbb E[Y]\implies a=\frac 1\lambda\left(\frac{A k \Gamma (k)}{\mathbb E[Y]}\right)^{\frac{1}{k}}$$ then $B=a-\frac 1 \lambda$
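A numerical sketch (using the density exactly as written in the question, with sample constants that I chose): the Gamma-integral formula checks out by quadrature, and the inversion recovers $B$.

```python
from math import factorial, exp

def gamma_int(k):               # Gamma(k) for a positive integer k
    return factorial(k - 1)

# numeric check of the formula: integral of x^(k-1) e^(-a x) over [0, inf) = Gamma(k)/a^k
k, a = 3, 1.25
n, upper = 400_000, 40.0        # the integrand is negligible beyond x = 40
h = upper / n
integral = sum(h * ((i + 0.5) * h) ** (k - 1) * exp(-a * (i + 0.5) * h)
               for i in range(n))
assert abs(integral - gamma_int(k) / a ** k) < 1e-6

# forward-compute E[Y] from sample A, lambda, B, then invert for B as in the answer
A, lam, B = 2.0, 1.5, 0.8
rate = B + 1 / lam
EY = A * k / lam ** k * gamma_int(k) / rate ** k
B_recovered = (1 / lam) * (A * k * gamma_int(k) / EY) ** (1 / k) - 1 / lam
assert abs(B_recovered - B) < 1e-9
```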
{ "language": "en", "url": "https://math.stackexchange.com/questions/3934692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Efficient method to evaluate integrals with multiple roots in the denominator $$ \int \frac{x^2-1}{x\sqrt{(x^2 + \beta x +1 )} \sqrt{(x^2 + \alpha x +1 )}} dx $$ I saw this question, but I don't think the methods in the answers are very applicable here. My attempt: $$ \int \frac{x^2-1}{x\sqrt{(x + \frac{\beta}{2})^2 + ( 1 - \frac{\beta^2}{4} )} \sqrt{(x + \frac{\alpha}{2})^2 + ( 1 - \frac{\alpha^2}{4} )}} dx= \frac{1}{\sqrt{ 1- \frac{\beta^2}{4} } \sqrt{1- \frac{\alpha^2}{4} }}\int \frac{x^2-1}{x\sqrt{\frac{(x + \frac{\beta}{2})^2}{ ( 1 - \frac{\beta^2}{4} )} +1} \sqrt{\frac{(x + \frac{\alpha}{2})^2}{ ( 1 - \frac{\alpha^2}{4} )} +1}} dx $$ I'm not sure what the best way to proceed is now.
Divide both numerator and denominator by $x^2$ $$\int \frac{1-\frac{1}{x^2}}{\sqrt{(x+\beta +\frac{1}{x})}\sqrt{(x+\alpha +\frac{1}{x})}}dx$$ which immediately suggests using the substitution $t=x+\frac{1}{x}$ $$\int \frac{dt}{\sqrt{\left(t+\frac{\alpha+\beta}{2}\right)^2-\left(\frac{\alpha-\beta}{2}\right)^2}} = \cosh^{-1}\left[\frac{t+\frac{\alpha+\beta}{2}}{\left|\frac{\alpha-\beta}{2}\right|}\right]$$ from a simple hyperbolic substitution. This makes the final answer $$\cosh^{-1}\left[\frac{x+\frac{1}{x}+\frac{\alpha+\beta}{2}}{\left|\frac{\alpha-\beta}{2}\right|}\right]$$
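A numerical sketch (with sample values $\alpha=3$, $\beta=1$ that I chose, on $x>0$ where the argument of $\cosh^{-1}$ exceeds $1$): the derivative of the claimed antiderivative matches the integrand.

```python
from math import acosh, sqrt

alpha, beta = 3.0, 1.0

def integrand(x):
    return (x * x - 1) / (x * sqrt(x * x + beta * x + 1)
                            * sqrt(x * x + alpha * x + 1))

def antiderivative(x):
    return acosh((x + 1 / x + (alpha + beta) / 2) / abs((alpha - beta) / 2))

h = 1e-5
for x in (0.5, 1.0, 2.0, 3.7):
    # central-difference derivative of the antiderivative
    num_deriv = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(num_deriv - integrand(x)) < 1e-6
```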
{ "language": "en", "url": "https://math.stackexchange.com/questions/3934821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Generalising norms over arbitrary fields I am curious about the structure we require on arbitrary fields if we want to generalise the notion of norms on vector spaces over fields other than either $\mathbb{R}$ or $\mathbb{C}$. Specifically, given a vector space $V$ over $\mathbb{F}$, what structure must be imposed on $\mathbb{F}$, such that we have a map $\|\cdot\|: V \to \mathbb{F}$ s.t. for any $a \in \mathbb{F}$ and any $\bar u, \bar v\in V$, $$\text{(i) }\|\bar u+ \bar v\| \leq \|\bar u\|+\|\bar v\|$$ $$\text{(ii) }\|\bar v\| = e \to \bar v = \boldsymbol{0}, \text{where } e \text{ is the additive identity of } \mathbb{F}$$ $$\text{(iii) }\|a\bar u\| = |a|\|\bar u\|$$ From this question here, it seems that positive definiteness requires that we have an ordered field. The accepted answer implies this is enough structure to generalise the inner product. Where I am getting somewhat confused is how the generalisation works for norms. Specifically, is it really enough to have just an ordered field in order to satisfy the norm property (iii), $\|a\bar x\| = |a|\|\bar x\|$? I know that having an ordered field allows us to generalise (i) and (ii), but how do we define what the modulus of $a$ is? I think the linked question is relevant since if we generalise the inner product over an arbitrary field, and we get a norm out of those inner products, we have made some progress. How does ordering the field allow us to define the modulus of scalars in a vector space over an ordered field? Do we need to specify additional structure?
The trivial real-valued absolute value $$|a|_{tr}=\cases{1 \ if \ a\ne 0\\ 0 \ otherwise}$$ works over any field $F$. We can then consider the trivial real-valued norm $$\| v\| = \sup_j |v_j|_{tr}, \qquad v\in F^n$$ When $char(F)=0$ you can think of $\|\cdot\|$ as being $F$-valued, taking its values in the ordered subfield $\Bbb{Q}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3935003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate the limit $\frac{(2n-1)x_1+(2n-3)x_2+\dots +3x_{n-1}+x_n}{n^2}$ when $x_n\to x$. Let $\{x_n\}$ be a sequence in $\Bbb R$ and $x_n\to x$ as $n\to \infty$. Then $$\frac{(2n-1)x_1+(2n-3)x_2+\dots +3x_{n-1}+x_n}{n^2}\to x.$$ Does anyone know how to solve this kind of problem efficiently? I think I need to estimate $$\left|\frac{(2n-1)x_1+(2n-3)x_2+\dots +3x_{n-1}+x_n}{n^2}-x\right|$$ Are there other ways to calculate the limit directly? Thanks for any comments.
(Fill in the gaps as needed. If you're stuck, explain what you've done and why you're stuck.) 1. Show that if $x_i \rightarrow x$, then $\frac{1}{n} \sum x_i \rightarrow x$. 2. Show that if $x_i \rightarrow x$, then $\frac{1}{n^2} \sum (2i-1) x_i \rightarrow x$. 3. Hence, conclude that $\frac{2}{n} \sum x_i - \frac{1}{n^2} \sum (2i-1) x_i \rightarrow x$.
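A numerical sketch of the claim (with a sample sequence $x_i = 1 + 1/i$ of my choosing): the weights $2(n-i)+1$ sum to $n^2$, and the weighted average tends to the limit $x=1$.

```python
def weighted_avg(xs):
    # (1/n^2) * [ (2n-1)x_1 + (2n-3)x_2 + ... + 1*x_n ]
    n = len(xs)
    return sum((2 * (n - i) + 1) * x for i, x in enumerate(xs, start=1)) / n ** 2

n = 20_000
xs = [1 + 1 / i for i in range(1, n + 1)]
assert sum(2 * (n - i) + 1 for i in range(1, n + 1)) == n ** 2  # weights sum to n^2
assert abs(weighted_avg(xs) - 1) < 0.01                         # x_i -> 1
```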
{ "language": "en", "url": "https://math.stackexchange.com/questions/3935298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the probability that three babies are boys, given that at least one is a boy? I know this is a simple problem, but I am arguing with a friend about its solution, so I want to show him an "official" proof! Suppose that in any birth, the probability of having a boy is $48.5\%$. 1. If we have three persons expecting to deliver, what is the probability that at least one of them gives birth to a boy? 2. If we know that at least one will give birth to a boy (suppose we have accurate ultrasound results), what is the probability all three will have a boy? For the first question, we calculate the probability of one NOT having a boy, which is $1-0.485 = 0.515$, and then the probability of all three not having a boy is $0.515^3 \approx 0.1366$, so the probability that at least one will have a boy is $1-0.1366 = 0.8634 = 86.34\%$. For the second question, since the three events are independent, the probability that all three will have a boy given that at least one will have a boy is equal to the probability that the other two will have a boy. Is it $0.485^2$? I am not sure about the second one.
In the second problem, please note that your set excludes the case where none of the fetuses is a baby boy, as you know at least one of them is. So the probability in the second case that all three fetuses are baby boys $ \displaystyle = \frac{0.485^3}{1-0.515^3}$
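Evaluating this, and cross-checking the conditioning with a quick simulation (a sketch I added; the seed and trial count are arbitrary):

```python
import random

p = 0.485
exact = p ** 3 / (1 - (1 - p) ** 3)   # approximately 0.1321, not 0.485^2

random.seed(7)
all_boys = at_least_one = 0
for _ in range(400_000):
    boys = sum(random.random() < p for _ in range(3))
    if boys >= 1:
        at_least_one += 1
        all_boys += boys == 3
assert abs(all_boys / at_least_one - exact) < 0.005
assert abs(exact - 0.1321) < 0.0005
```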
{ "language": "en", "url": "https://math.stackexchange.com/questions/3935490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 0 }
Prove that $x \gt y \gt 0 \implies x \gt x-y$ Prove that $x \gt y \gt 0 \implies x \gt x-y$ I have approached it as below. $y \gt 0 \implies y=x+(-x)+y \gt 0 \implies x+((-x)+y)=x+(-(x-y))=x-(x-y)>0 \implies x\gt x-y$ What I observe in this proof is that it does not consider the fact that $x \gt y$ and $x \gt 0$. Is this proof valid?
The proof is indeed valid. And there's a slightly shorter proof which goes like this: $$y > 0 \implies y+(-y) > 0+(-y) \implies 0 > -y \implies x+0 > x+(-y) \implies x > x-y $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3935829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Showing that if $3x$ is even then $3x+5$ is odd I'm learning the absolute basics of how to do proofs, and am really struggling. If 3x is even then 3x+5 is odd. I get that even numbers are 2n and odd numbers are 2n+1. For the life of me, I CANNOT get my work into that form. I feel so dumb. I tried looking up other answers before posting, but nothing I found is this basic. Work: -Assumptions- 3x = 2n 3x+5 = 2k+1 -Trying to make sense of 3x- 3x+5 = 2k+1 3x = 2k-4 -Plugging in 2k-4 for 3x- 2k-4 = 2n 2k = 2n+4 k = n+2 -Plugging in n+2 for k- 3x+5 = 2(n+2)+1 ...This is where I gave up. I don't know where I'm going with this anymore.
Remember here that $n$ represents ANY natural number. You got to the answer but you didn't even realize it. That's probably because you are thinking syntactically rather than semantically. What I mean is the literal string of symbols $2(n+2)$ didn't register to you as even because it is not the same as the string $2n$. But $n+2$ is a natural number just like $n$ is. So the strings $2(n+2)$ and $2n$ both represent even numbers, and so $2(n+2) + 1$ is odd, just as you have shown in your last line.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3935988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
How to solve the heat equation for functional boundary conditions and the step function as the initial condition? We are trying to solve the one-dimensional heat equation: $\frac{\partial u}{\partial t} = k \frac{\partial^{2}u }{\partial x^{2}}$. The initial condition is: $u(x,0)= u_{1} $ when $x < 1,$ and $0$ when $x \ge 1$. The boundary conditions are as follows: $u(0,t) = \frac{1}{t+a^{-1}}+a$ $u(L,t) = \frac{-1}{t+a^{-1}}+a$ $a$ here is a constant. We are trying to figure out how to solve this equation. Should we go the Fourier series route to solve this (https://www.youtube.com/watch?v=ToIXSwZ1pJU) or should we do separation of variables?
When solving the homogeneous problem on a finite domain with time-dependent boundary conditions, you want to get the equilibrium temperature. So suppose we have $$\begin{cases} u_{t} = k u_{xx} & 0 \leq x \leq L , t>0 \\ u(x,0) = f(x) , & 0 \leq x \leq L \\ u(0,t) = A(t) , u(L,t) = B(t) & t > 0 \end{cases}$$ In this case we need a candidate for the equilibrium temperature, so we create a function called the reference temperature, $r(x,t)$, and we want it to satisfy $$ r(0,t) = A(t) \\ r(L,t) = B(t) $$ We can create one like so: $$ r(x,t) = A(t) + \frac{x}{L}\big[B(t) - A(t) \big]$$ Now we work instead with $$ v(x,t) = u(x,t) - r(x,t) $$ and this $v(x,t)$ satisfies homogeneous boundary conditions (note that $r$ is linear in $x$, so $r_{xx}=0$; strictly, since $r_t \neq 0$ here, $v$ satisfies $v_t = k v_{xx} - r_t$, and the series below solves the homogeneous part of that problem). Ok, now that it has been set up, we let $$ r(x,t) = A(t) + \frac{x}{L}\big[B(t) - A(t) \big]$$ and now we replace $A(t) = \frac{1}{t+a^{-1}} + a$ and $B(t) = \frac{-1}{t+a^{-1}} + a $: $$ r(x,t) = \frac{1}{t+a^{-1}} + a + \frac{x}{L} \big [\frac{-2}{t+a^{-1}} \big]$$ Now we have this homogeneous problem, which you should be able to solve: $$\begin{cases} v_{t} = k v_{xx} & 0 \leq x \leq L , t>0 \\ v(x,0) = f(x) - r(x,0) , & 0 \leq x \leq L \\ v(0,t) = 0 , v(L,t) = 0 & t > 0 \end{cases}$$ The solution to this is $$ v(x,t) = \sum_{n=1}^{\infty} a_{n} \sin(\frac{n \pi x}{L}) e^{-k t(\frac{n \pi}{L})^{2} }$$ You solve for the coefficients $a_{n}$ and get $$ a_{n} = \frac{2}{L} \int_{0}^{L} \big[ f(x) - r(x,0) \big] \sin(\frac{n \pi x}{L}) dx $$ where $f(x)$ is your function $$ f(x) = \begin{cases} u_{1} & x < 1 \\ 0 & x \geq 1 \end{cases}$$ which looks like the complement of a Heaviside function multiplied by the constant $u_1$. Try that and see what happens. Think about the reference temperature function $r(x,t)$: we need a function that obeys the boundary conditions. If $u(x,t) = v(x,t) + r(x,t)$, we have already said that $v(x,t)$ is the homogeneous solution. So $u(0,t) = v(0,t) + r(0,t)$ and $u(L,t) = v(L,t) + r(L,t)$. But $v(0,t) = 0$ and $v(L,t) = 0$.
At $x = 0$ we get $$ r(0,t) = A(t) + \frac{0}{L}[B(t) - A(t)] = A(t)$$ and at $x=L$ we have $$ r(L,t) = A(t) + \frac{L}{L}[B(t) -A(t)] = B(t) + A(t) - A(t) = B(t), $$ which is what we wanted.
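A small sketch in code (with sample values $a=2$, $L=1$ that I chose) confirming that the reference temperature matches both boundary conditions:

```python
def A(t, a=2.0):
    return 1 / (t + 1 / a) + a

def B(t, a=2.0):
    return -1 / (t + 1 / a) + a

def r(x, t, L=1.0):
    # reference temperature: linear interpolation between the boundary values
    return A(t) + (x / L) * (B(t) - A(t))

for t in (0.0, 0.5, 3.0):
    assert abs(r(0.0, t) - A(t)) < 1e-12   # r(0,t) = A(t)
    assert abs(r(1.0, t) - B(t)) < 1e-12   # r(L,t) = B(t)
```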
{ "language": "en", "url": "https://math.stackexchange.com/questions/3936111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove the following sequence is convergent: $a(1) = 1$ and $a(n+1) = \frac{1}{2} \left(a(n) + \frac{3}{a(n)}\right)$ I know the limit is $\sqrt 3$. I tried proving that the sequence is decreasing from $a(2)$ onward, but I got stuck at this point: $a(n+1) - a(n) = \frac{3 - a(n)^2}{2a(n)}$. Thank you very much!
Hint: On the interval $(\sqrt 3,+\infty)$ the function $\:f(x)=\dfrac12\Bigl(x+\dfrac 3x\Bigr)$ is increasing and $\:f(x)<x$. Note that $a_2=2\in(\sqrt 3,+\infty)$.
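Numerically iterating the recursion (a quick sketch I added) shows exactly the behaviour the hint describes: $a_2=2>\sqrt3$, and from there the sequence decreases toward $\sqrt3$.

```python
def iterate(n):
    a = 1.0
    seq = [a]
    for _ in range(n):
        a = 0.5 * (a + 3 / a)
        seq.append(a)
    return seq

seq = iterate(8)    # a_1 .. a_9
assert abs(seq[-1] - 3 ** 0.5) < 1e-12
# decreasing from a_2 onward (up to floating-point noise), bounded below by sqrt(3)
assert all(x >= y - 1e-12 for x, y in zip(seq[1:], seq[2:]))
assert all(x >= 3 ** 0.5 - 1e-12 for x in seq[1:])
```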
{ "language": "en", "url": "https://math.stackexchange.com/questions/3936204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
A minor point in Serre's Representation Theory Let $R$ be a commutative ring of characteristic zero, $G$ a finite group. For each conjugacy class $[x]$ in $G$, define $e_{[x]} = \sum_{s\in[x]}s$. It is easily checked that these $\{e_{[x]}\}_{x\in G}$ form a basis for $Z(\mathbb{C}[G])$. Serre makes the claim that "... each product $e_{[x]}e_{[y]}$ is a linear combination with integer coefficients of the $e_{[z]}$." While I can see how these $e_{[x]}$'s form a basis for $Z(\mathbb{C}[G])$, for some reason I can't see this fact as easily. In fact, I can't even intuit why $e_{[x]}e_{[y]}$ should also be an element of $Z(\mathbb{C}[G])$ when written out as $$ e_{[x]}e_{[y]} = \left(\sum_{g\in G}gxg^{-1}\right)\left(\sum_{h\in G}hyh^{-1}\right) = \sum_{g,h\in G}gxg^{-1}hyh^{-1} $$ To me it's not clear at all that the above double sum is "a linear combination with integer coefficients of the $e_{[z]}$." Can someone provide some insight?
Well, since $e_{[x]}$ and $e_{[y]}$ are both in $Z(\mathbb{C}[G])$, so is their product. Moreover, it is clear that $e_{[x]}e_{[y]}$ is an integer linear combination of elements of $G$ (since it is just some sum of elements of $G$). But since it is in $Z(\mathbb{C}[G])$, the coefficients must be constant on each conjugacy class, and so it is in fact an integer linear combination of $e_{[z]}$'s. To see this more explicitly from the expression $$e_{[x]}e_{[y]} = \sum_{g,h\in G}gxg^{-1}hyh^{-1},$$ note that $$k(gxg^{-1}hyh^{-1})k^{-1}=(kg)x(kg)^{-1}(kh)y(kh)^{-1}.$$ So, this sum is unchanged if you conjugate by $k$: the terms just get permuted, with the term indexed by $(g,h)$ becoming the term indexed by $(kg,kh)$.
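This can be checked concretely for a small group. The sketch below (my own construction, using $S_3$ with permutations as tuples) multiplies class sums in $\mathbb{Z}[G]$ and verifies that the coefficients are constant on conjugacy classes, hence each product $e_{[x]}e_{[y]}$ is an integer combination of the $e_{[z]}$:

```python
from itertools import permutations
from collections import Counter

# the symmetric group S3; a permutation is a tuple p with p[i] = image of i
G = list(permutations(range(3)))

def mul(p, q):          # composition: (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inv(p):
    r = [0] * len(p)
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

def conj_class(x):
    return frozenset(mul(mul(g, x), inv(g)) for g in G)

classes = {conj_class(x) for x in G}     # identity, transpositions, 3-cycles
class_sums = [Counter(c) for c in classes]

def algebra_mul(u, v):                   # product in the group ring Z[G]
    out = Counter()
    for p, cp in u.items():
        for q, cq in v.items():
            out[mul(p, q)] += cp * cq
    return out

for u in class_sums:
    for v in class_sums:
        w = algebra_mul(u, v)
        # integer coefficients, constant on each conjugacy class
        for c in classes:
            assert len({w[g] for g in c}) == 1
```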
{ "language": "en", "url": "https://math.stackexchange.com/questions/3936368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Simple proof of $\int_I \exp(-f(x))dx\geq \int_I \exp(-g(x))dx$ if $\int_I (f(x) - g(x))\exp(-g(x))dx = 0$ If for two functions $f(x)$ and $g(x)$ we have that: $$\int_I (f(x) - g(x))\exp(-g(x))dx = 0 \tag{1}$$ where $I$ is an arbitrary interval on the real line, then $$\int_I \exp(-f(x))dx\geq \int_I \exp(-g(x))dx \tag{2}$$ This follows in a straightforward way from the Bogoliubov inequality. It seems to be a reasonably powerful tool to get to sharp bounds for certain integrals or summations, but it doesn't seem to be used a lot for this purpose in mathematics. I was wondering if a more straightforward proof can be given for the special case of integrals and summations. This can be used to obtain sharp lower bounds for integrals of the form $\int_I \exp(-f(x))dx$ by writing down a function $g(x)$ containing parameters, and then imposing the constraint (1) to eliminate one parameter and then maximizing $\int_I \exp(-g(x))dx$ w.r.t. the remaining parameters. A simple example is to take $f(x) = x^2$ and $g(x) = a + b x$ for $b>0$ and $I$ the positive real line, which yields the inequality $\pi \geq e$.
Using $e^u \ge 1+u$ for $u \in \Bbb R$ we have $$ e^{-f(x)} - e^{-g(x)} = \bigl (e^{g(x)-f(x)} - 1 \bigr) e^{-g(x)} \ge \bigl(g(x)-f(x)\bigr) e^{-g(x)} $$ so that $$ \int_I e^{-f(x)} \, dx - \int_I e^{-g(x)} \, dx \ge \int_I \bigl(g(x)-f(x)\bigr) e^{-g(x)} \, dx \, . $$ If $(1)$ holds then the right-hand side is zero, and that implies $(2)$. We can even conclude that strict inequality holds in $(2)$, unless $f(x) = g(x)$ for almost all $x \in I$.
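The question's closing example ($f(x)=x^2$, $g(x)=a+bx$ on $I=(0,\infty)$) can be verified numerically; with $b=2$ and $a=2/b^2-1=-\tfrac12$ (values I worked out separately, not stated in the answer), the constraint $(1)$ holds and the inequality $(2)$ reproduces $\sqrt\pi/2 \ge \sqrt e/2$, i.e. $\pi \ge e$:

```python
from math import sqrt, pi, e, exp

b = 2.0
a = 2 / b ** 2 - 1          # enforces the orthogonality constraint (1)

def g(x):
    return a + b * x

# midpoint quadrature on [0, 30]; all integrands are negligible beyond that
n, upper = 200_000, 30.0
h = upper / n
xs = [(i + 0.5) * h for i in range(n)]

constraint = sum(h * (x * x - g(x)) * exp(-g(x)) for x in xs)
assert abs(constraint) < 1e-5

lhs = sum(h * exp(-x * x) for x in xs)   # integral of e^{-f}, f = x^2
rhs = sum(h * exp(-g(x)) for x in xs)    # integral of e^{-g}
assert lhs >= rhs
assert abs(lhs - sqrt(pi) / 2) < 1e-6
assert abs(rhs - sqrt(e) / 2) < 1e-6
```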
{ "language": "en", "url": "https://math.stackexchange.com/questions/3936503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Ratio of obtuse triangles to acute triangles in the square. I came across OEIS sequences A190020 and A190019, and I noticed that they seemed to grow at a similar rate. A190020: Number of obtuse triangles on a (n X n)-grid (or geoboard) A190019: Number of acute triangles on a (n X n)-grid (or geoboard) In particular, I'm interested in their ratio, which gives a sense of the relative frequency of obtuse vs acute triangles when points are chosen in a square. Empirically, it looks like $\displaystyle \lim_{n \rightarrow \infty}\frac{A190020(n)}{A190019(n)} \approx 2.6391$. (See graph of the ratio below.) 1. What is a heuristic for why the limit appears to be finite and greater than 1? 2. How do you prove a finite upper bound for this ratio? 3. How do you prove a nonzero lower bound for this ratio? 4. Does this limit converge? Is it known what it converges to? 5. If three points are selected uniformly at random to form a triangle $\Delta$ in the unit square $[0,1] \times [0,1]$, does $$\frac{P(\Delta \text{ is obtuse})}{P(\Delta \text{ is acute})} = \lim_{n \rightarrow \infty}\frac{A190020(n)}{A190019(n)}?$$
I have an outline for the calculation. It is a six-fold integral, for the six coordinates of the triangle. I think the six-fold integral can be evaluated to a single integral, which has several dozen elementary terms. I would need Maple or Mathematica to finish this. Let $S$ be the unit square $[0,1]\times[0,1]$. Let $A$ and $B$ have coordinates $A(x,y)$, and $B(x+r\cos\theta,y+r\sin\theta)$ Let the ray $AB$ meet the edge of $S$ at $F$. Let the line through $A$, perpendicular to $AB$, meet the square's edges at $D$ and $E$. The probability $P$ that the triangle is obtuse is three times the probability that angle $\angle BAC$ is obtuse. In the following formula, $I_{BAC \text{ is obtuse}}$ is $1$ if the angle at $A$ is obtuse, and $0$ otherwise. $$P = 3\iint_S da\iint_S db \iint_S dc\text{ } I_{BAC \text{ is obtuse}}$$ The integral over $c$ is the area bounded by $S$ and $DE$, on the opposite side from $B$. This has a dozen formulas, depending on which edges $D$ and $E$ are on. Let $T(x,y,\theta)$ be this area. The specific formula for the picture is $\frac12(x+y\tan\theta)(y+x\cot\theta)$ $$T(x,y,\theta)=\iint_S dc \text{ }I_{BAC\text{ is obtuse}}\\ P = 3\iint_Sda\iint_Sdb\text{ } T(x,y,\theta)$$ One can integrate over $b$ using polar coordinates centered at $A(x,y)$. Since $T$ is independent of $r$, this can be integrated out $$\iint_S db T(x,y,\theta)=\int dr \int d\theta \text{ }rT(x,y,\theta)\\ = \int d\theta\frac12(AF)^2 T(x,y,\theta)\\ P = 3\iint_S da \int_0^{2\pi}d\theta \frac12(AF)^2 T(x,y,\theta)$$ As $\theta$ runs through $2\pi$, there are twelve different formulas for $(AF)^2T(x,y,\theta)$, depending on which sides $D$, $E$ and $F$ are on. All twelve have an elementary integral. We can thus reach a formula $$U(x,y) = \int_0^{2\pi}d\theta\frac12(AF)^2 T(x,y,\theta)\\ P = 3\iint_S \text{ }dx\text{ } dy\text{ }U(x,y)$$ $U(x,y)$ has several dozen terms, involving $x^i(1-x)^jy^k(1-y)^l$ for exponents that run from $-2$ to $4$. 
It also has terms involving $\log x,\log (1-x),\log y$ and $\log(1-y)$. The log terms are multiplied by polynomials in $x$ and $y$; the exponents for those terms are not negative. $U(x,y)$ has a different form if $A$ is in the yellow region, or orange region; and fourteen other formulas in the rest of the square. By symmetry, we just need to integrate over the yellow and orange regions, and multiply by eight because of the square's symmetry. Each formula of $U(x,y)$ must be integrated over the appropriate region. Suppose $U_1(x,y)$ is the formula for the yellow region and $U_2$ for the orange region. $$P = 24\int_0^{1/2}dx\int_{1/2-\sqrt{1/4-x^2}}^xdy \text{ }U_1(x,y) + \\ 24\int_0^{1/2}dx\int_0^{1/2-\sqrt{1/4-x^2}}dy\text{ } U_2(x,y) $$ I think the $y$ integrals have elementary form, but since the limits are on the circular arc, I haven't found an elementary form for the final $dx$ integral.
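Regarding the last question (whether the continuous-probability ratio matches the grid limit), a Monte Carlo sketch (my own, with arbitrary seed and trial count) for three uniform points in the unit square gives a ratio consistent with the empirical $\approx 2.639$ from the OEIS data:

```python
import random

random.seed(42)

def is_obtuse(pts):
    # largest angle exceeds 90 degrees iff the largest squared side
    # exceeds the sum of the other two (law of cosines)
    (ax, ay), (bx, by), (cx, cy) = pts
    d = sorted([(ax - bx) ** 2 + (ay - by) ** 2,
                (ax - cx) ** 2 + (ay - cy) ** 2,
                (bx - cx) ** 2 + (by - cy) ** 2])
    return d[2] > d[0] + d[1]

trials = 400_000
obtuse = sum(is_obtuse([(random.random(), random.random()) for _ in range(3)])
             for _ in range(trials))
ratio = obtuse / (trials - obtuse)
assert 2.4 < ratio < 2.9   # empirical limit is about 2.639
```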
{ "language": "en", "url": "https://math.stackexchange.com/questions/3936716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$2n$ cells were marked on an infinite triangular grid, can you always find a triangle that contains exactly $n$ marked cells? The problem: (Solve the problem for each positive integer $n$ separately) $2n$ cells were marked on an infinite triangular grid, is it always possible to find a triangle (made by the grid lines) that contains exactly $n$ marked cells? My progress: I managed to solve the problem for every odd integer and for $n=4,6,8,10,12$ image links: all odd examples and $4,6,8,10,12$ My inspiration to create the problem: my inspiration
Proof for $n$ odd Consider $4k+2$ marked cells arranged as per the OP's diagrams and suppose that triangle $ABC$ contains exactly $2k+1$ marked cells, where $AB$ is horizontal. The lines $AB$ and $AC$ divide the plane into $4$ regions. Let the region containing triangle $ABC$ contain exactly $2k+1+a$ marked cells and let the directly opposite region contain $\alpha$ marked cells. Define $b,c,\beta, \gamma$ similarly and then $$a+b+c+\alpha+\beta+\gamma=2k+1. \tag{1}$$ Considering edge $BC$ we see that $a+\beta+\gamma$ is even. Similarly, $b+\alpha+\gamma$ is even and $c+\alpha+\beta$ is $0,2k-2$ or $2k$. If $\gamma \ne 0$, then all the marked cells in triangle $ABC$ are in a triangle of side $3$ units, which is clearly impossible. Therefore $\gamma = 0$. If $\alpha +\beta=0$ then $a,b,c$ are all even, contradicting equation $(1)$. So there are two cases: If $c+\alpha+\beta=2k$. Then $a+b=1$. Considering edge $BC$ we see that $a$ is $0$ or at least $2$. The same is true for $b$ and so this case is impossible. If $c+\alpha+\beta=2k-2$. Then $a+b=3$. Considering edge $BC$ again we see that $a$ is $0,2$ or at least $4$. The same is true for $b$ and so this case is also impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3936873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 1, "answer_id": 0 }
A question of Roots of unity By considering the ninth roots of unity, show that: $\cos(\frac{2\pi}{9}) + \cos(\frac{4\pi}{9}) + \cos(\frac{6\pi}{9}) + \cos(\frac{8\pi}{9}) = \frac{-1}{2}$. I know how to find the roots of unity, but I am unsure as to how I can use them in finding the sum of these $4$ roots.
Note that\begin{multline}\cos\left(\frac{2\pi}9\right)+\cos\left(\frac{4\pi}9\right)+\cos\left(\frac{6\pi}9\right)+\cos\left(\frac{8\pi}9\right)=\\=\frac12\left(e^{2\pi i/9}+e^{-2\pi i/9}+e^{4\pi i/9}+e^{-4\pi i/9}+e^{6\pi i/9}+e^{-6\pi i/9}+e^{8\pi i/9}+e^{-8\pi i/9}\right)\end{multline}But this is half the sum of all ninth roots of unity other than $1$. So, it's half the sum of the roots of $$x^8+x^7+x^6+x^5+x^4+x^3+x^2+x+1,$$ and that sum is $-1$.
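As a quick numerical sanity check of the identity (Python):

```python
import math

# the four cosines should sum to -1/2, i.e. half of the root sum -1
s = sum(math.cos(2 * math.pi * k / 9) for k in range(1, 5))
```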
{ "language": "en", "url": "https://math.stackexchange.com/questions/3937022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Are there particular cases of Muntz theorem which can be proved with an elementary way? The Muntz theorem states that the space $S := span\{x^{\lambda_i}\}_{i \geq 0}$ is dense in $(C[0,1], \| .\|_{\infty})$ if and only if $\sum_{i = 0}^{\infty} \frac1{\lambda_i} = \infty$ (where $\lambda_0 = 1$ and {$\lambda_i\}_{i \geq 1}$ are stricly positive real numbers). I'm interested in the implication : $\sum_{i = 0}^{\infty} \frac1{\lambda_i} < \infty$ implies $S$ is not dense. The proof I know uses complex analysis (with infinite product) and Hahn-Banach theorem. But, are there particular cases (for example : $\lambda_n = n^2$ or $\lambda_n = n!$) where this implication can be proved in a more elementary way ?
We can prove it simply by: 1- first considering the space $(C[0,1], \|\cdot\|_2)$, where we can compute explicitly the $L^2$ distance between $x^m$ and $\operatorname{span}(x^{\lambda_1}, \dots, x^{\lambda_n})$. It is given by a ratio of Gram determinants. Thus we prove the result in $(C[0,1], \|\cdot\|_2)$. 2- then, for the $\|\cdot\|_{\infty}$ case, to approximate a continuous function $f$ we first approximate $f$ by a $C^1$ function $g$ (a polynomial, for example) and then approximate $g'$ in the $L^2$ norm.
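Step 1 can be made concrete. In $L^2[0,1]$ one has $\langle x^a,x^b\rangle=\frac1{a+b+1}$, the squared distance from $x^m$ to $\operatorname{span}(x^{\lambda_1},\dots,x^{\lambda_n})$ is a ratio of two Gram determinants, and the classical closed form for it is $\frac1{2m+1}\prod_i\left(\frac{m-\lambda_i}{m+\lambda_i+1}\right)^2$. A sketch checking the Gram-determinant ratio against that product for the example $\lambda_n=n^2$ (Python, exact rational arithmetic):

```python
from fractions import Fraction

def gram_det(exps):
    # Exact determinant of the Gram matrix of {x^e : e in exps} in L^2[0,1],
    # where <x^a, x^b> = 1/(a + b + 1).
    M = [[Fraction(1, a + b + 1) for b in exps] for a in exps]
    det = Fraction(1)
    for i in range(len(M)):
        piv = M[i][i]  # nonzero: Gram matrices of independent functions are PD
        det *= piv
        for r in range(i + 1, len(M)):
            f = M[r][i] / piv
            for c in range(i, len(M)):
                M[r][c] -= f * M[i][c]
    return det

lams = [4, 9, 16, 25]   # lambda_n = n^2, truncated for illustration
m = 1
dist2 = gram_det([m] + lams) / gram_det(lams)   # squared L2 distance
closed = Fraction(1, 2 * m + 1)
for lam in lams:
    closed *= Fraction(m - lam, m + lam + 1) ** 2
```

Since $\sum 1/n^2<\infty$, the product converges to a nonzero limit, so the distance stays bounded away from $0$ as more exponents are added; that is exactly the non-density.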
{ "language": "en", "url": "https://math.stackexchange.com/questions/3937268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find $x$ when $x^2\equiv y\bmod p$ Given $p$ is prime and $y$ is a constant. What's the fastest possible way to find $x$ where $x^2\equiv y\bmod p$? Example: $x^2\equiv97\bmod101$ would give us $x=81$ as one of the solutions. What's the fastest way to compute any one of the solutions of $x$? Constraints: * *$0\le p\le10^9$ *$0\le y\le p$ *$0\le x<p$
The standard approach to compute square roots modulo primes is the Tonelli–Shanks algorithm.
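For reference, a compact sketch of Tonelli–Shanks (Python; returns one square root of $y$ modulo an odd prime $p$, or `None` when $y$ is a non-residue):

```python
def legendre(a, p):
    # Euler's criterion: 1 if a is a quadratic residue mod p, p-1 if not
    return pow(a, (p - 1) // 2, p)

def tonelli_shanks(y, p):
    # One solution of x^2 ≡ y (mod p) for an odd prime p, or None if none exists.
    y %= p
    if y == 0:
        return 0
    if legendre(y, p) != 1:
        return None                       # y is a quadratic non-residue
    if p % 4 == 3:
        return pow(y, (p + 1) // 4, p)    # easy case, no loop needed
    # write p - 1 = q * 2^s with q odd
    q, s = p - 1, 0
    while q % 2 == 0:
        q //= 2
        s += 1
    # find any quadratic non-residue z
    z = 2
    while legendre(z, p) != p - 1:
        z += 1
    m, c, t, r = s, pow(z, q, p), pow(y, q, p), pow(y, (q + 1) // 2, p)
    while t != 1:
        # least i with t^(2^i) == 1
        i, t2 = 0, t
        while t2 != 1:
            t2 = t2 * t2 % p
            i += 1
        b = pow(c, 1 << (m - i - 1), p)
        m, c = i, b * b % p
        t, r = t * c % p, r * b % p
    return r
```

The other root is then $p-x$; for the example $x^2\equiv97\pmod{101}$ this routine yields $81$ (with the second root $101-81=20$). The whole computation takes $O(\log^2 p)$ multiplications, comfortably fast for $p\le10^9$.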
{ "language": "en", "url": "https://math.stackexchange.com/questions/3937412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof that $\int_0^\infty\frac{\ln x}{x^3 - 1} \, dx = \frac{4 \pi^2}{27}$ I realise this question was asked here, but I'm not able to work with any of the answers. The hint given by my professor is Integrate around the boundary of an indented sector of aperture $\frac{2 \pi}{3}$ but when I try that I can't figure out how to deal with the (divergent) integral along the radial line at angle $2 \pi / 3$. My issue with the accepted answer is that it uses the residue theorem where it doesn't apply, at least as we've learned it, since $$z \mapsto \frac{\log^2z}{z^3 - 1}$$ has non-isolated singularities on the closed region bounded by the proposed contour (due to branch cuts), and I am not sure how to relate the integral along the real axis to one over a contour modified to avoid the branch cut. For a fixed $\varepsilon > 0$, and for any $\delta \in (0, 1 - \varepsilon)$, we could let $\log_{-\delta / 2}$ be the branch of the logarithmic function with a cut along the ray $\operatorname{arg}z = -\delta / 2$ and define a contour which goes along the positive real axis from $\varepsilon$ to $1 - \delta$, a semicircle in the upper half plane of radius $\delta$ around $1$, the positive real axis from $1 + \delta$ to $2$, an arc of radius $2$ around $0$ with central angle $2 \pi - \delta$, the ray $\operatorname{arg}z = 2 \pi - \delta$ from $r = 2$ to $r = \varepsilon$, and finally an arc of radius $\varepsilon$ around $0$ back to $\varepsilon$. But then, for example, I don't know how to calculate the limit of integral along the arc of radius $\varepsilon$ $$\lim_{\delta \to 0}\int_0^{2 \pi - \delta}\frac{\log_{-\delta / 2}^2(\varepsilon e^{i \theta})}{\varepsilon^3 e^{3 i \theta} - 1} \varepsilon i e^{i \theta} \, d\theta.$$ If I instead try to first use the substitution $x = e^u$ on the real integral and then compute a contour integral, I still get a divergent integral that I don't know how to handle, this time along the top of an indented rectangle.
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\on}[1]{\operatorname{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\cal C}$ is a key-hole contour which "takes care" of the $\ds{\ln}$-branch cut with $\ds{0 < \arg\pars{z} < 2\pi}$. Integrand poles are $\ds{\expo{2\pi\ic/3}}$ and $\ds{\expo{4\pi\ic/3}}$. The Residue Theorem yields: \begin{align} \oint_{\cal C}{\ln^{2}\pars{z} \over z^{3} - 1}\,\dd z& = 2\pi\ic\bracks{\pars{{2\pi \over 3}\,\ic}^{2}\,{\expo{2\pi\ic/3} \over 3} + \pars{{4\pi \over 3}\,\ic}^{2}\,{\expo{4\pi\ic/3} \over 3}} \\[5mm] & = -\,{4 \over 27}\pars{3\root{3} - 5\ic}\pi^{3} \end{align} Note that the singularity at $\ds{z = 1}$ lies on the $\ds{\ln}$-branch cut. Hereafter we properly deal with this fact (see below).
Namely, \begin{align} &\bbox[5px,#ffd]{\oint_{\cal C}{\ln^{2}\pars{z} \over z^{3} - 1}\,\dd z} \\[5mm] = &\ \int_{0}^{\infty}{\ln^{2}\pars{x} \over \pars{x - 1 + \ic 0^{+}}\pars{x^{2} + x + 1}}\,\dd x \\[2mm] + &\ \int_{\infty}^{0}{\bracks{\ln\pars{x} + 2\pi\ic}^{2} \over \pars{x - 1 - \ic 0^{+}}\pars{x^{2} + x + 1}}\,\dd x \\[5mm] = &\ \mrm{P.V.}\int_{0}^{\infty}{\ln^{2}\pars{x} \over \pars{x - 1}\pars{x^{2} + x + 1}}\,\dd x \\[2mm] - &\ \mrm{P.V.}\int_{0}^{\infty}{\ln^{2}\pars{x} + 4\pi\ic\ln\pars{x} - 4\pi^{2} \over \pars{x - 1}\pars{x^{2} + x + 1}}\,\dd x + {4\pi^{3} \over 3}\,\ic \\[5mm] = &\ -4\pi\ic\int_{0}^{\infty}{\ln\pars{x} \over x^{3} - 1}\,\dd x + 4\pi^{2}\ \underbrace{\mrm{P.V.}\int_{0}^{\infty}{\dd x \over x^{3} - 1}}_{\ds{-\,{\root{3} \over 9}\,\pi}}\ +\ {4\pi^{3} \over 3}\,\ic \end{align} \begin{align} &\int_{0}^{\infty}{\ln\pars{x} \over x^{3} - 1}\,\dd x \\[2mm] = &\ {-\,\pars{4/27}\pars{3\root{3} - 5\ic}\pi^{3} - 4\pi^{2}\pars{-\root{3}\pi/9} - 4\pi^{3}\ic/3 \over -4\pi\ic} \\[2mm] & = \bbx{4\pi^{2} \over 27} \approx 1.4622 \end{align}
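The closed form can also be checked numerically. The integrand $\ln(x)/(x^3-1)$ has only a removable singularity at $x=1$ (the limit there is $1/3$), so a plain midpoint rule works; truncating at $X$ drops a tail of size $O(\ln X/X^2)$. A rough check in Python:

```python
import math

def integrand(x):
    if abs(x - 1) < 1e-8:
        return 1 / 3  # removable singularity: lim_{x->1} ln(x)/(x^3 - 1) = 1/3
    return math.log(x) / (x ** 3 - 1)

N, X = 200_000, 40.0          # midpoint rule on [0, X]
h = X / N
est = h * sum(integrand((k + 0.5) * h) for k in range(N))
# est should sit close to 4*pi^2/27 ≈ 1.4622 (tail beyond X is about 1.3e-3)
```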
{ "language": "en", "url": "https://math.stackexchange.com/questions/3937683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Right tail of log-concave densities. Consider any density $g$ with support $(a,b)$ and $-\infty<a<b<+\infty$. Is it possible to have $g$ log-concave on $(a,b)$ even if $\lim_{q\uparrow b}g(q)=+\infty$? An (1995, 1998) proves that the right tail of a log-concave density is at most exponential, but the proof seems to assume $(a,b)=\mathbb{R}$. It should hold for, say, $(a,b)=(0,1)$, right?
If $a,b$ are finite, then random variables distributed according to $g$ are bounded. Bounded random variables are subgaussian. Subgaussian random variables are subexponential. So then, using the claim cited in your question, yes it is possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3937877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of $n^{\frac{1}{n}}$. We have to prove that $n^{\frac{1}{n}}$ converges to $1$. I have proved it using the binomial theorem where we can substitute $(1+t)$ in place of $n$ and proceed forward. However along with the question another approach was mentioned where we can make use of the Monotone Convergence Theorem. For this approach, I proved that $a_n=\left(1+\frac{1}{n}\right)^n$ is increasing and using $a_n<a_{n+1}$ I proved that $n^{\frac{1}{n}}$ is decreasing after the third term. Also it is bounded as all terms are greater than $0$. So by MCT it should be convergent. But I am not able to evaluate the limit and am only able to prove its existence here. Please help.
$$a_n = n^{1/n} \implies \ln a_n = \frac{\ln n}{n} \implies \lim\ln a_n = 0 \implies \ln \lim a_n = 0 \implies \lim a_n = e^0 = 1$$
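The limit (and the monotonicity proved in the question) is easy to verify numerically:

```python
# n^(1/n) is strictly decreasing from n = 3 onward and tends to 1
vals = [n ** (1.0 / n) for n in range(3, 2000)]
decreasing = all(a > b for a, b in zip(vals, vals[1:]))
```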
{ "language": "en", "url": "https://math.stackexchange.com/questions/3938007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
Is my method of solving equation correct? The problem in question is $$\sqrt[5]{16+\sqrt{x}}+\sqrt[5]{16-\sqrt{x}}=2$$ using $$a+b=2$$ where $a=\sqrt[5]{16+\sqrt{x}}$ and $b=\sqrt[5]{16-\sqrt{x}}$ $$(a+b)^5=32$$ $$(a+b)^2(a+b)^3=32$$ $$a^5+5a^4b+10a^3b^2+10a^2b^3+5ab^4+b^5=32$$ $$a^5+b^5+5ab(a^3+b^3)+10a^2b^2(a+b)=32$$ $$a^5+b^5+5ab\biggl(\frac{32}{(a+b)^2}\biggr)+10a^2b^2(a+b)=32$$ Got $\frac{32}{(a+b)^2}$ from the fact that $(a+b)^2(a+b)^3=32$ and $a+b=2$ $$a^5+b^5+5ab\biggl(\frac{32}{(2)^2}\biggr)+10a^2b^2(2)=32$$ $$a^5+b^5+40ab+20a^2b^2=32$$ From when I defined a and b earlier, I substitute and get $$\left(\sqrt[5]{16+\sqrt{x}}\right)^5+\left(\sqrt[5]{16-\sqrt{x}}\right)^5+40\sqrt[5]{\left(16-\sqrt{x}\right)\left(16+\sqrt{x}\right)}+20\sqrt[5]{\left(16-\sqrt{x}\right)^2\left(16+\sqrt{x}\right)^2}=32$$ $$\require{cancel}\cancel{16}\cancel{+\sqrt{x}}+\cancel{16}\cancel{-\sqrt{x}}+40\sqrt[5]{256-x}+20\sqrt[5]{\left(256-x\right)\left(256-x\right)}=\cancel{32} 0$$ $$40\sqrt[5]{256-x}+20\sqrt[5]{\left(256-x\right)\left(256-x\right)}=0$$ $$20\biggl(2\sqrt[5]{256-x}+\sqrt[5]{\left(256-x\right)\left(256-x\right)\biggr)}=0$$ Then let $u=\sqrt[5]{256+{x}}$, $$20(2u+u^2)=0$$ $$u(u+2)=0$$ $$u=0,-2$$ Substituting u to get x from $u=\sqrt[5]{256+{x}}$, I get $$x=\cancel{-288},256$$ However, since the original equation has a $\sqrt{x}$, which can't be negative, I eliminate $x=-288$, leaving just $$x=256$$ as my answer. So, this is how I arrived on my answer. Did I perform any mathematical errors or any illegal mathematical maneuvers? Please let me know. Thank you!
When you went from : $$a^5+b^5+5ab(a^3+b^3)+10a^2b^2(a+b)=32$$ to $$a^5+b^5+5ab\biggl(\frac{32}{(a+b)^2}\biggr)+10a^2b^2(a+b)=32$$ You incorrectly assumed that $$a^3 + b^3 = (a+b)^3 = \biggl(\frac{32}{(a+b)^2}\biggr)$$ Therefore you cannot say that $$a^3 + b^3 = \frac{32}{2^2}$$
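Despite the invalid step, the final value happens to satisfy the original equation, which is worth verifying directly (Python):

```python
x = 256
s = x ** 0.5                              # sqrt(x) = 16
lhs = (16 + s) ** 0.2 + (16 - s) ** 0.2   # fifth roots: 32^(1/5) + 0^(1/5)
# lhs should equal 2, so x = 256 is indeed a solution of the equation
```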
{ "language": "en", "url": "https://math.stackexchange.com/questions/3938150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 1 }
Group homomorphism between quotient groups. Let $G$ and $H$ be two groups and $M, N$ be normal subgroups of $G$ and $H$ respectively. Consider a relation $\phi: G \to H$ be a homomorphism. Now, my question is what is the necessary condition on $\phi$ such that it induces a homomorphism between $\frac{G}{M}$ and $\frac{H}{N}.$
The map $\pi\circ \phi\colon G\to H/N$ factors through $G/M$ if and only if $M\subseteq \mathrm{ker}(\pi\circ\phi)$. The kernel of $\pi\circ\phi$ is $\phi^{-1}(\ker(\pi)) = \phi^{-1}(N)$. So a necessary and sufficient condition is $M\subseteq \phi^{-1}(N)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3938314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Check polynomial $p_n(x)$ is always divisible by $x^2 - 2x\cos (t) +1$ For any real $t$ and $n \in\mathbb{N}$, I want to show that $p_n(x)=x^{n+1}\cos [(n-1)t] - x^n\cos (nt) - x\cos (t) +1$ is divisible by $q(x) = x^2 - 2x\cos (t) +1$. As a hint I have: complex numbers. I don't know how to attack the problem using complex numbers. In the case $n=1$, $p_1(x)=q(x)$. Also I compute and check by hand the case $n=2$. Then I start thinking about induction, but the idea failed. Any idea how this can be solved via complex numbers? Thank you in advance.
Since $q(x)=x^2-2x\cos(t)+1=(x-z)(x-z^{-1})$ with $z:=e^{it}$, divisibility by $q$ is equivalent to $z^{\pm1}$ being roots of $p_n$. We only need check $\pm=+$, since $p_n$ has real coefficients, so its nonreal roots come in conjugate pairs. So just simplify$$p_n(z)=z^{n+1}\frac{z^{n-1}+z^{1-n}}{2}-z^n\frac{z^n+z^{-n}}{2}-z\frac{z+z^{-1}}{2}+1=\frac{z^{2n}+z^2}{2}-\frac{z^{2n}+1}{2}-\frac{z^2+1}{2}+1=0.$$
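A quick numerical confirmation (Python) that $e^{\pm it}$ are roots of $p_n$, hence that $q$ divides $p_n$:

```python
import cmath
import math

def p(n, t, z):
    # p_n evaluated at a complex point z
    return (z ** (n + 1) * math.cos((n - 1) * t)
            - z ** n * math.cos(n * t)
            - z * math.cos(t) + 1)

residuals = []
for n in (1, 2, 5, 13):
    for t in (0.3, 1.1, 2.7):
        z = cmath.exp(1j * t)
        residuals.append(abs(p(n, t, z)))              # z = e^{it}
        residuals.append(abs(p(n, t, z.conjugate())))  # z = e^{-it}
```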
{ "language": "en", "url": "https://math.stackexchange.com/questions/3938559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many invertible elements are there in $\mathbb {Z_ {990}}$ and $\mathbb {Z_ {1060}}$. How many invertible elements are there in $\mathbb {Z_ {990}}$ and $\mathbb {Z_ {1060}}$ . Justify your answer. Hello, could someone explain to me how to find the invertible elements of a set as large as $\mathbb {Z_ {990}}$ and $\mathbb {Z_ {1060}}$ I thought about putting together the set and trying each one but it is very big and it can take forever
I will explain how to find the invertible elements of $\mathbb{Z}_n$. An element $a$ is invertible in $\mathbb{Z}_n$ if and only if there exists $b\in\mathbb{Z}_n$ such that $a\cdot b=1$, in other words $a\cdot b\equiv 1\pmod{n}$. $0$ is clearly not invertible. Observe that if $a$ is not relatively prime with $n$, then this is impossible, as $a\cdot b$ will always be divisible by $\gcd(a,n)$, so this would mean $\gcd(a,n)\mid 1$. So all elements not relatively prime with $n$ are not invertible. What about the elements of $\mathbb{Z}_n$ which are relatively prime to $n$? Well, let $a$ be one of them. Take the set $\{a,2a,3a,...,(n-1)a\}$. Suppose $ia\equiv ja$ for some $i$ and $j$. This means $a\cdot|i-j|$ is divisible by $n$, so because $a$ is relatively prime to $n$, then $|i-j|$ is divisible by $n$; but $n>|i-j|$, so $|i-j|=0$, so $i=j$. Thus, all elements of $\{a,2a,3a,...,(n-1)a\}$ are distinct $\pmod{n}$ and there are no elements divisible by $n$ in the above set (because $a\neq 0$ and $a$ and $n$ are coprime), so in the above set there must be an element $\equiv 1\pmod{n}$, thus $a$ is invertible. So the invertible elements in $\mathbb{Z}_n$ are all the numbers smaller than $n$ which are coprime to $n$. The number of such elements is $\varphi(n)$. In particular, $\varphi(990)=\varphi(2\cdot3^2\cdot5\cdot11)=1\cdot6\cdot4\cdot10=240$ and $\varphi(1060)=\varphi(2^2\cdot5\cdot53)=2\cdot4\cdot52=416$.
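A small totient routine (Python) cross-checks the counts, giving $\varphi(990)=240$ and $\varphi(1060)=416$:

```python
def euler_phi(n):
    # Euler's totient via trial-division factorization:
    # phi(n) = n * prod_{p | n} (1 - 1/p)
    result, p = n, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:              # leftover prime factor
        result -= result // n
    return result
```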
{ "language": "en", "url": "https://math.stackexchange.com/questions/3938861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Name for vector without sign/sense? (Just modulus and direction) If vectors are the equivalence class of oriented segments, what is the name for the equivalence class of unoriented segments? These "unsigned vectors" would have a modulus (or norm) and a direction but no sense/sign. An example of a use for such entities is the cross product: if we don't know the orientation/handedness of the space, we can "unify" the two possible answers as a single "unsigned vector".
I do not know of any special name for this concept. In the context of Graph theory there is a clear distinction between directed and undirected graphs. The edges are sometimes known as "lines". Thus, in graph theory, both undirected lines and directed lines appear on an equal footing. Then, depending on context, a "line" is assumed to be either directed or undirected. In the context of projective space the fundamental objects are "points" which are defined to be equivalence classes of vectors that are said to be related if they are nonzero scalar multiples of each other. In your case the relation is much more restricted and only a vector and its negative are related. Better is the context of convex sets in Euclidean space. Any two distinct points determine the set of convex linear combinations of the two points. This is the line segment with the two given points as the end points of the segment. The displacement from one of the endpoints to the other is a vector in the vector space associated to the Euclidean space. Of course, the displacement going the other way is the negative of the first vector. Despite this, I don't know of any special name for the equivalence class of a vector and its negative. However, in the specific case that you mention of the cross product of two vectors, there is the concept of pseudovector. The Wikipedia article states: In three dimensions, a pseudovector is associated with the curl of a polar vector or with the cross product of two polar vectors It also states: A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and angular velocity. In mathematics, pseudovectors are equivalent to three-dimensional bivectors, from which the transformation rules of pseudovectors can be derived. Thus, the reason for confusion about the status of the cross product is that the result is better thought of as a bivector and not as an actual ordinary or polar vector.
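The distinguishing behavior of a pseudovector can be seen concretely: under a reflection $M$ with $\det M=-1$, ordinary (polar) vectors transform as $v\mapsto Mv$, while a cross product picks up an extra sign, $(Ma)\times(Mb)=\det(M)\,M(a\times b)$. A small numerical illustration (Python, arbitrary sample vectors):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

refl = lambda v: (v[0], v[1], -v[2])   # reflection in the xy-plane, det = -1

a, b = (1.0, 2.0, 3.0), (-4.0, 0.5, 2.0)
lhs = cross(refl(a), refl(b))
rhs = tuple(-c for c in refl(cross(a, b)))   # det(M) * M(a x b), det = -1
```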
{ "language": "en", "url": "https://math.stackexchange.com/questions/3939020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Matrix operator $p$-norm and eigenvalues Denote by $\|\cdot\|_p$ the operator $p$-norm for matrices in $\mathcal M_{d\times d}(\mathbb R)$. Let $a_n \to 0$ be a sequence of positive real numbers, and $\mathbf A$ a positive definite matrix. We know that when $p=2$, $\|\mathbf I-a_n \mathbf A \|_2 = 1-a_n \lambda_{\min}(\mathbf A)$ when $n$ is large. Question: When $1<p<2$ or $p>2$, is there a constant $K$, such that $$ \|\mathbf I-a_n\mathbf A \|_p \leqslant 1-Ka_n $$ holds for large $n$?
I finally solved this problem in my paper. Behold the preprint of it (see Section 3.1 onwards): https://arxiv.org/abs/2102.10346
{ "language": "en", "url": "https://math.stackexchange.com/questions/3939399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What do these mean: $p\,dv$, $v\,dp$, $d(pv)$? I am a bit new to calculus. If we have two variables $p$ and $v$, $$p\,dv \qquad v\,dp \qquad d(pv)$$ what do the above quantities represent or mean?
You can think of it as the relation between the changes of two quantities as we put some 'perturbation' into the system. It is more transparent for finite changes: $$ \Delta (PV) = V \Delta P + P \Delta V + \Delta P \Delta V$$ Now, suppose we make the magnitude of the changes very small, $(\Delta P , \Delta V) \to (0,0)$; then: $$ d(PV) = V dP + P dV$$ This is useful for relating the 'change' in thermodynamic variables. Illustration Suppose we are dealing with an ideal gas and $PV= nRT$, assuming constant moles: $$ nR dT = V dP + PdV$$ Suppose we have a reversible isothermal process, then $ dT = 0$, $$ V dP = - P dV$$ Or, $$ \frac{-V}{P} = \frac{dV}{dP}$$ Since volume is always greater than zero and pressure is greater than zero, it is easy to see that as we increase the volume, $ \frac{-V}{P}$ becomes more negative, so the larger the volume, the more steeply it drops for a given increase in pressure. Indeed, we have found the partial derivative of volume with respect to pressure at constant temperature: $$ \frac{-V}{P} = \left(\frac{\partial V}{\partial P}\right)_T$$ tl;dr: the relation between 'small changes' in a system helps us understand how the system responds to 'small disturbances'. E.g.: if I give a small disturbance in pressure, how does my volume change if the process is reversible isothermal? Edit: Oops! flipped the fraction by accident.
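The finite-change identity $\Delta(PV)=V\Delta P+P\Delta V+\Delta P\,\Delta V$ and its infinitesimal limit can be seen numerically: for small perturbations the cross term $\Delta P\,\Delta V$ becomes negligible. A quick check (Python, arbitrary illustrative values):

```python
P, V = 2.0, 3.0
dP, dV = 1e-6, 2e-6

exact = (P + dP) * (V + dV) - P * V   # the true change in the product PV
approx = V * dP + P * dV              # the differential d(PV)
# the discrepancy is the second-order term dP * dV = 2e-12, negligible
```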
{ "language": "en", "url": "https://math.stackexchange.com/questions/3939608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Represent a function as a difference of two convex functions I know that if $f:\mathbb{R}\to\mathbb{R}$ is $C^2$ it can be rewritten as a difference of two convex functions. I would like to know whether this property can be extended to functions whose domain is $\mathbb{R}^n$.
It is possible, for example: If $f \in C^2(D,\mathbb{R})$ and $D$ is a non-empty, convex and compact subset of $\mathbb{R}^n$, then $f$ is a dc-function, i.e. the difference of two convex functions. The proof can't be done the same way as in $\mathbb{R}$. Note that in this statement you need a compact subset. Here is a reference (Sanjo's answer): https://math.stackexchange.com/a/843020/797553 For a proof use $g(x)=f(x) + \rho/2 \cdot x^Tx$ and $h(x)=\rho/2 \cdot x^Tx$, where $\rho=\left|\min \{\lambda_{\min}(\operatorname{Hess}f(x)): x\in D\}\right|$. Then $f=g-h$. Note that $\rho$ exists only because $D$ is compact. EDIT: For $D=\mathbb{R}^n$ the statement is still true. (see [Konno H., Thach P.T., Tuy H. (1997) D.C. Functions and D.C. Sets. In: Optimization on Low Rank Nonconvex Structures. Nonconvex Optimization and Its Applications, vol 15. Springer, Boston, MA]). In this book you'll also find a proof for the statement above.
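A one-dimensional illustration of the same construction (an assumed example, not taken from the references): for $f(x)=\sin x$ on a compact interval, $\min f''=-1$, so $\rho=1$ makes $g(x)=f(x)+\frac\rho2x^2$ convex, and $f=g-h$ with the convex $h(x)=\frac\rho2x^2$. Checking convexity of $g$ via its second derivative on a grid (Python):

```python
import math

rho = 1.0   # = |min f''| for f = sin
# g''(x) = -sin(x) + rho should be >= 0 everywhere on the grid
min_g2 = min(-math.sin(2 * math.pi * k / 999) + rho for k in range(1000))
```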
{ "language": "en", "url": "https://math.stackexchange.com/questions/3939713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving Prop 86 Euclid's Data I was given the following problem: Solve the equations of Prop-86 of the Data algebraically. Show that the two hyperbolas defined by the equations each have their axes as the asymptotes of each other. All I can see about equation of Prop-86 is the system: $$xy=a\\\frac{y^2-b}{x^2}=\alpha$$ I don't understand what to do here. Am I supposed to solve this system of equations? *Edit: This is the work I did. I understand all of it, until the end where I stated: "Therefore, two hyperbolas as defined have their axes as the asymptotes of the other." I wrote that, because based on the question I know this will be the conclusion, but I have no idea how that result actually came to be.
To me they appear to be two separate, unconnected problems mentioned together, not one system. The first set/type is inclined to the axes and the second type/set is along the coordinate axes; that is all they have in common. Plots of $$ \alpha x^2- y^2= \pm b, \;( \alpha= 3, b= \dfrac12) $$ show the desired pair of hyperbolas. Similarly, in another statement of the same proposition, plots of $$ xy= \pm b, \;( b= \dfrac12) $$ show two different hyperbolas nested between (sharing) the same asymptote pair.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3939837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Basic Geometric Series Question-Stuck I'm studying Calc 2 and I have a basic series question. A geometric series $\sum _{n=0}^{\infty }\:Ar^n$ is convergent if |r|<1 and the sum equals $\frac{a}{1-r}$ if the series is convergent. Question: $\sum _{n=1}^{\infty }\:\frac{5}{\pi ^n}=-\frac{5}{\pi }+\sum _{n=0}^{\infty \:}\:\frac{5}{\pi ^n}$ and $\sum _{n=0}^{\infty \:}\:\frac{5}{\pi ^n}$ totals $\frac{5}{1-\frac{1}{\pi }}=\frac{5\pi \:}{\pi \:-1}$. Therefore, $\sum _{n=1}^{\infty }\:\frac{5}{\pi ^n}=-\frac{5}{\pi }+\sum _{n=0}^{\infty \:}\:\frac{5}{\pi ^n}$ = $\frac{5\pi \:}{\pi \:-1}\:-\frac{5}{\pi }$. I know this is wrong per my textbook, but after an hour working on this problem I cannot figure out my error.
I know this is wrong per my textbook, but after an hour working on this problem I cannot figure out my error. $-\dfrac5{\pi^0}=-5$, because $\pi^0=1$.
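Numerically, the corrected value checks out: subtracting the $n=0$ term $5/\pi^0=5$ (rather than $5/\pi$) from $\frac{5\pi}{\pi-1}$ gives $\frac5{\pi-1}\approx2.3347$, which matches the partial sums (Python):

```python
import math

partial = sum(5 / math.pi ** n for n in range(1, 60))
closed = 5 * math.pi / (math.pi - 1) - 5   # subtract the n = 0 term, which is 5
```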
{ "language": "en", "url": "https://math.stackexchange.com/questions/3940302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$ 3 \sum^{n}_{k=1} \sqrt{k} \geq 2n \sqrt{n} + 1, n \in \mathbb{N} \setminus \{0\}$ Induction proof. How to separate multiplication. I have a problem with induction proofs like this one: $$ 3 \sum^{n}_{k=1} \sqrt{k} \geq 2n \sqrt{n} + 1, n \in \mathbb{N} \setminus \{0\}$$ In the base case it works fine (for $n = 1$): $$3 \cdot \sqrt{1} \geq 2 + 1 \implies 3 = 3$$ Thesis ~ as above. Now for $n + 1$: $$ 3 \sum^{n+1}_{k=1} \sqrt{k} \geq 2(n+1) \sqrt{n+1} + 1$$ $$ 3 \sum^{n}_{k=1} \sqrt{k} + 3\sqrt{n+1} \geq 2n\sqrt{n+1} +1 + 2\sqrt{n+1}$$ And here is my problem: I don't know how to separate $1$ from $n$ in $2n\sqrt{n+1}$ on the left hand side. On the right hand side of the inequality I got a sum, which is good, but I cannot separate that multiplication on the right. I thought about: * *using Bernoulli's inequality, but then I "lose" the square root and cannot use my induction thesis. *changing $1$ under the square root for $1$ outside, but then the inequality doesn't work - the right hand side is too big *putting both sides of the inequality in $e^{\ln\{\}}$, but it doesn't work either I don't know what to do, and it is not the first time I have had exactly the same problem in induction proofs. Can anybody help me?
For many problems of this kind, once you prove the base case you only need to compare the increment from $n$ to $n+1$ on both sides, $$3\sqrt{n+1} \ge 2((n+1)\sqrt{n+1}-n\sqrt n) \\ \iff 2n\sqrt{n} \ge (2n-1)\sqrt{n+1}\\ \iff 4n^3 \ge (2n-1)^2(n+1)=4n^3-3n+1\\ \iff 3n\ge 1$$ which is true. One added benefit is this can be easily converted to a non-induction, telescopic proof.
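A brute-force check of the inequality for the first few thousand $n$ (Python):

```python
import math

s, ok = 0.0, True
for n in range(1, 5001):
    s += math.sqrt(n)                         # running sum of sqrt(k)
    ok = ok and (3 * s >= 2 * n * math.sqrt(n) + 1)
```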
{ "language": "en", "url": "https://math.stackexchange.com/questions/3940449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Does a given positive measure set in $\mathbb{R}^2$ contains product of two 1 dimensional positive measure set? In $\mathbb{R}^2$ consider the 2 dimensional positive Lebesgue measure set, $$D=\{re^{it}:t\in[0,2\pi),r\in[0,1]\setminus\mathbb{Q}\} . $$ Does there exist sets $A,B\subset\mathbb{R}$ of 1 dimensional positive Lebesgue measure such that $$A\times B\subset D?$$ Note: The above is not true if $$D=\{(x,y)\in [0,1]\times [0,1]:x-y\notin\mathbb{Q}\}.$$
No, if there were two such sets $A$, $B$, then with the map $(x,y) \mapsto (x^2, y^2)$ ($t \mapsto t^2$ is bi-Lipschitz on compacts inside $(0, \infty)$), it would transform into $A_1$, $B_1$ of non-zero measure inside $\{(x,y)\ | \ x+y \not \in \mathbb{Q}^2\}$, which is not possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3940824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
GCD identity for univariate polynomials I would like to know if my solution to the following question concerning the gcd of two polynomials is correct. Question: Suppose $f(x)$, $g(x)$ and $h(x)$ are polynomials in $F[x]$. Let $gcd(f(x),g(x)h(x))=1$. Show that $gcd(f(x),g(x))=gcd(f(x),h(x))=1$ Proof: Since $gcd(f(x),g(x)h(x))=1$, there exist polynomials $u(x)$ and $v(x)$ in $F[x]$ where $f(x)u(x)+(g(x)h(x))v(x)=1$. $(*)$ Applying associativity of multiplication in $(*)$ gives $(g(x)h(x))v(x)=g(x)(h(x)v(x))$, then $f(x)u(x)+(g(x)h(x))v(x)=f(x)u(x)+g(x)(h(x)v(x))=1$, which implies $gcd(f(x), g(x))=1$. Applying commutativity to $(*)$, then again associativity of multiplication to $(*)$, gives $(g(x)h(x))v(x)=(h(x)g(x))v(x)$ and $(h(x)g(x))v(x)=h(x)(g(x)v(x))$ in $(*)$, which implies that $f(x)u(x)+(g(x)h(x))v(x)=f(x)u(x)+(h(x)g(x))v(x)=f(x)u(x)+h(x)(g(x)v(x))=1$, which gives $gcd(f(x), h(x))=1$ also. Hence we have $gcd(f(x), g(x))=gcd(f(x), h(x))=1$. Thank you in advance
The gcd $(a,b)\,$ is characterized by the following fundamental gcd Universal Property $$ c\mid a,b\,\color{#0a0}\Leftarrow\!\color{#c00}\Rightarrow\, c\mid (a,b)\quad \text{[gcd Universal Property]}\qquad$$ For $\,c = (a,b)\,$ arrow $\color{#0a0}{(\Leftarrow)}$ shows $\,(a,b)\mid a,b,\,$ i.e. $\,(a,b)\,$ is a common divisor of $\,a,b,\,$ and the opposite arrow $\color{#c00}{(\Rightarrow)}$ shows that $\,(a,b)\,$ is divisible by every common divisor of $\,a,b,\,$ so $\,(a,b)\,$ is a greatest (degree) common divisor (unique if we normalize it to be monic, i.e. lead coef $=1).\,$ Therefore generally $\,(f,g)\mid f,g\,\Rightarrow\, (f,g)\mid f,gh\,\color{#c00}\Rightarrow\, \color{#90f}{(f,g)}\mid \color{#0a0}{(f,gh)}$. Further, in your particular case we are given that $\,\color{#0a0}{(f,gh)\!=1}\,$ thus $\color{#90f}{(f,g)}\! =\! 1$ Your use of the Bezout identity to deduce this is essentially repeating the Bezout-based proof of the gcd Universal Property in this particular case. Instead you should abstract out and prove this basic property (whose proof is easier), then invoke it as above (by name). This will work more generally in rings where there is no Bezout identity (e.g. $\,\Bbb Z[x]\,$ or $\,\Bbb Q[x,y]\:\!)\,$ but the gcd universal property still holds true (it is the definition of the gcd in general domains).
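The same divisibility argument can be sanity-checked over the integers, another gcd domain, where $\gcd(f,g)\mid\gcd(f,gh)$ always holds, so $\gcd(f,gh)=1$ forces $\gcd(f,g)=\gcd(f,h)=1$ (Python):

```python
import math
import random

random.seed(0)
all_ok = True
for _ in range(1000):
    f, g, h = (random.randrange(1, 500) for _ in range(3))
    # gcd(f, g) divides both f and g*h, hence divides gcd(f, g*h)
    all_ok &= math.gcd(f, g * h) % math.gcd(f, g) == 0
    if math.gcd(f, g * h) == 1:
        all_ok &= math.gcd(f, g) == 1 and math.gcd(f, h) == 1
```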
{ "language": "en", "url": "https://math.stackexchange.com/questions/3941038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
prove the following linear functional is not induced by a measure Let the linear functional $I$ be defined on $C^1_c(\mathbb{R})$ as follows: $$I(f) = \frac{df}{dx}(0)$$ Prove it is not induced by a measure, or even a signed measure; that is, no such measure makes $I(f) = \int f\,d\mu = \frac{df}{dx}(0)$. I tried to use a sequence of functions $f_n$ such that $\frac{df_n}{dx}(0) \to \infty$ while the total mass $\int f_n\,d\mu \to 0$. If such a measure existed, it could not be the Lebesgue measure, since we can construct such a function; but can this hold for some other measures?
Assume that there is some (positive) measure $\mu$ that induces this functional. Clearly $\mu\ne0$. Take $f_n(x)$ a function such that * *$f_n(x)=1$ if $x\in[-n,n]$. *$f_n(x)=0$ if $x\notin [-n-1,n+1]$. *$f_n\in{\cal C}^1(\Bbb R)$. *$0\le f_n(x)\le 1$ for all $x\in\Bbb R$. Then $I(f_n)=f_n'(0)=0$, obviously. On the other hand, $I(f_n)=\int_{\Bbb R} f_n\,d\mu \ge \int_{(-n,n)} f_n \,d\mu=\int_{(-n,n)}d\mu=\mu(-n,n)$. So $\mu(-n,n)=0$ for all $n\in\Bbb N$; and since $\Bbb R=\bigcup_n (-n,n)$, by subadditivity, $\mu(\Bbb R)\le \sum_n \mu(-n,n)=0$, so $\mu=0$, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3941281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Choosing a number from infinite numbers Imagine an infinite set of, let's say, natural numbers. I choose one of these infinitely many numbers at random. Let's call that number $n$. If I then choose another number, can it, theoretically, be the same number $n$?
I would have loved to put this in a comment, but I can't because I am new here. I just wanted to say that since you picked that particular number the first time, it has a non-zero probability of being picked. Thus it definitely can be picked again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3941605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Differentiability of a function $f(z)$ in the complex vs reals I am quite confused about the differentiability of a function in the complex plane vs the real numbers. Consider the function $f(z) = |z|^2$ for $z \in \mathbb{C}$. Using the definition of a derivative, and taking the limit as $\Delta z \rightarrow 0$, we can see that it is only differentiable at $z = 0$. Elsewhere, the limits are not unique, so they do not exist. However, if we consider a similar function $F(x) = |x|^2$ for $x \in \mathbb{R}$, then $F'(x) = 2x$ and exists everywhere in $\mathbb{R}$. We know that $\mathbb{C}$ is a field extension of $\mathbb{R}$, so $\mathbb{R} \subset \mathbb{C}$. The domain of $F(x)$ is then a subset of the domain of $f(z)$. Why isn't $f(z)$ instead differentiable for all $z = x + i0$? Likewise for a similar function, $g(z) = \operatorname{Re}(z)$. $g(z)$ is nowhere differentiable in $\mathbb{C}$, yet $G(x) = x$ is differentiable everywhere in $\mathbb{R}$. Why is this the case? Is it due to the limit being used in the definition of a derivative? The limit requires that $|f(z) - f(z_0)| < \epsilon$, which means that $f(z)$ must approach $f(z_0)$ as $z \rightarrow z_0$ independent of direction for the limit to exist. If we were only working with the reals, the limit as $x \rightarrow x_0^+$ must equal that of $x \rightarrow x_0^-$ for the limit to exist, but in the complex plane, the limit must be equal as $z \rightarrow z_0$ from all sides of $z_0$, which is why the derivative for a function in $\mathbb{R}$ can exist, but not a similar one in $\mathbb{C}$? Does this mean the following? Given two similar functions $f(z)$ and $F(x)$ acting on $\mathbb{C}$ and $\mathbb{R}$, respectively, let $f(z=x+iy) = u(x,y) + iv(x,y)$, $u(x,y) = F(x)$, and $z_0 = x_0 + iy_0$; then $$f(z) \text{ differentiable at point } z_0 \in \mathbb{C} \text{ and } u'(z) \text{ exists at } z_0 \implies F(x) \text{ differentiable at } x_0 \in \mathbb{R}?$$
Why isn't $f(z)$ instead differentiable for all $z = x + i0$? Because the existence of $\lim\limits_{z \to z_0}\frac{f(z)-f(z_0)}{z-z_0}$ is a much stronger requirement than the existence of $\lim\limits_{x \to x_0}\frac{f(x+i0)-f(x_0+i0)}{x-x_0}$. Intuitively, the first limit involves points in "two dimensional neighborhood" of $z_0$ while the second one only involves points lying on a line. $$f(z) \text{ differentiable at point } z_0 \in \mathbb{C} \text{ and } u'(z) \text{ exists at } z_0 \implies F(x) \text{ differentiable at } x_0 \in \mathbb{R}?$$ The answer is yes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3942008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Definition of odd topological K-theory using circles I wanted to check whether the following characterization of odd complex topological $K$-theory is correct. Let $X$ be a compact Hausdorff space. Then $K^{-1}(X)$ can be defined as $\tilde{K}^0((X\times\mathbb{R})^+)$, where $^+$ means one-point compactification. Let $S^1\subseteq\mathbb{C}$ be the unit circle. Since $(X\times\mathbb{R})^+$ is homeomorphic to $(X\times S^1)/(X\times\{1\})$, the following question arises. Question: Can we say that an element of $\tilde{K}^0((X\times\mathbb{R})^+)$ is a difference of vector bundles $E$ and $F$ over $X\times S^1$ such that $E$ and $F$ are trivial over $X\times\{1\}$ and the virtual vector bundle $E-F$ has virtual dimension $0$ over $X\times\{1\}$?
I am not sure what the virtual dimension is, but I think your point may be essentially correct. In general, we have the isomorphism $$K^0(X\times S^1)\cong K^1(X)\oplus K^0(X)$$ and hence we can define $K^1(X)=\ker(K^0(X\times S^1)\rightarrow K^0(X))$. It is exactly the first definition in the original paper Vector bundles and homogeneous spaces of Atiyah--Hirzebruch for topological $K$-theory. Here the map $$K^0(X\times S^1)\rightarrow K^0(X)$$ is given by the pullback along $i:X\times\{1\}\hookrightarrow X\times S^1$. By construction, we know that pullback along a subspace is exactly restriction, i.e. $i^*E=E|_{X\times\{1\}}$ for any vector bundle $E$. In the ring $K^0(X\times S^1)$, every element is a difference $[E]-[F]$, and it is mapped to $[i^*E]-[i^*F]$ in $K^0(X)$. The image is trivial if and only if $[i^*E]=[i^*F]$, or equivalently the restrictions $E|_{X\times\{1\}}$ and $F|_{X\times\{1\}}$ are stably equivalent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3942270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove the following and use it to evaluate the integral: I want to prove that:$$\int_{-\infty}^\infty f(x)dx=\int_{-\infty}^\infty f\left(x-\frac1x\right)dx$$ And use the result of this proof to evaluate:$$\int_{-\infty}^\infty\frac{x^2}{x^4+1}dx$$
Here is how to apply it to $$\int_{-\infty}^\infty\frac{x^2}{x^4+1}dx = \int_{-\infty}^\infty\frac{1}{(x-\frac1x)^2+2}dx = \int_{-\infty}^\infty\frac{1}{x^2+2}dx=\frac\pi{\sqrt2} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3942446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Uniqueness of a Fixed Point Let $D(0,r) = \{ x \in \mathbb{R}^n \mid \| x \| \le r \}$ and $f: D(0,r) \to \mathbb{R}^n$ be a map with $$ \text{a)}\ \| f(x) - f(y) \| \le \tfrac13 \| x - y \|, \qquad \text{b)}\ \| f(0) \| \le \tfrac23 r. $$ I want to determine if there is a unique fixed point of $f$. I think all I need to do is show that $f$ maps $\text{cl}(D(0,r))$ into itself. Then $f$ obviously satisfies the hypotheses of the contraction mapping principle, but I'm having difficulty showing $f$ maps $\text{cl}(D(0,r))$ into itself. Can someone give me a hint? Thanks! EDIT: I just came up with a way of showing it. $$ \| f(x) \| \le \| f(x) - f(0) \| + \| f(0) \| \le \frac13 \| x \| + \frac23 r \le r $$ So $f$ maps $D(0,r)$ to itself. So by the contraction mapping principle, $f$ has a unique fixed point. Can someone verify my proof?
First note that $\text{cl}(D(0,r)) = D(0, r)$ because according to your definition, $D(0,r) = \{ x \in \mathbb{R}^n \mid \; \rVert x \rVert \le r \}$ is already closed. Now if $x \in D(0, r)$ then $\Vert x \Vert \le r$ and therefore $$ \Vert f(x) \Vert \le \Vert f(x) - f(0) \Vert + \Vert f(0) \Vert \\ \le \frac 1 3 \Vert x - 0 \Vert + \frac 23 r \le r $$ so that $f(x) \in D(0, r)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3942637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Nonlinear Implicit Finite Difference How do you produce an implicit finite difference system with a nonlinear term in the pde? For example, if you have the reaction-diffusion equation: $$\frac{\partial{u}}{\partial{t}} = \Delta u + f(u)$$ with $f$ being non-linear, e.g. $f(u) = u^2$, how do you form a finite difference system to solve? I understand how to derive if $f(u)=0$. For example in a 1D setting, you have the form: $$w_{i}^{n-1} = (1+2\lambda)w_{i}^n - \lambda(w_{i+1}^n + w_{i-1}^n)$$ producing a system in the form: $$Aw^n=w^{n-1}$$
If you plan to use the so-called "Method of Lines", i.e. first you discretize in space (in this case you obtain the classical $[1, -2, 1]$ stencil) and then in time, then you obtain a (large) stiff system of ODEs of the form $$u_h' = Au_h + f(u_h)$$ where $u_h$ is the unknown vector, still continuous in time: $(u_h(t))_i=u(x_i,t)=u(x_0 + ih, t)$. Depending on how the time integration is performed, you may end up with an implicit scheme: the simplest is Backward Euler $$u_h^{n+1} = u_h^{n} + kA u_h^{n+1} + k\,f(u_h^{n+1})$$ which requires you to solve a nonlinear system of equations at each time step and is generally expensive. When $f(u)$ is nonlinear, as in your case, one may prefer an IMEX approach (IMplicit-EXplicit) and treat the nonlinearity $f(u)$ explicitly, like @PierreCarre wrote in his answer. The advantage is that all you need to do now is solve a linear system at each time step, which is considerably cheaper: $$u_h^{n+1} = (I-kA)^{-1}\left(u_h^n+k\,f(u_h^n)\right)$$
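To make the IMEX step concrete, here is a minimal stdlib-only Python sketch for $u_t = u_{xx} + u^2$ on $(0,1)$ with homogeneous Dirichlet boundaries. The grid size, time step and initial data are illustrative choices, not part of the question.

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system with sub-diagonal a, diagonal b,
    super-diagonal c and right-hand side d (lists of length m;
    a[0] and c[-1] are unused)."""
    m = len(d)
    cp, dp = [0.0] * m, [0.0] * m
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    for i in range(1, m):
        den = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / den
        dp[i] = (d[i] - a[i] * dp[i - 1]) / den
    x = [0.0] * m
    x[-1] = dp[-1]
    for i in range(m - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def imex_step(u, h, k):
    """One IMEX step for u_t = u_xx + u^2 with zero Dirichlet BCs:
    diffusion implicit, reaction explicit, i.e. solve
    (I - kA) u^{n+1} = u^n + k f(u^n)."""
    m = len(u)
    r = k / h ** 2
    a = [-r] * m          # sub-diagonal of I - kA
    b = [1 + 2 * r] * m   # diagonal
    c = [-r] * m          # super-diagonal
    d = [ui + k * ui ** 2 for ui in u]
    return thomas(a, b, c, d)

# small demo: initial bump sin(pi x) on (0,1), 100 steps of size 1e-3
m, k, steps = 49, 1e-3, 100
h = 1.0 / (m + 1)
u = [math.sin(math.pi * (i + 1) * h) for i in range(m)]
for _ in range(steps):
    u = imex_step(u, h, k)
```

Each step costs one tridiagonal solve (Thomas algorithm), which is the whole point of treating only the diffusion term implicitly.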
{ "language": "en", "url": "https://math.stackexchange.com/questions/3942933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I solve inequalities where both sides are not equal to zero, and that use absolute values and fractions? For example something like $$\dfrac{|x-2|+3}{4-|2x+8|}\geq-5.$$ I understand how to solve inequalities where there is an absolute value, however in fraction form just confuses me.
Hint: * *When $4-|2x+8|>0$ the inequality is always satisfied (LHS is positive). *When $4-|2x+8|<0$ the inequality is satisfied if and only if $|x-2|+3\leq -5(4-|2x+8|)$ (the inequality reverses when multiplying both sides by a negative number).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3943102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that $\sqrt{3} + \sqrt{7}$ is a primitive element of $\mathbb{Q}(\sqrt{3},\sqrt{7})$ I want to find a primitive element of $\mathbb{Q}(\sqrt{3},\sqrt{7})$ over $\mathbb{Q}$. I try with $\sqrt{3} + \sqrt{7}$. Let $X = \sqrt{3} + \sqrt{7}$ we get $$ X^4 - 20X^2 + 16 = 0 $$ We can prove that $[\mathbb{Q}(\sqrt{3},\sqrt{7}) : \mathbb{Q}] = 4$ since $\sqrt{3}$ and $\sqrt{7}$ are the roots of $X^4 - 10X^2 + 21$ and it is irreducible over $\mathbb{Q}$ by the Eisenstein criterion with $p = 3$. Let $P(X) = X^4 - 20X^2 + 16$. Since $\sqrt{3} + \sqrt{7}$ is a root of $P$ then $[\mathbb{Q}(\sqrt{3}+\sqrt{7}) : \mathbb{Q}] \leq 4$. If we prove that $P$ is irreducible over $\mathbb{Q}$ then the degree is equal to 4. But the Eisenstein criterion does not seem to work... So, I try another thing. Let $\alpha = \sqrt{3} + \sqrt{7}$, we get $\alpha^2 = 10 + 2 \sqrt{21}$ and $\mathbb{Q} \subset \mathbb{Q}(10 + 2 \sqrt{21}) \subset \mathbb{Q}(\sqrt{3} + \sqrt{7})$. Now $$ [\mathbb{Q}(10 + 2 \sqrt{21}) : \mathbb{Q}] = 2 \Rightarrow 2 \mid [ \mathbb{Q}(\sqrt{3} + \sqrt{7}) : \mathbb{Q} ] $$ The degree must be 1, 2 or 4. But $\mathbb{Q}(10 + 2 \sqrt{21}) \subset \mathbb{Q}(\sqrt{3} + \sqrt{7})$ and $ [ \mathbb{Q}(\sqrt{3} + \sqrt{7}) : \mathbb{Q}(10 + 2 \sqrt{21}) ] > 1$ then the degree must be 2 or 4. I don't know how to conclude with the formula $$ [ \mathbb{Q}(\sqrt{3} + \sqrt{7}) : \mathbb{Q} ] = [\mathbb{Q}(\sqrt{3} + \sqrt{7}) :\mathbb{Q}(10 + 2 \sqrt{21}) ] [\mathbb{Q}(10 + 2 \sqrt{21}) : \mathbb{Q}] $$ Thank you. I think I did a mistake somewhere.
Cubing $\alpha=\sqrt 3+\sqrt 7$, you obtain the linear system \begin{cases} \alpha=\sqrt 3+\sqrt 7, \\[1ex] \alpha^3=24\sqrt 3+16\sqrt7, \end{cases} from which you can easily deduce $\sqrt 3$ and $\sqrt 7$ as linear combinations of $\alpha$ and $\alpha^3$.
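A quick floating-point sanity check of the elimination (the exact algebra gives $\sqrt 3=(\alpha^3-16\alpha)/8$ and $\sqrt 7=(24\alpha-\alpha^3)/8$):

```python
import math

alpha = math.sqrt(3) + math.sqrt(7)

# solving the 2x2 linear system above for sqrt(3) and sqrt(7):
#   alpha   =    sqrt(3) +    sqrt(7)
#   alpha^3 = 24 sqrt(3) + 16 sqrt(7)
sqrt3 = (alpha ** 3 - 16 * alpha) / 8
sqrt7 = (24 * alpha - alpha ** 3) / 8
```

This also confirms numerically that $\alpha$ is a root of the quartic $X^4-20X^2+16$ from the question.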
{ "language": "en", "url": "https://math.stackexchange.com/questions/3943278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do you solve $y'=\sin(y)$? How do you solve $y'=\sin(y)$? I ended up with $$-\ln(\csc(y)+\cot(y)) = x+C$$ But then I simplified it to $$e^{-x+C} = \frac1{\sin(y)} + \frac{\cos(y)}{\sin(y)}$$ Where do I go from here?
Notice that for any integer $k$, the constant function $y(x)=\pi k$ is a solution because $$\sin\left(y(x)\right)=\sin(\pi k)=0=y'(x)$$ To find the others, we may assume that $y(x)$ is not of the form $\pi k$. Then $\sin(y(x))$ is not identically zero, so $$\csc(y(x))y'(x)=1$$ Antiderivatives of $\csc(y(x))y'(x)$ and $1$ are given by $-\tanh^{-1}\left[\cos(y(x))\right]$ and $x$, so these two must differ by a constant, say $c$. $$-\tanh^{-1}\left[\cos(y(x))\right]=x+c$$ We can now solve equation for $y(x)$. As always, $c$ was redefined to absorb the minus sign. \begin{align*} \tanh^{-1}\left[\cos(y(x))\right] &= c-x\\ \cos(y(x)) &= \tanh(c-x)\\ y(x) &= 2\pi k\pm\cos^{-1}\left[\tanh(c-x)\right] \end{align*} In the last line, we made use of the not-so-easy-to-prove fact that for $|y|\leq 1$, $$y=\cos(x)\iff x=2\pi k\pm\cos^{-1}(y)$$ In summary, the solutions are $$y(x)=\pi k$$ and $$y(x) = 2\pi k\pm\cos^{-1}\left[\tanh(c-x)\right]$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3943370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Projection of linear combination of real numbers Suppose a real number $\alpha$ is a rational linear combination of two linearly independent real numbers, such as $\pi$ and $\sqrt 3$. For instance, say $$\alpha = \tfrac12\pi + 5\sqrt 3.$$ By a simple linear algebra argument ($\{\pi,\sqrt3\}$ is a $\mathbb Q$-basis), it's clear that the coefficients of $\alpha$ are uniquely determined. Question: Given $\alpha\in\operatorname{span}_\mathbb Q(\{\pi,\sqrt 3\})$, how do we determine the coefficients of $\pi$ and $\sqrt 3$? It feels like there is enough information, but at the same time, I imagine that we can find different coefficients which give us numbers arbitrarily close to $\alpha$, so it is important that we have an exact expression for $\alpha$. (In other words, if we only know that $\alpha\approx 10.23105$, say, I imagine it would be hopeless to expect that we can determine the coefficients, even if we restrict our view to the module $\operatorname{span}_{\mathbb Z}(\{\pi,\sqrt 3\})$.) The reason I ask this question is because for each $n$, the integral $$I(n)=\int_0^{\pi/3}\sin^{2n}x\,dx$$ is a number of the form $a\pi+b\sqrt 3$. It's easy to obtain the recurrence relation $$I(n) = \begin{cases} \hfil\frac\pi3\hfil & \text{if $n=0$}\\[3pt] (1-\tfrac1{2n})\,I(n-1)-\frac1{4n}\big(\tfrac{\sqrt3}2\big)^{2n-1} & \text{otherwise} \end{cases}$$ for $I(n)$, but I want to be able to compute the coefficients of $\pi$ and $\sqrt 3$ separately (in exact form) using a computer program. I appreciate any help with solving either of the two problems, I'm assuming a solution to one necessarily sheds light on the other, that's why I haven't posted these as two separate questions.
Using Mathematica RSolve[{i[n] == (1 - 1/(2 n)) i[n - 1] - 1/(4 n) (Sqrt[3]/2)^(2 n - 1), i[0] == Pi/3}, i[n], n] $$I_n=\frac{3^{n+\frac{1}{2}} \, _2F_1\left(1,n+1;n+\frac{3}{2};\frac{3}{4}\right)}{2^{2 n+2}(2 n+1) }$$ where $_2F_1$ is the hypergeometric function, which has the series expansion $$_2F_1(a,b;c;z)=\sum _{k=0}^{\infty } \frac{(a)_k (b)_k z^k}{k! (c)_k}$$ with $(a)_k,(b)_k,(c)_k$ denoting Pochhammer symbols. The table below lists the first exact values of the integrals $$ \begin{array}{c|r} n & I_n\\ \hline 0 & \frac{\pi }{3} \\ 1 & \frac{4 \sqrt{3} \pi -9}{24 \sqrt{3}} \\ 2 & \frac{8 \sqrt{3} \pi -27}{64 \sqrt{3}} \\ 3 & \frac{20 \sqrt{3} \pi -81}{192 \sqrt{3}} \\ 4 & \frac{560 \sqrt{3} \pi -2511}{6144 \sqrt{3}} \\ 5 & \frac{\sqrt{3} \left(560 \sqrt{3} \pi -2673\right)}{20480} \\ 6 & \frac{7 \left(440 \sqrt{3} \pi -2187\right)}{40960 \sqrt{3}} \\ \ldots & \ldots\\ \end{array} $$
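For the coefficients of $\pi$ and $\sqrt 3$ separately, in exact form, the recurrence can be split into two rational recurrences using $\left(\frac{\sqrt 3}{2}\right)^{2n-1}=\left(\frac34\right)^{n-1}\frac{\sqrt 3}{2}$, so that $I(n)=a_n\pi+b_n\sqrt 3$ with $a_n,b_n\in\Bbb Q$. A Python sketch with exact rational arithmetic (function and variable names are of course arbitrary):

```python
from fractions import Fraction

def coeffs(N):
    """Exact coefficients (a_n, b_n) with I(n) = a_n*pi + b_n*sqrt(3),
    obtained from the recurrence for I(n) above."""
    a, b = Fraction(1, 3), Fraction(0)
    out = [(a, b)]
    pw = Fraction(1)                 # holds (3/4)**(n-1)
    for n in range(1, N + 1):
        fac = 1 - Fraction(1, 2 * n)
        a = fac * a
        b = fac * b - Fraction(1, 8 * n) * pw
        pw *= Fraction(3, 4)
        out.append((a, b))
    return out

table = coeffs(4)
```

The output matches the table above, e.g. $I_2=\frac\pi8-\frac{9\sqrt3}{64}=\frac{8\sqrt3\pi-27}{64\sqrt3}$.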
{ "language": "en", "url": "https://math.stackexchange.com/questions/3943565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Reducing $2018^{2018}\pmod6$ via $\mod2$ and $\mod3$ I've been tasked to reduce $$2018^{2018}\pmod6$$using the results of reducing$$2018^{2018}\pmod2$$and $$2018^{2018}\pmod3$$ I have deduced that $$2018^{2018}\equiv0\pmod2$$and $$2018^{2018}\equiv1\pmod3$$but I don't see a way to combine these two congruences into one $\mod6$ congruence. Could someone tell me how this is achieved? I have a feeling the answer is really simple, but I just don't see it. Is there also a more generalized strategy of solving modular congruences via the divisors of the modulo number($2$ and $3$ are divisors of $6$)? The way I have currently solved this problem is to first reduce $$2018^{2018}\pmod6$$ to $$2^{2018}\pmod6$$Then we look for patterns:$$\begin{align}2^1&\equiv2\pmod6\\2^2&\equiv4\pmod6\\2^3&\equiv2\pmod6\\2^4&\equiv4\pmod6\\&\ \ .\\&\ \ .\\&\ \ .\\2^n&\equiv\begin{cases}4&\text{if }n\equiv0\pmod2\\2&\text{if }n\equiv1\pmod2\end{cases}\pmod6\end{align}$$for $n\in\Bbb N$. In the case of $2018^{2018}$, $2018\equiv0\pmod2$, so $$2018^{2018}\equiv2^{2018}\equiv4\pmod6$$
The Chinese remainder theorem states that, for these congruence equations: $$ x\equiv 0 \pmod 2\\ x\equiv1\pmod 3 $$ Then different solutions of $x$ are congruent mod $6$, that is, $x_1 \equiv x_2 \pmod 6$. You have calculated $2018^{2018} \bmod 2$ and $2018^{2018} \bmod 3$, and know that $x_1 = 2018^{2018}$ is a solution. By trial and error for all the integers $0,1, \cdots, 6-1$, or by direct construction, $x_2 = 4$ is another solution. So $$2018^{2018} \equiv 4 \pmod 6$$
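The trial-and-error step over the residues $0,\dots,5$ is small enough to script; a quick Python check:

```python
# direct computation, plus the CRT-style search over residues mod 6
x = pow(2018, 2018, 6)
assert pow(2018, 2018, 2) == 0   # x ≡ 0 (mod 2)
assert pow(2018, 2018, 3) == 1   # x ≡ 1 (mod 3)

# find every residue mod 6 satisfying both congruences
sols = [r for r in range(6) if r % 2 == 0 and r % 3 == 1]
```

The search returns a single residue, in agreement with the uniqueness mod $6$ guaranteed by the Chinese remainder theorem.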
{ "language": "en", "url": "https://math.stackexchange.com/questions/3943716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Solving $x^d \equiv a \pmod{29}$ using primitive roots Can anyone please detail the general approach to questions of the form $x^d \equiv a \pmod{29}$? For example, Wolfram Alpha states that $x^5 \equiv 8 \pmod{29}$ has one solution, $x^4 \equiv 4 \pmod{29}$ has no solution, and $x^7 \equiv 12 \pmod{29}$ has many solutions, but I have no idea how I'd go about proving that no solutions exist, or that any solutions I find are comprehensive. I know that $2$ is a primitive root $\bmod 29$, so $2^{28} \equiv 1 \pmod{29}$. For the first one I can manipulate to get $(2^{23})^5 \equiv 8 \pmod{29}$ to give $x \equiv 2^{23} \equiv 10 \pmod{29}$ as a solution, but this doesn't prove that no other solutions exist, does it? Also, I don't know how this approach would work with the third problem because $12$ cannot be written $2^n$. Furthermore, I'm unsure of the general technique to show that the second is not soluble?
Since $2$ is a primitive root modulo $29$, its multiplicative order is $28$, which means $$2^y \equiv 2^z \pmod{29} \iff y \equiv z \pmod{28} \tag{1}\label{eq1A}$$ One way to solve your congruence equations is to first express the right hand sides as a congruent power of $2$. The first $2$ examples are quite easy. With the third one, i.e., $$x^7 \equiv 12 \pmod{29} \tag{2}\label{eq2A}$$ note $12 \equiv 3(4) \equiv (32)(4) \equiv 2^7 \pmod{29}$ (as J. W. Tanner's question comment states). Next, since $2$ is a primitive root and $x \not\equiv 0 \pmod{29}$, then there's an integer $1 \le a \le 28$ such that $x \equiv 2^a \pmod{29}$. Thus, \eqref{eq2A} becomes $$\left(2^{a}\right)^7 \equiv 2^7 \pmod{29} \implies 2^{7a} \equiv 2^7 \pmod{29} \tag{3}\label{eq3A}$$ Using \eqref{eq1A}, this gives $$7a \equiv 7 \pmod{28} \implies a \equiv 1 \pmod{4} \tag{4}\label{eq4A}$$ This gives multiple answers of $x \equiv 2^{4b + 1} \pmod{29}$ for $0 \le b \le 6$, i.e., $x \equiv 2, 3, 19, 14, 21, 17, 11 \pmod{29}$. Returning to your first example, $$x^5 \equiv 8 \equiv 2^3 \pmod{29} \tag{5}\label{eq5A}$$ as done before, let $x \equiv 2^a \pmod{29}$ to get $$2^{5a} \equiv 2^3 \pmod{29} \implies 5a \equiv 3 \pmod{28} \implies a \equiv 23 \pmod{28} \tag{6}\label{eq6A}$$ Thus, $x \equiv 2^{23} \pmod{29}$ is the answer, as J. W. Tanner's question comment also indicates. With your remaining example of $$x^4 \equiv 4 \equiv 2^2 \pmod{29} \implies x^2 \equiv \pm 2 \tag{7}\label{eq7A}$$ note $2$ is not a quadratic residue modulo $29$ (which requires, as shown in this table, that $p \equiv 1, 7 \pmod{8}$, but $29 \equiv 5 \pmod{8}$), and also $-2$ is not a quadratic residue (since that requires $p \equiv 1, 3 \pmod{8}$). 
In addition, using the method I showed above gives $$2^{4a} \equiv 2^2 \pmod{29} \implies 4a \equiv 2 \pmod{28} \implies 2a \equiv 1 \pmod{14} \tag{8}\label{eq8A}$$ However, an even value cannot be congruent to an odd value modulo an even number (it would require $14 \mid 2a - 1$), so there are no solutions.
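All three congruences are small enough to brute-force, which confirms the solution counts above (names here are ad hoc):

```python
def solutions(d, a, p=29):
    """Brute-force the congruence x**d ≡ a (mod p) over 1..p-1."""
    return [x for x in range(1, p) if pow(x, d, p) == a]

s1 = solutions(5, 8)    # expect the single residue found via 2^23
s2 = solutions(4, 4)    # expect no solutions
s3 = solutions(7, 12)   # expect seven residues
```

This returns $[10]$ for the first congruence (note $2^{23}\equiv10\pmod{29}$), the empty list for the second, and the seven residues $2,3,11,14,17,19,21$ for the third, matching the primitive-root analysis.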
{ "language": "en", "url": "https://math.stackexchange.com/questions/3943839", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Proving $3\times3$ nilpotent matrix satisfies $A^3=0$ Let $A$ be a $3$ $\times$ $3$ matrix. Prove that if there exists $n$ $>$ $3$ such that $A^n$= $0$, then $A^3$ = $0$. Can someone help explain how to prove this? I know that since A is nilpotent, its only eigenvalue ($\lambda$) is $0$. So, I was thinking that I could argue the following: $A^3$$\dot{\vec{v}}$=$\lambda^3$$\dot{\vec{v}}$=$\dot{\vec{0}}$, which implies that $A^3$=$0$. I don't think this proof is fully correct, however, because it doesn't use the fact that $n>3$. I could have made the same argument to suggest that $A^2=0$, which is not necessarily true when a $3$ $\times$ $3$ matrix A is nilpotent.
There are a few ways to do that. If you know about minimal polynomials, then you can use that the minimal polynomial of $A$ has degree at most $3$ and divides $x^n$. A less advanced way: Note that for each $k$, the range of $A^k$ is a subspace of ${\mathbf R}^3$, and $range(A^3) \subset range(A^2)\subset range(A) \subset {\mathbf R}^3$. The dimensions must correspondingly be nonincreasing as you go down the list. If $range(A^3)$ weren't of dimension zero, then you'd have $range(A^k) = range(A^{k+1}) \neq \{0\}$ for some $k \leq 2$. Then by induction $range(A^l) = range(A^k)$ for all $l \geq k$. This would include $l = n$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3943958", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
Use probability generating function to calculate probability Let $N$ be a discrete random variable having a probability generating function $G _ { N } ( z ) = e ^ { - 3 + 2 z + z ^ { 2 } }$ Find $\Pr(N ≤ 2)$ If the PGF is in the form of $\sum z ^ { x } p ( x )$, I can work out the PMF of $N$. But how can I deal with this case?
Break up $G_N(z)=e^{-3}e^{2z}e^{z^2}$ and work out the Cauchy product of the last two factors: $$G_N(z)=e^{-3}(1+2z+2z^2+\cdots)(1+z^2+\cdots)=e^{-3}(1+2z+3z^2+\cdots)$$ where $\cdots$ indicates $z^3$ or higher terms which we don't need. Therefore $P(N\le2)=e^{-3}(1+2+3)=6e^{-3}$.
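The Cauchy product above can also be checked numerically by convolving truncated Taylor series (the truncation order $K$ is an arbitrary choice, large enough for the $z^2$ coefficient):

```python
import math

K = 8  # truncation order; only coefficients up to z^2 are needed

# Taylor coefficients of exp(2z) and exp(z^2)
e2z = [2 ** k / math.factorial(k) for k in range(K)]
ez2 = [1 / math.factorial(k // 2) if k % 2 == 0 else 0.0 for k in range(K)]

# Cauchy product gives the coefficients of exp(2z + z^2)
prod = [sum(e2z[j] * ez2[k - j] for j in range(k + 1)) for k in range(K)]

pmf = [math.exp(-3) * c for c in prod]   # P(N = k) for k = 0..K-1
p_le_2 = pmf[0] + pmf[1] + pmf[2]
```

The first three coefficients come out as $1, 2, 3$, so $P(N\le2)=6e^{-3}$ as in the answer.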
{ "language": "en", "url": "https://math.stackexchange.com/questions/3944277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why $\log_b(x)$ = $-\log_{1/b}(x)$? IN a textbook I am asked to consider the graphs of $f(x) = \log_{1/2}(x)$ and $-\log_2(x)$. They appear identical and then the textbooks tells the rule that $\log_b(x)$ = $-\log_{1/b}(x)$. I tried to prove this to myself on paper. I can write both equations in exponential form. Pretend $x=4$: $$\log_{1/2}(4)=\frac{1}{2}^y=4$$ Then write the same for the other version: $$-\log_2(4)=-2^y=4$$ If I know these are equivalent then the two exponent versions should be the same no? $$4=\frac{1}{2}^y=-2^y$$ It's not clicking and I don't 'get it' from here. Is my last equation true? Why are they equivalent?
Let $y=-\log_{1/b}(x)$. We see that \begin{align} &y=\log_{1/b}(x^{-1}) \\[4pt] \implies&\left(\frac{1}{b}\right)^y=\frac{1}{x} \\[4pt] \implies& \frac{1}{b^y}=\frac{1}{x} \\[4pt] \implies& x=b^y \\[4pt] \implies& y=\log_b(x) \, . \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/3944411", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
Calculating error for $\pi$ in an expansion I have the following equation: $\quad\pi = 4\arctan(1/2) + 4\arctan(1/3).$ I want to calculate how many terms of the expansion ($n$) I have to compute in order for the error to be less than $10^{-10}$. I have an idea for the method to be used: taking the power series of $\arctan(x)$ and using the integral remainder of the form $$R_n \;=\; \int\frac{{(-1)^n t^{2n}}}{1+t^2}\,dt.$$ So I want the sum of the two remainders from the expansions of the two arctangents to be less than $\frac{10^{-10}}{4}$: $$\int_0^{1/2}\frac{{(-1)^n t^{2n}}}{1+t^2}\,dt \;+\; \int_0^{1/3}\frac{{(-1)^n t^{2n}}}{1+t^2}\,dt\;<\; \frac{10^{-10}}{4}.$$ Now to solve this, I can't do without making approximations; since $t^{2n} > \frac{{(-1)^n t^{2n}}}{1+t^2}$ on my intervals, I approximate using this, getting $$\int_0^{1/2}t^{2n}\,dt \;+\; \int_0^{1/3}t^{2n}\,dt.$$ Evaluating, I get $$\frac{(1/2)^{2n+1}}{2n+1} + \frac{(1/3)^{2n+1}}{2n+1} < \frac{10^{-10}}{4}.$$ I have no idea how to solve this. Would I make another approximation? Or is my method completely wrong to start with? Edit: After making some huge approximations, i.e. getting rid of the $2n+1$'s in the denominators and replacing $1/3$ with $1/2$, I got that $2n+1 = 36$. However, I am not sure whether this is correct.
As the Taylor expansion $\arctan x=\sum _{n=1}^{\infty } \frac{(-1)^{n+1} x^{2 n-1}}{2 n-1}$ is an alternating series, the error is less than the absolute value of the first term neglected, so it is enough to find an $n$ such that $$\frac{4 \left(\frac{1}{2}\right)^{2 n-1}}{2 n-1}+\frac{4 \left(\frac{1}{3}\right)^{2 n-1}}{2 n-1}=\frac{1}{10^{10}}$$ Using Mathematica I got an approximate value of $n=16$. Indeed $$4 \sum _{n=1}^{16} \frac{(-1)^{n+1} \left(\frac{1}{2}\right)^{2 n-1}}{2 n-1}+4 \sum _{n=1}^{16} \frac{(-1)^{n+1} \left(\frac{1}{3}\right)^{2 n-1}}{2 n-1}\approx 3.14159265357$$ While $\pi\approx 3.14159265358$. The error is $1.14\times10^{-11}<10^{-10}$
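The same estimate is easy to reproduce without Mathematica; a Python check of the partial sums (the cutoffs tested are illustrative):

```python
import math

def arctan_partial(x, n):
    """First n terms of the Gregory series for arctan(x)."""
    return sum((-1) ** (k + 1) * x ** (2 * k - 1) / (2 * k - 1)
               for k in range(1, n + 1))

def pi_approx(n):
    # uses the identity pi = 4*arctan(1/2) + 4*arctan(1/3)
    return 4 * arctan_partial(0.5, n) + 4 * arctan_partial(1 / 3, n)

err16 = abs(pi_approx(16) - math.pi)
```

With $16$ terms the error is on the order of $10^{-11}$, while with $14$ terms it is still above $10^{-10}$.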
{ "language": "en", "url": "https://math.stackexchange.com/questions/3944545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Number of possible states on a $6\times 6$ grid You have a 6x6 square grid. You have 18 black and 18 white pieces. How many ways can you fill the board (one piece per square) such that there are 3 black and 3 white pieces per column, and such that there are 3 black and 3 white pieces per row? I've tried to do it purely combinatorially, just looking row by row, but it gets confusing very fast. Is there perhaps some way to do it utilizing symmetries, like how a solution translated horizontally or vertically is a solution, or how the 90 degree rotation around the center of a solution is a solution?
By considering the case of a $2 \times 2$ grid with $1$ black piece and $1$ white piece for each row and each column you can count $2$ ways (i.e. BW-WB and WB-BW). It is not too difficult to count patiently all cases for a $4 \times 4$ grid, totalling $90$ ways. Searching OEIS with $2,90$, you can find OEIS sequence A058527 and, for the $6 \times 6$ grid the value $297200$.
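The small cases can be brute-forced directly, which is how one arrives at the values $2$ and $90$ to feed into OEIS (the $6\times6$ case has $20^6$ row choices, so this naive search is too slow there):

```python
from itertools import combinations, product

def count(n):
    """Count n-by-n fillings with exactly n//2 black squares in every
    row and every column; a row is encoded by its set of black columns."""
    rows = list(combinations(range(n), n // 2))
    total = 0
    for choice in product(rows, repeat=n):
        cols = [0] * n
        for row in choice:
            for c in row:
                cols[c] += 1
        if all(v == n // 2 for v in cols):
            total += 1
    return total
```

Here `count(2)` gives $2$ and `count(4)` gives $90$, matching OEIS A058527.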
{ "language": "en", "url": "https://math.stackexchange.com/questions/3944665", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Adjoining an inverse to a central element of an algebra Given a (not necessarily commutative) algebra $A$, and a central element $c \in A$, is it always possible to enlarge $A$ to an algebra $A'$ in which $c$ is invertible? I guess one can take a set of generators and relations for $A$, and add a formal inverse $c^{-1}$ along with extra relations making $c^{-1}$ central and satisfying $c^{-1}c = cc^{-1} = 1$. But I guess this is a specific example of a general formal process. What could this process be?
Let $S=\{c^n \mid n \in \Bbb N\}$. Then $S$ satisfies the Ore condition, so one can form the Ore localization $AS^{-1}$. This is constructed in the same way that localizations of commutative rings work. This is treated in many non-commutative algebra books, e.g. in Lam's "Lectures on Modules and Rings". In this special case, there's another construction yielding the same result: we can form the polynomial ring $A[x]$ and divide by the two-sided ideal $(cx-1)A[x]=A[x](cx-1)=A[x](cx-1)A[x]$ to get $A[x]/(cx-1)A[x]$. The class of $x$ will be a formal inverse of $c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3944955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }