Find the length of a triangle's side I have the following triangle and I'm supposed to find the value of x. First thought that came to mind is to use the the following tangent equation $$\tan(y)=\frac{x}{27}$$ and $$\tan(19+y) = \frac{2x}{27}$$ which implies that $$\tan(19+y) =2\tan(y)$$ and solve for $y$ and once I've found $y$ I can easily find $x$. I don't have an easy solution for the last equation and I feel that I'm complicating things unnecessarily. (It's a grade 10 geometry question). Edit: Using the fact $\tan(a+b) = \dfrac{\tan(a)+\tan(b)}{1-\tan(a)\tan(b)}$ yields the following quadratic equation $$2\tan(19)\tan^2(y)-\tan(y)+\tan(19) = 0$$which gives you that $\tan(y) = 0.890833$ or $\tan(y)= 0.561272$ which means $x = 27\times0.890833$ or $x= 27\times 0.561272$. So the question is now which $x$ should I pick and why? and is this more complicated than it should be?
I think you can just use the sine rule and the Pythagorean theorem; then: $$\frac{x}{\sin(19)} = \frac{\sqrt{x^2+27^2}}{\sin(71-y)}$$ And we see: $$\sin(71-y)=\frac{27}{\sqrt{4x^2+27^2}}$$ Thus, if we substitute: $$\frac{x}{\sin(19)} = \frac{\sqrt{x^2+27^2}\sqrt{4x^2+27^2}}{27}$$ Solving for $x$: $$x=15.15...\lor x=24.05...$$ ... which is in agreement with your answer. PS. Both values of $x$ comply, because they satisfy the equation. And is that tan property known in 10th grade? Otherwise use the above.
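For a quick numerical sanity check, here is a small Python sketch (assuming the angles are in degrees, as in the problem) that solves the quadratic in $\tan(y)$ from the question and recovers both candidate values of $x$:

```python
import math

t19 = math.tan(math.radians(19))
# the quadratic 2*tan(19°)*t^2 - t + tan(19°) = 0, with t = tan(y)
a, b, c = 2 * t19, -1.0, t19
disc = b * b - 4 * a * c
roots = [(-b + s * math.sqrt(disc)) / (2 * a) for s in (1, -1)]

# both roots satisfy tan(19° + y) = 2 tan(y), so both triangles exist
for t in roots:
    y = math.atan(t)
    assert abs(math.tan(math.radians(19) + y) - 2 * t) < 1e-9

xs = sorted(27 * t for t in roots)
print(xs)  # ≈ [15.154, 24.052]
```

Both roots pass the original equation, which matches the observation that both values of $x$ comply.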
{ "language": "en", "url": "https://math.stackexchange.com/questions/3107539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Why is the inverse of an average of numbers not the same as the average of the inverse of those same numbers? I have a set of numbers (in my case: mean retention time (MRT) in the stomach (h)) of which I want to calculate the average gastric passage rate (/h). Gastric passage rate = 1/MRT. My question is why 'the average of the calculated gastric passage rates of those numbers' is not the same as 'the calculated gastric passage rate of the averaged MRTs'. The next question is: what is the right way? So for example: $x = 5; 10; 4; 2.$ Average $= 5.25 h \Rightarrow 1/5.25 = 0.19$/h $1/x = 0.2; 0.1; 0.25; 0.5.$ Average $= 0.26$/h So should I first take the average of the MRTs and then take the inverse for calculating the gastric passage rate (first way) or should I first take the inverse of all numbers for calculating the gastric passage rates and then take the average of that number (second way). Thanks in advance!
You already got many answers why it doesn't work for a "normal" arithmetic average. In fact there exists a type of average for which this is true. It would be the same if you were using a geometric mean instead of an arithmetic mean. In that case, the algebra becomes: $$\sqrt[N]{\prod_1^N a_i} = \sqrt[N]{a_1\cdots a_N} = a_1^{1/N}\cdots {a_N}^{1/N} = \\\frac{1}{{a_1}^{-1/N}\cdots {a_N}^{-1/N}} = \frac{1}{\displaystyle\sqrt[N]{\prod_1^N {a_i}^{-1}}} = \frac{1}{\displaystyle\sqrt[N]{\prod_1^N \frac 1{a_i}}}$$
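A short numeric illustration of this, using the MRT data from the question (Python, standard library only): the arithmetic mean does not commute with taking reciprocals, while the geometric mean does.

```python
import math

x = [5, 10, 4, 2]

amean = sum(x) / len(x)                       # 5.25
inv_amean = 1 / amean                         # ≈ 0.190
amean_inv = sum(1 / v for v in x) / len(x)    # 0.2625, not equal

gmean = math.prod(x) ** (1 / len(x))
inv_gmean = 1 / gmean
gmean_inv = math.prod(1 / v for v in x) ** (1 / len(x))
assert abs(inv_gmean - gmean_inv) < 1e-12     # geometric mean commutes with 1/x
print(inv_amean, amean_inv, inv_gmean, gmean_inv)
```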
{ "language": "en", "url": "https://math.stackexchange.com/questions/3107661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 11, "answer_id": 5 }
Cohomology and Eilenberg-MacLane spaces It is well known that for any abelian group $G$, and any CW-complex $X$, the set $[X, K(G,n)]$ of homotopy classes of maps from $X$ to $K(G,n)$ is in natural bijection with the $n^{\mathrm{th}}$ singular cohomology group $H^n(X; G)$ with coefficients in $G$. My question is, is there a similar bijection if the group is nonabelian? Notice that we only need to consider $[X, K(G,1)]$. In particular, I am trying to figure out what $[X, K(G,1)]$ looks like if $G$ is a finite group.
Assume $X$ connected. Yes, this is known. For pointed homotopy classes this is $\text{Hom}(\pi_1 X, G)$. This is 1B.9 of Hatcher, usually the first place one sees obstruction theory, and requires less work than the case of $n > 1$. For unpointed homotopy classes of maps, one quotients by conjugacy of elements of $G$. As you point out in the comments below, this is 4A.2 of Hatcher.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3107743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confirming Axioms of Vector Spaces that rely on modular arithmetic V is a vector space where $$V = \{\mathrm{rotations}\} = \{\theta : \theta ~ \text{is a real number and} ~ 0 \leq \theta < 2\pi\}$$ Addition is defined by $$\theta_1 + \theta_2 := (\theta_1 + \theta_2) \bmod 2\pi$$ Scaling by real numbers is defined by $$r\theta := (r\theta) \bmod 2\pi$$ My question has to do with the axioms of Additive Associativity and Additive Inverses. AA is defined as $$(u+v)+w=u+(v+w) \tag{$u,v,w \in V$}$$ and AI is defined as: there exists a vector $w$ such that $w=-v$ where $$v+w=0_v \tag{$v,w \in V$}$$ With regards to AA, I am unsure of how $\bmod 2\pi$ would affect the addition. So if $u=\theta_1$, $v=\theta_2$ and $w=\theta_3$, then $(u+v)+w$ would be $$(((\theta_1+\theta_2) \bmod 2\pi)+\theta_3) \bmod 2\pi$$ right? How can I prove that this is the same thing as $$(\theta_1+((\theta_2+\theta_3) \bmod 2\pi)) \bmod 2\pi$$ without losing generality? Can the mod be pulled out of the expression and done afterwards, since $\theta$ is a real number? Secondly, for AI I assume that $$w=-v$$ would not mean literally negative $v$, but rather the inverse that would provide the zero vector, since there are no negative elements in $V$. For example, if $$v=3\pi/2$$ then $$w=\pi/2$$ then $$v+w=0$$ as defined by the addition of vectors in this space. Am I right in assuming this? Thanks for all your help!
See both of u/Nick's comments for the answer to this question. "Mods" can be pulled out of the expression, and the additive inverse of $v$ is $2\pi - v$.
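A tiny numeric sketch of both facts (Python; the tolerance handling in the inverse check is only there because floating-point addition can land a hair below $2\pi$, which represents the same rotation as $0$):

```python
import math

TWO_PI = 2 * math.pi

def add(a, b):
    return (a + b) % TWO_PI

# associativity: reducing mod 2π early or late gives the same angle
u, v, w = 5.0, 4.0, 6.0
assert math.isclose(add(add(u, v), w), add(u, add(v, w)))

# the additive inverse of v is 2π - v
r = add(3 * math.pi / 2, TWO_PI - 3 * math.pi / 2)
assert min(r, TWO_PI - r) < 1e-9   # r is the angle 0, up to rounding
```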
{ "language": "en", "url": "https://math.stackexchange.com/questions/3107870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
GCDs for the polynomial ring over a Galois field. You can find many examples of computing the inverse of an element inside a Galois field. (For example here) What happens if we look at the polynomial ring over a Galois field and would like to compute the gcd of two elements? Since this is a Euclidean domain, the GCD should be well-defined. Let's say we have $\mathbb{F}_8$ as the Galois field. Since $\mathbb{F}_8$ is isomorphic to $\mathbb{F}_2[X]/(X^3+X+1)$, I can think about the elements of $\mathbb{F}_8$ as the polynomials $aX^2+bX+c$ with $a,b,c \in \mathbb{F}_2$. Now we look at the polynomial ring $\mathbb{F}_8[Y] \cong \mathbb{F}_2[X]/(X^3+X+1) [Y] \cong \mathbb{F}_2[X,Y]/(X^3+X+1)$. (Are these isomorphisms correct?) So elements of $\mathbb{F}_8[Y]$ are for example $Y^3+X+1$, or just $Y^2$ or $Y+X^2$. Does anybody know a way (or references) to calculate the gcd of some of these elements? Calculating $\gcd(Y^3+X+1, Y^2)$ I only got this far: $$ Y^3 + X + 1 = Y \cdot Y^2 + X +1$$ $$Y^2 = ?_a \cdot (X+1) + ?_b$$ If I should guess I would say that $2 \geq \deg_y(?_a) > \deg_y(?_b)$ has to be fulfilled, but I think this is impossible. Any help or any ideas are appreciated! Thanks!
You need rules for simplifying expressions in $\mathbf F_8$. With your setting, if you denote by $\omega$ the congruence class of $X$ in $\mathbf F_2[X]/(X^3+X+1)$, you know that $$\omega^3=\omega+1\qquad\text{(we're in characteristic }2),$$ so the last division is written as $$Y^2=\bigl((\omega+1)^{-1}Y^2\bigr)\cdot (\omega+1)+0.$$ Now, $\;\omega^3+\omega=1=\omega(\omega^2+1)=\omega(\omega+1)^2$, so $\;(\omega+1)^{-1}=\omega(\omega+1)$ and the last division is ultimately $$Y^2=\bigl(\omega(\omega+1)Y^2\bigr)\cdot(\omega+1).$$ To answer your last questions: $\deg_Y ?_a=2$, and $\:?_b=0$.
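If you want to experiment, here is a from-scratch Python sketch (no library finite-field type; elements of $\mathbf F_8$ are encoded as the ints 0..7, with bit $i$ the coefficient of $X^i$) that runs the Euclidean algorithm in $\mathbf F_8[Y]$:

```python
# GF(8) = GF(2)[X]/(X^3+X+1): addition is XOR, multiplication is
# carry-less multiplication reduced modulo X^3+X+1
MOD = 0b1011  # X^3 + X + 1

def gf_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= MOD
        b >>= 1
    return r

def gf_inv(a):
    return next(b for b in range(1, 8) if gf_mul(a, b) == 1)

# polynomials in Y over GF(8): coefficient lists, index = degree in Y
def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

def poly_mod(num, den):
    num, den = trim(num[:]), trim(den[:])
    inv_lead = gf_inv(den[-1])
    while len(num) >= len(den) and any(num):
        coef = gf_mul(num[-1], inv_lead)
        shift = len(num) - len(den)
        for i, d in enumerate(den):
            num[shift + i] ^= gf_mul(coef, d)
        trim(num)
    return num

def poly_gcd(a, b):
    while any(b):
        a, b = b, poly_mod(a, b)
    a = trim(a[:])
    inv = gf_inv(a[-1])               # normalize to a monic gcd
    return [gf_mul(c, inv) for c in a]

# with ω the class of X: Y^3 + X + 1 is [3, 0, 0, 1] (constant term ω+1 = 3)
print(poly_gcd([3, 0, 0, 1], [0, 0, 1]))   # [1]: the two elements are coprime
print(poly_gcd([0, 0, 2, 1], [0, 2, 1]))   # [0, 2, 1], i.e. Y^2 + ωY
```

As in the answer above, dividing $Y^2$ by the unit $\omega+1$ leaves remainder $0$, so the gcd of $Y^3+X+1$ and $Y^2$ normalizes to $1$.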
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can $x^n-(x-1)^n$ be Prime if $n$ is Not Prime? I'm hoping someone can provide an answer or a link to a proof regarding this question. Edit: The question has been put on hold because I did not expound on why the answer to this was of interest to me or the community, so, even though I've received my answer, I will elaborate. In my spare time I've been attempting to develop novel ways of generating odd composite numbers as well as primes. A pattern emerged as I iterated across $n \in \mathbb{N}$. Prime numbers were only turning up where $n$ itself was prime.
Hint: $x^n - y^n$ is divisible by $x^r - y^r$ if $r$ divides $n$.
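A brute-force check of the hint with $y = x-1$ (Python sketch; the small test ranges are arbitrary). Note that for $r=1$ the factor $x - (x-1) = 1$ is trivial, which is exactly why only prime $n$ can yield primes:

```python
# with y = x - 1, x^n - y^n is divisible by x^r - y^r whenever r divides n
def f(x, n):
    return x**n - (x - 1)**n

for x in range(2, 20):
    for n, r in [(4, 2), (6, 2), (6, 3), (9, 3)]:
        assert f(x, n) % f(x, r) == 0   # nontrivial factor since f(x, r) >= 3

print(f(3, 4), f(3, 2))  # 65 and 5: 65 = 5 * 13 is composite, as predicted
```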
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding the conditional distribution of the uniform distribution Let $U$ denote a random variable uniformly distributed over $(0,1)$. Compute the conditional distribution of $U$ given that $$(a)\,\,\,U>a\\ (b)\,\,U<a$$ where $0<a<1.$ For $(a)$ I tried the following, $$f_{U\mid U>a}(u\mid u>a)=\frac{f(u,u>a)}{f_{U>a}(u>a)}=\frac{{\displaystyle \frac{1}{1-a}}}{{\displaystyle \int_a^1\frac{du}{1-a}}}=\frac1{1-a}$$ and for $(b)$ $$f_{U\mid U<a}(u\mid u<a)=\frac{f(u,u<a)}{f_{U<a}(u<a)}=\frac{{\displaystyle \frac{1}{a}}}{{\displaystyle \int_0^a\frac{du}{a}}}=\frac1{a}$$ would this process be correct? Because I feel as if $f_{U>a}(u>a)$ should be equal to $\int_0^1\frac{du}{1-a}$, because we are integrating with respect to $U$ and not $U>a$, and the interval of definition of $U$ is $(0,1)$, or am I wrong? Here I just used the definition given by the book, which is $$f_{X\mid Y}(x\mid y)=\frac{f(x,y)}{f_Y(y)}=\frac{f(x,y)}{\int_{-\infty}^\infty f(x,y)\,dx}$$
The conditions do not change the fact that the distribution is uniform, only the interval for which it is defined. Therefore for an interval of $0\le u\le a$, the density function is $\frac{1}{a}$, while for an interval $a\le u\le 1$, the density function is $\frac{1}{1-a}$. Your analysis is correct. To clarify your concern about the integral, remember that by imposing the condition, the density function is $0$ outside the limit imposed by the condition.
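A quick Monte Carlo sanity check of this (Python sketch; the choice $a=0.4$ and the sample size are arbitrary). The conditional distributions are uniform on $(a,1)$ and $(0,a)$, so their means should be $(a+1)/2$ and $a/2$:

```python
import random

random.seed(0)
a = 0.4
samples = [random.random() for _ in range(100_000)]

upper = [u for u in samples if u > a]   # U | U > a  ~  Uniform(a, 1)
lower = [u for u in samples if u < a]   # U | U < a  ~  Uniform(0, a)

mean_u = sum(upper) / len(upper)
mean_l = sum(lower) / len(lower)
assert abs(mean_u - (a + 1) / 2) < 0.01
assert abs(mean_l - a / 2) < 0.01
```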
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to show there are only two discrete valuation rings with quotient field $k(x)$? I want to show that the only discrete valuation rings with quotient field $k(x)$ containing $k$ are: $\mathcal{O}_{a}(\mathbb{A}^{1})$ for each $a \in k$, and $\mathcal{O}_{\infty}$; the former is the set of rational functions on $\mathbb{A}^{1}$ (affine 1-space, that is the field $k$ here) that are defined at $a \in k$, a discrete valuation ring with uniformizing parameter $x-a$, and the latter is the ring $$ \left\{\frac{F}{G} \in k(x) \mid \deg(G) \geq \deg(F) \right\} $$ with $\frac{1}{x}$ as its uniformizing parameter. My idea was to first observe that if $S$ is any such DVR, then it clearly cannot be the whole field $k(x)$, since in the book (Fulton, Algebraic Curves) DVRs are not defined to be fields. So $S\subset k(x)$. It will contain the ring $k[x]$. Now I will use a previous exercise that says that "If $R$ is a DVR with quotient field $K$ and $m$ as its maximal ideal, then for $z\in K, z \notin R$, we must have $z^{-1} \in m$." and another that says that "Further, if $R\subset S\subset K$ and $S$ is also a DVR, and the maximal ideal of $S$ contains $m$, then $S =R$." But I don't know how I can start. Any hint would be appreciated, thanks!
Let me start by pointing out that a DVR $R\subset k(x)$ may not contain $k[x]$, and in fact $\mathscr{O}_\infty$ doesn't. Here's an outline of how to prove the result: (1) using the second exercise you mention (about maximality of DVRs), prove that a DVR $R$ that does contain $k[x]$ must be the localization of $k[x]$ at some maximal ideal, and so isomorphic to some $\mathscr{O}_a (\mathbb{A}^1)$ (hint: if $\mathfrak{m}$ is the maximal ideal of $R$, what is $\mathfrak{m}\cap k[x]$?); (2) by the first exercise you cite, if $R$ does not contain $k[x]$, it must contain $x^{-1}$; use maximality again to prove that such a DVR must be $\mathscr{O}_\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Constrained (and non) extrema of $f(x,y)=2x^2-2xy^3+3y^2$ I need to find the critical points on the boundary, inside $D$, outside $D$, and find the image of this function (constrained to $D$). $f(x,y)=2x^2-2xy^3+3y^2$, $D=\{2x^2+3y^2\le 9\}$. Unconstrained critical points: $f_x=4x-2y^3=0$, $f_y=-6xy^2+6y=0 \implies 6y(-xy+1)=0$. $f_y$ is equal to $0$ if $y=0$ or $x=\frac{1}{y}$; plugging this into $f_x$ leads to these critical points: $(0,0),(\pm \frac{1}{2^{\frac{1}{4}}},\pm 2^{\frac{1}{4}})$. Using the second derivative test, $(0,0),(\pm \frac{1}{2^{\frac{1}{4}}},\pm 2^{\frac{1}{4}})$ seem to be saddle points (?). Constrained critical points (extrema): $$ \left\{ \begin{array}{c} 4x-2y^3=4x\lambda\\ -6xy^2+6y=6y\lambda\\ 2x^2+3y^2-9=0 \end{array} \right. $$ $-6xy^2+6y=6y\lambda \implies 6y(-xy+1-\lambda)=0$; from this one I can see that it vanishes when $y=0$ or $xy=1-\lambda$. If $y=0$, the first equation becomes $4x(1-\lambda)=0$, which vanishes when $x=0$ or $\lambda = 1$. Looking at the third equation tells us that $(x,y)=(0,0)$ can't be used; plugging $y=0$ into the third equation gives the possible critical points $(\pm \frac{3}{\sqrt{2}},0)$. What can I say about $(\pm \frac{3}{\sqrt{2}},0)$? I think I missed some points.
Critical points non-constrained: $$(0,0),\quad \left(\pm2^{-1/4}, \pm2^{1/4}\right)$$ Global minimum: $$f_{min}=f(0,0)=0$$ Saddle points: $$f\left(\pm2^{-1/4}, \pm2^{1/4}\right)=2\sqrt2$$ Constrained critical points: from the system $$\left\{ \begin{array}{c} 4x-2y^3=4x\lambda\\ -6xy^2+6y=6y\lambda\\ 2x^2+3y^2-9=0 \end{array} \right.$$ eliminate $\lambda$. We get $$ \left\{ \begin{array}{c} (2x^2-y^2)y^2=0\\ 2x^2+3y^2-9=0 \end{array} \right. $$ The solutions are $$\left(\pm\frac{3}{\sqrt2},0\right),\; \left(\frac{3}{2\sqrt2},-\frac32\right),\; \left(-\frac{3}{2\sqrt2},\frac32\right),\; \left(\frac{3}{2\sqrt2},\frac32\right),\; \left(-\frac{3}{2\sqrt2},-\frac32\right) $$ Global maxima: $$f_{max}=f\left(\frac{3}{2\sqrt2},-\frac32\right)=f\left(-\frac{3}{2\sqrt2},\frac32\right)=9+\frac{81}{2^{7/2}}\approx16.159456$$ Local minima: $$f_{min}=f\left(\frac{3}{2\sqrt2},\frac32\right)=f\left(-\frac{3}{2\sqrt2},-\frac32\right)=9-\frac{81}{2^{7/2}}\approx1.84054384$$ Saddle points: $$f\left(\pm\frac{3}{\sqrt2},0\right)=9$$ WolframAlpha returns in this case "local maxima". Another method: the parametric equations of the ellipse are $x=\frac{3 \cos{(\phi)}}{\sqrt{2}},\; y=\sqrt{3} \sin{(\phi)}$. We get $$f=9-9 \sqrt{6} \cos{(\phi)} {{\sin{(\phi)}}^{3}}$$ with critical points on $[0, 2\pi]$: $$0,\frac{\pi}{3},\frac{2\pi}{3},\pi,\frac{4\pi}{3},\frac{5\pi }{3}$$
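A numeric spot check of the boundary values via the same parametrization (Python sketch; the grid size is arbitrary):

```python
import math

def f(x, y):
    return 2 * x**2 - 2 * x * y**3 + 3 * y**2

# restrict f to the boundary ellipse 2x^2 + 3y^2 = 9
def on_ellipse(phi):
    return f(3 * math.cos(phi) / math.sqrt(2), math.sqrt(3) * math.sin(phi))

vals = [on_ellipse(2 * math.pi * k / 200_000) for k in range(200_000)]
assert abs(max(vals) - (9 + 81 / 2**3.5)) < 1e-6   # ≈ 16.159456
assert abs(min(vals) - (9 - 81 / 2**3.5)) < 1e-6   # ≈ 1.840544
```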
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many essentially different strings are there of length $\leq n$ and over an alphabet of size $|\Sigma| = m$? For example, $aaaaaabb \simeq ccccccdd$ essentially, because a smallest grammar algorithm would perform the exact same steps to reduce one as the other. So how can I phrase this in terms of formal strings, their lengths, etc? If the length is $0$ there is 1 string. If the length is $1$ there is $1$ string. If the length is $2$, then $t = aa$ or $t = ab$ wlog so there are $2$ strings. $n = 3 \implies t = aaa, aab, abb, aba, abc$ wlog so there are 5 strings!! How do you expect me to count this without your help :)
The equivalence classes you are trying to count are called "restricted growth strings". The sequence of counts of all RGS of length $n$ are the "Bell numbers", after the mathematician Eric Temple Bell who studied them in the 1930s. This corresponds to the count of your "essentially different strings" for the case $n=m$. See sequence A000110 in the Online Encyclopedia of Integer Sequences; there is a link to Bell's work in the reference section. For the more general question, the number of RGS of length $n$ from an alphabet of size $m$ can be computed as the sum of Stirling numbers of the second kind: \begin{equation}\sum\limits_{k=1}^m\begin{Bmatrix} n \\ k \end{Bmatrix}\end{equation}
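A sketch in Python that cross-checks the Stirling-number formula against brute-force enumeration of "essentially different" strings, where the canonical form relabels letters by first appearance (the function names are my own, not standard):

```python
from functools import lru_cache
from itertools import product

@lru_cache(maxsize=None)
def stirling2(n, k):
    # number of ways to partition an n-set into k nonempty blocks
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def count_rgs(n, m):
    return sum(stirling2(n, k) for k in range(1, m + 1))

def canon(s):
    # relabel symbols by order of first appearance: 'ccd' -> (0, 0, 1)
    order = {}
    return tuple(order.setdefault(c, len(order)) for c in s)

def brute(n, m):
    return len({canon(s) for s in product(range(m), repeat=n)})

assert [count_rgs(n, n) for n in range(1, 6)] == [1, 2, 5, 15, 52]  # Bell numbers
assert all(count_rgs(n, m) == brute(n, m) for n in range(1, 6) for m in range(1, 5))
```

The value `count_rgs(3, 3) == 5` reproduces the hand count $aaa, aab, aba, abb, abc$ from the question.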
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $ X \sim (0,1) $ and $ Y \sim (-1,2)$ be independent. Compute the distribution function of $Z=X+Y$ - how to break into cases? Let $ X \sim (0,1) $ and $ Y \sim (-1,2)$ be independent. Compute the distribution function of $Z=X+Y$ - how to break into cases? I first found the density functions: $$ f_x(t) =\begin{cases} 1 && t\in[0,1] \\ 0 && else \end{cases} $$ $$ f_y(t) =\begin{cases} \frac{1}{3} && t\in[-1,2] \\ 0 && else \end{cases} $$ Now: $ F_z(t)=P(Z\leq t)=P(X+Y\leq t)=\int_{-1}^{2}\int_{0}^{t-y}\frac{1}{3}dxdy$ But now I am stuck with breaking the result into cases. How could it be done? I simply can't understand how to break that into cases when we have two differently distributed variables? With one variable I would usually draw the function and then break to cases according to it's behavior, but how could it be done here? Thanks
Assuming that you know that the density of the sum of two independent random variables is the convolution of their densities, we have $$f_Z(t)= \int_{-\infty}^{\infty}f_X(t-\tau)f_Y(\tau) \ d \tau = \frac{1}{3} \int_{-1}^{2}f_X(t-\tau) \ d \tau = \frac{1}{3}\int_{t-2}^{t+1}f_X(x) \ dx$$ (note this is the density of $Z$; the distribution function follows by integrating it). Now, we have cases according to the integration limits and the PDF of $X$, namely: 1) If $t+1<0$, i.e. $t<-1$, the integral evaluates to zero. 2) If $t-2>1$, i.e. $t>3$, the integral evaluates to zero. 3) If $-1 < t < 0$, then $0<t+1<1$ and $t-2<0$, so $$f_Z(t)=\frac{1}{3}\int_{0}^{t+1}f_X(x) \ dx = \frac{1}{3}(t+1)$$ 4) If $2 < t < 3$, then $0<t-2<1$ and $t+1>1$, so $$f_Z(t)=\frac{1}{3}\int_{t-2}^{1}f_X(x) \ dx = \frac{1}{3}(3-t)$$ 5) If $0 \le t \le 2$, the window $[t-2,t+1]$ contains all of $[0,1]$, so $$f_Z(t)=\frac{1}{3}\int_{t-2}^{t+1}f_X(x) \ dx =\frac{1}{3}\int_{0}^{1} 1 \ dx =\frac{1}{3}$$
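A simulation cross-check of the resulting piecewise density (Python sketch; sample size and test points are arbitrary):

```python
import random

random.seed(1)

def f_Z(t):
    # the piecewise density of Z = X + Y derived above
    if -1 < t < 0:
        return (t + 1) / 3
    if 0 <= t <= 2:
        return 1 / 3
    if 2 < t < 3:
        return (3 - t) / 3
    return 0.0

n = 200_000
zs = [random.uniform(0, 1) + random.uniform(-1, 2) for _ in range(n)]

# compare a histogram estimate of the density with the formula
h = 0.1
for t in (-0.5, 0.5, 1.5, 2.5):
    est = sum(t - h < z < t + h for z in zs) / (n * 2 * h)
    assert abs(est - f_Z(t)) < 0.02
```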
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Rational expression $K(t)$ is a transcendental extension of $K$? I know this question was asked before, but none of the previous threads end up answering the question satisfactorily enough for me. So let me try to summarize my problems succinctly: * *The notation $\mathbb{Q}(\sqrt{2})$ is commonly used to denote the smallest sub-field generated by $\mathbb{Q} \cup \{\sqrt{2}\}$. However, for a general sub-field $K$, $K[t]$ is defined to be the ring of polynomials over $K$, followed by $K(t)$, which is the field of rational expressions over $K$. However, is $K(t)$ simply notation, or does it follow the convention of $\mathbb{Q}(\sqrt{2})$? If the latter, how does this relate with rational functions over $K$? *I am not understanding the proof for how $K(t)$ is a transcendental extension of $K$, which goes as follows: If $p$ is a polynomial over $K$ s.t. $p(t)=0$ then $p=0$ by definition of $K(t)$, so the extension is transcendental. I understand that to show transcendence over $K$ we shall assume some element $t =\frac{r(s)}{q(s)}\in K(s)$ satisfies $p(t) =0, p\in K[t]$ and show that $p$ must be identically $0$ as a result. However, where are we using the definition of $K(t)$ to show this is so? The other threads are linked here: * *What is $K(t)$ and why is a transcendental extension of $K$? *Show a rational function is transcendental over a field.
Well, if $t$ is algebraic over $K$, then $K[t]=K(t)$, i.e., ring extension equals field extension. If $t$ is transcendental over $K$, then $K[t]$ is a polynomial ring and $K(t)$ the field of rational functions of $t$ over $K$. The construction of $K(t)$ from $K[t]$ is the same as the construction of $\Bbb Q$ from $\Bbb Z$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3108974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculating $\int_{0}^{\pi}\sinh\left(\sin\left(x\right)\right)\,\mathrm{d}x$ I was wondering whether there exists a beautiful exact value for $$ \int_{0}^{\pi}\sinh\left(\sin\left(x\right)\right)\,\mathrm{d}x$$ The integrand has a nice graph over $\left[0,\pi\right]$, but I can't manage to compute the integral.
If you know the Beta and Gamma functions well, you can see the sequence $$a_n:=\int_0^\pi\sin^{2n+1}x\,\mathrm{d}x=2\int_0^{\pi/2}\sin^{2n+1}x\,\mathrm{d}x=\operatorname{B}\left(\frac{1}{2},\,n+1\right)=\frac{n!\sqrt{\pi}}{\Gamma\left(n+\frac{3}{2}\right)}$$satisfies$$a_0=2,\,\frac{a_{n+1}}{a_n}=\frac{2(n+1)}{2n+3}$$so by induction$$a_n=2^{n+1}\prod_{i=1}^{n}\frac{i}{2i+1}=\frac{2^{2n+1}n!^2}{(2n+1)!}.$$So your integral is $$2\sum_{n\ge 0}\frac{n!^2}{(2n+1)!^2}4^n=\pi L_0(1),$$with $L_n(x)$ the modified Struve function. (I'm not an expert on this function myself, but Wolfram Alpha gives the result, and I suspect it can be derived from the previous link's Eq. (1) using Legendre's duplication formula.)
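A numeric cross-check of the series against direct quadrature (Python sketch using a hand-rolled Simpson rule to stay dependency-free):

```python
import math

# Simpson's rule for the integral of sinh(sin x) over [0, π] (n even)
n = 2000
h = math.pi / n
ys = [math.sinh(math.sin(k * h)) for k in range(n + 1)]
simpson = h / 3 * (ys[0] + ys[-1] + 4 * sum(ys[1:-1:2]) + 2 * sum(ys[2:-1:2]))

# the series 2 * sum 4^k k!^2 / (2k+1)!^2 from the answer
series = 2 * sum(4**k * math.factorial(k)**2 / math.factorial(2 * k + 1)**2
                 for k in range(30))

assert abs(simpson - series) < 1e-10
print(series)  # ≈ 2.23129
```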
{ "language": "en", "url": "https://math.stackexchange.com/questions/3109161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Proof that $\lim_{x\downarrow 0}x^me^{\frac{-1}{x}} =0,m\in\mathbb{Z}$ with L'Hôpital For the case that $m\geq0$ I don't need to apply L'Hôpital. Let $m<0$. We have $x^m=\frac{1}{x^{-m}}$. We also know that $x^{-m}\rightarrow 0$ as $x\rightarrow 0$. We also know that $e^{-\frac{1}{x}}<\epsilon\iff x<-\frac{1}{\ln \epsilon}$. Therefore: $e^{-\frac{1}{x}}\rightarrow 0 $ as $x\rightarrow 0$. Since $x^m$ and $e^{-\frac{1}{x}}$ are both smooth (infinitely differentiable) on $\mathbb{R^+}$, I can use L'Hôpital. I have the hunch that I have to use L'Hôpital $-m$ times, but I don't know what the expression would look like then. Here is what I have tried for the first derivative: $(\frac{e^{-\frac{1}{x}}}{{x^{-m}}})^{'}=\frac{e^{-\frac{1}{x}}\frac{1}{x^2}x^{-m}-e^{-\frac{1}{x}}(-m)x^{-m-1}}{{x^{-2m}}}=x^{-m}\frac{e^{-\frac{1}{x}}\frac{1}{x^2}-e^{-\frac{1}{x}}(-m)x^{-1}}{{x^{-2m}}}=\frac{e^{-\frac{1}{x}}\frac{1}{x}-e^{-\frac{1}{x}}(-m)}{{x^{-m-1}}}=e^{-\frac{1}{x}}\frac{\frac{1}{x}+m}{x^{-m-1}}=e^{-\frac{1}{x}}\frac{1+xm}{x^{-m}}=\frac{e^{-\frac{1}{x}}}{{x^{-m}}}+\frac{me^{-\frac{1}{x}}}{{x^{-m-1}}}$ But this gets me nowhere, because I did not get rid of a power of $x^{-m}$. Please help me figure out where the problem is and what the term would look like after I have differentiated it $m$ times.
Direct application of L'Hospital's Rule does not provide a tractable way forward, as mentioned in the OP. To see this, we begin by writing (for $m<0$, $|m|\in\mathbb{N}$) $$\begin{align} \lim_{x\to0^+}\left(x^me^{-1/x}\right)=\lim_{x\to0^+}\left(\frac{e^{-1/x}}{x^{|m|}}\right)\tag1 \end{align}$$ But, differentiating $|m|$ times, we find that $$\begin{align} \lim_{x\to0^+}\left(\frac{e^{-1/x}}{x^{|m|}}\right)&=\lim_{x\to0^+}\frac{P_m(1/x)e^{-1/x}}{(|m|!)} \end{align}$$ where $P_m(x)$ is a polynomial of degree $2|m|$. The result of this has actually increased the difficulty in evaluating the limit of interest. So, let's pursue alternative ways forward. Since we can represent $e^x$ by its Taylor series, $e^x=\sum_{n=0}^\infty \frac{x^n}{n!}$, then clearly for $x>0$ and any integer $|m|$, $e^x\ge \frac{x^{|m|+1}}{(|m|+1)!}$. Therefore, we see that $$\begin{align} \left|x^m e^{-1/x}\right|&=\left|\frac{x^m}{e^{1/x}}\right|\\\\ &\le \left|\frac{x^m}{\frac{(1/x)^{|m|+1}}{(|m|+1)!}}\right|\\\\ &=(|m|+1)!\,x^{m+|m|+1}\tag2 \end{align}$$ The right-hand side of $(2)$ approaches $0$ as $x\to 0^+$ and we are done!
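A quick numeric check of the bound $(2)$ (Python sketch; the sample values of $m$ and $x$ are arbitrary):

```python
import math

def bound(x, m):
    # right-hand side of (2): (|m|+1)! * x^(m+|m|+1), which is (|m|+1)! * x for m < 0
    return math.factorial(abs(m) + 1) * x ** (m + abs(m) + 1)

for m in (-1, -3, -5):
    for x in (0.5, 0.1, 0.01):
        val = x**m * math.exp(-1 / x)
        assert 0 <= val <= bound(x, m)
        assert bound(x, m) == math.factorial(abs(m) + 1) * x  # since m < 0
```

Since the bound is linear in $x$ for every negative $m$, it forces the limit to $0$.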
{ "language": "en", "url": "https://math.stackexchange.com/questions/3109300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Continuous function between topological spaces mapping to a closed set A function between two topological spaces is continuous if the preimage of every open set is open. I am getting confused with constant functions. $$f:(X,\tau_X)\to(Y,\tau_Y) \text{ is continuous if } \forall V\subseteq Y \text{ open, } f^{-1}(V) \text{ is open in } X $$ Let $f:(\mathbb{R},st)\to(\mathbb{R},st)$ be the constant function $f(x)=x_0$, with $st$ denoting the standard topology. This is clearly continuous. To show this, let $V$ be open in $(\mathbb{R},st)$. If $x_0 \in V$ then $f^{-1}(V)=\mathbb{R}$; if $x_0\notin V$ then $f^{-1}(V)=\varnothing$, both of which are open, thus $f$ is continuous. I am confused as to why there is such a $V$ open. The image of any non-empty set under $f$ is $\{x_0\}$, which is closed.
The statement says that $f$ is continuous if and only if for each open subset $V$ of $\mathbb R$, $f^{-1}(V)$ is also an open set. Note that it says “every open subset $V$ of $\mathbb R$”, not “every subset $V$ of $f(\mathbb R)$ which happens to be an open subset of $\mathbb R$”. So, if, for instance, $f(x)=2$ and $V=(1,20)$, then $V$ is an open subset of $\mathbb R$ and $f^{-1}(V)=\mathbb R$, which is an open subset of $\mathbb R$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3109452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I find the coefficient of $x^6$ in $(1+x+\frac{x^2}{2})^{10}$ efficiently with combinatorics? To find the coefficient of $x^6$ in $(1+x+\frac{x^2}{2})^{10}$, I used factorization on $(1+x+\frac{x^2}{2})$ to obtain $\frac{((x+(1+i))(x+(1-i)))}{2}$, then reduced the question to finding the coefficient of $x^6$ in $(x+(1+i))^{10}(x+(1-i))^{10}$ and dividing by $2^{10}$. Then, we find that the coefficient of $x^6$ would be: $$\sum_{k=0}^{6} \binom{10}{6-k} \binom{10}{k} (1-i)^{10-(6-k)} (1+i)^{10-k}$$ (here $k$ is the summation index and $i$ the imaginary unit). With the knowledge that $(1-i)(1+i)=2$, I simplified to $$\binom{10}{6}\binom{10}{0}2^4((1-i)^6+(1+i)^6)+\binom{10}{5}\binom{10}{1}2^5((1-i)^4+(1+i)^4)+\binom{10}{4}\binom{10}{2}2^6((1-i)^2+(1+i)^2)+\binom{10}{3}\binom{10}{3}2^7$$ Note: the formula $(1+i)^x+(1-i)^x$ gives: $ 2(2^{\frac{x}{2}}) \cos(\frac{x\pi}{4})$ After simplifying and reapplying the division by $2^{10}$, I get $(\frac{0}{1024}) + (-8)(2520)(\frac{32}{1024}) + (\frac{0}{1024}) + (120)(120)(\frac{2^7}{1024})$, which gives $0-630+0+1800,$ which is 1170, and I checked this over with an expression expansion calculator. If the original expression were $(1+x+x^2)^{10}$, I would have used binomials to find the answer; however, the $x^2$ is replaced by $\frac{x^2}{2}$. My question is whether anyone has a combinatorics solution to this question, rather than just algebra. It would be nice if the solution did not require complex numbers.
Write the expression as $\frac{1}{2^{10}}\left((x+1)^2+1\right)^{10}$. Hint if you would like to do it using combinatorial arguments: the coefficient of $x^6$ in $(1+x^2)^{n}$ would give you the number of tuples $a_1 + a_2 + \cdots + a_{n} = 6$ such that $a_i\in \{0,2\}$. This should be easy: just note that exactly three of the $a_i$s must be $2$, which gives $n\choose 3$ as the answer to the example here. To add to the hint, note that if you write $x+1=t$, then you need to collect the coefficient of $x^6$, which you get from the coefficient of $t^{2j}$ for each $2j \ge 6$, since $t^{2j}=(1+x)^{2j}$ contributes $\binom{2j}{6}$ to the coefficient of $x^6$.
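Both routes can be checked by brute force (Python sketch; `Fraction` keeps the $\frac{1}{2}$ coefficients exact):

```python
from fractions import Fraction
from math import comb

# route 1: expand (1 + x + x^2/2)^10 exactly by repeated multiplication
poly = [Fraction(1)]
base = [Fraction(1), Fraction(1), Fraction(1, 2)]
for _ in range(10):
    new = [Fraction(0)] * (len(poly) + 2)
    for i, a in enumerate(poly):
        for j, b in enumerate(base):
            new[i + j] += a * b
    poly = new
assert poly[6] == 1170

# route 2: ((x+1)^2 + 1)^10 / 2^10, where each (x+1)^(2j) with 2j >= 6
# contributes C(2j, 6) to the x^6 coefficient
alt = sum(comb(10, j) * comb(2 * j, 6) for j in range(3, 11)) / 2**10
assert alt == 1170
```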
{ "language": "en", "url": "https://math.stackexchange.com/questions/3109588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why is the Clifford group a group? Let $C(Q)$ denote the Clifford algebra of a vector space $V$ with respect to a quadratic form $Q:V \rightarrow \Bbb R$. Hence we have the relation $w^2 = Q(w) \cdot 1$ for $w \in V$. Let $\alpha:C(Q) \rightarrow C(Q)$ be the canonical automorphism: $\alpha^2=\mathrm{id}$ and $\alpha(v)=-v$ for $v \in V$. The Clifford group of $Q$ is $$\Gamma (Q) = \{ x \in C(Q)^* \mid \alpha(x) \cdot v \cdot x^{-1} \in V \text{ for all } v \in V \}$$ How is this set closed under inverses?
I will assume, as it is usually defined, that $C(Q)^*$ is the set of invertible elements in $C(Q)$. Then, the existence of an element $x^{-1}\in C(Q)$ such that $xx^{-1}=x^{-1}x=1$ is, by definition, guaranteed for each $x\in C(Q)^*$. What we need to show is that $x^{-1}$ is in fact an element of $\Gamma(Q)$. First, for each $x\in \Gamma(Q)$, the function $\sigma(x):V\rightarrow V$ given by $\sigma(x)(v)=\alpha(x)vx^{-1}$ is a vector space isomorphism. It is clear that it is a linear transformation, since the Clifford product is bilinear. Moreover, $$ \sigma(x)(v)=0 \;\;\Rightarrow\;\; \alpha(x)vx^{-1}=0\;\;\Rightarrow\;\; v=0,$$ where we simply multiplied (using the Clifford product) on the right by $x$ and on the left by $\alpha(x)^{-1}$. Finally, since $\sigma(x)$ is an isomorphism, for each $v\in V$ we have a $w\in V$ such that $\alpha(x)wx^{-1}=v$. Here, we remember that since $\alpha$ is an automorphism, we have $\alpha(x)^{-1}=\alpha(x^{-1})$. It then follows that $$ v=\alpha(x)wx^{-1} \;\;\Rightarrow\;\; vx=\alpha(x)w \;\; \Rightarrow\;\; \alpha(x^{-1})vx=w\in V, $$ that is, $x^{-1}\in \Gamma(Q)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3109711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving "$\forall_{\epsilon>0} \exists_{N \in \mathbb{N}} \forall_{n \geq N}: x^{n} < \epsilon$" for $0<x<1$ I'm trying to prove a rather simple analysis statement but I think I'm overlooking something. For $0<x<1$ I need to prove $$\forall_{\epsilon>0} \exists_{N \in \mathbb{N}} \forall_{n \geq N}: x^{n} < \epsilon.$$ I now have the following: We use theorem 1.5, which states \begin{equation}\label{theorem_1.5} \forall_{\epsilon>0}, \exists_{N \in \mathbb{N}} : \frac{1}{N} < \epsilon. \end{equation} Also we make use of the fact that if $0<x<1$ it follows that \begin{equation}\label{equation_1} (N+1)x^{N}<\frac{1}{1-x}. \end{equation} Suppose $0<x<1$ and let $\epsilon>0$ be given. Now let $\epsilon'=\frac{1}{1-x}\epsilon$. Then from equation 1, equation 2, the fact that $N+1>N$, and $x^n \leq x^N$ for $n \geq N$, it follows that \begin{equation} \begin{split} (N+1)x^{N} &< \frac{1}{1-x},\\ x^{N} &< \frac{1}{1-x}\frac{1}{N+1} < \frac{1}{1-x}\frac{1}{N}\\ x^{n} &< \frac{1}{1-x} \epsilon\\ x^{n} &< \epsilon'. \end{split} \end{equation} My gut feeling says there is something wrong with my reasoning. It might be the fact that I predetermine $\epsilon'$, but I'm not sure. Any thoughts would be appreciated.
Put $y=1/x$, then $y>1$, hence $y=1+z$ for some $z>0$. Then we get, with Bernoulli: $$\frac{1}{x^n}=y^n=(1+z)^n \ge 1+nz >nz.$$ Hence $$ x^n < \frac{1}{nz}.$$ Can you proceed ?
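The resulting explicit choice of $N$ can be tested numerically (Python sketch; `N_for` is a hypothetical helper name implementing the bound $x^n < \frac{1}{nz}$):

```python
import math

# Bernoulli: (1+z)^n >= 1 + nz > nz with z = 1/x - 1 > 0,
# hence x^n < 1/(n z); pick any N with 1/(N z) <= eps
def N_for(x, eps):
    z = 1 / x - 1
    return math.floor(1 / (eps * z)) + 1

for x in (0.9, 0.5, 0.1):
    for eps in (0.1, 1e-3, 1e-6):
        assert x ** N_for(x, eps) < eps
```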
{ "language": "en", "url": "https://math.stackexchange.com/questions/3109807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find $A^5-4A^4+7A^3+11A^2-A-10I$ If $$A= \begin{bmatrix} 1 & 4 \\ 2 & 3 \end{bmatrix},$$ then find $A^5-4A^4+7A^3+11A^2-A-10I$, where $I$ is the identity matrix of order $2$. The answer should come in terms of $A$ and $I$. My approach: I thought it might end up in a pattern, so I found $$A^2=\begin{bmatrix} 9 & 16 \\ 8 & 17 \end{bmatrix}$$ and similarly $A^3$, but got no such pattern. I don't want to evaluate it all the way up to $A^5$. Any better approach or solution is much appreciated.
The characteristic polynomial of that matrix is $\lambda^2-4\lambda-5$. So, by the Hamilton-Cayley theorem, $A^2-4A-5\operatorname{Id}=0$. In other words, $A^2=4A+5\operatorname{Id}$. Can you take it from here?
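If it helps, the reduction can be carried out and verified numerically (a sketch using numpy: dividing $p$ by the characteristic polynomial leaves a linear remainder, which by Cayley-Hamilton equals $p(A)$):

```python
# Reduce p(A) = A^5 - 4A^4 + 7A^3 + 11A^2 - A - 10I modulo the characteristic
# polynomial chi(t) = t^2 - 4t - 5 (Cayley-Hamilton gives chi(A) = 0), so
# p(t) = chi(t) q(t) + r(t) with deg r < 2 implies p(A) = r(A) = r[0]*A + r[1]*I.
import numpy as np

A = np.array([[1, 4], [2, 3]])
I = np.eye(2, dtype=int)

p = [1, -4, 7, 11, -1, -10]          # coefficients of p, highest degree first
chi = [1, -4, -5]                    # t^2 - 4t - 5
q, r = np.polydiv(p, chi)            # polynomial long division

pA = sum(c * np.linalg.matrix_power(A, k)
         for k, c in zip(range(len(p) - 1, -1, -1), p))
rA = r[0] * A + r[1] * I             # the linear remainder evaluated at A

assert np.allclose(pA, rA)
print(r)                             # answer in terms of A and I
```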
{ "language": "en", "url": "https://math.stackexchange.com/questions/3109939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that the function $f(x)=e^{-\frac{1}{x}},\text{ if }x>0\wedge f(x)=0,\text{ if }x\leq 0$ is smooth and all derivatives vanish at $x=0$. I had an idea but I think it is wrong. I said $e^z$ is smooth because the $n$-th derivative of $e^z$ is always $e^z$, and then I substituted $z$ with $-\frac{1}{x}$. But just calculating the first derivative shows me that I am probably wrong, because: $(e^{z})^{'}=e^{z}=e^{-\frac{1}{x}}$ But $(e^{z})^{'}=(e^{-\frac{1}{x}})^{'}=\frac{e^{-\frac{1}{x}}}{x^2}$ Then also $e^{-\frac{1}{x}}=\frac{e^{-\frac{1}{x}}}{x^2}\iff 1=\frac{1}{x^2}$ which is not true. Why does the substitution not work? I have also tried to see a pattern in the derivatives to give an explicit formula for the $n$-th derivative. Using the product formula for derivatives $$(e^{-\frac{1}{x}})^{'}=(e^{-\frac{1}{x}})(-\frac{1}{x})^{'}\overset{'}{\rightarrow}(e^{-\frac{1}{x}})^{'}(-\frac{1}{x})^{'}+(e^{-\frac{1}{x}})(-\frac{1}{x})^{''}=(e^{-\frac{1}{x}})^{'}(-\frac{1}{x})^{'}+(e^{-\frac{1}{x}})(-\frac{2}{x^3})=(e^{-\frac{1}{x}})^{'}(-\frac{1}{x})^{'}+(e^{-\frac{1}{x}})(\frac{1}{x^2})(-\frac{2}{x})=(e^{-\frac{1}{x}})^{'}(-\frac{1}{x})^{'}+(e^{-\frac{1}{x}})^{'}(-\frac{2}{x})=(e^{-\frac{1}{x}})^{'}((-\frac{1}{x})^{'}+(-\frac{2}{x}))\rightarrow ...$$ I have the hunch that the $n$-th derivative can be written in a recursive form like $$(e^{-\frac{1}{x}})^{(n)}\text{ [ this is the derivative, not the nth power ] }=(e^{-\frac{1}{x}})^{(n-1)}\cdot r,$$ where $r$ is the remainder, and I am trying to figure out what it would look like. From my observations so far I guess that it looks like the sum $\sum_{k=1}^{n}(-1)^k\frac{n-k}{x^k}$. But I need some help to prove it. My attempts so far have not been very fruitful, which is why I am asking for your support. In order to show that the derivative vanishes at $0$ I first of all need to find out what the derivative for positive $x$ looks like.
Then the plan would be to show that the limit from the left as well as from the right side is $0$.
When $x<0$ then $f^{(n)}(x)=0$ for all $n\geq0$. When $x>0$ then $f^{(n)}(x)=p_n(1/x)e^{-1/x}$ for some polynomial $t\mapsto p_n(t)$. The latter is true for $n=0$ with $p_0(t)\equiv1$, and $$f^{(n+1)}(x)={d\over dx}\biggl(p_n(1/x)e^{-1/x}\biggr)=\bigg({-1\over x^2}{p_n}'(1/x)+p_n(1/x){1\over x^2}\biggr)e^{-1/x}=:p_{n+1}(1/x)e^{-1/x}$$ for $x>0$ and $n\geq0$. Finally I claim that $f^{(n)}(0)=0$ for all $n\geq0$, and that $f^{(n)}$ is continuous at $0$. For $n=0$ this is true by definition of $f$. Assume that it is true for some $n\geq0$. Then $$\lim_{x\to0+}{f^{(n)}(x)-f^{(n)}(0)\over x}=\lim_{x\to0+}{1\over x} p_n(1/x)e^{-1/x}=0\ ,$$ and trivially $$\lim_{x\to0-}{f^{(n)}(x)-f^{(n)}(0)\over x}=0\ .$$ This proves that $f^{(n+1)}(0)=0$.
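A symbolic spot-check of both claims for small $n$ (illustration only, using sympy):

```python
# Spot-check with sympy: for small n, the n-th derivative of e^{-1/x} on
# x > 0 is a rational function of x times e^{-1/x} (i.e. p_n(1/x) e^{-1/x}),
# and each derivative tends to 0 as x -> 0+.
import sympy as sp

x = sp.symbols('x', positive=True)
f = sp.exp(-1 / x)

for n in range(1, 5):
    dn = sp.diff(f, x, n)
    ratio = sp.simplify(dn * sp.exp(1 / x))   # divide out the e^{-1/x} factor
    assert ratio.is_rational_function(x)      # so dn = p_n(1/x) e^{-1/x}
    assert sp.limit(dn, x, 0, '+') == 0       # one-sided limit at 0 vanishes
print("checked n = 1..4")
```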
{ "language": "en", "url": "https://math.stackexchange.com/questions/3110098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Weak Convergence Lemma - Is Banach needed? Lemma. Let $X$ be a normed space. * *If $x_n \rightharpoonup x$ in $X$ and $x_n^* \to x^*$ in $X^*$, then $\lim_{n \to \infty} x_n^*(x_n) = x^*(x)$. *If $X$ is even Banach, then $x_n \to x$ in $X$ and $x_n^* \overset{*}\rightharpoonup x^*$ in $X^*$ implies $\lim_{n \to \infty} x_n^*(x_n) = x^*(x)$. The lecture notes I work with, had this wrong at first - it didn't include Banach for the second statement. (The proof uses boundedness of $(x_n^*)$ which relies on Banach-Steinhaus!) Is Banach really needed, though? Does anyone have a counterexample?
Here is a counter-example for the second statement without $X$ being complete. The first statement is valid for non-complete $X$ (by means of embedding into the Banach space $X^{**}$). Take $X=c_{00}$ provided with the $l^2$-norm. Define $x_n = n^{-1}e_n$ and $$ x_n^*(y):= n y_n. $$ Then $x_n^*\rightharpoonup^*0$, $x_n\to0$, but $x_n^*(x_n)=1 $ for all $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3110212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
probability that at least two parts will be defective Question The probability that a part manufactured by a company will be defective is $0.05$. If $15$ such parts are selected randomly and inspected, then the probability that at least two parts will be defective is ________. (round off to two decimal places) My Approach $$P(\text{at least 2 parts defective})=1-P(\text{no part defective}) -P(\text{1 part defective})$$ $$=1-(0.95)^{15}-((0.95)^{14} \times 0.05)$$ Is it correct?
Not quite, $$P=1-(0.95)^{15}-{15\choose 1}((0.95)^{14} \times 0.05)$$
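A quick numeric check of the corrected formula (illustration only; the $\binom{15}{1}$ factor accounts for which of the 15 parts is the defective one):

```python
# P(at least 2 defective) = 1 - P(0) - P(1) for X ~ Binomial(15, 0.05).
from math import comb

n, p = 15, 0.05
prob = 1 - comb(n, 0) * (1 - p)**n - comb(n, 1) * p * (1 - p)**(n - 1)

# cross-check against the full binomial sum over k >= 2
total = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(2, n + 1))
assert abs(prob - total) < 1e-12
print(round(prob, 2))   # rounded to two decimal places
```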
{ "language": "en", "url": "https://math.stackexchange.com/questions/3110362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confused by ODE $f''(x)=\lambda f(x).$ So the solutions to the characteristic equation are $r_{1,2}=\pm\sqrt{\lambda}.$ We thus have three cases to consider in order to find the solution of $f''=\lambda f.$ However, for this question the only relevant one is when the roots become complex, that is Case 1: $\lambda < 0.$ Theorem: a second order homogeneous ODE of the form $ay''+by'+cy=0$ with complex roots $r_{1,2}=a\pm bi$ for the characteristic equation has solutions of the form $$y(x)=e^{ax}(A\cos(bx)+B\sin(bx)).\tag1$$ My prof writes that in my case, I have solutions of the form $$f(x)=A\cos\left(\sqrt{|\lambda|}x\right)+B\sin\left(\sqrt{|\lambda|}x\right).\tag{2}$$ Questions: 1) How is this possible if we don't know what the complex number is? That is, she doesn't have $a$ and $b$ in order to plug into $(1).$ 2) Why has my prof omitted the $e^{ax}$ factor? 3) Why do we need the absolute value bars around the $\lambda$'s in $(2)?$
1) First, there's a conflict of notation: the $a$ in $ay''+by'+cy=0$ is not the same $a$ appearing in $r_{\pm}=a\pm bi$. That being said, $f''=\lambda f\Longleftrightarrow f''-\lambda f=0$. Its characteristic equation is $r^2-\lambda=0$, which can be easily solved. 2) She didn't. It's just that in this case, the solutions to the characteristic equation have their real parts equal to $0$. 3) Because the solutions to the characteristic equation are $r_{\pm}=\pm i\sqrt{|\lambda|}$.
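A quick sympy check of points 2) and 3) (illustration):

```python
# Check that for lambda < 0 the roots of r^2 = lambda are purely imaginary
# (so the e^{ax} factor in (1) is e^{0x} = 1), and that the proposed form (2)
# really solves f'' = lambda f.
import sympy as sp

x, A, B, r = sp.symbols('x A B r')
lam = sp.symbols('lam', negative=True)
w = sp.sqrt(-lam)                        # sqrt(|lambda|) since lam < 0

roots = sp.solve(r**2 - lam, r)
assert all(sp.re(rt) == 0 for rt in roots)   # zero real part

f = A * sp.cos(w * x) + B * sp.sin(w * x)
assert sp.simplify(sp.diff(f, x, 2) - lam * f) == 0
print(roots)
```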
{ "language": "en", "url": "https://math.stackexchange.com/questions/3110467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I create a seam for an ellipse? I want to create a seam for an ellipse. (Mathematically I think this means that there are constant-length normal lines between the ellipse and the curve that creates the seam, but I'm not 100% sure that definition is accurate.) I know I can't do this by creating another ellipse with the axes reduced by the seam width because the distance between the two ellipses isn't constant. (i.e. in this image the green line is shorter than the two red lines.) I can convert ellipses into cubic Beziers. Is there a way to calculate a modification to the control points of the inner Bezier to make the distance between the outer ellipse and inner Bezier constant?
What you've described is typically called an "offset curve". The offset curve for an ellipse is ... not nice. The points of the ellipse itself satisfy a nice quadratic like $$ \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1 $$ while those of the offset curve satisfy...a polynomial of degree 8 perhaps? I can't recall, but it's certainly not quadratic, hence not an ellipse (as you observe). But more important, it's also not described by a cubic spline, so that approach won't work either. I'm sorry to say that you just have to do the calculus and work things out: At the point $(x, y)$ of the ellipse above, a normal vector is $(\frac{2x}{a^2}, \frac{2y}{b^2})$, so the unit normal is $$ n(x, y) = \frac{1}{\sqrt{\frac{x^2}{a^4} + \frac{y^2}{b^4}}}(\frac{x}{a^2}, \frac{y}{b^2}) $$ and your offset curve is at location $$ (x, y) + s n(x, y), $$ where $s$ is the offset distance (positive for "larger than the original ellipse"; negative for "smaller than"). That only works for relatively small values of negative $s$; for larger negative numbers, you get "focal points" and things get messy.
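A numeric sanity check of the unit-normal construction (illustration; it verifies that $n$ has unit length, is perpendicular to the ellipse tangent, and that the offset point lies at distance $|s|$):

```python
# Check that n(x, y) as defined above is a unit vector orthogonal to the
# ellipse tangent, so (x, y) + s*n(x, y) sits at distance |s| from the
# ellipse point it was offset from.
import numpy as np

a, b, s = 3.0, 2.0, 0.5
t = np.linspace(0, 2 * np.pi, 200)
x, y = a * np.cos(t), b * np.sin(t)

norm = np.sqrt(x**2 / a**4 + y**2 / b**4)
nx, ny = (x / a**2) / norm, (y / b**2) / norm   # unit normal
tx, ty = -a * np.sin(t), b * np.cos(t)          # tangent of the parametrization

assert np.allclose(nx**2 + ny**2, 1)            # unit length
assert np.allclose(nx * tx + ny * ty, 0)        # perpendicular to the tangent
ox, oy = x + s * nx, y + s * ny                 # offset-curve points
assert np.allclose(np.hypot(ox - x, oy - y), s)
```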
{ "language": "en", "url": "https://math.stackexchange.com/questions/3110592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
prove that : $\dfrac{a^2}{2}+\dfrac{b^3}{3}+\dfrac{c^6}{6} \geq abc$ for $a ,b ,c \in \mathbb{R}^{>0}$ prove that : $\dfrac{a^2}{2}+\dfrac{b^3}{3}+\dfrac{c^6}{6} \geq abc$ for $a ,b ,c \in \mathbb{R}^{>0}$ . I think that must I use from $\dfrac{a^2}{2}+\dfrac{b^2}{2} \geq ab$ but no result please help me .!
I'd like to add a calculus approach to the above answers. If we define the function $$ f(a,b,c) = \dfrac{a^2}{2}+\dfrac{b^3}{3}+\dfrac{c^6}{6} - abc $$ and look for the positions in $\mathbb{R}_{>0}^3$ where the gradient vanishes, we find $$ \overline{\triangledown } f = 0 \iff (a-bc,\; b^2-ac,\; c^5-ab)=(0,0,0) \iff (a,b,c)=(t^3,t^2,t) \text{ for some } t>0, $$ and $f$ vanishes along this whole curve: $f(t^3,t^2,t)=\frac{t^6}{2}+\frac{t^6}{3}+\frac{t^6}{6}-t^6=0$. In particular the critical point $(1,1,1)$ is not isolated, which is why the Hessian there has a zero eigenvalue (in the direction of the curve) and the second derivative test gives no information. To conclude that these critical points are global minima, the cleanest route is weighted AM-GM with weights $\frac12,\frac13,\frac16$: $$ \dfrac{a^2}{2}+\dfrac{b^3}{3}+\dfrac{c^6}{6} \geq (a^2)^{1/2}(b^3)^{1/3}(c^6)^{1/6} = abc, $$ which shows $$ f(a,b,c) = \dfrac{a^2}{2}+\dfrac{b^3}{3}+\dfrac{c^6}{6} - abc \geq 0 $$ for all $(a,b,c) \in \mathbb{R}_{>0}^3$, with equality exactly on the curve $(t^3,t^2,t)$.
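A quick brute-force check of the inequality (illustration only): it confirms $f\geq 0$ on a grid of positive triples, and that $f$ vanishes along the curve $(t^3,t^2,t)$, so the minimum value $0$ is attained on a whole curve rather than at a single point.

```python
# Sanity check: f(a,b,c) = a^2/2 + b^3/3 + c^6/6 - abc is nonnegative on a
# grid of positive triples, and vanishes along the curve (t^3, t^2, t).
import itertools

def f(a, b, c):
    return a * a / 2 + b**3 / 3 + c**6 / 6 - a * b * c

vals = [0.2 * k for k in range(1, 16)]          # grid over (0, 3]
m = min(f(a, b, c) for a, b, c in itertools.product(vals, repeat=3))
assert m >= -1e-9                               # no counterexample on the grid

for t in (0.5, 1.0, 2.0):
    assert abs(f(t**3, t**2, t)) < 1e-9         # equality cases
print(m)
```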
{ "language": "en", "url": "https://math.stackexchange.com/questions/3110691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 2 }
If $\cos\theta=\frac{\cos\alpha+\cos \beta}{1+\cos\alpha\cos\beta}$, then prove that one value of $\tan(\theta/2)$ is $\tan(\alpha/2)\tan(\beta/2)$ If $$\cos\theta = \frac{\cos\alpha + \cos \beta}{1 + \cos\alpha\cos\beta}$$ then prove that one of the values of $\tan{\frac{\theta}{2}}$ is $\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}$. I don't even know how to start this question. Pls help
I think I have found out how to approach it. I hope this proof is satisfactory. Using the half angle formula, $$\tan{\frac{\theta}{2}} = \pm \sqrt{\frac{1 - \cos\theta}{1 + \cos\theta}} \longrightarrow \text{eq.1}$$ Evaluating $\frac{1 - \cos\theta}{1 + \cos\theta}$ first, $$\frac{1 - \cos\theta}{1 + \cos\theta} = \frac{1 - (\frac{\cos\alpha + \cos\beta}{1 + \cos\alpha\cos\beta})}{1 + (\frac{\cos\alpha + \cos\beta}{1 + \cos\alpha\cos\beta})}\text{ }[\text{since } \cos\theta= \frac{\cos\alpha +\cos\beta}{1 + \cos\alpha\cos\beta}]$$ $$=\frac{1+\cos\alpha\cos\beta - \cos\alpha -\cos\beta}{1+\cos\alpha\cos\beta + \cos\alpha+ \cos\beta}$$ $$=\frac{(1-\cos\alpha)(1-\cos\beta)}{(1+\cos\alpha)(1+\cos\beta)}$$ Substituting this value of $\frac{1 - \cos\theta}{1 + \cos\theta}$ into equation 1, $$\tan{\frac{\theta}{2}} = \pm\sqrt{\frac{(1-\cos\alpha)(1-\cos\beta)}{(1+\cos\alpha)(1+\cos\beta)}}$$ $$=\pm\sqrt{\frac{(1-\cos\alpha)^2(1-\cos\beta)^2}{(1-\cos^2\alpha)(1-\cos^2\beta)}}$$ $$=\pm\frac{(1-\cos\alpha)(1-\cos\beta)}{\sin\alpha\sin\beta}$$ Taking the positive value of $\tan{\frac{\theta}{2}}$, $$\frac{(1-\cos\alpha)(1-\cos\beta)}{\sin\alpha\sin\beta} = \frac{4\sin^2\frac{\alpha}{2}\sin^2\frac{\beta}{2}}{4\sin\frac{\alpha}{2}\cos{\frac{\alpha}{2}}\sin{\frac{\beta}{2}}\cos{\frac{\beta}{2}}} \text{(using half angle and double angle formula)}$$ $$=\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}$$ Therefore, $\tan{\frac{\alpha}{2}}\tan{\frac{\beta}{2}}$ is one of the values of $\tan{\frac{\theta}{2}}$.
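A numeric spot-check of the identity on a few sample angles (illustration only; it uses the principal value of $\arccos$, which corresponds to the positive branch in the proof):

```python
# For alpha, beta in (0, pi), the principal theta with
# cos(theta) = (cos a + cos b)/(1 + cos a cos b) should satisfy
# tan(theta/2) = tan(alpha/2) * tan(beta/2).
import math

for alpha, beta in [(0.3, 1.1), (math.pi / 3, math.pi / 3), (1.0, 2.5)]:
    ca, cb = math.cos(alpha), math.cos(beta)
    theta = math.acos((ca + cb) / (1 + ca * cb))
    lhs = math.tan(theta / 2)
    rhs = math.tan(alpha / 2) * math.tan(beta / 2)
    assert abs(lhs - rhs) < 1e-9
print("identity holds on the sample angles")
```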
{ "language": "en", "url": "https://math.stackexchange.com/questions/3110848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\lim\limits_{(x,y)\to(0,0)}\frac{x^3y-xy^3}{x^4+2y^4}$ does not exist. Show that $$\lim_{(x,y)\to(0,0)}\frac{x^3y-xy^3}{x^4+2y^4}$$ does not exist. I'm not even sure how to approach this. I tried factoring out $xy$ in the numerator to get $xy(x^2 - y^2)$, but I don't think that gets me anywhere with the denominator.
Let's approach the limit along the line $y=mx.$ $\begin{align} &\lim_{(x,y)\to (0,0)}\dfrac{x^3y-xy^3}{x^4+2y^4}\\ &=\lim_{x\to 0}\dfrac{x^3mx-x(mx)^3}{x^4+2(mx)^4}\\ &=\lim_{x\to 0}\dfrac{x^4m-m^3x^4}{x^4+2m^4x^4}\\ &=\lim_{x\to 0}\dfrac{m-m^3}{1+2m^4}\\ &=\dfrac{m-m^3}{1+2m^4}\\ \end{align}$ So what can you conclude about the limit ?
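The computation can be confirmed numerically (illustration):

```python
# Along y = m*x the quotient equals (m - m^3)/(1 + 2 m^4) for every x != 0,
# so different slopes give different values and the two-variable limit
# cannot exist.
def f(x, y):
    return (x**3 * y - x * y**3) / (x**4 + 2 * y**4)

for m in (1.0, 2.0, -1.0):
    along = f(1e-4, m * 1e-4)
    predicted = (m - m**3) / (1 + 2 * m**4)
    assert abs(along - predicted) < 1e-9

assert f(1e-4, 1e-4) != f(1e-4, 2e-4)   # slope 1 gives 0, slope 2 gives -6/33
print([(m - m**3) / (1 + 2 * m**4) for m in (1.0, 2.0)])
```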
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Evaluating the definite integral of $\cot^2(t)dt$ Given that: $$\cot(t) \to \infty \text{ when } t=0$$ and: $$\int \cot^2(t) dt = -\cot(t) -t +C$$ It seems strange that: $$\left[-\cot(t)-t\right]^{\pi+\frac{\pi}{2}}_{\frac{\pi}{2}} = \left(-\cot\left(\pi+\tfrac{\pi}{2}\right)-\left(\pi+\tfrac{\pi}{2}\right)\right)-\left(-\cot\left(\tfrac{\pi}{2}\right)-\tfrac{\pi}{2}\right) = -\pi$$ since there is an asymptote in the middle of the area. Would somebody be able to identify my mistake? Even just a reference for me to read would be very much appreciated. I am no mathematician, apologies!
The fundamental theorem of calculus breaks if there is a singularity in the integration interval. Check the conditions of application: https://en.wikipedia.org/wiki/Fundamental_theorem_of_calculus#Formal_statements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111091", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to find $\lim \limits_{x \to 0} \frac{\sqrt{x^3+4x^2}} {x^2-x}$ when $x\to 0^+$ and when $x\to 0^-$? I'm trying to find: $$ \lim \limits_{x \to 0} \frac{\sqrt{x^3+4x^2}} {x^2-x} $$ Since there is a discontinuity at $x=0$ I know that I have to take the limits from both sides, $x \to 0^+$ and $x \to 0^-$, and check if they're equal. If I factor it I get: $$ \lim \limits_{x \to 0} \left(\frac{\sqrt{x+4}} {x-1}\right) = - 2$$ Is this the same as $x \to 0^+$? If so, how do I approach the problem for $x \to 0^-$? If not, how do I do I do it from both sides?
Limit from right side is $ \lim \limits_{x \to 0^+} \frac{\sqrt{x^3+4x^2}} {x^2-x} \\ = \lim \limits_{x \to 0^+} \left(\frac{ |x| \sqrt{x+4}} { x(x-1) }\right) \\ = \lim \limits_{\delta \to 0} \left(\frac{ |0+\delta| \sqrt{ (0+\delta) +4}}{ (0+\delta)( (0+\delta) -1 ) }\right) \ [ \ \text{substituting} \ x = 0 + \delta \ , \delta > 0 \ ] \\ = \lim \limits_{\delta \to 0} \left(\frac{ \delta \sqrt{ \delta+4}}{ \delta (\delta-1) }\right) \\ = -2 $ Limit from left side is $ \lim \limits_{x \to 0^-} \frac{\sqrt{x^3+4x^2}} {x^2-x} \\ = \lim \limits_{x \to 0^-} \left(\frac{ |x| \sqrt{x+4}} { x(x-1) }\right) \\ = \lim \limits_{\delta \to 0} \left(\frac{ |0-\delta| \sqrt{ (0-\delta) +4}}{ (0-\delta)( (0-\delta) -1 ) }\right) \ [ \ \text{substituting} \ x = 0 - \delta \ , \delta > 0 \ ] \\ = \lim \limits_{\delta \to 0} \left(\frac{ -\delta \sqrt{ 4 - \delta }}{ \delta (-1 - \delta) }\right) \\ = 2 $ $ \therefore \ \lim \limits_{x \to 0^+} \frac{\sqrt{x^3+4x^2}} {x^2-x} \neq \lim \limits_{x \to 0^-} \frac{\sqrt{x^3+4x^2}} {x^2-x} \\ \Rightarrow \lim \limits_{x \to 0} \frac{\sqrt{x^3+4x^2}} {x^2-x} \ \text{does not exist} $
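A numeric check of the two one-sided limits (illustration):

```python
# sqrt(x^3 + 4x^2) = |x| sqrt(x + 4), so the sign of x flips the quotient:
# the one-sided limits at 0 are -2 (from the right) and +2 (from the left).
import math

def f(x):
    return math.sqrt(x**3 + 4 * x**2) / (x**2 - x)

assert abs(f(1e-6) - (-2)) < 1e-3
assert abs(f(-1e-6) - 2) < 1e-3
print(f(1e-6), f(-1e-6))
```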
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 3 }
If $\tan(x_1) \cdots\tan(x_n)=1$ for acute $x_i$, then does it follow that $\cos(x_1)+\cdots+\cos(x_n) \leq n\sqrt{2}/2$? It is easily seen that if $x,y\in[0,\pi/2)$ satisfy $\tan(x)\tan(y)=1$, then $$\cos(x)+\cos(y)\le\sqrt 2$$ A much more delicate fact is that if $\tan(x)\tan(y)\tan(z)=1$ (while $0\le x,y,z<\pi/2$), then $$\cos(x)+\cos(y)+\cos(z)\le \frac{3\sqrt 2}2$$ I can prove this, but the proof is a little complicated; can anybody suggest a nice, simple proof? As a generalization, suppose that $n\ge 4$, $x_1,\dotsc,x_n\in[0,\pi/2)$, and $\tan(x_1)\dotsb\tan(x_n)=1$. Does it follow that $$ \cos(x_1)+\dotsb+\cos(x_n) \le \frac{n\sqrt 2}2? $$
For three variables we can use C-S: Let $\tan{x}=\sqrt{\frac{b}{a}},$ $\tan{y}=\sqrt{\frac{c}{b}},$ where $a$, $b$ and $c$ are positives. Thus, $\tan{z}=\sqrt{\frac{a}{c}}$ and by C-S we obtain: $$\sum_{cyc}\cos{x}=\sum_{cyc}\frac{1}{\sqrt{1+\tan^2x}}=\sum_{cyc}\sqrt{\frac{a}{a+b}}\leq$$ $$\leq\sqrt{\sum_{cyc}\frac{a}{(a+b)(a+c)}\sum_{cyc}(a+c)}\leq\frac{3}{\sqrt2},$$ where the last inequality is just $$\sum_{cyc}c(a-b)^2\geq0.$$ The generalization is wrong for all $n\geq4$. Try $x_1=x_2=...=x_{n-1}\rightarrow0^+$ and $x_n\rightarrow\frac{\pi}{2}^-$
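The suggested counterexample for $n=4$ can be made concrete numerically (illustration):

```python
# Take x1 = x2 = x3 = t small and choose x4 so the product of tangents is 1;
# then cos x1 + ... + cos x4 approaches 3, exceeding the conjectured bound
# 4*sqrt(2)/2 = 2*sqrt(2) ~ 2.83.
import math

t = 0.05
x4 = math.atan(1 / math.tan(t)**3)      # tan(x4) = 1 / tan(t)^3
xs = [t, t, t, x4]

prod = math.prod(math.tan(x) for x in xs)
assert abs(prod - 1) < 1e-9             # the constraint holds

total = sum(math.cos(x) for x in xs)
assert total > 2 * math.sqrt(2)         # the conjectured bound is violated
print(total, 2 * math.sqrt(2))
```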
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Is the Quotient Group Cyclic? I'm just wondering how to show that a quotient group $H = (G/N)$ is cyclic? Let $G= \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/6\mathbb{Z}$ Let $N = \left<(2,3)\right>$ , where N is a cyclic subgroup of G Is it correct to say that $G$ has order $24$ and $N$ has order $2$? Can I then say that the order the the quotient group $H$ is $24/2 = 12$? How should I go about showing that H is cyclic?
Is it correct to say that G has order 24 and N has order 2? $G$ has order $24$ because $\mathbb{Z}/4\mathbb{Z}$ has $4$ elements and $\mathbb{Z}/6\mathbb{Z}$ has $6$ elements and $4\cdot 6 = 24$. In order to see if $N$ has order $2$ you should check what are the elements in $N:\;$ we have the trivial element, of course, and we also have $(2,3)$. Since $(2,3)+(2,3)=(0,0),$ there are no more elements in $N,$ and so it is of order $2$. Can I then say that the order the the quotient group H is 24/2=12? Yes, the order of $H=G/N$ is $\frac{|G|}{|N|}$ where $|G|=24$ is the order of $G$ and $|N|=2$ is the order of $N$. How should I go about showing that H is cyclic? If $H$ is of order $12$ it necessarily takes one of the following forms * *$H$ is isomorphic to $\mathbb{Z}/12\mathbb{Z},$ which is cyclic. Or *$H$ is isomorphic to $\mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/3\mathbb{Z}$. The latter is not cyclic, so you need to show that $H$ can't take that form. In order to do this, show that $(1,0)+N\in H$ is not of order $2$ but is of order $4$. (This is easy because $(2,0)\not\in N$ but $(4,0)=(0,0)\in N$.)
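Since the groups here are tiny, the conclusion can also be confirmed by brute force (illustration; it checks coset orders directly and even exhibits a generator of $G/N$):

```python
# Brute-force check in G = Z4 x Z6 with N = <(2,3)>: the coset (1,1) + N has
# order 12 in G/N, so the quotient is cyclic of order 12.
from itertools import product

N = {(0, 0), (2, 3)}

def coset_order(g):
    k, cur = 1, g
    while cur not in N:                  # smallest k with k*g in N
        cur = ((cur[0] + g[0]) % 4, (cur[1] + g[1]) % 6)
        k += 1
    return k

orders = {g: coset_order(g) for g in product(range(4), range(6))}
assert len(orders) / len(N) == 12        # |G/N| = 24 / 2
assert orders[(1, 1)] == 12              # a generator of G/N
assert orders[(1, 0)] == 4               # rules out Z2 x Z2 x Z3
print(orders[(1, 1)], orders[(1, 0)])
```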
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that the midperpendiculars $MN$ pass through a constant point. Consider two circles $O_1$ and $O_2$; the intersections of $O_1$ and $O_2$ are $A$ and $B$. Let $M$ be a point on $O_1$ and $N$ a point on $O_2$, with $M,N$ moving clockwise on $O_1,O_2$ and $\angle{AO_1M }=\angle{AO_2N }$. Show that the midperpendiculars of $MN$ all pass through a fixed point $P$. I tried circles of Apollonius but failed. Help me, please.
This has nothing to do with Apollonius. Step 1: Let line $O_1M$ meet line $O_2N$ at $Q$. Then since $$\angle QO_1A = \angle QO_2A $$ we see that points $O_1, O_2, Q$ and $A$ are concyclic and so $\angle O_1QO_2 = \angle O_1AO_2$ is constant. Step 2: Let $S$ be the midpoint of segment $O_1O_2$. Let us prove that $SP$ is constant, that is, $P$ is on a circle with center at $S$. We use (position) vectors: \begin{eqnarray} 4\cdot SP^2 &=& 4\vec{SP}^2\\ &=& 4(P-S)^2\\ &=& 4 \Big({1\over 2}(M+N)-{1\over 2}(O_1+O_2)\Big)^2\\ &=& 4 {1\over 4}\Big((M-O_1)+(N-O_2)\Big)^2\\ &=& (\vec{O_1M}+\vec{O_2N})^2\\ &=& \vec{O_1M}^2+\vec{O_2N}^2 +2\vec{O_1M}\cdot \vec{O_2N}\\ &=& r_1^2+r_2^2+2r_1r_2\cos \angle(\vec{O_1M}, \vec{O_2N})\\ &=& constant \end{eqnarray} Here we use the step 1 conclusion that $\angle (\vec{O_1M}, \vec{O_2N}) = \angle O_1QO_2$. Step 3: Try to finish yourself... :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the indefinite integral of $\int_{} \frac{x}{x^2+4}dx$ I am beginning to question whether the indefinite integral actually exists or I am doing something wrong with my u-substitution. Let $u = x^2 + 4, du = 2xdx,$ $$ \begin{align} \int_{} \frac{x}{x^2+4}dx &= \int_{}x(x^2 + 4)^{-1} \\ &= \frac{1}{2} \int_{} u^{-1}du \\ &= \frac{1}{2} \frac{u^0}{0} = ??? \end{align} $$ Then I tried to choose a different $u$ Let $u=x^2, du = 2xdx$ $$\int_{}\frac{x}{x^2+4} = \int_{} \frac{\sqrt u}{u^2 + 4} = ... $$ But I still run into the same problem trying to use the power rule to simplify the integral. I am beginning to think that an indefinite integral actually does not exist, but I am not sure what basis I have to assert that statement.
$$\int \frac{xdx}{x^2+4}=\frac 12 \int \frac{2xdx}{x^2+4}=\frac12 \ln(x^2+4)+C$$ where we have used that $$\frac{d}{dx}(x^2+4)=2x$$ and $$\int \frac{dt}{t}=\ln|t|+C.$$
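A symbolic check with sympy (illustration):

```python
# Differentiating (1/2) ln(x^2 + 4) recovers the integrand, and sympy's own
# antiderivative agrees with it (up to a constant).
import sympy as sp

x = sp.symbols('x')
F = sp.log(x**2 + 4) / 2
assert sp.simplify(sp.diff(F, x) - x / (x**2 + 4)) == 0
assert sp.simplify(sp.integrate(x / (x**2 + 4), x) - F) == 0
print(F)
```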
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Relation between eigenvalues of $A^{\top}BB^{\top}A$ and $B^{\top}AA^{\top}B$ I have two real value matrices: $A \in \mathbb{R}^{m \times n}$ and $B \in \mathbb{R}^{m \times p}$. If I know which are the eigenvalues of $A^{\top}BB^{\top}A$, what can I say about the eigenvalues of $B^{\top}AA^{\top}B$? I suspect that they coincide and that the extra eigenvalues are zeroes since this is what happens with the eigenvalues of $A^{\top}A$ and $AA^{\top}$. Is this correct? I will do some tests with Octave now. Update: the Octave results for a random $10 \times 10$ and a random $10 \times 20$ matrix seem to confirm my claim. Try it yourself: m = 10, n = 10, p = 20; A=rand(m, n); B=rand(m, p); eig(A'*B*B'*A) eig(B'*A*A'*B)
Fact. The nonzero eigenvalues of $X^\top X$ and $XX^\top$ are the same. We may apply this fact to your two matrices by taking $X=A^\top B$. Indeed, this gives \begin{align*} X^\top X &= (A^\top B)^\top(A^\top B)=B^\top AA^\top B & XX^\top &= (A^\top B)(A^\top B)^\top = A^\top BB^\top A \end{align*}
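A randomized check of the claim (illustration; it also shows the surplus eigenvalues of the larger matrix are zero):

```python
# A^T B B^T A and B^T A A^T B are the two Gram matrices X X^T and X^T X of
# X = A^T B, so their nonzero spectra coincide.
import numpy as np

rng = np.random.default_rng(0)
m, n, p = 10, 4, 7
A, B = rng.standard_normal((m, n)), rng.standard_normal((m, p))

e1 = np.linalg.eigvalsh(A.T @ B @ B.T @ A)      # n eigenvalues
e2 = np.linalg.eigvalsh(B.T @ A @ A.T @ B)      # p eigenvalues

# pad the shorter list with zeros; the surplus eigenvalues are all zero
k = max(n, p)
pad = lambda e: np.sort(np.concatenate([e, np.zeros(k - len(e))]))[::-1]
assert np.allclose(pad(e1), pad(e2), atol=1e-8)
print(pad(e1)[:min(n, p)])
```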
{ "language": "en", "url": "https://math.stackexchange.com/questions/3111902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is the largest integer value of $n$ for which $8^n$ evenly divides $(100!)$? I know that this may be an unnecessary question, but I am a bit confused. The problem asks for the highest integer $n$ such that $100!$ is evenly divisible by $8^n$. Now, I searched the site, and, in general, it seems that one can use the floor function for a problem like this, but this seems to only work for prime numbers, possibly. My process, which I realized was incorrect: the floor function of $100/8 = 12$, and then doing it for the second power would lead to one, and, by adding those up, I acquired thirteen. Of course, after seeing the answer, $32$, I went back to see what was wrong and did the problem slower. I got $12$ numbers from the numbers in $100!$, and then got another $8$ from $2 \times 4$, but that can be applied for all the multiples of $2$ and $4$ that aren't multiples of $8$. So, essentially, I am wondering if there is a quicker method for calculating this number without specifically counting out the numbers. Thanks in advance!
It's easiest, I think, to do this with powers of $2:$ $$\left\lfloor{100\over2}\right\rfloor+ \left\lfloor{100\over4}\right\rfloor+ \left\lfloor{100\over8}\right\rfloor+ \left\lfloor{100\over16}\right\rfloor+ \left\lfloor{100\over32}\right\rfloor+ \left\lfloor{100\over64}\right\rfloor=97=32\cdot3+1 $$ so the greatest exponent of $8$ is $32$.
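A quick computational cross-check of the count (the six terms correspond to $2^1$ through $2^6$):

```python
# Legendre's formula: the exponent of 2 in 100! is 97, so the largest n with
# 8^n = 2^(3n) dividing 100! is floor(97/3) = 32.
import math

v2 = sum(100 // 2**k for k in range(1, 7))   # 50 + 25 + 12 + 6 + 3 + 1
assert v2 == 97

# cross-check by factoring 100! directly
f, count = math.factorial(100), 0
while f % 2 == 0:
    f //= 2
    count += 1
assert count == 97
print(v2 // 3)   # -> 32
```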
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Permute columns by pre-multiplying and rows by post-multiplying? I was looking at Gilbert Strang's lectures on Linear Algebra and noticed that in lecture 2, Elimination with Matrices, around the 40nth minute he mentions that you can use the permutation matrix, $$P= \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} $$ so that $AP$ is a permutation of $A$'s columns and $PA$ is a permutation of $A$'s rows. I was wondering if there exists a matrix $P'$ such that $P'A$ is a permutation of $A$'s columns and $AP'$ is a permutation of $A$'s rows. * *How to prove there is no $P'$ such that $P'A=AP$ and $AP'=PA$ for all $n\times n$ matrices? *For which matrices $A$ there is such a $P'$? Tried the $2\times 2$ case $L$ pre-multiplies $A$ and permutes its columns, $$ \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix} \begin{bmatrix} a & b \\ c & d \end{bmatrix}=\begin{bmatrix} b & a \\ d & c \end{bmatrix} \Rightarrow \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}=\frac{1}{ad-bc}\begin{bmatrix} bd-ac & a^2-b^2 \\ d^2-c^2 & ac-bd \end{bmatrix}=L $$ $R$ post-multiplies $A$ and permutes its rows, $$ \begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}= \begin{bmatrix} c & d \\ a & b \end{bmatrix} \Rightarrow \begin{bmatrix} x_1 & x_2 \\ x_3 & x_4 \end{bmatrix}=\frac{1}{ad-bc}\begin{bmatrix} cd-ab & d^2-b^2 \\ a^2-c^2 & ab-cd \end{bmatrix}=R $$ * *$L\neq R$ in the general case *$det(A)\neq 0$ for $R$ and $L$ to exist *if $det(A)\neq 0$ and $a=d=0$, then $P'=L=R$ Additional notes I feel I have to clarify further. I was looking for a matrix $P'$ that behaves just like $P$ but from the opposite side, meaning $P'$ permutes columns when pre-multiplying $A$ and rows when post-multiplying $A$.
$\newcommand\bm\boldsymbol$ If such $\bm P'$ works for all matrices, then for all $\bm A$, $$ \bm {AP} = \bm {P'A}, $$ and in particular it works for the identity matrix $\bm I$, i.e. $$ \bm {IP} = \bm {P'I}, $$ so the only candidate for $\bm P'$ is $\bm P$ again. But clearly $$ \bm {PA} = \bm {AP} $$ only holds for specific matrices $\bm A$. Conclusion: maybe for some $\bm A$ there exists $\bm P'$ such that $\bm {P'A}$ swaps two columns of $\bm A$, but there exists no universal matrix $\bm P'$.
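The argument can be checked numerically (illustration):

```python
# Plugging A = I into A P = P' A forces P' = P, but P A != A P for a generic A
# (row swap vs column swap), so no universal P' exists.
import numpy as np

P = np.array([[0, 1], [1, 0]])
I = np.eye(2)

assert np.array_equal(I @ P, P @ I)        # identity forces P' = P

A = np.array([[1, 2], [3, 4]])
assert not np.array_equal(P @ A, A @ P)    # row swap differs from column swap
print(P @ A, A @ P, sep="\n")
```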
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Proving IID Central Limit Theorem using Lindeberg Conditions. The goal is to prove the IID Central Limit Theorem through Lindeberg's Condition. Suppose that $X_1,X_2,\ldots\displaystyle\sim\text{i.i.d.}$ with $E[X_i]=\mu$ and $Var[X_i]=\sigma^2<\infty$. Let $Y_i=X_i-\mu$ and $s_n^2=\sum_{i=1}^{n}Var[Y_i]=n\sigma^2$. Prove that $Z_n:=\frac{\sum_{k=1}^{n}(X_k-\mu)}{s_n}\rightarrow$N$(0,1)$ in distribution. Lindeberg's condition is as follows: If the following holds: $\displaystyle\lim_{n\rightarrow\infty}\frac{1}{s_n^2}\sum_{i=1}^{n}E\big[Y_i^2\cdot\mathbb{I}_{( \ |Y_i|\geq\epsilon\cdot s_n \ )}\big]=0$ for all $\epsilon>0$ Then $Z_n\rightarrow$N$(0,1)$ in distribution. So going right to lindbergs condition: $\displaystyle\lim_{n\rightarrow\infty}\frac{1}{n\sigma^2}\sum_{i=1}^{n}E\big[Y_i^2\cdot\mathbb{I}_{( \ |Y_i|\geq\epsilon\cdot \sigma\sqrt{n} \ )}\big]=\displaystyle\lim_{n\rightarrow\infty}\frac{1}{\sigma^2}E\big[Y_i^2\cdot\mathbb{I}_{( \ |Y_i|\geq\epsilon\cdot \sigma\sqrt{n} \ )}\big]$ Now at this point I do know I can use Lebesgue Dominated Convergence theorem due to the following: $ |Y_i^2\cdot\mathbb{I}_{( \ |Y_i|\geq\epsilon\cdot \sigma\sqrt{n} \ )}|=Y_i^2\cdot\mathbb{I}_{( \ |Y_i|\geq\epsilon\cdot \sigma\sqrt{n} \ )}\leq Y_i^2$ and $Y_i^2$ is integrable as $E[Y_i^2]=\sigma^2<\infty$ This means: $\displaystyle\lim_{n\rightarrow\infty}\frac{1}{\sigma^2}E\big[Y_i^2\cdot\mathbb{I}_{( \ |Y_i|\geq\epsilon\cdot \sigma\sqrt{n} \ )}\big]=\frac{1}{\sigma^2}E\big[\displaystyle\lim_{n\rightarrow\infty}Y_i^2\cdot\mathbb{I}_{( \ |Y_i|\geq\epsilon\cdot \sigma\sqrt{n} \ )}\big]$ Now I do not understand how this above is zero, and that's where I am stuck. I do know that by Markov's inequality: $P(|Y_i|\geq\epsilon\sigma\sqrt{n})\leq\frac{E\big[|Y_i|\big]}{\epsilon\sigma\sqrt{n}}\rightarrow0$ as $n\rightarrow\infty$ as $E[|Y_i|]<\infty$, and so the probability of this event becomes zero. Any help with understand this final step would be much appreciated!
All you need is $EY_1^{2} I_{\{|Y_1| >\epsilon \sigma \sqrt n\}} \to 0$ as $n \to \infty$ and this follows from DCT. [$Y_1^{2} I_{\{|Y_1| >\epsilon \sigma \sqrt n\}} $ is dominated by $Y_1^{2}$ which is integrable. Of course, the events $\{|Y_1| >\epsilon \sigma \sqrt n\}$ decrease to empty set so $Y_1^{2} I_{\{|Y_1| >\epsilon \sigma \sqrt n\}} \to 0$ almost surely. ].
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Center is a normal subgroup of G This is a problem from Herstein's Topics in Algebra. I have already shown the above result using the definition of normal subgroup. But now I want to prove it by constructing a homomorphism such that kernel is center of the group G. How can I construct such homomorphism? I was thinking of going like this. Given a $g$ in $G$ construct $E_g(x)=g x g^{-1}$ So given each element we have a transformation. Set of transformations like this form a group with inverse given by $E_{g^{-1}}$. Kernel consist of all those elements for which $E_g$ is identity. In other words, $E_g(x)=x$ or $g x g^{-1}=x$ or $gx=xg $ for all $x$. That is $g$ commutes with everything in $G$. Am I going into the right direction. Edit: I became interested in proving the result through homomorphism approach as problem is in section 2.7 which is titled homomorphisms. Herstein must be expecting us to take this route.
You are very close. You already have a way to transform $g$ into an element of... something... where $g$ transforms into the identity function if and only if $g$ commutes with everything in $G$. You also know that the something contains some sort of mappings. Now you need to write that down with the correct terms. That is, instead of "transforming $g$ into an element of some set that contains mappings", you need to * *Write down exactly what set $g$ is mapping into, what the operation on that set is, and make sure that set is a group. *Write down the mapping as a mapping between two groups *Prove that the mapping is a homomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
$f$ has the form $f(x) =ax^2+bx+c$. Every differentiable function $f:R \rightarrow R$ with the property that $(2h)f′(x) =f(x+h)−f(x−h)$ for all $x \in R$ and all $h$ has the form $f(x) =ax^2+bx+c$. I would like to get some help on this one. Taking partial derivatives was an idea, but is that legal?
We have $f'(x)={1\over2}\bigl(f(x+1)-f(x-1)\bigr)$ for all $x$. Since here the RHS is differentiable it follows that $f'$ is differentiable as well; in fact $f\in C^\infty$. We now differentiate $2 h f'(x)=f(x+h)-f(x-h)$ two times with respect to $h$ and obtain $$0=f''(x+h)-f''(x-h)\qquad\forall\,x\quad\forall\, h\ .$$ This implies $f''(u)=f''(v)$ for all $u$ and $v$, i.e., $f''(x)$ is a constant, say $2a$. This in turn implies $f'(x)=2ax+b$ and $f(x)=ax^2+bx+c$ for constant $a$, $b$, $c$.
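The hypothesis can also be sanity-checked numerically (an illustration only; the coefficients below are arbitrary): every quadratic satisfies $2hf'(x)=f(x+h)-f(x-h)$ identically, while a cubic already fails, in line with the conclusion.

```python
def f(x, a=2.0, b=-3.0, c=1.5):        # an arbitrary quadratic ax^2 + bx + c
    return a * x * x + b * x + c

def fp(x, a=2.0, b=-3.0):              # its derivative 2ax + b
    return 2 * a * x + b

def g(x):                              # a cubic, for contrast
    return x ** 3

def gp(x):
    return 3 * x * x

pts = [(-1.0, 0.1), (-1.0, 1.7), (0.3, 0.5), (2.0, 1.7)]   # (x, h) pairs
quad_ok = all(abs(2 * h * fp(x) - (f(x + h) - f(x - h))) < 1e-9
              for x, h in pts)
# for the cubic, g(x+h) - g(x-h) = 2h g'(x) + 2h^3, so the identity fails
cubic_fails = all(abs(2 * h * gp(x) - (g(x + h) - g(x - h))) > 1e-3
                  for x, h in pts)
```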
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112546", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can you keep this raffle fair? So there is a car draw in my area. There are 3221 participants in the draw. The winner is decided by a trustee drawing each digit from a separate drum. So from the first drum there is 0-3, the second 0-9 (this is the same for the third and the fourth drum). So if I draw 0-4-0-7 the winner is the ticket holder of 407. My first suspicion was those that have a ticket in the 3 thousands have a better chance of winning, correct me if I'm wrong. If it is the case that the probability is askew then how can one keep this draw fair? While maintaining the concept of drawing digits from a drum. If this is not possible could anyone give suggestions on other draw methods that could work? Just putting all tickets in one bucket isn't an option though. Thank you!
Whether or not the raffle is fair depends on how you draw the ticket. Let's say I draw, from the four different bins, the numbers $3, 5, 1, 0$. This is an invalid number, so the prize cannot be assigned. If I simply restart the whole procedure, each time drawing four numbers until a valid number shows up, then each participant has the same probability of winning. However, we can also start drawing from left to right, and only redraw the number which resulted in an invalid sequence. Note that the first number is always valid, but the validity of the second number depends on the first number: if I first drew a $3$, I cannot draw a number from $3$ to $9$. Indeed, in this case, a random person whose ticket number starts with a $3$ has, on average, a probability of $$\frac{1}{4} \cdot \frac{1}{222} = \frac{1}{888} > \frac{1}{3221}$$ of winning the raffle. Note that, using this procedure, the two people holding the tickets $3220$ and $3221$ have the highest probability of winning the raffle: $$\frac{1}{4} \cdot \frac{1}{3} \cdot \frac{1}{3} \cdot \frac{1}{2} = \frac{1}{72}$$
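The two schemes can be compared by direct enumeration. The sketch below assumes tickets are numbered $1$ to $3221$ and padded to four digits (the question does not say whether a ticket $0000$ exists); it computes each ticket's winning probability under the "redraw only the offending digit" scheme, reproducing the $1/888$ average for the $3$xxx block and $1/72$ for tickets $3220$ and $3221$, and confirming the draw is not fair.

```python
from fractions import Fraction

# Assumption: tickets run 1..3221, padded to four digits.
LO, HI = 1, 3221
prefixes = set()
for t in range(LO, HI + 1):
    s = f"{t:04d}"
    for k in range(1, 5):
        prefixes.add(s[:k])

def prob(ticket):
    """Winning probability under the 'redraw only the invalid digit' scheme:
    at each position the drawn digit is uniform over the digits that can
    still be completed to a real ticket."""
    s = f"{ticket:04d}"
    p = Fraction(1)
    for i in range(4):
        drum = "0123" if i == 0 else "0123456789"
        valid = [v for v in drum if s[:i] + v in prefixes]
        p /= len(valid)       # must hit this ticket's digit among the valid ones
    return p

probs = {t: prob(t) for t in range(LO, HI + 1)}
total = sum(probs.values())                         # a probability tree: sums to 1
avg_3xxx = sum(p for t, p in probs.items() if t >= 3000) / (HI - 2999)
```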
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many roots does an exponential polynomial have? Let $s$ be a complex variable and consider two polynomials with real coefficients: $$A(s) = s^n + a_{n-1}s^{n-1}+\ldots+a_1s+a_0,$$ $$B(s) = s^m + b_{m-1}s^{m-1}+\ldots+b_1s+b_0,$$ where $n \ge m$. Let $k$ be a real constant. I am looking for roots of the function $$A(s)+e^{sk}B(s)=0.$$ Obviously, when $k=0$ I have just a polynomial of degree $n$ and I have $n$ roots. Then I have two questions:

* Is it true that this function always has $n$ roots for any fixed $k$? (Negative, see UPD2.)
* Do these roots depend continuously on $k$?

UPD: We also assume that $n\ge 1$. UPD2: Let us take $A(s)=s$ and $B(s)=1$. Then we have $$s+e^{ks} = 0.$$ For $k=0$ the only root is $s_1=-1$. However, for $k=-1$ we have $$se^{s}=-1$$ and we have multiple solutions. So the answer to the first question is negative. However, I do not know if the roots are continuous with respect to $k$.
Set $A(s)=-1$, $B(s)=1$, and $k=2\pi$. Then you obtain the equation $$e^{2 \pi s}=1,$$ which has solutions $s = 0, \pm i, \pm 2i, \pm 3i, \ldots$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is $Y_n=f(X_n)$ a Markov chain, when $X_n$ is? Let $X_n$ be a Markov chain with state space $\{0,1,2\}$ and transition matrix $$p= \begin{pmatrix} 0 & \frac{1}{2} & \frac{1}{2} \\ \frac{1}{2} & \frac{1}{2} & 0 \\ 1 & 0 & 0 \end{pmatrix}. $$ Let $Y_n=f(X_n)$ with $f(0)=0$, $f(1)=1$, $f(2)=1$. Is $Y_n$ a Markov chain? My intuition and solution: It is not a Markov chain. $P(Y_n=1|Y_{n-1}=1,Y_{n-2}=0)=\frac{1}{4}$, while $P(Y_n=1|Y_{n-1}=1)=\frac{1}{2}\cdot p(1)$, where $p(1)$ is the probability that we are at state $1$, given that we are either at state $1$ or state $2$. Surely the probability that we are at state $1$ isn't equal to one half, which is obvious if we look at the transition matrix, but how do we calculate this probability? My guess: Let's calculate the stationary probabilities, which are $(\frac{2}{5} ,\frac{2}{5} ,\frac{1}{5})$, so we are on average two times more often in state $1$ than in state $2$, which would mean that the $p(1)$ I am looking for is equal to $\frac{2}{3}$? EDIT: I found a way to avoid looking for this probability: it is enough to calculate the probability $P(Y_n=1|Y_{n-1}=1,Y_{n-2}=1,Y_{n-3}=0)$, but still I would like to know if I was right before I found this way.
The concept you're looking for here is called lumpability. If you aggregate states of a Markov chain and the chain is "lumpable", then the aggregate process that you obtain is again a Markov chain. Lumpability property (see Theorem 6.3.2 in Finite Markov Chains, by Kemeny and Snell): A discrete-time Markov chain $\{X_{i}\}$ is lumpable with respect to the partition $T=\{t_1,\ldots,t_M\}$ if and only if, for any subsets $t_i$ and $t_j$ in the partition, and for any states $n,n'$ in subset $t_i$, \begin{align} {\displaystyle \sum _{m\in t_{j}}p(n,m)=\sum _{m\in t_{j}}p(n',m)}. \end{align} In your case, it is easy to see that your chain is not lumpable with respect to the partition $T=\{\{0\}, \{1,2\}\}$, as $p(1,0)\neq p(2,0)$.
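The criterion is easy to test mechanically; a small sketch for the chain in the question:

```python
from fractions import Fraction as F

# transition matrix of X_n, with exact arithmetic
P = [[F(0), F(1, 2), F(1, 2)],
     [F(1, 2), F(1, 2), F(0)],
     [F(1), F(0), F(0)]]

def is_lumpable(P, partition):
    """Kemeny-Snell criterion: inside each block, every state must have the
    same total transition probability into each block of the partition."""
    return all(
        len({sum(P[s][t] for t in target) for s in block}) == 1
        for block in partition for target in partition)

lumped = is_lumpable(P, [{0}, {1, 2}])        # False: p(1,0) != p(2,0)
trivial = is_lumpable(P, [{0}, {1}, {2}])     # True: nothing is merged
```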
{ "language": "en", "url": "https://math.stackexchange.com/questions/3112981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integral representation of Digamma function A similar question was already asked about 2 years ago: Integral representation of the Digamma function Someone asked for the derivation of the integral representation of the Digamma-function and it was answered, but I don't see how you get from here: $$ \psi^{(0)}(x)=\frac{\int_{0}^{\infty}t^{x-1}ln(t)e^{-t}dt}{\int_{0}^{\infty}t^{x-1}e^{-t}dt} $$ To here: $$ \psi (x)=\int _{0}^{\infty }\left({\frac {e^{-t}}{t}}-{\frac {e^{-xt}}{1-e^{-t}}}\right)\,dt $$ I really thought a lot about it, but I just don't see how it is done... :( Thank you so much for your much-appreciated help :)
I'll give a more detailed version of Jack D'Aurizio's answer on the other question. Start with the Weierstrass product for the $\Gamma$ function $$ \Gamma(z+1) = e^{-\gamma z}\prod_{n\geq 1}\left(1+\frac{z}{n}\right)^{-1}e^{z/n}.\tag{1}$$ By definition, \begin{align*} \psi(z+1) &= \frac{d}{dz} \log \Gamma(z+1) \\ &= \frac{d}{dz} \log\left( e^{-\gamma z}\prod_{n\geq 1}\left(1+\frac{z} {n}\right)^{-1}e^{z/n}\right)\\ &= \frac{d}{dz}\left( -\gamma z + \sum_{n\geq 1}\left(\frac{z}{n} + \log \left(1+\frac{z} {n}\right)^{-1}\right) \right)\\ &= -\gamma + \sum_{n\geq 1}\left(\frac{1}{n} - \frac{1}{z+n}\right). \end{align*} Now note that \begin{align*}\gamma &= \lim_{k\rightarrow \infty} - \log(k) + \sum_{n=1}^k 1/n \\ &= \lim_{k \rightarrow \infty} -\sum_{n=1}^{k-1} \big( \log(n+1) - \log(n)\big) + \sum_{n=1}^{k} \frac{1}{n} \\ &= \sum_{n=1}^{\infty} \big( \log(n) - \log(n+1) + \frac{1}{n} \big) \end{align*} since the inner sum is a telescoping series. And so \begin{align*} \psi(z+1) &= \sum_{n\geq 1}\left[\log(n+1)-\log(n)-\frac{1}{n+z}\right]. \end{align*} Now he cites the identity $$ \log(n+1)-\log(n) = \int_{0}^{+\infty}\frac{e^{-nt}-e^{-(n+1)t}}{t}\,dt$$ (by Frullani), and $$\frac{1}{n+z} = \int_{0}^{+\infty} e^{-(n+z)t}\,dt$$ by a Laplace transform (or just by hand). And so we get \begin{align*} \psi(z+1) &= \sum_{n\geq 1}\int_{0}^{+\infty}\left(\frac{e^{-nt}-e^{-(n+1)t}}{t} - e^{-(n+z)t}\right)dt\\ &=\int_{0}^{+\infty} \sum_{n\geq 1}\left(\frac{e^{-nt}-e^{-(n+1)t}}{t} - e^{-(n+z)t}\right)dt\\ &=\int_{0}^{+\infty} \left( \frac{1- e^{-t}}{t} - e^{-zt} \right)\sum_{n\geq 1} e^{-nt} \,dt \\ &=\int_{0}^{+\infty} \left( \frac{1- e^{-t}}{t} - e^{-zt} \right) \frac{e^{-t}}{1-e^{-t}} \,dt \\ &=\int_{0}^{+\infty} \left(\frac{e^{-t}}{t} - \frac{e^{-(z+1)t}}{1-e^{-t}} \right) \,dt, \\ \end{align*} which is the stated formula with $x=z+1$, and we are done (modulo whatever typos/mistakes I've made.)
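As a spot check of the target identity $\psi(x)=\int_0^\infty\left(\frac{e^{-t}}{t}-\frac{e^{-xt}}{1-e^{-t}}\right)dt$ (an addition, not part of the derivation), one can evaluate the integral numerically at a few points where $\psi$ has a known closed form: $\psi(1)=-\gamma$, $\psi(2)=1-\gamma$, and $\psi(1/2)=-\gamma-2\ln 2$.

```python
import math

GAMMA = 0.5772156649015329          # Euler-Mascheroni constant

def psi_integral(x, n=200_000, upper=40.0):
    """Gauss's integral for psi(x), evaluated with the midpoint rule.
    Writing e^{-xt}/(1-e^{-t}) as e^{-xt}/(-expm1(-t)) keeps the integrand
    accurate near t = 0, where the two terms almost cancel."""
    h = upper / n
    total = 0.0
    for k in range(n):
        t = (k + 0.5) * h
        # exp(-x*t)/expm1(-t) equals -e^{-xt}/(1-e^{-t})
        total += math.exp(-t) / t + math.exp(-x * t) / math.expm1(-t)
    return total * h

vals = {x: psi_integral(x) for x in (1.0, 2.0, 0.5)}
```

The truncation at $t=40$ and the midpoint-rule error are both far below the $10^{-5}$ tolerance used here.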
{ "language": "en", "url": "https://math.stackexchange.com/questions/3113119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Have similar theories like knot theory been developed in higher dimensions? Well, my question is kind of basic but I hope it would be taken seriously by the community. Also, I'm very new to this topic and I want to study knot theory in future. Knot theory is the study of embedding $S^{1}$ in $\mathbb{R}^3$. Right? So, it seems reasonable to consider embedding $S^n$ in $\mathbb{R}^m$ for appropriate $(n,m) \in \mathbb{N}\times\mathbb{N}$. Are there any well-developed theories for these embeddings? If yes, is there a name for these theories? Are there any references to learn about them? Also, I would be happy to know about some self-contained and good references to learn about knot theory and these higher order theories.
A reference to higher-dimensional knot theory: E. Ogasa, Introduction to higher-dimensional knots. For classical knot theory, I like the book D. Rolfsen, "Knots and Links". It is a bit dated (written by 1970s) but very readable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3113363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Understanding why a limit proof using another limit works Sorry for the title, hopefully I can explain it better. I think the title is about as good as I could get in terms of description. I have a problem: Let $x_n \ge 0$ for all $n \in \mathbb{N}$. If $(x_n) \to x$, show that $(\sqrt{x_n}) \to \sqrt{x}$. Assume that we have already proved the limit going to zero. My proof was as follows: Our goal is to find an $N$ that satisfies the inequality $|\sqrt{x_n} - \sqrt{x}| \lt \epsilon$ with epsilon being arbitrary. So: $|\sqrt{x_n} - \sqrt{x}| \lt \epsilon$, $|\sqrt{x_n}| \lt \epsilon + \sqrt{x}$, $|\sqrt{x_n}|^2 \lt (\epsilon + \sqrt{x})^2$, $|x_n| \lt (\epsilon + \sqrt{x})^2$, $|x_n| \lt \epsilon^2 + 2 \epsilon \sqrt{x} + x$, $|x_n - x| \lt \epsilon^2 + 2 \epsilon \sqrt{x}$. Since we already know $|x_n - x|$ can be made arbitrarily small, we are ready to proceed. Then allow $\epsilon > 0$ to be arbitrary and choose an $N \in \mathbb{N}$ satisfying $|x_n - x| \lt \epsilon^2 + 2 \epsilon \sqrt{x}$. For $n \ge N$ we find after some algebra (to save typing the above backwards) $|\sqrt{x_n} - \sqrt{x}| \lt \epsilon$, which shows that given the limit we can choose an $N$ for any given $\epsilon$ and find that all $n \ge N$ will be inside the $\epsilon$-neighborhood of $\sqrt{x}$. Where I am confused is how I am using the given limit $(x_n) \to x$. I am sort of following a template here from the author. Adding this extra limit has confused me. How does reducing the inequality $|\sqrt{x_n} - \sqrt{x}| \lt \epsilon$ to $|x_n - x| \lt \epsilon^2 + 2 \epsilon \sqrt{x}$ and then knowing "we can make it arbitrarily small" help us prove the given limit? What is the intuition?
You're using the fact that $x_n \rightarrow x$ in the very first step: that's how you know that you can make $|x_n - x|$ arbitrarily small. And you need to be able to make that difference arbitrarily small for the rest of the proof to work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3113495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Computing the matrix of a Linear Transformation For a matrix $A\in M_n(\mathbb{F})$ consider the linear transformation $T_A:\mathbb{F}^n\rightarrow \mathbb{F}^n$. Denote the $B_{st}=\{e_1,...,e_n\}$ the standard basis of $\mathbb{F}^n.$ Compute the matrix $[T_A]_{B_{st}}$. So I don't really want anyone to solve the problem for me. I'm posting this because I do not understand whats being asked for in the problem. So I know $A\in M_n(\mathbb{F})$ means that $A$ is an $n\times n$ matrix with $\mathbb{F}$-valued entries. I was taught that every linear transformation $T:V\rightarrow V$ has an associated matrix $[T]_B\in M_n(\mathbb{F})$ which "mirrors" the transformation in $\mathbb{F}^n$ with respect to basis $B$. But I don't understand what $T_A$ is in this problem. Is it just denoting that $A$ is a linear transformation? Also, if $T_A$ is a map from $\mathbb{F}^n\rightarrow \mathbb{F}^n$, then what is $[T_A]_{B_{st}}$? I was taught that $[\cdot]_{B_{st}}$ is a coordinate map from $V\rightarrow \mathbb{F}^n$ where if $x\in V$ and $x=a_1e_1+...+a_ne_n \mapsto (a_1,...,a_n)\in \mathbb{F}^n$. But if $T_A$ is a matrix already I don't know what $[\cdot]_{B_{st}}$ is doing. Any help with understanding is much appreciated.
The map $T_A$ is$$\begin{array}{rccc}T_A\colon&\mathbb{F}^n&\longrightarrow&\mathbb{F}^n\\&v&\mapsto&Av.\end{array}$$ Since $T_A(e_j)=Ae_j$ is the $j$-th column of $A$, and the $j$-th column of $[T_A]_{B_{st}}$ is by definition the coordinate vector of $T_A(e_j)$ with respect to $B_{st}$, you get $[T_A]_{B_{st}}=A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3113617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why $\max\{f,g\}$ is continuous if $f$ and $g$ are continuous? Let $f,g:\mathbb{R}\to \mathbb{R}$ be continuous functions. I need to prove that $x\mapsto \max\{f(x),g(x)\}$ is continuous without using the fact that $\max\{a,b\}=\frac{a+b+|a-b|}{2}$. Let $\varepsilon >0$ and $a\in\mathbb{R}$. Suppose, without loss of generality, that $f(a)\leq g(a)$. There is $\delta >0$ such that $|f(x)-f(a)|<\varepsilon $ and $|g(x)-g(a)|<\varepsilon $ when $|x-a|<\delta $. I'm trying to prove that $$\lvert\max\{f(x),g(x)\}-g(a)\rvert<\varepsilon,$$ but I really have problems. I know that $\max\{f(x),g(x)\}=f(x)$ or $g(x)$, and thus $$\lvert\max\{f(x),g(x)\}-g(a)\rvert=\begin{cases}|f(x)-g(a)|\\ \text{or} \\ |g(x)-g(a)|\end{cases},$$ but I can't write it down rigorously since the fact that $\max\{f(x),g(x)\}=f(x)$ or $g(x)$ depend on $x$.
Rather straightforward: let $\epsilon>0$ be given. For $\epsilon/2$ there are $\delta_1,\delta_2$ s.t. $|x-x_0| < \delta = \min\{\delta_1,\delta_2\}$ implies $|f(x)-f(x_0)| < \epsilon/2$ and $|g(x)-g(x_0)| < \epsilon/2$. Then, writing $\max\{f,g\}=\frac{1}{2}(f+g+|f-g|)$, $|x-x_0| \lt \delta$ implies $$\tfrac12\big|f(x)+g(x)+|f(x)-g(x)|- f(x_0)-g(x_0)-|f(x_0)-g(x_0)|\big| \le \tfrac12|f(x)-f(x_0)| + \tfrac12|g(x)-g(x_0)| + \tfrac12\big||f(x)-g(x)| - |f(x_0)-g(x_0)|\big| \lt \tfrac12\cdot\tfrac\epsilon2 +\tfrac12\cdot\tfrac\epsilon2 + \tfrac12|(f(x)-g(x))-(f(x_0)-g(x_0))| \le \tfrac\epsilon2 + \tfrac12|f(x)-f(x_0)| +\tfrac12|g(x)-g(x_0)| \lt \epsilon.$$ Used: continuity of $f,g$; the triangle inequality; the reverse triangle inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3113730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Does $\triangle ABC$ exist such that $\triangle ABC \sim \triangle DEF$, with $D, E, F$ being the incentre, centroid, orthocentre of $\triangle ABC$? Question: Does $\triangle ABC$ exist such that $\triangle ABC \sim \triangle DEF$, with $D, E, F$ being the incentre, centroid, orthocentre of $\triangle ABC$, resp.? For such a triangle to exist, it must be obtuse. Besides that, I have no idea how to prove or disprove it. For the case of $D, E, F$ being the orthocentre, centroid, circumcentre, it's impossible as they lie on the same line (Euler's line). That's the motivation of the problem. I have a feeling that brute force methods are needed (coordinate geometry). But I hate such an ugly approach. Any idea?
Yes, this triangle exists. (Found using brute force approximation.) Coordinates: $$A\approx(0.182,0.260)\quad B=(0,0)\quad C=(1,0)\\ D\approx(0.229,0.120)\quad E\approx(0.394,0.087)\quad F\approx(0.182,0.571)$$ Angles: $$a=d\approx107.2957\quad b=e\approx55.0744\quad c=f\approx 17.6299$$
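The approximate coordinates can be verified directly (an added check): recomputing the incentre, centroid and orthocentre from $A$, $B$, $C$ reproduces the stated $D$, $E$, $F$, and the corresponding angles of the two triangles agree to within the rounding of the published coordinates.

```python
import math

A, B, C = (0.182, 0.260), (0.0, 0.0), (1.0, 0.0)
D_stated, E_stated, F_stated = (0.229, 0.120), (0.394, 0.087), (0.182, 0.571)

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

def angles(P, Q, R):
    """Interior angles at P, Q, R in degrees, via the law of cosines."""
    p, q, r = dist(Q, R), dist(R, P), dist(P, Q)   # sides opposite P, Q, R
    def ang(opp, s1, s2):
        return math.degrees(math.acos((s1**2 + s2**2 - opp**2) / (2 * s1 * s2)))
    return ang(p, q, r), ang(q, r, p), ang(r, p, q)

a, b, c = dist(B, C), dist(C, A), dist(A, B)       # side lengths
s = a + b + c
# incentre = side-length-weighted average of the vertices
D = ((a*A[0] + b*B[0] + c*C[0]) / s, (a*A[1] + b*B[1] + c*C[1]) / s)
# centroid
E = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)
# orthocentre: B and C lie on the x-axis, so the altitude from A is the
# vertical line x = A[0]; intersect it with the altitude from B (perp. to AC)
slope_AC = (C[1] - A[1]) / (C[0] - A[0])
F = (A[0], (-1.0 / slope_AC) * A[0])

max_dev = max(abs(u - v) for u, v in zip(angles(A, B, C), angles(D, E, F)))
```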
{ "language": "en", "url": "https://math.stackexchange.com/questions/3113834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
A formula for tan(2x) Help with solving... Suppose that $\tan^2x=\tan(x-a)\cdot\tan(x-b)$, show that $$\tan(2x)=\frac{2\sin(a)\cdot\sin(b)}{\sin(a+b)}$$ As far as I know, so far the $\tan2x$ can be converted to $\frac{2\tan x}{1-\tan^2x}$ using the double angle formula, and the $\tan^2 x$ can further be substituted using the given relation ($\tan(x-a)\cdot\tan(x-b)$); however, the problem now is that there is no way to simplify this, to my knowledge...
The initial equations can be written, with $t=\tan x$, $t_a=\tan a$, $t_b=\tan b$, as $$t^2=\frac{t-t_a}{1+t\,t_a}\frac{t-t_b}{1+t\,t_b},$$ $$\frac{2t}{1-t^2}=\frac{2t_at_b}{t_a+t_b}.$$ From the second equation, we can express $t^2$ as a linear function of $t$. Then, replacing $t^2$ several times in the first equation, we should get an identity. With $c$ denoting the cotangent, $$t^2=1-(c_a+c_b)t,$$ and $$t^2=\frac{t-t_a}{1+t\,t_a}\frac{t-t_b}{1+t\,t_b}$$ becomes $$1-(c_a+c_b)t=\frac{1-(c_a+c_b)t-(t_a+t_b)t+t_at_b}{1+(t_a+t_b)t+t_at_b(1-(c_a+c_b)t)}.$$ The denominator simplifies to $$1+t_at_b,$$ and we do have $$(1+t_at_b)(c_a+c_b)=c_a+c_b+t_a+t_b.$$
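A numerical consistency check (an addition; the angles are arbitrary): with $t=\tan x$, $t_a=\tan a$, $t_b=\tan b$ and $c$ the cotangent, solving the quadratic $t^2=1-(c_a+c_b)t$ and substituting back confirms both displayed equations.

```python
import math

a, b = 0.3, 0.7                        # arbitrary test angles (radians)
ta, tb = math.tan(a), math.tan(b)
ca, cb = 1 / ta, 1 / tb                # cotangents

# roots of t^2 + (ca + cb) t - 1 = 0, i.e. of t^2 = 1 - (ca + cb) t
disc = math.sqrt((ca + cb) ** 2 + 4)
roots = [(-(ca + cb) + disc) / 2, (-(ca + cb) - disc) / 2]

target = 2 * math.sin(a) * math.sin(b) / math.sin(a + b)

checks = []
for t in roots:
    original = (t - ta) * (t - tb) / ((1 + t * ta) * (1 + t * tb))
    tan2x = 2 * t / (1 - t * t)
    checks.append(abs(t * t - original) < 1e-8 and abs(tan2x - target) < 1e-9)

# the algebraic identity behind the target value
identity_ok = abs(2 / (ca + cb) - target) < 1e-12
```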
{ "language": "en", "url": "https://math.stackexchange.com/questions/3113894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Find the area of the shaded region of two circles with radii $r_1$ and $r_2$ In the given figure, $O$ is the center of the circles and $r_1 = 7\,\mathrm{cm}$, $r_2 = 14\,\mathrm{cm}$, $\angle AOC = 40^{\circ}$. Find the area of the shaded region. My attempt: Area of shaded region $=\pi r^2_2 - \pi r^2_1= \pi( 196-49)= 147\pi$. Is it true?
Use the formula for the area of a sector (the angle is measured in radians) and the formula for the area of a circle: $$ A=\frac{1}{2}r^2\theta $$ $$ A=\pi r^2 $$ $40^\circ$ in radians would be: $$ 40^\circ=\frac{40\pi}{180} $$ The area of the top peice: $$ A_1=\frac{1}{2}r_2^2\cdot \frac{40\pi}{180} - \frac{1}{2}r_1^2\cdot\frac{40\pi}{180}=\frac{\pi}{9}\left(r_2^2-r_1^2\right) $$ The area of the bottom piece: $$ A_2=\pi r_1^2-\frac{1}{2}r_1^2\cdot\frac{40\pi}{180}=\frac{8\pi}{9}r_1^2 $$ Thus, the area of the shaded region would be: $$ A_1+A_2=\frac{\pi}{9}\left(r_2^2-r_1^2\right)+\frac{8\pi}{9}r_1^2=\frac{\pi}{9}\left(r_2^2+7r_1^2\right)=\frac{\pi}{9}\left(14^2+7\cdot7^2\right)=\frac{539\pi}{9} $$ Answer: $A=\frac{539\pi}{9}$ square units.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Determinant of a particular matrix. What is the best way to find determinant of the following matrix? $$A=\left(\begin{matrix} 1&ax&a^2+x^2\\1&ay&a^2+y^2\\ 1&az&a^2+z^2 \end{matrix}\right)$$ I thought it looks like a Vandermonde matrix, but not exactly. I can't use $|A+B|=|A|+|B|$ to form a Vandermonde matrix. Please suggest. Thanks.
Note that $$ \det\left(\begin{matrix} 1&ax&a^2+x^2\\1&ay&a^2+y^2\\ 1&az&a^2+z^2 \end{matrix}\right) =\det \left(\begin{matrix} 1&ax&x^2\\1&ay&y^2\\ 1&az&z^2 \end{matrix}\right)=a\cdot\det \left(\begin{matrix} 1&x&x^2\\1&y&y^2\\ 1&z&z^2 \end{matrix}\right) $$ which boils down to Vandermonde determinant $$ a(x-y)(y-z)(z-x). $$
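A quick brute-force confirmation of the result over random integer inputs (exact integer arithmetic, so no tolerance is needed):

```python
import random

def det3(M):
    """Cofactor expansion of a 3x3 determinant."""
    r1, r2, r3 = M
    return (r1[0] * (r2[1] * r3[2] - r2[2] * r3[1])
            - r1[1] * (r2[0] * r3[2] - r2[2] * r3[0])
            + r1[2] * (r2[0] * r3[1] - r2[1] * r3[0]))

random.seed(0)
ok = all(
    det3([[1, a * x, a * a + x * x],
          [1, a * y, a * a + y * y],
          [1, a * z, a * a + z * z]]) == a * (x - y) * (y - z) * (z - x)
    for a, x, y, z in (tuple(random.randint(-9, 9) for _ in range(4))
                       for _ in range(200)))
```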
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
General closed form for $L(\phi)=\int_0^\phi \log(\sin x)\mathrm dx$ when $\phi\in(0,\pi)$? I would like to know if there is a general closed form for $$L(\phi)=\int_0^\phi \log(\sin x)\mathrm dx,\qquad \phi \in(0,\pi)$$ Context: (below is also the extent of my search for a closed form.) I would like to know such a closed form because it would give rise to many really neat infinite products. Here's how: We start with $$\sin x=x\prod_{n\geq1}\frac{\pi^2n^2-x^2}{\pi^2n^2}$$ So $$\log\sin x=\log x+\sum_{n\geq1}\log\frac{\pi^2n^2-x^2}{\pi^2n^2}$$ Then integrating both sides over $[0,\phi]$, $$L(\phi)=\phi(\log\phi-1)+\sum_{n\geq1}\left(\phi\left[\log\frac{\pi^2n^2-\phi^2}{\pi^2n^2}-2\right]+\pi n\log\frac{\pi n+\phi}{\pi n-\phi}\right)$$ Then using $$\sum_i\log a_i=\log\prod_i a_i$$ as well as $$a\log b=\log(b^a)$$ we have $$L(\phi)+\log\frac{e^\phi}{\phi^\phi}=\log\prod_{n\geq1}\left(\frac{\pi^2n^2-\phi^2}{(e\pi n)^2}\right)^{\phi}\left(\frac{\pi n+\phi}{\pi n-\phi}\right)^{\pi n}$$ So taking $\exp$ on both sides, $$\prod_{n\geq1}\left(\frac{\pi^2n^2-\phi^2}{(e\pi n)^2}\right)^{\phi}\left(\frac{\pi n+\phi}{\pi n-\phi}\right)^{\pi n}=\frac{\exp[\phi+ L(\phi)]}{j(\phi)}$$ where $j(x)=x^x$. Similarly, if we set $$k(\phi)=\int_0^\phi\log(\cot x)\mathrm dx$$ we see that $$k(\phi)=L(\phi+\pi/2)-L(\phi)+\frac\pi2\log 2$$ and it can be shown in a similar way that $$\prod_{n\geq1}\frac{(\pi^2n^2-(\phi+\pi/2)^2)^{\phi+\pi/2}}{(e\pi n)^\pi(\pi^2n^2-\phi^2)^{\phi}}\left(\frac{(\pi n+\pi/2+\phi)(\pi n-\phi)}{(\pi n-\pi/2-\phi)(\pi n+\phi)}\right)^{\pi n}=2^{\pi/2}\frac{j(\phi)}{j(\phi+\pi/2)}\exp[\pi/2+k(\phi)]$$ And since there are a few nice closed forms for $L(\phi)$ and $k(\phi)$, and there are these beautiful products to accompany them, it would be very fitting if there were a general closed form.
Besides Clausen functions: using one integration by parts, $$\int\log (\sin (x)) \,dx=x \log (\sin (x))-\int x \cot(x)\,dx$$ and $$\int x \cot(x)\,dx=x \log \left(1-e^{2 i x}\right)-\frac{1}{2} i \left(x^2+\text{Li}_2\left(e^{2 i x}\right)\right),$$ which gives in the end $$\int_0^\phi\log (\sin (x)) \,dx=\frac{1}{2} i \left(\phi ^2+\text{Li}_2\left(e^{2 i \phi }\right)\right)-\phi \log \left(1-e^{2 i \phi }\right)+\phi \log (\sin (\phi ))-\frac{i \pi ^2}{12}.$$ If you have a look here, you can notice that the Clausen function can be given in terms of polylogarithms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Let $a,b\in G$ elements of order $5$. If $a^3=b^3$ then $a=b$. Prove or disprove: let $G$ be a group and $a,b\in G$ elements of order $5$. If $a^3=b^3$ then $a=b$. I saw the following example which tries to disprove the theorem: $G=\mathbb{Z}_{10}$ and $a=2,b=8$. I'm not sure about that part, but $o(2)=5$ and $o(8)=5$. I think that $o(2)=\infty$ because there is no $n$ such that $2^n\,\mathrm{mod}\,10=1$, but I'm not sure. Does this example disprove the theorem?
Since $$b^5=a^5 =a^3a^2 =b^3a^2\implies a^2=b^2$$ so $$b^3=a^3 =a^2a =b^2a\implies a=b$$
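The statement can also be confirmed exhaustively in a concrete group (an illustration only; the cancellation argument above is the actual proof). In $S_5$ the elements of order $5$ are exactly the $24$ five-cycles, and cubing is injective on them:

```python
from itertools import permutations

n = 5
identity = tuple(range(n))

def compose(p, q):                    # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(n))

def order(p):
    k, r = 1, p
    while r != identity:
        r = compose(r, p)
        k += 1
    return k

def cube(p):
    return compose(p, compose(p, p))

order5 = [p for p in permutations(range(n)) if order(p) == 5]
cubes = {cube(p) for p in order5}     # injective iff no collisions
```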
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Finding rational solutions for $3x^2+5y^2 =4$ This question comes from Rational Points on Elliptic Curves (Silverman & Tate), exercise $1.6$ (b). I want to calculate all rational solutions for $3x^2+5y^2 =4$. However, I think that there are no rational solutions because if we homogenize we get $3X^2+5Y^2 =4Z^2$ and mod $3$ the only solution is $Z=Y=0$. Is this sufficient?
Your argument isn't quite sufficient. This is because the observation that "mod $3$ the only solution is $Z=Y=0$" applies equally well to the equation $9X^2+5Y^2=4Z^2$, which does have solutions, e.g., $(X,Y,Z)=(2,0,3)$. What you need to say is that "mod $3$ the only solution is $Z=Y=X=0$." (On a side note, it would be better to use congruence notation, $\equiv$, instead of the equal sign.)
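Both points can be made concrete with a short computation: reducing mod $3$ forces $Y\equiv Z\equiv 0$ but leaves $X$ free (which is exactly why the stronger statement needs the extra descent step of substituting $Y=3Y'$, $Z=3Z'$), a small exhaustive search finds no nonzero integer solution of $3X^2+5Y^2=4Z^2$, and $(2,0,3)$ solves the $9X^2$ variant.

```python
# residues mod 3 satisfying 3X^2 + 5Y^2 = 4Z^2
residues = {(x, y, z)
            for x in range(3) for y in range(3) for z in range(3)
            if (3 * x * x + 5 * y * y - 4 * z * z) % 3 == 0}
# note X stays free: the residue solutions are (0,0,0), (1,0,0), (2,0,0)

# exhaustive search in a small box for nonzero integer solutions
R = 25
no_nonzero = not any(
    3 * x * x + 5 * y * y == 4 * z * z
    for x in range(-R, R + 1) for y in range(-R, R + 1) for z in range(-R, R + 1)
    if (x, y, z) != (0, 0, 0))

# the 9X^2 variant, by contrast, has the solution (X, Y, Z) = (2, 0, 3)
counterexample_ok = (9 * 2 ** 2 + 5 * 0 ** 2 == 4 * 3 ** 2)
```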
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Simplifying 3rd root of (24 * sqrt(3)) I have problems following a solution towards simplifying a given polynomial. $$Polynomial: p(x)=x^5+{\sqrt 3}x^4+24{\sqrt 3}x^2+72x$$ the zeros of this function (Polynomial roots? English isn't my native language, so I don't know how to express the point(s) at which the function meets the X-axis) are: $$x_0 = -{\sqrt 3} \\ x_1 = 0 \\$$ and the complex ones, which are calculated with what "remains" after polynomial division, drawing the 3rd root, etc.: $$x=\sqrt[\Large 3]{-24\sqrt 3}$$ The first problem comes now. The next step, without explanation, simplifies the above to the following: $$x=\sqrt[\Large 3]{8*\sqrt 3^3e^{\Large_{i\pi}}}$$ How is this done or rather what's the logic behind it? Especially the 8 that somehow was transformed from the 24.
$24 = 3\cdot8$, so $24 \sqrt {3} = 8\cdot3\cdot\sqrt{3} = 8(\sqrt{3})^3$. $-1 = e^{\pi i}$, so $-24\sqrt{3} = 8(\sqrt 3)^3 e^{\pi i}=2^3(\sqrt 3)^3e^{\pi i}$. And so $\sqrt[3]{-24\sqrt 3}=\sqrt[3]{2^3(\sqrt{3})^3e^{\pi i}} = 2\sqrt 3\, e^{\frac {(2k + 1)\pi i}3}$ for $k=0,1,2$.
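The three cube roots can be verified numerically with complex arithmetic; $k=1$ gives the real root $-2\sqrt3$:

```python
import cmath
import math

target = -24 * math.sqrt(3)
# candidate cube roots: 2*sqrt(3) * e^{(2k+1) pi i / 3} for k = 0, 1, 2
roots = [2 * math.sqrt(3) * cmath.exp(1j * (2 * k + 1) * math.pi / 3)
         for k in range(3)]
cubes_ok = all(abs(r ** 3 - target) < 1e-9 for r in roots)
real_root = roots[1]                 # k = 1: e^{i pi} = -1, giving -2*sqrt(3)
```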
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let $f:X\to X $ be continuous. Does $f $ have a fixed point when $X=[0,1)$ or $X=(0,1) $? Let $f:X\to X $ be continuous. Show that if $X=[0,1] $, $f $ has a fixed point(i.e. there exists $x$ such that $f (x)=x$). What happens if $X $ equals $[0,1) $ or $(0,1) $? First part of the question is an immediate consequence of intermediate value theorem(for a proof, see here). I think $f (x)=x^2$ is a counterexample when $X=(0,1) $, since $x^2\lt x $ for all $x\in (0,1) $. But to be honest, I don't understand what causes breakdown of the fixed point theorem on $(0,1)$, since IVT only requires connectedness of domain. Is this related to the non-compactness of $(0,1)$? Also I can't think of any counterexample for the case $X=[0,1)$(assuming fixed point is somehow related to compactness). Any help is appreciated. Thank you.
The IVT is not the only ingredient here. The way the theorem works is by setting up the unit square $[0,1]\times[0,1]$ with the diagonal line $y=x$ drawn through it (the original answer included figures here, omitted in this copy). A function from $[0, 1]$ to $[0, 1]$ that intersects this line will have a fixed point at the point of intersection. The IVT kicks in when we have a function whose graph enters the top triangle (where $f(x)>x$) and the bottom triangle (where $f(x)<x$) at various points. The fact is, by the IVT, such a function has to cut the line somewhere, i.e. it must have a fixed point. But, this makes an assumption! The function may only exist in one triangle or the other, but not in both. That is, why can we not have $f(x) > x$ for all $x$ or $f(x) < x$ for all $x$? Due to the function having a full domain of $[0, 1]$, there's a squeeze happening. A function trying to satisfy $f(x) > x$ everywhere, and a function trying to satisfy $f(x) < x$ everywhere, are both pinched towards the diagonal line: the first must have $f(1) = 1$, and the second must have $f(0) = 0$. This illustrates the necessity of defining the function all the way to $0$ and $1$. Removing either of these points means that the functions are not squeezed to a fixed point (we'd only ensure that $f(x)$ and $x$ become arbitrarily close).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proving an inequality using given ones; no need for differentiation Prove that $(\frac ab+\frac bc+\frac ca)(\frac ba+\frac cb+\frac ac)\geqslant9$ The formulas given were $$\frac{a+b} {2}\geqslant\sqrt {ab}$$ $$a^2+b^2\geqslant2ab$$ $$\frac{a+b+c} {3}\geqslant\sqrt[3]{abc}$$ $$a^3+b^3+c^3\geqslant3abc$$ for all $a\gt0, b\gt0, c\gt0.$ I couldn't think of any way to do this; please help.
Use that $$\frac{a}{b}+\frac{b}{c}+\frac{c}{a}\geq 3\sqrt[3]{\frac{a}{b}\frac{b}{c}\frac{c}{a}}=3$$ and $$\frac{b}{a}+\frac{c}{b}+\frac{a}{c}\geq 3\sqrt[3]{\frac{b}{a}\frac{c}{b}\frac{a}{c}}=3$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Let $N,M$ be normal subgroups of $G$ with $N\cap M=\{e\}$. Prove that $M\subset C_{G}(N)$ and $N\subset C_{G}(M)$. First consider the following definition: Let $G$ be a group and $H$ a subgroup of $G$. The centralizer: $$ C_{G}(H)=\{g\in G\,:\,gh=hg,\,\forall h\in H\}$$ Now I'm trying to prove the following theorem: Let $N,M$ be normal subgroups of $G$ so $N\cap M=\{e\}$. Prove that $M\subset C_{G}(N)$ and $N\subset C_{G}(M)$. What I tried: We know that $N\triangleleft G$, so for every $n\in N$ and every $g\in G$ there is $n_{0}\in N$ with $n_{0}=g^{-1}ng$. We can convert it to be: $n=gn_0g^{-1}$. We also know that $M\triangleleft G$, so there is $m_{0}\in M$ with $m=gm_0g^{-1}$. We can see that: $$ nm=(gn_0g^{-1})(gm_0g^{-1})=g(n_0m_0)g^{-1}\\ mn=(gm_0g^{-1})(gn_0g^{-1})=g(m_0n_0)g^{-1}$$ But how do I continue from here? As I understand it, I need to show that $nm=mn$.
It is useful to consider $[M,N] := \langle mnm^{-1}n^{-1} \mid m\in M,\ n\in N\rangle $. We shall show this is the trivial subgroup. Since $N$ is normal, $mnm^{-1} \in N$ and hence $mnm^{-1}n^{-1}\in N$; but then again, since $M$ is normal, $nm^{-1}n^{-1}\in M$ and hence $mnm^{-1}n^{-1} \in M$. So each generator is in $M\cap N=\{e\}$. Thus every element of $M$ commutes with every element of $N$. This solves your problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3114966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Conditional Independence, Decomposition Is there some set of independence relations between three random variables $X$, $Y$ and $Z$ such that $P(Z \mid X, Y)$ = $P(Z \mid X) \cdot P(Z \mid Y)$? (I feel like there should be, but I can't find it).
The relationship you are hoping to see: $$ p(z\mid x,y)=p(z\mid x) p(z\mid y)\tag1$$ isn't true for the graph $x\to z\leftarrow y$ in general. The graph $x\to z\leftarrow y$ represents a joint distribution for $(X,Y,Z)$ that can be factored into the form $$ p(x,y,z)=p(x)p(y)p(z\mid x,y).\tag2 $$ By summing (2) over $z$, we find that $X$ and $Y$ are independent. However, independence between $X$ and $Y$ doesn't imply (1). For a counterexample, let $X$ and $Y$ be independent, taking values $0$ and $1$ with equal probability. Define $Z:=X+Y$. Then the joint distribution of $(X,Y,Z)$ satisfies the relationship (2). Plugging $x=0, y=1, z=1$, we calculate $p(z\mid x,y)=1$, $p(z\mid x)=\frac12$, and $p(z\mid y)=\frac12$, so (1) doesn't hold.
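The counterexample is small enough to enumerate exactly:

```python
from fractions import Fraction
from itertools import product

# X, Y independent fair bits, Z = X + Y; the joint distribution is exact
joint = {(x, y, x + y): Fraction(1, 4) for x, y in product((0, 1), repeat=2)}

def P(cond):
    """Probability of the event described by `cond`, a predicate on (x, y, z)."""
    return sum(p for xyz, p in joint.items() if cond(*xyz))

x0, y0, z0 = 0, 1, 1
p_z_xy = P(lambda x, y, z: (x, y, z) == (x0, y0, z0)) / P(lambda x, y, z: (x, y) == (x0, y0))
p_z_x = P(lambda x, y, z: x == x0 and z == z0) / P(lambda x, y, z: x == x0)
p_z_y = P(lambda x, y, z: y == y0 and z == z0) / P(lambda x, y, z: y == y0)
```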
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to integrate $\cos^3x$ by parts I've converted $\cos^3(x)$ into $\cos^2(x)\cos(x)$ but still have not gotten the answer. The answer is $\dfrac{\sin(x)(3\cos^2x + 2\sin^2x)}{3}$. My answer was the same except I did not have a $3$ in front of $x$ and my $2\sin^2x$ was not squared. Help!
Since $\cos^3x=(1-\sin^2 x)\cos x$, you can do $\sin x=t$ and $\cos x\,\mathrm dx=\mathrm dt$, thereby getting $$\int(1-t^2)\,\mathrm dt.$$
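A numerical check of the substitution (an addition): the antiderivative $\sin x - \frac{\sin^3x}{3}$ obtained from $\int(1-t^2)\,dt$ agrees with the form quoted in the question and with direct quadrature of $\cos^3$.

```python
import math

def F_sub(u):                 # antiderivative from the substitution t = sin x
    t = math.sin(u)
    return t - t ** 3 / 3

def F_stated(u):              # the form quoted in the question
    s, c = math.sin(u), math.cos(u)
    return s * (3 * c * c + 2 * s * s) / 3

def integral(u, n=100_000):   # midpoint-rule value of the definite integral
    h = u / n
    return sum(math.cos((k + 0.5) * h) ** 3 for k in range(n)) * h

agree = all(abs(F_sub(u) - F_stated(u)) < 1e-12 and
            abs(F_sub(u) - integral(u)) < 1e-8
            for u in (0.5, 1.0, 2.0))
```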
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Show that $n^2-1+n \sqrt{d}$ is always a unit in $\mathbb{Z}[\sqrt{d}]$ We let $n\in \mathbb{Z}$, $n>2$ and $d=n^2-2$. We want to show that $n^2-1+n\sqrt{d}$ is a unit of $\mathbb{Z}[\sqrt{d}]$. My initial idea was to consider the induced norm $N:R\to\mathbb{Z}$, given by $N(a+b\sqrt{d})=a^2-db^2$. We know that if $R$ is the ring of integers of some quadratic number field, and $\alpha \in R$, then $N(\alpha)=\pm1 \Leftrightarrow \alpha \in R^{\times}$, and as $N(n^2-1+n\sqrt{d})=(n^2-1)^2-dn^2=(n^2-1)^2-(n^2-2)n^2=n^4-2n^2+1-n^4+2n^2=1$, we must have $n^2-1+n\sqrt{d}\in \mathbb{Z}[\sqrt{d}]^{\times}$. I then realised that $\mathbb{Z}[\sqrt{d}]$ is the ring of integers of a quadratic number field if and only if $d\not\equiv1\ (\textrm{mod}\ 4)$, so my proof does not apply for the case of $d\equiv 1\ (\textrm{mod}\ 4)$. I then decided to try to find a proof that proves the general case, perhaps using fundamental units, seeing as the rings under consideration all have $d>2$. Having been unable to make any meaningful progress in this department, I decided to consult the community. All help would, as always, be highly appreciated.
Why not prove it directly? Multiply it by its conjugate and see you get $1$. $$\begin {align} \left(n^2-1+n\sqrt d\right)\left(n^2-1-n\sqrt d\right)&=\left(n^2-1\right)^2-n^2d\\ &=\left(n^2-1\right)^2-n^2\left(n^2-2\right)\\&=1 \end {align}$$
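The norm identity is easy to spot-check by machine; this little script (my addition) verifies $(n^2-1)^2-(n^2-2)n^2=1$ over a range of $n$:

```python
def norm(n):
    # N(n^2 - 1 + n*sqrt(d)) = (n^2 - 1)^2 - d*n^2 with d = n^2 - 2
    d = n * n - 2
    return (n * n - 1) ** 2 - d * n * n

norms = [norm(n) for n in range(3, 100)]
```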
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Poisson Process Conditional Expectation Given $X_t$ a Poisson process such that $\lambda = 1$ find $E[X_1\mid X_2]$ and $E[X_2\mid X_1]$. The first one is pretty straightforward since we have $E[X_2 - X_1] = E[X_1] = 1$ so then we get $E[X_2 \mid X_1] = E[X_2 - X_1 + X_1 \mid X_1] = E[X_2 - X_1 \mid X_1] + E[X_1\mid X_1] = 1+ X_1$ as $X_2-X_1$ is independent of $X_1$. As for $E[X_1\mid X_2]$ I'm not entirely sure, I already know the answer should be $\frac{X_2}{2}$ but that is not what I am getting. It relates to the binomial distribution if I understand correctly. I believe I should have $$P(X_1\mid X_2) = \frac{P(X_1 = x, X_2 = y)}{P(X_2 = y)} = \frac{P(X_1 = x, X_2 - X_1 = y-x)}{P(X_2 = y)} = \frac{P(X_1= x)P(X_2 - X_1 = y-x)}{P(X_2 = y) } = \frac{e^{-1}(x!)^{-1}e^{-1}((y-x)!)^{-1}}{e^{-2}(2^{y}/y!)} = \frac{y!}{x!(y-x)!}2^{-y}$$ but I'm missing the $2^{-(y-x)}$. Maybe I made a mistake in my conditional probability.
By what you calculated, the PMF of $X_1$ conditional on $X_2 =y$ is that of a $\mathrm{Bin}(y,1/2)$ random variable. Therefore, $\mathbb{E}[X_1\mid X_2 = y] = y/2$ and $\mathbb{E}[X_1\mid X_2] = X_2/2$.
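The binomial identification can be confirmed numerically; the sketch below (my own check, using only the Poisson pmfs and independent increments) compares $P(X_1=x\mid X_2=y)$ with the $\mathrm{Bin}(y,1/2)$ pmf and its mean with $y/2$:

```python
import math

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def cond_pmf(x, y):
    # P(X1 = x | X2 = y): independent increments give X1 ~ Poisson(1),
    # X2 - X1 ~ Poisson(1), and X2 ~ Poisson(2).
    return poisson_pmf(x, 1) * poisson_pmf(y - x, 1) / poisson_pmf(y, 2)

def binom_pmf(x, y):
    return math.comb(y, x) * 0.5 ** y

y = 6
cond = [cond_pmf(x, y) for x in range(y + 1)]
binom = [binom_pmf(x, y) for x in range(y + 1)]
cond_mean = sum(x * p for x, p in enumerate(cond))
```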
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Should we distinguish the minus sign from the negative sign? In the set $\mathbb{C}$ of complex numbers, the minus sign "-" may be used for the following: * *As a unary operator $-_u$, given a complex number $a$, $-_ua$ is the unique number (called the negative of $a$) $c$ such that $a+c=c+a=0$, where $0$ is the additive identity. *As a binary operator $-_b$, given two complex numbers $a$ and $d$, $a-_bd$ is the sum of $a$ and $-_ud$. Though $-_u$ and $-_b$ are closely related, they are completely different objects in mathematics: $-_u$ is considered as a map from $\mathbb{C}$ to $\mathbb{C}$ while $-_b$ is a map from $\mathbb{C}\times\mathbb{C}$ to $\mathbb{C}$. As maps, they have the same codomain but different domains. In some calculators such as TI nspire, two different keys are used for the two different meanings of $-$. However, in our everyday writing, we seldom differentiate the unary operator $-_u$ and the binary operator $-_b$. Shouldn't we use different symbols for them; after all, though closely related, they are different? If we do not use different symbols for them, the following simple calculation appears confusing: \begin{equation*}\begin{array}{c} \phantom{\times9}-23\\ \underline{-\phantom{9}-15} \\ \phantom{999}-8 \end{array}\end{equation*} Also, I find the idea that in the same equation $-1-1=-2$, the first $-$ has a different meaning from the second $-$ unsatisfactory.
We do distinguish most of the time, at such a subconscious level that it seems either automatic or unthinking. Heck, we're even capable of resolving meanings that a computer would find contradictory. Take for instance $$\frac{3}{2} = 1 + \frac{1}{2}.$$ In a cake recipe, you might find something like 1-1/2 cups light brown sugar Put that in the context of JavaScript and it could be misunderstood as $$1 - \frac{1}{2} = \frac{1}{2}.$$ But if the author had meant one half rather than three halves, why didn't they just write one half? We understand in the recipe context that the author meant three halves, not one half. If that's not manly enough an example, try a search for a 1-1/2 socket wrench.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 2 }
Is there a proof of $\lnot \forall x, P(x) \iff \exists x, \lnot P(x)$ I am interested in how one would formally prove: $\lnot \forall x, P(x) \iff \exists x, \lnot P(x)$ I realize that it's basically saying that: $\lnot(P(x_0) \land P(x_1) \land ... \land P(x_n)) \iff \lnot P(x_0) \lor \lnot P(x_1) \lor ... \lor \lnot P(x_n)$ Which is an "intuitive" proof assuming we already accept De Morgan's, but I am curious if there is a formal way to prove it (e.g. Fitch-style).
Fitch style proof: \begin{array}{lll} 1&\neg \forall x \ P(x) & Assumption\\ 2&\quad \neg \exists x \ \neg P(x)&Assumption\\ 3&\quad \quad a&\\ 4&\quad \quad \quad \neg P(a) & Assumption\\ 5&\quad \quad \quad \exists x \ \neg P(x)&\exists \ Intro \ 4\\ 6&\quad \quad \quad \bot& \bot \ Intro \ 2,5\\ 7&\quad \quad \neg \neg P(a)& \neg \ Intro \ 4-6\\ 8& \quad \quad P(a)& \neg \ Elim \ 7\\ 9&\quad \forall x \ P(x) & \forall \ Intro \ 3-8\\ 10&\quad \bot & \bot \ Intro \ 1,9\\ 11&\neg \neg \exists x \ \neg P(x)&\neg \ Intro \ 2-10\\ 12&\exists x \ \neg P(x)&\neg \ Elim \ 11\\ \end{array} Conceptual explanation: the basic strategy is to prove this by a proof by contradiction. That is, if it is not the case that there is some non-P, then everything is a P, which contradicts the assumption that not everything is a P.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are the projections along orthogonal direction of multivariate normal distribution with diagonal covariance matrix independent? I'm taking a probability class and my prof used the following theorem IIRC. Let $g\sim\mathcal{N}(\mu,\Sigma)$ where $\Sigma$ is diagonal( I don't know if this condition is necessary) and $\langle u,v\rangle=0$, then $\langle g,u\rangle$ and $\langle g,v\rangle$ are independent. Is this correct? If so, how to prove this? I believe the following is a special case of the theorem: Are the random variables $X + Y$ and $X - Y$ independent if $X, Y$ are distributed normal? I tried to use the same technique to prove the theorem but got stuck.
Elaborating on Minus One-Twelfth's comment: The pair $(g^\top u, g^\top v)$ is jointly normal. (Why?) Thus independence is equivalent to $\text{Cov}(g^\top u, g^\top v) = 0$. The covariance is $$\begin{align}\text{Cov}(g^\top u, g^\top v) &= E[(g^\top u - E[g^\top u])(g^\top v - E[g^\top v])] \\ &= E[((g - \mu)^\top u)((g - \mu)^\top v)] \\ &= E[u^\top (g - \mu)(g-\mu)^\top v] \\ &= u^\top E[(g-\mu)(g-\mu)^\top] v \\ &= u^\top \Sigma v. \end{align}$$ If $\Sigma$ is a multiple of the identity matrix and if $\mu = 0$, then independence is equivalent to $u^\top v = 0$. However, in general you have to use the above expression $u^\top \Sigma v$.
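A quick Monte Carlo illustration (my addition; the vectors $u,v$, mean $\mu$ and $\Sigma=AA^\top$ below are arbitrary choices) shows the empirical covariance of the two projections matching $u^\top\Sigma v$:

```python
import random

random.seed(12345)

# Sigma = A A^T for a fixed 2x2 "square root" A; g = mu + A z with z standard normal.
A = [[1.0, 0.0], [0.5, 1.0]]
Sigma = [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)] for i in range(2)]
mu = [3.0, -1.0]
u = [1.0, 2.0]
v = [2.0, -1.0]

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def sample():
    z = [random.gauss(0.0, 1.0) for _ in range(2)]
    return [mu[i] + dot(A[i], z) for i in range(2)]

N = 200_000
pairs = [(dot(g, u), dot(g, v)) for g in (sample() for _ in range(N))]
mean_u = sum(a for a, _ in pairs) / N
mean_v = sum(b for _, b in pairs) / N
cov_emp = sum((a - mean_u) * (b - mean_v) for a, b in pairs) / N
cov_theory = sum(u[i] * Sigma[i][j] * v[j] for i in range(2) for j in range(2))
```

Here $u^\top v = 0$ yet $u^\top\Sigma v \ne 0$, illustrating that orthogonality of $u$ and $v$ alone is not enough for general $\Sigma$.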
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For what values of $a$ and $b$ is the function $\frac{x^ay^b}{x^2+y^2}$ continuous at $(0,0)$? I have the function $$f(x,y)=\begin{cases}\dfrac{x^ay^b}{x^2+y^2} &(x,y)\neq(0,0)\\ 0 &(x,y)=(0,0) \end{cases}$$ I am trying to figure out what constants $a$ and $b$ will make the function continuous at $(0,0)$. I know that the limit has to be $0$ as $(x,y)\to(0,0)$ for it to be continuous. Using polar coordinates, I think that $a+b \geq 3$ since $x^2+y^2= r^2$ and the limit of any polar function $r^x$ with $x > 0$ as $r\to0$ is $0$. Am I on the right track or am I missing something?
$\dfrac{x^ay^b}{x^2+y^2} \overset{POLAR \, \, COORD.}{\Longrightarrow} \dfrac{r^a\cos^a(\theta) r^b\sin^b(\theta)}{r^2} \Longleftrightarrow r^{a+b-2}\cos^a(\theta)\sin^b(\theta)$ What happens for different values of $a+b-2$ in the limit, $\lim_{r\rightarrow 0}r^{a+b-2}\cos^a(\theta)\sin^b(\theta)=0$ ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3115918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Linear Algebra, proving subset is a subspace Let $W$ be a subset of vector space $V$ over $K$. $\forall \mathbf{u}, \mathbf{v} \in W,\alpha \in K, \alpha \mathbf{u} + \mathbf{v} \in W$ , show that $W$ is a subspace over $K$. Hence, show that the set of linear combinations $$W = \{\alpha_1 \mathbf{v}_1 + \alpha_2\mathbf{v}_2 +\ldots + \alpha_n\mathbf{v}_n, \mathbf{v}_i \in V, \alpha_i \in K,i = 1, \ldots, n\}$$ is a subspace of $V$ over $K$. I don't understand what $W$ contains? Is $W = \{u,v\}$? or is $W = \operatorname{Span}\{u,v\}$?? Please help, I am new to linear algebra.
To check that $W$ is a vector subspace you need to check the 3 following conditions: i) $W$ is non empty (clear if $V$ is non empty), ii)if $\mathbf x \in W$ and $\mathbf y \in W$, then $\mathbf x+\mathbf y \in W$. iii)If $\alpha \in K$, and $\mathbf x \in W$, then $\alpha \mathbf x \in W$ For your second question, you need to check these three conditions again. Again (i) should hold. For ii) and iii), observe that: for $x_i$ and $y_i$ in $K$, and $v_i$ in $V$, we have: $(x_1 v_1+...+x_n v_n)+(y_1 v_1+...+y_n v_n)=(x_1+y_1)v_1+...+(x_n+y_n)v_n$ , which is again a linear combination of the $v_i$'s. Now check that: for $c$ in $K$: $c(x_1 v_1+...+x_n v_n)=cx_1 v_1+...+cx_n v_n$ is again a linear combination of the $v_i$s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3116018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Does $\mathbb{F}_9$ contain a 4th root of unity? I realised that I don't know how to construct $\mathbb{F}_9$. I'm guessing that $\mathbb{F}_9 = \mathbb{F_3(\theta)}$, where $\theta$ is the root of some irreducible polynomial over $\mathbb{F}_3[x]$ of degree two? Must I even construct $\mathbb{F}_9$ in order to determine whether it contains a 4th root of unity or is there some other simpler way I'm missing?
A non-trivial $4$th root of unity is a square root of $-1$, since $(x^4-1)=(x^2-1)(x^2+1)=(x-1)(x+1)(x^2+1)$, and $x^2+1$ is irreducible over $\mathbf F_3$. So the answer is yes: $$\mathbf F_9\simeq \mathbf F_3[x]/(x^2+1),$$ and if you denote $\omega$ the congruence class of $x$, the non-trivial $4$th roots of unity are $\omega$ and $-\omega$.
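For concreteness, here is a tiny sketch (my addition) of arithmetic in $\mathbf F_9\simeq\mathbf F_3[x]/(x^2+1)$, representing $a+b\omega$ as the pair $(a,b)$ and checking that $\omega$ has order $4$:

```python
# Elements of F9 = F3[x]/(x^2 + 1) as pairs (a, b) standing for a + b*omega,
# where omega^2 = -1 = 2 (mod 3).
def mul(p, q):
    a, b = p
    c, d = q
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

one = (1, 0)
omega = (0, 1)
powers = [one]
for _ in range(4):
    powers.append(mul(powers[-1], omega))
```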
{ "language": "en", "url": "https://math.stackexchange.com/questions/3116112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
How to solve this recursion relation? Suppose: $$2k(n-k)a_k=n(n-1)+(n-k)(k+1)a_{k+1}+(n-k)(k-1)a_{k-1}$$ where $k=1, 2, ..., n-1$ and $a_n=0$, how to derive $a_k$? I tried to find a pattern: $a_1-a_2=n/2$; $a_2-a_3=n(2n-1)/6(n-2)$, but it becomes more and more complicated and I can't find the rule.
Hint. Making $b_k = k a_k$ we have $$ -b_{k-1}+2 b_k - b_{k+1} = \frac{n(n-1)}{n-k} $$ This can be simplified by making $c_k = b_k-b_{k-1}$, giving $$ c_k-c_{k+1} = \frac{n(n-1)}{n-k} $$ After solving for $c_k$, one can then solve for $b_k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3116418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $D$ is the differentiation operator on $V$, find $D^*$. Let $V$ be the vector space of the polynomials over $R$ of degree less than or equal to $3$ with the inner product $(f|g)=\int_{0} ^{1}f(t)g(t) dt$, and let $D$ be the differentiation operator on V. Find $D^*$ Attempt The calculation I did was very long, so I will explain what I did step by step so that you can say whether I made a mistake or got it right. STEP 1: First I considered the canonical basis of $R_{\leq 3}[X]$, that is $ \{ 1,x,x^2,x^3\}$. I did the Gram-Schmidt Process to orthogonalize this basis. It resulted in $$ \left\{ 1, x-\frac{1}{2}, x^2-x+ \frac{1}{6}, x^3-\frac{3}{2}x^2+\frac{3}{5}x- \frac{1}{2} \right\}$$ STEP 2: Orthonormalize this basis, dividing each element by its norm (in this part I may have been wrong because the calculations were long). It resulted in $$ \{ 1, \sqrt{12} \left(x-\frac{1}{2} \right) , \sqrt{180} \left( x^2-x+ \frac{1}{6} \right) , \sqrt{\frac{33600}{18772}} \left( x^3-\frac{3}{2}x^2+\frac{3}{5}x- \frac{1}{2} \right) \}$$ STEP 3: Write the matrix of $D$ $$\begin{bmatrix}0&\sqrt{12}&0& \frac{1}{10} \sqrt{\frac{33600}{18772}}\\0&0&2\sqrt{15}& 0\\ 0&0&0& \sqrt{\frac{33600}{33378960}} \\ 0&0&0& 0 \end{bmatrix}$$ STEP 4: The matrix of $D^*$ is the conjugate transpose of the matrix of $D$ (in this case only the transpose), according to the corollary: Let $V$ be a finite-dimensional inner product space, and let $T$ be a linear operator on $V$. In any orthonormal basis for $V$, the matrix of $T^*$ is the conjugate transpose of the matrix of $T$. Is the idea at least correct? Is there any easier way to do this exercise?
Yes, the idea is correct (I didn't verify the coefficients in the orthonormalization). Here's an alternative method: Integrating by parts gives $$\langle f, Dg \rangle = \int_0^1 f g' dx = f(1) g(1) - f(0) g(0) + \int_0^1 - f' g \,dx = f(1) g(1) - f(0) g(0) + \langle -Df, g \rangle.$$ Thus, if we can find an operator $S$ such that $\langle S f, g \rangle = f(1) g(1) - f(0) g(0)$, we will have $$\langle f, D g \rangle = \langle (S - D) f, g \rangle$$ and thus $$D^* = S - D .$$ Remark Up to this point, the formulation works for all functions in $L^2([0, 1])$, and if we enlarge our space to include the Dirac delta function, by definition we can write $S = \delta(t - 1) - \delta(t)$ and thus $D^* = -D + \delta(t - 1) - \delta(t) .$) To find an explicit formula for $S$, we look for polynomials $r, s \in V$ such that $$\langle S f , g \rangle = \langle f(1) r - f(0) s , g \rangle$$ for all $f , g \in V$. On the one hand, $\langle S f, g \rangle = \langle D f, g \rangle + \langle f, D g \rangle$, and except for $k = l = 0$, this gives $\langle S(x^k), x^l \rangle = 1$. So, writing $r = \sum_{i = 0}^3 r_i x^i$ and $s = \sum_{i = 0}^3 s_i x^i$ and integrating against monomials $f = x^k, g = x^l$ gives (for $k > 0$) $$1 = \langle S (x^k), x^l \rangle = \langle r, x^l \rangle .$$ Writing this in terms of $r_0, r_1, r_2, r_3$ for $l = 0, 1, 2, 3$ and integrating gives rise to the system $$\pmatrix{1&\frac{1}{2}&\frac{1}{3}&\frac{1}{4}\\\frac{1}{2}&\frac{1}{3}&\frac{1}{4}&\frac{1}{5}\\\frac{1}{3}&\frac{1}{4}&\frac{1}{5}&\frac{1}{6}\\\frac{1}{4}&\frac{1}{5}&\frac{1}{6}&\frac{1}{7}} \pmatrix{r_0\\r_1\\r_2\\r_3} = \pmatrix{1\\1\\1\\1} .$$ Solving gives $$r_0 = -4, r_1 = 60 , r_2 = -180, r_3 = 140 ,$$ so $r(x) = 4 (35 x^3 - 45 x^2 + 15 x - 1)$. A similar argument (and using that $\langle S(1), 1 \rangle = 0$) gives that $s(x) = -4 (35 x^3 - 60 x^2 + 30 x - 4)$. 
Putting everything together gives $$\boxed{D^* f = -D f + 4 [f(1) (35 x^3 - 45 x^2 + 15 x - 1) - f(0) (35 x^3 - 60 x^2 + 30 x - 4)]} .$$ The matrix on the left-hand side is the $4 \times 4$ Hilbert matrix; there is a general formula for the inverse of the analogous $n \times n$ matrix, which means we can write down an explicit formula for the adjoint for the differentiation operator on the space of polynomials of degree $\leq n$ for general $n$.
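Since the coefficients here are easy to get wrong, an exact verification helps; the sketch below (my addition, using `fractions`) encodes the boxed formula and checks $\langle f, Dg\rangle = \langle D^*f, g\rangle$ on the monomial basis of $V$, together with the reproducing properties $\langle r,g\rangle=g(1)$ and $\langle s,g\rangle=g(0)$:

```python
from fractions import Fraction as Fr

# Polynomials of degree <= 3 as coefficient lists [c0, c1, c2, c3].
def inner(p, q):
    # <p, q> = integral_0^1 p(t) q(t) dt, computed exactly over Q
    return sum(Fr(p[i]) * Fr(q[j]) / (i + j + 1)
               for i in range(4) for j in range(4))

def deriv(p):
    return [Fr(i) * Fr(p[i]) for i in range(1, 4)] + [Fr(0)]

def ev(p, x):
    return sum(Fr(c) * Fr(x) ** i for i, c in enumerate(p))

r = [Fr(-4), Fr(60), Fr(-180), Fr(140)]     # 4(35x^3 - 45x^2 + 15x - 1)
s = [Fr(16), Fr(-120), Fr(240), Fr(-140)]   # -4(35x^3 - 60x^2 + 30x - 4)

def D_star(f):
    # D* f = -Df + f(1) r - f(0) s, as in the boxed formula
    df = deriv(f)
    f1, f0 = ev(f, 1), ev(f, 0)
    return [-df[i] + f1 * r[i] - f0 * s[i] for i in range(4)]

basis = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
adjoint_ok = all(inner(f, deriv(g)) == inner(D_star(f), g)
                 for f in basis for g in basis)
```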
{ "language": "en", "url": "https://math.stackexchange.com/questions/3116525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that the product of any two numbers between two consecutive squares is never a perfect square In essence, I want to prove that the product of any two distinct elements in the set $\{n^2, n^2+1, ... , (n+1)^2-1\}$ is never a perfect square for a positive integers $n$. I have no idea on how to prove it, but I've also yet to find a counterexample to this statement. Can anyone help?
For any two numbers $n^2+a,\ n^2+b;\ 0<a<b<(2n+1)$, their product will satisfy $n^4<n^4+(a+b)n^2+ab<(n^2+2n+1)^2$. All of the squares between $n^4$ and $(n^2+2n+1)^2$ will have the form $(n^2+m)^2=n^4+2mn^2+m^2;\ 1\le m\le 2n$ If $n^4+(a+b)n^2+ab$ is a perfect square, it will be one of the squares between $n^4$ and $(n^2+2n+1)^2$ and hence equal to $n^4+2mn^2+m^2$ for some $m$. Thus $(a+b)=2m;\ ab=m^2$ for some $m$ Rearranging, we get $m^2=\frac{a^2+2ab+b^2}{4}=ab=\frac{4ab}{4}$, or $a^2+b^2=2ab$. This implies both $a\mid b$ and $b\mid a$, meaning $b=a$ and the numbers being multiplied to obtain a perfect square are not distinct. Added by edit: John Omielan comments (for the specific case $k=1$) that my original answer fails to consider possible solutions of the form $a+b=2m-k;\ ab=m^2+kn^2$. He separately provides a more complete answer that addresses those cases. Marty Cohen comments that I can only properly conclude $n^4+(a+b)n^2+ab=n^4+2mn^2+m^2 \Rightarrow n^2(a+b-2m)=(m^2-ab)$. Let me address those shortcomings. If $(n^2+a)(n^2+b)=(n^2+m)(n^2+m)$ then either $a=b=m$ (addressed in my original answer) or $a<m<b$ (which this edit will address). If $n^2(a+b-2m)=(m^2-ab)$, then $(m^2-ab)$ is divisible by $n^2$, so $(m^2-ab)=kn^2=n^2(a+b-2m)$, or $(a+b-2m)=k\Rightarrow a+b=2m+k$. $m,a,b$ have limits on their sizes. $m<b\le 2n\Rightarrow m^2< 4n^2$. Also $a<b\le2n \Rightarrow ab<4n^2$ Hence, $|(m^2-ab)|<4n^2 \Rightarrow |k|=(a+b-2m)<4$. For $(a+b)=2m+k,\ |k|=0,1,2,3$, where $k=0$ corresponds to the case where $a=b=m$. The midpoint $t$ between $a$ and $b$ is the average $t=\frac{a+b}{2}$. Let $b-t=r,\ t-a=r$. Note that if $a$ and $b$ have different parity, $t$ and $r$ may have half integral values. Finally, $b-a=2r$, but since $b\le2n,\ a\ge 1$, then $2r=b-a<2n\Rightarrow r<n$. $(n^2+a)(n^2+b)=((n^2+t)-r)((n^2+t)+r)=(n^2+t)^2-r^2$. Therefore, unless $m>t$, $(n^2+a)(n^2+b)<(n^2+m)^2$. But $t=\frac{a+b}{2}=m-\frac{k}{2}$. $m-t=\frac{1}{2},1,\frac{3}{2}$. 
Within the constraints of the original question, $m$ must be larger than the average of $a$ and $b$, but it must be very close to that average. Substituting $t-\frac{k}{2}$ for $m$ in $(n^2+m)^2$, we get $((n^2+t)-\frac{k}{2})^2$. Can that equal $(n^2+a)(n^2+b)=((n^2+t)-r)((n^2+t)+r)=(n^2+t)^2-r^2$? Letting $s=(n^2+t)$ for simplicity in keeping track during expansion, we ask whether $(s-\frac{k}{2})^2=(s-r)(s+r)=s^2-r^2\Rightarrow ks-\frac{k^2}{4}=r^2\Rightarrow k(n^2+t)-\frac{k^2}{4}=r^2$? $r<n\Rightarrow r^2<n^2$. It is obvious for $3\ge k>1$ that $k(n^2+t)-\frac{k^2}{4}>n^2>r^2$. For $k=1$, $n^2+t-\frac{1}{4}>n^2 \iff t>\frac{1}{4}$. For $a<m<b$, $\min(a+b)=4 \Rightarrow \min(t)=2; 2>\frac{1}{4}$. There are no solutions to the question for $a<m<b$.
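The claim is also easy to test by brute force; this script (my addition) checks every distinct pair from $\{n^2,\dots,(n+1)^2-1\}$ for $n$ up to a modest bound:

```python
import math

def square_products(n):
    # Distinct pairs from {n^2, ..., (n+1)^2 - 1} whose product is a perfect square
    vals = list(range(n * n, (n + 1) * (n + 1)))
    hits = []
    for i in range(len(vals)):
        for j in range(i + 1, len(vals)):
            p = vals[i] * vals[j]
            root = math.isqrt(p)
            if root * root == p:
                hits.append((vals[i], vals[j]))
    return hits

counterexamples = [n for n in range(1, 120) if square_products(n)]
```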
{ "language": "en", "url": "https://math.stackexchange.com/questions/3116686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Partition pairs of numbers such that no partition contain two pairs with the same number. Problem Integers $1, 2, 3, ..., n$ can form $\binom{n}{2}$ pairs of numbers. I want to partition these pairs of numbers such that: * *The number of partition is as low as possible *None of the partitions contains two pairs that share the same number. How to do that? Example For n = 5, the following partition satisfies requirement 2: * *A: (1, 2), (3, 4) *B: (2, 3), (4, 5) *C: (1, 3), (2, 4) *D: (3, 5) *E: (1, 4), (2, 5) *F: (1, 5)
This is an edge coloring of the complete graph $K_n$. For each color, the edges of that color will form one part of the partition. Each color can only be used on $\lfloor \frac n2\rfloor$ edges. So we need at least $\binom n2/\lfloor \frac n2\rfloor$ colors: this is $n-1$ when $n$ is even, and $n$ when $n$ is odd. This is in fact possible, and is explained in Wikipedia's article on Baranyai's theorem (a more general statement). The construction has a nice geometric description. Put $n-1$ vertices equally spaced in a circle and $1$ vertex at the center. For each color, pick one edge out of the central vertex, and all the edges perpendicular to that edge. Here is an image of the $n=8$ case (taken from the Wikipedia article): For odd $n$, just use the coloring of $K_{n+1}$ obtained with this approach, and ignore edges out of vertex $n+1$. An alternative formulation of the odd-$n$ solution is to partition pairs $\{x,y\}$ into $n$ classes based on $x+y \bmod n$. (Then $\{x,y\}$ and $\{x,z\}$ cannot be in the same class for $y \ne z$.)
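The "central vertex plus perpendicular edges" construction is short to code; the sketch below (my addition) builds the $n-1$ colour classes for even $n$ via the equivalent round-robin formulation and verifies that they partition all $\binom n2$ pairs into perfect matchings:

```python
def color_classes(n):
    """n even: n-1 classes; class r pairs vertex n-1 with r and matches
    r+k with r-k (mod n-1) -- the round-robin form of the construction."""
    assert n % 2 == 0
    classes = []
    for r in range(n - 1):
        matching = [(r, n - 1)]
        for k in range(1, n // 2):
            a, b = (r + k) % (n - 1), (r - k) % (n - 1)
            matching.append((min(a, b), max(a, b)))
        classes.append(matching)
    return classes

n = 8
classes = color_classes(n)
all_pairs = sorted(p for m in classes for p in m)
expected = sorted((i, j) for i in range(n) for j in range(i + 1, n))
matchings_ok = all(len({x for p in m for x in p}) == n for m in classes)
```

For odd $n$, as the answer notes, one can colour $K_{n+1}$ this way and ignore the edges at the extra vertex.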
{ "language": "en", "url": "https://math.stackexchange.com/questions/3116814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that F vanishes at infinity. Suppose $1 ≤ p < ∞, f ∈ L^p(R)$, and $F(x) = \int_{x}^{x+1} f(t) dm(t)$ Prove that F vanishes at infinity. We know that $\int_R |f|^p < \infty$, then, of course, for any $x, F(x)< \int_x^{x+1} |f|^p < \infty$. But I want to show that not only is it finite, but it goes to zero as $x$ goes to $\infty$. I was thinking to approach it by way of contradiction. Assume that, (WLOG assume $f \geq 0$) $\lim_{x \rightarrow \infty} F(x) > 0$ $\implies \lim_{x \rightarrow \infty} \int_x^{x+1} f(t) dm(t) > 0$ I thought perhaps to approach this with partial sums.. and show that it contradicts that $f \in L^p(R)$. But I have had no luck with this. Help? Hints? I would greatly appreciate it!
Let $g_n(t) = |f(t)|^p\cdot \chi_{[-n,n]}(t)$. Note that $0\leqslant g_n(t)\nearrow |f(t)|^p$ pointwise a.e. as $n\to\infty$. By the Monotone Convergence Theorem, $$ \int g_n(t)\,dt \to \int |f(t)|^p\,dt \quad\text{as $n\to\infty$}. $$ Hence the difference $$ \int|f(t)|^p\,dt - \int_{-n}^n|f(t)|^p\,dt \geqslant \int_n^\infty|f(t)|^p\,dt $$ can be made as small as desired by taking $n$ sufficiently large. In particular, the integral $$ \int_x^\infty|f(t)|^p\,dt \geqslant \int_x^{x+1}|f(t)|^p\,dt $$ can be made arbitrarily small by taking $x$ sufficiently large. If $p = 1$, then we are done by the triangle inequality applied to $F(x)$. If $p > 1$, apply Hölder's inequality \begin{align*} |F(x)| &\leqslant \int_x^{x+1}|f(t)|\,dt \leqslant \bigg(\int_{x}^{x+1}|f(t)|^p\,dt\bigg)^{1/p}\bigg(\int_{x}^{x+1}1^q\,dt\bigg)^{1/q}=\bigg(\int_{x}^{x+1}|f(t)|^p\,dt\bigg)^{1/p}, \end{align*} and the estimate from above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3116907", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Median divisor of even perfect numbers I noticed that when divisors of even perfect numbers are listed in ascending order, the middle divisor (I guess the median), is always of the form $2^n$, some power of 2. If true is there a proof for this, or does it happen all the time? I only checked up to the 8th perfect number. Thank you and apologies for the possible silliness of the question.
Every even perfect number is of the form $n=2^{p-1}(2^p-1)$ with $2^p-1$ prime. Therefore, the factors of $n$ are $1, 2, 2^2, 2^3, ..., 2^{p-1}, 2^p-1, 2(2^p-1), 2^2(2^p-1), 2^3(2^p-1), ..., 2^{p-1}(2^p-1)=n.$ These are $2p$ divisors in all, and since $2^{p-1}<2^p-1$, the $p$-th smallest (the lower middle one) is $2^{p-1}.$
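Here is a quick empirical check (my addition) on the first few even perfect numbers: each has $2p$ divisors, and the $p$-th smallest is $2^{p-1}$:

```python
import math

def divisors(n):
    small = [d for d in range(1, math.isqrt(n) + 1) if n % d == 0]
    return sorted(set(small + [n // d for d in small]))

rows = []
for p in (2, 3, 5, 7, 13):            # exponents giving Mersenne primes
    n = 2 ** (p - 1) * (2 ** p - 1)
    ds = divisors(n)
    rows.append((p, len(ds), ds[len(ds) // 2 - 1]))   # count and lower median
```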
{ "language": "en", "url": "https://math.stackexchange.com/questions/3117077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
If $p^4 | z^2$, then $p^2 | z$ for $p$ a prime and $z$ some positive integer. Problem: Suppose that $p$ is a prime, and $z$ is some positive integer. If $p^4 | z^2$, then $p^2 | z$. Thoughts: If for some positive integer $a$, that $p^4 a = z^2$, then necessarily $p^2 \sqrt{a} = z$, so that if my desired conclusion is correct, then $a$ would have to be a perfect square. How can I show this is true? Edit: I know that if $gcd(p^4, a) = 1$, then necessarily $a$ is a square.
$\dfrac{z^2}{p^4} = n\in\Bbb Z\ $ so $\ x = \dfrac{z}{p^2}\Rightarrow\,x^2= n\,$ $\Rightarrow\,x\in\Bbb Z\,$ by the Rational Root Test.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3117191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Is it the right way to determine the coefficients? I am having trouble understanding the logical completeness of a solution to an exercise in my textbook (Linear Algebra Done Right). We have $Tp=(bp(1)p(2),c\sin p(0))$ for $p\in P(R)$. Prove that if T is a linear map, then $b=c=0$. It answers: consider $f(x)=\pi/2$ and $g(x)=\pi/2$; then they both are polynomials. So examining the property of the linear map $T(x+y) = T(x)+T(y)$, we have $T(f+g) = (b\pi^2,c\sin(\pi))$ and $T(f)+T(g) = (b\pi^2/4,c\sin(\pi/2))+(b\pi^2/4,c\sin(\pi/2)) = (b\pi^2/2,2c\sin(\pi/2))$. Thus $b = c = 0$. My logic is that since $f$ and $g$ are only two instances of polynomials, using them to determine the values of $b$ and $c$ can be dangerous, because we did not try polynomials of degree 1, 2, 3, 4, ... I think the more appropriate way to prove it is using the properties of polynomials, but that would make the proof more complex. I am curious about your opinion. I am really confused about whether I am being appropriately critical or just lack experience in mathematical proofs. Thanks for your time
A proof is either correct or not. There's no issue of danger. I dare say mathematics is not dangerous. Nobody ever cut himself on an integral (though metaphorically, yes, to be sure). One might come up with a simpler, more elegant proof, say; but I don't think it's very likely in this case. Mathematicians tend to prefer simpler proofs, for fairly obvious reasons. Btw, the polynomials used are degree zero, that is,constant polynomials. They are in a sense the simplest types of polynomials. So, in sum, I think it's clearly inexperience at this point. But don't worry. Practice makes perfect.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3117325", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
solve $(3x-1)^{x-1}=1$ Q: How do I solve for $x$? $$(3x-1)^{x-1}=1$$ When I first saw this question I thought it was a quadratic equation, $(3x-1)(x-1)=1$. But my friend says no! It is raised to the power of $x-1$. What?!! I could take the log $$(x-1)\log(3x-1)=0$$ $\log(3x-1)=0$ $3x-1=1$ $x=\frac{2}{3}$. There is a problem: he told me there are more answers!! I can't see it. Can anyone please show me if there are more answers?
Hint:$$(x-1)\log(3x-1)=0\implies \begin{array} xx-1=0\\ 3x-1=1\end{array}$$
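A quick floating-point check (my addition) confirms that both branches of the factored equation give solutions:

```python
def f(x):
    return (3 * x - 1) ** (x - 1)

roots = [2 / 3, 1.0]          # from log(3x - 1) = 0 and from x - 1 = 0
values = [f(x) for x in roots]
```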
{ "language": "en", "url": "https://math.stackexchange.com/questions/3117457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Numbers with exactly 1 square (prime) factor I have recently learned that numbers with no square (prime, assumed in the following) factor are called square-free numbers. I have read that their count grows asymptotically as $$\#\{\text{square-free}\ \leq n\} \sim \frac{6n}{\pi^2}.$$ I am curious and am wondering if there's a similar behaviour for numbers with exactly 1 square prime factor, or even $k$? What I am referring to here: For example, 2100 = $2^2\times3\times5^2\times7$ which has 2 square prime factors, namely $2^2$ and $5^2$. Here $1^2$ and $10^2$ don't count for my purpose. Thanks for reading and thinking :)
Here I would proudly present to you the result that I and some friends have actually got after doing more math on it. Would someone read it and check if it's correct? First, it is known that the number of square-free integers up to $n$ is $$\sum_{i=1}^{\sqrt{n}}\mu{(i)}\left\lfloor{\frac{n}{i^2}}\right\rfloor$$ where $\mu$ is the Möbius function. Now I would apply this to the question, counting numbers with exactly k prime square factors. If $i$ is not square-free, its coefficient is still 0. If $i$ has less than k prime factors, its coefficient is 0 as well because we don't count it at all. If it has m prime factors where m≥k, its coefficient will be $$(-1)^{m-k}\binom{m}{k}$$ by "Generalized Inclusion-Exclusion Principle". Hence, if we denote the number of prime factors of $i$ by $p(i)$ and note that $(-1)^{m-k}=(-1)^k\mu(i)$ for square-free $i$, the formula is $$C_k(n)=(-1)^k\sum_{i=1}^{\sqrt{n}}\mu(i)\left\lfloor{\frac{n}{i^2}}\right\rfloor\binom{p(i)}{k}$$ Would someone like to calculate the ratio of this to n? i.e. $\lim_{n\rightarrow\infty}\frac{C_k(n)}{n}$? Much love, Gareth
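To sanity-check the count, here is a Python sketch (my addition) of the inclusion–exclusion sum; note the Möbius factor $\mu(i)$, which carries the sign $(-1)^{p(i)-k}=(-1)^k\mu(i)$ for square-free $i$ and kills the non-square-free terms. It is compared against a brute-force count:

```python
import math

def omega(m):
    # number of distinct prime factors
    count, d = 0, 2
    while d * d <= m:
        if m % d == 0:
            count += 1
            while m % d == 0:
                m //= d
        d += 1
    return count + (1 if m > 1 else 0)

def mobius(m):
    if m == 1:
        return 1
    sign, d = 1, 2
    while d * d <= m:
        if m % d == 0:
            m //= d
            if m % d == 0:
                return 0          # square factor
            sign = -sign
        d += 1
    return -sign if m > 1 else sign

def count_formula(n, k):
    # (-1)^k * sum over i <= sqrt(n) of mu(i) C(omega(i), k) floor(n / i^2)
    total = sum(mobius(i) * math.comb(omega(i), k) * (n // (i * i))
                for i in range(1, math.isqrt(n) + 1))
    return (-1) ** k * total

def squared_prime_factors(m):
    # how many primes divide m to the second power or higher
    cnt, d = 0, 2
    while d * d <= m:
        e = 0
        while m % d == 0:
            m //= d
            e += 1
        if e >= 2:
            cnt += 1
        d += 1
    return cnt

def count_brute(n, k):
    return sum(1 for m in range(1, n + 1) if squared_prime_factors(m) == k)
```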
{ "language": "en", "url": "https://math.stackexchange.com/questions/3117573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Hahn-Banach needed to show equality? Let $X$ be a normed space and $x_1, x_2 \in X$. Suppose $x^{\ast}(x_1) = x^{\ast}(x_2)$ for all $x^{\ast} \in X^{\ast}$. Then $x_1 = x_2$. Do we need Hahn-Banach (hence, equivalently some sort of axiom of choice) to prove this?
For finite dimensional vector spaces we do not need Hahn-Banach. Let $X$ be a finite dimensional space with basis $\{e_1,\dots,e_n\}$, and $\{f_1,\dots,f_n\}$ be the dual basis for $X^*$. Let $x_2=\sum_{i=1}^n\alpha_ie_i$ and $x_1=\sum_{i=1}^n\beta_ie_i$, and suppose $f(x_1)=f(x_2)$ for all $f\in X^*$. In particular that means that $f_i(x_1)=f_i(x_2)$ for each $i\in \{1,\dots,n\}$, which means that $\alpha_i=\beta_i$ for each $i\in \{1,\dots,n\}$, so $x_1=x_2$. We can generalise this argument to any vector space with a Schauder basis, using the continuity of elements in the dual, but in an infinite-dimensional vector space without a Schauder basis I believe we will need to use the Hahn-Banach theorem. Although I do not rule out the possibility that there may be intermediate spaces where we don't need the full Hahn-Banach theorem, and I hope more learned members of the community will be able to discuss such spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3117800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $\cos^220^\circ-\cos20^\circ\sin10^\circ+\sin^210^\circ=\frac34$ The original exercise is to Prove that $$4(\cos^320^\circ+\sin^310^\circ)=3(\cos20^\circ+\sin10^\circ)$$ Dividing both sides by $\cos20^\circ+\sin10^\circ$ leads me to the problem in the question title. I've tried rewriting the left side in terms of $\sin10^\circ$: $$4\sin^410^\circ+2\sin^310^\circ-3\sin^210^\circ-\sin10^\circ+1\quad(*)$$ but there doesn't seem to be any immediate way to simplify further. I've considered replacing $x=10^\circ$ to see if there was some observation I could make about the more general polynomial $4x^4-2x^3-3x^2-x+1$ but I don't see anything particularly useful about that. Attempting to rewrite in terms of $\cos20^\circ$ seems like it would complicate things by needlessly(?) introducing square roots. Is there a clever application of identities to arrive at the value of $\dfrac34$? I have considered $$\cos20^\circ\sin10^\circ=\frac{\sin30^\circ-\sin10^\circ}2=\frac14-\frac12\sin10^\circ$$ which eliminates the cubic term in $(*)$, and I would have to show that $$4\sin^410^\circ-3\sin^210^\circ+\frac12\sin10^\circ=0$$ $$4\sin^310^\circ-3\sin10^\circ+\frac12=0$$
We need $$4\cos^220^\circ-4\cos20^\circ\sin10^\circ+4\sin^210^\circ=3$$ $$\iff2(1+\cos40^\circ)-2(\sin30^\circ-\sin10^\circ)+2(1-\cos20^\circ)=3$$ $$\iff\cos40^\circ-\cos20^\circ=-\sin10^\circ$$ which is evident from Prosthaphaeresis Formulas
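A floating-point check (my addition) of the original identity, the reduced quadratic form, and the final prosthaphaeresis step:

```python
import math

deg = math.pi / 180
c20, s10 = math.cos(20 * deg), math.sin(10 * deg)

original = 4 * (c20 ** 3 + s10 ** 3) - 3 * (c20 + s10)
reduced = c20 ** 2 - c20 * s10 + s10 ** 2 - 0.75
prosthaphaeresis = math.cos(40 * deg) - math.cos(20 * deg) + s10
```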
{ "language": "en", "url": "https://math.stackexchange.com/questions/3117979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
How to prove that $\left(\ln(\ln(x)) \right)^2 \lt \ln(x)$ How to prove that $\left(\ln(\ln(x)) \right)^2 \lt \ln(x)$ for sufficiently large $x$ This is what I did. Using L'Hopital's rule we have $$\lim_{x\to\infty}\frac{\left(\ln(\ln(x)) \right)^2 }{ \ln(x)}=0$$ So this implies that $\left(\ln(\ln(x)) \right)^2 \lt \ln(x)$ Is that enough?
Hint: Start from an inequality traditionally used in high-school to prove that $\;\lim_{x\to+\infty}\dfrac{\ln x}x=0$: $$\ln x<\sqrt x\quad\forall x>4,$$ and replace $x$ with $\ln x$: if $\ln x>4$, then $$\ln(\ln x)<\sqrt{\mkern1mu\ln x\mathstrut}$$ Note that, as $x>4>\mathrm e$, both sides of the inequality are positive, hence you can square to obtain the required inequality for all $x>\mathrm e^4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3118145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 1 }
Notation for "Defined as proportional to" If I say $x$ is defined as $y+z$ then I can say $x := y+z$. If I want to say $x$ is defined as proportional to $y+z$, then how can I say that? Would I say $x :\propto y+z$?
The symbol $\propto$ does indeed mean that: it says that there exists a constant $k$ in some field $\mathbb{K}$ such that $x=k\cdot (y+z)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3118241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Tail bound of sum less than sum of tail bounds Problem: Suppose we have a probability space $(\Omega, \mathcal{M}, P)$, a random variable $X$ on this space, and two nonnegative measurable functions $f,g:\mathbb{R}\to[0, \infty)$. Choose some $\epsilon > 0$. Prove that $$P(f(X)+g(X)>\epsilon) \le P(f(X)>\epsilon/2) + P(g(X)>\epsilon/2)$$ Attempt: by definition we know $P(f(X)+g(X)>\epsilon) = P(\{\nu \vert f(X)(\nu)+g(X)(\nu)))>\epsilon\})$, and it seems like a basic real analysis exercise, however I'm having trouble splitting this term into the two desired terms. Any help is appreciated!
Observe that $$ \{ \nu \in \Omega \mid f(X)(\nu) + g(X)(\nu) > \epsilon \} \subset \{ \nu \in \Omega \mid f(X) (\nu) > \tfrac \epsilon 2\} \ \cup \ \{ \nu \in \Omega \mid g(X) (\nu) > \tfrac \epsilon 2\}. $$ So \begin{align*} P\left( \{ \nu \in \Omega \mid f(X)(\nu) + g(X)(\nu) > \epsilon \}\right) & \leq P \left( \{ \nu \in \Omega \mid f(X) (\nu) > \tfrac \epsilon 2\} \ \cup \ \{ \nu \in \Omega \mid g(X) (\nu) > \tfrac \epsilon 2\} \right) \\ & \leq P \left( \{ \nu \in \Omega \mid f(X) (\nu) > \tfrac \epsilon 2\} \right) + P \left( \{ \nu \in \Omega \mid g(X) (\nu) > \tfrac \epsilon 2\} \right). \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/3118302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability that an event occurs at least three times before another event occurs Cars arrive according to a Poisson process with rate 2 per hour and trucks arrive according to a Poisson process with rate 1 per hour. They are independent. What is the probability that at least 3 cars arrive before a truck arrives? My thoughts: Interarrival of cars A ~ Exp(2 per hour), Interarrival of trucks B ~ Exp(1 per hour). Probability that at least 3 cars arrive before a truck arrives $= 1- Pr(B<A) - Pr(A<B)Pr(B<A) - Pr(A<B)Pr(A<B)Pr(B<A) \\= 1 - (\frac{1}{3})-(\frac{2}{3}\cdot\frac{1}{3})-(\frac{2}{3}\cdot\frac{2}{3}\cdot\frac{1}{3})\\=\frac{8}{27}.$ Is this correct?
Let $M_t$ be the Poisson process which counts the arrival of trucks. Then by the given condition we have $(M_t)_{t>0} \sim PP(1).$ Let $X_1,X_2, X_3, \cdots$ be the time gaps between arrival of cars. Then $X_n \sim \text{iid} \exp (2).$ So the required probability is $P(M_{X_1+X_2+X_3} < 1).$ Now $$\begin{align} P(M_{X_1+X_2+X_3} < 1) & = P(M_{X_1+X_2+X_3} = 0). \\ & = \int_{0}^{\infty} P(M_{X_1+X_2+X_3} = 0 \mid X_1+X_2+X_3 = t) f_{X_1+X_2+X_3} (t)\ \text{dt}. \\ & = \int_{0}^{\infty} P(M_t = 0) f_{X_1+X_2+X_3} (t)\ \text{dt}. \end{align}$$ Now $M_t \sim \text {Poisson}\ (t)$ and $X_i$'s are iid with exponential$(2)$ it follows that $X_1+X_2+X_3 \sim \text {Gamma} (3,2).$ So $$\begin{align} P(M_{X_1+X_2+X_3} < 1) & = 4 \int_{0}^{\infty} t^2e^{-3t}\ \text {dt}. \end{align}$$ Using gamma function we find that $$\int_{0}^{\infty} t^2 e^{-3t}\ \text{dt} = \frac {\Gamma(3)} {27} = \frac {2} {27}.$$ So the required probability is $4 \times \frac {2} {27} = \frac {8} {27},$ as you have obtained.
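As an independent sanity check on $8/27 \approx 0.296$, one can simulate the two exponential clocks directly (a Monte Carlo estimate with a fixed seed, not part of the proof):

```python
import random

random.seed(0)

def three_cars_before_truck():
    truck = random.expovariate(1)                          # first truck arrival
    third_car = sum(random.expovariate(2) for _ in range(3))  # third car arrival
    return third_car < truck

n = 200_000
est = sum(three_cars_before_truck() for _ in range(n)) / n
print(est, 8 / 27)    # estimate should be close to 0.2963...
```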
{ "language": "en", "url": "https://math.stackexchange.com/questions/3118462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How many ways can n identical balls be distributed into k distinct boxes, such that at least one box is empty? This is a problem in my combinatorics book that uses the principle of inclusion-exclusion. I can follow almost all of what is said, except the book says that if we consider $A_{i}$ to be the set of solutions where box i is empty, then $|A_{i}| = {n-(k-1)-1 \choose k-1}$. The book does not explain why this is true. And I want to know why, since I thought that $|A_{i}| = {n+(k-1)-1 \choose k-1}$. So that you can get to the root of my misunderstanding, my reasoning was that a placement of n identical balls into k distinct boxes is the same as the number of nonnegative integer solutions to $x_{1}+\cdots+x_{k} = n$. Any help would be much appreciated!
To begin with, I'd solve the problem differently: By a stars-and-bars argument, there are $n-1\choose k-1 $ ways to place $n$ balls into $k$ bins such that no bin is empty. Subtract this from the $n+k-1\choose k-1$ ways to place $n$ balls into $k$ bins without restriction.
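For small cases the count $\binom{n+k-1}{k-1}-\binom{n-1}{k-1}$ can be checked against brute-force enumeration (a quick script, assuming the usual "at least one box empty" reading):

```python
from itertools import product
from math import comb

def brute(n, k):
    """Count (x1,...,xk) with nonnegative entries summing to n and some xi = 0."""
    return sum(1 for x in product(range(n + 1), repeat=k)
               if sum(x) == n and min(x) == 0)

def formula(n, k):
    return comb(n + k - 1, k - 1) - comb(n - 1, k - 1)

for n, k in [(5, 3), (6, 4), (7, 2)]:
    print(n, k, brute(n, k), formula(n, k))   # the two counts agree
```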
{ "language": "en", "url": "https://math.stackexchange.com/questions/3118564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Lagrange multipliers with $\lambda = 0$ problem We want to maximize the function $f(x_1,x_2,x_3,x_4) = \sum_{i=1}^{4}a_i^2x_i$ over the compact set $\Omega = \{x \in \mathbb R^4: |x| = 1, \langle x,a\rangle = 0\}$ where $a = (a_1,a_2,a_3,a_4)$ is some non zero vector This is a continuous function over a compact set so it admits minimum and maximum. Thus we need to solve the system $\begin{cases}a_i^2 = 2\lambda_1 x_i +\lambda_2 a_i \\ |x|-1 = 0\\\langle x,a \rangle = 0\end{cases}$ I solved this for the case that $\lambda_1 \neq 0$. But when $\lambda_1 = 0$ I have too many variables and no way to isolate $x$ - how do I solve this?
Hint If $\lambda_1 = 0$ then $a_i^2 = \lambda_2 a_i$ for each $i$, so every $a_i$ is either $\lambda_2$ or $0$ (in particular, if no entry of $a$ vanishes, then $a = (\lambda_2, \lambda_2, \lambda_2, \lambda_2)$), and you should be able to compute a solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3118893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
$\frac{\int fg dx}{\int g dx}=f(0)$ We already know that $$\lim_{n \rightarrow +\infty} \int_{-1}^1 (1-x^2)^n dx = 0$$ If we have $f(x) \in C[-1,1]$ then prove $$\lim_{n \rightarrow +\infty} \frac{\int_{-1}^1 f(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx } = f(0)$$ My thought is $\lim\limits_{n \rightarrow +\infty} \frac{\int_{-1}^1 f(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx } - f(0) = \lim\limits_{n \rightarrow +\infty} \frac{\int_{-1}^1 [f(x)-f(0)](1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx } $ so assume $f(0) = 0$. And $\lim\limits_{n \rightarrow +\infty} \frac{\int_{-1}^1 f(x)(1-x^2)^n dx}{\int_{-1}^1 (1-x^2)^n dx } \le \lim\limits_{n \rightarrow +\infty} \frac{\int_{-1}^{-\delta} M(1-x^2)^n dx + \int_{-\delta}^\delta \epsilon (1-x^2)^n dx + \int_{\delta}^1 M(1-x^2)^n dx}{\int_{-\delta}^{\delta} (1-x^2)^n dx } $ where $M$ is the upper bounder of $|f(x)|$ and $\epsilon$ is small enough since $f(0)$. It suffice to show that $\lim\limits_{n \rightarrow +\infty} \frac{\int_{-1}^{-\delta} (1-x^2)^n dx}{\int_{-\delta}^\delta (1-x^2)^n dx } =0$.
You could observe (by calculus) that $g_n(x) = (1-x^2)^n / \int_{-1}^1 (1-u^2)^ndu$ is the density function for a random variable $X_n$ with expectation $0$ and with variance tending to 0 as $n\to\infty$. (This last by checking $\int_{-1}^1x^2g_n(x)dx\to0$, which is a Beta function calculation.) Hence $X_n\to0$ in probability, and hence in distribution. Hence $Ef(X_n)\to f(0)$, by the "portmanteau theorem".
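The concentration of the weight $(1-x^2)^n$ near $0$ is easy to see numerically; here is a midpoint-rule check (an illustration, not a proof) with $f=\cos$, where the ratio should approach $\cos(0)=1$ as $n$ grows:

```python
import math

def ratio(f, n, m=20000):
    h = 2.0 / m
    xs = [-1 + (i + 0.5) * h for i in range(m)]     # midpoint rule on [-1, 1]
    w = [(1 - x * x) ** n for x in xs]
    return sum(f(x) * wi for x, wi in zip(xs, w)) / sum(w)

for n in (10, 100, 1000):
    print(n, ratio(math.cos, n))    # tends to cos(0) = 1 as n grows
```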
{ "language": "en", "url": "https://math.stackexchange.com/questions/3118986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Confusion Dividing A Fraction with a Whole Number... In the lesson I am doing, I divide fractions. Here is my problem: 28/55 / 7 I had to look up how to do this problem. According to Math Is Fun, you divide the denominator by the whole number and then simplify if possible. I did this, and got: 28/385 I couldn't simplify it, and the online quiz would only let me enter 2 digits in the denominator textbox. I couldn't figure it out, so I put in a random answer. They said it was: 4/55 How did they get this answer, and where did I go wrong? Please help, because I can't figure it out.
For a general purpose, suppose we need to find $y=\frac{\frac{a}{b}}{\frac{c}{d}}$ where $\frac{c}{d}\not=0.$ Then we can do the following - $$y\frac{c}{d}=\frac{a}{b}\implies \frac{c}{d}=\frac{\frac{a}{b}}{y}\\ \implies \frac{d}{c}=\frac{y}{\frac{a}{b}}\\ \implies y=\frac{d}{c} \times \frac{a}{b}.$$ For your problem, take $\frac{a}{b}=\frac{28}{55}$ and $\frac{c}{d}=\frac{7}{1}$.
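You can confirm the arithmetic with Python's exact rational numbers:

```python
from fractions import Fraction

result = Fraction(28, 55) / 7     # divide the fraction by the whole number
print(result)                     # 4/55

# the general rule: (a/b) / (c/d) = (a/b) * (d/c)
assert Fraction(28, 55) / Fraction(7, 1) == Fraction(28, 55) * Fraction(1, 7)
```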
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Dimension of global sections Let $X$ be a hypersurface of degree $d$ in a projective space of dimension $n$. Is there a formula which expresses the dimension of the space of global sections of the line bundle $O(k)$ on $X$?
If $n = 1$, then $X$ is a union of $d$ points, so $\mathcal O_{X}(k) = \mathcal O_X$, which has $d$ linearly-independent global sections. If $n \geq 2$, then consider the short exact sequence, $$ 0 \to \mathcal O_{\mathbb P^n} (-d) \to \mathcal O_{\mathbb P^n} \to \mathcal O_X \to 0.$$ Tensoring with $\mathcal O_{\mathbb P^n}(k)$, we get the short exact sequence, $$ 0 \to \mathcal O_{\mathbb P^n} (k-d) \to \mathcal O_{\mathbb P^n}(k) \to \mathcal O_X (k)\to 0.$$ Now look at the long exact sequence: $$ 0 \to H^0 (\mathcal O_{\mathbb P^n} (k-d)) \to H^0 (\mathcal O_{\mathbb P^n}(k)) \to H^0 (\mathcal O_X (k))\to 0 ,$$ (where I've used the fact that $H^1 (\mathcal O_{\mathbb P^n} (k-d))= 0$ when $n \geq 2$ to get the $0$ at the end). Thus $$ {\rm dim \ } H^0 (\mathcal O_X (k)) = {\rm dim \ } H^0 (\mathcal O_{\mathbb P^n}(k)) - {\rm dim \ } H^0 (\mathcal O_{\mathbb P^n}(k - d)),$$ so you can read off the dimension of $H^0 (\mathcal O_X (k))$ from the standard formula, $${\rm dim \ } H^0 (\mathcal O_{\mathbb P^n } (r)) = \begin{cases} \binom{n + r}{n} & {\rm if \ } r \geq 0 \\ \ \ \ 0 &{\rm otherwise} \end{cases}.$$
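The resulting formula for $n\geq2$ is easy to tabulate; a small script (function names are my own):

```python
from math import comb

def h0_Pn(n, r):
    """dim H^0(O_{P^n}(r))"""
    return comb(n + r, n) if r >= 0 else 0

def h0_X(n, d, k):
    """dim H^0(O_X(k)) for a degree-d hypersurface X in P^n, n >= 2."""
    return h0_Pn(n, k) - h0_Pn(n, k - d)

print(h0_X(2, 3, 1))   # 3: plane cubic, sections of O_X(1) come from lines
print(h0_X(3, 2, 1))   # 4: quadric surface in P^3
```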
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Use of the product and quotient rule in differentiation In the process of rounding up the product and quotient rule, I got confused when my textbook said that the product rule should not be used if one of the factors of the product is a constant, and the quotient rule should not be used if the denominator is a single term. I've used the rules for both of these conditions and i got the answers right, the only thing was that it took longer to solve, so the question is can I still use the product and quotient rule under such conditions?
When one of the terms is a constant, it is equivalent to using linearity to factor out the constant. Therefore, it is an unnecessary use of the product and quotient rules, but not false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
"A Simple Proof of Zorn's Lemma" article. (explanation of a step) There is a short proof of Zorn's lemma by J.W.Lewin: A Simple Proof of Zorn's Lemma.pdf Summary: I don't understand this step: "Therefore if z is the least member of $A\setminus P(B, y)$, we have $P(A, z) = P(B, y)$." I'm interested in both inclusions from left to right and from right to left. A subset A of X is conforming if the following two conditions hold: * *The order $<$ is a well order of the set A. *For every element $x\in A$, we have $x = f(P(A, x))$. Theorem If A and B are conforming subsets of X and $A\neq B$, then one of these two sets is an initial segment of the other. Proof We may assume that $A\setminus B\neq\emptyset$. Define x to be the least member of $A\setminus B$. Then $P(A, x) \subseteq B$. We claim that $P(A, x) = B$. To obtain a contradiction, assume that $B\setminus P(A, x)\neq\emptyset$, and define y to be the least member of $B\setminus P(A, x)$. Given any element $u\in P(B, y)$ and any element $v\in A$ such that $v < u$, it is clear that $v \in P(B, y)$. Therefore if z is the least member of $A\setminus P(B, y)$, we have $P(A, z) = P(B, y)$. (???)
Here I want to clarify part of @Matematleta's proof which took some of my time. On the other hand, if $<$, then since $\in (,)$, we have $\in (,)$, which is a contradiction. We assume that $<$. 1) From $z\in A\setminus P(B,y)$, we obtain $z\in A \land [z\notin B\lor z\geqslant y]$ 2) The second OR-case $z\geqslant y $ implies contradiction: $z\geqslant y >\alpha>z$. That means that $z\notin B$. 3) $z \in A \land z\notin B$ implies $z\in A\setminus B$. y is the least element of $A\setminus B$. So $y\leqslant z$. 4) Then $z<\alpha<x\leqslant z$ - contradiction. That means that the assumption $<$ is false. p.s. Proof of the 4th proposition: $\alpha < x$ here is from $\alpha\in P(A,x)$. In its turn, this is from $\alpha\in B$ and $\alpha\notin B\setminus P(A,x)$. Proposition $\alpha\notin B\setminus P(A,x)$ is true because $\alpha\in B\setminus P(A,x)$ leads to a contradiction: $y\leqslant \alpha< y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Constructing an isomorphism to show two groups are isomorphic For each $n\geq 2$, consider $C_n = \{ (a,b) \in \mathbb{Z}^2 : a \equiv b \mod \: n\}$. I want to show that $C_n$ is isomorphic to $\mathbb{Z} \times \mathbb{Z}$. To do this, I know I need to construct a bijection that preserves products as in the definition of an isomorphism. I’m at a loss for how to construct such a bijection, because I don’t see how any function I can think of can be both injective and surjective. Any guidance?
The bijection $f:C_n\to\mathbb Z^2$ may be constructed as follows: $$f(a,b)=\left(a,\frac{b-a}n\right)$$ Its inverse is $$f^{-1}(a,b)=(a,bn+a)$$ $f$ is a homomorphism because $$f(a,b)+f(c,d)=\left(a,\frac{b-a}n\right)+\left(c,\frac{d-c}n\right)=\left(a+c,\frac{(b+d)-(a+c)}n\right)=f(a+c,b+d)$$ $f$ is an injection: suppose $f(a,b)=f(a',b')=(c,d)$. Then $a=a'$, and manipulating $\frac{b-a}n=\frac{b'-a'}n$ we get $b=b'$ too. $f$ is surjective because of the inverse function demonstrated above. Thus $f$ is a bijection.
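A quick mechanical check of the claimed isomorphism (here for $n=5$, over a grid of sample pairs):

```python
n = 5

def f(p):
    a, b = p
    return (a, (b - a) // n)      # well-defined since a ≡ b (mod n)

def f_inv(p):
    a, b = p
    return (a, b * n + a)

pairs = [(a, a + n * t) for a in range(-6, 7) for t in range(-3, 4)]

bijective = all(f_inv(f(p)) == p for p in pairs)
additive = all(
    f((p[0] + q[0], p[1] + q[1])) == tuple(u + v for u, v in zip(f(p), f(q)))
    for p in pairs for q in pairs
)
print(bijective, additive)    # True True
```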
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
We have an urn with 6 red balls and 4 green balls. We have an urn with 6 red balls and 4 green balls. We draw balls from the urn one by one without replacement, noting the order of the colors, until the urn is empty. Let X be the number of red balls in the first five draws, and Y the number of red balls in the last five draws. Compute the covariance Cov(X,Y). My work: X=I1+I2+...+I5 Y=J6+J7+...+J10 where Ii=1 if red and Jj=1 if green. Cov(X,Y)= E(XY)-E(X)E(Y) E(XY)=E(I1J6)+E(I1J7)+...+E(I5J10)=25*E(I1J6) =25*(6/10)(4/9) E(X)=5*(6/10) E(Y)=5*(4/10) So, Cov(X,Y)=25(6/10)(4/9)-25(6/10)(4/10)=2/3 I don't know I am doing it right. Are there other ways to solve this? Thank you.
The simplest approach is to observe that $X + Y = 6$, as there are only $10$ balls in the urn, hence one is guaranteed to have drawn all $6$ red balls. Thus $$\begin{align*} \operatorname{Cov}[X,Y] &= \operatorname{Cov}[X, 6-X] \\ &= \operatorname{E}[X(6-X)] - \operatorname{E}[X]\operatorname{E}[6-X] \\ &= \operatorname{E}[6X] - \operatorname{E}[X^2] - \operatorname{E}[X](6-\operatorname{E}[X]) \\ &= 6 \operatorname{E}[X] - \operatorname{E}[X^2] - 6 \operatorname{E}[X] + \operatorname{E}[X]^2 \\ &= - \left(\operatorname{E}[X^2] - \operatorname{E}[X]^2\right) \\ &= - \operatorname{Var}[X]. \\ \end{align*}$$ Then note that the distribution of $X$ is hypergeometric; namely, $$\Pr[X = x] = \frac{\binom{6}{x}\binom{4}{5-x}}{\binom{10}{5}}, \quad x \in \{1, 2, 3, 4, 5\}.$$ (It is impossible to not get a red ball in the first five draws, as there are only four green balls; and it is obviously not possible to get six red balls as there are only five.) Do you recall the variance of a hypergeometric distribution? If not, it is not hard to derive the formula. And even then, if you cannot, with the above distribution having only $5$ possible outcomes, it is not difficult to explicitly compute $\operatorname{E}[X]$ and $\operatorname{E}[X^2]$.
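The covariance can also be computed exactly by enumerating the $\binom{10}{6}=210$ equally likely placements of the red balls among the ten draw positions (a brute-force cross-check):

```python
from fractions import Fraction
from itertools import combinations

outcomes = list(combinations(range(10), 6))   # positions of the 6 red balls
total = len(outcomes)                         # 210, all equally likely

EX = EY = EXY = Fraction(0)
for reds in outcomes:
    X = sum(1 for i in reds if i < 5)         # reds among the first 5 draws
    Y = 6 - X                                 # reds among the last 5 draws
    EX += Fraction(X, total)
    EY += Fraction(Y, total)
    EXY += Fraction(X * Y, total)

cov = EXY - EX * EY
print(cov)    # -2/3
```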
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I evaluate this indefinite integral? I am currently working on the following problem: $\int x (2x+3)^{99}$ I have tried using u-substitution $(u = 2x+3)$ and integration by parts, but have not been able to make any progress that leads me to an answer. I thought about actually computing $(2x+3)^{99}$, but I think that would make the problem even more complicated. How would I go about solving this question?
You have the right idea, but just split the result into $2$ integrations after you make the substitution. In particular, $$u = 2x + 3 \Rightarrow du = 2dx \Rightarrow dx = \frac{du}{2} \tag{1}\label{eq1}$$ Also, $$x = \frac{u - 3}{2} \tag{2}\label{eq2}$$ Thus, $$\int x\left(2x + 3\right)^{99}dx = \int \frac{u - 3}{4}u^{99} du = \frac{1}{4}\int u^{100} du - \frac{3}{4}\int u^{99} du \tag{3}\label{eq3}$$ I assume you should be able to finish the rest.
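Finishing the integration gives one antiderivative $F(x)=\frac{(2x+3)^{101}}{404}-\frac{3(2x+3)^{100}}{400}$ (dropping the constant); as a sanity check, a numerical derivative of $F$ should recover the integrand:

```python
def F(x):
    u = 2 * x + 3
    return u**101 / 404 - 3 * u**100 / 400

def integrand(x):
    return x * (2 * x + 3) ** 99

x0, h = -1.0, 1e-6
approx = (F(x0 + h) - F(x0 - h)) / (2 * h)    # central difference
print(approx, integrand(x0))                  # both ≈ -1
```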
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding an explicit formula for a Hamiltonian vector field I've been looking at this question: Existence of vector field given a smooth function That is: Given a symplectic manifold $M$ of dimension $2n$, with a symplectic form $\omega \in \Omega^2(M)$, do we have for all smooth function $f\in C^\infty(M)$ a vector field $U_f$ on $M$ such that $df=i_{U_f}\omega$? Alex M. gives an answer there using the musical isomorphisms, which assumes some previous knowledge with pseudo-Riemannian manifolds. I was wondering if one can give an explicit expression to the obtained vector field, as I am not so strong on the subject of Riemannian metrics, and cannot really decipher in terms of tangent vector what the expression should be. I apologize if this is a trivial question, and would appreciate any help gaining insight as to how one can obtain an explicit formula for a Hamiltonian vector field?
My attempt at an answer using the help given by user3257842: Let $\{ e_1(p),....,e_{2n}(p) \}$ be a basis for $T_p(M)$, orthonormal with respect to the Riemannian metric $\rho$. Define the representing matrix of $\omega_p$ to be $B(p)\in \mathbb{R}_{2n \times 2n}$, by: $ B_{i,j}(p):= \omega_p \Big( e_i, e_j \Big) $ We know that for two tangent vectors $\eta_{(1)}$ and $\eta_{(2)}$ we have: $ \omega(\eta_{(1)},\eta_{(2)}) = \eta_{(1)}^t \cdot B(p)\cdot \eta_{(2)} $, with coordinate vectors with respect to the above orthonormal basis. For all $\eta\in T_p(M)$ we have that: $ dH_p(\eta)= \rho( \nabla H(p), \eta ) $ And by our choice of basis, we have that: $ dH_p(\eta)= \Big( \nabla H(p) \Big) ^t \cdot \eta $, with the coordinate vectors with respect to our orthonormal basis. Let us define $X_h(p)$ by: $ X_H(p):=\Big( \nabla H(p) \Big) ^t\cdot \Big( B(p) \Big)^{-1} $ By this definition, we can see that: $ \Big( X_H\Big)^t B(p)= \Big( \nabla H(p) \Big) ^t\cdot \Big( B(p) \Big)^{-1} \cdot B(p)=\Big( \nabla H(p) \Big)^t $ And therefore: $ \omega(X_H(p), \eta)= dH(\eta) \quad \text{for all} \quad \eta \in T_p(M) $ This will be our Hamiltonian vector field. Let us notice, that since $\omega$ is invertible and smooth, if we denote it's inverse $\omega^{-1}$, then it is also smooth. i.e, the entries of $\Big( B(p) \Big)^{-1}$ are also smooth as a function of $p$. And we obtain that $X_H$ is a result of a composition of smooth maps, and is therefore smooth.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3119919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Injective and surjective homomorphisms from a non-cyclic group of order $4$ to $Z_8$ Let $G$ be a non-cyclic group of order $4$. Consider the following statements: $I:$ There is no one-one homomorphism from $G$ to $Z_8$ $II:$ There is no onto homomorphism from $Z_8$ to $G$ Then which of these statements are true? Okay, so by classification, $G\simeq Z_2\times Z_2$. Consider statement $I$. If $\phi:G\to Z_8$ is a $1$-$1$ homomorphism, then $\frac{G}{\ker\phi}\simeq H \leq Z_8$, where $H$ is a subgroup of $Z_8$ and hence cyclic. But here $\ker\phi = \{e\}$, so $G\simeq H \leq Z_8$, which is not possible since $G$ is not cyclic. So $I$ is true. Consider statement $II$. If $\phi: Z_8\to G$ is an onto homomorphism, then $\frac{Z_8}{\ker\phi}\simeq G$. But $\frac{Z_8}{\ker\phi}$ is cyclic since $Z_8$ is cyclic, so $\frac{Z_8}{\ker\phi}\simeq G$ is not possible since $G$ is non-cyclic. So $II$ is true. So both statements are true. Is this correct?
I think your arguments are correct, but can be shortened. Part I: Suppose that there is such a homomorphism $\phi$. Then $\phi(G)\cong G$ is a subgroup of $C_8$, hence cyclic, a contradiction. Part II: Suppose that $\phi$ is such a homomorphism. Then $\phi(C_8)\cong G$, where $\phi(C_8)$ is cyclic, because $C_8$ is cyclic, a contradiction.
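Both facts can also be verified mechanically for these small groups (a brute-force check; $Z_8$ written additively as $\{0,\dots,7\}$, the Klein four-group as pairs mod 2):

```python
# Z8 = Z/8Z written additively; V = Klein four-group as pairs mod 2
Z8 = range(8)
V = [(a, b) for a in (0, 1) for b in (0, 1)]

# A homomorphism Z8 -> V is determined by the image v of the generator 1
def hom_from_gen(v):
    return {k: ((k * v[0]) % 2, (k * v[1]) % 2) for k in Z8}

onto = [v for v in V if set(hom_from_gen(v).values()) == set(V)]
print(onto)    # []: no homomorphism Z8 -> V is surjective

# A homomorphism V -> Z8 must send every element to one of order dividing 2,
# i.e. into {0, 4}; an image of size <= 2 can never give an injection of V.
order_dividing_2 = [x for x in Z8 if (2 * x) % 8 == 0]
print(order_dividing_2)    # [0, 4]
```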
{ "language": "en", "url": "https://math.stackexchange.com/questions/3120116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Square root of a specific matrix in M(3,Z) over Z Let $ \begin{bmatrix} a&b&c \\ d&e&f\\ g&h&i\\ \end{bmatrix}^2 = \begin{bmatrix} x&0&0\\ 0&0&-y\\ 0&1&-z\\ \end{bmatrix} $ where all of the components are integers. I am trying to figure out what condition on $a,b,c,d,e,f,g,h,i$ or $x,y,z$ makes the equation true. For example, a diagonal integer matrix has a square root over $\mathbb{Z}$ if the diagonal entries are perfect squares; something like that kind of condition. I tried to solve it as a system of equations, but it's very hard because it means solving $9$ equations, and I can't do it. All I have shown is that $$y = -\frac{f}{h}$$ and $$cdh = bgf.$$ And that's all; I'm stuck. My question is: is there any other way, or another approach, to solve this? Please help me with this.
Yeah, so a very normal approach to solve this sort of equation is to generalize to the reals and diagonalize. Suppose your matrix can be factored as $$ M = CDC^{-1}, $$ for $D$ diagonal, then its square can be factored as $$M^2 = C D C^{-1}CDC^{-1} = C D^2 C^{-1},$$so the same coordinate-change matrix $C$ is used for the square as for the original, but $D^2$ is very easy to compute for a diagonal matrix -- just square each component. This argument probably works in reverse but it probably misses out on a "necessary" branch of a "necessary and sufficient condition" -- if you work it out you may find that non-diagonalizable matrices can also have square roots. You might be able to work out the necessary condition by using generalized eigenvectors, since any matrix always has a complete set of generalized eigenvectors, but that might be a step ahead of where you are right now. Still, one can persuasively argue for example from this that $b=c=d=g=0$ and $a = \sqrt{x}$ as that part of the diagonalization is "already done for you." So what's left is diagonalizing $$\begin{bmatrix}0&-y\\1&-z\end{bmatrix}$$To do this it is helpful to get the eigenvalues, and to get that it's helpful to know that the determinant is $y$ which must be the product of the eigenvalues, while the trace is $-z$ which must be their sum; solving $\lambda_+\lambda_- = y$ with $\lambda_+ + \lambda_- =-z$ gives $$\lambda_\pm = \frac{-z \pm \sqrt{z^2 - 4y}}{2}$$and from there I think the initial $(0, 1)$ column makes this rather easy to diagonalize as one gets something like $$C = \begin{bmatrix}\lambda_+&\lambda_-\\1&1\end{bmatrix},$$ whose determinant is $\lambda_+ - \lambda_-$ and therefore a final solution looks something like $$\begin{bmatrix}e&f\\h&i\end{bmatrix} = \frac{1}{\lambda_+ - \lambda_-} \begin{bmatrix} \lambda_+&\lambda_-\\1&1\end{bmatrix} \begin{bmatrix} \sqrt{\lambda_+}&0\\0&\sqrt{\lambda_-}\end{bmatrix} \begin{bmatrix} 1&-\lambda_-\\-1&\lambda_+\end{bmatrix}.$$ It's a bit messy, of course, but one can derive that it would be sufficient for this square root to exist for $x > 0$ and $z^2 - 4y > 0$ while $-z -\sqrt{z^2 - 4y} > 0$ as well. There might be some other family of solutions lurking about -- I don't know -- but those are the "main" ones.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3120310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Open balls in $p$-adic numbers. I am new to $p$-adic numbers and was watching an introductory video about it. At 14:51, he says that $r$ only takes values in the form of $p^n$. However, I don't understand why $r$ must be restricted to numbers in the form of $p^n$. Given any r that is not in the form of $p^n$, there will be an integer $i$ such that $p^i<r<p^{i+1}$. Thus, the open ball set of $r$ would be the same as $p^{i+1}$
As in any metric space, you can define an open ball of radius any real number. But if the metric only takes values that are powers of $p$, we might as well restrict attention to balls of radius a power of $p$. I suspect that this was precisely the point the video was trying to make.
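For integers this is easy to see concretely: the $3$-adic absolute value only takes the values $3^{-v}$, so the ball of radius $1/2$ is the same set as the ball of radius $1/3$ (a small exact-arithmetic check):

```python
from fractions import Fraction

def vp(x, p):                  # p-adic valuation of a nonzero integer
    v = 0
    while x % p == 0:
        x //= p
        v += 1
    return v

def abs_p(x, p):               # p-adic absolute value of a nonzero integer
    return Fraction(1, p ** vp(x, p))

p = 3
sample = [x for x in range(-40, 41) if x != 0]
ball_half = {x for x in sample if abs_p(x, p) < Fraction(1, 2)}
ball_third = {x for x in sample if abs_p(x, p) <= Fraction(1, 3)}
print(ball_half == ball_third)    # True: no |x|_3 value lies in (1/3, 1/2)
```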
{ "language": "en", "url": "https://math.stackexchange.com/questions/3120565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do we divide Permutations to get to Combinations? I'm having a hard time reasoning through the formula for combinations $\frac{n!}{k!\left(n-k\right)!}$. I understand the reason for the permutations formula and I know that for the combinations we divide by $k!$ to adjust for the repeated cases, since order does not matter. But what exactly happens when we divide the set of permutations by $k!$ ? I know this may seem like a silly question... I just can't take this for granted, lest I miss a chance to apply it correctly. Can you describe what's happening here step by step, sort like debugging a script?
You divide by k! since that's the number of permutations of the k objects that you've taken. All the permutations/orders of each set of k objects are equivalent since you're dealing with combinations, so you have to divide that out.
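Concretely, with n = 5 and k = 3 you can list both kinds of selections and watch each 3-subset show up exactly 3! = 6 times among the ordered selections:

```python
from itertools import permutations, combinations
from math import factorial, comb

n, k = 5, 3
perms = list(permutations(range(n), k))   # ordered selections: n!/(n-k)! = 60
combs = list(combinations(range(n), k))   # unordered selections: C(5,3) = 10

counts = {c: 0 for c in combs}
for p in perms:
    counts[tuple(sorted(p))] += 1         # group each ordering by its set

print(set(counts.values()))               # {6}: every subset appears k! times
print(len(perms) // factorial(k), comb(n, k))   # 10 10
```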
{ "language": "en", "url": "https://math.stackexchange.com/questions/3120653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 12, "answer_id": 9 }
Wave equation on a disk (circular membrane) Solve the wave equation in a disk, axisymmetric case $$\begin{cases} \frac{\partial^2u}{\partial t^2}=\frac{c^2}{r}\frac{\partial}{\partial r}(r\frac{\partial u}{\partial r}) \,\,\, \,,0<r<a\quad,t>0\\ u(r,0)=f(r),\quad\frac{\partial u}{\partial t}(r,0)=g(r),\quad u(a,t)=0 \end{cases}$$ My attempt: Note the function ${\displaystyle u}$ does not depend on the angle ${\displaystyle \theta ,}$ because we have the axisymmetric case of a circular membrane. Let $u(r,t)=R(r)P(t)$; substituting into the PDE we have: $$R(r)P''(t)=c^2[\frac{R'(r)}{r}+R''(r)]P(t)\tag1$$ Dividing $(1)$ by $R(r)P(t)$ we have: $$\frac{P''(t)}{c^2P(t)}=[\frac{1}{r}\frac{R'(r)}{R(r)}+\frac{R''(r)}{R(r)}]$$ Since the left-hand side of this equality does not depend on ${\displaystyle r,}$ and the right-hand side does not depend on ${\displaystyle t,}$ it follows that both sides must be equal to some constant ${\displaystyle \lambda.}$ Then $$\lambda=\frac{P''(t)}{c^2P(t)}=[\frac{1}{r}\frac{R'(r)}{R(r)}+\frac{R''(r)}{R(r)}]$$ From this we have two equations $$\begin{cases} P''(t)-c^2\lambda P(t)=0\\ rR''(r)+R'(r)-r\lambda R(r)=0\tag2 \end{cases}$$ We're going to solve $P''(t)-c^2\lambda P(t)=0$. If $\lambda=0$ then the solution is of the form: $$P(t)=c_1+c_2t$$ If $\lambda>0$ then the solution is of the form (writing $k=\sqrt{\lambda}$) $$P(t)=c_1e^{ckt}+c_2e^{-ckt}$$ If $\lambda<0$ then the solution is of the form (writing $k=\sqrt{-\lambda}$) $$P(t)=c_1\cos(ckt)+c_2\sin(ckt)$$ Here I'm stuck.
The BVP $$ r^2R'' + rR' - \lambda r^2 R = 0, \quad R(0) < \infty, R(a) = 0 $$ only has a non-trivial solution when $\lambda < 0$. You can check that $\lambda = 0$ returns a general solution of $A+B\ln r$, and $\lambda > 0$ returns modified Bessel functions, neither of which will satisfy the boundary conditions. The substitution $\rho = \sqrt{-\lambda}r$ results in Bessel's equation (check this for yourself), so we have a general solution $$ R(r) = AJ_0(\sqrt{-\lambda}r) + BY_0(\sqrt{-\lambda}r) $$ Note that $Y_0$ blows up at $r=0$ so we need to set $B=0$. This leaves the boundary condition $J_0(\sqrt{-\lambda}a)=0$. Let $\alpha_n$ be the zeroes of $J_0(x)$ with $n=1,2,3,\dots$ then we can rewrite the solution as $$ R_n(r) = J_0\left(\frac{\alpha_n}{a}r\right) $$ up to a multiplicative constant. Therefore $$ u(r,t) = \sum_{n=1}^\infty \left[C_n \cos\left(\frac{\alpha_n}{a}ct\right) + D_n \sin\left(\frac{\alpha_n}{a}ct\right)\right]J_0\left(\frac{\alpha_n}{a}r\right) $$ The initial conditions give \begin{align} u(r,0) &= f(r) = \sum_{n=1}^\infty C_n J_0\left(\frac{\alpha_n}{a}r\right) \\ u_t(r,0) &= g(r) = \sum_{n=1}^\infty \frac{c\alpha_n}{a}D_n J_0\left(\frac{\alpha_n}{a}r\right) \end{align} To determine the remaining constants, you must find the Fourier-Bessel series of $f(r)$ and $g(r)$. The process is very similar to the Fourier series, since Bessel functions are also mutually orthogonal.
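The zeros $\alpha_n$ of $J_0$ have to be found numerically; here is a minimal stdlib-only sketch (power-series $J_0$ plus bisection — in practice one would use `scipy.special.jn_zeros`):

```python
import math

def J0(x, terms=60):
    # power series: sum_{m>=0} (-1)^m (x/2)^(2m) / (m!)^2
    s, t = 0.0, 1.0
    for m in range(terms):
        s += t
        t *= -((x / 2) ** 2) / ((m + 1) ** 2)
    return s

# first zero alpha_1 of J0 by bisection on [2, 3] (J0 changes sign there)
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if J0(lo) * J0(mid) <= 0:
        hi = mid
    else:
        lo = mid

alpha1 = 0.5 * (lo + hi)
print(alpha1)    # ≈ 2.404826, so the fundamental frequency is c*alpha1/a
```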
{ "language": "en", "url": "https://math.stackexchange.com/questions/3120756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }