Proof by induction that $3^n - 1$ is an even number How to demonstrate that $3^n - 1$ is an even number using the principle of induction? I tried assuming that $3^k - 1$ is an even number, and as the thesis I must demonstrate that $3^{k+1} - 1$ is an even number, but I can't make a logical argument. So I think the assumption is wrong... Thanks
Hint: try substituting back in $3^k = 2x+1$
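A quick numeric check of the hint (a sketch, not part of the original hint): writing $3^k = 2x+1$ gives $3^{k+1}-1 = 3(2x+1)-1 = 6x+2 = 2(3x+1)$, which is even.

```python
# Verify the induction step numerically for small k.
for k in range(1, 20):
    x = (3**k - 1) // 2              # so 3**k = 2*x + 1
    assert 3**(k + 1) - 1 == 2 * (3 * x + 1)
print("induction step verified for k = 1..19")
```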
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 9, "answer_id": 2 }
First-order linear ordinary differential equation with piecewise constant source term Find a continuous solution satisfying the DE: $\frac{dy}{dx} + 2y = f(x)$ $$\begin{align} f(x) &= \begin{cases} 1, & 0 \leq x \leq 3 \\ 0, & x>3\text{.}\end{cases}\\ y(0)&=0\end{align} $$ I don't get this problem at all. Can anyone explain what the above means for starters?
This ODE is first order linear. You've probably seen it as follows: Consider $$ \left \{ \begin{array}{cc} y'(x) + p(x) y(x) = g(x) \\y(0) =0 \end{array} \right.$$ The solution is found using an integrating factor $$ \mu (x) = \exp \left [ \int_{0}^x p(s) ds \right ]$$ and "factoring" the ODE to obtain $$ y(x) = \frac{1}{\mu(x)} \int_{0}^x \mu(s) g(s) ds $$ Thus in our case we have $$ \mu(x) = \exp \left [ \int_0^x 2ds \right ] = \exp (2x) $$ so $$y(x) = e^{-2x} \int_0^x 1_{[0,3]}(s)e^{2s} ds = \left \{ \begin{array}{cc} \frac{1}{2}(1-e^{-2x}) & x <3 \\ \frac{1}{2}(e^{6-2x}-e^{-2x}) & x\geq 3 \end{array} \right . $$ Notice that the two one-sided limits of $y$ at $x=3$ agree (both equal $\frac12(1-e^{-6})$). Thus $y$ is continuous.
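A numerical cross-check of the closed form (a sketch assuming scipy; the grid points and tolerance are arbitrary choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda x: 1.0 if x <= 3 else 0.0          # piecewise source term
sol = solve_ivp(lambda x, y: [f(x) - 2 * y[0]], (0, 6), [0.0],
                dense_output=True, max_step=0.01, rtol=1e-8, atol=1e-10)

def exact(x):
    return 0.5 * (1 - np.exp(-2 * x)) if x < 3 else \
           0.5 * (np.exp(6 - 2 * x) - np.exp(-2 * x))

for x in [0.5, 2.0, 3.0, 4.5, 6.0]:
    assert abs(sol.sol(x)[0] - exact(x)) < 1e-4
print("closed form matches the numerical solution")
```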
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Convolution of binomial coefficients As part of a (SE) problem I've been working on, I came up with this expression: $$ \sum_{i=0}^M\binom{M-1+i}{i}\binom{M+i}{i} $$ I'd like to get a closed form for this, but after a considerable amount of time searching my references and online sources (not to mention the time I've spent bashing this into other equally opaque equivalences), I've come up empty. Does anyone have a clue? I'll be happy to link to the original if asked, but the expression more or less tells the story.
Here is an alternate representation of the sum. Suppose we seek to evaluate $$S_M = \sum_{q=0}^M {q+M-1\choose M-1} {q+M\choose M}.$$ Introduce $${q+M\choose M} = \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{(1+w)^{q+M}}{w^{M+1}} \; dw.$$ and furthermore introduce $$[[0\le q \le M]] = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{1+z+z^2+\cdots+z^M}{z^{q+1}} \; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{z^{M+1}-1}{(z-1)z^{q+1}} \; dz$$ which controls the range so we may let $q$ go to infinity to obtain for the sum $$\frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{z^{M+1}-1}{(z-1)z} \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{(1+w)^{M}}{w^{M+1}} \sum_{q\ge 0} {q+M-1\choose M-1} \frac{(1+w)^q}{z^q} \; dw\; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{z^{M+1}-1}{(z-1)z} \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{(1+w)^{M}}{w^{M+1}} \frac{1}{(1-(1+w)/z)^M} \; dw\; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{z^{2M+1}-z^M}{(z-1)z} \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{(1+w)^{M}}{w^{M+1}} \frac{1}{(z-(1+w))^M} \; dw\; dz \\ = \frac{1}{2\pi i} \int_{|z|=\epsilon} \frac{z^{2M}-z^{M-1}}{(z-1)^{M+1}} \frac{1}{2\pi i} \int_{|w|=\epsilon} \frac{(1+w)^{M}}{w^{M+1}} \frac{1}{(1-w/(z-1))^M} \; dw\; dz.$$ Extracting the residue at $w=0$ we obtain $$\sum_{q=0}^M {M\choose M-q} {q+M-1\choose M-1} \frac{1}{(z-1)^q}.$$ There are two contributions to the outer integral here, the first is $$\mathrm{Res}_{z=1} \sum_{q=0}^M {M\choose q} {q+M-1\choose M-1} \frac{1}{(z-1)^{q+M+1}} \sum_{p=0}^{2M} {2M\choose p} (z-1)^p \\ = \sum_{q=0}^M {M\choose q} {q+M-1\choose M-1} {2M\choose M+q}.$$ The second is $$\mathrm{Res}_{z=1} \sum_{q=0}^M {M\choose q} {q+M-1\choose M-1} \frac{1}{(z-1)^{q+M+1}} \sum_{p=0}^{M-1} {M-1\choose p} (z-1)^p$$ and this is easily seen to be zero. The product of the three binomials is $$\frac{M!}{q!\times (M-q)!} \frac{(q+M-1)!}{(M-1)! \times q!} \frac{(2M)!}{(M+q)!\times (M-q)!}.$$ which yields $$\frac{M}{M+q} \frac{(2M)!}{q! \times (M-q)! \times q! \times (M-q)!}.$$ which finally gives for the sum $$M {2M\choose M} \sum_{q=0}^M \frac{1}{M+q} {M\choose q}^2.$$
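The final identity is easy to check numerically (a sketch using exact rational arithmetic):

```python
from fractions import Fraction
from math import comb

# sum_{q=0}^M C(M-1+q,q) C(M+q,q) == M C(2M,M) sum_{q=0}^M C(M,q)^2 / (M+q)
for M in range(1, 12):
    lhs = sum(comb(M - 1 + q, q) * comb(M + q, q) for q in range(M + 1))
    rhs = M * comb(2 * M, M) * sum(Fraction(comb(M, q)**2, M + q)
                                   for q in range(M + 1))
    assert lhs == rhs
print("identity verified for M = 1..11")
```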
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
confidence intervals with a mean How many people must be surveyed to estimate the mean with $90\%$ confidence, if the standard deviation is known to be $21.0$ and the allowable margin of error is $5$? What I have so far is: $\frac{1}{2}(1-0.90)=0.05$, $Z= 1.645$ $(1.645\cdot 21.0/5)^2= 47.73$ Is that right?
Yes, what you have is correct. Confidence intervals are represented as $$\bar{X}\pm E$$ Where $E$ is the maximum error of estimate and $$E=z_{\alpha/2}\left(\frac{\sigma}{\sqrt{n}}\right)$$ Solving the last equation for $n$ yields $$n=\left(\frac{z_{\alpha/2}\cdot\sigma}{E}\right)^2$$ So for your values $$n=\left(\frac{1.645\cdot 21}{5}\right)^2\approx47.73$$ And so we round to $48$ and you are correct.
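The computation in one short script (a sketch; uses only the standard library):

```python
from math import ceil
from statistics import NormalDist

z = NormalDist().inv_cdf(1 - 0.10 / 2)   # z_{alpha/2} for 90% confidence, ~1.645
sigma, E = 21.0, 5.0
n = (z * sigma / E)**2
print(round(n, 2), "-> survey", ceil(n), "people")   # 47.73 -> 48
```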
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Monotone convergence theorem in a special case Let $C^{+}[0,1]$ be the set of all continuous functions with domain $[0,1]$ taking non-negative values only. Let $\lambda : C^{+}[0,1] \to [0,\infty)$ be a map that satisfies $$\lambda(f+g)=\lambda(f)+\lambda(g)$$ Now suppose $\{g_k\}$ is a sequence of functions in $C^{+}[0,1]$ that satisfies $$0\le g_1\le g_2\le \cdots, \ \ \ \ \ \ \ \ \ g_k \to g \in C^{+}[0,1]$$ Show that $$\lim_{k\to\infty} \lambda(g_k)=\lambda(g)$$ My progress: I was able to show that $\lim\limits_{k\to\infty} \lambda(g_k)\le \lambda(g)$. So, I am trying to show $\lambda(g) \le \lim\limits_{k\to\infty} \lambda(g_k)$. I am stuck here. I have knowledge of the proof of the actual monotone convergence theorem. There it uses the notion of simple functions, which are easy to handle, and then proves the general case by taking the supremum over simple functions. But note that simple functions do not belong to $C^{+}[0,1]$. So I think we cannot use the same technique. Any idea to tackle this one? Hints, links, solutions all are welcome.
We consider $f_n = g-g_n \in C^+([0,1])$. Then $f_n\ge f_{n+1}\ge \cdots$ and $\{f_n\}$ converges uniformly to the zero function, by Dini's theorem (a monotone sequence of continuous functions converging pointwise to a continuous limit on a compact set converges uniformly). Note also that $\lambda$ is monotone: if $f\le h$ in $C^+[0,1]$, then $\lambda(h)=\lambda(f)+\lambda(h-f)\ge\lambda(f)$. Consider the constant function $I(x) = 1$ and let $C = \lambda (I)$. One can show by induction that $$\lambda \left(\frac 1k I\right) = \frac{C}{k},$$ since $\lambda(I)=\lambda\left(k\cdot\tfrac 1k I\right)=k\,\lambda\left(\tfrac 1k I\right)$. Now for each $k$, there is $n_k$ so that $f_m \le \frac 1k I$ for all $m \ge n_k$. Thus, by monotonicity, $$\lambda (f_m) \le \frac Ck$$ for all $m\ge n_k$. Hence $\lim_{n\to \infty} \lambda (f_n) = 0 = \lambda (0)$. Since $\lambda(g)=\lambda(g_n)+\lambda(f_n)$, it follows that $\lim_{n\to \infty} \lambda(g_n) = \lambda (g)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
An inequality involving liminf and sup of a set The following was an example in a book called Applied Analysis by Hunter. I am not exactly sure what it really means or how to approach it: $\{\, x_{n,\alpha}\in \mathbb R \mid n \in \mathbb N \text{ and } \alpha \in A \,\}$ is a set of real numbers indexed by the natural numbers and a set $A$, and $$\sup_{\alpha \in A}\Big[\liminf_{n \to \infty } x_{n,\alpha}\Big] \le \liminf _{n \to \infty} \Big[\sup_{\alpha \in A} x_{n,\alpha}\Big]$$ I have some problems understanding the set too: what does it mean to be indexed by both $\Bbb N$ and $A$? Is it just different sequences indexed by the elements of $A$? If so, how can I approach the problem? Thank you
Being indexed by two sets $X$ and $Y$ is effectively (or literally, depending on your definitions) the same as being indexed by the product space $X \times Y$. Your intuition, however, is dead on. Just think of it as: for each $a\in A$, you have a sequence. Actually solving the problem more or less involves writing out the definition of supremum and playing around for a while.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
On the existence of field morphisms Let $K$ and $L$ be two fields, does the existence of two field morphisms $f\colon K\rightarrow L,\ g\colon L\rightarrow K$ imply that, as abstract fields, $K\cong L$ (not necessarily via $f$ or $g$)?
I think I found a counterexample: Let $F:=\overline{\Bbb Q(x_1,x_2,\dots)}$, i.e. the algebraic closure of the extension of $\Bbb Q$ by infinitely many independent transcendental elements. Let $K:=F(x_0)$ be a simple transcendental extension. This field is not algebraically closed. Finally, let $L:=\overline K$, its algebraic closure. Unless I am mistaken, $L=\overline{F(x_0)}=\overline{\Bbb Q(x_0,x_1,x_2,\dots)}\cong F$. Thus we obtain field morphisms $K\to L$ and $L\cong F\to K$, though $L\not\cong K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Compute $I=\int e^{it}\cos t\,dt$ Compute $I:=\int e^{it}\cos t\,dt$, where $i$ is the imaginary unit. Attempt: Let $u=e^{it}$, $dv=\cos t\,dt$. Then $du=ie^{it}\,dt$ and $v=\sin t$ Using integration by parts, we have $I=e^{it}\sin t-i\int e^{it}\sin t\,dt$. Say $I_1:=\int e^{it}\sin t\,dt$. By the same method, we have $I_1=-e^{it}\cos t+i\cdot I$. So we get $ I=e^{it}\sin t-i(-e^{it}\cos t+i\cdot I)=e^{it}\sin t+ie^{it}\cos t+I$. Can anyone find my mistake?
Your work is correct and means that, for an imaginary exponent, the classical double integration by parts (*), which I suppose you know well for a real exponent, does not work. You have simply found that $e^{it}\sin t+i e^{it}\cos t=i$, a constant; your computation reduces to the statement $I=I+i$, which is perfectly consistent for indefinite integrals (they are only defined up to an additive constant) but carries no information about $I$. (*) For a real exponent $a$ we have: $$ \int e^{at}\cos t dt=e^{at}\sin t-a\int e^{at}\sin t dt $$ and $$ \int e^{at}\sin t dt=-e^{at}\cos t+a\int e^{at}\cos t dt $$ so that: $$ \int e^{at}\cos t dt=e^{at}\sin t+a e^{at}\cos t -a^2\int e^{at}\cos t dt $$ and we find: $$ \int e^{at}\cos t dt=\dfrac{e^{at}(\sin t+a \cos t)}{(1+a^2)} $$ up to a constant. Note that, if $a$ is not a real number, for this derivation we need $a^2\ne -1$.
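A sympy check of both observations (a sketch; the rewrite to exponentials just helps the simplifier, and the antiderivative $F$ comes from $\cos t=(e^{it}+e^{-it})/2$):

```python
from sympy import symbols, exp, sin, cos, I, diff, simplify, expand

t = symbols('t', real=True)

# The "constant" produced by the integration by parts really is i:
expr = exp(I*t)*sin(t) + I*exp(I*t)*cos(t) - I
assert simplify(expand(expr.rewrite(exp))) == 0

# A genuine antiderivative of exp(i t) cos t:
F = exp(2*I*t)/(4*I) + t/2
assert simplify(expand((diff(F, t) - exp(I*t)*cos(t)).rewrite(exp))) == 0
print("F = exp(2it)/(4i) + t/2 is an antiderivative")
```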
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proof that ${{2n}\choose{n}} > 2^n$ and ${{2n + 1}\choose{n}} > 2^{n+1}$, with $n > 1$ I'm trying to prove these two inequalities. I started with an induction, but I got stuck... Can anybody help?
Induction works pretty well, since: $$\frac{\binom{2n+2}{n+1}}{\binom{2n}{n}}=\frac{4n+2}{n+1}=2\cdot\frac{2n+1}{n+1}\geq 3 $$ and: $$\frac{\binom{2n+3}{n+1}}{\binom{2n+1}{n}}=2\cdot\frac{2n+3}{n+2}\geq 3$$ as soon as $n\geq 1$.
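A quick check of both inequalities and of the two ratio bounds used in the induction (a sketch):

```python
from math import comb

for n in range(2, 40):
    assert comb(2*n, n) > 2**n
    assert comb(2*n + 1, n) > 2**(n + 1)
for n in range(1, 40):
    assert comb(2*n + 2, n + 1) >= 3 * comb(2*n, n)        # first ratio >= 3
    assert comb(2*n + 3, n + 1) >= 3 * comb(2*n + 1, n)    # second ratio >= 3
print("all checks pass")
```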
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Definition of exponential function, single-valued or multi-valued? If we define $$e^z=1+z+\frac{z^2}{2!}+\cdots$$ then it is single-valued. However, if we write $$e^z=e^{z\ln e}$$ then it is multi-valued. Besides, $a^z$ is multi-valued in general. It is kind of strange if only when the base is $e$ that it is single-valued. My thought: Is it true that there are two exponential functions, let's call them $\exp(z)$ and $e^z$? Where $\exp(z)$ is defined by $$\exp(z)=1+z+\frac{z^2}{2!}+\cdots$$ and is single-valued, while $e^z$ is defined by $$e^z = \text{exp}(z\ln e)$$ and is multi-valued? Here $\ln z$ is defined by $\exp(\ln z)=z$ and is multi-valued.
It's true that the complex map $\ln$ is multi-valued, but in the case of your second definition it doesn't matter, because $\exp$'s periodicity kills the extra arguments of $\ln$, so the definition becomes single-valued and perfectly ok. For example, $\arg(e)=\Theta=0$ and $\ln|e|=1$, so your definition becomes: $$e^{z\cdot \ln(e)}=e^{z\cdot (\ln|e|+(\Theta+2k\pi)i)}\text{ },k\in\mathbb{Z}\Rightarrow$$ $$e^{z\cdot \ln(e)}=e^{z(1+2k\pi i)}\Rightarrow$$ $$e^{z\cdot \ln(e)}=e^z\cdot e^{z2k\pi i},k\in\mathbb{Z}\text{ (1)}$$ Correction after egreg's comment: The following two steps are in fact wrong. $$e^{z\cdot \ln(e)}=e^z\cdot (e^{2k\pi i})^z\Rightarrow$$ $$e^{z\cdot \ln(e)}=e^z\cdot 1^z=\exp(z)$$ They would be right, had the OP specified only the principal branch of $\ln$. Since there's no such consensus, I am correcting my answer, based on egreg's comment. I agree then that the definition is multi-valued, and it seems the actual values are given by the last line (1).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1319952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
Pulling balls out of a box A box has $b$ blue balls and $r$ red balls. We pull the balls without returning them. What is the probability that the $k$th pull is the first red ball and what is the probability that the last ball is red? Also: do you know a site which fully explains these kinds of problems and all the variation you can have?
Answering the first question: $\left(\prod\limits_{n=0}^{k-2}\dfrac{b-n}{b+r-n}\right)\cdot\left(\dfrac{r}{b+r-(k-1)}\right)$
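A Monte Carlo sanity check of this formula (a sketch; the values of $b$, $r$, $k$ are arbitrary). For the second question, exchangeability gives probability $r/(b+r)$ that the last ball is red, and the simulation agrees:

```python
import random
from math import prod

b, r, k = 5, 3, 4
p_formula = prod((b - n) / (b + r - n) for n in range(k - 1)) * r / (b + r - (k - 1))

trials, first_red_at_k, last_red = 10**5, 0, 0
for _ in range(trials):
    balls = ['b'] * b + ['r'] * r
    random.shuffle(balls)
    if all(c == 'b' for c in balls[:k - 1]) and balls[k - 1] == 'r':
        first_red_at_k += 1
    last_red += balls[-1] == 'r'

print(p_formula, first_red_at_k / trials)   # should be close
print(r / (b + r), last_red / trials)       # should be close
```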
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Jordan-homomorphism; equivalent properties Let $\phi:A\to B$ a linear map between $C^*$-algebras $A$ and $B$. I want to know, why the following properties are equivalent: $(i) \phi(ab+ba)=\phi(a)\phi(b)+\phi(b)\phi(a)$ and $(ii) \phi(a^2)=\phi(a)^2$ for all $a,b\in A$. The direction => is ok: Let $a\in A$, it is $\phi(aa+aa)=\phi(2a^2)=\phi(a)\phi(a)+\phi(a)\phi(a)=2\phi(a^2)$. Now, $\phi$ is linear, therefore if you multiply both sides with $\frac{1}{2}$, it is $\frac{1}{2}\phi(2a^2)=\phi(\frac{1}{2}*2a^2)=\phi(a^2)=\frac{1}{2}2\phi(a)^2=\phi(a)^2$. You obtain $(ii)$. How to prove the other direction <=? I started with $\phi(aa)=\phi(a)\phi(a)$ for every $a\in A$. Therefore it is $\phi(aa+bb)=\phi(aa)+\phi(bb)=\phi(a)\phi(a)+\phi(b)\phi(b)$. Now I'm stuck. Does $\phi(aa)=\phi(a)\phi(a)$ for all $a\in A$ imply $\phi(ab)=\phi(a)\phi(b)$ for all $a,b\in A$? If not, how to continue? Regards
If we assume that $\varphi(a^2)=\varphi(a)^2$ for all $a$, then for $a,b$ we have, using linearity, $\varphi(a+b)^2=(\varphi(a)+\varphi(b))^2=\varphi(a)^2+\varphi(a)\varphi(b)+\varphi(b)\varphi(a)+\varphi(b)^2.$ Now, observe that $\varphi((a+b)^2)=\varphi(a^2+ab+ba+b^2)=\varphi(a^2)+\varphi(ab+ba)+\varphi(b^2)$. By the square-preserving property, the left-hand sides of these two equations are equal, and squares are preserved on the right as well. Then comparing the equations gives $\varphi(ab+ba)=\varphi(a)\varphi(b)+\varphi(b)\varphi(a)$. If I recall correctly, a Jordan homomorphism need not be a homomorphism in the usual sense.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What would a base $\pi$ number system look like? Imagine if we used a base $\pi$ number system, what would it look like? Wouldn't it make certain problems more intuitive (eg: area and volume calculations simpler in some way)? This may seem like a stupid question but I do not remember this concept ever being explored in my Engineering degree. Surely there is some application to the real world here. I am interested in answers that demonstrate which problems would become more elegant to represent and compute. I am also interested in any visualizations that leverage the meaning of that scale. Never-mind a logarithmic scale, what would a $\pi$arithmic scale be and what would simple areas on it mean? From the comments, I realise that the normal representation of numbers is flawed (or difficult to use) for this idea, so maybe it's worth modifying it slightly. eg: let: $[1] = 1\cdot\pi^0 = 1$ $[2][1] = 2\cdot\pi^1 + 1\cdot\pi^0 = 2\pi + 1$ $[2.3][1] = 2.3\cdot\pi^1 + 1\cdot\pi^0 = 2.3\pi + 1$ $[1][2][3] = 1\cdot\pi^2 + 2\cdot\pi^1 + 3\cdot\pi^0 = \pi^2+2\pi+3 $
[Picture omitted; see the linked article.] The picture may answer your queries. My opinion is that it would help in some problems. * Wikipedia: Non-integer representation
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
Prove $ \lim\limits_{n\to\infty}y_{n}=\sqrt{x}$ if $y_{n}=\frac{1}{2}\left(y_{n-1}+\frac{x}{y_{n-1}}\right),n\in \mathbb{N},x>0,y_{0}>0$ Can someone say how to solve this problem? The solution says that it starts with $$\frac{y_{n}-\sqrt{x}}{y_{n}+\sqrt{x}}=\left(\frac{y_{n-1}-\sqrt{x}}{y_{n-1}+\sqrt{x}}\right)^2,n\ge 1$$ How to get to this formula?
We have that $$y_n=\frac12\left(y_{n-1}+\frac{x}{y_{n-1}}\right)\implies y_{n-1}^2-2y_ny_{n-1}+x=0 \tag 1$$ (multiply the recursion through by $2y_{n-1}$). Since the left-hand side of $(1)$ is zero, multiplying $(1)$ by $x^{1/2}$ yields the trivially true identity $$y_{n-1}^2\sqrt{x}-2y_ny_{n-1}\sqrt{x}+x\sqrt{x}=-(y_{n-1}^2\sqrt{x}-2y_ny_{n-1}\sqrt{x}+x\sqrt{x}) \tag 2$$ Next, adding $y_ny_{n-1}^2+xy_n-2xy_{n-1}$ to both sides of $(2)$ reveals $$\begin{align} &y_ny_{n-1}^2+xy_n-2xy_{n-1}+y_{n-1}^2\sqrt{x}-2y_ny_{n-1}\sqrt{x}+x\sqrt{x}\\\\ &=y_ny_{n-1}^2+xy_n-2xy_{n-1}-(y_{n-1}^2\sqrt{x}-2y_ny_{n-1}\sqrt{x}+x\sqrt{x}) \tag 3 \end{align}$$ Factoring both sides of $(3)$ we have $$(y_n+\sqrt{x})(y_{n-1}-\sqrt{x})^2=(y_n-\sqrt{x})(y_{n-1}+\sqrt{x})^2 \tag 4$$ whereupon rearranging $(4)$ gives the coveted identity $$\frac{y_n-\sqrt{x}}{y_n+\sqrt{x}}=\left(\frac{y_{n-1}-\sqrt{x}}{y_{n-1}+\sqrt{x}}\right)^2$$ It is easy to see that the sequence $$\phi_n\equiv \frac{y_n-\sqrt{x}}{y_n+\sqrt{x}}$$ converges, since $\phi_n=\phi_{n-1}^2=\phi_0^{2^n}$ and $|\phi_0|<1$ (because $y_0>0$ implies $|y_0-\sqrt{x}|<y_0+\sqrt{x}$). Thus, $\phi_n \to 0$ and hence $y_n\to \sqrt{x}$, as was to be shown!
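The recursion is Newton's method for $\sqrt{x}$; both the identity and the convergence are easy to check numerically (a sketch):

```python
from math import sqrt, isclose

x, y = 2.0, 3.0                                   # any x > 0, y_0 > 0
phi = lambda y: (y - sqrt(x)) / (y + sqrt(x))
for _ in range(8):
    y_next = 0.5 * (y + x / y)                    # the recursion
    assert isclose(phi(y_next), phi(y)**2, rel_tol=1e-9, abs_tol=1e-12)
    y = y_next
assert isclose(y, sqrt(x))
print(y, sqrt(x))
```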
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Find $\lim\limits_{x\to \infty} \frac{x\sin x}{1+x^2}$ $$\lim_{x\to \infty} \frac{x\sin x}{1+x^2}$$ Using L'hopital I get: $$\lim_{x\to \infty} \frac{x\cos x + \sin x}{2x}=\lim_{x \to \infty}\frac{\cos x}{2}$$ However, how is it possible to evaluate this limit?
Some answers are saying L'Hopital is not applicable here because the numerator does not $\to \pm \infty.$ Actually L'Hopital is valid if we assume only the denominator $\to \pm \infty:$ If $f,g$ are differentiable on $(a,\infty), \lim_{x\to \infty} g(x) = \pm \infty,$ and $\lim_{x\to\infty} \frac{f'(x)}{g'(x)} = L,$ then $\lim_{x\to\infty} \frac{f(x)}{g(x)} = L.$ For some reason this "better" L'Hopital is not as well known as the usual L'Hopital. It should be, because the proof is about as easy as the usual one, and the result resembles its cousin, the Stolz-Cesaro theorem for convergence of a sequence. (Recall that in SC, only the denominator sequence is assumed to $\to \pm \infty.$)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 4 }
Show that $30 \mid (n^9 - n)$ I am trying to show that $30 \mid (n^9 - n)$. I thought about using induction but I'm stuck at the induction step. Base Case: $n = 1 \implies 1^ 9 - 1 = 0$ and $30 \mid 0$. Induction Step: Assuming $30 \mid k^9 - k$ for some $k \in \mathbb{N}$, we have $(k+1)^9 - (k+1) = [9k^8 + 36k^7 + 84k^6 + 126k^5 + 126k^4 + 84k^3 + 36k^2 + 9k] - (k+1)$. However I'm not sure where to go from here.
And here are the congruences requested to complement the answer by Simon S $$\begin{array}{c|cccc} n&n-1&n+1&n^2+1&n^4+1\\ \hline 0&4&1&1&1\\ 1&0&2&2&2\\ 2&1&3&0&2\\ 3&2&4&0&2\\ 4&3&0&2&2 \end{array}$$ And one can see that for $n\not\equiv 0\pmod 5$ one of these factors is always $\equiv 0\pmod{5}$, while for $n\equiv 0\pmod 5$ the factor $n$ itself is divisible by $5$
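Since $n^9-n$ is a polynomial in $n$ with integer coefficients, its value mod $30$ depends only on $n$ mod $30$, so checking one full period of residues suffices (a sketch):

```python
assert all((n**9 - n) % 30 == 0 for n in range(30))
print("30 | n**9 - n for every residue class mod 30, hence for every n")
```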
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Show that a zero of $f$ is a fixed point of $g$ I want to show that a solution of the equation $x^2+\cos(x)-10x=0$ is a fixed point of $g(x)=(x^2+\cos(x))/10$. I tried using the quadratic equation but my solution doesn't simplify nicely in $g$. I'm not sure how to deal with the $\cos(x)$ term. Any ideas?
If $x$ solves $x^2+\cos(x)-10x=0$, then $x^2+\cos(x)=10x$. So $$ g(x)=(x^2+\cos(x))/10=(10x)/10=x $$ No need for the quadratic equation. I have absolutely no idea what values of $x$ even look like here, but certainly if you bothered to find one it would be a fixed point of $g$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $U=\bigcup_{j=1}^{\infty}K_j$ then $U=\bigcup_{j=1}^{\infty}\mathring{K}_j$ Let $U\subset\mathbb{R}^n$ be a domain (connected open set), such that $U=\bigcup_{j=1}^{\infty}K_j$ where each $K_j$ is a compact set, and $K_j\subset K_{j+1}$ for all $j\ge 1$. Is the following proposition true? $U=\bigcup_{j=1}^{\infty}\mathring{K}_j$ $\mathring{K}_j:$ interior of $K_j$ Any help would be appreciated.
No, consider $n=1$ with $U=(-1,1)$ and $K_n= [-1+1/n,-1/n]\cup \{0\} \cup [1/n,1-1/n].$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Creating a CFL - based off unknown Regular language Suppose $A\subseteq\Sigma^{\ast}$ is a regular language. Let $B=\{xy^R:x,y\in\Sigma^* , |x|=|y|, x \ XOR \ y \in A\}$ Prove that B is context free. I am struggling with understanding B. My only thought on how to start, is to assume A is in Chomsky Normal Form then use this knowledge to alter the rules of A to suit B? This is where I am utterly lost. I would appreciate a push in the right direction. So far I can create the example that if 001 and 100 are in A, then 100 and 001 are in B.
In your example, you say that $100$ and $001$ are in $B$, but that's not quite correct. I think you may be confusing $x$ and $y$ with the elements of $B$. Let's let $A = \{001\}$. Then an $x$ and a $y$ such that $x \text{ XOR } y \in A$ could be $x = 111$, $y = 110$. So the string $xy^R = 111011$ would be in $B$. Like any language of the form $\{xy^R : \text{(condition)}\}$, you don't need to ever have more than one nonterminal at a time here. You'll be generating the characters of $x$ and $y$ in pairs. Your productions will all look like: $$A \rightarrow c_1Bc_2$$ with $A$ and $B$ nonterminals, $c_1$ and $c_2$ terminals. Strong Hint: Use the single nonterminal to keep track of your state in $A$'s DFA.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1320783", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confused with $x$ and $a$ of Taylor Series Given the general formula of the Taylor series: $$T(x)=\sum_{n=0}^{\infty}{\frac{f^{(n)}(a)}{n!}(x-a)^{n}}$$ Why is the Taylor series for $f(x)=\sin{x}$ always evaluated at $x=0$ (the evaluation means $a=0$), so we can get a desired $\sin{x}$ value like $\sin{\pi}$?
The derivatives of increasing order at $a$ are $$\sin(a),\cos(a),-\sin(a),-\cos(a),\sin(a),\cos(a),-\sin(a),-\cos(a),\cdots$$periodically, and the development is $$T_a(x)=\sin(a)+\cos(a)(x-a)-\sin(a)\frac{(x-a)^2}2-\cos(a)\frac{(x-a)^3}{3!}\cdots.$$ If you evaluate it at $x=a$, all terms but the first vanish: $$T_a(a)=\sin(a)+\cos(a)(a-a)-\sin(a)\frac{(a-a)^2}2-\cos(a)\frac{(a-a)^3}{3!}\cdots=\sin(a).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding strict Liapunov function Find a strict Liapunov function for the equilibrium point $(0, 0)$ of $$x' = −2x − y^2$$ $$y' = −y − x^2 .$$ Find $δ > 0$ as large as possible so that the open disk of radius $δ$ and center $(0, 0)$ is contained in the basin of $(0, 0)$. My attempt was $V = \frac{ax^2+by^2}{2}$, which gives $$ \dot{V}= -2ax^2 -by^2 -axy^2 - bx^2y$$ But I couldn't find the open set. Any hint? Thanks!
Write $\dot{V}$ in this form: $$\dot{V}=-(2a+by)x^2-(ax+b)y^2$$ For it to be less than $0$, we need $$2a+by>0\implies y>-\frac{2a}{b}\\ ax+b>0\implies x>-\frac{b}{a}$$ So the disc has radius the minimum of $\frac{2a}{b}$ and $\frac{b}{a}$, or $2r$ and $\frac{1}{r}$, with $r=\frac{a}{b}$. Can you continue from here?
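A numeric spot-check with $a=b=1$ (a sketch), where the disk radius is $\min(2a/b,\,b/a)=1$:

```python
import random

# V = (x**2 + y**2)/2, so Vdot = -(2 + y)*x**2 - (1 + x)*y**2 with a = b = 1.
for _ in range(10**5):
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if 0 < x*x + y*y < 1:
        assert -(2 + y)*x*x - (1 + x)*y*y < 0
print("Vdot < 0 on the punctured unit disk (sampled)")
```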
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Improper integral: $\int_0^\infty \frac{\sin^4x}{x^2}dx$ I have been trying to determine whether the following improper integral converges or diverges: $$\int_0^\infty \frac{\sin^4x}{x^2}dx$$ I have parted it into two terms. The first term: $$\int_1^\infty \frac{\sin^4x}{x^2}dx$$ converges (proven easily using the comparison test), but the second term: $$\int_0^1 \frac{\sin^4x}{x^2}dx$$ troubles me a lot. What could I do?
For the second term, you can re-write your function as $\frac{\sin x}{x} \cdot \frac{\sin x}{x} \cdot \sin^2(x)$. Note that on $[0,1]$, this function is continuous (by the Fundamental Trig Limit you can extend the function to be defined as $f(0)=1$ at $x=0$). But then any continuous function on a closed interval is integrable, so $\int_0^1 \frac{\sin^4(x)}{x^2}$ converges. Hence the whole integral converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Rooks Attacking Every Square on a Chess Board 8 rooks are randomly placed on different squares of a chessboard. A rook is said to attack all of the squares in its row and its column. Compute the probability that every square is occupied or attacked by at least 1 rook. The first step I took was to state that there are $64C8$ ways to decide how to place the 8 rooks on the chessboard. Next, I tried to experiment with a physical chessboard to see how this could be done. The only way I found that every square on the board can be attacked is if one rook is in either every horizontal row or every vertical row. Therefore, there are $2 * 8^8 - 2$ ways to place the rooks. To clarify, it is "-2" because the diagonals are counted twice. Is there a case that I overlooked, or did I solve the problem correctly? Thanks, You Know Me......
Consider e.g. In how many different ways can we place $8$ identical rooks on a chess board so that no two of them attack each other? for the number of ways that 8 rooks can be placed so that all squares are attacked or occupied. Then, how many ways in total can you place 8 rooks regardless of attacked or occupied squares?
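A sketch of the full computation: every square is attacked or occupied iff every row has a rook or every column has one, which by inclusion-exclusion gives $2\cdot 8^8-8!$ placements (the $8!$ removes the double-counted placements with one rook in every row and every column). A Monte Carlo run agrees:

```python
import random
from math import comb, factorial

p_exact = (2 * 8**8 - factorial(8)) / comb(64, 8)

trials, hits = 10**5, 0
for _ in range(trials):
    squares = random.sample(range(64), 8)
    rows = {s // 8 for s in squares}
    cols = {s % 8 for s in squares}
    hits += (len(rows) == 8) or (len(cols) == 8)
print(p_exact, hits / trials)    # both around 0.0076
```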
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Finding proper subfields Let $\omega$ denote a cube root of unity with $\omega\neq 1$. I want to find the subfields properly contained in $\mathbb Q(\sqrt[3]{2},\omega)$ and containing $\mathbb Q$ properly. Two of them are certainly $\mathbb Q(\sqrt[3]{2})$ and $\mathbb Q(\omega)$. I have been told that the correct answer is 4. But what are the other two subfields? Are they $\mathbb Q(\omega+\sqrt[3]{2})$ and $\mathbb Q(\omega\sqrt[3]{2})$? I have no idea. Please help me to understand this.
* $\mathbb Q(\omega + \sqrt[3]2)$ is actually equal to $\mathbb Q(\omega,\sqrt[3]2)$. (I can't say this with 100% certainty since I didn't work it out by hand, but it should be.)
* $\mathbb Q(\omega\sqrt[3]2)$ is a third subfield.
* $\mathbb Q(\omega^2\sqrt[3]2)$ is the fourth subfield.

You're probably wondering where $\mathbb Q(\omega^2\sqrt[3]2)$ comes from. This field, along with $\mathbb Q(\sqrt[3]2)$ and $\mathbb Q(\omega\sqrt[3]2)$, is generated by a root of the polynomial $x^3-2$, for which $\mathbb Q(\omega,\sqrt[3]2)$ is a splitting field. In field theory, a good strategy is to take some special element $\alpha$ and look at the other roots of the minimal polynomial of $\alpha$ over the base field, called the conjugates of $\alpha$. By the way, what I just said isn't a proof that these are the only nontrivial subfields, or that these subfields are distinct. You should be able to prove distinctness without too much trouble, but I'm not sure how to prove that we didn't miss any other subfields without using tools from Galois theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321461", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem solving $2\times 2$ equation system $$\left\{\begin{align} 2x + 3y &= 10 \\ 4x - y &= - 1 \end{align}\right. $$ 1) I don't get why I have to multiply first equation by $-2$ and not $+2$ 2) How does $x=\frac{1}{2}$ I'm only $15$ so try and explain as simple as possible. Thanks.
it doesn't matter whether you multiply the first equation by $2$ or $-2.$ what you want to do is to go from a set of simultaneous equations in two variables $x, y$ to an equation with just one variable. in this example you are trying to eliminate $x$ so that you end up with an equation in only $y.$ if we multiply the first equation by $2,$ we have $$\begin{align} 4x + 6y &= 20\\4x-y &=-1. \end{align}$$ subtracting the second from the first, we have $$7y = 21 $$ which gives you $y = 3.$ putting this in one of the original equations $$4x-y = -1 \to 4x -3 = -1 \to 4x = -1+3 = 2 .$$ that gives you $x = 1/2.$ had you multiplied by $(-2),$ you would have ended up with $$\begin{align} -4x - 6y &= -20\\4x-y &=-1. \end{align}$$ now instead of subtracting, you will add the two and get $y = 3$ again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Which Coefficient will Make a 2-Variable Linear System Solvable? This is most likely a pretty simple problem although my textbook doesn't quite explain how to solve it. I have a linear system with two equations and two variables (x and y) below: 2x - 5y = 9 4x + ay = 5 ('a' being the unknown coefficient of y) The question asks to find the value of a that will make this system have a single solution. At first this seemed trivial, although I tried 1) multiplying the top equation by -2 and then adding the two equations to eliminate x. I then got stuck at 10y + ay = -4. And 2) I tried only multiplying the top by 2 to get the 4x terms in both equations, in which case I thought that I could somehow remove the x terms out of both equations entirely but I don't think that's correct (unless I do it to both sides which doesn't help me much?). Thanks in advance for any help! I also wasn't able to think of any suitable tags for this question except "systems of equations" (which is supposed to be used on conjunction with a more specific tag). Couldn't find "linear equations" or other more suited tags.
Subtracting twice the first equation from the second eliminates $x$ and leaves $(a+10)y=-13$. So the only thing that would keep the system from yielding a single solution is $a+10=0$ (equivalently $2a+20=0$), that is, $a=-10$. In that case the loci of the two equations would be parallel. No intersections means no solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Coupled differential equations arising in flow lines. So, I ran (certainly not literally) across these two coupled differential equations, given by: $$x'(t)=\left(x(t)\right)^2-\left(y(t)\right)^2 $$ $$ y'(t)=2x(t)y(t)$$ These equations occurred when I was trying to figure out flow lines of the vector field given by $\vec F:\mathbb{R}^2\to \mathbb{R}^2\;\; \mid \; \vec{F}(x,y)=(x^2-y^2,2xy) $. A flow line satisfies $\vec{r}'(t)=\vec{F}(\vec{r}(t))$, where $t$ is any introduced parameter, and obviously $\vec{r}(t)=(x(t),y(t))$, right? Here's further background if it might be useful: http://ocw.mit.edu/courses/mathematics/18-022-calculus-of-several-variables-fall-2010/lecture-notes/MIT18_022F10_l_17.pdf, Example 17.10. And now for what I have tried. I differentiated the equation for $x'(t)$ to get $$x''(t)=2x(t)x'(t)-2y(t)y'(t)$$ From the original equation we have $y(t)=\pm \sqrt{(x(t))^2-x'(t)}$. Using this and the above to eliminate $y'(t)$ and $y(t)$ from the equation $y'(t)=2x(t)y(t)$ does not really yield a good expression. Any help to make the solution a bit easier?
what happens if we make a change of variables $$ x = r \cos \theta, y = r \sin \theta. $$ we have $$\begin{align} r'\cos \theta-r \sin \theta \, \theta' &= r^2 \cos (2\theta)\\ r'\sin \theta+r \cos \theta\, \theta' &= r^2 \sin (2\theta)\\\end{align} $$ from these we get $$r' = r^2\cos \theta, \quad \theta' = r\sin \theta $$ if we divide one by the other we have a separable equation $$\frac{dr}r = \frac{\cos \theta \, d\theta}{\sin \theta}$$ the solution is $$r = c\sin \theta \to x^2 + y^2 =cy$$ which represents a circle.
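A numerical check that orbits really lie on these circles (a sketch assuming scipy; the initial point and time span are arbitrary but chosen so the solution stays bounded):

```python
import numpy as np
from scipy.integrate import solve_ivp

def F(t, u):
    x, y = u
    return [x*x - y*y, 2*x*y]

x0, y0 = 0.3, 0.5
c = (x0**2 + y0**2) / y0                  # the constant in x^2 + y^2 = c*y
sol = solve_ivp(F, (0, 1.5), [x0, y0], dense_output=True, rtol=1e-10, atol=1e-12)
for t in np.linspace(0, 1.5, 10):
    x, y = sol.sol(t)
    assert abs((x*x + y*y) / y - c) < 1e-6
print("orbit stays on the circle x^2 + y^2 =", c, "* y")
```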
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
proof of existence While I was studying for my finals I found the following problem: Let $f:[a,b]\rightarrow \mathbb{R}$ be a Riemann integrable real function, and suppose $\displaystyle \int\limits_a^bf(t)\,\mathrm{d}t=3$. Prove that there exist $\displaystyle t_1,t_2 \!\in \! (a,b):\int\limits_{t_1}^{t_2}f(t)\,\mathrm{d}t=1$ Could you give me any hint or a way to start the solution?
Hint: The function $F\colon [a,b]\to \mathbb{R}$ defined by $F(x) = \int_a^x f(t)dt$ is continuous, and $F(a)=0$, $F(b)=3$. Use the Intermediate Value Theorem on it. Then: You can find $c\in(a,b)$ such that $F(c)=2$. Can you continue? And then: Now, do a similar step with $G\colon [a,c]\to \mathbb{R}$ defined by $G(x) = \int_x^c f(t)dt$, to find $d\in(a,c)$ such that $G(d)=1$, i.e. $\int_d^c f(t)dt=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Single variable improper integral Say I have an integral of $x/(1+x^2)$ that goes from negative infinity to infinity, and I part it into two integrals $A + B$ (let $A + B = I_\text{tot}$), where $A$ and $B$ have the limits from $R$ to infinity and from negative infinity to $R$, respectively, and where $R$ is any real number. Now, if I integrate e.g. $A$, I get $\frac12\ln u$ after a $u$-substitution, and since $A$ has the limit from $R$ to infinity, we know that the integral (and even $I_\text{tot}$, because infinity $+\,B=$ infinity) will diverge. Now I want to know if it is correct to say "the integral diverges" or "the integral diverges to infinity"? Because I compared many improper integrals that diverged but I couldn't find any patterns to confirm any difference.
$$ \int_{-\infty}^r \frac x {1+x^2} \,dx = -\infty \text{ and }\int_r^\infty \frac x {1+x^2}\,dx = +\infty. $$ One can say that one of these diverges to $-\infty$ and the other diverges to $+\infty$. However, notice also that $$ \lim_{r\to\infty} \int_{-r}^r \frac x {1+x^2}\,dx = 0\text{ and } \lim_{r\to\infty} \int_{-r}^{2r} \frac x {1+x^2}\,dx = \log_e 2>0. $$ The fact that the rearrangement in the last line can yield two different numbers is something that can happen only if the positive and negative parts both diverge to infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1321883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Calculate the limit? The answer sheet says: [working omitted]. I understand all of this apart from the last bit; I don't understand how/why they have put logs and exponentials in there.
Since $\exp$ and $\log$ are inverses of each other, \begin{equation*} a^{b}=e^{\ln (a^{b})}=e^{b\ln a}. \end{equation*} So, \begin{equation*} \left( \frac{3}{4}\right) ^{x}=e^{x\ln \left( \frac{3}{4}\right) }=\frac{1}{e^{-x\ln \left( \frac{3}{4}\right) }}=\frac{1}{e^{x\ln \left( \frac{4}{3}\right) }}. \end{equation*} On the other hand, $\ln \left( \frac{4}{3}\right) >0$ since $\ln x>0$ for all $x>1,$ and $\frac{4}{3}>1.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluating a sum (beginner) I am stuck on evaluating this sum. The sum is $$\sum_{k=1}^n\left({1\over (k+1)^2}-{1\over k^2}\right)$$ I can't find a way :(. Do we use comparison or partial sums? Thanks for the help!
$$\sum_{k=1}^n\ \left(\frac{1}{(k+1)^2} - \frac{1}{k^2}\right)$$ $S_1 = \frac{1}{2^2} - \frac{1}{1^2}$ $S_2 =\left(\frac{1}{2^2} - \frac{1}{1^2}\right)+\left(\frac{1}{3^2}-\frac{1}{2^2}\right)$ $S_3 = \left(\frac{1}{2^2} - \frac{1}{1^2}\right)+\left(\frac{1}{3^2}-\frac{1}{2^2}\right)+\left(\frac{1}{4^2}-\frac{1}{3^2}\right)$ . . . $S_n = -1+\frac{1}{(n+1)^2} = \sum_{k=1}^n\ \left(\frac{1}{(k+1)^2} - \frac{1}{k^2}\right)$ $\\$ From here you should be able to see what the sum goes to as $n \to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$n$-words from the alphabet $A=\{0, 1, 2, 3\}$. How many of them have an even number of zeros and ones? Consider all $n$-words from the alphabet $A=\{0, 1, 2, 3\}$. How many of them have an even number of zeros and an even number of ones? I showed that the number of $n$-words from $\{0, 1, 2, 3\}$ with an even number of zeros is $\displaystyle X_n=\frac{4^n+2^n}{2}$ and with an odd number of zeros is $\displaystyle Y_n=\frac{4^n-2^n}{2}$. But I have not been able to determine the number $T_n$ of $n$-words that have an even number of zeros and an even number of ones. Thanks for your help
For a word $w$, let $w(k)$ be the number of times that $k$ appears in $w$, and $l(w)$ the length of $w$. Let $A_n=\{w:l(w)=n,w(0)\text{ even and }w(1)\text{ even}\}$. Let $B_n=\{w:l(w)=n,w(0)\text{ odd and }w(1)\text{ even}\}$. Let $C_n=\{w:l(w)=n,w(0)\text{ even and }w(1)\text{ odd}\}$. Let $D_n=\{w:l(w)=n,w(0)\text{ odd and }w(1)\text{ odd}\}$. Take a word $w$ of length $n$ and let's try to generate a word of length $n+1$ in $A_{n+1}$ by appending a digit to the end. If $w\in A_n$ then the new digit can be $2$ or $3$. If $w\in B_n$ the new digit must be $0$. If $w\in C_n$ the new digit must be $1$. If $w\in D_n$ no valid word can be generated this way. Then: $$|A_{n+1}|=2|A_n|+|B_n|+|C_n|$$ Similarly, we get: $$|B_{n+1}|=|A_n|+2|B_n|+|D_n|$$ $$|C_{n+1}|=|A_n|+2|C_n|+|D_n|$$ $$|D_{n+1}|=|B_n|+|C_n|+2|D_n|$$ For words of length $1$, we have: $|A_1|=2$, $|B_1|=1$, $|C_1|=1$, $|D_1|=0$. This set of equations allows you to find any $|A_n|$ recursively. In fact, given the matrix $$M=\left(\begin{matrix}2&1&1&0\\1&2&0&1\\1&0&2&1\\0&1&1&2\end{matrix}\right)$$ the values of $(|A_n|,|B_n|,|C_n|,|D_n|)$ are given by $$M^{n-1}\left(\begin{matrix} 2\\ 1\\ 1\\ 0\end{matrix}\right)$$ where the vector on the right collects the counts for words of length $1$.
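A brute-force check of the recursion for small $n$ (a sketch). The counts also match $4^{n-1}+2^{n-1}$, a closed form one can extract from the recursion; here it is simply verified, not derived:

```python
from itertools import product

def brute(n):
    return sum(1 for w in product('0123', repeat=n)
               if w.count('0') % 2 == 0 and w.count('1') % 2 == 0)

A, B, C, D = 2, 1, 1, 0                    # counts for words of length 1
for n in range(1, 9):
    assert brute(n) == A == 4**(n - 1) + 2**(n - 1)
    A, B, C, D = 2*A + B + C, A + 2*B + D, A + 2*C + D, B + C + 2*D
print("recursion and closed form verified for n = 1..8")
```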
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I solve $y''+y'+7y=t$? How do I solve $y''+y'+7y=t$ where $y(0)=0$ and $y'(0)=0$ $(t\geq 0)$? I tried to solve this by Laplace transformation, but I couldn't find the inverse of $1/(s^2(s^2+s+7))$. How would I solve this?
You have the Laplace transform, so let's do a partial fraction decomposition. We want to find $a,b,c,d$ such that $$\frac{1}{s^2(s^2 + s +7)} = \frac{a}{s} + \frac{b}{s^2} + \frac{cs + d}{s^2 + s + 7}.$$ So we have $$as(s^2 + s +7) + b(s^2 + s +7) + cs^3 + ds^2 = 1.$$ Collecting coefficients we have $$ (a+c)s^3 + (a+b+d)s^2 + (7a+b)s + 7b =1$$ So after working your way through this, you should find $$a = -\frac{1}{49},\quad b = \frac{1}{7},\quad c = \frac{1}{49},\quad d= -\frac{6}{49}.$$ The final trick is to complete the square so that you have $$ s^2+s +7 = \left(s+\frac{1}{2}\right)^2 +\left(\frac{3\sqrt{3}}{2}\right)^2.$$ So $$\frac{1}{s^2(s^2 + s +7)} = \frac{1}{49}\left(\frac{-1}{s} + \frac{7}{s^2} + \frac{s+\frac{1}{2} }{\left(s+\frac{1}{2}\right)^2 +\frac{27}{4}}-\frac{13}{3\sqrt{3}}\cdot\frac{\frac{3\sqrt{3}}{2}}{\left(s+\frac{1}{2}\right)^2 +\frac{27}{4}}\right),$$ where the minus sign comes from $cs+d=\frac{1}{49}\left(\left(s+\frac12\right)-\frac{13}{2}\right)$. From this point the solution can be read off tables. Edit: I'd like to add some insight into wythagoras' answer. Let's suppose you have some function $y$. Suppose then you differentiate it a couple of times and you end up with a polynomial. Then it follows that $y$ had to be a polynomial to start with. Remember that differentiating a polynomial gives you another polynomial of one less order. Hence, if I add a polynomial to its derivatives, then what comes out the other end must be the same order as what I started with. So when I have $y'' + y'+7y =t$ then I know that I must have a polynomial, and that its highest-order term must match the order of $t$. So we are well justified assuming that the particular solution has the form $y_p= bt +a$.
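A cross-check with sympy's ODE solver (a sketch; assumes dsolve accepts the initial conditions in this form):

```python
from sympy import symbols, Function, dsolve, Eq, simplify

t = symbols('t')
y = Function('y')
sol = dsolve(Eq(y(t).diff(t, 2) + y(t).diff(t) + 7*y(t), t), y(t),
             ics={y(0): 0, y(t).diff(t).subs(t, 0): 0})
print(sol.rhs)    # a decaying oscillation plus the polynomial part t/7 - 1/49
assert simplify(sol.rhs.subs(t, 0)) == 0
```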
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Mathematical function for weighting football results Let's assume team A has scored x goals, and team B has scored y goals in a match. I'm trying to find a function that returns a number between 0 and 1 based on the goals from the match. Ideally, the function would reach 1 when y is 0 and x approaches infinity. I have a very simple function that works almost entirely as desired, and that is x/(x+y). The problem with this function is that 1-0 gives the same output as 10-0, namely 1.
One possibility is $$f(x,y)=\frac{x+1}{x+y+2}$$ This never equals $1$ or $0$, but the more lopsided the win for $x$ the closer it gets to $1$. It also treats $x$ and $y$ symmetrically, in the sense that swapping them gives the complementary value $f(y,x)=1-f(x,y)$, as you said you wanted in a comment. The score $1-0$ gives a value of $\frac 23$, while $10-0$ gives a value of $\frac{11}{12}$. This seems to meet all your criteria. Ties give the value $\frac 12$, which makes sense. This does show, however, that you can get the same result for differing scores. Is this acceptable?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Proving that $d(f,g)=\|f-g\| = \sup \limits_{0\leq x \leq 1} |f(x)-g(x)|$ is a metric on $X=C[0,1]$ Let $X=C[0,1]$ be the set of all continuous functions on $[0,1]$. For any two functions $f,g\in X$, set $$d(f,g)=\|f-g\| = \sup \limits_{0\leq x \leq 1} |f(x)-g(x)|.$$ Prove $(X,d)$ is a metric space. My attempt: This question comes down to showing that $d$ is a metric, which means that: * *$d(f,g)\geq 0$ *$d(f,g)=0\iff f=g$ *$d(f,g)=d(g,f)$ *Triangular sadness. 1) Nonnegativity is guaranteed by taking the supremum of the absolute value of a sum, since the absolute value of a sum is non-negative, the supremum of non-negative numbers is non-negative. $\square$ 2) For the supremum of $|f(x)-g(x)|$ to be equal to $0$, the greatest number for $|f(x)-g(x)|$ from $0\leq x\leq 1$ must be $0$, hence $|f(x)-g(x)|=0$ for all values on this range, and thus $f(x)=g(x)$ is the constant zero map. $\square$ 3) $|f(x)-g(x)|=|g(x)-f(x)|$ implies that $\sup \limits_{0\leq x \leq 1} |f(x)-g(x)|=\sup \limits_{0\leq x \leq 1} |g(x)-f(x)|$ $\square$ 4) Triangular sadness: We need to prove that $d(f,g)\leq d(f,h) + d(g,h)$ I.e. $\sup \limits_{0\leq x \leq 1} |f(x)-g(x)|\leq \sup \limits_{0\leq x \leq 1} |f(x)-h(x)|+\sup \limits_{0\leq x \leq 1} |g(x)-h(x)|$ Now I am not sure how I can 'behave' with the sup 'function'. Can I do the following? $$\sup|f(x)-g(x)| = \sup|f(x)-g(x)+h(x)-h(x)|=\sup|f(x)-h(x)+(h(x)-g(x))|$$ Then use normal triangular inequality: $$\sup|f(x)-h(x)+(h(x)-g(x))|\leq \sup(|f(x)-h(x)|+|h(x)-g(x)|)=\sup(|f(x)-h(x)|+|g(x)-h(x)|)$$ This is obviously less than $\sup \limits_{0\leq x \leq 1} |f(x)-h(x)|+\sup \limits_{0\leq x \leq 1} |g(x)-h(x)|$, but I don't know how to prove it. How do I finish that one off.
For all $x \in [0,1]$ we have $$ |f(x)-g(x)|\leq |f(x)-h(x)|+|h(x)-g(x)|\leq \| f-h\|+\|h-g\|$$ and the last bound doesn't depend on $x$ so $\sup _{x\in [0,1]}|f(x)-g(x)|\leq \| f-h\|+\|h-g\|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Bounding $\sum_{k=1}^N \frac{1}{1-\frac{1}{2^k}}$ I'm looking for a bound depending on $N$ of $\displaystyle \sum_{k=1}^N \frac{1}{1-\frac{1}{2^k}}$. The following holds $\displaystyle \sum_{k=1}^N \frac{1}{1-\frac{1}{2^k}} = \sum_{k=1}^N \sum_{i=0}^\infty \frac{1}{2^{ki}}=\sum_{n=0}^\infty \sum_{\substack{q\geq 0 \\ 1\leq p \leq N\\pq=n}}\frac{1}{2^{pq}}\leq 2N$ which does not seem accurate. Numerical trials suggest that $\displaystyle N-\sum_{k=1}^N \frac{1}{1-\frac{1}{2^k}}$ is a convergent sequence (it's easy to prove it is decreasing).
We have $$\sum_{k\leq N}\frac{2^{k}}{2^{k}-1}=N+\sum_{k\leq N}\frac{1}{2^{k}-1} $$ and recalling that holds the identity $$\sum_{k=1}^{\infty}\frac{1}{1-a^{k}}=\frac{\psi_{\frac{1}{a}}\left(1\right)+\log\left(a-1\right)+\log\left(\frac{1}{a}\right)}{\log\left(a\right)} $$ where $\psi_{q}\left(z\right) $ is the $q $-polygamma function, we have $$\sum_{k\leq N}\frac{2^{k}}{2^{k}-1}=N+\sum_{k\leq N}\frac{1}{2^{k}-1}\leq N+\sum_{k\geq1}\frac{1}{2^{k}-1}=$$ $$=N+1-\frac{\psi_{\frac{1}{2}}\left(1\right)}{\log\left(2\right)}\approx N+1.606695. $$
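Numerically, $\sum_{k\ge 1}\frac{1}{2^k-1}$ is the Erdős–Borwein constant, and the bound is easy to spot-check (a sketch):

```python
E = sum(1 / (2**k - 1) for k in range(1, 200))   # Erdos-Borwein constant
print(E)                                         # ~1.6066951524...
for N in (5, 10, 20):
    S = sum(1 / (1 - 2**-k) for k in range(1, N + 1))
    assert S <= N + E
print("bound verified for N = 5, 10, 20")
```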
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Finding a polynomial by divisibility Let $f(x)$ be a polynomial with integer coefficients. If $f(x)$ is divisible by $x^2+1$ and $f(x)+1$ by $x^3+x^2+1$, what is $f(x)$? My guess is that the only answer is $f(x)=-x^4-x^3-x-1$, but how can I prove it?
One way you could express this idea is through functions $g,h$ where $$f(x) = g(x)(x^2+1) \\ f(x)+1 = h(x)(x^3+x^2+1)$$ This means that $$g(x)(x^2+1)+1 =h(x)(x^3+x^2+1)$$ or that $$h(x) = \frac{g(x)(x^2+1)+1}{x^3+x^2+1}$$ So you could begin by choosing any polynomial $g$ you want so long as it forces $h$ to be a polynomial. Then $f$ will be a polynomial.
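Pushing this through with sympy gives a concrete $f$ (a sketch; `invert` computes the inverse of $x^2+1$ modulo $x^3+x^2+1$, and every other solution differs from this one by a multiple of $(x^2+1)(x^3+x^2+1)$):

```python
from sympy import symbols, invert, rem, expand

x = symbols('x')
inv = invert(x**2 + 1, x**3 + x**2 + 1, x)   # (x^2+1)*inv = 1 mod (x^3+x^2+1)
f = expand(-inv * (x**2 + 1))                # forces f = -1 mod (x^3+x^2+1)
print(f)                                     # x**4 + x**3 + x - 1
assert rem(f, x**2 + 1, x) == 0              # x^2+1 divides f
assert rem(f + 1, x**3 + x**2 + 1, x) == 0   # x^3+x^2+1 divides f+1
```

Note that the lowest-degree solution this produces is $x^4+x^3+x-1$, not quite the guess in the question.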
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Trigonometric Equation involving tangent - How do I solve it? I would like to know how to solve this and most importantly make intuitive sense of it. Thanks in advance! Equation: $\tan(x) = -1.5$ (using a calculator with $\arctan(-1.5)$) My Results: $x = -0.9820 + 2 \pi n$ Book says the result is: $x = 2.1588 + 2\pi n$, $x = 5.3004$
When you have an equation $$\tan x = \tan \alpha$$ then there are infinitely many solutions for $x$. The solutions such that $x\in(-\pi,\pi]$ are called the principal solutions. The general solution of the above equation is $$x=n\pi+\alpha\,;\;\;\;\;n\in\mathbb{Z}.$$ Here $\alpha = \arctan(-1.5) \approx -0.9828$. So the solutions are $\approx -0.9828 + n\pi$; note the period is $\pi$, not $2\pi$. All three solutions are correct: $-0.9828+\pi\approx 2.1588$ and $-0.9828+2\pi\approx 5.3004$. You can see this from the graph. Since the function $\tan x$ is periodic, the graph of $\tan x$ repeats after every $\pi$ units. The places where the line $y=-1.5$ intersects $y=\tan x$ are the solutions. As the graph keeps repeating, there are infinitely many solutions, and consecutive solutions are a distance $\pi$ apart.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there a proof showing that $\log x$ is undefined and makes no sense at $x=0$? I was asked by someone: why is $\log x$ undefined at $x=0$? Is there a proof showing that $\log x$ is undefined at $x=0$? Note (01): $\log$ is the inverse function of the exponential function. Note (02): I edited my question, as I meant: why does it make no sense at $x=0$? Thank you for your help.
The natural logarithm is defined as $$ \log x = \int_{1}^{x} \frac{1}{t} \, dt $$ Naively plugging in $0$, we get $$ \log 0 = \int_{1}^{0} \frac{1}{t} \, dt = - \int_{0}^{1} \frac{1}{t} \, dt, $$ However, the area under $y = 1 / t$ from $0$ to $1$ diverges to $\infty$, which would imply $$ \log 0 = - \infty $$ Intuitively, that's why $\log 0$ is undefined, which is in line with your original question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1322925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 4 }
Confusion with the formula to sum the terms of a finite geometric series The formula to sum the terms of a finite geometric series is the following: $$\frac{a_1(1 - r^{n+1})}{1 - r}$$ where $a_1$ is the first term, $r$ is the common ratio, and $n + 1$ is the number we want to sum up to. Now, my problem is really in this last part, I have seen some formulas that use $r^n$, others use $r^{n+1}$. My questions are: * *In general, how does the power to which we raise $r$ change depending on the index of the sum? *What if we want to start the sum from a different index, for example $1$ or $3$ instead of $0$. *How does the number of terms we want to sum influence the formula? I know these might seem like stupid questions, but I am just confused, and it might be the time to understand exactly what's going on. I have seen the derivation of the formula, but I am still not understanding the indices.
In general, for $n\geq m$ and $r\neq 1$, $$ \sum_{k=m}^n r^k = r^m \sum_{k=m}^n r^{k-m} = r^m \sum_{\ell=0}^{n-m} r^{\ell} = r^m\cdot \frac{1-r^{n-m+1}}{1-r} $$ To remember it: the exponent of $r$ in the numerator is $n-m+1$, the number of terms in the sum. To check (very basic check): when $n=m$, you only have one term, and it's $r^m$, so the result should simplify to $r^m$ in that case.
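A check with exact rationals (a sketch):

```python
from fractions import Fraction

r = Fraction(3, 4)
for m in range(5):
    for n in range(m, 10):
        direct = sum(r**k for k in range(m, n + 1))
        closed = r**m * (1 - r**(n - m + 1)) / (1 - r)
        assert direct == closed
print("geometric sum formula verified")
```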
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $x,y$ is a 2-edge cut of a graph $G$; every cycle of G that contains $x$ must also contain $y$ $x,y$ are cut edges; if I understand the definition, it means that if we delete both $x$ and $y$ our resulting graph is disconnected. I'm very confused because I started like this: Let $C$ be a cycle that contains $x$ but not $y$...(here I'm thinking how to get a contradiction with the fact that they are cut edges)...I'll make a couple of drawings... Oh-oh, this one doesn't work! If I remove both, the graphs ends up disconnected, yet they don't belong to the same cycle. I don't know if I'm having logic problems here or the definition of vertex cuts needs the cut to be minimum, because in this case just removing $y$ would be enough! Definitions used in the book:
You misunderstood the definition of a cut; I hope it is clear now thanks to @Gregory J. Puleo. That being said, here is my answer: Let $C$ be a cycle that contains $x$ and does not contain $y$. Since $x\in [S,\overline{S}]$ and $x \in C$, the cycle must cross back between the two sides, so there must exist another edge $z\in[S,\overline{S}]$ completing the cycle between both "partitions". So it must be the case that $z=y$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Need a hint to evaluate $\lim_{x \to 0} {\sin(x)+\sin(3x)+\sin(5x) \over \tan(2x)+\tan(4x)+\tan(6x)}$ I know that $\sin A + \sin B + \sin C = 4\cos({A \over 2})\cos({B \over 2})\cos({C \over 2})$ when $A+B+C=\pi$. If ${x \to 0}$ then I have a half circle, right? If it is right then I have $\tan(2x) + \tan(4x) + \tan(6x)=\tan(2x)\tan(4x)\tan(6x)$. I got stuck at $${4\cos({x \over 2})\cos({3x \over 2})\cos({5x \over 2}) \over \tan(2x)\tan(4x)\tan(6x)}$$ Is this correct? Do you have a hint to help me?
In what follows I will provide full details of the idea of Abhishek Parab. First divide top and bottom by $x,$ then \begin{eqnarray*} \frac{\sin x+\sin 3x+\sin 5x}{\tan 2x+\tan 4x+\tan 6x} &=&\frac{\left( \dfrac{\sin x+\sin 3x+\sin 5x}{x}\right) }{\left( \dfrac{\tan 2x+\tan 4x+\tan 6x}{x}\right) } \\ && \\ &=&\frac{\left( \dfrac{\sin x}{x}+3\left( \dfrac{\sin 3x}{3x}\right) +5\left( \dfrac{\sin 5x}{5x}\right) \right) }{\left( 2\left( \dfrac{\tan 2x}{% 2x}\right) +4\left( \dfrac{\tan 4x}{4x}\right) +6\left( \dfrac{\tan 6x}{6x}% \right) \right) } \end{eqnarray*} Now using the standard limits \begin{equation*} \lim_{u\rightarrow 0}\frac{\sin u}{u}=\lim_{u\rightarrow 0}\frac{\tan u}{u}=1 \end{equation*} the limit follows \begin{eqnarray*} \lim_{x\rightarrow 0}\frac{\sin x+\sin 3x+\sin 5x}{\tan 2x+\tan 4x+\tan 6x} &=&\frac{\left( \lim\limits_{x\rightarrow 0}\dfrac{\sin x}{x}+3\left( \lim\limits_{3x\rightarrow 0}\dfrac{\sin 3x}{3x}\right) +5\left( \lim\limits_{5x\rightarrow 0}\dfrac{\sin 5x}{5x}\right) \right) }{\left( 2\left( \lim\limits_{2x\rightarrow 0}\dfrac{\tan 2x}{2x}\right) +4\left( \lim\limits_{4x\rightarrow 0}\dfrac{\tan 4x}{4x}\right) +6\left( \lim\limits_{6x\rightarrow 0}\dfrac{\tan 6x}{6x}\right) \right) } \\ &=& \\ &=&\frac{\left( 1+3\left( 1\right) +5\left( 1\right) \right) }{\left( 2\left( 1\right) +4\left( 1\right) +6\left( 1\right) \right) }=\frac{3}{4}. \end{eqnarray*}
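A quick numeric confirmation (a sketch):

```python
from math import sin, tan

for x in (0.1, 0.01, 0.001):
    val = (sin(x) + sin(3*x) + sin(5*x)) / (tan(2*x) + tan(4*x) + tan(6*x))
    print(x, val)        # tends to 0.75
```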
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Fourier transform of a Green's function I was studying for an exam and I found this question which has caused me a bit of trouble: Given the Green's function that satisfies the equation $$\Box G(\mathbf{r}-\mathbf{r}',t-t')=-4\pi\delta(\mathbf{r}-\mathbf{r}',t-t'),$$ where $\Box$ is the d'Alembertian operator $$\Box=\Delta-\frac{1}{c^2}\frac{\partial^2}{\partial t^2}$$ show that, given a suitable method of handling the poles, its Fourier transform is $$g(\mathbf{k},\omega)=\frac{1}{4\pi^3\left(\|\mathbf{k}\|^2-\tfrac{\omega^2}{c^2}\right)}.$$ The aim of the question is obviously to look at ways of solving the wave equation by Fourier transform, but I'm a little bit unsure how to proceed, especially since $g$ looks a bit bizarre to me. The term $\omega$ refers to the angular frequency of the wave, and $\mathbf{k}$ the wave vector. What is the simplest/best way to go about this? Finding $G$ first and then taking its Fourier transform seems like a tedious way to solve this problem. Thanks for any help.
Generally you would Fourier transform the differential equation. The elements of the D'Alembertian transform to $\vec{k}\cdot\vec{k}$ (the Laplacian) and $-\omega^2$ (the 2nd time derivative), while the delta function transforms to $1$. Then you get something like $$ (2\pi)^4 \left(\left\lVert \vec{k}\right\lVert^2 - \frac{\omega^2}{c^2}\right) g= 4\pi $$ Solving for $g$ gives you the answer. This is obviously greatly simplified and makes many assumptions about certain integrals converging, but it's the general approach.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Experimental Units I am doing a paper airplane project for school. I am doing a two-factor experiment, each with three levels, the factors are design of the plane and angle of launch. I am also having three people make a copy each of the airplane designs to account for variation when manufacturing the planes. Am I correct in believing that there are 9 experimental units (3 designs $\times$ 3 people each making each design)? Or are there 81 experimental units (3 designs $\times$ 3 launch angles $\times$ 3 people making a plane each $\times$ 3 repetitions).
If your factors are design of the plane and angle of launch, you have three of each, so $9$ cases. You have decided that the manufacturer and the repetition do not matter systematically, so repeating them will reduce the random variability. You have $81$ data points that you have decided to bin into $9$ bins of $9$ points. This is a reasonable approach when you cannot afford many trials: you have to guess what matters and what doesn't. The problem comes when something you thought was uniform is not, e.g. when one manufacturer is better than the others. Maybe s/he creases the paper better, so there is less air resistance to the flight. Then those of his/her planes that got launched at a better angle will bias your selection toward that design. It is hard.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Differentiability of $\cos |x-5|+\sin |x-3|+|x+10|^3-(|x|+4)$. Question : Test the differentiability of the function $$\cos |x-5|+\sin |x-3|+|x+10|^3-(|x|+4)$$ at the points $5, 3, -10, 0$. Solution : Now $\cos |x|$ is differentiable everywhere. So is $\sin |x|$ as well as $|x|^3$. Therefore the only issue is the function $|x|$ at $x=0$. Therefore the function is differentiable at all the other points except $x=0$. Is this correct?
No, it is not correct. First, note that $\cos|x-5|$ causes no trouble at $x=5$: since cosine is even, $\cos|x|=\cos x$ is differentiable everywhere. Likewise $|x|^n$ is differentiable everywhere for any $n\in\Bbb Z_{\ge 2}$, so $|x+10|^3$ is smooth at $x=-10$. However, $\sin |x|$ is differentiable everywhere except at $x=0$ (the one-sided derivatives there are $+1$ and $-1$), and $|x|$ is differentiable everywhere but at $x=0$. So the function is differentiable everywhere except at the points $x$ such that either $x-3=0$ or $x=0$, i.e. it fails to be differentiable exactly at $x=3$ and $x=0$.
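One can see the kinks numerically with a rough one-sided difference check (a quick Python sketch; the step size is an ad-hoc choice):

```python
import math

def f(x):
    return math.cos(abs(x - 5)) + math.sin(abs(x - 3)) + abs(x + 10)**3 - (abs(x) + 4)

def one_sided(x0, h=1e-6):
    right = (f(x0 + h) - f(x0)) / h
    left  = (f(x0) - f(x0 - h)) / h
    return left, right

for x0 in (5, 3, -10, 0):
    print(x0, one_sided(x0))
# at x0 = 3 and x0 = 0 the one-sided slopes disagree (kinks);
# at x0 = 5 and x0 = -10 they agree (differentiable there)
```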
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Combinatorics of given alphabet I'm looking for the formula to determine the number of possible words that can be formed with a fixed set of letters and some repeated letters. For instance, take the 8-letter word SEASIDES and find out how many possible anagrams can be made: there are 3x S, 2x E, and 1 each of A, I and D; words of fewer than 8 letters are allowed. The formula for all 8 letters is not too difficult: $W_8 = 8! / (3! \cdot 2!)$. However, this doesn't include shorter anagrams, for which the formula seems less intuitive.
You can use $$\displaystyle \sum_{s=0}^3 \sum_{e=0}^2 \sum_{a=0}^1 \sum_{i=0}^1 \sum_{d=0}^1 \dfrac{(s+e+a+i+d)!}{s!\,e!\,a!\,i!\,d!}$$ and if you like ignore the $a!\,i!\,d!$, which is always $1$. This will give $9859$ possibilities. You may want to subtract $1$ if you want to exclude $0$-letter anagrams, leaving $9858$ possibilities. For the full $8$-letter anagrams, each index takes its maximum value, giving $\frac{8!}{3! \, 2!}=3360$ possibilities. It seems there are another $3360$ $7$-letter anagrams (is this a coincidence?), so between them more than two-thirds of the total.
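A brute-force cross-check of the count (a short Python sketch looping over how many of each letter are used; the $a!\,i!\,d!$ factors are omitted since they are always $1$):

```python
from math import factorial

total = 0
for s in range(4):          # up to three S's
    for e in range(3):      # up to two E's
        for a in range(2):  # at most one each of A, I, D
            for i in range(2):
                for d in range(2):
                    n = s + e + a + i + d
                    total += factorial(n) // (factorial(s) * factorial(e))
print(total)   # 9859, or 9858 excluding the empty word
```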
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Partition Generating Function a) Let $$P(x)=\sum_{n=0}^{\infty} p_nx^n=1+x+2x^2+3x^3+5x^4+7x^5+11x^6+\cdots$$ be the partition generating function, and let $Q(x)=\sum_{n=0}^{\infty} q_nx^n$, where $q_n$ is the number of partitions of $n$ containing no $1$s. Then $\displaystyle\frac{Q(x)}{P(x)}$ is a polynomial. What polynomial is it? b) Let $P(x)$ be the partition generating function, and let $R(x)=\sum_{n=0}^{\infty} r_nx^n$, where $r_n$ is the number of partitions of $n$ containing no $1$s or $2$s. Then $\displaystyle \frac{R(x)}{P(x)}$ is a polynomial. What polynomial is it? (Put answer in expanded form) How can I start this problem?
Hint: Try to understand why $$P(x)=\frac{1}{\prod_{n=1}^{\infty}\left(1-x^n\right)},$$ and what are the corresponding expressions for $Q(x)$ and $R(x)$. Illustration: \begin{align*} \frac{1}{1-x}\cdot \frac{1}{1-x^2} \cdot \frac{1}{1-x^3}\cdot\ldots =\,\Bigl(&1+x^1+x^{1+1}+x^{1+1+1}+\ldots\Bigr)\\ \times\,&\Bigl(1+x^2+x^{2+2}+x^{2+2+2}+\ldots\Bigr)\\ \times\,&\Bigl(1+x^3+x^{3+3}+\ldots\Bigr)\times \ldots\end{align*} \begin{align*} =1+x^1+\left(x^{1+1}+x^2\right)+\left(x^{1+1+1}+x^{2+1}+x^3\right)+\\+\left(x^{1+1+1+1}+x^{2+1+1}+x^{2+2}+x^{3+1}+x^4\right)+\ldots\end{align*}
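The product formula can also be checked mechanically. The following Python sketch multiplies out finitely many factors $\frac{1}{1-x^m}$ as truncated power series and recovers exactly the partition numbers quoted in the question:

```python
# Expand 1/((1-x)(1-x^2)...(1-x^N)) as a power series truncated at x^N
N = 10
coeffs = [1] + [0] * N          # start from the constant series 1
for part in range(1, N + 1):    # fold in one factor 1/(1-x^part) at a time
    for k in range(part, N + 1):
        coeffs[k] += coeffs[k - part]
print(coeffs)   # [1, 1, 2, 3, 5, 7, 11, 15, 22, 30, 42]
```

Dropping the factor for $m=1$ gives the coefficients of $Q(x)$, and dropping $m=1,2$ gives those of $R(x)$.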
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Simplifying sum equation. (Solving max integer encoded by n bits) Probably a lack of understanding of basic algebra. I can't get my head around why this sum-to-$N$ equation simplifies to this much simpler form. $$ \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) = 2^n - 2 $$ Background To give you some background, I am trying to derive $MaxInt(n) = 2^n-1$, which describes the maximum integer that can be created using two's complement, where $n$ is the number of bits the integer is encoded by. How two's complement encodes numbers with 4 bits is explained by the images below: Therefore: $$ MaxInt(n) = \sum_{i=0}^{n-2} 2^i = (2^0 + 2^1 + ... + 2^{n-3} + 2^{n-2} ) $$ Maybe there is a way of simplifying this directly, but I figured that this is a similar problem to sum to $N$, where $$ \frac{T(n) + T(n)}{2} = T(n) = \sum_{i=1}^{n} (n-i+1) = \sum_{i=1}^{n} i $$ So following this logic, $MaxInt(n)$ is also equal to: $$ MaxInt(n) = \frac{MaxInt(n) + MaxInt(n)}{2} $$ Since $$ (2^0 + 2^1 + ... + 2^{n-3} + 2^{n-2}) = (2^{n-2} + 2^{n-3} + ... + 2^2 + 2^1 + 2^0) $$ we have $$ MaxInt(n) = \sum_{i=0}^{n-2} 2^{n-2-i} = (2^{n-2} + 2^{n-3} + ... + 2^2 + 2^1 + 2^0) $$ Putting it all together: $$ MaxInt(n) = \frac{\sum_{i=0}^{n-2} 2^{n-2-i} + \sum_{i=0}^{n-2} 2^i}{2} = \frac{\sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right)}{2} $$ This is where I got stuck. Cheating with Wolfram Alpha, I found that $$ \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) = 2^n - 2 $$ but I don't know why. If you see a better alternative way (i.e. not using the sum-to-$N$ method) of deriving $MaxInt(n) = 2^n-1$, please let me know. Thanks for reading.
The thing you've asked to show isn't too hard: \begin{align} \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \sum_{i=0}^{n-2}\left(2^i\right) \\ &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \left(1 + 2 + \ldots + 2^{n-2}\right) \\ &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \left(2^{n-1} - 1\right) \\ \end{align} where that last step comes from the formula for the sum of a geometric series, which I think you probably know. Now let's simplify the left-hand term... \begin{align} \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) &= \sum_{i=0}^{n-2} \left(2^{-i+n-2}\right) + \left(2^{n-1} - 1\right) \\ &= \sum_{i=0}^{n-2} \left(2^{(n-2)-i}\right) + \left(2^{n-1} - 1\right) \\ &= \left(2^{n -2} + 2^{n-3} + \ldots + 2^0\right) + \left(2^{n-1} - 1\right) \\ \end{align} which we recognize as another geometric series, written in reverse order; the sum there gives us \begin{align} \sum_{i=0}^{n-2} \left(2^{-i+n-2} + 2^i\right) &= \left(2^{n -2} + 2^{n-3} + \ldots + 2^0\right) + \left(2^{n-1} - 1\right) \\ &= \left(2^{n -1} - 1\right) + \left(2^{n-1} - 1\right) \\ &= 2 \cdot 2^{n -1} - 2 \\ &= 2^{n} - 2. \end{align} Quick proof for the geometric series: If we expand $$ U = (1 - a) (1 + a + a^2 + \ldots + a^k) $$ with the distributive law, and then gather like terms via the commutative law for addition, we get this: \begin{align} U &= (1 - a) (1 + a + a^2 + \ldots + a^{k-1} + a^k)\\ &= 1 \cdot (1 + a + a^2 + \ldots + a^{k-1} + a^k) - a \cdot (1 + a + a^2 + \ldots + a^{k-1} + a^k)\\ &= (1 + a + a^2 + \ldots + a^{k-1} + a^k) - (a + a^2 + a^3 + \ldots + a^k + a^{k+1})\\ &= 1 + (a + a^2 + \ldots + a^k) - (a + a^2 + a^3 + \ldots + a^k) - a^{k+1}\\ &= 1 - a^{k+1} \end{align} so we have that $$ 1-a^{k+1} = (1-a) (1 + a + \ldots + a^k) $$ hence (for $a \ne 1$), $$ 1 + a + \ldots + a^k = \frac{1-a^{k+1}}{1-a}, $$ which is the formula for the sum of a finite geometric series whose ratio is not 1. For an infinite series whose ratio has absolute value less than 1, the infinite sum turns out to be $\frac{1}{1-a}$, by the way, but this requires a careful definition of a sum for an infinite series.
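And a two-line numerical confirmation of the identity (Python sketch):

```python
def lhs(n):
    # i runs over 0..n-2; the exponent -i+n-2 stays nonnegative, so all terms are integers
    return sum(2**(-i + n - 2) + 2**i for i in range(n - 1))

for n in (2, 3, 5, 10):
    print(n, lhs(n), 2**n - 2)   # the two columns agree
```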
{ "language": "en", "url": "https://math.stackexchange.com/questions/1323931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
finding out if two vectors are perpendicular or parallel I'm not sure if I quite get this. For example, $(1, -1)$ and $(-3, 3)$: take the dot product, and you will end up with $-3 + (-3) = -6$. This doesn't equal $0$, so it's not perpendicular. So that leaves me with it being parallel. When are two vectors parallel?
They are parallel if and only if one is a scalar multiple of the other, e.g. $(1,3)$ and $(-2,-6)$. In your example, $(-3,3)=-3\,(1,-1)$, so the two vectors are indeed parallel. The dot product will be $0$ for perpendicular vectors, i.e. vectors that cross at exactly $90$ degrees. When you calculate the dot product and your answer is non-zero, it just means the two vectors are not perpendicular.
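In 2D both tests are one-liners (a minimal Python sketch):

```python
u, v = (1, -1), (-3, 3)
dot = u[0]*v[0] + u[1]*v[1]        # -6: nonzero, so not perpendicular
cross_z = u[0]*v[1] - u[1]*v[0]    # 0: zero, so parallel (indeed v = -3*u)
print(dot, cross_z)
```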
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
A collection $\{f_\alpha\}_{\alpha \in A}$ so that $\sup_{\alpha \in A} f_{\alpha}(x)$ is finite and non-measurable Background Give an example of a collection of measurable non-negative functions $\{f_\alpha\}_{\alpha \in A}$ such that if $g$ is defined by $g(x)=\sup_{\alpha \in A} f_{\alpha}(x)$, then $g$ is finite for all values of $x$ but $g$ is non-measurable. ($A$ is allowed to be uncountable.) Attempt Let $A$ be the Vitali set. Then $A$ is not Lebesgue measurable. For each $\alpha \in A$, let $$f_{\alpha}(x)=\begin{cases} 1 &\mbox{if } x=\alpha \\ 0 &\mbox{if } x \neq \alpha \end{cases}.$$ Then for each $\beta \in \mathbb{R}$, $$\{x:f_{\alpha}(x)>\beta\}\in \{\varnothing, \{\alpha\},\mathbb{R}\},$$ so $f_\alpha$ is measurable with respect to the Lebesgue $\sigma-$ algebra. However, $$g(x)=\begin{cases} 1 &\mbox{if } x\in A \\ 0 &\mbox{if } x \not\in A \end{cases},$$ which is finite and non-measurable since $\{x:g(x)>0\}=A.$ Question Is my example correct? Specifically, is more work required to show that $g$ turns out to be as I have claimed?
Your proof is correct; the only thing to spell out in more detail is the claim $$\{x:f_{\alpha}(x)>\beta\}\in \{\varnothing, \{\alpha\},\mathbb{R}\},$$ namely $$ \{x:f_{\alpha}(x)>\beta\}=\begin{cases} \varnothing & \text{ if } \beta\geqslant1 \\ \{\alpha\} & \text{ if } 0\leqslant\beta<1 \\ \mathbb{R} & \text{ if } \beta<0 \end{cases} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Sum of weighted chi square distributions Let $X_1 \sim \chi_{k}^2$ and $X_2 \sim \chi_{k}^2$ be i.i.d. and let $a_1$ and $a_2$ be positive real values. How can the PDF of $Y = a_1X_1 + a_2X_2$ be expressed? Is it also a chi-square distribution? Thanks.
For the PDF of a weighted sum of two i.i.d. chi-square variables, see Corollary 1 on page 5 of https://arxiv.org/pdf/1208.2691.pdf. In general $Y$ is not chi-square distributed: a weighted sum of independent chi-squares follows a so-called generalized chi-square distribution, and it reduces to an ordinary chi-square only in the unweighted case $a_1=a_2=1$, where $Y\sim\chi^2_{2k}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What conditions must be checked to show that $c$ is Cohen over $V$? $\textbf{Hechler forcing}$ Let $\mathbb{D}=\{(s,f): s \in \omega^{<\omega},\ f\in \omega^{\omega},\ \text{and } s \subseteq f\}$, ordered by $(t,g)\leq (s,f)$ if $s \subseteq t$, $g$ dominates $f$ everywhere, and $f(i) \leq t(i)$ for all $i \in |t|\setminus|s|$. It generically adds a new real $d=\bigcup\{s:(s,f)\in G \text{ for some } f \in \omega^{\omega}\}$. I want to show that Hechler forcing adds a Cohen real. Let $d \in \omega^{\omega}$ be a Hechler real over $V$. Define $c \in 2^\omega$ by $c(n)=d(n) \bmod 2$. What conditions must be checked to show that $c$ is Cohen over $V$? $\textbf{Eventually different forcing}$ $\mathbb{E}$ consists of pairs $(s,F)$, where $s \in \omega^{<\omega}$ and $F$ is a finite set of reals, with $(s,F)\leq (t,G)$ iff $t \subseteq s$, $G \subseteq F$, and $\forall{i \in \operatorname{dom}(s\setminus t)}\,\forall{g \in G}\,(s(i)\neq g(i))$. It generically adds a new real $f_{G}=\bigcup\{s:(s,H)\in G\}$. I want to show that eventually different forcing adds a Cohen real. If $f_{G} \in \omega^{\omega}$ is a generic real, define $c \in 2^\omega$ by $c(n)=1$ if $f_{G}(n)$ is even and $c(n)=0$ if $f_{G}(n)$ is odd. What conditions must be checked to show that $c$ is Cohen over $V$? Any suggestions, please.
A real $c\in2^\omega$ is Cohen over $V$ if and only if, for all $D\subseteq2^{<\omega}$, if $D$ is dense in $2^{<\omega}$ (meaning that every $s\in2^{<\omega}$ has an extension in $D$) and $D\in V$, then $D$ contains an initial segment $c\upharpoonright n$ of $c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $K$ is finite Galois over $\mathbb{Q}$ I just need a bit of quick help in understanding some solutions to a problem set. The question is this: (a) Let $K=\mathbb{Q}(\alpha)$ with $\alpha$ a zero of $f(x) = x^3-3x+1$. Prove that $K$ is Galois over $\mathbb{Q}$. I easily showed that $f(x)$ is irreducible, and determined that if $\alpha$ is a root of $f(x)$, then so is $\alpha^2-2$. I was stuck, and the solution says: Let $\beta$ be the third zero of $f(x)$; then $\alpha + ( \alpha^2-2)+ \beta = 0$, since the coefficient of $x^2$ is zero in $f(x)$. Why is this so? I am struggling to see this.
Have you heard of the discriminant? This is one way to show that $K$ is Galois over the field of rational numbers: for $f(x)=x^3+px+q$ the discriminant is $-4p^3-27q^2$, which here equals $-4(-3)^3-27(1)^2=81=9^2$. Since $f$ is irreducible with square discriminant, its Galois group is $A_3$, so the splitting field has degree $3$ over $\mathbb{Q}$; as $K=\mathbb{Q}(\alpha)$ also has degree $3$ and sits inside the splitting field, $K$ equals the splitting field and hence is Galois. (As for the step you quote: by Vieta's formulas, the sum of the three roots of a monic cubic is minus the coefficient of $x^2$, which here is $0$.)
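The discriminant computation is a one-liner in sympy (a quick sketch):

```python
from sympy import symbols, discriminant, sqrt

x = symbols('x')
d = discriminant(x**3 - 3*x + 1, x)
print(d, sqrt(d))   # 81, 9 -- a perfect square, so the Galois group is A_3
```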
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Plotting a part of curves (with a possible solution as an attempt) I may mess up this question! I plotted these using http://www.desmos.com/calculator. How can I plot only a part of some curves? For example, consider the plot of $y=x^2$: if I want to plot the curve only from $x=1$ to $x=2$, how do I achieve it? My attempt: I can only do $[n,\infty)$, that is, starting from $x=n$ but ending up at $\infty$. [first limitation] Let's take a function, say $f(x)=x$. Suppose I want to plot it from $x=3$; what I do is plot $y=({\sqrt{x-3}})^2 + 3$. But this does not work on even-degree equations like $y=x^2$ [second limitation] ($y=({\sqrt{x^2-3}})^2 + 3$), and it also does not work with curves like $\sin{x}$, as using $y=({\sqrt{f(x)-n}})^2 + n$ (as I have been doing) makes the values non-real if $n\geq 1$. [third limitation] My 2nd attempt: I plotted $y=x^2+0{\sqrt{x-1}}^2+0{\sqrt{3-x}}^2$, which gives exactly what I want, but I guess the method is a little crude. Hence my final method: $y=f(x)+0{\sqrt{x-a}}^2+0{\sqrt{b-x}}^2$... Am I right?
Rory's suggestion is quite good, and it should work for Desmos. But apparently what you want is to stealthily include domain information; that's what it will boil down to, as we'll see. To this end, you can cook up auxiliary "characteristic functions" that are $1$ at least on the domain you want, and $0$ at certain places where you don't want to plot the function. Thus, we'll let $$\ell_b(x) = \frac{1}{2(x - b)}\bigg(x - b - \lvert x - b\rvert\bigg) = \begin{cases}1, &x < b\\0, &x > b\end{cases}$$ so that $\ell_b(x)$ encodes whether $x \in (-\infty, b)$; that is, whether $x$ is to the $\ell$eft of $b$. Note that $\ell_b(b)$ is not defined (and I don't particularly care to find a way to make it either $1$ or $0$). Now, given any function $f(x)$, if you plot $\dfrac{f(x)}{\ell_b(x)}$, you'll get the graph of $f$ only for $x < b$. You can do something analogous to define a characteristic function for the right-opening ray $(a, \infty)$, defining $$r_a(x) = \frac{1}{2(a - x)}\bigg(a - x - \lvert a - x \rvert\bigg) = \begin{cases}0, &x < a\\1, &x > a\end{cases},$$ again noting that $r_a(a)$ is not defined. Now, if you want to plot the function $f(x)$ for $a < x < b$, this is exactly $$\frac{f(x)}{2}\bigg(\frac{1}{\ell_b(x)}+\frac{1}{r_a(x)}\bigg),$$ provided $a < b$. So, it's really just sneakily including domain information, but if it's what you want to do, you can. Things are trickier if you want to define a function on a union of intervals like $(-\infty, a) \cup (b, \infty)$, so I'll let you think about that, if you want. The key is just to figure out a way to divide by $0$ outside of your desired domain, and divide by $1$ on the desired domain (while rescaling, if you need to use a sum).
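If you want the same trick outside Desmos, here is a matplotlib rendering of the construction (a sketch; the interval $(1,3)$ and the function $x^2$ are just the example from the question, and the division-by-zero points outside the domain are blanked out as NaN):

```python
import numpy as np
import matplotlib.pyplot as plt

def ell(x, b):   # 1 for x < b, 0 for x > b, undefined at x = b
    return (x - b - np.abs(x - b)) / (2 * (x - b))

def r(x, a):     # 0 for x < a, 1 for x > a, undefined at x = a
    return (a - x - np.abs(a - x)) / (2 * (a - x))

x = np.linspace(-1, 4, 2001)
with np.errstate(divide='ignore', invalid='ignore'):
    y = (x**2 / 2) * (1 / ell(x, 3) + 1 / r(x, 1))  # x^2 restricted to (1, 3)
y[~np.isfinite(y)] = np.nan   # hide everything outside the domain
plt.plot(x, y)
plt.show()
```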
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Strange logic question, truth of predictions 1: Half of my predictions come true; 2: I predict A; 3: I predict B. Now, suppose A comes true, so that prediction 2 is true, and B comes out false. So, half of my predictions came true and 1 is also true; BUT in this way, 2/3 of my predictions came true, so 1 is false. Contradiction?? Maybe it's only a nonsensical mental whimsy of mine.
This doesn't seem like a contradiction to me: (1) and (3) are false, and (2) is true. Alternatively, we can make things more complicated by introducing time (https://en.wikipedia.org/wiki/Temporal_logic) into the equation, so that there is a moment when (1) is true (immediately after (2) has been verified and (3) disproved), but then (1) becomes false immediately afterwards. But I don't see the need for this interpretation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
When does equality hold in this inequality? The following inequality can be proven as follows: Let $n\geq3$ and $0=a_0<a_1<\dots<a_{n+1}$ such that $a_1a_2+a_2a_3+\dots+a_{n-1}a_n=a_na_{n+1}$. Show that \begin{equation*} \frac{1}{{a_3}^2-{a_0}^2}+\frac{1}{{a_4}^2-{a_1}^2}+\dots+\frac{1}{{a_{n+1}}^2-{a_{n-2}}^2}\geq\frac{1}{{a_{n-1}}^2}. \end{equation*} Solution: The expression on the left-hand side can be rewritten as $$ \frac{a_1^2 a_2^2}{a_1^2 a_2^2 a_3^2 - a_0^2 a_1^2 a_2^2} + \frac{a_2^2 a_3^2}{a_2^2 a_3^2 a_4^2 - a_1^2 a_2^2 a_3^2} + \cdots + \frac{a_{n-1}^2 a_n^2}{a_{n-1}^2 a_n^2 a_{n+1}^2 - a_{n-2}^2 a_{n-1}^2 a_n^2}. $$ Applying the Cauchy-Schwarz inequality then yields $$ \begin{align*} &\frac{a_1^2 a_2^2}{a_1^2 a_2^2 a_3^2 - a_0^2 a_1^2 a_2^2} + \frac{a_2^2 a_3^2}{a_2^2 a_3^2 a_4^2 - a_1^2 a_2^2 a_3^2} + \cdots + \frac{a_{n-1}^2 a_n^2}{a_{n-1}^2 a_n^2 a_{n+1}^2 - a_{n-2}^2 a_{n-1}^2 a_n^2} \\ & \ge \frac{\left( a_1 a_2 + a_2 a_3 + \cdots + a_{n-1} a_n \right)^2}{a_1^2 a_2^2 a_3^2 - a_0^2 a_1^2 a_2^2 + a_2^2 a_3^2 a_4^2 - a_1^2 a_2^2 a_3^2 + \cdots + a_{n-1}^2 a_n^2 a_{n+1}^2 - a_{n-2}^2 a_{n-1}^2 a_n^2} \\ & = \frac{a_n^2 a_{n+1}^2}{a_{n-1}^2 a_n^2 a_{n+1}^2 - a_0^2 a_1^2 a_2^2} \ge \frac{1}{a_{n-1}^2}. \end{align*} $$ When does equality hold?
From the equality condition of the Cauchy–Schwarz inequality, we deduce: $$ \forall 0 \leqslant k \leqslant n-1, \, \, a_{k+3}^2 = a_k^2 + c$$ where $c > 0$ is a constant. So the equality case splits into $3$ possibilities, depending on $n$. For example, if $n = 3K + 1$: $$\sum_{k=0}^{K-1} (a_0 + kc)(a_1 + kc) + (a_1 + kc)(a_2 + kc) + (a_2 + kc)(a_0 + (k+1)c) = (a_1 + Kc)(a_2 + Kc) - (a_0 + Kc)(a_1 + Kc).$$ We can solve the above equation with respect to $c$ (it is quadratic) and see what the roots are. If one of the roots is positive, then equality holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1324994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $\phi: M_1 \to M_2$ a diffeomorphism between diff. manifolds, prove that if $M_2$ is orientable then so is $M_1$ Let $\phi: M_1 \to M_2$ be a local diffeomorphism between two differentiable manifolds $M_1,M_2$. I want to prove that if $M_2$ is orientable, so is $M_1$. Attempt: For a manifold to be orientable, the determinant of the Jacobian of each transition map must be positive. That means, for $M_2$ say, that if I have two mappings $g_{1}: V \subset \mathbb{R}^n \to V_1$ and $g_{2}: V \subset \mathbb{R}^n \to V_2$, then it must hold that $$ \det (g_1 \circ g_2^{-1}) >0. $$ Now the pullback of this transition map to $M_1$, with transition mappings $f_1: U \to U_1$ and $f_2: U \to U_2$, should be $$ f_1 \circ f_2^{-1} = \phi^{-1}(g_1 \circ g_2^{-1}), $$ right? Then, I do not know how to show that the determinant of $f_1 \circ f_2^{-1}$ is also positive, which is required in order to show that $M_1$ is orientable.
An equivalent condition for a manifold $M$ to be orientable is that there exists a nowhere-vanishing form $\Omega$ of degree equal to the dimension of the manifold. So let $M_2$ be orientable of dimension $n$. Then there exists a non-vanishing $\Omega_2\in\Omega^n(M_2).$ Let $\phi:M_1\rightarrow M_2$ be a diffeomorphism. Set $\Omega_1:=\phi^*\Omega_2.$ By definition, $\Omega_1\in\Omega^n(M_1).$ Now it is sufficient to show that $\Omega_1$ vanishes nowhere. In fact, for any point $x\in M_1$ and any linearly independent vectors $X_1,\dots,X_n\in T_x M_1$ we have the following: $$\Omega_1(X_1,\dots,X_n)=\Omega_2(\phi_*X_1,\dots,\phi_*X_n).$$ Since $\phi$ is a diffeomorphism, $\phi_*$ is an isomorphism of the vector spaces $T_x M_1$ and $T_{\phi(x)} M_2.$ Hence $\phi_*X_1,\dots,\phi_*X_n$ are linearly independent as well, and a non-vanishing top-degree form is nonzero on any basis. So $$\Omega_1(X_1,\dots,X_n)=\Omega_2(\phi_*X_1,\dots,\phi_*X_n)\neq 0.$$ The existence of $\Omega_1$ proves that $M_1$ is orientable. Remarks about the equivalence of definitions: I encourage you to see for yourself that those definitions are equivalent (see any textbook on differential geometry). The Jacobian-determinant definition is usually cumbersome for global statements, due to its local nature and its references to charts and atlases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Integration of $\frac{f'(x)}{f(x)}$? $\frac{f'(x)}{f(x)}$ integrated must be $$\int\frac{f'(x)}{f(x)}dx=\ln\lvert f(x)\rvert+c.$$ But when trying to show this with integration by parts I get another result: $$ \begin{align*} &\int\frac{\frac{d}{dx}f(x)}{f(x)}dx \\ =\quad&\int\frac{d}{dx}(f(x))\cdot (f(x))^{-1}dx \\ =\quad&f(x)\cdot (f(x))^{-1}-\int -f(x)\cdot (f(x))^{-2}\cdot f'(x)\,dx \\ =\quad&1+\int \frac{f'(x)}{f(x)}\cdot(f(x))^{-1}\cdot f(x)\,\frac{dx}{f(x)}\cdot f(x) \end{align*} $$ which simplifies to $$1+\ln\lvert f(x)\rvert+c.$$ Is there a mistake, or can I absorb the $1$ into $c$, since $1$ is a constant?
Yes, you are correct: the $1$ is just a constant, so it can be absorbed into the constant of integration. In fact, you can always combine constants this way. Another example: let's take two approaches to finding, for $x\in (-1,1)$, $$\int -\frac{1}{\sqrt {1-x^2}}dx$$ Approach 1: Since $$(\arccos x)'=-\frac{1}{\sqrt{1-x^2}}$$ we have $$\int -\frac{1}{\sqrt {1-x^2}}dx=\arccos x+C \tag{1}$$ Approach 2: Since $$(\arcsin x)'=\frac{1}{\sqrt{1-x^2}}$$ We have $$\int -\frac{1}{\sqrt {1-x^2}}dx=-\arcsin x +C$$ But $$\arccos x+\arcsin x=\frac{\pi}{2}$$ Therefore $$\int -\frac{1}{\sqrt {1-x^2}}dx=\arccos x-\frac{\pi}{2} +C\tag{2}$$ They are equivalent.
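For what it's worth, a CAS agrees and silently absorbs the constant (a sympy sketch; sympy omits both the $+c$ and the absolute value, which is harmless here since the sample functions are positive):

```python
from sympy import symbols, integrate, sin, exp

x = symbols('x')
for f in (x**2 + 1, exp(x) + 2, sin(x) + 2):
    print(integrate(f.diff(x) / f, x))
# log(x**2 + 1), log(exp(x) + 2), log(sin(x) + 2)
```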
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 5 }
What does $\sum_{k=0}^\infty \frac{k}{2^k}$ converge to? This problem comes from another equation on another question (this one). I tried to split it in half but I found out that $$\sum_{k=0}^\infty \frac{k}{2^k}$$ can't be divided. Knowing that $$\sum_{k=0}^\infty x^k=\frac{1}{1-x}$$ I wrote that $$\sum_{k=0}^\infty \frac{k}{2^k}=\sum_{k=0}^\infty \left(\frac{\sqrt[k] k}{2}\right)^k=\frac{1}{1-\frac{\sqrt k}{2}}=\frac{2}{2-\sqrt[k] k}$$ But that's not what I wanted. Could anyone help me?
Start with (for $|x|<1$): $$\frac{1}{1-x}=\sum_{k=0}^{\infty}x^k.$$ Then take the derivative with respect to $x$: $$\frac{1}{(1-x)^2}=\sum_{k=1}^{\infty}kx^{k-1}.$$ Multiply by $x$: $$\frac{x}{(1-x)^2}=\sum_{k=1}^{\infty}kx^{k}.$$ Now substitute $x=\frac{1}{2}$, which gives $\frac{1/2}{(1/2)^2}=2$.
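Numerically (an exact-arithmetic Python sketch):

```python
from fractions import Fraction

s = sum(Fraction(k, 2**k) for k in range(200))   # the tail beyond k=200 is negligible
print(float(s))   # 2.0, matching x/(1-x)^2 at x = 1/2
```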
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 1 }
Geometric justification of the trigonometric identity $\arctan x + \arctan \frac{1-x}{1+x} = \frac \pi 4$ The trigonometic identity $$ \arctan x + \arctan \frac 1 x = \frac\pi 2\quad\text{for }x>0 $$ can be seen to be true by observing that if the lengths of the legs of a right triangle are $x$ and $1$, then the two acute angles of the triangle are the two arctangents above. Does the identity $$ \arctan x + \arctan \frac{1-x}{1+x} = \frac \pi 4 $$ have a similar geometric justification? PS: The second identity, like the first, can be established by either of two familiar methods: * *Use the usual formula for a sum of two arctangents; or *differentiate the sum with respect to $x$. PPS: Secondary question: Both of the functions $x\mapsto\dfrac 1 x $ and $x\mapsto\dfrac{1-x}{1+x}$ are involutions. Does that have anything to do with this? Presumably it should mean we should hope for some geometric symmetry, so that $x$ and $\displaystyle\vphantom{\frac\int\int}\frac{1-x}{1+x}$ play symmetrical roles. PPPS: In a triangle with a $135^\circ$ angle, if the tangent of one of the acute angles is $x$, then the tangent of the other acute angle is $\displaystyle\vphantom{\frac\int\int}\dfrac{1-x}{1+x}$. That follows from the second identity above. If there's a simple way to prove that result by geometry without the identity above, then that should do it.
Here's another geometrical picture of what the identity means. The tangent of an angle can also be interpreted as the slope of a line. That is, take a straight line through the origin of an XY Cartesian coordinate system; then its equation in Cartesian coordinates is $$y=\tan{\theta} \; x$$ where $\theta$ is the angle between that line and the X-axis. There is, however, one line that does not fit in this scheme, but we're going to change that. Obviously, the Y-axis does not fit into that scheme because its Cartesian equation is $x=0$. We'll however extend our possible slopes with $\infty$ so that the Y-axis has slope $\infty$ and therefore $\tan \pi/2$ is defined to be $\infty$. Now, going back to our Cartesian plane, I want you to look at the following linear transformation $$\mathbb{R}^2\to\mathbb{R}^2:\left(\begin{array}{c}x\\y\end{array}\right)\mapsto\frac{1}{\sqrt{2}}\left(\begin{array}{cc} 1 & 1 \\ 1 & -1\end{array}\right)\left(\begin{array}{c}x\\y\end{array}\right)$$ Since this is a linear transformation, it leaves the origin invariant, and more than that, it also maps straight lines through the origin to straight lines through the origin. Therefore, we can look to define it in terms of the slopes of these lines. Since the slope of a line is given by $z=y/x$ we have that $$\mathbb{R}\cup \{\infty\}\to \mathbb{R}\cup \{\infty\} : z\mapsto\frac{1-z}{1+z}$$ This is exactly your transformation from before. Going back to my original linear transformation, we can check that it has eigenvalues $1$ and $-1$ with respective eigenvectors $(1,\sqrt{2}-1)$ and $(1,-\sqrt{2}-1)$. I've written those in such a way that we can immediately read off the slopes as $\sqrt{2}-1$ and $-\sqrt{2}-1$. This indicates that our transformation is a reflection about the line with slope $\sqrt{2}-1$. This slope corresponds to an angle of $\pi/8$. Now, if I reflect an arbitrary line, the image of the line and the line both make an angle with the X-axis, the sum of which is $\pi/4$. You have to be careful in defining the angles for this to work, as the "orientation" of the line you reflect matters in defining the sign of the angle. Or you could work $\mod\pi$, which is a symmetry of the $\tan$ function anyway. The transformation $$\mathbb{C}\cup \{\infty\}\to \mathbb{C}\cup \{\infty\} : z\mapsto\frac{1-z}{1+z}$$ is also known as a Möbius transformation when it is working in the complex plane (or rather the Riemann sphere). But before that, they were also referred to as linear transformations, which at first may seem odd, because the prescription of the transformation is anything but linear. However, because of the connection I explicitly built up here with the group of linear transformations, it is clearer why the name is fitting. More precisely, it is known that the group of Möbius transformations is related to the projective special linear group $PSL(2,\mathbb{C})$. If I have more time, I'll try to look further into the geometric meaning of your identity in the context of the Riemann sphere.
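A quick numerical check of both the identity and the reflection-axis slope (plain Python; the sample points are arbitrary values with $x>-1$, where the identity holds without a $\pm\pi$ adjustment):

```python
import math

for x in (0.0, 0.3, 0.9, 2.0, 5.0):
    print(math.atan(x) + math.atan((1 - x) / (1 + x)))  # pi/4 = 0.785398... each time

# the mirror axis has slope tan(pi/8) = sqrt(2) - 1
print(math.tan(math.pi / 8), math.sqrt(2) - 1)
```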
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 3 }
Why $\lim_{R\to\infty}\int_{0}^{\pi}\sin(R^{2}e^{2i\theta})iRe^{i\theta}\:\mathrm{d}\theta = -\sqrt{\frac{\pi}{2}}$ This is a short question, but I'm simply not sure where to start. I know by Jordan's lemma that the integral is not $0$, but I only know the result below thanks to Mathematica. $$\lim_{R\to\infty}\int_{0}^{\pi}\sin(R^{2}e^{2i\theta})iRe^{i\theta}\:\mathrm{d}\theta=-\sqrt{\frac{\pi}{2}}$$ I need the result in order to proceed with evaluating a real integral using a contour integral. Does anyone have any advice on how to approach the integral?
Just in order to add something to the other answers: in order to compute the Fresnel integrals you may also use the Laplace transform, via the identity $\int_0^\infty f(x)g(x)\,dx=\int_0^\infty (\mathcal{L}f)(s)\,(\mathcal{L}^{-1}g)(s)\,ds$. Since: $$\mathcal{L}^{-1}\left(\frac{1}{\sqrt{x}}\right) = \sqrt{\frac{1}{\pi s}},\qquad\mathcal{L}(\sin x)=\frac{1}{1+s^2},\qquad\mathcal{L}(\cos x)=\frac{s}{1+s^2} $$ we have: $$ \int_{0}^{+\infty}\sin(x^2)\,dx = \frac{1}{2}\int_{0}^{+\infty}\frac{\sin x}{\sqrt{x}}\,dx = \frac{1}{2}\int_{0}^{+\infty}\frac{ds}{\sqrt{\pi s}(1+s^2)}=\frac{1}{\sqrt{\pi}}\int_{0}^{+\infty}\frac{du}{1+u^4}$$ as well as: $$ \int_{0}^{+\infty}\cos(x^2)\,dx = \frac{1}{2}\int_{0}^{+\infty}\frac{\cos x}{\sqrt{x}}\,dx = \frac{1}{2}\int_{0}^{+\infty}\frac{s\,ds}{\sqrt{\pi s}(1+s^2)}=\frac{1}{\sqrt{\pi}}\int_{0}^{+\infty}\frac{u^2\,du}{1+u^4}.$$ Through the substitution $u\to\frac{1}{u}$ we can see that the last two integrals are the same.
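To confirm numerically that this chain of equalities lands on the Fresnel value $\frac12\sqrt{\pi/2}=\sqrt{\pi/8}$, one can evaluate the final integrals with mpmath (a quick sketch):

```python
from mpmath import mp, quad, sqrt, pi, inf

mp.dps = 25
I = quad(lambda u: 1 / (1 + u**4), [0, inf])
J = quad(lambda u: u**2 / (1 + u**4), [0, inf])
print(I / sqrt(pi), J / sqrt(pi))   # both 0.626657..., confirming u -> 1/u symmetry
print(sqrt(pi / 8))                 # the same value, (1/2) sqrt(pi/2)
```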
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Maximal ideals of polynomial ring We know that if $k$ is algebraically closed, then each maximal ideal of $k[x_1, x_2, \ldots , x_n]$ is of the form $(x_1 - a_1, x_2 - a_2, \ldots, x_n - a_n),$ where $a_1, a_2, \ldots , a_n \in k$ (Hilbert's Nullstellensatz). In the case when $k$ is not algebraically closed, is it correct to say that a maximal ideal $m$ of $k[x_1, x_2, \ldots, x_n]$ has residue field $k$ if and only if $m = (x_1 - a_1, x_2 - a_2, \ldots, x_n - a_n)$ for some $a_1, a_2, \ldots, a_n \in k$? Thank you.
Yes. Set $R=k[x_1...x_n]$ and suppose $R/m\cong k$. Then we have a homomorphism $R\to k$ given by composing $R\to R/m$ with this isomorphism, and each $x_i$ is sent to some element $a_i$ of $k$. Since the $x_i$ generate $R$ as a $k$-algebra, this completely determines the homomorphism $R\to k$, whose kernel is $m$; the kernel contains the maximal ideal $(x_1-a_1,\dots,x_n-a_n)$, hence equals it, which realizes $m$ in the desired form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Proving continuity for all $x$. I am having difficulty in proving the following problem. Any hints would be greatly appreciated. Let $f:[0,1]\times [0,1] \rightarrow \mathbb{R}$ be such that for each $y \in [0,1]$, $f(\cdot,y)$ is continuous and for each $ x \in [0,1]$, $f(x,\cdot)$ is measurable. Assume that $f$ is also bounded. Show that the function $F(x) = \int_{[0,1]}f(x,y)\,dy$ is continuous for all $x \in [0,1]$.
This is simply the Dominated Convergence Theorem. Write $f_x(y) := f(x,y)$, then $F(x) = \int_0^1 f_x(y)dy$. For any $x_0 \in [0,1]$, as $x \to x_0$, $f_x \to f_{x_0}$ pointwise by continuity along the horizontal lines. Since $|f_x|$ is uniformly bounded for all $x$, we have by the DCT that $$ \lim_{x \to x_0}F(x) = \lim_{x \to x_0}\int_0^1 f_x(y)dy = \int_0^1 \lim_{x \to x_0}f_x(y)dy = \int_0^1 f_{x_0}(y)dy = F(x_0) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $2|x|/(1+x^{2})<1$ true if $|x|<1$? [Solved] I was asked to show that the series $$\sum_{k\geq 0}a_{k}\left ( \frac{2x}{1+x^{2}} \right )^{k}$$ is convergent for $x\in (-1,1)$, where $\{ a_{k}\}_{k\geq 0}$ is a real bounded sequence. My main question is, how do I know that $|2x/(1+x^{2})|<1$, that is, $2|x|/(1+x^{2})<1$ is true for $|x|<1$? EDIT: Thanks for your answers. It took time to find it out on my own. I hope this too is correct. Let $y(x)=2x/(1+x^{2})$. Since $y'(x)>0$ for all $x\in (-1,1)$ then $y(x)$ is strictly increasing for all $x\in (-1,1)$. Hence $y(-1)<y(x)<y(1)$ for all $x\in (-1,1)$. Since $y(-1)=-1$ and $y(1)=1$, then $| y(x)|<1$ is true for $|x|<1$.
The claim is equivalent to $2|x| < 1+x^2$, which is true by the AM–GM inequality $\frac{1+x^2}{2}\ge\sqrt{x^2}=|x|$, where equality occurs only when $x^2=1$. Because $|x|<1$ implies $x\neq \pm 1$, the inequality is strict. The AM–GM inequality is stated for nonnegative reals, but all terms involved here are nonnegative even when $x$ is negative, so the above inequality is true for all reals except $x=\pm{1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there a theory of induced representations for semigroups? Given a semigroup $G$, a subgroup $H\subseteq G$ (not merely a subsemigroup) and a representation $\rho: H\rightarrow GL(V)$ for some vector space $V$, is there a canonical definition of an induced representation on some vector space $V^\prime$ containing $V$? Is there even a well developed representation theory of semigroups? If so, please give me references. (I encountered this question while working on my master's thesis, which includes a generalization/extension of certain Hecke operators on vector valued modular forms.) Thanks in advance, mathmax Edit: Corrected typo; of course, $\rho$ has to be a representation of $H$ not of $G$
I am writing a book on the representation theory of monoids. The latest version is http://www.sci.ccny.cuny.edu/~benjamin/monoidrep.pdf You will see that there is a more subtle way to induce representations of a monoid from its maximal subgroups than the way suggested by Phil, and this subtler construction is essential for constructing the simple modules.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to prove $\frac{(a-b)^4+(b-c)^4+(c-a)^4}{(a+b+c)^4}=2$ Let $a,b,c\in \mathbb{R}$ be such that $ab+bc+ca=0$ and $a+b+c\neq 0$. Show that $$\dfrac{(a-b)^4+(b-c)^4+(c-a)^4}{(a+b+c)^4}=2$$
Hint: Express the numerator in terms of elementary symmetric polynomials, as the denominator and constraint already are. You get $$(a-b)^4+(b-c)^4+(c-a)^4 \\= 2(a+b+c)^4 - 12(ab + bc + ca) (a+b+c)^2 + 18(ab+bc+ca)^2$$
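The identity in the hint is a one-line verification in sympy (a quick sketch):

```python
from sympy import symbols, expand, simplify

a, b, c = symbols('a b c')
lhs = (a - b)**4 + (b - c)**4 + (c - a)**4
e1, e2 = a + b + c, a*b + b*c + c*a
rhs = 2*e1**4 - 12*e2*e1**2 + 18*e2**2
print(simplify(expand(lhs - rhs)))   # 0
```

Setting $e_2 = ab+bc+ca = 0$ then leaves exactly $2(a+b+c)^4$ in the numerator.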
{ "language": "en", "url": "https://math.stackexchange.com/questions/1325944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculating the radius of convergence of a series. Let $d_n$ denote the number of divisors of $n^{50}$; determine the radius of convergence of the series $\sum\limits_{n=1}^{\infty}d_nx^n$. So obviously we need to calculate the limit of $\frac{d_{n+1}}{d_n}$. I am guessing I need some information about the asymptotic behavior of $d_n$. Any help? The options given are $1 ,0 , 50 ,\frac{1}{50}$.
Note that $d_n\ge 1$ for all $n$, and for all $n \ge 2$, $d_n \le n^{50}$. So $$\frac{1}{R}=\limsup_{n \to \infty}{d_n}^{\frac{1}{n}}$$ and $$1 \le \limsup_{n \to \infty}{d_n}^{\frac{1}{n}} \le \lim_{n \to \infty}(n^{\frac{1}{n}})^{50}=1.$$ Hence $\frac{1}{R}=1$, i.e. the radius of convergence is $R=1$.
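For intuition, one can watch $d_n^{1/n}$ approach $1$ numerically (a sympy sketch; these sample $n$ are smooth, so factoring $n^{50}$ is instant):

```python
from sympy import divisor_count

for n in (2, 5, 10, 50, 100):
    d = divisor_count(n**50)
    print(n, d, float(d) ** (1 / n))   # the n-th roots drift toward 1
```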
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find two finite abelian groups of the same order, $G$ and $H$, such that $G \ncong H.$ I found the groups $\mathbb{Z}_4$ and $V$ (the Klein four-group), which satisfy this, but I would like more examples. I'm trying to use the result that if $p$ is a prime number, then every group of order $p^2$ is abelian.
The three abelian groups of order $8$, namely $\mathbb{Z}_8$, $\mathbb{Z}_4\times\mathbb{Z}_2$ and $\mathbb{Z}_2\times\mathbb{Z}_2\times\mathbb{Z}_2$, belong to three different isomorphism classes, so no two of them are isomorphic; any pair of them gives a further example.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
How are basis elements also elements of the topology? I read the following definitions in Munkres' Topology (2nd Edition): If $X$ is a set, a basis for a topology on $X$ is a collection $\mathcal B$ of subsets of $X$ (called basis elements) such that * *For each $x\in X$, there is at least one basis element $B$ containing $x$. *If $x$ belongs to the intersection of two basis elements $B_1$ and $B_2$, then there is a basis element $B_3$ containing $x$ such that $B_3\subset B_1\cap B_2$. If $\mathcal B$ satisfies these two conditions, then we define the topology $\mathcal T$ generated by $\mathcal B$ as follows: A subset $U$ of $X$ is said to be open in $X$ (that is, to be an element of $\mathcal T$) if for each $x\in U$, there is a basis element $B\in\mathcal B$ such that $x\in B$ and $B\subset U$. Note that each basis element is itself an element of $\mathcal T$. From this, I am having difficulties understanding how the last sentence follows. Could someone please point me in the right direction?
Just expanding on Daniel Fischer's comment a bit: From the definition of an open set from a basis, we have: A subset $U$ of $X$ is said to be open iff for each $x\in U$, there is a basis element $B\in\mathcal B$ such that $x\in B\subseteq U$. So consider this: Take a $B \in \mathcal{B}$ and further let $x\in B$. There exists a basis element, namely $B$, such that: $x \in B \subseteq B$, and hence $B$ is open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bounded sequence with subsequences all converging to the same limit means that the sequence itself converges to the same limit. I have a question regarding one exercise in Stephen Abbott's Understanding Analysis. The question is: Assume $(a_n)$ is a bounded sequence with the property that every convergent subsequence of $(a_n)$ converges to the same limit $a \in \mathbb{R}$. Show that $(a_n)$ must converge to $a$. I was then presented this proof (from the solutions manual): Assume for contradiction that $(a_n)$ does not converge to $a$; then from the negation of convergence we have (I'm not quite sure about this part): $$\exists \epsilon>0, \forall N\in \mathbb{N}, \exists n\in \mathbb{N} : n\geq N \wedge |a_n-a|\geq \epsilon. $$ Using this we can construct a subsequence $(a_{n_j})$ that diverges from $a$ as follows: for arbitrary $N\in \mathbb{N}$, we can always find an $n_1$ such that $n_1 \geq N$ and $|a_{n_1}-a|\geq \epsilon$. Since $n_1$ itself is in $\mathbb{N}$, we can again find an $n_2 \geq n_1$ so that $|a_{n_2}-a|\geq \epsilon$. And in general we can find an $a_{n_{j+1}}$ after choosing an appropriate $a_{n_j}$ so that $|a_{n_{j+1}}-a|\geq \epsilon$. From the construction of such a subsequence, isn't the contradiction that $(a_{n_j})$ does not converge to $a$ already proof that $(a_n)$ should converge to $a$? I am asking because the proof that I've read continues to say that the Bolzano–Weierstrass theorem can be used to get another subsequence from the constructed $(a_{n_j})$ which diverges from $a$, and that the proof ends there. I think I understand it, but I fail to see why it is necessary. Can somebody enlighten me on this?
Here is how I would approach the problem - since $(a_n)$ is a bounded sequence, there exist subsequences $(a_{n_k})$ and $(a_{n_l})$ such that \begin{align}\lim_{k\to\infty} a_{n_k} &= \limsup_{n\to\infty} a_n\\ \lim_{l\to\infty} a_{n_l} &= \liminf_{n\to\infty} a_n. \end{align} (It is a good exercise to prove the above.) By hypothesis, $(a_{n_k})$ and $(a_{n_l})$ have the same limit. Therefore $$\limsup_{n\to\infty} a_n = \liminf_{n\to\infty} a_n = \lim_{n\to\infty} a_n. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Solving $\int_0^{\infty}\,dk\,\exp{(-\delta^2k^2)}\,\frac{J_1(kR)}{k^2}$ I would like to understand if there is a closed formula for this integral: $$\int_0^{\infty}\,dk\,\exp{(-\delta^2k^2)}\,\frac{J_1(kR)}{k^2}$$ where $R,\delta>0$ and $J_1(\cdot)$ is the Bessel function of the first kind of order 1. By using the series expansion of $J_1$ and the Gaussian integral, I end up with this series (with a prefactor $\delta$): $$\sum_{l=0}^{\infty}\frac{(-1)^l}{2^{2l+2}}\,\frac{(l-1)!}{l! (l+1)!}\,\left(\frac{R}{\delta}\right)^{2l+1}$$ Checking with Wolfram, the series does converge, but is it possible to find a nicer solution? Or otherwise another way to solve the integral above?
With the help of Mathematica I got: $$\mathcal{L}\left(\frac{J_1(R\sqrt{x})}{x^{3/2}}\right)=\frac{|R|}{2}\left(1-2\gamma-\frac{4s}{R^2}\left(1-e^{-\frac{R^2}{4s}}\right)+\log\frac{4}{R^2}-\Gamma\left(0,\frac{R^2}{4s}\right)\right)$$ and by expanding the RHS as a series it is not difficult to check it matches your series.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integral of Absolute Value of $\sin(x)$ For the integral $\int |\sin (ax)|\,dx$, it is fairly simple to take the Laplace transform of the absolute value of sine, treating it as a periodic function. $$\mathcal L(|\sin (at)|) = \frac{\int_0^{\pi/a} e^{-st} \sin (at)\,dt} {1 - e^{-\frac{ \pi s} {a}}} = \frac{a (1 + e^{-\frac{ \pi s} {a}})} {(a^2 +s^2)(1 - e^{-\frac{ \pi s} {a}})} $$ Then the integral of the absolute value would simply be $ 1/s $ times the Laplace transform. Splitting into partial fractions we have: $$\mathcal L(\int |\sin (ax)|) = \frac{a (1 + e^{-\frac{ \pi s} {a}})} {s(a^2 +s^2)(1 - e^{-\frac{ \pi s} {a}})} = \frac{(1 + e^{-\frac{ \pi s} {a}})} {as(1 - e^{-\frac{ \pi s} {a}})} - \frac{s (1 + e^{-\frac{ \pi s} {a}})} {a(a^2 +s^2)(1 - e^{-\frac{ \pi s} {a}})} $$ The first term is close to the floor function, when separated: $$ \frac{(1 + e^{-\frac{ \pi s} {a}})} {as(1 - e^{-\frac{ \pi s} {a}})} = \frac{(1 - e^{-\frac{ \pi s} {a}}) + 2e^{-\frac{ \pi s} {a}}} {as(1 - e^{-\frac{ \pi s} {a}})} = \frac{1} {as} + \frac{2e^{-\frac{ \pi s} {a}}} {as(1 - e^{-\frac{ \pi s} {a}})}$$ This leads to $$ \frac{1} {a} + \frac {2\cdot\text{floor}(\frac{ax}{\pi})} {a}$$ Since the second term looks to be $ s/a^2 $ multiplied by the original function, this points to the derivative, i.e. $$-\text{sign}(\sin(ax))\cos (ax)/a,$$ so the total integral is: $$ \frac{1} {a} + \frac {2\cdot \text{floor}(\frac{ax}{\pi})} {a} -\text{sign}(\sin(ax))\cos(ax)/a$$ The Laplace transform method to evaluate an integral seems bulky - is there a better method than this? And what is the cosine version?
Without loss of generality, let us assume $a=\pi$ and compute $$\int_0^X|\sin(\pi x)|dx.$$ The function has period $1$ so that when integrating from $0$ to $X$, there are $\lfloor X\rfloor$ whole periods and a partial one from $\lfloor X\rfloor$ to $X$, i.e. by shifting, from $0$ to $X-\lfloor X\rfloor$. Hence $$\int_0^X|\sin(\pi x)|dx=\lfloor X\rfloor\int_0^1\sin(\pi x)dx+\int_0^{X-\lfloor X\rfloor}\sin(\pi x)dx=\frac1\pi\left(2\lfloor X\rfloor-\cos(\pi x)\Big|_0^{X-\lfloor X\rfloor}\right)=\frac1\pi\left(2\lfloor X\rfloor+1-\cos(\pi(X-\lfloor X\rfloor))\right).$$ For the cosine version, shift the argument by $\dfrac\pi2$.
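Comparing the closed form with direct numerical integration (a Python/scipy sketch, with $a=\pi$ as in this answer):

```python
import math
from scipy.integrate import quad

def F(X):   # closed form for the integral of |sin(pi x)| from 0 to X
    fl = math.floor(X)
    return (2*fl + 1 - math.cos(math.pi * (X - fl))) / math.pi

for X in (0.5, 1.7, 3.0, 7.25):
    num, _ = quad(lambda x: abs(math.sin(math.pi * x)), 0, X, limit=200)
    print(X, num, F(X))   # the two columns agree
```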
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Working with $i$ and finding roots How do you find the cubic polynomial $f(x)$ with integer coefficients, where $1-3i$ and $-2$ are the roots of $f(x)$? It has been too long since I have worked with $i$ and I have been unable to generate it.
The other solutions correctly point out that the third root must be $1+3i$, and therefore the polynomial must be $$(x-(1-3i))(x-(1+3i))(x+2)$$ (note the factor $(x+2)$, since the root is $-2$) or a multiple of it (notice that multiplying through the entire thing by a constant does not change the zeroes). I will add to this one tip for simplifying the expression: Notice that you can regroup and multiply out the factors that involve $i$ as follows: $$(x-(1-3i))(x-(1+3i)) = ((x-1) + 3i)((x-1) - 3i) = (x-1)^2 + 9$$ which might make the problem slightly easier to complete.
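If you want to check the expansion and the roots symbolically, here is a quick sympy sketch:

```python
from sympy import symbols, expand, I

x = symbols('x')
p = expand(((x - 1)**2 + 9) * (x + 2))
print(p)                                           # x**3 + 6*x + 20
print(p.subs(x, 1 - 3*I).expand(), p.subs(x, -2))  # 0, 0: both are roots
```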
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
High School Trigonometry (Laws of cosines and sines) I am preparing for a faculty entrance exam, and this is a question I could not solve (the answer is $0$). I guess they want me to solve it using the laws of sines and cosines: Let $\alpha$, $\beta$ and $\gamma$ be the angles of an arbitrary triangle with opposite sides $a$, $b$ and $c$ respectively. Then $${b - 2a\cos\gamma \over a\sin\gamma} + {c-2b\cos\alpha \over b\sin\alpha} + {a - 2c\cos\beta \over c\sin\beta}$$ is equal to (the answer is zero, but I need the steps).
First let's consider $$\frac{b-2a\cos\gamma}{a\sin\gamma}$$ The numerator looks similar to the RHS of the cosine law, but not quite. It would be nice to see $-2ab\cos\gamma$ instead of $-2a\cos\gamma$. So let's just multiply by $b$. Then the numerator would look like this $$b^2-2ab\cos\gamma$$ Now all that is missing is the $a^2$, so just add it $$a^2+b^2-2ab\cos\gamma$$ Now my expression is equal to $c^2$. Easy, right? It would be nice if math were really like this, but unfortunately, we can't just add and multiply arbitrary constants whenever it seems convenient to do so. We can, however, add zero (e.g. $+\phi-\phi$) and multiply by one (e.g. $\frac{\phi}{\phi}$). $$\frac{b-2a\cos\gamma}{a\sin\gamma} = \frac{b-2a\cos\gamma}{a\sin\gamma}\cdot\frac{b}{b}=\frac{b^2-2ab\cos\gamma}{ab\sin\gamma}=\frac{a^2+b^2-2ab\cos\gamma-a^2}{ab\sin\gamma}=\frac{c^2-a^2}{ab\sin\gamma}$$ We have just added zero ($a^2-a^2$), so next, let's multiply by one ($\frac{c}{c}$), so we can isolate $\frac{c}{\sin\gamma}$. $$\frac{c^2-a^2}{ab\sin\gamma}=\frac{c^2-a^2}{ab\sin\gamma}\cdot\frac{c}{c}=\frac{c^2-a^2}{abc}\cdot\frac{c}{\sin\gamma}$$ Keeping in mind, of course, that $$\Psi=\frac{c}{\sin\gamma}=\frac{a}{\sin\alpha}=\frac{b}{\sin\beta}$$ Then our expression reduces to $$\frac{b-2a\cos\gamma}{a\sin\gamma}+\frac{c-2b\cos\alpha}{b\sin\alpha}+\frac{a-2c\cos\beta}{c\sin\beta}=\frac{c^2-a^2}{abc}\cdot\frac{c}{\sin\gamma}+\frac{a^2-b^2}{abc}\cdot\frac{a}{\sin\alpha}+\frac{b^2-c^2}{abc}\cdot\frac{b}{\sin\beta}$$ $$=\frac{\Psi}{abc}\big((c^2-a^2)+(a^2-b^2)+(b^2-c^2)\big)=0$$
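A random numerical check (Python sketch; the triangle is generated from two random angle cut-points, and the sides are taken proportional to the sines of the opposite angles, as the law of sines allows):

```python
import math, random

random.seed(1)
u, v = sorted(random.uniform(0, math.pi) for _ in range(2))
alpha, beta, gamma = u, v - u, math.pi - v        # three positive angles summing to pi
a, b, c = math.sin(alpha), math.sin(beta), math.sin(gamma)

total = ((b - 2*a*math.cos(gamma)) / (a*math.sin(gamma))
       + (c - 2*b*math.cos(alpha)) / (b*math.sin(alpha))
       + (a - 2*c*math.cos(beta)) / (c*math.sin(beta)))
print(total)   # ~0 up to floating-point error
```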
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 5 }
Are projections surjective? Suppose $A,B$ are non-empty and consider the projection $P: A \times B \to A $ given by $P( (a,b) ) = a $. Show $P$ is surjective. Attempt: Let $y \in A $ be arbitrary. We know that for every $x \in B $, it follows that $P ( ( y,x) ) = y $. So, for every $y \in A $, we can always find an element $(a,b) \in A \times B $, namely $(a,b) = (y,x) $ for any $x \in B$, so that $P((y,x)) = y $. Hence, $P$ is surjective. Is this correct?
Pretty much yes. You need to cite the non-emptiness of $B$ explicitly (otherwise there is no $x \in B$ to choose)... but I would still give full credit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1326947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Visual approach to abstract algebra I'm currently finding abstract algebra to be very fascinating. However, one of the things that pulls me back is that I sometimes find it hard to understand something visually. For example, one could visualise the First Isomorphism Theorem as being a circle with a smaller circle inside (kernel) mapping to another large circle with a dot (zero element), and the "annulus" left when you ignore the kernel is equivalent to the other circle, except for the dot. I have a very amazing book Visual Complex Analysis, and was wondering if there's a similar one for abstract algebra.
For group theory see Nathan Carter's Visual Group Theory and its accompanying software, Group Explorer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
suggest an elementary text in analysis I have just completed my undergraduate course in mathematics, but I do not feel confident in analysis. There is a pile of books in my collection, but I don't know which to choose to help me learn analysis. I have the options of Bartle, Apostol and Rudin, and some other local books. Could you please suggest which book will provide a rigorous course at an elementary level, so that I can understand Rudin well during my master's?
Well, I have quite a collection of analysis books myself. Here are my favorites. "Introduction to Real Analysis" by Bartle and Sherbert is my favorite book on the subject. Everything is well laid out and easy to understand. Unfortunately, the treatment is very elementary in nature: for example, there is no treatment of improper integrals or a thorough treatment of power series. If you are weak in this subject, I suggest you do a thorough study of this book and solve all the exercises. "Mathematical Analysis" by Apostol is an exhaustive book on the subject: every course I have ever done in real analysis is covered in it, and the exercises are very well laid out. This is the book I would pick if I had to brush up my analysis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can ANY 2- or 3-dimensional shape be reverse-engineered to give an equation (formula) for its shape? In other words, given any 2- or 3-dimensional shape that one draws on a graph, can one reverse-engineer it to find a formula for that shape?
It depends. If you do not know the shape, then the answer is simply no, because there is no way to observe everything about the shape in most reasonable models of the world. For example, if a shape is infinitely divisible and you can only observe one point at a time, then even if its boundary is a continuous surface, you cannot determine what it is by finitely many observations (there could be a tiny bump in a small region where you did not make an observation). In the real world quantum mechanics actually implies that you will alter an object simply by observing it, which is even worse. If however you already know a (mathematically) precise description of the shape, then that same description will give you an equivalent equation for the surface.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
System of Equations: any solutions at all? I am looking for any complex number solutions to the system of equations: $$\begin{align} |a|^2+|b|^2+|c|^2&=\frac13 \\ \bar{a}b+a\bar{c}+\bar{b}c&=\frac16 (2+\sqrt{3}i). \end{align}$$ Note I put inequality in the tags as I imagine it is an inequality that shows that this has no solutions (as I suspect is the case). This is connected to my other question... I have found that $(4,1,1)/6$ and $\mu=(2,2+\sqrt{3}i,2-\sqrt{3}i)/6$ are square roots of $(2,1,1)/4$ in $(\mathbb{C}\mathbb{Z}_3,\star)$ but am trying to understand why $\mu$ is not positive in the C*-algebra when, for example, $(14,-6+5i,-6-5i)$ is.
By Cauchy-Schwarz, we must have$$\left(|a|^2+|b|^2+|c|^2 \right) \left(|\overline{c}|^2+|\overline{a}|^2+|\overline{b}|^2 \right) \ge \left| a\overline c + b\overline a + c\overline b\right|^2$$ $$\implies \frac19 \ge \frac1{36}|2+\sqrt3 \; i|^2 = \frac7{4\cdot9}$$ which is obviously not possible...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
System of equations $x + xy + y = 11$ and $yx^2 + xy^2 = 30$ I have a problem solving this one. What is the total number of solutions of the system of equations? \begin{cases} x + xy + y = 11 \\ x^2y + y^2x = 30 \end{cases} I have tried to find solutions, but I always end up with a fourth-degree polynomial from which I do not know how to extract simple $x$'s and $y$'s. I know the task only asks for the total number, but I would also like to know what the solutions are. This is adjusted for high-school mathematics level.
Hint: Use the substitution $a=x+y$ and $b=xy$. Thus your two equations become $$a+b=11, \qquad ab=30.$$ Can you solve the above equations, and from that solve for $x$ and $y$?
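If you want to check the count at the end, sympy will happily solve the original system directly (a quick sketch):

```python
from sympy import symbols, solve

x, y = symbols('x y')
sols = solve([x + x*y + y - 11, x**2*y + x*y**2 - 30], [x, y])
print(len(sols), sols)   # 4 solutions: (1, 5), (5, 1), (2, 3), (3, 2)
```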
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Does there exist a connected metric space, with more than one point and without any isolated point, in which at least one open ball is countable?
Assume a ball $B(x_0,r)\subseteq X$ is countable, $x_0\in X$, $r>0$. As $X$ is not a one-point space, there exists $x_1\ne x_0$, and we may assume $r\le d(x_0,x_1)$. Then by the pigeonhole principle (the interval $(0,r)$ is uncountable while the ball is countable) there exists $0<\rho <r$ such that $d(x_0,x)\ne \rho$ for all $x\in B(x_0,r)$. By definition of $B(x_0,r)$, in fact $d(x_0,x)\ne\rho$ for all $x\in X$. Then the sets $\{\,x\in X:d(x_0,x)<\rho\,\}$ and $\{\,x\in X:d(x_0,x)>\rho\,\}$ are open, disjoint, cover $X$ and are not empty (as witnessed by $x_0$ and $x_1$, respectively). Hence $X$ is not connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Understanding a matrix bound/inequality I came across the following statements: for a positive matrix $A$ that is bounded, $0 \leq A \leq I,$ where $I$ is the identity matrix and a statement like $Y \leq X$ means that $X-Y$ is positive semidefinite, the following inequality is true: $$A(I-A)A \leq \frac{4}{27} I \tag{1}$$ The explanation as to why the inequality is true follows from noticing that $$\max_{0\leq a \leq 1} a^2(1-a) = \frac{4}{27} \tag{2}$$ This is my question: why does (2) "explain" or justify the inequality (1)? Is there a theorem, or perhaps a reference in a book one can recommend, to further understand this logic? In other words, I am seeking an explanation of why (2) is sufficient to show (1).
If $A$ is positive semidefinite, $A$ is unitarily similar to a diagonal matrix with nonnegative diagonal entries ($0\leq A\leq I$ implying that these diagonal entries are in the interval $[0,1]$). So $A=UDU^*$, where $D$ is this diagonal matrix and $U$ is unitary. Then $$ A(I-A)A=UDU^*(I-UDU^*)UDU^*=U[D(I-D)D]U^*. $$ Note that $D(I-D)D$ is a diagonal matrix with the entries $d^2(1-d)$ on the diagonal, where $d$ is the corresponding diagonal entry of $D$. Now apply (2).
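For completeness, (2) itself is a one-variable calculus check: $\frac{d}{da}\,a^2(1-a)=2a-3a^2=a(2-3a)$ vanishes on $(0,1)$ only at $a=\frac23$, where $$\left(\tfrac23\right)^2\left(1-\tfrac23\right)=\tfrac{4}{27},$$ while the endpoints $a=0,1$ give $0$. So $d^2(1-d)\le\frac{4}{27}$ for every diagonal entry $d\in[0,1]$, and conjugation by the unitary $U$ preserves the semidefinite ordering, which is exactly (1).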
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Disjoint refinement of a set of sets Say I have some set $F$ of sets. This is obviously a cover of $\cup F$. Is there a general algorithm that makes "this" cover disjoint? That is, a set $F'$ of pairwise disjoint sets with the properties $\cup F = \cup F'$ and $(\forall A \in F')(\exists B \in F)(A \subset B)$? This question covers the countable case.
I can tell you how to prove that such a refinement exists. It's not entirely elementary, and it probably doesn't count as an "algorithm". I don't know how much set theory you know - you can find the required background for what's below by searching Wikipedia for "ordinal", "Well ordering theorem", and "transfinite recursion" (if you don't find that try transfinite induction). Ok. The well ordering theorem implies that any set is bijective with some ordinal. So there exists an ordinal $\alpha$ and an "enumeration" of $F$, so that we can write $$F=\{S_\beta\,:\,\beta<\alpha\}.$$ Here $\beta<\alpha$ refers to ordinals $\beta$. Now the magic of transfinite recursion shows that there exists a family of sets $S'_\beta$ (for $\beta<\alpha$) such that for every $\beta<\alpha$ we have $$S'_\beta=S_\beta\setminus\bigcup_{\gamma<\beta}S_\gamma.$$ Clearly $S'_\beta\subset S_\beta$, clearly the $S'_\beta$ are pairwise disjoint, and it's easy to check by transfinite induction that $$\bigcup_{\beta<\alpha}S'_\beta=\bigcup_{\beta<\alpha}S_\beta.$$(For that last bit: Say $x\in\bigcup_{\beta<\alpha}S_\beta$. Let $\beta$ be the smallest ordinal with $x\in S_\beta$. Then $x\in S'_\beta$.) This could also be done by Zorn's Lemma. But, once you get past the machinery of ordinals and transfinite recursion, the proof above is much nicer and more intuitive (imo): It's just like the countable case! (Except different, of course...)
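For a finite family the same construction is directly computable; here is a minimal Python sketch (the function name is mine, not from any library), which subtracts from each set everything already covered:

```python
def disjoint_refinement(family):
    """Return pairwise disjoint sets with the same union as `family`,
    each contained in some member of `family`
    (S'_n = S_n minus the union of all earlier sets)."""
    seen = set()
    refined = []
    for s in family:
        piece = set(s) - seen
        if piece:                 # drop empty pieces
            refined.append(piece)
        seen.update(s)
    return refined

# Example: an overlapping cover of {1, ..., 5}
print(disjoint_refinement([{1, 2, 3}, {2, 3, 4}, {4, 5}]))
# [{1, 2, 3}, {4}, {5}]
```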
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Sums $a_k=[\frac{2+(-1)^k}{3^k}, (\frac{1}{k}-\frac{1}{k+2}), \frac{1}{4k^2-1},\sum_{l=0}^k{k \choose l}\frac{1}{2^{k+l}}]$ Determine the sums of the following series. 1: $\sum_{k=0}^\infty \frac{2+(-1)^k}{3^k}$ 2: $\sum_{k=0}^\infty (\frac{1}{k}-\frac{1}{k+2})$ 3: $\sum_{k=0}^\infty \frac{1}{4k^2-1}$ 4: $\sum_{k=0}^\infty \sum_{l=0}^k{k \choose l}\frac{1}{2^{k+l}}$ I have always been bad with infinite series, so I decided to practice them intensively for the next couple of days. I found an exercise in a textbook and am having trouble with some of the parts. Here's what I was thinking of doing. 1: I thought about splitting this into two sums, $\sum_{k=0}^\infty \frac{2}{3^k}+\sum_{k=0}^\infty \frac{(-1)^k}{3^k}$, which should be okay since they both converge, right? Then I can see the geometric series there, giving me $3+\frac{3}{4}=3.75$. Is that correct? 2: I think that's what's called a telescoping series? So I tried writing down some of the first terms to see where things start canceling out: $\frac{1}{1}-\frac{1}{3}+\frac{1}{2}-\frac{1 }{ 4}+\frac{ 1}{ 3}-\frac{ 1}{ 5}+\frac{1 }{ 4}-\frac{1 }{6 } + ...$. Since $\frac{1}{k}$ converges to zero for $k\to \infty$, the sum of this infinite series is $\frac{3}{2}$? About 3 and 4: I'm pretty much lost there. I can't seem to find an approach to compute the sum. Any tips?
$(1)$ Looks good to me. $(2)$ You are correct that this is a telescoping series and your answer is correct (but the index should start at $k=1$, not $k=0$). To see the limit rigorously, work with partial sums: $$\sum_{k=1}^N \left(\frac{1}{k}-\frac{1}{k+2}\right) = 1+\frac{1}{2}-\frac{1}{N+1}-\frac{1}{N+2} \;\xrightarrow{N\to\infty}\; \frac{3}{2}.$$ (Splitting into $\sum \frac1k - \sum\frac{1}{k+2}$ is not legitimate here, since each piece diverges on its own.) $(3)$ Write the fraction $\frac{1}{4k^2-1}$ as $\frac{1}{(2k-1)(2k+1)}$, then use partial fraction decomposition to find constants $A,B$ where $\frac{1}{(2k-1)(2k+1)} = \frac{A}{2k-1}+ \frac{B}{2k+1}$. You may again need to invoke an argument about telescoping series. For $(4)$, is that really the binomial coefficient $\binom{k}{l} = \frac{k!}{l!(k-l)!}$? I did not encounter that in my calc class unit on infinite series. However, start writing out a few terms of the series to get a handle on what the general behavior is. That is, fix $k=0$ and find a value, then set $k=1$ to get a few more values, etc.
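For what it's worth, $(4)$ closes neatly with the binomial theorem (a step beyond the answer above): $$\sum_{l=0}^k \binom{k}{l}\frac{1}{2^{k+l}}=\frac{1}{2^k}\left(1+\frac12\right)^k=\left(\frac34\right)^k,\qquad \sum_{k=0}^\infty\left(\frac34\right)^k=4.$$ And for $(3)$, the decomposition gives $A=\frac12$, $B=-\frac12$, so $\sum_{k=1}^\infty\frac{1}{4k^2-1}$ telescopes to $\frac12$; note that the $k=0$ term of the sum as written contributes $\frac{1}{-1}=-1$, making the total $-\frac12$.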
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Galois extension and Galois group Let $x$ be a real root of the polynomial $X^3-X+1$, and $y,\overline{y}$ the two other roots in $\mathbb{C}$, and let $K$ be the cubic field $\mathbb{Q}[x]$. Show that $y+\overline{y}=-x$, $y\overline{y}=-1/x$, and $[(y-x)(\overline{y}-x)(y-\overline{y})]^2=-23$. Show that $L=K[\sqrt{-23}]=\mathbb{Q}[x,y,\overline{y}]$ and that this field is a Galois extension of degree $6$ of $\mathbb{Q}$. Determine its Galois group $G$ and the subfields of $L$ which are Galois over $\mathbb{Q}$. I don't know how to solve this. I need help.
Hint: Every extension of $\mathbb Q$ that contains $\sqrt{-23}$ has even degree, since it can be written as a tower $L / \mathbb Q (\sqrt{-23}) / \mathbb Q$. By the tower law, the degree of this field over $\mathbb Q$ is $[L : \mathbb Q(\sqrt{-23})]\, [\mathbb Q (\sqrt{-23}) : \mathbb Q]$. Since $[\mathbb Q(\sqrt{-23}) : \mathbb Q] = 2$, the degree of the extension is even.
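For the identities in the first part (which the hint does not address), Vieta's formulas on $X^3-X+1$ give $x+y+\overline{y}=0$ and $xy\overline{y}=-1$, hence $y+\overline{y}=-x$ and $y\overline{y}=-1/x$. Moreover, $[(y-x)(\overline{y}-x)(y-\overline{y})]^2$ is the discriminant of the cubic; for $X^3+pX+q$ the discriminant is $-4p^3-27q^2$, here $-4(-1)^3-27(1)^2=4-27=-23$.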
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The fundamental vector fields of a principal bundle are vertical. Let $p:P\to M$ be a principal $G$-bundle. To each $A$ in the Lie algebra of $G$ corresponds a fundamental vector field $A^*$ on $M$ defined by $$A^*_u=\frac{d}{dt}|_{t=0} u(exp(tA))$$ How can we see that $A_u^*$ is a vertical vector? Idea: We need to verify that $p_*(A_u^*)=0$. None of the manipulations that I apply to the left hand side help. Supposedly, the statement is obvious from the fact that the action of $G$ takes each fiber to itself.
First of all, a fundamental vector field is a vector field on $P$, not on $M$ as you wrote in your question. As you say, the action of $G$ on $P$ preserves the fibers, i.e. $p(u) = p(u g)$ for all $u \in P , g \in G$. If $A$ is in the Lie algebra of $G$ then $exp(tA)$ is a one-parameter group. For $u \in P$ we have a curve $\gamma(t):= u.exp(tA)$ in $P$. By the assumption the curve $\gamma$ is contained in a fiber: $$p(\gamma(t)) = p(\gamma(0))$$ and taking the derivative at $t=0$ you get $$p_*(A_u^*) = 0,$$ which shows that indeed $A_u^*$ is vertical, i.e. tangent to a fiber.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Maple Help format output collect terms with same coefficients and multiple variables I realize this may have been asked before, however I have no idea what to search for specifically, since the topic is so generic it returns so many results that I cannot find what I am looking for. eqn3:=3.9024*x+3.9024*y; How can I get Maple to output as (3.9024)(x+y) instead of (3.9024)(x)+(3.9024)(y)? I have tried factor, collect, simplify, etc. I am using Maple 16
Note that automatic simplification distributes numeric coefficients, e.g. if you enter > expr:= 3.9024*(x+y); the result will be $$ expr := 3.9024 x + 3.9024 y $$ To defeat this, one way is to use the `` function. > E:= map(t -> (t = ``(t)), indets(expr,float)): > collect(subs(E, expr),``); $$ (x + y) (3.9024) $$ Unfortunately I don't see how to put the $(3.9024)$ before the $(x+y)$. I thought it would be possible with sort, but I can't seem to find the right options. Well, here's a rather kludgy work-around. > restart; expr:= 3.9024*(x+y); E:= map(t -> t = ``(t), indets(expr,float)); subs(x=w-y,E,expr); sort(%,w); subs(w=x+y,%); $$ (3.9024)(x+y) $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1327952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing $\sum_{n\geq 0}n\frac{1}{4^n}$ Can I compute the sum $$ \sum_{n\geq 0}n\frac{1}{4^n} $$ by using some trick? My first thought was a geometric series.
The idea is to take the derivative of the geometric series. For $|x|<1$, $$\frac1{(1-x)^2}=\sum_{n=1}^\infty nx^{n-1}$$ So, $$\frac x{(1-x)^2}=\sum_{n=1}^\infty nx^n.$$
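Evaluating at $x=\frac14$ (the value the question needs; the $n=0$ term is zero, so starting the sum at $n=0$ changes nothing): $$\sum_{n\ge 0}\frac{n}{4^n}=\frac{1/4}{(1-1/4)^2}=\frac{1/4}{9/16}=\frac{4}{9}.$$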
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Characterising a rotation matrix We have the operator $T: \mathbb{R}^3 \to \mathbb{R}^3$ given by $T(\vec{x}) = A\vec{x}$ $$A = \frac{1}{3}\begin{pmatrix} -2 & 1 & 1 \\ 1 & -2 & 1 \\ 1 & 1 & -2 \end{pmatrix}$$ I have orthogonally diagonalized $A$ and found that $$Q = \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{-1}{\sqrt{2}} & \frac{-1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}} & \frac{-1}{\sqrt{6}} \\ \frac{1}{\sqrt{3}} & 0 & \frac{2}{\sqrt{6}} \end{pmatrix}$$ I now have to characterize the matrix by seeing what it does to the standard basis. The problem is, I find it very difficult to visualize such matters. I somehow see that the y-axis is rotated by 45 degrees counterclockwise (in one plane), but I'm rather uncertain about the other two axes, since they are rotations in 3D rather than simple 2D ones. As a note, this is not a homework problem, but a practice problem for my exam. I have not done anything in particular with Rodrigues' formula or with the trace. EDIT: The operator is indeed from $\mathbb{R}^3 \to \mathbb{R}^3$.
After orthogonally diagonalizing your matrix, you should find that $A = QDQ^T$, where $Q$ is as you've given it and $$ D = \pmatrix{ 0&0&0\\ 0&-1&0\\ 0&0&-1} $$ We can easily see what happens to a vector in the eigenvector basis. That is, if we define $$ (a,b,c)_B = Q \pmatrix{a\\b\\c} $$ then you can see that $$ A(a,b,c)_B = (0,-b,-c)_B $$ That is: with respect to this new basis, we are projecting down to the "$yz$ plane" (with respect to the new basis) and then rotating in that plane by $180^\circ$. If you want to see what $A$ does to a vector in the standard basis, then there's no point in diagonalizing in the first place.
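As a quick sanity check on $D$ (not spelled out above): each row of $A$ sums to $0$, so $(1,1,1)^T$ is an eigenvector with eigenvalue $0$, and $A(1,-1,0)^T=\frac13(-3,3,0)^T=-(1,-1,0)^T$ exhibits the eigenvalue $-1$; since $\operatorname{tr}A=-2$, the spectrum is $\{0,-1,-1\}$.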
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Every subset has first and last element -> set is finite Let $X$ be a partially ordered set such that every nonempty subset of $X$ has a first and a last element. Show that $X$ is a finite set. And what if every subset only has a first element? Well, I proved it is a linear order, but now I'm stuck. Anyone ready to clear things up for me? Thanks a lot!
One may also give a more "direct" proof. Assume $X$ is infinite. We shall construct a strictly increasing/decreasing sequence, which will contradict the existence of a last/first element. Start with some element $x_1\in X$. At least one of the sets $\left\{ x>x_1 \right\}, \left\{ x<x_1 \right\}$ is infinite. Assume without loss of generality that $\left\{ x>x_1 \right\}$ is infinite. By assumption, this set (as a subset of $X$) has a first element $x_2$. The set $\left\{ x>x_2 \right\}=\left\{ x>x_1 \right\}\setminus \left\{ x_2 \right\}$ is also infinite, and it too contains a first element $x_3$. In this fashion we obtain a nested sequence of infinite sets $\left\{ x>x_n \right\} \supsetneq \left\{ x>x_{n+1} \right\}$ and a strictly increasing sequence $x_1<x_2<x_3<\cdots$. The set $\left\{ x_n : n\in\mathbb{N} \right\}$ is then a nonempty subset of $X$ without a last element. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
A bit confused about definition of set of mappings in Herstein's Algebra I have just started working with the book Topics in Algebra by I.N. Herstein, and I am having a bit of trouble understanding a definition. It is: Definition: If $S$ is a nonempty set then $A(S)$ is the set of all one-to-one mappings of $S$ onto itself. I think my problem is that I am just having trouble thinking of an example of what this is representing. I mean, let's say we were considering $S=\{1,2\}$ for example; does $A(S)$ contain some functions, say $f$ and $g$, for which $f:S \to S$ is bijective and $g$ is as well? My apologies if my question is hard to understand; it is possible my intuition is completely off, so I would really appreciate any explanations/insight about this. I am also curious if there is a more common name for this set $A(S)$. Thanks all!
Yes, by Herstein's definition, $A(X)=\{\text{bijective functions }f:X\to X\}$. Just for emphasis, since this seems to be something you're confused on, the elements of $A(X)$ are precisely the bijective functions from $X$ to $X$. There's nothing else in $A(X)$ other than that, and every such bijective function is included. The operation of the group $A(X)$ is composition of functions: $$\text{for any }f,g\in A(X),\;g\circ f\;\text{ is defined by the rule}\quad (g\circ f)(x)=g(f(x))$$ The more common notation (and name) for this set is $S_X$, the symmetric group on the set $X$. Here's the relevant Wikipedia page.
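To make the asker's own example concrete: for $S=\{1,2\}$, $A(S)$ has exactly two elements, the identity map $\mathrm{id}$ (with $\mathrm{id}(1)=1$, $\mathrm{id}(2)=2$) and the swap $\sigma$ (with $\sigma(1)=2$, $\sigma(2)=1$). Under composition, $\sigma\circ\sigma=\mathrm{id}$, so $A(S)$ is a group isomorphic to $\mathbb{Z}/2\mathbb{Z}$; in the standard notation, this is the symmetric group $S_2$.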
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
evaluate $\int_0^{2\pi} \frac{1}{\cos x + \sin x +2}\, dx $ This is supposed to be a very easy integral, however I cannot get it to work out. Evaluate: $$\int_0^{2\pi} \frac{1}{\cos x + \sin x +2}\, dx$$ What I did is: $$\int_{0}^{2\pi}\frac{dx}{\cos x + \sin x +2} = \int_{0}^{2\pi} \frac{dx}{\left ( \frac{e^{ix}-e^{-ix}}{2i}+ \frac{e^{ix}+e^{-ix}}{2} +2\right )}= \oint_{|z|=1} \frac{dz}{iz \left ( \frac{z-z^{-1}}{2i} + \frac{z+z^{-1}}{2}+2 \right )}$$ Although the transformation is correct, I cannot go on from the last equation. After the calculation, $i$ still appears in the denominator and the expression gets messy. How can I proceed?
Via corindo's rearrangement: we have $$ I = \int_0^{2 \pi} \frac{1}{\sqrt{2}\cos(x - \pi/4) + 2}\,dx =\\ \frac 1{\sqrt{2}} \int_0^{2 \pi} \frac{1}{\cos(x - \pi/4) + \sqrt{2}}\,dx = \\ \frac 1{\sqrt{2}} \int_0^{2 \pi} \frac{1}{\cos(x) + \sqrt{2}}\,dx =\\ \frac 1{\sqrt{2}} \oint_{|z| = 1} \frac{1}{iz((z + z^{-1})/2 + \sqrt{2})}\,dz = \\ \frac {\sqrt{2}}{i} \oint_{|z| = 1} \frac{1}{(z^2 + 2\sqrt{2}z + 1)}\,dz $$ (the shift by $\pi/4$ may be dropped in the third step because the integrand is $2\pi$-periodic). Note the integrand has one pole inside the unit circle, namely $$ z_{0} = 1-\sqrt{2} $$
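Finishing the computation (the answer stops at locating the pole): the other root of $z^2+2\sqrt2 z+1$ is $z_1=-1-\sqrt2$, outside the unit circle, so $$\operatorname*{Res}_{z=z_0}\frac{1}{z^2+2\sqrt{2}z+1}=\frac{1}{z_0-z_1}=\frac12 \quad\Longrightarrow\quad I=\frac{\sqrt2}{i}\cdot 2\pi i\cdot\frac12=\sqrt2\,\pi,$$ which agrees with the standard formula $\int_0^{2\pi}\frac{dx}{a+b\cos x+c\sin x}=\frac{2\pi}{\sqrt{a^2-b^2-c^2}}$ (valid for $a>\sqrt{b^2+c^2}$) at $a=2$, $b=c=1$.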
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Fundamental polygon square $abab$ What is the most convenient description of the space with fundamental polygon a square, with all vertices identified, glued by $abab$? If we were to identify only opposite vertices, we would get $\mathbb{R}P^2$. So I believe one description is $\mathbb{R}P^2$ with two points identified, which is homotopy equivalent to $\mathbb{R}P^2 \vee S^1$. Is this the best we can do, or is it something else in disguise? This answer suggests that it should be a surface, but I'm having a hard time seeing how it is. There is no neighborhood of the glued point homeomorphic to $\mathbb{R}^2$, is there? If so, how?
I'm putting what I have written in the comments together in this answer: Claim: The polygon $abab$ with all the vertices identified is homotopy equivalent to $\Bbb RP^2 \vee S^1$. First consider the polygon $abab$ with diagonally opposite pairs of vertices identified, which is homeomorphic to $\Bbb RP^2$. Since all of the vertices must be identified, attach a $1$-cell to the resulting space along the disjoint union of the unidentified vertices; null-homotoping the attaching map along the surface of $\Bbb RP^2$ gives you the desired homotopy type. You can see that this is not a surface by looking at the wedge point. Any neighborhood of the wedge point looks like an open disk wedged with an open interval $(0, 1)$, which is certainly not homeomorphic to $\Bbb R^2$. As for the polygon in the MO question, the standard convention for identification on the polygon $abab$ is that you identify diagonally opposite pairs of vertices, so that's just the usual $\Bbb RP^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Slow decreasing function that exhibits asymptotic behaviour I am currently doing some work on modelling the effects of treated-net usage on mosquito populations. Nets do not retain their maximum efficacy forever: they lose their chemical efficacy after about three years, and all that is left is the physical protection offered by the net, which I estimate to be $20\%$ of the original efficacy. I am trying to model this behaviour. I need a continuous function over the interval $[0,1095]$ which decreases slowly from $1$ at $x=0$ and approaches $0.2$ as $x \to 1095$. I tried an ellipse of the form $y=\sqrt{1-\dfrac{x^2}{(1095)^2}}$, but I realized the function is equal to zero when $x=1095$, which is not what I want. Any help will be appreciated.
How about something like $$ y=1-0.8\,\frac{e^{ax}-1}{e^{1095a}-1}? $$ The sign of $a$ determines concavity or convexity; $a=-0.005$ gives a nice graph.
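A quick numerical sanity check of this candidate (just a sketch; $a=-0.005$ is the suggested value, and the endpoint values $y(0)=1$, $y(1095)=0.2$ hold exactly by construction):

```python
import numpy as np

a = -0.005   # shape parameter: its sign controls concavity vs. convexity
T = 1095.0   # lifetime of the net, in days

def efficacy(x):
    return 1.0 - 0.8 * (np.exp(a * x) - 1.0) / (np.exp(a * T) - 1.0)

x = np.linspace(0.0, T, 5)
print(np.round(efficacy(x), 3))   # monotonically decreasing from 1.0 to 0.2
```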
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Regular quadratic space - isotropic vector I am currently trying to solve the following exercise: Show that every regular quadratic space $E$ of finite dimension that contains at least one isotropic vector has a basis consisting only of isotropic vectors. I think my problem with this exercise is that I don't really have a "feeling" for the definitions. Quadratic form: A quadratic form $q$ on $E$ is a map $E \to K$ such that there exists a bilinear form $f : E\times E \to K$ with $q(x)=f(x,x)$ for all $x\in E$. $\ker(q) = \{x\in E\mid\forall y\in E: f(x,y)=0\}$. Regular quadratic space: $(E,q)$ is called regular if $\ker(q) = \{0\}$. Isotropic: $x$ is called isotropic if $f(x,x)=q(x)=0$ for $x\not=0$ in $E$. Is this the right way to work with these definitions? I don't really see how to get from one isotropic vector to a whole basis, or what information having one isotropic vector in the space gives me. I would be very happy for a hint to get started! All the best, Luca
You don't need the full Witt decomposition, just split off a hyperbolic plane, so $$V = W\oplus \langle u, u' \rangle$$ with $q(u)=q(u')=0$ and $f(u,u')=1$. Take any basis $w_1$, $\ldots$, $w_m$ of the subspace $W$. We get the basis of $V$ \begin{eqnarray} &w_i& + u - \frac{q(w_i)}{2}u' \text{ for } 1 \le i \le m, \ \text{and}\ u, u' \end{eqnarray} consisting of isotropic vectors. (Note that the vector $w_i - \frac{q(w_i)}{2}(u+u')$ would not work: its value under $q$ is $q(w_i)+\frac{q(w_i)^2}{2}$, which is nonzero in general.)
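Checking isotropy of these vectors (using $W\perp\langle u,u'\rangle$, $q(u)=q(u')=0$ and $f(u,u')=1$): $$q\!\left(w_i+u-\tfrac{q(w_i)}{2}u'\right)=q(w_i)+2f\!\left(u,-\tfrac{q(w_i)}{2}u'\right)=q(w_i)-q(w_i)=0.$$ Moreover, the change of basis from $w_1,\ldots,w_m,u,u'$ to these vectors is triangular with $1$'s on the diagonal, so they indeed form a basis of $V$.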
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
When can I reverse the logical operators? I heard it said that the following is a logical equivalence: $$\neg (p \vee q) = \neg p \land \neg q$$ So every time you have a negation operator in front, can you "distribute" it, altering the operator inside? And for $\neg(p \vee \neg q)$, is the result then $\neg p \land q$? Can anyone give some examples?
Example, as requested... Suppose I want to know what is the domain of the function $$ \frac{1}{x^2-4x+3} $$ That is, I want to find all the $x$ where that denominator is not zero. So I factor the denominator. $x^2-4x+3=(x-1)(x-3)$. When is it zero? Either $x=1$ or $x=3$. So when is it nonzero? $$ \neg\;\big(x=1\;\lor\;x=3\big) $$ That is, (by the principle explained in Henning's answer) $$ x\neq 1\;\land \;x\neq 3 $$ In order to belong to the domain, $x$ must be different from $1$ and different from $3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
The rows of an orthogonal matrix form an orthonormal basis A matrix $A \in \operatorname{Mat}(n \times n, \Bbb R)$ is said to be orthogonal if its columns are orthonormal relative to the dot product on $\Bbb R^n$. * *By considering $A^TA$, show that $A$ is an orthogonal matrix if and only if $A^T = A^{−1}$. *Deduce that the rows of any $n × n$ orthogonal matrix $A$ form an orthonormal basis for the space of $n$-component row vectors over $\Bbb R$. I am trying to do part 2. What I tried is that since we figured out that $A^T = A^{-1}$, and the inverse of $A$ is the left product of elementary matrices to $A$, the row space of $A^TA =$ row space of $A$. Also, since $A^TA = I$, a basis of the row space of $A$ is a basis of the row space of $I$. Since the columns of $I$ are the standard basis of $\Bbb R^n$ $(e_1, ..., e_n)$, and are orthonormal to each other, they form an orthonormal basis of $\Bbb R^n$. Something tells me this proof is wrong. Could someone give me some guidance?
We are given this definition: matrix $A \in \mathbb{R}^{n \times n}$ is "orthogonal" if and only if its columns are orthonormal. We want to show that the rows of $A$ form an orthonormal basis for $\mathbb{R}^n$ (technically an orthonormal basis for $V=$"the space of $n$-component row vectors over $\mathbb{R}$", but there is clearly an isomorphism $T:V \to \mathbb{R}^n$ defined by $T(v)=v^T \in \mathbb{R}^n$ such that both $T$ and $T^{-1}$ preserve orthonormality). We can equivalently show that the columns of $A^T$ are orthonormal, noting that $n$ orthonormal vectors form a basis for $\mathbb{R}^n$. By the definition given, this means we must show that $A^T$ is an orthogonal matrix. David Wheeler's answer illustrates that $A^T$ is an orthogonal matrix iff $(A^T)^{-1}=(A^T)^T$, and egreg's answer completes the proof. So $A\in \mathbb{R}^{n \times n}$ is orthogonal iff its rows are orthonormal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
How big is the chance that an arbitrary man is taller than an arbitrary woman? I'm a first year mathematics student, and I'm having trouble with computing the following: Assume that in a country the height $X$ of men is normally distributed, with $\mu_X = 180$ (the expected value) and $\sigma_X = 6$ (the standard deviation), and the same for the height $Y$ of women, with $\mu_Y = 173$ and $\sigma_Y = 5$. How big is the chance that an arbitrary man is taller than an arbitrary woman? I don't know how to solve this question. I'm able to compute probabilities like $\mathbb{P}(X \geqslant k)$ or $\mathbb{P}(X \leqslant k)$ for some $k \in \mathbb{R}$ using the formula $\mathbb{P}(X \leqslant k) = \Phi(\frac{k + \frac{1}{2} - \mu}{\sigma})$. I think I have to compute in some way $\mathbb{P}(X > Y)$, but I don't see how to do that. Could you please help me with this? Thanks in advance!
The probability density function of a $N(\mu,\sigma)$ random variable $X$ is given by: $$ f_X(x) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{(x-\mu)^2}{2\sigma^2}}$$ hence our probability is given by the integral: $$ \mathbb{P}[X\geq Y]=\frac{1}{60\pi}\iint_{x>y}\exp\left(-\frac{(x-180)^2}{72}-\frac{(y-173)^2}{50}\right)\,dx\,dy\approx\color{red}{81.49\%}.$$
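Equivalently, one can avoid the double integral (the standard shortcut, assuming the two heights are independent): $X-Y$ is normal with mean $180-173=7$ and variance $6^2+5^2=61$, so $$\mathbb{P}[X\ge Y]=\mathbb{P}[X-Y\ge 0]=\Phi\!\left(\frac{7}{\sqrt{61}}\right)\approx\Phi(0.896)\approx 81.49\%.$$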
{ "language": "en", "url": "https://math.stackexchange.com/questions/1328959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If $f:\mathbb{R}\rightarrow\mathbb{R}$, $f\in C^{\infty}(\mathbb{R})$ and $f(0)=0$ then $\frac{f(x)}{x}\in C^{\infty}(\mathbb{R})$ If $f:\mathbb{R}\rightarrow\mathbb{R}$, $f\in C^{\infty}(\mathbb{R})$ and $f(0)=0$ then $\frac{f(x)}{x}\in C^{\infty}(\mathbb{R})$. The following is what I did: the Maclaurin series for $f(x)$ is $$f(x)=f(0)+\sum\limits_{n=1}^{\infty}\frac{f^{(n)}(0)}{n!}x^n\stackrel{f(0)=0}{=}\sum\limits_{n=1}^{\infty}\frac{f^{(n)}(0)}{n!}x^n$$ whence, supposing $x\neq 0$, $$g(x)\stackrel{\text{def}}{=}\frac{f(x)}{x}=\sum\limits_{n=1}^{\infty}\frac{f^{(n)}(0)}{n!}x^{n-1}$$ so $g(x)\in C^{\infty}(\mathbb{R}\setminus\{0\})$. We can prove that $$\lim_{x\rightarrow 0}g^{(k)}(x)=\frac{f^{(k+1)}(0)}{k+1}=g^{(k)}(0)\quad\forall k\in \mathbb{N}$$ Then $g(x)\in C^{\infty}(\mathbb{R})$. Is this right?
We can write $f(x) = \int_0^x f'(t)\,dt = x\int_0^1 f'(xs)\,ds.$ So $$g(x) = \int_0^1f'(xs)\,ds.$$ for all $x.$ You can differentiate all day through the integral sign, showing $g\in C^\infty.$
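Spelling out the "differentiate all day" step (a sketch): by differentiation under the integral sign, justified because $f'$ is smooth and the domain $[0,1]$ is compact, $$g^{(k)}(x)=\int_0^1 s^k f^{(k+1)}(xs)\,ds \quad\text{for all }k\ge 0,$$ and each right-hand side is continuous in $x$, so $g\in C^{\infty}(\mathbb{R})$.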
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rotation of a line by a matrix Give the equation of the line $\ell'$ that is obtained by rotating $\ell$: $x+2y=5$ by an angle of $\theta=\frac{1}{2}\pi$ about the center point $O(0,0)$. The rotation matrix is $\left.\begin{pmatrix}\cos \alpha & -\sin \alpha \\ \sin \alpha & \cos \alpha\end{pmatrix}\right|_{\alpha=\frac{1}{2}\pi}=\begin{pmatrix}0 & -1 \\ 1&0\end{pmatrix}$. Two points on $\ell$ are $(1,2)$ and $(3,1)$, whose images are respectively $\begin{pmatrix}0 & -1 \\ 1&0\end{pmatrix}\begin{pmatrix}1\\2\end{pmatrix}=\begin{pmatrix}-2\\1\end{pmatrix}$ and $\begin{pmatrix}0 & -1 \\ 1&0\end{pmatrix}\begin{pmatrix}3\\1\end{pmatrix}=\begin{pmatrix}-1\\3\end{pmatrix}$. Hence, the gradient of $\ell'$ is $\dfrac{\Delta y}{\Delta x}=2$. Therefore, the equation of $\ell'$ is $y=2(x+2)+1\iff \boxed{y=2x+5}$. I have two questions: my book says the answer should be $y=2x+6$. If I am wrong, what did I do wrong? Second, can anyone suggest a more elegant or faster method for this problem?
I notice that the point $(1,2)$ is on the original line. After a rotation by $\pi/2$, this point becomes $(-2,1)$. I also notice that the original slope was $-1/2$. A rotation by $\pi/2$ has the effect of finding a line perpendicular to our starting line, so it will have slope $2$. The equation of a line with slope $2$ passing through $(-2,1)$ is $y - 1 = 2(x + 2)$, or rather $y = 2x + 5$.
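Addressing the request for a faster method (one possible approach): a point $(x,y)$ lies on $\ell'$ exactly when its preimage under the rotation lies on $\ell$. The inverse rotation, by $-\frac12\pi$, sends $(x,y)\mapsto(y,-x)$, so substituting into $x+2y=5$ gives $$y+2(-x)=5 \iff y=2x+5,$$ with no need to pick points at all; the book's $y=2x+6$ appears to be a misprint.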
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }