Solving for positive reals: $abcd=1$, $a+b+c+d=28$, $ab+bc+cd+da+ac+bd=82/3$ $$a,b,c,d \in \mathbb{R}^{+}$$ $$ a+b+c+d=28$$ $$ ab+bc+cd+da+ac+bd=\frac{82}{3} $$ $$ abcd = 1 $$ One can also look for the roots of the polynomial $$\begin{align} f(x) &= (x-a)(x-b)(x-c)(x-d) \\[4pt] &= x^4 - 28x^3 + \frac{82}{3}x^2 - (abc+abd+acd+bcd)x + 1 \end{align}$$ and $f(x)$ has no negative roots... but how else do I proceed? There is a trivial solution $\frac{1}{3}, \frac{1}{3}, \frac{1}{3}, 27$. We just need to prove it's unique.
Assume $d = \max\{a,b,c,d\}$. Looking at the inequality: $$(a+b+c)^2\geq 3(ab+bc+ca)$$ beginning edit by Will: from Michael, $$ 82 = 3 (bc+ca+ab) + 3d(a+b+c), $$ from displayed inequality $$ 82 \leq (a+b+c)^2 + 3d(a+b+c) $$ $$ 82 \leq (28-d)^2 + 3 d (28-d) $$ $$ 82 \leq 784 - 56d + d^2 + 84d - 3 d^2 $$ $$ 0 \leq 702 + 28 d - 2 d^2 $$ $$ 0 \geq 2 d^2 - 28 d - 702 $$ $$ 0 \geq d^2 - 14 d - 351 $$ $$ 0 \geq (d+13)(d-27). $$ As $d >0$ we get $$ 0 \geq d-27 $$ $$ 27 \geq d $$ end of edit by Will will give you $d\leq 27.$ Consequently, $abc = \dfrac{1}{d} \geq \dfrac{1}{27}.$ SECOND EDIT by WILL $$ f = ( ab + bc + ca)^2 - 3abc(a+b+c) $$ $$ 4(b^2 - bc + c^2) f = \left( 2 (b^2 - bc + c^2) a - bc(b+c) \right)^2 + 3b^2 c^2 (b-c)^2 $$ Conclusion: permute the letters, $ f \geq 0$ and $f \neq 0$ unless $a=b=c.$ Real $a,b,c$ otherwise unrestricted. END SECOND EDIT by WILL From $a+b+c = 28-d \geq 1$ and $abc\geq \dfrac{1}{27},$ we find that $ab+bc+ca\geq \dfrac{1}{3}.$ Then, $$\dfrac{1}{3}\leq ab+bc+ca = \dfrac{82}{3} - d(28-d)\iff d^2-28d+27 \geq 0$$ This means $(d-27)(d-1)\geq 0.$ Since $d$ is the largest of four positive numbers summing to $28$, we have $d\geq 7>1$, so $d\geq 27$; combined with $d\leq 27$ this forces $d = 27.$ The rest should follow immediately.
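As a quick sanity check of the claimed solution (not part of the argument above), the triple constraint system can be verified with exact rational arithmetic, e.g. in Python:

```python
from fractions import Fraction

# candidate solution a = b = c = 1/3, d = 27, checked exactly
a = b = c = Fraction(1, 3)
d = Fraction(27)

assert a + b + c + d == 28
assert a*b + b*c + c*d + d*a + a*c + b*d == Fraction(82, 3)
assert a*b*c*d == 1
```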
{ "language": "en", "url": "https://math.stackexchange.com/questions/3764736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Newton's evaluation of $1 + \frac{1}{3} - \frac{1}{5} - \frac{1}{7} + \frac{1}{9} + \frac{1}{11} - \cdots$ How might Newton have evaluated the following series? $$\sqrt{2} \, \frac{\pi}{4} = 1 + \frac{1}{3} - \frac{1}{5} - \frac{1}{7} + \frac{1}{9} + \frac{1}{11} - \cdots$$ The method of this thread applies by setting $x=\pi/4$ in the Fourier series for $f(x) = \pi/2 - x/2$ and then subtracting the extraneous terms (which are a multiple of the Gregory-Leibniz series for $\pi/4$). I read that this series appears in a letter from Newton to Leibniz. However, I do not have access to the letter, which appears in this volume.
Although the question appears to be about how Newton historically did it, I'll convert a popular comment to an answer showing how techniques from his era, similar to those that handle the Gregory series, evaluate the series above: $$\begin{align}\sum_{n\ge0}\left(\frac{1}{8n+1}+\frac{1}{8n+3}-\frac{1}{8n+5}-\frac{1}{8n+7}\right)&=\sum_{n\ge0}\int_{0}^{1}x^{8n}\left(1+x^{2}\right)\left(1-x^{4}\right)dx\\&=\int_{0}^{1}\frac{1+x^{2}}{1+x^{4}}dx\\&=\int_{0}^{1}\frac{1+x^{2}}{\left(1-x\sqrt{2}+x^{2}\right)\left(1+x\sqrt{2}+x^{2}\right)}dx\\&=\frac{1}{2}\sum_{\pm}\int_{0}^{1}\frac{dx}{1\pm x\sqrt{2}+x^{2}}\\&=\frac{1}{\sqrt{2}}\sum_{\pm}\left[\arctan\left(x\sqrt{2}\pm1\right)\right]_{0}^{1}\\&=\frac{\arctan\left(\sqrt{2}+1\right)+\arctan\left(\sqrt{2}-1\right)}{\sqrt{2}}\\&=\frac{\pi}{2\sqrt{2}}.\end{align}$$
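A quick numerical check of the closed form (not something Newton had, of course; just a few lines of Python summing the grouped series):

```python
import math

# partial sum of the grouped series 1/(8n+1) + 1/(8n+3) - 1/(8n+5) - 1/(8n+7)
total = sum(1/(8*n + 1) + 1/(8*n + 3) - 1/(8*n + 5) - 1/(8*n + 7)
            for n in range(100_000))

# each group is O(1/n^2), so the tail after N groups is O(1/N)
assert abs(total - math.pi / (2 * math.sqrt(2))) < 1e-4
```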
{ "language": "en", "url": "https://math.stackexchange.com/questions/3764826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How to solve this ODE: $y'(x) e^x = y^2(x)$? I am trying to solve the differential equation $$y'(x) e^x = y^2(x) \quad (DE) $$ This is a Bernoulli form DE, i.e. $y'(x) + a(x)y(x) = b(x)y^r(x)$, where $r = 2, a(x) = 0, b(x) = \frac1e $ * *Let $u(x) = y^{1-r} = y^{-1} \iff u'(x) = -y^{-2}(x) y'(x)$ *Then for $y \neq 0$: $(DE) = \frac{y(x)'}{y^r(x)} = e^{-x} \iff -u'(x) = -e^x (2)$ But $(2)$ is a separable ODE, therefore: $$ u(x) = e^{-x} + C \iff \frac1y = e^{-x} + C \iff $$ $$ \bbox[15px,#ffd,border:1px solid green]{y(x) = \frac{1}{e^x + C} }$$ with $y(x) =0$, not being a solution of the DE. It all seems right to me, but Wolfram has another opinion, i.e. $$ \bbox[15px,#ffd,border:1px solid blue]{y\left(x\right)\:=\:\frac{-e^x}{\left(Ce^x\:-\:x\:-\:1\right)}} $$ I never won an argument against Wolfie, so I am wondering what I did wrong in my solution.
Your equation is equivalent to $$\frac{\dot{y}}{y^2}=e^{-x}$$ as long as $y(t)\neq0$. (Notice that $y(t)\equiv0$ is a solution to your problem.) Integrating over some interval, say $[x_0,x]$, leads to $$ \int^x_{x_0}\frac{y'(t)}{y^2(t)}\,dt=\int^x_{x_0}e^{-t}\,dt=-e^{-t}|^x_{x_0}=e^{-x_0}-e^{-x} $$ The integral on the left can be simplified by the change of variables $u=y$ to get $$ -\frac{1}{y(t)}\Big|^x_{x_0}=\frac{1}{y(x_0)}-\frac{1}{y(x)}=e^{-x_0}-e^{-x} $$ Solving for $y(x)$ one gets $$ \frac{1}{y(x)}=\frac{1}{y(x_0)}-e^{-x_0}+e^{-x} $$ and so $$ y(x)=\frac{1}{y(x_0)^{-1}-e^{-x_0}+e^{-x}} $$
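If you want to convince yourself numerically, here is a small sketch (plain Python with a hand-rolled classical RK4 step; the initial condition $y(0)=1$ is my choice, and it makes the closed form collapse to $y(x)=e^x$):

```python
import math

def f(x, y):
    # the ODE y' e^x = y^2, rewritten as y' = e^(-x) * y^2
    return math.exp(-x) * y * y

x, y, h = 0.0, 1.0, 0.001          # initial condition y(0) = 1
for _ in range(1000):              # integrate up to x = 1 with RK4
    k1 = f(x, y)
    k2 = f(x + h/2, y + h*k1/2)
    k3 = f(x + h/2, y + h*k2/2)
    k4 = f(x + h, y + h*k3)
    y += h * (k1 + 2*k2 + 2*k3 + k4) / 6
    x += h

# closed form above with x0 = 0, y(0) = 1: y(1) = 1/(1 - 1 + e^{-1}) = e
closed = 1.0 / (1.0 - 1.0 + math.exp(-1.0))
assert abs(y - closed) < 1e-8
```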
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Calculation of $\left(\frac{1}{\cos^2x}\right)^{\frac{1}{2}}$ Shouldn't $\left(\frac{1}{\cos^2x}\right)^{\frac{1}{2}} = |\sec(x)|$? Why does Symbolab as well as my professor (page one, also below) claim that $\left(\frac{1}{\cos^2x}\right)^{\frac{1}{2}} = \sec(x)$, which can be negative? Also, the length of a vector cannot be negative... can it?
You are correct that $\sqrt{x^2} \neq x$ in general for $x\in\Bbb R$. Otherwise, we would have absurdities like $1 = \sqrt{1}=\sqrt{(-1)^2} =-1$. For $x\in\Bbb R$ we indeed have $\sqrt{x^2}=|x|$. However, we have $\sqrt{x^2}=x$ for all $x\geq 0$. Hence, as your professor seems to assume that $t\in [-\pi/2,\pi/2]$, his claim that $\sqrt{\frac{1}{\cos^2(t)}}=\sec(t)$ is correct. Indeed for $t\in [-\pi/2,\pi/2]$, we have $\cos(t)\geq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does local cohomology commute with direct sums? Let $A$ be a commutative noetherian ring, $I\subseteq A$ an ideal, $M_\alpha$ be $A$-modules, $\forall\alpha\in J$. It is easily seen that the $I$-torsion commutes with direct sums: $$\Gamma_I(\bigoplus_{\alpha\in J}M_\alpha)=\bigoplus_{\alpha\in J}\Gamma_I(M_\alpha).$$ This is because, those elements in the direct sum annihilated by a power of $I$ also have each of its components annihilated by the same power of $I$, and conversely we can annihilate the direct sum of these components by a large enough power of $I$, too. Since the local cohomology $H_I^n$ is defined as the right derived functors of $\Gamma_I$, I am wondering whether we can similarly show $$H_I^n(\bigoplus_{\alpha\in J}M_\alpha)\cong \bigoplus_{\alpha\in J}H_I^n(M_\alpha).$$ I have seen some proofs of a more general result about local cohomology commuting with direct limits, but I am looking for a straight-forward proof here. Thank you very much for your help!
Convince yourself first that $H_I^n(M) = \varinjlim_k \operatorname{Ext}_R^n(R / I^k, M);$ then, use the fact that Ext commutes with finite direct sums in the second component, i.e., $$\operatorname{Ext}_R^n(R / I^k, \oplus_{i = 1}^m M_i) \cong \oplus_{i = 1}^m \operatorname{Ext}_R^n(R / I^k, M_i).$$ For the first fact, use the definition of the local cohomology modules as the right-derived functors of $\Gamma_I(M).$ Convince yourself that $\Gamma_I(M) \cong \varinjlim_k \operatorname{Hom}_R(R / I^k, M);$ then, use the facts that (1.) direct limits commute with cohomology and (2.) Ext is the right-derived functor of Hom. Unfortunately, I am not aware of a more straightforward proof than this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Show that if the matrix of $T$ with respect to all bases of $V$ is the same, then $T = \alpha I$, where $T$ is a linear operator on $V$ Now the only hint I could derive from the question is that we might have to use eigenvalues, as $Tx = \lambda x$ for $T = \lambda I$. I think eigenvalues are invariant under a change of basis. Any help would be appreciated
It seems that this is related to the isotropic matrices, i.e. matrices proportional to the identity. When you use a change of basis you transform $T$ into something else (in general the components of the transformed matrix are different, but this does not occur here). Recall the transformation of $T$ into $Q$: $Q=V^{-1}T \: V$. You can replace $T$ by $\alpha I$ to check the result for any invertible $V$ (hint: pull out the scalar and use properties of the identity).
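To see the invariance concretely, here is a small numerical sketch (NumPy; the matrix $V$ below is just a random invertible change of basis, and $\alpha=3.5$ is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 3.5
T = alpha * np.eye(4)                 # T = alpha * I

V = rng.standard_normal((4, 4))       # a generic (invertible) basis change
Q = np.linalg.inv(V) @ T @ V          # Q = V^{-1} T V

# the scalar pulls out: V^{-1} (alpha I) V = alpha V^{-1} V = alpha I
assert np.allclose(Q, T)
```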
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765481", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How to prove that $\int_0^1 f(x)\,dx = f(0) + \frac{1}{2}f'(c)$ for some $ c \in [0,1]$? * *Given that ${f}$ is differentiable on the interval $[0,1]$ I need to prove that $\int_0^1 f(x)dx = f(0) + \frac{1}{2} f'(c)$ for some $ c \in [0,1]$. *I'm aware of the integral mean value theorem, which gives us the following: there exists a point $c \in [0,1]$ such that ${f}(c) = \frac{1}{1-0} \int_0^1 f(x)\,\mathrm{d}x$. But I can't get further and it looks like that's not the right way at all. I'll be happy to get any tips or key statements that will lead me to the solution, please.
Let $\;a=\int_0^1 f(x)\,dx-f(0)$. It is sufficient to prove that there exists $c\in\left]0,1\right[$ such that $a=\frac{1}{2}f'(c)$. We define the following function: $$\phi(t)=\int_0^t f(x)\,dx+(1-t)f(t)+a(1-t)^2,\qquad \phi:[0,1]\to\mathbb{R}$$ $\phi(t)$ is differentiable on $[0,1]$ and $$\phi'(t)=f(t)-f(t)+(1-t)f'(t)-2a(1-t)=(1-t) \left[f'(t)-2a\right]$$ for all $t\in[0,1]$. $$\phi(0)=f(0)+a=\int_0^1 f(x)\,dx.$$ $$\phi(1)=\int_0^1 f(x)\,dx.$$ Since $\phi(t)$ is a differentiable function on $[0,1]$ and $\phi(0)=\phi(1)$, we can apply Rolle's Theorem, hence there exists $c\in\left]0,1\right[$ such that $\phi'(c)=0$. Consequently we get that $$(1-c)\left[f'(c)-2a\right]=0$$ but $\;1-c>0,\;$ therefore $\;f'(c)-2a=0\;$ that is $$a=\frac{1}{2}f'(c).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Does ${f(x)=\ln(e^{x^2})}$ reduce to ${x^2\ln(e)}$ or ${2x\ln(e)}$? I'm confused by the expression ${f(x) = \ln(e^{x^2})}$. I know the rule ${\log_a(x^p) = p\log_a(x)}$. So does the given expression reduce to ${x^2\ln(e)}$ or ${2x\ln(e)}$?
Here you have $\ln(e^p)$ with $p = x^2$ (the exponent being brought down is $x^2$, not $x$), so the correct reduction is $(x^2)\ln(e) = x^2$.
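A quick numerical check, for what it's worth:

```python
import math

# ln(e^(x^2)) = x^2 * ln(e) = x^2, checked at a few sample points
for x in (0.5, 1.0, 2.0):
    assert math.isclose(math.log(math.exp(x**2)), x**2)
```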
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why does every oscillating sequence diverge? Is this statement true? $$\bf\text{Every oscillating sequence diverges.}$$ My thoughts: $\bf{False}$. $s_n = (-1)^n$ does not converge. But it's bounded, therefore, not divergent either. Divergent means diverging to $-\infty$ or $+ \infty$, yes? Solution key: $\bf{True}$. If a sequence oscillates, then its limit inferior and limit superior are unequal. It follows that it cannot converge, for if it converged all its subsequences would converge to the same limit. Three other places discussing oscillating convergence: * *This website says: "Oscillating sequences are not convergent or divergent. Such as 1, 0, 3, 0, 5, 0, 7,..." I agree. *This SE post says: "Diverge means doesn't converge." But, I think it can be neither? *This SE post says: "$\sin xe^{-x}$ is oscillating and convergent." I agree. So, is the solution key correct? Who's right here?
In the usual definitions that I have picked up from various instructors and textbooks, "this sequence diverges" just means "this sequence doesn't converge". You could self-consistently define things the other way, but this is unusual in my experience. By contrast, "this sequence oscillates" is usually not rigorously defined. Thus when we somewhat casually say "this sequence doesn't converge because it oscillates forever" we really mean "this sequence doesn't converge because it oscillates with an amplitude bounded away from zero forever". With no formal definition setting the context, I would probably say that $\frac{(-1)^n}{n}$ both oscillates and converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Solution to autonomous differential equation with locally Lipschitz function As I was learning about the following theorem and its proof from the book Nonlinear Systems by H. K. Khalil, I encountered a difficulty in grasping some parts of the proof. Theorem: Consider the scalar autonomous differential equation \begin{equation} \dot{y}=-\alpha(y),\ y(t_0)=y_0,\tag{1} \end{equation} where $\alpha$ is a locally Lipschitz class $\kappa$ function defined on $[0,a)$. For all $0\leq{y_0}<a$, this equation has a unique solution $y(t)$ defined for all $t\geq{t_0}$. Moreover, \begin{equation} y(t)=\sigma(y_0,t-t_0),\tag{2} \end{equation} where $\sigma$ is a class $\kappa\ell$ function defined on $[0,a)\times[0,\infty)$. The proof goes as follows. Since $\alpha(.)$ is locally Lipschitz, the equation (1) has a unique solution $\forall\ {y_0}\geq{0}$. Because $\dot{y}(t)<0$ whenever $y(t)>0$, the solution has the property that $y(t)\leq{y_0}$ for all $t\geq{t_0}$. By integration we have, \begin{equation} -\int_{y_0}^{y} \dfrac{dx}{\alpha(x)}= \int_{t_0}^{t} d\tau. \end{equation} Let b be any positive number less than $a$ and define $\eta(y)=-\int_{b}^{y}\dfrac{dx}{\alpha(x)}$. The function $\eta(y)$ is a strictly decreasing differentiable function on $(0,a)$. Moreover, $\lim_{y\to{0}}\eta(y)=\infty$. This limit follows from two facts. First, the solution of the differential equation $y(t)\to{0}$ as $t\to\infty$, since $\dot{y}(t)<0$ whenever $y(t)>0$. Second, the limit $y(t)\to{0}$ can happen only asymptotically as $t\to\infty$; it cannot happen in finite time due to the uniqueness of the solution. Here I do not quite understand the second fact (in italics) how the uniqueness of solution ensures that $y(t)$ goes to $0$ asymptotically as $t\to\infty$. Any hints on this are greatly appreciated.
That's not what it's saying. It's saying $y(t) \to 0$ can't happen in finite time, i.e. there can't be a solution $Y(t)$ of the differential equation with $Y(t_0) = y_0$ and $Y(t_1) = 0$ for some $t_1 > t_0$. Suppose that did happen. Note that $y(t) = 0$ is also a solution of the differential equation, because part of the definition of class $\kappa$ is $\alpha(0)=0$. So this would contradict the Existence and Uniqueness Theorem, as there would be two different solutions $Y$ and $0$ having the same value at $t_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3765911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do Approximate Eigenvalues Imply Approximate Eigenvectors? My apologies in advance if this has already been asked somewhere. Suppose I have two real symmetric matrices $A$ and $B$ in $\mathbb{R}^{d \times d}$ for which $\lVert A - B \rVert_{op} \le \varepsilon$. Further, call the eigenvalue-eigenvector pairs for $A$ and $B$ as $(\lambda_i, u_i)$ and $(\tau_i, v_i)$, for all $i \in [d]$, and suppose that $\lVert u_i \rVert_2 = \lVert v_i \rVert_2 = 1$ for all $i \in [d]$. My question is: under what condition can we say something interesting about $\lVert u_i - v_i \rVert_2$? So far, I've tried using the following facts. * *For all $i$, $\lvert \lambda_i - \tau_i \rvert \le \varepsilon$. *If $\lvert \lambda_i - \tau_i \rvert \le \varepsilon$, then we can write $\lVert Bu_i - \lambda_i u_i \rVert \le \varepsilon$ (the reason I thought this might be useful is that it shows that the eigenvalue-eigenvector pairs for $A$ are almost eigenvalue-eigenvector pairs for $B$, in some sense) I'm not sure where to go from here, or if I should be looking someplace else entirely. Thank you in advance for the help!
Having $\|A - B\|$ small is not enough, in itself, to make $u_i$ and $v_i$ close. Consider any real symmetric matrices $A_0$ and $B_0$ (with distinct eigenvalues to avoid any problems of degeneracy), and take $A = t A_0$ and $B = t B_0$. Thus $\|A - B\| = |t| \|A_0 - B_0\|$ can be made arbitrarily small by taking $t$ to be small. But the eigenvectors of $A$ and $B$ are the same as the eigenvectors of $A_0$ and $B_0$, and thus don't have to be close at all.
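The scaling construction in this answer is easy to see numerically; here is a small NumPy sketch (the particular $A_0$, $B_0$ below are just example choices):

```python
import numpy as np

A0 = np.diag([1.0, 2.0])                 # eigenvectors (1,0) and (0,1)
B0 = np.array([[2.0, 1.0], [1.0, 2.0]])  # eigenvectors (1,1) and (1,-1)
t = 1e-8
A, B = t * A0, t * B0

# ||A - B|| is tiny ...
assert np.linalg.norm(A - B, 2) < 1e-7

# ... but the eigenvectors do not move at all as t -> 0
u = np.linalg.eigh(A)[1][:, 0]   # lowest eigenvector of A: (1, 0) up to sign
v = np.linalg.eigh(B)[1][:, 0]   # lowest eigenvector of B: (1, -1)/sqrt(2) up to sign
assert min(np.linalg.norm(u - v), np.linalg.norm(u + v)) > 0.5
```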
{ "language": "en", "url": "https://math.stackexchange.com/questions/3766043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Simple proof of: if $ax\equiv ay \pmod{m}$, and $\gcd(a,m)=1$, then $x\equiv y$ I'm working on a proof of: "if $ax\equiv ay \pmod{m}$, and $\gcd(a,m)=1$, then $x\equiv y\pmod{m}$". Here's what I have so far: Suppose $ax\equiv ay\pmod{m}$, and $\gcd(a,m)=1$ By definition, $ax = ay + mp$ for some $p\in\mathbb{Z}$ By definition, $ay = ax + mr$ for some $r\in\mathbb{Z}$ By Bezout's identity, it must be that $\gcd(a,m) = ax$ Similarly, it must be that $\gcd(a,m) = ay$ Therefore, $ax = ay$ Obviously, $x=y$ Q.E.D. Is this ok?
The proof you gave may have a flaw: if $1=\gcd(a,m)=ax=ay$, then $|a|=|x|=|y|=1$, which is not the case. By Bezout's Identity, from $ax=ay+mp$ and $ay=ax+mr$, we can only conclude that $ax$ and $ay$ are multiples of $\gcd(a,m)$. The proposition you stated is a special case of a general proposition: if $ax\equiv ay \pmod m$, then $x\equiv y \pmod{\frac{m}{\gcd(a,m)}}$. Proof: With the assumption we have $m\mid a(y-x)$, therefore $\frac{m}{\gcd(a,m)}\mid\frac{a}{\gcd(a,m)}(y-x)$, which implies $\frac{m}{\gcd(a,m)}\mid(y-x)$, i.e. $x\equiv y \pmod{\frac{m}{\gcd(a,m)}}$. This is basically due to Euclid's Lemma (which can be proven with Bezout's Identity): if $a\mid bc$ and $\gcd(a,b)=1$, then $a\mid c$.
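A small numerical illustration of the general proposition (Python, with a deliberately non-coprime choice of $a$ and $m$):

```python
from math import gcd

m, a = 12, 8          # gcd(8, 12) = 4, so cancellation only works mod 12/4 = 3
x, y = 2, 5

assert (a*x) % m == (a*y) % m             # 16 ≡ 40 (mod 12)
assert (x - y) % (m // gcd(a, m)) == 0    # x ≡ y (mod 3), as the proposition predicts
assert (x - y) % m != 0                   # but x is NOT congruent to y mod 12
```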
{ "language": "en", "url": "https://math.stackexchange.com/questions/3766150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Proving that $\mathbb{Z}[i]/\langle 2+3i\rangle $ is a finite field Prove that $\mathbb{Z}[i]/\langle 2+3i\rangle $ is a finite field. Hi. I can't follow a few steps in the solution below. $$\mathbb{Z}[i]/\langle 2+3i\rangle \simeq \mathbb{Z}[x]/\langle 1+x^2,2+3x\rangle$$ and $9(1+x^2)+(2-3x)(2+3x)=13$ then $13\in \langle 1+x^2,2+3x\rangle $. Thus $$\mathbb{Z}[x]/\langle 1+x^2,2+3x\rangle=\mathbb{Z}[x]/\langle 13,1+x^2,2+3x\rangle \\ \simeq \mathbb{Z}_{13}[x]/\langle 1+x^2,2+3x\rangle \simeq \mathbb{Z}_{13}$$ The last isomorphism induced by $x\mapsto 8$ (check $\langle 1+x^2,2+3x\rangle=\langle x-8\rangle $ in $\mathbb{Z}_{13}[x]$) Therefore $\mathbb{Z}[i]/\langle 2+3i\rangle \simeq \mathbb{Z}_{13}$ finite field. Question 1. Why $\mathbb{Z}[i]/\langle 2+3i\rangle\simeq \mathbb{Z}[x]/\langle1+x^2,2+3x\rangle$? I have this: Let $$f:\mathbb{Z}[x]\to \mathbb{Z}[i]/\langle 2+3i\rangle $$ with $f(p(x))=p(i)+\langle 2+3i\rangle $ homomorphism with $\ker(f)=\langle 1+x^2,2+3x\rangle $ then $$\mathbb{Z}[x]/\langle 1+x^2,2+3x\rangle \simeq \mathbb{Z}[i]/\langle 2+3i\rangle $$ Is it correct? Question 2. Why $\mathbb{Z}[x]/\langle 13,1+x^2,2+3x\rangle \simeq \mathbb{Z}_{13}[x]/\langle 1+x^2,2+3x\rangle $? Question 3. Why $\langle 1+x^2,2+3x\rangle=\langle x-8\rangle$?
Question 1: We have $\Bbb Z[i]\simeq \Bbb Z[x]/\langle x^2+1\rangle$. And the third isomorphism theorem says that when we are dividing out first by $x^2+1$, and then by $2+3x$, we are allowed to divide out by both of them simultaneously. Question 2: Again justified by the third isomorphism theorem, dividing out by $13$ before the other generators. Question 3: Working in $\Bbb Z_{13}[x]$, here we have $$ 9(2+3x)=18+27x=-8+x $$ So $\langle x^2+1,2+3x\rangle$ contains $x-8$. Now also note that $$ 3(x-8)=3x-24=3x+2\\ (x-8)^2+3(x-8)=x^2-16x+64+3x-24=x^2-13x+40=x^2+1 $$ (using $13x=0$ and $40=1$ in $\Bbb Z_{13}$). So $\langle x-8\rangle$ contains both $x^2+1$ and $2+3x$. Since each ideal contains the generators of the other ideal, the two ideals must be equal.
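A quick arithmetic check of the key identities, in plain Python:

```python
p = 13
# 13 = N(2+3i) = 2^2 + 3^2 lies in the ideal
assert 2**2 + 3**2 == p
# x = 8 is a square root of -1 mod 13, so x^2 + 1 and 2 + 3x both vanish at x = 8
assert (8**2 + 1) % p == 0
assert (2 + 3*8) % p == 0
# the identity 9(1+x^2) + (2-3x)(2+3x) = 13 from the question: 9 + 9x^2 + 4 - 9x^2 = 13,
# checked here at an arbitrary sample value x = 5
xv = 5
assert 9*(1 + xv**2) + (2 - 3*xv)*(2 + 3*xv) == 13
```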
{ "language": "en", "url": "https://math.stackexchange.com/questions/3766299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that $V=Z(x;T)\oplus Z(y;T)$ and the $T$-annihilators $\mu_{T,x},\,\mu_{T,y}$ do not share any common divisors implies that $V$ is cyclic Provided that $V=Z(x;T)\oplus Z(y;T)$ where $Z(v;T)$ denotes the cyclic subspace and the corresponding $T$-annihilators $\mu_{T,x},\,\mu_{T,y}$ do not share any common divisors, show that $V$ is itself cyclic. My approach was to first identify a possible cyclic vector, which was $x+y$ in this case. I then tried to show that every element of $V$ is an element of the cyclic vector space spanned by $T^jx+y,\ j\in\mathbb{N}\cup \{0\}$ but the problem seems to be the condition that the $T$-annihilators $\mu_{T,x},\,\mu_{T,y}$ do not share any common divisors. How do I apply this or how do I continue? Edit: Definition of the $T$-annihilator as in T-Annihilators and Minimal polynomial : Definition: $T$-annihilator of a vector $\alpha$(denoted as $p_\alpha$) is the unique monic polynomial which generates the ideal such that $g(T)\alpha = 0$ for all $g$ in this ideal.
Hint: It suffices to show that $Z(x+y;T)$ contains both $x$ and $y$. To that end, note that the restrictions $\mu_{T,x}(T) \mid_{Z(y;T)},\mu_{T,y}(T) \mid_{Z(x;T)}$ are invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3766428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is the spectrum of a shift operator the closed unit disk? Consider the following text from Murphy's: "$C^*$-algebras and operator theory": In example 2.3.2, why is $\sigma(u) = \Bbb{D}$ (= the closed unit disk)? I can see that $\sigma(u) \subseteq \Bbb{D}$ and $\sigma(u^*) = \Bbb{D}.$ Thanks in advance!
$$ \lambda \in \sigma(u) \iff \overline{\lambda} \in \sigma( u^*).$$ Since $\sigma(u^*)=\Bbb{D}$ and $\Bbb{D}$ is stable under complex conjugation, this gives $\sigma(u)=\Bbb{D}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3766576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the set of lines of fixed length connecting two circles? Suppose I have two circles as shown below. I want to find the set of line segments of size $\mathbf{w}$ connecting the two circles. What's a nice way to do this? I thought of a simple, ugly way. I could write out the equations \begin{align} (u_x - a)^2 + (u_y - b)^2 &= \lVert \mathbf{u}\rVert^2\\ (v_x - c)^2 + (v_y - d)^2 &= \lVert \mathbf{v}\rVert^2\\ \sqrt{(v_x - u_x)^2 + (v_y-u_y)^2} &=\lVert \mathbf{w}\rVert \end{align} The first two equations are the equations for the two circles, while the last formula is the distance formula for the line segment. However, when I try to do the math, it's really ugly, so I was wondering if there's a nicer way to do this. Geometrically, I think this basically amounts to treating the third equation like a circle with an origin centered on the edge of the first circle and then pivoting the third circle's origin around the edge of the first circle to trace out the line between them, like this: I feel like there also might be a way to do this with matrices, but I'm not sure how I would do that. $$\begin{bmatrix} 1 & 1 & -2a & -2b & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 1 & -2c & -2d & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & -2 & 0 & 0 & -2 \\ \end{bmatrix} \begin{bmatrix} u_x^2 \\ u_y^2 \\ u_x\\ u_y\\ v_x^2\\ v_y^2\\ v_x\\ v_y\\ u_xv_x\\ u_xv_y\\ u_yv_x\\ u_yv_u\\ \end{bmatrix} = \begin{bmatrix} \lVert \mathbf{u}\rVert^2 - a^2 - b^2 \\ \lVert \mathbf{v}\rVert^2 - c^2 - d^2\\ \lVert \mathbf{w}\rVert^2 \end{bmatrix} $$ But I'm not sure where I should go from there.
Not a complete answer It's not going to be pretty no matter how you go about it, but I'd recommend a couple of steps. * *Rotate and translate until $(a, b) = (0,0)$ and $(c, d) = (c', 0)$. Then scale everything so that $\|u\| = 1$ (although maybe this step isn't needed). *Observe that there may be no solutions. If the lengths of $u$, $v$, and $w$ add up to less than the distance $D$ between $(a,b)$ and $(c, d)$, then there's no possible line segment. Similarly, if $D < \|w\| - (\|u\| + \| v \|)$, then there's no solution. *In all other cases, there ARE solutions, but the set of solutions may be disconnected ---- it seems just possible, to me, that you could have two separate "batches" of connecting segments without any way to get from one batch to the other, continuously, through connecting segments. I don't have an example, but I have some strong suspicions. *I'd be inclined to write the points of one circle in the form $(\cos t, \sin t)$, and the other as $(c' + r \cos s, 0 + r \sin s)$, and reduce to a problem of finding $s$ and $t$ that satisfy the distance formula. (By the way, you should definitely square both sides of your third formula to get rid of the square root). My guess is that you're going to find yourself with a horrible system of equations for which every solution ends up with a bunch of "if it's in this case, do this, otherwise do that" things in them. This just has the look of an ugly problem to me. I hope someone else will prove me wrong.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3766801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can I prove $A − A(A + B)^{−1}A = B − B(A + B)^{−1}B$ for matrices $A$ and $B$? The matrix cookbook (page 16) offers this amazing result: $$A − A(A + B)^{−1}A = B − B(A + B)^{−1}B$$ This seems to be too unbelievable to be true and I can't seem to prove it. Can anyone verify this equation/offer proof?
\begin{align} A - A(A+B)^{-1}A & = A(A+B)^{-1}(A+B) - A(A+B)^{-1}A \\ &= A(A+B)^{-1}(A+B - A)\\ &= A(A+B)^{-1}B \\ &= (A+B - B)(A+B)^{-1}B \\ &= (A+B)(A+B)^{-1}B - B(A+B)^{-1}B \\ &= B - B(A+B)^{-1}B \end{align}
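Not that the algebra above needs it, but a random numerical test is reassuring (a NumPy sketch; size and seed are arbitrary, and it assumes the random $A+B$ happens to be invertible, which holds with probability 1):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((5, 5))

M = np.linalg.inv(A + B)      # (A + B)^{-1}, assumed to exist
lhs = A - A @ M @ A
rhs = B - B @ M @ B
assert np.allclose(lhs, rhs)  # the identity from the Matrix Cookbook
```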
{ "language": "en", "url": "https://math.stackexchange.com/questions/3766930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Show that the residue is $c_{-1}=-\frac{q''(z_0)}{(q'(z_0))^3}$. Suppose that $f(z) = \frac{1}{(q(z))^2}$ where the function $q$ is analytic at $z_0$, $q(z_0) = 0$, and $q'(z_0)\neq 0$. Show the residue is $c_{-1}=-\frac{q''(z_0)}{(q'(z_0))^3}$. First I would like to acknowledge that there is an old question on stack exchange that provides a solution to this, but seeing as it is several years old I didn't think I could comment and ask questions on it to much avail. I do not understand the solution given at all, could someone explain it or show me a different way? The solution from the old question (answer credit to Julian Aguirre) goes: (I'll italicise my questions) Assume without loss of generality that $z_0=0$. (why can we do that? How is it equivalent?) $q(z)=q'(0)z+\frac{q''(0)}{2}z^2+O(z^3)$ (Is this a Laurent Series? What does the O mean?) $(q(z))^2=(q'(0))^2z^2+q'(0) q''(0)z^3+O(z^4)$ (How did we get the $q'(0) q''(0)z^3+O(z^4)$?) $\frac{1}{q(z)^2}=\frac{1}{(q'(0))^2z^2}(\frac{1}{1+\frac{q''(0)}{q'(0)}z+O(z^2)})=\frac{1}{(q'(0))^2z^2}(1-\frac{q''(0)}{q'(0)}z+O(z^2))$ How does this get us the desired result?? Is there another way to do it?
Why can we do that? It can be done with a simple substitution: set $t=z-z_0$. Then $z=z_0\iff t=0$, and the function $f(z)$ becomes the function $g(t)=\dfrac 1{\bigl(q(z_0+t)\bigr)^2}$ Is this a Laurent series? What does the $O$ mean? It is not a Laurent series, since it is not a series. It is just the Taylor expansion of $q(z)$ at order $2$, the remainder $r(z)$ being $O(z^3)$ in Bachmann (big-$O$) notation from asymptotic analysis, which means that $\dfrac{r(z)}{z^3}$ is bounded when $z\to 0$. How did we get the $\:q'(0) q''(0)z^3+O(z^4)\,$? Simply calculating the polynomial part of $\bigl(q(z)\bigr)^2$ and applying the rules of calculation with $O$, i.e. truncating everything with degree $\ge 4$ (it becomes a part of $O(z^4)$) How does this get us the desired result? After factoring out $(q'(0))^2z^2$ in the denominator, they're left with $$\frac{1}{1+\underbrace{\frac{q''(0)}{q'(0)}z+O(z^2)}_{u}}$$ which they expand at order $1$ with the usual formula, truncating it again, at the same order as the denominator. So they obtain \begin{align} \frac{1}{q(z)^2}&=\frac{1}{(q'(0))^2\,z^2}\, \frac{1}{1+\cfrac{q''(0)}{q'(0)}z+O(z^2)}\\& =\frac{1}{(q'(0))^2 \,z^2}\biggl(1-\frac{q''(0)}{q'(0)}z+O(z^2)\biggr) \\[1ex] &=\frac{1}{(q'(0))^2}\,\frac1{z^2}-\frac{q''(0)}{(q'(0))^3}\,\frac 1z +O(1). \end{align} Is this clearer?
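For a concrete sanity check of the formula itself (not part of the original answer), one can approximate the residue by a contour integral in Python; the test function $q(z)=e^z-1$ at $z_0=0$ is my own choice, with $q'(0)=q''(0)=1$ and predicted residue $-1$:

```python
import cmath, math

def f(z):
    # f = 1/q(z)^2 with q(z) = e^z - 1, which has a simple zero at 0
    return 1.0 / (cmath.exp(z) - 1)**2

# residue = (1/2*pi*i) * contour integral of f over the circle |z| = 1/2
N, r = 4000, 0.5
s = 0
for k in range(N):
    z = r * cmath.exp(2j * math.pi * k / N)
    s += f(z) * 1j * z            # f(z) dz with dz = i z dt on the circle
res = s * (2 * math.pi / N) / (2j * math.pi)

# predicted by the formula: -q''(0)/q'(0)^3 = -1
assert abs(res - (-1)) < 1e-6
```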
{ "language": "en", "url": "https://math.stackexchange.com/questions/3767046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How should one understand the "indefinite integral" notation $\int f(x)\;dx$ in calculus? In calculus, it is said that $$ \int f(x)\; dx=F(x)\quad\text{means}\quad F'(x)=f(x)\tag{1} $$ where $F$ is a differentiable function on some open integral $I$. But the mean value theorem implies that any differentiable function $G:I\to \mathbb{R}$ with the property $G'(x)=f(x)$ on $I$ can be determined only up to a constant. Since the object on the right of the first equality of (1) is not unique, we cannot use (1) as a definition for the symbol $\int f(x)\;dx$. Formulas for antiderivatives are usually written in the form of $\displaystyle \int f(x)\;dx=F(x)+C$. For example, $$ \int \cos x\;dx = \sin x+C\;\tag{2} $$ where $C$ is some "arbitrary" constant. One cannot define an object with an "arbitrary" constant. It is OK to think about (2) as a set identity: $$ \int \cos x\; dx = \{g:\mathbb{R}\to\mathbb{R}\mid g(x)=\sin x+C,\; C\in\mathbb{R}\}. \tag{3} $$ So sometimes, people say that $\int f(x)\;dx$ really means a family of functions. But interpreting it this way, one runs into trouble of writing something like $$ \int (2x+\cos x) \; dx = \int 2x\;dx+\int \cos x\; dx = \{x^2+\sin x+C:C\in\mathbb{R}\}\;\tag{4} $$ where one is basically doing the addition of two sets in the middle, which is not defined. So how should one understand the "indefinite integral" notation $\int f(x)\;dx$? In particular, what kind of mathematical objects is that?
I think the problem is not specific to antiderivatives; more generally it comes from the abuse of notation for multivalued functions. Take for instance the complex logarithm $\ln(z)=\overbrace{\ln(r)+i\theta}^{\operatorname{Ln}(z)}+i2k\pi$. You have to understand $$\ln(z_1z_2)\color{red}=\ln(z_1)+\ln(z_2)$$ as $$\exists (k_1,k_2,k_3)\in\mathbb Z^3\mid \operatorname{Ln}(z_1z_2)+i2k_3\pi=\operatorname{Ln}(z_1)+i2k_1\pi+\operatorname{Ln}(z_2)+i2k_2\pi$$ In the same way, the expression $$\int (f+g)\color{red}=\int f+\int g$$ should be seen as $$\exists (C_1,C_2,C_3)\in\mathbb R^3\mid H(x)+C_3=F(x)+C_1+G(x)+C_2$$ In all these instances, you can simply regroup all the constant terms on the RHS and write: $$\int (f+g)\color{red}=F(x)+G(x)+C$$ This is the red equal sign $\color{red}=$ which is overloaded from the normal equal sign $=$; we give it extra properties (equality modulo a constant) when the context concerns multivalued functions, that's all.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3767159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 1 }
Uniform motion question A biker and a skater set out at 4:30 PM from the same point, headed in the same direction. The biker is travelling at a rate of 15 km/hr faster than twice the speed of the skater. In 1.5 hours, the biker is 35 km ahead of the skater. Find the rate of the skater. My approach: Biker speed = $S_1$, Skater speed = $S_2$ $S_1 = 2S_2 + 15$ $1.5(2S_2 + 15) + 1.5S_2 = 35$ I get $2.8 \ \text{km}/\text{hr}$ for $S_2$ Which is not correct. Tell me where I made the mistake.
The equation of motion for the biker is $x_1(t)=(2v+15) t$. The equation of motion of the skater is $x_2(t)=vt.$ We know that $x_1(1.5)-x_2(1.5)=35.$ Thus $$(2v+15)\cdot 1.5-1.5v=35$$ $$1.5(2v+15-v)=35$$ $$v+15=\frac{70}{3}$$ $$v=\frac{25}{3}~\text{km}/\text{hr}.$$
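Not part of the original answer — a quick check of this solution in Python with exact rational arithmetic:

```python
from fractions import Fraction

v = Fraction(70, 3) - 15             # from v + 15 = 70/3, so v = 25/3 km/hr
biker = 2 * v + 15                   # biker's speed
gap = Fraction(3, 2) * (biker - v)   # distance between them after 1.5 hours
```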
{ "language": "en", "url": "https://math.stackexchange.com/questions/3767265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What does distance of a point from line being negative signify? When we take the distance of a point from a line, we use $$ d = \frac{ Ax_o + By_o + C}{ \sqrt{A^2 +B^2}}$$ usually with a modulus on top. Now my question is: if I evaluate this distance as negative, what does it mean? Can I decide which half-plane a point lies in using this?
Expanding on the answer of @Andrew Chin, the sign says in which of the two half-planes (into which the line divides the whole plane) the point lies. If the point is on the same side as the vector $(A,B)$, the result is positive; otherwise it is negative. This is sometimes called the oriented (signed) distance. For example, for the line of equation $x-2y+1=0$ we have ${\bf n}=(A,B)=(1,-2)$, and the normal vector $\bf n$ points into the positive half-plane. *(graphic omitted)*
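A small Python sketch (not part of the answer) of the oriented distance for the example line $x-2y+1=0$: moving from a point on the line along $\mathbf n=(A,B)$ gives a positive value, moving against it gives a negative one.

```python
import math

A, B, C = 1.0, -2.0, 1.0        # the line x - 2y + 1 = 0, with n = (1, -2)

def oriented_distance(x0, y0):
    return (A * x0 + B * y0 + C) / math.hypot(A, B)

p = (1.0, 1.0)                  # on the line: 1 - 2*1 + 1 = 0
p_plus = (p[0] + A, p[1] + B)   # shifted along +n
p_minus = (p[0] - A, p[1] - B)  # shifted along -n
```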
{ "language": "en", "url": "https://math.stackexchange.com/questions/3767405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
$(A^c\times B)\cup(A\times B^c)\cup(A^c \times B^c)=(A^c\times Y)\cup(X\times B^c)$ Let $(X,T)$ and $(Y,T')$ be two topological spaces and $A\subset X ,B\subset Y$.Show that $$(A^c\times B)\cup(A\times B^c)\cup(A^c \times B^c)=(A^c\times Y)\cup(X\times B^c)$$ solution $$1...........(A^c\times B)\cup(A\times B^c)\cup(A^c \times B^c)=(A^c\times B)\cup\Biggl((A\cup A^c)\times(B^c\cup B^c)\Biggl)$$ $$2........=(A^c\times B)\cup(X\times (B^c\cup B^c))$$ $$3.......=(A^c\cup X \times B\cup (B^c\cup B^c))$$ $$4......=(A^c\cup X \times (B\cup B^c)\cup B^c)$$ $$5......=(A^c\cup X \times Y\cup B^c)$$ $$6....=(A^c\times Y)\cup(X\times B^c)$$ Is this proof correct?
Already in the first step you’ve written something that suggests a misconception on your part. It’s true that $(A\times B^c)\cup(A^c\times B^c)=(A\cup A^c)\times(B^c\cup B^c)$, but only because $(A\times B^c)\cup(A^c\times B^c)=(A\cup A^c)\times B^c$; it is not in general true that $$(W\times X)\cup(Y\times Z)=(W\cup Y)\times(X\cup Z)\;.\tag{1}$$ For instance, if $W=X=\{0\}$ and $Y=Z=\{1\}$, then $$\begin{align*} (W\times X)\cup(Y\times Z)&=(\{0\}\times\{0\})\cup(\{1\}\times\{1\})\\ &=\{\langle 0,0\rangle\}\cup\{\langle 1,1\rangle\}\\ &=\{\langle 0,0\rangle,\langle 1,1\rangle\}\;, \end{align*}$$ but $$\begin{align*} (W\cup Y)\times(X\cup Z)&=(\{0\}\cup\{1\})\times(\{0\}\cup\{1\})\\ &=\{0,1\}\times\{0,1\}\\ &=\{\langle 0,0\rangle,\langle 0,1\rangle,\langle 1,0\rangle,\langle 1,1\rangle\}\;. \end{align*}$$ The remaining steps are ambiguous, since you didn’t use enough parentheses, but it appears that you really did think that $(1)$ is true and used it at your step $3$. There it definitely fails. What is true is that $$(X\times Z)\cup(Y\times Z)=(X\cup Y)\times Z\;;\tag{2}$$ you should prove this, if you’ve not done so already. Thus, a good first couple of steps would be $$\begin{align*} (A^c\times B)\cup(A\times B^c)\cup(A^c\times B^c)&=(A^c\times B)\cup\big((A\cup A^c)\times B^c\big)\\ &=(A^c\times B)\cup(X\times B^c)\;. \end{align*}$$ Clearly $(A^c\times B)\cup(X\times B^c)\subseteq(A^c\times Y)\cup(X\times B^c)$, so you could finish the proof by showing that $(A^c\times Y)\cup(X\times B^c)\subseteq(A^c\times B)\cup(X\times B^c)$. There are several ways to do this; for instance, you could use ‘element-chasing’, assuming that $\langle x,y\rangle\in(A^c\times Y)\cup(X\times B^c)$ and showing that $\langle x,y\rangle\in(A^c\times B)\cup(X\times B^c)$. 
In keeping with the more algebraic style that you were using, however, you could notice that $$A^c\times Y=A^c\times(B\cup B^c)=(A^c\times B)\cup(A^c\times B^c)$$ by $(2)$, so that $$\begin{align*} (A^c\times Y)\cup(X\times B^c)&=(A^c\times B)\cup(A^c\times B^c)\cup(X\times B^c)\\ &=(A^c\times B)\cup(X\times B^c)\;, \end{align*}$$ since $A^c\times B^c\subseteq X\times B^c$. And as you can see, we’ve not just shown that $$(A^c\times Y)\cup(X\times B^c)\subseteq(A^c\times B)\cup(X\times B^c)\;:$$ we’ve shown that $$(A^c\times Y)\cup(X\times B^c)=(A^c\times B)\cup(X\times B^c)\;.$$ If you wanted to, you could reverse the order of this last calculation and append it to the first part to get a direct proof of equality: $$\begin{align*} (A^c\times B)\cup(A\times B^c)\cup(A^c\times B^c)&=(A^c\times B)\cup\big((A\cup A^c)\times B^c\big)\\ &=(A^c\times B)\cup(X\times B^c)\\ &=(A^c\times B)\cup\big((A^c\times B^c)\cup(X\times B^c)\big)\\ &=\big((A^c\times B)\cup(A^c\times B^c)\big)\cup(X\times B^c)\\ &=(A^c\times Y)\cup(X\times B^c)\;. \end{align*}$$
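Not part of the original answer — a brute-force Python check of the final identity on small concrete sets (complements taken inside $X$ and $Y$):

```python
from itertools import product

X = {1, 2, 3, 4}
Y = {"a", "b", "c"}
A = {1, 3}
B = {"a"}
Ac, Bc = X - A, Y - B            # complements within X and Y

def cross(S, T):
    return set(product(S, T))

lhs = cross(Ac, B) | cross(A, Bc) | cross(Ac, Bc)
rhs = cross(Ac, Y) | cross(X, Bc)
```

Both sides consist of every pair of $X\times Y$ except those in $A\times B$, which is another way to see why the identity holds.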
{ "language": "en", "url": "https://math.stackexchange.com/questions/3767920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An illusionist and their assistant are about to perform the following magic trick Let $k$ be a positive integer. A spectator is given $n=k!+k−1$ balls numbered $1,2,\dotsc,n$. Unseen by the illusionist, the spectator arranges the balls into a sequence as they see fit. The assistant studies the sequence, chooses some block of $k$ consecutive balls, and covers them under their scarf. Then the illusionist looks at the newly obscured sequence and guesses the precise order of the $k$ balls they do not see. Devise a strategy for the illusionist and the assistant to follow so that the trick always works. (The strategy needs to be constructed explicitly. For instance, it should be possible to implement the strategy, as described by the solver, in the form of a computer program that takes $k$ and the obscured sequence as input and then runs in time polynomial in $n$. A mere proof that an appropriate strategy exists does not qualify as a complete solution.) Source: Komal, October 2019, problem A $760$. Proposed by Nikolai Beluhov, Bulgaria, and Palmer Mebane, USA I can prove that such a strategy must exist: We have a set $A$ of all permutations (what assistant sees) and a set $B$ of all possible positions of a scarf (mark it $0$) and remaining numbers (what the illusionist sees). We connect each $a$ in $A$ with $b$ in $B$ if a sequence $b$ without $0$ matches with some consecutive subsequence in $a$. Then each $a$ has degree $n-k+1$ and each $b$ has degree $k!$. Now take an arbitrary subset $X$ in $A$ and let $E$ be a set of all edges from $X$, and $E'$ set of all edges from $N(X)$ (the set of all neighbours of vertices in $X$). Then we have $E\subseteq E'$ and so $|E|\leq |E'|$. Now $|E|= (n-k+1)|X|$ and $|E'| = k!|N(X)|$, so we have $$ (n-k+1)|X| \leq k!|N(X)|\implies |X|\leq |N(X)|.$$ By Hall marriage theorem there exists a perfect matching between $A$ and $B$... ...but I can not find one explicitly. Any idea? Update: 2020. 12. 20. 
* *https://artofproblemsolving.com/community/c6t309f6h2338577_the_magic_trick *https://dgrozev.wordpress.com/2020/11/14/magic-recovery-a-komal-problem-about-magic/
NOTE: I found counterexamples to the explicit $f$ I initially posted. I removed it but am leaving the rest of the answer up as a partial solution. Notation and Remarks Let $S_n$ denote the set of permutations of length $n$ and let $C_{n,k}$ be the set of covered permutations. For example, the permutation $12345678 \in S_8$ and $123\cdot\cdot\cdot78 \in C_{8,3}$. For brevity I often drop the subscripts. The act of covering gives us a relation $\sim$ between the two sets, i.e. we say that $\pi \in S$ and $c \in C$ are compatible if $c$ is a covering of $\pi$. We can visualize this compatibility relation as a bipartite graph. For $k=2$ and $n=3$ we have: *(bipartite graph figure omitted)* We need to find an injective function $f : S_n \rightarrow C_{n,k}$ such that $\pi$ and $f(\pi)$ are always compatible. The assistant performs the function $f$ and the illusionist performs the inverse $f^{-1}$. As per the problem requirements, both $f$ and $f^{-1}$ need to be computable in poly(n) time for a given input. We note that given such an $f$, computing $f^{-1}$ is straightforward: given a covering $c$, we consider the compatible permutations. Among these, we choose the $\pi$ such that $f(\pi) = c$. For $n = k! + k - 1$ we note that $|S_n| = |C_{n,k}| = n!\,$. We also note that each permutation is compatible with exactly $k!$ coverings and each covering is compatible with exactly $k!$ permutations. As OP mentioned, the Hall marriage theorem thus implies that a solution exists. We can find such a solution using a maximum matching algorithm on the bipartite graph. This is how @orlp found a solution for $k=3$ in the comments. However, the maximum matching algorithm computes $f$ for every permutation in poly(n!) time, where we instead need to compute $f$ for a single permutation in poly(n) time.
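Not part of the answer above — a brute-force Python sketch of the degree counts in the compatibility graph for $k=2$, $n=k!+k-1=3$ (a covering is encoded here by replacing the hidden block with `'*'` markers):

```python
from itertools import permutations

k, n = 2, 3                                    # n = k! + k - 1
perms = list(permutations(range(1, n + 1)))    # the set S_n

def coverings(p):
    """All coverings of p obtained by hiding k consecutive balls."""
    return {p[:i] + ("*",) * k + p[i + k:] for i in range(n - k + 1)}

all_covs = set().union(*(coverings(p) for p in perms))
deg_perm = [len(coverings(p)) for p in perms]
deg_cov = [sum(c in coverings(p) for p in perms) for c in all_covs]
```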
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 2, "answer_id": 1 }
Line Integral gives no work done? For the following question, $$ \mathbf{F}=\langle-y, x\rangle $$ For this field: Compute the line integral along the path that goes from (0,0) to (1,1) by first going along the $x$ -axis to (1,0) and then going up one unit to (1,1) . I got an answer of $0$, by doing: But the answer key concludes that the answer is $1$: To compute $\int_{C} \mathbf{F} \cdot d \mathbf{r}$ we break the curve into two pieces, then add the line integrals along each piece. First, fix $y=0$ (so $d y=0$ ) and let $x$ range from 0 to 1 . $$ \int_{x=0}^{x=1} \mathbf{F} \cdot d \mathbf{r}=\int_{x=0}^{x=1}-y d x+x d y=\int_{0}^{1} 0 d x=0 $$ Next, fix $x=1$ (so $d x=0$ ) and let $y$ range from 0 to 1: $$ \int_{y=0}^{y=1} \mathbf{F} \cdot d \mathbf{r}=\int_{y=0}^{y=1}-y d x+1 d y=1 $$ We conclude that $\int_{C} \mathbf{F} \cdot d \mathbf{r}=1$ I understand the solution from the answer key, but I don't get why my solution doesn't work. Please assist.
Here's how physicists often do it: $$\int_C \mathbf{F}\cdot d\mathbf{r} = \int_C (F_x \hat{i} + F_y \hat{j})\cdot (\hat{i}\,dx + \hat{j}\,dy) = \int_C F_x\,dx + F_y\,dy$$ On the first part $x$ goes $0 \to 1$ and $y=0$ is constant so $dy = 0$ and hence $$\int_{C_1} F_x\,dx + F_y\,dy = \int_{x=0}^{x=1} -y\,dx = 0.$$ Similarly, on the second part $y$ goes $0 \to 1$ and $x=1$ is constant so $dx = 0$ and hence $$\int_{C_2} F_x\,dx + F_y\,dy = \int_{y=0}^{y=1} x\,dy = \int_{y=0}^{y=1} dy=1.$$
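Not part of the original answer — a numerical cross-check in Python, integrating $\mathbf F\cdot d\mathbf r$ along each leg with a midpoint rule:

```python
def F(x, y):
    return (-y, x)

def line_integral(r, dr, n=100000):
    """Midpoint-rule approximation of the work along r(t), t in [0, 1]."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        Fx, Fy = F(*r(t))
        dx, dy = dr(t)
        total += (Fx * dx + Fy * dy) * h
    return total

leg1 = line_integral(lambda t: (t, 0.0), lambda t: (1.0, 0.0))  # (0,0) -> (1,0)
leg2 = line_integral(lambda t: (1.0, t), lambda t: (0.0, 1.0))  # (1,0) -> (1,1)
work = leg1 + leg2
```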
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Can $a \bmod 3$ be represented arithmetically without the mod or other integer-related functions? I've noticed that $a \bmod b$ (with 'mod' in the operational sense) can also be represented using various tricks of formulation. For instance, that $$a \bmod b = a - b\left\lfloor\frac{a}{b}\right\rfloor .$$ There are several ways to do it by (ab)using various functions like ceiling, max, abs, and the like. However, I realized yesterday that $$a \bmod 2 = \frac{1-(-1)^a}{2}.$$ I find this interesting as I consider basic exponentiation to be a purer operation in some sense than something like floor, perhaps in that it doesn't have a built-in sense of conditionality or knowledge of fractional parts. Furthermore, you can use substitution to reach arbitrarily high powers of $2$, as in $$a \bmod 4 = \frac{1-(-1)^a}{2}+1-(-1)^{\frac{a-\frac{1-(-1)^a}{2}}{2}},$$ which amounts to right-shifting $a$ by one spot and repeating the process to get the second bit you need for the $\bmod 4$. I realize that's not pretty, but I'm interested that it's possible. The motivation here is identifying situations where formulae may implicitly support a richness of computational complexity one wouldn't expect, which parity detection and manipulation goes a long way towards. Which leads me to my question: Is there some way using exponentiation or other basic operations to find a comparable expression for $a \bmod 3$, or ideally, $a \bmod b$?
Note that $\sin(2\pi n/3)=0, \dfrac{\sqrt3}2, $ or $-\dfrac{\sqrt3}2$, according as $n\equiv0, 1, $ or $2\bmod3$, respectively. Furthermore, $f(x)=\dfrac{x\left(x-\dfrac{\sqrt3}2\right)2}{-\dfrac{\sqrt3}2\left(-\dfrac{\sqrt3}2-\dfrac{\sqrt3}2\right)}+\dfrac{x\left(x+\dfrac{\sqrt3}2\right)1}{\dfrac{\sqrt3}2\left(\dfrac{\sqrt3}2+\dfrac{\sqrt3}2\right)}=\dfrac{3x^2-\dfrac{\sqrt3}2x}{\dfrac 32}=2x^2-\dfrac x{\sqrt3}$ has the property that $f(0)=0, f\left(\dfrac{\sqrt3}2\right)=1, $ and $f\left(-\dfrac{\sqrt3}2\right)=2$. Therefore, $f(\sin(2\pi n/3))=2\sin^2(2\pi n/3)-\dfrac {\sin(2\pi n/3)}{\sqrt3}=n \bmod 3.$
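Not part of the original answer — a quick Python check of the closed form $2\sin^2(2\pi n/3)-\sin(2\pi n/3)/\sqrt3=n\bmod 3$:

```python
import math

def mod3(n):
    s = math.sin(2 * math.pi * n / 3)
    return 2 * s * s - s / math.sqrt(3)

values = [mod3(n) for n in range(30)]
```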
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Evaluating $\iint dx\,dy$ over the region bounded by $y^2=x$ and $x^2+y^2=2x$ in the first quadrant Identify the region bounded by the curves $y^2=x$ and $x^2+y^2=2x$, that lies in the first quadrant and evaluate $\iint dx\,dy$ over this region. In my book the solution is like: $$\begin{align}\\ \iint dx\,dy &=\int_{x=0}^1\int_{y=\sqrt x}^{\sqrt{2x-x^2}} \, dx \, dy\\ &=\int_{x=0}^1 \big[y\big]_{\sqrt x}^{\sqrt{2x-x^2}}\,dx\\ &=\int_0^1\left(\sqrt{2x-x^2}-\sqrt{x}\right)\,dx\\ &{\begin{aligned}\\ =\int_0^1\sqrt{1-x^2}\,dx-\int_0^1\sqrt{x}&\,dx\text{(applying} \int_0^af(x)\,dx=&\int_0^af(a-x)\,dx \text{ in the first part)}\\ \end{aligned}\\}\\ &=\left[\frac{\sqrt{1-x^2}}{2}+\sin^{-1}x\right]_0^1-\left[\frac{x^{\frac{3}{2}}}{\frac32}\right]_0^1\\ &=\frac{\pi}{2}-\frac12-\frac23(1-0)\\ &=\frac{\pi}{2}-\frac76\\ \end{align}\\ $$ And I did it like: $$\begin{align}\\ \iint dx\,dy &=\int_{x=0}^1\int_{y=\sqrt x}^{\sqrt{2x-x^2}}dx\,dy\\ &=\int_0^1\left(\sqrt{2x-x^2}-\sqrt{x}\right)\,dx\\ &=\int_0^1\sqrt{1-(x-1)^2}\,dx-\int_0^1\sqrt{x}\,dx\\ &{\begin{aligned}\\ =&\left[\frac{x-1}{2}\sqrt{1-(x-1)^2}+\frac12\sin^{-1}(x-1)\right]_0^1&-\left[\frac23x^{\frac32}\right]_0^1\\ \end{aligned}\\}\\ &=-\frac{\pi}{4}-\frac23\\ \end{align}\\ $$ Which one is correct?
Clearly, your area cannot be negative, so your result is immediately incorrect. The system $$x = y^2 \\ x^2 + y^2 = 2x$$ is readily solved by substitution. We have $$\begin{align} 0 &= x^2 + y^2 - 2x \\ &= x^2 + x - 2x \\ &= x^2 - x \\ &= x(x-1). \end{align}$$ Hence $x \in \{0, 1\}$ and the full solution set is $$(x,y) \in \{(0,0), (1, -1), (1, 1)\}.$$ In the first quadrant, the area of interest may be expressed as $$\begin{align} \int_{x = 0}^1 \int_{y = \sqrt{x}}^\sqrt{2x-x^2} \, dy \, dx &= \int_{x=0}^1 \sqrt{2x - x^2} - \sqrt{x} \, dx \\ &= \int_{x=0}^1 \sqrt{1 - (1-x)^2} - \int_{x=0}^1 \sqrt{x} \, dx \\ &= \int_{u=0}^1 \sqrt{1-u^2} \, du - \left[\frac{2}{3}x^{3/2}\right]_{x=0}^1 \\ &= \int_{\theta = 0}^{\pi/2} \sqrt{1 - \sin^2 \theta} \cos \theta \, d\theta - \frac{2}{3} \\ &= \int_{\theta = 0}^{\pi/2} \cos^2 \theta \, d \theta - \frac{2}{3} \\ &= \int_{\theta = 0}^{\pi/2} \frac{1 + \cos 2\theta}{2} \, d\theta - \frac{2}{3} \\ &= \left[\frac{\theta}{2} + \frac{\sin 2\theta}{4}\right]_{\theta = 0}^{\pi/2} - \frac{2}{3} \\ &= \left(\frac{\pi}{4} + 0 - 0 + 0\right) - \frac{2}{3} \\ &= \frac{\pi}{4} - \frac{2}{3}. \end{align}$$ This step-by-step calculation should resolve all doubt. This is because for a fixed $x \in [0,1]$, we note $$y = \sqrt{x} \le \sqrt{2x-x^2}.$$ Alternatively, we may change the order of integration, but this requires us to solve the equation for the circle in terms of $x$. We can do this by completing the square: $x^2 - 2x + y^2 = 0$ implies $$1-y^2 = x^2 - 2x + 1 = (x-1)^2,$$ hence $$x = 1 \pm \sqrt{1-y^2},$$ and we choose the negative root because we require $x < 1$. Therefore, the area can be expressed as $$\int_{y=0}^1 \int_{x=1 - \sqrt{1-y^2}}^{y^2} \, dx \, dy.$$ Both integrals evaluate to $$\frac{\pi}{4} - \frac{2}{3}.$$ As has already been noted, the figure is misleading because the point $(1,1)$ lies directly above the center of the circle at $(1,0)$. 
We can also check our solution by noting that the desired area is equal to the area under a parabola $y = x^2$ on $x \in [0,1]$, minus the area of a unit square from which a quarter of a unit circle has been cut out; i.e., this is simply $$\int_{x=0}^1 x^2 \, dx - \left(1 - \frac{\pi}{4}\right) = \frac{\pi}{4} - \frac{2}{3}.$$
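Not part of the original answer — a numeric sanity check in Python, via a midpoint Riemann sum for $\int_0^1\left(\sqrt{2x-x^2}-\sqrt x\right)dx$:

```python
import math

def integrand(x):
    return math.sqrt(2 * x - x * x) - math.sqrt(x)

n = 200000
h = 1.0 / n
area = sum(integrand((i + 0.5) * h) for i in range(n)) * h
exact = math.pi / 4 - 2 / 3     # ~ 0.1187
```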
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluate $\int \frac{2-x^3}{(1+x^3)^{3/2}} dx$ Evaluate: $$\int \frac{2-x^3}{(1+x^3)^{3/2}} dx$$ I could find the integral by setting it equal to $$\frac{ax+b}{(1+x^3)^{1/2}}$$ and differentiating both sides w.r.t. $x$ as $$\frac{2-x^3}{(1+x^3)^{3/2}}=\frac{a(1+x^3)^{1/2}-(1/2)(ax+b)3x^2(1+x^3)^{-1/2}}{(1+x^3)}$$$$=\frac{a-ax^3/2-3bx^2/2}{(1+x^3)^{3/2}}$$ Finally by setting $a=2,b=0$, we get $$I(x)=\frac{2x}{(1+x^3)^{1/2}}+C$$ The question is: How to do it otherwise?
@ClaudeLeibovici has a point, because what you did is a well-worn technique, that of using an Ansatz. The basic idea is to make an educated guess as to the form of the solution, then make it more specific with your calculations, as you did. So it's worth understanding what makes a specific Ansatz a sensible starting point: * *It makes sense to assume a $(1+x^3)^{-3/2}$ factor results from differentiating $(1+x^3)^{-1/2}$: if nothing else, integration by parts makes sense of that. *You could have started with a more general Ansatz, $I(x)=f(x)(1+x^3)^{-1/2}$, so the problem is equivalent to $(x^3+1)f^\prime-\tfrac32x^2f=2-x^3$. Since constant $f$ doesn't solve this, it's natural to try a linear $f$ next, which worked for you. Maybe this answer isn't what you were looking for, but it's important to understand how to use an Ansatz as more than an accident.
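Not part of the original answer — a quick numeric Python check (central differences) that the Ansatz with $a=2$, $b=0$ really is an antiderivative of the integrand:

```python
def antiderivative(x):
    return 2 * x / (1 + x ** 3) ** 0.5

def integrand(x):
    return (2 - x ** 3) / (1 + x ** 3) ** 1.5

h = 1e-6
err = max(abs((antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
              - integrand(x))
          for x in (0.0, 0.3, 0.7, 1.2, 2.5))
```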
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Show that the solutions of the equation $x^5-2x^3-3=0$ are all less than 2 (using proof by contradiction). The question is "Show that the solutions of the equation $x^5-2x^3-3=0$ are all less than 2." I have attempted to answer this question using proof by contradiction and I think my answer is either wrong or not a well-written solution. I would like to know if I solved it right and, if I did, I would like some advice on how I can improve writing proofs. My attempt: Assume to the contrary that a solution of this equation is greater than or equal to 2. Let $x=\frac 2p$. Then we have $ (\frac 2p)^5-2(\frac 2p)^3-3=0 $ $ \frac {2^5}{p^5} - \frac {2^4}{p^3} - 3 = 0$ We now consider two cases: when $p=1$ and $p<1$. When $p=1$: $ 2^5 - 2^4 - 3 = 32 - 16 - 3 = 13$. Since $x = 2$ is not a solution this is a contradiction. When $p<1$: If $p<1$, then we know $1/p>1.$ This implies $\frac {1}{p^{n+1}} > \frac {1}{p^n}.$ Since $\frac 1{p^3}(2^5-2^4)-3 > 2^5-2^4-3 = 13 > 0$, it is clear that the inequality $ \frac 1{p^5}(2^5)- \frac 1{p^3} 2^4-3 > \frac 1{p^3}(2^5-2^4)-3 > 2^5-2^4-3 = 13 > 0$ holds. Since no $x > 2$ is a solution, this is a contradiction. Thus, every solution $x$ is less than 2.
Let $x$ be a root and $x\ge2$. Thus, $$x^5-2x^3-3=\underbrace{x^5-2x^4}_{x^4(x-2)\,\ge\,0}+\underbrace{2x^4-4x^3}_{2x^3(x-2)\,\ge\,0}+\underbrace{2x^3-3}_{\ge\,13}>0,$$ which is a contradiction, so every real root is less than $2$.
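Not part of the answer — a numeric Python sketch: $p(x)=x^5-2x^3-3$ is positive on a sampled grid with $x\ge2$, while bisection locates a real root inside $(1,2)$.

```python
def p(x):
    return x ** 5 - 2 * x ** 3 - 3

grid_positive = all(p(2 + i / 100) > 0 for i in range(1000))

# p(1) = -4 < 0 < 13 = p(2), so bisection finds a root in (1, 2)
lo, hi = 1.0, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if p(mid) < 0 else (lo, mid)
root = (lo + hi) / 2
```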
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $A$ is a rank $1$ matrix, then $A^2= \operatorname{Tr}A \cdot A$ I am posting this question because I want to know if my proof is correct (I know that the result holds for $\mathbb{K}=\mathbb{C}$ and I see no reason why it wouldn't work for an arbitrary field, but I just want to be sure). Claim : Let $A \in \mathcal{M}_n(\mathbb{K})$ ($\mathbb{K}$ is a field, $n\in \mathbb{N}, n\ge 2$) such that $\operatorname{rank}A=1$. Then we have that $A^2=\operatorname{Tr}A\cdot A$. Proof : Since $\operatorname{rank}A=1$, $A's$ lines are proportional i.e. $A= \begin{pmatrix} b_1c_1 & b_1c_2 &...& b_1c_n\\ b_2c_1 & b_2c_2 &...& b_2c_n\\ ... & ... & ...& ...\\ b_nc_1 & b_nc_2 &...& b_nc_n\\ \end{pmatrix}=\begin{pmatrix} b_1 & 0 &...& 0\\ b_2 & 0 &...& 0\\ ... & ... & ...& ...\\ b_n & 0 &...& 0\\ \end{pmatrix}\cdot \begin{pmatrix} c_1 & c_2 &...& c_n\\ 0 & 0 &...& 0\\ ... & ... & ...& ...\\ 0 & 0 &...& 0\\ \end{pmatrix}.$ Let $B:=\begin{pmatrix} b_1 & 0 &...& 0\\ b_2 & 0 &...& 0\\ ... & ... & ...& ...\\ b_n & 0 &...& 0\\ \end{pmatrix}$ and $C:=\begin{pmatrix} c_1 & c_2 &...& c_n\\ 0 & 0 &...& 0\\ ... & ... & ...& ...\\ 0 & 0 &...& 0\\ \end{pmatrix}$. We have that $A^2=B(CB)C=\operatorname{Tr}A\cdot BC=\operatorname{Tr}A\cdot A$ and we are done.
It's technically correct, but I think it's clearer to write $A$ as $bc^T$ with $b,c\in\mathbb K^n$. The identity can then be proved easily as $$ A^2=(bc^T)(bc^T)=b(c^Tb)c^T=(c^Tb)bc^T=(\operatorname{Tr}A)A. $$ If you decompose $A$ into a product of two square matrices $B$ and $C$, one may not immediately see why $B(CB)C=(\operatorname{Tr}A)BC$.
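Not part of the answer — a dependency-free Python check of $A^2=(\operatorname{Tr}A)\,A$ for a concrete rank-1 matrix $A=bc^T$ (the particular vectors are arbitrary):

```python
b = [1.0, -2.0, 0.5]
c = [3.0, 0.25, -1.0]
n = len(b)
A = [[b[i] * c[j] for j in range(n)] for i in range(n)]   # A = b c^T

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

trace_A = sum(A[i][i] for i in range(n))
A2 = matmul(A, A)
```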
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculus midterm exam help. I received my grade for my calculus midterm and according to my professor I answered the two following questions incorrectly but he refuses to tell me why. If anyone could explain to me where I went wrong I would appreciate it greatly. According to my professor, the proof used in the first question is insufficient. For the second question, he claims my answer is incorrect and should be (3cos(t), 3sin(t), t). Thank you guys for your help!
For question 1 I imagine that your professor was expecting you to provide a cartesian equation of the plane in the form $ax + by +cz = 0$. So you should expand what you wrote with $ \vec r = x \vec i + y \vec j +z \vec k$. For question 2 Your initial speed is incorrect and equal to $6 \vec j$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3768865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does a symmetric diagonally dominant real matrix $A$ with nonnegative diagonal entries satisfy $(x^{2p-1})^T A x \geq 0$? In https://mathworld.wolfram.com/DiagonallyDominantMatrix.html, I find that A symmetric diagonally dominant real matrix with nonnegative diagonal entries is positive semidefinite. If $A \in \mathbb{R}^{N\times N}$ is a symmetric diagonally dominant real matrix with nonnegative diagonal entries, is it still true that \begin{align} (\mathbf x^{2p-1})^T A \mathbf x \geq 0, \quad \forall \mathbf x \in \mathbb{R}^N \end{align} where $p \geq 1$ is an integer, and the $(2p-1)$-th power of the vector $\mathbf{x}$ is element-wise, i.e. $\mathbf x^{2p-1} = [x_1^{2p-1}, \cdots, x_N^{2p-1}]^T$. EDIT 1: I wrote a short MATLAB code to verify the inequality:
clear;
N = 10;
A0 = 2*rand(N, N) - 1; % random values in [-1, 1]
A = A0 + A0'; % construct symmetric matrix
v = (sum(abs(A), 2) - abs(diag(A))); % diagonally dominant
for i = 1:N
    A(i,i) = v(i); % assign v to the diagonal elements
end
xv = 2*rand(N, 1000000) - 1;
p = 3;
x = min(dot((xv.^p), A * xv))
Thank you very much!
This is true. Denote the standard basis of $\mathbb R^N$ by $\{e_1,e_2,\ldots,e_N\}$. If $A$ has a nonzero off-diagonal entry $a_{ij}$, let $$ B=|a_{ij}|\left(e_i+\operatorname{sign}(a_{ij})e_j\right)\left(e_i+\operatorname{sign}(a_{ij})e_j\right)^T. $$ Then both $B$ and $A-B$ are diagonally dominant and their diagonals are nonnegative. Proceeding recursively, we can extract positive multiples of matrices of the form $(e_i+se_j)(e_i+se_j)^T$ (with $s=\pm1$) from $A$, until only a nonnegative diagonal matrix $D$ remains. Clearly $(x^{2p-1})^TDx =\sum_{i=1}^Nd_{ii}x_i^{2p}\ge0$. Also, when $s=\pm1$, we have \begin{align} (x^{2p-1})^T(e_i+se_j)(e_i+se_j)^Tx &=x_i^{2p}+sx_i^{2p-1}x_j+sx_ix_j^{2p-1}+x_j^{2p}\\ &\ge|x_i|^{2p}-|x_i|^{2p-1}|x_j|-|x_i||x_j|^{2p-1}+|x_j|^{2p}\\ &=(|x_i|-|x_j|)(|x_i|^{2p-1}-|x_j|^{2p-1})\\ &=(|x_i|-|x_j|)^2\sum_{k=0}^{2p-2}|x_i|^k|x_j|^{2p-2-k}\ge0. \end{align} It follows that $(x^{2p-1})^TAx\ge0$.
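Not part of the answer — a dependency-free Python version of the OP's MATLAB experiment, on a fixed symmetric diagonally dominant example matrix (chosen here for illustration) with nonnegative diagonal:

```python
import random

A = [[3.0, -1.0, 2.0],
     [-1.0, 4.0, -3.0],
     [2.0, -3.0, 5.0]]          # symmetric, a_ii = sum_{j != i} |a_ij| >= 0

def form(x, p):
    """(x^{2p-1})^T A x with the odd power taken elementwise."""
    n = len(x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    return sum(x[i] ** (2 * p - 1) * Ax[i] for i in range(n))

random.seed(0)
min_value = min(form([random.uniform(-1, 1) for _ in range(3)], p)
                for _ in range(20000) for p in (1, 2, 3))
```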
{ "language": "en", "url": "https://math.stackexchange.com/questions/3769352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove $\sum_{n=1}^\infty a_n b_n $ is convergent if $\sum_{n=1}^\infty (b_n -b_{n+1})$ is absolutely convergent , $\sum a_n $ convergent Prove that $\sum_{n=1}^\infty a_n b_n $ is convergent if $\sum_{n=1}^\infty a_n$ is convergent and $\sum_{n=1}^\infty (b_n -b_{n+1})$ is absolutely convergent series. Since, $\sum_{n=1}^\infty (b_n -b_{n+1})$ converges absolutely i.e. $\sum_{n=1}^\infty \vert (b_n -b_{n+1}) \vert$ converges implies $\sum_{n=1}^\infty (b_n -b_{n+1})$ converges also. Also $\sum_{n=1}^\infty a_n $ is also a convergent sequence. Let, $A_n= \sum_{k=0}^n a_k $. Then for $0 \leq p \leq q $, we have $\sum_{n=p}^q a_n b_n = \sum_{n=p}^{q-1} A_n (b_n-b_{n+1})+ A_q b_q - A_{p-1} b_p$ I think I need to use the comparison test or I need to show $\sum b_n $ is bounded.
The fact that $\sum_n(b_n-b_{n+1})$ converges means $b_1-b_{n+1}=\sum_{k=1}^n(b_k-b_{k+1})$ converges, hence $b_n$ converges. Similarly, $A_n:=\sum_{k=1}^na_k$ converges. Hence both sequences $A_n$ and $b_n$ are bounded by, say, $A$ and $B$ respectively, and both are Cauchy sequences. Summation by parts gives $$\sum_{n=p}^qa_nb_n=\sum_{n=p}^qA_n(b_n-b_{n+1})+A_qb_{q+1}-A_{p-1}b_p$$ Hence \begin{align}|\sum_{n=p}^qa_nb_n|&\le |\sum_{n=p}^qA_n(b_n-b_{n+1})|+ |A_q||b_{q+1}-b_p|+|b_p||A_q-A_{p-1}|\\ &\le A\sum_{n=p}^q|b_n-b_{n+1}|+A|b_{q+1}-b_p|+B|A_q-A_{p-1}|\to0\end{align} as $p,q\to\infty$ since all terms are Cauchy. Hence the series is Cauchy and converges.
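Not part of the answer — a Python illustration with the concrete example $a_n=(-1)^{n+1}/n$ (convergent but not absolutely) and $b_n=1+1/n$ (so $\sum|b_n-b_{n+1}|<\infty$); for this example $\sum a_nb_n=\ln 2+\pi^2/12$, and the partial sums approach it:

```python
import math

N = 200000
s = 0.0
for n in range(1, N + 1):
    a_n = (-1) ** (n + 1) / n
    b_n = 1 + 1 / n
    s += a_n * b_n

expected = math.log(2) + math.pi ** 2 / 12   # ln 2 from sum a_n, pi^2/12 from sum a_n/n
```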
{ "language": "en", "url": "https://math.stackexchange.com/questions/3769507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Derive $\mathbf b$ from $\mathbf a = \mathbf b× \mathbf c$ I have an equation: $$\mathbf a = \mathbf b × \mathbf c,$$ where $\mathbf a$ $\mathbf b$ and $\mathbf c$ are 3-vectors. How could I derive $b$ from the equation and express it in terms of $\mathbf a$ and $\mathbf c$?
Note that * *if $\mathbf{c}\cdot\mathbf{a}\neq 0$ then there are no solutions. *if $\mathbf{c}=\mathbf{0}$, then there are no solutions unless $\mathbf{a}=\mathbf0$, in which case every $\mathbf{b}$ works. *if $\mathbf{c}\neq\mathbf{0}$, then $\mathbf{p}\times\mathbf{c}=\mathbf{0}$ if and only if $\mathbf{p}$ is a scalar multiple of $\mathbf{c}$. So if $\mathbf{a},\mathbf{c}\neq\mathbf{0}$ and $\mathbf{c}\cdot\mathbf{a}=0$, the solutions are $\mathbf{b}=\mathbf{b_0}+\lambda\mathbf{c}$, some $\mathbf{b}_0$ such that $\mathbf{b}_0\times\mathbf{c}=\mathbf{a}$. How could we find such a $\mathbf{b}_0$? One way is to select it so that $b^2$ is minimized. That is, $\mathbf{b}_0$ is perpendicular to $\mathbf{c}$. Since $\mathbf{b}_0$ is perpendicular to $\mathbf{a}$ (from $\mathbf{a}=\mathbf{b}\times\mathbf{c}$), we get $\mathbf{b}_0=\mu\mathbf{c}\times\mathbf{a}$. To solve for $\mu$, we take scalar triple product $$ \mathbf{b}\cdot(\mathbf{c}\times\mathbf{a})=\mathbf{a}\cdot(\mathbf{b}\times\mathbf{c})=\mathbf{a}\cdot\mathbf{a}=a^2 $$ so $$ \mu=\frac{a^2}{\lvert\mathbf{c}\times\mathbf{a}\rvert^2} $$ and so the solutions are $\mathbf{b}=\frac{a^2}{\lvert\mathbf{c}\times\mathbf{a}\rvert^2}(\mathbf{c}\times\mathbf{a})+\lambda\mathbf{c}$ for some scalar $\lambda$.
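Not part of the answer — a small Python sketch of the construction: given $\mathbf c$ and an $\mathbf a$ with $\mathbf c\cdot\mathbf a=0$, the vector $\mathbf b_0=\frac{a^2}{|\mathbf c\times\mathbf a|^2}(\mathbf c\times\mathbf a)$, plus any multiple of $\mathbf c$, satisfies $\mathbf b\times\mathbf c=\mathbf a$. The vector $(0,3,-1)$ below is just a device to manufacture a valid $\mathbf a$.

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

c = (1.0, 2.0, 2.0)
a = cross((0.0, 3.0, -1.0), c)        # guarantees c . a = 0 and a != 0

ca = cross(c, a)
mu = dot(a, a) / dot(ca, ca)
b0 = tuple(mu * t for t in ca)        # minimal-norm solution
b = tuple(x + 0.7 * y for x, y in zip(b0, c))   # general solution, lambda = 0.7
```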
{ "language": "en", "url": "https://math.stackexchange.com/questions/3769578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
F is a set of 101 functions from $[10]$ to itself. Prove that $\exists f,g \in F$ and $\exists i,j \in [10]$ such that $f(i) = g(i)$ and $f(j) = g(j)$. $F$ is a set of $101$ functions from $\{1,2,\ldots,10\}$ to itself. Prove that $\exists f,g\in F$ and $\exists i,j\in\{1,2,\ldots,10\}=[10]$ so $f(i)=g(i)$ and $f(j)=g(j)$. I didn’t manage to solve this problem. It looks like the pigeonhole principle, but for some reason it didn’t work for me. Any ideas? Thank you.
When $m$ and $n$ are positive integers, how many distinct functions are there from $\{1,2,...,n\}$ to $\{1,2,...,m\}$? To define a function $f:\{1,...,n\}\to\{1,...,m\}$ means that for each element $i\in\{1,...,n\}$ we have specified its image $f(i)\in\{1,...,m\}$. Since the codomain has $m$ elements, there are $m$ choices for $f(i)$. Because there are $n$ points to specify the image of, we see that there are $m^n$ total functions. Now we can solve your problem. Take $i=1$ and $j=2$. Then each map $f:\{1,...,10\}\to \{1,...,10\}$ restricts to a map $\tilde{f}:\{1,2\}\to\{1,...,10\}$. From the argument above, we see that there are $10^2=100$ maps $\{1,2\}\to\{1,...,10\}$. Thus, if we have a collection of $101$ maps $f_1,...,f_{101}$ from $\{1,...,10\}$ to itself, then by restricting their common domains we get a collection of $101$ maps $\tilde{f}_1,...,\tilde{f}_{101}$ from $\{1,2\}$ to $\{1,...,10\}$. Since we have shown that there are only $100$ such maps, there must exist distinct $k,l\in\{1,...,101\}$ where $\tilde{f}_k=\tilde{f}_l$. That is, $f_k(1)=f_l(1)$ and $f_k(2)=f_l(2)$ as desired.
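Not part of the answer — a concrete Python demonstration of the pigeonhole step: among 101 functions $\{1,\dots,10\}\to\{1,\dots,10\}$ (random here, but the guarantee is deterministic), two must agree at both inputs $1$ and $2$, since only $10^2=100$ restrictions to $\{1,2\}$ exist.

```python
import random

random.seed(7)
# 101 functions, each stored as the tuple of its 10 values
functions = [tuple(random.randrange(1, 11) for _ in range(10))
             for _ in range(101)]

seen = {}
pair = None
for idx, f in enumerate(functions):
    key = f[:2]                  # restriction of f to inputs 1 and 2
    if key in seen:
        pair = (seen[key], idx)  # guaranteed to occur by pigeonhole
        break
    seen[key] = idx
```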
{ "language": "en", "url": "https://math.stackexchange.com/questions/3769717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Discrete null recurrent Markov chain implies that $\lim_{n\to \infty} P(X_n=j) = 0$ In the book “Markov Chains” (by Norris), in Theorem 1.8.5 the author proves that if the chain is aperiodic, irreducible and null recurrent, then $\lim_{n \to \infty} P(X_n = j) = 0$. I’m having some trouble understanding one of the steps in the proof. First, since the chain is null recurrent, for $T_j$ the hitting time of state $j$, $$ \sum^\infty_{k=0}P(T_j > k\mid X_0=j) = +\infty $$ Given $\epsilon >0$, choose $K$ such that $$ \sum^{K-1}_{k=0}P(T_j >k\mid X_0=j) \geq 2/\epsilon $$ The following steps are the ones that I don’t understand. The author states that for $n\geq K-1$ $$ 1\geq \sum_{k=n-K+1}^n P(X_k =j) P(T_j >n-k\mid X_0=j)= \sum_{k=0}^{K-1} P(X_{n-k}=j) P(T_j >k\mid X_0=j) $$ So $P(X_{n-k}=j)\leq \epsilon/2$ for some $k\in\{0,...,K-1\}$. I don’t understand where either of these two inequalities comes from. If someone could clarify, I would really appreciate it.
The first inequality $1\geq \sum_{k=n-K+1}^n P(X_k =j) P(T_j >n-k\mid X_0=j)$ follows from the fact that the right-hand side is actually a probability: the $k$-th term is the probability that the chain is at $j$ at time $k$ and then does not return to $j$ up to time $n$, i.e. that the last visit to $j$ during the window $\{n-K+1,\dots,n\}$ happens at time $k$. These events are disjoint for different $k$, so their probabilities sum to at most $1$. For the second inequality, it's because we couldn't have $P(X_{n-k}=j) > \epsilon/2$ for all $k \in \{0, \dots, K-1\}$, since that would mean we have: $1 \geq \sum_{k=0}^{K-1} P(X_{n-k}=j) P(T_j >k\mid X_0=j)>\sum_{k=0}^{K-1} \frac{\epsilon}{2} P(T_j >k\mid X_0=j) \geq \frac{\epsilon}{2} \frac{2}{\epsilon}=1$ where we have used your second inequality: $\sum^{K-1}_{k=0}P(T_j >k\mid X_0=j) \geq 2/\epsilon$. We have reached $1>1$, which is absurd, so we must have $P(X_{n-k}=j)\leq \epsilon/2$ for some $k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3769807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate the following integral $ \int_1^{\infty} \frac{\lbrace x\rbrace-\frac{1}2}{x} dx$ $$\int_1^{\infty} \frac{\lbrace x\rbrace-\frac{1}2}{x} dx$$ Here $\lbrace\cdot\rbrace$ denotes the fractional part. I found this challenging integral, and I'm curious about the solution, so I decided to put some effort into solving it, but sadly I didn't succeed; any hints? Attempts: \begin{align} \int_1^{\infty} \frac{\lbrace x\rbrace-\frac{1}2}{x} dx&=\int_1^{\infty} \frac{2\lbrace x\rbrace-1}{2x} dx\\ &=\int_1^{\infty}\frac{\lbrace x\rbrace}{x}-\frac{1}{2x}dx\\ &=\int_1^\infty \frac{x-\lfloor x\rfloor-1}{x}-\frac{1}{2x}dx\\ &=\int_1^\infty \frac{x-\lfloor x\rfloor-1}{x} dx -\int_1^\infty \frac{dx}{2x} \end{align} I thought about this property: $$\int_0^\infty \varphi (x) dx=\lim_{a\to \infty} \int_0^a \varphi(x) dx$$ So I applied it only to the second fraction because its antiderivative was easy enough, and here's what I've got: \begin{align} \int_1^\infty \frac{dx}{2x}&=\lim_{a\to \infty} \int_1^a \frac{dx}{2x}\\ &=\lim_{a\to \infty}\frac{\ln (x)}{2}\bigg\vert_0^a\\ &=\lim_{a\to \infty}\frac{\ln (a)}2 -\frac{\ln (0)}{2} \end{align} And here I felt that I'm wrong, I can't get $\infty -\infty$. So any thoughts or hints? I'll be thankful!
This function is not integrable in the Lebesgue sense, so you can only evaluate the Cauchy principal value. That is, what you want to evaluate is the limit $$\lim_{M \rightarrow +\infty} \int_1^M \frac{\{x\}-\frac12}xdx.$$ It is easy to see that it suffices to take the limit for integer values of $M$. We first compute, for every positive integer $k$: $$\int_k^{k + 1}\frac{\{x\}-\frac12}xdx = \int_k^{k + 1}\frac{x- k-\frac12}xdx = 1 - \left(k + \frac 1 2\right) (\ln(k + 1) - \ln k).$$ We then take the sum: $$\int_1^{M + 1} \frac{\{x\}-\frac12}xdx = \sum_{k = 1}^M\left(1 - \left(k + \frac 1 2\right) (\ln(k + 1) - \ln k)\right).$$ This simplifies to: $$M - \left(M + \frac12\right)\ln(M + 1) + \ln M!$$ which, by Stirling's formula, converges to $\ln\frac{\sqrt{2\pi}}e\approx-0.0810614668$.
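As a quick numerical sanity check (the code and variable names are mine, not part of the original argument), one can evaluate the partial sums $M - (M+\tfrac12)\ln(M+1) + \ln M!$ directly, written here as the equivalent sum over $k$:

```python
import math

# Partial sum over k of 1 - (k + 1/2)*(ln(k+1) - ln k); this equals the
# integral from 1 to M+1 and should approach ln(sqrt(2*pi)/e).
target = 0.5 * math.log(2 * math.pi) - 1  # about -0.0810614668

M = 200_000
partial = sum(1 - (k + 0.5) * math.log((k + 1) / k) for k in range(1, M + 1))
```

The remaining error behaves like $1/(12M)$, so with $M=2\cdot 10^5$ the partial sum already agrees with $\ln\frac{\sqrt{2\pi}}{e}$ to about six decimal places.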
{ "language": "en", "url": "https://math.stackexchange.com/questions/3769948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How many distinct permutations of the string "NADAMADRID" have the word DAM appearing in them? Normally, under the Mississippi Rule, you would take the factorial of the total number of characters, then divide by the product of the factorials of the repeated characters' counts. In this case, however, they ask in how many permutations of a bigger string a certain word will appear. I was confused about how to do this problem and how I would count these possibilities.
If we discard the word $DAM$ temporarily, we have $7$ letters to arrange, out of which two letters of two types are similar. The arrangement of these can be done in $\frac{7!}{(2!)^2}$ ways. Once this is done, we have $8$ spaces for the word $DAM$ to be inserted into , so in total you get $$\frac{7!}{(2!)^2} \times 8 = 10080$$ ways.
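A brute-force check of this count (my own script): enumerate the $10!/(3!\,3!) = 100{,}800$ distinct arrangements and count those containing DAM as a substring.

```python
from collections import Counter

def distinct_perms(counts):
    # all distinct arrangements of a multiset given as {letter: multiplicity}
    if sum(counts.values()) == 0:
        yield ""
        return
    for ch in list(counts):
        if counts[ch]:
            counts[ch] -= 1
            for rest in distinct_perms(counts):
                yield ch + rest
            counts[ch] += 1

words = list(distinct_perms(Counter("NADAMADRID")))
with_dam = sum(1 for w in words if "DAM" in w)
```

Since the letter M occurs only once, DAM can occur at most once in any arrangement, which is why the block-insertion count is exact; the brute force confirms 10080.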
{ "language": "en", "url": "https://math.stackexchange.com/questions/3770168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Proofs by Induction: are my two proofs correct? I have been trying to understand how proof by mathematical induction works, and I am struggling a bit. But, I think I am understanding it and I just want to verify that what I am doing is correct (and if not, why?) I have attached a screenshot (as a link) of my problem (black ink) and my work (red ink). My main issue is understanding what the final conclusion should be. What I did was check to see if the left and right side of the problem were equal after assuming $k + 1$ is true, and adding the appropriate terms to both sides, and simplifying. So, in my final steps of the induction phase, my question is, did I reach the right result? Prove: $1 + 3 + 6 + \cdots + \dfrac{n(n + 1)}{2} = \dfrac{n(n + 1)(n + 2)}{6}$. Base: $P(1) = 1$. Induction: \begin{align*} \underbrace{1 + 3 + 6 + \cdots + \frac{k(k + 1)}{2}}_{\dfrac{k(k + 1)(k + 2)}{6}} + \frac{(k + 1)(k + 2)}{2} & = \frac{(k + 1)(k + 2)(k + 3)}{6}\\ \frac{k(k + 1)(k + 2)}{6} + \frac{(k + 1)(k + 2)}{2} & = \frac{(k + 1)(k + 2)(k + 3)}{6}\\ \frac{k(k + 1)(k + 2) + 3(k + 1)(k + 2)}{6} & = \frac{(k + 1)(k + 2)(k + 3)}{6}\\ \frac{(k + 1)(k + 2)(k + 3)}{6} & = \frac{(k + 1)(k + 2)(k + 3)}{6} \end{align*} Prove: $5 + 10 + 15 + \cdots + 5n = \dfrac{5n(n + 1)}{2}$ Base: $P(1) = 5$ Induction: \begin{align*} 5 + 10 + 15 + \cdots + 5k + 5(k + 1) & = \frac{5k(k + 1)}{2} + 5(k + 1)\\ \frac{5k(k + 1)}{2} + 5(k + 1) & = \frac{5k(k + 1)}{2} + 5(k + 1) \end{align*}
In a proof by mathematical induction, we wish to establish that some property $P(n)$ holds for each positive integer $n$ (or for each integer greater than some fixed integer $n_0$). We must first establish that the base case holds. Once we establish that it holds, we may assume the property holds for some positive integer $k$. We then need to prove that if $P(k)$ holds, then $P(k + 1)$ holds. Then, if our base case is $P(1)$, we obtain the chain of implications $$P(1) \implies P(2) \implies P(3) \implies \cdots$$ and $P(1)$, which establishes that the property holds for every positive integer. You should not assume $P(k + 1)$ is true. We must prove that $P(1)$ holds and that if $P(k)$ holds, then $P(k + 1)$ holds for each positive integer $k$. Let's look at the first proposition. Proof. Let $P(n)$ be the statement that $$1 + 3 + 6 + \cdots + \frac{n(n + 1)}{2} = \frac{n(n + 1)(n + 2)}{6}$$ Let $n = 1$. Then $$\frac{n(n + 1)}{2} = \frac{1(1 + 1)}{2} =\frac{1 \cdot 2}{2} = 1 = \frac{1 \cdot 2 \cdot 3}{6} = \frac{1(1 + 1)(1 + 2)}{6}$$ Hence, $P(1)$ holds. Since $P(1)$ holds, we may assume $P(k)$ holds for some positive integer $k$. Hence, $$1 + 3 + 6 + \cdots + \frac{k(k + 1)}{2} = \frac{k(k + 1)(k + 2)}{6}$$ This is our induction hypothesis. Let $n = k + 1$. Then \begin{align*} 1 + 3 + 6 + & \cdots + \frac{k(k + 1)}{2} + \frac{(k + 1)(k + 2)}{2}\\ & = \frac{k(k + 1)(k + 2)}{6} + \frac{(k + 1)(k + 2)}{2} && \text{by the induction hypothesis}\\ & = \frac{k(k + 1)(k + 2) + 3(k + 1)(k + 2)}{6}\\ & = \frac{(k + 1)(k + 2)(k + 3)}{6}\\ & = \frac{(k + 1)[(k + 1) + 1][(k + 1) + 2]}{6} \end{align*} Thus, $P(k) \implies P(k + 1)$ for each positive integer $k$. Since $P(1)$ holds and $P(k) \implies P(k + 1)$ for each positive integer $k$, $P(n)$ holds for each positive integer $n$.$\blacksquare$ I will leave the second proof to you.
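If it helps to double-check the identities themselves, here is a quick numerical verification of both closed forms (my code; this checks finitely many cases, while the induction is what proves them for every $n$):

```python
# Verify both summation formulas for the first 200 values of n.
for n in range(1, 201):
    assert sum(k * (k + 1) // 2 for k in range(1, n + 1)) == n * (n + 1) * (n + 2) // 6
    assert sum(5 * k for k in range(1, n + 1)) == 5 * n * (n + 1) // 2
ok = True
```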
{ "language": "en", "url": "https://math.stackexchange.com/questions/3770280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Prove: $(n-1)! \equiv -1 \pmod{n}$ I'm trying to self-learn some number theory. Currently, I'm trying to prove the following: $$(n-1)! \equiv -1 \pmod{n}$$ However, I'm a bit stumped on this one. Honestly, I'm not sure where's a good place to start.
This is false for general $n$. For example, $5!=120\equiv 0\pmod 6$, not $-1$. In fact, $(n-1)!\equiv 0\pmod{n}$ for all composite $n\geq 6$.
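A small script makes the pattern visible (my code; the statement in the question is Wilson's theorem, which holds exactly when $n$ is prime):

```python
from math import factorial

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# (n-1)! mod n: it is n-1 (i.e. -1) for primes, 0 for composite n >= 6,
# and 2 for the exceptional case n = 4.
residues = {n: factorial(n - 1) % n for n in range(2, 30)}
```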
{ "language": "en", "url": "https://math.stackexchange.com/questions/3770420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Corollary 5.39, Lee - Introduction to Smooth Manifolds I am struggling with understanding how to prove Corollary 5.39 in Lee - Introduction to Smooth Manifolds. I found an answer on this post: A characterization of tangent space to level set of a smooth submersion, but I do not understand the computation done there. Why do we have $d\Phi_p^i(v) = v\Phi^i$? By definition of the pushforward, for $f \in C^\infty(\mathbb{R})$ we have $d\Phi_p^i(v)f = v(f\circ\Phi^i).$ But by the linked answer, it seems we would also have $d\Phi_p^i(v)f = (v\Phi^i)f$. But $(v\Phi^i)f \in C^\infty(\mathbb{R})$, not $\mathbb{R}$. What am I missing here?
We know that $d\Phi_p^i(v)$ is a vector in $T_{\Phi^i(p)}\mathbb{R} = \text{span }\Big(\frac{d}{dt}\big|_{\Phi^i(p)}\Big)$. So $d\Phi^i_p(v) = a \frac{d}{dt}\big|_{\Phi^i(p)}$ for some real number $a$. Since we often identify one dimensional vector $\lambda \, \frac{d}{dt}\big|_{t}$ with its component $\lambda$ itself, so (in our case) we write $d\Phi^i_p(v) = a$. But $a$ is exactly $v\Phi^i$ because $$ a = \Big( a \, \frac{d}{dt}\Big|_{\Phi^i(p)} \Big) \text{Id}_{\mathbb{R}} = d\Phi^i_p(v) \,\text{Id}_{\mathbb{R}} = v(\text{Id}_{\mathbb{R}} \circ \Phi^i ) = \color{blue}{v \Phi^i}. $$ So we can write $d\Phi^i_p(v) = v \Phi^i$. The claim in the corollary still holds whether or not we apply this identification. Personally, i prefer to prove the corollary (without identification) as : $d\Phi_p(v) = 0 \in T_{\Phi(p)}\mathbb{R}^k$ if and only if $$ 0=d\Phi_p(v)y^i =v (y^i \circ \Phi) = v\Phi^i,\quad i=1,\dots,k, $$ where $y^i : \mathbb{R}^k \to \mathbb{R}$ are standard coordinate functions of $\mathbb{R}^k$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3770568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Painting of a block in P&C Consider a grid, as shown in the figure, having 8 blocks of dimension 1 × 1. Each block is to be painted either blue or red. In how many ways can this be done if at least one block of dimension 2 × 2 is to be painted completely red? I am not able to approach this problem. Can this problem be solved by the inclusion-exclusion principle?
Define sets $A,B,C$ as follows . . . * *Let $A$ be the set of configurations for which the upper left $2{\times}2$ submatrix is painted red.$\\[4pt]$ *Let $B$ be the set of configurations for which the upper right $2{\times}2$ submatrix is painted red.$\\[4pt]$ *Let $C$ be the set of configurations for which the lower right $2{\times}2$ submatrix is painted red. The goal is to find $|A\cup B\cup C|$. Applying the principle of inclusion-exclusion, $$ |A\cup B\cup C| = \Bigl(|A|+|B|+|C|\Bigr) - \Bigl(|A\cap B|+|B\cap C|+|C\cap A|\Bigr) + |A\cap B\cap C| $$ Then we get * *$|A|=|B|=|C|=2^4$ since for each of those three sets, there are exactly $4$ free squares.$\\[4pt]$ *$|A\cap B|=|B\cap C|=2^2$ since for each of those two sets, there are exactly $2$ free squares.$\\[4pt]$ *$|C\cap A|=2$ since for that set, there is exactly $1$ free square.$\\[4pt]$ *$|A\cap B\cap C|=1$ since for that set, there are no free squares.$\\[4pt]$ hence $$ |A\cup B\cup C| = 3{\,\cdot\,}2^4-(2{\,\cdot\,}2^2+2)+1 = 39 $$
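Since the figure is not reproduced here, the following brute force (my code) models the 8 cells abstractly as indices 0..7, with the three 2×2 blocks chosen to overlap exactly as used in the counting above: $A$ and $B$ share 2 cells, $B$ and $C$ share 2 cells, $A$ and $C$ share 1 cell, and together they cover all 8 cells. The specific cell labels are my own assumption.

```python
from itertools import product

A, B, C = {0, 1, 2, 3}, {2, 3, 4, 5}, {3, 4, 6, 7}

count = 0
for colouring in product("RB", repeat=8):
    red = {i for i, c in enumerate(colouring) if c == "R"}
    if A <= red or B <= red or C <= red:
        count += 1
```

Enumerating all $2^8 = 256$ colourings reproduces the inclusion-exclusion total of 39.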
{ "language": "en", "url": "https://math.stackexchange.com/questions/3770938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The quadratic equations $x^2+mx-n=0$ and $x^2-mx+n=0$ have integer roots. Prove that $n$ is divisible by $6$. QUESTION: Suppose that $m$ and $n$ are integers such that both the quadratic equations $$x^2+mx-n=0$$ and $$x^2-mx+n=0$$ have integer roots. Prove that $n$ is divisible by $6$. MY APPROACH: $\because$ the roots $\in\Bbb{Z}$, the discriminants of the quadratic equations must be perfect squares. $$\therefore m^2+4n=p^2$$ and $$m^2-4n=q^2$$ for some $p,q\geq 0$ with $p,q\in\Bbb{Z}$. Now, subtracting these equations, we get $$8n=p^2-q^2$$ $$\implies p^2-q^2\equiv0\pmod{8}$$ Therefore, $p$ and $q$ cannot be of the form $2t$ where $t$ is odd. But this does not seem to help much. So, going back one step, we can write $$n=\frac{p^2-q^2}{8}$$ But here I am stuck. I do not know how I may use $8$ together with properties of squares to prove that $n$ must be divisible by $6$. Any help will be much appreciated... Thank you so much :)
Now as you got $m^2+4n=p^2$ and $m^2-4n=q^2$, we solve further by taking cases. Case 1: $m$ is even Therefore let $m=2k$ for some positive integer $k$ and by judging the equation we can see that $p$ and $q$ are even too. Let $p=2a$ and $q=2b$ for some positive integers $a$ and $b$. Substituting the values of $m,p$ and $q$, we get $$k^2+n=a^2 \\ k^2-n=b^2$$ This implies $$a^2+b^2=2k^2 \\ a^2-b^2=2n$$ Now let's assume that $n$ is odd, but that would mean $a^2-b^2$ is not divisible by $4$, so either $a^2 \equiv 1\pmod{4}$ and $b^2\equiv 0 \pmod{4}$ or vice versa (Remember that a square is always $\equiv 0~\text{or}~1\pmod{4}$). Therefore $a^2+b^2\equiv 1 \pmod{4}$ in both cases. But we have on the other side $a^2+b^2=2k^2$ which is always either $\equiv 0 \pmod{4}$ or $\equiv 2 \pmod{4}$. Thus, we get a contradiction. This implies $n$ is even. Now we assume $n$ is not divisible by $3$ i.e either $2n\equiv 1 \pmod{3}$ or $2n\equiv 2\pmod{3}$. Now a square is always either $\equiv 0 ~\text{or}~ 1\pmod{3}$. Therefore $a^2-b^2=2n\equiv 2\pmod{3}$ is never possible and thus the remaining possibility is $a^2-b^2=2n\equiv 1 \pmod{3}$. This implies $a^2\equiv 1 \pmod{3}$ and $b^2\equiv 0\pmod{3}$. Therefore, $a^2+b^2 \equiv 1\pmod{3}$, but we had from the other equation $a^2+b^2=2k^2$ which is always $\equiv 0~\text{or}~2\pmod{3}$. Thus, we get a contradiction. Hence, $n$ is divisible by $3$ as well. Thus, $n$ is divisible by $6$. Similar analysis goes for the other case where $m$ is odd.
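As a sanity check of the statement itself (not of the proof), one can search exhaustively over a small range (my code). Both quadratics have integer roots precisely when $m^2+4n$ and $m^2-4n$ are perfect squares, since a perfect-square discriminant automatically has the same parity as $m$.

```python
import math

def is_square(k):
    return k >= 0 and math.isqrt(k) ** 2 == k

found = []
for m in range(0, 60):
    for n in range(-400, 401):
        if is_square(m * m + 4 * n) and is_square(m * m - 4 * n):
            found.append((m, n))
            assert n % 6 == 0  # the claim of the problem
```

The smallest nontrivial example found is $(m, n) = (5, 6)$, giving $25 + 24 = 49$ and $25 - 24 = 1$.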
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Let $A_1 \cap A_2 \cap \dots \cap A_n \neq \varnothing $. Then $A_1 \cup A_2 \cup \dots \cup A_n \neq \varnothing$. Let $A_1,A_2,\dots, A_n$ be sets such that $A_1 \cap A_2 \cap \dots \cap A_n \neq \varnothing $ holds for all $n$. Then $A_1 \cup A_2 \cup \dots \cup A_n \neq \varnothing$. Is the following proof correct? Proof: If $A_1 \cap A_2 \cap \dots \cap A_n \neq \varnothing $, then $A_1,A_2,\dots, A_n$ must all have at least one common element. Therefore the sets $A_1,A_2,\dots, A_n$ are all non-empty. Hence there exists one non-empty set among $A_1,A_2,\dots, A_n$. Hence $A_1 \cup A_2 \cup \dots \cup A_n \neq \varnothing$.
Writing formal details: $$A_1 \cap A_2 \cap \dots \cap A_n \neq \varnothing \Rightarrow\\\Rightarrow \exists x \in A_1 \cap A_2 \cap \dots \cap A_n \Rightarrow\\\Rightarrow \exists i, 1 \leqslant i \leqslant n, x \in A_i \Rightarrow\\ \Rightarrow\ x \in A_1 \cup A_2 \cup \dots \cup A_n \Rightarrow\\ \Rightarrow A_1 \cup A_2 \cup \dots \cup A_n \neq \varnothing$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Equal number of $n$th roots of unity within character values $f_1(a), \dots, f_m(a)$. (Apostol exercise 6.12 Intro to ANT) This problem is taken from Exercise 6.12 from Apostol's "Introduction to Analytic Number Theory". Verbatim, the problem states Let $f_1, \dots, f_m$ be the characters of a finite group $G$ of order $m$, and let $a$ be an element of $G$ of order $n$. Theorem 6.7 shows that each number $f_r(a)$ is an $n$th root of unity. Prove that every $n$th root of unity occurs equally often among the numbers $f_1(a), f_2(a), \dots, f_m(a)$. Theorem 6.7 above simply states that every $f_r(a)$ is an $n$th root of unity. The case of $n = 1$ is trivial, I'll assume $n > 1$ throughout the rest of this question. Apostol gives as a hint that evaluation of the sum $$\sum_{r=1}^m \sum_{k=1}^n f_r(a^k) e^{-2\pi ik/n}$$ in "two different ways" shows how many $f_r(a) = e^{2\pi i/n}$. The sum above evaluates to $m$, since $$S = \sum_{r=1}^m \sum_{k=1}^n f_r(a^k) e^{-2\pi ik/n} = \sum_{k=1}^n e^{-2\pi ik/n} \sum_{r=1}^m f_r(a^k), $$ and, by Theorem 6.13, $$\sum_{r=1}^m f_r(a^k) = \begin{cases} m & k = n \\ 0 & \text{otherwise}. \end{cases}$$ Therefore, $S$ vanishes for $k = 1, 2, \dots, n-1$, which implies $S = me^{-2\pi i} = m$. The "second way" to evaluate $S$ is to notice that the sum $$\sum_{k=1}^n [f_r(a)e^{-2\pi i/n}]^k$$ equals $n$ only when $f_r(a) = e^{2\pi i/n}$, otherwise, $f_r(a)e^{-2\pi i/n}$ is some non-one $n$th root of unity $\omega$ and the sum above equals $$\sum_{k=1}^n \omega^k = 0$$ by properties of roots of unity. Thus, since $S = m$, if $\alpha$ represents the number of $f_r(a) = e^{2\pi i/n}$, then $m = \alpha n$, or $\alpha =m/n$. However, what would lead one to consider the sum $S$ in the first place? Where did it come from? It seems almost as if $S$ appeared out of "thin air". Is there another way to prove this statement without the evaluation of $S$? Thanks in advance!
Here is a proof, using only facts from Chapter 6 of Apostol, and using its notation. Let $a\in G$ have order $n$. The characters form a group under multiplication. Therefore, if $f_1,\dots,f_r$ are the characters, then multiplying by $f_i$ simply permutes the characters. In particular, for any fixed $a\in G$, it permutes the values in the set $$X=\{f_j(a)\mid 1\leq j\leq r\}.$$ This is the set we wish to prove is simply each $n$th root of unity appearing $r/n$ times. Suppose that $f_i(a)$ is a primitive $n$th root of unity. Then multiplying by $f_i$ has the effect of multiplying all elements in the set $X$ by $f_i(a)$, a primitive $n$th root of unity. The only way such a multiplication can preserve $X$ is if $X$ is uniform. Thus we must check that there is such an element $f_i$. If not, then all of the $f_j(a)$ (since it is closed under multiplication as the $f_i$ form a group) must be all $m$th roots of unity for some $m$ strictly dividing $n$ (as they are already $n$th roots of unity). But now, $a^{n/m}$ is a non-trivial element, and $f_i(a^{n/m})=1$ for all $1\leq i\leq r$. By Theorem 6.10 from Apostol (column orthogonality relation), $a^{n/m}$ is the identity, a contradiction.
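Here is a tiny numerical illustration in the cyclic case (my choice of example): for $G=\Bbb Z/12\Bbb Z$ the characters are $f_r(a)=e^{2\pi i r a/12}$, $r=0,\dots,11$, and $a=3$ has order $n=4$, so each $4$th root of unity should occur $m/n = 12/4 = 3$ times among the values $f_r(a)$.

```python
import cmath

m, a, n = 12, 3, 4
values = [cmath.exp(2j * cmath.pi * r * a / m) for r in range(m)]

# Count how often each n-th root of unity occurs among the character values.
counts = [0] * n
for v in values:
    for k in range(n):
        if abs(v - cmath.exp(2j * cmath.pi * k / n)) < 1e-9:
            counts[k] += 1
```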
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is a function increasing if the derivative is positive except at one point of an interval? Let $f: \mathbb{R} \to \mathbb{R}$ be differentiable on $(a,b)$. Suppose $f' > 0$ on $(a,b)$ except at a point $c \in (a,b)$ (that is, $f'(c) \leq 0$). * *Is $f$ increasing on $(a,b)$? *Must $f'(c)$ be zero, or can it be negative? Clearly $f$ is increasing on $(a,c) \cup (c,b)$ but I'm not sure about how the value at $c$ compares with the values at other points. And I think $f'(c)$ must be zero: If $f'(c) < 0$ then for small positive $h$ we have $\frac{f(c+h) - f(c)}{h}$ is also negative (by definition of the derivative as a limit of this ratio), so $f(c+h) - f(c) < 0$. Since $f'$ is positive on $(c,c+h)$, the Mean Value Theorem implies that $f(c+h) - f(c) = f'(d)h$ for some $d \in (c, c+h)$, and $f'(d)h$ is a product of two positive numbers, hence positive. So $f(c+h) - f(c) > 0$, a contradiction.
You can use the fact that the derivative has the intermediate value property (Darboux's theorem) to rule out $f'(c)<0$: if $f'(c)<0$, the derivative would have to vanish at some point other than $c$, contradicting that $c$ is the only point where the derivative is not positive. Now pick $x<c$; then by the MVT there is $\eta\in(x,c)$ such that $f(c)-f(x)=f'(\eta)(c-x)>0$ [this part does not even require the derivative to exist at $c$]. Similarly for $y>c$. Hence $f$ is increasing on $(a,b)$, and $f'(c)$ must be $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Why isn't projection of $\mathbf{v}$ onto $\mathbf{u}$ defined so that it's perpendicular to the straight line segment connecting $\mathbf{u, v}$? Kuldeep Singh. Linear Algebra: Step by Step (2013). p 312 Why isn't $\mathbf{proj_u v}$ defined as the red vector below with $\mathbf{v} \perp \mathbf{d}?$ I already know, but can't intuit why, $\mathbf{proj_u v}$ is defined as the green vector below with $\mathbf{p} \perp \mathbf{u}$.
It's so we can reconstruct the original vector easily from its projections. If we have an orthonormal basis $B=\{b_1,\dots, b_n\}$ of a vector space $V$, and any vector $v\in V$, then we have $$v=\sum_{i=1}^n \operatorname{proj}_{b_i}(v).$$ Essentially, $\operatorname{proj}_uv$ is the "$u$-component" of $v$. For instance, a vector $v=(1,2,3)\in\mathbb R^3$ can be split into its $x$-, $y$-, and $z$-components by projecting it onto the corresponding unit vectors $e_x,e_y,e_z$ and then reconstructed as $$\operatorname{proj}_{e_x}(v)+\operatorname{proj}_{e_y}(v) +\operatorname{proj}_{e_z}(v)=1e_x+2e_y+3e_z=(1,2,3)=v.$$ This is not as easily done with your way of projecting.
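A minimal numeric illustration of this reconstruction (plain Python, my own helper names):

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def proj(u, v):
    # orthogonal projection of v onto u: (v.u / u.u) u
    c = dot(v, u) / dot(u, u)
    return [c * a for a in u]

v = [1.0, 2.0, 3.0]
basis = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
parts = [proj(b, v) for b in basis]
recon = [sum(p[i] for p in parts) for i in range(3)]  # sums back to v
```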
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771550", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What does the sum of the reciprocals of n squared make if n is a natural number? I’ve recently found out that $\sum_{n=1}^{\infty}\frac{1}{n^2+n}$ equals $1$, since its partial sums are $\frac{1}{2}, \frac{2}{3}$, and so on. After that, I became curious what happens if I do the same thing with the reciprocals of $n$ squared, i.e. $$\sum_{n=1}^{\infty} \frac{1}{n^2}$$ I couldn’t find the answer. The partial sums don’t take a neat form like $\frac{a}{a+1}$. Could anyone tell me what it approaches?
The sum is $\pi^2/6$. Euler first figured that out. It's no surprise and no disgrace that you didn't. See https://en.wikipedia.org/wiki/Basel_problem .
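A quick numerical check of the value (my code); the tail $\sum_{n>N} 1/n^2$ lies between $1/(N+1)$ and $1/N$, so the partial sum at $N = 10^5$ sits just below $\pi^2/6$:

```python
import math

N = 100_000
partial = sum(1.0 / (n * n) for n in range(1, N + 1))
target = math.pi ** 2 / 6
```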
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Represent $f(x) - f(y)$ as an integral Description I've come across the following transition in a textbook of Convex Optimisation. I couldn't figure out what's going on so that I'd appreciate if anyone hits me with any hint! Problem Suppose $x, y \in \mathbb{R}^n$ and $f$ be a $\beta$-smooth convex function on $\mathbb{R}^n$; Then the transition of interest goes as $$f(x) - f(y) = \int^1_0 \nabla f(y + t(x - y))^{T} (x - y) dt $$ Additional Comment This transition appears in the proof of the convergence rate of Gradient Descent for a smooth convex objective function $f$.
This is just the fundamental theorem of line integrals, which says for (sufficiently smooth) functions $f$, $$f(x) - f(y) = \int_{C} \overrightarrow{\nabla f(r)} \cdot \vec{dr}$$ where $C$ is any path running from $y$ to $x$. Think of it as a generalization of the fundamental theorem of calculus to higher dimensions, which roughly says that the difference in a function evaluated at two points $x$ and $y$ can be calculated as the integral of its derivative over the range $[x, y]$. Here, the derivative is the gradient (since we have multiple variables now). For your problem, $\beta$-smoothness is more than sufficient to use the theorem above, so just let $r(t) = y + t(x - y)$ be a parameterization of the straight line $L$ from $y$ to $x$, so that $r(0) = y$ and $r(1) = x$. Then from the fundamental theorem for line integrals it follows that \begin{align*} f(x) - f(y) &= \int_{L} \overrightarrow{\nabla f}(r) \cdot \overrightarrow{dr} = \int_0^1 \overrightarrow{\nabla f (r(t))} \cdot \overrightarrow{r'(t)} dt \\ &= \int_{0}^1 \overrightarrow{\nabla f (y + t(x - y))} \cdot \overrightarrow{(x - y)} dt \end{align*}
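If you want to see this numerically, here is a small check with a function of my own choosing (midpoint rule along the segment):

```python
import math

def f(u):
    return u[0] ** 2 + 3 * u[0] * u[1] + math.sin(u[1])

def grad_f(u):
    return [2 * u[0] + 3 * u[1], 3 * u[0] + math.cos(u[1])]

x, y = [2.0, -1.0], [0.5, 1.5]

# Midpoint-rule approximation of the integral of grad f . (x - y)
# along r(t) = y + t*(x - y), t in [0, 1].
N = 10_000
integral = 0.0
for i in range(N):
    t = (i + 0.5) / N
    r = [y[0] + t * (x[0] - y[0]), y[1] + t * (x[1] - y[1])]
    g = grad_f(r)
    integral += (g[0] * (x[0] - y[0]) + g[1] * (x[1] - y[1])) / N
```

The accumulated value matches $f(x) - f(y)$ to high accuracy.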
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771765", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Four Number Theorem : Let $a$, $b$, $c$, $d$ be integers such that $ab = cd$. Let $a$, $b$, $c$, $d$ be integers such that $ab = cd$. Then there exist integers $x$, $y$, $z$, $w$ such that $xy = a$, $zw = b$, $xz = c$, $yw = d$. My Progress: I tried playing with $\gcd(a,c)$, $\gcd(a,d)$,$\gcd(b,c)$, $\gcd(b,d)$. But I am not able to proceed. Please post hints rather than solution, it helps me a lot. Thanks in advance.
Firstly note that it is sufficient to prove the theorem when $a,b,c,d,x,y,z,w$ are all natural numbers. For if any of the given numbers is $0$ then the solution tuple $(x,y,z,w)$ is trivial and if there are negatives involved you can look for $x,y,z,w$ for $\lvert a \rvert,\lvert b \rvert, \lvert c \rvert, \lvert d \rvert$ and then adjust for signs. If $b = 1$ you can take $(x,y,z,w) = (c,d,1,1)$, say the result holds for all $a,b,c,d$ when $1 \leq b < n$ and say $an = cd$ for some $a,c,d$. Let $p$ be a prime divisor of $n$ then $p \vert c$ or $p \vert d$. Say $p \vert c$, then we'll have an equation of the form $am = c'd$ where $n=mp,c=pc'$ and $1 \leq m<n$ so by hypothesis there exists $(r,s,t,u)$ all naturals such that $a = rs, m = tu, c' = rt, d = su$ that gives $n = (pt)u$ and $c = r(pt)$, therefore $ (r,s,pt,u)$ is the tuple corresponding to $an = cd$, similarly one can find the tuple if $p \vert d$. This proves the theorem for natural numbers by induction.
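For what it's worth, there is also a direct construction (different from the induction above) that can be checked by machine: take $x=\gcd(a,c)$, $y=a/x$, $z=c/x$, $w=b/z$; then $\gcd(y,z)=1$ and $yb=zd$ force $z\mid b$, and all four required products hold. A brute-force verification over a small range of positive integers (my code):

```python
from math import gcd

def four_numbers(a, b, c, d):
    # assumes a*b == c*d, all positive
    x = gcd(a, c)
    y, z = a // x, c // x
    w = b // z
    return x, y, z, w

checked = 0
for a in range(1, 30):
    for b in range(1, 30):
        for c in range(1, 30):
            if (a * b) % c:
                continue
            d = a * b // c
            x, y, z, w = four_numbers(a, b, c, d)
            assert (x * y, z * w, x * z, y * w) == (a, b, c, d)
            checked += 1
```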
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 1 }
Confusion of additive and multiplicative notations. If gcd$(m,n)=1$, then $\phi(mn)=\phi(m)\phi(n)$. My textbook writes that $f:\Bbb Z/_{(mn)}\Bbb Z\to\Bbb Z/_m\Bbb Z \times \Bbb Z/_n\Bbb Z$ by $f([a]_{mn})=([a]_m,[a]_n)$ is an isomorphism. So far, I think he is discussing the additive group. But when he claims that $f(U(_{mn}\Bbb Z))=U(_m\Bbb Z)\times U(_n\Bbb Z)$, he is using the multiplication on $\Bbb Z/_{(mn)}\Bbb Z$, and I'm confused about whether changing operations is alright or not. If he wants to use the multiplication on $\Bbb Z/_{(mn)}\Bbb Z$, why doesn't he just write $f:(\Bbb Z/_{(mn)}\Bbb Z)^{\times}\to(\Bbb Z/_m\Bbb Z)^\times \times (\Bbb Z/_n\Bbb Z)^\times$, although $(\Bbb Z/_n\Bbb Z)^\times$ and $(\Bbb Z/_m\Bbb Z)^\times$ may not be groups? Hope someone can help me understand the notation here.
The first isomorphism is a ring isomorphism, so it respects both addition and multiplication. The second assertion concerns the groups of units of the rings involved: a ring isomorphism sends units to units, so it restricts to an isomorphism of the unit groups $U(\Bbb Z/_{mn}\Bbb Z)\cong U(\Bbb Z/_m\Bbb Z)\times U(\Bbb Z/_n\Bbb Z)$, and comparing orders gives $\phi(mn)=\phi(m)\phi(n)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3771972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simplify $\displaystyle\prod_{j=1}^{K}\exp(-2\pi\lambda_j(sp_j)^{2/\alpha}\int_0^{\infty}r\int_0^{\infty}e^{-t(1+r^{\alpha})}dtdr)$. An equation involving the Poisson point process is formulated as: $$\prod_{j=1}^{K}\exp(-2\pi\lambda_j(sp_j)^{2/\alpha}\int_0^{\infty}r\int_0^{\infty}e^{-t(1+r^{\alpha})}dtdr).$$ Some algebraic manipulations are carried out and the equation is rewritten as: $$\exp(-s^{2/\alpha}C(\alpha)\sum_{i=1}^{K}\lambda_ip_i^{2/\alpha}),$$ where $$C(\alpha)=\frac{2\pi^2 \csc(\frac{2\pi}{\alpha})}{\alpha}.$$ I only know the manipulation is related to the Gamma function. I want to know the details of the manipulation. Thanks a lot! : )
Focus on simplifying $$2\pi\int_0^{\infty}r\int_0^{\infty}e^{-t(1+r^{\alpha})}dtdr =2\pi\int_0^{\infty}\frac r{1+r^\alpha}dr$$ (the inner integral is $\frac1{1+r^\alpha}$, and the outer integral converges for $\alpha>2$). Substitute $u=\frac1{1+r^\alpha}$ to get a Beta function form $$\frac{2\pi}{\alpha}\int_0^1(1-u)^{\frac2\alpha-1}u^{-\frac2\alpha} du=\frac{2\pi}{\alpha}\Gamma\left(1-\frac2\alpha\right)\Gamma\left(\frac2\alpha\right)$$ Use the Gamma function property $$\Gamma\left(1-z\right)\Gamma\left(z\right)=\pi\csc(\pi z), \quad z\not\in\mathbb Z$$ with $z=\frac2\alpha$ to arrive at $C(\alpha)=\frac{2\pi^2\csc(\frac{2\pi}{\alpha})}{\alpha}$.
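A numerical cross-check for one value, say $\alpha=4$ (my code; the truncated tail of the integral beyond $R$ is under $2\pi/(2R^2)$):

```python
import math

alpha = 4
# Closed form 2*pi^2*csc(2*pi/alpha)/alpha and its Gamma-function form.
closed = 2 * math.pi ** 2 / (alpha * math.sin(2 * math.pi / alpha))
gamma_form = (2 * math.pi / alpha) * math.gamma(1 - 2 / alpha) * math.gamma(2 / alpha)

# Midpoint rule for 2*pi * integral of r/(1+r^alpha) on [0, R].
R, N = 200.0, 400_000
numeric = 0.0
for i in range(N):
    r = (i + 0.5) * R / N
    numeric += 2 * math.pi * r / (1 + r ** alpha) * (R / N)
```

For $\alpha = 4$ all three quantities come out to $\pi^2/2 \approx 4.9348$.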
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A particular problem on series Problem: Show that there exist $c>0$ such that for all $N\in \mathbb N$ we have $$ \sum_{n=N+1}^{\infty}\left(\sqrt{n+\frac{1}{n}}-\sqrt{n}\right)\le \frac{c}{\sqrt{N}} $$ I have no clue how to solve this. All I know is this fact $$\int_0^1\left(\sum_{n\in \mathbb N}\frac{1}{ \sqrt{n^3+nx}}\right)\mathrm dx=\sum_{n\in \mathbb N}\left(\int_0^1\frac{1}{ \sqrt{n^3+nx}}\mathrm dx \right)$$ $$=2\sum_{n=1}^{\infty}\left(\sqrt{n+\frac{1}{n}}-\sqrt{n}\right)$$ as the series of function is uniformly convergent . Note: I am quoting this fact regarding the series of function because the above problem was meant to be solved as a consequence of this fact . But other methods are welcome as well.
$$ \sqrt{n+\frac{1}{n}} -\sqrt n = \frac{1}{n\left(\sqrt{n+\frac{1}{n}} +\sqrt n\right)} \leq \frac{1}{2n^{3/2}} .$$ Now $$ \sum_{n=N+1}^\infty n^{-3/2} \leq \int_N^\infty x^{-3/2} dx =\frac{ 2}{\sqrt N}$$
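So the claim holds with $c=1$. A numerical check for $N=10$ (my code; the algebraic form of each term avoids cancellation for large $n$, and truncating the infinite sum only makes it smaller):

```python
import math

N = 10
# sqrt(n + 1/n) - sqrt(n) rewritten as (1/n) / (sqrt(n + 1/n) + sqrt(n))
tail = sum((1.0 / n) / (math.sqrt(n + 1.0 / n) + math.sqrt(n))
           for n in range(N + 1, 1_000_000))
bound = 1.0 / math.sqrt(N)
```

The truncated tail comes out near $0.31$, safely below $1/\sqrt{10}\approx 0.316$.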
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find the asymptote and the upper bound for $1\big/ \ln\left(\frac{x+1}{x-1}\right)$ as $x\to\infty$. I need to show that $\left[\ln\left(\frac{x+1}{x-1}\right)\right]^{-1}$ asymptotically approaches a line (which I should determine) from below, and hence show that $\frac{x}{2} > \left[\ln\left(\frac{x+1}{x-1}\right)\right]^{-1}$ for $x>1$. I would be very happy if someone could help. Thank you.
Fix $x > 1$ and write $\varphi_x(t) = \frac{1}{x+t}$. Since $\varphi_x$ is strictly convex, it follows from Jensen's inequality that \begin{align*} \frac{1}{2} \log\left(\frac{x+1}{x-1}\right) = \frac{1}{2} \int_{-1}^{1} \varphi_x(t) \, \mathrm{d}t > \varphi_x \left( \frac{1}{2} \int_{-1}^{1} t \, \mathrm{d}t \right) = \frac{1}{x} \end{align*} On the other hand, again by the strict convexity of $\varphi_x$, \begin{align*} \frac{1}{2} \log\left(\frac{x+1}{x-1}\right) = \frac{1}{2} \int_{-1}^{1} \varphi_x(t) \, \mathrm{d}t < \frac{\varphi_x(-1) + \varphi_x(1)}{2} = \frac{x}{x^2-1}. \end{align*} Altogether, it follows that $$ \frac{x}{2} - \frac{1}{2x} < \left[\log\left(\frac{x+1}{x-1}\right)\right]^{-1} < \frac{x}{2} $$ for all $x > 1$. Addendum. It is easy to check that the above function admits the Laurent expansion of the form $$ \left[\log\left(\frac{x+1}{x-1}\right)\right]^{-1} = \frac{x}{2} - \sum_{n=0}^{\infty} \frac{a_{2n+1}}{x^{2n+1}} $$ as $x\to\infty$. Now here comes a hard part: $\texttt{Mathematica}$ seems to suggest $a_{2n+1} > 0$ for all $n \geq 0$. For instance, $$ a_1 = \frac{1}{6}, \quad a_3 = \frac{2}{45}, \quad a_5 = \frac{22}{945}, \quad a_7 = \frac{214}{14175}, \quad a_9 = \frac{5098}{467775}, \quad \cdots. $$ It would be interesting to be able to prove it, although I have no good idea to begin with.
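The sandwich can be checked numerically at a few points (my code):

```python
import math

# x/2 - 1/(2x) < 1/log((x+1)/(x-1)) < x/2 for x > 1
for x in [1.01, 1.5, 2.0, 5.0, 10.0, 100.0, 1000.0]:
    g = 1.0 / math.log((x + 1) / (x - 1))
    assert x / 2 - 1 / (2 * x) < g < x / 2
ok = True
```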
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Product of two Riemann surfaces $X$ with $H^1(X,T_X) < H^2(X,\mathcal{O})$ In Buchdahl's paper Algebraic deformations of compact Kähler surfaces, the author remarks that the product of two Riemann surfaces of genus at least 5 satisfies: the dimension of $H^1(X,T_X)$ < the dimension of $H^2(X,\mathcal{O})$, but I can't see why. Why must the genus be at least 5? How does one compute the dimension of $H^1(X,T_X)$ for the product of two Riemann surfaces? And, by the way, how can we know $H^2(X,T_X)$ should not be zero? Any comment is welcome, thanks!
If $X = C \times D$ then $T_X = T_C \boxtimes \mathcal{O}_D \oplus \mathcal{O}_C \boxtimes T_D$ and by Kunneth formula $$ h^1(X,T_X) = h^1(T_C)h^0(\mathcal{O}_D) + h^0(T_C)h^1(\mathcal{O}_D) + h^0(\mathcal{O}_C)h^1(T_D) + h^1(\mathcal{O}_C)h^0(T_D) = (3g(C) - 3) + 0 + (3g(D) - 3) + 0. $$ Similarly, $$ h^2(X,\mathcal{O}_X) = h^1(C,\mathcal{O}_C)h^1(D,\mathcal{O}_D) = g(C)g(D). $$ It remains to note that $$ g(C)g(D) - 3g(C) - 3g(D) + 6 = (g(C) - 3)(g(D) - 3) - 3 $$ and if $g(C),g(D) \ge 5$ then this is positive.
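The final inequality is elementary enough to check by machine (my code): $h^1(T_X) < h^2(\mathcal{O}_X)$ reads $3g_1 - 3 + 3g_2 - 3 < g_1 g_2$, i.e. $(g_1-3)(g_2-3) > 3$, which holds uniformly once both genera are at least 5, while genus 4 on both factors fails.

```python
# (g1 - 3)*(g2 - 3) > 3 for all g1, g2 >= 5 ...
assert all((g1 - 3) * (g2 - 3) > 3
           for g1 in range(5, 50) for g2 in range(5, 50))
# ... but g1 = g2 = 4 gives (4-3)*(4-3) = 1, which is not > 3.
assert not ((4 - 3) * (4 - 3) > 3)
ok = True
```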
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove $\exists (\varphi_1, \ldots, \varphi_m) \in \mathcal{L}(V, \mathbf{F}) : T(v) = \varphi_1(v)w_1 + \cdots + \varphi_m(v)w_m$, $\forall v \in V$ Suppose $T \in \mathcal{L}(V, W)$, and $(w_1, \ldots, w_m)$ is a basis of $\operatorname{range}(T)$. Prove that there exists $(\varphi_1, \ldots, \varphi_m) \in \mathcal{L}(V, \mathbf{F})$ such that $$ T(v) = \varphi_1(v)w_1 + \cdots + \varphi_m(v)w_m $$ for every $v \in V$. $V$ and $W$ are vector spaces over a field $F$. My initial attempt to this question is as follows: $(w_1, \ldots, w_m)$ is a basis for $\operatorname{range}(T)$. Therefore we can write $T(v)$ as $$ T(v) = \alpha_1 w_1 + \cdots + \alpha_m w_m $$ where $\alpha_1, \ldots, \alpha_m \in \mathbf{F}$. Define $\varphi_i(v) = \alpha_i$ for $i=1, \ldots,m$. At this point I became stuck. How am I suppose to show that for every choice of $v$, my definition for $\varphi_i$ is going to give the correct value or $\alpha_i$? After looking at the answer, they just continue on from where I became stuck to show all $\varphi_i$ are linear maps. But, this just shows that $\varphi_i$ is a linear map, not that it exists or satisfies the given condition?
Here's a slightly different approach: Given $v\in V$, let $\Phi(v)\in F^m$ be the unique $m$-tuple such that $$ T(v)=(\Phi(v))_1w_1+\dots+(\Phi(v))_mw_m $$ where we write $x_i$ for the $i$th entry of the tuple $x$. This definition makes sense because $w_1,\dots,w_m$ form a basis. Let $\varphi_i(v):=(\Phi(v))_i$. You can either check that $\varphi_i$ is linear by hand, or argue that $\Phi$ is linear and that $\varphi_i=\pi_i\circ \Phi$, where $\pi_i:F^m\to F$ is the $i$th projection, which is also linear.
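For concreteness, here is a small finite-dimensional illustration (my own example; the particular map $T$ and basis below are invented for the demonstration): with $T(v) = (v_1+v_2,\ v_2+v_3,\ v_1+2v_2+v_3)$ the range has basis $w_1=(1,0,1)$, $w_2=(0,1,1)$, and the coordinate functionals turn out to be $\varphi_1(v)=v_1+v_2$, $\varphi_2(v)=v_2+v_3$:

```python
import itertools

# T: R^3 -> R^3 of rank 2 (third output coordinate = first + second)
def T(v):
    v1, v2, v3 = v
    return (v1 + v2, v2 + v3, v1 + 2 * v2 + v3)

w1, w2 = (1, 0, 1), (0, 1, 1)        # basis of range(T)
phi1 = lambda v: v[0] + v[1]         # coordinate functionals
phi2 = lambda v: v[1] + v[2]

# T(v) = phi1(v)*w1 + phi2(v)*w2 on an integer grid
for v in itertools.product(range(-3, 4), repeat=3):
    expected = tuple(phi1(v) * a + phi2(v) * b for a, b in zip(w1, w2))
    assert T(v) == expected

# additivity of the phi_i, checked on the same grid
for v in itertools.product(range(-2, 3), repeat=3):
    for u in itertools.product(range(-2, 3), repeat=3):
        s = tuple(x + y for x, y in zip(u, v))
        assert phi1(s) == phi1(u) + phi1(v) and phi2(s) == phi2(u) + phi2(v)
```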
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving the system $\sqrt{x} + y = 7$, $x + \sqrt{y} = 11$ I want to solve the following nonlinear system of algebraic equations. Indeed, I am curious about a step by step solution for pedagogical purposes. I am wondering if you can come up with anything. I tried but to no avail. \begin{align*} \sqrt{x} + y &= 7 \\ x + \sqrt{y} &= 11 \end{align*} The answer is $x=9,\,y=4$. A geometrical investigation can give us better insights as depicted below. $\hspace{2cm}$
Rearrange both equations to isolate the term with the square root, then square both sides. This new system will likely add additional solutions, so we will need to check to make sure any result we get is really a solution. Our new system is: $$x=y^2-14y+49$$ $$y=x^2-22x+121$$ Substituting x from the first equation into the second equation gives the quartic equation: $$y=y^4-28y^3+98y^2+196y^2-1372y+2401-22y^2+308y-1078+121$$ $$0=y^4-28y^3+272y^2-1065y+1444=(y-4)(y^3-24y^2+176y-361)$$ We knew that we could factor out $(y-4)$ because we knew that we had a solution at $x=9,y=4$. Using the rational root theorem, because the leading coefficient is $1$, any rational roots must be factors of the constant term. This results in possible roots $\pm1,\pm19,\pm361$. Plugging in each of these possibilities shows that there are no other rational roots. Solving the cubic numerically, the remaining roots are approximately $y\approx 3.42$, $y\approx 9.81$ and $y\approx 10.78$. The first original equation forces $\sqrt{x}=7-y\geq 0$, i.e. $y\leq 7$, which rules out $y\approx 9.81$ and $y\approx 10.78$. For $y\approx 3.42$ we get $x=(y-7)^2\approx 12.8$, but then the second original equation would require $\sqrt{y}=11-x\geq 0$, i.e. $x\leq 11$, so this root is extraneous as well. We are left with the only solution being $x=9, y=4$.
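A numerical cross-check of this elimination (my own snippet, not part of the answer): verify the factorization exactly, then confirm that every root of the cubic factor fails the original, un-squared system, so that $(9,4)$ is the only genuine solution:

```python
import math

def quartic(y):
    return y**4 - 28 * y**3 + 272 * y**2 - 1065 * y + 1444

def cubic(y):
    return y**3 - 24 * y**2 + 176 * y - 361

# exact check of the factorization at integer points (degree-4 polynomials
# agreeing at >= 5 points are identical)
for y in range(-5, 20):
    assert quartic(y) == (y - 4) * cubic(y)

def solves_original(x, y):
    return (x >= 0 and y >= 0
            and abs(math.sqrt(x) + y - 7) < 1e-9
            and abs(x + math.sqrt(y) - 11) < 1e-9)

def bisect(f, lo, hi, iters=100):
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (f(lo) > 0) == (f(mid) > 0):
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# the cubic changes sign on each of these brackets
roots = [bisect(cubic, 3.0, 4.0), bisect(cubic, 9.5, 10.0), bisect(cubic, 10.0, 11.0)]
for y in roots:
    x = (y - 7.0) ** 2               # the squared system's x for this y
    assert not solves_original(x, y)

assert solves_original(9.0, 4.0)      # the genuine solution survives
```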
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 4 }
A proposition about the derived set Let $(X,{\tau})$ be a topological space and $A$ a subset of $X$; show that: $\operatorname{der}A=\{x\in \operatorname{cl}A\mid \operatorname{cl}A=\operatorname{cl}(A-\{x\})\}$ So I try to prove this. I suppose $x\in \operatorname{der}A$; by definition, for all $V\in\mathcal{N}(x)$, $V\cap(A-\{x\})\neq \emptyset$, so $x\in \operatorname{cl}(A-\{x\})$, but I don't know how I can continue. If you give me some hint or suggestion, I will be grateful. Thank you. ($\mathcal{N}(x)$ is the neighborhood system of $x$ and $\operatorname{der} A=\{x\in X\mid \forall V\in \mathcal{N}(x),V\cap (A-\{x\})\neq \emptyset\}$).
You also need that $x \in \operatorname{cl}(A)$ iff for all $V \in \mathcal{N}(x)$, $V \cap A \neq \emptyset$, the closure of $A$ is the set of all adherent points of $A$. So $x \in \operatorname{der}(A)$ means that $x$ is an adherent point of $A - \{x\} $, or $x \in \operatorname{cl}(A-\{x\})\subseteq \operatorname{cl}(A)$ and this is equivalent to $$\operatorname{cl}(A)= \operatorname{cl}(A-\{x\})$$
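The identity can even be checked exhaustively on small finite spaces. The following snippet (my own, purely a sanity check) enumerates every topology on a 3-point set, 29 of them by the known count, and verifies the proposition for every subset $A$:

```python
from itertools import combinations, chain

X = (0, 1, 2)
subsets = [frozenset(c) for k in range(4) for c in combinations(X, k)]

def all_topologies():
    # a topology is a family of subsets containing {} and X, closed under
    # (binary, hence finite) unions and intersections
    for opens in chain.from_iterable(combinations(subsets, k) for k in range(1, 9)):
        fam = set(opens)
        if frozenset() not in fam or frozenset(X) not in fam:
            continue
        if all((u | v) in fam and (u & v) in fam for u in fam for v in fam):
            yield fam

def closure(A, tau):
    return frozenset(x for x in X if all(A & V for V in tau if x in V))

def derived(A, tau):
    return frozenset(x for x in X if all((A - {x}) & V for V in tau if x in V))

count = 0
for tau in all_topologies():
    count += 1
    for A in subsets:
        rhs = frozenset(x for x in closure(A, tau)
                        if closure(A, tau) == closure(A - {x}, tau))
        assert derived(A, tau) == rhs

assert count == 29   # the number of distinct topologies on a 3-element set
```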
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $E\subset\mathbb{R^{n}}$ and $\hat{p}\in\mathbb{R^{n}}$, then all interior points of $E$ are limit points of $E$ as well. Let $E\subset\mathbb{R^{n}}$ and $\hat{p}\in\mathbb{R^{n}}$, then all interior points of $E$ are limit points of $E$ as well. If $\hat{p}\in int(E)$ where $int(E)$ is the set containing all the interior points of $E$, then there is an $r'>0$ such that an open ball centered at $\hat{p}$ with an $r'$ radius and completely contained in $E$ can be formed. My intention is to prove that $\forall r>0$, $(B_{r}(\hat{p})\setminus \{\hat{p}\})\cap E \neq \emptyset$. Where $B_{r}(\hat{p})$ is the set including all points contained in the open ball of radius $r$ (in other words: $B_{r}(\hat{p})=\{\hat{s}\in \mathbb{R^{n}}| \thinspace d(\hat{p},\hat{s})<r\}$). I was thinking that perhaps the way to go is to take an arbitrary $\hat{p}\in int(E)$ and an $R$ such that $R=min\{r,r'\}$ and then try and get to the definition of a limit point from there; however I seem to be stuck and I'm starting to wonder if this is the way to go at all.
Hint: If $\hat{p}\in{\rm int}E$ then there exists $B_{r}(\hat{p})$ with $B_{r}(\hat{p})\subset E$. What can you say about the punctured ball $B_{r}(\hat{p})\smallsetminus\{\hat{p}\}$? What happens if $(B_{r}(\hat{p})\smallsetminus\{\hat{p}\})\cap E=\varnothing?$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there a closed form of $\sum_{n=0}^{\infty} \frac{(-1)^n}{(4n+1)!!}$? This may be an impossible problem. But I imagine it's worth asking still. What is the closed form of the sum: $$\sum_{n=0}^{\infty} \frac{(-1)^n}{(4n+1)!!}$$ Perhaps there isn't a closed form. Double factorials are WAY out of my comfort zone so any tips would be appreciated. Thanks
Too long for a comment. Using the same approach as in @Xoque55's answer, we could go one step further and consider $$f(x)=\sum_{n=0}^{\infty} \frac{(-1)^n}{(4n+1)!!}x^{4n}=\, _1F_2\left(1;\frac{3}{4},\frac{5}{4};-\frac{x^4}{16}\right)$$ which write $$f(x)=\frac{\sqrt \pi}x \left(C\left(\frac{x}{\sqrt{\pi }}\right) \cos \left(\frac{x^2}{2}\right)+S\left(\frac{x}{\sqrt{\pi }}\right) \sin \left(\frac{x^2}{2}\right) \right)$$
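The closed form can be sanity-checked numerically. After the substitution $t = s/\sqrt{\pi}$, the stated Fresnel expression at $x=1$ becomes $\cos(1/2)\int_0^1\cos(s^2/2)\,ds + \sin(1/2)\int_0^1\sin(s^2/2)\,ds$; the snippet below (my own check, plain Simpson quadrature) compares this with the partial sums of the original series:

```python
import math

def simpson(f, a, b, n=2000):
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3.0

# partial sum of sum_{n>=0} (-1)^n / (4n+1)!!  (terms decay extremely fast)
series = 0.0
for n in range(12):
    dd = 1.0
    for m in range(1, 4 * n + 2, 2):   # (4n+1)!! = 1*3*5*...*(4n+1)
        dd *= m
    series += (-1) ** n / dd

closed = (math.cos(0.5) * simpson(lambda s: math.cos(s * s / 2.0), 0.0, 1.0)
          + math.sin(0.5) * simpson(lambda s: math.sin(s * s / 2.0), 0.0, 1.0))

assert abs(series - closed) < 1e-9
```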
{ "language": "en", "url": "https://math.stackexchange.com/questions/3772943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How do I get the nth derivative of the function $y = e^x x^2$? I'm totally confused; please, can anyone help me? $y' = e^x x^2 + 2 e^xx$ $y''= e^x x^2 + 4 e^xx + 2e^x$ $y'''= e^x x^2 + 6 e^xx + 6 e^x$ Next comes $y''''$, but I failed to find any pattern.
We need to find an expression for $\frac{d^{n}y}{dx^{n}}$. Observe that you can use the $\underline{\text{product rule}}$ in each successive differentiation of $y$. Indeed $$\frac{dy}{dx}=e^{x}(x^{2}+2x+(1-1)(1))$$ $$\frac{d^{2}y}{dx^{2}}=e^{x}(x^{2}+2(2)x+(2-1)(2))$$ $$\frac{d^{3}y}{dx^{3}}=e^{x}(x^{2}+2(3)x+(3-1)(3))$$ $$\frac{d^{4}y}{dx^{4}}=e^{x}(x^{2}+2(4)x+(4-1)(4))$$ $$\frac{d^{5}y}{dx^{5}}=e^{x}(x^{2}+2(5)x+(5-1)(5))$$ so, by an inductive argument, we can see that $$\boxed{\frac{d^{n}y}{dx^{n}}=e^{x}(x^{2}+2nx+(n-1)(n)), \quad n \in \mathbb{N}}$$
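The pattern can be verified mechanically: differentiating $e^x p(x)$ replaces the polynomial $p$ by $p + p'$, so iterating from $p(x) = x^2$ must reproduce $x^2 + 2nx + n(n-1)$. A small check of this (my own snippet):

```python
# polynomials as coefficient lists, p[k] = coefficient of x^k
def deriv(p):
    return [k * p[k] for k in range(1, len(p))]

def add(p, q):
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

p = [0, 0, 1]                      # x^2
for n in range(1, 30):
    p = add(p, deriv(p))           # d/dx (e^x p) = e^x (p + p')
    assert p == [n * (n - 1), 2 * n, 1]
```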
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 3 }
Is $\operatorname{Aut}(D_{12})\simeq D_{12}$? Let $D_{12}$ be the dihedral group of order 12. Then $$|\operatorname{Aut}(D_{12})|=6\phi(6)=12=|D_{12}|,$$ and the standard method of proof for $$\operatorname{Aut}(D_6)\simeq D_{6}\qquad\mbox{and}\qquad \operatorname{Aut}(D_8)\simeq D_{8}$$ seems to also work for $$\operatorname{Aut}(D_{12})\simeq D_{12}.$$ But in this article (p.461, 14th line from the top), it says that $n=3$ and $n=4$ are the only numbers for which $$\operatorname{Aut}(D_{2n})\simeq D_{2n}.$$ Could this be an error? Or am I missing something?
The article is wrong. First, the automorphism group is definitely $D_{12}$: here is Magma code that proves it. > G:=DihedralGroup(6); > A:=AutomorphismGroup(G); > A:=PermutationGroup(A); > IdentifyGroup(A); <12, 4> > IdentifyGroup(G); <12, 4> Second, even his Theorem A states that $\mathrm{Aut}(D_{12})\cong \mathrm{Hol}(Z_6)$, and this is dihedral of order $12$.
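For readers without Magma, the same count can be reproduced in plain Python (my own brute-force sketch). It realizes $D_{12}$ concretely, enumerates automorphisms via generator images, and counts involutions in $\operatorname{Aut}$; among the five groups of order $12$, only $D_{12}$ has seven involutions, so this pins down the isomorphism type:

```python
# D_12 as pairs (i, j) ~ r^i s^j with r^6 = s^2 = e and s r s = r^(-1)
G = [(i, j) for i in range(6) for j in range(2)]
E = (0, 0)

def mul(x, y):
    i, j = x
    k, l = y
    return ((i + (k if j == 0 else -k)) % 6, (j + l) % 2)

def power(x, n):
    y = E
    for _ in range(n):
        y = mul(y, x)
    return y

def order(x):
    y, n = x, 1
    while y != E:
        y = mul(y, x)
        n += 1
    return n

r, s = (1, 0), (0, 1)
autos = []
for a in G:                     # candidate image of r: must have order 6
    if order(a) != 6:
        continue
    for b in G:                 # candidate image of s: order 2, with b a b = a^(-1)
        if order(b) != 2 or mul(mul(b, a), b) != power(a, 5):
            continue
        phi = {(i, j): mul(power(a, i), power(b, j)) for (i, j) in G}
        if len(set(phi.values())) == len(G):
            autos.append(phi)

assert len(autos) == 12         # |Aut(D_12)| = 12 = |D_12|

ident = {g: g for g in G}
def comp(p, q):
    return {g: p[q[g]] for g in G}

involutions = sum(1 for p in autos if p != ident and comp(p, p) == ident)
assert involutions == 7         # D_12 has 7 involutions; e.g. Dic_3 has only 1
```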
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For what positive values of $x$ is the series below convergent, and for what values is it divergent? $$ \sum \frac{1}{x^{n}+x^{-n}}$$ My attempt $$ \begin{aligned} &\begin{aligned} \therefore & u_{n+1}=\frac{x^{n+1}}{x^{2 n+2}+1} \\ \therefore & \frac{u_{n+1}}{u_{n}}=\frac{x^{n+1}}{x^{2 n+2}+1} \cdot \frac{x^{2 n}+1}{x^{n}} \\ & \frac{u_{n+1}}{u_{n}}=x \frac{1+\left(\frac{1}{x}\right)^{2 n}}{x^{2}+\left(\frac{1}{x}\right)^{2 n}} \end{aligned}\\ & \text{if } x>1 \quad \therefore \quad \frac{1}{x}<1 \quad \therefore \text { when } n \rightarrow \infty,\left(\frac{1}{x}\right)^{2 n} \rightarrow 0 \end{aligned}\\ $$ What to do next?
In your attempt you have shown that $\lim \sup \frac {u_{n+1}} {u_n} \leq x \frac {1+0} {x^{2}+0}=\frac 1 x <1$. Hence the Ratio Test tells you that the series is convergent for $x>1$. But there is a better approach as shown below: For $x>1$ it is dominated by $\sum \frac 1 {x^{n}}$ which is convergent. For $0<x<1$ it is dominated by $\sum x^{n}$ which is convergent. For $x=1$ it is divergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
RMM 2015 /P1: Does there exist an infinite sequence of positive integers $a_1, a_2, a_3, . . .$ Does there exist an infinite sequence of positive integers $a_1, a_2, a_3, . . .$ such that $a_m$ and $a_n$ are coprime if and only if $|m - n| = 1$? My Progress:This is a very beautiful problem ! I think I have got a construction , but I am not able to have/define the explicit formula for the nth term. Here is the construction, Let $a_1=2\cdot 3$, $a_2=5\cdot 7$, $a_3=2\cdot 11$, $a_4=3\cdot 5 \cdot 13$, $a_5=2\cdot 7\cdot 17$ , $a_6=3\cdot 5 \cdot 11 \cdot 19$ , $a_7=2\cdot7\cdot13\cdot23$, $a_8=3\cdot5\cdot11\cdot17\cdot29$ , $a_9=2\cdot7\cdot13\cdot 19 \cdot 31$ and so on . I am trying to find some patterns, but I am not able to observe anything. So what I am doing is, for the construction of $a_n$ term, I look at $a_{n-1}$ ,then I start from $a_1$ and then try to put a factor $p$ of $a_1$ in $a_n$ such that gcd ($a_{n-1},p$)=$1$ . Similarly for $a_2$, $a_3$, and so on. At the end I add another prime which was not used in any of the $a_i$'s. Also we have to make sure that no ${a_i} \mid a_j$ for $i<j$ Also note that I am only using primes . Sorry, if something is not clear. Hope one can provide me some hints and guide. Thanks in advance.
We'll do an inductive process, defining $a_{i,j}$ for integers $i \ge 1$ and $j \ge 1$. Let $p_n$ be the $n$'th prime. Initially, take $a_{1,1}= p_1 p_2 $, $a_{2,1} = 1$, $a_{n,1} = p_1$ if $n \ge 3$ is odd and $p_2$ if $n \ge 4$ is even. Note that $a_{n,1}$ and $a_{n+1,1}$ are coprime, and $a_{1,1}$ and $a_{n,1}$ are not coprime for $n \ge 3$. Suppose at stage $k$, all $a_{n,k}$ and $a_{n+1,k}$ are coprime, $a_{i,k}$ and $a_{j,k}$ are not coprime for $i \le k$ and $j \ge i+2$, and all prime factors of the $a_{n,k}$ are in the first $2k$ primes. Let $a_{k+1,k+1} = a_{k+1,k} p_{2k+1} p_{2k+2}$, $a_{n,k+1} = a_{n,k} p_{2k+1}$ if $n \ge k+3$ is even, $a_{n,k+1} = a_{n,k} p_{2k+2}$ if $n \ge k+3$ is odd, and $a_{n,k+1} = a_{n,k}$ if $n \le k$ or $n=k+2$. Then we still have $a_{n,k+1}$ and $a_{n+1,k+1}$ coprime, while $a_{i,k+1}$ and $a_{j,k+1}$ are not coprime for $i \le k+1$ and $j \ge i+2$, and all prime factors of the $a_{n,k+1}$ are in the first $2k+2$ primes. Finally, take $a_n = a_{n,n}$.
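The construction is easy to transcribe into code and check on an initial segment (my own snippet; the array length and number of stages below are arbitrary choices, and the invariant is tested only on the indices where the induction has already established it):

```python
from math import gcd

def first_primes(count):
    ps, n = [], 2
    while len(ps) < count:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

M, K = 20, 18                 # array length, number of stages to run
p = first_primes(2 * K + 2)   # p[0] is p_1 = 2

a = [None] + [0] * M          # 1-based indexing
a[1], a[2] = p[0] * p[1], 1
for n in range(3, M + 1):
    a[n] = p[0] if n % 2 == 1 else p[1]

for k in range(1, K):         # stage k introduces the primes p_{2k+1}, p_{2k+2}
    q1, q2 = p[2 * k], p[2 * k + 1]
    a[k + 1] *= q1 * q2
    for n in range(k + 3, M + 1):
        a[n] *= q1 if n % 2 == 0 else q2

# a_m and a_n are coprime exactly when |m - n| = 1
for m in range(1, M + 1):
    for n in range(m + 1, M + 1):
        if n == m + 1:
            assert gcd(a[m], a[n]) == 1
        elif m <= K - 1:      # non-coprimality is guaranteed once stage m has run
            assert gcd(a[m], a[n]) > 1
```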
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Problem with the power of functions' set and number of discontinuity points I am considering functions $f:\mathbb{R}\rightarrow \mathbb{R}$ with property that $\forall_{r\in\mathbb{R}}$ exists a limit $\lim_{x\rightarrow r}f(x)$ (it doesn't have to be equal to $f(r)$). I have two problems. The first with showing that such function $f$ has countable number of discontinuity points. I have found only that monotonic functions have such property that it doesn't help me a lot. Second problem is connected with evaluating the power of set of such functions. We know of course that the power of set of the continuous functions is equal to continuum which is a hint, but still it doesn't help me much. I appreciate any help, because I am thinking about it from several days.
Any such function is either continuous or has only removable discontinuities. Whenever $f$ has a removable discontinuity, $f$ can't be a monotone function. For example, $f(x) = \begin{cases} 2, & \text{if } x=1 \\ x, & \text{otherwise} \end{cases} $ And how many such functions are there? Uncountably many, as you can see easily: for each $a\in\mathbb{R} $ we can create $f(x) = \begin{cases} a, & \text{if } x=a+1 \\ x, & \text{otherwise} \end{cases} $ And so the power set of the set of such functions clearly has the same cardinality as $\mathbb{R}^{\mathbb{R}} $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Indefinite integral of $\sin^8(x)$ Suppose we have the following function: $$\sin^8(x)$$ We have to find its anti-derivative To find the indefinite integral of $\sin^4(x)$, I converted everything to $\cos(2x)$ and $\cos(4x)$ and then integrated. However this method wont be suitable to find the indefinite integral $\sin^8(x)$ since we have to expand a lot. Is there any other way I can evaluate it easily, and more efficiently?
The development through the double angle formulae is not so long, let me show. \begin{align} \sin^8x &=(\sin^2x)^4=\\ &=\left(\frac{1-\cos2x}{2}\right)^4=\\ &=\frac{1}{16}(1-4\cos2x+6\cos^22x-4\cos^32x+\cos^42x)=\\ &=\frac{1}{16}[1-4\cos2x+3(1+\cos4x)-4\cos2x(1-\sin^22x)+(\cos^22x)^2]=\\ &=\frac{1}{16}\left[1-4\cos2x+3(1+\cos4x)-4\cos2x(1-\sin^22x)+\left(\frac{1+\cos4x}{2}\right)^2\right]=\\ &=\frac{1}{16}\left[1-4\cos2x+3(1+\cos4x)-4\cos2x(1-\sin^22x)+{}\right.\\ &\qquad\qquad\qquad\left.+\frac{1}{4}(1+2\cos4x+\cos^24x)\right]=\\ &=\frac{1}{16}\left[1-4\cos2x+3(1+\cos4x)-4\cos2x(1-\sin^22x)+\right.\\ &\qquad\qquad\qquad\left.+\frac{1}{4}\left(1+2\cos4x+\frac{1+\cos8x}{2}\right)\right]=\\ \end{align} so we have \begin{align} \int\sin^8xdx &=\frac{1}{16}\left[x-2\sin2x+3\left(x+\frac{1}{4}\sin4x\right)-2\left(\sin2x-\frac{1}{3}\sin^32x\right)+{}\right.\\ &\qquad\left.\frac{1}{4}\left(x+\frac{1}{2}\sin4x+\frac{1}{2}\left(x+\frac{1}{8}\sin8x\right)\right)\right]+C \end{align}
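One can confirm the result by differentiating the antiderivative numerically (my own verification snippet):

```python
import math

# the antiderivative obtained above (constant of integration omitted)
def F(x):
    s2, s4, s8 = math.sin(2 * x), math.sin(4 * x), math.sin(8 * x)
    return (x - 2 * s2
            + 3 * (x + s4 / 4)
            - 2 * (s2 - s2 ** 3 / 3)
            + 0.25 * (x + s4 / 2 + 0.5 * (x + s8 / 8))) / 16

# central finite difference of F should reproduce sin^8(x)
for x in [-2.0, -0.7, 0.3, 1.1, 2.5]:
    h = 1e-5
    dF = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(dF - math.sin(x) ** 8) < 1e-8
```

As an extra consistency check, the coefficient of the linear term works out to $35/128$, which is the mean value of $\sin^8 x$ over a period.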
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Determine when $\sum_1^\infty \frac{(2n)!x^n}{n(n!)^2}$ converges. Determine when $\sum_1^\infty \frac{(2n)!x^n}{n(n!)^2}$ converges. By ratio test, when $|x|<1/4$, the sum converges, $|x|>1/4$ diverges. But I'm not sure about $|x|=1/4$. By Stirling's approximation $\frac{(2n)!(1/4)^n}{n(n!)^2}\sim\frac{4^n}{n{4^n}\sqrt{ \pi n}}\sim\frac{1}{\sqrt{\pi}n^{3/2}}$, so when $|x|=1/4$, the sum converges. But I haven't learned Stirling's approximation yet, so how to prove that when $|x|=1/4,$ the sum converges, without using it?
Let's study what happens when $x=1/4$. Define $a_n:=\frac{(2n)!x^n}{n(n!)^2} $, $b_n:=\dfrac{1}{n^{5/4}}$ and observe the following growth rates: $$ \dfrac{a_{n+1}}{a_n} = \frac{(2n+2)!x^{n+1}}{(n+1)((n+1)!)^2} \frac{n(n!)^2}{(2n)!x^n} = \dfrac{(2n+2)(2n+1)n}{4(n+1)^3} = \dfrac{(2n+1)n}{2(n+1)^2}=\big(1-\dfrac{1}{n+1}\big) \big(1-\dfrac{1}{2(n+1)}\big)$$ $$ \dfrac{b_{n+1}}{b_n} =\dfrac{n^{5/4}}{(n+1)^{5/4}} = \big(1-\dfrac{1}{n+1}\big) \big(1-\dfrac{1}{n+1}\big)^{1/4}$$ As $n\to +\infty$ $$\dfrac{b_{n+1}}{b_n} -\dfrac{a_{n+1}}{a_n} =\big(1-\dfrac{1}{n+1}\big) \bigg[\big(1-\dfrac{1}{n+1}\big)^{1/4} - \big(1-\dfrac{1}{2(n+1)}\big) \bigg] =\big(1-\dfrac{1}{n+1}\big) \bigg[\dfrac{1}{4(n+1)} + o\big(\dfrac{1}{n+1}\big) \bigg]=\big(1-\dfrac{1}{n+1}\big) \dfrac{1+o(1)}{4(n+1)}$$ The expression $1+o(1)$ is eventually positive, therefore the RHS is eventually positive and so there exists $N > 0$ s.t.: $$a_{n+1}/a_n \leq b_{n+1}/b_{n} \qquad \forall n\geq N \qquad (1)$$ For any integer $k > N$, considering repeatedly $(1)$ for $ N \leq n\leq k-1$ and multiplying termwise those inequalities we get: $$a_{k}/a_N \leq b_{k}/b_{N}$$ This means: $$a_{k}\leq b_{k}\frac{a_N}{b_{N}} \qquad \forall k \geq N$$ Therefore $$ \sum_{k=N}^{+\infty} a_k \leq \sum_{k=N}^{+\infty} b_{k}\frac{a_N}{b_{N}} < +\infty$$ showing that $\sum a_n$ converges. This also proves that the series converges absolutely if $x=-1/4$.
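A quick loop shows that inequality $(1)$ in fact holds from $n=1$ on (my own snippet; the value $2\log 2$ for the full sum at $x=1/4$ is a known closed form, used here only as an external sanity bound):

```python
import math

a, S = 0.5, 0.5                      # a_1 = (2)! * (1/4) / (1 * (1!)^2) = 1/2
for n in range(1, 20000):
    ratio_a = (2 * n + 1) * n / (2.0 * (n + 1) ** 2)
    ratio_b = (n / (n + 1.0)) ** 1.25
    assert ratio_a <= ratio_b        # inequality (1); empirically valid for all n >= 1
    a *= ratio_a
    S += a

# the increasing partial sums stay below the known value 2*log(2) of the sum,
# and the tail (of order N^(-1/2)) is already small
assert S < 2 * math.log(2) < S + 0.02
```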
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Using generating function to solve non-homogenous recurrence relation The given recurrence relation is: $$ a_{n} + 2a_{n-2} = 2n + 3 $$ with initial conditions: $$ a_{0}=3$$ $$a_{1}=5$$ I know $$G(x) = a_0x^0 + a_1x^1 + \sum_{n=2}^{\infty} a_nx^n $$ and $$ a_{n} = -2a_{n-2} + 2n + 3$$ Therefore: $$G(x) = 3 + 5x + \sum_{n=2}^{\infty} (-2a_{n-2} + 2n + 3).x^n $$ $$G(x) = 3 + 5x - 2.G(x).x^2 + \sum_{n=2}^{\infty} (2n + 3).x^n $$ I don't know how to go about solving $$\sum_{n=2}^{\infty} (2n + 3).x^n $$ And thus I can't calculate G(x) and the sequence a(n). Can someone please guide me how can I solve this? EDIT 1: From what I have got to know: $$G(x) (1 + 2.x^2) = 3 + 5x + 2x/(1-x)^2 + 3/(1-x) $$ $$G(x) = \frac{3 + 5x + 2x/(1-x)^2 + 3/(1-x)}{1 + 2.x^2} $$ How can I convert it into partial fractions?
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\bbox[10px,#ffd]{a_{n} + 2a_{n - 2} = 2n + 3\,\qquad a_{0} = 3,\quad a_{1} = 5}:\ {\Large ?}}$ With $\ds{\verts{z} < 1}$: \begin{align} \sum_{n = 2}^{\infty}\pars{a_{n} + 2a_{n - 2}}z^{n} & = 2\sum_{n = 2}^{\infty}nz^{n} + 3\sum_{n = 2}^{\infty}z^{n} \\[2mm] \pars{-a_{0} - a_{1}z + \sum_{n = 0}^{\infty}a_{n}z^{n}} + 2z^{2}\sum_{n = 0}^{\infty}a_{n}z^{n} & = 2z\,\partiald{}{z}\sum_{n = 2}^{\infty}z^{n} + 3\sum_{n = 2}^{\infty}z^{n} \\[2mm] -3 - 5z + \pars{2z^{2} + 1}\sum_{n = 0}^{\infty}a_{n}z^{n} & = 2z\,\partiald{}{z}\pars{z^{2} \over 1 - z} + 3\,{z^{2} \over 1 - z} \end{align} $$ \sum_{n = 0}^{\infty}a_{n}z^{n} = {3 - z \over \pars{1 - z}^{2}\pars{1 + 2z^{2}}} \implies a_{n} = \bracks{z^{n}}{3 - z \over \pars{1 - z}^{2}\pars{1 + 2z^{2}}} $$ \begin{align} a_{n} & = \braces{3\bracks{z^{n}} - \bracks{z^{n - 1}}} {1 \over \pars{1 - z}^{2}\pars{1 + 2z^{2}}} \\[5mm] & = \braces{3\bracks{z^{n}} - \bracks{z^{n - 1}}} \sum_{i = 0}^{\infty}\pars{i + 1}z^{i} \sum_{j = 0}^{\infty}\pars{-2z^{2}}^{j} \\[5mm] & = 3\sum_{i = 0}^{\infty}\sum_{j = 0}^{\infty} \pars{i + 1}\pars{-2}^{j}\bracks{i + 2j = n} \\[1mm] & - \sum_{i = 0}^{\infty}\sum_{j = 0}^{\infty}\pars{i + 1}\pars{-2}^{j} \bracks{i + 2j = n - 1} \\[5mm] & = 3\sum_{j = 0}^{\infty}\pars{n - 2j + 1}\pars{-2}^{j} \bracks{n - 2j \geq 
0} \\[1mm] & - \sum_{j = 0}^{\infty}\pars{n - 2j}\pars{-2}^{j} \bracks{n - 1 - 2j \geq 0} \\[5mm] & = 3\sum_{j = 0}^{\left\lfloor n/2\right\rfloor} \pars{n + 1 - 2j}\pars{-2}^{j} - \sum_{j = 0}^{\left\lfloor\pars{n - 1}/2\right\rfloor} \pars{n - 2j}\pars{-2}^{j} \end{align} You just need to perform the sums.
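The final double-sum expression can be cross-checked against the recurrence directly (my own snippet, not part of the answer):

```python
# closed form extracted from the generating function
def a_closed(n):
    s1 = sum((n + 1 - 2 * j) * (-2) ** j for j in range(n // 2 + 1))
    s2 = sum((n - 2 * j) * (-2) ** j for j in range((n - 1) // 2 + 1))
    return 3 * s1 - s2

# the recurrence a_n = -2*a_{n-2} + 2n + 3 with a_0 = 3, a_1 = 5
a = [3, 5]
for n in range(2, 25):
    a.append(-2 * a[n - 2] + 2 * n + 3)

assert all(a_closed(n) == a[n] for n in range(25))
```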
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Inequality in a Sobolev space In the Sobolev space $H^2(\mathbb{R}^2)$ is it true that there exists a constant $c>0$ such that $$\|u\|_{L^2(\mathbb{R}^2)}\le c\|\nabla u\|_{L^2(\mathbb{R}^2)}\;,\quad\quad \text{for all}\quad u\in H^2(\mathbb{R}^2)?$$ Thank you in advance.
So this is called the Poincaré inequality. It is true on a bounded domain. When you are in $\mathbb{R}^d$ and not in a bounded domain, you need some additional weights (for example it works with Gaussian weights, or sufficiently decaying weights). In the form you are asking, it is false. Indeed, take $\langle x\rangle = \sqrt{1+|x|^2}$ and $$ u(x) = \frac{1}{\langle x\rangle} $$ Then $u\notin L^2$ (since $u^2 \sim |x|^{-2}$ when $x\to\infty$), but $$ \nabla u = \frac{-x}{\langle x\rangle^3} $$ therefore $\nabla u \in L^2$ since $|\nabla u|^2 = \frac{|x|^2}{\langle x\rangle^6} \sim |x|^{-4}$ when $x\to\infty$.
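The two integrability claims are easy to see numerically in polar coordinates, where $\int_{\mathbb{R}^2} g(|x|)\,dx = 2\pi\int_0^\infty g(r)\,r\,dr$ for radial $g$ (my own check, simple Simpson quadrature):

```python
import math

def simpson(f, a, b, n=4000):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3.0

u2 = lambda r: r / (1.0 + r * r)                  # r * u(r)^2
grad2 = lambda r: r * r * r / (1.0 + r * r) ** 3  # r * |grad u(r)|^2

I_u_10 = 2 * math.pi * simpson(u2, 0.0, 10.0)
I_u_100 = 2 * math.pi * simpson(u2, 0.0, 100.0)
I_g_10 = 2 * math.pi * simpson(grad2, 0.0, 10.0)
I_g_100 = 2 * math.pi * simpson(grad2, 0.0, 100.0)

assert I_u_100 - I_u_10 > 10.0   # the u^2 integral keeps growing (log divergence)
assert I_g_100 - I_g_10 < 0.1    # tail of a convergent integral
```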
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do I create an offset shape that is a specific distance from a given circle, in the direction of the origin? I'm an amateur engineer, working on a CAD design - but sadly, I'm not a mathematician. In other words, this question might sound like homework, but it's not, I promise. I have an existing circle, which has a radius of $23.5$ with the center point being $(0,15)$ I need to write an equation ($s$) which I can input into my CAD software (i.e. no calculus, but algebra is OK), which will draw a continuous shape that is smaller than the existing circle, but is always the EXACT same distance ($d$) away from the circle, when measured across ANY line that passes through the origin $(0,0)$ I think the resulting shape ($s$) should NOT be a smaller circle. It should be an odd shape of some kind, maybe an ellipse or egg or some other type of squished circle? But that's as much as I know. I have no idea where to even begin solving this, my trigonometry is way too rusty ...
The equation of the circle is $(x,y)=(r\cos\theta,a+r\sin\theta)$. The question is for the curve $s(x,y)$ such that $(1-s)|(x,y)|=d$. Solving this gives $s=1-\frac{d}{\sqrt{x^2+y^2}}$. Hence the curve is given by the pseudo-code: r=23.5; a=15; x(t):=r*cos(t); y(t):=a+r*sin(t); s(t):=1-d/sqrt(x(t)^2+y(t)^2); plot((s(t)*x(t),s(t)*y(t)),(t,0:2*PI))
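Here is a runnable translation of the pseudo-code (my own version; the sample offset $d = 3$ is an arbitrary choice, and the construction works because the origin lies inside the circle, whose closest point is $23.5 - 15 = 8.5 > d$ away):

```python
import math

r, a, d = 23.5, 15.0, 3.0

def circle(t):
    return (r * math.cos(t), a + r * math.sin(t))

def offset(t):
    x, y = circle(t)
    s = 1.0 - d / math.hypot(x, y)   # shrink toward the origin by distance d
    return (s * x, s * y)

for k in range(200):
    t = 2 * math.pi * k / 200
    (x, y), (px, py) = circle(t), offset(t)
    # the offset point is exactly d away from its circle point ...
    assert abs(math.hypot(x - px, y - py) - d) < 1e-9
    # ... and origin, offset point, and circle point are collinear
    assert abs(x * py - y * px) < 1e-9
```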
{ "language": "en", "url": "https://math.stackexchange.com/questions/3773955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Right exactness of quotienting out the maximal divisible subgroup For every abelian groups $G$ let $\mathrm{d}G$ be its maximal divisible subgroup. Then $G \mapsto G/\mathrm{d}G$ is a right exact functor $\mathbf{Ab} \to \mathbf{Ab}$. Let $$ 0 \to G \xrightarrow{i} H \xrightarrow{p} K \to 0 $$ be an exact sequence. I'm having trouble showing for the sequence $$ G/\mathrm{d}G \xrightarrow{i'} H/\mathrm{d}H \xrightarrow{p'} K/\mathrm{d}K \to 0 $$ (where $p' \colon h + \mathrm{d}H \mapsto p(h) + \mathrm{d}K$ ) that $\ker p' \subseteq \operatorname{im} i'$. So let $h + \mathrm{d}H \in \ker p'$. Then $p(h) \in \mathrm{d}K$. I need to show that there is $h'$ in $\mathrm{d}H$ such that $p(h) = n p(h')$ for some integer $n$ in order to draw the conclusion. Any help would be appreciated!
Consider the short exact sequence: $$0\to\mathbb{Z}^{\mathbb{N}_{>0}}\stackrel i \to \mathbb{Z}^{\mathbb{N}_{>0}}\stackrel p\to \mathbb{Q}\to 0,$$ where $(f_n)$ and $(e_n)$ denote the standard bases of the first and second copy of $\mathbb{Z}^{\mathbb{N}_{>0}}$, $i(f_n)=(n+1)e_{n+1}-e_n$ and $p(e_n)=\frac1{n!}$ for $n\geq 1$. Applying the functor results in the sequence: $$0\to\mathbb{Z}^{\mathbb{N}_{>0}}\stackrel i \to \mathbb{Z}^{\mathbb{N}_{>0}}\to 0\to 0,$$ which is not exact in the middle term: $i$ is not surjective, since a preimage of $e_1$ would have to be $-\sum_n n!\,f_n$, which is not finitely supported. Note $\mathbb{Z}^{\mathbb{N}_{>0}}$ denotes the direct sum of copies of $\mathbb{Z}$ indexed over $\mathbb{N}_{>0}$ (not the direct product).
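The failure of surjectivity can be made concrete (my own snippet): solving $i\big(\sum c_n f_n\big) = e_1$ coordinate by coordinate forces $c_n = -n!$ for every $n$, so no finitely supported preimage exists, while $p\circ i = 0$ holds as it should:

```python
import math
from fractions import Fraction

# coordinates of i(sum c_n f_n): entry 1 is -c_1, entry m >= 2 is m*c_{m-1} - c_m
N = 15
c = {1: -1}                       # entry 1 must equal 1, so c_1 = -1
for m in range(2, N + 1):
    c[m] = m * c[m - 1]           # entry m must vanish, so c_m = m*c_{m-1}

assert all(c[m] == -math.factorial(m) for m in range(1, N + 1))
assert all(c[m] != 0 for m in c)  # the support never terminates

# sanity check: p(i(f_n)) = (n+1)/(n+1)! - 1/n! = 0
for n in range(1, 12):
    assert Fraction(n + 1, math.factorial(n + 1)) - Fraction(1, math.factorial(n)) == 0
```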
{ "language": "en", "url": "https://math.stackexchange.com/questions/3774113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
$n-$circular arrangement problem Find the number of ways to arrange $n$ people in a circle so that $3$ people are separated. My approach: The number of ways to arrange $n$ people in a circle is $(n - 1)!$. If the $3$ people are together, the number of arrangement is $(n - 3)!$. The $3$ people can rearrange themselves in $3!$ ways, the number of ways for the $3$ people together is $3!(n - 3)!$. Therefore, the number of ways so that none of the $3$ people is seated together is $(n - 1)! - [3!(n - 3)!]$. Is that correct? If not, where did I go wrong? For instance, 4 girls and 3 boys to be arranged in a circle so that none of the boys is together. In this case, we have $(7 - 1)! - [3!(7 - 3)!] = 576$. Any help is appreciated.
The comment of @Christian Blatter might be helpful to you in counting the missing cases. However, here's an alternative method. Let the three people be $P_1,\ P_2,\ P_3$ and the numbers of people between $P_1P_2,\ P_2P_3,\ P_3P_1$ be $x_1,\ x_2,\ x_3$ respectively. Now we have to find the number of positive integral solutions of the equation $$x_1+x_2+x_3=n-3$$ Also, the $n-3$ remaining people can arrange themselves in $(n-3)!$ ways and the three people in $2!$ ways. The total number of ways is $\displaystyle{{n-4}\choose 2}\times(n-3)!\times 2! =(n-3)!(n-4)(n-5)$.
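A brute-force check of the $n=7$ case (my own snippet; rotations are quotiented out by fixing one person's seat, reflections are counted as distinct, matching the $(n-1)!$ convention in the question):

```python
from itertools import permutations
from math import comb, factorial

n, marked = 7, {0, 1, 2}   # 7 people, of whom 3 must be pairwise non-adjacent

count = 0
for rest in permutations(range(1, n)):      # fix person 0 to kill rotations
    seats = (0,) + rest
    ok = True
    for i in range(n):
        a, b = seats[i], seats[(i + 1) % n]
        if a in marked and b in marked:
            ok = False
            break
    if ok:
        count += 1

formula = comb(n - 4, 2) * factorial(n - 3) * 2
assert count == formula == 144
```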
{ "language": "en", "url": "https://math.stackexchange.com/questions/3774236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Show that the kernel ker(B) is a vector subspace of the domain. I want to know how to show that $\ker(B)$ is a vector subspace of the domain.
I'll sketch how to obtain the reduced row echelon form: \begin{align} &\phantom{{}\rightsquigarrow{}} \begin{pmatrix} 3 & -3 & 1 & 5 & 1 & 5 & 5\\ 1 & -1 & 1 & 1 & 1 & 3 & -1\\ 2 & -2 & 1 & 3 & 0 & 5 & 2\\ 2 & -2 & 0 & 4 & 0 & 2 & 1 \end{pmatrix}\rightsquigarrow \begin{pmatrix} 1 & -1 & 1 & 1 & 1 & 3 & -1\\ 3 & -3 & 1 & 5 & 1 & 5 & 5\\ 2 & -2 & 1 & 3 & 0 & 5 & 2\\ 2 & -2 & 0 & 4 & 0 & 2 & 1 \end{pmatrix} \\ &\rightsquigarrow \begin{pmatrix} 1 & -1 & 1 & 1 & 1 & 3 & -1\\ 0 & 0 & -2 & 2 & -2 & -4 & 8\\ 0 & 0 & -1 & 1 & -2 & -1 & 4\\ 0 & 0 & -2 & 2 & -2 & -4 & 3 \end{pmatrix} \rightsquigarrow \begin{pmatrix} 1 & -1 & 1 & 1 & 1 & 3 & -1\\ 0 & 0 & -2 & 2 & -2 & -4 & 8\\ 0 & 0 & 0 & 0 & -1 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & -5 \end{pmatrix} \end{align} (in the last step, $R_3 \leftarrow R_3 - \tfrac12 R_2$ and $R_4 \leftarrow R_4 - R_2$). You should ultimately obtain, if I'm not mistaken: $$ \begin{pmatrix} 1 & -1 & 0 & 2 & 0 & 1 & 0\\ 0 & 0 & 1 & -1 & 0 & 3 & 0\\ 0 & 0 & 0 & 0 & 1 & -1 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}$$ which shows that \begin{cases} x_1=x_2-2x_4-x_6, \\ x_3=x_4-3x_6,\\x_5=x_6,&x_7=0. \end{cases} This defines an isomorphism from $K^3$ ($K$ denotes the base field) onto $\ker B$: \begin{align} K^3&\xrightarrow{\quad f\quad} \ker B \\ (x,y,z)\:&|\mkern-7mu\xrightarrow{\quad \enspace\quad}(x-2y-z, x, y-3z, y, z,z,0) \end{align} and this isomorphism maps any basis of $K^3$, for instance the canonical basis, onto a basis of $\ker B$.
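The computation is easy to double-check in exact arithmetic (my own snippet): the three vectors $f(e_i)$ are killed by $B$, and $B$ has rank $4$, so they form a basis of the $3$-dimensional kernel:

```python
from fractions import Fraction

B = [
    [3, -3, 1, 5, 1, 5, 5],
    [1, -1, 1, 1, 1, 3, -1],
    [2, -2, 1, 3, 0, 5, 2],
    [2, -2, 0, 4, 0, 2, 1],
]

def f(x, y, z):
    return [x - 2 * y - z, x, y - 3 * z, y, z, z, 0]

def matvec(A, v):
    return [sum(a * b for a, b in zip(row, v)) for row in A]

basis = [f(1, 0, 0), f(0, 1, 0), f(0, 0, 1)]
for v in basis:
    assert matvec(B, v) == [0, 0, 0, 0]

# rank of B via exact Gaussian elimination => dim ker B = 7 - rank = 3
M = [[Fraction(x) for x in row] for row in B]
rank, col = 0, 0
while rank < 4 and col < 7:
    piv = next((r for r in range(rank, 4) if M[r][col] != 0), None)
    if piv is None:
        col += 1
        continue
    M[rank], M[piv] = M[piv], M[rank]
    M[rank] = [x / M[rank][col] for x in M[rank]]
    for r in range(4):
        if r != rank and M[r][col] != 0:
            M[r] = [a - M[r][col] * b for a, b in zip(M[r], M[rank])]
    rank += 1
    col += 1

assert rank == 4
```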
{ "language": "en", "url": "https://math.stackexchange.com/questions/3774357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If $y = \frac{2}{5}+\frac{1\cdot3}{2!} \left(\frac{2}{5}\right)^2+\frac{1\cdot3\cdot5}{3!} \left(\frac{2}{5}\right)^3+\cdots$, find $y^2+2y$ If $$y = \frac{2}{5}+\frac{1\cdot3}{2!} \left(\frac{2}{5}\right)^2+\frac{1\cdot3\cdot5}{3!} \left(\frac{2}{5}\right)^3+\cdots$$ what is $y^2+2y$? Attempt: We know that for negative and fractional indices, $$(1+x)^n = 1 + nx + n(n-1)/2!\cdot x^2 + n(n-1)(n-2)/3!\cdot x^3 + \cdots$$ Rewriting the series in question, we get: $$\frac{2}{5} \left(1 + \frac{1\cdot3}{2!}\cdot \frac{2}{5}+\frac{1\cdot3\cdot5}{3!} \left(\frac{2}{5}\right)^2+\cdots\right)$$ I know this looks like the binomial expansion above, but I have no idea how to proceed further.
Substitute $x=\frac{2}{5}$: $$y=\sum_{i=1}^\infty\frac{x^i}{i!}(\prod_{k=1}^{i}(2k-1))$$ $$y=\sum_{i=1}^\infty\frac{x^i}{i!}((2i-1)!!)$$ For $|x|<\frac{1}{2}$, this is the Taylor series for $\frac{1}{\sqrt{1-2x}}-1$ $$y=\frac{1}{\sqrt{1-2x}}-1$$ $$y=\frac{1}{\sqrt{1-0.8}}-1$$ $$y=\frac{1}{\sqrt{0.2}}-1$$ $$y=\sqrt{5}-1$$ $$y^2+2y=(\sqrt{5}-1)^2+2(\sqrt{5}-1)$$ $$y^2+2y=5+1-2\sqrt{5}+2\sqrt{5}-2$$ $$y^2+2y=4$$
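A numerical confirmation (my own snippet) that the series indeed sums to $\sqrt5-1$ and hence that $y^2+2y=4$:

```python
import math

# y = sum_{i>=1} (2i-1)!! * x^i / i! at x = 2/5, built term by term via
# term_i / term_{i-1} = x * (2i-1) / i
x, y, term = 0.4, 0.0, 1.0
for i in range(1, 200):
    term *= x * (2 * i - 1) / i
    y += term

assert abs(y - (math.sqrt(5) - 1)) < 1e-12
assert abs(y * y + 2 * y - 4) < 1e-11
```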
{ "language": "en", "url": "https://math.stackexchange.com/questions/3774460", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Showing whether an ideal in $\mathbb{Z}[x,y]$ is prime. Is the ideal $(1+x^2,1+y^2)$ prime in $\mathbb{Z}[x,y]$? I have this: analogously to $\mathbb{Z}[x]/(1+x^2)\simeq \mathbb{Z}[i]$, $\mathbb{Z}[x,y]/(1+x^2,1+y^2)\simeq \mathbb{Z}[i]\times \mathbb{Z}[i]$, and $\mathbb{Z}[i]\times \mathbb{Z}[i]$ is not an integral domain. Therefore $(1+x^2,1+y^2)$ is not prime. Is this correct? P.S.: Is the ideal $(p)$, $p$ prime, prime in $\mathbb{Z}[x,y]$? I have this: $\mathbb{Z}[x,y]/(p)\simeq (\mathbb{Z}[x]/(p))[y]\simeq (\mathbb{Z}_{p}[x])[y]$ and $\mathbb{Z}_{p}$ is a field; then is $\mathbb{Z}_{p}[x]$ a field?
You need a more complete argument for why $$\mathbb{Z}[x,y]/(1+x^2,1+y^2)\simeq \mathbb{Z}[i]\times \mathbb{Z}[i].$$ I'm not even sure it is true. I think the quotient ring is $\mathbb{Z}[i]\otimes_{\mathbb Z}\mathbb{Z}[i].$ Then there are zero divisors: $$(i\otimes 1 +1\otimes i)(i\otimes 1-1\otimes i)=0.$$ You are correct, though, the ideal is not prime. We have $$(x-y)(x+y)\in (1+x^2,1+y^2),$$ but neither $x+y$ nor $x-y$ is in the ideal. [You need to show $x-y$ and $x+y$ are not in the ideal, of course.] The quotient can be written as the ring $R$ of all: $$a+bi+cj+dij$$ where $a,b,c,d\in \mathbb Z,$ and $i^2=j^2=-1$ and $ij=ji.$ You can show this is the quotient by taking $\mathbb Z[x,y]\to R$ with $x\mapsto i,y\mapsto j$ and show this map is onto and has kernel $(1+x^2,1+y^2).$
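The zero-divisor computation in $R$ can be checked by machine (my own snippet; elements are stored as coefficient $4$-tuples $(a,b,c,d)$ for $a+bi+cj+d\,ij$, and the multiplication table is derived from $i^2=j^2=-1$, $ij=ji$, so that $(ij)^2=1$, $i\cdot ij=-j$, $j\cdot ij=-i$):

```python
def mul(u, v):
    a, b, c, d = u
    e, f, g, h = v
    return (
        a * e - b * f - c * g + d * h,   # 1-component
        a * f + b * e - c * h - d * g,   # i-component  (j*ij = -i)
        a * g + c * e - b * h - d * f,   # j-component  (i*ij = -j)
        a * h + d * e + b * g + c * f,   # ij-component
    )

ZERO = (0, 0, 0, 0)
i, j = (0, 1, 0, 0), (0, 0, 1, 0)
s = (0, 1, 1, 0)    # image of x + y
t = (0, 1, -1, 0)   # image of x - y

assert mul(s, t) == ZERO and s != ZERO and t != ZERO   # zero divisors
assert mul(i, i) == mul(j, j) == (-1, 0, 0, 0)          # x^2, y^2 both map to -1
```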
{ "language": "en", "url": "https://math.stackexchange.com/questions/3774679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How do we solve the equation $2^{x^2-3}=x^{-1/3}$ algebraically? This question was from Khan Academy and, even though Sal solved it through graphing, I want to know how it can be solved algebraically. Here are the steps that I have tried: $2^{x^2-3}=x^{-1/3}$ $\left(2^{x^2-3}\right)^{-3}=\left(x^{-1/3}\right)^{-3}$ $2^{-3x^2+9}=x$ $\log_2(x)=-3x^2+9$ After this step I do not know what to do.
There is no explicit solution if you cannot use the Lambert function, and some numerical method will be required. Solving $$2^{x^2-3}=x^{-1/3}$$ is just the same as finding the zero of the function $$f(x)=\log \left(2^{x^2-3} \sqrt[3]{x}\right)$$ If you plot it, you will see that the root is close to $1.65$, which is close to $\sqrt 3$. Developing $f(x)$ as a Taylor series built around $x=\sqrt 3$ would give $$f(x)=\frac{\log (3)}{6}+\frac{ (1+18 \log (2))}{3 \sqrt{3}}\left(x-\sqrt{3}\right)+\left(\log (2)-\frac{1}{18}\right)\left(x-\sqrt{3}\right)^2 +O\left(\left(x-\sqrt{3}\right)^3\right)$$ which is a quadratic equation in $\left(x-\sqrt{3}\right)$. Just solve it and pick the closest root. Converted to decimal, this would give $x\sim 1.660183$ while the exact solution is $x=1.660186$.
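If a closed form is not needed at all, plain bisection on $g(x)=(x^2-3)\ln 2+\tfrac13\ln x$ (the logarithm of $2^{x^2-3}x^{1/3}$, which is increasing for $x>0$) reproduces the root; the bracket $[1.5,\,1.8]$ is chosen by inspection of the plot:

```python
import math

def g(x):
    # log of 2^(x^2 - 3) * x^(1/3); its unique zero solves 2^(x^2-3) = x^(-1/3)
    return (x * x - 3) * math.log(2) + math.log(x) / 3

def bisect(f, lo, hi, iters=80):
    assert f(lo) * f(hi) < 0          # sign change brackets the root
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(g, 1.5, 1.8)
```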
{ "language": "en", "url": "https://math.stackexchange.com/questions/3774766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Inequality involving size of random variable I’m following my professor’s notes, and I became confused by an inequality he used without justification; I cannot see why it is true. Any help is appreciated. Let $\{T_t\}_{t\in\mathbb{N}}$ be a sequence of iid, positive, integer-valued random variables. Suppose $$P\{T \ge x \} = \Theta \left(\frac{1}{\sqrt x}\right). $$ What I do not understand is that he then states: $$(1) \max_{1 \leq i \leq t} T_i = \Theta (t^2),$$ $$ (2) \sum_{i=1}^t T_i = \Theta (t^2), $$ both in probability. I would appreciate help on how he concluded both of these from the previous fact. Thank you!
Say $ a/\sqrt{x} \le P(T \ge x) \le b/\sqrt{x}$ for $x$ sufficiently large. For (1): $$ P\left(\max_{1\le i \le t} T_i < c t^2\right) = P(T < c t^2)^t $$ When $c > 0$, for $t$ sufficiently large this is between $(1 - a/(\sqrt{c} t))^t$ and $(1 - b/(\sqrt{c} t))^t$, which as $t \to \infty$ go to $\exp(-a/\sqrt{c})$ and $\exp(-b/\sqrt{c})$ respectively. Thus for any $\epsilon > 0$, taking $c_1$ and $c_2$ such that $\exp(-a/\sqrt{c_1}) < \epsilon$ and $\exp(-b/\sqrt{c_2}) > 1-\epsilon$, we find that $P\left(\max_{1\le i \le t} T_i < c_1 t^2\right) < \epsilon$ while $P\left(\max_{1\le i \le t} T_i < c_2 t^2\right) > 1-\epsilon$.
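The limit used in the last step, $(1-s/t)^t\to e^{-s}$ with $s=a/\sqrt c$, is easy to see numerically (sample values $a=1$, $c=4$ chosen arbitrarily):

```python
import math

a, c = 1.0, 4.0
t = 10 ** 7
approx = (1 - a / (math.sqrt(c) * t)) ** t   # (1 - s/t)^t with s = a/sqrt(c)
limit = math.exp(-a / math.sqrt(c))          # e^{-s}
```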
{ "language": "en", "url": "https://math.stackexchange.com/questions/3774979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In triangle $\triangle ABC$, angle $\angle B$ is equal to $60^\circ$; bisectors $AD$ and $CE$ intersect at point $O$. Prove that $OD=OE$. So I've already made a diagram (it is attached below), but I don't know how to prove it from there. Please help and explain your solution thoroughly because I have a test about this tomorrow and I want to understand this! Thank you! :D
Note that $O$ is the incenter of triangle $ABC$. Thus, the perpendiculars from $O$ to sides $AB$ and $BC$ have equal length. Let their feet be $G$ and $H$ respectively. Can you prove that $\triangle OGE$ and $\triangle OHD$ are congruent? Hint: use the fact that $\angle AOC$ is $120^\circ$ and the fact that $\angle GOH$ is also $120^\circ$.
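For readers who want to see the claim in coordinates first, here is a numerical check on one sample triangle with $\angle B=60^\circ$ (the side lengths $|AB|=3$, $|BC|=4$ are an arbitrary choice; the bisector-foot and incenter formulas are the standard ones):

```python
import math

B = (0.0, 0.0)                     # vertex B at the origin
C = (4.0, 0.0)                     # |BC| = 4 along the x-axis
A = (3 * math.cos(math.pi / 3), 3 * math.sin(math.pi / 3))  # |AB| = 3, angle B = 60 deg

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)  # side lengths opposite A, B, C

# Bisector feet: D on BC with BD/DC = c/b, E on AB with AE/EB = b/a.
D = tuple((b * B[k] + c * C[k]) / (b + c) for k in range(2))
E = tuple((a * A[k] + b * B[k]) / (a + b) for k in range(2))

# Incenter in barycentric coordinates (a : b : c).
O = tuple((a * A[k] + b * B[k] + c * C[k]) / (a + b + c) for k in range(2))
```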
{ "language": "en", "url": "https://math.stackexchange.com/questions/3775136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Series expansion for real number Suppose c is a real number with 0 < c < 1. I am interested in the following series. c = 1/n1 + 1/n2 + 1/n3 + 1/n4 + ..... Where n1, n2, ... are positive integers chosen to be as small as possible. For example 4/5 = 1/2 + 1/4 + 1/20 and e-2 = 1/2 + 1/5 + 1/55 + 1/9999 + .... By the way this expansion for e is accurate to 9 decimal places. I have proved that if c = n/m then the series terminates in at most n terms. Note 6/109 = 1/19 plus five more terms. The last term has 44 digits in the denominator. I know that the integers n1, n2, ... satisfy the inequality n(k+1) >= nk^2-nk + 1. However, if n(k+1) = nk^2-nk+1 for all k=1,2,.... then the sum is 1/(n1-1), so if the series does not terminate you must have n(k+1) >= nk^2 -nk + 2 infinitely often. If you want to calculate pi to 1 trillion digits all you need is the first 40 terms. (I am not seriously suggesting this as a way to calculate pi because finding the terms would be enormously more difficult than calculating pi.) I should warn you that making these calculations is very addictive and I have already got another mathematician hooked. Anyway, I find this expansion very interesting and I was wondering if this expansion has been discussed. One question. Suppose n1=2 and n(k+1)=nk^2-nk+2. The sum is 1/2 + 1/4 + 1/14 + 1/184 + 1/33674 + .... = 0.82689.... Is this a known number? Thanks for listening.
This general type of representation (sum of fractions with numerator 1 and no repeat denominators) is called Egyptian fractions. You'll find a lot of further facts in the cited page. Knowing what to search for will net you lots of further tidbits. The name is because pharaonic Egyptians used this (awkward to operate) way to represent fractions, with some exceptions like 2/3.
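The "smallest possible denominators" rule in the question is exactly the greedy (Fibonacci–Sylvester) algorithm for Egyptian fractions, and it is easy to run exactly with rational arithmetic; this sketch reproduces the question's 4/5 example and the six-term expansion of 6/109, whose last denominator has 44 digits:

```python
from fractions import Fraction

def greedy_egyptian(c):
    """Greedy unit-fraction expansion of a rational 0 < c < 1."""
    denoms = []
    while c > 0:
        n = -(-c.denominator // c.numerator)  # ceil(1/c): smallest valid denominator
        denoms.append(n)
        c -= Fraction(1, n)
    return denoms

ex1 = greedy_egyptian(Fraction(4, 5))    # expected [2, 4, 20]
ex2 = greedy_egyptian(Fraction(6, 109))  # six terms, huge final denominator
```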
{ "language": "en", "url": "https://math.stackexchange.com/questions/3775279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Equation (3.89) seems wrong in Bishop pattern recognition & machine learning book In Bishop's pattern recognition & machine learning book, I seem to have found a serious mistake in a math equation; serious because all subsequent arguments rely on it. It is eq. (3.89) on page 168: $$ 0 = \frac{M}{2\alpha} -\frac{1}{2}\mathbf{m}_N^T\mathbf{m}_N - \frac{1}{2}\sum_{i}{\frac{1}{\lambda_i + \alpha}} $$ The above equation is obtained by differentiating eq. (3.86) with respect to $\alpha$: $$ \ln p(\mathbf{t}|\alpha, \beta)=(M/2)\ln \alpha +(N/2)\ln\beta -E(\mathbf{m}_N)-(1/2)\ln |\mathbf{A}|-(N/2)\ln(2\pi) $$ where $$ E(\mathbf{m}_N) = (\beta/2)||\mathbf{t}-\mathbf{\Phi}\mathbf{m}_N||^2 +(\alpha/2)\mathbf{m}_N^T\mathbf{m}_N $$ However, because $\mathbf{m}_N$ depends on $\alpha$, it cannot simply be $\frac{\partial{E(\mathbf{m}_N)}}{\partial\alpha}= (1/2)\mathbf{m}_N^T\mathbf{m}_N$ The correct derivative should instead be: $$ \frac{\partial{E(\mathbf{m}_N)}}{\partial\alpha} = \{\beta\mathbf{\Phi}^T(\mathbf{\Phi}\mathbf{m}_N-\mathbf{t}) + \alpha\mathbf{m}_N\}^T\frac{\partial\mathbf{m}_N}{\partial\alpha}+\frac{1}{2}\mathbf{m}_N^T\mathbf{m}_N $$ Or am I making a big mistake?
You are not making a mistake, you just need to go one step further. First, note that $\mathbf{m}_{N}=\beta \mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t}$ with $\mathbf{A} = \alpha I + \beta \boldsymbol{\Phi}^{T}\boldsymbol{\Phi}$. Having that in mind we can start by working your expression out $$ \frac{\partial E\left(\mathbf{m}_{N}\right)}{\partial \alpha}=\left\{\beta \boldsymbol{\Phi}^{T}\left(\boldsymbol{\Phi} \mathbf{m}_{N}-\mathbf{t}\right)+\alpha \mathbf{m}_{N}\right\}^{T} \frac{\partial \mathbf{m}_{N}}{\partial \alpha}+\frac{1}{2} \mathbf{m}_{N}^{T} \mathbf{m}_{N} $$ Now, if we take a closer look, we can find that: $$ \left\{\beta \boldsymbol{\Phi}^{T}\left(\boldsymbol{\Phi} \mathbf{m}_{N}-\mathbf{t}\right)+\alpha \mathbf{m}_{N}\right\}^{T} \frac{\partial \mathbf{m}_{N}}{\partial \alpha} = \left\{ {\beta \boldsymbol{\Phi}^{T}\boldsymbol{\Phi}\mathbf{m}_{N} + \alpha \mathbf{m}_{N} - \beta \boldsymbol{\Phi}^{T}\mathbf{t}} \right\}\frac{\partial \mathbf{m}_{N}}{\partial \alpha} $$ which is the same as $\left\{ {\mathbf{A}\mathbf{m}_{N} - \beta \boldsymbol{\Phi}^{T}\mathbf{t}} \right\}\frac{\partial \mathbf{m}_{N}}{\partial \alpha} = \left\{ \beta \mathbf{A}\mathbf{A}^{-1} \mathbf{\Phi}^{\mathrm{T}} \mathbf{t} - \beta \boldsymbol{\Phi}^{T}\mathbf{t}\right\} \frac{\partial \mathbf{m}_{N}}{\partial \alpha}= 0$. This means that $\frac{\partial E\left(\mathbf{m}_{N}\right)}{\partial \alpha}=\frac{1}{2} \mathbf{m}_{N}^{T} \mathbf{m}_{N}$
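The cancellation is easy to confirm numerically: a central finite difference of $E(\mathbf m_N(\alpha))$ in $\alpha$ should agree with $\frac12\mathbf m_N^T\mathbf m_N$. Below is a sketch with a toy $3\times2$ design matrix and hand-rolled $2\times2$ linear algebra (all data values are arbitrary, not from the book):

```python
Phi = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # toy 3x2 design matrix
t = [1.0, 2.0, 0.5]                          # toy targets
beta = 2.0

def m_N(alpha):
    """m_N = beta * A^{-1} Phi^T t with A = alpha*I + beta*Phi^T Phi (2x2)."""
    g = [[sum(Phi[r][i] * Phi[r][j] for r in range(3)) for j in range(2)] for i in range(2)]
    A = [[alpha * (i == j) + beta * g[i][j] for j in range(2)] for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    Ainv = [[A[1][1] / det, -A[0][1] / det], [-A[1][0] / det, A[0][0] / det]]
    pt = [sum(Phi[r][i] * t[r] for r in range(3)) for i in range(2)]
    return [beta * sum(Ainv[i][j] * pt[j] for j in range(2)) for i in range(2)]

def E(alpha):
    m = m_N(alpha)
    resid = [t[r] - sum(Phi[r][i] * m[i] for i in range(2)) for r in range(3)]
    return beta / 2 * sum(v * v for v in resid) + alpha / 2 * sum(v * v for v in m)

alpha0, h = 1.0, 1e-5
numeric = (E(alpha0 + h) - E(alpha0 - h)) / (2 * h)   # dE/dalpha, finite difference
analytic = 0.5 * sum(v * v for v in m_N(alpha0))      # (1/2) m_N^T m_N
```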
{ "language": "en", "url": "https://math.stackexchange.com/questions/3775407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that Rank $(A)$ = Rank$(A^2)$ $\iff \exists$ B invertible s.t. $A^2= B A$. Given an $n \times n$ matrix $A$, not necessarily invertible, what invertible matrix $B$ solves $A^2= B A$? $B$ should be $A$, but that is not what is being said in the claim above. Please help. Thanks in advance.
We have yet to see that the converse is true, so here is a bit about that. Consider an $n \times n$ matrix over the field $k.$ Given an invertible matrix $B$ such that $A^2 = BA,$ we claim that $\ker(A^2) = \ker(BA) = \ker(A).$ Certainly, we have that $\ker(A) \subseteq \ker(A^2),$ as any vector $v$ such that $Av = 0$ gives $A^2 v = A(Av) = A(0) = 0.$ Conversely, for $v \in \ker(A^2) = \ker(BA),$ we have that $0 = (BA)v = B(Av)$ so that $Av$ is in $\ker(B).$ But as $B$ is invertible, we must have that $Av = 0$ so that $v \in \ker(A).$ By the Rank-Nullity Theorem, we have that $$\operatorname{rank}(A^2) + \operatorname{nullity}(A^2) = n = \operatorname{rank}(A) + \operatorname{nullity}(A),$$ but in view of the fact that $\ker(A^2) = \ker(A),$ we find that $\operatorname{rank}(A^2) = \operatorname{rank}(A).$
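Two tiny $2\times2$ examples make both directions concrete (examples of my own choosing): the idempotent $P$ below satisfies $P^2 = IP$ with equal ranks, while the nilpotent $N$ has $\operatorname{rank}(N)=1\neq 0=\operatorname{rank}(N^2)$, so no invertible $B$ with $N^2=BN$ can exist:

```python
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def rank2x2(M):
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    if det != 0:
        return 2
    return 0 if all(M[i][j] == 0 for i in range(2) for j in range(2)) else 1

P = [[1, 0], [0, 0]]   # idempotent: P^2 = P, so B = I works
N = [[0, 1], [0, 0]]   # nilpotent: N^2 = 0, rank drops
I2 = [[1, 0], [0, 1]]
```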
{ "language": "en", "url": "https://math.stackexchange.com/questions/3775557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
$H^1$-conforming approximation for elliptic PDE with discontinuous coefficient? Suppose I want to solve the diffusion equation $$-\nabla\cdot a \nabla u=f, \\u=0 \text{ on } \partial \Omega,$$ $f \in L^2(\Omega)$, $\partial \Omega$ is smooth. I use the standard node-based linear elements on a tetrahedral mesh. I know that to find some $u \in H^2(\Omega)$, I need to make assumptions on $a$: $a(x) \in C^1(\Omega)$ (Evans) or $a(x) \in C^0(\bar{\Omega})$ (another textbook). In this case I can safely build the approximation in $H^1_0(\Omega)$. But I also know that I can define $a$ to be merely piece-wise constant in my code without any harm, and I have seen a lot of such computations. The question: to which space does $u$ belong in the case of the piece-wise constant coefficient? I would be grateful for a reference.
It is quite standard to have $a$ piecewise constant, or more generally a piecewise constant diffusion matrix. Here we consider $a:\Omega\to\mathbb{R}$, but shall require additional regularity. In the stated problem, you have already assumed that the gradient exists. So your question actually boils down to whether or not a solution to the variational problem exists: find $u\in H^1_0(\Omega)$ such that $$\langle a\nabla u,\nabla v\rangle_{\Omega} = \langle f,v \rangle_{\Omega}\quad\forall\,v\in H^1_0(\Omega).$$ If a solution to the above exists, then it follows that it is in $H^1_0(\Omega)$. We need to assume that there exist real numbers $\underline{a},\overline{a}>0$ such that $$ \underline{a} \le a(x) \le \overline{a}\quad\forall\,x\in\Omega.$$ Under this assumption the norm $|v|_{a,H^1_0(\Omega)}^2:=\langle a\nabla v,\nabla v\rangle_{\Omega}$ is well defined and equivalent to the $H^1_0$ norm, thus existence and uniqueness follow from the Riesz Representation Theorem. Note that the condition above covers the piecewise positive constant case, but does not allow for $a$ to be negative. If $a<0$ for all $x$, then the above could be rewritten, and a solution would also be guaranteed. The problems arise when $a$ is allowed to switch between positive and negative over $\Omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3775684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove $\int_0^{\infty} \frac{\arctan{(x)}}{x} \ln{\left(\frac{1+x^2}{{(1-x)}^2}\right)} \; \mathrm{d}x = \frac{3\pi^3}{16}$ Prove that $$\int_0^{\infty} \frac{\arctan{(x)}}{x} \ln{\left(\frac{1+x^2}{{(1-x)}^2}\right)} \; \mathrm{d}x = \frac{3\pi^3}{16}$$ This is not a duplicate of this post; the bounds are different and the integral evaluates to a slightly different value. I tried looking at the solution from the linked post but I'm not familiar with harmonic numbers or complex analysis, and the real-analysis solution there is long. I tried IBP but got nowhere. Any advice for this monster integral (real analysis only please)?
Changing the bounds makes the integral way simpler, because after letting $x\to \frac{1}{x}$ we can get rid of that $\arctan x$. $$I=\int_0^{\infty} \frac{\arctan x}{x} \ln\left(\frac{1+x^2}{{(1-x)}^2}\right)dx\overset{x\to \frac{1}{x}}=\int_0^\infty \frac{\arctan \left(\frac{1}{x}\right)}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx$$ $$\Rightarrow 2I=\frac{\pi}{2} \int_0^\infty \frac{1}{x}\ln\left(\frac{1+x^2}{(1-x)^2}\right)dx\overset{x = \tan \frac{t}{2}}=-\frac{\pi}{2}\int_0^\pi\frac{\ln(1-\sin t)}{\sin t}dt$$ Also from here we know that: $$I(a)=\int_{0}^{\pi} \frac{\ln(1+\sin a\sin x)}{\sin x}dx=a(\pi -a)$$ $$\Rightarrow I=-\frac12 \frac{\pi}{2}I\left(\frac{3\pi}{2}\right)=-\frac12 \frac{\pi}{2}\left(-\frac{3\pi^2}{4}\right)=\frac{3\pi^3}{16}$$ Another way to deal with the last integral (credits to this answer), is to consider: $$\mathcal J(a)=\int_0^\frac{\pi}{2}\arctan\left(\frac{\sin x -\tan\frac{a}{2}}{\cos x}\right)dx$$ And differentiate w.r.t. a, obtaining: $$\mathcal J'(a)=-\frac12\int_0^\frac{\pi}{2}\frac{\cos x}{1-\sin a\sin x}dx=\frac12 \frac{\ln(1-\sin a)}{\sin a}$$ $$\mathcal J(\pi)-\mathcal J(0)=-\frac{\pi^2}{4}-\frac{\pi^2}{8}=\frac12\int_0^\pi\frac{\ln(1-\sin a)}{\sin a}da$$ $$\Rightarrow \int_0^\pi \frac{\ln(1-\sin a)}{\sin a}da=-\frac{3\pi^2}{4}$$
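The borrowed identity $I(a)=a(\pi-a)$ can be spot-checked numerically, e.g. at $a=\pi/6$; a midpoint rule is used because it never evaluates the integrand at the endpoints, where $\ln(1+\sin a\sin x)/\sin x$ only has removable singularities:

```python
import math

def I_numeric(a, n=100000):
    """Midpoint rule for the integral of ln(1 + sin(a) sin(x))/sin(x) over (0, pi)."""
    h = math.pi / n
    total = 0.0
    for k in range(n):
        x = (k + 0.5) * h            # midpoints avoid x = 0 and x = pi
        total += math.log(1 + math.sin(a) * math.sin(x)) / math.sin(x)
    return total * h

a = math.pi / 6
approx = I_numeric(a)
exact = a * (math.pi - a)
```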
{ "language": "en", "url": "https://math.stackexchange.com/questions/3775808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Problem with extension of a continuous function Let $X$ be a first-countable topological space, let $Y$ be a Hausdorff topological space, let $A\subset X$ be a subset of $X$ and let $f:A\rightarrow Y$ be a continuous function. Prove that, if there is an extension $$\overline{f} :\overline{A}\rightarrow Y$$ then $\overline{f}$ is uniquely determined by $f$. I thought that: if there is $g$ that is another extension of $f$, I call $Z=\lbrace x\in \overline{A}\mid \overline{f} (x)=g(x)\rbrace$. Then, $Y$ is a Hausdorff space so, $Z$ is closed in $X$ and, $A$ is dense in $\overline{A}$ so $A\subseteq Z\Rightarrow \overline{A}\subseteq Z\Rightarrow \overline{A} = Z$ But I don't think it's right because I didn't use the fact that $X$ is first-countable. I also have to use the fact that $f$ is continuous, so $x_{n}\rightarrow x\Rightarrow f(x_{n})\rightarrow f(x)$. Can someone help me?
Your argument works. You may need to elaborate a bit on Then, $Y$ is a Hausdorff space so, $Z$ is closed in $X$ and it may be preferable to write $Z$ is closed in $\overline{A}$ there. Whether you need to elaborate or not depends on what properties of Hausdorff spaces and continuous maps can be assumed as generally known. First countability of $X$ plays no role in it, it was probably assumed to enable arguments with sequences for those who aren't yet used to topological arguments using open and closed sets, neighbourhoods, preimages etc. I find your argument much preferable to a sequence (or net/filter for the version not assuming first countability) argument.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3775924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate $61^{61} \pmod9$ I have a solution in front of me, but several parts do not quite make sense to me. Since $61 \equiv −2 \pmod9$, we have $$61^{61} \equiv (−2)^{61}\equiv −2 \cdot 8^{60}\pmod{9}$$ First question: how do we get from $(-2)^{61}$ is congruent to $(-2)\cdot 8^{60}$? $$-2\cdot 8^{60}\equiv −2 · (−1)^{61} \pmod{9}$$ Second question: how do we get from $-2 \cdot 8^{60}$ is congruent to $-2 \cdot (-1)^{61}$? $$\equiv −2 \equiv 7 \pmod 9.$$
$61^{61}\equiv(-2)^{61}\equiv(-2)(8)^{20}\equiv(-2)(-1)^{20}\equiv-2\equiv7\pmod9.$ Alternatively, by Euler's theorem, $a^6\equiv1\pmod9$ if $\gcd(a,9)=1$, so $61^{61}\equiv(-2)^{61}\equiv(-2)^{60}(-2)\equiv((-2)^6)^{10}(-2)\equiv7\pmod9.$
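Both routes are confirmed instantly by built-in modular exponentiation:

```python
r = pow(61, 61, 9)     # fast modular exponentiation of 61^61 mod 9
base = 61 % 9          # 61 ≡ 7 ≡ -2 (mod 9)
euler = pow(2, 6, 9)   # Euler's theorem: phi(9) = 6, so 2^6 ≡ 1 (mod 9)
```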
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving a result for $\prod_{k=0}^{\infty}\Bigl(1-\frac{4}{(4k+a)^2}\Bigr)$ $$\prod_{k=0}^{\infty}\Bigl(1-\frac{4}{(4k+a)^2}\Bigr)=\frac{(a^2-4)\Gamma^2\bigl(\frac{a+4}{4}\bigr)}{a^2\Gamma\bigl(\frac{a+2}{4}\bigr)\Gamma\bigl(\frac{a+6}{4}\bigr)}$$ According to WA. I attempted using $$\prod_{k=0}^{\infty}\Bigl(1-\frac{x^2}{\pi^2k^2}\Bigr)=\frac{\sin x}{x}$$ But I couldn’t reindex the product appropriately to use it the way I wanted to (factoring out a 4 and then continuing from there). I’d like to have at least a direction to go in or an idea on how to do the product.
It isn't pretty, but we can prove this by working backwards using Euler's product definition of the gamma function (seen here): $$ \Gamma(x) = \lim_{n\to\infty} n!(n+1)^x \prod_{k=0}^n (x+k)^{-1}. $$ When we substitute that in for the right hand side, we get $$ \begin{align} \frac{(a^2-4)\Gamma^2\bigl(\frac{a+4}{4}\bigr)}{a^2\Gamma\bigl(\frac{a+2}{4}\bigr)\Gamma\bigl(\frac{a+6}{4}\bigr)} &= \frac{a^{2}-4}{a^{2}}\frac{\left(n+1\right)^{\frac{a+4}{2}}n!^{2}\prod_{k=0}^{n}\left(\frac{a+4}{4}+k\right)^{-2}}{\left(n+1\right)^{\frac{a+4}{2}}n!^{2}\left(\prod_{k=0}^{n}\left(\frac{a+2}{4}+k\right)^{-1}\right)\left(\prod_{k=0}^{n}\left(\frac{a+6}{4}+k\right)^{-1}\right)}\\ &= \frac{a^{2}-4}{a^{2}}\frac{\prod_{k=0}^{n}\left(\frac{a+4}{4}+k\right)^{-2}}{\prod_{k=0}^{n}\left(\frac{a+2}{4}+k\right)^{-1}\left(\frac{a+6}{4}+k\right)^{-1}}\\ &= \frac{a^{2}-4}{a^{2}}\prod_{k=0}^{\infty}\frac{\left(\frac{a+2}{4}+k\right)\left(\frac{a+6}{4}+k\right)}{\left(\frac{a+4}{4}+k\right)^{2}}\\ &= \frac{a^{2}-4}{a^{2}}\prod_{k=1}^{\infty}\frac{\left(\frac{a}{4}+k+\frac{1}{2}\right)\left(\frac{a}{4}+k-\frac{1}{2}\right)}{\left(\frac{a}{4}+k\right)^{2}}\\ &=\frac{a^{2}-4}{a^{2}}\prod_{k=1}^{\infty}\frac{\left(\frac{a}{4}+k\right)^{2}-\frac{1}{4}}{\left(\frac{a}{4}+k\right)^{2}}\\ &= \left(1-\frac{4}{a^{2}}\right)\prod_{k=1}^{\infty}\left(1-\frac{4}{\left(a+4k\right)^{2}}\right)\\ &= \prod_{k=0}^{\infty}\left(1-\frac{4}{\left(4k+a\right)^{2}}\right). \end{align} $$ (I omitted the limit of $n$ in the equations because it already takes up so much space.)
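A numerical cross-check of the closed form, e.g. at $a=6$, where the right-hand side happens to simplify to $\pi/4$ since $\Gamma(5/2)^2=9\pi/16$; the partial products converge slowly, roughly like $1/N$, so a loose tolerance is used:

```python
import math

def partial_product(a, n=20000):
    p = 1.0
    for k in range(n):
        p *= 1 - 4 / (4 * k + a) ** 2
    return p

def closed_form(a):
    return ((a * a - 4) * math.gamma((a + 4) / 4) ** 2
            / (a * a * math.gamma((a + 2) / 4) * math.gamma((a + 6) / 4)))

lhs6 = partial_product(6)
rhs6 = closed_form(6)
```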
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Solve Bernoulli differential equation I think that my question is dumb... but I am stuck in a step. I need to solve this ODE: $$\rho' = \rho(1+\rho^2)$$ I made the variable change $u:=\rho^{-2}$. Then, separating variables and replacing $u$ by $\rho^{-2}$, I get $\rho=\frac{1}{\sqrt{e^{-2t}-1}}$. But, when I check the solution, the result is: $\rho=\frac{1}{\sqrt{e^{-2t}-1+\color{red}{\frac{1}{\rho_{0}^2}e^{-2t}}}}$. I think I forgot the "$+C$" term in some integration step... I don't know. Thanks in advance
EDIT Let $y = \rho$ for writing convenience. To begin with, notice that \begin{align*} y' = y(1+y^{2}) & \Longleftrightarrow \frac{y'}{y(1+y^{2})} = 1 \end{align*} Then we have that \begin{align*} \frac{1}{y(1+y^{2})} = \frac{(1 + y^{2}) - y^{2}}{y(1+y^{2})} = \frac{1}{y} - \frac{y}{1+y^{2}} \end{align*} Finally, we obtain the solution by integrating both sides: \begin{align*} \int\frac{\mathrm{d}y}{y(1+y^{2})} = \int1\mathrm{d}x & \Longleftrightarrow \ln|y| - \frac{\ln(1+y^{2})}{2} = x + c\\\\ & \Longleftrightarrow \ln(y^{2}) - \ln(1+y^{2}) = 2x + k\\\\ & \Longleftrightarrow \ln\left(\frac{y^{2}}{1+y^{2}}\right) = 2x + k\\\\ & \Longleftrightarrow \frac{y^{2}}{1+y^{2}} = \exp(2x+k)\\\\ & \Longleftrightarrow y^{2}(1 - \exp(2x+k)) = \exp(2x+k)\\\\ & \Longleftrightarrow y^{2} = \frac{\exp(2x+k)}{1-\exp(2x+k)}\\\\ & \Longleftrightarrow y = \pm\left(\frac{\exp(2x+k)}{1-\exp(2x+k)}\right)^{1/2} \end{align*} In order to determine $k$, apply the initial condition $y(0) = y_{0}$. Hopefully this helps.
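As a final check that the closed form really solves $y'=y(1+y^2)$: with $y(0)=1$ the constant is forced to $e^{k}=\tfrac12$, and a central finite difference of $y$ matches $y(1+y^2)$ (the formula is only valid while $\exp(2x+k)<1$, i.e. $x<\tfrac12\ln 2$ here):

```python
import math

k = -math.log(2)   # forced by y(0) = 1, since e^k/(1 - e^k) = 1

def y(x):
    u = math.exp(2 * x + k)
    return math.sqrt(u / (1 - u))   # valid while u < 1, i.e. x < (ln 2)/2

x0, h = 0.1, 1e-5
numeric_dy = (y(x0 + h) - y(x0 - h)) / (2 * h)   # dy/dx by central difference
rhs_ode = y(x0) * (1 + y(x0) ** 2)               # right-hand side of the ODE
```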
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If $A\cap B^\complement=\emptyset$ then $A\cap B=A$ I need help with this exercise. If $A\cap B^\complement=\emptyset$ then $A\cap B=A$ I tried $$A\cap B^\complement =\emptyset$$ $$(A\cap B^\complement)\cap B =\emptyset \cap B$$ $$A\cap (B^\complement\cap B )= B$$ $$A=B$$ Is this reasoning correct?
No, your reasoning leads nowhere because you get $\emptyset=\emptyset$. The property you want is a direct consequence of $$ A=(A\cap B)\cup(A\cap B^c). $$
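Both the identity and the original implication can be verified exhaustively over a small universe (three elements suffice for illustration; relative to the universe $U$, the set $A\cap B^\complement$ is just $A-B$ for $A\subseteq U$):

```python
from itertools import chain, combinations

U = {0, 1, 2}
subsets = [set(s) for s in
           chain.from_iterable(combinations(sorted(U), r) for r in range(len(U) + 1))]

# A = (A ∩ B) ∪ (A ∩ B^c), and the implication: A - B = ∅ implies A ∩ B = A.
identity_holds = all(A == (A & B) | (A - B) for A in subsets for B in subsets)
implication_holds = all((A - B) or ((A & B) == A) for A in subsets for B in subsets)
```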
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Does weak continuity imply continuity? I have come across an excerpt from a mathematical statistics book in which $H$ and $J$ are Hilbert spaces and $H^{\star}$ is the dual space. For me, the statement after Definition 10 is unconvincing and I cannot, in general, show that weak continuity implies continuity. In particular, if $x_n$ converges strongly to $x$, then it also converges weakly and by weak continuity $f(x_n)$ converges weakly to $f(x)$. However, we need $f(x_n)$ to converge strongly to $f(x)$ in order to have continuity and it's not clear to me how we can obtain this. I expect this to be true when $J$ is finite-dimensional as in that case weak and strong convergence are equivalent, but other than that I am at a loss. I was wondering then, is the book wrong on this?
'Clearly' should not have been there but the result is true. This requires the so-called Closed Graph Theorem (CGT). If $x_n \to x$ and $f(x_n) \to y$ in the norm then $f(x_n) \to f(x)$ weakly and it converges to $y$ in the norm (hence also weakly) and this implies $y=f(x)$. By CGT it follows that $f$ is continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find what the span of 2 linearly independent vectors is I have been trying assignment questions in linear algebra and I am unable to solve this particular one. Let $x=\left(x_{1}, x_{2}, x_{3}\right), y=\left(y_{1}, y_{2}, y_{3}\right) \in \mathbb{R}^{3}$ be linearly independent. Let $\delta_{1}=x_{2} y_{3}-y_{2} x_{3}, \delta_{2}=x_{1} y_{3}-y_{1} x_{3}$ $\delta_{3}=x_{1} y_{2}-y_{1} x_{2} .$ If $V$ is the span of $x, y$ then * *$V=\left\{(u, v, w): \delta_{1} u-\delta_{2} v+\delta_{3} w=0\right\}$ *$V=\left\{(u, v, w):-\delta_{1} u+\delta_{2} v+\delta_{3} w=0\right\}$ *$V=\left\{(u, v, w): \delta_{1} u+\delta_{2} v-\delta_{3} w=0\right\}$ *$V=\left\{(u, v, w): \delta_{1} u+\delta_{2} v+\delta_{3} w=0\right\}$ I know the definitions of linear independence and span, but I don't know how to solve this problem because of the $\delta_{1}$, $\delta_{2}$, $\delta_{3}$: I am unable to write $V$ in terms of the $\delta_{i}$'s and $(u, v, w)$. Any help will be really appreciated.
We are given that $x=(x_1,x_2,x_3), y=(y_1,y_2,y_3) \in \mathbb{R}^3$ are linearly independent, and that $V$ is the span of these two vectors. This means that each $(u,v,w)\in V$ can be written uniquely as a linear combination of the linearly independent vectors $x,y\in \mathbb{R}^3$. In particular, for each $(u,v,w)\in V$, $$ \begin{vmatrix} x_1 & x_2 & x_3 \\ y_1 & y_2 & y_3 \\ u & v & w \\ \end{vmatrix} =0 $$ $\implies u(x_2y_3-y_2x_3)-v(x_1y_3-y_1x_3)+w(x_1y_2-x_2y_1) = 0 \implies \delta_{1} u-\delta_{2} v+\delta_{3} w=0 $
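A concrete check with, say, $x=(1,2,3)$ and $y=(4,5,6)$ (an arbitrary independent pair): the form $\delta_1u-\delta_2v+\delta_3w$ vanishes on $x$, on $y$ and on every combination of them, and not on a vector outside the plane, which singles out the first option:

```python
x = (1, 2, 3)
y = (4, 5, 6)

d1 = x[1] * y[2] - y[1] * x[2]   # delta_1 = x2 y3 - y2 x3
d2 = x[0] * y[2] - y[0] * x[2]   # delta_2 = x1 y3 - y1 x3
d3 = x[0] * y[1] - y[0] * x[1]   # delta_3 = x1 y2 - y1 x2

def form(u, v, w):
    return d1 * u - d2 * v + d3 * w

combo = tuple(2 * x[k] + 3 * y[k] for k in range(3))   # a vector in span{x, y}
```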
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I solve the following system of equations by using the Gauss-Jordan method? $$ \left\{\begin{array}{rcrcrcrcr} x & - & 2y & + & 3z & - & 4w & = & 10 \\ 2x & - & 3y & + & 4z & - & 5w & = & 18 \\ 3x & - & 4y & + & 5z & - & 6w & = & 26 \\ 4x & - & 5y & + & 6z & - & 7w & = & 9 \end{array}\right. $$ I tried to solve the problem and came up with the RREF \begin{array}{cccc|c}1 & 0 & -1 & 2 & 6 \\0 & 1 & -2 & 3 & -2 \\0 & 0 & 0 & 0 & 0 \\0 & 0 & 0 & 0 & 0\end{array}
A whole row filled with 0s can be eliminated (it corresponds to the equation 0=0). The remaining system can be turned into a square one by moving the columns corresponding to $z$ and $w$ to the right-hand side. 1 0 -1 2|6 0 1 -2 3|-2 corresponds to $x=6+\lambda-2\mu$ and $y=-2+2\lambda-3\mu$, where $\lambda=z$ and $\mu=w$ are free parameters.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776711", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Approximation of open subsets of $\Bbb R^2$ by compact sets. I am having difficulty proving one of the regularity properties of Lebesgue measure. Here it is: Let $(\Bbb R^2, \mathcal L_{\Bbb R^2}, \lambda_{\Bbb R^2})$ be the Lebesgue measure space on $\Bbb R^2.$ Then prove that $\lambda_{\Bbb R^2} (U) = \sup \left \{\lambda_{\Bbb R^2}(K)\ |\ K\ \text {is compact},\ K \subseteq U \right \},$ for any open subset $U \subseteq \Bbb R^2.$ How do I proceed? Any help will be highly appreciated. Thanks in advance.
I proceed along the same lines as Greg Martin pointed out in his comment above. Here's my answer. Let $S : = \left \{\lambda_{\Bbb R^2} (K)\ |\ K\ \text {is compact},\ K \subseteq U \right \}.$ We need to prove that $\lambda_{\Bbb R^2} (U) = \sup S.$ Since $U$ is open, $U$ is a neighbourhood of each of its points. So for any $x \in U$ there exists an open ball $B_x$ surrounding $x$ such that $B_x \subseteq U.$ Take any slightly smaller closed ball $B_x'$ inside $B_x$ for each $x \in U.$ So $B_x'$ is a compact neighbourhood of $x$ sitting inside $U,$ for each $x \in U.$ Since $\Bbb R^2$ is second countable it has a countable open base $\mathcal B = \left \{B_n \right \}_{n=1}^{\infty}.$ So for each $x \in U,$ there exists $B_{m(x)} \in \mathcal B$ such that $x \in B_{m(x)} \subseteq B_x'.$ Now consider the collection $\mathcal B' := \left \{B_{m(x)}\ |\ x \in U \right \}.$ Then we have $U = \bigcup\limits_{x \in U} B_{m(x)}.$ Since $\mathcal B' \subseteq \mathcal B$ and $\mathcal B$ is countable it follows that $\mathcal B'$ is also countable. So we can index the elements of $\mathcal B'$ by natural numbers, say $\mathcal B' = \left \{B_{n_r} \right \}_{r=1}^{\infty}$ (this collection may well be finite, in which case the balls $B_{n_r}$ are not all distinct).
So we have $U = \bigcup\limits_{r=1}^{\infty} B_{n_r}.$ Now by the construction of $\mathcal B'$ it follows that for any $r \in \Bbb N,$ there exists $x_r \in U$ such that $B_{n_r} \subseteq B_{x_r}' \subseteq U.$ Since $U = \bigcup\limits_{r=1}^{\infty} B_{n_r},$ it follows that $U = \bigcup\limits_{r=1}^{\infty} B_{x_r}'.$ Let $K_n : = \bigcup\limits_{r=1}^{n} B_{x_r}'.$ Then $\{K_n \}_{n=1}^{\infty}$ is a sequence of compact subsets of $U,$ with $K_n \subseteq K_{n+1},$ for all $n \geq 1$ and moreover $\bigcup\limits_{n=1}^{\infty} K_n = \bigcup\limits_{r=1}^{\infty} B_{x_r}' = U.$ So $\{K_n \}_{n=1}^{\infty}$ is a sequence of compact subsets of $U$ such that $K_n\ \bigg \uparrow\ U.$ Since $\lambda_{\Bbb R^2}$ is a measure on $\mathcal L_{\Bbb R^2},$ it is countably additive and hence continuous from below. Therefore $\lim\limits_{n \to \infty} \lambda_{\Bbb R^2} (K_n) = \lambda_{\Bbb R^2} (U).$ But each $K_n$ is a compact subset of $U$ and hence $$\lambda_{\Bbb R^2} (K_n) \leq \sup S,\ \text {for all}\ n \geq 1.$$ Therefore $$\lim\limits_{n \to \infty} \lambda_{\Bbb R^2} (K_n) \leq \sup S.$$ But this implies that $$\lambda_{\Bbb R^2} (U) \leq \sup S.\ \ \ \ \ \ \ (1)$$ On the other hand, for any compact set $K \subseteq U$ we have $\lambda_{\Bbb R^2}(K) \leq \lambda_{\Bbb R^2}(U)$ by monotonicity of $\lambda_{\Bbb R^2},$ so $\lambda_{\Bbb R^2} (U)$ is an upper bound of $S.$ Hence $$\lambda_{\Bbb R^2} (U) \geq \sup S.\ \ \ \ \ \ \ \ (2)$$ Combining $(1)$ and $(2)$ we have $$\lambda_{\Bbb R^2} (U) = \sup S$$ as required. QED
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is a locally compact Hausdorff quotient of a locally compact $\sigma$-compact first countable Hausdorff space always Frechet-Urysohn? This question follows on from a previous one, which has been answered in the negative: Is a locally compact Hausdorff quotient of a locally compact $\sigma$-compact first countable Hausdorff space always first countable? Let $Y$ be a locally compact, $\sigma$-compact, first countable Hausdorff space and $q:Y\to X$ a quotient map with $X$ Hausdorff. Suppose that $X$ is locally compact. Is $X$ a Frechet-Urysohn space?
I describe an example here including the argument why the quotient space (there called $Y$) is not Fréchet-Urysohn (namely the map $q$ is not hereditarily quotient). The space we take a quotient of is a $\sigma$-compact and locally compact subspace of $\Bbb R$, so it fits the bill. The Arens space $S_2$ (which is Hausdorff) is described here as a quotient of a countable disjoint sum of convergent sequences (clearly a $\sigma$-compact, locally compact, first countable Hausdorff space), and it is a classic example of a sequential but not Fréchet-Urysohn space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3776926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the difference between stochastic process and random variable? I am having a hard time grasping the core difference between a random variable and a stochastic process. * *A random variable assigns a number to every outcome of an experiment. *A random process assigns a function of time to every outcome of an experiment. But the values of this function of time can be represented with ONE SINGLE random variable as well. So what is the point in having a stochastic process when you can represent an experiment with only random variables? Could somebody make one or two examples where the difference is clear? Appreciate it
Given a probability space $(\Omega, \mathfrak{B}, P)$, a random variable is a measurable map $$X:\Omega \to \mathbb{R} $$ while a random (i.e. stochastic) process is a family of random variables $$X:\Omega \times T \to \mathbb{R}$$ where $T$ is often interpreted as time. You can understand it this way: a random variable represents randomness that does not depend on time. But what if it does?
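A toy simulation makes the distinction tangible (a hypothetical $\pm1$ random walk of my own choosing; fixing the seed plays the role of fixing $\omega$): the random variable returns one number, while the process returns an entire path $t\mapsto X(\omega,t)$:

```python
import random

def random_variable(omega_seed):
    rng = random.Random(omega_seed)   # fixing omega = fixing the seed
    return rng.choice([-1, 1])        # a single number

def random_process(omega_seed, T=10):
    rng = random.Random(omega_seed)
    path, pos = [], 0
    for _ in range(T):                # one value for every time t
        pos += rng.choice([-1, 1])
        path.append(pos)
    return path

value = random_variable(0)
path = random_process(0)
```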
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Set Operation $D = (A \times B) - (B \times C)$ I want to find $$ D = (A \times B) - (B \times C) $$ where $$ A = \{x,y,z\}, \ B = \{1,2\}, \ C = \{x,z\} $$ So far I have computed $$ A \times B = \{(x,1), (x,2), (y,1),(y,2),(z,1),(z,2)\} $$ and $$ B \times C = \{(1,x),(1,z),(2,x),(2,z)\} $$ Hence $$ D = \{(x,1), (x,2), (y,1),(y,2),(z,1),(z,2)\} - \{(1,x),(1,z),(2,x),(2,z)\} $$ This is where I am stuck because order matters in a Cartesian product. The answer is $$ D = \{(x,1), (x,2), (y,1),(y,2),(z,1),(z,2)\} $$ and I have no idea why. The set difference keeps the elements of the first set that are not in the second, for example $\{1,2,3\} - \{2,3,4\} = \{1\}$, if I am not mistaken.
Because no element of $B \times C$ is an element of $A \times B$, the set $D$ is just $A \times B$. Remember: $X - Y = \{ x \in X : x \notin Y \}$ so if every element of $X$ is not an element of $Y$ then $x \notin Y$ is always satisfied so $X - Y = \{x \in X\} = X$.
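For what it's worth, the computation can be checked mechanically with Python sets of tuples, taking $A=\{x,y,z\}$, $B=\{1,2\}$, $C=\{x,z\}$:

```python
# Quick check of D = (A x B) - (B x C) from the question.
A = {'x', 'y', 'z'}
B = {1, 2}
C = {'x', 'z'}

AxB = {(a, b) for a in A for b in B}
BxC = {(b, c) for b in B for c in C}

D = AxB - BxC  # set difference: elements of AxB that are not in BxC

# No pair (letter, number) can equal a pair (number, letter),
# so the difference removes nothing and D is all of A x B.
print(D == AxB)  # → True
print(sorted(D))  # → [('x', 1), ('x', 2), ('y', 1), ('y', 2), ('z', 1), ('z', 2)]
```

The two sets of pairs are disjoint, which is exactly why $D = A \times B$.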
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Inverse image of annihilator ideals Let $f: R \rightarrow S$ be a ring homomorphism and let $J$ be the annihilator of some ideal in $S$. Under what conditions on $R$ and the kernel of $f$ is $f^{-1}(J)$ the annihilator of some ideal of $R$? This question seems hard to me although it's elementary, and it is giving me a hard time. I think it works if the kernel of $f$ is the annihilator of some ideal in $R$, but I am still in the process of proving that. Are there any sufficient/necessary conditions for this?
Let $f:R\to S$ be a ring homomorphism and $I$ an ideal of $S$ with annihilator $J$. Then $f^{-1}(I)$ is an ideal. If $j\in f^{-1}(J)$ and $a\in f^{-1}(I)$, then $f(ja)=f(j)f(a)=0$. So for $j$ to annihilate every element of $f^{-1}(I)$ it is necessary that the kernel of $f$ intersects $jf^{-1}(I)$ trivially. Therefore, for $f^{-1}(J)$ to annihilate $f^{-1}(I)$ we need that the kernel of $f$ intersects $f^{-1}(J)f^{-1}(I)$ trivially (watch out because this is not $f^{-1}(IJ)$ in general). This condition is also sufficient because if $r\in R$ annihilates $f^{-1}(I)$, then $f(r)$ annihilates $I$, so $f(r)\in J$, meaning that $r\in f^{-1}(J)$.
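A concrete instance of the necessary condition can be checked by brute force. The example below is my own (not from the answer): take $f:\mathbb{Z}\to\mathbb{Z}/6$ the reduction map, $I=(2)$ in $\mathbb{Z}/6$, and $J=\operatorname{Ann}(I)$.

```python
# Brute-force annihilator computation in Z/6 (hypothetical worked example).
n = 6
I = {(2 * k) % n for k in range(n)}  # ideal generated by 2 in Z/6: {0, 2, 4}
J = {j for j in range(n) if all((j * a) % n == 0 for a in I)}  # Ann(I)

print(I)  # → {0, 2, 4}
print(J)  # → {0, 3}

# Here f^{-1}(I) = 2Z and f^{-1}(J) = 3Z, so f^{-1}(J) f^{-1}(I) = 6Z = ker f:
# the kernel meets the product nontrivially, and indeed 3Z does not annihilate
# 2Z in Z (e.g. 3*2 = 6 != 0 in Z, even though 3*2 = 0 in Z/6).
print((3 * 2) % n)  # → 0
```

So the intersection condition in the answer really does fail here, matching the fact that $3\mathbb{Z}$ is not an annihilator in the domain $\mathbb{Z}$.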
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving that sequence is in $\ell^\infty$ Let $(y_k)_{k\in\mathbb{N}}$ be a sequence in $\mathbb{K}$ with the property that for all sequences $(x_k)_{k\in\mathbb{N}}\in\ell^1$ the sequence $\sum_{k=1}^{n} x_k y_k$ has a limit in $\mathbb{K}$ for $n\to\infty$. Prove that $y\in \ell^\infty$. I tried to define an operator $T_y: \ell^1\rightarrow \ell^1, (x_k)_{k\in\mathbb{N}}\mapsto (x_ky_k)_{k\in\mathbb{N}}$, but as I do not know about absolute convergence of $(x_k y_k)_{k\in\mathbb{N}}$, I am not sure if it really maps to $\ell^1$.
Suppose the sequence $(y_n)_{n\in\mathbb{N}}$ is unbounded. Then for each $n>0$ there is $k_n$ such that $|y_{k_n}|\ge n$. Without loss of generality we may replace the original $(y_n)$ by this subsequence, so $|y_n|\ge n$ for all $n$. Let $x_n:=\frac{\overline{y}_n}{|y_n|n^2}$; the factor $n^2$ is introduced so that $(x_n)\in\ell^1$. Then $$ \sum_nx_ny_n=\sum_n\frac{|y_n|^2}{|y_n|n^2}=\sum_n\frac{|y_n|}{n^2}\ge\sum_n\frac{1}{n}, $$ which diverges. Hence, by contraposition, if $\sum_nx_ny_n$ converges for all $(x_n)\in\ell^1$ then $(y_n)\in\ell^\infty$.
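As a purely numeric illustration (not part of the proof), take the simplest unbounded choice $y_n=n$; the construction gives $x_n=1/n^2\in\ell^1$, and the partial sums of $\sum x_ny_n$ are the harmonic numbers, which grow without bound:

```python
# Watch the partial sums of sum x_n y_n diverge for y_n = n, x_n = 1/n^2.
import math

def partial_sum(N):
    # sum_{n<=N} x_n y_n = sum_{n<=N} (1/n^2) * n = H_N, the N-th harmonic number
    return sum((1.0 / n**2) * n for n in range(1, N + 1))

for N in (10, 100, 1000):
    # harmonic numbers grow like log(N) + Euler's gamma, so no finite limit
    print(N, partial_sum(N), math.log(N))
```

Meanwhile $\sum 1/n^2$ converges, so $(x_n)$ really is in $\ell^1$; the pairing simply fails to converge.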
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is logical equivalence itself a proposition? I understand that the biconditional $P \leftrightarrow Q$ is a hypothesis that may be true or false depending on the truth values of $P$ and $Q$. Furthermore, I understand that the logical equivalence $P \Leftrightarrow Q$ is the assertion that the biconditional is a tautology, i.e. that the hypothesis is true. However, isn't $P \Leftrightarrow Q$ itself a proposition? After all, it is equivalent to the statement that $P \leftrightarrow Q$ is tautologically equivalent to $T:$ $$(P \Leftrightarrow Q) \Leftrightarrow ((P \leftrightarrow Q) \Leftrightarrow T).$$ Is this circular? Or is there some "higher form" of equality that logical equivalence lives in?
It is an equivalence in a metalanguage, which loosely speaking is a "higher form" of equality. Basically, $P\Leftrightarrow Q$ says that "$P$ is true if and only if $Q$ is true", where being true is something derivable from the rules of propositional logic. This is different from saying "$P\leftrightarrow Q$ is true", since the latter is a statement in propositional logic, whereas the former is a statement about propositional logic. Read about metalanguages in the Wikipedia article.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How can a matrix represent a relation when a relation is not a function? A matrix can represent a relation (https://en.wikipedia.org/wiki/Logical_matrix). How is this so? Since matrices are a representation of linear maps, does this mean that a linear map is not necessarily a function?
Don’t confuse the representation with the thing being represented. A PNG image might show a portrait of person, or a rendered math equation. Both are grids of pixels but that doesn’t mean that human faces are types of math equations (or vice versa). Similarly a matrix is just a rectangular grid of numbers. Linear maps can be expressed using matrices, but so can many other, unrelated, things. Binary relations can be expressed as a matrix of 0s and 1s, and this representation is fruitful since it lets us express new ideas (composition of relations) in terms of old ones (matrix multiplication). But a matrix of numbers might represent a linear map, or might represent a binary relation, and the two are not the same.
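A small sketch may help (the relations below are hypothetical, chosen only for illustration): the same grid of numbers stores a binary relation, and composition of relations becomes boolean matrix multiplication, even though neither relation is a function, let alone a linear map.

```python
# A binary relation on {0, 1, 2} stored as a 0/1 matrix.
n = 3
R = {(0, 1), (1, 2)}  # "successor" relation
S = {(1, 0), (2, 1)}  # "predecessor" relation

def to_matrix(rel):
    return [[1 if (i, j) in rel else 0 for j in range(n)] for i in range(n)]

def compose(M, N):
    """Boolean matrix product: (M;N)[i][j] = OR_k (M[i][k] AND N[k][j])."""
    return [[int(any(M[i][k] and N[k][j] for k in range(n)))
             for j in range(n)] for i in range(n)]

MR, MS = to_matrix(R), to_matrix(S)
MRS = compose(MR, MS)
# R followed by S sends 0 -> 1 -> 0 and 1 -> 2 -> 1:
print(MRS)  # → [[1, 0, 0], [0, 1, 0], [0, 0, 0]]
```

The matrix is just the representation; whether it encodes a linear map or a relation depends entirely on how you interpret it.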
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate the limit $\lim_{n\to\infty} \left(\prod_{k=1}^{n}(1+\frac{k}{n})\right)^{\frac{1}{n}}$ I have to calculate this limit $$ \lim_{n\to\infty} \left(\prod_{k=1}^{n}\left(1+\frac{k}{n}\right)\right)^{\frac{1}{n}}$$ I first tried to calculate the limit of each factor inside the product, but that way I only got the answer 1. Any help?
As an alternative, we have that $$\left(\prod_{k=1}^{n}\left(1+\frac{k}{n}\right)\right)^{\frac{1}{n}}=e^{\frac{\sum_{k=1}^{n} \log\left(1+\frac{k}{n}\right) }{n}}=e^{\sum_{k=1}^{n} \left(\frac{k}{n^2}-\frac12\frac{k^2}{n^3}+\frac13\frac{k^3}{n^4}+\ldots\right) }\to \frac4e$$ indeed by Faulhaber's formula $$\sum_{k=1}^{n} \left(\frac{k}{n^2}-\frac12\frac{k^2}{n^3}+\frac13\frac{k^3}{n^4}+\ldots\right)=\sum_{k=1}^n \frac{(-1)^{(k+1)}}{k(k+1)}+O\left(\frac1n\right)\to \ln 4-1$$ indeed by alternating harmonic series $$\sum_{k=1}^n \frac{(-1)^{(k+1)}}{k(k+1)}=\sum_{k=1}^n \frac{(-1)^{(k+1)}}{k}-\sum_{k=1}^n \frac{(-1)^{(k+1)}}{k+1}\to\ln 2-(1-\ln 2)=2\ln 2-1$$
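A quick numeric sanity check of the value $4/e \approx 1.4715$ (floating-point only, so this illustrates rather than proves the limit; working with logarithms avoids overflow of the product):

```python
# Compare the n-th term of the sequence with the claimed limit 4/e.
import math

def term(n):
    # log of prod_{k=1}^{n} (1 + k/n), then take the 1/n-th power
    log_prod = sum(math.log(1 + k / n) for k in range(1, n + 1))
    return math.exp(log_prod / n)

print(term(10**5), 4 / math.e)  # the two values should agree to several digits
```

The exponent is a Riemann sum for $\int_0^1\log(1+x)\,dx = 2\ln 2 - 1$, so the error shrinks like $O(1/n)$.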
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Multinomial coefficient of a sequence in specific form I came across a question which asked me to find the coefficient of $x^{2n}$ in the following polynomial: $$\left(\sum\limits_{i=0}^{n-1} x^i \right)^{2n+1}$$ My approach was to isolate every term, i.e. if we choose $x^2$, $n$ times, and $1$, $n$ times again, we get a part of the coefficient of $x^{2n}$. Doing the same for $x^4$, $n/2$ times, and iterating this process over and over would take a lot of time, and the answer would be in the form of a sum which may then need to be reduced as well. However, the answer given was rather simple: $$\binom{2n+1}{2}-\binom{2n+1}{1}\binom{3n}{n}+\binom{4n}{2n}$$ My questions * *(if possible) How should I proceed with my method so that I reach the same result? *Is there any other method to precisely find the number of solutions of the equation $$\sum\limits_{i=1}^{2n+1} x_i =2n$$ where all $0\leq x_i\leq n-1$?
I would view it from the formal-power-series angle: $$\left(\sum_{k=0}^{n-1}x^k\right)^{2n+1}=\left(\frac{1-x^n}{1-x}\right)^{2n+1}=\underbrace{\left(\sum_{k=0}^{2n+1}(-1)^k\binom{2n+1}{k}x^{nk}\right)}_{=(1-x^n)^{2n+1}}\cdot\underbrace{\left(\sum_{k=0}^\infty\binom{2n+k}{k}x^k\right)}_{=(1-x)^{-2n-1}}$$ (both are instances of the binomial series), so that the coefficient of $x^{2n}$ is indeed $$\underbrace{(-1)^0\binom{2n+1}{0}\binom{2n+2n}{2n}}_{[x^{n\cdot 0+2n}]}+\underbrace{(-1)^1\binom{2n+1}{1}\binom{2n+n}{n}}_{[x^{n\cdot 1+n}]}+\underbrace{(-1)^2\binom{2n+1}{2}\binom{2n+0}{0}}_{[x^{n\cdot 2+0}]}.$$ Hence the final result is $$\binom{4n}{2n}-\binom{2n+1}1\binom{3n}n+\binom{2n+1}2\;$$
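For small $n$ the closed form can be verified by brute-force polynomial expansion (an illustration of the identity, not a proof):

```python
# Expand (1 + x + ... + x^{n-1})^{2n+1} directly and read off [x^{2n}],
# then compare against the three-binomial closed form. Needs n >= 2 so
# that the coefficient list is long enough to index at 2n.
from math import comb

def coeff_direct(n):
    poly = [1]          # start from the constant polynomial 1
    base = [1] * n      # coefficients of 1 + x + ... + x^{n-1}
    for _ in range(2 * n + 1):
        new = [0] * (len(poly) + len(base) - 1)
        for i, a in enumerate(poly):
            for j, b in enumerate(base):
                new[i + j] += a * b
        poly = new
    return poly[2 * n]

def coeff_formula(n):
    return comb(4 * n, 2 * n) - comb(2 * n + 1, 1) * comb(3 * n, n) \
        + comb(2 * n + 1, 2)

for n in range(2, 7):
    assert coeff_direct(n) == coeff_formula(n)
print("formula verified for n = 2..6")
```

For example $n=2$ gives $(1+x)^5$, whose $x^4$ coefficient is $5$, and indeed $\binom{8}{4}-\binom{5}{1}\binom{6}{2}+\binom{5}{2}=70-75+10=5$.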
{ "language": "en", "url": "https://math.stackexchange.com/questions/3777940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }