How to describe a three dimensional range. Consider the function $\mathbf{f}:\mathbf{R}^2\rightarrow\mathbf{R}^3$ given by $\mathbf{f}(\mathbf{x})=A\mathbf{x}$, where $A=\begin{bmatrix} 2 & -1 \\ 5 & 0 \\ -6 & 3 \end{bmatrix}$ and the vector $\mathbf{x}$ in $\mathbf{R}^2$ is written as the $2\times1$ column matrix $\mathbf{x}=\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$. Describe the range of $\mathbf{f}$. Clearly, $$ \mathbf{f}(\mathbf{x})=\begin{bmatrix} 2 & -1 \\ 5 & 0 \\ -6 & 3 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}=\begin{bmatrix} 2x_1 - x_2 \\ 5x_1 \\ -6x_1 + 3x_2 \end{bmatrix}. $$ However, I am uncertain as to how to "describe" the range of $\mathbf{f}$. Do I describe each component individually?
Hint: In such cases it is best to describe the range as the set $\{(x,y,z) \mid \text{conditions on } x, y, z\}$. In this particular case, can $x$, $y$, and $z$ take all values freely, or are there constraints? When you reduce the system to RREF you will probably see one constraint. Express it in terms of a free variable.
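For concreteness, here is one way the hint plays out in this example (a worked check, not part of the original hint): writing $(x,y,z)=(2x_1-x_2,\,5x_1,\,-6x_1+3x_2)$, one finds $$3x+z=3(2x_1-x_2)+(-6x_1+3x_2)=0,$$ and conversely any $(x,y,z)$ with $3x+z=0$ is attained by choosing $x_1=y/5$ and $x_2=2x_1-x$. So the range is the plane $\{(x,y,z)\in\mathbf{R}^3 \mid 3x+z=0\}$.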
{ "language": "en", "url": "https://math.stackexchange.com/questions/1444393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Associated Primes of Square free monomial ideals Let $I$ be a square free monomial ideal in $R=k[x_1,\dots,x_d]$. If $I$ is the edge ideal of a graph $G$, then the associated primes of $I$ are known. My question is: If $I$ is generated by square free monomials, are the associated prime ideals always of the form $(I:a)$ for some monomial $a$? Or if $P\in \mathrm{Ass}(R/I)$ then is it true that $P=(0:a)$ for some monomial $a$?
Yes, they are. The result holds for associated primes of graded modules (see Bruns and Herzog, Lemma 1.5.6(b)(ii)), and monomials are the homogeneous elements in some grading on $k[x_1,\dots,x_n]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1444478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Notation Concerning Improper Integral and the Absolute Value of an Integrand I have the following integral $$\int_{- \infty}^\infty e^{-|x|} dx$$ Since the preimages $x$ determine the images $e^{-|x|}$ differently for nonnegative and negative preimages, I believe the integral must be evaluated in the following manner $$\lim \limits_{c \rightarrow 0} \enspace \int_{- \infty}^{c} e^{-|x|} \enspace dx \quad + \quad \int_{0}^{\infty} e^{- |x|} \enspace dx$$ where $c$ obviously approaches $0$ without necessarily equalling $0$, so that $e^{- |x|} = e^{-(-x)}$. So, $$\lim \limits_{c \rightarrow 0} \enspace \int_{- \infty}^{c} e^{-|x|} \enspace dx \quad + \quad \int_{0}^{\infty} e^{- |x|} \enspace dx $$ $$= \lim \limits_{c \rightarrow 0} \enspace \int_{- \infty}^{c} e^{x} \enspace dx \quad + \quad \int_{0}^{\infty} e^{-x} \enspace dx $$ $$= 2$$ Also, I think it is wrong to write $$\int_{- \infty}^{0} e^{-|x|} \enspace dx \quad + \quad \int_{0}^{\infty} e^{- |x|} \enspace dx$$ since the first integral includes the value $0$, causing discrepancies with $e^{-|x|}$. This is more of a notation issue and I would like some insight. Thank you.
Notice, the given function $$f(x)=e^{-|x|}\implies f(-x)=e^{-|-x|}=e^{-|x|}=f(x)$$ hence $f(x)=e^{-|x|}$ is an even function. Now, using the property of definite integrals for even functions, $\color{blue}{\int_{-a}^{a}f(x)dx=2\int_{0}^{a}f(x)dx}$, we get $$\int_{-\infty}^{\infty}e^{-|x|}dx=2\int_{0}^{\infty}e^{-|x|}dx$$ Since $|x|=x\ \forall \ \ 0\leq x$, we have $$2\int_{0}^{\infty}e^{-|x|}dx=2\int_{0}^{\infty}e^{-x}dx=2[-e^{-x}]_{0}^{\infty}$$ $$=2[-e^{-\infty}+e^{0}]=2[0+1]=2[1]=\color{red}{2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1444566", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
If $a | b$, prove that $\gcd(a,b)$=$|a|$. If $a | b$, prove that $\gcd(a,b)$=$|a|$. I tried to work backwards. If $\gcd(a,b)=|a|$, then I need to find integers $x$ and $y$ such that $|a|=xa+yb$. So if I set $x=1$ and $y=0$ (if $|a|=a$) or if I set $x=-1$ and $y=0$ (if $|a|=-a$), is this good enough?
If $a\mid b$, then $|a|\mid b$ and $|a|\mid a$, so $|a|\le \gcd(a,b)$; conversely, $\gcd(a,b)\mid a$ gives $\gcd(a,b)\le |a|$. Hence $|a|=\gcd(a,b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1444692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Inverse Laplace transform with branch cut For the purpose of my research on persistent random walks I need to compute the inverse Laplace transform of $$ F(s)=\frac{\mathrm e^{-b\sqrt{s^2-1}}}{s^2-1}.$$ I looked it up in tables of integral transforms such as Erdélyi et al., Gradshteyn & Ryzhik, Prudnikov et al. with no success. So I have looked for a proper contour for the Bromwich integral and came up with the following. $\color{green}A$ is the integral we want to compute $$f(t)=\frac1{2\pi\mathrm i}\int_{\gamma-\mathrm i\infty}^{\gamma+\mathrm i\infty}\mathrm e^{st}\frac{\mathrm e^{-b\sqrt{s^2-1}}}{s^2-1}\mathrm ds,$$ $\color{blue}B$ and $\color{blue}K$ vanish when the radius of the large circle goes to infinity. The contributions of $\color{brown}F$ and those of $\color{brown}D$ and $\color{brown}H$ are given by simple poles and together yield a contribution of $\sinh t$. The contributions of $\color{red}C$ and $\color{red}J$ cancel each other. To compute the contributions of $\color{red}E$ and $\color{red}G$, I set $s=-\cos\theta+\mathrm i\varepsilon$ for $\color{red}E$ and $s=\cos\theta-\mathrm i \varepsilon$ for $\color{red}G$. I end up with the following integral $$\color{red}{E+G}\to\frac1{\pi\mathrm i}\;\text{vp.}\int_0^\pi \frac{\mathrm e^{\mathrm ib\sin\theta}}{\sin\theta}\sinh(t\cos\theta)\mathrm d\theta\tag{1}$$ that I didn't manage to reduce to a handier result. Can someone tell me if I'm on the right track and perhaps help me come up with a better expression for (1)?
I have a copy of G.E.Roberts and H.Kaufman "Table of Laplace Transforms", 1966. On page 252, Item 3.2.55 is a more complex example but might apply if $v = 0$. Inverse of $(s + (s^2 - a^2)^{1/2})^v \exp(-b\sqrt{s^2-a^2})/\sqrt{s^2 - a^2}$ is given as: $0$ for $0 < t < b$ and $a^v ((t-b)/(t+b))^{v/2} I_v(a\sqrt{t^2-b^2})$ for $t > b$ With the normal restrictions that $b>0$, $|\operatorname{Re}(v)| < 1$, $\operatorname{Re} s > |\operatorname{Re} a|$. $I_v$ is the modified Bessel function of order $v$. I assume that the expressions would be valid for $v = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1444786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Group of order $432$ is not simple How do we prove that a group $G$ of order $432=2^4\cdot 3^3$ is not simple? Here are my attempts: Let $n_3$ be the number of Sylow 3-subgroups. Then, by Sylow's Third Theorem, $n_3\mid 16$ and $n_3\equiv 1 \pmod 3$. Thus, $n_3=1, 4$ or 16. Next, let $Q_1$ and $Q_2$ be two distinct 3-subgroups such that $|Q_1\cap Q_2|$ is maximum. If $|Q_1\cap Q_2|=1$, then we can conclude $n_3\equiv 1\pmod {3^3}$, which will force $n_3=1$ and we are done. If $|Q_1\cap Q_2|=3$, similarly we can conclude $n_3\equiv 1\pmod {3^2}$, and we are done. The problem occurs when $|Q_1\cap Q_2|=3^2$, we can only conclude $n_3\equiv 1\pmod 3$ which is of no help. Thanks for help!
Let $G$ be a simple group of order $432$. Note that $|G|$ does not divide $8!$, thus no proper subgroup of $G$ has index at most $8$. Sylow's Theorem forces $n_3(G) = 16 \not\equiv 1 \bmod 9$, so that there exist $P_3, Q_3 \in \mathsf{Syl}_3(G)$ such that $|N_G(P_3 \cap Q_3)|$ is divisible by $3^3$ and another prime. Thus $|N_G(P_3 \cap Q_3)| \in \{ 3^3 \cdot 2, 3^3 \cdot 2^2, 3^3 \cdot 2^3, 3^3 \cdot 2^4 \}$. In each case either $P_3 \cap Q_3$ is normal in $G$ or its normalizer has sufficiently small index. Remark. For more details on the steps see here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1444988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Integral definition of derivative After seeing the integral definitions of div, grad, and curl, I'm left to wonder if we can define the regular derivative of a function $f: \Bbb R \to \Bbb R$ as the limit of an integral. For reference, here's the integral definition of grad $$\operatorname{grad} f := \lim_{V\to 0} \frac{\oint_S f(x)\,\hat{n}\,d\sigma}{V}$$ where $S$ is the boundary of the volume $V$ and $\hat{n}$ is the outward unit normal. In this same vein, is the following definition also correct? $$\frac{df}{dx}:=\lim_{b\to a} \frac{\int_a^bf(x)dx}{b-a}$$ where $f$ is continuous in $[a,b]\subseteq \Bbb R$. If so, how could it be proven?
No, it is not. By Lebesgue's differentiation theorem, for any $f\in L^1(\mathbb{R})$ and for almost every $x\in\mathbb{R}$ the following limit exists and equals $f(x)$: $$ \lim_{r\to 0}\frac{1}{2r}\int_{|t-x|<r}f(t)\,dt.$$ If $f$ is a continuous function this holds at every point, since by continuity $$ \lim_{b\to a}\frac{1}{b-a}\int_{a}^{b}f(t)\,dt = f(a), $$ i.e. every point is a Lebesgue point of a continuous function. On the other hand, it is also true that for any entire function $f(z)$ we have: $$ f'(z) = \frac{1}{2\pi i}\oint_{|w-z|=r}\frac{f(w)}{(w-z)^2}\,dw, $$ that is Cauchy's integral formula. Moreover, it is very common to define (pseudo)differential operators through particular integrals. See, for instance, the Wikipedia page about fractional calculus.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1445074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Do these metrics determine the same Topology. Let $C([0,1])$ denote the set of all continuous real-valued functions on the interval $[0,1]$. Given elements $f,g ∈ C([0,1])$, define $d_{\infty}(f,g) = \displaystyle{\text{sup}_{x\in [0,1]}}| f(x)−g(x)|$ and $\displaystyle{d_1(f,g) = \int_0^1 | f(x)−g(x)|dx}.$ (a) Show that $d_{\infty}$ is a metric on $C([0,1])$. (b) Show that $d_1$ is a metric on $C([0,1])$. (c) Do $d_{\infty}$ and $d_1$ define the same topology on $C([0,1])$? Part a and part b were fairly easy. For the last part, I know I have to contain one basis element of one metric inside two basis elements of the other metric. My question is, how do you contain a basis element of one metric in the other?
Hint: For the last part, consider the sequence of functions $f_n$ that are zero everywhere except for a triangular bump going up to $1$ and then back down to zero between the endpoints $1/2^{n+1}$ and $1/2^{n}$. Is this sequence Cauchy in terms of your metric $d_1$, with a limit? Is the sequence Cauchy according to $d_{\infty}$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1445182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluating $\int \frac{\sin x}{\sin 4x}\mathrm dx$ $$\int \frac{\sin x}{\sin 4x}\mathrm dx$$ I have tried to do this by expanding $\sin 4x$ but it was worthless.
Using $\displaystyle \bullet\; \sin 2x = 2\sin x\cdot \cos x$ and $\; \bullet\; \cos 2x = 1-2\sin^2 x$. Let $$\displaystyle I = \int\frac{\sin x}{2\sin 2x\cdot \cos 2x}dx = \int\frac{\sin x}{4\sin x\cos x\cdot \cos 2x}dx = \frac{1}{4}\int\frac{1}{\cos x\cdot \cos 2x}dx$$ Now $$\displaystyle I = \frac{1}{4}\int\frac{\cos x}{\cos^2 x\cdot (1-2\sin^2x)}dx = \frac{1}{4}\int\frac{\cos x}{(1-\sin^2x)\cdot (1-2\sin^2x)}dx$$ Now put $\sin x= t$; then $\cos x\,dx = dt$, so we get $$\displaystyle I = \frac{1}{4}\int\frac{1}{(1-t^2)(1-2t^2)}dt = \frac{1}{4}\int\frac{1}{(2t^2-1)(t^2-1)}dt$$ So we get $$\displaystyle I = -\frac{1}{4}\int\left[\frac{2}{2t^2-1}-\frac{1}{t^2-1}\right]dt = -\frac{1}{4}\int\frac{1}{t^2-\left(\frac{1}{\sqrt{2}}\right)^2}dt+\frac{1}{4}\int\frac{1}{t^2-1}dt$$ So we get $$\displaystyle I = -\frac{1}{4}\cdot \frac{1}{\sqrt{2}}\ln\left|\frac{\sqrt{2}t-1}{\sqrt{2}t+1}\right|+\frac{1}{4}\cdot \frac{1}{2}\ln\left|\frac{t-1}{t+1}\right|+\mathcal{C}$$ So we get $$\displaystyle I = -\frac{1}{4\sqrt{2}}\ln\left|\frac{\sqrt{2}\sin x-1}{\sqrt{2}\sin x+1}\right|+\frac{1}{8}\ln\left|\frac{\sin x-1}{\sin x+1}\right|+\mathcal{C}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1445491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
solve the inequalities and provide the solutions in brackets I have two inequalities which I solved, but when I compare the results with websites like WolframAlpha I get different results. $|2-x|+|2+x| \leq 10$ I continue with $2-x+2-x\leq10$ OR $2-x+2-x\geq-10$ and I get the results $x\geq3$ and $x\leq7 $ $|6-4x|\geq|x-2|$ $6-4x \geq x-2$ OR $6-4x \leq -x+2 $ and I get $x\leq8/5$ and $x\geq4/3$ Are my results correct or not? Thank you
You have two critical points (points that evaluate to zero in the absolute values). These are $x=2$ and $x=-2$. Now you have three cases which you have to look at: $x\in(-\infty,-2]$, $x\in (-2,2]$ and $x\in (2,\infty)$ [Note: these intervals are important because the expressions in the absolute value functions change sign]. For your second problem the critical points are $6-4x=0 \to x=\frac{6}{4}$ and $x-2=0 \to x = 2$. What cases do we have to check now? EDIT: Steps for solving such inequalities:

1. Write down all terms with absolute values: here $|2-x|$ and $|2+x|$.

2. Find the values of $x$ for which these absolute values evaluate to $0$: here $x=2$ and $x=-2$. These values are called the critical values.

3. Split the real axis at the critical values ($-2, 2$): here (a) $x\in(-\infty,-2]$, (b) $x\in (-2,2]$ and (c) $x\in (2,\infty)$.

4. Look at all cases (a), (b) and (c) separately.

(a): The first expression inside the absolute value is positive and the second is negative: $(2-x)-(2+x) \leq 10$. Solving this gives $-2x \leq 10$, or $x\geq -5$. Compare with (a) $x\in(-\infty,-2]$ to conclude that only the values $x \in [-5,-2]$ are valid solutions for case (a).

(b): Both expressions are positive: $(2-x)+(2+x) \leq 10$. Solving this leads to $4\leq 10$, which is valid for all $x$. Compare with (b) $x\in (-2,2]$ to conclude that all values $x\in (-2,2]$ are valid solutions for case (b).

(c): The first expression is negative and the second is positive: $-(2-x)+(2+x)\leq10$, or $2x\leq 10$, or $x\leq5$. Compare with (c) $x\in (2,\infty)$ to conclude that only $x\in (2,5]$ are valid solutions.

Now combine all the cases (a), (b) and (c) to conclude that all $x\in [-5,5]$ are solutions to the inequality. I leave the rest to you (see the worked sketch below).
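As a worked sketch of the same method on the second inequality (an addition, since the answer leaves it as an exercise): the critical values are $x=\frac32$ and $x=2$. For $x\le\frac32$: $6-4x\ge 2-x \iff 4\ge 3x \iff x\le\frac43$, giving $x\le\frac43$. For $\frac32<x\le 2$: $4x-6\ge 2-x \iff x\ge\frac85$, giving $\frac85\le x\le 2$. For $x>2$: $4x-6\ge x-2 \iff x\ge\frac43$, which always holds here, giving $x>2$. Combining the cases, the solution set is $\left(-\infty,\frac43\right]\cup\left[\frac85,\infty\right)$.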
{ "language": "en", "url": "https://math.stackexchange.com/questions/1445572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Interesting $\sum_{i=0}^{m-1}(-1)^{i}(m-i)^{2} = \sum_{i=1}^{m}i$ I found that for $m \in \mathbb N$ $$\sum_{i=0}^{m-1}(-1)^{i}(m-i)^{2} = \sum_{i=1}^{m}i.$$ I found it after doing an exercise. For example: $$5^{2}-4^{2}+3^{2}-2^{2}+1^{2} = 1 + 2 + 3 + 4 + 5 = 15.$$ This is the first time I have seen this formula. I think it is a very nice formula and somehow "aesthetically symmetric". Have you ever encountered something like this?
The proof is not that hard. Assume first that $m$ is odd. Then $$\begin{align}\sum_{i=0}^{m-1} (-1)^i (m-i)^2 &= \sum_{i=1}^m (-1)^{i+1} i^2 \\ &= 1^2-2^2+3^2-4^2 +\cdots+m^2\\&= 1^2+2^2+3^2+4^2 +\cdots+m^2-2 (2^2+4^2+\cdots+(m-1)^2)\\&=\frac16 m (m+1)(2 m+1) - 2 \cdot 2^2 \cdot \frac16 \left (\frac{m-1}{2} \right )\left (\frac{m+1}{2} \right ) m\\ &= \frac16 m (m+1) [(2 m+1) - (2 m-2)] \\ &= \frac12 m (m+1)\end{align} $$ So, your observation holds. When $m$ is even, the sign is negative but the result is the same.
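A quick numerical check of the even case (not in the original answer): for $m=4$, $$\sum_{i=0}^{3}(-1)^i(4-i)^2=16-9+4-1=10=\frac{4\cdot 5}{2},$$ in agreement with $\frac12 m(m+1)$.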
{ "language": "en", "url": "https://math.stackexchange.com/questions/1445724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimal primes of $\Bbb C[w,x,y,z]/(w^2x,xy,yz^2)$ and their annihilators I am examining the minimal primes of $R=\Bbb C[w,x,y,z]/(w^2x,xy,yz^2)$, intersections between them, and the annihilators of each minimal prime. If a minimal prime does not contain x, then it must contain both w and y. If it contains x, then it must contain one of y or z. So I think a complete set of minimal primes is given by $(w,y), (x,y)$ and $(x,z)$. (Good so far I hope?) The problem is that I second-guess my judgement on conclusions about the annihilators and intersections. Is it safe to conclude something about the generators of the intersections in terms of the generators of the ideals? The only thing I know for sure is that $(wx, yz)$ is nilpotent and hence contained in all three minimal primes. I can also see $(wx)\subseteq ann(w,y)$, $(w^2z^2)\subseteq ann(x,y)$ and $(yz)\subseteq ann(x,z)$, but I lose my train of thought trying to verify that they are equalities. Any tips or theorems for clarifying how to think about these items are appreciated.
$$\begin{align}(w^2x,xy,yz^2)&=(\underline{w}^2,xy,yz^2)\cap(\underline{x},xy,yz^2)\\&=(w^2,\underline{x},yz^2)\cap(w^2,\underline{y},yz^2)\cap(x,yz^2)\\&=(w^2,x,y)\cap(w^2,x,z^2)\cap(w^2,y)\cap(x,y)\cap(x,z^2)\\&=(w^2,y)\cap(x,z^2)\cap(x,y)\end{align}$$ is a reduced primary decomposition, so the associated (minimal) prime ideals are $(y,w),(x,z),(x,y)$. For intersections, let's consider $(y,w)\cap(x,z)=(xw,zw,xy,yz)$. In $R$ we have $xy=0$, so one can get rid of this generator. $Ann_R(x,y)=(z^2w^2)$, $Ann_R(x,z)=(yz)$, and $Ann_R(y,w)=(xw)$.
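As a sanity check of one of the stated annihilators (an addition to the answer): in $R$ we have $z^2w^2\cdot x = z^2\,(w^2x)=0$ and $z^2w^2\cdot y = w^2\,(yz^2)=0$, which confirms the containment $(z^2w^2)\subseteq Ann_R(x,y)$; the other two annihilators can be checked the same way.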
{ "language": "en", "url": "https://math.stackexchange.com/questions/1445986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
An example of a representation which is simultaneously of real and quaternionic type Can anyone provide an example of a (complex) representation which is simultaneously of real type (with a structure map $j^2 = 1$) and quaternionic type (with a structure map $j^2 = -1$)? As this cannot be true for irreducible representations, we know it should be reducible. Reference: Section 6, Chapter II, Representations of Compact Lie Groups by Bröcker and tom Dieck.
Take any rep $V$ over $\Bbb R$ and consider $\Bbb H\otimes_{\Bbb R}V$ as a complex vector space. Say $V$ is a representation of $G$ over $\Bbb R$. Assume we've picked a basis. Then $G$ acts by real matrices, and if we extend scalars (by tensoring with a larger ring of scalars) $G$ still acts by those real matrices. Since $\Bbb C\subset\Bbb H$ we can interpret $\Bbb H\otimes_{\Bbb R}V$ as a complex vector space. The map $j$ comes from the scalar action of ${\bf j}\in\Bbb H$. Since $\Bbb H=\Bbb C\oplus\Bbb C{\bf j}$ this space is $\Bbb C\otimes_{\Bbb R}(V\oplus{\bf j}V)$ so it's of real type.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Varying Order of Covariant Differentiation Does the order of covariant differentiation matter? Will $E_{ij,klt} = E_{ij,ltk}$? Does it matter if the tensor $E_{ij}$ is continuously differentiable?
In general no. For example, on a Riemannian manifold where the Christoffel symbols and their derivatives do not vanish there is a loss of commutativity: $$\nabla_a \nabla_bT^c \ne \nabla_b \nabla_aT^c$$ in fact: $$\nabla_a \nabla_bT^c - \nabla_b \nabla_aT^c = R^{c}_{dab}T^d$$ where $R$ is called the Riemann–Christoffel tensor. It has a beautiful story which you can find by searching for it on Wikipedia, along with a more geometric picture as a preamble to the concept of curvature. In a Euclidean space this is not the case, as the mixed partial derivatives are equal and the components of $R$ vanish.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Normal distribution with P(x=a) and P(x≥a) Given: height of $1000$ students normally distributed with $\mu=174.5\,\mathrm{cm}$, $\sigma=6.9\,\mathrm{cm}$ Find: a. $P(x<160\,\mathrm{cm})$ b. $P(x=175\,\mathrm{cm})$ c. $P(x\geq188\,\mathrm{cm})$ For a., I used the formula $z=(x-\mu)/\sigma$, which gives $-2.10$. Looking at the $z$ table, I derived $0.0179$. However, for points b. and c. I cannot find any reference on how to solve "equal to" and "greater than or equal to" probabilities related to the normal distribution. We only discussed $P(z < a)$, $P(z > a)$ and $P(a < z < b)$.
One of the properties of a continuous probability distribution like the normal distribution is that the probability of any individual outcome is zero. This is true even if it is possible for that outcome to happen. For example, it is possible that the height of a student is exactly 175 cm, but if you model height with a normal distribution, then the probability of that happening is zero. Using this knowledge, you will find that $P(x\ge a)=P(x=a)+P(x>a)=0+P(x>a)=P(x>a)$. Also note that if you were being asked for the probability, then your answer should be a number from $0$ to $1$. (So your answer of $22$ is not a probability.)
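For instance, part c. can then be finished with the same $z$-table technique as part a. (a sketch, assuming the standard table value $\Phi(1.96)\approx 0.975$): $$z=\frac{188-174.5}{6.9}\approx 1.96,\qquad P(x\geq 188\,\mathrm{cm})=P(x>188\,\mathrm{cm})\approx 1-0.975=0.025.$$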
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that for each number $n \in \mathbb N$, the sum of the numbers $n, n + 1, n + 2, \dots, 3n - 2$ is equal to the square of a natural number. I've got a homework problem from maths: Prove that for each number $n \in \mathbb N$, the sum of the numbers $n, n + 1, n + 2, \dots, 3n - 2$ is equal to the square of a natural number. I don't actually get the task. I would assume that the sequence would continue like this: $n, n + 1, n + 2, n + 3, \dots, n + k$, but not $3n - 2$. Isn't there something wrong with the book? The key of the book says only: $$ S_n = (2n-2)^2 $$ where $S_n$ is the sum of all the numbers. Thank you for your reply, Dominik EDIT Sorry, I made a mistake. It really is $$ S_n = (2n-1)^2 $$ I am really sorry.
By the trick that we know from Gauss we get that $$\sum_{i = 1}^s i = \frac{s(s+1)}{2}$$ If we plug in $s = 3n - 2$ and subtract the sum up to $s = n-1$, we see that $$S_n = \sum_{i = n}^{3n - 2} i = \sum_{i = 1}^{3n - 2} i - \sum_{i = 1}^{n-1} i$$ so $$S_n = \frac{(3n - 2)(3n - 1)}{2} - \frac{(n-1)n}{2}$$ which gives us $$S_n = 4n^2 - 4n + 1 = (2n-1)^2$$
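A quick check for $n=3$ (not in the original answer): $$S_3 = 3+4+5+6+7 = 25 = 5^2 = (2\cdot 3-1)^2,$$ as the formula predicts.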
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
A metric space is complete when every closed and bounded subset of it is compact $X$ is a metric space such that every closed and bounded subset of $X$ is compact. Then $X$ is complete. Let $(x_{n})_{n}$ be a Cauchy sequence of $X$. Now every Cauchy sequence in a metric space is bounded. If $(x_{n})$ is convergent with limit, say, $x_{0}$, then $\{x_{n} : n \}\cup \{x_{0}\}$ is closed; otherwise the set $\{x_{n} : n \}$ is a closed set. If the sequence were convergent there would be nothing to do. So when the sequence's convergence is not known, being closed and bounded, $\{x_{n} : n \}$ is a compact set. In metric spaces, compactness is equivalent to sequential compactness. Hence, the sequence $(x_{n})_n$ has a convergent subsequence $(x_{n_{k}})_k$ converging to, say, $x_{0}$. Being Cauchy, $(x_{n})$ also converges to $x_{0}$. Thus we see that every Cauchy sequence in $X$ is convergent. Hence $X$ is complete. Is my proof correct? For I have doubts regarding the step where I assumed that the range of the sequence, i.e. $\{x_{n} : n \}$, is closed. This would be true if I were in $\mathbb R$, but can this be assured for an arbitrary metric space? Or does that step require modification in some other respect?
Here is a modified version of the proof that I believe fills all the holes: Consider a Cauchy sequence $(x_{k} : k)$ in $X$. Let the set of limit points of this sequence be denoted $L$. I claim that $L$ is nonempty and in fact contains exactly one point. Then clearly the only point in $L$ is the limit, so the sequence converges. First note that $(x_{k}:k)$ is bounded because it is Cauchy. As such, there must exist an open ball $B$ containing the sequence entirely. The closure $\mathrm{cl}(B)$ is closed and bounded, hence compact by assumption, and therefore by sequential compactness $(x_{k} : k)$ must have at least one limit point. In other words, $L$ is nonempty. Next, why can $L$ not contain more than one point? Assume that it contains distinct points $y$ and $z$, and let $\epsilon = d(y,z)$. Since they are limit points, for each $\delta > 0$ there must be infinitely many points of $\{ x_{k} : k \}$ that are within distance $\delta$ of $y$ and of $z$. However, this is impossible if we choose $\delta < \epsilon/4$. That is, if $x_{y}$ is a point close to $y$ and $x_{z}$ is a point close to $z$, then $d(x_{y}, x_{z}) \geq \epsilon - 2\delta > \epsilon / 2$ by the triangle inequality. That is, no matter how far along in the sequence we look, we can find points at distance more than $\epsilon/2$ from each other. This violates the Cauchy assumption. Therefore, $L$ has exactly one point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Intuitive explanation of binomial coefficient formula Regarding the formula for binomial coefficients: $$\binom{n}{k}=\frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}$$ the professor described the formula as first choosing the $k$ objects from a group of $n$, where order matters, and then dividing by $k!$ to adjust for overcounting. I understand the reasoning behind the numerator but don't understand why dividing by $k!$ is what's needed to adjust for overcounting. Can someone please help me understand how one arrives at $k!$? Thanks.
Suppose you had 5 books on a shelf and wanted to pick 3. If order mattered (permutations), there would be $5 \cdot 4 \cdot 3$ ways of doing so. If order did not matter (combinations), you would discount the extra permutations. That is, "ABC" would be the single representative of all of its $3!$ permutations. The total number of ways of selecting the books is $\frac{5\cdot4\cdot3}{3!}$. For the general case, replace the 5 with $n$ and 3 with $k$.
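To make the overcounting explicit (an addition to the answer): the single selection $\{A,B,C\}$ is counted once for each of its $3!=6$ orderings $$ABC,\ ACB,\ BAC,\ BCA,\ CAB,\ CBA,$$ so dividing by $k!$ removes exactly these duplicates, giving $\binom{5}{3}=\frac{5\cdot 4\cdot 3}{3!}=10$.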
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
choosing N objects from a set of M>N maximizing overall ratio value/weight I have a set of M objects, each with a certain value and weight. From this set I want to take out N objects ($N<M$ of course) and maximize the ratio: total value of the N objects / total weight of the N objects. It's a kind of optimal selection problem. My intuition tells me to choose the N objects with the highest value/weight ratio, that is, sort the M objects according to this ratio in descending order and take the first N objects in the sorting sequence. How can I demonstrate that doing this will always yield the maximum total ratio? My attempt: if the optimal ratio is $\frac{a}{b}$ for a certain selection of $K$ objects and I have to add a $(K+1)$-th object, choosing between one with ratio $\frac{x}{y}$ and another with ratio $\frac{w}{z}$, given that $$\frac{x}{y} > \frac{w}{z}$$ is the former always an optimal choice? I thought the problem could algebraically be reduced to this: given $a,b$ positive constants, and $x,y,w,z$ all positive so that $$\frac{x}{y} > \frac{w}{z}$$ can it ever be that $$\frac{a+x}{b+y} < \frac{a+w}{b+z}$$ I am not able to answer this point, please help!
Suppose $M = 2$ and you have one item with very large weight $w$ and very large value $v$, such that the ratio $v/w$ is maximal by a large amount for this item. Then if you are forced to take another item, usually your best bet will be to choose the item with a very small weight, so that the ratio is relatively unchanged. This is true even if the value/weight ratio for the low-weight item is not as good as the value/weight ratio for a much higher weight item.
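A concrete instance of this (an illustration with made-up numbers): take $M=3$, $N=2$ and items with (value, weight) pairs $(100,100)$, $(9,10)$, $(0.5,1)$, whose ratios are $1$, $0.9$ and $0.5$. Picking the two best ratios gives $\frac{100+9}{100+10}\approx 0.991$, while picking the first and third items gives $\frac{100+0.5}{100+1}\approx 0.995$, so sorting by ratio and taking the top $N$ is not optimal.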
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is $\left( {{\partial ^2 f}\over{\partial t\,\partial s}}\right)\bigg|_{t = s = 0} = ab - ba?$ This is a followup to my previous question here. Let $a, b \in M_n(\mathbb{R})$. Consider the function$$f: \mathbb{R}^2 \to M_n(\mathbb{R}), \text{ }(t, s) \mapsto e^{t \cdot a} e^{s \cdot b} e^{-t \cdot a} e^{-s \cdot b}.$$In my previous question, I asked for a way to see that such function is infinitely differentiable. Can anyone supply a rigorous proof of the fact that$$\left( {{\partial ^2 f}\over{\partial t\,\partial s}}\right)\bigg|_{t = s = 0} = ab - ba?$$All proofs I've tried coming up with I feel lack rigour...
First, note that we do have the usual product rule $(fg)' = f'g + fg'$ for taking (partial) derivatives when products don't commute. I won't go through the details here, but the exact same proof that works for functions $\mathbb{R} \to \mathbb{R}$ goes through. Extending to multi-term products, we get: \begin{align*} (fghr)' &= f'(ghr) + f(ghr)' \\ &= f'ghr + fg'hr + fg(hr)' \\ &= f'ghr + fg'hr + fgh'r + fghr'. \end{align*} Furthermore, the rule $(e^{ta})' = ae^{ta}$ applies: the usual proof using a series expansion works. You can show either of these facts yourself, using proofs found in any advanced calculus book as a template if you can't remember how they go. Put our rules (explicitly mentioned above) together to find \begin{align*} \frac{\partial f}{\partial s} &= e^{ta}be^{sb}e^{-ta}e^{-sb} \\ &- e^{ta}e^{sb}e^{-ta}be^{-sb}. \end{align*} Differentiating again (and being careful about signs), we get: \begin{align*} \frac{\partial^2 f}{\partial t \partial s} &= ae^{ta}be^{sb}e^{-ta}e^{-sb} \\ &- e^{ta}be^{sb}ae^{-ta}e^{-sb} \\ &- ae^{ta}e^{sb}e^{-ta}be^{-sb} \\ &+ e^{ta}e^{sb}ae^{-ta}be^{-sb}. \end{align*} When we substitute $s,t = 0$, all the exponentials become 1, so we get $$ ab - ba - ab + ab, $$ which simplifies to $$ ab - ba $$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1446915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Function to describe teardrop shape If I fill a plastic ziploc-shaped bag with water, the cross-section profile should be sort of teardrop shaped (assuming we ignore the edge effects of the bag being sealed on the sides as well as the top and bottom). The bag should "sag"/get wider until the center of gravity is as low as possible. Initially, getting wider will let more water towards the bottom, but eventually this is offset by the bottom of the bag moving up (because the sides are fixed length). Is there a common function that describes the shape the cross-section of the bag makes? I would guess the bottom is a parabola, since gravity likes to make parabolas. Then I would guess the top is linear because it's under tension. But I have no idea what the transition region might look like and whether you could put those two together into a nice function.
This involves the physics of equilibrium, which yields an ODE that determines the shape. The assumptions are: the bag is full, does not stretch, and is long compared to its height; the force per unit length $N$ (a.k.a. surface tension) carries the gravity load of the water column (of density $\gamma$) above it; the hydraulic pressure is proportional to the vertical depth $y$ only; and $\kappa$ denotes the membrane curvature. Equilibrium requires $$ N \kappa = p = \gamma \, y, \; \; \; \frac {y''}{(1+y'^2)^{3/2}} = k y $$ Integration leads to an $Elastica$ shape. At the support point there is no depth of water weighing down on it, so the bag is straight there, as you expected. Deeper down the shape is described by elliptic integrals. At the deepest symmetry point $y'$ is zero, so locally it is a parabola. The ODE gives insight as to the shape: where the water level stops, there and above that the bag is straight. In his Mechanics of Materials textbook, Den Hartog mentions the shapes of heavy mercury drops once adopted for the construction of large water tanks. EDIT 1: To an extent, the static equilibrium of a helium-filled lighter-than-air balloon floating in air is quite similar. Accordingly, such balloons have the upside-down shape of a teardrop. EDIT 2: The balloon takes on a meridian shape in which the curvature changes along the axis of symmetry per $\kappa = 6 ( z -0.7)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Properties of $f(x)=\ln(1+x^2)+x+2$ vs $g(x)=\cosh x+\sinh x$ This is from an MCQ contest. Consider the two functions $f(x)=\ln(1+x^2)+x+2$ and $g(x)=\cosh x+\sinh x$. Find the real number $c$ such that $(f^{-1})'(2)=g(c)$:

1] $c=-1$

2] $c=0$

3] $c=1$

4] None of the above statements is correct

My thoughts: note that $$\left(f^{-1}\right)^\prime (y)=\frac{1}{f'\left(f^{-1}(y)\right)}$$ and $$\cosh x+\sinh x=e^x,$$ but first we should check whether $f$ is invertible or not. Indeed, $f$ is strictly increasing and continuous on $\mathbb{R}$ since $$f'(x)=\dfrac{(x+1)^{2}}{x^2+1}\geq 0,$$ so $f$ is invertible and thus has an inverse function: $$f(x)=y\iff x=f^{-1}(y) \\ \ln(1+x^2)+x+2=y $$ I'm stuck here. Any help will be appreciated.
You were on the right track. Note that $$\frac{df^{-1}(x)}{dx}=\left.\frac{y^2+1}{(y+1)^2}\right|_{y=f^{-1}(x)}$$ Now, when $f^{-1}(2)=y$, $\log (1+y^2)+y=0\implies y=0$. So, we have $f^{-1}(2)=0$ and this means that $$\left.\frac{df^{-1}(x)}{dx}\right|_{x=2}=\left.\frac{y^2+1}{(y+1)^2}\right|_{y=f^{-1}(2)=0}=1$$ Since $g(x)=e^x$, then if $g(c)=1$, $c=0$. The answer is, therefore $$\bbox[5px,border:2px solid #C0A000]{c=0}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Solving $\lim _{x\to 1}\left(\frac{1-\sqrt[3]{4-3x}}{x-1}\right)$ $$\lim _{x\to 1}\left(\frac{1-\sqrt[3]{4-3x}}{x-1}\right)$$ So $$\frac{1-\sqrt[3]{4-3x}}{x-1} \cdot \frac{1+\sqrt[3]{4-3x}}{1+\sqrt[3]{4-3x}}$$ Then $$\frac{1-(4-3x)}{(x-1)(1+\sqrt[3]{4-3x})}$$ That's $$\frac{3\cdot \color{red}{(x-1)}}{\color{red}{(x-1)}(1+\sqrt[3]{4-3x})}$$ Finally $$\frac{3}{(1+\sqrt[3]{4-3x})}$$ But this evaluates to $$\frac{3}{2}$$ When the answer should be $$1$$ Where did I fail?
I think you overlooked this multiplication $(1+(4-3x)^{1/3})(1-(4-3x)^{1/3})$ which equals $1-(4-3x)^{2/3}$ not $1-(4-3x)$
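For completeness, a sketch of the repaired computation (an addition), using $1-u^3=(1-u)(1+u+u^2)$ with $u=\sqrt[3]{4-3x}$: $$\frac{1-\sqrt[3]{4-3x}}{x-1}=\frac{1-(4-3x)}{(x-1)\left(1+\sqrt[3]{4-3x}+\sqrt[3]{(4-3x)^2}\right)}=\frac{3}{1+\sqrt[3]{4-3x}+\sqrt[3]{(4-3x)^2}}\xrightarrow[x\to 1]{}\frac{3}{3}=1.$$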
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 3 }
A high-level reason that $u \cdot (v \times w) = (u \times v) \cdot w$? I can do the algebra to show that for $u, v, w \in \mathbb{R}^3$, this identity is true: $$u \cdot (v \times w) = (u \times v) \cdot w$$ But is there a more high-level reason? I didn't expect the cross and dot product to be connected in this surprising way.
Yes. The volume of the parallelepiped spanned by $u$, $v$, and $w$ is equal to the scalar triple product. Consider the geometric definitions of the cross product and inner product to see this.
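One way to make this precise (an addition to the answer): both sides equal the determinant whose rows are $u$, $v$, $w$, and a determinant is invariant under cyclic row permutations, so $$u\cdot(v\times w)=\det\begin{bmatrix}u\\v\\w\end{bmatrix}=\det\begin{bmatrix}w\\u\\v\end{bmatrix}=w\cdot(u\times v)=(u\times v)\cdot w.$$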
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How to prove that n! has a higher order of growth than n^(log n)? I am aware that n^n has a higher order of growth than n!, but how about n^(log n)? Is there a way to get an alternative form of n^(log n) such that when taking the lim as n goes to infinity of [alternative form of n^(log n)] / (n!) it would equal 0?
Using a multiplicative variant of Gauss's trick we have: $$ (n!)^2 = (1 \cdot n) (2 \cdot (n-1)) (3 \cdot (n-2)) \cdots ((n-2) \cdot 3) ((n-1) \cdot 2) (n \cdot 1) \ge n^n $$ So $$ \dfrac{n^{\log n}}{n!} \le \dfrac{n^{\log n}}{n^{n/2}} \le \dfrac{n^{n/4}}{n^{n/2}} = \dfrac{1}{n^{n/4}} \to 0 $$ because $\log n \le n/4$ for $n$ large enough ($n\ge 9$ actually).
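A small sanity check of the squared-factorial inequality (not in the original answer): for $n=4$, $(4!)^2=576\geq 256=4^4$; in general each paired factor satisfies $k(n+1-k)\geq n$, since $k(n+1-k)-n=(k-1)(n-k)\geq 0$ for $1\leq k\leq n$.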
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is this a proper algebraic proof that $Re(z)\leq\left|z\right|$? Given that $z=(x+iy)\in \mathbb{C}$, I'm supposed to show that $Re(z)\leq\left|z\right|$. This is my attempt: Note that $\sqrt{Re(z)} = \sqrt{x} \leq \sqrt{x^2} \leq \sqrt{x^2 + y^2}$. Since $\sqrt{x^2} = x$, this shows that $x = Re(z) \leq \sqrt{x^2 + y^2} = \left|z\right|$. Is this proof sound mathematically? How could I make it better? Is there another, more ingenious way you would go about it? Updated proof (thanks to @Nameless): Since $Re(z) = x \leq x^2 \leq x^2 + y^2$, then we see that $\sqrt{x^2}\leq\sqrt{x^2+y^2}$. Since $\sqrt{x^2} = x$, it follows that $x = Re(z) \leq \sqrt{x^2 + y^2} = \left|z\right|$, which is what was to be shown. Is this version better? Maybe I'm wording it incorrectly? Thanks.
I think it is easier to write the flow as $$|z|=\sqrt{x^2+y^2}\ge \sqrt{x^2}=|x|=\left|\text{Re}(z)\right|\ge \text{Re}(z)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving (for integers) that primality implies irreducibility Let $Z = \{n \in \mathbb{Z} \; | \; |n| > 1\}$. Let $p\in Z$. $p$ is irreducible if for some $a,b \in \mathbb{Z}$: $$p = ab \implies |a| = 1 \vee |b| = 1$$ $p$ is prime if for $a,b \in \mathbb{Z}$: $$p \mid ab \implies p\mid a \vee p\mid b$$ I would like to prove that $\textrm{prime}(p) \implies \textrm{irreducible}(p)$. Let $p$ be some prime number such that for $a,b \in \mathbb{Z}$, $p = ab$. Then, since every number is divisible by itself, we must have that $p \mid ab$. By definition of $|$, there is some $u \in \mathbb{Z}$ such that $up = ab$. Since $p$ is prime, we know that $p\mid a$ (without loss of generality), so there is some $v \in \mathbb{Z}$ such that $vp = a$. Then, $up = vpb$, so $bv = u$. Well, great, but I'd like to show that $b = \pm 1$. I took a wrong turn somewhere, but where?
You started with $p=ab$, and then weakened this hypothesis to $p|ab$. In fact, with the original, stronger, hypothesis, you have $u=1$, so $b$ is a unit since it divides $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Addition in $R_S$ is well defined Let $R$ be a commutative ring with $1 \neq 0$ and suppose $S$ is a multiplicatively closed subset of $R \setminus \{0\}$ containing no zero divisors. We have the relation $\sim$ defined on $R \times S$ via $(a, b) \sim (c, d) \Leftrightarrow ad = bc$. Let $R_S$ denote the set of equivalence classes $\frac{a}{b}$ of pairs $(a,b)$. I want to show that addition in $R_S$, i.e., $\frac{a}{b} + \frac{c}{d}= \frac{ad+bc}{bd}$, is well-defined. Not sure how to approach this. Any help would be appreciated. Thanks!
There is a standard method: pick $(a^\prime, b^\prime)$ with $(a,b) \sim (a^\prime, b^\prime)$ (hence $ab^\prime = a^\prime b$) and show $\frac{ad+bc}{bd} = \frac{a^\prime d+b^\prime c}{b^\prime d}$. This is really straightforward. Since the addition is symmetric, there is no need to also allow another representative for the other summand; it would just make the calculation more inconvenient.
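Spelled out, the straightforward calculation looks like this (an addition): from $ab'=a'b$, $$(ad+bc)(b'd)=ab'd^2+bb'cd=a'bd^2+bb'cd=(a'd+b'c)(bd),$$ so $(ad+bc,\,bd)\sim(a'd+b'c,\,b'd)$, as required.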
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Identity is the only real matrix which is orthogonal, symmetric and positive definite Show that the identity is the only real matrix which is orthogonal, symmetric and positive definite. All I could get using the above information was that $A^2=I$; hence $A$ is its own inverse. Using the fact that $A$ is positive definite, I got that all diagonal entries will be greater than $0$, but how does that help? Edit: As $A$ satisfies $x^2-1=0$, the minimal polynomial will divide this. Therefore, the minimal polynomial will have $(x-1)$ or $(x+1)$ or both as factors; as $A$ is positive definite, $-1$ can't be an eigenvalue, therefore we get that $1$ is an eigenvalue of $A$. I'm not sure if this helps, though.
Hint: $A$ is symmetric and positive definite. Use diagonalization.
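A sketch of how the hint finishes the argument (an addition): since $A$ is symmetric, $A=QDQ^T$ with $Q$ orthogonal and $D$ diagonal. Orthogonality gives $A^2=A^TA=I$, hence $D^2=I$ and every eigenvalue is $\pm 1$; positive definiteness rules out $-1$, so $D=I$ and $A=QIQ^T=I$.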
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Category Kernel Question (Unique homomorphism $\alpha_0:\ker\phi\to L$) There is a question with two parts (prove or disprove), one of which I know how to do, but on the other part I am stuck. Let $\phi:M'\to M$ be a homomorphism of abelian groups. Suppose that $\alpha:L\to M'$ is a homomorphism of abelian groups such that $\phi\circ\alpha=0$, for instance the inclusion map $\mu:\ker\phi\to M'$. Prove or disprove each of the following:

(i) There is a unique homomorphism $\alpha_0:\ker \phi\to L$ such that $\mu=\alpha\circ\alpha_0$. (This one I am unsure whether it is true or untrue.)

(ii) There is a unique homomorphism $\alpha_1:L\to\ker\phi$ such that $\alpha=\mu\circ\alpha_1$. (This one I am quite sure is true, as it is the universal property of the kernel.)

I will post my solution to part (ii) below.
Part (i) is false: Take $M=L=0$ and $M'=\Bbb{Z}/2\Bbb{Z}$ so that there are unique homomorphisms $$\phi:\ M'\ \longrightarrow\ M\qquad\text{ and }\qquad\alpha:\ L\ \longrightarrow\ M',$$ and they clearly satisfy $\phi\circ\alpha=0$. Then $\ker\phi=M'$ so the inclusion map $\mu:\ \ker\phi\ \longrightarrow\ M'$ is the identity on $M'=\Bbb{Z}/2\Bbb{Z}$, which cannot factor over $L=0$, i.e. we cannot have $\mu=\alpha\circ\alpha_0$. Part (ii) is precisely the usual categorical definition of the kernel.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1447879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Minimum value of the expression given below. If $a,b,c$ are three different positive integers such that $$ ab+bc+ca\geq 107 $$ then what is the minimum value of $a^3+b^3+c^3-3abc$? I expanded this expression to $(a+b+c)((a+b+c)^2-3(ab+bc+ca))$ and tried to find the minimum value of $(a+b+c)$ by AM-GM inequalities, but the problem is that the minimum value occurs in the scenario where $a=b=c$, which is ruled out by the assumptions of this question. I'd appreciate some help.
Yes, that's the correct answer. We can argue as follows: $ab+bc+ca\geq 107$ gives $3(ab+bc+ca)\geq 321$, and for the value to be positive but as small as possible, $(a+b+c)^2$ should be just greater than $321$. Since $18^2=324$, we need $a+b+c\geq 18$, and for the minimum it should equal $18$. Also, $ab+bc+ca$ should be as large as possible (namely $107$) for the minimum value, so we are led to the values $5,6,7$, and the minimum value will be $18\cdot(324-321)=54$.
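Verifying the claimed minimum (a check added for completeness): for $(a,b,c)=(5,6,7)$ we have $ab+bc+ca=30+42+35=107$ and $$a^3+b^3+c^3-3abc=125+216+343-3\cdot 210=684-630=54=18\,(18^2-3\cdot 107).$$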
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How does the Torsion of two vector fields act on their corresponding flows? Let $X$ and $Y$ be vector fields defined on an open neighborhood of a smooth manifold $M$ endowed with an (arbitrary) affine connection $\nabla$ (I'm not assuming anything apart from it being a connection on $TM$). I'm trying to understand the torsion $T(X,Y)=\nabla_X Y - \nabla_Y X - [X,Y]$ of the connection in terms of flows. Here's what I have so far: denoting the local flows of $X$ and $Y$ by $\varphi^X_t$ and $\varphi^Y_t$ resp. and their commutator by $\alpha(t)= \varphi^Y_{-t} \varphi^X_{-t}\varphi^Y_t\varphi^X_t$, we have the following relation between the Lie bracket and $\alpha$: $$[X,Y] = \frac{1}{2} \alpha ''(0)$$ So what I'm left with is finding a way to express $\nabla_X Y - \nabla_Y X$ in terms of flows. Obviously there must be some input from the connection. I tried to compute the flow by exponentiating from the Lie algebra of vector fields but I didn't get very far... This problem made me realize I have no idea how integral curves and parallel transport are related. A word about how they relate to each other would in any case be very helpful. Ideally I'd like to have an expression for $\nabla_X Y - \nabla_Y X$ in terms of parallel transport and the flows of $X$ and $Y$. Is there such a characterization?
In Samelson's Lie Bracket and Curvature it is shown that $$ (\nabla_Xs)^v =[X^h,s^v]$$ where $X \in \Gamma(TM)$, $s \in \Gamma(E)$, and $\nabla$ is the covariant derivative on $E$. Here $s^v$ denotes the vertical extension of the section $s$, and $X^h$ denotes the horizontal lift of the vector field $X$ (both defined in Samelson's paper). So $$ \nabla_XY-\nabla_YX = \pi([X^h,Y^v] - [Y^h,X^v]) $$ where $\pi$ is the vertical projection. Therefore $$\nabla_XY - \nabla_YX = \pi\left(\frac12(\beta^{''}(0)-\gamma^{''}(0))\right)$$ where $$ \beta = \varphi_{-t}^{Y^v}\varphi_{-t}^{X^h}\varphi_{t}^{Y^v}\varphi_{t}^{X^h}$$ and $$ \gamma = \varphi_{-t}^{X^v}\varphi_{-t}^{Y^h}\varphi_{t}^{X^v}\varphi_{t}^{Y^h}$$ and $\varphi_{t}^{Y^v}$, $\varphi_{t}^{X^h}$, $\varphi_{t}^{X^v}$ and $\varphi_{t}^{Y^h}$ are the flows of $Y^v$, $X^h$, $X^v$ and $Y^h$ respectively. I don't know if this makes any sense, but I think that by finding a way to horizontally lift and vertically extend the flows you would have a characterization of the covariant derivative in terms of the flows of $X$ and $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
A question on Gershgorin disks Suppose that the $n$ Gershgorin discs of $A \in {\mathbb{C}^{n \times n}}$ are mutually disjoint. If $A$ has real main diagonal entries and its characteristic polynomial has only real coefficients, why is every eigenvalue of $A$ real?
Hint: if your polynomial has only real coefficients, then whenever $a+bi$ is a root, so is $a-bi$. If $a+bi$ is in a disk centered at a real number, is $a-bi$ also in that disk? Can you have two eigenvalues in the same disk when the disks are disjoint?
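Spelled out (an addition to the hint): each disc is centred at a real number $a_{ii}$, and $$|(a+bi)-a_{ii}|=\sqrt{(a-a_{ii})^2+b^2}=|(a-bi)-a_{ii}|,$$ so $a+bi$ and $a-bi$ lie in exactly the same discs. Since the $n$ discs are disjoint, each contains exactly one eigenvalue, which forces $b=0$.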
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448290", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there always a shortest path using vertices of low eccentricity? We say a graph $G$ is self-centered if $\text{rad}(G)=\text{diam}(G)$, so if its radius equals its diameter. In other words, the eccentricity of every vertex is equal. Consider the following claim: let $G$ be a graph that is not self-centered, and let $u$ and $v$ be two distinct arbitrary vertices both with eccentricity equal to $\text{diam}(G)$. Then, there exists a shortest $u$-$v$ path on which $u$ and $v$ are the only vertices with maximum eccentricity, i.e. $\epsilon(u) = \epsilon(v) = \text{diam}(G)$. If $u$ and $v$ are adjacent, this is trivially true. I'm not sure why this would always necessarily hold. Is there a proof or a counterexample?
This is not true. Let $G$ be a wheel graph with at least 8 spokes and subdivide each spoke. You can easily see that the center vertex is the only central vertex of $G$ and the vertices on the rim are the only vertices with maximum eccentricity (4). However, if you take two vertices on the rim that are at distance two, then the only shortest path between them goes along the rim and consists exclusively of vertices with maximum eccentricity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that every subsequence converging to the same limit implies the limit of the sequence exists. I need to prove that if there exist subsequences $a_{n_{k_{1}}}$ and $a_{n_{k_{2}}}$ (of $a_{n}$) that converge to different limits, then the sequence $a_{n}$ does not converge. I'm not sure how to do this. If I suppose $a_{n_{k_{1}}}$ converges to some limit $l_{1}$, then $\forall \epsilon >0$ $\exists N$ s.t. $\forall n_{k_{1}}>N$, $|a_{n_{k_{1}}}-l_{1}|<\epsilon$, and the same thing for $a_{n_{k_{2}}}$ and its limit $l_{2}$. $a_{n}$ not converging to $l$ means $\exists \epsilon_{0}$ s.t. $\forall N\in \mathbb{N}$, $\exists n\geq N$ s.t. $|a_{n}-l|\geq \epsilon_{0}$. But I'm not sure how to put them together in order to prove the implication that I want.
Suppose $a_n$ does converge to some value $\alpha$. This means that $\forall \epsilon>0\ \exists N>0$ s.t. $\forall n >N$: $|a_n - \alpha| < \epsilon$. Let $a_{n_k}$ be a subsequence of our original $a_n$. Since both sequences are infinite, and $N$ is a finite number, we can infer that there is some $\beta$ such that $\forall i > \beta$: $a_{n_i} \in \{a_n,a_{n+1},...\}$ where $n>N$. We already know that every element in the set $\{a_n,a_{n+1},...\}$ satisfies $|a_i-\alpha|<\epsilon$, and that $a_{n_i} \in \{a_n,a_{n+1},...\}$, so in particular $|a_{n_i} - \alpha| < \epsilon$. So overall: for all $\epsilon >0$ there exists $\beta>0$ s.t. $\forall i>\beta$: $|a_{n_i} - \alpha|<\epsilon$, so $\alpha$ is the limit of $a_{n_k}$. Result: if $a_n$ converges to $\alpha$, then ANY infinite subsequence of $a_n$ also converges to $\alpha$. So if two subsequences converge to different limits, then the entire sequence does not converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
When is the additive identity not the zero vector? My teacher cryptically mentioned today that the zero vector is not always the additive identity. When asked for clarification I was told "we'll get there". He did confirm it is always $0$ in matrices filled with real numbers, but I can't think of or find any matrix, whether complex or variable or whatever, where anything else would work, or where the zero vector wouldn't work. It might be half a joke to keep me interested, but I'll be a monkey's uncle if it didn't work! I don't know, any ideas?
That depends on what you mean by the zero vector. If you want, you can consider $\mathbb{R}_+$ (the set of positive real numbers) as a vector space, where you define $x\oplus y = xy$ and $\lambda x = x^\lambda$. Then the "additive identity" is actually $1$ (but should probably be called the zero vector in this strange context).
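A quick check that $1$ really plays the role of the zero vector here (an addition): for $x,y\in\mathbb{R}_+$, $$x\oplus 1 = x\cdot 1 = x,\qquad \lambda(x\oplus y)=(xy)^\lambda=x^\lambda y^\lambda=(\lambda x)\oplus(\lambda y),$$ so the additive identity of this vector space is the number $1$, not $0$.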
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
What happens if we remove the non-negativity constraints in a linear programming problem? As we know, a standard way to represent linear programs is $$\begin{array}{ll} \text{maximize} & c^T x\\ \text{subject to} & A x \leq b\\ & x \geq 0\end{array}$$ with the associated dual $$\begin{array}{ll} \text{minimize} & b^T y\\ \text{subject to} & A^T y \geq c\\ & y \geq 0\end{array}$$ We know that in such a case, either both problem have an optimum (at the same point) or one is unfeasible and the other unbounded. Now suppose in these definitions, I remove the non-negativity constraints on $x$ and $y$. I then have two questions. Firstly, in such a case, can an optimum be achieved with an unbounded feasible set? If so, does that mean the dual will have the same optimum? Secondly, what would be a way to check if an optimum is attained if the feasible set is unbounded? Will checking at the vertices only suffice in this case?
If you remove the non-negativity constraint on $x$ then the constraints of the dual program become $A^T y = c$. Similarly, if you drop the non-negativity constraint on $y$ your primal constraints become $A x = b$. For these primal-dual pairs of LPs strong duality still holds. That means that if your primal LP has a bounded objective value which is achieved by a solution $x^*$ then there exists a dual feasible solution $y^*$ such that both objective values coincide. Checking only the vertices will not suffice to check if an optimum is attained. You also have to check the extreme rays to make sure that the optimum is attained. If you already know the (bounded) optimal objective then you will find an optimal solution at one of the vertices. On a side note: you can always transform a problem without non-negativity constraints into one with such constraints by replacing every variable $x$ which need not be non-negative by two variables $x^+$ and $x^-$, both of which must be non-negative. In every constraint of the LP, $x$ is replaced by $(x^+ - x^-)$. The idea behind this replacement is that $x^+$ is the positive part of $x$ and $x^-$ is the negative part.
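A tiny worked instance of the splitting trick (an addition): a constraint $2x_1-x_2=3$ with $x_1$ unrestricted in sign becomes $$2x_1^+-2x_1^--x_2=3,\qquad x_1^+,x_1^-\geq 0,$$ and from any solution one recovers $x_1=x_1^+-x_1^-$.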
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448696", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Taylor expansion for $\arcsin^2{x}$ I stumbled upon this particular expansion that was included in this post. $$ \displaystyle \arcsin^{2}(x) = \frac{1}{2} \sum_{n=1}^{\infty} \frac{1}{n^{2} \binom{2n}{n}} (2x)^{2n}$$ This caught my eye because I remember trying to derive a Taylor series for $\arcsin^{2}(x)$ a while ago without much success. Can anyone prove this or point me to a material that would show a proof of this identity? EDIT : Feel free to use any mathematical apparatus at hand. I'm not interested in a proof fit for a certain level, nor am I looking for utmost elegance (though that would be lovely).
You can find a derivation for the Taylor series of $\frac{\arcsin(x)}{\sqrt{1-x^2}}$ in this nice answer. Since $$\frac{d\arcsin^2(x)}{dx} = \frac{2\arcsin(x)}{\sqrt{1-x^2}}$$ the Taylor series for $\arcsin^2(x)$ follows by integration.
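A quick check of the first coefficients (an addition): from $\arcsin x = x+\frac{x^3}{6}+\cdots$ we get $\arcsin^2 x = x^2+\frac{x^4}{3}+\cdots$, while the claimed series gives $$\frac12\cdot\frac{(2x)^2}{1^2\binom{2}{1}}=x^2,\qquad \frac12\cdot\frac{(2x)^4}{2^2\binom{4}{2}}=\frac{16x^4}{48}=\frac{x^4}{3},$$ so the two expansions agree through fourth order.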
{ "language": "en", "url": "https://math.stackexchange.com/questions/1448822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Equivalence relations Having trouble proving this is an equivalence relation. Does it suffice to say: let $x, y, z$ be any strings in $\Sigma^*$; then $(xz \in L \iff yz \in L) \rightarrow (yz \in L \iff xz \in L)$ shows that $xRy \rightarrow yRx$?
For any $x\in \Sigma^*$, define $L(x)$ to be the language $\{z\in \Sigma^*\mid xz\in L\}$. Then the definition of $R$ can be rephrased as $$xRy \iff L(x)=L(y).$$ This is clearly reflexive, symmetric, and transitive, but assuming this is an elementary course you probably want to at least write out transitivity, making clear you understand why $xRy$ and $yRz$ imply $xRz$.
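For instance, transitivity written out in the original terms (an addition; $w$ is used for the third string to avoid clashing with the suffix variable $z$): if $xRy$ and $yRw$, then for every $z\in\Sigma^*$, $$xz\in L \iff yz\in L \iff wz\in L,$$ so $xRw$.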
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Convergence of $\sum_{k=0}^{\infty}{x^k\over (k+1)!}$ How can I prove the convergence of $$\sum_{k=0}^{\infty}{x^k\over (k+1)!}$$ and what is the limit function? I think that I need to use the fact that $\sum_{k=0}^{\infty}{x^k\over k!}$ is a convergent series. Any comments, suggestions or hints would be really appreciated
Hint: notice that $\sum\limits_{k=0}^{\infty}\frac{x^k}{(k+1)!}=\frac{1}{x}\sum\limits_{k=0}^{\infty}\frac{x^{k+1}}{(k+1)!}=\frac{1}{x}\sum\limits_{k=1}^{\infty}\frac{x^k}{k!}$.
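Following the hint through (an addition): for $x\neq 0$ this gives $$\sum_{k=0}^{\infty}\frac{x^k}{(k+1)!}=\frac{e^x-1}{x},$$ while at $x=0$ the series converges to $1$; convergence for every $x$ follows by comparison with the exponential series.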
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show if $N$ is a normal subgroup of $G$ and $H$ is a subgroup of $G$, then $N \cap H$ is a normal subgroup of $H$. Show if $N$ is a normal subgroup of $G$ and $H$ is a subgroup of $G$, then $N \cap H$ is a normal subgroup of $H$. Attempt: recall $N \cap H$ is normal in $H$ if and only if $h(N \cap H) h^{-1} \subset N \cap H$ for all $h\in H$. Then suppose $j \in (N \cap H)$ and $h \in H$, so $hjh^{-1} = (h^{-1})^{-1} j(h^{-1}) = k \in (N \cap H)$ for some $k$; then we can solve for $j$, so we get $j = h^{-1}kh \in h^{-1}(N\cap H)h$. Hence $N \cap H$ is contained in $h^{-1}(N \cap H)h$, so $N \cap H = h^{-1}(N \cap H)h$. So $N\cap H$ is a normal subgroup of $H$. Can someone please verify this? My professor said I need to choose an element from one side and show the element is in the other side too, so containment in both directions. My professor said I can't assume $h(N \cap H) h^{-1} = hNh^{-1} \cap hHh^{-1} = N \cap H.$ Can someone please help me if this is wrong. Thank you very much!
To show $N \cap H$ is normal in $H$ we must show that the conjugate of $N \cap H$ by an element $h$ of $H$ is $N \cap H$. So let $h \in H$. Then $h(N \cap H)h^{-1} = hNh^{-1} \cap hHh^{-1}$ (this equality holds because conjugation by $h$ is a bijection of $G$, so it commutes with intersections), but $hNh^{-1} = N$ since $N$ is normal in $G$, and $hHh^{-1} = H$ since $h$ lies in the subgroup $H$, so $h(N \cap H)h^{-1} = N \cap H$, as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Boundedness of an operator in $L^p$ space Let's define the following operator $$\mathcal{J}_\epsilon:=(I-\epsilon\Delta)^{-1},$$ where $\epsilon>0$ and $\Delta$ is the Laplacian. We know that $$\Vert\mathcal{J}_\epsilon f\Vert_{L^p(\Omega)}\leq\Vert f\Vert_{L^p(\Omega)},$$ for every $1<p<\infty$, with $\Omega$ an open subset of $\mathbb{R}^3$. Does the same result hold if $\Omega=\mathbb{R}^3$?
You may have already had this proven, but I thought I would include it for completeness. The integral of $e^{-2\pi ix\cdot\xi}$ over a sphere of radius $r$ is $$ \begin{align} 2\pi r\int_{-r}^re^{-2\pi it|\xi|}\,\mathrm{d}t &=2\pi r\int_{-r}^r\cos\left(-2\pi t|\xi|\right)\,\mathrm{d}t\\ &=\frac{2r}{|\xi|}\sin\left(2\pi r|\xi|\right)\tag{1} \end{align} $$ Therefore, the Fourier Transform of $\frac1{1+4\pi^2\epsilon|x|^2}$ is $$ \begin{align} \int_{\mathbb{R}^3}\frac{e^{-2\pi ix\cdot\xi}}{1+4\pi^2\epsilon\,|x|^2}\,\mathrm{d}x &=\int_0^\infty\frac{\frac{2r}{|\xi|}\sin\left(2\pi r|\xi|\right)}{1+4\pi^2\epsilon r^2}\,\mathrm{d}r\tag{2a}\\ &=-\frac1{\pi|\xi|^2}\int_0^\infty\frac{r}{1+4\pi^2\epsilon r^2}\,\mathrm{d}\cos\left(2\pi r|\xi|\right)\tag{2b}\\ &=\frac1{\pi|\xi|^2}\int_0^\infty\cos\left(2\pi r|\xi|\right)\frac{1-4\pi^2\epsilon r^2}{\left(1+4\pi^2\epsilon r^2\right)^2}\,\mathrm{d}r\tag{2c}\\ &=\frac1{2\pi^2|\xi|^2\sqrt\epsilon}\int_0^\infty\cos\left(\frac{|\xi|}{\sqrt\epsilon} r\right)\frac{1-r^2}{\left(1+r^2\right)^2}\,\mathrm{d}r\tag{2d}\\ &=-\frac1{8\pi^2|\xi|^2\sqrt\epsilon}\int_{-\infty}^\infty\exp\left(\frac{i|\xi|}{\sqrt\epsilon} r\right)\left[\frac1{\left(r+i\right)^2}+\frac1{\left(r-i\right)^2}\right]\,\mathrm{d}r\tag{2e}\\ &=-\frac1{8\pi^2|\xi|^2\sqrt\epsilon}2\pi i\frac{i|\xi|}{\sqrt\epsilon}\exp\left(-\frac{|\xi|}{\sqrt\epsilon}\right)\tag{2f}\\ &=\frac1{4\pi|\xi|\,\epsilon}\exp\left(-\frac{|\xi|}{\sqrt\epsilon}\right)\tag{2g}\\[6pt] &=K_\epsilon(x)\tag{2h} \end{align} $$ Explanation: $\text{(2a)}$: use $(1)$ $\text{(2b)}$: prepare to integrate by parts $\text{(2c)}$: integrate by parts $\text{(2d)}$: substitute $r\mapsto\frac{r}{2\pi\sqrt{\epsilon}}$ $\text{(2e)}$: extend the integral to $(-\infty,\infty)$ by symmetry and divide by $2$ $\phantom{\text{(2e):}}$ since the rest of the integrand is even, we can change $\cos(x)$ to $e^{ix}$ $\phantom{\text{(2e):}}$ partial fractions $\text{(2f)}$: use the contour $[-R,R]\cup Re^{i[0,\pi]}$ as $R\to\infty$ and the singularity at $i$ $\text{(2g)}$: simplify $\text{(2h)}$: define the kernel $K_\epsilon$ Simple computation shows that $$ \|K_\epsilon\|_{L^1\hspace{-1pt}\left(\mathbb{R}^3\right)}=1\tag{3} $$ for all $\epsilon$. Furthermore, $$ \mathcal{J}_\epsilon f(x)=K_\epsilon\ast f(x)\tag{4} $$ $(3)$, $(4)$, and Young's Inequality guarantee that the $L^p$ norm of $\mathcal{J}_\epsilon$ is at most $1$ for $1\le p\le\infty$.
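The normalization $(3)$ can also be checked numerically by integrating the radial profile of $K_\epsilon$; a short mpmath sketch (the sample values of $\epsilon$ are arbitrary):

```python
from mpmath import mp, quad, exp, sqrt, inf, mpf

mp.dps = 25
for eps in (mpf('0.1'), mpf('1'), mpf('10')):
    # ||K_eps||_{L^1} = ∫_0^∞ 4π r^2 · e^{-r/√eps} / (4π r eps) dr
    norm = quad(lambda r, e=eps: r * exp(-r / sqrt(e)) / e, [0, inf])
    print(eps, norm)   # ≈ 1 in each case
```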
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449434", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Root system for Lie algebras: does $t_{\alpha+\beta}=t_\alpha+t_\beta$? For a root system, where $\alpha,\beta \in \Delta$ and $\alpha+\beta\in \Delta$, does $t_{\alpha+\beta}=t_\alpha+t_\beta$? The elements $t_\alpha$ are denoted in different ways depending on the author; one author calls them root vectors. That is, for $\alpha\in\Delta$, the $t_\alpha$ span $\mathfrak{h}$, and $t_\alpha \in [\mathfrak{g}_\alpha,\mathfrak{g}_{-\alpha}]$. I tried to manipulate things using $h_\alpha = \frac{2}{\langle \alpha,\alpha\rangle} t_\alpha,\quad \alpha\in \Delta,\quad [e_\alpha,e_{-\alpha}]=h_\alpha$, expanding with the Jacobi identity and $[e_\alpha,e_\beta]=N_{\alpha,\beta} e_{\alpha+\beta}$. I am starting to think it is false. Additional notation: $e_\alpha$ is an eigenvector common to all elements of the Cartan subalgebra $\mathfrak{h}$.
You don't really give a definition for $t_\alpha$, and I have not seen this notation before. But I assume that $h_\alpha$ and $e_\alpha$ have their standard meaning, and take the relation $t_\alpha = \frac{\langle \alpha, \alpha \rangle}{2}h_\alpha$ you give as definition, where I further assume that $\langle \cdot ,\cdot\rangle$ is an invariant scalar product on the root system. Then the one thing to note is that for such an invariant scalar product, one has an identification of the coroot $\check\alpha(\cdot)$ with $\displaystyle\frac{2 \langle\alpha,\cdot \rangle}{\langle\alpha,\alpha\rangle}$. Whereas on the Lie algebra level, one has $$\check\alpha(\gamma) = \gamma(h_\alpha)$$ for all roots $\gamma$, or in other words, $\check\alpha$ is the evaluation at $h_\alpha$. Putting that together, one gets that the evaluating any root $\gamma$ (or actually any $l \in \mathfrak{h}^*$) at $t_\alpha$ is just $\langle \alpha, \gamma\rangle$ (or $\langle \alpha, l\rangle$). Since the scalar product is bilinear, the assertion follows. (And $t_\alpha \leftrightarrow \alpha$ gives an identification of the $t_\alpha$ with the original root system, like $h_\alpha \leftrightarrow \check\alpha$ describes the dual root system.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For which $k$ with $0<k<210637$ is $k\times 3571\#\pm 1$ a twin prime pair? Because PARI/GP is not very fast in primality testing, I did not check the pairs $k \times 3571\# \pm 1$ in ascending order, but began with $k=200,000$ and got the twin prime pair $$210637\times 3571\# \pm 1$$
* Is there a $k$ with $0<k<210637$ such that $k\times 3571 \# \pm 1$ is a twin prime pair? I tried to sieve out small factors, but it turned out that many candidates remain, because no member of any pair can have a prime factor $\le 3571$.
* Can I speed up the search with PARI/GP by using some clever sieving methods?
I don't know about PARI/GP, but NewPGen can sieve pretty fast. Running the interval $0<k<200000$ for one minute eliminated about 170000 candidates. https://primes.utm.edu/programs/NewPGen/ @Edit: I could have perhaps explained the program a bit. It only sieves; for testing you need another program. PFGW performs probable-prime tests, and the best thing is that it can use the files created by NewPGen. http://sourceforge.net/projects/openpfgw/
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How do I solve $\lim_{x \to 0} \frac{\sqrt{1+x}-\sqrt{1-x^2}}{\sqrt{1+x}-1}$, an indeterminate limit, without L'Hôpital's rule? I've been trying to solve this limit without L'Hôpital's rule because I don't know how to use derivatives yet. So I tried rationalizing the denominator and numerator but it didn't work. $\lim_{x \to 0} \frac{\sqrt{1+x}-\sqrt{1-x^2}}{\sqrt{1+x}-1}$ What is wrong with $\lim_{x \to 0} \frac{\sqrt{1+x}-\sqrt{1-x^2}-1+1}{\sqrt{1+x}-1} = \lim_{x \to 0} \frac{\sqrt{1+x}-1}{\sqrt{1+x}-1} + \lim_{x \to 0} \frac{1-\sqrt{1-x^2}}{\sqrt{1+x}-1}$ = 1 + DIV?
Set $x=\cos2y$ $$\lim_{x \to 0}\dfrac{\sqrt{1+x}-\sqrt{1-x^2}}{\sqrt{1+x}-1} =\lim_{y\to\pi/4}\dfrac{\sqrt2\cos y-\sin2y}{\sqrt2\cos y-1}$$ $$=-\sqrt2\lim_{y\to\pi/4}\cos y\cdot\lim_{y\to\pi/4}\dfrac{\sin y-\sin\dfrac\pi4}{\cos y-\sin\dfrac\pi4}$$ Method$\#1:$ $$\lim_{y\to\pi/4}\dfrac{\sin y-\sin\dfrac\pi4}{\cos y-\sin\dfrac\pi4} =\dfrac{\lim_{y\to\pi/4}\dfrac{\sin y-\sin\dfrac\pi4}{y-\dfrac\pi4}}{\lim_{y\to\pi/4}\dfrac{\cos y-\cos\dfrac\pi4}{y-\dfrac\pi4}} =\dfrac{\dfrac{d(\sin y)}{dy}_{(\text{ at } y=\pi/4)}}{\dfrac{d(\cos y)}{dy}_{(\text{ at } y=\pi/4)}}$$ Method$\#2:$ Use Prosthaphaeresis Formulas to get $$\dfrac{\sin y-\sin\dfrac\pi4}{\cos y-\sin\dfrac\pi4}=\cdots=-\cot\dfrac{y+\pi/4}2$$
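Both methods evaluate the limit to $1$: Method #1 gives $\frac{\cos(\pi/4)}{-\sin(\pi/4)}=-1$ for the inner limit, and $-\sqrt2\cdot\frac{\sqrt2}{2}\cdot(-1)=1$. A quick numeric sanity check in plain Python:

```python
import math

def f(x):
    return (math.sqrt(1 + x) - math.sqrt(1 - x * x)) / (math.sqrt(1 + x) - 1)

for x in (0.1, 0.01, 0.001, -0.001):
    print(x, f(x))   # values approach 1 as x -> 0
```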
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Is it possible to describe the Collatz function in one formula? This is related to Collatz sequence, which is that $$C(n) = \begin{cases} n/2 &\text{if } n \equiv 0 \pmod{2}\\ 3n+1 & \text{if } n\equiv 1 \pmod{2} .\end{cases}$$ Is it possible to describe the Collatz function in one formula? (without modular conditions)
You can use floor/ceiling maps. If you let $f(n) = 2*(n/2 - floor (n/2 ))$ then this function is the indicator of whether $n$ is odd or even, (i.e. $f(n) = 1$ if $n$ is odd and $f(n) = 0$ if $n$ is even), and so then you can define $C(n) = f(n)*(3n + 1) + (1 - f(n))*(n/2)$. However, note that looking at the floor of $n/2$ and using this in your formula by comparing to $n/2$ is basically the same as having different "cases" for the value of the function depending on whether $n$ is odd or even. I don't think there's any better way around it though, if you want one formula.
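Here is that construction spelled out as a short Python sketch (the function names are mine, chosen just for the illustration):

```python
import math

def parity(n):
    # f(n) = 2*(n/2 - floor(n/2)): 1 if n is odd, 0 if n is even
    return 2 * (n / 2 - math.floor(n / 2))

def collatz_formula(n):
    f = parity(n)
    return f * (3 * n + 1) + (1 - f) * (n / 2)

def collatz_piecewise(n):
    return n // 2 if n % 2 == 0 else 3 * n + 1

assert all(collatz_formula(n) == collatz_piecewise(n) for n in range(1, 100))
```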
{ "language": "en", "url": "https://math.stackexchange.com/questions/1449874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 8, "answer_id": 1 }
equality of complex analysis property I need to prove that $e^{\bar{z}} = \overline{e^{z}}$, with $e^z := \sum_{k = 0}^{\infty} \frac{z^k}{k!}$. The problem that I am having: we know $e^z$ defined that way always converges, but I don't see why summing with $\overline{z^k}$ in place of $z^k$ converges to the same number as first doing the summation and then conjugating the result. I don't know how to show that.
First, we have $$\overline{\lim_{N\to \infty}S_N}=\lim_{N\to \infty}\overline{S_N}$$ (complex conjugation is continuous — indeed an isometry of $\mathbb{C}$ — so it commutes with limits). Second, we have $$\overline{S_N}=\overline{\sum_{k=0}^Nf_k(z)}=\sum_{k=0}^N\overline{f_k(z)}$$ Third, we have $$\overline{f_k(z)}=\overline{\left(\frac{z^k}{k!}\right)}=\frac{\overline{z^k}}{k!}$$ Finally, we have $$\overline{z^k}=\bar z^k$$ and we are done, as this last equality follows inductively from the fact that $\overline{z_1z_2}=\bar z_1\bar z_2$!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1450055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Justifying differentiation of infinite product Let $(z_n)_{n\in\mathbb{Z}}$ be a sequence of complex numbers s.t. the product $P(z):=\prod_{n=1}^{\infty}{\left(1-\frac{z}{z_{-n}}\right)\left(1-\frac{z}{z_{n}}\right)}$ is absolutely convergent for every $z\in\mathbb{C}$, and hence defines an entire function with zeros at every $z_k$. Is there some nice way to justify $\frac{\mathrm{d}}{\mathrm{d} z}P(z_k)=\frac{-1}{z_k}\prod_{|n|\geq1, n\neq k}{\left(1-\frac{z_k}{z_{n}}\right)}$? I guess one is not allowed to simply use the product rule for differentiation here?
Yes, this can be done. You can find this on ProofWiki: Derivative of Infinite Product of Analytic Functions You don't even need the absolute convergence of the product $f=\prod f_n$, locally uniform convergence (=uniform convergence on compact sets, modulo some details, depending on the author, but many authors do not define it) suffices: Theorem. Let $D\subset\mathbb C$ be open, $(f_n)$ analytic on $D$ and $f=\prod f_n$ locally uniformly convergent (thus analytic). Then $f'=\sum f_n'\prod_{k\neq n}f_k$ locally uniformly. A practical sufficient condition is the locally uniform convergence of $\sum|f_n-1|$, which is satisfied by any reasonable sequence I can think of. Outline of proof First establish $$\frac{f'}f=\sum\frac{f_n'}{f_n}$$ locally uniformly, multiply everything by $f$, and check that both sides coincide at the zeroes of $f$. To obtain this, we'd like $\log f = \sum \log f_n$, locally uniformly in order to differentiate term-wise. This is true, or at least locally: Theorem. If $f=\prod f_n$ locally uniformly and $z_0\in D$, then $$\log \left(\prod_{n=n_0}^\infty f_n\right) = \sum_{n=n_0}^\infty \log f_n + 2k\pi i$$ uniformly on some neighborhood $U$ of $z_0$, for some $n_0\in\mathbb N$, $k\in\mathbb Z$ constant on $U$. The proof consists of very carefully taking logarithms. The surprising part (to me) is that $k$ can be taken constant, i.e. eventually the product does not switch branches anymore.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1450190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
Proving $\sum_{k=0}^n {n \choose k}{m-n \choose n-k} = {m \choose n}$ I have tried using the principle of mathematical induction. The base case is simple, but I am having trouble with the induction step. Any help would be great, thanks !
Although a MI method may be used, a combinatorial argument is much simpler. Suppose we want to select $n$ distinct objects out of $m$ objects in total. We can always select them directly, which gives the RHS. Alternatively, we can separate the $m$ objects into two groups, one of size $n$ and the other of size $m-n$. If we take $0$ from the group of $n$, we must choose all $n$ from the group of $m-n$; if we take $1$ from the group of $n$, we must take $n-1$ from the group of $m-n$; in general, taking $k$ from the first group and $n-k$ from the second can be done in $\binom{n}{k}\binom{m-n}{n-k}$ ways. Summing over $k$ gives the LHS, and since both expressions count the same selections, the two sides are equal.
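As a quick sanity check of the identity, a small Python sketch:

```python
from math import comb

def lhs(m, n):
    return sum(comb(n, k) * comb(m - n, n - k) for k in range(n + 1))

for m, n in [(10, 3), (12, 5), (20, 7)]:
    print(m, n, lhs(m, n), comb(m, n))   # the last two columns agree
```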
{ "language": "en", "url": "https://math.stackexchange.com/questions/1450293", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is my algebraic space a scheme? Consider $\mathcal{M}_{1,1}$ over $\bar{\mathbb{Q}}$. I have an algebraic stack $\mathcal{M}$ finite etale over $\mathcal{M}_{1,1}$ I can prove that it is an algebraic space (essentially because all its "hidden fundamental groups" are trivial - ie, all geometric points of $\mathcal{M}$ have trivial 2-automorphism groups) Must it be a scheme?
So the answer is yes, and follows from two facts: (1) For an abelian variety $A/S$ and $\sigma\in \operatorname{Aut}_S(A)$, if there is a point $s\in S$ such that $\sigma|_{A_s} = id$, then $\sigma = id$. (This is called "rigidity", and can be found in Mumford's Geometric Invariant Theory.) (2) For a stack affine over $\mathcal{M}_{1,1}$, it is representable if and only if every object has no nontrivial automorphisms. (This is the Scholie 4.7.0 in Katz/Mazur's Arithmetic Moduli of Elliptic Curves.) Statement (2) tells us that we just need to check that our objects have no automorphisms, and (1) tells us that it suffices to check this for elliptic curves over fields, so we're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1450380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Writing specific word as product of commutators in free groups Let $F$ be a free group on $\{a,b,c,\cdots\}$. Then a word $$a^mb^nc^k\cdots \hskip1cm \mbox{(finite expression)}$$ lies in commutator subgroup $[F,F]$ if and only if sum of powers of $a$ is zero, sum of powers of $b$ is zero, and so on. This follows by considering the natural homomorphism from $F$ to $F/[F,F]$. Now, my question, once we decide that a certain word in $F$ is in $[F,F]$, how to write it as a product of commutators? Is there any inductive procedure? For example, $a^2bc^{-2}a^{-2}b^{-1}c^2.$ This is in $[F,F]$. Is there an algorithm, or inductive procedure, to write this word as product of commutators? (Further examples would be better).
In Applications of topological graph theory to group theory, Goldstein and Turner give such an algorithm. Furthermore, the number of commutators in the product they find is minimal. I give some details on my blog, and in particular I show how the argument gives $$x^2yx^{-1}y^{-2}x^{-1}y=[x^2yx^{-1},y^{-1}x^{-1}].$$
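The displayed identity can be verified mechanically; here is a small Python sketch with a hand-rolled free-reduction routine (written just for this check) confirming both sides reduce to the same word in the free group on $x,y$:

```python
def invert(word):
    # Inverse of a word given as a list of (letter, exponent ±1) pairs
    return [(s, -e) for s, e in reversed(word)]

def reduce_word(word):
    out = []
    for s, e in word:
        if out and out[-1][0] == s and out[-1][1] == -e:
            out.pop()          # cancel adjacent inverse letters
        else:
            out.append((s, e))
    return out

def concat(*words):
    result = []
    for w in words:
        result += w
    return reduce_word(result)

x, X = ('x', 1), ('x', -1)
y, Y = ('y', 1), ('y', -1)

lhs = [x, x, y, X, Y, Y, X, y]              # x^2 y x^-1 y^-2 x^-1 y
a = [x, x, y, X]                            # x^2 y x^-1
b = [Y, X]                                  # y^-1 x^-1
rhs = concat(a, b, invert(a), invert(b))    # [a, b] = a b a^-1 b^-1
print(reduce_word(lhs) == rhs)              # True
```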
{ "language": "en", "url": "https://math.stackexchange.com/questions/1450469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
No finitely generated abelian group is divisible My question might be ridiculously easy, but I want to prove that no finitely generated abelian group is divisible. Let $G$ be a finitely generated abelian group. By definition, the group $G$ is divisible if for any $g\in G$ and natural number $n$ there is $h\in G$ such that $g=h^n$. There are two cases: (1) $|G|<\infty$; (2) $|G|=\infty.$ In the first case I tried to use the fundamental theorem of finitely generated abelian groups. By this theorem, a finitely generated abelian group is finite iff its free rank is zero. At this point I am now stuck. In the second case I can use the following statement: "No finite abelian group is divisible." But I cannot prove this statement.
Proof with no structure: let $G$ be a f.g. divisible abelian group. Assuming by contradiction that $G\neq 0$, it has a maximal (proper) subgroup $H$; then $G/H$ is a simple abelian group, hence cyclic of prime order, hence is not divisible, contradiction since being divisible passes to quotients. The same shows that a nonzero f.g. module over any commutative ring $A$ that is not a field, is never divisible (where divisible means that $m\mapsto am$ is surjective for every nonzero $a\in A$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1450676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $(a+1)(a+2)(a+3)\cdots(a+n)$ is divisible by $n!$ so I have this math problem, I have to prove that $$(a+1)(a+2)(a+3)\cdots(a+n)\text{ is divisible by }n!$$ I'm not sure how to start this problem... I'm completely lost. Here's what I know: $(a+1)(a+2)(a+3)\cdots(a+n)$ is like a factorial in that we are multiplying $n$ consecutive terms. However, how would I prove that it is divisible by $n!$? Thanks
Let $$ f(a,n) = (a+1)(a+2)\cdot\ldots\cdot(a+n).\tag{1}$$ We have: $$ f(a+1,n)-f(a,n) = n\cdot f(a+1,n-1) \tag{2} $$ hence the claim follows by applying a double induction.
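Concretely, the double induction runs on $n$ (with $(2)$ reducing $n$ by one) and on $a$ (stepping $a \mapsto a+1$, with base case $f(0,n) = n!$). A quick brute-force check in Python, assuming $a$ ranges over nonnegative integers:

```python
from math import factorial, prod

def f(a, n):
    return prod(a + i for i in range(1, n + 1))

print(all(f(a, n) % factorial(n) == 0
          for a in range(30) for n in range(1, 10)))   # True
```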
{ "language": "en", "url": "https://math.stackexchange.com/questions/1450889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does a cofiber sequence of CW complexes induce a cofiber sequence of skeleta? Suppose $X\to Y \to Z$ is a cofiber sequence of CW complexes. We can replace the maps with homotopic cellular maps $X_n\to Y_n \to Z_n$ taking the $n$-skeleton of $X$ to the $n$-skeleton of $Y$ and similarly for the second map. Is the resulting sequence $X_n\to Y_n\to Z_n$ necessarily a cofiber sequence? I feel the answer must be no, but a counterexample is eluding me.
No. For example, $S^1 \to \bullet \to S^2$ is a cofiber sequence, but it doesn't remain a cofiber sequence after taking $1$-skeleta.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
least-squares problem with matrix Consider the samples of vectors $(x_0,x_1,...,x_k)$ where $x\in \mathbb{R}^m$ and $(y_0,y_1,...,y_k)$ where $y\in \mathbb{R}^n$. I need to find the matrix $A\in \mathbb{R}^{n\times m}$ of the following least-squares problem : $\min_A S= \sum\limits_{i=0}^k (y_i-Ax_i)^T(y_i-Ax_i)$ To find the minimum, I differentiate $\frac{\partial S}{\partial A}=0=\sum\limits_{i=0}^k -2x_i^Ty_i +2x_i^TAx_i$ And finally : $\sum\limits_{i=0}^k x_i^TAx_i=\sum\limits_{i=0}^k x_i^Ty_i$ I am stuck here, how to find $A$ ?
You can rewrite it as $$\sum\limits_{i=0}^k (y_i-Ax_i)^T(y_i-Ax_i)=\sum\limits_{i=0}^k ||y_i-Ax_i||_2^2=||Y-AX||_F^2$$ where X and Y are the matrices of the samples by column, and the $||\cdot||_F$ is the Frobenius norm. In this way you can transpose everything $$\dots=||Y^T-X^TA^T||_F^2$$ and you can treat it as a least squares minimization problem, that can be solved exploiting the pseudoinverse matrix. Remember that here $X^T,Y^T$ and $A^T$ play the roles respectively of the matrix of the linear system, constant term (one vector for each column) and the unknown (one vector for each column).
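A small numpy sketch of this (illustrative only — the dimensions, seed, and noise level are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 4, 3, 50
X = rng.standard_normal((m, k))        # columns are the samples x_i
A_true = rng.standard_normal((n, m))
Y = A_true @ X + 0.01 * rng.standard_normal((n, k))

# Solve min_A ||Y - A X||_F by solving X^T A^T = Y^T in least squares
A_hat = np.linalg.lstsq(X.T, Y.T, rcond=None)[0].T
# Equivalently, via the pseudoinverse:
print(np.allclose(A_hat, Y @ np.linalg.pinv(X)))   # True
```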
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the gradient of implicit surface the normal vector (i.e. parallel to the normal line)? If we have an implicit surface $g(x, y, z)=...$, the gradient is a normal vector. But why? Do I have to start with tangent vectors and then show the gradient is perpendicular? Not sure how to approach this.
Fix a point $p$ on the surface and take a curve $\gamma=\gamma(t)$ lying on the surface and passing through $p$, say $\gamma(0)=p$. That implies $$ g(\gamma(t))=0 $$ for all $t$. Differentiating this relation with respect to $t$ (chain rule) gives $$\nabla g(\gamma(t))\cdot\dot\gamma(t)=0.$$ This argument shows that $\nabla g(p)$ is orthogonal to the velocity vector $\dot{\gamma}(0)$ for all curves on the surface that pass through $p$. By definition, the plane tangent to the surface at $p$ is the one that is composed of those vectors. Therefore, $\nabla g(p)$ is orthogonal to the tangent plane.
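A concrete numeric illustration on the unit sphere (a sketch; the particular surface and curve are my choices):

```python
import numpy as np

# g(x, y, z) = x^2 + y^2 + z^2 - 1 defines the unit sphere; grad g = (2x, 2y, 2z)
def grad_g(p):
    return 2 * p

t = 0.7
gamma = np.array([np.cos(t), np.sin(t), 0.0])        # a curve on the sphere
gamma_dot = np.array([-np.sin(t), np.cos(t), 0.0])   # its velocity vector
print(np.dot(grad_g(gamma), gamma_dot))              # ≈ 0
```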
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is $0\to\ker\varepsilon\to P_0\xrightarrow\varepsilon M\to 0$ a projective resolution? Let $R$ be a commutative ring with $1$. My professor defined $\operatorname{Tor}_i^R(M,N)$ as follows: Tensor a projective resolution $\dots\to P_1\to P_0\to M\to 0$ with $N$ and set $$\operatorname{Tor}_i^R(M,N):=\ker(P_i\otimes N \to P_{i-1}\otimes N)/\operatorname{im}(P_{i+1}\otimes N\to P_i\otimes N) \text,$$ where $P_{-1}:=0$. To prove that $\operatorname{Tor}_0^R(M,N)=M\otimes_R N$, I have seen the following argument: Choose a projective resolution $$\dots\to P_1\to P_0\xrightarrow\varepsilon M\to 0 \text;$$ this gives a short exact sequence $$0\to\ker\varepsilon\to P_0\xrightarrow\varepsilon M\to 0 \text.$$ Tensoring with $N$ yields the exact sequence $$0\to\ker\varepsilon\otimes N\to P_0\otimes N\xrightarrow{\varepsilon\otimes\operatorname{id}} M\otimes N\to 0 \text,$$ hence $$\operatorname{Tor}_0^R(M,N)=(P_0\otimes_R N)/(\ker\varepsilon\otimes N) \cong M\otimes N \text.$$ Now most of this is clear to me, except: In the sequence $$0\to \ker\varepsilon\to P_0\xrightarrow\varepsilon M\to0 \text,$$ why is $\ker\varepsilon$ projective? Is this even true, or is the definition incomplete and it does not need to be?
In general $\ker \varepsilon$ is not projective; otherwise every module would have a projective resolution of length one. Consider the following counterexample: let $R=K[x,y]$ where $K$ is a field and let $M=R/(x,y)$. Let $\epsilon: R \to M$ be the map that sends an element to its residue class. The kernel is the ideal $(x,y)$, which is not projective. Also, tensoring an exact sequence with $N$ does not give an exact sequence in general.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451322", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to find the range of a function without a graph? Okay so I know how to find the domain of the function $f(x) = \frac{3x^2}{x^2-1}$, which is $x\neq 1$ and $x\neq -1$, but I'm totally confused on how to find the range without using a graph.
Write the function as $$f(x)=\frac{3x^2-3+3}{x^2-1}=3+\frac3{x^2-1}$$ As $f$ is even, we may suppose $x\ge 0,\enspace x\neq 1$. Now the range of $x^2-1$ on $[0,\infty)$ is $[-1,+\infty)$, and for $f(x)$ the value $x^2=1$ is excluded; hence the range of $\;\dfrac3{x^2-1}$ is $(-\infty,-3] $ for $x\in [0,1)$ and $(0,+\infty)$ for $x>1$. Hence the range of $f(x)=3+\dfrac3{x^2-1}$ is $\;(-\infty,0]\cup (3,+\infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Proving that the Empty Set $\emptyset$ is Unique Verification I know that this question has been asked before, but I am presenting my proof for verification. I do not want anyone to give me a proof or suggest a better one; I only want to know if the following proof is legitimate. Let $A$ and $B$ be empty sets. Let $U$ be the universe in which $A$ and $B$ abide. Clearly, $U = \overline{A}$ and $U = \overline{B}$. However, this implies that $\overline{U} = \bar{\bar{A}} = A$ and $\overline{U} = \bar{\bar{B}} = B$. Since $\overline{U} = \overline{U}$, we conclude that $A=B$, that is, the empty set is unique.
Your proof is not legitimate in ZF (in which there is no "universe" in which $A$ and $B$ abide). (I would say more but you explicitly asked me not to.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is $t^3+t+1$ irreducible over $\mathbb{Q}(\sqrt{-31})$? To prove this, I substituted $a + b \sqrt{-31}$ into the polynomial, where $a,b$ are rational. But it is very complicated. Is there another way to check irreduciblity of $t^3+t+1$ over $\mathbb{Q}(\sqrt{-31})$?
A cubic polynomial $f$ that is irreducible over $\mathbb{Q}$ is irreducible over any quadratic extension of $\mathbb{Q}$. To see this, let $\alpha$ be a root of $f$, and note that $\mathbb{Q}(\alpha)$ has degree $3$ over $\mathbb{Q}$, so it cannot be contained in an extension of degree $2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Let $H$ be a group. Let $a, b$ be fixed positive integers and $H=\{ax+by\mid x,y\in \Bbb Z\}.$ Show that $d\mathbb Z =H$ where $d=\gcd(a,b)$. Let $H$ be a group. Let $a, b$ be fixed positive integers and $$H=\{ax+by\mid x,y\in \Bbb Z\}.$$ Show that $d\mathbb Z =H$ where $d=\gcd(a,b)$. Given $d=\gcd(a,b)$ then $d|a, ~d|b$ i.e. $d\alpha =a, ~d\beta =b$ for some integers $\alpha, \beta$. Let $ax+by\in H$ then $ax+by=d(x\alpha + y\beta)\in d\Bbb Z$ i.e. $$H\subset d\Bbb Z ~~~~~~~~~~~~~(1)$$ We now show that $d \Bbb Z \subset H$. By Euclidean algorithm, there exist $u, v$ such that $ua+vb=\gcd(a,b)=d$. My Question I would like to show that $$d\mathbb Z =H.$$ I have shown $$H\subset d\Bbb Z$$ and I am unable to show $$d \Bbb Z \subset H.$$ Please help me to show the desired part ($d \Bbb Z \subset H$).
Quoting you: "By the Euclidean algorithm, there exist $u, v$ such that $ua+vb=\gcd(a,b)=d$" — so you're done, since given $dz\in d\mathbb Z$, $$dz=z(ua+vb)=a(zu)+b(zv)$$ which is a linear combination of $a$ and $b$; thus $dz\in H$. This proves $d \Bbb Z \subset H$.
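For completeness, the coefficients $u,v$ come from the extended Euclidean algorithm; a small Python sketch (the numbers are illustrative only):

```python
from math import gcd

def extended_gcd(a, b):
    # Returns (d, u, v) with u*a + v*b == d == gcd(a, b)
    if b == 0:
        return a, 1, 0
    d, u, v = extended_gcd(b, a % b)
    return d, v, u - (a // b) * v

a, b = 12, 18
d, u, v = extended_gcd(a, b)
assert d == gcd(a, b) and u * a + v * b == d
z = 7
print(f"{d*z} = {a}*({u*z}) + {b}*({v*z})")   # exhibits d*z as an element of H
```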
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the separable differential equation M(x) + N(y)y' = 0 is exact. Show that the separable equation $M(x) + N(y)\frac{dy}{dx} = 0$ is exact. The homework sheet that the teacher gave us said it should be a one-liner. I'm not sure how to prove it that easily (as in, put it in one line). I have done this so far: $M(x) dx + N(y) dy = 0$ I know that the derivative of $M(x)$ with respect to $y$ should equal the derivative of $N(y)$ with respect to $x$ ($M_y = N_x$). But if they don't, then it's not exact, right? Even if I bring the $M(x)dx$ to the other side it would be $N(y) dy = - M(x) dx$ Then I still have the issue that $M_y$ might be different from $N_x$ Edit with @Evgeny comment in mind Ok so, then I need to look at it as $\frac {\partial M}{\partial y} = \frac {\partial N}{\partial x}$? Couldn't they still be different?
$$M(x) + N(y)\frac{dy}{dx} = 0 \tag 1$$ is exact because the function $$F(x,y)=\int M(x) \, dx+\int N(y) \, dy$$ satisfies $F_x = M(x)$ and $F_y = N(y)$ (equivalently, $\partial M/\partial y = 0 = \partial N/\partial x$, since $M$ does not involve $y$ and $N$ does not involve $x$), and the level curves $F(x,y) = \text{ constant }$ are the solutions of $(1).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1451903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$4ab-2-5b=0$ has no solution $a,b$ are natural numbers. Show that $4ab-2-5b=0$ has no solution. By contradiction: $b(4a-5)=2$ So $4a-5\in\{1,2\}$, which gives $a\notin\mathbb{N}$ (contradiction). So the equation has no solution! Is my proof correct?
The essence of your proof is correct, but you've written it down in an awful fashion. Contrary to popular belief, mathematics isn't just writing down some formulae and implications. Every proof must contain some explaining words. In this case, a proper proof could look like this: Assume there is an integer solution $a, b$ to the equation. Then the equation implies $b(4a - 5) = 2$. Since $b$ is a nonnegative integer, $4a - 5$ must also be a nonnegative integer [since their product would be nonpositive otherwise]. We also note that $4a - 5$ is a divisor of $2$; but since $2$ is a prime, it only has two nonnegative divisors, namely $1$ and $2$. Neither $4a - 5 = 1$ nor $4a - 5 = 2$ has a solution over the nonnegative integers, which is a contradiction to our assumption.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Different ways finding the derivative of $\sin$ and $\cos$. I am looking for different ways of differentiating $\sin$ and $\cos$, especially when using the geometric definition, but ways that use other defintions are also welcome. Please include the definition you are using. I think there are several ways to do this I didn't find, that make clever use of trigonometric identities or derivative facts. I am looking for these for several reasons. Firstly, it is just interesting to see. Secondly, there may be interesting techniques that can also be applied in a clever way to other derivative problems. It is also interesting to see how proofs can come form completely different fields of mathematics or even from physics. I have included several solutions in the answers.
To the first order, the tangent is a good approximation of the boundary of a shape. So can you approximate $(\cos(\theta+d\theta),\sin(\theta+d\theta))$ in terms of $(\cos\theta,\sin\theta)$? In other words, what's: $(\cos(\theta+d\theta),\sin(\theta+d\theta))-(\cos\theta,\sin\theta)$ Well, it's tangent to the circle, and hence points in the perpendicular direction. And its length is $d\theta$. Hence it's $d\theta(-\sin\theta,\cos\theta)$. [edit] This turns out to be the same as Jyrki Lahtonen's proof B.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 11, "answer_id": 7 }
Based on a coordinate system centered in a sphere where is $M(x, y, z) = 6x - y^2 + xz + 60$ smallest? I am trying to work through a few examples in my workbook, and this one has me completely dumbfounded. Suppose I have a sphere of radius 6 metres, based on a coordinate system centred in that sphere, at what point on the sphere will $M(x, y, z) = 6x - y^2 + xz + 60$ be smallest? I have been racking my brain for ages, but I don't see how best to solve this. If someone has any ideas, I would greatly appreciate your help. Cheers Tim
On your sphere $x^2+y^2+z^2=6^2$, so that $y^2=\ldots$. Substitute this into $M$ to have a function of $x$ and $z$ only.
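If it helps to see where the hint leads, here is a quick numeric grid search (a sketch; it carries out the suggested substitution $y^2 = 36 - x^2 - z^2$):

```python
import numpy as np

# On the sphere, y^2 = 36 - x^2 - z^2, so
# M = 6x - (36 - x^2 - z^2) + xz + 60 = x^2 + z^2 + xz + 6x + 24
xs = np.linspace(-6, 6, 601)
x, z = np.meshgrid(xs, xs)
M = np.where(x**2 + z**2 <= 36, x**2 + z**2 + x*z + 6*x + 24, np.inf)
i, j = np.unravel_index(np.argmin(M), M.shape)
print(M[i, j], x[i, j], z[i, j])   # minimum ≈ 12 near x = -4, z = 2
```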
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The closure of $c_{00}$ is $c_{0}$ in $\ell^\infty$ Let $x=(x(1),x(2),\ldots,x(n),\ldots)\in c_0$. So for any $\varepsilon>0$ there exists an $n_0\in\mathbb{N}$ such that $|x(n)|<\varepsilon_2$ for all $n\geq n_0$. Now for all $n\geq n_0$, $\|x_n−x\|_\infty=\sup_{m\geq n_0}|x(m)|\leq \varepsilon_2<\varepsilon$ (here $x_n$ denotes the truncation $(x(1),\dots,x(n),0,0,\dots)\in c_{00}$). I have a doubt regarding the last line. What makes it possible?
I assume what you want to show is that the closure, with respect to the sup-norm, of the space of all sequences that are eventually zero ($c_{00}$) is the space of all sequences that tend to $0$, that is, $c_0$. To show this, fix some sequence $x=(x(1), x(2), \dots) \in c_0$; one needs to show that for every $\epsilon >0$ there is a $y \in c_{00}$ with $\| x- y \| < \epsilon$. To do this one observes that for any $\epsilon_2$ there is some $n_0$ such that $|x(n)| < \epsilon_2 $ for all $n > n_0$. Now let $x_{n_0}= (x(1), x(2), \dots , x(n_0), 0, 0, \dots ) \in c_{00}$. Then $x-x_{n_0}= (0, \dots , 0, x(n_0+1),x(n_0+2), \dots )$. Now $|x(n)|< \epsilon_2$ for all $n > n_0$, so $\| x- x_{n_0}\| \le \epsilon_2$. Thus given any $\epsilon > 0$, choosing some $0 < \epsilon_2 < \epsilon$ and proceeding as above, you get some $x_{n_0}$ in $c_{00}$ such that $\| x- x_{n_0}\| \le \epsilon_2 < \epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Unit of a purely infinite, simple C*-algebra Suppose that we have a purely infinite, simple C*-algebra with unit $1$. Can we find two projections $p,q$ both equivalent to the identity such that $1=p+q$ and $pq=0$? Well, there are two projections equivalent to $1$ such that $pq=0$ but what can we can we say about $p+q$?
You cannot always get $p + q = 1$. We can see this using K-theory. Suppose $p,q$ are orthogonal projections, both Murray–von Neumann equivalent to $1$, such that $p + q = 1$. Let $[\,\cdot\,]$ denote the class of a projection in $K_0$. Then we have $$[1] = [p+q] = [p] + [q] = [1] + [1].$$ Thus $[1] = 0$. But the class of the unit is not always zero in $K_0$ of a purely infinite C*-algebra. For instance, it is not zero in the Cuntz algebra $\mathcal{O}_3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the definition of a set? From what I have been told, everything in mathematics has a definition and everything is based on the rules of logic. For example, whether or not $0^0$ is $1$ is a simple matter of definition. My question is: what is the definition of a set? I have noticed that many other definitions start with a set and then something. A group is a set with an operation, an equivalence relation is a set, a function can be considered a set, even the natural numbers can be defined as sets of other sets containing the empty set. I understand that there is a whole area of mathematics (and philosophy?) that deals with set theory. I have looked at a book about this and I understand next to nothing. From what little I can get, it seems sets are "anything" that satisfies the axioms of set theory. It isn't enough to just say that a set is any collection of elements, because of various paradoxes. So is it right, for example, to define a set as anything that satisfies the ZFC list of axioms?
One of the reasons mathematics is so useful is that it is applicable to so many different fields. And the reason for that is that its logical structure starts with "undefined terms" that can then be given definitions appropriate to the field of application. "Set" is one of the most basic "undefined terms". Rather than defining it a-priori, we develop theories based on generic "properties" and allow those applying it to a particular field (another field of mathematics, or physics or chemistry, etc.) to use a definition appropriate to that field (as long as it has those "properties" of course).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "66", "answer_count": 13, "answer_id": 11 }
Limit at infinity for sequence $ n^2x(1-x^2)^n$ I'm supposed to prove that this sequence goes to zero as n goes to infinity. $$\lim_{n\to \infty} {n^2x (1-x^2)^n}, \mathrm{where~} 0 \le x \le 1$$ I've been trying a few things (geometric formula, rewriting $(1-x^2)^n$ as $\sum_{k=0}^n \binom{n}{k} (-1)^k x^{2k} $) and messing around with that. But I can't seem to get anywhere. I could be missing something key that I've forgotten. Can somebody point me in the right direction?
First let us observe that $|x(1-x^2)^n| \le (1-x^2)^n$ on the interval, since $0\le x\le 1$. On $[0,1]$ the function $1-x^2$ takes its largest value $1$ at $x=0$ and its smallest value $0$ at $x=1$ (easy to check), and it is monotonically decreasing, so $(1-x^2)^n$ is bounded by $1$. Now fix $x_0\in(0,1)$ and write $(1-x_0^2)^n = e^{n\log(1-x_0^2)}$ with $\log(1-x_0^2)<0$: the factor $(1-x_0^2)^n$ decays exponentially in $n$, and since exponential decay beats the polynomial factor $n^2$, we get $n^2(1-x_0^2)^n\to 0$ and hence $n^2x_0(1-x_0^2)^n\to 0$. At the endpoints $x_0=0$ and $x_0=1$ every term of the sequence is $0$. So the sequence tends to $0$ pointwise for every $x$ in the interval.
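A quick numeric illustration at a fixed $x$ (note the terms first grow before the exponential decay takes over):

```python
x = 0.1
for n in (10, 100, 1000, 5000, 20000):
    print(n, n**2 * x * (1 - x*x)**n)   # eventually tends to 0
```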
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
Generalized Fresnel integral $\int_0^\infty \sin x^p \, {\rm d}x$ I am stuck at this question. Find a closed form (that may actually contain the Gamma function) of the integral $$\int_0^\infty \sin (x^p)\, {\rm d}x$$ I am interested in a Laplace approach, double integral etc. For some weird reason I cannot get it to work. I am confident that a closed form may actually exist, since for the integral $$\int_0^\infty \cos x^a \, {\rm d}x = \frac{\pi \csc \frac{\pi}{2a}}{2a \Gamma\left(1-\frac{1}{a}\right)}$$ there exists a closed form with $\Gamma$, and it can actually be reduced further until it contains no $\Gamma$. But trying to apply the method of Laplace transforms that I have seen, I cannot get it to work for the integral I am interested in. May I have some help?
The integral evaluates to $$ \int_0^{\infty}\sin x^a\ dx=\Gamma\left(1+\frac{1}{a}\right)\sin\frac{\pi}{2a}, $$ but the way I know uses complex analysis. Added by request: We will integrate the function $\exp(-z^a)$ around the circular wedge of radius $R$ and opening angle $\pi/(2a)$, for $a>1$. By Cauchy's Theorem (the integrand is entire, so the contour integral vanishes), $$ 0=\int_0^R\exp(-x^a)\ dx+\int_0^{\pi/(2a)}\exp(-R^ae^{ia\theta})iRe^{i\theta}\ d\theta-e^{i\pi/(2a)}\int_0^R\exp(-ix^a)\ dx. $$ The middle integral tends to $0$ as $R\to\infty$: using $\cos(a\theta)\ge 1-\frac{2a\theta}{\pi}$ on $\left[0,\frac{\pi}{2a}\right]$, it is $O(R^{1-a})$. Sending $R\to\infty$ yields $$ e^{i\pi/(2a)}\int_0^{\infty}\exp(-ix^a)\ dx=\int_0^{\infty}\exp(-x^a)\ dx. $$ To compute the latter integral, let $t=x^a$ so that $dt/t=a\ dx/x$. This yields $$ \int_0^{\infty}\exp(-x^a)\ dx=\frac{\Gamma(1/a)}{a}. $$ Putting everything together, $$ \int_0^{\infty}\exp(-ix^a)\ dx=e^{-i\pi/(2a)}\Gamma\left(1+1/a\right). $$ Taking imaginary parts yields the result.
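The real-axis step can be checked numerically; a short mpmath sketch (the value $a=3$ is an arbitrary choice):

```python
from mpmath import mp, quad, gamma, exp, inf, mpf

mp.dps = 20
a = mpf(3)
lhs = quad(lambda x: exp(-x**a), [0, inf])
print(lhs, gamma(1/a)/a)   # the two values agree
```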
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
What are some things we can prove must exist, but have no idea what they are? What are some things we can prove must exist, but have no idea what they are? Examples I can think of:
* Values of the Busy Beaver function: it is a well-defined function, but not computable.
* It had long been known that there must be self-replicating patterns in Conway's Game of Life, but the first such pattern was only discovered in 2010.
* By the well-ordering theorem, under the assumption of the axiom of choice, every set admits a well-ordering. (See also Is there a known well ordering of the reals?)
I'm surprised that the following two haven't shown up:
* What is the smallest Riesel number?
* What is the smallest Sierpiński number?
In both cases we know they exist because they are smaller than or equal to 509,203 and 78,557 respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 18, "answer_id": 7 }
Prove there is a line that cuts the area in half Suppose you are given two compact, convex sets $A$ and $B$ in the plane; prove there exists a line such that the areas of $A$ and $B$ are simultaneously divided in half. Can you help me with this proof? What I think I have to do is give a function that measures the area and then use the intermediate value theorem, but I don't know how to give this function explicitly. I'm a third-semester undergrad, so I cannot use very advanced tools. Thanks for any suggestions.
Okay, this is convoluted so bear with me. For any point $x$ in the plane you can construct a line, $L$, from the origin of the plane through $x$. For each real number $r$ (positive or negative) you can construct a line perpendicular to $L$ at signed distance $r$ from the origin. Precisely one of these lines, at a specific distance $r$ from the origin, will cut $A$ in half. (Basically, by the intermediate value theorem: consider the function that measures what portion of $A$ is cut off by the perpendicular line at distance $r$ from the origin. This is continuous, goes from $0$ to $100\%$, and is monotonic, so there is exactly one $r$ where the result is $50\%$.) Thus we can define $f\colon \text{Plane} \to \mathbb{R}$, where $f(x)$ is the real number $r$ for which the perpendicular line cuts $A$ in half. This is a continuous function. Define $g\colon \text{Plane} \to \mathbb{R}$, where $g(x)$ is the real number $r$ for which the perpendicular line cuts $B$ in half. Define $h(x) = f(x) - g(x)$. Since $h$ is continuous and $h(-x) = -h(x)$ (replacing $x$ by $-x$ keeps the line $L$ but reverses the sign convention for $r$), the intermediate value theorem applied along a circle around the origin gives a point $y$ in the plane where $h(y) = 0$. Then $f(y) = g(y) = $ some real number $r$. Take the line at distance $r$ from the origin, perpendicular to the line from $y$ to the origin; this line will cut both $A$ and $B$ in half.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1452917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
ZF without regularity allows P(A) member of A Last week, I was doing a bit of reading around on some axiom of regularity(/foundation)-related questions, and found an answer in one of them (which I cannot seem to locate, for the life of me, right now) which stated that, although it was a fairly technical result, one could show, in ZF-reg, that $P(A) \in A$ is possible. Well, that sort of made my brain fizz a little, and so I've been thinking about it for the past few days, but cannot find any documentation relating to the result. If anyone could send me in the right direction, or provide an example or outline of such, it would be much appreciated!
No, it most certainly does not imply that $\mathcal P(A)\in A$. Simply because $\mathcal P(\varnothing)\notin\varnothing$. Moreover, you get an entire model of $\sf ZF$ inside any model of $\sf ZF-Reg$. So there will be plenty of sets which do not have $\mathcal P(A)$ as their element. But it is consistent. The easiest way would be to take one of the "Anti-Foundation Axioms" which assert that the failure of regularity (also known as foundation) is extensive. How extensive? For example, "Every possible graph is realized". Then you only need to show that you can draw a graph of a set and its power set, such that the power set is an element of the set. You can do it directly using permutation models. I cannot give a reference, since I learned it in class a few years ago, and the notes are in Hebrew, but one of the exercises was to show that it is consistent to have $\mathcal P(A)\in A$. If my memory serves me right, $A$ had three elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
function $f(x)=x^{-(1/3)}$, please check whether my solution is correct? Let $f(x)=x^{-(1/3)}$ and $A$ denote the area of the region bounded by $f(x)$ and the X-axis, when $x$ varies from $-1$ to $1$. Which of the following statements is/are TRUE? 1. $f$ is continuous in $[-1, 1]$; 2. $f$ is not bounded in $[-1, 1]$; 3. $A$ is nonzero and finite. My attempted explanation (the graph of $f$ is omitted here): 1. False, since the left limit does not equal the right limit. 2. True, since $f(x)$ rises to $\infty$ near $x=0$. 3. True, since we can calculate it by integrating the function.
The following statements certainly have standard definitions: 1) "$f$ is continuous at $x$" ; 2) "$f$ is continuous". For the function $f(x) = x^{-1/3}$, it's certainly true that "$f$ is continuous" and that "$f$ is not continuous at $0$". However, what about the statement, "$f$ is continuous on $[a,b]$"? Perhaps there is some confusion or disagreement over the meaning of this statement. I think that the statement "$f$ is continuous on $[a,b]$" means that "if $x \in [a,b]$ then $f$ is continuous at $x$". By that definition, statement 1) is false because the function $f(x) = x^{-1/3}$ is not continuous at $0$. So, for $f(x) = x^{-1/3}$, the statement "$f$ is continuous" is true, but the statement "$f$ is continuous on $[-1,1]$" is false.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Closed form of: $\displaystyle \int_0^{\pi/2}x^{n}\ln{(\sin x)}dx $ $\displaystyle \int_0^{\pi/2}x^{n}\ln{(\sin x)}dx $ Does a closed form of the above integral exist? $n$ is a positive integer
I want to add a proof which uses complex analysis: First observe that our integral can be split as follows: $$ I_n=-\underbrace{\int_0^{\pi/2}dx\left(ix^{n+1}+\log(\frac{1}{2i})x^n\right)}_{A_n}+\underbrace{\int_0^{\pi/2}dxx^n\log(1-e^{-2ix})}_{B_n} $$ The first part is trivial; evaluating at the endpoints yields $A_n=\log\left(\frac{1}{2i}\right)\frac{(\pi/2)^{n+1}}{n+1}+i\frac{(\pi/2)^{n+2}}{n+2}.$ To calculate $B_n$, define the complex valued function $$ f(z)=z^n\log(1-e^{-2iz}) $$ Now we integrate this function over a rectangle in the complex plane with vertices $\{0,\frac{\pi}{2},\frac{\pi}{2}-iR,-iR\}$. Because $f(z)$ is analytic in the chosen domain, the contour integral yields zero (note that we have taken the limit $R\rightarrow\infty$, where the lower horizontal side vanishes) $$ \oint f(z)=\underbrace{\int_0^{\pi/2}dxx^n\log(1-e^{-2ix})}_{B_n}+i\underbrace{\int_0^{\infty}dy(\frac{\pi}{2}-iy)^n\log(1+e^{-2y})}_{K_n}+i\underbrace{\int_0^{\infty}dy(-iy)^n\log(1-e^{-2y})}_{J_n}=0 $$ $K_n$ and $J_n$ are now straightforwardly calculated by using the Taylor expansion of $\log(1+z)$ and the binomial theorem $$ K_n=\int_0^{\infty}dy(\frac{\pi}{2}-iy)^n\log(1+e^{-2y})=\sum_{k=0}^n\binom{n}{k}\left(\frac{\pi}{2}\right)^{n-k}(-i)^k\int_0^{\infty}dy\sum_{q=1}^{\infty}(-1)^{q+1}\frac{y^ke^{-2yq}}{q}=\\ \sum_{k=0}^n\binom{n}{k}\left(\frac{\pi}{2}\right)^{n-k}(-i)^k\frac{k!}{2^{k+1}}\sum_{q=1}^{\infty}\frac{(-1)^{q+1}}{q^{k+2}}=\\ \sum_{k=0}^n\binom{n}{k}\left(\frac{\pi}{2}\right)^{n-k}(-i)^k\frac{k!}{2^{k+2}}\left(1-\frac{1}{2^{k+1}}\right)\zeta(k+2) $$ by the same technique $$ J_n=\int_0^{\infty}dy(-iy)^n\log(1-e^{-2y})=(-i)^n \frac{n!}{2^{n+2}} \zeta(n+2) $$ yielding: $$ I_n=B_n+A_n=J_n+K_n+A_n=\\ (-i)^{n+1} \frac{n!}{2^{n+1}} \zeta(n+2)+\sum_{k=0}^n\binom{n}{k}\left(\frac{\pi}{2}\right)^{n-k}(-i)^{k+1}\frac{k!}{2^{k+2}}\left(1-\frac{1}{2^{k+1}}\right)\zeta(k+2)\\ -i\frac{\pi^{n+2}}{2^{n+2}}\frac{2n+1}{(n+2)(n+1)}-\log(2)\frac{\pi^{n+1}}{2^{n+1}} $$ Please note that the imaginary part of the right hand side has to be zero (because our integral is real), which gives us a disturbing summation formula: $$ \Im\left[\sum_{k=0}^n\binom{n}{k}\left(\frac{\pi}{2}\right)^{n-k}(-i)^{k+1}\frac{k!}{2^{k+2}}\left(1-\frac{1}{2^{k+1}}\right)\zeta(k+2)\right]=\\ (-1)^{n} \frac{n!}{2^{n+2}} \zeta(n+1)\delta_{n+1,2j}+\frac{\pi^{n+2}}{2^{n+2}}\frac{2n+1}{(n+2)(n+1)} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Disjoint Set Proof Let $A_1, A_2, \ldots$ be an arbitrary infinite sequence of events, and let $B_1, B_2, \ldots$ be another infinite sequence of events defined as follows: $B_1 = A_1$, $B_i = A_i \setminus \bigcup^{i-1}_{j=1} A_j$ for $i = 2, 3, \ldots$ Prove that $B_1, B_2, \ldots$ is a disjoint collection of events: I have no idea how to tackle this? I understand what a disjoint collection is, but I don't know where to begin with this one? Any help is appreciated
I'll sketch a proof. Use induction on $B_1,...,B_n$. $B_1$ by itself is obviously a family of disjoint sets, so the base case is easy. Let $m=n+1$. If $B_1,...,B_n$ are disjoint, it shouldn't be too hard to show that $B_1,...,B_m$ are disjoint. Basically, $B_m$ is disjoint from the sets $A_1,...,A_n$, which are supersets of $B_1,...,B_n$. Then deduce that for any $B_i$ and $B_j$ with $i \neq j$, $B_i$ and $B_j$ are disjoint. This means that $B_1,...$ are pairwise disjoint. Upon further thought, induction is just overkill. If you have natural numbers $i,j$ with $i \neq j$, then just pick the larger number. WLOG, assume $j>i$. Then $B_j$ is disjoint from $A_i$ by construction, and $A_i$ is a superset of $B_i$. So the family is pairwise disjoint.
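The disjointification is easy to see in action; a tiny Python sketch with made-up sets:

```python
A = [{1, 2, 3}, {2, 3, 4}, {5}, {1, 4, 6}]

B, union_so_far = [], set()
for Ai in A:
    B.append(Ai - union_so_far)   # B_i = A_i \ (A_1 ∪ ... ∪ A_{i-1})
    union_so_far |= Ai

print(B)   # [{1, 2, 3}, {4}, {5}, {6}]
assert all(B[i].isdisjoint(B[j]) for i in range(len(B)) for j in range(i))
```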
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving limits without using L'Hôpital's rule $$\lim_{x \to \frac{\pi}{2}}\frac{b(1-\sin x) }{(\pi-2x)^2}$$ I had been solving questions like these using L'Hôpital's rule for weeks. But today, a day before the examination, I got to know that its usage has been 'banned', since we were never officially taught L'Hôpital's rule. Now I am at a loss how to solve questions I breezed through previously. While it is not possible to learn in 18 hours methods that would cover all kinds of problems, I am hoping I can pick up enough of them to salvage the examination tomorrow. It has been hinted to me that the above could be solved using trigonometric techniques, but I'm at a loss as to how.
Hint: Use trigonometry identities: $$\sin(x)=\cos\left(\frac{\pi}{2}-x\right)\\\sin\frac{t}{2}=\sqrt{\frac{1}{2}(1-\cos t)}$$ Specifically, set $t=\frac{\pi}{2}-x$ then you want: $$\lim_{t\to 0} \frac{b(1-\cos t)}{4t^2}$$ Then use the trig identities above, replacing $1-\cos t$.
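Carrying the hint through: $1-\cos t = 2\sin^2(t/2) \sim t^2/2$, so the limit is $b/8$. A quick numeric check (plain Python, with an illustrative value of $b$):

```python
import math

b = 5.0
for x in (math.pi/2 - 1e-2, math.pi/2 - 1e-4, math.pi/2 - 1e-6):
    print(b * (1 - math.sin(x)) / (math.pi - 2*x)**2)   # -> b/8 = 0.625
```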
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 2 }
Integral $\int_{0}^{1}\frac{\log^{2}(x^{2}-x+1)}{x}dx$ Here is an integral I derived while evaluating another. It appears to be rather tough, but some here may not be so challenged :) Show that: $$\int_{0}^{1}\frac{\log^{2}(x^{2}-x+1)}{x}dx=\frac{11}{9}\zeta(3)-\frac{\pi}{72\sqrt{3}}\left(5\psi_{1}\left(\frac13\right)+4\psi_{1}\left(\frac23\right)-3\psi_{1}\left(\frac56\right)\right)$$ $$=\frac{11}{9}\zeta(3)+\frac{4\pi^{3}}{27\sqrt{3}}-\frac{2\pi}{9\sqrt{3}}\psi_{1}\left(\frac13\right)=\frac{11}{9}\zeta(3)-\frac{4\pi}{9}\operatorname{Cl}_{2}\left(\frac{\pi}{3}\right)$$ $$=\operatorname{Cl}_{3}\left(\frac{\pi}{3}\right)-2\operatorname{Cl}_{3}\left(\frac{2\pi}{3}\right)-\frac{4\pi}{9}\operatorname{Cl}_{2}\left(\frac{\pi}{3}\right)$$ I attempted all kinds of 'starts' to no satisfactory end, but things look promising. There are some mighty sharp folks here that may be better at deriving the solution. I thought perhaps the identity: $$\frac{\log^{2}(1-(x-x^{2}))}{x}=2\sum_{n=1}^{\infty}\frac{H_{n}}{n+1}x^{n}(1-x)^{n+1}$$ or the Beta function could be used if given enough ingenuity. This led me to the no-less-imposing Euler/reciprocal of central binomial coefficients sum below. It would be great to just show the middle sum is equivalent to the right sum: $$1/4\sum_{n=1}^{\infty}\frac{H_{n}n\Gamma^{2}(n)}{(n+1)(2n+1)\Gamma(2n)}=1/2\sum_{n=1}^{\infty}\frac{H_{n}}{(n+1)(2n+1)\binom{2n}{n}}=1/3\zeta(3)-2/3\sum_{n=1}^{\infty}\frac{1}{n^{3}\binom{2n}{n}}$$ Is there a general form for $$\sum_{n=1}^{\infty}\frac{H_{n}}{\binom{2n}{n}}x^{n}?$$ I tried starting with the identity: $$\sum_{n=1}^{\infty}\frac{\Gamma^{2}(n)}{\Gamma(2n)}x^{n-1}=\frac{4\sin^{-1}\left(\frac{\sqrt{x}}{2}\right)}{\sqrt{x(4-x)}}$$ and using various manipulations to hammer it into the needed form. This, too, turned monstrous. There appears to be a relation to Clausen functions (as with other log integrals such as $\int_{0}^{1}\frac{\log(x)}{x^{2}-x+1}dx$), to wit — here $\operatorname{Cl}_{2}$ denotes the sine (order $2$) Clausen function and $\operatorname{Cl}_{3}$ the cosine (order $3$) one: $$\operatorname{Cl}_{2}\left(\frac{\pi}{3}\right)=\sum_{k=1}^{\infty}\frac{\sin(\frac{\pi k}{3})}{k^{2}}=\frac{\sqrt{3}}{72}\left(\psi_{1}(1/6)+\psi_{1}(1/3)-\psi_{1}(2/3)-\psi_{1}(5/6)\right)$$ $$=\frac{\sqrt{3}}{6}\psi_{1}(1/3)-\frac{\pi^{2}\sqrt{3}}{9}$$ and $$\operatorname{Cl}_{3}\left(\frac{\pi}{3}\right)-2\operatorname{Cl}_{3}\left(\frac{2\pi}{3}\right)=\sum_{k=1}^{\infty}\frac{\cos(\frac{\pi k}{3})}{k^{3}}-2\sum_{k=1}^{\infty}\frac{\cos(\frac{2\pi k}{3})}{k^{3}}=\frac{11}{9}\zeta(3)$$ Another approach. I also broke the integral up as such: $$\int_{0}^{1}\frac{\log^{2}(x^{2}-x+1)}{x}dx=\int_{0}^{1}\frac{\log^{2}(1-xe^{\frac{\pi i}{3}})}{x}dx+2\int_{0}^{1}\frac{\log(1-xe^{\pi i/3})\log(1-xe^{-\pi i/3})}{x}dx+\int_{0}^{1}\frac{\log^{2}(1-xe^{-\pi i/3})}{x}dx$$ The middle integral right of the equal sign is the one that has given me the fit. I think this is a fun and head-scratchin' integral that has led me to other discoveries. Maybe a generalization could be obtained with other powers of log, such as n = 3, 4, etc. I wonder if they can also be evaluated in terms of Clausens and then put into closed forms involving $\zeta(n+1)$ and derivatives of the digamma, $\psi_{n-1}(z)$? Another easier one is $$\int_{0}^{1}\frac{\log(x^{2}-x+1)}{x}dx=\frac{-\pi^{2}}{18}=\frac{-1}{3}\zeta(2)$$
Asset at our disposal: $$\sum\limits_{n=0}^{\infty} \frac{x^{2n+2}}{(n+1)(2n+1)\binom{2n}{n}} = 4(\arcsin (x/2))^2$$ Differentiation followed by the substitution $x \to \sqrt{x}$ gives: $\displaystyle \sum\limits_{n=0}^{\infty} \frac{x^{n}}{(2n+1)\binom{2n}{n}} = \frac{2\arcsin (\sqrt{x}/2)}{\sqrt{x}\sqrt{1-(\sqrt{x}/2)^2}}$ Thus, we split the series as: $$ \sum\limits_{n=0}^{\infty} \frac{H_n}{(n+1)(2n+1)\binom{2n}{n}} \\= \sum\limits_{n=0}^{\infty} \frac{H_{n+1}}{(n+1)(2n+1)\binom{2n}{n}} - \sum\limits_{n=0}^{\infty} \frac{1}{(n+1)^2(2n+1)\binom{2n}{n}}$$ The first series can be dealt with using $\displaystyle\frac{H_{n+1}}{n+1} = -\int_0^1 x^n\log(1-x)\,dx$: \begin{align*}\sum\limits_{n=0}^{\infty} \frac{H_{n+1}}{(n+1)(2n+1)\binom{2n}{n}}&= -\sum\limits_{n=0}^{\infty} \int_0^1 \frac{x^n\log(1-x)}{(2n+1)\binom{2n}{n}}\,dx\\ &= -2\int_0^1 \frac{\arcsin (\sqrt{x}/2)\log (1-x)}{\sqrt{x}\sqrt{1-(\sqrt{x}/2)^2}}\,dx\\ &= -8\int_0^{1/2} \frac{\arcsin x \cdot \log (1-4x^2)}{\sqrt{1-x^2}}\,dx\\ &= -8\int_0^{\pi/6} \theta \log (1-4\sin^2 \theta)\,d\theta\\ &= -8\int_0^{\pi/6} \theta \log \left(4\sin\left(\theta + \frac{\pi}{6}\right)\sin\left(\frac{\pi}{6}-\theta\right)\right)\,d\theta \end{align*} Using the Fourier Series, $\displaystyle \log (2\sin \theta) = -\sum\limits_{n=1}^{\infty} \frac{\cos 2n\theta}{n}$ we get: \begin{align*}&\int_0^{\pi/6} \theta\log \left(2\sin\left(\frac{\pi}{6}+\theta\right)\right)\,d\theta \\&= -\sum\limits_{n=1}^{\infty} \int_0^{\pi/6} \frac{\theta\cos \left(\dfrac{n\pi}{3}+2n\theta\right)}{n}\,d\theta\\&= -\frac{\pi}{12}\sum\limits_{n=1}^{\infty} \frac{\sin (2n\pi/3)}{n^2}-\frac{1}{4}\sum\limits_{n=1}^{\infty} \frac{\cos (2n\pi/3)}{n^3} +\frac{1}{4}\sum\limits_{n=1}^{\infty} \frac{\cos (n\pi/3)}{n^3}\end{align*} and, \begin{align*}&\int_0^{\pi/6} \theta\log \left(2\sin\left(\frac{\pi}{6}-\theta\right)\right)\,d\theta \\&= -\sum\limits_{n=1}^{\infty} \int_0^{\pi/6} \frac{(\pi/6 - \theta)\cos \left(2n\theta\right)}{n}\,d\theta\\&= -\frac{1}{4}\zeta(3)+\frac{1}{4}\sum\limits_{n=1}^{\infty}\frac{\cos (n\pi/3)}{n^3}\end{align*} Hence, $$\sum\limits_{n=0}^{\infty}\frac{H_{n+1}}{(n+1)(2n+1)\binom{2n}{n}} = -\frac{2}{9}\zeta(3) + \frac{2\pi}{3}\sum\limits_{n=1}^{\infty}\frac{\sin (2n\pi/3)}{n^2}$$ Similarly we may deal with the second series: \begin{align*}\sum\limits_{n=0}^{\infty} \frac{1}{(n+1)^2(2n+1)\binom{2n}{n}} &= 8\int_0^{1/2} \frac{\arcsin^2 (x)}{x}\,dx \\&= -4\zeta(3)+4\sum\limits_{n=1}^{\infty}\frac{\cos (n\pi/3)}{n^3}+\frac{4\pi}{3}\sum\limits_{n=1}^{\infty} \frac{\sin (n\pi/3)}{n^2}\end{align*} Combining the results we get: \begin{align*}\sum\limits_{n=1}^{\infty} \frac{H_n}{(n+1)(2n+1)\binom{2n}{n}} &= \frac{22}{9}\zeta(3) - \frac{4\pi}{3}\sum\limits_{n=1}^{\infty} \frac{\sin (2n\pi/3)}{n^2} \\&= \frac{22}{9}\zeta(3) - \frac{2\pi}{9\sqrt{3}}\left(\psi'\left(\frac{1}{3}\right) - \psi'\left(\frac{2}{3}\right)\right)\end{align*}
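The final closed form can be checked numerically; a short mpmath sketch (the truncation level is an arbitrary choice):

```python
from mpmath import mp, mpf, zeta, psi, binomial, harmonic, pi, sqrt

mp.dps = 25
s = sum(harmonic(n) / ((n + 1) * (2 * n + 1) * binomial(2 * n, n))
        for n in range(1, 60))
closed = (mpf(22) / 9 * zeta(3)
          - 2 * pi / (9 * sqrt(3)) * (psi(1, mpf(1) / 3) - psi(1, mpf(2) / 3)))
print(s)        # ≈ 0.104111...
print(closed)   # matches the series
```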
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453633", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Integration involving negative roots yields two answers? I have this problem that is driving me insane. I am asked to evaluate: $$\int^1_{-1} \left(\sqrt[3]{t} -2\right) dt$$ Now I antidifferentiate this to $$\left[\frac{3t^{4/3}}{4}-2t\right]^1_{-1}$$ From here I work the problem as follows: $$\left[\frac{3(1)^{4/3}}{4}-2(1)\right]-\left[\frac{3(-1)^{4/3}}{4}-2(-1)\right]$$ $\Longrightarrow$ $$\left[\frac{3}{4}-2\right]-\left[\frac{3}{4}+2\right]$$ $\Longrightarrow$ $$-4$$ However, the answer I get on online calculators (Wolfram, http://www.wolframalpha.com/widgets/view.jsp?id=8ab70731b1553f17c11a3bbc87e0b605) is a complex number instead. When it comes to $(-1)^{4/3}$, it should equal $1$, because $(-1)^4=1$ and $\sqrt[3]{1}=1$; equivalent is the reasoning $\sqrt[3]{-1}=-1$ and $(-1)^4=1$. Is there something I am missing, or another way to evaluate $(-1)^{4/3}$? And if so, what makes that answer stronger than this one? In other words, shouldn't both answers be valid? Thanks in advance.
You are correct. The answer that Wolfram gives is not correct in the context. Note that there are also two complex third roots of $-1$, they are $$\frac{1 + i \sqrt{3}}{2}, \frac{1 - i \sqrt{3}}{2}$$ Wolfram is using the first of these.
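For what it's worth, Python 3 makes the same principal-branch choice as Wolfram; this little sketch shows $(-1)^{4/3}$ evaluated via the principal logarithm:

    import cmath

    # Principal value: (-1)^(4/3) = exp((4/3) * Log(-1)) = exp(4*pi*i/3)
    print((-1) ** (4 / 3))                     # ~ -0.5 - 0.866j, not 1
    print(cmath.exp((4 / 3) * cmath.log(-1)))  # same value

This is why the calculator's antiderivative evaluation at $t=-1$ picks up an imaginary part, while the real-valued cube root gives $-4$.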
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the sum of the cubes of the roots of $ x^3 + x^2 - 2x + 1=0$? I know there are real roots, because if we treat the left-hand side as a function and evaluate it at $-3$ and $1$: $$ (-3)^3 + (-3)^2 - 2(-3) + 1 <0 $$ $$ 1^3 + 1^2 - 2(1) + 1 > 0 $$ It must have a root in $[-3,1]$. However, the roots are hard to find explicitly, and this appeared on a high school test. How can I solve it simply? The given options were $-10$, $-5$, $0$, $5$ and $10$. Note: we didn't even learn the cubic formula. The test had just logic problems, and I didn't use any calculus or complicated stuff. So there must be an easier way without using cube-root concepts or formulas.
Using the equation itself, each root satisfies $r_i^3=-r_i^2+2r_i-1$, and hence $$ \sum_{i=1}^3 r_i^3 = -\sum_{i=1}^3 (r_i^2 - 2r_i + 1) = -\sum_{i=1}^3 r_i^2+2\sum_{i=1}^3 r_i-3 = -5 -\sum_{i=1}^3 r_i^2, $$ since $\sum_{i=1}^3 r_i=-1$ by Vieta's formulas. Also, from the well-known expressions giving the elementary symmetric functions of the roots in terms of the coefficients, $r_1r_2+r_2r_3+r_3r_1=-2$, so $$ \sum_{i=1}^3 r_i^2 = \Big(\sum_{i=1}^3 r_i\Big)^2 - 2(r_1r_2+r_2r_3+r_3r_1) = 1+4 = 5 $$ so the sum of the cubes of the roots is $-5-5=-10$.
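As a quick numerical sanity check (a numpy sketch, not needed for the test itself):

    import numpy as np

    roots = np.roots([1, 1, -2, 1])    # coefficients of x^3 + x^2 - 2x + 1
    print(sum(r**3 for r in roots))    # ~ -10 (tiny imaginary part from rounding)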
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Are these two statements logically equivalent? XOR and If/then Consider the statement "Everyone in Florida plays either basketball or soccer, but no one plays both basketball and soccer." Let $B(x)$ be the propositional function "$x$ plays basketball", and $S(x)$ be "$x$ plays soccer". I am expressing this statement using quantifiers, and am curious if the following are logically equivalent. * *$\forall x (B(x) \oplus S(x)) $ logically equivalent to *$\forall x ((B(x) \implies \neg S(x)) \land (S(x) \implies \neg B(x))) $
No. The second formula can be satisfied by a population of couch potatoes: $\neg B(x) \land \neg S(x)$ is consistent with the second formula, but not with the first.
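For a concrete check, a small Python sketch enumerating all four truth assignments shows the two formulas agree except when $B(x)$ and $S(x)$ are both false:

    from itertools import product

    for b, s in product([False, True], repeat=2):
        xor = b != s                                          # B(x) xor S(x)
        imp = ((not b) or (not s)) and ((not s) or (not b))   # (B -> ~S) and (S -> ~B)
        print(b, s, xor, imp)   # they differ only at (False, False)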
{ "language": "en", "url": "https://math.stackexchange.com/questions/1453945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $V=L$ a single first-order sentence? Is $V=L$, the axiom of constructibility, a single first-order sentence? Since $V=L$ really stands for $(\forall x)(\exists \alpha)[\alpha \in \mathrm{On} \wedge x \in L_\alpha]$, so my question might as well be "is $x \in L_\alpha$ a single first-order formula?". I was confused about this since prima facie the definition of $L$ involves arbitrary first-order formulas and a satisfaction predicate for them. Of course one can appeal to the recursive definition by using Goedel's operations, but I am still unsure whether all the details work out.
Yes, it is a first-order statement. Recall that the definition of $L_\alpha$ is an inductive definition. $L_0=\varnothing$, $L_{\alpha+1}=\operatorname{Def}(L_\alpha,\in)$ and for limit ordinals $L_\alpha=\bigcup_{\gamma<\alpha}L_\gamma$. It might seem that the successor definition is not "internal", but in fact it is. It uses the internal definition of truth,$^1$ so it is again another definition using complicated inductions. Finally, remember that the replacement schema allows us to take an inductive definition, over all the ordinals, and replace it with a single first-order statement. Much like how induction is internalized in Peano arithmetic. * *Remember that the way we define first-order-logic-related concepts is really by induction. We have notions of strings, and when a string is a term, etc.; and a notion of a structure, and an assignment function, and then a truth value (after assigning the free variables values, of course). But since this is really just a long long long list of inductions, and the core concept of "strings" is already easy to formalize internally (simply finite sequences from a fixed alphabet), we can write a definition for what the universe "thinks" is the set of strings, formulas and so on. So all this collapses to a very complicated, very long, very uninteresting formula that we know we can write.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Obtaining a derivative using limit definition We have the following limit $$ \lim_{h \to 0} \dfrac{(x+h)^{1/4}-x^{1/4}}{h}$$ I want to find this limit (which I figure is just the derivative of $x^{1/4}$) using only elementary methods (algebra, mostly). So you can rewrite the function as $\dfrac{4}{h}(\sqrt{x+h} - \sqrt{x})$, but you still have that $h$ in the denominator which makes it impossible to take the limit. I can't seem to rewrite this in a way that I get the expected answer, can anyone give a hand?
Use substitution: set $y=x^{\tfrac14}, \enspace k=(x+h)^{\tfrac14}-x^{\tfrac14}$. Note $k\to 0\;$ as $h\to 0$, and $h=(y+k)^4-y^4$. Let's rewrite the difference quotient: $$\frac{(x+h)^{1/4}-x^{1/4}}{h}=\frac k{(y+k)^4-y^4}=\frac k{4y^3k+6y^2k^2+4yk^3+k^4}=\frac 1{4y^3+6y^2k+4yk^2+k^3},$$ which tends to $$\frac1{4y^3}=\frac1{4x^{3/4}}$$ as $h$ (or $k$) tends to $0$.
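If you want to double-check the algebra with a CAS, a one-liner in sympy (assuming it is installed) confirms the limit:

    import sympy as sp

    x, h = sp.symbols('x h', positive=True)
    expr = ((x + h) ** sp.Rational(1, 4) - x ** sp.Rational(1, 4)) / h
    print(sp.limit(expr, h, 0))   # x**(-3/4)/4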
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Evaluating a complex line integral Suppose $f(z) = y$. I want to find $\int_{\gamma} y\,dz$ where $\gamma$ is the curve consisting of the line segments from $0$ to $i$ and then from $i$ to $i+2$. Try: Let $\gamma_1(t) = it$ where $ 0 \leq t \leq 1 $ and $\gamma_2(t) = t + i$ where $0 \leq t \leq 2 $. So, $$ \int_{\gamma} y\,dz = \int_{\gamma_1} y\, dz + \int_{\gamma_2} y\, dz = \int_0^1 it \cdot i\, dt + \int_0^2 i\, dt = \int_0^1 - t\, dt + 2i = -\frac{1}{2} + 2i $$ Is this correct?
We have $f(z)=y$ on $\gamma$. On $\gamma_1$, $z=it$ and $dz=idt$. Thus, $y=t$ and $$\int_{\gamma_1}f(z)\,dz=\int_0^1t(idt)=\frac{i}{2}$$ On $\gamma_2$, $z=t+i$ and $dz=dt$. Thus, $y=1$ and $$\int_{\gamma_2}f(z)\,dz=\int_0^21(dt)=2$$ Adding the two pieces, $$\int_{\gamma}y\,dz=2+\frac{i}{2}.$$ (In your $\gamma_1$ computation the integrand was taken to be $it$, but the integrand is $y=\operatorname{Im}(it)=t$; that is where the discrepancy comes from.)
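A rough numerical cross-check (a numpy sketch using left Riemann sums over the two parametrisations above):

    import numpy as np

    # gamma_1(t) = i t, t in [0,1]:    y = t, dz = i dt
    t = np.linspace(0, 1, 100000, endpoint=False)
    I1 = np.sum(t * 1j) * (t[1] - t[0])
    # gamma_2(t) = t + i, t in [0,2]:  y = 1, dz = dt
    s = np.linspace(0, 2, 100000, endpoint=False)
    I2 = np.sum(np.ones_like(s)) * (s[1] - s[0])
    print(I1 + I2)   # ~ 2 + 0.5j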
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the maximum of a polynomial? How would I find the value of $p$ at which the function $f$ attains its maximum: $f(p)= 100 p(1-p)^{99}$, $p\geq0$? And in general, can setting the derivative equal to $0$ and solving for the variable lead to the values in the function's domain that produce the maximum value?
Here is another way; clearly we seek $p \in (0, 1)$, so $p, 1-p$ are both positive. $$\frac{99}{100}f(p) = (99p) \cdot \underbrace{(1-p)(1-p)\cdots(1-p)}_{99 \text{ times}} $$ Now the RHS is the product of $100$ terms, which sum to a constant, viz. $99$. Therefore its maximum is when all terms are equal, viz. $\dfrac{99}{100}$ (this can be shown easily with AM-GM). $\implies p = \dfrac1{100}$.
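To spell out the AM-GM step: for $0<p<1$ all $100$ factors are positive, so $$\sqrt[100]{(99p)(1-p)^{99}}\;\le\;\frac{99p+99(1-p)}{100}=\frac{99}{100},$$ with equality precisely when $99p=1-p$, i.e. $p=\frac{1}{100}$. Hence $$f(p)=\frac{100}{99}\,(99p)(1-p)^{99}\le\frac{100}{99}\left(\frac{99}{100}\right)^{100},$$ attained at $p=\frac{1}{100}$.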
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluate $\int_{\gamma} \frac{ |dz| }{z} $ where $\gamma $ is the unit circle I need to evaluate $\int_{\gamma} \frac{ |dz| }{z} $ where $\gamma $ is the unit circle. Attempt: Let $\gamma(t) = e^{it} $ where $t \in [0,2 \pi]$. How can I get rid of the absolute values in the differential?
The notation $\lvert dz\rvert$ denotes the arc-length measure. For a piecewise continuously differentiable curve $\alpha \colon [a,b] \to \mathbb{C}$ and a continuous function $f$ defined (at least) on the set $\alpha([a,b])$, we have by definition $$\int_{\alpha} f(z)\,\lvert dz\rvert = \int_a^b f(\alpha(t))\cdot \lvert \alpha'(t)\rvert\,dt.$$ Note: The constraints on $\alpha$ and $f$ can be loosened, it suffices that $\alpha$ is rectifiable, and $f\circ\alpha$ Borel measurable and "not too large in absolute value". But the typical situation is that one has a piecewise continuously differentiable path and a continuous function. If you plug in the definitions, you can directly evaluate your integral. However, it may be worth mentioning that in the particular case of circles, $\lvert dz\rvert$ transforms in a way that complex analysis methods to evaluate the integral become applicable (if the function is holomorphic on a suitable domain). Namely, for the parametrisation $\alpha(t) = z_0 + re^{it}$ of the circle $\lvert z-z_0\rvert = r$, we have $\alpha'(t) = ire^{it} = i(\alpha(t) - z_0)$, so we obtain $$\int_{\alpha} f(z)\,\lvert dz\rvert = \frac{r}{i}\cdot\int_{\lvert z-z_0\rvert = r} \frac{f(z)}{z-z_0}\,dz.$$
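Applied to the integral at hand, take $\gamma(t)=e^{it}$, $t\in[0,2\pi]$; then $\lvert\gamma'(t)\rvert=\lvert ie^{it}\rvert=1$, so $$\int_{\gamma}\frac{\lvert dz\rvert}{z}=\int_0^{2\pi}\frac{1}{e^{it}}\,dt=\int_0^{2\pi}e^{-it}\,dt=\left[\frac{e^{-it}}{-i}\right]_0^{2\pi}=0.$$ (Equivalently, the displayed formula with $z_0=0$, $r=1$, $f(z)=1/z$ gives $\frac{1}{i}\int_{\lvert z\rvert=1}\frac{dz}{z^2}=0$.)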
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
"Ambiguous up to scale" , Explanation required. I am reading "Computer Vision: Models, Learning, and Inference" in which author writes at several points (like on page 428-429) that although matrix A seems to have 'n' degree of freedom but since it is ambiguous up to scale so it only has 'n-1' degree of freedom? Can anyone explain what this thing means? Why one degree of freedom is decreased? Although it is in computer vision book but I think the answer is related to some properties of Linear algebra.
It might be illuminating to learn about projective spaces (just the idea); I will explain. Define $\mathbb{RP}^1$ (the so-called projective line) as the set of all straight lines in the plane going through the origin. Any such line is uniquely determined by specifying one of its points $(x,y) \neq 0$. However, for any nonzero $\lambda$ the points $(x,y)$ and $(\lambda x, \lambda y)$ are equivalent in the sense that they determine the same line. Clearly $\mathbb {RP}^1$ is one-dimensional: you can parameterize all of the lines (except the horizontal one, i.e. the one with $y=0$) by just giving the $x$ coordinate of the point on the line with $y=1$. Another way is to use the angle of inclination of your line to the $x$ axis as a coordinate. Your example is similar. Since scaling the matrix doesn't change anything, you are really interested in the set of all directions in the space of matrices. In principle you could specify such a direction by giving $n-1$ "angles" of some sort; or, if your matrices are nonsingular, impose the extra constraint $\det A=1$ on them to make the representative unambiguous. Quite often it turns out to be easier not to do this and to keep all coordinates, with one extra "scaling degree of freedom". That's because matrices are way easier to work with than some weird "projective matrix space".
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
what is a curve? Is the concept of derivative limited to curves only? I am trying to understand the derivative, and I want to know intuitive and rigorous definitions of a curve, and whether the derivative is limited only to curves or not.
A curve is the image of a map \begin{align*}\gamma :A\subset \mathbb R&\longrightarrow \mathbb R^n\\ t&\mapsto (\alpha_1(t),...,\alpha_n(t))\end{align*} where \begin{align*} \alpha_i:A&\longrightarrow \mathbb R\\ t&\mapsto \alpha_i(t). \end{align*} And yes, the derivative is limited to curves. For a surface or any other higher-dimensional object you speak of the differential, not the derivative. For a curve, the derivative and the differential coincide.
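For instance, $\gamma(t)=(\cos t,\sin t)$ with $A=[0,2\pi]$ traces out the unit circle in $\mathbb{R}^2$, and its derivative is computed componentwise: $\gamma'(t)=(-\sin t,\cos t)$.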
{ "language": "en", "url": "https://math.stackexchange.com/questions/1454919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proofs for Taylor's theorem and other forms Let $f \in C^k[a,b]$. Show that for $x,x_0 \in [a,b]$, $$f(x)=\sum\limits_{j=0}^{k-1}{{1\over j!}f^{(j)}(x_0)(x-x_0)^j}+{1\over k!}{\int_{x_0}^x f^{(k)}(t)(x-t)^k \,dt}$$ and after this use this result to prove the following forms of Taylor's theorem: $i)$ assuming the stronger condition $f \in C^{k+1}[a,b]$, show that $$f(x)=\sum\limits_{j=0}^{k}{{1\over j!}f^{(j)}(x_0)(x-x_0)^j}+O\left((x-x_0)^{k+1}\right)$$ $ii)$ without any higher-derivative assumption, show that $$f(x)=\sum\limits_{j=0}^{k}{{1\over j!}f^{(j)}(x_0)(x-x_0)^j}+o\left((x-x_0)^{k}\right)$$ I am trying to find a complete proof of these parts of Taylor's theorem, but I did not manage to find anything on the internet. Any proof would be appreciated.
Set $$ \varphi(x)=\sum_{j=0}^{k-1}\frac{f^{(j)}(x)}{j!}(z-x)^j $$ then $$ \varphi'(x)=\frac{f^{(k)}(x)}{(k-1)!}(z-x)^{k-1} $$ and thus $$ \varphi(z)-\varphi(x)=\int_x^z\frac{f^{(k)}(t)}{(k-1)!}(z-t)^{k-1}\,dt $$ Now change the variables $x$ to $x_0$ and then $z$ to $x$ to obtain $$ f(x)=\sum_{j=0}^{k-1}\frac{f^{(j)}(x_0)}{j!}(x-x_0)^j+\frac{1}{(k-1)!}\int_{x_0}^x f^{(k)}(t)(x-t)^{k-1}\,dt $$ So it seems the powers and factorials in the error term in your first formula are somewhat off. For the conclusions, take the simplest case, $k=1$, as an illustration. $$ f(x)=f(x_0)+f'(\tilde x)(x-x_0),\qquad \tilde x\in [x_0,x], $$ gives directly $f(x)=f(x_0)+O((x-x_0)^1)$ and, inserting a virtual zero, \begin{align} f(x)&=f(x_0)+f'(x_0)(x-x_0)+\Bigl(f'(\tilde x)-f'(x_0)\Bigr)(x-x_0)\\ &=f(x_0)+f'(x_0)(x-x_0)+o((x-x_0)^1) \end{align} for $f\in C^1(\Bbb R)$, by continuity of $f'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Equation $x^{n+1}-2x+1=0$ Show without derivatives that the equation $x^{n+1}-2x+1=0$, $n>1$, $n\in\mathbb{R}$, has at least one solution $x_0\in (0, 1)$. I have thought that I need to use Bolzano's theorem for $f(x)=x^{n+1}-2x+1$. So, we have: $$f(0)=1>0$$ $$f\left( \frac{1}{2}\right) =2^{-n-1}>0$$ $$f(1)=0$$ There is a problem here: I cannot find any number, or somehow use limit theorems, to produce some $a\in (0, 1)$ with $f(a)<0$. Any hint?
You want some $a \in(0,1)$ such that $f(a)<0$. Writing $b =\frac{1}{a}$ this is equivalent to finding a $b >1$ such that $$ 1-2b^n+b^{n+1} <0 $$ or $$2b^n > 1+ b^{n+1}$$ Now by Bernoulli we have $$b^n >1+n(b-1)$$ So by choosing $n(b-1)>1$ you have $b^n>2$ and hence $$\frac{1}{2}b^n>1 \,.$$ Also, as long as $b <\frac{3}{2}$ you have $$\frac{3}{2}b^n >b^{n+1}$$ Combining the two you get that for all $\frac{1}{n}+1< b < \frac{3}{2}$ we have $$2b^n >1+b^{n+1}$$ This solution works for $n \geq 3$, as we need $\frac{1}{n}+1< b < \frac{3}{2}$. For $n=2$ the inequalities are not strong enough, but in this case the problem is very easy to solve.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 4 }
Under what conditions does matrix multiplication commute? This is just a check on my reasoning, I guess. So for two matrices $A, B$ to commute, the following must hold: $$(AB)_{ij} = \sum_{k=1}^{n}a_{ik}b_{kj} = \sum_{k=1}^{n}b_{ik}a_{kj} = (BA)_{ij}$$ This can happen if for all $i, j, k$: a. $a_{ik}=a_{kj}$ and $b_{ik}=b_{kj}$, or b. $a_{ik}=b_{ik}$ and $a_{kj}=b_{kj}$, i.e. $A=B$, or c. $a_{ik} = 0$ or $b_{ik} = 0$, i.e. either matrix is null. Are there more possibilities? Edit: I originally had (a) as "Both matrices are symmetric", but as @user1551 points out, this is not true. After fixing the summations, I see where I was mistaken. I'm not sure how to characterize (a) now.
Commuting matrices are simultaneously triangularisable (over an algebraically closed field), i.e., there is some basis in which they are both triangular; the converse is not true in general. Roughly speaking, two matrices commute when they have the same eigenvectors, possibly with different eigenvalues. (But then there are degenerate cases, which make it all more complicated.) This property really has nothing to do with $A$ and $B$ being symmetric. Indeed, there are examples of matrices which are symmetric and don't commute... $$A=\left(\begin{matrix}2& 1\\1 & 3 \end{matrix}\right),\; B=\left(\begin{matrix}3& 1\\1 & 2 \end{matrix}\right), $$ ...and those which are not symmetric but do commute: $$A=\left(\begin{matrix}1& 1\\0 & 1 \end{matrix}\right),\; B=\left(\begin{matrix}1& 2\\0 & 1 \end{matrix}\right). $$
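You can verify both example pairs numerically (a numpy sketch):

    import numpy as np

    A1 = np.array([[2, 1], [1, 3]]); B1 = np.array([[3, 1], [1, 2]])
    A2 = np.array([[1, 1], [0, 1]]); B2 = np.array([[1, 2], [0, 1]])
    print(np.allclose(A1 @ B1, B1 @ A1))   # False: symmetric, yet they don't commute
    print(np.allclose(A2 @ B2, B2 @ A2))   # True: not symmetric, yet they commute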
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Vocabulary question: singularity for an analytic map I have a question that is purely on vocabulary. My native language is not english, so I would like to know the usual convention for the following. When people say "let $f: X \to Y$ be an analytic map", where $X,Y$ are, say, complex manifolds, do they allow for singularities on $X$ ? If yes, what about essential singularities? For example, consider the following statements: * *All analytic maps $f: \mathbb{P}^1(\mathbb{C}) \to \mathbb C$ are constants. *The exponential function $\exp: \mathbb{P}^1(\mathbb{C}) \to \mathbb C$ is analytic. Which of those statements would be true, according to usual convention? i.e. can we say that the exponential function is analytic on the projective line, but with an essential singularity ?
No, an analytic map has no singularities. "Holomorphic" is a synonym in English, in all cases of which I'm aware. "Meromorphic" functions may have poles, but no essential singularities. I don't know a word for a function which is analytic at most points, but with arbitrary singularities permitted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lebesgue measure of numbers whose decimal representation contains at least one digit equal $9$ Let $A$ be the set of numbers on $[0, 1]$ whose decimal representation contains at least one digit equal $9$. What is its Lebesgue measure $\lambda(A)$?
We don't care about the countable set of $x\in[0,1]$ possessing two different decimal expansions. Denote by $A_k$ $(0\leq k\leq9)$ the set of numbers in $[0,1]$ having $k$ as first digit after the decimal point. For $k<9$, a number in $A_k$ lies in $A$ iff some later digit equals $9$, so $$A\cap A_k={k\over10}+{1\over 10}A\qquad(0\leq k<9)\ .$$ The scaling (and translation-invariance) property of Lebesgue measure $\lambda$ then implies that $\lambda(A\cap A_k)={1\over10}\lambda(A)$ for these $k$. Since $A_9\subset A$, in this way we obtain $$\lambda(A)=9\cdot{1\over10}\lambda(A)+{1\over 10}\ ,$$ whereby the last term measures the set $A_9$. It follows that $\lambda(A)=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
If $a$, $b$ have the same minimal polynomial, there exists a field automorphism such that $\sigma(a)=b$ Two elements are said to be conjugate if they have the same minimal polynomial. Is it true that for every extension $L/K$, $a\in L$ and $b$ conjugate to $a$ in $L$, there exists an element $\sigma \in \text{Gal}(L/K)$ such that $\sigma(a)=b$? I know that this is true if $a$ is a primitive element, and I guess that it is true if the extension $L/K$ is normal (because you only need to find a $\sigma \in \text{Gal}(K(a,b)/K)$ such that $\sigma(a)=b$, and as the extension is normal you can extend it to a $\tilde{\sigma} \in \text{Gal}(L/K)$), but in the general case I don't even have an intuition of what happens. Can anyone give a counterexample or show a proof in the normal/general case?
This is false. For instance, consider $K=\mathbb{Q}$, $L=\mathbb{Q}(\sqrt[4]{2})$, $a=\sqrt{2}$, and $b=-\sqrt{2}$. Then $a$ and $b$ are conjugate over $K$, but no automorphism of $L$ can send $a$ to $b$, since $a$ has a square root in $L$ (namely $\sqrt[4]{2}$) but $b$, being negative, has none in $L\subset\mathbb{R}$. It is true if $L$ is normal over $K$. In that case, there is an isomorphism $\sigma:K(a)\to K(b)$ sending $a$ to $b$, which extends to an automorphism $\tilde{\sigma}:L\to L$ by normality of $L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455509", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why do the decision variables in a linear optimization problem have to be non-negative? Why do the decision variables in a linear optimization problem have to be non-negative? I mean, it makes sense in certain scenarios (when you are talking about how many pairs of shoes to make for maximum profit), as you can't make a negative number of shoes. But why does this have to be the case in general? Isn't there some situation where one of the variables could be negative?
Yes, you are right: a variable can be negative. If a variable is restricted to be nonpositive, you can transform the problem into an equivalent one with only non-negative variables, so you still obtain the standard form. Numerical example: $\texttt{max} \ \ 2x_1-x_2$ $x_1-2x_2\leq 5$ $3x_1-x_2\leq 7$ $x_1\geq 0,\ x_2\leq 0$ Now you define $x_2=-x_2'$. The problem becomes $\texttt{max} \ \ 2x_1+x_2'$ $x_1+2x_2'\leq 5$ $3x_1+x_2'\leq 7$ $x_1,\ x_2'\geq 0$ A transformation can also be done if a variable is unrestricted: suppose that $y$ is unrestricted in sign; then you define $y=y'-y''$, where $y', \ y'' \geq 0$.
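For what it's worth, here is a small scipy sketch of the transformed problem (linprog minimises, so the objective is negated; the variable names are just illustrative):

    from scipy.optimize import linprog

    # maximise 2*x1 + x2'  s.t.  x1 + 2*x2' <= 5,  3*x1 + x2' <= 7,  x1, x2' >= 0
    res = linprog(c=[-2, -1], A_ub=[[1, 2], [3, 1]], b_ub=[5, 7],
                  bounds=[(0, None), (0, None)])
    x1, x2p = res.x
    print(x1, -x2p, -res.fun)   # the original x2 is recovered as -x2'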
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Vector proof for the triple product How can I prove/disprove $$\mathbf{A}\times(\mathbf{B}\times\mathbf{C})=(\mathbf{A}\times\mathbf{B})\times\mathbf{C}+\mathbf{B}\times(\mathbf{A}\times\mathbf{C})\,?$$ I know I could expand the left side as $\mathbf{B}(\mathbf{A}\cdot\mathbf{C})-\mathbf{C}(\mathbf{A}\cdot\mathbf{B})$, but I don't know where to go from there.
This is the Jacobi identity for the vector cross product. Since you already have the identity: $$ \mathbf{a}\times(\mathbf{b}\times\mathbf{c})=\mathbf{b}(\mathbf{a}\cdot\mathbf{c})-\mathbf{c}(\mathbf{a}\cdot\mathbf{b}) $$ applying this to both sides should show you they are equivalent. The left-hand side is: $$ LHS=\mathbf{a}\times(\mathbf{b}\times\mathbf{c})=\mathbf{b}(\mathbf{a}\cdot\mathbf{c})-\mathbf{c}(\mathbf{a}\cdot\mathbf{b}) $$ and the right-hand side is: $$ \begin{split} RHS&=(\mathbf{a}\times\mathbf{b})\times\mathbf{c}+\mathbf{b}\times(\mathbf{a}\times\mathbf{c}) \\ &=-\mathbf{c}\times(\mathbf{a}\times\mathbf{b})+\mathbf{a}(\mathbf{b}\cdot\mathbf{c})-\mathbf{c}(\mathbf{b}\cdot\mathbf{a}) \\ &=-\mathbf{a}(\mathbf{c}\cdot\mathbf{b})+\mathbf{b}(\mathbf{c}\cdot\mathbf{a})+\mathbf{a}(\mathbf{b}\cdot\mathbf{c})-\mathbf{c}(\mathbf{b}\cdot\mathbf{a}) \\ &=\mathbf{b}(\mathbf{a}\cdot\mathbf{c})-\mathbf{c}(\mathbf{b}\cdot\mathbf{a})=LHS \end{split} $$
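A quick random-vector check of the identity (a numpy sketch):

    import numpy as np

    rng = np.random.default_rng(0)
    a, b, c = rng.standard_normal((3, 3))
    lhs = np.cross(a, np.cross(b, c))
    rhs = np.cross(np.cross(a, b), c) + np.cross(b, np.cross(a, c))
    print(np.allclose(lhs, rhs))   # True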
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why can there be an infinite difference between two functions as x grows large, but a ratio of 1? I learned in grade school that the closer $a$ and $b$ are to one another, the closer $\frac{a}{b}$ is going to be to $1$. For example, $\frac{3}{\pi}$ is pretty close to 1, and $\frac{10^{100}}{42}$ isn't even close to 1. So, why is: $$\lim_{x\to\infty} \frac{x^{2}}{x^{2}+x} = 1$$ But: $$\lim_{x\to\infty}[(x^2+x)-(x^2)] = \infty$$ ? Seems pretty counterintuitive. What's going on here?
For the limit, divide the numerator and the denominator by $x^2$: $$\lim_{x \to \infty} \frac{x^2}{x^2+x}=\lim_{x \to \infty} \frac{1}{1+\frac{1}{x}}=1$$ The $x$ term becomes insignificant relative to the dominant term $x^2$ as $x$ gets bigger. The ratio measures relative size while the difference measures absolute size: the gap $(x^2+x)-x^2=x$ grows without bound, yet it is negligible compared with $x^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455826", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "40", "answer_count": 5, "answer_id": 0 }
How to find the set on which a function is harmonic Let $f(z) = \operatorname{Im}\left( z + \frac{1}{z} \right)$. I need to find the set where $f$ is harmonic. Is there a way to find this without too much computation?
Although @Almentoe has shown that $f$ is harmonic in the punctured plane $\mathbb{C}\setminus\{0\}$, here we analyze the problem directly - "brute force." First, note that we can write for $z\ne 0$ $$f(z)=\text{Im}\left(z+\frac1z\right)=y-\frac{y}{x^2+y^2}$$ Then, for $(x,y)\ne (0,0)$, we have $$\frac{\partial^2 f}{\partial x^2}=\frac{2y(y^2-3x^2)}{(x^2+y^2)^3} \tag 1$$ and $$\frac{\partial^2 f}{\partial y^2}=-\frac{2y(y^2-3x^2)}{(x^2+y^2)^3} \tag 2$$ Adding $(1)$ and $(2)$ reveals that for $(x,y)\ne (0,0)$, $$\frac{\partial^2 f}{\partial x^2}+\frac{\partial^2 f}{\partial y^2}=0$$ We conclude that $f$ is harmonic for $z\ne 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Diagonalization of a matrix and eigenvalues of $A+A^2+A^3$ Let $$A= \begin{pmatrix} 6 & 4 & 1 \\ -6 & -1 & 3 \\ 8 & 8 & 4 \\ \end{pmatrix} $$ Find a non-singular matrix $P$ and a diagonal matrix $D$ such that $A+A^2+A^3=PDP^{-1}$. No idea what to do.
You do not have to calculate $A+A^2+A^3$. Suppose that you know how to diagonalize the matrix $A$, i.e., you can find an invertible matrix $P$ and a diagonal matrix $D$ such that $A=PDP^{-1}$. (I will leave the computations to you. You can have a look at other posts tagged diagonalization or on Wikipedia; I guess you can find something there to get you started. Then you can check your result on WolframAlpha or using some other tools.) If you already have $A=PDP^{-1}$, then it is easy to see that $$ \begin{align*} A^2&=(PDP^{-1})(PDP^{-1}) = PD^2P^{-1}\\ A^3&=(PD^2P^{-1})PDP^{-1} = PD^3P^{-1} \end{align*} $$ and we get $$A+A^2+A^3=PDP^{-1}+PD^2P^{-1}+PD^3P^{-1} = P(D+D^2+D^3)P^{-1}.$$ Clearly, the matrix $D+D^2+D^3$ is a diagonal matrix. If $d_1$, $d_2$, $d_3$ are the diagonal elements of the matrix $D$, then $D+D^2+D^3$ has $$d'_i=d_i+d_i^2+d_i^3$$ on the diagonal.
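If you only want a numerical sanity check of this recipe (the eigenvalues of $A$ work out to $2$, $8$ and $-1$, so $A$ is diagonalisable), here is a numpy sketch:

    import numpy as np

    A = np.array([[6, 4, 1], [-6, -1, 3], [8, 8, 4]], dtype=float)
    w, P = np.linalg.eig(A)         # eigenvalues w, eigenvectors as columns of P
    D = np.diag(w + w**2 + w**3)    # f(D) for f(t) = t + t^2 + t^3
    lhs = A + A @ A + A @ A @ A
    print(np.allclose(lhs, P @ D @ np.linalg.inv(P)))   # True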
{ "language": "en", "url": "https://math.stackexchange.com/questions/1455997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
When to use product or set notations in calculating probability. Problem: The probability that it will rain today is $0.5$. The probability that it will rain tomorrow is $0.6$. The probability that it will rain either today or tomorrow is $0.7$. What is the probability that it will rain today and tomorrow? * *Why can't we just multiply the probabilities of raining today and tomorrow? They are two events that must both occur to get the final event of raining on both days, so the answer would be $0.3$. *But the solution says we have to use the equality $\Pr(A\text{ and }B) = \Pr(A) + \Pr(B) - \Pr(A\text{ or }B)$, which gives the answer $0.4$. It looks to me like independence has something to do with it, but I might be wrong. Please help me understand why the set identity works but the product rule does not.
The probability of a union of events is always the sum of probabilities of the events minus the probability of their intersection.   This is how probability measures are required to work (among other things), so: $\Pr(A\cup B)=\Pr(A)+\Pr(B)−\Pr(A\cap B)$.   This is equivalently: $$\Pr(A\cap B)=\Pr(A)+\Pr(B)−\Pr(A\cup B)$$ This is always the case.   It is only when the events are independent that it is also true that: $$\Pr(A\cap B)=\Pr(A)\cdot\Pr(B)$$ So we can only use the probability rule when we have certainty that the events are independent.   However, we can always use the addition rule when given three of the four probabilities. In this case, we don't have any way to guarantee that the events of rainfall on subsequent days are independent.   Rather it would seem reasonable that there may be some dependence.   Since we are given three probabilities (of two events and their union), it is best to use the addition rule to find the fourth (their intersection); because that will always work. And indeed on doing so, we find that these events are in fact not independent because, in this case, $\Pr(A)\cdot\Pr(B)\neq \Pr(A)+\Pr(B)-\Pr(A\cup B) = \Pr(A\cap B)$ .
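Plugging in the given numbers makes this concrete: $$\Pr(A\cap B)=\Pr(A)+\Pr(B)-\Pr(A\cup B)=0.5+0.6-0.7=0.4,$$ while $\Pr(A)\cdot\Pr(B)=0.5\cdot 0.6=0.3\neq 0.4$. So rain today and rain tomorrow are not independent here, and the product rule simply does not apply.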
{ "language": "en", "url": "https://math.stackexchange.com/questions/1456142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }