Map $\mathbb{C}\setminus [-1,1]$ onto the open unit disk Let $G = \mathbb{C}\setminus [-1,1]$. I want to find an analytic function $f:G\rightarrow \mathbb{D}$, where $\mathbb{D}$ denotes the unit disk, such that $f$ is onto, and preferably, if possible, one-to-one. Now I've seen that $g(z) = \frac{1}{2}(z+1/z)$ maps the open unit disk to $\mathbb{C}\setminus [-1,1]$ in a one-to-one fashion, and therefore the given candidate would be its inverse. $$g(z) = w\Leftrightarrow z^2-2zw+1 = 0\Leftrightarrow (z-w)^2 = w^2-1.$$ Here is my problem: I want to show that $g$ is invertible and find a concrete formula for $g^{-1}$; however, I am not sure how to do this using square roots. Say we want to use the square root defined from the branch of the logarithm which satisfies $\log re^{it} = \log r+it$ for $0<t<2\pi$. Then we want $w^2-1$ to stay away from $(-\infty,0]$. However, writing $w = x+iy$, it is clear that we then need to restrict $w$ to lie in $\mathbb{C}\setminus \Big((-\infty,-1]\cup[1,\infty)\Big)$, which defeats the purpose, since we want $w\in \mathbb{C}\setminus [-1,1]$. How should I go about choosing a branch in this case? Clearly $g$ is undefined at $0$, so this only gives a bijective map from $G$ to $\mathbb{D}\setminus\{0\}$. Also, by simple connectivity, injectivity is not possible; however, can we find a map which is onto?
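As a side note on computing $g^{-1}$ numerically: the two roots of $z^2-2wz+1=0$ multiply to $1$ (Vieta), so for $w\notin[-1,1]$ exactly one of them lies strictly inside the unit disk, and one can simply pick that root instead of committing to a global branch of the square root. This sidesteps the branch question rather than answering it; the helper below is my own sketch, not from the question:

```python
import cmath

def inverse_g(w):
    """Return the root of z^2 - 2wz + 1 = 0 lying inside the unit disk.

    The two roots multiply to 1, so for w outside [-1, 1] exactly one
    of them has modulus < 1; picking it avoids fixing a branch cut.
    """
    r = cmath.sqrt(w * w - 1)
    z1, z2 = w - r, w + r
    return z1 if abs(z1) < abs(z2) else z2

for w in (2 + 0j, -3 + 1j, 0.5 + 2j):
    z = inverse_g(w)
    assert abs(z) < 1                          # lands in the punctured disk
    assert abs(0.5 * (z + 1 / z) - w) < 1e-9   # g(z) recovers w
```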
If $f:G\to \mathbb D$ is onto, it cannot be one-to-one: otherwise $G$ and $\mathbb D$ would be analytically isomorphic. Indeed, the inverse of a bijective analytic mapping between open subsets of $\mathbb C$ is automatically analytic. Note that this is a non-trivial result. But this is absurd, since these domains are not even homeomorphic: $\mathbb D$ is simply connected, whereas $G$ is not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3541832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Philosophy of simple field extensions In B. L. van der Waerden's Algebra, I am stuck on problem 6.9: The polynomial $f(x) = x^4 + 1$ is irreducible in the field of rationals. Adjoin a root $\theta$ and resolve the polynomial in the extended field $\mathbb Q (\theta)$ into prime factors. It seems I haven't nailed the idea of extending fields, so I'm asking you to check my thoughts and help with the troubled places. We can obtain the desired extension in two ways: either we use a field in which we already know the polynomial has a root (nonsymbolic adjunction), or we build the residue class field modulo that polynomial (symbolic adjunction). I don't see what I'm supposed to do in either case. 1. In the nonsymbolic way, should I just completely factorize $f(x)$ over $\mathbb C$, or am I to find only one root (any shall do)? I can write $f(x) = (x^2-i)(x^2+i) = (x-\sqrt i)(x + \sqrt i)(x - \sqrt{-i})(x + \sqrt{-i})$, but what would that give me? Or I could simply say that $\theta = \sqrt{\sqrt{-1}}$ is an obvious root, divide $f(x)$ by $(x - \theta)$, and think hard about what to do next with the quotient $x^3 + \theta x^2 + {\theta}^2 x + {\theta}^3$? 2. In the symbolic way, I have to find the residue class field modulo $f(x)$. Is this possible at all? I haven't yet seen a single example of doing that with polynomials. I'm really sorry for the size of the question; my systematic ignorance makes itself felt. However, I feel that a proper answer will amend many other gaps in my knowledge. I am really grateful at least for the time you spent on reading it.
I think it's important to realize what's taken as obvious here and what is not. Obvious: there is a root in some extension, and there is a prime factorization in another extension. Non-obvious: all roots are equivalent, in the sense that the polynomial can be factored in the simple field extension $\mathbb Q(\theta)$ for any root $\theta$. In particular, the question is probably supposed to be attacked in a more algebraic manner than simply writing down the factorization over $\mathbb{C}$. Knowing the factorization over $\mathbb{C}$ does help in figuring out the algebra at play, however.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3541971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 3 }
Prove $e^{z+w}=e^ze^w$ using the Taylor expansion for $ e^z.$ For all $z,w ∈ \Bbb{C}$, using the power series definition for $e^z$, prove $e^{z+w}=e^ze^w$.
Starting with $e^{z+w}$, we have : \begin{align} e^{z+w} & = \sum_{n=0}^\infty \frac{(z+w)^n}{n!} \\ & = \sum_{n=0}^\infty \sum_{k=0}^n \binom{n}k \frac{z^kw^{n-k}}{n!} \end{align} Here's the point : we now need to collect terms of the form $z^K$ for fixed $K$. This will involve a change of variable. To see how we do this, we write all the terms of this summation down, breaking it up for $n = 0,1,2,\ldots$: \begin{align} n = 0 & : & \binom{0}{0} \frac{z^0w^0}{ 0!} \\ n=1 & : & \binom{1}{0} \frac {z^0w^1}{ 1!} , \binom{1}{1} \frac{z^1w^0}{1!} \\ &\vdots& \\ n = N &:& \binom N0 \frac{z^0w^N}{N!} , \binom N1\frac{z^1 w^{N-1}}{N!}, \ldots , \binom NN \frac{z^Nw^0}{N!}\\ &\vdots& \end{align} This is one way of summing the terms : take all the terms for fixed $n$, sum these, and then sum over all $n$. Now, let us do something different : we don't fix $n$. We fix $\mathbf k$. Now, let us write down which terms in the summation have $k=0$: $$ \binom{0}{0} \frac{z^0w^0}{0!} , \binom 10 \frac{z^0w^1}{1!} , \binom 20 \frac {z^0w^2}{2!} , \ldots $$ For say $k=7$, the terms with $n = 0 \to 6$ will not contribute, because $\binom nk$ in that case is zero. So if we look at which terms have $k=7$ in the summation, we get : $$ \binom{7}{7} \frac{z^7w^0}{7!} + \binom 87 \frac{z^7w^1}{8!} + \binom 97 \frac{z^7w^2}{9!} + \ldots $$ Now, the point is this : initially, we fixed $n$, summed over all $k$, then summed over all $n$. Now, we fix $k$, sum over all $n$, then sum over all $k$. Basically, I am trying to say that the summations can be interchanged : $$ \sum_{n=0}^\infty \sum_{k=0}^n \binom{n}{k} \frac{z^kw^{n-k}}{n!} = \sum_{k=0}^\infty \color{green}{\sum_{n \geq k}} \binom {n}{k} \frac{z^kw^{n-k}}{n!} $$ This is an inversion of the question : first, we asked "Fix $n$; for which $k$ is the summation term non-zero?", for which the answer was $0 \leq k \leq n$. Now, we ask "Fix $k$; for which $n$ is the summation term non-zero?"
Now the answer should be clear : it is so only when $n\geq k$ (think about this yourself). Now, we have fixed $k$. Note that this is equivalent to calculating the coefficient of $z^k$. We only need to sum over $n$. But as $k$ is fixed, so are $z^k$ and $k!$, and these come out of the summation. So the rest of the answer follows. Fixing $K$, look at the coefficient of $z^K$. For $n < K$ we have $\binom nK = 0$, so no contribution. For $n \geq K$, the term becomes $\binom nK \frac{w^{n-K}}{n!}$, which after simplifying the factorial becomes $\frac{w^{n-K}}{K!(n-K)!}$. In short, once we sum over all $n$, the coefficient of $z^K$ is : $$ \sum_{n \geq K} \frac{w^{n-K}}{K!(n-K)!} \overbrace{=}^{t = n-K} \frac 1{K!}\sum_{t=0}^\infty\frac{w^t}{t!} = \frac{e^w}{K!} $$ So now, once we have collected all the terms : $$ e^{z+w} = \sum_{K=0}^\infty \frac {e^w}{K!} z^K = e^w \sum_{K=0}^\infty \frac{z^K}{K!} = e^we^z $$ as expected.
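For what it's worth, the identity can also be sanity-checked numerically by comparing truncated partial sums of the three series involved. This is only an illustration, not part of the proof; `exp_series` is my own ad hoc helper:

```python
import cmath

def exp_series(z, terms=60):
    """Partial sum of the power series sum_{n>=0} z^n / n!."""
    total, term = 0j, 1 + 0j
    for n in range(terms):
        total += term
        term *= z / (n + 1)   # turn z^n/n! into z^{n+1}/(n+1)!
    return total

z, w = 0.3 + 0.5j, -0.7 + 0.2j
lhs = exp_series(z + w)
rhs = exp_series(z) * exp_series(w)
assert abs(lhs - rhs) < 1e-12              # e^{z+w} = e^z e^w
assert abs(lhs - cmath.exp(z + w)) < 1e-12 # matches the library exponential
```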
{ "language": "en", "url": "https://math.stackexchange.com/questions/3542281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Decreasing sequence term sign for zero convergent series Given a convergent series $$\lim_{n \to \infty} \sum_{k=1}^n a_k = 0 $$ for a decreasing sequence $\{a_n\}$, can we make any deductions about the sign of the sequence terms $a_n$?
If $(a_n)_{n \in \mathbb{Z}^+}$ is decreasing and $\sum_{k=1}^\infty a_k = 0$, then $a_n = 0$ for all $n$. To see this, we first show that $a_n \geq 0$ for all $n$. Suppose not, so $a_m = q < 0$ for some $m$. Then, since the sequence is decreasing, $a_n \leq q$ for all $n \geq m$. Then: $$ \sum_{k=m}^\infty a_k \leq \sum_{k=m}^\infty q = -\infty $$ which is not possible for a convergent series. We then observe that necessarily $a_1 = 0$. This is because if $a_1 > 0$, then since $a_n \geq 0$ for all $n$, we have that: $$ \sum_{k=1}^\infty a_k \geq a_1 > 0 $$ which is, again, a contradiction. This means that we have $0 = a_1 \geq a_2 \geq a_3 \geq \cdots \geq 0$. The only possibility is that $a_n = 0$ for all $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3542456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bounded function whose second derivative is non negative Is it true that a twice continuously differentiable bounded function from $\mathbb{R}$ to $\mathbb{R}$ with non-negative second derivative for all $x \in \mathbb{R}$ is necessarily constant? If not, give a counterexample. The given function is convex throughout $\mathbb{R}$, since its second derivative is non-negative. And its boundedness geometrically implies it is constant. How should I rigorously prove the result? Or is my geometric intuition wrong? Help me please.
If $f''\ge 0$ then $f'$ is increasing. Because if $x<y$ and $f'(x)>f'(y)$ then by the MVT there exists $z\in (x,y)$ such that $0>\frac {f'(y)-f'(x)}{y-x}=(f')'(z)=f''(z)\ge 0,$ which is absurd. If $f$ is differentiable and not constant then $f'$ is not everywhere $0.$ For if $f(x)\ne f(y)$ then by the MVT there exists $z$ between $x$ and $y$ with $0\ne \frac {f(y)-f(x)}{y-x}=f'(z).$ Therefore: If $f''\ge 0$ and $f'(z)>0$ then for $x>z$ we have $$f(x)=f(z)+\int_z^xf'(t)dt\ge f(z)+\int_z^xf'(z)dt=f(z)+(x-z)f'(z)$$ which (for a fixed $z$) is unbounded above as $x\to \infty.$ If $f''\ge 0$ and $f'(z)<0$ then for $x<z$ we have $$f(x)=f(z)+\int_z^x f'(t)dt=f(z)+\int_x^z(-f'(t))dt\ge f(z)+\int_x^z(-f'(z))dt=$$ $$= f(z)+(z-x)(-f'(z))$$ which (for a fixed $z$) is unbounded above as $x\to -\infty.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3542619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Limit $\lim_{x\to 0^{-}}\frac{e^{40x}-1}{5x}$ $$\lim_{x\to 0^{-}}\frac{e^{40x}-1}{5x}$$ What I have done is: $$\lim_{x\to 0^{-}}\frac{e^{40x}-1}{5x}=\lim_{x\to 0^{-}}\frac{e^{40x}-e^0}{5x}=\lim_{x\to 0^{-}}\frac{8}{8}\cdot\frac{e^{40x}-e^0}{5x}=\lim_{x\to 0^{-}}8\cdot\frac{e^{40x}-e^0}{40x-0}=\\=8\cdot(e^{40x})'_{x=0}=8\cdot40e^{40\cdot 0}=320$$ Which is incorrect; where is the problem?
$$l=\lim_{x\to 0^{-}}\frac{e^{40x}-1}{5x}$$ Using the definition of the derivative: $$l=\frac 1 5\lim_{x\to 0^{-}}\frac{e^{40x}-e^{40\cdot 0}}{x-0}$$ $$l=\frac 1 5(e^{40x})_{x=0}'$$ $$l=\frac 1 5(40e^{40x})_{x=0}$$ Therefore: $$l=\frac {40} 5=8$$
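A quick numeric check of the value $8$ (illustrative only; the sample step sizes are my own arbitrary choices):

```python
import math

def quotient(x):
    """The difference quotient (e^{40x} - 1)/(5x)."""
    return (math.exp(40 * x) - 1) / (5 * x)

# approach 0 from the left, as in the one-sided limit; the error
# behaves like 160|x| near 0, so it shrinks with the step size
for x in (-1e-2, -1e-4, -1e-6):
    assert abs(quotient(x) - 8) < abs(x) * 200

assert abs(quotient(-1e-6) - 8) < 1e-3
```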
{ "language": "en", "url": "https://math.stackexchange.com/questions/3542932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
Prove that a maximal planar graph/triangulation of $V>3$ has minimum degree = $3$ I'm trying to prove that any maximal planar graph with more than 3 vertices has minimum degree 3, that is, every vertex in such a graph has at least 3 edges incident to it. So it doesn't have to contain a vertex of degree exactly 3. I want to prove that when there's a degree-2 vertex in a planar graph G (say the rest of G is already maximal), it is always possible to add at least one more edge to the degree-2 vertex to form a maximal planar graph (without crossing). I can visualize a worst case where a new vertex u is added to a maximal planar graph G and lands in a triangle formed by three vertices that are already in the graph. The only option for u is to connect with the three vertices of the triangle, since a fourth edge trying to connect u and the "outside" would cross at least one edge, resulting in a non-planar graph. If u lands outside G, then it should be able to connect to at least three vertices without crossing any edges, since G's outer face should also be bounded by three edges, which means that at least three vertices on the outer boundary of G can be connected to u? How do I formally prove that the minimum degree of a maximal planar graph G (with more than 3 vertices) is 3?
Given a planar graph, embedded in the plane, you want to add a number of edges to get a maximal planar graph, and you want to be sure that the resulting maximal planar graph won't have vertices of degree $2$. From this Wikipedia page: A simple graph is called maximal planar if it is planar but adding any edge (on the given vertex set) would destroy that property... The process of adding edges is straightforward: you need to find all faces bounded by more than $3$ edges (including the external face) and triangulate these faces by adding new edges. Let's consider the case which looks problematic to you: a vertex $u$ of degree $2$, for which it is impossible to add a new edge incident to $u$. You have two vertices $v$ and $w$ adjacent to the vertex $u$, and two faces partially bounded by the edges $\{v,u\}$ and $\{u,w\}$. If a new edge incident to the vertex $u$ can't be added, then both these faces must have an edge $\{v,w\}$ which prevents this addition. So the graph contains two parallel edges $\{v,w\}$, which is not possible, because we consider simple graphs only.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3543085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A double sum for the square of the natural logarithm of $2$. I am trying to show \begin{eqnarray*} \sum_{n=1}^{\infty} \sum_{m=1}^{\infty} \frac{1}{(n+m)^2 2^{n}} =(\ln(2))^2. \end{eqnarray*} Motivation: I want to use this to calculate $ \operatorname{Li}_2(1/2)$. So I want a solution to the above that does not use any reference to dilogarithms, and please avoid rational multiples of $\pi^2$ (if possible). Right, let's turn it into a double integral. (I know lots of you folks prefer integrals to sums.) Show \begin{eqnarray*} \int_0^1 \int_0^1 \frac{xy \ dx \ dy}{(1-xy)(2-xy)} =(\ln(2))^2. \end{eqnarray*} Reassuringly, Wolfy agrees. My try: Let $u=xy$, and the double integral becomes \begin{eqnarray*} \int_0^1 \frac{dy}{y} \int_0^y \frac{u \ du }{(1-u)(2-u)} . \end{eqnarray*} Partial fractions: \begin{eqnarray*} \frac{u}{(1-u)(2-u)} =\frac{1}{1-u} - \frac{2}{2-u}. \end{eqnarray*} Do the $u$ integrations to leave the $y$ integrals \begin{eqnarray*} -\int_0^1 \frac{\ln(1-y) dy}{y} +2 \int_0^1 \frac{\ln(2-y) dy}{y}. \end{eqnarray*} The first integral is \begin{eqnarray*} -\int_0^1 \frac{\ln(1-y) dy}{y} = \frac{ \pi^2}{6}, \end{eqnarray*} which I was hoping to avoid, and even worse, Wolfy says the second integral is divergent. So you have a choice of questions: where did I go wrong in the above? Or: how can we show the initially stated result?
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ $\ds{\bbox[5px,#ffd]{\sum_{n = 1}^{\infty}\sum_{m = 1}^{\infty} {1 \over \pars{n + m}^{2}\, 2^{n}} = \ln^{2}\pars{2}} \approx 0.4805:\ {\Large ?}}$. \begin{align} &\bbox[5px,#ffd]{\sum_{n = 1}^{\infty}\sum_{m = 1}^{\infty} {1 \over \pars{n + m}^{2}\, 2^{n}}} = \sum_{n = 1}^{\infty}\sum_{m = 1}^{\infty} {1 \over 2^{n}}\ \overbrace{\bracks{-\int_{0}^{1}\ln\pars{x}x^{m + n - 1}\,\dd x}} ^{\ds{{1 \over \pars{n + m}^{2}}}} \\[5mm] = &\ -\int_{0}^{1}\ln\pars{x} \sum_{n = 1}^{\infty}\pars{x \over 2}^{n} \sum_{m = 1}^{\infty}x^{m}\,{\dd x \over x} = -\int_{0}^{1}\ln\pars{x} {x/2 \over 1 - x/2}\,{x \over 1 - x}\,{\dd x \over x} \\[5mm] = &\ -\int_{0}^{1}\ln\pars{x} {x \over \pars{2 - x}\pars{1 - x}}\,\dd x = 2\int_{0}^{1} {\ln\pars{x} \over 2 - x}\,\dd x - \int_{0}^{1} {\ln\pars{x} \over 1 - x}\,\dd x \\[5mm] = &\ 2\int_{0}^{1/2} {\ln\pars{2x} \over 1 - x}\,\dd x - \int_{0}^{1}{\ln\pars{1 - x} \over x}\,\dd x \\[5mm] \stackrel{\mrm{IBP}}{=}\,\,\,& -2\int_{0}^{1/2}\mrm{Li}_{2}'\pars{x}\,\dd x + \int_{0}^{1}\mrm{Li}_{2}'\pars{x}\,\dd x = -2\,\mrm{Li}_{2}\pars{1 \over 2} + \mrm{Li}_{2}\pars{1} \\[5mm] = &\ -2\ \underbrace{\bracks{{\pi^{2} \over 12} - {1 \over 2}\ln^{2}\pars{2}}}_{\ds{\mrm{Li}_{2}\pars{1 \over 2}}}\ +\ \underbrace{\pi^{2} \over 6} _{\ds{\mrm{Li}_{2}\pars{1}}} = 
\bbx{\ln^{2}\pars{2}} \\ & \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/3543241", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
Examples of Dense\Not Dense sets Let the set of eventually zero sequences $c_{00} = \{x = (x_1, x_2, . . .) : x_n = 0 \ \text{for all but finitely many} \ n \}$ where $x_i$ are real numbers. (a) Prove that $c_{00}$ is dense in $l^p, 1 \le p < \infty$. (b) Prove that $c_{00}$ is not dense in $l^\infty$. $\text{My attempt}$: a) let $\epsilon>0$ and pick any $c_{00} \ni x= (x_1,\dots,x_n,0,0,\dots) , x_i\in\mathbb{R}$. Define the set $M=\{l^p \ni y=(y_i)_{i\ge 1} : (y_i)_{i=1}^n \in \mathbb{Q}, (y_i)_{i=n+1}^\infty=0 \}$, then the set $M$ is countable and $M\subset l^p$. then , since rationals are dense in reals; $$\|x-y\|_{l^p}^p = \sum_{i=1}^\infty|x_i-y_i|^p = \sum_{i=1}^n|x_i-y_i|^p < \epsilon^p$$ $$\|x-y\|_{l^p}<\epsilon \ \ \ ,\text{for some}\ \ y\in M$$ b) similarly let $\epsilon>0$ and pick any $c_{00} \ni x= (x_1,\dots,x_n,0,0,\dots) , x_i\in\mathbb{R}$. Also pick $l^\infty \ni y=(y_i)_{i\ge1}$s.t. $(y_i)_{i=1}^n \in \mathbb{Q}, (y_i)_{i=n+1}^\infty=L$. Where $\sum_{i=1}^n|x_i-y_i|^p<\frac{\epsilon^p}{2}$ and $L<\infty$ is sufficiently large. Then ; $$\|x-y\|_{l^p}^p = \sum_{i=1}^\infty|x_i-y_i|^p = \sum_{i=1}^n|x_i-y_i|^p + \sum_{i=n+1}^\infty|y_i|^p< \frac{\epsilon^p}{2} + \sum_{i=n+1}^\infty|y_i|^p$$ So $L$ can be chosen sufficiently large s.t. $\sum_{i=n+1}^\infty|y_i|^p > \frac{\epsilon^p}{2}$. So $c_{00}$ is not dense in $l^\infty.$
For (a), your argument does not prove that $c_{00}$ is dense in $\ell^p$, rather it proves that the set of elements of $c_{00}$ having rational coordinates is dense in $c_{00}$. To prove that $c_{00}$ is dense in $\ell^p$, you need to start with an arbitrary element of $\ell^p$ and show that the $\varepsilon$ ball centered at this element contains an element of $c_{00}$ for any $\varepsilon>0$. For (b), in order to show that $c_{00}$ is not dense in $\ell^{\infty}$ you need to show that there is some $x\in\ell^{\infty}$ and $\varepsilon>0$ such that no element of $c_{00}$ is contained in the $\varepsilon$ ball centered at $x$. So while you are on the right track, it doesn't really make sense to start with an element of $c_{00}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3543364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to find the stationary point under constraints analytically? I am working with the optimization problem from the paper, eq. (5) $$\max_{X=(x_1, x_2, \ldots, x_{n+1})} f(X)=(A-B\sum_{i=1}^n \frac{1}{x_i})\times x_{n+1}$$ subject to $$x_{n+1}=1-2k\sum_{i=1}^n x_i,$$ $$x_i \geq 0, \quad i = 1,2,\ldots, n+1.$$ Here $A \gg B > 0$ are real, $k \in \mathbb{Z}^+$, and $f(X)>1$. From the paper I know the stationary point $$X^*=(\underbrace{\sqrt{\frac{B}{2kA}}, \ldots, \sqrt{\frac{B}{2kA}}}_{n\text{ times}}, 1 − 2 kn\sqrt{\frac{B}{2kA}} )$$ and the optimal value $$f(X^*)=(\sqrt{A} − n \sqrt{ 2 \cdot k \cdot B})^2.$$ Question. How to find the stationary point under constraints analytically? Is it possible for the $n=3, k=10$ case? Attempt. I have tried to use a Lagrange multiplier: $$F(X, \lambda) = x_{n+1}(A-B\sum \frac{1}{x_i}) + \lambda(x_{n+1} - 1 + 2k\sum x_i)=0$$ and found the partial derivatives, obtaining a system of $n+2$ equations in $n+2$ variables: \begin{cases} F'_{x_i}(X, \lambda)= x_{n+1}\frac{B}{x_i^2} +2 \lambda k x_i=0, \quad i=1,2,..., n, \\ F'_{x_{n+1}}(X, \lambda)= A - B\sum \frac{1}{x_i}+\lambda=0, \\ F'_{\lambda}(X, \lambda)=x_{n+1} -1+2k\sum x_i =0. \end{cases} My problem now is how to express $x_i$, $i=1,2,..., n$, and $x_{n+1}$ through $\lambda$ and find the roots. Since I know the stationary point, I have tried the following. I will use the notation $\sum_{i=1}^n x_i := n \cdot x$, $\sum_{i=1}^n \frac{1}{x_i} := \frac{n} {x}$, and $x_{n+1}:=y$. Then the system becomes $$y\frac{B}{x^2}+2\lambda k x=0, \tag{2.1}$$ $$\lambda=B\frac{n}{x}-A, \tag{2.2}$$ $$y=1-2knx. \tag{2.3}$$ Put $(2.2)$ and $(2.3)$ into $(2.1)$: $$( 1-2knx )\frac{B}{x^2}+2 (B\frac{n}{x}-A ) k x=0. \tag{3.1}$$ Multiply both sides of $(3.1)$ by $x^2$: $$( 1-2knx )B+2 (B\frac{n}{x}-A ) k x^3=0. \tag{4.1}$$ Expand and collect terms: $$2kAx^3-2nBkx^2+2nBkx-B=0. \tag{5.1}$$ Divide both sides by $2kA$: $$x^3 - n\frac{B}{A}x^2 + n \frac{B}{A}x-\frac{1}{2k}\frac{B}{A}=0. \tag{6.1}$$ This is a cubic equation, and I am looking for a root $x \in \mathbb{R}$. I think equation $(6.1)$ should have one simple real root and a pair of complex conjugate roots.
Let’s follow your approach. Unfortunately, the first $n$ equations of your system are wrong; we have $F'_{x_i}(X, \lambda)= x_{n+1}\frac{B}{x_i^2} +\color{red}{2\lambda k}=0$. Next, both the values of $X^*$ from the paper and the first $n$ equations of the system (unless $\lambda=x_{n+1}=0$) say that at the stationary point all the $x_i$’s for $1\le i\le n$ are equal to some common value $x$. This leads us to the system $$y\frac{B}{x^2}+2\lambda k=0, \tag{2’.1}$$ $$\lambda=B\frac{n}{x}-A, \tag{2’.2}$$ $$y=1-2knx. \tag{2’.3}$$ Put $(2’.2)$ and $(2’.3)$ into $(2’.1)$: $$(1-2knx )\frac{B}{x^2}+2 (B\frac{n}{x}-A ) k=0.$$ $$\frac{B}{x^2}-2Ak=0.$$ $$x=\sqrt{\frac{B}{2kA}}.$$ Remark that in order to ensure that the obtained value $$X^*=(\underbrace{\sqrt{\frac{B}{2kA}}, \ldots, \sqrt{\frac{B}{2kA}}}_{n\text{ times}}, 1 − 2 kn\sqrt{\frac{B}{2kA}})$$ provides the optimal value for $f(X^*)$, we also have to consider other possible critical points (which are often missed in applications, making them non-rigorous) provided by the following general Lagrange theorem. Let $m$ be a natural number, $r\le m$, and let functions $f,g_1,\dots, g_r$ from $\Bbb R^m\to \Bbb R$ be continuously differentiable in a neighborhood of a point $x$ such that $g_i(x)=0$ for each $1\le i\le r$ and the rank of the Jacobi matrix $J(x)=\left\|\tfrac{\partial g_i}{\partial x_j}(x) \right\|$ equals $r$. If the function $f$ has a conditional extremum at the point $x$, then there exist numbers $\lambda_1,\dots,\lambda_r$ such that $\nabla\left(f+\lambda_1g_1+\dots+\lambda_rg_r\right)(x)=0$. That is, in our case we would also have to check points at which the rank of $J(x)$ drops. Luckily, $J(x)=(-2k,\dots,-2k,1)$, so its rank is always $r=1$ and we can skip this part. So far we have found a condition for a local conditional maximum of the function $f$. But we have to evaluate its global maximum (or supremum). For this we usually also have to check the values of $f$ at the boundary points of its domain.
In our case these are formally the points $x=(x_i)$ with some of the $x_i$ equal to zero, but luckily this is excluded by the expression for $f$. Finally, it can happen that the global maximum of the function $f$ is not attained at any point of its domain. Luckily, this is not our case, because we have $x_i\ge B/A$ for each $1\le i\le n$ at each point $x$ such that $f(x)\ge 0$. Since $f$ is a continuous function, it attains its maximum value at some point $x$ of the compact domain given by the conditions $B/A\le x_i\le 1/2k$ and $x_{n+1}=1-2k\sum_{i=1}^n x_i$. This point $x$ fits the hypotheses of Lagrange’s theorem.
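To make the conclusion concrete, here is a small numeric check of the closed form from the paper against direct evaluation of $f$ on the symmetric line $x_1=\dots=x_n$. The specific values of $A$ and $B$ are my own hypothetical choices (picked so that $A\gg B>0$), not from the paper:

```python
import math

# hypothetical parameter values of my own choosing, with A >> B > 0
A, B, k, n = 100.0, 0.1, 10, 3

x = math.sqrt(B / (2 * k * A))       # common value of x_1, ..., x_n
y = 1 - 2 * k * n * x                # x_{n+1} from the constraint

def f_at(xv):
    """f evaluated on the symmetric line x_1 = ... = x_n = xv."""
    return (A - B * n / xv) * (1 - 2 * k * n * xv)

f_direct = f_at(x)
f_closed = (math.sqrt(A) - n * math.sqrt(2 * k * B)) ** 2
assert abs(f_direct - f_closed) < 1e-9   # closed form matches direct value

# a small nudge in either direction should not increase f (local max)
for eps in (1e-4, -1e-4):
    assert f_at(x + eps) <= f_direct
```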
{ "language": "en", "url": "https://math.stackexchange.com/questions/3543506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
probability priority of comma and bar In probability notation, does the comma (,) have priority over the bar (|)? I am asking because of the question below: is P(X,Y|Z) read as X and (Y given Z), or as (X and Y) given Z?
To answer your question more directly: yes, the comma (,) has higher priority than the bar (|) when interpreting this notation. So P(X,Y|Z) is the conditional probability of X and Y occurring together, given Z. It might help to think of this case as P((X,Y)|Z) (though such use of additional brackets is not common in the notation, afaik). Similarly, P(X|Y,Z) is the conditional probability of X, given Y and Z (i.e. as if there were additional brackets as follows: P(X|(Y,Z))).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3543653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Curry-Howard: Types with Logic vs Types as Logic In the paper Knowledge Representation in Bicategories of Relations there is a short remark on p47 that makes a distinction that seems quite far ranging regarding the Curry-Howard correspondence. The paper is interested in "types with logic", not "types as logic", where the first is sometimes called "two-level type theory" or "logic-enriched type theory". It cites Nicola Gambino and Peter Aczel's 2006 paper The generalised type-theoretic interpretation of constructive set theory, which is difficult to read. Is there a nice intuitive explanation of this distinction? (Something that would allow me to understand the gist of it, and know at what point I would need to dig into the details?)
You might try these two posts by Mike Shulman, Propositions as Some Types and Algebraic Nonalgebraicity and Freedom From Logic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3543848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Estimate $n$ such that $\log(n^C)<n$ Given $C\in \Bbb N$ (which we can assume to be big), is there a simple way to estimate the value of $n$ such that the following formula is satisfied? $$\log(n^C)<n.$$ Equivalently, how can we estimate the index $n$ where the sequence $x_n=\frac{n}{\log(n)}$ exceeds a given value $C\in\Bbb N$?
The inequality $\log n^C < n$ is equivalent to $ \dfrac{\log n}{n} < \dfrac 1C$. Since $\dfrac{\log n}{n} \le \dfrac 1e$ for all $n \in \mathbf N$ we may as well assume that $C \ge e$. This is a good place to apply the Lambert $W$-function, although not its usual branch. The function $y = xe^x$ is decreasing on $(-\infty,-1)$ and increasing on $(-1,\infty)$. Its minimum value at $x=-1$ is $y=-1/e$. The $W$-function is usually defined as the inverse of $y = xe^x$ on the interval $[-1/e,\infty)$. Thus if $y \in [-1/e,\infty)$ then $W(y)$ is the unique $x \in [-1,\infty)$ satisfying $y=xe^x$. On the other hand, if $y \in [-1/e,0)$ you can define $W_{-1}(y)$ as the unique $x \in (-\infty,-1)$ satisfying $y = xe^x$. This is the particular Lambert function that is useful here. Observe that $W_{-1}$ is decreasing. With that out of the way you can do a simple calculation. With $x = \log n$ you get $$0 < xe^{-x} = \frac{x}{e^x} = \frac{\log n}{n} < \frac 1C \le \frac 1e$$ so that $$- \frac 1e \le - \frac 1C < -xe^{-x} < 0.$$ Thus $- \dfrac 1C$ and $-xe^{-x}$ both belong to the domain of $W_{-1}$ and since $W_{-1}$ is decreasing you get $$-x = W_{-1}(-xe^{-x}) < W_{-1} \left( - \frac 1C \right)$$ so that $$ n > e^{-W_{-1} \left( - \frac 1C \right)}.$$ For $y \in [-1/e,0)$ the value of $W_{-1}(y)$ can be computed in Wolfram using productlog$(-1,y)$. For instance, with $C = 100$ we find $$n > \mathrm{exp}(-\mathrm{productlog}(-1,-1/100)) \approx 647.2775.$$
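To actually get numbers out of this bound, here is a self-contained sketch; the bisection routine is my own stand-in for Wolfram's productlog$(-1,\cdot)$ (so as not to assume any particular library), valid for $-1/e < y < 0$ with the root inside the bracket $[-50,-1]$:

```python
import math

def lambert_w_minus1(y):
    """Solve x * e^x = y for x < -1 (the W_{-1} branch), -1/e < y < 0.

    On [-50, -1] the map x -> x*e^x is decreasing from ~0^- to -1/e,
    so a simple bisection brackets the root.
    """
    lo, hi = -50.0, -1.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid * math.exp(mid) > y:   # still above y: root lies to the right
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

C = 100
n_min = math.exp(-lambert_w_minus1(-1.0 / C))   # ~ 647.28, as in the answer

n = math.ceil(n_min)       # 648 satisfies C*log(n) < n
assert C * math.log(n) < n
n = math.floor(n_min)      # 647 still fails the inequality
assert not C * math.log(n) < n
```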
{ "language": "en", "url": "https://math.stackexchange.com/questions/3543992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How to find the intersection of two circles I am doing a question on coordinate geometry. It asks to find the points of intersection of the two circles $(x+1)^2+(y-2)^2=10$ and $(x-1)^2+(y-3)^2=5$, and then find the area of the triangle formed by the two points and the origin. I am wondering how to do this - when I sketch it, it looks like the line segment joining the points of intersection is perpendicular to the line segment joining the centers, but I can't prove this, and don't know what to do with it anyway. Any help please!
Hint: By subtracting the equations you get their radical line: $$4x+2y = 10.$$ Now plug $y =5-2x$ into one of them and solve a quadratic equation in $x$...
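Carrying the hint through numerically (a sketch of the arithmetic, including the follow-up area question from the post):

```python
import math

# Radical line from subtracting the two circle equations: 4x + 2y = 10,
# i.e. y = 5 - 2x.  Substituting into (x-1)^2 + (y-3)^2 = 5 gives
# (x-1)^2 + (2-2x)^2 = 5, i.e. 5x^2 - 10x = 0.
a, b, c = 5.0, -10.0, 0.0
disc = math.sqrt(b * b - 4 * a * c)
xs = sorted([(-b - disc) / (2 * a), (-b + disc) / (2 * a)])
points = [(x, 5 - 2 * x) for x in xs]        # the two intersection points

# both points lie on both circles
for x, y in points:
    assert abs((x + 1) ** 2 + (y - 2) ** 2 - 10) < 1e-9
    assert abs((x - 1) ** 2 + (y - 3) ** 2 - 5) < 1e-9

# area of the triangle formed with the origin: half the 2D cross product
(x1, y1), (x2, y2) = points
area = abs(x1 * y2 - x2 * y1) / 2
```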
{ "language": "en", "url": "https://math.stackexchange.com/questions/3544246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Effects of increasing a matrix's values on the eigenvalue decomposition Let $A \in \mathbb{R}^{n \times n}$ be a real, symmetric positive semi-definite matrix, and let $A = UVU^T$ be $A$'s eigendecomposition. Suppose that the matrix $A'$ is obtained from $A$ by making some values of $A$ larger, in such a way that $A'$ is still a symmetric, positive semi-definite matrix. Can we say something about $A'$'s eigendecomposition in terms of $A$'s eigendecomposition? In particular, are $A'$'s eigenvalues necessarily larger?
You are asking whether the eigenvalues of $A+\Delta$ are larger than the eigenvalues of $A$. If the entries of the symmetric $\Delta$ are merely positive, this might not be true. We may have $\det(A+\Delta)< \det A$; just increase some off-diagonal elements of a symmetric $2\times 2$ matrix. However, if $\Delta$ itself is positive semi-definite, then yes.
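A concrete $2\times2$ illustration of both halves of this answer (the example matrices are my own; the PSD case is an instance of Weyl's monotonicity inequality):

```python
import math

def eig2(a, b, d):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, d]]."""
    tr, det = a + d, a * d - b * b
    s = math.sqrt(tr * tr - 4 * det)
    return (tr - s) / 2, (tr + s) / 2

# Increasing off-diagonal entries: Delta = [[0, 1.5], [1.5, 0]] has
# nonnegative entries but is not PSD, and the smallest eigenvalue drops.
lo_A, hi_A = eig2(2.0, 0.0, 2.0)      # A = 2*I, eigenvalues 2, 2
lo_P, hi_P = eig2(2.0, 1.5, 2.0)      # A + Delta, eigenvalues 0.5, 3.5
assert lo_P < lo_A

# If Delta itself is PSD (here Delta = I), no eigenvalue can decrease.
lo_Q, hi_Q = eig2(3.0, 0.0, 3.0)      # A + I, eigenvalues 3, 3
assert lo_Q >= lo_A and hi_Q >= hi_A
```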
{ "language": "en", "url": "https://math.stackexchange.com/questions/3544413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof of existence of $a$ in $\lim_{h\rightarrow0}\frac{a^h-1}{h}=1$ In deriving the derivative of the function $f(x)=\exp(x)$, it is often pointed out that in the general case of $f_a(x)=a^x$ the following expression can be deduced from the definition of the derivative: $$ \frac{d}{dx}a^x = a^x\cdot\left(\lim_{h\rightarrow0}\frac{a^h-1}{h}\right) $$ with $e$ defined as the real number for which the above limit is equal to 1. However, this seems unsatisfactory to me, as the existence of a real number $a$ satisfying the equation $\lim_{h\rightarrow0}\frac{a^h-1}{h}=1$ is not established. Can somebody please point me to a proof of the existence of such a number, or provide it if possible? Thanks.
Assuming that basic theorems about limits and the logarithm function are available as well as $\lim_{x\to 0}\left(x+1\right)^{1/x}=e$, we can first prove that: $$\lim_{y\to 0}\frac{\ln(y+1)}{y}=\lim_{y\to 0}\ln\left(y+1\right)^{1/y}=\ln\left(\lim_{y\to 0}\left(y+1\right)^{1/y}\right)=\ln(e)=1 $$ Switching the limit and the logarithm is possible since $\ln(t)$ is continuous. Then $$\lim_{y\to 0}\frac{\log_a(y+1)}{y}=\frac{1}{\ln(a)}\lim_{y\to 0}\frac{\ln(y+1)}{y}=\frac{1}{\ln(a)} $$ And $y=a^{h}-1\to 0$ as $h\to 0$. So by the limit of a composition of functions, $$\lim_{h\to 0}\frac{\log_a\left(a^h-1+1\right)}{a^h-1}=\lim_{h\to 0}\frac{h}{a^h-1}=\frac{1}{\ln(a)}\\ \lim_{h\to 0}\frac{a^h-1}{h}=\ln(a)$$ Hence $\lim_{h\to 0}\dfrac{a^h-1}{h}=1$ if and only if $\ln(a)=1\Leftrightarrow a=e$.
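A quick numeric illustration of the conclusion that the limit equals $\ln(a)$ (not a proof; the step size $h=10^{-8}$ is an arbitrary choice of mine):

```python
import math

def slope(a, h=1e-8):
    """Difference quotient (a^h - 1)/h for a small fixed h."""
    return (a ** h - 1) / h

# the limit is ln(a), and equals 1 precisely when a = e
for a in (2.0, math.e, 10.0):
    assert abs(slope(a) - math.log(a)) < 1e-6
assert abs(slope(math.e) - 1.0) < 1e-6
```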
{ "language": "en", "url": "https://math.stackexchange.com/questions/3544583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Does $\int\frac{1}{2}\tanh\left(\frac{1}{x^{2}}\right)\,dx$ have a closed form? Context: I am looking for an activation function that is linear in the area surrounding $x=0$, while also staying within the range of $-1$ to $1$. While I was messing around in Desmos, I stumbled across the function $f(x)=\frac{1}{2}\tanh\left(\frac{1}{x^{2}}\right)$. $\int\frac{1}{2}\tanh\left(\frac{1}{x^{2}}\right)\,dx$ produces the exact shape I am looking for, but I cannot figure out how to find a closed form for it. Am I missing something obvious, or does this function not have a closed-form antiderivative?
Simplest one I can think of is $\dfrac{x}{1+|x|} $. There are also $\tanh(x) $ and $\arctan(x) $ scaled as $(2/\pi)\arctan(\pi x/2) $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3544710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Let V, W and Z be vector spaces, and let $T:V \rightarrow W$ and $U: W \rightarrow Z$ be linear. Let V, W and Z be vector spaces, and let $T:V \rightarrow W$ and $U: W \rightarrow Z$ be linear. a).Prove if UT is one-to-one, then T is one-to-one. Must U also be one-to-one? b). Prove if UT is onto, then U is onto. Must T also be onto? for a). If UT is one-to-one, let $x \in V$, then $U(T(x))=0$ and $T(x)=0$. Question for a): Am I supposed to prove that x is also equal to 0? I am not sure how to get there. And I am shaky on whether U must be one-to-one. It seems like U must. for b). If UT is onto, for $x \in V$, $U(T(x))= Z$ Since $T(x) \in W$, and image of U is equal to codomain, U is onto. Question for b): I am not sure how to argue whether U must be onto.
Both hold also for functions that are not linear, so I will prove this in the traditional way. Recall that, if $f:A\to B$ is a function, $f$ is one-to-one if and only if for every $x$ and $y$ in $A$, $$f(x) = f(y) \textrm{ implies that } x=y.$$ Also, recall that $f$ is onto if and only if for any $b\in B$ there exists $a\in A$ such that $b = f(a)$. So, for $(a)$, since our goal is to prove that $T$ is one-to-one, suppose that $u$ and $v$ are two vectors in $V$ such that $T(u) = T(v)$ and, in some way, we need to prove that $u=v$. But this is easy, just apply $U$ to both sides of the equation $T(u) = T(v)$ and use the fact that $UT$ is one-to-one. For $(b)$, pick any $z\in Z$ and we need to find some vector $w\in W$ such that $z = U(w)$, right? Since $UT$ is onto, for $z\in Z$ there is $v\in V$ such that $z = (UT)(v) = U(T(v))$. What is supposed to be the vector $w$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3544869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Ring homomorphism and characteristic Let $F$ be a finite field. If $f : F \to F$, given by $ f(x) = x^3$ is a ring homomorphism, then * *$ F = \mathbb{Z} / \mathbb{3Z}$ *$ F = \mathbb{Z}/ \mathbb{3Z}$ or $ F = \mathbb{Z}/ \mathbb{2Z}$ *$ F = \mathbb{Z}/ \mathbb{2Z}$ or characteristic of $F$ is 3. *Characteristic of $F$ is $3$ For ring homomorphism, $f(x+y)= f(x)+ f(y)$ and $f(xy)= f(x) f(y)$ for $x,y \in F$. So, Charcteristic of $F$ must be 3 so that $ 3x^2y + 3xy^2= 0$, so $4$ option should be true but how to discard option 1?
Consider $(\mathbb Z/3\mathbb Z)[x]/(x^2+1)$. It is a field $\not=\mathbb Z/3\mathbb Z$ of characteristic $3$, so option $(1)$ is not true. Similarly option $(2)$ is not a necessary condition for $x\mapsto x^3$ to be a ring homomorphism. For option $(3)$, we shall show that it is necessary for $x\mapsto x^3$ to be a ring homomorphism that either $F=\mathbb Z/2\mathbb Z$ or $F$ has characteristic $3$. By the binomial expansion, $x\mapsto x^3$ is a ring homomorphism if and only if $3(x^2y+xy^2)=0,\,\forall x, y\in F$. And this is true if and only if $3=0$ or $x^2y+xy^2=0$ in $F$. The first case means $F$ has characteristic $3$. In the second case, $x^2y+xy^2=xy(x+y)=0$; when $x,y\not=0$, $xy$ is invertible (a field has no zero divisors), and hence $x+y=0$. This means that every non-zero element in $F$ is the additive inverse of $1_F$. As a consequence, there is only one non-zero element in $F$, i.e. $F=\mathbb Z/2\mathbb Z$. Thus option $(3)$ is a necessary condition. Since $\mathbb Z/2\mathbb Z$ has characteristic $2$, option $(4)$ is not necessary. Hope this helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3545020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving recursive function with floor The recursive function is this: $$ T(n) = \begin{cases} 2 & \text{ for }n=1;\\ T \left( \lfloor \frac{n}{2} \rfloor \right) + 7 &,\text{ otherwise} \end{cases} $$ Based on the definition of the function, the right side becomes: $T(n) = T( \lfloor \frac{n}{2^i} \rfloor) + 7 * i;$ The procedure stops when $\lfloor \frac{n}{2^i} \rfloor == 1$ The problem is how do I continue it from there?
We list first some terms: $$\underbrace{2}_{1}, \underbrace{9, 9}_{2}, \underbrace{16, 16, 16, 16}_{4}, \underbrace{23, 23, 23, 23, 23, 23, 23, 23}_{8}, 30, 30, \cdots$$ Conjecture: $$T(n) = 2 + 7\lfloor \log_2 n\rfloor, \quad n = 1, 2, 3, \cdots.$$ We need to prove it. To this end, let $$S(n) = 2 + 7\lfloor \log_2 n\rfloor, \quad n = 1, 2, 3, \cdots.$$ Let us prove that $S(n) = T(n)$ for $n = 1, 2, 3, \cdots$. We use mathematical induction. First, $S(1) = T(1) = 2$, and $S(2) = T(2) = 9$. Assume that $S(k) = T(k)$ for $k = 1, 2, \cdots, n$ ($n\ge 2$). We need to prove that $S(n+1) = T(n+1)$. There exist integer $m\ge 1$ and integer $r$ with $0\le r < 2^m$ such that $n = 2^m + r$. We split into two cases: 1) $r = 2^m - 1$: We have $S(n+1) = 2 + 7(m+1)$ and $S(\lfloor \frac{n+1}{2}\rfloor ) = 2 + 7m$ which results in $S(n+1) = S(\lfloor \frac{n+1}{2}\rfloor ) + 7 = T(\lfloor \frac{n+1}{2}\rfloor ) + 7 = T(n+1)$. 2) $r < 2^m - 1$: We have $S(n+1) = 2 + 7m$ and $S(\lfloor \frac{n+1}{2}\rfloor ) = 2 + 7(m-1)$ which results in $S(n+1) = S(\lfloor \frac{n+1}{2}\rfloor ) + 7 = T(\lfloor \frac{n+1}{2}\rfloor ) + 7 = T(n+1)$. $\quad$ Q.E.D. Thus, $T(n) = 2 + 7\lfloor \log_2 n\rfloor, \quad n = 1, 2, 3, \cdots.$
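The closed form can be checked directly against the recursion; a short sketch (the helper names `T` and `S` mirror the proof above, and `bit_length` gives $\lfloor\log_2 n\rfloor$ exactly, avoiding floating point):

```python
def T(n):
    # the original recursion
    return 2 if n == 1 else T(n // 2) + 7

def S(n):
    # conjectured closed form: 2 + 7 * floor(log2(n));
    # for n >= 1, n.bit_length() - 1 == floor(log2(n)) exactly
    return 2 + 7 * (n.bit_length() - 1)

for n in range(1, 10000):
    assert T(n) == S(n)
```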
{ "language": "en", "url": "https://math.stackexchange.com/questions/3545184", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A Vandermonde Identity for Stirling Numbers? I'm facing the problem of trying to express a quantity in the simplest possible way (it is, using the least possible number of sum symbols). $$ \sum_{j=0}^{n} \sum_{\ell=0}^m \frac{1}{j!}\binom{b+j}{j} {j+1 \brack {\ell+1}} {b+2 \brack {m-\ell+1}}$$ Of course, this can be easily written as a convolution between two polynomials (which happen to be more or less simple). I'm pretty sure that approach will not work (at most, one can write the above expression as "the coefficient of $x^m$ in this product [...]", but that is not useful to my purpose). However, if one explores this sum a little bit, it pretty soon come up the fact that it could be truly useful to, for example, be able to compute this: $$\sum_{\ell=0}^m {j+1\brack{\ell+1}}{b+2 \brack {m-\ell+1}}$$ (which resembles a lot Vandermonde's Identity, but with Stirling numbers instead of binomial coefficients). I looked up on a couple of books (Concrete Mathematics of Graham-Knuth-Patashnik, and others), and I couldn't find any references pointing to such an identity. Does anybody know something like that? (Perhaps involving other weird numbers as Eulerian or double Eulerian or that kind of stuff?) Nevertheless, any kind of help simplifying the first double sum would be really appreciated.
As usual check carefully (although there is a code snippet to illustrate). I seem to have a form of mathematical dyslexia. Answer: $\left[x^{M}\right]\frac{\Gamma\left(x+a\right)\Gamma\left(x+b\right)}{\Gamma\left(x\right)\Gamma\left(x+a+b\right)}\left(x\right)_{\left(a+b\right)}$ Actually the expression gives the answer for all $M$. Some code is included for that. I can expand it to include M and the binomial summation on a if needed. The process can probably be specialized to just compute one answer for a particular $M$. Let me know if you need it. First some standard facts: Definition 1. The Pochhammer symbol. $$\begin{align*}\left(x\right)_{n}=x\cdot\left(x+1\right)\ldots\left(x+n-1\right) \end{align*}$$ Definition 2. Unsigned Stirling Number $$\left[\begin{array}{c} n\\ l \end{array}\right]\equiv\left[x^{l}\right]{\displaystyle \prod_{k=0}^{n-1}\left(x+k\right)=\left[x^{l}\right]x\cdot\left(x+1\right)\ldots\left(x+n-1\right)}=\left[x^{l}\right]\left(x\right)_{n}$$ $$=\left[x^{l}\right]{\displaystyle \sum_{j=0}^{n}}\left[\begin{array}{c} n\\ j \end{array}\right]x^{j}$$ We can partition $$\left(x\right)_{\left(a+b\right)}\rightarrow\left(x\right)_{a}\cdot\left(x+a\right)_{b} \tag{1}\label{1}$$ The formula for all convolutions: $$(x)_{a}\cdot\left(x\right)_{b} \tag{2}\label{2}$$ The problem can be phrased as the convolution : $${\displaystyle {\displaystyle \sum_{j=0}^{\min\left(a,b,M\right)}}\,\left[\begin{array}{c} a\\ j \end{array}\right]\left[\begin{array}{c} b\\ M-j \end{array}\right]}={\displaystyle \sum_{j=0}^{\min\left(a,b\right)}}\left(\left[x^{j}\right]\left(x\right)_{a}\right)\cdot\left(\left[x^{M-j}\right]\left(x\right)_{b}\right)$$ $$=\left[x^{M}\right]\left(\left(x\right)_{a}\cdot\left(x\right)_{b}\right) \tag{3}\label{3}$$ Remark. The upper limit can be extended but additional terms would be zero. The intent is to convert (1) to (2). Which can be done using: http://functions.wolfram.com/06.10.17.0004.02 Substituting our symbols.
$$\left(x\right)_{b}=\frac{\Gamma\left(x+a\right)\Gamma\left(x+b\right)}{\Gamma\left(x\right)\Gamma\left(x+a+b\right)}\left(x+a\right)_{b}$$ Which is so obvious it's laughable. Thus the answer is: $$\left[x^{M}\right]\frac{\Gamma\left(x+a\right)\Gamma\left(x+b\right)}{\Gamma\left(x\right)\Gamma\left(x+a+b\right)}\left(x\right)_{\left(a+b\right)}=\left[x^{M}\right]\left(x\right)_{a}\cdot\left(x\right)_{b}$$ Maxima example code for all M: load ("stirling")$ gamma_expand: true$ gamma(x+5); p1:pochhammer(x,6); expand(p1); p2:pochhammer(x+a,6); conv(a,b):=pochhammer(x,a+b)*gamma(x+a)*gamma(x+b)/((gamma(x))*gamma(x+a+b)); ratsimp(conv(4,5)); ratsimp(pochhammer(x,4)*pochhammer(x,5));
{ "language": "en", "url": "https://math.stackexchange.com/questions/3545340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 2, "answer_id": 1 }
What kind of surface is this? Is there a way to plot this? I am given the surface: $$S=\{ \vec{x} \in \mathbb R^3: {\|\vec{x} \|}_2^2=4, x^2+y^2 \le 1, z >0 \}$$ and I want to calculate the Mass of $S$ given a density $\rho$. It sort of looks like the upper half of a sphere. The problem I have is that the first equation ${\| \vec{x} \|}_2^2=4$ means that the radius of this sphere is $R=2$. However, the condition $x^2+y^2 \le1$ would mean that it is some kind of half sphere with a smaller "base". I tried to plot this in Wolfram Alpha but I couldn't get it to work. Is there any way I can parameterize/transform this surface in spherical coordinates?
In spherical coordinates (warning: notations may differ) $$ \eqalign{x &= r \sin (\theta) \cos (\phi)\cr y &= r \sin (\theta) \sin(\phi)\cr z &= r \cos(\theta)}$$ you have $\|\vec{x}\| = r$ and $x^2 + y^2 = r^2 \sin^2(\theta)$, so in this case you want $r = 2$ and $0 \le \theta \le \arcsin(1/2) = \pi/6$.
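As a sanity check of these parameter ranges (an illustrative sketch with my own names, not required for the computation), random points with $r=2$, $0\le\theta\le\pi/6$ do satisfy all three defining conditions of $S$:

```python
import math
import random

R = 2.0
theta_max = math.asin(1 / 2)  # = pi/6

for _ in range(1000):
    theta = random.uniform(0, theta_max)
    phi = random.uniform(0, 2 * math.pi)
    x = R * math.sin(theta) * math.cos(phi)
    y = R * math.sin(theta) * math.sin(phi)
    z = R * math.cos(theta)
    # ||x||^2 = 4, x^2 + y^2 <= 1, z > 0
    assert abs(x * x + y * y + z * z - 4) < 1e-9
    assert x * x + y * y <= 1 + 1e-9
    assert z > 0
```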
{ "language": "en", "url": "https://math.stackexchange.com/questions/3545520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Nilpotent Lie algebras are closed under extensions Problem: Let $L$ be a Lie algebra and $K$ an ideal such that $L/K$ is nilpotent and such that $ad(x)|_K$ is nilpotent for all $x \in L$. Prove that $L$ is nilpotent. By Engel's Theorem, I know that $K$ is nilpotent, which implies that $ad(K)$ is also nilpotent. So, there exists an integer $m$ such that $ad(K)^{(m)} = \{ 0 \}$, where $ad(K)^{(m)}= [ad(K), ad(K)^{(m-1)}]= \{ 0 \}$, (the $m$-th term of the lower central series of $ad(K)$). Now, because I know that an extension of a nilpotent algebra is nilpotent only if it is central, I would like to relate $K$ to $Z(L)$, using information on $ad(K)^{(m)}$, if that makes any sense. Is this a reasonable way to tackle the problem? If so, any hints to proceed?
The answer is yes provided that $L$ is finite-dimensional. Denote by $x^*$ a class in $L/K$, $x\in L$. Since $L/K$ is nilpotent, there exists $r$ such that $ad_{x^*}^r=0$. That is, for all $z\in L$ we have $ad_x^r(z)\in K$. But $(ad_x)|_K$ is nilpotent, hence there exists $s$ such that $0=ad_x^s(ad_x^r(z))=(ad^{r+s}_x)(z)$ for all $z\in L$. By Engel's theorem $L$ is nilpotent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3545757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\frac{1}{x}-\frac{\ln 2}{2^x} > 0$ if $x>0$ Given that $x>0$, can we show that $$\dfrac{1}{x}-\dfrac{\ln 2}{2^x} > 0$$ I've plotted it out and it appears to be monotonically approaching its $0$ limit, and when plugged into Wolfram it appears to have no real $0$s, but I do not know how to go about this analytically to prove it. This is the final step in a long sequence of steps, but this inequality is fairly self-contained so I didn't think it relevant to include said extra information.
We have: $\dfrac{1}{x} - \dfrac{\ln 2}{2^x}= \dfrac{2^x-\ln(2^x)}{x\cdot 2^x}> 0$ because $x > 0, 2^x > 0$ and $2^x > \ln(2^x)$. The last one holds because $y > \ln y$ is true due to $e^y > y$ which is clearly true for $y = 2^x > 0$ .
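A numerical spot check of the inequality (illustration only; the proof above is already complete):

```python
import math

def g(x):
    # g(x) = 1/x - ln(2) / 2^x, claimed positive for all x > 0
    return 1 / x - math.log(2) / 2 ** x

# sample across several scales; 2^x > ln(2^x) keeps the expression positive
for x in (1e-9, 0.1, 0.5, 1.0, 2.5, 10.0, 50.0):
    assert g(x) > 0
```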
{ "language": "en", "url": "https://math.stackexchange.com/questions/3545870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Intuition behind $\sin(\theta)$ when introducing this to high school students When first introducing trigonometry to students, the traditional setup is to start with a right-angled triangle with reference angle $\theta$ and we label the sides with "Hypotenuse, Opposite and Adjacent." To keep students engaged with some practicality behind this, we can give an example of trying to figure out the height of a tree, know how far you are from the base of the tree and estimating the angle to the top of the tree. Then we define something arbitrary called "$\sin(\theta) = \frac{\text{Opposite}}{\text{Hypotenuse}}$". I feel like at this point, students lose the conceptual intuition behind what's going on. Some students who are able to just accept it without questioning it too much can start punching in numbers and angles into the calculator when doing example questions. Other students who feel stuck with this weird idea might not be able to move forward. What would be a good idea to explain how to think about $\sin(\theta) $? I don't want to introduce a unit circle type definition because I feel like it will only make it less tangible for them. Can we do better than something like "it's a magic computer which tells you the ratio of the opposite and hypotenuse sides of a right-angled triangle when you supply it the reference angle" To maybe elaborate/clarify: I feel like a few things that students might not be able to understand If you take the tree example from above, we have the adjacent side and the angle. Now: The definition of $\tan(\theta)$ is the missing quantity we wanted in the first place. The ratio of the opposite side and the adjacent side. But how does $\tan$ go and calculate the ratio when I give it a angle? I think it's possible to convince them - once I have this ratio, I can find the length of the missing side: $\text{Opposite} = \tan(\theta)\times \text{Adjacent}$.
You can sell sine and cosine based on expressing how much of the right triangle in question aligns with the adjacent or opposite side. Let us set notation, * *$A$ = adjacent side length *$B$ = opposite side length *$C$ = hypotenuse side length Since the triangle is assumed to be a right triangle we know $A^2+B^2=C^2$. Let $\theta$ be the angle between $A$ and $C$. * *the hypotenuse is the longest side; $A,B \leq C$ *the only way for $A=C$ is that $\theta = 0^o$ (this happens when $B=0$ *if we imagine $A$ shrinking to zero we see $\theta$ gets close to $90^o$ We can introduce sine and cosine as devices to express how much of $C$ is used in forging $A$ or $B$: * *$A = C \cos \theta$ *$B = C \sin \theta$ Notice since $A,B \leq C$ we must have $\cos \theta, \sin \theta \leq 1$. Also, when $\theta = 0$ we noted $A=C$ hence $\cos 0 = 1$ whereas $\sin 0 = 0$. Conversely, from the case of $A \rightarrow 0$ we saw $B=C$ and $\theta = 90^o$ hence $\cos 90^o = 0$ whereas $\sin 90^o = 1$. Of course, there are much better ways. But perhaps this is sort of in the direction you seek ?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3545998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How do I Transform a Quadratic expression into a Pell Equation? I have been told that a simple linear transformation (or a change of variables) can transform the quadratic $$x^2+45xy-216y^2$$ into the Pell equation $$p^2−321q^2=1$$ However I have been unable to achieve this. Can anyone help me find a simple linear transformation (or a change of variables) to arrive at the Pell equation above? I want to eliminate the xy term in the quadratic.
Let $p=x+\dfrac{45}2y$ and $q=\dfrac32y$. Then $p^2-321q^2=x^2+45xy+\dfrac{2025}4y^2-321\dfrac94y^2=x^2+45xy-216y^2.$
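The change of variables can be verified over exact rationals; a small sketch (my own helper names) checks the identity $p^2-321q^2=x^2+45xy-216y^2$ on a grid of integer inputs. Note that $p$ and $q$ are themselves integers only when $y$ is even:

```python
from fractions import Fraction

def quadratic(x, y):
    return x * x + 45 * x * y - 216 * y * y

def pell_form(x, y):
    # p = x + (45/2) y,  q = (3/2) y
    p = Fraction(x) + Fraction(45, 2) * y
    q = Fraction(3, 2) * y
    return p * p - 321 * q * q

for x in range(-10, 11):
    for y in range(-10, 11):
        assert pell_form(x, y) == quadratic(x, y)
```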
{ "language": "en", "url": "https://math.stackexchange.com/questions/3546134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $G$ is isomorphic to the direct product of its subgroups $H$ and $K$, are $H$ and $K$ normal in $G$? Is this proposition true? Or give a counter example? It comes from this question: if $H$ is a direct factor of $K$ and $K$ is a direct factor of $G$, then $H$ is normal in $G$. If the proposition is true then we are done.If not then how to prove it ..?
Take any group $H$ with a non-normal subgroup $K\subseteq H$ and consider $G:=H\times K$. Now let $H':=H\times\{e\}$ and let $K':=K\times\{e\}$. Both are subgroups of $G$ but $K'$ is not normal. However $K'$ is isomorphic to $K$ and $H'$ is isomorphic to $H$ and thus $G\simeq H'\times K'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3546327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Showing $(a_k)_{k=1}^{\infty}$ is in $l^1$ My lecturer claimed this following without a proof. If anybody could give me some idea why this is true, it would be great. Let $a_k\in\mathbb{R}$, be a real sequence, such that the series $\sum_{k=1}^{\infty}a_kx_k$ converges for all sequences $(x_k)_{k=1}^{\infty}$ with $\lim_{k\rightarrow\infty}x_k=0.$ Then $(a_k)_{k=1}^{\infty}\in l^1$?
It suffices to show that $(a_n)_{n=1}^\infty\in (c_0)^*$, since $(c_0)^*=\ell^1$. To show this, observe that the operators $x\mapsto \sum_{n=1}^N a(n)x(n)$ are continuous on $c_0$. For each fixed $x\in c_0$, the series $\sum_n a(n)x(n)$ converges by hypothesis, so the quantity $$\sup_N \Big|\sum_{n=1}^N a(n)x(n)\Big|$$ is finite. So by the Uniform Boundedness Theorem, $a$ is continuous on $c_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3546472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove that $\lim_{t \to \infty} \int_1^t \sin(x)\sin(x^2)\,dx$ converges Question_ Prove that $$\lim_{t \to \infty} \int_1^t \sin(x) \sin(x^2) \, dx$$ converges. I think the indefinite integration of $\sin(x)\sin(x^2)$ is impossible. Besides, I've wondered whether the definite integration of it is possible or not. I've tried to use the condition that $t \to \infty$. The one that came up to my mind is to use partial integration. When using it, we can have: $$\int_{1}^{t}\sin(x)\sin(x^2) \, dx=-\left[ \sin(x^2) \cos(x) \right]_1^t + \int_1^t 2x\cos(x^2)\sin(x) \, dx$$ However, since $\sin(t^2)\cos(t)$ diverges as $t \to \infty$, I couldn't determine whether the given integration diverges or not. Due to this, I re-tried to have partial integration in a quite different way: $$\int_1^t \sin(x)\sin(x^2) \, dx=\int_1^t \frac{\sin(x)}{2x}(2x\sin(x^2)) \, dx=-\left[\frac{\sin(x)}{2x} \cos(x^2)\right]_1^t + \int_1^t \frac{x\cos(x)-\sin(x)}{2x^2} \cos(x^2) \, dx$$ In this case, $\left[\sin(t)\cos(t^2)/2t\right]$ goes to $0$ as $t \to \infty$. Therefore, it is enough to see the integration part only. Unfortunately, I'm stuck here. Could you give me some key ideas that can investigate whether $$\int_{1}^{t}\frac{x\cos(x)-\sin(x)}{2x^2}\cos(x^2)dx$$ converges or not? The other way of solution is also welcome! Thanks for your advice.
This is an old Putnam problem [2000, A4]: show that $\displaystyle{\lim_{B\to \infty}\int _0^B \sin(x)\sin(x^2)\,dx}$ exists. Since we are interested in the limit, we can assume $B>1$. To simplify matters, we introduce a factor of 4. $$ \lim_{B\to \infty}2\int _0^B \sin(x)\cdot 2\sin(x^2)\,dx $$$\sin(x^2)$ doesn't have an elementary antiderivative, but $2x\sin(x^2)$ does. We multiply and divide by $x$ to introduce this factor $$ \lim_{B\to \infty}2\int _0^B\frac{ \sin(x)}{x}\cdot2 x\sin(x^2)\,dx $$Let's use IBP to trade one integral for another. Let $u=\frac{\sin(x)}{x}$, $dv=2x\sin(x^2)dx$: $$ 2\int _0^B\frac{ \sin(x)}{x}\cdot2 x\sin(x^2)\,dx = \left.-2\cos(x^2)\frac{\sin(x)}{x}\right|_0^B + \int _0^B 2\cos(x^2)\cdot \frac{x \cos (x)-\sin (x)}{x^2}\,dx $$The boundary term evaluates to 2 as $B\to\infty$. Now we use that $B>1$ to split up the new integral: $$ =2 + 2\int _0^1 \cos(x^2)\cdot \frac{x \cos (x)-\sin (x)}{x^2}\,dx+ \int _1^B 2\cos(x^2)\cdot \frac{x \cos (x)-\sin (x)}{x^2}\,dx $$For the first integral, note that cosine is positive and continuous on $[0,1]$. Then by the Mean Value Theorem for Integrals, for some $\xi\in(0,1)$ we have $$ \int _0^1 2\cos(x^2)\cdot \frac{x \cos (x)-\sin (x)}{x^2}\,dx = 2\cos(\xi^2)\int _0^1 \frac{x \cos (x)-\sin (x)}{x^2}\,dx = 2\cos(\xi^2) \left.\frac{\sin(x)}{x}\right|_0^1 $$ $$ = 2\cos(\xi^2)\left(\frac{\sin(1)}{1}-1\right); $$this is a finite number. The presence of $x^2$ in the denominator of the second integral is a good sign: we know that $x^{-2}$ is improperly integrable on $[1,\infty).$ We have to tweak things a little but the goal is to get a function comparable with $x^{-2}$ on $[1,B]$. Now we reintroduce a factor of $x/x$ as earlier and use IBP again: $$ \int _1^B 2x\cos(x^2)\cdot \frac{x \cos (x)-\sin (x)}{x^3}\,dx $$ $$ = \left.\sin(x^2)\cdot \frac{x \cos (x)-\sin (x)}{x^3} \right|_1^B - \int_1^B \sin(x^2)\cdot \frac{-x^2\sin (x)+3 \sin (x)-3 x \cos (x)}{x^4}\,dx $$The boundary terms are finite in the limit. 
This new integral has the form we want: it is roughly of the form $x^{-2}$. We now use several basic comparisons to show it converges as $B\to \infty$. First, take absolute values: $$ \left|\int_1^B \sin(x^2)\cdot \frac{-x^2\sin (x)+3 \sin (x)-3 x \cos (x)}{x^4}\,dx\right| $$ $$ \leq \int_1^B\left| \sin(x^2)\cdot \frac{-x^2\sin (x)+3 \sin (x)-3 x \cos (x)}{x^4}\right|\,dx $$ $|\sin(\theta)|\leq1$ for any real $\theta$, and dividing the numerator and denominator by $x^2$ gives $$ \leq \int_1^B1\cdot\left| \frac{-x^2\sin (x)+3 \sin (x)-3 x \cos (x)}{x^4}\right|\,dx $$ $$ = \int_1^B\frac{\left|-\sin (x)+3 \sin (x)/x^2-3 \cos (x)/x\right|}{x^2}\,dx $$ $$ \leq \int_1^B\frac{1+3+3}{x^2}\,dx=\int_1^B\frac{7}{x^2}\,dx $$This last integral is finite. The original integral is bounded by a number of finite pieces, hence is finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3546638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Finding all integer solutions to $A\times B\times C = A! + B!+ C!$ with $0\leq A, B, C\leq 9$ without trying every combination? Let $A$, $B$, $C$ be integers such that $0 \le A, B, C \le 9$. Find all the solutions for $$A\times B\times C = A! + B!+ C!$$ I have tried some values for $A$, $B$, and $C$ and found a solution: $A=4$, $B=3$, $C=3$. Is there a way to solve this without using a computer to try every combination possible?
We may suppose that $A\le B\le C$. Then $ABC\le C^3$. But if $C\ge 6$, then $C!>C^3$, so $$ABC<A!+B!+C!$$ Hence $C\le 5$. And if $C=5$, then we have $$5AB=A!+B!+120$$ If $A\le 4$, then the LHS is $\le 100$; and if $A=5$, then the RHS is $>125$. Both cases lead to a contradiction; therefore $C$ can't be $5$. Hence $C\le 4$. And you seem to have covered all those cases.
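The case analysis can be confirmed by the brute force the question hoped to avoid; a few lines show that the permutations of $(3,3,4)$ are the only solutions:

```python
from itertools import product
from math import factorial

solutions = [
    (a, b, c)
    for a, b, c in product(range(10), repeat=3)
    if a * b * c == factorial(a) + factorial(b) + factorial(c)
]

# only the permutations of (3, 3, 4) survive: 3*3*4 = 36 = 6 + 6 + 24
assert sorted(solutions) == [(3, 3, 4), (3, 4, 3), (4, 3, 3)]
```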
{ "language": "en", "url": "https://math.stackexchange.com/questions/3546795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
Use integer/quadratic programming to maximize consecutive zeros in a binary array A binary array $x = [x_1, x_2, x_3, x_4, x_5]$ with each element a binary integer variable taking values 0 or 1. One constraint: $$x_1 + x_2 + x_3 + x_4 + x_5 == 1$$ Basically one of the variables must be 1. I am trying to maximize the number of consecutive zeros in this array. The optimal result would be $x_1 = 1$ or $x_5 = 1$. In either case, it yields a result with 4 consecutive zeros. In practice, I want to allocate some slots but leave some long-range of empty slots for future allocation. Another example is: If I have to allocate one slot with length 1 and another slot with length 2. I will allocate $x_1, x_2, x_3$ so that the remaining empty slot is $x_4, x_5$ (Or allocate $x_3,x_4,x_5$ and leave $x_1,x_2$). Any suggestion to formulate in a way an optimization solver can solve? Or any suboptimal formulation? Thanks!
Suppose you have the binary vector $x = (x_1,\cdots,x_n) \in \{0,1\}^{n}$, where $x_i = 1$ if the $i^{\text{th}}$ slot is filled and zero otherwise. I can think of the following naive and nasty formulation for the objective (maximum number of consecutive zeros): $$ \underset{J}{\max}\left\lbrace \lvert J \rvert \prod_{j \in J} (1-x_j) \right\rbrace,$$ where $J$ ranges over the intervals of consecutive indices $\{i, i+1, \cdots, k\} \subseteq \{1,\cdots,n\}$ (over arbitrary subsets the expression would merely count the total number of zeros). Note: * *The variable is the interval $J$ *The number of possible choices of $J$ is $n(n+1)/2$ (excluding the empty set) *The product term inside the $\max$ can be linearized by adding auxiliary variables There may be a better modeling trick - you can try asking at OR Stackexchange.
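For small $n$ the objective can be evaluated by brute force. The sketch below (my own helper names) reads $J$ as ranging over intervals of consecutive indices, which appears to be the intent of "consecutive zeros", and confirms that with $n=5$ and exactly one filled slot the optima are $x_1=1$ and $x_5=1$:

```python
from itertools import product

def objective(x):
    # max over intervals J = {i, ..., k} of |J| * prod_{j in J} (1 - x_j)
    n = len(x)
    best = 0
    for i in range(n):
        for k in range(i, n):
            value = k - i + 1
            for j in range(i, k + 1):
                value *= 1 - x[j]
            best = max(best, value)
    return best

n = 5
feasible = [x for x in product((0, 1), repeat=n) if sum(x) == 1]
best_value = max(objective(x) for x in feasible)
optima = [x for x in feasible if objective(x) == best_value]

assert best_value == 4
assert set(optima) == {(1, 0, 0, 0, 0), (0, 0, 0, 0, 1)}
```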
{ "language": "en", "url": "https://math.stackexchange.com/questions/3546943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Jump discontinuity for Burgers equation $u_{y} +uu_{x} =0$ Let $u$ be a $C^{1}$-solution of $u_{y} +uu_{x} =0$ in each of two regions separated by a curve $x =\xi(y)$. Let $u$ be continuous, but $u_{x}$ have a discontinuity on the curve. Prove that $\frac{d\xi}{dy} = u$ and hence the curve is a characteristic. I expressed $u_{y} +uu_{x}=0$ as $(u_{y}^{+} -u_{y}^{-} +u(u_{x}^{+}-u_{x}^{-})=0$ but am unsure where to go from there.
The Rankine-Hugoniot condition reads $$ \frac{\text d \xi}{\text d y} = \frac12(u^++u^-) $$ where $u^\pm$ are the values of $u$ on each side of the curve $x=\xi(y)$. Since $u$ is continuous across the curve, we have $u^\pm = u|_{x=\xi}$, which ends the proof (recall that characteristics satisfy the Lagrange-Charpit equation $\text d x/u =\text d y /1$). See also this thread.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3547075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Given a (Deterministic Finite Automata) DFA that recognizes a language $L$, show how to construct another DFA that recognizes the language $\max(L)$. If $L$ is any language, then $$\max(L)= \{w \mid \text{$w$ is in $L$ and there is no non-empty string $x$ such that $wx$ is in $L$} \}.$$ I am really confused about what this problem is asking and I would greatly appreciate some light.
This new language is composed of the "maximal elements" of $L$. For example, if $L = \{ab, ba, aba\}$, then max$(L) = \{ba, aba\}$. It doesn't include $ab$ since that can be extended to $aba$ and still be $L$. So how can we construct a DFA recognizing this? Since $L$ is regular, it has a DFA $D = (Q, \Sigma, \delta, q_0, F)$ recognizing it. We should try to use this DFA to make a new one for max$(L)$, $D' = (Q', \Sigma, \delta', q_0', F')$. Given max$(L) \subseteq L$, it seems like a good idea to make $Q'$ the Cartesian product of $Q$ and another set. This will allow us to basically run two DFAs in parallel, and check in the first DFA if $w \in L$, and in the second that there is no way to extend $w$ to any $wx \in L$. It may seem difficult to think of the second half of the tuple, but consider this: is there a way to cleverly define $F'$ using a simple second half, to make $F'$ do most of the work of "figuring out" if there is $wx \in L$? It may be useful to write out an actual DFA or two and see how you yourself would determine max($L$), and see if you can define an $F'$ that suffices. Remember that $Q$ is finite, so you (or a set-builder...) could do a graph search over it to find paths to accepting states. Once you have $F'$ and $Q'$, then $q_0'$ and $\delta'$ are straightforward. You could even drop the tuple entirely, but I find it easier to think about for this category of problem.
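For intuition about the definition (before building the DFA $D'$), $\max(L)$ can be computed directly when $L$ is finite; this enumeration does not extend to general regular languages, and the helper name is my own:

```python
def max_of(L):
    # words w in L that cannot be extended by a non-empty x with w + x in L;
    # for a finite language: no other word of L has w as a proper prefix
    return {
        w for w in L
        if not any(v != w and v.startswith(w) for v in L)
    }

L = {"ab", "ba", "aba"}
assert max_of(L) == {"ba", "aba"}  # "ab" extends to "aba", so it drops out
```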
{ "language": "en", "url": "https://math.stackexchange.com/questions/3547212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a bijection between uncountable sets? I know that there is a bijection between naturals and rationals. I also know that there is no bijection between naturals and reals (diagonal argument). But, I have never heard of the existence of a bijection between uncountable sets (ex aleph-one). Is there a way to create a (computable ?) function that takes an element from an uncountable set and outputs (in infinite time ?) an element from another uncountable set ? (I do not have a strong mathematical background, so please keep it simple or use terms of computer science) [EDIT] It seems that my question was very trivial. An answer would be y = f(R) where f is just one-to-one. I was hoping for something more sophisticated :( . Sorry for the inconvenience. [EDIT2] How would we construct a bijection between these sets ? A = reals B = reals without naturals C = reals without primes
For a bijection between $A$ and $B$, consider the application that sends every natural $n$ to $e^n$, sends $e^n+m$ to $e^n+m+1$ for non-negative integer $m$, and leaves every other real fixed (taking the naturals to start at $1$, so that each $e^n$ is irrational).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3547399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Gaussian width of sparse balls The Gaussian width of a set $T\subset \mathbb{R}^n$ is defined as, $$ G(T) = E\left[\sup_{\theta \in T} \sum_{i=1}^n \theta_i W_i\right], $$ where, $\mathbf{W}=(W_1,\ldots,W_n)$ is a sequence of i.i.d. $N(0,1)$ random variables. I am interested in finding the value of $G(T)$ for $$ T(s) \equiv \{\theta\in\mathbb{R}^n: {\|\theta\|}_0 \leq s,{\|\theta\|}_2\leq 1\}, $$ the set of all $s$-sparse vectors within the unit ball, with $s\in\{1,\ldots,n\}$. This is an exercise problem in Wainwright's book on HD-Statistics. I have been able to show, $$ G(T(s)) = E\max_{|S|=s} {\|\mathbf{W}_S\|}_2, $$ and $S$ is a subset of $\{1,\ldots,n\}$, with cardinality $|S| = s$. Here the subscript $S$ denotes the components of $\mathbf{W}$ corresponding to $S$. Then, using Gaussian concentration inequality and the union bound, I can get, $$ P\left(\max_{|S|=s}{\|\mathbf{W}_S\|}_2 \geq \sqrt{s} + t\right)\leq \binom{n}{s} \exp\{-t^2/2\},\ \text{for all $t>0$.} $$ I can use the bound, $$ \binom{n}{s}\leq {\left(ne/s\right)}^s, \ \text{for all $s=1,\ldots,n$.} $$ Finally, I need to integrate to obtain the bound on the expectation. I am unable to do it to get the desired upper bound (of the order), $$ K\sqrt{s\log(en/s)},\ \text{where $K$ is some constant.} $$ Any ideas would be helpful!
Notation: $C$ below denotes (possibly different) absolute constants. Recall that for $N$ sub-gaussian variables $X_i$ (independence not required) with $\max_i \| X_i\|_{\psi_2}\le K$, $E \max_{i\le N} X_i \le CK \sqrt{\log{N}}.$ For our problem, the max in $E \max_{|S| \le s} |W_S|$ (with $|W_S|$ denoting the Euclidean norm of $W_S$) enumerates over $N:=\sum_{k=1}^s \binom{n}{k}$ different subsets of $\{1,\dots,n\}$. Also, using the Gaussian concentration inequality, we obtain $\max_{|S|\le s} \big\| |W_S|-\sqrt{|S|}\big\|_{\psi_2}\le C$. So we have $$E \max_{|S|\le s} (|W_S|-\sqrt{|S|}) \le C \sqrt{\log\Big(\sum_{k=1}^s \binom{n}{k}\Big)} \le C \sqrt{s\log(en/s)}$$ where we used $\sum_{k=1}^s \binom{n}{k} \le (\frac{ne}{s})^s.$ Finally, this implies (using $\sqrt{|S|}\le \sqrt{s}$ and moving $\sqrt{s}$ to the RHS) that $$E \max_{|S| \le s} |W_S| \le \sqrt{s}+C\sqrt{s\log(en/s)}\le C \sqrt{s\log(en/s)}$$ since $n\ge s$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3547487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $f=f(x(s),y(t))$, then $\frac{\partial f}{\partial t}=0$ is wrong? In a linked question, someone pointed out that if $$f=f(x(t),y(t))$$ then $$\cfrac{\partial f}{\partial t}=0.$$ Now I want to challenge this by giving an example: Let $f=f(x,y)$, where $x=x(t)$, $y=y(s)$. Because $f$ is independent of $t$, we have $$\frac{\partial f}{\partial t}=0\qquad(1)$$ Now, given that $x=x(t)=2t^2$, $y=y(s)=3s$, then $f$ is given by: $$f=f(x,y)=xy\qquad(2)$$ Substituting $x=x(t)=2t^2$, $y=y(s)=3s$ into (2), we have $$f=6t^2s\qquad(3)$$ It follows that $$\frac{\partial f}{\partial t}=12ts\qquad(4)$$ Equation (1) and Equation (4) are not equal; what is wrong?
$t$ affects $f$. In other words, $f$ and $t$ are dependent, since $f=f(x(s),y(t))$. Hope this helps! :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3547619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
A formula for any finite sequence of number In Advanced Problems in Mathematics by Stephen Siklos, pg24, he writes "Given any finite sequence of numbers, a formula can always be found which will fit all given numbers and which makes the next number (e.g.) 42." Is there a source or proof for this statement?
Given a sequence $a_0, \ldots, a_{n-1}$, all you have to do is find $n$ linearly independent functions $f_0$ through $f_{n-1}$. Then define a sequence $c_i$ by $\sum_{i=0}^{n-1} c_i f_i(k)= a_k$ for all $k$. In matrix form, that's $M\mathbf{c} = \mathbf{a}$, where $M$ is a matrix whose entries are given by $f_i(k)$ (the two parameters $i$ and $k$ giving a two-dimensional array of numbers), $\mathbf{c}$ is the sequence of coefficients, and $\mathbf{a}$ is the original given sequence. If the columns of $M$ are linearly independent, then a solution for $\mathbf{c}$ exists, giving a formula for the sequence. One can obtain Lagrange polynomials from the special case where the $f_i$ are powers of $k$, but one can also use exponential functions, trigonometric functions (e.g. discrete Fourier transforms), etc.
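A concrete sketch of the polynomial special case: Lagrange interpolation in exact arithmetic, where the sequence $1,2,4,8,16$ and the forced next value $42$ are arbitrary choices for illustration.

```python
from fractions import Fraction

def lagrange_value(points, x):
    """Evaluate the unique interpolating polynomial through `points` at x, exactly."""
    x = Fraction(x)
    total = Fraction(0)
    for i, (xi, yi) in enumerate(points):
        term = Fraction(yi)
        for j, (xj, _) in enumerate(points):
            if i != j:
                term *= (x - xj) / Fraction(xi - xj)
        total += term
    return total

# fit the given terms 1, 2, 4, 8, 16 at k = 1..5 and force the 6th term to be 42
data = [(1, 1), (2, 2), (3, 4), (4, 8), (5, 16), (6, 42)]
values = [lagrange_value(data, k) for k in range(1, 7)]
print(values)  # [1, 2, 4, 8, 16, 42]
```

The degree-5 polynomial through these six points reproduces every given term and still makes the "next" term 42.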
{ "language": "en", "url": "https://math.stackexchange.com/questions/3547757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Given $f(x) = x^n e^{-x}$, show that $\int_0^1 f(x)\, dx$ is equal to a given expression. Consider the function: $$f : \mathbb{R} \rightarrow \mathbb{R} \hspace{2cm} f(x) = x^n e^{-x}$$ I have to show the following: $$\int_0^1 f(x) dx = n! \bigg [ 1 - \dfrac{1}{e} \bigg ( 1 + \dfrac{1}{1!} + \dfrac{1}{2!} + \dfrac{1}{3!} + ... + \dfrac{1}{n!} \bigg ) \bigg ]$$ I used the notation: $$I_n = \int_0^1 f(x)dx$$ And by integrating by parts I got the recurrence formula: $$I_n = - \dfrac{1}{e} + n \cdot I_{n - 1}$$ But I don't have any idea as to how I could show what is asked.
My intuition first goes to repeated substitution: $$ \begin{split} I_n &= - \frac 1 e + n\left(-\frac 1e + (n-1) \left(-\frac 1e+\dots\right) \right) \\ &= -\frac 1e - \frac n e - \frac{n(n-1)}e - \dots - \frac{n(n-1)\cdots (2)}e + n! I_0. \end{split}$$ Direct computation of $I_0$ and collecting $n!$ results in the formula you want. Otherwise, you may show that the closed formula satisfies the recursion and be done by the axiom of induction.
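A quick numerical cross-check of the recursion against the claimed closed form (a sketch; a simple Simpson rule stands in for an independent evaluation of the integral):

```python
import math

def I_closed(n):
    partial = sum(1.0 / math.factorial(k) for k in range(n + 1))
    return math.factorial(n) * (1.0 - partial / math.e)

def I_recurrence(n):
    I = 1.0 - 1.0 / math.e              # I_0 = integral of e^{-x} over [0, 1]
    for k in range(1, n + 1):
        I = -1.0 / math.e + k * I       # I_k = -1/e + k * I_{k-1}
    return I

def simpson(f, a, b, m=500):
    # composite Simpson's rule with 2m subintervals
    h = (b - a) / (2 * m)
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m))
    return s * h / 3

n = 4
numeric = simpson(lambda x: x ** n * math.exp(-x), 0.0, 1.0)
print(I_closed(n), I_recurrence(n), numeric)  # all three agree
```

All three values coincide, which is exactly the statement that the closed formula satisfies the recursion with the correct base case.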
{ "language": "en", "url": "https://math.stackexchange.com/questions/3547899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
UMVUE of $\frac{p}{1-p}$ when $X\sim \mathrm{Bin}(n,p)$ Uniform Minimum Variance Unbiased Estimate of $\frac{p}{1-p}$ when $X\sim \mathrm{Bin}(n,p)$. Note: $\mathrm{Bin}(n,p)$ is a one-parameter exponential family member with minimal complete sufficient statistic $X$. So if we can find $T$ with $E[T(X)]=\frac{p}{1-p}$, then by the Lehmann–Scheffé theorem it is the UMVUE, but it will not achieve the Cramér–Rao bound because no linear function of $X$ can produce an unbiased estimate of $\frac{p}{1-p}$. Then: $$E[T(X)]=\sum_{t=0}^{n}T(t){n\choose t}p^t(1-p)^{n-t}=\frac{p}{1-p}$$ $\Longrightarrow$ $$\sum_{t=0}^{n}T(t){n\choose t}p^{t-1}(1-p)^{n-(t-1)}=1$$ If we let $$T(t)=\frac{{n\choose t-1}}{{n\choose t}}$$ the desired equality follows, and then $$T(X)=\frac{{n\choose X-1}}{{n\choose X}}$$ is the UMVUE. Is my logic correct? Is there another way to find the UMVUE?
I think the UMVUE does not exist for $\frac{p}{1-p}$. $$E[T(X)]=\sum_{t=0}^{n}T(t){n\choose t}p^t(1-p)^{n-t}=\frac{p}{1-p}$$ $$\sum_{t=0}^{n}T(t){n\choose t}\left(\frac{p}{1-p}\right)^t(1-p)^{n}=\frac{p}{1-p}$$ $$\sum_{t=0}^{n}T(t){n\choose t}\left(\frac{p}{1-p}\right)^t=\frac{p}{1-p}\cdot\frac{1}{(1-p)^{n}}$$ By choosing $\lambda=\frac{p}{1-p}$, this becomes $$\sum_{t=0}^{n}T(t){n\choose t}\lambda^t =\lambda (1+\lambda)^n,$$ and this identity would have to hold for all $\lambda>0$. But that cannot happen, since the maximal powers of $\lambda$ on the two sides are not equal: the left side is a polynomial of degree at most $n$, while the right side has degree $n+1$. Another way: it is easy to see that $\frac{p}{1-p}=-1+\frac{1}{1-p}$. Since $\frac{1}{p}$ and $\frac{1}{q}=\frac{1}{1-p}$ are not U-estimable, $\frac{p}{1-p}=-1+\frac{1}{1-p}$ is not U-estimable.
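One can also check numerically that the candidate $T(X)={n\choose X-1}/{n\choose X}$ from the question is biased, consistent with the degree-count argument (a small sketch; the values $n=6$, $p=0.3$ are arbitrary):

```python
import math

n, p = 6, 0.3
q = 1 - p

def T(t):
    # candidate from the question: T(0) = 0, T(t) = C(n, t-1) / C(n, t)
    return 0.0 if t == 0 else math.comb(n, t - 1) / math.comb(n, t)

expectation = sum(T(t) * math.comb(n, t) * p ** t * q ** (n - t)
                  for t in range(n + 1))
print(expectation, p / q)  # E[T(X)] = (p/q)(1 - p^n), strictly below p/q
```

A short computation shows $E[T(X)]=\frac{p}{q}(1-p^n)$, so the candidate undershoots $\frac{p}{q}$ by $\frac{p^{n+1}}{q}$ for every $p\in(0,1)$.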
{ "language": "en", "url": "https://math.stackexchange.com/questions/3548046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
rank deficient least squares with minimum $\ell_1$ norm When a least square solution of a rank deficient least square problem is sought, there are multitude of solutions that give the same minimal residual vector $r=\mathbf{A}x-b$. I am familiar with Moore-Penrose pseudo-inverse that picks minimum $\ell_2$-norm least square solution among these solutions i.e. $$ x_{LS} = \mathbf{A}^{\dagger}b$$ Computing this usually involves getting the SVD of A and screening all singular vectors associated with zero singular values. Is there a similar expression that gives the minimum $\ell_1$-norm? Even if no expression can be given, is there a algorithm to compute the minimum $\ell_1$-norm among all solutions that give the minimum residual vector? This seems to be a convex optimization problem but I could be mistaken. If it helps I am interested in a problem with column-rank deficient $\mathbf{A}$ and it has more rows than columns. Update: A related problem where the residual norm is known can be solved via LASSO (using the homotopy approach to find the regularization parameter that matches the residual norm). LASSO indeed seems to give the minimum $\ell_1$ norm solution for the given residual norm. In this problem the residual vector is given and a minimum $\ell_1$ norm solution is sought. Edit: After the comment below by @littleO I realized the mistake I was making that motivated the original question. I was planning to project the $b$ vector on to null space of $\mathbf{A}$ and call it the noise or residual vector but there could be a component of the noise vector in the range of $\mathbf{A}$ as well. So if I may change the question, how does one find out or estimate the norm of the noise?
First solve the unconstrained least squares problem, for example by using the Moore-Penrose pseudo-inverse (which picks the minimum $\ell_2$-norm least squares solution), or any other method. Evaluating the two-norm of the residual at that optimal value of $x$ provides the $\epsilon$. Then solve the $\ell_1$-norm minimization problem subject to the $\ell_2$-norm constraint. Using the Moore-Penrose pseudo-inverse least squares solution, that would be: minimize $\|x\|_1$ subject to $\|\mathbf{A}x - b\|_2 \le \|\mathbf{A}\mathbf{A}^{\dagger}b - b\|_2$. The CVX code for this convex optimization problem is:
cvx_begin
    variable x(n)
    minimize( norm(x, 1) )                    % l1 objective
    norm(A*x - b) <= norm(A*pinv(A)*b - b)    % residual no worse than least squares
cvx_end
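For intuition (independent of CVX), here is a tiny hand-worked rank-deficient instance showing that the minimum-$\ell_1$ and minimum-$\ell_2$ least-squares solutions differ; the matrix and data are made up for illustration. With $A=\begin{pmatrix}1&2\\1&2\end{pmatrix}$ and $b=(2,2)^T$, every least-squares solution satisfies $x_1+2x_2=2$, and along that line the $\ell_1$ norm is piecewise linear, so its minimum sits at a breakpoint:

```python
def l1_on_line(t):
    # parametrize the solution set as x = (2 - 2t, t)
    return abs(2 - 2 * t) + abs(t)

breakpoints = [0.0, 1.0]              # where a component of x vanishes
best_t = min(breakpoints, key=l1_on_line)
x_l1 = (2 - 2 * best_t, best_t)       # minimum-l1 solution: (0, 1), l1 norm 1
x_l2 = (0.4, 0.8)                     # pseudo-inverse (minimum-l2) solution
print(x_l1, l1_on_line(best_t))
print(abs(x_l2[0]) + abs(x_l2[1]))    # about 1.2, strictly larger
```

Both points achieve the same (here zero) residual, yet their $\ell_1$ norms differ, which is why a separate optimization over the solution set is needed.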
{ "language": "en", "url": "https://math.stackexchange.com/questions/3548192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can anyone explain a compact set intuitive way? I know that there are many ways to define the compactness of the set in a topological space, e.g., $X$ is compact if and only if every open cover of $X$ has a finite subcover. I also know that it is the extension of the concept of the bounded closed set in an Euclidean space to a general topological space. However, I do not believe I truly understand what it is intuitively. Can anyone help me grasp the concept intuitively? ** I already got a wonderful answer, but please add more answers if you have different (or even similar) insight of yours. I can't examine what more I could learn on this post. I'm very excited!
Compact sets have many nice properties of finite sets but they can be infinite. For example, in finite sets: * *All functions have a maximum *All functions are bounded *All sequences have a constant subsequence In compact sets: * *All continuous functions have a maximum *All continuous functions are bounded *Spaces where all sequences have a convergent subsequence are called sequentially compact Just like finite sets but infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3548368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
If the ratio of areas of two polygons is the square of the ratio of their perimeters, are the polygons similar? It is known that for two similar polygons $A$ and $B$, the ratio of the areas is the square of the ratio of the perimeters. That is, for example, if the ratio of perimeter A to perimeter B is 5:7, then the ratio of their areas is 25:49. However, is the converse to this statement true? That is, If the ratio of areas of two polygons is the square of the ratio of their perimeters, are the polygons similar? What if we add the specification that the polygons have the same number of vertices? I cannot think of a counterexample. Any help is appreciated!
HINT. Let $p_1$ be the perimeter and $A_1$ the area of the polygon $P_1$, and similarly $p_2$ and $A_2$ for the polygon $P_2$, with $$\frac{p_1^2}{p_2^2}=\frac{A_1}{A_2}$$ You have $$p_2=s_1+s_2+\cdots+s_n$$ How many changes of the sides $s_i$ preserving the value $A_2$ are there? The answer to your problem is NO.
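To see the NO concretely: the hypothesis $A_1/A_2=(p_1/p_2)^2$ says exactly that the two polygons share the scale-invariant quotient $A/p^2$, and non-similar polygons can share it. A numerical sketch (the specific triangles are my own choice): find an isosceles triangle with the same $A/p^2$ as the $3$-$4$-$5$ right triangle.

```python
import math

def quotient(a, b, c):
    # scale-invariant ratio Area / Perimeter^2, area via Heron's formula
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))
    return area / (a + b + c) ** 2

target = quotient(3, 4, 5)            # 6 / 144 = 1/24

# isosceles triangle (1, 1, base): the quotient grows from 0 as the base grows
# from 0 and exceeds 1/24 before base = 1, so bisection finds a match
lo, hi = 1e-9, 1.0
for _ in range(80):
    mid = (lo + hi) / 2
    if quotient(1, 1, mid) < target:
        lo = mid
    else:
        hi = mid
base = (lo + hi) / 2
print(base, quotient(1, 1, base))     # isosceles, hence not similar to 3-4-5
```

The resulting triangle has two equal sides, so it cannot be similar to the scalene $3$-$4$-$5$ triangle, yet suitably scaled copies of the two satisfy the area/perimeter hypothesis.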
{ "language": "en", "url": "https://math.stackexchange.com/questions/3548583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Eigenpairs of Normal Matrices Let's say $A$ is a normal matrix, that is, $AA^*=A^*A$ - then what can be said about the eigen-pairs of $A$ and $A^*$? I'm trying to show that if $Ax = kx$, then $A^*x = \overline{k}x$. How do I proceed? I tried the following: * *$Ax = kx$ implies *$x^*A^* = \overline{k}x^*$ *Multiplying by $Ax$ on both sides and replacing $A^*A$ by $AA^*$ - but couldn't get anywhere past this. Could someone point me in the right direction? Any help is appreciated.
Hint: We have $\|Ax-kx\|^2 =\|A^*x-\bar kx\|^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3548724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
complex integral - residue theorem or something else? I have to compute the integral of a complex function: $$ \int_{|z|=3} \frac{z^9}{z^{10} - 1}\,dz $$ I thought that, after finding the points where $z^{10} = 1$, I should use the residue theorem, but I do not know how to execute that. So I thought that maybe I should integrate by the substitution $w=z^{10}$, so $dw = 10z^9\, dz$, which would be fortunate, but then I still do not know how to compute the integral over that closed curve. Is there a simpler method which I do not see? How does one apply the residue theorem for the 10 singularities of the form $\cos(k\pi/5) + i\sin(k\pi/5)$, $k\in\{0,1,2,3,4,5,6,7,8,9\}$? Thank you for any help!
Residues sum to zero hence the sum of the residues from the poles on the unit circle is minus the residue at infinity: $$-\mathrm{Res}_{z=\infty} f(z) = \mathrm{Res}_{z=0} \frac{1}{z^2} f\left(\frac{1}{z}\right).$$ In the present case we find with $f(z) = z^9/(z^{10}-1)$ $$\mathrm{Res}_{z=0} \frac{1}{z^2} \frac{1/z^9}{1/z^{10}-1} = \mathrm{Res}_{z=0} \frac{1}{z^2} \frac{z}{1-z^{10}} = \mathrm{Res}_{z=0} \frac{1}{z} \frac{1}{1-z^{10}} = 1.$$ Hence the integral is $2\pi i\times 1 = 2\pi i.$
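The value $2\pi i$ can be double-checked numerically by discretizing the contour $z=3e^{i\theta}$ (a sketch; the equal-weight trapezoid rule on a periodic integrand converges very fast):

```python
import cmath
import math

def f(z):
    return z ** 9 / (z ** 10 - 1)

N = 20000
total = 0j
for k in range(N):
    theta = 2 * math.pi * k / N
    z = 3 * cmath.exp(1j * theta)
    dz = 3j * cmath.exp(1j * theta) * (2 * math.pi / N)  # z'(theta) dtheta
    total += f(z) * dz
print(total)  # approximately 2*pi*i
```

All ten poles lie on $|z|=1$, well inside $|z|=3$, so the discretized integral matches $2\pi i$ to high accuracy.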
{ "language": "en", "url": "https://math.stackexchange.com/questions/3548822", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Complex number inequality $|z-1| \ge \frac{2}{n-1}$ If $n \ge 3$ is an odd number and $z\in\mathbb{C}, z\neq -1$ such that $z^n=-1$, prove that $$|z-1|\ge \frac{2}{n-1}$$ I was thinking that $-2=z^n-1=(z-1)(z^{n-1}+z^{n-2}+...+z^2+z+1)$ and because $z\neq -1$, the second factor can not be $0$: $$|z-1|=\frac{2}{|z^{n-1}+z^{n-2}+...+z^2+z+1|}$$ Also $|z|=1$, so if I use triangle inequality, I find: $$|z-1|\ge \frac{2}{|z^{n-1}|+...+|z|+1}=\frac{2}{n}$$ but this is lower than $\frac{2}{n-1}$. Does this help in any way?
Let $n=2m+1$. Observe that $$z^n+1=(z+1)(z^{2m}-z^{2m-1}+\cdots+z^2-z+1)$$ has simple roots (consisting of roots of unity). Let $f(z)$ be the second factor of the right hand side. Then $$f(z)=(z-1)(z^{2m-1}+z^{2m-3}+\cdots+z)+1=0$$ if $z$ is a root of $z^n+1$ and $z\neq -1$. Applying a similar argument to yours, one has $$z-1=\frac{-1}{z^{2m-1}+z^{2m-3}+\cdots+z}$$ and since $|z|=1$, the denominator is a sum of $m$ terms each of modulus $1$, hence of modulus at most $m$ by the triangle inequality. Therefore $$|z-1|\geq \frac 1m=\frac 2{n-1},$$ as required. QED
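A quick numerical confirmation of the inequality over the roots of $z^n=-1$ (a sketch; note that equality occurs at $n=3$):

```python
import cmath
import math

mins = {}
for n in (3, 5, 7, 9, 21):
    # roots of z^n = -1 are exp(i*pi*(2k+1)/n)
    roots = [cmath.exp(1j * math.pi * (2 * k + 1) / n) for k in range(n)]
    dists = [abs(z - 1) for z in roots if abs(z + 1) > 1e-9]  # exclude z = -1
    mins[n] = min(dists)
print(mins)  # each minimum is at least 2/(n-1); for n = 3 it equals 1 exactly
```

The nearest root to $1$ is $e^{i\pi/n}$, at distance $2\sin(\pi/2n)$, which the bound $2/(n-1)$ sits just below.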
{ "language": "en", "url": "https://math.stackexchange.com/questions/3548924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Finding the infinite sum of a Fourier Series at a given $x$ My question is this: Let $f(x)=3x-1$, with period $1$ and $x\in(0,1)$. Is the Fourier series of $f(x)$ convergent at $x=2/3$? If yes, what is the corresponding value of the sum of the Fourier series? I thought that every Fourier series is convergent for all $x$, because the function is piecewise smooth and periodic, and we're defining the value of any jump discontinuities to be the $a_0$. Is that wrong? And if it's true, is it also true for a sine/cosine Fourier series? Or is it common in this topic to ask about uniform convergence and omit the "uniform"? When it comes to computing the value, I get a little confused. I know by periodicity that $f(2/3)=f(-1/3)=-2$, and since $a_0=-1$ the infinite sum has to be convergent to $-1$, which I can easily verify on a computer program. Is periodicity sufficient to prove this is what the value of the Fourier series is? If not, I cannot figure out how to find the sum by hand. I can get this far on my own: $f(2/3)=-2=-1 + \frac{3}{\pi}\sum_{n=1}^{\infty} \frac{(-1)^{n+1}\sin(4\pi n/3)}{n} \Rightarrow \sum_{n=1}^{\infty} \frac{(-1)^{n+1}\sin(4\pi n/3)}{n} = -\frac{\pi}{3}$, and I suspect I'm supposed to use some Calculus series information to somehow reach this conclusion. I've done quite a bit of research into the general form of different series, to try to refresh my memory, but none of them seem to be similar enough to my sum. Since there's a $\pi$ involved, I'm sure I can't just use the decimal approximation to see the partial sums are approaching that limit. 
There is probably a really simple way to do this, but I just can't figure it out: If I write out the first several terms of the sum as in the last part above, I get $-\frac{\sqrt{3}}{2}-\frac{\sqrt{3}}{2\cdot2}+0+\frac{\sqrt{3}}{2\cdot4}+\frac{\sqrt{3}}{2\cdot5}-0-\frac{\sqrt{3}}{2\cdot7}-\frac{\sqrt{3}}{2\cdot8}+0+\frac{\sqrt{3}}{2\cdot10}+\frac{\sqrt{3}}{2\cdot11}-0...$ I thought maybe I should combine the positives and negatives so I could get back to a form with $(-1)^k$, and then I get $\frac{-3\sqrt{3}}{4}+\frac{9\sqrt{3}}{40}-\frac{15\sqrt{3}}{112}+\frac{21\sqrt{3}}{220}-...$, whose denominators are similar to the series $\sin^{-1}(x)\approx x+\frac{x^3}{6}+\frac{3x^5}{40}+\frac{5x^7}{112}+\frac{35x^9}{1152}+...$ If it helps, I do know that $\sin^{-1}(\frac{-\sqrt{3}}{2})=-\frac{\pi}{3},$ but I don't see how the given sum could be rewritten as the Taylor series for inverse sin. The denominators look the same except the last term, but if I plug in any $x$ to the Taylor series for $\sin^{-1}(x)$, I don't get an alternating sequence. And, if I try to use $\frac{-\pi}{3}=-\tan^{-1}(\sqrt{3}), \tan^{-1}(x) \approx x - \frac{x^3}{3} + \frac{x^5}{5} - \frac{x^7}{7} + ...$, since there's an alternating sequence, I get even further away from my equating the sum to a trig function. If anyone could please explain to me how I should analyze sums like this, so that I can know what I'm doing enough to solve them by hand, and clarify the convergence question, I would greatly appreciate it!
The Fourier series converges to $f(x)$ at points of continuity, including $x=\frac{2}{3}.$ At points of discontinuity, $x =k\in \mathbb{Z}$ the series will converge to the average of the limits from the left and right: $$ \textrm{Fourier series } \rightarrow \frac{1}{2} (\lim_{x\uparrow k} f(x) + \lim_{x \downarrow k} f(x)).$$ If the period is $T=1$, and the function is as shown in the figure, then the Fourier series is given by $$f(x) \sim \frac{a_0}{2} + \sum_{k=1}^{\infty} (a_k \cos 2\pi k x + b_k \sin 2\pi k x).$$ $$a_0=2 \int_0^1 (3x-1 )\, dx = 1,$$ $$a_k=2 \int_0^{1} (3x-1) \cos 2\pi k x \, dx = 0,$$ $$b_k=2 \int_0^{1} (3x-1) \sin 2\pi k x \, dx = -\frac{3}{k \pi},$$ $$f(x) \sim \frac{1}{2} - \frac{3}{\pi} \sum_{k=1}^{\infty}\frac{\sin 2\pi k x}{k} .$$ $$f(2/3) \sim \frac{1}{2} + \frac{3\sqrt{3}}{2\pi} \sum_{k=0}^{\infty}\left[\frac{1}{3k+1} - \frac{1}{3k+2}\right]$$ $$f(2/3) \sim \frac{1}{2} + \frac{3\sqrt{3}}{2\pi} \sum_{k=0}^{\infty} \frac{1}{9k^2+9k+2} = \frac{1}{2}+\frac{1}{2} =1.$$ One way to evaluate the summation formula: $$\sum_{k=0}^\infty \frac{1}{9k^2+9k+2} = \sum_{k=0}^\infty \int_1^2 \frac{1}{(3k+x)^2} dx = \int_1^2 \sum_{k=0}^\infty \frac{1}{(3k+x)^2} dx$$ $$= \frac{1}{9} \int_1^2 \psi_1 (x/3)dx = \frac{1}{3} [\psi_0(2/3)-\psi_0(1/3)] = \frac{\pi}{3\sqrt{3}}.$$ The function $\psi_n(\cdot)$, is the polygamma function.
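Partial sums of this series confirm the limit numerically at $x=2/3$ (a sketch; pointwise convergence is only $O(1/N)$, so many terms are used):

```python
import math

def partial_sum(x, N):
    # S_N(x) = 1/2 - (3/pi) * sum_{k=1}^{N} sin(2*pi*k*x) / k
    return 0.5 - (3 / math.pi) * sum(math.sin(2 * math.pi * k * x) / k
                                     for k in range(1, N + 1))

value = partial_sum(2 / 3, 100000)
print(value)  # tends to f(2/3) = 3*(2/3) - 1 = 1
```

The partial sum settles near $1$, matching $\frac12+\frac12$ from the closed-form evaluation above.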
{ "language": "en", "url": "https://math.stackexchange.com/questions/3549086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Given by a graph is the function $f(x)$. How many solutions does $f(f(f(x)))=0$ have? This problem, like the other ones I've posted so far, is from a 2017 olympiad. It goes like this: Given by the following graph is the function $f(x)$: How many solutions does $f(f(f(x)))=0$ have? In other words, how many time does the graph of $f(f(f(x)))$ touch or intersect the x-axis? For me, the issue already lies in identifying $f(x)$. I know it must have something to do with $|x|$. How would one go about solving this problem?
You don't really need to obtain a different representation of $f$ besides the graph, to solve this problem. If $f^3(x)=0$, then $f^2(x)$ must be either $0$ or $2$. So, the problem becomes computing how many solutions are there of $f^2(x)=0$ union the solutions of $f^2(x)=2$. For $f^2(x)=0$ we have again $f(x)=0$ or $2$. Therefore, $x=0,2$ or $-2$. For $f^2(x)=2$ we have $f(x)=-2$, from where $x=4$. So, in total we have $4$ solutions $x=0,x=2, x=-2$ and $x=4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3549287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to find the largest semi-circle that fits inside a polygon? I have seen (and implemented) algorithms that find the 'Pole of Inaccessibility' for a polygon - that allows you to draw the largest circle within it. However, if I wanted to find the largest semi-circle that fits inside a polygon, is there a similar method? EDIT: I use this algorithm MapBox polylabel to calculate the largest circle that will fit inside a polygon. Whilst I understand what it does, I can't really see a way to apply it to semi-circles. I feel as if the answer might start with trying to find the longest line inside the polygon that has the smallest average distance to the boundary, which might align it close to the longest straight(ish) part of said boundary. I re-implemented this Largest Rect in a Poly which I feel could have some bearing on my thoughts above in the way that it searches for longest lines inside the poly. But its easy to come up with shapes where the largest semi-circle is not really approximated by either the largest circle or the largest rectangle.
My Approach Thus Far Using the largest rect routine above, I find the largest 2:1 rectangle that will fit. And (if necessary) move it parallel to the short edge until it is on the boundary. I then draw a semicircle on each of the long edges, and see if I can make it bigger. I then move the rectangle parallel to the short edge again, until it is on the other side. Then repeat the semicircle jigging around. It's not perfect, but it's a decent start. EDIT: It seems it would also be worth testing the 'internal' semi-circle formed by each line segment of the polyline. There also feels like there is a geometric solution to finding the largest edge-aligned internal semi-circle for each segment, but I can't quite get my head around it yet.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3549447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show $\frac{\cos(n\theta)-\cos((n+1)\theta)}{2-2\cos(\theta)}=\frac{\sin((n+\frac{1}{2})\theta)}{2\sin(\theta/2)}$ In working to prove that $$1+\cos\theta+\cos(2\theta)+\dots+\cos(n\theta)=\frac{1}{2}+\frac{\sin((n+\frac{1}{2})\theta)}{2\sin(\theta/2)} \tag{1}$$ I have shown $$\begin{align} 1+\cos(\theta)+\cos(2\theta)+\cos(3\theta)+\dots+\cos(n\theta) &=\Re\left(\frac{1-e^{(n+1)i\theta}}{1-e^{i\theta}}\right) \\[6pt] &=\frac{1-\cos(\theta)+\cos(n\theta)-\cos((n+1)\theta)}{2-2\cos(\theta)} \\[6pt] &=\frac{1}{2}+\frac{\cos(n\theta)-\cos((n+1)\theta)}{2-2\cos(\theta)} \tag{2} \end{align}$$ but I am unsure how to proceed from here and get the last term of $(2)$ to match the last term of $(1)$: $$\frac{\cos(n\theta)-\cos((n+1)\theta)}{2-2\cos(\theta)}=\frac{\sin((n+\frac{1}{2})\theta)}{2\sin(\theta/2)} \tag{3}$$ I have read this post "How can we sum up $\sin$ and $\cos$ series when the angles are in arithmetic progression?", but I cannot seem to convert from the form their answer is in to my form. If possible, I would like to avoid using too many identities as this is an exercise in my complex analysis book.
$1+\cos(\theta)+\cos(2\theta)+\cos(3\theta)+\dots+\cos(n\theta)=$ $\dfrac{2+e^{i\theta}+e^{-i\theta}+e^{2i\theta}+e^{-2i\theta}+e^{3i\theta}+e^{-3i\theta}\dots+e^{ni\theta}+e^{-ni\theta}}2=$ $\dfrac{1+e^{-ni\theta}+\dots+e^{-3i\theta}+e^{-2i\theta}+e^{-i\theta}+1+e^{i\theta}+e^{2i\theta}+e^{3i\theta}+\cdots+e^{ni\theta}}2=$ $\dfrac12$+$\dfrac{e^{-ni\theta}\left(\dfrac{1-e^{(2n+1)i\theta}}{1-e^{i\theta}}\right)}2=$ $\dfrac12$+$\dfrac{e^{-ni\theta}\left(\dfrac{e^{-i\theta/2}-e^{(2n+1/2)i\theta}}{e^{-i\theta/2}-e^{i\theta/2}}\right)}2=$ $\dfrac12$+$\dfrac{ \left(\dfrac{e^{-i(n+1/2)\theta}-e^{i(n+1/2)\theta}}{e^{-i\theta/2}-e^{i\theta/2}}\right)}2=$ $\dfrac12+\dfrac12\dfrac{ \sin\left((n+\frac12)\theta\right)}{\sin(\theta/2)}$
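A numerical spot-check of the resulting identity (a sketch over a few arbitrary $n$ and $\theta$):

```python
import math

def lhs(theta, n):
    return 1 + sum(math.cos(k * theta) for k in range(1, n + 1))

def rhs(theta, n):
    return 0.5 + math.sin((n + 0.5) * theta) / (2 * math.sin(theta / 2))

checks = [(n, theta) for n in (1, 5, 12) for theta in (0.3, 1.0, 2.5, -1.7)]
max_err = max(abs(lhs(theta, n) - rhs(theta, n)) for n, theta in checks)
print(max_err)  # essentially machine precision
```

The maximum discrepancy is at the level of floating-point roundoff, as expected for an exact identity away from $\theta\in 2\pi\mathbb{Z}$.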
{ "language": "en", "url": "https://math.stackexchange.com/questions/3549562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Number of odd and even permutations. I needed to find the number of odd and even permutations in the symmetric group $S_n$ (on $n$ elements). What we do is select an arbitrary fixed odd permutation $h \in S_n$. We know that $hS_n=\{hg:g\in S_n\} = S_n$. Let's say there are $x$ odd permutations and $y$ even permutations in $S_n$. The number of odd permutations in $hS_n$ is $y$ (each arises as $h\cdot g$ with $g$ an even permutation of $S_n$), and the number of even permutations is $x$. As $S_n$ and $hS_n$ are the same set, it follows that $x=y$. Therefore even = odd = $\frac{n!}{2}$. So, according to my proof, if we can find even one odd permutation then the numbers of even and odd permutations are the same. A subset of $S_n$ can only be a subgroup of $S_n$ if the subset either contains equally many odd and even permutations or only even permutations. Is my result correct, or did I make any mistake?
By way of enrichment I would like to show how to solve this using analytic combinatorics. We have that the sign of a permutation $\pi$ is given by $$\sigma(\pi) = \prod_{c\in \pi} (-1)^{|c|-1}$$ where $c$ iterates over the cycles of the permutation and $|c|$ is the length of the cycle. Therefore we use the following combinatorial class of permutations with the length of cycles minus one marked: $$\def\textsc#1{\dosc#1\csod} \def\dosc#1#2\csod{{\rm #1{\small #2}}} \textsc{SET}( \textsc{CYC}_{=1}(\mathcal{Z}) + \mathcal{U}\times \textsc{CYC}_{=2}(\mathcal{Z}) + \mathcal{U^2}\times \textsc{CYC}_{=3}(\mathcal{Z}) + \mathcal{U^3}\times \textsc{CYC}_{=4}(\mathcal{Z}) + \cdots).$$ This gives the EGF $$G(z, u) = \exp\left(z+u\frac{z^2}{2} + u^2\frac{z^3}{3}+ u^3 \frac{z^4}{4}\cdots\right) \\ = \exp\left(\frac{1}{u}\log\frac{1}{1-uz}\right).$$ It follows that the EGF of even permutations is given by $$H(z) = \frac{1}{2} (G(z,1)+G(z,-1)) \\ = \frac{1}{2} \left(\exp\log\frac{1}{1-z} + \exp\left(-\log\frac{1}{1+z}\right)\right) \\ = \frac{1}{2} \left(\frac{1}{1-z} + 1+z\right).$$ Therefore we have for $n\ge 2$ $$n! [z^n] H(z) = \frac{1}{2} n!$$ and as boundary cases the value one for $n=0$ and $n=1$ (these two contain zero transpositions).
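A brute-force check agrees with $\frac12 n!$ for small $n$ (a sketch; the sign is computed by counting inversions):

```python
import math
from itertools import permutations

def sign(perm):
    # parity of the number of inversions gives the sign of the permutation
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm)) if perm[i] > perm[j])
    return -1 if inversions % 2 else 1

counts = {n: sum(1 for p in permutations(range(n)) if sign(p) == 1)
          for n in range(2, 7)}
print(counts)  # {2: 1, 3: 3, 4: 12, 5: 60, 6: 360}
```

For each $n\ge 2$ the even permutations number exactly $n!/2$, matching the coefficient extraction from $H(z)$.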
{ "language": "en", "url": "https://math.stackexchange.com/questions/3549695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Lipschitz continuity of conformal maps I have a basic question on Lipschitz continuity of maps. Let $\mathbb{D}$ be the unit disk and $T$ be an equilateral triangle. We have a conformal map $\phi :\mathbb{D} \to T$, which is extended to a homeomorphism from $\overline{\mathbb{D}}$ to $\overline{T}$. In fact, $\phi$ can also be given specifically by the Schwarz--Christoffel formula. Let $p$ denote a vertex of $T$. Then, it follows from the formula that there exist $C>0$ and $R>0$ such that \begin{align*} |\phi^{-1}(z)-\phi^{-1}(p)| \le C|z-p|^3 \end{align*} for any $z \in B(p,R)$. Here, we denote by $B(p,R)$ the ball centered at $p$ with radius $R>0$. In particular, $\phi^{-1}$ is Lipschitz continuous at vertices of $T$. We also see that $\phi^{-1}$ is smooth at any points other than vertices. Then, can we show that $\phi^{-1}$ is Lipschitz continuous on $\overline{T}$? Now, $\phi^{-1}$ is smooth function on $\overline{T}$. Hence, we conclude that $|(\phi^{-1})'|$ is bounded on $\overline{T}$. Therefore, we can conclude that $\phi^{-1}$ is Lipschitz continuous. Is this proof correct?
What you wrote is almost correct but your logic is muddled: you did not prove that $\phi^{-1}$ is $C^1$ on $\bar{T}$. What one observes is that there is a constant $L$ such that $\phi^{-1}$ is $L$-Lipschitz on $T$. (Use the inverse function theorem to prove that $\phi^{-1}$ has uniformly bounded derivative.) From this, it follows that $\phi^{-1}$ is $L$-Lipschitz on $\bar{T}$. (This is very general: If $A$ is a subset of a metric space $M$ and $f: A\to M'$ is $K$-Lipschitz, then $f$ admits a unique continuous extension to $\bar{A}$, which is again $K$-Lipschitz.) Hence, $\phi^{-1}$ is Lipschitz-continuous on $\bar{T}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3549852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability of Finding Oil An oil prospector will drill a succession of holes in a given area until he finds r productive wells. The probability that he is successful on a given trial is 0.3. The prospector can only afford to drill 6 wells. What is the probability the prospector fails to find r productive wells if: (a) r = 1? (b) r = 2? (c) r = 3? I believe I did the work correctly; I just want some clarification. Let S=success and F=failure. So $P(S)=.3$ and $P(F)=.7$. So to find each we would have to use a geometric distribution, $p(r)=P(F)^{6-r}P(S)$ So for a, we have: $p(r=1)=(.7)^{6-1}(.3)=.050421$ For b: $p(r=2)=(.7)^{6-2}(.3)=.07203$ And for c: $p(r=3)=(.7)^{6-3}(.3)=.1029$
The probability of success for a given $r$ is the probability to have at least $r$ successes when attempting $6$ times an experiment with a probability of success equal to $p=0.3$. That is, if $X$ is a random variable following a binomial distribution $B(6,0.3)$, the probability $p_r$ of success, for a given $r$, is $$p_r=p(X\ge r)$$ Since $X\ge r+1\implies X\ge r$, this also means that $p_{r+1}\le p_{r}$, that is, $p_r$ is decreasing as $r$ increases. Now, $$p_1=p(X\ge1)=1-p(X=0)=1-(1-p)^6\simeq0.882351$$ $$p_2=p(X\ge2)=1-p(X\le1)=1-(1-p)^6-\binom{6}{1}p(1-p)^5\simeq0.579825$$ $$p_3=p(X\ge3)=1-p(X\le2)=1-(1-p)^6-\binom{6}{1}p(1-p)^5-\binom{6}{2}p^2(1-p)^4\simeq0.25569$$ It may look like we are cheating, since the guy is supposed to stop when he reaches $r$ successes. But we can pretend he then continues to drill until he reaches $6$, as this won't change the outcome. That's why we must check if $X$ is at least $r$.
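The three tail probabilities can be reproduced directly (a short sketch):

```python
import math

n, p = 6, 0.3

def pmf(k):
    # binomial probability P(X = k) for X ~ B(n, p)
    return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

tails = {r: sum(pmf(k) for k in range(r, n + 1)) for r in (1, 2, 3)}
print(tails)  # p_1 = 0.882351, p_2 = 0.579825, p_3 = 0.25569
```

The failure probabilities the question asks for are then simply $1-p_r$ for each $r$.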
{ "language": "en", "url": "https://math.stackexchange.com/questions/3549998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solution for a nonlinear ODE During my PhD thesis I'm facing the following ODE: $$\frac{C}{\gamma}\frac{f'(y)}{\sqrt{1+(f'(y))^2}}=y-K_1$$ Where $y$ is a positive variable $(y\geq0)$, $C$ and $\gamma$ are parameters, $K_1$ is a constant (due to a previous integration, and necessarily $\leq0$) and $f'(y)=df/dy$ is the function I want to obtain. Squaring both sides, it can be rewritten: $$(\frac{df}{dy})^2=\frac{\gamma^2(y-K_1)^2}{C^2-\gamma^2(y-K_1)^2}$$ Now, if I guarantee that $C^2-\gamma^2(y-K_1)^2 \geq 0$ the solution is easily obtained through: $$df=\pm\frac{\gamma(y-K_1)}{\sqrt{C^2-\gamma^2(y-K_1)^2}}dy$$ But my problem is that in many cases this term in the square root will be less than 0, and then I don't know how to proceed, since the function will no longer belong to $\mathbb{R}$, which is physically impossible in the problem. Could anyone help me in this matter? Thank you very much! P.S.: The ODE was obtained after integration of $\frac{Cf''(y)}{(1+f'(y)^2)^{3/2}}=\gamma$
Let us re-write the ODE as $$\frac{A y'(x)}{\sqrt{1+y'^2(x)}}=x-B \implies A^2 y'^2(x)=(x-B)^2(1+y'^2(x)).$$ $$\implies y'^2(x)(A^2-(x-B)^2)=(x-B)^2.$$ $$\implies y'(x)=\pm \frac{x-B}{\sqrt{A^2-(x-B)^2}}.$$ $$\int dy= \pm \int \frac{x-B}{\sqrt{A^2-(x-B)^2}}dx.$$ $$\implies y(x)=\pm \sqrt{A^2-(x-B)^2}+D$$ $$\implies (y-D)^2+(x-B)^2=A^2.$$ $D$ is constant of integration, $A=C/\gamma$, $B=K_1.$
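A numerical check that the circle family solves the original equation (a sketch, in this answer's notation $A=C/\gamma$, $B=K_1$; the lower branch $y=-\sqrt{A^2-(x-B)^2}+D$ pairs with the $+$ sign of the original equation):

```python
import math

A, B, D = 2.0, 1.0, 0.5   # arbitrary test values; need |x - B| < A

def yprime(x):
    # derivative of the lower branch y(x) = -sqrt(A^2 - (x-B)^2) + D
    return (x - B) / math.sqrt(A * A - (x - B) ** 2)

residuals = []
for x in (-0.5, 0.2, 1.0, 1.8, 2.5):
    lhs = A * yprime(x) / math.sqrt(1 + yprime(x) ** 2)
    residuals.append(abs(lhs - (x - B)))
print(max(residuals))  # near 0: A y'/sqrt(1+y'^2) = x - B holds
```

The constant $D$ drops out of $y'$, reflecting the translation freedom of the circle's center.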
{ "language": "en", "url": "https://math.stackexchange.com/questions/3550182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivation of D'Alembert's Solution Could anyone provide references wherein D'Alembert's solution to the one-dimensional wave equation is derived from basic properties? I keep running into the approach where D'Alembert's solution is mysteriously presented out of thin air and then shown to satisfy the wave equation. I am looking for references that show how D'Alembert's solution is derived. $$\;$$ **Giuseppe:** Either of your references is a good first start. The next step, which is what I'm looking for, would be to derive D'Alembert's solution: $$y(x,t) = \frac{1}{2}\left[F\left(x-ct\right)+F\left(x+ct\right)\right]+\frac{1}{2c}{\int\limits_{x-ct}^{x+ct}}G\left(s\right)ds$$ where $F$ and $G$ are odd periodic extensions of $f$ and $g$ respectively, and where the initial conditions are $$y\left(x,0\right)=f\left(x\right)\;,\;0{\lt}x{\lt}L,$$ $${y_t}\left(x,0\right)=g\left(x\right)\;,\;0{\lt}x{\lt}L\;.$$ D'Alembert's is more general in the sense that $f$ and $g$ do not have to be restricted to well-behaved functions like the trigonometric functions.
For the derivation of D'Alembert's solution to the one-dimensional wave equation you may follow the following references: $(1)~~$"Linear Partial Differential Equations for Scientists and Engineers" by Tyn Myint-U & Lokenath Debnath (Chapter $5$, section $5.3$) $(2)~~$d’Alembert’s solution of the wave equation / energy $(3)~~$Wikipedia $(4)~~$WolframMathWorld
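As a quick numerical sanity check (a sketch, with arbitrary test choices $F=\sin$, $G=\cos$, $c=2$, and the integral of $G$ written in closed form for this test), any function of d'Alembert's form should satisfy $y_{tt}=c^2 y_{xx}$; finite differences confirm this at a few points:

```python
import math

c = 2.0  # wave speed (arbitrary test value)

def y(x, t):
    # d'Alembert form with F = sin and G = cos; the integral of cos over
    # [x - ct, x + ct] is sin(x + ct) - sin(x - ct), used in closed form here
    return 0.5 * (math.sin(x - c * t) + math.sin(x + c * t)) \
         + (math.sin(x + c * t) - math.sin(x - c * t)) / (2 * c)

def second_diff(f, h):
    # central second-difference approximation of f''
    return lambda s: (f(s + h) - 2 * f(s) + f(s - h)) / h**2

h = 1e-3
for (x, t) in [(0.3, 0.7), (1.1, 0.2), (-0.5, 1.4)]:
    y_tt = second_diff(lambda s: y(x, s), h)(t)
    y_xx = second_diff(lambda s: y(s, t), h)(x)
    assert abs(y_tt - c**2 * y_xx) < 1e-4  # y_tt = c^2 y_xx
print("d'Alembert form satisfies the wave equation at the test points")
```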
{ "language": "en", "url": "https://math.stackexchange.com/questions/3550371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How many 3-letter words can be formed from the characters in "ABRACADABRA"? The words cannot repeat a letter, yet each copy of a letter is distinct, i.e. the first 'B' is distinct from the second 'B', but both 'B's cannot appear in a word simultaneously. I need an algorithm/formula that answers this for any word (provided as a count of each letter). For a smaller word, AABBC, the answers are permutations of these letter choices: * *A1B1C *A1B2C *A2B1C *A2B2C These are all the combinations, so the count of all words is $4 \times {}^{3}P_{3} = 4 \times 6 = 24$.
As an algorithm:

    loop over (0-9) as A
        loop over (0-9 excluding A) as B
            loop over (0-9 excluding B) as C
                output ABC
            end
        end
    end

You could try asking this question on stackoverflow as well.
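For the letter version of the problem, a brute-force check (a sketch; the function name is mine) that treats each copy of a letter as a distinct labeled object and keeps only arrangements whose underlying letters are all different:

```python
from itertools import permutations

def count_words(word, k=3):
    # label each occurrence so copies of a letter are distinct objects,
    # then keep only arrangements whose underlying letters differ
    labeled = [(ch, i) for i, ch in enumerate(word)]
    return sum(1 for p in permutations(labeled, k)
               if len({ch for ch, _ in p}) == k)

assert count_words("AABBC") == 24   # matches the count in the question
print(count_words("ABRACADABRA"))   # 462
```

The count for ABRACADABRA comes out as $462$, consistent with the closed form $3!\times\sum(\text{products of multiplicities over 3 distinct letter types})=6\times77$.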
{ "language": "en", "url": "https://math.stackexchange.com/questions/3551029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$T\in \mbox{End}(V)$. If $p|m_T,$ then there is a vector $v$ such that the minimal polynomial of $v$ is exactly $p$. Let $T:V\rightarrow V$ be a linear map and let $p\in \mathcal P(\mathbb C)$ be a non-constant polynomial such that $p | m_T$, where $m_T$ is the minimal polynomial of the endomorphism $T$. Prove that there exists $v\in V$ such that $m_v=p$, that is, such that the minimal polynomial for $v$ is $p$ (the minimal polynomial for $v$ is the lowest degree polynomial such that the family $(T^j(v))_{j\geq 0}$ is linearly dependent, i.e., such that $m_v(T)(v) = 0$) My attempt: Since $p|m_T$, we may write $m_T = ph$. Writing $V_p := \{v\in V: p(T)(v) =0\}$, we must have $V_p \neq \{0\},$ otherwise we have $p(T)(u)\neq 0$ for all nonzero $u\in V$ and, since for all $u\in V$ we have $0=m_T(T)(u) = p(T)(h(T)(u)),$ it must be the case that $h(T)(u) = 0 $ for all $u\in V$. But the degree of $h$ is less than the degree of $m_T$, which is a contradiction. Ok, now we may consider $w\in V_p, w\neq 0$. It is true that $p(T)(w) = 0$ and since $m_w$ is the lowest degree polynomial satisfying the condition $m_w(T)(w) = 0,$ it follows that $m_w|p$. No argument can help me proceed here to conclude that $p|m_w$, because it may not be the case. I'm stuck at this point. I guess that the correct path should try to construct such a vector $v$ using somehow the irreducible factors of $p$, but I'm not sure how to proceed. EDIT: I guess that I could solve it, I just need to formalize it properly. Since $p|m_T$, the idea is to write $m_T(t) = \prod_{i=1}^s p_i^{\ell_i}$ and $p(t)= \prod_{i=1}^s p_i^{k_i}$, where $0\leq k_i\leq \ell_i$ and each $p_i$ is a monic irreducible factor and relatively prime with one another. 
Using a lemma prior to the primary decomposition theorem and the notation $V_p=\{v\in V: p(T)(v) = 0\}$, we may write: $V_p = \bigoplus V_{p_i^{k_i}}$ and since $V_p\neq 0$, we may consider $0\neq v = v_1+\cdots + v_s \in V_p.$ Since $p(T)(v) = 0$ and every $p_i$ is relatively prime with each other, it must be the case that $p_i^{k_i}(T)(v_i) = 0$ for every $i=1,\cdots, s$ and $p_i^{k_i}(T)(v_j) \neq 0$ for $i\neq j$. Therefore, we may consider $m_1,\cdots, m_s\in \mathbb N$ minimal such that $p_i^{m_i}(T)(v_i) \neq 0 $ and $p_i^{m_i+1}(T)(v_i)= 0.$ Define then $0 \neq \tilde v = \sum p_i^{m_i}(T)(v_i)$. It is the case that $m_{\tilde v} = p$. Is this correct?
Use the fact that $V^T$ has cyclic submodule $C \cong \mathbb C[x]/(m_T)$ and then $h$ has order $p$ in it. Here are more details: Cyclic Modules, Characteristic Polynomial and Minimal Polynomial
{ "language": "en", "url": "https://math.stackexchange.com/questions/3551180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate $\lim\limits_{x\rightarrow 0^+}x\log(1+x^{-1})$ Can someone help me out with $$\lim\limits_{x\rightarrow 0^+}x\log(1+x^{-1})?$$ I tried Taylor's expansion to no avail.
$$\lim_{x\rightarrow 0^+}x\ln\left(1+\frac{1}{x}\right)=\lim_{x\rightarrow 0^+}\frac{\ln\left(1+\frac{1}{x}\right)}{\frac{1}{x}}\overset{\text{L'H}}{=}\lim_{x\rightarrow 0^+}\frac{\frac{-\frac{1}{x^{2}}}{1+\frac{1}{x}}}{-\frac{1}{x^{2}}}=\lim_{x\rightarrow 0^+}\frac{1}{1+\frac1x}=\lim_{x\rightarrow 0^+}\frac{x}{x+1}=0$$ (The analogous limit as $x\to\infty$ equals $1$, but here $x\to 0^+$ and the limit is $0$.)
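A quick numerical check of the limit in the question (a sketch): the products $x\log(1+1/x)$ shrink toward $0$ as $x\to 0^+$, in contrast with the value $1$ obtained in the $x\to\infty$ regime:

```python
import math

def f(x):
    return x * math.log(1 + 1 / x)

values = [f(10.0**-k) for k in (1, 3, 6, 9)]
# the values are strictly decreasing toward 0 as x -> 0+
assert all(a > b for a, b in zip(values, values[1:]))
assert values[-1] < 1e-6
print(values)
```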
{ "language": "en", "url": "https://math.stackexchange.com/questions/3551494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Question on Primitive Ring and $\operatorname{End}(V)$ A ring $R$ is primitive if it has a faithful irreducible module. Let's say that $R$ is primitive and its faithful irreducible module is $V_R$. Since this module is faithful, we have that $R$ embeds naturally into $\operatorname{End}_R(V)$. Since $V_R$ is irreducible, we have that $R$ acts transitively on $V_R$. When I first read this, it seemed that this should imply that $R \cong \operatorname{End}_R(V)$. Since the module is faithful, we can view $R$ as being embedded into $\operatorname{End}_R(V)$, and I was thinking that $R$ acting transitively on $V$ means that $R$ must be all of $\operatorname{End}_R(V)$. Can someone give me some examples or insight into why this isn't so? Thanks!
Hm, a couple of things need to be straightened out. First of all, given a faithful left $R$ module $M$, one can always talk about the natural embedding of $R$ into $\mathrm{End}(M_\mathbb Z)$. If it is a faithful right $R$ module, then you get an embedding of $R^{op}\to \mathrm{End}(M_\mathbb Z)$. When $M$ is a faithful simple left $R$ module, $End(_RM)=D$ is a division ring. This can be considered as an action of $D$ on the right of $M$, and in a similar way to the above we can embed $R\to End(M_D)$. This is the normal context for considering left primitive rings. With that out of the way, on to your questions. "it seemed that this should imply that $R \cong \mathrm{End}_R(V)$" Nope. That would imply $R$ is a division ring. Even with the correction of replacing $_R$ with $_D$ as suggested above, there is no reason for it to be isomorphic to the full ring of transformations. For one thing, the full ring of transformations is always primitive on both sides, but there exists a right-not-left primitive ring. "I was thinking that $R$ acting transitively on $V$ means that $R$ must be all of $\mathrm{End}_R(V)$" The fact that $R$ acts transitively on $V$ just reflects that $V_R$ is simple. (You can work out that the two things are equivalent.) It doesn't say anything about how much $R$ "fills" $End_D(V)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3551625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $\mathscr A$ has all products and equalizers, then it has all limits This question is about part (a) of this proposition: Here is a plan of the proof. Here's what the picture looks like, from what I understand: But I don't understand what the condition $s\circ p=t\circ p$ means. I guess this means componentwise equality: $s_u\circ p_u=t_u\circ p_u $ for all $u$, but what is $p_u$? $L$ is not a product, and so $p_u$ cannot be a projection. I'm just trying to prove first that $Du\circ p_I=p_J$ for all $u: I\to J$, and it seems I need to use $s\circ p=t\circ p$ to that end, which is unclear how to do.
$s \circ p = t \circ p$ are both arrows into a product, the equality here is equivalent to them being equal after each projection, i.e., $\pi_u \circ s \circ p = \pi_u \circ t \circ p$ for all $u : J \to K$ in $\mathbf I$. Now, $\pi_u \circ s$ is the $u$-component of $s$, hence is equal to $D(u) \circ \text{pr}_J$. Similarly, $\pi_u \circ t = \text{pr}_K$. Combining this with the previous equality, we find $D(u) \circ \text{pr}_J \circ p = \text{pr}_K \circ p$. Now, by definition $p_I = \text{pr}_I \circ p$ for any $I \in \mathbf I$ (i.e. $p_I$ is the $I$-component of $p$), we find $D(u) \circ p_J = p_K$. Hence $(L, (p_I : L \to D(I))_{I \in \mathbf I})$ is a cone. Hopefully this clears up the confusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3551869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Determine the value of the sum to infinity of $(u_{r+1} + \frac{1}{2^r})$ Determine the value of $$\sum_{r=1}^\infty\Bigl(u_{r+1} + \frac{1}{2^r}\Bigr)$$. In earlier parts of the question, it is given $\displaystyle u_r= \frac{2}{(r+1)(r+3)}$, which when expressed in partial fractions is $\displaystyle\frac{1}{r+1}-\frac{1}{r+3}$. Also previously, the question asks to show that $$\sum_{r=1}^n u_r = \frac{5}{6}-\frac {1}{n+2} -\frac{1}{n+3}$$ which I managed to do. I have also determined the value of $$\sum_{r=1}^\infty u_r$$ which is $$\sum_{r=1}^\infty u_r= \frac{5}{6}$$ But I don’t know how to solve for $$\sum_{r=1}^\infty\Bigl(u_{r+1} + \frac{1}{2^r}\Bigr)$$ My attempts so far: $\displaystyle u_{r+1} =\frac{2}{(r+1+1)(r+1+3)}$ $=\displaystyle\frac{2}{(r+2)(r+4)}$ Let $\displaystyle \frac{2}{(r+2)(r+4)}\equiv \frac{A}{r+2}+\frac{B}{r+4}$ $2\equiv A(r+4)+B(r+2)$ Let $r=-4$: $B=-1$ Let $r=-2$: $A=1$ So, is $\displaystyle u_{r+1} \equiv \frac {1}{r+2}- \frac{1}{r+4}$? Assuming that it is,$$\sum_{r=1}^\infty\Bigl(u_{r+1} + \frac{1}{2^r}\Bigr)$$ = $$\sum_{r=1}^\infty\Bigl(\frac{1}{r+2}-\frac{1}{r+4} + \frac{1}{2^r}\Bigr)$$ $=\frac{1}{3}-\require{cancel}\cancel{\frac{1}{5}}+\frac{1}{2}$ $\frac{1}{4}-\require{cancel}\cancel{\frac{1}{6}}+\frac{1}{4}$ $\require{cancel}\cancel{\frac{1}{5}}-\cancel{\frac{1}{7}}+\frac{1}{8}$ $\require{cancel}\cancel{\frac{1}{6}}-\cancel{\frac{1}{8}}+\frac{1}{16}$ ..... I don't know how to cancel the middle terms, how should this question be solved?
Write $$U_{r+1}=\frac{2}{(r+2)(r+4)}= \left(\frac{1}{r+2}-\frac{1}{r+3}\right)+\left(\frac{1}{r+3}-\frac{1}{r+4}\right)= (F_r-F_{r+1})+(F_{r+1}-F_{r+2}),\qquad F_r=\frac{1}{r+2}.$$ Now the telescoping summation can be done easily; the remaining geometric part contributes $\sum_{r=1}^\infty \frac{1}{2^r}=1$.
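An exact check of the telescoping with rational arithmetic (a sketch; the helper names are mine): every partial sum matches the telescoped closed form $\frac{7}{12}-\frac{1}{n+3}-\frac{1}{n+4}$ plus the geometric part $1-2^{-n}$, so the total tends to $\frac{7}{12}+1=\frac{19}{12}$:

```python
from fractions import Fraction

def u(r):
    return Fraction(2, (r + 1) * (r + 3))

def partial_sum(n):
    return sum(u(r + 1) + Fraction(1, 2**r) for r in range(1, n + 1))

def closed_form(n):
    # telescoped u-part plus the geometric part
    return (Fraction(7, 12) - Fraction(1, n + 3) - Fraction(1, n + 4)
            + 1 - Fraction(1, 2**n))

for n in range(1, 40):
    assert partial_sum(n) == closed_form(n)

# letting n -> infinity: 7/12 + 1 = 19/12
assert abs(float(partial_sum(200)) - 19 / 12) < 2e-2
print(Fraction(19, 12))
```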
{ "language": "en", "url": "https://math.stackexchange.com/questions/3552067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
find matrix element from matrix equation How to find "x" from this equation $$ \begin{bmatrix} a_1 & a_1^2 & \cdots & a_1^n \\ a_2 & a_2^2 & \cdots & a_2^n \\ \vdots & \vdots& \ddots & \vdots \\ a_m & a_m^2 & \cdots & a_m^n \\ \end{bmatrix} \begin{bmatrix} x \\ b_2\\ \vdots\\ b_m \\ \end{bmatrix} = \begin{bmatrix} c_1\\ c_2\\ \vdots\\ c_m \\ \end{bmatrix} $$
If $n \ne m$, then the given equation makes no sense. So, let $n=m$ and $j \in\{1,2,...,m\}.$ Then we have $$a_jx+a_j^2b_2+...+a_j^mb_m=c_j.$$ If $a_j \ne 0$ we get $$x= \frac{c_j}{a_j}-(a_jb_2+...+a_{j}^{m-1}b_m).$$ If $a_1=a_2=...=a_m=0$, then you can't find $x$.
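A small sanity check with made-up numbers (every value here is an arbitrary illustration, not from the question): build $c$ from a known $x$ via the matrix equation, then recover $x$ from the first row using the formula above:

```python
from fractions import Fraction

# hypothetical data with n = m = 3: a-values, b_2..b_m, and a chosen x
a = [Fraction(2), Fraction(3), Fraction(-1)]
b = [Fraction(4), Fraction(-2)]          # b_2, b_3
x_true = Fraction(5)

# build c row by row: c_j = a_j x + a_j^2 b_2 + ... + a_j^m b_m
c = [a_j * x_true + sum(a_j**(k + 2) * b_k for k, b_k in enumerate(b))
     for a_j in a]

# recover x from any row with a_j != 0
j = 0
x = c[j] / a[j] - sum(a[j]**(k + 1) * b_k for k, b_k in enumerate(b))
assert x == x_true
print(x)
```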
{ "language": "en", "url": "https://math.stackexchange.com/questions/3552274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
An identity for double integrals Let $f$ be a real valued continuous function on $\mathbb{R}_+^2$ and $F(x,y)=\int_0^x\int_0^y f(s,t)\,ds\,dt$. How to show that $$\frac{1}{uv}\int_0^u\int_0^vF(x,y)\,dx\,dy=\int_0^u\int_0^v\left(1-\frac{x}{u}\right)\left(1-\frac{y}{v}\right)f(x,y)\,dx\,dy$$ is satisfied.
Let's look at the one variable situation first: If $$G(x):=\int_0^x g(s)\>ds$$ then $$\int_0^u G(x)\>dx=\int_0^u\int_0^x g(s)\>ds\>dx=\int_0^u\int_s^u g(s)\>dx\>ds=\int_0^u(u-s)g(s)\>ds$$ (draw a sketch of the $(x,s)$-plane!), and therefore $${1\over u}\int_0^uG(x)\>dx=\int_0^u\left(1-{s\over u}\right)g(s)\>ds\ .$$ Now do this with your $f(x,y)$ "first with respect to $x$, then with respect to $y$", or use the Stone-Weierstrass approximation theorem: Your $(x,y)\mapsto f(x,y)$ can be approximated by polynomials, hence by linear combinations of functions $\psi(x,y):=g(x)h(y)$. For such functions $\psi$ we of course have $$\Psi(x,y)=\int_0^x g(s)\>ds \ \int_0^y h(t)\>dt\ ,$$ etcetera.
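A numerical spot-check of the identity (a sketch; for speed it uses a separable test function $f(s,t)=e^{-s}\cos t$, for which $F(x,y)=(1-e^{-x})\sin y$ in closed form, and arbitrary values of $u$, $v$):

```python
import math

def midpoint(f, a, b, n=4000):
    # midpoint-rule quadrature on [a, b]
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

u, v = 1.3, 0.9  # arbitrary test rectangle

# separable test integrand f(s, t) = e^{-s} cos(t), so that
# F(x, y) = G(x) H(y) with G(x) = 1 - e^{-x}, H(y) = sin(y)
G = lambda x: 1 - math.exp(-x)
H = math.sin

lhs = midpoint(G, 0, u) * midpoint(H, 0, v) / (u * v)
rhs = (midpoint(lambda x: (1 - x / u) * math.exp(-x), 0, u)
       * midpoint(lambda y: (1 - y / v) * math.cos(y), 0, v))

assert abs(lhs - rhs) < 1e-6
print(lhs, rhs)
```

By Stone-Weierstrass, agreement for separable test functions is what the answer's second route is built on, so this check exercises exactly that reduction.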
{ "language": "en", "url": "https://math.stackexchange.com/questions/3552436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finite equational basis for trigonometric identities Consider the structure $(\mathbb{R}, +,-,*,\sin,\cos,0,1)$, where $+$ is addition, $-$ is additive inverse, $*$ is multiplication, $\sin$ is the sine function, and $\cos$ is the cosine function. Is there a finite basis for the equational identities of that structure? In fact, I conjecture that, in addition to the axioms of a commutative ring, all you need are that $\sin(0)=0$, $\cos(0)=1$, $\sin(-x)=-\sin(x)$,$\cos(-x)=\cos(x)$, the sine of sum formula, the cosine of sum formula, and $\sin^2(x)+\cos^2(x)=1$.
I don't know the answer to this question, but it reminds me of a lemma I read in Equations on real intervals Walter Taylor Algebra universalis 55 (2006) 409-456. Lemma 5.2. (Expanded, so that it makes sense here.) Suppose that $\mathbb R$ is the real line considered as a topological space. Suppose also that $$ \mathbb A = \langle \mathbb R; \oplus, \odot, \ominus, \stackrel{\cdot}{0}, \stackrel{\cdot}{1}, c(x), s(x), \lambda(x)\rangle $$ is a topological algebra (meaning all operations are continuous) which satisfies the following identities: * identities saying that $ \langle \mathbb R; \oplus, \odot, \ominus, \stackrel{\cdot}{0}, \stackrel{\cdot}{1}\rangle $ is a commutative ring, * $c(x + y) = (c(x) · c(y)) − (s(x) · s(y))$, * $s(x + y) = (c(x) · s(y)) + (s(x) · c(y))$, * $c(\stackrel{\cdot}{0}) = \stackrel{\cdot}{1}$, $c(\stackrel{\cdot}{1}) = \stackrel{\cdot}{0}$, $s(\stackrel{\cdot}{1}) = \stackrel{\cdot}{1}$, * $c(s(x)) = (\lambda(x))^2$. Let $\mathbb A^-$ be the topological algebra $\mathbb A$ with $\lambda(x)$ deleted from the signature. There is a unique self-homeomorphism $\phi:\mathbb R\to \mathbb R$ that is an algebra isomorphism of $\mathbb A^-$ onto the algebra $$ \mathbb B = \left\langle \mathbb R; +, \cdot, -, 0, 1, \cos\left(\frac{\pi}{2}x\right), \sin\left(\frac{\pi}{2}x\right)\right\rangle. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3552599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is it necessary to write limits for a substituted integral? To solve the following integral, one can use u-substitution: $$\int_2^3 \frac{9}{\sqrt[4]{x-2}} \,dx.$$ With $u = \sqrt[4]{x-2}$, our bounds become 0 and 1 respectively. Thus, we end up with the following: $$36\int_0^1{u^2} \,du$$ In the first case, the lower bound is a vertical asymptote so we would have to use limits to find the answer. However, in the second case there's no longer an asymptotic bound - would you still have to write the limits since the original function would've needed limits, or can you just solve this by plugging in the substituted bounds? I know the final answer will be the same either way, but I want to know if it can be considered correct to exclude the limits in the 2nd integral from a technical perspective since the function has been changed. Many thanks in advance!
Suppose you must. Then we have $$9\lim_{a\to2^+}\int_a^3\frac1{\sqrt[4]{x-2}}\,dx.$$ Let $u=x-2$. Then we have $$9\lim_{a\to2^+}\int_{a-2}^1 u^{-1/4}\, du.$$ By the power rule, we have $$9\lim_{a\to2^+} \left.\frac43u^{3/4}\right]_{a-2}^1=9\left[\frac43(1-\lim_{a\to2^+}(a-2)^{3/4})\right]=9\left[\frac43(1-0)\right]=12.$$ Notice that it does not make a difference whether you use $\lim_{a\to2^+}(a-2)$ or $0$.
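Numerically, the point is visible too (a quick sketch): after the substitution the integrand $36u^2$ is smooth on $[0,1]$, so a plain midpoint rule converges to the exact value $12$ with no limiting process needed at the endpoint:

```python
# after u = (x-2)^(1/4) the integrand 36*u**2 is smooth on [0, 1],
# so a plain midpoint rule converges without any limit at the endpoint
n = 2000
h = 1.0 / n
approx = h * sum(36 * ((i + 0.5) * h) ** 2 for i in range(n))
assert abs(approx - 12) < 1e-4
print(approx)  # close to 12, the exact value
```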
{ "language": "en", "url": "https://math.stackexchange.com/questions/3552753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Let $a_1=1$ and $a_n=n(a_{n-1}+1)$ for $n=2,3,...$. Let $a_1=1$ and $a_n=n(a_{n-1}+1)$ for $n=2,3,...$. Define $$P_n=\left(1+\frac{1}{a_1}\right)\left(1+\frac{1}{a_2}\right)...\left(1+\frac{1}{a_n}\right).$$ Find $\lim\limits_{n\to\infty} P_n$. My approach: $$P_n=\left(1+\frac{1}{a_1}\right)\left(1+\frac{1}{a_2}\right)...\left(1+\frac{1}{a_n}\right)$$ $$=\left\{\frac{(a_1+1)(a_2+1)...(a_n+1)}{a_1.a_2...a_n}\right\}$$ $$=\left\{\frac{1}{a_1}\left(\frac{a_1+1}{a_2}\right)\left(\frac{a_2+1}{a_3}\right)...\left(\frac{a_{n-1}+1}{a_n}\right)(1+a_n)\right\}$$ $$=\left\{\frac{1}{a_1}.\frac{1}{2}.\frac{1}{3}.....\frac{1}{n}(1+a_n)\right\}$$ $$=\frac{1+a_n}{n!}, \forall n\in\mathbb{N}.$$ Thus, we need to find $$\lim_{n\to\infty} \frac{1+a_n}{n!}.$$ How to proceed after this?
$$\begin{align} a_{n}&=na_{n-1}+n\\ &=n[(n-1)a_{n-2}+(n-1)]+n\\ &=n(n-1)[(n-2)a_{n-3} + (n-2)]+n(n-1)+n\\ &\;\;\vdots\\ &=\left[n(n-1)(n-2)\dots 3 \cdot 2\cdot 1\right]\left( 1+\frac{1}{1!}+\frac{1}{2!}+\cdots +\frac{1}{(n-1)!} \right)\\ &=n!\left(1+\frac{1}{1!}+\frac{1}{2!}+\cdots +\frac{1}{(n-1)!}\right), \end{align}$$ so $\dfrac{a_n}{n!}=1+\dfrac{1}{1!}+\cdots+\dfrac{1}{(n-1)!}\to e$, and hence $P_n=\dfrac{1+a_n}{n!}\to e$ as $n \to \infty$.
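A quick exact check of the unrolling with rational arithmetic (a sketch): the product $P_n$ and the telescoped form $(1+a_n)/n!$ agree term by term, and both approach $e$:

```python
import math
from fractions import Fraction

a = [Fraction(0)] * 26
a[1] = Fraction(1)
for n in range(2, 26):
    a[n] = n * (a[n - 1] + 1)     # the recurrence a_n = n(a_{n-1} + 1)

def P(n):
    # the product form and the telescoped form (1 + a_n) / n! agree exactly
    prod = Fraction(1)
    for r in range(1, n + 1):
        prod *= 1 + 1 / a[r]
    assert prod == (1 + a[n]) / math.factorial(n)
    return prod

assert abs(float(P(25)) - math.e) < 1e-12
print(float(P(25)))
```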
{ "language": "en", "url": "https://math.stackexchange.com/questions/3552895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
In how many ways can $6$ prizes be distributed among $4$ persons such that each one gets at least one prize? My understanding: First select $4$ prizes and distribute it among the $4$ people in $^6C_4\times4!$ ways and then distribute the remaining $2$ prizes in two cases: when $2$ people have $2$ prizes: $\frac{4!}{2!}$ways or when one person has $3$ prizes: $4$ ways. So totally: $^6C_4\times4!\times( \frac{4!}{2!}+4)$ ways $= 5760$ ways. However the answer is $1560$ ways. How?
Someone could get $3$ prizes & everyone else gets $1$: There are $4$ ways to choose the person who gets $3$ prizes and then $ (6 \times 5\times 4)/3! \times 3 \times 2 \times 1$ ways to distribute the prizes. So $480$ ways in this case. OR Two people get $2$ prizes each & everyone else gets $1$: There are $6$ ways to choose the people who get $2$ prizes and then $ (6 \times 5)/2! \times (4 \times 3 )/2! \times 2 \times 1$ ways to distribute the prizes. So $1080$ ways in this case. $480+1080=\color{red}{1560}$.
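Since there are only $4^6 = 4096$ ways to hand the six distinct prizes to four people, the count can also be confirmed by brute force (a quick sketch):

```python
from itertools import product

# each of the 6 distinct prizes goes to one of 4 people; count the
# assignments in which every person receives at least one prize
count = sum(1 for assign in product(range(4), repeat=6)
            if len(set(assign)) == 4)
assert count == 1560
print(count)
```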
{ "language": "en", "url": "https://math.stackexchange.com/questions/3553023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Lagrange multiplier pedagogy From what I can tell, the traditional way to teach Lagrange multipliers is to start with a function $f(x,y,z)$ and to look for extrema of $f$ subject to $g(x,y,z)=k$. That is, we restrict $(x,y,z)$ to be on the level curve $g(x,y,z)=k$. We then look at the level curves of $f$ and find the one(s) tangent to the level curve $g(x,y,z)=k$. An example of this can be found here: http://tutorial.math.lamar.edu/Classes/CalcIII/LagrangeMultipliers.aspx I came across what seems to me to be a different approach here: https://sites.lafayette.edu/thompsmc/files/2014/01/Section_14_8.pdf In this pdf, the constraint $g(x,y,z)=k$ is not referred to as a level curve. Rather it's shown as a cylinder intersecting the shape in question [that is, the graph of $f(x,y,z)$]. We're looking for extrema along the curve of intersection of the 2 shapes. We then look at level curves of both $f$ and $g$. And we find that the pair of level curves that are tangent correspond to an extrema of $f$. Unlike the traditional approach, here we have multiple level curves of $g$. I can't seem to reconcile these two views and am wondering if there's a sense in which one of the 2 approaches outlined is a more generalized version of the other. Can someone help guide me on this? Thanks!
Preliminary note. In the first tutorial the objective is a quadratic function and the constraint is a sphere, while in the second tutorial the objective is a sphere and the constraint a cylinder (or a quadratic function). The main difference is that in the first tutorial only the variables $(x,y)$ are considered, and both $f$ and $g$ go from $\mathbb{R}^2$ to $\mathbb{R}$. In contrast, the second document depicts functions with three variables $(x,y,z)$ and represents the constraint $g(x,y)=z$ for different levels of $z$, hence the different level curves for the constraint on the last figure of page 4. If you fix $z$ to an arbitrary number, and eliminate $z$ from the choice variables, you end up with a problem which is comparable to the one considered in your first reference. Both views are consistent, but represent slightly different problems.
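A toy numerical illustration of the tangency condition common to both views (a sketch with a made-up problem, not the examples from either tutorial): maximizing $f(x,y)=x+y$ on the circle $g(x,y)=x^2+y^2=1$, the gradients $\nabla f$ and $\nabla g$ align at the constrained maximum:

```python
import math

# toy problem: maximize f(x, y) = x + y on the circle x^2 + y^2 = 1
N = 200000
best = max(range(N), key=lambda k: math.cos(2 * math.pi * k / N)
                                   + math.sin(2 * math.pi * k / N))
t = 2 * math.pi * best / N
x, y = math.cos(t), math.sin(t)

# at the constrained maximum, grad f = (1, 1) is parallel to grad g = (2x, 2y)
cross = 1 * (2 * y) - 1 * (2 * x)   # 2(y - x), zero when the gradients align
assert abs(cross) < 1e-3
assert abs((x + y) - math.sqrt(2)) < 1e-6
print(x, y)
```

Here the tangency of level curves is what the zero cross-product of gradients encodes, which is exactly the Lagrange condition $\nabla f = \lambda \nabla g$.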
{ "language": "en", "url": "https://math.stackexchange.com/questions/3553302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Composition Proof I am trying to study functions in math and learning some basic proofs. In numerous places I have seen this: $$(f \circ\ g) ^{-1}(u) = g^{-1}(f^{-1}(u))$$ I know this is true as well, having used it in numerous places in middle and high school. Is there any way of proving this identity using logical steps? Thank you!
Assuming that $g : A \to B$ and $f : B \to C$, then, by definition, $(f\circ g)^{-1} : C\to A$ is the unique function such that * *$(f\circ g)^{-1}\circ(f\circ g) = \textrm{id}_A$, and *$(f\circ g)\circ(f\circ g)^{-1} = \textrm{id}_C$. But observe that the function $g^{-1}\circ f^{-1} : C\to A$ has the same properties: $$\begin{align} (g^{-1}\circ f^{-1}) \circ (f\circ g) &= \big( g^{-1} \circ (f^{-1} \circ f) \big) \circ g \\ &= ( g^{-1} \circ \textrm{id}_B) \circ g \\ &= g^{-1} \circ g = \textrm{id}_A \end{align}$$ and $$\begin{align} (f\circ g) \circ (g^{-1}\circ f^{-1}) &= \big( f \circ (g \circ g^{-1}) \big) \circ f^{-1} \\ &= ( f \circ \textrm{id}_B) \circ f^{-1} \\ &= f \circ f^{-1} = \textrm{id}_C. \end{align}$$ Hence, $g^{-1}\circ f^{-1}$ and $(f\circ g)^{-1}$ are the same!
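For finite sets the proof can be watched in action (a quick sketch; bijections are represented as dictionaries):

```python
# finite bijections as dicts: g : A -> B, f : B -> C
g = {0: 'a', 1: 'b', 2: 'c'}
f = {'a': 'y', 'b': 'z', 'c': 'x'}

def compose(f2, f1):
    # (f2 o f1)(x) = f2(f1(x))
    return {k: f2[v] for k, v in f1.items()}

def inverse(fn):
    return {v: k for k, v in fn.items()}

lhs = inverse(compose(f, g))               # (f o g)^{-1}
rhs = compose(inverse(g), inverse(f))      # g^{-1} o f^{-1}
assert lhs == rhs
print(lhs)
```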
{ "language": "en", "url": "https://math.stackexchange.com/questions/3553399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Consider the following sequence: $a_j = 4a_{j-1} - 4a_{j-2}$ $a_0=0;\ a_1 = 1$ For all $j\geqslant2$, come up with a general formula for the term $a_j$. Use mathematical induction to prove your claim. I have calculated the first few terms of this sequence, and determined from them that $a_j = j · 2^{j-1}$. However, I am very bad at induction (some might even say weak at it) and would appreciate if someone could help me prove that this is in fact the case.
In your case, since your linear recurrence relation uses $2$ smaller subscripts in the definition for $a_j$, you want to use strong induction to prove that $$a_j = j\left(2^{j-1}\right), \; j \ge 0 \tag{1}\label{eq1A}$$ In this situation, you need to verify $2$ base cases. Although the question asks you prove \eqref{eq1A} holds for $j \ge 2$, it actually holds starting at $0$. As such, you can start there to first verify it works for $j = 0$ and $j = 1$. This is true since $0 = 0(2^{-1})$ and $1 = 1(2^{0})$, respectively. Next, assume for some $k \ge 1$ that \eqref{eq1A} holds for all $j \le k$. For $j = k + 1$, you have $$\begin{equation}\begin{aligned} a_{k + 1} & = 4a_{k} - 4a_{k - 1} \\ & = 4k\left(2^{k-1}\right) - 4(k-1)\left(2^{k-2}\right) \\ & = 2k\left(2^{k}\right) - (k-1)\left(2^{k}\right) \\ & = 2^{k}(2k - (k - 1)) \\ & = (k+1)2^{k} \end{aligned}\end{equation}\tag{2}\label{eq2A}$$ This shows \eqref{eq1A} also holds for $j = k + 1$. Thus, by strong induction, \eqref{eq1A} holds for all $j \ge 0$.
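A quick machine check of the conjectured closed form against the recurrence (a sketch):

```python
# compare the recurrence with the conjectured closed form j * 2^(j-1)
a = [0, 1]
for j in range(2, 31):
    a.append(4 * a[j - 1] - 4 * a[j - 2])

assert a[0] == 0
for j in range(1, 31):
    assert a[j] == j * 2 ** (j - 1)
print(a[:8])  # [0, 1, 4, 12, 32, 80, 192, 448]
```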
{ "language": "en", "url": "https://math.stackexchange.com/questions/3553532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How can derivatives represent tangents? I am taking an Introduction to Calculus course and am struggling to understand how derivatives can represent tangent lines. I learned that derivatives are the rate of change of a function but they can also represent the slope of the tangent at a point. I also learned that a derivative will always be an order lower than the original function. For example: $f(x) = x^3$ and $f'(x) = 3x^2$. What I fail to understand is how $3x^2$ can represent the slope of the tangent line if it is not a linear function. Wouldn't this example mean that the slope or the tangent itself is a parabola?
* *The formula defining the derivative function is not itself the equation of the tangent; this formula gives you, for each tangent (one tangent for each point $(x, f(x))$ of the graph of $f$), the slope of this line. And a slope is a number. The main point here is that the derivative function is a function that sends back numbers as outputs (not lines, not tangents): one and only one number for every permissible input $x$. * *To understand this, remember that for every point $(x, f(x))$ of the graph (such that there is a tangent to the graph at this point), this tangent will have the form $$y = mx + b.$$ The number $m$ is the slope of the tangent. You may think of it as a percentage (in the same way as we ordinarily think of the slope of a road in terms of %). For example, the line $y = 0.5x +2$ has slope $0.5$, that is, $50$%. The line $y = 6x + 10$ has slope $6$, that is, $600$%. The slope of $y=0x+5=5$ is $0$ ($= 0$%). The line $y= -2x +40$ has slope $-2$ $= -200$%. (These are arbitrary examples, not related to the $x^3$ function.) * *So, for each input $x$, the derivative gives as output the number $m$ (that is, the slope) of the tangent to the graph at the point $(x, f(x))$. *The beauty is that, although the tangents will (ordinarily) have various slopes, although the outputs of the function $f'(x)$ will be different for different $x$ values (inputs), we are often able to find a rule defining a constant numerical relation between the value of $x$ and the corresponding slope. For example, for $f(x)=x^2$, it can be proved that $f'(x)$ (the slope of the tangent to the graph of $f$ at $(x, f(x))$) is always the double of $x$! This is what the differentiation rule $\frac {d} {dx}x^2 = 2\times x$ means.
Note: this number that is sent back as output is formally defined as a limit, namely, the limit, as $h$ approaches $0$, of the ratio $\frac {f(x+h) - f(x)} { (x+h) - x}$ = $\frac{\text{change in } y}{\text{change in } x}$. This shows that the slope of the tangent happens to be identical to the instantaneous rate of growth of the original function $f$ at the point $(x, f(x))$. This is why, in fact, we are interested in these slopes. Note: you can use the number $f'(a)$ to find the equation of the tangent at a given point $(a, f(a))$. Since $f'(a)$ is "the $m$ (= slope) of this tangent", the equation of this line will have the form $y = f'(a)x + b$. The fact that you also know one point of this tangent, namely the point $(a, f(a))$, allows you (with some algebra) to recover the number $b$, and finally the whole equation of the tangent at this point $(a, f(a))$. * *Examples with $f(x)= x^3$ and consequently $f'(x)= 3x^2$: For $x= 1$, the slope is $f'(1) = 3\times1^2 = 3$ = $300$%. So, at $(1, f(1))$, the slope of the tangent to the graph of $f$ is $300$%. Quite a big slope. For $x= 2$, the slope is $f'(2) = 3\times2^2 = 12$ = $1200$%. So, at $(2, f(2))$, the slope of the tangent to the graph of $f$ is $1200$%. A huge slope!
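A quick numerical illustration of this limit definition (a sketch): secant slopes of $f(x)=x^3$ over a tiny interval around $x$ agree with the single number $f'(x)=3x^2$ at each point:

```python
def f(x):
    return x ** 3

def slope(x, h=1e-5):
    # central-difference approximation of the tangent slope at (x, f(x))
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (-1.0, 0.5, 1.0, 2.0):
    assert abs(slope(x) - 3 * x ** 2) < 1e-6
print(slope(1.0), slope(2.0))  # about 3 and 12
```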
{ "language": "en", "url": "https://math.stackexchange.com/questions/3553644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is $O_n(\mathbb{R})\leq GL_{n}(\mathbb{R})$ a subgroup? Is $O_n(\mathbb{R})\leq GL_{n}(\mathbb{R})$, where $O_n(\mathbb{R})$ is the set of orthogonal matrices (matrices with orthonormal columns)? * *$I_n^t=I^{-1}_n=I_n$ and therefore: $I_n\in O_n(\mathbb{R})$ *Let $A,B\in O_n(\mathbb{R})$; then we have to prove that $AB^{-1}\in O_n(\mathbb{R})$. We have that if $B\in O_n(\mathbb{R})$ then $B^{-1}=B^{t}\in O_n(\mathbb{R})$, but how can we conclude that $AB^{-1}\in O_n(\mathbb{R})$? Can we say that $$(AB^{t})^t=BA^t=BA^{-1}$$ and therefore $AB^{-1}\in O_n(\mathbb{R})$?
Note that the condition of $A$ being invertible and satisfying $A^t=A^{-1}$ is equivalent to $AA^t = I$. With this it is easy to check the properties of being a subgroup. * *$II^t = II = I$, hence $I\in O_n$ *Given $A,B\in O_n$ we have $AA^t=BB^t=I$ and hence $$\begin{align} (AB^{-1})(AB^{-1})^t&= AB^{-1}(B^{-1})^tA^t\\ &= AB^{-1}(B^t)^{-1}A^t\\ &= A(B^t B)^{-1} A^t\\ &= AA^t\\ &= I. \end{align}$$ thus $AB^{-1}\in O_n$ as well.
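A concrete spot-check with $2\times2$ rotation matrices, which are elements of $O_2(\mathbb{R})$ (a sketch using plain lists; the angles are arbitrary):

```python
import math

def rot(t):
    # 2x2 rotation matrix, an element of O_2(R)
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(X):
    return [[X[j][i] for j in range(2)] for i in range(2)]

A, B = rot(0.7), rot(-1.9)
M = matmul(A, transpose(B))   # A B^{-1}, since B^{-1} = B^t
P = matmul(M, transpose(M))   # should be the identity, so M is in O_2(R)
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - (1.0 if i == j else 0.0)) < 1e-12
print(P)
```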
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Minimize the number of non-zero rows I am trying to formulate a shipping problem as a mathematical optimization one, and I'm having some trouble determining my optimization objective (cost function). Without getting too far into the details of the particular problem, the variable I am trying to optimize over is an $K\times N$ matrix X of non-negative integers. I have figured out some constraints on this matrix, namely $$ X^T \vec{1} = \vec{d} $$ and $$ X \preceq A $$ where $A$ is a non-negative $K\times N$ matrix of integers and $\vec{d}$ is a vector of length $N$ (of integers), and $\preceq$ indicates the usual component-wise matrix inequality. These are based on the particular problem at hand, and are nicely linear. This means I can plug these constraints into some numerical optimizer by simply "linearizing" the matrix $X$. More complicatedly, as an objective I am trying to minimize a weighted count of non-zero rows of $X$. As such, I would like some sort of function rowNonEmpty which would take in a matrix row and return 0 if the row is all zeros and 1 otherwise. My optimization objective would then be something like $$ \min_{X} \; \sum_{j=1}^K w_j \cdot \text{rowNonEmpty}\left(X_{j*}\right) $$ where $w_j$ is a weighting factor for row $j$ and $X_{j*}$ is the $j$th row of $X$. Does anyone have some suggestions for how to properly frame this problem and the rowNonEmpty function? My first thought was that this is similar to some sort of weighted $L_1$ norm minimization, but not really because we are looking at non-zero rows of a matrix and not just vector entries. The first answer here suggests adding binary variables, but this is again for a more standard problem involving only a vector.
For each row $i$, introduce a binary variable $y_i$ to indicate whether the row is nonempty. The objective is to minimize $\sum_i w_i y_i$, and the constraints are: $$X_{i,j} \le A_{i,j} y_i$$ for each row $i$ and each column $j$. This formulation enforces the logical implication $X_{i,j} > 0 \implies y_i = 1$, which is sufficient if $w_i \ge 0$.
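A brute-force check of this linearization on a tiny made-up instance (all numbers here are arbitrary; this is a sketch, not a solver): minimizing the weighted count of nonzero rows directly gives the same optimum as the binary-variable model:

```python
from itertools import product

# tiny instance (made-up numbers): K = 3 rows, N = 2 columns
A = [[2, 2], [2, 2], [1, 1]]
d = [2, 2]
w = [1, 1, 5]
K, N = 3, 2

def feasible_X(cap):
    # all integer matrices with 0 <= X[i][j] <= cap[i][j] and column sums d
    ranges = [range(cap[i][j] + 1) for i in range(K) for j in range(N)]
    for flat in product(*ranges):
        X = [list(flat[i * N:(i + 1) * N]) for i in range(K)]
        if all(sum(X[i][j] for i in range(K)) == d[j] for j in range(N)):
            yield X

# objective evaluated directly on X: weighted count of nonzero rows
direct = min(sum(w[i] for i in range(K) if any(X[i])) for X in feasible_X(A))

# linearized model: X[i][j] <= A[i][j] * y[i], minimize sum_i w[i] * y[i]
milp = min(
    sum(w[i] * y[i] for i in range(K))
    for y in product((0, 1), repeat=K)
    for _ in feasible_X([[A[i][j] * y[i] for j in range(N)] for i in range(K)])
)

assert direct == milp == 1   # row 0 alone can meet both column demands
print(direct, milp)
```

In practice the binary-variable model would be handed to a MILP solver rather than enumerated, but the enumeration shows the two objectives coincide when $w_i \ge 0$.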
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expected number of dice that are all 6 Suppose that a group of n fair dice is rolled $4*6^{n-1}$ times. a) Find the expected number of times that “all sixes” is achieved (i.e., how often among the $4*6^{n-1}$ rolls it happens that all n dice land 6 simultaneously). The answer is $\frac{4}{6}$. (Binomial distribution) b) Same thing but after one normal roll of the n dice, going from one roll to the next, with probability 6/7 keep the dice in the same configuration and re-roll with probability 1/7. For example, if n = 3 and the 7th roll is (3, 1, 4), then 6/7 of the time the 8th roll remains (3, 1, 4) and 1/7 of the time the 8th roll is a new random outcome. Does the expected number of times that “all sixes” is achieved stay the same, increase, or decrease (compared with (a))? Give a short but clear explanation. I'd appreciate some hints on how to approach this. My initial thought was to try to come up with the probability for a throw to be all sixes, using conditional probability, but the equation is messy. Is there a better intuition for this ?
For b) The first roll can be all sixes if all the dice roll sixes. The second roll can be all sixes if the first roll is all sixes and the second roll does not change or the second roll is a reroll and you roll all sixes (whether or not the first roll is all sixes). So, for the second roll, the probability that you get all sixes is: $$\dfrac{6}{7}\left(\dfrac{1}{6^n}\right)+\dfrac{1}{7}\left(\dfrac{1}{6^n}\right) = \dfrac{1}{6^n}$$ By induction, you can show that the probability that any specific roll is all sixes is $\dfrac{1}{6^n}$. Thus, the expected number of times you roll all sixes remains unchanged.
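The induction can be checked exactly (a Python sketch; `n` and the number of steps are arbitrary choices): the recursion $p_{t+1} = \frac67 p_t + \frac17 \cdot 6^{-n}$ has fixed point $6^{-n}$, and starting from an ordinary first roll it stays there:

```python
from fractions import Fraction

n = 3                               # number of dice (arbitrary choice)
p_all_six = Fraction(1, 6) ** n

p = p_all_six                       # roll 1 is an ordinary roll
probs = [p]
for _ in range(20):
    # keep the previous configuration w.p. 6/7, reroll w.p. 1/7
    p = Fraction(6, 7) * p + Fraction(1, 7) * p_all_six
    probs.append(p)

expected_count = 4 * 6 ** (n - 1) * p_all_six   # = 4/6, as in part (a)
```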
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
In the epsilon-delta definition, what is wrong if I said: "given delta, there exists an epsilon"? WHY are we always given $\epsilon > 0$ first, then solving for a $\delta>0$? This is in the limit definition. I want to ask: Can we say "given $\delta>0$, there exists $\epsilon>0$"? Since we can always solve for one given the other. I found three counterexamples, but I don't understand them: * *Let $f(x) = \sin x$, let $L$ and $\delta$ be arbitrary real numbers. Then $\epsilon = |L| + 2$ satisfies your definition. (from post) Q: What's wrong with setting $\epsilon = |L| + 2$? It's big, but it's not wrong! *Let $f(x) = 1/x$, and let $a = 1$. The definition fails for $\delta \ge 1$, since for any $\epsilon$ we can choose $x=1/(L+\epsilon)$ if $L+\epsilon > 1$, so that $f(x)-L \ge \epsilon$. (from post) Q: What are they saying here? At $x=1$, the definition fails for $\epsilon \ge 1$ too! The problem is not $\delta$. The problem is the function is undefined for $x \le 0$. *Counterexample: $\lim\limits_{x \to 0} f(x) = L$ $f(x) = \begin{cases} \sin \frac{1}{x}, & x \ne 0 \\ 0, & x = 0 \end{cases}$ Given any $\delta > 0$, we can find $\epsilon > 0$ such that $|f(x) - L| < \epsilon$ whenever $|x| < \delta$. For instance, set $\epsilon = 2$; then any choice of $L \in (-1,1)$ will satisfy this "reversed" situation. (from post) Q: I don't see how setting $\epsilon = 2$ violates any definition. I mean, we did find a $\epsilon$ for a given $\delta$. Thanks all for the pouring answers, I'll get back to each one personally. If I did not choose an answer, that means all submissions are still welcomed! The best answer will be chosen based on # of upvotes (50%) and if I understood it and agree it's the best (50%).
Because $\epsilon$ can be unbounded. Let's say we choose $\delta=3$., then is $\epsilon = 6$. But, does it have to be 6? No! It can be 7, 8, 9, 1000. In fact, any number bigger than 6 will do. So, choosing $\delta$ first leaves $\epsilon$ unbounded. Choosing $\epsilon$ first is like reverse engineering. To me, changing $y$ first, and watch what happens to $x$ is weird. Because I'm used to x as the independent variable, and y as the dependent variable. Q: Why does "reverse engineering" work? What's the intuition behind it?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 10, "answer_id": 8 }
Backgammon variant — two player double or nothing on random walk? Two players, A and B, play the following backgammon variant. A series of steps is labelled $-2$, $-1$, $0$, $1$ and $2$, and a chess piece is placed on step $0$. A fair coin is flipped, and it lands on heads, the chess piece is moved forward by one step, while if it lands on tails, the piece is moved back by one step. Player A wins if the piece lands on step $2$ first, while player B wins if the piece lands on step $-2$ first, and the loser must pay the winner some amount of money. The amount is originally set at \$1. Initially, either player can challenge the other player to double this stake at any time (before the game ends), and the other player must either accept this or forfeit and lose the game. Following that, two consecutive challenges cannot be raised by the same player (i.e. if player A challenges first and B accepts, A cannot challenge again until B challenges him), and only one challenge can be raised each turn. When should you challenge, and if challenged, should you accept? What is the expected value of your strategy? My initial thought was that if challenged, I would only accept when at $0$ (indifferent) and $+1$ (if I was player A) or $-1$ (if I was player B). However, this would mean that the other player challenged one step before possibly losing. How do I reconcile this? Is there a reason why a player would challenge if one step away from potentially losing?
The analysis in quarague’s answer isn’t correct because it only takes the possibility of doubling once into account, whereas future doubling opportunities in fact increase the expected payoff. Denote by $x_k$ the expected value of the game for $A$ when $A$ has score $k$ and has the right to challenge and $B$ doesn’t. We can guess that $A$ challenges when the score is $1$, and $B$ accepts, and then check whether this is self-consistent. Under these assumptions, we have $$ 2x_0=x_{-1}+x_1=x_{-1}-2x_{-1}=-x_{-1} $$ and $$ 2x_{-1}=x_0+x_{-2}=x_0-1\;, $$ and thus $x_{-1}=-\frac25$, $x_0=\frac15$ and $x_1=\frac45$. The assumptions turn out to be self-consistent, since the expected return for $A$ at score $1$ would only be $\frac12\left(\frac15+1\right)=\frac35\lt\frac45$ without challenging, and the expected return for $B$ upon refusing would be $-1\lt-\frac45$. It remains to find the optimal strategy and expected return in the initial state of the game, where both players have the right to challenge. Denote these expected returns by $y_k$. Then $y_0=0$ by symmetry, and $y_1$ is $\frac12$ if $A$ doesn’t challenge, $1$ if $A$ challenges and $B$ refuses and $-2x_{-1}=\frac45$ if $A$ challenges and $B$ accepts, so $A$ challenges at score $1$ and $B$ accepts. To summarize, the initial value of the game is $0$ by symmetry; a player challenges exactly if they have a score of $1$, the other player always accepts, and the value of the game for the player with the right to challenge is $-\frac25$, $\frac15$ and $\frac45$ at a score of $-1$, $0$ and $1$, respectively.
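The fixed-point values can be verified exactly (a small Python check with `fractions`; the equations are transcribed from the answer):

```python
from fractions import Fraction as F

x_m1, x0, x1 = F(-2, 5), F(1, 5), F(4, 5)   # values at scores -1, 0, 1

eq1 = 2 * x0 == x_m1 + x1        # coin flip from score 0
eq2 = x1 == -2 * x_m1            # A doubles at score 1, B accepts
eq3 = 2 * x_m1 == x0 - 1         # coin flip from score -1

no_challenge = F(1, 2) * (x0 + 1)   # A's value at score 1 without doubling
```

The self-consistency checks also come out as claimed: `no_challenge` is $\frac35 < \frac45$, and refusing costs B $1 > \frac45$.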
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
I'm struggling with the general equation of this set of vectors algebraically. I'm struggling with Part B in the problem below. I thought the answer would be x(-1, 0, 1) + y(1, -1, 0), but that doesn't seem to be correct. Any assistance would be greatly appreciated!
$u$ and $v$ are linearly independent, but $w=-u-v$, so the span of these $3$ vectors is a ($2$-dimensional) plane through the origin; that is, the coordinates $(x,y,z)$ of the space spanned by these vectors satisfy $Ax+By+Cz=0$ for some $A, B, $ and $C$. Since $u$ and $v$ are in the plane, we know that $A\times-1+C\times1=0$ and $A\times1+B\times-1=0$; i.e., $A=C$ and $A=B$. Thus, the plane equation is $Ax+Ay+Az=0$, or $x+y+z=0$.
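A quick check (Python; the vectors $u=(-1,0,1)$ and $v=(1,-1,0)$ are read off from the question, with $w=-u-v$ as in the answer):

```python
u, v = (-1, 0, 1), (1, -1, 0)
w = tuple(-a - b for a, b in zip(u, v))   # w = -u - v

# every spanning vector satisfies x + y + z = 0
coordinate_sums = [sum(vec) for vec in (u, v, w)]
```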
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554645", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why only either of $A^{-1}A$ and $AA^{-1}$ is equal to identity matrix? Consider a $6\times 4$ matrix $A$. Why is one of $A^{-1}A$ and $AA^{-1}$ equal to an identity matrix, while the other one is not? Note: I tried to check this numerically as follows, but I don't understand why I get such a result. I would like to see the mathematical explanation behind it.
Inverse is defined for square matrices and it is unique. For rectangular matrices only a one-sided (left or right) inverse can exist, and it need not be unique. For $A_{6 \times 4}$, a right inverse $B$ of $A$ would satisfy $$A_{6 \times 4} B_{4 \times 6}=I_{6 \times 6}~~~~~(1)$$ Here $B$ has 24 unknowns whereas there are 36 equations. This system of equations is over-determined, so a right inverse of $A$ cannot exist (indeed $AB$ has rank at most 4, while $I_{6\times 6}$ has rank 6). Next, let $C$ be a left inverse, so that $$C_{4\times 6} A_{6 \times 4} =I_{4\times 4}~~~~~~(2)$$ Here there are 24 unknown elements of $C$ and only 16 linear simultaneous equations. So, when $A$ has full column rank, there are many solutions: a left inverse exists but it is not unique. So here in your example, Eq. (1) is not possible, but (2) is possible and there will be many matrices $C$. Also note that $A$ is a tall ("vertical") matrix, while $B$ and $C$ would be wide ("horizontal"): the right inverse $B$ does not exist here, and the left inverses $C$ exist but are not unique. So a tall matrix of full column rank has non-unique left inverses, and symmetrically, a wide matrix of full row rank has non-unique right inverses.
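The same phenomenon can be seen on a smaller tall matrix (a Python sketch; the $3\times 2$ matrix and the two left inverses are just convenient examples): $CA=I$ has many solutions, while $AC=I$ has none.

```python
# A is a tall ("vertical") 3x2 matrix with full column rank.
A = [[1, 0],
     [0, 1],
     [1, 1]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

# Two different left inverses: C A = I_2, so the left inverse is not unique.
C1 = [[1, 0, 0],
      [0, 1, 0]]
C2 = [[0, -1, 1],
      [-1, 0, 1]]

I2 = [[1, 0], [0, 1]]
left1 = matmul(C1, A)
left2 = matmul(C2, A)
right1 = matmul(A, C1)   # 3x3 product, rank at most 2, so never I_3
```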
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Every nonzero element in a Banach space has a norming extreme point For any Banach space $F,$ let $B_F$ be the closed unit ball of $F,$ that is, $B_F = \{x\in F: \|x\| \leq 1\}.$ Also, let $ext B_F$ be the set of extreme points of $B_F$ (Recall that $x$ is an extreme point of $B_F$ if $x = \frac{1}{2}(x_1+x_2)$ for some $x_1,x_2\in B_F$ implies that $x= x_1=x_2.)$ Let $F^*$ be the continuous dual space of $F.$ Question: Let $F$ be a Banach space and $x\in F\setminus\{0\}.$ Is it true that there exists $x^*\in ext B_{F^*}$ such that $x^*(x) = \|x\|?$ I think it is true. Assume that $\|x\| = 1.$ Consider $$S = \{x^*\in B_{F^*}: x^*(x) = 1\}.$$ By Hahn-Banach Theorem, $S$ is nonempty. Clearly $S$ is convex. I would like to show that $S$ is weak-star closed in $B_{F^*}$ so that it is weak-star compact (Banach-Alaoglu states that $B_{F^*}$ is weak-star compact). Then by Krein-Milman Theorem, $S$ has an extreme point, say $z^*.$ It is easy to see that $z^*$ is also an extreme point of $B_{F^*}.$ Such $z^*$ is our desired bounded linear functional. However, I am not sure how to show that $S$ is weak-star closed. Any hint is appreciated.
I would like to show that $S$ is weak-star closed in $B_{F^*}$ so that it is weak-star compact For a fixed $x\in F$, the map $$ \phi_x : F^*\to\mathbb K,\; x^*\mapsto x^*(x) $$ is continuous from the weak-star topology of $F^*$ to the ordinary topology of $\mathbb K$, where $\mathbb K\in\{\mathbb R,\mathbb C\}$. This is because the weak-star topology of $F^*$ is the coarsest topology such that all functionals $\phi_y:F^*\to\mathbb K$ for $y\in F$ are continuous (this is sometimes even used as the definition of the weak-star topology). Therefore, the preimage $\phi_x^{-1}(\{1\})$ of the closed set $\{1\}$ is weak-star closed. It follows that the set $S= B_{F^*}\cap \phi_x^{-1}(\{1\})$ is weak-star closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3554988", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find a limit with sqrt $\lim_{x \to \infty}x^2\left(x^2 - x \cdot \sqrt{x^2 + 6} + 3\right)$ $$\lim_{x \to \infty}x^2\left(x^2 - x \cdot \sqrt{x^2 + 6} + 3\right)$$ I don't know how to rewrite or rationalize in order to find the limit.
Just to give yet another approach, let $u=x^2+3$. Then, for $x\ge0$, $$x^2(x^2-x\sqrt{x^2+6}+3)=(u-3)\left(u-\sqrt{u^2-9}\right)={9u\over u+\sqrt{u^2-9}}-{27\over u+\sqrt{u^2-9}}\to{9\over1+1}-0={9\over2}$$
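Numerically, the rationalized form above is stable for large $x$ (a quick Python check):

```python
from math import sqrt

def f(x):
    # rationalized form with u = x**2 + 3, as in the answer
    u = x * x + 3
    s = u + sqrt(u * u - 9)
    return 9 * u / s - 27 / s

vals = [f(10.0 ** k) for k in range(2, 7)]
```

(Evaluating the original expression $x^2(x^2 - x\sqrt{x^2+6} + 3)$ directly would suffer catastrophic cancellation for large $x$; this form does not.)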
{ "language": "en", "url": "https://math.stackexchange.com/questions/3555081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 4 }
Term of rotation matrix entries equals 1 - proof concept?! I derived some things and arrived at the following expression: $\frac{r_{13}^2 + r_{23}^2}{(r_{11}r_{23} - r_{13}r_{21})^2 + (r_{12}r_{23} - r_{13}r_{22})^2}$ which must equal 1 for the first two rows of any rotation matrix (orthogonal matrix). $$ \begin{matrix} r_{11} & r_{12} & r_{13} & \\ \end{matrix} $$ $$ \begin{matrix} r_{21} & r_{22} & r_{23} & \\ \end{matrix} $$ I am confused about how to prove this! I see a bit of a cross product in the denominator but can't identify how it relates to the numerator.
The denominator is the sum of squares of two components of a cross product: writing $r_1, r_2$ for the first two rows, $(r_{12}r_{23}-r_{13}r_{22})$ and $(r_{11}r_{23}-r_{13}r_{21})$ are, up to sign, the first two components of $r_1 \times r_2$. For a rotation matrix, $r_1 \times r_2$ is (up to sign) the third row, so the denominator equals $r_{31}^2+r_{32}^2 = 1-r_{33}^2$, since the rows are unit vectors. The numerator equals $r_{13}^2+r_{23}^2 = 1-r_{33}^2$ as well, because the columns are unit vectors too. Hence the quotient is $1$.
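A numerical illustration (Python; the Euler angles are arbitrary choices): build a rotation matrix as a product of axis rotations and evaluate the quotient.

```python
from math import sin, cos

def mm(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rot(a, b, c):
    # generic rotation matrix Rz(a) Ry(b) Rx(c)
    Rz = [[cos(a), -sin(a), 0], [sin(a), cos(a), 0], [0, 0, 1]]
    Ry = [[cos(b), 0, sin(b)], [0, 1, 0], [-sin(b), 0, cos(b)]]
    Rx = [[1, 0, 0], [0, cos(c), -sin(c)], [0, sin(c), cos(c)]]
    return mm(Rz, mm(Ry, Rx))

def quotient(R):
    (r11, r12, r13), (r21, r22, r23), _ = R
    num = r13 ** 2 + r23 ** 2
    den = (r11 * r23 - r13 * r21) ** 2 + (r12 * r23 - r13 * r22) ** 2
    return num / den

vals = [quotient(rot(0.3, 1.1, -0.7)), quotient(rot(2.0, 0.4, 0.9))]
```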
{ "language": "en", "url": "https://math.stackexchange.com/questions/3555227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find $\lim_{x \to \infty} x^3 \left ( \sin\frac{1}{x + 2} - 2 \sin\frac{1}{x + 1} + \sin\frac{1}{x} \right )$ I have the following limit to find: $$\lim\limits_{x \to \infty} x^3 \bigg ( \sin\dfrac{1}{x + 2} - 2 \sin\dfrac{1}{x + 1} + \sin\dfrac{1}{x} \bigg )$$ What approah should I use? Since it's an $\infty \cdot 0$ type indeterminate I thought about writing $x^3$ as $\dfrac{1}{\frac{1}{x^3}}$ so I would have the indeterminate form $\dfrac{0}{0}$, but after applying L'Hospital I didn't really get anywhere.
Let $y=x+1$ then $$\begin{align}\sum\sin&=\sin\left(\frac1{x+2}\right)-2\sin\left(\frac1{x+1}\right)+\sin\left(\frac1x\right)\\ &=\sin\left(\frac1y-\frac1{y^2}+\frac1{y^2(y+1)}\right)-2\sin\left(\frac1y\right)+\sin\left(\frac1y+\frac1{y^2}+\frac1{y^2(y-1)}\right)\\ &=\sin\left(\frac1{y^2(y+1)}\right)\cos\left(\frac1y-\frac1{y^2}\right)+\left(1-2\sin^2\left(\frac1{2y^2(y+1)}\right)\right)\\ &\quad\times\left(\sin\left(\frac1y\right)\left(1-2\sin^2\left(\frac1{2y^2}\right)\right)-\cos\left(\frac1y\right)\sin\left(\frac1{y^2}\right)\right)-2\sin\left(\frac1y\right)\\ &\quad+\sin\left(\frac1{y^2(y-1)}\right)\cos\left(\frac1y+\frac1{y^2}\right)+\left(1-2\sin^2\left(\frac1{2y^2(y-1)}\right)\right)\\ &\quad\times\left(\sin\left(\frac1y\right)\left(1-2\sin^2\left(\frac1{2y^2}\right)\right)+\cos\left(\frac1y\right)\sin\left(\frac1{y^2}\right)\right)\\ &=\sin\left(\frac1{y^2(y+1)}\right)\cos\left(\frac1y-\frac1{y^2}\right)-4\sin\left(\frac1y\right)\sin^2\left(\frac1{2y^2}\right)\\ &\quad-2\sin^2\left(\frac1{2y^2(y+1)}\right)\sin\left(\frac1y-\frac1{y^2}\right)-2\sin^2\left(\frac1{2y^2(y-1)}\right)\sin\left(\frac1y+\frac1{y^2}\right)\\ &\quad+\sin\left(\frac1{y^2(y-1)}\right)\cos\left(\frac1y+\frac1{y^2}\right)\end{align}$$ So $$\begin{align}\lim_{x\rightarrow\infty}x^3\sum\sin&=\lim_{y\rightarrow\infty}\left\{\left(1+\frac1y\right)^2\frac{\sin\left(\frac1{y^2(y+1)}\right)}{\frac1{y^2(y+1)}}\cos\left(\frac1y-\frac1{y^2}\right)\right.\\ &\quad-\frac1{y^2}\left(1+\frac1y\right)^3\frac{\sin\left(\frac1y\right)}{\frac1y}\frac{\sin^2\left(\frac1{2y^2}\right)}{\left(\frac1{2y^2}\right)^2}\\ &\quad-\frac{1-\frac1{y^2}}{2y^4}\frac{\sin^2\left(\frac1{2y^2(y+1)}\right)}{\left(\frac1{2y^2(y+1)}\right)^2}\frac{\sin\left(\frac1y-\frac1{y^2}\right)}{\frac1y-\frac1{y^2}}\\ &\quad-\frac{\left(1+\frac1y\right)^4}{2y^4\left(1-\frac1y\right)^2}\frac{\sin^2\left(\frac1{2y^2(y-1)}\right)}{\left(\frac1{2y^2(y-1)}\right)^2}\frac{\sin\left(\frac1y+\frac1{y^2}\right)}{\frac1y+\frac1{y^2}}\\ 
&\quad\left.+\left(1+\frac1y\right)^2\frac{\sin\left(\frac1{y^2(y-1)}\right)}{\frac1{y^2(y-1)}}\cos\left(\frac1y+\frac1{y^2}\right)\right\}\\ &=1-0-0-0+1=2\end{align}$$ I just wanted to see how this looked in brute force trigonometric identities...
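For what it's worth, a direct numerical check (Python) agrees with the value $2$; the cancellation in the second difference of sines is mild enough for double precision at these $x$:

```python
from math import sin

def g(x):
    return x ** 3 * (sin(1 / (x + 2)) - 2 * sin(1 / (x + 1)) + sin(1 / x))

vals = [g(10.0 ** k) for k in range(2, 6)]
```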
{ "language": "en", "url": "https://math.stackexchange.com/questions/3555418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Check if $\lim_{n\rightarrow\infty}\sum_{k=1}^n\ln\Big(1-\frac{1}{k}+\frac{1}{k}\cos\Big(\frac{\theta}{\sqrt{\ln n}}\Big)\Big)$ converges. How can I check if the following expression converges as $n \rightarrow\infty$? I am confused because $n$ appears twice... $$\lim_{n\rightarrow\infty} \sum_{k=1}^n \ln \Big(1 - \frac{1}{k} + \frac{1}{k}\cos\Big(\frac{\theta}{\sqrt{\ln n}}\Big)\Big)$$ My initial thoughts are that $\frac{\theta}{\sqrt{\ln n}} \rightarrow 0$, so $\cos(\frac{\theta}{\sqrt{\ln n}}) \rightarrow 1$ and so we are essentially summing $\ln(1) = 0$ infinitely many times, so it converges to 0. But is this correct?
Since $\cos$ is even, you may assume WLOG that $\theta \geq 0$. Note that $\displaystyle \sum_{k=1}^n\frac{-\theta^2}{2k\ln n} = \frac{-\theta^2}{2}\frac{1}{\ln n}\sum_{k=1}^n \frac 1k \xrightarrow[n\to \infty]{}\frac{-\theta^2}{2}$ and $$\left|\sum_{k=1}^n \ln \Big(1 - \frac{1}{k} + \frac{1}{k}\cos\Big(\frac{\theta}{\sqrt{\ln n}}\Big)\Big) - \sum_{k=1}^n\frac{-\theta^2}{2k\ln n} \right| \leq A_n + B_n $$ where $\displaystyle A_n = \left|\sum_{k=1}^n \ln \Big(1 - \frac{1}{k} + \frac{1}{k}\cos\Big(\frac{\theta}{\sqrt{\ln n}}\Big)\Big) - \left(- \frac{1}{k} + \frac{1}{k}\cos\Big(\frac{\theta}{\sqrt{\ln n}}\Big) \right) \right| $ and $\displaystyle B_n = \left|\sum_{k=1}^n - \frac{1}{k} + \frac{1}{k}\cos\Big(\frac{\theta}{\sqrt{\ln n}}\Big) - \frac{-\theta^2}{2k\ln n} \right|$ Since $\ln(1+x) = x+O(x^2)$, there is some $K>0$ and $\epsilon$ such that $$|x|\leq \epsilon \implies |\ln(1+x)-x|\leq Kx^2$$ Since $\cos(x) = 1-\frac{x^2}2+O(x^4)$, there is some $K'>0$ and $\epsilon'$ such that $$|x|\leq \epsilon' \implies |\cos(x)-1+\frac{x^2}2|\leq K'x^4$$ For $n\geq \max\left(\exp\left(\frac{\theta}{\arccos(1-\epsilon)} \right), \exp\left(\frac{\theta}{\epsilon'} \right) \right)$, $$\begin{align}A_n+B_n&\leq K\left(\cos(\frac{\theta}{\sqrt{\ln n}})-1 \right)^2\sum_{k=1}^n \frac 1{k^2} + K' \frac{\theta^4}{\ln^2 n} \sum_{k=1}^n \frac 1k \\&= O\left( \left(\cos(\frac{\theta}{\sqrt{\ln n}})-1 \right)^2 \right) + O\left(\frac{1}{\ln n} \right) \\ &=o(1) \end{align}$$ Finally, $$\sum_{k=1}^n \ln \Big(1 - \frac{1}{k} + \frac{1}{k}\cos\Big(\frac{\theta}{\sqrt{\ln n}}\Big)\Big)\xrightarrow[n\to \infty]{}\frac{-\theta^2}{2}$$
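A numerical illustration (Python; $\theta = 1$, so the limit should be $-\tfrac12$). Note the error terms in the proof are $O(1/\ln n)$, so the convergence is quite slow:

```python
from math import cos, log

def partial_sum(theta, n):
    c = 1 - cos(theta / log(n) ** 0.5)       # the summand is log(1 - c/k)
    return sum(log(1 - c / k) for k in range(1, n + 1))

s3 = partial_sum(1.0, 10 ** 3)
s5 = partial_sum(1.0, 10 ** 5)
```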
{ "language": "en", "url": "https://math.stackexchange.com/questions/3555541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Principal divisor $P+Q+R-3\infty$ on elliptic curve comes from a straight line Let $E=V(y^2-(x-a)(x-b)(x-c))$ be an elliptic curve. Let $D=P+Q+R-3\infty$ be a divisor. Then $D$ is principal if and only if $P$, $Q$ and $R$ lie on a straight line. One direction is straightforward - if the three points $P$, $Q$, $R$ are collinear, the line on which they lie gives the divisor. But what about the other direction? The field of definition is assumed to be algebraically closed. Edit: None of the suggestions so far have been satisfying. The hint my professor gave us was to use Riemann-Roch. In particular: What happens if $\infty\in\left\{P,Q,R\right\}$?
There is an argument specific to elliptic curves in Weierstrass form; it doesn't generalize well to other curves. Let $E:y^2=x^3-ax-b, 4a^3-27b^2\ne 0$ be a smooth affine cubic curve defined over an algebraically closed field $k$. Its field of rational functions is $k(x)[y]/(y^2-x^3+ax+b)$. The projective closure is $C:ZY^2=X^3-aXZ^2-bZ^3$, which is smooth too at the only new point $O=[0:1:0]\in C-E$, which corresponds to $(\infty,\infty)$ on $E$. That $E$ is smooth implies that its coordinate ring $k[E]=k[x,y]/(y^2-x^3+ax+b)$ is the full ring of rational functions $C\to k$ regular on $E$. If $D=\sum_{j=1}^n P_j- nO$ is principal $=Div(f)$ then $f$ is regular on $E$, so $f\in k[E]$, i.e. $f = \sum_{j,l} c_{j,l} x^j y^l$. Near $O$ the equation becomes $\frac{y^2}{x^3}=1-a/x^2-b/x^3$, where the right-hand side is regular and non-zero at $O$. Thus $x$ has a pole of order $2m$ and $y$ has a pole of order $3m$ at $O$ for some positive integer $m$. Since $O$ is a smooth point of $C$, $m=1$ and $x/y$ has a simple zero. Replacing in $f = \sum_{j,l} c_{j,l} x^j y^l$ each occurrence of $y^2$ by $x^3-ax-b$ we get $$f = g(x)+yh(x),\qquad g,h\in k[x]$$ Then $g(x)$ has a pole of order $2\deg(g)$ and $yh(x)$ has a pole of order $3+2\deg(h)$ at $O$, thus $f$ has a pole of order $n=\max(2\deg(g),3+2\deg(h))$ at $O$. And hence $n=3$ implies that $\deg(g)\le 1,\deg(h)=0$, which means that $f = c+dx+ey$ with $e \in k^*$, so that $Div(c+dx+ey)=P_1+P_2+P_3-3O$ and $P_1,P_2,P_3$ lie on the line $c+dx+ey=0$. Conversely, if $P_1,P_2,P_3$ are 3 distinct points of $E$ lying on the line $c+dx+ey=0,e\in k^*$, then $Div(c+dx+ey)=P_1+P_2+P_3-3O$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3555681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How will matrix $A^n$ affect the original eigenvector and eigenvalue? For example, a matrix $A$ has three distinct eigenvalues and has 3 eigenvectors $v_1,v_2,v_3$ corresponding to the three distinct eigenvalues. So, am I right to say $A^4$ will result in eigenvalue^4 and the eigenvectors remain unchanged?
The claim that the eigenvectors remain unchanged can be interpreted in two different ways, one of which is true, while the other is false. The first interpretation is If $v$ is an eigenvector of $A$, then $v$ is an eigenvector of $A^4$. This claim is true, and easily verified: If $v$ is an eigenvector of $A$, then there exists a value $\lambda$ such that $Av=\lambda v$. But then $A^4v = A^3Av = A^3(\lambda v) = \lambda A^3v = \lambda A^2 A v = \ldots = \lambda^4 v$. Therefore $v$ is also an eigenvector of $A^4$, with the corresponding eigenvalue $\lambda^4$. The second interpretation is: $v$ is an eigenvector of $A^4$ iff $v$ is an eigenvector of $A$. That claim is false. A simple counterexample is $A=\pmatrix{1&0\\0&-1}$. Up to a scalar factor, the only eigenvectors of this matrix are $\pmatrix{1\\0}$ and $\pmatrix{0\\1}$. But $A^4$ is the identity matrix, and all non-zero vectors are eigenvectors of the identity. With the eigenvalues it is even more complicated, as here the claim may depend on whether you consider real or complex vector spaces/matrices. For example, consider the matrix $A=\pmatrix{2&0&0\\0&0&-1\\0&1&0}$. In the real numbers, the only eigenvalue is $2$. But $A^4$ has eigenvalues $2^4=16$ and $1$. On the other hand, in the complex numbers, this matrix has the additional eigenvalues $i$ and $-i$, whose fourth powers are, indeed, both $1$.
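A small concrete check (Python; the matrices are just convenient examples): $v=(1,1)$ is an eigenvector of $A=\pmatrix{2&1\\1&2}$ with $\lambda = 3$ and stays an eigenvector of $A^4$ with $\lambda^4 = 81$, while for $B=\pmatrix{1&0\\0&-1}$ the vector $(1,2)$ is an eigenvector of $B^4 = I$ but not of $B$:

```python
def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[2, 1], [1, 2]]
A4 = matmul(matmul(A, A), matmul(A, A))

B = [[1, 0], [0, -1]]
B4 = matmul(matmul(B, B), matmul(B, B))   # the identity matrix
```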
{ "language": "en", "url": "https://math.stackexchange.com/questions/3555803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For $x\in G$, define $H_x=\{g^{-1}xg \mid g\in G\}$. Under what conditions on $x$ will $H_x \leq G$. Further, if $H_x \leq G$, will $H_x \lhd G$? For $x\in G$, define $H_x=\{g^{-1}xg \mid g\in G\}$. Under what conditions on $x$ will $H_x \leq G$. Further, if $H_x \leq G$, will $H_x \lhd G$? My attempt is as below: $G$ is a group so $e\in G$ and $e^{-1}xe=x \in H_x$.This means that $H_x$ will be non-empty. For $H_x$ to be a subgroup we must show that for any $g_1^{-1}x g_1$ and $g_2^{-1}xg_2$ in $H_x$ , $(g_1^{-1}xg_1)(g_2^{-1}xg_2)^{-1}\in H_x$. Also $(g_1^{-1}xg_1)(g_2^{-1}xg_2)^{-1}=(g_1^{-1}xg_1g_2^{-1}x^{-1}g_2)$. Now if $x=e$ then $x \in H_x \Rightarrow e\in H_x$ and we have $(g_1^{-1}xg_1g_2^{-1}x^{-1}g_2)=e \in H_x$. So We can say that $H_x \leq G$. In this case since $H_x=\{e\}$, we have $H_x \lhd G$. Please let me know if my reasoning is right. Also is $x=e$ the only possible condition on $x$ which makes $H_x$ a subgroup?
Your reasoning is correct. To answer your final question, note that anything which holds for every $g_1, g_2$ must hold in particular for $g_1=g_2$. Explicitly: \begin{alignat}{1} H_x \le G &\iff \forall g_1,g_2 \in G, \exists g\in G \mid g_1^{-1}xg_1(g_2^{-1}xg_2)^{-1}=g^{-1}xg \\ &\Longrightarrow \forall g_1 \in G, \exists g\in G \mid g_1^{-1}xg_1(g_1^{-1}xg_1)^{-1}=e=g^{-1}xg \\ &\Longrightarrow \exists g\in G \mid xg=g \\ &\Longrightarrow x=e \end{alignat} You have already shown that $x=e \Longrightarrow H_x \le G$, so indeed: $H_x \le G \iff x=e$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3555895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Value of $ \lim_{n \to \infty} \int \limits_{0}^{1}nx^n e^{ x^2}\,dx ?$ How to find the value of $$ \lim_{n \to \infty}\int \limits_{0}^{1} nx^n e^{ x^2}\,dx ?$$ From wolfram the limit approaches $e$ for larger values of $n$. I substituted $x^2 $ with $u$ and obtained $$ \frac{ n} {2} \int \limits_{0}^{1} u^{\frac{n-1}{2}} e^{u} du $$ The value of this integral can be obtained from here. But still I'm unable to get it. Is there any better approach for this question?
You have the right idea about changing the variable, just a different change: $u=x^{n+1}$ $$ \begin{align} \lim_{n\to\infty}\int_0^1nx^ne^{x^2}\,\mathrm{d}x &=\lim_{n\to\infty}\int_0^1\frac{n}{n+1}e^{u^{\frac2{n+1}}}\,\mathrm{d}u\\ &=\int_0^11\cdot e^1\,\mathrm{d}u\\[6pt] &=e \end{align} $$ Note that $\frac{n}{n+1}e^{u^{\frac2{n+1}}}$ increases monotonically to $e$ for all $u\in(0,1]$, and uniformly on compact subsets, so we can use monotone convergence, dominated convergence, or uniform convergence (on each compact subset) to justify the exchange of limit and integral.
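A rough numerical check (Python, midpoint rule; the step count is an arbitrary choice) shows the integrals increasing towards $e$, as the monotone convergence argument predicts:

```python
from math import exp, e

def J(n, steps=200_000):
    # midpoint rule for the integral of n * x**n * exp(x**2) over [0, 1]
    h = 1.0 / steps
    return h * sum(n * ((i + 0.5) * h) ** n * exp(((i + 0.5) * h) ** 2)
                   for i in range(steps))

j50, j200 = J(50), J(200)
```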
{ "language": "en", "url": "https://math.stackexchange.com/questions/3556059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Show that if $\gcd(a,3)=1$ then $a^7 \equiv a\pmod{63}$. Why is this assumption necessary? Question: Show that if $\gcd(a,3)=1$ then $a^7 \equiv a\pmod{63} $. Why is this assumption necessary? Proof: Since $\gcd(a,3)=1$ $\Leftrightarrow a\equiv 1\pmod 3$ $\Leftrightarrow a^7\equiv 1\pmod3\equiv a\pmod3$ Then using Fermat's Little Theorem: If $a,p\in\mathbb N$ and $p$ is prime then $a^7\equiv a\pmod7$ $\Rightarrow 3 |a^7-a$ and $7 |a^7-a$ $\Leftrightarrow a^7-a=3k_1$ and $a^7-a=7k_2$ $\Rightarrow (a^7-a)^3=63(k_1)^2k_2$ $\Rightarrow (a^7-a)^3\pmod{63}\equiv 0$ Since $x^m\pmod n\equiv x\pmod n$ $\Rightarrow (a^7-a)^3\pmod{63}\equiv a^7-a\pmod{63}\equiv 0$ $\Leftrightarrow a^7\equiv a\pmod{63}$ The thing I am struggling with is that the question says that the only assumption necessary is that $\gcd(a,3)=1$. Surely there are two assumptions neccessary, since to use Fermat's Little Theorem (in this situation) we need $a\neq 0 \pmod7 \space\space\space(\gcd(a,7)=1)$. I am sure there is something obvious I'm missing -would be great if someone could check over what I have done and point out any mistakes :)
Fermat's Little Theorem states that an integer $a$ and prime $p$ satisfy $p|a^p-a$, and if further $p\nmid a$ we can cancel this to $p|a^{p-1}-1$. So we always have $7|a^7-a$, but if $3\nmid a$ we can reason$$3|a^2-1\implies 3^2|(a^3+2a)(a^2-1)^2+3(a^3-a)=a^7-a.$$
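A quick check in Python: the congruence holds for every $a$ coprime to $3$, and the hypothesis really is needed: e.g. $a=3$ fails (though not every multiple of $3$ does; $a=9$ happens to work):

```python
from math import gcd

ok = all(pow(a, 7, 63) == a % 63 for a in range(1, 200) if gcd(a, 3) == 1)
fails = [a for a in range(1, 63) if a % 3 == 0 and pow(a, 7, 63) != a % 63]
```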
{ "language": "en", "url": "https://math.stackexchange.com/questions/3556188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Calculus of variations Euler-Lagrange equation and variational problem Find all the extrema (local minima and maxima) of the function $$J[y] = \int\limits_1^2(xy' + y)^2\,\mathrm dx;\qquad y(1) = 1, y(2) = \dfrac12.$$ Hint. Once you've found the solution of the Euler-Lagrange equation with the boundary conditions, remember to check, like in the previous problem, if this solution is a minimum, a maximum or not an extremum. The image above shows my work. I'm pretty sure I solved the E-L equation correctly with the boundary conditions, but I am not too sure about the variation part. I always seem to find an absolute minimum, which makes me think my understanding of this part is lacking.
Rather than going through your work line by line, let's see if I get the same answer: $$L=x^2y^{\prime2}+2xyy^\prime+y^2\implies 0=\frac{(\partial_{y^\prime}L)^\prime-\partial_yL}{2x^2}=y^{\prime\prime}+\frac2xy^\prime\implies y=A+\frac{B}{x}.$$The boundary conditions give $y=\frac1x$, as you said. With $y=\frac1x+\eta$ we get$$J=\int_1^2(x\eta^\prime+\eta)^2dx,$$which is minimal for $\eta=0$, so you're also right about the stationary point being a minimum.
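A numeric cross-check (Python; the perturbation $\eta(x)=(x-1)(2-x)$ is an arbitrary choice vanishing at the endpoints): $J[1/x]=0$ since $xy'+y=0$ along $y=1/x$, and an admissible perturbation strictly increases $J$, consistent with the stationary point being a minimum:

```python
def J(y, dy, steps=10_000):
    # midpoint rule for the integral of (x*y'(x) + y(x))**2 over [1, 2]
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = 1 + (i + 0.5) * h
        total += (x * dy(x) + y(x)) ** 2 * h
    return total

base = J(lambda x: 1 / x, lambda x: -1 / x ** 2)

eta = lambda x: (x - 1) * (2 - x)      # vanishes at x = 1 and x = 2
deta = lambda x: 3 - 2 * x
pert = J(lambda x: 1 / x + eta(x), lambda x: -1 / x ** 2 + deta(x))
```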
{ "language": "en", "url": "https://math.stackexchange.com/questions/3556367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving that $\frac{\sum_{n=0}^\infty a_n n z^n}{\sum_{n=0}^\infty a_n z^n}$ is strictly increasing. I'm considering the quotient from the title: $$g(z)=\frac{\sum_{n=0}^\infty n\cdot a_n z^n}{\sum_{n=0}^\infty a_n z^n}$$ with $a_n>0$ for all $n\in \mathbb{N}$ and for all $z\in(0,1)$, and assuming that both sums satisfy the requirements necessary to differentiate, etc. What I want to prove is that $g'(z)>0$ and $g''(z)\geq 0$ (I'm fairly sure the second inequality is true, but not 100% sure). I've been able to prove the first inequality by differentiating with respect to $z$ and then using the Cauchy formula for the coefficients of the product of two infinite series. (If requested I can post it, but it's way too cumbersome to be useful for proving that $g''(z)\geq 0$.) I also think there must be an easier and smarter way to prove it (in fact it may be trivial and I just haven't thought of it). I would appreciate any help; thanks in advance. Edit: I've asked this because I think it could be a general property, but the problem it actually comes from involves this specific $a_n$ (if it can be proven for this case I'd be happy too): $$a_n = \frac{\Gamma\left(\frac{1}{2} +\frac{1}{k}\right)}{\Gamma\left(\frac{1}{2} \right) \cdot \Gamma\left(1 +\frac{1}{k}\right)}\cdot \frac{\Gamma\left(\frac{1}{2} + n \right) \Gamma\left(1+\frac{1}{k} + n \right)}{\Gamma\left(\frac{1}{2} +\frac{1}{k}+ n \right)} \cdot \frac{1}{n!}$$ where $k>0$ is a positive constant. And this problem in itself comes from trying to prove that both the first and second derivative of the following quotient are positive: $$\frac{_2F_1\left(\frac{3}{2},1+\frac{1}{k};\frac{1}{2}+\frac{1}{k};z \right) }{_2F_1\left(\frac{3}{2},1+\frac{1}{k};\frac{1}{2}+\frac{1}{k};z \right) }$$
Even when the inequality changed to $g''(z) \ge 0$, it need not be true. For an counterexample, take $a_n = \begin{cases}\frac1{n!},& n \ne 1,\\ \frac12 & n = 1\end{cases}$. The denominator of this $g(z)$ is $$D(z) \stackrel{def}{=} \sum_{n=0}^\infty a_n z^n = e^z - \frac{z}{2}$$ From this, we can deduce $$g(z) = \frac{\sum\limits_{n=0}^\infty n a_n z^n}{\sum\limits_{n=0}^\infty a_n z^n} = z\frac{d}{dz}\log(D(z)) = z \frac{e^z - \frac12}{e^z - \frac{z}{2}}$$ If one throw $g(z)$ to a CAS and make a plot of $g''(z)$, one will notice $g''(z) < 0$ for $z \in [1, \sim\!3.81787380305645 ]$.
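The sign claim can be spot-checked numerically (Python, central second differences; the sample points are taken inside the interval quoted above):

```python
from math import exp

def g(z):
    # the counterexample: a_1 = 1/2 and a_n = 1/n! otherwise
    return z * (exp(z) - 0.5) / (exp(z) - z / 2)

def second_diff(f, z, h=1e-4):
    return (f(z - h) - 2 * f(z) + f(z + h)) / h ** 2

vals = [second_diff(g, z) for z in (1.5, 2.0, 3.0)]
```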
{ "language": "en", "url": "https://math.stackexchange.com/questions/3556624", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What is the value of $x$, given that $f\left(x\right)+f\left(\frac{1}{1-x}\right)=x$ and $f^{-1}\left(x\right)=2$? I tried to approach the problem by using the equation $f\left(2\right)=x$ but I always get stuck in the middle of the process.
We have $$x=2\Rightarrow f(2)+f(-1)=2$$ $$x=1/2\Rightarrow f(1/2)+f(2)=1/2$$ $$x=-1\Rightarrow f(-1)+f(1/2)=-1$$ This is a system of three equations with three unknowns. If it helps, we can rewrite it as $$s+y=2$$ $$s+z=1/2$$ $$y+z=-1$$ where $f(2)=s$, $f(-1)=y$, and $f(1/2)=z$. We can easily solve this by noting that $$2+1/2=(s+y)+(s+z)=2s+(y+z)=2s-1$$ $$7/2=2s$$ $$s=7/4$$ We conclude $f(2)=7/4$ and therefore $x=7/4$.
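Exact verification in Python: the map $x \mapsto \frac{1}{1-x}$ cycles $2 \to -1 \to \tfrac12 \to 2$, and the $3\times 3$ system has the solution above:

```python
from fractions import Fraction as F

T = lambda x: 1 / (1 - x)               # the substitution x -> 1/(1-x)
orbit = [F(2), T(F(2)), T(T(F(2)))]     # 2 -> -1 -> 1/2

# s = f(2), y = f(-1), z = f(1/2); solve by adding the equations
s = (F(2) + F(1, 2) - F(-1)) / 2        # 2s = 2 + 1/2 + 1
y = F(2) - s
z = F(1, 2) - s
```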
{ "language": "en", "url": "https://math.stackexchange.com/questions/3556891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Find general term of recursive sequences $ x_{n+1}=\frac{1}{2-x_n}, x_1=1/2,$ Please help to solve: * *$ x_{n+1}=\frac{1}{2-x_n}, x_1=1/2,$ *$x_{n+1}= \frac{2}{3-x_n}, x_1=1/2$ I know answers, but can't figure out the solution. The first one is obvious if you calculate first 3-5 terms by hand. But how can I get the result not by guessing, but mathematically? Answers are: * *$x_n = \frac{n}{n+1}$ *$x_n = \frac{3\cdot2^{n-1}-2}{3\cdot2^{n-1}-1}$
If we apply the recursive relation repeatedly we get $$x_{n+1} = \frac{1}{2-x_n} = \frac{2-x_{n-1}}{3-2x_{n-1}} = \frac{3-2x_{n-2}}{4-3x_{n-2}}$$ and in general $$x_{n+1} = \frac{k-(k-1)x_{n+1-k}}{k+1-kx_{n+1-k}}$$ Setting $k=n$, so that the right-hand side involves $x_1=\frac{1}{2}$, we get $$x_{n+1} = \frac{n-(n-1)\frac{1}{2}}{n+1-n\frac{1}{2}} = \frac{n+1}{n+2},$$ that is, $x_n = \frac{n}{n+1}$. I haven't checked the second one but I believe the same method should produce the result.
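Both claimed closed forms can be checked directly against the recursions; the sketch below (my own verification, not from the answer, exact rational arithmetic throughout) iterates each one:

```python
from fractions import Fraction

def iterate(step, x1, n):
    """Return the list x_1, ..., x_n of the recursion x_{k+1} = step(x_k)."""
    xs = [x1]
    while len(xs) < n:
        xs.append(step(xs[-1]))
    return xs

xs1 = iterate(lambda x: 1 / (2 - x), Fraction(1, 2), 8)  # first recursion
xs2 = iterate(lambda x: 2 / (3 - x), Fraction(1, 2), 8)  # second recursion
print(xs1[:4])  # 1/2, 2/3, 3/4, 4/5 -- the pattern n/(n+1)
print(xs2[:4])
```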
{ "language": "en", "url": "https://math.stackexchange.com/questions/3557190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Abelian group of finite order This problem was submitted to me as part of Abstract Algebra homework. Let $(G,\cdot)$ be an abelian group of finite order. Show that if $o(G)=n$ then $a^n=e$ for all $a\in G $. Hint : Show that if $G=\{e,a_1,a_2,\cdots,a_{n-1} \}$ then $G=\{a,a\cdot a_1, a \cdot a_2 , \cdots , a \cdot a_{n-1} \}$, where $o(\cdot)$ denotes the cardinality and $e$ is the identity element. My first idea was that it reminded me of Cauchy's theorem for abelian groups of finite order. However I did not manage to link it to it. Edit 1: It was actually Lagrange's theorem I was thinking of. A colleague proposed : * *to prove that $\exists k \in [[1,n]], a^k =e$ *to prove the hint when $a \neq e$ (it is trivial otherwise), $\forall k < n, a^k \neq e$, "since otherwise the powers of $a$ form a smaller set of numbers than $G$ so you have to have $a^n =e$". Edit 2: The proof must not use Lagrange's theorem.
Use the hint: let $b=ea_1a_2\dotsm a_{n-1}$. Then $$ b=ae\cdot aa_1\cdot \dotsm aa_{n-1}=a^nb $$ (because the group is abelian). Hence $a^n=e$ by cancelling $b$. Why can we write $G=\{a,aa_1,\dots,aa_{n-1}\}$? Consider the map $\mu_a\colon G\to G$, $\mu_a(x)=ax$. This map is injective, hence also surjective.
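For a concrete instance of the statement (an illustration of my own, not part of the proof), take the units modulo 15, an abelian group of order 8 under multiplication:

```python
import math

# The units modulo 15 form an abelian group under multiplication mod 15
units = [a for a in range(1, 15) if math.gcd(a, 15) == 1]
n = len(units)  # o(G) = 8
# a^n is the identity for every element a of the group
print(n, all(pow(a, n, 15) == 1 for a in units))  # 8 True
```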
{ "language": "en", "url": "https://math.stackexchange.com/questions/3557292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Does the series diverge or converge: $\sum\limits_{n=1}^{\infty} \frac{1+(-1)^n}{\sqrt{n}}$? I'm asked to find out if this series converges or diverges: $$\sum_{n=1}^{\infty} \frac{1+(-1)^n}{\sqrt{n}}=0+\frac{2}{\sqrt{2}}+0+\frac{2}{\sqrt{4}}...$$ So I thought I could use a direct comparison test, so $$\sum_{n=1}^{\infty} \frac{1+(-1)^n}{\sqrt{n}}\leq \sum_{n=1}^{\infty}\frac{2}{\sqrt{n}}$$ But given that this is a p-series with $$p=\frac{1}{2}$$ I know that I cannot use this for comparison because it diverges. So I'm stuck. Does anyone have some tips?
Hint: Since $1+(-1)^n=0$ if $n$ is odd and $2$ if $n$ is even, it's $$\sum\limits_{k=1}^\infty\dfrac{2}{\sqrt{2k}}= \sqrt2\sum\limits_{k=1}^\infty \dfrac1{\sqrt k}$$
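Numerically the partial sums wander off to infinity, as the hint predicts; a quick sketch (mine, not the answer's) compares them with the standard integral-test lower bound $\sum_{k\le K}1/\sqrt{k}\ge 2(\sqrt{K+1}-1)$:

```python
import math

def partial(K):
    # sum of the surviving even-indexed terms 2/sqrt(2k), k = 1..K
    return sum(2.0 / math.sqrt(2.0 * k) for k in range(1, K + 1))

for K in (100, 10_000, 1_000_000):
    bound = math.sqrt(2.0) * 2.0 * (math.sqrt(K + 1) - 1.0)
    print(K, partial(K), bound)  # partial sums exceed an unbounded lower bound
```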
{ "language": "en", "url": "https://math.stackexchange.com/questions/3557425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
$D$ is a point inside $\triangle ABC$, $\angle CAD=\angle DAB=10$, $\angle CBD=40$, $\angle DBA=20$, what is $\angle CDB$? $D$ is a point inside $\triangle ABC$, $\angle CAD=\angle DAB=10$, $\angle CBD=40$, $\angle DBA=20$, what is $\angle CDB$? I'm sure I'm supposed to use trigonometry to obtain the value of $\angle CDB$ but I'm not exactly sure how to start. Maybe Law of Sines?
Take $E$, the reflection of $B$ in $AD$; it lies on $AC$, and $\triangle BED$ is equilateral. One easily gets $BE=BC$ ($BEC$ being a Langley triangle), and the required $\widehat{BDC}=70^\circ$ follows.
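Since this answer is quite terse, here is an independent coordinate check (my own sketch, not part of the synthetic proof) that $\angle BDC$ indeed comes out to $70^\circ$: place $A=(0,0)$, $B=(1,0)$ and intersect the four given rays.

```python
import math

def intersect(p, ang_p, q, ang_q):
    """Intersect the ray from p at angle ang_p with the ray from q at ang_q."""
    dp = (math.cos(ang_p), math.sin(ang_p))
    dq = (math.cos(ang_q), math.sin(ang_q))
    # solve p + t*dp = q + s*dq for t by Cramer's rule
    det = dp[0] * (-dq[1]) - dp[1] * (-dq[0])
    t = ((q[0] - p[0]) * (-dq[1]) - (q[1] - p[1]) * (-dq[0])) / det
    return (p[0] + t * dp[0], p[1] + t * dp[1])

def angle_deg(v, w):
    dot = v[0] * w[0] + v[1] * w[1]
    return math.degrees(math.acos(dot / (math.hypot(*v) * math.hypot(*w))))

rad = math.radians
A, B = (0.0, 0.0), (1.0, 0.0)
C = intersect(A, rad(20), B, rad(180 - 60))  # angle A = 20, angle B = 60
D = intersect(A, rad(10), B, rad(180 - 20))  # angle DAB = 10, angle DBA = 20
BDC = angle_deg((B[0] - D[0], B[1] - D[1]), (C[0] - D[0], C[1] - D[1]))
print(BDC)  # ~70
```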
{ "language": "en", "url": "https://math.stackexchange.com/questions/3557568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
$\operatorname{Hom}_A(K,A)\neq0$ iff $A=K$ I can prove this result in one direction: if $A=K$, then there exists a natural (identity) morphism from $K$ to $K$. But I have no idea how to prove the converse. I am new to this course; please help by providing a good explanation. Here $A$ is an integral domain and $K$ is its quotient field.
Suppose that we have a non-zero $A$-module morphism $\phi : K \to A$. Notice that $\phi (1) \neq 0$ (because otherwise $\phi = 0$). We have $\phi (1) \phi (\frac{1}{\phi (1)}) = \phi (\phi (1) \cdot \frac{1}{\phi (1)}) = \phi (1)$ and therefore $\phi (\frac{1}{\phi (1)}) = 1$. Let $0 \neq a \in A$. We want to show that $a$ has an inverse in $A$. We have $ a \phi (\frac{1}{\phi (1)a}) = \phi (a \cdot \frac{1}{\phi (1) a}) = \phi (\frac{1}{\phi (1)}) = 1$ and therefore $a$ has an inverse in $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3557731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The equation of the tangent line at the point (1,-1) is $y=\frac{4x}{3}-\frac{7}{3}$. Given the equation, $x^2y+ay^2=b$, find the values of a and b. So I got this question: "The equation of the tangent line at the point $(1,-1)$ is $y= \dfrac{4x}{3} - \dfrac{7}{3}$. Given the equation, $x^2y + ay^2 = b$, find the values of $a$ and $b$." I'm really stuck right now because I think I am supposed to differentiate the equation with $a$ and $b$ in terms of $x$, but when I do I'm stuck with $\dfrac{\mathrm dy}{\mathrm dx}$ on one side and $\dfrac{\mathrm da}{\mathrm dx}$ on the other. So even if I subbed in the $(1, -1)$ and used algebra to get $\dfrac{\mathrm dy}{\mathrm dx}$ to equal $\dfrac{4}{3}$ (slope I want to get), I am stuck with $\dfrac{\mathrm dy}{\mathrm dx}$ AND a $\dfrac{\mathrm da}{\mathrm dx}$ term I don't know how to get rid of. Thanks in advance for the help!
First, you have to differentiate both sides implicitly: $$\frac{\mathrm d}{\mathrm dx} \left(x^2y+ay^2\right) = \frac{\mathrm d}{\mathrm dx} (b)$$ $x^2y$ is a product, so $\dfrac{\mathrm d}{\mathrm dx} \left(x^2y\right) = y\dfrac{\mathrm d}{\mathrm dx} \left(x^2\right)+x^2\dfrac{\mathrm dy}{\mathrm dx} = 2xy+x^2\dfrac{\mathrm dy}{\mathrm dx}$. $a$ is a constant, so $\dfrac{\mathrm d}{\mathrm dx}\left(ay^2\right) = a\dfrac{\mathrm d}{\mathrm dx}\left(y^2\right) = 2ay\dfrac{\mathrm dy}{\mathrm dx}$. $b$ is also a constant, so $\dfrac{\mathrm d}{\mathrm dx} (b) = 0$. Combining these gives: $$2xy+x^2\frac{\mathrm dy}{\mathrm dx}+2ay\frac{\mathrm dy}{\mathrm dx} = 0 \tag{1}$$ Notice how there's no $\dfrac{\mathrm da}{\mathrm dx}$ anywhere. I would guess the issue here is that you're differentiating $ay^2$ the same way you'd differentiate $xy^2$ by keeping an extra $\dfrac{\mathrm da}{\mathrm dx}$. Even if you did use the product rule, since $\color{blue}{\dfrac{\mathrm da}{\mathrm dx} = 0}$, you'd get the same result: $$\dfrac{\mathrm d}{\mathrm dx} \left(ay^2\right) = y^2\dfrac{\mathrm d}{\mathrm dx} (a)+a\dfrac{\mathrm d}{\mathrm dx}\left(y^2\right) = y^2\color{blue}{\dfrac{\mathrm da}{\mathrm dx}}+2ay\dfrac{\mathrm dy}{\mathrm dx} = 2ay\dfrac{\mathrm dy}{\mathrm dx}$$ Finally, the equation of the tangent line at $(1, -1)$ is $y = \dfrac{4}{3}x-\dfrac{7}{3}$, so the slope becomes $\dfrac{\mathrm dy}{\mathrm dx}\biggr\rvert_{x = 1} = \dfrac{4}{3}$. If you use the point $(1, -1)$ for equation $(1)$, the only unknown left is $a$, which you can solve for. Finally, since the tangent and the original curve share the point $(1, -1)$, plugging in $x = 1$, $y = -1$, and the value of $a$ (from the previous part) will give $b$.
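Carrying the arithmetic through (a sketch of my own with exact fractions, not part of the original answer) produces concrete values of $a$ and $b$, and also reproduces the required slope from the implicit-derivative formula:

```python
from fractions import Fraction

x, y, slope = Fraction(1), Fraction(-1), Fraction(4, 3)
# equation (1): 2xy + x^2 y' + 2ay y' = 0, solved for the constant a
a = -(2 * x * y + x**2 * slope) / (2 * y * slope)
b = x**2 * y + a * y**2                  # plug (1, -1) and a back into the curve
check = -2 * x * y / (x**2 + 2 * a * y)  # implicit dy/dx at (1, -1)
print(a, b, check)  # -1/4 -5/4 4/3
```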
{ "language": "en", "url": "https://math.stackexchange.com/questions/3557872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can you switch $g'$ and $f'$ in the integration by parts formula? I'm practicing calculus for the future and have a question about the integration by parts formula. I was taught: $\int f(x)g'(x) =f(x)g(x)-\int f'(x)g(x)dx $ But would switching $f'$ and $g'$ in the formula to this still work? $\int f'(x)g(x) = f(x)g(x) - \int f(x)g'(x)dx$ If so is one way harder than the other? In a problem like $\int x\cos(x)$ does it matter which values you pick for $f(x)$ and $g(x)$, or would both ways yield the correct answer?
Both will yield the same answer, but for some integrals one choice makes the math work out much nicer than the other. Changing what you choose as the "parts" is a common strategy to try when solving these.
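For $\int x\cos x\,dx$, picking $f(x)=x$ and $g'(x)=\cos x$ gives $x\sin x+\cos x+C$ in one step, while the opposite choice forces you to integrate $\tfrac{x^2}{2}\sin x$, which is harder. A quick numeric sanity check of mine (not from the answer) that the easy antiderivative differentiates back correctly:

```python
import math

def F(x):
    # antiderivative of x*cos(x) from parts with f = x, g' = cos
    return x * math.sin(x) + math.cos(x)

def dF(x, h=1e-6):
    # central-difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

for x in (0.3, 1.0, 2.5):
    print(x, dF(x), x * math.cos(x))  # last two columns agree
```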
{ "language": "en", "url": "https://math.stackexchange.com/questions/3558011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Differentiable function $f(x)=4x^7 −14x^4 +30x−17$ I am trying to prove that the function $f:\Bbb R→\Bbb R$, $f(x)=4x^7 −14x^4 +30x−17$,is injective. To do this I need to prove it is differentiable from first principles. I can then prove its derivative is strictly increasing to show it is injective. Any help on the proof that it's differentiable would be great, especially in what delta to choose.
From first principles, $$f'(x)=\lim_{h\rightarrow0}\frac{f(x+h)-f(x)}{h}=\lim_{h\rightarrow0}\frac{4((x+h)^7-x^7)-14((x+h)^4-x^4)+30(x+h-x)}{h}.$$ Expanding $(x+h)^7$ and $(x+h)^4$ by the binomial theorem, every term containing $h^2$ or a higher power of $h$ vanishes in the limit, leaving $$f'(x)=4(7x^6)-14(4x^3)+30=28x^6-56x^3+30.$$ Since this limit exists for every $x$, the function is differentiable. Moreover, $$f'(x)=28\left(x^6-2x^3+\frac{15}{14}\right)=28\left((x^3-1)^2+\frac{1}{14}\right)>0,$$ so $f$ is strictly increasing and hence injective.
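A quick numerical cross-check of the completed square (my own sketch, not part of the answer): the minimum of $f'$ is $f'(1)=28-56+30=2>0$.

```python
def fprime(x):
    # f'(x) = 28x^6 - 56x^3 + 30 = 28*((x^3 - 1)^2 + 1/14)
    return 28 * x**6 - 56 * x**3 + 30

grid_min = min(fprime(-2 + k / 100) for k in range(401))  # sample [-2, 2]
print(fprime(1), grid_min)  # the minimum 2 is attained at x = 1
```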
{ "language": "en", "url": "https://math.stackexchange.com/questions/3558256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
How to separate variables in DiffEq of $y'=x-y$ $$\frac{dy}{dx} = x-y$$ How do I separate so I can integrate both sides? Thanks for getting me started. I know the solution is $$ y = x-1+2e^{-x}$$
Hint Your equation is $$y'(x)+y(x)=x.$$ Multiplying both sides by $e^x$ gives $$\big(y(x)e^x\big)'=xe^x.$$ I let you conclude.
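Following the hint to the end, $(ye^x)'=xe^x$ integrates to $ye^x=(x-1)e^x+C$, so $y=x-1+Ce^{-x}$; the solution quoted in the question has $C=2$, i.e. $y(0)=1$. A quick residual check (a sketch of mine, not from the answer):

```python
import math

def y(x):
    # candidate solution y = x - 1 + 2*e^(-x)
    return x - 1 + 2 * math.exp(-x)

def residual(x, h=1e-6):
    # how far y' - (x - y) is from zero, with y' by central differences
    dydx = (y(x + h) - y(x - h)) / (2 * h)
    return dydx - (x - y(x))

print([round(residual(x), 9) for x in (-1.0, 0.0, 0.5, 2.0)])  # all ~0
```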
{ "language": "en", "url": "https://math.stackexchange.com/questions/3558401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 1 }
Are there finite lattices which are not order isomorphic to a sublattice of $\mathbb Z^n$? Consider the lattice $\mathbb Z^n$ for some finite $n$ ordered by $x\leq y \leftrightarrow \forall i: x_i \leq y_i$. I'm having difficulty thinking of all lattice which is not a sublattice of this. For example, the diamond lattice is isomorphic to: (2,2,2) / | \ (1,0,0) (0,1,0) (0,0,1) \ | / (0, 0, 0) Are there counterexamples? If not, is there a name for the theorem that they are all isomorphic?
While not every finite lattice is a sublattice of $\mathbb Z^n$, as amrsa pointed out, it is true that every finite poset is a subposet of $\mathbb Z^n$. Given a finite poset $P$ consider the map $L\colon P\to\mathbb Z^P$ given by $$ L(p)_q = \begin{cases} 1 & \text{if $q\le p$}, \\ 0 & \text{otherwise}. \end{cases} $$ Then $L$ is an order-preserving and order-reflecting injection and hence $P\cong L(P)$ as posets. Note that this map factors through the boolean lattice on $P$ by sending $p$ to its downset $p_\downarrow=\{\ q\ |\ q\le p\ \}$ which then gets sent to the characteristic function $L(p) = \chi_{p_\downarrow}$.
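The embedding $L$ is easy to exercise on a small example; here is a sketch of mine (not part of the answer) using the diamond poset from the question:

```python
# Diamond poset: bot < a, b, c < top, with a, b, c pairwise incomparable
elements = ["bot", "a", "b", "c", "top"]
below = {                     # below[q] = the downset of q
    "bot": {"bot"},
    "a": {"bot", "a"},
    "b": {"bot", "b"},
    "c": {"bot", "c"},
    "top": set(elements),
}

def le(p, q):                 # p <= q in the poset
    return p in below[q]

def L(p):                     # L(p)_q = 1 iff q <= p, a vector in Z^P
    return tuple(int(le(q, p)) for q in elements)

def vec_le(u, v):             # componentwise order on Z^P
    return all(x <= y for x, y in zip(u, v))

# L is order-preserving and order-reflecting
ok = all(le(p, q) == vec_le(L(p), L(q)) for p in elements for q in elements)
print(ok)  # True
```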
{ "language": "en", "url": "https://math.stackexchange.com/questions/3558505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }