Combinatorial Summation I'm trying to solve the following question: If $s_n$ is the sum of the first $n$ natural numbers, then prove that $$2(s_1s_{2n}+s_2s_{2n-1}+\dots+s_ns_{n+1})=\frac{(2n+4)!}{5!(2n-1)!}$$ Here is how far I've gotten: the general term of $(1-x)^{-2n}$ happens to be $t_{r+1}=\frac{(2n+r-1)!}{(2n-1)!r!}x^r$, and therefore the 6th term, i.e., $t_{5+1}=t_6=\frac{(2n+4)!}{(2n-1)!5!}x^5$, looks like the RHS of the identity I'm trying to prove. Second, I found that $$(1-x)^{-3}=s_1+s_2x+\dots+s_nx^{n-1}+\dots$$ It's clear that the LHS must come from something like $(1-x)^{-a}(1-x)^{-b}$, but I'm not able to find a combination which would give me $2(s_1s_{2n}+s_2s_{2n-1}+\dots+s_ns_{n+1})$. Help is greatly appreciated.
If you'd be open to another strategy, we want to prove$$\sum_{k=1}^ns_ks_{2n+1-k}=\frac{n(n+1)(n+2)(2n+1)(2n+3)}{30}.$$This is an exercise in arithmetic once you notice$$s_k=\tfrac12k(k+1)\implies s_ks_{2n+1-k}=\tfrac14k(k+1)(2n+1-k)(2n+2-k)$$is a quartic in $k$, and use Faulhaber's formulae$$\begin{align}\sum_{k=1}^nk&=\tfrac12n(n+1),\\\sum_{k=1}^nk^2&=\tfrac16n(n+1)(2n+1),\\\sum_{k=1}^nk^3&=\tfrac14n^2(n+1)^2,\\\sum_{k=1}^nk^4&=\tfrac{1}{30}n(6n^4+15n^3+10n^2-1).\end{align}$$
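Incidentally, the identity (in both its closed and factorial forms) is easy to sanity-check by brute force; here is a quick sketch (function names are my own):

```python
from math import factorial

def s(k):            # s_k = 1 + 2 + ... + k
    return k * (k + 1) // 2

def lhs(n):          # 2 * (s_1 s_{2n} + s_2 s_{2n-1} + ... + s_n s_{n+1})
    return 2 * sum(s(k) * s(2 * n + 1 - k) for k in range(1, n + 1))

def rhs(n):          # (2n+4)! / (5! (2n-1)!)
    return factorial(2 * n + 4) // (factorial(5) * factorial(2 * n - 1))

identity_holds = all(lhs(n) == rhs(n) for n in range(1, 60))
```

Note that $\frac{(2n+4)!}{2\cdot5!\,(2n-1)!}$ simplifies exactly to the product $\frac{n(n+1)(n+2)(2n+1)(2n+3)}{30}$ above.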
{ "language": "en", "url": "https://math.stackexchange.com/questions/4096903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Proving global minimum by lower bound of 2-variable function $f(x,y)=x^4+2x^2y+y^2-4x^2-8x-8y$ I would like to prove that the following function $f :\mathbb{R}^2\to\mathbb{R}$ has a global minimum: $f(x,y)=x^4+2x^2y+y^2-4x^2-8x-8y=(x^2+y)^2-4(x^2+2x+2y)$ $f$ has a strict local minimum at $(1,3)$, with $f(1,3)=-20$. I think that what I need to show is that $-20$ is a lower bound of this function, and then conclude that it's the global minimum, but I didn't manage to do so. Please advise. Thank you.
You can find the stationary points: \begin{align} \frac{\partial f}{\partial x}&=4x^3+4xy-8x-8 \\[6px] \frac{\partial f}{\partial y}&=2x^2+2y-8 \end{align} At a critical point $y=4-x^2$ and also $$ x^3+x(4-x^2)-2x-2=0 $$ that is, $x=1$, which implies $y=3$. Since the function is clearly unbounded above on the line $y=0$, we just need to show it is bounded below. Conjecturing that the stationary point is a minimum, with $f(1,3)=-20$, we need to check whether $f(x,y)\ge-20$. Now let's try completing the square in $$ y^2+2(x^2-4)y+x^4-4x^2-8x+20 $$ Since $(x^2-4)^2=x^4-8x^2+16$, we have $$ f(x,y)+20=(y+x^2-4)^2+4x^2-8x+4=(y+x^2-4)^2+4(x-1)^2 $$ which is everywhere nonnegative, so we have proved that $f(x,y)\ge-20$.
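A quick numerical sanity check of the completed square (a sketch, helper names mine): verify $f(x,y)+20=(y+x^2-4)^2+4(x-1)^2$ at random points, and that $f(1,3)=-20$.

```python
import random

def f(x, y):
    return x**4 + 2 * x**2 * y + y**2 - 4 * x**2 - 8 * x - 8 * y

def completed_square(x, y):   # (y + x^2 - 4)^2 + 4 (x - 1)^2
    return (y + x**2 - 4) ** 2 + 4 * (x - 1) ** 2

random.seed(0)
samples = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(1000)]
identity_ok = all(abs(f(x, y) + 20 - completed_square(x, y)) < 1e-6
                  for x, y in samples)
```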
{ "language": "en", "url": "https://math.stackexchange.com/questions/4097017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cubic equation with circle intersection to form a square A cubic curve and a circle (unit radius) intersect at $A,B,C,D$, and $ABCD$ is a square. Find the angle $\theta$. I tried: *$(0,0)$ is a solution, so the constant term is $0$. *Substituting $A(x,y)$ and $C(-x,-y)$ and adding the results gives that the coefficient of $x^2$ is $0$. The cubic then becomes $f(x)=ax^3+bx$. *Substituting $A$ and $B$ and adding the two equations. I found it interesting that for $n$ given points we can find a unique $n+1$ degree polynomial. Also: can complex numbers be used here? Please note: I am not sure whether we can find the angle (an integer) without knowing the coefficients of the cubic. EDIT: From the answers: 1. Putting $A(\cos\theta,\sin\theta)$ in $f(x)$: $a\cos^3\theta + b\cos\theta = \sin\theta$ 2. Putting $B(-\sin\theta,\cos\theta)$ in $f'(x)$: $3a\sin^2\theta + b = \tan\theta$ [as the circle has slope $\tan\theta$ at $B$] Equations $1,2$ give $3a\sin^2\theta = a\cos^2\theta$. So $\sin^2\theta = \frac{1}{4}$. I am getting the value of $\theta$, but an answer shows a plot of many cubics: that is because in my case $ABCD$ is a square.
HINT The cubic must clearly be of the type $$ y = bx\left( {x^{\,2} - a^{\,2} } \right) $$ In polar coordinates $$ r\sin \theta = br\cos \theta \left( {r^{\,2} \cos ^{\,2} \theta - a^{\,2} } \right) $$ i.e. $$ 0 = r\left( {br^{\,2} \cos ^{\,3} \theta - \left( {a^{\,2} b\cos \theta + \sin \theta } \right)} \right) $$ and excluding the origin $$ 0 = br^{\,2} \cos ^{\,3} \theta - c\cos \left( {\theta + \beta } \right) $$ where both $b$ and $c$ can be taken to be positive. So $$ r = \sqrt {{{c\cos \left( {\theta + \beta } \right)} \over {b\cos ^{\,3} \theta }}} $$ Then $D$ is a local max for $r$, and you should impose that the same $r_{max}$ occurs $90^{\circ}$ later.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4097262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Not sure if this idea for calculating the square roots of a value has any merit Okay, so I was thinking about square roots the other day and I spent some time thinking of how to visualize them. Say we have 10,000. The square root would be 100, so 100 of 100 equals 10,000. Now what if you had a 100-sided polygon with each side segment being equal to 100? Wouldn't there be a strong correlation between the squared value of the side length and the approximated circumference of the circle that best fits this polygon? I don't know too much about trig identities, but it seems to me you should be able to make a good guess for the square root of a number (it would be more accurate the higher the number) by relating it to the approximated circumference. This is mostly for making the initial guess when finding a square root with software, by the way. I can't figure out how to solve for S in the following equation (where X is already computed by the program; that part I have figured out): X = s/sin(180/s)
Not sure I understand your method, or the point of it. Speed? Accuracy? Several methods for approximating square roots are described here. Curious to know why you don’t just call the sqrt function provided by whatever programming language you’re using. The guys who implemented it know far more about square root approximations than you or I do.
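For what it's worth, the usual software answer to the "initial guess" problem is the Babylonian (Newton) iteration, which converges quadratically from almost any positive starting point, so an elaborate initial guess buys very little. A minimal sketch (the function name is mine):

```python
def newton_sqrt(x, tol=1e-12):
    """Babylonian / Newton iteration for sqrt(x), x >= 0."""
    if x == 0:
        return 0.0
    guess = x if x >= 1 else 1.0          # crude start; any positive value works
    while abs(guess * guess - x) > tol * x:
        guess = 0.5 * (guess + x / guess)  # average the guess and x/guess
    return guess
```

Each step roughly doubles the number of correct digits, which is why library implementations (and hardware) use it or close relatives.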
{ "language": "en", "url": "https://math.stackexchange.com/questions/4097467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Non-existence of surjections from $L^p$ to $L^q$ Let $1 \leq p, q \leq \infty$ and suppose there exists a continuous linear surjection $$ T : L^p[0,1] \longrightarrow L^q[0,1]. $$ Does it necessarily follow that $q \leq p?$ In the case of sequence spaces $\ell_p$ the result holds if we swap $p,q$ by Pitt's theorem, which asserts that for $q<p$ any bounded linear operator $T:\ell_p \to \ell_q$ is compact. In the $L^p$ context I suspect the notions of type/cotype may be relevant, but as a non-specialist it isn't obvious to me how these can be applied. This question is mainly out of curiosity, and any references would be appreciated.
It is true that for $1<p<\infty$, $L^p[0,1]$ contains a complemented subspace isomorphic to $L^2[0,1]$. The subspace is the closed linear span of the Rademacher functions; this result is due to the Khintchine inequality. For $p = 1, q > p$, it is true that there is no bounded linear surjection from $L^p[0,1]$ onto $L^q[0,1]$. This is because $L^1[0,1]$ cannot contain infinite-dimensional reflexive subspaces (a consequence of $L^1[0,1]$ having the Dunford-Pettis property).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4097741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
An Old Number theory IMO question In my book, under Legendre's Function, the following two examples were given. When $m,n \in \mathbb{N}$, prove that: *$\ $ $m! \cdot (n!)^m$ divides $(mn)!$ *$\ $ $m! \cdot n! \cdot (m+n)! $ divides $(2m)! \cdot (2n)!$ Well, for the first one, I know it is just the number of ways to put $mn$ balls into $m$ identical baskets, each with $n$ balls. But, as this is an NT book, I tried solving this with Legendre's Function and it just did not work. I got that we need to prove $f(mn) \ge f(m) + m \cdot f(n)$ where $f(k) = [\frac{k}{p}]$ and [.] is the floor function. Now, I could not figure out what to do; I tried inducting on $n$, but after using some inequalities like $[xy] \ge [x][y]$ and $[x+y] \ge [x]+[y]$, the inequality just became false. As I couldn't even solve the first one, I could not solve the second one either, not even combinatorially. So, I am looking for a proof of both the problems using Legendre's Function (and perhaps a combinatorial proof of $2$ as well?) Thanks!
Question $\bf{1}$ In this answer, it is shown that $$ \frac{(mn)!}{(m!)^nn!}=\prod_{k=1}^n\binom{mk-1}{m-1}\tag1 $$ Question $\bf{2}$ Using the equation $x=\lfloor x\rfloor+\{x\}$, we get the equation $$ \lfloor 2x\rfloor+\lfloor 2y\rfloor-\lfloor x\rfloor-\lfloor y\rfloor-\lfloor x+y\rfloor=\{x\}+\{y\}+\{x+y\}-\{2x\}-\{2y\}\tag2 $$ There are two possibilities: if $\{x\}+\{y\}\lt1$ $$ \{x+y\}=\{x\}+\{y\}\tag{3a} $$ or if $\{x\}+\{y\}\ge1$ $$ \{x+y\}=\{x\}+\{y\}-1\tag{3b} $$ If $\{x\}\lt\frac12$ and $\{y\}\lt\frac12$, then $\text{(3a)}$ implies $$ \{x\}+\{y\}+\overbrace{\{x+y\}}^{\{x\}+\{y\}}-\overbrace{\{2x\}}^{2\{x\}}-\overbrace{\{2y\}}^{2\{y\}}=0\tag{4a} $$ If $\{x\}\ge\frac12$ and $\{y\}\ge\frac12$, then $\text{(3b)}$ implies $$ \{x\}+\{y\}+\overbrace{\{x+y\}}^{\{x\}+\{y\}-1}-\overbrace{\{2x\}}^{2\{x\}-1}-\overbrace{\{2y\}}^{2\{y\}-1}=1\tag{4b} $$ Otherwise, assume $\{x\}\ge\frac12$ and $\{y\}\lt\frac12$, then if $\{x\}+\{y\}\,{\color{#C00}{\lt}\atop\color{#090}{\ge}}\,1$ $$ \{x\}+\{y\}+\overbrace{\{x+y\}}^{\{x\}+\{y\}-{\color{#C00}{0}\atop\color{#090}{1}}}-\overbrace{\{2x\}}^{2\{x\}-1}-\overbrace{\{2y\}}^{2\{y\}}={\color{#C00}{1}\atop\color{#090}{0}}\tag{4c} $$ Thus, $(2)$ and $(4)$ ensure that $$ \lfloor 2x\rfloor+\lfloor 2y\rfloor-\lfloor x\rfloor-\lfloor y\rfloor-\lfloor x+y\rfloor\ge0\tag5 $$ Therefore, for any prime $p$, $$ \overbrace{\sum_{k=1}^\infty\left(\left\lfloor\frac{2n}{p^k}\right\rfloor+\left\lfloor\frac{2m}{p^k}\right\rfloor\right)}^\text{factors of $p$ in $(2n)!(2m)!$}\ge\overbrace{\sum_{k=1}^\infty\left(\left\lfloor\frac{n}{p^k}\right\rfloor+\left\lfloor\frac{m}{p^k}\right\rfloor+\left\lfloor\frac{n+m}{p^k}\right\rfloor\right)}^\text{factors of $p$ in $n!m!(n+m)!$}\tag6 $$ That is, $$ n!\,m!\,(n+m)!\mid(2n)!\,(2m)!\tag7 $$
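Both divisibility claims are easy to confirm numerically for small $m,n$, which is a useful sanity check before chasing the floor-function inequalities; a brute-force sketch (helper names mine):

```python
from math import factorial

def claim_1(m, n):   # m! * (n!)^m  divides  (mn)!
    return factorial(m * n) % (factorial(m) * factorial(n) ** m) == 0

def claim_2(m, n):   # m! * n! * (m+n)!  divides  (2m)! * (2n)!
    return (factorial(2 * m) * factorial(2 * n)) \
        % (factorial(m) * factorial(n) * factorial(m + n)) == 0

both_hold = all(claim_1(m, n) and claim_2(m, n)
                for m in range(1, 13) for n in range(1, 13))
```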
{ "language": "en", "url": "https://math.stackexchange.com/questions/4097861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
the integral PDF $\sqrt {{2 \over \pi }} \int_{t\, = \,0}^\infty {e^{\, - \,{1 \over 2}\left( {v^{\,2} /t^{\,2} + 2t} \right)} dt\,} $ In my answer to this post I came to the conclusion that the PDF of the volume of a parallelepiped with normal distributed coordinates, having one vertex at the origin, is $$ p_t (v) = \sqrt {{2 \over \pi }} \int_{t\, = \,0}^\infty {e^{\, - \,{1 \over 2}\left( {v^{\,2} /t^{\,2} + 2t} \right)} dt\,} $$ with the corresponding CDF $$ P_{\,t} (v) = \int_{t = 0}^\infty {\,\,t\;e^{\, - \,\,t} \,{\rm erf}\left( {{v \over {\sqrt 2 \,t}}} \right)dt\,} $$ where $0 \le v$. I wonder whether these two integrals may have an interesting expression in terms of known functions.
In terms of the Meijer G function $$p_t (v) =\frac{\sqrt{2}}{\pi }\,\, G_{0,3}^{3,0}\left(\frac{v^2}{8}| \begin{array}{c} 0,\frac{1}{2},1 \end{array} \right)$$ $$P_t (v) =\frac{2}{\pi }\,\, G_{1,4}^{3,1}\left(\frac{v^2}{8}| \begin{array}{c} 1 \\ \frac{1}{2},1,\frac{3}{2},0 \end{array} \right)$$ Edit In terms of series, for small values of $v$ $$p_t (v)=\sqrt{\frac{2}{\pi }}-v+$$ $$\frac{v^2 \left(-2 \sqrt{2} \log (v)-2 \sqrt{2} \gamma +\sqrt{2}+\sqrt{2} \log (8)+\sqrt{2} \psi ^{(0)}\left(-\frac{1}{2}\right)\right)}{4 \sqrt{\pi }}+$$ $$\frac{v^3}{6}+O\left(v^4\right)$$ $$P_t (v)=\sqrt{\frac{2}{\pi }} v-\frac{v^2}{2}+\frac{v^3 (-6 \log (v)-9 \gamma +11+\log (8))}{18 \sqrt{2 \pi }}+O\left(v^4\right)$$ For large values of $v$, using $v=t^3$ $$p_t (v)=\frac{4t}{\sqrt{3}}\,e^{-\frac{3 t^2}{2}} \left(1+\frac{5}{18 t^2}-\frac{35}{648 t^4}+\frac{665}{34992 t^6}+O\left(\frac{1}{t^8}\right)\right)$$ $$P_t (v)=1-\frac{34}{9 \sqrt{3}}e^{-\frac{3 t^2}{2}}\left(\frac{18 t^2}{17}+1-\frac{35}{612 t^2}+\frac{1925}{33048 t^4}+O\left(\frac{1}{t^6}\right)\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4098056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $V^n$ and $\mathcal{L}(\mathbf{F}^n,V)$ are isomorphic vector spaces For $n$ positive integer, define $V^n$ by $V^n=\underbrace{V\times...\times V}_{n \ times}$. Prove that $V^n$ and $\mathcal{L}(\mathbf{F}^n,V)$ are isomorphic vector spaces. I would like to know if my proof holds and to have a feedback, please. ($\mathbf{F}$ denotes a field here) Let $(v_1,...,v_n)$ be a basis of $V$. So, each element in $V$ can be expressed as $\lambda_1 v_1+...+\lambda_n v_n$ for $\lambda_1,...,\lambda_n \in \mathbf{F}$. Let $\xi:\mathbf{F}^n\to V$, $\xi(\lambda_1,...,\lambda_n)=\lambda_1 v_1+...+\lambda_n v_n$ and define $\psi: V^n\to \mathcal{L}(\mathbf{F}^n,V)$ as $\psi (\lambda_1 v_1+...+\lambda_n v_n,...,\lambda_1 v_1+...+\lambda_n v_n)=\xi(\lambda_1,...,\lambda_n)$. Clearly $\psi$ is a linear application (it is easy to check). We show now that $\psi$ is injective. $\psi(\lambda_1 v_1+...+\lambda_n v_n,...,\lambda_1 v_1+...+\lambda_n v_n)=\xi(\lambda_1,...,\lambda_n)=\lambda_1v_1+..+\lambda_nv_n=0 \iff \lambda_1=...=\lambda_n=0$ because $(v_1,...,v_n)$ is linearly independent in $V$. So, $\lambda_1 v_1+...+\lambda_n v_n,...,\lambda_1 v_1+...+\lambda_n v_n=0$ and we conclude that $\psi$ is injective. Moreover, the dimension of $V^n$ is equal to a dimension of $\mathcal{L}(\mathbf{F}^n,V)$. Thus, by fundamental theorem we conclude that $\psi$ is surjective. Therefore, $\psi$ is an isomorphism
In fact the result is true whatever the dimension of $V$ is. Consider $$\begin{array}{l|rcl} \Phi : & V^n & \longrightarrow & \mathcal L(F^n,V)\\ & (v_1,\dots,v_n) & \longmapsto & (\lambda_1, \dots, \lambda_n) \mapsto \lambda_1v_1+ \dots + \lambda_n v_n\end{array}$$ $\Phi$ is linear, injective as its kernel is the set consisting of the zero vector and surjective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4098315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Probability of 4 parents sharing 2 birthdays I'm a math idiot and I'm trying to figure something out. My mother was born on April 10th as was my friend's mother. My father was born on July 4th as was my friend's father. How do I calculate the probability of that happening? EDIT: How would one calculate the probability of 2 pairs of shared birthdays? I think this is the answer I'm looking for Halp. -Brett
Since you don't care about overlaps (you said "I'm looking for the probability of all 4 falling on those exact dates."), you take the probability your mom's birthday is April 10th, multiply that by the probability your friend's mom's birthday is April 10th, then multiply that by the probability your dad was born on July 4th, and multiply that by the probability your friend's dad was born on July 4th. This means $\frac{1}{365}*\frac{1}{365}*\frac{1}{365}*\frac{1}{365} = (\frac{1}{365})^4 = 5.63*10^{-11}$, very slim (THIS IS EXCLUDING LEAP YEARS). Multiply by 100 to get a percent: $5.63*10^{-9}$% Bear in mind, since you said you only care about birthdays falling on specific dates, the probability that both moms' birthdays are April 10th and both dads' are July 4th is THE EXACT SAME PROBABILITY as your mom's birthday being January 28th, your friend's mom's December 2nd, your dad's April 11th, and your friend's dad's October 30th. If you only care about 2 pairs of the same birthday (whatever the dates), the probability would actually be higher than for any specific assignment of birthdays (the probability above). The probability of 2 pairs of shared birthdays gets more complicated as well... for that, you can now see @CoveredInChocolate's answer. Fun fact: if there's a group of 23 people, there's over a 50% chance that 2 or more of them share a birthday.
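The arithmetic above, plus the closing fun fact, in a few lines of Python (a sketch, ignoring leap years as above):

```python
# Probability that all four birthdays land on the two given dates
# (independent people, 365-day year):
p_one = 1 / 365
p_all_four = p_one ** 4          # about 5.63e-11, i.e. 5.63e-9 percent

# The fun fact: chance that some two of 23 people share a birthday.
prob_distinct = 1.0
for i in range(23):
    prob_distinct *= (365 - i) / 365   # i-th person avoids all earlier birthdays
prob_shared = 1 - prob_distinct        # just over 1/2
```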
{ "language": "en", "url": "https://math.stackexchange.com/questions/4098469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is there an explicit equivalence of categories between real and complex commutative unital C* algebras Gelfand duality shows that the category of commutative unital C* algebras is dual to the category of compact Hausdorff spaces. This holds for the real case (see Stone spaces of Johnstone) as well as the much better known case of complex C* algebras though the proofs are not the same. Since these two categories are both dual to the same category they should be equivalent to each other. Is this equivalence of categories explicitly described anywhere?
I am not sure about a reference, but the equivalence is easy enough to describe: From complex to real: $$ A \to A_{sa} = \{a\in A: a^*=a\}. $$ From real to complex: $$ A \to {\mathbb C}\otimes A. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4098617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Trying to find out if diameter and average pairwise distance of graph has a relationship I've been trying to see if this statement is true or false for unweighted connected graphs. $$ \frac{\operatorname{diam}G}{\operatorname{apd}G}\leq c, \quad c\in\mathbb{R^+} $$ where $\operatorname{diam}G$ is the maximum distance possible between two vertices in a graph, and $\operatorname{apd}G$ is the average pairwise distance between 2 vertices, that is, $\frac{\sum{\operatorname{dist}(u,v)}}{\binom{\text{number of vertices}}{2}}$. I've tried drawing out graphs (from 3 to 6 vertices), from the minimally connected graph, a simple path, to the complete graph. The complete graph always yields $\frac{1}{2}$ while the simple path has the largest value, which keeps increasing, so I tried to find the simple-path value for increasing vertex counts. I denote the ratio for the simple path by $R_v$, where $v$ is the number of vertices in the graph, and I get:$$ R_2=\frac{1}{2},R_3=\frac{3}{4}, R_4=\frac{9}{10},R_5=1,R_6=\frac{15}{14}, R_7=\frac{9}{8},R_8=\frac{7}{6},R_9=\frac{6}{5} $$ It seems to me that it is an increasing sequence, but I cannot seem to see any sort of pattern in it. It looks like the terms are a subset of the sequence $\{\frac{n+1}{n}\}$ and that as $v\to\infty$, $n\to0$, so the terms look like they go to some arbitrarily large value as v gets bigger and bigger, so the statement might be false. But of course I can't prove it; they might decrease their rate of increase to the point where there is an upper bound. I have also searched online for what other people might have said on this, and some sources (other university solutions) said that the statement is false. They said to draw a star-shaped wand, but I have no idea what that is. Ideally, there'd be a graph where $R_v$ is a sequence that can be characterised neatly by a recurrence relation or general form; then I can just take limits.
Could someone point me there or at least explain what this 'wand' is supposed to be? Thank you very much for your help. PS: I was also thinking of treating it as larger than a divergent sequence or series and then comparison test it.
Consider the graph $G_{m,n}$ with vertices $x_1,\dots,x_n$ and $y_1,\dots,y_{m}$ such that the edges are the ones of the form $\{x_i,x_j\}$, $\{x_i,y_1\}$, and $\{y_i,y_{i+1}\}$. We can calculate the average distance precisely, but I'd rather not. Notice that when $m$ is fixed and $n$ goes to infinity, the proportion of pairs of vertices of the form $\{x_i,x_j\}$ goes to $1$, and so the average distance goes to $1$. On the other hand the diameter is $m$, so when $n$ goes to infinity we have $\lim\limits_{n\to \infty} \frac{\operatorname{diam}(G_{m,n})}{\operatorname{apd}(G_{m,n})} = m$, so I believe there is no $c$ that fits the bill.
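This clique-plus-path graph is presumably the "wand" the question mentions. A brute-force check (a sketch with my own names, indices shifted to start at 0): fixing $m=5$ and growing $n$, the ratio climbs toward $m$.

```python
from collections import deque
from itertools import combinations

def wand_graph(m, n):
    """Clique on x_0..x_{n-1}, path y_0..y_{m-1}, with y_0 joined to every x_i."""
    adj = {('x', i): set() for i in range(n)}
    adj.update({('y', j): set() for j in range(m)})
    for u, v in combinations([('x', i) for i in range(n)], 2):
        adj[u].add(v); adj[v].add(u)
    for i in range(n):
        adj[('x', i)].add(('y', 0)); adj[('y', 0)].add(('x', i))
    for j in range(m - 1):
        adj[('y', j)].add(('y', j + 1)); adj[('y', j + 1)].add(('y', j))
    return adj

def bfs_distances(adj, src):
    dist = {src: 0}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def diam_over_apd(adj):
    verts = list(adj)
    total = diam = pairs = 0
    for i, u in enumerate(verts):
        dist = bfs_distances(adj, u)
        for v in verts[i + 1:]:          # unordered pairs, as in the apd definition
            total += dist[v]
            diam = max(diam, dist[v])
            pairs += 1
    return diam / (total / pairs)

ratios = [diam_over_apd(wand_graph(5, n)) for n in (20, 60, 200)]
```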
{ "language": "en", "url": "https://math.stackexchange.com/questions/4098787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If all normal lines are parallel, then the surface is a plane. I have to prove the following statement: If $S$ is a connected surface such that every normal line is parallel to a unit vector $a,$ then show that $S$ is a subset of a plane. Now, for a point $p$ on $S,$ the plane $$T_p:=\{x\in\mathbb{R}^3:\langle x-p,a\rangle=0\}$$ is the tangent plane at $p.$ It will be sufficient to show that every point $z\in S$ is in $T_p.$ I completely understand this geometrically, but have not been able to put it into words at all. I know that $S$ being connected is crucial... A hint here would really go a long way. Thank you so much!
Consider the function $f:S\to\mathbb R$ given by $f(p)=p\cdot a$. Now for any point $p$ and any parametrised curve $C:[-1,1]\to S$ with $C(0)=p$ and $C'(0)=v\in T_p(S)$, the composition $\alpha=f\circ C$ satisfies $$\alpha'(0)=a\cdot C'(0)=a\cdot v=0$$ since the normal, a scaling of $a$, is perpendicular to $T_p(S)$. Thus $f$ is constant along $C$. Since $C$ was arbitrary, $f$ is locally constant, and since $S$ is connected, $f$ is constant over $S$; hence $S$ lies in a plane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4098945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can a catenary be a parabola? In one of my text books (Thomas' Calculus, I think), one of the exercises asked us to find the arc length of a hanging wire that was weighted such that it traced a parabola, or something to that effect. Point is, it wanted us to find the arc length of a parabola, and tried to justify it with a real world example. Now, I know that, in general, a hanging wire forms a catenary, not a parabola, so my question is: is it possible for a catenary to be a parabola, at least within a given finite width?
No, it is not possible. For instance, take a point $P$ of a parabola. Consider the straight line $l_P$ passing through $P$ parallel to the axis of symmetry of the parabola. Let $l_P'$ be the reflection of $l_P$ in the normal to the parabola at $P$. Then all these straight lines $l_P'$ have a point in common (which is the focus of the parabola). The catenary doesn't have this property.
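One can also see numerically that a parabola matching the catenary $y=\cosh x$ at three points already misses a fourth (a small sketch; by symmetry the interpolating quadratic through $(\pm1,\cosh 1)$ and $(0,1)$ has no linear term):

```python
from math import cosh

# Parabola p(x) = a*x^2 + c through (-1, cosh 1), (0, cosh 0), (1, cosh 1).
a = cosh(1) - 1.0
c = 1.0

gap = cosh(2) - (a * 2**2 + c)   # how far the parabola misses at x = 2
```

The gap is about 0.59, so no finite-width arc of the catenary coincides with a parabola; any parabola can at best approximate it (which is why the parabola is a decent model for shallow hanging cables).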
{ "language": "en", "url": "https://math.stackexchange.com/questions/4099148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Suppose the equation $ax+by=c$ has $m$ positive solutions. How many positive solutions does the equation $ax+by=c+ab$ have? Suppose that $a,b,c$ are positive integers. Suppose the equation $ax+by=c$ has $m$ positive solutions. How many positive solutions does the equation $ax+by=c+ab$ have? I know $ax+by=ab$ does not have any positive integer solutions but would that necessarily imply that $ax+by=c+ab$ still has m solutions? I have this confusion since if we add the two equations we get $ax+by=\frac{c+ab}{2}$.
We write $a = du$ and $b = dv$ with $\gcd(u, v) = 1$. It follows that $d \mid c$ and we may write $c = dw$. Thus the two equations become $ux + vy = w$ and $ux + vy = w + duv$, respectively. Let us prove the following claim. Assume that $u, v$ are coprime. Then the number of positive solutions of $ux + vy = w + uv$ is exactly $1$ more than that of $ux + vy = w$. Proof: Every positive solution $(x_0, y_0)$ of $ux + vy = w$ uniquely gives a positive solution of $ux + vy = w + uv$, namely $(x_0, y_0) \mapsto (x_0 + v, y_0)$. This gives a bijection between positive solutions of $ux + vy = w$ and positive solutions of $ux + vy = w + uv$ such that $x > v$. Thus it suffices to consider what are the "extra" solutions of $ux + vy = w + uv$, namely those satisfying $x \leq v$. I claim that there is exactly one. Firstly, taking the equation mod $v$, we see that $ux \equiv w \mod v$, which (together with $x \leq v$) shows that there is at most one such solution. Secondly, by taking $x_0$ to be the minimum positive integer such that $ux \equiv w\mod v$, we have $w + u(v - x_0) > 0$ and also $w + uv - ux_0 \equiv 0 \mod v$, hence $(x_0, \frac{w + uv - ux_0}v)$ is a positive integer solution. By induction, the number of positive solutions of $ux + vy = w + duv$ is exactly $d$ more than that of $ux + vy = w$. Therefore the answer is $m + \gcd(a, b)$.
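The count $m+\gcd(a,b)$ is easy to confirm by brute force for small parameters (a sketch, function name mine; note that when $\gcd(a,b)\nmid c$ neither equation has any solutions, so the check restricts to $\gcd(a,b)\mid c$, consistent with the $d\mid c$ step above):

```python
from math import gcd

def n_positive(a, b, c):
    """Number of positive integer solutions (x, y) of a*x + b*y = c."""
    return sum(1 for x in range(1, c // a + 1)
               if c - a * x > 0 and (c - a * x) % b == 0)

ok = all(n_positive(a, b, c + a * b) == n_positive(a, b, c) + gcd(a, b)
         for a in range(1, 9) for b in range(1, 9)
         for c in range(1, 41) if c % gcd(a, b) == 0)
```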
{ "language": "en", "url": "https://math.stackexchange.com/questions/4099328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving $x = \frac13\sec(a)$ for $a$ (after integrating via trig substitution) For the integral $\dfrac{1}{x\sqrt{9x^2-1}}$, I decided to use $x = \frac13\sec(a)$. The expression simplifies to the integral of $1$, simply becoming $a$. The issue is making $a$ the subject of $x = \frac13\sec(a)$. By drawing a triangle of hypotenuse $3x$, base length $1$ and height of $(9x^2 -1)^{1/2}$, where $a$ is the angle between $3x$ and $1$. I end up with three expressions for $a$: $$a = \arctan\left(\sqrt{9x^2-1}\right) \quad\text{or}\quad \arcsin\left(\dfrac{\sqrt{9x^2-1}}{3x}\right)\quad\text{or}\quad \arccos\left(\dfrac{1}{3x}\right)$$ After graphing all three on Desmos, only the first answer was right, however I don’t know why, and is it always the arctan that’s right? What assumption is being made for the sine and cosine variation that makes the integral incorrect? Thank you.
First of all you should include $C$ as a constant since it is an idefinite integral. Now you know that the domain of $\arcsin$ and $\arcsin$ differ from that of $\arctan$ which is defined for all values of $x$. I think the problem might be the way you are plotting the solutions. You have to specify which range of $x$ values you are using.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4099506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Maximize the value of $(a,b) + (b,c) + (a,c)$ given is $a + b + c = 5n$; $a,b,c,n$ are positive integers $(a,b)$ stands for the gcd of $a$ and $b$ Maximize the value of $(a,b) + (b,c) + (a,c)$ So I found that since $a \geq(a,x)$, we have $5n \geq (a,b) + (b,c) + (a,c)$ And for $n \equiv 0 \pmod 3$, we can set $a=b=c$ and find the maximum. I've been trying to do the cases $n \equiv 1$ and $2 \pmod 3$ separately, but I haven't made any progress with those cases.
We are going to explore how large the fraction $\frac{\gcd(a,b) + \gcd(b,c) + \gcd(c,a)}{a+b+c}$ can be, where $a,b,c$ are any positive integers that are not all equal. (We forget about the sum being $5n$.) Case $1: a=b$. Notice $\gcd(a,b) + \gcd(b,c) + \gcd(c,a) = a + \gcd(a,c) + \gcd(c,a)$. If $a$ is a multiple of $c$, say $a=kc$ with $k\ge2$, then we get $kc + 2c = (k+2)c$, which when compared to $2a+c = (2k+1)c$ gives a ratio of at most $\frac{4}{5}$ (attained at $k=2$). If $c$ is a multiple of $a$, say $c=ka$ with $k\ge2$, then we get $a + 2a = 3a$, which when compared to $2a+c = a(2+k)$ gives a ratio of at most $\frac{3}{4}$ (attained at $k=2$). If neither number is a multiple of the other, then $a + \gcd(a,c) + \gcd(c,a) \leq a + a/2 + c/2 = \frac{3a}{2} + \frac c2$, which when compared to $2a+c$ gives a ratio of at most $\frac{3}{4}$. If no two numbers are equal, without loss of generality $a<b<c$, and now notice: $\gcd(a,b) + \gcd(b,c) + \gcd(c,a) \leq b/2 + c/2 + a < \frac{2}{3}(a+b+c)$ So the best ratio possible is $\frac{4}{5}$, with equality if and only if $a=b=2c$; in other words, if the solution is of the form $(n,2n,2n)$
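A brute-force scan over small triples (excluding $a=b=c$, which trivially gives ratio $1$) agrees with the $\frac45$ bound and the $(n,2n,2n)$ equality case; a sketch with my own names:

```python
from math import gcd

def ratio(a, b, c):
    return (gcd(a, b) + gcd(b, c) + gcd(c, a)) / (a + b + c)

# All ordered triples a <= b <= c up to 20, not all equal.
triples = [(a, b, c) for a in range(1, 21)
           for b in range(a, 21) for c in range(b, 21)
           if not a == b == c]
best = max(triples, key=lambda t: ratio(*t))
best_ratio = ratio(*best)
```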
{ "language": "en", "url": "https://math.stackexchange.com/questions/4099642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
differentiation of a volume Given a function $f(x)$. Then we define $D=\{x\mid f(x)>y\}\subset\mathbb{R}^n$ and $\Gamma=\{x\mid f(x)=y\}$. Now we define $S(y)=\int_D dx$. My question is what is the meaning of $S(y)$? Is it "size" (volume) of the domain $D$? How can I obtain $\frac{dS}{dy}$ and what is its meaning?
No, $S(y)$ is the size of the projection of $D$ onto the $x$ axis. The meaning of the derivative is the speed of increase of $f$ when $f(x) = y.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4099805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find the Fisher Information of $X\sim{\rm Poisson}(\mu)$ Suppose $X \sim Poisson(\mu)$, so for $i \in \{0,1,2,\ldots\}$, $P(X = i) = \exp(-\mu) \mu^i/i!.$ Find the Fisher information of $X$. This is what I have done so far: $S_X(i) = \frac{\partial f_{X|\mu}(i)/\partial \mu}{f_{X|\mu}(i)} = -1 + \frac{i}{\mu}$ How do I find the probabilities to get the Fisher score?
$f(x|\mu) = \exp(-\mu) \mu^x/x!$ $\implies l(x|\mu) = \ln{f(x|\mu)} = -\mu + x\ln{\mu} - \ln(x!)$ $l^{\prime}(x|\mu) = -1 + \frac{x}{\mu}$ $l^{\prime\prime}(x|\mu) = - \frac{x}{\mu^2}$ Fisher information $I(\mu) = -E[l^{\prime\prime}(X|\mu)] = \frac{E[X]}{\mu^2} = \frac{1}{\mu} $
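A direct numerical check of $I(\mu)=1/\mu$, summing $\frac{x}{\mu^2}P(X=x)$ over the pmf (a sketch; the pmf is computed recursively to avoid huge factorials):

```python
from math import exp

def fisher_info_poisson(mu, cutoff=200):
    """I(mu) = -E[l''(X|mu)] = E[X]/mu^2, summed directly over the pmf."""
    pmf = exp(-mu)            # P(X = 0); its term (0/mu^2)*pmf vanishes anyway
    total = 0.0
    for x in range(1, cutoff):
        pmf *= mu / x         # P(X = x) from P(X = x - 1)
        total += (x / mu ** 2) * pmf
    return total
```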
{ "language": "en", "url": "https://math.stackexchange.com/questions/4099940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About argument of forgetful functor Let $(\{0,1\},0) \in \text{Set}_*$ be the set $\{0,1\}$, equipped with the base point $0 \in \{0,1\}$. Prove that $(\{0,1\},0)$ represents the forgetful functor $U:\text{Set}_* \rightarrow \text{Set}$. "Forgetful functor" is used for any functor that forgets structure and whose codomain is the category of sets. For example, $U: \text{Group} \rightarrow \text{Set}$ sends a group to its underlying set and a group homomorphism to its underlying function. The only argument I can think of is that the codomain does not contain a base point, so $(\{0,1\},0)$ represents the forgetful functor. How to improve on this? Thanks.
Given any pointed set $(S, *)$, what are the morphisms $(\{0, 1\}, 0)\to(S, *)$? They are precisely the functions $f\colon\{0, 1\}\to S$ with $f(0)=*$ and with $f(1)\in S$ arbitrary. We may define a bijection $\varphi\colon\text{Hom}_*(\{0, 1\}, S) \to S$ (where the subscript signals that these are pointed functions) the following way: given any pointed function $f$, let $\varphi(f)=f(1)$. The inverse is given by $\theta(s)=f_s\colon \{0, 1\}\to S$, where $f_s(0)=*, f_s(1)=s$. I'll let you check the naturality as an exercise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4100075", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\lim_{x\to a} f(x)=\lim_{h\to 0}f(a+h)$ versus $\lim_{x\to a} f(x)=\lim_{x\to 0}f(a+x)$ Just had a quick question about the variable naming observed in the following statement: $\displaystyle\lim_{x\to a} f(x)=\displaystyle\lim_{h\to 0}f(a+h)$ Is the above statement equivalent to: $\displaystyle\lim_{x\to a} f(x)=\displaystyle\lim_{x\to 0}f(a+x) \quad ?$ If so, is there any particular reason to prefer the former over the latter? Cheers~
The first form is safer, as it uses distinct variables for the absolute and relative positions, hence avoids confusions. (A second argument is that "by tradition", $h$ is immediately recognized as a relative displacement.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4100266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Condition for existence of an orthonormal matrix whose column space is orthogonal to the column space of another matrix While I was reading a statistics paper, I came across one statement that I don't understand (I just have basic linear algebra knowledge). Assume (in the context of regressions) we have an $n\times p$ data matrix $X$, where $X$ has full column rank and $n>p$. The paper states "$U \in \mathbb{R}^{n \times p}$ is an orthonormal matrix whose column space is orthogonal to that of $X$, s.t. $U^TX=0$": such a matrix exists if $n\geq 2p$. I don't understand where the last statement comes from. I know that the null space of $X^T$ has dimension $n-\operatorname{rank}(X)=n-p$ in the full-rank case, and that the columns of $U$ are orthonormal vectors from this null space. But I don't get why $U$ only exists if $n\geq p +\operatorname{rank}(X)$, i.e. $n\geq 2p$.
Not sure, but this could be a reason. Since $X$ is a non-square matrix, we can assume that there are $n-p$ rows that can be described as a linear combination of the other rows. Suppose we decompose $X$ into the following: $$X = \begin{bmatrix}X_1 \\ AX_1\end{bmatrix}$$ Where $A \in (n-p)\times p$ represents these linear row transformations and $X_1$ is a full rank $p\times p$ matrix. The orthonormal matrix can then be decomposed to $$U^T = \begin{bmatrix}U_1 & U_2\end{bmatrix}$$ Such that $$U_1X_1 + U_2AX_1 = 0_{p\times p}$$ Next assume that $U_1$ is the orthonormal matrix of $X_1$ $$U_1X_1 = 0_{p\times p}$$ $$U_2 = U_1(A^TA)^{-1}A^T$$ Now, as we know, if a matrix has more columns than rows, the product $A^TA$ is singular (and the inverse doesn't exist). This means that in order to ensure the inverse exists, the number of rows in $A$ must be larger than or equal to $p$. Alternatively, suppose $U_1X_1 \neq 0$: $$-U_1X_1 = U_2AX_1$$ $$-U_1 = U_2A$$ $$-U_1A^T(AA^T)^{-1} = U_2$$ if $n<2p$, the inverse of $AA^T$ does exist; however, this does not yield a solution unless $A$ is square: $$U_1X_1 -U_1A^T(AA^T)^{-1}AX_1 \neq 0_{p\times p}$$
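A quicker way to see the dimension count numerically: the columns of $U$ must lie in the null space of $X^T$, which has dimension $n-p$ when $X$ has full column rank, so $p$ orthonormal columns fit exactly when $n-p\geq p$, i.e. $n\geq 2p$. A small sketch with NumPy/SciPy (the matrix $X$ here is just a random full-column-rank example, not from the paper):

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
n, p = 7, 3                      # n >= 2p, so a suitable U exists
X = rng.standard_normal((n, p))  # full column rank with probability 1

N = null_space(X.T)              # orthonormal basis of {u : X^T u = 0}
print(N.shape)                   # (7, 4): the null space has dimension n - p
U = N[:, :p]                     # keep p of the n - p basis columns

print(np.allclose(U.T @ X, 0))        # True: column spaces are orthogonal
print(np.allclose(U.T @ U, np.eye(p)))  # True: U has orthonormal columns
```

With $n<2p$ the null space returned by `null_space` has fewer than $p$ columns, so no such $U$ can be assembled.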
{ "language": "en", "url": "https://math.stackexchange.com/questions/4100379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Range and Graph of a function Let $$f(x)=\left(1+\frac{1}{x}\right)^x$$ State the range and plot its graph wherever the function is defined. I got the range and $f'(x)$, but I cannot find the sign of it to predict the increasing/decreasing of the graph. Please help.
Hint $$f(x)=\left(1+\frac{1}{x}\right)^x\implies \log(f(x))=x \log\left(1+\frac{1}{x}\right)$$ $$\frac{f'(x)}{f(x)}=\log \left(1+\frac{1}{x}\right)-\frac{1}{x+1}$$ Now, play with some inequalities.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4100606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Showing without calculating the integral that: $\pi/2 \leq \int_{0}^{\pi} \frac{x\sin(x)}{1+\cos(x)^2} \leq \pi$ I want to prove without calculating the integral that: $\pi/2 \leq \int_{0}^{\pi} \frac{x\sin(x)}{1+\cos(x)^2} \leq \pi$. I have some problem, since if my function is: $f:[0,\pi]\to R$ defined by $f(x)=\frac{x\sin(x)}{1+\cos(x)^2}$ then $f(0)=f(\pi)=0$ and since f is non-negative in this intervel then the given integral is $\geq 0$ so is it legal to use the integral monotone theorem? I mean can i say that for every $x\in [0,\pi]$, $\frac{x\sin(x)}{1+\cos(x)^2} \leq x\sin(x)$ and $\frac{x\sin(x)}{1+\cos(x)^2}\geq x\sin(x)/2$ ?
For $x\in [0,\pi]$, $\frac{x\sin(x)}{1+\cos(x)^2} \leq x\sin(x)$ and $\frac{x\sin(x)}{1+\cos(x)^2}\geq x\sin(x)/2$. The first inequality holds because the denominator is $\geq 1$ for all $x$ in the domain and the numerator is non-negative, and the second holds because the denominator is at most $2$ for all $x$ in the domain and the numerator is non-negative. Since these pointwise inequalities hold on all of $[0,\pi]$, they also hold for the integrals from $0$ to $\pi$. And since the integral of $x\sin(x)$ from $0$ to $\pi$ is $\pi$, both inequalities in the question follow.
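For what it's worth, both the pointwise bounds and the integral bounds are easy to sanity-check numerically; a quick sketch with SciPy:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: x*np.sin(x)/(1 + np.cos(x)**2)
val, _ = quad(f, 0, np.pi)
print(val)                       # about 2.467, safely between pi/2 and pi
assert np.pi/2 <= val <= np.pi

# the pointwise bounds x sin(x)/2 <= f(x) <= x sin(x) on a grid
xs = np.linspace(0, np.pi, 1001)
assert np.all(f(xs) <= xs*np.sin(xs) + 1e-12)
assert np.all(f(xs) >= xs*np.sin(xs)/2 - 1e-12)
```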
{ "language": "en", "url": "https://math.stackexchange.com/questions/4100728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Complex $ \sqrt[n]{\cdot} $ Define $ b^{z}:=e^{\log\left(b\right)\cdot z} $, as a function of 2 complex variables $ (b,z) \in D(1,1)\times \mathbb{C} $ Where $\log $ here is defined only on the disk $ D(1,1) $ and given by $$ \log\left(z\right)=\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}\left(z-1\right)^{k} $$ Now, given a positive integer $ n $, I want to prove that there exists a unique function $ f=\sqrt[n]{\cdot}:D\left(1,1\right)\to D\left(1,1\right) $ such that $$ \left(f\left(z\right)\right)^{n}=z,\thinspace\thinspace\thinspace and\thinspace\thinspace\thinspace\thinspace\thinspace f\left(1\right)=1 $$ Here's what I have tried: Define $$ f\left(z\right)=e^{\frac{1}{n}\log\left(z\right)} $$ This will be our function. And indeed $ f\left(1\right)=e^{\frac{1}{n}\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}\left(1-1\right)^{k}}=e^0=1 $ Next, I want to prove that indeed $(f(z))^n=z$. But I'm not sure why it would even be true. These are the steps I'm not sure about: $ \left(f\left(z\right)\right)^{n}=\left(e^{\frac{1}{n}\log\left(z\right)}\right)^{n}\underset{?}{=}e^{\frac{n}{n}\log\left(z\right)}\underset{?}{=}z $ Is this power rule true in the complex case? Also, I know in general $ e^{\log\left(z\right)}\neq z $, but in this particular case where we defined $ \log $ as the power series over the disk $(1,1) $ , is it true? If indeed those steps are true, how can I show that this function is unique? Thanks in advance, any help would be appreciated.
The function $$ \log(z)=\sum_{k=1}^{\infty}\frac{\left(-1\right)^{k+1}}{k}\left(z-1\right)^{k} $$ is holomorphic in $D(1, 1)$ as the value of a power series with convergence radius $1$. It satisfies $\log(1) = 0$ and $\log(z)' = 1/z$. It follows that $e^{\log(z)} = z$ in $D(1, 1)$, as demonstrated in Primitive function of $ 1/z $, complex analysis. The complex exponential function satisfies $e^{z+w} = e^z e^w$ for all complex numbers $z, w$, and consequently $e^{nz} = (e^z)^n$ for $z \in \Bbb C$ and positive integers $n$. So your calculation is correct: $f(z)=e^{\frac{1}{n}\log(z)}$ is holomorphic in $D(1,1)$, it satisfies $f(1) = 1$ and $$ f(z)^n = \left( e^{\frac{1}{n}\log(z)}\right)^n = e^{\log(z)} = z \, . $$ Uniqueness: $f$ is the only holomorphic function in $D(1, 1)$ satisfying $f(z)^n = z$ and $f(1) = 1$: Assume that $g$ is another such function. Then $h=f/g$ is holomorphic in $D(1, 1)$ and satisfies $$ \tag{*} 0 = h(z)^n -1 = (h(z)-1)(h(z)-\omega)\cdots(h(z)-\omega^{n-1}) $$ with the primitive $n$-th root of unity $\omega = e^{2\pi i/n}$. It follows that the zeros of at least one of the factors on the right-hand side of $(*)$ must have an accumulation point in $D(1,1)$, and then the identity theorem for holomorphic functions implies that that factor is identically zero. So we have shown that $f(z)/g(z) = \omega^k$ for some $k$, and since $f(1)=g(1)=1$ it follows that $f(z) = g(z)$. Alternatively one can show that $f(z)^n = z$ implies $$ f'(z) = \frac{f(z)}{nz} $$ and use this to prove that if $g$ is another such function then the derivative of $h=f/g$ is zero. Remark: One can also show that $f$ is the only continuous function in $D(1, 1)$ satisfying $f(z)^n = z$ and $f(1) = 1$.
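A quick numerical illustration of the two identities (on $D(1,1)$ the power series above agrees with the principal logarithm, so `cmath.log` can stand in for it in this sketch; the sample points are my own choices inside the disk):

```python
import cmath

def nth_root(z, n):
    # f(z) = exp(log(z)/n); on D(1,1) cmath's principal log equals the series
    return cmath.exp(cmath.log(z) / n)

n = 5
for z in [0.5 + 0.4j, 1.2 - 0.7j, 1.9 + 0.1j]:
    assert abs(z - 1) < 1                        # z lies in D(1, 1)
    assert abs(nth_root(z, n)**n - z) < 1e-12    # f(z)^n = z

assert nth_root(1, n) == 1                       # f(1) = 1
print("f(z)^n = z holds at all sample points")
```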
{ "language": "en", "url": "https://math.stackexchange.com/questions/4100884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_{0}^{\infty} \frac{\tan^{-1}x^2}{x^2(x^4-1)}-\frac{\pi}{4(x^4-1)}\>dx$ How to Integrate $$ I = \int_{0}^{\infty} \frac{\tan^{-1}x^2}{x^2(x^4-1)}-\frac{\pi}{4(x^4-1)} \>dx\approx -0.295512 $$ Mathematica returns a result that does not match numerically with the integral approximation : $$ \frac{i C}{4} -\frac{\pi}{4}\sqrt{4-3i} +\frac{3 \pi^2}{32} +\pi\left(\frac{1}{4}-\frac{i}{8}\right)\coth^{-1}(\sqrt{2}) \approx -0.048576 + 0.43823\,i $$ Where C denotes Catalan's Constant Motivation I was able to find a closed form for $$ \int_{0}^{1} \frac{\tan^{-1}(x^2)}{x^2(x^4-1)}-\frac{\pi}{4(x^4-1)}dx $$ using double infinite sums. Upon plotting the function within the integral i saw that it could be integrated from $0$ to ${\infty}$ . Attempts Number 1 I tried to Split the integral as $$ \int_{0}^{1} \frac{\tan^{-1}(x^2)}{x^2(x^4-1)}-\frac{\pi}{4(x^4-1)}dx + \int_{1}^{\infty} \frac{\tan^{-1}(x^2)}{x^2(x^4-1)}-\frac{\pi}{4(x^4-1)}dx $$ and use the Taylor Series for $\tan^{-1}(x^2) $ when $|x| >1$ Number 2 I tried using partial fractions as $$ \frac{1}{x^4-1} = \frac{1}{4(x-1)} - \frac{1}{4(x+1)} -\frac{1}{2(x^2+1)} $$ $$ \frac{1}{x^2(x^4-1)} = \frac{1}{2(x^2+1)}-\frac{1}{x^2}-\frac{1}{4(x+1)} + \frac{1}{4(x-1)} $$ from which I obtained $$\int_{0}^{\infty} \frac{\tan^{-1}(x^2)}{2(x^2+1)}dx = \frac{\pi^2}{16} $$ $$ \int_{0}^{\infty} \frac{\pi}{8(x^2+1)} dx= \frac{\pi^2}{8} $$ but was unable to proceed further. Number 3 A number of basic integration techniques such as U-Sub and Integration by parts. Number 4 Using the same technique i used to evaluate the same integral but from $0$ to $1$ I will continue to try , but for now I find myself to be stuck. Q - Is there a closed form for I? If the solution is easy and i am missing something , could you provide hints instead? Thank you for your help and time.
Note $$\frac1{x^4-1} =\frac1{2(x^2-1)} - \frac1{2(x^2+1)}\\ \frac1{x^2(x^4-1)} =\frac1{2(x^2-1)} + \frac1{2(x^2+1)}-\frac1{x^2} $$ and rewrite the integral as \begin{align} I &= \int_{0}^{\infty} \left( \frac{\tan^{-1}x^2}{x^2(x^4-1)}-\frac{\pi}{4(x^4-1)} \right)dx\\ &=\frac{\pi^2}{16}+ \frac12\int_0^{\infty} \frac{\tan^{-1}x^2}{1+x^2}dx - \int_0^{\infty} \frac{\tan^{-1}x^2}{x^2}dx + \frac12\int_0^{\infty} \frac{\tan^{-1}x^2-\frac\pi4}{x^2-1}dx\tag1 \end{align} where $\int_0^{\infty} \frac{\tan^{-1}x^2}{1+x^2}dx=\frac{\pi^2}8$, $ \int_0^{\infty} \frac{\tan^{-1}x^2}{x^2}dx=\frac{\pi}{\sqrt2}$ and \begin{align} &\frac12\int_0^{\infty} \frac{\tan^{-1}x^2-\frac\pi4}{x^2-1}dx =\int_0^{1} \frac{\tan^{-1}x^2-\frac\pi4}{x^2-1}dx\\ \overset{IBP}=&-\int_0^1 \frac x{1+x^4} \ln \frac{1-x}{1+x}dx \overset{x\to \frac{1-x}{1+x}}=-\int_0^\infty\frac {\ln x}{1+6x^2+ x^4}dx\\ =&-\frac1{4\sqrt2}\left( \int_0^\infty\frac{\ln x}{x^2+ (\sqrt2-1)^2}dx -\int_0^\infty\frac{\ln x}{x^2+ (\sqrt2+1)^2}dx \right)\\ =&-\frac{\pi}{8\sqrt2}\left(\frac{\ln(\sqrt2-1)}{\sqrt2-1} - \frac{\ln(\sqrt2+1)}{\sqrt2+1} \right) \end{align} Substituting the above results into (1) gives $$I= \frac{\pi^2}8-\frac\pi{\sqrt2}\left(1+\frac{\ln(\sqrt2-1)}{8(\sqrt2-1)} - \frac{\ln(\sqrt2+1)}{8(\sqrt2+1)} \right) $$
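A numerical cross-check of the closed form, sketched with mpmath (the integrand has a removable singularity at $x=1$, so that point is listed explicitly in the interval):

```python
from mpmath import mp, atan, pi, sqrt, log, quad, inf

mp.dps = 30
f = lambda x: atan(x**2)/(x**2*(x**4 - 1)) - pi/(4*(x**4 - 1))

numeric = quad(f, [0, 1, inf])     # split at the removable singularity x = 1
closed = pi**2/8 - pi/sqrt(2)*(1 + log(sqrt(2)-1)/(8*(sqrt(2)-1))
                                 - log(sqrt(2)+1)/(8*(sqrt(2)+1)))
print(numeric)                     # about -0.2955, matching the question
print(abs(numeric - closed))       # agreement to high precision
```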
{ "language": "en", "url": "https://math.stackexchange.com/questions/4101047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Is there a difference between the three best-reply (best-response) functions in game theory? I recently became aware that in game theory there are not one but three definitions of the best response function, which are unfortunately referred to interchangeably depending on the context. People will say "the strategy is the best response" without specifying which best response they are talking about. To showcase, let $u_i$ be the pure or mixed payoff function, $s_i \in S_i$ be the pure strategy, $x_i \in \Delta(S_i)$ the mixed strategy of player $i$. Then the best response/reply function for player $i$ is either (depending on the context)

1. the set of $s_i$ such that $u_i(s_i, s_{-i}) \geq u_i(s_i^\prime, s_{-i})$ for all $s_i^\prime \in S_i$
2. the set of $s_i$ such that $u_i(s_i, x_{-i}) \geq u_i(s_i^\prime, x_{-i})$ for all $s_i^\prime \in S_i$
3. the set of $x_i$ such that $u_i(x_i, x_{-i}) \geq u_i(x_i^\prime, x_{-i})$ for all $x_i^\prime \in \Delta(S_i)$

I think 1 and 3 are usually distinguished as the pure and mixed best response function. I am uncertain about $2$. Are these concepts all distinct and give different sets, or do they in fact intersection, are subsets of one or the other? For example, is $2$ a distinct concept or equivalent to $3$? And when people refer to best response in game theory, which one are they referring to?
The set of best responses to a certain action is the set of all strategies that maximize payoff against it. In general, it includes all actions, pure and mixed, as in 3. For example, this is used when proving the existence of a Nash equilibrium (and then the best-response correspondence has a fixed point). It is easy to verify that if a player has a mixed best reply, he also has a pure one so in many contexts when you just consider a certain best reply or the best reply payoff, you can assume the player uses a pure strategy, such as in 2. Finally, in 1 you consider the best reply to a certain pure action of the others, while in 2 and 3 to a mixed. I don't see it as a difference of definition but rather of the argument of the best-reply correspondence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4101270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating the Integral $\int_{-\infty}^\infty e^{-x^2}\cos\big(2x^2\big)\,\mathrm dx$ How to compute the integral $$\int_{-\infty}^\infty e^{-x^2}\cos\big(2x^2\big) dx$$ I am wondering if there's a nice closed form of this elegant integral. I have tried to compute this integral using some substitutions, Laplace and Mellin transforms, however it doesn't seem to get or transform to something more simplified. Any approach (including complex analysis) is most welcomed. Thanks. EDIT $\textbf{1}$: @heropup has provided a beautiful answer, however I would be much more happy if there's another nice way to prove the same. Thanks.
The first thing that comes to my mind is to do something like this: $$e^{-x^2} \cos (2x^2) = e^{-x^2} \frac{e^{2x^2 i} + e^{-2x^2 i}}{2} = \frac{1}{2} \left( e^{-(1-2i)x^2} + e^{-(1+2i)x^2} \right),$$ then use the fact that $$\int_{x=-\infty}^\infty e^{-x^2} \, dx = \sqrt{\pi}$$ to evaluate $$\int_{x=0}^\infty e^{-zx^2} \, dx = \frac{\sqrt{\pi}}{2 \sqrt{z}}, \quad \Re(z) > 0.$$ Then we obtain $$\int_{x=-\infty}^\infty e^{-x^2} \cos (2x^2) \, dx = \frac{\sqrt{\pi}}{2} \left((1-2i)^{-1/2} + (1+2i)^{-1/2}\right) = \sqrt{\frac{1 + \sqrt{5}}{10} \pi}.$$ In case that last step is unclear, one simply writes $$\frac{1}{\sqrt{1 - 2i}} + \frac{1}{\sqrt{1 + 2i}} = \frac{\sqrt{1 + 2i} + \sqrt{1 - 2i}}{\sqrt{5}}.$$ Then because $$1 \pm 2i = \sqrt{5} \left(\frac{1}{\sqrt{5}} \pm \frac{2}{\sqrt{5}} i\right) = \sqrt{5} e^{\pm i \theta}$$ where $\theta = \tan^{-1} 2$, we have $$\sqrt{1 + 2i} + \sqrt{1 - 2i} = 5^{1/4}(e^{i\theta/2} + e^{-i\theta/2}) = 2 \cdot 5^{1/4} \cos \frac{\theta}{2} = 2 \cdot 5^{1/4} \sqrt{\frac{1 + \cos \theta}{2}} \\ = 2 \cdot 5^{1/4} \sqrt{\frac{1 + 1/\sqrt{5}}{2}}.$$ The rest is algebra. I believe such a computation should be accessible to a student of complex analysis--indeed, a student of high school trigonometry and complex number arithmetic.
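A quick numerical confirmation of the final value, sketched with mpmath:

```python
from mpmath import mp, quad, exp, cos, sqrt, pi, inf

mp.dps = 25
numeric = quad(lambda x: exp(-x**2)*cos(2*x**2), [-inf, inf])
closed = sqrt((1 + sqrt(5))/10 * pi)
print(numeric, closed)   # both about 1.00829
```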
{ "language": "en", "url": "https://math.stackexchange.com/questions/4101551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving ODE with initial condition, one step wrong I have a step wrong while solving my IVP but I cannot find what. I will post my detailed solution in the hope someone sees where it goes awry: The IVP: $$t^2 \frac{dy}{dt}−t=1+y+ty,y(1)=4.$$ * *I start by moving all function of y on the RHS: $$\displaystyle$$ $$\displaystyle t^2\frac{dy}{dt}-y-ty=1 +t$$ $$t^2\frac{dy}{dt}-(1+t)y=1 +t$$ *It is not in standard form so I continue by divinding by $t^2$: $$\frac{dy}{dt}-\frac{(1+t)}{t^2}y=\frac{1 +t}{t^2}$$ *With $h(t) = -\frac{(1+t)}{t^2}$ I can use integrating factor $e^{\int h(t)}$ = $e^{H(t)}$. With $H(t) = \frac{1}{t} - ln(t)$, I get : $$e^{\frac{1}{t} - ln(t)}\frac{dy}{dt}- e^{\frac{1}{t} - ln(t)}\frac{(1+t)}{t^2}y=\frac{1 +t}{t^2}e^{\frac{1}{t} - ln(t)}$$ Which can be rewritten as: $$\frac{d}{dt}(e^{\frac{1}{t} - ln(t)} y) = \frac{1 +t}{t^2}e^{\frac{1}{t} - ln(t)}$$ $$\displaystyle e^{\frac{1}{t} - ln(t)} y = \int{ \frac{1 +t}{t^2}e^{\frac{1}{t} - ln(t)}} $$ $$\displaystyle e^{\frac{1}{t} - ln(t)} y = -e^{\frac{1}{t} - ln(t)} + C $$ 4. by multiplying with $e^-H(t)= e^{-\left(\frac{1}{t} - ln(t)\right)}$ we get: $$ y(1) = 4 = -1 + Ce^{-\left(\frac{1}{t} - ln(t)\right)} \\ C = 5e$$ *The initial value condition gives that C: $$y(1) = 4 = -1 + Ce^{-\left(\frac{1}{t} - ln(t)\right)} \\ C = 5e$$ $$y(t) = -1 + 5 e e^{-\left(\frac{1}{t} - ln(t)\right)} \\ y(t) = -1 + 5 e^{-\left(\frac{1}{t} - ln(t) - 1\right)}$$ But this appears to be wrong. Is there some step I am doing wrong? EDIT: fixed -1 in exponent as per the comments. 
EDIT 2: trying to plug into the original equation. Simplifying, we get: $$y(t) = -1 + 5 t e^{1-\frac{1}{t}}\\ y'(t) = 5 e^{1-\frac{1}{t}} + 5t \frac{1}{t^2}e^{1-\frac{1}{t}} \\ y'(t) = 5 e^{1-\frac{1}{t}} + 5 \frac{1}{t}e^{1-\frac{1}{t}}$$ $$t^2 \frac{dy}{dt}−t=1+y+ty \\ t^2 \left(5 e^{1-\frac{1}{t}} + 5 \frac{1}{t}e^{1-\frac{1}{t}} \right) -t = 1 -1 + 5 t e^{1-\frac{1}{t}} -t + 5 t^2 e^{1-\frac{1}{t}} $$ Everything seems to cancel, but the automatic assessment still marks it wrong.
Your answer is correct; note that $$e^{-1/t+\ln t}= te^{-1/t},$$ since $e^{\ln z}=z$.
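Indeed, $y(t) = -1 + 5t\,e^{1-1/t}$ satisfies both the ODE and the initial condition; a small numerical check with central differences (plain Python, sample points are arbitrary):

```python
import math

def y(t):
    return -1 + 5*t*math.exp(1 - 1/t)

def yprime(t, h=1e-6):
    return (y(t + h) - y(t - h)) / (2*h)   # central-difference derivative

assert abs(y(1) - 4) < 1e-12               # initial condition y(1) = 4
for t in [0.5, 1.0, 2.0, 3.7]:
    lhs = t**2 * yprime(t) - t
    rhs = 1 + y(t) + t*y(t)
    assert abs(lhs - rhs) < 1e-4           # t^2 y' - t = 1 + y + t y
print("y(t) = -1 + 5 t e^(1 - 1/t) checks out")
```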
{ "language": "en", "url": "https://math.stackexchange.com/questions/4101732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Identifying homogenous differential equations Again, with respect to the same question I posted some minutes ago (Identifying separable differential equations), I discovered that the textbook provided to me was indeed wrong. Now, I'm beginning to question some of its answers. Taking the same function from the previous question, $$\frac{dy}{dt}+p(t)y(t)=q(t)y(t)+3y(t)$$ refactoring $q(t)y(t)+3y(t)-p(t)y(t)$ as $(3+r(t))y(t)$ I came to $$\frac{dy}{dt} = (3+r(t))y(t),\text{ with }r(t)=q(t)-p(t)$$ Given that I don't know my $y(t)$, I'm not able to verify that it is homogeneous, right? Considering that by definition, it's only homogeneous if $f(tx, ty) = t^n f(x,y)$. In this case, I only know my $dy/dt$, and it isn't a $F(y/x)$ type function. I have a pretty solid guess that it is non-homogeneous, but I don't know the way to prove it.
Whether or not your equation is homogeneous doesn't actually have anything to do with the form of the solution. If an equation is homogeneous, that basically just means that all of the terms have $y$ in them in some way or another. In this case, we have an equation of the form $y' + f(t)y = 0$, letting $f(t) = p(t) - q(t) - 3 = -(3 + r(t))$. This is homogeneous because there is no source term in terms of $t$ on the right-hand side. If we instead had an equation like $y' + f(t)y = \sin t$ or $y' + f(t)y = 1$, then these would be nonhomogeneous because there is a source term, a function of $t$ on the right-hand side which we can't get into a term with $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4101910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrix divided by its Euclidean norm $\frac{x}{\|x\|_{2}}$ I know that dividing a vector by its Euclidean norm $$\frac{x}{\|x\|_{2}}$$ gives a unit vector. Is there any kind of similar logic that can be applied to matrices? For example: $$\frac{A^{T}A}{\|A\|^{2}}$$
Matrices of a given shape, not even necessarily square, already comprise a vector space you may as well treat like any other. Its usual definition of a squared Euclidean length is $\sum_{ij}A_{ij}^2$, like the usual $\sum_kv_k^2$, by considering $ij$ to be $k$. This is of course $\sum_{ij}A_{ij}A^T_{ji}=\sum_i(AA^T)_{ii}=\operatorname{tr}AA^T$, or $\operatorname{tr}A^TA$ if you prefer. So the "unit" matrix is just $\frac{A}{\Vert A\Vert},\,\Vert A\Vert:=\sqrt{\operatorname{tr}A^TA}$.
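Concretely, this Frobenius normalization is easy to check with NumPy (the matrix here is just an arbitrary example; note it works for non-square matrices too):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])                 # 3 x 2, not square

fro = np.sqrt(np.trace(A.T @ A))           # sqrt of the sum of squared entries
assert np.isclose(fro, np.linalg.norm(A))  # NumPy's default matrix norm is Frobenius

U = A / fro                                # the "unit" matrix
assert np.isclose(np.sqrt(np.trace(U.T @ U)), 1.0)
print(fro)                                 # sqrt(91), about 9.539
```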
{ "language": "en", "url": "https://math.stackexchange.com/questions/4102098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
the length of a perpendicular inside of a triangle Let $ABC$ a triangle with its area $S$ and $AB=c$, $BC=a$, $AC=b$. The median from $A$ intersects the bisecting of the angle $ABC$ in $X$. If $x=\ $the length of the perpendicular from $X$ on $BC$ than $x$ is:
You are almost there: if $H$ is the perpendicular projection from $A$ on $BC$ and $Q$ is the perpendicular projection from $X$ on $BC$ ($XQ=x$), then: $$ AH/XQ=AM/XM=1+AX/XM \quad\text{and}\quad AH=2S/a. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4102464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that $f'(x)>\frac{f(x)}{x}$ Let $f:[0,1]\to\mathbb{R}$ be a continuous function on $[0,1]$ and differentiable on $(0,1)$. Suppose that $f(0)=0$ and that $f'$ is strictly increasing. Show that $f'(x)>\frac{f(x)}{x}$. I would like to know if my proof holds and to get some feedback, please. First, as $f$ is differentiable on $(0,1)$ and $f'$ is strictly increasing, $f$ is convex on $(0,1)$. Thus, we have the following inequality for $h<x<x+h$ ($h>0$ and such that $x+h<1$): $\frac{f(x)-f(h)}{x-h}\le\frac{f(x+h)-f(h)}{x}\le \frac{f(x+h)-f(x)}{h}$ Applying the limit as $h\to 0$ we obtain $\frac{f(x)-f(0)}{x}=\frac{f(x)}{x}=\lim_{h\to 0}\frac{f(x)-f(h)}{x-h}\le \lim_{h\to 0}\frac{f(x+h)-f(x)}{h}=f'(x)$ as wanted.
Your proof is fine if you say that $f'$ is increasing (not $f$). However, you only get that $f'(x)\ge\frac{f(x)}{x}$, not the strict inequality. Here is another (possibly simpler) approach which also gives the strict inequality: From the mean-value theorem we have for $x > 0$ $$ f(x) = f(x) - f(0) = f'(c) (x-0) $$ for some $c \in (0, x)$. Since $f'$ is strictly increasing it follows that $$ f(x) = f'(c) x < f'(x) x $$ which is the desired estimate.
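As a concrete illustration of the strict inequality, take $f(x)=e^x-1$ (my choice of example: $f(0)=0$ and $f'(x)=e^x$ is strictly increasing); a quick check in plain Python:

```python
import math

f = lambda x: math.exp(x) - 1      # f(0) = 0, f' strictly increasing
fprime = lambda x: math.exp(x)

for x in [0.1, 0.3, 0.5, 0.9]:
    assert fprime(x) > f(x) / x    # f'(x) > f(x)/x, strictly
print("strict inequality holds at all sample points")
```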
{ "language": "en", "url": "https://math.stackexchange.com/questions/4102952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expected area of a random triangle with fixed perimeter, And different ways to choose a random triangle Following on from this question: There are several solutions out there but each has a different outcome. I'm trying to calculate the expected area of a random triangle with a fixed perimeter of 1. My initial plan was to create an ellipse where one point on the ellipse is moved around and the triangle that is formed with the foci as the two other vertices (which would have a fixed perimeter) would have all the varying areas. But then I realized that I wouldn't account for ALL triangles using that method. For example, an equilateral triangle with side lengths one third would not be included. Can anyone suggest how to solve this problem? Thanks. When I ran a script that randomly selects a triangle with a perimeter of length 1 according to how @Sheheryar Zaidi specified in his answer: Let $0<x<y<1$ be the points at which the "stick is broken", and so $x, y-x, 1-y$ are the lengths of the three segments. For a triangle to be formed, the sum of any two sides must be greater than the third side. Therefore we get the following inequalities: $$x+(y-x)>1-y \\ (y-x)+(1-y)>x \\ (1-y)+x>y-x$$ Plotting these on a coordinate system gives a triangular region with vertices $(0, 1/2), (1/2, 1/2), (1/2,1)$. So any pair $(x, y)$ contained within that region results in a triangle of perimeter 1. I parameterize these pairs: $$\left(\frac{a_1}{2}, \frac{1+a_2}{2}\right),$$ for $0<a_2<a_1<1$. Now these can be plugged in Heron's formula (and simplified): $$A(a_1, a_2)=\frac{1}{4}\sqrt{(1-a_1)(a_1-a_2)(a_2)}$$ the average area of $10^7$ attempts came out 0.026179216476998588. The closest result is of @Sheheryar Zaidi, but I do not know what exactly is A(R) in $E(A)=\frac{1}{A(R)}\int_0^1\!\!\!\int_0^{a_1}A(a_1, a_2)\,da_2da_1$. 
Here's the Python code:

import random
import math

def areaOfRandomTriangle():
    # break points: x uniform on (0,1), then y uniform on (0,x),
    # matching the (a1, a2) parameterization from the question
    x = random.random()
    y = random.uniform(0, x)
    A = x/2
    B = (1+y)/2 - A
    C = 1 - (B+A)
    s = (A + B + C)/2                      # semi-perimeter = 1/2
    area = math.sqrt(s*(s-A)*(s-B)*(s-C))  # Heron's formula
    return area

n = 10**7
c = 0
for i in range(n):
    c += areaOfRandomTriangle()
print('Average Area:', c/n)

Average Area: $0.026179216476998588$ But when I chose the random triangle in another way suggested there - by using an ellipse, i.e.:

* Side A of the triangle is uniformly selected from $[0,\frac{1}{2}]$.
* An ellipse is constructed whose distance between the foci is A (parameters a, b of the ellipse can be found by the condition $2a +A = 1$).
* Select a point from the circumference of the ellipse uniformly as suggested here.
* We will define the other two sides of the triangle B, C to be the distance of the point from the focal points respectively.
* Calculate the area of the triangle A, B, C.

The area average of $10^5$ attempts is $0.02184924698584864$. So my question is how to choose the triangle randomly and what is the expectation of the area?
I continued @Sheheryar Zaidi's path (quoted in the question) and just fixed the last equation; the result that comes out agrees with the result of the code. Because $a_2$ is really $a_2|a_1$ (drawn uniformly from $(0,a_1)$), like here, the conditional expectation is: $$\frac{1}{a_1}\int_0^{a_1}\frac{1}{4}\sqrt{(1-a_1)(a_1-a_2)(a_2)} \,da_2$$ Therefore the expectation of the triangle area is: $$E(A)=\int_0^1\frac{1}{a_1}\int_0^{a_1}\frac{1}{4}\sqrt{(1-a_1)(a_1-a_2)(a_2)} \,da_2da_1 =0.0261799387799$$ which matches the result of the code to six decimal places.
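The double integral can also be cross-checked deterministically; a SciPy sketch (note the numerical value also agrees, to the digits shown, with $\pi/120$, an observation worth checking independently):

```python
import numpy as np
from scipy.integrate import dblquad

def integrand(a2, a1):
    # (1/a1) times the Heron-area expression: a2 is uniform on (0, a1)
    return np.sqrt((1 - a1)*(a1 - a2)*a2) / (4*a1)

# outer variable a1 in (0, 1), inner variable a2 in (0, a1)
E, err = dblquad(integrand, 0, 1, lambda a1: 0.0, lambda a1: a1)
print(E)                # approximately 0.02618
print(np.pi/120)        # approximately 0.02618 as well
```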
{ "language": "en", "url": "https://math.stackexchange.com/questions/4103186", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Understanding open and closed sets I am really struggling with understanding the notions of open and closed sets. I "understand" that an open set is one which, for each $x$ in the set, contains a ball of some radius $r$ around $x$ such that the ball is still inside the set. A closed set contains all of its limit points. But given the set $$(-\frac{1}{n},\infty),$$ I am unable to "picture" in my head why this is open. Could anyone please give advice on how to look at open and closed sets? So far most definitions have not made it any easier. Thank you.
Forget the $-\frac1n$. For any real number $x$, $(x,\infty)$ is an open set. That's so because, if $y\in(x,\infty)$, then $(x,2y-x)\subset(x,\infty)$. And $(x,2y-x)$ is the open ball centered at $y$ with radius $y-x$, since it is equal to $\bigl(y-(y-x),y+(y-x)\bigr)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4103420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the proper convention regarding the order of operations of a fractional exponent and/or the simplification of it? Specifically, consider the example $\sqrt[4]{x^2}$. The answer of course would be $\sqrt{|x|}$ since the x is squared first. However if converted to the exponential fraction of $x^{2/4}$, you lose the information of which came first, so you could end up with the wrong answer in this case. In fact, one might simplify it to $x^{1/2}$ which would simply be $\sqrt{x}$ which is NOT equal to $\sqrt{|x|}$. Is there a proper convention for dealing with this? Or is the takeaway that you should be careful when simplifying a function into a fractional exponent? The reason this is important to me is because I'm tutoring a student taking college algebra and I want to be 100% correct in my explanation of this sort of problem. He encounters many problems where he is asked to find the domain of functions and if he's ever given this type of scenario I need to know what to tell him.
The key is just to track the domain all the way through the problem. If the domain becomes restricted as you simplify, you incorporate an absolute value to deal with it. $$ \sqrt[4]{x^2}, x\in\mathbb{R} $$ $$ = x^{\frac{2}{4}}, x\in\mathbb{R} $$ $$ = |x|^{\frac{1}{2}}, x\in\mathbb{R} $$ $$ = \sqrt{|x|}, x\in\mathbb{R} .$$ If you miss the absolute value in the third step above, your domain becomes $x\in[0,\infty)$, so you've artificially restricted it. That's how you know you need to include the absolute value. Just an additional note on why this works - the problem comes from the simplification of the fraction, and if you're really careful about your order of operations, it works: $$ \sqrt[4]{x^2} = \left(x^2\right)^{\frac{1}{4}} = \left(\left(x^2\right)^{\frac{1}{2}}\right)^{\frac{1}{2}} = |x|^{\frac{1}{2}} = \sqrt{|x|}.$$ If the order had been reversed, we'd have: $$ \sqrt[4]{x}^2 = \left(x^{\frac{1}{4}}\right)^2 = \left(\left(x^{\frac{1}{2}}\right)^{\frac{1}{2}}\right)^2 = x^{\frac{1}{2}} = \sqrt{x}.$$ The domain throughout this reversed problem is $x\in [0,\infty)$. The key to it is in the following two identities: $$ \left(x^2\right)^{\frac{1}{2}} = |x|, x\in\mathbb{R} , $$ $$ \left(x^{\frac{1}{2}}\right)^2 = x, x\in [0,\infty) .$$
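A quick floating-point sanity check of the two branches (plain Python; the sample values are arbitrary):

```python
import math

for x in [-3.0, -0.5, 0.0, 2.0]:
    # (x^2)^(1/4) = sqrt(|x|) for every real x
    assert math.isclose((x*x)**0.25, math.sqrt(abs(x)), abs_tol=1e-12)

# the reversed order restricts the domain: sqrt(x) is undefined for x < 0
try:
    math.sqrt(-3.0)
except ValueError:
    print("sqrt(-3.0) is undefined over the reals, as expected")
```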
{ "language": "en", "url": "https://math.stackexchange.com/questions/4103776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
a mean value theorem question if $f(x)$ can be differentiated to any degree for $x\in (0,+\infty)$ and $f'(x)>0,f''(x)<0$, if $0<a<b$, according to the mean value theorem we have $\displaystyle\exists \xi\in(a,b),\ \text{s.t.}\ \frac{f(b)-f(a)}{b-a}=f'(\xi)$, prove: $\displaystyle \xi<\frac{a+b}{2}$ And I think that's pretty obvious by looking at the graph of the function, but I don't know how to prove it in math words.
Let $f(x) = \arctan x$, $a={1 \over 8}, b={1 \over 2}$. Since $f''(x) <0$ we see that $f'$ is strictly decreasing, so to provide a counterexample, it is sufficient to show that ${f(b)-f(a) \over b-a} \le f'({a+b \over 2})$ or ${8 \over 3} (\arctan { 1\over 2} - \arctan { 1\over 8}) \le {256 \over 281}$. Evaluating gives $0.904... \le 0.911...$.
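The numbers are easy to verify, and in this example one can even solve for $\xi$ explicitly, since $f'(x)=1/(1+x^2)$ is invertible (plain Python):

```python
import math

a, b = 1/8, 1/2
slope = (math.atan(b) - math.atan(a)) / (b - a)   # about 0.904
mid = (a + b) / 2                                  # 5/16 = 0.3125
print(slope, 1/(1 + mid**2))                       # 0.904... < 0.911... = f'(mid)
assert slope < 1/(1 + mid**2)

# f' is strictly decreasing, so f'(xi) = slope forces xi > mid
xi = math.sqrt(1/slope - 1)
print(xi, mid)        # xi is about 0.3244 > 0.3125
assert xi > mid
```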
{ "language": "en", "url": "https://math.stackexchange.com/questions/4104077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Duality between derived tensor and derived Hom functors Let $A$ be a finite-dimensional $k$-algebra, $\def\H{\operatorname{Hom}} D=\H_k(-,k)$. How to show that $ R\H_A (DA,A) \cong D(DA \otimes^{\mathbb{L}}_A DA)$? $D(DA \otimes^{\mathbb{L}}_A DA) = \H_k(DA \otimes^{\mathbb{L}}_A DA, k) =\!\!\!\!? \ \H_k(DA,R\H_A(DA,k)) = ?$ Thank you.
By Lemma 15.72.2 of https://stacks.math.columbia.edu/tag/0A5W, if we choose any free finitely generated resolution $A_i$ of $DA$ (with differentials $A_i\rightarrow A_{i-1}$, $i\geq 0$, $A_{-1}=DA$), $RHom_A(DA,A)$ is represented by the complex $Hom(A_i,A)$ (where the differentials go from $i \geq 0$ to $i+1$ and the complex is zero in negative degree). That’s always possible because by working recursively, we can ensure that all the involved kernels have finite dimension over $k$ and are thus finitely generated over $A$. By Definition 15.58.15 in https://stacks.math.columbia.edu/tag/06XY, $DA \otimes_A^{\mathbb{L}} DA$ is represented by the complex $DA \otimes A_i$ (differentials from $i$ to $i-1$, zero in negative degree), and thus $D(DA \otimes_A^{\mathbb{L}} DA)$ is represented by the complex $D(DA \otimes_A A_i)$ (differentials from $i$ to $i+1$, zero in negative degree). So all that remains to be proved is that if $M$ is a free $A$-module of finite rank, there is a natural isomorphism $D(DA \otimes_A M) \rightarrow Hom_A(M,A)$. It is enough to show that there is a perfect natural $k$-duality $\langle \cdot,\, \cdot\rangle: (DA \otimes_A M) \times Hom_A(M,A) \rightarrow k$. We define it as $(\alpha \otimes m,f) \longmapsto \alpha(f(m))$. Note that the two spaces involved have the same finite dimension over $k$, so it’s enough to show that if $\langle \cdot,\, f\rangle=0$, then $f=0$. Now, let $f: M \rightarrow A$ be such that $\langle \cdot,\, f \rangle=0$. This means that for every $m \in M$, $f(m)$ is in the kernel of every $\alpha \in DA$. But $A$ is a $k$-vector space, so the intersection of all the kernels of the elements of $DA$ is zero, so $f=0$, and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4104230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
contraction mapping and convergence proof I am having difficulties arranging and concluding the proof... Suppose $f$ maps the open interval $E$ into itself, $0 < b < 1$, $f$ has property $X(b)$ (that property is Lipschitz continuity), and $x_0 \in E$. Prove that the sequence $\{ x_k\}$ defined recursively by $x_k = f(x_{k−1})$ for $k \geq 1$ converges. What I started with: Given: $x_0 \in E$, $f(x_{k-1}) = x_k$, $0 < b <1$, $f$ has the property $X(b)$ on $E$, $|f'(x)| \leq b$, $x_0 \in E$. From $|f'(x)| \leq b$: $f(x_0) - f(x) \leq y+x_1$ becomes $f(x_0) = x_1$, and $x_k = f(x_{k-1})$ for $k \geq 1$. I am stuck and not sure if what I am starting with is the right track :(
I have no idea what you mean by $y$. Also, note that the Lipschitz condition doesn't imply the existence of a derivative, so you are not allowed to use $f'(x)$. Here's my proof. By the condition, given $x_0\in E$, we have $x_n\in E,\forall n\in \mathbb{N}$. By the Lipschitz condition, we have $$|f(x)-f(y)|\leq b|x-y|,\forall x,y\in E$$ and in particular $$|x_{n+2}-x_{n+1}|=|f(x_{n+1})-f(x_n)|\leq b|x_{n+1}-x_n|,\forall n\in \mathbb{N}.$$ Letting $n=0,1,...,k-1$ and multiplying the inequalities together, we get $$|x_{k+1}-x_{k}|\leq b^k|x_1-x_0|.$$ Note that given $x_0$, $x_1-x_0$ is a constant, so when $m>n$ $$|x_m-x_n|\leq \sum_{k=n}^{m-1}|x_{k+1}-x_k|\leq \sum_{k=n}^{m-1}b^k|x_1-x_0|\leq \sum_{k=n}^\infty b^k|x_1-x_0|=\frac{b^n}{1-b}|x_1-x_0|.$$ $\forall\epsilon>0,\exists N:\frac{b^N}{1-b}|x_1-x_0|<\epsilon$; then we have $|x_m-x_n|<\epsilon,\forall m>n>N$. By Cauchy's criterion, this is equivalent to the convergence of $\lbrace x_n\rbrace$.
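To see the estimate in action, here is a small numerical illustration with a hypothetical choice of $E$ and $f$ (any Lipschitz $f:E\to E$ with constant $b<1$ would behave the same way):

```python
# hypothetical example: E = (0, 2), f(x) = 1 + x/3 maps E into E with b = 1/3
def f(x):
    return 1 + x / 3

b = 1 / 3
x = 0.5            # x_0
gaps = []
for _ in range(30):
    nxt = f(x)
    gaps.append(abs(nxt - x))
    x = nxt

# |x_{k+2} - x_{k+1}| <= b |x_{k+1} - x_k|, so the gaps shrink geometrically
assert all(g2 <= b * g1 + 1e-12 for g1, g2 in zip(gaps, gaps[1:]))
assert abs(x - 1.5) < 1e-9   # the limit is the fixed point x = 3/2 of f
```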
{ "language": "en", "url": "https://math.stackexchange.com/questions/4104496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Uniform convergence of $\sum_{n=1}^\infty \frac{1}{z^n+1}$ I've proved that the series $\sum_{n=1}^\infty \frac{1}{z^n+1}$ for $|z|\geq r >1,z\in\Bbb C$ converges uniformly by weierstrass M-test. But I want to show it's not uniformly convergent on $|z|>1$. Could you give any hints?
If it was uniformly convergent for $|z|>1$, $$ \mathop {\sup }\limits_{|z| > 1} \left|\sum\limits_{n = N}^\infty {\frac{1}{{z^n + 1}}} \right| $$ would tend to zero as $N\to +\infty$. Note however that $$ \mathop {\sup }\limits_{|z| > 1} \left|\sum\limits_{n = N}^\infty {\frac{1}{{z^n + 1}}} \right| \ge \sum\limits_{n = N}^\infty {\frac{1}{{\left( {1 + \frac{1}{N}} \right)^n + 1}}} \ge \frac{1}{{\left( {1 + \frac{1}{N}} \right)^N + 1}} \ge \frac{1}{{e + 1}}. $$
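The lower bound is easy to confirm numerically: for several values of $N$ (arbitrary choices for this check), a partial tail sum at $z = 1 + \frac1N$ already exceeds $\frac{1}{e+1}$:

```python
import math

for N in (5, 50, 500):
    z = 1 + 1 / N
    # partial sum of the tail starting at n = N; the full tail is even larger
    tail = sum(1 / (z ** n + 1) for n in range(N, N + 2000))
    assert tail >= 1 / (math.e + 1)   # ≈ 0.2689, so no uniform decay to zero
```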
{ "language": "en", "url": "https://math.stackexchange.com/questions/4104623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving that $\int_0^{\pi/2} \sin^{2n + 1}(x) dx = \frac{2}{3} \cdot \frac{4}{5} \cdot \frac{6}{7} \cdot ... \cdot\frac{2n}{2n + 1}$ I'm having some trouble with the following exercise: Prove that, for all $n \in \Bbb N$: $$\int_0^{\pi/2} \sin^{2n + 1}(x) dx = \frac{2}{3} \cdot \frac{4}{5} \cdot \frac{6}{7} \cdot ... \cdot\frac{2n}{2n + 1}$$ So, the first thing I did was write the RHS using product notation to make things a little easier: $$\frac{2}{3} \cdot \frac{4}{5} \cdot \frac{6}{7} \cdot ... \cdot\frac{2n}{2n + 1} = \prod_{k=1}^n \frac{2k}{2k + 1}$$ Then I tried to use induction to prove this; I easily proved that for $n = 1$ it is true, but I'm having some trouble with the induction step of the proof. How can I prove this? Note: In the last exercise I proved that: $$\int \sin^n(x) dx = - \frac{1}{n}(\sin^{n - 1}(x))\cos(x) + \frac{n - 1}{n} \int \sin^{n - 2}(x) dx$$ I don't know if this helps or if it's related to this exercise.
The reduction formula you proved in your previous exercise is exactly what you need here; no induction step is required. Evaluate it between $0$ and $\pi/2$: the boundary term $-\frac{1}{n}\sin^{n-1}(x)\cos(x)$ vanishes at both endpoints (since $\sin 0 = 0$ and $\cos\frac{\pi}{2} = 0$), so writing $I_n = \int_0^{\pi/2}\sin^n(x)\,dx$ you get $$I_n = \frac{n-1}{n}\,I_{n-2}.$$ Applying this repeatedly for the odd exponents $2n+1, 2n-1, \dots, 3$, and using $I_1 = \int_0^{\pi/2}\sin(x)\,dx = 1$, gives $$I_{2n+1} = \frac{2n}{2n+1}\cdot\frac{2n-2}{2n-1}\cdots\frac{2}{3}\cdot I_1 = \prod_{k=1}^n \frac{2k}{2k+1}.$$ Hope this helps!
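As a numerical sanity check (a quick sketch, not part of the proof), one can compare a trapezoid-rule approximation of the integral with the exact product:

```python
import math
from fractions import Fraction

def numeric_integral(n, steps=100_000):
    # trapezoid rule for ∫_0^{π/2} sin(x)^(2n+1) dx
    h = (math.pi / 2) / steps
    s = 0.5 * (math.sin(0) ** (2 * n + 1) + math.sin(math.pi / 2) ** (2 * n + 1))
    s += sum(math.sin(k * h) ** (2 * n + 1) for k in range(1, steps))
    return s * h

def product_form(n):
    # (2/3)(4/5)...(2n/(2n+1)) as an exact rational
    p = Fraction(1)
    for k in range(1, n + 1):
        p *= Fraction(2 * k, 2 * k + 1)
    return p

for n in range(1, 6):
    assert abs(numeric_integral(n) - float(product_form(n))) < 1e-6
```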
{ "language": "en", "url": "https://math.stackexchange.com/questions/4104790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Sample size for a given accuracy, at estimating $\pi$ by the Monte Carlo method. I have the following problem: For the classical technique for estimating $\pi$ by using the Monte Carlo method, find the minimum number of points $n$, such that (being $\hat{p}$ our estimator) we get $\dfrac{|\hat{p} - \pi|}{\pi} < 0.0005$. I have already made some simulations, trying to estimate the minimum value of $n$ by estimating the standard deviation of $\hat{p}$. But actually the right method is to prove it mathematically, so I tried the following. Considering $\pi/4$ as being approximately a proportion, let's call it $p$, I will try to find the $n$ that gives me $\dfrac{|\hat{p} - \pi|}{\pi} < 0.0005$ with a $95\%$ probability: $$ P\left( \frac{|\hat{p} - p|}{p} < 0.0005 \right) = 0.95 \Leftrightarrow P\left( \frac{-0.0005p}{\sqrt{\frac{p(1-p)}{n}}} < \frac{\hat{p} - p}{\sqrt{\frac{p(1-p)}{n}}} < \frac{0.0005p}{\sqrt{\frac{p(1-p)}{n}}} \right) = 0.95 \Leftrightarrow\ \Leftrightarrow P\left( \frac{-0.0005p}{\sqrt{\frac{p(1-p)}{n}}} < Z < \frac{0.0005p}{\sqrt{\frac{p(1-p)}{n}}} \right) = 0.95 $$ By the fact that $Z \sim N(0,1)$, I get: $$ \frac{0.0005p}{\sqrt{\frac{p(1-p)}{n}}} = 1.96 \Leftrightarrow n = \frac{1.96^2\frac{1-p}{p}} {0.0005^2} $$ Considering that I did not miss anything or calculate something wrong in the step above, I could find $n$ by an iterative method, using $p = \hat{p}_0$, where $\hat{p}_0$ is the estimated proportion for a first given $n = n_0$, and then compute $n_k$ with: $$ n_k = \frac{1.96^2\frac{1-\hat{p}_{k-1}}{\hat{p}_{k-1}} }{0.0005^2} $$ Where $\hat{p}_{k-1}$ is calculated using $n = n_{k-1}$. But I don't know... Is this a sufficient demonstration? Am I in the wrong direction? NOTES: * The division I've done in the inequality is to normalize $\hat{p}$, considering it as an estimator for $\pi/4$, since $E[\hat{p}] = p$ and $sd(\hat{p}) = \sqrt{p(1-p)/n}$. 
* The Monte Carlo method for estimating $\pi$ is described, in a pretty straightforward way, in this link. * I can't use the real value of $\pi$; it is considered "cheating".
I don't understand why you resort to an iterative approach when you already defined $p = \pi/4$. This immediately gives you $$n \ge \frac{1.96^2 \frac{1 - \pi/4}{\pi/4}}{0.0005^2} \approx 4.19855 \times 10^6.$$
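For completeness, the arithmetic can be checked in a couple of lines; note that with the rounded quantile $1.96$ one gets $\approx 4.1987 \times 10^6$, and the slightly smaller figure above corresponds to the more precise quantile $z_{0.975} \approx 1.959964$:

```python
import math

p = math.pi / 4                           # the proportion being estimated
n = 1.96 ** 2 * (1 - p) / p / 0.0005 ** 2
assert 4.19e6 < n < 4.20e6                # minimum sample size, about 4.2 million
```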
{ "language": "en", "url": "https://math.stackexchange.com/questions/4105053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
There are 3 numbers $a,b,c$ such that $HCF(a,b)=l,HCF(b,c)=m$ and $HCF(c,a)=n$. $HCF(l,m)=HCF(l,n)=HCF(n,m)=1$. Find LCM of $a, b, c$. My solution approach for this problem was to use the relationship formula between LCM and HCF of three numbers, which is $$LCM(p,q,r)=\frac{pqr \times HCF(p,q,r)}{HCF(p,q) \times HCF(q,r) \times HCF(r,p)}$$ and upon using this formula I get $$LCM(a,b,c)=\frac{abc \times HCF(a,b,c)}{lmn}$$ But I am not able to figure out the value of $HCF(a,b,c)$. Can someone please help with this? Thanks in advance!!!
Since $l \mid a$, $n \mid a$ and $HCF(l,n)=1$, we have $ln \mid a$, and similarly for $b$ and $c$; so we can express $a=ln\,a'$, $b=lm\,b'$ and $c=mn\,c'$. As $HCF(a,b)=l$, we have $HCF(lna',lmb') = l$, therefore $HCF(na',mb')=1$. Now, as $m,n$ are coprime, $HCF(a',b')=1$. Similarly one finds $HCF(a',b',c')=1$; in fact $HCF(a,b,c)$ divides both $HCF(a,b)=l$ and $HCF(b,c)=m$, which are coprime, therefore $HCF(a,b,c)=1$. Therefore, with your formula, $LCM(a,b,c)$ becomes $\dfrac{abc}{lmn}$.
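A concrete instance confirms the formula; the values $l=2$, $m=3$, $n=5$ with $a'=b'=c'=1$ are an arbitrary illustrative choice:

```python
from math import gcd

def lcm(*xs):
    out = 1
    for x in xs:
        out = out * x // gcd(out, x)
    return out

l, m, n = 2, 3, 5                      # pairwise coprime
a, b, c = l * n, l * m, m * n          # a = 10, b = 6, c = 15
assert gcd(a, b) == l and gcd(b, c) == m and gcd(c, a) == n
assert gcd(gcd(a, b), c) == 1          # HCF(a, b, c) = 1
assert lcm(a, b, c) == a * b * c // (l * m * n)   # LCM = 900 / 30 = 30
```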
{ "language": "en", "url": "https://math.stackexchange.com/questions/4105482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that $\mathbb{R^3}$ is not a vector space Consider $\mathbb{R^3}$ with the usual addition $+$ of vectors, but with scalar multiplication $\otimes$ defined by: $k$ $\otimes$ $\begin{bmatrix} x \\ y \\ z \end{bmatrix}$ = $\begin{bmatrix} x \\ ky \\ 2kz \end{bmatrix}$, $\begin{bmatrix} x \\ y \\ z \end{bmatrix}$ $\in$ $\mathbb{R^3}.$ Show that $\mathbb{R^3}$ together with the operations + and $\otimes$ is not a vector space. My attempt: A property of a vector space is $(-1)$ $\otimes$ $\mathbf{u}$ = $\mathbf{-u}$ So, applying this would result in: $(-1)$ $\otimes$ $(x, y, z)$ = $(x, -y, -2z)$. To fulfill the property, this must be equal to $\mathbf{-u}$. So, $(x, -y, -2z)$ = $-(x, y, z)$ = $(-x, -y, -z)$. But, $(x, -y, -2z)$ $\ne$ $(-x, -y, -z)$ whenever $x \ne 0$. Hence, it is not a vector space. Is my attempt enough to prove it isn't a vector space? Or at least, correct? Should I show further proof?
An idea to simplify things: it's easy to show that in any vector space it must be that $\;1\cdot v= v\;$, for any (supposed) vector $\;v\;$, but here $$1\cdot\begin{pmatrix}x\\y\\z\end{pmatrix}:=\begin{pmatrix}x\\y\\2z\end{pmatrix}\neq\begin{pmatrix}x\\y\\z\end{pmatrix}$$ and thus we're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4105655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Reference for a theorem about necessary and sufficient conditions for configuration of n-tuples of points. I'm writing my thesis and at a certain point I claim, and indeed it turns out to be true from discussions with my supervisors, that certain configurations for n-tuples of points, in a curve of degree k, have the following property: If there is an n-tuple of points of $\mathbb{P}^2$ such that the configuration C is possible, then the configuration C is possible for any n-tuple For example it's possible that given 8 points on an irreducible quartic one of them is a triple point and the other 7 are non-singular points, but then I claim that given any 8-tuple in $\mathbb{P}^2$ there is a quartic with a triple point in one of them and passing through the other 7. I claim that these are all possible configurations on curves of degree bigger than 3 if we consider an 8-tuple. What I need is a reference for such a theorem, which I'm quite sure exists
I couldn't find a complete answer. However, it turns out that the best way to exclude a configuration is to use Bézout's theorem. For example, a cubic can have at most one singularity: if it had two and we considered a line through the two, we would get an intersection multiplicity of $2+2=4$, which is bigger than the $3\cdot 1=3$ coming from Bézout. In some cases we need to use singular curves; for example, a singular cubic with the singularity (a double point) on the triple point of a quintic has intersection multiplicity $3\cdot 2=6$ on that point alone.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4105796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Complicated comparing sets Consider the open interval $(0, 1)$ and the closed interval $[0, 1]$. We claim that $[0, 1] ≈ (0, 1)$. We deduced the equipotence $[0, 1] ≈ (0, 1)$ by invoking the Schröder–Bernstein theorem, but we didn’t actually construct a bijection $[0, 1] → (0, 1).$ (a) Verify that the function $h$ given by the following scheme is such a bijection: $$0\overset{h}{\longmapsto}\frac12\qquad\qquad\qquad\qquad\qquad\qquad\quad\,{}\\\frac1n\longmapsto\frac1{n+2}\qquad\text{for each integer }n\ge 1\\x\longmapsto x\qquad\qquad\qquad\qquad\quad\;\text{otherwise}$$ (b) The function $h$ in part (a) acts on $[0, 1]$ in a way that resembles the strategy of the hotel manager in the text below. How? Now imagine that there is a hotel with infinitely many rooms along an unending corridor: a first room, a second room, and so on. Imagine further that there is a person occupying each room; so we can reasonably say that there are exactly as many guests as there are rooms. You arrive at the hotel seeking a room, and the manager tells you, “At the moment the rooms are all occupied, but we always have room for one more.” Whereupon she speaks into an intercom connected to all the rooms, saying, “At the sound of the bell, each guest should step out into the corridor and move into the next room in line.” This clears out the first room, and you have a place to stay. Notice that although you thought you had enlarged the collection of guests by your presence, there are still as many hotel rooms as there are guests. My work is below: We are given function $h$ $$h(x)=\begin{cases}\cfrac1{n+2} & \text{for }x=\cfrac1n,n\ge 1\\\cfrac12 & \text{for }x=0\\x & \text{otherwise}\end{cases}$$ We know that a function from $A$ (the domain) to $B$ (the range) is both one-to-one and onto when no element of $B$ is the image of more than one element of $A,$ and all elements in $B$ are used. Functions that are both one-to-one and onto are referred to as bijective. 
Here we can say that $h(x)$ acts as the identity function for $x$ not of the form $\frac1n$ and $x\ne0$. We know that the identity function is bijective, so for $x\ne\frac1n$ and $x\ne 0$, $h(x)$ is bijective. Let us consider $x_1=\frac{1}{n_1},x_2=\frac{1}{n_2}$ and assume that $h(x_1)=h(x_2)$, so $h\left(\frac{1}{n_1}\right)=h\left(\frac{1}{n_2}\right).$ Then, $$\frac{1}{n_1+2}=\frac{1}{n_2+2}\iff n_1+2=n_2+2$$ Therefore, $n_1=n_2$, thus $x_1=x_2$, which proves that $h(x)$ is one-to-one. For $n\ge 1$, $\frac{1}{n+2}\ne\frac{1}{2}$; thus, only $x=0$ maps to $\frac12.$ Let us check surjectivity for the numbers $y=\frac1n$, $n>2$: if $y=\frac1n$, then there exists an element $x=\frac1{n-2}$ such that $\frac1n=h\left(\frac{1}{n-2}\right).$ Therefore, $h(x)$ is onto, so we conclude that $h(x)$ is bijective. This question is driving me nuts. Please give me some advice on how to answer it concisely. Thank you so much!
Here is how the construction of that bijection mirrors the freeing of the first two rooms in Hilbert's Hotel. The function $h$ matches the rational numbers $$ 0, 1/1, 1/2, 1/3, 1/4, 1/5, \ldots $$ in order to the rational numbers $$ 1/2, 1/3, 1/4, 1/5, 1/6, 1/7, \ldots $$ That says the inverse function $h^{-1}$ creates "rooms" for the extra guests $0$ and $1$ by shifting the guests $1/n$ over two rooms. All the other real numbers in the interval stay fixed.
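As a finite sanity check (not a proof, and the sample points are an arbitrary choice), here is $h$ implemented on exact rationals, verified to be injective into $(0,1)$:

```python
from fractions import Fraction as F

def h(x):
    # the bijection [0,1] -> (0,1) from part (a)
    if x == 0:
        return F(1, 2)
    if x.numerator == 1:               # x = 1/n for some integer n >= 1
        return F(1, x.denominator + 2)
    return x

domain = [F(0)] + [F(1, n) for n in range(1, 51)] + [F(2, 3), F(3, 7)]
images = [h(x) for x in domain]
assert len(set(images)) == len(domain)        # injective on this sample
assert all(F(0) < y < F(1) for y in images)   # every image lies in (0, 1)
```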
{ "language": "en", "url": "https://math.stackexchange.com/questions/4105919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Examples/classification of algebraic symplectomorphisms I'm curious about examples of algebraic automorphisms of complex varieties which are symplectomorphism. For instance, can we classify the algebraic symplectomorphisms of $\mathbb{P}_{\mathbb{C}}^n$ with regard to the Fubini-Study form? Or can we give large classes thereof for $\mathbb{P}_{\mathbb{C}}^n$ or other complex algebraic varieties? If understand a comment on this question correctly, there ought to be an ample supply....
A general remark about Kahler manifolds is that a biholomorphic automorphism is symplectomorphic if and only if it is an isometry. (Generally, if a diffeomorphism preserves two out of three structures given by a Kahler metric: hermitian, symplectic, complex, then it preserves all three.) In the case of $CP^n$ with the FS-metric, the group of biholomorphic isometries is easily seen to be equal to $PU(n+1)$. As for more general examples, you can consider biholomorphic isometries of compact hermitian symmetric spaces (these spaces will be algebraic). Helgason's book Helgason, Sigurdur, Differential geometry, Lie groups, and symmetric spaces, Pure and Applied Mathematics, 80. New York-San Francisco-London: Academic Press. XV, 628 p. (1978). ZBL0451.53038. contains a description of these groups (I do not remember the details). For general smooth algebraic varieties, describing the groups of biholomorphic isometries will be difficult. On general grounds, all you can say that this group will be a compact Lie group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4106109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there a general procedure for inducing a formula / recognizing a pattern? For some problems, it's pretty simple to deduce what a formula should look like after you enumerate a few examples, but for some problems, it's not so clear. For these less clear examples, is there some procedure I can follow to find the formula? For example, just now, I was working on the problem: Find the expected number of tosses it takes to get $k$ 6s in a row. The recursive formula is $$ E[N_k ] = 6(E[N_{k - 1}] + 1) $$ where $N_k$ is the number of tosses it takes to get $k$ 6s in a row. The initial condition is $E[N_1] = 6$ because $N_1$ is a geometric random variable with $1/6$ probability of success. Then I enumerated a few cases: $$ E[N_1] = 6 \\ E[N_2] = 42 \\ E[N_3] = 258 \\ E[N_4] = 1554 $$ It's not obvious to me what the formula should be. But I think it should look something like $$ E[N_k] = 6^k + (k - 1) * \text{something} + \text{maybe other terms} $$ But I don't know what this "something" and "maybe other terms" should be. If I sit here long enough I could probably figure it out, but is there some kind of general procedure that I can apply to a wide array of problems?
If you call $a_k=E[N_k]$ then you end up with a linear induction relation: $$a_k-6a_{k-1}=6$$ Since it is linear, it solves as the sum of the general solution of the homogeneous equation $H_k-6H_{k-1}=0$, which is $H_k=6^kH_0$, and one particular solution of the full equation. The theory says that if the RHS is of the form $P(k)r^k$ then you can find a particular solution of the form $Q(k)r^k$ with $\deg(Q)=\deg(P)+m$ where $m$ is the multiplicity of $r$ as a root of the homogeneous equation. In this case $RHS = 6\times 1^k$ and the single root of the homogeneous equation is $r=6$. So the multiplicity of $1$ is just $m=0$ (i.e. it is not a root); this means $\deg(Q)=0+0=0$. All that to say that our particular solution is simply a constant... Therefore let us search for $S_k=c$; then $c-6c=6\iff c=-\frac 65$. Our general solution is then $$a_k=H_k+S_k=6^kH_0-\frac 65$$ We now solve for initial conditions: $a_1=6=6H_0-\frac 65\iff H_0=\frac 65$ and we get $$E[N_k]=\frac 65(6^k-1)$$
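The closed form is easy to confirm against the original recurrence; the arithmetic below is exact, since $5 \mid 6^k - 1$:

```python
def expected_tosses(k):
    # closed form E[N_k] = (6/5)(6^k - 1), exact in integers
    return 6 * (6 ** k - 1) // 5

# the recurrence E[N_k] = 6 (E[N_{k-1}] + 1) with E[N_1] = 6
a = 6
for k in range(2, 12):
    a = 6 * (a + 1)
    assert a == expected_tosses(k)

assert [expected_tosses(k) for k in range(1, 5)] == [6, 42, 258, 1554]
```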
{ "language": "en", "url": "https://math.stackexchange.com/questions/4106228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is meant by $\log_2 X \sim N(0,n) \implies \log X \sim N(0, n (\log 2)^2)$? I see the following written in my class notes: $\log_2 X \sim N(0,n) \implies \log X \sim N(0, n (\log 2)^2)$ But I don't understand what they did here. It looked like they did some kind of conversion of the base on the log, but what is the base on the log on the RHS?
Let $Y = \log_2X \sim N(0, n)$. By using a formula $$ \log_2X = \frac{\log X}{\log 2}, \text{ where } \log X \text{ is the natural }\log \text{ of }X, $$ one obtains that $$ \log X = \log{2}\cdot \log_2{X} = \log{2}\cdot Y. $$ Then, we can use a property of the Normal distribution $$ \text{If }Y \sim N(\mu, \sigma^2), \text{ then }aY\sim N(a\mu, a^2\sigma^2). $$ In our case, we have $$ Y = \log_2X \sim N(0, n), a = \log{2}\Rightarrow aY = \log{X} \sim N(0, n(\log{2})^2). $$
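A quick seeded simulation illustrates the scaling step (the choices $n=4$ and the sample size are arbitrary, made only for this check):

```python
import math
import random

random.seed(0)
n = 4                                     # Var(log2 X) = n
ln2 = math.log(2)
ys = [random.gauss(0, math.sqrt(n)) for _ in range(200_000)]   # Y = log2(X)
logs = [ln2 * y for y in ys]                                   # log X = (log 2) Y

mean = sum(logs) / len(logs)
var = sum((v - mean) ** 2 for v in logs) / len(logs)
assert abs(var - n * ln2 ** 2) < 0.05     # sample variance ≈ n (log 2)^2 ≈ 1.922
```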
{ "language": "en", "url": "https://math.stackexchange.com/questions/4106341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Arranging numbers 1 to 1000 such that the difference of two adjacent numbers is not a square nor a prime number I've been working on the following problem for a while: Prove that it's possible to arrange the numbers 1 to 1000 in an order such that each number appears once and $|x_j - x_{j+1}|$ is not a perfect square nor a prime number. The idea is just to prove that such an ordering exists, not to explicitly construct it (thankfully). My first thought was to try to construct an explicit ordering of 1 to 10 that satisfies the given constraints and then see if I could extrapolate a pattern. Unfortunately, I wasn't able to do this (5 minus any other number in that sequence gives either a prime or a perfect square, I believe...) I found online that there are 168 primes and 31 perfect squares between 1 and 1000, and this seems like potentially useful information. However, I'm still not able to connect the dots and figure out how to think about this problem ... Any help would be much appreciated.
You can also construct it: start with $1$ and keep adding $6$, i.e. $1,7,13,\dots$, until you hit $997$; then go back to $3$ and keep adding $6$ until you get to $999$; then go back to $5$ and repeat until $995$; then go back to $2$ and repeat until $998$; then go back to $4$ and repeat until $1000$; finally go back to $6$ and repeat until $996$, and you're finished. The difference between consecutive terms is always $6$, $993$ or $994$; those clearly aren't primes, and $993$ and $994$ aren't perfect squares since they are $>31^2$ (and $32^2>1000$).
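The construction is easy to verify by machine: build the six arithmetic runs and check every consecutive difference:

```python
from math import isqrt

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, isqrt(n) + 1))

seq = []
for start in (1, 3, 5, 2, 4, 6):       # the six runs of the construction
    seq += list(range(start, 1001, 6))

assert sorted(seq) == list(range(1, 1001))       # a permutation of 1..1000
diffs = {abs(a - b) for a, b in zip(seq, seq[1:])}
assert diffs == {6, 993, 994}
assert all(not is_prime(d) and isqrt(d) ** 2 != d for d in diffs)
```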
{ "language": "en", "url": "https://math.stackexchange.com/questions/4106602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove the limit of a piecewise defined sequence Is my approach correct? Define the sequence $\{b_{n}\}$ as $$ b_n =\begin{cases} \frac{n^4 + 1}{n^4}& \text{if $n$ is even}\\ \frac{n^2 - 1}{n^2} & \text{if $n$ is odd}\end{cases}.$$ Prove that $\lim b_n = 1$ Proof. We partition $\{b_{n}\}$ into subsequences $\{b_{2n}\}$ and $\{b_{2n-1}\}$ where $$\{b_{2n}\} = \{\frac{n^4 + 1}{n^4}\}$$ $$\{b_{2n-1}\} = \{\frac{n^2- 1}{n^2}\}.$$ Taking the limits, $$\lim b_{2n} = \lim \frac{n^4 + 1}{n^4} = \lim 1 + \frac{1}{n^4} = 1$$ $$\lim b_{2n-1} = \lim \frac{n^2 - 1}{n^2} = \lim 1 - \frac{1}{n^2} = 1.$$ Let $\varepsilon > 0$. Then there exists $N_{1},N_{2} \in \mathbb{N}$ s.t. $$|b_{2n} - 1| < \varepsilon,\forall 2n \geq N_{1}$$ $$|b_{2n-1} - 1| < \varepsilon,\forall 2n-1 \geq N_{2}.$$ Let $N = \max\{N_{1},N_{2}\}$ and $n \geq N.$ If $n$ is even, then $n = 2j$ for some $j \in \mathbb{N}$ and $$|b_{n} - 1| = |b_{2j} - 1| < \varepsilon.$$ If $n$ is odd, then $n = 2j - 1$ for some $j \in \mathbb{N}$ and $$|b_{n} - 1| = |b_{2j - 1} - 1| < \varepsilon.$$ From both cases, $|b_{n} - 1| < \varepsilon. \square$
Yes, I do not see any particular mistake. The key point is indeed to notice that the two subsequences may have different rates of convergence, and hence if you take $N$ to be the maximum of $N_1$ and $N_2$, then you are sure that the definition of convergence holds in both cases, hence for all terms with index greater than $N$.
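A quick numerical sanity check of the argument, using the cruder unified bound $|b_n - 1| \le 1/n^2$ (which covers both subsequences at once):

```python
def b(n):
    # the piecewise sequence from the question
    return (n ** 4 + 1) / n ** 4 if n % 2 == 0 else (n ** 2 - 1) / n ** 2

eps = 1e-6
N = int(eps ** -0.5) + 1    # |b_n - 1| <= 1/n^2 < eps once n > 1/sqrt(eps)
assert all(abs(b(n) - 1) < eps for n in range(N, N + 10_000))
```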
{ "language": "en", "url": "https://math.stackexchange.com/questions/4106775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $\arctan x$ cannot be expressed as a rational function In my Calculus class the teacher proposed as an exercise to prove that $\arctan(x)$ cannot be expressed as a rational function (fraction of polynomials) in any closed interval $[a,b]$. I've been thinking about the problem and I haven't been able to prove it by "analysis" arguments but using some concepts of an abstract algebra course I took last semester. I've done it this way: Suppose there exist $p(x),q(x)$ such that $\arctan(x) = \frac{p(x)}{q(x)}$ in the interval $[a,b]$, where $\gcd(p,q)=1$. Thus, $\left(\frac{p(x)}{q(x)}\right)' = \frac{1}{1+x^2}$. If we expand the expression of the derivative of a quotient and manipulate the equality, we end up with: $$(1+x^2)(p'(x)q(x)-p(x)q'(x))=q^2(x)$$ Thus, $1+x^2$ divides $q^2(x)$ and as $1+x^2$ is irreducible over $\mathbb R[x]$, $1+x^2$ divides $q(x)$. Let $q(x)$ have the prime factor $1+x^2$ "n" times. From the equation above, we know that $q^2(x)$ has the factor $(1+x^2)$ "2n" times; hence $(p'(x)q(x)-p(x)q'(x))$ has the factor $1+x^2$ exactly "2n-1" times. Note that for $n\geq 1$, $2n-1 \geq n$. Hence, the factor $(1+x^2)^n$ must divide $(p'(x)q(x)-p(x)q'(x))$ and it also divides $q(x)$; thus $(1+x^2)^n$ must divide $p(x)q'(x)$. Suppose that $1+x^2$ is not a factor of $p(x)$. Thus, $(1+x^2)^n$ must divide $q'(x)$. However, note that: $$ q'(x) = ((x^2+1)^n \cdot r(x))' = 2nx(x^2 + 1)^{n-1} r(x) + (x^2+1)^n r'(x),\ \gcd(r,1+x^2)=1 $$ Therefore, $(x^2+1)^n$ divides $q'(x)$ if and only if $(x^2+1)^n$ divides $2nx(x^2 + 1)^{n-1} r(x)$. Then, $x^2 + 1$ must divide $2nx\,r(x)$, and since $x^2+1$ is coprime to $2nx$, it must divide $r(x)$, which is a contradiction. Hence, $x^2 +1$ must also be a factor of $p(x)$. However, by hypothesis $\gcd(p,q)=1$ while $x^2+1$ divides both $p,q$. This is a contradiction, hence there do not exist such polynomials. First of all, I would appreciate if anyone could tell me if this proof has any errors. 
Moreover, I would appreciate any other approach which uses theorems of elementary one-variable calculus instead of divisibility. NOTE: In the case of $\mathbb R$, this problem is (almost) trivial. As the arctangent has a horizontal asymptote at both $\pm \infty$, it follows that if $\arctan(x) = \frac{p(x)}{q(x)}$ for certain $p,q$, they must satisfy $\deg(p) = \deg(q)$. Moreover, $x=0$ is the unique real root of $p(x)$, with multiplicity 1, as $\arctan(0) = p(0)/q(0) = 0$ and $\arctan'(0) = 1/(1+0^2) = 1 \not = 0$. Thus, $\deg(p)$ must be odd. However, all the roots of $q(x)$ are non-real, as $\arctan(x)$ must be continuous on $\mathbb R$. As non-real roots of a real polynomial come in conjugate pairs, it follows that $\deg(q)$ must be even. Then $\deg(p) \not = \deg(q)$, which is a contradiction. Conclusions: I have already proposed a solution to my Calculus professor based on some arguments given in this page and adding some details: Suppose there exist $p,q$ such that $\arctan(x) = p(x)/q(x)$ and $\gcd(p,q)=1$ in the interval $[a,b]$. Therefore, they must have the same derivative, hence: $$ (1+x^2)(p'q-pq') = q^2, \forall x \in [a,b]$$ As an equality of polynomials on an interval $[a,b]$, $a<b$, must also hold on all of $\mathbb R$, the identity above holds for every $x \in \mathbb R$; consequently, $(\frac{p}{q})' = \frac{1}{1+x^2}$ wherever $q(x) \neq 0$. We are going to prove that $q(x)$ has no real roots, hence $p(x)/q(x)$ is continuous in $\mathbb R$. Suppose $\alpha \in \mathbb R$ is a root of $q$. As $q(x)$ is a non-zero polynomial, it has finitely many roots, so there exists $\delta > 0$ such that for any point in $[\alpha - \delta,\alpha + \delta]$ except $x= \alpha$, the equality $(\arctan x)' - (\frac{p}{q})' = 0$ holds. Thus, there exists a constant $c \in \mathbb R$ such that $\frac{p}{q} = \arctan x + c$ on $(\alpha, \alpha+\delta]$, say. Thus, $$ \lim_{x \to \alpha^+} \frac{p}{q} = \lim_{x \to \alpha^+} (\arctan (x) + c) = \arctan(\alpha) + c, $$ as $\arctan$ is a continuous function. 
Thus, $\alpha$ must be a root also of $p(x)$ (if not, $\lim_{x \to \alpha} \frac{p}{q} = \infty$). But as we supposed that $p,q$ are coprime polynomials, this is a contradiction. Hence, $q(x)$ cannot have any real root. Hence, $p(x)/q(x)$ is continuous in $\mathbb R$ and the equality $(\arctan x)' = (\frac{p}{q})'$ holds in $\mathbb R$. Therefore, in $\mathbb R$, $\frac{p}{q} = \arctan x + c$, and in $[a,b]$, $\arctan x = \frac{p}{q}$. Thus $c = 0$, and we get that $\forall x \in \mathbb R$, $\arctan x = \frac{p}{q}$. But we have already proved that this is a contradiction, as desired. Further Questions: The same procedure can be used to prove that $\log x$ cannot be expressed as a rational function. Moreover, I have come up with two questions I would try to solve: * Can the argument applied here be generalised to an arbitrary function under certain hypotheses ($f$ continuous and $f'$ a rational function, for example)? * Can we find a similar argument to prove that $\sin x$ and $\cos x$ cannot be expressed in any interval as a rational function? I would try to publish as soon as possible the conclusions of these questions.
Write $p(x)=q(x)\arctan x$ and consider power expansions. Since $p(x),q(x)$ are polynomials, the Taylor series of $\arctan x$ must be a power series, so $|x|\le1$. This proves that $\arctan x$ is not a rational function on $|x|>1$. Since $\arctan x$ is a strictly increasing, bounded function over the reals, we must have $\deg p=\deg q=n$. (*) Thus $$p_0+\cdots+p_nx^n=(q_0+\cdots+q_nx^n)\left(x-\frac{x^3}3+\frac{x^5}5-\cdots\right).$$ Equating $x^{n+1}$ terms yields $$0=q_n-\frac13q_{n-2}+\frac15q_{n-4}-\cdots_f$$ where $\cdots_f$ denotes a finite continuation (up to either $q_1$ or $q_0$ depending on the parity of $n$). Equating $x^{n+3}$ terms yields $$0=-\frac13q_n+\frac15q_{n-2}-\frac17q_{n-4}+\cdots_f.$$ Going indefinitely, we obtain $$\begin{bmatrix}1&-1/3&1/5&\cdots_f\\-1/3&1/5&-1/7&\cdots_f\\1/5&-1/7&1/9&\cdots_f\\\vdots_i&\vdots_i&\vdots_i&\ddots_{i,f}\end{bmatrix}\begin{bmatrix}q_n\\q_{n-2}\\q_{n-4}\\\vdots_f\end{bmatrix}={\bf 0}.$$ where $\cdots_i$ denotes an infinite continuation. Since each row is linearly independent of any other row, we obtain a contradiction as there are infinitely many such equations defining a finite number of variables. Thus $\arctan x$ is not a rational function on $|x|\le1$. (*) The deduction $\deg p=\deg q=n$ is actually not needed, but only used to make the calculations convenient. To equate the terms such that the LHS is zero, we can take $x^{\ge m+n-1}$ terms and the same conclusion follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4106935", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 1 }
Markov Chain Derivation question This page defines the Markov Property as follows. Does anyone know how form (b) becomes (c)? Is it due to $(X_{n-1} = i_{n-1}) \supset (X_{n-1} = i_{n-1}) \cap ... \cap (X_0 = i_0)$? Is it due to independence of the conditional events? a) $$ \mathbb{P}(X_0 = i_0, X_1 = i_1, X_2 = i_2, ..., X_n = i_n) $$ b) $$ = \mathbb{P}(X_0 = i_0) \mathbb{P}(X_1 = i_1 | X_0 = i_0) \mathbb{P}(X_2 = i_2 | X_1 = i_1, X_0 = i_0) ... \mathbb{P}(X_n = i_n | X_{n-1} =i_{n-1} , ..., X_1 = i_1, X_0 = i_0) $$ c) $$ = \mathbb{P}(X_0 = i_0) \mathbb{P}(X_1 = i_1 | X_0 = i_0) \mathbb{P}(X_2 = i_2 | X_1 = i_1) ... \mathbb{P}(X_n = i_n | X_{n-1} = i_{n-1}) $$
By the definition of a Markov chain, the probability of the $k$ th state depends only on the $k-1$ th state, it is independent of the $k-2, k-3, \cdots, 0$ th states. This can be explained best with the help of a state transition diagram. For example, in this Markov chain, there are no arrows linking state $2$ to state $0$, without passing through state $1$. As the diagram clearly shows, the $k$ th state is linked to the $k-1$ th state only. Hence, $$\mathbb{P}(X_k = i_k|X_{k-1}=i_{k-1}, \ldots, X_0=i_0) = \mathbb{P}(X_k = i_k | X_{k-1} = i_{k-1})$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4107024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\int_{0}^{1}f(x)g(x)=0 \implies f(x)=0 \ \forall x \in [0,1]$ Let $f:[0,1]\to \mathbb{R}$ be a continuous function. If $\int_{0}^{1}f(x)g(x)=0$ for all continuous functions $g(x)$, then $f(x)=0 \ \forall x \in [0,1]$. I would like to know if my proof holds, please. As the statement holds for every continuous function $g(x)$ on $[0,1]$, it holds in particular for $g(x)=f(x)$. So, we have to show that $\int_{0}^{1}f^2(x)=0 \implies f(x)=0 \ \forall x \in [0,1]$. First, as $f$ is continuous on $[0,1]$, $f^2(x)$ is continuous on $[0,1]$ as well. Therefore, $f^2(x)$ has a primitive $F(x)$ (which is differentiable on $[0,1]$). Thus, $F(1)-F(0)=0\implies F(1)=F(0)$ and $F'(x)=f^2(x)\ge 0$. As $F'(x)\ge 0$, $F(x)$ is increasing. We will now show that $F(x)$ is constant. Consider $0\le x\le 1$. As $F$ is increasing, $F(0)\le F(x)\le F(1)=F(0)$. So, $F(x)=F(1)=F(0)=\text{const}$ for all $x\in [0,1]$. We conclude that $F'(x)=0=f^2(x) \implies f(x)=0$ as wanted.
Just a suggestion : I guess you know that if $f\geq 0$ is continuous, then $\int_0^1 f=0\implies f=0$. Using that : $$\forall g\in \mathcal C[0,1], \int_0^1 fg=0\implies \int_0^1 f^2=0\implies f=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4107132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that the sequence $\{\int_{1}^{n} \frac{\cos t}{t^2} dt\}$ is Cauchy Here is my attempt but I got stuck. Proof. Let $\varepsilon > 0$. Choose $N$ (any hints for this) and let $m > n \geq N$. Then \begin{align*}& \left|\int_{1}^{m} \frac{\cos t}{t^2} dt - \int_{1}^{n} \frac{\cos t}{t^2} dt\right|=\\ & = \left|-\int_{m}^{1} \frac{\cos t}{t^2} dt - \int_{1}^{n} \frac{\cos t}{t^2} dt\right| =\\&=\left|-\int_{m}^{n} \frac{\cos t}{t^2} dt\right| = \left|\int_{m}^{n} \frac{\cos t}{t^2} dt\right|=\\ &= \left|\frac{\cos(c)}{c^2}(n-m)\right| = \left|\frac{\cos(c)}{c^2}\right||n-m| =\\&= \frac{|\cos(c)|}{c^2}(m-n)\end{align*} $\exists c \in (n,m)$ by Mean Value Theorem. Any hints on how to proceed with this?
If $m\geqslant n$, then\begin{align}\left|\int_1^m\frac{\cos t}{t^2}\,\mathrm dt-\int_1^n\frac{\cos t}{t^2}\,\mathrm dt\right|&=\left|\int_n^m\frac{\cos t}{t^2}\,\mathrm dt\right|\\&\leqslant\int_n^m\left|\frac{\cos t}{t^2}\right|\,\mathrm dt\\&\leqslant\int_n^m\frac1{t^2}\,\mathrm dt\\&=\frac1n-\frac1m\end{align}and, of course, if $m<n$, then $\left|\int_1^m\frac{\cos t}{t^2}\,\mathrm dt-\int_1^n\frac{\cos t}{t^2}\,\mathrm dt\right|\leqslant\frac1m-\frac1n$. So, if $\varepsilon>0$, just take $N\in\Bbb N$ such that $\frac1N<\varepsilon$. Then, if $m,n\in\Bbb N$ and $m,n\geqslant N$, then$$\left|\int_1^m\frac{\cos t}{t^2}\,\mathrm dt-\int_1^n\frac{\cos t}{t^2}\,\mathrm dt\right|\leqslant\left|\frac1n-\frac1m\right|\leqslant\frac1N<\varepsilon.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4107302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $D$ is a diagonal matrix, then why does $e^{D}$ turn out to be like this... If $D$ is a diagonal matrix, for instance $D=\begin{pmatrix}a&0\\0&b\end{pmatrix}$. I'm wondering why $e^{D}=\begin{pmatrix} e^{a} &0\\0& e^{b} \end{pmatrix}$. I already know that $e^{x}= \sum_{n=0}^{\infty}\frac{x^{n}}{n!}$. Therefore, $$e^{D}= \sum_{n=0}^{\infty}\frac{D^{n}}{n!}.$$ Of course, $\frac{D^{n}}{n!}$ is a diagonal matrix for every $n$, so this is a series of diagonal matrices. But I cannot figure out why this series, $\sum_{n=0}^{\infty}\frac{D^{n}}{n!}$, turns out to be $\begin{pmatrix} e^{a} &0\\0& e^{b} \end{pmatrix}$. I cannot see the trick. Thanks.
We have that $$e^{D}= \sum_{n=0}^{\infty}\frac{D^{n}}{n!}.$$ And since $\frac{D^{n}}{n!}=\begin{pmatrix} \frac{a^{n}}{n!} &0\\0& \frac{b^{n}}{n!} \end{pmatrix}$ for every $n$, then $\sum_{n=0}^{\infty}\frac{D^{n}}{n!}= \begin{pmatrix} \sum_{n=0}^{\infty} \frac{a^{n}}{n!} &0\\0& \sum_{n=0}^{\infty} \frac{b^{n}}{n!} \end{pmatrix}=\begin{pmatrix} e^a &0\\0& e^{b} \end{pmatrix}$.
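A quick numerical illustration of why the series collapses entrywise — a sketch in plain Python for the $2\times 2$ case with made-up diagonal entries $a=1.5$, $b=-0.5$:

```python
import math

def mat_mul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a, b = 1.5, -0.5            # arbitrary diagonal entries
D = [[a, 0.0], [0.0, b]]

S = [[0.0, 0.0], [0.0, 0.0]]
term = [[1.0, 0.0], [0.0, 1.0]]   # D^0 / 0! = I
for n in range(1, 30):            # partial sums of sum_n D^n / n!
    for i in range(2):
        for j in range(2):
            S[i][j] += term[i][j]
    # next term: (D^n / n!) = (D^(n-1) / (n-1)!) * D / n
    term = [[x / n for x in row] for row in mat_mul(term, D)]

# Every D^n / n! is diagonal, so the off-diagonal entries stay exactly 0
# while each diagonal entry accumulates its own scalar series e^a, e^b.
```

The off-diagonal entries are never touched by any term of the series, which is exactly the entrywise argument above.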
{ "language": "en", "url": "https://math.stackexchange.com/questions/4107491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the compactness theorem does not apply to infinite logic I am trying to understand why the compactness theorem does not apply in infinite logic and I wonder if anyone has a good example and explanation for this? Edit: By infinite logic I mean logic that allows infinitely many conjunctions and disjunctions. More exactly: * *$M \models \bigvee \Gamma$ iff $M \models \varphi$ for some sentence $\varphi \in \Gamma$. *$M \models \bigwedge \Gamma$ iff $M \models \varphi$ for every sentence $\varphi \in \Gamma$. The compactness theorem: The compactness theorem states that a set of first-order sentences has a model if and only if every finite subset of it has a model. Thanks in advance!
This is called "infinitary logic." For every pair of infinite cardinals $\kappa\ge\lambda$ there is a logic $\mathcal{L}_{\kappa,\lambda}$ gotten by closing first-order logic under conjunctions and disjunctions of size $<\kappa$ and universal and existential quantification over tuples of length $<\lambda$. The most common infinitary logics are of the form $\mathcal{L}_{\kappa,\omega}$ - so only finitary quantification is allowed, although we permit "big" Boolean combinations. The logic $\mathcal{L}_{\omega,\omega}$ is just first-order logic itself. The first infinitary logic is $\mathcal{L}_{\omega_1,\omega}$, where we expand first-order logic by allowing countably infinite conjunctions and disjunctions. Here we already see a failure of compactness: consider the sentence $$(*)\quad\bigvee_{n\in\mathbb{N}}[\forall x_1,...,x_n(\bigvee_{1\le i<j\le n}x_i=x_j)].$$ This is true in a structure iff that structure is finite. But this yields a counterexample to compactness (think about the proof that every first-order theory with arbitrarily large finite models has an infinite model): Consider the theory $\{(*)\}\cup\{\mbox{"There are at least $n$ elements"}: n\in\mathbb{N}\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4107615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is it correct to say that an integer is congruent with a p-adic number $\text{mod }p^k$? Is there a standard way to describe a class of $p$-adic numbers that share the same final digits? E.g., if $x$ is a $p$-adic number and the last four digits of its $p$-adic representation are $\ldots abcd$, can we say that $x \equiv {c*p + d} \pmod {p^2}$? Furthermore, what if $x$ is a $p$-adic fraction, and its last four digits are $\ldots ab.cd$? Surely we can't say $x \equiv \frac{c*p+d}{p^2} \pmod {p^{-2}}$?
The analogy with ordinary integers and rational numbers works well. To your first question: yes, you can totally say that. Just like with integers, $x \equiv y \pmod{m}$ means nothing more nor less than $m$ divides $x-y$, and “divides” means that $x-y=mk$ for some integer $k$. In your example, $p^2$ divides the difference between the given number and $cp+d$, because that difference is a $p$-adic integer with no constant term and no $p^1$ term: all of its terms involve $p^2$ and higher powers. To your second question, consider what you’d say with ordinary rational numbers. If $q=17.25$ we don’t say that $q$ is congruent to $0.25$ modulo $\frac{1}{100}$. We can say that $q$ is congruent to $0.25$ modulo $1$, which means “up to adding some integer”. The same is true in the field of $p$-adic numbers. The rational numbers $\mathbb{Q}$ and the $p$-adic numbers $\mathbb{Q}_p$ are fields, so there’s no point in talking about congruence in the same way we do with rings such as the integers $\mathbb{Z}$ or the $p$-adic integers $\mathbb{Z}_p$. What we can say is that two elements differ by an integer (a $p$-adic integer, in the $p$-adic case).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4107758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the equation $\frac{1}{2} =4x^3-3x$ has no rational root. Suppose it has a rational root, then $x=\frac{p}{q}$, where $q\neq 0$ and $(p, q)=1$. Then the equation can be written as $$q^3=2(4p^3-3pq^2)$$ So $q$ is even, which forces $p$ to be odd. So we can substitute $p=2k+1$ and look for a contradiction. Is this approach correct? Is there any easier way to do it?
You can apply the Rational root theorem to the equation $$ 8x^3 - 6x -1 = 0 \, . $$ It states that if $x=p/q$ with integers $p, q$ is a rational solution of that equation then $p$ is a factor of $a_0 = 1$, and $q$ is a factor of $a_3 = 8$. That gives (a small number of) candidates for rational roots which you can exclude by substituting them into the equation.
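By the rational root theorem the only candidates are $\pm1, \pm\frac12, \pm\frac14, \pm\frac18$, and exact arithmetic rules them all out — a quick sketch in Python using exact fractions:

```python
from fractions import Fraction

# Candidates p/q with p dividing 1 and q dividing 8, per the rational root theorem
candidates = [Fraction(p, q) for p in (1, -1) for q in (1, 2, 4, 8)]
values = {x: 8 * x**3 - 6 * x - 1 for x in candidates}
# None of the eight candidate values is 0, so 8x^3 - 6x - 1 has no rational root
```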
{ "language": "en", "url": "https://math.stackexchange.com/questions/4108124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Evaluating $\frac{13}{1\cdot2\cdot3\cdot2}+\frac{26}{2\cdot3\cdot4\cdot4}+\frac{43}{3\cdot4\cdot5\cdot8}+\frac{64}{4\cdot5\cdot6\cdot16}+\cdots$ $$\frac{13}{1\cdot2\cdot3\cdot2}+\frac{26}{2\cdot3\cdot4\cdot4}+\frac{43}{3\cdot4\cdot5\cdot8}+\frac{64}{4\cdot5\cdot6\cdot16}+\cdots$$ I can reduce it to the general term, $$\sum_{r=1}^\infty \frac{2r^2 + 7r +4}{r(r+1)(r+2)2^r}$$ I don't know how to go about this any further though. I also ran this in python and the sum is exceeding $1.5$ for $10,000$ terms, which is weird since it should converge to $1.5$, so it makes me doubt if the general term I've written is correct.
Consider $$f(x)=\sum_{r=1}^\infty \frac{2r^2 + 7r +4}{r(r+1)(r+2)}x^r=\sum_{r=1}^\infty \left(\frac{1}{r+1}-\frac{1}{r+2}+\frac{2}{r} \right)x^r$$ $$f(x)=\frac{\log (1-x)}{x^2}+\frac{1}{x}-\frac{\log (1-x)}{x}-2 \log (1-x)-\frac{1}{2}$$ $$f\left(\frac{1}{2}\right)=\frac 3 2$$
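A quick numerical check of the closed form. Note the terms are all positive, so the partial sums approach $\frac32$ from below; if a direct summation appears to exceed $1.5$, the term being summed in that script probably differs from the general term above:

```python
# Partial sum of the series sum_{r>=1} (2r^2 + 7r + 4) / (r(r+1)(r+2) 2^r)
s = sum((2*r*r + 7*r + 4) / (r * (r + 1) * (r + 2) * 2**r)
        for r in range(1, 60))
# s agrees with 3/2 to roughly machine precision
```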
{ "language": "en", "url": "https://math.stackexchange.com/questions/4108260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How to project vector onto a plane but not along plane normal? In 3d space, is there any way to project a vector onto a plane, but along the UP direction (0,1,0) instead of the plane normal? If so, how do I do that and what is it called?
A picture would help, but this is what I think you are asking. Given the plane $ax+by+cz = d$ and the point $(p,q,r)$, you want to move the point in the direction $(0,1,0)$ until it meets the plane. Well that move will change only the value of $q$, To find the new value, solve $$ ap + b? + cr = d $$ for the value of $?$. There will be no solution if $(0,1,0)$ is parallel to the plane (unless the point is in the plane to begin with). In more generality, you are asking for the intersection of a line (through a given point in a given direction) with a plane.
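A small sketch of that computation in Python (the helper name is made up; the plane is $ax+by+cz=d$, the point $(p,q,r)$, the direction $(0,1,0)$):

```python
def project_along_y(point, plane):
    """Move `point` along (0, 1, 0) until it meets the plane ax + by + cz = d.

    `plane` is the coefficient tuple (a, b, c, d).
    """
    p, q, r = point
    a, b, c, d = plane
    if b == 0:
        # (0, 1, 0) is parallel to the plane: no intersection (or infinitely many)
        raise ValueError("direction (0, 1, 0) is parallel to the plane")
    return (p, (d - a * p - c * r) / b, r)

# point (1, 0, 2) onto the plane x + 2y + 3z = 6
pt = project_along_y((1.0, 0.0, 2.0), (1.0, 2.0, 3.0, 6.0))
# pt lies on the plane and differs from the input only in its y-coordinate
```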
{ "language": "en", "url": "https://math.stackexchange.com/questions/4108428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What does this notation mean: $\|\cdot \|_F$ For context, this paper defines the following equation as the accuracy metric for their function: $$ Acc(\hat{Y}) = 1-\frac{\|Y-\hat{Y}\|_F}{\|Y\|_F} $$ I assume it's probably just a weird notation the authors made up, but I want to make sure I'm not missing anything. Google has been no help, and I can't find any other mention of that symbol in the paper. Any help is appreciated!
It's very likely the Frobenius norm, sort of the fundamental norm (Euclidean norm) over matrices : https://mathworld.wolfram.com/FrobeniusNorm.html
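It is the square root of the sum of squared entries (in NumPy, `np.linalg.norm(A, 'fro')`, which is also the default for 2-D arrays). A small sketch of the paper's accuracy metric with made-up matrices:

```python
import math

def fro(M):
    """Frobenius norm: sqrt of the sum of squared entries."""
    return math.sqrt(sum(x * x for row in M for x in row))

Y    = [[1.0, 2.0], [3.0, 4.0]]   # "ground truth"
Yhat = [[1.1, 1.9], [3.0, 4.2]]   # "prediction"

diff = [[y - yh for y, yh in zip(ry, ryh)] for ry, ryh in zip(Y, Yhat)]
acc = 1 - fro(diff) / fro(Y)
# acc = 1 - sqrt(0.06) / sqrt(30) ≈ 0.955
```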
{ "language": "en", "url": "https://math.stackexchange.com/questions/4108600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does a nonzero value of artificial variables, after applying simplex method, show infeasible solution? In linear programming we sometimes use artificial variables for the simplex method when constraints are expressed as equalities. After a certain number of iterations of the simplex method we reach a point in which all coefficients are non negative. What does a nonzero value of the artificial variable indicate at that point? Infeasible solution? Error in the computation? Would you be able to illustrate with an example? Edit: This is an example. Initial simplex tableau on the right. I start the simplex method by choosing entering-leaving variable and I stop if there are no negative coefficients in the last row. What does a nonzero value for the artificial variable mean at this point?
My understanding is that you make an initial optimization pass with an objective function that maximizes the negative of the artificial variables. If these artificial variables are set to zero from that pass, then the problem is feasible, and you can then eliminate them and continue on with a second pass using the real objective function. If the artificial variables are not zero (after that first pass) then the original problem was not feasible. (I believe some texts describe this as Phase I and Phase II.) (Note that in CLRS (the textbook), instead of artificial variables, it prescribes replacing an equality constraint $ax=b$ with two counteracting inequality constraints $ax\le b$ and $-ax \le -b$)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4108757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are taking the real part and differentiating interchangeable? Heya all, this is a quick question which I think is true: consider a differentiable function $f: \mathbb{R} \to \mathbb{C}$; then is $\operatorname{Re}\{\frac{d}{dx}f(x) \} = \frac{d}{dx}\operatorname{Re}\{f(x)\}$? In words: Is differentiating the real part of a function the same as taking the real part of the derivative? Every example I have tried so far ends up holding!
Yes, because differentiating $f:\mathbb R\to\mathbb C$ is defined as $$\frac{\mathrm d}{\mathrm dx}f(x) := \frac{\mathrm d}{\mathrm dx}(\mathfrak{Re}f)(x)+i\frac{\mathrm d}{\mathrm dx}(\mathfrak{Im}f)(x)$$ Therefore, $$\mathfrak{Re}\left(\frac{\mathrm d}{\mathrm dx}f(x)\right) = \frac{\mathrm d}{\mathrm dx}(\mathfrak{Re}f)(x)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4108939", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove/disprove this set of functions is equicontinuous I'm trying to figure out whether the set of functions $$\{ g_a : a \in [1, \infty)\; in\; C[-1,1], \ g_{a}(x) = \frac {1}{(x-a)^{2}+1} \}$$ is equicontinuous. I have tried to manipulate the inequality $|g_{a}(x) - g_{a}(y)|$ and I got that $|g_{a}(x) - g_{a}(y)| \le |y-x| |2a+2|$, but this doesn't really give me a value of $\delta$ that works for every $a$. I'm stuck. I also don't really have an intuition for whether it is equicontinuous or not... Is there anything that should hint this to me?
Choose an $\epsilon > 0$ and take $\delta=\frac{\epsilon}{2}$. Let $|y-x|<\frac{\epsilon}{2}$. Then: \begin{align} |g_a(x)-g_a(y)|&=\Big|\frac{1}{(x-a)^2+1}-\frac{1}{(y-a)^2+1}\Big| \\ &=\frac{|y-x|~|y+x-2a|}{[(x-a)^2+1]~[(y-a)^2+1]} \\ & \leq \frac{|y-x|~(|y-a|+|x-a|)}{[(x-a)^2+1]~[(y-a)^2+1]} \\ &=|y-x|~\Big[\frac{|x-a|}{[(x-a)^2+1]~[(y-a)^2+1]}+\frac{|y-a|}{[(x-a)^2+1]~[(y-a)^2+1]}\Big] \end{align} Now if $|x-a|<1$ then $(x-a)^2+1-|x-a|\geq0 \Rightarrow \frac{|x-a|}{[(x-a)^2+1]} \leq 1 \Rightarrow \frac{|x-a|}{[(x-a)^2+1]~[(y-a)^2+1]} \leq 1$. If $|x-a|\geq 1$ then $|x-a| \leq (x-a)^2 \leq (x-a)^2 +1 \Rightarrow \frac{|x-a|}{[(x-a)^2+1]} \leq 1 \Rightarrow \frac{|x-a|}{[(x-a)^2+1]~[(y-a)^2+1]} \leq 1$. Thus in any case we have $ \frac{|x-a|}{[(x-a)^2+1]~[(y-a)^2+1]} \leq 1$. Similarly for the $y$-related expression. Now from the above we get \begin{equation} |g_a(x)-g_a(y)| \leq 2~|y-x|<\epsilon \end{equation} for all $a$.
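The point of the argument is that the Lipschitz bound $|g_a(x)-g_a(y)|\le 2|x-y|$ is uniform in $a$. A quick randomized sanity check of that bound (a sketch, not a proof):

```python
import random

def g(a, x):
    return 1.0 / ((x - a) ** 2 + 1.0)

random.seed(0)
for _ in range(10_000):
    a = random.uniform(1.0, 100.0)     # any a in [1, oo) behaves the same way
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(-1.0, 1.0)
    # uniform Lipschitz bound derived above (small slack for float rounding)
    assert abs(g(a, x) - g(a, y)) <= 2.0 * abs(x - y) + 1e-12
```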
{ "language": "en", "url": "https://math.stackexchange.com/questions/4109116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\frac{5i}{2+i}=1+2i$ Since $5i=5e^{i \frac{\pi}{2}}$ And $2+i=\sqrt{5}e^{ix_1}$ where $x_1=\arctan \frac{1}{2}$, we have $\frac{5i}{2+i}=\sqrt{5}[\cos(\frac{\pi}{2}-x_1)+i\sin(\frac{\pi}{2}-x_1)]$ Now from here how do I continue? P.s:I know how to solve this if it wasn't asked to solve using polar forms.
Hint $$\begin{align} \cos\left({\pi \over 2}-x\right)&=\sin{x}\\ \sin\left({\pi \over 2}-x\right)&=\cos{x}\\ \tan\left({\pi \over 2}-x\right)&={1\over\tan{x}} \end{align}$$ So now let's develop the hint from where you left it, i.e. $${5i\over 2+i}=\sqrt{5}\left(\cos\left({\pi\over 2}-x\right)+i\sin\left({\pi\over 2}-x\right)\right)=a+ib$$ So $a^2+b^2=5$ and $${b\over a}=\tan\left({\pi\over 2}-x\right)={1\over \tan{x}}=2$$ And so $5a^2=5$ and $a=\pm 1$, $b=\pm 2$ The negative solutions are to be excluded because $0\lt {\pi\over 2}-x\lt {\pi\over 2}$
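Both routes are easy to sanity-check numerically with Python's built-in complex type and `cmath`:

```python
import cmath
import math

z = 5j / (2 + 1j)                 # direct complex division
# polar route: modulus sqrt(5), argument pi/2 - arctan(1/2)
w = cmath.rect(math.sqrt(5), math.pi / 2 - math.atan(0.5))
# both equal 1 + 2i
```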
{ "language": "en", "url": "https://math.stackexchange.com/questions/4109471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Evaluate:- $\frac{[4 + \sqrt{15}]^{3/2} + [4 - \sqrt{15}]^{3/2}}{[6 + \sqrt{35}]^{3/2} - [6 - \sqrt{35}]^{3/2}}$ Evaluate:- $\dfrac{[4 + \sqrt{15}]^{3/2} + [4 - \sqrt{15}]^{3/2}}{[6 + \sqrt{35}]^{3/2} - [6 - \sqrt{35}]^{3/2}}$ What I Tried:- Let $a = 4 , b = \sqrt{15} , c = 6, d= \sqrt{35}$. Then I get :- $$\rightarrow \frac{[a + b]^{3/2} + [a - b]^{3/2}}{[c + d]^{3/2} - [c - d]^{3/2}}$$ Now I can apply the factorizations of $a^3 + b^3$ and $c^3 - d^3$ to the square roots $$\rightarrow \frac{[(a + b)^{1/2} + (a - b)^{1/2}][(a + b) - \sqrt{a^2 - b^2} + (a - b)]}{[(c + d)^{1/2} - (c - d)^{1/2}][(c + d) + \sqrt{c^2 - d^2} + (c - d]}$$ $$\rightarrow \frac{7[(a + b)^{1/2} + (a - b)^{1/2}]}{13[(c + d)^{1/2} - (c - d)^{1/2}]}$$ From here, I do not know how to proceed. Can anyone help me?
$$\sqrt{8+2\sqrt{15}}=\sqrt5+\sqrt3$$
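A quick numerical check of the hint, together with the analogous denesting $\sqrt{12+2\sqrt{35}}=\sqrt7+\sqrt5$ for the denominator; numerically the whole expression comes out to $\frac{7}{13}$ (this value is my check, not part of the original hint):

```python
import math

# the denesting from the hint: sqrt(8 + 2*sqrt(15)) = sqrt(5) + sqrt(3)
assert abs(math.sqrt(8 + 2 * math.sqrt(15)) - (math.sqrt(5) + math.sqrt(3))) < 1e-12

num = (4 + math.sqrt(15)) ** 1.5 + (4 - math.sqrt(15)) ** 1.5
den = (6 + math.sqrt(35)) ** 1.5 - (6 - math.sqrt(35)) ** 1.5
# num / den ≈ 0.538461... = 7/13
```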
{ "language": "en", "url": "https://math.stackexchange.com/questions/4109671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
What distinguished the Möbius strip from the cylinder as fibre bundles? I'm a physicist trying to understand fibre bundles, and I think I'm pretty happy with the wikipedia definition "a space that is locally a product space, but globally may have a different topological structure". I think I also have a grasp, in the general and tangent cases, on the different objects in the fibre bundle: $(E, \pi, M, F, G)$ taken singularly and abstractly. However, I'm having trouble seeking what tells apart two fibre bundles $E$, the Möbius strip and the cylinder. Both have base $M=S^1$ and fibre $F=$ (a segment), so what's different in the two of them? Is it the projection $\pi$, the structure group $G$ or maybe trivially just the total space $E$?
The direct answer to your question about the difference in the tuple you considered is the following: usually the fibre bundle is not defined as the tuple of objects you wrote. The fiber bundle usually is defined to be just $\pi$, and the other objects are named “total space”, “basis” and “fibre”. But what is fundamental here is the map $\pi$. Indeed in the definition of fibration one has conditions on that particular map. So, yes, the main difference stands in how the map $\pi$ is defined, giving rise to different total spaces even with the same base and the same fiber. Even topologically, they are different objects: we cannot find an homeomorphism between them. For example notice that one is orientable while the other is not (recalling that orientability of a manifold is a property that is preserved under homeomorphisms; like compactness: if $X$ is compact and $X$ is homeomorphic to $Y$, then $Y$ must be compact as well). Locally they can be considered the same, but notice that their difference is topological, which means a difference in their global structure. Indeed, this is the same difference there is between a cartesian product of the base and the fiber and the whole structure of fiber bundle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4109814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
Interesting Integral including $\ln x$ I would like to evaluate this integral: $$\int_0^1 \frac{\sin(\ln(x))}{\ln(x)}\,dx$$ I tried a lot, starting with integration by parts, and it got quite lengthy and never-ending. Then I thought maybe the expansion of $\sin x$ would help, but it didn't seem to lead anywhere: it's an infinite series and there's no way here that it will end. It would be highly appreciated if you could give some hints to solve it. I am a high school student, so I expect it to be simple but tricky :-)
Observe that $\displaystyle \frac{\sin(\ln(x)) }{\ln{x}} = \int_0^1 \cos(t \ln{x}) \, \mathrm dt$. Then: $$\begin{aligned} \int_0^1\frac{\sin(\ln(x)) }{\ln{x}}\, \mathrm dx &= \int_0^1 \int_0^1 \cos(t\ln{x})\;{\mathrm dt}\;{\mathrm dx} \\& = \ \int_0^1 \int_0^1 \cos(t\ln{x})\;{\mathrm dx}\;{\mathrm dt} \\&= \int_0^1 \frac{1}{t^2+1} \;{\mathrm dt} \\& = \frac{\pi}{4}. \end{aligned}$$ Equivalently, consider the function $$\displaystyle f(t) = \int_0^1\frac{\sin(t\ln(x)) }{\ln{x}}\, \mathrm dx.$$ Then $$\displaystyle f'(t) = \int_0^1 \cos(t \ln{x})\, \mathrm{d}x = \frac{1}{1+t^2}.$$ Therefore $f(t) = \arctan(t)+C$. But $f(0) = 0$ so $C = 0$. Hence $f(t) = \arctan(t)$. We seek $\displaystyle f(1) = \arctan(1) = \frac{\pi}{4}$. Series solution: $\displaystyle I = \int_0^1\frac{\sin(\ln(x)) }{\ln{x}}\, \mathrm dx = \int_0^1\sum_{k \ge 0} \frac{(-1)^k \ln^{2k}{x}}{(2k+1)!}\, \mathrm dx = \sum_{k \ge 0} \frac{(-1)^k }{(2k+1)!} \int_0^1 \ln^{2k}{x}\, \mathrm dx $ Then we calculate $\displaystyle \int_0^1 \ln^nx \,\mathrm{d}x$ via integration by parts to find that it's equal to $(-1)^n n!$ Or consider $\displaystyle f(m) = \int_0^1 x^m \,{dx} = \frac{1}{1+m}.$ Then taking the $n$-th derivative of both sides: $\displaystyle f^{(n)}(m) = \int_0^1 x^m \ln^{n}{x} \,{dx} = \frac{(-1)^n n! }{(1+m)^{n+1}}.$ In either case we get $\displaystyle \int_0^1 \ln^{2k}{x}\, \mathrm dx = (2k)!$. Hence: $\displaystyle I = \sum_{k \ge 0} \frac{(-1)^k (2k)!}{(2k+1)!} = \sum_{k \ge 0} \frac{(-1)^k (2k)!}{(2k+1)(2k)!} = \sum_{k \ge 0} \frac{(-1)^k }{(2k+1)} = \frac{\pi}{4}.$ To prove the last equality, consider $\displaystyle \frac{1}{2k+1} = \int_0^1 x^{2k} \, \mathrm dx$ and the geometric series $\displaystyle \sum_{k \ge 0}(-1)^kx^{2k} = \frac{1}{1+x^2}$. 
Then $\begin{aligned} \displaystyle \sum_{k \ge 0} \frac{(-1)^k}{2k+1} & = \sum_{k \ge 0}{(-1)^k}\int_0^1 x^{2k}\,{\mathrm dx} \\& = \int_0^1 \sum_{k \ge 0}(-1)^k x^{2k} \, \mathrm dx \\& = \int_0^1 \frac{1}{1+x^2}\,\mathrm dx \\& = \frac{\pi}{4}.\end{aligned}$ Regarding the integral $\displaystyle I = \int_0^1 \cos(t \ln{x})\, \mathrm{d}x $, we let $x = e^{-y}$. Then $\displaystyle I = \int_0^\infty e^{-y}\cos(ty)\,\mathrm{d}y.$ We get the answer by applying integration by parts (twice). Or we can consider the real part, if we're familiar with complex numbers: \begin{align} I & = \int_0^\infty e^{-y}\cos(ty)\,\mathrm{d}y =\Re\left(\int_0^\infty e^{-(1-it)y}\mathrm{d}y\right)\\ &=\Re\left(\int_0^\infty e^{-(1+t^2)y}\mathrm{d}(1+it)y\right)\\ &=\Re\left(\frac{1+it}{1+t^2}\int_0^\infty e^{-(1+t^2)y}\mathrm{d}(1+t^2)y\right)\\ &=\Re\left(\frac{1+it}{1+t^2}\right)\\& =\frac{1}{1+t^2}. \end{align}
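A numerical sanity check, using the substitution $x=e^{-y}$ from the last part (which turns the integral into $\int_0^\infty e^{-y}\frac{\sin y}{y}\,dy$) and a plain midpoint rule:

```python
import math

N, Y = 200_000, 40.0        # truncate the tail at y = 40; e^{-40} is negligible
h = Y / N
total = 0.0
for i in range(N):
    y = (i + 0.5) * h       # midpoint of the i-th subinterval
    total += math.exp(-y) * math.sin(y) / y * h
# total ≈ pi/4 ≈ 0.785398...
```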
{ "language": "en", "url": "https://math.stackexchange.com/questions/4109949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
evaluating limit of $\lim_{x \to 0}\frac{1-\cos(x)}{\sin(x)(e^x-1)}$ I want to evaluate $\lim_{x \to 0}\frac{1-\cos(x)}{\sin(x)(e^x-1)}$. It's easy using L'Hôpital's Rule, but I want to use Taylor series. I can see here that $\cos(x)=\sum_{n=0}^{\infty}\frac{(-1)^nx^{2n}}{(2n)!}$, $\sin(x)=\sum_{n=0}^{\infty}\frac{(-1)^nx^{2n+1}}{(2n+1)!}$, and also $e^x=\sum_{n=0}^{\infty}\frac{x^n}{n!}$, so I want to use these to find it. What should I do next?
Let $\mathcal{L}$ be your limit : \begin{align} \mathcal{L}&=\lim_{x \to 0}\frac{1-\cos x}{\sin x(e^x-1)}\\ &=\lim_{x\to 0} \frac{1-\cos x}{x^2}\frac{x^2}{\sin x (e^x-1)}\\ &=\lim_{x\to 0} \frac{1-\cos x}{x^2}\frac{x}{\sin x} \frac{x}{e^x-1} \end{align} Recall : $$\lim_{x\to 0} \frac{1-\cos x}{x^2} = \frac{1}2 \ \ \ \ \ ;\ \ \ \ \ \lim_{x\to 0} \frac{\sin x}{x}=1\ \ \ \ \ ;\ \ \ \ \ \lim_{x\to 0}\frac{e^x-1}{x}=1 $$ Using these usual limits, you can easily get : $$\mathcal{L}=\frac{1}{2}$$
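A quick numerical check that the quotient approaches $\frac12$ as $x\to 0$:

```python
import math

def q(x):
    return (1 - math.cos(x)) / (math.sin(x) * (math.exp(x) - 1))

# evaluate at x = 1e-2, 1e-3, 1e-4, 1e-5
vals = [q(10.0 ** (-k)) for k in range(2, 6)]
# the values tend to 0.5
```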
{ "language": "en", "url": "https://math.stackexchange.com/questions/4110074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Why can we use $\cos^2t+\sin^2t=1$ to eliminate $t$ from $x=5\cos t$, $y=2\sin t$? Given the parametric equations $x=5 \cos t$ and $y=2 \sin t$, I want to eliminate the parameter. So, how can we eliminate the parameter here? Supposedly, all we need to do according to the solution is to recall this trig identity: $$ \cos ^{2} t+\sin ^{2} t=1 $$ Then from the parametric equations we get, $$ \cos t=\frac{x}{5} \quad \sin t=\frac{y}{2} $$ Then, using the trig identity from above and these equations we get, $$ 1=\cos ^{2} t+\sin ^{2} t=\left(\frac{x}{5}\right)^{2}+\left(\frac{y}{2}\right)^{2}=\frac{x^{2}}{25}+\frac{y^{2}}{4} $$ and should thus conclude that we haven an ellipse. However, I don't understand why this works. We just took some equation (trig identity) and plugged something in - how do we know that this is equal to our parametric equations? I mean we could have taken any other formula and plug in values and make a completely different conclusion - it seems very arbitrary to me.
This is because $\left(\frac{x}{5}\right)^{2} + \left(\frac{y}{2}\right)^{2}$ matches the form of $\cos^{2}t + \sin^{2}t$. But before this, let's understand what $\cos^{2}t + \sin^{2}t = 1$ means. We know that the center-radius (standard) form of a circle of radius $r$ with its center at the origin has the equation $$x^{2} + y^{2} = r^{2}.$$ Dividing both sides by $r^{2}$, $$\begin{align*}\frac{x^{2}}{r^{2}} + \frac{y^{2}}{r^{2}} &= 1 \\ \left(\frac{x}{r}\right)^{2} + \left(\frac{y}{r}\right)^{2} &= 1.\end{align*}$$ But we know that $\cos t =\frac{x}{r}$ and $\sin t = \frac{y}{r}$ where $t$ is the angle from the positive $x$-axis. By substitution, we get the equation $$\cos^{2}t + \sin^{2}t = 1 \tag{1}.$$ We will now go to ellipses. Circles are special cases of ellipses where both $x$ and $y$ are scaled by a factor of $r$. However, ellipses that are not circles have different scaling factors for $x$ and $y$. If the scaling factors for $x$ and $y$ are $a$ and $b$, the equation will be $$\left(\frac{x}{a}\right)^{2} + \left(\frac{y}{b}\right)^{2} = 1 \tag{2}.$$ As $(1)$ is similar to $(2)$, we can equate the terms to each other. \begin{align*}\left(\frac{x}{a}\right)^{2} &= \cos^{2}t &\qquad \left(\frac{y}{b}\right)^{2} &= \sin^{2}t \\ \frac{x}{a} &= \cos t &\qquad \frac{y}{b} &= \sin t \\ x &= a\cos t &\qquad y &= b \sin t.\end{align*} This is why it works. I can't think of anything aside from this.
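A quick check that every point of the parametric curve lands on that ellipse:

```python
import math

for k in range(100):
    t = 2 * math.pi * k / 100
    x, y = 5 * math.cos(t), 2 * math.sin(t)
    # the eliminated form x^2/25 + y^2/4 = 1 holds for every t
    assert abs(x**2 / 25 + y**2 / 4 - 1) < 1e-12
```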
{ "language": "en", "url": "https://math.stackexchange.com/questions/4110224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
On Sierpinski's example of a set that intersects every horizontal or vertical line in exactly one point The following is an exercise from Bruckner's Real Analysis: Using the continuum hypothesis, one can prove that there exists a Lebesgue non-measurable subset $E$ of $\mathbb{R^2}$ such that $E$ intersects every horizontal or vertical line in exactly one point. (a) Use this set to show that there exists a function $f : \mathbb{R^2} \to \mathbb{R}$ such that $f$ is Borel measurable in each variable separately, yet $f$ is not Lebesgue measurable. (b) Show that the restriction of $f$ to any horizontal or vertical line has only one point of discontinuity. The first sentence of the exercise is explained here, even though no explicit construction of the set is shown. I still couldn't solve the exercise after reading the linked post and the answers there. Please help, thanks!
Given $E \subseteq \Bbb R^2$, consider $f: \Bbb R^2 \to \Bbb R$ given by $f(x,y)=1$ if $(x,y) \in E$ and $0$ otherwise. $f$ is not measurable as $E$ is not. But if we fix the first coordinate to $x_0 \in \Bbb R$, the one-variable function $f(x_0, \cdot)$ assumes the value $1$ exactly once, so that function is zero except at a single point and thus Borel measurable, quite trivially. The same for $f(\cdot, y_0)$ for any fixed $y_0 \in \Bbb R$. This takes care of $a)$; for $b)$, note that each such restriction is $0$ everywhere except at its single point of value $1$, so it is discontinuous exactly at that one point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4110909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of unordered combinations of N-sized subsets of an m*N-sized set I have an array of 24 batteries in my off-grid solar power system. They are arranged in a series of 4 blocks having 6 parallel-connected batteries in each. So, in my case, N=6 and m=4. The batteries slightly vary by state of health, which is reflected in their internal resistance. This causes each of the 4 serial blocks to have slightly different resistance, which in turn causes the blocks to get charged to unequal voltage which is not ideal (some of the 4 blocks get charged to a higher voltage than the other, although they all get the same current). Given that I can measure internal resistance of each battery individually, I am about to come up with the best possible combination where the 4 blocks have the closest resistance possible. Now, calculating resistance for a given combination is easy and is not a question. The question is how to get all those (unordered) combinations? How many are there, and how to iterate over all of them? (so that I could write a script that would just try them all and return the best combination) Say if N was 2 and m was 2, the 4 batteries were labeled from A to D, there would have been only 3 unordered combinations: AB+CD, AC+BD and AD+BC. For N=3 and m=2 we get 10 (unless I have missed some): * *ABC+DEF *ABD+CEF *ABE+CDF *ABF+CDE *ACD+BEF *ACE+BDF *ACF+BDE *ADE+BCF *ADF+BCE *AEF+BCD So, where m=1 there is only one unordered combination no matter how big N is. But what is the equation?
There are $24 \choose 6$ ways to choose the first pack, $18 \choose 6$ ways to choose the second, $12 \choose 6$ to choose the third, and $6 \choose 6$ ways to choose the last. We have overcounted by a factor $4!$ because we could choose the same four packs in any order. This gives $${24 \choose 6}{18 \choose 6}{12 \choose 6}{6 \choose 6}\frac 1{4!}=\frac {24!}{6!^4\cdot 4!}\approx 9.62\cdot 10^{10}$$ In general it is $$\frac{(mN)!}{(N!)^m\cdot m!}$$
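A sketch in Python of both the count and an iterator over the unordered groupings (to keep each partition from being produced $m!$ times, every new group is forced to contain the smallest element not yet used). Note that for the actual case $m=4$, $N=6$ the count is $96{,}197{,}645{,}544$, so brute force over all of them is not realistic; in practice you'd want a heuristic, e.g. sort by resistance, deal the batteries out round-robin, then try local swaps:

```python
from itertools import combinations
from math import factorial

def count_partitions(m, N):
    """Number of ways to split m*N distinct items into m unordered groups of N."""
    return factorial(m * N) // (factorial(N) ** m * factorial(m))

def unordered_groups(items, N):
    """Yield every split of `items` (distinct) into unordered groups of size N."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    # `first` is the smallest unused item; it anchors the next group,
    # which prevents the same partition appearing in several orders.
    for others in combinations(rest, N - 1):
        chosen = set(others)
        remaining = [x for x in rest if x not in chosen]
        for tail in unordered_groups(remaining, N):
            yield [(first,) + others] + tail

splits = list(unordered_groups(list("ABCDEF"), 3))
# 10 splits, matching the hand count in the question
```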
{ "language": "en", "url": "https://math.stackexchange.com/questions/4111061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Understanding a proof of $\text{[meas]} \implies \text{[mean]}$ The following is from Bruckner's Real Analysis: I don't understand the following: Red: Even $g \in L^1$ does not imply that $\mu(\{x: 2g \le \alpha\})< \infty$, so how do we choose $\alpha>0$ such that the mentioned integral is arbitrarily small? Green: Why is $\int_{B_n} 2g \ d \mu < \frac{\epsilon}{3}$?
Red: Based on my reading of the link, it seems like the author is applying what he calls the rectangle principle. As far as a more rigorous justification, I've been playing around with it a bit. We know that $g\geq 0$ and that $\int g < +\infty$. We can write $$\int g = \int_{\{g>\alpha\}}g + \int_{\{g\leq\alpha\}}g.$$ From the DCT, we have $$\int g = \lim_{\alpha \to 0} \int_{\{g>\alpha\}} g.$$ Given some $\varepsilon > 0$, this should imply that we can pick some $\beta > 0$ sufficiently small so that $$\int g - \int_{\{g>\beta\}} g < \varepsilon.$$ As I said this is just from playing around with it a bit, but I don't immediately see any flaws in the above. I certainly invite folks to point out any problems. Green: This follows from the absolute continuity of the Lebesgue integral. By definition, this means that for any Lebesgue integrable $u$ and $\varepsilon > 0$, there exists $\delta > 0$ such that $\mu(B) < \delta$ implies that $$\int_B u \ d\mu < \varepsilon.$$ Given our function $g$ and an arbitrary $\varepsilon > 0$, we can find a $\delta > 0$ so that the above holds. By convergence in measure, we can pick $N$ sufficiently large so that $$B_n = \{x \in A : | f(x) - f_n(x) | \geq \eta \}$$ satisfies $$\mu(B_n) < \delta \ \forall n \geq N.$$ Then, the absolute continuity implies that $$\int_{B_n} g \ d\mu < \varepsilon.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4111245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Consider the polynomial $p(x)=x^{2021}-x+1$, calculate the sum $$ r_{1}^{2021}+r_{2}^{2021}+\ldots+r_{2021}^{2021} $$ where $r_i$ are the 2021 roots of $p$. I'm not sure how to start. I know it can be done with de Moivre's formula, but I don't get any result.
If $a$ is a root of $x^n-x+1=0$, then $a^n=a-1$. Set $y=a-1$, so that $a=y+1$ and $a^n=y$. Then $$(y+1)^n-y=0\implies y^n+ny^{n-1}+\cdots=0,$$ where the omitted terms have degree at most $n-2$ and do not affect the sum of the roots. Each root $r_k$ of $p$ gives a root $y_k=r_k-1=r_k^n$ of this degree-$n$ polynomial, so by Vieta's formulas $$\sum_{k=1}^n r_k^n=-\dfrac n1=-n.$$ For $n=2021$ the requested sum is therefore $-2021$.
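The conclusion can be sanity-checked numerically for small $n$ (my own snippet; `numpy.roots` is approximate, hence the tolerance):

```python
import numpy as np

for n in (3, 5, 7):
    coeffs = [1] + [0] * (n - 2) + [-1, 1]   # x^n - x + 1
    roots = np.roots(coeffs)
    power_sum = sum(r**n for r in roots)
    # the derivation predicts sum of n-th powers of the roots = -n
    assert abs(power_sum - (-n)) < 1e-6, (n, power_sum)
```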
{ "language": "en", "url": "https://math.stackexchange.com/questions/4111373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Divisibility by 11 I'm trying to prove that a number $N=a_0+a_1 \cdot 10^1+...+a_n\cdot 10^n$ is divisible by 11 iff $a_0-a_1+a_2-...+a_n\cdot(-1)^{n}$ is divisible by 11. I began my proof and I found out that I first need to prove that $10^k+(-1)^{k+1}$ is divisible by 11 for all $k\in \mathbb{N}$. I tried to prove this with induction: for $k=1$ we get $t_1=10^1+(-1)^{1+1}=11$. Now I assume $t_k=10^k+(-1)^{k+1}$ is divisible by 11 and need to prove that $t_{k+1}=10^{k+1}+(-1)^{k+2}=10^{k+1}+(-1)^{k}$ is divisible by 11. I looked at: $t_{k+1}-t_{k}=10^{k+1}+(-1)^{k}-10^k-(-1)^{k+1}=9\cdot 10^k +2\cdot (-1)^k$ and now I'm stuck :( if you have any idea how I can complete my proof I will be very grateful! Thank You All!
Note first that $$y^k-x^k = (y-x)\sum_{m=0}^{k-1} y^m x^{k-1-m}$$ (you can prove this for yourself: the right-hand side telescopes when expanded). Then, taking $y=10$ and $x=-1$, $$10^k + (-1)^{k+1} = 10^k - (-1)^k = 11 \sum_{m=0}^{k-1} (-1)^{k-1-m} 10^m,$$ which is divisible by 11.
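The finished criterion is easy to confirm by brute force (my own snippet, not part of the answer):

```python
def alt_digit_sum(n: int) -> int:
    """a_0 - a_1 + a_2 - ... over the decimal digits of n."""
    s, sign = 0, 1
    while n:
        s += sign * (n % 10)
        sign = -sign
        n //= 10
    return s

# the alternating digit sum is divisible by 11 exactly when n is
for n in range(1, 100_000):
    assert (n % 11 == 0) == (alt_digit_sum(n) % 11 == 0)
```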
{ "language": "en", "url": "https://math.stackexchange.com/questions/4111585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Show that $\langle Tx,Ty\rangle_1=\langle y, x\rangle$ defines an inner product on $H\mbox{*}$ that induces a norm on $H\mbox{*}$ The correspondence $T:H\rightarrow H\mbox{*}$ by $y\mapsto f_y$, and $y\in H$ and $T$ is a conjugate-linear isometry. We call $H\mbox{*}$ the dual space. We know that something is conjugate-linear if $$T(\lambda u+\mu v)=\bar{\lambda} Tu+\bar{\mu} Tv$$ for all $u\in X,v\in Y$ and $\lambda,\mu\in\mathbb{F}$. And isometry being $||Tx||=||x||$ We want to show that $$\langle Tx,Ty\rangle_1=\langle y,x\rangle$$ defines an inner product on $H\mbox{*}$ that induces a norm on $H\mbox{*}$. I know that I am supposed to show that $$||x||=0\iff x=0$$ $$\text{Triangle Inequality}$$ $$||\alpha x||=|\alpha|||x||$$ But I am struggling with the exact steps to do this. I will post what I have in a little if you would be more open to checking what I have. This is what I was thinking but for some reason I just felt like it wasn't quite right. [(a)][$||x||=0 \iff x=0$]: \begin{align*} ||Tx||&=0\\ ||Tx||^2&=\\ \langle Tx,Tx\rangle_1&=\\ \langle x,x\rangle&=0\iff x=0\\ \end{align*} [(b)][$||\alpha x||=|\alpha|||x||$]: \begin{align*} ||T\alpha x||&=||\bar{\alpha}Tx||\\ &=||\bar{\alpha} x||\\ &=|\alpha|||x||\\ &=|\alpha|||Tx|| \end{align*} [(c)][Triangle inequality]: \begin{align*} ||Tx+Ty||&=||x+y||\\ &\leq||x||+||y||\\ &=||Tx||+||Ty|| \end{align*}
As already said, you haven't proved that $\langle\cdot,\cdot\rangle_1$ defines an inner product. I will give you some hints: $$\langle f,g\rangle_1 := \langle T^{-1}g,T^{-1}f\rangle$$ for every $f,g \in H^*$. Prove that: $$\langle f,g\rangle_1 = \overline{\langle g,f\rangle}_1$$ $$\langle \mu f + \lambda h,g\rangle_1 = \mu \langle f,g\rangle_1 + \lambda \langle h,g\rangle_1$$ $$\langle f,f\rangle_1 \geq 0$$ $$\langle f,f\rangle_1 = 0 \; \text{iff} \; f=0$$ All of these follow from the properties of $\langle \cdot,\cdot\rangle$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4111744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Does such a semigroup exist? A homework question asks if there exists a semigroup that satisfies: $\forall x\forall y \exists z (x\circ z=y)$, $\forall x\forall y \exists z' (z'\circ x=y)$, $\exists x \exists y\exists z(x\circ z=y \wedge z\circ x \neq y)$ Every element has a way to any other element via left or right multiplication, but the left "intermediate" element from $x$ to $y$ is different from the right "intermediate", for some $x$ and $y$.
Your homework question describes a kind of object called a "non-Abelian group". The smallest example is the set of all permutations of a $3$-element set $X$ (that is, all bijections $f:X\to X$) with the operation of composition. An example that may be familiar to you is the set of all invertible $2\times2$ matrices with the operation of matrix multiplication.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4111955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Associativity of binary operations on a two-element underlying set (is there a pattern?) The overall problem is to establish, which binary operations on a two-element set $A=\{a, b\}$ are commutative and associative. There are 16 of them altogether, obviously, analogous to operations on Booleans: $$ \begin{array}{c|c|c|c|c|c|c} & x & y & O_1 & O_2 & ... & O_{16} \\\hline P_1 & a & a & a & a & ... & b \\\hline P_2 & a & b & a & a & ... & b \\\hline P_3 & b & a & a & a & ... & b \\\hline P_4 & b & b & a & b & ... & b \\\hline comm. & & & + & + & ... & + \\\hline assoc. & & & + & + & ... & + \\\hline \end{array} $$ The case of commutativity is simple, it is enough to check if $a*b=b*a$, so to check that $P_2$ and $P_3$ are both the same. It also yields a perfect symmetry: $+ + - - - - + + + + - - - - + +$ But the associativity is not that straightforward in terms of the algorithm and does not seem to produce any pattern. There is no well-ordering property of A, so it seems the induction does not lend itself to it. I did not come up with any simple approach to prove associativity, so I have composed a simple Python code to exhaust all the combinations: a = "a" b = "b" A = (a, b) for i in range(16): Ops = { (a, a): b if (i//8)%2 else a, (a, b): b if (i//4)%2 else a, (b, a): b if (i//2)%2 else a, (b, b): b if i%2 else a } # test associativity: x * (y * z) = (x * y) * z assoc = True; for x in A: for y in A: for z in A: yz = Ops[(y, z)] x_yz = Ops[(x, yz)] xy = Ops[(x, y)] xy_z = Ops[(xy, z)] if x_yz != xy_z: assoc = False print(i, Ops.values(), assoc) It produces the following output: $+ + - + - + + + - + - - - - - +$ There seems to be no pattern here. But the ratio of associative operations is 50%, just as of the commutative ones. 
I have perused several useful answers related to this topic, namely: "Showing associativity", "Number of associative binary operations", and "Ratio of associative binary operations". What I understood is that there is no simple proof (algorithm) for checking associativity, especially if we take a larger underlying set. (Or is there?) Secondly, the ratio of associative operations will decrease as the underlying set grows. Associativity is somehow related to idempotence, i.e. operations where $a*a=a$ and $b*b=b$. Namely, associativity occurs more readily where idempotence holds. My questions are:
1. Is there a non-obvious pattern in the incidence of associative operations?
2. Is there an inductive or other style of proof for associativity when the underlying set grows beyond a binary set? (not just the brute-force approach I used)
The number of associative binary operations on an $n$-set is the A023814 OEIS sequence. There is no known closed formula for it. Your question is also related to the number of two-element semigroups. First, finite semigroups always contain an idempotent (an element $e$ such that $ee = e$). So let's suppose that $a$ is idempotent. If $b$ is not idempotent, then $b^2 \not= b$ and hence $b^2 = a$. One possibility is that $ba = ab = a$, so that $a$ is actually a zero. The other possibility is $ba = ab = b$, which corresponds to the cyclic group of order $2$. Suppose now that $a$ and $b$ are both idempotent. Then there are three other semigroups of this kind. The first one is the monoid $\{1,0\}$ under the usual multiplication of integers. The second one is defined by $aa = ab = a$ and $ba = bb = b$. The third one is $aa = ba = a$ and $ab = bb = b$. So altogether, you have $5$ possible two-element semigroups. You get $8$ operations, because you are not classifying operations up to isomorphism.
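To connect this back to the table enumeration in the question, one can list the $8$ associative tables and collapse them under the relabeling $a\leftrightarrow b$ to recover the $5$ isomorphism classes (my own sketch):

```python
from itertools import product

A = (0, 1)
pairs = list(product(A, repeat=2))

tables = []
for vals in product(A, repeat=4):
    op = dict(zip(pairs, vals))
    if all(op[x, op[y, z]] == op[op[x, y], z]
           for x, y, z in product(A, repeat=3)):
        tables.append(op)
assert len(tables) == 8          # associative binary operations

def swapped(op):
    """Relabel a <-> b throughout the multiplication table."""
    return {(1 - x, 1 - y): 1 - v for (x, y), v in op.items()}

orbits = {frozenset({tuple(sorted(op.items())),
                     tuple(sorted(swapped(op).items()))})
          for op in tables}
assert len(orbits) == 5          # two-element semigroups up to isomorphism
```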
{ "language": "en", "url": "https://math.stackexchange.com/questions/4112101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Uses of Vieta Jumping in research mathematics? Vieta jumping has been a prominent method for solving Diophantine equations since 1988. It was popularized when it was used to solve an IMO problem, but has it been applied to research mathematics, and applied in solving previously unsolved questions? Context: Vieta Jumping is a method of showing that for quadratics $f$ and $g$, if two positive integers $A, B$ satisfy $f(A,B)|g(A,B)$, then there is a method to generate $A'$ such that $f(A',B)|g(A',B)$ where $A'$ is always larger/smaller than $A$. We can then continue applying this method to generate infinitely many unique pairs of numbers of the form $(x,B)$ such that $f(x,B)|g(x,B)$. The method is as follows: Suppose $f(A,B)|g(A,B)$ and $$\frac{g(A,B)}{f(A,B)}=k$$ for some positive integer $k$. Then, this implies $$g(A,B)-kf(A,B)=0$$ We can then rewrite this as a quadratic in terms of $A$. As this is a quadratic, there exists some $A'$ which is also a root of $g(x,B)-kf(x,B)=0$. We can then use Vieta's formulae to show that $A'$ is a positive integer, and also that $A'$ is larger/smaller than $A$. Hence, there are infinitely many solutions to the Diophantine equation $\frac{g(x,y)}{f(x,y)}=k$. If $A'\leq A$, then Vieta Jumping can be used as a form of infinite descent. If $A'\geq A$, then Vieta Jumping tells you that there are infinitely many solutions. Though Vieta Jumping was used in the solution for Problem 6 of the 1988 International Mathematics Olympiad, it existed before that under various other names.
Vieta Jumping has a much longer history than I previously thought; in fact it's not really a new, innovative method at all, but is already known in more advanced number theory as the reduction theory of quadratic forms. It has been in use since the times of Gauss, who used it in Disquisitiones Arithmeticae (not exactly sure where, though), and more recently by Hurwitz when analysing the Hurwitz equation. IMO students were not expected to know about reduction theory of quadratic forms pre-1988, and so when it was re-discovered in 1988 the method gained the new name of Vieta jumping. I am interested in the application of Vieta jumping/reduction theory of quadratic forms post 1988, so feel free to post an answer if you know anything about that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4112390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Image of a closed set under a continuous function is not necessarily closed Suppose a function $f: [a, b] \rightarrow \mathbb{R}$ is continuous. What can we say about its image? I thought the image has to be closed, but it seems this is not true.
To give the most general answer: it does have to be closed, because $[a,b]$ is a compact set, images of compact sets under continuous functions are compact (this is trivial to prove), and continuous maps from compact spaces to Hausdorff spaces are closed maps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4112483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How do you find the time of the collision of circles moving under uniform circular motion? I am working on a simulation involving circular pucks moving in circular orbits at constant angular speed. I found the equation determining when the pucks collide to be intractable so have been calculating the time of collisions in a given time delta via a binary search which has worked adequately; however, I am by no means an expert in numerical techniques. I do believe that a closed form symbolic solution to this problem either does not exist or is so large and unwieldy it is worthless, but I wonder if there is a technique for finding a numerical solution that is more direct than the binary search approach I am using. If I am wrong about a symbolic solution being intractable I'd also be interested in that, obviously. The problem can be formalized as below. One puck is revolving around the origin with angular speed $\alpha_1$, initial angular position $\theta_1$, and radius of revolution $r_1$. The other puck is revolving around $(c_x,c_y)$ with analogous properties. $d$ is the sum of the of the pucks' radii; $t$ is time; we are solving for time: $$ [r_1 \cos(\theta_1 + t \alpha_1) - (c_x + r_2 \cos(\theta_2 + t \alpha_2))]^2 + [r_1 \sin(\theta_1 + t \alpha_1) - (c_y + r_2 \sin(\theta_2 + t \alpha_2))]^2 = d^2$$
Both the first and second derivatives of the function returning the difference between the squared distance between the pucks and the squared sum of the pucks' radii are defined and continuous, so Halley's method performs well given correct bracketing of a root.
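To make the suggestion concrete, here is a sketch of Halley's iteration on $g(t)=|p_1(t)-p_2(t)|^2-d^2$ (my own code, not the answerer's; the test configuration, with the second puck effectively parked at $(2,0)$, is invented so that the collision time is known in closed form):

```python
import math

def halley_collision(r1, th1, a1, cx, cy, r2, th2, a2, d, t0,
                     tol=1e-12, max_iter=50):
    """Halley iteration on g(t) = |p1(t) - p2(t)|^2 - d^2.
    Assumes t0 is already close to (or brackets) a simple root."""
    def g_dg_ddg(t):
        c1, s1 = math.cos(th1 + a1 * t), math.sin(th1 + a1 * t)
        c2, s2 = math.cos(th2 + a2 * t), math.sin(th2 + a2 * t)
        u = r1 * c1 - (cx + r2 * c2)            # x-separation
        v = r1 * s1 - (cy + r2 * s2)            # y-separation
        du = -r1 * a1 * s1 + r2 * a2 * s2       # u'(t)
        dv = r1 * a1 * c1 - r2 * a2 * c2        # v'(t)
        ddu = -r1 * a1**2 * c1 + r2 * a2**2 * c2
        ddv = -r1 * a1**2 * s1 + r2 * a2**2 * s2
        g = u * u + v * v - d * d
        dg = 2 * (u * du + v * dv)
        ddg = 2 * (du * du + u * ddu + dv * dv + v * ddv)
        return g, dg, ddg

    t = t0
    for _ in range(max_iter):
        g, dg, ddg = g_dg_ddg(t)
        step = 2 * g * dg / (2 * dg * dg - g * ddg)   # Halley update
        t -= step
        if abs(step) < tol:
            break
    return t

# one puck orbiting the origin, the other sitting still at (2, 0):
# distance^2 = 5 - 4 cos(t), so collision at t = acos((5 - d^2)/4)
t = halley_collision(1, 0, 1, 3, 0, 1, math.pi, 0, 1.2, t0=0.3)
assert abs(t - math.acos(0.89)) < 1e-9
```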
{ "language": "en", "url": "https://math.stackexchange.com/questions/4112695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Problem with using mean value theorem Suppose $f:\mathbb{R} \rightarrow \mathbb{R}$ is a continuous and convex function over $[a,b]$ which has first derivative over $[a,b]$. Prove that: $f'(a)\leq \frac{f(b)-f(a)}{b-a} \leq f'(b)$ I know that this is easy through the mean value theorem; however, I think that the function should be twice differentiable and the second derivative should be continuous, otherwise I think the result above is not true. Am I right? If yes, could you give me a counterexample? I tried but I didn't find one.
Assume $f$ is convex and $f'(a),f'(b)$ exist. Fix $x\in (a,b)$. Then letting $t={\large{\frac{x-a}{b-a}}}$, we have $$ \left\lbrace \begin{align*} &0 < t < 1\\[4pt] &x=(1-t)a+tb\\[4pt] \end{align*} \right. $$ hence by convexity of $f$ we have $$ f(x)\le (1-t)f(a)+tf(b)=f(a)+t\bigl(f(b)-f(a)\bigr) $$ hence \begin{align*} \frac{f(x)-f(a)}{x-a} &\le \frac{t\bigl(f(b)-f(a)\bigr)}{x-a} \\[4pt] &= \bigl(f(b)-f(a)\bigr) \left( \frac{t}{x-a} \right) \\[4pt] &= \bigl(f(b)-f(a)\bigr) \left( \frac{1}{b-a} \right) \\[4pt] &= \frac{f(b)-f(a)}{b-a} \\[4pt] \end{align*} hence from $$ f'(a)=\lim_{x\to a^+} \frac{f(x)-f(a)}{x-a} $$ it follows that $$ f'(a)\le \frac{f(b)-f(a)}{b-a} $$ as was to be shown. The inequality $$ f'(b)\ge \frac{f(b)-f(a)}{b-a} $$ can be proved analogously.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4112911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Calculus 2 test question that even the professor could not solve (Parametric Arc length) Everyone got this question wrong on my calculus 3 midterm: "Find the arc length of the curve on the given interval: $x=\sqrt{t}, \space y=8t-6,\space \text{on}\space 0 \leq t \leq 3$." I set the problem up just fine; however, at that point I got stuck. $$ \int_{0}^{3}\sqrt{\left ( \frac{1}{2\sqrt{t}} \right )^{2}+64}\, dt$$ We asked the teacher for a review; he got the problem set up, then quit and said he would not count it against us. But it's killing me, I need to know how to do it! Thanks
First simplify. $\left( \frac{1}{2\sqrt{t}} \right) ^2 = \frac{1}{4t}$. Next, pull the $\frac{1}{4t}$ out of the radical: $$f(t) = \sqrt{\frac{1}{4t} + 64} = \frac{1}{2\sqrt{t}}\cdot \sqrt{1 + 256t}$$ Now let $u = \sqrt{t}$, so $du = \frac{1}{2\sqrt{t}} dt$ and $$f(t)\, dt = \sqrt{1+256u^2}\,du$$ From here you can use a trigonometric substitution where $256u^2 = \tan^2 \theta$, so $16u = \tan \theta$ and $du = \frac{1}{16}\sec^2 \theta \,d\theta$, resulting in $$f(t)\,dt = \sqrt{1 + \tan^2 \theta}\,\frac{1}{16}\sec^2 \theta\, d\theta = \frac{1}{16}\sec^3 \theta \,d\theta$$ From there you can use a reduction formula to relate that integral to the integral of $\sec \theta$, which is doable.
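As a numerical cross-check (my own snippet, not part of the answer), the substitution $u=\sqrt t$ turns the singular integrand into the smooth $\sqrt{1+256u^2}$ on $[0,\sqrt3]$, which Simpson's rule handles easily, and the result agrees with a fine polyline along the curve itself:

```python
import math

# after u = sqrt(t), the arc length is L = integral of sqrt(1+256u^2)
# for u from 0 to sqrt(3); Simpson's rule below (n must be even)
n = 10000
a, b = 0.0, math.sqrt(3)
h = (b - a) / n
f = lambda u: math.sqrt(1 + 256 * u * u)
simpson = f(a) + f(b)
for i in range(1, n):
    simpson += (4 if i % 2 else 2) * f(a + i * h)
simpson *= h / 3

# compare against a fine polyline along (sqrt(t), 8t - 6), 0 <= t <= 3
m = 200000
pts = [(math.sqrt(3 * k / m), 8 * (3 * k / m) - 6) for k in range(m + 1)]
polyline = sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

assert abs(simpson - polyline) < 1e-3
```

Both come out near $24.14$, which is what the closed form obtained from the $\sec^3\theta$ integral should evaluate to.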
{ "language": "en", "url": "https://math.stackexchange.com/questions/4113138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Fourth root of a $3\times 3$ matrix I have solved this question by generalizing my assumption about some properties of the given matrix. However, it is not a rigorous proof and I haven't fully understood the background/meaning of it. Here's the question. Find the matrix X satisfying $ X^4= \begin{bmatrix} 3&0&0\\ 0&3&1\\ 0&0&0\\ \end{bmatrix} $ I found that $(X^4)^n=\begin{bmatrix}3^n&0&0\\0&3^n&3^{n-1}\\0&0&0\\\end{bmatrix}$ Therefore I assumed $X^n=\begin{bmatrix}3^\frac{n}{4}&0&0\\0&3^\frac{n}{4}&3^\frac{n-1}{4}\\0&0&0\\\end{bmatrix}$ Although I am on the entry-level in linear algebra, any approach to this question is welcomed, I will try my best to understand it.
Note that $X^4$ is diagonalizable with eigenspaces \begin{align} E_3(X^4) &= \langle (1,0,0), (0,1,0) \rangle, \\ E_0(X^4) &= \ker(X^4) = \langle (0,1,-3)\rangle. \end{align} Hence, after a change of basis the problem translates to finding $Y$ with $$ Y^4 = \begin{pmatrix} 3 & 0 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 0\end{pmatrix}. $$ Since $Y^4$ is singular, $Y$ needs to be singular as well and since $\ker(Y)\subseteq\ker(Y^4)$ we know that $\ker(Y)=\ker(Y^4)=\langle(0,0,1)\rangle$, so the third column of $Y$ must be zero. Since the dimensions of the kernels of $Y$ and $Y^4$ are equal, the dimensions of the images have to be equal as well and we get $\operatorname{im}(Y)=\operatorname{im}(Y^4)=\langle (1,0,0), (0,1,0)\rangle$. Hence, we have $$ Y = \begin{pmatrix} * & * & 0 \\ * & * & 0 \\ 0 & 0 & 0\end{pmatrix}, $$ where $Z=(\begin{smallmatrix}*&*\\*&*\end{smallmatrix})$ is any solution to $Z^4=3I$, where $I$ is the $2\times 2$ identity matrix. Such matrices are of the form $$ Z = U^{-1} \begin{pmatrix} \eta_1 & 0 \\ 0 & \eta_2\end{pmatrix} U, $$ where $\eta_1,\eta_2$ are any (possibly equal) solutions to $\eta^4=3$ and $U$ is an invertible $2\times 2$ matrix, so $U=(\begin{smallmatrix}a&b\\c&d\end{smallmatrix})$ with $ad-bc\neq 0$. Putting things together, we first get \begin{align} Z &= \frac{1}{ad-bc} \begin{pmatrix}d&-b\\-c&a\end{pmatrix}\begin{pmatrix} \eta_1 & 0 \\ 0 & \eta_2\end{pmatrix}\begin{pmatrix}a&b\\c&d\end{pmatrix} \\ &= \frac{1}{ad-bc} \begin{pmatrix} ad\eta_1-bc\eta_2 & bd(\eta_1-\eta_2)\\ac(\eta_2-\eta_1) & ad\eta_2-bc\eta_1\end{pmatrix}. 
\end{align} Then $$ Y = \frac{1}{ad-bc} \begin{pmatrix} ad\eta_1-bc\eta_2 & bd(\eta_1-\eta_2) & 0 \\ ac(\eta_2-\eta_1) & ad\eta_2-bc\eta_1 & 0 \\ 0 & 0 & 0\end{pmatrix}, $$ and changing back to the original basis finally $$ X = \frac{1}{ad-bc} \begin{pmatrix} ad\eta_1-bc\eta_2 & bd(\eta_1-\eta_2) & \frac 1 3 b d (\eta_1-\eta_2) \\ ac(\eta_2-\eta_1) & ad\eta_2-bc\eta_1 & \frac 1 3 (ad\eta_2-bc\eta_1)\\ 0 & 0 & 0\end{pmatrix}. $$ The constraints are $ad-bc\neq 0$ and $\eta_i^4=3$ for $i=1,2$. The solution obtained by you (with the correction of Henry) is obtained by choosing $\eta_1=\eta_2=3^{\frac 1 4}$ and $U=I$, so $a=d=1$, $b=c=0$.
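The final parametrized formula can be checked numerically (my own snippet; `fourth_root` mirrors the answer's $a,b,c,d,\eta_1,\eta_2$, and the parameter values below are arbitrary admissible choices):

```python
import numpy as np

def fourth_root(a, b, c, d, e1, e2):
    """X from the parametrized formula; requires ad - bc != 0 and
    e1, e2 any (real or equal) fourth roots of 3."""
    det = a * d - b * c
    return np.array([
        [a*d*e1 - b*c*e2, b*d*(e1 - e2),    b*d*(e1 - e2) / 3],
        [a*c*(e2 - e1),   a*d*e2 - b*c*e1, (a*d*e2 - b*c*e1) / 3],
        [0, 0, 0],
    ]) / det

target = np.array([[3, 0, 0], [0, 3, 1], [0, 0, 0]], dtype=float)
q = 3 ** 0.25
for params in [(1, 0, 0, 1, q, q),      # the diagonal-style solution
               (1, 1, 0, 1, q, -q),     # a genuinely different one
               (2, 1, 1, 1, q, -q)]:
    X = fourth_root(*params)
    assert np.allclose(np.linalg.matrix_power(X, 4), target)
```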
{ "language": "en", "url": "https://math.stackexchange.com/questions/4113266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Counting question - binomial coefficients Why is there a difference between choosing $2$ out of $n$ elements and after that $2$ out of the $n-2$ remaining elements, compared to choosing $4$ elements? So why is, intuitively (not algebraically), ${n \choose 2} \cdot {n-2 \choose 2} \cdot \dfrac{1}{2}\neq {n \choose 4}$?
The first method selects two groups of two and not one of four. So $\{a,b,c,d\}$ could be $\{a,b\}\{c,d\}$, or $\{a,c\}\{b,d\}$, and so on. Dividing by $2$ only removes the order of the two pairs; each $4$-element set still splits into $3$ different pairs of pairs, so the left side equals $3\binom{n}{4}$. In fact, we have: $$\binom{n}{2}\binom{n-2}{2}=\binom{n}{4}\binom{4}{2}$$
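A quick check of the identity (my own snippet):

```python
from math import comb

for n in range(4, 30):
    assert comb(n, 2) * comb(n - 2, 2) == comb(n, 4) * comb(4, 2)
    # halving removes only the order of the two pairs; each 4-set
    # still corresponds to 3 pair-partitions, hence 3 * C(n, 4)
    assert comb(n, 2) * comb(n - 2, 2) // 2 == 3 * comb(n, 4)
```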
{ "language": "en", "url": "https://math.stackexchange.com/questions/4113433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove $|\int_{a}^{b}f(x)dx| \le \int_{a}^{b}|f(x)|dx $ I want to prove that $|\int_{a}^{b}f(x)dx| \le \int_{a}^{b}|f(x)|dx $ We know that $-|f(x)| \le f(x) \le |f(x)|$, so by linearity, we get: $$\int_{a}^{b}-|f(x)|dx \le \int_{a}^{b}f(x)dx \le \int_{a}^{b}|f(x)|dx$$ And: $$-\int_{a}^{b}|f(x)|dx \le \int_{a}^{b}f(x)dx \le \int_{a}^{b}|f(x)|dx$$ But how can I conclude that the statement is correct from here? Thanks!
First, take a function with $$f_1(x)\ge0 \quad \forall x\in[a,b].$$ It is then clear that $|f_1(x)|=f_1(x)$, and so $$\int_a^b|f_1(x)|dx=\int_a^bf_1(x)dx,$$ from which it follows that $$\left|\int_a^bf_1(x)dx\right|=\int_a^b|f_1(x)|dx.$$ Now consider a function that changes sign once, at some $a<c<b$: $$f(x)\ge0 \text{ for } a\le x<c, \qquad f(x)\le0 \text{ for } c\le x\le b.$$ We can write $$\int_a^bf(x)dx=\int_a^cf(x)dx+\int_c^bf(x)dx,$$ and naming these two pieces $F_1$ and $F_2$ gives $F=F_1+F_2$. Notice that $F_1\ge0$ and $F_2\le0$, so $$\int_a^b|f(x)|dx=|F_1|+|F_2|, \qquad \left|\int_a^bf(x)dx\right|=|F_1+F_2|,$$ and by the triangle inequality for real numbers $$|F_1+F_2|\le|F_1|+|F_2|.$$ The same argument applies piece by piece however often $f$ changes sign.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4113597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Solving integral: $\int_0^1\ln^2{\left(\frac{1+x}{1-x}\right)} dx$ I want to show that the solution of a BVP, $u(x) = \ln{\frac{1+x}{1-x}}$, is in $L^2(0,1)$, so I need to show that the integral $$\int\limits_0^1\ln^2\left|\frac{x+1}{x-1}\right|dx < \infty$$ Just looking at the function, however, it's not even defined on the interval $[0,1]$, right? So can this not actually be a solution to the BVP? Even more generally, about the integral itself: does that make it just $0$? And if not, can someone explain how an integral of a function can be defined in a region when the function itself is not? Also, I know that there exist integral calculators online and have looked this up, specifically using: https://www.integral-calculator.com/ If you put in the given integral it says the integral could not be found and then gives a complex number approximation. What should be made of this? I have taken a complex variables course but don't really see how you could solve this using residue calculus since we don't have any symmetries. EDIT: Note I had originally forgotten the absolute value signs - my mistake.
Substitute $\frac{1-x}{1+x}\to x $ \begin{align} \int_0^1\ln^2{\frac{1+x}{1-x}} dx = &\int_0^1\frac{2\ln^2{x}}{(1+x)^2}dx= \int_0^1{\ln^2{x}}\>d\left(\frac {2x}{1+x}\right)\\ =& -4\int_0^1\frac{\ln x}{1+x}dx=-4\cdot (-\frac{\pi^2}{12})=\frac{\pi^2}3 \end{align} $ \int_0^1\frac{\ln x}{1+x}dx =-\int_0^1\frac{\ln (1+x)}xdx =-\frac{\pi^2}{12}$
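A quick numerical confirmation (my own snippet): the midpoint rule applied to the substituted integrand $\frac{2\ln^2 x}{(1+x)^2}$ never evaluates at the singular endpoint and reproduces $\pi^2/3\approx 3.2899$:

```python
import math

# midpoint rule on 2 ln(x)^2 / (1+x)^2 over (0, 1); the singularity
# at x = 0 is integrable and the midpoints stay away from it
N = 200_000
total = 0.0
for k in range(N):
    x = (k + 0.5) / N
    total += 2 * math.log(x) ** 2 / (1 + x) ** 2
total /= N

assert abs(total - math.pi ** 2 / 3) < 1e-2
```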
{ "language": "en", "url": "https://math.stackexchange.com/questions/4113747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Gradient Boosting (XGBoost) definition help I was hoping somebody could help explain the definition below. Particularly I have trouble understanding the definition of the space of regression trees $$F = \{ f(x)= w_{q(\mathbf{x})} \}(q: R^m \to T, w \in R^T) $$ $q$ is a function that maps a vector to the number of trees? Doesn't make sense to me. $w_i$ is the score in the $i^{th}$ leaf but what is meant by $w_q(x)$ then? $w$ is indexed by the structure of the tree $q$? $w \in R^T$: I understand as there are $T$ leafs and $w$ is a vector of $T$ leaf scores. But what is meant by $F=\{f(x) = w_q(x)\}$? Image from: "XGBoost: A Scalable Tree Boosting System": https://arxiv.org/abs/1603.02754
$q_k$ is a function that maps the data ($x_i$) called the example in the article to a number from $1$ to $T_k$ where $T_k$ is the number of leaves in the $k$th tree (and could generally vary from tree to tree), $k$ from $1$ to $K$. It is a bit confusing that they named them $q$ and $T$ for all the trees. $w_{k_i}$ is the score of the $i$th leaf of the $k$th tree (shown in the article as $w_i$). It is also a bit confusing since there is a different set of "$w$"'s for each tree, as well as the possible values that $i$ can take (from $1$ to $T_k$). In this case, $w_{k_{q_{k}(x_i)}}$ refers to the $q_{k}(x_i)$th leaf of $w_k$. (In particular, $q_k:\mathbb R^m\rightarrow \{1,2,...,T_k\})$ and $w_k=\{w_{k_1},w_{k_2},...,w_{k_{T_k}}\}$.) Lastly $\mathscr F=\{f_k(x):f_k(x)=w_{k_{q_k(x_i)}}, k=1,2,...,K\}$ is the set of functions, one for each tree, that takes an example/data vector $x_i$, maps it to the appropriate leaf of that tree via the $q_k$ function, and then indexes the appropriate leaf $w_{k_j}$ of that tree with $j=q_k(x_i)$. The ensemble model is created by summing the continuous outcome variable that is received from each of these trees, specifically the value at the end of each leaf that has been chosen from each tree.
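A toy sketch of this notation (entirely made-up trees and scores, purely to illustrate the indexing; not from the paper): each tree $k$ is a pair $(q_k, w_k)$, where $q_k$ sends an example to a leaf index and $w_k$ holds one score per leaf.

```python
# tree 1: a stump on feature 0, with T_1 = 2 leaves
def q_1(x):
    return 0 if x[0] < 2.5 else 1

# tree 2: depth-2 splits, with T_2 = 3 leaves
def q_2(x):
    if x[1] < 1.0:
        return 0
    return 1 if x[0] < 4.0 else 2

w_1 = [-0.3, 0.7]          # leaf scores of tree 1
w_2 = [0.1, -0.2, 0.5]     # leaf scores of tree 2

def ensemble(x):
    """phi(x) = sum_k f_k(x) = sum_k w_k[q_k(x)]."""
    return sum(w[q(x)] for q, w in [(q_1, w_1), (q_2, w_2)])

assert ensemble([1.0, 0.5]) == -0.3 + 0.1   # leaves 0 and 0
assert ensemble([5.0, 2.0]) == 0.7 + 0.5    # leaves 1 and 2
```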
{ "language": "en", "url": "https://math.stackexchange.com/questions/4113877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Real roots of $f(x)=8x^3-6x+1$ given in complex form The function $f (x) = 8x^3-6x + 1$ has three real roots, as is easy to check (a quick way is to look at the graph of $f$). However, Wolfram gives the following exact values as roots: There is no Wolfram error here, because when asked for approximate root values it gives the real numbers $-0.93969, 0.17365$ and $0.76604$. In sum, each of the three exact values, when simplified, must give a real number. I have tried to find at least one of these three values, but it was not possible in a first attempt. Can someone describe a method to do it?
This phenomenon is called Casus irreducibilis. You cannot express the real roots in terms of real-valued radicals; however, a trigonometric expression is possible.
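Concretely (my own addition, not from the answer), the trigonometric form comes from the triple-angle identity: substituting $x=\cos\theta$ gives $$8\cos^3\theta-6\cos\theta+1=2\left(4\cos^3\theta-3\cos\theta\right)+1=2\cos 3\theta+1=0,$$ so $\cos 3\theta=-\tfrac12$, i.e. $\theta\in\left\{\tfrac{2\pi}{9},\tfrac{4\pi}{9},\tfrac{8\pi}{9}\right\}$, and the three real roots are $$\cos\tfrac{2\pi}{9}\approx 0.76604,\qquad \cos\tfrac{4\pi}{9}\approx 0.17365,\qquad \cos\tfrac{8\pi}{9}\approx -0.93969,$$ matching the approximate values reported by Wolfram.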
{ "language": "en", "url": "https://math.stackexchange.com/questions/4113990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Example of two non-homeomorphic spaces with the same de Rham group Can anyone give an example of two non-homeomorphic spaces with the same de Rham cohomology? I was thinking of $[0,1]$ and $\{0\}$ but does anyone have a more spectacular example?
For a slightly more exciting example, try a torus with one puncture and a "pair of pants" (a sphere with three boundary circles). These are both manifolds (with boundary) which deformation retract to a figure 8. A deep result (de Rham's Theorem) says that the de Rham cohomology of a manifold actually agrees with the singular cohomology (computed as a mere topological space)! Of course, the singular cohomology only depends on the homotopy type. In particular, we have: $$ \begin{aligned} H_\text{de Rham}^n(\text{punctured torus}) &\cong H_\text{singular}^n(\text{punctured torus}) \\ &\cong H_\text{singular}^n(\text{figure 8}) \\ &\cong H_\text{singular}^n(\text{pair of pants}) \\ &\cong H_\text{de Rham}^n(\text{pair of pants}) \end{aligned} $$ Lastly, we should check that a punctured torus is not homeomorphic to a pair of pants. But the punctured torus has only one boundary component (the puncture) while the pair of pants has three (the waist and two legs). I hope this helps ^_^
{ "language": "en", "url": "https://math.stackexchange.com/questions/4114139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proof of non-trivial fact to show singularity is essential In this question the asker wanted to show that the function $$f:z\mapsto\frac{(e^{iz}-1)e^{1/z}}{\sin z}$$ Has an essential singularity at $0$. User @wisefool came up with two complex sequences $z_n=1/n$ and $w_n=i/n$ that both converge to $0$, and showed that $$\lim_{n\to\infty}|f(z_n)|\neq \lim_{n\to\infty}|f(w_n)|$$ And used this to conclude the singularity at $0$ is essential. My question is... why? This really doesn't seem obvious to me, and he neither gave nor linked to any proof. Could someone help me construct a proof for this?
Let $f$ be an analytic function with an isolated singularity at $w$. By definition of a pole of order $n \ge 1$, we have $f(z)=\frac{g(z)}{(z-w)^n}$ for $z$ in a small neighborhood of $w$, where $g$ is analytic there and $g(w) \ne 0$. This clearly implies that $|f(z)| \to \infty$ as $z \to w$, since $1/|z-w|^n$ does so while $g(z) \to g(w) \ne 0$. By definition of a removable singularity, $f$ extends analytically, hence continuously, to $w$, so $f(z) \to f(w)$, which implies $|f(z)| \to |f(w)|$ as $z \to w$. So in both cases above we get a unique limit: infinity for poles, and an arbitrary finite value for removable singularities. The only case where two distinct limits along different sequences are possible is when the singularity is essential.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4114277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find the Lipschitz constant of the gradient of $\sqrt{1+x^2}$? I know $\left|\frac{x}{\sqrt{1+x^2}}\right| \le 1$. But I don't know how to find $L$ such that $\left|\frac{x}{\sqrt{1+x^2}}-\frac{y}{\sqrt{1+y^2}}\right| \le L|x-y|$. Would you please explain it? Thank you.
Hint: the function $f(x):=\sqrt{1+x^2}$ has second derivative $\dfrac{1}{(1+x^2)^{3/2}}$, so $f$ is in $\mathrm C^2$. Clue: Lagrange's theorem will apply. Clue: Lagrange gives here that $|f'(x) -f'(y)| \leq |f''(z_0)||x-y|$ where $z_0$ is somewhere in $(x, y)$.
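Following the hint, $\sup|f''|=f''(0)=1$, so the mean value theorem yields $L=1$; a grid search (my own snippet) agrees that the difference quotients of $f'$ never exceed $1$ and approach it near the origin:

```python
import math

def f_prime(x):
    return x / math.sqrt(1 + x * x)

worst = 0.0
pts = [k / 10 for k in range(-100, 101)]   # grid on [-10, 10]
for x in pts:
    for y in pts:
        if x != y:
            worst = max(worst, abs(f_prime(x) - f_prime(y)) / abs(x - y))

assert worst <= 1.0   # the MVT bound with sup|f''| = 1
assert worst > 0.99   # and the bound is sharp near x = y = 0
```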
{ "language": "en", "url": "https://math.stackexchange.com/questions/4114463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Example of function that is Gâteaux-differentiable but not Fréchet-differentiable I am looking for an example of a function that is Gateaux-differentiable but not Fréchet-differentiable. I know that there is a lot of example of function $f: \mathbb R^2 \to \mathbb R$ that satisfies this property. An example is $$f(x, y) = \begin{cases} \frac{x^3}{x^2 + y^2} & \text{if } (x, y) \neq 0,\\ 0 & \text{otherwise.} \end{cases}$$ But since the Fréchet and Gâteaux derivative are defined for Banach spaces in general, I am looking for a more fancy example such as a function from $L^p$ to $L^q$ for instance. Any idea ?
The typical way of constructing Gateaux, but not Frechet differentiable functions is to consider functions which are homogeneous of degree 1, but not (bounded and) linear. See the argument in section 3.7, page 148 of Loomis and Sternberg, or this answer of mine for a slight generalization. Note that this is precisely the reason why the $f$ you suggested has all directional derivatives, yet is not Frechet differentiable at the origin. Other examples like $f(x,y,z)=\frac{x^2yz^2}{x^4+y^4+z^4}$ if $(x,y,z)\neq (0,0,0)$ and $0$ otherwise will also have all directional derivatives, yet not be Frechet differentiable. As a concrete example in infinite dimensions, fix a finite measure space $(X,\mathfrak{M},\mu)$, and for $p\in [1,\infty]$, let $L^p$ denote $L^p(\mu,\Bbb{R})$, so we only consider real-valued functions. Consider the function $T:L^2\times L^2\times L^{\infty}\to L^1$ defined as \begin{align} T(f,g,h):= \begin{cases} \dfrac{fgh}{\displaystyle\int_X(f^2+g^2+h^2)\,d\mu}&\text{if $(f,g,h)\neq 0$}\\\\ 0 & \text{if $(f,g,h)=0$} \end{cases} \end{align} Note that by Holder's (or in this case Cauchy-Schwarz) inequality, $fgh$ does indeed lies in $L^1$ and because $f,g\in L^2$ and $h\in L^{\infty}$ with $\mu$ being a finite measure, the denominator will be finite if $(f,g,h)\neq (0,0,0)$; furthermore because we're only dealing with real-valued functions, the denominator only vanishes when $(f,g,h)=0$, so everything is well-defined. Now, $T$ is easily verified to be homogeneous of degree $1$, but is not a bounded linear transformation. Hence, all the directional derivatives exist, but $T$ is not Frechet-differentiable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4114592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A vector-product formula Let $\mathbf a, \mathbf b,$ be vectors in $\mathbb R^3$ and let $R$ be a $3\times 3$ matrix. Then we have $$ ^t\!R\bigl(R\mathbf a\times R\mathbf b\bigr)=(\det R)(\mathbf a\times \mathbf b). \tag 1$$ The proof of (1) uses the very definition of the vector-product: we have $ \langle \mathbf a\times \mathbf b, \mathbf c\rangle=\det( \mathbf a, \mathbf b, \mathbf c) $ so that $$\langle ^t\!R\ (R\mathbf a\times R\mathbf b), \mathbf c\rangle =\langle R\mathbf a\times R\mathbf b, R\mathbf c\rangle=\det( R\mathbf a, R\mathbf b, R\mathbf c)=(\det R)\det( \mathbf a, \mathbf b, \mathbf c) =(\det R)\langle \mathbf a\times \mathbf b, \mathbf c\rangle, $$ which yields (1). My question: can anyone survive a "direct" proof of (1) by brute force, i.e. by calculating explicitly all terms of each side? I guess that a fine use of the Einstein convention could help, but it seems to me that this is one more example of how a little bit of abstraction saves you from an intractable computation.
A basic "index-based" computation is totally feasible. To make it nice, you need the Einstein summation convention and also the Levi-Civita symbol $\epsilon_{ijk}$. See wikipedia for a bunch of nice formulas on how to express cross-products and determinants with $\epsilon$. First expand everything: \begin{align} (R^T(Ra \times Rb))_i &= R_{ji} (Ra \times Rb)_j \\ &= R_{ji} (Ra)_k (Rb)_l \epsilon_{jkl} \\ &= R_{ji} R_{km} a_m R_{ln}b_n\ \epsilon_{jkl} \end{align} and then reorder everything and group again: \begin{align} &= \left(R_{ji} R_{km} R_{ln}\epsilon_{jkl}\right) a_m b_n \\ &= \det(R)\epsilon_{imn} a_m b_n \\ &= \det(R)(a\times b)_i \end{align}
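For what it's worth, the identity is also easy to check numerically; here is a short NumPy verification (an addition of mine, not part of the index computation above):

```python
import numpy as np

# Check  R^T (Ra x Rb) = det(R) (a x b)  for a random matrix R and random
# vectors a, b; the two sides should agree up to floating-point rounding.
rng = np.random.default_rng(0)
R = rng.standard_normal((3, 3))
a = rng.standard_normal(3)
b = rng.standard_normal(3)

lhs = R.T @ np.cross(R @ a, R @ b)
rhs = np.linalg.det(R) * np.cross(a, b)
max_err = float(np.max(np.abs(lhs - rhs)))
```

Of course this proves nothing by itself, but it is a cheap safeguard against sign or index errors in the $\epsilon$-gymnastics.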
{ "language": "en", "url": "https://math.stackexchange.com/questions/4114757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Notes on Low-Dimensional Topology I am studying algebraic topology at the moment and I'm halfway done with Hatcher's book. I am extremely interested in low-dimensional topology, so I was wondering if anybody knows a good set of notes on knot theory and 4-dimensional manifolds. Any reference would be much appreciated.
There are many introductory books on knot theory. I will list a few in an order of increasing difficulty. * *Adams: The knot book (2004), may be too easy since it doesn't even mention the fundamental group of a knot. *Livingston: Knot Theory (1993) *Murasugi: Knot theory and its applications (1996) *Burde, Zieschang: Knots (2013) *Kawauchi: A Survey of Knot Theory (1996), not exactly a textbook but contains some proofs skipped by the previous items. I haven't read Lickorish's book, so can't judge it. The material listed above should be more than enough to understand the fundamentals of knot theory. If you want to continue your journey with knots you should switch to reading papers from journals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4114932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Prove $f(x,y) =\sqrt{16-x^2-y^2}$ is continuous using $\epsilon$, $\delta$? I want to prove that for every $\epsilon > 0$ there exists a $\delta > 0$ such that $$(x-x_0)^2 + (y-y_0)^2 < \delta^2 \Rightarrow \left|\sqrt{16-x^2-y^2}-\sqrt{16-x_0^2-y_0^2} \right| < \epsilon.$$ I note that $|x-x_0| \leq \delta$ and $|y-y_0| \leq \delta$. What I've tried: $$\left| \sqrt{16-x^2-y^2}-\sqrt{16-x_0^2-y_0^2} \right| = \frac{|x^2-x_0^2 + y^2-y_0^2|}{\sqrt{16-x^2-y^2}+\sqrt{16-x_0^2-y_0^2} }$$ $$= \frac{|(x-x_0)(x+x_0) + (y-y_0)(y+y_0)|}{\sqrt{16-x^2-y^2}+\sqrt{16-x_0^2-y_0^2} } \leq \frac{|x-x_0| \cdot |x+x_0| + |y-y_0|\cdot |y+y_0|}{\sqrt{16-x^2-y^2}+\sqrt{16-x_0^2-y_0^2} }$$ $$\leq \frac{|x-x_0| \cdot |x+x_0| + |y-y_0|\cdot |y+y_0|}{\sqrt{16-x_0^2-y_0^2} }$$ But then I do not know how to continue.
HINT You are on the right track! Notice that \begin{align*} \begin{cases} |x + x_{0}| = |(x - x_{0}) + 2x_{0}| \leq |x - x_{0}| + 2|x_{0}|\\\\ |y + y_{0}| = |(y - y_{0}) + 2y_{0}| \leq |y - y_{0}| + 2|y_{0}| \end{cases} \end{align*} Consequently, if we set $k^{-1} = \sqrt{16 - x^{2}_{0} - y^{2}_{0}}$, then we have that \begin{align*} k(|x - x_{0}||x + x_{0}| + |y - y_{0}||y + y_{0}|) \leq 2k\delta^{2} + 2k\delta(|x_{0}| + |y_{0}|) := \varepsilon \end{align*} Can you take it from here?
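To see the hint in action, one can solve $2k\delta^{2}+2k\delta(|x_0|+|y_0|)=\varepsilon$ for $\delta$ and test the resulting bound at a sample point. The snippet below is an illustrative check (the base point $(x_0,y_0)=(1,2)$ and the random sampling are my choices, not part of the hint):

```python
import math
import random

def f(x, y):
    return math.sqrt(16 - x * x - y * y)

x0, y0 = 1.0, 2.0
eps = 0.01
k = 1.0 / math.sqrt(16 - x0**2 - y0**2)
M = abs(x0) + abs(y0)
# positive root of  delta^2 + M*delta - eps/(2k) = 0
delta = (-M + math.sqrt(M * M + 2 * eps / k)) / 2

# Sample points uniformly in the disk of radius delta around (x0, y0) and
# verify |f(x, y) - f(x0, y0)| < eps at every one of them.
random.seed(0)
ok = True
for _ in range(10_000):
    r = delta * math.sqrt(random.random())
    t = 2 * math.pi * random.random()
    x, y = x0 + r * math.cos(t), y0 + r * math.sin(t)
    if abs(f(x, y) - f(x0, y0)) >= eps:
        ok = False
```

Here $\delta$ is small enough that the whole sampled disk stays inside the domain $x^2+y^2<16$, so the bound from the hint applies.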
{ "language": "en", "url": "https://math.stackexchange.com/questions/4115146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
System of 3 simultaneous equation where one is non-linear Hi I have a problem where I need to solve the following set of equations: $$ v = U u $$ $$ u = 1 -Uv $$ $$ U^2 = u^2 + v^2 $$ I have tried subbing $u$ and $v$ into the expression for $U^2$ but it seems to get very messy very quickly. Any help solving for $u$,$v$ and $U$ would be greatly appreciated.
$v = Uu, u = 1-Uv, U^2 = u^2 + v^2$ I solve the first for $U = v/u$ and substitute into both of the others: $$u = 1-\frac{v^2}{u}, \frac{v^2}{u^2} = u^2 + v^2$$ Then the first becomes $$ u^2 = u - v^2$$ and the second becomes $$v^2 = u^4 + u^2v^2$$ The former rearranges into $u = v^2 + u^2$ and putting that into the original third equation, I find $U^2 = u$. So $$\frac{v^2}{u^2} = U^2 = u,$$ so $v^2 = u^3$ or $v = u^{3/2}$. Finally, $u = 1-Uv$ becomes $u = 1-u^2$, and this quadratic solves to $$u = {{-1\pm\sqrt{5}}\over {2}}$$ Since $v = u^{3/2}$ requires $u \ge 0$, only the positive root $u = \frac{\sqrt 5 - 1}{2}$ gives a real solution; then $v = u^{3/2}$ and $U = \sqrt u$.
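A quick numerical check (my addition) that the positive root really satisfies all three original equations:

```python
import math

# u = (sqrt(5) - 1)/2 solves u = 1 - u^2; set v = u^(3/2) and U = v/u = sqrt(u)
# and verify each of the three equations to floating-point precision.
u = (math.sqrt(5) - 1) / 2
v = u ** 1.5
U = v / u

eq1 = abs(v - U * u)                 # v = U u
eq2 = abs(u - (1 - U * v))           # u = 1 - U v
eq3 = abs(U**2 - (u**2 + v**2))      # U^2 = u^2 + v^2
```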
{ "language": "en", "url": "https://math.stackexchange.com/questions/4115308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Product operation for cosets In Artin's book, he gives a lemma: for a normal subgroup $N \subset G$ and cosets $aN$, $bN$, the product set $(aN)(bN)$ is again a coset, namely $(ab)N$; the proof simply uses the definition of normality: $$(aN)(bN) = a(Nb)N = a(bN)N = (ab)(NN) = (ab)N.$$ My first question is how to put this proof in the context of demonstrating that this product operation is well-defined, which requires proving that if $aN = a'N$ and $bN = b'N$, then $abN = a'b'N$. I am not sure if this is in fact a proof of well-definedness. I know that we can define a set product, say $AB$, of elements $ab$ where $a \in A, b \in B$. But we can't "use" this product unless it is well-defined, in which case it is by definition a coset, which leads me to believe that Artin is in fact saying that this proof allows the operation to "work." Can someone help me understand this?
Edit: I misinterpreted the question posed, so what's below is largely unhelpful. I will keep my answer up regardless for anyone who would like to know how to prove that coset multiplication is well defined for context. To prove that coset multiplication is well defined I will prove the following statement adapted from the book Abstract Algebra by John Beachy and William D. Blair: Let $N$ be a normal subgroup of $G$, and let $a,b,c,d\in G$. If $aN = cN$ and $bN = dN$, then $abN = cdN$. Which implies that multiplication of left cosets is well defined. Proof: Suppose that $aN = cN$ and $bN = dN$ then it is necessarily the case that $a^{-1}c\in N$ and $b^{-1}d\in N$. However, we know $N$ is normal and so $d^{-1}(a^{-1}c)d\in N$. Since $b^{-1}d\in N$ we have $(ab)^{-1}cd = (b^{-1}d)(d^{-1}a^{-1}cd)\in N$ which immediately implies that $abN = cdN$. As such, left coset multiplication is indeed well defined given that $N$ is a normal subgroup.
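The statement can also be checked exhaustively in a small group. The sketch below (my own illustration, not part of the proof above) verifies that coset multiplication is well defined in $S_3$ for the normal subgroup $A_3$, and fails for the non-normal subgroup generated by a single transposition:

```python
from itertools import permutations

def compose(p, q):          # (p*q)(i) = p(q(i))
    return tuple(p[q[i]] for i in range(len(q)))

S3 = list(permutations(range(3)))
# A3 = even permutations (even number of inversions); H = {e, (0 1)} is not normal.
A3 = [p for p in S3 if sum(1 for i in range(3) for j in range(i)
                           if p[j] > p[i]) % 2 == 0]
H = [(0, 1, 2), (1, 0, 2)]

def coset(g, sub):          # left coset g*sub
    return frozenset(compose(g, n) for n in sub)

def well_defined(sub):
    # For every a,c with aN = cN and b,d with bN = dN, check abN = cdN.
    for a in S3:
        for c in S3:
            if coset(a, sub) != coset(c, sub):
                continue
            for b in S3:
                for d in S3:
                    if coset(b, sub) == coset(d, sub) and \
                       coset(compose(a, b), sub) != coset(compose(c, d), sub):
                        return False
    return True

normal_ok = well_defined(A3)
nonnormal_ok = well_defined(H)
```

The brute-force search confirms the point of the proof: normality is exactly what makes the product of cosets independent of the chosen representatives.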
{ "language": "en", "url": "https://math.stackexchange.com/questions/4115446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
finding the max of a function This question popped up during a night's drinking and it has been bugging me ever since. the conditions are: $$\left\{\begin{matrix} x+14y\leq 1820\\ x+25y\leq 2162.5\\ x\geq 0\\ y\geq 0 \end{matrix}\right.$$ The question is: what is the maximum possible value of $4x+3y$? It took us 3 lads around an hour and plenty of drawing to solve this. My question is are there ways to solve this without plotting a graph and purely through equations. I tried looking up the Lagrange multipliers method but that method falls flat for me as all the derivatives are reduced to constants.
That region is a polygon whose sides are: * *the line segment that goes from $(0,0)$ to $(0,86.5)$; *the line segment that goes from $(0,86.5)$ to $(1384.09,31.1364)$; *the line segment that goes from $(1384.09,31.1364)$ to $(1820,0)$; *the line segment that goes from $(1820,0)$ to $(0,0)$. Since the gradient of $4x+3y$ is never $0$, the maximum has to be attained on one of the sides. Now, $(x,y)$ belongs to the first side if and only if $x=0$ and $y\in[0,86.5]$. So, on that side $4x+3y$ attains its maximum ($259.5$) at $(0,86.5)$. Do the same thing for the other three sides, and you will get the answer.
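The same answer can be reached without any drawing by intersecting the four boundary lines pairwise and keeping only the feasible corner points; the short script below (an illustration of mine, not part of the answer above) does exactly that:

```python
from itertools import combinations

# Boundary lines written as a*x + b*y = c.
lines = [
    (1, 14, 1820),      # x + 14y = 1820
    (1, 25, 2162.5),    # x + 25y = 2162.5
    (1, 0, 0),          # x = 0
    (0, 1, 0),          # y = 0
]

def feasible(x, y, tol=1e-9):
    return (x + 14 * y <= 1820 + tol and x + 25 * y <= 2162.5 + tol
            and x >= -tol and y >= -tol)

vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        continue                      # parallel lines: no corner
    x = (c1 * b2 - c2 * b1) / det     # Cramer's rule
    y = (a1 * c2 - a2 * c1) / det
    if feasible(x, y):
        vertices.append((x, y))

best = max(vertices, key=lambda p: 4 * p[0] + 3 * p[1])
best_value = 4 * best[0] + 3 * best[1]
```

It finds the four feasible corners and reports the maximum $7280$, attained at the corner $(1820,0)$.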
{ "language": "en", "url": "https://math.stackexchange.com/questions/4115625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometry problem proving that all lines $DE$ passes through one point Let $I$ is the incenter of $\triangle ABC$. Let $K$ be the circumcircle of $ABC$. Let $D$ be a variable point on arc $AB$ on $K$ not containing $C$. Let $E$ be a point on line segment $BC$ such that $\angle ADI = \angle IEC$. Prove that as $D$ varies on arc $AB$, the line $DE$ passes through one point. First, after testing a couple of cases I found that the point is the midpoint of arc $BC$. I've tried direct proof and reverse reconstructing the problem, but nothing worked. Can anyone help? (edit: I reflected $E'$ in the bisector of $C$ and created a cyclic quadrilateral $ADIE'$. Maybe it is a useful property...) (edit 2: I suspect inversion might be involved...)
Here is a detailed proof, also using "the point $F$" as in the post of Aqua, it is the point that best "geometrically interpolates" between $D$ and $E$. The definition of $F$ below will be the one that lets us give a direct (instead of indirect) proof. We start by constructing the parallel $(f)$ to $BC$ through $I$. Then let us draw the circle $(\Gamma)$ through $A,D,I$; it intersects $(f)$ a first time in $I$, and a second time (counting multiplicities) in a point $F$. This is the definition of $F$. By construction, we have the same measure $x$ of angles... $$ x =\widehat{IEC} =\widehat{ADI} =\widehat{AFI} \ . $$ The points $A'$, and $S$ are the intersections of the angle bisector $AI$ (in $A$ of $\Delta ABC$) with the side $BC$, and respectively with the circumcircle $(K)=\odot(ABC)$. Claim: * *$(1)$ The points $S,D,F$ are colinear. *$(2)$ The points $S,E,F$ are colinear. It is clear that the OP follows from the above, showing that $DE$ passes through the fixed point $S$, the midpoint of the arc $\overset\frown{BC}$ opposite to the vertex $A$. Proof of the claim: $(1)$ Using the two circles, the given $(K)$ and the constructed $(\Gamma)$, we have the equality of measures of angles: $$ \begin{aligned} \widehat{ADS} &\overset{(K)}= \widehat{ABS} = \widehat{ABC}+\widehat{CBS} = \hat B+\widehat{CAS} = \hat B+\frac 12\hat A\ , \\ \widehat{ADF} &\overset{(\Gamma)}= \widehat{FIS} = \widehat{BA'S} = \frac12(\overset\frown{BS}+\overset\frown{AC}) = \frac12\overset\frown{BS}+\frac12\overset\frown{AC} = \widehat{BAS}+\widehat{ABC} = \frac 12\hat A+\hat B\ , \\ &\qquad\text{ giving } \\ \widehat{ADS} &= \widehat{ADF} \ , \end{aligned} $$ so the points $D,F,S$ are colinear. This concludes $(1)$. $(2)$ We will show that $F,E,S$ are colinear by the converse of the theorem of Thales, showing that the proportions of corresponding sides in the a posteriori valid similarity $\Delta SA'E\sim\Delta SIF$ are equal.
Here is the computational path: $$ \begin{aligned} \frac {FI}{EA'} &= \frac {AI}{IA'} &&\text{ using }\Delta AIF\sim\Delta IA'E \\ &= \frac {AB}{BA'} &&\text{ using the angle bisector $BI$ in }\Delta ABA' \\ &= \frac {SC}{SA'} &&\text{ using }\Delta SA'C\sim\Delta BA'A \\ &= \frac {SI}{SA'} &&\text{ using $\Delta SIC$ isosceles, $SC=SI$} \ . \end{aligned} $$ The needed proportions are thus equal, so $F,E,S$ are colinear. This concludes $(2)$, and the claim.
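As a sanity check of the result (a numerical experiment of mine, not part of the proof), one can place the circumcircle at the unit circle, take $E$ to be the intersection of line $DS$ with $BC$, and verify that $\widehat{ADI}=\widehat{IEC}$ for a concrete scalene triangle and a concrete choice of $D$:

```python
import math

def pt(deg):
    return (math.cos(math.radians(deg)), math.sin(math.radians(deg)))

A, B, C = pt(100), pt(200), pt(340)   # a scalene triangle on the unit circle
S = pt(270)                            # midpoint of arc BC not containing A
D = pt(150)                            # a point on arc AB not containing C

def dist(P, Q):
    return math.hypot(P[0] - Q[0], P[1] - Q[1])

# Incenter as the side-length-weighted average of the vertices.
a, b, c = dist(B, C), dist(C, A), dist(A, B)
I = ((a * A[0] + b * B[0] + c * C[0]) / (a + b + c),
     (a * A[1] + b * B[1] + c * C[1]) / (a + b + c))

def intersect(P1, P2, P3, P4):         # line P1P2 with line P3P4
    d1 = (P2[0] - P1[0], P2[1] - P1[1])
    d2 = (P4[0] - P3[0], P4[1] - P3[1])
    det = d1[0] * d2[1] - d1[1] * d2[0]
    t = ((P3[0] - P1[0]) * d2[1] - (P3[1] - P1[1]) * d2[0]) / det
    return (P1[0] + t * d1[0], P1[1] + t * d1[1])

E = intersect(D, S, B, C)              # E := DS ∩ BC

def angle(P, V, Q):                    # unsigned angle P-V-Q at vertex V, degrees
    u = (P[0] - V[0], P[1] - V[1])
    w = (Q[0] - V[0], Q[1] - V[1])
    cos = (u[0] * w[0] + u[1] * w[1]) / (math.hypot(*u) * math.hypot(*w))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos))))

adi = angle(A, D, I)
iec = angle(I, E, C)
```

Since the theorem says the $E$ defined by the angle condition lies on line $DS$, constructing $E$ as that intersection and recovering the angle equality is the natural reversed check.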
{ "language": "en", "url": "https://math.stackexchange.com/questions/4115757", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }