How to choose $b_n$ for $\sum_{n=1}^{\infty}\frac{5n^3+1}{2^n\left(n^3+n+1\right)}$ So I have to use the limit comparison test for the following problem, but I'm struggling with how to choose my $b_n$. $$\sum_{n=1}^{\infty}\frac{5n^3+1}{2^n\left(n^3+n+1\right)}$$ My first thought was to compare $2^n$ and $(n^3+n+1)$ to see which is faster growing, but then I wasn't sure, since these were in the denominator, whether I should actually compare $\frac{1}{2^n}$ and $\frac{1}{n^3+n+1}$ to see which goes to zero the slowest. But then I realized that all the examples I had for choosing terms in this fashion were additive, not multiplicative. So I'm really not sure what the best way to choose $b_n$ is for this problem. Any tips? Thank you!
So I think I figured this out on my own, just wanted to post and see if this looks right: $$\sum_{n=1}^{\infty}\frac{5n^3+1}{2^n\left(n^3+n+1\right)}\xrightarrow[]{dominant\ power\ terms}b_n=\frac{5n^3}{2^nn^3}=5\left ( \frac{1}{2} \right )^n\rightarrow geometric\ series\ with \left | r \right |< 1\rightarrow convergent$$ then by LCT: $$\lim_{n \to \infty}\frac{\frac{5n^3+1}{2^n(n^3+n+1)}}{\frac{5}{2^n}}= \lim_{n \to \infty}\frac{5n^3+1}{2^n(n^3+n+1)}\cdot \frac{2^n}{5}= \lim_{n \to \infty}\frac{n^3+\frac{1}{5}}{n^3+n+1}\xrightarrow[]{dominant\ terms}\lim_{n \to \infty}\frac{n^3}{n^3}=1\rightarrow both\ series\ converge$$
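As a quick numerical sanity check of the LCT computation above (not part of the proof; the function names `a` and `b` are mine), the ratio can be evaluated with exact rational arithmetic:

```python
from fractions import Fraction

def a(n):  # general term of the given series
    return Fraction(5 * n**3 + 1, 2**n * (n**3 + n + 1))

def b(n):  # comparison term 5 * (1/2)**n
    return Fraction(5, 2**n)

# the powers of 2 cancel, so a(n)/b(n) = (5n^3 + 1) / (5(n^3 + n + 1)) -> 1
for n in (10, 100, 1000):
    print(n, float(a(n) / b(n)))
```

Since the limit is a finite positive number and $\sum b_n$ is a convergent geometric series, the LCT conclusion goes through.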
{ "language": "en", "url": "https://math.stackexchange.com/questions/4601508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rotating a vector around an axis so that it falls onto the specified plane I'm trying to refactor/solve the following equation for theta if $ \vec{v} , \vec{r} $ and $ \vec{n} $ are known vectors: $ \left(\vec{v} \cdot \cos\left(\theta\right) + \left(\vec{r} \times \vec{v}\right) \cdot \sin\left(\theta\right) + \vec{r} \cdot \left(\vec{r} \cdot \vec{v}\right) \cdot \left(1 - \cos\left(\theta\right)\right)\right) \cdot \vec{n} = 0 $ If this looks familiar, it's probably because it is - it's a combination of the Rodrigues equation for rotating a vector (in this case, $ \vec{v} \,$) around an axis $ \left(\vec{r} \right)$ and a simple dot operation to determine the distance of the resulting vector to a given plane defined by a normal $ \left(\vec{n}\right) $. In essence, I'm trying to solve for $ \theta $ such that the vector $ \vec{v} $ will be rotated onto plane $ \vec{n} $ if possible. Near as I can tell, there should be either $ 0, 1 $ or $ 2 $ solutions to this problem - making me think that it might be possible to refactor this into something that can be solved by a quadratic equation. Alternatively (and I have no idea how this would work) - I'm guessing it might also be possible to somehow "project" $ \vec{v} $ around the axis $ \vec{r} $ so that it falls on the plane with a normal defined by $ \vec{n} $ and then calculate the angle between the original vector and the projected vector $ \left(\vec{s}\right) $, but I'm not really sure how you would do that. How can I solve this problem?
Starting from your equation $$\left(\vec{v} \cdot \cos\left(\theta\right) + \left(\vec{r} \times \vec{v}\right) \cdot \sin\left(\theta\right) + \vec{r} \cdot \left(\vec{r} \cdot \vec{v}\right) \cdot \left(1 - \cos\left(\theta\right)\right)\right) \cdot \vec{n} = 0$$ gives $$a \cos \theta - b \sin \theta = c,$$ where $a=\left(\vec{n} \cdot \vec{r} \right) \left(\vec{r} \cdot \vec{v}\right) - \vec{n} \cdot \vec{v}$, $b=\vec{n} \cdot \left(\vec{r} \times \vec{v}\right)$ and $c=\left(\vec{n} \cdot \vec{r} \right) \left(\vec{r} \cdot \vec{v}\right)$. Let $M=\sqrt{a^2+b^2}$ and $\alpha = \arctan \dfrac{b}{a}$; then $$M \cos \left(\theta + \alpha\right) = c,$$ which will give two values of $\theta$ if $|c| < |M|$, one if $|c| = |M|$ and none if $|c| > |M|$.
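A small numerical sketch of this solution (the helper names are mine, and $\vec{r}$ is assumed to be a unit vector, as Rodrigues' formula requires; `atan2` is used so that $\alpha$ lands in the correct quadrant):

```python
import math

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def rotate(v, r, theta):
    """Rodrigues rotation of v about the unit axis r by angle theta."""
    c, s = math.cos(theta), math.sin(theta)
    rv = cross(r, v)
    k = dot(r, v) * (1 - c)
    return tuple(v[i]*c + rv[i]*s + r[i]*k for i in range(3))

def angles_onto_plane(v, r, n):
    """Solve M*cos(theta + alpha) = c for theta, as derived above."""
    a = dot(n, r) * dot(r, v) - dot(n, v)
    b = dot(n, cross(r, v))
    c = dot(n, r) * dot(r, v)
    M = math.hypot(a, b)
    if abs(c) > M:
        return []                      # no rotation lands on the plane
    if M == 0.0:
        return [0.0]                   # degenerate: every angle works
    alpha = math.atan2(b, a)
    base = math.acos(max(-1.0, min(1.0, c / M)))
    return sorted({base - alpha, -base - alpha})
```

For example, rotating $v=(1,1,0)$ about the $z$-axis to land on the plane with normal $n=(0,1,0)$ yields two angles, and plugging either back into `rotate` gives a vector with zero component along $n$.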
{ "language": "en", "url": "https://math.stackexchange.com/questions/4601656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Why is this matrix not diagonalizable I'm studying for an exam in linear algebra, doing some true/false statements, and can't figure out why this matrix is not diagonalizable. I am supposed to see that without any calculations. Would anyone mind explaining? Thanks! $\begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}$
Denote by $e_1, e_2$ the basis for the $2$-dimensional vector space you're working on. Calling your matrix $A$, the first thing you can observe is that $Ae_1 = e_1$, so you have already found one eigenvector + eigenvalue. Now, two scenarios: * *either you have already studied triangular matrices, and you know that the second eigenvalue of $A$, if it has one, also has to be $1$ *or you haven't, in which case we will just assume $A$ does have a second eigenvalue $\lambda$, associated to some vector $v$ (i.e. $Av = \lambda v$) The second column of your matrix $A$ tells you that $Ae_2 = e_1 + e_2$. Now, writing $v = ae_1 + be_2$, for arbitrary scalars $a,b$, you have: $Av = ae_1 + be_1 + be_2 = (a+b)e_1 + be_2$, by writing out the calculations. So, $Av = \lambda v \iff (a+b)e_1 + be_2 = \lambda(ae_1 + be_2)$ In other words, you need to satisfy the two equations $a + b = \lambda a$ and $b = \lambda b$. Clearly, the only possible solution to this system with $v \neq 0$ is $\lambda = 1, b = 0$ (if $\lambda \neq 1$, the second equation forces $b = 0$ and then the first forces $a = 0$). So then, the only possible choice of eigenvector is $ae_1$ for $a$ a scalar. So the only eigenvector of your matrix was the obvious one, $e_1$. Since a $2\times 2$ matrix is diagonalizable precisely when it has two linearly independent eigenvectors, and all eigenvectors of $A$ are multiples of $e_1$, the matrix is not diagonalizable.
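The same conclusion can be checked mechanically: the geometric multiplicity of the eigenvalue $1$ is $2 - \operatorname{rank}(A - I) = 1$, which is less than its algebraic multiplicity $2$. A small sketch (the function name is mine, $2\times 2$ case only):

```python
def eigenspace_dim_2x2(A, lam):
    """Geometric multiplicity of lam: 2 - rank(A - lam*I)."""
    a, b = A[0][0] - lam, A[0][1]
    c, d = A[1][0], A[1][1] - lam
    if a * d - b * c != 0:
        rank = 2
    elif any(x != 0 for x in (a, b, c, d)):
        rank = 1
    else:
        rank = 0
    return 2 - rank

A = [[1, 1],
     [0, 1]]
print(eigenspace_dim_2x2(A, 1))  # 1: only one independent eigenvector
```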
{ "language": "en", "url": "https://math.stackexchange.com/questions/4601821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof by induction that the sum of transposed matrices equals the transposition of the sum of the matrices Let $A_k \in \mathbb{K}^{p\times r}$ for all $k \in \underline{n}$. Show that $(A_1 + A_2 + \ldots + A_n)^T = A_1^T + A_2^T + \ldots + A_n^T.$ Or rather $\sum_{k=1}^{n}A_k^T = (\sum_{k=1}^{n}A_k)^T.$ I tried doing that with induction, but I'm having problems in the induction step: $$\sum_{k=1}^{n+1}A_k^T = \sum_{k=1}^{n}A_k^T + A_{n+1}^T = \left(\sum_{k=1}^{n}A_k\right)^T + A_{n+1}^T$$ How do I continue? How do I get that last term into the sum? I feel like I'd have to use the statement I want to prove to prove it which is nonsense of course.
You're asking about proving $$\sum_{i=1}^{n}A_i^T = (\sum_{i=1}^{n}A_i)^T \tag{1}\label{eq1A}$$ for all integers $n \ge 1$. With the first base case of $n = 1$, the LHS and RHS are both just $A_1^T$. Next, as basically suggested in Jair Taylor's comment, it's useful to prove the $n = 2$ on its own, i.e., $$B^T + C^T = (B + C)^T \tag{2}\label{eq2A}$$ for any $B, C \in \mathbb{K}^{p\times r}$. Similar to what's stated in If $A$ and $B$ are arbitrary $m\times n$ matrices, show that $^t(A+B)= {}^tA+{}^tB$?, this can be shown by checking component wise by using $X_{i,j}$ to mean the element at the $i$'th row and $j$'th column of the matrix $X$. By also using the entrywise sum property of matrix addition, the LHS is $(B^T + C^T)_{i,j} = (B^T)_{i,j} + (C^T)_{i,j} = B_{j,i} + C_{j,i}$, while the RHS is $\left((B + C)^T\right)_{i,j} = (B + C)_{j,i} = B_{j,i} + C_{j,i}$, i.e., they're equal. Next, for the induction step, assume for some integer $k \ge 2$ that \eqref{eq1A} is true for $n = k$. We then have using this (in the second line below), and \eqref{eq2A} (in the third line below), that $$\begin{equation}\begin{aligned} \sum_{i=1}^{k+1}A_i^T & = \sum_{i=1}^{k}A_i^T + A_{k+1}^T \\ & = (\sum_{i=1}^{k}A_i)^T + A_{k+1}^T \\ & = ((\sum_{i=1}^{k}A_i) + A_{k+1})^T \\ & = (\sum_{i=1}^{k+1}A_i)^T \end{aligned}\end{equation}\tag{3}\label{eq3A}$$ i.e., \eqref{eq1A} is also true for $n = k+1$. Thus, by induction, \eqref{eq1A} is true for all integers $n \ge 1$.
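The identity being proved is easy to spot-check numerically; a minimal sketch (the helper names are mine):

```python
from functools import reduce

def transpose(M):
    return [list(col) for col in zip(*M)]

def madd(A, B):  # entrywise matrix addition
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

mats = [[[1, 2], [3, 4]],
        [[5, 6], [7, 8]],
        [[0, -1], [2, 3]]]

lhs = reduce(madd, [transpose(M) for M in mats])  # sum of transposes
rhs = transpose(reduce(madd, mats))               # transpose of the sum
print(lhs == rhs)  # True
```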
{ "language": "en", "url": "https://math.stackexchange.com/questions/4601960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is uniform differentiability symmetric? For all $\varepsilon > 0$, there exists $\delta > 0$ such that for all $x \in \mathbb R$, $$\left \lvert \frac{f(x)-f(y)}{x-y} - f'(y)\right \rvert \leq \varepsilon$$ when $|x-y| < \delta$. Note: Initially, I had incorrectly written the following, as Theo Bendit and Anne Bauval helped correct: For all $\varepsilon > 0$, for all $x \in \mathbb R$, there exists $\delta > 0$ such that $$\left \lvert \frac{f(x)-f(y)}{x-y} - f'(y)\right \rvert \leq \varepsilon$$ when $|x-y| < \delta$. End note. Does this imply that $$\left \lvert \frac{f(x)-f(y)}{x-y} - f'(x)\right \rvert \leq \varepsilon$$? Intuitively, it seems it must: The first equation is stating that the slope of the secant is arbitrarily close to the derivative at one point of the secant. This should apply to both points of the secant - there's no way to distinguish one point $(y)$ from the other $(x)$. Yet, algebraically, I haven't been able to prove it. Is my assertion true? Can it be proven algebraically? Is it true if we replace $f'$ with an arbitrary function $g$? Or does the proof somehow depend on the nature of the derivative? What am I missing?
Now your assertion is true, even if you replace $f'$ by any function $g,$ and $\frac{f(x)-f(y)}{x-y}$ by any other symmetric function $F(x,y).$ Even more generally, if $$\forall(x,y)\in\Bbb R^2\quad(|x-y|<\delta\implies h(F(x,y),y)\leq \varepsilon)$$ then (by change of notations) $$\forall(y,x)\in\Bbb R^2\quad(|y-x|<\delta\implies h(F(y,x),x)\leq \varepsilon),$$ which is equivalent to $$\forall(x,y)\in\Bbb R^2\quad(|x-y|<\delta\implies h(F(x,y),x) \leq \varepsilon)$$ by symmetry of $F,$ of the distance, and of $\forall x\forall y.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4602123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Draw an area on the complex plane On the complex plane draw the area: $$ \begin{equation} \begin{cases} |z+4i| < 3 \\ |\arg(z-5-5i)|<\frac{\pi}{3} \end{cases} \end{equation} $$ Where $ \arg(z) \in (-\pi, \pi ]$ I can draw $|z+4i| < 3$: $|x + iy + 4i|<3 \Rightarrow \sqrt{x^2 + (y + 4)^2}<3 \Rightarrow x^2 + (y + 4)^2<9$: However, I have no idea how to draw and intersect with $|\arg(z-5-5i)|<\frac{\pi}{3}$
To draw the region $|\text{arg}(z-5-5i)| < \frac \pi 3$, you should first draw the region $|\text{arg}(z)| < \frac \pi 3$ and then translate that shape by $5+5i$. The region $|\text{arg}(z)| < \frac \pi 3$ is just all points $r e^{i \theta}$ with $-\pi/3 < \theta < \pi/3$. It's the region between two rays that start at $z=0$. Therefore, the region $|\text{arg}(z-5-5i)| < \frac \pi 3$ is basically the same shape except the rays start at $z=5+5i$ instead. Combining that with the disc you already drew, we actually find that there is no overlap at all! Thus the region you want is actually empty.
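One can confirm the empty intersection by brute force on a grid covering the disc (a rough check, not a proof; the names are mine):

```python
import cmath
import math

def in_disk(z):   # |z + 4i| < 3
    return abs(z + 4j) < 3

def in_cone(z):   # |arg(z - 5 - 5i)| < pi/3
    w = z - (5 + 5j)
    return w != 0 and abs(cmath.phase(w)) < math.pi / 3

# sample a grid that covers the disc x^2 + (y + 4)^2 < 9
grid = [complex(x / 10, y / 10)
        for x in range(-35, 36) for y in range(-75, -5)]
disk_pts = [z for z in grid if in_disk(z)]
overlap = [z for z in disk_pts if in_cone(z)]
print(len(disk_pts) > 0, len(overlap))  # True 0
```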
{ "language": "en", "url": "https://math.stackexchange.com/questions/4602374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Continuous injective function that fails to map open sets to open sets I'm studying metric spaces and have just proved that continuous maps preserve open sets under pre-image. The book I'm learning from says to beware of what this theorem does not say: that a continuous map sends the forward image of an open set to an open set. I found these counterexamples: * *any constant map $f:\mathbb{R}\to \mathbb{R}$. Since singletons are closed in $\mathbb{R}$ under usual metric. *$f:\mathbb{R}\to \mathbb{R}$; $f(x)=x^2$, since $f((-1,1))=[0,1)$ but $[0,1)$ is not open in $\mathbb{R}$. Reflecting on these it seems that both fail to be injective. So I have been trying to find a continuous injective function that fails to map forward open sets to open sets but am at a loss. Any ideas?
Consider $f:\Bbb R\to\Bbb R^2$, $x\mapsto(x,0)$. $\Bbb R$ is open in $\Bbb R$ but $f(\Bbb R)$ is not open in $\Bbb R^2$, and $f$ is certainly a continuous injection (be careful: if you change the domain to $\Bbb R\times\{0\}$ as a subspace of $\Bbb R^2$, then $f$ is open!) N.B. If you only deal with Euclidean spaces, you will find no counterexamples where $f:A\to B$ has $A$ a closed and bounded subset of $\Bbb R^n$ for compactness reasons. You can use reasoning like this to deduce Eric's comment that any continuous injection $\Bbb R\to\Bbb R$ is also open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4602753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is the closed strip homeomorphic to the closed half-plane? Let $$ X = \{(x,y) \in \mathbb{R}^2: 0 \le x \le 1\} \quad Y = \{(x,y) \in \mathbb{R}^2: 0 \le x\}. $$ Are $X$ and $Y$ homeomorphic? I first thought no because their boundaries are not homeomorphic however the argument is flawed because the theorem I want to use is: If $X \cong Y$ then for $A \subseteq X$ $$ \partial_X (A) \cong \partial_Y(f(A)). $$ So applying this theorem would yield $$ \partial_X(X) \cong \partial_Y(Y) \implies \emptyset \cong \emptyset $$ which is not helpful. Any hints?
To prove it without using any algebraic topology, the same argument as here works: Call a space $X$ compactly connected if for each compact subset $A$ of $X$ there is a compact subset $B$ of $X$ such that $A \subseteq B$ and $X \setminus B$ is connected. $Y = [0,\infty) \times \mathbb{R}$ is compactly connected, since each compact subset is contained in a compact rectangle such that one of its edges lies on the $y$-axis. The remainder consists of three (infinite, non-closed) "rectangles", where one intersects the other two. Hence it is connected. $X = [0,1] \times \mathbb{R}$ is not compactly connected: Let $A := [0,1] \times \{0\}$. Then for any compact $B$ with $A \subset B$, $X \setminus B$ is not connected.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4603163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Difference between PEMDAS and BODMAS. I don't get how, when the PEMDAS and BODMAS rules are stated differently, they can both yield the same results. I have searched on Google but found everywhere that they're the same, whereas to me they look different, since the operations are listed in a different order of precedence. PEMDAS: Parentheses > Exponents > Multiplication > Division > Addition > Subtraction. BODMAS (also known as PEDMAS/BIDMAS): Brackets/Braces > Order > Division > Multiplication > Addition > Subtraction. The main difference I see between them is the swapped precedence of multiplication and division. Can anyone elaborate on this?
Neither of them quite accurately reflect actual usage. Here's how I'd describe the rules for grouping that they're referring to, with some of the subtle issues that are often left out: * *Parentheses/brackets group terms because that is the entire point of those symbols: to indicate groupings of terms. So this really precedes the whole question of conventions for grouping, as this is a way of explicitly indicating a grouping. *Exponentiation binds more tightly (that is, has higher operator precedence) than addition, subtraction, multiplication, or division. So, for example, $ab^c$ means $a(b^c)$, not $(ab)^c$. Note, however, that the raised position of the exponent itself acts as a "grouping" (by position rather than with explicit parentheses), so $a^{b+c}$ means $a^{(b+c)}$, not $a^b + c$. By convention, it is also treated as right-associative, that is, $a^{b^c}$ is taken to mean $a^{(b^c)}$, not $(a^b)^c$. (This is because, for positive real numbers $a, b, c$, we have $(a^b)^c = a^{bc}$; it can be expressed without nested exponents at all.) *Multiplication and division bind more tightly than addition and subtraction (so $a + b \cdot c$ means $a + (b \cdot c)$, not $(a + b) \cdot c$), but less tightly than exponentiation. But there are some further subtleties: * *Fractions denoted with a long horizontal bar also group "by position" like an exponent does, so $\frac{a + b}{c}$ means $(a + b)/c$, not $a + (b/c)$, even though there are no explicitly written parentheses grouping the $a + b$ expression. *Multiplication "by juxtaposition", that is, indicating multiplication by putting two expressions next to each other without a multiplication symbol between them ($ab$ rather than $a \cdot b$), can create genuinely ambiguous notation: $a/bc$ could mean either $(a/b)c$ or $a/(bc)$, and therefore such expressions should be avoided if it's not obvious from context what's intended. (For example: does $1/2x$ mean $\frac{1}{2x}$ or $\frac{1}{2} x$? 
I'd say the only proper answer is "don't write things like $1/2x$, and if someone else has written it, ask them to clarify". The point of mathematical notation is to express things clearly, unambiguously, and concisely, not to trick people with ambiguous edge cases of notation.) *Division is not associative, so expressions like $a/b/c$ are also potentially confusing and should be avoided (by using fractions with a long horizontal bar or by using parentheses). However, if you do encounter an expression like that, division is treated as left-associative, so that would mean $(a/b)/c$, not $a/(b/c)$. The division sign $\div$ is much less common than $/$ outside of some countries' grade school mathematics, but the two follow the same grouping rules. *Addition and subtraction bind less tightly than exponentiation, multiplication, and division. Also, subtraction is similar to division in that it's taken as left-associative; expressions of the form $a - b - c$ are also far more common than the analogous expressions for division, and unambiguously mean $(a - b) - c$, not $a - (b - c)$. Likewise when combined with addition: $a - b + c$ means $(a - b) + c$, not $a - (b + c)$. Another point is that these are rules for grouping, not really for "order of operations": If you encounter an expression like $$2 + 3 + 5 \cdot 7,$$ the grouping rules tell you that this means $$(2 + 3) + (5 \cdot 7),$$ but the fact that multiplication binds more tightly than addition doesn't mean you have to evaluate multiplication first. It's perfectly legitimate, for instance, to compute $2 + 3 = 5$ and simplify the expression to $5 + (5 \cdot 7)$, then compute the multiplication, then the final addition.
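Incidentally, Python's own operator precedence implements exactly the grouping rules described above (and, since it has no multiplication by juxtaposition, the ambiguous $1/2x$ case cannot even be written):

```python
# multiplication binds more tightly than addition
assert 2 + 3 + 5 * 7 == (2 + 3) + (5 * 7) == 40
# division and subtraction are left-associative
assert 8 / 4 / 2 == (8 / 4) / 2 == 1.0
assert 10 - 3 - 2 == (10 - 3) - 2 == 5
# exponentiation is right-associative
assert 2 ** 3 ** 2 == 2 ** (3 ** 2) == 512
```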
{ "language": "en", "url": "https://math.stackexchange.com/questions/4603369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Can sums of squares in a field always be written as the sum of four squares? Let $\mathbb{F}$ be a field. Let $S$ be the set of elements of $\mathbb{F}$ that can be written as a sum of squares. For $\mathbb{F} = \mathbb{Q},$ four squares might be required. Is there any field such that $S$ contains elements that aren't the sum of four squares? What is the smallest $n$ such that it is guaranteed that all elements of $S$ are the sum of $n$ squares?
For a field $F$, the minimal number $n$ such that every sum of squares in $F$ is a sum of at most $n$ squares is called the Pythagoras number of $F$. The discussion on MO here indicates that the calculation of the Pythagoras number for the rational function field $\mathbf R(x_1,\ldots,x_m)$ is still largely open. It is known to be in the interval $[m+1,2^m]$, so for $m = 1$ the value is $2$. For $m = 2$, it is a theorem of Cassels, Ellison, and Pfister that the value is $4$. For $m \geq 3$ the exact value has not been determined. Hoffmann, in 1998 here, showed every positive integer is the Pythagoras number of some formally real field (that means a field that is an ordered field in at least one way, or equivalently $-1$ is not a sum of squares in the field).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4603521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does $\sum\limits_{n = 1}^{\infty} \frac{3^n + 4^n}{2^n + 5^n}$ converge? My Attempt First, check the limit $$\lim_{n \to \infty} \frac{3^n + 4^n}{2^n + 5^n} = \lim_{n \to \infty} \frac{\left(\frac{3}{5}\right)^n + \left(\frac{4}{5}\right)^n}{\left(\frac{2}{5}\right)^n + 1} = 0.$$ So, we cannot conclude anything. I used the comparison test and the ratio test. Consider that $$\frac{3^n + 4^n}{2^n + 5^n} < 2\frac{5^n}{2^n + 5^n}.$$ If I can prove the convergence of the series on the right-hand side, then it's done by the comparison test. I used the ratio test in order to prove the convergence of $$\sum_{n= 1}^{\infty} \frac{5^n}{2^n + 5^n}.$$ $$\lim_{n \to \infty} \frac{5\cdot 5^n}{2(2^n) + 5(5^n)} \cdot \frac{2^n + 5^n}{5^n} = 5 \lim_{n \to \infty} \frac{2^n + 5^n}{2(2^n) + 5(5^n)} = 1.$$ I didn't find a way to complete the proof. Any suggestions? Thanks in advance. Solution (@abiessu & @Thomas Andrew) Consider that $$\frac{3^n + 4^n}{2^n + 5^n} < 2\frac{4^n}{2^n + 5^n}.$$ Prove that this series converges: $$\sum_{n= 1}^{\infty} \frac{4^n}{2^n + 5^n}.$$ Proof (Ratio Test) $$\lim_{n \to \infty} \frac{4\cdot 4^n}{2(2^n) + 5(5^n)} \cdot \frac{2^n + 5^n}{4^n} = 4 \lim_{n \to \infty} \frac{2^n + 5^n}{2(2^n) + 5(5^n)} = \frac{4}{5}.$$ The series converges. Hence $$\sum_{n = 1}^{\infty} \frac{3^n + 4^n}{2^n + 5^n}$$ converges.
My preferred method would be to note that $$ \frac{3^n+4^n}{2^n+5^n} \leq \frac{4^n+4^n}{0+5^n} = 2 \cdot \left(\frac{4}{5}\right)^n. $$ So, the series converges by the comparison test and the geometric series criterion.
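A quick numerical check of this comparison (not part of the proof): every term is dominated by the geometric majorant $2(4/5)^n$, and the partial sums stay below the geometric bound.

```python
N = 60
terms = [(3**n + 4**n) / (2**n + 5**n) for n in range(1, N)]
majorant = [2 * (4 / 5)**n for n in range(1, N)]

print(all(t <= g for t, g in zip(terms, majorant)))  # True
print(sum(terms) < sum(majorant))                    # True
```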
{ "language": "en", "url": "https://math.stackexchange.com/questions/4603874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Nature of critical point in 3 variable quadratic form I want to determine the nature of the critical point $(x_0, y_0, z_0)=(0,0,0)$ for the function $$ f(x,y,z)=\alpha x^2+\beta(y^2+z^2)+\gamma x y+\delta xz $$ where $\alpha,\beta,\gamma,\delta\in\mathbb{R}$ and $\alpha,\beta$ have the same sign. I found the Hessian $$ \begin{pmatrix} 2\alpha & \gamma & \delta\\ \gamma & 2\beta & 0\\ \delta & 0 & 2\beta \end{pmatrix} $$ and that it has eigenvalues $$ \lambda = 2\beta, \alpha +\beta \pm \sqrt{\alpha^2+\beta^2+\gamma^2+\delta^2-2\alpha\beta}. $$ I am stuck on what to do if $4\alpha\beta = \gamma^2+\delta^2$. In this case, the eigenvalues are $$ \lambda = 0, 2\beta, 2\alpha+2\beta $$ and since $\alpha$ and $\beta$ have the same sign, this means we have all non-zero eigenvalues being the same sign and at least one eigenvalue being zero so the test is inconclusive?? How can I determine the nature of the critical point in this case?
Yes. The test is inconclusive, and further deductions through 1st-order and 2nd-order partial differentiations are not possible. An example is the following function $$ f(x,y,z)=x^2+y^2+\alpha z^4. $$ This function has only one minimizer candidate point $(x,y,z)=(0,0,0)$. Note the word candidate here. By this word, we mean that if $f(x,y,z)$ has any minimizer, it has to be $(0,0,0)$, not that it necessarily is. The Hessian is rather interesting: $$ H=\begin{bmatrix} 2&0&0\\ 0&2&0\\ 0&0&12\alpha z^2 \end{bmatrix} $$ which is equal to $ \begin{bmatrix} 2&0&0\\ 0&2&0\\ 0&0&0 \end{bmatrix} $ at $(x,y,z)=(0,0,0)$. However, the point $(0,0,0)$ is a minimizer only for $\alpha\ge 0$ and is a saddle point for $\alpha<0$. Such a case happens for your function (nevertheless, you may still be able to determine the nature in other ways). Further Reading You are also encouraged to read about First-Order Necessary Condition (FONC), Second-Order Necessary Condition (SONC) and Second-Order Sufficient Condition (SOSC).
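The saddle behaviour for $\alpha<0$ is easy to see numerically: along the $z$-axis the quadratic part of $f$ vanishes and the quartic term decides the sign.

```python
def f(x, y, z, alpha):
    return x**2 + y**2 + alpha * z**4

alpha = -1.0
print(f(0, 0, 0.1, alpha))  # negative: f descends along the z-axis
print(f(0.1, 0, 0, alpha))  # positive: f ascends along the x-axis
# f takes both signs arbitrarily close to the origin, so it is a saddle point
```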
{ "language": "en", "url": "https://math.stackexchange.com/questions/4604015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Autoregressive Gaussian process -- limiting distribution? Let $x_0 = 0$ and suppose that $x_{t + 1} \mid x_t \sim N((1-\alpha) x_t, \alpha^2)$. That is, $$ x_{t + 1} = (1- \alpha) x_t + \alpha w_{t+1}, \quad \mbox{for}~t \geq 1, $$ where $w_{t}$ are independent and identically distributed standard Normal random variables. Here, $\alpha \in (0, 1)$. What is the distribution of $x_t$ as $t \to \infty$? What I can see is that $$ x_t = \alpha \sum_{l = 0}^{t - 1} (1-\alpha)^l w_{t - l}. $$ Hence only the "most recent" $w_t$ matter. Since each of these are independent Normals, my guess would be that $x_t \to N(0, 1)$, say in distribution, as $t \to \infty$. Is this correct?
If $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$ are independent, then $X_1 + X_2 \sim N(\mu_1+\mu_2, \sigma_1^2 + \sigma_2^2)$; that is, the normal family is closed under convolution. One way to show this is to consider the characteristic function. Then $x_t \sim N\left(0, \alpha^2 \sum_{l = 0}^{t-1}(1-\alpha)^{2l}\right)$. Summing the geometric series, $\alpha^2 \sum_{l = 0}^{\infty}(1-\alpha)^{2l} = \frac{\alpha^2}{1-(1-\alpha)^2} = \frac{\alpha}{2-\alpha}$, so $x_t$ converges to $N\left(0, \frac{\alpha}{2-\alpha}\right)$ in total variation, and in particular in distribution, as $t \to \infty$.
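A short simulation supports the stated limit (the parameter values are mine; the empirical variance of $x_T$ for moderately large $T$ should be close to $\alpha/(2-\alpha)$, not $1$):

```python
import random

random.seed(1)
alpha = 0.3
trials, T = 5000, 100

finals = []
for _ in range(trials):
    x = 0.0
    for _ in range(T):
        x = (1 - alpha) * x + alpha * random.gauss(0.0, 1.0)
    finals.append(x)

emp_var = sum(v * v for v in finals) / trials
print(emp_var, alpha / (2 - alpha))  # both close to 0.176
```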
{ "language": "en", "url": "https://math.stackexchange.com/questions/4604211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Catalan numbers, paths on a grid I'm struggling to understand the concept. Some questions "push" me to use the reflection method, which is hard for me to imagine and understand. I prefer to look at it as a sequence, but check out this question: Consider a $7\times7$ grid whose points are $\lbrace (i, j) \mid 0 \leq i, j \leq 6\rbrace$ and a set $D = \lbrace(1,1), (2,2), (3,3), (4,4), (5,5)\rbrace$. How many paths of length $12$ that start from $(0,0)$ and end at $(6,6)$ do not cross the points of the set $D$? My solution was $2\cdot C_5$ using the reflection idea, which I'm unsure is even right. Is there a way to solve this kind of question more abstractly (with sequences, for example)?
Yes, your solution works. An alternate explanation is: Say your first step is moving to (1,0). Then you'll need to pass through (6,5), and moving from (1,0) to (6,5) without touching the diagonal is the same as moving from (0,0) to (5,5) without going below the diagonal, which can be done in $C_5$ ways. Of course we could also have started by moving to (0,1) which gives another $C_5$ so the final answer is $2 C_5$. I'm not sure how your proof works because you haven't explained it much. I'm guessing you're referring to this reflection argument? Anyway if you want feedback on your proof and not just your answer then you'll probably need to give a link to the reflection proof you're looking at + explain how you're using reflections in this particular problem.
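To double-check the count $2C_5 = 84$ independently of the reflection argument, here is a short dynamic program over anti-diagonals (the names are mine):

```python
from math import comb

N = 6
forbidden = {(i, i) for i in range(1, N)}   # (1,1), ..., (5,5)

ways = {(0, 0): 1}
for s in range(1, 2 * N + 1):               # sweep anti-diagonals i + j = s
    nxt = {}
    for i in range(N + 1):
        j = s - i
        if 0 <= j <= N and (i, j) not in forbidden:
            nxt[(i, j)] = ways.get((i - 1, j), 0) + ways.get((i, j - 1), 0)
    ways = nxt

catalan5 = comb(10, 5) // 6                 # C_5 = 42
print(ways[(6, 6)], 2 * catalan5)           # 84 84
```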
{ "language": "en", "url": "https://math.stackexchange.com/questions/4604377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Intersecting waves of rain droplets Let us imagine rainy weather and water droplets on a puddle. The waves that rain makes eventually intersect, but let us define exactly how they intersect. Let us say that we examine two water drops in particular. They are, mathematically speaking, points on the plane: during the first second they are tiny drops on the puddle. But then they cause waves to expand, i.e. from those two points two distinct circles appear with increasing radius until they intersect. Let us say we know that at the 1st second, the 1st drop was at the point $(0,0)$ and the 2nd drop at $(10,7)$. Then the next second, we know that the 1st drop transformed into a circle with radius $\sqrt2$ and the 2nd drop into a circle with radius $\sqrt3$. The 3rd second we know that the 1st circle has radius $2\sqrt2$ and the 2nd circle has radius $2\sqrt3$. The question is: after how many seconds will the circles come into contact? Maybe a differential equation involving functions $R_1(t)$ and $R_2(t)$ would be applicable, where $R_i$ is the radius of the $i$-th circle ($i=1,2$), $t$ is time, and we are given initial conditions of the ODE.
Let's answer the question more generally: place the first drop at the origin with radius expanding at speed $u$ and the second drop at some $(a, b)$ in the plane, expanding at some speed $v$. Try to fill in the blanks yourself before reading each formula. At time $t$ measured after the drops splash simultaneously the radius of the first ring is $$ut$$ and the radius of the second ring is $$vt.$$ The Pythagorean theorem shows that the distance $r$ between the initial points satisfies $$r^2 = a^2 + b^2,$$ hence at time $t$ the wave fronts moving towards one another are at a distance $$r - ut - vt = r - (u+v)t$$. When the wave fronts collide, this distance is $0$, hence we must solve the equation $$r - (u+v)t = 0$$ for the variable $t$ which gives $$t = \frac{r}{u+v} = \frac{\sqrt{a^2 + b^2}}{u+v}.$$ With your example values, $(a, b) = (10, 7)$ and $(u, v) = \bigl( \sqrt{2}, \sqrt{3}\, \bigr)$, hence the time until contact is $$ t = \frac{\sqrt{10^2 + 7^2}}{\sqrt{2} + \sqrt{3}} \approx 3.88\ldots $$
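The final formula is a one-liner to evaluate (the function name is mine):

```python
import math

def contact_time(a, b, u, v):
    """Time until wave fronts launched simultaneously from (0, 0) and
    (a, b), expanding at radial speeds u and v, first touch."""
    return math.hypot(a, b) / (u + v)

t = contact_time(10, 7, math.sqrt(2), math.sqrt(3))
print(round(t, 2))  # 3.88
```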
{ "language": "en", "url": "https://math.stackexchange.com/questions/4604582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which ordinals can be order-embedded in $2^\kappa$ for a given infinite cardinal $\kappa$? Let $\kappa$ be an infinite cardinal. The set $2^\kappa=\{0,1\}^\kappa$ is given the lexicographic order in the usual way ($f<g$ if $f(\alpha)<g(\alpha)$ at the first position where the functions differ). I am wondering which ordinals can be embedded (as a linear order) into $2^\kappa$. The cardinal $\kappa$ viewed as an ordered set embeds into $2^\kappa$ by sending each ordinal $\gamma<\kappa$ to its characteristic function in $\kappa$. So any ordinal $\alpha\le\kappa$ embeds into $2^\kappa$. On the other hand, Jech's Set Theory Lemma 9.5 says: The lexicographically ordered set $\{0,1\}^\kappa$ has no increasing or decreasing $\kappa^+$-sequence. So $\kappa^+$ and any larger ordinals do not embed into $2^\kappa$. What can we say about the ordinals $\alpha$ with $\kappa<\alpha<\kappa^+$? For example, for $\kappa=\aleph_0$, any countable ordinal embeds into $2^{\aleph_0}$ (because every such ordinal embeds into $(\mathbb{Q},<)$, which embeds into $(\mathbb{R},<)$, which embeds into $2^\omega$). For $\kappa=\aleph_1$: Does $\omega_1+1=\{\alpha: 0\le\alpha\le\omega_1\}$ embed into $2^{\omega_1}$? Or higher ordinals less than $\omega_2$? Anything known in general? The answer possibly depends on some set theoretic assumptions (CH, etc). Any references would be appreciated. Added: I think we can always embed $\kappa+1$, which is the same as $\kappa$ plus an extra maximum point. The reason is that $2^\kappa$ also has a maximum point, namely the function identically equal to $1$, and $\kappa$ has no maximum, so just extend the embedding of $\kappa$ appropriately. More generally, I see how to embed any ordinal less than $\kappa\cdot\kappa$ (ordinal product), but not sure in general.
Any ordinal $\alpha<\kappa^+$ can be embedded in $2^\kappa$ with respect to the inclusion order (and thus also with respect to the lexicographic order which is stronger). To prove this, just note that $\alpha$ is isomorphic to its set of initial segments, ordered by inclusion. That is, $\alpha$ embeds in $2^\alpha$ with the inclusion order. Since $\alpha<\kappa^+$, there exists an injection $\alpha\to\kappa$, which gives an embedding of $2^\alpha$ into $2^\kappa$ with the inclusion order. (Similarly, every partial order of cardinality $\leq\kappa$ embeds in $2^\kappa$ with respect to the inclusion order, by sending each point to the set of elements less than or equal to it. For a total order, this implies it will then also embed in any total order on $2^\kappa$ which contains the inclusion order.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4604748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many roots of $x(1-x)^{2}=s$ are there in $(0,1)$? This is a self-answered question, which is part of answering this related question. Alternative solutions are welcomed. Let $0<s < \frac{4}{27}$. Prove that the cubic equation $x(1-x)^{2}=s$ has exactly two solutions in $(0,1)$. Moreover, the third solution is a real number greater than $1$. The limitation on the range of $s$ is due to $\frac{4}{27}=\max_{x \in [0,1]}x(1-x)^{2}$.
Define $F(x)= x(1-x)^2$. Then $F(0)=F(1)=0$, and $$F(\frac{1}{3})=\frac{4}{27}=\max_{x \in [0,1]}F(x).$$ Since $0<s < \frac{4}{27}$, by the Intermediate Value Theorem, the equation $F(x)=s$ admits at least one solution in each of the intervals $(0,\frac{1}{3})$ and $(\frac{1}{3},1)$. It remains to show that each interval contains at most one solution. This follows from monotonicity: $$ F'(x)=3x^2 - 4x + 1>0 \iff x < \frac{1}{3} \,\,\text{ or }\,\, x>1. $$ Thus $F|_{(0,\frac{1}{3})}$ is increasing, while $F|_{(\frac{1}{3},1)}$ is decreasing. For any $s>0$, since $F(1)=0$ and $\lim_{x \to \infty} F(x)=\infty$, it follows that there exists $y>1$ such that $F(y)=s$.
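For a quick numerical cross-check of the root count, one can bracket each monotone piece of $F$ and bisect. A small sketch in Python (the function names are my own):

```python
def F(x):
    return x * (1 - x) ** 2

def bisect(lo, hi, s):
    """Locate a root of F(x) = s in [lo, hi]; assumes F - s changes sign there."""
    flo = F(lo) - s
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        fmid = F(mid) - s
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

def roots(s):
    # one root on each monotone piece of F
    return (bisect(0.0, 1 / 3, s),   # F increases from 0 to 4/27
            bisect(1 / 3, 1.0, s),   # F decreases from 4/27 to 0
            bisect(1.0, 2.0, s))     # F increases from 0 past s

r1, r2, r3 = roots(0.1)
```

With $s=0.1$ this reports roots near $0.13$, $0.59$ and $1.28$: two in $(0,1)$ and one beyond $1$, as claimed.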
{ "language": "en", "url": "https://math.stackexchange.com/questions/4604922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
$\sum_{m=1}^{N}\lfloor{\frac{N}{m}\rfloor} \sim N \log N$ Let $d(n)$, the number of divisors of $n$. I am trying to show that $\sum_{n \leq N}d(n) = \#\{(m,k) \in \mathbb{N} \times \mathbb{N}: mk \leq N\} = \sum_{m=1}^{N}\lfloor{\frac{N}{m}\rfloor} \sim N \log N$ I am stuck at showing the asymptotic and how to deal with the floor function. We say $f(n) \sim g(n)$ if $\lim_{n \to \infty} \frac{f(n)}{g(n)} = 1$. Attempt: $\lfloor{\frac{N}{m}\rfloor} = \frac{N}{m} + O(1)$. Hence we have $\frac{\frac{N}{m} + O(1)}{N \log N}$. I am not sure if I can ignore the "Big oh" but think I can as its just a constant. I then tried applying L'hopital by differentiating w.r.t. $N$. $\frac{1}{m}(N \log N)^{-1} + \frac{N}{m}\Big( \frac{-N \log N + 1}{N^{2}\log^{2}N}\Big)$. But I don't then seem to be able to get this to evaluate out to $1$,I keep ending up with a log factor.
I don't see how you come up with the term $\frac{\frac{N}{m} + O(1)}{N \log N}$. If $\lfloor{\frac{N}{m}\rfloor} = \frac{N}{m} + O(1)$ then $\sum_{m=1}^{N}\lfloor{\frac{N}{m}\rfloor} = N \sum_{m=1}^{N} \frac 1m + O(N) = N \log N + O(N)$. More precisely: You can estimate $f(N) = \sum_{m=1}^{N}\left\lfloor \frac{N}{m}\right\rfloor$ from above with $$ f(N) \le \sum_{m=1}^{N} \frac Nm = N \sum_{m=1}^{N} \frac 1m \le N( 1 + \log N) $$ and from below with $$ f(N) \ge \sum_{m=1}^{N} \left(\frac Nm -1 \right) = N \sum_{m=1}^{N} \frac 1m - N \ge N \log N - N \, , $$ using $\log N<\sum_{n=1}^{N}\frac{1}{n}<1+\log N.$ It follows that $$ 1 - \frac{1}{\log N} \le \frac{f(N)}{N \log N} \le 1 + \frac{1}{\log N} $$ and therefore $$ \lim_{N \to \infty } \frac{f(N)}{N \log N} = 1 \, . $$ As @Conrad said, one can use the symmetry of the set $\{(m,k) \in \mathbb{N} \times \mathbb{N}: mk \leq N\}$ to obtain a better asymptotic expression: $$ \sum_{n \leq N}d(n) = 2 \sum_{m=1}^{\lfloor \sqrt N \rfloor } \left\lfloor \frac{N}{m}\right\rfloor - (\lfloor \sqrt N \rfloor)^2 = 2 N \sum_{m=1}^{\lfloor \sqrt N \rfloor } \frac 1m - N + O(\sqrt N) \, . $$ Using the asymptotic expansion $H_N = \log N + \gamma + O(1/N)$ for the harmonic numbers this gives $$ \sum_{n \leq N}d(n) = N \log N + (2\gamma -1) N + O(\sqrt N) \, . $$
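The refined asymptotic is easy to test empirically; in a small Python experiment (illustrative names), the error of $N \log N + (2\gamma - 1)N$ stays well inside the $O(\sqrt N)$ window:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

def divisor_summatory(N):
    # sum_{n<=N} d(n) = sum_{m=1}^{N} floor(N/m)
    return sum(N // m for m in range(1, N + 1))

N = 10 ** 5
exact = divisor_summatory(N)
approx = N * math.log(N) + (2 * EULER_GAMMA - 1) * N
```

The difference `exact - approx` is tiny compared to $N$, while dropping the $(2\gamma-1)N$ term would leave an error of order $N$.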
{ "language": "en", "url": "https://math.stackexchange.com/questions/4605074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there another simpler method to evaluate the integral $\int_0^{2 \pi} \frac{1}{1+\cos \theta \cos x} d x , \textrm{ where } \theta \in (0, \pi)?$ Using ‘rationalization’, we can split the integral into two manageable integrals as: $\displaystyle \begin{aligned}\int_0^{2 \pi} \frac{1}{1+\cos \theta \cos x} d x = & \int_0^{2 \pi} \frac{1-\cos \theta \cos x}{1-\cos ^2 \theta \cos ^2 x} d x \\= & \int_0^{2 \pi} \frac{d x}{1-\cos ^2 \theta \cos ^2 x}-\cos \theta \int_0^{2 \pi} \frac{\cos x}{1-\cos ^2 \theta \cos ^2 x} d x \\= & 4 \int_0^{\frac{\pi}{2}} \frac{\sec ^2 x}{\sec ^2 x-\cos ^2 \theta} d x+\int_0^{2 \pi} \frac{d(\cos \theta \sin x)}{\sin ^2 \theta+\cos ^2 \theta \sin ^2x} \\= & 4 \int_0^{\frac{\pi}{2}} \frac{d\left(\tan x\right)}{\sin ^2 \theta+\tan ^2 x}+\frac{1}{\sin \theta}\left[\tan ^{-1}\left(\frac{\cos ^2 \theta \sin x}{\sin \theta}\right)\right]_0^{2 \pi} \\= & \frac{4}{\sin \theta}\left[\tan ^{-1}\left(\frac{\tan x}{\sin \theta}\right)\right]_0^{\frac{\pi}{2}} \\= & \frac{4}{\sin \theta} \cdot \frac{\pi}{2} \\= & \frac{2 \pi}{\sin \theta}\end{aligned}\tag*{} $ Is there another simpler method to evaluate the integral? Your comments and alternative methods are highly appreciated.
Splitting the integral interval into two gives another solution. $$ \begin{aligned} \int_0^{2 \pi} \frac{1}{1+a \cos x} d x & =\int_0^\pi \frac{1}{1+a \cos x} d x+\int_\pi^{2 \pi} \frac{1}{1+a \cos x} d x \\ & =\int_0^\pi \frac{1}{1+a \cos x} d x+\int_0^\pi \frac{1}{1-a \cos x} d x \\ & =2 \int_0^\pi \frac{1}{1-a^2 \cos ^2 x} d x \\ & =4 \int_0^{\frac{\pi}{2}} \frac{\sec ^2 x}{\sec ^2 x-a^2} d x \\ & =4 \int_0^{\frac{\pi}{2}} \frac{d(\tan x)}{\tan ^2 x+\left(1-a^2\right)} \\ & =\frac{4}{\sqrt{1-a^2}}\left[\tan ^{-1}\left(\frac{\tan x}{\sqrt{1-a^2}}\right)\right]_0^{\frac{\pi}{2}} \\ & =\frac{2 \pi}{\sqrt{1-a^2}} \end{aligned} $$
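The closed form can also be checked numerically. Since the integrand is smooth and periodic, even a plain equally-spaced Riemann sum over one full period converges extremely fast (a standard-library sketch; function names are mine):

```python
import math

def integral_numeric(a, n=20000):
    # equally spaced sum over one full period; for smooth periodic
    # integrands this is spectrally accurate
    h = 2 * math.pi / n
    return h * sum(1.0 / (1.0 + a * math.cos(k * h)) for k in range(n))

def integral_closed_form(a):
    return 2 * math.pi / math.sqrt(1 - a * a)
```

Here $a=\cos\theta\in(-1,1)$; e.g. $a=\cos 1$ reproduces $2\pi/\sin 1$ to machine precision.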
{ "language": "en", "url": "https://math.stackexchange.com/questions/4605188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Existence of code with parameters $[22, 15, 3]_2$ I need to verify the existence of a code with parameters as in the title ($n=22,k=15,d=3,q=2$). I have seen this post but neither of the bounds works to conclude anything for this case. I have also tried to actually construct a concrete code with these parameters, but with no success. I attempted some Hamming-like codes with these parameters as well, but I can't get the distance to be 3; I always get 2. Could you suggest some clever way to look at this problem? Thanks in advance.
Actually, according to Markus Grassl's codetables see here a code with those parameters and distance $d=4$ exists. Click on query form under linear block codes and enter the $n,k$ you desire. The answer comes back as below: Bounds on linear codes [22,15] over GF(2); lower bound:4; upper bound: 4 Luckily there is even a construction given. It is a series of steps: Regarding modification of codes such as shortening, subcodes etc, see Jon Hall's Chapter 6 of coding theory notes available online here. Note: The Plotkin sum is also referred to as $(u|u+v)$ construction in the literature.
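The minimum distance of a small binary linear code can always be verified by brute force over all $2^k$ messages. Since the generator matrix of the $[22,15,4]$ code from the tables is not reproduced here, the sketch below demonstrates the check on the standard $[7,4,3]$ Hamming code instead; for $k=15$ the same loop would visit $2^{15}=32768$ codewords, which is still entirely feasible:

```python
from itertools import product

def min_distance(G):
    """Brute-force minimum distance of the binary linear code generated by G."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        word = [sum(msg[i] * G[i][j] for i in range(k)) % 2 for j in range(n)]
        best = min(best, sum(word))
    return best

# standard-form generator matrix of the [7,4,3] Hamming code
G_hamming = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]
d = min_distance(G_hamming)
```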
{ "language": "en", "url": "https://math.stackexchange.com/questions/4605418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How can I prove that $\mathrm{Spec}(B)\rightarrow\mathrm{Spec}(A)$ is continuous? Let $\phi:A\rightarrow B$ be a ring homomorphism. Then we can define $$f:\mathrm{Spec}(B)\rightarrow\mathrm{Spec}(A);~~~\mathfrak{p}\mapsto\phi^{-1}(\mathfrak{p})$$ I want to show that $f$ is continuous w.r.t the Zariski topology. Therefore let me pick $\;\mathcal{O}\subset\mathrm{Spec}(A)\;$ a closed set, then this means that $\;\mathcal{O}=V(I):=\{\mathfrak{q}\in\mathrm{Spec}(A): I\subseteq \mathfrak{q}\}$ for some ideal $I$. Now $$\begin{align} f^{-1}(\mathcal{O})&=f^{-1}((V(I))\\&=\{\mathfrak{p}\in\mathrm{Spec}(B): \phi^{-1}(\mathfrak{p})\in V(I)\}\\&=\{\mathfrak{p}\in\mathrm{Spec}(B): I\subseteq \phi^{-1}(\mathfrak{p})\}\\&=\{\mathfrak{p}\in\mathrm{Spec}(B): \phi(I)\subseteq \mathfrak{p}\}\end{align}$$ But now $\{\mathfrak{p}\in\mathrm{Spec}(B): \phi(I)\subseteq \mathfrak{p}\}=V(\phi(I))$ but $\phi(I)$ does not need to be an ideal in $B$. Is there a trick how to get an ideal in $B$ s.t. $f^{-1}(\mathcal{O})=V(\ldots)$ ? Thanks a lot.
For any subset $S\subseteq A$ you have that $V(S)=V(\langle S\rangle)$ where $\langle S\rangle$ is the ideal generated by $S$. So what you've proven is that $f^{-1}(\mathcal O)=V(\langle \phi(I)\rangle)$ and you're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4605561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does a function of its own arc length look like? Introduction What does a function of its own arc length look like? A strange question for sure, but first let me elaborate: Imagine a function that starts at the point $\left( 0, 0 \right)$. If we now assume that the two closest (from the left and right) points also have $y = 0$, then we can take the closest point of the function with a value which has the distance between the two points. If we now want to have the point after this, it has the value of the length of the previous curve (from $0$). This goes on and on, so we can say that the $y$-value of a point of the function is the length of that function from $0$ to the point infinitely close to the point. Now you might ask yourself why one should look into something like this. I don't have a plan, but it looks like fun and I couldn't find anything online about it. My Thoughts $f\left( n \right) \in \mathbb{R}$ and $2 < n \in \mathbb{N}$ Since I don't see an obvious solution, I would first try to find such a function for $f\left( n \right) \in \mathbb{R}$ and $n \in \mathbb{N}$. Since the starting point is $\left( 0, 0 \right)$ aka $f\left( 0 \right) = 0$, we can already take $\left( 0, 0 \right)$ as a point of the function. The next point would be at $n = 1$, which, the function so far having length $0$, also gets the function value $0$ aka $f\left( 1 \right) = 0$, so we get $\left( 1, 0 \right)$. The next point would be at $n = 2$: the function now has length equal to the distance between the two previous points, so that length becomes the function value. With this we get $\left( 2, 1 \right)$ aka $f\left( 2 \right) = 1$. 
For the next points we do the same, only that instead of just calculating the newly added length, we also add it to the existing one, which gives us a recursiv formula: $$ \begin{align*} f\left( n \right) &= f\left( n - 1 \right) + \sqrt{\left( \left( n - 1 \right) - \left( n - 2 \right) \right)^{2} + \left( f\left( n - 1 \right) - f\left( n - 2 \right) \right)^{2}}\\ f\left( n \right) &= f\left( n - 1 \right) + \sqrt{\left( n - n - 1 + 2 \right)^{2} + \left( f\left( n - 1 \right) - f\left( n - 2 \right) \right)^{2}}\\ f\left( n \right) &= f\left( n - 1 \right) + \sqrt{\left( 1 \right)^{2} + \left( f\left( n - 1 \right) - f\left( n - 2 \right) \right)^{2}}\\ f\left( n \right) &= f\left( n - 1 \right) + \sqrt{1 + \left( f\left( n - 1 \right) - f\left( n - 2 \right) \right)^{2}}\\ \\ f\left( n \right) &= f\left( n - 1 \right) + \sqrt{1 + \left( f\left( n - 1 \right) - f\left( n - 2 \right) \right)^{2}} \tag{1.}\\ \end{align*} $$ with the graph (from $0$ to $4$): $f\left( x \right) \in \mathbb{R}$, $\Delta x \in \mathbb{Q}$ and $x - \Delta x > 2$ where $\Delta x$ is the distance between $x$-values ​​of the two closest points Since the principle worked well for $f\left( n \right) \in \mathbb{R}$ and $n \in \mathbb{N}$ I would simply want to apply it to $\lim_{{\Delta x} \to {0}^{+}} \Delta x, \Delta x \in \mathbb{Q}$ for decreasing distances between $x$-values ​​of the two closest points. With some work, the logic behind $\left( 1. 
\right)$ and the help of some more vector addition I found the generalized recursive formula: $$ \begin{align*} f\left( x \right) &= f\left( x - \Delta x \right) + \sqrt{\left( \Delta x \right)^{2} + \left( \Delta f \right)^{2}}\\ f\left( x \right) &= f\left( x - \Delta x \right) + \sqrt{\left( \left( x - 1 \cdot \Delta x \right) - \left( x - 2 \cdot \Delta x \right) \right)^{2} + \left( f\left( x - 1 \cdot \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}}\\ f\left( x \right) &= f\left( x - \Delta x \right) + \sqrt{\left( \left( x - \Delta x \right) - \left( x - 2 \cdot \Delta x \right) \right)^{2} + \left( f\left( x - \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}}\\ f\left( x \right) &= f\left( x - \Delta x \right) + \sqrt{\left( x - x - \Delta x + 2 \cdot \Delta x \right)^{2} + \left( f\left( x - \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}}\\ f\left( x \right) &= f\left( x - \Delta x \right) + \sqrt{\left( \Delta x \right)^{2} + \left( f\left( x - \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}}\\ \\ f\left( x \right) &= f\left( x - \Delta x \right) + \sqrt{\left( \Delta x \right)^{2} + \left( f\left( x - \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}} \tag{2.}\\ \end{align*} $$ with the graph: (from $x = 0$ to $x = 4$, from $y = 0$ to $y = 8$ and $\Delta x \in \left\{ {\color{Red} 1}, {\color{Blue} 0} {\color{Blue} .} {\color{Blue} 5}, {\color{Purple} 0} {\color{Purple}.} {\color{Purple} 2} {\color{Purple} 5}, {\color{Black} 0} {\color{Black} .} {\color{Black} 1} {\color{Black} 2} {\color{Black} 5} \right\})$: It seems to me that if we let $\Delta x \to 0^{+}$, $f\left( x \right)$ with $x \to 1^{+}$ itself goes to $+\infty$, which I think is kinda cool as I wasn't expecting it. 
$y\left( x \right) \in \mathbb{R}$, $x \in \mathbb{R}$, $\lim_{{\Delta x} \to {0}^{+}} \Delta x$ and $\Delta x \in \mathbb{R}$ where $\Delta x$ is the distance between $x$-values ​​of the two closest points If we now extend the formula from $\left( 2. \right)$ by the fact that $\Delta x$ should approach $0$, we get the recursive formula: $$ \begin{align*} y\left( x \right) &:= \lim_{{\Delta x} \to {0^{+}}} f\left( x \right) = \lim_{{\Delta x} \to {0^{+}}} \left[ f\left( x - \Delta x \right) + \sqrt{\left( \Delta x \right)^{2} + \left( f\left( x - \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}} \right]\\ y\left( x \right) &\equiv \lim_{{\Delta x} \to {0^{+}}} \left[ f\left( x - \Delta x \right) + \sqrt{\left( \Delta x \right)^{2} + \left( f\left( x - \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}} \right]\\ \\ y\left( x \right) &\equiv \lim_{{\Delta x} \to {0^{+}}} \left[ f\left( x - \Delta x \right) + \sqrt{\left( \Delta x \right)^{2} + \left( f\left( x - \Delta x \right) - f\left( x - 2 \cdot \Delta x \right) \right)^{2}} \right] \tag{3.}\\ \end{align*} $$ Now comes the challenge... Finding an explicit function formula. I know that we can calculate the length of a function $f$ in $\left[ a, b \right]$ using "the arc length" formula $\operatorname{arc length}\left( a, b, \frac{\operatorname{d}y}{\operatorname{d}x} \right) = \int_{a}^{b} \sqrt{1 + \left( \frac{\operatorname{d}y}{\operatorname{d}x} \right)^{2}} \operatorname{d}x$ if $y$ is continuously differentiable in $x \in \left[ a, b \right]$ but I don't know how to deal with it here. I am grateful for every help, correction and suggestion.
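The recursion $(2.)$ is easy to run numerically, and doing so makes the blow-up observation above concrete: at a fixed $x$, halving $\Delta x$ keeps increasing the value (a sketch; the function names are mine):

```python
import math

def discretized(x_max, dx):
    # recursion (2.) with f(0) = f(dx) = 0
    steps = round(x_max / dx)
    prev2, prev = 0.0, 0.0
    for _ in range(steps - 1):
        cur = prev + math.sqrt(dx * dx + (prev - prev2) ** 2)
        prev2, prev = prev, cur
    return prev  # approximate f(x_max)

vals = [discretized(1.0, dx) for dx in (0.25, 0.125, 0.0625, 0.03125)]
```

Each halving of $\Delta x$ multiplies the value at $x=1$ by a factor approaching $\sqrt 2$, i.e. the discretized values grow like $1/\sqrt{\Delta x}$ rather than settling on a limit.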
No such function exists. (Well, but see below ...) A starting point which you've essentially gotten to already is to ask whether there is a continuously differentiable (or even nicer) function $f$ satisfying for all $x>0$ that $$\int_0^x\sqrt{1+f'(t)^2}dt=f(x).$$ Well, let's attack that expression! Differentiating both sides with respect to $x$ we get by FTC the much nicer equation $$\sqrt{1+f'(x)^2}=f'(x),$$ which after squaring both sides ... gives $1=0$. Wait, what? Taking a step back from calculus to geometry, we can see more easily why the goal is in fact impossible. Pick some $a>0$ and suppose $f(a)=b$. The shortest possible path connecting the origin to $(a,b)$ has length $\sqrt{a^2+b^2}$ ... but that is strictly greater than $b$, the supposed length of the graph of $f$ from $x=0$ to $x=a$. So we'll always fall short. Incidentally, this also explains why you're seeing "shooting-to-infinity" behavior in your discretized version of the question as you shrink the "mesh" size. Conversely, this should reinforce the sense that numerical experimentation is good: even before the problem was solved you had a clear indicator that something was wonky about it. But it's no fun to leave well enough alone ... In some sense it might feel like the above description isn't complete. The specific details of how we measure distance and arclength in the plane matter a lot. What would happen if we change the geometry on $\mathbb{R}^2$? There are various "nice" ways to measure distance on $\mathbb{R}^2$. Formally, I'm talking about metrics which induce the usual topology; informally, these are just functions $d:\mathbb{R}^2\times\mathbb{R}^2\rightarrow\mathbb{R}_{\ge0}$ on pairs of points that behave broadly how we would expect "distance notions" to behave and don't look crazy-bonkers. For example, we have:

* The taxicab metric $d_{taxi}((a,b),(c,d))=\vert a-c\vert+\vert b-d\vert$. (As we all know, taxis can go straight through buildings but can't move diagonally due to an agreement with the Vatican.)
* The max metric $d_{max}((a,b),(c,d))=\max\{\vert a-c\vert, \vert b-d\vert\}$.
* The elliptical metric $d_{ell}((a,b),(c,d))=\sqrt{(a-c)^2+{\color{red}{17}}(b-d)^2}$.
* The let's squish Pythagoras metric $d_{squish}((a,b),(c,d))=\sqrt[{\color{red}{42}}]{(a-c)^{{\color{red}{42}}}+(b-d)^{{\color{red}{42}}}}$.

And there are many others. Now we next need a definition of arclength. Here's the one I think is nicest: Suppose I have a path in the plane, thought of as the image of a function $p:[0,1]\rightarrow\mathbb{R}^2$ (intuitively $p(t)$ is the point on the path at time $t$). We'll restrict attention to "nice" functions $p$; in particular, $p$ should be injective. For $d$ a "nice" metric, we say that the $d$-arclength of (the range of) $p$ is $L$ iff for all $\epsilon>0$ there is some finite sequence of points $0\le a_1<...<a_n\le 1$ such that, for every finite sequence of points $0\le b_1<...<b_k\le 1$ with each $a_i$ appearing as some $b_j$, we have $$\vert L-\sum_{1\le i< k}d(p(b_i), p(b_{i+1}))\vert<\epsilon.$$ A fun exercise: which, if any, of the above metrics (or any of your favorite alternatives) admits a function $f$ that "computes its own $d$-arclength"? (And if such a function exists, need it be unique? There's a lot to play with here!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4605683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Olympiad geometry: Prove equal segments. $A$, $B$, $C$, $D$, $E$ and $F$ are six concyclic points. $AC$, $BD$ and $EF$ are concurrent at $G$. Line $EF$ intersects $\odot(ABG)$ and $\odot(CDG)$ at $I$ and $J$ respectively. Show that $IE=FJ$. My idea was the trigonometric version of Ptolemy's theorem. Let $\angle AGE=\angle CGF=\beta$ and $\angle BGE=\angle DGF=\alpha$. I got $$\begin{aligned}GI=\frac{AG\sin\alpha+BG\sin\beta}{\sin(\alpha+\beta)};\\GJ=\frac{CG\sin\alpha+DG\sin\beta}{\sin(\alpha+\beta)}.\end{aligned}$$ $EI=FJ$ is equivalent to $GI-GJ=GE-GF$. So we need only prove that $$GE-GF=\frac{AG\sin\alpha+BG\sin\beta-CG\sin\alpha-DG\sin\beta}{\sin(\alpha+\beta)}.$$ This is not much easier, although it becomes independent of $I$ and $J$.
Hint: Join $J$ and $I$ to the center $O$ of the big circle; these segments intersect the circle at $M$ and $N$ respectively. Extend $JO$ and $IO$ to meet the circle at $K$ and $L$ respectively. You have to show that triangle $OIJ$ is isosceles. In that case, by the power of a point, we have: $JF\times JE=JM\times JK=IN\times IL=IE\times IF$. Since $IF=IE+EF$ and $JE=JF+EF$, we conclude that $IE=JF$. Note that this holds in particular if $DC$ is a diameter of circle $DJC$ and $AB$ is a diameter of circle $ABI$; this may help.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4605837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Sum of two subspaces: representing it with equations I found the following exercise: Let $W_1 = \{(x_1, ..., x_6) : x_1 + x_2 + x_3 = 0, x_4 + x_5 + x_6 = 0 \}$. Let $W_2$ be the span of $S := \{(1, -1, 1, -1, 1, -1), (1, 0, 2, 1, 0, 0), (1, 0, -1, -1, 0, 1), (2, 1, 0, 0, 0, 0)\}$. Give a basis, the dimension and an equation representation of $W_1 + W_2$. I'm new to the concept of the sum of subspaces. But as I understand it, the first step would be to note that any $\textbf{x} = (x_1, ..., x_6) \in W_1$ satisfies \begin{equation*} \begin{cases} x_1 = -x_2 - x_3 \\ x_4 = -x_5 - x_6 \end{cases} \end{equation*} so that its general form is \begin{equation*} \textbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{pmatrix} = \begin{pmatrix} -x_2 - x_3 \\ x_2 \\ x_3 \\ -x_5 - x_6 \\ x_5 \\ x_6 \end{pmatrix} \end{equation*} We also know any $\textbf{y} \in W_2$ is of the general form \begin{align*} \textbf{y} = \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \\ y_6 \end{pmatrix} = \begin{pmatrix}x + y +z + 2w \\ -x + w \\ x + 2y - z \\ -x + y - z \\ x\\ -x + z\end{pmatrix} \end{align*} Then for general $\mathbf{x}, \mathbf{y}$ we have \begin{align*} \textbf{x} + \textbf{y} &= \begin{pmatrix} x + y +z + 2w + (-x_2 - x_3)\\ -x + w + x_2\\ x + 2y - z + x_3\\ -x + y - z + (-x_5 - x_6)\\ x + x_5\\ -x + z + x_6 \end{pmatrix} \end{align*} One can then conclude $$W_1 + W_2 = \Big\{\big(x + y + z + 2w - x_2 - x_3,\ -x + w + x_2,\ x + 2y - z + x_3,\ -x + y - z - x_5 - x_6,\ x + x_5,\ -x + z + x_6\big) \mid x, y, z, w, x_2, x_3, x_5, x_6 \in \mathbb{R} \Big\}$$ But what would be a representation via equations of this system? I'm very new to linear algebra so go easy on me!
In contrast to Jose's and Anne's answers using dimension, let me show you the rote method you could use. From your step here: \begin{equation*} \mathbf{x} = \begin{pmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{pmatrix} = \begin{pmatrix} -x_2 - x_3 \\ x_2 \\ x_3 \\ -x_5 - x_6 \\ x_5 \\ x_6 \end{pmatrix}, \end{equation*} you could then find a spanning set for $W_1$ like so: \begin{align*} \mathbf{x} &= \pmatrix{-x_2\\x_2\\0\\0\\0\\0} + \pmatrix{-x_3\\0\\x_3\\0\\0\\0} + \pmatrix{0\\0\\0\\-x_5\\x_5\\0} + \pmatrix{0\\0\\0\\-x_6\\0\\x_6} \\ &= x_2\pmatrix{-1\\1\\0\\0\\0\\0} + x_3\pmatrix{-1\\0\\1\\0\\0\\0} + x_5\pmatrix{0\\0\\0\\-1\\1\\0} + x_6\pmatrix{0\\0\\0\\-1\\0\\1}. \end{align*} The vector $\mathbf{x}$ is arbitrary in $W_1$, so we've shown that $$W_1 \subseteq \operatorname{span}\left\{\pmatrix{-1\\1\\0\\0\\0\\0},\pmatrix{-1\\0\\1\\0\\0\\0},\pmatrix{0\\0\\0\\-1\\1\\0},\pmatrix{0\\0\\0\\-1\\0\\1}\right\}.$$ Equality holds, because every vector in the spanning set also lies in $W_1$. It's also easy to see the above is linearly independent, so we have a basis, but this is not necessary to observe! Next, once you have spanning sets for $W_1$ and $W_2$, you can form a spanning set for $W_1 + W_2$ by unioning the sets; every vector in $W_1 + W_2$ is the sum of a vector in $W_1$ (a linear combination of the first spanning set) and a vector in $W_2$ (a linear combination of the second spanning set), so the union will span the sum. That is, $$W_1 + W_2 = \operatorname{span}\left\{\pmatrix{-1\\1\\0\\0\\0\\0},\pmatrix{-1\\0\\1\\0\\0\\0},\pmatrix{0\\0\\0\\-1\\1\\0},\pmatrix{0\\0\\0\\-1\\0\\1}, \pmatrix{1\\-1\\1\\-1\\1\\-1},\pmatrix{1\\0\\2\\1\\0\\0},\pmatrix{1\\0\\-1\\-1\\0\\1},\pmatrix{2\\1\\0\\0\\0\\0}\right\}.$$ This is already a technically correct description of $W_1 + W_2$, but it's usually best to reduce the spanning set down to a basis. 
There are two standard ways to do this, both involving row-reduction:

* Place them as rows in a matrix, row-reduce down to row-echelon form (reduced, if you prefer) and keep only the non-zero rows. These rows will be linearly independent, but retain the same span as the original set, thus producing a basis, or
* Place them as columns in a matrix, and row-reduce down to row-echelon form. Note the columns where the pivots (leading $1$s) appear, and retain only the vectors from the original set that you placed in those columns.

Either way, we get an element in the basis of $W_1 + W_2$ for every pivot in the row-reduced matrix. And, no matter which method you use, you will get $6$ pivots, which tells you that the dimension of $W_1 + W_2$ is $6$, so it must be all of $\Bbb{R}^6$. Indeed, if you apply the first method, reducing until reduced row-echelon form, you should find that you get back the standard basis!
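Either procedure can be mimicked with exact rational arithmetic in a few lines of Python, confirming the $6$ pivots (a sketch; the helper names are mine):

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals (exact arithmetic)."""
    M = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(M[0])):
        pivot = next((i for i in range(r, len(M)) if M[i][col] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(len(M)):
            if i != r and M[i][col] != 0:
                factor = M[i][col] / M[r][col]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

spanning_set = [
    [-1, 1, 0, 0, 0, 0], [-1, 0, 1, 0, 0, 0],   # spanning set of W1
    [0, 0, 0, -1, 1, 0], [0, 0, 0, -1, 0, 1],
    [1, -1, 1, -1, 1, -1], [1, 0, 2, 1, 0, 0],  # spanning set of W2
    [1, 0, -1, -1, 0, 1], [2, 1, 0, 0, 0, 0],
]
dim_sum = rank(spanning_set)
```

Here `dim_sum` comes out as $6$, so $W_1 + W_2 = \Bbb{R}^6$, whose equation representation is the empty set of equations.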
{ "language": "en", "url": "https://math.stackexchange.com/questions/4605954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Finding sum of an infinite series I was trying to evaluate the following sum for each $x \in (0,\infty)$ and I am getting the following: $\sum_{n=1}^{\infty} \frac{ (-1)^{n+1}}{n^2}( x^{2n}+ \frac{1}{x^{2n}}-2)= 2(\ln x)^2. $ But in the proof I am starting with the function $f(x)= \sum_{n=1}^{\infty} \frac{x^{2n}}{n^2}.$ Then I differentiate $f$ term by term and then integrate from $x$ to $1$ within its radius of convergence to obtain the value of $f$ in terms of an integral. But when I put in the value $\frac{1}{x}$ with $x \in (0,1)$, we can't do term-by-term differentiation. Whereas if we put the same value for $g(\frac{1}{x})$ as $g(x)$, we obtain the same value of the sum as mentioned above. Is there any alternate precise way to prove this? Any help or hint would be appreciated. Thanks in advance.
You are facing the definition of the polylogarithm function $$\sum_{n=1}^{\infty} \frac{ (-1)^{n+1}}{n^2}\, x^{2n}= -\text{Li}_2\left(-x^2\right) $$ $$\sum_{n=1}^{\infty} \frac{ (-1)^{n+1}}{n^2}\, x^{-2n}=-\text{Li}_2\left(-\frac{1}{x^2}\right)$$ $$\sum_{n=1}^{\infty} \frac{ (-1)^{n+1}}{n^2}=\frac{\pi ^2}{12}$$ Now it remains to use $$\text{Li}_2(-t)+\text{Li}_2\left(-\frac{1}{t}\right)=-\frac{1}{2} \log ^2(t)-\frac{\pi ^2}{6}$$ Then the sum is $$\frac{1}{2} \log ^2(x^2)=2\log ^2(x)$$ as you found.
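The inversion formula for $\text{Li}_2$ quoted above can be checked numerically using the integral representation $\text{Li}_2(-y)=-\int_0^y \frac{\ln(1+u)}{u}\,du$ for $y>0$, which is valid for arbitrarily large arguments (a sketch with Simpson's rule; helper names are mine):

```python
import math

def li2_neg(y, n=4000):
    """Li2(-y) = -∫_0^y ln(1+u)/u du for y > 0, composite Simpson's rule."""
    g = lambda u: 1.0 if u == 0 else math.log(1 + u) / u
    h = y / n
    s = g(0) + g(y)
    s += 4 * sum(g((2 * k - 1) * h) for k in range(1, n // 2 + 1))
    s += 2 * sum(g(2 * k * h) for k in range(1, n // 2))
    return -s * h / 3

def inversion_gap(t):
    # Li2(-t) + Li2(-1/t) - ( -log(t)^2/2 - pi^2/6 ), should be ~0
    return li2_neg(t) + li2_neg(1 / t) + 0.5 * math.log(t) ** 2 + math.pi ** 2 / 6

def regularized_sum(x):
    # the three pieces assembled exactly as in the answer above
    return -li2_neg(x * x) - li2_neg(1 / (x * x)) - math.pi ** 2 / 6
```

Numerically `inversion_gap(t)` vanishes to machine precision, and `regularized_sum(x)` agrees with $2\log^2(x)$.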
{ "language": "en", "url": "https://math.stackexchange.com/questions/4606079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can we solve this integral equation? The following equation seems extremely simple: $$ g(b)=\int_1^{\frac{1}{b}} \frac{X f(X)}{\sqrt{1-b X}} \, dX $$ But how to solve it, that is, restore the function $f(X)$ from the known function $g(b)$? Maybe this is some well-known integral transformation, which I was not told about at the university? Additionally, we can assume that $f(1)=0$.
This is not a complete solution, just two ideas which are too large for a comment. Let us define a helper function $h$ so that $$g(b)=h(1/b).$$ Now we have $$h(t) = \int_1^{t} \frac{X f(X)}{\sqrt{1- X/t}} \, dX $$ Naively applying the chain rule with the fundamental theorem of calculus would give the boundary term $h'(t)=\frac{tf(t)}{\sqrt{1-t/t}}$; since this divides by zero, that corollary is not enough and we may have to go to https://en.wikipedia.org/wiki/Leibniz_integral_rule#General_form:_Differentiation_under_the_integral_sign In particular $$\frac{d}{dx}\left(\int_a^{x} f(x,t)dt\right) = f(x,x)+\int_a^{x} \frac{\partial}{\partial x}f(x,t) dt$$ Alternatively, if we are comfortable with Fourier transforms, we can possibly use the convolution theorem with the Fourier transform of $\frac{1}{\sqrt{1-t}}$
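The general Leibniz rule quoted from the link can be sanity-checked numerically on a sample integrand, say $f(x,t)=xt^2$, for which both sides equal $\frac{4x^3}{3}$ (an illustrative sketch, not part of the original problem):

```python
import math

def f_sample(x, t):
    # illustrative integrand, not from the original problem
    return x * t * t

def F(x, n=2000):
    # F(x) = ∫_0^x f_sample(x, t) dt, composite midpoint rule
    h = x / n
    return h * sum(f_sample(x, (k + 0.5) * h) for k in range(n))

x0, eps, n = 2.0, 1e-5, 2000
lhs = (F(x0 + eps) - F(x0 - eps)) / (2 * eps)   # d/dx of the integral
h = x0 / n
# boundary term f(x,x) plus the integral of the partial derivative t^2
rhs = f_sample(x0, x0) + h * sum(((k + 0.5) * h) ** 2 for k in range(n))
```

Both `lhs` and `rhs` come out near $\frac{4 \cdot 2^3}{3} \approx 10.67$, matching the closed form.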
{ "language": "en", "url": "https://math.stackexchange.com/questions/4606244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Counting number of ways to buy items A man buys 240 items for \$250 at a store. There are 3 types of items: A (\$15), B (\$1) and C (25¢). He buys at least one of each. How many solutions? I am working on this problem for a programming logic class. I have already completed a program, but I want to find a way to check if it is correct or not. I got four solutions: A = 2 bought, B = 214 bought, C = 24 bought: Solution #1 A = 5 bought, B = 155 bought, C = 80 bought: Solution #2 A = 8 bought, B = 96 bought, C = 136 bought: Solution #3 A = 11 bought, B = 37 bought, C = 192 bought: Solution #4 I am solving it using two for loops (in Javascript). First loop = number of object A (from 1 to 13), second loop = number of object B (from 1 to 234). Then, I fill the rest in with object C, and I print whatever equals 250$. I believe I have the right answer, but I would like to figure out an equation to check it. The number of solutions is between 1 and 50.
As indicated in the comments, you can use generating functions as an alternative approach. To do this, let us use the exponent of $x$ to record the cost of a product, and the exponent of $a$ to record the number of items. As @trueblueanil mentioned, we can say that $60A +4B+C=1000$ by multiplying both sides of $15A+B+(1/4)C=250$ by $4$. Then:

* Generating function for type A: $$a^1x^{60}+a^2x^{120}+a^3x^{180}+...+a^kx^{60k}+...= \frac{ax^{60}}{1-ax^{60}}$$
* Generating function for type B: $$a^1x^{4}+a^2x^{8}+a^3x^{12}+...+a^kx^{4k}+...= \frac{ax^4}{1-ax^4}$$
* Generating function for type C: $$a^1x^{1}+a^2x^{2}+a^3x^{3}+...+a^kx^{k}+...= \frac{ax}{1-ax}$$

Now find the coefficient of $a^{240}x^{1000}$ in the expansion of $$\bigg(\frac{ax^{60}}{1-ax^{60}}\bigg)\bigg(\frac{ax^4}{1-ax^4}\bigg)\bigg(\frac{ax}{1-ax}\bigg)$$ I think you can find it using your programming skills.
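Since the final count is deferred to programming anyway, here is a minimal brute-force cross-check of the four solutions listed in the question (a sketch; the helper name is mine):

```python
def count_solutions():
    # 60A + 4B + C = 1000 and A + B + C = 240, with A, B, C >= 1
    sols = []
    for A in range(1, 240):
        for B in range(1, 240 - A):
            C = 240 - A - B        # C >= 1 automatically
            if 60 * A + 4 * B + C == 1000:
                sols.append((A, B, C))
    return sols

solutions = count_solutions()
```

This recovers exactly the four triples $(2,214,24)$, $(5,155,80)$, $(8,96,136)$ and $(11,37,192)$.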
{ "language": "en", "url": "https://math.stackexchange.com/questions/4606753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to derive the flow pattern in magic square problem? My question is: how did the boy manage to derive the pattern of the sequence of numbers that sum to 265? Check this link: https://www.youtube.com/shorts/pYwMQEE-6NY
I don't know the exact formula the boy used in the video, but I can provide you with one of my own that I discovered that works for $\textbf{all numbers}$. An observation is that the rows, columns and diagonals passing through the center number are symmetric to each other, at least when you compare the differences to the middle number of that row, column and diagonal. For example, taking the middle row, and letting the number in the center be $0$, we get the numbers, from left to right, $-9,-7,0,7,9$, for the $0 \bmod 5$ case. Another important observation is that the center number is the sum divided by $5$, that is, $\frac{265}{5}=53$. If we take the center number to be $0$, as we know what the center number is, we can now base every other number around it. Filling it in like below: This will give you numbers that are all unique, and satisfy the requirements in the video. Proof: When the center number is $0$, it is easily verified that all the columns, rows and diagonals are equal, and all numbers are unique. If we increase the center number by a value, say $\lambda$, we see that the uniqueness of the numbers is not affected by the increase. As the differences are not affected, each of the rows, columns and diagonals has the sum $5\lambda+c$, with $c$ being the residue $\bmod\ 5$. Therefore, as we vary $\lambda$, we can achieve the condition shown in the video for every $5\lambda+c$, or every residue $\bmod\ 5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4606882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is $f$ measurable? $f(x)=\begin{cases} 0& x\in \mathbb{R}-\mathbb{Q} \\ \frac{1}{b} & x\in \mathbb{Q},x=\frac{a}{b},\text{gcd}(a,b)=1 \end{cases}$ Let $f:\mathbb{R}\to\mathbb{R}$, $f(x)=\begin{cases} 0& x\in \mathbb{R}-\mathbb{Q} \\ \frac{1}{b} & x\in \mathbb{Q},x=\frac{a}{b},\text{gcd}(a,b)=1,b>0 \end{cases}$ Is $f$ measurable? For this function, can I use the same answer as for $f(x)=\begin{cases} 0, x\in I \cap [0,1] \\ \frac{1}{q} , x=\frac{p}{q}, p,q \in \mathbb{N}, (p,q)=1\end{cases}$ (Is $f$ measurable? Continuous?) In that case $f(x)=0$ if $x \in I\cap[0,1]$, where $I$ denotes the irrationals. Any hint, please.
Assuming the measure is Lebesgue measure: $\mathbb Q$ is a set of measure $0$, and hence $f=0$ almost everywhere, so $f$ is measurable because the constant function $0$ is. A rigorous proof goes as follows. Let $A\subset \mathbb R$ be measurable. Case 1: $0 \in A$. Then $f^{-1}(A)=(\mathbb R-\mathbb Q)\cup B$, where $B\subset \mathbb Q$. (Note: $B$ may be empty.) Since we are working with Lebesgue measure, which is complete, and $\mathbb Q$ is a set of measure $0$, $B$ is measurable. Hence $f^{-1}(A)=(\mathbb R-\mathbb Q)\cup B$ is a union of two measurable sets, which is measurable. Case 2: $0\notin A$. Then $f^{-1}(A)\subset \mathbb Q$, and for the same reason as above (the argument for $B$), $f^{-1}(A)$ is measurable. Hence for every measurable $A\subset \mathbb R$, $f^{-1}(A)$ is measurable, proving $f$ is measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4606994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
modules over C[x] Let $M$ be a module over $\mathbb C[x]$ and let $T=\{m\in M: x^n\cdot m=0\ \text{for some}\ n>0\}$. Is it true that $$\dim_{\mathbb C(x)} M\otimes_{\mathbb C[x]}\mathbb C(x)=\dim_{\mathbb C} (M/T)\otimes \mathbb C\ \text{?}$$ where $\mathbb C$ on the right is a $\mathbb C[x]$-module via $x=0.$ I can prove the $\geq$ part and, for finitely generated $M$, the opposite inequality -- but not in general. Is it an equality for every countably generated $M$?
Take $M=\mathbb{C}[x]_x$. Then on the left you have $$\text{dim}_{\mathbb{C}(x)}\mathbb{C}[x]_x\otimes \mathbb{C}(x)=1,$$ while on the right you have $$\text{dim}_{\mathbb{C}}\mathbb{C}[x]_x\otimes \mathbb{C}=0.$$ If you know about sheaves and affine varieties, you are likely to enjoy the algebro-geometric interpretation: the left hand side is the rank of the sheaf at the generic point, while on the right side we quotient out the torsion at 0 and take the rank. In the above example we get a strict inequality since the sheaf is supported away from 0. Edit: Upper Semi-Continuity of the Rank of the Fibre of a Sheaf is, I think, the generic case of your argument for the opposite inequality in the finitely generated case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4607307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Calculus Problem: polar-coordinate integration of a slice/wedge of a right circular cylinder This image states the problem: As shown in the diagram, a solid right circular cylinder of radius 12 cm is sliced by a plane that passes through the center of the base circle. Find the volume of the wedge-shaped piece that is created, given that its height is 16 cm (and its base has a radius of 12). I realized the easiest way to do this is to integrate with triangles from each side, but I was curious to see if it would be possible to integrate this using polar coordinates. I attempted to do this by integrating using the half-ellipses that are formed from the plane as cross sections and rotating that plane around the center of the base of the cylinder to create more ellipses. However, when I tried this I got $248$ instead of $\textbf{1536}$ (the correct answer). I'm confident in how I computed the integral--I think the mistake I made was in setting up the integration in the first place. Here is my working: Apologies if it's hard to follow. Could anyone help me set it up properly? Can you spot what I did wrong? Thanks!
I'm going to deviate slightly from your integral setup but hopefully the point is clear. Put the base of the cylinder in the $(x,y)$ plane. Align the diameter of the wedge - or using the far more interesting name, the ungula - with the plane $y=0$. Each semielliptical cross section is made in a plane $z=tx$, where $0\le t \le \frac43$. Each semiminor axis will be the same as the base radius $12$. For a given value of $t$, the semimajor axis of the cross section has length $\sqrt{144+144t^2}$ (i.e. the distance from the origin to the point $\left(12,0,12t\right)$). Hence the faces of a cross section have an average area of about $72\pi \sqrt{1+t^2}$. Now, you may be tempted to conclude the volume must be $$\int_0^{\frac43} 72\pi \sqrt{1+t^2} \, dt$$ but this would be wrong, and for the same reason your integral is wrong. In fact, this gives the volume of a cone of height $\frac43$ with elliptical cross sections possessing the same semiaxes. You effectively ended up treating each cross section as though they have the same thickness. But the solid we care about isn't made by stacking elliptical half-disks like a warped roll of coins, it's more like putting a misshapen orange back together from separate wedges. (One such wedge is shown in red.) The non-uniform thickness of the wedge needs to be accounted for. To approximate the volume of a given wedge, we can compare it to the volume of a sliver of an oblate-spheroidal shell - take the arc of an ellipse with semimajor/-minor axes $a$ and $b$, respectively, and revolve it by a small angle $\theta$ about its minor axis. 
This volume $V_{\rm sliver}$ is proportional to the spheroid's total volume when the arc is revolved by $2\pi$, such that $$\frac{V_{\rm sliver}}\theta = \frac{\frac{4\pi}3 a^2b}{2\pi} \implies V_{\rm sliver} = \frac23 a^2 b \theta$$ A small increase in the angle of $\Delta \theta$ causes $V_{\rm sliver}$ to increase to $$V'_{\rm sliver} = \frac23 a^2 b (\theta + \Delta \theta)$$ so that the overall change in sliver volume amounts to $$\Delta V = V'_{\rm sliver}-V_{\rm sliver} = \frac23 a^2 b (\theta + \Delta \theta) - \frac23 a^2 b \theta = \frac23 a^2 b \Delta \theta$$ Now divide both sides by $\Delta\theta$ to get the ratio of change in volume to change in angle. Letting $\Delta\theta\to0$ (note the convergence to non-constant), we have $$\lim_{\Delta\theta\to0} \frac{\Delta V}{\Delta \theta} = \frac{dV}{d\theta} = \frac23 a^2 b$$ Then the volume of an ungula would be obtained by the definite integral (omitting the domain) $$V = \int dV = \int \frac23 a^2 b \, d\theta$$ For a given sliver that makes up our ungula, for which $a=12\sqrt{1+t^2}$ and $b=12$, we have the relation $\cos(\theta)=\frac{12}a$. It follows that $$\theta = \cos^{-1}\left(\frac{1}{\sqrt{1+t^2}}\right) = \tan^{-1}(t) \implies \frac{d\theta}{dt} = \frac{1}{1+t^2}$$ and hence, using the chain rule, the ungula's volume is $$\begin{align*} V &= \int\limits_{t\in\left[0,\frac43\right]} \frac23 a^2 b \, d\theta \\[1ex] &= \int_0^{\frac43} \frac23 \left(12\sqrt{1+t^2}\right)^2 \cdot 12 \, \frac{d\theta}{dt} \, dt \\[1ex] &= 1152 \int_0^{\frac43} (1+t^2) \, \frac{dt}{1+t^2} \\[1ex] &= 1152 \cdot \frac43 = \boxed{1536} \end{align*}$$ It turns out the integral you got isn't too far off. Let $t=\tan(\theta)$. As we have $t\in\left[0,\frac43\right]$, we get $\theta\in\left[0,\tan^{-1}\left(\frac43\right)\right]$. Then $$V = 1152 \int_0^{\tan^{-1}\left(\frac43\right)} \sec^{\color{red}{2}}(\theta) \, d\theta$$
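As an independent numerical check (not part of the argument above), the wedge can also be integrated in Cartesian coordinates: it sits over the half-disk $x^2+y^2\le 144$, $x\ge 0$, under the plane $z=\tfrac{4}{3}x$ (slope $16/12$).

```python
from math import sqrt
from scipy.integrate import dblquad

# volume = double integral over the half-disk of the height (4/3) x
vol, err = dblquad(lambda y, x: (4.0 / 3.0) * x,
                   0, 12,                          # x range
                   lambda x: -sqrt(144 - x**2),    # y lower bound
                   lambda x: sqrt(144 - x**2))     # y upper bound
```

The numeric value agrees with the boxed $1536$ from the sliver argument.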
{ "language": "en", "url": "https://math.stackexchange.com/questions/4607519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Supplementary exercise 51, chapter 3 from 'A walk through combinatorics' 4th edition I'm self-studying 'A walk through combinatorics' by Miklós Bóna. This book has some supplementary exercises at the end of each chapter, no solution provided. I'm trying exercise 51, chapter 3. Problem statement: A store has $n$ different products for sale. Each of them has a different price that is at least one dollar, at most $n$ dollars, and is a whole dollar. A customer only has the time to inspect $k$ different products. After doing so, she buys the product that has the lowest price among the $k$ products she inspected. Prove that on average she will pay $\frac{n+1}{k+1}$ dollars. Attempt at solution: The customer pays: * *1 dollar if the product with price 1 dollar is the cheapest among the $k$ inspected products. This is possible in $\binom{n-1}{k-1}$ ways, since we need to pick $k-1$ products with prices in $\{2, \ldots, n\}$. *2 dollars if the product with price 2 dollars is the cheapest among the $k$ inspected products. This is possible in $\binom{n-2}{k-1}$ ways, since we need to pick $k-1$ products with prices in $\{3, \ldots, n\}$. *$\ldots$ *$n-k+1$ dollars if the product with price $n-k+1$ is the cheapest among the $k$ inspected products. There are only $k-1$ products which are more expensive, so this is possible in $\binom{n-(n-k+1)}{k-1} = \binom{k-1}{k-1}$ ways. This implies that the customer has to pay, on average $$\frac{1}{\binom{n}{k}}\cdot\left(1\cdot \binom{n-1}{k-1} + 2 \cdot \binom{n-2}{k-1} + \ldots + (n-k+1) \cdot \binom{k-1}{k-1}\right).$$ This is where I'm stuck: I want to show that the sum in brackets equals $\binom{n+1}{k+1}$ (a guess based on what we need to prove, checked with some values of $n,k$; this seems to be correct). Question: How to prove that $$1\cdot \binom{n-1}{k-1} + 2 \cdot \binom{n-2}{k-1} + \ldots + (n-k+1) \cdot \binom{k-1}{k-1} = \binom{n+1}{k+1},$$ a hint would be appreciated. I tried to give a combinatorial proof, but failed.
A small hint for an alternative: * *List all possible $(k+1)$ - subsets of $\{0,1,...,n\}$ *Remove the smallest element from each subset You'll end up with a list of all possible $k$ - subsets of $\{1,...,n\}$ but with each subset repeated as many times as its smallest element. How many entries are in the list? $\binom{n+1}{k+1}$
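A brute-force sketch of the hint for small cases: the sum of the smallest elements over all $k$-subsets of $\{1,\dots,n\}$ equals $\binom{n+1}{k+1}$, so the average minimum — the expected price — is $\binom{n+1}{k+1}/\binom{n}{k}=\frac{n+1}{k+1}$.

```python
from fractions import Fraction
from itertools import combinations
from math import comb

def average_min(n, k):
    """Expected minimum of a uniformly random k-subset of {1, ..., n}."""
    subsets = list(combinations(range(1, n + 1), k))
    total_min = sum(min(s) for s in subsets)
    # the hint's counting: the repeated list has comb(n+1, k+1) entries
    assert total_min == comb(n + 1, k + 1)
    return Fraction(total_min, len(subsets))
```

For example, `average_min(10, 3)` is $11/4=\frac{n+1}{k+1}$.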
{ "language": "en", "url": "https://math.stackexchange.com/questions/4607654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Divergence Theorem I'm looking for an alternate proof of a result. Let $A,B,C\in\mathbb{S}^d$. Let $T$ be the intersection of $\mathbb{S}^d$ with the cone generated by $A,B,C$. Then, $T$ is a spherical triangle. The vertices $A,B,C$ are opposite arcs with lengths $a,b,c$. Call $[T]$ the area of $T$. The centroid of $T$ is $$g:=\frac{1}{[T]}\int_T xd\mu$$ where $\mu$ is uniform on the sphere. JE Brock found the centroid of a spherical triangle $T=\triangle ABC$ to be (paraphrased, but it checks out for me) $$g=\frac{1}{2[T]}\left(\frac{A\times B}{|A\times B|}c+\frac{B\times C}{|B\times C|}a+\frac{C\times A}{|C\times A|}b\right)$$ Thinking of $A\times B/|A\times B|$ as the unit vector perpendicular to side $c$, we can write this as $$\int_T xd\mu=g*[T]=\frac{1}{2}\int_{\partial T} \vec{n} ds$$ where $\vec{n}$ is the inward pointing unit vector. This formulation looks like the divergence theorem. So, my question is how to prove it with the divergence or Stokes Theorem. $$\int_T\mathrm{div}F=\int_{\partial T} F\cdot\vec{n}$$ I've tried separating components setting the vector field $F$ to $(x_1^2,0,\dots,0)$ to get a linear term in the divergence, but that doesn't match the right side where I expect $F=(1,0,\dots,0)$ to get just the normal around the boundary. So, I suspect I need to use Stoke's Theorem on manifolds. The form can be the unit tangent so that the derivative points toward the center of the sphere. That would fit the form I'm looking for, I just need some details. Does that seem like the right course? Thanks! Happy Holidays!
You can prove this centroid formula using Stokes' theorem of vector calculus. First note that using standard identities (e.g. from Wikipedia) we can find the relation $$ \nabla \times \left(\frac{1}{2} \vec c \times \vec r\right) = \vec c, $$ for a constant vector $\vec c$ and $\vec r = (x,y,z)$. Inserting this particular vector field into Stokes' theorem yields $$ \vec c \cdot \int d\vec a = \int \nabla \times \left(\frac{1}{2} \vec c \times \vec r\right) \cdot d\vec a = \frac{1}{2}\int ( \vec c \times \vec r)\cdot d\vec l = \frac{1}{2}\vec c \cdot \int \vec r \times d\vec l, $$ and as $\vec c$ is arbitrary, we can conclude $$ \int d\vec a = \frac{1}{2}\int \vec r \times d\vec l. $$ The integral on the LHS is the quantity you are interested in computing as $d\vec a = \hat n da$. Thus we can equivalently compute it by integrating the RHS over the three side of the triangle. This is easy as $\vec r$ has unit length on the sphere and is perpendicular to $d\vec l$. That is the integrand is constant and we find $$ \int_{A\rightarrow B} \vec r \times d\vec l = \frac{\vec A\times \vec B}{|\vec A\times \vec B|} \int _{A\rightarrow B} dl, $$ where the integral in your notation equal $c$. Adding up the three sides gives the final answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4607776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that a stochastic process is strictly stationary Let $X = (X_t)_{t \in \mathbb Z}$ and $Y = (Y_t)_{t \in \mathbb Z}$ be two independent and strictly stationary stochastic processes with mean zero, i.e., for any $(t_1,...,t_p)\in \mathbb Z^p$ and any $p\in \mathbb N$, we have $$(X_{t_1},...., X_{t_p}) \overset d \sim (X_{t_1+h},...., X_{t_p+h})\quad \forall \, h >0$$ $$E[X_t]=0,\quad \forall\, t \in \mathbb Z$$ The same for $Y = (Y_t)_{t \in \mathbb Z}$. Now, consider a $U\sim \hbox{Bernoulli(p)}$ independent of $X$ and $Y$. Define $Z = (Z_t)_{t \in \mathbb Z}$ as: \begin{equation}\label{abc}\tag{I} Z=\begin{cases} X, & \text{ if $U =1$}\\ Y, & \text{ if $U =0$} \end{cases} \end{equation} How to show that $Z = (Z_t)_{t \in \mathbb Z}$ is strictly stationary? i.e.: $$(Z_{t_1},...., Z_{t_p}) \overset d \sim (Z_{t_1+h},...., Z_{t_p+h})\quad \forall \, h >0$$ I need express the distribution of $(Z_{t_1},...., Z_{t_p})$ in function of the distribution $(X_{t_1},...., X_{t_p})$, $(Y_{t_1},...., Y_{t_p})$ and $U$. Denote the characteristic function, respectively as $\varphi_{Z_{t_1,...,t_p}}$, $\varphi_{X_{t_1,...,t_p}}$, $\varphi_{Y_{t_1,...,t_p}}$ and $\varphi_U$. Are there any relations that can write $\varphi_{Z_{t_1,...,t_p}}$ in terms of $\varphi_{X_{t_1,...,t_p}}$, $\varphi_{Y_{t_1,...,t_p}}$ and $\varphi_U$?
For $(t_1,\dots,t_p)\in\mathbb Z^p$, $h>0$ and $B$ a Borel subset of $\mathbb R^p$, \begin{align} \mathbb P\left( (Z_{t_1+h},...., Z_{t_p+h})\in B\right)&=\mathbb P\left( (Z_{t_1+h},...., Z_{t_p+h})\in B,U=0\right)+\mathbb P\left( (Z_{t_1+h},...., Z_{t_p+h})\in B,U=1\right)\\ &=\mathbb P\left( (Y_{t_1+h},...., Y_{t_p+h})\in B,U=0\right)+\mathbb P\left( (X_{t_1+h},...., X_{t_p+h})\in B,U=1\right)\\ &=\mathbb P\left( (Y_{t_1+h},...., Y_{t_p+h})\in B\right)\mathbb P\left(U=0\right)+\mathbb P\left( (X_{t_1+h},...., X_{t_p+h})\in B\right)\mathbb P\left(U=1\right)\\ &=\mathbb P\left( (Y_{t_1},...., Y_{t_p})\in B\right)\mathbb P\left(U=0\right)+\mathbb P\left( (X_{t_1},...., X_{t_p})\in B\right)\mathbb P\left(U=1\right)\\ &=\mathbb P\left( (Y_{t_1},...., Y_{t_p})\in B,U=0\right)+\mathbb P\left( (X_{t_1},...., X_{t_p})\in B,U=1\right)\\ &=\mathbb P\left( (Z_{t_1},...., Z_{t_p})\in B,U=0\right)+\mathbb P\left( (Z_{t_1},...., Z_{t_p})\in B,U=1\right)\\ &=\mathbb P\left( (Z_{t_1},...., Z_{t_p})\in B\right), \end{align} where the third and fifth equalities follow from the independence of $U$ and $(X,Y)$, and the fourth from the strict stationarity of $X$ and $Y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4607942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Regarding a converse to Hopf's Umlaufsatz I read in a differential geometry textbook that the total signed curvature of a closed plane curve is an integer multiple of $2\pi$. In that same textbook, I also read about Hopf's Umlaufsatz, which states that the total signed curvature of a simple closed plane curve is either $2\pi$ or $-2\pi$. Now, I am interested in a converse of the previous statement. If a closed plane curve has total signed curvature either $2\pi$ or $-2\pi$, must that curve be a simple closed plane curve?
The "total signed curvature", as you call it, is an isotopy invariant for smooth, oriented, closed plane curves (i.e. immersions of $S^1$ in $\mathbb{R}^2$). The answer to your question is "no" because you can deform a simple closed curve into a nonsimple closed curve. So, for instance, this curve is isotopic to a simple counter-clockwise loop and has the same total signed curvature. Here's a slightly more interesting-looking example, this time isotopic to a simple clockwise loop. In general, the total signed curvature is a complete isotopy invariant. If two immersions of $S^1$ have the same total signed curvature, then they are isotopic. That is, one can be deformed to the other through a family of immersions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4608083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Simplifying a binomial coefficient summation I am trying to solve a problem related to expected value, but I am having trouble reducing a binomial coefficient expression. How can I prove the following? Assume $a \le k \le n$, $$\sum_{i=0}^{k}\frac{{ i \cdot {{k}\choose{i}}}\cdot {{n-k}\choose{a-i}}} {{{n}\choose{a}}} = \frac{a\cdot k}{n}$$
\begin{align} \sum_{i=0}^{k}\frac{{ i \binom{k}{i}} {{n-k}\choose{a-i}}} {\binom{n}{a}} &= \sum_{i=1}^{k}\frac{{ i \binom{k}{i}} {{n-k}\choose{a-i}}} {\binom{n}{a}} \\ &= \frac{\sum_{i=1}^{k}{ i \binom{k}{i}} {{n-k}\choose{a-i}}} {\binom{n}{a}} \\ &= \frac{\sum_{i=1}^{k}{ i \frac{k}{i} \binom{k-1}{i-1}} {{n-k}\choose{a-i}}} {\frac{n}{a}\binom{n-1}{a-1}} \\ &= \frac{ak}{n}\cdot \frac{\sum_{i=1}^{k}{\binom{k-1}{i-1}} {{n-k}\choose{a-i}}} {\binom{n-1}{a-1}} \\ &= \frac{ak}{n} &&\text{by Vandermonde's identity} \end{align}
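A quick exact-arithmetic check of the identity (a sketch; the guard on the second index is needed because `math.comb` rejects negative arguments):

```python
from fractions import Fraction
from math import comb

def expected_value(n, k, a):
    """sum_i i*C(k,i)*C(n-k,a-i) / C(n,a), which should equal a*k/n."""
    total = sum(i * comb(k, i) * comb(n - k, a - i)
                for i in range(k + 1) if 0 <= a - i <= n - k)
    return Fraction(total, comb(n, a))
```

For example, `expected_value(10, 4, 3)` gives $\frac{3\cdot4}{10}$, in line with the Vandermonde argument above.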
{ "language": "en", "url": "https://math.stackexchange.com/questions/4608265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Adjoint action of $\mathrm{SL}_2$ on a Cartan subalgebra Let $G = \mathrm{SL}_2(\mathbb{R})$ and $\mathfrak{g}$ its Lie algebra. We define the adjoint action of $G$ on the Lie algebra $\mathfrak{g}$ as the map $\mathrm{Ad} \colon G \times \mathfrak{g} \to \mathfrak{g}$ with $\mathrm{Ad}(g)(X) = g X g^{-1}$. We define the Lie bracket as the map $\mathrm{ad} \colon \mathfrak{g} \times \mathfrak{g} \to \mathfrak{g}$ given by $\mathrm{ad}(X,Y) = [X,Y] = XY-YX$. I know that we have relations like $\mathrm{ad}(h,f) = [h,f] = -2f$ when $f \in \mathfrak{g}$ is lower triangular and $h$ is in a Cartan subalgebra. Can we deduce relations like that when we consider the action of the Lie group on a Cartan subalgebra? I.e. the value of $\mathrm{Ad}(F)(h) = FhF^{-1}$ for $F$ in the unipotent radical and $h$ in a Cartan subalgebra?
You don't specify the unipotent radical of what but I assume you mean of the subgroup of lower triangular matrices since $f$ is lower triangular (indeed you want it strictly lower triangular). The nilpotent radical $\mathfrak{n}$ of the lower triangular subalgebra $\mathfrak{b}$ is isomorphic to the unipotent radical in the lower triangular subgroup via its exponential map since $\mathfrak{n}$ is nilpotent. In other words, $F = \exp(f)$ for some $f\in\mathfrak{n}$. So you are trying to compute $\operatorname{Ad}(\exp(f))(h)$ but this is the same as $$\exp(\operatorname{ad}(f))(h) = h + [f,h]+\frac{1}{2}[f,[f,h]] + \dots$$ Now $[f,h]= 2f$ and $[f,[f,h]]=0$ and indeed the rest of the terms vanish. So $\operatorname{Ad}(F)(h) = h + 2f$. Note this will of course extend to other $\mathfrak{sl}_n$ although of course it will get a little bit more complicated as we would need to account for different possibilities. For example, at what stage is $\operatorname{ad}^n(f)=0$, is $f$ a root vector and for which root, is $h$ a coroot and so on.
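A concrete $2\times 2$ check of $\operatorname{Ad}(F)(h)=h+2tf$ (a sketch; since $f$ is strictly lower triangular, $(tf)^2=0$ and $\exp(tf)=I+tf$ exactly):

```python
import numpy as np

h = np.array([[1.0, 0.0], [0.0, -1.0]])   # Cartan element of sl_2
f = np.array([[0.0, 0.0], [1.0, 0.0]])    # strictly lower triangular root vector

t = 2.5                                    # arbitrary parameter
F = np.eye(2) + t * f                      # exp(t f), exact since (t f)^2 = 0

Ad_F_h = F @ h @ np.linalg.inv(F)          # Ad(F)(h) = F h F^{-1}
```

The matrix `Ad_F_h` equals $h + 2tf$, as the truncated Baker–Campbell–Hausdorff expansion above predicts.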
{ "language": "en", "url": "https://math.stackexchange.com/questions/4608427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Pushouts for putting together structures In Sannella and Tarlecki's book "Foundations of algebraic specification and formal software development" they describe pushouts as: Pushouts provide a basic tool for putting together structures of various kinds. Given two objects $A$ and $B$, a pair of morphisms $f:C \rightarrow A$ and $g: C \rightarrow B$ indicates a common source from which "parts" of A and B come. The pushout of $f$ and $g$ puts together $A$ and $B$ while identifying the parts coming from the common source as indicated by $f$ and $g$, but keeping the new parts disjoint. My question is about the last line. I understand that a pushout puts $A$ and $B$ together but I don't understand what they mean when they say that it's able to identify parts from $C$ and that it keeps the new parts disjoint. I understand that this is a very specific question, thank you for any help.
Suppose that $A$, $B$ and $C$ are sets, and that $f \colon C \to A$ and $g \colon C \to B$ are functions between these sets. We have two kinds of elements in $A$: those elements which lie in the image of $f$ (and thus come from $C$ via the map $f$) and those elements that don’t lie in this image. This gives us a disjoint union $$ \newcommand{\dcup}{\mathbin{\dot{\cup}}} A = (A ∖ f(C)) \dcup f(C) \,. $$ Let us abbreviate the set $A ∖ f(C)$ as $A'$, so that $$ A = A' \dcup f(C) \,. $$ We have similarly the disjoint union $$ B = B' \dcup g(C) \quad \text{where} \quad B' = B ∖ g(C) \,. $$ The sets $f(C)$ and $g(C)$ are the “parts” that come from $C$, whereas the sets $A'$ and $B'$ are the “new parts”. The pushout in question will be of the form $$ A' \dcup I \dcup B' \,, $$ with the set $I$ given by $(f(C) \amalg g(C)) / {∼}$, where $\sim$ is the equivalence relation generated by the conditions $f(x) ∼ g(x)$ for all $x ∈ C$. Intuitively speaking, the set $I$ arises by putting the two sets $f(C)$ and $g(C)$ right next to each other, and then stitching them together so that $f(x)$ and $g(x)$ become the same (in $I$). We thus “identify” $f(x)$ with $g(x)$ for every $x ∈ C$. We see that the pushout consists of three disjoints parts: the new parts $A'$ and $B'$, and the identified part $I$. We see in particular that $A'$ and $B'$ stay disjoint.
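For finite sets this recipe can be carried out directly (a sketch; the tags `'A'`/`'B'` keep the new parts disjoint, and a union–find structure glues $f(x)$ with $g(x)$ for every $x \in C$):

```python
def pushout(A, B, C, f, g):
    """Blocks of the pushout of f: C -> A and g: C -> B in Set.

    Elements are tagged ('A', a) / ('B', b) so the new parts stay
    disjoint; f(x) and g(x) are identified for every x in C.
    """
    parent = {('A', a): ('A', a) for a in A}
    parent.update({('B', b): ('B', b) for b in B})

    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]   # path compression
            u = parent[u]
        return u

    for x in C:
        parent[find(('A', f(x)))] = find(('B', g(x)))

    blocks = {}
    for u in list(parent):
        blocks.setdefault(find(u), set()).add(u)
    return list(blocks.values())
```

With $A=\{1,2,3\}$, $B=\{a,b\}$, $C=\{0\}$, $f\equiv 1$, $g\equiv a$, the result has four blocks: the singletons from $A'=\{2,3\}$ and $B'=\{b\}$, plus the identified part $I=\{(\text{'A'},1),(\text{'B'},a)\}$.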
{ "language": "en", "url": "https://math.stackexchange.com/questions/4608540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Approximating the solution to a system of 3 oscillatory ODEs? ODE System I have the following system of ODEs: $x'(t)=x(t)\frac{z(t)}{Z}-x(t)\frac{x(t)+y(t)}{J}$ $y'(t)=y(t)\left(1-\frac{z(t)}{Z}\right)(1-q)-y(t)\frac{x(t)+y(t)}{J}$ $z'(t)=y(t)\left(1-\frac{z(t)}{Z}\right)(1-q)-mz(t)$, where all variables and parameters are positive and $0<q<1$. Background The system exhibits an equilibrium, $\left(\overline{x},\overline{y},\overline{z}\right)$, where $\overline{x}$, $\overline{y}$, and $\overline{z}$ are positive when $J$ is greater than a critical value $J_{Crit}$. The real parts of the eigenvalues $\left(\lambda_1,\lambda_2,\lambda_3\right)$ corresponding to $\left(\overline{x},\overline{y},\overline{z}\right)$ are all negative when $J>J_{Crit}$. Upon further analysis of the eigenvalues, one finds that $\left(\overline{x},\overline{y},\overline{z}\right)$ always exhibits oscillations (i.e., two of the eigenvalues are complex). The answer to this post showed that the solution to such a 3-dimensional system is well-approximated by $\overline{V} + Ae^{\sigma t}\cos{\left(\omega t+ \phi\right)} + be^{\lambda_3t} \ \forall \ \overline{V} \in \left(\overline{x},\overline{y},\overline{z}\right)$, where $A$, $\sigma$, $\omega$, and $\phi$ give the oscillations' amplitude, decay rate, frequency, and initial phase, respectively. Both $A$ and $b$ are functions of initial conditions, which are $x(0)$, $y(0)$, $z(0)$. Question In general, how does one calculate $b$ for a 3-dimensional ODE system like the one presented here?
You want the solution $$ \overline{V} + Ae^{\sigma t}\cos{\left(\omega t+ \phi\right)} + be^{\lambda_3t} $$ to be equal to $V_0=(x(0),y(0),z(0))$ at $t=0$. So $$ \overline{V} + A\cos{\left(\phi\right)} + b = V_0. $$ Next it needs to be taken into account that $b$ is a multiple of the eigenvector associated with $\lambda_3$ and $A$ is a linear combination of the eigenvectors associated with $\lambda_1$ and $\lambda_2$.
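Concretely (a sketch with an assumed Jacobian, equilibrium and initial condition): expand $V_0-\overline V$ in the eigenbasis of the linearization; the coefficient of the eigenvector belonging to the real eigenvalue $\lambda_3$ gives the $b$ term.

```python
import numpy as np

# assumed Jacobian at the equilibrium: one real eigenvalue plus a
# complex-conjugate pair (any such 3x3 matrix works for the sketch)
M = np.array([[-1.0,  2.0,  0.0],
              [-2.0, -1.0,  0.5],
              [ 0.0,  0.3, -0.5]])
eigvals, eigvecs = np.linalg.eig(M)

V_bar = np.array([1.0, 2.0, 0.5])   # assumed equilibrium
V0    = np.array([1.5, 1.8, 0.9])   # assumed initial condition

# V0 - V_bar = sum_i c_i v_i  ->  solve for the coefficients c_i
c = np.linalg.solve(eigvecs, V0 - V_bar)

k = int(np.argmin(np.abs(eigvals.imag)))   # index of the real lambda_3
b_vec = (c[k] * eigvecs[:, k]).real        # coefficient vector of e^{lambda_3 t}
```

The remaining two (conjugate) coefficients combine into the real term $Ae^{\sigma t}\cos(\omega t+\phi)$.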
{ "language": "en", "url": "https://math.stackexchange.com/questions/4608676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Interesting ways to write 2023 The year 2023 is near and today I found this nice way to write that number: $\displaystyle\color{blue}{\pi}\left(\frac{(\pi !)!-\lceil\pi\rceil\pi !}{\pi^{\sqrt{\pi}}-\pi !}\right)+\lfloor\pi\rfloor=2023$ where $\color{blue}{\pi}$ is the counting function of prime numbers. My question is, do you know any other interesting way to write 2023? By the way, happy new year everyone
$\text{2022}$+$\text{1}$=$\text{2023}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4608782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "33", "answer_count": 20, "answer_id": 14 }
Solving a non-homogeneous Volterra integral equation of the second kind Q. If $y(x)=1+\displaystyle\int_0^x e^{-(x+t)}y(t)\,dt,$ then $y(1)$ equals: (a) $0$, (b) $1$, (c) $2$, (d) $3$. I tried the successive approximations method (starting with $y_0=0$), conversion to a DE, and finding the resolvent kernel. But none of them worked out for me! So, I thought of applying the Laplace transform technique: \begin{align} \mathscr{L}[y(x)]&=\mathscr{L}[1] + \mathscr{L}\left[\int_0^x e^{-(x+t)}y(t)\,dt\right]\\ &=\frac 1s + \frac 1s \mathscr{L}[e^{-(x+t)}y(t)] \end{align} In $\mathscr{L}[e^{-(x+t)}y(t)]$, both $x$ and $t$ are variables. Is it even possible to find $\mathscr{L}[e^{-(x+t)}y(t)]$ ?
$$y(x)=1+\displaystyle \int_0^x e^{-(x+t)}y(t)\,dt \implies y(x)=1+e^{-x}\int_{0}^{x} e^{-t} y(t)\, dt $$ So $y(0)=1$. Differentiating with respect to $x$ using the Leibniz rule, $$y'(x)=-e^{-x}\int_{0}^{x} e^{-t} y(t)\, dt+e^{-x} e^{-x} y(x)$$ $$\implies y'(x)=-(y(x)-1)+e^{-2x} y(x)$$ $$\implies y'(x)+(1-e^{-2x})y(x)=1$$ This is a linear ODE whose integrating factor is $$I=e^{\int (1-e^{-2x})dx}=e^{x+\frac{1}{2}e^{-2x}}.$$ So, $$y(x)=e^{-x-\frac{1}{2}e^{-2x}}\int e^{x+\frac{1}{2}e^{-2x}} dx+ C e^{-x-\frac{1}{2}e^{-2x}}$$ $$y(x)=1-\sqrt{\frac{\pi}{2}}e^{-x-\frac{1}{2}e^{-2x}} \text{Erfi}(e^{-x}/\sqrt{2})+C e^{-x-\frac{1}{2}e^{-2x}}$$ Note that $\int e^{x+\frac{1}{2}e^{-2x}} dx=e^{x+\frac{1}{2}e^{-2x}}-\sqrt{\frac{\pi}{2}} \text{Erfi}(e^{-x}/\sqrt{2}).$
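A numeric sketch is an easy cross-check here: solve the equivalent initial-value problem and verify the result against the original integral equation at $x=1$.

```python
import numpy as np
from scipy.integrate import solve_ivp, quad

# equivalent IVP: y' = 1 - y + e^{-2x} y,  y(0) = 1
sol = solve_ivp(lambda x, y: 1 - y + np.exp(-2 * x) * y,
                (0.0, 1.0), [1.0], dense_output=True,
                rtol=1e-10, atol=1e-12)
y1 = sol.y[0, -1]

# self-consistency with the integral equation at x = 1:
# y(1) should equal 1 + integral_0^1 e^{-(1+t)} y(t) dt
rhs, _ = quad(lambda t: np.exp(-(1.0 + t)) * sol.sol(t)[0], 0.0, 1.0)
rhs += 1.0
```

The two values agree to solver precision, so the ODE reformulation above is consistent with the original equation.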
{ "language": "en", "url": "https://math.stackexchange.com/questions/4608843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
What is $\left ( \vec{\nabla} \times \vec{A} \right ) \cdot \left ( \vec{\nabla} \times \vec{A} \right )$? I'm trying to rewrite $\left ( \vec{\nabla} \times \vec{A} \right ) \cdot \left ( \vec{\nabla} \times \vec{A} \right )$ in some other way. I tried using Levi-Civita symbol and Kronecker delta, but I'm stuck. Here is what I did: $$\left ( \vec{\nabla} \times \vec{A} \right ) \cdot \left ( \vec{\nabla} \times \vec{A} \right ) = \left ( \vec{\nabla} \times \vec{A} \right )_i \left (\vec{\nabla} \times \vec{A} \right )_i = \epsilon_{ijk} \frac{\partial A_k}{\partial x_j} \epsilon_{imn} \frac{\partial A_n}{\partial x_m} = \left ( \delta_{jm} \delta_{kn} - \delta_{jn} \delta_{km} \right ) \frac{\partial A_k}{\partial x_j} \frac{\partial A_n}{\partial x_m}$$ $$= \left ( \frac{\partial A_k}{\partial x_m} \right )^{2} - \frac{\partial A_m}{\partial x_j} \frac{\partial A_j}{\partial x_m} = \left ( \frac{\partial A_k}{\partial x_m} \right )^{2} - \frac{\partial A_j}{\partial x_j} \frac{\partial A_m}{\partial x_m} $$ And I'm stuck with both these terms. (I'm sorry for no rigour switching order of partials, but I couldn't come up with anything else). Where I messed up?
Your work is correct up to the very last equality: in general $\sum_{j,m}\frac{\partial A_m}{\partial x_j}\frac{\partial A_j}{\partial x_m}\neq\sum_{j,m}\frac{\partial A_j}{\partial x_j}\frac{\partial A_m}{\partial x_m}$ — the indices cannot be decoupled like that (the right-hand side would be $(\nabla\cdot A)^2$). Keeping the correct second term, the answer given here (by user15546) gives one path to condensing this expression. Let $J$ denote the Jacobian, so that $$ J_{ij} = \frac{\partial A_i}{\partial x_j}. $$ Then we can write $$ \sum_{k,m=1}^3\left ( \frac{\partial A_k}{\partial x_m} \right )^{2} - \sum_{j,m = 1}^3\frac{\partial A_j}{\partial x_m} \frac{\partial A_m}{\partial x_j} = \operatorname{trace}(JJ^T) - \operatorname{trace}(J^2). $$ We could alternatively derive a similar formula directly, avoiding the index juggling. To start, note the non-zero entries of $J-J^T$ are the components of the curl $\nabla \times A$, with each component appearing twice. Thus, $$ (\nabla \times A) \cdot (\nabla \times A) = \frac 12 \|J - J^T\|_F^2 = \frac 12 \operatorname{tr}[(J - J^T)(J^T - J)]. $$ From there, we can expand $$ \frac 12 \operatorname{tr}[(J - J^T)(J^T - J)] = \\ \frac 12 \left(\operatorname{tr}[JJ^T] - \operatorname{tr}[J^2] - \operatorname{tr}[J^TJ^T] + \operatorname{tr}[J^TJ]\right) = \\ \frac 12 \left(2\operatorname{tr}[JJ^T] - 2\operatorname{tr}[J^2]\right) = \\ \operatorname{tr}[JJ^T] - \operatorname{tr}[J^2]. $$
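The identity $(\nabla\times A)\cdot(\nabla\times A)=\operatorname{tr}(JJ^T)-\operatorname{tr}(J^2)$ is easy to confirm symbolically (a sketch with an arbitrarily chosen polynomial field):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([y**2 * z, x * z + y, x**3 * y])   # arbitrary smooth field

J = A.jacobian(sp.Matrix([x, y, z]))             # J_ij = dA_i / dx_j
curl = sp.Matrix([sp.diff(A[2], y) - sp.diff(A[1], z),
                  sp.diff(A[0], z) - sp.diff(A[2], x),
                  sp.diff(A[1], x) - sp.diff(A[0], y)])

lhs = sp.expand(curl.dot(curl))
rhs = sp.expand((J * J.T).trace() - (J * J).trace())
```

Both expansions produce the same polynomial, so their difference simplifies to zero.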
{ "language": "en", "url": "https://math.stackexchange.com/questions/4609158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Is there another method to evaluate $\int_1^{\infty} \frac{\ln x}{x^{n+1}\left(1+x^n\right)} d x, \textrm{ where }n\in N?$ Letting $x\mapsto \frac{1}{x}$ transforms the integral into $\displaystyle I=\int_1^{\infty} \frac{\ln x}{x^{n+1}\left(1+x^n\right)} d x=-\int_0^1 \frac{x^{2 n-1} \ln x}{x^n+1} d x \tag*{} $ Splitting the integrand into two pieces like $\displaystyle I=- \underbrace{\int_0^1 x^{n-1} \ln x d x}_{J} + \underbrace{\int_0^1 \frac{x^{n-1} \ln x}{x^n+1} d x}_{K} \tag*{} $ For the integral $J,$ letting $z=-n\ln x$ transforms $J$ into $\displaystyle J= -\frac{1}{n^2} \int_0^{\infty} z e^{-z} d z =-\frac{1}{n^2}\tag*{} $ For integral $K$, using the series for $|x|<1,$ $\displaystyle \ln (1+x)=\sum_{k=0}^{\infty} \frac{(-1)^k}{k+1} x^{k+1},\tag*{} $ and integrating by parts first, $K=-\frac{1}{n}\int_0^1 \frac{\ln\left(1+x^n\right)}{x}\,dx$ (the boundary terms vanish), we have $\displaystyle \begin{aligned}K& =-\frac{1}{n} \sum_{k=0}^{\infty} \frac{(-1)^k}{k+1} \int_0^1 x^{n(k+1)-1} d x \\& =-\frac{1}{n^2} \sum_{k=0}^{\infty} \frac{(-1)^k}{(k+1)^2} \\& =-\frac{1}{n^2}\left[\sum_{k=1}^{\infty} \frac{1}{k^2}-2 \sum_{k=1}^{\infty} \frac{1}{(2 k)^2}\right] \\& =-\frac{1}{2 n^2} \cdot \frac{\pi^2}{6} \\& =-\frac{\pi^2}{12 n^2}\end{aligned}\tag*{} $ Putting them back yields $\displaystyle \boxed{ I=\frac{1}{12 n^2}\left(12-\pi^2\right)}\tag*{} $ Is there an alternative method? Comments and alternative methods are highly appreciated.
Thanks to @Quanto's short solution and @Claude's generalisation using the hypergeometric function. I am going to use Feynman's integration technique to prove the generalisation $$ \boxed{I=\int_1^{\infty} \frac{\ln x}{x^m\left(1+x^n\right)} d x=\frac{1}{4 n^2}\left[\zeta\left(2, \frac{m+n-1}{2 n}\right)-\zeta\left(2, \frac{m+2 n-1}{2 n}\right)\right]} $$ Letting $x=t^{-\frac{1}{n}}$ transforms the integral into $$ I=-\frac{1}{n^2} \int_0^1 \frac{t^{\frac{m-1}{n}} \ln t}{1+t} d t= -\frac{1}{n^2}\left.\frac{\partial}{\partial b} J(b)\right|_{b=\frac{m-1}{n}}, $$ where $J(b)=\int_0^1 \frac{t^b}{1+t} d t.$ Using power series, we have $$ J(b) =\int_0^1 \frac{t^b}{1+t} d t=\sum_{k=0}^{\infty}(-1)^k \int_0^1 t^{b+k} d t=\sum_{k=0}^{\infty} \frac{(-1)^k}{b+k+1} $$ Differentiating both sides w.r.t. $b$ and putting $b=\frac{m-1}{n}$ gives $$ \begin{aligned} \int_1^{\infty} \frac{\ln x}{x^m\left(1+x^n\right)} d x & = \frac{1}{n^2}\sum_{k=0}^{\infty} \frac{(-1)^{k}}{\left(k+\frac{m+n-1}{n}\right)^2} \\ & =\frac{1}{4 n^2}\left[\zeta\left(2, \frac{m+n-1}{2 n}\right)-\zeta\left(2, \frac{m+2 n-1}{2 n}\right)\right], \end{aligned} $$ which matches exactly @Claude's answer $$\color{blue}{J=\frac 1{4n^2}\left(\psi ^{(1)}\left(\frac{m+n-1}{2 n}\right)-\psi ^{(1)}\left(\frac{m+2 n-1}{2 n}\right)\right)}$$
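A quick numerical check of the special case $m=n+1$ from the original question, where the result reduces to $\frac{12-\pi^2}{12n^2}$ (a sketch):

```python
from math import log, pi, inf
from scipy.integrate import quad

def I(m, n):
    """Numerically evaluate the integral from 1 to infinity."""
    val, _ = quad(lambda x: log(x) / (x**m * (1 + x**n)), 1, inf)
    return val
```

For $n=1,2,3$ the numeric values match $\frac{12-\pi^2}{12n^2}$ to quadrature precision.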
{ "language": "en", "url": "https://math.stackexchange.com/questions/4609287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Find all cubic polynomials $p$ and $q$ so that $p(0)=-24,q(0)=30, p(q(x)) = q(p(x))\,\forall x\in\mathbb{R}$. Find all cubic polynomials $p$ and $q$ so that $p(0)=-24,q(0)=30, p(q(x)) = q(p(x))\,\forall x\in\mathbb{R}$. Write $p(x) = p_3 x^3 + p_2x^2 + p_1x+p_0, q(x) = q_3x^3 + q_2 x^2 + q_1x+q_0,$ where we know that $p_0=-24, q_0=30$. Then $p(q(x)) = q(p(x))\Rightarrow p_3 q(x)^3 + p_2 q(x)^2 + p_1q(x)+p_0 = q_3p(x)^3 + q_2p(x)^2 + q_1p(x)+q_0,$ which seems like a rather complex expression. Comparing leading coefficients, we see that $q_3 p_3^3 = p_3 q_3^3\Rightarrow p_3^2 = q_3^2,$ so $p_3=\pm q_3$. It could be useful to find the roots of p and/or q and/or obtain some useful bounds on them. For instance, if x is a root of p, then by the triangle inequality, $|p_3 x^3|\leq \sum_{i=0}^2 |p_i x^i|.$ I also know the more general fact that for complex numbers x and y, $|x+y|\leq |x|+|y|$ with equality if and only if $x$ and $y$ are collinear and are in the same direction, or at least one of x and y is zero. We know that $p(30) = q(-24)$. I know that if I let $h(x) = x+b$ for some real number b, then $h^{-1}(x)=x-b$ and that if $f$ and $g$ are commuting cubic polynomials, then $h\circ f \circ h^{-1}$ and $h\circ g\circ h^{-1}$ commute as well. Clearly the polynomials $f(x)=ax^3$ and $g(x)=-ax^3$ commute for any nonzero real constant $a$. We have $h\circ f\circ h^{-1} = a(x-b)^3 + b, h\circ g\circ h^{-1} = -a(x-b)^3+b.$ Then note that using this new form, we can obtain commuting polynomials $p$ and $q$ satisfying $p(0)=-24,q(0)=30$. Indeed, we just need to choose a,b so that $-ab^3+b=-24,ab^3+b=30.$ We have $2ab^3=54\Rightarrow ab^3=27, 2b=6\Rightarrow b = 3.$ So $a=1$.
A direct calculation as follows may suffice. Put $p(x) = p_3 x^3 + p_2 x^2 + p_1 x+p_0, q(x) = q_3 x^3 + q_2 x^2 + q_1 x+q_0$, where I assume $p_i, q_i$ are real numbers with $p_3 q_3\neq 0$. Looking at the coefficients of terms of $x^9, x^8, x^7$ in the equation $p(q(x))=q(p(x))$, we have $$\begin{split} & p_3q_3^3=q_3p_3^3\\ & p_3\cdot 3q_3^2q_2=q_3\cdot 3p_3^2p_2\\ & p_3(3q_3^2q_1+3q_3q_2^2)=q_3(3p_3^2p_1+3p_3p_2^2). \end{split}$$ Now we have two cases from the 1st equation: (1) $p_3=q_3$, or (2) $p_3=-q_3$. In case (1), $p_2=q_2$ and $p_1=q_1$ follows from the remaining equations. We see that $p(x)-q(x)$ is a constant which equals $p(0)-q(0)=-54$, so $p(x)=q(x)-54$. Substituting this into $p(q(x))=q(p(x))$, we get $$q(q(x))-54=q(q(x)-54)\quad (\forall x\in \mathbb{R}).$$ Next we use the fact that the function $\mathbb{R}\rightarrow \mathbb{R},\ x\mapsto q(x)$ is surjective since $q(x)$ is a cubic polynomial, by the intermediate value theorem. Then, the above is equivalent to that $$q(x)-54=q(x-54)\quad (\forall x\in \mathbb{R}).$$ This is impossible, however: if we differentiate twice, we get $q''(x)=q''(x-54)$, which forces $q_3=0$, a contradiction. Therefore, we proceed to case (2). Here, $p_2=-q_2,\ p_1=-q_1$ follows from the rest of equations. Therefore, $p(x)+q(x)$ is a constant which equals $p(0)+q(0)=6$. The same argument as in (1) now shows $-q(q(x))+6=q(-q(x)+6)$, which is equivalent to $$-q(x)+6=q(-x+6) \quad (\forall x\in \mathbb{R}).$$ We put $r(x):=q(x+3)$. Then, $r(x)$ is a cubic polynomial with $r(x)+r(-x)=6$. Therefore, $r(x)=ax^3+bx+3$ for some $a\neq 0$ and $q(x)=a(x-3)^3+b(x-3)+3$. Finally, $q(0)=30$ forces $b=-9(a+1)$, so $$q(x)=a(x-3)^3-9(a+1)(x-3)+3,\quad p(x)=-a(x-3)^3+9(a+1)(x-3)+3\quad (a\neq 0).$$ That these functions satisfy the conditions is clear from the argument.
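As a quick check of the family found above, here is a short Python sketch using exact arithmetic. Since $p(q(x))-q(p(x))$ is a polynomial of degree at most $9$, vanishing at ten distinct points forces it to vanish identically, so testing ten integer inputs suffices.

```python
from fractions import Fraction

def make_pair(a):
    # the family from the answer: q(x) = a(x-3)^3 - 9(a+1)(x-3) + 3, p = 6 - q
    def q(x):
        return a * (x - 3) ** 3 - 9 * (a + 1) * (x - 3) + 3
    def p(x):
        return -a * (x - 3) ** 3 + 9 * (a + 1) * (x - 3) + 3
    return p, q

for a in (1, 2, -5, Fraction(-1, 2)):
    p, q = make_pair(a)
    assert p(0) == -24 and q(0) == 30
    # degree-9 identity holding at 10 points holds identically
    assert all(p(q(x)) == q(p(x)) for x in range(-4, 6))
print("all checks passed")
```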
{ "language": "en", "url": "https://math.stackexchange.com/questions/4609441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving discontinuity at $\,x =1\,$ using $\,\varepsilon$-$\delta\,$ definition I was doing a proof for discontinuity using $\,\varepsilon$-$\delta\,$ definition but I’m not sure whether the proof is right. Would you mind checking it for me please, thanks! $g(x)=\begin{cases}\dfrac{1}{1-x}\;,\quad& x\neq1\\0\;,&x=1\end{cases}$ For the proof, I have chosen $\,\varepsilon = 1$. And said for $x\in [0,2]\cap(1-\delta, 1+\delta)\,$ where $\,x\neq1$ $|x-1|<\delta\,$ but $\,|g(x)-g(1)|\geqslant1\,,$ Is this right? Thanks!
The function g(x) is continuous at $x=x_0$ if $$\forall \epsilon >0, \exists \delta >0:|x-x_0|<\delta\Rightarrow |g(x)-g(x_0)|<\epsilon$$ The function g(x) is discontinuous at $x=x_0$ if $$\exists \epsilon >0\,,\forall \delta >0\,,\exists x\in(x_0-\delta,x_0+\delta) :|g(x)-g(x_0)|\ge \epsilon\,.$$ In this case $x_0=1; g(x_0)=0\,.$ Let's take $\epsilon=1$. Observe that, for $0<|x-1|<1$, $$ \frac{1}{|x-1|}>\epsilon,\quad\text{ or equivalently, }\quad \left|\frac{1}{x-1}-0\right| \gt \epsilon\,. $$ That is: $$ |g(x)-g(1)| \gt \epsilon\,. $$ This shows that for $\delta=1$ every $x\in(1-\delta,1)\cup(1,1+\delta)=(0,1)\cup(1,2)$ satisfies $$\tag{1} |g(x)-g(1)|>\epsilon. $$ Clearly, for any other $\delta>0$ we can find at least one $x\in(1-\delta,1)\cup(1,1+\delta)$ that satisfies (1) as well because the intersection of this set with the previous one, $(0,1)\cup(1,2)\,,$ is not empty. This finishes the proof that $g(x)$ is discontinuous at $x=1\,.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4609613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Compare difference of probability between two Poisson distributions, evaluate at certain point. $X,Y$ are two independent random variables such that $X\sim \mathop{\mathrm{Po}}(\lambda_1), Y\sim \mathop{\mathrm{Po}}(\lambda_2)$, where $\lambda_2>\lambda_1$. Is there a conclusion about the size relationship between $\mathop{\mathrm{Po}}(\lambda_2)\{k\}$ and $\mathop{\mathrm{Po}}(\lambda_1)\{k\}$, if $k>\lambda_2$? Simulations show that $\mathop{\mathrm{Po}}(\lambda_2)\{k\}>\mathop{\mathrm{Po}}(\lambda_1)\{k\}$. Is there proof, or what other conditions should I add?
Let's show the following equivalent statement, according to Andrew: For $\lambda_3>\lambda_2>\lambda_1$, we have $\lambda_3\log(\frac{\lambda_2}{\lambda_1})>\lambda_2-\lambda_1$. Proof: Define $\delta=\lambda_2/\lambda_1>1$. Then $\lambda_3\log(\delta)>\lambda_2\log(\delta)=\lambda_1\delta\log(\delta)$, so it suffices to show $\lambda_1\delta\log(\delta)>(\delta-1)\lambda_1$, which is equivalent to showing $\delta\log(\delta)>\delta-1$. Let $f(x)=x\log(x)-x+1,\ x>1$. Then $$f^{\prime}(x)=\log(x)+1-1=\log(x)>0\,,$$ which implies $f(\delta)>f(1)=0$. #
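Translating back to the original question: for $k>\lambda_2>\lambda_1$, the lemma with $\lambda_3=k$ gives $k\log(\lambda_2/\lambda_1)>\lambda_2-\lambda_1$, which is exactly the statement that the pmf ratio $(\lambda_2/\lambda_1)^k e^{\lambda_1-\lambda_2}$ exceeds $1$. A quick randomized Python check (my own sketch, not part of the proof):

```python
import math
import random

def pois_pmf(k, lam):
    # Poisson pmf: lam^k * e^(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

random.seed(42)
for _ in range(1000):
    lam1 = random.uniform(0.1, 10.0)
    lam2 = lam1 + random.uniform(0.05, 2.0)        # ensures lam2 > lam1
    k = math.floor(lam2) + 1 + random.randrange(10)  # ensures k > lam2
    assert pois_pmf(k, lam2) > pois_pmf(k, lam1)
print("Po(lam2){k} > Po(lam1){k} held in all sampled cases")
```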
{ "language": "en", "url": "https://math.stackexchange.com/questions/4609772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Spaces for which all transpositions are homeomorphisms The discrete, indiscrete, cofinite, cocountable, co-(any downwards closed collection of cardinals) topological spaces are part of the unique class of spaces such that all permutations of the space (bijections from the space to itself) are autohomeomorphisms. This is because their topologies may be defined purely in terms of cardinality, so everything is bijection-invariant. If we relax the requirement to just saying that all transpositions (i.e. permutations that just swap two elements) are homeomorphisms, do we get a broader class of spaces, or is this property equivalent to bijection-invariance?
Let's call a topological space $X$ [weakly] anti-rigid, if every permutation [transposition] is a homeomorphism. And filter like, if whenever $U$ is a non-empty open subset of $X$, and $U \subset V \subset X$, then also $V$ is open. Then we have: $X$ weakly anti-rigid, iff $X$ is filter like and ($X$ is T1 or $X$ is indiscrete). Proof. $"\Rightarrow"$: If $X$ contains an isolated point, then $X$ is discrete, hence filter like and T1. So let's assume that $X$ contains no isolated points, and $|X| \ge 2$. Let $\emptyset \neq U$ be open, $y \in X$. Case 1: $y \in U$: then $U \cup \{y\} = U$ is open. Case 2: $y \notin U$. Choose an $x \in U$. Then the transposition $f: X \rightarrow X$ swapping $x$ and $y$ shows that $(U \setminus \{x\}) \cup \{y\} = f(U)$ is open. Hence $U \cup \{y\} = U \cup ((U \setminus \{x\}) \cup \{y\})$ is open. Thus, any subset $V$ such that $U \subset V$ is open. Now assume $X$ is not T1: then the intersection of all non-empty open sets is non-empty, hence equals $X$, hence X is indiscrete. $"\Leftarrow"$: W.l.o.g. $X$ T1. Let $x, y \in X$, $f: X \rightarrow X$ be the transposition swapping $x$ and $y$. We have to show that $f$ is continuous: Let $U$ be open in $X$. Case 1: $x, y \in U$ or $x, y \notin U$: Then $f^{-1} (U) = U$. Case 2: $x \in U, y \notin U$: Then, by T1, $U \setminus \{x\}$ is open, hence $f^{-1}(U) = (U \setminus \{x\}) \cup \{y\}$ is open. Case 3: $x \notin U, y \in U$: analogous. Now let $\mathfrak{F}$ be a free ultrafilter on $\mathbb N$. Then $\mathfrak{F} \cup \{\emptyset \}$ is filter like topology on $\mathbb N$ and it is $T1$, hence weakly anti-rigid, but not anti-rigid: Let $\mathbb N = A \cup B, A \cap B = \emptyset$, $A, B$ be infinite. W.l.o.g. $A \in \mathfrak{F}$. There is a bijective map $f: \mathbb N \rightarrow \mathbb N$, such that $f(B) = A$. Hence, $f$ is not continuous. Remark: it is easy to see that weakly anti-rigid Hausdorff spaces are discrete.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4609944", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Prove that element is a square Trying to post this question again that got downvoted. Suppose that $x^5 + ax + b \in \mathbb{F}_p[x]$ is irreducible over $\mathbb{F}_p$. Is it true that $25b^4 + 16a^5$ is a square in $\mathbb{F}^{\times}_p$? My idea: the Galois group of $f$ is a subgroup of $A_n$ iff its discriminant is a square. The discriminant of $f$ is $256a^5 + 3125b^4$ and while this kind of looks like $25b^4 + 16a^5$, its not quite it. So not sure if I even should try to prove the galois group is a subgroup of $A_n$. Any other ideas?
Here's a slightly simpler example, in my opinion. Let $p=11$ and $f(x)=x^5+x+3$. We have $25\cdot3^4+16\equiv6\pmod{11}$ and $6$ is not a square modulo $11$. Check the irreducibility of $f(x)$. That this polynomial has no roots in $\mathbb{Z}_{11}$ can be checked using Horner's method; this will require $10$ lines of computation. Let us check that it has no divisors of degree two. If $$ f(x)=(x^2+\alpha x+\beta)(x^3+\alpha'x^2+\beta'x+\gamma'), $$ then $\alpha'=-\alpha$, $\beta'=\alpha^2-\beta$, $\gamma'=3/\beta$ and \begin{align*} 2\alpha\beta^2-\alpha^3\beta-3& =0, \\ \alpha^2\beta^2+3\alpha & =\beta^3+\beta. \end{align*} To prove that this system has no solutions over $\mathbb{Z}_{11}$ we can do the following. Assume alternately $\alpha=0,1,\ldots,10$; then the first equation is a quadratic equation with respect to $\beta$. Calculate $\beta$ and substitute it into the second equation. All of these calculations can be done manually, but you can use a computer. I have done all these calculations; I leave it to the author of the question to do them again. Note. For $p=3,5,7$ there are no such polynomials; for $p=11,13,17,19$ there are $4,16,24,34$, respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4610070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Change signs of sequence elements such that sum of elements converges Suppose there is a sequence $(a_i)_{i \ge 1}$ such that the infinite sum of the sequence diverges, $|a_i|\le1$ for every element, and $a_n \to 0$. Is it possible, for any such sequence, to change the sign of each element in a way that their sum converges? Also, is it possible to make it converge to zero, or to make it bounded? I thought about the harmonic series, and for it the answer is positive (the signs can be chosen so that it converges), however I am not sure if this is correct in general. I would really appreciate it if you could help me.
If $(a_n)$ is a sequence of real numbers such that $a_n \to 0$ and $\sum_{n=1}^\infty a_n$ diverges then there is a sequence of “signs” $(s_n)$ such that $s_n = \pm 1$ for all $n$ and $\sum_{n=1}^\infty s_n a_n = 0$. Here is a possible construction per induction. Without loss of generality we can assume that all $a_n \ge 0$. We start by setting $s_1 = +1$. If $s_1, \ldots,s_n$ are already defined then we set $$ s_{n+1} = \begin{cases} -1 & \text{ if } \sum_{k=1}^n s_k a_k \ge 0 \\ +1 & \text{ if } \sum_{k=1}^n s_k a_k < 0 \, . \end{cases} $$ Now we show that $\sum_{n=1}^\infty s_n a_n = 0$. Let $\epsilon > 0$. Since $a_n \to 0$ there is an index $N$ such that $0 \le a_n < \epsilon $ for $n \ge N$. Set $S = \sum_{k=1}^N s_k a_k$. The following partial sums decrease (if $S \ge 0$) or increase (if $S < 0$) until they reach a value in the range $[-\epsilon, \epsilon]$ and then stay in that range. (They must reach that range: as long as the sums stay above $\epsilon$ or below $-\epsilon$, they move monotonically by steps $a_k$, and since $\sum a_k$ diverges they cannot do so forever; once inside, each step has size less than $\epsilon$ and points toward $0$, so the sums stay inside.) So there is an $M > N$ such that $-\epsilon \le \sum_{k=1}^n s_k a_k \le \epsilon$ for all $n \ge M$. Such an index $M$ exists for all $\epsilon > 0$, and that proves the converges of $\sum_{k=1}^n s_k a_k$ to zero. In the same way one can show that for every $L \in \Bbb R$ there is a choice of signs $(s_n)$ such that $\sum_{n=1}^\infty s_n a_n = L$.
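The construction in the proof is effective, so it can be watched in action. A small Python sketch applies the greedy sign rule to the harmonic terms $a_n=1/n$; for decreasing terms like these the running sum even stays within the size of the last term used (up to floating-point roundoff).

```python
def greedy_signs(terms):
    # rule from the proof: subtract when the running sum is >= 0, add otherwise
    running, signs = 0.0, []
    for a in terms:
        s = -1 if running >= 0 else 1
        signs.append(s)
        running += s * a
    return signs, running

n = 200_000
terms = [1.0 / k for k in range(1, n + 1)]   # a_k -> 0 but sum a_k diverges
signs, total = greedy_signs(terms)
print(abs(total))                            # below 1/n = 5e-6 here
```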
{ "language": "en", "url": "https://math.stackexchange.com/questions/4610221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $a(n)$ is not prime for all $n\ge 8$ Let $a(n)$ be the number formed by concatenating $n+1$ nines, $n$ zeroes and a one. I noticed the following pattern for the first 9 terms: \begin{array}{|c|c|c|} \hline n & a(n)& \text{Is prime} \\ \hline 0& 91 & \color{red}{\text{False}} \\ \hline 1& 9901 & \color{blue}{\text{True}} \\ \hline 2& 999001 & \color{red}{\text{False}} \\ \hline 3& 99990001 & \color{blue}{\text{True}} \\ \hline 4& 9999900001 & \color{red}{\text{False}} \\ \hline 5& 999999000001 & \color{blue}{\text{True}} \\ \hline 6& 99999990000001 & \color{red}{\text{False}} \\ \hline 7& 9999999900000001 & \color{blue}{\text{True}} \\ \hline 8& 999999999000000001 & \color{red}{\text{False}} \\ \hline \end{array} Interestingly, the pattern fails from $n=9$ onward, and it seems that all subsequent terms are not prime (checked up to $n=1000$ with Python). So how would one prove that for $n\ge8$ all terms are not prime?
The smallest such number for which factordb does not know a prime factor is here. According to my calculations with PFGW there is no further prime of the form $$(10^n-1)\cdot 10^n+1$$ up to $n=13\ 000$, hence another prime of this form must have at least $26\ 000$ digits. Small forced factors apparently do not exist, and I think there are neither algebraic nor aurifeuillean factors. In this case, there are probably infinitely many such primes, but even finding the next one could be an enormous task.
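The question's table, and the breakdown of the pattern for small $n$, can be reproduced quickly. Here is a Python sketch of mine using Miller–Rabin; the twelve bases below are known to give a deterministic test for numbers under $3.3\times10^{24}$, which covers the table, and a composite verdict is definitive at any size.

```python
def is_prime(n, bases=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)):
    # Miller-Rabin; deterministic for n < 3.3e24 with these bases
    if n < 2:
        return False
    for b in bases:
        if n % b == 0:
            return n == b
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for b in bases:
        x = pow(b, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def a(n):
    # n+1 nines, n zeroes, then a one, as in the question
    return int("9" * (n + 1) + "0" * n + "1")

print([is_prime(a(k)) for k in range(9)])        # the alternating table
print(any(is_prime(a(k)) for k in range(9, 41))) # False: the pattern is broken
```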
{ "language": "en", "url": "https://math.stackexchange.com/questions/4610463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that there is no right triangle whose legs are rational numbers and whose hypotenuse is $\sqrt{2022}$. Show that there is no right triangle whose legs are rational numbers and whose hypotenuse is $\sqrt{2022}$. My tries: * *I used Pythagoras' Theorem to get: $$\sqrt{2022}^2=a^2+b^2 \implies a^2+b^2 = 2022$$ where $a$ and $b$ are the legs of the triangle. I don't know what to do next: Is there another formula I could use? I know that $a+b>\sqrt{2022}$ but I don't think this is going to help us much. hope one of you can help me! thank you!
Quickly ruling out $\ a\ $ being even (then $b$ is even too, so $4 \mid a^2+b^2$, impossible since $2022\equiv2\pmod 4$), we try $\ a\ $ and $\ b\ $ both being odd: $$ (2k_1+1)^2 + (2k_2+1)^2 = 2022\quad k_1,k_2\in\mathbb{Z}$$ $$ \implies 4({k_1}^2 + {k_2}^2 + k_1 + k_2) + 2 = 2022 $$ $$ \implies {k_1}^2 + k_1 + {k_2}^2 + k_2 = 505, $$ which is impossible, since $\ {k_i}^2 + k_i\ $ is even for $\ i=1,2.$ This settles the integer case. For rational legs, write $a=p/q$ and $b=r/q$ over a common denominator, so that $p^2+r^2=2022q^2$. If $q$ is even then $4\mid 2022q^2$, which forces $p$ and $r$ both even, and a factor of $4$ cancels from both sides; repeating, we may assume $q$ is odd. Then $q^2\equiv1\pmod 8$, so $p^2+r^2\equiv2022\equiv6\pmod 8$; but squares are $0,1,4\pmod 8$, so a sum of two squares is $0,1,2,4$ or $5\pmod 8$, never $6$.
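A brute-force search agrees with the parity argument. Since rational legs $a=p/q,\ b=r/q$ over a common denominator would give integers with $p^2+r^2=2022q^2$, the sketch below (my own) searches that equation as well.

```python
from math import isqrt

def is_square(m):
    r = isqrt(m)
    return r * r == m

# integer case: a^2 + b^2 = 2022 has no solutions
assert not any(is_square(2022 - a * a) for a in range(isqrt(2022) + 1))

# cleared-denominator case: p^2 + r^2 = 2022 q^2 has no solutions for small q
for q in range(1, 51):
    target = 2022 * q * q
    assert not any(is_square(target - p * p) for p in range(isqrt(target) + 1))
print("no solutions found up to q = 50")
```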
{ "language": "en", "url": "https://math.stackexchange.com/questions/4610611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
find the Galois group of $x^4-3$ over $\mathbb{Q}$ and show that is isomorphic to $D_4$ I have to determine the Galois group of $x^{4}-3$. If I have not made any mistakes, then the Galois group is: $$Gal(\mathbb{Q}(\sqrt[4]{3},i)/\mathbb{Q})=\{ \sigma_{mn}\in Aut(\mathbb{Q}(\sqrt[4]{3},i)/\mathbb{Q})\mid \sigma_{mn}(i) = i^m, m=\{1,3\}, \sigma_{mn}(\sqrt[4]3)=i^n \sqrt[4]{3}, n=\{0,1,2,3\} \}$$ I would like to calculate all elements of the Galois group in the following table: I think I have made a mistake. Because the group $D_4$ has the following elements $D_4 = \{id,(1234),(13)(24),(1432),(24),(13),(12)(34),(14)(23)\}$. When I compare this with the last line, the permutations are not the same. So I have a mistake there. Does anyone see my mistake? Thanks a lot.
I think it will be easier to construct $D_4$ by thinking about generators rather than all elements together. It is known that $D_4$, as the symmetry group of the square, is generated by a flip and a $90^\circ$ rotation. Do we have something like that here? Yes, we do. For the flip, the most likely candidate is the complex conjugate. For a rotation, we would like $i^n\sqrt[4]3\mapsto i^{n+1}\sqrt[4]3$, and there is an element in the Galois group which does this (you have called it $\sigma_{11}$). We see that these two elements together act on the square defined by the four roots of the polynomial exactly the way we would like $D_4$ to act. Together with an argument for why the Galois group can't have more than 8 elements, and we are done. If we want to follow your approach to the end, though, we can do that too. But we need to be careful to not make any mistakes the way you seem to have done (for instance, your $\sigma_{11}$ is tagged as $(1432)$ but really it's $(1324)$ and your $\sigma_{13}$ really corresponds to $(1423)$). Also, ordering the roots along the square rather than in diagonal pairs will help the correspondence with your standard $D_4$ presentation: $$ \begin{array}{c|c|c|c|c|c|c|c|c|c|} &\text{root}&\sigma_{10}&\sigma_{11}&\sigma_{12}&\sigma_{13}&\sigma_{30}&\sigma_{31}&\sigma_{32}&\sigma_{33}\\\hline 1 & \sqrt[4]3& \sqrt[4]3& i\sqrt[4]3& -\sqrt[4]3& -i\sqrt[4]3& \sqrt[4]3& i\sqrt[4]3& -\sqrt[4]3& -i\sqrt[4]3\\\hline 2 & i\sqrt[4]3& i\sqrt[4]3& -\sqrt[4]3& -i\sqrt[4]3& \sqrt[4]3& -i\sqrt[4]3& \sqrt[4]3& i\sqrt[4]3& -\sqrt[4]3\\\hline 3 & -\sqrt[4]3& -\sqrt[4]3& -i\sqrt[4]3& \sqrt[4]3& i\sqrt[4]3& -\sqrt[4]3& -i\sqrt[4]3& \sqrt[4]3& i\sqrt[4]3\\\hline 4 & -i\sqrt[4]3& -i\sqrt[4]3& \sqrt[4]3& i\sqrt[4]3& -\sqrt[4]3& i\sqrt[4]3& -\sqrt[4]3& -i\sqrt[4]3& \sqrt[4]3\\\hline \text{permutation} && \operatorname{id}&(1234)&(13)(24)&(1432)&(24)&(12)(34)&(13)&(14)(23)\end{array} $$
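The dihedral relations can also be checked mechanically. Indexing the roots as $i^k\sqrt[4]3$ for $k=0,1,2,3$, the automorphism $\sigma_{mn}$ sends root $k$ to root $mk+n \bmod 4$; a short Python sketch of mine then verifies that the eight maps form a dihedral group of order $8$.

```python
from itertools import product

def sigma(m, n):
    # sigma_{mn}(i^k * 3^(1/4)) = i^(mk+n) * 3^(1/4), as a permutation of 0..3
    return tuple((m * k + n) % 4 for k in range(4))

G = {sigma(m, n) for m, n in product((1, 3), (0, 1, 2, 3))}
assert len(G) == 8

def compose(f, g):
    # (f o g)(k) = f(g(k))
    return tuple(f[g[k]] for k in range(4))

r = sigma(1, 1)   # the 90-degree rotation sigma_11
s = sigma(3, 0)   # complex conjugation sigma_30
e = sigma(1, 0)   # the identity
assert compose(r, compose(r, compose(r, r))) == e              # r^4 = id
assert compose(s, s) == e                                      # s^2 = id
assert compose(s, compose(r, s)) == compose(r, compose(r, r))  # s r s = r^(-1)
print("the eight maps satisfy the D_4 presentation")
```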
{ "language": "en", "url": "https://math.stackexchange.com/questions/4610722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How do I prove this statement about the cardinalities of finite sets without using the inclusion-exclusion principle? If we were allowed to use the inclusion-exclusion principle:$|A \cup B|=|A|+|B|-|A \cap B|$ then the question statement explicitly states that the two sets are disjoint, so this proof would be trivial. Let $A$ and $B$ be finite sets. Assuming for this part that $A$ and $B$ are disjoint, and adopting the recursive definition of cardinality: (the empty set $\varnothing$ is finite with $|\varnothing|=0$. A set $S$ is finite with $|S|=n+1$, if there exists $s \in S$ such that $|S \backslash\{s\}|=n$ for some $n \in \mathbb{N}$. We call $|S|$ the cardinality of $S$), use induction on $|B|$ to show that $A \cup B$ is finite and that $$ |A \cup B|=|A|+|B| \text {. } $$
Try an induction on $|B|$. More formally, let $P(n)$ be the statement “For all $B$ disjoint from $A$ such that $|B| = n$, $|A \cup B| = |A| + |B|$”. You should find that the inductive step and the base case follow from the two parts of the recursive definition of cardinality. Edit: the base case $B = \emptyset$ is trivial. Now suppose $|B| = n + 1$. Then take $b \in B$ such that $|B \setminus \{b\}| = n$. Then we have $|A \cup (B \setminus \{b\})| = |A| + |B \setminus \{b\}|$ by the inductive hypothesis. And $A \cup (B \setminus \{b\}) = (A \cup B) \setminus \{b\}$. Thus, $|A \cup B| = |A| + |B \setminus \{b\}| + 1 = |A| + |B|$, as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4611000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability that one person is selected and other is not selected in a group There are $30000$ students, out of which only $1000$ students are selected. There are two students $A$ and $B$, what is the probability that $A$ is selected and $B$ is not selected? The solution is given as $\frac{1000}{30000}* \frac{29000}{30000}$, but this does not seems correct to me. The way I look at this problem, the sample space has $_{30000}C_{1000}$ elements. The event, since we need all the sets from $SS$ that has $A$ but not $B$, we have $_{29998}C_{999}$ Since we fix $A$ in the set and exclude $B$ Both give very different results, request your help in understanding which one is correct and why, Thanks.
Your answer is correct, and theirs is very slightly incorrect. Letting $A$ represent the event where student A is selected and $B$ represent the event where student B is not selected, their solution is based on the logic that $$P(A \cap B) = P(A)P(B)$$ which is the multiplication rule for independent events. However, whenever you sample a population without replacement, the selections are dependent events, so the proper logic would actually be to use the general $$P(A \cap B) = P(A)P(B \ \vert\ A)$$ which follows from the definition of conditional probability. In our case, $P(B \ \vert \ A)$ is the probability of $B$ not being selected given that $A$ was already selected, which ends up being $\dfrac{29000}{29999},$ essentially because we need to subtract $A$ from the $30000$ students originally under consideration, and the numerator stays the same because exactly the same number still need to be passed over. So the correct answer would be $\dfrac{1000 \cdot 29000}{30000 \cdot 29999},$ which ends up being the same as your answer: $$\frac{29998\choose999}{30000\choose1000} = \frac{\frac{29998!}{999!28999!}}{\frac{30000!}{1000!29000!}} = \frac{1000!}{999!} \cdot \frac{29000!}{28999!} \cdot\frac{29998!}{30000!} = \frac{1000 \cdot 29000}{30000 \cdot 29999}$$
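The counts can be compared exactly with Python's integer arithmetic: the counting argument from the question, the corrected sequential product, and the (slightly off) independent product quoted in the question all fit in a few lines.

```python
from fractions import Fraction
from math import comb

total, chosen = 30000, 1000

counting = Fraction(comb(total - 2, chosen - 1), comb(total, chosen))
sequential = Fraction(chosen, total) * Fraction(total - chosen, total - 1)
independent = Fraction(chosen, total) * Fraction(total - chosen, total)

print(counting == sequential)    # True: the two correct approaches agree exactly
print(counting == independent)   # False: the quoted product is slightly off
print(float(counting), float(independent))
```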
{ "language": "en", "url": "https://math.stackexchange.com/questions/4611239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Petrov (1995), Theorem 6.7 Does someone know the proof of the following statement: Given a sequence of independent random variables $(Y_n)_{n=1}^{\infty}$ and a sequence of positive constants $(a_n)_{n=1}^{\infty}$ such that $a_n\to\infty$ and $$\sum_{n=1}^{\infty}\frac{\text{var}(Y_n)}{a_n^2}<\infty$$ then $$\frac{1}{a_n}\left(\sum_{i=1}^{n}Y_i - \mathbb{E}\left[\sum_{i=1}^{n}Y_i\right]\right)\rightarrow 0\text{ almost surely}$$ I have come up with the following: by the triangular inequality, $$\left|\frac{1}{a_n}\sum_{i=1}^{n}Y_i-\mathbb{E}[Y_i]\right|\le\frac{1}{a_n}\sum_{i=1}^{n}\left| Y_i-\mathbb{E}[Y_i]\right|$$ so that $$\mathbb{P}\left(\frac{1}{a_n}\sum_{i=1}^{n}\left| Y_i-\mathbb{E}[Y_i]\right|<\varepsilon\right)\le\mathbb{P}\left(\left|\frac{1}{a_n}\sum_{i=1}^{n}Y_i-\mathbb{E}[Y_i]\right|<\varepsilon\right)$$ now, for $1\le i\le n$, let $S_i^n$ be the event $$|Y_i-\mathbb{E}[Y_i]|<\frac{a_n\varepsilon}{n}$$ so that $$\mathbb{P}(S_1^n\wedge S_2^n\wedge\dots\wedge S_n^n)\le \mathbb{P}\left(\frac{1}{a_n}\sum_{i=1}^{n}\left| Y_i-\mathbb{E}[Y_i]\right|<\varepsilon\right)$$ But $X_i = |Y_i-\mathbb{E}[Y_i]|$ are indepentent random variables because the $Y_i$ are independent, so $$\mathbb{P}(S_1^n\wedge S_2^n\wedge\dots\wedge S_n^n)=\prod_{i=1}^{n}\mathbb{P}(S_i^n)=$$ $$=\prod_{i=1}^{n}\mathbb{P}\left(|Y_i-\mathbb{E}[Y_i]|<\frac{a_n\varepsilon}{n}\right)\ge\prod_{i=1}^{n}\left(1-\frac{n\sigma_i}{a_n\varepsilon}\right)$$ where $\sigma_i$ is the standard deviation of $Y_i$ and the last inequality follows from Chebyshev's inequality. This doesn't work, but I suspect choosing the right $\varepsilon$ and bounds for $S_i^n$ will work. Any clue how? Also, are the $X_i$ really independent? How can this be proven? Thanks in advance!
As @Botnakov N. commented, this is Kolmogorov's SLLN. The proof is rather complicated and needs some preparation work. Hint: Suppose $X_1,X_2,\dots$ are independent r.v.s with finite expectation. Prove that if $\sum_i \operatorname{Var}(X_i)<\infty$, then $\sum_i [X_i-E(X_i)]$ converges a.s. (The proof is also hard and needs some lemmas.) Then the assumption of the theorem would imply $$ \sum_i \frac{Y_i-E(Y_i)}{a_i} $$ converges a.s. The conclusion thus holds by a result in elementary calculus known as Kronecker's Lemma. You would probably need a book to check the proofs there.
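The conclusion of the theorem is easy to watch numerically. A minimal simulation sketch with $Y_i$ uniform on $[-1,1]$ and $a_n=n$, so that $\sum_n \operatorname{var}(Y_n)/a_n^2=\frac13\sum_n 1/n^2<\infty$ and the theorem applies (the seed and sizes are arbitrary choices of mine):

```python
import random

random.seed(0)

n_max = 100_000
running, snapshots = 0.0, {}
for i in range(1, n_max + 1):
    running += random.uniform(-1.0, 1.0)   # Y_i, already mean zero
    if i in (100, 10_000, 100_000):
        snapshots[i] = abs(running) / i    # |S_n - E S_n| / a_n
print(snapshots)
```

The normalized centered sums shrink on the order of $n^{-1/2}$, consistent with almost-sure convergence to $0$.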
{ "language": "en", "url": "https://math.stackexchange.com/questions/4611589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
SDE driven by Poisson Process Suppose that $(N_t)_{t\in \mathbb{R}^+}$ is a Poisson process with intensity $\lambda$>0 and that $a\in\mathbb{R}$ and $X$ being a stochastic process which solves the following SDE:$$dX_t=aX_t^-dN_t$$ Now I want to find an explicit representation for $X$ in terms of $X_0,a$ and $N_t$.Now I would like to try the change of variables $Z_t=log(aX_t)$ and use Ito's Lemma, however the material I have been given is very sparse and as far as I can see only applicable to SDEs driven by Brownian motions. So my question is if anyone could give a hint on how to solve this equation or any relevant material concerning similar SDEs.
We know $$\tag{1} \int_0^t aX_{s-}\,dN_s=\sum\limits_{0<s\leq t}aX_{s-}\Delta N_s\,. $$ The SDE $$\tag{2} dX_t=aX_{t-}\,dN_t $$ is nothing else than the integral equation $$\tag{3} X_t=X_0+\int_0^t aX_{s-}\,dN_s\,. $$ and the proposed solution of this SDE is $$\tag{4} X_t=X_0(1+a)^{N_t}\,. $$ Proof. The $X_t$ in (4) changes only by jumps of $N_t$ and only by the amount $$\tag{5} \Delta X_t=X_t-X_{t-}=X_0(1+a)^{N_{t-}+1}-X_{t-}=(1+a)X_{t-}-X_{t-}=aX_{t-}\,. $$ if and only if there is a jump of $N_t$ in $t\,.$ This can be written as $$ \Delta X_t=aX_{t-}\Delta N_t $$ which is the discrete version of the SDE (2). Due to the properties of $X_t$ (changes only by jumps) the integral equation (3) follows now from (1). $$\tag*{$\Box$} \quad $$
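The jump relation $\Delta X_t=aX_{t-}\Delta N_t$ makes the closed form easy to confirm by simulation. A small Python sketch of mine (exponential inter-arrival times generate the Poisson process; the parameter values are arbitrary):

```python
import random

random.seed(7)

lam, T, a, X0 = 2.0, 5.0, 0.5, 1.0
t, jumps, X = 0.0, 0, X0
while True:
    t += random.expovariate(lam)      # next jump time of N
    if t > T:
        break
    jumps += 1
    X += a * X                        # Delta X = a * X_{t-} at each jump
closed_form = X0 * (1 + a) ** jumps   # X_T = X_0 * (1 + a)^(N_T)
print(jumps, X, closed_form)
```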
{ "language": "en", "url": "https://math.stackexchange.com/questions/4611735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can $\frac{1}{1-x^2}$ be integrated by parts? I was watching a video of a professor who said that $\int \frac{1}{1-x^2} dx$ can be done using the method of integration by parts. Though he skipped the solution and simply gave the answer, I tried solving it myself but I'm having difficulty moving forward. What I did was: Let $u = \frac{1}{1-x^2}, du = \frac{2x}{(1-x^2)^2} dx , dv = dx, v=x$. Then, $$\int \frac{dx}{1-x^2} = \frac{x}{1-x^2} - \int \frac{2x^2}{(1-x^2)^2} dx$$ However, the new integral seems problematic. I tried doing the method again but the equation just ends up to be $0=0$. Is integration by parts not a viable way of solving this integral? P.S.: I already know the answer using a different method (partial fraction decomposition). I just want to know how to solve the integral in a different way.
Here is to integrate by parts, albeit unnaturally \begin{align} I=\int \frac{dx}{1-x^2} =\int\ln^{\frac12}\frac{1+x}{1-x}d\left(\ln^{\frac12}\frac{1+x}{1-x}\right) =\ln\frac{1+x}{1-x}-I=\frac12 \ln\frac{1+x}{1-x} \end{align}
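Whatever route one takes, the resulting antiderivative $\frac12\ln\frac{1+x}{1-x}$ is easy to sanity-check by differentiating numerically. A throwaway Python sketch, valid on $-1<x<1$:

```python
import math

F = lambda x: 0.5 * math.log((1 + x) / (1 - x))   # candidate antiderivative
f = lambda x: 1.0 / (1 - x * x)                   # integrand

h = 1e-6
for x in (-0.9, -0.3, 0.0, 0.4, 0.8):
    central = (F(x + h) - F(x - h)) / (2 * h)     # numerical derivative of F
    assert abs(central - f(x)) < 1e-5
print("F'(x) matches 1/(1 - x^2) on the sample points")
```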
{ "language": "en", "url": "https://math.stackexchange.com/questions/4612059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 2 }
Find the minimum of $\sqrt{\cos x+3}+\sqrt{2\sin x+7}$ without derivative How do we find the minimum of $$f(x)=\sqrt{\cos x+3}+\sqrt{2\sin x+7}$$ without using derivatives? This problem is probably related to circles of Apollonius. I have tried AM-GM and Cauchy-Schwarz inequality but I can't work it out. Anyway, I have solved it in a more geometric way. Here's my answer. Firstly we can do some identical transformation. $$f(x)=\dfrac{\sqrt{2}}{2}(\sqrt{(\cos x+1)^2+(\sin x)^2+4}+\sqrt{(\cos x)^2+(\sin x+2)^2+9})$$ So that it makes sense in geometry. $P(\cos x,\sin x)$ is on the circle $x^2+y^2=1$, and the value of $f(x)$ equals to sum of the distance from $A(0,-2)$ to $P$ and from $B(-1,0)$ to $P$. In other words: $$f(x)=\dfrac{\sqrt{2}}{2}(\sqrt{|PB|^2+4}+\sqrt{|PA|^2+9}).$$ And here we can use Minkowski inequality. $$f(x)\geq \dfrac{\sqrt{2}}{2} \sqrt{(|PA|+|PB|)^2+25}$$ When $P$,$A$,$B$ is collinear, $RHS$ gets the minimum. Meanwhile, $LHS = RHS$. Therefore, $f(x)_{min}=\sqrt{15}$.
Because $$\sqrt{\cos x+3}+\sqrt{2\sin x+7}=$$ $$=\sqrt{15}+\left(\sqrt{\cos x+3}-2\sqrt{\frac{3}{5}}+\frac{5\sqrt5}{24\sqrt3}\left(\cos^2x-\frac{9}{25}\right)\right)+$$ $$+\left(\sqrt{2\sin x+7}-3\sqrt{\frac{3}{5}}+\frac{5\sqrt5}{24\sqrt3}\left(\sin^2x-\frac{16}{25}\right)\right)\geq\sqrt{15}.$$ The equality occurs for $(\cos{x},\sin{x})=\left(-\frac{3}{5},-\frac{4}{5}\right).$
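The value $\sqrt{15}$ and the minimizing point $(\cos x,\sin x)=\left(-\frac35,-\frac45\right)$ are easy to confirm numerically; since $f$ is smooth, a dense grid suffices. A throwaway Python check:

```python
import math

def f(x):
    return math.sqrt(math.cos(x) + 3) + math.sqrt(2 * math.sin(x) + 7)

N = 100_000
grid_min = min(f(2 * math.pi * k / N) for k in range(N))
x_star = math.atan2(-4 / 5, -3 / 5)   # the angle with (cos x, sin x) = (-3/5, -4/5)
print(grid_min, f(x_star), math.sqrt(15))
```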
{ "language": "en", "url": "https://math.stackexchange.com/questions/4612214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 1 }
Is the mapping cylinder of a map between two locally compact Hausdorff spaces locally compact? In Spanier's book "Algebraic Topology" on page 97 Sec 2.8, he proves a theorem about construction of fibration from cofibration. THEOREM. Let $f:X' \rightarrow X$ be a cofibration, where $X'$ and $X$ are locally compact Hausdorff spaces, and let $Y$ be any space. Then the map $p: Y^X \rightarrow Y^{X'}$ defined by $p(g) = g \circ f$ is a fibration. In the proof he claims that: Because $X'$ and $X$ are locally compact Hausdorff spaces, so is $\bar{X}.$ Here $\bar{X}$ is the mapping cylinder of $f$ defined by the quotient space of $(X' \times [0,1]) \vee (X\times 0)$ under the equivalence $(x',0)\thicksim (f(x'),0)$. The claim above is clear because here the cofibration is actually a closed inclusion map. My question is: in general, is the mapping cylinder of a map between two locally compact Hausdorff spaces still locally compact?
No. For instance, consider the unique map $\mathbb{N}\to \{*\}$. Its mapping cylinder is the cone on $\mathbb{N}$, which is not locally compact at the cone point. (For instance, for sequence of numbers $\epsilon_n>0$, the sequence $(n,\epsilon_n)$ has no accumulation point, and every neighborhood of the cone point contains such a sequence.) A sufficient general condition for the mapping cylinder to be locally compact is that $f$ is proper. Indeed, if $f$ is proper, then given a point $(x,0)\in X\times\{0\}$, you can pick a compact neighborhood $K$ of $x$ and then $f^{-1}(K)\times[0,1]\cup K\times\{0\}$ gives a compact neighborhood of $(x,0)$ in the mapping cone.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4612324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
confusion about chain rule in linearity proof Suppose $v,p\in\mathbb{R}^n$, then $D_v|_p:C^\infty\to\mathbb{R}$ is defined by $$D_v|_p(f):=\frac{d}{dt}|_{t=0}f(p+tv).$$ Now I'm looking at a proposition that states that the map $v\mapsto D_v|_p$ is a linear isomorphism from $\mathbb{R}^n\to T_p\mathbb{R}^n$. I understand most of the proof, but I'm confused by the linearity part: $$\frac{d}{dt}|_{t=0}f(p+tv+\lambda tw) = \frac{d}{dt}|_{t=0}f(p+tv) + \lambda\cdot\frac{d}{dt}|_{t=0}f(p+tw)$$ for $v,w,p\in\mathbb{R}^n$ and $\lambda\in\mathbb{R}$. This equality should immediately follow from the chain rule, but I don't see how. Probably I'm just a bit confused by the notation of (directional) derivatives... but I already searched quite a bit and just can't see how the chain rule is used here. Can someone explain this?
We have $$ D_v|_p(f):=\left.\dfrac{d}{dt}\right|_{t=0}f(p+tv) $$ Set $h(t)=p+tv.$ Then by the chain rule $$ D_v|_p(f)=\left.\dfrac{d}{dt}\right|_{t=0}f(p+tv)=\left.\dfrac{d}{dt}\right|_{t=0}f(h(t))=f'(h(0))\cdot h'(0)=f'(p)\cdot v $$ Here $f'(p)$ is the derivative of $f$ at the point $p$ (= gradient = Jacobi matrix = linear approximation), and $f'(p)\cdot v$ reads "the derivative of $f$ at $p$ applied to the direction $v$"; the key point is that $v\mapsto f'(p)\cdot v$ is linear. Next, set $g(t)=p+tv+\lambda tw,$ so that $g(0)=p$ and $g'(0)=v+\lambda w.$ Then \begin{align*} D_{v+\lambda w}|_p(f)&= \left. \frac{d}{dt}\right|_{t=0}f(p+t(v+\lambda w)) \\&=\left. \frac{d}{dt}\right|_{t=0}f(g(t)) \\ &=f'(g(0))\cdot g'(0)\\ &=f'(p)\cdot (v+\lambda w)\\ &=f'(p)\cdot v +\lambda \cdot f'(p)\cdot w\\ &=\left. \dfrac{d}{dt}\right|_{t=0}f(p+tv)+\lambda \cdot \left. \dfrac{d}{dt}\right|_{t=0}f(p+tw)\\ &=D_v|_p(f)+\lambda\, D_w|_p(f) \end{align*}
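The linearity can also be checked numerically. The sketch below (with an arbitrarily chosen smooth $f$, point $p$, and directions $v,w$, none of which come from the question) approximates each directional derivative by a symmetric difference quotient and confirms $D_{v+\lambda w}|_p f = D_v|_p f + \lambda\, D_w|_p f$ up to discretization error.

```python
import math

def directional_derivative(f, p, v, t=1e-5):
    """Symmetric-difference approximation of d/dt f(p + t v) at t = 0."""
    shift = lambda s: [pi + s * vi for pi, vi in zip(p, v)]
    return (f(shift(t)) - f(shift(-t))) / (2 * t)

f = lambda x: x[0] ** 2 * x[1] + math.sin(x[2])   # sample smooth function
p, v, w, lam = [1.0, 2.0, 0.5], [0.3, -1.0, 2.0], [1.5, 0.2, -0.7], 2.5

vw = [vi + lam * wi for vi, wi in zip(v, w)]      # the direction v + lam*w
lhs = directional_derivative(f, p, vw)
rhs = directional_derivative(f, p, v) + lam * directional_derivative(f, p, w)
print(lhs, rhs)   # agree up to finite-difference error
```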
{ "language": "en", "url": "https://math.stackexchange.com/questions/4612510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Solving $f\left( n \right) = \sum\limits_{i > j \ge 0} {{}^{n + 1}{C_i}{}^n{C_j}} $ My approach is as follows: Let $f\left( n \right) = \sum\limits_{i > j \ge 0} {{}^{n + 1}{C_i}{}^n{C_j}} $ $f\left( n \right) = \sum\limits_{i = 1}^n {{}^{n + 1}{C_i}} \sum\limits_{j = 0}^i {{}^i{C_j}} \Rightarrow f\left( n \right) = \sum\limits_{i = 1}^{n + 1} {{2^i}\cdot{}^{n + 1}{C_i}} $ ${\left( {1 + 2x} \right)^{n + 1}} = {}^{n + 1}{C_0} + {}^{n + 1}{C_1}\left( {2x} \right) + ... + {}^{n + 1}{C_{n + 1}}{\left( {2x} \right)^{n + 1}} = {3^{n + 1}}\left\{ {x = 1} \right\}$ Not able to proceed
The summand is the coefficient of $x^{i-j}$ from the following expression: $$ (1+x)^{n+1}\left(1+\frac{1}{x}\right)^{n} = \frac{(1+x)^{2n+1}}{x^{n}} $$ Since $i>j$ means $i-j\ge 1$, we just need to sum the coefficients of $x^{n+1},...,x^{2n+1}$ from the numerator on RHS. $$ \binom{2n+1}{n+1}+\binom{2n+1}{n+2}+...+\binom{2n+1}{2n+1}=2^{2n} $$ Seems like all the options are correct except (B)
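As a quick brute-force sanity check of the closed form $2^{2n}$ (a verification sketch, not part of the argument):

```python
from math import comb

def f(n):
    """Direct evaluation of the double sum over i > j >= 0."""
    return sum(comb(n + 1, i) * comb(n, j)
               for i in range(n + 2) for j in range(i))

values = [f(n) for n in range(1, 8)]
print(values)   # powers of 4, i.e. 2^(2n)
```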
{ "language": "en", "url": "https://math.stackexchange.com/questions/4612693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
About ring with local units A ring $R$, not necessarily unital, is said to be a ring with local units if there is a set $E\subseteq R$ of commuting idempotents such that for any $r\in R$ there is $e\in E$ such that $er=r=re$. Let us call such an $e$ a local unit for $r$. A local unit for $r$ may not be unique. Anyway, a common (I dare say folklore) result about such rings, which also gives an equivalent definition, is the following: If $(R,E)$ is a ring with local units, then any finite number of elements of $R$ have a common local unit. To be formally precise, if $r_1, r_2,\dots, r_n$ are arbitrary elements, then there is $t\in E$ such that $tr_i=r_i=r_it$ for each $i=1,2,\dots, n$. Most papers in the literature which mention the result do not provide an explicit proof or simply dismiss it as an easy proof by induction. I thought the latter was really the case too, but I have so far been unable to prove it. Even for two arbitrary idempotents in $E$ themselves I couldn't quite see how to produce a common local unit. Did I miss anything? I appreciate a hint instead of a full answer. Edit: As an example of such a ring we may consider the ring of continuous real valued functions on $\mathbb R$ with compact support. Then it is a ring with local units where the commuting idempotents can be taken to be the characteristic function $\chi_I$ for each closed interval $I$. In this example the nature of characteristic functions easily proves the result above, but I am not sure how to proceed in the most general case.
If $e_1, e_2\in E$ are local units for $r_1$ and $r_2$ respectively, then a local unit common to $r_1$ and $r_2$ is $e_1+e_2-e_1e_2$.
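To see the identity in action, here is a small sketch in a concrete ring of finitely supported functions, realized as fixed-length lists under pointwise operations (the specific elements are arbitrary): the combined element $e_1+e_2-e_1e_2$ is again an idempotent and acts as a unit on both $r_1$ and $r_2$.

```python
# ring of finitely supported functions, with pointwise operations
def add(a, b): return [x + y for x, y in zip(a, b)]
def sub(a, b): return [x - y for x, y in zip(a, b)]
def mul(a, b): return [x * y for x, y in zip(a, b)]

r1 = [0, 3, -2, 0, 0, 0, 0, 0, 0, 0]   # supported on positions {1, 2}
r2 = [0, 0, 0, 0, 5, 1, 0, 0, 0, 0]    # supported on positions {4, 5}
e1 = [1 if v else 0 for v in r1]       # a local unit for r1
e2 = [1 if v else 0 for v in r2]       # a local unit for r2

e = sub(add(e1, e2), mul(e1, e2))      # e1 + e2 - e1*e2
print("common local unit:", e)         # indicator of the union of supports
```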
{ "language": "en", "url": "https://math.stackexchange.com/questions/4612891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to formally denote a bijection formula I'm learning set theory by doing exercises from a textbook that gives a high level overview of sets theory. Unfortunately it doesn't provide a lot of examples on the formal math language that I can use to formally denote bijection formulas. My question is how to formally define a bijection formula between two sets without resorting to a lot of explanation. Take as an example the following exercise: Build a bijection between an interval $[0, 1]$ and half-interval $[0, 1)$ I solved this problem as follows: Let's rewrite $M_1=[0, 1]$ as $M_1=C_1 + R_1 : R_1 = \{\frac{1}{n} \mid n \in \Bbb N \} \land C_1 = M_1 \setminus R_1$ and $M_2=[0, 1)$ as $M_2=C_2 + R_2 : R_2 = \{\frac{1}{n} \mid n \in (\Bbb N \setminus \{1\})\} \land C_2 = M_2 \setminus R_2$. A "plus" sign here means a union of non-intersecting sets. It is easy to see that $C_1 = C_2$. So a bijection would be a one-to-one mapping for elements of $C_1, C_2$. For elements of $R_1, R_2$ bijection could be defined as follows: $$g : R_1 \rightarrow R_2, g \left (\frac 1n \right ) = \frac{1}{n+1} \text{ for } n \in \Bbb N$$ and $$g^{-1} : R_2 \rightarrow R_1, g^{-1}\left(\frac 1n \right) = \frac{1}{n-1} \text{ for } n \in (\Bbb N \setminus \{1\})$$ In the above proof I wanted to convey the idea that both intervals would have a common part - i.e. either interval minus a set $R = \{x \mid x = \frac{1}{n}, n \in \Bbb N\}$ and then show a bijection for remaining elements, where remaining elements would be either interval minus set $R$. Is there an easier or more accurate way to write the above?
Bear in mind that the two spaces aren't homeomorphic (one is compact and the other is not), so there's no way to do this in a bicontinuous manner. Indeed, since a continuous image of a compact set $([0, 1])$ achieves its maximum, there's no way to do this in a continuous manner. The easiest way I can think of is this: Let $\{q_n \mid n \in \Bbb N \}$ enumerate the rationals in $[0, 1)$. (It's tedious, but you can come up with an express formula to do this if you want.) Then define $f(1)= q_0, f(q_k)=q_{k+1}$ for the rationals, and let $f$ be the identity function on the irrationals.
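For completeness, the shift construction from the question itself (map $1 \mapsto \tfrac12$, $\tfrac1n \mapsto \tfrac1{n+1}$, and fix everything else) can be sketched and spot-checked with exact rational arithmetic:

```python
from fractions import Fraction as F

def f(x):
    """Bijection [0,1] -> [0,1): 1 -> 1/2, 1/n -> 1/(n+1), everything else fixed."""
    if x == 1:
        return F(1, 2)
    if 0 < x and (1 / F(x)).denominator == 1:   # x = 1/n for some integer n >= 2
        return F(1, int(1 / F(x)) + 1)
    return x

samples = [F(1), F(1, 2), F(1, 3), F(2, 3), F(0), F(7, 10)]
images = [f(x) for x in samples]
print(images)
```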
{ "language": "en", "url": "https://math.stackexchange.com/questions/4613109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrating $\frac{1}{(x-2)^4 \sqrt{x^2 + 6x + 2}}$ I'm struggling with the integral, $$\int\frac{1}{(x-2)^4 \sqrt{x^2 + 6x + 2}}dx.$$ I tried it as follows: Substituting $x-2 = \frac1t \implies dx = \frac{-dt}{t^2}.$ $$\therefore \int\frac{dx}{(x-2)^4 \sqrt{x^2 + 6x + 2}} = \int \frac{- dt}{\frac{t^2}{t^4} \sqrt{(\frac1t + 2)^2 + 6 (\frac1t + 2) + 2}} = \int \frac{-t^3}{\sqrt{18t^2 + 10t + 1}}\ dt$$ How to continue from here?
Substitute $y=x-2$ to rewrite the integral as $$I_4= \int\frac{1}{(x-2)^4 \sqrt{x^2 + 6x + 2}}dx =\int \frac{1}{y^4 \sqrt{y^2 + 10y + 18}}dy $$ and then integrate by parts to get the reduction formula $$18(n-1)I_n=-\frac{\sqrt{y^2 + 10y + 18}}{y^{n-1}}-5(2n-3)I_{n-1}-(n-2)I_{n-2} $$ Apply the formula three times to reduce it to $$I_4=\frac1{54} \left(-\frac1{y^3} + \frac{25}{36y^2}-\frac{101}{216y}\right)\sqrt{y^2 + 10y + 18}-\frac{355}{11664}I_1 $$ where $$I_1= \int\frac{1}{y \sqrt{y^2 + 10y + 18}}dy =-\frac1{\sqrt{18}}\tanh^{-1}\frac{\sqrt{18}\sqrt{y^2 + 10y + 18} }{5y+18} $$
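The reduction formula can be sanity-checked numerically as a definite integral over $[1,2]$, where the integrand is smooth (the interval and the hand-rolled Simpson rule are just choices for this verification sketch):

```python
import math

def simpson(g, a, b, m=2000):
    """Composite Simpson rule with 2*m subintervals."""
    h = (b - a) / (2 * m)
    return (g(a) + g(b)
            + 4 * sum(g(a + (2 * k + 1) * h) for k in range(m))
            + 2 * sum(g(a + 2 * k * h) for k in range(1, m))) * h / 3

Q = lambda y: y * y + 10 * y + 18
I = lambda n: simpson(lambda y: 1 / (y ** n * math.sqrt(Q(y))), 1.0, 2.0)

n = 4
lhs = 18 * (n - 1) * I(n)
boundary = -(math.sqrt(Q(2.0)) / 2.0 ** (n - 1) - math.sqrt(Q(1.0)) / 1.0 ** (n - 1))
rhs = boundary - 5 * (2 * n - 3) * I(n - 1) - (n - 2) * I(n - 2)
print(lhs, rhs)   # agree to high accuracy
```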
{ "language": "en", "url": "https://math.stackexchange.com/questions/4613291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Does the functor $\mathrm{Aut}$ have an adjoint I am studying for a group theory exam, trying to present the subject categorically. In order to study for this I am trying, to in every exercise spot if there are any functors involved and trying to find if they have adjoints. One such functor I have stumbled upon which has stumped me is $$\mathrm{Aut} \colon \quad G \mapsto \mathrm{Aut}(G), \quad (f \colon G \rightarrow H) \mapsto \mathrm{Aut}(f)(\phi)=f\circ\phi\circ f^{-1} \,.$$ From the category $\mathbf{Grp}_{\mathrm{iso}}$ to itself. Intuitively I would want to say that studying the automorphism groups should be weakly equivalent to studying the underlying group, is there any adjunction to formalise this hunch? And if not, can we prove why? Also is there any way to study $\mathrm{Aut}$ as a functor, without restricting to isomorphism, because this makes the category a lot less interesting?
In the category $\bf{Grp}_{\text{iso}}$ with groups as objects and group isomorphisms as morphisms, there are no "interesting" morphisms. Indeed, $\operatorname{Hom}(A,B)$ is non-empty only if $A$ and $B$ are isomorphic. For an adjoint functor $F$ to $\operatorname{Aut}\colon \bf{Grp}_{\text{iso}}\to \bf{Grp}_{\text{iso}}$, we demand either $$ \operatorname{Hom}(A,\operatorname{Aut}(B))\cong \operatorname{Hom}(F (A),B)$$ or $$ \operatorname{Hom}(\operatorname{Aut}(A),B)\cong \operatorname{Hom}(A,F(B)).$$ In particular both sides are empty or both are non-empty. Thus we want $$A\cong \operatorname{Aut}(B)\iff F(A)\cong B.$$ To show that this is impossible, it suffices to find two groups $B_1,B_2$ with $B_1\not\cong B_2$ and $\operatorname{Aut}(B_1)\cong \operatorname{Aut}(B_2)$. That's easy: For example, the trivial group and the group of order $2$ both have trivial automorphism group. Or $\Bbb Z$ and the group of order $3$ both have the group of order $2$ as automorphism group
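The counterexample can be illustrated computationally: counting automorphisms of small cyclic groups by brute force (representing $\Bbb Z_n$ as addition mod $n$) shows that the trivial group and the group of order $2$ both have exactly one automorphism, so $\operatorname{Aut}$ cannot be inverted up to isomorphism.

```python
from itertools import permutations

def num_automorphisms(n):
    """Brute-force count of bijections of Z_n that preserve addition mod n."""
    elems = range(n)
    return sum(
        1
        for perm in permutations(elems)
        if all(perm[(a + b) % n] == (perm[a] + perm[b]) % n
               for a in elems for b in elems)
    )

counts = [num_automorphisms(n) for n in range(1, 7)]
print(counts)   # [1, 1, 2, 2, 4, 2], i.e. Euler's phi(n)
```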
{ "language": "en", "url": "https://math.stackexchange.com/questions/4613418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is Closure a necessary axiom in Group Theory? How can we explain the logic behind the closure axiom in Group Theory to a non-math major? What is the reasoning for why it is necessary? If the result of compounding two elements falls outside the group, why does that matter?
There is no logic to it. It's just how we define groups: A magma just obeys closure, and is the simplest possible algebraic structure. Otherwise, you would basically just have a set (since the operation is meaningless). If I take the set $\{1,2,3\}$ and apply addition, there is nothing inherently special about addition to this set. I could just as easily apply multiplication. Closure is just a simple relation to link the set to the operation that closes it. As for why groups in particular obey closure, it's simply because they are defined as a magma with particular properties (and any other combination just has a similar label). I.e., groups are nothing special in terms of requiring closure; it's just a name we associate with structures that have the property of closure among other requirements. On an additional note, it's worth pointing out that the entire concept of algebraic structures is to generalise existing concepts. I.e., since there are groups of cycles or matrices, if I prove something in general about a group, it will be true for both cycles and matrices. Closure is a common thing that most useful algebra (such as matrices and cycles) obeys. Hence it would make sense for it to be a property of groups (so we can prove more lemmas, theorems etc.).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4613573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Solve $\,x<f^{-1}(x)$ Suppose for a given function $f(x)$ we want to solve the inequation $x<f^{-1}(x)$. As you know the inverse of a function is its reflection over the line $y=x$. So I believe it kind of makes sense to say in places where $x<f^{-1}(x)$ then $f(x)<x$ because when the $f^{-1}$ is on top of the line $y=x$, its reflection -which is $f(x)$- is under the line. That's the case for some functions like $f(x)=0.5x+1$. But if we take $f(x)=1/x$ this trick does not work because the inverse of the function is itself and where $x<f^{-1}(x)$ also $x<f(x)$. I wanted to know if the above trick was just lucky or not and if not, are there any general rules to know if we can apply this trick to a certain function or not?
To start with, let's agree that it is certainly true that if $a = b$ then $f(a) = f(b)$, where $f$ is any function you like. But when you try to solve $x < f^{-1}(x)$ by applying $f$ to both sides and claiming that $$f(x) < f(f^{-1}(x)) = x,$$ you are assuming that if $ a < b $ then $f(a) < f(b)$, where $a = x$ and $b = f^{-1}(x).$ For some functions it is indeed true that for any numbers $a$ and $b,$ if $ a < b $ then $f(a) < f(b).$ Such a function is called an increasing function. The fact that we call some functions "increasing functions" should make you wonder if there are functions that are not increasing. And indeed there are such functions. There is such a thing as a function $g$ such that whenever $a < b,$ then $g(a) > g(b).$ The function $g(x) = 1/x$ on the positive real numbers is such a function. And it does exactly the opposite of what you want. There can even be an invertible function that is neither increasing nor decreasing. consider the function $$ h(x) = \begin{cases} \dfrac2x & x > 0, \\[1ex] \dfrac x2 & x \leq 0. \end{cases} $$ You can confirm that this function is invertible. But $h(-2) < h(-1)$ while $h(1) > h(2).$ The first inequality says the function is not a decreasing function, while the second says it is not an increasing function. So in general, if a function is increasing, then you can solve $x < f^{-1}(x)$ by solving $f(x) < x$; if a function is decreasing, you can solve $x < f^{-1}(x)$ by solving $f(x) > x$; but if the function is neither increasing nor decreasing then you must be more careful.
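A small grid-based illustration for the increasing example $f(x)=0.5x+1$ from the question (the grid is arbitrary, and a finite sample only illustrates the claim): the solution sets of $x<f^{-1}(x)$ and $f(x)<x$ coincide.

```python
f = lambda x: 0.5 * x + 1       # increasing function from the question
f_inv = lambda x: 2 * (x - 1)   # its inverse

xs = [i / 10 for i in range(-100, 101)]   # grid on [-10, 10]
A = {x for x in xs if x < f_inv(x)}       # solutions of x < f^-1(x)
B = {x for x in xs if f(x) < x}           # solutions of f(x) < x
print(A == B, min(A))                     # same set; both are {x > 2} on the grid
```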
{ "language": "en", "url": "https://math.stackexchange.com/questions/4613716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examples of relations that don't satisfy one of the three properties of an equivalence relation while satisfying the other two? Just as a question that I have posed to myself: I want to find three relations $(S, \spadesuit)$, $(R, \clubsuit)$ and $(T, \blacksquare)$ for which $(S, \spadesuit)$ doesn't satisfy the reflexive property but satisfies symmetric and transitive. $(R, \clubsuit)$ doesn't satisfy the symmetric but satisfies the other two. Finally, $(T, \blacksquare)$ doesn't satisfy the transitive but satisfies the other two. Here my notation is that $S$, $R$, and $T$ are sets and $\spadesuit$, $\clubsuit$ and $\blacksquare$ are the relations. So far here is what I have come up with: $$(S, \spadesuit)=(\mathbf{Z}, \text{x and y are even})$$ $$(R, \clubsuit)=(\mathbf{R}, x \leqslant y)$$ $$(T, \blacksquare )= ([x\, |\, x \text{ is a nonempty set} ] , x\cap y \neq \emptyset )$$ I guess I am asking for people to join me in the creativity here. Are there more interesting relations that satisfy these rules? I predict the question might be deemed inapropriate for the site. So here is a follow up question: I would love to find simple examples of equivalence relations that are provable at a very modest level. For example, I love the relation $x\sim y$ defined to be "$x-y$ is an integer" on the reals or the congruence of line segments. Are there other "easier" examples?
Symmetric, transitive but not reflexive relation Consider on $\mathbb{R}$ the following relation: $$x\sim y \iff x\cdot y>0$$ It's symmetric and transitive, but not reflexive because $0 \nsim 0 $. Reflexive, transitive but not symmetric relation Let $A$ and $B$ be two sets, and define the relation as follows: $$A\sim B \iff A\subseteq B $$ This is clearly not symmetric as $A\subseteq B$ does not imply $B\subseteq A$. Reflexive, symmetric but not transitive relation Consider on $\mathbb{R}$: $$x\sim y \iff |x-y|\le 1$$ It's reflexive and symmetric, but not transitive: $0\sim 1$ and $1\sim 2$, yet $0\nsim 2$.
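These properties are easy to check mechanically on small finite samples. The sketch below tests the same-sign and subset relations, plus $|x-y|\le 1$ as a reflexive, symmetric but not transitive example (a finite sample can only refute a property, not prove it, but it makes the failures concrete):

```python
def is_reflexive(rel, S):  return all(rel(x, x) for x in S)
def is_symmetric(rel, S):  return all(rel(y, x) for x in S for y in S if rel(x, y))
def is_transitive(rel, S): return all(rel(x, z) for x in S for y in S for z in S
                                      if rel(x, y) and rel(y, z))

S = [-2, -1, 0, 1, 2]
same_sign = lambda x, y: x * y > 0
assert not is_reflexive(same_sign, S)                 # 0 is not related to 0
assert is_symmetric(same_sign, S) and is_transitive(same_sign, S)

sets = [frozenset(), frozenset({1}), frozenset({1, 2})]
subset_of = lambda a, b: a <= b
assert is_reflexive(subset_of, sets) and is_transitive(subset_of, sets)
assert not is_symmetric(subset_of, sets)              # {1} <= {1,2} only one way

near = lambda x, y: abs(x - y) <= 1
assert is_reflexive(near, S) and is_symmetric(near, S)
assert not is_transitive(near, S)                     # 0 ~ 1, 1 ~ 2, but 0 !~ 2
print("all property checks passed")
```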
{ "language": "en", "url": "https://math.stackexchange.com/questions/4613849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Converting to polar double integral with a specific order of integration We are asked to write down the integral $$I=\int_0^1 \int_x^{x\sqrt3}f(\sqrt{x^2+y^2})dydx$$ in a polar form first with respect to $\theta$ and then with respect to $r$. I have recognised that $0<r<2$ and $r=\frac{1}{\cos\theta}$ but I am finding it difficult to find bounds for $\theta$ as a function of $r$.
To express the integral $I$ in polar form, we can make the substitution $x = r\cos\theta$ and $y = r\sin\theta$, with area element $dx\,dy = r\,dr\,d\theta$, so the integrand becomes $f(r)\,r$. To find the bounds, we need to consider the region of integration in the $xy$-plane. This region is a triangle with vertices at $(0,0)$, $(1,1)$, and $(1,\sqrt{3})$: the $x$-coordinate varies from $0$ to $1$ and the $y$-coordinate varies from $x$ to $x\sqrt{3}$, as indicated in the integral. In polar coordinates, the line $y = x$ corresponds to the angle $\theta = \frac{\pi}{4}$ and the line $y = x\sqrt{3}$ corresponds to the angle $\theta = \frac{\pi}{3}$. Therefore, the bounds for $\theta$ are $\theta = \frac{\pi}{4}$ to $\theta = \frac{\pi}{3}$. The third side of the triangle is the line $x = 1$, which in polar coordinates is $r\cos\theta = 1$, i.e. $r = \frac{1}{\cos\theta}$. Substituting these bounds into the integral, we get: $$I = \int_{\pi/4}^{\pi/3} \int_0^{\frac{1}{\cos\theta}} f(r)\, r\, dr\, d\theta$$ This is the integral $I$ expressed in polar form with the integration in $r$ performed first. To express the integral with the integration in $\theta$ performed first, note that $r$ ranges from $0$ to $\frac{1}{\cos(\pi/3)} = 2$ over the region. For $0 \le r \le \sqrt{2}$ the whole arc $\frac{\pi}{4} \le \theta \le \frac{\pi}{3}$ lies inside the triangle (since then $x = r\cos\theta \le \frac{r}{\sqrt{2}} \le 1$), while for $\sqrt{2} \le r \le 2$ the constraint $r\cos\theta \le 1$ forces $\theta \ge \arccos\left(\frac{1}{r}\right)$. Therefore: $$I = \int_0^{\sqrt{2}} \int_{\pi/4}^{\pi/3} f(r)\, r\, d\theta\, dr + \int_{\sqrt{2}}^2 \int_{\arccos(1/r)}^{\pi/3} f(r)\, r\, d\theta\, dr$$ This is the integral $I$ expressed in polar form with respect to $\theta$ first and then $r$.
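The conversion can be cross-checked numerically with an arbitrary test choice such as $f(r)=r$: since $y$ runs from $x$ to $x\sqrt3$, the rays $y=x$ and $y=x\sqrt3$ are $\theta=\pi/4$ and $\theta=\pi/3$, and the edge $x=1$ is $r=1/\cos\theta$, so the Cartesian double integral should match $\int_{\pi/4}^{\pi/3}\int_0^{1/\cos\theta} f(r)\,r\,dr\,d\theta$. This is a verification sketch with a hand-rolled Simpson rule.

```python
import math

def simpson(g, a, b, m=200):
    """Composite Simpson rule with 2*m subintervals."""
    h = (b - a) / (2 * m)
    return (g(a) + g(b)
            + 4 * sum(g(a + (2 * k + 1) * h) for k in range(m))
            + 2 * sum(g(a + 2 * k * h) for k in range(1, m))) * h / 3

f = lambda r: r   # arbitrary test choice for f

# Cartesian form: y from x to x*sqrt(3), then x from 0 to 1
inner_c = lambda x: simpson(lambda y: f(math.hypot(x, y)), x, x * math.sqrt(3))
cartesian = simpson(inner_c, 0.0, 1.0)

# polar form: r from 0 to sec(theta), theta from pi/4 to pi/3
inner_p = lambda t: simpson(lambda r: f(r) * r, 0.0, 1.0 / math.cos(t))
polar = simpson(inner_p, math.pi / 4, math.pi / 3)

print(cartesian, polar)   # both ~ 0.414
```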
{ "language": "en", "url": "https://math.stackexchange.com/questions/4614183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
An upper bound on Gaussian mean width The following is an excerpt from this blog post on Talagrand's Generic Chaining: "Suppose that we have a subset $T \subseteq {\mathbb R}^n$, we pick a random Gaussian vector $g$ from $N({\bf 0}, I)$, and we are interested in the random variable $\sup_{t \in T} \langle g,t \rangle $. A first observation is that each $\langle g,t \rangle$ is Gaussian with mean zero and variance $|| t||^2$. If $T$ is finite, we can use a union bound to estimate the tail of $\sup_{t\in T} \langle t,g \rangle$ as $$\displaystyle \Pr \left[ \sup_{t\in T}\ \langle g, t \rangle > \ell \right] \leq |T| \cdot e^{-\ell^2 / 2 \sup_{t\in T} \lVert t\rVert^2} $$ and we can compute the upper bound $$\mathbb{E}_{g \sim N({\bf 0},I)} \left( \sup_{t \in T} \langle g,t \rangle \right) \leq O \left( \sqrt{\log |T|}\cdot \sup_{t \in T}\lVert t\rVert \right) ."$$ I don't see how we get the last bound on expectation. We can get a lower bound on the cdf of $\sup_{t \in T} \langle g,t \rangle $ from the tail bound. But this doesn't seem to help in getting an upper bound on the expectation.
A useful result from Talagrand's book is that for any r.v. $X \ge 0$ satisfying for any $t\ge 0$ $$ P(X \ge t) \le A \exp (-t^2/B^2) $$ for constants $A\ge 2$ and $B>0$. Then $E X \le CB \sqrt{\log A}$, where $C$ is a universal constant. The proof is straightforward: \begin{align*} E X &= \int_0^{\infty} P(X \ge t) dt\\ &= \int_0^{t_0} P(X \ge t) dt + \int_{t_0}^{\infty} P(X \ge t) dt\\ &\le t_0 + \int_{t_0}^{\infty} P(X \ge t) dt\\ &\le t_0 + \int_{t_0}^{\infty} A\exp \left (-\frac{t^2}{B^2}\right)dt\\ &\le t_0 + \frac{1}{t_0}\int_{t_0}^{\infty} t A\exp \left (-\frac{t^2}{B^2}\right)dt\\ &= t_0 + \frac{AB^2}{2t_0} \exp \left ( -\frac{t_0^2}{B^2} \right), \end{align*} and minimizing with respect to $t_0$ yields the choice $t_0 = B\sqrt{\log A}$. See also this post for a similar result going from high probability to expectation. What if $X$ is not non-negative? If $X$ cannot be assumed non-negative, as in the case for example where $X := \sup_{t \in T} \langle g, t\rangle$, then we can use instead the fact: $$ E X = \int_0^{\infty} P(X \ge t) dt - \int_{-\infty}^0 P(X < t)dt \le \int_0^{\infty} P(X \ge t) dt $$ and reason as above. How do we get $t_0 = CB\sqrt{\log A}$? Since as you say the minimizer in closed form is hard/impossible to compute, the idea here is to reason about the correct order of the minimizer. In our case, we have $$ t_0 + \frac{AB^2}{2t_0} \exp \left ( -\frac{t_0^2}{B^2} \right) \le t_0 + \frac{AB^2}{2} \exp \left ( -\frac{t_0^2}{CB^2} \right) $$ is always true for a constant $C$ large enough. Now we have two terms that move in opposite directions, as $t_0$ grows bigger, the first term grows but the second term is smaller, and vice versa. The idea then is to find a $t_0$ such that both terms are of constant order. Note that the choice of $t_0$ mentioned before does exactly this. This is not an exact minimizer but it is a minimizer up to constants which is good enough for our purpose. Note that universal constants $C$ can differ from line to line.
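The "minimizer up to constants" point can be illustrated numerically: for sample values $A=100$, $B=1$ (arbitrary choices), the proof's choice $t_0=B\sqrt{\log A}$ is within a fraction of a percent of the true minimum of $t_0 + \frac{AB^2}{2t_0}e^{-t_0^2/B^2}$ found by grid search.

```python
import math

A, B = 100.0, 1.0   # arbitrary sample constants with A >= 2
g = lambda t: t + (A * B * B / (2 * t)) * math.exp(-t * t / (B * B))

t_star = B * math.sqrt(math.log(A))                     # choice from the proof
grid_min = min(g(0.001 * i) for i in range(1, 20001))   # brute force on (0, 20]
print(t_star, g(t_star), grid_min)
```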
{ "language": "en", "url": "https://math.stackexchange.com/questions/4614271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $|f(y) − f(x)| \leq f(|y − x|)$ if $|y − x| ≤ 1/2$ given $f(x)=-x\log_2 x$ How do I prove that whenever $|y − x| ≤ 1/2$ it follows that $|f(y) − f(x)| \leq f(|y − x|)$ given $f(x)=-x\log_2 x$? where $x,y\in[0,1]$ The graph of $f(x)=-x\log_2 x$ function is $$ f'(x)=-\log x-\frac{x}{x\ln 2}=-\log x-\frac{1}{\ln 2}=0\implies \log x=-\frac{1}{\ln 2}\\ \frac{\ln x}{\ln 2}=-\frac{1}{\ln 2}\implies \ln x=-1\implies x=1/e\approx 0.3679 $$ I was only able to write the following proof: When $0\le x\le y\le 1$, \begin{align} |f(x)-f(y)|&=|-x\log x+y\log y|\\ &=|-x\log x+\frac{x}{\ln 2}+y\log {y}-\frac{y}{\ln 2}+\frac{y}{\ln 2}-\frac{x}{\ln 2}|\\ &\leq |-(x\log x-\frac{x}{\ln 2})+(y\log y-\frac{y}{\ln 2})|+\frac{|y-x|}{\ln 2}\\ &=|\int_x^y \log tdt|+\frac{|y-x|}{\ln 2}=-\int_x^y \log tdt+\frac{|y-x|}{\ln 2}\\ &=-\int_x^{x+(y-x)} \log tdt+\frac{|y-x|}{\ln 2}\\ &\leq -\int_0^{y-x} \log tdt+\frac{|y-x|}{\ln 2}\\ &=-\Big(t\log t-t\Big)_0^{y-x}+\frac{|y-x|}{\ln 2}\\ &=-(y-x)\log(y-x)+(y-x)+\frac{|y-x|}{\ln 2}\\ &=f(|y-x|)+|y-x|+\frac{|y-x|}{\ln 2} \end{align} My Attempt Thanks @Balajisb for the hint. If $f$ is a concave function then $f(a+b)\leq f(a)+f(b)$ for all $a,b>0$ $$ -y\log y=-(y-x+x)\log(y-x+x)\le-(y-x)\log(y-x)-x\log x\\ -y\log y-(-x\log x)\le -(y-x)\log(y-x)\\ f(y)-f(x)\le f(y-x) $$ $$ D_{\log x}\in(0,\infty]\implies x,y,y-x\geq 0\implies 1\ge y>x>0\\ $$ $$ 1>y-x>0\implies -(y-x)\log(y-x)>0 $$ $$ f(y)-f(x)\le f(|y-x|) \;\forall\;(x,y)\;|\;y>x>0\\f(x)-f(y)\le f(|y-x|) \;\forall\;(x,y)\;|\;x>y>0 $$ Therefore, $$ |f(y)-f(x)|\le f(|y-x|) \;\forall\;(x,y)\;|\;y>x>0\;\&\;f(y)>f(x)\\|f(y)-f(x)|\le f(|y-x|) \;\forall\;(x,y)\;|\;y<x<0\;\&\;f(y)<f(x) $$ In order to prove $|f(y)-f(x)|\le f(|y-x|) \;\forall\;x,y>0$ we need to also consider the cases $y>x>0\;\&\;f(y)<f(x)$ and $x>y>0\;\&\;f(x)<f(y)$. So I think that's where the condition $y-x\leq 1/2$ lies in. 
Case 1 : $y>x>0\;\&\;f(y)<f(x)$ $$ -y\log y<-x\log x\implies y\log y>x\log x\\ x\log x-y\log y<0\\x\log x-\frac{x}{\ln 2}-y\log y+\frac{y}{\ln 2}-\frac{y-x}{\ln 2}<0\\ \int_y^x \log t dt-\frac{y-x}{\ln 2}>0\\ y-x<\ln 2\int_y^x \log t dt=-\ln 2\int_x^y \log t dt=-\ln 2\int_x^y \frac{\ln t}{\ln 2}dt=-\int_x^y \ln t dt\\ <-\int_0^1 \log t dt=-1\times -1=1 $$ How do I obtain the condition $y-x\leq 1/2$ in this case ?
Here is a proof. WLOG, assume that $x \le y$. We need to prove that $$|-y\ln y + x\ln x| \le -(y - x)\ln(y - x). \tag{1}$$ Using the identity for $u \ge 0$ (easy to prove) $$u\ln u =\int_0^1 \frac{u(u - 1)}{1+(u-1)t}\,\mathrm{d} t,$$ (1) is written as $$\left|\int_0^1 \left(\frac{y(1-y)}{1+(y-1)t} - \frac{x(1-x)}{1+(x-1)t}\right)\,\mathrm{d} t\right| \le \int_0^1 \frac{(y-x)(1 - (y-x))}{1 + (y-x - 1)t}\,\mathrm{d} t.$$ It suffices to prove that $$\int_0^1 \left|\frac{y(1-y)}{1+(y-1)t} - \frac{x(1-x)}{1+(x-1)t}\right|\,\mathrm{d} t \le \int_0^1 \frac{(y-x)(1 - (y-x))}{1 + (y-x - 1)t}\,\mathrm{d} t.$$ It suffices to prove that, for all $t \in [0, 1]$ and $0 \le x \le y \le 1$ with $y - x \le 1/2$, $$\left|\frac{y(1-y)}{1+(y-1)t} - \frac{x(1-x)}{1+(x-1)t}\right| \le \frac{(y-x)(1 - (y-x))}{1 + (y-x - 1)t}$$ or $$\left(\frac{y(1-y)}{1+(y-1)t} - \frac{x(1-x)}{1+(x-1)t}\right)^2 \le \left(\frac{(y-x)(1 - (y-x))}{1 + (y-x - 1)t}\right)^2$$ or (clearing the denominators and simplifying) $$xy[1 - 2(y - x)]t^2 + (2x^2 - xy - 2y^2 + 2y)t(1 - t) + 2(1-y)(1-t)^2 \ge 0 \tag{2}$$ which is true (the proof is given at the end). We are done. Proof of (2): It suffices to prove that $2x^2 - xy - 2y^2 + 2y \ge 0$. If $0 \le x \le 1/3$, then $$2x^2 - xy - 2y^2 + 2y \ge 2x^2 - xy - 2y\cdot (x + 1/2) + 2y = 2x^2 + y(1 - 3x) \ge 0.$$ If $1/3 < x < 1/2$, then $$2x^2 - xy - 2y^2 + 2y \ge 2x^2 - xy - 2y\cdot (x + 1/2) + 2y = 2x^2 - y(3x - 1) $$ $$\ge 2x^2 - (x + 1/2)(3x - 1) = \frac12(1 + x)(1 - 2x) \ge 0.$$ If $1/2 \le x \le 1$, then $2x^2 - xy - 2y^2 + 2y = x(2x - y) + 2y(1 - y) \ge 0$. We are done.
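A grid check of the inequality is no substitute for the proof, but it is a useful sanity test (the grid and tolerance here are arbitrary):

```python
import math

def f(x):
    """f(x) = -x log2(x), with the usual convention f(0) = 0."""
    return 0.0 if x == 0 else -x * math.log2(x)

pts = [i / 40 for i in range(41)]   # grid on [0, 1]
worst = max(abs(f(y) - f(x)) - f(abs(y - x))
            for x in pts for y in pts if abs(y - x) <= 0.5)
print(worst)   # <= 0 up to floating-point noise
```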
{ "language": "en", "url": "https://math.stackexchange.com/questions/4614700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
What is $\;\mathbb{E}[p(X,Y)/p(X)p(Y)]\;$? Let $X,Y$ be two random variables. Let $p(x,y)$ be the joint p.d.f. and $p(x),p(y)$ be the marginal p.d.fs. My question is: What is $$\mathbb{E}_{X,Y}\!\left[\frac{p(X,Y)}{p(X)p(Y)}\right] = \iint \frac{p(x,y)^2}{p(x)p(y)} dx dy$$ What is the meaning of this quantity? And is there any work that studies this? Background: I am working on a course project about estimating a certain quantity, and after some computation that quantity is equivalent to this quantity, so I am wondering what this quantity means. Update: This is an information theory course, so probably this has something to do with information theory. If I am not mistaken, $\mathbb{E}_{X,Y}\!\left[\text{ln} \frac{p(X,Y)}{p(X)p(Y)}\right]$ is the mutual information between $X$ and $Y$.
The quantity $r(x,y)=\frac{p(x,y)}{p(x)p(y)}$ is the (Radon-Nikodym) derivative of $(X,Y)$ with respect to $X\otimes Y$, same as in the general definition of the mutual information. The quantity $\mathbb E[r(X,Y)]$ you're looking at is the expected derivative under $(X,Y)$. Recall the $\chi^2$-divergence $\chi^2(U\|V)$ with $U=(X,Y)$ and $V=X\otimes Y$, and notice that \begin{align*} \mathbb E[r(X,Y)]&=\mathbb E[r(U)]=\mathbb E[r(U)+r(U)^{-1}-2]+1=\mathbb E[r(U)^{-1}(r(U)-1)^2]\\ &=\mathbb E[(r(V)-1)^2]+1 =\chi^2(U\|V)+1, \end{align*} using that $u\mapsto r(u)^{-1}$ is the derivative of $V$ with respect to $U$. Notice that $\chi^2(U\|V)=\chi^2((X,Y)\|X\otimes Y)$ is the mutual information induced by the $\chi^2$-divergence, analogously to the mutual information $I(X,Y)=D((X,Y)\|X\otimes Y)$ induced by the relative entropy. Thank you, stochasticboy321, for pointing out the $\chi^2$-divergence!
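The identity $\mathbb E[r(X,Y)]=\chi^2((X,Y)\,\|\,X\otimes Y)+1$ is easy to confirm on a toy discrete distribution (the joint pmf below is arbitrary):

```python
# joint pmf of (X, Y) on {0,1} x {0,1}
p_xy = {(0, 0): 0.3, (0, 1): 0.2, (1, 0): 0.1, (1, 1): 0.4}
p_x = {x: sum(v for (a, b), v in p_xy.items() if a == x) for x in (0, 1)}
p_y = {y: sum(v for (a, b), v in p_xy.items() if b == y) for y in (0, 1)}

r = lambda x, y: p_xy[(x, y)] / (p_x[x] * p_y[y])

# expectation of r under the joint law, and chi^2 divergence under the product
expected_r = sum(p_xy[(x, y)] * r(x, y) for (x, y) in p_xy)
chi2 = sum(p_x[x] * p_y[y] * (r(x, y) - 1) ** 2 for (x, y) in p_xy)
print(expected_r, chi2 + 1)   # equal
```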
{ "language": "en", "url": "https://math.stackexchange.com/questions/4615000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solve the equation $\log_{1-2x}(6x^2-5x+1)-\log_{1-3x}(4x^2-4x+1)=2$ Solve the equation $$\log_{1-2x}(6x^2-5x+1)-\log_{1-3x}(4x^2-4x+1)=2$$ We have $$D_x:\begin{cases}1-2x>0\\6x^2-5x+1>0\\1-3x>0\\1-3x\ne1\\4x^2-4x+1>0\iff(2x-1)^2>0\iff x\ne\dfrac12\end{cases}\iff x\in(-\infty;0)\cup(0;\dfrac{1}{3})$$ Also the quadratic $6x^2-5x+1$ factors as $(2x-1)(3x-1)$. The equation then becomes $$\log_{1-2x}(2x-1)(3x-1)-\log_{1-3x}(2x-1)^2=2\\\log_{1-2x}(2x-1)(3x-1)-2\log_{1-3x}(1-2x)=2,$$ as $\log_{1-3x}(2x-1)^2=2\log_{1-3x}|2x-1|,$ but we know from $D_x$ that $2x-1<0$, $$\log_{1-2x}(2x-1)+\log_{1-2x}(3x-1)-\dfrac{2}{\log_{1-2x}(1-3x)}=2$$ I don't know what to do next.
We have that for $1-2x>0$, $1-3x>0$, $1-2x\neq 1$, $1-3x\neq 1$ $$\log_{1-2x}(6x^2-5x+1)-\log_{1-3x}(4x^2-4x+1)=2 $$ $$\iff \frac{\log((1-2x)(1-3x))}{\log (1-2x)}-2\frac{\log (1-2x)}{\log (1-3x)}=2$$ $$\iff \frac{\log (1-3x)}{\log (1-2x)}-2\frac{\log (1-2x)}{\log (1-3x)}=1$$ then by $u= \frac{\log (1-3x)}{\log (1-2x)}$ we obtain $$u-\frac 2 u =1 \implies u^2-u-2=0 \implies u=2 \text{ or } u=-1$$ The root $u=-1$ would give $\log(1-3x)=-\log(1-2x)$, that is $(1-3x)(1-2x)=1$, whose solutions $x=0$ and $x=\frac56$ are both outside the domain. Hence $$u = \frac{\log (1-3x)}{\log (1-2x)}=2 \implies 1-3x=(1-2x)^2 \implies x=\frac14$$
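A quick numerical check that $x=\frac14$ satisfies the original equation (here $\log_b a$ is computed as $\ln a/\ln b$):

```python
import math

def log_base(b, a):
    """log of a in base b, via the change-of-base formula."""
    return math.log(a) / math.log(b)

x = 0.25
lhs = log_base(1 - 2 * x, 6 * x * x - 5 * x + 1) \
    - log_base(1 - 3 * x, 4 * x * x - 4 * x + 1)
print(lhs)   # 2 up to floating-point error
```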
{ "language": "en", "url": "https://math.stackexchange.com/questions/4615148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Analytical solution to this differential equation with cubic term I have this differential equation that I would like to solve analytically: $$ y''(x) + \left[ E - D\left( 1 - e^{-\delta x} + 2 e^{-\delta x}\frac{\delta}{1 - e^{-\delta x}} \right) - C\frac{\delta^2}{\left(1 - e^{-\delta x}\right)^2} \right] y(x) = 0 $$ where $E,C,D$ and $\delta$ are constants. First I made a coordinate shift from $x$ to $z$ by putting $ z = e^{-\delta x}.$ After some simplification, the DE transformed to $$ \begin{split} y''(z) &+ \frac{y'(z)}{z} \\ & +\left( \frac{ -D + E - C\delta^2 + z\left( 3D - 2E - 2D\delta \right) + z^2\left( -3D + E + 2D\delta \right) + Dz^3}{\left( z(1 - z) \right)^2} \right) y(z) = 0\end{split}$$ The cubic term is giving me a headache. I am told that without it, the problem can be solved analytically. Many different transformations that I tried also failed to get rid of the cubic term. Is there a better transformation than the one that I chose here that would not generate the cubic term? Is it still possible to solve the DE analytically with the cubic term present? By converting this to a solvable hypergeometric equation for instance? If so, please explain. Hope to learn from the advice given here. Thank you.
HINT: Your coordinate transformation isn't correct. With the substitution $z=\exp (-\delta x)$ you will get the transformed DE $$y(z) \left(-\frac{C \delta ^2}{(z-1)^2}+D \left(\frac{2 \delta z}{z-1}+z-1\right)+E\right)+\delta ^2 z \left(z y''(z)+y'(z)\right)=0$$
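One can confirm the transformed equation in this hint numerically: if $Y(z)=y(x)$ with $z=e^{-\delta x}$, the chain rule gives $Y'(z)=-y'(x)/(\delta z)$ and $Y''(z)=y''(x)/(\delta z)^2+y'(x)/(\delta z^2)$, and the two left-hand sides must then agree identically, for any test function. The sketch below checks this with the arbitrary choice $y=\sin$ and arbitrary constants.

```python
import math

D, E, C, delta = 1.3, 0.7, 0.4, 0.9   # arbitrary sample constants
y, dy = math.sin, math.cos            # arbitrary smooth test function
d2y = lambda x: -math.sin(x)

def lhs_original(x):
    """LHS of the original equation in x."""
    z = math.exp(-delta * x)
    V = E - D * (1 - z + 2 * z * delta / (1 - z)) - C * delta ** 2 / (1 - z) ** 2
    return d2y(x) + V * y(x)

def lhs_transformed(x):
    """LHS of the transformed equation, evaluated at z = exp(-delta*x)."""
    z = math.exp(-delta * x)
    Yp = -dy(x) / (delta * z)                                  # dY/dz
    Ypp = d2y(x) / (delta * z) ** 2 + dy(x) / (delta * z * z)  # d2Y/dz2
    V = E - C * delta ** 2 / (z - 1) ** 2 + D * (2 * delta * z / (z - 1) + z - 1)
    return V * y(x) + delta ** 2 * z * (z * Ypp + Yp)

diff = max(abs(lhs_original(x) - lhs_transformed(x)) for x in (0.5, 1.3, 2.7))
print(diff)   # ~ 0 (floating-point error only)
```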
{ "language": "en", "url": "https://math.stackexchange.com/questions/4615440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ordinary DEQ - Frobenius Method I have a DEQ: $$y''-\frac{6}{x}y'+\frac{12+x^2}{x^2}y=0$$ Then in series form: $$\sum_{m=0}^{\infty}(m+r)(m+r-1)a_mx^{m+r-2}-6\sum_{m=0}^{\infty}(m+r)a_mx^{m+r-2}+12\sum_{m=0}^{\infty}a_mx^{m+r-2}+\sum_{m=0}^{\infty}a_mx^{m+r}$$ The lowest power is obviously $m+r-2$, and we get the indicial equation $r(r-1)-6r=0 \rightarrow r_1=0, r_2=7$ Next, we equate the powers, since the first 3 are the same, we change the index on the 4th series: $$s_4=\sum_{m=2}^{\infty}a_{m-2}x^{m+r-2}$$ Rearranging the series: $$((m+r)(m+r-1)-6(m+r)+12)a_m+a_{m-2}=0$$ Let $r=0$, I get: $$a_m=-\frac{a_{m-2}}{m(m-7)+12}$$ Then $$a_2=-\frac{a_0}{30}, a_4=-\frac{a_2}{56}=-\frac{a_0}{1680}.....$$ Now that just doesn't seem very likely. I triple-checked my solution and don't know where I might have made a mistake, can someone have a look?
As per @JerryHolmes' request, I will show how I solved it. There is no need to overcomplicate stuff by using Frobenius' method; we can seek a simple power-series solution $$y(x)=\sum_{n=0}^\infty a_nx^n.$$ If we substitute the series with its derivatives into the ODE, you should get $$\sum_{n=0}^\infty(n-4)(n-3)a_nx^{n-2}+\sum_{n=0}^\infty a_nx^n=0.$$ Notice that to combine the $2$ series, we have to set the first $2$ terms of the lagging series equal to $0.$ If we do so, we get $a_0=a_1=0.$ Now we can combine the $2$ series if we increase the index of the lagging series by $2$ $$\sum_{n=0}^\infty((n-1)(n-2)a_{n+2}+a_n)x^n=0.$$ So we get $$(n-2)(n-1)a_{n+2}+a_n=0.\tag{$\star$}\label{1}$$ Notice that we are going to solve for the coefficients via the $a_{n+2}$ term, so we don't want its coefficient in $\eqref{1}$ to be $0$ because if that happens, we can't solve for them so we will have to assume them as initial conditions. This happens for $n=1,2\implies a_{n+2}=a_3,a_4.$ So we know (or dealt with) $a_0,a_1,a_3,a_4.$ To solve for $a_2,$ plug $n=0$ into $\eqref{1}$ to get $a_2=0.$ Now, we need to solve for $a_{n>4}.$ We can now safely solve explicitly for $a_{n+2}$ without the risk of dividing by $0.$ $$a_{n+2}=-\frac{a_n}{(n-2)(n-1)}.$$ Since the coefficients jump by $2,$ an even coefficient will depend on the previous even one and the same follows for the odd coefficients. $$a_{2n+1}=-\frac{a_{2n-1}}{(2n-3)(2n-2)}\\ a_{2n}=-\frac{a_{2n-2}}{(2n-4)(2n-3)}$$ To avoid confusion in this part, set $2k+1=n+2$ to solve for the odd coefficients and set $2k=n+2$ to solve for the even coefficients.
Since we start with $a_3$ and $a_4,$ odd coefficients will depend on $a_3$ and even coefficients will depend on $a_4.$ If you solve for the first few coefficients (or just notice the fact that we just keep dividing by consecutive integers,) you will notice the pattern $$a_{2n+1}=(-1)^{n+1}\frac{a_3}{(2n-2)!}\\ a_{2n}=(-1)^{n+1}\frac{a_4}{(2n-3)!}.$$ Now we can write the general solution $$y(x)=a_3\sum_{n=1}^\infty \frac{(-1)^{n+1}}{(2n-2)!}x^{2n+1}+\\a_4\sum_{n=2}^\infty\frac{(-1)^{n+1}}{(2n-3)!}x^{2n}.$$ If we notice that $a_3=\frac{y^{(3)}(0)}{3!}$ and $a_4=\frac{y^{(4)}(0)}{4!}$ because of the Maclaurin series, we can write it as $$y(x)=\frac{y^{(3)}(0)}{3!}\sum_{n=1}^\infty \frac{(-1)^{n+1}}{(2n-2)!}x^{2n+1}+\\\frac{y^{(4)}(0)}{4!}\sum_{n=2}^\infty\frac{(-1)^{n+1}}{(2n-3)!}x^{2n}.$$ We can factor $x^3$ and increase the index of the first series by $1$ and that of the second series by $2$ to get $$y(x)=x^3\left(\frac{y^{(3)}(0)}{3!}\sum_{n=0}^\infty \frac{(-1)^n}{(2n)!}x^{2n}-\\\frac{y^{(4)}(0)}{4!}\sum_{n=0}^\infty\frac{(-1)^n}{(2n+1)!}x^{2n+1}\right)$$ which can be written in terms of familiar functions $$y(x)=x^3\left(\frac{y^{(3)}(0)}{3!}\cos x-\\\frac{y^{(4)}(0)}{4!}\sin x\right)$$ as your professor said.
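As a quick numerical sanity check of the closed form above (not part of the derivation), one can verify with central finite differences that $x^3\cos x$ and $x^3\sin x$ both satisfy $y''-\frac6x y'+\frac{12+x^2}{x^2}y=0$; the step size $h$ and the sample points below are arbitrary choices:

```python
import math

def residual(y, x, h=1e-5):
    """Finite-difference value of y'' - (6/x) y' + ((12 + x^2)/x^2) y at x."""
    d1 = (y(x + h) - y(x - h)) / (2 * h)
    d2 = (y(x + h) - 2 * y(x) + y(x - h)) / (h * h)
    return d2 - (6 / x) * d1 + ((12 + x * x) / (x * x)) * y(x)

for y in (lambda x: x**3 * math.cos(x), lambda x: x**3 * math.sin(x)):
    for x in (0.5, 1.0, 2.0, 3.0):
        assert abs(residual(y, x)) < 1e-3   # ~0 up to discretization error
```

The residual stays at the size of the finite-difference error, consistent with both functions being exact solutions.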
{ "language": "en", "url": "https://math.stackexchange.com/questions/4615692", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How many five-digit numbers are there that have exactly two odd digits in their decimal notation? I thought that in the first place you can choose 9 out of 10 digits (all but zero), then in the second place choose 4 out of 5. Or, as a second option, in the first place there can be 4 out of 5 odd digits, and in the second and third places we choose odd digits. It seems to me that I am confused by these combinations; I don't understand how to count the cases, given that zero cannot be the first digit.
As suggested in the comments, take the $\binom{5}{2}5^25^3$ numbers with at most five digits, and subtract the numbers with at most four digits, that is $\binom{5}{2}5^5-\binom{4}{2}5^4=44\cdot 5^4=27500$.
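A brute-force check of the count (a throwaway script, not part of the argument):

```python
# count five-digit numbers with exactly two odd decimal digits
count = sum(1 for n in range(10_000, 100_000)
            if sum(int(d) % 2 for d in str(n)) == 2)
print(count)  # 27500, matching C(5,2)*5^5 - C(4,2)*5^4
```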
{ "language": "en", "url": "https://math.stackexchange.com/questions/4615852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that equation $\det(A+xB)=0$ has real solutions if and only if $\det(A^{2}+B^{2})\geq(\det(A)+\det(B))^{2}$ We have $A,B$ two $2×2$ matrices with real values and we know $\det(AB-BA)=0$. Show that equation $\det(A+xB)=0$ has real solutions if and only if $$\det(A^{2}+B^{2})\geq(\det(A)+\det(B))^{2}.$$ I used the formula: $$\det(A+xB)=\det(A)+x^{2}\det(B)+x(\det(A+B)-\det(A)-\det(B)).$$ To have real solutions for $x$, we need to have: $$(\det(A+B)-(\det(A)+\det(B)))^{2}\geq4\det(A)\det(B).$$ Now I don't know how to use the fact that $\det(AB-BA)=0$. Is there any formula to rewrite $\det(AB-BA)$ in terms of $\det(A+B)$, $\det(A)$, $\det(B)$?
As noted in the comments, you need some additional assumptions for the statement to be true. I'll assume $B$ is invertible. Since these are $2\times 2$ matrices we could probably prove the statement fairly easily by just computing with elements, but I'm going to use some more advanced machinery at the risk of it being overkill. For any real $2\times 2$ matrices $X$ and $Y$ we have $$\det(X+Y)=\det(X)+\mathop{\mathrm{tr}}(X\mathbin{\square}Y)+\det(Y)\tag{1}$$ where $X\mathbin{\square}Y$ denotes the $1\times 1$ box product of $X$ and $Y$. Note the formula you used for $\det(A+xB)$ can be obtained by first taking $X=A$ and $Y=xB$ and then taking $X=A$ and $Y=B$ in (1). Also, the inequality you obtained is equivalent to $$\mathop{\mathrm{tr}}[(A\mathbin{\square}B)^2]=[\mathop{\mathrm{tr}}(A\mathbin{\square}B)]^2\ge 4\det(AB)\tag{2}$$ Taking $X=AB$ and $Y=-BA$ in (1) yields $$0=\det(AB-BA)=2\det(AB)-\mathop{\mathrm{tr}}(AB\mathbin{\square}BA)$$ so $$\mathop{\mathrm{tr}}(AB\mathbin{\square}BA)=2\det(AB)\tag{3}$$ Taking $X=A^2$ and $Y=B^2$ in (1) yields $$\det(A^2+B^2)=\det(A^2)+\mathop{\mathrm{tr}}(A^2\mathbin{\square}B^2)+\det(B^2)$$ so the desired inequality is $$\mathop{\mathrm{tr}}(A^2\mathbin{\square}B^2)\ge 2\det(AB)\tag{4}$$ By the Greub-Vanstone identity (see the link above), $$(A\mathbin{\square}B)^2=A^2\mathbin{\square}B^2+AB\mathbin{\square}BA\tag{5}$$ Now (2) is equivalent to (4) by (3) and (5), which establishes the result (assuming $B$ invertible).
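The quadratic expansion of $\det(A+xB)$ that the question starts from can be sanity-checked numerically for random $2\times2$ matrices (plain Python, throwaway helper names):

```python
import random

def det2(m):
    (a, b), (c, d) = m
    return a * d - b * c

def lincomb(m1, m2, t):
    """Entrywise m1 + t*m2 for 2x2 matrices given as nested lists."""
    return [[m1[i][j] + t * m2[i][j] for j in range(2)] for i in range(2)]

random.seed(0)
for _ in range(200):
    A = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(2)]
    B = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(2)]
    x = random.uniform(-5, 5)
    lhs = det2(lincomb(A, B, x))
    rhs = det2(A) + x * x * det2(B) + x * (det2(lincomb(A, B, 1)) - det2(A) - det2(B))
    assert abs(lhs - rhs) < 1e-6
```

This only checks the polarization identity used in the question, not the box-product machinery above.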
{ "language": "en", "url": "https://math.stackexchange.com/questions/4615976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How can I prove that $\sum_{k=1}^n \langle x,e_k\rangle e_k$ in a Hilbert space does not converge in norm? Let $X$ be a Hilbert space and $(e_k)_{k\geq 1}$ be a complete orthonormal set (i.e. a basis). Then define $$F_n:X\rightarrow X;~~~x\mapsto \sum_{k=1}^n \langle x,e_k\rangle e_k$$I want to prove that $||F_n-F_m||=1$ if $n\neq m$. W.l.o.g. we can assume $m<n$, then $$\begin{align}||F_n-F_m||&=\sup_{||x||=1}||F_n(x)-F_m(x)||\\&=\sup_{||x||=1} \left|\left|\sum_{k=m+1}^n \langle x,e_k\rangle e_k \right|\right|\\&=\sup_{||x||=1} \left(\sum_{k=m+1}^n |\langle x, e_k\rangle|^2\right)^{1/2}\end{align}$$ Now somehow I don't see why the last equality should be true. I read something that it has to do with the fact that $(e_k)$ is orthonormal, but I don't see why. Can someone please explain the last equality to me?
Let $\alpha_j:= \langle x,e_j\rangle$. By definition of the norm $$ \left\lVert\sum_{k=m+1}^n \alpha_k e_k\right\rVert^2 =\left\langle \sum_{j=m+1}^n \alpha_j e_j,\sum_{k=m+1}^n \alpha_k e_k\right\rangle $$ and by bilinearity of inner product, $$ \left\lVert\sum_{k=m+1}^n \alpha_k e_k\right\rVert^2 =\sum_{j=m+1}^n\sum_{k=m+1}^n \alpha_j\overline{\alpha_k } \left\langle e_j, e_k\right\rangle $$ and by orthonormality, $\left\langle e_j, e_k\right\rangle=1$ if $j=k$ and $0$ otherwise.
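Concretely, for a hand-picked orthonormal set in $\mathbb R^3$, the squared norm of $\sum_k \alpha_k e_k$ really is $\sum_k|\alpha_k|^2$ (a tiny sketch with plain floats; the vectors and coefficients are arbitrary choices):

```python
import math

e = [(1 / math.sqrt(2), 1 / math.sqrt(2), 0.0),
     (1 / math.sqrt(2), -1 / math.sqrt(2), 0.0),
     (0.0, 0.0, 1.0)]                      # an orthonormal set in R^3
alpha = [0.3, -1.2, 2.5]

v = [sum(a * ek[i] for a, ek in zip(alpha, e)) for i in range(3)]
norm_sq = sum(c * c for c in v)            # ||sum_k alpha_k e_k||^2
assert abs(norm_sq - sum(a * a for a in alpha)) < 1e-12
```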
{ "language": "en", "url": "https://math.stackexchange.com/questions/4616309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simulate the results of rolling a die using Uniform[0, 1] I saw this question in one of my textbooks: Let U be a random variable having a uniform(0,1) distribution. Describe how to simulate the outcome of the roll of a die using U. I know that the outcome of rolling a die follows discrete uniform distribution with 6 outcomes, and each outcome has a probability of 1/6. I'm not quite sure how to approach this question. The straightforward solution I thought about was adding 6 unif[0, 1] distributions together to obtain the result of rolling a die. Would that be reasonable? If not, could someone correct me? Thanks!
The 6 different sides of the die are all equally likely with probability $\frac{1}{6}$. So if you are given another random variable, just partition its image set into six equally likely subsets. In your case: If $X$ follows a uniform distribution on $[0,1]$, just define a function $F : [0,1]\rightarrow \{1,...,6\}$ via $$F(x) = 1, \text{ if } x \in\left[0,\frac{1}{6}\right],$$ $$\vdots$$ $$F(x) = 6, \text{if } x \in \left[\frac{5}{6},1\right],$$ and set $Y = F(X)$ as your new variable. This random variable will yield a number from $1$ to $6$ with probability of $\frac{1}{6}$ for each one, i.e. simulate the throw of a die.
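A minimal sketch of this construction in Python; the function name `roll` and the half-open intervals $[k/6,(k+1)/6)$ are my own choices here (any partition into six sets of probability $1/6$ works equally well):

```python
import random

def roll(u):
    """Map u in [0, 1) to a die face: u in [k/6, (k+1)/6) -> k + 1."""
    return int(6 * u) + 1

assert roll(0.0) == 1 and roll(0.5) == 4 and roll(0.999) == 6

random.seed(1)
counts = [0] * 6
for _ in range(60_000):
    counts[roll(random.random()) - 1] += 1
# each face should show up roughly 10000 times
assert all(abs(c - 10_000) < 600 for c in counts)
```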
{ "language": "en", "url": "https://math.stackexchange.com/questions/4616480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Eigenvectors of an isomorphism. Let $f: \mathbb{R}^{n} \to \mathbb{R}^{n}$ be a linear isomorphism, with $n \geq 4$. I have to prove that $f$ and $f^{3}$ have the same eigenvectors. Clearly, every eigenvector of $f$ is an eigenvector of $f^{3}$. For the converse, I think that if $v$ is an eigenvector of $f^{3}$ associated to the eigenvalue $\lambda \neq 0$, then $v$ is also an eigenvector of $f$ associated to the eigenvalue $\sqrt[3]{\lambda}$. But I am lost trying to prove this. Thanks in advance.
Let $n=4$ and $f(v)=Av$, where $A$ is a rotation by 120° in the first two coordinates, i.e. $$A=\begin{pmatrix}\cos(2\pi/3) & \sin(2\pi/3) & 0 & 0\\-\sin(2\pi/3) & \cos(2\pi/3)&0&0\\ 0&0&1&0\\ 0&0&0&1\end{pmatrix}.$$ Then $f$ has only two linearly independent eigenvectors, namely the third and fourth unit vectors, while $f^3$ is the identity and thus has four linearly independent eigenvectors, e.g. the four unit vectors.
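Both claims can be confirmed numerically for the nontrivial $2\times2$ rotation block (a small sketch; `matmul` is an ad-hoc helper):

```python
import math

c, s = math.cos(2 * math.pi / 3), math.sin(2 * math.pi / 3)
R = [[c, s], [-s, c]]        # the 2x2 rotation block of A

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

R3 = matmul(matmul(R, R), R)   # rotation by 360 degrees, i.e. the identity
assert all(abs(R3[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2))

# R itself has no real eigenvalues: tr(R)^2 - 4 det(R) = 1 - 4 < 0
assert (2 * c) ** 2 - 4 < 0
```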
{ "language": "en", "url": "https://math.stackexchange.com/questions/4617154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Iterated behaviour of Doubling map In dynamical systems, there is a map called the Doubling Map, defined as \begin{align} f:[0,1) & \rightarrow [0,1) & \\ f(x)&= \ \begin{cases} 2x & x< \frac{1}{2} \\ 2x-1 & \frac{1}{2} \leq x<1 & \end{cases} \\ \end{align} Equivalently $f(x) = 2x \bmod 1$. The iteration of this map for any $x \in [0,1)$ is given by $f^{n}(x) = 2^{n}x \bmod 1$. I found that the sequence of iterates, $f^{n}(x)$, for $x \in \{ \frac{r}{2^m}: r,m \in \mathbb{Z}^+, 0<r<2^m\}$ converges to $0$. I want to prove that these, $\{\frac{r}{2^m}: r,m \in \mathbb{Z}^+, 0<r<2^m\}$, are the only points for which the sequence of iterates converges. I would also like to characterize the convergence/behaviour of this sequence in a more general setting, i.e.: (1) What is the behaviour of the sequence $f^{n}(x)$ when $x$ is an irrational number in $[0,1)$? (2) What is the behaviour of the sequence $f^{n}(x)$ when $x$ is a rational number in $[0,1)$?
The best way to think of this dynamical system is via binary expansion. If $x=0.a_1 a_2 \ldots$, where $a_i \in \{0,1\}$, is the binary expansion of $x \in [0,1)$, then the binary expansion of $f(x)$ is $0.a_2 a_3 \ldots$. In other words, your map $f$ is conjugated to the shift operator $\sigma (a_1, a_2, \ldots) = (a_2, a_3, \ldots)$ acting on $X=\{0,1\}^\mathbb{N}$. Then, for example, the orbit of $x$ converges if and only if its binary expansion is eventually either all zeroes or all ones. Now to address your questions. (1) is much too broad to hope for a simple answer. The assumption that $x$ is irrational means that its binary expansion is not eventually periodic; but other than that it can be pretty much anything. In particular, the sequence $\{f^n(x)\}$ may or may not be dense in $[0,1)$, depending on whether all finite binary words eventually appear or not in the binary expansion of $x$. This is often open for specific irrationals, such as, for instance, $\pi \bmod 1 = \pi -3$. (2) is a lot simpler. If $x$ is rational, then its binary expansion is eventually periodic, and so the sequence $f^n(x)$ is also eventually periodic.
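Both behaviours for rationals can be watched directly with exact arithmetic (a quick sketch using the `fractions` module):

```python
from fractions import Fraction

def f(x):
    return (2 * x) % 1          # the doubling map

# dyadic rationals r/2^m are absorbed at 0
x = Fraction(5, 16)
for _ in range(4):
    x = f(x)                    # 5/8 -> 1/4 -> 1/2 -> 0
assert x == 0

# other rationals are eventually periodic: 1/3 -> 2/3 -> 1/3 -> ...
x, orbit = Fraction(1, 3), []
for _ in range(6):
    orbit.append(x)
    x = f(x)
assert orbit == [Fraction(1, 3), Fraction(2, 3)] * 3
```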
{ "language": "en", "url": "https://math.stackexchange.com/questions/4617417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inverse of $UAA^TU^T$ Let $U\in \mathbb{R}^{n\times m}$, $n<m$, be a matrix with orthonormal rows, $UU^T=I$, and let $A\in\mathbb{R}^{m\times k}$, $m<k$, be any general real matrix. What can I say about $(UAA^TU^T)^{-1}$ as a function of $U$ and $A$?
Note that $UAA^TU^T=(UA)(UA)^T$, so writing $B:=UA\in\mathbb{R}^{n\times k}$, the matrix in question is just the Gram matrix $BB^T$, which is invertible exactly when $UA$ has full row rank $n$, and then $$(UAA^TU^T)^{-1}=\big((UA)(UA)^T\big)^{-1}.$$ Beware that $U(AA^T)^{-1}U^T$ is not the inverse in general: $$(UAA^TU^T)\,U(AA^T)^{-1}U^T=UAA^T(U^TU)(AA^T)^{-1}U^T,$$ and for $n<m$ the matrix $U^TU$ is only an orthogonal projection, not the identity, so nothing cancels. There does not appear to be a simpler closed form in terms of $U$ and $AA^T$ separately.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4617647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the name for this relationship between a 1-form and a vector? Hi all, I have a question about Visual Differential Geometry and Forms - A Mathematical Drama in Five Acts (by Tristan Needham). This book shows two relationships between forms and vectors, and I have a question about the second one. The first one is a relationship between a 2-form $\Psi$ and a vector $\underline{\Psi}$, which is shown as (34.10) on page 377. Their corresponding components are equal, that is, $\Psi^i = \underline{\Psi}_i$. This relationship is called the Hodge (star) duality operator. The second one is a relationship between a 1-form $\phi$ and a vector $\underline{\phi}$, which is shown as (34.14) on page 379. Their corresponding components are equal, that is, $\phi_i = \underline{\phi}_i$. My question is: what is the name for this second relationship? P.S. This second relationship is so natural that it deserves a name, right?
Given any finite dimensional vector space $V$ over a field $k$, and a choice of basis $e_1,\dots,e_n$, there is a canonical isomorphism $\alpha\colon V\cong V^*$, where $V^*$ is the dual vector space to $V$, i.e., the vector space of linear maps $V\to k$. (Linear maps $V\to k$ are also called one-forms.) The isomorphism $\alpha$ is defined by sending the vector $e_i$ to the one-form $e^i$ (note the superscript) defined on a vector $v = v_1e_1 + \dots + v_ne_n$ by $$ e^i(v) = e^i(v_1e_1 + \dots + v_ne_n) = v_i. $$ In other words, $e^i$ just returns the $i$th component of the vector $v$. You should check that the one-forms $e^1,e^2,\dots,e^n$ form a basis for $V^*$. This basis is known as the corresponding dual basis for $V^*$. This is all linear algebra, but in differential geometry, this isomorphism is a simple version of the musical isomorphism, which is the more general term for the isomorphism between the tangent and cotangent bundles of a manifold. In the example in Needham's book, if we choose the standard basis $\mathbf e_1,\mathbf e_2,\mathbf e_3$ for $\mathbb R^3$, then the one-forms $dx^1,dx^2,dx^3$ are precisely the corresponding dual basis for $(\mathbb R^3)^*$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4617949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Rearranging cylinder equation Given the equation $(x-y)^2+(y-z)^2+(z-x)^2=r^2$, which when plotted for some radius $r$ results in a "rotated" cylinder (plotted with $r=1$): Is it possible to rearrange this equation to a form where the rotation is clearer? Perhaps something along the lines of $(\overrightarrow{p}-\overrightarrow{c})^2=r^2+(\overrightarrow{d}\bullet(\overrightarrow{p}-\overrightarrow{c}))^2$, where $\overrightarrow{p}$ is a point on the cylinder and the centre axis of the cylinder is defined by a point $\overrightarrow{c}$ on the axis and the axis direction $\overrightarrow{d}$. Is rearranging even needed, or can the rotation be easily read from the original equation?
With the orthonormal change of coordinates $$(s,t,u) = \left(\frac1{\sqrt{3}}(x + y + z), \frac{1}{\sqrt{2}}(x - y), \frac1{\sqrt{6}}(x + y - 2z)\right)$$ the equation becomes $$2t^2 + \frac12(\sqrt{3} u - t)^2 + \frac12(\sqrt{3} u + t)^2 = r^2$$ or equivalently, $$t^2 + u^2 = \frac13 r^2 .$$ This is a right cylinder on a circle of radius $r/\sqrt{3}$ around the origin in the $(t, u)$ plane (i.e. the $x + y + z = 0$ plane). The exact choice of the $t$ and $u$ axes on that plane is not important; I just chose the $t$-axis along the unit vector in the direction $(1, -1, 0)$.
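The change of coordinates can be verified numerically at random points (a throwaway check; note that $s$ drops out, only $t$ and $u$ matter):

```python
import math
import random

random.seed(0)
for _ in range(200):
    x, y, z = (random.uniform(-10, 10) for _ in range(3))
    t = (x - y) / math.sqrt(2)
    u = (x + y - 2 * z) / math.sqrt(6)
    lhs = (x - y) ** 2 + (y - z) ** 2 + (z - x) ** 2
    assert abs(lhs - 3 * (t * t + u * u)) < 1e-9   # lhs = 3(t^2 + u^2)
```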
{ "language": "en", "url": "https://math.stackexchange.com/questions/4618077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find minimum value of $(ab-2a+4)^2 + (bc-2b+4)^2+ (ca-2c+4)^2$ where $0 \leq a,b,c \leq2$ Find the minimum value of $(ab-2a+4)^2 + (bc-2b+4)^2+ (ca-2c+4)^2$ where $0 \leq a,b,c \leq 2$. For this kind of problem it's usually easy to guess that the minimum value is attained at $a=b=c$ or on the boundary. When $a=b=c$ we can easily find the minimum $27$. On the boundary, letting $c=2$, we can see the expression is greater than or equal to $24$. But this solution seems hacky and I'd like to find a better way.
For $(a,b,c)=(2,1,0)$ we'll get a value $24$. We'll prove that it's a minimal value. For $a=2$ by C-S we obtain: $$\sum_{cyc}(ab-2a+4)^2=(2b)^2+(bc-2b+4)^2+16=$$ $$=\frac{1}{2}(1+1)((2b)^2+(bc-2b+4)^2)+16\geq\frac{1}{2}(2b+bc-2b+4)^2+16=$$ $$=\frac{1}{2}(bc+4)^2+16\geq8+16=24,$$ which says that it's enough to solve our problem for $\{a,b,c\}\subset[0,2).$ Now, let $a=\frac{2x}{x+1},$ $b=\frac{2y}{y+1}$ and $c=\frac{2z}{z+1},$ where $x$, $y$ and $z$ are non-negatives. Thus, we need to prove that: $$\sum_{cyc}\left(\frac{4xy}{(x+1)(y+1)}-\frac{4x}{x+1}+4\right)^2\geq24$$ or $$2\sum_{cyc}\frac{(xy+y+1)^2}{(x+1)^2(y+1)^2}\geq3$$ or $$\sum_{cyc}(x^2y^2z^2+2x^2y^2z+x^2y^2-2x^2y+2x^2z+x^2+2x+1)\geq0,$$ which is true because $$\sum_{cyc}(x^2y^2z^2+2x^2y^2z+x^2y^2-2x^2y+2x^2z+x^2+2x+1)>$$ $$>\sum_{cyc}(x^2y^2-2x^2y+x^2)=\sum_{cyc}x^2(y-1)^2\geq0.$$
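A coarse grid search over $[0,2]^3$ is consistent with $24$ being the minimum, attained at $(2,1,0)$ (a sanity check only, of course not a proof):

```python
def F(a, b, c):
    return (a*b - 2*a + 4) ** 2 + (b*c - 2*b + 4) ** 2 + (c*a - 2*c + 4) ** 2

assert F(2, 1, 0) == 24          # the claimed minimizer

# grid of step 0.1 over [0, 2]^3
best = min(F(i / 10, j / 10, k / 10)
           for i in range(21) for j in range(21) for k in range(21))
assert best >= 24 - 1e-9
```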
{ "language": "en", "url": "https://math.stackexchange.com/questions/4618234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Finding conformal map verification I would like to know if this reasoning is correct: I want to find a conformal map from $\mathbb D$ to $\mathbb C$. My reasoning: First we seek a conformal map from $\mathbb H$ to $\mathbb D$, which is easy to find: $$A=\begin{pmatrix}1&-i\\1&i\end{pmatrix}\leftrightarrow \phi_A(z)=\frac{z-i}{z+i}\text{ and invert it to obtain :}\,\phi_A^{-1}(z)=-i\frac{z+1}{z-1}$$ such that $\phi_A^{-1}:\mathbb D\rightarrow\mathbb H$ is conformal. Now we seek to extend $\mathbb H$ to $\mathbb C$. For this, we consider doubling the argument, thus considering the map: $$\phi_{\mathbb D\rightarrow\mathbb C}(z)=(\phi_A^{-1})(z)^2=\left(-i\frac{z+1}{z-1}\right)^2=-\frac{(z+1)^2}{(z-1)^2}=-\frac{z+1}{z-1}$$ which is a conformal map from $\mathbb D$ to $\mathbb C$. Thanks in advance. NB: $\mathbb D=D(0,1)$, $\mathbb H$ is the upper open half plane.
While you used some very good ideas in this attempt, it turns out to be incorrect (and ultimately doomed): (1) In the last equality, you stripped the squares away incorrectly (nothing cancels there). Indeed, your proposed map is a fractional linear transformation, which will take the unit disk to a half-plane, not the entire plane. (2) Assuming we're talking about open sets here (the open unit disk, the open half plane), the squaring map on the half plane is not surjective: its image does not contain the positive real axis. So the second-to-last expression is also incorrect. (3) Finally, there is no conformal map from the unit disk (or any bounded set) onto the entire complex plane! If there were, its inverse would be a bounded entire function, hence constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4618351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing $x_n-x_nx_1+\sum_{k=1}^{n-1} (x_k-x_kx_{k+1})\leq\left\lfloor\frac{n}{2}\right\rfloor$, with $x_i\in[0,1]$ Let $x_1, x_2,\ldots, x_n$ be arbitrary numbers from the interval $[0,1]$ with $n>1$. Show that $$x_n-x_nx_1+\sum_{k=1}^{n-1} (x_k-x_kx_{k+1})\leq\left\lfloor\frac{n}{2}\right\rfloor$$ I tried to factor out the $x_k$ from each term to show that if the coefficient $x_k$ of $x_k(1-x_{k+1})$ is larger than $\dfrac{1}{2}$, then the term $x_{k-1}(1-x_k)$ must be smaller than $\dfrac12$, but I don't know where to go from here or if it is even the right approach.
*This is not an official answer.* Edit: I apologise for the inconsistency in my previous answer, hence it has been removed. While this question already has an accepted answer/correct answers, this method serves to confirm the answers/inequality. The following plots are for $n = 2, 3, 4$ respectively and display the inequality approaching values in $[0, \lfloor\frac{n}{2}\rfloor]$ but never exceeding $\lfloor\frac{n}{2}\rfloor$ computationally. Higher values of $n$ were not computed as they pose time restrictions in computation due to probability restrictions. [plot for $n = 2$] [plot for $n = 3$] [plot for $n = 4$]
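A text version of the same computational check, without the plots (random sampling; the sample sizes are arbitrary and this is of course only evidence, not a proof):

```python
import random

def g(xs):
    """The cyclic sum x_k (1 - x_{k+1}), indices mod n."""
    n = len(xs)
    return sum(xs[k] * (1 - xs[(k + 1) % n]) for k in range(n))

random.seed(0)
for n in (2, 3, 4, 5):
    worst = max(g([random.random() for _ in range(n)]) for _ in range(20_000))
    assert worst <= n // 2 + 1e-9   # never exceeds floor(n/2)
```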
{ "language": "en", "url": "https://math.stackexchange.com/questions/4618522", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How to derive a closed form of a recursion (maybe using generating functions) Let $a_0=9$ and consider the following recurrence relation: $$a_n=36(n+1)2^{n-2}+2a_{n-1},$$ I'm looking for the closed form of $\{a_n\}.$ I have tried using generating functions: \begin{align*} f(x)&=\sum_{n=0}^\infty a_nx^n\\ &=9 +\sum_{n=1}^\infty \left(4.5(n+1)2^{n+1}+2a_{n-1}\right)x^n\\ &=9 +4.5\sum_{n=1}^\infty (n+1)2^{n+1}x^n+2\sum_{n=1}^\infty a_{n-1}x^n\\ &=9 +\frac{4.5}{x}\sum_{n=1}^\infty (n+1)2^{n+1}x^{n+1} + 2x\sum_{n=1}^\infty a_{n-1}x^{n-1}\\ &=9 +\frac{4.5}{x}\sum_{n=2}^\infty n2^{n}x^{n} + 2xf(x) \end{align*} What should I do next? Is there a quicker way to solve this? Thanks in advance
We are given that $$ a_n = 36 (n+1) 2^{n-2} + 2a_{n-1}, \tag{0} $$ and we find that, for any positive integer $n$, we have \begin{align} a_n &= 36 (n+1) 2^{n-2} + 2a_{n-1} \\ &= 36 \left( 2^{n-2} \right) (n+1) + 2^1 a_{n-1} \tag{1} \\ &= 36 (n+1)2^{n-2} + 2 \left( 36 (n-1+1) 2^{n-1-2} + 2 a_{n-2} \right) \\ & \qquad \mbox{[using $n-1$ in place of $n$ in (0) above ]} \\ &= 36 \left( 2^{n-2} \right) \big( (n+1) + n \big) + 2^2 a_{n-2} \tag{2} \\ &= 36 \left( 2^{n-2} \right) \big( (n+1) + n \big) + 2^2 \left( 36 (n-2+1) 2^{n-2-2} + 2a_{n-2-1} \right) \\ & \qquad \mbox{[ using $n-2$ in place of $n$ in (0) above ]} \\ &= 36 \left( 2^{n-2} \right) \big( (n+1) + n + (n-1) \big) + 2^3 a_{n-3} \tag{3} \\ &= 36 \left( 2^{n-2} \right) \big( (n+1) + n + (n-1) \big) + 2^3 \left( 36 (n-3+1) 2^{n-3-2} + 2a_{n-3-1} \right) \\ &\qquad \mbox{[ using $n-3$ in place of $n$ in (0) above ]} \\ &= 36 \left( 2^{n-2} \right) \big( (n+1) + n + (n-1) + (n-2) \big) + 2^4 a_{n-4} \tag{4} \\ &= \ldots \\ &= 36 \left( 2^{n-2} \right) \big( (n+1) + n + (n-1) + (n-2) + \ldots + 2 \big) + 2^n a_0 \\ &\qquad \mbox{[ using the pattern suggested by (1), (2), (3), and (4) above ]} \\ &= 36 \left( 2^{n-2} \right) \big( 1 + 2 + \ldots + (n+1) -1 \big) + 2^n (9) \\ &= 9 \left( 2^n \right) \left( \frac{(n+1)(n+2)}{2} - 1 +1 \right) \\ &= 9 \left( 2^n \right) \frac{(n+1)(n+2)}{2} \\ &= 9 (n+1)(n+2) \left( 2^{n-1} \right). \end{align} Finally, we can use induction to verify that this formula is correct.
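The closed form agrees with the recurrence for the first few dozen terms; a quick integer-arithmetic check (note $36(n+1)2^{n-2}=9(n+1)2^n$, which avoids negative exponents at $n=1$):

```python
def closed_form(n):
    # 9 (n+1)(n+2) 2^(n-1); (n+1)(n+2) is even, so this stays an integer
    return 9 * (n + 1) * (n + 2) * 2 ** n // 2

a = 9
assert closed_form(0) == a
for n in range(1, 40):
    a = 9 * (n + 1) * 2 ** n + 2 * a      # a_n = 36(n+1)2^{n-2} + 2 a_{n-1}
    assert a == closed_form(n)
```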
{ "language": "en", "url": "https://math.stackexchange.com/questions/4618662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Convergence of series $\sum_{n \geq 1} {\frac{1^2+2^2+ \cdots + n^2}{n^4}}$ In the study of the following series $$ \sum_{n \geq 1} {\frac{1^2+2^2+ \cdots + n^2}{n^p}} $$ it is not hard to prove that it diverges for $p \leq 3$, since the sequence itself does not converge to 0. You can also conclude that the series converges for $p > 4$ by comparison with Riemann series. Raabe's test yields that the series diverges for $p$ between 3 and 4. However, it does not give any information for the case $p=4$. How can the case $p=4$ be settled?
We know that $\displaystyle \sum _{i=1}^{n} i^{2} \ =\ \frac{n( n+1)( 2n+1)}{6}$. So $\displaystyle \frac{\sum _{i=1}^{n} i^{2}}{n^{p}} \ =\ \frac{( n+1)( 2n+1)}{6n^{p-1}}$. Now if we take the limit $\displaystyle n\rightarrow \infty $ and apply L'Hospital's rule twice, we get $\displaystyle \frac{4n+3}{6( p-1) n^{p-2}}$ and then $\displaystyle \frac{4}{6( p-1)( p-2) n^{p-3}}$, which tends to $0$ precisely when $p>3$; so the terms vanish for $p>3$, but this alone does not settle convergence. Edit: As correctly pointed out in the comments, this series diverges for $p = 4$ and converges for every $p > 4$ (e.g. $p = 5$). This is because when $p = 4$, each term is effectively of the form $\frac 1n$, and this we know to be divergent; on the other hand, for $p = 5$ it becomes of the form $\frac 1{n^2}$, which converges (compare the famous Basel problem).
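The closed form of the numerator, and the $\sim\frac1{3n}$ behaviour of the terms at $p=4$, can both be checked quickly:

```python
# sum-of-squares formula
for n in range(1, 200):
    assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6

# at p = 4 the n-th term is ~ 1/(3n), so the series diverges like the harmonic one
n = 10 ** 6
term = n * (n + 1) * (2 * n + 1) / 6 / n ** 4
assert abs(3 * n * term - 1) < 1e-5
```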
{ "language": "en", "url": "https://math.stackexchange.com/questions/4618813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Is My Textbook Incorrect On Explaining How To Solve Annual Interest Compounded Monthly? I doubt that the textbook solution is correct. If I have $\$100$ and put it into a bank with a $6\%$ annual interest rate compounded monthly, how much money $y$ would I have after $t$ years? The equation that the textbook provides is $y = 100(1+x)^{12t}$. The textbook states that for annual interest compounded monthly, the rate $x$ equals $6\%/12$, which would equal $0.005$. But putting this into the equation with $t$ equaling $1$ for $1$ year, I get $106.1677812$ instead of the expected answer of simply $106$. Anyone know how to calculate $x$ to just get the correct exact answer of $106$? Is the equation incorrect as well?
The nominal interest rate compounded monthly is $i^{(12)}=6\%$. So the effective annual interest rate $i$ is $$ i=\left(1+\frac{i^{(12)}}{12}\right)^{12}-1\approx 6.168\% $$ So after $t$ years you have the amount $S(t)$ $$ S=S_0(1+i)^t=S_0\left(1+\frac{i^{(12)}}{12}\right)^{12\,t} $$ where $S_0$ is the initial amount of money. In general, for a nominal interest rate $i^{(m)}$ compounded $m$ times in a year, you have an effective annual interest rate $$ i=\left(1+\frac{i^{(m)}}{m}\right)^{m}-1 $$ The quantity $i_m=\frac{i^{(m)}}{m}$ is called the interest rate per conversion period. Compounded monthly means $m=12$, quarterly $m=4$, weekly $m=52$ and so on.
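Numerically (the variable names are my own; values rounded as in the question):

```python
S0, nominal, m = 100.0, 0.06, 12
i_eff = (1 + nominal / m) ** m - 1          # effective annual rate
assert abs(i_eff - 0.06167781) < 1e-7       # ~6.168%, not 6%
assert abs(S0 * (1 + i_eff) - 106.1677812) < 1e-5   # the textbook's 106.17...
```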
{ "language": "en", "url": "https://math.stackexchange.com/questions/4618966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is there a "closed form" expression for the powerset of a complement? Let us consider some consistent subset of naïve set theory, in which a universal set $U$ exists, the power set $\mathcal{P}(A)$ and complement $A'=U-A$ exists of any set $A$, on top of the usual binary operations $\cup$, $\cap$, $-$, $\Delta$ (symmetric difference), etc. Considering that $\mathcal{P}(U)=U$ (as $U$ is the universe, rather than some arbitrary domain of discourse), since all subsets of $U$ are elements of $U$ and vice-versa, is there a "closed-form" expression for $\mathcal{P}(A')$? By "closed-form" here, I mean an expression only in terms of $A$ and $U$ under the given operations (so, no set-builder notation). An immediate answer evades me, partly because naïve set theory is often unintuitive, but also because I am sleep deprived (so I apologize If I have overlooked something trivial). We may first ask the question of what exactly $\mathcal{P}(A')$ represents. It is not so hard to deduce that $\mathcal{P}(A')=U-\Gamma$, where $\Gamma$ is the family of all sets which are not disjoint with $A$. I.e; $$\Gamma=\{x\in U:x\cap A\neq\emptyset\}.$$ If we were allowed set-builder notation, this would indeed be our answer! However, since we don't, the question becomes whether or not $\Gamma$ can be represented in some closed-form. Equivalently, $$\mathcal{P}(A')=\{x\in U:x\cap A=\emptyset\}$$ can be seen to be the family of all sets which are disjoint with $A$ (which is, IMO, an interesting property!). Whether this is easier to be shown to have or not have a "closed form" is not obvious to me, but worth mentioning. Any and all advice to this recreational problem would be greatly appreciated.
Suppose the universe is uncountable, and that $A$ is countable. Then any set you can build using the given list of operations is either countable or has countable complement. But $\mathcal{P}(A^c)$ has neither of these properties.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4619133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Add a minimum of edges to make the graph Hamiltonian We have the graph $G=K_{9,15}$. I need to add some edges to it to make it a Hamiltonian graph. So, we know: $$|V(G)| = 24,\ |E(G)| = 135. $$ Every vertex has to have degree at least $12$ ($= 24 / 2$). So the question is: how can I find the minimum number of edges I need to add to my original graph, without drawing it? I can't find it on Google; maybe I did it wrong?
Another way to think about it: create a bracelet with 9 red beads and 15 blue beads, minimizing the number of pairs of adjacent beads that are the same color. If two red beads are adjacent, can you improve the count of adjacent pairs of the same color?
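The bracelet idea can be brute-forced for small bead counts; the minimum number of same-colour adjacencies comes out as $b-r$ (for $1\le r\le b$), which for $9$ red and $15$ blue suggests $15-9=6$ added edges (an exploratory script in this spirit, not a proof):

```python
from itertools import combinations

def min_same_color(r, b):
    """Minimum same-colour adjacent pairs over all cyclic arrangements."""
    n = r + b
    best = n
    for reds in combinations(range(n), r):
        red = set(reds)
        same = sum((i in red) == ((i + 1) % n in red) for i in range(n))
        best = min(best, same)
    return best

# small cases match the pattern b - r (spread the reds out as singletons)
for r, b in [(1, 3), (2, 4), (3, 5), (2, 6), (4, 4)]:
    assert min_same_color(r, b) == b - r

print(15 - 9)  # -> 6, the value this pattern suggests for K_{9,15}
```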
{ "language": "en", "url": "https://math.stackexchange.com/questions/4619278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the equation of the circle touching the line $(x-2)\cos\theta+(y-2)\sin\theta=1$ for all values of $\theta$ Find the equation of the circle touching the line $(x-2)\cos\theta+(y-2)\sin\theta=1$ for all values of $\theta$. An answer is given on the Toppr website. It says $(x-2)\cos\theta+(y-2)\sin\theta=\cos^2\theta+\sin^2\theta$; then, comparing coefficients, it says $x-2=\cos\theta,\ y-2=\sin\theta$. I am not sure about this step. Do you think this step is valid? Another answer exists on the Sarthaks website. It says the equation of the tangent to the circle $(x-h)^2+(y-k)^2=a^2$ at $(x_1,y_1)$ is $(x-h)x_1+(y-k)y_1=a^2$. Is this valid? I am not sure about this.
You can see that the family of lines is always at a distance $1$ from $(2,2)$. Hence this family is tangent to $(x-2)^2+(y-2)^2=1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4619501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solve the equation $8^x+3\cdot2^{2-x}=1+2^{3-3x}+3\cdot2^{x+1}$ Solve the equation $$8^x+3\cdot2^{2-x}=1+2^{3-3x}+3\cdot2^{x+1}$$ The given equation is equivalent to $$2^{3x}+\dfrac{12}{2^x}=1+\dfrac{8}{2^{3x}}+6\cdot2^x$$ If we put $a:=2^x>0$, the equation becomes $$a^3+\dfrac{12}{a}=1+\dfrac{8}{a^3}+6a$$ which is $$a^6-6a^4-a^3+12a^2-8=0$$ The LHS factors as $(a+1)(a-2)(a^4+a^3-3a^2-2a+4)$, which is in no case obvious. Let's say that we find the roots $2$ and $-1$; then how do we show that $(a^4+a^3-3a^2-2a+4)$ does not factor any more? Taking these into consideration, I believe there is another approach. Any ideas would be appreciated.
Hint Let's start from $$a^3+\dfrac{12}{a}=1+\dfrac{8}{a^3}+6a$$ Now rewrite it as $$\left[a^3-\left(\frac2a\right)^3\right]-6\left[a-\frac 2a\right]-1=0$$ $$\left(a-\frac2a\right)\left[a^2+\frac{4}{a^2}-4\right]-1=0$$ $$\left(a-\frac2a\right)^3-1=0$$ Can you finish?
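Finishing the hint (sketch): $(a-\frac2a)^3=1$ forces $a-\frac2a=1$ (the only real cube root), so $a^2-a-2=(a-2)(a+1)=0$, and since $a=2^x>0$ this gives $a=2$, i.e. $x=1$. A one-line check that $x=1$ solves the original equation:

```python
def lhs(x):
    return 8 ** x + 3 * 2 ** (2 - x)

def rhs(x):
    return 1 + 2 ** (3 - 3 * x) + 3 * 2 ** (x + 1)

assert lhs(1) == rhs(1) == 14   # x = 1 satisfies the original equation
```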
{ "language": "en", "url": "https://math.stackexchange.com/questions/4619652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Compute $\int_a^b e^x dx$ as a Riemann Sum I tried computing the integral $$\int_a^b e^x dx$$ as a Riemann sum. I therefore split the interval into $n$ parts of length $$\frac{b-a}{n}$$ and then took the limit of the Riemann sum $$\lim _{n \rightarrow \infty} \frac{b-a}{n} \sum_{k=0}^n e^{\frac{k(b-a)}{n}}.$$ When I computed this sum I got a limit, but not the right one. $$ \begin{aligned} & \int_a^b e^x d x=\lim _{n \rightarrow \infty} \frac{b-a}{n} \sum_{k=0}^n e^{\frac{k(b-a)}{n}} \\ & =\lim _{n \rightarrow \infty} \frac{b-a}{n} \sum_{k=0}^n\left(e^{\frac{b-a}{n}}\right)^k \\ & =\lim _{n \rightarrow \infty} \frac{b-a}{n}\left(\frac{\left(e^{\frac{b-a}{n}}\right)^{n+1}-1}{e ^{\frac{b-a}{n}}-1}\right) \\ & =\lim _{n \rightarrow \infty} \frac{(b-a)\left(\left(e^\frac{b-a}{n}\right)^{n+1}-1\right)}{n\left(e ^{\frac{b-a}{n}}-1\right)} \\ & =\lim _{n \rightarrow \infty} \frac{(b-a)\left(e^{\frac{(b-a)(n+1)}{n}}-1\right)}{n\left(\left(1+\frac{1}{n}\right)^{b-a}-1\right)} \\ & =\lim _{n \rightarrow \infty} \frac{(b-a)\left(e^{\frac{(b-a)(n+1)}{n}}-1\right)}{n\left(1+\frac{b-a}{n}-1\right)} \text { after Taylor Expansion } \lim _{x \to 0}(1+x)^a=1+x \cdot a\\ & =\lim _{n \rightarrow \infty} e^{\frac{(b-a)(n+1)}{n}}-1 \\ & =\lim _{n \rightarrow \infty} e^{b-a}-1 \\ & \end{aligned} $$ Does somebody spot my mistake?
Three mistakes: (1) The Riemann sum is over all integers $k$ in $[0,n-1]$ or $[1,n]$, not $[0,n]$. (2) When you divide $[a,b]$ into $n$ intervals of length $(b-a)/n$, the bounds of those intervals are the points $a+(k/n)(b-a)$ where $k \in [0,n]$. This is why you do not get the right answer. (3) Applying the Taylor expansion as you do is not rigorous. You should use $$\frac{e^{(b-a)/n}-1}{(b-a)/n} \to \frac{\mathrm{d}}{\mathrm{d}x}(e^x) \Big|_{x=0} = 1.$$
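With both index fixes applied (summing over $k=0,\dots,n-1$ at the points $a+k\frac{b-a}{n}$), the left Riemann sum does converge to $e^b-e^a$; a quick numerical sketch:

```python
import math

def left_riemann_exp(a, b, n):
    h = (b - a) / n
    return h * sum(math.exp(a + k * h) for k in range(n))   # k = 0, ..., n-1

a, b = 0.0, 1.0
approx = left_riemann_exp(a, b, 100_000)
assert abs(approx - (math.e - 1)) < 1e-4   # exact value e^1 - e^0
```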
{ "language": "en", "url": "https://math.stackexchange.com/questions/4620052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Canonical Filtration of a partial sum I wonder if the canonical filtration of a partial sum of a discrete stochastic process is equal to the canonical filtration of the process itself, e.g. is ${\sigma}(X_1,\ldots,X_n)={\sigma}(X_1,X_1+X_2,\ldots,X_1+\ldots+X_n)$, where $(X_n)_{n\in \mathbb{N}}$ is a discrete real-valued stochastic process? Clearly we have ${\sigma}(X_1,\ldots,X_n)\supseteq{\sigma}(X_1,X_1+X_2,\ldots,X_1+\ldots+X_n)$, but what about the other inclusion?
Yes, the reverse inclusion also holds. If we define $S_n = \sum_{k=1}^n X_k$ then $X_1 = S_1$ and $X_k = S_k-S_{k-1}$ for $k \ge 2$, so each $X_k$ is a measurable function of $(S_1,\ldots,S_n)$, from which we conclude that $\sigma(S_1,...,S_n) \supseteq \sigma(X_1,...,X_n)$.
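A small illustration of the differencing identity on a simulated path (plain Python; the Gaussian increments are an arbitrary choice):

```python
import random

# Simulate increments X_1, ..., X_10 and their partial sums S_n,
# then recover the increments from the sums: X_1 = S_1 and
# X_n = S_n - S_{n-1} for n >= 2.
random.seed(0)
X = [random.gauss(0, 1) for _ in range(10)]

S = []
total = 0.0
for x in X:
    total += x
    S.append(total)

recovered = [S[0]] + [S[i] - S[i - 1] for i in range(1, len(S))]
assert all(abs(x - y) < 1e-12 for x, y in zip(X, recovered))
```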
{ "language": "en", "url": "https://math.stackexchange.com/questions/4620229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A multiplicative function in number theory that is $0$ for all sufficiently large $n\in\mathbb{N}$ Dear MSE mathematicians, I have a query regarding verification of my proof of the following theorem. $\textbf{Theorem}$ Suppose that $f$ is a multiplicative function such that $$\lim_{p^{k}\rightarrow \infty} f(p^k)=0,$$ where $p^k$ denotes the prime powers. Then $$\lim_{n\rightarrow \infty} f(n)=0.$$ $\textbf{Proof}$ Since $$\lim_{p^{k}\rightarrow \infty} f(p^k)=0,$$ there are only finitely many prime powers such that $|f(p^k)|\geq 1$. Among all such prime powers, since there are finitely many, there exists a largest one, say $q^j$. Then define $$n=\prod_{\text{at least one prime power dividing } n\,>\,q^j,\ p|n}p^{v_p(n)}.$$ $\textbf{Claim}:$ There exists a positive integer $N$ such that every integer $>N$ takes the form of $n$. Proof: Define $$S=\{n \mid \text{at least one prime power that divides } n \text{ is} >q^j\}.$$ Looking at $S^c$, clearly every member of $S^c$ is divisible by a prime power less than $q^j$, so $S^c$ must contain a largest element; call that element $N$. Back to the problem: note that $N+i\in S$, and so every element $a\in A=S\cap \{N+i \mid i\in\mathbb{N}\}$ has $|f(a)|=0$, implying $$\lim_{n\rightarrow \infty}f(n)=0$$ $\blacksquare$ Thanks to everyone who provided suggestions and comments for this post. $\textbf{My Attempt 2}:$ Define $a(p,k)=p^k$. Since we have that $\lim_{p^k\rightarrow \infty} f(p^k)=0$, there exists $a(N,M)$ such that for all $q>N$ and $j>M$ we have $a(q,j)=0$. Now apply the Claim from attempt 1. Can we get through? Is my attempt 2 correct? Both attempts are wrong.
Note that $\lim f(p^k)=0$ does not imply that only finitely many $|f(p^k)|\ne0$! To mend the proof, consider $\epsilon>0$. From the existence of $\lim f(p^k)$, we see that $|f(p^k)|\ge1$ only for finitely many $p^k$. Let $N$ be the product of all $p^k$ with $|f(p^k)|\ge1$ and let $M$ be the product of the corresponding $|f(p^k)|$. Find $A$ with $|f(p^k)|<\epsilon/M$ for all $p^k>A$. For sufficiently large $n$, at least one prime power dividing $n$ is $>A$ so that $|f(n)|<M\cdot \epsilon/M$.
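To illustrate the point that infinitely many $f(p^k)$ may be nonzero, here is a numerical sketch with a concrete multiplicative function of my own choosing, $f(p^k) = 2/p^k$ (so $f(p^k)\to 0$ along prime powers, $f(2)=1$, and $f(n) = 2^{\omega(n)}/n$ is never zero), checking that $f(n)$ still tends to $0$:

```python
# f is multiplicative with f(p^k) = 2 / p^k, computed by trial division.
def f(n):
    val, m, p = 1.0, n, 2
    while p * p <= m:
        if m % p == 0:
            pk = 1
            while m % p == 0:
                m //= p
                pk *= p
            val *= 2.0 / pk
        p += 1
    if m > 1:          # leftover prime factor of m
        val *= 2.0 / m
    return val

# maximum of f over successive dyadic blocks [2^j, 2^(j+1))
maxima = [max(f(n) for n in range(2**j, 2**(j + 1))) for j in range(1, 14)]
print(maxima[0], maxima[-1])
assert maxima[0] == 1.0      # attained at n = 2
assert maxima[-1] < 0.01     # f(n) is uniformly small for large n
```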
{ "language": "en", "url": "https://math.stackexchange.com/questions/4620397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Count the number of doubles created from rolling three dice I have doubts about my calculations for the number of doubles created from rolling three dice. By doubles, I mean the outcomes of the form $aab$ where $a$ and $b$ are distinct values from $1$ to $6$. In the case of two dice, where I calculate the number of singles (outcomes of the form $ab$), I can calculate it like this: $\binom{6}{2}\cdot2!=30$ (number of ways to choose two values from the set $1, 2, ..., 6$ times the number of arrangements of $ab$). On the other hand, if I try to calculate the number of doubles created from rolling three dice, I get the incorrect result using the same logic as for the number of singles from two dice: $\binom{6}{2}\cdot\frac{3!}{2!}=45$ (number of ways to choose two values from the set $1, 2, ..., 6$ times the number of arrangements of $aab$). It seems like I need to multiply by $2$ to get the correct result: $90$. I read other answers saying that the $2$ represents the number of ways to choose a value for the pair, but I don't understand why we need to multiply by $2$ here. I need an intuitive explanation of this part. Related question: Why is this true? $$\binom{6}{1}\binom{5}{1}=\binom{6}{2}2$$ Why is there a need to multiply $\binom{6}{2}$ by $2$?
You wish to find the number of outcomes in which one number appears twice while another number appears once when three six-sided dice are rolled. There are six ways to select the value that appears twice, $\binom{3}{2}$ ways to select the two dice on which that number appears, and five ways to select the value that appears on the other die. Hence, there are $$6 \cdot \binom{3}{2} \cdot 5$$ such outcomes. As for your approach, there are $\binom{6}{2}$ ways to select which two values appear, two ways to select which value appears twice (you omitted this step), and $\binom{3}{2} = \frac{3!}{2!1!}$ ways to distribute the values on the two dice, so there are $$\binom{6}{2}\binom{2}{1}\binom{3}{2}$$ such outcomes. As for the problem you posed in the comments, about finding the number of outcomes in which exactly one value appears twice and three other values each appear once when a die is rolled five times: choose which of the six values will appear twice, choose on which two of the five dice that value will appear, choose which three of the five remaining values will each appear once, and arrange those three distinct values on the remaining three dice. $$\binom{6}{1}\binom{5}{2}\binom{5}{3}3!$$ Alternatively, choose which four of the six values will appear on the five dice, choose which of these four values will appear twice, choose on which two of the five dice that value will appear, and arrange the remaining values on the remaining three dice. $$\binom{6}{4}\binom{4}{1}\binom{5}{2}3!$$
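Both counts can be confirmed by brute-force enumeration; a short sketch modeling the dice as ordered tuples of values $1$ through $6$:

```python
from itertools import product

# Three dice: outcomes with exactly one pair (pattern aab) are exactly
# the ordered triples containing exactly two distinct values.
count = sum(1 for roll in product(range(1, 7), repeat=3)
            if len(set(roll)) == 2)
assert count == 90          # matches 6 * C(3,2) * 5

# Five dice: one value twice and three other values once each means
# the quintuple contains exactly four distinct values.
count5 = sum(1 for roll in product(range(1, 7), repeat=5)
             if len(set(roll)) == 4)
assert count5 == 3600       # matches C(6,1)*C(5,2)*C(5,3)*3!
```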
{ "language": "en", "url": "https://math.stackexchange.com/questions/4620548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is maximum number of circles in $\mathbb R^2$ that intersect a common circle, but are otherwise disjoint? Given a circle of a fixed radius in the plane, how many circles can be drawn intersecting that circle while being disjoint from each other? (If this type of question has a name or has been studied in the past, feel free to let me know! It seems vaguely related to sphere-packing stuff, but not exactly) More precisely, let $S^1 \subset \mathbb R^2$ be the unit circle centered at the origin. What is the maximum number of circles $S_i$ of radius $1$ such that $S_i \cap S^1 \neq \emptyset$ for all $i$, and $S_i \cap S_j = \emptyset$ for all $i\neq j$? I spent some time drawing examples and started with squares, since they are a lot easier to draw. I suspect the answer is 5 for squares, but I'm not totally sure, and would like to know how to go about proving this rigorously. Remark (edit): I also know about doubling dimension, which may have some role to play in the answer (emphasis on "may"!) but of course the definition is quite dissimilar from my problem in that it uses balls of radius $r/2$, and does not care whether the covering balls intersect, though maybe the doubling constant gives a weak upper bound or something. (Brian below thinks it is $7$.) Remark 2: Since I forgot about rotations, my answer of 5 for squares was not correct. Coincidentally, it ended up being correct for circles (maybe indirectly because I was drawing squares with the same orientation). At any rate, if anyone can find the answer for squares as well, feel free, or maybe I should post as a separate question.
Let $O$ be the centre of your original circle. Let $O_1$ and $O_2$ be the centres of any two such circles. As the circles don't intersect each other, it follows that $|O_1O_2|>2$. However, they intersect the original circle, so $|OO_1|,|OO_2|\le 2$. This implies that, in the triangle $\triangle OO_1O_2$ the side $O_1O_2$ is the longest, so the opposite angle $\angle O_1OO_2$ must be the largest, and therefore larger than $60^\circ$. Now, join $O$ with all the centres $O_i$ via rays. Any two of those rays make up an angle $>60^\circ$, and so those rays divide the full angle of $360^\circ$ into angles $>60^\circ$, which can only happen if there are at most five of them. It is also easy to show that you can draw five circles to satisfy your conditions. (Imagine a regular pentagon inscribed in your circle, and five other circles touching at the corners of the pentagon.) So, five is your answer.
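A quick numerical check of a version of the pentagon construction (centres pushed slightly inward to distance $d = 1.9$, an arbitrary choice; any $d$ with $1/\sin 36^\circ < d \le 2$ works):

```python
import math

# Five unit circles with centres at distance d = 1.9 from the origin,
# spaced 72 degrees apart around a regular pentagon.
d = 1.9
centres = [(d * math.cos(2 * math.pi * i / 5),
            d * math.sin(2 * math.pi * i / 5)) for i in range(5)]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# centre distance <= 2 means each circle meets the unit circle S^1
assert all(dist(c, (0.0, 0.0)) <= 2.0 for c in centres)

# pairwise centre distance > 2 means the five circles are disjoint
assert all(dist(centres[i], centres[j]) > 2.0
           for i in range(5) for j in range(i + 1, 5))
```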
{ "language": "en", "url": "https://math.stackexchange.com/questions/4620706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Riemann surfaces and coverings Suppose I have two Riemann surfaces $S_{g_1}$, $S_{g_2}$ of genus $g_1$ and $g_2$; I want to find a criterion for when the first surface covers the second. My guess is the Euler characteristic condition $2-2g_1 = k(2-2g_2)$. My question is in the topological setting, so I am interested in classical coverings. I have already proven the necessity of this condition using triangulations, but I am stuck on sufficiency. Any ideas?
One can prove sufficiency using some covering space theory combined with the classification of surfaces. To start, one needs to know that $\pi_1(S_{g_2})$ contains subgroups of every finite index $k \ge 2$. This is not hard to see using surjectivity of the Hurewicz homomorphism $\pi_1(S_{g_2}) \to H_1(S_{g_2}) \approx \mathbb Z^{2g_2}$: just compose that with your favorite surjective homomorphism $\mathbb Z^{2g_2} \to C_k$ onto the finite cyclic group $C_k$ of order $k$, and then take the kernel of the composition. Next, one uses covering space theory to produce a degree $k$ covering map $p : \widetilde S \to S_{g_2}$ with connected covering space $\widetilde S$. Next, one proves that $\chi(\widetilde S) = k \chi(S_{g_2}) = k(2-2g_2)$. I suspect from your post that you already know how to do this. One can lift a triangulation of $S_{g_2}$ to get a triangulation of $\widetilde S$, and then one can verify that the number of simplices of each dimension in $\widetilde S$ is equal to $k$ times the number of simplices of that dimension in $S_{g_2}$. Finally, one uses the classification of surfaces: since $S_{g_1}$ and $\widetilde S$ are both orientable and have the same Euler characteristic, there exists a homeomorphism $S_{g_1} \to \widetilde S$. Composing that homeomorphism with the covering map $\widetilde S \to S_{g_2}$, one obtains the desired covering map $S_{g_1} \to S_{g_2}$.
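The Euler-characteristic condition pins down the genus of a degree-$k$ unbranched cover: $2-2g_1 = k(2-2g_2)$ gives $g_1 = k(g_2-1)+1$. A minimal sketch (the function name is my own):

```python
# genus of a connected degree-k (unbranched) cover of a genus-g2 surface,
# from chi(cover) = k * chi(base):  2 - 2*g1 = k*(2 - 2*g2)
def cover_genus(g2, k):
    return k * (g2 - 1) + 1

for g2 in (1, 2, 3):
    for k in (2, 3, 4):
        print(f"degree {k} cover of genus {g2} has genus {cover_genus(g2, k)}")

assert cover_genus(2, 2) == 3   # a double cover of genus 2 has genus 3
assert cover_genus(1, 7) == 1   # every finite cover of the torus is a torus
```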
{ "language": "en", "url": "https://math.stackexchange.com/questions/4620915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }