Given $\frac{f(x)-f(0)}{g(x)-g(0)}=\frac{f'(\nu(x))}{g'(\nu(x))}$, find the value of the limit $\lim_{x \to 0^+} \frac{\nu(x)}{x}$. Good evening, I thought a lot about this issue. I think I have to apply Lagrange or Taylor. Can someone help me to calculate this limit? $$f,g \in C^2 [0,1]: \\ f'(0)g''(0) \ne f''(0) g'(0) \\ g'(x) \ne 0, \forall x \in (0,1) \\ \nu(x) \text{ is a real number }: \\ \frac{f(x)-f(0)}{g(x)-g(0)}=\frac{f'(\nu(x))}{g'(\nu(x))} \\ \lim_{x \to 0^+} \frac {\nu(x)}{x} $$ My reasoning, using Taylor: $ f(x)=f(0)+xf'(0)+x^2 \frac{f''(0)}{2} \\ g(x)=g(0)+xg'(0)+x^2 \frac{g''(0)}{2} \\ \frac{f'(0)+\frac{f''(0)}{2}x}{g'(0)+\frac{g''(0)}{2}x}=\frac{f'(\nu(x))}{g'(\nu(x)) } \\ \frac{f'(0)+\frac{f''(0)}{2}x}{g'(0)+\frac{g''(0)}{2}x}=\frac{f'(\nu(0))+\nu'(0)f''(\nu(0))x}{g'(\nu(0))+\nu'(0)g''(\nu(0))x } $ Can you give me a hint to continue the reasoning? Is there any mistake? Thanks.
$\nu(0)=0$ because $0\leq \nu(x)\leq x$. Then cross-multiply, and take $O(x)$ terms. From your last line, everything is evaluated at zero: $$(f'+f''x/2)(g'+g''\nu'x)\approx(f'+f''\nu'x)(g'+g''x/2)\\ f'g'+(f''g'+2f'g''\nu')x/2+Ax^2\approx f'g'+(2g'f''\nu'+f'g'')x/2+Bx^2\\ f''g'+2f'g''\nu'=2g'f''\nu'+f'g''\\ \nu'=1/2$$
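A quick numerical sanity check of $\nu'(0)=1/2$ (a sketch; the concrete choices $f=\sin x$, $g=x^2+x$ are mine, and any pair with $f'(0)g''(0)\ne f''(0)g'(0)$ should behave the same way):

```python
import math

# f(x) = sin(x):  f'(0) = 1, f''(0) = 0
# g(x) = x^2 + x: g'(0) = 1, g''(0) = 2  ->  f'(0)g''(0) != f''(0)g'(0)
def ratio(x):                 # (f(x) - f(0)) / (g(x) - g(0))
    return math.sin(x) / (x**2 + x)

def nu(x):
    # solve f'(v)/g'(v) = ratio(x) for v in [0, x] by bisection;
    # cos(v)/(2v + 1) is decreasing in v, so bisection is safe here
    lo, hi = 0.0, x
    for _ in range(80):
        mid = (lo + hi) / 2
        if math.cos(mid) / (2*mid + 1) > ratio(x):
            lo = mid
        else:
            hi = mid
    return lo

for x in [0.1, 0.01, 0.001]:
    print(x, nu(x) / x)       # tends to 0.5 as x -> 0+
```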
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Linear transformation invariant wrt. maximum. All matrices are real. Define the operator $\max$ on matrices as a function that returns the largest value in each row. Consider a matrix $F$ of size $n \times l$. The matrix has the property that any vector $v$ of the form $v(i) = F_{i,q(i)}$ for any mapping $q$ is in the range of some matrix $A$ of size $n \times k$. Is it possible to find a matrix $C$ of size $k \times n$ such that we have: $C \max F = \max C F$ If not, is it possible to at least find such a matrix $C$ with $O(k)$ rows?
Independent of the property of the matrix $F$, it is always possible to find such a matrix $C$. Pick any "rectangular permutation matrix" (not sure if that is a well-established term) $C$, which has in every row exactly one entry equal to $1$ and zeros elsewhere, and in every column at most one entry equal to $1$ and zeros elsewhere. Now, $CF$ basically contains in its rows a subset of the rows of $F$, and thus $\max CF$ is the same as picking the max values of the corresponding rows of $F$, that is, $C \max F$; hence $C \max F = \max C F$. Such $C$ are not very fancy and I am not sure if that is what you were looking for, but it should answer the stated problem.
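For concreteness, here is a small numpy check of this construction (the sizes and the selected rows are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(0)
n, l, k = 5, 4, 3
F = rng.standard_normal((n, l))

# a "rectangular permutation matrix": each row of C selects one row of F
C = np.zeros((k, n))
for row, col in enumerate([2, 0, 4]):   # arbitrary distinct row choices
    C[row, col] = 1.0

row_max = lambda M: M.max(axis=1, keepdims=True)
print(np.allclose(C @ row_max(F), row_max(C @ F)))   # True
```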
{ "language": "en", "url": "https://math.stackexchange.com/questions/1083948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the zeros of $f(x)=x^3+64$ $$f(x)=x^3+64$$ Again, I am really not sure how to do this. I tried to factor, but it clearly was not the right answer.
At the precalculus level, any cubic polynomial (i.e. a function of the form $f(x) = ax^3 + b x^2 + cx + d$) that you are asked to find the roots of will almost certainly have one "obvious root". In this example, can you find a value of $x$ such that $f(x) = 0$ - i.e. $$x^3 = -64$$ Once you have this one root, let's call it $a$, you can use this root to find the other roots by factorising the cubic. Since $f(a) = 0$, by the Remainder Theorem, we know that $f$ has $(x-a)$ as a factor. So find $b, c$ such that $$f(x) = (x-a)(x^2 +bx + c)$$ and you're left with a quadratic to factorise. The roots of $f$ are then just $a$ and the roots of the quadratic equation $x^2 + bx +c$, which you can solve in the usual way.
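If you want to check the resulting roots numerically, a one-liner along these lines works (numpy is just one convenient choice):

```python
import numpy as np

# numerical roots of x^3 + 64: the obvious real root is x = -4,
# and the quadratic factor x^2 - 4x + 16 contributes the complex pair
print(np.roots([1, 0, 0, 64]))
# approximately -4, 2 + 3.4641j, 2 - 3.4641j (i.e. 2 ± 2*sqrt(3) i), in some order
```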
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How to differentiate the function $f(\mathbf x) = \|\mathbf x\|^2 \mathbf x$? Let $f:\mathbb R^n\to\mathbb R^n$ be given by the equation $f(\mathbf x)=\|\mathbf x\|^2 \mathbf x$. Show that $f$ is of class $C^\infty$ and that $f$ carries the unit ball $B(\mathbf 0;1)$ onto itself in a one-to-one fashion. Show, however, that the inverse function is not differentiable at $\mathbf 0$. How does one differentiate a function involving the Euclidean norm? It's simple enough if it was just the norm itself, but multiplied by a vector I'm not sure how to go about it.
Hint: To make things easier for you, let's work on $n=2$ as always... $f(x) = f(x_1,x_2) = \begin{pmatrix} (x_1^2+x_2^2)x_1 \\ (x_1^2+x_2^2)x_2 \end{pmatrix}$
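Continuing the hint, one can let a computer algebra system produce the Jacobian in the $n=2$ case; a minimal sympy sketch:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', real=True)
f = sp.Matrix([(x1**2 + x2**2) * x1,
               (x1**2 + x2**2) * x2])
J = f.jacobian([x1, x2])
print(sp.simplify(J))
# entries: [[3*x1**2 + x2**2, 2*x1*x2], [2*x1*x2, x1**2 + 3*x2**2]]
```

Every entry of the Jacobian is a polynomial in $x_1, x_2$, which is one way to see that $f$ is $C^\infty$.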
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
A non-vanishing one-form on a manifold of arbitrary dimension So the problem I have is: Let $\theta$ be a closed 1-form on a compact manifold M without boundary. Further suppose that $\theta \neq 0$ at each point of M. Prove that $H^{1}_{dR}(M)\neq 0$. The only approach I see to doing this is finding a closed loop in M such that the integral over it is non-zero. In local coordinates, writing $\theta=f_{1}(x)dx^{1}+...+f_{n}(x)dx^{n}$ and using the fact that $f_{i}(x)\neq 0$ in some neighborhood of $x$ I can integrate along that segment to get a nonzero integral. Then, switching to different coordinates if needed, if $f_{i}(x_{0})=0$, then there is some other $f_{j}$ so that it is nonzero around that point, and picking the segment with the right orientation, I can integrate along this path to increase the value of the integral. Continuing this indefinitely, I get a constantly increasing value. However, I don't think this is the right approach since I can't guarantee that the curve closes. Though, I could imagine the following. If at some point this continuation crosses itself, then I'm done. If not, then eventually this curve must somehow become dense on the manifold, maybe arguing by compactness (this is just my intuition and is not rigorous; I am imagining a curve on the sphere or torus wrapping around without intersecting). Then my starting point will be close to some other point on the curve, so that when I connect them, the integral along the segment will be potentially negative, but not so negative as to cancel the contribution from the other segments. However, I think there should be a much simpler and rigorous solution to this. So, any thoughts and input would be appreciated!
Suppose there exists $f: M\rightarrow \mathbb{R}$ such that $df = \theta$. What happens at a maximum of $f$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to adapt the Woodbury matrix identity to this matrix formula The Woodbury matrix identity is defined as follows: $$ {(A+UCV)}^{-1}=A^{-1}-A^{-1}U{(C^{-1}+VA^{-1}U)}^{-1}VA^{-1} $$ I want to use the Woodbury matrix identity theorem to change the following matrix formula $$ W={(XX^T+\lambda G)}^{-1}XY $$ into the following form $$ W=G^{-1} X {(X^TG^{-1}X+\lambda I)}^{-1}Y $$ The dimensions are as follows: $$ X\in R^{p\times n}\\ G\in R^{p\times p}\\ Y\in R^{n\times c} $$ Could anyone help give some hints? UPDATE: From the two formulas for $W$ we can derive equations suggesting that the two $W$s should be equal.
I'm not sure the different forms of W, as stated, are equivalent. For one thing, they do not appear to be equivalent when the matrices involved are replaced by scalars. To illustrate, let $X=a$, $G=b$ and $Y=c$. Then, $$ \begin{eqnarray*} W{}={}{(XX^T+\lambda G)}^{-1}XY &{}\implies{}&W{}={}\frac{ac}{a^2{}+{}\lambda b}\,,\newline \end{eqnarray*} $$ while $$ \begin{eqnarray*} W{}={}G^{-1} X {(X^TG^{-1}+\lambda I)}^{-1}Y &{}\implies{}&W{}={}\frac{ac}{a{}+{}\lambda b}\,.\newline \end{eqnarray*} $$ Furthermore, direct manipulation of the first posted equation with $W$ gives $$ W{}={}G^{-1} X {(X^TG^{-1}X+\lambda I)}^{-1}Y\,, $$ which is different from the second $W$ equation posted but, now, seems consistent (assuming, in addition, that $X$ is invertible).
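The final identity can also be checked numerically; a small sketch with random matrices (the dimensions, $\lambda$, and the way $G$ is made positive definite are my choices, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(1)
p, n, c, lam = 4, 6, 2, 0.7
X = rng.standard_normal((p, n))
Y = rng.standard_normal((n, c))
A = rng.standard_normal((p, p))
G = A @ A.T + p * np.eye(p)      # symmetric positive definite, hence invertible

W1 = np.linalg.solve(X @ X.T + lam * G, X @ Y)
Ginv_X = np.linalg.solve(G, X)
W2 = Ginv_X @ np.linalg.solve(X.T @ Ginv_X + lam * np.eye(n), Y)
print(np.allclose(W1, W2))       # True
```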
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many times does the digit 6 appear when we count from 6 (base 8) to 400 (base 8)? How many times does the digit 6 appear when we count from 6 (base 8) to 400 (base 8)? I am not sure if I am going down the right path. I want to find the most accurate approach to solving this problem. 6 (base 8) = 6 (base 10). Also, 400 (base 8) = 256 (base 10). Now, a zero will come in the base 8 system if we encounter 10 somehow. So, 8 (base 10) will yield 10 (base 8). Now, in the units place the next 6 will come at 16 (base 8), which is eight places away from 6. So, 6 will come in the units place in gaps of 8. So, total number of 6's in the units place = int(((256-6)+1)/8)+1 = 32. Similarly, for the tens place, 6 will come 8 times per 64 (base 10) numbers. Therefore, total 6's in the tens place = 256/64*8 = 32. We ignore the effect of the first 5 numbers and take the whole as 256, because it does not matter whether I take the first 5 into account or not: they do not contain any 6. Now the total number of 6s = 32+32 = 64. Is this answer OK? I don't know the correct answer. Please help on the approach.
Your answer is correct. Your idea of starting from $1$ instead of $6$ makes it easy: One-eighth of the third-place (units) digits are $6$, and one-eighth of the second-place (eights) digits are $6$. So the answer is $\dfrac{400_8}{8} + \dfrac{400_8}{8} = 64_{10}$.
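A brute-force confirmation of the count (writing the numbers $6$ through $400_8 = 256_{10}$ in octal and counting the digit $6$):

```python
# count the digit 6 in the octal expansions of 6 (base 8) through 400 (base 8)
count = sum(oct(n)[2:].count('6') for n in range(6, 0o400 + 1))
print(count)   # 64
```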
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Getting wrong answer trying to evaluate $\int \frac {\sin(2x)dx}{(1+\cos(2x))^2}$ I'm trying to evaluate $$\int \frac {\sin(2x)dx}{(1+\cos(2x))^2} = I$$ Here's what I've got: $$t=1+\cos(2x)$$ $$dt=-2\sin(2x)dx \implies dx = \frac {dt}{-2}$$ $$I = \int \frac 1{t^2} \cdot \frac {-1}2 dt = -\frac 12 \int t^{-2}dt = -\frac 12 \frac {t^{-2+1}}{-2+1}+C$$ $$=-\frac 12 \frac {t^{-1}}{-1}+C = \frac 1{2t} +C = \frac 1{2(1+\cos(2x)}+C$$ I need to find what went wrong with my solution. I know the answer is correct but the line $dt=-2\sin(2x)dx$ concludes that $dx=\frac {dt}{-2}$ is false. But why? What should it be instead?
Perhaps simpler: $$\begin{cases}1+\cos2x=1+2\cos^2x-1=2\cos^2x\\{}\\\sin2x=2\sin x\cos x\end{cases}\;\;\;\;\;\;\;\;\;\;\;\implies$$ $$\int\frac{\sin2x}{(1+\cos2x)^2}=\int\frac{2\sin x\cos x}{4\cos^4x}dx=-\frac12\int\frac{(\cos x)'dx}{\cos^3x}=-\frac12\frac{\cos^{-2}x}{-2}+C=$$ $$=\frac14\sec^2x+C\ldots\ldots\text{and your solution is correct, of course}$$ using that $$\int\frac1{x^3}dx=-\frac12\frac1{x^2}+c\implies \int\frac{f'(x)}{f(x)^3}dx=-\frac1{2f(x)^2}+c$$
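A quick sympy verification that $\frac14\sec^2 x$ really is an antiderivative of the integrand:

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.sin(2*x) / (1 + sp.cos(2*x))**2
print(sp.simplify(sp.diff(sp.sec(x)**2 / 4, x) - integrand))   # 0
```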
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
Evaluating the limit $\lim_{x\to\infty}\frac{(1+1/x)^{x^2}}{e^x}$ Could anybody show me step by step why the following equality holds? $$\lim_{x\to\infty}\frac{(1+1/x)^{x^2}}{e^x} = e^{-1/2}$$ The most obvious method gives $1$ as an answer, but I understand that only the limit of $(1+1/x)^x$ is $e$; the expression itself is not actually equal to $e$. And now I am stuck.
Compute the limit of the logarithm of your function: $$ \lim_{x\to\infty}\log\frac{(1+1/x)^{x^2}}{e^x}= \lim_{x\to\infty}(x^2\log(1+1/x)-x) $$ Now set $x=1/t$: $$ \lim_{x\to\infty}(x^2\log(1+1/x)-x)= \lim_{t\to0^+}\frac{\log(1+t)-t}{t^2} $$ The limit becomes $\displaystyle\lim_{t\to0^+}\frac{t-t^2/2+o(t^2)-t}{t^2}=-\frac{1}{2}$. So your original limit is $e^{-1/2}$.
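Numerically, the logarithm $x^2\log(1+1/x)-x$ can be evaluated stably with `log1p`; a short check (the sample points are arbitrary):

```python
import math

# x^2 * log(1 + 1/x) - x, computed with log1p for accuracy
for x in [1e2, 1e4, 1e6]:
    print(x, x*x*math.log1p(1/x) - x)   # -> -0.5, so the limit is e^(-1/2)
```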
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
(Kelley's General Topology) Exercise G, Chapter 1. I am finding difficulties in solving the following exercise from Kelley's book, as in the title. Could anyone help me? Thanks in advance. If $A$ is dense in a topological space and $U$ is open, then $U \subseteq \overline{(A \cap U)}$.
Suppose that $x\in U$. Since $A$ is dense, $x$ is contained in the closure of $A$. It follows that there exists a net $(x_{\alpha})_{\alpha\in D}$ in $A$ (where $D$ is some index set directed by a relation $\geq$) such that $x_{\alpha}\to x$. By the definition of convergence of nets, since $x\in U$ and $U$ is open, there exists some $\alpha_{0}\in D$ such that $\alpha\geq \alpha_0$ implies that $x_{\alpha}\in U$. Hence, $(x_{\alpha})_{\alpha\geq\alpha_0}$ is a net in $A\cap U$ that also converges to $x$. It follows that $x$ is in the closure of $A\cap U$. Since $x\in U$ was arbitrary, the conclusion that $U\subseteq\overline{A\cap U}$ follows. Alternative proof, with no reference to nets $\phantom{---}$Let $x\in U$. Since $x$ is in the closure of $A$, either $x\in A$ or $x$ is an accumulation point of $A$; see Theorem 1.7 in Kelley (1955, p. 42). Case 1 $\phantom{---}$$x\in A$. Then, $x\in A\cap U\subseteq\overline{A\cap U}$, trivially. Case 2 $\phantom{---}$$x$ is an accumulation point of $A$. I will show that $x$ is an accumulation point of $A\cap U$. Let $V$ be an open set containing $x$. Then, $U\cap V$ is also an open set containing $x$. Therefore, $A\cap (U\cap V)\setminus\{x\}$ is not empty because $x$ is an accumulation point of $A$. Since $V$ was an arbitrary open neighborhood of $x$, it follows that $x$ is an accumulation point also of $A\cap U$—remember that $(A\cap U)\cap V\setminus\{x\}$ is not empty. Hence, $x$ is in the closure of $A\cap U$. Conclusion: $U\subseteq\overline{A\cap U}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Does the following type of "SVD" exist? SVD of $A$ gives $U$, $\Sigma$ and $V$ such that $A = U \Sigma V$. I am interested in a different problem. Given an $A$ and $\Sigma_1,\ldots,\Sigma_{n-1}$ diagonal matrices, such that we know that $$A = U_1 \Sigma_1 U_2 \Sigma_2 U_3 \Sigma_3 \ldots U_n$$ for some unitary matrices $U_1,\ldots,U_n$, we need to recover $U_1,\ldots,U_n$. EDIT: I can choose $\Sigma_i$ freely a priori, if that helps make the recovery of $U$ possible. I also don't mind if the $U$'s are identified up to a multiplication by $-1$ or $1$.
In general this can't be done. For example, in the $2 \times 2$ case with $V = \pmatrix{0 & 1\cr 1 & 0\cr}$ and $\Sigma_3 = \Sigma_1$, $$ I \Sigma_1 I \Sigma_2 V \Sigma_1 V = V \Sigma_1 V \Sigma_2 I \Sigma_1 I $$ In addition, you can always multiply the $U$'s by scalars of absolute value $1$ whose product is $1$.
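A numerical confirmation of the $2\times2$ counterexample (the diagonal entries are arbitrary choices):

```python
import numpy as np

S1, S2 = np.diag([2.0, 5.0]), np.diag([3.0, 7.0])
V = np.array([[0.0, 1.0], [1.0, 0.0]])   # a unitary (permutation) matrix
I = np.eye(2)

lhs = I @ S1 @ I @ S2 @ V @ S1 @ V
rhs = V @ S1 @ V @ S2 @ I @ S1 @ I
print(np.allclose(lhs, rhs))   # True: two different tuples of U's, same product
```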
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove every derived set is closed if this is the case for singleton sets Suppose that for each $x \in X$, the set of accumulation points of $\{ x \}$ is closed. Then for each $S \subseteq X$, the set of its accumulation points is closed. This is the last part of exercise $D$ of chapter 1. I managed to solve all the previous points (not indicated here), but unfortunately this one remains open for me. Thank you in advance!
Suppose $S \subseteq X$ and $x \notin S'$. We need to find some open $U$ containing $x$ and such that $U$ misses $S'$. $x \notin S'$ means that there exists some open $O$ containing $x$ such that $O \cap S \subseteq \{x\}$. If $O \cap S = \emptyset$, we can pick $U = O$. If $O \cap S = \{x\}$, then use that $X \setminus \{x\}'$ is open...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084874", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is continuous $f$ constant if every point of $\mathbb{R}$ is local minimum of $f$? Suppose $f:\mathbb{R} \rightarrow \mathbb{R}$ is continuous. Is $f$ constant if every point of $\mathbb{R}$ is local minimum of $f$? What metric spaces we can use instead of $\mathbb{R}$? I guess we have same result for $f:\mathbb{R}^n \rightarrow \mathbb{R}$.
This holds for $f:X\to\mathbb R$ if $X$ is a connected space. For each $x\in X$, $f^{-1}([f(x),\infty))$ is closed by continuity, and open by the condition on local minima. This set is nonempty because it contains $x$, hence it equals $X$ by connectedness. Thus for all $y\in X$, $f(y)\geq f(x)$. Because $x$ and $y$ were arbitrary, this implies that $f$ is constant. See also Continuous function with local maxima everywhere but no global maxima.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1084929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Proving that if the ratio of subsequent terms in a sequence has limit $<1$, then the sequence tends to 0. If $\lim_{n \rightarrow \infty} \left|\frac{u_{n+1}}{u_n}\right|=|a|<1$, prove that $\lim_{n \rightarrow \infty} u_n=0$. We have to prove that $\forall \epsilon>0: |u_n|<\epsilon, \forall n>N\in\mathbb{N}$ I started by: $$\lim_{n \rightarrow \infty} \left|\frac{u_{n+1}}{u_n}\right|<1\implies 0<|u_{n+1}|<|u_n|$$ So $|u_n|$ is strictly monotonically decreasing and bounded below, so it converges to a value $u^*$. So, $\forall\epsilon>0:|u_n-u^*|<\frac{\epsilon}{2}, \forall n>N$. Hence, we can create a set $$I=\bigcap_{n\in\mathbb{N}}I_n=\bigcap_{n\in\mathbb{N}}[0,|u_n|]$$ where $$\ldots\subset I_{n+2}\subset I_{n+1}\subset I_{n}$$ Then we have to show there exists only one value in the set, namely $0$, so that $u^*=0$. Then for $n>N$: $$u^*\leq |u^*-u_N|+|u_N-0|<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon\implies u^*<\epsilon\implies u^*=0$$ Is this proof correct? I think I made a wrong assumption at the end, but I wouldn't know how else to do it.
I don't think you can say $|u_N - 0| \leq \frac{\epsilon}{2}$ without implicitly assuming $u^* = 0$. Here's what I would do. Start from the point where you have defined $u^*$. Let $a < 1$ be the limit of the ratio of terms of the sequence. Then we have that $$\lim_{n \rightarrow \infty} \left| \frac{u_{n+1}}{u_n} \right| = a \\ \implies \\ \lim_{n \rightarrow \infty} |u_{n+1}| = \lim_{n \rightarrow \infty} |u_{n}| \left| \frac{u_{n+1}}{u_n} \right| = a \lim_{n \rightarrow \infty} |u_n| \\ \implies \\ u^* = au^* \\ \implies \\ u^*(1-a) = 0$$ Then as $a < 1$, $1 - a \not = 0$. So $u^* = 0$. Alternatively, you can use the fact $$\limsup_{n \rightarrow \infty} \left| \frac{u_{n+1}}{u_n} \right| < 1 \implies \sum_{n \in \mathbb{N}} u_n \text{ converges} \implies u_n \rightarrow 0$$ if these results have already been proven in whatever setting you are working in.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Characterization of Matrices Diagonalizable by Matrices P such that P times P^Transpose is Diagonal Let $M$ be a square matrix with complex entries. What is a characterization of $M$ such that $M = P^{T} D P$, where both $D$ and $P^{T} P$ are diagonal matrices? For example, such a characterization includes all real symmetric matrices using only orthogonal matrices for $P$ (so that $P^{T} P$ is the identity matrix, which of course is diagonal).
It is a theorem due to Takagi that any complex (entrywise) symmetric matrix may be written as $M=PDP^T$, and $P$ may be chosen to be unitary. As others have noted, it is clear that only symmetric matrices can be factored in this way, so that symmetry is both sufficient and necessary.
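The "only symmetric" direction is easy to see numerically as well; a sketch building a random unitary $P$ via a QR factorization (my construction, just for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
Z = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
P, _ = np.linalg.qr(Z)          # a (random) unitary matrix
D = np.diag(rng.random(4))      # nonnegative diagonal, as in Takagi's theorem

M = P @ D @ P.T                 # transpose, not conjugate transpose
print(np.allclose(M, M.T))      # True: any such product is complex symmetric
```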
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How find the maximum possible length of OC, where ABCD is a square, and AD is the chord of the circle? Given a circle $o(O(0,0), r=1)$. How to find the maximum possible length of $OC$, where $ABCD$ is a square, and $AD$ is the chord of the circle? I have no idea how to do this, can this be proved with simple geometry?
Place the chord $AD$ horizontally, with $A=(-\cos\theta,\sin\theta)$ and $D=(\cos\theta,\sin\theta)$ on the unit circle; the square $ABCD$ then has side $2\cos\theta$ with $C=(\cos\theta,\sin\theta+2\cos\theta)$, and we see that $$\begin{align} |\overline{OC}|^2 &= \cos^2\theta + ( \sin\theta+2\cos\theta )^2 \\ &= \cos^2\theta + \sin^2\theta + 4 \cos\theta\sin\theta + 4 \cos^2\theta \\ &= 3+2 \sin 2\theta + 2 \cos 2\theta \\ &= 3+2\sqrt{2}\left( \sin 2\theta \cos45^\circ + \cos 2\theta \sin45^\circ \right) \\ &= 3+2\sqrt{2}\sin(2\theta+45^\circ) \end{align}$$ This value is clearly maximized when $\sin(2\theta+45^\circ) = 1$ (which happens for $\theta = 22.5^\circ$), so that $$|\overline{OC}|^2 = 3 + 2\sqrt{2} = ( 1 + \sqrt{2} )^2 \quad\to\quad |\overline{OC}| = 1 + \sqrt{2}$$
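A numerical double check of the maximum (a brute-force scan over $\theta$):

```python
import numpy as np

theta = np.linspace(0, np.pi/2, 200001)
oc = np.sqrt(np.cos(theta)**2 + (np.sin(theta) + 2*np.cos(theta))**2)
i = oc.argmax()
print(oc[i], 1 + np.sqrt(2))   # both about 2.41421356
print(np.degrees(theta[i]))    # about 22.5
```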
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Strongly continuous mapping implies bounded mapping Hi, does anyone know how to show the following result: if we have a reflexive Banach space $X$ and a mapping $A: X \rightarrow X^{*}$ (not necessarily linear) which is strongly continuous, which means $$u_{n} \rightharpoonup u~~~\text{in }X\implies A(u_{n}) \rightarrow A(u)~~\text{ in }X^{*}$$ then does it follow that $A$ is also bounded, in the sense that $A$ takes bounded sets to bounded sets? Thanks for any help.
Thanks for responses in the comments. Is this then okay? Proof: Assume there is some bounded set $B \subset X$, where $A(B)$ is unbounded in $X^{*}$. Then choose a sequence $\{ A(u_{n}) \}_{n \in \mathbb{N}} \subset A(B)$ such that $\| A(u_{n}) \| > n$. Note then that $\{ u_{n}\}_{n} \subset B$ is bounded in $X$, so by Kakutani's Theorem and the Eberlein–Šmulian Theorem it follows that there exists a subsequence $\{ u_{n_{k}} \}_{k}$ such that $u_{n_{k}} \rightharpoonup u$ in $X$. It follows then from the strong continuity assumption that $A(u_{n_{k}}) \rightarrow A(u)$; the sequence is therefore bounded. But we also have $\| A(u_{n_{k}}) \| \geq k$, which contradicts $\{ A(u_{n_{k}}) \}_{k}$ being bounded. $\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is it that an ideal is homogeneous if and only if it is generated by homogeneous elements? Hartshorne says the following on pg. 9 An ideal is homogeneous if and only if it is generated by homogeneous elements. Take $\langle x+y,x^3+y^3\rangle$. It is generated by homogeneous elements. But how is it homogeneous? Clearly $x+y+x^3+y^3$ is not homogeneous.
Being homogeneous does not mean that every element of the ideal is homogeneous; it means that for every element of the ideal, each of its homogeneous components also lies in the ideal. If an ideal $I$ is homogeneous and a possibly non-homogeneous element $P(x,y)$ belongs to $I$, then each homogeneous component of $P(x,y)$ belongs to $I$. Therefore, given any system of generators of $I$, each of their homogeneous components belongs to $I$; in other words, $I$ is generated by the homogeneous components of a system of not necessarily homogeneous generators. Conversely, if $I$ is generated by homogeneous elements $g_i$, write any $f=\sum a_ig_i\in I$ and take homogeneous components: each component of $f$ is again a combination of the $g_i$, hence lies in $I$, so $I$ is homogeneous. In your example, $x+y+x^3+y^3\in\langle x+y,x^3+y^3\rangle$ is indeed not homogeneous, but its homogeneous components $x+y$ and $x^3+y^3$ both lie in the ideal, which is exactly what homogeneity requires.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $C^\infty(\mathbb{R}^n)$ is dense in $L^2(\mathbb{R}^n, (1 + |\xi|^2)^s d\xi)$ I would like to show that $C^\infty(\mathbb{R}^n)$ is dense in the space $L^2(\mathbb{R}^n, (1 + |\xi|^2)^s d \xi)$ (here, $s$ is an arbitrary element of $\mathbb{R}$). I am familiar with the standard argument that if $\phi \in C^\infty_0(\mathbb{R}^n),$ $\int \phi = 1$, $\phi_\epsilon = \epsilon^{-n} \phi (\epsilon^{-1}( \cdot))$ and $f \in L^2(\mathbb{R}^n, d \xi)$, then each $f \ast \phi_{\epsilon} \in C^\infty$ with $f \ast \phi_{\epsilon} \to f$ in $L^2(\mathbb{R}^n, d \xi)$ as $\epsilon \to 0^+$. I am wondering if this same style of argument can be used to prove density in $L^2(\mathbb{R}^n, (1 + |\xi|^2)^s d \xi)$? If we try to reproduce the old argument verbatim, then we run into a bit of trouble because the factor $(1 + |\xi|^2)^s$ now lies inside the integral, and it's not quite clear to me how to define convolution in the setting of this more general measure space $L^2( \mathbb{R}^n, (1 + |\xi|^2)^s d \xi)$. Hints or solutions are greatly appreciated.
Hint. A possible approach is to use the fact that the Schwartz space $\mathcal S(\mathbb{R}^n)$, the space of functions all of whose derivatives are rapidly decreasing, is dense in $L^2(\mathbb{R}^n, (1 + |\xi|^2)^s d \xi)$ (recall that a rapidly decreasing function is essentially a function $f(\cdot)$ such that $f$ and all of its derivatives exist everywhere on $\Bbb R^n$ and go to zero as $|x| \to \infty$ faster than any inverse power of $|x|$) and that this space $\mathcal S(\mathbb{R}^n)$ is a subspace of $C^\infty(\mathbb{R}^n)$, giving the desired density.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
An analogue to Cantor's theorem Cantor's theorem states that for all sets $$|A| < |2^A|$$ I was interested in a similar proposition. If $A$ is a set, denote by $A! := \{f : A \rightarrow A \mid f \text{ is a bijection}\}$. Is it true in general that $|A| < |A!|$? It is not too difficult to show that $|\mathbb{N}| < |\mathbb{N}!|$. To see this, let $B \subset \mathbb{N!}$ be the set of permutations which are either the identity or have some number of even naturals swapped with their right neighbor. (0 1 (3 2) 4 5 (7 6) 8 9 10 11 12 ... is an example). Then $B$ is uncountable because if we map unswapped pairs to 0 and swapped pairs to 1, this constitutes a bijective mapping into the set of infinite binary strings, which we know to be uncountable by the classic diagonalization argument. Then as $B \subset \mathbb N!$, $\mathbb N!$ is uncountable. Unfortunately this proof does not yield an approach to the general case. Any ideas?
Taking a hint from Unit’s comment down below (ultimately coming from Factorials of Infinite Cardinals by Dawson and Howard, it seems), here’s a case differentiation proving the result in ZF (as far as I can tell). In case $|A| = |2 × A|$: Mapping $2^A → (2×A)!,~ B ↦ σ_B$, where $σ_B$ swaps the two copies of $B$ in $2×A = A \sqcup A$ pointwise and fixes the rest, is injective. If $2×A \cong A$ you have $(2×A)! \cong A!$ and hence this proves $|A| < |2^A| ≤ |A!|$. In case $|A| < |2 × A|$: Assume $|A| > 2$ and fix a two element set $2_A ⊂ A$ and a check-point $ξ ∈ A\setminus2_A$. Then the mapping $2_A×A → A!,~(α,x) ↦ (α~ξ~x)$ is evidently left-inverted by $A! → 2_A×A,~τ ↦ (τ^{-1}(ξ),τ(ξ))$, hence it’s injective, proving $|A| < |2×A| ≤ |A!|$. I hope this is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Continuous map to a subspace I'm studying the book Topology, Geometry, and Gauge Fields by Gregory L Naber. This is for self study. I'm trying to prove the first part of Lemma 1.1.2 Let $Y$ be a subspace of $Y'$. If $f:X \to Y'$ is a continuous map with $f(X) \subseteq Y$, then, regarded as a map into $Y$, $f:X \to Y$ is continuous. My attempt is as follows: To show that $f:X \to Y$ is continuous we need to show that if $U$ is open in $Y$ then $f^{-1}(U)$ is open in $X$. We know that $f:X \to Y'$ is continuous, so for any open set $U'$ of $Y'$ it follows that $f^{-1}(U')$ is open in $X$. Note since $Y$ is a subspace of $Y'$ then it has the relative topology $T = \{ Y \cap U' : U' \in T'\}$, where T' is the topology of $Y'$. So if $U$ is open in $Y$ then $U = Y \cap U'$ for some open subset $U'$ of $Y'$. So now we have $f^{-1}(U) = f^{-1}(Y \cap U')$ and it is at this point where I'm having trouble moving forward. If this function were injective then I could simply write $f^{-1}(Y \cap U') = f^{-1}(Y) \cap f^{-1}(U') = X \cap f^{-1}(U')$ which is open. But I'm not given that it is injective so somehow I need to use the fact that $f(X) \subseteq Y$. I feel like I need to concentrate on $f^{-1}(Y \cap U')$ in conjunction with $f(X) \subseteq Y$. Am I on the right track here? I'm just not sure of how to move forward. Thanks
Because $f(X) \subseteq Y$, $f^{-1}(W) = f^{-1}(Y \cap W)$ for any set $W \subseteq Y'$. (Proof: If $x \in X$ is such that $f(x) \in W$, then $f(x) \in Y \cap W$. Conversely, if $f(x) \in Y \cap W$, then $f(x) \in W$.) So, in your proof, we have $f^{-1}(U) = f^{-1}(Y \cap U') = f^{-1}(U')$, which is open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does Green's $\mathcal{J}$-relation define a total order on the equivalence classes? In a semigroup $S$ we define $a\le_\mathcal{J} b$ iff $a=xby$ for some $x,y\in S^1$. Defining $a\equiv_\mathcal{J} b$ by $a\le_\mathcal{J} b$ and $b\le_\mathcal{J} a$ gives a partial order on $S/\equiv_\mathcal{J}$. Is this order always total?
No it is not always a total order. For example, in the semigroup $S$ with elements $a$, $b$, and $c$ and multiplication in the table below: \begin{equation*} \begin{array}{c|ccc} &a&b&c\\\hline a&a&a&a\\ b&a&b&a\\ c&a&a&c \end{array} \end{equation*} The $\mathscr{J}$-order on $S$ has $b$ and $c$ incomparable, $a\leq_{\mathscr{J}} b$ and $a\leq_{\mathscr{J}} c$. More generally, every partially ordered set with a minimum element is order-isomorphic to the partial order of the $\mathscr{J}$-classes of some semigroup. A reference for this is Theorem 4(i) in the paper C. J. Ash and T. E. Hall, Inverse semigroups on graphs, Semigroup Forum 11 (1975) 140–145.
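One can verify by brute force both that the table defines a semigroup and that $b$ and $c$ are $\mathscr{J}$-incomparable; a short sketch (note the table is equivalent to the rule $xy = x$ if $x = y$ and $xy = a$ otherwise):

```python
from itertools import product

S = ['a', 'b', 'c']
mul = {(x, y): (x if x == y else 'a') for x, y in product(S, S)}

# the table really defines a semigroup: associativity holds
assert all(mul[mul[x, y], z] == mul[x, mul[y, z]] for x, y, z in product(S, S, S))

def below(u, v):
    # u <=_J v iff u = x*v*y for some x, y in S^1 (S with an identity adjoined)
    left = {v} | {mul[x, v] for x in S}
    two_sided = left | {mul[w, y] for w in left for y in S}
    return u in two_sided

print(below('a', 'b'), below('a', 'c'))   # True True
print(below('b', 'c'), below('c', 'b'))   # False False: b and c are incomparable
```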
{ "language": "en", "url": "https://math.stackexchange.com/questions/1085938", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why do complex eigenvalues correspond to a rotation of the vector? We have a linear transformation $T: \Bbb R^m \to \Bbb R^n$ defined by $T(x)=Ax$ for $x \in \Bbb R^m$ and $A \in M_{n \times m}(\Bbb R)$. I understand why real-valued eigenvalues of $A$ correspond to scaling the length of the associated eigenvectors, but why is it that complex eigenvalues are said to rotate the eigenvector? If you have an eigenvector $x = (x_1, x_2, x_3)$ whose eigenvalue is $\lambda = a+bi$, how is $\lambda x = ((a+bi)x_1, (a+bi)x_2, (a+bi)x_3)$ a rotation of $x$? Shouldn't a rotation just be something like $\sin$'s and $\cos$'s multiplied by each component? Where do the imaginary parts fit in? Thanks.
For a real matrix, if $a + bi$ is an eigenvalue, so is $a - bi$. And in fact, the classic matrix with this pair of eigenvalues is $$ \begin{bmatrix} a & -b \\ b & a \end{bmatrix}. $$ You probably know some theorems that say you can sometimes diagonalize a matrix by changing basis. In the cases where diagonalization isn't possible (for real matrices), what you can get is $2 \times 2$ blocks like that one on the diagonal. So what does that $2 \times 2$ matrix do? Well, it multiplies lengths by $r = \sqrt{a^2 + b^2}$. So let's assume we've already factored that out into a diagonal matrix: $$ \begin{bmatrix} a & -b \\ b & a \end{bmatrix} = \begin{bmatrix} r & 0 \\ 0 &r \end{bmatrix} \cdot \begin{bmatrix} a' & -b' \\ b' & a' \end{bmatrix} $$ so that $a'^2 + b'^2 = 1$. Letting $\theta = \arctan(b'/a')$, we get $\sin \theta = b', \cos \theta = a'$, so this remaining matrix is just a rotation by angle $\theta$. Does that help?
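A small numpy illustration of this decomposition (the values of $a$ and $b$ are arbitrary):

```python
import numpy as np

a, b = 1.5, 0.8
A = np.array([[a, -b], [b, a]])
print(np.linalg.eigvals(A))            # a ± b i

r = np.hypot(a, b)
theta = np.arctan2(b, a)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(np.allclose(A, r * R))           # True: scaling by r composed with a rotation
```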
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 5, "answer_id": 2 }
Polynomial with a root modulo every prime but not in $\mathbb{Q}$. I recently came across the following fact from this list of counterexamples: There are no polynomials of degree $< 5$ that have a root modulo every prime but no root in $\mathbb{Q}$. Furthermore, one such example is given: $(x^2+31)(x^3+x+1)$ but I have not been able to prove that this does have the property above. How can such polynomials be generated, and can we identify a family of them?
If you just want an easy example of a polynomial that has a root modulo every prime but not in $\mathbb Q$ — just take e.g. $$ (x^2-2)(x^2-3)(x^2-6) $$ (it has this property since the product of two non-squares mod p is a square mod p). One more interesting example is $x^8-16$ (standard proof uses quadratic reciprocity). As for the possibility of a complete description of all such polynomials — I'm skeptical.
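A brute-force check of the first example for small primes (sympy's `primerange` is just a convenience):

```python
from sympy import primerange

f = lambda x: (x*x - 2) * (x*x - 3) * (x*x - 6)
for p in primerange(2, 1000):
    assert any(f(x) % p == 0 for x in range(p)), p
print("has a root modulo every prime below 1000")
```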
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Counterexample: the Poincare inequality does not hold on unbounded domains The Poincare inequality states that if the domain $\Omega$ is bounded in one direction by length $d>0$ then for any $u\in W_0^{1,p}(\Omega)$ we have $$ \int_\Omega|u|^p\,dx\leq \frac{d^p}{p}\int_\Omega |\nabla u|^p\,dx $$ Now I assume the domain $U\subset \mathbb R^N$ contains a sequence of balls $B(x_n,r_n)$, where $x_n\in U$ and $r_n\to \infty$, and I want to prove that the previous Poincare inequality fails on $W_0^{1,p}(U)$. Yes, of course, if I have such balls in $U$ then the domain $U$ cannot be bounded in one direction, and hence I am done. But honestly, is that it? I feel uncomfortable with my argument... Is there something more going on? Could you help me to write a more serious argument?
Actually you can build an example out of almost any function: Just notice that if $u: B(0,1)\to \mathbb{R}$, then $u_r(x)=u(x/r)$ is defined in the ball of radius $r$ and you get $$ \| u_r\|_{p,B(0,r)} = r^{n/p}\| u\|_{p,B(0,1)}, \quad \| \nabla u_r\|_{p,B(0,r)} =r^{-1} r^{n/p} \| \nabla u\|_{p, B(0,1)}. $$ Translating appropriately you get an example in the balls $B(x_k, r_k)$. Also notice that your argument doesn't work: What you have is "$U$ bounded in one direction, then we have Poincare in the domain". However you're actually using the other direction (which you don't know), i.e. "If Poincare holds in $U$, then $U$ is bounded in one direction" (actually what you want to prove is a weaker version of this statement).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$\int \left[\left(\frac{x}{e}\right)^x + \left(\frac{e}{x}\right)^x\right]\ln x \,dx$ Integrate: $$\int \left[\left(\frac{x}{e}\right)^x + \left(\frac{e}{x}\right)^x\right]\ln x \,dx$$ This question looks like of the form of $$\int\ e^x(f(x)+f'(x))\,dx,$$ but don't know how to get the proper substitution?
Hint Try with \begin{align*} y & = \left(\frac{e}{x}\right)^{x}\\ \ln y & = x[1-\ln x]\\ \frac{y^{'}}{y} & = -\ln x. \end{align*} Now for the first component think of a similar idea.
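Following the hint, a natural candidate antiderivative is $(x/e)^x - (e/x)^x$ (my guess from the $e^x(f+f')$ pattern, not stated in the hint); sympy confirms that it differentiates back to the integrand:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
F = (x/sp.E)**x - (sp.E/x)**x
integrand = ((x/sp.E)**x + (sp.E/x)**x) * sp.log(x)
print(sp.simplify(sp.diff(F, x) - integrand))   # 0 (expected)
```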
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Definition of product of modules I have been given this definition of a product between modules: if $I$ is an indexing set and each $M_i$ is an $R$-module, then the product $\prod \limits_{i \in I} M_i$ is defined as the set consisting of $I$-indexed tuples $(x_i)_{i\in I}$ with $x_i \in M_i$ for each $i \in I$, which is made into an $R$-module by componentwise addition and multiplication. My problem is understanding this definition; could anyone give me a basic example of what the product of two modules $M_1$ and $M_2$ would be, just so I could see how it works practically? Thanks in advance for the help!
Consider $\prod \limits_{i \in I} M_i$. The sum is $$( a_i)_{i \in I } + ( b_i)_{i \in I } = ( a_i + b_i)_{i \in I }$$ The action of R : $$ r \cdot ( a_i)_{i \in I } = ( r \cdot a_i)_{i \in I }$$
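As a concrete toy example (my own choice; any two modules will do), take the $\mathbb Z$-modules $M_1=\mathbb Z/2\mathbb Z$ and $M_2=\mathbb Z/3\mathbb Z$ with componentwise operations:

```python
# Z/2 x Z/3 as a Z-module with componentwise operations
M = [(a, b) for a in range(2) for b in range(3)]   # the six elements of the product

add = lambda u, v: ((u[0] + v[0]) % 2, (u[1] + v[1]) % 3)
act = lambda r, u: ((r * u[0]) % 2, (r * u[1]) % 3)  # action of r in Z

print(add((1, 2), (1, 2)))   # (0, 1)
print(act(5, (1, 1)))        # (1, 2)
```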
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $Y^2 + X^2(X+1)^2$ is irreducible over $\mathbf R$ Show that $Y^2 + X^2(X+1)^2$ is irreducible over $\mathbf R$. Are there some general tricks for avoiding barbaric computations in general case?
If it were reducible it would factor into two non-unit factors. Either both of them are of degree 1 in $Y$ or one of them is of degree zero in $Y$ and the other is of degree 2 in $Y$. In the second case we see that the factor that is of degree zero in $Y$ must divide $Y^2$. Therefore it must be a unit, which is a contradiction. Assume then that the two factors are of degree one in $Y$. So $$Y^2+X^2(X+1)^2=(A(X)Y+B(X))(C(X)Y+D(X))=A(X)C(X)Y^2+(A(X)D(X)+C(X)B(X))Y+B(X)D(X)$$ From this $A(X)$ and $C(X)$ are units. So, we can assume they are $1$. We get $$Y^2+X^2(X+1)^2=(Y+B(X))(Y+D(X))=Y^2+(B(X)+D(X))Y+B(X)D(X)$$ From this $B(X)+D(X)=0$ and $X^2(X+1)^2=B(X)D(X)$. Therefore $$X^2(X+1)^2=-B^2(X).$$ Therefore, $$1=-B^2_{0}$$ where $B_0$ is the leading coefficient of $B(X)$. But there is no such $B_0$ in the reals whose square is $-1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Could one be a friend of all? The social network "ILM" has a lot of members. It is well known: if you choose any 4 members of the network, then one of these 4 members is a friend of the other 3. To prove or disprove: is there then always one member who is a friend of all members of the social network "ILM"? Note: if member A is a friend of member B then member B is also a friend of member A. I don't find an approach right now. Any kind of help or advice will be really appreciated.
Suppose there isn't a person who knows everyone. Then take a vertex $v$; he doesn't know everyone, so in particular there is a $w$ he does not know. If $w$ doesn't know anybody, we are done; if he does, select a friend of $w$, preferably somebody who does not know somebody besides $v$ (if everybody that knows $w$ knows everybody except possibly $v$, you can argue that everybody who knows $w$ does not know $v$, hence you can pick any other two vertices and there will be no one knowing all three). So suppose there is a friend of $w$, called $u$, that does not know someone else, called $t$. In the group $v,w,u,t$ there is no one who knows the rest.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Closed form for integral of inverse hyperbolic function in terms of ${_4F_3}$ While attempting to evaluate the integral $\int_{0}^{\frac{\pi}{2}}\sinh^{-1}{\left(\sqrt{\sin{x}}\right)}\,\mathrm{d}x$, I stumbled upon the following representation for a related integral in terms of hypergeometric functions: $$\small{\int_{0}^{1}\frac{x\sinh^{-1}{x}}{\sqrt{1-x^4}}\,\mathrm{d}x\stackrel{?}{=}\frac{\Gamma{\left(\frac34\right)}^2}{\sqrt{2\pi}}\,{_4F_3}{\left(\frac14,\frac14,\frac34,\frac34;\frac12,\frac54,\frac54;1\right)}-\frac{\Gamma{\left(\frac14\right)}^2}{72\sqrt{2\pi}}{_4F_3}{\left(\frac34,\frac34,\frac54,\frac54;\frac32,\frac74,\frac74;1\right)}}.$$ I'm having some trouble wading through the algebraic muckity-muck, so I'd like help confirming the above conjectured identity. More importantly, can these hypergeometrics be simplified in any significant way? The "niceness" of the parameters really makes me suspect it can be... Any thoughts or suggestions would be appreciated. Cheers!
$$\sinh^{-1}(\sqrt{\sin x}) = \sum\limits_{n=0}^{\infty} \frac{(-1)^n (2n)!}{2^{2n}(n!)^2}\frac{{(\sin x)}^{(2n+1)/2}}{2n+1}$$ so the integral is equivalent to $$\sum\limits_{n=0}^{\infty} \frac{(-1)^n (2n)!}{2^{2n}(n!)^2(2n+1)}\int_0^{\pi/2}(\sin x)^{(2n+1)/2}\,dx$$ You have $$\int_0^{\pi/2}(\sin x)^{(2n+1)/2}\,dx=\frac{\sqrt\pi}{2}\frac{\Gamma\left(\dfrac{2n+3}{4}\right)}{\Gamma\left(\dfrac{2n+5}{4}\right)}$$ (I haven't done the work myself, but I've seen the derivation somewhere...) Now it suffices to show that $$\sum\limits_{n=0}^{\infty} \frac{(-1)^n (2n)!}{2^{2n}(n!)^2(2n+1)}\frac{\Gamma\left(\dfrac{2n+3}{4}\right)}{\Gamma\left(\dfrac{2n+5}{4}\right)}=\sqrt\pi\ln2$$
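If the reduction above is correct, the original integral should equal $\frac{\sqrt\pi}{2}\cdot\sqrt\pi\ln 2=\frac{\pi\ln 2}{2}$; a quick mpmath comparison (numerical evidence only, not a proof):

```python
from mpmath import mp, quad, asinh, sqrt, sin, pi, log

mp.dps = 25
integral = quad(lambda x: asinh(sqrt(sin(x))), [0, pi/2])
print(integral)          # ~ 1.0888...
print(pi * log(2) / 2)   # ~ 1.0888..., matching if the reduction is correct
```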
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Continuous bijection at boundary of open set Suppose $f:U \to V$ is a continuous bijection, where $U \subset \mathbb{R}^n$,$V \subset\mathbb{R}^m$ and $U$ is open. Suppose further that $U \ni x_n \to x \notin U$. Then $y_n:=f(x_n)$ may not necessarily converge. I have two questions: * *Define $d_n:= \text{distance}(y_n,\partial V)$. Is it true that $d_n \to 0$? *If $y_n$ does converge, say to $y$, then must we have $y \in \partial V$? (Edit: initially I forgot to mention the function is bijective.)
Both of them are incorrect. For example, for $n = m=1$, $U= (0,3\pi/2)$ and $f(x) = \cos x$. Then if $x_n \in U$ and $x_n \to 3\pi/2$, we have $y_n \to 0$ and $0$ is an interior point of $V = [-1, 1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1086922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to evaluate a sum which contains limit variables? How to evaluate a sum which contains limit variables? For example: $$\lim_{n\to\infty}\sum_{i=1}^n\frac{n-1}n\frac{1+i(n-1)}n $$ And would the result necessarily be rational, because each term appears to be the multiplication of two rational fractions?
Notice that in the sum, each piece has a common factor of $\frac{n-1}{n^2}$ which can be pulled out of the sum as per the distributive property since it does not depend on the index, $i$. $$\lim\limits_{n\to\infty}\sum\limits_{i=1}^n \frac{n-1}{n}\frac{1+i(n-1)}{n} = \lim\limits_{n\to\infty}\left(\frac{n-1}{n^2}\left(\sum\limits_{i=1}^n 1+i(n-1)\right)\right)$$ Now, what remains in the sum can be split into two separate sums, with the one on the right again having a common factor of $(n-1)$ present which can be brought outside. $$=\lim\limits_{n\to\infty}\left(\frac{n-1}{n^2}\left(\sum\limits_{i=1}^n 1\right)+\frac{(n-1)^2}{n^2}\left(\sum\limits_{i=1}^ni\right)\right)$$ The left sum is equal to $n$ since there are $n$ occurrences of adding $1$ together, and the right sum is the $n$th triangle number, given by $\frac{n(n+1)}{2}$ $$=\lim\limits_{n\to\infty}\left(\frac{n-1}{n^2}(n)+\frac{(n-1)^2}{n^2}\left(\frac{n(n+1)}{2}\right)\right)$$ Hopefully you can take it from here.
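Taking the sum exactly as written, sympy can finish the computation; note that the limit actually diverges, so if a finite answer is expected the original problem presumably had an extra $1/n$ factor somewhere (a sketch):

```python
import sympy as sp

i, n = sp.symbols('i n', integer=True, positive=True)
s = sp.summation((n - 1)/n * (1 + i*(n - 1))/n, (i, 1, n))
print(sp.simplify(s))          # closed form in n
print(sp.limit(s, n, sp.oo))   # oo: the sum grows without bound
```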
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Simplifying $2^{30} \mod 3$ I simply cannot seem to get my head around this subject of mathematics. It seems so counter-intuitive to me. I have a question in my book: Simplify $2^{30}\mod 3$ This is my attempt: $2^{30}\mod 3 = (2\mod 3)^{30}$ (using the laws of congruences). $2\mod 3$ must be equal to $2$, right? $0\cdot 3 + 2 = 2$, so the remainder is $2$. Therefore, I concluded that $2^{30}$ is the answer. However, my book says $1$. How can this be? Can anyone explain this to me? It's probably really easy, and I feel kind of stupid for asking, but I simply don't get it. And also, are there any sites on the internet that provide similar problems with which you can learn the concepts? Thank you.
You are being asked to find the least non-negative residue modulo $3$ (or perhaps the residue with least absolute value). For the first the options are $0,1,2$ and for the second you can choose $-1,0,1$. The residue is essentially the same as the remainder on division by $3$. Now $2^{30}=(2^{10})^3=(1024)^3$ is of the order of $10^9$ and the point is to find a strategy to get a smaller value. Now $1024\equiv 1$ is easy to establish. But the easiest route, once you have correctly identified the problem, is to notice that $2\equiv -1$. Quite often, these power problems involve finding a small power which is equivalent to $\pm 1$, since these are easy to work with. For these kinds of problems - especially if they become more advanced - you should have Fermat's Little Theorem and the Euler-Fermat theorem to hand. But first get very used to the residue being the remainder on division.
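Python's built-in modular exponentiation confirms this instantly:

```python
# fast modular exponentiation confirms the book's answer
print(pow(2, 30, 3))   # 1
print((-1)**30 % 3)    # 1, matching the trick 2 ≡ -1 (mod 3)
```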
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Confused by how to derive the derivative of $f(\boldsymbol{x})=g(\boldsymbol{y})$ I was watching an online tutorial and saw this derivation. It seems the author took the derivative with respect to y on the left side and to x on the right side. I thought dx should always be in the denominator and should appear on both sides of the equation. Is it a partial derivative? Or maybe my misunderstanding of the notation? Could anyone explain how this works? FYI the link of the tutorial is https://www.youtube.com/watch?v=aXBFKKh54Es&list=PLwJRxp3blEvZyQBTTOMFRP_TDaSdly3gU&index=98; the differentials are taken at around 2'20". Much appreciated! Happy New Year.
It's curious that you should call this a "derivation" because this is more accurate than saying that the derivative in the usual sense was taken. The construction used was the exterior derivative, or differential. This satisfies $$df(x,y)=\frac{\partial f}{\partial x}dx+\frac{\partial f}{\partial y}dy$$ for a function depending only on $x$ and/or $y$, so if $f(x,y)=\ln x$, then $$df(x,y)=\frac{1}{x}dx+0dy=\frac{dx}{x}$$ and if $g(x,y)=\ln y$ then $$dg(x,y)=0dx+\frac{1}{y}dy=\frac{dy}{y}$$ The reason it is called a derivation is for any functions $p,q$ we have $$d(pq)=qdp+pdq$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
What is the value of this continued fraction? I am curious about the value of the continued fraction $$1+\cfrac{1}{2+\cfrac{1}{3+\cfrac{1}{4+\cfrac{1}{5+\cfrac{1}{6+\dots}}}}}.$$ * *Can we evaluate it ? *Is it a nice value ? Clearly it should be a transcendental number. But I have no idea about calculate it.
Here's the info on this continued fraction and others. http://mathworld.wolfram.com/ContinuedFractionConstants.html
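Since the answer is link-only, here is a numerical evaluation by backward recurrence, together with a comparison against $I_0(2)/I_1(2)$, the Bessel-quotient form this fraction is reported to take (see the linked page):

```python
from mpmath import mp, mpf, besseli

mp.dps = 30
t = mpf(0)
for k in range(60, 1, -1):   # evaluate 1 + 1/(2 + 1/(3 + ...)) from the inside out
    t = 1 / (k + t)
print(1 + t)                              # 1.43312742...
print(besseli(0, 2) / besseli(1, 2))      # same digits
```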
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Formula to calculate the number of arches with a certain angle that can be fitted in a circle I'm looking for a formula to calculate how many arches with a certain angle can be fitted around a circle, or in a circular formation. I want to use that formula to write a procedure in MSWlogo for design purposes. Applying a trial and error method I found a few data points, like $a=135$ degrees, $b=90$ degrees and $8$ arches can make a perfect circular formation; similarly $a=120$, $b=90$ and $12$ arches can form a circle too, etc. But I failed to find any general mathematical formula to use in the program. My class teacher told me that I'll find the answer when I study higher geometry and refused to answer me, so I have no choice other than asking here. Please help me find a generalized formula to do the job. (ps: English is not my first language, so if there are any grammatical mistakes I'm sorry for that)
The argument of the normal to an arch changes by $a$ as we traverse that arch. The normal then gets turned back by $\pi-b$ by the angle. Thus, each arch is rotated $a+b-\pi$ from the previous one. Thus, there should be $$ n=\frac{2\pi}{a+b-\pi} $$ arches before they repeat the argument of the normal. In degrees rather than radians, this becomes $$ n=\frac{360^\circ}{a+b-180^\circ} $$ Examples For $a=135^\circ$ and $b=90^\circ$, we get $n=\frac{360^\circ}{135^\circ+90^\circ-180^\circ}=8$. For $a=120^\circ$ and $b=90^\circ$, we get $n=\frac{360^\circ}{120^\circ+90^\circ-180^\circ}=12$.
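The formula is easy to sanity check against the data points from the question:

```python
def arches(a_deg, b_deg):
    # n = 360 / (a + b - 180), in degrees
    return 360 / (a_deg + b_deg - 180)

print(arches(135, 90))   # 8.0
print(arches(120, 90))   # 12.0
```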
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is $\rightarrowtail$ used for? I have come across this symbol many times, but I am unsure as to how to correctly use it. So I can read up on it, what is the name of this mapping function? When would it be correct to use and when wouldn't you use it? I think it may be used when you haven't specified a function for the mapping, but just a guess. Example: Any wellordered set $\langle X,\prec\rangle$ is order isomorphic to the set of its segments ordered by $\subset$ Proof: Let $Y=\{X_a\vert a\in X\}$. Then $a\rightarrowtail X_a$ is a (1-1) mapping onto $Y$, and since $a\prec b\Leftrightarrow X_a\subset X_b$ the mapping is order preserving. Is it necessary to use $\rightarrowtail$ here, could you just use $\mapsto$ or $\rightarrow$, in this example why did we use this symbol?
It is usually used to denote an injection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Non-additive-subtractive prime sequence Call the following a NON additive-subtractive prime sequence, or let's name it Gary's sequence. It goes like this: let a(0)=2. The next term is defined as the smallest prime number which cannot be expressed as a sum and/or difference of any previous terms (note that it means using any combination of plus and minus signs and any number of previous terms). The sequence begins: 2, 3, 7, 11, 29, 53, 107,... this sequence is infinite. My question is: Can you describe the behavior of this sequence, or maybe you can even create a formula for generating this? (Remember you are allowed to use each prime exactly once, and repetition like 2+2+3+7+11 is absolutely not allowed.) For example, 17 is not a member of this sequence because 17 can be expressed as 11+2+7-3. Also 83 is not a member of this sequence since 83 can be expressed as 11+29-3-7+53. Also 47 is not a member of this sequence because 47 can be expressed as 29+7+11.
This seems identical to: OEIS A138000 "Least prime such that the subsets of { a(1),...,a(n) } sum up to 2^n different values." http://oeis.org/A138000 Discovered by S. J. Benkoski and P. Erdos, On weird and pseudoperfect numbers, Math. Comp. 28 (1974) 617-623 You're following in the footsteps of giants ;-) Your next term should be a(8) = 211 (not 157 = 107+29+11+7+3) As to the closed-form, OEIS says: a(n) > a(n-1) and a(n) <= nextprime(sum(a(i),i=1..n-1)-(-1)^n) ; but in fact a(n) ~ sum(a(i),i=1..n-1) and thus a(n) ~ constant*2^n where constant >~ 1.6739958... It's probably safest to define a(n) using nearestprime() rather than nextprime(), to avoid precision issues for large n. As to the name, Benkoski and Erdos didn't give this sequence a name... but I don't think you can call it Gary's sequence.
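For completeness, a small program that regenerates the sequence by tracking all signed subset sums (the set grows like $3^k$ with the number of terms $k$, so this is only practical for the first dozen or so):

```python
from sympy import isprime

def gary(terms):
    seq = [2]
    sums = {-2, 0, 2}                       # all signed subset sums of seq so far
    while len(seq) < terms:
        p = 2
        while p in sums or not isprime(p):  # smallest prime not yet expressible
            p += 1
        sums = {s + e * p for s in sums for e in (-1, 0, 1)}
        seq.append(p)
    return seq

print(gary(8))   # [2, 3, 7, 11, 29, 53, 107, 211]
```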
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can a function be continuous but not Hölder on a compact set? Is it possible to construct a function $f: K \to \mathbb{R}$, where $K \subset \mathbb{R}$ is compact, such that $f$ is continuous but not Hölder continuous of any order? It seems like there should be such a function--it would probably oscillate wildly, like the Weierstrass-Mandlebrot function. However, the W-M function itself doesn't work, since it is Hölder. Edit: I guess I did have in mind for the function to not be Hölder anywhere, even though I didn't explicitly say so.
Take a look at the function $$f(x) = \sum_{n=0}^\infty {\sin(2^n x)\over n^2}.$$ This function is continuous on the line. Much ugliness ensues when you try to show it's Hölder continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 2 }
Does $\sum_{n\ge0} \sin (\pi \sqrt{n^2+n+1}) $ converge/diverge? How would you prove convergence/divergence of the following series? $$\sum_{n\ge0} \sin (\pi \sqrt{n^2+n+1}) $$ I'm interested in more ways of proving convergence/divergence for this series. My thoughts Let $$u_{n}= \sin (\pi \sqrt{n^2+n+1})$$ trying to bound $$|u_n|\leq |\sin(\pi(n+1) )| $$ since $n^2+n+1\leq n^2+2n+1$ and $\sin$ is decreasing in $(0,\dfrac{\pi}{2} )$ $$\sum_{n\ge0}|u_n|\leq \sum_{n\ge0}|\sin(\pi(n+1) )|$$ or $|\sin(\pi(n+1) )|=0\quad \forall n\in \mathbb{N}$ then $\sum_{n\ge0}|\sin(\pi(n+1) )|=0$ thus $\sum_{n\ge0} u_n$ converges absolutely and hence converges. Any help would be appreciated.
Write $$\sqrt{n^2+n+1}=n+\frac12+\varepsilon_n,\qquad\text{where}\quad \varepsilon_n=\sqrt{n^2+n+1}-\left(n+\frac12\right)=\frac{3/4}{\sqrt{n^2+n+1}+n+\frac12}\longrightarrow 0,$$ the middle equality coming from multiplying by the conjugate, since $n^2+n+1-\left(n+\frac12\right)^2=\frac34$. Hence $$\sin\left(\pi\sqrt{n^2+n+1}\right)=\sin\left(\pi n+\frac{\pi}{2}+\pi\varepsilon_n\right)=(-1)^{n}\cos(\pi\varepsilon_n).$$ Since $\cos(\pi\varepsilon_n)\to1$, the general term behaves like $(-1)^n$; in particular it does not tend to $0$, so the series diverges by the term test.
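A numerical look at the first few terms confirms this behavior:

```python
import numpy as np

n = np.arange(0, 12)
terms = np.sin(np.pi * np.sqrt(n**2 + n + 1))
print(np.round(terms, 4))
# the terms alternate in sign and approach ±1, so they do not tend to 0
```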
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Let $G$ be an abelian group with $|G|=mn$ where $(m,n)=1$. Let $G_m=\{g\mid g^m=e\}$ and $G_n=\{g\mid g^n=e\}$; prove the isomorphism. I want to prove that $f:G_n\times G_m\rightarrow G$, where $f(g,h)=gh$, is an isomorphism. First of all I showed that $G_m,G_n$ are subgroups of $G$ (easy). Now I want to show that for every $a,b \in G_n\times G_m$, $f(ab)=f(a)f(b)$. Let $a=(g_1,h_1)$ and $b=(g_2,h_2)$ $\implies f(g_1,h_1)=g_1h_1$, $f(g_2,h_2)=g_2h_2$ $\therefore f(g_1,h_1) f(g_2,h_2)=g_1h_1g_2h_2=g_1g_2h_1h_2$ (because $G$ is abelian) $=f\bigl((g_1g_2),(h_1h_2)\bigr)$. Then I need to show that the kernel is trivial, i.e., that $f(g,h)=e$ only for $(g,h)=(e,e)$; because $(m,n)=1$, only $f(e,e) = e e = e$. Am I right? If not, how can I prove this? Is $f$ an isomorphism even if $G$ is not abelian?
Suppose $$f(a,b):=ab=e\implies a=b^{-1}.$$ But then the order of $a=b^{-1}$ divides both $n$ (as $a\in G_n$) and $m$ (as $b\in G_m$), hence divides $(m,n)=1$, so $a=b=e$. In other words, the only element in both subgroups is the unity, i.e. $G_n\cap G_m=\{e\}$, and $f$ has trivial kernel, so it is injective. For surjectivity, write $1=um+vn$ with integers $u,v$ (Bézout); then any $g\in G$ satisfies $g=g^{um}g^{vn}$, where $g^{um}\in G_n$ (since $(g^{um})^n=g^{umn}=e$) and similarly $g^{vn}\in G_m$, so $f$ is onto. For a counterexample with $G$ non-abelian take $G=S_3$; there $f$ fails to be a homomorphism, and in fact $G_2$ is not even a subgroup.
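A tiny sanity check of the bijection in the smallest interesting abelian case, $G=\mathbb Z/6$ with $m=2$, $n=3$ (written additively):

```python
from itertools import product

G = range(6)                                  # Z/6, written additively
Gm = [g for g in G if (2 * g) % 6 == 0]       # G_2 = {0, 3}
Gn = [g for g in G if (3 * g) % 6 == 0]       # G_3 = {0, 2, 4}

images = sorted((a + b) % 6 for a, b in product(Gn, Gm))
print(images)   # [0, 1, 2, 3, 4, 5]: f(g, h) = g + h hits every element exactly once
```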
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Compute the contraction of a 1-form with a vector field Question: Let $\alpha$ be the $1$-form on $\mathbb{R}^3$ given by $\alpha=zdy-ydz$ and let $\mathbb{X}$ be the vector field on $\mathbb{R}^3$ given by $\mathbb{X}=(0,y,-z)$. Compute $i_\mathbb{X}\alpha$ Answer: $i_{(0,y,-z)}(zdy-ydz)=i_{(0,y,-z)}(zdy)-i_{(0,y,-z)}(ydz)=zy-y(-z)=2yz.$ Is this answer correct? I feel that I must have done something wrong as I feel there should be more to the question. I am not familiar with taking the contraction of a one form. Can you just use the relation: $i_vf^{(j)}=v^j$? Any feedback would be greatly appreciated.
Your computation of $i_{\mathbb{X}}\alpha$ is correct. Let me explain some general facts that work on any manifold (at least locally), but keep it restricted to $\mathbb{R}^3$. First note that a one-form $\alpha$ is a function on vector fields. Any one-form can be written as a linear combination of $dx$, $dy$, and $dz$ where the coefficients are functions of $x$, $y$, and $z$ - another way of saying this is that $\{dx, dy, dz\}$ is a basis for the one-forms. By linearity, in order to understand the value of a one-form on a vector field, it is enough to know the values of $dx$, $dy$, and $dz$ on a vector field. If $V = (v^1, v^2, v^3)$ is a vector field (here the $v^i$ are functions of $x$, $y$, and $z$), then $$dx(v^1, v^2, v^3) = v^1\qquad dy(v^1, v^2, v^3) = v^2\qquad dz(v^1, v^2, v^3) = v^3.$$ Because of these relationships, one can specify $dx$, $dy$, and $dz$ as the dual basis of the standard basis of vector fields. What does any of this have to do with contraction? Well, in the case of one-forms $i_{\mathbb{X}}\alpha = \alpha(\mathbb{X})$; i.e. contraction by a vector field is nothing but evaluation of the form on the vector field. Note, in the case of $k$-forms where $k > 1$, contraction is not the same as evaluation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1087931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Convergence of series of $\sin(x/n^2)$ For the convergence of series of $\sin\left(\frac{x}{n^2}\right)$, is it enough to say that since, for large $n$, $$a_n:= \sin\left(\frac{x}{n^2}\right) \approx b_n:= \frac{x}{n^2},$$ so that $\displaystyle\lim_{n\to\infty} \dfrac{a_n}{b_n} \ \text{ exists}$, and by the limit comparison test, series $a_n$ and series $b_n$ converge or diverge together - and since series $b_n$ is a convergent $p$-series for all $x$, series an is convergent for all $x$, too? Or am I missing something? Thanks,
Note it suffices we check this for $x\geqslant 0$. We can use that $\sin{x}n^{-2}\leqslant {x}{n^{-2}}$ and that if $x>0$ is fixed and $n>N$ large enough, $\sin({x}{n^{-2}})\geqslant 0$ (since $\sin$ is positive on $(0,\pi/2)$). Thus, by comparison, $$0\leqslant \sum_{n>N}\sin(xn^{-2})\leqslant x\sum_{n>N} n^{-2}<\infty$$ This in fact shows convergence is uniform in every compact subset of $\Bbb R$. You can also simply use $|\sin y|\leqslant |y|$ to avoid the argument that the summands are eventually positive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Convergence of $ \int_{-\infty}^\infty \cos(x\log(\lvert x\lvert ))\,dx $ Show that the improper integral $$ \int_{-\infty}^\infty \cos(x\log(\lvert x\lvert ))\,dx $$ is convergent. I rewrote it, using even function symmetry of cosine, as twice the integral from zero to +infinity. Now the argument of log is simply $x$, not $|x|$. Now, to deal with $+infty$ I replace my upper limit with $R$ and will evaluate the limit as R goes to $\infty$. There is no issue at the origin for $x\log x$, since I think by l'hopital's rule, this is viewed as a convergent sequence - converging to $0$, as $x \to 0$. So $\cos (x\log x)$ is well-defined for all $x$. Does anyone have any clever ideas with how to proceed? I saw an integration by parts method on this site to show convergence of $\cos^3(x)$ (on the positive real line), so I will try this method for $\cos(x\log x)$. If anyone has any cool tips to offer, please feel free to share :). Thanks,
Notice that it suffices to consider the convergence of $$ \int_{1}^{\infty} \cos (x \log x) \, dx. $$ Now let $f : [1, \infty) \to [0, \infty)$ by $f(x) = x \log x$. This function has an inverse $g = f^{-1}$ which is differentiable. So we have $$ \int_{1}^{R} \cos (x \log x) \, dx = \int_{0}^{f(R)} g'(y)\cos y \, dy = \int_{0}^{f(R)} \frac{\cos y}{f'(g(y))} \, dy. \tag{1} $$ Notice that $f'(g(y)) = \log g(y) + 1$ is increasing and diverges to $+\infty$ as $y \to \infty$. Thus, as $R \to \infty$ it follows that (1) converges from the alternating series test. Indeed, for large $R$ and $N = N(R) = \lfloor f(R) / \pi \rfloor$ it follows that \begin{align*} \int_{1}^{R} \cos (x \log x) \, dx &= \sum_{k=0}^{N-1} \int_{k\pi}^{(k+1)\pi} \frac{\cos y}{f'(g(y))} \, dy + \int_{\pi N}^{f(R)} \frac{\cos y}{f'(g(y))} \, dy \\ &= \sum_{k=0}^{N-1} (-1)^{k} a_{k} + \mathcal{O}(a_{N}), \end{align*} where $a_{k}$ is defined by $$ a_{k} = \int_{0}^{\pi} \frac{\cos y}{f'(g(y + k\pi))} \, dy. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find the field of fractions and the integral closure of a subring of $\mathbb Z[x]$. Let $R$ be a subring of $\mathbb{Z}[x]$ consisting of polynomials such that the coefficients of $x$ and $x^2$ are zero. * *Find the field of fractions of $R$. *Find the integral closure of $R$ in it's field of fractions. *Show that $R$ cannot be generated as a ring by 1 and a polynomial $f(x) \in R$. I feel like the answers I have are correct, but at the same time I feel like I'm missing something here. Do these solutions look right? * *Let $\mathbb{K}$ be the field of fractions, then certainly $\mathbb{Q} \subset \mathbb{K}$ because of $\mathbb{Z}$. Also $\frac{1}{x^3} \in \mathbb{K}$ so $\frac{1}{x^3} \cdot x^5 = x^2 \in \mathbb{K}$, likewise $\frac{1}{x^4} \cdot x^5 = x \in \mathbb{K}$. Thus $\mathbb{K}$ must be $\mathbb{Q}(x)$. *$R$ is a UFD because $\mathbb{Z}[x]$ is a UFD, thus $R$ is integrally closed. i.e., it's integral closure is itself. *Suppose it were, then consider the homomorphism, $\phi: \mathbb{Z}[t] \to R $, such that $t \mapsto f(x) $. Then $\phi$ is injective, implying $R = \mathbb{Z}[x]$, contradiction.
I'm guessing your argument for $2$ is that you can use factorization in $\mathbf{Z}[x]$ to factor in $R$. The problem is that when you use factorization in $\mathbf{Z}[x]$, the factors will may not be in $R$! In particular, consider the problem of factoring of $x^8$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Surface Integral without Parametrization Let $S$ be the sphere $x^2 + y^2 + z^2 = a^2$. * *Use symmetry considerations to evaluate $\iint_Sx\,dS$ without resorting to parametrizing the sphere. *Let F $= (1, 1, 1)$. Use symmetry to determine the vector surface integral $\iint_S$F$\cdot \, dS$ without parametrizing the sphere. I find it difficult to understand question 1. What does it mean by 'symmetry considerations'? For question 2, without parametrizing the sphere, I find out that the integral should be equal to $$\frac1a \iint_S(x+y+z)dS.$$ Now symmetry comes in - a sphere is symmetric in all axes, thus the answer should be $\frac3a$ times the answer in part a. Am I correct? Any help will be appreciated.
Integrals of the form $$\iint x \, dS$$ refer to center of mass calculations. This double integral will give the $x$ coordinate of the center of mass of your surface, likewise for $y$ and $z$. However, the center of mass of a sphere is at its center, and this sphere is centered at the origin. Therefore, this surface integral is zero. For question $2$, note that a normal vector is $(x,y,z)$. What happens when you compute ${\mathbf F} \cdot (x,y,z)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Besides proving new theorems, how can a person contribute to mathematics? There are at least a few things a person can do to contribute to the mathematics community without necessarily obtaining novel results, for example: * *Organizing known results into a coherent narrative in the form of lecture notes or a textbook *Contributing code to open-source mathematical software What are some other ways to make auxiliary contributions to mathematics?
Another way to contribute that hasn't been mentioned yet is advocacy. It is important to raise awareness on the importance of mathematics and scientific literacy in general. For instance, here are a few concepts that it would be useful to disseminate to the general public: * *that some knowledge of mathematics, statistics and science is important for the average person, too. *that there is still active research in mathematics; theorems weren't all discovered 300 years ago. *that mathematics (and STEM in general) is a viable career path, and people interested in it shouldn't be laughed at. *that research, even basic research, is an important endeavour and needs funding.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "80", "answer_count": 9, "answer_id": 7 }
How does $\sum_{k=0}^n (pe^t)^k{n\choose k}(1-p)^{n-k} = (pe^t+1-p)^n$? How does $\sum_{k=0}^n (pe^t)^k{n\choose k}(1-p)^{n-k} = (pe^t+1-p)^n$? Where $e$ is Euler's number and $p,n$ are constants. ${n\choose k}$ is the binomial coefficient. If context helps, I'm currently trying to show that the moment generating function of a binomial random variable $X$ with parameters $n$ and $p$ is given by $(pe^t+1-p)^n$. I'm stuck on this very last bit. My notes just simply state that $\sum_{k=0}^n (pe^t)^k{n\choose k}(1-p)^{n-k} = (pe^t+1-p)^n$, but I don't understand why this is so. Many thanks in advance.
The binomial theorem, or binomial identity or binomial formula, states that: $$(x+y)^n=\sum\limits_{k=0}^n{n\choose k}x^{n-k}y^k$$ where $n\in\mathbb{N}$ and $(x,y)\in\mathbb{R}$ (or $\mathbb{C}$). Then for $x=(1-p)$ and $y=pe^t$ you get your formula.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Non exact sequence of quotients by torsion subgroups $$0\rightarrow G_{1}\rightarrow G_{2}\rightarrow G_{3}\rightarrow 0$$ is a short exact sequence of finitely generated abelian groups. We call $\bar{G_{i}}$ the quotient of $G_{i}$ by its torsion subgroup. I want to show that $$0\rightarrow \bar{G_{1}}\rightarrow \bar{G_{2}}\rightarrow \bar{G_{3}}\rightarrow 0$$ is not exact. I am looking for example with $\mathbb{Z}/n\mathbb{Z}$ or something else... Can you help me ? Thank you.
Suppose your sequence is always exact. Then take $G_2$ any torsionfree group and $G_1$ any subgroup of $G_1$. Then also $G_1$ is torsionfree and exactness would imply that $G_2/G_1$ is torsionfree. But for every abelian group $G$ there exists a surjection $F\to G$, with $F$ a free group (in particular torsionfree). So every group would be torsionfree.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find solutions of $\alpha x^n = \ln x$ How can I find the solutions of $$\alpha x^n = \ln x$$ when $\alpha \in \mathbb{R}$ and $n\in \mathbb{Q}$? Or, if it is not possible to have closed form solutions, how can I prove that there exist one (or there is no solution) and that it is unique? (I'm particularly interested in the cases $n=2$, $n=1$, and $n=1/2$).
In the case that $\alpha n\not =0$, we have $$\alpha x^n = \ln x$$ $$ \alpha nx^{n}= n\ln x$$ $$ \alpha nx^{n}= \ln x^{n}$$ $$ e^{\alpha nx^{n}} = x^{n}$$ $$ 1= \frac{x^{n}}{e^{\alpha nx^{n}}}$$ $$ 1= x^{n}e^{-\alpha nx^{n}} $$ $$ -\alpha n= -\alpha nx^{n}e^{-\alpha nx^{n}} $$ $$ W(-\alpha n)= -\alpha nx^{n} $$ $$ -\frac{W(-\alpha n)}{\alpha n}= x^{n} $$ $$ x=\left(-\frac{W(-\alpha n)}{\alpha n}\right)^{\frac1n}$$ Where $W$ is the Lambert W function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Is there an easy way to calculate $\int_0^{2\pi} \sin^2x\ \cos^4x\ dx $? I want to calculate the integral $$\int_0^{2\pi} \sin^2x\ \cos^4x\ dx $$ by hand. The standard substitution $t=\tan(\frac{x}{2})$ is too difficult. Multiple integration by parts together with clever manipulations can give the result. But it would be nice to have an easier method, perhaps one single substitution or some useful formulas for trigonometric powers.
Hint: $$e^{ix} = \cos x + i \sin x$$ and the binomial theorem. Even further to simplify the algebra a bit initially you can use $$\sin^2x + \cos^2x = 1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is $1234567891011121314151617181920212223......$ an integer? This question came from that one and from that talk where it's noted that "integers have a finite count of digits", so that the "number" in the title is not at all a number (not integer nor rational or real) . This statement seems to me justified because I suspect that if we admit an infinite count of digits we build a set of "numbers" that is not countable (but I've not a proof). I'm wrong? Reading comments and answers I'm a bit confused. So I add a more specific question: Is the string in the title a number in some model of Peano Axioms?
Your number is not an integer. Suppose that you can have an integer with an infinite number of digits. Then by Cantor's diagonal argument you can show that the set of all integers in uncountable, which is a contradiction if you accept the Peano axioms as suggested by Ahaan S. Rungta. Thus your number cannot be an integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 3 }
Find other iterated integrals equal to the given triple integral I want to find all the other five iterated integrals that are equal to the integral $$\int_{1}^{0}\int_{y}^{1}\int_{0}^{y} f(x,y,z)dzdxdy$$ I have so far found these three $$\int_{0}^{1}\int_{0}^{x}\int_{0}^{y} f(x,y,z)dzdydx$$ $$\int_{0}^{1}\int_{0}^{y}\int_{y}^{1} f(x,y,z)dxdzdy$$ $$\int_{0}^{1}\int_{z}^{1}\int_{y}^{1} f(x,y,z)dxdydz$$ These three were easy to find as I had to look at the XY plane for the first one and ZY plane for the last two. Now to find the rest, I have to look at the XZ plane. I am not able to do it. I would appreciate if someone can help.
The bounds of integration determine equations that bound a solid $S$ in three-dimensional space. After you integrate with respect to the first variable, you should orthogonally project $S$ along the axis specified by that first variable onto the plane spanned by the other two variables. That projection then determines a two-dimensional region which is the domain over which you integrate next. In your original integral, you have $$\int_{1}^{0}\int_{y}^{1}\int_{0}^{y} f(x,y,z)\,dz\,dx\,dy = -\int_{0}^{1}\int_{y}^{1}\int_{0}^{y} f(x,y,z)\,dz\,dx\,dy.$$ Note that I have switched the outer bounds of integration and changed the sign of the integral to alleviate the minor annoyance that $1$ is not less than $0$. Now, the bounds of integration imply three inequalities that specify the domain of integration. $$ \begin{array}{l} 0<z<y \\ y<x<1 \\ 0<y<1. \end{array} $$ Now, the solid, together with the projections of interest, looks like so: Now, I guess the reason that the $yz$ and $xy$ projections are relatively easy is that those projections correspond to cross-sections of the solid, i.e. $z=0$ for the $xy$ projection and $x=1$ for the $yz$ projection. This allows us to just set the variable that we integrate with respect to first to that constant. It's even easier here, since neither $x$ nor $z$ appear explicitly in the bounds. We can't do that here but, when we think of it as a projection as we have here, we can see that it's just the triangle $$\left\{(x,z): 0<x<1 \text{ and } 0<z<x \right\}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1088957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Units in a ring of fractions Let $R$ be a UFD and $D \subseteq R$ multiplicative set. What are the units in $D^{-1}R$? I assume the answer should be $D^{-1}R^{\times}$, but I get stuck: If $a/b$ is a unit, then there exists $c/d$ so that $$\frac{a}{b} \cdot \frac{c}{d} = \frac{1}{1} \Longleftrightarrow ac = bd,$$ but I don't see what this tells me. For example, just because $ac \in D$ doesn't seem to imply anything about $a, c$.
We can assume $0\notin D$, or the ring $D^{-1}R$ would be trivial. Also it's not restrictive to assume $1\in D$. Let $a/b$ be a unit in $D^{-1}R$. Then there exists $c/d\in D^{-1}R$ such that $$ \frac{a}{b}\cdot\frac{c}{d}=\frac{1}{1} $$ so $$ ac=bd $$ In particular, $ac\in D$, because $b,d\in D$. Thus $a$ divides an element of $D$. Conversely, suppose $ac\in D$, for some $c\in R$ and let $b\in D$; then $$ \frac{a}{b}\cdot\frac{bc}{ac}=\frac{abc}{bac}=\frac{1}{1} $$ Therefore $a/b$ is a unit in $D^{-1}R$. Note that $(bc)/(ac)\in D^{-1}R$ because $ac\in D$. So we can summarize the facts above in a proposition. For a subset $X$ of $R$ denote by $\hat{X}$ the set $$ \hat{X}=\{r\in R:rs\in X\text{ for some }s\in R\} $$ often called the saturation of $X$. Proposition. An element $a/b\in D^{-1}R$ $(a\in R, b\in D)$ is invertible if and only if $a\in\hat{D}$. Since $R$ is a UFD, we can say that $D^{-1}$ is the set of elements whose prime factor decomposition contains only primes appearing in the prime factor decomposition of an element of $D$. Something should be noted more generally. I'll not assume that $R$ is a domain; when writing $a/b$, the hypothesis that $b\in D$ will be implicit. If $a/b\in D^{-1}R$ is invertible, then $(a/b)(c/d)=1/1$ for some $c/d\in D^{-1}R$. Thus, for some $z\in D$, we have $$ acz=bdz $$ so that $a\in\hat{D}$ (same definition as before), because $bdz\in D$. Conversely, if $a\in\hat{D}$, so $ac\in D$ for some $c\in R$, we have $$ \frac{a}{b}\cdot\frac{bc}{ac}=\frac{1}{1} $$ and $a/b$ is invertible for every $b\in D$. Thus we see that the hypothesis that $R$ is a domain is completely irrelevant. The saturation of $D$ is closed under multiplication and it's a good exercise in ring of fractions proving that $$ D^{-1}R\cong\hat{D}^{-1}R $$ the isomorphism being the obviously defined one. Note that the saturation of $\hat{D}$ is $\hat{D}$ itself. So if $D$ is saturated (that is, $\hat{D}=D$), then the set of units of $D^{-1}R$ can be described easily as the set of fractions $a/b$ with $a\in D$. An important case in which $D$ is saturated is when $D=R\setminus P$, when $P$ is a prime ideal of $R$. The proof is very simple.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
reduction order method $xy''-(1+x)y'+y=x^2e^{2x}$, $y_1=1+x$ reduction order method $xy''-(1+x)y'+y=x^2e^{2x}$, being $y_1=1+x$ a solution of the homogeneous equation. I made y=u(1+x) and got $u''(x^2+x)-u'(x^2+1)=x^2e^{2x}$ Then i did $u'=w$ and obtained $w=\frac{e^{2x}x^2+c_1e^xx}{(x+1)^2}$, however $u=\int{w }dx$ will be a strange expression that will not give to the solution $y=c_1e^x+c_2(1+x)+\frac{e^{2x}(x-1)}{2}$. Can you help me finding my mistake? Thanks
Your work so far looks correct, so to find $u=\displaystyle\int\frac{x^{2}e^{2x}+Cxe^x}{(x+1)^2}dx=\int\frac{(xe^x)(xe^x+C)}{(x+1)^2}dx$, use integration by parts with $w=xe^x+C, dw=(x+1)e^x dx$ and $ dv=\frac{xe^x}{(x+1)^2}dx, \;v=\frac{e^x}{x+1}$ to get $\displaystyle u=(xe^{x}+C)\left(\frac {e^x}{x+1}\right)-\int e^{2x}dx=\frac{xe^{2x}+Ce^x}{x+1}-\frac{1}{2}e^{2x}+D$. Then $y=(x+1)u$ should give the correct solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that $ \sum_{n=1}^\infty \ln\big(n\sin \frac{1}{n}\big)$ converges. Prove that $\displaystyle \sum_{n=1}^\infty \;\ln\left(n\sin\frac{1}{n}\right)$ converges. My Work: $$\left|\ln \left(n \sin \frac{1}{n}\right)\right| \leq\left|\ln \left(n \sin \frac{1}{n^{2}}\right)\right| \leq\left|\ln \left(\sin \frac{1}{n^{2}}\right)\right|$$ I was going to use comparison test. But now stuck. Please give me a hint.
Let's amply use Taylor series to give the leading orders. Note that $\sin\left( \frac{1}{n} \right) \approx \frac{1}{n} - \frac{1}{6n^3}$. So $n \sin \left( \frac{1}{n} \right) \approx 1 - \frac{1}{6n^2}$. Note also that $\ln(1 - x) \approx x + x^2 + \dots$, so that $$\ln\left( n \sin \frac{1}{n} \right) \approx \frac{1}{6n^2} + \frac{1}{36n^4} + \dots$$ This means that you are wondering about $$ \sum_{n \geq 1} \sum_{j \geq 1} \frac{1}{(6n^2)^j} \ll \sum_{n \geq 1} \frac{1}{n^2},$$ which converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Prove: If $a^2+b^2=1$ and $c^2+d^2=1$, then $ac+bd\le1$ Prove: If $a^2+b^2=1$ and $c^2+d^2=1$, then $ac+bd\le1$ I seem to struggle with this simple proof. All I managed to find is that ac+bd=-4 (which might not even be correct).
1st Method $\begin{align}\left(a^2+b^2\right)\left(c^2+d^2\right)=1& \implies (ac+bd)^2+(ad-bc)^2=1\\&\implies (ac+bd)^2\le1\end{align}$ 2nd Method $\begin{align}\left(a^2+b^2\right)+\left(c^2+d^2\right)=2& \implies \left(a^2+c^2\right)+\left(b^2+d^2\right)=2\\&\implies 2(ac+bd)\le 2\qquad \text{(by A.M.- G.M. Inequality)}\\&\implies (ac+bd)\le 1\end{align}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 5 }
How do I integrate: $\int\sqrt{\frac{x-3}{2-x}} dx$? I need to solve: $$\int\sqrt{\frac{x-3}{2-x}}~{\rm d}x$$ What I did is: Substitute: $x=2\cos^2 \theta + 3\sin^2 \theta$. Now: $$\begin{align} x &= 2 - 2\sin^2 \theta + 3 \sin^2 \theta \\ x &= 2+ \sin^2 \theta \\ \sin \theta &= \sqrt{x-2} \\ \theta &=\sin^{-1}\sqrt{x-2} \end{align}$$ and, $ cos \theta = \sqrt{(3-x)} $ $ \theta=\cos^{-1}\sqrt{(3-x)}$ The integral becomes: $$\begin{align} &= \int{\sqrt[]{\frac{2\cos^2 \theta + 3\sin^2 \theta-3}{2-2\cos^2 \theta - 3\sin^2 \theta}} ~~(2 \cos \theta\sin\theta)}~{\rm d}{\theta}\\ % &= \int{\sqrt[]{\frac{2\cos^2 \theta + 3(\sin^2 \theta-1)}{2(1-\cos^2 \theta) - 3\sin^2 \theta}}~~(2 \cos \theta\sin\theta)}~{\rm d}{\theta} \\ % &= \int\sqrt[]{\frac{2\cos^2 \theta - 3\cos^2 \theta}{2\sin^2 \theta - 3\sin^2 \theta}}~~(2 \cos \theta\sin\theta) ~{\rm d}\theta \\ % &= \int\sqrt[]{\frac{-\cos^2 \theta }{- \sin^2 \theta}}~~(2 \cos \theta\sin\theta) ~{\rm d}\theta \\ % &= \int \frac{\cos \theta}{\sin\theta}~~(2 \cos \theta\sin\theta)~{\rm d}\theta \\ % &= \int 2\cos^2 \theta~{\rm d}\theta \\ % &= \int (1- \sin 2\theta)~{\rm d}\theta \\ % &= \theta - \frac {\cos 2\theta}{2} + c \\ % &= \sin^{-1}\sqrt{x-2} - \frac {\cos 2(\sin^{-1}\sqrt{x-2})}{2} + c \end{align}$$ But, The right answer is : $$\sqrt{\frac{3-x}{x-2}} - \sin^{-1}\sqrt{3-x} + c $$ Where am I doing it wrong? How do I get it to the correct answer?? UPDATE: I am so sorry I wrote: = $\int 2\cos^2 \theta .d\theta$ = $\int (1- \sin 2\theta) .d\theta$ It should be: = $\int 2\cos^2 \theta .d\theta$ = $\int (1+ \cos2\theta) .d\theta$ = $ \theta + \frac{\sin 2\theta}{2} +c$ What do I do next?? UPDATE 2: = $ \theta + \sin \theta \cos\theta +c$ = $ \theta + \sin \sin^{-1}\sqrt{(x-2)}. \cos\cos^{-1}\sqrt{(3-x)}+c$ = $ \sin^{-1}\sqrt{(x-2)}+ \sqrt{(x-2)}.\sqrt{(3-x)}+c$ Is this the right answer or I have done something wrong?
$$I=\int\sqrt{\frac{x-3}{2-x}}~{\rm d}x$$ Integrating Let $x=2\cos^2t+3\sin^2t$, $dx=\sin2tdt$ $$I=\int\sqrt{\frac{-\cos^2t}{-\sin^2t}}\sin2tdt=\int2\cos^2tdt=\int(1+\cos2t)dt=t+\frac12\sin2t+c\\I=\underbrace{\cos^{-1}\sqrt{3-x}}_{\pi/2-\sin^{-1}\sqrt{3-x}}+\sqrt{x-2}\sqrt{3-x}+c\\I=\underbrace{\sqrt{x-2}\sqrt{3-x}}_{\sqrt{5x-x^2-6}}-\sin^{-1}{\sqrt{3-x}}+c'$$ Differentiating back $$I'=\frac1{2\sqrt{(x-2)(3-x)}}\cdot(5-2x)-\underbrace{\frac1{\sqrt{1-(\sqrt{3-x})^2}}}_{\sqrt{x-2}}\cdot\frac1{2\sqrt{3-x}}(-1)\\I'=\frac{2(3-x)}{2\sqrt{(x-2)(3-x)}}=\sqrt{\frac{3-x}{x-2}}=\sqrt{\frac{x-3}{2-x}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Convergence of $\int_{1}^{\infty} \frac{\sin x}{x^{\alpha}}dx$ For which values of $\alpha > 0$ is the following improper integral convergent? $$\int_{1}^{\infty} \frac{\sin x}{x^{\alpha}}dx$$ I tried to solve this problem by parts method but I am nowhere near to the answer. :(
As David Mitra mentioned in the comments you can split the integral up. First we do $$\int_{1}^{\infty} \frac{\sin x}{x^{\alpha}}dx = \int_{1}^{\pi} \frac{\sin x}{x^{\alpha}}dx + \int_{\pi}^{\infty} \frac{\sin x}{x^{\alpha}}dx$$ Now the first part converges for sure (you can approxiamate it with $\frac{1}{x^a}$). For the second $$ \int_{\pi}^{\infty} \frac{\sin x}{x^{\alpha}} dx = \sum_{j=1}^{\infty} \int_{j\pi}^{(j+1)\pi} \frac{\sin x}{x^{\alpha}} dx \le \sum_{j=1}^{\infty} \int_{j\pi}^{(j+1)\pi} \frac{\sin x}{(j\pi)^{\alpha}} dx = \sum_{j=1}^{\infty} \frac{1}{(j\pi)^{\alpha}} \int_{j\pi}^{(j+1)\pi} \sin x dx = \sum_{j=1}^{\infty} 2 (-1)^j \frac{1}{(j\pi)^{\alpha}} $$ and since $\frac{2}{(j\pi)^{\alpha}} \to 0$ for $j \to \infty$ and $(-1)^j$ is alternating, the series converges (and so the integral) for every $\alpha > 0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089495", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What are other methods to Evaluate $\int_0^{\infty} \frac{y^{m-1}}{1+y} dy$? I am looking for an alternative method to what I have used below. The method that I know makes a substitution to the Beta function to make it equivalent to the Integral I am evaluating. * *Usually we start off with the integral itself that we are evaluating (IMO, these are better methods) and I would love to know such a method for this. *Also, I would be glad to know methods which uses other techniques that I am not aware, which does not necessarily follow (1) $$\Large{\color{#66f}{B(m,n)=\int_0^1 x^{m-1} (1-x)^{n-1} dx}}$$ $$\bbox[8pt,border: 2pt solid crimson]{x=\frac{y}{1+y}\implies dx=\frac{dy}{(1+y)^2}}$$ $$\int_0^{\infty} \left(\frac{y}{1+y}\right)^{m-1} \left(\frac{1}{1+y}\right)^{n-1} \frac{dy}{(1+y)^2}=\int_0^{\infty} y^{m-1} (1-y)^{-m-n} dy$$ $$\Large{n=1-m}$$ $$\Large{\color{crimson}{\int_0^{\infty} \frac{y^{m-1}}{1+y} dy=B(m,1-m)=\Gamma(m)\Gamma(m-1)}}$$ Thanks in advance for helping me expand my current knowledge.
I think something might be wrong with your answer, suppose $m\geq 0$, let $v = 1+y$. Your integral becomes $$I=\int\limits_1^{\infty} \frac{(v-1)^m-1}{v}dv . $$ We can then apply the binomial expansion and transform the integral into $$I= \int\limits_1^{\infty} \frac{ \sum\limits_{i=0}^{m}\left[ {m \choose i} (-1)^iv^{m-i} \right]-1}{v}dv = \int\limits_1^{\infty} \sum\limits_{i=0}^{m}\left[ {m \choose i} (-1)^iv^{m-i-1} \right]-\frac1v dv . $$ This diverges for $m\geq 0$. However, your answer of $\Gamma(m)\Gamma(m-1)$ would lead us to believe that for negative $m$ the integral is undefined but $m\geq 1$ would be no problem. For instance if $m=4$ then $\Gamma(4)\Gamma(3)$ is defined, whereas your integral is not. EDIT: With the updated question we may use the same method to obtain $$I = \int\limits_1^{\infty} \sum\limits_{i=0}^{m}\left[ {m \choose i} (-1)^iv^{m-i-2} \right] dv,$$ for integer $m \geq 1$, which does agree with the corrected result that $B(m,1-m) = \frac{\pi}{\sin(\pi m)}$, namely, it is undefined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089601", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Find the angle if the area of the two triangles are equal? Let $I$ be the incenter of $\triangle ABC$, and $D$, $E$ be the midpoints of $AB$, $AC$ respectively. If $DI$ meets $AC$ at $H$ and $EI$ meets $AB$ at $G$, then find $\measuredangle A$ if the areas of $\triangle ABC$ and $\triangle AGH$ are equal. I played around with GeoGebra and found out that it should be $60^{\circ}$, but am unable to prove it. I saw that $\cot \measuredangle HDA= \frac{a-b}{2r}$ and some stuff like that, but its not really helping me. Can anyone solve it, preferably by pure geometry? :)
This is an easy problem to solve with trilinear coordinates. We have $I=[1,1,1]$, $D=\left[\frac{1}{a},\frac{1}{b},0\right]$ and $E=\left[\frac{1}{a},0,\frac{1}{c}\right]$. Moreover, we have $H=[1,0,\mu]$ and $G=[1,\eta,0]$, with $\det(D,I,H)=\det(E,I,G)=0$, so: $$ H=\left[\frac{1}{b}-\frac{1}{a},0,\frac{1}{b}\right],\qquad G=\left[\frac{1}{c}-\frac{1}{a},\frac{1}{c},0\right].\tag{1}$$ The previous line gives: $$\frac{AH}{HC}=\frac{\frac{1}{ba}}{\frac{1}{bc}-\frac{1}{ac}},\qquad \frac{AG}{GB}=\frac{\frac{1}{ca}}{\frac{1}{cb}-\frac{1}{ab}},\tag{2}$$ so: $$\frac{AH}{AC}=\frac{c}{c+a-b},\qquad \frac{AG}{AB}=\frac{b}{b+a-c}\tag{3}$$ and $[AGH]=[ABC]$ iff: $$ a^2 = b^2+c^2-bc\tag{4} $$ that, in virtue of the cosine theorem, is equivalent to $\cos\widehat{A}=\frac{1}{2}$, or: $$\color{red}{\widehat{A}=60^\circ}\tag{5}$$ as wanted.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Must algebraic extensions of the same degree have subfields of the same degree? Let $\mathbb F$ be a field and let $\mathbb K_1$ and $\mathbb K_2$ be finite extensions of $\mathbb F$ with the same degree, that is, $[\mathbb K_1:\mathbb F]=[\mathbb K_2:\mathbb F]$. Now, assume that $\mathbb K_1$ contains a subfield of degree $s$ over $\mathbb F$. My question is: Can we conclude that also $\mathbb K_2$ contains a subfield of degree $s$ over $\mathbb F$? If the field $\mathbb F$ is finite, then this is true, since we can embedded $\mathbb K_1$ and $\mathbb K_2$ in the matrix ring $M_m(\mathbb F)$, where $m=[\mathbb K_1:\mathbb F]=[\mathbb K_2:\mathbb F]$ and it can be proved that there is an inner authomorphism of $M_m(\mathbb F)$ that apply $\mathbb K_1$ in $\mathbb K_2$.
Let $F=\mathbb{Q}$. Let $K_1$ be the splitting field of $f_1=x^4+8x+12$ over $\mathbb{Q}$, and let $K_2$ be the splitting field of $f_2=x^{12}+x^{11}+\cdots+x+1$ over $\mathbb{Q}$ (this is just the $13$th cyclotomic polynomial). Then $$[K_1:F]=[K_2:F]=12$$ However, $\mathrm{Gal}(K_1/F)\cong A_4$ and $\mathrm{Gal}(K_2/F)\cong\mathbb{Z}/12\mathbb{Z}$, and recall that $\mathbb{Z}/12\mathbb{Z}$ has a subgroup of size $6$, while $A_4$ does not. Therefore, by the fundamental theorem of Galois theory, there is an intermediate field $K_2\supset L_2\supset F$ with $[L_2:F]=2$, but there is no intermediate field $K_1\supset L_1\supset F$ with $[L_1:F]=2$. (For my claim that $\mathrm{Gal}(K_1/F)\cong A_4$, see p.6 of this article by Keith Conrad.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089786", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that $ \int^{\infty}_{0} \frac{\ln (1+x)}{x(x^2+1)} \ dx = \frac{5{\pi}^2}{48} $ without complex analysis? The Problem I am trying to show that $ \displaystyle \int^{\infty}_{0} \frac{\ln (1+x)}{x(x^2+1)} \ dx = \frac{5{\pi}^2}{48}$ My attempt I've tried substituting $x=\tan\theta$, and then using the substitution $u=1 + \tan \theta $ which gives: $ \displaystyle \int^{\infty}_{1} \frac{\ln u}{(u-1)(u^2-2u+2)} \ du $ , however I am unable to evaluate this.
We have: $$I=\int_{0}^{+\infty}\frac{\log(u+1)}{u^3+u}\,du =\int_{0}^{+\infty}\int_{0}^{1}\frac{1}{(u^2+1)(1+uv)}\,dv\,du$$ and by exchanging the order of integration, then setting $v=\sqrt{w}$: $$ I = \int_{0}^{1}\frac{\pi +2v\log v}{2+2v^2}\,dv =\frac{\pi^2}{8}+\int_{0}^{1}\frac{v\log v}{1+v^2}\,dv=\frac{\pi^2}{8}+\frac{1}{4}\int_{0}^{1}\frac{\log w}{1+w}\,dw$$ so: $$ I = \frac{\pi^2}{8}-\frac{\pi^2}{48} = \color{red}{\frac{5\pi^2}{48}}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1089877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 1 }
Manipulating Partial Derivatives of Inverse Function In lectures we're told: $$\dfrac {\partial y} {\partial x} = \dfrac 1 {\dfrac {\partial x} {\partial y}}$$ as long as the same variables are being held constant in each partial derivative. The course is 'applied maths', i.e non-rigorous, so don't confuse me. But anyway, if we have: $$\xi = x - y \qquad \eta = x$$ Then $\dfrac {\partial x} {\partial \xi} = 0$ and $\dfrac {\partial \xi} {\partial x} = 1$. The rule presumably fails because one of the partial derivatives is $0$. But then isn't this rule useless? Since I cannot use it without checking that the partial derivative isn't in fact $0$, but then I've worked out the partial derivative manually anyway.
The rule presumably fails because one of the partial derivatives is $0$ No, not because of that. Take $$\xi = x - y \qquad \eta = x+y$$ then $$x = \frac12(\xi+\eta) \qquad y = \frac12(\eta-\xi)$$ so $\dfrac {\partial x} {\partial \xi} = \dfrac12$ and $\dfrac {\partial \xi} {\partial x} = 1$. The correct way to find the derivative of the inverse map is to arrange partial derivatives into a matrix, and take the inverse of that matrix. In your scenario $$ \begin{pmatrix} \frac {\partial \xi} {\partial x} & \frac {\partial \xi} {\partial y} \\ \frac {\partial \eta} {\partial x} & \frac {\partial \eta} {\partial y} \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix} $$ hence $$ \begin{pmatrix} \frac{\partial x} {\partial \xi} & \frac{\partial x} {\partial \eta} \\ \frac{\partial y} {\partial \xi} & \frac {\partial y}{\partial \eta} \end{pmatrix} = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix}^{-1} = \begin{pmatrix} 0 & 1 \\ -1 & 1 \end{pmatrix} $$ from where you can read off all the partials of the inverse map. The rule you were told has an important clause as long as the same variables are being held constant in each partial derivative. which is usually the case when we take partials of the inverse map. For it to apply, you would need to compute $\dfrac {\partial x} {\partial \xi} = 0$ while holding $y$ constant (not $\eta$). That is, holding $\eta-\xi$ constant. Then the rule gives the correct result : $$\dfrac {\partial x} {\partial \xi}\bigg|_{\eta-\xi \text{ constant}} = 1$$ despite $$\dfrac {\partial x} {\partial \xi}\bigg|_{\eta \text{ constant}} = 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Limit involving square roots, more than two "rooted" terms The limit is $$\lim_{x\to\infty} \left(\sqrt{x^2+5x-2}-\sqrt{4x^2-3x+7}+\sqrt{x^2+7x+5}\right)$$ which has a value of $\dfrac{27}{4}$. Normally, I would know how to approach a limit of the form $$\lim_{x\to\infty}\left( \sqrt{a_1x^2+b_1x+c_1}\pm\sqrt{a_2x^2+b_2x+c_2}\right)$$ (provided it exists) by using the expression's conjugate, but this problem has me stumped. I've considered using the conjugate $$\sqrt{x^2+5x-2}-\sqrt{4x^2-3x+7}-\sqrt{x^2+7x+5}$$ and a term like this one, $$\sqrt{x^2+5x-2}+\sqrt{4x^2-3x+7}-\sqrt{x^2+7x+5}$$ but that didn't seem to help simplify anything. Edit: I stumbled across something at the last second that lets me use the conjugate approach. The expression can be rewritten as $$\sqrt{x^2+5x-2}-\sqrt{4x^2-3x+7}+\sqrt{x^2+7x+5}\\ \sqrt{x^2+5x-2}-2\sqrt{x^2-\frac{3}{4}x+\frac{7}{4}}+\sqrt{x^2+7x+5}\\ \left(\sqrt{x^2+5x-2}-\sqrt{x^2-\frac{3}{4}x+\frac{7}{4}}\right)+\left(\sqrt{x^2+7x+5}-\sqrt{x^2-\frac{3}{4}x+\frac{7}{4}}\right)$$ which approaches $$\frac{23}{8}+\frac{31}{8}=\frac{27}{4}$$
Rewrite it as: $$\left(\sqrt{x^2+5x-2} - \left(x+\frac{5}{2}\right)\right) -\\ \left(\sqrt{4x^2-3x+7}-\left(2x-\frac{3}{4} \right)\right) +\\ \left(\sqrt{x^2+7x+5}-\left(x+\frac{7}{2}\right)\right)+\left(\frac{5}{2}+\frac 34+\frac{7}2\right)$$ Or something like that. Each of the first three terms has limit zero...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Why are there only limits and colimits? Part of my intuition about the construction of limits and colimits is based on the idea that they are initial and terminal objects in the appropriate category: The limit of a diagram $D$ is of course a/the final object in the category of cones over $D$, and similarly for colimits and cocones. The part that I don't understand is that there seem to be four options here, and only two are commonly discussed: $$\begin{array}{c|cc} & \text{Terminal} & \text{Initial} \\ \hline \text{Cones} & \text{Limit} & ?? \\ \text{Cocones} & ?? &\text{Colimit} \end{array}$$ What fills in the blanks, and why are they less emphasized? It seems like we want final objects when arrows are naturally going 'out of something' and initial objects when they're going 'into something', which vaguely makes sense but doesn't feel satisfactory. Products are of course important. On the other hand, nothing prevents one from asking for an initial object in some category of spans, but books (as far as I've read, which only involves the basics) never mention them. Is there a reason?
Here's one of the several equivalent ways of thinking about limits and colimits in which this question doesn't arise: for a diagram $J$ and a category $C$, there is a natural diagonal functor $C \to C^J$. If limits of shape $J$ always exist in $C$, they organize themselves into a right adjoint for this functor, and if colimits of shape $J$ always exist in $C$, they organize themselves into a left adjoint for this functor. And that's it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
How many $4\times 3$ matrices of rank three are there over a finite field of three elements? Let $M$ be the space of all $4\times 3$ matrices with entries in the finite field of three elements. Then the number of matrices of rank three in $M$ is A. $(3^4 - 3)(3^4 - 3^2)(3^4-3^3)$ B. $(3^4 - 1)(3^4 - 2)(3^4 - 3)$ C. $(3^4-1)(3^4-3)(3^4-3^2)$ D. $3^4(3^4 - 1)(3^4 - 2)$ Is there any specific formula to solve this type of problem? Is there any specific formula to calculate the number of matrices? I have no idea how to start this problem. Any guidance please.
Sorry for the previous post.the answer I am getting is c $(3^4-1)(3^4-3)(3^4-3^2)$ For the $1$st column we have $4$ places and $3$ elements to fill it.So $3^4$ choices but the elements can't be all zero.S0 we have $3^4-1$ choices .For the second column we have $3^4$ choices but the second column cant be linearly dependent with the $1$st .So $(3^4-3)$ choices .Similarly the $3$rd column is not linearly dependent to both $1$st and $2$ nd cloumn
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Factorise a matrix using the factor theorem Can someone check this please? $$ \begin{vmatrix} x&y&z\\ x^2&y^2&z^2\\ x^3&y^3&z^3\\ \end{vmatrix}$$ $$C_2=C_2-C_1\implies\quad \begin{vmatrix} x&y-x&z\\ x^2&y^2-x^2&z^2\\ x^3&y^3-x^3&z^3\\ \end{vmatrix}$$ $$(y-x) \begin{vmatrix} x&1&z\\ x^2&y+x&z^2\\ x^3&y^2+xy+x^2&z^3\\ \end{vmatrix}$$ $$(y-x)(z-x) \begin{vmatrix} x&1&1\\ x^2&y+x&z+x\\ x^3&y^2+xy+x^2&z^2+xz+x^2\\ \end{vmatrix}$$ $$R_2=R_2-xR_1\implies\quad (y-x)(z-x) \begin{vmatrix} x&1&1\\ 0&y&z\\ x^3&y^2+xy+x^2&z^2+xz+x^2\\ \end{vmatrix}$$ $$R_3=R_3-x^2R_1\implies\quad (y-x)(z-x) \begin{vmatrix} x&1&1\\ 0&y&z\\ 0&y^2+xy&z^2+xz\\ \end{vmatrix}$$ factor $x$$$\implies\quad x(y-x)(z-x) \begin{vmatrix} 1&1&1\\ 0&y&z\\ 0&y^2+xy&z^2+xz\\ \end{vmatrix}$$ $$\implies\quad x(y-x)(z-x)(yz^2-zy^2)$$ $$\implies\quad xyz(y-x)(z-x)(z-y)$$ Also I'd like practical tips on using the factor theorem for these types of questions. My understanding is that the determinant is $f(x,y,z)$ so if we hold $y$ and $z$ constant we could apply it somehow to $f(x)$ alone. I'm not that great spotting difference of squares etc and want a more fail safe alternative. Thanks in advance.
What you did is correct. But there is an easier way. Remember that for polynomial $p(x)$, if $p(a)=0$ then $(x-a)$ is a factor of $p(x)$. Denote the determinant by $\Delta$. It is obviously a polynomial in $x,\ y$ and $z$. Now, note that: * *$x=0\implies \Delta = 0$, so $x$ is a factor of $\Delta$. Same for $y = 0$ and $z=0$. *$x=y\implies \Delta = 0$, so $(x-y)$ is a factor of $\Delta$. Similarly for $y=z$ and $z=x$ Finally note that $\Delta$ is degree $6$ polynomial. So it cannot have more than $6$ linear factors, and we have listed all of them above. Clearly $$\Delta=Cxyz(x-y)(y-z)(z-x)$$ where $C$ is some constant. Taking some values (eg. $x=1,\ y=2,\ z=3$), we get $C=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090570", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solving $2x^2 \equiv 7 \pmod{11}$ I'm doing some previous exams sets whilst preparing for an exam in Algebra. I'm stuck with doing the below question in a trial-and-error manner: Find all $ x \in \mathbb{Z}$ where $ 0 \le x \lt 11$ that satisfy $2x^2 \equiv 7 \pmod{11}$ Since 11 is prime (and therefore not composite), the Chinese Remainder Theorem is of no use? I also thought about quadratic residues, but they don't seem to solve the question in this case. Thanks in advance
$$2x^2\equiv 7\pmod{11}\\2x^2\equiv -4\pmod{11}\\x^2\equiv -2\pmod{11}\\x^2\equiv 9\pmod{11}\\x^2\equiv\pm3\pmod{11}$$ As HSN mentioned since $11$ is prime than there are at most 2 solutions,
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
"Bizarre" continued fraction of Ramanujan! But where's the proof? $$\frac{e^\pi-1}{e^\pi+1}=\cfrac\pi{2+\cfrac{\pi^2}{6+\cfrac{\pi^2}{10+\cfrac{\pi^2}{14+...}}}}$$ "Bizarre" continued fraction of Ramanujan! But where's the proof? i have no training in continued fractions so i have no idea how to attempt to prove it.
The left hand side is $ \tanh \frac{\pi}{2} $ so it may be worth looking at a continued fraction for $ \tanh x $. It looks like a special case of Gauss's continued fraction (I can't link to it directly but there is a Wikipedia page on it). Also, Ramanujan is not mentioned.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Natural Lie algebra representation on function space There is natural Lie group representation of $GL(n)$ on $C^\infty(\mathbb{R}^n)$ given by \begin{align} \rho: GL(n) & \rightarrow \text{End}(C^\infty(\mathbb{R}^n)) \\ A & \rightarrow \left( \rho(A):f \rightarrow f\circ A^{-1} \right) \end{align} What is the associated lie algebra representation? It should be tangent map of $\rho$ at unit element. So take path $A(t)$ in $GL(n)$, that $A(0)=I$, $A'(0)= X$. And start differentiating $$ \frac{d}{dt}\Bigg|_{t=0} f(A^{-1}(t)x) = -f'(x)Xx $$ This gives me $\rho^*(X)(f(x)) = - f'(x)Xx$. But than $\rho([X,Y]) \neq [\rho(X),\rho(Y)]$ because $\rho([X,Y])(f)$ would contain first derivatives of $f$ but $[\rho(X),\rho(Y)](f)$ would contain second derivatives of $f$. I have one more question. Do I get into the trouble when I consider representation on $C(\mathbb R^n)$(i.e. dropping the smoothness)? Is it still sensible to ask for Lie algebra representation? Ok based on the encouragement from answer, that I got it right. I went through the calculation. Here it is: \begin{align} \rho(X)(f(x)) &= - (\partial_i f(x)) X_{ij} x_j \\ \rho(Y)\rho(X)(f(x)) &= \partial_k ((\partial_i f(x)) X_{ij} x_j) Y_{kl}x_l \\ &= (\partial_k \partial_i f(x)) X_{ij} Y_{kl} x_j x_l + (\partial_i f(x)) X_{ik} Y_{kl} x_l \\ \rho(X)\rho(Y)(f(x)) &= (\partial_k \partial_i f(x)) Y_{ij} X_{kl} x_j x_l + (\partial_i f(x)) Y_{ik} X_{kl} x_l \\ &= (\partial_k \partial_i f(x)) X_{ij} Y_{kl} x_j x_l + (\partial_i f(x)) Y_{ik} X_{kl} x_l \end{align} As you can see those second order term in $\rho(X)\rho(Y)(f(x))$ and $\rho(Y)\rho(X)(f(x))$ are the same and cancel out in the commutator $[\rho(X),\rho(Y)]$. I had to use index notation in the proof because it seamed to me as the most reasonable choice, but still I do not like it. Another option is to write down the second order term as $$ (\partial_k \partial_i f(x)) Y_{ij} X_{kl} x_j x_l = x^T Y^T f''(x) X x $$ But this is not useful when I would do third derivatives.
Your computation is correct, but you were too pessimistic about the continuation: the second derivative terms in the bracket of $\rho(X)$ and $\rho(Y)$ cancel. It becomes less reasonable to ask for a Lie algebra repn on not-smooth functions, since natural Lie algebra repns are differential operators. Nevertheless, once you start down that path, if you go all the way to the action on distributions and distributional derivatives, things become sensible again.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1090940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Given $a+b+c$, Can I calculate $a^2+b^2+c^2$? I want to calculate $a^2 + b^2 + c^2$ when I am given $a+b+c$. It is known that a,b,c are positive integers. Is there any way to find that.
If $a+b+c = k$ then $(a+b+c)^2 = k^2 \implies a^2+b^2+c^2 = k^2 - 2(ab + ac + bc)$, hence you would have to know the value of $ab + ac + bc$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091013", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Partial derivative notation I still have a little problem with notation for partial derivatives. Let $$ f(x,y) = x^2y $$ What do you think that this should equal to? $$ \frac{\partial f}{\partial x}(y,x) =\, ? $$ There are two options $2yx$ or $y^2$. Do you think that following is the same? $$ \frac{\partial f(y,x)}{\partial x}= \,? $$ And now take substitution $g(x,y)=f(y,x)$. What is following? $$ \frac{\partial g}{\partial x}(x,y)=\,? $$ I would love to hear your opinions? Based on DanielV comment, I need answer for things like these $$ \frac{\partial f(y,z)}{\partial z} $$ $$ \frac{\partial f(f(y,z),z)}{\partial z} $$ I get constantly confused during physics lectures because of this :(
If $f$ is defined by $f(x,y)$, the operation $\frac{\partial f}{\partial x}$ means that you differentiate $f$ with respect to $x$. With your example this will become $2xy$. Now, $\frac{\partial f}{\partial x}(y,x)$ means, that you FIRST differentiate with respect to $x$ (which was $2xy$) and AFTERWARDS plug in $(x,y)=(y,x)$. Therefore we obtain $\frac{\partial f}{\partial x}(y,x)=2yx(=2xy)$. Now, if $g(x,y)=f(y,x)$, then $g(x,y)=y^2x$. Again, $\frac{\partial g}{\partial x}(x,y)$ means that you have to differentiate $g$ with respect to $x$ and afterwards plug in $(x,y)=(x,y)$ (so this won't change anything). Thus, we obtain $\frac{\partial g}{\partial x}(x,y)=y^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 3 }
Is there a generalization of the Lagrange polynomial to 3D? What is a way to construct a smooth polynomial surface ($\mathbb{R}^2 \rightarrow \mathbb{R}$) with Lagrange-polynomial properties in every partial derivative? I want to try this for image interpolation.
If you fix the value $x$ one variable you have the Lagrange interpolation formula $$ f(x, y) = \sum_{k=1}^N f(x, y_k)\prod_{i \neq k} \frac{y_i - y}{y_i - y_j} $$ For each fixed value $y = y_k$ you can construct a Lagrange polynomial for the function $f(x, y_k)$. $$ f(x, y_k) = \sum_{k=1}^N f(x_k, y_k)\prod_{i \neq k} \frac{x_i - x}{x_i - x_j} $$ The result is expansion in some kind of bilinear Lagrange basis. $$ \prod_{i \neq k} \frac{y_i - y}{y_i - y_j} \prod_{i \neq k} \frac{x_i - x}{x_i - x_j} = \delta(i = k)\delta(i = k)$$ We have constructed one of many possible B-splines. In theory the Weistestrass approximation theorem says continuous functions on the interval can be uniformly approximated by polynomials. Using the argument above it makes sense that 2-variable polynomials are dense in continuous functions on the square. This is a special case of the Stone-Weierstrass Theorem: $$ \overline{ \mathbb{R}[x,y]} = C^0\big([0,1]^2\big) $$ Using linear interpolation we can always draw some kind of surface connecting your control points and having the value specified, $(x_k, y_k, f(x_k,y_k))$. The Stone Weierstrass theorem says it can always be a polynomial and the difference between your approximation $f_1(x,y) = \sum a x^m y^n$ to your original function $f$ can be made arbitrarily small (as small as you want). $$ \sup_{(x,y)\in [0,1]^2} |f_1(x,y) - f(x,y)| < \epsilon $$ It practice it would be really great to know the degree of the polynomials you find and how to build them (find the coefficients). Splines let you do just that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091281", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
What does "decidability" of a Model mean exactly? I'm looking at the theorem concerning the Model of Arithmetic: M arith = (Integers, +, *, <) is undecidable. What does the "decidability" of a model mean exactly? Does that mean that "the problem of determining if the given model satisfies any FOL statement" is undecidable? Thanks for the help!
In this case, the undecidability means there is no algorithm for deciding in general whether a sentence in our language is true in $\mathbb{Z}$. To prove the undecidability, it is easiest to use the fact that there is no algorithm for deciding whether a sentence over the the usual language for arithmetic is true in the natural numbers $\mathbb{N}$. (Here it is convenient and customary, but not necessary, to have $\mathbb{N}$ include $0$.) Then to prove the required undecidability result, we show that given any sentence $\varphi$ of "arithmetic," we can mechanically produce a sentence $\varphi'$ such that $\varphi$ is true in $\mathbb{N}$ if and only if $\varphi'$ is true in $\mathbb{Z}$. This shows that if we had an algorithm for deciding truth in $\mathbb{Z}$, we could produce an algorithm for deciding truth in $\mathbb{N}$. But there is no such algorithm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Do you decline a multiplier in reading a mathematical formula in Russian? How do you read "Порядок определителя равен $2n$"? Is it "двум эн" or is it "два эн"? And in a sum, do you read $c = a_5 + a_6$ as "це равно а пятому плюс а шестому"? Or does the plus sign interfere with the declension in some way?
Since Grigory M won't be posting an answer, I will post the answer I've received here, for future reference. For the question about $2n$, it seems that the multiplying numeral $2$ can either be left invariable as "два" or put in the dative as "двум," with the latter option considered more correct by some people who have an opinion on the matter. For $c = a_5 + a_6$, there are two options. Either retain the indices as adjectives by saying "це равно сумме а пятого и а шестого" ("$c$ is equal to the sum of $a$ the fifth and $a$ the sixth"), or, less elegantly, convert the indices to undeclined numerals as in "це равно а пять плюс а шесть" ("$c$ is equal to $a$ five plus $a$ six.") The reading "це равно а пятому плюс а шестому" ("$c$ is equal to $a$ the fifth [dat.] plus $a$ the sixth [dat.]") proposed in the OP is apparently not possible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
When the arguments of two roots of a quadratic equation are equal? Let $az^2+bz+c=0$ be a quadratic equation with complex coefficients $a,b,c$ and roots $z_1, z_2.$ How can I obtain the condition for $$\arg z_1=\arg z_2$$ containing $a,b,c?$ At present I have, Since $$z_1z_2=\dfrac{c}{a}$$ If $\arg z_1=\arg z_2,$ then $$\arg z_1=\arg z_2=\dfrac{1}{2}(\arg c-\arg a)$$ Is there any reference discuss about roots of quadratic equations with complex coefficients? Here, I have ask a similar question about $|z_1|=|z_2|,$ which has a nice solution.
The roots $(-b\pm\sqrt{b^2-4ac})/2a$ are a real positive multiples of each other. For some positive $\mu$, $$-b+\sqrt{b^2-4ac}=\mu(-b-\sqrt{b^2-4ac})$$ $$(\mu-1)^2b^2=(\mu+1)^2(b^2-4ac)$$ $$b^2=\frac{(\mu+1)^2}{\mu}ac$$ The latter function of $\mu$ can take any value not smaller than $4$, and the condition is $$b^2=\lambda ac,$$with $\lambda\ge4$, which can be expressed as $2\arg b=\arg a+\arg c\land |b|^2\ge4|ac|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Laplace tranform of $t^{5/2}$ It is asked to transform $t^{5/2}$. I did $t^{5/2}=t^3\cdot t^{-1/2}$. Then followed the table result $$L\{{t^nf(t)}\}=(-1)^n\cdot\frac{d^n}{ds^n}F(s)$$ However i got $\frac{1}{2} \cdot\sqrt\pi \cdot s^{-7/2}$ instead of $\frac{15}{8} \cdot\sqrt\pi \cdot s^{-7/2}$. Can you help me with the derivations? Thanks
For every real number $r>-1$, we have, $$L(t^r)(s)=\int_0^\infty e^{-st}t^rdt\\\hspace{60mm}=\int_0^\infty e^{-x}(\frac{x}{s})^r\frac{dx}{s}\hspace{10mm}\text{(Putting $x=st$)}\\\hspace{20mm}=\frac{1}{s^{r+1}}\int_0^\infty e^{-x}x^rdt\\\hspace{5mm}=\frac{\Gamma(r+1)}{s^{r+1}}.$$ So for $r=\frac{5}{2}$, we have, $$L(t^{\frac{5}{2}})(s)=\frac{\Gamma(\frac{5}{2}+1)}{s^{\frac{5}{2}+1}}=\Gamma(\frac{7}{2})s^{-\frac{7}{2}}$$ Now we get $$\Gamma(\frac{7}{2})=\Gamma(\frac{5}{2}+1)=\frac{5}{2}\Gamma(\frac{5}{2})=\frac{5}{2}\Gamma(\frac{3}{2}+1)=\frac{5}{2}\frac{3}{2}\Gamma(\frac{1}{2}+1)\\=\frac{5}{2}\frac{3}{2}\frac{1}{2}\Gamma(\frac{1}{2})=\frac{15}{8}\sqrt{\pi}.$$ $$\therefore L( t^{\frac{5}{2}})(s)=\frac{15}{8}\sqrt{\pi}\cdot s^{-\frac{7}{2}}.$$ For $p>0$ the Euler gamma function $\Gamma(p)$ is defined as, $$\Gamma(p)=\int_0^\infty e^{-x}x^{p-1}dx$$ A property of gamma function: $$\Gamma(p+1)=p\cdot\Gamma(p)$$ Also $\Gamma(\frac{1}{2})=\sqrt{\pi}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
how to find number of sides of an irregular polygon? An irregular polygon has one angle 126 degrees and the rest 162 degrees. how many sides are there in this irregular polygon? I have tried to find our the sides with the formula of a regular polygon; obviously it doesn't work.
Using the formula of the sum of interior angles $$162(n-1) + 126 = 180(n-2) \Rightarrow n=18. $$ Remark: I assume a simple and plane polygon.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What is the terminology for "lemma of lemma" Let's say I need to prove a main theorem, to prove which I need three lemmas. Thus in writing the structure is as follows: Lemma 1 Proof Lemma 2 Proof Lemma 3 Proof Theorem Proof But if when proving Lemma 1, I need to prove another two results ("lemmas for the proof of a lemma"). Is there any standard terminology for this situation? It seems to me that calling them Lemmas makes the structure of the article messier.
You can use Claim to set it on a lower footing than a lemma, but it's really not a big deal. One lemma can build on other lemmas. The biggest distinction you want to make is between the results you consider broad and important (theorems), narrow but important for the proof of a theorem (lemmas), and immediate consequences of theorems (corollaries).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Trace of a matrix $A$ with $A^2=I$ Let $A$ be a complex-value square matrix with $A^2=I$ identity. Then is the trace of $A$ a real value?
Another way. $$A^2-I=0,$$ Since minimal polynomial divide this polynomial (why) then it must be either $(x-1)(x+1)$ or $(x-1)$ or $(x+1)$. hence all the eigenvalues are reals (even in set $\{1,-1\}$) (why) and hence the result follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1091929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
How to define an affine transformation using 2 triangles? I have $2$ triangles ($6$ dots) on a $2D$ plane. The points of the triangles are: a, b, c and x, y, z I would like to find a matrix, using I can transform every point in the 2D space. If I transform a, then the result is x. For b the result is y, and for c the result is z And if there is a given d point, which is halfway from a to b, then after the transformation the result should be between x and y halfway. I've tried to solve it according to NovaDenizen's solution, But the result is wrong. The original triangle: $$ a = \left[ \begin{array}{ccc} -3\\ 0\\ \end{array} \right] $$ $$ b = \left[ \begin{array}{ccc} 0\\ 3\\ \end{array} \right] $$ $$ c = \left[ \begin{array}{ccc} 3\\ 0\\ \end{array} \right] $$ The x, y, z dots: $$ x = \left[ \begin{array}{ccc} 2\\ 3\\ \end{array} \right] $$ $$ y = \left[ \begin{array}{ccc} 3\\ 2\\ \end{array} \right] $$ $$ z = \left[ \begin{array}{ccc} 4\\ 3\\ \end{array} \right] $$ I've created a figure: I tried to transform the (0, 0) point, which is halfway between a and b, but the result was (3, 3.5) instead of (3, 3) The T matrix is: $$\left[ \begin{array}{ccc} 1/3 & 1/6 & 0\\ 0 & -1/2 & 0\\ 3 & 3,5 & 1\\ \end{array} \right]$$
There is a neat formula for your case $$ \vec{P}(p_1; p_2) = (-1) \frac{ \det \begin{pmatrix} 0 & \vec{x} & \vec{y} & \vec{z} \\ p_1 & a_1 & b_1 & c_1 \\ p_2 & a_2 & b_2 & c_2 \\ 1 & 1 & 1 & 1 \\ \end{pmatrix} }{ \det \begin{pmatrix} a_1 & b_1 & c_1 \\ a_2 & b_2 & c_2 \\ 1 & 1 & 1 \\ \end{pmatrix} }, $$ where $p_1$ and $p_2$ are coordinates of the point you are mapping. Here's how it works (plugging in initial points) $$ \vec{P}(p_1; p_2) = (-1) \frac{ \det \begin{pmatrix} 0 & \vec{x} & \vec{y} & \vec{z} \\ p_1 & -3 & 0 & 3 \\ p_2 & 0 & 3 & 0 \\ 1 & 1 & 1 & 1 \\ \end{pmatrix} }{ \det \begin{pmatrix} -3 & 0 & 3 \\ 0 & 3 & 0 \\ 1 & 1 & 1 \\ \end{pmatrix} } = \frac{\vec{z} - \vec{x}}{6} p_1 + \frac{2 \vec{y} - \vec{x} - \vec{z}}{6} p_2 + \frac{\vec{x} + \vec{z}}{2} = $$ now I plug in the final points $$ = \frac{1}{6} \left[ \begin{pmatrix} 4 \\ 3 \end{pmatrix} - \begin{pmatrix} 2 \\ 3 \end{pmatrix} \right] p_1 + \frac{1}{6} \left[ 2 \begin{pmatrix} 3 \\ 2 \end{pmatrix} - \begin{pmatrix} 2 \\ 3 \end{pmatrix} - \begin{pmatrix} 4 \\ 3 \end{pmatrix} \right] p_2 + \frac{1}{2} \left[ \begin{pmatrix} 2 \\ 3 \end{pmatrix} + \begin{pmatrix} 4 \\ 3 \end{pmatrix} \right] $$ Simplification yields $$ \vec{P}(p_1; p_2) = \begin{pmatrix} 1/3 \\ 0 \end{pmatrix} p_1 + \begin{pmatrix} 0 \\ -1/3 \end{pmatrix} p_2 + \begin{pmatrix} 3 \\ 3 \end{pmatrix} $$ Or you can write that in canonical form $$ \vec{P}(p_1; p_2) = \begin{pmatrix} 1/3 & 0 \\ 0 & -1/3 \end{pmatrix} \begin{pmatrix} p_1 \\ p_2 \end{pmatrix} + \begin{pmatrix} 3 \\ 3 \end{pmatrix} $$ For more details on how this all works you may check "Beginner's guide to mapping simplexes affinely", where authors of the equation elaborate on theory behind it. In the guide there is exactly the same 2D example solved as the one you are interested in. The same authors recently published "Workbook on mapping simplexes affinely" (ResearchGate as well). They discuss many practical examples there, e.g. show that the same equation may be used for color interpolation or Phong shading. You may want to check it if you want to see this equation in action.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Derivative of a continuous function. Let $g:R\to R$ be a continuous function with $g(x+y)=g(x)+g(y), \forall x,y\in R.$ Find $\frac{dg}{dx},$ if it exist.
First we have $g(0) = 2g(0)\to g(0)=0$. Then it follows that $$g'(x) = \lim_{y\to 0}\frac{g(x+y)-g(x)}{y} = \lim_{y\to 0}\frac{g(y)}{y} = \lim_{y\to 0}\frac{g(y)-g(0)}{y} = g'(0)$$ As noted below in the comments, this only shows that if $g'(x)$ exist for one $x$ then it exist for all $x$ and is indeed a constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Calculate the limit of $(1+x2^x)/(1+x3^x)$ to the power $1/x^2$ when $x\to 0$ I have a problem with this: $\displaystyle \lim_{x \rightarrow 0}{\left(\frac{1+x2^x}{1+x3^x}\right)^\frac{1}{x^2}}$. I have tried to modify it like this: $\displaystyle\lim_{x\rightarrow 0}{e^{\frac{1}{x^2}\ln{\frac{1+x2^x}{1+x3^x}}}}$ and then calculate the limit of the exponent: $\displaystyle \lim_{x\rightarrow 0}{\frac{1}{x^2}\ln{\frac{1+x2^x}{1+x3^x}}}$. But I don't know what to do next. Any ideas?
As usual we take logs here because we have an expression of the form $\{f(x) \} ^{g(x)} $. We can proceed as follows \begin{align} \log L&=\log\left\{\lim_{x\to 0}\left(\frac{1+x2^{x}}{1+x3^{x}}\right)^{1/x^{2}}\right\}\notag\\ &=\lim_{x\to 0}\log\left(\frac{1+x2^{x}}{1+x3^{x}}\right)^{1/x^{2}}\text{ (via continuity of log)} \notag\\ &=\lim_{x\to 0}\frac{1}{x^{2}}\log\frac{1+x2^{x}}{1+x3^{x}}\notag\\ &=\lim_{x\to 0}\frac{1}{x^{2}}\cdot\dfrac{\log\dfrac{1+x2^{x}}{1+x3^{x}}}{\dfrac{1+x2^{x}}{1+x3^{x}}-1}\cdot\left(\frac{1+x2^{x}}{1+x3^{x}}-1\right)\notag\\ &=\lim_{x\to 0}\frac{1}{x^{2}}\cdot\frac{x(2^{x}-3^{x})}{1+x3^{x}}\notag\\ &=\lim_{x\to 0}\frac{2^{x}-3^{x}}{x}\notag\\ &=\lim_{x\to 0}\frac{2^{x}-1}{x}-\frac{3^{x}-1}{x}\notag\\ &=\log 2-\log 3\notag\\ &=\log(2/3)\notag \end{align} and hence $L=2/3$. We have used the following standard limits here $$\lim_{x\to 1}\frac{\log x} {x-1}=1,\,\lim_{x\to 0}\frac{a^{x}-1}{x}=\log a$$ There is no need of more powerful tools like L'Hospital's Rule or Taylor's theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Computing $\int_0^\infty \frac{\sin(u)}{u}e^{-u^2 b} \, du$ I want to compute $\int_0^\infty u^{-1}(1-e^{\frac{-u^2 t}{2}})\sin(u(|x|-r))\,du$ and so ,as shown below, I want to compute $$\int_0^\infty \frac{\sin(u)}{u}e^{-u^2 b} \, du$$ Attempt We split the first integral into two: * *$\displaystyle \int_0^\infty u^{-1} \sin(u(|x|-r))\, du=\frac{\pi}{ 2}$ *$\displaystyle \int_0^\infty u^{-1} e^{\frac{-u^2 t}{2}}\sin(u(|x|-r))\,du = \int_0^\infty \frac{\sin(z)}{z} e^{-z^2 b} \, dz$ where $b=\dfrac{t}{2(|x|-r)^2}$ is a positive real constant. any suggestions? How to show that the value is $\frac{\pi}{2}erf(\frac{1}{2\sqrt{b}})$? Given that I can swap integral and sum we have $\sum_{-1}^{\infty}\frac{(-i)^{n}+(i)^{n}}{2(1+n)}\int_{0}^{\infty}x^{n}e^{-bx^{2}}dx=\sum_{-1}^{\infty}\frac{(-i)^{n}+(i)^{n}}{2(1+n)} \frac{\Gamma(\frac{n+1}{2})}{2b^{\frac{n+1}{2}}}.$
Are you wanting just an answer or the full solution. Assuming $b$ is positive and Real, Mathematica says: $$ \int_0^\infty \frac{\sin{u}}{u}e^{-u^2 b}du\quad= \quad\frac{1}{2}\pi \,Erf(\frac{1}{2\sqrt{b}}) $$ Where $Erf$ is the error function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Find $\sin^3 a + \cos^3 a$, if $\sin a + \cos a$ is known Given that $\sin \phi +\cos \phi =1.2$, find $\sin^3\phi + \cos^3\phi$. My work so far: (I am replacing $\phi$ with the variable a for this) $\sin^3 a + 3\sin^2 a *\cos a + 3\sin a *\cos^2 a + \cos^3 a = 1.728$. (This comes from cubing the already given statement with 1.2 in it.) $\sin^3 a + 3\sin a * \cos a (\sin a + \cos a) + \cos^3 a = 1.728$ $\sin^3 a + 3\sin a * \cos a (1.2) + \cos^3 a = 1.728$ $\sin^3 a + \cos^3 a = 1.728 - 3\sin a * \cos (a) *(1.2)$ Now I am stuck. What do I do next? Any hints?
Squaring $\sin a + \cos a = b$, $b^2 =\sin^2a+2\sin a \cos a + \cos^2 a = 1+2\sin a \cos a $, so $\sin a \cos a =(b^2-1)/2 $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 6, "answer_id": 1 }
The n-th prime is less than $n^2$? Let $p_n$ be the n-th prime number, e.g. $p_1=2,p_2=3,p_3=5$. How do I show that for all $n>1$, $p_n<n^2$?
In Zagier's the first 50 million prime numbers a very elementary proof is given that for $n > 200$ we have $$\pi(n) \ge \frac23 \frac{n}{\log n}$$ where $\pi(x)$ is the number of primes below $x$ (as well as a bound in the other direction). In fact, it already holds for $n \ge 3$, as can be directly checked. Suppose that $p_n > n^2$, then for this $n$ we have $\pi(n^2) < n$, but that violates the bound already for $n = 5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 0 }
Solve $x^5 - x = 0$ mod $4$ and mod $5$ I'm trying to solve $$x^5-x=0$$ in $\mathbb{Z/5Z}$ and $\mathbb{Z/4Z}$ I don't see how to proceed, could you tell me how ? Thank you
In $\mathbb{Z}/5\mathbb{Z}$, the answer follows quickly from Fermat's Little Theorem. In $\mathbb{Z}/4\mathbb{Z}$, by the Euler Totient Theorem, we have $$x^{\varphi(4)} \equiv x^2 \equiv 0 \mod 4$$ for $x$ coprime to $4$. The only remaining cases are $x=0$ and $x=2$, which can easily be checked. I observe that the above examples can be seen in broader context: again by Fermat's Little Theorem, every element $x$ of $\mathbb{Z}/p\mathbb{Z}$ (p prime) satisfies $x^p - x = 0$ (and more generally, any element of the finite field $\mathbb{F}_{p^k}$ for all $k$). Similarly, in $\mathbb{Z}/p^k\mathbb{Z}$, every element $x$ coprime to $p^k$ satisfies $x^{p^k+1} - x = 0$, since $\varphi(p^k) = p^{k-1}(p-1)$, and no other nonzero $x$ does, since we have for any noncoprime $x$ that $x = p^n$ for some $n\ge1$, so since $p^k+1 > k$, $p^k|x^{(p^k+1)}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why does it seem to decrease probability to roll more times? I am looking how to find the probability of rolling 10 sixes in 60 rolls. After reading Calculate probabilities. 40 sixes out of a 100 fair die rolls & 20 sixes out of a 100 fair die roll, it seems that I can use the following formula: $60 \choose 10$$(\frac{1}{6})^{10}$$(1-\frac{1}{6})^{60-10}$ That comes to about 13.7%. So far, so good. But then I decided to test it. What about getting 10 sixes in 120 rolls? $120 \choose 10$$(\frac{1}{6})^{10}$$(\frac{5}{6})^{120-10}$ That's only 0.37%! I'm getting something wrong, but what?
The formula you quote is for getting exactly (not at least) $10\ 6$'s in $60$ rolls. The factor $(1-\frac 16)^{60-10}$ ensures that the other dice roll something other than $6$. Your extension is correct for getting exactly (not at least) $10\ 6$'s in $120$ rolls. The expected number is $20$, so it is not surprising that the probability has decreased-you are farther from the expected number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equivalent definitions of a surface do Carmo Differential Geometry of Curves and Surfaces defines a regular surface as per the below post. Lee Introduction to Smooth Manifolds defines an embedded or regular surface to be an embedded or regular submanifold of $\mathbb{R}^3$ of codimension 1, namely a subset $S\subset\mathbb{R}^3$ that is itself a smooth $2$-dimensional manifold whose topology is the subspace topology and whose smooth structure ensures the inclusion map $\iota:S\hookrightarrow\mathbb{R}^3$ is an embedding. Question: Are these definitions equivalent? If so can someone present or point to in the literature a detailed proof.
The answer to my question is yes, and is given by Theorem 5-2 in Spivak Calculus on Manifolds combined with Theorem 5.8 in Lee Introduction to Smooth Manifolds. See also here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Prim , Kruskal or Dijkstra I've a lot of doubts on these three algorithm , I can't understand when I've to use one or the other in the exercise , because the problem of minimum spanning tree and shortest path are very similar . Someone can explain me the difference and give me some advice on how I can solve the exercise? Thank you so much.
Prim and Kruskal are for spanning trees, and they are most commonly used when the problem is to connect all vertices, in the cheapest way possible. For Example, designing routes between cities. Dijkstra, is for finding the cheapest route between two vertices. For Example, GPS navigation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1092949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How many ways to tie $2$ ropes so that we do not have a loop BdMO 2014 Higher Secondary: Avik is holding six identical ropes in his hand where the mid portion of the rope is in his fist. The first end of the ropes is lying in one side, and the other ends of the rope are lying on another side. Kamrul randomly chooses the end points of the rope from one side, then tie every two of them together. And then he did the same thing for the other end. If the probability of creating a loop after tying all six rope is expressed as $\dfrac{a}b$, (where a, b are coprime) find the value of $(a+b)$. There are $\dbinom{6}{2}$ possible pairs of ropes which we could tie.Also,the number of different ways to tie three pairs of ropes is $\dbinom{6}2\cdot\dbinom{4}2$ which is $90$.Hence the total number of ways to tie the ends on both sides(we will call these sides top and bottom) is $90\cdot 90$. Now,consider first tying the ends at the bottom.There are $\dbinom{6}{2}-3=12$ possible ways to tie the ropes at the top.Hence the total number of ways of not having any loop is $\dbinom{12}3\cdot90$.Hence,the probability of not having any loop is $\dfrac{2}{15}$.Thus the required probability is $\dfrac{13}{15}$ and the required sum is $28$. Am I right?
After doing what he did he has three ropes: $1,2,3$. Suppose he puts them on the table, He begins tying rope $1$, the probability he ties it with another rope is $\frac{4}{5}$ since there are $5$ other rope ends and only $1$ is the same rope. If it ties with another rope there are now two ropes, proceed to tie the rope that is not made up of two ropes, the probability it ties to the other rope is $\frac{2}{3}$ After this there are only two edges of one big rope and there is only one way to tie them. The probability is therefore $\frac{4}{5}\cdot\frac{2}{3}=\frac{8}{15}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
uniform continuity in $(0,\infty)$ Prove uniform continuity over $(0,\infty)$ using epsilon-delta of the function $$f(x)=\sqrt{x}\sin(1/x)$$ I know that I start with: For every $\varepsilon>0$ there is a $\delta>0$ such that for any $x_1$, $x_2$ where $$|x_1-x_2|<\delta$$ then $$|f(x_1)-f(x_2)|<\varepsilon$$ But I can't manage the expression of $f(x)$. Thank you for helping.
Note that $$\left| \sqrt{x} \sin (1/x)- \sqrt{y} \sin (1/y) \right|\\ \leqslant \sqrt{x}\left| \sin (1/x)- \sin (1/y) \right|+|\sqrt{x}-\sqrt{y}||\sin(1/y)| \\\leqslant \sqrt{x}\left| \sin (1/x)- \sin (1/y) \right|+|\sqrt{x}-\sqrt{y}|.$$ Using $\displaystyle \sin a - \sin b = 2 \sin \left(\frac{a-b}{2}\right)\cos \left(\frac{a+b}{2}\right),$ we have $$\left| \sqrt{x} \sin (1/x)- \sqrt{y} \sin (1/y) \right| \\ \leqslant 2\sqrt{x}\left| \sin \left(\frac1{2x}-\frac1{2y}\right) \right|\left| \cos \left(\frac1{2x}+\frac1{2y}\right) \right|+|\sqrt{x}-\sqrt{y}| \\ \leqslant 2\sqrt{x}\left| \sin \left(\frac1{2x}-\frac1{2y}\right) \right| +|\sqrt{x}-\sqrt{y}| \\ \leqslant2\sqrt{x}\left| \frac1{2x}-\frac1{2y} \right| +|\sqrt{x}-\sqrt{y}| \\ \leqslant \frac{\left| x-y \right|}{\sqrt{x}|y|} +|\sqrt{x}-\sqrt{y}| \\ \leqslant \frac{\left| x-y \right|}{\sqrt{x}|y|} +\frac{|x-y|}{{\sqrt{x}+\sqrt{y}}} .$$ Hence, on the interval $[a,\infty)$ we have $$\left| \sqrt{x} \sin (1/x)- \sqrt{y} \sin (1/y) \right| \leqslant \left(\frac1{a^{3/2}} + \frac1{2a^{1/2}}\right)|x - y| < \epsilon,$$ when $$ |x-y| < \delta = \epsilon \left( \frac1{a^{3/2}} + \frac1{2a^{1/2}} \right)^{-1}.$$ Therefore $f$ is uniformly continuous on $[a, \infty)$ for any $a > 0$. Note that $f$ is continuous on $(0,a]$ and $\lim_{x \to 0+}f(x) = 0.$ Then $f$ is continuously extendible and uniformly continuous on the compact interval $[0,a]$ as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding value of 1 variable in a 3-variable $2^{nd}$ degree equation The question is: If $a,b,\space (a^2+b^2)/(ab-1)=q$ are positive integers, then prove that $q=5$. Also prove that for $q=5$ there are infinitely many solutions in $\mathbf N$ for $a$ and $b$. I simplified the equation as follows:-$$\frac {a^2+b^2}{ab-1}=q\\\begin{align}\\&=>\frac {2a^2+2b^2}{ab-1}=2q\\&=>\frac{a^2+b^2+2ab+a^2+b^2-2ab}{ab-1}=2q\\&=>(a+b)^2+(a-b)^2=2q(ab-1)\\&=>2(a+b)^2+2(a-b)^2=q(4ab-4)\\&=>2(a+b)^2+2(a-b)^2=q((a+b)^2-(a-b)^2-4)\end{align}$$Substituting $a+b=X$ and $a-b=Y$, we get $$2X^2+2Y^2=q(X^2-Y^2-4)\\\begin{align}&=>(q-2)X^2=(q+2)Y^2+4q\end{align}$$Now using the quadratic residues modulo $5$, I know that $X^2,Y^2\equiv0, \pm1(mod\space 5)$. But using this directly doesn't give the answer. So what to do after this? An answer without the use of co-ordinate geometry would be greatly appreciated as it seems there is a very good resemblance of the equation to a pair of hyperbolas which are symmetric with respect to the line $y=x$ but I don't understand co-ordinate geometry very well.
For such equations: $$\frac{x^2+y^2}{xy-1}=-t^2$$ Using the solutions of the Pell equation. $$p^2-(t^4-4)s^2=1$$ You can write the solution. $$x=-4tps$$ $$y=t(p^2+2t^2ps+(t^4-4)s^2)$$ It all comes down to the Pell equation - as I said. Considering specifically the equation: $$\frac{x^2+y^2}{xy-1}=5$$ Decisions are determined such consistency. Where the next value is determined using the previous one. $$p_2=55p_1+252s_1$$ $$s_2=12p_1+55s_1$$ You start with numbers. $(p_1;s_1) - (55 ; 12)$ Using these numbers, the solution can be written according to a formula. $$y=p^2+2ps+21s^2$$ $$x=3p^2+26ps+63s^2$$ If you use an initial $(p_1 ; s_1) - (1 ; 1)$ Then the solutions are and are determined by formula. $$y=s$$ $$x=\frac{p+5s}{2}$$ As the sequence it is possible to write endlessly. Then the solutions of the equation, too, can be infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Solve the following functional equation $f(xf(y))+f(yf(x))=2xy$ Find all function $f:\mathbb{R}\rightarrow \mathbb{R}$ so that $f(xf(y))+f(yf(x)=2xy$. By putting $x=y=0$ we get $f(0)=0$ and by putting $x=y=1$ we get $f(f(1))=1$. Let $y=f(1)\Rightarrow f(x)+f(f(x)f(1))=2x$, which tells us that $f$ is an injective function. The only solutions I came up with so far are $f(x)=x$ and $f(x)=-x$.
This is all I have found so far: Just keep feeding the snake with its own tail. As you noted $f(f(1))=1$. So with $x=y=f(1)$ we then see that $1=f(f(1))=(f(1))^2$ so that $f(1)=\pm 1$. As you almost correctly noted, we have for $y=f(1)$ that $$ f(x)+f(f(x)f(1))=2xf(1) $$ which indeed shows that $f$ is injective. With that observation, the comment by Pp.. works to conclude that $f$ is odd since $f(xf(x))=x^2=f(-xf(-x))$ then implies $f(x)=-f(-x)$. Now, if we knew that $f$ was a polynomial it would be easy to conlude that $f(xf(x))=x^2$ implies $\deg f(\deg f+1)=2$ so $f$ has to have degree 1 in that case. Since $f(0)=0$ and $f(1)=\pm 1$ that would then imply $f(x)=\pm x$. But if $f$ is not a polynomial, all I can tell this far is that $f$ is an injective, odd function with $f(0)=0$ and $f(1)=\pm 1$ and a bucnh of functional equations attached to it. Maybe someone else can elaborate further ...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093413", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Deduce that if $G$ is a finite $p$-group, the number of subgroups of $G$ that are not normal is divisible by $p$ Given: Let $G$ be a group, and let $\mathcal{S}$ be the set of subgroups of $G$. For $g\in G$ and $H\in S$, let $g\cdot H=gHg^{-1}$ Question: Deduce that if $G$ is a finite $p$-group, for some prime $p$, the number of subgroups of $G$ that are not normal is divisible by $p$. Comments: * *The subgroups of $G$ that are normal have the property that $g\cdot H=gHg^{-1}=H$ *The question is equivalent to showing $p$ divides $\left|G\right|-\left|\{H\in S\vert gHg^{-1}\neq H\}\right|$ *$p$-group: $\forall g\in G,|g|=p^k$ for $k\in\mathbb{N}^+$ *I don't know where to start with this problem, been thinking about it for a while to no avail, I feel that there are too many definitions for me to consider when finding a solution. *My problems I have considered and not been able to answer are: what is the order of $|G|$? I think it must be of order that is divisible by $p$, hence the question becomes show $p$ divides the order of $\left|\{H\in S\vert gHg^{-1}\neq H\}\right|$. How do you compute the order of this set? *It seems like quite a standard problem, so I would really appreciate it if I could be directed to more information about whatever problem it is.
Let $X$ be the set of not-normal subgroups of $G$. A conjugate of a not-normal subgroup is not-normal again, so $G$ acts on $X$ by conjugation and (by construction) there are no trivial orbits, so the size of every orbit is a positive power of $p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Derangement, example, paradox? How can we explain that $!0 =1 $, but $!1=0$? I understand the case of permutations. I get why $0! =1$, and that $1!$ is also $1$. This result doesn't argue with my intuition. But, when it comes to derangements, it's hard to understand for me. Please help me.
$0!$ is one, since the number of bijections between $\emptyset$ and $\emptyset$ is $1$. moreover the one bijection that exists does not send any element of $\emptyset$ to itself, so it is also a derrangement and we conclude $!0=1$ The number of bijections from $\{1\}$ to $\{1\}$ is also $1$, so $1!=1$. However this bijection sends $1$ to itself, and therefore is not a derrangement. The number of derrangements on the set $\{1\}$ is therefore $0$, so $!1=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Help on proving a trigonometric identity involving cot and half angles Prove: $\cot\frac{x+y}{2}=-\left(\frac{\sin x-\sin y}{\cos x-\cos y}\right)$. My original idea was to do this: $\cot\frac{x+y}{2}$ = $\frac{\cos\frac{x+y}{2}}{\sin\frac{x+y}{2}}$, then substitute in the formulas for $\cos\frac{x+y}{2}$ and $\sin\frac{x+y}{2}$, but that became messy very quickly. Did I have the correct original idea, but overthink it, or is there any easier way? Hints only, please.
Try using $ \displaystyle \sin{x} - \sin{y} = \sin{\frac{2x}{2}} - \sin{\frac{2y}{2}} = 2 \frac{\sin \left( {\frac{x+y}{2} + \frac{x-y}{2}} \right) - \sin \left( {\frac{x+y}{2} - \frac{x-y}{2}} \right)}{2}$ and then the product to sum-formula for sine, i.e. $ \displaystyle \frac {\sin \left({x + y}\right) - \sin \left({x - y}\right)}{2} = \cos{x} \sin{y} $. You can prove it geometrically as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A problem for laplace operator in Sobolev space Suppose $u\in L^2(\Omega)$, then for any $\phi\in C_c^\infty(\Omega)$ we have $$ \int_\Omega v\,\phi\,dx=\int_\Omega u\Delta \phi\,dx $$ Then can I conclude that $u\in H_0^1\cap H^2(\Omega)$ and $$\Delta u=v $$ Also assume that $\Omega\subset\mathbb R^N$ is open bounded, smooth boundary.
Integrating by part, we can get $$\int_\Omega u\frac{\partial^2\phi}{\partial x_i^2}\,dx=-\int_\Omega \frac{\partial u}{\partial x_i}\frac{\partial\phi}{\partial x_i}\,dx+\int_{\partial \Omega}u \frac{\partial\phi}{\partial x_i}\nu_i\,dS$$ and $$\int_\Omega \frac{\partial u}{\partial x_i}\frac{\partial\phi}{\partial x_i}\,dx=-\int_\Omega \frac{\partial^2 u}{\partial x_i^2}\phi\,dx+\int_{\partial \Omega}\phi \frac{\partial u}{\partial x_i}\nu_i\,dS.$$ If $u\in H_0^1(\Omega)$ and $\phi\in C_c^\infty(\Omega)$, then $$\int_\Omega (v-\Delta u)\phi\,dx=0,$$ which implies $$\Delta u=v.$$ If $u\in L^1(\Omega)$ only, we can not get $\Delta u=v$ from $\int_{\Omega}v\phi\,dx=\int_{\Omega}u\Delta \phi\,dx$, because boundary terms during the integration would remain. In order to obtain $u\in H^2$, the condition $v\in L^2(\Omega)$ and $\partial \Omega\in C^2$ should be hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1093802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Riemann Hypothesis, is this statement equivalent to Mertens function statement? All: I saw one form of Riemann Hypothesis, it says: $$ \lim ∑(μ(n))/n^σ $$ Converges for all σ > ½ Is this statement same as the order of Mertens function is less than square root of n ?
Yes, since $\frac{1}{\zeta(\sigma)} = \sum{\frac{\mu(n)}{n^\sigma}}$, this is equivalent to the more canonical statement of RH that $\zeta$ has no zeroes to the right of the critical line. You also mention $M(x) = O(x^{\frac{1}{2}+\epsilon})$, you can use the Mellin transform to show this is also equivalent to RH.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1094888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }