How to solve $-20=15 \sin \theta- 30.98 \cos \theta$? How can I solve the following equation? $$-20=15 \sin \theta- 30.98 \cos \theta$$ I can't think of any way to solve it. You can't factor out cosine because of the annoying little negative twenty, and if you divide by cosine you also get nowhere.
If you're familiar with basic vectors, you can recover the $a\cos\theta + b\sin\theta$ formula like this: $$20 = 30.98 \cos \theta - 15\sin \theta = \langle 30.98, -15 \rangle \bullet \langle \cos \theta, \sin\theta \rangle = C \cos(\theta - \alpha), $$ where $C = \sqrt{30.98^2 + 15^2}$ and $\alpha = \arctan\left(- \dfrac{15}{30.98}\right)$ is the angle of the vector $\langle 30.98, -15 \rangle$. Then solve $\cos(\theta-\alpha)=20/C$ for $\theta$.
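If you want a quick numerical sanity check, a few lines of Python recover $\theta$ and plug it back into the original equation; the only assumption is the sign convention $\alpha=\operatorname{atan2}(-15,\,30.98)$ used above.

```python
import math

a, b, target = 30.98, -15.0, 20.0            # 20 = 30.98*cos(theta) - 15*sin(theta)
C = math.hypot(a, b)                          # sqrt(30.98^2 + 15^2)
alpha = math.atan2(b, a)                      # angle of the vector <30.98, -15>

# C*cos(theta - alpha) = 20  =>  theta = alpha +/- arccos(20/C)   (mod 2*pi)
phi = math.acos(target / C)
for theta in (alpha + phi, alpha - phi):
    lhs = 15 * math.sin(theta) - 30.98 * math.cos(theta)
    print(f"theta = {theta:.6f} rad,  15 sin(theta) - 30.98 cos(theta) = {lhs:.4f}")
```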
{ "language": "en", "url": "https://math.stackexchange.com/questions/929344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
A Definite Integral Simplification There was this question on integration where I agree with the answer section of my textbook up to a point. So I will only ask about this last step of the evaluation. This is what the book says - And this is how my simplification ended up - As you can see, the book's first part and mine are the same, but I get a different final answer than the book. The book has 1/2 inside the bracket where I have 1. I just don't know how the book got 1/2 or what I am missing. Can anyone help me figure it out? :)
$$\left(\frac{1}{\sqrt{2}}\right)^{2n+1}=\left(\frac{1}{\sqrt{2}}\right)\left(\frac{1}{2}\right)^n\not=\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)^n$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/929421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
how to show that $a+c>2b$ does not guarantee that $ac>b^2$? HW problem. I solved it by giving a counterexample, but it would be nice if there is an algebraic way to show this. Given that both $a,c$ are positive and that $a+c>2b$, the question asks if this guarantees that $ac>b^2$. The answer is no; a counterexample is $a=8,c=2,b=4$. I tried to show this using algebra, but my attempts are not leading to anything. I started by squaring both sides of $a+c>2b$, but this led to nothing I can see. I thought someone here would know of a smart, easy way to do this.
Edit: perhaps this is what you are thinking of? Fix $S>0$ and suppose that $2b<S$. With $b>0$, this is equivalent to $4b^2<S^2$ or $b^2\in(0,\frac{S^2}{4})$. Then a counter example is guaranteed if we can show that for any $P\in(0,\frac{S^2}{4})$, there exist $a>0,c>0$ such that $a+c=S$ and $ac=P$. Such $a$ and $c$ are roots to the quadratic polynomial $$ x^2-Sx+P. $$ With $0<P<S^2/4$, you can use the quadratic formulas to check that this polynomial indeed has 2 positive roots. Because you can just pick $P>b^2$ (but still less than $S^2/4$), this proves that $a+c>2b$ does not guarantee that $ac>b^2$. But like others have said, a counter example is the best method. Original answer: Another counter example: $7+1>2\times 3$ but $7\times 1<3^2$. This one and yours share a common trait that $|a-c|$ is relatively large compared to the smaller of the two. In general, given $a+c$ and $a,c>0$, to make "small" $ac$, try increasing $|a-c|$. This follows from $4ac = (a+c)^2-(a-c)^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/929614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Derivative of a contravariant tensor Let $T$ be a contravariant tensor so it transforms under change of coordinates like $$ T^{i'} = T^i\ \frac{\partial x^{i'} }{\partial x^i} $$ In this it seems $T^{i'}$ is a function of the "primed" coordinates, i.e. $T^{i'}(x')$, so it should be possible to calculate the partial derivative $\partial/ \partial x^{j'}$ by using the product rule: $$ \frac{\partial T^{i'}}{\partial x^{j'}} = \left( \frac{\partial T^i}{\partial x^{j}} \frac{\partial x^{j} }{\partial x^{j'}} \right) \frac{\partial x^{i'} }{\partial x^i} + T^i \left(\frac{\partial}{\partial x^{j'}}\frac{\partial x^{i'}}{\partial x^i}\right) $$ However, I don't really know what to do with the expression in the second pair of brackets, since it has "primed" and "unprimed" partials?
No, actually $T^{i'}$ is a function of the unprimed coordinates. Its components tell you the values of the primed components in terms of the unprimed ones. Example: $$x=r \cos(\phi), \quad y = r \sin(\phi).$$ Here the primed coordinates are $x$ and $y$. If you take a contravariant tensor of the form $T=(f^1(\phi, r), f^2(\phi, r))^T$, you will get $T'=(\cos(\phi)f^1(\phi, r) -r \sin(\phi) f^2(\phi, r),\; f^1(\phi, r) \sin(\phi) + f^2(\phi, r)r \cos(\phi))$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/929701", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derived categories as homotopy categories of model categories Given an abelian category A, is there a model structure on the category of complexes C(A) (or K(A) ("classical" homotopy category)) such that its homotopy category "is" the derived category D(A)?
Let $\mathbf{A}$ be an abelian (or Grothendieck) category, and consider the injective model structure on the category of cochain complexes $\mathrm{Ch}(\mathbf{A})$. Then the derived category $\mathcal{D}(\mathbf{A})$ is the homotopy category $\mathrm{Ho}(\mathrm{Ch}(\mathbf{A}))$ by inverting the weak equivalences.
{ "language": "en", "url": "https://math.stackexchange.com/questions/929752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Let $a$ and $b$ be two elements in a commutative ring $R$ and $(a, b) = R$, show that $(a^m, b^n) = R$ for any positive integers $m$ and $n$. I stumbled across a question that I have no idea how to start. I know the question is asking to show that the ideal generated by $a^m$ and $b^n$ is still the whole ring. Any sort of hints or suggestions to start? I don't really want a complete solution, just some sort of hint to start the question. Thank you!
Write $1$ as a linear combination of $a$ and $b$. Then raise both sides to the $m+n$ power.
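If a concrete instance helps, here is a small sympy sketch (with $m=2$, $n=3$ chosen only for illustration) checking the idea behind the hint: every monomial in the expansion of $(ra+sb)^{m+n}$ contains $a^m$ or $b^n$.

```python
from sympy import symbols, expand, Poly

a, b, r, s = symbols('a b r s')
m, n = 2, 3

# If 1 = r*a + s*b, then 1 = (r*a + s*b)**(m+n).  Every monomial a^i b^j in the
# expansion has i + j = m + n, hence i >= m or j >= n, so it lies in (a^m, b^n).
expansion = expand((r*a + s*b)**(m + n))
for (i, j), coeff in Poly(expansion, a, b).terms():
    assert i >= m or j >= n, (i, j)
print("every term of (ra + sb)^(m+n) is a multiple of a^m or b^n")
```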
{ "language": "en", "url": "https://math.stackexchange.com/questions/929845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Convergence of multiple integral in $\mathbb R^4$ Denote $(x,y,z,w)$ the euclidean coordinates in $\mathbb R^4$. I am trying to study the convergence of the integral $$\int \frac{1}{(x^2+y^2)^a}\frac{1}{(x^2+y^2+z^2+w^2)^b} dx\,dy\, dz\, dw$$ over a disk (or cube, or any open set) containing the origin in terms of the real parameters $a>0$ and $b>0$. If needed we can assume that $b$ is fixed and get a bound on $a$ in terms of $b$. Does anyone have an idea how could I proceed? I tried using cylindrical coordinates (in $(x,y)$ and $(z,w)$ separately) but I couldn't get anywhere. Thank you in advance.
As DiegoMath suggests, using polar coordinates for $(x,y)$ and $(z,w)$ is efficient; more precisely, define $$(x,y,z,w):=(r\cos\theta,r\sin\theta,s\cos\phi,s\sin\phi).$$ Then the integral $$I(a,b):=\int_{B(0,1)} \frac{1}{(x^2+y^2)^a}\frac{1}{(x^2+y^2+z^2+w^2)^b} dx\,dy\, dz\, dw$$ converges if and only if the integral $$\int_0^1\int_0^1 \frac{rs}{r^{2a}(r^2+s^2)^b}\mathrm ds\mathrm dr$$ converges. Using polar coordinates again, this time with $(r,s)$, that is, $r=R\cos t$, $s=R\sin t$, we derive that $I(a,b)$ converges if and only if $$\int_0^1\int_0^{2\pi}\frac{R^3\cos t\sin t }{R^{2a+2b}(\cos t)^{2a}} \mathrm dt\mathrm dR$$ converges. One can compute explicitly the integral in $t$ on $(0,2\pi)\setminus (\pm\pi/2-\delta,\pm\pi/2+\delta)$ in order to get a condition on $a$; the integral in $R$ yields a condition on $a$ and $b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/929954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Jordan form of a Matrix with Ones over a Finite Field Question: Find the Jordan Form of $n\times n$ matrix whose elements are all one, over the field $\Bbb Z_p$. I have found out that this matrix has a characteristic polynomial $x^{(n-1)}(x-n)$ and minimal polynomial $x(x-n)$, for every $n$ and $p$. Here I have two cases: If $n$ is not divisible by $p$, means $n\neq0 \pmod p$, Then the minimal polynomial is separable and thus the form is diagonal: $\operatorname{diag}(0,...,0,n)$. I am stuck in the second case, where we have $p\mid n$, thus $n=0 \pmod p$. I know that there is a block of order $2$ with $0$, but is there only one as such - and why? I do not see how to prove this using the polynomials only. Thanks
Your matrix always has rank$~1$ (if $n>0$). This means (rank-nullity) that the eigenspace for $\lambda=0$ has dimension $n-1$; this is the geometric multiplicity of that eigenvalue, and the algebraic multiplicity of $\lambda=0$ is either $n-1$ or $n$. The trace$~n$ of the matrix is the remaining eigenvalue (as the sum of all eignevalues with their algebraic multiplicities equals the trace); if it differs from $0$ in the field (that is $p\nmid n$) then your matrix is diagonalisable, in agreement with what you found from the minimal polynomial. If on the other hand $p\mid n$, then the matrix is not diagonalisable: the characteristic polynomial is $x^n$, but the eigenspace for $\lambda=0$ has dimension only $n-1$. But that's close enough to determine the Jordan type completely: every Jordan block (for $\lambda=0$) of size $d$ contributes $d-1$ to the rank of the matrix (the dimension of the image of the linear map), which rank is only$~1$. So the Jordan type must be $(2,1,1,\ldots,1)$. The same by the way applies to any matrix of rank$~1$, except that the trace does not have to be$~n$ now, so certain instances of$~n$, those which originate from the trace, should be replaced by the trace of the matrix. See also this answer.
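As a computational sanity check (numpy; the values of $n$ and $p$ below are arbitrary examples), one can verify the identity $A^2=nA$ that drives the whole argument:

```python
import numpy as np

def check(n, p):
    A = np.ones((n, n), dtype=int)
    # A^2 = n*A always, so the minimal polynomial divides x(x - n)
    assert np.array_equal(A @ A, n * A)
    # all rows are equal and nonzero mod p, so A has rank 1 over Z/p
    if n % p == 0:
        print(f"n={n}, p={p}: A^2 = 0 (mod p) but A != 0 -> Jordan type (2,1,...,1) at eigenvalue 0")
    else:
        print(f"n={n}, p={p}: minimal polynomial x(x - {n % p}) is separable -> diagonalisable")

check(6, 3)   # p divides n
check(5, 3)   # p does not divide n
```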
{ "language": "en", "url": "https://math.stackexchange.com/questions/930032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to explain Borel sets and the Stieltjes integral to a beginner maths student? The problem is that I know by definition what Borel sets and the Stieltjes integral are, but I'm not good at explaining in layman's terms what they are. Is there an easier answer than "write down the definitions until you have reduced everything to the axioms"?
A Borel set is a set that we can create by taking open or closed sets, and repeatedly taking countable unions and intersections. Any measure defined on a Borel set is called a Borel measure. The Riemann integral is a definite integral that was the foundation of integration over a given interval (before the far superior Lebesgue integration took over). The idea of Riemann integration is to approximate the region you are given, and continuously improve the approximations. We take a partition of an interval \begin{equation*} a=x_0<x_1<...<x_n=b \end{equation*} on the real line $\mathbb{R}$, and approximate the area we are integrating with "strips". As the partition gets finer, we take the limits of the Riemann sum (the area of each of the strips) to get the Riemann integral. The Stieltjes integral is a generalisation of this. It can be represented in the form \begin{equation*} \int fd\mu \end{equation*} with respect to the measure $\mu$ (however there must be no point of discontinuity). Does that help?
{ "language": "en", "url": "https://math.stackexchange.com/questions/930129", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
System of Linear Equations - how many solutions? For which real values of t does the following system of linear equations: $$ \left\{ \begin{array}{c} tx_1 + x_2 + x_3 = 1 \\ x_1 + tx_2 + x_3 = 1 \\ x_1 + x_2 + tx_3 = 1 \end{array} \right. $$ Have: a) a unique solution? b) infinitely many solutions? c) no solutions? I haven't done linear algebra in almost a year, so I'm really rusty and could use some pushes in the right direction.
I have used Gaussian elimination and then studied the rank of the coefficient matrix: 1) If $t=1$ the system reduces to just one equation, and it has $\infty^2$ solutions. 2) If $t=-2$ there are no solutions. 3) If $t\neq 1,-2$ there is a unique solution, depending on $t$.
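If you want to see the case split come out of a computation, here is a short sympy sketch; the determinant factors as $(t-1)^2(t+2)$, which is where the special values come from.

```python
from sympy import symbols, Matrix, factor, linsolve

t, x1, x2, x3 = symbols('t x1 x2 x3')
M = Matrix([[t, 1, 1], [1, t, 1], [1, 1, t]])
b = Matrix([1, 1, 1])

print(factor(M.det()))                # (t - 1)**2*(t + 2): singular only at t = 1, -2

for val in (1, -2, 0):
    print(f"t = {val}:", linsolve((M.subs(t, val), b), x1, x2, x3))
# t = 1  -> a two-parameter family (infinitely many solutions)
# t = -2 -> EmptySet (no solutions)
# t = 0  -> a unique solution (representative of t != 1, -2)
```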
{ "language": "en", "url": "https://math.stackexchange.com/questions/930252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Are there PL-exotic $\mathbb{R}^4$s? The title may or may not say it all. I know that there are examples of topological 4-manifolds with nonequivalent PL structures. In some lecture notes, Jacob Lurie mentions that not every PL manifold is smoothable, and that while smoothings exist in dimension 7 they may not be unique, as the existence of exotic $S^7$s shows when combined with the PL Poincaré conjecture in dimensions other than $4$. This phrasing suggests that smoothings of PL manifolds are unique in dimensions 1 through 6, which would mean that from the continuum of exotic smooth $\mathbb{R}^4$s we'd get a continuum of exotic PL $\mathbb{R}^4$s. But I haven't yet found a reference for the italicized fact: is it true, or is Lurie's phrasing imprecise, and if it's false, are there even so some exotic PL $\mathbb{R}^4$s?
In this survey article on differential topology, Milnor outlines a proof that every PL manifold of dimension $n \leq 7$ possesses a compatible differential structure, and whenever $n<7$ this structure is unique up to isomorphism. He includes references for the various facts he uses.
{ "language": "en", "url": "https://math.stackexchange.com/questions/930349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $a+b+c=1$ and $abc>0$, then $ab+bc+ac<\frac{\sqrt{abc}}{2}+\frac{1}{4}.$ Question: For any $a,b,c\in \mathbb{R}$ such that $a+b+c=1$ and $abc>0$, show that $$ab+bc+ac<\dfrac{\sqrt{abc}}{2}+\dfrac{1}{4}.$$ My idea: let $$a+b+c=p=1, \quad ab+bc+ac=q,\quad abc=r$$ so that the claim becomes $$q<\dfrac{\sqrt{r}}{2}+\dfrac{1}{4}.$$ Note that here $a,b,c\in \mathbb{R}$ are not necessarily positive, so we can't use Schur's inequality and the like, such as $$p^3-4pq+9r\ge 0, \quad pq\ge 9r$$ and so on. Maybe the AM-GM inequality can be used to solve it.
Edit: Incomplete approach. Only works if $a,b,c\geq 0$. By replacing $\frac{1}{4}$ on the RHS with $\frac{(a+b+c)^2}{4}$, the inequality you seek is equivalent to $$ a^2+b^2+c^2+2\sqrt{abc}>2(ab+bc+ca).\tag{I} $$ To prove (I), we use the following result $$ a^2+b^2+c^2+3(abc)^{2/3}\geq 2(ab+bc+ca)\tag{II} $$ the proof of which can be found here. Because of (II), it is enough to verify now that $$ 2\sqrt{abc}>3(abc)^{2/3}\iff abc<\left(\frac{2}{3}\right)^6 $$ but this last inequality follows from the AM-GM inequality $$ \sqrt[3]{abc}\leq\frac{a+b+c}{3}=\frac{1}{3}\implies abc\leq\left(\frac{1}{3}\right)^3<\left(\frac{2}{3}\right)^6. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/930427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Can the logic associative law be applied here? $\big(p \rightarrow (q \rightarrow r)\big)$ is logically equivalent to $\big(q \rightarrow (p \rightarrow r)\big)$ I am a little confused when dealing with the 'implies' operator $\rightarrow$ and the logic laws. To prove that these are logically equivalent, can I just apply associative law and be done with it? Or do I need to apply more laws?
$\big(p \rightarrow (q \rightarrow r)\big)$ is logically equivalent to $\big(q \rightarrow (p \rightarrow r)\big)$ Hint for an alternative proof suggestion ('$\to$'):

1. $p \rightarrow (q \rightarrow r)$ (Premise)
2. $q$ (Premise)
3. $p$ (Premise)
4. $q \rightarrow r$ (Detachment, 1, 3)

etc.
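If all you need is the equivalence itself, a brute-force truth table also settles it; here is a tiny Python check over all eight assignments (just an illustration, not a derivation in your proof system).

```python
from itertools import product

def implies(a, b):
    return (not a) or b

for p, q, r in product([False, True], repeat=3):
    assert implies(p, implies(q, r)) == implies(q, implies(p, r)), (p, q, r)
print("p -> (q -> r) and q -> (p -> r) agree on all 8 truth assignments")
```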
{ "language": "en", "url": "https://math.stackexchange.com/questions/930494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Calculus limit epsilon delta Prove using only the epsilon , delta - definition $\displaystyle\lim_{x\to 2}\dfrac{1}{x} = 0.5$ Given $\epsilon > 0 $, there exists a delta such that $ 0<|x-2|< \delta$ implies $|(1/x) – 0.5| < \epsilon$. Therefore, $$\begin{align}|(1/x) – 0.5| & = |(2-x)/2x| \\ & =|\frac{-(x-2)}{2(x-2+2)}| \\ & < |\frac{(x-2)}{2(x-2)} |=\frac{|x-2|}{2|x-2|}=\delta/2\delta=0.5 \end{align}$$ But epsilon is $> 0$. My $0.5$ is causing issue.
This is the gist of it. You are required to bound the value of $ | \dfrac 1 x - \dfrac 1 2 | $. But you are only allowed to bound $ |x - 2| $. So by putting in a restriction on $ |x - 2| \lt \delta$ you need to achieve the restriction $ | \dfrac 1 x - \dfrac 1 2 | \lt \epsilon$. And more importantly you need to prove that such a $\delta$ exists for every $\epsilon \gt 0$. So we begin with $ |f(x) - L| $. $$ | \dfrac 1 x - \dfrac 1 2 | = \frac{|x - 2|}{2|x|} \tag{1} $$ Since we can bound $|x - 2|$ as severely as we want, we may as well allow $$ |x - 2| \lt 1 \iff x \in ( 1, 3 ) \implies |x| \gt 1 \implies \frac{1}{|x|} \lt 1 $$ Using this on $(1)$ we have $$ | \dfrac 1 x - \dfrac 1 2 | \lt \dfrac{|x - 2|}{2} \;\; \text{as long as } |x - 2| \lt 1 $$ Now we have a bound on $ | \dfrac 1 x - \dfrac 1 2 | $ which is a multiple of $ |x - 2| $. So as soon as we restrict $ |x - 2| $ we automatically restrict $ | \dfrac 1 x - \dfrac 1 2 | $. All we need to do is to choose $ \delta = 2 \epsilon $ and we will be done. But not so fast: we used the additional assumption that $ |x - 2| \lt 1$. So if we let $ \delta = \min \{2 \epsilon, 1\} $ then all our required constraints will be satisfied. The most important thing to notice is that such a $\delta$ exists for any $\epsilon \gt 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/930576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that $\lim_{n\to\infty}\frac{a^n}{n!}=0$ and that $\sqrt[n]{n!}$ diverges. Let $a\in\mathbb{R}$. Show that $$ \lim_{n\to\infty}\frac{a^n}{n!}=0. $$ Then use this result to prove that $(b_n)_{n\in\mathbb{N}}$ with $$ b_n:=\sqrt[n]{n!} $$ diverges. Okay, I think that's not too bad. I write $$ \frac{a^n}{n!}=\frac{a}{n}\cdot\frac{a}{n-1}\cdot\frac{a}{n-2}\cdot\ldots\cdot a $$ and because all the factors converges to 0 resp. to $a$ (i.e. the limits exist) I can write $$ \lim_{n\to\infty}\frac{a^n}{n!}=\lim_{n\to\infty}\frac{a}{n}\cdot\lim_{n\to\infty}\frac{a}{n-1}\cdot\ldots\cdot\lim_{n\to\infty}a=0\cdot 0\cdot\ldots\cdot a=0. $$ Let $a_n:=\frac{a^n}{n!}$ and $a=1$ then $$ b_n=\frac{1}{\sqrt[n]{a_n}}. $$ Because (as shown above) $a_n\to 0$ it follows that $\sqrt[n]{a_n}\to 0$, because $$ \lvert\sqrt[n]{a_n}\rvert\leqslant\lvert a_n\rvert\to 0\implies\lvert\sqrt[n]{a_n}\rvert\to 0 $$ and therefore $b_n\to\infty$. I think that's all. Am I right?
For the first limit, hint: Let $a$ be any real number. As $n$ tends to $+\infty,$ $$ \left|\dfrac{\dfrac{a^{n+1}}{(n+1)!}}{\dfrac{a^n}{n!}}\right|=\left|\dfrac{a}{n+1}\right| < \frac{1}2, \quad n\geq2|a|, $$ and, by an elementary recursion, $$ \left|\dfrac{a^n}{n!}\right|< \frac{C(a)}{2^n} $$ for $n$ large, where $C(a)$ is a constant independent of $n$, and your sequence then tends to zero. For the second limit, you may use Riemann sums: as $n$ tends to $+\infty,$ $$\large \sqrt[n]{n!}=\displaystyle e^{\frac1n \sum_1^n\ln k}=e^{\frac1n \sum_1^n\ln (k/n)+\ln n}\sim n \: e^{\:\Large \int_0^1\ln t \:{\rm d}t } $$ giving a divergent sequence.
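Numerically both behaviours are easy to see; the snippet below (standard library only, using lgamma to avoid overflow, with the sample value $a=5$) tabulates the two sequences.

```python
import math

a = 5.0
for n in (5, 10, 20, 50, 100):
    log_fact = math.lgamma(n + 1)                  # log(n!)
    ratio = math.exp(n * math.log(a) - log_fact)   # a^n / n!  -> 0
    nth_root = math.exp(log_fact / n)              # (n!)^(1/n) -> infinity (roughly n/e)
    print(f"n={n:4d}   a^n/n! = {ratio:.3e}   (n!)^(1/n) = {nth_root:.2f}")
```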
{ "language": "en", "url": "https://math.stackexchange.com/questions/930778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 2 }
Show that lim inf Bn and lim sup Bn equals to a null set Suppose that ${B_n: n \geq 1}$ is a sequence of disjoint set. Show that $$\begin{align}\limsup_{n\rightarrow \infty}B_n &= \emptyset \text{ and}\\ \liminf_{n \rightarrow \infty}B_n&= \emptyset \end{align}$$ Where $$\begin{align}\limsup_{n\rightarrow \infty}B_n &= \{x \mid x\in B_n \text{ for infinitely many } n\} \text{ and}\\ \liminf_{n \rightarrow \infty}B_n&= \{x \mid x \notin B_n \text{ for at most finitely many } n\}. \end{align}$$ Can someone help me to solve this? Since it's a disjoint set, means there is no intersection in the sequence. So the union of intersection of Bn will be empty set and that's why lim inf Bn is and empty set too? Please help me, thanks..
As a given element $x$ cannot be in more than one $B_n$, in particular it can't be in infinitely many of them. This gives both results: the lim sup requires $x$ to be in infinitely many of the $B_n$, and the lim inf is even stronger, requiring $x$ to be in all but finitely many.
{ "language": "en", "url": "https://math.stackexchange.com/questions/930996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $\arctan a+\arctan(\frac{1}{a})=\frac{\pi}{2}$ $$\arctan a+\arctan\left(\frac 1 a \right)=\frac \pi 2$$ I have the mark scheme in front of me, and I understand where the numbers come from, but I don't understand why they do what they do. You need this part in to show it: The markscheme says: Again, I understand where the numbers come from, but can someone explain to me where the first line comes from, as well as the rest of it? It's a "Hence" question so I have to solve it this way.
Differentiate with respect to $a$ to prove that the expression is constant on $a>0$ (the derivative is identically zero there). Then taking the value at $a=1$ gives you the result. That's probably the easiest way to prove it. (For $a<0$ the same argument gives the constant $-\frac{\pi}{2}$.)
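For what it's worth, sympy confirms both steps of this hint (zero derivative on $a>0$ and the value at $a=1$); this is only a check, not a replacement for the argument.

```python
from sympy import symbols, atan, diff, simplify

a = symbols('a', positive=True)
f = atan(a) + atan(1/a)

print(simplify(diff(f, a)))   # 0, so f is constant for a > 0
print(f.subs(a, 1))           # pi/2
```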
{ "language": "en", "url": "https://math.stackexchange.com/questions/931113", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
How do I find $\lim\limits_{x \to 0} x\cot(6x) $ without using L'Hôpital's rule? How do I find $\lim\limits_{x \to 0} x\cot(6x) $ without using L'Hôpital's rule? I'm a freshman in college in the 3rd week of a calculus 1 class. I know the $\lim\limits_{x \to 0} x\cot(6x) = \frac{1}{6}$ by looking at the graph, but I'm not sure how to get here without using L'Hôpital's rule. Here is how I solved it (and got the wrong answer). Hopefully someone could tell me where I went wrong. $$\lim\limits_{x \to 0} x\cot(6x) = (\lim\limits_{x \to 0} x) (\lim\limits_{x \to 0}\cot(6x)) = (0)\left(\lim\limits_{x \to 0}\frac{\cos(6x)}{\sin(6x)}\right) = (0)(\frac{1}{0}).$$ Therefore the limit does not exist. I am also unsure of how to solve $\lim\limits_{x \to 0} \frac{\sin(5x)}{7x^2} $ without using L'Hôpital's rule.
$$ x\cot (6x) = \frac 1 6 \cdot \frac{6x}{\sin(6x)}\cdot\cos(6x) = \frac 1 6 \cdot \frac{u}{\sin u}\cdot\cos(6x) $$ and as $x$ approaches $0$, so does $u$. The fraction $u/\sin u$ has both the numerator and denominator approaching $0$, and its limit is well known to be $1$ (presumably you've seen that one before).
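A quick numerical check of the same limit (plain Python; the sample points are arbitrary):

```python
import math

for x in (0.1, 0.01, 0.001, 0.0001):
    print(f"x = {x:<7} x*cot(6x) = {x / math.tan(6 * x):.8f}")
print("1/6      =", 1 / 6)
```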
{ "language": "en", "url": "https://math.stackexchange.com/questions/931212", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
being $\mathbf{w}$ a vector, how do I calculate the derivative of $\mathbf{w}^T\mathbf{w}$? Let's say that I have a vector $\mathbf{w}$. How can I calculate the derivative in the following expression? $\frac{\mathrm{d}}{\mathrm{d}\mathbf{w}}\mathbf{w}^T\mathbf{w}$ Update: found these useful definitions
It is easier to see it in component form. Let $\hat{x_i}$ be the unit vector in the $i$-th direction, then we can express a vector as $$\mathbf{w}=\sum_{i=1}^{n}w_i \hat{x_i} \tag{1}$$ And $$\frac{d}{d\mathbf{w}}=\sum_{i=1}^{n}\hat{x}^T_i \frac{d}{dw_i} \tag{2}$$ So $$\mathbf{w}^T \mathbf{w}=\sum_{i=1}^{n}w_i^2 \tag{3}$$ $$\frac{d}{d\mathbf{w}}(\mathbf{w}^T \mathbf{w})=\sum_{i=1}^{n}2w_i\hat{x}^T_i =2\mathbf{w}^T\tag{4}$$ EDIT: I made a minor correction ($\hat{x}_i$ to $\hat{x}^T_i$)in (2) and (4) based on rych's suggestion. Now the final results is $2\mathbf{w}^T$
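If it helps to double-check the result (and the convention that the derivative is a row vector with components $2w_i$), here is a small finite-difference test in numpy; the random vector is just a test case.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=5)
f = lambda v: v @ v                       # w^T w

eps = 1e-6
numeric = np.array([(f(w + eps * np.eye(5)[i]) - f(w - eps * np.eye(5)[i])) / (2 * eps)
                    for i in range(5)])
print(np.allclose(numeric, 2 * w))        # True: each component is 2*w_i
```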
{ "language": "en", "url": "https://math.stackexchange.com/questions/931432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I determine the base of the following numbers for the operation to be correct? Given: $24_A + 17_A = 40_A$ How can I find the base $A$ so that the addition is correct?
Hint to get you started: By definition, in base b, the first digit from the right is representative of the $b^0$ place, the second from the right is $b^1$'s, just like in base 10 the number 12 is $2 * 10^0 + 1* 10^1$
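Once the place-value equation is set up, a brute-force scan over candidate bases is an easy way to check your answer; the helper below is just illustrative (it rejects digits that are not valid in the candidate base).

```python
def to_int(digits, base):
    """Value of a digit string in the given base, or None if a digit is invalid."""
    value = 0
    for ch in digits:
        d = int(ch)
        if d >= base:
            return None
        value = value * base + d
    return value

for A in range(2, 20):
    lhs = (to_int("24", A), to_int("17", A))
    rhs = to_int("40", A)
    if None not in lhs and rhs is not None and sum(lhs) == rhs:
        print("the equation holds in base", A)
```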
{ "language": "en", "url": "https://math.stackexchange.com/questions/931586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find a basis of $M_2(F)$ so that every member of the basis is idempotent Let $V=M_{2\times 2}(F)$ (the space of 2x2 matrices with coefficients in a field $F$). Find a basis $\{A_1,A_2,A_3,A_4\}$ of $V$ so that $A_j^2=A_j$ for all $j$. My attempt. Let $A_j$ be $$\begin{pmatrix} a_1 & a_2\\ a_3 & a_4\\ \end{pmatrix}. $$ We want to have $A_j^2=A_j$, so $$\begin{pmatrix} a_1^2+a_2a_3 & a_1a_2+a_2a_4\\ a_1a_3+a_3a_4 & a_3a_2+a_4^2\\ \end{pmatrix} = \begin{pmatrix} a_1 & a_2\\ a_3 & a_4\\ \end{pmatrix}. $$ If we let $a_1=1$ and $a_2=a_3=a_4=0$ then the matrix $$\begin{pmatrix} 1 & 0\\ 0 & 0\\ \end{pmatrix} $$ meets the property. Similarly if we let $a_4=1$ and $a_1=a_2=a_3=0$ the matrix $$\begin{pmatrix} 0 & 0\\ 0 & 1\\ \end{pmatrix} $$ also meets the property, but I´m having trouble finding the other two matrices, can you help me please? I would really appreciate it :)
An idempotent basis for $V$ is $$\{A_1, A_2, A_3, A_4\} =\left\{\begin{pmatrix}1&0\\0&0\end{pmatrix},\ \begin{pmatrix}1&0\\1&0\end{pmatrix},\ \begin{pmatrix}0&1\\0&1\end{pmatrix},\ \begin{pmatrix}0&0\\0&1\end{pmatrix}\right\}.$$ To see this, first notice that $A_1^2=A_1$, $A_2^2=A_2$, $A_3^2=A_3$, and $A_4^2=A_4$. Next, to show that $\operatorname{span}\{A_1, A_2, A_3, A_4\} = V$, notice that $A_2-A_1=\begin{pmatrix}0&0\\1&0\end{pmatrix}$ and $A_3-A_4=\begin{pmatrix}0&1\\0&0\end{pmatrix}$. Thus each matrix $v=\begin{pmatrix}a_1&a_2\\a_3&a_4\end{pmatrix}$ in $V$ can be written as a linear combination of $A_1, A_2, A_3, A_4$ as follows: $$v = a_1A_1+a_2(A_3-A_4)+a_3(A_2-A_1)+a_4A_4.$$ Moreover, $$0 = \begin{pmatrix}0&0\\0&0\end{pmatrix} = a_1A_1+a_2A_2+a_3A_3+a_4A_4 = \begin{pmatrix}a_1+a_2&a_3\\a_2&a_3+a_4\end{pmatrix} \implies a_1=a_2=a_3=a_4=0.$$ Therefore, $A_1, A_2, A_3$, and $A_4$ are also linearly independent, so we can conclude that they form a basis for $V$.
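As a quick sanity check of both claims (idempotence and independence), here is a short numpy verification; since all entries are 0 or 1 and the coordinate determinant is $\pm1$, the same conclusion holds over any field.

```python
import numpy as np

A1 = np.array([[1, 0], [0, 0]])
A2 = np.array([[1, 0], [1, 0]])
A3 = np.array([[0, 1], [0, 1]])
A4 = np.array([[0, 0], [0, 1]])
basis = [A1, A2, A3, A4]

# idempotence: A_j @ A_j == A_j for each basis element
print(all(np.array_equal(A @ A, A) for A in basis))

# independence: the 4x4 matrix of flattened coordinates has determinant -1
coords = np.array([A.flatten() for A in basis])
print(int(round(np.linalg.det(coords))))
```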
{ "language": "en", "url": "https://math.stackexchange.com/questions/931690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
List of unsatisfiable cores? Is there a place I can find a list of known unsatisfiable cores for X variables [no more than 10] in CNF format? Or is there an 'easy' way to find out, say I have 7 variables, how many clauses [over the 7 variables] I need for an unsatisfiable core. [I calculated 2-5 by hand, I am just trying to figure out 6+]. Thanks Staque
The question you are asking has been dealt with in the literature, and there is even a system that does it. Here is an example: https://sun.iwu.edu/~mliffito/publications/jar_liffiton_CAMUS.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/931803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Using Big-O to analyze an algorithm's effectiveness I am in three Computer Science/Math classes that are all dealing with algorithms, Big-O, that jazz. After listening, taking notes, and doing some of my own online searching, I'm pretty damn sure I understand the concept and reason behind Big-O, and what it means when one function is Big-O of the other. The problem I am having right now is when I am asked to PROVE that $f(x) = O(g(x))$. I know the basic order of which kinds of functions are a lower order than others, but its questions that have functions with a lot of combinations of different functions that stump me. I don't know where to start analyzing, and from hearing classmates and professors talk it definitely seems like there's some general rule-of-thumb/algorithm that helps with pretty much every Big-O problem. Can y'all share some insight into how you go about thinking through these problems? Give a big-O estimate for this function. For the function $g$ in your estimate $f(x)$ is $O(g(x))$, use a simple function $g$ of smallest order. $(n^3 + n^2\log n)(\log n + 1) + (17\log n + 19)(n^3 +2)$ Or this one: Find the least integer $n$ such that $f(x)$ is $O(x^n)$ for each of these functions. a) $f(x) = 2x^3 + x^2\log x$ b) $f(x) = (x^3 + 5\log x)/(x^4 + 1)$
Give a big-O estimate for this function. For the function $g$ in your estimate $f(x)$ is $O(g(x))$, use a simple function $g$ of smallest order. $$x_n=(n^3 + n^2\log n)(\log n + 1) + (17\log n + 19)(n^3 +2)$$ To solve this, one first spots the dominant term in each part, here $n^2\log n\ll n^3$, $1\ll\log n$ and $1\ll n^3$ hence one is left with $$(n^3)(\log n) + (\log n)(n^3),$$ thus, one can guess that $x_n$ is of order $n^3\log n$. Second one proves that $n^3\log n$ is indeed the smallest order big-O. First this is a big-O: indeed for every $n\geqslant3$, $1\leqslant\log n\leqslant n$ and $1\leqslant n^3$ hence $$x_n\leqslant(n^3 + n^2\cdot n)(\log n + \log n) + (17\log n + 19\log n)(n^3 +2\cdot n^3)=112\cdot n^3\log n.$$ Finally, one proves this is the best big-O: indeed $$x_n\geqslant(n^3 + 0)(\log n + 0) +0=n^3\log n.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/931884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Two definitions of connectedness: are they equivalent? A topological space $(X, \tau)$ is connected if $X$ is not the union of two nonempty, open, disjoint sets. A subset $Y \subseteq X$ is connected if it is connected in the subspace topology. In detail, the latter means that there exist no (open) sets $A$ and $B$ in the original topology $\tau$ such that $Y \cap A \neq \emptyset$, $Y \cap B \neq \emptyset$, $Y \subseteq A \cup B$, $Y \cap A \cap B = \emptyset$. Fine so far, but I am reading a text that defines a subset $Y$ of $\mathbb{R}^n$ with its usual topology to be connected by keeping the first three requirements, but making the final one more stringent: $A \cap B = \emptyset$. Is that definition really equivalent (i) in $\mathbb{R}^n$ and (ii) in general topological spaces?
When dealing with the subspace topology, one really shouldn't bother what happens outside the subspace in consideration. Nevertheless, assume that $Y$ is a subset of $\mathbb R^n$ such that there exist open sets $A,B$ such that $Y\cap A$ and $Y\cap B$ are disjoint and nonempty, but necessarily $A\cap B\ne \emptyset$ for such $A,B$ (clearly, the intersection is outside of $Y$). We shall see that this cannot happen. All we use to show this is that the topology of $\mathbb R^n$ comes from a metric: Since $A$ is open, for every point $x\in Y\cap A$, there exists $r>0$ such thet the $r$-ball $B(x,r)$ around $x$ is $\subseteq A$ and hence disjoint from $B\cap Y$. Pick such $r=r(x)$ for each $x\in A\cap Y$ and similarly for each $x\in B\cap Y$ such that $B(x,r)$ is disjoint from $A\cap Y$. Then $A':=\bigcup_{x\in A\cap Y}B(x,\frac12r(x))$ and $B':=\bigcup_{x\in B\cap Y}B(x,\frac12r(x))$ are open subsets of $\mathbb R^n$, are disjoint, $A'\cap Y=A\cap Y$, and $B'\cap Y=B\cap Y$. In a general topological space, the situation may differ. Let $X=\{a,b,c\}$ where a subset is open iff it is empty or contains $c$. Then the subspace topology of $Y=\{a,b\}$ is discrete, hence $Y$ is not connected, but any open $A,B\subset X$ witnessing this of course intersect in $c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/932088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
The area not covered by a six-pointed star In a circle with radius $r$, two equilateral triangles overlapping each other in the form of a six-pointed star, touching the circumference, are inscribed! What is the area that is not covered by the star? Progress By subtracting the area of the star from the area of the circle, the required area can be found! But how to calculate the area of the star?
The length of a side of an equilateral triangle is $$\sqrt{r^2+r^2-2\cdot r\cdot r\cdot \cos (120^\circ)}=\sqrt 3r.$$ The distance between the center of the circle and each side of an equilateral triangle is $$\sqrt 3r\cdot \frac{\sqrt 3}{2}\cdot \frac{1}{3}=\frac 12r.$$ Hence, the length of a side of the smaller equilateral triangle, which is the 'corner' of the star, is $$\frac 12r\cdot \frac{2}{\sqrt 3}=\frac{1}{\sqrt 3}r.$$ Hence, the area of the star is $$\frac{\sqrt 3}{4}\cdot (\sqrt 3r)^2+3\times \frac{\sqrt 3}{4}\left(\frac{1}{\sqrt 3}r\right)^2=\sqrt 3r^2.$$ So, the answer is $$\pi r^2-\sqrt 3r^2=(\pi-\sqrt 3)r^2.$$
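To double-check the value $\sqrt 3\,r^2$, one can also apply the shoelace formula to the twelve vertices of the star (tips on the circle of radius $r$, inner hexagon vertices at radius $r/\sqrt 3$, as computed above); this is only a numerical cross-check.

```python
import math

r = 1.0
pts = []
for k in range(6):
    tip = math.radians(60 * k)            # star tip on the circle, radius r
    notch = math.radians(60 * k + 30)     # inner hexagon vertex, radius r/sqrt(3)
    pts.append((r * math.cos(tip), r * math.sin(tip)))
    pts.append((r / math.sqrt(3) * math.cos(notch), r / math.sqrt(3) * math.sin(notch)))

# shoelace formula for the star polygon
area = 0.5 * abs(sum(x1 * y2 - x2 * y1
                     for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1])))
print(area, math.sqrt(3) * r**2, math.pi * r**2 - area)   # star area, sqrt(3) r^2, leftover area
```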
{ "language": "en", "url": "https://math.stackexchange.com/questions/932172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How can we compose two piecewise functions? Let's say $$f(x) := \left\{\begin{array}{l l}x+1 & \text{ if } -2\leq x<0 \\ x-1 & \text{ if }0\leq x\leq 2\end{array}\right.$$ How can we find $f\circ f$?
Hint: The first thing to do is to write down the domain and codomain of your function. Here we have: \begin{equation} f:[-2,2]\longrightarrow[-1,1] \end{equation} Next, since $[-1,1]\subset[-2,2]$, it is OK to compose $f$ with itself. Try to figure out yourself what $f\circ f$ does.
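If experimenting helps, here is a direct Python translation of $f$; printing a few sample points shows which branch the inner value $f(x)$ lands in, which is exactly the case split you need for $f\circ f$.

```python
def f(x):
    if -2 <= x < 0:
        return x + 1
    if 0 <= x <= 2:
        return x - 1
    raise ValueError("x outside the domain [-2, 2]")

for x in (-2, -1.5, -1, -0.5, 0, 0.5, 1, 1.5, 2):
    print(f"x = {x:5}: f(x) = {f(x):5}, f(f(x)) = {f(f(x)):5}")
```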
{ "language": "en", "url": "https://math.stackexchange.com/questions/932278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
I want to form a group of order one, is it possible? Can a set with one element be a group, or does a group need to have at least two elements?
It is possible to have a group with just one element. Take $G=\{e\}$ with the group law defined as $e\circ e=e$. Then it is easy to verify that:

* The group law is associative: $(e\circ e)\circ e=e\circ(e\circ e)=e$.
* There is an identity $e$ such that $e\circ g=g = g \circ e$ for all $g\in G$.
* Every element of $G$ has an inverse, since $e$ is its own inverse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/932333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\frac{1}{1^4}+\frac{1}{2^4}+\cdots+\frac{1}{n^4} \le 2-\frac{1}{\sqrt{n}}$ I have to show: $\displaystyle\frac{1}{1^4}+\frac{1}{2^4}+...+\frac{1}{n^4} \le 2-\frac{1}{\sqrt{n}}$ for natural $n$ I tried to show it by induction (but I think it could be possible to show it using some ineqaulity of means) so for $n=1$ we have $1=1$ so inequality holds then I assume it's true for $n$ and for $n+1$ my thesis is $\displaystyle\frac{1}{1^4}+\frac{1}{2^4}+...+\frac{1}{(n+1)^4} \le 2-\frac{1}{\sqrt{n+1}}$ I know that: $\displaystyle\frac{1}{1^4}+\frac{1}{2^4}+...+\frac{1}{(n+1)^4} \le 2-\frac{1}{\sqrt{n}}+\frac{1}{(n+1)^4}$ but later I'm not sure if I have to show $\displaystyle2-\frac{1}{\sqrt{n}}+\frac{1}{(n+1)^4}\le2-\frac{1}{\sqrt{n+1}}$ or should be $\ge$
You should show that $\displaystyle2-\frac{1}{\sqrt{n}}+\frac{1}{(n+1)^4}\le2-\frac{1}{\sqrt{n+1}}$. The other direction does not help your proof (think about it for a while).
{ "language": "en", "url": "https://math.stackexchange.com/questions/932427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Why does the product topology allow proper subsets for only finitely many elements? Consider Theorem 19.1 from Munkres' topology: The box topology on $\prod X_\alpha$ has as basis all sets of the form $\prod U_\alpha$, where $U_\alpha$ is open in $X_\alpha$ for each $\alpha$. The product topology on $\prod X_\alpha$ has as basis all sets of the form $\prod U_\alpha$, where $U_\alpha$ is open in $X_\alpha$ for each $\alpha$ and $U_\alpha$ equals $X_\alpha$ except for finitely many values of $\alpha$. I am having a hard time seeing how the last sentence follows from the subbasis-definition of the product topology. I don't find Munkres' preceding explanation to be helpful.
There are two ways to view the generation of a topology from a subbasis $\mathcal S$.

* The topology generated by $\mathcal S$ is the smallest (coarsest) topology in which all the sets in $\mathcal S$ are open.
* We first transform $\mathcal S$ into a basis $\mathcal B$ consisting of all (nonempty) finite intersections of sets in $\mathcal S$, and then take the topology generated by this basis.

These two ways can be shown to be equivalent, though the second method is somewhat more illustrative in the current situation, since we outright say that the basic open sets are finite intersections of subbasic open sets. This means that $U$ is open in the generated topology if and only if for each $x \in U$ there are $W_1 , \ldots , W_n$ (for some $n \geq 1$) in $\mathcal S$ such that $x \in W_1 \cap \cdots \cap W_n \subseteq U$. Since the subbasis for the product topology is the family of all sets $\pi_\beta^{-1} ( U )$ where $\beta$ is an index and $U$ is open in $X_\beta$, the basic open sets are of the form $$\pi_{\beta_1}^{-1} ( U_{\beta_1} ) \cap \cdots \cap \pi_{\beta_n}^{-1} ( U_{\beta_n} ),$$ where $\beta_1 , \ldots , \beta_n$ are indexes, and $U_{\beta_i}$ is open in $X_{\beta_i}$ for all $i$. We may then note that if $\beta_i = \beta_j = \beta$, then $\pi_{\beta_i}^{-1} ( U_{\beta_i} ) \cap \pi_{\beta_j}^{-1} ( U_{\beta_j} ) = \pi_{\beta}^{-1} ( U_{\beta_i} \cap U_{\beta_j} )$, and $U_{\beta_i} \cap U_{\beta_j}$ is open in $X_\beta$, and so each index need only appear (at most) once.
{ "language": "en", "url": "https://math.stackexchange.com/questions/932506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Calculating $\lim_{x\rightarrow +\infty} x \sin(x) + \cos(x) - x^2 $ I want to calculate the following limit: $\displaystyle\lim_{x\rightarrow +\infty} x \sin(x) + \cos(x) - x^2 $ I tried the following: $\displaystyle\lim_{x\rightarrow +\infty} x \sin(x) + \cos(x) - x^2 = $ $\displaystyle\lim_{x\rightarrow +\infty} x^2 \left( \frac{\sin(x)}{x} + \frac{\cos(x)}{x^2} - 1 \right) = +\infty$ because $\displaystyle\lim_{x\rightarrow +\infty} x^2 = +\infty$, $\displaystyle\lim_{x\rightarrow +\infty} \frac{\sin(x)}{x} = 0$, and $\displaystyle\lim_{x\rightarrow +\infty} \frac{\cos(x)}{x^2} = 0$, but I know from the plot of the function that this limit goes to $- \infty$, so I'm clearly doing something wrong. Sorry in advance for the simple question and perhaps for some silly mistake.
Hint: $-\frac{1}{2}x^2\geq x+1-x^2\geq (x\sin x+\cos x-x^2)$ for $x\,$ large enough. Now let $x$ go to infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/932589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Prove some properties of the $p$-adic norm I need to prove that the $p$-adic norm is an absolute value on the rational numbers; by an absolute value on a field $K$ I mean a function $|\cdot|:K \to \mathbb{R}_{\ge 0}$ such that: $$\begin{align} \text{I)}&~~~~ |x|=0 \implies x=0\\ \text{II)}&~~~~|x\cdot y|=|x|\cdot|y|\\ \text{III)}&~~~~\text{The triangle inequality.} \end{align} $$ I do not know how to prove II) and III) for the $p$-adic norm. I have for II) that: $|x|_{p}·|y|_{p}=p^{-r}·p^{-a}=p^{-r-a}$ and $|x·y|_{p}=p^{-m}$. How can $p^{-m}=p^{-r}·p^{-a}$?
For (II), write $q_1=p^mr, q_2=p^ns$ where $r,s\in\Bbb Q$ has numerator and denominator prime to $p$ and $m,n\in\Bbb Z$ Then $$|q_1q_2|_p=p^{-(n+m)}=p^{-n}p^{-m}=|q_1|_p\cdot|q_2|_p$$ For (III) assume, that $n> m$, then $$|q_1+q_2|_p=p^{-m}|p^{n-m}s+r|_p=p^{-m}\le p^{-m}+p^{-n}$$ because $$p|(p^{n-m}s+r)\implies p\bigg|\big\lbrace (p^{n-m}s+r)-p^{n-m}s\big\rbrace =r$$ a contradiction, proving the triangle inequality there. If $n=m$ write $$|q_1+q_2|_p=p^{-n}|r+s|_p$$ Then since the common denominator of $r,s$ is still prime to $p$, we can reduce to the case $r,s\in\Bbb Z$. But then it's easy, we just need to show, for integers, $r,s$ that $$|r+s|_p\le |r|_p+|s|_p.$$ For this we note that if $r=p^nr', \; s=p^ns'$ with $\gcd(p,r')=\gcd(p,s')=1$ then $r+s=p^n(r'+s')$ and if $p|(r'+s')$ then $$|r+s|_p\le p^{-n-1}\le 2p^{-n}=|r|_p+|s|_p$$ if $p\not |(r'+s')$ then $$|r+s|_p=|p^n|_p\cdot |r'+s'|= p^{-n}\le 2p^{-n}=|r|_p+|s|_p.$$ Note: By breaking things into cases like this we have an excellent observation: Almost always we have that $|q_1+q_2|_p=\max\{|q_1|_p, |q_2|_p\}$. And in the case that equality doesn't hold we have that $|q_1|_p=|q_2|_p$ which means all triangles in the $p$-adic world are isosceles, when we measure their lengths with this absolute value!
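For experimenting with these properties, the $p$-adic absolute value on $\mathbb Q$ is easy to implement with Fraction; the random test below illustrates (II), the strong triangle inequality, and the "isosceles" remark, but is of course not a proof.

```python
from fractions import Fraction
import random

def vp(q, p):
    """p-adic valuation of a nonzero rational q."""
    n, d, v = q.numerator, q.denominator, 0
    while n % p == 0:
        n //= p; v += 1
    while d % p == 0:
        d //= p; v -= 1
    return v

def abs_p(q, p):
    return Fraction(0) if q == 0 else Fraction(1, p) ** vp(q, p)

p = 3
random.seed(1)
for _ in range(1000):
    q1 = Fraction(random.randint(-50, 50) or 1, random.randint(1, 50))
    q2 = Fraction(random.randint(-50, 50) or 1, random.randint(1, 50))
    assert abs_p(q1 * q2, p) == abs_p(q1, p) * abs_p(q2, p)              # property (II)
    assert abs_p(q1 + q2, p) <= max(abs_p(q1, p), abs_p(q2, p))          # ultrametric inequality
    if abs_p(q1, p) != abs_p(q2, p):                                     # "isosceles" phenomenon
        assert abs_p(q1 + q2, p) == max(abs_p(q1, p), abs_p(q2, p))
print("all random checks passed for p =", p)
```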
{ "language": "en", "url": "https://math.stackexchange.com/questions/932763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proof that $S_3$ isomorphic to $D_3$ So I'm asked to prove that $$S_{3}\cong D_{3}$$ where $D_3$ is the dihedral group of order $6$. I know how to exhibit the isomorphism and verify every one of the $6^{2}$ pairs, but that seems so long and tedious, I'm not sure my fingers can withstand the brute force. Is there a better way?
Every symmetry of the triangle permutes its vertices, so there is an embedding $D_3\to S_3$ already just from geometry. Then it suffices to check they have the same order.
{ "language": "en", "url": "https://math.stackexchange.com/questions/932843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 3, "answer_id": 0 }
What does it really mean when we say that the probability of something is zero? Conventionally, people will say a probability of zero is equivalent to saying that the event is impossible. But when we look at probability from a mathematical perspective, probability is defined as the frequency of occurrence over the number of times the experiment is performed, in the limit as the number of trials goes to infinity. Doesn't this mean a probability of zero can describe an occurrence that is arbitrarily rare but still possible? What are some of the ways to make this line of argument more rigorous?
One way of making this rigorous is to make an analogue between probability and area. The comparison here is made precise via measure theory, but can be explained without recourse to technical definitions. Consider two "shapes": a point and the empty shape consisting of no points. Think of these geometric entities as sitting inside two-dimensional space. What is the area of a point? Zero. What is the area of nothing? Also zero. Does that mean that a point is the same as nothing? Of course not. All we can say is that the notion of area is not capable of distinguishing between them. Using the above example, we can construct two games. In the first game, take a black dartboard and paint a single point red. In the second game, leave the dartboard black. Throw a dart at either one. Is the probability of hitting the red different between these games? Does this mean that these games are equivalent? Does this thought experiment suggest anything about the limitations of the probabilistic method?
{ "language": "en", "url": "https://math.stackexchange.com/questions/932928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
computing the Haar measure for O(n) and U(n) groups My question is about how to compute the Haar measure for the O(n) and U(n) groups. For example, for the conventional parametrization of SO(3) with 3 angles, the Haar measure is $ dO= \cos(\theta_{13})\,d\theta_{12}\,d\theta_{13}\,d\theta_{23}$. How does one derive this formula? I want to compute the Haar measure for both O(n) and U(n) for general n. I am a student of physics and I am not quite familiar with the Haar measure.
You might like these lecture notes. In particular, the second page discusses such calculations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/932985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
There is a unique polynomial interpolating $f$ and its derivatives I have questions on a similar topic here, here, and here, but this is a different question. It is well-known that a Hermite interpolation polynomial (where we sample the function and its derivatives at certain points) exists uniquely. That means that the sort of "modified Vandermonde matrix" such as $$ \left[ \begin{matrix} 1&\alpha_1&\alpha_1^2&\alpha_1^3&\alpha_1^4 \\ 0&1&2\alpha_1 & 2\alpha_1^2 & 4\alpha_1^3 \\ 0&0&2&6\alpha_1 &12 \alpha_1^2 \\ 1 & \alpha_2 & \alpha_2^2 & \alpha_2^3 & \alpha_2^4\\ 0&1&2\alpha_2 &3\alpha_2^2 &4\alpha_2^3 \end{matrix} \right] $$ is invertible for $\alpha_1 \neq \alpha_2$, because $$ \left[ \begin{matrix} 1&\alpha_1&\alpha_1^2&\alpha_1^3&\alpha_1^4 \\ 0&1&2\alpha_1 & 2\alpha_1^2 & 4\alpha_1^3 \\ 0&0&2&6\alpha_1 &12 \alpha_1^2 \\ 1 & \alpha_2 & \alpha_2^2 & \alpha_2^3 & \alpha_2^4\\ 0&1&2\alpha_2 &3\alpha_2^2 &4\alpha_2^3 \end{matrix} \right] \left[ \begin{matrix} c_0 \\ c_1 \\ c_2 \\ c_3 \\ c_4 \end{matrix} \right] = \left[ \begin{matrix} f(\alpha_1) \\ f'(\alpha_1)\\ f''(\alpha_1) \\ f(\alpha_2) \\ f'(\alpha_2) \end{matrix} \right] $$ is the equation for the coefficients of the Hermite polynomial $c_4 x^4 + c_3 x^3 + \dotsb + c_0$ that agrees with $f$ at $\alpha_1$ up to order 2 and at $\alpha_2$ up to order 1. This will be the unique polynomial of degree $\leq 4$ with this property. This matrix has another interesting application, which I'll place below my question. I'm wondering: how can we show this matrix, and others of the same form, are invertible? The normal way to show uniqueness of the Hermite interpolation is through "divided differences", and I'd like a proof that doesn't rely on them. Things I've tried: * *Playing with column reduction, similar to the way that we show that the Vandermonde matrix has determinant $\prod_{i<j}(x_i - x_j)$. It got messy. *Note that the determinant of the matrix is $$ \frac{\partial^4}{\partial x_2 \partial x_3^2 \partial x_5}\left((x_1, \dotsc, x_5)\mapsto \prod_{1\leq i < j \leq 5}(x_i - x_j)\right) \left. \right|_{\substack{x_1=x_2=x_3=\alpha_1\\ x_4=x_5=\alpha_2}}. $$ Through computation I could show this is nonzero in this case, but I haven't found a way to extend this result more generally. I claim also that the invertibility of a matrix like this one (or, more precisely, its transpose) is exactly what we need in order to show that there is a unique solution to an initial value problem $f^{(n)}=\sum a_i f^{(i)}$ when we have initial data at zero, and when $x^n - \sum a_i x^i$ has repeated roots. The above case would be if $\alpha_1$ were a thrice-repeated root, and $\alpha_2$ were a twice-repeated root. Then the system would be $$ \left[ \begin{matrix} 1&\alpha_1&\alpha_1^2&\alpha_1^3&\alpha_1^4 \\ 0&1&2\alpha_1 & 2\alpha_1^2 & 4\alpha_1^3 \\ 0&0&2&6\alpha_1 &12 \alpha_1^2 \\ 1 & \alpha_2 & \alpha_2^2 & \alpha_2^3 & \alpha_2^4\\ 0&1&2\alpha_2 &3\alpha_2^2 &4\alpha_2^3 \end{matrix} \right]^T \left[ \begin{matrix} c_1 \\ c_2 \\ c_3 \\ c_4 \\ c_5 \end{matrix} \right] = \left[ \begin{matrix} f(0) \\ f'(0)\\ f''(0) \\ f^{(3)}(0) \\ f^{(4)}(0) \end{matrix} \right], $$ where we want a solution of the form $c_1 e^{\alpha_1 x} + c_2 x e^{\alpha_1 x} + c_3 x^2 e^{\alpha_1 x} + c_4 e^{\alpha_2 x} + c_5 x e^{\alpha_2 x}$.
One can prove that derivatives of the form we mention of $x_1, \dotsc, x_k \mapsto \prod\limits_{1\leq i<j\leq n}(x_i-x_j)$ are nonzero. Note that the operation of taking the derivative naturally divides the $x_i$ into groups, e.g. in the above matrix there are two groups, $x_1, x_2, x_3$ and $x_4, x_5$. So it helps to divide $\prod\limits_{1\leq i<j\leq n}(x_i-x_j)$ into groups accordingly: $p_1\cdot p_2 \cdot \dotsc \cdot p_k \cdot p_{k+1}$, where the first $k$ factors contain only the factors from one group, and $p_{k+1}$ is all the factors left over. Similarly divide the differential operators into groups $\partial^{\beta_1}\dotsb \partial ^{\beta_k}$ where $\partial^{\beta_i}$ contains all the operators pertaining to group $i$. For each group we have a unique real number $\alpha_j$ that represents the point at which we are sampling (see the example above, where there are two groups). Recall that if $x_{j}, x_{j+1}, \dotsc, x_{j+l}$ is group $m$, then $\partial ^{\beta_m}$ is of the form $$ \partial^{\beta_ i} = \frac{\partial}{\partial x_{j+1}} \frac{\partial^2}{\partial x_{j+2}^2} \dotsb \frac{\partial^l}{\partial x_{j+l}^l}\left.\right|_{x_{q}=\alpha_m, \,\,j\leq q \leq j+l }. $$ Now when we take $$(\partial^{\beta_1}\dotsb \partial^{\beta_k})(p_1\cdot p_2\cdot \dotsc \cdot p_k \cdot p_{k+1}),$$ we have the product rule, where we apply the operators to the factors in every combination and then sum up, with certain coefficients. I claim that if we do not apply all of the operators in $\partial ^{\beta_k}$ to group $k$, group $k$ will evaluate to zero, since we will have two identical rows in the corresponding matrix. This causes things to simplify greatly, and we have $$ (\partial^{\beta_1}\dotsb \partial^{\beta_k})(p_1\cdot p_2\cdot \dotsc \cdot p_k \cdot p_{k+1}) = C \partial^{\beta_1}p_1 \dotsb \partial^{\beta_k}p_k\cdot p_{k+1}, $$ where $C$ is a constant depending on the size of the various groups. This expression is nonzero, since $\partial^{\beta_i}p_i$ corresponds to an upper-triangular matrix with positive diagonal entries, and $p_{k+1}$ is just a product of nonzero terms (since $\alpha_i \neq \alpha_j$ for $i\neq j$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/933124", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Second difference $ \to 0$ everywhere $ \implies f $ linear Exercise 20-27 in Spivak's Calculus, 4th ed., asks us to show that if $f$ is a continuous function on $[a,b]$ that has $$ \lim_{h\to 0}\frac{f(x+h)+f(x-h)-2f(x)}{h^2}=0\,\,\,\text{for all }x, $$ then $f$ is linear. We do this by first treating the case $f(a)=f(b)$, and assuming there is a point $x\in (a,b)$ such that $f(x)>f(a)$. Then we look at $g(x):= f(x)+ \epsilon (x-a)(x-b)$. Suppose that we abandon the hypothesis of continuity. (The only reason we need continuity is to conclude that $g$ has a maximum on $(a,b)$). What is a counterexample, where the Schwarz second derivative is zero everywhere, and yet the function is not linear?
Counterexample: set $f(x)=-1$ for $0\leq x<1$, $f(x)=1$ for $1<x\leq 2$, and $f(1)=0$. Clearly that limit is zero for $x\not=1$, and $\frac{f(1+h)+f(1-h)-2f(1)}{h^2}=0$ at $x=1$ as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/933188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that $GL_n(F)$ is non-abelian for $n \geq 2$ and for any field $F$ I'm trying to show that $GL_n(F)$ is non-abelian for any field $F$ and $n \geq 2$. I'm doing so by constructing two $2 \times 2$ matrices that do not commute and "extending" them to $n \times n$ matrices with zeros in every other entry. We define $\displaystyle A = \left[ \begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right]$ and $\displaystyle B = \left[ \begin{array}{cc} 1 & 0 \\ 1 & 1 \end{array}\right]$. A quick calculation reveals that $AB \neq BA$, so the products of the extended $n \times n$ matrices are also not equivalent. My question is: does this suffice in proving the statement for any field $F$? The only field that I can think of where this might fail is in $\mathbb{Z}/2\mathbb{Z}$, but that turns out to not be the case.
As explained in the comments, the problem is one of extending the two matrices in such a way that 1) the same non-commutativity can be easily verified and 2) the matrices are non-singular. A standard recipe is to extend by ones along the diagonal and zeros elsewhere. Why is this "standard"? The way I think about it is in terms of linear transformations. Remember that matrix multiplication is defined to faithfully represent composition of linear transformations (matrices are with respect to a given basis). If $n>2$ we can write the vector space $V=F^n$ as a direct sum $V=F^2\oplus F^{n-2}$. The two matrices, $A$ and $B$, can be thought of as two linear transformations, $T_A$ and $T_B$, from $F^2$ to itself. Now we can define two linear transformations from $V$ to itself as follows. Using the above direct sum decomposition we can write any vector $v\in V$ as $v=(x,y)$ with $x\in F^2$ and $y\in F^{n-2}$. Define $E_A:V\to V$ by declaring $E_A(v)=(T_A(x),y)$ and similarly define $E_B:V\to V$ by $E_B(v)=(T_B(x),y)$. Because $T_A,T_B$ and the identity mapping of $F^{n-2}$ are all invertible, these linear transformations are both non-singular. They do not commute, because they do not commute on vectors of the form $v=(x,0)$ by your calculation. Thus we are done. We also see that the matrices of $E_A$ (resp. $E_B$) with respect to the usual basis have the block structure $$ E_A=\left(\begin{array}{c|c} A&0\\ \hline 0&I_{n-2} \end{array}\right)\qquad \text{and}\qquad E_B=\left(\begin{array}{c|c} B&0\\ \hline 0&I_{n-2} \end{array}\right) $$ respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/933247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
What is the universal cover of a discrete set? Just curious, what is the universal covering space of a discrete set of points? (Finite or infinite, I'd be happy to hear either/or.) If there is just a single point, I think it is its own universal covering space, since it is trivially simply connected. At two points or more, I'm at a loss.
I don't think there's a widely accepted definition of what the universal cover of a disconnected space is; the standard definition, as the maximal connected cover, only applies to (sufficiently nice) connected spaces, since a disconnected space has no connected covers. One candidate is "the disjoint union of the universal covers of its connected components," at least for a space which is the disjoint union of its connected components, in which case the answer for discrete spaces is themselves.
{ "language": "en", "url": "https://math.stackexchange.com/questions/933358", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is this set necessarily a vector space? Suppose $F$ is a field and $S$ is a non-empty set such that 1) $a+b \in S $ 2) $ \alpha a \in S$ for all $a,b \in S$ and $ \alpha \in F.$ Is $S$ always a vector space?
No, for we could take the scalar multiplication map $\cdot: F \times S \to S$ to be the zero map: the resulting structure fails to be a vector space only because it violates the identity axiom, i.e. that the element $1 \in F$ satisfies $1 \cdot s = s$ for all $s \in S$. Ferra's nice example shows that preventing this degenerate solution by demanding the identity axiom still isn't enough to guarantee that $S$ is a vector space over $F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/933433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Inequality $\int^{1}_{0}(u(x))^2\,\mathrm{d}x \leq \frac{1}{6}\int^{1}_{0} (u'(x))^2\,\mathrm{d}x+\left(\int_{0}^{1} u(x)\,\mathrm{d}x \right)^2$ I've been scratching my brain on this one for about a week now, and still don't really have a clue how to approach it. Show that for $u \in C^1[0, 1]$ the following inequality is valid: $$\int^{1}_{0}(u(x))^2\,\mathrm{d}x \leq \frac{1}{6}\int^{1}_{0} (u'(x))^2\,\mathrm{d}x+\left(\int_{0}^{1} u(x)\,\mathrm{d}x \right)^2$$
The constant $\frac{1}{6}$ can be slightly improved without using the full strength of Wirtinger Inequality. $$\begin{align} \int_0^1 u^2(t)\,dt - \left(\int_0^1 u(t)\,dt\right)^2 & = \frac{1}{2}\int_0^1\int_0^1 \left(u(x) - u(y)\right)^2\,dx\,dy \\ &= \int_0^1 \int_x^1 \left(\int_x^y u'(t)\,dt\right)^2 \,dy\,dx \\ & \le \int_0^1\int_x^1 (y-x)\left(\int_x^y (u'(t))^2\,dt\right) \,dy\,dx \tag{1} \\ & = \int_0^1 (u'(t))^2 \int_0^t \int_t^1 (y-x)\,dy\,dx\,dt \tag{2} \\ & = \frac{1}{2}\int_0^1 t(1-t)(u'(t)) ^2 \,dt \\ & \le \frac{1}{8} \int_0^1 (u'(t))^2\,dt \tag{3}\end{align}$$ $(1)$ Cauchy Schwarz Inequality. $(2)$ Change in order of integration. $(3)$ $\displaystyle t(1-t) \le \frac{1}{4} \textrm{ for } t \in (0,1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/933537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
A special case: determinant of a $n\times n$ matrix I would like to solve for the determinant of a $n\times n$ matrix $V$ defined as: $$ V_{i,j}= \begin{cases} v_{i}+v_{j} & \text{if} & i \neq j \\[2mm] (2-\beta_{i}) v_{i} & \text{if} & i = j \\ \end{cases} $$ Here, $v_i, v_j \in(0,1)$, and $\beta_i>0$. Any suggestions or thoughts are highly appreciated.
Let me consider $W:=-V$; we have $$ \det(V)=(-1)^n\det(W). $$ As already noted in the comments, $W=D-(ve^T+ev^T)$, where $v:=[v_1,\ldots,v_n]^T$, $e:=[1,\ldots,1]^T$, and $D:=\mathrm{diag}(\beta_i v_i)_{i=1}^n$. Since $D>0$, we can take $$ D^{-1/2}WD^{-1/2}=I-(\tilde{v}\tilde{e}^T+\tilde{e}\tilde{v}^T), $$ where $\tilde{v}:=D^{-1/2}v$, $\tilde{e}:=D^{-1/2}e$. Note that $\det(D^{-1/2}WD^{-1/2})=\det(D)^{-1}\det(W)$ and hence $$ \det(W)=\det(D)\det(I-(\tilde{v}\tilde{e}^T+\tilde{e}\tilde{v}^T)) =\det(I-(\tilde{v}\tilde{e}^T+\tilde{e}\tilde{v}^T))\prod_{i=1}^n v_i\beta_i. $$ Since $I-(\tilde{v}\tilde{e}^T+\tilde{e}\tilde{v}^T)$ is the identity plus a symmetric rank-at-most-two matrix, its determinant can be written as $$ \det(I-(\tilde{v}\tilde{e}^T+\tilde{e}\tilde{v}^T))=(1-\lambda_1)(1-\lambda_2), $$ where $\lambda_1$ and $\lambda_2$ are the two largest (in magnitude) eigenvalues of $BJB^T$ with $$B=[\tilde{v},\tilde{e}]\quad\text{and}\quad J=\begin{bmatrix}0 & 1\\1 & 0\end{bmatrix}.$$ Since the eigenvalues of $BJB^T$ are same as the eigenvalues of $JB^TB$ (up to some uninteresting zero eigenvalues), we have that $$ \begin{split} \det(I-BJB^T) &= \det(I-JB^TB) = \det \left( \begin{bmatrix} 1-\tilde{e}^T\tilde{v} & -\tilde{e}^T\tilde{e} \\ -\tilde{v}^T\tilde{v} & 1-\tilde{v}^T\tilde{e} \end{bmatrix} \right)\\ &= \det \left( \begin{bmatrix} 1-e^TD^{-1}v & -e^TD^{-1}e \\ -v^TD^{-1}v & 1-v^TD^{-1}e \end{bmatrix} \right)\\ &= (1-e^TD^{-1}v)^2-(e^TD^{-1}e)(v^TD^{-1}v)\\ &= \left(1-\sum_{i=1}^n\frac{1}{\beta_i}\right)^2 - \left(\sum_{i=1}^n\frac{v_i}{\beta_i}\right) \left(\sum_{i=1}^n\frac{1}{v_i\beta_i}\right). \end{split} $$ Putting everything together, $$ \det(V) = (-1)^n\left[\left(1-\sum_{i=1}^n\frac{1}{\beta_i}\right)^2 - \left(\sum_{i=1}^n\frac{v_i}{\beta_i}\right) \left(\sum_{i=1}^n\frac{1}{v_i\beta_i}\right)\right]\prod_{i=1}^n\beta_iv_i. $$ I guess that after some manipulation you could arrive to the formula given in the other answer (probably by playing further with the term $(1-e^TD^{-1}v)^2-(e^TD^{-1}e)(v^TD^{-1}v)$).
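As a sanity check on the algebra (not part of the derivation above), here is a small NumPy sketch comparing the closed-form expression against a direct determinant computation for random admissible data; the dimension and the random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
v = rng.uniform(0.1, 0.9, n)        # v_i in (0,1)
beta = rng.uniform(0.5, 2.0, n)     # beta_i > 0

V = np.add.outer(v, v)              # off-diagonal entries v_i + v_j
np.fill_diagonal(V, (2 - beta) * v) # diagonal entries (2 - beta_i) v_i

closed = (-1)**n * ((1 - np.sum(1/beta))**2
                    - np.sum(v/beta) * np.sum(1/(v*beta))) * np.prod(beta*v)

print(np.linalg.det(V), closed)     # the two numbers should agree
```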
{ "language": "en", "url": "https://math.stackexchange.com/questions/933640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Define $A = \{ 1,2,A \}$, $A$ can not be a set (Axiom of regularity). Can $A$ be a "class" or a "collection" of elements. This question is probably very naive but it does bother me and I am not sure where to look for an answer. Define $A = \{ 1,2,A \}$, $A$ can not be a set (Axiom of regularity). Can $A$ be a "class" or a "collection" of elements. Suppose it is. Don't we get the Russel's paradox again? Take the collection of all collections which does not contain themselves as an element. Does it contain itself as an element? Is there a short answer or is there a lot of knowledge involved? Thank you Remark: I know that $A$ cannot be a set because of axiom of regilarity. But, what prevents it from being a class or a collection?
A class in standard (ZF) set theory generally coincides with a property that any set may or may not have. For instance, the property might be "$\phi(x) \equiv x \not\in x$". Then $\{x \;:\; x\not\in x\}$, while not necessarily a set, is a well-defined class. But note that the elements of a class are sets, not classes. In your case, you want to know whether there is a class corresponding to $A$. Such a class would contain exactly $1$, $2$, and a third set equal to the entire class. This could only occur if $A$ were itself a set, which you've already noted is ruled out by the axiom of regularity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/933720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Gosper's Identity $\sum_{k=0}^n{n+k\choose k}[x^{n+1}(1-x)^k+(1-x)^{n+1}x^k]=1 $ The page on Binomial Sums in Wolfram Mathworld http://mathworld.wolfram.com/BinomialSums.html (Equation 69) gives this neat-looking identity due to Gosper (1972): $$\sum_{k=0}^n{n+k\choose k}[x^{n+1}(1-x)^k+(1-x)^{n+1}x^k]=1 $$ Would anyone know if there is a simple proof of this identity without using induction?
Let $m>0$ a natural number and let's consider a set of polynomial functions: $$A_{0}\left(x\right)=1$$ $$A_{n}\left(x\right)=\sum_{k=0}^{n}\binom{n+m}{k}x^{k}\left(1-x\right)^{n-k},\,\,\,\,n=1,\,2,\,3,\,\ldots$$ We have $$A_{l}\left(x\right)=\sum_{k=0}^{l}\binom{l-1+m}{k}x^{k}\left(1-x\right)^{l-k}+\sum_{k=1}^{l}\binom{l-1+m}{k-1}x^{k}\left(1-x\right)^{l-k}$$ $$A_{l}\left(x\right)=\left(1-x\right)A_{l-1}\left(x\right)+\binom{l-1+m}{l}x^{l}+xA_{l-1}\left(x\right)$$ $$A_{l}\left(x\right)=A_{l-1}\left(x\right)+\binom{l-1+m}{l}x^{l},\,\,\,l=1,\,2,\,\ldots,\,n$$ Then $$\sum_{l=1}^{n}A_{l}\left(x\right)=\sum_{l=0}^{n-1}A_{l}\left(x\right)+\sum_{l=1}^{n}\binom{l-1+m}{l}x^{l}$$ which leads to $$A_{n}\left(x\right)=\sum_{k=0}^{n}\binom{m+k-1}{k}x^{k}.$$ i.e. $$\sum_{k=0}^{n}\binom{n+m}{k}x^{k}\left(1-x\right)^{n-k}=\sum_{k=0}^{n}\binom{m+k-1}{k}x^{k}\, \, \left(1\right)$$ If we set $m=n+1$, we get: $$\sum_{k=0}^{n}\binom{2n+1}{k}x^{k}\left(1-x\right)^{n-k}=\sum_{k=0}^{n}\binom{n+k}{k}x^{k}$$ for any natural number $n>0$. Now $$1=\left[x+\left(1-x\right)\right]^{2n+1}=\sum_{k=0}^{2n+1}\binom{2n+1}{k}x^{k}\left(1-x\right)^{2n+1-k}$$ $$\sum_{k=0}^{n}\binom{2n+1}{k}x^{k}\left(1-x\right)^{2n+1-k}+\sum_{k=n+1}^{2n+1}\binom{2n+1}{k}x^{k}\left(1-x\right)^{2n+1-k}=1$$ $$\sum_{k=0}^{n}\binom{2n+1}{k}x^{k}\left(1-x\right)^{2n+1-k}+\sum_{k=n+1}^{2n+1}\binom{2n+1}{2n+1-k}x^{k}\left(1-x\right)^{2n+1-k}=1$$ $$\sum_{k=0}^{n}\binom{2n+1}{k}x^{k}\left(1-x\right)^{2n+1-k}+\sum_{k=0}^{n}\binom{2n+1}{k}x^{2n+1-k}\left(1-x\right)^{k}=1$$ $$\left(1-x\right)^{n+1}\sum_{k=0}^{n}\binom{2n+1}{k}x^{k}\left(1-x\right)^{n-k}+x^{n+1}\sum_{k=0}^{n}\binom{2n+1}{k}\left(1-x\right)^{k}x^{n-k}=1$$ Using $\left(1\right)$ we obtain: $$\left(1-x\right)^{n+1}\sum_{k=0}^{n}\binom{n+k}{k}x^{k}+x^{n+1}\sum_{k=0}^{n}\binom{n+k}{k}\left(1-x\right)^{k}=1$$ or $$\sum_{k=0}^{n}\binom{n+k}{k}\left[x^{n+1}\left(1-x\right)^{k}+\left(1-x\right)^{n+1}x^{k}\right]=1$$ which concludes the proof.
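If you want to double-check Gosper's identity symbolically for a few small values of $n$ before working through the proof, a short SymPy sketch like the following (the range of test values is an arbitrary choice) reduces the left-hand side to $1$:

```python
import sympy as sp

x = sp.symbols('x')

for n in range(1, 6):   # a few small test values of n
    lhs = sum(sp.binomial(n + k, k) *
              (x**(n + 1) * (1 - x)**k + (1 - x)**(n + 1) * x**k)
              for k in range(n + 1))
    print(n, sp.simplify(sp.expand(lhs)))   # prints 1 for every n
```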
{ "language": "en", "url": "https://math.stackexchange.com/questions/933824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Can someone help me to find a counter example that shows that $a \equiv b \mod m$ does not imply $(a+b)^m \equiv a^m +b^m \mod m$ Can someone help me to find a counter example that shows that $a \equiv b \mod m$ does not imply $(a+b)^m \equiv a^m +b^m \mod m$. I have tried many different values but I can't seem to find one. I tried to prove it but I also cannot do it. Thanks.
Take $a=1$, $b=2$ and $m=6$. $(a+b)^6 = a^6 + 6 a^5 b + 15 a^4 b^2 + 20 a^3 b^3 + 15 a^2 b^4 + 6 a b^5 + b^6$ $(a+b)^6 \equiv a^6 - a^3 b^3 + b^6 \bmod 3$ If $(a+b)^6 \equiv a^6 + b^6 \bmod 6$, then $3$ must divide $a$ or $b$. It is enough to take $a=1$ and $b=2$ to have $(a+b)^6 \not\equiv a^6 + b^6 \bmod 6$. In general, $(a+b)^6 \equiv a^6 + b^6 \bmod 6$ iff $2$ divides $ab(a+b)$ and $3$ divides $ab$.
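The divisibility criterion in the last line is easy to confirm by brute force; here is a small Python check (the search range is an arbitrary choice) that $(a+b)^6 \equiv a^6+b^6 \pmod 6$ holds exactly when $2\mid ab(a+b)$ and $3\mid ab$:

```python
for a in range(1, 50):
    for b in range(1, 50):
        lhs_ok = ((a + b)**6 - a**6 - b**6) % 6 == 0
        rhs_ok = (a * b * (a + b)) % 2 == 0 and (a * b) % 3 == 0
        assert lhs_ok == rhs_ok
print("criterion verified for 1 <= a, b < 50")
```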
{ "language": "en", "url": "https://math.stackexchange.com/questions/933915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A question involving the probability generating function I'm stumped by the following exercise: The probability generating function for X is given by $g_{X}(t)$. What is the probability generating function for $X+1$ and $2X$, in terms of $g_{X}(t)$? For the first part I think I've got an answer down although I can't say I'm 100% sure it's correct. This is what I got: $g_{X+1}(t)=E(t^{X+1})=E(t^{X}*t)=tE(t^{X})=tg_{X}(t)$ When it comes to the probability generating function for 2X, I don't know what to do though. I'd be grateful for any answers.
WLOG, consider that $X$ is a discrete random variable, which assumes values in the set $\mathcal{X}$. Using the definitions.... $$ g_{2X}(t) = \mathbb{E}[t^{2X}] = \sum_{x \in \mathcal{X}} p(x)t^{2x} = \sum_{x \in \mathcal{X}} p(x)(t^{2})^x = g_X(t^2).$$ In general: $$ g_{aX+b}(t) = \mathbb{E}[t^{aX+b}] = \sum_{x \in \mathcal{X}} p(x)t^{ax+b} = \sum_{x \in \mathcal{X}} p(x)(t^{a})^xt^b = t^bg_X(t^a).$$
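A quick symbolic check of these rules with a concrete toy distribution (the probabilities and the values of $a,b$ below are arbitrary choices) can be done in SymPy:

```python
import sympy as sp

t = sp.symbols('t')
# toy distribution: P(X=0)=1/2, P(X=1)=1/3, P(X=2)=1/6
dist = {0: sp.Rational(1, 2), 1: sp.Rational(1, 3), 2: sp.Rational(1, 6)}

def pgf(values, arg):
    return sum(p * arg**x for x, p in values.items())

gX = pgf(dist, t)
g2X = pgf({2 * x: p for x, p in dist.items()}, t)        # pgf of 2X
g3X1 = pgf({3 * x + 1: p for x, p in dist.items()}, t)   # pgf of 3X+1

print(sp.simplify(g2X - gX.subs(t, t**2)))                # 0, i.e. g_{2X}(t) = g_X(t^2)
print(sp.simplify(g3X1 - t * gX.subs(t, t**3)))           # 0, i.e. g_{aX+b}(t) = t^b g_X(t^a)
```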
{ "language": "en", "url": "https://math.stackexchange.com/questions/934004", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Maximal commutative subring of the ring of $2 \times 2$ matrices over the reals Motivated by complex numbers, I noticed that the set of all elements of the following forms a commutative sub-ring of $M_2(\mathbb{R})$: \begin{pmatrix} x & y\\ -y & x \end{pmatrix} Is this sub-ring maximal w.r.t commutativity? If this sub-ring is commutatively maximal, are there 'other' such maximally commutative sub-rings? P.S: 'other'=non-isomorphic.
Maximal dimension of a commutative subalgebra of $n$ by $n$ matrices is $$ 1 + \left\lfloor \frac{n^2}{4} \right\rfloor. $$ In even dimension $2m,$ this is realized by $$ \left( \begin{array}{rr} \alpha I & A \\ 0 & \alpha I \end{array} \right), $$ where $A$ is any $m$ by $m$ matrix and $I$ is the $m$ by $m$ identity. This is a theorem of Schur, see https://mathoverflow.net/questions/29087/commutative-subalgebras-of-m-n and references in Robin Chapman's answer. Note $$ \left( \begin{array}{rr} \alpha I & A \\ 0 & \alpha I \end{array} \right) \left( \begin{array}{rr} \beta I & B \\ 0 & \beta I \end{array} \right) = \left( \begin{array}{rr} \alpha \beta I & \beta A + \alpha B \\ 0 & \alpha \beta I \end{array} \right), $$ same as $$ \left( \begin{array}{rr} \beta I & B \\ 0 & \beta I \end{array} \right) \left( \begin{array}{rr} \alpha I & A \\ 0 & \alpha I \end{array} \right) = \left( \begin{array}{rr} \alpha \beta I & \beta A + \alpha B \\ 0 & \alpha \beta I \end{array} \right). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/934094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Compositions of filters on finite unions of Cartesian products Let $\Gamma$ be the lattice of all finite unions of Cartesian products $A\times B$ of two arbitrary sets $A,B\subseteq U$ for some set $U$. See this note for other equivalent ways to describe the set $\Gamma$: http://www.mathematics21.org/binaries/funcoids-are-filters.pdf It is obvious that $G\circ F\in\Gamma$ for every binary relations $F,G\in\Gamma$. Thus for every filters $f$ and $g$ on $\Gamma$ we can define $g\circ f$ as the filter on $\Gamma$ defined by the base $\{ G\circ F \mid F\in f,G\in g \}$. I need to prove (or disprove) that if $K\in g\circ f$ then there exist $F\in f$ and $G\in g$ such that $K\supseteq G\circ F$ (for all filters $f$, $g$ on $\Gamma$). It seems that it's enough to prove this conjecture only for the case if $f$ or $g$ is a principal filter.
Let $f$, $g$ be filters on $Γ$. "define $g∘f$ as the filter on $Γ$ defined by the base $\{G∘F∣F∈f,G∈g\}$." So as claimed, $g∘f$ is the filter generated by (=smallest filter containing) the base $\{G∘F∣F∈f,G∈g\}$. This means that $\{G∘F∣F∈f,G∈g\}$ is a base for the filter $g\circ f$. Which, by definition of a base for a filter, implies: For each $K$, if $K∈g∘f$ then there exist $F∈f$ and $G∈g$ such that $K⊇G∘F$. So there remains nothing to prove.
{ "language": "en", "url": "https://math.stackexchange.com/questions/934172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Can a game with negative expectation still have a positive utility? Intuitively, I think not. But I can't clearly prove why. Specifically, I've been thinking about lottery games, where the expectation is obviously negative. But can the utility of hitting the jackpot ever be enough to justify playing?
A common assumption about a utility function is that it is concave down: $U(tx + (1-t)y) \geq tU(x) + (1-t)U(y)$ for any $x,y$ and $0 \leq t \leq 1$. One can prove that any concave down function also satisfies the condition that, if $p_i$, $1 \leq i \leq n$ are nonnegative numbers with $\sum_i p_i = 1$, then for any $x_i$ we have: $$U(\sum_ip_ix_i) \geq \sum_i p_i U(x_i)$$ In other words, if $E$ is the expectation of the game and $E_U$ the expected utility: $$U(E) \geq E_U$$ It is also generally assumed that $U$ is increasing, and $U(0) = 0$. Hence if $E < 0$ one has $E_U \leq U(E) < U(0) = 0$, so the game has negative expected utility. If one does not assume that utility is concave down this result does not hold, as shown in Jorge's answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/934288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why can't a set have two elements of the same value? Suppose I have two sets, $A$ and $B$: $$A = \{1, 2, 3, 4, 5\} \\ B = \{1, 1, 2, 3, 4\}$$ Set $A$ is valid, but set $B$ isn't because not all of its elements are unique. My question is, why can't sets contain duplicate elements?
Short Answer: The axioms of set theory do not allow us to distinguish between two sets like $A = \{1,2\}$ and $B = \{1,1,2\}$, and every valid statement about sets should be derived in some way from the axioms. Explained: The logic of set theory is extensional, meaning that what matters is not the nature of a set but only its extension. The set $A = \{1,1,2,3,4\}$ could be considered different from $B = \{1,2,3,4\}$ in intension, but they are not different in extension: since $1 = 1$, both sets have the same elements. Even if one instance of the number $1$ precedes the other, the axioms of set theory do not allow us to distinguish between two sets using the order of their elements, because the notion of order is not defined by the axioms of the theory. To do that, the ordered pair is defined as follows: $$\left(a,b\right) \equiv \{\{a\},\{a,b\}\}$$ Thus $\left(a,b\right) \neq \left(b,a\right)$ since $\{\{a\},\{a,b\}\} \neq \{\{b\},\{b,a\}\}$. Thus, the set $A$ can't be distinguished from $B$ using its order. Moreover, the cardinality of two sets is never used to prove that two sets are different. What about the ordered pair $\left(1,1\right) \equiv \{\{1\},\{1,1\}\}$? Since $1 = 1$ we have $\{1,1\} \equiv \{1\}$; this pair is a well known structure called a singleton, obtained as a consequence of the axiom of pairing and (as you can see) the axiom of extension. Moreover, since $\{1\}$ is present in $\left(1,1\right)$ two times, $\left(1,1\right) \equiv \{\{1\}\}$, another singleton. This is not a problem for analysis in $\mathbb{R}^2$, because this pair is different from $\{1\}$, a subset of $\mathbb{R}$, and different from every other pair in $\mathbb{R}^2$. The moral of this story is that you can't distinguish between the first and the second element of $\left(1,1\right)$, and that's fine. Although the symbol "$\equiv$" is used to introduce syntactic abbreviations rather than the semantic equality denoted by "$=$", this difference is only important in the study of formal languages, not really for this case, since the axiom of extension is what allows us to treat $\{1,1\} \equiv \{1\}$ as $\{1,1\} = \{1\}$. Just read Section 3 of the book Naive Set Theory by Paul R. Halmos, which introduces the term singleton in that way, and that is how it is used in the specialized literature. Another reference for understanding the difference between intension and extension: https://plato.stanford.edu/entries/logic-intensional/
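As an informal illustration only (an analogy, not part of the set-theoretic argument): Python's built-in `set` type is extensional in exactly this sense, so listing an element twice changes nothing:

```python
A = {1, 2, 3, 4, 5}
B = {1, 1, 2, 3, 4}

print(B == {1, 2, 3, 4})   # True: the repeated 1 is collapsed
print(len(B))              # 4
print(A == B)              # False: 5 is in A but not in B
```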
{ "language": "en", "url": "https://math.stackexchange.com/questions/934378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "55", "answer_count": 9, "answer_id": 3 }
the shortest path between two points and the unit sphere and the arc of the great circle Prove that the shortest path between two points on the unit sphere is an arc of a great circle connecting them Great Circle: the equator or any circle obtained from the equator by rotating further: latitude lines are not the great circle except the equator I need help with starting this question, because I am not quite sure how to prove this.
HINT: (Edited 8/27/2021) Start with two points on the equator. Every great circle (except one) meets the shorter great circle arc joining them in at most one point. Let $\Sigma$ be the set of great circles meeting it in one point. Show that for any other curve $C$ joining the points, there must be an open set containing $\Sigma$ of great circles meeting $C$ in at least two points.
{ "language": "en", "url": "https://math.stackexchange.com/questions/934488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Does $\vec{a} \times \vec b=\vec a \times\vec c$ and $\vec{a} \cdot \vec b=\vec a \cdot\vec c$ imply that $\vec b = \vec c$? Does $\vec{a} \times \vec b=\vec a \times\vec c$ and $\vec{a} \cdot \vec b= \vec a \cdot\vec c$ imply that $\vec b = \vec c$ if $\vec a \not=0$ ? My attempt- $\vec{a} \times {(\vec b- \vec c)} = 0 \implies$ they are parallel $\vec{a} \cdot ( \vec b-\vec c) = 0 \implies $ they are perpendicular Thus we can conclude $\vec b = \vec c$ Is my proof correct?
Yes, your proof is correct: for both vector equations to hold with $\vec a \neq \vec 0$, the vectors $\vec b$ and $\vec c$ must be equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/934604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $R = \{ (x,y) \in \mathbb{Z} \times \mathbb{Z} : 4 \mid(5x+3y)\}$ is an equivalence relation. Let $R$ be a relation on $\mathbb{Z}$ defined by $$ R = \{ (x,y) \in \mathbb{Z} \times \mathbb{Z} : 4 \mid (5x+3y)\}.$$ Show that $R$ is an equivalence relation. I'm having a bit of trouble with this exercise in my book and I am trying to study. Can anyone give guidance for this? I know we have to show reflexivity, symmetry, and transitivity, but I don't think what I have on my paper is completely right. I would appreciate other people's opinions on what the solution should be.
Reflexive: $5x + 3x = 8x$, which is clearly divisible by $4$ if $x\in\mathbb Z$, so $xRx$. Symmetry: Assume $xRy$; that implies that $5x + 3y$ is divisible by $4$. Then $5y + 3x = (9y - 4y) + (15x - 12x) = 3(5x + 3y) - 4(y + 3x)$. Since $3(5x + 3y)$ is divisible by $4$ (because $5x + 3y$ is) and $4(y + 3x)$ is divisible by $4$, their difference is divisible by $4$ too, when $x$ and $y$ are integers; hence $yRx$. Transitivity: Assume $xRy$ and $yRz$, i.e. $4$ divides both $5x + 3y$ and $5y + 3z$. Sum them together and you get $5x + 8y + 3z$, which is divisible by $4$ (since you added two terms divisible by $4$), so $5x + 8y + 3z - 8y = 5x + 3z$ must also be divisible by $4$ (because you subtracted a number divisible by $4$ from an expression divisible by $4$), and there you get $xRz$.
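If you want to sanity-check the three properties computationally before writing the proof, a brute-force sweep over a small window of integers (the window size is an arbitrary choice) works:

```python
def related(x, y):
    return (5 * x + 3 * y) % 4 == 0

R = range(-15, 16)   # small test window of integers

assert all(related(x, x) for x in R)                                        # reflexive
assert all(related(y, x) for x in R for y in R if related(x, y))            # symmetric
assert all(related(x, z) for x in R for y in R for z in R
           if related(x, y) and related(y, z))                              # transitive
print("R behaves like an equivalence relation on the test window")
```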
{ "language": "en", "url": "https://math.stackexchange.com/questions/934685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Why this gamma function reduces to the factorial? $$\Gamma(m+1) = \frac{1\cdot2^m}{1+m}\frac{2^{1-m}\cdot3^m}{2+m}\frac{3^{1-m}\cdot4^m}{3+m}\frac{4^{1-m}\cdot5^m}{4+m}\cdots$$ My books says that in a letter from Euler to Goldbach, this expression reduces to $m!$ when $m$ is a positive integer, but that Euler verified it only for $m=2$ and $m=3$ How can I verify it? Also, the book shows this other form: $$\frac{1\cdot2\cdot3\cdots n\cdot(n+1)^m}{(1+m)(2+m)\cdots(n+m)}$$
This involves the fact that $\Gamma(1)=1$ and $\Gamma(n+1)=n\Gamma(n)$ based on the definition of Gamma, $\Gamma(t)=\int_0^\infty x^{t-1}e^{-x}dx$.
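Euler's verification for $m=2,3$ is also easy to reproduce numerically: the partial products $\frac{1\cdot2\cdots n\,(n+1)^m}{(1+m)(2+m)\cdots(n+m)}$ converge (slowly, with error on the order of $1/n$) to $m!$. A short Python sketch, with an arbitrary cutoff:

```python
from math import factorial

def euler_partial_product(m, N):
    """Partial product of Euler's formula up to the N-th factor."""
    p = 1.0
    for n in range(1, N + 1):
        p *= n**(1 - m) * (n + 1)**m / (n + m)   # n-th factor of the product
    return p

for m in (2, 3, 4):
    approx = euler_partial_product(m, 200000)
    print(m, approx, factorial(m))   # approx is close to m!
```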
{ "language": "en", "url": "https://math.stackexchange.com/questions/934767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
differentiable as an R2 function v.s. as a complex function Let $f: \mathbb C \to \mathbb C$ be a complex function. Denote its real and imaginary part by $u: \mathbb C \to \mathbb R$ and $v: \mathbb C \to \mathbb R$ respectively. Consider the function $\widetilde f : \mathbb R^2 \to \mathbb R^2$ defined by $\widetilde f (x,y) = (u(x+iy),v(x+iy))$. I am aware that $f$ is differentiable (in the complex sense, i.e. $\lim_{h \to 0} \frac{f(z+h)-f(z)}{h}$ exists) iff $\widetilde f$ is continuously differentiable and $u$ and $v$ satisfy Cauchy-Riemann equation. I think if "continuously differentiable" above is replaced by "differentiable (in the real sense, i.e. $\lim_{\mathbf h \to 0} \frac{\lvert \widetilde f(\mathbf x+ \mathbf h)-\widetilde f(\mathbf x) - D \widetilde f \mathbf h\rvert}{\lvert \mathbf h \rvert} = 0$)", the "if" part will not hold. Can anyone give a counterexample for this? (Or if it holds, can anyone give a proof?)
You can sort this thing one point in the domain at a time. Fact: $f\colon U \to \mathbb{C}$ is complex differentiable at $z$ if and only if $f$ is differentiable at $z$ ( as a function from $U$ to $\mathbb{R}^2$ ) and the partial derivatives at $z$ ( which exist since $f$ is differentiable at $z$) satisfy the Cauchy-Riemann equations. Note that differentiable at $z$ implies continuous at $z$. Therefore: $f\colon U \to \mathbb{C}$ is complex differentiable on $U$ if and only if $f$ is differentiable on $U$ ( as a function from $U$ to $\mathbb{R}^2$ ) and the partial derivatives ( which exist since $f$ is differentiable on $U$) satisfy the Cauchy-Riemann equations. The first equivalence is elementary and the second follows from the first and definitions. From the theory of complex functions it follows that a function $f$ is complex differentiable at $U$ if and only if it is complex analytic, that is for every point $z_0$ in $U$ the values of $f$ on an open disk centered at $z_0$ and contained in $U$ are given by a power series in $(z-z_0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/934845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to use Newton's method to find the roots of an oscillating polynomial? Use Newton’s method to find the roots of $32x^6 − 48x^4 + 18x^2 − 1 = 0$ accurate to within $10^{-5}$. Newton's method requires the derivative of this function, which is easy to find. Problem is, there are several roots near each other: These roots are so close that they seem to prevent Newton's method from working correctly, but I don't know why. How to modify the method so that it still works in this case? And how to code this modified method (in Matlab) so that the algorithm find these roots efficiently?
Here you go: alpha = 0.05; NM = @(x,f)x-alpha*prod(([1,0]*f([x,1;0,x])).^[1,-1]); f = @(x)32*x^6-48*x^4+18*x^2-x^0; x = 0.5; eps = 1; while eps > 1e-5 xn = NM(x,f); eps = abs(xn-x); x = xn; end [x f(x)] I caution you to not turn in this code to your professor. He will probably ask you how it works. I provide this so that you may see how the method works. However, the underlying mechanics of how this particular code works are somewhat complicated and likely beyond your level. You will have to write your own function. If you turn this in, and the professor reviews your code, I guarantee you will get penalized for plagiarism. In an earnest attempt at an answer, the above code includes one trick. An iteration of Newton's method looks like this: $$x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}.$$ However, if we plot our polynomial, we find that it oscillates somewhat like a sine wave centered at $x=0$. This causes a unique behavior of Newton's method: each projection of the slope throws the value to another part of the wave, and you get a "back and forth" action. In order to accommodate this, we add a scaling parameter, $\alpha$: $$x_{n+1} = x_n - \alpha\frac{f(x_n)}{f'(x_n)}.$$ Setting this scaling parameter such that $0 < \alpha < 1$ attenuates the effect of the slope. This slows down the convergence, but it ensures that we don't project "too far" ahead and get caught in an infinite loop. Examine the difference between alpha = 0.05 and alpha = 1.
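For readers who would rather see the damped iteration $x_{n+1}=x_n-\alpha\,f(x_n)/f'(x_n)$ spelled out plainly, here is a hedged Python sketch; the variable names, the starting point and the stopping rule are my own choices. One small caveat: for this particular polynomial $f'(1/2)=0$ exactly, so the sketch starts at $0.6$ rather than $0.5$, and it stops once the full, undamped Newton step drops below the tolerance.

```python
def f(x):
    return 32*x**6 - 48*x**4 + 18*x**2 - 1

def fprime(x):
    return 192*x**5 - 192*x**3 + 36*x

def damped_newton(x, alpha=0.05, tol=1e-5, max_iter=100000):
    for _ in range(max_iter):
        full_step = f(x) / fprime(x)      # ordinary Newton step
        if abs(full_step) < tol:          # stop when the undamped step is tiny
            return x
        x -= alpha * full_step            # take only a fraction of the step
    raise RuntimeError("did not converge")

root = damped_newton(0.6)                 # f'(0.5) == 0, so start nearby instead
print(root, f(root))
```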
{ "language": "en", "url": "https://math.stackexchange.com/questions/935144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Proofs about Matrix Rank I'm trying to prove the following two statements. I can prove them easily by considering the matrix as a representation of a linear map with a given basis, but I don't know a proof which uses just the properties of matrices. First, I want to prove that similar matrices have the same rank. This seems obvious because the rank is just the dimension of the image of the linear map, but these matrices represent the same map (just in a different basis). Next, I want to prove that $rank(AB)\le \min(rank(A),rank(B))$. Again, this seems relatively obvious if we just consider $AB$ as the composition of the two maps , but I can't see how to do it with matrices.
If $n$ vectors $v_1,..v_n$ are linearly independent(dependent) then for non singular matrix $P$ the vectors $Pv_1,...Pv_n$ also will be independent(dependent). From this fact and from the definition of the rank as a number of linearly independent columns (rows) we immediately can conclude that similar matrix have the same rank. The second fact follows from the dimension left and right kernels.
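Although the clean proofs go through linear maps, both facts are easy to illustrate numerically; here is a small NumPy sketch with random matrices (the sizes and prescribed ranks are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# rank(AB) <= min(rank(A), rank(B)): build A, B with prescribed low ranks
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))   # rank 2
B = rng.standard_normal((6, 3)) @ rng.standard_normal((3, 6))   # rank 3
print(np.linalg.matrix_rank(A @ B),
      min(np.linalg.matrix_rank(A), np.linalg.matrix_rank(B)))

# similar matrices have the same rank: conjugate A by a random invertible P
P = rng.standard_normal((6, 6))                                  # invertible almost surely
print(np.linalg.matrix_rank(A),
      np.linalg.matrix_rank(np.linalg.inv(P) @ A @ P))
```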
{ "language": "en", "url": "https://math.stackexchange.com/questions/935253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Shortest and most elementary proof that the product of an $n$-column and an $n$-row has determinant $0$ Let $\bf u$ be any column vector and $\bf v$ be any row vector, each with $n \geq 2$ arbitrary entries from a field. Then it is well known that ${\bf u} {\bf v}$ is an $n \times n$ matrix such that $\det({\bf uv})=0$. I am curious to know the $\bf shortest$ and $\bf most~elementary$ proof of this result, say understandable by a (good) high school student. I have one in mind, but to make this interesting, perhaps I should let you present your version first? UPDATE: Someone already presented (below) the same proof I had in mind. But let's see if there is a simpler proof; finding one is the main motivation here.
This question can be easily answered by recalling determinant properties. Specifically: $det(A_1,A_2,\dots,cA_j,\dots,A_n)$ = $c$$(det(A_1,A_2,\dots,A_j,\dots,A_n))$ Therefore if $u = \begin{bmatrix}u_1\\ u_2\\ \vdots \\ u_n \end{bmatrix}$ and $v = \begin{bmatrix} v_1,& v_2,& \dots,& v_n\end{bmatrix}$ Then: $uv$ = $\begin{pmatrix} u_1v_1 & u_1v_2 & \cdots & u_1v_2 \\ u_2v_1 & u_2v_2 & \cdots & u_2v_n \\ \vdots & \vdots & \ddots & \vdots \\ u_nv_1 & u_nv_2 & \cdots & u_nv_n \end{pmatrix} $ Then, by the property mentioned, $det(uv)$ = $v_1v_2$$\cdots v_n$ $det$ $\begin{pmatrix} u_1 & u_1 & \cdots & u_1 \\ u_2 & u_2 & \cdots & u_2 \\ \vdots & \vdots & \ddots & \vdots \\ u_n & u_n & \cdots & u_n \end{pmatrix} $ Clearly this determinant is zero, therefore $det(uv)$ = $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/935319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 9, "answer_id": 4 }
Series expansions and perturbation My professor said that $ f \left( y_1(x)+ \epsilon y_2(x)+... \right)= f(y_1(x)) +f'(y_1(x))\> (\epsilon y_2(x)+...) + ...$ but I have no idea how the series continues. Has anyone seen this before? Can you please tell me how the series progresses?
Taylor expansion: $$+\frac{1}{2!}f''(y_1(x)) (\epsilon y_2(x)+\ldots)^2+\frac{1}{3!}f'''(y_1(x)) (\epsilon y_2(x)+\ldots)^3+\ldots$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/935364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Kernel of an integral operator with Gaussian kernel function Suppose we have the integral operator $T$ defined by $$Tf(y) = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}}f(xy)\,dx,$$ where $f$ is assumed to be continuous and of polynomial growth at most (just to guarantee the integral is well-defined). If we are to inspect that the kernel of the operator, we would want to solve $$0 = \int_{-\infty}^{\infty} e^{-\frac{x^2}{2}}f(xy)\,dx.$$ If $f$ were odd, this would be trivially zero so we would like to consider the case that $f$ is even. My hunch is that $f$ should be identically zero but I haven't been able to convincingly prove it to myself. The reason that I feel like it should be the zero function is that by scaling the Gaussian, we can make it arbitrarily close to $0$ or $1$. I'll sketch some thoughts of mine. If $y=0$, then we would have that $0 = \sqrt{2\pi}f(0)$, forcing $f(0)=0$. Making use of even-ness and supposing instead that $y\neq 0$, we can make a change of variable to get $$0 = \int_0^{\infty} e^{-\frac{x^2}{2y^2}}f(x)\,dx.$$ Since $f$ has polynomial growth at most, given a fixed $y$, for any $\varepsilon > 0$, there exists $R_y$ such that $$\left|e^{-\frac{x^2}{2y^2}}f(x)\right| \le \frac{\varepsilon}{1+t^2}$$ for all $x > R_y$. Thus we can focus instead on the integral from $0$ to $R_y$ since the tail effectively integrates to zero: $$0 = \int_0^{R_y}e^{-\frac{x^2}{2y^2}}f(x)\,dx.$$ Since $[0,R_y]$ is compact and $f$ is a continuous function, it can be approximated uniformly by (even) polynomials with constant term $0$ by Stone-Weierstrass, i.e. $$f(x) = \lim_n p_n(x),$$ where $p_n(x) = \sum\limits_{m=1}^n a_{m,y}x^{2m}$. Here the coefficients are tacitly dependent upon the upper bound (so I've made it explicit to prevent any confusion). From here, we have $$0 = \int_0^{R_y}e^{-\frac{x^2}{2y^2}}\lim_n p_n(x)\,dx.$$ However since the convergence is uniform, we can commute limit and integral to get that $$0 = \lim_n\sum_{m=0}^n a_{m,y}\int_0^{R_y}e^{-\frac{x^2}{2y^2}}x^{2m}\,dx.$$ Making use of a change of variable, this gives $$0 = \lim_n\sum_{m=0}^n a_{m,y}y^{2m+1}\int_0^{\frac{R_y}{y}} e^{-\frac{x^2}{2}}x^{2m}\,dx.$$ I would like to be able to say that the coefficients must be zero but this is pretty messy at this point. Does anyone have any clue as to how to proceed? Or is there a better way to do this? (I would like to avoid Fourier transform-based or Weierstrass transform-based arguments.)
If you are able to prove that, given your constraints, $Tf$ is an analytic function, we have: $$(Tf)^{(2n+1)}(0) = 0,\qquad (Tf)^{(2n)}(0) = 2^{n+1/2}\cdot\Gamma(n+1/2)\cdot f^{(2n)}(0)$$ hence $Tf\equiv 0$ implies that all the derivatives of $f$ of even order in the origin must vanish. Assuming that $f$ can be well-approximated by analytic functions over larger and larger neighbourhoods of the origin, we have that $Tf\equiv 0$ implies that $f$ is an odd function. We cannot hope in more than this since when $f(x)=x$, $Tf\equiv 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/935454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Evaluating an indefinite integral $\int\sqrt {x^2 + a^2} dx$ indefinite integral $$\int\sqrt {x^2 + a^2} dx$$ After some transformations and different substitution, I got stuck at this $$a^2\ln|x+(x^2+a^2)| + \int\sec\theta\tan^2\theta d\theta$$ I am not sure I am getting the first step correct. Tried substituting $ x=a\tan \theta$ but that doesn't help either.
Here we have another way to see this: $$ \int \sqrt{x^2+a^2} dx $$ using the substitution $$ t=x+\sqrt{x^2+a^2}\\ \sqrt{x^2+a^2}=t-x $$ and squaring we have $$ a^2 =t^2-2tx\\ x=\frac{t^2-a^2}{2t}. $$ Finally we can use: $$ dx=\frac{2t(t)-(t^2-a^2)(1)}{2t^2}dt = \frac{t^2+a^2}{2t^2}dt\\ \sqrt{x^2+a^2}=t-\frac{t^2-a^2}{2t}=\frac{t^2+a^2}{2t}. $$ Thus: $$ \int \sqrt{x^2+a^2} dx = \int \frac{t^2+a^2}{2t} \frac{t^2+a^2}{2t^2}dt=\int \frac{(t^2+a^2)^2}{4t^3}dt $$ which is elementary, if we expand the square of the binomial: $$ \int\frac{t}{4}dt+\int\frac{a^2}{2t}dt+\int\frac{a^4}{4t^3}dt=\frac{t^2}{8}+a^2\ln\sqrt{t}-\frac{a^4}{8t^2}+C, $$ where as stated $t=x+\sqrt{x^2+a^2}.$
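To double-check the antiderivative (in particular the power and coefficient in the last term), one can let SymPy differentiate it back; the numeric spot check at the end uses arbitrary sample values of $x$ and $a$.

```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
t = x + sp.sqrt(x**2 + a**2)

F = t**2/8 + a**2*sp.log(sp.sqrt(t)) - a**4/(8*t**2)   # antiderivative found above
diff_check = sp.diff(F, x) - sp.sqrt(x**2 + a**2)

print(sp.simplify(diff_check))                          # expected to reduce to 0
print(diff_check.subs({x: sp.Rational(13, 10), a: sp.Rational(7, 10)}).evalf())  # ~0
```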
{ "language": "en", "url": "https://math.stackexchange.com/questions/935519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 3 }
Evaluating modulos with large powers I need some help evaluating: $$13^{200} (mod \ 6)$$ What I've been trying to do: $$13^1 \equiv 1 (mod \ 6)$$ $$13^2 \equiv 1 (mod \ 6)$$ Can I just say that: $$13^{200} = 13^2 * 13^2 * ... * 13^2 \equiv 1^{200}$$ Or is this incorrect? Thanks in advance.
Since $gcd(13,6)=1$ we apply Euler's Theorem: $$13^{\phi{(6)}} \equiv 1 \pmod 6 \Rightarrow 13^2 \equiv 1 \pmod 6$$ $$13^{200} \equiv (13^2)^{100} \equiv 1^{100} \equiv 1 \pmod 6$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/935649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find the average acceleration Find the average acceleration of the tip of the 2.4-cm long hour hand in the interval noon to 6pm. I found the average velocity is -2.2x10^6 but I'm not sure how to go about finding acceleration. If someone has a few minutes can we chat and you can explain the set up to me?
Average velocity is displacement over time: in this case, $4.8$ cm in $6$ hours. That won't help much with computing average acceleration. Average acceleration is change in velocity over time. The speed of the tip of the hand doesn't change, but between noon and $6$ pm its direction reverses. So the change in velocity is equal to twice the speed. Divide by $6$ hours.
{ "language": "en", "url": "https://math.stackexchange.com/questions/935753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there any book/resource which explain the general idea of the proof of Fermat's last theorem? I look for a book/resource which display the general idea of the proof of Fermat last theorem in a simple manner for the public. I mean, books which is not for mathematicians but for the general public. Books like: Gödel's Theorem: An Incomplete Guide to Its Use and Abuse by Torkel Franzén Do anyone now any books of this kind? Also, articles or any resources are good.
Perhaps the closest thing is this article: "A marvelous proof", by Fernando Gouvêa, The American Mathematical Monthly, 101 (3), March 1994, pp. 203–222. This article got the MAA Lester R. Ford Award in 1995. This and other papers (with various degrees of difficulty) can be found at Bluff your way in Fermat's Last Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/935833", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 7, "answer_id": 1 }
Probability: Distinguishable vs Indistinguishable So there are 5 red balls and 4 blue balls in an urn. We select two at random by putting them in a line and selecting the two leftmost. What's the probability both are different colors if the balls are numbered, and if the balls of the same color are indistinguishable? All arrangements have the same probability. I don't think distinguishability of balls matters here, does it? All we care about is the color. So in both cases there's a 5/9 probability of red being leftmost, and a 4/9 probability of blue being leftmost. Given red is leftmost, there's a 4/8 chance that blue is next and given blue is first there's a 5/8 chance that red is next. So the probability for both numbered and unnumbered = (5/9)(4/8) + (4/9)(5/8). Is this correct, or am I missing something?
Distinguishability is not a property of the balls. It is a property of the observer. If there are balls of different colors arranged in boxes, then certain arrangements will be distinguishable for those with good eyesight and some arrangements will not be distinguishable for the colorblind. In probability theory objects are called indistinguishable if the observer (experimenter) assigns equal probability to the arrangements he can distinguish, even when, in theory, other observers with better eyes could distinguish further arrangements. That is: "indistinguishability" is a misnomer. So your answer is correct whether or not you are able to see the numbers on the balls. If you are, then you would have more elementary events, but you would have to unite some of them. See again André Nicolas' answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/935922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What to do *rigorously* when the second derivative test is inconclusive? How do you rigorously check if a point is a local minimum when the second derivative test is inconclusive? Does there exist a way to do this in general for arbitrary smooth (or analytic...) functions? I know I can graph the function, plug in a few values, etc. (there are lots of questions with these as answers) but that's not rigorous. I'm wondering what I would do in a rigorous proof -- I don't think I ever learned this.
This answer is for functions of one variable. First, a comment. The second derivative test is often not the best approach. For one thing, it involves computing and evaluating the second derivative. That involves work, and carries a non-zero probability of error. In many of the usual calculus problems, the right approach is to look at the behaviour of the first derivative near the critical point. If the first derivative is non-positive in an interval $(a-\epsilon,a)$ to the left of the critical point $a$, and non-negative in an interval $(a,a+\epsilon)$ to the right, then we have a local minimum at $a$. Now assume that $f$ is analytic in an interval about the critical point $a$. Let $k$ be the smallest integer $\ge 2$ such that $f^{(k)}(a)\ne 0$. If $k$ is even, and $f^{(k)}(a)\gt 0$, then we have a local minimum at $a$. If $k$ is even, and $f^{(k)}(a)\lt 0$, then we have a local maximum at $a$. If $k$ is odd, we have neither a local maximum nor a local minimum at $x=a$.
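Here is a small SymPy sketch of that higher-order derivative test (the example functions and the cutoff order are arbitrary choices):

```python
import sympy as sp

x = sp.symbols('x')

def classify(f, a=0, max_order=10):
    """First-nonvanishing-derivative test at a critical point a (assumes f'(a) = 0)."""
    for k in range(2, max_order + 1):
        val = sp.diff(f, x, k).subs(x, a)
        if val != 0:
            if k % 2 == 1:
                return "neither a local max nor a local min"
            return "local min" if val > 0 else "local max"
    return "inconclusive up to this order"

print(classify(x**4))    # local min   (k = 4, f''''(0) = 24 > 0)
print(classify(-x**6))   # local max   (k = 6, negative derivative)
print(classify(x**5))    # neither     (k = 5 is odd)
```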
{ "language": "en", "url": "https://math.stackexchange.com/questions/936055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question of trigonometry If $\cos^2 A=\dfrac{a^2-1}{3}$ and $\tan^2\left(\dfrac{A}{2}\right)=\tan^{2/3} B$. Then find $\cos^{2/3}B+\sin^{2/3}B $. I tried componendo and dividendo to write the second statement as cos A but i couldnt simplify it
$$\cos^{2/3}B+\sin^{2/3}B=\cos^{2/3}B\left(1+\tan^{2/3}B\right)=\cos^{2/3}B\left(1+\tan^2\frac{A}{2}\right)=\left(\cos^2B\right)^{1/3}\left(1+\tan^2\frac{A}{2}\right)=\left(\frac{1}{1+\tan^6\left(\frac{A}{2}\right)}\right)^{1/3}\left(1+\tan^2\frac{A}{2}\right)=\left(1+\frac{3\tan^2 \left(\frac{A}{2}\right)\left(1+\tan^2\left(\frac{A}{2}\right)\right)}{1+\tan^6\left(\frac{A}{2}\right)}\right)^{1/3}$$ I hope you can continue from here (you just need to find $\tan^2\frac{A}{2}$ as a function of $a$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/936253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is functions of cauchy sequences is also Cauchy? Recently i saw in some book that if a sequence is Cauchy then function of that sequence is also Cauchy.I have confusion about this. Please help me.
Let $\{a_n\}$ be a Cauchy sequence in $\mathbb R$. Then $\{a_n\}$ has a limit $a$. If $f:\mathbb R\to\mathbb R$ is continuous at $x=a$, then the sequence $\{f(a_n)\}$ converges to $f(a)$, and is therefore Cauchy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/936321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Can anyone tell me how $\frac{\pi}{\sqrt 2} = \frac{\pi + i\pi}{2\sqrt i}$ I was working out a problem last night and got the result $\frac{\pi + i\pi}{2\sqrt i}$ However, WolframAlpha gave the result $\frac{\pi}{\sqrt 2}$ Upon closer inspection I found out that $\frac{\pi}{\sqrt 2} = \frac{\pi + i\pi}{2\sqrt i}$ But I cant seem to derive it myself and it has been bugging me all day. How can this complex number be reduced to a real number?
Rewriting everything in polar form, the numerator is $\sqrt{2} \pi e^{i \pi/4}$ and the denominator is, for the choice of branch of $\sqrt{}$ that Wolfram is using, $2 e^{i \pi/4}$. So the $e^{i \pi/4}$ terms cancel and you're left with a real number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/936415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Proving that $\frac{\phi^{400}+1}{\phi^{200}}$ is an integer. How do we prove that $\dfrac{\phi^{400}+1}{\phi^{200}}$ is an integer, where $\phi$ is the golden ratio? This appeared in an answer to a question I asked previously, but I do not see how to prove this..
More generally, $\dfrac{\phi^{2n}+1}{\phi^{n}}$ is an integer for $n$ even. Indeed, let $n=2m$ and $\alpha=\phi^2$. Then $$ \dfrac{\phi^{2n}+1}{\phi^{n}} = \phi^{n}+\dfrac{1}{\phi^{n}} = \alpha^{m}+\dfrac{1}{\alpha^{m}} =: y_m $$ Since $\alpha$ and $\dfrac{1}{\alpha}$ are the roots of $x^2=3x-1$, we have $ y_{k+2} = 3y_{k+1}-y_{k}$ for all $k \in \mathbb N$. Since $y_0=2$ and $y_1=3$ are integers, so is $y_k$ for every $k \in \mathbb N$. In fact, $y_k=L_{2k}$, the $2k$-th Lucas number. Yet more generally, $\dfrac{\phi^{2n}+(-1)^n}{\phi^{n}}$ is an integer for all $n \in \mathbb N$. In fact, it is the $n$-th Lucas number.
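A quick numerical confirmation (using high-precision arithmetic, since ordinary floats cannot certify integrality at this size) that $(\phi^{400}+1)/\phi^{200}$ agrees with the integer produced by the recurrence $y_{k+2}=3y_{k+1}-y_k$; the precision setting is an arbitrary generous choice.

```python
from mpmath import mp, mpf, sqrt

mp.dps = 150                                  # plenty of working precision
phi = (1 + sqrt(5)) / 2
value = (phi**400 + 1) / phi**200

# y_k = alpha^k + alpha^(-k) with alpha = phi^2, so y_0 = 2, y_1 = 3
y_prev, y_curr = 2, 3
for _ in range(99):                           # advance to y_100 (= L_200)
    y_prev, y_curr = y_curr, 3 * y_curr - y_prev

print(y_curr)                                 # the exact integer
print(abs(value - y_curr) < mpf(10)**(-50))   # True
```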
{ "language": "en", "url": "https://math.stackexchange.com/questions/936479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 4 }
Use of $\mapsto$ and $\to$ I'm confused as to when one uses $\mapsto$ and when one uses $\to$. From what I understand, we use $\to$ when dealing with sets and $\mapsto$ when dealing with elements but I'm not entirely sure. For example which of the two is used for the following? $$\begin{pmatrix} x \\ y \\ z \end{pmatrix} \cdots \begin{pmatrix} x+y \\ z+y \\ x+z \\ -z\end{pmatrix}$$
The difference is that $\mapsto$ denotes the function itself. Thus you need not name the function. $a\mapsto b$ fully describes the action of the function. $\to$, on the other hand, describes only the domain and codomain. Thus one might say $x\mapsto x+1$ is equivalent to $f(x)=x+1$, in which case we would say $f:\mathbb{R}\to\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/936558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
The Number of Binary Vectors Whose Sum Is Greater Than $k$ I want to determine the number of vectors $(x_1,\ldots,x_n)$, such that each $x_i$ is either $0$ or $1$ and $$\sum\limits_{i=1}^n x_i \geq k$$ * *My Approach The number of $1$'s range from a minimum of $k$ to a maximum of $n$. Thus I must count all the ways in which I can get $$\sum\limits_{i=1}^n x_i = k,\,\,\,\, \sum\limits_{i=1}^n x_i = k +1\,\,\,\,,..., and\,\,\,\,\sum\limits_{i=1}^n x_i = n,$$ $n \choose k$ denotes all the unique ways I can select $k$ $ones$ from the $n$ possible positions. Since the number of $1$'s range from a minimum of $k$ to a maximum of $n$, $n\choose k+1$ denotes all the unique ways I can select $k+1$ $ones$ from the $n$ possible positions. Continuing this pattern all the way up $n$, the solution comes to $\sum\limits_{i=k}^n$ $n\choose i$ $=$ $n \choose k$ $+$ $n \choose k+1$ $+$ $n \choose k+2$ $+$ $\cdot \cdot \cdot +$ $n\choose n$ Please give me hints and suggestions regarding my solution.
If an approximation is good enough for your needs, you can always use the central limit theorem. Just think of each $x_i$ as a random variable taking the values $0$ and $1$ with equal probability, so its mean and standard deviation are each $\frac12$. Then, by the central limit theorem, your sum $\sum_{i=1}^nx_i$ approximately has a normal distribution with mean $\frac n2$ and standard deviation $\frac{\sqrt n}2$. So, the number of 0-1 vectors satisfying $\sum_{i=1}^nx_i\ge k$ would be $2^n\cdot{\rm Prob}(\sum_{i=1}^nx_i\ge k)$, which is approximately $$2^n\cdot\frac1{\sqrt{2\pi}\frac{\sqrt n}2}\int_k^\infty\exp\left(-\frac{2(x-\frac n2)^2}{n}\right)\,{\rm d}x.$$ In many applications, the parameter $k$ is thought of as a function of $n$, in which case you can further approximate the integral.
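A quick comparison of the exact count $\sum_{i\ge k}\binom{n}{i}$ with this normal approximation; the continuity correction of $\tfrac12$ and the test values of $(n,k)$ are my own additions, the correction noticeably improving the estimate.

```python
from math import comb, erf, sqrt

def exact_count(n, k):
    return sum(comb(n, i) for i in range(k, n + 1))

def normal_approx(n, k):
    mu, sigma = n / 2, sqrt(n) / 2
    # P(S >= k) ~ P(N(mu, sigma^2) >= k - 1/2), with continuity correction
    z = (k - 0.5 - mu) / (sigma * sqrt(2))
    return 2**n * 0.5 * (1 - erf(z))

for n, k in [(20, 12), (50, 30), (100, 60)]:
    print(n, k, exact_count(n, k), round(normal_approx(n, k)))
```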
{ "language": "en", "url": "https://math.stackexchange.com/questions/936677", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Probability of impossible event. There is question in my book: Probability of impossible event is? After reading the question my instant answer was $0$ and that was the answer given. But then i thought other way, question is probability of impossible event, so there are two outcomes possible or impossible (event can be certain or impossible). Therefore probability of impossible outcome is $\frac{1}{2}$. Can that also be answer?
Your first answer is right. Your second argument only works to compute the probabilities of equally likely outcomes. So it's perfectly good to figure out the chances of heads coming up from a coin flip. But if one outcome is specified to be impossible, then by definition you are not working with equally likely outcomes and so the computation breaks down.
{ "language": "en", "url": "https://math.stackexchange.com/questions/936763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Does the first player never lose this numbers game? There is an even number of numbers in a row. Two players cross out numbers one by one from left or right. It is not allowed to cross out a number in the middle. Only left or right. After all numbers are crossed out they find the sum of numbers they crossed. The winner is the one who has greatest sum. How to prove that if first player plays correctly he never loses to second player?
Here is a simple strategy that guarantees a tie for the first player: Colour the numbers alternately blue and yellow. Let $B$ be the sum of the blue numbers, and $Y$ the sum of the yellow numbers. If $B \ge Y$: Always pick the blue number. (When it is your turn to play, the two numbers at the end will always be coloured differently.) Then your opponent will have to choose a yellow number. So you end up with $B$ points, and your opponent with $Y$ points. If $Y > B$: Always pick the yellow number. Note that this is not, in general, the best strategy. For instance, if the numbers are $(2, 1, 1, 2, 1, 1)$, then B = Y = $4$, and this strategy will result in a tie. But you can do better by starting with blue number $2$, and then after your opponent plays, switching allegiance to the yellows. This gives you $5$ points to your opponent's $3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/936846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Associative law is not self evident The statement: "It is important to understand that the associative law is not self-evident; indeed, if $a*b=a/b$ for positive numbers $a$ and $b$ then, in general, $(a*b)*c\ne a*(b*c)$." - p. 3, A. Beardon, Algebra and Geometry. I am unclear as to how to take the assumption $a*b=a/b$. Does he mean that we suppose, as an alternative (thought experiment), that $a*b := a/b$ and substitute $(a/b)$ for each $(a*b)$ and prove the inequality of the resulting statement? Or, should one notice that $a*b=a/b$ is true when $b=1$, but then note that the domain was assumed to be $\mathbb{R}_+$, therefore $(a*b)*c \ne a*(b*c)$ in general?
You are basically right in your guesses. What is being emphasized is that not all binary operations (i.e. things that take two inputs from your set and return a third element of the set) that one can define are necessarily associative just because they are binary operations. In this example they are saying that if you define the particular example of a binary operation, to which they give the name $*$, on the set $\Bbb R^+$, as in this example, i.e. $a*b={a\over b}$, then $$(a*b)*c={{a\over b}\over c}={a\over bc}$$ and $$a*(b*c)={a\over {b\over c}}={ac\over b}$$ by direct computation. At this point you might object and say "but doesn't $*$ mean multiplication?" If so, you can replace the symbol $*$ by another one if it helps you, $\oplus$ is a nice one as well, the point is it's the binary operation we've defined, no matter what it looks like on paper as a symbol. Now, by definition two real numbers, $x,y\in\Bbb R$, the definition of equality is $$x=y\iff x-y=0$$ and given two fractions, ${p\over q}, {r\over s}\in\Bbb R^+$ the definition of equality is $${p\over q}={r\over s}\iff ps-qr=0.$$ In your case: $${a\over bc}-{ac\over b}=0$$ $$\iff {a\over bc}={ac\over b}$$ $$\iff ab-abc^2=0$$ $$\iff ab=abc^2$$ for every $a,b,c\in\Bbb R^+$, however this condition clearly requires $c^2=1$ which is only true for $c=1$ over the positive real numbers, so the operation is not associative, because the relation only holds for a single positive real number, $c=1$, rather than all positive reals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/936954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
How do I solve $\lim_{x\to -\infty}(\sqrt{x^2 + x + 1} + x)$? I'm having trouble finding this limit: $$\lim_{x\to -\infty}(\sqrt{x^2 + x + 1} + x)$$ I tried multiplying by the conjugate: $$\lim_{x\to -\infty}(\frac{\sqrt{x^2 + x + 1} + x}{1} \times \frac{\sqrt{x^2 + x + 1} - x}{\sqrt{x^2 + x + 1} - x}) = \lim_{x\to -\infty}(\frac{x + 1}{\sqrt{x^2 + x + 1} - x})$$ And multiplying by $\frac{\frac{1}{x}}{\frac{1}{x}}$ $$\lim_{x\to -\infty}(\frac{x + 1}{\sqrt{x^2 + x + 1} - x} \times \frac{\frac{1}{x}}{\frac{1}{x}}) = \lim_{x\to -\infty}(\frac{1 + \frac{1}{x}}{\sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} - 1})$$ That gives me $\frac{1}{0}$. WolframAlpha, my textbook, and my estimate suggest that it should be $-\frac{1}{2}$. What am I doing wrong? (Problem from the 2nd chapter of Early Transcendentals by James Stewart)
Little mistake: $$ \lim_{x\to -\infty}\frac{x + 1}{\sqrt{x^2 + x + 1} - x} \times \frac{\frac{1}{x}}{\frac{1}{x}} = \lim_{x\to -\infty}\frac{1 + \frac{1}{x}}{{\color{red}-}\sqrt{1 + \frac{1}{x} + \frac{1}{x^2}} - 1}=-\frac{1}{2} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/937182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
derivate of a piecewise function $f(x)$ at$ x=0$. There is a piecewise function $f(x)$ $$f(x)= \begin{cases} 1 ,\ \ \text{if}\ \ x \geq \ 0 \\ 0,\ \ \text{if}\ \ x<0 \end{cases}$$ what is the derivative of the $f(x)$ at $x=0$? Is it $0$? Or since it is not continuous, the derivative does not exists?
Hint: Prove that if a given function $f$ is differentiable at a point $a$, then $f$ is continuous at $a$. Sketch: If $f$ is differentiable at $a$ then we may write (Theorem) $f(a+h) = f(a) + hf'(a) + \frac{r(h)}{h}h$ where $lim_{h \to 0 } \frac{r(h)}{h} = 0 $, then $lim_{h\to 0} f(a+h) = f(a)$, that is, $f$ is continuous at $a$. Show that your function is not continuous at $0$. Then use the contrapositive $p \Rightarrow q \equiv\ \sim q \Rightarrow\ \sim p$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/937299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Closed-form of integral $\int_0^1 \int_0^1 \frac{\arcsin\left(\sqrt{1-s}\sqrt{y}\right)}{\sqrt{1-y} \cdot (sy-y+1)}\,ds\,dy $ I'm looking for a closed form of this definite iterated integral. $$I = \int_0^1 \int_0^1 \frac{\arcsin\left(\sqrt{1-s}\sqrt{y}\right)}{\sqrt{1-y} \cdot (sy-y+1)}\,ds\,dy. $$ From Vladimir Reshetnikov we already know it, that the numerical value of it is $$I\approx4.49076009892257799033708885767243640685411695804791115741588093621176851...$$ There are similar integrals having closed forms: $$ \begin{align} J_1 = & \int_0^1 \int_0^1 {\frac {\arcsin \left( \sqrt {1-s}\sqrt {y} \right) }{\sqrt {1-y} \sqrt {sy-y+1}}}\,ds\,dy = 2\pi -2\pi \ln 2. \\ J_2 = & \int_0^1 \int_0^1 {\frac {\arcsin \left( \sqrt {1-s}\sqrt {y} \right) }{\sqrt {1-s} \sqrt {y}\sqrt {sy-y+1}}}\,ds\,dy = -\frac{7}{4}\zeta\left( 3 \right)+\frac{1}{2}{\pi }^{2}\ln 2. \end{align}$$
I think Math-fun's second approach based on changing the order of integration is a good strategy. Appropriate use of substitutions and trig identities along the way clean up a lot of the resulting "mess": $$\begin{align} \mathcal{I} &=\int_{0}^{1}\mathrm{d}y\int_{0}^{1}\mathrm{d}s\,\frac{\arcsin{\left(\sqrt{1-s}\sqrt{y}\right)}}{\left(sy-y+1\right)\sqrt{1-y}}\\ &=\int_{0}^{1}\mathrm{d}y\int_{0}^{1}\mathrm{d}t\,\frac{\arcsin{\left(\sqrt{ty}\right)}}{\left(1-ty\right)\sqrt{1-y}};~~~\small{\left[1-s=t\right]}\\ &=\int_{0}^{1}\mathrm{d}y\int_{0}^{y}\mathrm{d}u\,\frac{\arcsin{\left(\sqrt{u}\right)}}{\left(1-u\right)y\sqrt{1-y}};~~~\small{\left[yt=u\right]}\\ &=\int_{0}^{1}\mathrm{d}u\int_{u}^{1}\mathrm{d}y\,\frac{\arcsin{\left(\sqrt{u}\right)}}{\left(1-u\right)y\sqrt{1-y}}\\ &=\int_{0}^{1}\mathrm{d}u\,\frac{\arcsin{\left(\sqrt{u}\right)}}{1-u}\int_{0}^{\sqrt{1-u}}\frac{2\,\mathrm{d}x}{1-x^2};~~~\small{\left[\sqrt{1-y}=x\right]}\\ &=\int_{0}^{1}\mathrm{d}u\,\frac{2\arcsin{\left(\sqrt{u}\right)}}{1-u}\cdot\operatorname{arctanh}{\left(\sqrt{1-u}\right)}\\ &=\int_{0}^{1}\frac{2\arcsin{\left(\sqrt{1-v}\right)}\operatorname{arctanh}{\left(\sqrt{v}\right)}}{v}\,\mathrm{d}v;~~~\small{\left[1-u=v\right]}\\ &=\int_{0}^{1}\frac{4\arcsin{\left(\sqrt{1-w^2}\right)}\operatorname{arctanh}{\left(w\right)}}{w}\,\mathrm{d}w;~~~\small{\left[\sqrt{v}=w\right]}\\ &=4\int_{0}^{1}\frac{\arccos{\left(w\right)}\operatorname{arctanh}{\left(w\right)}}{w}\,\mathrm{d}w\\ &=4\int_{0}^{1}\frac{\operatorname{Li}{\left(w\right)}-\operatorname{Li}{\left(-w\right)}}{2\sqrt{1-w^2}}\,\mathrm{d}w\\ &=4\,{_4F_3}{\left(\frac12,\frac12,1,1;\frac32,\frac32,\frac32;1\right)}.\\ \end{align}$$ And so we see that the above integral is intimately connected to this fun problem, which has generated so much discussion and so many spin-off questions that it wouldn't make sense for me to try to rehash everything here. And given the participation of this question's author in said discussions, I can't help but wonder if he suspected this integral's closed form value all along. =)
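Not needed for the derivation, but the last two lines can be confirmed numerically with mpmath: the reduced single integral $4\int_0^1 \arccos(w)\operatorname{arctanh}(w)\,\mathrm{d}w/w$ and the hypergeometric expression both reproduce the decimal value quoted in the question (the precision setting below is an arbitrary choice).

```python
from mpmath import mp, mpf, quad, acos, atanh, hyper

mp.dps = 30

integral = 4 * quad(lambda w: acos(w) * atanh(w) / w, [0, 1])
hyp = 4 * hyper([mpf(1)/2, mpf(1)/2, 1, 1], [mpf(3)/2, mpf(3)/2, mpf(3)/2], 1)

print(integral)   # ~ 4.49076009892257799...
print(hyp)        # same value
```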
{ "language": "en", "url": "https://math.stackexchange.com/questions/937487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
I want to know whether the following is periodic or not periodic I have a question about system properties of the following function whether it is periodic or aperiodic. With an insight, I'd determine the function is aperiodic since the unit-step term looks implying that jump discontinuities occur at odd number times but don't have a detailed solution in mathematical terms. Many thanks in advance for your help. $$f(t)=\sum_{n=-\infty}^{+\infty}e^{-(2t-n)}u(2t-n)$$
The function is periodic with period $1/2$ since you have $$ f(t+1/2)=\sum_{n=-\infty}^\infty e^{-(2t-n+1)}u(2t-n+1)=f(t), $$ as you can easily see by an index shift in the sum. Hope this helps...
{ "language": "en", "url": "https://math.stackexchange.com/questions/937580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What does $R[[X]]$ and $R(X)$ stands for? I'm reviewing Linear Algebra these days and I saw these two notations in my notes without definition. Those are, $R[[X]]$ and $R(X)$ where $R$ is a commutative ring with unity. I remember that one of these denote the field of polynomials, but I don't know which one does.. Moreover, is there any notation for the set of polynomial functions? Hoffman&Kunze's text denote it as $R[X]^{\sim}$ btw.
Typically: * *$R[x]$ denotes the set of polynomials over $R$ *When $R$ is a domain, $R(x)$ denotes the set of rational polynomials over $R$ *$R[[x]]$ denotes the formal power series over $R$ *$R((x))$ denotes the Laurent series over $R$ vuur asked an interesting question in the comments which I can speak to here. The answer is "If $R$ is a commutative domain, then yes, $R(x)$ is the field of fractions for $R[x]$, and $R((x))$ is the field of fractions for $R[[x]]$. In that case, $R((x))$ can be expressed as "quotients of power series." What's going on here is that $R(x)$ is almost always defined as quotients of polynomials, and that necessitates $R$ (and hence $R[x]$) to be at least a domain, so that the product of two denominators is nonzero. However, $R((x))$ is not usually defined via quotients, it's usually described as "power series, but you can have negative powers of $x$, and you can start at any power of $x$ and go upward." Thus $R((x))$ is defined for any ring $R$, but it is not necessarily quotients of things from $R[[x]]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/937693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that any derivative of a given function is bounded Let the function $f(x) = \left(\frac{1-\cos x}{x^2}\right)\cos(3x)$ for $x\ne 0$, with $f(0)=\frac{1}{2}$. Prove that any derivative of $f$ is bounded on $\mathbb{R}$. Thanks so much for helping.
The original function is an entire function, hence its derivatives are continuous and therefore bounded on the compact interval $[-1,1]$. Moreover, any derivative of $\frac{1-\cos x}{x^2}$ is bounded on $\mathbb{R}\setminus[-1,1]$, since it is a linear combination of functions of the form $\frac{f(x)}{x^k}$ where $f(x)$ is a bounded trigonometric function and $k\geq 2$; so the same holds for $\frac{1-\cos x}{x^2}\cos(3x)$ over $\mathbb{R}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/937804", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving Diophantine equations involving $x, y, x^2, y^2$ My father-in-law, who is 90 years old and emigrated from Russia, likes to challenge me with logic and math puzzles. He gave me this one: Find integers $x$ and $y$ that satisfy both $(1)$ and $(2)$ $$x + y^2 = 8 \tag{1} $$ $$x^2 + y = 18 \tag{2}$$ I found one solution by deriving this equation: $$ x(x+1) + y(y+1) = 26$$ which means, I need to find two numbers that add up to $26$, where each number is the product of two consecutive numbers. Through trial and error I found a solution. From college days I seem to remember that there is a more elegant way using the theory of Diophantine equations. Can someone remind me or point to me to an easy to follow explanation of the method for solving problems like these?
You can do without the theory of Diophantine equations. $$x^2+y=18\implies y=18-x^2.$$ Plugged into the other equation, $$x+(18-x^2)^2=8$$ or $$x^4-36x^2+x+316=0.$$ With a polynomial solver, you get two positive roots $x=4$ and $x\approx4.458899113761$; the integer root $x=4$ gives $y=18-16=2$, and indeed $4+2^2=8$.
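For a problem this small, a brute-force search also settles the integer question directly. A short Python sketch (the search bound of $100$ is arbitrary but comfortably large; the asker's own identity $x(x+1)+y(y+1)=26$ already confines $x$ and $y$ to a small range):

```python
# Brute-force search for integer solutions of x + y^2 = 8 and x^2 + y = 18.
solutions = [(x, y)
             for x in range(-100, 101)
             for y in range(-100, 101)
             if x + y * y == 8 and x * x + y == 18]
print(solutions)  # [(4, 2)]
```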
{ "language": "en", "url": "https://math.stackexchange.com/questions/937891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
How to find this integral $\iint_{D}(x^2y+xy^2+2x+2y^2)dxdy$ let $$D=\{(x,y)|y\ge x^3,y\le 1,x\ge -1\}$$ Find the integral $$I=\dfrac{1}{2}\iint_{D}(x^2y+xy^2+2x+2y^2)dxdy$$ My idea: $$I=\int_{0}^{1}dx\int_{x^3}^{1}(x^2y+2y^2)dy+\int_{-1}^{0}dx\int_{0}^{-x^3}(xy^2+2x+2y^2)dy$$ so $$I=\int_{0}^{1}[\dfrac{1}{2}x^2y^2+\dfrac{2}{3}y^3]|_{x^3}^{1}dx+\int_{-1}^{0}[\dfrac{1}{3}xy^3+2xy+\dfrac{2}{3}y^3]|_{0}^{-x^3}dx$$ $$I=\int_{0}^{1}[\dfrac{1}{2}x^2+\dfrac{2}{3}-\dfrac{1}{2}x^8-\dfrac{2}{3}x^9]dx+\int_{-1}^{0}[-\dfrac{1}{3}x^{10}-2x^4-\dfrac{2}{3}x^9]dx$$ so $$I=\dfrac{5}{6}-\dfrac{1}{18}-\dfrac{2}{30}+\dfrac{1}{33}+\dfrac{1}{10}-\dfrac{2}{30}=\dfrac{67}{90}$$ My question: is my result correct? Can someone use a computer to find the value? Using Tom's method I instead get $$\dfrac{79}{270}$$ Which one is correct? Could someone check this with Maple? Thank you.
$\frac{67}{90}$ doesn't look correct. Here is what wolfram computes
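Since the disagreement is purely numerical, an independent machine evaluation of the double integral is worth recording here as well. This is a sketch with SciPy (assumed available), not the Wolfram|Alpha output referred to above:

```python
from scipy.integrate import dblquad

# I = (1/2) * double integral over D of (x^2*y + x*y^2 + 2x + 2y^2) dx dy,
# where D = { -1 <= x <= 1,  x^3 <= y <= 1 }.
val, err = dblquad(lambda y, x: x**2 * y + x * y**2 + 2 * x + 2 * y**2,
                   -1, 1,              # limits for x
                   lambda x: x**3,     # lower limit for y
                   lambda x: 1.0)      # upper limit for y
print(val / 2, err / 2)  # about 0.3475, to compare with 67/90 ~ 0.744 and 79/270 ~ 0.293
```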
{ "language": "en", "url": "https://math.stackexchange.com/questions/937955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Cycles of odd length: $\alpha^2=\beta^2 \implies \alpha=\beta$ Let $\alpha$ and $\beta$ be cycles of odd length (not disjoint). Prove that if $\alpha^2=\beta^2$, then $\alpha=\beta$. I need advice on how to approach this. I recognized that $\alpha,\beta$ are even permutations (because they have odd length). I'm not sure if this is relevant to the problem. I'm also trying to figure out why this wouldn't work for even length cycles. Any hints on how to begin would be appreciated. Thanks.
Suppose that $\alpha$ and $\beta$ both have length $2k + 1$ for some $k \in \mathbb N$. Then observe that: \begin{align*} \alpha &= \alpha\varepsilon \\ &= \alpha^1\alpha^{2k+1} \\ &= \alpha^{2k+2} \\ &= (\alpha^2)^{k+1} \\ &= (\beta^2)^{k+1} \\ &= \beta^{2k+2} \\ &= \beta^1\beta^{2k+1} \\ &= \beta\varepsilon \\ &= \beta \end{align*} To see why this doesn't work for even length cycles, consider $\alpha = (1,2)$ and $\beta = (1,3)$.
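Both halves of the argument are easy to spot-check with SymPy's permutation class (a sketch; SymPy permutations act on $\{0,\dots,n-1\}$, so the cycles below are 0-indexed):

```python
from sympy.combinatorics import Permutation

# Odd length: if alpha is a cycle of length 2k+1, then (alpha^2)^(k+1) = alpha.
alpha = Permutation([1, 2, 3, 4, 0])        # the 5-cycle (0 1 2 3 4), so k = 2
assert (alpha**2)**3 == alpha               # alpha^(2k+2) = alpha^(2k+1) * alpha = alpha

# Even length: the counterexample from the answer, written 0-indexed.
alpha, beta = Permutation(0, 1, size=3), Permutation(0, 2, size=3)
assert alpha**2 == beta**2 and alpha != beta
print("both checks passed")
```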
{ "language": "en", "url": "https://math.stackexchange.com/questions/938142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On multiplicity representations of integer partitions of fixed length This is a follow-up question on the question computing length of integer partitions and it is loosely related with the paper "On a partition identity". Let $\lambda$ be a partition of $n$, in the multiplicity representation $\lambda=(a_1,a_2,a_3,\dots)$ meaning that $$n=\underbrace{1+1+\dots}_{a_1}+\underbrace{2+2+\dots}_{a_2}+\underbrace{3+3+\dots}_{a_3}+\dots$$ I can express $\lambda$ by drawing $a_k$ squares in the $k$-th row of a diagram (this is not the usual Ferrers or Young diagram), e.g. \begin{array}{cccc} a_1&\leftarrow&\square&\square&\square\\ a_2&\leftarrow\\ a_3&\leftarrow&\square&\square&\square&\square\\ a_4&\leftarrow&\square&\square\\ \vdots\\ a_n&\leftarrow&\\ \ &\ &\downarrow&\downarrow&\downarrow&\downarrow&\ &\downarrow\\ \ &\ &\mu_1&\mu_2&\mu_3&\mu_4&\cdots&\mu_n \end{array} The $\mu_k$ numbers indicate how many squares there are in each column, so one can write the total number of squares $S$ (i.e. the length of the partition $\lambda$) in two ways: $S=\sum_ka_k=\sum_k\mu_k$. It is then very easy to show that the following holds: $$ G[\lambda]:=\prod_{k=1}^n a_k!=\prod_{k=1}^n k^{\mu_k} $$ Now, there can be many partitions with the same length $S$, which one obtains by rearranging the squares in the diagram above whilst maintaining the number $n=\sum_k ka_k$ constant. So we can divide the set $\Lambda$ of all the partitions of $n$ into non-overlapping subsets $\Lambda_S$ of partitions of fixed length $S$, i.e. $\Lambda=\bigcup_S\Lambda_S$. I would really like to be able to compute $$F(S)=\sum_{\lambda_i\in\Lambda_S}\frac{1}{G[\lambda_i]}$$ without resorting to the computation of all the partitions of $n$. Is there a way of doing this? Or, if it were not possible, is there a way of obtaining at least the list of the numbers $G[\lambda_i]$ with $\lambda_i\in\Lambda_S$?
Generating functions to the rescue! We have a sum over all partitions $\lambda$ in the world: \begin{align*} \sum_{\lambda} \frac{x^{n(\lambda)} y^{S(\lambda)}}{G[\lambda]} &= \sum_{a_1,a_2,\dots\ge0} x^{\sum_{j\ge1} ja_j} y^{\sum_{j\ge1} a_j} \prod_{j\ge1} \frac1{a_j!} \\ &= \sum_{a_1,a_2,\dots\ge0} \prod_{j\ge1} \frac{x^{ja_j} y^{a_j}}{a_j!} = \prod_{j\ge1} \sum_{a_j\ge0} \frac{x^{ja_j} y^{a_j}}{a_j!} \\ &= \prod_{j\ge1} \exp(x^jy) = \exp\bigg( \sum_{j\ge1} x^jy \bigg) = \exp\bigg( \frac{xy}{1-x} \bigg). \end{align*} The sum of $1/G[\lambda]$ over all $\lambda$ with $n(\lambda)=n$ and $S(\lambda)=S$ is the coefficient of $x^ny^S$ in this series. Since \begin{align*} \exp\bigg( \frac{xy}{1-x} \bigg) &= \sum_{S\ge0} \frac1{S!} \bigg( \frac{xy}{1-x} \bigg)^S = \sum_{S\ge0} \frac{y^S}{S!} x^S (1-x)^{-S} \\ &= 1+\sum_{S\ge1} \frac{y^S}{S!} x^S \sum_{k\ge0} \binom k{S-1} x^{k - S + 1} \\ &= 1+\sum_{S\ge1} \frac{y^S}{S!} \sum_{n\ge 1} \binom{n-1}{S-1} x^n, \end{align*} we conclude that $$ F(S) = \frac1{S!} \binom{n-1}{S-1}. $$
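The closed form is easy to test by brute force for small $n$; here is a short Python sketch (standard library only) that generates every partition, computes $G[\lambda]$ from the multiplicities, and compares $F(S)$ with $\binom{n-1}{S-1}/S!$:

```python
from fractions import Fraction
from math import factorial, comb
from collections import Counter

def partitions(n, max_part=None):
    """Yield the partitions of n as lists of parts, largest part first."""
    if max_part is None or max_part > n:
        max_part = n
    if n == 0:
        yield []
        return
    for first in range(max_part, 0, -1):
        for rest in partitions(n - first, first):
            yield [first] + rest

n = 8
F = Counter()                      # F[S] = sum of 1/G[lambda] over partitions of length S
for p in partitions(n):
    mult = Counter(p)              # a_k = multiplicity of the part k
    G = 1
    for a in mult.values():
        G *= factorial(a)
    F[len(p)] += Fraction(1, G)

for S in sorted(F):
    assert F[S] == Fraction(comb(n - 1, S - 1), factorial(S))
print(dict(F))                     # matches binom(n-1, S-1)/S! for every S
```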
{ "language": "en", "url": "https://math.stackexchange.com/questions/938280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
optimization of coefficients with constant sum of inverses Does anybody know if there is an easy solution to the following problem: Given $A = [a_1, a_2, \dots, a_n]$ and $K$, find $B = [b_1, b_2, \dots, b_n]$ that minimizes $AB^T$ such that $\sum_{i=1}^{n}\frac{1}{b_i} = K$?
As copper.hat suggests: Lagrange multipliers! We want to minimize $f(B) = AB^T$ subject to the constraint $g(B) = \sum_{i}^{n}\frac{1}{b_i} = K$. We can find all critical points by solving the system $$ \nabla f = -\lambda \nabla g\\ g(B) = K $$ Where $$ \nabla f = A = [a_1,\dots,a_n]\\ \nabla g = -\left[\frac{1}{b_1^2}, \dots, \frac{1}{b_n^2}\right] $$ So, we have the system of equations $$ a_1 = \frac{\lambda}{b_1^2}\\ \vdots\\ a_n = \frac{\lambda}{b_n^2}\\ \frac{1}{b_1} + \cdots + \frac{1}{b_n} = K $$ Solving the first $n$ lines for $\lambda$ yields $$ \lambda = a_1 b_1^2 = \cdots = a_n b_n^2 $$ So that the sign of $\lambda$ must match the sign of $a_i$ for each $i$. Assume, without loss of generality then, that all $a_i > 0$ (if any $a_i$ have signs that don't match, then there can be no critical points). $$ b_i = \pm\sqrt{\lambda/a_i} $$ So, plugging into $g(B) = K$, we have $$ \frac{\pm\sqrt{a_1} \pm \cdots \pm \sqrt{a_n}}{\sqrt{\lambda}} = K \implies\\ \lambda = \frac{(\pm\sqrt{a_1} \pm \cdots \pm \sqrt{a_n})^2}{K^2} $$ That is, we have $2^n$ possibilities to check for $\lambda$. It suffices to take each of these $\lambda$, compute $b_i = \pm\sqrt{\lambda/a_i}$, compute $AB^T$ for each such choice, and select the vector $B$ that produced the minimal $AB^T$. This process could probably be shortened with a second derivative test. I'll think about that...
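To see the recipe in action, here is a rough Python/NumPy sketch (the data, $K$, and all names are illustrative only). It enumerates the sign patterns, builds the corresponding candidate $B$ from the formulas above, and checks the constraint and stationarity numerically; note that for $K>0$ a sign pattern only yields a critical point when $\pm\sqrt{a_1}\pm\cdots\pm\sqrt{a_n}>0$.

```python
import numpy as np
from itertools import product

a = np.array([1.0, 4.0, 9.0])        # example data (all positive), n = 3
K = 2.0
root = np.sqrt(a)

for signs in product([1.0, -1.0], repeat=len(a)):
    s = np.array(signs)
    t = s @ root                     # +-sqrt(a_1) +- ... +- sqrt(a_n)
    if t <= 0:                       # with K > 0 this pattern gives no critical point
        continue
    lam = (t / K) ** 2
    b = s * np.sqrt(lam / a)         # candidate critical point with signs s
    assert np.isclose(np.sum(1.0 / b), K)    # the constraint g(B) = K holds
    assert np.allclose(a, lam / b ** 2)      # stationarity: a_i = lam / b_i^2
    print(signs, "objective A.B^T =", float(a @ b))
```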
{ "language": "en", "url": "https://math.stackexchange.com/questions/938371", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Proving that $\dfrac{\tan(x+y)-\tan x}{1+\tan(x+y)\tan x}=\tan y$ Edit: got it, silly mistakes :) I need to prove that $\dfrac{\tan(x+y)-\tan x}{1+\tan(x+y)\tan x}=\tan y$ $$=\frac{\tan x+\tan y-\tan x+\tan^2x\tan y}{1-\tan x\tan y+\tan^2x+\tan x\tan y}$$ $$=\frac{\tan y+\tan^2x\tan y}{1+\tan^2x}$$ $$=\frac{(\tan y)(1+\tan^2x)}{(1+\tan^2x)}$$ $$\boxed{=\tan y}$$ Any hints/help appreciated. Thanks
Hint: Recall that $$\tan(u-v)=\frac{\tan u-\tan v}{1+\tan u\tan v}.\tag{1}$$ Let $u=x+y$ and $v=x$. Remark: If Identity (1) requires proof, use $$\tan(u-v)=\frac{\sin(u-v)}{\cos(u-v)}=\frac{\sin u\cos v-\cos u\sin v}{\cos u\cos v+\sin u\sin v},$$ and divide top and bottom by $\cos u\cos v$.
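For readers who like a machine check next to the algebra, SymPy confirms the identity symbolically (a one-off sketch, assuming SymPy is available):

```python
from sympy import symbols, tan, expand_trig, simplify

x, y = symbols('x y')
lhs = (tan(x + y) - tan(x)) / (1 + tan(x + y) * tan(x))

# expand tan(x+y) via the addition formula, then simplify the rational function
print(simplify(expand_trig(lhs)))   # tan(y), wherever the expression is defined
```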
{ "language": "en", "url": "https://math.stackexchange.com/questions/938432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Characterization of a vector space over an associative division ring Let $M$ be a (left) module over an associative division ring $R$. Then it has the following properties. 1) For every submodule $N$ of $M$, there exists a submodule $L$ such that $M = N + L$ and $M \cap L = 0$. 2) Every finitely generated submodule has a composition series. Now let $M \neq 0$ be a (left) faithful module over an associative ring $R$ with unity 1. Suppose $M$ satisfies the above conditions. Is $R$ necessarily a division ring? EDIT Related question:Module over a ring which satisfies Whitehead's axioms of projective geometry
Let $R=\mathbb{Z}$ so that $R$-modules are abelian groups and submodules are subgroups. Let $M$ be a cyclic group of prime order. Then $M$ is simple, so 1) is satisfied, and every finite group has a composition series, so 2) is satisfied. However, $R$ is not a division ring.
{ "language": "en", "url": "https://math.stackexchange.com/questions/938520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Differentiation of $xx^T$ where $x$ is a vector How is the differentiation of $xx^T$ with respect to $x$ equal to $2x^T$, where $x$ is a vector? Here $x^T$ means the transpose of the vector $x$.
The differentiation of $xx^T$ is a 3-dimensional tensor, to the best of my knowledge, with each element of the matrix $xx^T$ differentiated with respect to each element of the vector $x$. You can check the result of such a derivative online at http://www.matrixcalculus.org/. Feel free to follow up if anything is unclear.
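To make the shapes concrete, here is a small NumPy sketch (names and the dimension are illustrative). It builds the derivative of the outer product $xx^T$ entry by entry, which is indeed a 3-dimensional array, and contrasts it with the scalar function $x^Tx$, whose derivative is the row vector $2x^T$, which is probably the result the question has in mind.

```python
import numpy as np

n = 3
x = np.array([1.0, 2.0, 3.0])

# Derivative of the matrix-valued map x -> x x^T: a 3-dimensional array D with
# D[i, j, k] = d(x_i * x_j) / d(x_k) = delta_{ik} * x_j + x_i * delta_{jk}.
I = np.eye(n)
D = np.einsum('ik,j->ijk', I, x) + np.einsum('i,jk->ijk', x, I)
print(D.shape)     # (3, 3, 3): one n x n matrix of partial derivatives per component of x

# Derivative of the scalar map x -> x^T x: the usual gradient 2x (written as a row, 2x^T).
print(2 * x)       # [2. 4. 6.]
```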
{ "language": "en", "url": "https://math.stackexchange.com/questions/938663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Every non-unit is in some maximal ideal I am trying to prove that every non-unit of a ring is contained in some maximal ideal. I have reasoned as follows: let $a$ be a non-unit and $M$ a maximal ideal. If $a$ is not contained in any maximal ideal, then the ideal $\langle M, a \rangle$ (that is, the ideal generated by $M$ and $a$) strictly contains $M$, a contradiction. However, there is a detail I'm unsure about. It's easy to see that $\langle a \rangle$ is a proper ideal for any non-unit $a$, but how can I be sure that $\langle M,a \rangle$ is also a proper ideal? Couldn't there, for example, exist some $m \in M$ such that $1=m+a$, so that $\langle M,a \rangle$ is the whole ring?
Yes, it can happen that $\langle M,a\rangle$ is the whole ring, so your proof doesn't work. Try to reason as follows: let $\Sigma$ be the collection of ideals which contain $a$. Now use Zorn's lemma.
{ "language": "en", "url": "https://math.stackexchange.com/questions/938777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 1 }
Kernel of a Linear Map on A Tensor Product Suppose I have the linear maps $ l,k: V \otimes V \rightarrow V \otimes V$ defined by $ l( e_{i_1} \otimes e_{i_2} ) = e_{i_1} \otimes e_{i_2} + e_{i_2} \otimes e_{i_1}$ and $ k( e_{i_1} \otimes e_{i_2} ) = e_{i_1} \otimes e_{i_2} - e_{i_2} \otimes e_{i_1}$ where $ (e_{i})_{i=1}^{n} $ is a basis of V. What are the kernels of these two maps? Or at least an element in both their kernels... other than zero.
Here's one way to think about it: If $V$ (over, say, $\mathbb{F}$) is finite-dimensional, say, $\dim V = n$, then given a basis $(e_a)$ of $V$, we may identify $V \otimes V$ with the set of $n \times n$ matrices, so that the simple tensor $e_a \otimes e_b$ is identified with the matrix $E_{ab}$ whose $(a, b)$ entry is $1$ and for which all other entries are $0$, and so that $(E_{ab})$ is a basis for $V \otimes V$. Under this identification, the maps $l$ and $k$ act on basis elements by $$l: E_{ab} \mapsto E_{ab} + E_{ba}$$ and $$k: E_{ab} \mapsto E_{ab} - E_{ba}.$$ Now, we can write the general element $S \in V \otimes V$ as a matrix $\sum_{a,b} S_{ab} E_{ab}$, whose $(a, b)$ entry is $S_{ab}$. So, $$l(S) = l\left(\sum_{a, b} S_{ab} E_{ab}\right) = \sum_{a, b} S_{ab} l(E_{ab}) = \sum_{a, b} S_{ab} (E_{ab} + E_{ba}) = \sum_{a, b} S_{ab} E_{ab} + \sum_{a, b} S_{ab} E_{ba}.$$ The first term on the right is $S$, and the second can be written as $\sum_{a, b} S_{ba} E_{ab}$, which the matrix whose $(a, b)$ entry is $(S_{ba})$, namely, the transpose ${}^t S$ of $S$. So, in terms of the matrix identification $l$ is just the map $$l(S) = S + {}^t S$$ and similarly $$k(S) = S - {}^t S.$$ So, $S$ is in the kernel of $l$ iff $S = -{}^t S$, that is, iff $S$ is antisymmetric. Similarly, $S$ is in the kernel of $k$ iff $S = {}^t S$, that is, if $S$ is symmetric. In both cases, we can apply the same terminology to the $2$-tensors in $V \otimes V$. Moreover, each of the images is exactly the kernel of the other map: For example, if $S \in \ker l$, i.e., if $S$ is antisymmetric, then $k(S) = S - {}^t S = S - (-S) = 2 S$, and hence $S = \frac{1}{2} k(S)$. This shows that we can actually decompose any $2$-tensor uniquely as a sum of a symmetric tensor and an antisymmetric tensor, and that this decomposition is given by $$S = \frac{1}{2} l(S) + \frac{1}{2} k(S),$$ that is, $\frac{1}{2} l$ and $\frac{1}{2} k$ are respectively the vector space projections from $V \otimes V$ onto the subspaces of symmetric and antisymmetric tensors.
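The matrix picture translates directly into code; a short NumPy sketch (purely illustrative) of the maps $l$ and $k$ and the resulting decomposition into symmetric and antisymmetric parts:

```python
import numpy as np

rng = np.random.default_rng(0)
S = rng.standard_normal((4, 4))     # a generic 2-tensor, viewed as a matrix

l = lambda M: M + M.T               # symmetrizing map
k = lambda M: M - M.T               # antisymmetrizing map

sym, antisym = 0.5 * l(S), 0.5 * k(S)
assert np.allclose(sym, sym.T)                  # (1/2) l(S) is symmetric
assert np.allclose(antisym, -antisym.T)         # (1/2) k(S) is antisymmetric
assert np.allclose(S, sym + antisym)            # S = (1/2) l(S) + (1/2) k(S)
assert np.allclose(k(sym), 0) and np.allclose(l(antisym), 0)   # each image lies in the other's kernel
print("decomposition verified")
```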
{ "language": "en", "url": "https://math.stackexchange.com/questions/938882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is this calculus proof I came up with sound? We want to prove that every bounded sequence has a converging subsequence. Let $[l,u]$ be the interval in which we know $a_n$ is bounded. Let $\{a_n\}$ be the sequence and $[l_i,u_i]$, where $i$ is a positive integer, be a sequence of closed intervals such that each of the intervals contains infinitely many elements of $\{a_n\}$, $[l_n,u_n]$ is always one of the two halves of the interval $[l_{n-1},u_{n-1}]$, and $[l_1,u_1]=[l,u]$. Pick a subsequence of $\{a_n\}$ called $\{b_n\}$ where $b_n\in[l_n,u_n]$. Then it converges since the sequence $(u-l)\frac{1}{2^n}$ converges, so for each $\epsilon>0$ there is an $N$ such that $u_N-l_N<\epsilon$. So if $N'>N$ then the distance from $b_{N'}$ to the midpoint of $[l_N,u_N]$ is less than $\epsilon$ since $b_{N'}\in[l_N,u_N]$.
The proof works except for the last line, where there is some mess with the indexes $n,N,N'$. Moreover, why are you taking into account the midpoint of $[l_n,u_n]$? If I'm guessing correctly, you would like to prove that the sequence $(b_n)_n$ converges to the midpoint of $[l_N,u_N]$, but this is false (indeed, your $N$ depends on $\epsilon$). Since you cannot know exactly to which point $(b_n)_n$ converges, you cannot prove its convergence using simply the definition. You should use some convergence criterion. Finally, you should be more precise in your statement by saying that you are considering sequences of real numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/938974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Algebra - proof verification involving permutation matrices Theorem. Let $\textbf{P}$ be a permutation matrix corresponding to the permutation $\rho:\{1,2,\dots,n\}\to\{1,2,\dots,n\}$. Then $\textbf{P}^t=\textbf{P}^{-1}.$ Proof. First note the following identity for the kronecker delta: $$\sum_k \delta_{ik}\delta_{kj} = \delta_{ij}$$ Now, by definition we have $\textbf{P}_{ij}=\delta_{i\rho(j)}$. Further, $\textbf{P}_{ij}^t = \delta_{\rho(i)j}$. But then: $$(\textbf{PP}^t)_{ij}=\sum_k \textbf{P}_{ik}\textbf{P}^t_{kj} =\sum_k \delta_{i\rho(k)}\delta_{\rho(k)j} = \delta_{ij}$$ Hence $\textbf{PP}^t = I$ and we are done.$\,\square$ Is there a more elegant way to go about this proof (perhaps using $S_n$ somehow)? Is the $\delta$ definition of a permutation matrix for $\rho$ even the best definition? (I haven't in the past seen any others...)
Let $e_1,\dots,e_n$ be a fixed orthonormal basis of a real $n$ dimensional inner product space $V$. Then $P$ can be regarded as the linear transformation $V\to V$ mapping $e_i\mapsto e_{\rho(i)}$. Since $\rho$ is a permutation, we will have $$\langle Pe_i,Pe_j\rangle\ =\ \delta_{i,j}\ =\ \langle e_i,e_j\rangle$$ so, by linearity, $P$ will preserve the inner product of any two vectors. Finally, by the definition of adjoint (which, as matrix, corresponds to the transpose), the above gives $$\langle P^tPe_i,e_j\rangle=\langle e_i,e_j\rangle$$ for all $i,j$, and consequently, $P^tP$ must be the identity.
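A numerical spot check of the theorem in NumPy (the permutation below is an arbitrary example, written on $\{0,\dots,4\}$): build $P$ with $P_{ij}=\delta_{i\rho(j)}$ and verify that $P^tP=PP^t=I$.

```python
import numpy as np

rho = np.array([2, 0, 3, 1, 4])      # an arbitrary permutation of {0, ..., 4}
n = len(rho)

# P[i, j] = 1 exactly when i = rho(j), matching P_ij = delta_{i, rho(j)}
P = np.zeros((n, n))
P[rho, np.arange(n)] = 1

assert np.array_equal(P.T @ P, np.eye(n))
assert np.array_equal(P @ P.T, np.eye(n))
print("P^t = P^{-1} holds for this example")
```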
{ "language": "en", "url": "https://math.stackexchange.com/questions/939164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Linear transformation in linear algebra Let $e_1= \begin{bmatrix} 1\\ 0 \end{bmatrix} $, $e_2= \begin{bmatrix} 0\\ 1 \end{bmatrix} $, $y_1= \begin{bmatrix} 2\\ 5 \end{bmatrix} $, $y_2= \begin{bmatrix} -1\\ 6 \end{bmatrix} $. Let $T:\mathbb{R}^2\rightarrow\mathbb{R}^2$ be a linear transformation that maps $e_1$ into $y_1$ and $e_2$ into $y_2$. Find the images of $a= \begin{bmatrix} 5\\ -3 \end{bmatrix} $ and $b= \begin{bmatrix} x\\ y \end{bmatrix} $. I am not sure how to do this. I think there is a $2\times 2$ matrix that you have to find that gives you the image of $a$.
If $L$ is a linear transformation that maps $\begin{bmatrix} 1\\ 0 \end{bmatrix} $ to $\begin{bmatrix} 2\\ 5 \end{bmatrix} $, $L$ has a matrix representation $A$, such that $A \begin{bmatrix} 1\\ 0 \end{bmatrix} =\begin{bmatrix} 2\\ 5 \end{bmatrix}$. But this means that $\vec{a_1}^{\,}$ is just $\begin{bmatrix} 2\\ 5 \end{bmatrix}$. The same reasoning can be applied to find the second column vector of $A$. Once you have the matrix representation of $L$, you're good.
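Following that recipe, the matrix of $L$ has $y_1$ and $y_2$ as its columns, and the images come from ordinary matrix-vector multiplication. A tiny SymPy sketch (illustrative only):

```python
import sympy as sp

A = sp.Matrix([[2, -1],
               [5,  6]])                 # columns are the images of e1 and e2
x, y = sp.symbols('x y')

print(A * sp.Matrix([5, -3]))            # Matrix([[13], [7]])
print(A * sp.Matrix([x, y]))             # Matrix([[2*x - y], [5*x + 6*y]])
```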
{ "language": "en", "url": "https://math.stackexchange.com/questions/939372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is is true that if $E|X_n - X| \to 0$ then $E[X_n] \to E[X] $? My question is motivated by the following problem: Show that if $|X_n - X| \le Y_n$ and $E[Y_n] \to 0$ then $E[X_n] \to E[X]$. I started off by saying that since $$|X_n - X|\ge 0 $$ then $$E[|X_n - X|]\ge 0 $$ At the same time $$E[|X_n - X|]\le E[Y_n] $$ and so $$0 \le E[|X_n - X|] \le E[Y_n]$$ By the squeeze theorem then $E[|X_n - X|] \to 0$. I don't know how to proceed from here. I know that if $E[|X_n - X|] = 0$ then I can set up a contradiction, like so: Suppose that $|X_n - X| = c$, $c \ne 0$ and $E[|X_n - X|] = 0$. Then $$E[|X_n - X|] = E[c] = c \ne 0. $$ But I don't know how to show this would hold in the limit. I am planning to use this to show that $X_n \to X$ and therefore $E[X_n] \to E[X]$
$\vert E(X) - E(X_n) \vert =\vert E(X-X_n) \vert \leq E(|X-X_n|) \to 0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/939562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Examples of books giving problems that require more than one branch of mathematics I want to know if there are books that give problem sets requiring knowledge of two or more branches of mathematics. For example, there could be a problem requiring geometry, set theory, and number theory, and another problem requiring ring theory and calculus. All the problems should be solvable with merely an undergraduate background.
Vector Calculus, Linear Algebra and Differential Forms: A Unified Approach by John and Barbara Hubbard offer many problems that tie together analysis and linear algebra. The problem sets however do not require as diverse of a knowledge as competition level problems (as stated by Alfred).
{ "language": "en", "url": "https://math.stackexchange.com/questions/939659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Closed Form of Recursion $a_n = \frac{6}{a_{n-1}-1}$ Given that $a_0=2$ and $a_n = \frac{6}{a_{n-1}-1}$, find a closed form for $a_n$. I tried listing out the first few values of $a_n: 2, 6, 6/5, 30, 6/29$, but no pattern came out.
Start with $a_n=\frac6{a_{n-1}-1}$, and replace $a_{n-1}$ with $\frac6{a_{n-2}-1}$. We obtain $$a_n=\frac6{\frac6{a_{n-2}-1}-1}=\frac{6(1-a_{n-2})}{a_{n-2}-7}$$ Doing this again with $a_{n-2}=\frac6{a_{n-3}-1}$ and so forth, we get $$a_n=\frac{6(7-a_{n-3})}{7a_{n-3}-13}=\frac{6(13-7a_{n-4})}{13a_{n-4}-55}=\frac{6(55-13a_{n-5})}{55a_{n-5}-133}$$ It would seem then, that $$a_n=\frac{6(F_{k-1}-F_{k-2}a_{n-k})}{F_{k-1}a_{n-k}-F_k}$$ For some coefficients $F_k$. But what is $F_k$? To find out replace $a_{n-k}$ with $\frac6{a_{n-k-1}-1}$, then $$a_n=\frac{6(F_{k-1}-F_{k-2}a_{n-k})}{F_{k-1}a_{n-k}-F_k}=\frac{6(F_{k-1}-F_{k-2}\frac6{a_{n-k-1}-1})}{F_{k-1}\frac6{a_{n-k-1}-1}-F_k}=\frac{6((6F_{k-2}+F_{k-1})-F_{k-1}a_{n-k-1})}{F_{k}a_{n-k-1}-(6F_{k-2}+F_{k-1})}=\frac{6(F_{k}-F_{k-1}a_{n-k-1})}{F_{k}a_{n-k-1}-F_{k+1}}$$ So it would seem that $F_k=F_{k-1}+6F_{k-2}$. By noting that $F_1=1,F_2=7$, we have the solution (I presume you can solve it yourself) $$F_k=\frac25(-2)^k+\frac353^k$$ By plugging in $k=n$, we now have $$a_n=\frac{6(F_{n-1}-F_{n-2}a_{0})}{F_{n-1}a_{0}-F_n}=\frac{6(F_{n-1}-2F_{n-2})}{2F_{n-1}-F_n}$$ which eventually simplifies to $$a_n=3-\frac{5}{4\left(-\frac23\right)^n+1}$$ which fits both $a_0=2$ and $a_n=\frac6{a_{n-1}-1}$, so the expression is indeed correct.
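The closed form is easy to test against the recursion directly; a short Python sketch using exact rational arithmetic, so that no floating-point doubt creeps in:

```python
from fractions import Fraction

def closed_form(n):
    # a_n = 3 - 5 / (4 * (-2/3)^n + 1)
    return 3 - Fraction(5) / (4 * Fraction(-2, 3) ** n + 1)

a = Fraction(2)                        # a_0 = 2
for n in range(30):
    assert a == closed_form(n)
    a = Fraction(6) / (a - 1)          # a_n = 6 / (a_{n-1} - 1)
print("closed form matches the recursion for n = 0, ..., 29")
```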
{ "language": "en", "url": "https://math.stackexchange.com/questions/939725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How can I prove this problem on geometry? I need to prove the following: If $P$ is an inner point of $\triangle ABC$, then there is a unique transversal $EF$ of $\overleftrightarrow{AB}$ and $\overleftrightarrow{AC}$, where $E$ is on $\overleftrightarrow{AC}$ and $F$ is on $\overleftrightarrow{AB}$, such that $P$ is the midpoint of the segment $EF$. I tried for a long time but got nothing!! Any ideas? Thanks in advance.
This answer is based on @noneEggs's suggestion. 1: Extend BP to Q such that BP = (0.5) BQ 2: (Can be skipped.) A circle (centered at P and radius = PB) is drawn cutting BP extended at Q. 3: A line through Q is drawn parallel to AB cutting AC (extended if necessary) at R. 4: Join BR. 5: A line through Q is drawn parallel to RB cutting AB at S. 6: Join SR. According to the construction, SBRQ is a parallelogram with diagonals BQ and SR cutting each other at T. Since the diagonals of a parallelogram bisect each other, BT = (0.5) BQ. This, together with the result in (1), means that T and P are the same point. Thus, RPS is a straight line with SP = PR. R and S are then the required E and F respectively.
{ "language": "en", "url": "https://math.stackexchange.com/questions/939824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Evaluating $\int_{0}^{\pi/4} \log(\sin(x)) \log(\cos(x)) \log(\cos(2x)) \,dx$ What tools would you recommend me for evaluating this integral? $$\int_{0}^{\pi/4} \log(\sin(x)) \log(\cos(x)) \log(\cos(2x)) \,dx$$ My first thought was to use the beta function, but it's hard to get such a form because of $\cos(2x)$. What other options do I have?
Mathematica cannot find an expression of this integral in terms of elementary functions. However, it can be integrated numerically, like this NIntegrate[Log[Sin[x]] Log[Cos[x]] Log[Cos[2 x]], {x, 0, Pi/4}] to yield the result -0.05874864 Edit: I'm told an analytic expression might still exist, even though it cannot be found with Mathematica. Regardless, the numeric value is the one above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/939937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 2, "answer_id": 1 }
Order type relation in poset and well ordered sets I just read the definition: Two partially ordered sets $X$ and $Y$ are said to be similar iff there is a bijective function $f$ from $X$ to $Y$ such that for $f(x) < f(y)$ to occur a necessary and sufficient condition is $x < y$. As far as I can understand, "necessary and sufficient" is required because elements may not be comparable and yet it may occur that $f(x) < f(y)$. (This is from the book Naive Set Theory by Halmos.) But I also remember that there was the same definition for well ordered sets, i.e. 'Two well ordered sets $X$ and $Y$ are said to be similar iff there is a bijective function from $X$ to $Y$ such that for $f(x) < f(y)$ to occur a necessary and sufficient condition is $x < y$.' I think we don't need both the necessary and the sufficient condition here; only the necessary or the sufficient condition will be enough. Am I right?
You are right! If $x<y$ implies $f(x)<f(y)$, then $x\ge y$ implies either $y<x$, in which case $f(y)<f(x)$, or $x=y$, so that $f(x)=f(y)$. Either way, $x\not< y$ implies $f(x)\not<f(y)$, so that sufficiency implies necessity. A symmetric argument works for necessity implying sufficiency. This holds for any totally ordered set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/940018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$|b-a|=|b-c|+|c-a| \implies c\in [a,b]$ We know that if $c\in [a,b]$ we have $|b-a|=|b-c|+|c-a|$. I'm trying to prove that if the norm is induced by an inner product, then the converse holds. I need a hint or something. Thanks in advance
I will prove that generally, in an inner product space $(H,\langle,\rangle)$ the equality $$\Vert a-b\Vert=\Vert a-c\Vert+\Vert c-b\Vert$$ implies $c\in[a,b]$. Indeed, let $x=b-c$, $y=c-a$, so that the hypothesis becomes $\Vert x+y\Vert=\Vert x\Vert+\Vert y\Vert$. Squaring and simplifying we get $$ \langle x,y\rangle=\Vert x\Vert\cdot\Vert y\Vert $$ So $$ \left\Vert \Vert y\Vert x-\Vert x\Vert y \right\Vert^2= 2\Vert x\Vert^2\, \Vert y\Vert^2 - 2\Vert x\Vert\, \Vert y\Vert\langle x,y\rangle=0 $$ That is $\Vert y\Vert x-\Vert x\Vert y =0$. Going back to $a$, $b$ and $c$ this is equivalent to $$ c=\lambda a+(1-\lambda) b \quad\hbox{with}\quad \lambda=\frac{\Vert b-c\Vert}{\Vert a-b\Vert}. $$ That is $c\in[a,b]$.$\qquad\square$
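For what it's worth, a quick numerical illustration of the final formula in $\mathbb{R}^3$ (a NumPy sketch with arbitrary random data). It only checks the easy consistency direction, namely that a point of the segment makes the triangle inequality tight and that the $\lambda$ from the proof recovers it:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal(3), rng.standard_normal(3)

lam = 0.3                                  # any lambda in [0, 1]
c = lam * a + (1 - lam) * b                # a point of the segment [a, b]

norm = np.linalg.norm
assert np.isclose(norm(a - b), norm(a - c) + norm(c - b))   # equality case of the triangle inequality
assert np.isclose(norm(b - c) / norm(a - b), lam)           # lambda = ||b - c|| / ||a - b||
print("consistent with c = lambda*a + (1 - lambda)*b")
```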
{ "language": "en", "url": "https://math.stackexchange.com/questions/940070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }