What is $(-1)^{\frac{1}3}$? I was surfing Facebook and I ran into this question posted by a high school student: $$\text{Which value equals } (-1)^{\frac{1}3}?\quad\text{Is it } 1\text{ or }-1?$$ He said that he asked this because he did it in two ways, both of which seem valid but they generate different answers. Way #1: $$(-1)^{\frac{1}3}=\sqrt[3]{-1}=-1$$ since $(-1)^3=-1$. However, he also did something else. Way #2: $$(-1)^{\frac{1}3}=(-1)^{\frac{2}6}=\left((-1)^2\right)^{\frac{1}6}=\sqrt[6]1=1$$ and this solution seems valid as well. He's confused, and after reading his question, I became somewhat confused as well. I know that $1^3\ne-1$, but I can’t see why Way #2 is wrong. Which solution above is invalid? Or is $(-1)^{\frac{1}3}$ undefined? Thanks in advance. I apologize if this is a stupid question.
The fact is that $$a^{mn}={(a^m)}^n$$ need not always be valid (for non-integer exponents with a negative base). The correct way is $$x={(-1)}^{1/3}$$ $$x^3+1=0$$ $$(x+1)(x^2-x+1)=0$$ Since $x^2-x+1$ has no real roots, if $x$ is real then $$x=-1$$
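A quick numerical illustration (my addition, not part of the original answer): Python's `**` operator takes the principal complex branch of a fractional power, so $(-1)^{1/3}$ comes out as $e^{i\pi/3}$, neither $1$ nor $-1$, and the exponent rule used in Way #2 visibly fails off that branch.

```python
import cmath

# Principal cube root of -1: exp(i*pi/3) = 1/2 + (sqrt(3)/2)i, not -1.
print((-1) ** (1 / 3))               # (0.5000...+0.8660...j)
print(cmath.exp(1j * cmath.pi / 3))  # same value

# The rule a**(m*n) == (a**m)**n breaks for negative bases:
print(((-1) ** 2) ** (1 / 6))        # 1.0 -- the flaw in Way #2
```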
{ "language": "en", "url": "https://math.stackexchange.com/questions/3907035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Minimal value of $\mathbb{E}[(X-g(Y))^2]$. In a test I had two true/false questions and I'm not sure if I got them right: Given $X$ and $Y$ as two random variables (we don't know if independent), with $g:\mathbb{R}\to\mathbb{R}$:

* $g(Y)=\mathbb{E}[X|Y]$ makes $\mathbb{E}[(X-g(Y))^2]$ minimal
* $c=var[X]$ makes $\mathbb{E}[(X-c)^2]$ minimal

Now, in both cases the second part resembles the formula of the variance, but I don't think I need it. Due to the fact that both $\mathbb{E}[(X-g(Y))^2]$ and $\mathbb{E}[(X-c)^2]$ are the expected value of a square, the minimal value that they can get is zero, and they get it when $g(Y)=X$ and $c=X$. So both the answers should be false, but I'm not really sure if I'm missing something; can you confirm that my logic is right?
The key point here is that when trying to define an estimator function such as $g(\cdot)$, we should require it to be deterministic rather than stochastic. The substitution $g(Y)=X$ may sound reasonably good, but it is impractical and useless because we don't know the exact value of $X$. Instead, we can exploit additional information, say $Y$, to make the best possible estimate of $X$ (in the sense of minimum error variance). The more $Y$ depends on $X$, the better the estimator is.
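A small simulation (my own sketch, not from the original answer) makes the point concrete: among functions of $Y$, the conditional mean $\mathbb{E}[X\mid Y]$ gives the smallest mean squared error, and among constants the minimizer is $c=\mathbb{E}[X]$, not $\operatorname{var}[X]$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
y = rng.normal(size=n)
x = y**2 + rng.normal(size=n)          # constructed so that E[X | Y] = Y**2

mse = lambda pred: np.mean((x - pred) ** 2)

print(mse(y**2))       # conditional mean: ~1.0 (just the noise variance)
print(mse(y))          # some other function of Y: ~5.0
print(mse(x.mean()))   # best constant c = E[X]: ~3.0
print(mse(x.var()))    # c = var[X]: ~7.0, clearly not optimal
```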
{ "language": "en", "url": "https://math.stackexchange.com/questions/3907218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
problem of maximum in physics An elastic ball with a mass $m_1$ is moving at the speed of $v$ and strikes a second elastic ball at rest. The second ball, with a mass $m_2$, acquires a speed $v_2$ and strikes a third elastic ball at rest, with a mass $m_3$, giving it a speed $v_3$. If the masses of the first and the third balls are fixed, find what mass the second ball should have in order to give the third ball the highest velocity. I've tried with the conservation of momentum and kinetic energy, $m_1 (v_{m_1})^i=m_1 (v_{m_1})^f+m_2 (v_{m_2})^f+m_3 (v_{m_3})^f$ $(v_{m_1})^i=(v_{m_1})^f+(v_{m_2})^f+(v_{m_3})^f$ but I'm afraid I'm lost
Suppose the first ball has speed $v^\prime$ after the first collision. We combine momentum conservation $m_1v=m_1v^\prime+m_2v_2$ with energy conservation (since the collision is elastic) $\tfrac12m_1v^2=\tfrac12m_1v^{\prime2}+\tfrac12m_2v_2^2$ to prove $v_2=\frac{2m_1v}{m_1+m_2}$ (I'll leave that as an exercise). By the same logic,$$v_3=\frac{2m_2v_2}{m_2+m_3}=\frac{4m_1m_2v}{(m_1+m_2)(m_2+m_3)}.$$So we choose $m_2$ to minimize$$\frac{(m_1+m_2)(m_2+m_3)}{m_2}=m_1+m_3+m_2+\frac{m_1m_3}{m_2}$$in terms of fixed $m_1,\,m_3$. By the AM-GM inequality, we take $m_2=\sqrt{m_1m_3}$.
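A quick numerical sanity check of the closed form (my addition; the specific masses are arbitrary): scanning over $m_2$ and comparing the maximizer with $\sqrt{m_1 m_3}$.

```python
import numpy as np

m1, m3, v = 2.0, 8.0, 1.0
m2 = np.linspace(0.01, 50, 500_000)
v3 = 4 * m1 * m2 * v / ((m1 + m2) * (m2 + m3))

print(m2[np.argmax(v3)], np.sqrt(m1 * m3))   # both ~4.0
```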
{ "language": "en", "url": "https://math.stackexchange.com/questions/3907397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is it right that $\cup_{g\in G}gHg^{-1}\subsetneq G$, for every $H$, proper subgroup of infinite group $G$? Let $G$ be a group and $H$, a proper subgroup of $G$ with $[G:H]<\infty$. Is it right that $\bigcup\limits_{g\in G}gHg^{-1}$ is a proper subset of $G $? If $G$ is a finite group we can prove this claim, but is this right for infinite groups?
It is false for some infinite groups. (Edit: The "false" here refers to the question asked in the title, not in the body where there is the additional condition of $H$ having finite index in $G$.) The simplest counterexample is $G = {\rm GL}_2(\mathbf C)$ and $H$ is the subgroup of upper-triangular matrices $\begin{pmatrix}a&b\\0&c\end{pmatrix}$ where $a, c \in \mathbf C^\times$. Every $A \in {\rm GL}_2(\mathbf C)$ has an eigenvector in $\mathbf C^2$, say $v$ with eigenvalue $\lambda$: $Av = \lambda v$ and $v \not= \binom{0}{0}$. Let $w$ be a vector in $\mathbf C^2$ that is outside the line $\mathbf C v$. We can write $Aw = zv + z'w$ for $z$ and $z'$ in $\mathbf C$. (The matrix $A$ might not have an eigenvector linearly independent of $v$, i.e., not all $2 \times 2$ complex matrices are diagonalizable, so we need not be able to pick $w$ as an eigenvector of $A$.) The matrix representation of $A$ with respect to the basis $\{v,w\}$ of $\mathbf C^2$ is $\begin{pmatrix}\lambda &z\\0&z'\end{pmatrix}$, so $A$ is conjugate by an invertible matrix in $G$ to a $2 \times 2$ matrix in $H$. That proves $G = \bigcup_{g \in G} gHg^{-1}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3907676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Logarithmic convexity of incomplete gamma function. The incomplete Gamma function $F(t)$ satisfies $$ 1 - F(t) \sim \int^\infty_t dx \, e^{-x} x^{\alpha - 1} $$ for $t > 0$ and its derivative $F^\prime \sim e^{-t} t^{\alpha - 1}$ is the density function of the Gamma distribution. It is claimed that the hazard rate, also known as the failure rate $$h = -\frac{d}{dt} \ln (1 - F) = \frac{F^\prime}{1-F}$$ is increasing everywhere (i.e. $h^\prime (t) > 0$) for the case that $\alpha > 1$ and decreasing everywhere for the case that $\alpha < 1$. In other words, the logarithmic convexity of $1 - F$ depends only on the sign of $\alpha - 1$. This result is also given as a proof exercise in problem 5.22 of this book. It's unclear to me how to prove this result. We have $$ h^\prime = \frac{(1-F)F^{\prime \prime} + (F^\prime)^2}{(1-F)^2} = \frac{F^\prime}{(1-F)^2} \left[ (1-F)\left(\frac{\alpha - 1}{t} - 1 \right) + F^\prime \right], $$ but from this point the next step is unclear.
Consider $t=0$. Then $h(0)=0$ and, since for any $\epsilon > 0$ we have $h(\epsilon) > 0$, the derivative (to the right) at zero is positive. (I assume we can omit considering $t<0$.) Now, consider $t > 0$. On this set, $1/h$ is well defined and given by $$1/h = \int_t^{\infty} dx\, e^{t-x} \left(\frac{x}{t}\right)^{\alpha - 1}$$ Doing a change of variable, we get $$1/h = \int_0^{\infty} dx\, e^{-x} \left(1+\frac{x}{t}\right)^{\alpha - 1}$$ Now, note that the term $$\left(1+\frac{x}{t}\right)^{\alpha - 1}$$ increases (decreases) in $t$ if $\alpha<1$ ($\alpha>1$), so $1/h$ is increasing (decreasing) and hence $h$ is decreasing (increasing). The claim then follows.
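The monotonicity is easy to check numerically with SciPy's Gamma distribution (my own sketch; `pdf`/`sf` play the roles of $F'$ and $1-F$):

```python
import numpy as np
from scipy.stats import gamma

t = np.linspace(0.1, 10, 1000)
for a in (0.5, 2.0):
    h = gamma.pdf(t, a) / gamma.sf(t, a)   # hazard rate F' / (1 - F)
    trend = "increasing" if np.all(np.diff(h) > 0) else "decreasing"
    print(a, trend)   # 0.5 decreasing, 2.0 increasing
```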
{ "language": "en", "url": "https://math.stackexchange.com/questions/3907794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why does the series diverge? Why does the series $\sum_{n=1}^\infty(-1)^n n$ diverge? Is it because the expanded series: $ -1 + 2 -3 + 4 -5 + 6... $ is the difference between the infinite sum of the even and odd numbers which would be infinity?
Method 1: Try grouping two terms at a time: $-1+2-3+4-5+\dots$ $=(-1+2)+(-3+4)+(-5+6)+\dots$ $=1+1+1+\dots$ whose partial sums grow without bound. Method 2: Look at the partial sums: $-1=-1$ $-1+2=1$ $-1+2-3=-2$ $-1+2-3+4=2$ $-1+2-3+4-5=-3$ $-1+2-3+4-5+6=3$ As we can observe, the sequence of partial sums does not (and will not) converge to a particular point on the number line.
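The oscillating, growing partial sums are easy to see numerically (a quick sketch of my own):

```python
from itertools import accumulate

terms = [(-1) ** n * n for n in range(1, 11)]
print(list(accumulate(terms)))
# [-1, 1, -2, 2, -3, 3, -4, 4, -5, 5] -- no single limit point
```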
{ "language": "en", "url": "https://math.stackexchange.com/questions/3907996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Integration of $\int \sqrt\frac{x-1}{x^5}\ dx$? How would you integrate $\int \sqrt\frac{x-1}{x^5}\ dx$? I've tried to split it up to integrate by parts, with $u = \sqrt{x-1}$ and $v' = \frac{1}{x^5}$. Then I get $-\frac{2}{3}x^{-3/2}(x-1)^{1/2} + \frac{1}{3} \int x^{-3/2}(x-1)^{-1/2}\, dx$. It seems to me that I need to further pursue integration by parts on the second integral, but that would seem to throw me into an endless loop. How can I proceed? I am given that the answer to this is $\frac{2}{3}(1-\frac{1}{x})^{3/2} + C$. What might be the fastest way to achieve this answer?
Another approach: Let $$I=\int \sqrt{\frac{x-1}{x^{5}}}dx$$so \begin{eqnarray*} I&=&\int \frac{\sqrt{x-1}}{x^{5/2}}dx\\ &=&2 \int \frac{\sqrt{u^{2}-1}}{u^4}du, \quad u=\sqrt{x}, du=\frac{1}{2\sqrt{x}}dx\\ &=&2\int\sin^{2}(s)\cos(s)ds, \quad u=\sec(s), du=\tan(s)\sec(s)ds\\ &=&2\int p^{2}dp, \quad p=\sin(s), dp=\cos(s)ds\\ &=&\frac{2p^{3}}{3}+C\\ &=&\frac{2}{3}\left(\frac{x-1}{x^{5}} \right)^{3/2}x^{6}+C \end{eqnarray*} Therefore, $$\boxed{\int \sqrt{\frac{x-1}{x^{5}}}dx=\frac{2}{3}\left(\frac{x-1}{x^{5}} \right)^{3/2}x^{6}+C}$$ Note that $\left(\frac{x-1}{x^{5}}\right)^{3/2}x^{6}=\left(1-\frac{1}{x}\right)^{3/2}$, which matches the given answer.
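A quick SymPy spot check that the antiderivative differentiates back to the integrand for $x>1$ (my addition):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.Rational(2, 3) * (1 - 1 / x) ** sp.Rational(3, 2)   # simplified form
f = sp.sqrt((x - 1) / x**5)

for v in (sp.Rational(3, 2), 2, 10):   # spot checks with x > 1
    print(sp.N(sp.diff(F, x).subs(x, v) - f.subs(x, v)))   # all ~0
```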
{ "language": "en", "url": "https://math.stackexchange.com/questions/3908121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
In how many ways can I distribute 5 distinguishable balls into 4 distinguishable boxes such that no box is left empty. So what I think is the way to solve this question is to first count the total number of ways of putting all the balls into boxes such that the restriction isn't satisfied (i.e. the total number of ways of putting the balls into the boxes). Then using the inclusion-exclusion principle for situations where the boxes are empty. I think the number of total ways that you can distribute them would be $4^5$ but I'm unsure how to do the second part of the question.
One box will have two balls. Choose which box that will be, choose two balls to put in it, then put one ball in each of the remaining boxes. $$\binom{4}{1}\binom{5}{2}\times 3!=4\times 10\times 6=240$$
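A brute-force check (my own sketch) that enumerates all $4^5$ assignments and counts those with no empty box:

```python
from itertools import product

count = sum(
    1
    for boxes in product(range(4), repeat=5)   # a box for each of 5 balls
    if set(boxes) == {0, 1, 2, 3}              # every box occupied
)
print(count)   # 240
```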
{ "language": "en", "url": "https://math.stackexchange.com/questions/3908309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Difference between $\equiv$ and $=$? What is the difference between $\equiv$ and $=$? My thought is, that, when $\equiv$ is used $=$ could have been used as well. The resulting expression would not be wrong, but just take on a slightly different meaning. But what exactly is the relation between those symbols? To give a practical example, consider those: $$ A=B:\iff\forall x:(x\in A\iff x\in B) $$ $$ A=B:=\forall x:(x\in A\iff x\in B) $$ $$ A=B:\equiv\forall x:(x\in A\iff x\in B) $$ Or those: $$ 5+7=12 $$ $$ 5+7\equiv12 $$ $$ 5+7=7+5 $$ $$ 5+7\equiv7+5 $$
$\equiv$ is mostly used for identities (relations that hold for all values of the variables involved), while $=$ is used for equations (which may hold only for particular values).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3908455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Probability that at least one Ace is dealt in poker with $6$ players If two cards are dealt to each of the 6 players from a deck of 52 cards, what is the probability that there is at least one ace dealt? I've tried to solve this using the following reasoning:

* In total there are $ {52 \choose 12 } \cdot {12 \choose 6}$ ways to deal 12 cards to the 6 players.
* Count the number of ways to deal at least one ace.

To count the second, notice that there are in general ${4 \choose N} \cdot 48 \cdot 47 \cdots (48 - (12 - N)) $ ways to choose $N$ aces in the $12$ dealt cards. For each of these combinations there are also ${12 \choose 6}$ ways to distribute two cards to each player. Thus in general the probability would be, following the above reasoning: $$ P(\text{at least one A}) = \frac{{4 \choose N} \cdot 48 \cdot 47 \cdots (48 - (12 - N))}{ {52 \choose 12 } \cdot {12 \choose 6}^2 }$$ However, this does not give a sensible answer. Can anyone see where the flaw in the above reasoning is? Edit: Thanks to those who have answered below - I can see that taking the inverse of the question by asking how to deal no aces is much easier. However, can anyone show me how to do it by counting the positive cases as I have tried above?
The probability of at least one ace is 1 minus the probability of no ace: $$1-\frac{\binom{4}{0}\binom{48}{12}}{\binom{52}{12}}=\frac{2759}{4165},$$ so it is about twice as likely to have at least one ace than no ace. If you prefer to count positive cases: $$\sum_{k=1}^4 \frac{\binom{4}{k}\binom{48}{12-k}}{\binom{52}{12}}=\frac{2759}{4165}$$
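Both expressions are easy to confirm with exact integer arithmetic (my addition):

```python
from math import comb

complement = 1 - comb(48, 12) / comb(52, 12)
positive = sum(comb(4, k) * comb(48, 12 - k) for k in range(1, 5)) / comb(52, 12)
print(complement, positive, 2759 / 4165)   # all ~0.662425
```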
{ "language": "en", "url": "https://math.stackexchange.com/questions/3908716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
solving a system of cubic equations with real roots Consider the following system of equations: \begin{align} \frac{1}{8} (\alpha +2 x)^2 \left((\alpha +2 x)^2-12\right)+\\ +\frac{4}{9} y^2 \left((\alpha +2 x)^2+2\right)+\frac{32 y^4}{81}&=0\\ \alpha ^3-6 \alpha +8 x^3-4 \alpha x^2-2 \left(\alpha ^2+2\right) x&=0 \end{align} I would like to eliminate the $\alpha$ value in order to have an equation of the form $f(x,y)=0$. Using Cardano's formula or Mathematica, I end up with imaginary values. However if I solve them numerically I obtain a nice shape $f(x,y)=0$ Due to the numerical plots I obtain I am thinking that there must be a nice (analytical) solution for these equations. How could I derive such an expression for $f(x,y)=0$?
If you do have Mathematica, use the command GroebnerBasis[{pol1, pol2}, {x,y},{a}] where pol1, pol2 are the two expressions that equal $0$. This will eliminate the variable $a$. On WolframAlpha I got a polynomial in $x$, $y$ with fairly large coefficients. Hope there were no mistakes in the input. $\bf{Added:}$ You will get the resultant of the two expressions considered as polynomials in $a$. The Groebner command is just more general: it can eliminate some variables (the ones in the second group) from a system of equations.
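For readers without Mathematica, here is a rough SymPy equivalent (my own sketch; the variable names are mine). Per the Added remark, the resultant with respect to $a$ gives the eliminated equation $f(x,y)=0$ directly:

```python
import sympy as sp

x, y, a = sp.symbols('x y a')
p1 = (sp.Rational(1, 8) * (a + 2*x)**2 * ((a + 2*x)**2 - 12)
      + sp.Rational(4, 9) * y**2 * ((a + 2*x)**2 + 2)
      + sp.Rational(32, 81) * y**4)
p2 = a**3 - 6*a + 8*x**3 - 4*a*x**2 - 2*(a**2 + 2)*x

# The resultant vanishes exactly when p1 and p2 share a root in a,
# so it is a polynomial f(x, y) whose zero set contains the desired curve.
f = sp.resultant(p1, p2, a)
print(sp.factor(f))
```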
{ "language": "en", "url": "https://math.stackexchange.com/questions/3909033", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
How many students in the school? If they stand in rows of 13, there are 8 students left; if they stand in rows of 15, there are 3 students left; and if they stand in rows of 17, there are 9 students left. How many students are there, given that the total number of students is < 5000? The congruences are: $$x\equiv 8 \pmod {13} \\ x\equiv 3 \pmod {15} \\ x\equiv 9 \pmod {17}$$ I'm still new to this; how can I apply the Chinese Remainder Theorem to these congruences to find the total number of students?
It's easy here since the moduli are in A.P. (Arithmetic Progression) $\,13,15,17\,$ so we can apply a simple inversion-free version of CRT that only needs to invert the A.P. step-size (here $2$). First we transform our residues $\,8,3,9\,$ to $\rm\color{#0a0}{con}\color{#90f}{gruent}$ ones in A.P. (using the Remark below), so $$\begin{align} &x\equiv 8\equiv\ \ \:\! \color{#90f}{99}\!\!\!\pmod{\!13}\\ &x\equiv 3\equiv\ \ \ \ \:\! 3\!\!\!\pmod{\!15}\\ &x\equiv 9\equiv\! \color{#0a0}{-93}\!\!\!\pmod{\!17}_{\phantom{|_{|_|}}}\end{align}\qquad\qquad$$ thus $\!\bmod \color{#c00}{13\!+\!2n}\!:\ x\equiv 99\!-\!96n\!$ $\iff\! 2x\equiv 198\!-\!96\smash[t]{\overbrace{(\color{#c00}{-13})}^{\large \color{#c00}{2n}}}\equiv 1446\,\smash{\overset{\large \div\,2_{\phantom|}\!}\iff}\, x\equiv\,\bbox[5px,border:1px solid #c00]{723}$ Remark $ $ To get residues in A.P. we solve $\,3\!-\!(8\!+\!13j) = (9\!-\!17k)\!-\!3 \!\iff\! 17k\!-\!13j= 11$ $ \!\iff\!\!\bmod 13\!:\, 4k^{\phantom{|^|}}\!\!\!\equiv\! 11\!\equiv\! 24\!\iff\! k\equiv 6\,$ so $\,j = 7,\,$ so $\,8\!+\!13j=\color{#90f}{99},\,$ $9\!-\!17k=\color{#0a0}{-93}$
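A quick check of the result with SymPy's CRT helper (my addition):

```python
from sympy.ntheory.modular import crt

x, m = crt([13, 15, 17], [8, 3, 9])
print(x, m)                             # 723 3315
print([723 % p for p in (13, 15, 17)])  # [8, 3, 9]
```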
{ "language": "en", "url": "https://math.stackexchange.com/questions/3909216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Matrix inconsistency I've been looking into the following derivation of a parameterization of a green-assets Markowitz problem. One author completes the following step in a calculation that I cannot follow. \begin{align} a&=(1-\lambda)\frac{d}{a^2}\left[ \frac{1}{\eta^2} \left(\underbrace{ I_N-\frac{1}{\eta^2/\sigma^2+N} \mathbf{1}_{N\times 1}\mathbf{1}_{N\times 1}' }_{Y}\right)\right]g\\ &=(1-\lambda)\frac{d}{a^2\eta^2}g \end{align} where $I_N$ is the identity matrix in $N$ dimensions; $\mathbf{1}$ is a matrix of all ones; $\lambda, \eta, d, a$ are scalars; $g$ is an $N\times1$ matrix. The authors mention $\eta^2\approx(0.7/0.3)\sigma^2$; however, this feels irrelevant in this context. The above implies that the entire parenthesised part is $Y=1$? And I do not really understand why? I would be super grateful if anyone could point me in the right direction. EDIT 1: Additional information In the list of assumptions these two were shortly mentioned: $$ (\mathbf{1}_{N\times 1})'g=0, g'g =1 $$
The key point is that $Yg = g$. To see this, note that $$ Yg = [I_N - K \cdot \mathbf 1 \mathbf 1^T]g = I_N g - K \cdot \mathbf 1 (\mathbf 1^T g) = g - 0 = g. $$
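A numeric illustration of the cancellation (my own sketch; the dimensions and constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eta2, sigma2 = 5, 4.0, 1.0

ones = np.ones((N, 1))
Y = np.eye(N) - ones @ ones.T / (eta2 / sigma2 + N)

g = rng.normal(size=(N, 1))
g -= ones * (ones.T @ g) / N        # enforce the assumption 1'g = 0
g /= np.linalg.norm(g)              # and g'g = 1

print(np.allclose(Y @ g, g))        # True: the rank-one term annihilates g
```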
{ "language": "en", "url": "https://math.stackexchange.com/questions/3909452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Binomial type sum Does the following reduce to something simpler? $\sum_j {k \choose 2j} x^{2j}$ I have perused the combinatorial identities, but I could not find anything that fits my case.
HINT We say $\sum_j{k\choose 2j} a_j$ is the *even aerated binomial transform* of the sequence $\{a_j\}$. Now if $\{b_j\}$ is another sequence such that $\Delta b_j=a_j$ for $j>0$, we can easily show

THEOREM $1$: if $\{g_k\}$ is the binomial transform of $\{a_j\}$, then $\Delta g_k-a_0[k=-1]$ is the binomial transform of $\{a_{j+1}\}$, where the binomial transform is defined by $g_k=\sum_j{k\choose j} a_j$.

Let $G_k$ and $H_k$ be the even aerated binomial transforms of $\{a_j\}$ and $\{b_j\}$ respectively. Consider the sequence $\{b_0,0,b_1,0,b_2,\dots\}$, apply Theorem $1$ twice, and get the new sequence $\{b_1,0,b_2,\dots\}$. The binomial transform of $\{b_1,0,b_2,\dots\}$ is given by $\{H_{k+2}-2H_{k+1}+H_k+b_0[k=-1]-b_0[k=-2]\}$. Since $\Delta b_j=a_j$ for $j>0$, for all $k$ $$G_k=H_{k+2}-2H_{k+1}+H_k+b_0[k=-1]-b_0[k=-2]-H_k=H_{k+2}-2H_{k+1}+b_0[k=-1]-b_0[k=-2].$$ I hope this helps you.

* If $P$ is a statement, $[P]$ is $1$ if $P$ is true and $0$ otherwise (the Iverson bracket).
* $\Delta b_j=b_{j+1}-b_j$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3909668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Give the proof of the identity $0.5(|A+B|+|A-B|)= \max (|A|,|B|)$ $$0.5(|A+B|+|A-B|)= \max (|A|,|B|)$$ where $A$ and $B$ are any values (real numbers). I have tried to solve this using the triangle inequality but only got $0.5(|A+B|+|A-B|)\geq |A|$. How can I prove the given identity?
WLOG, $A\ge B\ge0$ because swapping $A,B$ and/or changing one or two signs has no effect. Then the identity reduces to $$\frac{A+B+A-B}2=A,$$ which is true.
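A randomized check of the identity (my own addition):

```python
import random

for _ in range(100_000):
    a, b = random.uniform(-10, 10), random.uniform(-10, 10)
    lhs = 0.5 * (abs(a + b) + abs(a - b))
    assert abs(lhs - max(abs(a), abs(b))) < 1e-12
print("identity holds on all samples")
```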
{ "language": "en", "url": "https://math.stackexchange.com/questions/3910001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Trying to understand a double summation formula I am trying to understand a double summation mentioned in this paper: https://doi.org/10.1016/0024-3795(95)00696-6. How can we prove that this is true? $$\sum_{l = 1}^n\;\; \sum_{k = l}^{n+l} = \sum_{k \,=\, 1}^n \;\; \sum_{l \,=\, 1}^{k} \,+\, \sum_{k \,=\, n+1}^{2n}\;\; \sum_{l \,=\, k-n}^{n}$$
Here is another variation to show the identity for $n\geq 1$: \begin{align*} \sum_{l=1}^n\sum_{k=l}^{n+l}a_{k,l} =\sum_{k=1}^n\sum_{l=1}^ka_{k,l}+\sum_{k=n+1}^{2n}&\sum_{l=k-n}^na_{k,l}\tag{1} \end{align*} We start with the right-hand side of (1) and consider at first the left sum. We obtain \begin{align*} \sum_{k=1}^n\sum_{l=1}^ka_{k,l}=\sum_{\color{blue}{1\leq l\leq k\leq n}} a_{k,l}=\sum_{l=1}^n\sum_{k=l}^{n}a_{k,l}\tag{2} \end{align*} In the middle of (2) we represent the index range of summation as inequality chain. This might help to better see the region of validity and how to exchange the sums. Now the other sum. We obtain \begin{align*} \sum_{k=n+1}^{2n}&\sum_{l=k-n}^na_{k,l}\\ &=\sum_{k=1}^{n}\sum_{l=k}^na_{k+n,l} =\sum_{\color{blue}{1\leq k\leq l\leq n}} a_{k+n,l}=\sum_{l=1}^n\sum_{k=1}^la_{k+n,l}\tag{3}\\ &=\sum_{l=1}^n\sum_{k=n+1}^{n+l}a_{k,l}\tag{4} \end{align*} Comment: * *In (3) we shift the index $k$ by $n$ to start with $k=1$. We are now in the same situation as in (2) and continue accordingly. *In (4) we shift the index back. Adding (2) and (4) gives \begin{align*} \sum_{l=1}^n\sum_{k=l}^{n}a_{k,l}+\sum_{l=1}^n\sum_{k=n+1}^{n+l}a_{k,l} \color{blue}{=\sum_{l=1}^n\sum_{k=l}^{n+l}a_{k,l}} \end{align*} and the claim (1) follows.
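A direct numeric verification of the index swap for a generic array (my own sketch):

```python
n = 6
a = {(k, l): (3 * k + 7 * l) ** 2
     for k in range(1, 2 * n + 1) for l in range(1, n + 1)}

lhs = sum(a[k, l] for l in range(1, n + 1) for k in range(l, n + l + 1))
rhs = (sum(a[k, l] for k in range(1, n + 1) for l in range(1, k + 1))
       + sum(a[k, l] for k in range(n + 1, 2 * n + 1) for l in range(k - n, n + 1)))
print(lhs == rhs)   # True
```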
{ "language": "en", "url": "https://math.stackexchange.com/questions/3910158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Let $a_{n+1}=\sqrt[3]{\frac{1+a_{n}}{2}}$ show that $0 \leq a_{n} \leq 1$ Let $0 \leq a_{1} \leq 1$ and $a_{n+1}=\sqrt[3]{\frac{1+a_{n}}{2}}$ for all $n \geq 1$. Show that: (a) $0 \leq a_{n} \leq 1$ for all n. (b) $ a_{n+1} \geq a_{n}$ for all n. I don't know if the following is correct. My attempt. By using induction: (a) Show that $0 \leq a_{n} \leq 1$ for all n. We have $0 \leq a_{1} \leq 1$ and we want to show that $0 \leq a_{2} \leq 1$ for all n. $0 \leq a_{1} \leq 1 \Rightarrow \frac{1}{2} \leq \frac{a_{1}+1}{2} \leq 1 \Rightarrow \sqrt[3]{\frac{1}{2}} \leq \sqrt[3]{\frac{a_{1}+1}{2}} \leq \sqrt[3]1 \Rightarrow 0.8 \leq a_{2} \leq 1 \Rightarrow 0 \leq a_{2} \leq 1 $ Now we suppose that the relation is true for n. We want to show that it is also true for n+1. We have: $0 \leq a_{n} \leq 1 \Rightarrow \frac{1}{2} \leq \frac{a_{n}+1}{2} \leq 1 \Rightarrow \sqrt[3]{\frac{1}{2}} \leq \sqrt[3]{\frac{a_{n}+1}{2}} \leq \sqrt[3]1 \Rightarrow 0.8 \leq a_{n+1} \leq 1 \Rightarrow 0 \leq a_{n+1} \leq 1 $ (b) Show that $ a_{n+1} \geq a_{n}$ for all n. $ a_{n+1} \geq a_{n} \Rightarrow \sqrt[3]{\frac{1+a_{n}}{2}} \geq a_{n} \Rightarrow 1+a_{n} \geq 2a_{n}^3$ This is true because we know that $0 \leq a_{n} \leq 1$ and that $ a_{n} \geq a_{n}^3$.
For (a) you don't need to prove the statement for $a_2$. You can simplify the argument by proving first that $a_n\ge0$. This is true for $a_1$; suppose it is true for $a_n$; then $a_{n+1}=\sqrt[3]{(a_n+1)/2}\ge0$. Now the other inequality, which you did well: if $a_n\le1$, then $(a_n+1)/2\le1$ and therefore $a_{n+1}=\sqrt[3]{(a_n+1)/2}\le1$. For (b) the arrows are in the wrong direction! You need to prove that $a_{n+1}\ge a_n$ and you can't take this as an assumption. However $$ a_{n+1}\ge a_n \quad\text{if and only if}\quad a_{n+1}^3\ge a_n^3 \quad\text{if and only if}\quad \frac{a_{n}+1}{2}\ge a_n^3 $$ so we are reduced to proving that $2a_n^3\le a_n+1$. Your argument is fine: $a_n\le1$ implies $a_n^3\le a_n$ and $a_n^3\le 1$.
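Iterating the recursion shows both claims numerically (my addition; the starting value is arbitrary in $[0,1]$):

```python
a = 0.2                                   # any a_1 in [0, 1]
for _ in range(100):
    nxt = ((1 + a) / 2) ** (1 / 3)
    assert 0.0 <= nxt <= 1.0              # part (a): stays in [0, 1]
    assert nxt >= a - 1e-12               # part (b): monotone increasing
    a = nxt
print(a)   # -> 1.0, the fixed point of x = ((1 + x)/2)**(1/3)
```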
{ "language": "en", "url": "https://math.stackexchange.com/questions/3910321", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$AA^T =A$, what can you say about the rank of $A$? $AA^T =A$, what can you say about the rank of $A$? Interview question, and I have no idea. The closest thing I can relate to is the inverse. But that's not the case here.
Robert Israel has already given the answer to the question asked—the rank can be any nonnegative integer up to the dimension of the matrix. However I wanted to point out that one can say a lot more: note that $$ A^T = (AA^T)^T = (A^T)^TA^T = AA^T = A, $$ and so $A$ is symmetric. In particular, we now know that $A^2=A$ and so $A^2-A=0$, which means that the minimal polynomial of $A$ divides $t^2-t$. This tells a lot about the matrix $A$ (that it's diagonalizable and that its eigenvalues can only be $0$ or $1$), and ultimately says that it is a projection matrix.
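A concrete instance (my own example): the orthogonal projection onto the span of a unit vector satisfies $AA^T=A$ and has eigenvalues $0$ and $1$.

```python
import numpy as np

u = np.array([[3.0], [4.0]]) / 5.0      # unit vector
A = u @ u.T                             # projection onto span(u)

print(np.allclose(A @ A.T, A))              # True
print(np.linalg.matrix_rank(A))             # 1
print(np.round(np.linalg.eigvalsh(A), 6))   # [0. 1.]
```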
{ "language": "en", "url": "https://math.stackexchange.com/questions/3910453", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a clever solution to Arnold's "merchant problem"? There is a problem that appears in an interview$^\color{red}{\star}$ with Vladimir Arnol'd. You take a spoon of wine from a barrel of wine, and you put it into your cup of tea. Then you return a spoon of the (nonuniform!) mixture of tea from your cup to the barrel. Now you have some foreign substance (wine) in the cup and some foreign substance (tea) in the barrel. Which is larger: the quantity of wine in the cup or the quantity of tea in the barrel at the end of your manipulations? This problem is also quoted here. Here's my solution: The key is to consider the proportions of wine and tea in the second spoonful (that is, the spoonful of the nonuniform mixture that is transported from the cup to the barrel). Let $s$ be the volume of a spoonful and $c$ be the volume of a cup. The quantity of wine in this second spoonful is $\frac{s}{s+c}\cdot s$ and the quantity of tea in this spoonful is $\frac{c}{s+c}\cdot s$. Then the quantity of wine left in the cup is $$s-\frac{s^2}{s+c}=\frac{sc}{s+c}$$ and the quantity of tea in the barrel now is also $\frac{cs}{s+c}.$ So the quantities that we are asked to compare are the same. However, Arnol'd also says Children five to six years old like them very much and are able to solve them, but they may be too difficult for university graduates, who are spoiled by formal mathematical training. Given the simple nature of the solution, I'm going to guess that there is a trick to it. How would a six year old solve this problem? My university education is interfering with my thinking. $\color{red}{\star}\quad$ S. H. Lui, An interview with Vladimir Arnol′d, Notices of the AMS, April 1997.
The volume of the spoon, $s$, is the conserved quantity. It is also the amount of wine in the cup after the first transfer. When you then take some mixture $\mathit{tea}+\mathit{wine} = s$ into the spoon, $s-\mathit{wine}$ is both the amount of wine left in the cup and the amount of tea poured into the wine barrel.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3910623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 10, "answer_id": 1 }
Solve $\lim\limits_{z\to 2i} 2x + y^2 = 4$ using the formal definition of limit First of all, we need to remember that $z=x+iy$. Well, I'm confused about a tip from the author where he says to consider $|z-z_{0}| < 1$. Then we need to prove $|x| < 1$ and $|y|<3$. This is the part I'm not understanding! How can I prove that? Then he says to use this affirmation to conclude: $$\begin{align}|f(z) - f(z_0)| &= |2x+y^2 - 4| \\&= |2x + (y-2)(y+2)| \\&\leq 2|x| + |y-2|(|y|+2)\\& \leq 5|x| + 5|y-2|\end{align}$$ and then $5|x| + 5|y-2| \leq 10|z-2i|$
$|a+ib|=\sqrt {a^{2}+b^{2}}$ if $a$ and $b$ are real numbers. Hence $|a+ib| \geq \sqrt {a^{2}}=|a|$ and $|a+ib| \geq \sqrt {b^{2}}=|b|$. Thus $|(x+iy)- (0+2i)| =|x+i(y-2)| <1$ implies that $|x| <1$ and $|y-2| <1$ so $|y| <1+2=3$. [To finish the proof take $\delta =\frac {\epsilon} {10}$]
{ "language": "en", "url": "https://math.stackexchange.com/questions/3910748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find $\lim_{n \to \infty} \Big (1 - \frac{c \ln(n)}{n} \Big)^n$ Let $c \neq 1$ be a postive real number. Find the following limit $$\lim_{n \to \infty} \Big (1 - \frac{c \ln(n)}{n} \Big)^n.$$ I know that $\lim_{n \to \infty} \Big( 1 + \frac{c}{n} \Big)^{bn} = e^{bc}$. But it cannot be used here, since $\ln(n)$ is not a constant. I also know how to find the $\lim_{n \to \infty} \Big( 1 + \frac{1}{n^2} \Big)^{n}$, which is maybe closer to our limit. I tried to use some methods (e.g., L'Hopital's rule) that appear in the proofs of the above limits to solve also $\lim_{n \to \infty} \Big (1 - \frac{c \ln(n)}{n} \Big)^n$ but without success. The graphs of $\Big (1 - \frac{c \ln(n)}{n} \Big)^n$ indicate that the limit is $\infty$ if $c < 1$ and the limit is 0 otherwise. I appreciate any suggestions on how to solve this limit. Also, any readings on this topic (I did not find any such example anywhere) are appreciated.
Using the Maclaurin series of the logarithm and the exponential function, we have \begin{align*} \left( {1 - \frac{{c\log n}}{n}} \right)^n & = \exp \left( {n\log \left( {1 - \frac{{c\log n}}{n}} \right)} \right) = \exp \left( {n\left( { - \frac{{c\log n}}{n} + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{{n^2 }}} \right)} \right)} \right) \\ &= \exp \left( { - c\log n + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right)} \right) = n^{ - c} \left( {1 + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right)} \right). \end{align*} It is now easy to conclude.
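The asymptotics are easy to eyeball numerically (my own sketch): the computed value tracks $n^{-c}$ ever more closely as $n$ grows.

```python
import math

for c in (0.5, 2.0):
    for n in (10**3, 10**5, 10**7):
        val = (1 - c * math.log(n) / n) ** n
        print(c, n, val, n ** (-c))   # val / n**(-c) -> 1
```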
{ "language": "en", "url": "https://math.stackexchange.com/questions/3910875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Triangle Area Problem with intersecting lines I recently saw this geometry problem and it's been killing me. Given $\overline{AB}=\overline{AC}=20$, $\overline{AD}=\overline{AE}=12$, and $[ADFE]=24$ with $[\cdot]$ denoting area, find the area of $\triangle BCF$ (see diagram) I started by considering line $\overline{DE}$. It's pretty clear to see that $\triangle DEF \sim \triangle CBF$. We also have $\triangle {ADE}\sim\triangle{ABC}$, with $9[ABC]=25[ADE]$. But once I've figured out these parts, I get stuck. What would be the next steps in this situation? I assume I'd have to take advantage of the area ratios I've gotten, but I can't work out the details of that argument.
$[\triangle ADF] = \frac{1}{2} \times 24 = 12$ by symmetry. $\triangle ADF$ shares the same altitude with $\triangle CDF$ (from point $F$ to $AC$), so their areas are in the ratio of their bases, $12:8$. $[\triangle CDF] = 8 \implies [\triangle ACE] = (8 + 24) = \frac{1}{2} \times AC \times EH$, where $EH$ is the altitude from $E$ to $AC$. As $EH = \frac{12}{20} BG$, where $BG$ is the altitude from $B$ to $AC$, $\frac{1}{2} \times AC \times BG = \frac{160}{3} = [\triangle ABC]$. So $[\triangle BCF] = \frac{160}{3} - 24 - 8 - 8 = \frac{40}{3}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3911250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Using the converse of the Cayley-Hamilton theorem My textbook says: Let $M$ be a $3 \times 3$ Hermitian matrix which satisfies the matrix equation $$ M^{2}-5 M+6 I=0 $$ where $I$ refers to the identity matrix. Which of the following are possible eigenvalues of the matrix $M$? (a) (1,2,3) (b) (2,2,3) (c) (2,3,5) (d) (5,5,6) Then it proceeds as: According to the Cayley-Hamilton theorem, we can write $\lambda^{2}-5 \lambda+6=0 \Rightarrow \lambda=2,3$. The correct option is (b). It's clear that the author has used the Cayley-Hamilton theorem in reverse, but how can we use the converse of the Cayley-Hamilton theorem? I've read that the converse of the Cayley-Hamilton theorem doesn't hold in general, so what's the author doing here? I'd be glad if someone pointed out my mistake. Much thanks.
Suppose that $p(M)=0$ for some square matrix $M$ and some polynomial $$ p(\lambda)=\lambda^k+a_{k-1}\lambda^{k-1}+\cdots + a_{1}\lambda+a_0. $$ Then $$ p(M)-p(\lambda)I = -p(\lambda)I. $$ You can rewrite the left side in order to obtain an inverse for $M-\lambda I$ for any $\lambda$ for which $p(\lambda)\ne 0$ as follows: $$ (M-\lambda I)q(\lambda,M)=q(\lambda,M)(M-\lambda I)=-p(\lambda)I $$ Therefore $M-\lambda I$ is invertible if $p(\lambda)\ne 0$. So the only possible eigenvalues of $M$ are the solutions of $p(\lambda)=0$. That does not mean every root of $p(\lambda)$ is an eigenvalue because $q(\lambda,M)=0$ could occur. But it is certainly the case that every eigenvalue of $M$ is a root of $p(\lambda)$. In your case, $p(M)=0$ where $p(\lambda)=\lambda^2-5\lambda+6$. So, the eigenvalues of $M$ must be roots of $p$, which are $3$ and $2$. That does not mean that both $2$ and $3$ are eigenvalues. But $2$ and $3$ are the only possible eigenvalues. Out of your possible answers, the only possible legitimate answer is (b) $2,2,3$ because neither $1$, nor $5$, nor $6$ are possible eigenvalues, as they are not roots of the annihilating polynomial $p$.
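A quick check of the four options (my addition): a Hermitian matrix with the listed eigenvalues satisfies $M^2-5M+6I=0$ exactly when every eigenvalue is a root of $\lambda^2-5\lambda+6$.

```python
import numpy as np

for eigs in ([1, 2, 3], [2, 2, 3], [2, 3, 5], [5, 5, 6]):
    M = np.diag(np.array(eigs, dtype=float))   # Hermitian with these eigenvalues
    residual = M @ M - 5 * M + 6 * np.eye(3)
    print(eigs, np.allclose(residual, 0))      # True only for [2, 2, 3]
```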
{ "language": "en", "url": "https://math.stackexchange.com/questions/3911412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How many parameter-free definable subsets are there of the structure $(\mathbb{R}, *)$? This is related to a question where I asked how many subsets of $\mathbb{R}$ are parameter-free definable in the structure $(\mathbb{R}, +)$. Now, I am asking how many subsets of $\mathbb{R}$ are parameter-free definable in $(\mathbb{R}, *)$. I believe there are only 32 of them. Is this correct?
A good first step, which is almost always easier, is to count automorphism orbits. (Note that this is also how the answer to your previous question went.) In the structure $(\mathbb{R};*)$, this is easy to do: the orbits are exactly $\{0\}$, $\{1\}$, $\{-1\}$, $\mathbb{R}_{>0}\setminus\{1\}$, and $\mathbb{R}_{<0}\setminus\{-1\}$. Checking that the latter two are in fact orbits is a bit annoying, but it's not hard. My suggestion is to first think about the orbits of the structure $(\mathbb{R};+)$, which is isomorphic to $(\mathbb{R}_{>0};*)$ and is easier to think about. This means that there are exactly $32$ automorphism-closed subsets of $(\mathbb{R};*)$. Since every parameter-freely-definable set is automorphism-closed, this gives an upper bound for the OP; it's easy to check that in fact each of the five orbits above (and hence each automorphism-closed set) is parameter-freely definable, so we're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3911554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this relation symmetric? $R=\{(1,1),(0,0)\}$ A relation is symmetric if $xRy$ implies $yRx$, but can it be that $x=y$?
It is symmetric. If $\color{green}0R\color{red}0$ then $\color{red}0R\color{green}0$. And if $\color{green}1R\color{red}1$ then $\color{red}1R\color{green}1$. And as those are the only two possible cases of $\color{green}x R\color{red}y$, if we ever have $\color{green}x R\color{red}y$ it will always be the case that we also have $\color{red}y R\color{green}x$. Please let me know if you are red/green colorblind. If so, consider this explanation: If $\color{purple}{\large{\text{ZERO}}}R\color{orange}{\small{\text{zero}}}$ then $\color{orange}{\small{\text{zero}}}R\color{purple}{\large{\text{ZERO}}}$. And if $\color{purple}{\large{\text{ONE}}}R\color{orange}{\small{\text{one}}}$ then $\color{orange}{\small{\text{one}}}R\color{purple}{\large{\text{ONE}}}$. And as those are the only two possible cases of $\color{purple}{\large{\text{X}}}R\color{orange}{\small{\text{y}}}$, if we ever have $\color{purple}{\large{\text{X}}}R\color{orange}{\small{\text{y}}}$ it will always be the case that we also have $\color{orange}{\small{\text{y}}}R\color{purple}{\large{\text{X}}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3911722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving that if $\bigcap_{n=1}^{\infty }K_{n}=\left \{ x \right \}\Rightarrow\operatorname{diam}(K_{n})\rightarrow 0$ Let $(X,d)$ be a compact metric space with each $K_{n}$ closed, $K_{n+1}\subseteq K_{n}$, and $\bigcap_{n=1}^{\infty }K_{n}=\left \{ x \right \}$; show that $\operatorname{diam}(K_{n})\rightarrow 0$. I know there exists a different proof, but my first thought was to prove this with subsequences, which is probably wrong; I wonder whether there is any way we can prove this with subsequences? My attempt: I think we can approach $\operatorname{diam}(K_{n})$ with subsequences $d(x_{k_{n}},y_{k_{n}})\rightarrow \operatorname{diam}(K_{n})$; now for $x,y\in K_{n}$ such that $x_{k_{n}} \rightarrow x$ and $y_{k_{n}} \rightarrow y$, $$d(x_{k_{n}},y_{k_{n}})\leq d(x_{k_{n}},x)+d(x,y)+d(y,y_{k_{n}})$$ and since $\bigcap_{n=1}^{\infty }K_{n}=\left \{ x \right \}$ and $n \to \infty$, $x=y$ $\Rightarrow$ $\operatorname{diam}(K_{n})\leq 0$. I think I know where the mistake is. I just need to fully understand why this can't work; if there is a way to prove this with sequences, I would appreciate it if someone could write it below.
Here is a proof. For all $\varepsilon >0$, call $$H_n= K_n \setminus B(x, \varepsilon )$$ where $B(x, \varepsilon )$ is the open ball of radius $\varepsilon$ centered at $x$. Clearly $H_n$ is compact and $H_{n+1} \subseteq H_n$. Note that $$\bigcap_n H_n = \bigcap_n K_n \setminus B(x, \varepsilon ) = \left\{ x \right\} \setminus B(x, \varepsilon ) = \emptyset $$ This means that one of the $H_n$ is empty (if all were not-empty, then their intersection would be not-empty). Say $H_N$ is the first one to be empty (and so are also $H_{N+1}, H_{N+2}, ...$) In other words $K_N \subseteq B(x, \varepsilon )$. In particular $$\mathrm{diam} \ K_N \le \mathrm{diam} \ B(x, \varepsilon ) = 2 \varepsilon$$ Hence, for all $\varepsilon >0$ there exists $N >0$ such that for all $n \ge N$ $$\mathrm{diam} \ K_n \le 2 \varepsilon$$ which is by definition $$\lim_{n \to \infty} \mathrm{diam} \ K_n =0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3911932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Find limit of $\mathop {\lim }\limits_{h \to {0^ + }} \int\limits_h^{1 - h} {{t^{ - a}}{{\left( {1 - t} \right)}^{a - 1}}dt}$ Given that for each $a\in (0, 1)$, $\mathop {\lim }\limits_{h \to {0^ + }} \int\limits_h^{1 - h} {{t^{ - a}}{{\left( {1 - t} \right)}^{a - 1}}dt}$ exists. Let this limit be $g(a)$. In addition, it is given that the function $g(a)$ is differentiable on $(0, 1)$. Find the value of $g(\frac{1}{2})$ My approach $g(a)=\mathop {\lim }\limits_{h \to {0^ + }} \int\limits_h^{1 - h} {{t^{ - a}}{{\left( {1 - t} \right)}^a}{{\left( {1 - t} \right)}^{ - 1}}dt} $ $g(a)=\mathop {\lim }\limits_{h \to {0^ + }} \int\limits_h^{1 - h} {{{\left( {\frac{{1 - t}}{t}} \right)}^a}{{\left( {1 - t} \right)}^{ - 1}}dt}$ $g(a)=\mathop {\lim }\limits_{h \to {0^ + }} \int\limits_h^{1 - h} {\frac{1}{{1 - t}}{{\left( {\frac{{1 - t}}{t}} \right)}^a}dt} $ From here onward I am not able to make progress.
$$g\left(\frac{1}{2}\right) = \int_0^1 \frac{dx}{\sqrt{x(1-x)}}$$ Now, if $u = \sqrt{x} \implies du = \frac{dx}{2\sqrt{x}}$ $$\implies g\left(\frac{1}{2}\right) = 2\int_0^1\frac{du}{\sqrt{1-u^2}} = 2[\arcsin(u)]_0^1 = \pi$$
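A numerical confirmation with SciPy (my addition); `quad` copes with the integrable endpoint singularities:

```python
import math
from scipy.integrate import quad

val, err = quad(lambda x: 1 / math.sqrt(x * (1 - x)), 0, 1)
print(val, math.pi)   # 3.14159... for both
```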
{ "language": "en", "url": "https://math.stackexchange.com/questions/3912236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Jensen's Formula and the number of zeros. I am doing a question that requires me to find the number of zeros in a disk when $f$ is non-constant, bounded and analytic on $D$. I saw that the Wikipedia page of Jensen's formula has the following information: Jensen's formula can be used to estimate the number of zeros of an analytic function in a circle. Namely, if $f$ is a function analytic in a disk of radius $R$ centered at $z_0$ and if $|f|$ is bounded by $M$ on the boundary of that disk, then the number of zeros of $f$ in a circle of radius $r < R$ centered at the same point $z_0$ does not exceed $$\frac{1}{\log (R/r)}\log\frac{M}{|f(z_0)|}.$$ Does anyone know how this formula is derived?
Let $f(z)$ be analytic function such that $f(0)\ne0$, and let $z_1,z_2,\dots,z_m$ denote the zeros of $f(z)$ satisfying $|z_k|<R$. Then Jensen's formula gives $$ \int_0^1\log|f(Re^{2\pi it})|\mathrm dt=\log\left|f(0)\cdot{R\over z_1}\cdot{R\over z_2}\cdots{R\over z_m}\right| $$ Now, let $|f(z)|\le M$ for $|z|\le R$, then we have $$ \log\left|{R\over z_1}\cdot{R\over z_2}\cdots{R\over z_m}\right|\le\log M-\log|f(0)| $$ Let $0<r<R$, so we can classify $z_1,z_2,\dots,z_m$ into two classes: $$ \log\left|{R\over z_1}\cdot{R\over z_2}\cdots{R\over z_m}\right| =\sum_{|z_k|\le r}\log\left|R\over z_k\right|+\sum_{|z_j|>r}\log\left|R\over z_j\right| $$ Because $\log|R/z_k|$ is at least $\log R/r$, we have $$ N(r)\log\frac Rr\le\sum_{|z_k|\le r}\log\left|R\over z_k\right| $$ where $N(r)$ denotes the number of zeros of $f(z)$ satisfying $|z|\le r$. Now, by the transitivity of inequalities, we conclude that the number of zeros of $f(z)$ within $|z|\le r$ satisfies the following inequality: $$ N(r)\le{1\over\log R/r}\log\left|M\over f(0)\right| $$ Hope this will help you!
{ "language": "en", "url": "https://math.stackexchange.com/questions/3912429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
How to show that $a_{n+2}=\frac{a_{n+1} ^2 -1}{a_n}$ is bounded? Let $a_{n+2}=\frac{a_{n+1} ^2 -1}{a_n}$ be a sequence of real numbers where $a_n>0$ for all $n \in \mathbb{Z}_{+}$. It is given that $a_1=1 , a_2=b>0$ and that $1<b<2$. Is it possible to show that $a_n$ is bounded from this given information?
It seems that for all integers $n\geq 1$, $a_n$ will be a rational fraction in $b$ of degree $n-1$, with leading coefficient $1$ (by that I mean that for all fixed $n\geq 1$, you can write that $a_n\sim b^{n-1}$ as $b$ goes to infinity. So my intuition is that if $b>1$, then $a_n$ will behave like a diverging geometric sequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3912591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Tao Analysis 2 Lemma 6.6.6 (lemma for inverse function theorem) Let $B(0,r)$ be a ball in $\mathbb{R}^n$ centered at the origin, and let $g: B(0,r) \to \mathbb{R}^n$ be a map such that $g(0) = 0$ and $$||g(x) - g(y)|| \le \frac12 ||x-y||$$ for all $x,y \in B(0,r)$ (here $||x||$ denotes the length of $x$ in $\mathbb{R}^n$). Then the function $f : B(0,r) \to \mathbb{R}^n$ defined by $f(x) : = x + g(x)$ is one to one, and furthermore the image $f(B(0,r))$ of this map contains the ball $B(0,r/2)$. I understand how to prove one to one. The proof for the second part is in the picture below. It is not clear to me how the same argument shows that $F$ maps $\overline{B(0, r-\epsilon)}$ to itself. If I use the same argument when $x \in \overline{B(0, r-\epsilon)}$, $$||F(x)|| \le ||y|| + ||g(x)|| \le \frac{r}2 + ||g(x) - g(0)|| \le \frac{r}2 + \frac{r-\epsilon}2 = r- \frac{\epsilon}2$$ This is not sufficient to show that it maps to $\overline{B(0, r-\epsilon)}$. I think I am missing something here. I would appreciate it if you could give some help.
In the argument above $y$ is fixed, and $\|y\|<\frac{r}{2}$. So say $\|y\|=\frac{r-\varepsilon}{2}$. Then $F(x)=y-g(x)$ maps $\overline{B(0,r-\varepsilon)}$ into itself, since $$ \|F(x)\|\le \|y\|+\|g(x)-g(0)\|\le \frac{r-\varepsilon}{2}+\frac{1}{2}\|x-0\|\le r-\varepsilon. $$
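A toy instance of the fixed-point iteration the proof relies on (my own example; $g$ here is any $\tfrac12$-contraction with $g(0)=0$): iterating $F(x)=y-g(x)$ converges to the unique $x$ with $f(x)=x+g(x)=y$.

```python
import numpy as np

g = lambda x: 0.25 * np.sin(x)     # Lipschitz constant 1/4 <= 1/2, g(0) = 0
y = np.array([0.3, -0.2])          # ||y|| ~ 0.36 < r/2 for r = 1

x = np.zeros(2)
for _ in range(60):                # contraction mapping iteration
    x = y - g(x)
print(x + g(x), y)                 # f(x) = x + g(x) reproduces y
```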
{ "language": "en", "url": "https://math.stackexchange.com/questions/3912767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Question regarding syntax of $\bigcap$ definition I know what this definition is supposed to do. However, just looking at the syntax, I cannot explain it to myself. $$ \bigcap M := \{x \mid \forall y \in M : x \in y\} $$ Could somebody please explain to me how to read this definition (not what it does, but rather how to read it close to the definition's syntax, so that it is easier to understand)?
$$ \begin{array}{cccccc} \bigcap M & := & \{ & x & \mid & \forall & y \in M : & x \in y & \} \\ \text{This “$\bigcap M$” thing} & \text{means} & \text{the set of} & \text{all $x$} & \text{such that} & \text{for every} & \text{ set $y$ in $M$,} & \text{$x$ is in set $y$} & \text{.} \end{array} $$
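The same reading translates almost word for word into code (my own sketch):

```python
# x is in the intersection of M  iff  x belongs to every set y in M.
M = [{1, 2, 3}, {2, 3, 4}, {2, 3, 5}]

universe = set().union(*M)
big_cap = {x for x in universe if all(x in y for y in M)}
print(big_cap)                   # {2, 3}
print(set.intersection(*M))      # the built-in agrees
```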
{ "language": "en", "url": "https://math.stackexchange.com/questions/3912932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Matrix Determinant Formula Problem. Let $A$ be a non-singular $n\times n$ matrix and let $\Gamma=[\Gamma_1\quad\Gamma_2]$ be an $n\times n$ orthogonal matrix where $\Gamma_1$ is $n\times n_1$, $\Gamma_2$ is $n\times n_2$ and $n=n_1+n_2$. Show that $$\det(\Gamma_1^TA\Gamma_1)=\det(A)\det(\Gamma_2^TA^{-1}\Gamma_2).$$ My Attempts. Here we make use of the property of orthogonal matrix: \begin{align} \det(A)=\det(\Gamma^TA\Gamma)=\det\left(\begin{bmatrix} \Gamma_1^T \\ \Gamma_2^T \end{bmatrix}A\begin{bmatrix} \Gamma_1 & \Gamma_2 \end{bmatrix}\right)=\det\left(\begin{bmatrix} \Gamma_1^TA\Gamma_1 & \Gamma_1^TA\Gamma_2 \\ \Gamma_2^TA\Gamma_1 & \Gamma_2^TA\Gamma_2 \end{bmatrix}\right). \end{align} Since $A$ is non-singular, $\Gamma_1^TA\Gamma_1$ is also non-singular. Thus, \begin{align} \det(A)=\det(\Gamma_1^TA\Gamma_1)\det\left(\Gamma_2^TA\Gamma_2-\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right). \end{align} If the formula we want to prove is true, we would have \begin{align} 1&=\det(\Gamma_2^TA^{-1}\Gamma_2)\cdot\det\left(\Gamma_2^TA\Gamma_2-\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right) \\ &=\det\left(\Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_2-\Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2\right). \end{align} Nonetheless, I have no idea how to simplify the terms in the parenthesis because I only have $\Gamma_1\Gamma_1^T+\Gamma_2\Gamma_2^T=I$. Hope anyone has good suggestions.
To simplify the terms, we indeed have to use the formula $\Gamma_2\Gamma_2^T=I-\Gamma_1\Gamma_1^T$. Here \begin{align*} \Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_2&=\Gamma_2^TA^{-1}(I-\Gamma_1\Gamma_1^T)A\Gamma_2 \\ &=\Gamma_2^TA^{-1}A\Gamma_2-\Gamma_2^TA^{-1}\Gamma_1\Gamma_1^TA\Gamma_2 \\ &=I-\Gamma_2^TA^{-1}\Gamma_1\Gamma_1^TA\Gamma_2 \end{align*} and \begin{align*} &\quad\ \Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2 \\ &=\Gamma_2^TA^{-1}(I-\Gamma_1\Gamma_1^T)A\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2 \\ &=\Gamma_2^TA^{-1}A\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2-\Gamma_2^TA^{-1}\Gamma_1\Gamma_1^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2 \\ &=\color{red}{\Gamma_2^T\Gamma_1}(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2-\Gamma_2^TA^{-1}\Gamma_1\color{red}{\Gamma_1^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}}\Gamma_1^TA\Gamma_2 \\ &=0-\Gamma_2^TA^{-1}\Gamma_1\Gamma_1^TA\Gamma_2=-\Gamma_2^TA^{-1}\Gamma_1\Gamma_1^TA\Gamma_2. \end{align*} Thus, \begin{align*} &\quad\ \Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_2-\Gamma_2^TA^{-1}\Gamma_2\Gamma_2^TA\Gamma_1(\Gamma_1^TA\Gamma_1)^{-1}\Gamma_1^TA\Gamma_2 \\ &=I-\Gamma_2^TA^{-1}\Gamma_1\Gamma_1^TA\Gamma_2-(-\Gamma_2^TA^{-1}\Gamma_1\Gamma_1^TA\Gamma_2)=I. \end{align*} The formula thus follows. P.S. If anyone has simpler solutions, please share them with me!! This calculation is too complicated.
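A numeric verification of the determinant identity (my own addition; dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n1 = 6, 2
A = rng.normal(size=(n, n))                    # generic, hence non-singular
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))   # random orthogonal matrix
G1, G2 = Q[:, :n1], Q[:, n1:]

lhs = np.linalg.det(G1.T @ A @ G1)
rhs = np.linalg.det(A) * np.linalg.det(G2.T @ np.linalg.inv(A) @ G2)
print(np.isclose(lhs, rhs))   # True
```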
{ "language": "en", "url": "https://math.stackexchange.com/questions/3913032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing $\frac{(x^2+1)(y^2+1)(z^2+1)}{xyz}\geq 8$ I found the following exercise reading my calculus notes: If $x,y$ and $z$ are positive real numbers, show that $$\frac{(x^2+1)(y^2+1)(z^2+1)}{xyz}\geq 8$$ I've been trying to solve it for a while. However, I have no idea how to approach it. Any help is welcome.
Note that $(x-1)^2 \geq 0 \implies x^2+1 \geq 2x $ $(y-1)^2 \geq 0 \implies y^2+1 \geq 2y $ $(z-1)^2 \geq 0 \implies z^2+1 \geq 2z $ Multiplying these three inequalities you will get $(x^2+1)(y^2+1)(z^2+1)\geq 8xyz \implies \frac{(x^2+1)(y^2+1)(z^2+1)}{xyz}\ge8$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3913377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Showing that the inflection points of $x\sin x$ lie on a certain curve Show that the inflection points of $f(x)=x\sin x$ are on the curve $$y^2(x^2+4)=4x^2.$$ I checked the graph of each function, but it seems that $f$ has infinitely many inflection points. How should I proceed?
An inflection point is where $f''$ changes sign, so we need to find where $f''(x)=0$. $f(x)=x\sin(x), f'(x)=\sin(x)+x\cos(x), f''(x)=\cos(x)+\cos(x)-x\sin(x) =2\cos(x)-x\sin(x) $. So we want to show that this satisfies $y^2(x^2+4)=4x^2$. If $f''(x)=0$ then $2\cos(x)=x\sin(x) $ or $x=2/\tan(x)$. Since $f(x)=x\sin(x)$, we want $x^2\sin^2(x)(4/\tan^2(x)+4)=4x^2 $ or $\sin^2(x)(1/\tan^2(x)+1)=1 $ or $1=\sin^2(x)(\cos^2(x)/\sin^2(x)+1) $ or $1=\cos^2(x)+\sin^2(x) $ which is true.
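A numerical check at the first few inflection points (my own sketch; the brackets were chosen where $f''$ changes sign):

```python
import numpy as np
from scipy.optimize import brentq

fpp = lambda x: 2 * np.cos(x) - x * np.sin(x)   # f'' for f(x) = x sin x

for a, b in [(1.0, 2.0), (3.0, 4.5), (6.0, 7.5)]:
    x = brentq(fpp, a, b)                       # inflection point
    y = x * np.sin(x)
    print(y**2 * (x**2 + 4) - 4 * x**2)         # ~0: the point lies on the curve
```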
{ "language": "en", "url": "https://math.stackexchange.com/questions/3913739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Value of surface Integral Let $S$ be the part of the surface $z = x^2+y^2$ which lies under the plane $z=4$. What is the value of the surface integral $\iint_S z \, dS$? What I've done so far: I have used the surface integral formula; however, I do not know if I need to convert to polar coordinates for the boundaries when integrating. I can't seem to figure out the boundaries. I have set up the integral as follows: $\int \int (x^2+y^2) \sqrt{1+4(x^2+y^2)} \, dx \, dy$
$\displaystyle \iint_S z \, dS = \iint (x^2+y^2) \sqrt{1+4(x^2+y^2)} \, \,dx \,dy \, \,$ is correct. $x^2 + y^2 = r^2 = z \leq 4 \,$ so $ \,0 \leq r \leq 2$. Your integral becomes $\displaystyle =\int_0^{2\pi} \int_0^2 r^2 \sqrt{1+4r^2} \, \, r \, dr \, d \theta = \int_0^{2\pi} \int_0^2 r^3 \sqrt{1+4r^2} \, dr \, d \theta$ Can you take it from here?
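Finishing the computation symbolically (my addition):

```python
import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
inner = sp.integrate(r**3 * sp.sqrt(1 + 4 * r**2), (r, 0, 2))
total = sp.integrate(inner, (theta, 0, 2 * sp.pi))
print(sp.simplify(total), sp.N(total))   # exact value, ~84.46
```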
{ "language": "en", "url": "https://math.stackexchange.com/questions/3914066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Normal Line to a surface What is the equation of the normal line to the surface $r(u,v)=(\cos u, \sin v+\sin u ,\cos v)$ at the point $(1,1,0)$? What I have done so far: I have found the partial derivatives of the surface separately with respect to $u$ and with respect to $v$. I know I need to multiply the two to get the $i,j,k$ coordinates. Should I sub in the point given before doing this or after? Should I change to polar coordinates? How do I sub these values in to cos and sin so that it is defined?
Some quick corrections and answers: I have found the partial derivatives of the surface separately with respect to u and respect to v. I know I need to multiply the two to get the i,j,k coordinates. There is no operation that is referred to as "multiplying vectors". Presumably you mean that you compute the cross-product $\mathbf r_u \times \mathbf r_v$. Should I sub in the point given before doing this or after? It is easiest to substitute in the correct values for $u,v$ before taking the cross-product Should I change to polar coordinates? How do I sub these values in to cos and sin so that it is defined? I have no clue what you mean by either of these questions. Here's a complete answer. We are given $\mathbf r(u,v) = (\cos u, \sin v + \sin u, \cos v)$. Presumably, we are also given a domain, i.e. the values of $u$ and $v$ that we consider. I suspect that this was given to you as $u,v \in [0,2\pi)$. Begin by computing the partial derivatives $$ \mathbf r_u(u,v) = (-\sin u, \cos u, 0), \quad \mathbf r_v(u,v) = (0,\cos v, -\sin v). $$ Now, we want to find these vectors at the point $(1,1,0)$ by plugging in the values of $u$ and $v$ associated with this point. This means that we need to find these values. That is, we need to find values of $u,v$ between $0$ and $2\pi$ for which $$ (\cos u, \sin v + \sin u, \cos v) = (0,1,1). $$ Verify that this only occurs when $u = \pi/2$ and $v = 0$. We compute $$ \mathbf r_u(\pi/2,0) = (-1,0,0), \quad \mathbf r_v(\pi/2,0) = (0,0,1). $$ To find a normal vector to the surface, we take the cross product of these vectors to get $$ (-1,0,0) \times (0,0,1) = (0,1,0). $$ To answer the question, give an equation for the line through $(0,1,1)$ in the direction of $(0,1,0)$. For instance, we can write $$ \ell(t) = (0,1,1) + (0,t,0). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3914214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Bounding integrals using Maclaurin series Prove that $$\int_0^1 \frac{ \sin x}{\sqrt{x} }\,dx < \frac23$$ I didn't see any easy way to integrate directly, so I just used the series approximation: $$ \sin(x) = \sum_{k=0}^{\infty} (-1)^k \frac{x^{2k+1}}{(2k+1)!}$$ Hence, $$\int_0^1 \frac{ \sin x}{\sqrt{x} }\,dx= \sum_{k=0}^{\infty} \frac{ (-1)^k}{(2k+1)! (2k+ \frac32)}= \frac23- \frac{1}{3! \cdot \frac72} + \text{stuff}$$ The argument I had is that the increments and decrements added by taking more and more terms keep shrinking, as higher terms are smaller in magnitude for the above series. Hence, the value of the integral is no greater than the very first term, which is $\frac23$. The problem is I feel this argument is not precise/rigorous enough. Are there any theorems/results I can refer to when stating this?
You can use that $\sin x <x$ for $x>0$, hence $\;\dfrac{\sin x}{\sqrt x}<\sqrt x$, and $\int_0^1\sqrt x\,dx=\frac23$.
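A quick numeric check (my addition): the integral is about $0.620$, comfortably below $\tfrac23$.

```python
import math
from scipy.integrate import quad

val, _ = quad(lambda x: math.sin(x) / math.sqrt(x), 0, 1)
print(val, 2 / 3)   # 0.6205... < 0.6666...
```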
{ "language": "en", "url": "https://math.stackexchange.com/questions/3914388", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the point of computing for $dy$? I can understand the significance of computing for a derivative $\frac{dy}{dx}$ because you can see the slope (change in y in relation to the change in x) at any given point of a function. By that logic, computing for $dy$ can find the change in the y axis at a given point x? What is the point of this?
Personally, I find taking the differential instead of the derivative makes it easier to keep my options open, and also makes it more amenable to multivariable equations. For instance, say you have the equation $y^2 = e^x + z$. If you take the differential, you get $2y\,dy = e^x\,dx + dz$. From here, you can solve for the derivative $\frac{dy}{dx}$ if you like. Or, you can divide both sides by $dt$ and get related rates. You can set one of the differentials to zero to get partial differentials, etc. Anyway, there's nothing wrong with the derivative, but, to me, it seems that always forcing it into a form of a derivative is messy. Additionally, there are a few places where the derivative is misleading. Take the equation $x = 0$. The derivative with respect to $x$ would lead you to say $1 = 0$, but the differential is more clear: $dx = 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3914523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Equation on cross product I have to solve the equation $\vec x\times\vec a=\vec b-\vec x$, where $\vec x, \vec a,\vec b$ are vectors and the last two are known. I have proven that $\vec a\cdot \vec x=\vec a\cdot\vec b$, but that is all. Thank you in advance! (Notation: $\vec a \vec b$ means $\vec a \cdot \vec b$.)
If $\vec{x}=\vec{y},\,\vec{x}=\vec{y}+\vec{z}$ both work, $\vec{z}\times\vec{a}=-\vec{z}$, so $\vec{z}$ is self-orthogonal and vanishes, and the solution $\vec{x}$ is unique. Now try the Ansatz $\vec{x}=A\vec{a}+B\vec{b}+C\vec{a}\times\vec{b}$. As I noted in a comment on @thecandyman's answer, you should find$$\vec{x}=\frac{(\vec{a}\cdot\vec{b})\vec{a}+\vec{b}+\vec{a}\times\vec{b}}{a^2+1}.$$
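The closed form is easy to verify numerically (my own addition):

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = rng.normal(size=3), rng.normal(size=3)

x = (np.dot(a, b) * a + b + np.cross(a, b)) / (np.dot(a, a) + 1)
print(np.allclose(np.cross(x, a), b - x))   # True
```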
{ "language": "en", "url": "https://math.stackexchange.com/questions/3914643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Roots of a polynomial must be real Say $P$ is a polynomial of degree $n$ with $n$ real roots. I have to prove that the polynomial $Q(z)=P(z+i)+P(z-i)$ has $n$ real roots. First, it happens that for a real number $t$, and because $P$ is a real polynomial, we have $$\overline{Q}(t)= \overline{Q(t)} =\overline{P(t+i)}+\overline{P(t-i)} \\ = P(\overline{t+i})+P(\overline{t-i}) \\ = P(t-i)+P(t+i) = Q(t) $$ The polynomial $Q-\bar{Q}$ has infinitely many roots (namely all the real numbers) and thus $Q$ is a real polynomial. Truth is, I don't know how to proceed further. This exercise was given at the beginning of a complex analysis course, so I don't know if I must search for an analytical argument or a simpler one. If you can just give me a hint (I want to solve the problem by myself), that would be really helpful
Hint: If $t$ is real, what is the complex conjugate of $t+i$? What does that tell you about $Q$? Hint 2: This is a heuristic hint. If you know, or assume, that a polynomial has only real roots, then you can write the polynomial as a product of linear factors, each of the form $(x-r_i)$, where $r_i$ is a real root of the polynomial: $$a_n\prod_{i=1}^n(x-r_i)$$ The heuristic hint is this. You will exclude all of the polynomials that have complex roots simply by using that form rather than the form: $$a_0 +\sum_{i=1}^n a_ix^i$$ Hint 3: If $Q(z)= 0$, with $z\in\mathbb{C}$, what can one say about $|P(z+i)|$ as it relates to $|P(z-i)|$?
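Before hunting for the proof, the claim is easy to probe numerically — a sketch assuming SymPy, with a sample cubic $P$ whose roots are all real:

```python
import sympy as sp

z = sp.symbols('z')
P = (z - 1) * (z - 2) * (z - 5)                    # all roots real
Q = sp.expand(P.subs(z, z + sp.I) + P.subs(z, z - sp.I))
print(Q)                                           # real coefficients, degree 3
print(sp.Poly(Q, z).nroots())                      # all three roots come out real
```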
{ "language": "en", "url": "https://math.stackexchange.com/questions/3914845", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof: not a perfect square Let $y$ be an integer. Prove that $$(2y-1)^2 -4$$ is not a perfect square. I found this question in a discrete math book and tried solving it by dividing the question into two parts: $$y = 2k , y = 2k + 1$$ But that got me nowhere.
$(2y-1)^2 - 4 = (2y-1)^2 - 2^2 = (2y-1+2)(2y-1-2) = (2y+1)(2y-3)$ Note that $2y+1$ and $2y-3$ are always distinct integers, and neither is zero since both are odd. If they have opposite signs, the product is negative and cannot be a square; if both are negative, apply the argument below to their absolute values, which are again odd, coprime, and differ by $4$. So it suffices to handle positive factors: proving their product cannot be a square is accomplished by showing they're coprime (no prime factors in common) and that they're not both squares at the same time. $\mathrm{gcd}(2y+1, 2y-3) =\mathrm{gcd}(2y+1, (2y+1)-(2y-3)) = \mathrm{gcd}(2y+1, 4) = 1$ (the last step just uses that $2y+1$ is odd). Hence $2y+1$ and $2y-3$ are coprime. Now note that both $2y+1$ and $2y-3$ are odd with a difference of $4$. The minimum difference between two distinct positive odd squares is $3^2 - 1^2 = 8$. So they cannot both be squares. Therefore $(2y+1)(2y-3) = (2y-1)^2 - 4$ cannot be a square.
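A brute-force scan agrees — a small sketch over a finite range of $y$:

```python
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

# No y in the tested range makes (2y-1)^2 - 4 a perfect square
print(any(is_square((2 * y - 1) ** 2 - 4) for y in range(-10_000, 10_001)))   # False
```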
{ "language": "en", "url": "https://math.stackexchange.com/questions/3914956", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 4 }
Iff statement for Modulo I have a question that I've been stuck on for a long time. I wanted to see if friends at Stack Exchange can help me: Prove that for any odd prime $p$, the congruence $x^{2}+1\equiv0(mod p)$ has a solution if and only if $x^{2}+1\equiv0 (mod p^{2})$ has a solution. So far the forwards proof is giving me some trouble. I have the following down: Let p be an odd prime. Let n be an integer such that $n^{2}+1\equiv0(mod p)$. This means that there exists an integer k such that $n^{2}+1=pk$. We will show that there exists an integer x such that $x^{2}+1\equiv0(mod p^{2})$. Let $x=qp+r$ for some integers q and r: $$(qp+r)^{2}+1\equiv0 (mod p^{2})$$ Then $$2qpr+r^{2}\equiv0(mod p^{2})$$ Setting $r=n$, we get $2qpn+n^{2}+1\equiv0(mod p^{2})$. This is as far as I have gotten. I'm stuck from this point on: can someone tell me what my next steps should be to complete the forwards proof? Thank you. Note: this is not really a "proof verification" question since the proof is not fully complete, but I feel that I'm close enough.
The obvious thing to do is apply a Hensel’s-Lemma type of argument to the situation, improving what you have by adding a suitable multiple of $p$ to get something that’s good modulo $p^2$. This is the very sound strategy that motivated you. But let me show you a sneaky trick that is special to this situation, because the $p$-adic root of your equation $X^2+1$ is a root of unity, namely the fourth root of unity $i$ (or $-i$, naturally). Your hypothesis is that $x^2+1\equiv0\pmod p$, and I’m about to show you that $(x^p)^2+1\equiv0\pmod{p^2}$. (Check it out for $x=2$, $p=5$.) The hypothesis can be rewritten $x^2=pr-1$ for some integer $r$. Now, $$ (x^p)^2=(x^2)^p=(pr-1)^p\,, $$ and you just expand the right-hand end of this using Binomial Theorem, and see that the result is in the form $p^2R-1$ for a certain integer $R$, and that says that $(x^p)^2\equiv-1\pmod{p^2}$, as desired. The really wonderful thing about this is that if you take the number you just got, namely $x^p$, and raise it to the $p$-th power, you get something that is an even better root of $X^2+1$.
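The suggested check, iterated — a sketch: starting from $x=2$, $p=5$, each $p$-th power lifts the root of $X^2+1$ at least one power of $p$ further, which is the "really wonderful thing" in the last paragraph:

```python
p, x = 5, 2
assert (x * x + 1) % p == 0            # 2^2 + 1 = 5 is divisible by 5

for k in range(2, 6):
    x = pow(x, p, p ** k)              # raise to the p-th power, reduce mod p^k
    print(k, (x * x + 1) % p ** k)     # prints 0 each time: a root of X^2+1 mod p^k
```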
{ "language": "en", "url": "https://math.stackexchange.com/questions/3915133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Change of integration variables in multi-dimensional integral I have the following two-dimensional integral at hand \begin{equation} \int_{0}^{\infty}dx_{1}\int_{0}^{x_{1}}dx_{2} F(x_{1}-x_{2},x_{2}), \end{equation} where $F(x,y): \mathbb{R}^{2} \rightarrow \mathbb{R}$ is an arbitrary scalar function on a two-dimensional space. Intuitively, by imagining the integration domain it is clear that this should be equal to \begin{equation} \int_{0}^{\infty}dx_{1}'\int_{0}^{\infty}dx_{2}' F(x_{1}',x_{2}'). \end{equation} How can I show this formally via a change of integration variables?
Since $F$ is integrated on$$\{(x_1,\,x_2)\in\Bbb R^2|0\le x_2\le x_1\}=\{(x_1,\,x_2)\in\Bbb R^2|x_1-x_2\ge0\land x_2\ge0\},$$the change of variables $x_1^\prime:=x_1-x_2,\,x_2^\prime:=x_2$ rewrites the integral as$$\int_0^\infty dx_1^\prime\int_0^\infty dx_2^\prime JF(x_1^\prime,\,x_2^\prime),$$with $J$ a Jacobian determinant we need to check is $1$. I'll leave that part to you.
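The remaining check takes two lines — a sketch assuming SymPy, computed for the inverse map $(x_1,x_2)=(x_1'+x_2',\,x_2')$ that appears in the change-of-variables formula:

```python
import sympy as sp

u, v = sp.symbols("x1p x2p")            # the new variables x1', x2'
back = sp.Matrix([u + v, v])            # inverse map: x1 = x1' + x2', x2 = x2'
J = back.jacobian(sp.Matrix([u, v]))
print(J, J.det())                       # Matrix([[1, 1], [0, 1]]), determinant 1
```

Since the determinant is $1$, no extra factor appears after the substitution.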
{ "language": "en", "url": "https://math.stackexchange.com/questions/3915299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Group of order $15$ has element of order $5$ I need to show that a group of order $15$ has an element of order $5$. I need to do that without using the Sylow theorems and their consequences, and without Cauchy's theorem. Basically I need to show that a group of order $15$ can't consist of $e$ and elements of order $3$.
Let me present a correct proof that you would never love, which is adapted from A Proof of Cauchy's theorem. Assume $G$ is a group of order $15$ with unit $e$. We consider the set $X = \{(x_1,x_2,x_3,x_4,x_5)\mid x_i \in G,\ x_1x_2x_3x_4x_5=e\}$. Note that the number of elements in $X$, which we denote as $\#X$, is $15^4$, since the first four components determine an element in $X$. Now consider $C_5 = \langle a\rangle$, the cyclic group of order $5$ with a fixed generator $a$, acting on $X$ by cyclic shifts: $a$ carries $(x_1,x_2,x_3,x_4,x_5)$ to $(x_2,x_3,x_4,x_5,x_1)$. (This is well defined: if $x_1\cdots x_5=e$, then $x_2\cdots x_5x_1=x_1^{-1}(x_1\cdots x_5)x_1=e$.) Observation 1. According to the orbit-stabilizer theorem, the orbits under the action of $C_5$ have length either $1$ or $5$. Observation 2. Orbits of length $1$ are of the form $\{(x,x,x,x,x)\}$, where $x^5 =e$. In other words, they correspond exactly to the elements $x\in G$ with $x^5=e$: the identity together with the elements of order $5$. Now by the obvious relation $$\#X = \#\{\mathrm{orbits\ of\ length}\ 1 \} + \#\{\mathrm{orbits\ of\ length}\ 5 \} \cdot 5,$$ we conclude that $5$ divides $\#\{\mathrm{orbits\ of\ length}\ 1 \}$. Finally, since $e^5=e$, the tuple $(e,e,e,e,e)$ gives one such length-$1$ orbit, so there must be at least four others; any of them yields a nontrivial $x$ with $x^5=e$, i.e. an element of order $5$.
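The bookkeeping can be watched on a concrete group — a sketch using the cyclic group $\mathbb{Z}_{15}$ written additively, where the tuple condition becomes $x_1+\cdots+x_5\equiv 0 \pmod{15}$:

```python
from itertools import product

G = range(15)
X = [x for x in product(G, repeat=5) if sum(x) % 15 == 0]
fixed = [x for x in G if (5 * x) % 15 == 0]      # length-1 orbits (x, x, x, x, x)
print(len(X) == 15 ** 4)                         # True: first four entries are free
print(fixed, len(fixed) % 5 == 0)                # [0, 3, 6, 9, 12], True
```

Here the four nonzero fixed points $3,6,9,12$ are exactly the elements of order $5$.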
{ "language": "en", "url": "https://math.stackexchange.com/questions/3915397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proving a geometric inequality $BE+CF>EF$ In the figure $D$ is the midpoint of $BC$ and $\Delta EDF$ is right triangle.Prove that $BE+CF>EF$ I tried triangle inequality $$BE+ED>BD$$ $$CD+FC>DF$$ Add:$$BE+FC>DF-ED$$ I am stuck now!
Rotate the figure by $180^\circ$ around $D$. Now the desired inequality becomes $BE + BF' > EF$. However $EF = EF'$ by congruent triangles $\triangle EDF' \cong \triangle EDF$. Finish off using triangle inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3915567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Trigonometric integral using residue complex analysis. There is this problem where I need to prove the following statement: $$\int_{0}^{\pi}\cos^{2n}\theta d\theta = \pi\frac{(2n)!}{2^{2n}(n!)^2}$$ First I used the fact that the integrand is even, so I can divide by $2$ and integrate over $(0,2\pi)$. Then I made the substitution: $$z=e^{i\theta},\ dz=iz\,d\theta\quad\text{then}\quad\cos\theta=\frac{z^2+1}{2z},$$ letting $C$ be the unit circle:$$\frac{1}{i2^{2n+1}}\oint_{C}\frac{(z^2+1)^{2n}}{z^{2n+1}}dz.$$ Now since there is a pole at $z=0$, I chose to just use the Cauchy formula for a pole of order $2n$, taking the numerator $(z^2+1)^{2n}$ for $f(z)$: $$\oint_{C}\frac{(z^2+1)^{2n}}{z^{2n+1}}dz=\frac{2\pi i}{(2n)!}\frac{d^{2n}}{dz^{2n}}f(0).$$ The $2n$ derivatives leave me $(2n)!2^{2n}$, and then inserting this into the total: $$\int_{0}^{\pi}\cos^{2n}\theta\,d\theta = \pi.$$ I tried the residue theorem, but it is just what I have been doing above. What is wrong here? The problem or my calculations?
Your problem is with the formula: $$\oint_{C}\underbrace{\frac{(z^2+1)^{2n}}{z^{2n+1}}}_{f(z)}dz=\frac{2\pi i }{(2n)!}\left(\frac{d^{2n}}{dz^{2n}}f\right)_{|z=0},$$ which should be: $$\oint_{C}\frac{(z^2+1)^{2n}}{z^{2n+1}}dz=\frac{2\pi i }{(2n)!} \left(\frac{d^{2n}}{dz^{2n}}z^{2n+1}f\right)_{|z=0},$$ due to the formula for the residue at a pole $c$ of order $n$: $$\displaystyle \operatorname {Res} (f,c)={\frac {1}{(n-1)!}}\lim _{z\to c}{\frac {d^{n-1}}{dz^{n-1}}}\left((z-c)^{n}f(z)\right).$$
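A symbolic spot-check of the corrected computation against the target formula — a sketch assuming SymPy, with $n=2$:

```python
import sympy as sp

theta = sp.symbols('theta')
n = 2
lhs = sp.integrate(sp.cos(theta) ** (2 * n), (theta, 0, sp.pi))
rhs = sp.pi * sp.factorial(2 * n) / (2 ** (2 * n) * sp.factorial(n) ** 2)
print(lhs, rhs, sp.simplify(lhs - rhs))   # 3*pi/8, 3*pi/8, 0
```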
{ "language": "en", "url": "https://math.stackexchange.com/questions/3915754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Curve $\psi(t) = (t \cos\frac{1}{t}, t\sin\frac{1}{t})$ is simple, bounded and not rectifiable Let $\psi(t)$ be a curve defined on $t\in[-1,0)$ by $\psi(t) = (t \cos\frac{1}{t}, t\sin\frac{1}{t})$ Prove that $\psi(t)$ is simple, bounded and not rectifiable. I know this curve is simple (if you take $t_1\ne t_2 \in [-1,0)$ then $\psi(t_1)\ne \psi(t_2)$), but I don't know how to prove it properly. Also, I really don't know where to start to show that it is bounded and not rectifiable. To show that it isn't rectifiable I need to prove that the length of the curve is infinite, but I can't find any inscribed polygons (chord approximations) to use. I just need some hint.
Building off of Raffaele's answer, use the fact that $$ L(\psi|_{[-1, -\epsilon]}) = \int_{-1}^{-\epsilon} \| \psi'(t) \| \, dt $$ for all $\epsilon > 0$. Then, suppose for contradiction that $L(\psi)$ is finite, i.e. $\psi$ is rectifiable. Observe $$ L(\psi) \geq L(\psi|_{[-1, -\epsilon]} ) $$ for all $\epsilon > 0$. Since $L(\psi|_{[-1, -\epsilon]})$ gets arbitrarily large as $\epsilon \to 0^+$, we obtain a contradiction. Thus, $\psi$ is not rectifiable.
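Numerically the divergence is plain — a sketch assuming SciPy; for this curve $\|\psi'(t)\|=\sqrt{1+1/t^2}$, so the length over $[-1,-\epsilon]$ grows like $\log(1/\epsilon)$:

```python
import numpy as np
from scipy.integrate import quad

speed = lambda t: np.sqrt(1 + 1 / t ** 2)   # |psi'(t)| for this curve

for eps in (1e-1, 1e-3, 1e-6):
    length, _ = quad(speed, eps, 1)         # equals L(psi on [-1, -eps]) by symmetry
    print(eps, length)                      # grows without bound as eps -> 0
```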
{ "language": "en", "url": "https://math.stackexchange.com/questions/3916027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $G$ be a finite group and $A:=\{a\in G\mid a\neq a^{-1}\}$. Prove that $|A|$ is even. Let $G$ be a finite group and $A:=\{a \in G\mid a \neq a^{-1} \}$ a set that contains all the elements of $G$ that are not equal to their respective inverses. Prove that $A$ contains an even number of elements. I have seen some posts here here about this proof, but none of which were similar to my attempt. Here's my attempt: Since $G$ is finite, then $A$ is also finite. In addition, every element of $A$ has an inverse because $G$ is a group. Now, divide $A$ in two sets called $X$ and $Y$, such that $X\subseteq A$ and $Y\subseteq A$, so that every element of $X$ has its inverse in $Y$. Let $k_{1},k_{2} \in \mathbb{N}$, such that $\left | X \right | = k_{1}$ and $\left | Y \right | = k_{2}$. Since there is no element equal to its inverse in $A$, then $ \left | A \right | = \left | X \right | + \left | Y \right |$. Moreover, $\left | X \right | = \left | Y \right |$ because $A$ only contains elements which are different from their respective inverses. So, \begin{aligned} \left | A \right | &= \left | X \right | + \left | Y \right | \\ &= k_{1} + k_{2} && \text{[$\left | X \right | = k_{1}$ and $\left | Y \right | = k_{2}$]} \\ &= k_{1} + k_{1} && \text{[$\left | X \right | = \left | Y \right |$]} \\ &= 2\cdot k_{1} \end{aligned} $2k_{1}$ is an even number, by the definition of even number. Therefore, the set $A$ contains an even number of elements. Does my proof look fine? Every help is appreciated!
You have a nice idea, but let's highlight some issues:

1. "Now, divide $A$ in two sets called $X$ and $Y$, such that $X\subseteq A$ and $Y\subseteq A$, so that every element of $X$ has its inverse in $Y$." Here are a couple of instances compatible with what you wrote: (a) $X=\emptyset$ and $Y=A$; (b) $X=A$ and $Y=A$. In order for the idea to work, you have to add: (i) that, for all $x\in A$, $x\in X$ or $x\in Y$; (ii) that $X\cap Y=\emptyset$.

2. If you add the conditions above, $\lvert X\rvert=\lvert Y\rvert$ is mainly due to the fact that $x\mapsto x^{-1}$ is an injection from $X$ to $Y$ (which is true regardless of (i) or (ii)), plus surjectivity thanks to (i). Condition (ii) and the fact that no element is equal to its inverse are irrelevant for this end.

3. On the other hand, (ii) is needed to prove that $\lvert A\rvert=\lvert X\rvert+\lvert Y\rvert$. The attentive reader could also question the existence of a pair of sets such as in (1).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3916219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
If $f(x+1/n)$ converges to a continuous function $f(x)$ uniformly on $\mathbb{R}$, is $f(x)$ necessarily uniformly continuous? Suppose $f:\mathbb{R}\mapsto\mathbb{R}$ is a continuous function. Define $f_n(x)=f(x+1/n)$ for all $n\in\mathbb{N}^+$. If $f_n(x)\to f(x)$ uniformly on $\mathbb{R}$, can we conclude that $f$ is actually uniformly continuous? If not, can you give a counterexample of $f$ to be not uniformly continuous but satisfies all above conditions? I know that if $f_n(x)$'s are all uniformly continuous and $f_n(x)\to f(x)$ uniformly, then $f(x)$ must be uniformly continuous. However, here we do not assume $f(x)$ to be uniformly continuous, and $f_n(x)$ has specific structure related to $f(x)$, so at least you cannot directly conclude they are uniformly continuous. I guess the statement above is wrong but it's hard for me to find a counter-example. Can anyone help me?
Updated answer: No, $f$ need not be uniformly continuous, as shown by the example below. The example is tailored to the sequence $\{1/n\}_{n \in \mathbb{N}}$, and does not shed light on the general question of whether the condition that $f(x + \epsilon_n) \to f(x)$ uniformly implies that $f$ is uniformly continuous, for a given sequence $\{\varepsilon_n\}$ converging to zero. First, define a sequence recursively by $a_1 = 3$ and $a_{n+1} = (a_n)!$ for $n \geq 1$. Let $h$ be the periodic function with period $1$ such that $h(x) = 2x$ for $x \in [0, 1/2]$ and $h(x) = 2-2x$ for $x \in [1/2, 1]$. In particular, $h(k) = 0$ for each integer $k$, and $h$ is continuous and Lipschitz with constant $2$. Now using the sequence, for each $n$ we define the function $g_n$ by $$g_n(x) = \begin{cases} 0 & x < n \\ \frac{1}{n} h(a_n x) & x \geq n\end{cases}$$ so $g_n$ is continuous and Lipschitz with constant $2a_n/n$. Finally, define $f(x) = \sum_{n=1}^\infty g_n(x)$, where $f$ is continuous since each $g_n$ is continuous, and all but finitely many $g_n$ vanish on $(-\infty, b)$ for each $b$. We will now show that $f_m \to f$ uniformly, where $f_m(x) = f(x + 1/m)$. Fix $m$, and let $k$ be the smallest index such that $a_k \geq m$. In particular, this means that $m$ divides $a_n$ for all $n \geq k+1$, and thus for each such $n$, $1/m$ is a multiple of $1/a_n$, so $g_n(x + 1/m) = g_n(x)$ for all $x$ not in $(n-1, n)$. Therefore for any $x$, there is at most one $n \geq k+1$ for which $g_n(x + 1/m) \neq g_n(x)$, and this $n$ necessarily satisfies $|g_n(x + 1/m) - g_n(x)| \leq \frac{1}{n} \leq \frac{1}{k}$. It follows that for sufficiently large $m$ we have \begin{align*} |f(x + 1/m) - f(x)| &= \left|\sum_{n=1}^\infty g_n(x + 1/m) - g_n(x)\right| \\ &\leq \frac{1}{k} + \sum_{n=1}^k |g_n(x + 1/m) - g_n(x)| \\ &\leq \frac{1}{k} + \frac{1}{k} + \frac{1}{k-1} + \sum_{n=1}^{k-2} |g_n(x + 1/m) - g_n(x)| \\ &\leq \frac{4}{k} + \sum_{n=1}^{k-2} \frac{2a_n}{n} \cdot \frac{1}{m} \\ &\leq \frac{4}{k} + \frac{2ka_{k-2}}{m} \\ &\leq \frac{5}{k} \end{align*} where the last inequality follows from $m \geq a_{k-1} \geq 2k^2 a_{k-2}$, which clearly holds for sufficiently large $k$. Then if we define $k_m$ to be the smallest $k$ for which $a_k \geq m$, we see that $k_m \to \infty$ as $m \to \infty$, so since $|f(x + 1/m) - f(x)| \leq 5/k_m$ uniformly for sufficiently large $m$, it follows that $f(x + 1/m)$ converges to $f(x)$ uniformly. However, $f$ is not uniformly continuous. Note that $\int_0^1 h(a_nx) \,dx = 1/2$, so for any positive integer $k$, $\int_0^1 f(k + x) \,dx = \frac{1}{2}\sum_{n=1}^k \frac{1}{n}$, and thus there is some $x_k \in (0, 1)$ with $f(k + x_k) \geq \frac{1}{2}\sum_{n=1}^k \frac{1}{n}$. On the other hand $f(k) = 0$ always. But if $f$ were uniformly continuous, there would be some $M$ for which $|f(x) - f(y)| \leq M$ whenever $|x - y| \leq 1$, which clearly does not hold, so $f$ cannot be uniformly continuous. Original answer: Too long for a comment: Whatever the answer is, it might depend on the properties of the sequence $\{1/n\}_{n \in \mathbb{N}}$. Below is an example which shows that if we instead define $f_n(x) = f(x + 3^{-n})$, then the condition that $f_n \to f$ uniformly on $\mathbb{R}$ does not imply that $f$ is uniformly continuous. I was unable to construct a similar example for the sequence $\{1/n\}_{n \in \mathbb{N}}$. Let $h(x)$ be the "triangle wave" function, such that $h(x)$ is periodic with period $1$ and has $h(x) = 2x$ for $x \in [0, 1/2]$ and $h(x) = 2 - 2x$ for $x \in [1/2, 1]$.
In particular, $h$ is continuous and has $h(0) = h(1) = 0$, $h(1/2) = 1$, and $|h(x) - h(y)| \leq 2|x - y|$ for all $x, y$. Next, for each $n$, define $g_n(x)$ so that $g_n(x) = 0$ for $x < n$ and $g_n(x) = \frac{1}{n} h(3^n x)$ for $x \geq n$. With this definition, $g_n$ is continuous, and satisfies $|g_n(x) - g_n(y)| \leq \frac{2}{n} \cdot 3^n|x - y|$ for all $x, y$. Finally, define $f(x) = \sum_{n=1}^\infty g_n(x)$. Note $f$ is continuous since each $g_n$ is continuous, and all but finitely many $g_n$ vanish on $(-\infty, a)$ for each $a$. Now we will show that $f_m \to f$ uniformly, where $f_m(x) = f(x + 3^{-m})$. For $n \geq m$, $g_n(x + 3^{-m}) = g_n(x)$ so long as $x \not \in (n - 3^{-m}, n)$, meaning $g_n(x + 3^{-m}) \neq g_n(x)$ only in $(n-1, n)$. Thus for any $x$, there is at most one $n$ with $n \geq m$ and $g_n(x + 3^{-m}) \neq g_n(x)$, and this $n$ necessarily satisfies $|g_n(x + 3^{-m}) - g_n(x)| \leq \frac{1}{n} \leq \frac{1}{m}$. Thus we have \begin{align*} |f(x + 3^{-m}) - f(x)| &= \left| \sum_{n=1}^\infty (g_n(x + 3^{-m}) - g_n(x)) \right| \\ &\leq \frac{1}{m} + \sum_{n=1}^{m-1} |g_n(x + 3^{-m}) - g_n(x)| \\ &\leq \frac{1}{m} + \sum_{n=1}^{m-1} \frac{2}{n} \cdot 3^{-(m - n)} \\ &\leq \frac{C}{m} \end{align*} for some sufficiently large $C$ not depending on $m$. Thus $|f_m - f| \leq C/m$, so $f_m \to f$ uniformly. On the other hand, $f$ is not uniformly continuous. Consider $x$ of the form $k + \frac{1}{2}$ for $k$ a positive integer. For $n \leq k$ we have $g_n(x) = \frac{1}{n}$ since $3^n x = 1/2$ modulo $1$, so $f(k + \frac{1}{2}) = \sum_{n=1}^k \frac{1}{n}$. In particular, letting $k \to \infty$ we see $f(k + \frac{1}{2}) \to \infty$, while $f(k) = 0$ always, so $f$ is not uniformly continuous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3916380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 3, "answer_id": 0 }
Numerical Analysis or Numerical Linear Algebra Book with Available Solutions? I will soon be a beginning graduate student in Mathematics, and I am planning to self-study Numerical Analysis and Numerical Linear Algebra. I know there are already reference-request questions about this, but I am looking for some more specific books, in the following categories:

* Books with difficult problems, and solutions in the back (ideal)
* Books with difficult problems
* Books with solutions in the back
* Books with separate solution manuals
* Problem books (ideally with solutions in the back, or solution manuals)

Could you please recommend me some textbooks, and tell me in which category they are? Thank you very much, any recommendations are immensely appreciated!
Schaum's Outline of Numerical Analysis (Francis Scheid) — a problem-oriented book whose exercises come with fully worked solutions, so it fits your "problem books with solutions" category.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3916499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
Find the product and coproduct of the category of Set with a given set I am learning category theory and I've found a problem: Let $S$ be a fixed set. Define a category $\textbf{Set}_S$, whose objects are the set maps $ f: X \rightarrow S$. Let $f':X' \to S$ be another object. For two objects $f: X \rightarrow S, f^{\prime}: X^{\prime} \rightarrow S, \text { a morphism } h: f \rightarrow f^{\prime} \text { is a map } h: X \rightarrow X^{\prime}$ such that $f=f^{\prime} \circ h $. The composition of two morphisms in $\textbf{Set}_{S}$ is defined in the obvious way. Describe the product and coproduct of $n$ objects $$ f_{1}: X_{1} \rightarrow S, f_{2}: X_{2} \rightarrow S, \ldots, f_{n}: X_{n} \rightarrow S.$$ My idea for the product is just the Cartesian product $X_1\times \cdots \times X_n$ with maps $T^{(i)}:X_1\times \cdots \times X_n \to X_i$. But it seems not to be true, as $f_{i}$ is defined from $X_{i}$ to $S$ instead of from $S$ to $X_i$. Also, for the coproduct, should it be the disjoint union of $X_1,\cdots,X_n$? I got the commutative diagram. By the way, I don't know where the morphism $h$ should be used.
First of all, since the objects of $\textsf{Set}_S$ are maps with $S$ as codomain, it is non-sense to say that the product in this category is the cartesian product. The product of $f_1,\dots,f_n$ is a set-map $f_1 \times \cdots \times f_n$ from some set $X$ to $S$ together with morphisms $p_i : f_1 \times \cdots \times f_n \to f_i$ (that is, $p_i$ is a set-map $X \to X_i$ for which $f_1 \times \cdots \times f_n = f_i \circ p_i$) such that for any object $f : Y \to S$ and morphisms $g_i : f \to f_i$ (that is, $g_i$ is a set-map $Y \to X_i$ for which $f = f_i \circ g_i$) there is a unique morphism $g : f \to f_1 \times \cdots \times f_n$ (again, $g$ is a set-map $Y \to X$ for which $y = (f_1 \times \cdots \times f_n) \circ g$) such that $p_ig = g_i$, where $p_ig$ denotes the composition in $\textsf{Set}_S$. So, take $X = \{(x_1,\dots,x_n) \in X_1 \times \cdots \times X_n : f_1(x_1) = \cdots = f_n(x_n)\}$, and $p_i$ the restriction on $X$ of the $i$-th coordinate projection $X_1 \times \cdots \times X_n \to X_i$ (what is $f_1 \times \cdots \times f_n$ then?).
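For finite sets the construction is easy to make concrete — a sketch in which maps are modeled as Python dicts from domain elements to $S$:

```python
def fiber_product(f, g):
    """Product of f: X -> S and g: Y -> S in Set_S (the fibered product over S)."""
    Z = {(x, y) for x in f for y in g if f[x] == g[y]}
    structure = {z: f[z[0]] for z in Z}   # the map f1 x f2 : Z -> S
    p1 = {z: z[0] for z in Z}             # projection morphism to X
    p2 = {z: z[1] for z in Z}             # projection morphism to Y
    return structure, p1, p2

f = {'a': 0, 'b': 1}
g = {'u': 1, 'v': 1}
print(fiber_product(f, g)[0])   # {('b', 'u'): 1, ('b', 'v'): 1} (order may vary)
```

Note that the structure map answers the parenthetical question: $f_1\times\cdots\times f_n$ sends a compatible tuple to the common value $f_1(x_1)=\cdots=f_n(x_n)$.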
{ "language": "en", "url": "https://math.stackexchange.com/questions/3916679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to solve this difference equation in the non-homogeneous case Let $0<\alpha<1$ and $N\in\mathbb{N}$. I would like to solve the following difference equation $$n=0: s_0 = 1+\alpha s_1 $$ $$n\in\{1,\dots,N-2\}: s_n = 1+\alpha s_{n+1}+(1-\alpha)s_n$$ $$n=N-1:s_{N-1}= 1+\alpha s_0 + (1-\alpha)s_{N-1}$$ and the boundary condition $s_N=0$. I was able to calculate the homogeneous solution, i.e. the solution to the equations: $$n=0: s_0 = 1+\alpha s_1 $$ $$n\in\{1,\dots,N-2\}: s_n = \alpha s_{n+1}+(1-\alpha)s_n$$ $$n=N-1:s_{N-1}= 1+\alpha s_0 + (1-\alpha)s_{N-1}$$ to be $s_n = \frac{1}{\alpha},0<n<N$, $s_0=0$. Is this correct, and how can I now solve this for the non-homogeneous case? To get the homogeneous solution I used that for $0<n<N$ the $s_n$ are all equal. Then I used the equation for $n-1$ and $0$ to find the particular values for $s_0$ and all other values.
$$n\in\{1,\dots,N-2\}: s_n = 1+\alpha s_{n+1}+(1-\alpha)s_n \\ \implies -1 = \alpha (s_{n+1}-s_n). $$ Therefore $s_1, s_2, \ldots, s_{N-1}$ is an arithmetic sequence. Can you end it now?
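Setting the system up numerically shows the arithmetic-progression structure at a glance — a sketch assuming NumPy, with sample values $\alpha=0.3$, $N=6$:

```python
import numpy as np

alpha, N = 0.3, 6
A = np.zeros((N, N)); b = np.ones(N)             # unknowns s_0, ..., s_{N-1}
A[0, 0], A[0, 1] = 1, -alpha                     # s_0 = 1 + alpha*s_1
for n in range(1, N - 1):                        # alpha*s_n - alpha*s_{n+1} = 1
    A[n, n], A[n, n + 1] = alpha, -alpha
A[N - 1, N - 1], A[N - 1, 0] = alpha, -alpha     # alpha*s_{N-1} - alpha*s_0 = 1
s = np.linalg.solve(A, b)
print(s)
print(np.diff(s[1:]))                            # constant steps -1/alpha, as derived
```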
{ "language": "en", "url": "https://math.stackexchange.com/questions/3916863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to compute the infinite product? Given the following infinite product: $$\lim_{n\to \infty} \frac{1}{2}\frac{3}{4}\cdots \frac{2^n-1}{2^n}.$$ It is easy to see that above infinite product is convergent by the convergence of the following series: $$\sum_{n=1}^{\infty}\ln \left(1-\frac{1}{2^n}\right)$$. But I do not know how to calculate the infinite product. I would appreciate it if someone can give any suggestions and comments.
You could perhaps simplify the denominator of the infinite product, since $\prod_{n=1}^{m}2^n=2^{\sum_{n=1}^{m}n}=2^{\frac{m(m+1)}{2}}$, which would mean that $\lim_{m \rightarrow \infty} \prod_{n=1}^{m}\frac{2^n-1}{2^n}=\lim_{m \rightarrow \infty}\frac{1}{2^{\frac{m(m+1)}{2}}} \prod_{n=1}^{m}(2^n-1)$
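It is worth noting that no elementary closed form is known — the limit is the $q$-Pochhammer value $(\tfrac12;\tfrac12)_\infty$ — but it is easy to evaluate numerically; a sketch:

```python
prod = 1.0
for n in range(1, 200):          # factors beyond n ~ 60 no longer change a float
    prod *= 1 - 0.5 ** n
print(prod)                      # 0.2887880951...
```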
{ "language": "en", "url": "https://math.stackexchange.com/questions/3916969", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_0^3\sqrt\frac{x^3}{3-x}dx$. Evaluate $\int_0^3\sqrt\frac{x^3}{3-x}dx$. We can solve it by putting $x=3\sin^2\theta$. But it's not really an intuitive method. Is there any other way to solve it? Or, is there some pattern to this question that tells us instinctively that the substitution is $x=3\sin^2\theta$?
Or, is there some pattern to this question that tells us instinctively that the substitution is $x=3\sin^2\theta$? To the familiar eye, yes, there is. Note powers of $x,\,3-x$ are important here. Whenever two quantities in the integral you should care about have a nonzero constant sum, it's worth leveraging $\sin^2\theta+\cos^2\theta=1$, or possibly another sum-of-squares identity such as $\tanh^2\phi+\operatorname{sech}^2\phi=1$, but with the right transformation these are basically the same idea. Similarly, if two quantities you have to care about have a nonzero constant difference, exploit $\sec^2\theta-\tan^2\theta=1$ or, by the same logic, maybe $\cosh^2\phi-\sinh^2\phi=1$. (Mind you, out of circular vs hyperbolic you often find one more directly helpful with whichever problem is at hand, e.g. I'd recommend the circular option here.) Once you've switched to trigonometry, further tips often write the rest of the calculation for you. Actually, in this case the latter brings you back round to no trigonometry at all; the real trick is $x=3y$. But diving into trigonometry often helps you spot a useful rational transformation, which may well be much less trivial in other cases.
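For reference, the value itself: after $x=3y$ the integral becomes $9\int_0^1 y^{3/2}(1-y)^{-1/2}\,dy=9\,B(5/2,1/2)=\frac{27\pi}{8}$ — checkable as a sketch assuming SymPy and mpmath are available:

```python
import sympy as sp
import mpmath

# Beta-function route after the substitution x = 3y
exact = 9 * sp.gamma(sp.Rational(5, 2)) * sp.gamma(sp.Rational(1, 2)) / sp.gamma(3)
print(exact)                                                     # 27*pi/8

# Direct numeric cross-check of the original integral
print(mpmath.quad(lambda t: (t ** 3 / (3 - t)) ** 0.5, [0, 3]))  # ~10.6029
```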
{ "language": "en", "url": "https://math.stackexchange.com/questions/3917092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to evaluate these integrals in hydraulic? I am studying about hydraulics, there are integrals that I don't know how to evaluate. Here is the problem: Equation of velocity is: $$U_r=K(R^2-r^2)$$ Where $U_r$ is velocity at radius $r$ and $R$ is radius of the pipe and $K$ is constant. Find energy and momentum coefficients ($\alpha$ and $\beta$). We have: $\alpha=\cfrac{\int v^3dA}{V^3.A}$ and $\beta=\cfrac{\int v^2.dA}{V^2A}$ (where $v$ is velocity and $V$ is average velocity). I draw this: In this figure $(R^2-r^2)$ lies on the shaded area. so $A=\pi(R^2-r^2)$. but I don't know how to calculate these integrals to find $\alpha$ and $\beta$ EDIT: For $\alpha$ I tried it: $V=\frac{Q}{A}$ where $Q$ is flow rate: $$\alpha=\cfrac{\int v^3dA}{V^3.A}=\cfrac{\int v^3dA}{\dfrac{Q^3}{A^2}}=\frac{A^2}{Q^3}\int v^3 dA=\frac{A^2}{Q^3}\int(K(R^2-r^2))^3dA=\cfrac{K^3 (\pi R^2)^2}{Q^3}\times\int (R^2-r^2)^3dA $$ Using polar system: $dA=rdrd\theta$ But I am not sure what can I write for $(R^2-r^2)^3$ and what is the integral bound? I don't know how to change it to polar system integral.
$\displaystyle \int v^3dA = 2\pi K^3 \int_r^{R} (R^2-\rho^2)^3 \rho \, d\rho$ $\rho(R^2 - \rho^2)^3 = R^6 \rho - \rho^7 - 3\rho^3R^4 + 3 \rho^5R^2$ So your integral becomes $ \displaystyle 2\pi K^3 \int_r^{R} (R^6 \rho - \rho^7 - 3\rho^3R^4 + 3 \rho^5R^2) d\rho$ Can you take it from here?
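Carried to the end over the full cross-section ($\rho$ from $0$ to $R$), the integrals give the standard laminar-flow values $\alpha=2$ and $\beta=4/3$ — a sketch assuming SymPy (if the shaded annulus from $r$ to $R$ in your figure is really intended, only the integration limits change):

```python
import sympy as sp

rho, R, K = sp.symbols('rho R K', positive=True)
v = K * (R ** 2 - rho ** 2)
A = sp.pi * R ** 2
V = sp.integrate(v * 2 * sp.pi * rho, (rho, 0, R)) / A              # mean velocity
alpha = sp.integrate(v ** 3 * 2 * sp.pi * rho, (rho, 0, R)) / (V ** 3 * A)
beta = sp.integrate(v ** 2 * 2 * sp.pi * rho, (rho, 0, R)) / (V ** 2 * A)
print(sp.simplify(alpha), sp.simplify(beta))                        # 2, 4/3
```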
{ "language": "en", "url": "https://math.stackexchange.com/questions/3917247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can you find the limit of this: $\lim\limits_{x \to \; 0} \frac{x^2}{\sqrt{1+x\sin(x)} \; - \; \sqrt{\cos(x)}}$? $\lim\limits_{x \to \; 0} \frac{x^2}{\sqrt{1+x\sin(x)} \; - \; \sqrt{\cos(x)}}$ I would start with expanding it with $* \frac{\sqrt{1+x\sin(x)} \; + \; \sqrt{\cos(x)}}{\sqrt{1+x\sin(x)} \; + \; \sqrt{\cos(x)}} \;$ but I don't know how to progress from there. I can't use L'Hospital's rule. I also have 3 other exercises that are like this but if I can see one solved, I think I will be able to do the other ones as well. Thanks
hint After multiplying by the conjugate as you have done, the denominator becomes $$1+x\sin(x)-\cos(x)=$$ $$2\sin(\frac x2)\Bigl(x\cos(\frac x2)+\sin(\frac x2)\Bigr)$$ The function can be written as $$\frac{x}{2\sin(\frac x2)}\frac{1}{\cos(\frac x2)+\frac 1x\sin(\frac x2)}$$ $$×(\sqrt{1+x\sin(x)}+\sqrt{\cos(x)})$$ the limit is then $$1×\frac{1}{1+\frac 12}×2=\frac 43$$
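A symbolic cross-check of the value — a sketch assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')
expr = x ** 2 / (sp.sqrt(1 + x * sp.sin(x)) - sp.sqrt(sp.cos(x)))
print(sp.limit(expr, x, 0))   # 4/3
```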
{ "language": "en", "url": "https://math.stackexchange.com/questions/3917398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
$\lim_{n \to \infty} \left(\frac{1}{\sqrt{n^2 + 1}} + \frac{1}{\sqrt{n^2 + 2}} +\cdots+ \frac{1}{\sqrt{n^2 +2n + 1}}\right)$, is my solution wrong? I needed to calculate: $$\lim_{n \to \infty} \left(\frac{1}{\sqrt{n^2 + 1}} + \frac{1}{\sqrt{n^2 + 2}} +\cdots+ \frac{1}{\sqrt{n^2 +2n + 1}}\right)$$ First of all I saw that it won't be possible to do that in any traditional way and actually calculate the limit, because of the form of the expression. I mean - it's a sum with squares of $n$, so I can't use the Stolz lemma that easily. But I thought that the answer is probably $0$, because every element of the sum tends to $0$ as $n \to \infty$, and the limit of a sum equals the sum of the limits. So I just went with that and decided to prove it using induction. My base is: $$\lim_{n \to \infty} \frac{1}{\sqrt{n^2 + 1}} = 0$$ My assumption: $$\lim_{n \to \infty} \left(\frac{1}{\sqrt{n^2 + 1}} + \frac{1}{\sqrt{n^2 + 2}} +...+ \frac{1}{\sqrt{n^2 +2n}}\right) = 0$$ My induction: $$\lim_{n \to \infty} \left(\frac{1}{\sqrt{n^2 + 1}} + \frac{1}{\sqrt{n^2 + 2}} +\cdots+ \frac{1}{\sqrt{n^2 +2n + 1}}\right) = 0 + \lim_{n \to \infty} \frac{1}{\sqrt{n^2 +2n + 1}} = 0$$ So the limit is: $$\lim_{n \to \infty} \left(\frac{1}{\sqrt{n^2 + 1}} + \frac{1}{\sqrt{n^2 + 2}} +\cdots+ \frac{1}{\sqrt{n^2 +2n + 1}}\right) = 0$$ But then my grader at university said that at first sight it looks totally wrong, but he actually needs to think about it a bit. So here is my question - is that wrong? How is it wrong? OK, thank you for your answers. I thought I could solve the exercise this way because not long ago I asked a very similar question on this forum: Find the limit of such a sequence defined by recurrence. Because of your answers I think that the problem is actually that in this case I am dealing with a SUM of elements, am I right (or is the answer that I got in the other case wrong)?
Using generalized harmonic numbers $$a_n=\sum_{k=1}^{2n+1}\frac{1}{\sqrt{n^2+k}}=H_{n^2+2 n+1}^{\left(\frac{1}{2}\right)}-H_{n^2}^{\left(\frac{1}{2}\right)}$$ Using the asymptotic expansion $$H_p^{\left(\frac{1}{2}\right)}=2 \sqrt{p}+\zeta \left(\frac{1}{2}\right)+\frac{1}{2 \sqrt{p}}+O\left(p^{-3/2}\right)$$ and continuing with Taylor series $$a_n=2-\frac{1}{2 n^2}+O\left(\frac{1}{n^3}\right)$$ In particular, the limit is $2$, not $0$.
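The expansion is easy to eyeball numerically — a sketch: the sum tends to $2$, and $n^2(2-a_n)\to\tfrac12$:

```python
import math

def a(n):
    return sum(1 / math.sqrt(n * n + k) for k in range(1, 2 * n + 2))

for n in (10, 100, 1000):
    print(n, a(n), n * n * (2 - a(n)))   # second column -> 2, third -> 0.5
```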
{ "language": "en", "url": "https://math.stackexchange.com/questions/3917535", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Conditional distribution from order statistics I have the following data generating process: * *I have two Uniform RVs $X$ and $Y$ *I sample $n$ times from $X$ and $m$ times from $Y$, and then pick the max. Call this $Z$. *I'm trying to find the conditional distribution of $Z$ for each class $X$ and $Y$ separately. That is, the distribution of $Z$, given $Z$ came from $X$, and from $Y$. $$X\sim \textit{Unif}[0,1]$$ $$Y\sim \textit{Unif}[0,b], \space b>1$$ $$Z \sim Max\{X_1,X_2...X_n,Y_1,Y_2...Y_m\}$$ This is what I have so far. * *I have the overall distribution of $Z$. $Z$ can be written as $Z\sim Max(Max(X), Max(Y))$. Then, \begin{align} F_Z(z) &= Pr(Z \leq z) &\\ &=Pr(Max(X) \leq z)P(Max(Y) \leq z) \\ &=F_{Max(X)}(z)F_{Max(Y)}(z) \end{align} $F_{Max(X)}(x) = x^{n}$ and $F_{Max(Y)}(y) = (y/b)^{m}$ This gives $$F_Z(z) = \begin{cases} (z/b)^m & z \geq 1 \\ (z/b)^m z^n & otherwise \end{cases} $$ Note that $z$ is bounded by $[0,b]$. I checked this with simulation, and I got the same plots. *I also have $Pr(Max(X)>Max(Y))$, i.e probability that X is picked. \begin{align} Pr(Max(X)>Max(Y)) &= Pr(Max(Y)=x)Pr(Max(X)>x) \\ &= \int{f_{Max(X)}(1-F_{Max(Y)})dx} \\ &= \frac{b^{-m}n}{n+m} \end{align} I also checked this with simulation, and it looks right. With this information, I'm trying to find the conditional distribution of Z given that it's from class $X$ and class$Y$ separately. Here are the empirical pdfs of Z, where the different colors are $X$ and $Y$. This is what I'm trying to find analytically. Thanks in advance for the help!
Turns out, I was overcomplicating this. All I had to figure out was $$f(Max(X)\mid Max(X)>Max(Y))$$ and $$f(Max(Y)\mid Max(Y)>Max(X))$$ $$f_{X\mid X>Y}(x)=\frac{f_X(x)F_Y(x)}{P(X>Y)}$$ see Conditional expectation of X given X is greater than Y
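A Monte Carlo check of the ingredients — a sketch assuming NumPy, e.g. verifying $P(\max X>\max Y)=b^{-m}\,n/(n+m)$ for sample values $n=3$, $m=2$, $b=1.5$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, b, trials = 3, 2, 1.5, 200_000
max_x = rng.uniform(0, 1, (trials, n)).max(axis=1)
max_y = rng.uniform(0, b, (trials, m)).max(axis=1)
print((max_x > max_y).mean(), b ** (-m) * n / (n + m))   # ~0.2667 both
```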
{ "language": "en", "url": "https://math.stackexchange.com/questions/3917752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is the image of a continuous functions always a topological manifold Let $f:\mathbb{R}^n\rightarrow \mathbb{R}^m$ be continuous. Then, for some sufficiently large $M\geq m$, can we always guarantee that $f(\mathbb{R}^n)$ is locally-homeomorphic to some $\mathbb{R}^N$, where $N\leq n$? My intuition is no, since I can imagine a curve crossing itself; but I can't write down an explicit example.
Another counterexample with a different issue: Let $$f:\mathbb R^2\to\mathbb R^2,(x,y)\mapsto\begin{cases}(x,y)&x<0\\ (x,(1-x)y)&x\in[0,1]\\(x,0)&x>1\end{cases}$$ For $x<1$, the image is locally homeomorphic to $\mathbb R^2$. For $x>1$, it is locally homeomorphic to $\mathbb R$. At the boundary of both regions, $x=1$, the image can't be locally homeomorphic to either.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3917909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How many codes can be formed from three letters (A-Z) followed by two digits ($0$-$9$)? Problem: A code consists of 3 letters followed by 2 digits. How many codes can we create if a)repetition is allowed b)repetition is not allowed? Attempt: The code should looks like this LLLDD (L for letter and D for digit) so if repetition is allowed, the number of possible codes is $$(26^3)(10)^2$$ and if repetition is not allowed the number of possible codes is $$26 \mathrm{ P }3 \times 10 \mathrm{ P }2.$$ My concern: How does $(26^3)(10)^2$ or $26 \mathrm{ P }3 \times 10 \mathrm{ P }2$ guarantee that the code has the correct pattern (LLLDD) and not other patterns like LDLDL, LLDDL, it's what permutations do after all (order is important), right?
Alternative explanation. how does $(26^3)(10)^2$ or ${}26 \mathrm{ P }3 \times 10 \mathrm{ P }2$ guarantee that the code has the correct pattern (LLLDD) and not other patterns like LDLDL , LLDDL , it's what permutations do after all (order is important), right? The formulas do not guarantee the pattern. However, the formulas are still correct. The original question is to enumerate the number of ways that the pattern LLLDD can occur. The formulas are accurate, under the assumption that your attention is confined to the LLLDD pattern. The fact that the formulas may also be correct for other patterns is totally irrelevant.
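For the record, the two counts — a sketch using the standard library:

```python
import math

with_repetition = 26 ** 3 * 10 ** 2                        # LLLDD, repeats allowed
without_repetition = math.perm(26, 3) * math.perm(10, 2)   # repeats forbidden
print(with_repetition, without_repetition)                 # 1757600 1404000
```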
{ "language": "en", "url": "https://math.stackexchange.com/questions/3918089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How long does it take to change $90\%$ of water in an aquarium? I am thinking about buying an aquarium and thought about a system that automatically changes water. This led me to think about the following problem: Given a $500$$\ell$ tank and a system that automatically changes $1$$\ell$ of the water in the tank with fresh water every day, how many days does it take to replace $90\%$ of the water that's in the tank on day $0$? We assume the old water and the fresh water are perfectly mixed every time fresh water is added. My idea is that on the first day we simply replace $1/500$ of the water. On the second day again we replace $1/500$ of the water in the tank. However, $1/500$ has already been replaced on the first day and we have to subtract it. So on the second day we replace $\frac{1}{500}-\frac{\frac{1}{500}}{500}=\frac{1}{500}-\frac{1}{500^2}$. Thus, in total we have replaced $\frac{2}{500}-\frac{1}{500^2}$ on the second day. On the third we do the same thing, replace $1/500$ and subtract the water we have already replaced, obtaining $\frac{1}{500}-\frac{\frac{2}{500}-\frac{1}{500^2}}{500}=\frac{1}{500}-\frac{2-\frac{1}{500}}{500^2}$ of fresh water on the third day. However I can't find the pattern, but I think this is a series, isn't it? So all we would have to do is find an explicit form of the series and find the finite series that equals $450/500$, right? I think I was posed a similar question in my calculus course once and thought that there's an easy solution to the problem. Is my reasoning above correct? How can I find the solution to my problem?
After the first replacement, $0.998=\frac{499}{500}$ of the original water remains. After the second, $0.998^2$ remains, and in general after $k$ replacements, $0.998^{k}$ of the original water remains. We want the smallest integer $k$ such that $0.998^{k}<0.1$, which is $k=1151$: it takes $1151$ days.
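A direct simulation confirms the count — a sketch:

```python
fraction_old = 1.0
days = 0
while fraction_old >= 0.1:
    fraction_old *= 499 / 500    # each day, 1 L of the well-mixed 500 L is replaced
    days += 1
print(days)                      # 1151
```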
{ "language": "en", "url": "https://math.stackexchange.com/questions/3918248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculating the limit of the following series I want to prove that $$\lim_{x\to1^-}(1-x)\sum_{n=1}^{\infty}(-1)^{n-1}\frac{nx^n}{1-x^{2n}} = \frac14$$ So far I tried to manipulate the series for instance using $$\sum_{n=1}^{\infty}(-1)^{n-1}\frac{nx^n}{1-x^{2n}} = -\sum_{n=1}^{\infty}(-1)^{n}nx^n\sum_{m=0}^{\infty}\left(x^{2n}\right)^m$$ since $x < 1$. Interchanging the two sums (not sure if allowed) I obtained, assuming I did not make mistakes, the sum $$\sum_{m=0}^{\infty}\frac{x^{2m+1}}{(1+x^{2m+1})^2}$$ I am unable to continue from this point. Perhaps my work isn't actually useful at all. Can you help me?
The sum in question belongs more properly to the theory of theta functions and elliptic integrals and this is an approach which makes use of standard results from this theory. Let's put $q=-x$ so that the expression under limit becomes $$-(1+q)\sum_{n=1}^{\infty} \frac{nq^n} {1-q^{2n}}$$ The sum above can be written as $$\sum_{n\geq 1}\left(\frac{nq^{n}}{1-q^{n}}-\frac{nq^{2n}}{1-q^{2n}}\right)$$ which in terms of Ramanujan function $$P(q)=1-24\sum_{n\geq 1}\frac{nq^n}{1-q^n} $$ becomes $$\frac{P(q^2)-P(q)}{24}$$ and it follows that the original expression under limit is $$(1-x)\cdot\frac{P(-x)-P(x^2)}{24}$$ Let's replace this variable $x$ again by $q$ (for convention) and the limit we seek is $$\lim_{q\to 1^-}(1-q)\cdot\frac{P(-q)-P(q^2)} {24}$$ Treating $q$ as the nome we use the following standard results \begin{align} q&=\exp\left(-\pi\frac{K'} {K} \right)\notag\\ P(-q) &=\left(\frac{2K}{\pi}\right)^2\left(\frac{6E}{K}+4k^2-5\right)\notag\\ P(q^2) &=\left(\frac{2K}{\pi}\right)^2\left(\frac{3E}{K}+k^2-2\right)\notag \end{align} (for proofs see this post) and the expression under limit equals $$(1-e^{-\pi K'/K}) \frac{1}{8}\left(\frac{2K}{\pi}\right)^2\left(\frac{E} {K} - k'^2\right)$$ where moduli $k, k'$ and elliptic integrals $K, K'$ correspond to nome $q$. As $q\to 1^-$ we have $k\to 1^-,k'\to 0^+$ and $K\to\infty, K'\to\pi/2,E\to 1$ and the desired limit equals the limit of $$\frac{\pi K'} {K} \cdot\frac{K} {2\pi^2}\cdot(E-k'^2K)$$ This works out to be $1/4$ if we can show that $k'^2K\to 0$. This is an easy consequence of the asymptotic $$K=\log(4/k')+o(1)$$ as $k\to 1^-$ (for proof see this post).
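A numerical check of the claimed value — a sketch; convergence is slow as $x\to1^-$, so many terms are needed:

```python
def S(x, terms=200_000):
    s = 0.0
    for n in range(1, terms):
        s += (-1) ** (n - 1) * n * x ** n / (1 - x ** (2 * n))
    return (1 - x) * s

for x in (0.9, 0.99, 0.999):
    print(x, S(x))   # tends to 0.25
```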
{ "language": "en", "url": "https://math.stackexchange.com/questions/3918405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Example of a non-continuous functional on C[0,1] with L2-Norm Do you know some simple example of a non-continuous functional on $C[0,1]$ with the $L^2$-norm? Surely we can build such an example using a Hamel basis, but I ask myself if there is some simpler example. On $C^1[0,1]$ we could simply take differentiation. But on $C[0,1]$? Thanks in advance.
If you take $\phi(f)=f(1)$, this is linear but unbounded, hence not continuous. Define $$ f_n(t)=\begin{cases} 0,&\ 0\leq t\leq 1-\tfrac1{n^2}\\[0.3cm] nt+\tfrac1n-n,&\ t>1-\tfrac1{n^2} \end{cases} $$ Then $\phi(f_n)=\tfrac1n$, while $$ \|f_n\|_2^2=\int_{1-\tfrac1{n^2}}^1\left(nt+\tfrac1n-n\right)^2dt=\tfrac1{3n^4}. $$ Hence $$\frac{\phi(f_n)}{\|f_n\|_2}=\frac{1/n}{1/(\sqrt3\,n^2)}=\sqrt3\,n\to\infty,$$ so $\phi$ is unbounded; equivalently, $g_n:=n^2f_n$ satisfies $\|g_n\|_2=\tfrac1{\sqrt3}$ for all $n$ while $\phi(g_n)=n\to\infty$.
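The norm computation is a one-liner to verify — a sketch assuming SymPy:

```python
import sympy as sp

n, t = sp.symbols('n t', positive=True)
f = n * t + 1 / n - n
print(sp.simplify(sp.integrate(f ** 2, (t, 1 - 1 / n ** 2, 1))))   # 1/(3*n**4)
```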
{ "language": "en", "url": "https://math.stackexchange.com/questions/3918573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Group of order $p^2q^n$, $q\neq 3$, $p,q$ prime. Let $p,q$ be prime numbers, $q \neq 3$, and $G$ be a group with order $p^2q^n$. Prove that $G$ has a normal subgroup with order $q^n$. Prove that $G$ is a semidirect product of a group with order $q^n$ and a group with order $p^2$. My attempt: For $n \geq 2$, $q^n>p^2$ and we have a unique $q$-Sylow group; thus, we have our normal subgroup. I'm having some problems showing that the subgroup is normal when $n=1$, but I think I can handle this. For the semidirect product I don't know what I can do, any hint?
This is not true when $p \gt q$: take $G=S_3 \times C_6$, so a group of order $36=3^2 \cdot 2^2$, hence $p=3, q=2=n$. A Sylow $2$-subgroup of $G$ is not normal. But it is true when $p \lt q$. Let $Q \in Syl_q(G)$, then $n_q(G) \in \{1,p,p^2\}$. If $n_q(G)=p$, then $p \equiv 1$ mod $q$, hence $q \leq p-1 \lt p \lt q$, which is absurd. If $n_q(G)=p^2$, then $q \mid p^2-1=(p-1)(p+1)$, hence by the previous argument $q \mid p+1$, but $p \lt q$, so this can only happen when $q=p+1$, meaning $p=2$ and $q=3$, contradicting $q \neq 3$. So $Q \lhd G$, and if $P \in Syl_p(G)$, $PQ$ is actually a subgroup, but then $G=PQ$ and $P \cap Q=1$, so $G$ is a semi-direct product $P \ltimes Q$.
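The counterexample can be verified by brute force — a sketch: model $S_3\times C_6$ as pairs, take one subgroup of order $4$, and count its conjugates.

```python
from itertools import permutations

S3 = list(permutations(range(3)))
G = [(p, a) for p in S3 for a in range(6)]           # S_3 x C_6, order 36

def mul(g, h):
    (p, a), (q, b) = g, h
    return (tuple(p[q[i]] for i in range(3)), (a + b) % 6)

def inv(g):
    p, a = g
    return (tuple(p.index(i) for i in range(3)), (-a) % 6)

e, t = (0, 1, 2), (1, 0, 2)                          # identity and a transposition
P = frozenset({(e, 0), (t, 0), (e, 3), (t, 3)})      # a Sylow 2-subgroup
conjugates = {frozenset(mul(mul(g, x), inv(g)) for x in P) for g in G}
print(len(conjugates))                               # 3, so P is not normal
```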
{ "language": "en", "url": "https://math.stackexchange.com/questions/3918772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivation of bivariate Gaussian copula density The multivariate Gaussian copula density, derived here, is $$c(u_1,\ldots,u_n;\Sigma)=|\Sigma|^{-\frac{1}{2}}\exp\!\left(-\frac{1}{2}x^{\top}(\Sigma^{-1}-I)x\right)$$ where $\Sigma$ is the covariance matrix, and $x=[\Phi^{-1}(u_1),\ldots,\Phi^{-1}(u_n)]^{\top}$. The bivariate Gaussian copula density, based on the pair-wise correlation coefficient $\rho$, is $$ c\left(u_{1}, u_{2} ; \rho\right)=\frac{1}{\sqrt{1-\rho^{2}}} \exp \left\{-\frac{\rho^{2}\left(x_{1}^{2}+x_{2}^{2}\right)-2 \rho x_{1} x_{2}}{2\left(1-\rho^{2}\right)}\right\} $$ What is the derivation of the second formula from the first?
Note that with standard normal marginals $$\Sigma=\left[\begin{array}{cc} 1 & \rho \\ \rho & 1 \end{array}\right],\,\, |\Sigma| = 1 - \rho^2$$ and $$\Sigma^{-1}= \frac{1}{1- \rho^2}\left[\begin{array}{cc} 1 & -\rho \\ -\rho & 1 \end{array}\right], \,\, \Sigma^{-1}-I= \frac{1}{1- \rho^2}\left[\begin{array}{cc} \rho^2 & -\rho \\ -\rho & \rho^2 \end{array}\right]$$ Hence, $$- \frac{1}{2}\mathbf{x}^{\top}(\Sigma^{-1}-I)\mathbf{x} = \frac{-1}{2(1- \rho^2)}\left[\begin{array}{cc} x_1 & x_2 \end{array}\right]\left[\begin{array}{cc} \rho^2 & -\rho \\ -\rho & \rho^2 \end{array}\right]\left[\begin{array}{cc} x_1 \\ x_2 \end{array}\right] \\= \frac{-1}{2(1- \rho^2)}\left[\begin{array}{cc} x_1 & x_2 \end{array}\right]\left[\begin{array}{cc} \rho^2x_1 -\rho x_2 \\ -\rho x_1 + \rho^2 x_2 \end{array}\right] \\= -\frac{\rho^2 (x_1^2 +x_2^2)- 2\rho x_1 x_2 }{2(1-\rho^2)},$$ and, thus, $$|\Sigma|^{-\frac{1}{2}}\exp\!\left(-\frac{1}{2}\mathbf{x}^{\top}(\Sigma^{-1}-I)\mathbf{x}\right) = \frac{1}{\sqrt{1- \rho^2}} \exp \left\{-\frac{\rho^2 (x_1^2 +x_2^2)- 2\rho x_1 x_2 }{2(1-\rho^2)} \right\}$$
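A consistency check of the two expressions — a sketch assuming NumPy; evaluate both at a few random points:

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.6
Sigma = np.array([[1.0, rho], [rho, 1.0]])
x = rng.normal(size=(5, 2))                 # rows play the role of (x1, x2)

quad_form = np.einsum('ij,jk,ik->i', x, np.linalg.inv(Sigma) - np.eye(2), x)
general = np.linalg.det(Sigma) ** -0.5 * np.exp(-0.5 * quad_form)
bivariate = np.exp(-(rho ** 2 * (x ** 2).sum(axis=1) - 2 * rho * x[:, 0] * x[:, 1])
                   / (2 * (1 - rho ** 2))) / np.sqrt(1 - rho ** 2)
print(np.allclose(general, bivariate))      # True
```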
{ "language": "en", "url": "https://math.stackexchange.com/questions/3918915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to determine asymptotics for $\sum_{ab^2 < x} ab^2$ by summing separately over $a$ and $b$ I do not understand how to get asymptotics for the double sum $$\sum_{ab^2 < x} ab^2$$ If I sum over $a$ first, I get $$\sum_{b^2<x} b^2 \sum_{a < x/b^2} a = \frac12 \sum_{b^2<x} b^2 \frac{x^2}{b^4} \sim \frac{\zeta(2)}{2} x^2.$$ If I sum over $b$ first, I get $$\sum_{a<x} a \sum_{b < \sqrt{x/a}} b^2 = \frac13 \sum_{a<x} a \left( \frac{x}{a} \right)^{3/2} \sim \frac{2}{3} x^2. $$ What is happening. Am I doing something forbidden in switching summations this way? I suppose it is a possible divergence problem, so I do I get precise asymptotics for this sum?
TLDR: Your second method uses two asymptotics. The asymptotic approximation for the inner summation throws out "lower order terms" which end up mattering for outer summation. For integers $N \ge 0$, we have $$\displaystyle\sum_{k = 1}^{N}k^2 = \dfrac{N(N+1)(2N+1)}{6}.$$ Combining this with the inequality $y-1 \le \lceil y \rceil-1 \le y$, we get $$\dfrac{(y-1)y(2y-1)}{6} \le \displaystyle\sum_{k = 1}^{\lceil y \rceil-1}k^2 \le \dfrac{y(y+1)(2y+1)}{6},$$ i.e. $$\dfrac{1}{3}y^3-\dfrac{1}{2}y^2+\dfrac{1}{6}y \le \displaystyle\sum_{k < y}k^2 \le \dfrac{1}{3}y^3+\dfrac{1}{2}y^2+\dfrac{1}{6}y$$ for any real number $y \ge 1$. Applying this yields $$\dfrac{x^{3/2}}{3a^{3/2}}-\dfrac{x}{2a}+\dfrac{x^{1/2}}{6a^{1/2}} \le \sum_{b < \sqrt{x/a}}b^2 \le \dfrac{x^{3/2}}{3a^{3/2}}+\dfrac{x}{2a}+\dfrac{x^{1/2}}{6a^{1/2}},$$ and thus, $$\sum_{a < x}\left[\dfrac{x^{3/2}}{3a^{1/2}}-\dfrac{x}{2}+\dfrac{x^{1/2}a^{1/2}}{6}\right] \le \sum_{a < x}a\sum_{b < \sqrt{x/a}}b^2 \le \sum_{a < x}\left[\dfrac{x^{3/2}}{3a^{1/2}}+\dfrac{x}{2}+\dfrac{x^{1/2}a^{1/2}}{6}\right].$$ However, note that $\displaystyle\sum_{a < x}\dfrac{x^{3/2}}{3a^{1/2}} \sim \dfrac{2}{3}x^2$, $\displaystyle\sum_{a < x}\dfrac{x}{2} \sim \dfrac{1}{2}x^2$, and $\displaystyle\sum_{a < x}\dfrac{x^{1/2}a^{1/2}}{6} \sim \dfrac{1}{9}x^2$. So the lower order terms that the approximation $\displaystyle\sum_{k < y}k^2 \sim \dfrac{1}{3}y^3$ drops end up contributing a significant amount to the double sum. Hence, all you know from the second method is that the double sum is asymptotically bounded between $\dfrac{5}{18}x^2$ and $\dfrac{23}{18}x^2$. We can analyze the first method in the same manner. Using $$\dfrac{1}{2}y^2-\dfrac{1}{2}y \le \sum_{k < y}k \le \dfrac{1}{2}y^2+\dfrac{1}{2}y,$$ we get $$\dfrac{x^2}{2b^4}-\dfrac{x}{2b^2} \le \sum_{a < x/b^2}a \le \dfrac{x^2}{2b^4}+\dfrac{x}{2b^2},$$ and thus, $$\sum_{b < \sqrt{x}}\left[\dfrac{x^2}{2b^2}-\dfrac{x}{2}\right] \le \sum_{b < \sqrt{x}}b^2\sum_{a < x/b^2}a \le \sum_{b < \sqrt{x}}\left[\dfrac{x^2}{2b^2}+\dfrac{x}{2}\right].$$ As you already noted, $\displaystyle\sum_{b < \sqrt{x}}\dfrac{x^2}{2b^2} \sim \dfrac{\zeta(2)}{2}x^2$. The other term scales like $\displaystyle\sum_{b < \sqrt{x}}\dfrac{x}{2} \sim \dfrac{1}{2}x^{3/2} = o(x^2)$. Hence, the double sum does indeed scale like $\dfrac{\zeta(2)}{2}x^2$.
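The constant is easy to confirm numerically — a sketch: evaluate the double sum exactly for moderate $x$ (summing over $a$ in closed form for each $b$) and compare with $\tfrac{\zeta(2)}{2}\approx 0.82247$:

```python
def double_sum(x):
    total, b = 0, 1
    while b * b < x:
        m = (x - 1) // (b * b)            # number of a >= 1 with a*b^2 < x
        total += b * b * m * (m + 1) // 2
        b += 1
    return total

for x in (10 ** 4, 10 ** 6):
    print(x, double_sum(x) / x ** 2)      # approaches zeta(2)/2 = 0.8224...
```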
{ "language": "en", "url": "https://math.stackexchange.com/questions/3919131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Difficulty in solving a question regarding the sum of two arithmetic sequences. Hey guys, I am having a problem solving a question regarding the sum of two linear sequences. Using the equation $S_n=\frac{n}{2}[2a+(n-1)d] = 15-5n$, whenever I do try to solve it, I get $25-5n=\frac{30-10n}{n}$. Further simplifying this into a quadratic equation, I get $n=1$ and $n=6$. However, this is misleading and instead leaves me even more confused. Can someone point out what I am doing wrong, thanks.
I think you have misunderstood the question. It just wants you to add together two terms: the $n$th term of A plus the $n$th term of B. You do not need to work out the sum of either sequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3919312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to rigorously define $G^{|A|}$? where $G$ is a group and $A$ is an infinite set. It is clear to me that $G^n:=G\times G\times ...\times G$. Every member of $G^n$ looks like $(g_1,g_2,...g_n)$. But how to rigorously define $G^{|A|}$, i.e. how to explicitly write elements of $G^{|A|}$?
$G^A$ is the set of functions $x:A\to G$, endowed with componentwise product and inverses. So to say, you gain access to the $a$-th component of the element $x$ by evaluating it, as a function, at $a$: id est $x_a:=x(a)$. More generally, you can define $\prod\limits_{i\in I}G_i$ as the set of functions $x:I\to\bigcup\limits_{i\in I}G_i$ such that $x(i)\in G_i$ for all $i\in I$. This works fine as a categorical product of groups. When $G$ is a group, there is another (not the only other) useful notion, sometimes denoted as $G^{(A)}$, which is $$G^{(A)}:=\left\{x\in G^A\,:\, \operatorname{card}\{a\in A\,:\, x_a\ne e\}\text{ is finite}\right\}$$
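For a finite index set the definition is easy to play with — a sketch where elements of $G^A$ are dicts $A\to G$ and the operation is componentwise, here with $G=\mathbb{Z}_5$:

```python
A = ('p', 'q', 'r')                              # the index set

def mul(x, y, mod=5):
    return {a: (x[a] + y[a]) % mod for a in A}   # componentwise group operation

x = {'p': 1, 'q': 2, 'r': 3}                     # an element of G^A, i.e. a function A -> G
y = {'p': 4, 'q': 0, 'r': 2}
print(mul(x, y))                                 # {'p': 0, 'q': 2, 'r': 0}
```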
{ "language": "en", "url": "https://math.stackexchange.com/questions/3919514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Limit of a product with growing number of factors I'm trying to solve the following limit: $$L=\lim_{n\to\infty}\frac{(n^2-1)(n^2-2)\cdots(n^2-n)}{(n^2+1)(n^2+3)\cdots(n^2+2n-1)}=\lim_{n\to\infty}\prod_{k=1}^n\frac{n^2-k}{n^2+2k-1}$$ Since originally, the limit yields an $1^\infty$ indeterminate expression, my first idea was taking logarithms: $$\log L = \lim_{n\to\infty}\log\left(\prod_{k=1}^n\frac{n^2-k}{n^2+2k-1}\right)=\\ =\lim_{n\to\infty}\sum_{k=1}^{n}\log\left(\frac{n^2-k}{n^2+2k-1}\right)=\\=\lim_{n\to\infty}\sum_{k=1}^n\log\left(\frac{1-\frac{k}{n^2}}{1+\frac{2k-1}{n^2}}\right)=\\ =\lim_{n\to\infty}\sum_{k=1}^n\left(\log\left(1-\frac{k}{n^2}\right)-\log\left(1+\frac{2k-1}{n^2}\right)\right)=\\=\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\left(\log\left(1-\frac{k}{n^2}\right)^{n^2}-\log\left(1+\frac{2k-1}{n^2}\right)^{n^2}\right)=\\ =\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\left(\log\left(1+\frac{1}{-\frac{n^2}{k}}\right)^{-\frac{n^2}{k}(-k)}-\log\left(1+\frac{1}{\frac{n^2}{2k-1}}\right)^{\frac{n^2}{2k-1}(2k-1)}\right)$$ Now if I could do the limit of the terms inside the sum I'd have: $$\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\left(\log\left(1+\frac{1}{-\frac{n^2}{k}}\right)^{-\frac{n^2}{k}(-k)}-\log\left(1+\frac{1}{\frac{n^2}{2k-1}}\right)^{\frac{n^2}{2k-1}(2k-1)}\right)=\\=\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\left(\log e^{-k}-\log e^{2k-1}\right)=\\ =\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\left(-k-2k+1\right)=\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\left(-3k+1\right)$$ and then the limit would be trivial. The question is: is that allowed? Why? If not, how could I proceed? Thanks!
Let's take the first part $$\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\log\left(1+\frac{1}{-\frac{n^2}{k}}\right)^{-\frac{n^2}{k}(-k)} = -\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n k\log\left(1+\frac{1}{-\frac{n^2}{k}}\right)^{-\frac{n^2}{k}}$$ as example. Note that $-\frac{n^2}{k}$ tends to $\infty$ for any $k=1,2,\cdots,n$, so we can use Taylor's expansion $$\left(1+\frac{1}{x}\right)^x = e - \frac{e}{2x} + O\left(\frac{1}{x^2}\right) \quad (x \to \infty)$$ to obtain $$\begin{aligned} \lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n\log\left(1+\frac{1}{-\frac{n^2}{k}}\right)^{-\frac{n^2}{k}(-k)} &= -\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n k\log\left[e+\frac{ek}{2n^2}+O\left(\frac{k^2}{n^4}\right)\right] \\ &= -\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n \left[k + k\log\left(1+\frac{k}{2n^2}+O\left(\frac{k^2}{n^4}\right)\right)\right] \\ &= -\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n \left[k + k\left(\frac{k}{2n^2}+O\left(\frac{k^2}{n^4}\right)\right)\right] \\ &= -\lim_{n\to\infty}\frac{1}{n^2}\sum_{k=1}^n k\left[1+O\left(\frac{1}{n}\right)\right], \end{aligned}$$ where we have used $\frac{k}{n^2}=O\left(\frac{1}{n}\right)$. So the term $O(1/n)$ can be discarded. Note: An alternative solution is given here: Lemma. Given $f(0) = 0$ and that finite $f'(0)$ exists, let $$a_n = \sum_{k=1}^n f\left(\frac{k}{n^2}\right) = f\left(\frac{1}{n^2}\right) + f\left(\frac{2}{n^2}\right) + \cdots + f\left(\frac{n}{n^2}\right),$$ then $$\lim_{n\to\infty} a_n = \frac{f'(0)}{2}.$$ Proof. First notice that $$f'(0) = \lim_{x \to 0} \frac{f(x)-f(0)}{x-0} = \lim_{x \to 0} \frac{f(x)}{x},$$ and using the definition of limit, $\forall \varepsilon > 0$, $\exists \delta > 0$, s.t. $\forall 0 < x < \delta$, $$f'(0) - \varepsilon < \frac{f(x)}{x} < f'(0) + \varepsilon.$$ Particularly for $x > 0$, we have $(f'(0)-\varepsilon)x < f(x) < (f'(0)+\varepsilon)x$. Now pick $N \in \mathbb{N}$, s.t. $N > \dfrac{1}{\delta}$. For $n > N$, $$\frac{k}{n^2} \leq \frac{1}{n} < \frac{1}{N} < \delta, \quad k=1,2,\cdots,n,$$ and hence $$(f'(0) - \varepsilon) \cdot \frac{k}{n^2} < f\left(\frac{k}{n^2}\right) < (f'(0) + \varepsilon) \cdot \frac{k}{n^2}, \quad k=1,2,\cdots,n.$$ Taking summation over $k$ gives $$\frac{f'(0)-\varepsilon}{2} \cdot \frac{n+1}{n} < \sum_{k=1}^n f\left(\frac{k}{n^2}\right) < \frac{f'(0)+\varepsilon}{2} \cdot \frac{n+1}{n}.$$ Since $\dfrac{n+1}{n} \to 1$ as $n \to \infty$, then $\exists N_1 \in \mathbb{N}$, s.t. $\forall n > N_1$, $$\frac{f'(0)-\varepsilon}{2} - \frac{\varepsilon}{2} < \sum_{k=1}^n f\left(\frac{k}{n^2}\right) < \frac{f'(0)+\varepsilon}{2} + \frac{\varepsilon}{2},$$ that is, $\dfrac{1}{2}f'(0)-\varepsilon < \sum\limits_{k=1}^n f\left(\frac{k}{n^2}\right) < \dfrac{1}{2}f'(0)+\varepsilon$. Therefore we have $$\lim_{n\to\infty} a_n = \lim_{n\to\infty} \sum\limits_{k=1}^n f\left(\frac{k}{n^2}\right) = \frac{f'(0)}{2}.$$ Return to your problem, we have $$I = \lim_{n\to\infty}\prod_{k=1}^n\frac{n^2-k}{n^2+2k-1} = \lim_{n\to\infty}\prod_{k=1}^n\frac{n^2-k}{n^2+2k} = \lim_{n\to\infty}\prod_{k=1}^n \frac{1-\frac{k}{n^2}}{1+2\frac{k}{n^2}} = \lim_{n\to\infty} e^{\sum_{k=1}^n f(k/n^2)},$$ where $f(x) = \ln\left(\frac{1-x}{1+2x}\right)$ and the second equality comes from $$1 \leftarrow \left(1-\frac{1}{n^2}\right)^n \leq \prod_{k=1}^n \frac{n^2+2k-1}{n^2+2k} = \prod_{k=1}^n \left(1-\frac{1}{n^2+2k}\right) \leq 1.$$ Therefore $I = e^{\frac{f'(0)}{2}} = \boxed{e^{-\frac{3}{2}}}$.
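A quick numerical confirmation of $e^{-3/2}\approx 0.223130$ — a sketch:

```python
import math

def partial_product(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= (n * n - k) / (n * n + 2 * k - 1)
    return p

for n in (10, 100, 5000):
    print(n, partial_product(n), math.exp(-1.5))
```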
{ "language": "en", "url": "https://math.stackexchange.com/questions/3919681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If the sets defined by a formula are all trivial then what can we conclude from that? Let $\mathcal L$ be a language, $n$ a positive integer and $\phi=\phi(x_1,\dots,x_n)$ an $\mathcal L$-formula having $x_1,\dots,x_n$ as free variables. Then for every $\mathcal L$-structure $\mathfrak A$ there is a set $\phi^{\mathfrak A}$ defined by $\phi$ as: $$\phi^{\mathfrak A}:=\{(a_1,\dots,a_n)\in |\mathfrak A|^n: \mathfrak A\vDash\phi[a_1,\dots,a_n]\}$$where $|\mathfrak A|$ denotes the domain of structure $\mathfrak A$. Now let it be that for every $\mathcal L$-structure $\mathfrak A$ we have: $$\phi^{\mathfrak A}=\varnothing\text{ or }\phi^{\mathfrak A}=|\mathfrak A|^n$$ Can we conclude from this that one of the following statements must be true? * *$\phi^{\mathfrak A}=\varnothing$ for every $\mathcal L$-structure $\mathfrak A$. *$\phi^{\mathfrak A}=|\mathfrak A|^n$ for every $\mathcal L$-structure $\mathfrak A$.
No, we cannot; let $\phi'$ be any $\mathcal{L}$-sentence (so $\phi'$ contains no free variables), and let $\phi$ be the $\mathcal{L}$-formula $\phi'\wedge\bigwedge_{i=1}^nx_i=x_i$. (Note that $\phi$ does indeed contain every $x_i$ as a free variable, albeit in a vacuous way.) Then if $\mathfrak{A}\models\phi'$ we have $\phi^\mathfrak{A}=|\mathfrak{A}|^n$, and if $\mathfrak{A}\models\neg\phi'$ we have $\phi^\mathfrak{A}=\emptyset$. Since every $\mathcal{L}$-sentence either holds or does not hold in an $\mathcal{L}$-structure, one of these two cases must apply. Thus, choosing $\phi'$ so that there are some structures in which it holds and some structures in which it does not provides a counterexample.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3919844", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limit with parameter $p$ Find $ p $ that will make the limit convergent: $$ \lim _ { x \to \infty } \left( \left( p + \left( \frac { x ^ 2 + 2 } { x ^ 2 - x + 1 } \right) ^ { 2 x + 1 } \right) \cdot \sin \left( \frac { x \pi } 2 \right) \right) $$ I know how to solve most of the basic limits. In this problem I don't know if I'm allowed to separate it and make $3$ or $4$ small ones. I thought about writing it as $p\sin(\cdots) + \sin(\cdots)\cdot(\text{big bracket})$ and then separating again, because $p$ is a parameter and can be pulled in front of the limit; I wanted to separate the second one so I can find the big limit (it's $e^2$ or something, but that doesn't matter).
Here's one way to do it $$\lim\limits_{x\to\infty}\left(\frac{x^2+2}{x^2-x+1}\right)^{2x+1}$$ $$=\lim\limits_{x\to\infty}\exp\left(\ln\left(\left(\frac{x^2+2}{x^2-x+1}\right)^{2x+1}\right)\right)$$ $$=\exp\left(\lim\limits_{x\to\infty}\ln\left(\left(\frac{x^2+2}{x^2-x+1}\right)^{2x+1}\right)\right)$$ By series expansion, we have $$\exp\left(\lim\limits_{x\to\infty}\left[2+\frac4x+\frac1{6x^2}+O\left(\frac1{x^3}\right)\right]\right)$$ $$=\exp\left(2\right)$$ I think you can take it from here.
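If you want to double-check the value $e^2$ numerically before formalizing, a small illustrative Python sketch does the job:

```python
import math

def g(x):
    # the inner expression whose limit is e^2
    return ((x**2 + 2) / (x**2 - x + 1)) ** (2 * x + 1)

for x in (10.0, 100.0, 1000.0, 10000.0):
    print(x, g(x))
print("e^2 =", math.exp(2))
```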
{ "language": "en", "url": "https://math.stackexchange.com/questions/3919994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Self-Adjoint of Volterra Operator I have the Volterra operator defined as follows: $$ A : L_{2}([0,1]) \to L_{2}([0,1]) $$ $$ A f(x) = \int_{0}^{x} f(t) dt $$ I was asked to show that: $$ A^{*}A f(x) = \int_{0}^{1} (1- \max(x,t)) f(t) dt $$ This is my attempt, using Fubini's theorem and a change of variables. I can show that: $$ <f, Ag> = \int_{0}^{1} f(x) \int_{0}^{x} g(y) dy dx = \int_{0}^{1} g(y) \int_{y}^{1} f(x) dx dy = <A^{*}f, g > $$ So, I get the adjoint operator $ A^{*} f(x) = \int_{x}^{1} f(t) dt$. But I do not see how to get the term $\max(x,t)$. From this, I only see that: $$ A^{*}A f(x) = \int_{x}^{1} \int_{0}^{y} f(z) dz dy $$ How can I show that $A^{*}A f(x)$ has the form stated in the question?
$ \langle A^{*}f,g \rangle= \langle f, Ag \rangle=\int_0^{1} g(y)\int_y^{1} f(x)dxdy$. We can write this as $ \langle A^{*}f,g \rangle= \langle h, g \rangle$ where $h(y)=\int_y^{1} f(x)dx$. Since this is true for all $g$ it follows that $A^{*}f=h$. Now $A^{*}Af(x)=\int_x^{1} Af(t)dt=\int_x^{1} \int_0^{t} f(s)ds dt$. Switch the order of integration to finish: for fixed $s \in [0,1]$ the variable $t$ ranges over $[\max(x,s),1]$, so the double integral equals $\int_0^1 (1-\max(x,s))f(s)\,ds$.
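A discrete sanity check of the identity $A^{*}Af(x)=\int_0^1(1-\max(x,t))f(t)\,dt$ using midpoint quadrature in NumPy (an illustrative sketch; the matrices are ad-hoc discretizations, not a standard API):

```python
import numpy as np

N = 2000
x = (np.arange(N) + 0.5) / N            # midpoints of [0, 1]
w = 1.0 / N                              # quadrature weight
f = np.sin(3 * x) + x ** 2               # an arbitrary test function

A  = np.tril(np.ones((N, N))) * w        # (Af)(x_i)  ~ integral of f over [0, x_i]
As = np.triu(np.ones((N, N))) * w        # (A*f)(x_i) ~ integral of f over [x_i, 1]

lhs = As @ (A @ f)
K = 1.0 - np.maximum.outer(x, x)         # kernel 1 - max(x, t)
rhs = (K * w) @ f

print(np.max(np.abs(lhs - rhs)))         # O(1/N), so roughly 1e-3 here
```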
{ "language": "en", "url": "https://math.stackexchange.com/questions/3920176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exercise 12.9 book of Manetti (Topology) I am trying to prove the following exercise: Let $p:E\rightarrow X$ be a covering space. Prove that for every connected $Y$ and every continuous map $f:Y\rightarrow X$, $$f^*p:\{(e,y)\in E\times Y|p(e)=f(y)\}\rightarrow Y,\, f^*p(e,y)=y,$$ is a covering space. I think I have already proved that $f^*p$ is continuous and surjective, but I still can't find admissible open sets.
I will write $E\times_X Y$ for $\{(e, y) \in E\times Y \mid p(e)=f(y)\}$. Let $f^*p : E\times_X Y \to Y$ be the projection on the $Y$ coordinate. Let $y \in Y$ be given, and let $U$ be an evenly covered neighborhood of $f(y)$ in $X$. Then $f^{-1}(U)$ is a neighborhood of $y$. I claim that this $f^{-1}(U)$ is an evenly covered neighborhood of $y$. Note, $(f^*p)^{-1}(f^{-1}(U)) = \{ (e, y) \in E\times_X Y \mid y \in f^{-1}(U) \} = \{(e, y) \in E \times Y \mid p(e) = f(y) \in U\}$. But since $U$ is evenly covered by $p$, $p^{-1}(U) = \bigsqcup_\alpha V_\alpha$ such that $p|_{V_\alpha}$ is a homeomorphism onto $U$. So, $$(f^*p)^{-1}(f^{-1}(U)) = \bigsqcup_\alpha \{ (e, y) \in E\times Y \mid p(e) = f(y), e \in V_\alpha \} =: \bigsqcup_\alpha W_\alpha$$ Note that $W_\alpha$ is open in $E\times_X Y$ since $W_\alpha = (V_\alpha\times Y) \cap (E\times_X Y)$. Now, you just need to check that $(f^*p)|_{W_\alpha}$ is a homeomorphism onto $f^{-1}(U)$, which should be easy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3920335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show Sequence of Functions Converges Uniformly I'm working on a Real Analysis book and am stuck on the following problem: Let $f$ be continuous on $\mathbb R$ and let $$f_n(x)=\frac1n\sum_{k=0}^{n-1}f\left(x+\frac kn\right).$$ Prove that $f_n(x)$ converges uniformly to a limit on every finite interval $[a,b]$. How would I rigorously solve this?
The candidate for the limit is $g(x) := \displaystyle{\int_x^{x+1}} f(t)dt$. For any $x$, you can bound the error between the integral and its approximation with the Riemann sums in $f_n$: \begin{align*} |g(x)-f_n(x)| & = \Big| \sum \limits_{k=0}^{n-1} \displaystyle{\int_{x+\frac{k}{n}}^{x+\frac{k+1}{n}}} f(t) - f\big(x+\frac{k}{n}\big)dt\Big| \\ & \le \sum \limits_{k=0}^{n-1} \displaystyle{\int_{x+\frac{k}{n}}^{x+\frac{k+1}{n}}} \big|f(t) - f\big(x+\frac{k}{n}\big)\big|dt \end{align*} Note that when $x \in [a,b]$, all the points involved lie in $[a,b+1]$. $f$ is continuous on $[a,b+1]$, so it is uniformly continuous on that interval. For $\varepsilon>0$, we know there exists $\eta>0$ such that $x,y \in [a,b+1], |x-y|<\eta \implies |f(x)-f(y)| \le \varepsilon$. Now if you take $n$ large enough to have $\frac{1}{n} \le \eta$, for $k \in [\![0,n-1]\!]$ and $t \in \big[x+\frac{k}{n}, x+\frac{k+1}{n}\big]$, $\big|t-x-\frac{k}{n}\big| \le \eta$ so $\big|f(t)-f\big(x+\frac{k}{n}\big)\big| \le \varepsilon$. Hence $|g(x)-f_n(x)| \le \sum \limits_{k=0}^{n-1} \displaystyle{\int_{x+\frac{k}{n}}^{x+\frac{k+1}{n}}} \varepsilon dt = \varepsilon$, for all $x \in [a,b]$, for $n$ large enough.
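To see the uniform convergence in action, here is a small numerical experiment (an illustrative Python sketch; the test function is arbitrary):

```python
import numpy as np

def f(x):
    return np.sin(5 * x) * np.exp(-x ** 2)

def f_n(x, n):
    k = np.arange(n)
    return np.mean(f(x + k / n))         # (1/n) * sum_{k=0}^{n-1} f(x + k/n)

def g(x, m=100000):
    t = x + (np.arange(m) + 0.5) / m     # midpoint rule for the integral over [x, x+1]
    return np.mean(f(t))

xs = np.linspace(-2.0, 2.0, 41)
for n in (10, 100, 1000):
    err = max(abs(f_n(x, n) - g(x)) for x in xs)
    print(n, err)                        # the sup-error over [a, b] shrinks with n
```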
{ "language": "en", "url": "https://math.stackexchange.com/questions/3920539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to deduce $\forall x \forall y \exists z(Gxz \land Gzy)$ The task is to deduce this: $\forall x \forall y \exists z(Gxz \land Gzy)$ From this: $ \forall x (Gxx) $ And this: $ \forall x \forall y ( x \neq y \to \exists z(Gxz \land Gzy)) $ Using this instance of the Law of Indiscernibility of Identicals: $ a = b \to (Gaa \equiv Gab) $ I tried to proceed by reductio, assuming as an additional premise this: $ \neg \forall x \forall y \exists z (Gxz \land Gzy) $ And from this premise (and its instances), the instance of the Law, and the second premise above (and its instances), I was able to deduce this: $\exists z (Gaz \land Gzb) $ This is an instance of what I am trying to prove, but I won't be able to generalize appropriately to reach my conclusion, nor does it follow from the correct premises: I don't know how to use the first premise $ \forall x (Gxx) $ effectively, and furthermore my reductio is not doing what it should. Thanks for any help. I really appreciate it! This is from Goldfarb's Deductive Logic, IV5b.
There are two cases: $x = y$ and $x \not= y$. If $x = y$, then $\exists z (G x z \land G z y)$ is $\exists z (G x z \land G z x)$, which holds with $z = x$ because $\forall x (G x x)$. If $x \not= y$, then $\exists z (G x z \land G z y)$ follows from $\forall x \forall y (x \not= y \implies \exists z (G x z \land G z y))$. In both cases, $\exists z (G x z \land G z y)$ follows as required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3920747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\chi _M + \chi _N$ measurable iff $M$ and $N$ are measurable Let $(\mathbb {R}^d, \mathcal {M}, \mu )$ be a measure space with $\mathcal {E} \subset \mathcal {M}$, and let $M$ and $N$ be disjoint subsets of $\mathbb {R}^d$. $\chi _M + \chi _N$ and $\chi _M + 2 \chi _N$ should be measurable iff $M$ and $N$ are measurable, right? ($\mathcal {E}$ is the set of all finite unions of intervals.)
Yes. If $\chi_M + \chi_N$ and $\chi_M + 2\chi_N$ are both measurable, then $(\chi_M + 2\chi_N) - (\chi_M + \chi_N) = \chi_N$ is also measurable, which then implies $\chi_M$ is measurable as well. So $M$ and $N$ are measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3920932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A counterexample which shows that $\Omega(f|_{\Omega}) \neq \Omega(f)$ I'm looking for an example: a counterexample which shows that $\Omega(f|_{\Omega}) \neq \Omega(f)$. $(X,f)$ is a dynamical system if $f:X \to X$ is a homeomorphism and $X$ is a compact space. $\Omega(f)$ is the set of non-wandering points, i.e. all $x$ such that for all open $U$ containing $x$ and for all $N>0$ there exists some $n>N$ such that $f^n(U) \cap U \ne \emptyset$. I think we have to take $f$ on a circle but I don't know how to compute the non-wandering set.
My previous answer had an error, so I will try differently. We will work in polar coordinates. Our space will be the union of the sets $C_{n} = \{(\frac{n}{n+1},\frac{\pi}{k}) : k \in \{-n,\ldots,-1,1,\ldots,n \}\}$ for $n \in \mathbb{N}^*$, together with $C_{\infty} = \{(1,\frac{\pi}{k}) : k \in \mathbb{Z}^* \} \cup \{(1,0)\}$ and $D=\{(n,0) : n \geq 2 \}$, with the topology induced by $\mathbb{C}$. Here is a picture of the situation: the blue points are in the sets $C_k$ and the red ones in $C_\infty$.

We will consider the function $f$ defined by:

On $D$:

* $f(n,0)=(n-1,0)$ if $n \ne 2$
* $f(2,0)=(1/2,\pi) \in C_{1}$

On $C_n$:

* $f(\frac{n}{n+1},\frac{\pi}{k})=(\frac{n}{n+1},\frac{\pi}{k+1})$ if $k \ne n$ and $k \ne -1$
* $f(\frac{n}{n+1},\pi)=(\frac{n}{n+1},\frac{\pi}{2})$ if $k=-1$
* $f(\frac{n}{n+1},\frac{\pi}{n})=(\frac{n+1}{n+2},\frac{-\pi}{n}) \in C_{n+1}$ if $k = n$

And on $C_{\infty}$:

* $f(1,\frac{\pi}{k})=(1,\frac{\pi}{k+1})$ if $k \ne -1$
* $f(1,\pi)=(1,\frac{\pi}{2})$ if $k =-1$
* $f(1,0) = (1,0)$

We have that $f$ is bijective. Dynamically speaking, $\underset{k}{\cup} C_k \cup D$ and $C_{\infty}$ are two bi-infinite shifts, and $(1,0)$ is a fixed point. Let's show that $f$ is a homeomorphism. $f$ is continuous with continuous inverse at every point of $\underset{k}{\cup} C_k \cup D$ because the topology there is discrete. Now on $C_{\infty}$, take a sequence $(\frac{n_i}{n_i+1},\frac{\pi}{k_i}) \underset{i \to \infty}{\to} (1,\frac{\pi}{k})$. We must have $n_i \underset{i \to \infty}{\to} \infty $ and $k_i \underset{i \to \infty}{\to} k$, so for every $i>I$ with $I$ large enough, $k_i \ne n_i $ and $f(\frac{n_i}{n_i+1},\frac{\pi}{k_i}) = (\frac{n_i}{n_i +1},\frac{\pi}{k_i +1}) \underset{i \to \infty}{\to} (1,\frac{\pi}{k +1})= f(1,\frac{\pi}{k})$. We avoid the difficulty at $k =-1$ because $-\pi = \pi$ in polar coordinates. If $(\frac{n_i}{n_i+1},\frac{\pi}{k_i}) \underset{i \to \infty}{\to} (1,0)$ then we could have $k_i = n_i$ for infinitely many $i$, but then $f(\frac{n_i}{n_i+1},\frac{\pi}{k_i}) = (\frac{n_i+1}{n_i + 2},\frac{-\pi}{n_i}) \underset{i \to \infty}{\to} (1,0)= f(1,0)$. So $f$ is continuous. The same can be done for $f^{-1}$: it "just" rotates counterclockwise and can sometimes "gain" a level in $C_k$.

Now, $\underset{k}{\cup} C_k \cup D$ is not contained in $\Omega(f)$: the topology there is discrete, so you can take a singleton, which won't intersect itself after iteration of $f$. On the other hand, $C_{\infty}$ is contained in $\Omega(f)$: indeed, for every $(1,\frac{\pi}{k})$, every open set containing it must contain $ \{ (\frac{n}{n+1},\frac{\pi}{k}) : n \geq N\}$ for some $N$ large enough, and this subset intersects its iterates infinitely many times. So $\Omega(f)=C_{\infty}$; but on $C_{\infty}$, $f$ is just a shift plus a fixed point, so $\Omega(f_{| \Omega}) = \{ (1,0) \}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3921090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is there always a homotopy taking a point to another in a connected manifold? Let $M$ be a connected topological $n$-manifold. Note this implies $M$ is path-connected. Let $a,b \in M$. Must there always exist a continuous $H:M \times [0,1] \to M$ such that $H(m,0)=m$ for all $m \in M$, and $H(a,1)=b$? We can note that this is true for $\mathbb{R}^n$, by considering $(x,t) \mapsto x+t(b-a)$, and it is also true for $S^1$ by considering $(e^{i\theta}, t)\mapsto e^{i(\theta+t(\theta_b-\theta_a))}$ where $a=e^{i\theta_{a}}, b=e^{i\theta_b}$.
Let’s prove something stronger: for each such manifold $M$ (without boundary), the path-connected component of the identity in the group of homeomorphisms of $M$ acts transitively on $M$. Since it’s a continuous group action and $M$ is connected, it’s enough to show that any orbit is open. By considering homotopies that are the identity outside a ball, we can assume $M=B^n$, $a,b$ being interior points, and require that the homotopy must be the identity on the boundary. But in this case, you can consider the flow of a compactly supported vector field in the right direction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3921252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that any group of order $27$ is not simple I'm stuck on this problem from my abstract algebra course: Prove that if $G$ is a group with $|G|=27$, then $G$ is not simple. First I noticed $|G|=27=3^3$. I thought I can use a statement I saw on the text book: * *Given $H\leq G$ with $G$ finite and $|G:H|=p$ being $p$ the minimum prime number that divides $|G|$, then $H\unlhd G.$ This would prove that $G$ has a non-trivial normal subgroup, and that would mean $G$ is not simple. But in order to use this I need to prove first that my group $G$ has some subgroup of order $3^2$ (If I'm not wrong, this isn't trivial). So if my reasoning is right, I need to prove that any group of order $27$ has some subgroup of order $3^2$, and my problem will be solved. Am I right? How can I prove this last statement? Any help will be appreciated, thanks in advance.
A well-known fact is that any $p$-group of order $p^n$ contains subgroups of order $p^i$ for each $i\le n$. This is more than enough, since we can then take a subgroup of index $p$, which, by another well-known fact must be normal. Thus no $p$-group with $n\gt1$ is simple. There's an induction proof of the first fact that relies on the additional well-known fact that $p$-groups have nontrivial center (which can be seen by looking at the class equation). Couple this with the fourth well-known fact that the center is always a normal subgroup, and we have a different proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3921446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that $\lim_{n \to \infty} \left(1+\frac{a_n}{n}\right)^n= e^a$ when $a_n \to a$ I have seen a proof that shows $$\lim_{n\to\infty} \left(1+\frac{x}{n}\right)^n = e^x$$ by looking at the Taylor series expansion of $\ln(1+x)$ at $x=0$. To prove a theorem, my textbook uses the fact $$\lim_{n \to \infty} \left(1+\frac{a_n}{n}\right)^n = e^a$$ when $a_n \to a$. How can I prove this?
You can try the following: \begin{align*} \lim_{n\to \infty}\left( 1+\frac{a_n}{n} \right)^n &= \lim_{n\to \infty}\left( 1+\frac{a}{n} + \frac{a_n-a}{n} \right)^n \\ &= \lim_{n\to \infty}\left( 1+\frac{a}{n} \right)^n + \lim_{n\to \infty}\frac{a_n-a}{n}\cdot(\textit{something with a finite limit}) \\ &= e^a + 0 \\ &= e^a \end{align*} To formalize the second line you can use the binomial expansion.
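For intuition, here is a numerical check of the conclusion (illustrative Python; the sequence $a_n = a + 1/\sqrt{n}$ is just one example converging to $a$):

```python
import math

a = 1.7

def a_n(n):
    return a + 1.0 / math.sqrt(n)    # some sequence with a_n -> a

for n in (10, 10**3, 10**5, 10**7):
    print(n, (1 + a_n(n) / n) ** n)
print("e^a =", math.exp(a))
```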
{ "language": "en", "url": "https://math.stackexchange.com/questions/3921618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Inductive proof that sum of reciprocals of odd natural numbers diverges? I am trying to prove via induction that for any natural number $N$, there exists an $n$ such that: $$ \sum_{i=1}^{n} \dfrac{1}{2i-1} > N $$ Or $$ 1 + \dfrac{1}{3} + \dfrac{1}{5} + ... + \dfrac{1}{2n-1} > N $$ I've started by attempting to use induction on $N$. The basis step is easy. Let $N=1$, and it is clear that $1 + \dfrac{1}{3} > N$, so $n = 2$ suffices to show the basis is true. For the inductive step, I've started with this: For some $k > 0$, show that $$ 1 + \dfrac{1}{3} + \dfrac{1}{5} + ... + \dfrac{1}{2n-1} + \dfrac{1}{2(n+1)-1} + ... + \dfrac{1}{2(n+k)-1} > N + 1 $$ Now I've tweaked this inequality in a bunch of different ways, but I haven't been able to get closer to proving that it's true for a certain $k$. The closest I've got is trying to show that for a given $k$, $$ \dfrac{1}{2(n+1)-1} + ... + \dfrac{1}{2(n+k)-1} > 1 $$ Can anyone point me in the right direction?
$$ \frac{1}{2(n+1)-1} > \frac{1}{2(n+2)-1} > \cdots > \frac{1}{2(n+k)-1}. $$ Therefore $$ \frac{1}{2(n+1)-1} + \frac{1}{2(n+2)-1} + \cdots + \frac{1}{2(n+k)-1} > \frac{k}{2(n+k)-1}. $$ The right-hand side will never be greater than $1.$ But if you can make it greater than, say, $\frac14,$ all you need to do is repeat the process four times and you will have a sum of terms that is greater than $1.$
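To see the (very slow) divergence concretely, one can count how many odd reciprocals are needed to pass each threshold (a quick illustrative Python sketch; since the partial sums grow like $\frac12\ln n$, the count grows roughly like $e^{2N}$):

```python
def terms_needed(N):
    s, i = 0.0, 0
    while s <= N:
        i += 1
        s += 1.0 / (2 * i - 1)
    return i

for N in range(1, 6):
    print(N, terms_needed(N))
```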
{ "language": "en", "url": "https://math.stackexchange.com/questions/3921741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
A question on floor function. Prove that for $n \in \mathbb N$, $[\sqrt{n} + \frac{1}{2} ] = [\sqrt{n-\frac{3}{4}} + \frac{1}{2}]$, where $[\cdot]$ is the greatest integer function. My attempt: $k < \sqrt{n} + \frac{1}{2} < k+1$. Obviously, $\sqrt{n-\frac{3}{4}} + \frac{1}{2} < \sqrt{n} + \frac{1}{2}$, so it would suffice to show that $\sqrt{n-\frac{3}{4}} + \frac{1}{2} > k$, but it turns out that the inequality is very tight, so I am not able to prove it. Is there any way to do this using induction? Or am I trying the right thing? Any help is appreciated, thanks!
We have, where $k$ is the integer part of $\sqrt{n} + \frac{1}{2}$, $$\begin{equation}\begin{aligned} k & \lt \sqrt{n} + \frac{1}{2} \\ k - \frac{1}{2} & \lt \sqrt{n} \\ k^2 - k + \frac{1}{4} & \lt n \end{aligned}\end{equation}\tag{1}\label{eq1A}$$ (the first inequality is strict because $\sqrt{n} + \frac{1}{2}$ is never an integer when $n$ is a positive integer). Since $k$ and $n$ are positive integers, \eqref{eq1A} can actually be restated as being $$\begin{equation}\begin{aligned} k^2 - k + 1 & \le n \\ k^2 - k + \frac{1}{4} & \le n - \frac{3}{4} \\ \left(k - \frac{1}{2}\right)^2 & \le n - \frac{3}{4} \\ k - \frac{1}{2} & \le \sqrt{n - \frac{3}{4}} \\ k & \le \sqrt{n - \frac{3}{4}} + \frac{1}{2} \end{aligned}\end{equation}\tag{2}\label{eq2A}$$ Since $\sqrt{n} + \frac{1}{2} \gt \sqrt{n - \frac{3}{4}} + \frac{1}{2}$, as you stated, this shows $k$ is also the integer part of $\sqrt{n - \frac{3}{4}} + \frac{1}{2}$, which means that $$\left\lfloor \sqrt{n} + \frac{1}{2} \right\rfloor = \left\lfloor \sqrt{n - \frac{3}{4}} + \frac{1}{2} \right\rfloor \tag{3}\label{eq3A}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3921886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding the structure of a group without using Sylow's theorem. If $|G|=pq$ and $p$ does not divide $q-1$ and $p<q$, then $G$ is cyclic. My proof stands as follows. I have used the fact that $(i)$ if $|G|=pq$, then there is only one subgroup of order $p$ and one subgroup of order $q$. My approach in proving this part has been to show that if there are two elements of order $p$, say $x_1$ and $x_2$, then letting $H_1$ be the group of order $p$ generated by $x_1$ and $H_2$ the group of order $p$ generated by $x_2$, we may assume that the intersection is $\{e\}$; if not, then we get $H_1=H_2$ (by properties of subgroups). Proving that the elements are distinct: we assume that the elements $(x_1)^{i}(x_2)^j$ are not distinct, so $(x_1)^{i}(x_2)^j =(x_1)^{i'}(x_2)^{j'}$; from here we can arrive at a contradiction since $H_1 \cap H_2 =\{e\}$. So if there were $p^2$ such elements, we would arrive at a contradiction, as $p$ and $q$ are both primes. Similar results hold in the case of $q$. $(ii)$ Now there is only one subgroup of order $p$ and one subgroup of order $q$, so they are both normal. $(iii)$ Let $H$ and $K$ be the two subgroups of order $p$ and order $q$. Then we know that $H \cap K=\{e\}$, and $H$ and $K$ are both normal. Then I showed that $x^{-1}y^{-1}xy \in H \cap K$ and $xy=yx$, so the order of the element $xy$ is $pq$. Where am I going wrong in my proof, given that I have not used the fact that $p$ does not divide $q-1$?
An argument close to the OP's idea could proceed as follows. Fill in the details.

* If $x$ and $y$ are elements of order $q$, then among the products $x^iy^j$, $0\le i,j<q$, there must be repetitions. This is because $q^2>|G|$. Show that this implies that the subgroup $H$ of order $q$ is unique. Let's fix a generator $x$ of $H$.
* If $z\notin H$ has order $pq$ then $G$ is cyclic. Therefore the remaining possibility is that all such elements $z$ have order $p$.
* Because $H$ is a unique subgroup of its order, $H\unlhd G$. Why does it follow that $zxz^{-1}=x^i$ for some $i$, $1\le i<q$?
* Why do we have $z^pxz^{-p}=x$?
* On the other hand we also have $z^pxz^{-p}=x^{i^p}$, why? Why does this imply the congruence $$i^p\equiv1\pmod q?$$
* It follows that the coset of $i$ in the multiplicative group $\Bbb{Z}_q^*$ has either order $1$ or order $p$. Why?
* If the order of the coset of $i$ is equal to one, then $zx$ has order $pq$. Why?
* If the order of the coset of $i$ is equal to $p$, why does it follow that $p\mid q-1$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3922056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
One example on game playing with some details: how was this average step count calculated? I have one example in my TA notes. We have a two-player game. The players agree on a number $n$, and the game starts with $x=2$. A coin flip selects which player starts. After that, the players alternately replace $x$ by either $x^2$ or $x^3$. For example, if "David" starts the game and chooses $x^3$, then $x$ becomes $8$. As soon as a player's move makes $x>n$, the game stops and that last player is announced as the winner. On average this game ends after $O(\log \log n)$ steps. This is a very nice example, but how can we show that this bound on the number of steps holds?
If we take $\log_2$ of everything, we end up with a simpler game that is equivalent to the original ($\log$ is monotonically increasing, so $x>n$ iff $\log x > \log n$). I will denote $\log_2x$ by $y$ and $\log_2n$ by $m$. When we take the log, squaring becomes doubling, and cubing becomes tripling. Then this game is equivalent to the following: Start with $y=1$. The players take turns either doubling or tripling $y$. The game ends when $y >m$. We can take the log again. When we take the log of a product, that's the sum of the logs of the multiplicands, so doubling becomes adding the log of $2$, and tripling becomes adding the log of $3$ (side note: this means that this is a nim game, as adding until you reach some number is equivalent to starting at that number and subtracting until you get to zero). So then the game becomes: Start with $z=0$. Each turn, players add $\log_22=1$ or $\log_23$ to $z$. The game ends when $z > \log_2m$. Clearly, this game will end the fastest if each player increases $z$ by $\log_23$ each turn, in which case it will (with rounding) end after $\frac{\log_2m}{\log_23 }= \log_3m = \log_3\log_2n$ turns. The longest it can take is $\log_2\log_2n$. It's not clear that "average" is well defined, but regardless of how it's defined, it will be between these two numbers. And if we're dealing with Big-O notation, the bases of the logarithms don't matter, so we're left with $\log\log n$.
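The $O(\log\log n)$ behaviour is easy to see experimentally with Python's big integers (an illustrative simulation with random play; optimal play would give $\log_3\log_2 n$ instead):

```python
import math, random

def game_length(n):
    x, steps = 2, 0
    while x <= n:
        x = x ** random.choice((2, 3))   # a player squares or cubes x
        steps += 1
    return steps

for e in (10, 100, 1000):
    n = 10 ** e
    avg = sum(game_length(n) for _ in range(2000)) / 2000
    print(e, round(avg, 2), round(math.log2(math.log2(n)), 2))
```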
{ "language": "en", "url": "https://math.stackexchange.com/questions/3922244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
A question about floor functions and series The sequence $\{a_n\}^{∞}_{n=1} = \{2,3,5,6,7,8,10,...\}$ consists of all the positive integers that are not perfect squares. Prove that $a_n= n+ [\sqrt{n} + \frac{1}{2}]$. Well, I managed to prove that $ m^2 < n+ [\sqrt{n} + \frac{1}{2}] < (m+1)^2 $, where $[\sqrt{n} + \frac{1}{2}] = m$. But is this enough to answer the question? Or do we also need to prove that $n+ [\sqrt{n} + \frac{1}{2}] $ can take all non-perfect square values. If yes, then how? Any help is appreciated, thanks!
The following is just a first step toward the answer. From $$ \eqalign{ & a_{\,n} = n + \left\lfloor {\sqrt n + {1 \over 2}} \right\rfloor \cr & a_{\,n + 1} - a_{\,n} = 1 + \left\lfloor {\sqrt {n + 1} + {1 \over 2}} \right\rfloor - \left\lfloor {\sqrt n + {1 \over 2}} \right\rfloor \cr} $$ since we have $$ \eqalign{ & \left\lfloor {x - y} \right\rfloor = - \left\lceil {y - x} \right\rceil = \cr & = \left\lfloor x \right\rfloor - \left\lfloor y \right\rfloor + \left\lfloor {\left\{ x \right\} - \left\{ y \right\}} \right\rfloor = \cr & = \left\lfloor x \right\rfloor - \left\lfloor y \right\rfloor - \left[ {\left\{ x \right\} < \left\{ y \right\}} \right] \cr} $$ where $[P]$ denotes the Iverson bracket, then $$ \eqalign{ & a_{\,n + 1} - a_{\,n} = 1 + \left\lfloor {\sqrt {n + 1} + {1 \over 2}} \right\rfloor - \left\lfloor {\sqrt n + {1 \over 2}} \right\rfloor = \cr & = 1 + \left\lfloor {\sqrt {n + 1} - \sqrt n } \right\rfloor + \left[ {\left\{ {\sqrt {n + 1} + {1 \over 2}} \right\} < \left\{ {\sqrt n + {1 \over 2}} \right\}} \right] = \cr & = 1 + \left[ {\left\{ {\sqrt {n + 1} + {1 \over 2}} \right\} < \left\{ {\sqrt n + {1 \over 2}} \right\}} \right] \cr} $$ Now the Iverson bracket will be one, and thus we will have a jump in $a_{\,n}$, when $$ \left\{ {\sqrt {n + 1} + {1 \over 2}} \right\} < \left\{ {\sqrt n + {1 \over 2}} \right\} $$ which for $1 \le n$ (so that $\sqrt{n+1} -\sqrt{n} < 1/2$) can occur only when $$ \sqrt n + {1 \over 2} < m < \sqrt {n + 1} + {1 \over 2} $$ with $m$ an integer, that is $$ \eqalign{ & \sqrt n < m - {1 \over 2} < \sqrt {n + 1} \quad \Rightarrow \cr & \Rightarrow \quad n < m^{\,2} + {1 \over 4} - m < n + 1\quad \Rightarrow \cr & \Rightarrow \quad n - {1 \over 4} < m\left( {m - 1} \right) < n + {3 \over 4}\quad \Rightarrow \cr & \Rightarrow \quad n_ * = m\left( {m - 1} \right) \cr} $$ Thus $$ a_{\,n + 1} - a_{\,n} = 2\quad {\rm iff}\quad n = m\left( {m - 1} \right) $$ It remains then to demonstrate that $$ \forall q\;\exists m:\quad a_{\,n_{\, * } } + 1 = m\left( {m - 1} \right) + \left\lfloor {\sqrt {m\left( {m - 1} \right)} + {1 \over 2}} \right\rfloor + 1 = q^{\,2} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3922366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Change of basis matrix between orthonormal bases of different inner products We were recently given the following exercise: Given a vector space of real polynomials of degree 2 defined over the interval [-1,1], let the inner product be defined as $\langle f,g\rangle = \int_{-1}^1 f(t)g(t)dt$. Let the following be an orthonormal basis in terms of this inner product: $\{\frac{1}{\sqrt{2}},\sqrt{\frac{3}{2}}t,\frac{3}{2}\sqrt{\frac{5}{2}}(-\frac{1}{3}+t^2)\}$. We were also given two polynomials (which are irrelevant to the problem I'm having) and then had to use Parseval's identity to find the "angle" between these two polynomials. Parseval's identity states that the abstract inner product between the two polynomials is the same as the Euclidean inner product between their coordinate vectors in terms of this orthonormal basis. So as a first step, I tried to find a change of basis matrix from the given orthonormal basis to the "standard" basis for polynomials, where $\begin{pmatrix}1\\0\\0\end{pmatrix}$ represents a constant polynomial, $\begin{pmatrix}0\\1\\0\end{pmatrix}$ represents $t$ and $\begin{pmatrix}0\\0\\1\end{pmatrix}$ represents $t^2$. The coordinate vectors of the orthonormal basis in terms of the standard basis are thus $\begin{pmatrix}\frac{1}{\sqrt{2}}\\0\\0\end{pmatrix}, \begin{pmatrix}0\\\sqrt{\frac{3}{2}}\\0\end{pmatrix}, \begin{pmatrix}-\sqrt{\frac{5}{8}}\\0\\ \sqrt{\frac{45}{8}}\end{pmatrix}$. Now, this is where I'm confused. The change of basis matrix, which takes coordinate vectors from the orthonormal basis to the standard basis, is $\begin{pmatrix} \frac{1}{\sqrt{2}} &0&-\sqrt{\frac{5}{8}} \\ 0&\sqrt{\frac{3}{2}}&0 \\ 0&0&\sqrt{\frac{45}{8}}\end{pmatrix}$. This matrix is clearly not orthogonal. However, a few days ago we learned that the change of basis matrix between two orthonormal bases must always be orthogonal, and both bases I gave are clearly orthonormal. What am I missing here? Does it make a difference that one basis is orthonormal in terms of the Euclidean dot product and one basis is orthonormal in terms of the newly defined inner product?
The result you mentioned is about a change of basis in the same $(E, \left <\,,\right>)$ inner product space. When you change the inner product $\left <\,,\right>$, distances and angles change too. To see this, take $E=\Bbb R^2$, $\left <a,b\right>=a_1b_1 +a_2b_2$ and $a=(-1,-1), b=(1,-1)$. We have $\left<a,b\right>=0$. Now, if $\left <a,b\right>=2a_1 b_1 + 3a_2b_2$ (you can check that this is an inner product on $\Bbb R^2$), we have $\left <a,b\right>=-2+3=1 \neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3922515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving Chromatic Number without 5-Color Theorem We are currently studying planar graphs and I have come across a review exercise that is proving to be more trouble than it's worth. The problem is as follows: Let G = (V;E) be a planar graph with at most 12 vertices. Without using the 4-colour theorem or the 5 colour theorem (Theorems 13.4.1 or 13.4.2 in the Discrete Math book), prove that G has chromatic number at most 5. I've got a few tools at my disposal, mostly that of Euler's equation of $|V|+|F|=|E|+2$, and it's later derivation of $|E| \leq 3|V|-6$. From these two, I can prove the six color theorem, as we can say by Handshake Lemma there must be at least one vertex of degree 5 or less in order to not break the $|E| \leq 3|V|-6$ inequality. As, with 12 vertices max, that would mean there is a max of 30 edges. However, I keep coming back to the fact that it seems I'm going to have to basically pseudo-prove the 5 color theorem again. Can I utilize the amount of vertices to my advantage, as it bounds vertices, edges, and faces? Any help would be appreciated.
Hint: show that $G$ has a vertex of degree at most $4$. First try contradiction: if all vertices of $G$ have degree at least $5$, then by the handshake lemma and $|E| \leq 3|V| - 6$, we would need $|V|=12$, $|E|=30$, and $G$ to be a $5$-regular graph. Apart from this case, it is not possible for all vertices of $G$ to have degree at least $5$, so in every other case $G$ has at least one vertex $v$ with $\deg(v) \leq 4$. For the special case, use Brooks' theorem. For the more general case, prove the claim by induction when $n \leq 12$ by removing $v$, which is similar to the proof of the $5$-colour theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3922631", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Mathematical problem related to spectral method for time evolution I have a mathematical problem related to the spectral method for time evolution in QM. Consider the standard evolution of a generic quantum state $\psi(t) \in \mathbb{C}^N $, which is not expressed in a basis where $U$ is diagonal; setting $\hbar=1$ we have: $$| \psi(t) \rangle = U |\psi(0) \rangle \hspace{5em} \text{where} \hspace{2em} U^\dagger U = UU^\dagger = 1_N \hspace{1em} \text{and} \hspace{1em} U=\exp[-iH(t-t_0)] $$ It is known that in the basis of the eigenvectors it can be written as: $$| \psi(t) \rangle =\sum_{n}c_n \exp[-iE_n(t-t_0)] | \phi_n \rangle \hspace{5em} (1)$$ I'm stuck deriving this simple expression. Supposing we store the eigenvectors in the columns of $P$: $$| \psi(t) \rangle = U |\psi(0) \rangle = P D P^{\dagger} |\psi(0) \rangle $$ Because $ P^{\dagger} |\psi(0) \rangle=\vec{c} \hspace{2em} \text{where} \hspace{2em} c_n=\langle \phi_n|\psi(0) \rangle $, we get $$| \psi(t) \rangle =P D \vec{c} \neq \vec{c} D P $$ The commutation does not seem valid to me, because the elements on the diagonal are different. I'm stuck.
So indeed, we have $$ | \psi(t) \rangle = PD \vec c. $$ Note that the diagonal entries of $D$ are $\exp(-iE_n(t-t_0))$. So, $D\vec c$ is the vector $$ D\vec c = \pmatrix{c_1\exp(-iE_1 (t-t_0))\\ \vdots \\ c_N \exp(-iE_N(t-t_0))}. $$ Since the columns of $P$ are $|\phi_1\rangle,\dots,|\phi_N\rangle$, it follows that $$ PD\vec c = \sum_{n=1}^N c_n \exp(-iE_n(t-t_0)) \cdot |\phi_n\rangle, $$ which is what we wanted.
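A concrete check of this identity with NumPy (an illustrative sketch using a random Hermitian $H$ and taking $t_0=0$):

```python
import numpy as np

N = 4
rng = np.random.default_rng(0)
H = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
H = (H + H.conj().T) / 2                      # Hermitian Hamiltonian
E, P = np.linalg.eigh(H)                      # columns of P are the |phi_n>

t = 0.7                                       # evolution time (t_0 = 0)
psi0 = rng.normal(size=N) + 1j * rng.normal(size=N)
c = P.conj().T @ psi0                         # c_n = <phi_n | psi(0)>

D = np.diag(np.exp(-1j * E * t))
via_matrix = P @ D @ c
via_sum = sum(c[n] * np.exp(-1j * E[n] * t) * P[:, n] for n in range(N))

print(np.allclose(via_matrix, via_sum))       # True
```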
{ "language": "en", "url": "https://math.stackexchange.com/questions/3922975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the order of $\bar{2}$ in the multiplicative group $\mathbb Z_{289}^×$? What is the order of $\bar{2}$ in the multiplicative group $\mathbb Z_{289}^×$? I know that $289 = 17 \times 17$ so would it be $2^8\equiv 256\bmod17 =1$ and therefore the order of $\bar{2}$ is $8$? I'm not too sure about this
Define the set $H \subset {\displaystyle (\mathbb {Z} /289\mathbb {Z} )^{\times }}$ by $\tag 1 H = \bigl\{[a + 17m] \,\large \mid \, \normalsize a \in \{-1,+1\} \text{ and } 0 \le m \lt 17\bigr\}$ It is easy to show that $H$ contains exactly $34$ elements.

Proposition 1: The set $H$ is closed under multiplication.

Proof: Consider $\quad (a + 17m)(b+17n) = ab + 17(an +bm) + mn\cdot 17^2$ while dividing $an +bm$ by $17$ to get the non-negative residue. $\quad \blacksquare$

So we can state (see bullet $1$ of this elementary group theory)

Proposition 2: The set $H$ forms a group of order $34$.

Continuing,

Proposition 3: The element $[16]$ generates $H$.

Proof: The order of $[16]$ must divide $34$. The order of $[16]$ is not equal to $2$. Moreover, by applying the binomial theorem we can write $\quad 16^{17} = \bigr((-1) + 17\bigr)^{17} = (-1)^{17} + \binom{17}{16}(-1)^{16}\cdot 17^{1} + K\cdot 17^2 \equiv -1 \pmod{289}$ and so the order of $[16]$ must be $34$. $\quad \blacksquare$

There are two methods we can use here for finding the order of $[2]$.

Method 1: Since $[2]^4 = [16]$ and $[2] \notin H$, the order of $[2]$ is strictly greater than $34$. Also, with this fact and $\quad [2]^{136} = [16]^{34} = [1]$ we must conclude that the order of $[2]$ is either $68$ or $136$. Now $\quad [2]^{68} = [16]^{17} \ne [1]$ and we therefore conclude that the order of $[2]$ is $136$.

Method 2: Since $[2]^1, [2]^2, [2]^3 \notin H$ and $[2]^4 = [16] \in H$, we can employ the group theory found here and conclude that the order of $[2]$ is $4 \times 34 = 136$.
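The conclusion is easy to confirm by brute force (illustrative Python):

```python
def order(a, m):
    x, k = a % m, 1
    while x != 1:
        x = x * a % m
        k += 1
    return k

print(order(2, 289))   # 136
print(order(2, 17))    # 8, the order computed in the question (mod 17 only)
```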
{ "language": "en", "url": "https://math.stackexchange.com/questions/3923130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
proving convergence of $a_{n+1}=1+\frac{1}{1+a_{n}}$ $a_1=1.$ $a_{n+1}=1+\frac{1}{1+a_{n}}$ Prove that the sequence is convergent. I'm trying to prove the convergence of this sequence but having trouble. At first I thought this might be a monotone sequence since then I can try monotone convergence theorem to prove its convergence. But after checking some terms, I realized it seemed the sequence is oscillating. So I'm not sure how to prove the convergence of this sequence. Thanks.
This sequence is a Cauchy sequence, so it converges. First, $a_n>0$ for all $n \in \mathbb{N}$ follows from the recursive relation [$a_1=1$ and $a_{n+1}$ is defined by adding positive terms]. In fact $a_{n+1} = 1 + \frac{1}{1+a_n} > 1$, so $a_n \geq 1$ for all $n$; and since $a_n>0$, we also have $a_{n+1} = 1 + \frac{1}{1+a_n} \leq 2 $. Now consider \begin{align} |a_{n+1} - a_n| = \left| \frac{1}{1+a_n} - \frac{1}{1+a_{n-1}} \right| = \frac{|a_n - a_{n-1}|}{(1+a_n)(1+a_{n-1})} \leq \frac{1}{4} | a_n - a_{n -1}| \end{align} where the last step uses $a_n, a_{n-1} \geq 1$, so that $(1+a_n)(1+a_{n-1}) \geq 4$. Hence this is a Cauchy sequence. [A series of this form is called contractive: after repeatedly applying the same estimate down to $|a_2-a_1|$, and by the squeeze theorem, you can easily see that $a_n$ is a Cauchy sequence.] In $\mathbb{R}$, a Cauchy sequence converges, so this one converges. Then taking limits, $\lim_{n\rightarrow \infty} a_n = \alpha$ gives $\alpha^2 = 2$, and from $a_n>0$, $\alpha = \sqrt{2}$.
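Iterating the recursion numerically shows the rapid convergence to $\sqrt 2$ (illustrative Python):

```python
import math

a = 1.0
for i in range(12):
    a = 1 + 1 / (1 + a)
    print(i + 1, a)
print("sqrt(2) =", math.sqrt(2))
```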
{ "language": "en", "url": "https://math.stackexchange.com/questions/3923260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding a bijection of the set of binary strings of length $n$ Let $S$ be the set of binary strings of length $n$, i.e. each $x\in S$ is a string whose each entry is a $0$ or a $1$. Let $W(x)$ be the Hamming weight of $x\in S$ i.e. $W(x)$ is the number of ones in $x$. I am looking for a formula for a bijection $f: S\to S$ which shuffles a lot of vertices based on Hamming weight. For example, say $x\in S$ with $W(x)=w$ is mapped to say $y$ with $W(y)=w-2$ and $y$ in turn is mapped to $z$ with $W(z)=w-5$ and so on and finally the low weight strings are mapped back to $x$. This is just an example of what I'm looking for. Bijections that I'm not looking for:

* Swaps. For example, say we choose $2^n/3$ vertices using some rule and call this the set $A$, which has $1/3$ of the total strings, but each $x\in A$ is mapped to a vertex where ones and zeros are flipped. So in this case, low weight strings are swapped with high weight strings but this is not what I'm looking for. I need a good mixing of strings of different weights.
* Shifts. Say we choose all strings of a particular weight $w$ and call this the set $B$. Now say each $x\in B$ is mapped to $y\in B$ where $y$ is obtained from $x$ by shifting each bit of $x$ by a constant amount. This bijection clearly maps strings of weight $w$ to strings of weight $w$ and hence this is not what I'm looking for because again I'm looking for a good mixing of strings of different weights.
Here is one approach which may or may not be what you're looking for: Choose an arbitrary binary string to be the key. Then bit-wise xor with the key is a bijection. More concretely, take a binary string $x$, and flip all the bits where the corresponding entry of the key is a 1. For instance, if $n=3$ and the key is $101$, then this bijection is "flip the first and the third bit". If the weight of the key is even, then this bijection sends even-weighted strings to even-weighted strings, and vice versa. It can only change the weight by at most $W(\text{key})$. But if $W(\text{key})\approx \frac n2$, this shouldn't be too bad.
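A minimal Python sketch of this construction (the key and names are arbitrary), including a check that it really is a bijection:

```python
import random

n = 8
key = random.getrandbits(n)              # the fixed xor key

def f(x):
    return x ^ key                       # flip the bits where the key has a 1

xs = list(range(2 ** n))
assert sorted(f(x) for x in xs) == xs    # f is a bijection on S
assert all(f(f(x)) == x for x in xs)     # in fact an involution

print(format(key, "08b"), format(5, "08b"), "->", format(f(5), "08b"))
```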
{ "language": "en", "url": "https://math.stackexchange.com/questions/3923368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing the equivalence of two statements regarding the limits of sequences I need to show the equivalence of the following two statements $(i)$ For any sequence $(a_n)_n$ with $a_n \in D$ and $a_n > a$ for all $n \in N$ such that $\lim\limits_{n\to\infty} a_n = a$ we have $\lim\limits_{n\to\infty} f(a_n) = L.$ $(ii)$ For any sequence $(a_n)_n$ with $a_n \in D$ and $a_n \ge a_{n+1}>a$ for all $n \in N$ such that $\lim\limits_{n\to\infty} a_n = a$ we have $\lim\limits_{n\to\infty} f(a_n) = L.$ My attempt: Going from $(i)$ to $(ii)$ is obvious. To show that $(ii)\Rightarrow(i)$, take a sequence $(a_n)_n$ with $a_n \in D$ and $a_n > a$ for all $n \in N$ and $\lim\limits_{n\to\infty} a_n = a$. We want to show that $\lim\limits_{n\to\infty} f(a_n) = L$. $(a_n)_n$ has a non-increasing subsequence $(a_{k_n})_n$. Then, we have $a_{k_n} \in D$ and $a_{k_n} \ge a_{k_{n+1}}>a$ for all $n \in N$ and $\lim\limits_{n\to\infty} a_{k_n} = a$. Then, we can apply $(ii)$ to this sequence and get $\lim\limits_{n\to\infty} f(a_{k_n}) = L$. I am stuck here and I don't know how to go back to the limit of the original sequence. Am I on the right path? Can I get some hints?
Take a sequence $(a_n)_n$ with $a_n \in D$ and $a_n > a$ for all $n \in N$ and $\lim_{n\to\infty} a_n = a$. Assume that $\lim_{n\to\infty} f(a_n) = L$ does not hold. Then there is a $\epsilon > 0$ and a subsequence $(a_{n_k})$ of $(a_n)$ such that $|f(a_{n_k}) - L | \ge \epsilon$ for all $k$. Now apply (ii) to a decreasing subsequence of $(a_{n_k})$ to get a contradiction. This is an example of the following principle, applied to $x_n = f(a_n)$: Let $(x_n)$ be a sequence in a topological space $X$ and $L \in X$. If every subsequence of $(x_n)$ has itself a subsequence converging to $L$, then the complete sequence $(x_n)$ converges to $L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3923560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Constructive logic and Russell's paradox To show that "naive set theory" doesn't work, Russell devised the famous example of the set $$A := \{ x \ | \ x \not\in x \},$$ which turns out can't be a set after all, because either * *$A \in A$, but then by the property of all members of $A$ it follows that $A \not\in A$ *$A \not\in A$, but then again $A$ has to be a member of $A$, so $A \in A$. Since both of these options yield a contradiction, $A$ can't be a set. But as I understand the whole premise of constructive logic is that it isn't valid to assume that either $A \in A$ or $A \not\in A$ has to hold. On the other hand, by the above we have shown that $A \in A \iff A \not\in A$, which even in constructive logic can't hold. So is Russell's paradox valid in constructive logic?
You can formulate the proof constructively as follows. Suppose $A\in A$. Then by definition of $A$, we conclude that $\neg(A\in A)$, which is a contradiction. Since the assumption $A\in A$ led to a contradiction, we conclude $\neg(A\in A)$ (note that this is still valid constructively--constructively, $\neg p$ means the same thing as $p\rightarrow\bot$). But now since $\neg(A\in A)$, the definition of $A$ implies that $A\in A$. Thus we have reached a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3923722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Show that $\{(X_{n},X_{n+1})\}_{n\geq 0}$ is a Markov Chain where $\{X_{n}\}_{n\geq 0}$ is a Markov Chain Show that $\{(X_{n},X_{n+1})\}_{n\geq 0}$ is a Markov Chain where $\{X_{n}\}_{n\geq 0}$ is a Markov Chain. Remark: We know that $\mathbb{P}(A|\emptyset)$ is undefined, am I right? This fact is important in my attempt and is what motivates this post. My attempt: We define $Y_{n}:=(X_{n},X_{n+1})$, then we are going to verify if we have the Markov property. On the one hand, we know that \begin{align} \mathbb{P}[Y_{n}=(i,j)\:|\:Y_{n-1}=(i_{n-1},j_{n-1})]&=\mathbb{P}[X_{n}=i,X_{n+1}=j\:|\:X_{n-1}=i_{n-1},X_{n}=j_{n-1}]\\ &=\left\{\begin{array}{ll}0 &\mbox{ if }i\neq j_{n-1}\\ p_{ij} &\mbox{ if } i=j_{n-1}. \end{array}\right. \end{align} where $P$ is the transition matrix of $\{X_{n}\}_{n\geq 0}$. On the other hand, we have \begin{align} &\mathbb{P}[Y_{n}=(i,j)\:|\:Y_{0}=(i_{0},j_{0}),\ldots,Y_{n-2}=(i_{n-2},j_{n-2}),Y_{n-1}=(i_{n-1},j_{n-1})]\\ &=\mathbb{P}\left[X_{n}=i,X_{n+1}=j\:|\:X_{0}=i_{0},X_{1}=j_{0},\ldots,X_{n-2}=i_{n-2},X_{n-1}=j_{n-2},X_{n-1}=i_{n-1},X_{n}=j_{n-1}\right]\\ &=\left\{\begin{array}{ll}\mbox{undefined} & \mbox{if }\exists k\in\{0,1,\ldots,n-2\}\mbox{ such that }j_{k}\neq i_{k+1}\\ 0 & \mbox{if }\forall k\in\{0,1,\ldots,n-2\}\mbox{ we have }j_{k}=i_{k+1} \mbox{ and }j_{n-1}\neq i\\ p_{ij} & \mbox{if }\forall k\in\{0,1,\ldots,n-2\}\mbox{ we have }j_{k}=i_{k+1} \mbox{ and }j_{n-1}= i \end{array}\right. \end{align} My problem is in the undefined case; this is making me think that this chain is not a Markov chain. What do you think about this? This exercise appears in Adventures in Stochastic Processes; Resnick, Sidney I.
Usually, in the definition of a Markov chain (with state space $\mathbb N$), we require the equality $$ \mathbb P\left(X_n=j\mid X_{n-1}=i,X_{k}=i_k,0\leqslant k\leqslant n-2\right)=\mathbb P\left(X_n=j\mid X_{n-1}=i\right) $$ to hold for each $i$, $i_1,\dots,i_{n-2}$ such that the event $A:=\{X_{n-1}=i,X_{k}=i_k,0\leqslant k\leqslant n-2\}$ has a positive probability. It is possible that the event $A$ above has probability zero, for example if $i_1$ and $i_2$ are two states which do not communicate. So an undefined case may also occur for Markov chains which are not defined as a vector like $(X_n,X_{n+1})$; hence this "issue" (quotation marks because it is not really one) is not specific to this problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3923911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that $4\mid f_n$ if and only if $6\mid n$. We define the Fibonacci sequence via $f_1 = f_2 = 1$, and if $n > 2$, then $f_n = f_{n−1} + f_{n−2}$. Show that $4\mid f_n$ if and only if $6\mid n$. I have been trying to make a copy of the following solution for the exercise "$ 3 \mid f_n $ iff $ 4 \mid n $". But so far I have not found anything satisfactory. In this question $f_n$ is divisible by $4$ if and only if $n$ is divisible by $6$ they answer it by redefining the fibonacci sequence, but I wonder if it can be done as I am trying. By the way, what they do here $f_n$ is divisible by $4$ if and only if $n$ is divisible by $6$ I don't understand very well.
As you have asked, this can be done with an argument similar to the one in your post. If you repeatedly apply the recurrence relation, then $f_{n} = 8f_{n-5}+5f_{n-6}$, so we can use strong induction on some set of elements less than $n$ in the same way. Supposing $6|n$, then $6|(n-6)$, and by hypothesis $4|f_{n-6}$, so by the equation above, $4|f_{n}$. If $6\nmid n$, then $6\nmid (n-6)$ and $4\nmid f_{n-6}$. If $4|f_{n}$, then $4|8f_{n-5}+5f_{n-6}\implies4|5f_{n-6}\implies4|f_{n-6}$, a contradiction. Hence, $4\nmid f_{n}$. In this way, both implications have been shown. For the second implication, I was unable to use the exact same argument as in the model solution since, in this case, $f_{n}$ could not be reduced in such a convenient way.
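Both the statement and the identity $f_n = 8f_{n-5} + 5f_{n-6}$ are easy to verify numerically (illustrative Python):

```python
def fib(m):
    a, b = 1, 1                      # f_1 = f_2 = 1
    for _ in range(m - 1):
        a, b = b, a + b
    return a

for n in range(1, 40):
    assert (fib(n) % 4 == 0) == (n % 6 == 0)
for n in range(7, 40):
    assert fib(n) == 8 * fib(n - 5) + 5 * fib(n - 6)
print("checks pass")
```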
{ "language": "en", "url": "https://math.stackexchange.com/questions/3924022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Evaluate $\oint_{|z-1/2|=3/2} \frac{\tan(z)}{z}$ with residue theorem Here is my attempt at the problem: $\oint_{|z-1/2|=3/2} \frac{\tan(z)}{z}$ We have that $z=0$ is a removable singularity, so we may consider that $\tan(z)=\frac{\sin(z)}{\cos(z)}$. From here we have that $\frac{\tan(z)}{z} = \frac{\sin(z)}{z\cos(z)}$. This have a pole of order 1 at $z=\frac{\pi}{2}$. We only consider this residue because its the only residue in the circle we are integrating over. So by the residue theorem we have $\oint_{|z-1/2|=3/2} \frac{\tan(z)}{z} = 2\pi i \operatorname{Res}[\frac{\sin(z)}{z\cos(z)}, \frac{\pi}{2}]$. We can evaluate this residue by the following $\operatorname{Res}[\frac{\sin(z)}{z\cos(z)}, \frac{\pi}{2}] = \lim_{z \to \pi/2}(z-\frac{\pi}{2})\frac{\sin(z)}{z\cos(z)}= \lim_{z \to \pi/2}(z-\frac{\pi}{2})\frac{\sin(z)}{z\cos(z)-z\cos(\frac{\pi}{2})+z\cos(\frac{\pi}{2})}$. If we pull out a $\frac{1}{z-\frac{\pi}{2}}$ on the bottom we have $\operatorname{Res}[\frac{\sin(z)}{z\cos(z)}, \frac{\pi}{2}]$ = $\lim_{z \to \pi/2}\frac{\sin(z)}{z\cos'(\frac{\pi}{2})} = \lim_{z \to \pi/2}\frac{\sin(z)}{-z\sin(\frac{\pi}{2})} = \frac{1}{-\frac{\pi}{2}} = -\frac{2}{\pi}.$ So by multiplying the $2\pi i$ by this we end up with a solution of $-4i$ but the integral shouldn't be negative, so where did I go wrong on here?
For a simple calculation, use the expansion about $z=\pi/2$: $$\cos z=-(z-\pi/2)+(z-\pi/2)^3/3!-(z-\pi/2)^5/5!+\cdots$$ so that $$\operatorname{Res}(f(z), z=\pi/2)=\lim_{z\rightarrow \pi/2}(z-\pi/2)f(z)=\lim_{z\rightarrow\pi/2}(-1)\times\frac{\sin z}{z}=-\frac{2}{\pi}.$$ So your computation is fine: the value of the integral really is $-4i$. There is no reason a contour integral of a complex function must be "non-negative"; it is a complex number and can take any value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3924288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many 9 digit numbers have the property that the product of their first and last digits is even? How many 9 digit numbers have the property that the product of their first and last digits is even? I tried listing them out, but there are probably way too many. This is a basic counting problem.
The product of the first and last digits is even iff they are not both odd. There are $9×10$ ways to choose the first and last digits; of those $5×5$ result in an odd product due to both digits being odd. The remaining seven digits may be chosen arbitrarily, so the answer is $(9×10-5×5)10^7=65\cdot10^7$.
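The $9$-digit case has $9\cdot 10^8$ numbers, but the $3$-digit analogue can be brute-forced to confirm the pattern $(9\times10-5\times5)\cdot 10^{k-2}$ (illustrative Python):

```python
count = sum(1 for m in range(100, 1000)
            if (m // 100) * (m % 10) % 2 == 0)
print(count, (90 - 25) * 10 ** (3 - 2))   # both 650
```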
{ "language": "en", "url": "https://math.stackexchange.com/questions/3924438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Count of matrices of 0,1 in which each row and each column have at least one 1. I know that the number of $n \times n$ matrices with $0,1$ entries, in which each row and each column have at least one $1$, is equal to $$\sum_{r=0}^n \sum_{s=0}^n (-1)^{r+s} C(n,r) C(n,s) 2^{rs}.$$ I want to prove it combinatorially using the inclusion–exclusion principle.
Let's say $S_{t,u}$ is the number of ways to choose a set of $t$ rows and a set of $u$ columns of an $n$ by $n$ binary matrix and fill in the matrix so that all the chosen rows and columns contain only zeroes. The rows can be chosen in $C(n,t)$ ways and the columns can be chosen in $C(n,u)$ ways. The remaining portion of the matrix has $(n-t)(n-u)$ binary elements. So $$S_{t,u} = C(n,t)C(n,u)2^{(n-t)(n-u)}$$ (Note that this formula works even when $t=u=0$.) By inclusion/exclusion, the number of matrices with no row of zeroes and no column of zeroes is $$\begin{align} N_0 &= \sum_{t=0}^n \sum_{u=0}^n (-1)^{t+u} S_{t,u} \\ &= \sum_{t=0}^n \sum_{u=0}^n (-1)^{t+u} C(n,t)C(n,u)2^{(n-t)(n-u)} \end{align}$$ Now make a change of indices to $r=n-t$ and $s=n-u$. The result is $$\begin{align} N_0 &= \sum_{r=0}^n \sum_{s=0}^n (-1)^{2n-r-s} C(n,n-r)C(n,n-s) 2^{rs} \\ &= \sum_{r=0}^n \sum_{s=0}^n (-1)^{r+s} C(n,r)C(n,s) 2^{rs} \end{align}$$
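For small $n$ the formula can be checked against a brute-force count (illustrative Python):

```python
from itertools import product
from math import comb

def brute(n):
    cnt = 0
    for bits in product((0, 1), repeat=n * n):
        rows = [bits[i * n:(i + 1) * n] for i in range(n)]
        if all(any(r) for r in rows) and all(any(c) for c in zip(*rows)):
            cnt += 1
    return cnt

def formula(n):
    return sum((-1) ** (r + s) * comb(n, r) * comb(n, s) * 2 ** (r * s)
               for r in range(n + 1) for s in range(n + 1))

for n in (1, 2, 3):
    print(n, brute(n), formula(n))    # 1, 7, 265 in both columns
```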
{ "language": "en", "url": "https://math.stackexchange.com/questions/3924583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Quadratic Gauss sum equivalence of definition Let $r$ be a prime number and $\zeta_r$ be the $r$th root of unity. I know the quadratic Gauss sum can be expressed in two ways, $g=\sum_{i=0}^{r-1}\zeta_r^{i^2}$ and $g=\sum_{i=0}^{r-1}(\frac{i}{r})\zeta_r^{i}$, $g$ being the sum. But I do not see how to go from the first definition to the second. Can anyone help?
The connection is $$ g=\sum_{i=0}^{r-1}\zeta_r^{i^2}=\sum_{i=0}^{r-1}\left(1+\left(\frac ir\right)\right)\zeta_r^i=\sum_{i=0}^{r-1}\zeta^i+\sum_{i=0}^{r-1}\left(\frac ir\right)\zeta^i=0+\sum_{i=0}^{r-1}\left(\frac ir\right)\zeta^i=\sum_{i=0}^{r-1}\left(\frac ir\right)\zeta^i $$ where $\sum_{i=0}^{r-1}\zeta_r^{i^2}=\sum_{i=0}^{r-1}\left(1+\left(\frac ir\right)\right)\zeta_r^i$ holds because $i^2\;(\mathrm{mod}\;r)$ is a quadratic residue: the first sum only has quadratic residues in the exponent of $\zeta_r$, and each such term is counted twice, as there are two solutions of $x^2-a\equiv0\;(\mathrm{mod}\;r)$ for each nonzero residue $a$. So now it can easily be seen that the two expressions are equivalent.
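A numerical check of the two expressions for a small prime (illustrative Python; the Legendre symbol is computed via Euler's criterion):

```python
import cmath

def legendre(i, r):
    if i % r == 0:
        return 0
    return 1 if pow(i, (r - 1) // 2, r) == 1 else -1

r = 11
zeta = cmath.exp(2j * cmath.pi / r)        # a primitive r-th root of unity
g1 = sum(zeta ** (i * i) for i in range(r))
g2 = sum(legendre(i, r) * zeta ** i for i in range(r))
print(abs(g1 - g2) < 1e-9, g1)             # True; g = i*sqrt(11) since 11 = 3 mod 4
```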
{ "language": "en", "url": "https://math.stackexchange.com/questions/3924780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Non-standard form of Sobolev's embedding: Does $\Vert f\Vert_{L^2}^2\leq C \Vert f\Vert_{L^\infty}\Vert f\Vert_{H^1}$ hold?? I am wondering exactly the question of the title. Is it true that there exists a constant $C>0$ such that for all $f\in H^1(\mathbb{R})$ it holds $$ \qquad \qquad \Vert f\Vert_{L^2}^2\leq C \Vert f\Vert_{L^\infty}\Vert f\Vert_{H^1} \quad ?? \qquad (*) $$ I know that by Sobolev's embedding we have that $$ \Vert f\Vert_{L^\infty}\leq C \Vert f\Vert_{H^1}, $$ and of course we have the trivial bound $\Vert f\Vert_{L^2}\leq \Vert f\Vert_{H^1}$. However, it is not clear to my if something like the inequality in $(*)$ holds. Does anyone has any thoughts?
No, this inequality cannot hold with a constant $C$ independent of $f$. When you wonder if such a functional inequality can hold, a standard test is the scaling argument. Here is how it goes. Let $f\in H^1(\mathbb{R})$ be non-identically $0$ and define $f_\lambda(x) = \sqrt{\lambda}f(\lambda x)$ with $\lambda>0$. Then one easily computes $\|f_\lambda\|_{L^2} = \|f\|_{L^2}$, $\|f_\lambda\|_{L^\infty} = \sqrt{\lambda}\|f\|_{L^\infty}$ and $\|f_\lambda'\|_{L^2} = \lambda\|f'\|_{L^2}$. By contradiction assume that $(\ast)$ holds. Then since $f_\lambda\in H^1(\mathbb{R})$, it must also satisfy $(\ast)$. Therefore $$ \|f\|_{L^2}^2 \leq C\,\sqrt{\lambda}\|f\|_{L^\infty}(\|f\|_{L^2}^2 + \lambda^2\|f'\|_{L^2}^2)^{1/2} $$ By letting $\lambda\to 0$ we obtain a contradiction.
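The blow-up is visible numerically: using the exact norms of the Gaussian $f(x)=e^{-x^2}$ together with the scaling identities above (illustrative Python):

```python
import math

L2sq  = math.sqrt(math.pi / 2)   # ||f||_{L^2}^2 for f(x) = exp(-x^2)
dL2sq = math.sqrt(math.pi / 2)   # ||f'||_{L^2}^2 for the same f
Linf  = 1.0                      # ||f||_{L^infty}

def ratio(lam):
    # ||f_lam||_2^2 / (||f_lam||_inf * ||f_lam||_{H^1}) via the scaling identities
    linf = math.sqrt(lam) * Linf
    h1 = math.sqrt(L2sq + lam ** 2 * dL2sq)
    return L2sq / (linf * h1)

for lam in (1.0, 1e-2, 1e-4, 1e-6):
    print(lam, ratio(lam))       # grows like lam**(-1/2): no constant C can work
```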
{ "language": "en", "url": "https://math.stackexchange.com/questions/3924902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }