Existence of such points in compact and connected topological space $X$ Let $X$ be a topological space which is compact and connected, and let $f : X \to \mathbb{C}\setminus\{0\}$ be a continuous function. Explain why there exist two points $x_0$ and $x_1$ in $X$ such that $|f(x_0)| \le |f(x)| \le |f(x_1)|$ for all $x$ in $X$.
Define the function $g: X \to \mathbb{R}$ by $g(x) = |f(x)|$, which is continuous. Since $X$ is compact, the result follows by the Extreme Value Theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/166681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to get the characteristic equation? In my book, this sequence defined by recurrence is presented: $$U_n=3U_{n-1}-U_{n-3}$$ And it says that its characteristic equation is: $$x^3=3x^2-1$$ Honestly, I don't understand how. How do I get the characteristic equation of a given recurrence?
Here’s a rote rule for doing so. Start with the recurrence: $$U_n=3U_{n-1}-U_{n-3}$$ Convert each subscript to an exponent: $$U^n=3U^{n-1}-U^{n-3}$$ Change the variable to the one that you want to use in the characteristic equation: $$x^n=3x^{n-1}-x^{n-3}$$ Divide through by the smallest exponent, in this case $n-3$: $$x^{n-(n-3)}=3x^{(n-1)-(n-3)}-1\;,$$ which simplifies to $$x^3=3x^2-1\;.$$ With a little practice you can do the conversion in one go. For instance, the recurrence $$a_n=4a_{n-2}-6a_{n-3}+3a_{n-4}$$ has characteristic equation $$x^4=4x^2-6x+3\;,$$ as you can check by following through the steps given above.
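As a quick numerical sanity check of the rote rule (my addition, not part of the original answer): each root $r$ of the characteristic polynomial gives a solution $U_n=r^n$ of the recurrence.

```python
import numpy as np

# Roots of the characteristic polynomial x^3 - 3x^2 + 1 = 0
roots = np.roots([1, -3, 0, 1])

# For each root r, the geometric sequence U_n = r^n satisfies
# the recurrence U_n = 3*U_{n-1} - U_{n-3}.
for r in roots:
    for n in range(3, 10):
        assert abs(r**n - (3 * r**(n - 1) - r**(n - 3))) < 1e-9
```

That the characteristic equation has exactly this property is what makes the rote rule work.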
{ "language": "en", "url": "https://math.stackexchange.com/questions/166743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 5, "answer_id": 1 }
Help with binomial theorem related proof I'm currently working through Spivak on my own. I'm stuck on this proof, and the answer key is extremely vague on this problem. I think I'm missing a manipulation involving sums. Prove that $\displaystyle\sum_{k=0}^{l}\dbinom{n}{k}\dbinom{m}{l-k}= \dbinom{n+m}{l}$. As a hint, he gives "Apply the binomial theorem to $\displaystyle(1+x)^n(1+x)^m$" Following the hint, I get: $\displaystyle(1+x)^n(1+x)^m=(1+x)^{n+m}$ Applying the binomial theorem, we get: $\displaystyle\sum_{j=0}^n\dbinom{n}{j}x^j\cdot\sum_{k=0}^m\dbinom{n}{k}x^k=\sum_{l=0}^{n+m}\dbinom{n+m}{l}x^l$ Here's where I get stuck. How do I manipulate this into looking like the above? As an aside, this is not homework. I'm working through the book for my own benefit. I'm usually reticent about consulting the answer key, but I've been stuck on this one for about a day.
Starting where you got stuck, how do you get a term in $x^{\ell}$ from the product on the right side? You get it when you multiply ${n\choose j}x^j$ from the first term with ${m\choose\ell-j}x^{\ell-j}$ from the second term. [Note that you have a typo, an $n$ in the second sum where you wanted to have $m$] So, the coefficient of $x^{\ell}$ is the sum of all the terms ${n\choose j}{m\choose\ell-j}$, as desired.
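For readers who want to see the identity in action, here is a small brute-force check (my addition, not part of the original answer):

```python
from math import comb

# Vandermonde's identity: sum_k C(n,k)*C(m,l-k) = C(n+m,l).
# math.comb(n, k) returns 0 when k > n, so out-of-range terms vanish.
n, m = 5, 7
for l in range(n + m + 1):
    lhs = sum(comb(n, k) * comb(m, l - k) for k in range(l + 1))
    assert lhs == comb(n + m, l)
```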
{ "language": "en", "url": "https://math.stackexchange.com/questions/166818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Detailed diagram with mathematical fields of study Some time ago, I was searching for a detailed diagram of mathematical fields of study; the nearest one I could find is in this file, second page. I want something that shows information like: "Geometry leads to topic I, Geometry and Algebra lead to topic J", and so on. Can you help me?
A diagram relating different areas of mathematics can be found at a blog post entitled An Attempt at Mapping Mathematics. It's by far the most comprehensive one I am aware of.
{ "language": "en", "url": "https://math.stackexchange.com/questions/166862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 4, "answer_id": 1 }
In set theory, what does the symbol $\mathfrak d$ mean? What's the meaning of this symbol in set theory, which looks like $b$? I know symbols such as $\omega$, $\omega_1$, and so on; however, what does it denote in the lemma? Thanks for any help:)
The symbol $\mathfrak d$ is used to denote the dominating number of the continuum. If $g,f\colon\omega\to\omega$, we say that $g$ dominates $f$ if for all but finitely many $n$, $f(n)\leq g(n)$. The dominating number is the smallest cardinality of a dominating family, namely the minimal $|F|$ such that $F\subseteq\omega^\omega$ and for every $f\colon\omega\to\omega$ there is some $g\in F$ such that $g$ dominates $f$. Some observations: (1) $\aleph_0<\mathfrak d\leq\mathfrak c$: the former holds because, given a countable family of functions, a diagonalization argument produces a function not dominated by any of them; the latter holds because $F=\omega^\omega$ is obviously a dominating family and its size is exactly $\mathfrak c$. (2) If $\aleph_1=\mathfrak c$ then $\mathfrak d=\mathfrak c$, which is a trivial consequence of the above. (3) Equality is not provable, because by forcing we can ensure that $\mathfrak d<\mathfrak c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/166995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
In what base does "100" equal "4 in base 10" I was reading this answer to an amusing comic-related question: https://math.stackexchange.com/a/166891/35132 and I understand that in the linked answer, the examples show how four may be expressed when the base used (expressed in decimal!!) is 10, 4, or 3: namely as 4, 10, and 11 respectively. What I can't figure out is what base would need to be used for his last example (100) to equal 4 in decimal? P.S. Are maths questions like this always this hard to put in words?!
$$100_{(b)}=b^2+0\cdot b + 0=b^2 \,.$$ So 100 in base $b$ is just the number $b^2$.... Can you find $b$ now?
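A one-line check in Python (my addition): the built-in int parser reads a string in a given base, and base 2 is indeed the answer.

```python
# "100" interpreted in base b equals b**2, so it equals 4 exactly when b = 2
assert int('100', 2) == 4
assert int('100', 3) == 9  # in base 3 it would be 9, not 4
```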
{ "language": "en", "url": "https://math.stackexchange.com/questions/167073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The sum: $\sum_{n=1}^\infty (-1)^{n+1}\frac{1}{n}=\ln(2)$ using Riemann Integral and other methods I need to prove the following: $$1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots+(-1)^{n+1}\frac{1}{n}+\cdots=\sum_{n=1}^\infty (-1)^{n+1}\frac{1}{n}=\ln(2)$$ Method 1: The series $\sum_{n=1}^\infty (-1)^{n+1}\frac{1}{n}$ is an alternating series, thus it is convergent, say to $l$. Therefore, both $s_{2n}$ and $s_n$ are convergent to the same limit $l$. $$ \begin{align} s_{2n}=1-\frac{1}{2}+\frac{1}{3}-\frac{1}{4}+\cdots-\frac{1}{2n} & =\left(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots+\frac{1}{2n}\right)-2\left(\frac{1}{2}+\frac{1}{4}+\cdots+\frac{1}{2n}\right) \\[10pt] & =\frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n} \end{align} $$ It is an easy exercise to prove that $$\lim_{n \to \infty }s_{2n}=\lim_{n \to \infty }s_n =\lim_{n \to \infty }\left [ \frac{1}{n+1}+\frac{1}{n+2}+\cdots+\frac{1}{2n} \right ]=\ln(2)$$ which implies that the given alternating series converges to $l=\ln 2$. However, I am interested to see proof of this problem using the definition of the Riemann Integral as a sum of infinitely many rectangles of widths tending to zero. I tried to come up with proof for this, but I couldn't. Can anyone share, please? Also, I am interested to see other methods of solving this problem (other than my method and the Riemann method.) If anyone of you is aware of any other methods, please share.
Here's another method by the Riemann integral, but not by definition: $$ \sum_{n=1}^{\infty }(-1)^{n+1}\frac{1}{n}= \lim_{m\to\infty}\sum_{n=1}^{m}(-1)^{n+1}\frac{1}{n}= $$ $$ \lim_{m\to\infty}\int_0^1(1-x+\ldots+(-1)^{m-1}x^{m-1})\,dx= \lim_{m\to\infty}\int_0^1\frac{1-(-x)^m}{1+x}\,dx= \int_0^1\frac{dx}{1+x}=\ln2. $$
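A quick numerical sanity check of the value (my addition, not part of the original argument):

```python
import math

# Partial sums of the alternating harmonic series approach ln 2;
# the error after N terms is at most 1/(N+1) (alternating series bound).
N = 100000
partial = sum((-1) ** (n + 1) / n for n in range(1, N + 1))
assert abs(partial - math.log(2)) <= 1 / (N + 1)
```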
{ "language": "en", "url": "https://math.stackexchange.com/questions/167155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 5, "answer_id": 0 }
Finding arithmetic mean, standard deviation, mode and median On the market, the quality of fruit was measured and the following results came out. Quality of fruit (in measuring units): 65, 70, 75, 80, 85, 90, 95, 100; Number of fruits: 2, 3, 2, 5, 8, 7, 5, 3. Determine: a) the arithmetic mean and standard deviation; b) the mode and median. How do I find what is wanted here?
There are $35$ items that have been assessed. a) To find the mean, you need to calculate $$\tfrac{(2)(65)+(3)(70)+(2)(75)+(5)(80)+(8)(85)+(7)(90)+(5)(95)+(3)(100)}{35}.\tag{$1$}$$ The standard deviation could be defined in a couple of different ways; I will use the one I guess is more likely for your course. For the sample variance, calculate first $$\tfrac{(2)(65^2)+(3)(70^2)+(2)(75^2)+(5)(80^2)+(8)(85^2)+(7)(90^2)+(5)(95^2)+(3)(100^2)}{35}.\tag{$2$}$$ Subtract the square of the sample mean calculated in $(1)$. That gives you the sample variance $s^2$. For the sample standard deviation, take the square root. But perhaps in your course, the formula for the sample variance and standard deviation involves an $n-1$ instead of an $n$. In that case, you should multiply the $s^2$ that I described by $\frac{35}{34}$. Then for the sample standard deviation, take the square root as usual. b) Since there are $35$ items, and the median is the "middle" number, count $18$ from the bottom, or $18$ from the top. We end up in the "$85$" slot, so the median is $85$. The mode is the value that occurs most often. A quick scan shows that it is the value $85$.
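If you want to check the arithmetic, here is a short script using Python's statistics module (my addition; variable names are of course arbitrary):

```python
import statistics

values = [65, 70, 75, 80, 85, 90, 95, 100]
counts = [2, 3, 2, 5, 8, 7, 5, 3]
data = [v for v, c in zip(values, counts) for _ in range(c)]  # all 35 items

mean = statistics.mean(data)        # the weighted mean in (1)
var_n = statistics.pvariance(data)  # divide-by-n variance, i.e. (2) minus mean^2
var_n1 = statistics.variance(data)  # divide-by-(n-1) variant: var_n * 35/34
median = statistics.median(data)
mode = statistics.mode(data)
```

Running it confirms that the mean, median and mode all come out to $85$.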
{ "language": "en", "url": "https://math.stackexchange.com/questions/167223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Limit of a sequence of real numbers If $(a_n), (b_n)$ are two sequences of real numbers such that $a_n\rightarrow a$ and $b_n\rightarrow b$ with $a, b\in \mathbb{R}^+$, how does one prove that $a_n^{b_n}\rightarrow a^b$?
The function $f(x,y) = x^y = e^{y \ln x}$ is continuous on $\mathbb{R}_+ \times \mathbb{R}$, hence if $(a_n,b_n) \to (a,b)$ (with $a_n, a >0$, of course), then $a_n^{b_n} = f(a_n,b_n) \to f(a,b) = a^b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/167355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Question on Malcev's _Immersion of an Algebraic ring into a skew field_. I'm reading the paper Immersion of an algebraic ring into a skew field by Malcev. Doi: 10.1007/BF01571659, GDZ. On the third page of the paper, he writes that If $\alpha\beta\sim\gamma\delta$ and the words $\alpha$ and $\gamma$ have the same length then $$ \alpha=\mu m,\quad \beta=n\nu,\quad \gamma\sim\mu m',\quad\delta\sim n'\nu,\quad mn\sim m'n' $$ where $m,n,m',n'$ are each one of the letters $a,b,c,d,x,y,u,v$. How is this property easily seen? Malcev starts with the semigroup of all words generated by the eight letters $a,b,c,d,x,y,u,v$, where the operation is concatenation. He defines the pairs of two-letter words $(ax,by)$, $(cx,dy)$ and $(au,bv)$ as "corresponding", and says two words $\alpha$ and $\beta$ are equivalent if one can be obtained from the other by changing $ax$ to $by$, $cx$ to $dy$, etc., or vice versa. He also proves that there are never any overlap ambiguities about which two letters can be replaced, but I don't see why the quoted property above follows so easily. Thank you.
Malcev states beforehand that in a word $ \dotsc m n p \dotsc$ we cannot replace both $mn$ and $np$ by another word. This means that the relations don't overlap. Regardless of how you change a word by the relations, the classes of the letters $\{a,b,c,d\}$ and $\{x,y,u,v\}$ stay the same at each position. So if we want to change $\alpha \beta$ into $\gamma \delta$, either we change $\alpha$ into $\gamma$ and $\beta$ into $\delta$ separately (this is the case where the letters $m,m',n,n'$ don't appear; probably one should add this), or at some step we cross the boundary of the two words and make a change of the form $(\mu m)(n \nu) \sim (\mu m')(n' \nu)$. Of course this is not really formal. It can be made precise using induction on the length of the deduction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/167423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving a system of quadratic vector equations This problem arises from my research in computer vision, specifically projective homography: I have $n$ unknown variables, represented by an $n\times 1$ vector $\mathbf{x}$. There is a system of $n$ equations of the form $\mathbf{x}^{T}\mathbf{A_i}\mathbf{x} + \mathbf{b_i}^{T}\mathbf{x} + c_i = 0$, where $\mathbf{A_i}$ is an $n\times n$ symmetric matrix, $\mathbf{b_i}$ is an $n\times 1$ vector and $c_i$ is a scalar for $i=1\ldots n$, that $\mathbf{x}$ has to satisfy. It is also known that every entry of the solution $\mathbf{x}$ is small, i.e. comparatively smaller than their respective coefficients. Is there a closed-form or iterative method to solve for $\mathbf{x}$?
Newton's method converges fast, when it converges. In Newton's method, you start from an initial estimate $\bf x$, and solve a linear equation for $\bf h$ to get the next iterate $\bf x+h$. In this case this seems to give $$ {\bf x}^T({\bf A}_i{\bf x}+{\bf b}_i)+(2{\bf A}_i{\bf x}+{\bf b}_i)^T{\bf h}+c_i=0 $$ for $i=1$ to $n$. It is obtained by dropping the term which is quadratic in ${\bf h}$.
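A minimal NumPy sketch of the suggested iteration (my addition; the synthetic test problem and all names are illustrative, not from the original answer):

```python
import numpy as np

def newton_step(x, A, b, c):
    """One Newton step for the system x^T A_i x + b_i^T x + c_i = 0, i = 1..n.
    A: list of n symmetric (n,n) arrays; b: list of n vectors; c: list of n scalars."""
    F = np.array([x @ A_i @ x + b_i @ x + c_i for A_i, b_i, c_i in zip(A, b, c)])
    J = np.array([2 * A_i @ x + b_i for A_i, b_i in zip(A, b)])  # Jacobian rows
    return x + np.linalg.solve(J, -F)

# Synthetic problem with a known small solution x_true (c_i chosen to make it exact)
rng = np.random.default_rng(0)
n = 3
A = [(M + M.T) / 2 for M in rng.standard_normal((n, n, n))]
b = list(rng.standard_normal((n, n)))
x_true = 0.01 * rng.standard_normal(n)
c = [-(x_true @ A_i @ x_true + b_i @ x_true) for A_i, b_i in zip(A, b)]

x = np.zeros(n)  # entries of the solution are known to be small, so 0 is a fair start
for _ in range(20):
    x = newton_step(x, A, b, c)

residual = max(abs(x @ A_i @ x + b_i @ x + c_i) for A_i, b_i, c_i in zip(A, b, c))
```

Starting from $\bf x=0$ is reasonable here precisely because the entries of the solution are known to be small.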
{ "language": "en", "url": "https://math.stackexchange.com/questions/167487", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Finding coefficient $a_{1996}$ if $\;\prod_{n=1}^{1996}(1+nx^{3^n})=\sum_{n=0}^m a_nx^{k_n}$ This is from a math contest. I have solved it, but I'm posting it here because I think it would be a good challenge problem for precalculus courses. Also, it's kind of fun. Write the polynomial $$ \prod_{n = 1}^{1996}\left(% 1 + nx^{3^{n}\rule{0pt}{3mm}}\right) = \sum_{n=0}^{m}a_{n}\,x^{k_{n}} $$ where the $k_{n}$ are in increasing order, and the $a_{n}$ are nonzero. Find the coefficient $a_{1996}$.
My attempt, thanks to the hint by Thomas Andrews: the exponents $k_n$ of the powers of $x$ in the polynomial are sums of distinct powers of $3$, with each of $3^1,\dots,3^{1996}$ appearing at most once. Written in ternary, such an exponent has only the digits $0$ and $1$ (the digit $2$ cannot occur, since each power of $3$ appears at most once). Sorting these exponents in increasing order is therefore the same as counting in binary: the term $a_nx^{k_n}$ (starting from $a_0=1$, the empty product) corresponds to the binary representation of $n$, where the bit at position $j$ records whether the factor $1+(j+1)x^{3^{j+1}}$ contributes; the shift by one occurs because the available powers start at $3^1$, not $3^0$. Hence $a_n$ is the product of $j+1$ over the set bits $j$ of $n$. Now $1996=1024+512+256+128+64+8+4$, i.e. $1996=11111001100$ in binary, with bits set at positions $2,3,6,7,8,9,10$, so $$a_{1996}=3\cdot4\cdot7\cdot8\cdot9\cdot10\cdot11=665280.$$ It is worth noting that $a_{1996}$ is the $1996$th nonzero coefficient after $a_0$; written in standard form the polynomial has many zero coefficients in between, namely those of the powers of $x$ whose ternary representation contains the digit $2$.
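The binary-counting correspondence can be checked by brute force on a truncated product (my addition). Since every exponent that uses $3^{13}$ or higher exceeds $3^1+\dots+3^{12}$, the first $2^{12}=4096$ nonzero terms of the full product are exactly those of $\prod_{n=1}^{12}(1+nx^{3^n})$. Note that because the available powers start at $3^1$, each set bit $j$ of $n$ contributes a factor $j+1$, not $j$:

```python
# Expand prod_{n=1}^{12} (1 + n*x^(3^n)) as a sparse {exponent: coefficient} dict.
# Exponents never collide, since ternary representations are unique.
terms = {0: 1}
for n in range(1, 13):
    new = dict(terms)
    for e, c in terms.items():
        new[e + 3**n] = c * n
    terms = new

coeffs = [terms[e] for e in sorted(terms)]  # nonzero coefficients a_0, a_1, ...

def a(n):
    """Product of (j+1) over the set bits j of n."""
    p, j = 1, 0
    while n:
        if n & 1:
            p *= j + 1
        n >>= 1
        j += 1
    return p

assert all(coeffs[n] == a(n) for n in range(4096))
assert a(1996) == 3 * 4 * 7 * 8 * 9 * 10 * 11 == 665280
```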
{ "language": "en", "url": "https://math.stackexchange.com/questions/167579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
$f$ has an essential singularity in $z_0$. What about $1/f$? Let $\Omega$ be a non-empty, open subset of $\mathbb C$. Consider a holomorphic function $f:\Omega \setminus \{z_0\} \to \mathbb C$ and suppose we know $z_0$ is an essential singularity of $f$. I am wondering what we can say about the function $\tilde{f}:=\frac{1}{f}$ and its singularity at $z_0$. Do you know any theorem that answers this question? Actually, I can't prove anything, since I don't know the answer: I've studied some examples. For instance, if you take $f(z)=e^{\frac{1}{z}}$ then $\tilde{f}$ will still have an essential singularity, won't it? On the other hand, if we take $f(z)=\sin(\frac{1}{z})$ then I think that $z_0=0$ becomes a limit point of poles of $\tilde{f}$ (so we can't classify it, because it isn't an isolated singularity). What do you think? Do you know any useful theorem concerning this? Thank you in advance.
So we are given that $f$ has an essential singularity at $z_0$. If the function $g=1/f$ had a pole at $z_0$ of order $m$, then $f=1/g$ would have a removable singularity at $z_0$ (the extension having a zero of order $m$ there), contradicting the assumption. Similarly, if $g$ had a removable singularity at $z_0$, then $f$ would have either a removable singularity or a pole there, again a contradiction. Hence, whenever $z_0$ is an isolated singularity of $1/f$ at all (your $\sin(1/z)$ example shows it need not be), it is an essential singularity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/167642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 0 }
What are some examples of non-identity bijections $f: X \to X$ such that $f^{-1} = f$ One example I can think of is $f: \mathbb{Z_2} \to \mathbb{Z_2}$ given by $f(1) = 0$ and $f(0) = 1$.
In a sense, every such bijection is going to look the same. If $a,b \in X$, then we will either have things that look like $f(a) = a$, or $f(a) = b, f(b) = a$ (which I'm going to refer to as a single 'transposition'). But this gives us infinitely many examples to choose from, even just in $\mathbb{Z}$. You might let $f$ be the identity on every element except, say, $1$ and $5$, such that $f(1) = 5, f(5) = 1$. Or you can have as many transpositions as you'd like.
{ "language": "en", "url": "https://math.stackexchange.com/questions/167688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
proof for $ (\vec{A} \times \vec{B}) \times \vec{C} = (\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}$ This formula just popped up in the textbook I'm reading without any explanation: $ (\vec{A} \times \vec{B}) \times \vec{C} = (\vec{A}\cdot\vec{C})\vec{B}-(\vec{B}\cdot\vec{C})\vec{A}$ I did some "vector arithmetic" using the determinant method, but I'm not getting my answer to agree with the above formula. I'm wondering if anyone has seen this formula before and knows a proof of it? The final result that I get is $b_{1}(a_{3}c_{3}+a_{2}c_{2})-a_{1}(b_{2}c_{2}+b_{3}c_{3})i$ $b_{2}(a_{3}c_{3}+a_{1}c_{1})-a_{2}(b_{3}c_{3}+b_{1}c_{1})j$ $b_{3}(a_{2}c_{2}+a_{1}c_{1})-a_{3}(b_{2}c_{2}+b_{1}c_{1})k$ But I fail to see any correlation with the $(\vec{A}\cdot\vec{C})$ and $(\vec{B}\cdot\vec{C})$ parts...
$\def\A{{\bf A}} \def\B{{\bf B}} \def\C{{\bf C}} \def\x{\times} \def\o{\cdot} \def\d{\delta} \def\e{\varepsilon}$I have always found such products easiest to discover using the properties of the Kronecker delta and the Levi-Civita symbol. Note that $\A\o\B = A_i B_j \d_{ij}$ and $(\A\x\B)_k = A_i B_j \e_{ijk}$, where we use the Einstein summation convention. Also, $\e_{ijk}\e_{lmk} = \e_{ijk}\e_{klm} = \d_{il}\d_{jm} - \d_{im}\d_{jl}$. Then \begin{eqnarray*} ((\A\x\B)\x\C)_m &=& A_i B_j \e_{ijk} C_l \e_{klm} \\ &=& A_i B_j C_l(\d_{il}\d_{jm} - \d_{im}\d_{jl}) \\ &=& (\A\o\C)B_m - (\B\o\C)A_m. \end{eqnarray*} Therefore, $$(\A\x\B)\x\C = (\A\o\C)\B - (\B\o\C)\A,$$ as claimed. Addendum: In proving such identities, there is always the question of what formalism you have to work with. My recommendation is to have the Levi-Civita symbol as it is more fundamental than the cross product. The result $\e_{ijk}\e_{lmk} = \d_{il}\d_{jm} - \d_{im}\d_{jl}$ is a consequence of the general properties of $\e$ but I give here a simple proof. The product $\e_{ijk}\e_{lmk}$ is antisymmetric in $ij$ and $lm$. Thus, $\e_{ijk}\e_{lmk} = c \left(\d_{il}\d_{jm} - \d_{im}\d_{jl}\right)$. But $\e_{123} = 1$, and so $1 = c\left(\d_{11}\d_{22} - \d_{12}\d_{21}\right)$. Therefore, $c=1$.
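A random numerical test of the identity (my addition, independent of the index-notation proof above):

```python
import numpy as np

rng = np.random.default_rng(42)
for _ in range(100):
    A, B, C = rng.standard_normal((3, 3))  # three random vectors in R^3
    lhs = np.cross(np.cross(A, B), C)
    rhs = np.dot(A, C) * B - np.dot(B, C) * A
    assert np.allclose(lhs, rhs)
```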
{ "language": "en", "url": "https://math.stackexchange.com/questions/167754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Non-Galois cubic extensions Is there a necessary and sufficient condition for when a cubic extension of $\mathbb{Q}$ is not a Galois extension?
In a rather trivial way, a cubic extension $\,\Bbb Q(\alpha)/\Bbb Q\,$ is not Galois iff it is not normal iff the minimal polynomial of $\,\alpha\,$ in $\,\Bbb Q[x]\,\,,\,\,p(x)\,$ say, has only one root in $\,\Bbb Q(\alpha)\,$, namely $\,\alpha\,$ itself, iff the quadratic $$\frac{p(x)}{x-\alpha}\in\Bbb Q(\alpha)[x]$$ is irreducible there...
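A different, equally standard criterion (not used in the answer above, but handy computationally) is that an irreducible cubic $p\in\Bbb Q[x]$ generates a Galois extension iff its discriminant is a rational square. A quick sympy illustration, offered as a sketch:

```python
from sympy import symbols, discriminant

x = symbols('x')

# x^3 - 3x - 1: discriminant 81 = 9^2 is a square, so its root field is Galois over Q
assert discriminant(x**3 - 3*x - 1, x) == 81

# x^3 - 2: discriminant -108 is not a square, so Q(2^(1/3))/Q is not Galois
assert discriminant(x**3 - 2, x) == -108
```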
{ "language": "en", "url": "https://math.stackexchange.com/questions/167821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
proving: $\frac{ab}{a^2+3b^2}+\frac{cb}{b^2+3c^2}+\frac{ac}{c^2+3a^2}\le\frac{3}{4}$. Let $a,b,c>0$; how does one prove that $$\frac{ab}{a^2+3b^2}+\frac{cb}{b^2+3c^2}+\frac{ac}{c^2+3a^2}\le\frac{3}{4}\;?$$ I found that $$\ \frac{ab}{a^{2}+3b^{2}}=\frac{1}{\frac{a^{2}+3b^{2}}{ab}}=\frac{1}{\frac{a}{b}+\frac{3b}{a}} $$ By AM-GM, $$\ \frac{ab}{a^{2}+3b^{2}} \leq \frac{1}{2 \sqrt{3}}=\frac{\sqrt{3}}{6} $$ $$\ \sum_{cyc} \frac{ab}{a^{2}+3b^{2}} \leq \frac{\sqrt{3}}{2} $$ But this obviously is not working, since $\frac{\sqrt3}{2}>\frac34$.
By AM-GM $$\sum_{cyc}\frac{ab}{a^2+3b^2}=\sum_{cyc}\frac{ab}{a^2+b^2+2b^2}\leq\sum_{cyc}\frac{ab}{2\sqrt{2b^2(a^2+b^2)}}=\frac{1}{2\sqrt2}\sum_{cyc}\sqrt{\frac{a^2}{a^2+b^2}}.$$ Thus, it remains to prove that $\sum\limits_{cyc}\sqrt{\frac{x}{x+y}}\leq\frac{3}{\sqrt2}$, which follows from C-S. Indeed, $$\left(\sum\limits_{cyc}\sqrt{\frac{x}{x+y}}\right)^2\leq\sum_{cyc}\frac{x}{(x+y)(x+z)}\sum_{cyc}(x+z)=\frac{4(xy+xz+yz)(x+y+z)}{\prod\limits_{cyc}(x+y)}\leq\frac{9}{2},$$ where the last inequality is just $\sum\limits_{cyc}z(x-y)^2\geq0$. Done!
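A numerical spot check of the inequality (my addition, of course not a proof):

```python
import random

random.seed(0)

def lhs(a, b, c):
    return a*b/(a*a + 3*b*b) + c*b/(b*b + 3*c*c) + a*c/(c*c + 3*a*a)

for _ in range(10000):
    a, b, c = (random.uniform(1e-3, 1e3) for _ in range(3))
    assert lhs(a, b, c) <= 0.75 + 1e-12

assert abs(lhs(1, 1, 1) - 0.75) < 1e-12  # equality at a = b = c
```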
{ "language": "en", "url": "https://math.stackexchange.com/questions/167855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 2 }
Formal basis for variable substitution in limits So we do a lot of variable substitution in limits at school. Stuff like $\lim\limits_{x\to5}\ (x+1)^2\ =\ \lim\limits_{y\to6}\ y^2$, where we define the substitution $y = x + 1$. But I've never been clear on what exactly the theoretical basis for this is. What is the formula that you're actually applying when you do variable substitution? What are the formal conditions under which it is possible? My conjecture would be the following: For all continuous $f$, and all real $a$: $\lim\limits_{x\to a}\ (f\circ g)(x)\ =\lim\limits_{x\to g(a)}\ f(x)$, where $g$ is a continuous function So to take my first example, $f$ would be $x^2$, $g$ would be $x + 1$, and $a$ would be $5$. Am I in the right area? If this is correct, can it be proven using $\epsilon$-$\delta$? I had a half-hearted shot at it the other night and didn't get anywhere.
The complete story is as follows: If the functions $g:\ A\to B$ and $f:\ B\to C$ have limits $$\lim_{x\to\xi}g(x)=:\eta\ ,\qquad \lim_{y\to\eta}f(y)=:\zeta\ ,$$ and if $f$ is continuous at $\eta$ in case $\eta$ occurs as value of $g$, then $$\lim_{x\to\xi}f\bigl(g(x)\bigr)=\lim_{y\to\eta} f(y)\ .$$ This holds also if any one of $\xi$, $\eta$, $\zeta$ is $\ =\infty$. The extra condition "and if $f$ is continuous $\ldots$" is usually fulfilled, but one cannot do without it: Consider the example $g(x):\equiv 1$ and $f(y):=2$ $\ (y=1)$, $\ f(y):=3$ $\ (y\ne1)$. Then $\lim_{x\to1}f\bigl(g(x)\bigr)=2$, but $\lim_{x\to1}g(x)=1$, $\ \lim_{y\to1}f(y)=3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/167926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "51", "answer_count": 6, "answer_id": 0 }
Polynomial-related manipulation My question is: Factorize: $$x^{11} + x^{10} + x^9 + \cdots + x + 1$$ Any help to solve this question would be greatly appreciated.
$$ \begin{align} & {}\quad (x^{11} + x^{10}) + (x^9 + x^8)+(x^7+x^6)+(x^5+x^4)+(x^3+x^2 )+( x + 1)\\[8pt] & =x^{10}(x+1)+x^8(x+1)+x^6(x+1)+x^4(x+1)+x^2(x+1)+(x+1)\\[8pt] & =(x+1)(x^{10}+x^8+x^6+x^4+x^2+1)\\[8pt] & =(x+1)(x^8(x^2+1)+x^4(x^2+1)+x^2+1)\\[8pt] & =(x+1)((x^2+1)(x^8+x^4+1))\\[8pt] & =(x+1)(x^2+1)(x^4+1-x^2)(x^4+1+x^2)\\[8pt] & =(x+1)(x^2+1)(x^4+1-x^2)(x^2+1-x)(x^2+1+x) \end{align} $$
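One can confirm the factorization with sympy (my addition); the factors are exactly the cyclotomic polynomials $\Phi_d$ for the divisors $d>1$ of $12$:

```python
from sympy import symbols, expand

x = symbols('x')
p = sum(x**k for k in range(12))  # x^11 + x^10 + ... + x + 1
product = ((x + 1) * (x**2 + 1) * (x**2 - x + 1)
           * (x**2 + x + 1) * (x**4 - x**2 + 1))
assert expand(product - p) == 0
```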
{ "language": "en", "url": "https://math.stackexchange.com/questions/167981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Big List of Fun Math Books To be on this list the book must satisfy the following conditions: * *It doesn't require an enormous amount of background material to understand. *It must be a fun book, either in recreational math (or something close to) or in philosophy of math. Here are my two contributions to the list: * *What is Mathematics? Courant and Robbins. *Proofs that Really Count. Benjamin and Quinn.
Does God Play Dice? The New Mathematics of Chaos by Ian Stewart This book explains a very complicated topic very nicely. I liked the simple calculator example, which shows chaos arising from a basic map like $x^2$. http://www.amazon.com/Does-Play-Dice-Mathematics-Chaos/dp/0631232516 How to Cut a Cake: And Other Mathematical Conundrums by Ian Stewart This one is pure fun. http://www.amazon.com/How-Cut-Cake-Mathematical-Conundrums/dp/0199205906/ref=sr_1_1?s=books&ie=UTF8&qid=1342164338&sr=1-1&keywords=how+to+cut+a+cake
{ "language": "en", "url": "https://math.stackexchange.com/questions/168019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "58", "answer_count": 27, "answer_id": 17 }
number of distinct hamiltonian-path graphs that have equal number of vertices and share degree sequence There definitely are traceable graphs (graphs containing a Hamiltonian path) that have an equal number of vertices and the same degree sequence. For example, graph A has vertex A with degree three, vertex B with degree four, vertex C with degree three, and graph B has vertex A' with degree three, vertex B' with degree four, C' with degree three, and so forth. So, what would be the number of distinct traceable graphs like the above? A bound on the number would also be fine.
I don't know if I have understood you correctly, but I would say there is at least an exponential lower bound. Consider the following graph $G$ with $n$ vertices and $m=n/2-1$ orange edges. There is a Hamiltonian cycle in $G$ that avoids all the orange edges. Let $S$ be a set of $m/2$ of the orange edges, and let $G_S$ be the graph $G$ with the edges in $S$ deleted. We denote by $\Gamma$ the set of all such graphs $G_S$. Deleting an orange edge turns two degree-4 vertices into degree-3 vertices, independently of the other deleted edges. Thus the degree sequence is the same for all graphs in $\Gamma$. Let us now bound the number of graphs in $\Gamma$. We have ${m \choose m/2}=2^{\Theta(m)}$ different ways to select $S$. This yields a lower bound of $2^{\Omega(n)}$ for the number of different Hamiltonian graphs with the same degree sequence. Notice that every isomorphism class in $\Gamma$ has only constant size.
{ "language": "en", "url": "https://math.stackexchange.com/questions/168076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Regarding sum of subsets of $\mathbb{R}^n$ Let $A$, $B$ be open subsets of $\mathbb{R}^n$. Is $A+B$ an open subset of $\mathbb{R}^n$? The same question with open sets replaced by compact sets. I know that for closed sets the result is not true.
For $A, B$ open, I think the quickest way is to note that $A + B = \bigcup_{a \in A} (a + B)$. If the sets are instead compact, then note that addition is a continuous map $\mathbf R^n \times \mathbf R^n \to \mathbf R^n$, and that the product of two compact spaces is compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/168141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Given that $x=\dfrac 1y$, show that $∫\frac {dx}{x \sqrt{(x^2-1)}} = -∫\frac {dy}{\sqrt{1-y^2}}$ Given that $x=\dfrac 1y$, show that $\displaystyle \int \frac 1{x\sqrt{x^2-1}}\,dx = -\int \frac 1{\sqrt{1-y^2}}\,dy$ I have no idea how to prove it. Here is a link to WolframAlpha showing how to integrate the left side.
Let's treat this like a $u$-substitution problem. $$\int \frac 1{x\sqrt{x^2-1}}\,dx$$ Let $u = \frac{1}{x}$, so that $du = \frac{-1}{x^2}dx$. Then $$\int \frac {x}{x^2\sqrt{x^2-1}}dx = -\int \frac {\frac{1}{u}}{\sqrt{\frac{1}{u^2} - 1}}du = -\int \frac{1}{u\sqrt{1/u^2 - 1}}du = -\int \frac{1}{\sqrt{1 - u^2}}du$$ I just happen to use $u$ because that's how I teach it, but calling it $y$ is just the same. Most importantly, there is no need to actually integrate anything.
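A numerical check that the two definite integrals agree under $y=1/x$ (my addition; the midpoint rule and the interval $[2,5]$ are arbitrary illustrative choices):

```python
import math

def midpoint(f, a, b, n=200000):
    """Composite midpoint rule for the integral of f over [a, b]."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

a, b = 2.0, 5.0
I1 = midpoint(lambda x: 1 / (x * math.sqrt(x * x - 1)), a, b)
# With y = 1/x the limits flip, so the minus sign becomes integration from 1/b to 1/a:
I2 = midpoint(lambda y: 1 / math.sqrt(1 - y * y), 1 / b, 1 / a)
assert abs(I1 - I2) < 1e-8
```

Both values agree with the closed form $\arcsin(1/a)-\arcsin(1/b)$.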
{ "language": "en", "url": "https://math.stackexchange.com/questions/168195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Finding the critical points of $\sin(x)/x$ and $\cosh(x^2)$ Could someone help me solve this: What are all critical points of $f(x)=\sin(x)/x$ and $f(x)=\cosh(x^2)$? Mathematica solutions are also accepted.
As mentioned, there is a solution in each interval $(k \pi, (k+1)\pi)$. This solution can't be expressed in "closed form", but there is a series in negative powers of $k$: $$x = (k+1/2)\pi -{\frac {1}{k\pi }}+{\frac {1}{2 \pi \,{k}^{2}}}-{\frac {3\,{ \pi }^{2}+8}{12{\pi }^{3}{k}^{3}}}+{\frac {{\pi }^{2}+8}{8{\pi }^{3} {k}^{4}}}-{\frac {15\,{\pi }^{4}+240\,{\pi }^{2}+208 }{{240 \pi }^{5}{k}^{5}}}+{\frac {3\,{\pi }^{4}+80\,{\pi }^{2}+208}{96{\pi }^{5}{k}^{6}}}+\ldots $$ It looks to me like this converges for $k \ge 1$ (I'm not sure about $k=1$).
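To see how good the expansion is, one can compare its first few terms against a bisection solve of $\tan x=x$ (my addition; only the terms through $1/k^2$ are used here):

```python
import math

def critical_point(k, tol=1e-13):
    """Root of g(x) = x*cos(x) - sin(x) (i.e. tan x = x) in (k*pi, (k+1)*pi), by bisection.
    g changes sign exactly once on that interval."""
    g = lambda x: x * math.cos(x) - math.sin(x)
    lo, hi = k * math.pi, (k + 1) * math.pi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

k = 10
approx = (k + 0.5) * math.pi - 1 / (k * math.pi) + 1 / (2 * math.pi * k * k)
assert abs(critical_point(k) - approx) < 1e-3  # the next term is O(1/k^3)
```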
{ "language": "en", "url": "https://math.stackexchange.com/questions/168273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Solving for unknown inside square root Sorry if this is a very primitive question, but I'm really not sure about this kind of situation. Imagine the following equation, where $a$, $b$ and $c$ are known numbers and $x$ is the unknown variable: $$a\sqrt{bx}=c$$ Is it OK in this case to do it like $$a^2bx=c^2\;?$$ If not, how does one solve such an equation?
Yes, this is fine, provided that $a$ and $c$ have the same algebraic sign. When you solve the second equation, you get $$x=\frac{c^2}{a^2b}\;.$$ Now try substituting that into the original equation: $$a\sqrt{\frac{bc^2}{a^2b}}=a\sqrt{\frac{c^2}{a^2}}=a\left|\frac{c}a\right|\;.\tag{1}$$ If $a$ and $c$ have the same algebraic sign, $\left|\dfrac{c}a\right|=\dfrac{c}a$, and $(1)$ can be simplified to $a\left(\dfrac{c}a\right)=c$, as desired. If one of $a$ and $c$ is positive and the other negative, the original equation has no solution, since by convention $\sqrt{bx}$ denotes the non-negative square root of $bx$.
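In code (my addition), the squared solution together with the sign check described above looks like:

```python
import math

def solve_sqrt(a, b, c):
    """Solve a*sqrt(b*x) = c for x. Squaring gives the candidate x = c^2/(a^2*b),
    but it is valid only when a and c have the same algebraic sign (sqrt >= 0)."""
    x = c * c / (a * a * b)
    return x if a * c >= 0 else None

assert solve_sqrt(2, 3, 6) == 3.0    # check: 2*sqrt(3*3) = 6
assert solve_sqrt(-2, 3, 6) is None  # no solution: left side <= 0, right side > 0
assert 2 * math.sqrt(3 * solve_sqrt(2, 3, 6)) == 6.0
```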
{ "language": "en", "url": "https://math.stackexchange.com/questions/168324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
A limit question related to the nth derivative of a function This evening I thought of the following question. It isn't related to homework, but it seems very challenging to me, and I take some interest in it. Let's consider the following function: $$ f(x)= \left(\frac{\sin x}{x}\right)^\frac{x}{\sin x}$$ I wonder what is the first derivative (1st, 2nd, 3rd, ...) such that $\lim\limits_{x\to0} f^{(n)}(x)$ is different from $0$ and $\pm\infty$, where $f^{(n)}(x)$ is the $n$th derivative of $f(x)$ (if such a case is possible). I tried to use W|A, but it simply fails to work out such limits. Maybe I need the W|A Pro version.
First of all, note that $$ f(x)=\left(\frac{\sin(x)}{x}\right)^{\Large\frac{x}{\sin(x)}}\tag{1} $$ is an even function. This means that all the odd terms in the power series will be zero. Using the power series for $\log(1+x)$, we get $$ \begin{align} &\log\left(\left(1-\frac16x^2+\frac{1}{120}x^4+O\left(x^6\right)\right)^{\Large1+\frac16x^2+\frac{7}{360}x^4+O\left(x^6\right)}\right)\\ &=\left(-\frac16x^2-\frac{1}{180}x^4+O\left(x^6\right)\right)\left(1+\frac16x^2+\frac{7}{360}x^4+O\left(x^6\right)\right)\\ &=-\frac16x^2-\frac{1}{30}x^4+O\left(x^6\right)\tag{2} \end{align} $$ Then we apply the power series for $e^x$ to get $$ f(x)=1-\frac16x^2-\frac{7}{360}x^4+O\left(x^6\right)\tag{3} $$ Of course, using more terms in the power series for $\dfrac{\sin(x)}{x}$ and $\dfrac{x}{\sin(x)}$, we could get more terms for $f(x)$. To get the derivatives at $x=0$, you can just use the fact that the Taylor series near $0$ is $$ f(x)=\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}x^n\tag{4} $$ to get that $f^{(n)}(0)=0$ for all odd $n$, and $$ \begin{align} f(0)&=1\\ f''(0)&=-\frac13\\ f^{(4)}(0)&=-\frac{7}{15}\\ &\text{etc.} \end{align} $$
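The expansion $(3)$ is easy to sanity-check numerically (my addition; the tolerance $10x^6$ is a loose bound on the omitted terms):

```python
import math

def f(x):
    t = math.sin(x) / x
    return t ** (x / math.sin(x))

# f(x) = 1 - x^2/6 - 7x^4/360 + O(x^6), so the error should shrink like x^6
for x in (0.2, 0.1, 0.05):
    approx = 1 - x**2 / 6 - 7 * x**4 / 360
    assert abs(f(x) - approx) < 10 * x**6
```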
{ "language": "en", "url": "https://math.stackexchange.com/questions/168369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Is there an explicit way to determine $\mathrm{Mat}_n(R[X_1,\dots,X_m])\simeq\mathrm{Mat}_n(R)[X_1,\dots,X_m]$? For a commutative ring $R$, let $\mathrm{Mat}_n(R[X_1,\dots,X_m])$ denote the matrix ring with entries from $R[X_1,\dots,X_m]$, and let $\mathrm{Mat}_n(R)[X_1,\dots,X_m]$ denote the polynomial ring with coefficients in $\mathrm{Mat}_n(R)$. Is there an easy way to see that both structures are isomorphic as rings? Even experimenting with just one indeterminate at small cases of $n$, I'm having difficulty finding a suitable map to verify. What is the natural ring isomorphism here? Thanks.
There is an evident map $M_n(R)\to M_n(R[X_1,\dots,X_m])$, which is injective and a map of rings, so we can identify the elements of $M_n(R)$ with their images in $M_n(R[X_1,\dots,X_m])$. On the other hand, for each $i\in\{1,\dots,m\}$ let $\underline X_i$ be the element of $M_n(R[X_1,\dots,X_m])$ which is a diagonal matrix all of whose diagonal entries are $X_i$, so that $\underline X_i=X_i\cdot I_n$, with $I_n\in M_n(R[X_1,\dots,X_m])$ the identity matrix. An element $A$ of $M_n(R[X_1,\dots,X_m])$ can be written in exactly one way as a finite sum $$\sum_{i_1,\dots,i_m\geq0} a_{i_1,\dots,i_m}\underline X_1^{i_1}\cdots \underline X_m^{i_m}$$ with the $a_{i_1,\dots,i_m}$ elements of $M_n(R)$. That's where the map comes from. For all $i_1,\dots,i_m\geq0$ and all $i$, $j\in\{1,\dots,n\}$, the $(i,j)$th entry of the matrix $a_{i_1,\dots,i_m}$ is the coefficient of $X_1^{i_1}\cdots X_m^{i_m}$ in the $(i,j)$th entry of $A$. Alternatively, let us write $S=R[X_1,\dots,X_m]$. The ring $M_n(S)$ is the endomorphism ring of the free left $S$-module $S^n$ of rank $n$. One can check that there is a canonical isomorphism $$\hom_S(S^n,S^n)\to S\otimes_R\hom_R(R^n,R^n)$$ and, since $\hom_R(R^n,R^n)\cong M_n(R)$, this tells us that $$M_n(S)\cong S\otimes_R M_n(R)$$ We are thus left with showing that $S\otimes_R M_n(R)\cong M_n(R)[X_1,\dots,X_m]$. It is in fact true that for all $R$-algebras $\Lambda$ we have an isomorphism $$R[X_1,\dots,X_m]\otimes_R\Lambda\cong\Lambda[X_1,\dots,X_m],$$ and we want this when $\Lambda=M_n(R)$. Can you do this?
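The identification can be made concrete for small cases. As an illustration (not part of the answer itself; `coeff_matrix` is a helper name introduced here), the following SymPy sketch extracts the matrix coefficients of $2\times 2$ polynomial matrices and checks that multiplying in $M_2(R[X])$ agrees coefficient-by-coefficient with the convolution product in $M_2(R)[X]$:

```python
from sympy import symbols, Matrix, expand

X = symbols('X')

def coeff_matrix(M, k):
    """Matrix whose (i,j) entry is the coefficient of X**k in M[i,j]."""
    return M.applyfunc(lambda e: expand(e).coeff(X, k))

A = Matrix([[1 + X, 2], [3*X, X]])
B = Matrix([[X, 1], [3, 2 + X]])
C = (A * B).applyfunc(expand)

# Matrix coefficients of A and B (each has degree <= 1)
A0, A1 = coeff_matrix(A, 0), coeff_matrix(A, 1)
B0, B1 = coeff_matrix(B, 0), coeff_matrix(B, 1)

# Convolution product in Mat_2(R)[X]: (A0 + A1 X)(B0 + B1 X)
C0 = A0 * B0
C1 = A0 * B1 + A1 * B0
C2 = A1 * B1

assert coeff_matrix(C, 0) == C0
assert coeff_matrix(C, 1) == C1
assert coeff_matrix(C, 2) == C2
print("multiplication agrees coefficient-by-coefficient")
```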
{ "language": "en", "url": "https://math.stackexchange.com/questions/168443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Are these integrals of motion? What are the integrals of motion of a system with the following Lagrangian? $$L=a\dot{\phi_1}^2+b\dot{\phi_2}^2+c\cos(\phi_1-\phi_2)$$? where $a,b,c$ are constants, $\phi_1,\phi_2$ are angles and $\dot{\phi_i}$ represents differentiation wrt time. I believe the Hamiltonian is conserved, but are there any more? Perhaps there is an isotropy of space here, since $\phi_1,\phi_2$ only exist as a difference $\phi_1-\phi_2$? So angular momentum? Are the above 2 right? Are there any more? Thanks. ADDED: "integrals of motion" are sometimes referred to elsewhere as "constants of motions" or "conserved quantities".
Just write down the Euler-Lagrange equations and you will get $$ 2a\ddot\phi_1=-c\sin(\phi_1-\phi_2) $$ $$ 2b\ddot\phi_2=c\sin(\phi_1-\phi_2). $$ Now, sum these two equations and you will get $$ a\dot\phi_1+b\dot\phi_2=\text{constant}, $$ the conserved momentum coming from the shift symmetry $\phi_i\mapsto\phi_i+\epsilon$ that you noticed. Together with the Hamiltonian that gives two constants of motion. Indeed, in the case $a=b$ it is not difficult to realize that a change of coordinates to $\Phi_1=\phi_1+\phi_2$ and $\Phi_2=\phi_1-\phi_2$ can make all things somewhat clearer.
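The conservation can be checked numerically. The Lagrangian in the question gives the Euler-Lagrange equations $2a\ddot\phi_1=-c\sin(\phi_1-\phi_2)$ and $2b\ddot\phi_2=c\sin(\phi_1-\phi_2)$, so $a\dot\phi_1+b\dot\phi_2$ should be constant along any trajectory. A pure-Python sketch with an ad-hoc RK4 stepper (parameter values are arbitrary):

```python
import math

a, b, c = 1.0, 2.0, 0.7  # arbitrary test parameters

def deriv(state):
    p1, p2, v1, v2 = state
    s = c * math.sin(p1 - p2)
    return (v1, v2, -s / (2 * a), s / (2 * b))

def rk4_step(state, dt):
    k1 = deriv(state)
    k2 = deriv(tuple(s + dt/2 * k for s, k in zip(state, k1)))
    k3 = deriv(tuple(s + dt/2 * k for s, k in zip(state, k2)))
    k4 = deriv(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt/6 * (u + 2*v + 2*w + z)
                 for s, u, v, w, z in zip(state, k1, k2, k3, k4))

state = (0.3, -0.1, 0.5, -0.2)           # phi1, phi2, dphi1, dphi2
momentum0 = a * state[2] + b * state[3]
for _ in range(10000):                    # integrate to t = 10
    state = rk4_step(state, 0.001)
momentum1 = a * state[2] + b * state[3]
drift = abs(momentum1 - momentum0)
print(drift)
```

Since $a\dot\phi_1+b\dot\phi_2$ is linear in the state, Runge-Kutta methods preserve it essentially exactly, so the drift is at the level of floating-point roundoff.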
{ "language": "en", "url": "https://math.stackexchange.com/questions/168582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Evaluating $\int_{0}^{\infty}(\ln \tan^2 bx)/(a^2+x^2)\ dx$ Some time ago I came across one of the integrals, which still goes over my mind: $$\int_{0}^{\infty}\frac{\ln \tan^2(bx)}{a^2+x^2}dx$$ a and b are parameters. I would be interested in possible solutions with complex analysis and without it as well.
If one can prove that the given integral converges, it's not hard to compute its value. Let's assume from now that the integral does converge. Since $\tan^2(-bx)=\tan^2(bx)$ and $(-a)^2=a^2$, there is no loss of generality in assuming that $a,b>0$. Then $$ I(a,b)=\int_0^\infty\frac{\ln\tan^2(bx)}{a^2+x^2}dx=2b\int_0^\infty\frac{\ln|\tan x|}{a^2b^2+x^2}dx=b\int_\mathbb{R}\frac{\ln|\tan x|}{a^2b^2+x^2}dx. $$ Consider the function $$ f: \mathbb{C} \to \mathbb{C},\ f(z)=b\frac{\ln|\tan z|}{a^2b^2+z^2}. $$ Given $n \in \mathbb{N}$, with $0<1/n<ab<n$, we denote by $\Delta_n$ the bounded region of $\mathbb{C}$ whose boundary consists of the segment $$ L_n=\{ x-\frac{i}{n}:\ |x|\le n\pi\} $$ and the upper half circle $$ \Gamma_n=\{\gamma_n(t)=-\frac{i}{n}+(n+\frac{1}{8})\pi e^{it}: \ 0 \le t \le \pi\}. $$ The set of poles of $f$ that lie inside $\Delta_n$, is $P=\{iab, k\pi/2:\ |k|\le 2n\}$. For every $k$ with $|k|\le 2n$, $z_k=k\pi/2$ is a pole of order 2 with $$ \text{Res}(f,z_k)=\lim_{z \to 0}\frac{d}{dz}(z^2f(z+z_k))=0, $$ and since $$ \text{Res}(f,iab)=\frac{1}{2ia}\ln\tanh(ab), $$ we have $$ \int_{\partial\Delta_n}f(z)dz=i2\pi\text{Res}(f,iab)=\frac{\pi}{a}\ln\tanh(ab). $$ Hence $$ \int_{L_n}f(z)dz=\frac{\pi}{a}\ln\tanh(ab)-J_n $$ with $$ J_n:=\int_{\Gamma_n}f(z)dz=i\left(n+\frac{1}{8}\right)\pi\int_0^\pi e^{it}f((n+\frac{1}{8})\pi e^{it}-\frac{i}{n})dt. 
$$ Notice that \begin{eqnarray} |J_n|&\le&(n+\frac{1}{8})\pi\int_0^\pi|f((n+\frac{1}{8})\pi e^{it}-\frac{i}{n})|dt\cr &\le& \frac{(n+\frac{1}{8})\pi}{((n+\frac{1}{8})\pi-\frac{1}{n})^2-a^2b^2}\int_0^\pi|\ln|\tan((n+\frac{1}{8})\pi e^{it}-\frac{i}{n})||dt\cr &=&\frac{(n+\frac{1}{8})\pi}{((n+\frac{1}{8})\pi-1/n)^2-a^2b^2}\int_0^\pi\left|\ln\left|\frac{\exp(i(2n+\frac{1}{4})\pi e^{it}+\frac{2}{n})-1}{\exp(i(2n+\frac{1}{4})\pi e^{it}+\frac{2}{n})+1}\right|\right|dt\cr &\le&\frac{(n+\frac{1}{8})\pi}{((n+\frac{1}{8})\pi-\frac{1}{n})^2-a^2b^2}A_n, \end{eqnarray} with $$ A_n=\int_0^\pi |\ln|e^{i(2n+\frac{1}{4})\pi\cos t}e^{\frac{2}{n}-(2n+\frac{1}{4})\pi\sin t}-1|+|\ln|e^{i(2n+\frac{1}{4})\pi\cos t}e^{-(2n+\frac{1}{4})\pi\sin t+\frac{2}{n}}+1||dt. $$ $A_n$ is clearly bounded, so we conclude that $J_n \to 0$ as $n \to \infty$, and $$ I(a,b)=\lim_{n \to \infty}\int_{L_n}f(z)dz=\frac{\pi}{a}\ln\tanh(ab). $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/168642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
How to prove the uniqueness of the solution of $ax+b=0$? I have no background in mathematical analysis or the like, but I am interested to know how to prove the uniqueness of the solution of $ax+b=0$? Perhaps your answers will help me to prove other uniqueness problems.
A standard way of showing that a certain object is unique is two assume that you have two objects that satisfy the desired properties, and deduce that they must be equal (when we say "two objects", we mean two "names", but which may refer to the same object). In the case of the solutions to the equation $ax+b=0$, you have to distinguish two cases: if $a=0$, then the equation either has no solutions (if $b\neq 0$), or it has infinitely many solutions (if $b=0$). So uniqueness really only exists when $a\neq 0$. The uniqueness is based on the following fact about real numbers: For any real numbers $r$ and $s$, if $rs=0$, then $r=0$ or $s=0$. Once you have that: Claim. If $a\neq 0$, then there is at most one solution to $ax+b=0$. Proof. Suppose that both $x$ and $y$ are solutions. We aim to show that $x=y$. Since $x$ is a solution, $ax+b=0$. Since $y$ is a solution as well, $ay+b=0$. That means that $ax+b=ay+b$. Adding $-b$ to both sides we conclude that $ax=ay$. Adding $-ay$ to both sides, we obtain $ax-ay = 0$. factoring out $a$, we have $a(x-y)=0$. Since the product is $0$, then $a=0$ or $x-y=0$. Since $a\neq 0$ by assumption, we conclude that $x-y=0$, so $x=y$. Thus, if $x$ and $y$ are both solutions, then $x=y$, so there is at most one solution. $\Box$ Note that this argument works in the context of the real numbers, or other kinds of "numbers" where $rs=0$ implies $r=0$ or $s=0$. There are other situations where this is not the case. For example, if you work with "integers modulo 12" ("clock arithmetic", where $11+3 = 2$), then $2x+8 = 0$ has many different solutions $0\leq x\lt 12$: one solution is $x=2$ (since $2(2)+8 = 4+8=12=0$ in clock arithmetic), and another solution is $x=8$ since $2(8)+8 = 16+8=24 = 0$ in clock arithmetic).
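The non-uniqueness in clock arithmetic is easy to confirm by brute force (a quick sketch):

```python
# Solve 2x + 8 = 0 (mod 12) by checking every residue
solutions = [x for x in range(12) if (2 * x + 8) % 12 == 0]
print(solutions)  # [2, 8]: two distinct solutions, because 2*(x - y) = 0 (mod 12)
                  # does not force x = y (mod 12)

# By contrast, with an invertible coefficient the solution is unique:
unique = [x for x in range(12) if (5 * x + 8) % 12 == 0]
print(unique)     # [8]
```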
{ "language": "en", "url": "https://math.stackexchange.com/questions/168690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
How many elements in a ring can be invertible? If $R$ is a finite ring (with identity) but not a field, let $U(R)$ be its group of units. Is $\frac{|U(R)|}{|R|}$ bounded away from $1$ over all such rings? It's been a while since I cracked an algebra book (well, other than trying to solve this recently), so if someone can answer this, I'd prefer not to stray too far from first principles within reason.
By the way, for commutative $R$ the number $\# U(R)/ \#R$ equals $\prod_{\mathfrak{m}} \left(1-\frac{1}{\# R/\mathfrak{m}}\right)$, where the product ranges over all maximal ideals $\mathfrak{m} \subseteq R$: a finite commutative ring is a product of local rings, and in a finite local ring the units are exactly the elements outside the maximal ideal.
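For $R=\mathbb{Z}/n\mathbb{Z}$ the maximal ideals are $(p)$ for the primes $p\mid n$, and the unit ratio is Euler's product $\varphi(n)/n=\prod_{p\mid n}\left(1-\frac1p\right)$. A quick sketch confirming this with exact arithmetic:

```python
from fractions import Fraction
from math import gcd

def unit_ratio(n):
    """#U(Z/nZ) / #(Z/nZ), counted directly."""
    units = sum(1 for k in range(n) if gcd(k, n) == 1)
    return Fraction(units, n)

def prime_factors(n):
    p, out = 2, set()
    while p * p <= n:
        while n % p == 0:
            out.add(p)
            n //= p
        p += 1
    if n > 1:
        out.add(n)
    return out

for n in range(2, 200):
    product = Fraction(1)
    for p in prime_factors(n):
        product *= Fraction(p - 1, p)
    assert unit_ratio(n) == product
print("verified for 2 <= n < 200")
```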
{ "language": "en", "url": "https://math.stackexchange.com/questions/168875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 2 }
Order of a product of subgroups. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$. Let $H$, $K$ be subgroups of $G$. Prove that $o(HK) = \frac{o(H)o(K)}{o(H \cap K)}$. I need this theorem to prove something.
Let $T = H\cap K$; this is a subgroup of $H$. Write $HK/K$ for the set of cosets $\{hK : h \in H\}$ (this makes sense as a set of cosets of $K$ in $G$ even when $HK$ is not a subgroup; note that $HK$ is exactly the union of these cosets). Consider the function $f: H/T \to HK/K$ where $f(hT)=hK$ for each left coset $hT \in H/T$. This is well defined: if $hT=gT$ then $h^{-1}g \in T \subseteq K$, so $hK=gK$. Suppose $f(hT)=f(gT)$ for some $hT, gT \in H/T$. Then $hK=gK$, so $h^{-1}g \in K$. But since $h, g \in H$, also $h^{-1}g \in H$. So $h^{-1}g \in T$, and then $hT=gT$. So $f$ is injective. Now take $(hk)K \in HK/K$ where $h \in H$ and $k \in K$. Then $(hk)K=hK$, so there exists $hT \in H/T$ with $f(hT)= (hk)K$. So $f$ is surjective. Since $f$ is a bijection, $|H/T|=|HK/K|$. The cosets in $HK/K$ are disjoint, each has $|K|$ elements, and their union is $HK$, so $|HK|=|HK/K|\,|K|$. Then $\frac {|H|}{|T|}= \frac {|HK|}{|K|}$. Thus $|HK|= \frac {|H||K|}{|T|} = \frac {|H||K|}{|H \cap K|}$. (In particular, no normality assumption is needed: the count is valid even when $HK$ fails to be a subgroup.)
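The counting identity is easy to sanity-check on a small example. The sketch below takes two subgroups of $S_3$ (permutations as tuples, with `compose(p, q)` applying `q` first) and compares $|HK|$ with $|H||K|/|H\cap K|$; here $HK$ happens not to be a subgroup, and the count still works:

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]]: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2)
s = (1, 0, 2)   # transposition (0 1)
t = (2, 1, 0)   # transposition (0 2)

H = {e, s}
K = {e, t}

HK = {compose(h, k) for h in H for k in K}
assert len(HK) == len(H) * len(K) // len(H & K)  # 4 == 2*2/1
print(len(HK))  # 4
```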
{ "language": "en", "url": "https://math.stackexchange.com/questions/168942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "48", "answer_count": 6, "answer_id": 5 }
Uniqueness of morphism in definition of category theory product (etc) I'm trying to understand the categorical definition of a product, which describes them in terms of existence of a unique morphism that makes such-and-such a diagram commute. I don't really feel I've totally understood the motivation for this definition: in particular, why must that morphism be unique? What's the consequence of omitting the requirement for uniqueness in, say, Set?
Well, a set-map $f:X\to A\times B$ should be uniquely determined by its components $f_A:X\to A$ and $f_B:X\to B$, and conversely any two functions $X\to A$ and $X\to B$ should combine to a map $X\to A\times B$. This is basically a tautology in terms of the usual construction of the set-product: writing $f(x)=(a_x,b_x)$ yields the functions $x\mapsto a_x$ and $x\mapsto b_x$. In other words $f\mapsto (f_A,f_B)$ should be a bijection $$\text{Hom}(X,A\times B)\to \text{Hom}(X,A)\times \text{Hom}(X,B).$$ The uniqueness (resp. existence) part corresponds to injectivity (resp. surjectivity) of this map. If you drop the uniqueness part, you will get many products. For example $A\times B\times Z$, for any set $Z$, will then be a product (with the projections to $A,B$). Indeed, the map $$\text{Hom}(X,A\times B\times Z)\to \text{Hom}(X,A)\times \text{Hom}(X,B)$$ (sending a map to its $A,B$-projections) is surjective but highly non-injective: we are free to choose the $Z$-component.
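In Set this bijection can be checked exhaustively for small finite sets. The following sketch enumerates all maps (as tuples of values) and confirms that $f \mapsto (\pi_A\circ f,\ \pi_B\circ f)$ is a bijection $\mathrm{Hom}(X, A\times B)\to\mathrm{Hom}(X,A)\times\mathrm{Hom}(X,B)$:

```python
from itertools import product

X = [0, 1]
A = ['a0', 'a1']
B = ['b0', 'b1', 'b2']
AxB = list(product(A, B))

def all_maps(dom, cod):
    """All functions dom -> cod, encoded as tuples of values."""
    return list(product(cod, repeat=len(dom)))

maps_to_AxB = all_maps(X, AxB)                     # |A x B|^|X| = 6^2 = 36 maps
pairs = set(product(all_maps(X, A), all_maps(X, B)))  # 4 * 9 = 36 pairs

# f  ->  (pi_A . f, pi_B . f)
image = [(tuple(v[0] for v in f), tuple(v[1] for v in f)) for f in maps_to_AxB]

assert len(set(image)) == len(image)   # injective (uniqueness)
assert set(image) == pairs             # surjective (existence)
print(len(image))  # 36
```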
{ "language": "en", "url": "https://math.stackexchange.com/questions/169023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Can mathematical definitions of the form "P if Q" be interpreted as "P if and only if Q"? Possible Duplicate: Alternative ways to say “if and only if”? So when I come across mathematical definitions like "A function is continuous if...."A space is compact if....","Two continuous functions are homotopic if.....", etc when is it okay to assume that the definition includes the converse as well?
I agree largely. However, be warned that sometimes there are several ways to define things, which are sometimes not equivalent. For example, in metric spaces, we can define continuity using $\varepsilon$-$\delta$ balls, and we can show this definition to be equivalent with one in terms of open sets (the latter would be a proposition, mind you). When we enter the realm of topology, we cannot speak of distance anymore, and we define continuity in terms of open sets. It's not true that a definition in terms of $\varepsilon$-$\delta$ balls is or is not equivalent - it just does not mean anything! Therefore, the fact that even though some function on a topological space may be continuous (as defined in terms of open sets), this does not always imply that we can speak of an "open ball" (which we would conclude from the metric definition of continuity).
{ "language": "en", "url": "https://math.stackexchange.com/questions/169158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Differential Inequality Help I have the inequality $f''(x)x + f'(x) \leq 0$. Also, $f''(x)<0$ and $f'(x)>0$ and $x \in R^+$, and I need to figure out when it is true. I know it is a fairly general question, but I couldn't find any information in several textbooks I have skimmed. Also, I am not sure if integrating would require a sign reversal or not, so I can't go ahead and try to manipulate it myself. Any help or mention of a helpful source would be much appreciated. edit: forgot to mention $f(x)\geq 0$ for every $x \in R^+$
No. It is not true that if $f(x) \geq 0$, $f'(x) >0$ and $f''(x) < 0$ for all $x \in \mathbb{R}^+$, then $x f''(x) + f'(x) \leq 0$. Below are a class of counter examples. Consider $f(x) = 1-\exp(-x) > 0$. We then have that $f'(x) = \exp(-x) > 0$ and $f''(x) = -\exp(-x) < 0$. However, $$xf''(x) + f'(x) = -x\exp(-x) + \exp(-x) = (1-x) \exp(-x)$$ which is negative only when $x>1$. In general, you can consider $f(x) = 1 - \exp(-\alpha x) > 0$, where $\alpha > 0$. We then have that $f'(x) = \alpha \exp(-\alpha x) > 0$ and $f''(x) = -\alpha^2 \exp(-\alpha x) < 0$. Hence, we get that $$x f''(x) + f'(x) = -\alpha^2 x\exp(-\alpha x) + \alpha \exp(-\alpha x) = \alpha \exp(-\alpha x) (1 - \alpha x)$$ The above is negative only when $x > \dfrac1{\alpha}$.
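A quick numeric sanity check of the first counterexample, $f(x)=1-e^{-x}$ (a sketch):

```python
import math

# x f''(x) + f'(x) = (1 - x) e^{-x} for f(x) = 1 - exp(-x)
f2_plus_f1 = lambda x: x * (-math.exp(-x)) + math.exp(-x)

assert f2_plus_f1(0.5) > 0   # the claimed inequality fails for x < 1
assert f2_plus_f1(2.0) < 0   # it holds only for x > 1

# f itself satisfies all the hypotheses on (0, inf):
for x in (0.1, 1.0, 5.0):
    assert 1 - math.exp(-x) > 0    # f > 0
    assert math.exp(-x) > 0        # f' > 0
    assert -math.exp(-x) < 0       # f'' < 0
print("counterexample confirmed")
```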
{ "language": "en", "url": "https://math.stackexchange.com/questions/169219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Convergence of $\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$ What theorem should I use to show that $$\sum_{n=0}^\infty(-1)^n\frac{4^{n-2}(x-2)}{(n-2)!}$$ is convergent no matter what value $x$ takes?
By "theorem", your teacher probably just means that $(x-2)$ can be moved outside of the series (distributive property over infinite sums), so that its convergence doesn't depend on $x$. Proving the convergence of the series itself would require the series to actually make sense, which it currently doesn't. Though if you nudge the index forward so that all values are defined it will certainly converge, as you can see by applying the ratio test, which may also be what your teacher was referring to.
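With the index shifted so that every term is defined (start at $n=2$, i.e. $k=n-2\ge 0$), the series collapses to $e^{-4}(x-2)$, since $\sum_{k\ge0}(-1)^k 4^k/k! = e^{-4}$ and the factor $(x-2)$ comes out front. A numeric sketch:

```python
import math

def partial_sum(terms):
    """sum_{k=0}^{terms-1} (-1)^k 4^k / k!  (the (x-2) factor pulled out)."""
    return sum((-1)**k * 4**k / math.factorial(k) for k in range(terms))

s = partial_sum(60)
print(s, math.exp(-4))
assert abs(s - math.exp(-4)) < 1e-12
```

The ratio of consecutive terms is $-4/(k+1)\to 0$, which is the ratio-test argument for convergence independent of $x$.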
{ "language": "en", "url": "https://math.stackexchange.com/questions/169282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Connection between bias/variance tradeoff and squared error The only explanations I've seen of the bias/variance tradeoff rely on rewriting the squared error of an estimator as the sum of bias and variance terms. How does the bias/variance tradeoff work if the loss function is not squared error? Thanks in advance!
$\mathrm{MSE}=\mathrm{bias}^2+\mathrm{variance}$, so the tradeoff is obvious for MSE. If your loss function is such that the expected loss is some other function of bias and variance, then there will be a bias-variance tradeoff for it too; if the expected loss does not decompose into bias and variance terms, there need not be any such tradeoff.
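The decomposition is an algebraic identity, exact even on a finite sample of estimates (with the population variance convention). A Monte-Carlo sketch with NumPy, using a deliberately biased shrinkage estimator of a mean:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 1.0                      # true parameter
# shrink each sample mean toward 0: biased, but lower variance
estimates = 0.8 * rng.normal(theta, 1.0, size=(20000, 10)).mean(axis=1)

errors = estimates - theta
mse = np.mean(errors**2)
bias = np.mean(estimates) - theta
var = np.var(estimates)          # ddof=0, the population convention

print(mse, bias**2 + var)
assert np.isclose(mse, bias**2 + var)
```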
{ "language": "en", "url": "https://math.stackexchange.com/questions/169339", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Fermat's theorem on sums of two squares for composite numbers Suppose that there are natural numbers $a$ and $b$. Now we form $c = a^2 + b^2$. This time, $c$ is even. Will this $c$ have only one possible pair $(a,b)$? edit: what happens if $c$ is an odd number?
If $a,b$ are both even or both odd, $c$ is even; $c$ is odd iff $a$ and $b$ have opposite parity. Writing $a=2A$ and $b=2B+1$ gives $c\equiv1\pmod 4$, so if $c\equiv3\equiv-1\pmod 4$ there is no solution. More generally, let $d=(a,b)$; then $d^2\mid c$, and we can write $c=d^2C$ and $\frac{a}{A}=\frac{b}{B}=d$, so that $(A,B)=1$ and $C=A^2+B^2$. Then $A$ and $B$ are not both even. The case of $A,B$ of opposite parity has been dealt with above. If $A$ and $B$ are both odd, let $A=2m+1$, $B=2n+1$. Then $$A^2+B^2=(2m+1)^2+(2n+1)^2=2(p^2+q^2)=(p+q)^2+(p-q)^2,$$ where $p=m+n+1$ and $q=m-n$; here $p+q=2m+1$ is odd, so $p$ and $q$ are of opposite parity. So the problem boils down to finding $A,B$ such that $(A,B)=1$ and $A,B$ are of opposite parity, and $C\equiv1\pmod 4$ is a necessary condition for solubility. Proof of sufficiency: the Brahmagupta identity can be used to prove that if $C$ is a product of $n$ distinct primes of the form $4r+1$, it can be represented as a sum of two squares in $2^{n-1}$ ways.
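The final counting claim can be spot-checked by brute force, counting representations as unordered pairs of positive squares and taking products of 1, 2, and 3 distinct primes $\equiv 1 \pmod 4$ ($5$, $5\cdot13=65$, $5\cdot13\cdot17=1105$). A sketch:

```python
from math import isqrt

def reps_as_two_squares(c):
    """Unordered pairs (a, b) with 1 <= a <= b and a*a + b*b == c."""
    out = []
    for a in range(1, isqrt(c // 2) + 1):
        b2 = c - a * a
        b = isqrt(b2)
        if b * b == b2 and b >= a:
            out.append((a, b))
    return out

for c, n in [(5, 1), (65, 2), (1105, 3)]:
    reps = reps_as_two_squares(c)
    print(c, reps)
    assert len(reps) == 2 ** (n - 1)
```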
{ "language": "en", "url": "https://math.stackexchange.com/questions/169425", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Further reading on the $p$-adic metric and related theory. In his book Introduction to Topology, Bert Mendelson asks to prove that $$(\Bbb Z,d_p)$$ is a metric space, where $p$ is a fixed prime and $$d_p(m,n)=\begin{cases} 0 \;,\text{ if }m=n \cr {p^{-t}}\;,\text{ if } m\neq n\end{cases}$$ where $t$ is the multiplicty with which $p$ divides $m-n$. Now, it is almost trivial to check the first three properties, namely, that $$d(m,n) \geq 0$$ $$d(m,n) =0 \iff m=n$$ $$d(m,n)=d(n,m)$$ and the only laborious was to check the last property (the triangle inequality). I proceeded as follows: Let $a,b,c$ be integers, and let $$a-b=p^s \cdot k$$ $$b-c=p^r \cdot l$$ where $l,k$ aren't divisible by $p$. Then $$a-c=(a-b)+(b-c)=p^s \cdot k+p^r \cdot l$$ Now we have three cases, $s>r$, $r>s$ and $r=s$. We have respectively: $$a-c=(a-b)+(b-c)=p^r \cdot(p^{s-r} \cdot k+ l)=p^r \cdot Q$$ $$a-c=(a-b)+(b-c)=p^{s} \cdot( k+p^{r-s} \cdot l)=p^s \cdot R$$ $$a-c=(a-b)+(b-c)=p^s \cdot (k+l)=p^s \cdot T$$ In any case, $$d\left( {a,c} \right) \leqslant d\left( {a,b} \right) + d\left( {b,c} \right)$$ since $$\eqalign{ & \frac{1}{{{p^r}}} \leqslant \frac{1}{{{p^s}}} + \frac{1}{{{p^r}}} \cr & \frac{1}{{{p^s}}} \leqslant \frac{1}{{{p^s}}} + \frac{1}{{{p^r}}} \cr & \frac{1}{{{p^s}}} \leqslant \frac{1}{{{p^s}}} + \frac{1}{{{p^s}}} \cr} $$ It might also be the case $k+l=p^u$ for some $u$ so that the last inequality is $$\frac{1}{{{p^{s + u}}}} \leqslant \frac{1}{{{p^s}}} + \frac{1}{{{p^s}}}$$ $(1)$ Am I missing something in the above? 
The author asks to prove that in fact, if $t=t_p(m,n)$ is the exponent of $p$, that $$t\left( {a,c} \right) \geqslant \min \left\{ {t\left( {a,b} \right),t\left( {b,c} \right)} \right\}$$ That seems to follow from the above argument, since if $s \neq r$ then $$t\left( {a,c} \right) = t\left( {a,b} \right){\text{ or }}t\left( {a,c} \right) = t\left( {b,c} \right)$$ and if $s=r$ then $$t\left( {a,c} \right) \geqslant t\left( {a,b} \right){\text{ or }}t\left( {a,c} \right) \geqslant t\left( {b,c} \right)$$ $(2)$ Is there any further reading you can suggest on $p$-adicity?
I am currently trying to learn about $p$-adic numbers and analysis too, so I too would be really interested to hear the opinions of people who know more than I do about this. I am currently using the following three texts, but don't intend to work through them fully; just enough to get something useful for a better understanding of how they can be used in number theory: * *Koblitz - "$p$-adic Numbers, $p$-adic Analysis, and Zeta-functions" - The first chapter I have found very interesting, and pretty well-written, with lots of easy exercises to get used to the concepts, as well as some harder ones to test deeper understanding. *Robert - "A Course in $p$-adic Analysis" - covers much more material at a more advanced level than Koblitz, but isn't (quite) as off-putting as it seems at first, and it possible to pick out quite a few bits from the first two chapters which are illuminating. *Borevich and Shafarevich - "Number Theory" - This one has some relatively understandable stuff on $p$-adic numbers in the first chapter which I have found really useful as it gives a different feel to the topic in a rather more old-fashioned approach.
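Regarding parts $(1)$-$(2)$ of the question itself, the ultrametric inequality $d_p(a,c)\le\max\{d_p(a,b),d_p(b,c)\}$ (equivalently, the exponent inequality $t(a,c)\ge\min\{t(a,b),t(b,c)\}$) can be spot-checked exhaustively over a small range of integers. A sketch:

```python
def vp(n, p):
    """Multiplicity of p in n (n != 0)."""
    t = 0
    while n % p == 0:
        n //= p
        t += 1
    return t

def d(m, n, p):
    """The p-adic metric d_p on the integers."""
    return 0 if m == n else p ** (-vp(m - n, p))

p = 3
pts = range(-20, 21)
for a in pts:
    for b in pts:
        for c in pts:
            assert d(a, c, p) <= max(d(a, b, p), d(b, c, p)), (a, b, c)
print("strong (ultrametric) triangle inequality holds on the sample")
```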
{ "language": "en", "url": "https://math.stackexchange.com/questions/169542", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
Inequality's root Given $\sqrt x + \sqrt y < x+y$, prove that $x+y>1$. I haven't been able to try this yet; I found it online. Any help is appreciated, thanks!
If $\sqrt{x}+\sqrt{y}<x+y$, then after squaring both sides we have $x+2\sqrt{xy}+y<x^2+2xy+y^2$. Rearrange to isolate the square root: $$2\sqrt{xy}<x^2+2xy+y^2-x-y=(x+y)^2-(x+y)=(x+y)(x+y-1)\;.$$ Can you see why this implies that $x+y>1$?
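A quick randomized sanity check of the implication (a sketch; it searches for a counterexample and finds none):

```python
import random

random.seed(1)
counterexamples = []
for _ in range(100000):
    x, y = random.uniform(0, 10), random.uniform(0, 10)
    if x**0.5 + y**0.5 < x + y and x + y <= 1:
        counterexamples.append((x, y))
print(counterexamples)  # []: the implication holds on the whole sample
```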
{ "language": "en", "url": "https://math.stackexchange.com/questions/169611", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
limit question on Lebesgue functions Let $f\in L^1(\mathbb{R})$. Compute $\lim_{|h|\rightarrow\infty}\int_{-\infty}^\infty |f(x+h)+f(x)|dx$ If $f\in C_c(\mathbb{R})$ I got the limit to be $\int_{-\infty}^\infty |f(x)|dx$. I am not sure if this is right.
Hint: if $f \ne 0 $ for $ x \in (a,b)$, and $h$ is large enough, $|f(x+h)+f(x)| \ne 0$ for $x \in (a,b) \cup (a-h,b-h)$.
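The hint says the supports of $f(\cdot+h)$ and $f$ become essentially disjoint for large $|h|$, so the limit is $2\int|f|$, twice the value guessed in the question. A numerical sketch with a sign-changing integrand (a plain Riemann sum on a fine grid; the test function is an arbitrary choice):

```python
import numpy as np

x = np.linspace(-60, 60, 240001)
dx = x[1] - x[0]
f = lambda t: np.sin(t) * np.exp(-t**2 / 4)

norm1 = np.sum(np.abs(f(x))) * dx                              # ~ integral of |f|
vals = {h: np.sum(np.abs(f(x + h) + f(x))) * dx for h in (0.0, 1.0, 20.0)}
print(vals, 2 * norm1)

# For large h the supports are essentially disjoint: the value is ~ 2 ||f||_1 ...
assert abs(vals[20.0] - 2 * norm1) < 1e-3 * norm1
# ... so the guess ||f||_1 is off by a factor of 2
assert abs(vals[20.0] - norm1) > 0.5 * norm1
```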
{ "language": "en", "url": "https://math.stackexchange.com/questions/169667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Trying to prove $\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+(\sin(\frac{\pi}{t}) -\frac{\pi}{t}\cos(\frac{\pi}{t}))^2}dt$ I posted this incorrectly several hours ago and now I'm back! So this time it's correct. I'm trying to show that for $n\geq 1$: $$\frac{2}{n+\frac{1}{2}} \leq \int_{1/(n+1)}^{1/n}\sqrt{1+\left(\sin\left(\frac{\pi}{t}\right) -\frac{\pi}{t}\cos\left(\frac{\pi}{t}\right)\right)^2}dt$$ I checked this numerically for several values of $n$ up through $n=500$ and the bounds are extremely tight. I've been banging my head against this integral for a while now and I really can see no way to simplify it as is or to shave off a tiny amount to make it more palatable. Hopefully someone can help me. Thanks.
It is not hard to show (for example, in the problem following the problem you posted, exercise 10 in Section 1-3 of Differential Geometry of Curves and Surfaces by Do Carmo) that the arc length of an arc with endpoints $x$ and $y$ is at least the length of the straight line segment connecting them. In any case, the problem only asks for a "geometrical" proof. That integral is the arc length of the curve $f(t) =(t,\sin (\pi/t))$ between the points $t=1/(n+1)$ and $t=1/n$. These points are $(1/(n+1), \sin((n+1)\pi)/(n+1))$ and $(1/n, \sin(n\pi)/n)$ (so the $y$ coordinates are $0$). Call them $A$ and $B$, respectively. The arc passes through the point $(1/(n+\frac{1}{2}),\sin((n+\frac{1}{2})\pi)/(n+\frac{1}{2}))=(1/(n+\frac{1}{2}),\pm 1/(n+\frac{1}{2}))$. Call this $C$. We see the arc length is at least the sum of the lengths of the segments $AC$ and $CB$. These each have length at least $\frac{1}{n+\frac{1}{2}}$ (draw the picture: this is the length of the perpendicular to the $x$-axis).
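The inequality (and how tight it is) can be checked numerically for particular $n$; a sketch using a fine trapezoid sum for the arc-length integral:

```python
import numpy as np

def arc_length(n, samples=400001):
    t = np.linspace(1.0 / (n + 1), 1.0 / n, samples)
    u = np.pi / t
    g = np.sqrt(1 + (np.sin(u) - u * np.cos(u))**2)
    dt = t[1] - t[0]
    return dt * (g[0] / 2 + g[1:-1].sum() + g[-1] / 2)   # trapezoid rule

for n in (1, 2, 5, 10):
    bound = 2 / (n + 0.5)
    length = arc_length(n)
    print(n, bound, length)
    assert length >= bound
```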
{ "language": "en", "url": "https://math.stackexchange.com/questions/169721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 1 }
Find Elementary Matrices E1 and E2 such that $E_2E_1A = I$ I am studying Linear Algebra part-time and would like to know if anyone has advice on solving the following type of questions: Considering the matrix: $$A = \begin{bmatrix}1 & 0 \\-5 & 2\end{bmatrix}$$ Find elementary matrices $E_1$ and $E_2$ such that $E_2E_1A = I$ Firstly, can this be re-written as $$E_2E_1 = IA^{-1}$$ and is that the same as $$E_2E_1 = A^{-1}?$$ So I tried to find $E_1$ and $E_2$ such that $E_2E_1 = A^{-1}$: My solution: $$A^{-1} = \begin{bmatrix}1 & 0 \\{\frac {5}{2}} & {\frac {1}{2}}\end{bmatrix}$$ $$E_2 = \begin{bmatrix}1 & 0 \\0 & {\frac {5}{2}}\end{bmatrix}$$ $$E_1 = \begin{bmatrix}1 & 0 \\1 & {\frac {1}{5}}\end{bmatrix}$$ This is the incorrect answer. Any help as to what I did wrong as well as suggestions on how to approach these questions would be appreciated. Thanks
You can reduce $A$ to $I$ in two elementary row transformations: * *$R_2 + 5R_1 \rightarrow R_2$ *$\frac{1}{2} R_2 \rightarrow R_2$ An elementary matrix is a matrix that differs from $I$ by exactly one elementary transformation. It turns out that you just need the matrix corresponding to each of the row transformations above to come up with your elementary matrices. For example, the elementary matrix corresponding to the first row transformation is $$\begin{bmatrix}1 & 0\\5&1\end{bmatrix}$$ Notice that when you multiply this matrix with $A$, it performs exactly the first elementary row transformation. So this is $E_1$. Similarly, figure out $E_2$ by taking the elementary matrix corresponding to the second transformation.
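The two elementary matrices are easy to verify with NumPy; the sketch below checks $E_2E_1A = I$ step by step:

```python
import numpy as np

A  = np.array([[1, 0], [-5, 2]], dtype=float)
E1 = np.array([[1, 0], [5, 1]], dtype=float)    # R2 <- R2 + 5 R1
E2 = np.array([[1, 0], [0, 0.5]], dtype=float)  # R2 <- (1/2) R2

print(E1 @ A)        # [[1, 0], [0, 2]]
print(E2 @ E1 @ A)   # [[1, 0], [0, 1]]
assert np.allclose(E2 @ E1 @ A, np.eye(2))

# Consequently E2 E1 = A^{-1}
assert np.allclose(E2 @ E1, np.linalg.inv(A))
```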
{ "language": "en", "url": "https://math.stackexchange.com/questions/169808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$(x^n-x^m)a=(ax^m-4)y^2$ in positive integers How do I find all positive integers $(a,x,y,n,m)$ that satisfy $ a(x^{n}-x^{m}) = (ax^{m}-4) y^{2} $ and $ m\equiv n\pmod{2} $, with $ax$ odd?
Some partial results. Case 1) $n<m$. Then $ax^m<4$. The solutions of this inequality are: $$ a=1, m=1, x=1,2,3,\\ m\geq2, x=1, $$ and $$ a=2,3, x=1, m\geq 1. $$ From these we obtain two candidates (i) $x=1$ and (ii) $x=3$. (i) Substituting into the original equation we get $$ 0=(a-4)y^2. $$ Since $y>0$ and $ax^m=a<4$ there is no solution. (ii) Substituting into the original equation we get $$ 3^n-3=(3-4)y^2. $$ Since $n<m$ and $n\equiv m \pmod{2}$, thus $n=1$ which gives $y=0$ that is no solution. Case 2) $n=m$. Substituting into the original equation we have $$ (ax^m-4)y^2=0. $$ Since $y>0$ we get $$ ax^m=4. $$ Since $ax$ is odd there is no solution. Thus we have justified that $n>m$. So we can write $$ x^m(x^{n-m}-1)a=(ax^m-4)y^2. $$ Since $a$ is odd and $(a,ax^m-4)=1$, therefore $a|y^2$, and similarly $x^m|y^2$. Furthermore $n=m+2k$ where $k$ is positive integer. Thus we obtain $$ ax^m(x^{2k}-1)=(ax^m-4)y^2. $$ Introducing the new variables $u:=ax^m$, $z:=x^k$ we obtain $$ u(z^2-1)=(u-4)y^2, $$ where $u,z$ are odd and $z>1$. Obviously $8|z^2-1$, thus $4|y$. Since $$ u(z^2-1)=(u-4)y^2<uy^2, $$ it follows $z<y$.
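A brute-force sketch over the reduced equation $u(z^2-1)=(u-4)y^2$ (odd $u>4$, odd $z>1$) confirms the derived constraints $4\mid y$ and $z<y$ on every small solution, and finds e.g. $(u,z,y)=(5,9,20)$; whether a given reduced solution lifts back to the original variables $(a,x,m,k)$ is a separate question:

```python
from math import isqrt

solutions = []
for u in range(5, 200, 2):          # odd u > 4, so u - 4 > 0
    for z in range(3, 200, 2):      # odd z > 1
        num = u * (z * z - 1)
        if num % (u - 4) == 0:
            y2 = num // (u - 4)
            y = isqrt(y2)
            if y * y == y2:
                solutions.append((u, z, y))

print(solutions[:5])
assert (5, 9, 20) in solutions
assert all(y % 4 == 0 and z < y for (u, z, y) in solutions)
```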
{ "language": "en", "url": "https://math.stackexchange.com/questions/169863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
how to solve system of linear equations of XOR operation? How can I solve this set of equations to get the values of $x,y,z,w$? $$\begin{aligned} 1=x \oplus y \oplus z \end{aligned}$$ $$\begin{aligned}1=x \oplus y \oplus w \end{aligned}$$ $$\begin{aligned}0=x \oplus w \oplus z \end{aligned}$$ $$\begin{aligned}1=w \oplus y \oplus z \end{aligned}$$ This is not a real example; the variables don't have to make sense. I just want to know the method.
As I wrote in my comment, you can just use any method you know for solving linear systems, I will use Gauss: $$ \begin{array}{cccc|c||l} \hline x & y & z & w &\ & \\ \hline\hline 1 & 1 & 1 & 0 & 1 & \\ 1 & 1 & 0 & 1 & 1 & \text{$+$ I}\\ 1 & 0 & 1 & 1 & 0 & \text{$+$ I}\\ 0 & 1 & 1 & 1 & 1 & \\ \hline 1 & 1 & 1 & 0 & 1 & \\ 0 & 0 & 1 & 1 & 0 & \text{III}\\ 0 & 1 & 0 & 1 & 1 & \text{II}\\ 0 & 1 & 1 & 1 & 1 & \text{$+$ III}\\ \hline 1 & 1 & 1 & 0 & 1 & \\ 0 & 1 & 0 & 1 & 1 & \\ 0 & 0 & 1 & 1 & 0 & \\ 0 & 0 & 1 & 0 & 0 & \text{$+$ III}\\ \hline 1 & 1 & 1 & 0 & 1 & \\ 0 & 1 & 0 & 1 & 1 & \\ 0 & 0 & 1 & 1 & 0 & \\ 0 & 0 & 0 & 1 & 0 & \\\hline \end{array} $$ Now we can conclude $w = 0$ from line 4, which gives $z = 0$ from 3 and $y = 1$ from 2, and finally $x = 0$. So $(x,y,z,w) = (0,1,0,0)$ is the only solution.
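The same elimination is easy to mechanize: over GF(2), XOR is both addition and subtraction, so standard Gauss-Jordan elimination applies verbatim. A sketch (it assumes the system has a unique solution, as this one does):

```python
def solve_gf2(rows):
    """Gauss-Jordan elimination mod 2 on an augmented matrix [a1..an | b]."""
    rows = [r[:] for r in rows]
    m, n = len(rows), len(rows[0]) - 1
    piv = 0
    for col in range(n):
        pivot = next((r for r in range(piv, m) if rows[r][col]), None)
        if pivot is None:
            continue
        rows[piv], rows[pivot] = rows[pivot], rows[piv]
        for r in range(m):
            if r != piv and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[piv])]
        piv += 1
    return [row[-1] for row in rows[:n]]

# Columns x, y, z, w | right-hand side
system = [
    [1, 1, 1, 0, 1],   # x ^ y ^ z = 1
    [1, 1, 0, 1, 1],   # x ^ y ^ w = 1
    [1, 0, 1, 1, 0],   # x ^ w ^ z = 0
    [0, 1, 1, 1, 1],   # w ^ y ^ z = 1
]
print(solve_gf2(system))  # [0, 1, 0, 0]  ->  x=0, y=1, z=0, w=0
```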
{ "language": "en", "url": "https://math.stackexchange.com/questions/169921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 2 }
Proving that for every real $x$ there exists $y$ with $x+y^2\in\mathbb{Q}$ I'm having trouble with proving this question on my assignment: For all real numbers $x$, there exists a real number $y$ so that $x + y^2$ is rational. I'm not sure exactly how to prove or disprove this. I proved earlier that for all real numbers $x$, there exists some $y$ such that $x+y$ is rational by cases, and I'm assuming this is similar, but I'm having trouble starting.
Hint They are basically hiding a simpler fact by stating the problem as "find a real number $y$ such that $x+y^2$ is rational". So, basically: look at how $x$ and $x+y^2$ are related.
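Concretely, given $x$ one can take $y=\sqrt{\lceil x\rceil - x}$: the radicand is non-negative, so $y$ is real, and $x+y^2=\lceil x\rceil$ is an integer, hence rational. A floating-point sketch (exact in real arithmetic; the tolerance only absorbs roundoff):

```python
import math

for x in (3.7, -2.25, 0.0, 123.456, -0.5):
    y = math.sqrt(math.ceil(x) - x)            # ceil(x) - x >= 0, so y is real
    total = x + y * y
    print(x, y, total)
    assert abs(total - round(total)) < 1e-9    # x + y^2 is an integer
```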
{ "language": "en", "url": "https://math.stackexchange.com/questions/169971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Fourier transform eigenvalues Since I've studied the Fourier transform extension to the Hilbert space $L^2$, I wondered if there is a complete study relative to its eigenvalues. I know that its adjoint operator is the inverse transform, which means that I can't use the theory of self-adjoint operators to state something about the eigenvalues. Could someone of you tell me something about them? Are they countable? Is there any "algorithm" to determine them? (just like the one working for compact self-adjoint operators). If you have references to books, they are welcome! Thank you! (sorry for a possibly repeated or stupid question)
The eigenvalues of the Fourier transform are $\pm 1$ and $\pm i$. Note that if $\cal F$ is the Fourier transform operator, $({\cal F})^4 = I$.
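The finite-dimensional analogue shows the same phenomenon: the unitary DFT matrix $W$ satisfies $W^4=I$, so its eigenvalues lie in $\{1,-1,i,-i\}$. A NumPy sketch:

```python
import numpy as np

N = 8
idx = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(idx, idx) / N) / np.sqrt(N)  # unitary DFT

# W^2 is the index-reversal permutation, so W^4 = I ...
assert np.allclose(np.linalg.matrix_power(W, 4), np.eye(N))

# ... hence every eigenvalue is a 4th root of unity
eigs = np.linalg.eigvals(W)
roots = np.array([1, -1, 1j, -1j])
assert all(np.min(np.abs(lam - roots)) < 1e-8 for lam in eigs)
print(np.round(eigs, 6))
```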
{ "language": "en", "url": "https://math.stackexchange.com/questions/170037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Graph theory exercise on finding a subgraph with minimum degree. I was doing some practice problems in graph theory and would appreciate some help on this one. This problem is from a practice exam from the discrete mathematics course at Princeton. Consider a graph $G$ on $n$ vertices that has no cycles of length $\le 2k+1$. Let $m$ be the number of edges in the graph. The goal of the problem is to prove that $m \le n^{1 + \frac{1}{k}} + n$. 1) What is the average degree in $G$? Since there are $m$ edges, the total degree is $2m$. The average degree is therefore $\frac{2m}{n}$. 2) Prove that there exists a subgraph $H$ of $G$ with minimum degree $\frac{m}{n}$. The hint given is: Think about vertices which have degree less than $\frac{m}{n}$. I have no idea how to proceed with this one. I am pretty sure that this part of the problem does not require the minimum cycle condition. I think that some kind of bound is required. 3) Let $v$ be a vertex in $H$. Consider the subgraph of $H$ induced by vertices at distance at most $k$ from $v$. Prove that this subgraph is a tree. If the subgraph were not a tree, there would exist a cycle. However, all vertices are at distance at most $k$, and so the cycle would be of length at most $2k +1$, contradicting the initial hypothesis. 4) Prove that $\frac{m}{n} \le n^{\frac{1}{k}}$ + 1. (Hint: Give bounds on the number of vertices in the subgraph constructed in part 3.) Again, I'm not too sure how to find an appropriate bound. I would greatly appreciate some guidance for this problem, especially for part 2 with the existence of the subgraph. Thank you for your time. Edit: Solution to part 2 obtained thanks to Code-Guru and Erick Wong's help. We induct on the number of vertices. Let $\nu$ be the average degree of the graph. Given a one-vertex graph, we have average degree $\nu=0$. There exists trivially a subgraph with minimum degree $\frac{\nu}{2}$, i.e. the single vertex graph itself. This forms our base case.
Suppose then that every graph on $n-1$ vertices has a subgraph with minimum degree at least half its average degree. Consider a graph $G$ on $n$ vertices. If the graph contains no vertices $v$ with $\deg{v} < \frac{\nu}{2}$ then we are done. Otherwise, fix such a $v$ and remove it. Since $\deg{v} < \frac{\nu}{2}$ we remove at most $\nu$ degrees from the total degree of the graph. The average degree of the $n-1$ vertex graph is then $$\nu_{n-1} \ge \frac{n\nu - \nu}{n-1} = \nu$$ More specifically, the average degree is non-decreasing. Therefore by the inductive hypothesis, there exists a subgraph of $G\backslash\{v\}$ with minimum degree at least $\frac{\nu_{n-1}}{2} \ge \frac{\nu}{2}$ which is also the desired subgraph of $G$ itself. The result holds by mathematical induction. $\square$ Edit 2: Solution to part 4 thanks to Erick Wong. We consider the tree obtained in part 3 as rooted at $v$. $v$ has at least $\frac{m}{n}$ children because of its degree bound. Similarly, each subsequent child has at least $\frac{m}{n} - 1$ children. Therefore we obtain a lower bound for the number of vertices as $$v_T \ge 1 + \frac{m}{n} + \frac{m}{n}\left(\frac{m}{n}-1\right) + \cdots + \frac{m}{n}\left(\frac{m}{n} - 1\right)^{k-1}$$ We evaluate the sum as a geometric series $$v_T \ge 1 + \frac{m}{n}\left(\frac{\left(\frac{m}{n}-1\right)^k - 1}{\frac{m}{n}-2}\right)$$ If $\frac{m}{n} \le 2$ then the required bound is trivial. So consider $\frac{m}{n} > 2$. Our inequality becomes $$n \ge v_T \ge 1 + \frac{m}{n}\left(\frac{\left(\frac{m}{n}-1\right)^k - 1}{\frac{m}{n}-2}\right)$$ $$m - 2n \ge \frac{m}{n}\left(\frac{m}{n}-1\right)^k - 2$$ $$n - 2n\cdot\frac{n - 1}{m} \ge \left(\frac{m}{n}-1\right)^k$$ $$n \ge \left(\frac{m}{n}-1\right)^k$$ $$n^{\frac{1}{k}} + 1 \ge \frac{m}{n}$$ as required. $\square$
For (2), what happens if you throw out all vertices which have degree less than $\frac{m}{n}$? Do you get an empty graph? If not, can you prove why this graph isn't empty? (Perhaps you should proceed by induction.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/170097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$\ell_0$ Minimization (Minimizing the support of a vector) I have been looking into the problem $\min \|x\|_0$ subject to $Ax=b$. $\|x\|_0$ is not a linear function and can't be solved as a linear (or integer) program in its current form. Most of my time has been spent looking for a representation different from the one above (formed as a linear/integer program). I know there are approximation methods (Basis Pursuit, Matching Pursuit, the $\ell_1$ problem), but I haven't found an exact formulation in any of my searching and sparse representation literature. I have developed a formulation for the problem, but I would love to compare with anything else that is available. Does anyone know of such a formulation? Thanks in advance, Clark P.S. The support $s=\operatorname{supp}(x)$ of a vector $x$ is the vector obtained from $x$ by removing its zero elements. The size of the support $|s|=\|x\|_0$ is the number of elements in the vector $s$. P.P.S. I'm aware that the $\|x\|_0$ problem is NP-hard, and as such, probably will not yield an exact formulation as an LP (unless P=NP). I was more referring to an exact formulation or an LP relaxation.
The problem can be modeled by an integer formulation: introduce binary variables $y_i$ together with the implications $x_i \neq 0 \longrightarrow y_i = 1$ (enforced, for example, by the big-$M$ constraints $-My_i \le x_i \le My_i$, which are valid whenever $\|x\|_\infty \le M$). Then minimize $\sum_i y_i$.
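To make the formulation concrete, here is a small self-contained sketch (the instance $A$, $b$ and all helper names are invented for illustration, not from the question): minimizing $\sum_i y_i$ over the binary support indicators amounts to searching supports in order of increasing size, which we can do directly for a $2\times 3$ system with exact rational arithmetic.

```python
from fractions import Fraction as F
from itertools import combinations

# Toy instance (invented for illustration): sparsest x with A x = b.
A = [[F(1), F(2), F(1)],
     [F(1), F(1), F(0)]]
b = [F(3), F(1)]
n_cols = len(A[0])

def solve_on_support(cols):
    """Solve A x = b with x zero outside `cols`, in exact rationals.
    Handles supports of size <= 2, which suffices for two equations.
    Returns x, or None if infeasible on that support."""
    x = [F(0)] * n_cols
    if len(cols) == 0:
        return x if all(v == 0 for v in b) else None
    if len(cols) == 1:
        j = cols[0]
        ratio = None
        for i in range(2):
            if A[i][j] == 0:
                if b[i] != 0:
                    return None
            else:
                r = b[i] / A[i][j]
                if ratio is not None and r != ratio:
                    return None
                ratio = r
        x[j] = ratio if ratio is not None else F(0)
        return x
    # size-2 support: Cramer's rule on the 2x2 subsystem
    j, k = cols
    det = A[0][j] * A[1][k] - A[0][k] * A[1][j]
    if det == 0:
        return None
    x[j] = (b[0] * A[1][k] - A[0][k] * b[1]) / det
    x[k] = (A[0][j] * b[1] - b[0] * A[1][j]) / det
    return x

# Minimizing sum(y) over binary support patterns y is the same as
# finding the smallest feasible support.
best = None
for size in range(3):          # supports of size <= 2 suffice here
    for cols in combinations(range(n_cols), size):
        sol = solve_on_support(cols)
        if sol is not None:
            best = (size, sol)
            break
    if best is not None:
        break
```

A real MILP solver explores this same search space implicitly through the big-$M$ constraints instead of explicit support enumeration.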
{ "language": "en", "url": "https://math.stackexchange.com/questions/170162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Nth derivative of $\tan^m x$ $m$ is a positive integer, $n$ is a non-negative integer. $$f_n(x)=\frac {d^n}{dx^n} (\tan ^m(x))$$ $P_n(x)=f_n(\arctan(x))$ I would like to find the polynomials that are defined as above $P_0(x)=x^m$ $P_1(x)=mx^{m+1}+mx^{m-1}$ $P_2(x)=m(m+1)x^{m+2}+2m^2x^{m}+m(m-1)x^{m-2}$ $P_3(x)=(m^3+3m^2+2m)x^{m+3}+(3m^3+3m^2+2m)x^{m+1}+(3m^3-3m^2+2m)x^{m-1}+(m^3-3m^2+2m)x^{m-3}$ I wonder how to find a general formula for $P_n(x)$. I also wish to know whether any orthogonality relation can be found for these polynomials. Thanks for answers EDIT: I proved Robert Israel's generating function. I would like to share it. $$ g(x,z) = \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) = \tan^m(x+z) $$ $$ \frac {d}{dz} (\tan^m(x+z))=m \tan^{m-1}(x+z)+m \tan^{m+1}(x+z)=m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m-1}(x)+m \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^{m+1}(x)= \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} (m\tan^{m-1}(x)+m\tan^{m+1}(x))=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} (\dfrac{d}{dx}(\tan^{m}(x)))=\sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^{n+1}}{dx^{n+1}} (\tan^{m}(x))$$ $$ \frac {d}{dz} ( \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} \tan^m(x) )= \sum_{n=1}^\infty \dfrac{z^{n-1}}{(n-1)!} \dfrac{d^n}{dx^n} \tan^m(x)=\sum_{k=0}^\infty \dfrac{z^{k}}{k!} \dfrac{d^{k+1}}{dx^{k+1}} \tan^m(x)$$ I also understood that it can be written for any analytic function $h$ as shown below. (Thanks a lot to Robert Israel) $$ \sum_{n=0}^\infty \dfrac{z^n}{n!} \dfrac{d^n}{dx^n} h^m(x) = h^m(x+z) $$ I also wrote $P_n(x)$ as the closed form shown below by using Robert Israel's answer. 
$$P_n(x)=\frac{n!}{2 \pi i}\int_0^{2 \pi i} e^{nz}\left(\dfrac{x+\tan(e^{-z})}{1-x \tan(e^{-z})}\right)^m dz$$ I do not know the next step: how to determine whether any orthogonality relation exists between the polynomials. Maybe a second-order differential equation can be found by using the relations above. Thanks for advice.
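The polynomials $P_n$ can also be generated mechanically: in the variable $t=\tan x$ we have $\frac{dt}{dx}=1+t^2$, so $\frac{d}{dx}t^k = k\,t^{k-1}+k\,t^{k+1}$, and differentiation becomes a simple operation on coefficient dictionaries. A minimal sketch of mine, with $m=3$ chosen only to check the displayed formulas for $P_1$, $P_2$, $P_3$:

```python
# Represent a polynomial in t = tan(x) as a {power: coefficient} dict.
# Since dt/dx = 1 + t^2, the x-derivative of t^k is k t^(k-1) + k t^(k+1).

def d_dx(poly):
    out = {}
    for k, c in poly.items():
        if k != 0:
            out[k - 1] = out.get(k - 1, 0) + k * c
            out[k + 1] = out.get(k + 1, 0) + k * c
    return {k: c for k, c in out.items() if c != 0}

m = 3                  # a concrete exponent, just for the check
P = [{m: 1}]           # P_0 = t^m
for _ in range(3):     # P_1, P_2, P_3
    P.append(d_dx(P[-1]))
```

For $m=3$ this reproduces $P_1=3t^4+3t^2$, $P_2=12t^5+18t^3+6t$, and $P_3=60t^6+114t^4+60t^2+6$, matching the displayed coefficient formulas.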
I have been working on the problem of finding the nth derivative and the nth antiderivative of elementary and special functions for years. You are asking a question regarding a class of functions I have called "the class of meromorphic functions with an infinite number of poles". I refer you to the chapter in my Ph.D. thesis (UWO, 2004) where you can find some answers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/170203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 1 }
A simple question: Is $g(z + \Delta z)\cdot g(z) = g(z)^2$? Is $g(z + \Delta z)\cdot g(z) = g(z)^2$? The full expression is a $\lim_{\Delta z \to 0} \frac {A}{G} $, where $G$ is the left-hand side of the above expression. It's a question about a solution to a problem that I'm going through, and I can't post it because I don't have enough reps. But the gist of it is that the above expression is taken to be true and thus he factors a $g(z)^2$ out of the limits in the denominator. I'd appreciate it if somebody explains how this is possible. Thanks!
Let us see a general answer to your particular question. Let $g$ be a function, and assume that $g$ is continuous at $z_0$. Then $$ \lim_{\Delta z \to 0} g(z_0+\Delta z) = g(z_0), \tag{1} $$ and hence $$ \lim_{\Delta z \to 0} g(z_0+\Delta z)g(z_0) = g(z_0)\cdot g(z_0)=g(z_0)^2. $$ Since (1) is the definition of continuity, the answer to your question is affirmative provided that $g$ is continuous. If $g$ is not supposed to be continuous, then the answer might be negative: you should be able to use any reasonable kind of discontinuous counterexample to check this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/170255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Sylow 2-subgroups of the group $\mathrm{PSL}(2,q)$ What is the number of Sylow 2-subgroups of the group $\mathrm{PSL}(2,q)$?
When $q$ is a power of $2,$ we have ${\rm PSL}(2,q) = {\rm SL}(2,q)$ and a Sylow $2$-normalizer is a Borel subgroup of order $q(q-1).$ Hence there are $q+1$ Sylow $2$-subgroups as ${\rm SL}(2,q)$ has order $(q-1)q(q+1)$. When $q$ is odd, the order of ${\rm PSL}(2,q)$ is $\frac{q(q-1)(q+1)}{2}.$ A Sylow $2$-subgroup of ${\rm SL}(2,q)$ is (quaternion or) generalized quaternion and a Sylow $2$-subgroup of ${\rm PSL}(2,q)$ is either a Klein $4$-group or a dihedral $2$-group with $8$ or more elements. In all these cases, a Sylow $2$-subgroup of ${\rm SL}(2,q)$ contains its centralizer, and some elementary group theory allows us to conclude that the same is true in ${\rm PSL}(2,q).$ The outer automorphism group of a dihedral $2$-group with $8$ or more elements is a $2$-group. Hence a Sylow $2$-subgroup of ${\rm PSL}(2,q)$ is self-normalizing when $q \equiv \pm 1$ (mod 8), and in that case the number of Sylow $2$-subgroups of ${\rm PSL}(2,q)$ is $q(q^{2}-1)_{2^{\prime}}$ where $n_{2^{\prime}}$ denotes the largest positive odd divisor of the positive integer $n.$ When $q \equiv \pm 3$ (mod 8), then a Sylow $2$-normalizer of ${\rm PSL}(2,q)$ must have order $12$ ( a Sylow $2$-subgroup is a self-centralizing Klein $4$-group, but there must be an element of order $3$ in its normalizer by Burnside's transfer theorem). In this case, the number of Sylow $2$-subgroups of ${\rm PSL}(2,q)$ is $q(\frac{q^{2}-1}{24})$
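As a sanity check of the $q \equiv \pm 3 \pmod 8$ count, here is a rough brute-force computation (my own sketch, not part of the answer) for $q=5$, where ${\rm PSL}(2,5)\cong A_5$ and the formula predicts $5\cdot\frac{24}{24}=5$ Sylow $2$-subgroups, each a Klein four-group:

```python
from itertools import product

q = 5

def canon(M):
    """Canonical representative of the coset {M, -M} in PSL(2, q)."""
    a, b, c, d = M
    N = ((-a) % q, (-b) % q, (-c) % q, (-d) % q)
    return min(M, N)

def mul(M, N):
    a, b, c, d = M
    e, f, g, h = N
    return canon(((a*e + b*g) % q, (a*f + b*h) % q,
                  (c*e + d*g) % q, (c*f + d*h) % q))

# Build PSL(2, q) as canonical forms of determinant-1 matrices mod +-I.
G = {canon(M) for M in product(range(q), repeat=4)
     if (M[0]*M[3] - M[1]*M[2]) % q == 1}

e = canon((1, 0, 0, 1))
involutions = [g for g in G if g != e and mul(g, g) == e]

# Each Sylow 2-subgroup here is a Klein four-group {e, a, b, ab}
# generated by a pair of distinct commuting involutions.
kleins = set()
for a in involutions:
    for b in involutions:
        if a < b and mul(a, b) == mul(b, a):
            kleins.add(frozenset({e, a, b, mul(a, b)}))

formula = q * (q*q - 1) // 24   # the count claimed for q = +-3 (mod 8)
```

The enumeration finds $|G|=60$, fifteen involutions, and exactly five Klein four-groups, agreeing with the formula.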
{ "language": "en", "url": "https://math.stackexchange.com/questions/170329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
proving convergence of a sequence and then finding its limit For every $n$ in $\mathbb{N}$, let: $$a_{n}=n\sum_{k=n}^{\infty }\frac{1}{k^{2}}$$ Show that the sequence $\left \{ a_{n} \right \}$ is convergent and then calculate its limit. To prove it is convergent, I was thinking of using theorems like the monotone convergence theorem. Obviously, all the terms $a_{n}$ are positive. So, if I prove that the sequence is decreasing, then by the monotone convergence theorem it follows that the sequence itself is convergent. $a_{n+1}-a_{n}=-\frac{1}{n}+\sum_{k=n+1}^{\infty }\frac{1}{k^{2}}$. But, I can't tell from this that the difference $a_{n+1}-a_{n}$ is negative. If anybody knows how to solve this problem, please share.
[Edit: the first proof of convergence wasn't quite right, so I removed it.] It is useful to find some estimates first (valid for $n>1$): $$\sum_{k=n}^{\infty }\frac{1}{k^{2}}<\sum_{k=n}^{\infty }\frac{1}{k(k-1)}=\sum_{k=n}^{\infty }\left(\frac1{k-1}-\frac1k\right)=\frac1{n-1}\\\sum_{k=n}^{\infty }\frac{1}{k^{2}}>\sum_{k=n}^{\infty }\frac{1}{k(k+1)}=\sum_{k=n}^{\infty }\left(\frac1{k}-\frac1{k+1}\right)=\frac1{n}$$ The last equality in each of these lines holds because those are telescoping series. This gives us the estimate: $1<a_n<\frac{n}{n-1}$. By the squeeze theorem, we can conclude that our sequence converges and $\lim_{n\to\infty}a_n=1$.
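A quick numerical check of these estimates (my own addition): truncating the tail at $N$ and bounding the remainder by the same telescoping bounds $\frac{1}{N+1} < \sum_{k>N} \frac{1}{k^2} < \frac{1}{N}$ gives a bracket around $a_n$ that sits inside $(1, \frac{n}{n-1})$ and is already close to $1$ for moderate $n$.

```python
def a_bracket(n, N=10**6):
    """Bracket [lo, hi] for a_n = n * sum_{k>=n} 1/k^2, truncating at N
    and bounding the remainder via 1/(N+1) < sum_{k>N} 1/k^2 < 1/N."""
    partial = sum(1.0 / (k * k) for k in range(n, N + 1))
    return n * (partial + 1.0 / (N + 1)), n * (partial + 1.0 / N)

n = 100
lo, hi = a_bracket(n)
```

For $n=100$ the bracket is roughly $1.005$, comfortably between $1$ and $100/99 \approx 1.0101$.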
{ "language": "en", "url": "https://math.stackexchange.com/questions/170356", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Why convolution of integrable function $f$ with some sequence tends to $f$ a.e. Let $g: \mathbb{R} \rightarrow \mathbb{R}$ be integrable with $\int_{\mathbb{R}}g(x)dx=1$ and $|g(x)| \leq \frac{C}{(1+|x|)^{1+h}}$ for $x \in \mathbb{R} $, where $C, h>0$ are constants. Let $g_t(x)=\frac{1}{t} g(\frac{x}{t})$ for $x \in \mathbb{R}$, $t>0$. I want to show that: If $f\in L^p$, where $1\leq p\leq \infty$, then $f*g_t(x) \rightarrow f(x)$ a.e. I have tried in this way: Let $x\in \mathbb{R}$ be a Lebesgue point of $f$, that is $\lim_{r\rightarrow 0} \frac{1}{r} \int_{B(x,r)} |f(y)-f(x)|dy=0$, then $$ |f*g_t(x)-f(x)|\leq \int_{\mathbb{R}} g_t(x-y)|f(y)-f(x)|dy =I_1+I_2, $$ where $I_1=\int_{B(x,t)} g_t(x-y)|f(y)-f(x)|dy \leq\frac{1}{t} \int_{B(x,t)} \frac{C}{(1+\|\frac{x-y}{t}\|)^{1+h}} |f(y)-f(x)|dy $ $ \leq C\frac{1}{t}\int_{B(x,t)} |f(y)-f(x)|dy \rightarrow 0 \text{ as } t \rightarrow 0;$ $I_2=\int_{\mathbb{R}\setminus B(x,t)} g_t(x-y)|f(y)-f(x)|dy .$ I don't know how to estimate the integral $I_2$.
Since it is a local result, we may assume without loss of generality that $p=1$. The Hardy-Littlewood centered maximal function of $f$ is defined as $$ f^*(x)=\sup_{r>0}\frac1{2\,r}\int_{x-r}^{x+r}|f(y)|\,dy=\sup_{r>0}\frac1{2\,r}\int_{|y|\le r}|f(x-y)|\,dy. $$ It satisfies the weak one-one inequality $$ |\{x:f^*(x)>\delta\}|\le\frac{M}{\delta}\|f\|_1\quad\forall\delta>0, $$ where $M>0$ is a constant independent of $f$ and $\delta$. We first bound $g_t\ast f(x)$ in terms of $f^*(x)$. $$\begin{align*} |g_t\ast f(x)|&\le \frac{C}{t}\int_{|y|\le t}\frac{|f(x-y)|}{(1+y/t)^{1+h}}\,dy+\frac{C}{t}\sum_{k=0}^\infty\int_{2^kt<|y|\le 2^{k+1}t}\frac{|f(x-y)|}{(1+y/t)^{1+h}}\,dy\\ &\le\frac{C}{t}\int_{|y|\le t}|f(x-y)|\,dy+\frac{C}{t}\sum_{k=0}^\infty\int_{|y|\le 2^{k+1}t}\frac{|f(x-y)|}{(1+2^k)^{1+h}}\,dy\\ &\le 2\,C\,f^*(x)+C\Bigl(\sum_{k=0}^\infty\frac{2^{k+2}}{(1+2^k)^{1+h}}\Bigr)f^*(x)\\ &\le K\,f^*(x), \end{align*}$$ where $$ K=2\,C+C\sum_{k=0}^\infty\frac{2^{k+2}}{(1+2^k)^{1+h}}<\infty. $$ Now we mimic the proof of the Lebesgue differentiation theorem. Given $\epsilon>0$ choose a continuous function $\phi$ with compact support such that $\|f-\phi\|_1<\epsilon$. Then $$ g_t\ast f(x)-f(x)=g_t\ast(f-\phi)(x)+\bigl(g_t\ast\phi(x)-\phi(x)\bigr)+(\phi(x)-f(x)), $$ from where $$ |g_t\ast f(x)-f(x)|\le K(f-\phi)^*(x)+|g_t\ast\phi(x)-\phi(x)|+|\phi(x)-f(x)| $$ Since $\phi$ is continuous, the middle term converges to $0$ as $t\to0$ for all $x$. Let $\delta>0$. Then $$ \{x:\limsup_{t\to0}|g_t\ast f(x)-f(x)|>2\,\delta\}\subset\{x:(f-\phi)^*(x)>\frac\delta{K}\}\cup\{x:|\phi(x)-f(x)|>\delta\}. $$ By Chebychev's inequality $$ |\{x:|\phi(x)-f(x)|>\delta\}|\le\frac1\delta\|\phi-f\|_1\le\frac\epsilon\delta. $$ By the weak $1$-$1$ inequality for the maximal function $$ |\{x:(f-\phi)^*(x)>\frac\delta{K}\}|\le\frac{K\,M}\delta\|\phi-f\|_1\le\frac{K\,M\,\epsilon}\delta. $$ Finally $$ |\{x:\limsup_{t\to0}|g_t\ast f(x)-f(x)|>2\,\delta\}|\le\frac{K\,M+1}{\delta}\,\epsilon. 
$$ Since $\epsilon$ was arbitrary, it follows that $$ |\{x:\limsup_{t\to0}|g_t\ast f(x)-f(x)|>2\,\delta\}|=0\quad\forall\delta>0. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/170433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Ergodicity of the First Return Map I was looking for some results on Infinite Ergodic Theory and I found this proposition. Do you guys know how to prove the last item (iii)? I managed to prove (i) and (ii) but I can't do (iii). Let $(X,\Sigma,\mu,T)$ be a $\sigma$-finite space with $T$ preserving the measure $\mu$, $Y\in\Sigma$ sweep-out s.t. $0<\mu(Y)<\infty$. Making $$\varphi(x)= \min\{n\geq1; \ T^n(x)\in Y\}$$ and also $$T_Y(x) = T^{\varphi(x)}(x)$$ if $T$ is conservative then (i) $T_Y$ preserves $\mu|_{Y\cap\Sigma}$ on $(Y,Y\cap\Sigma,\mu|_{Y\cap\Sigma})$; (ii) $T_Y$ is conservative; (iii) If $T$ is ergodic, then $T_Y$ is ergodic on $(Y,Y\cap\Sigma,\mu|_{Y\cap\Sigma})$. Any ideas? Thank you guys in advance!!!
Kac's lemma can be used if $\mu (X) = 1$, which is not the case here. In fact, the return time $\varphi_Y$ need not be integrable. Instead, to prove ergodicity one may argue by contradiction. Assume that $(Y, T_Y, \mu)$ is not ergodic. Then there is an invariant set $E \subset Y$ such that $0 < \mu (E) < \mu (Y)$. This then leads to the conclusion that $(X, T, \mu)$ is not ergodic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/170564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that any primitive root $r$ of $p^n$ is also a primitive root of $p$ For an odd prime $p$, prove that any primitive root $r$ of $p^n$ is also a primitive root of $p$. So I have assumed $r$ has order $k$ modulo $p$, so $k|p-1$. Then if I am able to show that $p-1|k$ then I am done, but I haven't been able to show that. Can anybody help me with this method? Any other type of proof is also welcome.
$\begin{eqnarray}{\bf Hint}\ \ \ \ \rm a\ \ is\, \ coprime\, \ to\ \ {\bf\color{#C00}p}\,\ &\Longrightarrow&\rm\ \ \ a\ \ is\ \ coprime\ \ to\ \ {\bf\color{#90f}{p^n}}\\ & & \qquad\qquad{ \Downarrow}\\ \smash{\rm r^k\equiv a\ \ (mod\,\ {\bf\color{#C00}p})}\, &\smash{\overset{\ \ \rm\color{#0a0}{CP}}\Longleftarrow}&\rm \smash{\exists\,k\!:\ r^k\equiv a\ \ (mod\,\ {\bf\color{#90f}{p^n}})} \end{eqnarray}$ where we employed the result $\,\rm \color{#0a0}{CP} = $ Congruences Persist mod factors of the modulus.
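The statement is easy to spot-check numerically (this check is mine, not part of the hint): for small odd primes, every primitive root of $p^n$ found below indeed has order $p-1$ modulo $p$.

```python
def mult_order(r, m):
    """Multiplicative order of r modulo m (gcd(r, m) = 1 assumed)."""
    k, x = 1, r % m
    while x != 1:
        x = (x * r) % m
        k += 1
    return k

def primitive_roots(p, n):
    """All primitive roots modulo p**n, i.e. units of order phi(p**n)."""
    m = p**n
    phi = p**n - p**(n - 1)
    return [r for r in range(2, m)
            if r % p != 0 and mult_order(r, m) == phi]

checks = []
for p, n in [(3, 2), (3, 3), (5, 2), (5, 3), (7, 2), (7, 3), (11, 2)]:
    for r in primitive_roots(p, n):
        checks.append(mult_order(r, p) == p - 1)
```

For example, the primitive roots modulo $9$ are exactly $2$ and $5$, and both are primitive roots modulo $3$.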
{ "language": "en", "url": "https://math.stackexchange.com/questions/170648", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 6, "answer_id": 3 }
Find a function that satisfies the following five conditions. My task is to find a function $h:[-1,1] \to \mathbb{R}$ so that (i) $h(-1) = h(1) = 0$ (ii) $h$ is continuously differentiable on $[-1,1]$ (iii) $h$ is twice differentiable on $(-1,0) \cup (0,1)$ (iv) $|h^{\prime\prime}(x)| < 1$ for all $x \in (-1,0)\cup(0,1)$ (v) $|h(x)| > \frac{1}{2}$ for some $x \in [-1,1]$ The source I have says to use the function $h(x) = \frac{3}{4}\left(1-x^{4/3}\right)$ which fails to satisfy condition (iv) so it is incorrect. I'm starting to doubt the validity of the problem statement because of this. So my question is does such a function exist? If not, why? Thanks!
Such a function does exist. Try $h(x) = e^{-x^2b}-\frac{b}{e}$. This function trivially satisfies (i)-(iii). Now you just need to find $b$ such that (iv) and (v) also hold. Assuming $b >0$ we get : $|h^{''}(x)|= |2be^{-x^2b}(2bx^2-1)| \leq 2b |2b-1|$ for $x\in[-1,1]$. So choosing a very small $b$ like $b=1/8$ will make this smaller than $1/2$. For this $b$ we also get $h(0)=1-\frac{1}{8e} > 1/2$. So (iv) and (v) also hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/170710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finite p-group with a cyclic frattini subgroup. I have a question about the following theorem that I found in some research. Is it possible that $E$ is the identity? I just found this elaborated proof that might help.
Yes, it is possible that $E$ is the identity. Take $P$ non-abelian of order $p^3$. In this case $U = \Phi(P) \cap \Omega_1(Z(P)) = \Omega_1(Z(P)) = Z$ and so $E=1$. $Q=P$ has $\Phi(Q) = Z(Q)$ cyclic of order $p$. Furthermore such a $P$ is not a direct product of any proper non-identity subgroups, and so $E = 1$ is not only possible, it is necessary. This is just @anon's comment with an explicit answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/170762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove $(-a+b+c)(a-b+c)(a+b-c) \leq abc$, where $a, b$ and $c$ are positive real numbers I have tried the arithmetic-geometric inequality on $(-a+b+c)(a-b+c)(a+b-c)$ which gives $$(-a+b+c)(a-b+c)(a+b-c) \leq \left(\frac{a+b+c}{3}\right)^3$$ and on $abc$ which gives $$abc \leq \left(\frac{a+b+c}{3}\right)^3.$$ Since both inequalities have the same right-hand side, I have tried to deduce something about the left-hand sides, but to no avail. Can somebody help me, please? I am sure it is something simple I have missed.
Here I give a detailed proof, though some steps could have been skipped to keep it short. Without loss of generality we can assume that $a\ge b \ge c$. Let $$(-a+b+c)(a-b+c)(a+b-c)=S$$ $$\Rightarrow S=(-a+b+c)\{a-(b-c)\}\{a+(b-c)\}$$ $$\Rightarrow S=(-a+b+c)\{a^2-(b-c)^2\} $$ $$\Rightarrow S= (-a+b+c)\{a^2-b^2-c^2+2bc\}$$ $$\Rightarrow S=-(a^3+b^3+c^3)-2abc+b^2c+bc^2+ab^2+a^2b+ac^2+a^2c $$ $$\Rightarrow abc-S=(a^3+b^3+c^3)+3abc-(b^2c+bc^2+ab^2+a^2b+ac^2+a^2c) $$ $$\Rightarrow abc-S=(a^3-a^2b)+(b^3-b^2c)+(c^3-c^2a)+(abc-bc^2)+(abc-ab^2)+(abc-a^2c) $$$$\Rightarrow abc-S=a^2(a-b)+b^2(b-c)+c^2(c-a)+bc(a-c)+ab(c-b)+ac(b-a) $$ $$\Rightarrow abc-S=a(a-b)(a-c)+b(b-c)(b-a)+c(c-a)(c-b)$$ $$\Rightarrow abc-S=(a-b)\{a(a-c)-b(b-c)\}+c(c-a)(c-b)$$ $$\Rightarrow abc-S=(a-b)\{a^2-b^2+c(b-a)\}+c(c-a)(c-b)$$ $$\Rightarrow abc-S=(a-b)^2\{a+b-c\}+c(c-a)(c-b)$$ Now $(c-a) \le 0$ and $(c-b) \le 0$ $$\Rightarrow c(c-a)(c-b)\ge 0 $$ and $$ (a-b)^2(a+b-c) \ge 0$$ This shows $$ abc-S \ge 0$$ $$\Rightarrow abc\ge S$$ $$\Rightarrow abc\ge (-a+b+c)(a-b+c)(a+b-c) $$
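The line $abc-S=a(a-b)(a-c)+b(b-c)(b-a)+c(c-a)(c-b)$ is the $t=1$ case of Schur's inequality, so the result also follows from that known fact. A randomized numerical check of both the identity and the final inequality (my own addition, not part of the proof):

```python
import random

random.seed(0)

def S(a, b, c):
    return (-a + b + c) * (a - b + c) * (a + b - c)

def schur_sum(a, b, c):
    return (a * (a - b) * (a - c)
            + b * (b - c) * (b - a)
            + c * (c - a) * (c - b))

identity_ok = inequality_ok = True
for _ in range(10000):
    a, b, c = (random.uniform(0.01, 10.0) for _ in range(3))
    diff = a * b * c - S(a, b, c)
    if abs(diff - schur_sum(a, b, c)) > 1e-6:   # identity abc - S = Schur sum
        identity_ok = False
    if diff < -1e-9:                             # inequality S <= abc
        inequality_ok = False
```
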
{ "language": "en", "url": "https://math.stackexchange.com/questions/170813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 7, "answer_id": 0 }
A little integration paradox The following integral can be obtained using the online Wolfram integrator: $$\int \frac{dx}{1+\cos^2 x} = \frac{\tan^{-1}(\frac{\tan x}{\sqrt{2}})}{\sqrt{2}}$$ Now assume we are performing this integration between $0$ and $2\pi$. Hence the result of the integration is zero. On the other hand when looking at the integrand, $\displaystyle \frac{1}{1+\cos^2 x}$, we see that it is a periodic function that is never negative. The fact that it is never negative guarantees that the result of the integration will never be zero (i.e., intuitively there is a positive area under the curve). What is going on here?
If $\tan(y)=a\tan(x)$, then $\tan(y-x)=\frac{(a-1)\tan(x)}{1+a\tan^2(x)}$, and therefore, $$ y=x+\tan^{-1}\left(\frac{(a-1)\tan(x)}{1+a\tan^2(x)}\right)\tag{1} $$ Since $\left|\frac{(a-1)\tan(x)}{1+a\tan^2(x)}\right|\le\frac{\left|a-1\right|}{2\sqrt{a}}$, $(1)$ is a nice continuous function of $x$. In this question, $a=\frac1{\sqrt2}$, so we get $$ \int\frac{\mathrm{d}x}{1+\cos^2(x)}=\frac{x-\tan^{-1}\left(\frac{(\sqrt2-1)\tan(x)}{\sqrt2+\tan^2(x)}\right)}{\sqrt2}\tag{2} $$
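Numerically this resolves cleanly (my own check): a midpoint sum of the integral over $[0,2\pi]$ gives $2\pi/\sqrt2=\pi\sqrt2$, and evaluating the continuous antiderivative $(2)$ at the endpoints gives exactly the same value, where the discontinuous one gave $0$.

```python
import math

def f(x):
    return 1.0 / (1.0 + math.cos(x) ** 2)

# Midpoint rule over one full period; the integrand is smooth and
# periodic, so this converges very quickly.
N = 100000
h = 2.0 * math.pi / N
integral = h * sum(f((i + 0.5) * h) for i in range(N))

def F(x):
    """Continuous antiderivative (2); fine away from odd multiples of pi/2."""
    t = math.tan(x)
    s = math.sqrt(2.0)
    return (x - math.atan((s - 1.0) * t / (s + t * t))) / s

predicted = F(2.0 * math.pi) - F(0.0)
exact = math.pi * math.sqrt(2.0)
```
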
{ "language": "en", "url": "https://math.stackexchange.com/questions/170885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 0 }
Trigonometric Identities: $\frac{\sin^2\theta}{1+\cos\theta}=1-\cos\theta$ $\dfrac{\sin^2\theta}{1+\cos\theta}=1-\cos\theta$ Right Side: $1-\cos\theta$ either stays the same, or can be $1-\dfrac{1}{\sec\theta}$ Left Side: $$\begin{align*} &= \dfrac{\sin^2\theta}{1+\cos\theta}\\ &= \dfrac{1-\cos^2\theta}{1+\cos\theta}\\ &= \dfrac{(1-\cos\theta)(1+\cos\theta)}{1+\cos\theta}\\ &= 1-\cos\theta \end{align*}$$ Is this correct?
Perhaps slightly simpler and shorter (FYI, what you did is correct): $$\frac{\sin^2x}{1+\cos x}=1-\cos x\Longleftrightarrow \sin^2x=(1-\cos x)(1+\cos x)\Longleftrightarrow \sin^2x=1-\cos^2x$$ And since the last equality is just the trigonometric Pythagorean identity, we're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/170947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Intuitively, why does $\lim_{n \to \infty} \frac16 (p_{n - 1} + p_{n - 2} ... + p_{n - 6}) = 2/7$? Blitzstein, Introduction to Probability (2019 2 edn), Chapter 2, Exercise 48, p 94. *A fair die is rolled repeatedly, and a running total is kept (which is, at each time, the total of all the rolls up until that time). Let $p_n$ be the probability that the running total is ever exactly n (assume the die will always be rolled enough times so that the running total will eventually exceed n, but it may or may not ever equal n). (c) Give an intuitive explanation for the fact that $p_n \rightarrow 1/3.5 = 2/7 \quad as \quad n \rightarrow \infty$. From p 17 in the publicly downloadable PDF of curbed solutions. (c)An intuitive explanation is as follows. The average number thrown by the die is (total of dots)/6, which is 21/6 = 7/2, so that every throw adds on an average of 7/2. We can therefore expect to land on 2 out of every 7 numbers, and the probability of landing on any particular number is 2/7. That's the line I don't get it, why we can transfer
I explain this informal argument slightly differently. Let $p(i)$ be the probability that the running total equals $i$ at some point. * *Note that $k = p(1) + \dots + p(n)$ equals the expected number of values between $1$ and $n$ that the running sum takes (by the linearity of expectation). Specifically, let ${\cal E}_i$ be the event that the running sum takes value $i$ and $e_i$ be the indicator variable of the event ${\cal E}_i$ (that is, $e_i =1$ if ${\cal E}_i$ happens; $e_i=0$, otherwise). Then the expected number of values the running sum takes equals $$\mathbb{E}[e_1 + \dots + e_n] = \mathbb{E}[e_1] + \dots + \mathbb{E}[e_n] = \mathrm{Pr}[{\cal E}_1] + \dots + \mathrm{Pr}[{\cal E}_n] = p(1) + \dots + p(n).$$ *Since the average increment is $7/2$, we have that $k \approx \frac{2}{7} n$. *On the other hand, intuitively all values $p(i)$ are approximately equal (this is not true for small values of $i$; but it is true for large values of $i$). Thus $\frac{2}{7} n \approx p(1) + \dots + p(n) \approx n p(n)$. Therefore, $p(n) \approx 2/7$.
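The limiting value can also be confirmed directly from the recurrence in the exercise, $p_n=\frac16(p_{n-1}+\cdots+p_{n-6})$ with $p_0=1$ and $p_k=0$ for $k<0$ (a quick check I added):

```python
# p[n] = probability that the running total ever equals n exactly.
# Conditioning on the first roll gives p[n] = (p[n-1] + ... + p[n-6]) / 6,
# with p[0] = 1 and p[k] = 0 for k < 0 (handled by the slice below).
p = [1.0]
for n in range(1, 401):
    p.append(sum(p[max(0, n - 6):n]) / 6.0)
```

The early terms are $p_1=1/6$ and $p_2=7/36$, and $p_n$ settles onto $2/7 \approx 0.2857$ rapidly.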
{ "language": "en", "url": "https://math.stackexchange.com/questions/170986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Basic inclusion exclusion problem - picking 24 balls out of four sets of 10 balls This is probably a basic problem, but I'm having a really difficult time solving it. There are ten red balls, ten green balls, ten blue balls, and ten white balls. Balls of the same color are considered equal. How many ways are there to select 24 balls, regardless of order? (Sorry if the question was phrased awkwardly, I translated it) The question is supposed to be solved using the inclusion-exclusion principle. I'd appreciate advice not only on how to solve this specific problem, but how to solve general inclusion-exclusion problems.
Let $ r + g + b + w = 24, r <= 10, g <= 10, b <= 10, w <= 10 $ Now let $ r' = r + 1$ and so on. So $ r' + g' + b' + w' = 24 + 4 = 28$ and each is less than or equal to 11 and greater than or equal to 1. Imagine we have 28 balls and we place three dividers between the balls. This divides them into four groups, one for $ r'$, $ g' $, $ b' $, and $w'$. There are 27 slots in which to put dividers and we must place 3 of them. The number of divisions is $ {27 \choose 3}=2925 $. But this is where inclusion-exclusion comes in. We must exclude each of the cases where one variable is $ > 11$ and then include when two are, and so on. Let's say $r' = 12$. We have $ g' + b' + w' = 16 $. By the same argument, this is $ {15\choose2} $. There are four of those cases for each variable. We can go on all the way up to $r' = 25$. This reduces to: $4{15\choose2} + 4{14\choose2} + 4{13\choose2} + \cdots + 4{2\choose2} = 4{16\choose3} $. The equal sign is by the hockey-stick identity. Now what if $ r' = 12 = b' $? There are several of these cases where two variables are greater than 11. The logic is the same, you just must consider several cases. I'll leave it to you to finish. At the end take $ {27 \choose 3} - 4{16\choose3} + (two > 11) $, by the principle of inclusion-exclusion. Note that three variables may not be greater than 11 because each variable must be >= 1. This, unless there is some solution I am not seeing, is not a typical inclusion-exclusion problem. It does use it, however it is only in the last step. In general, an inclusion-exclusion problem will ask: How many occurrences are such that at least x is/are in a certain state? The best thing is never to memorize a formula, but instead think critically about your counting. Count the number of ways that exactly 1 object is in a certain state. Think about how many times you just counted situations in which two objects are in a certain state and subtract off those states until they are counted exactly once. 
Then think about 3, and so on. It is called inclusion-exclusion because you are including some cases, and then excluding cases that are double counted, then including the ones that are undercounted, and so on. EDIT: I should clarify. The reason for adding one to $r$ to get $r'$, and so on, is that now each variable must be at least one. This simplifies the "divider" logic, although it is certainly not required to do so - it just takes a slightly different thought process.
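Carrying the count through (the final number below is my own arithmetic, which the brute force double-checks): one variable over the cap happens in $4$ ways with $\binom{16}{3}$ compositions each, two variables in $6$ ways with $\binom{5}{3}$ each, and three over the cap is impossible.

```python
from math import comb

# Brute force: count 0 <= r, g, b, w <= 10 with r + g + b + w = 24.
brute = sum(1
            for r in range(11)
            for g in range(11)
            for b in range(11)
            if 0 <= 24 - r - g - b <= 10)

# Inclusion-exclusion in the shifted variables r', g', b', w' of the answer:
# C(27,3) total compositions, minus 4*C(16,3) with one variable above the
# cap of 11, plus 6*C(5,3) with two variables above it.
formula = comb(27, 3) - 4 * comb(16, 3) + 6 * comb(5, 3)
```

Both computations give $2925 - 2240 + 60 = 745$ selections.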
{ "language": "en", "url": "https://math.stackexchange.com/questions/171135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $f:D\to \mathbb{R}$ is continuous and there exists $(x_n)\in D$ such that $x_n\to a\notin D$ and $f(x_n)\to \ell$, then $\lim_{x\to a}f(x)=\ell$? Assertion: If $f:X\setminus\left\{a\right\}\to \mathbb{R}$ is continuous and there exists a sequence $(x_n):\mathbb{N}\to X\setminus\left\{a\right\}$ such that $x_n\to a$ and $f(x_n)\to \ell$, prove that $\lim_{x\to a}f(x)=\ell$ I have three questions: 1) Is the assertion correct? If not, please provide counter-examples. In that case can the assertion become correct if we require that $f$ is monotonic, differentiable etc.? 2) Is my proof correct? If not, please pinpoint the problem and give a hint in the right direction. Personally, what makes me doubt it are the choices of $N$ and $\delta$, since they depend on one another. 3) If the proof is correct, then is there a way to shorten it? My Proof: Let $\epsilon>0$. Since $f(x_n)\to \ell$ \begin{equation} \exists N_1\in \mathbb{N}:n\ge N_1\Rightarrow \left|f(x_n)-\ell\right|<\frac{\epsilon}{2}\end{equation} Thus, $\left|f(x_{N_1})-\ell\right|<\frac{\epsilon}{2}$ and by the continuity of $f$ at $x_{N_1}$, \begin{equation} \exists \delta_1>0:\left|x-x_{N_1}\right|<\delta_1\Rightarrow \left|f(x)-f(x_{N_1})\right|<\frac{\epsilon}{2} \end{equation} Since $x_n\to a$, \begin{equation} \exists N_2\in \mathbb{N}:n\ge N_2\Rightarrow \left|x_n-a\right|<\delta_1\end{equation} Thus, $\left|x_{N_2}-a\right|<\delta_1$ and by letting $N=\max\left\{N_1,N_2\right\}$, \begin{gather} 0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N+x_N-a\right|<\delta_1\Rightarrow \left|x-x_N\right|-\left|x_N-a\right|<\delta_1\\ 0<\left|x-a\right|<\delta_1\Rightarrow \left|x-x_N\right|<\delta_1+\left|x_N-a\right| \end{gather} By the continuity of $f$ at $x_N$, \begin{equation} \exists \delta_3>0:0<\left|x-x_N\right|<\delta_3\Rightarrow \left|f(x)-f(x_N)\right|<\frac{\epsilon}{2} \end{equation} Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that, \begin{gather} 
0<\left|x-a\right|<\delta\Rightarrow \left|x-x_N\right|<\delta\Rightarrow \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\Rightarrow \left|f(x)-\ell\right|-\left|f(x_N)-\ell\right|<\frac{\epsilon}{2}\\ 0<\left|x-a\right|<\delta\Rightarrow\left|f(x)-\ell\right|<\left|f(x_N)-\ell\right|+\frac{\epsilon}{2}<\frac{\epsilon}{2}+\frac{\epsilon}{2}=\epsilon \end{gather} We conclude that $\lim_{x\to a}f(x)=\ell$ Thank you in advance EDIT: The proof is false. One of the mistakes is in this part: "Thus, letting $\delta=\max\left\{\delta_1+\left|x_N-a\right|,\delta_3\right\}>0$ we have that, \begin{gather} 0<\left|x-a\right|<\delta{\color{Red} \Rightarrow} \left|x-x_N\right|<\delta{\color{Red} \Rightarrow} \left|f(x)-\ell+\ell-f(x_N)\right|<\frac{\epsilon}{2}\end{gather}"
There are some important examples in complex analysis: say $D$ is the unit disk in $\mathbb C = \mathbb R^2$ (so not in $\mathbb R$ as in this question). There are examples of functions, analytic and hence continuous in $D$, for which radial limits exist but tangential limits do not. These will be counterexamples to what you ask in that case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/171233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How quickly does the inner product of an L-2 function against its translates decay? Let $H$ be the Hilbert space $L^2(\mathbb{R})$. For $t \in \mathbb{R}$, let $\lambda_t \in B(H)$ be the unitary operator which translates by $t$, that is $(\lambda_t \xi)(s) = \xi(-t +s)$. For $\xi \in H$, fixed but arbitrary, define $f_\xi$ by $$f_\xi(t) = \langle \xi, \lambda_t \xi \rangle.$$ It's pretty easy to see that $f_\xi$ is continuous (since the inner product is continuous and the map $t \mapsto \lambda_t:\mathbb{R} \to B(H)$ is strongly continuous) and vanishes at infinity (just translate $\xi$ by some large $t$ where the integral of $|\xi|^2$ over the complement of $[-2t,2t]$ is small). I am curious what else can be said about $f_\xi$. Does it decay quickly enough to be in any $L^p$ space for example? Motivation: I am wondering for which sort of functions $g : \mathbb{R} \to \mathbb{C}$ does convolution with $g$ define a bounded operator $\kappa_g \in B(H)$. For example, this is the case for $g \in L^1(\mathbb{R})$. The first thing I thought to do was look to see when the quadratic form $\xi \mapsto \langle \xi , \kappa_g \xi \rangle$ might make sense and be bounded. Formally, we have $$\langle \xi ,\kappa_g \xi\rangle = \langle \xi, g * \xi \rangle = \int \overline{ \xi(s)} \int g(t)\xi(-t+s) dt ds = \int g(t) \int \overline{\xi(s)} \xi(-t +s) ds dt = \int g(t) \langle \xi,\lambda_t \xi \rangle dt = \int g(t) f_\xi(t) dt$$ so it is mandatory for $g$ to be integrable against every function in the collection $\{f_\xi : \xi \in H \}$. It was at this point I became interested in the decay properties of these functions.
It can decay arbitrarily slowly. That is, let $g$ be any decreasing continuous function on $[0,\infty)$ with $\lim_{s \to \infty} g(s) = 0$. Then there is $f \in L^1({\mathbb R})$ such that $|\hat{f}(s)| \ge g(|s|)$ for all real $s$. By scaling we may assume $g(0) = 1$. Take $s_n$ so that $g(s_n) = 1/n^2$. Thus $s_1 = 0$, and on $[s_n, s_{n+1}]$ we have $g(s) < 1/n^2$. Consider $f_n(x) = \dfrac{s_{n+1}}{n^2} \exp(-s_{n+1} |x|)$ which is in $L^1$ with $\| f_n \|_1 = 2/n^2$ and has $\widehat{f_n}(s) = \dfrac{2}{n^2} \dfrac{s_{n+1}^2}{s_{n+1}^2+s^2}$. So $\widehat{f_n}(s) > 0$ everywhere and $\widehat{f_n}(s) \ge 1/n^2 \ge g(|s|)$ for $s_n \le |s| \le s_{n+1}$. Now take $f = \sum_{n=1}^\infty f_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/171315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there a name for these polynomials? Given $t \in \mathbb{R}[0,1]$, consider the following set of polynomials: $$ \left[-{\left(t - 1\right)}^{2} t, {\left(t - 1\right)} {\left(t^{2} - t - 1\right)}, -{\left(t^{2} - t - 1\right)} t, {\left(t - 1\right)} t^{2}\right]. $$ They show up as the coefficients of an interpolation filter. I've put them in a form that reminds me of Bernstein polynomials. They sum to unity, but don't appear to be orthogonal on $[0,1]$. They might be with respect to some weight function (or other interval). Other than digging through lists of various types of polynomials, I'm at a loss for terminology to use in searching for further information. Does anyone recognize these as being part of some larger class, regardless of their connection to interpolation?
If you are interested in finding an orthonormal set of polynomials of the set of polynomials you have, then you can use Gram–Schmidt process to get it. Make sure your polynomials are linearly independent on the interval.
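Before running Gram–Schmidt it is worth verifying the prerequisites numerically. Here is a small sketch (my own check, not part of the answer) confirming two facts with exact integer arithmetic on the coefficient vectors: the four polynomials form a partition of unity, and they are linearly independent (so Gram–Schmidt will not break down):

```python
# Coefficient rows [c0, c1, c2, c3] represent c0 + c1 t + c2 t^2 + c3 t^3.
polys = [
    [0, -1,  2, -1],   # -(t-1)^2 t
    [1,  0, -2,  1],   # (t-1)(t^2 - t - 1)
    [0,  1,  1, -1],   # -(t^2 - t - 1) t
    [0,  0, -1,  1],   # (t-1) t^2
]

# Partition of unity: the coefficient-wise sum is the constant polynomial 1.
unity = [sum(col) for col in zip(*polys)]

# Linear independence: the 4x4 coefficient matrix has nonzero determinant
# (computed by cofactor expansion along the first row).
def det(M):
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

independent = det(polys) != 0
```

Since the determinant turns out to be $\pm1$, the four cubics are a basis of the space of polynomials of degree $\le 3$.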
{ "language": "en", "url": "https://math.stackexchange.com/questions/171368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
For a defective matrix $B$, do $B$ and $B^*$ have the same eigenvalues? From the definition of normal matrix, $AA^*=A^*A$, we know that $A$ and $A^*$ share the same eigenvectors, but my question is that do defective matrix $B$ and its conjugate transpose $B^*$ also have the same eigenvectors, although their eigenvectors are not complete. If not, is there a simple relation between the eigenvectors of $B$ and $B^*$?
Try $B = \pmatrix{0 & 1\cr 0 & 0\cr}$. Do it and $B^*$ share an eigenvector? EDIT: The relationship is this. If $u$ is an eigenvector of $B$ for eigenvalue $\lambda$ and $v$ is an eigenvector of $B^*$ for eigenvalue $\mu$, and $\mu \ne \overline{\lambda}$, then $v^* u = 0$.
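Spelling out the hint in a few lines of Python (my addition; plain tuples, no libraries): $Bv=(v_2,0)$, so $Bv=0$ forces $v_2=0$ and the only eigenvector direction of $B$ is $e_1$, while $B^*$ admits only the direction $e_2$.

```python
# B = [[0,1],[0,0]] has eigenvalue 0 with algebraic multiplicity 2 but only
# the eigenvector direction e1 = (1, 0); its adjoint has only e2 = (0, 1).
def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1])

B      = ((0, 1), (0, 0))
B_star = ((0, 0), (1, 0))   # conjugate transpose (real entries here)

e1, e2 = (1, 0), (0, 1)
Be1      = apply(B, e1)        # (0, 0): e1 is an eigenvector of B
Bstar_e2 = apply(B_star, e2)   # (0, 0): e2 is an eigenvector of B*
Bstar_e1 = apply(B_star, e1)   # (0, 1): e1 is NOT an eigenvector of B*
```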
{ "language": "en", "url": "https://math.stackexchange.com/questions/171422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Divisibility of large number This was a question asked in a competitive exam: $(300^{3000} -1 )$ is divisible by a) $401$ b) $501$ c) $301$ d) $901$ The answer is $301$. Not sure how they arrived at the answer. Can somebody explain ?
Let us consider the polynomial $x^n-1$ where $n$ is even. Then $-1$ is a root of the polynomial, so it is divisible by $x-(-1)=x+1$. Put $x=300$, $n=3000$ (which is even): $x+1=301$ divides $300^{3000}-1$, which gives the answer.
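A machine check (not needed for the proof, but reassuring): Python's three-argument `pow` does modular exponentiation, so all four answer choices can be tested directly.

```python
# 301 divides 300^3000 - 1  iff  300^3000 ≡ 1 (mod 301).
residue = pow(300, 3000, 301)

# The other answer choices fail the same test.
others = {m: pow(300, 3000, m) for m in (401, 501, 901)}
```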
{ "language": "en", "url": "https://math.stackexchange.com/questions/171477", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 0 }
Where are good resources to study combinatorics? I am an undergraduate wiht basic knowledge of combinatorics, but I want to obtain sound knowledge of this topic. Where can I find good resources/questions to practice on this topic? I need more than basic things like the direct question 'choosing r balls among n' etc.; I need questions that make you think and challenge you a bit.
Apart from the book suggestions given here, you may also like to take a look at MIT OCW's Combinatorics: The Fine Art of Counting. Although designed for high school students, it has a few problems that might make you think.
{ "language": "en", "url": "https://math.stackexchange.com/questions/171543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 8, "answer_id": 3 }
Jensen's inequality for integrals What nice ways do you know in order to prove Jensen's inequality for integrals? I'm looking for some various approaching ways. Supposing that $\varphi$ is a convex function on the real line and $g$ is an integrable real-valued function we have that: $$\varphi\left(\int_a^b f\right) \leqslant \int_a^b \varphi(f).$$
I like this, maybe it is what you want ... Let $E$ be a separable Banach space, let $\mu$ be a probability measure defined on $E$, let $f : E \to \mathbb R$ be convex and (lower semi-)continuous. Then $$ f\left(\int_E x d\mu(x)\right) \le \int_E f(x)\,d\mu(x) . $$ Of course we assume $\int_E x d\mu(x)$ exists, say for example $\mu$ has bounded support. For the proof, use Hahn-Banach. Write $y = \int_E x d\mu(x)$. The super-graph $S=\{(x,t) : t \ge f(x)\}$ is closed convex. (Closed, because $f$ is lower semicontinuous; convex, because $f$ is convex.) So for any $\epsilon > 0$ by Hahn-Banach I can separate $(y,f(y)-\epsilon)$ from $S$. That is, there is a continuous linear functional $\phi$ on $E$ and a scalar $s$ so that $t \ge \phi(x)+s$ for all $(x,t) \in S$ and $\phi(y)+s > f(y)-\epsilon$. So: $$ f(y) -\epsilon < \phi(y)+s = \phi\left(\int_E x d\mu(x)\right)+s = \int_E (\phi(x)+s) d\mu(x) < \int_E f(x) d\mu(x) . $$ This is true for all $\epsilon > 0$, so we have the conclusion.
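As a quick numerical illustration (my addition — only the scalar, finitely supported special case of the statement above): take $E=\mathbb R$, $\mu$ a measure on three points, and $f=\exp$.

```python
import math

points  = [-1.0, 0.5, 2.0]
weights = [0.2, 0.5, 0.3]        # a probability measure: weights sum to 1

mean = sum(w * x for w, x in zip(weights, points))            # integral of x dmu
lhs  = math.exp(mean)                                         # f(integral of x dmu)
rhs  = sum(w * math.exp(x) for w, x in zip(weights, points))  # integral of f dmu
gap  = rhs - lhs                 # Jensen says this is nonnegative (f convex)
```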
{ "language": "en", "url": "https://math.stackexchange.com/questions/171599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 5, "answer_id": 0 }
Is plugging numbers into the ratio test allowed? So I'm going through the notes for my online summer calculus class, and something struck me as odd about the ratio test: it used variables only, there were no numbers plugged in. For example, in the series $$\sum_{n=1}^\infty\frac{n}{4^n}$$ we have $a_{n+1}=\frac{n+1}{4^{n+1}}$ and $a_n=\frac{n}{4^n}$ as the numerator and denominator in the ratio $ \left | \frac{a_{n+1}}{a_n} \right | $. Now, my notes say that we then plug $a_{n+1}=\frac{n+1}{4^{n+1}}$ and $a_{n}=\frac{n}{4^n}$ directly into the ratio, giving us $$ \left | \frac{\frac{n+1}{4^{n+1}}}{\frac{n}{4^n}} \right | $$ which we simplify down to $$ \left | \frac{n+1}{4n} \right | $$ That ratio is then plugged into the limit and which we find is $\frac{1}{4}$, so we know that the series converges since $\frac{1}{4} < 1$. My question is, why do we solve for the initial ratio algebraically with symbols? Couldn't we just pick an integer $n$ and plug that as well as the integer $n+1$ into the equation to get our ratio? It seems like a lot of extra work that can be simplified.
No, you can't just plug in numbers. The ratio test is supposed to be about $\lim \dfrac{|a_{n+1}|}{|a_{n}|}$, and this limit is the key part. EDIT Let's look into this a bit more. Suppose I have the series whose first thousand terms $a_1, \ldots, a_{1000}$ are $1,2, \ldots, 1000$, i.e. $a_n = n$. But after those terms, the series becomes $a_n = \dfrac{1}{2^n + n}$. If you were to just 'find the ratio' of any two of the first 1000 terms, you would think the series diverges, since you'd get a number that's greater than $1$. And if you were to find the ratio of any two of the subsequent terms, you'd get different ratios for every pair! But the limiting ratio is nonetheless $1/2$, which explains why $\sum a_n$ converges in the end.
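To make the point concrete (my addition): here are the exact ratios for the original series $\sum n/4^n$. Every individual ratio is different — plugging in a single $n$ tells you nothing — and only the limit, $1/4$, matters.

```python
from fractions import Fraction

def a(n):
    return Fraction(n, 4 ** n)      # terms of the series sum n / 4^n

ratios = [a(n + 1) / a(n) for n in range(1, 60)]   # each equals (n+1)/(4n), exactly

first = ratios[0]                   # 1/2: the n = 1 ratio overshoots the limit
decreasing = all(r1 > r2 for r1, r2 in zip(ratios, ratios[1:]))
gap = abs(float(ratios[-1]) - 0.25) # the ratio at n = 59 is already near 1/4
```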
{ "language": "en", "url": "https://math.stackexchange.com/questions/171654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Do equal distance distributions imply equivalence? Let $A$ and $B$ be two binary $(n,M,d)$ codes. We define $a_i = \#\{(w_1,w_2) \in A^2:\:d(w_1,w_2) = i\}$, and same for $b_i$. If $a_i = b_i$ for all $i$, can one deduct that $A$ and $B$ are equivalent, i.e. equal up to permutation of positions and permutation of letters in a fixed position?
As another example I proffer families of codes like Delsarte-Goethals codes. These are Gray images of codes that are linear over $\mathbf{Z}_4$, so they are not linear binary codes (even though they are binary).
{ "language": "en", "url": "https://math.stackexchange.com/questions/171687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A strangely connected subset of $\Bbb R^2$ Let $S\subset{\Bbb R}^2$ (or any metric space, but we'll stick with $\Bbb R^2$) and let $x\in S$. Suppose that all sufficiently small circles centered at $x$ intersect $S$ at exactly $n$ points; if this is the case then say that the valence of $x$ is $n$. For example, if $S=[0,1]\times\{0\}$, every point of $S$ has valence 2, except $\langle0,0\rangle$ and $\langle1,0\rangle$, which have valence 1. This is a typical pattern, where there is an uncountable number of 2-valent points and a finite, possibly empty set of points with other valences. In another typical pattern, for example ${\Bbb Z}^2$, every point is 0-valent; in another, for example a disc, none of the points has a well-defined valence. Is there a nonempty subset of $\Bbb R^2$ in which every point is 3-valent? I think yes, one could be constructed using a typical transfinite induction argument, although I have not worked out the details. But what I really want is an example of such a set that can be exhibited concretely. What is it about $\Bbb R^2$ that everywhere 2-valent sets are well-behaved, but everywhere 3-valent sets are crazy? Is there some space we could use instead of $\Bbb R^2$ in which the opposite would be true?
Here are the details of the transfinite induction argument: Well-order the set of points in $\mathbb{R}^d$ and let $p_\alpha$ denote the point at ordinal index $\alpha$. Then define: $S_{<\alpha} = \bigcup_{\beta \lt \alpha}{S_\beta}$ $S_0 = \{p_0\}$ $S_\alpha = S_{<\alpha}$ when there is a $(d-1)$-sphere centered at $p_\alpha$ intersecting $S_{<\alpha}$ at more than $n$ points. $S_\alpha = S_{<\alpha} \cup \{p_\alpha\}$ otherwise. It can be seen that $\bigcup_{\alpha}{S_\alpha}$ is everywhere-$n$-valent. This is actually a little bit stronger since every point has $n$ neighbors at every distance, not just arbitrarily small ones. It works for any $n$ and also any $d$, but I don't know exactly which metric spaces. I don't know if there is any concrete example. Maybe it is worth considering the valency of plane fractals like the Koch snowflake; perhaps there are concrete examples of everywhere-$2 \cdot n$-valent curves that are wiggly enough to enter and exit arbitrarily small circles multiple times. Because of the Jordan curve theorem, this approach seems less promising for finding everywhere-odd-valency sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/171751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 2, "answer_id": 1 }
Solve for $x$; $\tan x+\sec x=2\cos x;-\infty\lt x\lt\infty$ Solve for $x$; $\tan x+\sec x=2\cos x;-\infty\lt x\lt\infty$ $$\tan x+\sec x=2\cos x$$ $$\left(\dfrac{\sin x}{\cos x}\right)+\left(\dfrac{1}{\cos x}\right)=2\cos x$$ $$\left(\dfrac{\sin x+1}{\cos x}\right)=2\cos x$$ $$\sin x+1=2\cos^2x$$ $$2\cos^2x-\sin x+1=0$$ Edit: $$2\cos^2x=\sin x+1$$ $$2(1-\sin^2x)=\sin x+1$$ $$2\sin^2x+\sin x-1=0$$ $\sin x=a$ $$2a^2+a-1=0$$ $$(a+1)(2a-1)=0$$ $$a=-1,\dfrac{1}{2}$$ $$\arcsin(-1)=-90^\circ=-\dfrac{\pi}{2}$$ $$\arcsin\left(\dfrac{1}{2}\right)=30^\circ=\dfrac{\pi}{6}$$ $$180^\circ-30^\circ=150^\circ=\dfrac{5 \pi}{6}$$ $$x=\dfrac{\pi}{6},-\dfrac{\pi}{2},\dfrac{5 \pi}{6}$$ I actually do not know if those are the only answers considering my range is infinite:$-\infty\lt x\lt\infty$
This is more a comment than an answer, but it is an important comment, since it affects the list of solutions. We seem to have solutions that come from $\sin x=\frac{1}{2}$, and solutions that come from $\sin x=-1$. No problem with the $\sin x=\frac{1}{2}$ stuff. For the record, this gives the solutions $x=\frac{\pi}{6}+2k\pi$ and $x=\frac{5\pi}{6}+2k\pi$, where $k$ ranges over the integers, positive, negative, and $0$. However, there is an issue with $\sin x=-1$. For then $\cos x=0$, and neither $\tan x$ nor $\sec x$ is defined. When we multiply through by $\cos x$, we must remember that if $\cos x=0$ we are multiplying both sides by $0$, so we are not getting an equivalent equation. Thus the "solutions" $-\frac{\pi}{2}+2k\pi$ must be discarded. Remark: If we look at the function $\frac{\sin x+1}{\cos x}$, it turns out that it has a removable singularity at $x=-\frac{\pi}{2}$, and can be made continuous there by defining it to be $0$. In that sense, $x=-\frac{\pi}{2}$ is a solution. However, I doubt it would be accepted as a solution in the context the OP is in.
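A numeric check of the discussion above (my addition): the $\sin x=\frac12$ family really solves the equation, including shifts by $2\pi$, while $x=-\frac\pi2$ lies outside its domain because $\cos x=0$ there.

```python
import math

def residual(x):
    # tan x + sec x - 2 cos x; should vanish at genuine solutions
    return math.tan(x) + 1 / math.cos(x) - 2 * math.cos(x)

# Solutions from sin x = 1/2, including shifts by 2*pi:
solutions = [math.pi / 6, 5 * math.pi / 6,
             math.pi / 6 + 2 * math.pi, 5 * math.pi / 6 - 2 * math.pi]
max_res = max(abs(residual(x)) for x in solutions)

# cos(-pi/2) = 0, so tan x and sec x are undefined at the rejected point:
cos_at_rejected = math.cos(-math.pi / 2)
```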
{ "language": "en", "url": "https://math.stackexchange.com/questions/171792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Is there a proof of Gödel's Incompleteness Theorem without self-referential statements? For the proof of Gödel's Incompleteness Theorem, most versions of proof use basically self-referential statements. My question is, what if one argues that Gödel's Incompleteness Theorem only matters when a formula makes self-reference possible? Is there any proof of Incompleteness Theorem that does not rely on self-referential statements?
There is a weaker version of the first incompleteness theorem that is an almost trivial consequence of an insight from computability theory, namely of the result that there exists a computably enumerable set that is not computable (*). Consequence: the set of true first-order sentences (i.e. true about the 'real' natural number sequence) is not axiomatizable by a c.e. axiom set. Unfortunately, most proofs of ($*$) have themselves a scent of self-referentiality hanging around them. However, you may want to check out 'simple sets'. Simple sets are c.e. and not recursive, and the standard textbook-proof of their existence is, to the best of my knowledge, the argument that comes closest to a non-selfreferential argument for ($*$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/171863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 2 }
limit of an integral with a Lorentzian function We want to calculate the $\lim_{\epsilon \to 0} \int_{-\infty}^{\infty} \frac{f(x)}{x^2 + \epsilon^2} dx $ for a function $f(x)$ such that $f(0)=0$. We are physicist, so the function $f(x)$ is smooth enough!. After severals trials, we have not been able to calculate it except numerically. It looks like the normal Lorentzian which tends to the dirac function, but a $\epsilon$ is missing. We wonder if this integral can be written in a simple form as function of $f(0)$ or its derivatives $f^{(n)}(0)$ in 0. Thank you very much.
Let $f$ be smooth with compact support. Consider the double layer potential (up to a constant) $$ u(x_1,x_2)=-2\pi\int_{-\infty}^{\infty}\frac{\partial}{\partial x_2}\Gamma(x_1-y_1,x_2)f(y_1)\,dy_1= \int_{-\infty}^{\infty} \frac{x_2f(y_1)}{(x_1-y_1)^2 + x_2^2}\, dy_1, $$ where $\Gamma(x)=-\frac1{2\pi}\log|x|$ is a fundamental solution for the Laplace equation. As is known, $u$ is smooth up to the boundary for smooth $f$. We have $$ \frac{\partial u(0,0)}{\partial x_2}= \lim_{\epsilon \to 0+} \frac{u(0,\epsilon)-u(0,0)}\epsilon= \lim_{\epsilon \to 0+} \frac{u(0,\epsilon)-f(0)}\epsilon= \lim_{\epsilon \to 0+}\int_{-\infty}^{\infty} \frac{ f(y_1)}{y_1^2 + \epsilon^2}\, dy_1, $$ which is the required value. To calculate it note that $$ \frac{\partial u(x_1,x_2)}{\partial x_2} = -2\pi\int_{-\infty}^{\infty}\frac{\partial^2}{\partial x_2^2}\Gamma(x_1-y_1,x_2)f(y_1)\,dy_1= $$ $$ 2\pi\int_{-\infty}^{\infty}\frac{\partial^2}{\partial y_1^2}\Gamma(x_1-y_1,x_2)f(y_1)\,dy_1= 2\pi\int_{-\infty}^{\infty}\Gamma(x_1-y_1,x_2)f''(y_1)\,dy_1 $$ because $\Gamma$ satisfies the Laplace equation. The last integral converges uniformly for $|x|\le 1$ so taking the limit $x\to0$ gives $$ \frac{\partial u(0,0)}{\partial x_2}=2\pi\int_{-\infty}^{\infty}\Gamma(0-y,0)f''(y)\,dy=-\int_{-\infty}^{\infty}\log|y|f''(y)\,dy. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/171910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Evaluating $\int(2x^2+1)e^{x^2}dx$ $$\int(2x^2+1)e^{x^2}dx$$ The answer of course: $$\int(2x^2+1)e^{x^2}\,dx=xe^{x^2}+C$$ But what kind of techniques we should use with problem like this ?
As @Peter noted above, you can integrate this by separating the integrand and integrating by parts: \begin{align} \int(2x^2+1)e^{x^2}dx &=\int2x^2e^{x^2}dx+\int e^{x^2}dx\\ &=\int x(2xe^{x^2})dx+\int e^{x^2}dx\\ &= \int x\left(\frac{d}{dx}e^{x^2}\right)dx\ + \int e^{x^2}dx\\ &= xe^{x^2}-\int e^{x^2}dx+ \int e^{x^2}dx= xe^{x^2} + C \end{align}
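A quick way to trust the result (my addition): differentiate the claimed antiderivative numerically and compare with the integrand.

```python
import math

def F(x):  # the claimed antiderivative x e^{x^2}
    return x * math.exp(x * x)

def f(x):  # the integrand (2x^2 + 1) e^{x^2}
    return (2 * x * x + 1) * math.exp(x * x)

# F'(x) should equal f(x); check with central difference quotients.
h = 1e-6
rel_errs = [abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) / abs(f(x))
            for x in (-1.5, -0.5, 0.0, 0.7, 2.0)]
worst = max(rel_errs)
```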
{ "language": "en", "url": "https://math.stackexchange.com/questions/171960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
multiple choice summation problem Let $$X = \frac{1}{1001} + \frac{1}{1002} + \frac{1}{1003} + \cdots + \frac{1}{3001}.$$ Then (A) $X < 1$ (B) $X > 3/2$ (C) $1 < X < 3/2$ (D) none of the above holds. I assume that the answer is the third choice $1<X<3/2$. I integrate out $1/x$ in the interval $(1001, 3001)$ and get a result that satisfies only the choice C. Is this a Riemann sum? Please help.
With respect to your Riemann sum approach: the idea is that for positive, decreasing functions, the Riemann sum and the integral closely approximate each other, more or less as in the proof of the integral test of convergence. If you'd like another sort of approach, we could proceed naively. Set aside the last term $\frac1{3001}$ for a moment and separate the remaining $2000$ terms into $250$-element blocks, $\frac{1}{1001}$ to $\frac{1}{1250}$ in the first block $B_1$, $\frac{1}{1251}$ to $\frac{1}{1500}$ in the second block $B_2$, and so on. We'll have $8$ blocks. Note that $\frac{1}{5} = \frac{250}{1250} \leq B_1 \leq \frac{250}{1000} = \frac{1}{4}$. Similarly, we get that $\frac{1}{i + 4} \leq B_i \leq \frac{1}{i+3}$ for all of our blocks. This means that our sum has upper and lower bounds: $$ 1 < \frac{1}{5} + \frac{1}{6} + \dots + \frac{1}{12} \leq B_1 + \dots + B_{8} \leq X \leq \frac{1}{4} + \dots + \frac{1}{11} + \frac{1}{3001}< \frac{3}{2}$$ (the left sum is $\approx 1.02$ and the right sum is $\approx 1.19$). And this gives the desired inequality.
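The bound is easy to confirm numerically (my addition); the sum is in fact close to the integral estimate $\ln(3001/1000)\approx 1.099$.

```python
import math

# X = 1/1001 + 1/1002 + ... + 1/3001, summed with compensated summation.
X = math.fsum(1.0 / n for n in range(1001, 3002))

estimate = math.log(3001 / 1000)    # the Riemann-sum / integral comparison
```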
{ "language": "en", "url": "https://math.stackexchange.com/questions/171999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to compare a sum of uniform RVs with a uniform RV? Let $(X_n)_{n\geq1}$ be a sequence of i.i.d. $\sim \text{Uni}([0,1])$ distributed random variables. I want to show that $$\mathbb{P}\big(n^2 X_{n+1} < \sum\limits_{k=1}^n X_k ~\text{for infinitely many } n \in \mathbb{N} \big)=1 $$ This cries out for Borel Cantelli, but I don't know how to use this theorem when there is more than one random variable involved. By what means can I compare the two sides?
This is an application of Lévy's conditional form of Borel-Cantelli lemma. The result is often stated as follows. Consider a sequence of events $(A_n)_n$ which is adapted to a given filtration $(\mathcal F_n)_n$. Then the random series $\sum\limits_n\mathbf 1_{A_n}$ converges/diverges almost surely if and only if the random series $\sum\limits_n\mathrm P(A_{n+1}\mid\mathcal F_{n})$ converges/diverges almost surely. Here, consider $\mathcal F_n=\sigma(X_k;k\leqslant n)$ and $A_{n+1}=[n^2X_{n+1}\leqslant S_n]$ with $S_n=X_1+\cdots+X_n$. Then $\mathrm P(A_{n+1}\mid\mathcal F_{n})=\frac1{n^2}S_n$. By the strong law of large numbers, $\frac1nS_n\to\mathrm E(X_1)=\frac12$ hence, almost surely, $S_n\gt\frac14n$ for every $n$ large enough. This proves that $\sum\limits_n\frac1{n^2}S_n$ diverges almost surely. Hence, almost surely, infinitely many events $A_n$ occur, QED.
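A simulation (my addition) matches this: the running sum of conditional probabilities $\sum_n S_n/n^2$ grows like $\tfrac12\log N$, so it is well past $1$ by $N=20000$, and the event keeps occurring along the way.

```python
import random

random.seed(0)                       # fixed seed: the run is deterministic
N = 20000
X = [random.random() for _ in range(N + 1)]   # X[0] = X_1, X[1] = X_2, ...

S = 0.0
hits = 0
cond_prob_sum = 0.0                  # sum of P(A_{n+1} | F_n) = S_n / n^2
for n in range(1, N + 1):
    S += X[n - 1]                    # S = S_n = X_1 + ... + X_n
    cond_prob_sum += S / (n * n)     # S_n <= n, so each term is at most 1/n
    if n * n * X[n] < S:             # the event  n^2 X_{n+1} < S_n
        hits += 1
```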
{ "language": "en", "url": "https://math.stackexchange.com/questions/172062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of Chebyshev Inequality I was going through the proof of the Chebyshev Inequality here . And I seem to be facing some trouble in the approximation stage. I can't seem to follow how $\epsilon$ has been approximated to $(t-\mu)$.
$\epsilon$ is not being approximated by $t - \mu$. What is happening is that for $t$ outside the interval $(\mu - \epsilon, \mu + \epsilon)$ we have $\epsilon^2 \leq (t - \mu)^2$, and hence the two integrals can be underestimated. The reasoning "since $t \leq \mu - \epsilon \Rightarrow \epsilon \leq | t - \mu | \Rightarrow \epsilon^2 \leq (t - \mu)^2$" on the page you refer to applies to the rewriting of the left integral, i.e., to $t$ on the left of the interval $(\mu - \epsilon, \mu + \epsilon)$. A similar reasoning applies to the right integral, i.e., to $t$ on the right of that interval.
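If it helps, here is the inequality checked exactly (my addition) for a toy discrete distribution, where the "integrals" are finite sums and the underestimation step can be followed term by term.

```python
from fractions import Fraction

# A small discrete distribution: P(X=0)=1/4, P(X=2)=1/2, P(X=5)=1/4.
dist = [(Fraction(0), Fraction(1, 4)),
        (Fraction(2), Fraction(1, 2)),
        (Fraction(5), Fraction(1, 4))]

mu  = sum(x * p for x, p in dist)                 # mean, 9/4
var = sum((x - mu) ** 2 * p for x, p in dist)     # variance, 51/16

def tail(eps):
    # P(|X - mu| >= eps)
    return sum(p for x, p in dist if abs(x - mu) >= eps)

# Chebyshev: tail(eps) <= var / eps^2, checked exactly for several eps.
ok = all(tail(Fraction(e)) <= var / Fraction(e) ** 2 for e in (1, 2, 3))
```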
{ "language": "en", "url": "https://math.stackexchange.com/questions/172116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Proving:$\tan(20^{\circ})\cdot \tan(30^{\circ}) \cdot \tan(40^{\circ})=\tan(10^{\circ})$ how to prove that : $\tan20^{\circ}.\tan30^{\circ}.\tan40^{\circ}=\tan10^{\circ}$? I know how to prove $ \frac{\tan 20^{0}\cdot\tan 30^{0}}{\tan 10^{0}}=\tan 50^{0}, $ in this way: $ \tan{20^0} = \sqrt{3}.\tan{50^0}.\tan{10^0}$ $\Longleftrightarrow \sin{20^0}.\cos{50^0}.\cos{10^0} = \sqrt{3}.\sin{50^0}.\sin{10^0}.\cos{20^0}$ $\Longleftrightarrow \frac{1}{2}\sin{20^0}(\cos{60^0}+\cos{40^0}) = \frac{\sqrt{3}}{2}(\cos{40^0}-\cos{60^0}).\cos{20^0}$ $\Longleftrightarrow \frac{1}{4}\sin{20^0}+\frac{1}{2}\sin{20^0}.\cos{40^0} = \frac{\sqrt{3}}{2}\cos{40^0}.\cos{20^0}-\frac{\sqrt{3}}{4}.\cos{20^0}$ $\Longleftrightarrow \frac{1}{4}\sin{20^0}-\frac{1}{4}\sin{20^0}+\frac{1}{4}\sin{60^0} = \frac{\sqrt{3}}{4}\cos{60^0}+\frac{\sqrt{3}}{4}\cos{20^0}-\frac{\sqrt{3}}{4}\cos{20^0}$ $\Longleftrightarrow \frac{\sqrt{3}}{8} = \frac{\sqrt{3}}{8}$ Could this help to prove the first one and how ?Do i just need to know that $ \frac{1}{\tan\theta}=\tan(90^{\circ}-\theta) $ ?
First compute $\tan 40^\circ\tan 20^\circ$: $$\tan 40^\circ\tan 20^\circ=\frac{\sin 40^\circ\sin 20^\circ}{\cos 40^\circ\cos 20^\circ}$$ $$=\frac{4\sin 40^\circ(\sin 20^\circ)^2}{4\cos 40^\circ\cos 20^\circ\sin 20^\circ}$$ $$=\frac{2\sin 40^\circ(1-\cos 40^\circ)}{2\cos 40^\circ\sin 40^\circ}$$ $$=\frac{2\sin 40^\circ-2\sin 40^\circ\cos 40^\circ}{\sin 80^\circ}$$ $$=\frac{2\sin 40^\circ-\sin 80^\circ}{\cos10^\circ}$$ $$=\frac{\sin 40^\circ-(\sin 80^\circ-\sin 40^\circ)}{\cos10^\circ}$$ $$=\frac{\sin 40^\circ-2\cos 60^\circ\sin 20^\circ}{\cos10^\circ}$$ $$=\frac{\sin 40^\circ-\sin 20^\circ}{\cos10^\circ}$$ $$=\frac{2\cos 30^\circ\sin 10^\circ}{\cos10^\circ}$$ $$=2\cos 30^\circ\tan 10^\circ=\cot 30^\circ\tan 10^\circ,$$ since $2\cos 30^\circ=\sqrt3=\cot 30^\circ$. Hence, $$\tan 20^\circ\tan 30^\circ\tan 40^\circ$$ $$=\tan 30^\circ\cot 30^\circ\tan 10^\circ$$ $$=\tan 10^\circ$$
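Numerical confirmation of both the final identity and the intermediate step $\tan 40^\circ\tan 20^\circ=\cot 30^\circ\tan 10^\circ$ (my addition):

```python
import math

deg = math.pi / 180
lhs = math.tan(20 * deg) * math.tan(30 * deg) * math.tan(40 * deg)
rhs = math.tan(10 * deg)

# The key intermediate identity: tan 40 * tan 20 = cot 30 * tan 10
step = (math.tan(40 * deg) * math.tan(20 * deg)
        - math.tan(10 * deg) / math.tan(30 * deg))
```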
{ "language": "en", "url": "https://math.stackexchange.com/questions/172182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 3 }
Is there a "natural" topology on powersets? Let's say a topology T on a set X is natural if the definition of T refers to properties of (or relationships on) the members of X, and artificial otherwise. For example, the order topology on the set of real numbers is natural, while the discrete topology is artificial. Suppose X is the powerset of some set Y. We know some things about X, such as that some members of X are subsets of other members of X. This defines an order on the members of X in terms of the subset relationship. But the order is not linear so it does not define an order topology on X. I haven't found a topology on powersets that I would call natural. Is there one?
What about the following: A set $S\subset P(X)$ is open if for any $x\in S$ it also contains all $y\subset x$.
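One can verify this proposal by brute force for a small base set (my addition): with $|X|=3$ there are exactly $20$ downward-closed families (the number of monotone families on three elements), and they contain $\varnothing$ and $P(X)$ and are closed under unions and intersections, so they do form a topology.

```python
from itertools import combinations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

X = frozenset({0, 1, 2})
P = subsets(X)                         # the 8 points of the space P(X)

def is_open(family):
    # proposed topology: S is open iff with each x it contains every y subset of x
    return all(y in family for x in family for y in subsets(x))

families = [frozenset(c) for r in range(len(P) + 1) for c in combinations(P, r)]
opens = [f for f in families if is_open(f)]

axioms = (frozenset() in opens and frozenset(P) in opens
          and all((a | b) in opens for a in opens for b in opens)
          and all((a & b) in opens for a in opens for b in opens))
```

(Arbitrary unions reduce to pairwise ones here because everything is finite.)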
{ "language": "en", "url": "https://math.stackexchange.com/questions/172257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 3 }
Proof of $(A - B) - C = A - (B \cup C)$ I have several of these types of problems, and it would be great if I can get some help on one so I have a guide on how I can solve these. I tried asking another problem, but it turned out to be a special case problem, so hopefully this one works out normally. The question is: Prove $(A - B) - C = A - (B \cup C)$ I know I must prove both sides are not equivalent to each other to complete this proof. Here's my shot: We start with left side. * *if $x \in C$, then $x \notin A$ and $x \in B$. *So $x \in (B \cup C)$ *So $A - (B \cup C)$ Is this the right idea? Should I then reverse the proof to prove it the other way around, or is that unnecessary? Should it be more formal? Thanks!
In a variation of the currently approved answer, I would prove this through the following simple calculation: for every $x$, $$ \begin{array}{ll} & x \in (A - (B \cup C)) \\ \equiv & \;\;\;\text{"definition of $-$"} \\ & x \in A \land \lnot (x \in (B \cup C)) \\ \equiv & \;\;\;\text{"definition of $\cup$"} \\ & x \in A \land \lnot (x \in B \lor x \in C) \\ \equiv & \;\;\;\text{"logic"} \\ & x \in A \land \lnot (x \in B) \land \lnot (x \in C) \\ \equiv & \;\;\;\text{"definition of $-$, to work towards our goal"} \\ & x \in (A - B) \land \lnot (x \in C) \\ \equiv & \;\;\;\text{"definition of $-$"} \\ & x \in ((A - B) - C) \\ \end{array} $$ By the definition of set equality, this proves the given theorem.
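Set identities like this one can also be checked exhaustively over a small universe (my addition) — no substitute for the proof above, but a good way to catch mistakes:

```python
from itertools import combinations

U = list(range(5))
# all 32 subsets of a 5-element universe
all_sets = [set(c) for r in range(len(U) + 1) for c in combinations(U, r)]

# check (A - B) - C == A - (B ∪ C) for every triple of subsets
holds = all((A - B) - C == A - (B | C)
            for A in all_sets for B in all_sets for C in all_sets)
```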
{ "language": "en", "url": "https://math.stackexchange.com/questions/172292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 4 }
Another question about a proof in Atiyah-Macdonald I have a question about the following proof in Atiyah-Macdonald: 1:Why is $\Omega$ infinite? Are all algebraically closed fields infinite? 2: How does the existence of $\xi$ follow from $\Omega$ being infinite? Thanks.
1.) Yes, algebraically closed fields are infinite: For, if $\mathbb F$ is a finite field, we can consider the polynomial $p(X) = 1 + \prod_{a \in \mathbb F}(X-a)$, which has no zeros, as $p(\alpha) = 1 + 0 = 1$ for each $\alpha \in \mathbb F$; so $\mathbb F$ is not algebraically closed. 2.) $\xi$ is chosen such that $\xi$ is not a zero of $q(X) = \sum_{i=0}^n f(a_i)X^{n-i}$; as $f(a_0) \ne 0$, $q$ has degree $n$ and hence at most $n$ different zeros. As $\Omega$ has, by 1.), infinitely many elements, there is a $\xi \in \Omega$ with $0 \ne q(\xi) = \sum_{i=0}^n f(a_i)\xi^{n-i}$.
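The argument in 1.) is easy to watch in action (my addition): over a finite field the polynomial $1+\prod_{a}(X-a)$ evaluates to $1$ everywhere, since the product always contains the factor $(\alpha-\alpha)=0$. The sketch below only checks prime fields $\mathbb F_p$, but the same computation works in any finite field.

```python
def values_of_p(p):
    # evaluate 1 + prod_{a in F_p} (alpha - a) at every alpha in F_p
    vals = []
    for alpha in range(p):
        prod = 1
        for a in range(p):
            prod = prod * (alpha - a) % p
        vals.append((1 + prod) % p)
    return vals

results = {p: values_of_p(p) for p in (2, 3, 5, 7, 11)}
never_zero = all(all(v == 1 for v in vals) for vals in results.values())
```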
{ "language": "en", "url": "https://math.stackexchange.com/questions/172352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Intertwiner in german? What is the best way to translate the mathematical term ''intertwiner'' (between two representations of a group) into German?
Excuse me if mine is a very low-brow approach. Let us look at the English Wikipedia page Equivariant Maps (cf. here), where there is the definition of intertwiners as a special kind of equivariant maps. Then let us switch to the German version, and we find that, in the same context, the term äquivariante Abbildung is employed. I hope it helps. Bye. Edit Added because the OP needs not only translations but references to actual usage. In the German literature you'll find it also in the abbreviated form $G$-Abbildung, as for example here in Tammo tom Dieck's Topologie. (In order to overcome the difficulties you report in comments, here is the exact physical reference: Chapter I, Section 10, second paragraph on page 41.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/172408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
For what $(n,k)$ there exists a polynomial $p(x) \in F_2[x]$ s.t. $\deg(p)=k$ and $p$ divides $x^n-1$? For what $(n,k)$ there exists a polynomial $p(x) \in F_2[x]$ s.t. $\deg(p)=k$ and $p$ divides $x^n-1$? Motivation: if exists $p(x)$, then ideal generated by $p(x)$ is "cyclic error correcting code". It seems to me not for all $n,k$ it exists, but MatLab generates some polynoms for cyclic codes for all $n,k$ I tried. So something is wrong with my understanding. The question probably elementary, sorry.
One version of this problem has been studied very, very recently by Lola Thompson (arXiv) in her PhD thesis. A value of $n$ for which $x^n-1$ admits divisors (over $\mathbb Q$) of all degrees $k\le n$ is called $\phi$-practical. By earlier work of Thompson, these are about as common as the primes: the number of $\phi$-practical numbers up to $x$ has order of magnitude $x/\log x$. So for most $n$, there is some $k \le n$ such that $x^n-1$ does not have a divisor of degree $k$. Of course, working over $\mathbb F_2[x]$, there are more divisors available. Empirically, this does not seem to change things by more than a constant factor (there are tables of counts for $\mathbb F_2$, $\mathbb F_3$, $\mathbb F_5$ in the above preprint). However, a proof of zero density is only known under the assumption of GRH. Thompson and Pollack have a conditional result (also on GRH) for the converse problem: for a fixed $k \ge 3$, the number of $n \le x$ for which $x^n-1$ has a divisor of degree $k$ over $\mathbb F_2$ is $\ll x/(\log k)^{2/35}$. So for very large $k$ there are somewhat fewer pairs $(n,k)$ with the given property.
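For small $n$ one can see the phenomenon directly by brute force over $\mathbb F_2$ (my addition — polynomials encoded as bitmasks, remainders computed by shift-and-XOR). For $n=7$, $x^7+1=(x+1)(x^3+x+1)(x^3+x^2+1)$, so the achievable divisor degrees are exactly $\{0,1,3,4,6,7\}$: degrees $2$ and $5$ are missing.

```python
def gf2_mod(num, den):
    # remainder of num modulo den over GF(2); bit i is the coefficient of x^i
    while num and num.bit_length() >= den.bit_length():
        num ^= den << (num.bit_length() - den.bit_length())
    return num

def divisor_degrees(n):
    xn1 = (1 << n) | 1                # x^n + 1 = x^n - 1 over GF(2)
    degs = set()
    for k in range(n + 1):
        for low in range(1 << k):
            p = (1 << k) | low        # monic polynomial of degree k
            if gf2_mod(xn1, p) == 0:
                degs.add(k)
                break
    return degs

d3 = divisor_degrees(3)   # x^3+1 = (x+1)(x^2+x+1): all degrees occur
d7 = divisor_degrees(7)   # degrees 2 and 5 are missing
```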
{ "language": "en", "url": "https://math.stackexchange.com/questions/172468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
prove that $x^2 + y^2 = z^4$ has infinitely many solutions with $(x,y,z)=1$ Prove that $x^2 + y^2 = z^4$ has infinitely many solutions with $(x,y,z)=1$. Do I use the terms $x= r^2 - s^2$, $y = 2rs$, and $z = r^2 + s^2$ to prove this problem? Thanks for any help.
Hint Let $w=z^2$. Then $$x^2+y^2=w^2$$ Solve this Pytagorean equation, and prove that infinitely many $w$ are perfect squares. With your notation $w=r^2+s^2$ so you need $r,s$ to satisfy $r^2+s^2=z^2$...
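Making the hint fully explicit (my addition): compose the Pythagorean parametrization with itself. Each primitive triple $r^2+s^2=z^2$ (with $r=m^2-n^2$, $s=2mn$, $z=m^2+n^2$) produces a primitive solution $x=|r^2-s^2|$, $y=2rs$ of $x^2+y^2=z^4$, and distinct $(m,n)$ give infinitely many of them.

```python
from math import gcd

def solutions(limit):
    sols = []
    for m in range(2, limit):
        for n in range(1, m):
            if gcd(m, n) == 1 and (m - n) % 2 == 1:    # primitive triple (r, s, z)
                r, s, z = m * m - n * n, 2 * m * n, m * m + n * n
                x, y = abs(r * r - s * s), 2 * r * s    # x^2 + y^2 = (r^2+s^2)^2 = z^4
                sols.append((x, y, z))
    return sols

sols = solutions(10)
all_valid = all(x * x + y * y == z ** 4 and gcd(gcd(x, y), z) == 1
                for x, y, z in sols)
```

The smallest example is $(x,y,z)=(7,24,5)$: $7^2+24^2=625=5^4$.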
{ "language": "en", "url": "https://math.stackexchange.com/questions/172539", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Prove $ (r\sin A \cos A)^2+(r\sin A \sin A)^2+(r\cos A)^2=r^2$ How would I verify the following trig identity? $$(r\sin A \cos A)^2+(r\sin A \sin A)^2+(r\cos A)^2=r^2$$ My work thus far is $$(r^2\cos^2A\sin^2A)+(r^2\sin^2A\sin^2A)+(r^2\cos^2A)$$ But how would I continue? My math skills fail me.
Oh I didn't read robjohn's answer carefully before making this colourful answer.. I will leave it here anyways. To continue on what you have: $$ (\color{red}{r^2}\cos^2A\sin^2A)+(\color{red}{r^2}\sin^2A\sin^2A)+(\color{red}{r^2}\cos^2A) \\ = \color{red}{r^2}( \cos^2A\color{blue}{\sin^2A}+\sin^2A\color{blue}{\sin^2A}+\cos^2A) \\ = r^2( (\color{red}{\cos^2A+\sin^2A})\color{blue}{\sin^2A}+\cos^2A) \\ = r^2( (\color{red}{1})\sin^2A+\cos^2A) \\ = r^2( \color{red}{\cos^2A+\sin^2A}) \\ = r^2( \color{red}{1} ) \\ = r^2 $$
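For what it's worth, a numerical spot check of the identity (my addition):

```python
import math

def lhs(r, A):
    # (r sin A cos A)^2 + (r sin A sin A)^2 + (r cos A)^2
    return ((r * math.sin(A) * math.cos(A)) ** 2
            + (r * math.sin(A) * math.sin(A)) ** 2
            + (r * math.cos(A)) ** 2)

deviation = max(abs(lhs(r, A) - r * r)
                for r in (0.5, 1.0, 3.2)
                for A in (0.1, 1.0, 2.5, 4.0))
```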
{ "language": "en", "url": "https://math.stackexchange.com/questions/172607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there a monotonic function discontinuous over some dense set? Can we construct a monotonic function $f : \mathbb{R} \to \mathbb{R}$ such that there is a dense set in some interval $(a,b)$ for which $f$ is discontinuous at all points in the dense set? What about a strictly monotonic function? My intuition tells me that such a function is impossible. Here is a rough sketch of an attempt at proving that such a function does not exist: we could suppose a function satisfies these conditions. Take an $\epsilon > 0$ and two points $x,y$ in this dense set such that $x<y$. Then, $f(x)<f(y)$ because if they are equal, then the function is constant at all points in between, and there is another element of $X$ between $x$ and $y$, which would be a contradiction. Take $f(y)-f(x)$. By the Archimedean property of the reals, $f(y)-f(x)<n\epsilon$ for some $n$. However, after this point, I am stuck. Could we somehow partition $(x,y)$ into $n$ subintervals and conclude that there must be some point on the dense set that is continuous?
Such a function is possible. Let $\Bbb Q=\{q_n:n\in\Bbb N\}$ be an enumeration of the rational numbers, and define $$f:\Bbb R\to\Bbb R:x\mapsto\sum_{q_n\le x}\frac1{2^n}\;.\tag{1}$$ The series $\sum_{n\ge 0}\frac1{2^n}$ is absolutely convergent, so $(1)$ makes sense. If $x<y$, there is some rational $q_n\in(x,y)$, and clearly $f(y)\ge f(x)+\frac1{2^n}$, so $f$ is monotone increasing. However, $f$ is discontinuous at every rational: $$\lim_{x\to {q_n}^-}f(x)=\sum_{q_k<q_n}\frac1{2^k}<\sum_{q_k\le q_n}\frac1{2^k}=f(q_n)\;.$$ Thus, $f$ is discontinuous on a set that is dense in $\Bbb R$ (and in every open interval of $\Bbb R$).
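The construction can be illustrated numerically. The sketch below (my own illustration, not from the answer) truncates the sum to the rationals in $[0,1]$ with denominator at most $11$, in an arbitrary fixed enumeration. The truncated $f$ is still monotone increasing, and it jumps by at least $2^{-n}$ at the enumerated rational $q_n$:

```python
from fractions import Fraction

# a finite enumeration q_0, q_1, ... of some rationals in [0, 1]
# (any enumeration works; this one is only for illustration)
rationals = []
seen = set()
for den in range(1, 12):
    for num in range(0, den + 1):
        q = Fraction(num, den)
        if q not in seen:
            seen.add(q)
            rationals.append(q)

def f(x):
    # truncated version of f(x) = sum over {n : q_n <= x} of 2^(-n)
    return sum(Fraction(1, 2 ** n) for n, q in enumerate(rationals) if q <= x)

# monotone increasing on a sample grid
xs = [Fraction(i, 100) for i in range(101)]
for a, b in zip(xs, xs[1:]):
    assert f(a) <= f(b)

# jump at an enumerated rational: f(q_n) - f(q_n - eps) >= 2^(-n)
n = 5
q = rationals[n]
eps = Fraction(1, 10 ** 6)
assert f(q) - f(q - eps) >= Fraction(1, 2 ** n)
```

Exact `Fraction` arithmetic avoids any floating-point ambiguity about whether $q_n\le x$.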
{ "language": "en", "url": "https://math.stackexchange.com/questions/172753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "35", "answer_count": 2, "answer_id": 0 }
Proving $\left\lfloor \frac{x}{ab} \right\rfloor = \left\lfloor \frac{\left\lfloor \frac{x}{a} \right\rfloor}{b} \right\rfloor$ for $a,b>1$ I'm trying to prove rigorously the following: $$\left\lfloor \frac{x}{ab} \right\rfloor = \left\lfloor \frac{\left\lfloor \frac{x}{a} \right\rfloor}{b} \right\rfloor$$ for integers $a,b \geq 1$ and real $x$. So far I haven't gotten far. It's enough to prove this instead: $$\left\lfloor \frac{z}{c} \right\rfloor = \left\lfloor \frac{\lfloor z \rfloor}{c} \right\rfloor$$ for integers $c \geq 1$ and real $z$ since we can just put $z=\lfloor x/a \rfloor$ and $c=b$.
It is trivial when done universally, i.e. using the universal definition of floor $\begin{align} k\,&\le\, \lfloor x\rfloor \color{#c00}{\iff} k\,\le\, x,\quad {\rm for\ all}\,\ k\in\Bbb Z\\[1em] {\bf Lemma}\qquad\quad\color{#0a0}{\lfloor r/n\rfloor}\,& =\, \lfloor{\lfloor r\rfloor}/n\rfloor\ \ {\rm for}\ \ 0<n\in\Bbb Z,\,\ r\in\Bbb R\\[.6em] {\bf Proof}\qquad\qquad\quad\ \ \ k \,&\le \lfloor{\lfloor r\rfloor}/n\rfloor\\[.4em] \color{#c00}\iff\quad k\ & \le\ \:{\lfloor r\rfloor}/n\\[.2em] \overset{\times\, n}\iff\ \ nk\ &\le\ \ \lfloor r\rfloor\qquad {\rm by}\,\ n>0\\[.4em] \color{#c00}\iff\ \ nk\ &\le\,\ \ \ r\qquad\ \ {\rm by}\,\ n\in\Bbb Z\\[.2em] \overset{\div\,n}\iff\ \ \ \ k\ &\le\:\ \ \ r/n\quad\ {\rm by}\,\ n>0\\[.4em] \color{#c00}\iff\ \ \ \ k\ &\le\ \ \color{#0a0}{\lfloor r/n\rfloor}\quad {\small\bf QED} \end{align}$ Yours is special case $\ r = x/a,\,\ n = b.$ Re: universality: see the links in comments below for more general (category-theoretic) viewpoints.
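For readers who want a concrete check to complement the universal-property proof, the identity can be tested with exact rational arithmetic (this sketch is mine, not part of the answer):

```python
from fractions import Fraction
import math
import random

random.seed(0)
for _ in range(2000):
    a = random.randint(1, 20)
    b = random.randint(1, 20)
    # exact rational x, including negative values
    x = Fraction(random.randint(-10**6, 10**6), random.randint(1, 999))
    lhs = math.floor(x / (a * b))
    rhs = math.floor(Fraction(math.floor(x / a), b))
    assert lhs == rhs
```

`math.floor` on a `Fraction` is exact, so no floating-point rounding can blur the boundary cases near integers.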
{ "language": "en", "url": "https://math.stackexchange.com/questions/172823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Units in number fields with complex embeddings Assume that we have an algebraic number field with ring of integers $o$, and with a complex embedding $\iota$. What can be said about the image $\iota(o^\times)$ of the unit group under $\iota$? Is it discrete? Is it infinite?
As has been noted in comments, the important result here is Dirichlet's Unit Theorem. E.g., (taken from Daniel Marcus's Number Fields): Dirichlet's Unit Theorem. Let $U$ be the group of units in a number ring $\mathcal{O}_K = \mathbb{A}\cap K$ (where $\mathbb{A}$ represents the ring of all algebraic integers). Let $r$ and $2s$ denote the number of real and non-real embeddings of $K$ in $\mathbb{C}$. Then $U$ is the direct product $W\times V$, where $W$ is a finite cyclic group consisting of the roots of $1$ in $K$, and $V$ is a free abelian group of rank $r+s-1$. In particular, there is some set of $r+s-1$ units, $u_1,\ldots,u_{r+s-1}$ of $\mathcal{O}_K$, called a fundamental system of units, such that every element of $V$ is a product of the form $$u_1^{k_1}\cdots u_{r+s-1}^{k_{r+s-1}},\qquad k_i\in\mathbb{Z},$$ and the exponents are uniquely determined for a given element of $V$.
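To make the theorem concrete for the question asked, here is a small hand-rolled illustration (my own, not from the answer), using $K=\mathbb{Q}(\sqrt[3]{2})$, where $r=s=1$ and so the unit rank is $1$. The element $u=\sqrt[3]{2}-1$ has norm $a^3+2b^3+4c^3-6abc=1$, so all its powers are units, and their images under the real embedding $\sigma$ shrink toward $0$. Since $|\sigma(v)|\,|\iota(v)|^2=|N(v)|=1$ for every unit $v$, the complex-embedding images of $u^{-n}$ shrink as well: the image of the unit group is infinite and not discrete.

```python
CBRT2 = 2.0 ** (1.0 / 3.0)

def mul(p, q):
    # multiply a1 + b1*t + c1*t^2 and a2 + b2*t + c2*t^2, where t^3 = 2
    a1, b1, c1 = p
    a2, b2, c2 = q
    return (a1 * a2 + 2 * (b1 * c2 + c1 * b2),
            a1 * b2 + b1 * a2 + 2 * c1 * c2,
            a1 * c2 + b1 * b2 + c1 * a2)

def norm(p):
    a, b, c = p
    return a ** 3 + 2 * b ** 3 + 4 * c ** 3 - 6 * a * b * c

u = (-1, 1, 0)            # u = cbrt(2) - 1, a unit: norm(u) = 1
w = (1, 0, 0)
real_images = []
for _ in range(10):
    w = mul(w, u)
    assert norm(w) == 1   # powers of a unit are units
    a, b, c = w
    real_images.append(a + b * CBRT2 + c * CBRT2 ** 2)

# real-embedding images of u^n tend to 0 (u is about 0.26); since
# |sigma(v)| * |iota(v)|^2 = 1 for units v, the complex-embedding
# images of u^(-n) have modulus sqrt(real_images[n-1]) -> 0 as well
assert all(0 < x < 1 for x in real_images)
assert real_images[-1] < 1e-5
```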
{ "language": "en", "url": "https://math.stackexchange.com/questions/172878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can continuity of inverse be omitted from the definition of topological group? According to Wikipedia, a topological group $G$ is a group and a topological space such that $$ (x,y) \mapsto xy$$ and $$ x \mapsto x^{-1}$$ are continuous. The second requirement follows from the first one, no? (by taking $y=0, x=-x$ in the first requirement) So we can drop it in the definition, right?
"Continuity of $f\colon G\times G\to G$, $f(x,y)=xy$" means that the function is continuous relative to the topology of $G$ and the induced product topology of $G\times G$. It is true that when you have a continuous function $g\colon X\times Y\to Z$, where $X$ and $Y$ are topological spaces and the topology on $X\times Y$ is the product topology, then for each fixed $x\in X$ the induced map $g_x\colon Y\to Z$ given by $g_x(y) = g(x,y)$, and for each fixed $y\in Y$, the induced map $g_y\colon X\to Z$ given by $g_y(x) = g(x,y)$, are continuous. However, you cannot obtain continuity of $h(x)=x^{-1}$ here in that way. You need the map to take $x$ as an input, and produces $x^{-1}$ as an output. Your suggestion is to actually take the function $m_{e}(f(x))$, where $m$ is the multiplication map, $f$ is the inversion map, and $e$ is the identity; in order to deduce that this map is continuous from the continuity of $m$, you essentially have to show that $f$ itself is continuous... which is what you are trying to establish in the first place. So continuity of $f$ does not immediately follow as you suggest. For an example to show that you cannot deduce continuity of $x\longmapsto x^{-1}$ merely from the continuity of $(x,y)\longmapsto xy$, see for example this answer by tomasz: take $G=\mathbb{R}$ under addition, with the Sorgenfrey topology: a basis for the open sets consist of the intervals of the form $[a,b)$. Then multiplication is continuous, but the inverse map is not, since the pullback of $[a,b)$ is $(-b,a]$, which is not open in the topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/172945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Question about continuous function in terms of limits of sequences I am reading about continuous functions on this site http://en.wikipedia.org/wiki/Continuous_function specifically the section "Definition in terms of limits of sequences". My question is: let $c\in \mathbb{R}$ be an arbitrary element of the domain of $f$, where the domain of $f$ is a closed and bounded set. Is there always a sequence $x_n$ in the domain which converges to $c$? Thanks for your replies.
There is always the constant sequence $x_n = c$ for all $n$. In the extreme case that the domain of the function is $\{c\}$, there isn't anything else. In general there is a dichotomy to be made according to whether $c$ is an isolated point of the domain $X$ of $f$. (Let's assume that $X$ is a subset of the real numbers $\mathbb{R}$, which seems to be the context of the question. In fact, with only mild notational change, this answer works in the context of arbitrary metric spaces.) We say that a point $c \in X$ is isolated if there is a $\delta > 0$ such that if $y \in X$ and $|x-y| < \delta$, then $y= c$. For instance, if $f(x) = \sqrt{x^2(x-1)}$, the natural domain is $\{0\} \cup [1,\infty)$ and $0$ is an isolated point. Now, for $X \subset \mathbb{R}$ and $c \in X$, the following are equivalent: (i) $c$ is an isolated point of $X$. (ii) Every sequence $\{x_n\}$ in $X$ which converges to $c$ is eventually constant: that is, $x_n = c$ for all sufficiently large $n$. (iii) Every function $f: X \rightarrow \mathbb{R}$ is continuous at $c$. In other words, an isolated point is something of a trivial case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/173025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Showing that the closure of the closure of a set is its closure I have the feeling I'm missing something obvious, but here it goes... I'm trying to prove that for a subset $A$ of a topological space $X$, $\overline{\overline{A}}=\overline{A}$. The inclusion $\overline{\overline{A}} \subseteq \overline{A}$ I can do, but I'm not seeing the other direction. Say we let $x \in \overline{A}$. Then every open set $O$ containing $x$ contains a point of $A$. Now if $x \in \overline{\overline{A}}$, then every open set containing $x$ contains a point of $\overline{A}$ distinct from $x$. My problem is: couldn't $\{x,a\}$ potentially be an open set containing $x$ and containing a point of $A$, but containing no other point in $\overline{A}$? (Also, does anyone know a trick to make \bar{\bar{A}} look right? The second bar is skewed to the left and doesn't look very good.)
Suppose $x\in\overline{\overline{A}}$. Let $U$ be an open set containing $x$; we want to show that $U\cap A\neq\varnothing$. We know that $U\cap\overline{A}\neq\varnothing$, so there exists $y\in \overline{A}$ such that $y\in U$. But since $U$ is an open set that contains $y$ and $y\in\overline{A}$, then...
{ "language": "en", "url": "https://math.stackexchange.com/questions/173070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 2 }
No pandiagonal latin squares with order divisible by 3? I would like to prove the claim that pandiagonal latin squares, which are of the form $$\begin{matrix} 0 & a & 2a & 3a & \cdots & (n-1)a \\ b & b+a & b+2a & b+3a & \cdots & b+(n-1)a \\ \vdots & \vdots & \vdots & \vdots & & \vdots \\ (n-1)b & (n-1)b+a & (n-1)b+2a & (n-1)b+3a & \cdots & (n-1)(b+a) \end{matrix}$$ for some $a,b\in \{0,1,\dots,n-1\}$, cannot exist when the order is divisible by 3. I think we should be able to show this if we can show that a pandiagonal latin square of order $n$ can exist iff it is possible to break the cells of an $n \times n$ grid into $n$ disjoint superdiagonals. Then we would show that an $n\times n$ array cannot have a superdiagonal if $n$ is a multiple of $2$ or $3$. I am, however, not able to coherently figure out either part of this proof. Could someone perhaps show me the steps of both parts? A superdiagonal on an $n \times n$ grid is a collection of $n$ cells within the grid that contains exactly one representative from each row and column, as well as exactly one representative from each broken left diagonal and broken right diagonal. EDIT: Jyrki, could you please explain how the claim follows from #1?
Here is a weakly pandiagonal Latin square of order 9: http://forumimage.ru/uploads/20171204/151240677122376874.png A weakly pandiagonal Latin square is pandiagonal as a magic square. I found a weakly pandiagonal Latin square of order 12, but I have not yet found one of order 15. Here is a semi-pandiagonal Latin square of order 9 (pandiagonal in one direction): http://forumimage.ru/uploads/20171217/151350437058089933.png I do not yet know of a semi-pandiagonal Latin square of order n > 9. See more here https://boinc.progger.info/odlk/forum_thread.php?id=178&postid=7105 https://boinc.progger.info/odlk/forum_thread.php?id=178&postid=7107
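Returning to the cyclic construction in the original question, $L(i,j)=(ib+ja)\bmod n$: it is a standard fact (checked here by brute force; this snippet is my own, not taken from the posts above) that this square is a pandiagonal Latin square iff $a$, $b$, $a+b$ and $a-b$ are all coprime to $n$, which is impossible whenever $2$ or $3$ divides $n$:

```python
from math import gcd

def pandiagonal_params(n):
    # (a, b) with L(i, j) = (i*b + j*a) % n a pandiagonal Latin square:
    # rows/columns need gcd(a, n) = gcd(b, n) = 1, and the two families
    # of broken diagonals need gcd(a + b, n) = gcd(a - b, n) = 1
    return [(a, b) for a in range(1, n) for b in range(1, n)
            if gcd(a, n) == 1 and gcd(b, n) == 1
            and gcd(a + b, n) == 1 and gcd(abs(a - b), n) == 1]

def is_pandiagonal_latin(sq):
    n = len(sq)
    full = set(range(n))
    rows = all(set(row) == full for row in sq)
    cols = all({sq[i][j] for i in range(n)} == full for j in range(n))
    rdiag = all({sq[i][(i + c) % n] for i in range(n)} == full for c in range(n))
    ldiag = all({sq[i][(c - i) % n] for i in range(n)} == full for c in range(n))
    return rows and cols and rdiag and ldiag

# an order-5 example exists and passes the direct check...
a, b = pandiagonal_params(5)[0]
sq = [[(i * b + j * a) % 5 for j in range(5)] for i in range(5)]
assert is_pandiagonal_latin(sq)

# ...but no parameters survive when 2 or 3 divides n
assert pandiagonal_params(6) == []
assert pandiagonal_params(9) == []
assert pandiagonal_params(12) == []
```

Note this only rules out the cyclic squares of the question's form, not arbitrary Latin squares.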
{ "language": "en", "url": "https://math.stackexchange.com/questions/173127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root? For what value of k, $x^{2} + 2(k-1)x + k+5$ has at least one positive root? Approach: Case I: only $1$ positive root; this implies $0$ lies between the roots, so $$f(0)<0$$ and $$D > 0$$ Case II: both roots positive. This implies $0$ lies to the left of both roots. So, $$f(0)>0$$ $$D≥0$$ Also, the abscissa of the vertex $> 0$. I did the calculation and found the intersection, but it's not correct. Please help. Thanks.
The roots are given by $1-k\pm\sqrt{k^2-3k-4}$, for which we have: $$\cases{ 1 - k + \sqrt{k^2 - 3k - 4} > 0 & if $\phantom{~-5< \;}k \le -1$\\ 1 - k - \sqrt{k^2 - 3k - 4} > 0 & if $~-5<k\le -1 $. } $$ Wolfram Alpha confirms the "$+$" and "$-$" cases. So for $k\le -1$, you get at least one positive root.
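A numeric sweep (my own sanity check, not part of the answer) confirms that at least one root is positive exactly when $k\le-1$; for $-1<k<4$ the roots are complex, and for $k\ge 4$ both real roots are negative:

```python
def has_positive_real_root(k):
    # roots of x^2 + 2(k - 1)x + (k + 5) are 1 - k +/- sqrt(k^2 - 3k - 4)
    disc = k * k - 3 * k - 4
    if disc < 0:
        return False               # no real roots at all
    r = disc ** 0.5
    return (1 - k + r) > 0 or (1 - k - r) > 0

for i in range(-100, 101):
    k = i / 10
    assert has_positive_real_root(k) == (k <= -1)
```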
{ "language": "en", "url": "https://math.stackexchange.com/questions/173189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Evaluating the product $\prod\limits_{k=1}^{n}\cos\left(\frac{k\pi}{n}\right)$ Recently, I ran across a product that seems interesting. Does anyone know how to get to the closed form: $$\prod_{k=1}^{n}\cos\left(\frac{k\pi}{n}\right)=-\frac{\sin(\frac{n\pi}{2})}{2^{n-1}}$$ I tried using the identity $\cos(x)=\frac{\sin(2x)}{2\sin(x)}$ in order to make it "telescope" in some fashion, but to no avail. But, then again, I may very well have overlooked something. This gives the correct solution if $n$ is odd, but of course evaluates to $0$ if $n$ is even. So, I tried taking that into account, but must have approached it wrong. How can this be shown? Thanks everyone.
If $n$ is even, then the term with $k=n/2$ makes the product on the left $0$ and $\sin\left(\frac{n}{2}\pi\right)=0$. So assume that $n$ is odd. $$ \begin{align} \prod_{k=1}^n\cos\left(\frac{k\pi}{n}\right) &=-\prod_{k=1}^{n-1}\cos\left(\frac{k\pi}{n}\right)\tag{1}\\ &=-\prod_{k=1}^{n-1}\frac{\sin\left(\frac{2k\pi}{n}\right)}{2\sin\left(\frac{k\pi}{n}\right)}\tag{2}\\ &=\frac{-1}{2^{n-1}}\frac{\prod\limits_{k=\frac{n+1}{2}}^{n-1}\sin\left(\frac{2k\pi}{n}\right)}{\prod\limits_{k=1}^{\frac{n-1}{2}}\sin\left(\frac{(2k-1)\pi}{n}\right)}\tag{3}\\ &=\frac{(-1)^{\frac{n+1}{2}}}{2^{n-1}}\frac{\prod\limits_{k=1}^{\frac{n-1}{2}}\sin\left(\frac{(2k-1)\pi}{n}\right)}{\prod\limits_{k=1}^{\frac{n-1}{2}}\sin\left(\frac{(2k-1)\pi}{n}\right)}\tag{4}\\ &=-\frac{\sin\left(n\frac\pi2\right)}{2^{n-1}}\tag{5} \end{align} $$ $(1)$: $\cos(\pi)=-1$ $(2)$: $\sin(2x)=2\sin(x)\cos(x)$ $(3)$: cancel $\sin\left(\frac{j\pi}{n}\right)$ in the numerator and denominator for even $j$ from $2$ to $n-1$ $(4)$: in the numerator, change variable $k\mapsto k+\frac{n-1}{2}$ and use $\sin(x+\pi)=-\sin(x)$ $(5)$: for odd $n$, $\sin\left(n\frac\pi2\right)=(-1)^{\frac{n-1}{2}}$
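The closed form is easy to spot-check numerically (an illustration of mine, not part of the derivation); both the even case, where each side vanishes, and the odd case agree to floating-point accuracy:

```python
import math

def cos_product(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= math.cos(k * math.pi / n)
    return p

for n in range(2, 16):
    rhs = -math.sin(n * math.pi / 2) / 2 ** (n - 1)
    assert abs(cos_product(n) - rhs) < 1e-9
```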
{ "language": "en", "url": "https://math.stackexchange.com/questions/173238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "22", "answer_count": 4, "answer_id": 1 }
Maximum area of a triangle in a square For a given square, consider 3 points on the perimeter to form a triangle. How to prove that: (1) the maximum area of the triangle is half the square's; (2) the maximum area occurs if and only if the chosen points are vertices of the square.
First, recall that the area of a triangle is $\frac{1}{2}bh$ where $b$ denotes the length of the base and $h$ the height. Keeping $h$ fixed, we may increase $b$ to increase the area of the triangle. Similarly, keeping $b$ fixed, we may increase $h$ to increase the area of the triangle. This shows that the vertices must lie along the boundary of the square. Now, we claim that one vertex lies at the corner of the square. Suppose not. Then, pick any edge of the triangle. Consider the vertex not along this edge. By moving this vertex away from the edge, we may increase the height of the triangle, by the remarks above. Since a corner of the square will be the furthest distance from our selected edge, moving the opposite vertex to the corner of the square maximizes the area. Now, let $S$ be a square in the plane. Let $s$ denote the length of the edges of $S$. Without loss of generality, we may suppose that $S$ lies in the first quadrant with one vertex at the origin. Consider a triangle in $S$ that maximizes area. By the above remarks, we may suppose that one vertex of the triangle lies at the origin. Let $b$ be the base of the triangle and $h$ be the height of the triangle. By the above remarks on the construction of the triangle, $b = s$. Similarly, $h = s$. So the area of the triangle is $\frac{1}{2}bh = \frac{1}{2}s^2 = \frac{1}{2}Area(\Box)$. (You may want to argue these points more carefully, but they can be argued by using the ideas of the first two paragraphs.) Note that the second statement is not true: Let the vertices of the square be $(0,0), (1,0), (0,1)$, and $(1,1)$. Let the vertices of the triangle be $(0,0), (1,0)$ and $(\frac{1}{2},1)$. Then we still have $Area(\triangle) = \frac{1}{2}Area(\Box)$.
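A random search over boundary triangles (my own illustration; the 20000-trial budget is arbitrary) supports both the bound of one half and the counterexample to the "only if" claim:

```python
import random

def tri_area(p, q, r):
    # shoelace formula
    return abs((q[0] - p[0]) * (r[1] - p[1])
               - (r[0] - p[0]) * (q[1] - p[1])) / 2

def boundary_point(rng):
    # a random point on the perimeter of the unit square
    t = rng.random()
    return [(t, 0.0), (1.0, t), (t, 1.0), (0.0, t)][rng.randrange(4)]

rng = random.Random(0)
best = 0.0
for _ in range(20000):
    best = max(best, tri_area(boundary_point(rng),
                              boundary_point(rng),
                              boundary_point(rng)))

assert best <= 0.5                                  # never exceeds half the square
assert tri_area((0, 0), (1, 0), (0.5, 1)) == 0.5    # non-corner triangle attaining 1/2
```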
{ "language": "en", "url": "https://math.stackexchange.com/questions/173333", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Necessary and sufficient conditions for a matrix to be a valid correlation matrix. It's not too hard to see that any correlation matrix must have certain properties, such as all entries in the range -1 to 1, symmetric, positive semi-definite (excluding pathological cases like singular matrices for the moment). But I'm wondering about the other direction. If I write down some matrix that is positive semi-definite, is symmetric, has 1's along the main diagonal, and all entries are between -1 and 1, does this guarantee that there exists a set of random variables giving rise to that correlation matrix? If the answer is easy to see, a helpful hint about how to define the set of random variables that would give rise to a given matrix would be appreciated.
Yes, although the restriction that all entries are between $-1$ and $1$ is implied by the other properties (and so is not needed). Let $\Sigma$ be a $n \times n$, symmetric, positive semidefinite matrix with $1$'s along the main diagonal. First, $\Sigma$ is a covariance matrix. Since $\Sigma$ is symmetric and positive semidefinite, $\Sigma$ has a nonnegative symmetric square root $\Sigma^{1/2}$. Let $X$ be a $n$-vector of independent random variables, each with variance $1$. (For example, $X$ could be an $n$-vector of independent $N(0,1)$ random variables.) Construct the $n$-vector $\Sigma^{1/2} X$. Then, by properties of covariance matrices, $$\text{cov} (\Sigma^{1/2} X) = \Sigma^{1/2} \text{cov}(X) \Sigma^{1/2} = \Sigma^{1/2} I \Sigma^{1/2} = \Sigma.$$ Thus $\Sigma$ is the covariance matrix for the random vector $\Sigma^{1/2} X$. (This derivation is on the Wikipedia article for covariance matrices.) Second, since $\Sigma$ has $1$'s on its diagonal, the standard deviation of each random variable in $\Sigma^{1/2} X$ is $1$. Thus the correlation matrix $R$ for the random vector $\Sigma^{1/2} X$ is $$R = I^{-1} \Sigma I^{-1} = \Sigma.$$ Thus $\Sigma$ is a correlation matrix as well.
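The construction in the answer is easy to simulate. The sketch below (mine, not from the answer) uses a Cholesky factor $L$ with $LL^T=\Sigma$ in place of the symmetric square root $\Sigma^{1/2}$; any factor with $LL^T=\Sigma$ works, since $\operatorname{cov}(LX)=L\,I\,L^T=\Sigma$:

```python
import math
import random

def cholesky(M):
    # lower-triangular L with L L^T = M, for symmetric positive definite M
    n = len(M)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            L[i][j] = math.sqrt(M[i][i] - s) if i == j else (M[i][j] - s) / L[j][j]
    return L

Sigma = [[1.0, 0.6, 0.3],       # symmetric, unit diagonal, positive definite
         [0.6, 1.0, 0.5],
         [0.3, 0.5, 1.0]]
L = cholesky(Sigma)

rng = random.Random(0)
N = 100_000
Y = [[0.0] * N for _ in range(3)]
for t in range(N):
    x = [rng.gauss(0.0, 1.0) for _ in range(3)]   # independent, variance 1
    for i in range(3):
        Y[i][t] = sum(L[i][k] * x[k] for k in range(3))

def corr(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    cov = sum((a - mu) * (b - mv) for a, b in zip(u, v)) / n
    su = math.sqrt(sum((a - mu) ** 2 for a in u) / n)
    sv = math.sqrt(sum((b - mv) ** 2 for b in v) / n)
    return cov / (su * sv)

# sample correlation matrix of Y = L X approaches Sigma
for i in range(3):
    for j in range(3):
        assert abs(corr(Y[i], Y[j]) - Sigma[i][j]) < 0.02
```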
{ "language": "en", "url": "https://math.stackexchange.com/questions/173402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Hensel lift of a monic polynomial over $F_{2}$ in $Z_{8}$ Define the reduction map $\mathbb{Z}_{p^{s}}\rightarrow \mathbb{F}_{p}$, $x\mapsto\bar{x}$. Let $g(x)$ be a monic polynomial over $\mathbb{F}_{p}$. A monic polynomial $f(x)$ over $\mathbb{Z}_{p^{s}}[x]$ is called a Hensel lift of $g(x)$ if $\overline{f}(x)=g(x)$ and there is a positive integer $n$ not divisible by $p$ such that $f(x)\mid(x^{n}-1)$ in $\mathbb{Z}_{p^{s}}[x]$. It is known that a monic polynomial $g(x)$ over $\mathbb{F}_{p}$ has a Hensel lift $f(x)$ over $\mathbb{Z}_{p^{s}}[x]$ if and only if $g(x)$ has no multiple roots and $x\nmid g(x)$ in $\mathbb{F}_{p}[x]$. Could you help me find the Hensel lift $f(x)$ over $\mathbb{Z}_{8}$ when a monic polynomial $g(x)$ has no multiple roots and $x\nmid g(x)$ in $\mathbb{F}_{2}[x]$? For example, $g(x)=x^{3}+x+1\in\mathbb{F}_{2}[x]$.
As $g(x)=x^3+x+1$ is a factor of $x^7-1$ in the ring $\mathbb{F}_2[x]$, and seven is the least exponent with this property, I am guessing that you are asking for a monic polynomial $f(x)\in\mathbb{Z}_8[x]$ such that $f(x)\equiv g(x)\pmod 8$, and $f(x)\mid x^7-1$ in the ring $\mathbb{Z}_8[x].$ The Hensel lifting will do exactly this for you, but I will describe another algorithm for lifting a polynomial factor like this. It is, of course, equivalent to Hensel lifting, but it involves an IMHO neat trick I learned while playing with $\mathbb{Z}_4$-linear codes. Let us first find a similar factorization modulo 4. We can view the polynomial $g(x)$ also as an element of the ring $\mathbb{Z}_4[x].$ Let us form the product $$ (-1)^3g(x)g(-x)=(x^3+x+1)(x^3+x-1)=x^6+2x^4+x^2-1=g_2(x^2) $$ in that ring. As the resulting polynomial is obviously even, we see that the polynomial $g_2(x)=x^3+2x^2+x-1\in\mathbb{Z}_4[x]$ has the above property $g_2(x^2)=-g(x)g(-x)$. Furthermore, because $g(-x)\equiv g(x)\pmod 2$, it follows (by uniqueness of factorization in $\mathbb{F}_2[x]$ as well as the identity $p(x^2)=p(x)^2$ that holds for all polynomials in $\mathbb{F}_2[x]$) that $$g_2(x)\equiv g(x)\pmod 2.$$ At this point we record that $g_2(x)$ is the sought lifting in $\mathbb{Z}_4[x].$ Namely, you can easily verify that $$ (x^3+2x^2+x-1)(x^4+2x^3-x^2+x+1)\equiv x^7-1 \pmod4. $$ To get a lifting modulo 8 we repeat the dose. View $g_2(x)$ as a polynomial in $\mathbb{Z}_8[x],$ and compute $$ (-1)^3g_2(x)g_2(-x)=x^6-2x^4-3x-1=g_3(x^2), $$ where $g_3(x)=x^3-2x^2-3x-1\in\mathbb{Z}_8[x].$ By a direct observation $g_3(x)\equiv g_2(x)\pmod 4$. The polynomial $g_3(x)$ is the answer. You can verify this by checking that $$ (x^3-2x^2-3x-1)(x^4+2x^3-x^2-3x+1)\equiv x^7-1 \pmod8. $$ How can I justify this method? Let $\mathcal{O}$ be the ring of $2$-adic integers. 
It follows from 2-adic theory that the quotient ring $\mathcal{O}[x]/\langle x^3+x+1\rangle$ is isomorphic to the ring of integers $R=\mathcal{O}[\zeta_7]$ of the unramified cubic extension of the field of 2-adic numbers. Here $\zeta_7$ is a suitably chosen primitive seventh root of unity in $R$. This is because the polynomial $g(x)=x^3+x+1$ splits into linear factors in $\mathbb{F}_8[x]$ and, by Hensel lifting, also in $R[x]$. So we can write $$ g(x)=x^3+x+1=(x-u_1)(x-u_2)(x-u_3) $$ for some elements $u_1,u_2,u_3$ in $R$. Furthermore, reduction modulo two reveals that (with an appropriate choice of indices) $u_1\equiv \zeta_7\pmod{2R}$, $u_2\equiv \zeta_7^2\pmod{2R}$ and $u_3\equiv \zeta_7^4\pmod{2R}$. By construction, the roots of $g_2(x)$ in $R$ are the squares of the roots of $g(x)$, i.e. $u_i^2,i=1,2,3,$. But squaring the equations $$ u_i=\zeta_7^{2^{i-1}}+2r_i, $$ $r_i\in R, i=1,2,3,$ implies that $$u_i^2=\zeta_7^{2^i}+4r_i\zeta_7^{2^{i-1}}+4r_i^2\equiv\zeta_7^{2^i}\pmod{4R}.$$ In other words, the zeros of $g_2(x)$ in $R/4R$ are primitive seventh roots of unity. Repeating this calculation shows that the roots of $g_3(x)$ in $R/8R$ are then also seventh roots of unity, which is basically what we wanted to achieve.
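The two verifications claimed in the construction can be checked mechanically (this snippet is my own; coefficient lists are indexed by degree):

```python
def polymul_mod(p, q, m):
    # multiply coefficient lists (index = degree) and reduce modulo m
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % m
    return out

# g2 = x^3 + 2x^2 + x - 1 times x^4 + 2x^3 - x^2 + x + 1 gives x^7 - 1 in Z_4[x]
g2 = [-1, 1, 2, 1]
h2 = [1, 1, -1, 2, 1]
assert polymul_mod(g2, h2, 4) == [3, 0, 0, 0, 0, 0, 0, 1]   # 3 = -1 mod 4

# g3 = x^3 - 2x^2 - 3x - 1 times x^4 + 2x^3 - x^2 - 3x + 1 gives x^7 - 1 in Z_8[x]
g3 = [-1, -3, -2, 1]
h3 = [1, -3, -1, 2, 1]
assert polymul_mod(g3, h3, 8) == [7, 0, 0, 0, 0, 0, 0, 1]   # 7 = -1 mod 8

# reducing g3 modulo 2 recovers g = x^3 + x + 1
assert [c % 2 for c in g3] == [1, 1, 0, 1]
```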
{ "language": "en", "url": "https://math.stackexchange.com/questions/173505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }