H: Proving the inverse of a matrix equals $I_n-\frac{1}{n-1}A$ Question: Let $A$ be an $n\times n$ matrix whose elements are all $1$. Prove that $$(I_n-A)^{-1}=I_n - \frac 1{n-1}A.$$ Thought: I tried using this identity, but couldn't get any further (computing the adjoint looks pretty nasty): $$(I_n-A)^{-1}=\frac 1{\det(I_n-A)}\operatorname{Adj}(I_n-A)$$ AI: How about just checking: $$\left( {\rm I}_n - \frac{1}{n-1}A \right)({\rm I}_n - A) = {\rm I}_n - \frac{1}{n-1} A - A + \frac{1}{n-1}A^2 = {\rm I}_n - \frac{n}{n-1} A + \frac{n}{n-1}A = {\rm I}_n.$$ Here, we use that $A^2 = nA$, which is easy to verify.
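A quick numerical sanity check of both the identity and the fact $A^2=nA$ is easy to script; here is a minimal sketch using numpy (the variable names are mine, not from the answer):

```python
import numpy as np

n = 5
A = np.ones((n, n))   # the all-ones n x n matrix
I = np.eye(n)

assert np.allclose(A @ A, n * A)                    # A^2 = nA
assert np.allclose((I - A / (n - 1)) @ (I - A), I)  # the claimed inverse works
```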
H: Rule for divergence of a vector field I need a proof for this theorem and have absolutely no idea how to do it. Let $ U \subseteq \mathbb{R}^n $ be an open set, $ F : U \to \mathbb{R}^n $ a $C^1$ vector field. Let $ A_k \subseteq U, k \in \mathbb{N}$ be a sequence of compact, non-empty subsets with smooth boundary, which converges to $ x \in U $. Then: $$ {\rm div}\, F(x) = \lim_{k \to \infty} \frac{1}{{\rm volume}(A_k)} \int_{\partial A_k} \langle F, \nu \rangle\, {\rm d}S$$ where $ \nu $ is a normal field on $ \partial A_k $. Thanks in advance. AI: Let $ B(x,r) \subset U $ be the metric ball centred at $ x $ with radius $ r $. The Lebesgue–Besicovitch differentiation theorem asserts that $$ \lim_{r \rightarrow 0} \frac{1}{vol(B(x,r))} \int_{B(x,r)} f(y)\,dy = f(x) \; \; \; (1) $$ for every $ f \in L^{1}_{loc}(U) $. The Divergence theorem asserts that if $ F $ is a $ C^1 $ vector field on $ U $ it holds that: $$ \int_{B(x,r)}{\rm div}\, F \, dy = \int_{\partial B(x,r)}\nu \cdot F \, dS \; \; \; (2) $$ Then from (1) and (2) the proof of your statement readily follows in the case $ A_k = B(x,1/k) $. For general $ A_k $ you should approximate these sets exteriorly and interiorly by balls.
H: Differentiating $y=x^{2}$ I am reading in a book about differentiating, but I am confused with one of the steps he takes. We start with: $$\begin{align} y &= x^{2} \\ y + \mathrm{d}y &= (x + \mathrm{d}x)^2 \\ y + \mathrm{d}y &= x^2 + x\mathrm{d}x + x\mathrm{d}x + (\mathrm{d}x)^2 \end{align}$$ Now the author simplifies this to: $$y + \mathrm{d}y = x^2 + 2x\mathrm{d}x + (\mathrm{d}x)^2$$ I dislike how the middle term is simplified to $2x\mathrm{d}x$ instead of $2(x\mathrm{d}x)$, as I feel the latter is more intuitive about what is going on. As in, $2$ of the term $x\mathrm{d}x$, instead of $2x\mathrm{d}x$. But I fear writing it as $2(x\mathrm{d}x)$ may result in an incorrect distributive property. Next, he omits the $(\mathrm{d}x)^2$: $y + \mathrm{d}y = x^2 + 2x \mathrm{d}x$. Subtract the original $y = x^2$: $$\mathrm{d}y = 2x \mathrm{d}x.$$ Now here is where I get confused: $$\frac{\mathrm{d}y}{\mathrm{d}x} = 2x.$$ How can he just divide both sides by $\mathrm{d}x$!? If the original term was $2$ of $x\mathrm{d}x$, wouldn't it have to be written out as $2x * 2\mathrm{d}x$, and thus divide both sides by $2\mathrm{d}x$ instead? I think the root of my confusion is how to properly simplify: $x\mathrm{d}x + x\mathrm{d}x$. I trust that he is right, but I am looking for an explanation of why his simplification works, and why $2(x\mathrm{d}x)$ would be incorrect. AI: Your question is a good example of what happens when people work with infinitesimals outside of non-standard analysis: a lot of confusion. Look, the notion of $dx$ being a very tiny $x$ (less than any real number, and yet nonzero) is not precise, and it's not possible to define it correctly in standard analysis. There are some people that try defining $dx$ as $\Delta x$ when $\Delta x$ goes to zero, but this is zero by the definition of limit, so this is just garbage. Many people ask: "why should we care if it is rigorous or not?", and well, it's just because when working with something rigorous the chance of confusion is much less than with something that is not even defined. In the rigorous framework, we let $f: \Bbb R \to \Bbb R$ be given by $f(x)=x^2$; then by the definition of the derivative we have: $$f'(a)=\lim_{h\to 0}\frac{f(a+h)-f(a)}{h}=\lim_{h\to0}\frac{(a+h)^2-a^2}{h}=\lim_{h\to 0}\frac{a^2+2ah+h^2-a^2}{h}$$ Now we simply reorganize the last expression, getting: $$f'(a)=\lim_{h\to 0}\frac{(2a+h)h}{h}=\lim_{h\to0}2a+h=2a$$ So this limit exists for every $a \in \Bbb R$ and thus $f$ is differentiable with derivative $f'(x)=2x$ at every $x \in \Bbb R$. So my suggestion is that you abandon this "intuitive" notion of infinitesimals and move to rigorous standard analysis. You can pick up Spivak's Calculus book: it's a very good book, even for self-study, and it'll show you how to deal with all of these things in a rigorous and straightforward way.
H: Answer-verification: Show that $f(x,y)=1+2x+3y$ for all $(x,y)\in \Bbb R^2$ Suppose the function $f: \Bbb R^2\to \Bbb R$ has first order partial derivatives, that $f(0,0)=1$, and that $\frac{\partial f}{\partial x}(x,y)=2$ and $\frac{\partial f}{\partial y}(x,y)=3$ for all $(x,y)\in \Bbb R^2$. Prove that $f(x,y)=1+2x+3y$ for all $(x,y)\in \Bbb R^2$. $\bf{solution:}$ Let $\frac{\partial f}{\partial x}(x,y)=g(x,y)$. Then $\int g(x,y)dx=2x +c_1$, where $c_1$ is constant. Similarly, let $\frac{\partial f}{\partial y}=h(x,y)$. Then $\int h(x,y)dy=3y+c_2$, where $c_2$ is constant. Since $f(0,0)=1$ is constant so, $c:=(c_1,c_2)=1$. Thus, $f(x,y)=1+2x+3y$ for all $(x,y)\in \Bbb R^2$. Is this answer true? If there exist mistakes, please can somebody correct this? AI: Then $\int g(x,y)dx=2x +c_1$, where $c_1$ is constant This isn't correct. For instance take $\alpha (x,y)=2x+2y^3+\log (|y|+1)$. Then $\dfrac{\partial \alpha }{\partial x}(x,y)=2$ and $\alpha (x,y) \neq 2x$. Correct would be: then $\int g(x,y)dx=2x+\varphi (y)=f(x,y)$, for some differentiable function $\varphi$. Now you differentiate the above with respect to $y$ to get $$3=\frac{\partial f}{\partial y}(x,y)=\varphi '(y).$$ Therefore $\varphi (y)=3y+C$, for some constant $C\in \Bbb R$. Now replacing $\varphi$ by what was just obtained it follows that $f(x,y)=2x+3y+C$. Edit: You're also given that $f(0,0)=1$, so $2\cdot 0+3\cdot 0+C=1$ and it follows that $C=1$.
H: Why does a local minimum or maximum need to be within an open interval? Why does it need to be an open interval if we want to define a local maximum or minimum? Does it have to do with limits and that you cannot find the derivative of one point (let's say $x$) without knowing some values close to $x$? AI: It doesn’t: if $D$ is the domain of $f$, $f$ has a local maximum at $x=a$ if $a\in D$ and there is an open interval $I=(a-\epsilon,a+\epsilon)$ such that $f(a)\ge f(x)$ for all $x\in I\cap D$. If $D=[1,2]$, it’s entirely possible for a function $f$ with domain $D$ to have a local maximum at $x=2$; the function $f(x)=x$, considered only as a function on $D$, is an example of such a function. If $D=[1,2]\cup\{3\}$, it’s possible for a function with domain $D$ to have a local maximum at $x=3$. In fact it must. To see why, consider the example of the function $f$ defined by $$f(x)=\begin{cases}0,&\text{if }1\le x\le 2\\-1,&\text{if }x=3\;.\end{cases}$$ It has a local maximum at $x=3$, because if $I$ is the open interval $\left(\frac52,\frac72\right)$, $I\cap D=\{3\}$, and sure enough, $f(3)\ge f(x)$ for every $x\in I\cap D$! In first-year calculus you’re generally looking at differentiable functions, and you’re often using the first derivative to find maxima. In order for that to work, the maximum does have to be in the interior of the domain $D$: that is, $D$ must contain an open interval about the point at which the maximum occurs. That’s why, when $D$ is a closed interval $[a,b]$, you’re taught to check $f(a)$ and $f(b)$ to see whether either of them is a local maximum as well: that can be the case even if the (one-sided) derivative is not $0$.
H: Is tensor product of Sobolev spaces dense? My question is: is $W_2^k(\mathbb{R})\otimes W_2^k(\mathbb{R})$ dense in $W_2^k(\mathbb{R}^2)$, and more generally is this true in $\mathbb{R}^d$? I found this post: Tensor products of functions generate dense subspace? which shows the above type of result for $C_c^\infty$. So my guess is that the answer should be affirmative, maybe requiring the assumption that $k>d/2$? AI: Yes, you can get this result from $C_c^\infty$, because $C^\infty_c(\mathbb R^2)$ is dense in $W^k_2(\mathbb R^2)$ for any $k$. So, any $W^k_2$ function can be approximated by smooth functions with compact support, which in turn are approximated by sums of products of univariate smooth functions (even in the stronger sense, $C^\infty_c$). But it may be easier to apply the Fourier transform, which transforms $W^k_2(\mathbb R^2)$ to a weighted $L^2$ space. Since $\widehat{u\otimes v}=\widehat{u}\otimes \widehat{v}$, the question reduces to its analog for Lebesgue spaces. Then we observe that the characteristic functions of cubes have a dense linear span, and they belong to the tensor product.
H: Is $2+5x$ a primitive root in $\mathbb{F}_7[x]/(x^2+1)$? The question I'm inquiring about is all in the title, but I would be more interested in a few things related to the question which I don't know. I know what a primitive root of $\mathbb{F}_p$ is for any prime - an element $g \in \mathbb{F}_p$ such that $\{g^1, \dots, g^{p-1}\}$ generates the entire set $\mathbb{F}_p$ excluding 0. I don't know what elements of $\mathbb{F}_7[x]/(x^2+1)$ look like, and if there is a reasonable way to check if a given element is a primitive root like in $\mathbb{F}_p$, where all that is needed is to check that $g^n \not\equiv 1$ for any $n < p-1$ that divides $p-1$. Also what is the structure called where you take the quotient of a finite field? I'm not sure but I think that is what $\mathbb{F}_7[x]/(x^2+1)$ is an example of. Edit: So I started multiplying everything out, computing $g^1,\dots,g^8$ with $g = 2+5x$, where $g^k$ is the result of reducing all the coefficients mod $7$ and then taking the remainder on division by $x^2 + 1$. For example $g^2 = (5x+2)^2 = 25x^2 + 20x + 4 \equiv 4x^2 + 6x + 4 \equiv 6x$. I did this up to $g^8 = (5x+2)(2x+2) = 10x^2 + 14x + 4 \equiv 3x^2 + 4 \equiv 1$. I believe there are supposed to be 48 elements generated, and this would cycle through only 8 elements, so it shouldn't be a primitive root. I am sure there must be a more efficient way of checking this, but I am also not all too confident that this is entirely correct. AI: Note that ${\bf F}_7[x]/(x^2+1)$ is not "a quotient of a finite field." In fact the only quotients of any field are the field itself and the trivial ring (which is not itself a field), or equivalently the only ideals of a field are the trivial ideal and the whole field (this follows from everything being invertible). Rather, this is a quotient of the polynomial ring over a finite field. In fact, creating any finite field extension is equivalent (up to isomorphism) to adjoining formal variables and then quotienting by the ideal generated by the relations these variables satisfy in the extension field. For example, to create ${\bf Q}(\sqrt[3]{2})$, we adjoin the variable $x$ satisfying the relation $x^3=2$: so ${\bf Q}(\sqrt[3]{2})\cong{\bf Q}[x]/(x^3-2)$. Note that in ${\bf F}_7$, the element $-1$ (i.e. $6$) does not have any square root, so in order to create an extension field containing such a square root, we use this process to get ${\bf F}_7[x]/(x^2+1)$. In fact you may check that $x^2+1$ is irreducible in ${\bf F}_7[x]$ (since it's degree two, this merely says it doesn't have any root in the field), hence the ideal it generates is maximal in the polynomial ring (as it's a PID due to the availability of Euclidean division), hence the quotient is also a field. In general quotients of polynomial rings need only be rings, not fields, so this is why the check is relevant. What do elements of ${\bf F}_7[x]/(x^2+1)$ "look like"? Well, just like with integers mod $7$ we can "chop off" multiples of $7$ at will, here we can chop off multiples of $x^2+1$, or equivalently replace every instance of $x^2$ with $-1$. As a result, any polynomial expression in $x$ can be reduced to the much simpler form $a+bx$ (can you see why?). There are $7$ choices of scalars for $a$ and $b$ independently, so there are $7^2$ elements here. Note we can tell they are distinct elements of the quotient for distinct choices of $a$ and $b$: verify $a+bx=c+dx\Leftrightarrow (x^2+1)\mid[(a-c)+(b-d)x]\Leftrightarrow a=c,b=d$.
As you will or may have heard or studied, there is a classification of finite fields. For any power of a prime $q=p^s$, there is precisely one field of order $p^s$ up to isomorphism, and these are all possible finite fields. For just a prime $p$ the field is ${\bf F}_p$; in general ${\bf F}_q$ can be obtained as ${\bf F}_p[x]/(\pi(x))$ for some irreducible polynomial $\pi(x)$ of degree $s$, or as the so-called splitting field of $x^q-x$. It is a basic group theory exercise that if a finite group $G$ has at most $d$ elements satisfying $x^d=1$ for each divisor $d\mid\#G$, then $G$ is cyclic. This shows that any finite subgroup of $F^\times$ is cyclic for any field $F$ (finite or infinite, characteristic zero or nonzero). In particular, ${\bf F}_q^\times={\bf F}_q\setminus\{0\}$ is cyclic of order $q-1$. To check if $u\in {\bf F}_q$ is a primitive root, it suffices to check that $u^d\ne1$ for every proper divisor $d\mid(q-1)$, and for that it is sufficient to check that $u^{(q-1)/r}\ne1$ for every prime divisor $r\mid(q-1)$. (This is because the divisors $\frac{q-1}{r}\mid(q-1)$ are "cofinal" in the "poset.") Here we have $q-1=48=2^4\cdot3$, so we check $(2+5x)^{24}\ne1\ne(2+5x)^{16}$. Using $2^3=8\equiv1\pmod 7$ (so $2^{24}=2^{12}=1$ in ${\bf F}_7$) and $x^2=-1$, we have $$(2+5x)^{24}=(2-2x)^{24}=2^{24}(1-x)^{24}=\big((1-x)^2\big)^{12}=(-2x)^{12}=2^{12}x^{12}=(-1)^6=1.$$ Hence $2+5x$ is not a primitive root.
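A brute-force order computation along the lines of the question's edit is easy to script; a sketch in Python (the pair representation $(a,b)$ for $a+bx$ and the function names are my own, not from the answer):

```python
def mul(u, v, p=7):
    """Multiply a+bx and c+dx in F_p[x]/(x^2+1), using x^2 = -1."""
    a, b = u
    c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def order(g):
    """Multiplicative order of g in the 48-element group of units."""
    acc, k = g, 1
    while acc != (1, 0):
        acc = mul(acc, g)
        k += 1
    return k

print(order((2, 5)))  # 8: confirms the question's cycle of length 8, so not primitive
```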
H: Calculate Binomial Probability of number 75 out of total 100 rows and each row containing 3 random numbers I'm trying to calculate the chance or probability of number 75 appearing. I have 100 rows in total in Microsoft Excel and have the frequency of number 75. Each row contains 3 cells and each cell contains a random number between 1-100. For example, typical table cells:
75 23 01
91 64 67
05 12 75
I'm trying to use the formula: $P(k \text{ out of } N) = \frac{N!}{k!(N-k)!}\, p^k q^{N-k}$ where: N = 100 (total rows), k = 15 (number of times number 75 appears in these 100 rows), p = ?, q = 1 - p. I'm stuck on calculating p. Notes: numbers never repeat in the same row, meaning there will never be a case like: 75 75 22 AI: If the numbers are random, the probability of $75$ appearing in any one row is $3/100 = 0.03$. That is your $p$. This counts on the fact that numbers are not repeated within a row.
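With $p=0.03$ in hand, the binomial formula from the question can be evaluated directly; a short sketch (the values 100 and 15 are the question's):

```python
from math import comb

N, k, p = 100, 15, 0.03
print(comb(N, k) * p**k * (1 - p)**(N - k))  # P(75 appears in exactly 15 of 100 rows)
```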
H: If $V \subset H$ compact, is $L^2(0,T;V) \subset L^2(0,T;H)$ compact too? As the question states, if we have the compact embedding of Hilbert spaces $V \subset H$, is $L^2(0,T;V) \subset L^2(0,T;H)$ compact too? If not true in general, is it true for $V=H^1(\Omega)$ and $H=L^2(\Omega)$? AI: You want to know whether the unit ball of $L^2(0,T;V)$ is relatively compact (= has compact closure) in $L^2(0,T;H)$. A readable treatment of relative compactness in Lebesgue–Bochner spaces is in Tightness, integral equicontinuity and compactness for evolution problems in Banach spaces by Rossi and Savaré; see Theorem 1. In a nutshell, you need: boundedness with respect to some Banach space compactly embedded in $H$ (which you have), and integral equicontinuity (which you don't have). Without integral equicontinuity, a counterexample is provided by $f_n(t)=e^{int}\varphi$, where $\varphi$ is any fixed element of $V$.
H: Characterization properties of number sets $\mathbb{N},\mathbb{ Z},\mathbb{Q},\mathbb{R},\mathbb{C}$ When people say that a structure is defined up to isomorphism means, accordingly, that they assume certain properties that make it completely determined under certain operations and relations. So, I'd like to know what the properties are that characterize the different systems of numbers ($\mathbb{N},\mathbb{Z},\mathbb{Q},\mathbb{R},\mathbb{C}$) up to isomorphism with the operations of $+,\cdot$, and $\leq$. To make my question clearer, for example I guess the principle of induction would be part of characterizing $\mathbb{N}$, the least upper bound property would be part of characterizing $\mathbb{R}$, etc. The thing is that there are a lot of properties like these and it's not clear, at least for me, to decide what are the main ones and what of them can be deduced and are redundant, etc. In other words I'd like to ask which properties characterize each of the sets $\mathbb{N},\mathbb{ Z},\mathbb{Q},\mathbb{R},\mathbb{C}$ based on their operations and orderings. AI: $\Bbb N$ is the unique linearly ordered set that is infinite, but every initial segment is finite; it is also the smallest set which contains $0,1$ and closed under addition that satisfies the axiom $x+y=0\iff x=y=0$. $\Bbb Z$ is the smallest linearly ordered set on which both successor and predecessor operations are defined everywhere, and every element is a successor. $\Bbb Q$ is the smallest field which satisfies that $1+1+\ldots+1\neq 0$. It is also the smallest field which can be ordered. $\Bbb R$ is the unique ordered field which is Dedekind complete as an ordered set. $\Bbb C$ is the unique field which is algebraically closed, contains $\Bbb Q$ and is equipotent with $\Bbb R$. It is also the unique algebraic closure of $\Bbb R$.
H: Mean value theorem application for multivariable functions Define the function $f\colon \Bbb R^3\to \Bbb R$ by $$f(x,y,z)=xyz+x^2+y^2$$ The Mean Value Theorem implies that there is a number $\theta$ with $0<\theta <1$ for which $$f(1,1,1)-f(0,0,0)=\frac{\partial f}{\partial x}(\theta, \theta, \theta)+\frac{\partial f}{\partial y}(\theta, \theta, \theta)+\frac{\partial f}{\partial z}(\theta, \theta, \theta)$$ This is the last question. I don't have any idea. Sorry for not writing any idea. How can we show MVT for this question? AI: Following up on Peterson's hint, forget about the MVT for several variables and focus on the one dimensional version of it. Consider the function $\varphi\colon [0,1]\to \Bbb R$, $t\mapsto t^3+2t^2$. The MVT guarantees the existence of $\theta\in ]0,1[$ such that $\varphi '(\theta)=\varphi(1)-\varphi (0)$. Now try to relate $\varphi (1)$ with $f(1,1,1)$, $\varphi(0)$ with $f(0,0,0)$ and $\varphi '(\theta)$ with $\displaystyle \frac{\partial f}{\partial x}(\theta, \theta, \theta)+\frac{\partial f}{\partial y}(\theta, \theta, \theta)+\frac{\partial f}{\partial z}(\theta, \theta, \theta)$.
H: Convergent sequences coming back from infinity This may seem like a dumb question, but is it possible for a sequence to jump to infinity and then, some terms later, we can find an $N$ such that $n > N \implies |a_n - \ell| < \epsilon$? Because if it is, doesn't that mean convergent sequences do not necessarily have to be bounded? Or is the problem that once it jumps to infinity, it is immediately divergent and there is no hope of finding an $N$ that gives $|a_n - \ell| <\epsilon$? AI: A convergent sequence does necessarily have a bounded tail, if you will. There is nothing wrong with having "unbounded" terms at the (relative) start of your sequence. Some might cringe at the idea of setting term no. 3 to equal $\infty$, though, as infinity is not a number. Rather, in some manner of speaking it's a concept of "unboundedness". Of course there are exceptions, the easiest one being the augmented real number line $\Bbb R \cup \{-\infty, +\infty\}$. But then you will find arithmetic to be difficult to define in a meaningful way, so you need to tread carefully when you use it in conjunction with, for instance, the operation $+$. I'd recommend you read up on "Hilbert's hotel" for a nice thought experiment on just how difficult it is to make sense of common operations when $\infty$ is involved. Edit: Seeing your last comment, there is no such "point of no return" on this side of infinity. Either it's a real number, with a (more or less) comprehensible size, or it's not. Either way, convergence only cares about the last part of your sequence, be that from the xkcd-numbered term on or even later.
H: Calculate the probability of one ball out of 5 being number 80 I'm trying to write a lottery game program and want to have probabilities calculated for the players before they choose numbers. Lottery rules: 5 total balls; each ball has to be a number between 1 and 80; no ball can repeat in one drawing ((5,10,20,55,55) would not happen). I want to calculate the probability of number 80 in 2 ways: 1. Based on previous data 2. Just the probability of number 80 occurring at least once in 5 balls I know how to calculate the 2nd part but the 1st part is kind of tricky. I am trying to use binomial probability, which gives me some numbers, but when I add up all the probabilities they do not come out to 1.0; that's how I know it's wrong. Formula I'm using: $P(k \text{ out of } N) = \frac{N!}{k!(N-k)!}\, p^k q^{N-k}$ where N = 115 (115 past drawings in total), k = 23 (number of times number 80 occurred in those 115 drawings; frequency), p = 5 / 80, q = 1 - p. Is this the correct way of calculating it? Why do all the probabilities, when added up, not add up to 1.0? Should my N be the number of total drawings or the number of total frequencies? AI: Note that you are drawing the balls without replacement. Hence, a binomial distribution is inappropriate. Imagine that you randomly choose all $5$ balls at the same time, then sort them in increasing order. The total number of ways this can happen (since order doesn't matter) is $\binom{80}{5}$. Now how many ways can you choose $5$ balls such that exactly one of them is $80$ and exactly four of them are not $80$? The total number of ways this can happen (since order doesn't matter) is $\binom{1}{1}\binom{79}{4}$. Hence, the probability is: $$ \dfrac{\binom{1}{1}\binom{79}{4}}{\binom{80}{5}} = \dfrac{\dfrac{79!}{75!4!}}{\dfrac{80!}{75!5!}}= \dfrac{5 \cdot 79!}{80!}= \dfrac{5}{80} = \dfrac{1}{16} $$
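The final expression is easy to check numerically; a one-line sketch:

```python
from math import comb

print(comb(79, 4) / comb(80, 5))  # 0.0625 = 5/80 = 1/16
```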
H: Lights Out Variant: Flipping the whole row and column. So I found this puzzle similar to Lights Out, if any of you have ever played that. Basically the puzzle works in a grid of lights like so:
1 0 0 0
0 0 0 0
0 1 0 0
0 0 1 0
When you select a light (the X), it toggles itself and all the lights in its row and column:
1 0 1 0
1 1 X 1
0 1 1 0
0 0 0 0
This got me wondering how one could tell whether there was a solution for a given setup and grid size, and if so, what it was. I can't seem to get anywhere. Could anyone push me in the right direction? AI: Think of each light as a variable taking values in $0, 1$. Flipping a switch has the effect of adding $1$ to each corresponding light, modulo $2$. To the switch at position $i, j$ associate a variable $s_{i,j}$ representing the number of times you flip that switch. Now the final value of each light is equal to its initial value plus each $s_{i,j}$ that shares a column or row with it - this is a linear equation in the variables $s_{i,j}$. For an $m\times n$ grid, we thus obtain $mn$ linear equations (one for each light) in $mn$ variables (one for each switch). The integers modulo $2$ are a field, so all of the usual results of linear algebra should apply. Example. Let's work out the $2\times2$ case. Which grids are solvable? If we represent "on" as $1$ and "off" as $0$, then note that we need to add each light to itself. That is, if a light starts at value $0$, we need to flip the switches so as to add $0$ to that light (because we don't want it to change), but if it starts at value $1$, we need to flip the switches so as to add $1$ to that light. So the equation for each light is going to look like sum of associated switches = initial state. Writing the initial state of the $i,j$ light as $l_{i,j}$, we therefore need to solve: $$l_{1,1}=s_{1,1}+s_{1,2}+s_{2,1}$$ $$l_{1,2}=s_{1,2}+s_{1,1}+s_{2,2}$$ $$l_{2,1}=s_{2,1}+s_{1,1}+s_{2,2}$$ $$l_{2,2}=s_{2,2}+s_{1,2}+s_{2,1}$$ The matrix for this system looks like this (omitting zeros for readability): \begin{array}{cccc} 1 & 1 & 1 & \\ 1 & 1 & & 1 \\ 1 & & 1 & 1 \\ & 1 & 1 & 1 \end{array} Gaussian elimination works easily on this matrix (tip: remember that mod $2$, $a + a = 0$ for any $a$) and we obtain an explicit solution for the general $2\times2$ grid: $$s_{1,1}=l_{1,1}+l_{1,2}+l_{2,1}$$ $$s_{1,2}=l_{1,2}+l_{1,1}+l_{2,2}$$ $$s_{2,1}=l_{2,1}+l_{1,1}+l_{2,2}$$ $$s_{2,2}=l_{2,2}+l_{1,2}+l_{2,1}$$ So the answer is that all $2\times2$ grids are solvable, and there's the solution (concurring with Aryabhata's proposition). As the number of equations is equal to the number of cells on the grid, this method quickly becomes too tedious and complicated to perform by hand. I would try to investigate the determinant of these matrices in the abstract case and see if you come up with anything. That would at least tell you which grid sizes have a guaranteed solution.
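The linear-algebra recipe above mechanizes directly. Below is a sketch of my own (not code from the answer) that builds the $mn \times mn$ system over GF(2) and solves it by Gauss–Jordan elimination; it returns a press-pattern that turns all lights off, or None if the configuration is unsolvable:

```python
def solve_lights(grid):
    """grid: list of rows of 0/1 lights. Returns a 0/1 press matrix, or None."""
    m, n = len(grid), len(grid[0])
    size = m * n
    # One equation per light: sum of presses affecting it == initial state (mod 2).
    rows = []
    for i in range(m):
        for j in range(n):
            eq = [1 if (a == i or b == j) else 0
                  for a in range(m) for b in range(n)]
            eq.append(grid[i][j])
            rows.append(eq)
    # Gauss-Jordan elimination over GF(2).
    pivots, col = [], 0
    for r in range(size):
        while col < size and not any(rows[k][col] for k in range(r, size)):
            col += 1
        if col == size:
            break
        k = next(k for k in range(r, size) if rows[k][col])
        rows[r], rows[k] = rows[k], rows[r]
        for k in range(size):
            if k != r and rows[k][col]:
                rows[k] = [x ^ y for x, y in zip(rows[k], rows[r])]
        pivots.append(col)
        col += 1
    if any(rows[r][size] for r in range(len(pivots), size)):
        return None                      # inconsistent system: unsolvable grid
    sol = [0] * size                     # free variables default to "don't press"
    for r, c in enumerate(pivots):
        sol[c] = rows[r][size]
    return [sol[i * n:(i + 1) * n] for i in range(m)]

print(solve_lights([[1, 0], [0, 0]]))    # 2x2 grids are always solvable
```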
H: Interpretation of proposition in textbook I'm reading Algebra by Michael Artin and I have a question about a section of text. Proposition 1.2.13: Let $M'=[A'\mid B']$ be a block row echelon matrix, where $B'$ is a column vector. The system of equations $A'X=B'$ has a solution iff there is no pivot in the last column, $B'$. In that case, arbitrary values can be assigned to the unknown $x_i$, provided that column $i$ does not contain a pivot. When these arbitrary values are assigned, the other unknowns are determined uniquely. Why is the bold text true? Thanks. AI: Let $v_1, v_2, \dots, v_k$ be the columns in $A'$ with pivots and $w_1,w_2,\dots w_{m}$ those columns of $A'$ that don't contain a pivot. Then your linear system can be written as $$ y_1v_1+y_2v_2+\dots+y_kv_k=B'-z_1w_1-z_2w_2-\dots-z_mw_m $$ where I have renamed the unknowns $x_1,x_2,\dots$ so as not to use double subscripts. This system, considered only in the unknowns $y_1,y_2,\dots,y_k$, has a (unique) solution for every value you assign to $z_1,z_2,\dots,z_m$, because it has triangular form and all columns (except for the last, of course) have a pivot in them: indeed, multiplying a column by a scalar can't introduce new pivots.
H: How to Define Product Orientations for Topological Manifolds When working with smooth manifolds, $M^m$ and $N^n$, it is straightforward to see how orientations at points $p\in M$ and $q\in N$ (i.e. ordered bases for the tangent spaces) give rise to an orientation at the point $(p,q) \in M \times N$, given a convention about which basis should come first. However in the continuous setting, it isn't so clear to me how a choice of generators for $H_m(M, M-p)$ and $H_n (N, N-q)$ should naturally give rise to a generator for $H_{n+m} (M \times N, M\times N -(p,q))$. I think you can probably do it by the same method as in the smooth case once you choose local coordinates and a convention about how ordered bases for $\mathbb{R}^n$ correspond to generators of $H_n(\mathbb{R}^n, \mathbb{R}^n-0)$, but I was hoping for some purely algebraic-topology construction that doesn't use tangent vectors. Thank you for your time. AI: The answer comes from the Künneth formula. Of course, by excision, you are localizing and looking at $H_m(D^m,\partial D^m)$, $H_n(D^n,\partial D^n)$, and $H_{m+n}(D^{m+n},\partial D^{m+n})$. An orientation, as you pointed out, is a choice of generator for $H_m(D^m,\partial D^m) \cong \mathbb Z$, etc. Now, the Künneth isomorphism [see, e.g., Hatcher p. 276 for the relative statement] \begin{align*} H_m(D^m,\partial D^m)\otimes H_n(D^n,\partial D^n) &\overset{\cong}{\to} H_{m+n}(D^m\times D^n,\partial D^m\times D^n \cup D^m\times\partial D^n)\\ &\cong H_{m+n}(D^{m+n},\partial D^{m+n}) \end{align*} gives you what you want. (It's probably easiest to visualize the last step thinking of cubes, rather than disks.)
H: Term for changing properties in higher dimensions Somewhat simple question, but it's the following. Consider D-volumes (that is, the equivalent volume measurement in D dimensions) of spheres of ever-higher dimensions. The fraction of D-volume concentrated in an $\epsilon$-crust around the sphere is $1-(1-\epsilon)^D$. (This derivation is pretty easy, so I'm leaving it out.) For a given $\epsilon$, this fraction gets much higher as D gets higher. My question is, I want to say that the 'topology' of spheres changes as you go into higher dimensions. But I know that's not right at all. What is the correct word for the phenomenon I'm describing? Thanks! AI: I think the term you're looking for is "concentration of measure".
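A quick way to see the effect numerically; a small sketch evaluating the question's formula:

```python
eps = 0.01
for D in (2, 10, 100, 1000):
    print(D, 1 - (1 - eps) ** D)
# 2 ~0.02, 10 ~0.10, 100 ~0.63, 1000 ~0.99996: nearly all volume sits in the crust
```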
H: $f$ is either regular or $df_x = 0$. In the text, it says: Consider smooth functions on a manifold $X$: $f: X \to \mathbb{R}$; at a particular $x \in X$, $f$ is either regular or $df_x = 0$. So I am not certain here: if $df_x \neq 0$, then all we know here is $df_x$ is injective. Hence how can we conclude that $df_x$ is surjective? AI: The image of $d_xf$ is a subspace of a one dimensional vector space: it is either zero or the whole thing.
H: How to prove gaussian-like integral equation true. Integrals are definitely not my strong point, and I'm having trouble proving that: $$\int_{-\infty}^\infty (e^{\pi n})^{-x^2} \, dx = \frac{1}{\sqrt{n}}$$ It has similarities to the gaussian integral, but I have no idea how to prove it correct. What could be used to attack this? Thanks. AI: Notice that $(\operatorname{e}^{\pi n})^{-x^2} \equiv \operatorname{e}^{-\pi n x^2}$. We can apply the same method: $$J^2 = \left(\int_{-\infty}^{\infty}\operatorname{e}^{-\pi n x^2}\, \operatorname{d}\!x\right)\left(\int_{-\infty}^{\infty} \operatorname{e}^{-\pi n y^2} \, \operatorname{d}\!y \right) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \operatorname{e}^{-\pi n (x^2+y^2)}\, \operatorname{d}\!x\, \operatorname{d}\!y$$ If we make the substitution $x = r\cos\theta$ and $y=r\sin\theta$ we see that $-\infty < x < \infty$ and $-\infty < y < \infty$ is replaced by $0 \le r < \infty$ and $0 \le \theta < 2\pi$. Moreover, $x^2+y^2 = r^2$ and $\operatorname{d}\!x \wedge \operatorname{d}\!y = r \, \operatorname{d}\!r \wedge \operatorname{d}\!\theta$. Meaning that: \begin{array}{rcl} J^2 &=& \int_0^{2\pi}\int_0^{\infty} r\operatorname{e}^{-\pi n r^2} \, \operatorname{d}\!r \, \operatorname{d}\!\theta \\ \\ &=& \int_0^{2\pi} \left[ -\frac{1}{2\pi n}\operatorname{e}^{-\pi n r^2}\right]_0^{\infty}\operatorname{d}\!\theta \\ \\ &=& \int_0^{2\pi} \frac{1}{2\pi n} \, \operatorname{d}\!\theta \\ &=& \left[\frac{\theta}{2\pi n}\right]_0^{2\pi} \\ \\ &=& \frac{1}{n} \end{array} Since $J^2 = \frac{1}{n}$ it follows that $J = \frac{1}{\sqrt{n}}$ (taking the positive root, since the integrand is positive).
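If you have sympy available, the claim can also be verified symbolically; a sketch (note that a positivity assumption on $n$ is required for convergence):

```python
import sympy as sp

x = sp.symbols('x', real=True)
n = sp.symbols('n', positive=True)
print(sp.integrate(sp.exp(-sp.pi * n * x**2), (x, -sp.oo, sp.oo)))  # 1/sqrt(n)
```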
H: Closure of a connected set is connected Let $(X,d)$ be a metric space and let $E \subseteq X$ be connected. I want to show that $\overline E$ is connected. How can I prove this in a nice way? AI: PROP Let $E\subseteq X$ be connected. Then $\overline E$ is connected. P Let $f:\overline E\to\{0,1\}$ be continuous. Since $E$ is connected, $f\mid_E$ is constant. But $E$ is dense in $\overline E$, so $f$ is constant, since continuous functions are entirely determined on a dense subset of their domain whenever the codomain is Hausdorff. And $\{0,1\}$ with the discrete topology is metrizable, hence Hausdorff, so the claim follows. ADD There is a more general claim. Let $(X,\mathscr T)$ be a topological space. If $E$ is connected and $K$ is such that $E\subseteq K\subseteq \overline E$, then $K$ is connected. P Consider $K$ as a subspace of $X$. Then $E$ is dense in the space $K$. Let $f:K\to\{0,1\}$ be continuous. Then $f\mid_E$ is constant. It follows $f$ is constant, so $K$ is connected. Here is a proof of the fact about dense subsets used in PROP. Let $X$, $Y$ be topological spaces, $Y$ Hausdorff. Suppose $D$ is dense in $X$ and $f,g:X\to Y$ are continuous. If $f$ and $g$ agree on $D$, then $f=g$. P By contradiction. Thus, assume there exists $x\in X\smallsetminus D$ such that $f(x)\neq g(x)$. Then there exist open nbhds $N_1$ of $f(x)$ and $N_2$ of $g(x)$ with $N_1\cap N_2=\varnothing$. By continuity, $M_1=f^{-1}(N_1)$ and $g^{-1}(N_2)=M_2$ are open. Then so is $M=M_1\cap M_2\neq \varnothing$ since $x\in M$. Thus, there exists $y\in M\cap D$, and $f(y)=g(y)$. But this is impossible, since it gives $f(y)\in f(M)\subset ff^{-1}(N_1)\subset N_1$ and $g(y)\in g(M)\subset gg^{-1}(N_2)\subseteq N_2$, so $f(y)=g(y)\in N_1\cap N_2=\varnothing$.
H: What is an example of a bounded, discontinuous linear operator between topological vector spaces? I am thinking there might be an example between the space of compactly supported smooth functions on the real line (chosen because it is non-metrizable under the standard topology for this space of test functions) and $L^{1/2}[0,1]$ (chosen because it is not locally convex). AI: Let $E$ be an infinite dimensional Banach space or Fréchet space whose dual has uncountable Hamel basis. Let $F$ be the same space endowed with its weak topology. The identity $\operatorname{id} \colon F \to E$ is bounded (every weakly bounded set is strongly bounded by Mackey's theorem - Banach and Fréchet spaces carry their Mackey topology), but not continuous (the strong topology is strictly finer than the weak topology; for Banach spaces it follows directly because every weak neighbourhood of $0$ contains an infinite dimensional subspace, for Fréchet spaces with big enough dual, you can for every countable family $\mathcal{U} = (U_n)$ of weak $0$-neighbourhoods find a continuous linear form $\lambda$ that is not in the span of the forms used to determine the $U_n$, and $\bigcap \mathcal{U}$ then contains a nontrivial subspace not contained in $\ker \lambda$, whence $\{x\colon \lvert \lambda(x)\rvert < 1\}$ does not contain any $U_n$ [that reasoning applies of course also to infinite dimensional Banach spaces, their dual is an infinite dimensional Banach space, hence has uncountable Hamel basis]).
H: Characterization of $C_0^\infty(\mathbb{R}^N)$ in terms of Fourier transform. Let $C_0^\infty(\mathbb{R}^N)$ denote the space of infinite differentiable functions with compact support. My question is: Is there any characterization of $C_0^\infty(\mathbb{R}^N)$ in terms of Fourier tranform, i.e. is there a statemant like this: $f\in C_0^\infty(\mathbb{R}^N)$ if and only $\hat{f}$ satisfies ... ? Thank you AI: This is about the Paley-Wiener theorems, as enhanced by Schwartz and others: the Fourier transforms of the test functions are exactly the functions that extend to holomorphic functions on $\mathbb C^n$ bounded by (writing in the single-variable case for simplicity) $|\hat{f}(x+iy)| \cdot (1+x^2)^M \ll e^{Ny}$ for all $M$ and for some $N$. Proofs are not too hard, and are written many places, e.g., http://www.math.umn.edu/~garrett/m/fun/notes_2012-13/paley-wiener.pdf
H: Are there Hausdorff spaces which are not locally compact and in which all infinite compact sets have nonempty interior? Here is the background material from which I am working: The Cantor set is an uncountable compact Hausdorff space with empty interior. In a locally compact Hausdorff space, each countable set has empty interior. The rational numbers with the subspace topology is a non-locally compact Hausdorff space in which all compact sets have empty interior. I am trying to find a non-locally compact Hausdorff space in which all infinite compact sets have nonempty interior. I am guessing the example will be an exotic function space. AI: Let $X=\Bbb R\cup\{\infty\}$ with the discrete topology on $\Bbb R$, and a neighborhood of $\infty$ is a set containing $\infty$ with a countable complement. This $X$ is clearly Hausdorff. Now, the funny thing about this space is that each compact set is finite. Such a space is called anti-compact. To see this, let $S$ be an infinite subset. If it avoids $\infty$, then it has the discrete topology and is not compact. If $\infty\in S$, then choose a countably infinite subset $Q$ of $S\smallsetminus\{\infty\}$ and note that the family $\{S\smallsetminus Q\}\cup\{\{q\}:q\in Q\}$ is an infinite open cover of $S$ without a finite subcover. This means that no infinite set is compact. In particular, $\infty$ has no compact neighborhood, so $X$ is not locally compact. Now, the property you wish for is vacuously satisfied since there are no infinite compact subsets :-)
H: Simplifying trig expression for Laplace transform I'm working on the following Laplace transform problem at the moment, and I'm a little stuck. $$\mathcal{L} \{\sin(2x)\cos(5x) \}$$ I don't recall any trig identity that would apply here. I know that $$\sin(2x) = 2\sin(x)\cos(x)$$ But I'm not sure if that applies in this situation. If you guys could point in the right direction I'd be most appreciative. AI: By the addition theorem for the sine, we can write $$\begin{align}\sin (7x) &= \sin (5x + 2x) = \sin (5x)\cos (2x) + \sin(2x)\cos (5x)\\ \sin (3x) &= \sin (5x - 2x) = \sin (5x) \cos (2x) - \sin (2x)\cos (5x), \end{align}$$ and hence $$\sin(2x)\cos(5x) = \frac12 \bigl(\sin (7x) - \sin (3x)\bigr).$$ Therefore $$\mathcal{L}\{\sin(2x)\cos(5x)\} = \frac12 \bigl(\mathcal{L}\{\sin (7x)\} - \mathcal{L}\{\sin (3x)\}\bigr).$$
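For completeness (this last step is not in the original answer): with the standard transform $\mathcal{L}\{\sin(ax)\} = \frac{a}{s^2+a^2}$, the result is $$\mathcal{L}\{\sin(2x)\cos(5x)\} = \frac12\left(\frac{7}{s^2+49} - \frac{3}{s^2+9}\right).$$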
H: Number of rays on finite grid? Let's have a set $M = \{ (i,j) : i,j \in \{0,\dots,m\}\}$. Define equivalence on $M$, $(i,j) \sim (k,l)$ iff there is $r \in \mathbb R$ such that $(ri,rj) = (k,l)$. Question is what is the number of elements of $M/_\sim$? Motivation: This is a math/programming question. I have a grid of points $(i,j)$ for $i,j\in \{0,\dots,m-1\}$. Now I want to evaluate a function $f(i,j)$ for all $i,j$ and I want to do it fast. The thing is that I can calculate $f$ easily along a ray. So if I calculate $f(i,j)$ then I can calculate $f(2i,2j)$ fast and then I can calculate $f(3i,3j)$ fast etc. The question sounds a little bit like the question about the size of the projective plane over a finite field but it isn't. AI: Each ray not above the diagonal (so with $i\geq j$) has as slope a fraction $0\leq\frac ji\leq1$ with denominator at most $m$. The list of all such fractions is the $m$-th Farey sequence and its length is called the $m$-th Farey number, which equals $1+\sum_{i=1}^m\phi(i)$ where $\phi$ is Euler's totient function (there are $\phi(i)$ nonzero reduced fractions${}\leq1$ with denominator $i$). For the number you ask for, add the same count minus one for the rays above the diagonal (the diagonal ray is shared), giving $1+2\sum_{i=1}^m\phi(i)$.
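A brute-force cross-check of the count against the formula, sketched in Python (ignoring the class of $(0,0)$, as the answer does):

```python
from math import gcd

def rays(m):
    """Count ray classes through the (m+1)x(m+1) grid by brute force."""
    dirs = set()
    for i in range(m + 1):
        for j in range(m + 1):
            if (i, j) != (0, 0):
                g = gcd(i, j)
                dirs.add((i // g, j // g))   # primitive direction of the ray
    return len(dirs)

def phi(k):
    return sum(1 for a in range(1, k + 1) if gcd(a, k) == 1)

m = 12
print(rays(m), 1 + 2 * sum(phi(i) for i in range(1, m + 1)))  # the two counts agree
```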
H: Why are nonsquare matrices not invertible? I have a theoretical question. Why are non-square matrices not invertible? I am running into a lot of doubts like this in my introductory study of linear algebra. AI: I think the simplest way to look at it is to consider the dimensions of the matrices $A$ and $A^{-1}$ and apply simple multiplication. So assume, wlog, that $A$ is $m \times n$ with $n\neq m$. Then $A^{-1}$ has to be $n\times m$, because that's the only way $AA^{-1}=I_m$. But it must also be true that $A^{-1}A$ equals the identity, and now instead of $I_m$ you get $I_n$, which is not in accordance with the definition of an inverse (see ZettaSuro). Hence $m$ must be equal to $n$.
H: Proving the existence of a bridge in a tree Let $G$ be a connected graph, and let $e \in E(G)$. Prove that $e$ is a bridge if and only if every spanning tree of $G$ contains $e$. Can someone help me with this please? Thank you! AI: Suppose $e$ is a bridge in $G$ and $T$ a spanning tree on $G$ not containing $e$. Then, since $T$ is a tree, it must be connected, but since $T$ is a subgraph of $G\setminus\{e\}$, a disconnected graph, it must be disconnected. Therefore, any spanning tree contains every bridge. Now suppose $e$ is not a bridge. Then $G\setminus\{e\}$ is still connected, and so has a spanning tree $T$. However, since $G\setminus\{e\}$ has the same vertices as $G$, $T$ is also a spanning tree of $G$ that does not contain $e$. Therefore, any edge in every spanning tree is a bridge.
H: Can the triangle inequality be extended to show a distance inequality in a trapezium $AB \parallel CD$. What are the angle conditions (acute, obtuse or right angle) on $a,b,c,d$ for the inequality $ |AB+BC| > |CD|$ to be satisfied? $AB,BC,CD$ are distances. AI: I think the inequality holds iff $a+\frac{b}{2}<180$. Let $P$ be a point on $AB$, where $B$ is between $P$ and $A$, such that $PB=BC$. Also let $Q$ be the fourth vertex of the parallelogram $APCQ$ that lies on $CD$ (or its extension). The desired inequality holds only when $D$ is between $C$ and $Q$, which translates to the angle inequality I mentioned.
H: The product of integers relatively prime to $n$ congruent to $\pm 1 \pmod n$ Problem: Let $1 \leq b_1 < b_2 <...< b_{\phi(n)} < n$ be integers relatively prime with $n$. Prove that $$B_n = b_1 b_2 ... b_{\phi(n)} \equiv \pm 1 \bmod n $$ I was thinking of Fermat's Little Theorem or Euler's Theorem. AI: The cases $n=1$ and $n=2$ are trivial, so we can assume that $n\gt 2$. As Calvin Lin suggested, for every $x$ in your list, there is a unique number $y$ in the list such that $xy\equiv 1\pmod{n}$. If $y\ne x$, call the set $\{x,y\}$ a couple. By definition, the product of the two numbers in a couple is congruent to $1$ modulo $n$. So the product of the coupled numbers is congruent to $1$ modulo $n$. Now we deal with the singles, the people $x$ such that $x^2\equiv 1\pmod{n}$. Note that if $x$ is single, then so is $n-x$. Moreover, since $n\gt 2$, the numbers $x$ and $-x$ are not congruent modulo $n$. For if they were, then $2x$ would be divisible by $n$. This is not possible, since $x$ is relatively prime to $n$ and $n\gt 2$. So there is some romance in the world of singles: they can be paired off in pairs $\{x,n-x\}$ whose product is congruent to $-x^2$, that is, to $-1$. We conclude that if $2k$ is the number of singles, then the product of the singles is congruent to $(-1)^k$ modulo $n$. Thus the product of all our $b_i$ is congruent to $(-1)^k$ modulo $n$. The result follows. Remark: If $n$ is odd then the number of singles is $2^l$, where $l$ is the number of distinct primes in the prime power factorization of $n$. In the very special case of an odd prime $n$, there are $2$ singles, and we get Wilson's Theorem. In all other cases the number of singles is divisible by $4$, so our product is congruent to $1$ modulo $n$. It is not hard to extend the detailed analysis of when our product is congruent to $1$ or $-1$ to even $n$. The answer depends on whether the greatest power of $2$ that divides $n$ is $2$, $2^2$, or $2^w$ where $w\ge 3$. This is because the number of solutions of $x^2\equiv 1 \pmod{2^k}$ is $1$ when $k=1$, $2$ when $k=2$, and $4$ when $k\ge 3$.
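The statement is easy to test computationally; a small sketch:

```python
from math import gcd, prod

for n in range(3, 25):
    B = prod(b for b in range(1, n) if gcd(b, n) == 1) % n
    assert B in (1, n - 1)        # B_n is congruent to +1 or -1 mod n
    print(n, B)
```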
H: Recursion Question - Trying to understand the concept Just trying to grasp this concept and was hoping someone could help me a bit. I am taking a discrete math class. Can someone please explain this equation to me a bit? $f(0) = 3$ $f(n+1) = 2f(n) + 3$ $f(1) = 2f(0) + 3 = 2 \cdot 3 + 3 = 9$ $f(2) = 2f(1) + 3 = 2 \cdot 9 + 3 = 21$ $f(3) = 2f(2) + 3 = 2 \cdot 21 + 3 = 45$ $f(4) = 2f(3) + 3 = 2 \cdot 45 + 3 = 93$ I do not see how they get the numbers to the right of the equals sign. Please someone show me how $f(2) = 2f(1) + 3 = 2 \cdot 9 + 3$. I see they get "$2\cdot$" because of $2f$ but how and where does the $9$ come from? I also see why the $+3$ at the end of each equation but how and where does that number in the middle come from? AI: Simply use substitution. We are given the initial value $$\color{blue}{\bf f(0) = 3}\tag{given}$$ Each subsequent value of the function $f$ depends on the preceding value. So the function evaluated at $(n + 1)$ depends (is defined, in part) on the function's value at $n$: That's what's meant by a recursive definition of the function $f$, which here is defined as: $$f(n+1) = 2\cdot f(n) + 3,\quad \color{blue}{f(0) = 3}$$ Knowing $f(0)$ is enough to "get the ball rolling": $${\bf f(0 + 1) = f(1)} = 2\color{blue}{\bf f(0)} + 3 = 2\cdot \color{blue}{\bf 3} + 3 = 6 + 3 = \bf 9$$ Now, knowing $f(1)$ we can compute $f(2)$ $$f(1 + 1) =f(2) = 2\cdot {\bf f(1)} + 3 = 2\cdot {\bf 9} + 3 = 18 + 3 = {21}$$ Now that we know $f(2) = 21$ we can find $f(2 + 1) = f(3)$: $$f(3) = 2\cdot f(2) + 3 = 2 \cdot 21 + 3 = 42 + 3 = 45$$ Now that we know $f(3) = 45$, we can compute $f(3+1)=f(4)$: $$f(4) = 2\cdot f(3) + 3 = 2\cdot 45 + 3$$ And so on...
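The recursive definition translates line by line into code, which can make the substitution process concrete; a short sketch:

```python
def f(n):
    if n == 0:
        return 3                  # base case: f(0) = 3
    return 2 * f(n - 1) + 3       # recursive case: f(n+1) = 2 f(n) + 3

print([f(n) for n in range(5)])   # [3, 9, 21, 45, 93]
```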
H: homeomorphic $\sin \frac{1}{x}$ and $\mathbb{R}$ Is it really true that the graph of the function $$f(x)=\begin{cases}\sin \frac{1}{x}& x\neq0\\0&x=0\end{cases}$$ is homeomorphic to $\mathbb{R}$? This function is not continuous; otherwise it would be true. What are the main ways we have of proving that two spaces are non-homeomorphic? AI: This is false. The problem is that the graph of $f(x)$ is not path-connected, whereas $\mathbb{R}$ is, and path-connectedness is a topological invariant. You can prove the graph of $f(x)$ is not path-connected directly, or you can get it from the fact that the "topologist's sine curve" is not path-connected. In response to the second question, this is the whole point of topological invariants. The easiest way to show two spaces are not homeomorphic is to show that a certain topological invariant differs between them. Ones I've used frequently in undergraduate topology courses are compactness (in its various forms), connectedness (including path and simple), Hausdorff property, Euler characteristic; these are worth keeping in mind. You should almost never try to show directly from the definition that "there exists no continuous map with continuous inverse between $X$ and $Y$." Just by virtue of being a nonexistence statement, it's hard to prove without some simplification, and topological invariants are precisely that simplification --- they make this proof an easy (easier, anyway) verification.
H: Given a Fourier series in $L^2$ and using it to determine a particular integral Suppose $g \in L^2 (-\pi,\pi)$ has Fourier series $b_0 + \sum_{n=1}^\infty (b_n\cos(nx)+c_n\sin(nx))$. From this we want to determine what $\frac{1}{2\pi}\int_{-\pi}^{\pi}|g(x)|^2dx$ equals. I was trying to use Parseval's formula, but I didn't know if that was the right approach. According to Parseval's theorem, if $g(x)$ has the Fourier series $\sum_{-\infty}^{\infty} a_n e^{inx}$ then we should have $\frac{1}{2\pi}\int_{-\pi}^{\pi}|g(x)|^2dx = \sum_{-\infty}^{\infty}|a_n|^2$. This seems like the right theorem to use for this problem, but in our given Fourier series we have two different constants, $b_n$ and $c_n$, and not just $a_n$. Can I manipulate the equality in some way so that it works for the given Fourier series? Can someone help me get started? AI: Let's consider the way you have your FS defined as above. In this case, it is relatively straightforward to compute $$\int_{-\pi}^{\pi} dx \, |g(x)|^2$$ because all of the cross-terms vanish (why?). So what you get is $$\int_{-\pi}^{\pi} dx \, |g(x)|^2 = 2 \pi b_0^2 + \pi \sum_{n=1}^{\infty} (b_n^2+c_n^2)$$ because $$\int_{-\pi}^{\pi} dx \, \cos^2{n x} = \int_{-\pi}^{\pi} dx \, \sin^2{n x} = \pi$$ when $n \ge 1$. So really, Parseval in this case takes the form $$\frac{1}{2 \pi} \int_{-\pi}^{\pi} dx \, |g(x)|^2 = b_0^2 + \frac12 \sum_{n=1}^{\infty} (b_n^2+c_n^2)$$
H: Finding the value of $\Pr(X<Y)$ Consider the joint density $f_{X,Y}(x,y)=1/40$ inside the rectangle $0<x<5$ and $0<y<8$. How do I calculate $\Pr[X<Y]$? AI: The probability that $X$ is $\lt Y$ is $\frac{1}{40}$ times the area of the part of the rectangle that is above the line $y=x$. Finding this area is a simple geometric problem if one draws a picture. Note that it is marginally easier to find the probability that $Y\le X$. For then we are dealing with an isosceles right triangle with leg $5$. The area is $\frac{25}{2}$, and therefore $\Pr(Y\le X)=\frac{25}{80}=\frac{5}{16}$. Thus $\Pr(X\lt Y)=\frac{11}{16}$. Remark: From general considerations the probability is $\iint_D f(x,y) \,dx\,dy$, where $D$ is the part of the plane that is above the line $y=x$. Since our density function is $0$ outside the rectangle, this integral is $\iint_T \frac{1}{40}\,dx\,dy$, where $T$ is the part of the rectangle that is above the line $y=x$. To do the integration, we can integrate first with respect to $y$, then with respect to $x$. We get $$\int_{x=0}^5\left(\int_{y=x}^8 \frac{1}{40}\,dy\right)\,dx.$$ The geometric approach is easier, particularly since in any case we need the geometry to evaluate the integral. However, the integral approach would become necessary if we had a non-constant density function on the rectangle.
H: want to study permutation groups, only background is linear algebra and calculus My background is in computer science, specifically software engineering, and not really math heavy. I know the basics of calculus (the Thomas book) linear Algebra (Strang), and some Discrete Math, Graph Theory, Complexity and Algorithms and that's about it. I was recently working on proving that the 8-slide puzzle states are divided in to 2 disjoint sets (AI exercise). I had no clue that permutation groups were such a large field of study on their own (I can't even understand the terminology in wikipedia articles, orbits? Cayley tables?), or that their study was Abstract Algebra stuff. My question is, what is the learning path, book-wise or just topic wise (and I'll find the books later) for someone like me, to understand the basics of Group Theory (taking into account that I don't even know what it really is) and permutation groups more specifically? P.S. I'm not asking because I want to solve my exercise (this was simple) it just seems a fascinating topic. AI: Another book which I find very accessible is Fraleigh's A First Course in Abstract Algebra, which would be a great start with which to begin, and requires no more in the way of prerequisites than you've already covered. You wouldn't need to cover all the material in Fraleigh's text, if you're interested primarily in group theory: the basics you'll need for getting an overview of groups (including permutation groups) is covered in the first half of the text, with some supplemental sections on groups included (among other topics) in the latter half. I've had a lot of success with using it in an undergraduate's first course in abstract algebra/modern algebra. It certainly wouldn't hurt to work through Fraleigh in conjunction with Dummit and Foote. They do a great job with group theory, and it is an excellent text, overall. See this post: For more suggestions on abstract algebra texts, at varying levels of difficulty. At any rate, once you have a basic "lay of the land" and familiarity with abstract algebra, with a focus on groups, then check into An introduction to the Theory of Groups by Joseph Rotman. It's a classic.
H: Design data structure 3-13. Let A[1..n] be an array of real numbers. Design an algorithm to perform any sequence of the following operations: Add(i,y) -- Add the value y to the ith number. Partial-sum(i) -- Return the sum of the first i numbers, i.e. $\sum_{j=1}^{i} A[j]$ There are no insertions or deletions; the only change is to the values of the numbers. Each operation should take O(log n) steps. You may use one additional array of size n as a work space. Above is an exercise from Steve Skiena. I am unable to solve this. Can someone please help me? AI: A standard easy-to-implement structure for this is called a Fenwick tree. It is very popular in programming competitions. Here is a description on Petr Mitrichev's blog. UPDATE: now that I look at it, Mitrichev's description may be too short. Here is one that is more comprehensive.
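For reference, a minimal sketch of a Fenwick tree (my own implementation, not taken from the linked descriptions); both operations run in O(log n):

```python
class Fenwick:
    """Fenwick (binary indexed) tree: point update and prefix sum in O(log n)."""
    def __init__(self, n):
        self.n = n
        self.t = [0.0] * (n + 1)       # 1-indexed work array of size n

    def add(self, i, y):
        """Add y to A[i] (1-indexed)."""
        while i <= self.n:
            self.t[i] += y
            i += i & (-i)              # jump to the next node covering index i

    def partial_sum(self, i):
        """Return A[1] + ... + A[i]."""
        s = 0.0
        while i > 0:
            s += self.t[i]
            i -= i & (-i)              # strip the lowest set bit
        return s

ft = Fenwick(8)
ft.add(3, 2.5)
ft.add(5, -1.0)
print(ft.partial_sum(4), ft.partial_sum(8))   # 2.5 1.5
```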
H: What does "isomorphic" mean in linear algebra? My professor keeps mentioning the word "isomorphic" in class, but has yet to define it... I've asked him and his response is that something that is isomorphic to something else means that they have the same vector structure. I'm not sure what that means, so I was hoping anyone could explain its meaning to me, using knowledge from elementary linear algebra only. He started discussing it in the current section of our textbook: General Vector Spaces. I've also heard that this is an abstract algebra term, so I'm not sure if isomorphic means the same thing in both subjects, but I know absolutely no abstract algebra, so in your definition if you keep either keep abstract algebra out completely, or use very basic abstract algebra knowledge, that would be appreciated. AI: Isomorphisms are defined in many different contexts; but, they all share a common thread. Given two objects $G$ and $H$ (which are of the same type; maybe groups, or rings, or vector spaces... etc.), an isomorphism from $G$ to $H$ is a bijection $\phi:G\rightarrow H$ which, in some sense, respects the structure of the objects. In other words, they basically identify the two objects as actually being the same object, after renaming of the elements. In the example that you mention (vector spaces), an isomorphism between $V$ and $W$ is a bijection $\phi:V\rightarrow W$ which respects scalar multiplication, in that $\phi(\alpha\vec{v})=\alpha\phi(\vec{v})$ for all $\vec{v}\in V$ and $\alpha\in K$, and also respects addition in that $\phi(\vec{v}+\vec{u})=\phi(\vec{v})+\phi(\vec{u})$ for all $\vec{v},\vec{u}\in V$. (Here, we've assumed that $V$ and $W$ are both vector spaces over the same base field $K$.)
H: Writing an expression to find the probability Suppose $(x)$ and $(y)$ are independent lives. $T(x)$ and $T(y)$ denote the future lifetimes. How do I write an expression for the probability that exactly one of them dies within the next 10 years? The formula I thought of is $\Pr[T(x)<10 \text{ or } T(y)<10]-\Pr[T(x)<10 \text{ and } T(y)<10]$ but I am not sure whether this is correct. AI: Yeah, that's right. If the lives are independent, then the deaths are independent events (or boolean random variables), and you're essentially taking the exclusive-or.
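For completeness (spelling out a step not in the original answer): writing $p_x=\Pr[T(x)<10]$ and $p_y=\Pr[T(y)<10]$ (my notation) and using independence, $$\Pr[\text{exactly one dies}] = p_x(1-p_y)+(1-p_x)p_y = p_x+p_y-2p_xp_y,$$ which is exactly $\Pr[A\cup B]-\Pr[A\cap B]=(p_x+p_y-p_xp_y)-p_xp_y$.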
H: Finding $\,\%\,$ of salary relative to another salary Sam makes $\$65$ per week and Don makes $\$138$ per week. How do I express, as a percentage, how much more Don makes per week than Sam does? AI: $$\text{Don earns}\;\;\left(\frac{\text{Don's earnings}-\text{Sam's earnings}}{\text{Sam's earning's}}\; \times 100 \%\right) \;\;\text{more than what Sam earns}$$ $$\dfrac {138 - 65}{65}\times 100\% \approx 112.31\%$$ This tells us that Don earns $\;\approx \;112.31\%\;$ more than Sam earns.
H: Some questions about $\gcd(n,m)$ and $\phi(n)$ I was messing around in Excel at the end of work today and made a table where the $(i,j)$ entry $a_{i,j}$, for $j \geq i$, is 1 exactly when $i$ and $j$ are coprime (see snapshot of a portion of the table below): My questions are: Is there an explanation for the highlighted rectangular pattern that keeps popping up (albeit in different orientations)? What about the numbers where $a_{*j}$ is $0$ for "most" values of $j$. They seem to be sandwiched between twin primes or at least right next to a prime. Is there any significance there? Another question that popped out of this, are there infinitely many solutions for $\phi(n) = \phi(n+1)$? I also calculated $\phi(n)$ directly at the top of my table and noticed that the pattern for solutions seemed somewhat erratic. Is there a theorem related to this at all? These questions are rather open ended and I apologize. Just trying to see if there is any merit to my curiosity today. AI: Twin primes are always either side of a multiple of 6 (except for the first twin primes, 3 and 5). Why is that? I think the numbers with most blank space below them are multiples of 6. Why would most numbers have a factor in common with any multiple of 6? I don't know about your last question. None the less, playing around with maths is (a) fun and (b) how a lot of maths gets discovered.
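On the unanswered last question: whether $\phi(n)=\phi(n+1)$ has infinitely many solutions is, as far as I know, an open problem, but small solutions are easy to find by brute force; a sketch using sympy's totient:

```python
from sympy import totient

hits = [n for n in range(1, 1000) if totient(n) == totient(n + 1)]
print(hits)   # begins 1, 3, 15, 104, 164, 194, 255, 495, ...
```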
H: Problem related to Chinese Remainder Theorem I'm not sure if there is a typo in the question or if I am incorrect (will point out as I get to it), but I am given that $a,b,m,n$ are integers with $\gcd(m,n) = 1$ and that \begin{equation} c \equiv (b - a) \cdot m^{-1} \pmod{n} \end{equation} and am asked to prove $x = a + cn$ (I believe it should be $x = a + cm$) is a solution to \begin{equation} x \equiv a \pmod{m} \qquad x \equiv b \pmod{n}, \end{equation} and that every solution to the system of congruences has the form $x = a + cn + ymn$ (I believe it should be $x = a + cm + ymn$) for some $y \in \mathbb{Z}$. I start by stating that as $\gcd(m,n) = 1$, the Chinese Remainder Theorem states that there is in fact a unique solution to the congruence up to congruence in $\mathbb{Z}/_{mn\mathbb{Z}}$. As $x \equiv a \pmod{m}$, we have that $x = a + mk$ for $k$ an integer and since $\gcd(m,n) = 1$, $m$ has an inverse, $m^{-1}$, in $\mathbb{Z}/_{n\mathbb{Z}}$. Hence \begin{equation} x \equiv b \pmod{n} \implies k \equiv (b - a) \cdot m^{-1} \pmod{n}. \end{equation} (Note: we are given this as an assumption, where $c = k$, so I'm not sure why this assumption would be provided.) This then provides $k = (b - a) \cdot m^{-1} + n\ell$ for $\ell$ an integer, and so \begin{equation} x = a + m(b - a) \cdot m^{-1} + mn\ell \end{equation} is the general solution to the congruence, with $a + m(b - a) \cdot m^{-1}$ being the unique solution in $\mathbb{Z}/_{mn\mathbb{Z}}$. (Note that $m^{-1} \cdot m \neq 1$ as this is not modulo $n$.) With $\ell = y$ and $(b - a) \cdot m^{-1} = c$ this solution would make complete sense if I was asked to show that $x = a + cm$ and $x = a + cm + ymn$ are the solutions instead of $x = a + cn$ and $x = a + cn + ymn$. Could someone please show me where I may have made a mistake or missed something, or reassure me that these are in fact typos in the question. AI: Let me offer you reassurance "that these are in fact typos in the question." You are correct that it ought to be $\,x = a + c\,\bf m\,$ and not $\,x = a+ c\,\bf n,\,$ and also about $\,x = a + c\,{\bf m} + ymn$ and not $\,x = a + c\,{\bf n} + ymn$.
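The (corrected) construction in the question is essentially an algorithm; a sketch using Python's built-in modular inverse pow(m, -1, n), available in 3.8+:

```python
def crt(a, m, b, n):
    """Solve x = a (mod m), x = b (mod n), assuming gcd(m, n) = 1."""
    c = ((b - a) * pow(m, -1, n)) % n
    return a + c * m              # the solution, unique modulo m*n

x = crt(2, 5, 3, 7)
print(x, x % 5, x % 7)            # 17 2 3
```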
H: Is there a structure theorem for nonempty, compact, nowhere dense subsets of the real line? Let $X$ be the set of all nonempty compact nowhere dense subsets of the real line. Is there a theorem that describes the form of the elements of $X$? Context For open subsets of the line, such a result is well-known: every open set is the disjoint union of open intervals. But compact sets can be substantially more complicated. AI: Up to homeomorphism the basic ones are homeomorphic copies of the ordinal spaces $\alpha+1$ for each $\alpha<\omega_1$, and Cantor sets. Of course $X$ is also closed under finite unions. Note that a space homeomorphic to one of the countable compact ordinal spaces can be embedded in a non-obvious way. For example, $\omega^2+1$ can be embedded as follows: $$f:\omega^2+1\to\Bbb R:\begin{cases} \omega\cdot n\mapsto\frac1{2^n}\\\\ \omega\cdot n+k\mapsto\frac1{2^{n+1}}-\frac1{2^{n+2+k}},&\text{if }k>0\\\\ \omega^2\mapsto 0\;. \end{cases}$$ The resulting set of reals looks schematically like this in its order in $\Bbb R$: $$\bullet\dots\longrightarrow\bullet\longrightarrow\bullet\longrightarrow\bullet\longrightarrow\bullet\longrightarrow\bullet\bullet$$ The bullets ($\bullet$) from right to left are $f(\omega\cdot0),f(\omega\cdot 1),f(\omega\cdot2),\ldots,f(\omega^2)$. This is rather different from our usual picture of $\omega^2+1$ in its natural (ordinal) order.
H: Show that weighted average of a function involving expected values is not equal to the function of the weighted average How can I show that, given $w_j$ sum to $1$ $$\sum_{j=1}^n w_j \frac{a}{E(x_j)+b}\ne \frac{a}{E(\sum_{j=1}^n w_j x_j)+b}$$ unless $x_j=x_k\ \forall j,k$, where $a$ and $b$ are constants and the $x_j$ are arbitrary R.V.'s? I see some similarity to Jensen's inequality but I'm not sure how to approach that. Thanks! AI: Hint: you want to prove that $$\sum_{j=1}^n\omega_jf(X_j)\geq (\text{or} \leq ? ) ~f\left(\sum_{j=1}^n \omega_jX_j\right)$$ for all $n$, with $\sum_{j=1}^n\omega_j=1$, $X_1,\dots,X_n$ random variables and $$f(X):=\frac{a}{E[X]+b}.$$ What can you say about $f$?
H: What are zero divisors used for? This is the first time I hear this term. Specifically the assertion is that $\mathbb{Z}$ has no zero divisors. So, from my understanding this is because there are no two non-zero numbers $a,b\in \mathbb{Z}$ such that $ab=0$. Also I can see that this definition is related to the one I learnt in high school that $a$ divides $0$ if $\exists b\in\mathbb{Z}\ (ba=0)$, the difference being that we need to consider the restriction $a,b\neq 0$ when dealing with zero divisors. What is the motivation for zero divisors? What are they used for? AI: Trying to shed additional light by means of an example involving a most typical deduction where the presence/absence of zero divisors plays a role. Let's imagine that we need to solve the equation $$x^2-x=0.$$ Before we can make any progress on this, we need to know a bit more. What kind of an object is $x$? Is it a number (real/complex/rational)? Is it a matrix? Is it an element of some other universe, where the equation makes sense? A couple things are certain. Whatever $x$ is, we need the ability to multiply it with itself (otherwise $x^2$ does not exist) as well as subtract it from its square. Furthermore, this universe must have an object called zero. Let's assume that the universe is at least a ring, because then we have all of the above operations available. In a ring we have a unit (a neutral element for multiplication, "$1$") and the distributive law. This allows us to rewrite the equation as $$ 0=x^2-x=x^2-1\cdot x=(x-1)x. $$ Well, does that help? Depends! If $x$ is an element of any of the school level number systems, we know that the system has no zero divisors. Meaning that if $ab=0$ in such a system, we are allowed to deduce that either $a=0$ or $b=0$. If our $x$ comes from such a system, we can make swift progress with our equation and conclude that either $$ x-1=0\qquad\text{or}\qquad x=0. $$ This tells us that either $x=1$ or $x=0$ (standard applications of ring axioms). So we have completely solved the equation PROVIDED that $x$ resides in a ring that does not have zero divisors. This is just an abstraction of the techniques we learned in school, but there the zero divisor concept usually receives no attention, even though it plays a key role. That is all fine, because drawing attention to the zero divisor concept begs the question: "Are there meaningful systems with zero divisors?" Let us next assume that $x$ is an element of the residue class ring $\mathbb{Z}_6$. Students might nevertheless want to use the above technique. But some wiseguy will notice that $x=\overline{3}$ is also a solution, as $x^2=\overline{9}=\overline{3}$. What was wrong with the earlier approach? Let's check: $$ (x-1)x=\overline{(3-1)}\cdot\overline{3}=\overline{2}\cdot\overline{3}=\overline{6}=0. $$ Lo and behold, in this ring a product can be zero without either factor being so. But we still need to solve our equation. If we are not allowed to use the familiar trick, then what can we do? Quadratic formula? That requires an ability to calculate square roots, and an ability to divide by two. Without going into details I will just state that both of these are suspect in more general rings. Also the absence of zero divisors is hidden in the derivation of the quadratic formula. So what? Thankfully this ring is finite, so we can get away with simply trying all the possible values of $x$. Such checking reveals that the residue class $x=\overline{4}$ is also a solution. But it gets worse!
If $x$ is a $2\times2$ matrix with real entries, then in addition to the "obvious" solutions $$ x=\left(\begin{array}{rr}1&0\\0&1\end{array}\right),\qquad\text{and}\qquad x=\left(\begin{array}{rr}0&0\\0&0\end{array}\right) $$ we also have solutions like $$ x=\left(\begin{array}{rr}1&0\\0&0\end{array}\right),\qquad\text{and}\qquad x=\left(\begin{array}{rr}0&0\\0&1\end{array}\right).$$ And that's not all. I invite you to check that the matrix $$ x=\left(\begin{array}{cc}\cos^2\alpha&-\cos\alpha\sin\alpha\\-\cos\alpha\sin\alpha&\sin^2\alpha\end{array}\right) $$ is a solution of the equation $x^2-x=0$ for all values of the angle $\alpha$. So the main "motivation" to study the concept of zero divisors is, as André pointed out, to know when they are absent. A simple task may become quite harrowing if we don't know about the possibility of zero divisors. Above we saw that a quadratic equation may have four or infinitely many solutions, if the ring has zero divisors. With a little bit of extra work we can prove the familiar result that a degree $n$ equation in a commutative ring without zero divisors has at most $n$ solutions. The reason for adding the commutativity assumption is a bit subtle, and I won't get into that. If I managed to awaken your curiosity about that, I advise you to take a peek at this question for an example of a quadratic equation with infinitely many solutions in a non-commutative ring without zero divisors. I also recommend the answer by Arturo Magidin, but it does require some familiarity with ring concepts.
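Both phenomena described above are easy to confirm by brute force; here is a small sketch (the numpy check of the one-parameter matrix family is my own, not part of the original answer):

```python
import numpy as np

# Solutions of x^2 - x = 0 in Z_6: four of them, {0, 1, 3, 4}.
print([x for x in range(6) if (x * x - x) % 6 == 0])

# The one-parameter family of idempotent 2x2 matrices from the answer.
for alpha in np.linspace(0.0, np.pi, 7):
    c, s = np.cos(alpha), np.sin(alpha)
    M = np.array([[c * c, -c * s], [-c * s, s * s]])
    assert np.allclose(M @ M, M)  # M^2 = M, i.e. M solves x^2 - x = 0
```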
H: permutation of the elements of a matrix with respect to sign? Let $\mathcal A$ be an $(m\times n)$ matrix. Every element $a_{i,j}$ of the matrix is either a positive or negative integer, or zero. Question: How many distinct matrices could be generated with respect to the above distinction for matrix $\mathcal A$? Please provide a general formula. Example: A $(2 \times 1)$ matrix with two elements has $9$ distinct sign patterns. AI: If you are only interested in the sign, not the size of the integer, then you have three options for each entry, so for an $m \times n$ matrix you would have $3^{mn}$ possibilities.
H: Inversion of a power series without a linear term Could someone explain to me how to invert $$ z = y e^{-y} = e^{-1} - \frac{1}{2e}(y - 1)^2 + \frac{1}{3e}(y - 1)^3 - \frac{1}{8e}(y - 1)^4 + \cdots $$ around $y=1, z=e^{-1}$, so that $y$ is expressed as a series in $(1 - ez)$? This is a part of example VI.8 in "Analytic combinatorics" by Flajolet and Sedgewick. They skip the details of the inversion process. Actually I am not so familiar with manipulating series expansions. If someone could give some details of the method, I'll greatly appreciate it. AI: Well, this is like the Lagrange inversion theorem (also see Wikipedia) for the equation $z=f(y)$, but with $f'(y_0)$ vanishing. You compute the expansion for $y$ by writing down the general form of its power series with unknown coefficients: $$ y = 1 + \alpha (1-ez)^{1/2} + \beta(1-ez) + O((1-ez)^{3/2}). $$ Then you substitute: $$z = \frac{1}{e} - \frac1{2e}\left(\alpha (1-ez)^{1/2} + \beta(1-ez) + \cdots\right)^2 + \frac1{3e}\left(\alpha(1-ez)^{1/2}+\cdots\right)^{3} + \cdots, $$ and equate coefficients on the left and right: $$ \frac1e-\frac{1-ez}{e} = \frac1e - \frac{\alpha^2}{2e}(1-ez) + \left(-\frac{\alpha\beta}{e} + \frac{\alpha^3}{3e}\right)(1-ez)^{3/2} + \cdots. $$ Solving gives $$ \alpha = \sqrt2, \qquad \beta = \frac23, $$ so $$ y = 1 + \sqrt{2} (1-ez)^{1/2} + \frac23(1-ez) + O((1-ez)^{3/2}). $$ You can find more terms by doing the same with longer power series. I don't know if there is a closed form for the series coefficients.
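If one prefers to let a computer algebra system do the bookkeeping, the same two coefficients can be recovered symbolically. A sympy sketch (my own code): substitute the ansatz $y = 1+\alpha s+\beta s^2$ with $s=(1-ez)^{1/2}$ into $1 - e\,y e^{-y}$, which must equal $s^2$ exactly, and match coefficients.

```python
import sympy as sp

s, alpha, beta = sp.symbols('s alpha beta')

y = 1 + alpha * s + beta * s**2  # ansatz in powers of s = sqrt(1 - e*z)
expr = sp.series(1 - sp.E * y * sp.exp(-y), s, 0, 4).removeO().expand()

# 1 - e*z equals s^2 exactly, so the s^2 coefficient is 1 and the s^3 one is 0.
sol = sp.solve([expr.coeff(s, 2) - 1, expr.coeff(s, 3)], [alpha, beta], dict=True)
print(sol)  # expect alpha = ±sqrt(2), beta = 2/3; the positive root matches the text
```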
H: How do we know that $\sum_{k=0}^{\infty}\frac{x^k}{k!}=e^x$? I've been taught that the definition of the exponential function is the following power series: $$\sum_{k=0}^{\infty}\frac{x^k}{k!}$$ Here's my question: how do we know that this series is equal to $e^x$? That is to say, how do we know that the function defined by this power series can be expressed in the form $c^x$ where $c$ is a constant and that $c=e$? If we look at another power series, say, $\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k}}{(2k)!}$, there is obviously no constant $c$ for which $c^x=\sum_{k=0}^{\infty}(-1)^k\frac{x^{2k}}{(2k)!}$, so what tells us that there exists such a constant for the first series? AI: It is only after some manipulation that it becomes something familiar. Your other example of course is $\cos x$. A common approach is to find what differential equation it satisfies. Let $y(x) = \sum_{k=0}^{\infty}\frac{x^k}{k!}$. Differentiate it term by term, and you find $\frac{dy}{dx}=y(x)$. That is a differential equation. It is separable, and you get $x=c+\log y$. You can tell $c=0$ because $y=1$ when $x=0$. Lastly, the inverse function of $\log x$ is $e^x$. Your other series is a solution to $\frac{d^2y}{dx^2}+y=0$, and so are $\cos x$ and $\sin x$, so it must be one of those, or a combination of those.
H: Is it true that $\mathbb R \setminus \bigcup A\not= \emptyset$? Let $X$ be the set of all nonempty compact nowhere dense subsets of the real line. Given any countable subset $A$ of $X$. Is it true that $\mathbb R \setminus \bigcup A\not= \emptyset$? Thanks for your help. AI: Yes: that’s just (a special case of) the Baire category theorem. A compact, nowhere dense subset of $\Bbb R$ is a closed nowhere dense set, so in fact $\Bbb R\setminus\bigcup A$ is even dense in $\Bbb R$.
H: Finding the slope at a point $P(x_1,y_1)$ on a parabola Given a point $P(x_1,y_1)$ on the graph of a parabola $y^2=4px$, prove that the slope at point P is $$\frac{y_1}{2x_1}$$ AI: How 'bout this: from $y^2 = 4px$, we have, by implicit differentiation, $2yy' = 4p$; dividing the latter equation by the former, we get $\frac{2y'}{y} = \frac{1}{x}$, or $y' = \frac{y}{2x}$. And I leave it to you to re-insert the subscripts. Usual caveats in re. case $x= 0$ apply. Cheers.
H: Will the ball hit the wall? There is a ball starting at point $A$ going forward in the direction towards point $B$ (so it moves along the $(AB)$ line). A wall is represented by its two ends $W_1$ and $W_2$. I have to solve in a general way the question "Will the ball hit the wall?". The question seemed quite simple at first but there are a lot of particular possibilities. So either the vectors are collinear, i.e. $(x_B-x_A)*(y_{W_2}-y_{W_1}) = (y_B-y_A)*(x_{W_2}-x_{W_1})$, and if they are, they can either be on the same line or not. So we check if $y_{W_1}-y_A =(y_B-y_A)/(x_B-x_A)*(x_{W_1}-x_A)$ and if it's true, we then have to check if $\overrightarrow{AB}$ points towards the wall. So I tried to compare the $x$ and $y$ coordinates, but it doesn't seem sufficient to conclude whether point $B$ is between $W_1$ and $W_2$. And this is just the simple case when they are parallel; it becomes more complicated to check in general whether the ball will hit (we also have to check if the intersection point is between the ends of the wall, but that's easy). If they're not parallel, I've determined precisely the possible intersection points (with all the particular cases of the line being vertical or horizontal, in which case we can't use the equations to check the intersection). How can we check if the vector really points towards the wall (when they're parallel, and in a general way)? Is there a more elegant and general way to solve the problem? AI: If the lines are not parallel, then you can uniquely calculate the intersection point between the complete straight line through $A$ and $B$ and the complete straight line through $W_1$ and $W_2$. Let's call it $I$. Then all which is left to do is to check that $I$ is both between $A$ and $B$, and between $W_1$ and $W_2$. Since you already know that $I$ is on the straight line through $A$ and $B$, this is easily done by checking that for a single arbitrary coordinate in which $A$ and $B$ differ, the corresponding value for $I$ is in between, and the same for $W_1$ and $W_2$. If the lines are parallel, then all you have to check is whether at least one of the following relations is true: $d(A,W_1)+d(W_1,B)=d(A,B)$, $d(A,W_2)+d(W_2,B)=d(A,B)$, $d(W_1,A)+d(A,W_2)=d(W_1,W_2)$ or $d(W_1,B)+d(B,W_2)=d(W_1,W_2)$ where $d(P,Q)$ is the distance between the points $P$ and $Q$. If any of them is true, the ball "hits" the wall (actually, moves at least partially along it), otherwise it doesn't. If you do numerical calculations, you of course want to take into account that you may have rounding errors, and therefore the equalities may not be exact equalities. That is, you should test whether the absolute difference is smaller than some pre-defined value epsilon.
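Since this reads like a collision test from a program, here is a compact sketch of the whole check, treating the ball's path as the ray from $A$ through $B$ and using the standard cross-product parametrization; the tolerance `EPS` and all names are mine:

```python
EPS = 1e-9

def sub(p, q):
    return (p[0] - q[0], p[1] - q[1])

def cross(u, v):
    return u[0] * v[1] - u[1] * v[0]

def ray_hits_wall(A, B, W1, W2):
    """Does the ray from A in direction B-A meet the segment [W1, W2]?"""
    d, w = sub(B, A), sub(W2, W1)
    denom = cross(d, w)
    if abs(denom) > EPS:                   # not parallel: unique line intersection
        t = cross(sub(W1, A), w) / denom   # parameter along the ray
        u = cross(sub(W1, A), d) / denom   # parameter along the wall
        return t >= -EPS and -EPS <= u <= 1 + EPS
    if abs(cross(sub(W1, A), d)) > EPS:    # parallel but not collinear
        return False
    # Collinear: a hit iff some endpoint of the wall lies ahead of A along d.
    t1 = sub(W1, A)[0] * d[0] + sub(W1, A)[1] * d[1]
    t2 = sub(W2, A)[0] * d[0] + sub(W2, A)[1] * d[1]
    return max(t1, t2) >= -EPS

print(ray_hits_wall((0, 0), (1, 0), (2, -1), (2, 1)))  # True
```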
H: Why does this polynomial related to Hamming Codes have integer coefficients? Question: Why does $$ \frac{(1 + x)^{2^k - 1} - (1 - x^2)^{2^{k-1} - 1}(1-x)}{2^k} $$ have integer coefficients? Details: For a question I'm thinking about, I needed to know all the real numbers $c$ such that the generating function $$ p(x) = \frac{(1 + x)^{2^k - 1}}{2^k} - c \cdot(1 - x^2)^{2^{k-1} - 1}(1-x) $$ is a polynomial with integer coefficients and constant term $0$ or $1$. Plugging in $x = 0$ reveals that $c = -\frac{1}{2^k}$ or $c = 1 - \frac{1}{2^k}$. Both of these $c$ values seem to work empirically for all $k$, but it is unclear to me why this is the case. Binomial formula gives the coefficient of $x^i$ (for, say, $i$ even) as $$ \frac{{{2^k - 1} \choose {i}} - (-1)^{i/2}{2^{k-1} - 1 \choose i/2} }{2^k} $$ But I don't think this expression is nice. Also, a search on oeis reveals that $p(x)$ is related to Hamming Codes. AI: Note that $$ \frac{(1+x)^{2^{k+1}-1}-(1-x^2)^{2^k-1}(1-x)}{2^{k+1}} =(1+x)^{2^k-1}\frac{(1+x)^{2^k}-(1-x)^{2^k}}{2^{k+1}} $$ $\frac{(1+x)^{2^k}-(1-x)^{2^k}}{2}$ is the odd part of $(1+x)^{2^k}$. Therefore, we just need to show that $\left.2^k\,\middle|\,\binom{2^k}{j}\right.$ when $j$ is odd. The number of factors of $2$ in $\binom{2^k}{j}$ is the sum of the bits in the binary representation of $j$ plus the sum of the bits in the binary representation of $2^k-j$ minus the sum of the bits in the binary representation of $2^k$. When $j$ is odd, this is $k$. 100000000000 $\leftarrow2^k$ has $k$ zeros on the right 001100100101 $\leftarrow j$ is odd, so it has a one on the right 010011011011 $\leftarrow2^k-j$ is the complement of $j$ plus one (has a one on the right) Except for the rightmost bit, all bits of $j$ and $2^k-j$ are complements. Since the rightmost bit is one in both, the sum of the bits in $j$ and $2^k-j$ is $k+1$. The sum of the bits in $2^k$ is $1$. Thus, there are $k$ factors of $2$ in $\binom{2^k}{j}$. That is, $\left.2^k\,\middle|\,\binom{2^k}{j}\right.$ when $j$ is odd.
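Both the exact power of $2$ in $\binom{2^k}{j}$ for odd $j$ and the integrality of the original quotient are easy to confirm experimentally; a sketch (my own code, needing Python 3.8+ for `math.comb`):

```python
import sympy as sp
from math import comb

def v2(m):
    """Exponent of 2 in m."""
    v = 0
    while m % 2 == 0:
        m //= 2
        v += 1
    return v

# The bit-counting argument: v_2(C(2^k, j)) is exactly k when j is odd.
for k in range(1, 8):
    assert all(v2(comb(2**k, j)) == k for j in range(1, 2**k, 2))

# The quotient from the question really has integer coefficients.
x = sp.symbols('x')
for k in range(2, 7):
    p = sp.expand(((1 + x)**(2**k - 1) - (1 - x**2)**(2**(k - 1) - 1) * (1 - x)) / 2**k)
    assert all(c.is_integer for c in sp.Poly(p, x).all_coeffs())
```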
H: Analytic Function vs Exponential Order function We say that a function $f$ is of exponential order $\alpha$ if there exist constants: $M$, $\alpha$, $T$ such that for $x>T$ $$f(x)<M\cdot e^{\alpha x}$$ Polynomials are of exponential order. Then, is it true that analytic functions in a neighborhood of the origin are of exponential order? I want to know this to see if it is legal to calculate the Laplace transform of analytic functions. AI: In general no, take for example $e^{x^2}$.
H: Proving that $\|A\|_{\infty}$ is the largest row sum of absolute values of matrix $A$ I am studying matrix norms. I have read that $\|A\|_{\infty}$ is the largest row sum of absolute values and $\|A\|_{1}$ is the largest column sum of absolute values of the matrix $A$. However, I am not able to prove this. Is there a proof of these statements? Please help, and thanks for your time. Edit: Here $\|A\|_{\infty}$ is the matrix norm induced by the vector norm $\|x\|_{\infty}$. AI: I assume that the author tries to derive the matrix norms $\|A\|_1$ and $\|A\|_\infty$ induced by the vector norms $\|x\|_1$ and $\|x\|_\infty$. Let's take, for example, $\|\cdot\|_\infty$. We write $$\|Ax\|_\infty=\sup_i \left|\sum_j A_{ij}x_j\right|\le \sup_i \sum_j |A_{ij}||x_j|\le \sup_j |x_j| \sup_i \sum_j |A_{ij}|.$$ Hence, a good candidate for $\|A\|_\infty$ is $\sup_i \sum_j |A_{ij}|$. We need to prove that this bound is indeed attained; it is, since we can take the $i$ where that supremum is achieved and set $x_j=\mathrm{sign}(A_{ij})$. With such an $x$ all our inequalities degenerate to equalities, and we conclude that $\|A\|_\infty = \sup_i \sum_j |A_{ij}|$ is the matrix norm induced by $\|x\|_\infty = \sup_j |x_j|$. The case $\|\cdot\|_1$ is done likewise.
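Both formulas are easy to test numerically; numpy's `numpy.linalg.norm` implements exactly these induced norms for `ord=1` and `ord=inf`, so a quick sketch of a check (my own code):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(100):
    A = rng.normal(size=(4, 5))
    row_sum = np.abs(A).sum(axis=1).max()  # largest absolute row sum
    col_sum = np.abs(A).sum(axis=0).max()  # largest absolute column sum
    assert np.isclose(np.linalg.norm(A, ord=np.inf), row_sum)
    assert np.isclose(np.linalg.norm(A, ord=1), col_sum)

# The extremal x from the proof attains the bound for the infinity norm.
A = rng.normal(size=(4, 5))
i = np.abs(A).sum(axis=1).argmax()
x = np.sign(A[i])                          # x_j = sign(A_ij), so ||x||_inf = 1
assert np.isclose(np.abs(A @ x).max(), np.abs(A).sum(axis=1).max())
```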
H: Prove that every integer is either prime or composite In the book I'm reading, the following proof is given for the stated theorem: Let $n$ be any integer that is greater than 1. Consider all pairs of positive integers $r$ and $s$ such that $n = rs$. There exist at least two such pairs, namely $r = n,\ s = 1$ and $r = 1,\ s = n$. Moreover, since $n = rs$, all such pairs satisfy the inequalities $1 \le r \le n$ and $1 \le s \le n$. If $n$ is prime, then the two displayed pairs are the only ways to write $n$ as $rs$. Otherwise, there exists a pair of positive integers $r$ and $s$ such that $n = rs$ and neither $r$ nor $s$ equals either 1 or $n$. Therefore, in this case $1 < r < n$ and $1 < s < n$, and hence $n$ is composite. When I attempted the proof myself, I came up with the following reasoning: Consider all integers $1<k<n$. Now, either $n$ is divisible by one of them or not. If not, $n$ is prime; if it is, $n$ is composite. Therefore, every integer is either prime or composite. I was wondering if my proof is incorrect or less rigorous in some sense. Isn't the textbook complicating things unnecessarily? Or is that a better demonstration of mathematical thinking? Some thoughts are appreciated. P.S.: I was also not sure whether this question belonged to proof-writing or proof-strategy. AI: What do you mean exactly by "composite"? Disregarding $0$ and $\pm1$ for the sake of simplicity, if you define composite as divisible by something other than $1$ and itself (i.e. "non-prime"), the proof you need is just an obvious appeal to the principle of tertium non datur. If, on the other hand, you define a composite as a number which can be written as a product of at least two primes, you actually need some argument to show that primes and composites exhaust all numbers. Certainly, your suggestion that if $n$ is not prime then there's a number $1<k<n$ that divides $n$ is correct, but it is not the end of it, since $n=k\cdot(n/k)$ need not be a decomposition into prime factors. I'd say that you need a little inductive argument to come to a conclusion.
H: What is the domain of the following function? Please tell me the domain of $y = \sin^{-1}(\sin(x))$ P.S. I think the domain is $(-\infty, \infty)$ But my teacher says it is $(-\frac{\pi}{2}, \frac{\pi}{2})$. He says that since sine is a many-one function, its domain has to be confined to the interval $(-\frac{\pi}{2}, \frac{\pi}{2})$ to make it an inverse function. Well, I didn't quite understand. Please explain. AI: Looking at the graph might help.
H: Connectedness of disjoint union Consider the topological space $Z$ defined as the disjoint union $Z=X\cup Y$. Please tell me if these statements are true: 1) If both $X$ and $Y$ are open in $Z$ or both are closed in $Z$ then $Z$ must be disconnected 2) If one of them is open and the other is closed then $Z$ may or may not be connected. For example $\{0\}$ is closed in $[0,1]$ and $(0,1]$ is open in $[0,1]$ but $\{0\}\cup (0,1]=[0,1]$ is connected. For the second example, take $Z=(0,1)\cup [2,3]$; then $(0,1)$ is open in $Z$ and $[2,3]$ is closed in $Z$ but $Z$ is not connected. Thanks a lot. AI: This is correct, pretty much by definition, assuming that $X$ and $Y$ are non-empty. This is also true. Note, however, that in $(0,1)\cup[2,3]$, the sets $(0,1)$ and $[2,3]$ are both open and both closed: it's not like your first example, in which $\{0\}$ is closed but not open, and $(0,1]$ is open but not closed.
H: Prove that every irreducible cubic monic polynomial over $\mathbb F_{5}$ has the form $P_{t}(x)=(x-t_{1})(x-t_{2})(x-t_{3})+t_{0}(x-t_{4})(x-t_{5})$? For a parameter $t=(t_{0},t_{1},t_{2},t_{3},t_{4},t_{5})\in\mathbb F_{5}^{6}$ with $t_{0}\ne 0$ and $\{t_{i} : i>0\}$ an ordering of the elements of $\mathbb F_{5}$ (that is, $t_1,\dots,t_5$ is a permutation of $[0],\dots,[4]$, as I understand it), define a polynomial $$P_{t}(x)=(x-t_{1})(x-t_{2})(x-t_{3})+t_{0}(x-t_{4})(x-t_{5}).$$ Show that $P_{t}(x)$ is irreducible in $\mathbb F_{5}[x]$. Prove that two parameters $t,t'$ give the same polynomial over $\mathbb F_5$ if and only if $t_{0}=t_{0}'$ and $\{t_{4},t_{5}\}=\{t_{4}',t_{5}'\}$. Show that every irreducible cubic monic polynomial over $\mathbb F_{5}$ is obtained in this way. After trying $x,x-1,x-2,x-3,x-4$ the first question can be solved. But I have no idea where to start with the remaining two. Expanding the factors seems to fail for proving that two polynomials are equal to each other. AI: Hints: A cubic is reducible only if it has a linear factor. But then it would have a zero in $\mathbb{F}_5$, so it suffices to check that none of $t_1,t_2,t_3,t_4,t_5$ is a zero of $P_t(x)$. This part is tricky. I would go about it as follows. Let $t$ and $t'$ be two vectors of parameters. Consider the difference $$ Q_{t,t'}(x)=P_t(x)-(x-t'_1)(x-t'_2)(x-t'_3). $$ It is a quadratic. Show that if $\{t_1,t_2,t_3\}=\{t'_1,t'_2,t'_3\}$ then $Q_{t,t'}$ has two zeros in $\mathbb{F}_5$, but otherwise it has one or none. This allows you to make progress. Count them! The irreducible cubics are exactly the minimal polynomials of those elements of the finite field $L=\mathbb{F}_{125}$ that don't belong to the prime field. The number of such elements is $125-5=120$. Each cubic has three zeros in $L$ (it's Galois over the prime field), so there are a total of 40 irreducible cubic polynomials over $\mathbb{F}_5$. How many distinct polynomials $P_t(x)$ are there?
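The counting hint is easy to corroborate by brute force. A sketch over $\mathbb F_5$ (everything below is my own code, not part of the exercise); the assert corresponds to part 1, and both printed numbers should be $40$ if the hints are right:

```python
from itertools import permutations

P = 5

def coeffs(t0, t1, t2, t3, t4, t5):
    """(c2, c1, c0) of the monic cubic (x-t1)(x-t2)(x-t3) + t0*(x-t4)(x-t5) mod 5."""
    c2 = (-(t1 + t2 + t3) + t0) % P
    c1 = (t1*t2 + t2*t3 + t1*t3 - t0 * (t4 + t5)) % P
    c0 = (-t1*t2*t3 + t0*t4*t5) % P
    return (c2, c1, c0)

def is_irreducible(c2, c1, c0):
    # a monic cubic over F_5 is irreducible iff it has no root in F_5
    return all((x**3 + c2*x**2 + c1*x + c0) % P for x in range(P))

seen = set()
for t0 in range(1, P):
    for ts in permutations(range(P)):
        c = coeffs(t0, *ts)
        assert is_irreducible(*c)   # part 1 of the problem
        seen.add(c)

total = sum(is_irreducible(c2, c1, c0)
            for c2 in range(P) for c1 in range(P) for c0 in range(P))
print(len(seen), total)  # 40 40
```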
H: Definition of complex number In many situations (problems as well as solutions) I encounter the complex number $i$, which is often defined by $i^2=-1$ instead of $i=\sqrt{-1}$, since the former is "more preferred". Does anyone know why? Is it because there are two solutions to the equation $x^2+a=0, \ \ a>0$? AI: Basically this is because via $i^2=-1$ one introduces a new symbol and gives it new properties which make sense, whereas writing down $\sqrt{-1}$ does not make sense until the symbol $i$ is defined (and even then it is ambiguous, since there is no "unique nonnegative solution"). Regarding your comment on two possible solutions: Using $i^2=-1$ it is then clear that also $(-i)^2 = -1$ and hence, there are two equally good possibilities for $i$.
H: self adjoint properties I am looking for a proof of the following theorem but have not found one yet. A link or even a sketch of how it goes would be much appreciated. A linear map is self adjoint iff its matrix representation with respect to an orthonormal basis is self adjoint. By the way, isn't that true for every matrix representation, and not just for representations with respect to an orthonormal basis? Thanks in advance! AI: The adjoint $A^*$ of a linear map $A$ on a space with scalar product is determined by the property $\langle Ax,y\rangle = \langle x, A^*y\rangle$ for all $x,y$. If we let $x,y$ run through the base vectors of an ON basis, then $\langle Ae_i,e_j\rangle$ is the $i,j$ entry of the matrix for $A$ and $\langle e_i,A^*e_j\rangle$ is the $j,i$ entry of the matrix for $A^*$. So if $A=A^*$ then the corresponding matrix (with respect to the ON basis considered) is self adjoint.
H: Prove that $T$ is a subgraph of $G$ Let $T$ be a tree with $k$ edges, and let $G$ be a graph where every vertex has degree at least $k$. Prove that $T$ is a subgraph of $G$. Can someone give me tips/help on how to solve this problem? AI: Hint: Start with any vertex $t_1 \in T$ and map it to any vertex $g_1 \in G$ (assuming one exists, as pointed out by celtschk). Pick any neighbour $t_2 \in T$ of $t_1$ and map it to any unused neighbour $g_2 \in G \setminus \{g_1\}$ of $g_1$, ... repeat until finished. You will always be able to pick an unused neighbour in $G$ because you have $k+1$ vertices to map and each vertex in $G$ has at least $k$ neighbours (i.e. that gives you $k+1$ vertices along with the currently mapped one). I hope this helps ;-)
H: Reference a single element within a set Is there a notation to reference a single element within a set? Let's say I have a set n = {1, 2, 4, 8, 16}. If I wanted to use a single element from this set, is there a certain notation to do so? In computer programming, if I have an array int x = {1, 2, 4, 8, 16} I could reference the third element by calling on x[2], and x[2] is equal to 4 (in programming, arrays are zero indexed). Would it be the same in mathematics? AI: Sets are not ordered. $\{1,2\}=\{2,1\}$. There is no first element. If you talk about a sequence, instead, then it is indexed like an array and then you would often write $x_n$ for the $n$-th member of the sequence $x$. If you're unsure that everyone will understand this notation, it's perfectly fine to write "We denote by $x_n$ the $n$-th element of the sequence $x$". Clarity is better than brevity.
H: Proof of the statement "The product of 4 consecutive integers can be expressed in the form 8k for some integer k" I am slowly diving into simple number theory and learning how to craft direct proofs. I needed to prove the statement "The product of 4 consecutive integers can be expressed in the form 8k for some integer k". My approach was to use the quotient-remainder theorem, as was surely intended in the textbook. The proof looks as follows. Proof: Suppose n is any particular but arbitrarily chosen integer. We must show that n(n+1)(n+2)(n+3) is divisible by 8. By the quotient-remainder theorem, n can be written in one of the forms 4q or (4q+1) or (4q+2) or (4q+3) for some integer q. We divide into cases accordingly: Case 1 (n = 4q for some integer q): $$\begin{align} n(n+1)(n+2)(n+3) & = 4q(4q+1)(4q+2)(4q+3)\\ & = 8[q(32q^3+48q^2+22q+3)] \end{align}$$ Let $m=q(32q^3+48q^2+22q+3)$. Then m is an integer because sums and products of integers are integers. By substitution, $n(n+1)(n+2)(n+3) = 8m$ where m is an integer. Hence n(n+1)(n+2)(n+3) is divisible by 8. Case 2 (n = (4q+1) for some integer q): $$\begin{align} n(n+1)(n+2)(n+3) & = (4q+1)(4q+2)(4q+3)(4q+4)\\ & = 8[(4q+1)(2q+1)(4q+3)(q+1)] \end{align}$$ Let $m=(4q+1)(2q+1)(4q+3)(q+1)$. Then m is an integer because sums and products of integers are integers {...} {See case 1, it's basically the same reasoning here...} Case 3 (n = (4q+2) for some integer q): {...} Case 4 (n = (4q+3) for some integer q): {...} Conclusion: In each of the above cases, n(n+1)(n+2)(n+3) was shown to be a multiple of 8. By the quotient-remainder theorem, one of these cases must occur, hence n(n+1)(n+2)(n+3) can be written in the form 8k for some integer k. q.e.d. [End Proof] But honestly, I consider this proof somewhat clumsy and inconvenient [I am a bloody amateur, maybe I am wrong]. Are there any better/shorter ways to prove the above statement? Thanks in advance. Nikolai AI: Among any four consecutive integers there are exactly two even ones, and exactly one of them is $2\pmod 4$ while the other is $0\pmod 4$, so the product of these two (consecutive even) integers is already divisible by $8\ldots$
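For what it's worth, the corrected computation also survives a brute-force check; the sketch below additionally tests divisibility by $24$, which follows since four consecutive integers always contain a multiple of $3$ as well:

```python
# Every product of 4 consecutive integers is divisible by 8 (indeed by 24).
for n in range(-1000, 1000):
    prod = n * (n + 1) * (n + 2) * (n + 3)
    assert prod % 8 == 0
    assert prod % 24 == 0  # the extra factor 3 comes from a multiple of 3 among the four
```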
H: Prove that there exist $k$ walks that use each edge of $G$ For some $k \in\mathbb{N}$, let $G$ be a connected graph with $2k$ odd-degree vertices, and any number of even-degree vertices. Prove that there exist $k$ walks such that each edge in $G$ is used in exactly one walk exactly once, assuming that the main theorem about Eulerian circuits is true for graphs with multiple edges. AI: Add an extra vertex and join it to all $2k$ odd-degree vertices; the resulting graph has an Eulerian circuit, and removing the extra vertex cuts that circuit into $k$ walks in the original graph with the desired property.
H: log transformation for dummies I have a question which is probably very simple to answer for most people here: We have a formula: y = -log(x) Then this happens to x: = -log(x^1.5) or ( = -log(x^(15/10)) ) How do I now write up y? Many thanks! AI: We can make use of the following property of $\log$: $$\log(a^b) = b\log(a)$$ So, in this case: $$y = -\log(x)$$ $$y = -\log(x^{1.5}) = -1.5\log(x)$$ Note: I treated the above as if $y$ were a function of $x$, and we applied the transformation $x \mapsto x^{1.5}$. If this isn't a transform problem: If $y = -\log(x)$, then: $$-\log(x^{1.5}) = -1.5\log(x) = 1.5(-\log(x)) = 1.5(y)$$ Thus: $$1.5(y) = -\log(x^{1.5})$$
H: Translation from Spanish - Simple differentiation exercise Could anybody please translate this exercise into English? A friend of mine sent me a translation via Facebook, but I still don't understand why I first have to take logarithms of both sides of the equation and then differentiate... Another thing: the last sentence.... They say that it's an implicit function? It does not look like an implicit function to me.... I'm just confused because of the Spanish language.... Maybe they want me to take logarithms of the whole equation so that it becomes an implicit function... AI: Logarithmic differentiation: A procedure to compute $\frac{dy}{dx}$ with $y=f(x)^{g(x)}$ is to first apply the logarithm to both sides of the equation $y=f(x)^{g(x)}$, that is, $$\ln(y)=\ln(f(x)^{g(x)})\ \Longleftrightarrow\ \ln(y)=g(x)\ln(f(x))$$ and then differentiate implicitly to find $\frac{dy}{dx}$. Apply this procedure to find $\frac{dy}{dx}$ with $y=x^{\sqrt{x^3+\sin(x)}}$. This is the translation. The reason for doing this is simply that it is easy to differentiate the product $g(x)\ln(f(x))$, while the original function is more complicated to differentiate directly.
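For concreteness, here is the computation the exercise asks for, carried out on the given function (my own working, so check it): from $\ln(y)=\sqrt{x^3+\sin(x)}\,\ln(x)$, implicit differentiation gives $$\frac{1}{y}\frac{dy}{dx}=\frac{3x^2+\cos(x)}{2\sqrt{x^3+\sin(x)}}\,\ln(x)+\frac{\sqrt{x^3+\sin(x)}}{x},$$ and hence $$\frac{dy}{dx}=x^{\sqrt{x^3+\sin(x)}}\left(\frac{\left(3x^2+\cos(x)\right)\ln(x)}{2\sqrt{x^3+\sin(x)}}+\frac{\sqrt{x^3+\sin(x)}}{x}\right).$$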
H: Distributions on manifolds Wikipedia entry on distributions contains a seemingly innocent sentence With minor modifications, one can also define complex-valued distributions, and one can replace $\mathbb{R}^n$ by any (paracompact) smooth manifold. without any reference cited. I went through Vladimirov, Demidov, Gel'fand & Shilov but could not find a single mention of the latter concept. Of course, I have an intuitive feeling of how to go about this, but I would need to use generalized functions on $S^1$ in my work and I don't want to inefficiently re-discover the whole theory if it exists anywhere already. Could anyone point me at a reference where I could learn more about distributions on smooth manifolds? NB this is not the same question as distributions supported, or concentrated, on a manifold embedded in $\mathbb{R}^n$. My space of test functions would also be defined on the same manifold so transverse derivatives are not defined. AI: The case of a circle or products of circles is much nicer than the general case of manifolds, since there is a canonical invariant ("Haar") measure! Further, circles are abelian, compact Lie groups. And connected. Thus, smooth functions are identifiable as Fourier expansions with rapidly decreasing coefficients, and distributions have Fourier expansions with at-most-polynomially-growing coefficients. There is a useful gradation in between, by Levi-Sobolev spaces, etc. One version of this is in my course notes http://www.math.umn.edu/~garrett/m/fun/notes_2012-13/04_blevi_sobolev.pdf
H: equivalent condition for moment generating function Consider a random variable $x$ with pdf $f(x)$, and have $x \ge 0$. The moment generating function is defined as $M(t)=\int^{\infty}_{-\infty}e^{-tx}f(x)dx$ (noted that we change the sign of $t$ compared with common case ). For convenient we consider only when $t>0$. The problem is: When would a function be the moment generating function of a distribution? There are some trivial conditions like $M(0)=1$ and $M(t)$ is strictly decreasing. But obviously these are not enough. An observation is that the $M(t)$ should lie in the convex hull of $e^{-tx}$. From inequality techniques we have $\log M(t)$ up convex. Is this condition sufficient? An related but a bit different problem is given $t_1,t_2,\dots,t_n$ and $M_1,M_2,\dots,M_n$ when there exist $M(t)$ such that $M(t_i)=M_i$? Add Jul 12: The reason that the up convex condition is not enough are as follows: Consider a function $\bar M(t)$ that it's logarithm is linear in an interval, using Cauchy inequality we can demonstrate that $\bar M(t)$ is composed of one $e^{-tx}$. But we can find other continuation. AI: Take a look at this http://en.wikipedia.org/wiki/Bernstein's_theorem_on_monotone_functions completely monotone functions for example have a "moment generating function representation"
H: For which rationals $x$ is $3x^2-7x$ an integer? The following exercise is from [Birkhoff and MacLane, A Survey of Modern Algebra]: For which rational numbers $x$ is $3x^2-7x$ an integer? Find necessary and sufficient conditions. I think I was able to obtain the set of rationals $x$, but I am not sure what the necessary and sufficient conditions are about. Here is what I tried: Suppose $3x^2-7x=k, k \in \mathbb{Z}$. Solving the quadratic, we get $x=\frac{7 \pm \sqrt{49+12k}}{6}$, whence $49+12k$ must be the square of some integer $m$. Now $m^2 \equiv 49 \pmod{12}$ has solutions $m \equiv 1,5,7,11 \pmod{12}$, i.e. $m \in \{1,5,7,11\}+12 \mathbb{Z}$. Thus, the set of rationals $x = \frac{7 \pm m}{6}$ is $\{0,\frac{1}{3} \}+\mathbb{Z} = \{\ldots,0, \frac{1}{3}, 1, \frac{4}{3}, \ldots \}$. Is this correct? What do the necessary and sufficient conditions refer to? AI: What you've found are called "necessary conditions" on $x$ for $3x^2-7x$ to be an integer, so called because if the conditions fail, then $3x^2-7x$ will not be an integer. "Sufficient conditions" are conditions on $x$ that, if they hold, imply that $3x^2-7x$ is an integer. Your conditions on $x$ are also sufficient conditions, as you can check. When we say "necessary and sufficient conditions," we mean equivalent conditions, or exact conditions.
H: Dual space of $K[X]$ Let $k[X]$ be the space of polynomials over a field $k$ (regarded as a vector space over $k$). What is the dual space of this vector space? My guess is that it is somehow generated by the derivations $d/dX$? AI: It is possible to identify $k[X]^*$ with $k[[X]]$ (power series). Consider the map $\phi:k[X]^* \to k[[X]]$ such that $\phi(f)=\sum f(x^i)x^i$. It is clear that $\phi$ is a monomorphism. It is also surjective because if $w=\sum a_i x^i\in k[[X]]$ then we can define $f$ in the standard basis of $k[X]^*$ so that $f(x^i)=a_i$ and now $\phi(f)=w$. It is more or less the same as the identification of the dual of eventually zero sequences with the space of all sequences. EDIT: The space of derivations of $k[X]$ is isomorphic to $k[X]$ as a vector space. One possible isomorphism is given by $p\in k[X] \mapsto d_p :k[X]\to k[X]$ where $d_p$ is the unique derivation such that $d_p(X)=p$. As $k[X]$ is not isomorphic to $k[[X]]$ as vector spaces the dual cannot be identified with derivations.
H: equivalence between axiom of choice and Zorn's lemma in a particular case. Define $A(x)$ and $Z(x)$ as follows. $A(x)\Leftrightarrow$ for every indexed family $(S_i)_{i\in I}$ of nonempty sets s.t. $\# I =x$, there exists an indexed family $(s_i)_{i\in I}$ s.t. $s_i \in S_i$ for every $i\in I$. $Z(x)\Leftrightarrow$ for every inductive ordered set $X$ s.t. $\# X =x$ and for every $a\in X$, there exists $b\in X$ s.t. $a\leq b$ and for every $c\in X$, $b\leq c$ implies $b=c$. Does $A(x) \Leftrightarrow Z(x)$ hold? Thanks in advance! I edited. AI: Let me assume that you want the proof to be in $\sf ZF$ and not in $\sf ZFC$, where both statements are provable and therefore provably equivalent. The answer is negative. Let me explain why using a very violent image, and then give the reason as to why we can't prove this. Suppose that you are set to kill someone. It doesn't matter if you have killed them with a spoon, or with a chainsaw, or a gun, or a nuclear bomb. True, it's much easier to kill someone with a gun than it is with a spoon; and death by a spoon is likely to cause vastly less collateral damage than death by a low yield nuclear device. But if you have managed to kill that someone, then your mission is done. All the methods are equivalent for that matter. They are dead. If, on the other hand, you failed in your mission, for one reason or another, then the methods are no longer equivalent. Failing to kill someone with a spoon is likely to leave them with a few black and blue marks that will pass after a few days. Failing to kill someone with a gun may leave them paralyzed. Failing to kill someone with a nuclear bomb is likely to leave them with a good radiation dose that (if comic books and '50s horror B-movies are any indication) will turn them into an angry fifty-foot green zombie-monster that is likely to destroy Tokyo and other major cities. So if you managed to kill your victim, it's fine and every method of execution was equally satisfactory. But if you failed to kill your victim, then a lot of different methods are no longer equivalent. Similarly, the axiom of choice has many, many equivalents. Because when the axiom of choice holds in its full power it kills a lot of the pathologies, and we are left with a bunch of well-behaved sets. But when the axiom of choice fails, and we want to know whether or not certain restrictions of the axiom of choice are equivalent, then the answers are usually negative. Let me give you an example: The axiom of choice is equivalent to the trichotomy. That is to say, the axiom of choice holds if and only if every two cardinals are comparable. So we can ask whether or not the statement "Every cardinal is comparable with $\aleph_1$" is equivalent to the statement "Every family of $\aleph_1$ non-empty sets admits a choice function". The answer is no. In fact the answer is so negative that even assuming "Every well-ordered family of non-empty sets admits a choice function" is not sufficient to imply that every cardinal is comparable with $\aleph_1$. Similarly, even if every cardinal is comparable with $\aleph_1$, it does not imply the axiom of countable choice. The reason that nonequivalence happens when we make these restrictions is that we use a varying degree of choice when we prove the full axiom of choice from Zorn's lemma, or from the well-ordering theorem. We use choice functions on very large sets, or partial orders whose cardinality is very large, to end up with a choice function for a relatively small family of sets.
For example, being able to choose from a countable family of countable sets is not enough in order to prove that the countable union of countable sets is countable. The reason is that we need to choose a well-order for each of our sets, but there are $2^{\aleph_0}$ possible well-orderings to choose from, so we can't ensure a choice function for this countable family of uncountable sets. So finally we reach the problem at hand: limiting Zorn's lemma to sets of a certain cardinality and limiting the axiom of choice to families of certain sizes. Consider the case where $x=\omega$, a countably infinite set. One can trace the proof of Zorn's lemma and see that in $\sf ZF$ it holds that if $(X,\leq)$ is a set satisfying Zorn's lemma's hypothesis, and $X$ can be well-ordered, then there is a maximal element in $X$. On the other hand, one cannot prove $\sf AC_\omega$, or as you denote it $A(\omega)$, from $\sf ZF$ itself. Let me finish this already long answer by pointing out that the common restriction of Zorn's lemma is not to the size of the partial order, but rather to the length of its well-ordered chains. In that case $\sf ZL_\kappa$ is equivalent to $\sf DC_\kappa$, where $\sf DC_\kappa$ is the principle of dependent choice for $\kappa$. In that case $\sf DC_\kappa$ is strictly stronger than $\sf AC_\kappa$ for every $\aleph$ number $\kappa$. I suppose that one can talk about $\sf DC_X$ for non-well-orderable $X$, but no one has done that to my knowledge. Mostly because the whole motivation of $\sf DC$ is to define things by recursion, and we generally do recursion over ordinals.
H: Examples of homeomorphisms between the real numbers and the positive real numbers? I'm interested in homeomorphisms between the real numbers, $\mathbb{R}$, and the positive real numbers, $(0,\infty)$--where both spaces have the topology induced by the metric $d(x,y)=|x-y|$. Here is a couple of easy examples, $f:(0,\infty)\rightarrow\mathbb{R}$, I could think of $$f(x)=\ln(x),\quad\quad\quad f(x)=\frac{x^{\alpha}-\gamma }{x^\beta},$$ where $\alpha,\beta,\gamma>0$ and $\alpha>\beta$. What other homeomorphisms are there? EDIT: The motivation for the above question is to search for initial value problems $\dot{x}=f(x),\quad\quad x_0\in\mathbb{R}^n_{>0}$ whose solutions can be mapped `nicely' to those of a linear system of ODEs on $\mathbb{R}^n$ $\dot{z}=Az,\quad\quad z_0\in\mathbb{R}^n.$ That way the analysis of the former, can be done by simply studying the later (which, generally will be much easier). AI: [Posted before motivation was added. Still might be useful.] To find all homeomorphism between $X$ and $Y$, find one, $f:X\to Y$, and then find all self-homeomorphisms $Y\to Y$. Then every homeomorphism $X\to Y$ can be written as $h\circ f$ for some homeomorphism $h:Y\to Y$. A homeomorphism $\mathbb R\to \mathbb R$ is a strictly monotonic continuous function that is unbounded above and below. For diffeomorphisms, you can do the same thing.
H: measurability w.r.t. Borel on extended real line Following Schilling I have shown for measurable functions $$u, v \in m \mathcal{A}/ \mathcal{\hat{B}}$$ that sums, differences, products and maxima/minima are again measurable whenever they are defined. ( here $\mathcal{\hat{B}}$ is the Borel sigma algebra on the extended real line ) There is a corollary that states without proof that the measurable functions $u$ and $v$ give $$\{u<v\} \in \mathcal{A}.$$ This is obvious if $u-v : (X,\mathcal{A}) \to (\mathbb{\hat{R}},\mathcal{\hat{B}} )$ is defined. However if there is some $x$ in the domain such that both functions take positive infinity (or negative infinity) there then we cannot argue via $\{u-v < 0\} \in \mathcal{A}$. Since there is no reference in the corollary regarding the question whether $u - v$ is defined I must be missing something obvious ! Can somebody point me in the right direction ? AI: Express $\{ u < v \}$ via the measurable functions $u 1_{\{ u \neq +\infty \}}$ and $v 1_{\{ v \neq - \infty \}}$.
H: Find open sets. Consider the set $X = \{1,2\} \times \mathbb{Z}_+$ in the dictionary order. Then this will be an ordered set with a smallest element. Denote $a_n = (1,n)$ and $b_n = (2,n)$. Then the elements of $X$ will be $a_1, a_2, \dots, b_1, b_2, \dots$. Singleton sets are open except $\{b_1\}$. This is not clear to me. Please help. AI: The (sub)basic open sets of the order topology (which I assume is what interests us here) are of the form $(-\infty,a)=\{x\mid x<a\}$ and $(a,\infty)=\{x\mid x>a\}$. The singleton $\{a_k\}$ is open because it is $(-\infty,a_{k+1})\cap (a_{k-1},\infty)$ for $k>1$ and $(-\infty,a_2)$ for $k=1$. Similarly, $\{b_k\}=(-\infty,b_{k+1})\cap(b_{k-1},\infty)$ if $k>1$. However, this does not work for $b_1$, neither with the exact trick as above nor any other way: If we progress from our subbasis to a basis of the topology, we have to consider finite intersections. Those are (as far as neighbourhoods of $b_1$ are concerned) of the form $(a_k,b_l)$ with $l>1$, or at least half-infinite. Since none of these is the singleton $\{b_1\}$, a union of such sets cannot be that singleton either.
H: Solution to linear equations as parameterized matrices. I want to solve the following matrix equation: $$ \begin{pmatrix} a & b \\ c & d \end{pmatrix}\begin{pmatrix} 3 & 2 \\ 4 & 2 \end{pmatrix}=\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}\begin{pmatrix} a & b \\ c & d \end{pmatrix} $$ and I thus get four equations: $$ \\2a +4b -2c = 0 \\2a + b -2d= 0 \\-3a -c +4d = 0 \\-3b +2c -2d = 0 $$ which reduce to: $$ \\a+2b-c=0 \\3b-2c+2d=0 $$ Now I want to parameterize this with parameters $s$ and $t$, and intuition tells me I should be able to express the solution as: $$ T=\begin{pmatrix} a & b \\ c & d \end{pmatrix}=s\begin{pmatrix} x & x \\ x & x \end{pmatrix}+t\begin{pmatrix} x & x \\ x & x \end{pmatrix} $$ (where $x_i$ is some natural number) This should be similar to parameterizing with vectors, but I haven't been able to solve this. So what's the trick here? Thanks! /Alexander AI: Choose two variables to represent $s$ and $t$. I'll use $a=s$ and $b=t$. $$\begin{align}c&=s+2t\\ \\ 2d & = 2c-3t \\ 2d & = 2(s+2t)-3t \\ 2d & = 2s + t \\ d &= s + \frac{1}{2}t \end{align} $$ $$\begin{bmatrix} a & b \\ c & d \end{bmatrix}=\begin{bmatrix} s & t \\ (s+2t) & (s+\frac{1}{2}t) \end{bmatrix}=\begin{bmatrix} s & 0 \\ s & s \end{bmatrix}+\begin{bmatrix} 0 & t \\ 2t & \frac{1}{2}t \end{bmatrix}$$
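One can confirm with sympy that the two basis matrices really span the solution space of $TM = NT$ (a sketch; the matrix names $M$, $N$ are mine):

```python
import sympy as sp

M = sp.Matrix([[3, 2], [4, 2]])
N = sp.Matrix([[1, 2], [3, 4]])
s, t = sp.symbols('s t')

T = s * sp.Matrix([[1, 0], [1, 1]]) + t * sp.Matrix([[0, 1], [2, sp.Rational(1, 2)]])
assert sp.expand(T * M - N * T) == sp.zeros(2, 2)  # the family solves the equation

# Independently: the solution space of the four linear equations is 2-dimensional.
a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[a, b], [c, d]])
print(sp.linsolve(list(A * M - N * A), [a, b, c, d]))  # a 2-parameter family
```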
H: Question on integral closure in $\mathbb{Q}[\alpha]$ Let $\alpha$ be a root of $f(x) = x^{3} -2x +6$, $ \ \mathbb{K} = \mathbb{Q}[\alpha]$. Prove that $ O _{\mathbb{K}} = \mathbb{Z[\alpha]}$. What I've done: $f$ is irreducible, so $disc\{1,\alpha, \alpha^{2}\} = disc(f(x)) = - Res(f(x), f'(x)) = -940$. But $-940 = -2^{2} \cdot 5 \cdot 47$, so it seems we can't apply Stickelberger theorem. AI: If you can prove that $2$ is ramified, then it must divide the discriminant, and you're done. To see that $2$ ramifies, it suffices to look $2$-adically. The Newton polygon has a single segment, with slope $1/3$, so the $2$-adic extension is purely ramified. (This is just a slightly souped-up version of Eisenstein's criterion...) Edit: in effect recapitulating some aspects of what could be formalized as the Newton polygon argument, we can give a direct argument (with hindsight!?!) In the cubic extension generated by $\alpha$, for an extension of the $2$-adic valuation on $\mathbb Q$, what could the ord of $\alpha$ be? From the ultrametric inequality and the equation satisfied by $\alpha$, the ord of $\alpha$ is positive, but less than $1$. Thus, $2$ is ramified.
H: Divisibility of the difference of powers Consider the following theorem: For any $a, b \in \mathbb{Z}^+$, there exist $m, n \in \mathbb{Z}$ such that $m > n$ and $a\ |\ b^m - b^n$. What's the best way to prove it? I have an idea (and I know it's true because of that idea), but I don't know whether it is rigorous enough to constitute a proof. AI: Consider the remainders of $b^0$, $b^1$, $b^2$, ... when divided by $a$. Since this is an infinite sequence with finitely many possible values, we must have $n$ and $m$ such that the remainder of the division of $b^n$ by $a$ is the same as that of $b^m$. This is equivalent to $a|b^m-b^n$. In fact it is enough to consider $0\leq n,m\leq a$.
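The pigeonhole proof is constructive enough to turn directly into code; a small sketch (names are mine):

```python
def power_collision(a, b):
    """Find m > n with a | b^m - b^n, following the pigeonhole argument."""
    seen = {}                 # remainder -> first exponent attaining it
    r, k = 1 % a, 0           # invariant: r = b^k mod a
    while r not in seen:
        seen[r] = k
        r, k = (r * b) % a, k + 1
    return k, seen[r]         # m, n with b^m = b^n (mod a); terminates within a+1 steps

m, n = power_collision(12, 10)
assert m > n and (10**m - 10**n) % 12 == 0
print(m, n)  # e.g. 3 2, since 1000 - 100 = 900 = 75 * 12
```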
H: If P=NP, then NP = coNP. Why is this so? I read that if we assume that P = NP, then NP = coNP. I am unable to understand why this is so. AI: It is easy to show that $P=coP$ (think about it). Now, if $P=NP$ then $coP=coNP$, so that $NP=P=coP=coNP$.
H: Expressing pushforward of a flow in integral form Let $\phi(t,x)$ be a flow of a vector field $V$ on some compact domain $\tilde{U} = U\times I \subset \mathbb{R}^n \times \mathbb{R}$. Let $X$ be a vector field. If one wants to write $((\phi(t,x))_{*}X)(\phi(t,x)(q)) = (\phi(t,x))_*(X(q))$ in integral form, is there anything wrong in writing $(\phi(t,x))_*(X(q)) = X(q) + \int_{0}^t DV_{\phi(x,s)}\circ D\phi(x,s)X(q) ds$ (this can be obtained either by differentiating $(\phi(t,x))_{*}(X(q))$ directly or by finding an ODE satisfied by the flow $((\phi(t,x))_{*}X)(\phi(t,x)(q))$ using chronological calculus) so in particular, $|(\phi(t,x))_*(X(q)) - X(q)| \sim t\sup|DV|\sup|D\phi||X|$ I am asking this because usually people don't express the pushforward in integral form but as a series $(\phi(t,x))_* = Id + t[V,.] + t^2[V,[V,.]]+...$ (which converges on a bounded subset of functions). Then continuing this analysis one can derive results where (see Agrachev - Control Theory from Geometric Viewpoint Chapter 2) $|((\phi(t,x))_*-I)X| \sim e^{|DV|}|DX|$ The appearance of $|DX|$ in one analysis and $|X|$ in the other confuses me. Is there a problem in the first part? Thanks AI: Based on the comment discussion, what you call $\phi_*(X(q))$ I usually think of as $\mathrm{d}\phi\cdot X(q) = X(\phi)(q)$. That is to say, writing $\phi$ as a $\mathbb{R}^n$ valued function, what you are interested in is the (spatial) directional derivative of $\phi(t,x)$ in the direction of $X$ evaluated at the point $q$. In coordinates where $X = X^j \partial_j$ it is just $$ (X^j \partial_j \phi^i)(t,q) $$ The reason your formula doesn't agree with those in the textbooks is because your formula is looking at something different from what the textbooks are looking at. If you are estimating something entirely different from what the textbooks are estimating, why should you expect that you get the same result? (In your case you are trying to compare two vectors, one based at the point $q$ and the other at $\phi(t,q)$. In the general geometric setting this doesn't make too much sense, since the two vectors live in two distinct vector spaces and cannot be compared. In your Euclidean setting you can work in canonical coordinates and ignore the coordinate ambiguity.) In any case, what you need to estimate is the difference $$ \partial_j\phi^i(t,q) - \delta_j^i $$ Using that partial derivatives commute, and that $\delta_j^i = \partial_j\phi^i(0,q)$, you have that $$ \partial_j\phi^i(t,q) - \delta_j^i = \int_0^t \partial_t\partial_j\phi^i(s,q) \mathrm{d}s = \int_0^t \partial_j \left(V^i(s,\phi(s,q))\right) \mathrm{d}s $$ which is precisely what you derived. Using the chain rule we have $$ = \int_0^t \partial_kV^i(s,\phi(s,q)) (\partial_j\phi^k(s,q) - \delta_j^k) + \partial_jV^i(s,\phi(s,q)) \mathrm{d}s $$ From this we can apply Gronwall and get $$ |\partial_j\phi^i - \delta_j^i| \leq t \sup |DV| \exp (t \sup |DV|) $$
H: Question about Independent events If we know that $A,B$ are independent events, how can we prove that $\bar{A},\bar{B}$ are independent events too? We should use the definition: $\Pr(A\cap B)=\Pr(A)\cdot \Pr(B)$ Thank you! AI: $$\begin{align}\operatorname{Pr}(\bar A\cap\bar B)&=1-\operatorname{Pr}(A)-\operatorname{Pr}( B)+\operatorname{Pr}(A\cap B)\\ &=1-\operatorname{Pr}(A)-\operatorname{Pr}( B)+\operatorname{Pr}(A)\operatorname{Pr}(B)\\ &=(1-\operatorname{Pr}(A))(1-\operatorname{Pr}(B))\\ &=\operatorname{Pr}(\bar A)\operatorname{Pr}(\bar B).\end{align}$$
H: Confused between multiple representations of Fourier Series' formula I have never used the formula for Fourier Series with the representation that the instructor of the above video is using, which involves $k$ and $\omega$. Instead, I use $n$ and $\pi$. Now, suppose that I want to write the formula for the complex form of the Fourier Series with the notation I am used to; it would be: $$ f(t) = a_0 + \sum_{n=1}^\infty e^{in\pi t}(\frac{a_n-ib_n}{2}) + e^{-in\pi t}(\frac{a_n + ib_n}{2}) $$ Am I right? AI: The $w_0$ in the slides corresponds to your $\pi$. Usually, $w_0$ represents a fundamental (zeroth order) frequency, and $n w_0$ are then frequencies of its higher-order harmonics. Here, $w_0 = 2\pi f = 2 \pi/T$ so your expression is already expressed as multiples of $T/2$, which is the Nyquist minimum sampling period required for perfect signal reconstruction. Your form is more apt to digital (sampled) systems, while the form in the slides is more general and for analog systems. The $i$ in the denominator did not go anywhere; if you multiply $b_k$ by $i$ as was done, it cancels.
H: Lebesgue density for other probability measures on $[0,1]$ Does the Lebesgue density theorem hold for arbitrary (Borel) probability measures on $[0,1]$? Following Downey & Hirschfeldt's proof leads me to believe that the answer is "yes". (Recall every probability measure on $[0,1]$ is regular, which is a key step in the proof.) But I feel that if this is true, it would be easy to find it stated somewhere, though I've not been able to. All proofs, statements, mentions, etc. that I've seen regard only the Lebesgue measure. (I've looked in Royden's, Folland's, and Billingsley's books in addition to Downey & Hirschfeldt's.) If it does hold, is there a good reason why it's received so little attention (if it indeed has)? AI: If it seems hard to find, perhaps that is because the proof uses technical covering arguments that authors don't wish to get into. Nevertheless proofs are written down in several places. See e.g. chapter 2 in Geometry of Sets and Measures in Euclidean Spaces by Mattila.
H: Is the unit disk in $\Bbb R^2$ a subspace? This is the original Spanish version of the exercise: My understanding is that if we make a circle which has a center at $[0,0]$ and a radius of 1 and we take all the points of that circle.... is this set of points a vector space? How do they define addition of 2 members of a vector space? As a scalar product? If so, then the set is not a vector space, since it is possible to take the scalar product of 2 such vectors so that the product lands outside the circle, so the set can't be a vector space... I assume they meant the points of the circle are represented by vectors in the vector space, right? AI: The operations would be those inherited from $\Bbb R^2$, that is, $(a,b)+(c,d)=(a+c,b+d)$ and $\lambda(a,b)=(\lambda a,\lambda b)$. That's what it means to be a subspace, that it is a nonempty set that is a vector space with operations "fitting into/matching" the containing space. "Multiplication" is a word usually reserved for operations other than addition. In vector spaces, the binary operation involved is usually referred to as "addition." Hint: what happens to a vector in $\Bbb R^2$ when you multiply it by a large positive scalar? Concretely: take a look at $5(1/2,1/2)$. Intuitively, one dimensional subspaces of $\Bbb R^2$ are lines. If it has more than one dimension, it would have to fill up $\Bbb R^2$ completely!
H: Prove that $(n+1)(n+2)(n+3)$ is $O(n^3)$. (Big-o notation) Will someone help me prove that $(n+1)(n+2)(n+3)$ is $O(n^3)$? Thank you. AI: Hint: For all positive integers $n$, your expression is $\le (n+n)(n+2n)(n+3n)$. Remark: A fancier way of doing the same thing is to show that $$\frac{(n+1)(n+2)(n+3)}{n^3}$$ is bounded above, that is, that there is a constant $C$ such that $\frac{(n+1)(n+2)(n+3)}{n^3}<C$. With some algebra, we find that $$\frac{(n+1)(n+2)(n+3)}{n^3}=\left(1+\frac{1}{n}\right)\left(1+\frac{2}{n}\right) \left(1+\frac{3}{n}\right).$$ Each term on the right is less than $5$ (we are giving away a lot), so we can take $C=5^3$. The reason for the fancier approach is that to show that $f(n)=O(g(n))$ it is often useful to concentrate on the ratio $\dfrac{f(n)}{g(n)}$
H: geometric interpretation of a height inequality of prime ideals Theorem 15.1 in Matsumura's Commutative Ring Theory states that if $f:A\rightarrow B$ is a homomorphism of Noetherian rings, $P \in \operatorname{Spec}B, P \cap A=p$, then $\operatorname{ht}(P) \le \operatorname{ht}(p) + \dim B_P/pB_P$. He proceeds to say that we can replace $A$ by $A_p$ and $B$ by $B_P$, and then the inequality becomes $\dim B \le \dim A + \dim B/mB$, where $A,B$ are now local rings and $m$ is the maximal ideal of $A$. Indeed, I can see why this is true. He then states: "rewriting the original inequality in this form makes the geometrical content clear". My question is: what is the geometrical content? AI: He is saying that if $P$ is a point on $Y = \operatorname{Spec}(B)$ and $p\in X = \operatorname{Spec}(A)$ is its image under the morphism $^{a}\varphi:Y\to X,$ then the inequality implies that the dimension of $Y$ at $P$ is less than or equal to the dimension of $X$ at $p$ plus the dimension of the fibre of $^a\varphi$ over $p,$ with equality in the case of a (locally) flat morphism.
H: Isomorphisms of $\mathbb P^1$ Prove that every isomorphism of $\mathbb P^1$ (over an algebraically closed field $\mathbb K$) is of the form $$ \phi(x_0: x_1) = (ax_0+bx_1 : cx_0 + dx_1) $$ where $\begin{pmatrix} a & b \\c & d\end{pmatrix} \in GL(2, \mathbb K).$ There are some hints; I show you what I've done. 1. Show that the action of $PGL(2, \mathbb K)$ on $\mathbb P^1$ is transitive. Well, I think this is quite intuitive, but I can't find a rigorous proof. Let us pick two distinct points $x=(x_0:x_1)$ and $y=(y_0 : y_1)$. I want to find an invertible matrix $A$ s.t. $Ax = y$. It is easy to see that this system (the unknowns are the entries of $A$) always has solutions (and maybe one can find a solution s.t. $ad-bc \ne 0$). My question is: is there an easier way to see that this action is transitive? Is it enough to say "It's trivially true in the affine case so it must be true in the projective space as well"? 2. Using the fact that every isomorphism of $\mathbb A^1$ is of the form $t \mapsto at+b$ (for some $a \ne 0$), prove the claim for morphisms s.t. $\phi(0:1)=(0:1)$. Suppose $\phi$ is a morphism s.t. $\phi(0:1)=(0:1)$. If we restrict to $U_0$, we obtain a morphism $\phi\vert_{U_0}$ of $U_0 \simeq \mathbb A^1$, hence it must be of the form $$ \phi(1:x) = (1:ax+b) $$ i.e. $$ \phi(x_0:x_1) = (x_0:ax_0+bx_1) $$ Is this correct? Why should I assume that $(0:1) \mapsto (0:1)$? 3. Conclude. I don't see how to conclude at this point; I'm puzzled. Could you please help me? Thanks.
AI: It is enough to show that $(0,1)$ can be moved to any point. This is because if I want to move $x$ to $y$, I first move $x$ to $(0,1)$ using the inverse of the map that moves $(0,1)$ to $x$, and then move $(0,1)$ to $y$. But this is clear, because any $y$ can be represented by say $(a,b)$, and then a matrix that moves $(0,1)$ to $(a,b)$ is given by $$\left(\begin{array}{cc} c_1 & a \\ c_2 & b \end{array}\right)$$ where $c_1,c_2 \in k$ are chosen so that $c_1b - ac_2 \neq 0$. Now why do such $c_1,c_2$ have to exist? As for the last part of your question, I don't understand what you mean when you say "It's true in the affine case thus true in the projective case". What is true in the affine case? I would be happy to edit my answer. For your second question, I believe your answer is correct. The key point, as you realize, is that if $\phi$ fixes the point at infinity then it restricts to an open subset isomorphic to $\Bbb{A}^1$. The reason why we first prove the case $\phi(0,1) = (0,1)$ is this. Suppose in the general case that you are given any isomorphism $\varphi$ of $\Bbb{P}^1$; i.e. suppose $\varphi(0,1) = (c,d)$. By the first part you know there is an element of $\text{PGL}(2)$ that moves $(c,d)$ to $(0,1)$; say it is given by some $\rho$. The composite $\rho \circ \varphi$ is then still an automorphism of $\Bbb{P}^1$ (because $\rho$ is), and so immediately we know $$\rho \circ \varphi = \phi$$ for some $\phi$ of the form $\phi(x_0,x_1) = (x_0,ax_0 + bx_1)$ by (2). Then from here we get $\varphi = \rho^{-1} \circ \phi$ and so..... (conclude since this is a homework problem).
H: Some discrete math questions, Big-O, Big-Omega, Asymptotics, etc. I don't understand how to prove: $(n-1)(n-2)(n-3)$ is $\Omega(n^3)$. Also, what am I supposed to do here? For each of the following predicates, determine, if possible, the smallest positive integer $b$ where $n \ge b$ implies $P(n)$ appears to be true. 1) $\log_2 n < n^{1/2}$ 2) $n^{1/2} < n$ 3) $F_n < 2^n$ 4) $n! < n^n$ 5) $2^n < n!$ 6) $n < n\log_2 n$
AI: To prove that $f(x)$ is $\mathcal O (g(x))$ as $x\to \infty$, you need to exhibit a positive constant $c$ and an $x_0$ such that $$\forall x>x_0\quad |f(x)|\le c\,|g(x)|.$$ For Knuth's definition of $\Omega$ you need the reverse bound: a positive constant $c'$ with $|f(x)|\ge c'\,|g(x)|$ for all $x>x_0$. (Having both bounds at once is the two-sided statement $f(x)=\Theta(g(x))$.) In the case of your polynomial $(n-1)(n-2)(n-3)$, it's evident that for $n\ge 3$ $$(n-1)(n-2)(n-3)\le n^3.$$ On the other hand, if $n\ge 4$, $$(n-1)(n-2)(n-3)=n^3 (1-1/n)(1-2/n)(1-3/n)\ge n^3 (1-1/4)(1-2/4)(1-3/4)=\frac{3}{32}n^3.$$ The first bound shows that $(n-1)(n-2)(n-3)$ is $\mathcal O (n^3)$ as $n\to \infty$; the second shows that it is $\Omega(n^3)$ in Knuth's sense (equivalently, $n^3$ is $\mathcal O((n-1)(n-2)(n-3))$). If you want to use Hardy's definition of $\Omega$ instead, then you need to prove that $$\limsup_{n\to\infty} \frac{ (n-1)(n-2)(n-3) }{n^3}>0,$$ which is true because $\lim _{n\to\infty} \frac{ (n-1)(n-2)(n-3) }{n^3}=1$, so by this definition you also have the $\Omega$ statement. I suggest you take a look at the Wikipedia article on Big O notation. As to your other questions, I think it's better to either make another post (one post, one topic), or at least provide us with your own results on the subject.
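A quick numerical sanity check of the two bounds and the limiting ratio above (a Python sketch, not part of the proof):

    # The ratio (n-1)(n-2)(n-3)/n^3 should lie in [3/32, 1] for n >= 4
    # and tend to 1, as the inequalities above predict.
    for n in [4, 10, 100, 1000, 10**6]:
        r = (n - 1) * (n - 2) * (n - 3) / n**3
        print(n, r)
    # Prints 0.09375 (= 3/32) at n = 4, then values increasing toward 1.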
H: what does $|x-2| < 1$ mean? I am studying some inequality properties of absolute values and I bumped into expressions like $|x-2| < 1$ whose meaning I just can't get. Let's say I have the expression $$ |x|<1.$$ This means that $x$ must be somewhere less than $1$ and greater than $-1$, which means that $$-1 < x < 1.$$ So basically $|x|<1$ and $-1 < x < 1$ are the same thing. $$|x|<1 \iff -1 < x < 1 \iff\text{"less than $1$ and greater than $-1$", i.e. between $-1$ and $1$}$$ Now let's say I have $$ |x-2| < 1.$$ Does this mean that the result of the expression $|x-2|$ must be less than $1$? What does that mean for $x$? Is it that $x$ must be a value such that, when we subtract $2$, the result stays between $-1$ and $1$? If $x =5$ the statement fails because $3 <1$ is false. So it has to determine a set of $x$'s that satisfy this inequality, right? If $|x| = |-x|$, what can this mean for $|x-2| = |-x-2|$ or $|x+2|$ or $|-x+2|$? Thank you
AI: The geometric interpretation, in $\Bbb R$, of $|x-a|<b$ is "$x$ is at a distance smaller than $b$ from $a$". In your particular example, $|x-2|<1$ means that $x$ is at a distance less than $1$ from $2$ (the distance never reaches $1$). To interpret $|x-2|=|-x-2|$, I find it useful to first note that $|-x-2|=|x-(-2)|$ (why?). The equality $|x-2|=|x-(-2)|$ says that $x$ is equidistant from $2$ and $-2$. More generally, $|x-a|=|x-b|$ says that $x$ is at the same distance from $a$ and from $b$. To summarize, read $|x-a|$ as the distance between $x$ and $a$.
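Unpacking the example completely (this is just the distance reading written out as an interval): $$|x-2|<1\iff -1<x-2<1\iff 1<x<3,$$ so the solution set is the open interval $(1,3)$, the set of points at distance less than $1$ from $2$. In particular $x=5$ fails, since $|5-2|=3\not<1$.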
H: Why is it that interior points exist only inside intervals on $\mathbb{R}$? I'm reading a book on real analysis that has a chapter on open sets, closed sets, limit points and compact sets (for the sake of generality, according to the author). If a set $X$ has some interior point, it must contain at least one open interval, hence it is infinite. Every open set is an uncountable set. But it seems that in order to have interior points, the set must contain an open interval of $\mathbb{R}$; the book does not mention the reason for that. Why is it impossible to have interior points in a subset of $\mathbb{Q}$, for example? There's a definition of interior points: $x \in X$ is an interior point of the set $X$ iff there is an $\epsilon>0$ such that $(x-\epsilon,x+\epsilon)\subset X$. If $x\in (a,b)\subset X$, let $\epsilon$ be the ${\bf smallest}$ of the positive numbers $x-a$ and $b-x$. Then $(x-\epsilon, x+\epsilon)\subset (a,b)$, hence $(x-\epsilon,x+\epsilon)\subset X$. I am thinking that the bold text has something to do with it: it's impossible to know if the smallest positive number $\epsilon$ is rational or real. Is that the reason for having interior points only on intervals inside $\mathbb{R}$?
AI: The reason that $\Bbb{Q}$ does not have any interior points is this. Take any rational number $x$, and suppose we can choose $(a,b)$ about $x$ so that $x \in (a,b) \subseteq \Bbb{Q}$. Then the cardinality of $\Bbb{Q}$ must be greater than or equal to the cardinality of $(a,b)$. But $(a,b)$ is uncountable, being in bijection with $\Bbb{R}$, contradicting $\Bbb{Q}$ being countable.
H: Gauss-Kronrod quadrature rule Given the abscissae and weights for the 7-point Gauss rule with a 15-point Kronrod rule (Wikipedia): can anyone provide me a working example of how to numerically integrate the function below? $$\int_0^1 x^{-1/2}\log(x) \,\textrm{d}x = -4 $$ Given the abscissae and weights for a 6-point Gauss rule, I know how I can find the integral, but I can't find any worked example of using Gauss-Kronrod. I find it relatively easy to understand how Gauss-Kronrod works if I have a working example. It would be great if someone could suggest some good literature on the Gauss-Kronrod method, not about how to calculate abscissae and weights (there are so many papers on that, which I have already seen) but about how it works. The Gauss quadrature rule looks like this: $$ \int_{-1}^{1} f(x)\, \textrm{d}x = \sum_{i=0}^n c_i f(x_i)$$ To convert the limits to what is required by this rule: $$ \int_{a}^{b} f(x)\, \textrm{d}x = \frac{b-a}{2}\int_{-1}^{1} f\left(x \cdot \frac{b-a}{2} + \frac{b+a}{2}\right) \textrm{d}x $$ Effectively this transformation applied to the function above reads: $$\int_a^b x^{-1/2}\log(x)\, \textrm{d}x= \frac{b-a}{2}\int_{-1}^1 \left(x \cdot \frac{b-a}{2} + \frac{b+a}{2}\right)^{-1/2} \cdot \log\left(x \cdot \frac{b-a}{2} + \frac{b+a}{2}\right) \textrm{d}x $$ Thus we can replace the integral with the summation, plug in the weights and abscissae, and we are done. Can someone kindly provide me the analogous transformation for Gauss-Kronrod? I believe we still need the weights from the Gauss quadrature rule, plus the Kronrod weights and abscissae. But what does the equation look like? I don't know.
AI: You have quoted the right transformation, since the Gauss-Kronrod rule is also for the interval $[-1,1]$. To shift the integral to $[-1,1]$ we make the substitution $u=2x-1$. Then $dx=\frac{1}{2}\,du$, $x^{-1/2}=\left(\frac{u+1}{2}\right)^{-1/2}$, and $\log x=\log\left(\frac{u+1}{2}\right)$. In this case there would be good reason to perform some additional algebraic manipulation, but there is no need to. Our integral is equal to $$\int_{u=-1}^1 \frac{1}{2}\left(\frac{u+1}{2}\right)^{-1/2}\log\left(\frac{u+1}{2}\right)\,du.$$ Call the function we are integrating $f(u)$. Use published Kronrod nodes $u_i$ and weights $w_i$ and find $\sum w_if(u_i)$. Remark: Because of the singularity of our original function at $0$, one would expect very poor numerical performance. Even in a much better behaved integral like $\int_0^1 \sqrt{x}\cos x\,dx$, the fact that the derivative blows up near $0$ means that we need to use special techniques. Your function's behaviour near $0$ is much worse than that.
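To make the recipe concrete, here is a minimal Python sketch of the G7-K15 pair applied to the transformed integrand. The node and weight values below are the standard published 15-point Kronrod constants (as tabulated, e.g., in QUADPACK's dqk15 routine); double-check them against your own table before relying on them. The function name gk15 is just an illustrative helper, not a library call. The Gauss estimate reuses a subset of the Kronrod nodes, and $|K_{15}-G_7|$ serves as the error estimate.

    import numpy as np

    # Positive half of the 15-point Kronrod nodes (descending; last entry is the centre)
    xgk = np.array([0.991455371120813, 0.949107912342759, 0.864864423359769,
                    0.741531185599394, 0.586087235467691, 0.405845151377397,
                    0.207784955007898, 0.0])
    # Corresponding Kronrod weights
    wgk = np.array([0.022935322010529, 0.063092092629979, 0.104790010322250,
                    0.140653259715525, 0.169004726639267, 0.190350578064785,
                    0.204432940075298, 0.209482141084728])
    # 7-point Gauss weights, for the nodes xgk[1], xgk[3], xgk[5] and the centre
    wg = np.array([0.129484966168870, 0.279705391489277,
                   0.381830050505119, 0.417959183673469])

    def gk15(f, a, b):
        """Return (Kronrod-15 estimate, Gauss-7 estimate) of the integral of f on [a, b]."""
        c, h = 0.5 * (a + b), 0.5 * (b - a)
        fp = f(c + h * xgk)          # values at the nonnegative nodes
        fm = f(c - h * xgk)          # values at the mirrored nodes
        k15 = wgk[7] * fp[7] + np.dot(wgk[:7], fp[:7] + fm[:7])
        g7 = wg[3] * fp[7] + np.dot(wg[:3], fp[1:7:2] + fm[1:7:2])
        return h * k15, h * g7

    f = lambda x: np.log(x) / np.sqrt(x)   # integrand; all nodes are interior, so x > 0
    k, g = gk15(f, 0.0, 1.0)
    print(k, g, abs(k - g))   # both far from -4: the x = 0 singularity ruins the accuracy

In practice an adaptive driver would bisect $[a,b]$ wherever $|K_{15}-G_7|$ is large; for this integrand the estimates stay poor until the singular endpoint is isolated, exactly as the remark above predicts.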
H: Find the adjoint of this operator. Given $\mathbb C^2$ with the standard inner product and the operator $T(x,y) = (3x+4y, -4x+3y)$, find $T^{*}$ and prove that $T$ is normal. So, I took the standard basis $B = \{(1,0),(0,1)\}$, which we know is orthonormal with respect to the standard inner product. I wrote $T$ in the basis $B$, then transposed and conjugated it and got $[T^*]_B = \begin{bmatrix} 3 & -4 \\ 4 & 3 \end{bmatrix}$. But how exactly do we know what $T^*$ does on a general vector $(x,y)$ from that? And I am also interested to know if there is another method.
AI: Note that $T^*$ is uniquely determined by the equation $\langle Tz,w\rangle=\langle z,T^*w\rangle$ for $z,w \in \mathbb{C}^2$. Now let $z=(x,y)$ and $w=(a,b)$. Then $$\langle Tz,w\rangle=\langle(3x+4y,-4x+3y),(a,b)\rangle = 3x\bar{a}+4y\bar{a}-4x\bar{b}+3y\bar{b} = \langle(x,y),(3a-4b,4a+3b)\rangle = \langle z,T^*w\rangle.$$ From this equation it can be read off that $T^*(a,b)= (3a-4b,4a+3b)$. Now compute $TT^*$ and $T^*T$ directly: both products equal $25I$, so $TT^*=T^*T$ and $T$ is normal.
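A quick machine check of the computation (a sketch using NumPy, independent of the proof above):

    import numpy as np

    T = np.array([[3, 4], [-4, 3]], dtype=complex)  # matrix of T in the basis B
    Ts = T.conj().T                                  # conjugate transpose = adjoint
    print(Ts)                                        # [[3,-4],[4,3]], matching [T*]_B
    print(T @ Ts)                                    # 25 * identity
    print(np.allclose(T @ Ts, Ts @ T))               # True: T is normal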
H: A simple question regarding $df(E)$ for $f(A)=A^{-1}$ I am trying to follow the document at http://web.mit.edu/people/raj/Acta05rmt.pdf, and I have a simple question: why, for $f(A)=A^{-1}$, do we have $df(E)=-A^{-1}EA^{-1}$ on page 5? (Here $E$ is a small perturbation, and on page 4 the author says that $df$ is just the Jacobian.) I have a method which seems correct: $d(A^{-1})=-A^{-1}\,dA\,A^{-1}$, and then substitute $E$ for $dA$. But my question is why another method would not work: I think the correct way to calculate the Jacobian is to find the derivative of $f(A+E)$ with respect to $E$, which I found somehow does not lead to the correct answer (a quick check is that the derivative of a matrix with respect to a matrix would result in a matrix of squared dimensions)? Thanks. Follow-up: besides Raskolnikov's explanation, there is also an explanation here: http://web.mit.edu/18.325/www/handouts/handout2.pdf
AI: Because of $(AB)^{-1}=B^{-1}A^{-1}$, we have $$f(A+E)=(A+E)^{-1}=(\mathbb{I}+A^{-1}E)^{-1} A^{-1} \; .$$ Using a series development for the first factor, $$(\mathbb{I}+A^{-1}E)^{-1} = \mathbb{I}-A^{-1}E+\ldots$$ we obtain $$f(A+E)=(A+E)^{-1}=A^{-1}-A^{-1}E A^{-1}+\ldots$$ and thus, to first order in $E$, $$df(E)=f(A+E)-f(A)=-A^{-1}E A^{-1} \; .$$
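One can sanity-check the formula numerically: for a well-conditioned $A$ and a small perturbation $E$, the actual change $(A+E)^{-1}-A^{-1}$ should agree with the predicted first-order change $-A^{-1}EA^{-1}$ up to terms of order $\|E\|^2$. A sketch:

    import numpy as np

    rng = np.random.default_rng(0)
    A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # well-conditioned test matrix
    E = 1e-6 * rng.standard_normal((4, 4))             # small perturbation

    Ainv = np.linalg.inv(A)
    exact = np.linalg.inv(A + E) - Ainv                # actual change in the inverse
    linear = -Ainv @ E @ Ainv                          # predicted first-order change
    print(np.linalg.norm(exact - linear))              # ~1e-12, i.e. O(||E||^2)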
H: Prove that $\sin 10^\circ \sin 20^\circ \sin 30^\circ=\sin 10^\circ \sin 10^\circ \sin 100^\circ$? $\sin 10^\circ \sin 20^\circ \sin 30^\circ=\sin 10^\circ \sin 10^\circ \sin 100^\circ$ This is a competition problem which I got from the book "Art of Problem Solving Volume 2". I'm not sure how to solve it because there are just so many possibilities of using most of the trig identities that I don't know which path to take. One way I tried is canceling out the $\sin 10^\circ$ on both sides: $\sin 20^\circ \sin 30^\circ= \sin 10^\circ \sin 100^\circ$ $2 \sin 10^\circ \cos 10^\circ \sin 30^\circ=\sin 10^\circ \sin 100^\circ$ $2 \sin 80^\circ \sin 30^\circ= \sin 100^\circ$ $2 \sin 80^\circ \sin 30^\circ= \sin 30^\circ \cos 70^\circ+\sin 70^\circ \cos 30^\circ$ $2 \sin 80^\circ \sin 30^\circ=\sin 20^\circ \sin 30^\circ+\sin 70^\circ \sin 60^\circ$ $2 \sin 80^\circ \sin 30^\circ=\sin 20^\circ \sin 30^\circ + 2\sin 70^\circ \sin 30^\circ \cos 30^\circ$ $2 \sin 80^\circ \sin 30^\circ=\sin 20^\circ \sin 30^\circ + 2\sin 70^\circ \sin 30^\circ \sin 60^\circ$ $1=\large \frac {\sin 20^\circ}{2 \sin 80^\circ}+ \frac {\sin 70^\circ \sin 60^\circ}{\sin 80^\circ}$ Now I set $\sin^2 \theta=\large \frac {\sin 20^\circ}{2 \sin 80^\circ}$ and $\cos^2 \theta =\large \frac {\sin 70^\circ \sin 60^\circ}{\sin 80^\circ}$ I solved for $\sin \theta$ and $\cos \theta$ and set up a right triangle, which then led me to the equation $\sin 70^\circ \sin 60^\circ=\sin 80^\circ-\frac 12 \sin 20^\circ$ Another approach I tried is, from the beginning, writing $\sin 10^\circ \sin 20^\circ \sin 30^\circ=\sin^2 10^\circ \sin 100^\circ$ and then using the power-reducing formula, but that also got me nowhere. Any help is appreciated. Thanks.
AI: Big hint of the day: Always try to simplify the expression first, before doing anything else. What's the value of $\sin 30^\circ$? You can simplify $\sin 100^\circ$ by noticing that $\sin(90^\circ + \alpha) = \cos \alpha$. Everything should be easy from here. :)
H: Is the function $f:\mathbb{Z}_{8}\rightarrow\mathbb{Z}_2$ where $f([x]_8)=[x]_2$ a group morphism? Let $[x]_n$ denote the equivalence class of $\mathbb{Z}_n$ that contains $x$. I am not certain where to begin with this question: $f:\mathbb{Z}_8\rightarrow \mathbb{Z}_2$, where $f([x]_8)=[x]_2$. I am not sure if the following is true: $$f([x]_8\cdot [y]_8)=[x]_2\cdot [y]_2 = f([x]_8)\cdot f([y]_8).$$ I am not certain what the binary operation is, and as such I do not know if it is a group morphism. I am trying to understand group morphisms more fully and any help would be appreciated. If I am missing something fundamental, please let me know.
AI: The group $\mathbb{Z}_n$ (for $n$ a positive integer) is the group on the set $\{0,1,2,\dots, n-1\}$ with binary operation given by addition modulo $n$. In order to determine whether $f$ is a morphism, you need to show that $$f([x]_8 +_8 [y]_8) = f([x]_8) +_2 f([y]_8)$$ where $+_n$ denotes addition modulo $n$. My suggestion is to divide your proof into cases. Consider what taking an integer modulo $2$ can yield. We are going to explore three cases of numbers in $\mathbb{Z}_8$: two evens, two odds, or one even and one odd. With each case, see if you can prove the homomorphism property.
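Since both groups are finite, the homomorphism property can also be verified exhaustively by machine; here is a short Python check (a complement to the case analysis, not a replacement for the proof):

    # f([x]_8) = [x]_2; check f(x + y mod 8) == f(x) + f(y) mod 2 for all pairs
    f = lambda x: x % 2
    ok = all(f((x + y) % 8) == (f(x) + f(y)) % 2
             for x in range(8) for y in range(8))
    print(ok)  # True: f is a group homomorphism from Z_8 to Z_2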
H: Contour integration of complex numbers confuses me, still. Given $f(z) = (x^2+y)+i(xy)$, we integrate it along the parabola contour $\gamma(t) = t + it^2$. So $f(\gamma(t)) = 2t^2 + it^3$. What was done here is that all the $y$ were replaced with $t^2$ (and the $x$ with $t$). Now, my understanding is this: the terms of $\gamma(t) = t + i(t^2)$ correspond to the terms of $z = x + i(y)$, so we replace every $y$ in $f(z)$ with whatever corresponds to $y$ in $\gamma(t)$. Is my understanding of $f(\gamma(t))$ correct?
AI: If $z = x+iy$, then the imaginary part of $z$ is $y$, and the real part is $x$. When we take the curve $\gamma$, parameterized by $t$, we get a set of points in the complex plane. These points have a real and an imaginary part. Since $\gamma(t) = t+it^2$, and $t$ is real, the imaginary part of any point on $\gamma$ is $t^2$, and the real part is $t$. When you set $z = \gamma(t)$ and plug this into $f(z)$, you get $f(\gamma(t)) = \mathrm{Re}(\gamma(t))^2+\mathrm{Im}(\gamma(t)) + i\left(\mathrm{Re}(\gamma(t))\,\mathrm{Im}(\gamma(t))\right)$, which gets you to the result you've found.
H: Find $\int_{-5}^5 \sqrt{25-x^2}~dx$ I need to evaluate $$\int_{-5}^5 \sqrt{25-x^2}~dx$$ How would I do it? AI: If you are at the earliest stages of integration, you are probably supposed to do this problem without using any integration techniques. The curve $y=\sqrt{25-x^2}$ is the top half of a circle with centre the origin and radius $5$. Our integral $\int_{-5}^5 \sqrt{25-x^2}\,dx$ is the area under the curve $y=\sqrt{25-x^2}$ and above the $x$ axis, from $x=-5$ to $x=5$. That's a half-circle. Thus by a familiar formula the area is $25\pi/2$.
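A numerical cross-check of the geometric answer (a Python sketch; just a midpoint Riemann sum, no integration technique involved):

    import numpy as np

    # Midpoint Riemann sum for the half-disk area, compared with 25*pi/2
    x = np.linspace(-5, 5, 200001)
    mid = 0.5 * (x[:-1] + x[1:])
    approx = np.sum(np.sqrt(25 - mid**2)) * (x[1] - x[0])
    print(approx, 25 * np.pi / 2)   # both approximately 39.2699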
H: Proof that $ \lim_{x \to \infty} x \cdot \log(\frac{x+1}{x+10})$ is $-9$ Given this limit: $$ \lim_{x \to \infty} x \cdot \log\left(\frac{x+1}{x+10}\right) $$ I may use this trick: $$ \frac{x+1}{x+10} = \frac{x+1}{x} \cdot \frac{x}{x+10} $$ So I will have: $$ \lim_{x \to \infty} x \cdot \left(\log\left(\frac{x+1}{x}\right) + \log\left(\frac{x}{x+10}\right)\right) = 1 + \lim_{x \to \infty} x \cdot \log\left(\frac{x}{x+10}\right) $$ But from here I am lost; I still can't make it look like a fundamental limit. How do I solve it?
AI: Putting $h=\frac1x$ and assuming the natural logarithm, $$\lim_{x\to\infty}x\cdot \ln \frac{x+1}{x+10} =\lim_{h\to0}\frac{\ln\frac{1+h}{1+10h}}h.$$ $$\text{As }\ln \frac ab=\ln a-\ln b,\quad \ln\frac{1+h}{1+10h}=\ln(1+h)-\ln(1+10h)$$ $$\implies \lim_{h\to0}\frac{\ln\frac{1+h}{1+10h}}h=\lim_{h\to0}\frac{\ln(1+h)-\ln(1+10h)}h=\lim_{h\to0}\frac{\ln(1+h)}h-10\cdot\lim_{h\to0}\frac{\ln(1+10h)}{10h}.$$ Do you know the value of $$\lim_{x\to0}\frac{\ln(1+x)}x\,?$$
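The limit can be confirmed numerically (a quick sketch; the natural logarithm is assumed, as in the answer):

    import numpy as np

    for x in [1e2, 1e4, 1e6, 1e8]:
        print(x, x * np.log((x + 1) / (x + 10)))   # tends to -9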
H: How to construct a non-diagonalizable matrix with a particular set of eigenvalues Given a set of eigenvalues, how would you go about constructing a matrix with those particular eigenvalues? I know that you can construct a diagonalizable matrix with those eigenvalues by conjugating the diagonal matrix $\Lambda$ by an invertible matrix $B$: $$M=B^{-1}\Lambda{B}$$ But is there any way to construct a non-diagonalizable matrix?
AI: If a matrix is not diagonalisable, it must have repeated eigenvalues. Put the eigenvalues along the diagonal, and put ones above the diagonal where two equal eigenvalues are adjacent: $\left[\begin{array}{ccc}3&1&0\\0&3&0\\0&0&2\end{array}\right]$
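The example above is a matrix in Jordan form: it is defective because the eigenvalue $3$ has algebraic multiplicity $2$ but only a one-dimensional eigenspace. A quick machine check, plus the conjugation trick to disguise the structure (the particular $B$ below is just an arbitrary invertible matrix chosen for illustration):

    import numpy as np

    J = np.array([[3., 1., 0.],
                  [0., 3., 0.],
                  [0., 0., 2.]])
    # Eigenspace for eigenvalue 3 has dimension 3 - rank(J - 3I) = 1 < 2,
    # so J is not diagonalizable.
    print(3 - np.linalg.matrix_rank(J - 3 * np.eye(3)))   # 1

    # Any conjugate B J B^{-1} has the same eigenvalues and is equally
    # non-diagonalizable, but no longer looks triangular.
    B = np.array([[1., 2., 0.],
                  [0., 1., 1.],
                  [1., 0., 1.]])
    M = B @ J @ np.linalg.inv(B)
    print(np.linalg.eigvals(M))   # approximately [3, 3, 2]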
H: Is $\sqrt1+\sqrt2+\dots+\sqrt n$ ever an integer? Related: Can a sum of square roots be an integer? Except for the obvious cases $n=0,1$, are there any values of $n$ such that $\sum_{k=1}^n\sqrt k$ is an integer? How does one even approach such a problem? (This is not homework - just a problem I thought up.) AI: No, it is not an integer. Let $p_1=2<p_2<p_3<\cdots <p_k$ be all the primes $\le n$. It is known that $$K=\mathbb{Q}(\sqrt{p_1},\sqrt{p_2},\ldots,\sqrt{p_k})$$ is a Galois extension of the rationals of degree $2^k$. The Galois group $G$ is an elementary abelian 2-group. An automorphism $\sigma\in G$ is fully determined by a sequence of $k$ signs $s_i\in\{+1,-1\}$, $\sigma(\sqrt{p_i})=s_i\sqrt{p_i}$, $i=1,2,\ldots,k$. See this answer/question for a proof of the dimension of this field extension. There are then several ways of getting the Galois theoretic claims. For example we can view $K$ as a compositum of linearly disjoint quadratic Galois extensions, or we can use the basis given there to verify that all the above maps $\sigma$ are distinct automorphisms. For the sum $S_n=\sum_{\ell=1}^n\sqrt{\ell}\in K$ to be a rational number, it has to be fixed by all the automorphisms in $G$. This is one of the basic ideas of Galois correspondence. But clearly $\sigma(S_n)<S_n$ for all the non-identity automorphisms $\sigma\in G$, so this is not the case.
H: normal subgroup generated by a subgroup Let $G= \langle g_1,g_2 \rangle$, and let $H\leq G$ be given by $H=\langle g_1 g_2 g_{1}^{-1}g_{2}^{-1}\rangle$. What is the normal subgroup of $G$ generated by $H$?
AI: Let $N$ be the normal closure of $H$. Since $[g_1,g_2] \in N$, we have $[g_1N,g_2N]=1N$, so $G/N$ is generated by two commuting elements and is therefore abelian. Hence $G' \leq N$. However, $G'$ is a normal subgroup containing the commutator $[g_1,g_2]$, so $H \leq G'$ and therefore $N \leq G'$. Hence $N=G'$.
H: question involving analyticity of $f=u+iv$ Let $f=u+iv:\mathbb C\to\mathbb C$ be analytic. Is it true that $\dfrac{\partial^2 v}{\partial x^2}+\dfrac{\partial^2 v}{\partial y^2}=0$?
AI: Consider the Cauchy-Riemann equations $$\frac{\partial u}{\partial x}=\frac{\partial v}{\partial y}\quad\text{and}\quad \frac{\partial u}{\partial y}=-\frac{\partial v}{\partial x}.$$ You are asking whether $\dfrac{\partial^2 v}{\partial x^2}+\dfrac{\partial^2 v}{\partial y^2}=0$. First compute $\dfrac{\partial^2 v}{\partial x^2}$: $$\frac{\partial}{\partial x}\left(\frac{\partial v}{\partial x}\right)=\frac{\partial}{\partial x}\left(-\frac{\partial u}{\partial y}\right)=-\frac{\partial^2 u}{\partial x\,\partial y}.$$ Now compute $\dfrac{\partial^2 v}{\partial y^2}$: $$\frac{\partial}{\partial y}\left(\frac{\partial v}{\partial y}\right)=\frac{\partial}{\partial y}\left(\frac{\partial u}{\partial x}\right)=\frac{\partial^2 u}{\partial y\,\partial x}.$$ Adding, we end up with $$\frac{\partial^2 v}{\partial x^2}+\frac{\partial^2 v}{\partial y^2}=\frac{\partial^2 u}{\partial y\,\partial x}-\frac{\partial^2 u}{\partial x\,\partial y}=0,$$ since the mixed partial derivatives are equal (the real and imaginary parts of an analytic function are smooth, so Clairaut's theorem applies). So your statement is correct: $v$ is harmonic.
H: Proving a triangle is a right triangle given vertices, using vector dot product I want to show that this triangle is a right triangle. I know that the dot product of two of the side vectors needs to be $0$. I tried taking dot products but I don't get zero. Claim: Triangle $\bigtriangleup MNP$, with $M(1,-2,3)$, $N(0,0,4)$, $P(4,2,-2)$, is a right triangle. What did I do wrong? I would like to get some hints on how to do it. Thanks!
AI: Hints: What you have listed there are three ${\bf points}$. You need three ${\bf vectors}$. How can you get the three vectors associated with each side of the triangle from these three points? (Sub-hint: how do you get a vector between any two points?) Furthermore, you need to check whether the dot product is zero between any pair of sides. Since the dot product is symmetric in $\mathbb{R}^{3}$, you only have to check this for three pairs of vectors. Using this, you should be able to demonstrate that there is indeed a pair of sides whose dot product is $0$, showing that the triangle is right.
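Following the hints: form the side vectors as differences of the given points and test each pair at a vertex. A short Python check (this just carries out the hint numerically):

    import numpy as np

    M = np.array([1, -2, 3])
    N = np.array([0, 0, 4])
    P = np.array([4, 2, -2])

    MN, MP, NP = N - M, P - M, P - N     # vectors along the three sides
    print(np.dot(MN, MP))                # 0  -> right angle at M
    print(np.dot(-MN, NP))               # angle test at vertex N
    print(np.dot(MP, NP))                # angle test at vertex P

The first dot product vanishes ($(-1,2,1)\cdot(3,4,-5)=-3+8-5=0$), so the right angle is at $M$.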