H: How many permutations of presentations are possible in this conference? I came across the following solved question in a book and am having difficulty understanding the answer given for it. Q: A conference is to have eight presentations over the course of one day, consisting of three long presentations, and five short presentations. If the conference organizer doesn't want consecutive long presentations, and the conference is to start with a short presentation, how many schedules of presentations are possible? A: Let S and L stand for a short and long presentation respectively. The schedule must start with a short presentation, and every long presentation must be followed by at least one short presentation. So the schedule must be SLSLSLS, along with one more short presentation inserted somewhere. There are four ways to do this - SSLSLSLS, SLSSLSLS, SLSLSSLS, and SLSLSLSS. For each of these there are 5! * 3! possible schedules, giving a total of 2880 schedules. My analysis gave 7200 schedules instead as follows. The first S is fixed. This leaves four S's and three L's to be permuted with the no-two-consecutive-L restriction. Or let's generalize it to a S's and b L's. Then the number of compatible permutations, written as N(a, b), can be expressed as the recurrence N(a - 1, b) + N(a - 1, b - 1). Observing the terminal values N(0, 1) = 1, N(1, 0) = 1, N(1, 1) = 2 and N(0, 2) = 0, and substituting a = 4 and b = 3, the number comes to 10. This gives a total of 10 * 5! * 3! = 7200 schedules. What am I missing? AI: The solution in the book is wrong as it ignores cases with $L$ at the end. Using ZettaSuro's notation: $$ S\_S\_S\_S\_S\_$$ Each of the three long presentations can go in one of the 5 blanks: $$5\choose 3 $$ which ultimately gives $$ {5\choose 3 }\cdot 5!\cdot 3! = 7200.$$
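A quick brute-force check of the count above (my own sketch, not part of the original thread; it simply enumerates the S/L patterns allowed by the no-two-consecutive-L rule):

```python
from itertools import permutations

# The first slot is forced to be S, so arrange the remaining four S's and three
# L's with no two L's adjacent, then multiply by 5!*3! for the orderings of the
# individual presentations.
patterns = {p for p in permutations("SSSSLLL") if "LL" not in "".join(p)}
print(len(patterns))            # 10 valid S/L patterns
print(len(patterns) * 120 * 6)  # 10 * 5! * 3! = 7200 schedules
```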
H: Complex linear transformation, show maps center of circle to center of circle If $L$ is a complex linear transformation with $L(z)=Az+B$, $A \neq 0$, $B \in \mathbb{C}$ and $L$ maps the circle $C_1$ onto the circle $C_2$ where $$C_1=\{z \in \mathbb{C}:|z-z_0|=R_1>0\} \text{ and } C_2=\{w \in \mathbb{C}:|w-w_0|=R_2>0\}$$ then $L(z_0)=w_0$. This is part of a problem I had in a complex analysis course a while back. I thought about maybe I could use properties of isometries but it looks like $L$ is only an isometry when $|A|=1$. I've tried to assume that $|L(z_0)-w_0|=\rho>0$ and I was thinking I could try to do this with cases, the first case being $0<\rho<R_2$ and the second case being $\rho \geq R_2$. For the first case, I was trying to come up with some kind of contradiction involving $w_0,f(z_0)$ and some point on the circle $C_2$, by using the triangle inequality. It looks like an arbitrary point on $C_2$ won't work so I was considering using some kind of point $v$ on the circle where if $w_0=u_0+iv_0$ then I would choose one appropriate $v=u_0+iv_1$, something where at least $w_0$ and the point on the circle share a common real or imaginary part. At this point I am just wondering if there is a more direct way to do this or other easier ways? AI: For a circle, $z=z_0+R_1 e^{i \phi}$. Then $$L(z) = A z+B = (A z_0 +B) + A R_1 e^{i \phi}$$ Accordingly, $R_2 = |A| R_1$ and $w_0 = A z_0 + B = L(z_0)$.
H: Finding eigenvalues and eigenfunctions for a BVP Find the eigenvalues and eigenfunctions for $$y'' + \lambda y = 0, y(0) = 0, y'(\pi/2) = 0$$ According to my book we must check 3 cases: $\lambda < 0$, $\lambda = 0$, $\lambda > 0$. I started with $\lambda > 0$, found the general solution, applied the boundary conditions, and found $$\lambda_n = (2n-1)^2 \text{ with } y_n=\sin((2n-1)x)$$ I moved on to $\lambda < 0$ and got $$y = c_1e^{kx}+c_2e^{-kx}$$ and didn't know how to proceed. AI: Let $\lambda = -\mu^2$; then $$y(x) = A e^{\mu x} + B e^{-\mu x}$$ $y(0)=0 \implies A+B=0$. Also, $$y'\left( \frac{\pi}{2}\right) = 0 \implies 2 A \mu \cosh{\left(\mu \frac{\pi}{2}\right)} =0 $$ Since $\mu\neq0$ and $\cosh$ never vanishes, the only way this happens is when $A=0$; that is, there is no nontrivial solution to this BVP for $\lambda < 0$.
H: $\surd (I^e)=(\surd I)^e$ I'm trying to solve this question: I'm having trouble proving that $\surd (I^e)=(\surd I)^e$ in part (ii). I have tried hard to prove the inclusions $\subset$ and $\supset$ without any success; I really need help. Thanks in advance AI: Let $r \in \sqrt{I}$. That means $r^n \in I$ for some $n > 0$. But then $$f(r)^n = f(r^n) \in f(I) = I^e,$$ so $f(r) \in \sqrt{I^e}$, thus $f(\sqrt{I}) = (\sqrt{I})^e \subset \sqrt{I^e}$. Conversely, let $s \in \sqrt{I^e}$ and $r \in f^{-1}(\{s\})$ (since $f$ is surjective, such an $r$ exists). Let $n > 0$ such that $s^n \in I^e$. Then $f(r^n) = f(r)^n = s^n \in I^e$, so $r^n \in f^{-1}(I^e) = I$, hence $r \in \sqrt{I}$ and $s \in f(\sqrt{I}) = (\sqrt{I})^e$, hence $\sqrt{I^e} \subset (\sqrt{I})^e$.
H: Is this formula $\Sigma_1^{ZF}$? This is probably a simple question. Let $\varphi(x)$ be a formula expressing $dom(x)\not\in Ord$. I want to know whether $\varphi$ is a $\Sigma_1^{ZF}$ formula or not, meaning whether there is a $\Sigma_1$ formula $\psi(x)$ so that $ZF$ proves $\forall x (\varphi(x)\leftrightarrow \psi(x))$. ($\varphi$ is just an example I came up with trying to understand why the recursion used to define "$x=L_{\alpha}$" is $\Sigma_1^{ZF}$) I would write $\varphi(x) \leftrightarrow (\exists y\, (y\not\in Ord \land (\forall z \ ( z\in y \leftrightarrow z\in dom(x)))))$, where $z\in dom(x)$ can be replaced by a certain $\Sigma_0$ formula. So does this mean that $\varphi$ is not $\Sigma_1^{ZF}$ or can the $\forall$-quantifier be eliminated? Especially, does that mean that the formula $a=t(b)$ is not in general $\Sigma_1^{ZF}$ (a,b variables) if $t$ is a term so that the formula expressing membership in $t(b)$, $z\in t(b)$, is $\Sigma_0^{ZF}$ ? You can see I am a bit confused.. [edit: This is apparently true : $a=P(b)$ is not $\Sigma_1^{ZF}$ (P denoting powerset), but $z\in P(b)$ is $\Sigma_0$] AI: The formula $y\subseteq\mathrm{dom}(x)$ is $\Delta_0$. This is because it is written as: $$(\forall z\in y)(\exists w\in x)[ w\textrm{ is a pair }\land(\exists u\in w)(z\in u\land(\forall v\in u)(v=z))].$$ Likewise the formula $\mathrm{dom}(x)\subseteq y$ is $\Delta_0$: $$(\forall z\in x)(\exists u\in y)(\{u\}\in z).$$ Hence $y=\mathrm{dom}(x)$ is $\Delta_0$. Now it's easy to see that your formula is $\Sigma_1$: $$(\exists y)(y=\mathrm{dom}(x)\land y\notin On)$$ and $\Pi_1$: $$(\forall y)(y=\mathrm{dom}(x)\to y\notin On)$$ and hence it is $\Delta_1$. Maybe it's simpler but it's late and I'm tired. I may edit this later.
H: Determining if the relation is an equivalence relation I'm needing help in determining if the relation $$R=\left\{(f,g)\mid \exists k,\forall x\in\Bbb Z, \ f(x) = g(k)\right\}$$ where f, g: $\Bbb Z \rightarrow \Bbb Z$ is an equivalence relation. More specifically, I don't seem to understand the part where it says there exists a k for all x. I understand that there must be at least one k, but that's all; I can't seem to make sense of it for relations. Should there be a single k that is used to prove the reflexivity, symmetry and transitivity properties of an equivalence relation? Or will any k suffice, as long as I can find one k value as an example for each of the properties? AI: My method for qualifying a relation as an equivalence relation is to step through the three conditions: reflexivity, symmetry, and transitivity. Clearly $f\,R\,f$. Next suppose $f\,R\,g$. Then $\exists\, k\, \forall x\in \mathbb{Z} : f(x) = g(k)$. Does it follow that $g\,R\,f$? That is, $\exists\, j\, \forall x\in \mathbb{Z} : g(x) = f(\,j)$? My intuition tells me this need not follow. Let's suppose $f(x) = 1$ and $g(x) = 1 + x^2$. We know that $f\,R\,g$, since when $k=0\, \forall x\in \mathbb{Z} : f(x) = g(0) \implies 1 = 1$. However, check out $g\,R\,f$. There does not exist an integer $j$ such that $\forall x\in\mathbb{Z}: g(x) = f(j) = 1$. So unless I am misinterpreting the problem formulation, this is not an equivalence relation.
H: Area of an ellipse? I'm looking for an analytic way to calculate the area of an ellipse described by $${x^2 \over a^2} + {y^2 \over b^2}=c^2$$ I saw it before, but now I've forgotten. I remember we set $x=a \cos t$ and $y=a \sin t$ but I don't remember what we did after that! AI: Certainly, if you are looking for a calculus approach, you can do it by assuming: $$x=ac\cos(t),~~y=bc\sin(t),~~~c\neq 0$$ in which $0\le t\le 2\pi$ (Here, I am assuming @David's comment) and then follow the integral: $$S=\frac{1}2\int_0^{2\pi}(xy'-yx')dt$$ Note that others may use $S=\int_0^{2\pi}xy' dt$ instead.
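For completeness (my own addition, carrying out the integral left unevaluated above): with the stated parametrization, $$S=\frac12\int_0^{2\pi}(xy'-yx')\,dt=\frac12\int_0^{2\pi}abc^2\left(\cos^2t+\sin^2t\right)dt=\pi a b c^2.$$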
H: Question: Expansion of algebra in matrices I have a problem that I would like to check: Expand $(A+B)^3$ where $A$ and $B$ are matrices. Is this right? $$ A^3+A^2B+ABA+AB^2+BA^2+BAB+B^2A+B^3 $$ Thanks. AI: Yes, you've got it. It is probably easiest to start off with $$ (A+B)^2=A(A+B)+B(A+B)=A^2+AB+BA+B^2. $$ From here, we find $$ (A+B)^3=(A+B)(A^2+AB+BA+B^2)=A^3+A^2B+ABA+AB^2+BA^2+BAB+B^2A+B^3, $$ as you claimed.
H: Is $\{\sin x,\cos x\}$ independent? Is $\{\sin x,\cos x\}$ linearly independent in $\mathbb{R}^n$? I thought they were not because I can write $\cos x=\sin (x+\pi/2)$. My professor on the other hand said it was independent and his proof is as follows: If $\{\sin x,\cos x\}$ is independent in $[0,2\pi]$, then it will be independent on all of $\mathbb{R}.$ (He didn't prove this either.) $a\cos x+b \sin x=0,\ \forall x\in[0,2\pi].$ $x=0\implies a\cdot1+b\cdot0=0 \implies a=0$. $x=\pi /2\implies a\cdot 0+b \cdot 1=0 \implies b=0$. So $\{\sin x, \cos x\}$ is independent on $[0,2\pi],$ and thus independent everywhere. $QED$. Is there something I am missing or not understanding...? Why is this set independent, when I can express an element of the set as a linear combination? AI: You proved that there were no non-trivial (ie, with at least one non-zero coefficient) linear combination of $\sin$ and $\cos$ identically zero on $[0,2\pi]$. That by definition means that $\{\sin,\cos\}$ is linearly independent on $[0,2\pi]$. This also implies that they are on $\mathbb{R}$, because in particular any identically zero linear combination on $\mathbb{R}$ is also zero on $[0,2\pi]$ (the converse is also true, actually, by $2\pi$-periodicity). As for your initial comment about $\cos x = \sin(x+\frac\pi 2)$ for all $x$, it is not a linear combination. You are not writing $\cos=\alpha\sin$ for some scalar $\alpha$, you are writing $\cos=\sin\circ f$ for some function $f$.
H: Show that the product of upper triangular matrices is upper triangular I have a question. Prove that the product of an [arbitrary] number of upper triangular matrices of [arbitrary] size with [undetermined] upper triangular entries is upper triangular using induction? Should I use transfinite induction? I don't even know where to start! How do you even prove the statement: "prove that the product of n upper triangular matrices with undetermined upper triangular entries is upper triangular"? I know that the product of n upper triangular matrices with all upper triangular entries = 1 is $$\begin{bmatrix} 1 & 3 & 6 & \cdots & \frac{n(n+1)}{2} \\ 0 & 1 & 3 & \cdots & \frac{(n-1)n}{2} \\ 0 & 0 & 1 & \cdots & \frac{(n-2)(n-1)}{2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \end{bmatrix}$$ with the pattern continuing such that $a_{nn} = \frac{[n-(n-1)][n-(n-2)]}{2}$. Thanks. AI: HINTS: First and foremost: note that it is enough to prove that the product of two upper-triangular matrices is upper-triangular. Why? This is a direct consequence of the associativity of matrix multiplication. For instance, you can write $A_1A_2A_3=(A_1A_2)A_3$; if we know $A_1A_2$ is upper triangular, then this association gives us the product of two upper triangular matrices, etc. You could also use induction for this, using precisely this sort of argument. So, all that remains is to show that the product of two upper triangular matrices is again upper triangular. You can do this by induction too; think about the following: an upper triangular matrix can be broken up as an upper block-diagonal matrix $$ \begin{bmatrix}a_{1,1} & a_{1,2} & \cdots & a_{1,n+1}\\0 & a_{2,2}& \cdots & a_{2,n+1}\\ 0 & 0 & \ddots & \cdots \\0 & 0 & \cdots & a_{n+1,n+1}\end{bmatrix}=\left[\begin{array}{c|c}A_n & \begin{array}{c}a_{1,n+1}\\a_{2,n+1}\\\vdots\\a_{n,n+1}\end{array}\\\hline 0\ 0\ \cdots\ 0\ & a_{n+1,n+1}\end{array}\right], $$ where $A_n$ is the first $n$ rows/columns of your original matrix, and is itself upper triangular. Can you use this to prove the inductive step?
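A quick numerical sanity check of the two-matrix case (my own sketch, not a proof; it assumes NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.triu(rng.integers(-5, 6, (4, 4)))  # two random upper triangular matrices
B = np.triu(rng.integers(-5, 6, (4, 4)))
C = A @ B
print(np.array_equal(C, np.triu(C)))      # True: the product is upper triangular again
```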
H: row echelon vs reduced row echelon form I apologize if this is a very basic question. I understand the difference between the two forms, but I was curious when row echelon form is enough. Where is row echelon form used? Why shouldn't I always go for reduced row echelon form? AI: Plain row echelon form is often enough. For example, it lessens the workload involved in computing the determinant of a square matrix, it lets us confirm or rule out whether a square matrix is invertible by stopping before full reduced row-echelon form, it lets us determine the rank of a matrix without the tedious work sometimes involved in obtaining full reduced row-echelon form, and it lets us determine whether the row or column vectors of a matrix are linearly dependent or independent. Certainly, each form has its uses, and while we can sometimes avoid extensive and tedious computations (which also leave room for introducing simple errors along the way), that's not to say that it is not important to know how and when to obtain reduced row echelon form: but we can "pick-and-choose" to some extent "how far" we need to row-reduce.
H: 2 Questions about the Opposite category 1. If you have a category $\frak{A}$ how are the morphisms of its opposite category $\frak{A}^{op}$ defined? 2. How would you also show that $(\frak{A}^{op})^{op}=\frak{A}$? Thanks! AI: Here is a category. Here is its opposite category. Simple as that. Even though you might want to consider, say, the category of groups, in which a morphism $G\to H$ is actually a function on the underlying sets (satisfying some rules), it is still often very useful to consider categories in which morphisms don't have such concrete interpretations. Excuse my terrible Paint skills.
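In symbols (adding the standard definition that the pictures above were meant to illustrate): $\mathfrak{A}^{op}$ has the same objects as $\mathfrak{A}$, with $$\mathrm{Hom}_{\mathfrak{A}^{op}}(X,Y)=\mathrm{Hom}_{\mathfrak{A}}(Y,X),\qquad g\circ^{op}f=f\circ g,$$ so reversing the arrows twice returns every hom-set and every composite to what it originally was, which is why $(\mathfrak{A}^{op})^{op}=\mathfrak{A}$.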
H: Linear dependence of multivariable functions It is well known that the Wronskian is a great tool for checking the linear dependence of a set of functions of one variable. Is there a similar way of checking linear dependence between two functions of two variables (e.g. $P(x,y),Q(x,y)$)? Thanks. AI: See http://en.wikipedia.org/wiki/Wronskian and in particular the section "Generalized Wronskian".
H: What is a finite partial function from $\mathfrak c$ to $D=\{0,1\}$ While I was reading a paper, I came across a notion called a finite partial function. I searched on Google but could not find its definition. So I post it here as a question: What is a finite partial function from $\mathfrak c$ to $D=\{0,1\}$? Thanks! AI: Posting the comment as an answer: It is a function with range contained in $D$, and domain a finite subset of $\mathfrak c$. In general, a partial function from $A$ to $B$ is a function $f$ with domain some subset of $A$ and range (contained in) $B$. If the domain of $f$ is $A$, one says that $f$ is a total function. I believe it was Adrian Mathias who introduced the notation $f\mathrel{\vdots}A\to B$ to indicate that $f$ is a partial function from $A$ to $B$. In computability theory, one sees the notation $f(x)\downarrow$ to indicate that $x\in\mathrm{dom}(f)$, and $f(x)\uparrow$ to indicate otherwise. The idea is that one thinks of $f$ as a program performing a computation with input $x$. The downarrow indicates that the computation converges, and the uparrow, that it diverges. In set theory, in the context of forcing, it is common to talk of partial functions. Given a poset of partial functions, typically, the generic object $G$ one obtains gives us a total function, obtained by pasting together the partial approximations in $G$.
H: Evaluate the limit $\lim_{n\to\infty} \left(\frac{3n}{3n + 1}\right)^n$ I've been trying to evaluate this limit and can't seem to find a way around. I know how to show that $\lim_{n\to\infty}\left(1 + \frac{1}{n}\right)^n = e$ but I can't find a way to apply it. AI: Let's apply just what you know $$\lim_{n\to+\infty}\left(\frac{3n}{3n+1}\right)^n=\lim_{n\to+\infty}\left(1+\frac{1}{3n}\right)^{-n}=\lim_{n\to+\infty}\left(\left(1+\frac{1}{3n}\right)^{3n}\right)^{-1/3}\\=\left(\lim_{\alpha\to+\infty}\left(1+\frac{1}{\alpha}\right)^{\alpha}\right)^{-1/3}=e^{-1/3}$$
H: The ubiquitous "helper function" $\frac{f(z) - f(a)}{z - a}$ I've been looking at basic complex analysis recently, and have noticed (am imagining?) something which I've never really paid attention to before: The "helper function" $$g(z) = \frac{f(z) - f(a)}{z - a}$$ At first, I didn't give it a second thought really - just used in the definition of the derivative. Then I saw that it was used again in the proof of the Cauchy Integral Formula, then again in the derivation of Taylor and Laurent series. Then finally in the proof of the Schwarz lemma in the form $\displaystyle g(z) = \frac{f(z)}{z}$. So now I'm a little suspicious. I will ask with hesitation...is there anything more going on here? Is it just a coincidence that it seems so useful? Where else can I expect this guy to pop up? AI: It's a shift function on the power series centered at $a$; it takes the 0th term and deletes it, and makes the linear term a constant, and keeps going all the way down the line.
H: When would one use factorials in probability? What should a question say so that you know you must use factorials to solve it? Would the word distinguishable be a keyword? AI: Here, the factorials you mention don't have anything to do with probability - rather, they have to do with combinatorics. This is because in this context, you are talking about uniformly random elements of some set - and so counting up the elements with a specific property is a necessary step of the process. Factorials are important because $n!$ is the number of ways to list - in order - a set of $n$ objects that are distinguishable. Because of this, it also comes up in other arrangements - such as the number of ways to choose $k$ elements from a set of $n$ (in an order or otherwise). Further, many other combinatorial constructions can be carried out starting with these basic ideas of arrangements, and so they involve factorials as well. You are absolutely right that distinguishable is a keyword that screams factorial; however, distinguishability of the objects is often implicit in the problem, rather than explicitly stated. The big thing: look for any situation in which you are arranging objects in some way.
H: How to show this subset is proper? Let $f:R\to S$ a commutative ring homomorphism. I'm trying to prove that $Q^c=f^{-1}(Q)$ is a primary ideal if $Q$ is a primary ideal. Curiously, I'm stuck only in the easiest part, to show that $Q^c$ is proper. Any help is welcome. thanks in advance. AI: I assume that you take your rings to be unital, i.e. that they have multiplicative identities. If that is indeed the case, then recall that a homomorphism of unital rings is required to preserve these multiplicative identities, i.e. $f(1_R)=1_S$. Let $f:R\to S$ be any homomorphism of unital rings, and let $I\subseteq S$ be any ideal. If $f^{-1}(I)=R$, then $1_R\in f^{-1}(I)$, so that $f(1_R)=1_S\in I$, so that $I=S$. Therefore, the preimage of a proper ideal is a proper ideal.
H: Linear Operators satisfying $S^n=0$ but $S^{n-1}\neq 0$ I need help with part (c). I could do part (a) and part (b). AI: Hint: If $S^{n-1} \neq 0$, then there exists $x$ such that $S^{n-1}x \neq 0$. Consider the vectors $\{x, Sx, S^2x, \ldots, S^{n-1} x \}$, and try to draw the connection with the "shift" property from the matrix $A$ (and the operator $T$) from the previous parts.
H: What happens when a probability actually occurs? Sorry if I am mixing up the model with the reality, but when for instance a low probability occurs, what happens with the rest of the probability? Philosophically I find it hard to argue that any other probability than 1 has occurred. For instance, 80 % probability will likely occur. When it occurs, what happened to the rest, 20 %, if all probability did exist as physical matter like in quantum physics where we model matter as a probability space. AI: Law of Conservation of Probability: If $\Pr(A)=0.8$ then $\Pr(A^c)=0.2$. Here $A^c$ denotes the complement of $A$, the event that $A$ doesn't occur.
H: a question on orbit in ergodic theory For the map $T: [0,1]\to [0, 1]$ defined by $Tx=10x\pmod{1}$, how to use the decimal expansion to construct an $x$, such that the orbit of $x$, say $\theta_x=\{T^nx: n\geq 0\}$ is dense in $[0, 1]$ and $T^px\to 0$ as $p\to \infty$, where $p$ runs over all prime numbers? AI: Some hints: You can think of $T$ as a shift map, where you shift up the digits of a decimal expansion. The set of all numbers with finite decimal expansions (i.e. having all zeros after a certain point) is countable and dense in $[0,1]$, so you can "hide" them inside the decimal expansion of an appropriate $x$, to be revealed by repeated application of $T$. Furthermore, to make $T^p x \rightarrow 0$, you'll want to "hide" longer and longer strings of zeros inside the decimal expansion.
H: How to calculate a big combination $\binom nr$ How to calculate a big combination such as $$\binom {10^{80}}{10^{10}}$$, using software or by hand, or at least can we get an acceptable approximation. AI: You can use Stirling’s approximation; even the simplest form, $$n!\approx\sqrt{2\pi n}\left(\frac{n}e\right)^n\;,$$ is fairly good. In your example you get $$\begin{align*} \binom{10^{30}}{10^{10}}&\approx\frac{\sqrt{2\cdot10^{30}\pi}\left(\frac{10^{30}}e\right)^{10^{30}}}{\sqrt{2\cdot10^{10}\pi}\left(\frac{10^{10}}e\right)^{10^{10}}\cdot\sqrt{2\left(10^{30}-10^{10}\right)\pi}\left(\frac{10^{30}-10^{10}}e\right)^{10^{30}-10^{10}}}\\\\ &=\frac1{\sqrt{2\pi}}\cdot\frac{10^{3\cdot10^{31}-10^{11}+10}}{\left(10^{30}-10^{10}\right)^{10^{30}-10^{10}+\frac12}} \end{align*}$$ for instance, if I didn’t make another silly algebra error.
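If software is acceptable, here is a small Python sketch (my own addition; the helper name is hypothetical). For moderate sizes one can take `math.lgamma` of all three factorials directly, but for $n=10^{80}$, $k=10^{10}$ that difference loses all floating-point precision, so the sketch uses the $k\ll n$ approximation $\binom nk\approx n^k/k!$, which is essentially the Stirling-style estimate above:

```python
import math

def log10_comb_small_k(n, k):
    # Valid when k is much smaller than n (here 10^10 << 10^80):
    # C(n, k) ~ n^k / k!, since the factors n, n-1, ..., n-k+1 are all ~ n.
    return k * math.log10(n) - math.lgamma(k + 1) / math.log(10)

print(log10_comb_small_k(1e80, 1e10))  # ~7.04e11, so the binomial has about 7*10^11 digits
```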
H: Simplifying $\sin(4x)\cos(4x)$ Simplify $\sin(4x)\cos(4x)$ using double angle or compound trigonometry. Can someone please show me how it's done? I've tried several times but got nowhere near the answer. AI: The double angle formula is $ \ 2 \sin\theta\cos\theta = \sin(2\theta) \iff \sin\theta\cos\theta = \frac{1}{2} \sin(2\theta)$. By applying this formula with $\theta = 4x$, we obtain $$\sin4x\cos4x=\frac{1}{2} \sin(8x).$$
H: $n=\dim V$. Then $V=\ker(T^n)\oplus\mathrm{range}(T^n)$ I am trying to solve the following problem. The question is from a past exam. Suppose that $V$ is a finite dimensional vector space over a field $K$. Let $T: V\rightarrow V$ be a linear operator. If $n=\dim V$, then $V=\ker(T^n)\oplus\mathrm{range}(T^n)$ Attempt/Thoughts. If I can show that $\ker(T^n)\cap \mathrm{range}T^n={0}$ then we are done. To show this I need the following claim. (At least this is what I think). I claim that $\ker(T^{2n}) \subseteq \ker(T^n)$. I am stuck on trying to prove this. I know that $\ker(T) \subseteq \ker(T^2) \subseteq \ker(T^3) \subseteq \cdots$ So this set of inclusions has to stop at some point since we are in the finite dimensional case(?). If there is some $r\geq0$ such that $\ker(T^{r})=\ker(T^{r+1})$ then I can easily prove my claim. But I don't know how to make this precise. Can somebody help? Thanks AI: Instead of considering the chain $\ker(T) \subseteq \ker(T^2) \subseteq \ker(T^3) \subseteq \cdots$, you may consider the ascending chain of proper subset inclusion $\ker(T) \subset \ker(T^2) \subset \ker(T^3) \subset \cdots$. Since $V$ is finite dimensional, the dimension of $\ker(T^r)$ cannot keep strictly increasing. Therefore there must exist some $r$ that is the first index such that $\ker(T^r)=\ker(T^{r+1})$. We also have the descending chain $\mathrm{range}(T) \color{green}{\supseteq} \mathrm{range}(T^2) \color{green}{\supseteq} \mathrm{range}(T^3) \color{green}{\supseteq} \cdots$. Therefore, by the rank-nullity theorem, $$ \mathrm{range}(T) \color{red}{\supset} \mathrm{range}(T^2) \color{red}{\supset} \mathrm{range}(T^3) \color{red}{\supset} \cdots \color{red}{\supset}\mathrm{range}(T^r)=\mathrm{range}(T^{r+1}). $$ Note that the rank-nullity theorem implies that the inclusions preceding $\mathrm{range}(T^r)$ are strict. It remains to show that $\ker(T^r)\cap\mathrm{range}(T^r)=0$ (and hence $V=\ker(T^r)\oplus\mathrm{range}(T^r)$), and that $r\le n$, from which $V=\ker(T^n)\oplus\mathrm{range}(T^n)$ follows.
H: Given $a,b,c>0$ and $a^5+b^5+c^5=3$. Is $a+b+c\leq 3$ always true? Given $a,b,c>0$ and $a^5+b^5+c^5=3$. Is $a+b+c\leq 3$ always true? I tried many ways to prove it and to find a counterexample, but I couldn't. Please help. Thanks. AI: There are many ways to prove this. One way is to use the fact that the $2^{nd}$ derivative of $x^5$ is $20 x^3 > 0$ for all $x > 0$. This means $x^5$ is a strictly convex function over the interval $(0,\infty)$. By Jensen's inequality, we have: $$1 = \frac{a^5 + b^5 + c^5}{3} \ge \left(\frac{a + b + c}{3}\right)^5\implies 3 \ge a+b+c \tag{*1}$$ The power mean inequality mentioned in others' comments can be viewed as a special case of Jensen's inequality. For 3 variables $a,b,c > 0$ and exponents $p > q > 0$, the power mean inequality has the form: $$\left(\frac{a^p + b^p + c^p}{3}\right)^{\frac{1}{p}} \ge \left(\frac{a^q + b^q + c^q}{3}\right)^{\frac{1}{q}}$$ One can rewrite it as $$\frac{(a^q)^{p/q} + (b^q)^{p/q} + (c^q)^{p/q}}{3} \ge \left(\frac{a^q + b^q + c^q}{3}\right)^{p/q}\tag{*2}$$ If one plugs $p = 5, q = 1$ into $(*2)$, one obtains the same inequality as in the L.H.S. of $(*1)$. This means one can prove your inequality using the power mean inequality alone. It also suggests one can derive the power mean inequality from Jensen's inequality by considering the function $x^{\frac{p}{q}}$ at 3 points $a^q, b^q$ and $c^q$.
H: How to prove the Milner-Rado Paradox? For every ordinal $\alpha<\kappa^+$ there are sets $X_n\subset\alpha$ $(n\in\Bbb{N})$ such that $\alpha=\bigcup_n X_n$ and for each $n$ the order-type of $X_n$ is $\le\kappa^n$. [By induction on $\alpha$, choosing a sequence cofinal in $\alpha$.] I tried to prove this problem by using transfinite induction. For $\alpha=0$, this statement is trivial. If $\alpha$ has sets $X_n\subset \alpha$ satisfies $\alpha=\bigcup_n X_n$, $\text{order-type of }X_n\le\kappa^n$, we define $X'_0=\{\alpha\}$, $X_{n+1}'=X_n$. Then $\{X_n'\}$ satisfies $\bigcup_n X_n'=\alpha+1$, $\text{order-type of }X'_n \le \kappa^n$. But I don't know how to proceed the proof. Thanks for any help. AI: Let $\alpha$ be a limit ordinal, and for each $\beta < \alpha$, let $(X^\beta _n)_n$ be the obvious thing. Fix an increasing sequence $(\beta_\gamma)_{\gamma < \mathrm{cf}(\alpha)}$ cofinal in $\alpha$ with $\beta_0 = 1$. Note $\mathrm{cf}(\alpha) \leq \kappa$. Define: $$X^\alpha _0 = \{0\};\ \ X^\alpha_{n+1} = \bigcup_\gamma X^{\beta_{\gamma+1}}_n\setminus \beta_\gamma$$ and that's it! To see that it works, observe that: $$\bigcup_{n>0}X^\alpha_n = \bigcup _n \bigcup _\gamma X^{\beta_{\gamma+1}}_n\setminus \beta_\gamma = \bigcup_\gamma \bigcup_n X^{\beta_{\gamma+1}}_n\setminus \beta_\gamma = \bigcup_\gamma \beta_{\gamma+1}\setminus \beta_\gamma = \alpha \setminus \beta_0$$ and so $\bigcup_nX^\alpha_n = \alpha$. As for the order types, clearly $\mathrm{ot}(X^\alpha_0) = 1 = \kappa^0$. Noting that the sets $\beta_{\gamma+1}\setminus\beta_\gamma$ form a consecutive sequence of ordinal intervals, and that each $X^{\beta_{\gamma+1}}_n\setminus\beta_\gamma$ is a tail segment of $X^{\beta_{\gamma+1}}_n$ we get that: $$\mathrm{ot}(X^\alpha_{n+1}) = \sum_\gamma \mathrm{ot}(X^{\beta_{\gamma+1}}_n\setminus\beta_\gamma) \leq \sum_\gamma \kappa^n = \kappa^n \cdot \mathrm{cf}(\alpha) \leq \kappa^n\cdot\kappa = \kappa^{n+1}$$
H: If six straight lines and five circles intersect each other, then what are the maximum possible number of distinct points of intersection? This is how I went about doing it. I know that if there are 'n' number of straight lines they intersect each other in nC2 ("n choose 2") ways. Therefore, the points of intersection using 6 lines is 15. I also know that a straight line and a circle intersect each other at at most two different points. Hence the points of intersection of six lines and five circles is 6C1 x 5C1 x 2 = 60. After this point I am a little confused. My answer was present in the options but it was the wrong one. I calculated it as being 15+60=75. But the answer key says it is 95. Their reason being that 2 circles intersect in at most two different points and therefore the points of intersection of 5 circles is 5C2 x 2 = 20. The answer then works out to 15+60+20 = 95. I did not understand the last part. Could someone please help me clear out my confusion? Thanks. AI: There are exactly $3$ types of intersection points: Between a line and a line (${}_6C_2 \cdot 1=15$). Between a line and a circle (${}_6C_1 \cdot {}_5C_1 \cdot 2=60$). Between a circle and a circle. This is the case that you were forgetting. There are $5$ circles in total, so there are ${}_5C_2$ ways to choose the two circles that will intersect. These two circles can intersect at most $2$ times. This yields ${}_5C_2 \cdot 2 = 20$. Summing everything together, we obtain $15 + 60 + 20 = 95$.
H: two dimensional linear differential equation with $1$ eigenvector I have the following linear differential equation: \begin{equation} x' = \begin{pmatrix}3&-4\\1&-1\end{pmatrix}x \end{equation} The corresponding characteristic equation is: \begin{equation} \lambda^2-2\lambda+1 = (\lambda-1)(\lambda-1) \implies \lambda_1=\lambda_2=1 \end{equation} A corresponding eigenvector is $\begin{pmatrix} 2\\1\end{pmatrix}$. By plotting the solutions to the equation I know that it is a degenerate node, i.e. there is only one eigenvector for the matrix (I believe the terminology is that the eigenspace has dimension 1). How do I determine this without plotting the differential equation? How do I know that the eigenspace only has dimension 1? AI: Terminology When we are solving for eigenvalues of a system, and an eigenvalue is repeated, then one worries as to whether there exist enough linearly independent eigenvectors. If we have an eigenvalue with multiplicity $n$ which has less than $n$ linearly independent eigenvectors, we will not have enough solutions. With this in mind, we define the algebraic multiplicity of an eigenvalue to be the number of times it is a root of the characteristic equation. We define the geometric multiplicity of an eigenvalue to be the number of linearly independent eigenvectors for the eigenvalue. Mathematically, we can state that the algebraic multiplicity of an eigenvalue $\lambda$ is, by definition, the largest integer $k$ such that $(x-\lambda)^k$ divides the characteristic polynomial. The geometric multiplicity of $\lambda$ is the dimension of its eigenspace, that is, it is the dimension of $\{X \in \mathbb{C}^{n\times1} : AX=\lambda X\}$, where $n$ is the dimension of the matrix. The null space of $[A - \lambda I]$ is called the eigenspace associated with the eigenvalue $\lambda$. When the geometric multiplicity of an eigenvalue is less than the algebraic multiplicity, we say the matrix is defective. In the case of defective matrices, we must search for additional solutions using generalized eigenvectors. Analyze System In this system, we have: $$x' = \begin{pmatrix}3&-4\\1&-1\end{pmatrix}x$$ Aside: The matrix here is rank $= 2$. This means that $\dim(A) = \text{rank}(A) + \text{nullity}(A) = 2 = 2 + 0$; in other words, $A$ is an invertible matrix. The corresponding characteristic equation for $A$ is: $$\lambda^2-2\lambda+1 = (\lambda-1)(\lambda-1) \implies \lambda_1=\lambda_2=1$$ The algebraic multiplicity for the eigenvalue $\lambda_1 = 1$ is $2$. Let's find the geometric multiplicity. To find an eigenvector, we set up and solve $[A- \lambda I]v_i = 0$, so we have: $$[A - 1 I]v_1 = \begin{pmatrix}2 & -4\\ 1 & -2\end{pmatrix}v_1 = 0.$$ The RREF of this matrix is: $$\begin{pmatrix}1 & -2\\ 0 & 0\end{pmatrix}v_1 = 0.$$ This gives us an eigenvector $v_1 = (2, 1)$. Observations The rank of the RREF matrix is $1$ and we know, from the rank-nullity theorem, that the $\dim = 2 = \text{rank} + \text{nullity} = 1 + \text{nullity} \rightarrow \text{nullity} = 1$. Note that sometimes this is more generally called $\dim (\text{image}~ T) + \dim (\ker ~T) = \dim ~V$. We know that the nullity is the geometric multiplicity, which is $1$. This means we can only find one independent eigenvector, so we have what is called a defective matrix. The eigenspace for $\lambda = 1$ is the nullspace of $[A - \lambda I] = [A- 1 I] = \text{Span}\left\{\begin{pmatrix}2\\ 1\end{pmatrix}\right\}$. Notice how this agrees with the eigenvector we found above (as it should)?
From the eigenspace terminology above, we can write $E(1)= \left\{X \in \mathbb{C}^{2×1} : AX = 1X = \begin{pmatrix}2\\ 1\end{pmatrix}\right\}$ Note that there is a nice way of getting everything using the factorization of the characteristic polynomial, but that is for another day. From all of this, we still need to find a second independent (generalized) eigenvector. To find a second eigenvector, we try: $$[A - 1I]v_2 = v_1 \rightarrow \begin{pmatrix}1 & -2\\ 0 & 0\end{pmatrix}v_2 = \begin{pmatrix}2\\1\end{pmatrix}$$ This leads to: $a = 1 + 2b \rightarrow \text{let}~~ b = 0, a = 1 \rightarrow v_2 = (1, 0)$. So, we can write our general solution as: $$x(t) = \begin{bmatrix}x_1(t)\\ x_2(t)\end{bmatrix} = e^t\left[ c_1 v_1 + c_2(v_1 t + v_2)\right] = e^t\left[ c_1 \begin{pmatrix}2\\1\end{pmatrix} + c_2\left(\begin{pmatrix}2\\1\end{pmatrix} t + \begin{pmatrix}1\\0\end{pmatrix}\right)\right] = e^t\left[ c_1 \begin{pmatrix}2\\1\end{pmatrix} + c_2\begin{pmatrix}2t + 1\\t\end{pmatrix}\right] $$ If we wanted to write the matrix exponential, we would have: $$e^{At} = e^t\begin{pmatrix}2 t+1 & -4 t \\ t & 1-2 t \end{pmatrix}$$ We can also draw the phase portrait for the system. We have a critical point at $(x, y) = (0,0)$. From the eigenvalues, we have a positive, repeated real root $\lambda = 1 \rightarrow$ a degenerate node. The phase portrait is as follows.
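The eigenstructure and the matrix exponential quoted above can be cross-checked symbolically (my own sketch, assuming SymPy is installed; exact output formatting may vary):

```python
import sympy as sp

t = sp.symbols('t')
A = sp.Matrix([[3, -4], [1, -1]])
print(A.eigenvects())   # eigenvalue 1 with algebraic multiplicity 2 and a single eigenvector (2, 1)

# Verify the quoted matrix exponential: it should satisfy E' = A*E and E(0) = I
E = sp.exp(t) * sp.Matrix([[1 + 2*t, -4*t], [t, 1 - 2*t]])
print(sp.simplify(E.diff(t) - A * E))  # zero matrix
print(E.subs(t, 0))                    # identity matrix
```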
H: "IFF" (if and only if) vs. "TFAE" (the following are equivalent) If $P$ and $Q$ are statements, $P \iff Q$ and The following are equivalent: $(\text{i}) \ P$ $(\text{ii}) \ Q$ Is there a difference between the two? I ask because formulations of certain theorems (such as Heine-Borel) use the latter, while others use the former. Is it simply out of convention or "etiquette" that one formulation is preferred? Or is there something deeper? Thanks! AI: As Brian M. Scott explains, they are logically equivalent. However, in principle, the expression $$(*) \qquad A \Leftrightarrow B \Leftrightarrow C$$ is ambiguous. It could mean either of the following. $(A \Leftrightarrow B) \wedge (B \Leftrightarrow C)$ $(A \Leftrightarrow B) \Leftrightarrow C$ These are not equivalent; in particular, (1) means that each of $A,B$ and $C$ have the same truthvalue, whereas (2) means that either precisely $1$ of them is true, or else all $3$ of them are true. Also, you can check for yourself that, perhaps surprisingly, the $\Leftrightarrow$ operation actually associative! That is, the following are equivalent: $(A \Leftrightarrow B) \Leftrightarrow C$ $A \Leftrightarrow (B \Leftrightarrow C)$. In practice, however, (1) is almost always the intended meaning.
H: In a square grid ($6 \times 6$) that comprises 25 small unit squares each of side 1 cm, how many rectangles (not squares) are there in the grid? This comes under Combinatorics under intersection of parallel lines. I calculated the number of rectangles to be $\binom62 \times \binom62 = 225$. But how does one subtract only the number of squares from this number? Please help. AI: HINT: Count the squares according to their sizes. There is one square of side $5$, and there are $5^2=25$ of side $1$. How many are there of sides $4,3$, and $2$? (It’s probably easiest to count them in that order, but you should see a pattern pretty quickly.) It may help to realize that when you’re counting squares of side $k$, once you choose the top and lefthand edges, there’s nothing left to choose: the bottom and righthand edges have to be $k$ units down and over, respectively. Of course this does put some limitations on which edges you can choose for the top and lefthand side. Added: Indeed, you can carry this same idea further: once you know where the upper lefthand corner is, you know the whole square. How many places are there for the upper lefthand corner of a square of side $k$? It has to be at least $k$ units from the bottom and at least $k$ units from the righthand edge. How many vertices fit that description?
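A brute-force cross-check of the hint above (my own sketch): with the $6$ horizontal and $6$ vertical grid lines it finds $225$ rectangles in total, of which $55$ are squares, leaving $170$ non-square rectangles.

```python
from itertools import combinations

lines = range(6)                         # the 6 horizontal and 6 vertical grid lines
rects = squares = 0
for x1, x2 in combinations(lines, 2):
    for y1, y2 in combinations(lines, 2):
        rects += 1
        squares += (x2 - x1) == (y2 - y1)
print(rects, squares, rects - squares)   # 225 55 170
```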
H: using the conditional to abbreviate formulas I hope you're all doing well. I was reading a paper recently that started out with a language $L$ with a set $PV$ of propositional variables, Boolean connectives $\neg, \vee$, and some modalities called $\Box$ and O. It then says "$\wedge, \rightarrow,$ and $\leftrightarrow$ are defined in terms of $\neg$ and $\vee$." What do they mean? I understand that $\{\neg, \vee\}$ is a complete set of connectives, i.e. that any wff in propositional logic using any of the symbols $\neg, \vee, \wedge, \rightarrow,$ and $\leftrightarrow$ is tautologically equivalent to a wff just using $\neg$ and $\vee$. So, do they mean that they consider $\rightarrow$ not as a boolean connective in its own right, but a notational abbreviation for using $\neg$ and $\vee$? i.e. $p \rightarrow q$ as an abbreviation for $\neg p \vee q$? (I understand that the two formulas are not literally equivalent if we consider classical propositional logic, where $\rightarrow$ is a distinct symbol from $\neg$ and $\vee$, and that the two formulas are merely tautologically equivalent.) Is it a common thing in logic papers to "abbreviate" like this? Thank you for any help/wisdom. Sincerely, Vien p.s. here's the source. It's in the first paragraph of "Basic Definitions" if that helps at all. http://individual.utoronto.ca/philipkremer/onlinepapers/DTL.pdf Cheers! AI: "Do they mean that they consider $\to$ not as a boolean connective in its own right, but a notational abbreviation for using $\neg$ and $\lor$?" Yes (probably). It is pretty common to initially introduce a restricted formal language $L$, and then later want -- for readability, for convenience -- to be able to use devices not in the ground language $L$. We can go two ways. (a) We can officially move to a less restricted language $L'$ by adding new vocabulary and rules for dealing with it in $L'$. Or (b) we can stick strictly to $L$ but allow ourselves street slang in writing sentences of $L$ -- so e.g. we allow ourselves to write $(p \to q)$ as casual unofficial argot for $(\neg p \lor q)$, but $\to$ isn't really part of our official formal language. It really makes little odds which line we take in most cases, but (b) is clean and used in many logic texts and probably what is intended here.
H: "range of function" vs "target of function"? Page 14 of Fundamentals of Computer Graphics states that if we have a function like this: ...the set that comes before the arrow is called the domain of the function, and the set on the right-hand side is called the target. ...The point f(a) is called the image of a, and the image of a set A (a subset of the domain) is the subset of the target that contains the images of all points in A. The image of the whole domain is called the range of the function. Then what exactly is the difference between the range of a function and the target of a function? AI: The difference is that the range may not be the entire target. Consider the function $f(x)=x^2$ from $\{-1,0,1\}$ to $\{-1,0,1\}$; its domain and target are both $\{-1,0,1\}$, but its range is only $\{0,1\}$. By the way, the more usual name for the target is codomain. The range is sometimes called the image.
H: Can we avoid talking about proper classes by talking about models? Intuitively, we can define the ordinal numbers $\mathsf{On}$ as the closure of $\{0\}$ with respect to successorship $x \mapsto x \cup \{x\}$ and (set-sized) unions. Arguably the most natural way to implement this idea is to write $\mathsf{On} = \bigcap \mathcal{A},$ where $\mathcal{A}$ is the collection of all classes that include $\{0\}$ and which are closed with respect to the aforementioned operations. However, this approach cannot work within ZFC, as there are no proper classes, let alone collections thereof. This issue isn't specifically an issue with the definition of the ordinals; it also occurs when defining the Surreal numbers, for example. There are at least two standard solutions to this problem. Work in a more expressive set theory. Define the ordinals (surreals etc.) in a different way. Suppose neither solution appeals to us. Well if television and fantasy books have taught me nothing else (and they haven't), it's that there's always a third way. So my question is: Can we work around the problem by talking about models of ZFC? For instance, I was thinking we could define a new function symbol $\mathsf{On}$ such that if a set $M$ is a standard model of second-order ZFC, then $\mathsf{On}^M$ are the ordinals of $M.$ From this point of view, we can define $\mathsf{On}^M$ using the approach via closure described in the preceding paragraph, since $M$ is a set. Furthermore, I think that we can introduce such a function symbol $\mathsf{On}$ without even knowing whether or not ZFC has any models. Now suppose we want to prove something about the ordinals. Like, we wish to show that every non-empty subclass of the ordinals has a minimum element. I imagine that we can instead prove that for all standard models $M$ of second order ZFC, we have that every non-empty subset of $\mathsf{On}^M$ has a minimum element. From this, I imagine that we may deduce the metatheorem that given a definable unary predicate in the language of ZFC such that at least one ordinal satisfies that predicate, there exists a least ordinal satisfying that predicate. I think this approach works even though we can't prove that a standard model $M$ of second order ZFC actually exists. Does this sort of thing actually work, or is there a subtlety that I'm missing? Edit. As Andreas Blass explains in his answer, taking $M$ to be a standard model of second-order ZFC doesn't work, because it prevents the completeness theorem from applying. But, can we solve this by allowing M to be an arbitrary model of first-order ZFC? More generally, is the idea salvageable? AI: If you prove that some statement is true in all standard models of second-order ZFC, you cannot infer that it (or a reformulation using definable predicates instead of classes) is provable in ZFC. The first counterexample that comes to mind is the statement that ZFC is consistent; it's true in all standard models of second-order ZFC and in fact even in all standard models of first-order ZFC, but it's not provable in ZFC. Another example to keep in mind is "there is a standard model of first-order ZFC", which is true in all standard models of second-order ZFC but not in all standard models (let alone in all models) of (first-order) ZFC.
Also, if you really want to confine attention to statements "about ordinals", then use the equivalent formulation of my second example as "there is an ordinal $\alpha$ such that $L_\alpha$, the constructible hierarchy up to stage $\alpha$, satisfies first-order ZFC."
H: Linear independence of $e^{a_1x},\ldots, e^{a_nx}$ $a_1,\ldots,a_n$ are distinct real numbers. Prove that the functions $e^{a_1x},\ldots,e^{a_nx}$ are linearly independent in $Fun(R,R)$. My way to try to prove it: I assumed: $b_1e^{a_1x} + \cdots + b_n e^{a_nx} = 0$, and we want to show that $b_1 = \cdots = b_n = 0$. I differentiated it $n-1$ times, so I wrote it as an $n\times n$ matrix. Then if I take the transpose and calculate the determinant, I get something like: $b_1 \cdots b_n \times\text{Vandermonde Det}$. I know the Vandermonde det isn't $0$, but if one of the $b_i$'s is zero everything is zero and the det is $0$, so it is singular and it is linearly dependent! So I don't know why it is true at all. AI: Suppose that there exist $b_1,\dots,b_n$, not all zero, such that for all $x \in \mathbb{R}$, $$b_1e^{a_1x}+ b_2e^{a_2x}+ \dots+ b_ne^{a_nx}=0$$ Then, differentiating repeatedly and evaluating at $0$, $$ b_1+ b_2+ \dots +b_n= 0 \\ b_1a_1+ b_2a_2+ \dots + a_nb_n=0 \\ b_1a_1^2+b_2a_2^2+ \dots + b_na_n^2=0 \\ \vdots \\ b_1a_1^{n-1}+b_2a_2^{n-1} + \dots + b_na_n^{n-1} =0$$ Therefore, the vectors $\left( \begin{matrix} 1 \\ a_1 \\ a_1^2 \\ \vdots \\ a_1^{n-1} \end{matrix} \right)$, $\left( \begin{matrix} 1 \\ a_2 \\ a_2^2 \\ \vdots \\ a_2^{n-1} \end{matrix} \right)$, ..., $\left( \begin{matrix} 1 \\ a_n \\ a_n^2 \\ \vdots \\ a_n^{n-1} \end{matrix} \right)$ are linearly dependent, hence: $$0= \left| \begin{matrix} 1 & 1 & \dots & 1 \\ a_1 & a_2 & \dots & a_n \\ a_1^2 & a_2^2 & \dots & a_n^2 \\ \vdots & \vdots & & \vdots \\ a_1^{n-1} & a_2^{n-1} & \dots & a_n^{n-1} \end{matrix} \right|$$ But this Vandermonde determinant vanishes only when $a_i=a_j$ for some $i \neq j$, contradicting the assumption that the $a_i$ are distinct; so no such $b_i$ can exist, and the functions are linearly independent. Another method: Suppose that there exist $b_1,\dots,b_n$ such that for all $x \in \mathbb{R}$, $$b_1e^{a_1x}+b_2e^{a_2x}+\dots+b_ne^{a_nx}=0$$ Let $1 \leq k \leq n$ such that $a_k= \max\limits_{1 \leq i \leq n} a_i$. Then $$b_1e^{(a_1-a_k)x}+b_2e^{(a_2-a_k)x} + \dots + b_ne^{(a_n-a_k)x}=0$$ When $x \to + \infty$, you get $b_k=0$. So you can conclude by induction.
H: Series expansion of $\ln(\sec x + \tan x)$? I'm looking for series expansion of $\ln(\sec x + \tan x)$ ? I tried to differentiate and then find an expansion then integrating but found nothing. AI: $$\frac{d}{dx}\ln(\sec x+\tan x)=\frac{\sec x\tan x+\sec^2x}{\sec x+\tan x}=\sec x=\sum_{n\ge 0}\frac{(-1)^nE_{2n}}{(2n)!}x^{2n}\;,$$ where $E_{2n}$ is an Euler number, the coefficient of $\dfrac{t^{2n}}{(2n)!}$ in the Maclaurin series expansion of $\dfrac1{\cosh t}$. This is easy enough to integrate termwise, though of course the resulting series will still have Euler numbers in the coefficients.
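If only the first few terms are wanted, a symbolic check (my own sketch, assuming SymPy is available) reproduces the termwise integration described above:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.series(sp.log(sp.sec(x) + sp.tan(x)), x, 0, 8))
# x + x**3/6 + x**5/24 + 61*x**7/5040 + O(x**8)
```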
H: NBA round robin probability Let’s say there is a team who you expect to win 75% of its games in a given 82-game NBA regular season (and the probability of winning each game = 75%). What is the probability that the team will never lose consecutive games at any point during the 82-game season. AI: You can make a Markov chain with $3$ states, then find the $82$-th power of the transition matrix. Here is the transition matrix: $$ P = \begin{pmatrix} \frac 34 & \frac 14 & 0 \\ \frac 34 & 0 & \frac 14 \\ 0 & 0 & 1 \end{pmatrix}. $$ Your answer would be $$ \begin{pmatrix} 1 & 0 & 0 \end{pmatrix} P^{82} \begin{pmatrix} 1 \\ 1 \\ 0 \end{pmatrix}. $$ According to Wolfram Alpha, $1.29\%$ would be an estimate of the answer.
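A direct numerical evaluation of that matrix power (my own sketch with NumPy) reproduces the quoted estimate of about $1.29\%$:

```python
import numpy as np

P = np.array([[0.75, 0.25, 0.00],   # state 0: last game was a win
              [0.75, 0.00, 0.25],   # state 1: last game was a loss
              [0.00, 0.00, 1.00]])  # state 2: two consecutive losses (absorbing)
start = np.array([1.0, 0.0, 0.0])
after_82 = start @ np.linalg.matrix_power(P, 82)
print(after_82[:2].sum())           # ~0.0129, i.e. about a 1.29% chance of never losing twice in a row
```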
H: Simplify the trigonometric expression using double angle and compound angles. Simplify the trigonometric expression $\dfrac{1-t^2}{1+t^2}$ where $t=\tan\dfrac{x}{2}$, using double angle and compound angles. I've worked up to the point where I converted the expression into $\sec$ form after substituting the $\tan\dfrac{x}{2}$. But I'm stuck, so can someone show me how it's done? AI: $$\frac{1-t^2}{1+t^2}=\frac{1-\tan^2\frac x2}{1+\tan^2\frac x2}=\frac{\cos^2\frac x2-\sin^2\frac x2}{\cos^2\frac x2+\sin^2\frac x2}$$ multiplying the numerator & the denominator by $\cos^2\frac x2$, and then using $\cos^2\frac x2+\sin^2\frac x2=1$, $$=\cos^2\frac x2-\sin^2\frac x2=\cos x$$ as $\cos2A=\cos^2A-\sin^2A$
H: Addition and Subtraction of Convergent Series I have a simple question and I can't find the answer to it: If a series $a_{n}$ can be proven to converge and a series $b_{n}$ too, will the series $a_{n}+b_{n}$ converge? The same goes for their difference. Thank you AI: $\{a_n\},\{b_n\}$ converge implies $\exists a,b $ such that given any $\epsilon>0$, there exists $n_0\in \Bbb N$ such that $|a_n-a|<\epsilon/2,|b_n-b|<\epsilon/2$ $\forall n\gt n_0$ Then $|a_n+b_n-(a+b)|\leq |a_n-a|+|b_n-b|<\epsilon/2+\epsilon/2=\epsilon$ Therefore, $\{a_n+b_n\}$ converges to $a+b$
H: Prove triangle inequality I want to prove that $d(x,y) = 1- \sum_i {\min(x_i, y_i)}$ where $\sum_i {x_i} = \sum_i {y_i} =1$ and $\forall i: x_i, y_i \geq 0$ satisfies the triangle inequality. The domain of $d$ therefore is $\mathcal{X} \times \mathcal{X} \to \mathcal{X}$ with $\mathcal{X} = \{x | x \in \mathbb{R}^d_{\geq 0}, \sum_{i=1}^d {x_i} = 1 \}$ I am pretty sure that this is actually the case, but I can't come up with a way to prove it. It might be possible that I am wrong and a counterexample can actually be found. AI: Notice that if $a$ and $b$ are two real numbers, then $$\min\{a,b\}=\frac{a+b-|a-b|}2.$$ Consequently, for $x,y\in\mathcal X$, we have $$d(x,y)=1-\sum_{j=1}^d\frac{x_j+y_j-|x_j-y_j|}2=\frac 12\sum_{j=1}^d|x_j-y_j|,$$ from which the triangle inequality is not hard to show.
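As the answer shows, on the probability simplex $d$ is just half the $\ell_1$ distance (the total variation distance), and a quick randomized check (my own sketch with NumPy) is consistent with both the identity and the triangle inequality:

```python
import numpy as np

rng = np.random.default_rng(1)

def d(x, y):
    return 1.0 - np.minimum(x, y).sum()

for _ in range(10_000):
    x, y, z = (rng.dirichlet(np.ones(5)) for _ in range(3))
    assert abs(d(x, y) - 0.5 * np.abs(x - y).sum()) < 1e-12   # the identity from the answer
    assert d(x, z) <= d(x, y) + d(y, z) + 1e-12               # triangle inequality
```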
H: why is $E[E[Y|X]] = E[Y]$ I have a derivation from my book, and I have a problem with the very first line: $$ \begin{align} E[E(Y|X)] &= \int_{-\infty}^\infty E(Y|x)f_1(x)dx <- \text{why dx}\\ &= \int_{-\infty}^\infty\int_{-\infty}^\infty yf(y|x)f_1(x)dydx\\ &=\int_{-\infty}^\infty y \int_{-\infty}^\infty f(x,y)dxdy\\ &=\int_{-\infty}^\infty yf_2(y)dy\\ &= E(Y) \end{align} $$ Now everything after the first integral I understand; that's just splitting the integral up and getting marginal/joint densities. But why do we choose $dx$ in the first integral? It seems arbitrary; what is it based on? AI: $E[Y\mid X]$ is a function of $X$: its value when $X=x$ is $E[Y\mid X=x]$. We now weight each of these values by $f_1(x)$, intuitively the probability that $X=x$, and ‘average’ (i.e., integrate over $x$) to get the expected value of $E[Y\mid X]$, which turns out to be $E[Y]$. It’s no different in principle from calculating $E[Z]$ when $Z$ is any other function of $X$, e.g., $Z=X^2$.
H: What does $a\equiv b\pmod n$ mean? What do the $\equiv$ and $b\pmod n$ mean? For example, what does the following equation mean? $5x \equiv 7\pmod {24}$? Tomorrow I have a final exam so I really have to know what it is. AI: It’s a bit late to be learning a basic definition, but here it is: $a\equiv b\pmod n$ means that $n\mid a-b$, i.e., that $a-b$ is a multiple of $n$. Thus, the congruence $5x\equiv 7\pmod{24}$ means that $24\mid 5x-7$. To solve it, you must find an integer $x$ that makes this true. Since $5\cdot11-7=55-7=48$ is a multiple of $24$, $x\equiv 11\pmod{24}$ is a solution.
H: How to show there exists an infinite sequence satisfying $a_0 = x$ and $(a_n,a_{n+1}) \in R$. Intuitively, we can use the fact that (i) for all $a \in \mathbb{R}$, there exists $b \in \mathbb{R}$ such that $a < b$ in order to conclude that (ii) there exists an infinite sequence $a : \mathbb{N} \rightarrow \mathbb{R}$ such that $a_0 = \sqrt{2}$, and For all $n \in \mathbb{N}$ we have $a_n < a_{n+1}$. Of course, this is easily proved by constructing an explicit example. Like, we could just offer $a_n = \sqrt{2}+n$ as witness. But the point is, even without giving an explicit example, we can deduce (ii) from (i). This works by appealing to the following theorem. Proposition 1. For all sets $X$, all $x \in X$ and any relation $R \subseteq X^2$ that is left-total on $X$, there exists a sequence $a : \mathbb{N} \rightarrow X$ such that $a_0 = x$ and for all $n \in \mathbb{N}$ we have $(a_n,a_{n+1}) \in R.$ I could use a hand proving this proposition. Help, anyone? Thoughts. We can define a new sequence $b : \mathbb{N} \rightarrow X$ such that $b_0 = \{(x)\}$ $b_{n+1} = \{\tilde{x} \oplus (y) \mid \tilde{x} \in b_n, y \in X, (\tilde{x}_{n},y) \in R\}.$ Where $\oplus$ denotes concatenation. But, where to go from there? AI: It’s a straightforward recursive construction. For each $y\in X$ there is an $f(y)\in X$ such that $\langle y,f(y)\rangle\in R$. Let $b(0)=x$, and for $n\in\Bbb N$ let $b(n+1)=f\big(b(n)\big)$. Of course in general this uses (part of) the axiom of choice, since $f$ is a choice function for some of the subsets of $X$. In fact the proposition is precisely the statement of the axiom of dependent choice, which is strictly weaker than the axiom of choice but still independent of the axioms of $\mathsf{ZF}$.
H: Integration of a matrix over a hypersphere Can anybody please help me on this one please? $\int_{B({\bf x}_0;R)} \frac{1}{2} ({\bf x} - {\bf x_0})({\bf x} - {\bf x_0})^{T} d{\bf x}$ Here, $B({\bf x}_0;R)$ is a hypersphere(ball?) with radius $R$ with center $\bf x_0$. AI: Suppose that $\mathbf{x}$ is a column vector, $B(\mathbf{x}_0;R)$ denotes the $n$-ball centered at $\mathbf{x}_0$ with radius $R$ and you are calculating a Riemann integral. Then $$ \int_{B(\mathbf{x}_0;R)} \frac12 (\mathbf{x}-\mathbf{x}_0)(\mathbf{x}-\mathbf{x}_0)^{T} d\mathbf{x} =\int_{B(0;R)} \frac12 \mathbf{x}\mathbf{x}^T d\mathbf{x}. $$ Each off-diagonal entry of $\mathbf{x}\mathbf{x}^T$ is of the form $x_ix_j$. For fixed $i$, this is an odd function in $x_j$ and hence its integral over $B(0;R)$ is zero. Therefore, only integrals of the diagonal entries remain. Since the volume of the unit $n$-ball is $\pi^{n/2}/\Gamma(\frac n2+1)$, we get \begin{align*} v(n,R):=\int_{B(0;R)} \frac12x_i^2d\mathbf{x} &= \frac{\pi^{(n-1)/2}}{\Gamma(\frac {n-1}2+1)} \int_{-R}^R \frac12 x_i^2\left(R^2-x_i^2\right)^{(n-1)/2}dx_i\\ &= \frac{\pi^{(n-1)/2}R^{n+2}}{\Gamma(\frac {n-1}2+1)} \int_0^1 u^2\left(1-u^2\right)^{(n-1)/2}du\quad(x_i=Ru). \end{align*} Hence the required integral is $v(n,R)I_n$. If $\mathbf{x}$ is a row vector, $\int_{B(0;R)} \frac{1}{2} \mathbf{x}\mathbf{x}^T d\mathbf{x}=\sum_{i=1}^n\int_{B(0;R)} \frac12 x_i^2 d\mathbf{x}$. So, the result is $nv(n,R)$ instead.
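For reference (my own completion of the last step), the remaining one-dimensional integral is a Beta function: $$\int_0^1 u^2\left(1-u^2\right)^{(n-1)/2}du=\frac12 B\!\left(\frac32,\frac{n+1}2\right)=\frac{\sqrt\pi\,\Gamma\!\left(\frac{n+1}2\right)}{4\,\Gamma\!\left(\frac n2+2\right)},$$ so that $v(n,R)=\dfrac{\pi^{n/2}R^{n+2}}{4\,\Gamma\!\left(\frac n2+2\right)}$; as a sanity check, $n=1$ gives $v(1,R)=R^3/3=\int_{-R}^{R}\frac12 x^2\,dx$.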
H: Inverse formula for sg (specific gravity) to plato I am making a class (in software) in which several algorithms are used to convert values. Now almost all functions need to go two ways, e.g. sg_to_plato(sg) and plato_to_sg(plato). These algorithms I gather from online sources and literature. It is all working within a reasonable range, but the round-trip results aren't exactly the same: sg_to_plato(plato_to_sg(18)) is not 18 but 17.9. I now use for example: def sg_to_plato(sg): return 190.74*math.pow(sg,3) - 800.47*math.pow(sg,2) + 1286.4*sg - 676.67 def plato_to_sg(plato): return plato/(258.6-(plato/258.2*227.1))+1 but these two formulas do not seem related. Can't I "inverse" or "reverse" the formula I use for the first to use in the other? I looked at: Learning how to flip equations, but my math is not sufficient to apply the theory. Question one: Is a formula given 1 input and 1 output always reversible? Question two: Would someone like to explain how this is done EDIT: As Gerry Myerson correctly pointed out, the discrepancy could be attributed to the fact that I am rounding, but this is not the case. Even unrounded I get approximately the same result. I removed the rounding bit all the same. AI: It is not always possible to invert a formula. The function at hand has to be bijective. For instance, consider $y(x)=x^2$, for $y=1$, there are two possible values of $x$, $1$ and $-1$, so you cannot express $x$ as a function of $y$. I do not know where the functions you are using come from, but my guess is that they are only approximations, since they are not the inverse of one another: let $p$ be plato, $s$ be sg, $a=258.6$ and $b=227.1/258.2$, then, using your second algorithm, we have $$ s= \frac{p}{a-bp}+1 $$ so \begin{align} &s(a-bp)=p+a-bp\\ &p(1-b+sb)=sa-a\\ \end{align} $$ p=\frac{a(s-1)}{1+b(s-1)} $$ which is not the formula used in the first algorithm.
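Putting the answer's inversion into code (a sketch in the question's own Python style; `sg_to_plato_exact` is a name I made up, and the constants are those from the question):

```python
def plato_to_sg(plato):
    return plato / (258.6 - plato / 258.2 * 227.1) + 1

def sg_to_plato_exact(sg):
    # exact inverse of plato_to_sg, from solving s = p/(a - b*p) + 1 for p
    a, b = 258.6, 227.1 / 258.2
    return a * (sg - 1) / (1 + b * (sg - 1))

print(sg_to_plato_exact(plato_to_sg(18)))  # 18.0 up to floating-point rounding
```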
H: Variance of summation of Bernoulli variables Let $X_1,\ldots,X_n$ be independent Bernoulli variables, with probability of success $p_i$ and let $Y_n =\frac1n\sum\limits^n_{i=1} (X_i - p_i )$ a) find the mean and variance of $Y_n$ b) show that for every $a>0, \lim\limits_{n\to\infty} P(Y_n<a)=1$ Now for the mean, it was quite straightforward: $E[Y_n]=0$, however the variance not so much. They said $Var(Y_n)= \dfrac{\sum_{i=1}^n p_i(1-p_i)}{n^2}$. Why did they not take the summation out, as I have to do almost every time? Is it the same as $\dfrac{Var(X_i)}{n^2}$? AI: $$ \begin{align} \mathbb{E}[Y_n] &=\dfrac{\sum^n_{i=1} (\mathbb{E}[X_i] - p_i )}{n}\\ & =0\\\end{align} $$ $$ \begin{align} Var[Y_n]&=\mathbb{E}[Y_n^2]-\mathbb{E}^2[Y_n]\\ & =\mathbb{E}[Y_n^2]=\mathbb{E}[(\dfrac{\sum^n_{i=1} (X_i - p_i )}n)^2]\\ &=\frac1{n^2}\mathbb{E}[\sum^n_{i=1}\sum^n_{j=1}(X_i-p_i)(X_j-p_j)]\\ &=\frac1{n^2}\sum^n_{i=1}\sum^n_{j=1}\mathbb{E}[(X_i-p_i)(X_j-p_j)]\\ &=\frac1{n^2}(\underbrace{\sum^n_{i=1}\sum^n_{j \ne i,j=1}\mathbb{E}[(X_i-p_i)]\mathbb{E}[(X_j-p_j)]}_{=0 (i \ne j \Rightarrow X_i \bot X_j)}+\sum^n_{i=1}\mathbb{E}[(X_i-p_i)^2])\\ &=\frac1{n^2}\sum^n_{i=1}p_i(1-p_i) \end{align} $$ note that $\mathbb{E}[.]$ is a linear operator
H: find $0 < l < 35$ such that $l^5 \equiv 3\pmod {35} $ I have to find some $0 < l < 35$, such that $l^5 \equiv 3\pmod {35} $. I tried to use suggestions from my previous question, So I tried: $l^5 \equiv 3\pmod {35} $ => $35 | l^5 - 3$, I find some matching $l$'s But I didn't find any matching number. I can't use calculator, so I guess there's some trick behind. please help. AI: $(l,35)=(3,35)=1$ by Euclidean Algorithm. Now we have $$l^{25}\equiv(l^{5})^{5}\equiv3^{5}\equiv243\equiv33\ (\mathrm{mod}\ 35)$$ from the question and $$l^{24}\equiv l^{\varphi(35)}\equiv1\ (\mathrm{mod}\ 35)$$ by Euler's Theorem. Therefore, we get $$l\equiv33\ (\mathrm{mod}\ 35).$$
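For completeness, the result is easy to confirm by brute force in a couple of lines of Python (no calculator tricks needed once a computer is allowed):

    print([l for l in range(1, 35) if pow(l, 5, 35) == 3])  # [33]
    print(pow(33, 5, 35))                                    # 3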
H: Calculating the derivative of $F(t):=\int_0^{\infty}\frac{e^{-x}-e^{-tx}}{x}\, dx$ As the title says, I have to calculate $F'(t)$ for $$ F(t):=\int_0^{\infty}\frac{e^{-x}-e^{-tx}}{x}\, dx. $$ What I already have is $$ F'(t)=\lim\limits_{h\to 0}\frac{F(t+h)-F(t)}{h}=\lim\limits_{h\to 0}\int_0^{\infty}\frac{e^{-tx}-e^{-(t+h)x}}{hx}\, dx. $$ AI: we have $$\frac{e^{-x}-e^{-tx}}{x}\underset{x\to0}{\longrightarrow}t-1$$ so the function $x\mapsto \frac{e^{-x}-e^{-tx}}{x}$ has a finite limit at $0$ and hence is integrable on the interval $(0,1]$; moreover $$\frac{e^{-x}-e^{-tx}}{x}=_{x\to\infty}o\left(\frac{1}{x^2}\right)$$ so the function $x\mapsto \frac{e^{-x}-e^{-tx}}{x}$ is integrable on the interval $[1,+\infty)$, and then the integral $$\int_0^\infty \frac{|e^{-x}-e^{-tx}|}{x}dx$$ exists. The function $$f\colon t\mapsto \frac{e^{-x}-e^{-tx}}{x},\quad t>0$$ is differentiable and we have $$|f'(t)|=e^{-tx}\leq e^{-ax}\quad\forall t\geq a>0$$ and since the function $x\mapsto e^{-ax}$ is integrable on $[0,+\infty)$, then by the Leibniz theorem $F$ is differentiable and $$F'(t)=\int_0^\infty e^{-tx}dx=\frac{1}{t}$$ Added: Clearly $F(1)=0$ so we conclude $$F(t)=\log t.$$
H: characteristic polynomial and eigenvalues of $T(A)={ A }^{ t }$ Let $V=M_2(\mathbb R)$ and $T(A)={ A }^{ t }$. I was asked to find the characteristic polynomial of $T$ and it's eigenvalues, and finally to say if $T$'s diagonalizable. Is there a way to solve this without actually finding a matrix representation of $T$? (since I've tried turning it into a set of linear equations and it gets ugly) Thanks! AI: You can solve it all in one go. First notice that $T^2 = id$, therefore the only possibles eigenvalues are $\pm 1$. Now you simply need to solve the equations $A^t = A$ and $A^t = -A$. This gives you as eigenvectors for the eigenvalue $1$: $$\begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0\end{pmatrix}$$ And for the eigenvalue $-1$: $$\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}$$ These four matrices are linearly independent, $M_2(\mathbb R)$ has dimension 4 so they span the whole space. $M_2(\mathbb R)$ has a basis of eigenvectors for $T$, so $T$ is diagonalisable, and its characteristic polynomial is $(X-1)^3(X+1)$. This approach has a straightforward generalization for $M_n(\mathbb R)$ for any $n$.
H: Find Limit of the given function Find the limit $$\lim_{x\rightarrow 1}\ln(1-x)\cot\left({{\pi x}\over2}\right)$$ AI: Putting $1-x=y$ $$\lim_{x\rightarrow 1^{-}} \ln(1-x)cot({{\pi x}\over2})$$ $$=\lim_{y\to0^+}\ln y\tan \frac {\pi y}2$$ $$=\lim_{y\to0^+}\frac{\ln y}{\cot \frac {\pi y}2}\text{ which is of the form }\frac \infty \infty$$ So, applying L'Hospital's Rule, $$\lim_{y\to0^+}\frac{\ln y}{\cot \frac {\pi y}2}=\lim_{y\to0^+}\frac{1}{-y\csc^2 \frac {\pi y}2\frac\pi2}$$ $$=-\frac\pi2 \left(\lim_{y\to0^+}\frac{\sin \frac {\pi y}2}{\frac {\pi y}2}\right)^2\cdot \lim_{y\to0^+}y=0$$
H: Constant growth rate? Say the population of a city is increasing at a constant rate of 11.5% per year. If the population is currently 2000, estimate how long it will take for the population to reach 3000. Using the formula given, so far I've figured out how many years it will take (see working below) but how can I narrow it down to the nearest month? AI: Let $a=1.115^{1/12}=\sqrt[12]{1.115}$, the twelfth root of $1.115$. Then $$1.115^x=(a^{12})^x=a^{12x}\;,$$ and $12x$ is the number of months that have gone by. Thus, if you can solve $a^y=1.5$, $y$ will be the desired number of months. Without logarithms the best that you’ll be able to do is find the smallest integer $y$ such that $a^y\ge 1.5$. By my calculation $a\approx1.009112468437$. You could start with $a^{36}$ and work up until you find the desired $y$.
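The "work up until you find the desired $y$" step is a short loop in code (a sketch; it reports 45 months, i.e. 3 years and 9 months):

    a = 1.115 ** (1 / 12)        # monthly growth factor
    factor, months = 1.0, 0
    while factor < 1.5:          # 3000 / 2000 = 1.5
        factor *= a
        months += 1
    print(months, divmod(months, 12))   # 45 (3, 9)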
H: Every subspace of $\mathbb{R}^n$ is a solution space of a homogeneous system of linear equations. All solutions of $AX = 0$, where $A$ is an $n \times n$ matrix and $X$ is a column vector, form a subspace of $\mathbb{R}^n$. All the subspaces of $\mathbb{R}^n$ are of this type. How to prove this result? Linear Algebra: solution of homogeneous system of equation Thank you. AI: Let $S$ be a subspace of $\mathbb R^n$ and choose a basis $(e_1,\ldots,e_p)$ of $S$, which we complete to a basis $(e_1,\ldots,e_p,e_{p+1},\ldots,e_n)$ of $\mathbb R^n$. Now let $f$ be the endomorphism defined by $f(e_i)=0,\ 1\leq i\leq p$ and $f(e_i)=e_i,\ p+1\leq i\leq n$, and let $A$ be the matrix of $f$ in this basis; then $$AX=0\iff X\in S$$
H: Solving a linear system with complex eigenvalues I have the system: \begin{equation} x' = \begin{pmatrix}5&10\\-1&-1\end{pmatrix}x \end{equation} The corresponding characteristic equation is: \begin{equation} \lambda^2-4\lambda+5 \\ \implies \lambda_1 = 2+i \land \lambda_2 = 2-i \end{equation} I am having trouble solving for the eigenvector: \begin{align} (A-\lambda I)x_1 =& 0\\ \begin{pmatrix}3-i&10\\-1&-3-i\end{pmatrix}x_1 =&0 \end{align} I really don't know what to do with the complex numbers. I thought about multiplying by the conjugate in the rows, but then I will have complex numbers in opposite columns again... How do I solve for the eigenvectors in the case of complex numbers? AI: If we set $x_1 = (a, b)^T$, we get $$ \begin{pmatrix}3-i&10\\-1&-3-i\end{pmatrix}x_1 = \begin{pmatrix}3-i&10\\-1&-3-i\end{pmatrix}\begin{pmatrix}a\\b\end{pmatrix}\\\\ = \begin{pmatrix}(3-i)a + 10b\\-a - (3 + i)b\end{pmatrix} = \begin{pmatrix}0\\0\end{pmatrix} $$ which is just a set of two equations of two (complex) unknowns. Multiplying the lower equation by $-(3-i)$ (multiplying by conjugates is a good trick against both square roots and complex numbers, you're right bout that) shows that these equations are linearly dependant (which is good), and we get that $b = \left(-\frac{3}{10} + \frac{i}{10}\right)a$ solves them both: $$ (3-i)a +10b = 0\\\\ (3-i)a = -10b\\\\ \frac{3-i}{-10}a = b \\\\ \left(-\frac{3}{10} + \frac{i}{10}\right)a = b $$ And there you have your eigenspace for the first eigenvalue.
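A numerical cross-check with NumPy (note numpy returns unit-length eigenvectors, so they match the answer's eigenvector only up to a complex scalar factor):

    import numpy as np

    A = np.array([[5, 10], [-1, -1]], dtype=complex)
    print(np.linalg.eigvals(A))        # 2+1j and 2-1j (order may vary)
    v = np.array([1, -0.3 + 0.1j])     # the answer's eigenvector with a = 1
    print(A @ v - (2 + 1j) * v)        # ~ [0, 0]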
H: Need a source for the following result: From $v \in H^1$ and $f$ Lipschitz it follows that $f(v) \in H^1$. I am looking for a source of the following (or a similar) result: If $v \in H^1(\Omega,\mathbb{C})$ on a bounded domain $\Omega$ and $f: R(v) \to \mathbb{C}$ is Lipschitz continuous, then $f(v) \in H^1(\Omega, \mathbb{C})$ Here $H^1$ denotes the Sobolev space $W^{1,2}$ and $R(v)$ denotes the range of $v$. In various answers on StackExchange this is used (e.g. here or here) and is referred to as a "standard fact". I can't seem to produce a proof of it and I don't find the result in my books about Sobolev Spaces. Does anybody have a source for it? EDIT: In my case I can't suppose that $f(0)=0$! AI: It is easiest to approach this using difference quotients. Let $\tau_h^i u(x) = u(x + h e_i)$ where $e_i$ is one of the standard unit vectors. Define the difference quotient $\Delta_h^iu = (\tau_h^i u - u)/h$. You need the two propositions: Proposition 1 If $u\in W^{1,p}(\Omega)$, then for every $\Omega' \Subset \Omega$ with $\mathrm{dist}(\Omega',\Omega^c) > h$, the difference quotient $\Delta_h^iu \in L^p(\Omega')$ and $$\|\Delta_h^i u\|_{L^p(\Omega')} \leq \|u\|_{W^{1,p}(\Omega)}$$ is true. Proposition 2 If $u\in L^p(\Omega)$ for $p \in (1,\infty)$. If there exists a constant $K$ so that $\|\Delta_h^iu\|_{L^p(\Omega')} \leq K$ for every index $i$ and for every $h > 0$, and every $\Omega'\Subset \Omega$ satisfying $\mathrm{dist}(\Omega',\Omega^c) > h$; then $u$ is weakly differentiable with its weak derivative in $L^p(\Omega)$. (Those are propositions 16 and 17 in my lecture notes on Sobolev spaces. Alternatively you can also find them in Section 7.11 of Gilbarg and Trudinger.) To conclude the proof it suffices to observe that if $f$ is Lipschitz, then $$ |\Delta_h^i (f(v))| \leq \mu |\Delta_h^i v| $$ where $\mu$ is the Lipschitz constant of $f$.
H: Quadratic Polynomial Question - Solving for a coefficient using the discriminant This question has been troubling me: A parabola whose equation is of the form $y = Bx^2$ (where B is a constant) has the line $20x - y + 20 = 0$ as a tangent. Find $B$. The explanation says, basically: "The line is a tangent if only one point of contact exists, thus the discriminant $=0$" I thought that for the discriminant to equal 0, the x-axis had to be tangent to the parabola. Any help would be appreciated, thanks. AI: Let us find the intersection of $y=Bx^2$ and $20x-y+20=0$: $20x-y+20=0\implies 20x-Bx^2+20=0\implies Bx^2-20x-20=0$. This is a quadratic equation in $x$. For tangency, both intersection points must coincide, so both roots of the above equation must be the same, i.e., we need the discriminant to be $0$. So, we need $(-20)^2-4\cdot B\cdot(-20)=0$, i.e. $400+80B=0$, which gives $B=-5$.
H: Cannot understand while reading simplicial=singular homology I was reading http://www.math.toronto.edu/mgualt/MAT1300/Week%2010-12%20Term%202.pdf , and I can't understand the last paragraph of pg 29, and the first paragraph of pg 30. It says that "To compute the singular group $ H_n (X ^ k , X^{k-1} ) $, consider all the simplices together as a map $ \phi : \sqcup _\alpha (\Delta _\alpha ^k , \partial \Delta _\alpha ^k ) \rightarrow (X^k , X^{k-1} ) $", but what does $\sqcup _\alpha (\Delta _\alpha ^k , \partial \Delta _\alpha ^k ) $ means, and what is this map exactly? At pg 30, it says that "note that it gives a homeomorphism of quotient spaces." I cannot understand this part. AI: To build $X^k$ from $X^{k-1}$ you attach a $\Delta_\alpha^k$ by gluing its boundary, $\partial\Delta_\alpha^k$, to $X^{k-1}$. So, $\sqcup_\alpha(\Delta_\alpha^k,\partial\Delta_\alpha^k)$ is the disjoint union of all the $k$-simplices, making special note of their boundaries. The $\phi$ is the map that attaches each simplex by identifying its boundary to some part of $X^{k-1}$. Moreover the homeomorphism part has to do with the fact that $X^k/X^{k-1}$ and $\cup_\alpha\Delta_\alpha^k/\partial\Delta_\alpha^k$ are both a collection of spheres. The homeomorphism between them is induced by $\phi$.
H: Inverse Polynomial in a ring R I just started working on my Bachelor-Thesis in IT-Security and therefore try to understand the NTRUencryption algorithm. It operates on polynomials in a Ring. My problem is that I don't understand how someone computes the inverse of a polynomial in such a ring. I just tried to follow the calculation on wikipedia but no intermediate steps are given. So can someone explain me how to get from f to fp and fq? I understand that when N is 11 we need the polynom x^11 - 1. But I don't know how to use it. Please use easy words. Im just a computer scientist not a mathematician :D AI: You're given the polynomial $$f(x)=-1+x+x^2-x^4+x^6+x^9-x^{10}$$ You want to find polynomials $f_3$ and $j$ such that $$f(x)f_3(x)-(x^{11}-1)j(x)\equiv1\pmod3$$ So, you do the extended Euclidean algorithm on the polynomials $f(x)$ and $x^{11}-1$, at every step doing all the computations modulo 3. This may be very messy: you start by dividing a degree 11 polynomial by a degree 10 polynomial, which could give you a degree 9 remainder; then you'll have to divide the degree 10 by that degree 9, which could give a degree 8 remainder; and so on, for quite a few steps; and after you're done with that, you have to build your way back up to $f_3(x)$. It's no wonder wikipedia leaves out the work! It's probably best handled by a computer algebra system like Maple or Mathematica, rather than trying to do it by hand.
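The real algorithm is exactly the extended Euclidean computation described above (or a CAS can do it), but for these toy parameters ($N=11$, modulus $3$) a brute-force search is enough to find $f_p$ and to check an implementation against. A sketch (the helper mult is my own; it may take a little while, and it would be hopeless for real NTRU parameter sizes):

    from itertools import product

    p, N = 3, 11
    # f = -1 + x + x^2 - x^4 + x^6 + x^9 - x^10, coefficients by increasing degree
    f = [c % p for c in [-1, 1, 1, 0, -1, 0, 1, 0, 0, 1, -1]]

    def mult(a, b):
        # multiplication in Z_p[x]/(x^N - 1): exponents wrap around mod N
        out = [0] * N
        for i, ai in enumerate(a):
            if ai:
                for j, bj in enumerate(b):
                    out[(i + j) % N] = (out[(i + j) % N] + ai * bj) % p
        return out

    one = [1] + [0] * (N - 1)
    for cand in product(range(p), repeat=N):
        if mult(f, cand) == one:
            print(list(cand))   # coefficients of f_p, lowest degree first
            break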
H: If $ A^2=0$ , prove that $A$ doesn't necessarily have a row of zeros Question $A \in M_{n \times n} (F), A^2=0, n\ge 3$. Prove that it's not true that A necessarily has a row of zeros. Thoughts We thought that the matrix must be nilpotent, but therefore its main diagonal is 0s (and must mean that there's a line of zeros). I'd love to see a method of finding the counterexample for this one. Thanx AI: Take an easy example, like $A= \left( \begin{matrix} 0 & 1 \\ 0&0 \end{matrix} \right)$ or $A= \left( \begin{matrix} 0 & 0 \\ 1&0 \end{matrix} \right)$. Then, let $A_P=PAP^{-1}$ where $P \in GL_n(\mathbb{R})$. Of course, $A_P^2=0$ and if you choose $P$ correctly, $A_P$ does not contain a row of zeros. Randomly, my first attempt was $P= \left( \begin{matrix} 1 & 2 \\ 1&1 \end{matrix} \right)$ and $A_P= \left( \begin{matrix} 1 & -1 \\ 1&-1 \end{matrix} \right)$. In dimension three, with $A= \left( \begin{matrix} 0&0&1 \\ 0&0&0 \\ 0&0&0 \end{matrix} \right)$ and $P= \left( \begin{matrix} 1&2&3 \\ 2&3&1 \\ 3&1&2 \end{matrix} \right)$ we get the counterexample: $$A_P= \frac{1}{18} \left( \begin{matrix} 7&-5&1 \\ 14&-10&2 \\ 21&-15&3 \end{matrix} \right)$$
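A quick check of the dimension-three counterexample (working with $18A_P$ so everything stays in integers):

    import numpy as np

    B = np.array([[7, -5, 1], [14, -10, 2], [21, -15, 3]])   # 18 * A_P
    print(B @ B)                 # the zero matrix, hence A_P^2 = 0 as well
    print((B == 0).all(axis=1))  # [False False False]: no row of zeros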
H: Does $z^i=i^z$ have any solutions, beside $z=i$? Does this equation have any solutions: $$\large{z^i=i^z}$$ Putting polar form of $z$ is better for LHS, But rectangular form is suitable for RHS ! What to do? Thanks! AI: $$ \frac1z\log\left(\frac1z\right)=\frac1i\log\left(\frac1i\right)=-\frac\pi2 $$ Thus, there are many solutions, one for each branch of the Lambert W function: $$ z=e^{-\mathrm{W}\left(-\pi/2\right)} $$ One solution is Exp[-LambertW[0,-Pi/2]] which is -i. Another is Exp[-LambertW[-1,-Pi/2]] which is i. However, another is N[Exp[-LambertW[1,-Pi/2]],20] which is 1.0213233161306520062 - 4.8683538060775645979 i Of course, the value of $z^i$ depends on which branch of $\log(z)$ is used. In the computation above, I have taken $\log(i)=\pi i/2$, so we have a clear definition of $$ i^z=e^{\pi iz/2} $$ However, $\log(z)$ is determined only mod $2\pi i$ and that affects $z^i$ by a factor of $e^{2\pi k}$ for $k\in\mathbb{Z}$. For example, for $z=1.0213233161306520062-4.8683538060775645979\,i$, we use $\log(z)=1.60429091344801115852-7.64719227612459292313\,i$.
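The same values can be reproduced with SciPy's Lambert W (the branch index is the second argument, matching the Mathematica calls above). For the $k=0$ and $k=-1$ branches the check $z^i=i^z$ already works with the principal logarithm, while other branches need the non-principal logarithms discussed above:

    import numpy as np
    from scipy.special import lambertw

    for k in (0, -1, 1):
        z = np.exp(-lambertw(-np.pi / 2, k))
        print(k, z, z ** 1j, 1j ** z)   # last two agree for k = 0 and k = -1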
H: Combinatorics problem arising from physics I'm currently studying general mechanics where the following problem came up: Assume we have the space $\Gamma = \mathbb{R}^6$ which we are dividing in small cells $C_i$ . Let $f(\vec{x}, \vec{v})$ be a probability distribution on our $\Gamma$. The amount of elements in a cell $C_i$ is (not exactly) given by: $N_i = \int_{C_i} f(\vec{x}, \vec{v}) d^3x d^3v$. Since it doesn't matter in my problem how my elements are numbered, as long as there are the same amount of particles in each cell, I'm interested in finding out how many possibilities for one configuration of particles in cells there are. Let $N$ be the total amount of particles i.e. $N = \sum N_i$. My initial guess is, that there are $N!$ different possibilities because there are $N$ slots. So the first particle has $N$ possible places, the second one has $N-1$ possible places $\implies$ $N \cdot (N-1) \cdot ... \cdot 1$ = $N!$ But this isn't correct yet, because if I have a cell that has, lets say, $2$ elements, there are $2!$ possible ways for the elements to get labeled which still give the same overall state, which means I have to divide by $2!$. This makes me say the correct result would be $\frac{N!}{N_1! \cdot N_2! \cdot ... \cdot N_k!}$ ($k$ = amount of cells), which I know is the right solution but I'd like to know it with some more mathematical rigor. Anybody care to help me? Cheers! AI: I don't know this way has the rigour you want, but here goes: You have $N$ total particles, $N_1$ go to the first cell. The number of ways to achieve this is $${N\choose N_1} = \frac{N!}{(N-N_1)!N_1!}$$ You are left with $N-N_1$ remaining particles, and now $N_2$ of these go to the second cell. The number of ways this can be done is $${N-N_1\choose N_2} = \frac{(N-N_1)!}{(N-N_1-N_2)!N_2!}$$ Continue this way till the end; the overall number of ways to do this exercise is $${N\choose N_1}{N-N_1\choose N_2} \cdots {N-N_1-N_2\cdots -N_{k-1} \choose N_k}= \frac{N!}{N_1!N_2!\cdots N_k!} $$ All the factors other than $N!$ and $N_i!$ get cancelled due to successive multiplication.
H: Extending our language with a new function symbol Given an arbitray first-order theory (not necessarily a set theory) and definable predicates $P(*)$ and $Q(*,*)$ in the language of that theory, if we adjoin a new function symbol $f$ together with the axiom $$\forall x(P(x) \Rightarrow \exists y(Q(x,y))) \Rightarrow \forall x (P(x) \Rightarrow Q(x,f(x)))$$ is this extension necessarily conservative? Note that we're not requiring that the $y$ satisfying $Q(x,y)$ be unique in the antecedent. Okay that was my first question. Supposing the answer is 'yes', my second question is this. Suppose our original first-order theory includes an axiom schema like separation, or replacement that runs over the definable predicates of our old language. Does the more general schema running over the definable predicates of the extended language also hold? Finally, and this is probably a silly question, but supposing that both answers are yes, why doesn't this make the Axiom of Choice redundant? AI: The paradox is resolved here (thank you for the link, aws). In brief: Yes, adjoining such an $f$ together with such an axiom is indeed conservative. No, otherwise the axiom of choice would be provable in any set theory in which the axiom of union and the axiom schema of separation hold. So in particular, ZF would prove choice. Since this question is predicated on the assumption that the answers to the previous two questions are 'yes', we may disregard it.
H: Prove $(1-\cos x)/\sin x = \tan x/2$ Using double angle and compound angles formulae prove, $$ \frac{1-\cos x}{\sin x} = \tan\frac{x}{2} $$ Can someone please help me figure this question, I have no idea how to approach it? AI: $$\dfrac{1-\cos x}{\sin x}=\dfrac{1-(1-2\sin^2\frac x2)}{2\sin\frac x 2\cos\frac x2}=\dfrac{\sin\frac x2}{\cos\frac x2}=\tan\frac x2$$
H: Inequality involving tangent provided $\tan\theta\geq 1$ If $\tan\theta\geq1$, then $$\sin\theta-\cos\theta\leq\mu(\cos\theta+\sin\theta)\implies\tan\theta\leq\dfrac{1+\mu}{1-\mu}.$$ Why? I get as far as the obvious $$\tan\theta\leq1+\mu(\cos\theta+\sin\theta)$$ AI: Assume for the moment that we are in quadrant I (i.e., $\sin{\theta} \ge 0$ and $\cos{\theta} \ge 0$). Then $$\frac{\sin{\theta}-\cos{\theta}}{\sin{\theta}+\cos{\theta}} \le \mu$$ Divide through up and down by $\cos{\theta}$: $$\frac{\tan{\theta}-1}{\tan{\theta}+1} \le \mu$$ which means that $$\tan{\theta}-1 \le \mu (\tan{\theta}+1) \implies (1-\mu) \tan{\theta} \le 1+\mu$$ The result follows. You then need to show that this also works in quadrant III.
H: How to determine whether an isomorphism $\varphi: {U_{12}} \to U_5$ exists? I have 2 groups $U_5$ and $U_{12}$ , .. $U_5 = \{1,2,3,4\}, U_{12} = \{1,5,7,11\}$. I have to determine whether an isomorphism $\varphi: {U_{12}} \to U_5$ exists. I started with the "$yes$" case: there is an isomorphism. So I searched an isomorphism $\varphi$ , but I didn't found. So I guess there is no an isomorphism $\varphi$ . How can I prove it? or at least explain? please help. AI: I don't know how much algebra you know but notice that $U_5$ is cyclic, with generator, say, $[2]$. Indeed for any prime $p$, $u_p$ is going to be cyclic of order $p-1$. Now look at $U_{12}$, is it cyclic? A rather tedious check tells you that $1^2,5^2,7^2,11^2$ all equal $1$ mod $12$ and so this group is not cyclic. In fact from this you can derive that $U_{12}$ is isomorphic to $C_2 \times C_2$. Now as isomorphic groups are either both cyclic or both not cyclic (as isomorphisms preserve the order of elements), these two groups are not isomorphic.
H: $a, b, c, d$ are positive integers, $a-c|a b+c d$, and then $a-c|a d+b c$ $a, b, c, d$ are positive integers, $a-c|a b+c d$, and then $a-c|a d+b c$ proof: really easy when use $a b+c d-(a d+b c)$ however my first thought is, $a-c| a b+c d+k(a-c)$, and set some $k$ to prove, failed. question1 : is this method could be done? any other methods? question2 : And is there any relations between the question and determinant/matrix $$\begin{array}{cc} a & d \\-c & b \end{array}$$ AI: HINT: $$ab+cd-(ad+bc)=b(a-c)-d(a-c)=(a-c)(b-d)$$ Alternatively, $$ab+cd=b(a-c)+bc-(a-c)d+ad=(a-c)(b-d)+ad+bc$$ we are reaching at the same point
H: Infimum, supremum of a set problems I am solving some infimum/supremum problems, and my book has different answers for some of the problems. Let $A = \{ x \in \Bbb N | x^2 < 5\}$, find sup A and inf A; their answer is sup A = $\sqrt5$, inf A = $-\sqrt5$. I think this is wrong, since A is a finite set; it's clear sup A = 2 and inf A = 0. Am I missing something? Another one, $A = \{x^2+x |x \in (-1, 1)\}$; they say sup A = 1 and inf A = 0, I think sup A = 2 and inf A = 0, again am I wrong? AI: For the first, I'd say you're right. The official solution would be for $x \in \mathbb{Q}$ (or $\mathbb{R}$) instead of $x \in \mathbb{N}$. For the second one, take $x = -\frac{1}{2} \in (-1,1)$ (I assume these are the real numbers between $-1$ and $1$, excluding the borders). Then $$x^2 + x = \frac{1}{4} - \frac{1}{2} = -\frac{1}{4},$$ so $\inf A \le -\frac{1}{4}$. Hint to solve this: You want to check the borders of the interval $(-1,1)$ and any minimum/maximum of your expression in that interval, for which derivatives will be very helpful. One of the obtained values of $x$ (there should be 3 of them) will give you the infimum.
H: Number of Permutations without the "diagonal terms" If I have a set of n numbers (we can say n is 5, to create a concrete example), then there are n! (5!) different ways of arranging these numbers. How many of these don't use the "diagonal terms" - i.e., the first term isn't a 1, the second term isn't a 2, and the nth term isn't an n. My thinking was that at first, you have n!. Step 1) you remove all the ones with a 1 in the first place: n! - (n-1)! Step 2) you remove all of the twos in the second place: n! - ( (n-1)! - (n-2)! ) We subtracted out (n-2)! because some of those twos were already removed by (Step 1) This doesn't work though - for 5 terms, it yields 47 when it should yield 44 Thoughts? AI: This is the classical problem of counting Derangements. The problem occurs under many other names, such as Rencontres. We want to count the number, often called $D_n$, of permutations of $\{1, 2,\dots,n\}$ with no fixed points. We have $$D_5=(5!)\left(1-\frac{1}{1!} +\frac{1}{2!}-\frac{1}{3!}+\frac{1}{4!}-\frac{1}{5!}\right).$$ The derivation of the general formula is a quick Inclusion/Exclusion argument. The article linked to is brief, but does the job.
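A brute-force confirmation for $n=5$, compared against the inclusion/exclusion formula (both print 44):

    from itertools import permutations
    from math import factorial

    n = 5
    brute = sum(all(p[i] != i for i in range(n)) for p in permutations(range(n)))
    formula = round(factorial(n) * sum((-1) ** k / factorial(k) for k in range(n + 1)))
    print(brute, formula)   # 44 44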
H: calculate angles between O'clock hands suppose that now it is $1:50$, we need to calculate angle between these hands first because we have $12$ hour system per day and night and they are equal, each hour corresponds $360/12=30$, from $10$ to $1$ we have $30+30+30=90$, but i want to know what should be degree of angle at the same time from $1$ to $2$? Because there is $30$ degree and $5$ dot, each one should equal to $30/5=6$ right? or? please help me AI: That extra angle is due to extra $50$ min. after $1:00$. For $60$ min. hour hand moves $30^\circ$, so for $50$ min. it moves by $\frac{50}{60}\cdot30^\circ=25^\circ$ So total angle b/w hr hand and min. hand at $1:50$ $=90^\circ+25^\circ=115^\circ$
H: Mysterious Matrix Norm Given a matrix $M$, does anyone know the name and the definition of the following norm? $$ \|M\|_* $$ Thanks in advance, Francesco. AI: That's the Schatten $1$-norm, better known as the nuclear norm or trace norm. It's defined as $ \lVert A \rVert_* := \operatorname{tr}(\sqrt{AA^T}) $, where tr is the trace of the matrix and $A^T$ is the transpose ($A^*$, the conjugate transpose, for complex matrices); equivalently, it is the sum of the singular values of $A$. Alternatively, the notation is sometimes used for an induced matrix norm (for $A \in M_{m \times n}$): $ \lVert A \rVert_p = \max_{x \neq 0} \frac{\lvert A x \rvert_p}{\lvert x \rvert_p}, \quad x \in K^n, $ where $ \lvert \cdot \rvert_p$ is the vector $p$-norm. It really depends on the notation :)
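If NumPy is available, this norm can be computed directly; a quick sketch showing that it is the sum of the singular values:

    import numpy as np

    A = np.array([[3.0, 0.0], [4.0, 5.0]])
    s = np.linalg.svd(A, compute_uv=False)
    print(s.sum(), np.linalg.norm(A, 'nuc'))   # both give the nuclear (trace) norm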
H: $u\in W_0^{1,p}(\Omega)$ but it's extension by zero does not belong to $W^{1,p}_0(\mathbb{R}^N)$ My problem is the following: I want to find a bounded domain $\Omega\subset\mathbb{R}^N$ such that if $u\in W_0^{1,p}(\Omega)$, $p\in (1,\infty$), then the extension by zero of $u$ to $\mathbb{R}^N$ is not in $W_0^{1,p}(\mathbb{R}^N)$. If such $u$ do exist, then the problem must be a problem of "differentiability" in $\partial\Omega$, but I could not figure out how to construct such $u$. I would like to note that if $\Omega\in C^{0,1}(\Omega)$, then such $u$ does not exist because we have a extension operator between $W^{1,p}(\Omega)$ and $W^{1,p}(\mathbb{R}^N)$. Thank you AI: There is no such domain. By definition of $W^{1,p}_0(\Omega)$, $u$ is a $W^{1,p}$ limit of smooth compactly supported functions $u_n$. Extend each $u_n$ by zero; you now have a sequence of smooth compactly supported functions which converges in $W^{1,p}(\mathbb R^N)$. Its limit is an element of $W^{1,p}_0(\mathbb R^N)$, which is nothing else but the zero extension of $u$. The problem of smoothness of $\partial \Omega$ comes up when you interpret the vanishing of $u$ on the boundary differently, i.e., in the sense of traces. Then the question becomes: does having zero trace imply $u\in W^{1,p}_0(\Omega)$? If $\partial \Omega$ is Lipschitz, this is true (e.g., Theorem 15.29 in A first course in Sobolev spaces by Leoni). (I see that you know the last part, but it was natural to include it for completeness).
H: Definability of Kolmogorov Complexity? This paper claims to have a proof of Godel's Second Incompleteness Theorem using Kolmogorov Complexity: http://www.ams.org/notices/201011/rtx101101454p.pdf As far as I can tell, it seems to assume that Kolmogorov complexity (over some language or Turing Machine) is definable in Peano Arithmetic, and refers to concepts like the Godel Number of the statement "K(x) > N", that is, the Kolmogorov Complexity of the number x is greater than N. Is it true that Kolmogorov complexity is definable in a theory like PA? AI: Yes, Kolmogorov complexity is definable in Peano Arithmetic. The key to this and lots of similar definability results for PA is that one can define, in the language of PA, a system for coding finite sequences of natural numbers by single natural numbers, and one can prove in PA the basic combinatorial properties of (encoded) finite sequences (for example, that every two sequences have a concatenation, which has the expected length and the expected components). Repeating this, one can also deal, in PA, with finite sequences of finite sequences of natural numbers,etc. Once one sees how to formalize statements about finite sequences, it is routine, though rather tedious, to write down the basic definitions of computability theory, up to and including (and beyond) the notion of Kolmogorov complexity.
H: Solve equation using combinations of integers from 0 to 9 in Maple Display answers for $x$ using all combinations of $0$ to $9$ integers for $a$ and $b$ $\dfrac{1}{x^{2}}=\dfrac{a^{2}}{s^2}+\dfrac{b^{2}}{t^{2}}$ The values for $s$ and $t$ are known values and must be entered by the user AI: What I could do for you is as follows, however, I couldn't insert proper codes to the program till it takes $s$ and $t$ from keyboard automatically. I think this program deserves to be considered. I assumed that $s=2$ and $t=5$ as you see: [> for i from 0 to 9 do for j from 0 to 9 do c[i, j] := solve(1/x^2 = (1/4)*i^2+(1/25)*j^2, {x}) od; od; [> eval(c); For example while $s=2,t=5$, we can call some of the result: [> c[9,9]; {x = -(10/261)*sqrt(29)}, {x = (10/261)*sqrt(29)} [> c[3,8]; {x = -(10/481)*sqrt(481)}, {x = (10/481)*sqrt(481)}
H: Does every normal space have countable basis? I know that every regular space with a countable basis is normal. But my question is if the converse is true? Normal spaces are obviously regular but does every normal space have a countable basis? Can someone help me please? AI: No, the discrete topology on an uncountable set is an obvious counterexample.
H: For what integers $n$ does $\phi(2n) = \phi(n)$? For what integers $n$ does $\phi(2n) = \phi(n)$? Could anyone help me start this problem off? I'm new to elementary number theory and such, and I can't really get a grasp of the totient function. I know that $$\phi(n) = n\left(1-\frac1{p_1}\right)\left(1-\frac1{p_2}\right)\cdots\left(1-\dfrac1{p_k}\right)$$ but I don't know how to apply this to the problem. I also know that $$\phi(n) = (p_1^{a_1} - p_1^{a_1-1})(p_2^{a_2} - p_2^{a_2 - 1})\cdots$$ Help AI: Euler's $\phi $ function is multiplicative. More elaborately if for $a,b\in N$ with $(a,b)=1$ then $\phi (ab)=\phi (a)\phi (b)$. So let $n=2^km$ with $m$ being odd. Then we have if $k\ge 1$, $$\begin{align} \phi (n)&=\phi(2^k)\phi(m)=2^{k-1}\phi(m) \\ \phi(2n)&=\phi(2^{k+1})\phi(m)=2^{k}\phi(m)\end{align}$$ So $\phi (n)\ne \phi(2n)$. So $k<1\Rightarrow k=0\Rightarrow n$ must be odd. Another easy proof: Let $n=2^k\prod_{i=1}^{n}p_i^{\alpha_i}$ with $k\ge 1$ and $2\ne p_i =$ primes, then we have $\phi (n)=\frac{n}{2}\prod_{i=1}^{n}(1-\frac{1}{p_i})$ and $\phi (2n)=\frac{2n}{2}\prod_{i=1}^{n}(1-\frac{1}{p_i})$.Can $\phi (n)$ be equal to $\phi(2n)$? Now consider $n=2k+1$ and find $\phi (n)$ and $\phi (2n)$. What do you see?
H: Is $\Bbb{6Z}$ a free group? I'm trying to understand the concept of free groups, and from what I've learned so far, a group $G$ is called a free group if there is a subset $S ⊂ G$ such that any element of $G$ can be written uniquely as a product of elements of $S$ and their inverses. So, is $\Bbb{6Z}$ a free group? or $\Bbb{Z}/17\Bbb{Z}$? or the group $$A = \left\langle\begin{pmatrix} 1 & 1 \\ 0 & 1\end{pmatrix} , \begin{pmatrix} 0 & 2 \\ -1 & 0\end{pmatrix}\right\rangle$$ Let's take for example $\Bbb{18Z}$ (the coset $18k , k \in \Bbb{Z}$), which is a subset of $\Bbb{6Z}$ (the coset $6k , k \in \Bbb{Z}$). So, by the definition of free groups, do I need to take for example an element of $\Bbb{6Z}$, e.g. $12$, and a few elements of $\Bbb{18Z}$, and build (with them and their inverses) the element $12$? AI: Free groups may be defined via a "universal property" as follows: A group $G$ is free if there exists a set $S$ and a function $\phi:S\rightarrow G$ such that the following "universal property" is satisfied: for every group $H$ and function $g:S\rightarrow H$ there exists a unique group homomorphism $G:G\rightarrow H$ (the same letter is used for the group and the homomorphism here) such that $g=G\phi$. The set $S$ is said to "generate" $G$. Note that there are infinitely many free groups generated by a set $S$. The universal property implies the following: $G$ is generated by $\phi(S)$. Any two free groups generated by some set $S$ are isomorphic. Any group isomorphic to a free group generated by $S$ is also a free group generated by $S$. If $|S|=|T|$ then any free group generated by $S$ is a free group generated by $T$ and vice-versa. $\phi$ as above is injective. Of course, an essential result is that for a given set $S$, there always exists a free group generated by that set. That group can be constructed in the following way: Let $S_i= S\times \{i\}$ for $i=1,-1$. Let $\overline{G}$ be the set of finite (possibly empty) sequences in $S_1\cup S_{-1}$. A sequence in $\overline{G}$ is said to be "reduced" if no two consecutive elements of the sequence are of the form $(s,i)(s,-i)$ for some $s\in S$. Note that every sequence in $\overline{G}$ can be uniquely reduced. Finally, let $G$ be the subset of $\overline{G}$ consisting of the reduced sequences, with group operation given by concatenation followed by reduction. Then $G$ is a free group generated by $S$ (the identity of $G$ is the empty sequence $()$), with $\phi(s)=(s,1)$. If $g:S\rightarrow H$, we simply define $G((s_1,i_1),\ldots,(s_n,i_n))=g(s_1)^{i_1}\cdots g(s_n)^{i_n}$. In particular, if $S\neq\varnothing$, then take $s\in S$ and consider any function $f:S\rightarrow\mathbb{Z}$ such that $f(s)=1$. By the universal property, there exists a homomorphism $F:G\rightarrow\mathbb{Z}$ such that $F(\phi(s))=1$. Therefore, $F$ is surjective, and hence $G$ must be infinite. Now you can easily verify that your definition of free group is equivalent to the one I gave here (in your case, $G$ is a free group generated by $S\subseteq G$ and $\phi$ is simply the inclusion). Now we know that $\mathbb{Z}/17\mathbb{Z}$ is not free since it is finite. $\mathbb{Z}$ is the free group generated by $\left\{a\right\}$, for if $\phi:a\in\left\{a\right\}\mapsto 1\in\mathbb{Z}$ and $f:a\in\left\{a\right\}\mapsto f(a)\in H$, then $F(n)=f(a)^n$ "extends" $f$ in the above sense. The case of the matrices is more difficult.
But as in the above answer, there exists a subgroup of $SL(2,\mathbb{Z})$ which is a free group generated by two elements (if I'm not mistaken, those two elements are $\begin{pmatrix} 1 & 2\\0 & 1\end{pmatrix}$ and $\begin{pmatrix} 1 & 0\\ 2 & 1\end{pmatrix}$).
H: How to solve a ratio question Studying for the GRE. In the GRE guide, it says that If the ratio is $2x:5y$, and this equals the ratio $3:4$, what is the ratio of $x:y$? I tried cross multiplying but I don't get the answer. It says the answer is $15:8$. I get $8:15$. Which step am I missing? AI: We are given: $$\dfrac {2x}{5y} = \frac 34$$ $$2x\cdot (4) = 5y \cdot (3)\tag{1}$$ $$ \iff 8x = 15 y\tag{2: cross-multiplied}$$ $$\iff \frac {8x}{y} = 15\tag{divide by y}$$ $$ \iff \frac xy = \frac{15}{8}\tag{divide by 8}$$ It seems as though you went from $(2)$ to $\dfrac {8x}{15y} = 1$, concluding the ratio is $8:15$. But we want $x: y$ which is the value of $\dfrac xy$, so $$\frac {8x}{15y} = 1 \iff \dfrac{8x}{15y}\cdot \dfrac{15}{8} = 1\cdot \dfrac{15}{8} \iff \dfrac xy = \dfrac{15}{8}$$ Rather than cross-multiplying, it makes more sense in this problem to start from the given $$\frac {2x}{5y} = \frac 34 \iff \frac {2x}{5y}\cdot \frac 52 = \frac 34 \cdot \frac 52 \iff \frac xy = \frac{15}{8}$$
H: How to solve percentage of new I am good with all percentage questions except finding the original price of something. If I had a coat that cost $120 after an 8% increase, how do I formulate the original price before the increase? AI: Let the original price be $x$. So, after an 8% increase, the price would be $$x+\dfrac{8}{100}\cdot x=120$$ that is, $$1.08x=120$$ $$x=\dfrac{120\cdot 100}{108}$$ and we get $x\approx 111.11$.
H: invariant sub space So I preparing myself to a test in linear algebra and I scanned the last years test and I reached a question which I do not understand why is it like that. True or false: $ \forall T\colon V \rightarrow V$ $\exists 0_{V} \neq W \neq V $ so that $T(W) \in W$ So the answer is false and as example he gave this matrix $\begin{pmatrix} 0 & 1\\ 0 & 0\\ \end{pmatrix}$ So how do u proof that for the above transformation there is no such a subspace ? I know the answer is very trivial but I still do not get it. Thanks in advanced !! AI: Presumably, the question should rather read $T(W)\subseteq W$. However, for the given matrix, the space spanned by the first basis vector is invariant, so that's not a counterexample. An obvious counterexample would have been a case with $\dim V=1$ instead.
H: $\displaystyle f: \Bbb C \to \Bbb C$ is an analytic function let $\displaystyle f: \Bbb C \to \Bbb C$ be an analytic function. For $z = x + iy$, let $u, v: \Bbb R^2 \to \Bbb R$ be such that $u(x,y) = \Re f(z)$ and $v(x,y) = \Im f(z)$. Which of the following are correct? 1. $\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2} = 0$ 2. $\frac{\partial^2 v}{\partial x^2}+\frac{\partial^2 v}{\partial y^2} = 0$ 3. $\frac{\partial^2 u}{\partial x\,\partial y}-\frac{\partial^2 u}{\partial y\,\partial x} = 0$ 4. $\frac{\partial^2 v}{\partial x\,\partial y}+\frac{\partial^2 v}{\partial y\,\partial x} = 0$ Now since $f$ is given as an analytic function, $u$ and $v$ must be harmonic functions, so options 1 and 2 are correct. Further, as the C-R equations are satisfied by $u$ and $v$, using the C-R equations option 3 can be obtained and option 4 is incorrect. Am I right? But in the original answer key only the 1st and 3rd are shown as correct options; I am wondering why option 2 is not there? AI: I don't know why Option 2 isn't there as I didn't author the textbook! However, if I did author the textbook, then I'd put it there because as you correctly point out, the real and imaginary parts of a holomorphic function are always real harmonic functions. Now, I'm not sure the Cauchy-Riemann equations come into the picture. Option 3 just says that the mixed partial derivatives of $u$ are equal. That's always going to be the case for any $C^2$ function. Thankfully, in our case, $u$ is real harmonic and, in particular, $C^{\infty}$ so Option 3 is satisfied. In light of this, can you see why Option 4 must fail in general? I hope this helps!
H: Triangle inequality, is this implication correct? $a, b,$ and $c$ are not necessarily sides of a triangle, but they are positive numbers. The question was: Given that $a+b>c$ , does this imply that $\sqrt a +\sqrt b > \sqrt c$ , and that $a^2+b^2>c^2$ ? The one with the square roots is easily proven by assuming it is true and squaring both sides. However, the one with the squares is giving me problems. I cannot arrive at a strong result. Of course sometimes $a^2+b^2$ will be equal to $c^2$ when $a, b$ are sides of a triangle, but I am not able to definitively prove that $a^2+b^2$ is never $less$ than $c^2$ , so I am starting to believe that this statement is false. What is the way to solve this? Thanks. AI: Counterexample for the one with the squares: $a = 6, b = 7, c = 12$. Here, $a + b > c$ and $a^2 + b^2 < c^2$. That counterexample is sufficient to disprove the conjecture. Additionally, if $a + b > c$ (together with the other two triangle inequalities $a+c>b$ and $b+c>a$), the 3 numbers can be thought of as the sides of a triangle. Then because of the law of cosines, $c^2 = a^2 + b^2 - 2ab\cos(\gamma)$, i.e. $a^2 + b^2 = c^2 + 2ab\cos(\gamma)$. The $\cos$ function can yield a positive, $0$, or negative result, so any of the following can be true: $a^2 + b^2 > c^2$, $a^2 + b^2 = c^2$, $a^2 + b^2 < c^2$.
H: Operations on negative integers I was trying to teach my younger sister some math, and it drifted on to integers, and operations on negative integers. So questions like: a) $-3+2 = ?$ b) $2- (-3)= ?$ c)$-3 -2 = ?$ had to be answered. So, I did not want to say that because minus of minus is plus, so the answer to b) is 5, and minus of plus is minus, so you can solve a) and c) likewise....and I explained in detail how we can simplify these and for b) particularly, I said: Think of 2 as $5-3$ then $5-3-(-3)$ is the question. Now, I can say that is the same as $5+0-3-(-3)$ and by the definition of $-3$, $0-3=(-3)$, So now the expression becomes; $5 + (-3) - (-3)$ and since $(-3)$ is being added and subtracted, we will just cancel that and write 5. So, that is your answer! But I think this is a little too long so I said that you should observe this pattern and then use the result $-(-x)=x$ and solve questions. To see if she has really understood, I asked her to explain this to my mother, and the result was not satisfactory. So, my question is, How can I better explain it to her? Or rather, does this have any flaw which needs to be corrected? Please suggest a method that will involve only basic algebra as she is in grade 4... Aslo, if you find an easier way, do share it with me! Thanks in advance! PS: I think the tags I added are not right, but thats all I could think of, so if I have gone wrong, do edit that. AI: I remember learning it as a child using something like a "digging a hole" metaphor. You can think of "a box of dirt" as being a unit, and talk about boxes of dirt stacked on top of each other. If you dig a cubish hole in the ground and use it to fill one box, the empty space is supposed to represent $-1$. If there are no boxes full of dirt and no holes, then we are at 0. If you have a box of dirt and dump it into a 1-box-hole, it fills perfectly and you get $0$. So how to deal with $-(-3)$? One can interpret this as "taking away 1-box vacancies". Taking away a one box vacancy is the same as filling it in, i.e. $-(-1)=+1$. I guess what I have in mind is really thinking of $+1$ as a box of dirt, $-1$ as a vacancy (hole) of the exact same size as one box of dirt, and the positive numbers as stacks of filled unit boxes, and negative numbers as stacks of "vacancies" of the same size as a box of dirt. There are obvious variations of this for children as they grow older. Obviously, if they understand how to use money, you can do the same picture but with owned dollars and owed dollars. If you "have" $-1$ dollars, you owe $1$ dollar. If you have $-3$ debt and you add $+2$, the combination would be that you still owe $-1$. The $-(-2)$ can be interpreted as "the removal of a debt of 2" which would be the same thing as gaining $+2$. Electric charge is another model, but less down to earth than money and dirt, maybe. Somehow I forgot about another obvious version, that of a thermometer. If you can imagine $+1$ as being a unit of "heat" and $-1$ as being a unit of cold, and $0$ as being room temperature, or freezing or whatever. (Of course this analogy breaks because of absolute zero, but you can get away with it for now.) Now the idea of $-(-1)=1$ is "removing one cold is the same as adding a heat". If you have $7$ hot and combine it with $-6$ (6 cold), then you are still warmer than 0 at $+1$. If you are at 3 cold and add 4 more cold $-3-4$ then you have a total of seven colds $-7$. Summary Try to establish the idea of a "vacancy of one unit" as opposed to "one unit". 
We've mentioned that there are several models to do this (listed in the order that I like them): dirt, temperature, money, electric charge. All of them rely on building the idea of a unit that "is present" and a unit that "is absent", and the idea that existence and absence cancel each other out.
H: How to find the corresponding eigenfunction after determining the eigenvalues? I was reading this page (http://www.jirka.org/diffyqs/htmlver/diffyqsse25.html) example 4.1.4, which says: Again $A$ cannot be zero if $\lambda$ is to be an eigenvalue, and $sin(\sqrt {\lambda} \pi)$ is only zero if $\sqrt {\lambda}=k$ for a positive integer $k$. Hence the positive eigenvalues are again $k^2$ for all integers $k ≥ 1$. And the corresponding eigenfunctions can be taken as $x=cos(kt)$. My question is how is the corresponding eigenfunction determined? AI: Recall that the general solution is $x(t)=A \cos{\sqrt{\lambda} t} + B \sin{\sqrt{\lambda} t}$. The condition $x'(0)=0$ meant that $B=0$. The condition $x'(\pi)=0$ means that $\lambda=k^2$ for any integer $k$. The corresponding eigenfunction is then the result of plugging in the corresponding eigenvalue: $x_k(t) = \cos{k t}$. Note we ignore the constant $A$ here for now; the coefficient of an eigenfunction will be determined elsewhere.
H: solving equations by the method of substitution $\dfrac{a}{x}+\dfrac{b}{y}=\dfrac{a}{2}+\dfrac{b}{3},$ $x+1=y$ We have to solve for $x$ and $y$. I have tried to solve for them by finding the value of $x$ or $y$ from the second equation and placing it in the first. It is obvious that the answers would be $2$ and $3$, but we need something else. I tried to find the relation between $a$ and $b$ and then place them again in the first equation along with the value of $x$ or $y$, but it yields something bizarre. So how do we solve it? A tiny hint will be appreciated. AI: Hint: (1) Multiply each side of the first equation by $(2)(3)(x)(y)$; (2) Then replace $y$ by $x+1$; (3) Rearrange, you get a quadratic equation in $x$. You could solve the resulting equation using the Quadratic Formula. Actually, the quadratic factors nicely. Alternately, you spotted one of the roots. If you know one root of a quadratic, then by glancing at the coefficients you can find the other. This is because in the quadratic equation $px^2+qx+r=0$, the product of the roots is $r/p$ and the sum of the roots is $-q/p$. You will find the product criterion more pleasant. If you use it you don't even need to calculate the messiest coefficient of the quadratic, the coefficient of $x$.
H: Algebraic Transformation query... I'm boning up on Algebra, and I'm looking into Algebraic Transformation. I understand the basic concept - but I'm confused by two self assessment questions. The two questions, from what I can see, are almost similar but have quite different ways of arriving at the end result. Now, what I'm confused about is why the two different approaches. First example is thus (make N the subject): $$R = \frac{2N}{C-P}$$ Stage 1: $$R(C-P) = 2N$$ Stage 2: $$\frac{R(C-P)}{2}=N$$ Second Example, make L the subject $$D = \frac{CL^2}{2+R}$$ Stage 1: $$D(2+R) = CL^2$$ Stage 2: $$\frac{D}{C}(2 + R) = L^2$$ I'm confused as to why in the first example (C - P) is divided by the coefficient 2, whereas in Example 2 D is divided by C. Is there a 'reason' or 'rule' why the two different approaches, or does it come down to understanding the formula and working it out? Sorry for such a basic question - still learning. Many thanks for any and all replies. AI: Let's look at the first example again: $$R = \frac{2N}{C-P}$$ You want to get $N$ on its own. So you first get rid of the $C-P$ downstairs. This is achieved by multiplying both sides of the equation by $C-P$, giving $R(C-P) = 2N$. Now, to get $N$ on its own we divide both sides of the equation by 2. Giving the answer $N = \frac{R(C-P)}{2}$. Let's look at the second example: $$D = \frac{CL^2}{2+R}$$ You want to get $L$ on its own. So you first get rid of the $2+R$ downstairs. This is achieved by multiplying both sides of the equation by $2+R$, giving $D(2+R) = CL^2$. Now, to get $L^2$ on its own we divide both sides of the equation by C. Giving the answer $L^2 = \frac{D(2+R)}{C}$. Finally, we need to get rid of the power of two, so we square-root both sides: $$L = \pm\sqrt{\frac{D(2+R)}{C}}.$$ (We need the $\pm$ because if $x^2 = 4$ then $x = -2$ works since $(-2)^2=4$ and $x=+2$ also works since $(+2)^2=4.$ Hence $x^2 = 4$ means $x=-2$ or $x=+2$, i.e. $x=\pm 2$.)
H: Generalizing a theorem about indentations around simple poles Assume the function $f(z)$ has a simple pole at $z_{0}$. There is a theorem that states that if $C_{r}$ is an arc of the circle $|z-z_{0}| = r$ of angle $\alpha$, then $$\lim_{r \to 0} \int_{C_{r}} f(z) \, dz = i \alpha \, \text{Res}[f,z_{0}].$$ But what if $z_{0}$ is a pole of higher order? Can we say anything definitive about $ \lim_{r \to 0} \int_{C_{r}} f(z) \, dz $? AI: For $n \geqslant 2$, the function $\dfrac{1}{(z-z_0)^n}$ has a primitive $\dfrac{(-1)}{(n-1)(z-z_0)^{n-1}}$, so if the arc is $z_0 + re^{it}$ for $\varphi \leqslant t \leqslant \vartheta$, $$\int_{C_r} \frac{dz}{(z-z_0)^n} = \frac{1}{(n-1)r^{n-1}}\left(e^{-i(n-1)\varphi} - e^{-i(n-1)\vartheta}\right).$$ In general, that is unbounded for $r \to 0$, but for a given $n$, there are choices of $\varphi$ and $\vartheta$ that make the integral vanish. If the principal part of $f$ has more than one term of order $< -1$, the choices for the difference between the two angles that make the integral vanish are even more restricted.
H: Evaluate the integral: $\lim \limits_{n\to\infty}\int_0^1\frac{nx}{nx^3+1}$ Evaluate the integral: $$\lim \limits_{n\to\infty}\int_0^1\frac{nx}{nx^3+1}dx$$ I'm pretty much stuck on how to solve this one: $$\int_0^1\frac{nx}{nx^3+1}dx$$ or even getting the improper integral. What can I do? AI: If you think that your integral is $\int_0^1\frac{x}{x^3+1/n}$, in the limit it is the integral of $1/x^2$, so you should expect it to diverge. Then you can do the following: $$ \int_0^1\frac{nx}{nx^3+1}=\int_0^1\frac{x}{x^3+1/n}\geq\int_{1/n^{1/3}}^1\frac{x}{x^3+1/n}\geq\int_{1/n^{1/3}}^1\frac{x}{2x^3}\\ \ \\=\int_{1/n^{1/3}}^1\frac1{2x^2}=\frac{n^{1/3}-1}{2}. $$ So $$\lim_n\int_0^1\frac{nx}{nx^3+1}=\infty.$$
H: Proof of $ \lim_{y\to\infty} (\tan\frac{x}{y})\cdot y = x$? At lunch a coworker was talking about how to calculate, say, the 100th digit of pi using a square around the circle, then a pentagon, etc, basically you end up taking the limit of the circumference as the number of sides n goes to infinity. So I tried working out the math, but I got stuck at proving: $$\lim_{n \to \infty} 2n\tan\frac{\pi}{n} = 2 \pi$$ Any ideas how? AI: Putting $n=\frac1h, h\to0$ as $n\to\infty$ $$\lim_{n \to \infty} 2n\cdot\tan\frac{\pi}{n}$$ $$=2\lim_{h\to0}\frac{\tan \pi h}h$$ $$=2\pi\lim_{h\to0}\frac{\sin \pi h}{\pi h}\cdot \frac1{\lim_{h\to0}\cos\pi h}$$ We know, $\lim_{x\to0}\frac{\sin x}x=1$ and $\lim_{x\to0}\cos x=1$
H: Simple question about full time derivative Let's have the full time derivative equation $$ \frac{dF(\mathbf r (t), \mathbf p (t), t)}{dt} = \frac{\partial \mathbf r }{\partial t}\frac{\partial F}{\partial \mathbf r} + \frac{\partial \mathbf p }{\partial t}\frac{\partial F}{\partial \mathbf p } + \frac{\partial F}{\partial t}. $$ Can I represent this equation as $$ F(\mathbf r (t), \mathbf p (t), t) = \int \left( \frac{\partial \mathbf r }{\partial t}\frac{\partial F}{\partial \mathbf r} + \frac{\partial \mathbf p }{\partial t}\frac{\partial F}{\partial \mathbf p } + \frac{\partial F}{\partial t}\right)dt $$ and, if I can't, why not? In principle, this expression is equivalent to the identity $$ F = \int \frac{dF}{dt}dt. $$ AI: You can write $$dF= \frac{\partial \mathbf r }{\partial t}\frac{\partial F}{\partial \mathbf r} dt+ \frac{\partial \mathbf p }{\partial t}\frac{\partial F}{\partial \mathbf p } dt+ \frac{\partial F}{\partial t}dt$$ $$dF= \bigg(\frac{\partial \mathbf r }{\partial t}\frac{\partial F}{\partial \mathbf r} + \frac{\partial \mathbf p }{\partial t}\frac{\partial F}{\partial \mathbf p } + \frac{\partial F}{\partial t}\bigg)dt$$ From which it follows that $$\frac{dF}{dt}= \frac{\partial \mathbf r }{\partial t}\frac{\partial F}{\partial \mathbf r} + \frac{\partial \mathbf p }{\partial t}\frac{\partial F}{\partial \mathbf p } + \frac{\partial F}{\partial t}$$ And since $$F=\int dF=\int \bigg(\frac{\partial \mathbf r }{\partial t}\frac{\partial F}{\partial \mathbf r} + \frac{\partial \mathbf p }{\partial t}\frac{\partial F}{\partial \mathbf p } + \frac{\partial F}{\partial t}\bigg)dt=\int\frac{dF}{dt}dt$$
H: Does this function have a minimum? $$f(x, y, z) = xy - xz$$ My textbook asks to find the minimums of various different functions, and this is one of them. But I don't think this has a minimum. If $X=(x, y, z)$, then $f(tX)=t^2f(X)$, so if $f(X)$ is ever negative, the function will tend to $-\infty$ along the line generated by $X$. And $f(X)$ is negative at, for instance, $(1, -1, 0)$. Am I missing something, or is the correct answer indeed just "there is no minimum" ? AI: This (corrected) function has no minimum, and your proof is correct. Also, are you sure there isn't a domain on which this minimum is taken? If it is all of $\mathbb{R}^3$, then we may take $x=1$ and let $y\to -\infty$.
H: A statement equivalent to $\exists ! x P(x)$ I'm trying to write a statement in logic symbols that says there is a unique $x$ such that $P(x)$ is true. I've heard of writing this down as $\exists !x P(x) $. But I think I'm not allowed to use the symbol $\exists !$. Now I'm trying to write an equivalent statement without using that symbol. I've come up with two statements, but I'm not sure if one of them is correct: $\exists x ,\lnot \exists y (P(x) \land P(y) \land \lnot (x=y))$ $\exists x P(x) \land \lnot \exists y (P(y) \land \lnot (x=y))$ My doubt with the first one is that I've never seen a construction like $\exists x, \lnot \exists y (...)$. My doubt with the second one is that I'm not sure if the variable $x$ in this part $\lnot \exists y (P(y) \land \lnot (x=y))$ is interpreted as a free variable or as the variable with the property $\exists x P(x) $. AI: Remember that $\lnot\exists y(\dots)$ is the same as $\forall y\lnot(\dots)$, your first one is equivalent to: $$\exists x\forall y \left(\lnot P(x)\lor \lnot P(y)\lor(x=y)\right)$$ So, as another poster said, all you'd need is $\exists x:\lnot P(x)$ to prove this statement. So this statement is actually saying something more complicated. The second statement is correct, although I'd mind the parentheses there: $$\exists x \left(P(x) \land \lnot \exists y \left(P(y) \land \lnot (x=y)\right)\right)$$ and again you can replace $\lnot\exists y$ with $\forall y\lnot$, which other answers above have done. But I prefer to think of $\exists!$ as being shorthand for "there is one, and at most one." That is, two completely separate statements joined by $\land$. So I'd write it as: $$\left(\exists xP(x)\right)\land\forall y\forall z \left(\left(P(y)\land P(z)\right)\implies y=z\right)$$
H: Strange graph theory problem Let $G$ be a graph where every vertex has degree 1 or 3. Let $X$ be the set of all vertices of degree 1. Suppose there exists a set of edges $Y$ such that by removing these edges from $G$, each component of the remaining graph is a tree which contains exactly one vertex in $X$. Determine $|Y|$ in terms of $|V(G)|$. I really can't think of how to do this problem. Surely there are graphs that fulfill the requirements and both have the same $|V(G)|$ but different $|Y|$s? How can $|Y|$ be written solely in terms of $|V(G)|$? AI: Suppose that G is split into $|X|$ trees with $n_1, n_2, \dots, n_{|X|}$ vertices respectively. Then we have that $n_1 + n_2 + \dots n_{|X|} = |V(G)|$. Furthermore, each component contains exactly one vertex from $X$. The total number of edges in the trees is $(n_1 - 1) + (n_2 - 1) + \dots + (n_{|X|} - 1) = |V(G)| - |X|$. But the total number of edges in $G$ originally was $\frac{1}{2} \sum_v \text{deg}(v) = \frac{1}{2} (|X| + 3(|V(G)| - |X|)) = \frac{3}{2} |V(G)| - |X|$ Therefore, we've removed exactly $\frac{|V(G)|}{2}$ edges.
H: what is the remainder when $1!+2!+3!+4!+\cdots+45!$ is divided by 47? Can anyone please tell the approach or solve the question: what is the remainder when $1!+2!+3!+4!+\cdots+45!$ is divided by $47$? I can solve the remainder of $45!$ divided by $47$ using Wilson's theorem but I don't know what must be the approach for these model problems; as $47$ is a prime number I cannot convert it into another factorial and divide. If any one of you viewing has any idea regarding the approach, please post your approach here. Thanks in advance. Regards, Pavan Kumar AI: Just compose a table: \begin{array}{|c|r|} \hline n! & \equiv \ldots (\bmod \:47) \\ \hline \\ 1! & 1 \\ 2! & 2\cdot 1 = 2 \\ 3! & 3 \cdot 2 = 6 \\ 4! & 4 \cdot 6 = 24 \\ 5! & 5 \cdot 24 = 120 \equiv 26 \\ 6! & 6 \cdot 26 = 156 \equiv 15 \\ 7! & 7 \cdot 15 = 105 \equiv 11 \\ \cdots \\ 44! & 44 \cdot 8 = 352 \equiv 23 \\ 45! & 45 \cdot 23 = 1035 \equiv 1 \\ \hline \end{array} $45$ steps/rows in total. Then find the sum: $S = 1+2+6+24+26+15+11+\ldots+23+1 = 1052 \equiv 18 \pmod{47}$. Here we use the idea: if $k! \equiv s \pmod p$, then $(k+1)! \equiv (k+1)\cdot s \pmod p$, and apply it step-by-step.
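For anyone with a computer at hand, the whole table-building idea is a short two-variable loop, everything kept mod 47:

    total, fact = 0, 1
    for n in range(1, 46):
        fact = fact * n % 47
        total = (total + fact) % 47
    print(total)   # 18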
H: Laurent-Development of zero function The following "idendity" contradicts the uniqueness of the Laurent Development: $$ 0=\frac{1}{z-1}+\frac{1}{1-z}=\frac{1}{z}\frac{1}{1-1/z}+\frac{1}{1-z}=\sum_{n=1}^{\infty}\frac{1}{z^n}+\sum_{n=0}^{\infty}z^n=\sum_{n=-\infty}^{\infty}z^n $$ 1) Where is the mistake? 2) When does $\sum_{n=-\infty}^{\infty}z^n$ converge? My answers are: 1) The series $\sum_{n=1}^{\infty}\frac{1}{z^n}$ converges for $\lvert z\rvert >1$. The series $\sum_{n=0}^{\infty}z^n$ converges fpr $\lvert z\rvert <1$. So the mistake is to develop both fractions in a geometric series, because this does not work! 2) The negative degree power series of the Laurentseries converges for $\lvert z\rvert>1$, the positive degree power series converges for $\lvert z\rvert < 1$. So the whole Laurentseries converges for $$ \left\{z\in\mathbb{C}: 1<\lvert z\rvert <1\right\}=\emptyset. $$ So the Laurentseries never converges? AI: You are correct. The series $\delta(z) = \sum\limits_{n=-\infty}^\infty z^n$ does not converge anywhere (so there is no contradiction -- to get uniqueness for a Laurent series expansion you need convergence on an open annulus ). Now even though this series $\delta(z)$ does not converge anywhere, it is tremendously useful in formal series manipulations. It even has a name: the formal Dirac delta. It is an example of an expansion of zero (since it would be an alternate Laurent series expansion of zero if it converged). Expansions of zero and especially the formal delta function play a significant role in the theory of vertex algebras. For more details, James Lepowsky and Haisheng Li's text "Introduction To Vertex Operator Algebras And Their Representations" is a great reference.
H: Finite ways to write $1 =\sum_{i=1}^{h}\frac{1}{n_i}$ Let $h\geqslant 1$ an integer. Can we show (simply), without using group actions, that there exists a finite number of decomposition of the form $\displaystyle 1 =\sum_{i=1}^{h}\frac{1}{n_i}$, with $n_i$ positive integers. P.S.: With group actions, I got it. AI: Show by induction on $h$: For any $s$ there is at most a finite number of decompositions of the form $s=\sum_{i=1}^h\frac1{n_i}$ with $n_i$ positive integers. The claim is trivial for $h=0$. For the induction step $h-1\to h$, note that for any decomposition $s=\sum_{i=1}^h\frac1{n_i}$ we may assume wlog. that $n_h=\min\{n_1,\ldots, n_h\}$. Then $n_h\le\frac hs$, so there are only finitely many choices for $n_h$ and for each choice of $n_h$ there are only finitely many deompositions $s-\frac1{n_h}=\sum_{i=1}^{h-1}\frac1{n_i}$.
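The bound in the induction step (the smallest denominator is at most $h/s$) also turns the claim into a finite search one can actually run. A sketch with exact fractions (the function name is mine; for $h=3$ it prints the three classical decompositions of $1$):

    from fractions import Fraction
    from math import floor

    def decompositions(s, h, smallest=1):
        """All ways to write the positive fraction s as 1/n_1 + ... + 1/n_h
        with smallest <= n_1 <= ... <= n_h."""
        if h == 1:
            return [[s.denominator]] if s.numerator == 1 and s.denominator >= smallest else []
        out = []
        lo = max(smallest, floor(1 / s) + 1)   # need 1/n < s so the remaining sum stays positive
        hi = floor(h / s)                      # each of the h remaining terms is at most 1/n
        for n in range(lo, hi + 1):
            for tail in decompositions(s - Fraction(1, n), h - 1, n):
                out.append([n] + tail)
        return out

    print(decompositions(Fraction(1), 3))   # [[2, 3, 6], [2, 4, 4], [3, 3, 3]]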
H: Find the Wronskian of the Functions Find the Wronskian of the functions $f(t)=6e^t\sin{t}$ and $g(t)=e^t\cos(t)$. Simplify your answer. Please list all the steps as simply as possible; thank you. AI: Recall that, given $$f(t)=6e^t\sin{t}\quad g(t)=e^t\cos(t)$$ the Wronskian of $f(t)$ and $g(t)$ is $$W(f, g)(t) =\det \left(\begin{bmatrix}f(t) & g(t) \\ f'(t) & g'(t) \end{bmatrix}\right)$$ So find each of $f'(t)$ and $g'(t)$, substitute $f, g, f', g'$ into the matrix, and then simply compute the determinant: $$W(f, g)(t)= f(t)\cdot g'(t) - f'(t)\cdot g(t)$$
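If it helps to check the algebra, here is a short SymPy sketch (my own addition) that carries out exactly those steps -- differentiate, build the 2x2 matrix, take the determinant:

import sympy as sp

t = sp.symbols('t')
f = 6 * sp.exp(t) * sp.sin(t)
g = sp.exp(t) * sp.cos(t)

W = sp.Matrix([[f, g], [sp.diff(f, t), sp.diff(g, t)]]).det()
print(sp.simplify(W))   # -6*exp(2*t)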
H: Induction of inequality involving AP Prove by induction that $$(a_{1}+a_{2}+\cdots+a_{n})\left(\frac{1}{a_{1}}+\frac{1}{a_{2}}+\cdots+\frac{1}{a_{n}}\right)\geq n^{2}$$ where $n$ is a positive integer and $a_1, a_2,\dots, a_n$ are positive real numbers. Hence, show that $$\csc^{2}\theta +\sec^{2}\theta +\cot^{2}\theta \geq 9\cos^{2}\theta$$ Please help me. Thank you! AI: The base case $n=1$ is trivial, since $a_1\cdot\frac1{a_1}=1\ge 1^2$. For the induction step, define $x = a_1 + a_2 + \cdots + a_{n-1}$ and $y = \frac{1}{a_1} + \frac{1}{a_2} + \cdots + \frac{1}{a_{n-1}}$. By the induction hypothesis, we have $xy \geq (n-1)^2$. We need to prove $(x + a_n)(y + \frac1{a_n}) \geq n^2$. By the induction hypothesis, $(x + a_n)(y + \frac1{a_n}) = xy + ya_{n} + \frac{x}{a_n} + 1 \geq (n-1)^2 + ya_{n} + \frac{x}{a_n} + 1$, so it suffices to prove $ya_n + \frac{x}{a_n} \geq 2n - 2$. Now use the definitions of $x$ and $y$ together with $\displaystyle \frac{a_i}{a_n} + \frac{a_n}{a_i} \geq 2$ for each $i<n$ to obtain $ya_n + \frac{x}{a_n} = \sum_{i=1}^{n-1}\left(\frac{a_n}{a_i}+\frac{a_i}{a_n}\right) \geq 2(n-1)$. For the second part, apply the inequality with $n=3$ and $a_1=\sin^2\theta$, $a_2=\cos^2\theta$, $a_3=\tan^2\theta$: since $a_1+a_2+a_3 = 1+\tan^2\theta = \sec^2\theta$, the inequality gives $\csc^2\theta+\sec^2\theta+\cot^2\theta \ge \dfrac{9}{\sec^2\theta} = 9\cos^2\theta$.
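Not part of the proof, but a quick numerical sanity check is easy to run in Python (my own addition; it just probes random positive tuples for a counterexample):

import random

for _ in range(10000):
    n = random.randint(1, 10)
    a = [random.uniform(0.01, 100) for _ in range(n)]
    lhs = sum(a) * sum(1 / x for x in a)
    assert lhs >= n * n - 1e-9, (a, lhs)
print("no counterexample found")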
H: Why does the ratio of consecutive Fibonacci numbers alternate while converging? I tried to find the ratio of consecutive terms of the Fibonacci series and found that it appears to decrease overall and converge. I tried it through a small piece of Python code so that I could have a lot of data points to analyze.

import math

def F(n):
    return ((1 + math.sqrt(5))**n - (1 - math.sqrt(5))**n) / (2**n * math.sqrt(5))

for i in range(1, 53):      # start at 1, since F(0) = 0 would give a division by zero below
    a = F(i)
    b = F(i + 1)
    print(str(b) + '/' + str(a) + '=>' + str(b / a))

It gave values which suggested that the ratio converges at around 1.618. But I found that it is not monotonically decreasing; it in fact increases and decreases alternately, while overall converging. I tried to plot the values to see this graphically. (I expanded the graph a bit by plotting for $n*3$, just to make the view clear.) My doubt is this: what is so special about this series that makes it behave in an alternating way? I tried the same thing for the ratio of consecutive numbers and found that sequence increasing but not alternating like the Fibonacci ratios. Thanks AI: Instead of approaching the question via continued fractions, one can look at the closed form solution of the recurrence $F_n=F_{n-1}+F_{n-2}$ with initial values $F_0=0$ and $F_1=1$: this is given by the Binet formula $$F_n=\frac{\varphi^n-\widehat\varphi^n}{\sqrt5}\;,$$ where $\varphi$ and $\widehat\varphi$ are respectively the positive and negative solutions of the quadratic $x^2-x-1=0$, $$\varphi=\frac{1+\sqrt5}2\approx1.618\quad\text{and}\quad\widehat\varphi=\frac{1-\sqrt5}2\approx-0.618\;.$$ Thus, for the ratio we have $$\frac{F_{n+1}}{F_n}=\frac{\varphi^{n+1}-\widehat\varphi^{n+1}}{\varphi^n-\widehat\varphi^n}=\varphi+\frac{\varphi\widehat\varphi^n-\widehat\varphi^{n+1}}{\varphi^n-\widehat\varphi^n}=\varphi+\frac{\widehat\varphi^n(\varphi-\widehat\varphi)}{\varphi^n-\widehat\varphi^n}=\varphi+\frac{\widehat\varphi^n\sqrt5}{\varphi^n-\widehat\varphi^n}\;,$$ so that $$\frac{F_{n+1}}{F_n}-\varphi=\frac{\sqrt5}{\varphi^n-\widehat\varphi^n}\cdot\widehat\varphi^n\;.$$ The factor $\dfrac{\sqrt5}{\varphi^n-\widehat\varphi^n}$ is always positive; $\widehat\varphi^n$ is positive for even $n$ and negative for odd $n$, so $$\begin{cases}\frac{F_{n+1}}{F_n}>\varphi,&\text{if }n\text{ is even}\\\\ \frac{F_{n+1}}{F_n}<\varphi,&\text{if }n\text{ is odd}\;. \end{cases}$$ This clearly implies the alternating behavior of the ratio: $$\frac{F_2}{F_1}<\varphi<\frac{F_3}{F_2}>\varphi>\frac{F_4}{F_3}<\varphi<\frac{F_5}{F_4}>\varphi>\frac{F_6}{F_5}<\ldots\;.$$
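As a follow-up sketch (my own addition), one can avoid the floating-point Binet formula entirely and watch the alternation directly, by advancing the recurrence with exact integers and printing on which side of $\varphi$ each ratio falls:

phi = (1 + 5 ** 0.5) / 2

a, b = 1, 1                # F(1), F(2)
for n in range(1, 15):
    ratio = b / a          # F(n+1) / F(n)
    print(n, ratio, 'above phi' if ratio > phi else 'below phi')
    a, b = b, a + b        # advance the recurrence exactly in integers

The printed side alternates below/above/below/... , exactly as the sign of $\widehat\varphi^{\,n}$ predicts.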
H: Venn diagram counting problem? Suppose that set $A$ has 5 elements, set $B$ has 6 elements, set $C$ has 7 elements, and $\Omega$ has 10 elements. Determine the maximum and minimum number of elements the following set can have: $ A^{c} \cup (B \cap C)$ I can see that there can be a maximum of 6 elements in $B \cap C$ but from there I'm at a loss. Would appreciate some pointers. Thanks in advance! AI: You're correct that the largest number of elements $B \cap C$ can have is $6$. If $A$ has $5$ elements, then $A^c$ has $10 - 5 = 5$ elements. So the maximum number of elements in $A^c \cup (B\cap C)$ would seem to be $6 + 5 = 11$. But since there are only $10$ elements in total (we're given that $\Omega$ has 10 elements), $A^c \cup (B \cap C)$ has at most $10$ elements, and this maximum is attained when $B \cap C$ contains all of $A$. The minimum number of elements is $5$: since $A^c \subseteq A^c \cup (B\cap C)$, the union has at least the $5$ elements of $A^c$, and we can arrange $B$ and $C$ so that $B\cap C$ consists only of elements that are also in $A^c$, i.e. so that $(B\cap C) \cup A^c = A^c$.
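For anyone who wants to double-check the two bounds by brute force, here is a short Python sketch (my own illustration; by symmetry we may fix $A=\{0,\dots,4\}$ and let $B$ and $C$ range over all subsets of the right sizes):

from itertools import combinations

universe = set(range(10))
A = set(range(5))
Ac = universe - A

sizes = []
for B in combinations(universe, 6):
    for C in combinations(universe, 7):
        sizes.append(len(Ac | (set(B) & set(C))))

print(min(sizes), max(sizes))   # 5 10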
H: Number of attempts required to increment a counter based on probability of success Suppose I have a counting variable C that gets incremented every time a particular condition is met. Let's call this a "successful attempt." Suppose also that I have a non-linear function F() expressing the probability of an attempt being successful as a decimal value in the range [0.0, 1.0]. The value of F() is a function of the counter variable C. In other words, the higher the value of C, the lower the probability that C will be incremented. Example: C = 0. I evaluate the probability function and determine that I have an X% chance of success. I get a random number (roll the dice) and based on the value of the dice roll I either fail or succeed. If I fail, C stays at 0. If I succeed, then C gets incremented by 1. Assuming I succeed, C = 1 and P(success) is now < X%. I hope this makes sense. It's been years since I've taken stats in university, and googling hasn't turned up any fruitful results. Is it possible to calculate the expected number of attempts/dice rolls needed in order to raise the counter variable from 0 to some arbitrary value N? AI: Yes. If the probability of a successful attempt while the counter has value $C$ is $F(C)$, then the number of attempts needed to move the counter from $C$ to $C+1$ is geometrically distributed with success probability $F(C)$, so its mean is $\frac{1}{F(C)}$. Summing over the stages (linearity of expectation), the expected number of attempts to raise the counter from $0$ to $N$ is $\displaystyle \sum_{C=0}^{N-1} \frac{1}{F(C)}$.
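A minimal sketch of this formula next to a simulation (my own addition; the particular F below is just an example of a decreasing success probability, not something from the question):

import random

def F(C):
    return 1.0 / (C + 1)        # example: success probability shrinks as C grows

def expected_attempts(N):
    return sum(1.0 / F(C) for C in range(N))

def simulate(N):
    C, attempts = 0, 0
    while C < N:
        attempts += 1
        if random.random() < F(C):
            C += 1
    return attempts

N = 5
print(expected_attempts(N))                              # 1 + 2 + 3 + 4 + 5 = 15.0
print(sum(simulate(N) for _ in range(20000)) / 20000)    # should land close to 15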
H: Part of a solution to a mathematical induction problem I don't understand There's a part in the solution that I can't understand; I think it's just something basic that I'm missing. In the solution it says: $$T(k) \leq 2(c(k/2)^2 \log(k/2)) + k^2$$ Then it became $$T(k) \leq ( ck^2 \log(k/2) ) / 2 + k^2$$ P.S.: I forgot how to do StackExchange LaTeX again, I am so sorry. AI: \begin{align} T(k) &\leq 2(c(k/2)^2 \log(k/2)) + k^2\\ &= 2(c(k^2/4) \log(k/2)) + k^2\\ &= (c(k^2/2) \log(k/2)) + k^2\\ &= (ck^2 \log(k/2))/2 + k^2\\ \end{align}
H: Understanding associators as natural transformations Reading Baez and Stay's "Rosetta Stone," and trying to understand the definition of monoidal category on page 12, I read that a monoidal category requires a natural isomorphism called the associator, assigning to each triple of objects $X, Y, Z \in C$ an isomorphism $$a_{X,Y,Z}:(X\otimes Y)\otimes Z\overset{\sim}{\rightarrow}X\otimes (Y\otimes Z)$$ and then trying to understand this associator as a natural isomorphism, which is a special case of a natural transformation. A natural transformation relates a pair of functors $F$ and $F'$ that both map from a category $C$ to a category $D$. To narrow in on the definition of $a$, the associator, I must find two functors and a source category $C$ given an original functor, which, in the case of the associator, is the tensor product $\otimes$, which maps from the cartesian product category $E\times E$ to an original category $E$. I am imagining that to define $a$ as a natural transformation, I must think of its source category $C$ as $E\times E\times E,$ which should be ok since the cartesian product of categories is evidently associative, and its destination category $D$ as just $E$, getting there via the two different parenthesizations of $X\otimes Y\otimes Z$. Then the first functor of the natural transformation, $F$ would be something that computes $((X\in E)\otimes(Y\in E))\otimes (Z\in E)$ straight out of the category $E\times E\times E$ into $E$, and the second functor of the natural transformation $F'$ would be something that computes $(X\in E)\otimes((Y\in E)\otimes (Z\in E))$ to go from $E\times E\times E$ to $E$. This leaves me feeling vaguely queasy, leaning on the associativity of the cartesian product and not really precisely defining the functors $F$ and $F'$, and I'd like to know if I've missed the mark. EDIT: thinking more about why I feel queasy, it's because I've assumed some kind of projection operator that can "pick" elements from the cartesian product of categories $E\times E\times E$, but such an operator is not immediately obvious from the given definition of the cartesian product of categories on page 11, namely that $C\times C'$ is a category wherein objects are pairs $(X\in C, X'\in C')$, morphisms are applied and composed componentwise, and identity morphisms are defined componentwise. This doesn't, prima facie, give me a way to pick $X$ or $X'$ from $(X, X')$. AI: The first functor here is actually a composition $$(\mathcal C\times\mathcal C)\times\mathcal C\xrightarrow{(-\otimes-)\times\text{Id}}\mathcal C\times\mathcal C\xrightarrow{\otimes}\mathcal C\\ ((a,b),c)\mapsto (a\otimes b,\ c)\mapsto(a\otimes b)\otimes c$$ and the second functor is the composition $$\mathcal C\times(\mathcal C\times\mathcal C)\xrightarrow{\text{Id}\times(-\otimes-)}\mathcal C\times\mathcal C\xrightarrow{\otimes}\mathcal C\\ (a,(b,c))\mapsto (a,\ b\otimes c)\mapsto a\otimes( b\otimes c)$$ However, if we want $\alpha$ to be a natural isomorphism between them, we have to consider both parenthesizations of $\mathcal C\times\mathcal C\times\mathcal C$ as equal. But this would be wrong since, formally, they are different. We can resolve this issue by extending the first functor above with the map $\mathcal C\times\mathcal C\times\mathcal C\to(\mathcal C\times\mathcal C)\times\mathcal C$ sending the triple $(a,b,c)$ to the pair $((a,b),c)$ (and similarly for the second functor). It is straightforward to check that this really is a functor. Similarly $(-\otimes-)\times\text{Id}$ is a functor. 
(Both cases can be seen as applications of the general fact that given functors $F_i$ from a category $\mathcal C$ to the categories $\mathcal D_i$, indexed by some set $I$, there is a unique functor $F:\mathcal C\to\prod_I\mathcal D_i$ such that $p_i\circ F=F_i$ where $p_i:\prod_I\mathcal D_i\to\mathcal D_i$ is the projection.) Finally, $\otimes$ is a functor by hypothesis. So it makes sense to talk about a natural transformation between $(-\otimes-)\otimes-$ and $-\otimes(-\otimes-)$.