H: Evaluating an integral with ${\mathrm{d}t}$ in the numerator, $\int \frac{\mathrm{d}t}{\cos(t)^2}$. How do I solve an integral with a differential on top? E.g.: given this integral to evaluate: $$\;\int \frac{dt}{(\cos(t))^2}\;\;?$$ What does it even mean when there's a differential? AI: To address your notational question, the term $\,dt\,$ in an integral tells us the variable with respect to which we are integrating. Integrals should always include $d\alpha$, where $\alpha$ is replaced by the variable of integration. Writing $\,dt$ in the numerator is simply space-saving, and means no more and no less than the following: $$\int \frac {dt}{(\cos t)^2} = \underbrace{\int \frac 1{\cos^2(t)}\,dt}_{\large = \int \frac 1{\left(\cos(t)\right)^2}\,dt}$$ And integrating: $$\int \frac 1{\cos^2(t)}\,dt = \int \sec^2(t) \,dt = \tan(t) + C$$ It always helps to "double check" an integral if you have doubts about $\int \sec^2(t)\,dt$: $$y = \tan(t) + C \implies \frac{dy}{dt} = \frac{d}{dt}\left(\tan(t) + C\right) = \sec^2(t) + 0 \implies dy = \sec^2(t) \,dt$$
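As a sanity check of the antiderivative, here is a minimal sketch assuming SymPy is available:

```python
# Integrate sec^2(t) and verify the derivative of tan(t) matches it.
import sympy as sp

t = sp.symbols('t')
print(sp.integrate(1 / sp.cos(t)**2, t))                       # tan(t) (possibly written as sin/cos)
print(sp.simplify(sp.diff(sp.tan(t), t) - 1 / sp.cos(t)**2))   # 0
```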
H: Stumped by a notation. I'm reading through http://cr.yp.to/papers/primesieves.pdf and came across the following notation on p. 1: For example, a squarefree positive integer $p \in 1 + 4\Bbb Z$ is prime if and only if the equation $4x^2 + y^2 = p$ has an odd number of positive solutions $(x,y)$. What I'm confused about is the use of the blackboard bold expression $1+4\Bbb Z$ as a restriction on the $p$ variable. I've seen field theorists use stuff like $p \in {\Bbb Z}/12{\Bbb Z}$ for a finite field element, but this one is new to me. AI: The set $1+4\mathbb{Z}$ is the set of integers $x$ such that $x$ is congruent to $1$ modulo $4$. More familiarly, $4\mathbb{Z}$ is the ideal of all multiples of $4$. The notation $1+4\mathbb{Z}$ is not all that far away from the old-fashioned "of the form $4k+1$."
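As a side note, the quoted criterion itself (it is the starting point of the primes-sieve paper linked in the question) can be spot-checked by brute force. A sketch assuming SymPy for primality testing and factorization:

```python
# For squarefree p ≡ 1 (mod 4), check: p is prime iff 4x^2 + y^2 = p
# has an odd number of positive solutions (x, y).
from sympy import isprime, factorint

def squarefree(n):
    return all(e == 1 for e in factorint(n).values())

for p in range(5, 200, 4):               # p ≡ 1 (mod 4)
    if not squarefree(p):
        continue
    sols = sum(1 for x in range(1, p) for y in range(1, p) if 4*x*x + y*y == p)
    assert (sols % 2 == 1) == isprime(p), p
print("criterion holds for all squarefree p < 200 with p ≡ 1 (mod 4)")
```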
H: How to prove $[x,p]=i$ $\implies$ $[x,p^n]=inp^{n-1}$? How to prove $[x,p]=i$ $\implies$ $[x,p^n]=inp^{n-1}$? I can do this using $p=i\frac{d}{dx}$, but my book hasn't introduced this yet, so is there another proof that doesn't use it? These are just linear operators acting on a Hilbert space. AI: Here's a hint: based upon the dependence upon $n$ in your answer, a proof by induction is probably the way to go. Your induction hypothesis should be: $[x,p^n] = inp^{n-1}$ and you want to use this property to prove that $[x,p^{n+1}] = i(n+1)p^n$. To see how these are related, note that $$\begin{eqnarray}[x,p^{n+1}] &=& xp^{n+1}-p^{n+1}x\\ &=& (xp^n)p-p(p^nx) \\ &=& (xp^n)p-(p^nx)p+(p^nx)p-p(p^nx) \\ &=& (xp^n-p^nx)p+p^n(xp-px).\end{eqnarray}$$
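For readers who do want the operator picture anyway, here is a hedged SymPy check of the identity on a test function, using the representation $p=-i\,d/dx$ (one common convention satisfying $[x,p]=i$; the induction above is representation-independent):

```python
# Verify [x, p^n] f = i n p^(n-1) f symbolically for small n.
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)

def p(g, times=1):                 # apply p = -i d/dx repeatedly
    for _ in range(times):
        g = -sp.I * sp.diff(g, x)
    return g

for n in range(1, 5):
    lhs = x * p(f, n) - p(x * f, n)        # [x, p^n] f
    rhs = sp.I * n * p(f, n - 1)           # i n p^(n-1) f
    assert sp.simplify(sp.expand(lhs - rhs)) == 0
print("[x, p^n] = i n p^(n-1) checked for n = 1..4")
```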
H: Modular exponentiation pattern question I once found this problem: Find the form of all $n$ such that $$3 \cdot 5^{2n+1}+2^{3n+1}=0 \pmod{17}$$ I started by writing the residues $\pmod{17}$ of $3 \cdot 5^{k}$ and $2^k$ $$3 \cdot 5^1+2 \equiv 0\pmod{17},\,2^1+15\equiv 0 \pmod{17}$$ $$3 \cdot 5^2+10\equiv 0\pmod{17},\,2^2+13\equiv 0 \pmod{17}$$ $$3 \cdot 5^3+16\equiv 0\pmod{17},\,2^3+9\equiv 0 \pmod{17}$$ $$3 \cdot 5^4+12\equiv 0\pmod{17},\,2^4+1\equiv 0 \pmod{17}$$ $$3 \cdot 5^5+9 \equiv 0\pmod{17},\,2^5+2\equiv 0 \pmod{17}$$ $$3 \cdot 5^6+11\equiv 0\pmod{17},\,2^6+4\equiv 0 \pmod{17}$$ $$3 \cdot 5^7+4 \equiv 0\pmod{17},\,2^7+8\equiv 0 \pmod{17}$$ $$3 \cdot 5^8+3 \equiv 0\pmod{17},\,2^8+16\equiv 0 \pmod{17}$$ $$3 \cdot 5^9+15\equiv 0\pmod{17}$$ $$3 \cdot 5^{10}+7\equiv 0\pmod{17}$$ $$3 \cdot 5^{11}+1\equiv 0\pmod{17}$$ $$3 \cdot 5^{12}+5\equiv 0\pmod{17}$$ $$3 \cdot 5^{13}+8\equiv 0\pmod{17}$$ $$3 \cdot 5^{14}+6\equiv 0\pmod{17}$$ $$3 \cdot 5^{15}+13\equiv 0\pmod{17}$$ $$3 \cdot 5^{16}+14\equiv 0\pmod{17}$$ And then when I paired the residues so that they "completed the $17$" I found the relation that $$3 \cdot 5^{16m+6k+1}+2^{8n+k+1}=0\pmod{17}$$ for all $m,n,k \in \mathbb{Z}_{\ge 0}$. Then solving diophantically with the exponents is easy, but my question in essence is: Why does this happen? I don't refer to $m$ or $n$, but to $k$. It is trivial that the residues repeat after some period, but I can't find a reason for the $k,6k$. I don't think that $3,5,2$ are the only numbers at all. And: Is there a general formula/algorithm for this? Any idea is greatly appreciated. AI: For any integer $n$, $$\begin{align*} 3\cdot 5^{2n+1}+2^{3n+1}&\equiv 0\bmod 17\\[0.1in] 15\cdot 5^{2n}+2^{3n+1}&\equiv 0\bmod 17\\[0.1in] 15\cdot 5^{2n}&\equiv -2\cdot 2^{3n}\bmod 17\\[0.1in] 15\cdot 5^{2n}\cdot 9^{2n}&\equiv -2\cdot2^{3n}\cdot 9^{2n}\bmod 17 & \text{(we do this because $9\equiv 2^{-1}\bmod 17$)}\\[0.1in] 15\cdot (45)^{2n}&\equiv -2\cdot2^n\bmod 17\\[0.1in] 15\cdot 11^{2n}&\equiv -2\cdot2^n\bmod 17\\[0.1in] 15\cdot 11^{2n}\cdot 14^n&\equiv -2\cdot2^n\cdot 14^n\bmod 17& \text{(we do this because $14\equiv 11^{-1}\bmod 17$)}\\[0.1in] 15\cdot 11^n&\equiv -2\cdot11^n\bmod 17\\[0.1in] 17\cdot 11^n&\equiv 0\bmod 17\\[0.1in] 0\cdot 11^n&\equiv 0\bmod 17 \end{align*}$$ which is always true.
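A quick numeric spot-check of the congruence in plain Python (pow with a modulus keeps the numbers small):

```python
# The congruence holds for every n, as the derivation above shows.
for n in range(100):
    assert (3 * pow(5, 2*n + 1, 17) + pow(2, 3*n + 1, 17)) % 17 == 0
print("3·5^(2n+1) + 2^(3n+1) ≡ 0 (mod 17) for n = 0..99")
```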
H: If a polynomial with rational coefficients is injective on the rationals, is it injective on the reals? Let $p:\Bbb{R}\to\Bbb{R}$ be a polynomial with rational coefficients. If the restriction of $p$ to $\Bbb{Q}$ is injective, is $p$ injective? I conjectured that $p$ must be monotonic, but I don't know how to prove this conjecture. Thanks for any help. AI: Answering this CW, using the clever technique of Hailong Dao on MO, pointed out by David Speyer in comment above. We have $$ f(x) = x^3 - 2 x. $$ Now, if we have distinct rational $x,y$ such that $$ f(x) = f(y), $$ we have $x - y \neq 0$ and $$ x^2 + x y + y^2 = 2. $$ We can then take a positive integer $t$ as the least common multiple of the denominators of $x,y,$ so that $$ u = t x, \; \; v = t y $$ are integers and $$ \gcd(u,v) = 1. $$ Then $$ u^2 + u v + v^2 = 2 t^2. $$ However, $u^2 + u v + v^2$ is anisotropic in the 2-adic numbers. That is, since the result is even, it follows that $u,v$ are both even (Try it!). This contradicts $ \gcd(u,v) = 1. $ So, actually $x=y.$ Thus $f$ is injective on $\Bbb Q$, even though it is clearly not injective on $\Bbb R$ (it decreases between its critical points $\pm\sqrt{2/3}$), which answers the question in the negative. I had no idea that this was related to quadratic forms in this simple way. The number 2 can be replaced by any prime $q \equiv 2 \pmod 3.$ That is, $$ x^3 - 2 x, \; \; x^3 - 5 x, \; \; x^3 - 11 x, \; \; x^3 - 17 x, \; \; x^3 - 23 x, \; \; x^3 - 29 x, \; \; x^3 - 41 x $$ are all injective on the rationals. The more familiar way is to say Legendre symbol $(2|3) = -1.$ Note that $u^2 + u v + v^2$ is one of Pete L. Clark's ADC forms, because it is one of his Euclidean forms. That is, $u^2 + u v + v^2$ represents an integer $n$ over the rationals if and only if it represents $n$ over the integers. If you are checking this property for some $n,$ note that you also need to check $u,v$ with opposite signs as well, to be sure.
H: Proof of Zorn's Lemma using the Axiom of Choice. Why is $\mathscr U$ a tower? I am reading a proof of Zorn's Lemma using the Axiom of Choice in Halmos' classical text, and I fail to see how to prove $\mathscr U$ satisfies the third condition of the definition of a tower. I will transcribe the relevant parts: Let $X$ be any non-empty set, and let $\mathscr X$ be a collection of subsets of $X$ subject to two conditions: first, every subset of a set in $\mathscr X$ is in $\mathscr X$. Second, the union of any chain in $\mathscr X$ is in $\mathscr X$. Note that the condition "the union of any chain in $\mathscr X$ is in $\mathscr X$" is the same as saying every chain has an upper bound. We just stick to the partial order of inclusion instead of some abstract partial order. Halmos gives a justification by constructing a bijection between the elements of a poset and the weak initial segments of these elements. Now, we take $f$ to be a choice function for $X$, and define a function $g:\mathscr X\to\mathscr X$ as follows. First, for $A\in\mathscr X$, let $\widehat A$ be the set of elements $x$ of $X$ such that $A\cup \{x\}$ is in $\mathscr X$. Then $\widehat A=A$ $\iff$ $A$ is maximal w.r.t. inclusion in $\mathscr X$. Then we define $g(A)=A$ if $A=\widehat A$ ($\iff A$ is maximal) and $g(A)=A\cup\{f(\widehat A-A)\}$ otherwise. So, the thing goes as follows now. Note that $g(A)$ contains at most one more element than $A$. We will say that a subcollection $\mathcal T$ of $\mathscr X$ is a tower if it contains the empty set, if $A\in\mathcal T$ implies $g(A)\in\mathcal T$ and if the union of any chain in $\mathcal T$ is again in $\mathcal T$. We can consider the intersection of all towers in $\mathscr X$ since $\mathscr X$ itself is a tower (i.e. the collection is non-empty), obtaining the smallest tower $\mathcal T_0$. That this is a tower is clear. Halmos now seeks out to prove that $\mathcal T_0$ is indeed a chain. To this end, we will say that $C\in T_0$ is comparable if it is comparable to each element in $T_0$. Thus, we want to show that every $C\in T_0$ is comparable. Comparable sets exist, for example, $\varnothing$ is one. Fix a comparable set $C$ in $T_0$, and let $A\in T_0$ be a proper subset of $C$. We claim that $g(A)\subseteq C$. Since $C$ is comparable, either $g(A)\subseteq C$ or $C\subsetneq g(A)$. But in this last case we would have $A\subsetneq C\subsetneq g(A)$, which would mean $g(A)$ has at least two more elements than $A$; impossible. Halmos now looks at $\mathscr U=\{A\in T_0:A\subseteq C \text{ or } g(C)\subseteq A\}$ This is a subset of the sets that are comparable to $g(C)$ in $T_0$. We want to show this is a tower, so that $T_0=\mathscr U$. I cannot see how to prove the third condition holds. The first two requirements I can see. That is, how do I show that if $\mathscr C$ is a chain in $\mathscr U$, $\bigcup \mathscr C$ is in $\mathscr U$? Halmos said it is obvious from the definition of $\mathscr U$, and that kinda put me down. AI: Either $C$ contains the union, or it does not. If it doesn't, then at least some element of the union lies outside $C$. Such an element must be contained in at least one of the sets in the chain. Let $B$ be such a set in the chain. Then $B\not\subseteq C$, so by the definition of $\mathscr U$ we must have $g(C)\subseteq B$, and hence $g(C)$ lies in the union.
H: Problem with Free Index in Einstein Summation Notation From http://www.physics.ohio-state.edu/~ntg/263/handouts/tensor_intro.pdf: Rules of Einstein Summation Convention — If an index appears (exactly) twice, then it is summed over and appears only on one side of an equation. A single index (called a free index) appears once on each side of the equation. So $A_{\LARGE{i}} = B_{\LARGE{i}}C_{\LARGE{i}} \qquad (1)$ is INvalid. $A_{\LARGE{i}} = \epsilon_{\LARGE{ijk}}B_{\LARGE{i}}C_{\LARGE{j}} \qquad (2)$ is INvalid. I understand (1) is invalid because there's 1 $i$ on the LHS but $2$ on the RHS. But I don't understand the rationale behind this rule. What's the problem? $\sum_{i=1}^n A_i$ = $\sum_{i=1}^n B_iC_i $ is valid because it means $A_1 + ... + A_n = B_1C_1 + ... + B_nC_n $. I understand (2) is invalid — On the LHS, when the summation is expanded in $i$, there's no $k$. However, on the RHS, when the 2 summations are expanded, $k$ is still there in the Levi-Civita tensor. AI: Basically the point is that it is ambiguous to use an index as both a free and dummy index. $$ A_iB_{ii} \stackrel{\huge{?}}{=} A_iB_{jj} = A_i \sum_j B_{jj}$$ or $$ A_iB_{ii} \stackrel{\huge{?}}{=} A_jB_{ji} = \sum_j A_jB_{ji} ? $$ This is a problem; we can't have both. Added after Trapu's edit. So, to be clear, I will use non-Einstein notation on the r.h.s. of the equations below: the point here is that $A_iB_{ii}$ cannot be interpreted meaningfully treating one pair of $i$ as dummies (summed over) and the other as free (not summed) $$ A_iB_{ii} = \sum_{j=1}^n A_jB_{ji} = A_1B_{1i}+A_2B_{2i}+ \cdots +A_nB_{ni} \qquad (I.) $$ versus: $$ A_iB_{ii} = \sum_{j=1}^n A_iB_{jj} = A_i\sum_{j=1}^n B_{jj} = A_i\left(B_{11}+B_{22}+ \cdots +B_{nn} \right) \qquad (II.) $$ Expressions (I.) and (II.) are two reasonable interpretations of $A_iB_{ii}$ if just one index is taken to be free. But, these are not equal. For example, $B_{11}=1, B_{22}=-1, B_{12}=0=B_{21}$ and $A_1=1, A_2=1$, $$ (I.) \qquad A_1B_{1i}+A_2B_{2i} = B_{1i}+B_{2i} = \begin{cases} 1 & i=1 \\ -1 & i=2 \end{cases} $$ versus $$ (II.) \qquad A_i(B_{11}+B_{22}) = 0 = \begin{cases} 0 & i=1 \\ 0 & i=2 \end{cases} $$ As you can see these expressions do not agree. Therefore, we cannot use the same index for a dummy and a free index. I would liken this problem to the one I have with my calculus I students who insist they need not change the bounds in a u-substitution since they're just going to write it back in terms of $x$ at the end. However, with that practice some of the intermediate steps are simply wrong. We are left with the situation that what we write is insufficient to capture the precise mathematical intent of the expression. This should be avoided since good notation ought to be unambiguous. Or, at a minimum the ambiguity should reflect a deeper mathematical structure as in the case of quotient spaces and the non-uniqueness of the representative. This is not that, this is just bad notation. It does lead to errors, trust me, I've made them.
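The two inequivalent readings $(I.)$ and $(II.)$ can be reproduced numerically with the example values from the answer; a sketch assuming NumPy:

```python
import numpy as np

A = np.array([1.0, 1.0])
B = np.array([[1.0, 0.0], [0.0, -1.0]])

reading_I  = np.einsum('j,ji->i', A, B)   # sum_j A_j B_ji, one free index i
reading_II = A * np.trace(B)              # A_i * sum_j B_jj
print(reading_I)    # [ 1. -1.]
print(reading_II)   # [ 0.  0.]  -> the two readings disagree
```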
H: Vectors Angles from $[0,2\pi]$ Given two vectors $V_1 = (x_1, y_1)$ and $V_2 = (x_2, y_2)$. How to calculate the angles between them in the range of $[0, 2\pi]$? I know the $\cos\theta$ similarity equation could present a $\theta$ in the range of $[0, \pi]$. AI: You're right that the cos way only gets you the unsigned angle between $0$ and $\pi$. If you can get the signed angle from $-\pi$ to $\pi$, then you can just add $2\pi$ to the negative answers. But all you need to get the sign is to think of the two vectors as vectors in 3-space with 0 z-coordinate, and take the cross product. It will lie in the z direction. If it is positive, the angle is positive, and if it is negative, the angle is negative. If it is 0, then the angle is either 0 or $\pi$, which you can tell from the cos method.
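Putting the dot product (cosine) and the cross product (sign) together, here is a sketch in plain Python; it measures the counterclockwise angle from the first vector to the second, which is one natural convention:

```python
import math

def angle_0_to_2pi(v1, v2):
    x1, y1 = v1
    x2, y2 = v2
    # atan2 of (z-component of cross product, dot product) gives the
    # signed angle in (-pi, pi]; shift negatives up by 2*pi.
    angle = math.atan2(x1*y2 - y1*x2, x1*x2 + y1*y2)
    return angle if angle >= 0 else angle + 2*math.pi

print(angle_0_to_2pi((1, 0), (0, 1)))   # pi/2
print(angle_0_to_2pi((0, 1), (1, 0)))   # 3*pi/2
```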
H: Cluster point of a sequence $\{x_n\}$ is the limit of some subsequence - Axiom of Choice? In a metric space, a cluster point of a sequence $\{x_n\}$ is the limit of some subsequence. The only proof that I know works like this: Construct a sequence $\delta _k \to 0$. For each $\delta _k$ find a point $x_{n_k}$ in the sequence which is within $\delta_k$ of the cluster point. Make sure that $n_{k} \lt n_{k+1}$, so that you really have a subsequence, and you can do this because there are only finitely many terms before index $n_{k}$. Question: Is this using Countable Choice? If so is it because we're "picking" a sequence element for each $\delta_k$? If we are using Countable Choice, then is there a way to prove this without using CC? I apologize if this is a duplicate question - I know there have been previous posts where it's explained why Sequential Continuity Implies $\delta - \epsilon$ Continuity requires Countable Choice. The argument used in that proof seems similar to this one. Thank you very much. AI: Suppose that $p$ is a cluster point of $\sigma=\langle x_n:n\in\Bbb N\rangle$. For each $k\in\Bbb N$ let $$n_k=\min\{n\in\Bbb N:d(p,x_n)<2^{-k}\text{ and }\forall i<k(n_i<n)\}\;;\tag{1}$$ then $\langle x_{n_k}:k\in\Bbb N\rangle$ is a subsequence of $\sigma$ converging to $p$. The definition in $(1)$ does not require choice: it uniquely specifies $n_k$, and the hypothesis ensures that the set over which the minimum is taken is non-empty.
H: Understanding prime factors method of finding a LCM I am able to understand why the LCM of prime numbers is just their product; but for non-primes we get their prime factors and choose the 'maximum occurrence' of every prime factor and multiply them. I lose my understanding here: why choose the 'maximum occurrences'? AI: In short, this relies on unique prime factorization. Every number can be written uniquely as a product of primes; and if a prime occurs a certain number of times in each number, it must occur at least as often in their LCM. Let's look at this in terms of an example. Consider $2^3 \cdot 3 \cdot 5 = 120$ and $2^2 \cdot 3^3 \cdot 7 = 756$. Each of these numbers will divide their LCM (really, the LCM of two numbers $a$ and $b$ is the smallest number that is divisible by both $a$ and $b$). So in particular, it will be divisible by each of the prime-powers that divide these numbers. $2^3$ divides $120$, so $2^3$ will divide the LCM of $120$ and $756$. So there will be at least $3$ 'occurrences' (to use your phrase) of the prime $2$ in the LCM. There won't be a need for any more, because only two occurrences of $2$ occur in $756$. Having $2^3$ thus satisfies both numbers' divisibility needs. Similarly, a multiple of $120$ is in particular divisible by $3$; but in fact we need three factors of $3$, coming from $756$. This argument carries over to numbers besides $120$ and $756$. You need the 'maximum occurrence' of each prime factor because the LCM needs to be a multiple of that maximal prime power.
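Here is that recipe as a sketch in Python, using SymPy's factorint for the factorizations (assumed available) and cross-checking against the built-in math.lcm (Python 3.9+):

```python
# LCM via the maximum exponent of each prime, as described above.
from math import lcm as builtin_lcm
from sympy import factorint

def lcm_by_primes(a, b):
    fa, fb = factorint(a), factorint(b)
    result = 1
    for p in set(fa) | set(fb):
        result *= p ** max(fa.get(p, 0), fb.get(p, 0))
    return result

print(lcm_by_primes(120, 756), builtin_lcm(120, 756))   # 7560 7560
```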
H: About divergence theorem Consider the portion $S$ of the sphere $x^2+y^2+z^2=4$ with $z\ge -1$. Calculate the integral $$\iint_{S} (x^3, y^3, z^3)\cdot \vec{n}\, dS$$ a) Using directly a parametrization Well, what are the steps that I need to follow to parametrize this? Can I use spherical coordinates? AI: $$\begin{align} \nabla\cdot{\langle{x^3,y^3,z^3}\rangle}&=3x^2+3y^2+3z^2 \\ x&=\rho \sin(\phi)\cos(\theta) \\ y&=\rho \sin(\phi)\sin(\theta) \\ z&=\rho \cos(\phi) \\ 3\rho^2&=3\rho^2\sin^2(\phi)\cos^2(\theta)+3\rho^2\sin^2(\phi)\sin^2(\theta)+3\rho^2\cos^2(\phi) \end{align}$$ Now find the limits of integration and take the integral: $$\begin{align} -1&=2\cos(\phi) \\ \phi&=\frac{2}{3}\pi \\ \iint_{S} (x^3, y^3, z^3)\cdot \vec{n}\, dS&=\int_0^{\frac{2}{3}\pi}\int_0^{2\pi}\int_0^23\rho^4\sin(\phi)\,d\rho\,d\theta\,d\phi \end{align}$$ (Strictly speaking, the divergence theorem applies to a closed surface: the triple integral equals the total outward flux through the cap $S$ together with the flat disk $z=-1$, $x^2+y^2\le 3$, so to isolate the flux through $S$ alone one must subtract the flux through that disk.)
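A sketch evaluating the triple integral above with SymPy (assuming it is installed):

```python
import sympy as sp

rho, theta, phi = sp.symbols('rho theta phi', positive=True)
I = sp.integrate(3 * rho**4 * sp.sin(phi),
                 (rho, 0, 2), (theta, 0, 2*sp.pi), (phi, 0, 2*sp.pi/3))
print(I)   # 288*pi/5
```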
H: prove the limit inferior of $(x_n)$ where $n \in\mathbb{N}$ The problem states: let $(x_n)$ be a bounded sequence, and for each $n \in\mathbb{N}$ let $t_n=\inf\{x_k: k\geq n\}$. Prove that $(t_n)$ is monotone and convergent. After a little research (I was confused about what the limit inferior really is), this is the proof I tried coming up with. Proof: Since $(x_n)$ is bounded, it has a supremum and infimum. Let $t_n=\inf\{x_k: k\geq n\}=\inf\{x_n,x_{n+1},...\}$. It follows that $t_{n+1}=\inf\{x_k: k\geq n+1\}=\inf\{x_{n+1},x_{n+2},...\}$ If we let set $A= \{x_{n+1},...\}$ and set $B=\{x_n,...\}$ (Note: I assume that sets $A$ and $B$ have greatest lower bounds since these are subsets of $(x_n)$ and $(x_n)$ is bounded). Then it follows that $A \subset B$ and $\inf A\geq \inf B$ Otherwise if $\inf A<\inf B$ then $\inf A \leq x_n$ a contradiction since $x_n \notin B$. It follows that $x_{n+1} \geq x_n$ otherwise if $x_n> x_{n+1}$ then $\inf A \leq x_n$ a contradiction from above. Hence $t_n$ is increasing and by the Monotone convergence theorem $t_n$ is monotone and convergent. Not sure if I have all this correct. AI: I suggest you first prove: if $A$ and $B$ are bounded subsets of $\mathbb R$ and $A\subseteq B$, then $\inf A\geq\inf B$; and then rethink your argument a bit, trying to make it a little more organized.
H: How to prove that the cosine of the angle of two vectors is preserved after rotating the vectors with the same angle? How can we with linear algebra prove that if two vectors in 2 dimensions form an angle $\phi$ then by multiplying those two vectors with the same rotation matrix the cosine of the formed angle will be preserved? AI: You can use the fact that the matrix corresponding to the rotation is orthogonal: $$R^T = R^{-1}$$ and that the norm of a vector after rotation is the same as the original one: $$||R \mathbf{v}|| = ||\mathbf{v}||$$ So, by direct computation, for two vectors $\mathbf{v}$ and $\mathbf{w}$, you have: $$ \cos(\alpha_{R\mathbf{v},R\mathbf{w}}) = \frac{<R\mathbf{v},R\mathbf{w}>}{||R\mathbf{v}|| \ ||R\mathbf{w}||} = \frac{\mathbf{v}^T R^T R \mathbf{w}}{||\mathbf{v}|| \ ||\mathbf{w}||} = \frac{\mathbf{v}^T \mathbf{w}}{||\mathbf{v}|| \ ||\mathbf{w}||} = \cos(\alpha_{\mathbf{v},\mathbf{w}}) $$
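A quick numeric confirmation with NumPy, using made-up values for the angle and the vectors:

```python
import numpy as np

def cos_angle(u, w):
    return u @ w / (np.linalg.norm(u) * np.linalg.norm(w))

phi = 0.7
R = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

v, w = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
print(np.isclose(cos_angle(v, w), cos_angle(R @ v, R @ w)))  # True
```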
H: Projection, canonical immersion/submersion - are they equivalent, and are they open maps? I am very confused with the concept of projection with the introduction of immersion and submersion. By the local immersion/submersion theorem, for an immersion/submersion $f$ there is a canonical immersion/submersion that is locally equal to $f$. So does this imply that locally, for an immersion/submersion $f$ at $x$, it is equal to the projection or the canonical injection? Are these three all open maps? Thank you~ AI: Well, projections are open since basic open sets in the product topology are $U \times V$ with $U$ and $V$ open, so projecting down leaves you with $U$ and you're done. Now, since a submersion is locally just a projection, it follows that it is also open. Immersions need not be open though. For example, take the canonical immersion $\mathbb R^1 \to \mathbb R^2$. The domain is surely open but the image is just a line, which can't be open in the plane.
H: Show that if $I$ is an interval and $f:I\to\mathbb R$ is continuous on $I$, then $f(I):=\{f(x):x\in I\}$ is an interval. Show that if $I$ is an interval and $f:I\to\mathbb R$ is continuous on $I$, then $f(I):=\{f(x):x\in I\}$ is an interval. I don't know how to start, can someone please give me ideas? Thanks. AI: $I$ is connected and $f$ is continuous, hence $f(I)$ is connected. The connected subsets of $\mathbb{R}$ are the intervals. To see why $f(I)$ is connected, suppose $f(I) \subset U \cup V$, with $U,V$ open and disjoint. Then $f^{-1}(U)$ and $f^{-1}(V)$ are disjoint and open (since $f$ is continuous), and we have $I \subset f^{-1}(U) \cup f^{-1}(V)$. Since $I$ is connected, we must have $I \subset f^{-1}(U)$ or $I \subset f^{-1}(V)$, and hence we have either $f(I) \subset U $ or $f(I) \subset V $. It follows that $f(I)$ is connected. To see why an interval is connected, suppose $I \subset U \cup V$ with $U,V$ disjoint and open. Suppose $x,y \in I$, with $x \in U, y \in V$. Without loss of generality, suppose $x <y$. Now let $t = \sup \{\tau |\, [x,\tau] \subset U \}$. (Since $y \in V$, we must have $t \le y$.) If we have $t \in U$, then since $U$ is open, we must have $t+\delta \in U$ for some $\delta>0$, which contradicts the definition of $t$. Hence we must have $t \in V$. However, since $V$ is open, we must have $t-\delta \in V$ for some $\delta>0$, which again contradicts the definition of $t$. Hence if $x \in U$, we must have $y \in U$, and so it follows that $I \subset U$. Hence $I$ is connected. To see why a connected subset of $\mathbb{R}$ is an interval: Suppose $C$ is connected, but not an interval. Then we must have $x<z<y$ with $x,y \in C$ and $z \notin C$. It follows that $C \subset (-\infty,z) \cup (z, \infty)$, the union of two open disjoint sets. However this immediately contradicts the fact that $C$ is connected. Hence $C$ must be an interval. Addendum: Proof using the intermediate value theorem: First, note that an interval is a set $I \subset \mathbb{R}$ with the property that if $x,z \in I$, with $x\le z$, then if $y$ is a number such that $x \le y \le z$, then we have $y \in I$ as well. Suppose $f(I)$ is not an interval; then we must have $\alpha,\beta,\gamma$ such that $\alpha<\beta < \gamma$, $\alpha, \gamma\in f(I)$ and $\beta\notin f(I)$. Hence we must have $a,c$ such that $\alpha = f(a)$, $\gamma= f(c)$, and $f(a) < \beta < f(c)$. From the intermediate value theorem we have some $b$ between $a$ and $c$ such that $f(b) = \beta$, which is a contradiction. Hence $f(I)$ is an interval.
H: Stalk of the sheaf of regular functions on a subvariety Suppose $Y$ is a subvariety of a variety $X$ (according to Hartshorne this means if $X$ is quasi-affine or quasi projective then $Y$ is a locally closed subset of $X$, c.f. exercise 3.10, chapter 1). Now given $i : Y \to X$ the inclusion map, I am trying to figure out what the stalk at $x \in Y$ of $i_\ast(\mathcal{O}_Y) $ is. Now if I unwind the definitions, firstly for any open set $U \subseteq X$ we have $i_\ast(\mathcal{O}_Y)(U) = \mathcal{O}_Y(U \cap Y)$. I guess that the stalk at $x$ of $i_\ast (\mathcal{O}_Y) $ should consist of pairs $\langle U \cap Y,f \rangle$ where $f$ is a regular function on $U \cap Y$. However why should it be the case that on stalks the map induced from restriction $(\mathcal{O}_X)_x \to (i_\ast \mathcal{O}_Y)_x$ is surjective? This seems to be saying to me that any regular function on an open subset $U \cap Y$ of $Y$ is the restriction of some regular function on an open subset of $X$, but is this true? AI: 1) Yes, we have for the stalk of $i_\ast(\mathcal{O}_Y) $ at $x$ the equality $(i_\ast \mathcal{O}_Y)_x=\mathcal{O}_{Y,x} $ : as you write, this follows from the definitions. 2) Yes, the canonical morphism of local rings $\mathcal{O}_{X,x}\to \mathcal{O}_{Y,x}$ is an epimorphism. Its kernel is $\mathcal I_x$, the stalk at $x$ of the sheaf of ideals $\mathcal I=\mathcal I_Y$ defining $Y$ in $X$. These are not theorems but essentially definitions: the modern point of view is that closed subvarieties or, better, closed subschemes of a scheme $X$ correspond by definition to quasi-coherent ideals of $\mathcal O_X$: cf. Hartshorne, Proposition II 5.9 and this answer to Ravi Vakil's query, emphasizing that in the affine case closed subschemes of $Spec(A)$ are in perfect bijective correspondence with ideals of the ring $A$. In my opinion elementary presentations of varieties tend to hide this simple and beautiful correspondence, replacing it by ad hoc, easy but not so transparent constructions.
H: Prove that: $a^2+b^2+(1-a-b)^2\ge \frac {1}{3}$ where $a$ and $b$ are any given real numbers. I have tried solving it using partial derivatives. $$ s=a^2+b^2+(1-a-b)^2$$ $$\frac{\partial s}{\partial a}=2a-2(1-a-b) \tag{1}$$ $$\frac{\partial s}{\partial b}=2b-2(1-a-b) \tag{2}$$ At an extremum both (1) and (2) are 0. From here we get two equations, from which we get the values of $a$ and $b$: $$2a-2(1-a-b)=0 \tag{3}$$ $$2b-2(1-a-b)=0 \tag{4}$$ Substituting the values of $a$ and $b$ found from (3) and (4) into the function gives the value $\frac {1}{3}$, but this critical point is not a maximum. AI: Hint: Apply the Cauchy-Schwarz inequality to the vectors $$ \vec{u}=(1,1,1)\qquad\text{and}\qquad\vec{v}=(a,b,1-a-b). $$ Hint: Look at your equations $(3)$ and $(4)$. Can you deduce that $a=b$ at a critical point? Can you then solve for $a$ and $b$?
H: A sub-problem on real number construction Let $\alpha$ be a Dedekind cut and let $w$ be a positive rational number. How does one prove that there exists an integer $a$ such that $aw \in \alpha$? I am able to prove using the Archimedean property that there exists a natural number $b$ such that $bw$ does not belong to $\alpha$. Please don't tell the solution. Only a hint is enough. AI: HINT: $a$ can be a negative integer.
H: pigeonhole principle homework question These are homework questions. They are pigeonhole principle questions and I have a very hard time with these unless I have worked on a similar problem before. Q.1. Prove that if we select 87 numbers from the set $S = \{1,2,3,\ldots,171\}$ then there are at least two consecutive numbers in our selection. Q.2. Let $n \in \Bbb Z^+$. Show that there exist $a,b \in \Bbb Z^+$ with $a \neq b$ such that $n^a-n^b$ is divisible by $10$. Any help, hints would be great. AI: HINTS: Divide $S$ into the sets $\{1,2\},\{3,4\},\{5,6\},\ldots,\{169,170\},\{171\}$. How many sets is that? How many numbers are you choosing from $S$? $n^a-n^b$ is divisible by $10$ if and only if its ordinary base-ten representation ends in $0$. This happens exactly when $n^a$ and $n^b$ end in the same digit. There are only $10$ possible last digits, and there are infinitely many possible exponents, so there must be two powers of $n$ with ... ?
H: $\mathbb{Z}$ as a free product of two groups My question: can the group $(\mathbb{Z},+)$ be written as a free product of two (non-trivial) groups? Thanks AI: Suppose $\mathbb Z \cong G\coprod H$, the free product of $G$ and $H$. But $\mathbb Z$ is abelian, while if $g\in G$ and $h\in H$ are non-trivial elements, then $gh\ne hg$ in the free product. Therefore, at least one of $G$ or $H$ must be trivial.
H: $\{0,1,2,\ldots,9\}$ are randomly assigned to the vertices $\{x_0,x_1,\ldots,x_9\}$ of a decagon. Prove there are 3 consecutive vertices whose sum is at least 14. This is a homework question and has to do with the pigeonhole principle. Could use a hint. Q. The numbers $\{0,1,2,\ldots,9\}$ are randomly assigned to the vertices $\{x_0,x_1,\ldots,x_9\}$ of a decagon. Show that there are 3 consecutive vertices whose sum is at least 14. (hint given by Prof: Consider the numbers $S_0=x_0+x_1+x_2,$ $S_1=x_1+x_2+x_3,$ $ ...,$ $S_9=x_9+x_0+x_1$) I would like to solve it but I don't even have a sense of direction on how to start with this. AI: HINT: Following the suggestion, let $S_0=x_0+x_1+x_2$, $S_1=x_1+x_2+x_3$, and so on up through $S_9=x_9+x_0+x_1$. The numbers $S_0,\dots,S_9$ are the sums of all ten of the possible sets of three adjacent numbers. Suppose that none of them is $14$ or more, i.e., that $S_k\le 13$ for $k=0,1,\dots,9$. Then $$S_0+S_1+S_2+\ldots+S_9\le 10\cdot13=130\;.$$ What is $S_0+S_1+S_2+\ldots+S_9$ algebraically, i.e., in terms of the numbers $x_0,x_1,\dots,x_9$? What is $x_0+x_1+\ldots+x_9$? In view of the inequality displayed above, what does that tell you about $S_0+\ldots+S_9$?
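For the skeptical, a brute-force check over random arrangements (plain Python; it tests the claim, not the proof):

```python
import random

for _ in range(10_000):
    xs = random.sample(range(10), 10)          # random assignment to the decagon
    assert max(xs[i] + xs[(i+1) % 10] + xs[(i+2) % 10] for i in range(10)) >= 14
print("every tested arrangement has 3 consecutive vertices summing to at least 14")
```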
H: Showing that the sum $\sum_{k=1}^n \frac1{k^2}$ is bounded by a constant How can I show that the sum of $1/k^2$ from $k=1$ to $n$ is bounded above by a constant? I tried to bound it by a geometric series from $k=0$ to $n$, adding 1 to $(1/k^2)$ to get the ratio $r$ and the bound $A_0 (1/(1-r)) $. So is that going to be the sum from $k=0$ to $n$ of $2/(k^2+1)$? And is the ratio $r$ equal to $(2/(k^2+1))/(1/k^2)$? AI: You can't bound the series $\sum_{k=1}^{\infty}1/k^2$ with any geometric series because for any $r$ between $0$ and $1$ and any $a$, there will always be some $k$ such that $ar^k<1/k^2$ (relatedly, the ratio test is inconclusive for this series). You can, however, say that (for $k>1$) $$\frac1{k^2}<\frac2{k(k+1)}= 2\left(\frac{1}{k}-\frac{1}{k+1}\right)$$ Now, what can we say about the sum $$ \sum_{k=1}^n\left(\frac{1}{k}-\frac{1}{k+1}\right)? $$
H: Solve the following equation: $x^4- 2x^2 +8x-3=0$ Solve the following equation: $$x^4- 2x^2 +8x-3=0$$ We get 4 equations with 4 variables. But that is too difficult to solve. My try: Let $a,b,c,d$ be the roots of the equation. $$a+b+c+d=0$$ $$\sum ab = -2$$ $$\sum abc = -8$$ $$abcd = -3$$ Is there any other method? AI: Well, the given equation is a product of two quadratics. $$ x^4-2x^2+8x-3=(x^2+2x-1)(x^2-2x+3) $$ Therefore, $$ x=-1\pm\sqrt2 $$ or $$ x=1\pm i\sqrt2. $$
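Both the factorization and the roots are easy to confirm with SymPy (assuming it is available):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(x**4 - 2*x**2 + 8*x - 3))    # (x**2 - 2*x + 3)*(x**2 + 2*x - 1), up to ordering
print(sp.solve(x**4 - 2*x**2 + 8*x - 3, x))  # -1 ± sqrt(2) and 1 ± i*sqrt(2)
```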
H: Algebra and Substitution in Quadratic Form―Einstein Summation Notation Schaum's Outline to Tensor Calculus ― chapter 1, example 1.5 ― If $y_i = a_{ij}x_j$, express the quadratic form $Q = g_{ij}y_iy_j$ in terms of the $x$-variables. Solution: I can't substitute $y_i$ directly because it contains $j$ and there's already a $j$ in the given quadratic form. So $y_i = a_{i \huge{j}}x_{\huge{j}} = a_{i \huge{r}}x_{\huge{r}}$. This implies $ y_{\huge{j}} = a_{{\huge{j}}r}x_r.$ But I already used $r$ (in the sentence before the previous) so I need to replace $r$: $ y_j = a_{j \huge{r}}x_{\huge{r}} = a_{j \huge{s}}x_{\huge{s}}.$ Therefore, by substitution, $Q = g_{ij}(a_{ir}x_r)(a_{js}x_s)$ $$ = g_{ij}a_{ir}a_{js}x_rx_s. \tag{1}$$ $$= h_{rs}x_rx_s, \text{ where } h_{rs} = g_{ij}a_{ir}a_{js}. \tag{2}$$ Equation ($1$): Why can they commute $a_{js}$ and $x_r$? How are any of the terms commutative? Equation ($2$): How does $rs$ get to be the subscript of $h$? Why did they define $h_{rs}$? AI: The names of repeated indices are not important; in expressions like $$y_i=a_{ij}x_j$$ you can change $j$ to any other index different from $i$, as it is repeated. In other words $$y_i=a_{ij}x_j=a_{ik}x_k=a_{il}x_l=\dots$$ are all representations of the same object, i.e. $y_i$. Let us consider the quadratic form $Q=g_{ij}y_iy_j$. It is defined using the indices $i$ and $j$. In the definitions of $y_i$ and $y_j$ we must choose other repeated indices, to avoid confusion with the sum over $i$ and $j$. Let us do it by choosing $k$ and $l$ s.t. $$y_i=a_{ik}x_k, $$ $$y_j=a_{jl}x_l. $$ We arrive at the expression $$Q=g_{ij}y_iy_j=g_{ij}a_{ik}x_ka_{jl}x_l=(\text{the }a_{\bullet\bullet}\text{ are just scalars, and scalars commute})=g_{ij}a_{ik}a_{jl}x_kx_l,$$ or $$Q=\sum_{i,j,k,l}g_{ij}a_{ik}a_{jl}x_kx_l, $$ which is quadratic in the $x_{\bullet}$'s, as claimed. I hope this helps with the first question of yours. In your second question you introduce the quantity $h$ through the components $$h_{rs}:=g_{ij}a_{ir}a_{js}.$$ The question is: why is the definition of $h_{rs}$ well posed? Let us have a better look at the definition of the $h_{rs}$; the $h_{rs}$ are equivalent to $$h_{rs}=\sum_{i,j}g_{ij}a_{ir}a_{js},$$ as the repeated indices are $i$ and $j$. This is really the definition of the Einstein notation, no hidden trick here. The free indices in $$g_{ij}a_{ir}a_{js} $$ are $r$ and $s$. The quantity "$h$" is then defined for each pair of $(r,s)$. In other words, it is a matrix whose components are just the $h_{rs}$'s. I hope it helps.
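The relation $Q = h_{rs}x_rx_s$ with $h_{rs}=g_{ij}a_{ir}a_{js}$ can be checked numerically; a sketch assuming NumPy, with randomly chosen $g$, $a$, $x$:

```python
import numpy as np

n = 3
rng = np.random.default_rng(0)
g, a, x = rng.normal(size=(n, n)), rng.normal(size=(n, n)), rng.normal(size=n)

y = np.einsum('ij,j->i', a, x)                 # y_i = a_ij x_j
Q_direct = np.einsum('ij,i,j->', g, y, y)      # g_ij y_i y_j
h = np.einsum('ij,ir,js->rs', g, a, a)         # h_rs = g_ij a_ir a_js
Q_via_h = np.einsum('rs,r,s->', h, x, x)       # h_rs x_r x_s
print(np.isclose(Q_direct, Q_via_h))           # True
```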
H: $ \lim_{ \varepsilon \rightarrow 0^+ } \int_{|x| \geq \varepsilon} \frac{ \varphi(x) }{x}dx = - \int_{-\infty}^\infty \varphi'(x) \ln(|x|) dx$ How do I prove that $$ \lim_{ \varepsilon \rightarrow 0^+ } \int_{|x| \geq \varepsilon} \frac{ \varphi(x) }{x}dx = - \int_{-\infty}^\infty \varphi'(x) \ln(|x|)dx $$ for all $ \varphi \in C_0^{\infty} (\mathbb{R})?$ I was starting as follows $$ \lim_{ \varepsilon \rightarrow 0^+ } \int_{|x| \geq \varepsilon} \frac{ \varphi(x) }{x}dx = \lim_{ \varepsilon \rightarrow 0^+ } \left( \int_{\varepsilon}^{\infty} +\int_{-\infty}^{- \varepsilon} \right) \frac{ \varphi(x) }{x}dx $$ $$ = \lim_{ \varepsilon \rightarrow 0^+ } \left( \varphi(x)\ln(x)\big|_{\varepsilon}^{\infty} + \varphi(x)\ln(x)\big|_{-\infty}^{- \varepsilon} + \left( \int_{\varepsilon}^{\infty} +\int_{-\infty}^{- \varepsilon} \right) \varphi'(x) \ln(x) dx \right) $$ $$= \lim_{ \varepsilon \rightarrow 0^+ } \left(- \varphi(\varepsilon)\ln(\varepsilon) + \varphi(-\varepsilon)\ln(-\varepsilon) + \left( \int_{\varepsilon}^{\infty} +\int_{-\infty}^{- \varepsilon} \right) \varphi'(x) \ln(x) dx \right) $$ but here I am stuck. Does anyone have any hint on how to proceed? Thanks AI: When you integrate $1/x$ you should get a $\log|x|$, not $\log x$ (check it). Your boundary term should thus be estimated like this: $$|(\varphi(-\epsilon)-\varphi(\epsilon))\ln(\epsilon)|\le(C\epsilon+O(\epsilon^2))|\ln(\epsilon)|\longrightarrow 0$$ as $\epsilon\rightarrow 0$, where the first inequality holds for $\epsilon$ small enough since $\varphi$ is differentiable at $0$. Also note that $\log|x|$ is integrable at $0$, so your principal value term can be rewritten as $$\lim_{\epsilon\rightarrow 0+}\int_{|x|\ge \epsilon}\varphi^\prime(x)\log|x|dx=\int_{-\infty}^\infty \varphi^\prime(x)\log|x|dx$$
H: Name of integration techniques Consider the following integrals: $$\int \sec^3 x\ dx,\ \int \cos x\ e^x\ dx $$ These can be calculated by integration by parts. But here, for instance in calculating the latter example, we meet $$\ast\quad \int e^x \cos x\ dx = e^x \sin x + e^x \cos x - \int e^x \cos x\, dx $$ Note that the integral we want to calculate appears again, much as in the reduction formula for $\int \cos^n x\ dx$. Is there a name for the integration technique in $\ast$? Thank you in advance. AI: My calculus professor called this the "Recursion method." I do not think that is standard, but it does make sense.
H: How to properly evaluate the Riemann sum for the integral of $x^2$ for $x$ from $0$ to $3$ In my homework we start out with $$\int_{x=0}^3 x^2 \, dx=\lim_{P: \Delta x \to 0} \sum_{i = 1}^n f(x_i) (\Delta x)_i$$ Where I take $$P_i=[\frac{i-1}{n},\frac{i}{n}], x_i=\frac{i}{n}, (\Delta x)_i=\frac{3}{n}$$ So then $$\int_{x=0}^3 x^2 \, dx=\lim_{n \to \infty} \sum_{i = 1}^n \frac{i^2}{n^2} \frac{3}{n}$$ With given in my homework $$\sum_{i=1}^n i^2 = \frac{n(n+1)(2n+1)}{6}$$ Makes $$\int_{x=0}^3 x^2 \, dx=\lim_{n \to \infty} \frac{n(n+1)(2n+1)}{2n^3}=\lim_{n \to \infty} \frac{2n^3+3n^2+n}{2n^3}$$ Which leads me to $$\int_{x=0}^3 x^2 \, dx=\lim_{n \to \infty} 1+\frac{3}{2n}+\frac{1}{2n^2}$$ This is not the answer I should be getting... Should I choose $P_i, x_i, (\Delta x)_i$ differently? Am I going wrong somewhere else? AI: Yes, you have a mismatch in the $P_i, x_i, (\Delta x)_i$. For, we always have that for $P_i = [r_1,r_2]$, $(\Delta x)_i = r_2 -r_1$. Another problem is that: $$\bigcup_{i=1}^n P_i = [0,1] \ne [0,3]$$ Can you see how to adjust the $P_i$ and $x_i$ to solve both these issues in one go?
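Once the partition is corrected to cover $[0,3]$, a direct numeric check in plain Python shows the Riemann sums approaching $9$:

```python
# Right-endpoint Riemann sums for x^2 on [0, 3]: x_i = 3i/n, dx = 3/n.
def riemann_sum(n):
    dx = 3 / n
    return sum((i * dx)**2 * dx for i in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, riemann_sum(n))
# 10 10.395..., 100 9.13545..., 1000 9.0135... -> converges to 9 = 3^3/3
```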
H: Prove that a continuous function on a closed interval attains a maximum As the title indicates, I'd like to prove the following: If $f:\mathbb R\to\mathbb R$ is a continuous function on $[a,b]$, then $f$ attains its maximum. Now, I do have a working proof: $[a,b]$ is a connected, compact space, which means that because $f$ is continuous, $f([a,b])$ is compact and connected as well. Therefore, $f([a,b])$ is a closed interval, which means it has both a minimum and, as desired, a maximum. What I would like, however, is a proof that doesn't require such general or sophisticated framework. In particular, I'd like to know if there's a proof that is understandable to somebody beginning calculus, one that (at the very least) doesn't invoke compactness. Any comments, hints, or solutions are welcome and apreciated. AI: Here’s a sketch of one possible argument. Let $u=\sup_{x\in[a,b]}f(x)$. (Note that I allow the possibility that $u=\infty$.) There is a sequence $\langle x_n:n\in\Bbb N\rangle$ in $[a,b]$ such that for each $n\in\Bbb N$, $u-f(x_n)<\frac1{2^n}$ if $u\in\Bbb R$ and $f(x_n)>n$ if $u=\infty$. Extract a monotone subsequence $\langle x_{n_k}:k\in\Bbb N\rangle$. Being a monotone, bounded sequence, $\langle x_{n_k}:k\in\Bbb N\rangle$ converges to some $y$. (Note that you have to use the completeness of $\Bbb R$ in some way, and this is the most elementary that occurs to me.) Moreover, $y\in[a,b]$, and $f$ is continuous, so $f(y)=\lim\limits_{k\to\infty}f(x_{n_k})=u$. (Note that this shows that in fact $u\in\Bbb R$.)
H: show that the function $z = 2x^2 + y^2 +2xy -2x +2y +2$ is at least $-3$ Show that the function $$z = 2x^2 + y^2 +2xy -2x +2y +2$$ is at least $-3$. I tried to factorize but couldn't get more than $(x-1)^2 + (x+y)^2 +(y-1)^2 - (y)^2$. Is there another way to factorize, or another method? AI: HINT: $$z = 2x^2 + y^2 +2xy -2x +2y +2=(x+y+1)^2+(x-2)^2+2-4-1$$ so $z=(x+y+1)^2+(x-2)^2-3\ge -3$, with equality at $(x,y)=(2,-3)$. Derivation: Let $$ 2x^2 + y^2 +2xy -2x +2y +2=(x+y+a)^2+(x+b)^2+2-a^2-b^2$$ $$\implies 2x^2 + y^2 +2xy -2x +2y +2=2x^2+y^2+2xy+2x(a+b)+2ay+2-a^2-b^2$$ Comparing the coefficients of $y$: $a=1$. Comparing the coefficients of $x$: $a+b=-1\implies b=-1-a=-2$.
H: Question about natural logarithm in the exponent of the e-function I wonder which rule dictates that $e^{-2x+\ln(c)}$ is equal to $e^{-2x} \cdot c$. I know that the natural logarithm is the "reverse function" of the e-function, but why isn't it $e^{-2x} + c$ instead? AI: Note that $$ e^{-2x + \ln(c)} = e^{-2x}e^{ \ln(c)} =e^{-2x}c$$ The rule being used is $e^{a+b}=e^a e^b$; the fact that $\ln$ inverts the exponential then gives $e^{\ln(c)}=c$.
H: Simplify Triple Sum — Einstein Summation Notation Schaum's Outline to Tensor Calculus — Chapter 1, Solved problem 1.5 — Use the summation convention to write and state the value of $n$ necessary in: $$g^{\LARGE{1}}_{11} + g^{\LARGE{1}}_{12} + g^{\LARGE{1}}_{21} + g^{\LARGE{1}}_{22} + g^{\LARGE{2}}_{11} + g^{\LARGE{2}}_{12} + g^{\LARGE{2}}_{21} + g^{\LARGE{2}}_{22} $$. My solution. The game plan is just to simplify each of the three indices one at a time in any of the $3 \times 2 \times 1$ orders. Say I start with the superscript — I'd get $ \forall \, {\LARGE{i}} \in \{1,2\} \, g^i_{11} + g^i_{12} + g^i_{21} + g^i_{22} $. Then say I pick the second subscript — subsequently $ \forall \, i, k \in \{1,2\} \, g^i_{1k} + g^i_{2k} $. The ultimate subscript — subsequently $ \forall \, i,j, k \in \{1,2\} \, g^i_{jk} $. Their solution: Set $c_i=1$ for each $i$ ($n=2$). Then the expression may be written \begin{align*} g_{11}^i c_i + g_{12}^i c_i + g_{21}^i c_i + g_{22}^i c_i &= (g_{11}^i + g_{12}^i + g_{21}^i + g_{22}^i)\,c_i \\ &= (g_{jk}^i c_j c_k) c_i = g_{jk}^i c_i c_j c_k. \end{align*} Does their final answer look like mine? But why did they "set $c_i = 1 \, \forall \, i \in \{1,2\}$"? AI: Your solution is equivalent to $$\sum_{i,j,k=1}^2g^i_{jk} $$ and no summation convention (I think it is a way to say "Einstein convention on repeated indices") appears, as pointed out by @Raskolnikov. The aim of the exercise is to arrive at an expression with "repeated indices", i.e. an expression in which you use the summation convention. To do so, one needs to have no free index (like yours $i$, $j$ and $k$) and to contract (or produce pairs of) all indices. It is clear that the starting expression has 3 indices: so 3 summations, or contractions, are needed. The textbook begins by producing a summation over the upper index, called $i$. This is done by introducing the vector $$c=(c_1,c_2)=(1,1)$$ and realizing the sum $\sum_{i=1}^2 g^i_{j,k}$ (which, once again, uses no summation convention) as $$g^1_{j,k}+g^2_{j,k}=\sum_{i=1}^2 g^i_{j,k}=g^i_{j,k}c_i,$$ for any $j,k$. On the rightmost r.h.s. of the above expression we use the Einstein convention, summing over $i$, the only repeated index. Please note that the length of $c$ is equal to $2$, i.e. the cardinality of the set $\{1,2\}$ of all the possible values for $i$. We are left with 2 indices, so 2 more summations are needed. We repeat the above lines to produce the summation w.r.t. $j$ through $$g^i_{1,k}c_i+g^i_{2,k}c_i=\sum_{j=1}^2 g^i_{j,k}c_i=g^i_{j,k}c_ic_j,$$ for any $k$. Repeating the same trick with $k$, the only free index remaining, we arrive at $$\sum_{k=1}^2g^i_{j,k}c_ic_j=g^i_{j,k}c_ic_jc_k,$$ and $$\sum_{i,j,k=1}^2g^i_{jk}=g^i_{j,k}c_ic_jc_k.~~~(1) $$ On the r.h.s. of $(1)$ we have all repeated indices and we are done.
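Numerically, the fully contracted expression with $c=(1,1)$ does reproduce the plain eight-term sum; a NumPy sketch with arbitrary values for $g$:

```python
import numpy as np

g = np.arange(8, dtype=float).reshape(2, 2, 2)   # g[i, j, k], arbitrary values
c = np.ones(2)
total = np.einsum('ijk,i,j,k->', g, c, c, c)     # g^i_{jk} c_i c_j c_k
print(total, g.sum())                            # equal: both sum all 8 components
```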
H: Star graph embeddings This is a homework question which I'm struggling with: Let $S = (V, E, w)$ be a weighted star graph, meaning $S$ is a tree all of whose vertices are leaves except one. I need to: show that every weighted star has an isometric embedding into $\ell_1$; find an example of a weighted star that cannot be embedded isometrically into $\ell_2$. Couldn't really figure out how to approach this question (it's not a topic we have thoroughly covered this semester, hence I'm stuck). Thank you very much! AI: I'm not exactly sure if this is what you want (e.g. I am confused by weights and distances mixing), but I guess the hints below might still help you (if not, let me know in the comments and I will delete this post). Hint: Let $w_1, \ldots, w_n$ be the distances from the center. Check that assigning to the $i$-th node the point $(0,0,\ldots,w_i,0,0,\ldots)$ satisfies your conditions (with the center being just $0$). Set your star to have three arms of lengths 2013,2014,2015. Show that if $p_0, p_1, p_2, p_3$ is a solution where $p_0$ is the center, then $0, p_1-p_0, p_2 - p_0, p_3-p_0$ is another solution (i.e. you can assume WLOG that the center is mapped to $(0,0,0,\ldots)$). Find possible values for $p_1$ and $p_2$ (remember that the distance from $p_1$ to $p_2$ must be $w_1+w_2$). Show that there is no place to put $p_3$ so that all the distances are preserved. I hope this helps ;-)
H: A discrete space of cardinality $\aleph_0$. What does a discrete space of cardinality $\aleph_0$ look like? On finite sets I always get finite discrete spaces; countable sets (i.e. sets of cardinality $\aleph_0$) yield spaces of cardinality $> \aleph_0$, because $2^{\mathbb N}$ is uncountable. AI: It looks like the natural numbers with the standard topology (induced by the order, or from $\Bbb R$). Or it looks like $\Bbb Z$ as well. Or any infinite subset of $\Bbb Z$. Note that the cardinality of the space is the cardinality of the underlying set, not the cardinality of the topology itself. It is true, though, that if $X$ is infinite and $(X,\tau)$ is discrete then $\tau$ is uncountable. But we still say that the topological space is countable in the case where $X$ is countable.
H: A simple inequality in calculus? I have to solve this inequality: $$\left(\left[\dfrac{1}{s}\right] + 1 \right) s < 1,$$ where $ 0 < s < 1 $. I guess that $s$ must be in this range: $\left(0,\dfrac{1}{2}\right]$. But I do not know if my guess is true. If so, how can I prove it? Thank you. AI: If the inequality holds then, since the question states that $s$ is positive, we may divide by $s$ to get $\left[\frac{1}{s}\right]+1<\frac{1}{s}$ and so $\left[\frac{1}{s}\right]<\frac{1}{s}-1$. Assuming that the square brackets represent the 'integer part' function then, as $s>0$, we also know that $\frac{1}{s}>0$ and so $\left[\frac{1}{s}\right]=\frac{1}{s}-\alpha$ for some $\alpha\in[0,1)$. This then gives us $\frac{1}{s}-\alpha<\frac{1}{s}-1\Rightarrow 1<\alpha$ which is a contradiction, and so no $s$ in the range $(0,1)$ exists which satisfies the inequality. If the square bracket notation does not mean 'integer part of' then please explain the notation better.
H: Conditional probability with Bayes' Rule On a practice exam from statistics I encountered a very difficult exercise I couldn't manage to solve: In the tent next to you there is a family with two children. Early in the morning you see a boy coming out of the tent. What is the probability that the other child is a girl? Use Bayes' Rule My approach to the solution was the following: We assume $P(GIRL)$ = 0.5 and similarly $P(BOY)$ = 0.5. We have to compute the following conditional probability: $P($One child is a girl| One child is a boy). By applying Bayes' rule we should be able to compute this probability. Bayes Rule: $P(A|B)$ $=$ $\frac{P(B|A)*P(A)}{P(B|A)*P(A) + P(B|A^c)*P(A^c)}$ Could anyone please help me with this, I tried many things but nothing worked out.. AI: You can obtain two answers to this, actually. The problem is called the Sisters' Paradox. See this excellent explanation. The most common solution, I would say, goes as follows. Let $G$ denote a girl, and $B$ a boy such that $P(BG)$ means probability of a girl and a boy. $P(GG)=P(BB)=1/4$, $P(BG)=1/2$. Conditioning on a boy($P(B)$): $$ P(BG|B)=\frac{P(BG)}{P(B)}=\frac{P(BG)}{P(BB)+P(BG)}=\frac{1/2}{1/2+1/4}=\frac{2}{3} $$ Note that in the reference this way of solving the question yields $1/3$, but that's because in that case it's $P(GG|G)$ (or, equivalently, $P(BB|B)$) rather than one of each. But, $P(BB|B)=1-P(BG|B)=1-2/3=1/3$, so the answers are in spirit the same - just different formulations.
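Under the "at least one child is a boy" reading used above, enumerating the four equally likely families reproduces the $2/3$ (plain Python; the linked paradox discussion explains why other readings of "you see a boy" can give $1/2$ instead):

```python
from itertools import product

families = list(product('BG', repeat=2))   # BB, BG, GB, GG, equally likely
with_boy = [f for f in families if 'B' in f]
mixed    = [f for f in with_boy if 'G' in f]
print(len(mixed), '/', len(with_boy))      # 2 / 3
```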
H: Simple probability / set problem I would love an explanation to. I am currently studying some very elementary set theory in my algebra textbook and I just arrived at this problem (I am a beginner in this so please bear with me): Question: In a specific situation there are $100$ people. At least $70\%$ of these lose an eye, at least $75\%$ lose an ear, at least $80\%$ lose an arm and at least $85\%$ lose a leg. At least how many people get all four of the symptoms? Answer: One way to go about this would be to multiply the proportions of each of these "symptoms": $0.70 \cdot 0.75 \cdot 0.80 \cdot 0.85 = 0.357$, i.e. $35.7\%$. Then there should be at least $35$ people who suffer from all of the symptoms above. The other way to do this is to say that all people are $|U| = 100$ and the symptoms are the sets $A, B, C$ and $D$. We then have $|A| \geq 70$, $|B| \geq 75$, $|C| \geq 80$, $|D| \geq 85$. According to this formula: $|A_1 \cap \dots \cap A_n| \ge |A_1|+ \dots +|A_n| - (n-1)|U|$ We get: $|A \cap B \cap C \cap D| \geq |A| + |B| + |C| + |D| - 3|U| \geq 70 + 75 + 80 + 85 - 300 = 10$ This means that at least $10\%$ of all people get all the symptoms, i.e. at least $10$ people. Why is there such a large discrepancy between these two results? If anyone can explain that to me I would be very happy! Thank you! AI: The first method doesn't really calculate the least number with all 4 ailments; it calculates the exact number, but on the assumption that the ailments are independent of each other. The second method makes no assumption of independence, and really does calculate the least number. Try it with 2 ailments, where things aren't so complicated; it will be easier to see what's going on.
H: Stirling numbers combinatorial proof: $S(m,n)=\frac 1{n!} \sum_{k=0}^{n} (-1)^k\binom nk (n-k)^m$ This is a homework question. I am required to give a combinatorial proof of the following. $$S(m,n)=\frac 1{n!} \sum_{k=0}^{n} (-1)^k\binom nk (n-k)^m$$ The hint given is: Show that $n!S(m,n)$ equals the number of onto functions $f\colon A \rightarrow B$ when $ |A|=m$ and $|B|=n. $ There were some other combinatorial proof questions on the assignment which I found easier to do but this one not so much. Could use help. AI: Let $T$ denote the set of functions $f:A\to B$ where $|A|=m$ and $|B|=n$ and $$S=\{f\in T \mid f \text{ is surjective}\}$$ If $f$ is surjective, then $f^{-1}(b)$ is non-empty for all $b\in B$. There are $S(m,n)$ ways to partition $m$ elements into $n$ non-empty sets. For each such partition $A_1,\dots, A_n$, there are $n!$ ways to assign the pre-images $f^{-1}(b_1),\dots,f^{-1}(b_n)$, so $$|S| = n!S(m,n)$$ Now count the elements of $S$ again in another way: For each $b\in B$, define $$M_b = \{ f: A\to B \mid f^{-1}(b)=\varnothing\}$$ Then $$S = T \setminus\bigcup\limits_{b\in B}M_b$$ Now use the Inclusion-exclusion principle to calculate $|S|$.
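A brute-force cross-check of the identity for small $m,n$ (plain Python; the helper names are invented for this sketch):

```python
from itertools import product
from math import comb, factorial

def S_formula(m, n):
    return sum((-1)**k * comb(n, k) * (n - k)**m for k in range(n + 1)) // factorial(n)

def onto_count(m, n):
    # directly count surjections from an m-set onto an n-set
    return sum(1 for f in product(range(n), repeat=m) if len(set(f)) == n)

for m in range(1, 6):
    for n in range(1, m + 1):
        assert onto_count(m, n) == factorial(n) * S_formula(m, n)
print("n! S(m,n) equals the number of onto functions for all tested m, n")
```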
H: If the * of morphisms (poly. maps) are equal, are the morphisms equal? Let $t,s:X\rightarrow Y$ be polynomial maps between affine varieties and $t_*,s_*:k[Y]\rightarrow k[X]$ be their images under the representable contravariant functor. We've learnt that for any $\tau:k[Y]\rightarrow k[X]$ there exists a $t:X\rightarrow Y$ such that $t_*=\tau$. ('* is a surjective functor', is it proper to say that?). My question is then, is * also injective, meaning that if $t_*=s_*$ then $t=s$? (Note that my teacher doesn't use category-theoretic language; I project my little category theory knowledge upon the algebraic geometry we learn). Edit: I think that it is, because if $t(x_1,..x_n)=(f_1(x),..f_m(x)), s(x)=(g_1(x),..g_m(x))$ and we look at $y_i$ as polynomial maps, then $$ f_i=y_i(t(x))=t_*(y_i)=s_*(y_i)=y_i(s(x))=g_i$$ Am I correct? AI: Yes, sure. Even more is true (and well known): The coordinate ring gives an (anti)equivalence of categories between affine varieties and finitely generated $k$-domains. What you call "surjective functor" is called "full functor", and what you call "injective functor" is called "faithful functor". If both are satisfied, one calls the functor "fully faithful".
H: Solution of $\tan x^3 = -\frac 32 x^3$? How to solve $\tan x^3 = -\frac 32 x^3$? Could you give me advice? AI: Given $\tan(x^3)=-\frac{3}{2}x^3$, substitute $u=x^3$; on the principal branch the equation reads $u=\arctan\left(-\frac{3}{2}u\right)$. The curve $y=u$ lies in the 1st and 3rd quadrants and passes through the origin (a positive input gives a positive output, a negative input a negative one). The curve $y=\arctan\left(-\frac{3}{2}u\right)$ lies in the 2nd and 4th quadrants and also passes through the origin. Therefore their only point of intersection is $(0,0)$, which gives $x=0$. Note, however, that this argument only covers the principal branch: since $\tan$ is periodic, $\tan u = -\frac32 u$ has one further solution in each interval $\left(\frac{(2k-1)\pi}{2}, k\pi\right)$ for $k=1,2,\dots$ (and symmetrically for $u<0$), so $x=0$ is the unique solution only if one restricts to principal values.
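A numeric illustration of one of those non-principal-branch roots, assuming SciPy is available:

```python
# tan(u) + 1.5*u changes sign on (pi/2, pi), so a root exists there.
import math
from scipy.optimize import brentq

g = lambda u: math.tan(u) + 1.5 * u
u0 = brentq(g, math.pi / 2 + 1e-6, math.pi - 1e-6)
print(u0, u0 ** (1 / 3))   # u ≈ 1.91, so x = u^(1/3) ≈ 1.24 also solves the equation
```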
H: A simple inequality about integer parts of numbers This question follows A simple inequality in calculus?. I have to solve this inequality in $s$: $$\left(\left[\dfrac{r}{s}\right] + 1 \right) s \le 1,$$ where $ 0 < s < 1 $ and also $ 0 < r < 1 $. The inequality comes from a computer programming problem: $r$ is an input the user enters when the app runs, and $s$ is another user input that must be entered before $r$, so I have to find which $s$ in the range $\left(0,1\right)$ satisfy the inequality for all $r$. That way, if the user chooses an $s$ outside that range, I can notify him and he can try again. I guess that $s$ must be in this range: $\left(0,\dfrac{1}{2}\right]$. But I do not know if my guess is true. If so, how can I prove it? Thank you. AI: If $s$ is not the reciprocal of an integer (i.e. $\frac1s\notin\mathbb N$), then for $r$ sufficiently close to $1$, we have $$\left\lfloor \frac rs\right\rfloor=\left\lfloor \frac 1s\right\rfloor$$ and $$ \left(\left\lfloor \frac rs\right\rfloor+1\right)s=\left(\left\lfloor \frac 1s\right\rfloor+1\right)s>\left(\frac1s-1+1\right)s=1.$$ On the other hand, if $s=\frac1n$ with $n\in\mathbb N$ (and of course $n\ge2$), then $\frac rs<n$, hence $$\left\lfloor \frac rs\right\rfloor\le n-1$$ and $$ \left(\left\lfloor \frac rs\right\rfloor+1\right)s\le\left(n-1+1\right)s=1.$$ So $s$ has the desired property iff it is the reciprocal of a natural number.
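A quick scan in plain Python supports the conclusion; exact arithmetic with Fraction avoids floating-point artifacts:

```python
from math import floor
from fractions import Fraction

def holds_for_all_r(s, samples=2000):
    rs = (Fraction(i, samples + 1) for i in range(1, samples + 1))  # r in (0, 1)
    return all((floor(r / s) + 1) * s <= 1 for r in rs)

for s in [Fraction(1, 2), Fraction(1, 3), Fraction(2, 5), Fraction(3, 7)]:
    print(s, holds_for_all_r(s))
# 1/2 True, 1/3 True, 2/5 False, 3/7 False: only reciprocals of integers survive
```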
H: Small arguments of $f$ vs large arguments of $\widehat{f}$ Say I know the behavior of $f:\mathbb{R} \to \mathbb{R}$ in the vicinity of 0. Are there any results linking that to the behavior of its Fourier transform $\widehat{f}(\xi )$ for large values of $|\xi |$? I did not assume that $f$ belongs to some certain function space on purpose since I do not know when any such result would be valid. As a matter of fact I would mostly be interested in the case when $f\in \mathcal{S}'(\mathbb{R})$ is a tempered distribution. Of course the Riemann-Lebesgue lemma says what happens if $f\in L^1(\mathbb{R})$ regardless of any local description. EDIT: Let me clarify my question somewhat. What I have is a (kind of) Taylor expansion of $f$ at 0 and I'm wondering if the transformed series says something about the behavior of $\widehat{f}$ at $\infty $? AI: There are two types of results that I know of, maybe they will satisfy your curiosity, but doubtlessly there are more: 1) Suppose both $f$ and $\widehat{f}(x)$ are real. In one direction, if $f(x) = 0$ in $(-a,a)$, a small neighborhood of $0$, then $\widehat{f}(x)$ has at least $r a/\pi$ zeros in any interval $[0,r]$ as $r \rightarrow \infty$. SEE: http://arxiv.org/abs/math/0301060 2) Suppose that $f(x) = 0$ away from a small neighborhood $(-a,a)$ of $0$. Then $\widehat{f}(x)$ is entire, of exponential type, and thus "spreads out" across the entire real line; in particular $\widehat{f}$ cannot decay too fast, or that would contradict its analyticity. SEE: This is rather classical. I recommend looking at the "uncertainty principle" and "Paley-Wiener" in any book on harmonic analysis and a book about entire functions such as Levin's "Lectures on entire functions". For a very nice take on the uncertainty principle take a look at http://www.math.msu.edu/~fedja/Published/paper.ps
H: ODE phase portrait and vector function interpretation I do not quite remember how to plot a vector function (or maybe I do). Consider the ODE: \begin{equation} x' = \begin{pmatrix}1&1\\-1&1\end{pmatrix}x \end{equation} I have found the general solution: \begin{equation} x(t) = c_1e^t\begin{pmatrix}-\sin(t)\\ \cos(t)\end{pmatrix} +c_2e^t\begin{pmatrix}\cos(t)\\\sin(t)\end{pmatrix} \end{equation} I understand that the phase portrait spirals away from the origin as $t\to\infty$. What I do not understand is which direction the spiral starts in. It would make sense to plug in some values for $x$ and see what values of $x'$ we get: \begin{equation} x=\begin{pmatrix}0\\1\end{pmatrix} \implies x' = \begin{pmatrix}1\\1\end{pmatrix} \end{equation} Does this mean that at the point $(0,1)$ the slope of the spiral is in the direction $x' = \begin{pmatrix}1\\1\end{pmatrix}$? Thanks for helping me clarify! AI: Yes: at the point $(0,1)$ the velocity vector is $x' = (1,1)^T$, so the trajectory through that point is tangent to the direction $(1,1)$ there. Since your eigenvalues $1\pm i$ have positive real part, the trajectories spiral outward. You can figure out the critical point by setting up where the two equations are simultaneously equal to zero, that is: $x+y = 0$ $-x+y=0$ This leads to a CP of $x = 0, y = 0$ Here is the phase portrait showing this behavior (notice the direction arrows (green) and the solutions (blue)) and the CP $(x, y) = (0,0)$: Note: since we have a solution as a function of $t$, you can do a parametric plot of $x(t)$ versus $y(t)$ for a single set of initial conditions; for example, choose $c_1 = c_2 = 1$ and use Wolfram Alpha. If you repeat this for a bunch of different $c_1, c_2$, you will also see what we have from the phase portrait. Lastly, you might find these instructive: MIT Opencourseware on Phase Portraits Paul's Online Notes - Phase Plane.
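A minimal sketch of such a phase portrait, assuming NumPy and Matplotlib are installed; the direction field contains, for example, the arrow $(1,1)$ at the point $(0,1)$:

```python
import numpy as np
import matplotlib.pyplot as plt

A = np.array([[1.0, 1.0], [-1.0, 1.0]])
x, y = np.meshgrid(np.linspace(-2, 2, 15), np.linspace(-2, 2, 15))
u = A[0, 0]*x + A[0, 1]*y        # first component of x' = Ax
v = A[1, 0]*x + A[1, 1]*y        # second component
plt.quiver(x, y, u, v)
plt.title("x' = Ax: outward spiral (eigenvalues 1 ± i)")
plt.show()
```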
H: Substitution for $\int \frac {dx} {ax^2 + bx + c}$ I'm looking for the substitution that makes easier to solve integral containing quadratic polynomial in denominator (!) when such polynomial cannot be broken into parts (if it can, then it's possible to use partial fraction decomposition). Example: $$\int \frac {dx} {5x^2 + x - 2}$$ Formulas suggested by wikipedia are hard to remember. So I hope there is some kind of substitution. link: http://en.wikipedia.org/wiki/List_of_integrals_of_rational_functions#Integrands_of_the_form_xm_.2F_.28a_x2_.2B_b_x_.2B_c.29n AI: If you reject use of complex numbers, and your denominator cannot be factored over the reals, then the method to use is: complete the square in the denominator, which will then suggest a trigonometric substitution. Even if it CAN be factored, you could still complete the square and use a hyperbolic substitution...
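A SymPy illustration of the two cases (assuming it is installed): an irreducible quadratic leads to an arctan, while the asker's example, whose discriminant $1+40=41$ is positive, actually factors over the reals and produces logarithms:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(1 / (x**2 + x + 1), x))    # an arctan expression
print(sp.integrate(1 / (5*x**2 + x - 2), x))  # a log expression (disc = 41 > 0)
```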
H: Move Point A along a line Sorry, can't post images if my rep is below 10, and can't post more than 2 links. I removed the http section so it won't count as a link. I hope this isn't against forum rules, I'm not hurting anyone. I checked other questions, like this one (A line moving along the hypotenuse of a right triangle) but the answer was too complicated for me to understand. If someone can explain it again, can you please do it in simpler terms? AI: Assuming that $C$ is the origin, try $P = d\cdot(\cos(\angle A),\sin(\angle A)) + (A-C)$ where $d$ is the length you need to move your point from point $A$, or try $P = (-d')\cdot(\cos(\angle A),\sin(\angle A)) + (B-C)$ where $d'$ is the length from point $B$. In fact you could have skipped the trigonometry thing, just set $k = \frac{d}{\mathrm{Length}(AB)}$ or $k' = \frac{d'}{\mathrm{Length}(AB)}$ and calculate $$P = k\cdot B + (1-k)\cdot A$$ or $$P = k'\cdot A + (1-k')\cdot B.$$ I hope this helps ;-)
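A minimal Python sketch of that last parameter form (the function name is my own): moving a fraction $k = d/\mathrm{Length}(AB)$ of the way from $A$ to $B$ is just $A + k(B-A) = kB + (1-k)A$.

    def point_along(A, B, d):
        """Return the point at distance d from A toward B.

        A, B are (x, y) tuples; k = d / |AB| is the fraction travelled.
        """
        ab = (B[0] - A[0], B[1] - A[1])
        length = (ab[0] ** 2 + ab[1] ** 2) ** 0.5
        k = d / length
        return (A[0] + k * ab[0], A[1] + k * ab[1])

    # Example: one third of the way from (0, 0) to (3, 0) is (1, 0).
    print(point_along((0, 0), (3, 0), 1.0))  # (1.0, 0.0)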
H: Wikipedia example Cauchy's Integral formula I do not really understand the Wikipedia example that illustrates the usage of Cauchy's integral formula. The exact point I do not get is how they argue that one can split this up into two integrals. Their argument is that this is given by Cauchy's integral theorem. But since there are plenty of versions of this theorem available, I may be missing something, because I do not see how this could be deduced from the integral theorem. AI: You want to show that the integral over the original circle equals the integral over two smaller circles, one around each pole. This is the same as showing that the sum of the integrals over the three circles is zero, where the big circle is taken counterclockwise and the smaller circles are taken clockwise. This is the integral on the boundary of a region inside the big circle and outside the two smaller circles. 1) First, draw a diameter through the circle that separates the two poles. 2) Next, notice that the original integral is the sum of the integrals over the semicircles you just created, since the contributions on the diameter cancel. 3) Inside each semicircle is a small circle around a pole. The region between the circle around the pole and the semicircle looks like an annulus with a flat side. You can cut this lopsided annulus into two pieces which are "horseshoe shaped". The result is four horseshoe shaped pieces. The sum of the integrals on the boundaries of these pieces gives the original integral. The Cauchy Integral Theorem says that the integral on the boundary of each piece is zero, because each piece is simply connected and the function is analytic on it. The second cut is a useful trick to remember - I've seen it in at least one proof of the existence of Laurent expansions on an annulus. You can also do this with just one cut, by cutting on a chord of the original circle which goes through both small circles.
H: Taking the derivative of $x^{\sin(e^x)}$ How am I supposed to take the derivative of $f(x)=x^{\sin(e^x)}$? What should I set $u$ equal to? I tried $u=\sin(e^x)$ and $u=e^x$, but they didn't work. AI: Hint: Here, you want to start off by using logarithmic differentiation - that is, take the derivative of $\ln f(x)$, then use it to find the derivative of $f$. Note that $$ \ln(x^{\sin(e^x)})=\sin(e^x)\ln(x). $$ Try differentiating that. Then, use $$ \frac{d}{dx}\left[\ln f(x)\right]=\frac{f'(x)}{f(x)} $$ to solve for $f'(x)$.
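If you want to check your result, here is a quick sympy sketch (assuming sympy is installed); carrying the hint through by hand gives $f'(x)=x^{\sin(e^x)}\bigl(e^x\cos(e^x)\ln x+\frac{\sin(e^x)}{x}\bigr)$, and the difference below should simplify to $0$:

    import sympy as sp

    x = sp.symbols("x", positive=True)
    f = x ** sp.sin(sp.exp(x))

    # Derivative computed by sympy, versus the hand-derived formula.
    hand = f * (sp.exp(x) * sp.cos(sp.exp(x)) * sp.log(x) + sp.sin(sp.exp(x)) / x)
    print(sp.simplify(sp.diff(f, x) - hand))  # expect 0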
H: Smallest value of n for two algorithms with a certain running time If one algorithm has a running time of $100n^2$ and another of $2^n$, how can I find the smallest value of $n$ such that the former is faster than the latter? I could do: $100n^2 < 2^n$, then $\ln(100n^2) < n\ln(2)$, but how do I simplify the left side? AI: You can't simplify the left side into anything that solves for $n$ in elementary closed form; the inequality $100n^2 < 2^n$ is transcendental (solving it exactly would require the Lambert $W$ function). The best method is to plot rough graphs of $100n^2$ and $2^n$ on the same pair of axes to locate the crossing point, and then refine it by checking integer values of $n$ directly.
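Since the comparison is over integers anyway, a tiny brute-force check settles it; here is a sketch in Python (exact integer arithmetic, so no rounding issues):

    # Find the smallest n with 100 * n**2 < 2**n.
    n = 1
    while 100 * n * n >= 2 ** n:
        n += 1
    print(n)  # prints 15: 2**15 = 32768 > 100 * 15**2 = 22500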
H: Taking the derivative of $f(x)=x^{e^{e^x}}$ How can I take the derivative of $f(x)=x^{e^{e^x}}$? How do I apply the chain rule? Thanks for the help! AI: $f(x)= e^{e^{e^x}\ln x}$ (smooth on $(0,\infty)$), so $$ f^\prime(x)= \left(\frac{d}{dx}e^{e^x}\ln x\right)\cdot e^{e^{e^x}\ln x} $$ Only the first term is somehow tricky, but using the rules for the derivative of a product of functions and using at some point the chain rule to get $\frac{d}{dx}e^{e^x} = e^xe^{e^x}$, you'll be fine.
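Carrying that product rule through, the result is worth writing out once: $$f'(x) = x^{e^{e^x}}\left(e^xe^{e^x}\ln x + \frac{e^{e^x}}{x}\right) = x^{e^{e^x}}\,e^{e^x}\left(e^x\ln x + \frac1x\right).$$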
H: Circles passing through three given points How many such circles exist which pass through three given points in 2 dimensions? Is it one unique circle, or possibly more than one? Is there any proof? AI: If the three points are not collinear, you can prove that there exists a unique circle by proving that its center lies on the two perpendicular bisectors of the segments connecting the three points. [The existence is done by proving that the circle with center at the intersection of the two perpendicular bisectors and radius the distance to one of the points works; the uniqueness is done by proving that any circle passing through the three points has that center and radius.] If the three points are collinear, there is no circle (unless you consider a line as a circle of infinite radius). This can be proven by showing that if the three points are on a line in the order $A,B,C$ (i.e. $B$ between $A$ and $C$), then for any point $O$ in the plane we have $$OB < OA \mbox{ or } OB <OC \,.$$ To prove this, just observe that $ \angle OBA+ \angle OBC =180^\circ$, thus at least one of them is $\geq 90^\circ$. If it is $\angle OBA \geq 90^\circ$, then in $\Delta OBA$ the edge $OA$ is the biggest as it opposes the biggest angle. Added: If the three points are not necessarily distinct (which from how the problem is stated I don't think is the case), then it is easy to prove there are infinitely many circles. Case 1: All three points are the same; then it is easy to construct infinitely many circles. Case 2: Two points are the same, but the third is distinct. Ignore one of the two equal points. Then any circle with its center on the perpendicular bisector of the segment connecting the two points, and the right radius, will do.
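The perpendicular-bisector argument translates directly into two linear equations for the center $c$: expanding $|c-p_1|^2=|c-p_k|^2$ gives $2(p_k-p_1)\cdot c = |p_k|^2-|p_1|^2$. A minimal numpy sketch (my own function name):

    import numpy as np

    def circumcircle(p1, p2, p3):
        """Center and radius of the circle through three non-collinear points."""
        p1, p2, p3 = map(np.asarray, (p1, p2, p3))
        # |c - p1|^2 = |c - pk|^2 simplifies to 2 (pk - p1) . c = |pk|^2 - |p1|^2.
        M = 2 * np.array([p2 - p1, p3 - p1], dtype=float)
        b = np.array([p2 @ p2 - p1 @ p1, p3 @ p3 - p1 @ p1], dtype=float)
        c = np.linalg.solve(M, b)  # singular M <=> collinear points: no circle
        return c, np.linalg.norm(c - p1)

    print(circumcircle((0, 0), (2, 0), (0, 2)))  # center (1, 1), radius sqrt(2)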
H: Dependence of vectors: before and after linear transformation I have a pretty simple question that confused me: $V$ is a vector space of finite dimension, and $T: V \to V$ is a linear transformation. The information given in the question: $\operatorname{Im} T = \ker T$. I want to know if what I'm doing is right. I took, only for example, $\dim \ker T = \dim \operatorname{Im} T = 2$. I take $D = \{v_1,v_2\}$ as some basis for $\ker T$ and $\operatorname{Im} T$, then I add vectors so that, say, $B = \{v_1,v_2,v_3,v_4\}$ is a basis of $V$, and I demand that $v_3$ and $v_4$ satisfy these requirements: $T(v_3) = v_1$, $T(v_4) = v_2$. (I did this for a reason; I didn't write here the whole question.) But I have to prove that this is a basis of $V$: linearly independent and spanning $V$. So I tried to prove that it is linearly independent. I assume: $a_1v_1 + a_2v_2 + a_3v_3 + a_4v_4 = 0$. Then I applied $T$ to it: $T(a_1v_1 + a_2v_2 + a_3v_3 + a_4v_4) = 0$, i.e. $a_1T(v_1) + a_2T(v_2) + a_3T(v_3) + a_4T(v_4) = 0$. $v_1$ and $v_2$ are from $\ker T$ so $T(v_1) = 0$ and $T(v_2) = 0$, so I get that there is a linear dependence, but I don't know where the dependence is: does the computation I did mean that $\{v_1,v_2,v_3,v_4\}$ are linearly dependent, or does it mean ONLY that $T(v_1)$, $T(v_2)$, $T(v_3)$, $T(v_4)$ are dependent? Thank you AI: Since $\dim\mathop{\rm Ker} T > 0$, the operator $T$ is singular, hence it reduces the dimension of the space (because $\dim\mathop{\rm Im} T = n - \dim\mathop{\rm Ker} T < n$), meaning it maps any basis to a linearly dependent set (of vectors that span its image). In other words, $T(B)$ is a set of $4$ linearly dependent vectors, because $T(B) \subseteq \mathop{\rm Im} T$, which is a 2-dimensional space (so it cannot have more than 2 linearly independent vectors).
H: Primary decomposition example I want to find the primary decomposition of $(x^2, xy^2)$ as an ideal of $k[x,y,z]$ where $k$ is some field. My guess is $(x^2, xy^2) = (x) \cap (x^2, y^2)$ however I am not 100% certain if $(x^2, y^2)$ is a primary ideal. My approach to see this was to use the fact that $I$ is primary iff all zero divisors of $R/I$ are nilpotent. I argued that if we have some $p(x,y,z), q(x,y,z)$ whose images in $k[x,y,z]/(x^2,y^2)$ are zero divisors, then $p$ and $q$ can't have any $z$ terms (their product must be in $(x^2,y^2)$) or any constant terms either. Then any zero divisor is of the form $ax+by + cxy$ which is nilpotent since raising to a high enough power and expanding via the binomial theorem, every term will have at least $x^2$ or $y^2$ in it. Something about this seems too strong to me. This reasoning would prove that in $k[x,y]$ any ideal that contains $x^k$ and $y^l$ for some $k$ and $l$ is a primary ideal. Is that true? AI: The radical of ideal $(x^2,y^2)$ is $(x,y)$ which is maximal and this is enough to conclude that $(x^2,y^2)$ is primary. (This shows that any proper ideal containing $x^k$ and $y^l$ is primary, since its radical must contain the radical of $(x^k,y^l)$ which is also $(x,y)$.)
H: Show that if $p,q,r,s$ are real numbers and $pr=2(q+s)$, then at least one of the equations $x^2+px+q=0$ and $x^2+rx+s=0$ has real roots. Show that if $p,q,r,s$ are real numbers and $pr=2(q+s)$, then at least one of the equations $x^2+px+q=0$ and $x^2+rx+s=0$ has real roots. My attempt at a solution: we know that to have a real solution the discriminant must satisfy $d\ge 0$, so either 1) $p^2-4q\ge 0$ or 2) $r^2-4s\ge 0$, or both are true. Rearranging we get $(pr)^2 \geq 16qs$; substituting it in the first equation we get $16qs\geq4q^2 +4s^2 +8qs$. So we get $0\geq(q-s)^2$, so we get $q=s$. Now what to do..? Is there another way to solve this? AI: Given: $pr = 2(q+s)$. To prove: either $p^2 - 4q \geq 0$ or $r^2 - 4s \geq 0$. It is best to argue by contradiction -- assume both $p^2 - 4q < 0$ and $r^2 - 4s < 0$. Then upon rearranging and adding the inequalities, $$ \begin{array}{rcl} p^2 + r^2 &<& 4q + 4s \\ \frac{p^2 + r^2}{2} &<& 2(q + s) = pr \end{array} $$ Can you see how to finish from here? If not, post a comment and I'll write more. Hope this helps!
H: $I$ semisimple + $R/I$ semisimple $\implies$ $R$ semisimple Let $R$ be a (not necessarily commutative) ring with unit. Let $I\subset R$ be an ideal that in turn is a ring with unit. Is there a theorem that says something like $I$ semisimple and $R/I$ semisimple implies $R$ semisimple? AI: Let $a$ be the unit of the two-sided ideal $I$. First note that $a$ is a central idempotent: for any $x\in R$ both $ax$ and $xa$ lie in $I$, so $ax=(ax)a=a(xa)=xa$, and $a^2=a$. Then the map $R\rightarrow R$, $x\mapsto ax$ is a ring map, since $ax\cdot ay=a^2xy=axy$ ($a$ is the unit of $I$). The image of the map is $I$. Let the kernel be $J$. Since $a$ is an idempotent, $I\cap J=0$. Thus, $R=I\oplus J$, and $J\cong R/I$. If $A$ and $B$ are semi-simple rings, so is $A\oplus B$.
H: Measurable cardinals as sets A philosopher said that measurable cardinals are the largest possible sets. Is this true? Are those sets at all? I mean, cardinals measure the size of sets (for example $2=\{\{\},\{\{\}\}\}$), but can we represent measurable cardinals similarly? And is it true that they cannot exist because defining them requires a measurable cardinality's worth of text, as he says? AI: No, they are not the largest possible sets. There is no largest possible set (Cantor's theorem): If $A$ is a set, then $\mathcal P(A)$, the collection of all subsets of $A$, is a set of size strictly larger than the size of $A$. Nor is there a set $B$ that contains sets of all sizes, or such that every set is smaller than one of the sets in $B$: Take the union $C$ of all sets in $B$. Then the power set of $C$ is strictly larger than all the sets in $B$. And yes, they are ordinals, so they are sets. Special sets, at that, in the sense that they are transitive (so if $\kappa$ is measurable, and $a\in\kappa$, then also $a\subset\kappa$) and well-ordered by $\in$ (so $\in$ gives us a linear order on the elements of $\kappa$, and any non-empty subset of $\kappa$ has an $\in$-first element). Some set theorists in fact expect/assume/work under the assumption that there is not just one measurable cardinal, but in fact a proper class of them, so given any set, there is a measurable cardinal that is larger. Others may on occasion assume that there are no measurables. But again, they can never be the largest size. One possible explanation for what was meant is that if $\kappa$ is measurable, then $H(\kappa)$ satisfies all the axioms of set theory (is a model), and this model has size precisely $\kappa$. Here, $H(\kappa)$ is the collection of all sets $A$ such that $A$, and every $T\in A$, and every $S$ in such a $T$, and every $R$ in such an $S$, etc., all of them, have size strictly smaller than $\kappa$. Informally one sometimes describes this situation by saying that the (proper) class $\mathsf{ORD}$ of all ordinals is measurable. For example, this is how the early descriptions of Steel's work on the core model were presented. Measurability is an example of a large cardinal property. As far as large cardinals go, it is on the small side of the spectrum. Nowadays, we routinely consider properties that give us much, much larger cardinals. You may want to take a look at Cantor's attic for some examples.
H: $1, e^{ix}, e^{-ix}$ are linearly independent Consider the space of all functions $f: \mathbb{R}\longrightarrow \mathbb{C}$. Prove that $\{1, e^{ix}, e^{-ix}\}$ are linearly independent vectors. AI: Recall the definition of linear independence: vectors are independent when the only linear combination of them equal to the zero vector is the trivial one. Here the zero vector is the zero function, so suppose $$C_1 \mathbf{1}+C_2 e^{ix} + C_3 e^{-ix}=0\quad \text{for all } x\in \mathbb{R}.$$ Evaluating at $x=0$, $x=\pi/2$ and $x=\pi$ gives $$C_1+C_2+C_3=0,\qquad C_1+i(C_2-C_3)=0,\qquad C_1-C_2-C_3=0.$$ Adding the first and third equations gives $C_1=0$ and hence $C_2+C_3=0$; the second equation then forces $C_2=C_3$, so $C_2=C_3=0$ as well. Thus the three functions are linearly independent.
H: Partial fraction integration $\int \frac{dx}{(x-1)^2 (x-2)^2}$ $$\int \frac{dx}{(x-1)^2 (x-2)^2} = \int \frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{x-2}+\frac{D}{(x-2)^2}\,dx$$ I use the cover up method to find that B = 1 and so is C. From here I know that the cover up method won't really work and I have to plug in values for x but that won't really work either because I have two unknowns. How do I use the coverup method? AI: To keep in line with the processes you are learning, we have: $$\frac{1}{(x-1)^2 (x-2)^2} = \frac{A}{x-1}+\frac{B}{(x-1)^2}+\frac{C}{x-2}+\frac{D}{(x-2)^2}$$ So we want to find $A, B, C, D$ given $$A(x-1)(x-2)^2 + B(x-2)^2 + C(x-1)^2(x-2) + D(x-1)^2 = 1$$ As you found, when $x = 1$, we have $B = 1$, and when $x = 2$, we have $D = 1$. Now, we need to solve for the other two unknowns by creating a system of two equations in two unknowns $A, C$, given our known values $B = D = 1$. Let's pick easy values for $x$: $x = 0$ and $x = 3$. $$A(x-1)(x-2)^2 + B(x-2)^2 + C(x-1)^2(x-2) + D(x-1)^2 = 1\quad (x = 0) \implies$$ $$A(-1)((-2)^2) + B\cdot (-2)^2 + C\cdot (1)\cdot(-2) + D\cdot (-1)^2 = 1$$ $$\iff - 4A + 4B - 2C + D = 1 $$ $$B = D = 1 \implies -4A + 4 - 2C + 1 = 1 \iff 4A + 2C = 4\tag{x = 0}$$ Similarly, $x = 3 \implies $ $2A + 1 + 4C + 4 = 1 \iff 2A + 4C = -4 \iff A + 2C = -2\tag{x = 3}$ Now we have a system of two equations in two unknowns and can solve for $A, C$; solving it gives $A = 2, C= -2$. Now we have $$\int\frac{dx}{(x-1)^2 (x-2)^2} = \int \frac{2}{x-1}+\frac{1}{(x-1)^2}+\frac{-2}{x-2}+\frac{1}{(x-2)^2}\,dx$$
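If you want to double-check a decomposition like this, sympy's apart does exactly this computation (a quick sketch, assuming sympy is available):

    import sympy as sp

    x = sp.symbols("x")
    print(sp.apart(1 / ((x - 1) ** 2 * (x - 2) ** 2), x))
    # expect: 2/(x - 1) + (x - 1)**(-2) - 2/(x - 2) + (x - 2)**(-2)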
H: Need help with derivation of conditional expectation The following is taken from the book "Mathematical Statistics for Economics and Business": \begin{align*} E\left.\left( \left[ Y-h(x) \right]^2\ \right\vert\ x\right) =& E\left.\left( \left[\,Y-E(Y|x)+E(Y|x)-h(x)\,\right]^2\ \right\vert\ x\right)\\ =& E\left( \left.\left[\,Y-E(Y|x)\,\right]^2\ \right\vert\ x\right) + \left(E(Y|x)-h(x)\right)^2\\ &\text{(by substitution theorem)}. \end{align*} I don't understand this derivation. Please help ... AI: Start with $\mathbb{E}[(Y-h(x))^2|x]=\mathbb{E}[(Y-\mathbb{E}[Y|x]+\mathbb{E}[Y|x]-h(x))^2|x]$ (the equality is trivial), and expand the RHS: $$ \begin{align*} RHS &= \mathbb{E}\!\left[(Y-\mathbb{E}[Y|x]+\mathbb{E}[Y|x]-h(x))^2 \ \middle|\ x\right] \\ &= \mathbb{E}\!\left[(Y-\mathbb{E}[Y|x])^2+(\mathbb{E}[Y|x]-h(x))^2 + 2(Y-\mathbb{E}[Y|x])(\mathbb{E}[Y|x]-h(x)) \ \middle|\ x\right] \\ &= \mathbb{E}\!\left[(Y-\mathbb{E}[Y|x])^2 \ \middle|\ x\right]+\mathbb{E}\!\left[(\mathbb{E}[Y|x]-h(x))^2 \ \middle|\ x\right] + 2\mathbb{E}\!\left[(Y-\mathbb{E}[Y|x])(\mathbb{E}[Y|x]-h(x)) \ \middle|\ x\right] \\ &= \mathbb{E}\!\left[(Y-\mathbb{E}[Y|x])^2 \ \middle|\ x\right]+(\mathbb{E}[Y|x]-h(x))^2 + 2\mathbb{E}\!\left[(Y-\mathbb{E}[Y|x])(\mathbb{E}[Y|x]-h(x)) \ \middle|\ x\right] \end{align*} $$ using the linearity of conditional expectations, and for the last equality the fact that $(\mathbb{E}[Y|x]-h(x))^2$ is $x$-measurable. From there, to get your result, the first two terms being what you want, it only remains to show the third one is $0$. This is the case, since $$ \begin{align*} \mathbb{E}\!\left[(Y-\mathbb{E}[Y|x])(\mathbb{E}[Y|x]-h(x)) \ \middle|\ x\right] &= \mathbb{E}\!\left[Y\mathbb{E}[Y|x] - \mathbb{E}[Y|x]^2-Yh(x) + \mathbb{E}[Y|x]h(x) \ \middle|\ x\right] \\ &= \mathbb{E}\!\left[Y\mathbb{E}[Y|x] \ \middle|\ x\right] - \mathbb{E}[Y|x]^2-\mathbb{E}\!\left[Yh(x) \ \middle|\ x\right] + \mathbb{E}\!\left[\mathbb{E}[Y|x]h(x) \ \middle|\ x\right] \\ &= \mathbb{E}[Y|x]\cdot\mathbb{E}\!\left[Y \middle| x\right] - \mathbb{E}[Y|x]^2-h(x) \cdot\mathbb{E}\!\left[Y \middle| x\right] + \mathbb{E}[Y|x]\cdot h(x) \\ &= 0 \end{align*} $$ using the fact that $h(x)$ and $\mathbb{E}[Y|x]$ are $x$-measurable.
H: Dimension of $\mathbb{F}^n$ Let $\mathbb{F}$ be a field. Consider the vector space $\mathbb{F}^n$ over $\mathbb{F}$ for some positive integer $n$. Is the dimension of $\mathbb{F}^n$ necessarily $n$? AI: Yes, it is true. The following vectors form a basis: $e_{1} = (1,0,\ldots,0),\ \ldots,\ e_{k} = (0,\ldots,0,1,0,\ldots,0)$ ($1$ in the $k$th position), $\ldots,\ e_{n} = (0,\ldots,0,1)$. Each of these vectors is an $n$-tuple.
H: $\dim (V_1+V_2) \geq \dim(V_1 \cap V_2) +2$ I am trying to solve this problems from an old qual exam. I know a way to prove this but I feel like its too long for something that looks pretty simple. Can anybody suggest a cleaner way? for example that uses the dimension formula for subspaces $ \dim (V_1) + \dim(V_2)=\dim(V_1+V_2)+ \dim(V_1\cap V_2)$ Here is the statement. Let $V$ be a finite dimensional vector space over a field $F$. Let $V_1$ and $V_2$ be subspaces of $V$ such that $V_1 \nsubseteq V_2$ and $V_2 \nsubseteq V_1$. Prove that $\dim (V_1+V_2) \geq \dim(V_1 \cap V_2) +2$ My solution involves the following facts: $V_1 \cap V_2 \subseteq V_i \subseteq V_1+V_2$ and then by way of contradiction assume the statement is false and proceeding. But this took me about a page to arrive at the desired contradictions using various cases etc. In particular I did not explicitly use the dimension formula. I feel this is way too much for a problem that looks like a simple application of the dimension formula (?). Can anybody suggest a quicker way? Thanks for all your help. AI: Since $\dim(V_{1}),\dim(V_{2}) > \dim(V_{1}\cap V_{2})$, the dimension formula gives you the desired result. To see the first claim I make, observe that since none of the subspaces contain the other, $V_{1}$ is strictly larger than the intersection and so is $V_{2}$ In gory details, $$\dim(V_{1}) \geq \dim(V_{1}\cap V_{2}) + 1$$ $$\dim(V_{2}) \geq \dim(V_{1}\cap V_{2}) + 1$$ Thus, $\dim(V_{1}) + \dim(V_{2}) \geq 2\dim(V_{1}\cap V_{2}) + 2$. Now use the dimension formula.
H: if $f_n\to f$ uniformly in $[a,b]$ then $f\in BV$ While preparing for a test in calculus I found the question above: Let $f_n(x)\to f$ uniformly on $[a,b]$. Prove or give a counterexample: if $\forall n\in\mathbb N,\ f_n(x)\in BV$ and $\exists M>0$ s.t $\displaystyle V_a^b f_n\le M$, then $f\in BV$. I am really ashamed to say I don't have any ideas how to solve the question, even though I think it's correct. First I think we need to take a partition of $[a,b]$, say $x_0<x_1<...<x_n$. Since $\forall n,f_n\in BV$, $|f_n(x_i)-f_n(x_{i-1})|\le V_a^bf_n\le M$. Then we need to prove $V_{a}^bf=q<\infty$, which means taking the sup over the expressions (created by partitions) of the form $\displaystyle\sum_{i=1}^{n}|f(x_i)-f(x_{i-1})|$. I thought of changing the expression to $$\displaystyle\sum_{i=1}^{n}|f(x_i)-f_n(x_i)+f_n(x_i)-f(x_{i-1})|\le \sum_{i=1}^{n}|f_n(x_i)-f(x_i)|+|f_n(x_i)-f(x_{i-1})|.$$ The first part is finite and under control, but I don't know how to manage the second part ($|f_n(x_i)-f(x_{i-1})|$). How can I solve the question? AI: Take limits instead. Let $x_0 < \cdots < x_k$ be a partition of $[a,b]$. Then $$ \sum_{i=1}^k |f_n(x_{i}) - f_n(x_{i-1})| \le V_a^b f_n \le M $$ for every index $n$, which implies that $$ \sum_{i=1}^k |f(x_{i}) - f(x_{i-1})| \le M $$ by taking the limit as $n \to \infty$ of the finite sum on the left. Note that $f_n \to f$ pointwise is all that is required. Now take the supremum over all partitions to get $V_a^b f \le M$.
H: Complex integral - winding number I want to find $$\frac{1}{2\pi i}\int _\gamma \frac{1}{z}dz$$ well $0$'s winding number is $2$, so $\frac{1}{2\pi i}\int _\gamma \frac{1}{z}dz=2$ but when I explicitly calculate the integral I get $$\frac{1}{2\pi i}\int _\gamma \frac{1}{z}dz=\frac{1}{2\pi i}(\log(z)\lvert ^{z=-1}_{z=3}+\log(z)\lvert ^{z=\sqrt 2}_{z=-1}+\log(z)\lvert ^{z=-\sqrt 5}_{z=\sqrt 2}+\log(z)\lvert ^{z=3}_{z=-\sqrt 5})=\\ \frac{1}{2\pi i}(\log |-1|+i\pi-\log 3+\log \sqrt 2 -\log |-1|-i\pi+\log |-\sqrt 5|+i\pi-\log \sqrt 2 +\\+\log 3-\log |-\sqrt 5|-i\pi)=0$$ What's going on here? AI: The imaginary part of the logarithm increases as you go around the origin. It is the angle between $z$ and the positive real axis. As you go around from $3$ to $-1$ to $\sqrt{2}$, that angle increases from $0$ to $\pi$ to $2\pi$. So the imaginary part of $\log(\sqrt{2})$ is $2\pi$. $\log x$ has lots of complex values, in the same way that $\sqrt{x}$ has two values. By following the path $\gamma$, you can follow which value is relevant to your problem. Think of it like a multistorey car park: going around the origin puts you on a different level.
H: Number of Regions in the Plane defined by $n$ Zig-Zag Lines Fellows of Math.SE, I have been scratching my head at a solution to an exercise in Donald Knuth's Concrete Math. Here is the problem (posted as an image in the original; it asks for $ZZ_n$, the number of regions defined by $n$ zig-zag lines). Here is the solution (I hid it in case someone wants to solve this on their own): Given $n$ straight lines that define $L_n$ regions, we can replace them by extremely narrow zig-zags with segments sufficiently long that there are nine intersections between each pair of zig-zags. This shows that $ZZ_n = ZZ_{n-1} + 9n - 8$, for all $n > 0$; consequently $ZZ_n = 9S_n - 8n + 1 = \frac{9n^2 - 7n}{2} + 1$. $L_n$ is the number of regions definable by $n$ lines, which is solved as an example earlier in the text. It equals $S_n + 1$, where $S_n = \sum_{k=1}^n k$. I am having difficulty understanding the recurrence solution. I'll hide my question, just to be extra careful I don't spoil this wonderful problem for anyone. Where does the "$-8$" come from? Is the recurrence better understood as $ZZ_n = ZZ_{n-1} + 9(n-1) + 1$? Or does that further convolute the meaning of the recurrence? I figure the "$-8$" must have to do with "lost regions" due to the half-lines, but I am having trouble putting my finger on it. I really love this problem and would love to understand the solution in its entirety! Thank you! AI: Your observation is correct. If you believe the first step, then there are $9(n-1)$ intersections of the new zig-zag with the existing constellation. Now the segment between any two consecutive intersection points cuts an existing bounded region into two, giving $9(n-1)-1$ new regions. Similarly the line segment between the first intersection and infinity and the line segment between the last intersection and infinity each cut an existing unbounded region into two. So there are $9(n-1)-1+2 = 9n-8$ new regions.
H: How many times can a $4^{th}$ degree polynomial be equal to a prime number? If $f(x)$ is a $4^{th}$ degree polynomial with integer coefficients, what is the largest set $\{x_1, x_2, x_3, \ldots, x_n\}$ (where the $x_i$ are integers) for which $|f(x_i)|$ is a prime number? Things I have tried: I tried to see how I can restrict the coefficients of $ax^4+bx^3+cx^2+dx+e$ by dividing by $fx+g$. If this worked I could've restricted the coefficients so that hopefully $f(x)$ isn't divisible by such polynomials, but it got really messy so I don't think that's the way to go. Another thing I came up with is that we would probably want this polynomial as $ax^4-bx^3+cx^2-dx+e$, so that when we have $ax^4-bx^3+cx^2-dx+e=-p_i$, then by Descartes's rule of signs $ax^4-bx^3+cx^2-dx+(e+p_i)=0$ could potentially have up to $4$ solutions. I'm not sure if the problem said the $x_i$ are integers, but I believe they have to be, because otherwise we could guarantee a solution for every single prime number (and therefore the set of $x_i$ would be infinitely large) if we just had one sign change, namely $-ax^4+bx^3+cx^2+dx+(e+p_i)=0$ AI: It is an open problem. In fact, by Dirichlet's theorem, the linear polynomial $f(x)=ax+b$ takes infinitely many prime values, provided $(a,b)=1.$ For polynomials of degree $> 1$ it is not known whether there exists even one which takes infinitely many prime values. The conjecture is that even $f(x)=x^2+1$ takes infinitely many prime values. If you allow yourself to go to higher dimensions, then $f(x,y)=x^2+y^2$ takes every prime value of the form $4k+1$, by the theorem of Fermat.
H: What is the probability of getting the sum of 5 or at least one 4 when you roll two dice? I just want to know if my method is right: P(Sum of 5 or At least one 4) = 2+3, 3+2, 4+1, 1+4 [+] (4+1,4+2,4+3,4+4,4+5,4+6)*2 So that will be $(4+12)/36$. Ans: $16/36$. Am I right here? AI: Not quite; some outcomes are double counted. Each of $(4, 1)$ and $(1,4)$ appears twice in your total: once among the combinations summing to $5$, and once among the combinations containing at least one $4$. You are also counting $(4, 4)$ twice when you double the six pairs $(4,k)$ to account for order, since $(4,4)$ reads the same both ways. So you have to subtract a total of $3$ from your numerator: $$\frac{4 + 12 - 3}{36} = \frac{13}{36}$$
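Brute force over all $36$ outcomes confirms the count (a small Python sketch):

    from itertools import product

    hits = [(a, b) for a, b in product(range(1, 7), repeat=2)
            if a + b == 5 or a == 4 or b == 4]
    print(len(hits))  # 13, so the probability is 13/36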
H: Determining the effective tax rate in a tax on tax situation There are taxation situations where the taxable amount includes the tax calculated on the taxable amount (i.e. this is a recursive calculation), as follows:

Iteration | Taxable Amount | Tax per iteration
0 | $100,000,000.00 | $5,000,000.00
1 | $105,000,000.00 | $250,000.00
2 | $105,250,000.00 | $12,500.00
3 | $105,262,500.00 | $625.00
4 | $105,263,125.00 | $31.25
5 | $105,263,156.25 | $1.56
6 | $105,263,157.81 | $0.08
7 | $105,263,157.89 | $0.00
8 | $105,263,157.89 | $0.00
9 | $105,263,157.89 | $0.00
10 | $105,263,157.89 | $0.00

Tax Rate: 5.00%. Effective Tax Rate: 5.26%. I would like to determine the Effective Tax Rate without the need to apply the calculation recursively, because no matter what the starting taxable amount is, the Effective Tax Rate is always the same. https://www.facebook.com/download/353721708063956/TaxOnTax.xlsx AI: What you are looking at is a geometric series. Let's say the tax rate is $x$ and the principal $p$. Then what you are doing is computing $$p+ p x + p x^2 + \cdots = p \sum_{k=0}^{\infty} x^k$$ You may recognize the sum of the geometric series, which may be written simply as $$p' = \text{net taxable amount} = p \sum_{k=0}^{\infty} x^k = \frac{p}{1-x}$$ The effective tax rate is then $$\frac{p'-p}{p} = \frac{1}{1-x} - 1 = \frac{x}{1-x} $$ With $x=0.05$, the effective tax rate is about $0.0526$, which agrees with your spreadsheet.
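In code, the closed form replaces the whole iteration; a minimal Python sketch comparing the two (function names are my own):

    def effective_rate_iterative(rate, iterations=50):
        """Accumulate tax-on-tax the way the spreadsheet does."""
        total_tax_factor, last = 0.0, 1.0
        for _ in range(iterations):
            last *= rate            # tax charged on the previous tax
            total_tax_factor += last
        return total_tax_factor

    def effective_rate_closed_form(rate):
        return rate / (1 - rate)    # geometric series x + x^2 + ...

    print(effective_rate_iterative(0.05))    # ~0.0526315...
    print(effective_rate_closed_form(0.05))  # 0.0526315789...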
H: Using $\Gamma(z) = \lim_{n \to \infty} \left[ \frac{n! n^z}{z(z+1)(z+2)\ldots(z+n)} \right]$ to prove the Weierstrass product. I was searching the web quite thoroughly in the last two days, specifically looking for a rigorous proof using $$ \Gamma(z) = \lim_{n \to \infty} \left[ \frac{n! n^z}{z(z+1)(z+2)\ldots(z+n)} \right] $$ to prove the canonical Weierstrass representation of the gamma function $$ \frac{1}{\Gamma(z)} = z e^{\gamma z} \prod_{n=1}^\infty \left( 1 + \frac{z}{n}\right) e^{-z/n} $$ I have seen a proof of the reverse direction, for example here http://www.proofwiki.org/wiki/Equivalence_of_Gamma_Function_Definitions but how does one go about proving it the other way? AI: According to the limit, $$ \begin{align} \frac1{\Gamma(z)} &=z\lim_{n\to\infty}n^{-z}\prod_{k=1}^n\left(1+\frac zk\right)\\ &=z\lim_{n\to\infty}n^{-z}\prod_{k=1}^ne^{z/k}\prod_{k=1}^n\left(1+\frac zk\right)e^{-z/k}\\ &=z\lim_{n\to\infty}e^{z(H_n-\log(n))}\prod_{k=1}^n\left(1+\frac zk\right)e^{-z/k}\\ &=ze^{\gamma z}\prod_{k=1}^\infty\left(1+\frac zk\right)e^{-z/k}\\ \end{align} $$
H: How can I calculate this determinant? Please can you give me some hints to deal with this: Let $a_1, a_2, \ldots, a_n \in \mathbb{R}$. Calculate $\det A$, where $A=(a_{ij})_{1\leqslant i,j\leqslant n}$ and $$a_{ij}=\begin{cases}a_i,&\text{for } i+j=n+1,\\0,&\text{otherwise.}\end{cases}$$ AI: Hint: The matrix looks like the following (for $n=4$; it gives the idea though): $$ \begin{bmatrix} 0 & 0 & 0 & a_1\\ 0 & 0 & a_2 & 0\\ 0 & a_3 & 0 & 0\\ a_4 & 0 & 0 & 0 \end{bmatrix} $$ What happens if you do a cofactor expansion in the first column? Try using induction.
H: a question on the minimal prime divisors of an ideal This question is motivated by the second part of Step 1 in the proof of Theorem 14.14 in Matsumura's Commutative Ring Theory, p. 112. Let $k$ be an infinite field and $Q$ a homogeneous ideal of $k[x]=k[x_1,\cdots,x_s]$. Suppose that $\operatorname{dim} k[x] / Q =d>0$. Let $V$ be the $k$-vector space generated by the elements $x_1,\cdots,x_s$ (i.e. the vector space of linear forms over $k$) and let $P_1,\cdots,P_t$ be the minimal prime divisors of $Q$. Matsumura says: "By the assumption that $d>0$ we have that $P_i \not\supset V$...". Question: why is that true? If e.g. $P_1 \supset V$, then $P_1$ is equal to the maximal ideal generated by $x_1,\cdots,x_s$. Since $P_1$ is a minimal prime divisor of $Q$, this means that the coheight of $P_1$ in $k[x]/Q$ is zero. This is fine, as long as there exists some other $P_i$ with coheight $d$. Any comments? Remark: if Matsumura's argument does not hold in the general setting that i present here, then i suppose the nature of $Q$ comes into play. Any corroborations of that will be appreciated. AI: If a proper ideal contains $V$ this must be $M=(X_1,\dots,X_n)$. Moreover, you know that $M$ is minimal over $Q$ and $\dim R/Q=d>0$, where $R=K[X_1,\dots,X_n]$. Since $Q$ is homogeneous $R/Q$ is a finitely generated graded $K$-algebra and its Krull dimension equals the height of the irrelevant maximal ideal (which is $M/Q$), a contradiction.
H: Integrating $\int^{e^3-1}_{0}\frac{dt}{1+t}.$ How can I integrate $$\int^{e^3-1}_{0}\frac{dt}{1+t}.$$ I tried to make $u=1+t$, which means that $du=dt$, but it didn't give me anything useful and instead made things more complicated. Maybe I did something wrong, but can someone tell me the correct way of solving this, or the correct $u$-substitution? Thanks! AI: Hint: Use the fact that $\dfrac d{dx}\left(\ln u\right) = \dfrac {u'}u$, and so we have $$\int \frac{du}{u} = \ln|u| + c$$ $$\int\frac{dt}{1+t}$$ Correctly, you let $u = 1 + t,\quad \,du = dt$. This gives us $$\int \frac{du}{u}$$ I trust you can take it from here?! Note: You can either change the bounds of integration by replacing the lower bound with $u$ evaluated at $t = 0$ and replacing the upper bound with $u$ evaluated at $t = e^3 - 1$, thereby keeping all subsequent work in terms of $u$, or you can integrate (as you would an indefinite integral) with respect to $u$, back-substitute by replacing $u$ in the result with $1 + t$, and then evaluate that at the original bounds.
H: Two cards are drawn without replacement. Find the probability the second card is a jack given the first is not a jack. My calculations: I got $\frac{4}{51}$ because there are $4$ jacks in a deck and if we didn't have a jack then there are still $4$ left out of $51$ because we already chose one card. Is this right? AI: Ok, so after the first card is taken out there are $51$ cards, and of these $51$ cards there are $4$ jacks. Therefore the probability you get a jack is $4/51$, as you said. This comes from the axioms of a probability space; the relevant one is countable additivity. Let $J_1,J_2,J_3,J_4$ be the events that the second card is each of the four jacks. Since each remaining card has probability $1/51$, we have $P(J_1)=P(J_2)=P(J_3)=P(J_4)=\frac{1}{51}$, and since these events are disjoint, $$P\left(\bigcup_{i=1}^4 J_i\right)=4\cdot\frac{1}{51}=\frac{4}{51}.$$
H: Simplifying Probability equation I was solving a homework problem, and I had obtained a formula for the required probability in the question. What I wanted to ask is: can it be simplified further? $$P = \sum_{i=0}^{a} \frac{a!}{(a-i)!} \cdot \frac{(s-i-1)!}{s!}$$ AI: The answer is in fact $P=1/(s-a)$. Here's why. Rewrite the sum as $$\begin{align} P &= \frac{a!}{s!} \sum_{i=0}^a \frac{(s-i-1)!}{(a-i)!} \\ &= \frac{a!}{s!} \sum_{i=0}^a \frac{1}{s-i} \frac{(s-i)!}{(a-i)!} \\ &= \frac{a!}{s!} \sum_{i=0}^a \frac{1}{s-a+i} \frac{(s-a+i)!}{i!} \\ &= \frac{1}{\binom{s}{a}} \sum_{i=0}^a \frac{1}{s-a+i} \binom{s-a+i}{i} \\ &= \frac{1}{\binom{s}{a}} \frac{1}{s-a} \underbrace{\sum_{i=0}^a \binom{s-a-1+i}{i}}_{\text{This is a well-known sum}}\\ &= \frac{1}{\binom{s}{a}} \frac{1}{s-a} \binom{s}{a} \\ &=\frac{1}{s-a} \end{align}$$ (in the third line we reversed the order of summation, $i \mapsto a-i$). The well-known sum may be found in, for example, Concrete Mathematics by Graham, Knuth, and Patashnik, Second Ed., p. 161.
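A quick exact check of the identity with Python's fractions module (pick any integers $0\le a<s$):

    from fractions import Fraction
    from math import factorial

    def P(a, s):
        return sum(Fraction(factorial(a), factorial(a - i)) *
                   Fraction(factorial(s - i - 1), factorial(s))
                   for i in range(a + 1))

    for a, s in [(2, 7), (4, 10), (5, 6)]:
        assert P(a, s) == Fraction(1, s - a)
    print("identity verified")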
H: $u$-substitution for integrating $\int\frac{\log|x|}{x\sqrt{1+\log|x|}}\,dx\;\;?$ How can I integrate $$\int\frac{\log|x|}{x\sqrt{1+\log|x|}}\,dx\;\;?$$ I'm not sure what I should put equal to $\,u.$ Can someone give me a hint on how to solve this question? I don't need a full solution, I want to try it on my own. Thanks! AI: Hint: Let $u = \log|x|$. Then $\;du = \dfrac {1}{x}\,dx$.
H: Why does Wolfram Alpha say that $\sqrt{1}=-1$? Why does Wolfram Alpha say that $\sqrt{1}=-1$? Is this a mistake or what? Can anyone help? Thanks in advance. AI: Did you read the results carefully? It does give $1$ as the answer. It then helpfully mentions that $-1$ is also a 2nd root of $1$, which is perfectly correct, because $(-1)^2=1$.
H: Given $B \in M_{n\times n}(\mathbb R)$ is invertible and $B^2+B^4+B^7 = I$, find an expression for $B^{-1}$ in terms of only $B$. Given $B \in M_{n\times n}(\mathbb R)$ is invertible and $B^2+B^4+B^7=I$, find an expression for $B^{-1}$ in terms of only $B$. I don't know where to start. Thanks in advance. AI: You may begin with $B^{-1}=B^{-1}I$.
H: Need help with geometry of surfaces I am a third year maths student. I am self-studying a course on surfaces. I have some questions and would really appreciate it if people can help me. What exactly is a connected sum? According to my lecture notes, for two closed (compact) surfaces, if we remove a closed disc from each then take a homeomorphism from the boundary of one disc to the other, we get another surface. My question is, what do we mean by 'closed disc' on a surface? Do we mean some kind of ball? But what metric are we using then? When forming a connected sum, does it matter if we remove not a disc but something homeomorphic to a disc, like a square or triangle for example? I don't think it matters. It is stated that removing a closed disc from the projective space gives the Möbius band. I don't see why this is the case. Can we do it by the edge identification diagrams? It is stated that the connected sum of two projective spaces is 'obviously' the Klein bottle. I think this is very non-obvious. How can I see this with edge identification diagrams? Any help would be appreciated, I am learning from these notes if anyone is interested: http://www.maths.ox.ac.uk/system/files/coursematerial/2012/2377/29/hitchin1.pdf Lastly, I am likely to have many more questions about the geometry of surfaces. If anyone is interested in tutoring/helping me, please say so, I am looking for a teacher. I will pay you. AI: Surface already means that for every point you have charts (homeomorphisms, or maybe diffeomorphisms, to some $\mathbb{R}^n$). This already tells you how to draw balls, because you can draw them on the $\mathbb{R}^n$ first and then map them to your surface. See 1. and 4. Yes, you can see it from the edge identification diagrams. Remember that you don't need to get exactly the same diagrams, but that you can get from one to the other by cutting new edges and gluing sides that can be identified. I think it is a nice, not-so-hard puzzle that you should try doing. It is like playing with a Tangram. You can find the pictures here, but do try it before looking it up.
H: Raising a rational integer to a $p$-adic power Let $p$ be a prime number and $d$ a positive integer. If $n\in \mathbb{Z}_p$ (the ring of $p$-adic integers) how would one define $d^n$? Under what conditions would this be an element of $\mathbb{Z}_p$? Any help would be very much appreciated. AI: Assume that $d\equiv1\pmod p$ as others have pointed out. Let $$ n=\sum_{i=0}^\infty a_ip^i, $$ with $a_i\in\{0,1,\ldots,p-1\}$. The upshot is that (binomial theorem!) $$ d^{p^i}\equiv 1\pmod{p^{i+1}} $$ for all $i$. Therefore also $x_i:=d^{a_ip^i}\equiv1\pmod{p^{i+1}}$. This implies in turn that the infinite product $$ \prod_{i=0}^\infty x_i $$ converges (it is clear that the sequence of partial products stabilizes modulo $p^{i+1}$ after $i$ factors). There's your definition: $$ d^n=\lim_{k\to\infty}\prod_{i=0}^kx_i. $$ With a little extra work we see that this turns $1+p\mathbb{Z}_p$ into a $\mathbb{Z}_p$-module, which comes in handy sometimes.
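The congruence $d^{p^i}\equiv 1 \pmod{p^{i+1}}$ that drives the convergence is easy to watch numerically; a small Python sketch (with the arbitrary choices $p=5$, $d=6$):

    p, d = 5, 6          # d = 1 (mod p), as required

    # d^(p^i) = 1 (mod p^(i+1)), so the partial products stabilize p-adically.
    for i in range(6):
        print(i, pow(d, p ** i, p ** (i + 1)))  # always prints 1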
H: Is this theorem in Rudin's Real and Complex Analysis wrong as stated? I have forgotten all of my measure theory and will now ask a very dumb question. Consider the following theorem, which I have produced word for word from Rudin's Real and Complex Analysis, third edition. It is theorem $7.13$ in chapter $7$ and can be found on page $142$. Associate to each $x\in \mathbb R^k$ a sequence $\{E_i(x))\}$ that shrinks to x nicely. If $\mu$ is a complex Borel measure and $\mu \perp m$, then $$\lim _{i\rightarrow \infty} \frac{\mu(E_i(x))}{m(E_i(x))} =0\ \text{a.e.} \ [m].$$ Consider the measure $\mu$ defined by giving a point mass to each point in $\mathbb Q$. This is a singular measure with respect to the Lebesgue measure. In other words, the measure of a set is the number of rational points it contains. Let $x=0$ in $\mathbb R$, and consider the intervals $(-1/n, 1/n)$ that shrink nicely to $0$. The $\mu$ measure of these is infinite. So each term in the limit is infinite, and the limit itself must be infinite. This seems to contradict the theorem. Where have I gone wrong in this reasoning? (This question is closely related.) AI: Your measure $\mu$ is not complex-valued, as it admits the value $\infty$, so your construction does not contradict the theorem.
H: Sequence of natural numbers Numbers $1,2,...,n$ are written in sequence. It's allowed to exchange any two elements. Is it possible to return to the starting position after an odd number of movements? I know that it necessarily takes an even number of movements, but I can't explain why! AI: Hint: Define an inversion of a sequence $(a_1,\dots,a_n)$ as a pair $(i,j)$ such that $i<j$ and $a_i>a_j$. Check that each movement changes the parity of the number of inversions.
H: rotate graph of function by 180 Suppose that we have the graph of the function $$f(x)=1+x\cos(x)$$ and we rotate it by $180$ degrees. The question is: what function describes the new graph? The answer is $$f(x)=x\cos(x)-1,$$ but I can't understand why. As far as I know, rotation by $180$ degrees amounts to putting $-x$ in place of $x$, doesn't it? In that case we would get $f(x)=1-x\cos(x)$. Or should we also change the $y$-coordinate? I mean, when $x=0$ we have $y=1$, so should we change the sign of the $y$-intercept? Please help me AI: To rotate $f(x)$ by 180 degrees about the origin, you need to mirror it horizontally ($f(-x)$) and also vertically ($-f(x)$). In your case, $$\begin{eqnarray*} -f(-x) &=& -(1+(-x)\cos(-x)) \\ &=& -(1 - x\cos(-x)) \\ &=& -1 + x\cos(-x) \\ &=& -1 + x\cos(x). \end{eqnarray*} $$
H: Matrix representation I wonder if the following term $$ \ln\det\left(I+\text{diag}\left(d_1,\ldots,d_n\right)A\right), $$ where $I$ is an identity matrix, $d_1,\ldots,d_n$ is a sequence of binary numbers (taking the values $0$ and $1$), and $A$ is some symmetric matrix, can be rewritten in linear form as a function of $d_1,\ldots,d_n$. That is to say, I want to rewrite the above term in the form $$ \sum_id_if_i(A) $$ where the $f_i$ are some functions. For example, in the simple scalar case (which actually motivated this question) we have that $$ \ln (1+d\cdot a) = d\cdot\ln(1+a). $$ (EDIT) where $d$ is binary. Any suggestions? Thank you! AI: Suppose $d_1 = 1$ and the rest of the $d_i$'s are zero. We have $\sum_i d_i f_i(A) = f_1(A)$. Directly evaluating the determinant, we obtain $f_1(A) = \log(1+a_{11})$. Similarly, by setting $d_i = 1$ and $d_j = 0,\,j\neq i$, we obtain $f_i(A) = \log(1+a_{ii})$. Hence, the $\log\det$ would have to equal $\sum_id_i\log(1+a_{ii})$. A contradiction for a general choice of the $d_i$'s now easily follows (as this expression depends only on the diagonal elements of $A$), i.e. you cannot find such a decomposition in general.
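A two-by-two numeric counterexample (numpy sketch; the particular $A$ is an arbitrary choice) shows directly that no such linear form can exist once $A$ has off-diagonal entries:

    import numpy as np

    A = np.array([[1.0, 0.5], [0.5, 1.0]])   # symmetric, nonzero off-diagonal
    d = np.array([1.0, 1.0])

    lhs = np.linalg.slogdet(np.eye(2) + np.diag(d) @ A)[1]
    rhs = np.sum(d * np.log(1 + np.diag(A)))  # what the linear form would force

    print(lhs, rhs)  # ~1.3217 vs ~1.3863 -- not equal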
H: find minimum of given function Today my relative asked me a problem which had a strange solution, and I am curious how this solution is obtained. Say the function has the form $f(x)=a\sin(x)+b\cos(x)$; we should find its minimum, and we have no constraints or anything like that. As I know, to find a minimum we should find the point where it is attained and then put this point into the first equation, so in our case we have $f'(x)=a\cos(x)-b\sin(x)$, and when we set this to zero and convert to tangent form we get $\tan(x)=a/b$, i.e. $x=\tan^{-1}(a/b)$. Now if we put this into the first equation it would be difficult to calculate the extreme value without a calculator, say for $a=3$ and $b=2$; but my relative told me there exists a formula saying the extreme value is directly $\sqrt{a^2+b^2}$, in our case $\sqrt{13}$. Is it right? (First of all, I think that we can at least get the value $3$, when the cosine term vanishes.) Please help me AI: It appears that the main confusion here is how to find $$\sin(\tan^{-1}(a/b))\quad \text{or}\quad \cos(\tan^{-1}(a/b))$$ Well, we have $\tan(\theta) = \frac{a}{b}$, for some $\theta$. So, let's create a right triangle like that, where the opposite side has length $a$, and the adjacent side has length $b$. We know that $\sin(\theta) = \frac{\text{opp}}{\text{hyp}}$, so then: $$\sin(\theta) = \frac{a}{\sqrt{a^2 + b^2}}$$ We also know that $\tan^{-1}(a/b)=\theta$, so then: $$\sin(\tan^{-1}(a/b)) = \frac{a}{\sqrt{a^2+b^2}}$$ The procedure for cosine is similar and gives $\cos(\tan^{-1}(a/b)) = \frac{b}{\sqrt{a^2+b^2}}$. Plugging both back into $f$, $$f = a\cdot\frac{a}{\sqrt{a^2+b^2}} + b\cdot\frac{b}{\sqrt{a^2+b^2}} = \sqrt{a^2+b^2},$$ which is the maximum value; the minimum is $-\sqrt{a^2+b^2}$, attained half a period away. So your relative's formula is right, up to sign.
H: Derivative of norm-infinity of vector So I know that $\frac{dX}{dX} = \mathbb{I}$ where $X \in \mathbb{R}^n$ and $\mathbb{I} \in \mathbb{R}^{n \times n}$ is the identity matrix. Now, what is the following derivative? $\frac{d|X|_\infty}{dX}$ where $|X|_\infty$ is the norm-infinity, i.e. $|X|_\infty = max(X)$ is a scalar? AI: Let's study $n=2$. $$\frac{\partial}{\partial x}\max (|x|,|y|)=\begin{cases}\text{sgn } x,&|x|> |y|,\\0,&|x|<|y|.\end{cases}$$ $$\frac{\partial}{\partial y}\max (|x|,|y|)=\begin{cases}0,&|x|> |y|,\\\text{sgn }y,&|x|<|y|.\end{cases}$$ $$\nabla \max (|x|,|y|)=\begin{cases}\text{sgn } x\,\vec e_1,&|x|> |y|,\\\text{sgn } y\,\vec e_2,&|x|<|y|.\end{cases}$$ Can you generalize it for arbitrary $n$?
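In code, the gradient just picks out the coordinate achieving the max; a minimal numpy sketch (my own function name, valid where the maximizing coordinate is unique; at ties, as in the $|x|=|y|$ case above, the norm is not differentiable):

    import numpy as np

    def inf_norm_grad(x):
        """Gradient of ||x||_inf where the maximizing coordinate is unique."""
        g = np.zeros_like(x, dtype=float)
        i = np.argmax(np.abs(x))
        g[i] = np.sign(x[i])   # sgn(x_i) e_i, as in the n = 2 case above
        return g

    print(inf_norm_grad(np.array([1.0, -3.0, 2.0])))  # [ 0. -1.  0.]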
H: How to prove that $C(a)=C(a^3)$ when $|a|=5$ I am having trouble proving the following result: "Suppose $a$ belongs to a group and $|a|=5$. Prove that $C(a)=C(a^3)$." ($C(a)$ denotes the centralizer of $a$) The typical way to do this would be to show that $C(a) \subseteq C(a^3)$ and $C(a^3) \subseteq C(a)$. The first direction is easy: Let $g \in C(a)$ be arbitrary. Then $ga^3=aga^2=a^2ga=a^3g$, so $g \in C(a^3)$ and $C(a) \subseteq C(a^3)$. I'm having a hard time with the other direction, including seeing how $|a|=5$ works into it. The problem also asks to find an element $a$ from some group such that $|a|=6$ and $C(a) \ne C(a^3)$. This makes me curious: for an element $a$, is it true that $C(a)=C(a^i)$ when $\gcd(|a|,i)=1$, and thus $|a|=|a^i|$? I'd appreciate a HINT on how to go about proving the original problem, and the generalization if it is indeed true. Thanks. AI: Have you used that $$|a^3|=\frac{|a|}{\gcd(|a|,3)}= |a|\text{ ? }$$ Suppose now that $xa^3=a^3x$. Note that $(a^3)^2=a$. Then $$xa^3a^3=a^3xa^3$$ Can you finish? SPOILER $$xa=xa^6=xa^3a^3=a^3xa^3=a^3a^3x=a^6x=ax$$ As per your curiosity: one can prove that $$|a^k|=\frac{|a|}{(|a|,k)} $$ Note we have $|a|=|a^k| \iff \langle a\rangle=\langle a^k\rangle$, and from the above, $\iff (|a|,k)=1$, and as anon succinctly commented: "$x$ commutes with $a$ if and only if $x$ commutes with every power of $a$ if and only if $x$ centralizes $\langle a\rangle$. Hence $C(a)=C(b)$ whenever $a$ and $b$ generate the same cyclic subgroup."
H: Solution to $y'' - 2y = 2\tan^3x$ I'm struggling with this nonhomogeneous second order differential equation $$y'' - 2y = 2\tan^3x$$ I assumed that the form of the solution would be $A\tan^3x$ where A was some constant, but this results in a mess when solving. The back of the book reports that the solution is simply $y(x) = \tan x$. Can someone explain why they chose the form $A\tan x$ instead of $A\tan^3x$? Thanks in advance. AI: Hint: Assume the particular solution is of the form $a \tan x$ and solve for the constant. If we do that, we get: $$2 a \tan x \sec^2 x - 2a \tan x = 2 \tan^3 x$$ What do you get when you simplify that expression? $$2 a \tan x (\sec^2 x - 1) = 2 \tan^3 x$$ $$2 a \tan x(\tan^2 x) = 2 \tan^3 x$$ Thus, $a = 1$ Now write the solution as: $$y = y_h + y_p = c_1 e^{\sqrt{2} x}+c_2 e^{-\sqrt{2} x} + \tan x$$ I assumed you knew how to find $y_h$, but if not, give a yell. Note: there are other approaches we can use to not guess, but maybe you haven't learned those yet (Variation of parameters, Green's Functions, Series Solutions...). Update By the way, try this same problem if the result was $\tan x$ or $\tan^2 x$. In other words, the $\tan^3 x$ is a somewhat contrived example.
H: Existence of prequantization on a simply connected manifold Let $M$ be a simply connected manifold. Then when, $M$ has a unique pre-quantization and when there is no pre-quantization on $M$. AI: The following holds for prequantizations of arbitrary symplectic manifolds: Existence: A prequantization of $(M, \omega)$ exists if and only if $(2\pi)^{-1} \omega \in H^2(M; \Bbb R)$ is an integral class. Uniqueness: The different choices of prequantization of $(M, \omega)$ are parametrized by $H^1(M; \Bbb R)/H^1(M; \Bbb Z)$. In particular, if $M$ is simply connected then $H^1(M; \Bbb R) = 0$ and hence if a prequantization exists, it is unique. For a prequantization to exist, the symplectic form must satisfy the given integrality condition. See Proposition 8.3.1 of Woodhouse's Geometric Quantization for more details.
H: Show that the theory of densely linear orders and the theory of discrete linear orders are incompatible I'm trying to prove that the theory of dense linear orders and the theory of discrete linear orders are incompatible by showing that their union is inconsistent. Does anyone know how to do this? Thank you for your time AI: HINT: The axioms of a dense linear order say that between any two distinct elements there is a third; what do the axioms of a discrete linear order say?
H: Parseval's Theorem Proof Parseval's Theorem states that: If $$f(x)=\sum^\infty_{-\infty}c_ne^{inx}, g(x)=\sum^\infty_{-\infty}\alpha_ne^{inx}.$$ Then, $$\lim_{N\to\infty}\frac{1}{2\pi}\int^\pi_{-\pi}|f(x)-s_N(f;x)|^2dx=0,\frac{1}{2\pi}\int^\pi_{-\pi}f(x)g(x)dx=\sum^\infty_{-\infty}c_n\alpha_n.$$ where $f$ and $g$ are Riemann-integrable functions with a period of $2\pi$. I don't really understand when it would hold, can someone prove this theorem for me? Thanks! AI: Hints: In general, when we have an orthonormal basis $\,\{u_i\}\,$ in a real (since this is, apparently, what you have) inner product linear space $\,V\,$, then $$\forall\,v,w\in V\;,\;\;v=\sum_{n=1}^\infty a_nu_n\;,\;\;w=\sum_{n=1}^\infty b_nu_n\implies$$ $$\langle\,v\,,\,w\,\rangle=\left\langle\;\sum_{n=1}^\infty a_nu_n\,,\,\sum_{m=1}^\infty b_mu_m\;\right\rangle=\sum_{n=1}^\infty\sum_{m=1}^\infty a_nb_m\,\langle u_n\,,\,u_m\,\rangle =\sum_{n=1}^\infty\sum_{m=1}^\infty a_nb_m\delta_{nm}=\sum_{n=1}^\infty a_nb_n$$ and from here you get what you want. In case the space is a complex one you need conjugates on the $\,b_n$'s, and so on.
H: In how many ways I can put $2$ red balls and $3$ green balls in $5$ boxes? I have $5(N)$ boxes and some balls. Here's the description: Red Balls $= 2 (k1 = 2)$ Green Balls $= 3 (k2 = 3)$ -------------------------- | | | | | | -------------------------- How many ways do I have to put in these balls in the five boxes? (The boxes are labeled/ordered) p.s. One box can hold only one ball AI: For the solution, we assume that a box can contain any number of balls. You are probably familiar with the Stars and Bars way of counting the number of ways to put $2$ red balls in $5$ boxes. If so, you also can find the number of ways to put $3$ blue balls in $5$ boxes. Multiply. Remark: $1.$ If Stars and Bars is not familiar, one can make it familiar (the Wikipedia article cited is pretty clear). Or else one can count separately, by considering cases, the answers to the $2$ ball problem and the $3$ ball problem. $2.$ If we are to have $1$ ball per box (this was not specified), then the number of ways is the number of ways to choose where the red balls will go. This is $\binom{5}{2}$.
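For the concrete numbers here (boxes of unlimited capacity, as the solution assumes, and balls of one color identical), a quick Python check of the stars-and-bars count against brute-force enumeration:

    from itertools import product
    from math import comb

    n_boxes = 5
    red_ways = comb(2 + n_boxes - 1, n_boxes - 1)    # C(6,4) = 15
    green_ways = comb(3 + n_boxes - 1, n_boxes - 1)  # C(7,4) = 35
    print(red_ways * green_ways)                     # 525

    # Brute force: count box-count vectors (c1..c5) summing to 2, resp. 3.
    def brute(k):
        return sum(1 for c in product(range(k + 1), repeat=n_boxes)
                   if sum(c) == k)

    print(brute(2) * brute(3))                       # 525 again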
H: About writing a countable family of sets in terms of pairwise disjoint sets I was wondering if the following statement is true: Let $\mathfrak{a}$ be a countable collection of sets. Does this imply that there is a countable collection $\mathfrak{b}$ of pairwise disjoint sets such that every set in $\mathfrak{a}$ is a finite union of sets in $\mathfrak{b}$? Is there a book/reference where I can find the answer? Or is it easy to prove? AI: I do not know a reference, but believe that your claim is false: Consider $$\mathfrak{a}= \{A_k|k\in\mathbf{N}\},\mbox{ where } A_k = \mathbf{N} \setminus \{k\}.$$ Now assume that there is a countable family of sets, denoted $$\mathfrak{b}= \{B_k|k\in\mathbf{N}\},$$ such that all the required properties hold. It suffices to show that each $B_k$ must be a singleton (why?). To this end, suppose there are two distinct numbers $n$ and $m$ such that $n, m \in B_k$. Because $A_m$ is a finite union of the pairwise disjoint members of $\mathfrak{b}$, and $B_k$ meets that union at $n \in A_m$, it follows that $B_k \subset A_m$. However, this means $m \in\mathbf{N}\setminus \{m\}$, which is a contradiction.
H: Smallest $N$ for which we can guarantee the approximation error of an alternating series What is the smallest value $N$ for which we can guarantee that the approximation error of the alternating series $$S=\sum_{n=1}^\infty\frac{(-1)^n}{n^{7/2}}$$ by the partial sum, $$S_N=\sum_{n=1}^N\frac{(-1)^n}{n^{7/2}}$$ is at most $10^{-2}$? AI: This is an alternating series with terms that decrease in absolute value. This means that the sum always lies between any two consecutive partial sums, and the error is less than the magnitude of the first omitted term. So, you know that if $1/n^{7/2} < 10^{-2}$ then this $n$ will certainly be enough. This means $n^{7/2} > 100$, or $n > 100^{2/7} = 10^{4/7} = 3.7...$, so $n \ge 4$ will do. To find the true minimum $N$, compute the partial sums for small $N$, estimate the sum accurately by taking a much larger $N$, and see which partial sums differ from it by less than $.01$.
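Following that suggestion, a quick numeric check (Python sketch; the reference sum uses enough terms that its own truncation error is negligible):

    # Reference value: 200000 terms, so its own truncation error is ~3e-19.
    S_ref = sum((-1) ** n / n ** 3.5 for n in range(1, 200001))

    N = 1
    while abs(S_ref - sum((-1) ** n / n ** 3.5 for n in range(1, N + 1))) > 1e-2:
        N += 1
    print(N)  # prints 3: |S - S_3| is about 0.0054, while |S - S_2| is about 0.016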
H: Consider the series $\sum_{n=0}^\infty \frac{(x-3)^n}{3^n \sqrt{n+1}}$ $$\sum_{n=0}^\infty \frac{(x-3)^n}{3^n \sqrt{n+1}}$$ Its interval of convergence is of one of the forms $(a,b)$, $(a,b]$, $[a,b)$ or $[a,b]$. What is $a$? What is $b$? I know the interval of convergence is $|x-3|<3$, but I am not sure how to read off $a$ and $b$. AI: By the ratio test we find that the radius of convergence is $R=3$, and the interval is centred at $x=3$. For $x=0$ the series converges by the Leibniz (alternating series) test, and for $x=6$ it diverges (why?), so the interval of convergence is $[0,6)$, i.e. $a=0$ and $b=6$.
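Spelling out the endpoint checks: at $x=0$ the terms are $\frac{(-3)^n}{3^n\sqrt{n+1}}=\frac{(-1)^n}{\sqrt{n+1}}$, an alternating series whose terms decrease to $0$, hence convergent; at $x=6$ they are $\frac{3^n}{3^n\sqrt{n+1}}=\frac{1}{\sqrt{n+1}}$, a $p$-series with $p=\frac12\le 1$, hence divergent.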
H: Quadratic Operator Notation? I am dealing with functions that are linear combinations of: $[x_1^2, x_2^2... x_n^2, x_1x_2, x_1x_3... x_n-1x_n]$ spanned over a column. All these functions obey the law: $F(aX) = a^2F(X)$ for constant values a. Is there a notation for handling these? Similar to matrix notation for linear operators over $R^n$ (in fact matrices work just fine over $C^n$ as well) does there exist some type of concise "box of numbers and stuff" notation for handling quadratic operators? I tried designing my own notations which almost work fine except: If I allow both pure polynomials (ex: $x^3$ and $y^2$) and mixed polynomials (ex: $x^3y^2) handling composition using a box of numbers notation doesn't appear straightforward. If I allow only pure polynomials then I have a syntax for working with matrix-like objects that can handle composition just fine (it generalizes cleanly to higher powers) but I lose the ability to handle mixed expressions. The factoring of constant values and handling sum of expression are both fine in both notations. How do I create an all encompassing notation in case one does not exist? AI: (In characteristic not 2) every quadratic form $Q(x)$ has an associated symmetric bilinear form $B(x,y) = \frac{1}{2}(Q(x+y) - Q(x) - Q(y))$. We have $Q(x) = B(x,x)$. Since symmetric bilinear forms can be represented by symmetric matrices, this means there is a symmetric matrix $A$ such that $$ Q(\vec{x}) = \vec{x}^T A \vec{x} $$ Of course, this last fact could be seen directly, by observing that every $x_i x_j$ appears in $\vec{x}^T A \vec{x}$ with independent coefficients from $A$ (even in characteristic $2$, although we can't insist on symmetry in that case). I wanted to point out that quadratic forms and bilinear forms are closely connected. For higher degree homogeneous polynomials, you'd have to appeal to multi-linear algebra, which would involve higher rank tensors. In fact, the quadratic form really ought to be a rank (0,2) tensor (or maybe (2,0); I forget the convention) rather than a matrix, which is a rank (1,1) tensor
H: Prove that $B$ is a basis of $R_n$ iff $\mathbf A$ is invertible Let $A \in \mathbf M_{n\times n}(R)$ and let $\{v_1, \ldots, v_n\}$ be a basis of $R_n$. Prove that $B = \{\mathbf Av_1,...,\mathbf Av_n\}$ is a basis of $R_n$ if and only if $A$ is invertible. My idea is that $B$ is a basis, so $B$ is linearly independent and $\text{rank}(B) = n$, but that does not prove $A$ is invertible. Am I on the right track? Can someone give a hint on where I should start? Related: Example 3.5C, P178, Intro to Lin Alg, 4th Ed, G Strang AI: There is another approach. Let $$B := \begin{bmatrix} Av_1 & \dots & Av_n \end{bmatrix}, \quad V := \begin{bmatrix} v_1 & \dots & v_n \end{bmatrix}$$ be matrices with columns $Av_k$ and $v_k$, respectively. Then $$B = AV.$$ Since the columns of the square matrix $V$ form a basis, they are linearly independent, so $V$ is invertible, which means that $$A = BV^{-1}.$$ In other words, $A$ is invertible if and only if $B$ is such, which proves your claim.
H: A strengthening of Raabe's test: $\sum a_n$ diverges if $\frac{a_{n+1}}{a_n} \geq 1 - \frac{1}{n} - \frac{A}{n^2}$ for $A>0$ The usual form of Raabe's test says that if $a_n>0$ and if for large $n$, $\frac{a_{n+1}}{a_n}\leq 1-\frac{p}{n}$ for $p>1$, then $\sum a_n < \infty$. A proof I've seen of this relies on the ratio comparison test: $\frac{a_{n+1}}{a_n}\leq 1-\frac{p}{n}< \left(1-\frac{1}{n}\right)^p = \frac{(n-1)^p}{n^p}$, which is $\frac{b_n}{b_{n-1}}$ for $b_n = 1/n^p$, which converges for $p>1$. A partial converse says that if $\frac{a_{n+1}}{a_n}\geq 1 - \frac{p}{n}$ for $p\leq 1$, then $\sum a_n$ diverges. To proves this we may again use the ratio comparison test, taking $p=1$, then a fortiori it holds for $p<1$. I am trying to prove a strengthening of the converse, which states: if for large $n$, $\frac{a_{n+1}}{a_n}\geq 1 - \frac{1}{n}-\frac{A}{n^2}$ where $A>0$, then $\sum a_n$ diverges. It seems we'd like to use the ratio comparison test again, but I'm having trouble finding a series to compare it to. We might write $1 - \frac{1}{n} - \frac{A}{n^2} = \frac{n^2 - n - A}{n^2}= \frac{n-1}{n} - \frac{A}{n^2}$, but that doesn't seem to be helping. Any ideas? AI: We'll show that the series is divergent whatever the value of $A$ is Applying the $\log $ function gives: $$\log(a_{n+1})-\log a_n \geq\log\left( 1 - \frac{1}{n} - \frac{A}{n^2}\right)= - \frac{1}{n} - \frac{A}{n^2}+O( \frac{1}{n^2})=- \frac{1}{n} +O( \frac{1}{n^2})$$ so by telescoping we have $$\sum_{k=1}^{n-1} \log(a_{k+1})-\log a_k=\log(a_{n})-\log a_1\geq -\sum_{k=1}^{n-1} \left(\frac{1}{k} +O( \frac{1}{k^2})\right)=-\log n+ S_n$$ where $(S_n)$ is convergent sequence to a limit say $\ell$ and then $$a_n\geq a_1\frac{e^{S_n}}{n}\sim_\infty a_1\frac{e^{\ell}}{n}$$ and then we conclude.
H: Prove that the number of ways to put $n$ distinct balls into $n$ distinct boxes is $n^n$. Prove that the number of ways to put $n$ distinct balls into $n$ distinct boxes is $n^n$. The only approach I've come up with is to start with $1$ ball in each box, count the permutations, then take a ball out of one of the boxes, put it in another, and take the number of permutations times $\binom{n}{2}$; but then the process begins to branch and starts to feel intractable. I find it surprising that the resulting formula, $n^n$, is so simple, and that it's the same formula as for "order matters, repetition allowed"; is there a way to map this problem into that one? AI: We assume that the boxes are also distinguishable. Where shall the ball with label $1$ go? We can put it into any of the boxes, so we have $n$ choices. For each way of choosing where Ball $1$ goes, there are $n$ ways to decide where the ball with label $2$ goes. So there are $n^2$ ways to decide the fates of Ball $1$ and Ball $2$. For every way to decide where Ball $1$ and Ball $2$ go, there are $n$ ways to decide where Ball $3$ goes. And so on. Remark: More generally, suppose that we have $n$ balls and $k$ boxes. At each stage, we have $k$ possible decisions, for a total of $k^n$. It is easiest to answer your second question in this more general setting. Imagine we have an alphabet of $k$ letters, and we want to make an $n$-letter word, repetition allowed. There are $k^n$ ways to do this. The mapping is as follows. Suppose that we have a word of length $n$. Examine it, letter by letter. If the $i$-th letter of the word is the "letter" $j$, put the $i$-th ball into Box $j$. This establishes a bijection between words of length $n$ and ways to put balls into boxes.
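The bijection is easy to check by brute force for small $n$ and $k$; here is a short sketch (the particular word is an arbitrary example):

```python
from itertools import product

# An assignment of n balls to k boxes is exactly a length-n word over a
# k-letter alphabet: letter j in position i means "ball i goes in box j".
n, k = 3, 4
assignments = list(product(range(k), repeat=n))
print(len(assignments), k**n)        # both are 64

word = (2, 0, 2)                     # ball 0 -> box 2, ball 1 -> box 0, ball 2 -> box 2
boxes = [[i for i in range(n) if word[i] == j] for j in range(k)]
print(boxes)                         # [[1], [], [0, 2], []]
```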
H: Partial fractions to integrate $\int \frac{4x^2 -20}{(2x+5)^3}dx$. $$\int \frac{4x^2 -20}{(2x+5)^3}dx$$ I can't use the cover-up method that I learned, since making anything zero in this makes everything zero. I would probably just use random test points because I don't have any other tricks memorized. Is there some specific trick I need to use on this so it is possible to compute on a timed test? AI: $$\int \frac{4x^2 -20}{(2x+5)^3}dx = \int \dfrac A{(2x + 5)} + \dfrac B{(2x + 5)^2} + \dfrac C{(2x + 5)^3}\, dx$$ Partial fractions can be thought of as the reverse of "finding the common denominator": equate the numerator of the original integrand with what you would get by putting the three desired fractions over the common denominator $(2x+5)^3$. Doing this gives us $$A(2x + 5)^2 + B(2x + 5) + C = 4x^2 - 20\tag{1}$$ Solve for $C$ first: put $x = -\frac 52$ to zero out every term except $C$: $$C = 4(-5/2)^2 - 20 = 4\cdot \frac {25}{4} - 20 = 25 - 20 = 5$$ Then expand the left-hand side, substitute $C = 5$, and match up coefficients. (You can easily find $A$ that way, since its term is the only one on the left-hand side containing $x^2$; then, knowing $A$, solve for $B$.) In other situations, you could instead use two arbitrary values for $x$ to obtain two equations in two unknowns. Expanding $(1)$: $$A(4x^2 + 20 x + 25) + B(2x + 5) + 5 = 4x^2 - 20 \tag{2}$$ $$\implies 4Ax^2 = 4x^2 \implies A = 1$$ Now we have $A = 1, C = 5$, and can substitute this into equation $(2)$ and compare the coefficients of $x$: $$4Ax^2 + 20 Ax + 25A + 2Bx + 5B + 5 = 4x^2 - 20$$ $$ \iff (20 A + 2B)x = 0\cdot x $$ $$\iff 20 A + 2B = 0 $$ $$\iff 20\cdot 1 + 2B = 0 \iff B = -10 $$ $$\int \frac{4x^2 -20}{(2x+5)^3}dx = \int \dfrac 1{(2x + 5)} + \dfrac {-10}{(2x + 5)^2} + \dfrac 5{(2x + 5)^3}\, dx$$ Finally, each term succumbs to the substitution $u = 2x+5$, $du = 2\,dx$: $$\int \frac{4x^2 -20}{(2x+5)^3}dx = \frac 12 \ln|2x+5| + \frac 5{2x+5} - \frac 5{4(2x+5)^2} + K.$$
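If you want to double-check both the decomposition and the antiderivative, a computer algebra system does it in a few lines; here is a SymPy sketch (a verification aid, not part of the hand computation above):

```python
import sympy as sp

x = sp.symbols('x')
f = (4*x**2 - 20) / (2*x + 5)**3
# Partial fractions: should reproduce A = 1, B = -10, C = 5
print(sp.apart(f, x))
F = sp.integrate(f, x)
# Differentiating the antiderivative recovers the integrand
print(sp.simplify(sp.diff(F, x) - f))   # 0
```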
H: Solving for the integrating factor in a Linear Equation with Variable Coefficients. So I am studying Diff Eq and I'm looking through the following example. Solve the following equation: $$\frac{dy}{dt}+2y=3.$$ The book's derivation of the integrating factor runs: $$\mu(t)\frac{dy}{dt}+2\mu(t)y=3\mu(t) \;\rightarrow\; \frac{d\mu}{dt}=2\mu(t) \;\rightarrow\; \frac{d\mu/dt}{\mu(t)}=2 \;\rightarrow\; \frac{d}{dt}\ln|\mu(t)|=2 \;\rightarrow\; \ln|\mu(t)|=2t+C \;\rightarrow\; \mu(t)=ce^{2t} \;\rightarrow\; \mu(t)=e^{2t}.$$ So I have two questions regarding this solved problem. It appears that the absolute value sign is just tossed out of the problem without saying that as a result $c \ge 0$; is this not correct, and if not, why? Secondly, and more importantly, I was confused by the assumption that $c=1$. Why should it be $1$? Would the answer differ if another number were selected, or is it just an arbitrary selection that doesn't influence the end result and cancels out anyway? AI: Method 1: Calculus. We have $y' + 2y = 3$. Let's use calculus to solve this and see why these statements are okay. We have: $$\displaystyle \frac{\dfrac{dy}{dt}}{y - \dfrac{3}{2}} = -2$$ Integrating both sides yields: $$\displaystyle \int \frac{dy}{\left(y - \dfrac{3}{2}\right)} = -2 \int dt$$ We get $\ln\left|y - \dfrac{3}{2}\right| = -2t + c$. Let's take the exponential of both sides; we get: $$\left|y - \dfrac{3}{2}\right| = e^{-2t + c} = e^{c}e^{-2t} = c e^{-2t},$$ where in the last step we rename the positive constant $e^c$ as $c$. Do you see what happened to the constant now? Now let's use the definition of absolute value and see why it does not matter. For $y \ge \dfrac{3}{2}$, we have: $$\left(y - \dfrac{3}{2}\right) = c e^{-2t} \rightarrow y = c e^{-2t} +\dfrac{3}{2}$$ For $y \lt \dfrac{3}{2}$, we have: $$-\left(y - \dfrac{3}{2}\right) = c e^{-2t} \rightarrow y = -c e^{-2t} + \dfrac{3}{2}$$ However, $c$ is an arbitrary constant, so we can absorb the sign and write both cases as: $$y = c e^{-2t} + \dfrac{3}{2}$$ We could also leave it as $-c$ if we chose, but it is dangerous to keep those pesky negatives around. (Picture the V-shaped graph of $\left|y - \dfrac{3}{2}\right|$: the two cases above are just its two branches.) Now, can you use this approach and see why it is identical to the integrating factor method (it is exactly the same reasoning)? For your second question: you could make $c$ be anything you want. Let $y = ke^{-2t} + \dfrac{3}{2}$. Take the derivative and substitute back into the ODE and see if you get $3 = 3$ (you do). If they gave you initial conditions, then it would be a specific value, so the authors are being a little sloppy. They should have said something like $y(0) = \dfrac{5}{2}$, which would lead to $c = 1$. Let's work this statement: $y = ke^{-2t} + \dfrac{3}{2}$, so $y' = -2 k e^{-2t}$. Substituting back into the original DEQ yields: $y' + 2y = -2 k e^{-2t} + 2\left(ke^{-2t} + \dfrac{3}{2}\right) = 3$, and $3 = 3$. What if we let $c = 1$? We have: $y' + 2y = -2 e^{-2t} + 2\left(e^{-2t} + \dfrac{3}{2}\right) = 3$, and $3 = 3$. So you see that we can let $c$ be anything, unless given an IC. Method 2: Integrating Factor. Here is a step-by-step solution using an integrating factor. Starting from $y' + 2 y = 3$, multiply by $\mu$: $\mu y' + 2 \mu y = 3 \mu$. Since $\dfrac{d}{dt}(\mu y) = \mu y' + \mu' y$, choose $\mu$ so that $\mu' = 2 \mu \rightarrow \mu = e^{2t}$. We have $y'+2y = 3$, so: $e^{2t}y' + 2e^{2t}y = 3e^{2t}$, i.e. $\dfrac{d}{dt}(e^{2t}y) = 3 e^{2t}$. Then $e^{2t} y = \dfrac{3}{2}e^{2t} + c$, thus $y(t) = c e^{-2t}+ \dfrac{3}{2}$.
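To see concretely that the choice of constant washes out, here is a SymPy check (a verification sketch; the initial condition $y(0) = 5/2$ is the one suggested above):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t) + 2*y(t), 3)

# General solution: one free constant, y(t) = C1*exp(-2*t) + 3/2
print(sp.dsolve(ode, y(t)))

# Pinning the constant with the initial condition y(0) = 5/2 gives C1 = 1
print(sp.dsolve(ode, y(t), ics={y(0): sp.Rational(5, 2)}))
```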
H: I feel very stupid. Will someone walk me through a step-by-step in plain English of this Big-O problem? Prove that $n^2 + 2n + 3$ is $O(n^2)$. Find values for $C$ and $k$ and prove that they work. Edit: In particular, I don't at all understand how to find $C$ and $k$. I asked a similar question but every response went way over my head. The similar question is linked here. Sorry, I guess I'm not that smart! AI: I will try to give you a lengthy discussion of what $\mathcal O$ means. (All functions here are assumed non-negative, and their domains are the positive integers.) You say $f = \mathcal O(g)$ if there exist $C > 0$ and $k > 0$ such that $Cg(n) > f(n)$ for all $n > k$. Let's first investigate this definition when $C$ is not allowed to be chosen arbitrarily, but fixed at $1$: $$ f = \mathcal O(g) \text{ if there exists } k > 0 \text{ such that } g(n) > f(n) \text{ for all } n > k. $$ This means that $g$ eventually dominates $f$. For example, intuitively $n^2$ eventually dominates $2n$. (You can see this by plotting with Wolfram Alpha.) But you do have to prove formally that $n^2 > 2n$ actually holds when $n$ is big enough. By plugging in the first few test numbers, you can see that $3^2 > 2\cdot 3$, and $n^2$ continues to grow faster than $2n$ when $n > 2$. This gives you $k = 2$. Note, however, that you could also pick any $k$ larger than $2$. [I did skip one step when I claimed that $n^2$ continues to grow faster than $2n$ when $n > 2$. This can be proved by looking at the increment of each function as $n$ increases. Going from $n$ to $n + 1$, the first function increases by $(n+1)^2 - n^2 = 2n + 1$, while the second function increases by $2(n+1) - 2n = 2$. This means $n^2$ grows faster than $2n$ whenever $2n + 1 > 2$, i.e., $n > \frac 12$. Since we pick $k = 2$, our claim above is valid.] Next, we come to the discussion of $C$. The big-O notation is intended to quantify the "degree of growth", so if we have a function, say $n^6$, we want to say that $n^6$, $2n^6$, $10n^6$ and so on all grow at the same degree. That is where $C$ comes into play. Let me take the opportunity here to say that we could relax the strict inequalities ($>$) in the definition of $\mathcal O$ to non-strict ones ($\ge$) without changing anything. In the example you gave, $n^2 + 2n + 3$ is a sum of three terms: $n^2$, $2n$, and $3$. With the non-strict definition:

- $n^2$ (in fact any function) dominates itself for $C = 1$ and any $k$, so we can even pick $k = 0$;
- $n^2$ dominates $2n$ for $C = 1$ and $k = 2$;
- $n^2$ dominates $3$ for $C = 1$ and $k = 2$.

Now you combine all these dominations into one by adding up the $C$'s and taking the maximum of the $k$'s. This can be written more formally as: for $n \ge 2$, \begin{align*} n^2 & \ge n^2 \\ n^2 & \ge 2n \\ n^2 & \ge 3 \\ \therefore 3n^2 & \ge n^2 + 2n + 3. \end{align*} To conclude, the sentence $$ n \ge 2 \text{ implies } 3n^2 \ge n^2 + 2n + 3 $$ demonstrates $n^2 + 2n + 3 = \mathcal O(n^2)$ with $k = 2$ and $C = 3$.
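As a final sanity check, here is a one-liner (illustration only; the inequality for all $n \ge 2$ is what the algebra above proves):

```python
# Spot-check the claimed witnesses C = 3, k = 2 over a large range of n.
C, k = 3, 2
assert all(C * n**2 >= n**2 + 2*n + 3 for n in range(k, 10**6))
print("C =", C, "and k =", k, "work for every tested n")
```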
H: What is the pattern in this series: $\frac{r}{2} + \frac{4r^2}{9} + \frac{9r^3}{28} + \frac{16r^4}{65} + \dotsb$ A book gave me the following series, and asked for which $r\in \mathbb{R}$ it converges: $$\frac{r}{2} + \frac{4r^2}{9} + \frac{9r^3}{28} + \frac{16r^4}{65} + \dotsb$$ I feel dumb because I can't even find the pattern in the denominator! It's not arithmetic (adding $7$, then adding $19$, then adding $37$), and looking at the prime factors doesn't seem to help ($2$, $3\cdot3$, $2\cdot2\cdot7$, $5\cdot 13$). Can someone see it? Sorry, I'm sure it's obvious and I'm just missing it. AI: The denominators are $n^3+1$: indeed $2 = 1^3+1$, $9 = 2^3+1$, $28 = 3^3+1$, $65 = 4^3+1$, and the numerators are $n^2 r^n$. So the general term of the series is $$a_n=\frac {n^2r^n}{n^3+1}\sim_\infty \frac {r^n}{n}.$$ By the ratio test the radius of convergence of the series is $R=1$. For $r=-1$ the series converges by the alternating series test, while at $r=1$ it diverges by comparison with the harmonic series (since $a_n \sim 1/n$). So the interval of convergence is $[-1,1)$.
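The endpoint behaviour is easy to see numerically; here is a small sketch (an illustration, not a proof — the cutoffs are arbitrary):

```python
# Partial sums of sum_{n>=1} n^2 r^n / (n^3 + 1) at the two endpoints.
def partial_sum(r, N):
    return sum(n**2 * r**n / (n**3 + 1) for n in range(1, N + 1))

for N in (10**3, 10**4, 10**5):
    print(N, partial_sum(-1, N), partial_sum(1, N))
# the r = -1 column stabilizes (alternating series); the r = 1 column
# keeps growing, roughly like log(N) (harmonic-type divergence)
```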
H: Optimizing $x^2+y^2$ from two given equations? What is the maximum value of $x^2+y^2$, where $(x,y)$ are solutions to $$2x^2+5xy+3y^2=2$$ and $$6x^2+8xy+4y^2=3\,?$$ Note: Calculus is not allowed. I tried everything I could, but whenever I got, for example, $x^2+y^2=f(y)$ or $f(x)$, the function $f$ would always be a concave-up parabola, so I could not find a maximum for either variable. And by the way, I know that you can solve for $x$ and $y$ using the quadratic formula and get $4$ different solutions, but I am looking for a less messy way. I've asked this question before, but I didn't get the nice answer I wanted. Thanks. This question came from a math competition from the Math Honor Society, Mu Alpha Theta. AI: a) $2x^2+5xy+3y^2=2$ b) $6x^2+8xy+4y^2=3$ Here is an easier way to solve this system. Divide the first equation by $2$ and the second by $3$, so both left sides equal $1$, and equate them: $$x^2 + \tfrac 52 xy+ \tfrac 32 y^2 = 2x^2+ \tfrac 83 xy+ \tfrac 43 y^2.$$ Every term here has degree two, and neither $x$ nor $y$ can be $0$ (either would make the two original equations inconsistent), so we may divide through by $xy$. Letting $a = \frac xy$ (so $\frac yx = \frac 1a$), this becomes a simple quadratic equation: $$a + \tfrac 52 + \tfrac{3}{2a} = 2a + \tfrac 83 + \tfrac{4}{3a} \iff 6a^2 + a - 1 = 0 \iff a = \tfrac 13 \text{ or } a = -\tfrac 12.$$ Now substitute $x = ay$ back into either original equation to find the solutions: the case $\frac xy = \tfrac 13$ gives $y^2 = \tfrac 9{22}$, $x^2 = \tfrac 1{22}$, so $x^2+y^2 = \tfrac 5{11}$; the case $\frac xy = -\tfrac 12$ gives $y^2 = 2$, $x^2 = \tfrac 12$, so $x^2+y^2 = \tfrac 52$. The maximum value is therefore $\tfrac 52$. Cheers!
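For readers who want to confirm the answer, here is a SymPy check of the whole system (a verification sketch; it simply solves the two quadratics and evaluates $x^2+y^2$ at each solution):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
sols = sp.solve([2*x**2 + 5*x*y + 3*y**2 - 2,
                 6*x**2 + 8*x*y + 4*y**2 - 3], [x, y])
vals = {sp.simplify(xs**2 + ys**2) for xs, ys in sols}
print(vals, max(vals))    # expected: {5/11, 5/2}, maximum 5/2
```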