H: rational approximation for $x^x$ Using the standard method with derivatives (and taking the logarithm of both sides first) we can prove that $x^x>\frac{1}{2-x}$ for $x\in(0,1)$ - this inequality is an exercise from a problem set. Is it possible to replace $\frac{1}{2-x}$ with a rational function $R$ satisfying $R(0)=1$? The inequality $x^x>R(x)$ should hold for $x\in(0,T)$ with some $T>0$, not necessarily $T=1$. I tried with homographies at first, later with some random examples and looking at plots, but without success. I only managed to come arbitrarily close with homographies: for any $c<1$ there is $T>0$ such that $x^x>c\frac{1+x}{1-x}$ on $(0,T)$; unfortunately the inequality $x^x>\frac{1+x}{1-x}$ is false. AI: You cannot estimate $f(x) = x^x$ from below by any function $R$ which is differentiable at $x=0$ and satisfies $R(0)=1$. That would imply $$ \frac{x^x-1}{x} > \frac{R(x)-1}{x} $$ on some interval $(0, T)$, but for $x \to 0^+$ the right-hand side tends to $R'(0)$, whereas the left-hand side behaves like $\ln x$ (since $x^x-1=e^{x\ln x}-1\sim x\ln x$) and tends to $-\infty$.
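A quick numeric illustration of the answer's argument (a minimal sketch in plain Python; nothing beyond the standard library is assumed): the difference quotient $(x^x-1)/x$ tracks $\ln x$ and drifts to $-\infty$, while for any differentiable $R$ with $R(0)=1$ it would have to stay above the finite limit $R'(0)$.

```python
# Numeric check: (x^x - 1)/x behaves like ln(x) near 0+, hence is unbounded below.
import math

for x in [1e-1, 1e-3, 1e-6, 1e-9]:
    q = (x**x - 1) / x
    print(f"x={x:.0e}  (x^x-1)/x = {q:9.3f}   ln(x) = {math.log(x):9.3f}")
```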
H: What does the space $C^2([0,T]; H^2(\Omega))$ mean? Does the space $C^2([0,T]; H^2(\Omega))$ mean: $C^2$ in the time direction and $H^2$ in the space direction? Thank you very much! AI: Informally, that's a reasonable way to think about it. Formally, this is the space of all paths $\gamma : [0,T] \to H^2(\Omega)$ having two continuous derivatives (in time). The time derivative is defined in the usual way $\gamma'(t) = \lim_{\epsilon \to 0} \frac{1}{\epsilon}(\gamma(t+\epsilon)-\gamma(t))$, where the limit should be taken in the $H^2(\Omega)$ norm topology. The second derivative is defined analogously, and $\gamma'' : [0,T] \to H^2(\Omega)$ should be continuous, again with respect to the $H^2(\Omega)$ norm topology. It's a Banach space under a norm such as $$\|\gamma\| = \sup_{t \in [0,T]} \|\gamma(t)\|_{H^2(\Omega)} + \sup_{t \in [0,T]}\|\gamma'(t)\|_{H^2(\Omega)} + \sup_{t \in [0,T]} \|\gamma''(t)\|_{H^2(\Omega)}.$$
H: Doubt in Proof by Hippasus - Incommensurability of geometrical lengths leading to irrational number. I have read about geometrical proofs of irrational numbers based on incommensurability of lengths elsewhere. But I am stuck by the line: For, if any number of odd numbers are added to one another so that the number of numbers added is an odd number the result is also an odd number. given in the book: Julian Havil, The Irrationals; at pg. #22, 23. The other pages of the book: pg. 21, pg. 24, 25, pg. #26,27. The text given in pg. 21, 22 of the complete proof is given below, with the line in bold: Let $ABCD$ be a square and $AC$ its diameter. I say that $AC$ is incommensurable with $AB$ in length. For let us assume that it is commensurable. I say that it will follow that the same number is at the same time even and odd. It is clear that the square on $AC$ is double the square on $AB$. Since then (according to our assumption) $AC$ is commensurable with $AB$, $AC$ will be to $AB$ in the ratio of an integer to an integer. Let them have the ratio $DE:DF$ and let $DE$ and $DF$ be the smallest numbers which are in this proportion to one another. $DE$ cannot then be the unit. For if $DE$ was the unit and is to $DF$ in the same proportion as $AC$ to $AB$, $AC$ being greater than $AB$, $DE$, the unit, will be greater than the integer $DF$, which is impossible. Hence $DE$ is not the unit, but an integer (greater than the unit). Now since $AC:AB = DE:DF$, it follows that also $AC^2 :AB^2 = DE^2 :DF^2$. But $AC^2 = 2AB^2$ and hence $DE^2 = 2DF^2$. Hence $DE^2$ is an even number and therefore $DE$ must also be an even number. For, if it was an odd number, its square would also be an odd number. For, if any number of odd numbers are added to one another so that the number of numbers added is an odd number the result is also an odd number. Hence $DE$ will be an even number. Let then $DE$ be divided into two equal numbers at the point $G$. Since $DE$ and $DF$ are the smallest numbers which are in the same proportion they will be prime to one another. Therefore, since $DE$ is an even number, $DF$ will be an odd number. For, if it was an even number, the number $2$ would measure both $DE$ and $DF$, although they are prime to one another, which is impossible. Hence $DF$ is not even but odd. Now since $DE = 2EG$ it follows that $DE^2 = 4EG^2$. But $DE^2 = 2DF^2$ and hence $DF^2 = 2EG^2$. Therefore $DF^2$ must be an even number, and in consequence $DF$ is also an even number. But it has also been demonstrated that $DF$ must be an odd number, which is impossible. It follows, therefore, that $AC$ cannot be commensurable with $AB$, which was to be demonstrated. Doubt: I feel that the line is irrelevant as the sum of an odd number of odd quantities is nowhere to be seen. If not, then I have not understood the proof correctly. AI: The proof depends on the fact that the square of an odd number is odd. That follows from the statement in bold, since you square an odd number by adding it to itself an odd number of times. For example, $5^2 = 5+5+5+5+5$: an odd number (five) of odd summands, hence an odd result.
H: Sample Space for simple Probability Experiment Imagine three empty boxes. You throw a "fair" coin. If the coin displays heads, you put a ball inside one of the three boxes, where the probability for box one is $p\in(0,1)$ and for boxes two and three it is $\frac{1-p}{2}$. With which probability (in dependence of $p$) do we have a ball in the first box? Now, intuitively one would say $\frac{p}{2}$, which is of course correct, but I'm more interested in actually writing it down in a pedantic way. So let $H:=\text{"Coin displays heads"}$ and $B_i:=\text{"Ball is in box i"}$ with $i=1,2,3$. We know $P[H]=P[H^C]=1/2$ since we have a "fair" coin. Furthermore, we have apparently: $P[B_1|H]=p, \quad P[B_i|H]=\frac{1-p}{2}, \ i=2,3$ and $P[B_i|H^C]=0, \ i=1,2,3$ Now, I basically fail to see how to properly calculate e.g. $P[B_1|H]=\frac{P[B_1 \cap H]}{P[H]}$, and the reason for that is that I'm confused about how to calculate $P[B_1 \cap H]$, and the reason for that is that we never really modeled the sample space for the experiment. Of course, it makes intuitive sense: to be able to have the event $B_1$ we need to have thrown a coin which displays heads, i.e. $B_1$ implies $H$. But I'd like to actually write down these sets. How would one do that? I'm confused about how to model the sample space $\Omega$ if we have events that imply other events. My take would be: $\Omega:=\{ \{B_1, H\}, \{B_2, H\}, \{B_3, H\}, H, H^C\}$ (whereas I really really dislike the notation $H^C$ for "Coin displays number", since it is just wrong. We get what is meant, but with the given $\Omega$ above, it'd be wrong... but anyway, as you can see, my $\Omega$ also just seems to be absolute trash.) Can someone give me an example of a properly modeled sample space? I don't want to work with "intuition" since I really think probability is heavily anti-intuitive. AI: The sample space $\Omega$ is $$\Omega = \{B_1, B_2, B_3, T\}$$ where $T$ (tails) stands for your $H^C$. The elements $\omega_i$ of the sample space $\Omega$ must be mutually exclusive (if $\omega_i$ occurred, then $\omega_j$, $j\neq i$, can't occur) and collectively exhaustive (one element $\omega_i$ always occurs in the probabilistic experiment). The sample space $\Omega = \{B_1, B_2, B_3, T\}$ satisfies these requirements, and therefore is a correct sample space for your experiment.
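A short symbolic enumeration (sympy assumed available; the outcome labels are just illustrative) that makes the intuitive answer $p/2$ concrete using the sample space above:

```python
# Outcome probabilities over Omega = {B1, B2, B3, T}; read off P(B1) = p/2.
from sympy import symbols, Rational, simplify

p = symbols('p', positive=True)
outcomes = {
    'B1': Rational(1, 2) * p,              # heads, then box 1
    'B2': Rational(1, 2) * (1 - p) / 2,    # heads, then box 2
    'B3': Rational(1, 2) * (1 - p) / 2,    # heads, then box 3
    'T':  Rational(1, 2),                  # tails, no ball placed
}
assert simplify(sum(outcomes.values()) - 1) == 0   # probabilities sum to 1
print(outcomes['B1'])                              # p/2
```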
H: Evaluating Sum at bounds I have to find an expression in terms of n using standard results for $$\sum_{r=n+1}^{2n} r(r+1)$$ And have found the general equation $$\sum_{r=1}^{n} r(r+1) = \frac{2n^3+6n^2+4n}{6}$$ However evaluating it as $$\frac{2(2n)^3+6(2n)^2+4(2n)}{6} - \frac{2(n+1)^3+6(n+1)^2+4(n+1)}{6}$$ does not yield the correct answer, yet evaluating it as $$\frac{2(2n)^3+6(2n)^2+4(2n)}{6} - \frac{2(n)^3+6(n)^2+4(n)}{6}$$ gives the correct answer. I'm at a loss here: why am I not getting the correct answer by finding the difference of the sum between the two bounds? AI: Let the terms of the sum be $a_n$. You need to find: $$a_{n+1}+a_{n+2}+\cdots+a_{2n}=\\ (a_1+\cdots+a_{n}+a_{n+1}+\cdots+a_{2n})-(a_1+\cdots+a_n)=\\ S_{2n}-S_n$$ In your first method, you are subtracting the term $a_{n+1}$ and losing it. Addendum: Note the correct formula to use is: $$S_n=\sum_{k=1}^n k(k+1)=\frac{2n^3+6n^2+4n}{6}$$ Now consider the difference: $$\sum_{r=n+1}^{2n} r(r+1)=S_{2n}-S_n=\\ \frac{2(2n)^3+6(2n)^2+4(2n)}{6} - \frac{2(n)^3+6(n)^2+4(n)}{6}=\\ \frac{7}{3}n^3+3n^2+\frac{2}{3}n.$$
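As a sanity check, a few lines of sympy (assumed available) confirm both the closed form for $S_n$ and the expression for the tail sum:

```python
# Verify S_n = sum_{k=1}^n k(k+1) and the tail sum S_{2n} - S_n.
from sympy import symbols, summation, simplify, Rational

n, k, r = symbols('n k r', integer=True, positive=True)
S = summation(k * (k + 1), (k, 1, n))
tail = summation(r * (r + 1), (r, n + 1, 2 * n))
assert simplify(S - (2*n**3 + 6*n**2 + 4*n) / 6) == 0
assert simplify(tail - (S.subs(n, 2*n) - S)) == 0
assert simplify(tail - (Rational(7, 3)*n**3 + 3*n**2 + Rational(2, 3)*n)) == 0
```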
H: Proving the bounds of the cosine sequence without starting with basic trig identities. If we look at a unit circle we can see that the values of cosine are between -1 and 1. However is there a particular proof for this fact? I have tried using Euler's identity to arrive at a proof with no luck. I have also tried using the expanded form of cosine: $\cos(x)=\sum_{n=0}^{\infty } \frac{(-1)^n x^{2n}}{(2n)!}$. Using the expansion I tried to manipulate as much as I could, however it seems that I am not finding a proof in this fashion either. I also tried going from the fact that it is a Cauchy sequence. I know that identities can be brought in to prove it, but I should like to find a more fundamental way of proving it. Any ideas? AI: $\cos^2x+\sin^2x=(\cos x+i\sin x)(\cos x -i\sin x)=(\cos x+i\sin x)(\cos (-x )+ i \sin (-x))$ $=e^{ix}e^{-ix}=e^0=1.$ Since $\cos x$ and $\sin x$ are real numbers, their squares are non-negative. Therefore, $\cos^2x\le1$, which implies $-1\le \cos x\le1$.
H: How do I finish solving $f(x)f(2y)=f(x+4y)$? I'm trying to solve this functional equation: $$f(x)f(2y)=f(x+4y)$$ The first thing I tried was to set $x=y=0$; then I get: $$f(0)f(0)=f(0)$$ which means that either $f(0)=0$ or we can divide the equation by $f(0)$ and then $f(0)=1$. Case 1: If $f(0)=0$ then we can try to set $x=0$. Then: $$f(0)f(2y)=f(4y)$$ $$0=f(4y)$$ Which means that one of the solutions is the constant function $f(x)=0$ Case 2: If $f(0)=1$. This is where I'm stuck. I tried to set $x=0$, then I get: $$f(2y)=f(4y)$$ I also tried to set $x=-4y$. Then I get: $$f(-4y)f(2y)=f(0)=1$$ I think that these two observations could be useful, but I don't know how to continue from here. I have guessed that another solution is $f(x)=1$ but I don't know how to show that there aren't any others as well. AI: Assuming that $ f(0)=1$, you got for $ y\in \Bbb R $, $$f(4y)=f(2y)=f(y)=f(\frac y2)$$ $$=...=f(\frac{y}{2^n})$$ for each $ n\ge 0$. But if $f$ is continuous at $0$, then $$\lim_{n\to +\infty}f(\frac{y}{2^n})=f(0)=1$$ thus $$(\forall y\in \Bbb R)\;\; f(y)=1$$
H: Riemann integrability criteria Thinking back about limits and the original definition of the limit, I thought that the Riemann integral (for some bounded function $f$ on $[a,b]$) could be defined using a limit-like definition. I found one definition and proved the equivalence of the two: for any $\epsilon >0$ there exists a partition $P$ for which $U(P)-L(P)<\epsilon$, where $U$ and $L$ are the upper and lower Darboux sums. My Question then I found this theorem for equivalence: for every $\epsilon >0$ there exists some $\delta >0$ such that $U(P)-L(P)<\epsilon$ for any partition $P$ with $||P||<\delta$. I was not able to prove this one; how can one prove this theorem? AI: I believe the proof can be found here in theorem 2.4 (2.4').
H: Prove these statements about trees are the same Given a tree $T$ with $|V(T)| = n \geq 2$ prove these statements are equivalent: There is an Eulerian path in $T$ There is a Hamiltonian path in $T$ The number of leaves in $T$ is $2$ I don't understand how a tree can have a Hamiltonian path; a tree looks like this: How can you possibly travel on each vertex without crossing it twice? (Statement 2) I would appreciate if you could help me solve it! Thank you! AI: $(1)\implies(2)$ since an Eulerian path is a path that includes all edges, and hence includes all vertices because a tree is connected and has no isolated vertex. $(2)\implies(3)$ since if there is a Hamiltonian path, the $n-2$ intermediate vertices of that path must have degree $\ge2$. The sum of degrees of all vertices is $2e=2(n-1)=2n-2$, and the sum of degrees of the $n-2$ intermediate vertices is $\ge2(n-2)=2n-4$. Since the two terminal vertices of the path each have degree at least $1$, the intermediate sum is also at most $2n-4$; hence it is exactly $2n-4$, i.e. each intermediate vertex has degree exactly $2$ and each terminal vertex has degree exactly $1$, so there are exactly two leaves. $(3)\implies(1)$ since the unique path between the two leaves is the required Eulerian path. How? By a similar degree count, each internal node has degree exactly $2$. The tree is "linear", so to say -- an unbranched line.
H: Questions about adjoint functions Let $V, W$ be Euclidean vector spaces and $f \in \mathrm {Hom}_{\mathbb {R}}(V,W)$. The adjoint function of $f$ is $f^{ad} \in \mathrm {Hom}_{\mathbb {R}}(W,V)$. Which of the following statements are true? a. $f$ is surjective iff $f^{ad}$ is surjective. b. $f$ is injective iff $f^{ad}$ is injective. c. $f$ is surjective iff $f^{ad}$ is injective. d. $f^{ad}$ is surjective iff $f$ is injective. Can someone help me with this? I'm not sure at all. AI: Hint: Only c,d are true. To prove c, we can begin one approach by noting that $$ f^{\text{ad}}(y) = 0 \iff \text{for all }x \in V, \langle x, f^{\text{ad}}(y) \rangle = 0. $$
H: Properties of min(x,y) and max(x,y) operators Is $\min(x^2,y^2)=[\min(x,y)]^2$, and similarly for $\max(x,y)$? Also, is $\sqrt{\min(x^2,y^2)}=\min(x,y)$? Do other non-linear operations work? In general, what are the other interesting properties of these operators, and where can I study more about them? AI: No. $\min\{2^2,(-3)^2\}=2^2\ne\min\{2,-3\}^2=(-3)^2$. You can see even $\max$ will not work for this example. In general, it will work when $x\le y\implies x^2\le y^2$, i.e. on the monotonically increasing section of the curve $y=x^2$, where $x,y\ge0$. For your second question, what is true in general is $\sqrt{\min(x^2,y^2)}=\min(|x|,|y|)$, which equals $\min(x,y)$ only when $x,y\ge0$.
H: A question in Theorem 5 of Lesson 8 of Hoffman Kunze linear algebra I am self studying Linear Algebra from Hoffman Kunze and I have a question in a theorem of Lesson 8 of the textbook. Adding its image: How does it follow from Theorem 4 that $E(c\alpha + \beta) = cE\alpha + E\beta$? Image of Statement of Theorem 4: Kindly tell. AI: We have \begin{align} E(c\alpha + \beta) = cE\alpha + E\beta &\iff cE\alpha + E\beta \text{ is the best approximation in $W$ of $c\alpha + \beta$}\\ &\stackrel{(i)}\iff (cE\alpha + E\beta) - (c\alpha + \beta) \perp W\\ &\iff c(E\alpha -\alpha) + (E\beta - \beta) \perp W \end{align} and the last statement is true. Namely, $E\alpha$ is the best approximation in $W$ of $\alpha$ so by $(i)$ we get $E\alpha -\alpha \perp W$. Similarly $E\beta - \beta \perp W$. Therefore $c(E\alpha -\alpha) + (E\beta - \beta) \perp W$.
H: implication of the Abel–Ruffini theorem I am taking a course in abstract algebra, and we proved the following theorem: I want to prove something more specific. Let's look at polynomials of degree 5 over C. Someone is claiming he has a magic formula, which receives the coefficients of a polynomial of degree 5, and returns its roots using only basic operations and radicals. I want to understand how I can prove this person wrong using the theorem above. In this case, the theorem talks about the field of rational functions with 5 variables over C. It shows that I can't express $t_1, ..., t_5$ (the roots of f) in terms of $s_1,...,s_5$, in this abstract field. I understand the proof in this context, but I want to understand how I can use it concretely in order to prove this person wrong. In the sources that I have seen, they say that the Abel–Ruffini theorem implies what I want to prove, but they don't show how. Can someone help me understand how you can show this? I am adding the proof we saw in the course: AI: Nice question. I was annoyed by the same thing after learning Galois theory. Before learning it I thought that I would see a proof that there is no formula which works specifically for polynomials over $\mathbb{C}$, but instead of that I only saw a proof that there is no formula which works for polynomials over the field $\mathbb{C}(t_1,...,t_n)$, which is a much bigger field. I was very disappointed for a while, but luckily for me I managed to think of a proof myself. Suppose there is a formula which works for polynomials of degree $n\geq 5$ over $\mathbb{C}$. The formula contains the field operations, taking roots, the coefficients of a polynomial and some complex constants. (For example the quadratic formula $\frac{-b+\sqrt{b^2-4ac}}{2a}$ uses the constants $2,4,...$.) The main thing we should note is that the formula can contain only finitely many constants. Let's call them $z_1,...,z_k$. So our formula is actually a formula over the field $K:=\mathbb{Q}(z_1,...,z_k)$. Note that this field is countable, since it is finitely generated over $\mathbb{Q}$. Since $\mathbb{C}$ is uncountable there must be an element $t_1\in\mathbb{C}$ which is transcendental over $K$. Again, $K(t_1)$ is countable, so there is $t_2\in\mathbb{C}$ which is transcendental over $K(t_1)$. We continue this way, and finally get a field $K(t_1,...,t_n)$ where $t_i$ is transcendental over $K(t_1,...,t_{i-1})$ for all $i$. Now define the polynomial $f=(x-t_1)...(x-t_n)$ and call its symmetric functions $s_1,...,s_n$ (the coefficient of $x^n$ is $1$). Finally, let $L=K(s_1,..,s_n)$; this is a subfield of $K(t_1,...,t_n)$. Now, what can we say about $L$? I claim that every polynomial of degree $n$ in $L[x]$ is solvable by radicals. Why? Well, take a polynomial $g\in L[x]$ of degree $n$ and put its coefficients in the formula we have. The coefficients of $g$ are obviously in $L$, and remember that the constants in the formula belong to $K\subseteq L$, so they are in $L$ as well! So this shows $g$ is solvable by radicals over $L$. But now let's go back to the polynomial $f=(x-t_1)(x-t_2)...(x-t_n)\in L[x]$. As we showed above it must be solvable by radicals over $L$, so its Galois group over $L$ is solvable.
On the other hand, using the fact that $t_1,...,t_n$ are algebraically independent over $K$ (which means $t_i$ is transcendental over $K(t_1,...,t_{i-1})$ for all $i$) we can conclude that $Gal(K(t_1,...,t_n)/K(s_1,...,s_n))\cong S_n$, this is exactly the same proof as the proof that the Galois group of the general polynomial in the field of rational functions is $S_n$. This means the Galois group of $f$ over $L$ is $S_n$, a contradiction. The difference is that in my proof $t_1,...,t_n$ are all complex numbers, and not just formal variables like in the field of rational functions. So here $f=(x-t_1)...(x-t_n)$ is a specific polynomial over $\mathbb{C}$ for which the formula we took fails.
H: A question in an example in the book Hoffman and Kunze in the lesson Inner Product Spaces I am self studying some topics in Linear Algebra from Hoffman Kunze and I have a question in an example of the lesson on Inner Product Spaces. Its image: How is the orthogonal projection of $R^{3}$ on $W$ the linear transformation defined by $(x_{1}, x_{2}, x_{3}) \mapsto \ldots$? I am unable to understand the reasoning behind it. Kindly help me. AI: The orthogonal projection $E(x_1,x_2,x_3)$ of $(x_1,x_2,x_3)$ onto $W$ is characterized by $$E(x_1,x_2,x_3) \in W, \quad (x_1,x_2,x_3) - E(x_1,x_2,x_3) \perp W$$ so we have $$E(x_1,x_2,x_3) \in W \implies E(x_1,x_2,x_3) = \lambda(3,12,-1)$$ for some scalar $\lambda$ and then $$(x_1,x_2,x_3) - \lambda(3,12,-1) \perp (3,12,-1)$$ so $$0=\langle (x_1,x_2,x_3) - \lambda(3,12,-1), (3,12,-1)\rangle = 3x_1+12x_2-x_3 - 154\lambda.$$ We get $$\lambda = \frac{3x_1+12x_2-x_3}{154} \implies E(x_1,x_2,x_3) = \frac{3x_1+12x_2-x_3}{154}(3,12,-1).$$
H: Prove that for any set $A$, $A = \bigcup \mathscr P (A)$. Not a duplicate of Prove that $ (\forall A)\bigcup\mathcal P(A) = A$ Prove that for any set A, A = $\cup$ $\mathscr{P}$(A) This is exercise $3.4.16$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$: Prove that for any set $A$, $A = \bigcup \mathscr P (A)$. Here is my proof: Suppose $A$ is arbitrary. $(\rightarrow)$ Let $x$ be an arbitrary element of $A$. Since $A\subseteq A$ then $A\in\mathscr P(A)$. From $A\in\mathscr P(A)$ and $x\in A$, $x\in\bigcup \mathscr P(A)$. Therefore if $x\in A$ then $x\in\bigcup \mathscr P(A)$. Since $x$ is arbitrary, $\forall x\Bigr(x\in A\rightarrow x\in\bigcup\mathscr P(A)\Bigr)$ and so $A\subseteq \bigcup\mathscr P(A)$. $(\leftarrow)$ Let $x$ be an arbitrary element of $\bigcup\mathscr P(A)$. So we can choose some $A_0$ such that $A_0\in\mathscr P(A)$ and $x\in A_0$. $A_0\in\mathscr P(A)$ is equivalent to $A_0\subseteq A$ and since $x\in A_0$, $x\in A$. Therefore if $x\in\bigcup \mathscr P(A)$ then $x\in A$. Since $x$ is arbitrary, $\forall x\Bigr(x\in\bigcup\mathscr P(A)\rightarrow x\in A\Bigr)$ and so $\bigcup\mathscr P(A)\subseteq A$. From $A\subseteq \bigcup\mathscr P(A)$ and $\bigcup\mathscr P(A)\subseteq A$ we obtain $A= \bigcup\mathscr P(A)$. Since $A$ is arbitrary, $\forall A\Bigr(A=\bigcup\mathscr P(A)\Bigr)$. $Q.E.D.$ Is my proof valid$?$ Thanks for your attention. AI: The proof is valid but you could make it shorter and clearer by arguing at the level of sets, rather than elements. For example $A\in\mathscr{P}(A)$ implies $A\subseteq\bigcup\mathscr{P}(A)$. And "$X\subseteq A$ for all $X\in \mathscr{P}(A)$" implies $\bigcup\mathscr{P}(A)\subseteq A$.
H: Difference between finite and infinite for dimension of sum of vector subspaces? I am reviewing Advanced Linear Algebra by Steve Roman. He says that for subspaces $S$ and $T$ of vector space $V$ $$\dim(S)+\dim(T) = \dim(S+T) + \dim(S \cap T)$$ but we cannot write $$\dim(S+T) = \dim(S)+\dim(T) -\dim(S \cap T)$$ unless $S+T$ is finite-dimensional. I am trying to reason out why that is. If $S+T$ has infinite dimension, then one of these subspaces has infinite dimension. Actually, I think bases for both subspaces have to have the same cardinality, since otherwise $S+T$ does not seem well-defined. However, perhaps this is where I am wrong. Assuming you can somehow define the operation $S+T$ for this situation, I see three cases. Without loss of generality, if $S$ has finite dimension, then I don't see a problem. In the second equation, you would simply get that the cardinalities of the bases for $T$ and $S+T$ are the same (assuming a constant added to infinity is infinity). If both subspaces have the same infinite dimension, then I suppose you would get $\infty - \infty = \infty$, which is a problem. If both subspaces have infinite dimension but their bases have different cardinalities, then I don't see a problem here either. Let $n$ be the larger cardinality and $m$ be the smaller cardinality. I assume $n-m=n$ as in the situation where $n=\infty$ and $m$ is a constant. Then we arrive at a similar situation as in the first case. Am I thinking about this correctly? AI: Yes, the problem is with $\infty- \infty$ which is not defined in general. Indeed, if $S+T$ is infinite-dimensional then at least one of $S$ and $T$ are infinite-dimensional. Namely, if they were both finite-dimensional with bases $\{s_1, \ldots, s_m\}$ and $\{t_1, \ldots, t_n\}$ respectively then $S+T$ is spanned by $\{s_1, \ldots, s_m, t_1, \ldots, t_n\}$, and hence it is finite-dimensional.
H: How many SVD's does a matrix have? If $A$ is a $3 \times 3$ matrix with singular values $5$, $4$, and $2$, then there are $9$ distinct singular value decompositions of $A$. True or false? Is there any method to solve this? I'm not sure how to approach it. AI: Important rule (for real matrices): An $n\times n$ matrix with $n$ distinct positive singular values has $2^{n}$ different singular value decompositions (SVDs). An $n\times n$ singular matrix (one without an inverse) with $n$ distinct singular values ($\sigma_{n}$ can be $0$) has $2^{n+1}$ distinct SVDs. An $n\times n$ matrix with a repeated singular value has infinitely many SVDs. This problem satisfies the first rule, therefore $A$ has $2^{3}=8$ distinct SVDs. Answer: False
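A numpy sketch (numpy assumed available; the test matrix is an arbitrary one with singular values 5, 4, 2) of where the $2^n$ count comes from: flipping the sign of the $i$-th column of $U$ together with the $i$-th row of $V^T$ leaves $U\Sigma V^T$ unchanged, so each index contributes a factor of $2$.

```python
# Sign-flip ambiguity of the SVD: matched flips preserve A = U @ S @ Vt.
import numpy as np

rng = np.random.default_rng(0)
Q1, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal factors
Q2, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = Q1 @ np.diag([5.0, 4.0, 2.0]) @ Q2.T

U, s, Vt = np.linalg.svd(A)
for signs in [(1, 1, 1), (1, -1, 1), (-1, -1, -1)]:
    D = np.diag(signs)
    U2, Vt2 = U @ D, D @ Vt        # flip matching columns of U and rows of Vt
    assert np.allclose(U2 @ np.diag(s) @ Vt2, A)
print("each of the 2^3 sign patterns gives a valid SVD of A")
```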
H: Solving a third order Euler-Cauchy ODE I have been given the following ODE: $$(2x+3)^3 y''' + 3 (2x+3) y' - 6 y=0$$ and I have to solve it using Euler's method, which I am fairly familiar with. Now, I let $ 2x+3 = e^t$ and $y=e^{λt}$ After differentiating $y$, I get that $$y''' = \frac{y_t'''-3y_t''+2y_t'}{e^{3t}}$$ and $y'$ is $$\frac{y_t'}{e^t}$$ Now after substituting in the given equation I get $$e^{3t} \frac{y_t'''-3y_t''+2y_t'}{e^{3t}} + 3e^t \frac{y_t'}{e^t} -6y=0 $$ After which I am left with the following homogeneous equation: $$y''' - 3y'' + 5y' -6y =0$$ Which can be easily solved and the solutions are (I checked in wolframalpha): $$C_1 e^{2t} + e^{\frac{t}{2}}(C_2 \cos(\frac{\sqrt {11}}{2} t) + C_3 \sin(\frac{\sqrt {11}}{2} t))$$ When I plug $2x+3=e^t$ back in, I get: $$y(x) = C_1(2x+3)^2 + C_2 \sqrt{2x+3} \cos(\frac{\sqrt {11}}{2}\ln(2x+3)) + C_3 \sqrt{2x+3} \sin(\frac{\sqrt {11}}{2}\ln(2x+3))$$ But the wolframalpha solution for the whole equation is $$C_2(2x+3)^{\frac{3}{2}} + C_3(2x+3) + C_1\sqrt{2x+3}$$ Now, I am new to ODEs so I can't rule out that I made a silly mistake. What I did when substituting back is essentially $e^t = 2x+3$ and $t=\ln(2x+3)$ Can anyone point out my mistakes? AI: If you set $2x+3=e^t$, then in $u(t)=y(x)$ you get $u(t)=y(\frac{e^t-3}2)$. Thus computing the derivatives gives $$ u'(t)=y'(x)\frac{e^t}2\\ u''(t)=y''(x)\frac{e^{2t}}4+y'(x)\frac{e^t}2\\ u'''(t)=y'''(x)\frac{e^{3t}}8+y''(x)\frac{3e^{2t}}4+y'(x)\frac{e^t}2 $$ This can also be solved for the derivatives of $y$ to get $$ y'(x)=2e^{-t}u'(t)\\ y''(x)=4e^{-2t}(u''(t)-u'(t))\\ y'''(x)=8e^{-3t}(u'''(t)-3u''(t)+2u'(t)) $$ This means in your initial calculations you did not consider the inner derivative/linear coefficient $2$ in $e^t=2x+3$. You could have chosen to set $e^t=x+\frac32$, then the powers of $2$ originate in the polynomial coefficients.
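A quick sympy check (sympy assumed available) that WolframAlpha's three basis solutions really satisfy the original equation, confirming that the discrepancy comes from the missing inner factor of $2$ rather than from the solver:

```python
# Verify that (2x+3)^1, (2x+3)^(1/2), (2x+3)^(3/2) all solve
# (2x+3)^3 y''' + 3 (2x+3) y' - 6 y = 0.
from sympy import symbols, Rational, simplify, diff

x = symbols('x')
for r in [1, Rational(1, 2), Rational(3, 2)]:
    y = (2*x + 3)**r
    lhs = (2*x + 3)**3 * diff(y, x, 3) + 3*(2*x + 3) * diff(y, x) - 6*y
    assert simplify(lhs) == 0
print("all three exponents solve the ODE")
```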
H: A where-from and how-to study pure mathematics question I've tried to find answers to my question on this community as well as several others, but couldn't find a satisfactory answer, so here I am. I thank, in advance, anyone who decides to give time to my question. Background: I am trying to learn pure mathematics on my own from books and any other resources I can find online. I used to do contest mathematics a year or so ago, but have lost touch with almost everything over the past year. Now I've decided to study maths again and am hooked on it, but a little lost. I've read A Short Introduction to Mathematics by Timothy Gowers, and am currently planning to start the book What is Mathematics? by Courant and Robbins. I have the following plan for my studies after this book: start with high school precalculus, then proceed to learn calculus and then further (this is one of the things where I am lost). The books I plan to study from are (possibly in this order): Basic Mathematics by Serge Lang, along with the book series by I.M. Gelfand; Precalculus Mathematics in a Nutshell by George Finlay Simmons; Euclid's Elements, along with the New Mathematical Library series from the Mathematical Association of America; Calculus by Spivak; A Course of Pure Mathematics by Hardy. After that I pretty much have no idea of what things are and in what order I am supposed to study further. My plan is to study these books and continue further studies with the help of the further readings in these books, and use online resources like Khan Academy, MIT OpenCourseWare, etc. (not a very good plan, I know). So, my questions are: How effective do you think my plan is? How deep into mathematics can I go with it? In what order should I study further mathematics? What are some of the books and resources that I can study from? Also I would like to know how much time I have to invest daily into my studies (5 hrs? 8 hrs? etc.) so that I can complete undergraduate-level mathematics in about 2 years or so (please tell me if that's not enough time). Thanks again for your time. Also please forgive my mistakes as I'm quite new to this site as well as the world of mathematics. Any answer or advice would be appreciated. AI: If your plan is to go from having a high-school-level understanding of mathematics to an undergraduate-degree-level understanding from self-study, then 2 years is extremely optimistic. Depending on your work ethic I would say such a venture would take closer to 4-5 years or maybe even longer. An undergraduate degree in mathematics covers a massive range of topics. By the time you finish an undergraduate degree you will have covered number theory, analysis (real and complex), group theory, linear algebra, differential geometry, topology, combinatorics, graph theory and more. Now I don't want to discourage you, because learning mathematics on your own is certainly doable. In fact I think your current plan is good as a starting point. But realistically the resources you've listed here will only cover basic plane geometry, introductory calculus and introductory algebra. That's fine though, because these are the things you should be starting with. If you get a good understanding of these things then it will put you in a good position to start learning the more advanced stuff I listed above. As for the order you should learn things in, I think this is a good rough outline: 1. Get comfortable with the material you've listed already: plane geometry, calculus, and basic algebra.
2. Move on to number theory and real/complex analysis. They are closest to what you would've been studying already and are good gateways into more advanced topics. 3. Study group theory and linear algebra. They are more abstract and at first it can be difficult to really see how they fit in. But they are important topics, so it's worth making an effort with them. 4. Once steps 2 and 3 are completed, things get a bit looser and depend on your interests. If you're more interested in geometric-style things then start on differential geometry, metric spaces or topology. If you like algebra then study commutative algebra or more advanced group theory. You get the idea. Like I said, don't take these steps as gospel; I've thrown them together based on my own experience learning mathematics, so some topics that others consider to be vital for understanding pure mathematics are probably missing. So following what I've suggested here will not give you an equivalent education to an undergraduate degree. This is what I think you should do if you want an approximation to an undergraduate understanding of pure mathematics. To be honest, by the time you get to step 4 you almost certainly won't need this guide. You'll have a clearer picture of the landscape of mathematics, what you need to learn and also what you're interested in. With regards to how much you should study, it really depends on you. I think 2 or 3 hours a day is a perfectly reasonable amount of time to dedicate to studying if you've got time. Other than that my only advice is similar to @Alexey Burdin's comment. Don't give up, mathematics is hard, so don't get frustrated if you don't understand something! There isn't a mathematician on earth that understands everything the first time.
H: Show that: $f(\theta)=\sin\theta\cos(\theta\ -k)$ is max when $\theta = \frac{k+90^{\circ}}{2}$ without using calculus. Given $$f(\theta)=\sin\theta\cos(\theta\ -k)$$ Show that $f(\theta)$ is maximum when: $\theta = \frac{k+90^{\circ}}{2}$ I can do this easily using calculus, but I'm looking for a way of doing it without calculus. Context: A particle is projected up an inclined slope. The incline is fixed at an angle $k$ to the horizontal. The particle is projected at an angle $\theta$ to the incline. This problem resulted from trying to find the angle of maximum range. AI: Note $$\sin\theta \cos(\theta-k) = \frac 12\cdot 2\sin\theta\cos(\theta-k) =\frac 12 \left(\sin(2\theta-k) +\sin k \right) $$ This will be maximum, when $$\sin(2\theta-k) =1 \\ \implies 2\theta-k=\frac{\pi}{2} +2n\pi \\ \implies \theta=\frac 12\left (k+\frac{\pi}{2}+2n\pi \right)$$ for any integer $n$. $|\theta|$ can be minimized by setting $n=0$.
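For reassurance, the product-to-sum step and the claimed maximizer can be checked symbolically in a few lines (sympy assumed available):

```python
# Verify sin(t)cos(t-k) == (sin(2t-k) + sin(k))/2 and the maximizer t = (k + pi/2)/2.
from sympy import symbols, sin, cos, simplify, expand_trig, pi

t, k = symbols('theta k', real=True)
identity = sin(t) * cos(t - k) - (sin(2*t - k) + sin(k)) / 2
assert simplify(expand_trig(identity)) == 0
theta_star = (k + pi/2) / 2
assert sin(2*theta_star - k) == 1   # argument collapses to pi/2, so sin gives 1
```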
H: How to prove that supremum of strictly convex function is infinity? Suppose there is a strictly convex continuous function $f$: $R^n$ $\rightarrow$ $R$. Is the supremum of $f$ always infinity? How can we prove it? I am trying to come up with a proof. If $x_1$ and $x_2$ are two points in $R^n$, strict convexity implies $f(\alpha x_1 + (1-\alpha) x_2) $ < $\alpha f(x_1) + (1-\alpha)f(x_2)$ . Suppose $f$ is bounded. Case 1: The bound is attained at a point, say $x_0$. Then for some $\alpha$, some $x_1$ and $x_2$ s.t. $ (\alpha x_1 + (1-\alpha) x_2) = x_0$: $f(x_0)$< $\alpha f(x_1) + (1-\alpha)f(x_2)$ Therefore a contradiction. Case 2: The bound is not attained. Since the function is strictly convex, we know $f(x)$ approaches this bound as $x$ approaches $ \infty $ I don't know how to proceed after this step. Where can I find a contradiction in this case? AI: I am sure this has been answered before but am unable to find a solution. The key result is that a non-constant convex function on $\mathbb{R}^n$ is unbounded above. Since a strictly convex function cannot be constant, there are $x,y$ such that $f(x)<f(y)$. Let $\phi(t) = f(x + t(y-x))$ and suppose $s >1$. Then we can write $1={1 \over s} s + (1 - {1 \over s}) 0$ and so $\phi(1) \le {1 \over s} \phi(s) + (1 - {1 \over s}) \phi(0)$, or $\phi(s) \ge \phi(0) + s(\phi(1) - \phi(0)) = f(x) + s (f(y)-f(x))$, and so $f$ is unbounded above.
H: If $G$ is a finite group of automorphisms of $L/k$, then $\hom_k(L^G,k_s)=\hom_k(L,k_s)/G$. Let $L/k$ be a finite separable extension and let $G$ be a finite group of automorphisms. We also fix a separable closure $k_s$ of $k$. I want to prove that $\hom_k(L^G,k_s)=\hom_k(L,k_s)/G$, where two elements $\varphi,\psi\in\hom_k(L,k_s)$ are identified if $\varphi=\psi\circ g$ for some $g\in G$. In other words, I want to prove that $\varphi$ and $\psi$ agree on $L^G$ if and only if $\varphi=\psi\circ g$ for some $g\in G$. This seems closely related to Galois theory, but the only result I know that could be useful is that $L/L^G$ is Galois and its Galois group is $G$, and I don't see how to use it. AI: Hint. For "only if": if $\varphi$ and $\psi$ agree on $L^G$ then, considering them as $k$-automorphisms of $L$, $\varphi\circ\psi^{-1}$ is in the Galois group of $L|L^G$. For "if": check directly that $\psi$ and $\psi \circ g$ agree on $L^G$.
H: Is there a word for a contradictory set of linear system of equations? So in basic math, we tend to learn that we can solve for the variables if there are n equations and n unknowns. But let's say the equations are contradictory, for example, $x + y = 1$ and $x + y = 5$. Is there an official math word for a contradictory set of equations so that you can't solve it? Thanks AI: They are called inconsistent equations, as the set of variables that would solve the first equation $x+y=1$ would not solve the second $x+y=5$.
H: Limit of sequence of Lebesgue integrable functions is not Lebesgue integrable Construct a sequence of functions $\{f_n(x)\}_{n=1}^{\infty}\subset L([0,1])$ and measurable function $f(x)$ such that $f(x)=\lim \limits_{n\to \infty} f_n(x)$ for all $x\in (0,1)$ and $$\left|\int_{(0,1)}f_n(x)d\mu\right|\leq 1$$ for all $n$, but $f(x)\notin L([0,1])$. By $L([0,1])$ I mean Lebesgue integrable functions on $[0,1]$. I have spent some time trying to construct an example but I failed. Would be very thankful for help! AI: Let $b_n := e^{-n}$, $n\in \mathbb{N} = \{0, 1, 2, \ldots\}$, and define $$ f_n(x) := \begin{cases} (-1)^j/x, & \text{if}\ b_{j+1} < x \leq b_j,\ j\leq n,\\ 0, & \text{otherwise in}\ [0,1]. \end{cases} $$ Since $\int_{b_{j+1}}^{b_j} f_n(x)\, dx = (-1)^j$, the partial sums $\sum_{j=0}^{n}(-1)^j$ alternate between $1$ and $0$, so $|\int_0^1 f_n| \le 1$. On the other hand, if $f$ is the pointwise limit of the sequence $(f_n)$, then $|f(x)| = 1/x$ for every $x\in (0,1]$, which is not integrable near $0$.
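A numeric sketch of the key computation (plain Python, midpoint Riemann sums; the step count is arbitrary): each band $[e^{-(j+1)},e^{-j}]$ contributes exactly $\int dx/x = 1$, with alternating signs in $f_n$.

```python
# Each band [e^{-(j+1)}, e^{-j}] contributes exactly 1 to the integral of 1/x,
# since log(e^{-j}) - log(e^{-(j+1)}) = 1; check numerically by midpoint rule.
import math

def band_integral(j, steps=100_000):
    a, b = math.exp(-(j + 1)), math.exp(-j)
    h = (b - a) / steps
    return h * sum(1.0 / (a + (i + 0.5) * h) for i in range(steps))

print([round(band_integral(j), 6) for j in range(4)])  # [1.0, 1.0, 1.0, 1.0]
```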
H: If $P(A) = p$ and $P(B|A) = P(B^{c}|A^{c}) = 1 - p$, find the value of $P(A|B)$. I am trying to solve this question where I have to find P(A|B) given, $P(A) = p$ and, $P(B|A) = P(B’|A’) = 1-p$ Since, I don’t have P(B), direct Bayes theorem isn’t applicable but I have this hunch that the equality between the conditional probabilities can somehow be used to prove that events are independent and thus the answer comes out to p, but that’s just a hunch. (edit) P(B’) means probability of B not happening. Same for A AI: According to the given information, one has that \begin{align*} \textbf{P}(B|A) = \textbf{P}(B^{c}|A^{c}) = 1 - p & \Longleftrightarrow \frac{\textbf{P}(A\cap B)}{\textbf{P}(A)} = \frac{\textbf{P}(A^{c}\cap B^{c})}{\textbf{P}(A^{c})} = 1-p\\\\ & \Longleftrightarrow \frac{\textbf{P}(A\cap B)}{p} = \frac{1 - \textbf{P}(A\cup B)}{1 - p} = 1 - p \end{align*} Thus we conclude that \begin{align*} \begin{cases} \textbf{P}(A\cap B) = p(1-p)\\\\ \textbf{P}(A\cup B) = 2p - p^{2} \end{cases} & \Longrightarrow \textbf{P}(A) + \textbf{P}(B) - \textbf{P}(A\cap B) = p + \textbf{P}(B) - p(1-p) = 2p - p^{2}\\\\ & \Longrightarrow \textbf{P}(B) = 2p(1 - p) \Longrightarrow \textbf{P}(A|B) = \frac{\textbf{P}(A\cap B)}{\textbf{P}(B)} = \frac{p(1-p)}{2p(1-p)} = \frac{1}{2} \end{align*} and we are done. Hopefully this helps.
H: Proving that $0 \rightarrow \Bbb Z \rightarrow \Bbb Q \rightarrow \Bbb Q / \Bbb Z \rightarrow 0$ does not split. Can I prove that the $\Bbb Z$-module exact sequence $0 \rightarrow \Bbb Z \rightarrow \Bbb Q \rightarrow \Bbb Q / \Bbb Z \rightarrow 0$ is non-split exact by proving that $\Bbb{Z} \bigoplus \Bbb{Q} / \Bbb{Z}$ is not isomorphic to $\Bbb{Q}$? The way I prove this is by noticing that in $\Bbb{Z} \bigoplus \Bbb{Q} / \Bbb{Z}$, there exist elements of order $a$ in which $a$ is the smallest natural number that satisfy $ar \in \Bbb Z$, where $r \in \Bbb Q$ (e.g. order of $(0, \frac{1}{2} + \Bbb Z)$ is $2$), whereas the order of elements in $\Bbb Q$ is either $1$ or $\infty$, showing that they are not isomorphic to each other. Is this correct? AI: There are many equivalent criterion that determine when a sequence splits. I think the most-common definition is the existence of a splitting map — in this case a map $f\colon \Bbb Q/\Bbb Z\to\Bbb Q$ such that $g\circ f=\operatorname{Id}_{\Bbb Q/\Bbb Z}$, where $g\colon \Bbb Q\to\Bbb Q/\Bbb Z$ is the quotient map. In this case, for any $x\in \Bbb Q/\Bbb Z$, we know that $nx=0$ for some integer $n$. So if $f$ is a $\Bbb Z$-module map, then $nf(x)=f(nx)=0$. In $\Bbb Q$, this means $f(x)=0$, so $f$ must be the zero map. As you already noted, though, the existence of a splitting map implies that $\Bbb Z\oplus \Bbb Q/\Bbb Z\cong\Bbb Q$, which is of course absurd as $\Bbb Q$ is torsion-free. You can see how both proofs capture the same idea.
H: does a sum related to the prime factorization of the whole numbers converge? to define this sum you need a function $f(x)$: $f(x)=$ the sum of the exponents in the prime factorization of $x$. $12=3^1\times2^2$ so $f(12)=1+2$; $16=2^4$ so $f(16)=4$. my question: does this sum converge and if so what does it converge to? $$\frac{f(2)}1+\frac{f(3)}2+\frac{f(4)}4+\frac{f(5)}8+\frac{f(6)}{16}+\cdots+\frac{f(x)}{2^{x-2}}+\cdots$$ AI: Well, it converges. You can estimate every $f(n)$ by $n$ itself: since every prime factor is at least $2$, we have $2^{f(n)}\le n$, so $f(n)\le\log_2 n\le n$. Comparison with $\sum_n n/2^{n-2}$ then clearly gives convergence. As to what it converges to, I don't think it is possible to compute in closed form, and if it is I would be extremely surprised
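A short sketch of the partial sums (sympy assumed available for the factorizations; the cutoff 60 is arbitrary, and the tail beyond it is negligibly small): numerically the series settles near 2.36.

```python
# Partial sums of sum_{x>=2} f(x)/2^(x-2), where f(x) is the sum of the
# exponents in the prime factorization of x.
from sympy import factorint

total = 0.0
for x in range(2, 60):
    f = sum(factorint(x).values())
    total += f / 2 ** (x - 2)
print(total)  # ~2.36
```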
H: If $X$ is not full rank, is $X(X^TX + \lambda I_p)^{-1}X^T$ invertible if $n < p$? Suppose $X$ is a $n \times p$ matrix with $\text{rank}(X) = r < p$. Is $X(X^TX + \lambda I_p)^{-1}X^T$ invertible when $n < p$? We know that $$\operatorname{rank}(X(X^TX + \lambda I_p)^{-1}X^T) \leq \min(\operatorname{rank}(X), \operatorname{rank}((X^TX + \lambda I_p)^{-1}X^T)$$ Since $\operatorname{rank}(X) < p$ and $(X^TX + \lambda I_p)^{-1}X^T$ is a $p \times n$ matrix, then $$\operatorname{rank}((X^TX + \lambda I_p)^{-1}X^T) \leq \min(n, p) = n$$ therefore, $$\operatorname{rank}(X(X^TX + \lambda I_p)^{-1}X^T) \leq \min(\operatorname{rank}(X), \operatorname{rank}((X^TX + \lambda I_p)^{-1}X^T)) < n < p$$ Therefore, $X(X^TX + \lambda I_p)^{-1}X^T$ is rank deficient, and thus is not invertible. Is this correct? AI: $\min(\operatorname{rank}(X), \operatorname{rank}((X^TX + \lambda I_p)^{-1}X^T)) < n < p$ This is not correct. It is possible that the ranks of both $X$ and $(X^TX + \lambda I_p)^{-1}X^T$ are equal to $n$. Also, the inequality $n<p$ will not force $A=X(X^TX + \lambda I_p)^{-1}X^T$ to be rank deficient, since $A$ is $n\times n$. In general, $A$ can be singular, even if $X$ has full row rank. E.g. $$ \pmatrix{I_n&0}\left[\pmatrix{I_n\\ 0}\pmatrix{I_n&0}-I_p\right]\pmatrix{I_n\\ 0}=0. $$ However, if $X$ is a real matrix of full row rank and $\lambda$ is positive, $A$ must be nonsingular, because $A=YY^T$, where $Y=X(X^TX+\lambda I_p)^{-1/2}$ has full row rank. If $Ax=0$, then $\|Y^Tx\|^2=x^TYY^Tx=x^TAx=0$ and hence $Y^Tx=0$. Since $Y^T$ has full column rank, $x$ must be zero.
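A short numpy sketch of the positive case in the answer (numpy assumed available; sizes, $\lambda$ and seed are arbitrary): with full row rank and $\lambda>0$, the matrix $A=X(X^TX+\lambda I_p)^{-1}X^T$ comes out positive definite, hence invertible.

```python
# A = X (X^T X + lam I)^{-1} X^T is positive definite when X has full row rank.
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 4, 7, 0.5
X = rng.standard_normal((n, p))   # full row rank with probability 1
A = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
eigvals = np.linalg.eigvalsh(A)   # A is symmetric
print(eigvals)                    # all strictly positive
assert np.all(eigvals > 0)
```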
H: Solve $x^5\equiv 4\pmod 7$ We know about calculating $x^2\equiv 2\pmod 7$ using quadratic residue properties in order to find out whether a solution exists or not. I wonder is there any way to determine that $x^n\equiv k\pmod v$, where $v\ge 2$, $k\in\Bbb Z$, and $n\ge 3$? As I asked in title: Solve $x^5\equiv 4\pmod 7$ AI: Since $x^6\equiv1\bmod 7$, invert the exponent mod $6$: $\;5\times5\equiv1\bmod 6$, so $x^5\equiv4\bmod7\implies x\equiv x^{5\times5}\equiv4^5=1024\equiv2\bmod7.$
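The same computation in a few lines of Python (the three-argument `pow` with a negative exponent needs Python 3.8+):

```python
# Solve x^5 = 4 (mod 7) by inverting the exponent modulo phi(7) = 6.
e_inv = pow(5, -1, 6)      # 5 * 5 = 25 = 1 (mod 6), so e_inv = 5
x = pow(4, e_inv, 7)       # 4^5 = 1024 = 2 (mod 7)
print(x, pow(x, 5, 7))     # 2 4  -- indeed 2^5 = 32 = 4 (mod 7)
```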
H: Just a basic question for understanding the definition of topological spaces slowly I am moving into $\mathbb{R}^n$ with my analysis studies. And in starting so, the author introduced some basic topology terms, for better understanding. After writing down the three criteria for inducing a topology onto a set, he also gave the two trivial topologies on any set $X$: $\mathcal{T}_0:=\lbrace \emptyset,X\rbrace$ and $\mathcal{T}_1:=\mathcal{P}(X)$ And telling: any set element of a topology is regarded open in terms of the topology. So here are my two questions: Firstly: Is the term open in regards to a topology not instantly comparable to the classification of open sets, but rather a term given to any element of a topology, while the whole definition helps to define open sets? An example which went through my head when trying to wrap my head around topological spaces: the empty set, $\mathbb{R}$ and all compact intervals on $\mathbb{R}$ build a topology (I hope this even works ;D); then any element in the topology is called open regarding the topology, but actually all are closed sets. Second question: Is $\mathcal{P}(\mathbb{R})$ even possibly a topological space? It contains compact intervals and open intervals. Maybe a good answer to the first question will also clear up my second question :) I hope someone has the time to help me out AI: The point is that $\Bbb R$ and $\Bbb R^n$ are specific examples of topological spaces. They are quite familiar to us, and we already know what it means for a set to be open or closed. The purpose of defining a topology abstractly like this is to generalize these familiar spaces. By themselves, the axioms of a topology are quite minimal, as the same set can have vastly different topologies. Let's just take $\Bbb R$ as an example of an underlying set. We can define the trivial topology which contains only $\emptyset$ and $\Bbb R$. In this topology, there are only two open sets. Sets like $(0,1)$ are no longer open, even though we might expect them to be. On the other end, we can construct the discrete topology, in which every subset of $\Bbb R$ is open. Now sets like $[0,1]$ are open. The complement $\Bbb R\setminus[0,1]$ is also open, so actually $[0,1]$ is also closed. In this topology, every possible set is both open and closed. The familiar topology on $\Bbb R$ is referred to as just "the usual topology," "the Euclidean topology," or "the metric topology." We begin by first constructing a metric. In this case, it's just the absolute value function. Then we construct a topology where a set $U$ is open if and only if, for every $x\in U$, there exists $\varepsilon>0$ such that $(x-\varepsilon,x+\varepsilon)\subseteq U$. It then requires a bit of work to show that this defines a topology. A more common construction is the lazier approach: we declare that any open intervals $(a,b)$ are open, and then include whatever other open sets we need to in order to satisfy the axioms. More generally, the metric function on $\Bbb R^n$ gives the distance between $x=(x_1,\ldots,x_n)$ and $y=(y_1,\ldots, y_n)$ as $|\!|x-y|\!|=\sqrt{(x_1-y_1)^2+\cdots+(x_n-y_n)^2}$. We construct the metric topology by declaring that open balls $B_\varepsilon(x)=\{y\in\Bbb R^n:|\!|y-x|\!|<\varepsilon\}$ are open, and then letting this generate a topology as before. At the end of the day, we almost always use the usual topology on $\Bbb R^n$, but the point is that every time you have heard a set described as "open" before now, this really meant "open in the usual topology."
H: When the function equation $f(x)f(y)=axy+b$ is solvable Assume $a,b$ are constants. The question is whether there is a continuous function $f$ defined on $\mathbb R$ or $\mathbb C$ so that $$ f(x)f(y)=axy+b $$ Of course, such a function $f$ exists if $b=0$ by taking $$f(x)=\sqrt{a}x\,.$$ Likewise if $a=0$ then $f$ exists by taking $$f(x)=\sqrt{b}\,.$$ But I don't know whether the condition $a=0$ or $b=0$ is also necessary for the solvability of this function equation. AI: Let's assume that $b \neq 0$, then clearly $f$ cannot be identically $0$. Thus, there exists $c$ such that $f(c)\neq 0$. Let $y=c$, then we get $f(x)f(c)=acx+b$, or after division $f(x)=\frac{ac}{f(c)}x+\frac{b}{f(c)}$, and so $f(x)$ is linear function. So, let $f(x)=px+q$ and plug it back to the original equation. We get $$(px+q)(py+q)=axy+b$$ for all $x,y$, so after expanding and comparing the coefficients, we get $p^2=a,pq=0,q^2=b$. By original assumption, $q \neq 0$, and so $p=0$, $a=0$, and $q=\sqrt{b}$. So the only solution then is $f(x)=\sqrt{b}$, which works only when $a=0$.
H: Show that $K$ has measure zero This is a problem from my measure theory book: Let $K$ be a compact subset of $\mathbb{R}^d$ such that the intersection $H_r(K)\cap H_{r'}(K)$ of two homothetic images ($H_r(x)=rx$ for $x\in \mathbb{R}^d$ and $r\in\mathbb{R}$) of $K$ has Lebesgue-Borel measure zero whenever $0<r<r'<1$. Prove that $\lambda^d(K)=0$. Hint: $H_r(K) \subset \tilde{K}=\{tx:0\leq t\leq 1, x\in K \}$ which is a compact set. Hence $\lambda^d(\tilde{K})<\infty$. I know that $\lambda^d(H_r(K))=|r|^d \lambda^d(K)$ approaches $\lambda^d(K)$ as $r$ approaches $1$, but not sure where I can go from there. Any help is greatly appreciated. AI: Let's assume that $\lambda^d(K) > 0$. We consider the sets $H_{r_n}(K)$, where $r_n = \frac{1}{2}+\frac{1}{n+2}$. We know that the $H_{r_n}(K)$ are pairwise "almost disjoint", and all included in $\tilde{K}$. Can you conclude from there, using what you have already noticed?
H: Prove that $\bigcup\mathcal F$ and $\bigcup\mathcal G$ are disjoint iff for all $A\in\mathcal F$ and $B\in\mathcal G$, $A$ and $B$ are disjoint. Not a duplicate of Suppose $F$ and $G$ are families of sets. Prove $\bigcup\mathcal{F}$ and $\bigcup\mathcal{G}$ are disjoint iff for all $A \in \mathcal{F}$ and $B \in \mathcal{G}$, A and B are disjoint. This is exercise $3.4.19$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$: Suppose $\mathcal F$ and $\mathcal G$ are families of sets. Prove that $\bigcup\mathcal F$ and $\bigcup\mathcal G$ are disjoint iff for all $A\in\mathcal F$ and $B\in\mathcal G$, $A$ and $B$ are disjoint. Here is my proof: $(\rightarrow)$ Suppose $(\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset$. Let $A$ be an arbitrary element of $\mathcal F$ and $B$ be an arbitrary element of $\mathcal G$. Let $x$ be an arbitrary element of $A$. Since $A\in\mathcal F$ and $x\in A$, $x\in\bigcup\mathcal F$. From $(\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset$ and $x\in\bigcup\mathcal F$, $x\notin \bigcup\mathcal G$. From $x\notin \bigcup\mathcal G$ and $B\in \mathcal G$, $x\notin B$. Thus if $x\in A$ then $x\notin B$. Since $x$ is arbitrary, $\forall x(x\in A\rightarrow\ x\notin B)$ and so $A\cap B=\emptyset$. Since $A$ and $B$ are arbitrary, $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$. Therefore if $(\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset$ then $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$. $(\leftarrow)$ Suppose $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$. Let $x$ be an arbitrary element of $\bigcup\mathcal F$. This means $A\in\mathcal F$ and $x\in A$. Let $B$ be an arbitrary element of $\mathcal G$. From $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$ and $A\in\mathcal F$, $\forall B\in\mathcal G(A\cap B=\emptyset)$. From $\forall B\in\mathcal G(A\cap B=\emptyset)$ and $B\in\mathcal G$, $A\cap B=\emptyset$. From $A\cap B=\emptyset$ and $x\in A$, $x\notin B$. Thus if $B\in \mathcal G$ then $x\notin B$. Since $B$ is arbitrary, $\forall B(B\in\mathcal G\rightarrow x\notin B)$ and so $x\notin \bigcup\mathcal G$. Thus if $x\in\bigcup\mathcal F$ then $x\notin\bigcup\mathcal G$. Since $x$ is arbitrary, $\forall x(x\in\bigcup\mathcal F\rightarrow x\notin\bigcup\mathcal G)$ and so $(\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset$. Therefore if $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$ then $(\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset$. Since $\Bigr((\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset\Bigr)$ $\rightarrow$ $\Bigr(\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)\Bigr)$ and $\Bigr(\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)\Bigr)$ $\rightarrow$ $\Bigr((\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset\Bigr)$, $\Bigr((\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset\Bigr)$ iff $\Bigr(\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)\Bigr)$. $Q.E.D.$ Is my proof valid$?$ Thanks for your attention. Edit: $(\leftarrow)$ Suppose $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$. Let $x$ be an arbitrary element of $\bigcup\mathcal F$. So we can choose some $A_0$ such that $A_0\in\mathcal F$ and $x\in A_0$. Let $B$ be an arbitrary element of $\mathcal G$. From $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$ and $A_0\in\mathcal F$, $\forall B\in\mathcal G(A_0\cap B=\emptyset)$. 
From $\forall B\in\mathcal G(A_0\cap B=\emptyset)$ and $B\in\mathcal G$, $A_0\cap B=\emptyset$. From $A_0\cap B=\emptyset$ and $x\in A_0$, $x\notin B$. Thus if $B\in \mathcal G$ then $x\notin B$. Since $B$ is arbitrary, $\forall B(B\in\mathcal G\rightarrow x\notin B)$ and so $x\notin \bigcup\mathcal G$. Thus if $x\in\bigcup\mathcal F$ then $x\notin\bigcup\mathcal G$. Since $x$ is arbitrary, $\forall x(x\in\bigcup\mathcal F\rightarrow x\notin\bigcup\mathcal G)$ and so $(\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset$. Therefore if $\forall A\in\mathcal F\forall B\in\mathcal G(A\cap B=\emptyset)$ then $(\bigcup\mathcal F)\cap(\bigcup\mathcal G)=\emptyset$. AI: It’s just a bit more detailed than is probably necessary even at this stage, but with one small exception it’s correct. I would make one change: in the second half, after you let $x$ be an arbitrary element of $\bigcup\mathscr{F}$, you say that ‘[t]his means $A\in\mathscr{F}$ and $x\in A$’. That’s not quite right: what it actually means (and what you should say) is that there is some $A\in\mathscr{F}$ such that $x\in A$. (In fact, what you wrote couldn’t be right, simply because it makes an assertion about something that has not at that point been introduced, namely, the set $A$.)
H: Integral wrt floor(x) What is the definite integral of $f(x)=x^2+1$ with respect to the differential of $\lfloor x\rfloor$, i.e. $d\lfloor x\rfloor$, from $0$ to $2$? I tried to multiply and divide by $dx$, but then $d\lfloor x\rfloor/dx = 0$. How do I approach it? AI: Integrate by parts (why not?): \begin{align} \int_{0}^{2}\left(x^{2} + 1\right)\,\mathrm{d}\lfloor x\rfloor &= \underbrace{\left.\lfloor x\rfloor\left(x^{2} + 1\right)\right\vert_{\,0}^{\,2}}_{=\,10} - \int_{0}^{2}\lfloor x\rfloor\left(2x\right)\mathrm{d}x \\ &= 10 - \underbrace{2\int_{0}^{1}\lfloor x\rfloor\,\mathrm{d}x}_{=\,0} - \underbrace{2\int_{1}^{2}\lfloor x\rfloor x\,\mathrm{d}x}_{=\,3} = 7 \end{align}
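Equivalently, $d\lfloor x\rfloor$ places a unit point mass at each integer, so the Riemann-Stieltjes integral is just the sum of $f$ over the jump points in $(0,2]$; a two-line check in Python:

```python
# floor(x) jumps by 1 at x = 1 and x = 2 inside (0, 2].
f = lambda x: x**2 + 1
print(f(1) + f(2))  # 2 + 5 = 7, matching the integration by parts
```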
H: Let $S=\{a,b\}$. Which binary operation $*$ on $\wp(S)$ makes $(\wp(S),*)$ a cyclic group? Let $S=\{a,b\}$ be a set, and $\wp(S)$ the power set of $S$. It is well known that $$(\wp(S),\triangle,\emptyset)\cong \mathbb{Z}_2\times \mathbb{Z}_2\,,$$ where $\triangle$ is the symmetric difference of two sets. Now, there are $24$ bijections $f\colon \mathbb{Z}_4 \to \wp(S)$, and hence as many operations "$*$" in $\wp(S)$ such that $$(\wp(S),*,f(0))\cong \mathbb{Z}_4.$$ I tried by trial and error several times, but I couldn't succeed in finding any of such operations as a symmetric (being the group abelian), closed formula in terms of the basic set operations $\cup, \cap,\setminus$, just like the symmetric difference formula. AI: Let $(B,+,\cdot)$ be the Boolean algebra with two generators $u$ and $v$. The multiplication in $B$ is given by $u\cdot u=u$, $v\cdot v=v$, and $u\cdot v=v\cdot u=0$. Therefore, $e:=u+v$ is the multiplicative identity of $B$. We identify $0$, $u$, $v$, and $e$ with $\emptyset$, $\{a\}$, $\{b\}$, and $\{a,b\}$, respectively. Then, we can associate any set operation on $\mathcal{P}(S)$ with a polynomial operator in $B$. This is because the symmetric difference operator $\triangle$ is associated to the polynomial $d(x,y):=x+y$, the union operator $\cup$ is associated to the polynomial $f(x,y):=x+y+x\cdot y$, the intersection operator $\cap$ is associated to the polynomial $g(x,y):=x\cdot y$, the set difference operator $\setminus$ is associated to $h(x,y):=x+x\cdot y$, and the complement operator is associated to the polynomial $k(x):=e+x$. Suppose that there exists a polynomial $p(x,y)\in B[x,y]$ such that the corresponding binary operation on $\mathcal{P}(S)$ equips $\mathcal{P}(S)$ with the structure of $G:=\mathbb{Z}/4\mathbb{Z}$. Let $z\in B$ be the element that acts as the identity of $G$. Since $G$ is abelian, we get $p(x,y)=p(y,x)$, whence $$p(x,y)=\alpha+\beta\cdot x+\beta\cdot y+\gamma\cdot x\cdot y$$ for some $\alpha,\beta,\gamma\in B$. Now, $$0=p(0,z)=\alpha+\beta\cdot z\,.$$ Therefore, $$\beta\cdot z=\alpha\,.$$ We also have $$z=p(z,z)=\alpha+\beta\cdot z+\beta\cdot z+\gamma\cdot z\cdot z=\alpha+\gamma\cdot z\,.$$ Hence, $$(e+\gamma)\cdot z=z+\gamma\cdot z=\alpha\,.$$ Furthermore, $$\begin{align}e=p(e,z)&=\alpha+\beta\cdot e+\beta\cdot z+\gamma\cdot e\cdot z \\&=\alpha+\beta+\alpha+(\alpha+z)=\alpha+\beta+z\,.\end{align}$$ Consequently, $$z=e+\alpha+\beta\,.$$ From $\beta\cdot z=\alpha$, we conclude that $\alpha\cdot\beta=\alpha$, or $$\alpha\cdot(e+\beta)=0\,.$$ Case I: $\beta=0$. Then, $\alpha=\beta\cdot z=0$. Therefore, $z=e+\alpha+\beta=e$. As $(e+\gamma)\cdot z=\alpha$, we conclude that $\gamma=e$. Hence, $p(x,y)=x\cdot y$, which clearly does not work. (Alternatively, note that $p(0,0)=0$, which contradicts the result that $z=e$ is the identity of $G$.) Case II: $\beta=u$. Then, $\alpha\cdot v=\alpha\cdot(e+\beta)=0$. Hence, either $\alpha=0$ or $\alpha=u$. If $\alpha=0$, then from $z=e+\alpha+\beta$, we get $z=v$. From $(e+\gamma)\cdot z=\alpha$, we conclude that $\gamma=0$ or $\gamma=v$. In the case $\gamma=0$, we get $p(x,y)=u\cdot(x+y)$, which means that the image of $p(x,y)$ can only be $0$ or $u$, leading to a contradiction. In the case $\gamma=v$, we get $$p(x,y)=u\cdot(x+y)+v\cdot(x\cdot y)\,,$$ whence $$p(u,0)=u\cdot(u+0)+v\cdot(u\cdot 0)=u\,,$$ but this contradicts the conclusion that $z=v$ is associated to the identity of $G$. If $\alpha=u$, then $z=e+\alpha+\beta=e$. From $(e+\gamma)\cdot z=\alpha$, we conclude that $\gamma=v$.
Ergo, $$p(x,y)=u+u\cdot(x+y)+v\cdot(x\cdot y)\,.$$ Thus, $$p(u,u)=u+u\cdot(u+u)+v\cdot(u\cdot u)=u\,.$$ This contradicts the result that $z=e$ is associated to the identity of $G$. Case III: $\beta=v$. The argument is the same as Case II. Case IV: $\beta=e$. Then, $z=e+\alpha+\beta=\alpha$, and from $(e+\gamma)\cdot z=\alpha$, we get $\gamma\cdot\alpha=0$. If $\alpha=0$, then $z=0$ and $$p(x,y)=(x+y)+\gamma\cdot(x\cdot y)\,.$$ Therefore, $p(\gamma,\gamma)=\gamma$ implies that $\gamma$ is associated to the identity of $G$, making $\gamma=z=0$. Thus, $p(x,y)=x+y$, which clearly does not work. (Alternatively, note that $p(0,0)=0$, which contradicts the result that $z=e$ is the identity of $G$.) If $\alpha=u$, then $z=u$ and $$p(x,y)=u+(x+y)+\gamma\cdot(x\cdot y)\,.$$ Note that $\gamma\cdot \alpha=0$ implies $\gamma=0$ or $\gamma=v$. If $\gamma=0$, then $p(0,0)=u=p(v,v)$, which contradicts the fact that $G$ has only one element of order $2$. If $\gamma=v$, then $p(e,v)=v$, which contradicts the result that $u$ is associated to the identity of $G$. If $\alpha=v$, then we have a similar contradiction to the previous subcase. If $\alpha=e$, then $z=e$ and $\gamma=0$, making $$p(x,y)=e+(x+y)\,.$$ Now, $p(x,x)=e$ for all $x\in B$ contradicts the fact that $G$ has only one element of order $2$. Therefore, such a polynomial $p(x,y)\in B[x,y]$ does not exist. Hence, there is no binary operator $*$ on $\mathcal{P}(S)$ given by the usual set operations that makes $\mathcal{P}(S)$ isomorphic to the group $\mathbb{Z}/4\mathbb{Z}$. P.S. See a much simpler argument to a more generalized setting here.
H: Is a Gaussian process at random times also a Gaussian process? I have a question, and I am not sure whether my answer is correct: I have a Gaussian process $X_t$ (for $t\geq0$) and a random function $s(t):[0,\infty)\rightarrow[0,\infty)$. Is $X_{s(t)}$ (for $t\geq0$) also a Gaussian process? My answer is that it is, because for every $\omega\in\Omega$, $s(t)$ will be a deterministic function and in that case, the process is Gaussian, and therefore it will be Gaussian in any case. The problem is that this answer feels a little suspicious. AI: Let $X_t$ be standard Brownian motion, $\tau = \inf\{t : X_t \ge 1\}$ the hitting time of 1, and $s(t) = t \wedge \tau$. Then $X_{s(t)} \le 1$ almost surely so it is certainly not Gaussian. The flaw in your argument becomes clear if you try to write it more precisely. $X_t = X_t(\omega)$ and $s(t) = s(t,\omega)$ both depend on $\omega$. You can't "plug in" an $\omega$ to $s(t)$ without also plugging it into $X_t$. And in that case you have $X_{s(t,\omega)}(\omega)$ which is not a process at all, it's a number, and it makes no sense to speak of it being Gaussian.
H: Floor function equation $⌊x + 1/2⌋ + ⌊x⌋ = \frac12 x^6$ So in this floor equation $⌊x + 1/2⌋ + ⌊x⌋ = \frac12 x^6$, I've tried putting $x = n + e$, where $0 \le e < 1$, but I didn't get anything useful. What should be an approach in these situations? AI: Suppose $x = n + e$ and consider two cases $e < 0.5$ and $e \geq 0.5$. $0\leq e < 0.5$ $\begin{alignat*}{2} &2\cdot⌊n + e + 0.5⌋ + 2\cdot⌊n + e⌋ = (n + e)^6\\ &2\cdot n + 2\cdot n = (n + e)^6\\ &4\cdot n = (n + e)^6 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (1)\\ &2\cdot \sqrt{n} = (n + e)^3\\ &(\sqrt{n} + 1)^2 - (n + 1) = (n + e)^3\\ &-(n + 1) = (n + e)^3 - (\sqrt{n} + 1)^2 \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ (2) \end{alignat*}$ Note that $(1)$ forces $n\geq 0$, since $(n+e)^6\geq 0$; this justifies taking the square root, and the next step uses the identity $2\sqrt{n}=(\sqrt{n}+1)^2-(n+1)$. Notice that the LHS of $(2)$ is negative, while the RHS is positive for $n \in \mathbb{Z}_{>1}$. Hence, we need to consider only $n \in \{0,1 \}$. Considering $n = 0$ and using $(2)$ yields $x = 0$. While $n = 1$ yields $e = 4^{1/6} - 1 < 0.5$, hence $x = 1 + 4^{1/6} - 1 = 4^{1/6}$. $0.5\leq e < 1$ The second case can be worked out similarly (here $⌊x+0.5⌋=n+1$, giving $4n+2=(n+e)^6$), but there are no solutions.
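As a sanity check, each floor branch can be solved directly; a minimal Python sketch (restricting $n$ to a small range is safe, since $(n+e)^6$ outgrows $4n+2$ for $n\geq 2$ and sixth powers rule out negative $n$):

```python
# On [n, n+0.5) the equation reads (n+e)**6 == 4*n; on [n+0.5, n+1) it reads
# (n+e)**6 == 4*n + 2.  Solve each branch and keep roots in the right interval.
solutions = []
for n in range(-5, 6):
    for rhs, lo, hi in ((4 * n, n, n + 0.5), (4 * n + 2, n + 0.5, n + 1)):
        if rhs < 0:
            continue  # a sixth power is nonnegative
        r = rhs ** (1 / 6)
        for x in {r, -r}:
            if lo <= x < hi:
                solutions.append(x)
print(solutions)  # [0.0, 1.2599...], i.e. x = 0 and x = 4**(1/6)
```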
H: Identity involving product of the $\zeta$ function for different values I would like to prove the identity $$\sum_{\substack{b,d>0 \\ (b,d)=1}}\frac{1}{b^n}\frac{1}{d^m}=\frac{\zeta(n)\zeta(m)}{\zeta(m+n)},$$ where $\zeta$ is the Riemann zeta function and $n,m\ge 2$. Any help would be welcome. AI: Every pair $(r,s)$ of positive integers has the form $(tb,td)$ where $t=\gcd(r,s)$ and $\gcd(b,d)=1$. Therefore $$\zeta(m)\zeta(n)=\sum_{r,s>0}\frac1{r^n}\frac1{s^m} =\sum_{b,d>0\atop\gcd(b,d)=1}\sum_{t=1}^\infty\frac1{(tb)^n}\frac1{(td)^m} =\zeta(m+n)\sum_{b,d>0\atop\gcd(b,d)=1}\frac1{b^n}\frac1{d^m}.$$
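For a numerical sanity check (a minimal sketch; the double sum is truncated at $N$, so the two values agree only up to an $O(1/N)$ tail):

```python
from math import gcd
from mpmath import zeta

n, m, N = 2, 3, 500
partial = sum(1 / (b**n * d**m)
              for b in range(1, N + 1)
              for d in range(1, N + 1) if gcd(b, d) == 1)
print(partial)                           # ~ 1.904 (slightly low: truncated tail)
print(zeta(n) * zeta(m) / zeta(n + m))   # ~ 1.9069
```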
H: How many positive divisors are there of the number $2019^{2019}$? How many positive divisors are there of the number $2019^{2019}$ ? Since $2019$ has $4$ positive divisors $1,~3,~673,~2019$, the positive divisors of $2019^{2019}$ are $1, \\ 3,~3^2,~3^3, \cdots, 3^{2019}, \\ 673,~673^2, ~ 673^{3},\cdots, 673^{2019}, \\ 2019,~2019^2,~2019^3, \cdots, 2019^{2019}. $ So there are a total of $1+3 \times 2019=2058$ positive divisors of $2019^{2019}$ according to me. Am I right? Or is something wrong? AI: You left out $3^2\times 673$ et al. There are $2020\times2020$ numbers of the form $3^a673^b$ with $a$ and $b$ integers between $0$ and $2019$, and they are the factors of $2019^{2019}$.
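The $(k+1)^2$ pattern is easy to confirm on small exponents, e.g. with SymPy:

```python
from sympy import divisor_count

for k in (1, 2, 5, 10):
    # 2019**k = 3**k * 673**k has (k+1)*(k+1) divisors
    assert divisor_count(2019**k) == (k + 1) ** 2
print((2019 + 1) ** 2)  # 4080400, the divisor count of 2019**2019
```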
H: Prove for every number $c$ such that $c \geq f(y)$, there is $x \in (a,b)$ such that $f(x) = c$ - Proof Verification Let $f:(a,b) \to \mathbb{R}$ be continuous on $(a,b)$. Assume that $\lim_{x \to a^{+}}f(x) = \infty$ and $\lim_{x \to b^{-}}f(x) = \infty$. Let $y \in (a,b)$ such that $f$ attains its minimum. Prove that for every number $c$ such that $c \geq f(y)$, there is $x \in (a,b)$ such that $f(x) = c$ Attempt: We are given that $\lim_{x \to a^{+}}f(x) = \infty$ and $\lim_{x \to b^{-}}f(x) = \infty$. By definition this means that for every $M(\text{or}\ N) > 0$, there exists a $\delta_{M}(\text{or}\ \delta_{N}) > 0$ such that if $0< x-a < \delta_{M} \ (0 < b-x < \delta_{N})$ that $f(x) > M(\text{or}\ N)$. We also know that $f(y)$ is the minimum. This would mean we can let $M = N = f(y)$. As such we know that the $\delta$ we choose will be satisfactory. Let $\delta = b-a$. Therefore we have $$0 < x-a < b-a \\\Rightarrow a < x < b$$ The same manipulation can be performed for $0 < b - x < b-a$. In both cases we have illustrated an $x \in (a,b)$. Now by continuity this means $f(x) = c$. Is this the right approach? Comment: When I first read the claim the conditions are very similar to those to be able to apply the Intermediate Value Theorem, except we have an open interval $(a,b)$ instead of the required closed interval. I was trying to brainstorm a way to make the interval "closed" so then I could apply IVT, but didn't come up with anything concrete. Perhaps there is a meaning behind divergence I may be missing. AI: Hint: the case $c=f(y)$ is witnessed by $x=y$, so assume $c\in \Bbb R$ with $$c>f(y)\,.$$ Define $ g $ on $ (a,b) $ by $$g(x)=f(x)-c\,.$$ First observe that $$g(y)<0\,.$$ On the other hand, there exists $ u \in (a,b) $ satisfying $$f(u)>c$$ because $\lim_{a^+}f=+\infty$ (with $M=c$), thus $$g(u)>0\,.$$ Now, apply the IVT on the interval with endpoints $u$ and $y$.
H: Common solution for $f(x) = f'(x) = 0$ I encountered the following problem in Real Analysis: Let $f:\mathbb{R}\to\mathbb{R}$ be differentiable and assume there is no $x$ in $\mathbb{R}$ such that $f(x)=f'(x)=0$. Show that $S=\{x\mid 0\le x\le 1, f(x)=0\}$ is finite. I have solved this problem by observing that $S$ is compact and in case it is infinite, the limit point $\alpha$ of $S$ gives a contradiction by negating the hypothesis since $f(\alpha)=f'(\alpha)=0$. I want to apply the contra positive statement of the problem to the function $f(x)=x^2\sin(1/x)$. Since for the chosen $f(x)$, $S$ is infinite ($\frac {1} {n\pi}$ for every $n\in\mathbb{N}$ belongs to $S$), I should be able to conclude that there exists $x\in\mathbb{R}$ such that $f(x)=f'(x)=0$. But Wolfram says there is no such $x$. Am I wrong or is Wolfram wrong? Please help. AI: How do you define $f(0)$? If you do not define it, the domain of $f$ is not compact. If you define $f(0)=0$, then $f'(0)=f(0)=0$.
H: How to show ${_2F_1}\left(-\frac{19}{20}, \frac{11}{30}; -\frac{19}{30}; -2\right)$ is zero. I have seen hypergeometric functions over the years on Wolfram Alpha and am trying to learn more about them. I recently read this question and its associated answers, but understood very little. I wrote a program to arbitrarily search for interesting hypergeometric function values and stumbled on this one. How do I prove the following identity? $$ {_2F_1}\left(-\frac{19}{20}, \frac{11}{30}; -\frac{19}{30}; -2\right) = 0 $$ This would be equivalent to the following sum, where $(x)_n$ refers to the rising factorial or Pochhammer symbol. However, $\left|- 2\right|$ is not less than $1$, so this sum is not guaranteed to converge. $$ \sum_{n=0}^{\infty} \frac{\left(-\frac{19}{20}\right)_n \cdot \left(\frac{11}{30}\right)_n}{\left(-\frac{19}{30}\right)_n\cdot(1)_n} \cdot (-2)^n $$ I tried applying the first Pfaff transformation in order to get $-2$ back in the radius of convergence: $$ {_2 F_1}(a, b; c; z) \Longrightarrow (1-z)^{-b}\cdot{_2 F_1}\left(b,c-a;c;\frac{z}{z-1}\right) $$ $$ {_2F_1}\left(-\frac{19}{20}, \frac{11}{30}; -\frac{19}{30}; -2\right) \Longrightarrow 3^{-\frac{11}{30}} \cdot {_2F_1} \left( \frac{11}{30}, \frac{19}{60}; -\frac{19}{30}; \frac{2}{3} \right) $$ Because the sum converges to zero, I can ignore the leading $3^{-\frac{11}{30}}$. The trick worked and it gave me something I can sum numerically. Here is a table with the first 10 terms of the transformed hypergeometric series (without the leading constant).
0 1.0
1 -0.12222222222222222
2 -0.19993827160493827
3 -0.1782466849565615
4 -0.14016354150790022
5 -0.1046338569817722
6 -0.07596678344256204
7 -0.05421630175119416
8 -0.03824906471494405
9 -0.026761952441104003
And here are the first twenty partial sums
0 0.0
1 1.0
2 0.8777777777777778
3 0.6778395061728395
4 0.499592821216278
5 0.3594292797083778
6 0.2547954227266056
7 0.17882863928404355
8 0.1246123375328494
9 0.08636327281790535
10 0.05960132037680134
11 0.040992463681377815
12 0.028115033171369225
13 0.01923797177189061
14 0.01313772602200051
15 0.008956592581665427
16 0.006097117468555947
17 0.004145193056747411
18 0.00281493829147749
19 0.0019096402360084949
At this point, however, I'm stuck. The values in the transformed series are not particularly friendly and I don't see an obvious way to bound the partial sums. AI: Your function is of the form $ _2 F_1(a,b;b-1;z) $. We can cancel $(b)_n / (b-1)_n = \frac{n}{b-1}+1$, and then note that $ _2 F_1 (a,b;b;z) = (1-z)^{-a} $. After cancellation, the general case is $$ _2 F_1(a,b;b-1;z) =\frac{ (a-b+1)z+(b-1)}{(b - 1) (1-z)^{a+1}} $$ In particular, your function is $\displaystyle{\frac{z+2}{2(1-z)^{1/20}}}$, which evaluates to zero at $z=-2$.
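This checks out numerically; mpmath's hyp2f1 computes the analytic continuation outside the unit disk, so the original value and the closed form can be compared directly (a minimal sketch):

```python
import mpmath as mp

mp.mp.dps = 30
a, b = mp.mpf(-19) / 20, mp.mpf(11) / 30

def closed_form(z):
    # (z + 2) / (2 * (1 - z)**(a + 1)), where a + 1 = 1/20
    return (z + 2) / (2 * (1 - z) ** (a + 1))

print(mp.hyp2f1(a, b, b - 1, -2))  # ~ 0 to working precision
print(closed_form(mp.mpf(-2)))     # exactly 0
z0 = mp.mpf(1) / 3                 # spot-check inside the disk as well
print(mp.hyp2f1(a, b, b - 1, z0), closed_form(z0))
```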
H: Does the embedding of $M_{n}(\mathbb C)$ into $M_{2n}(\mathbb R)$ send $GL_n(\mathbb C)$ into $GL_{2n}(\mathbb R)$? We can embed $M_{n}(\mathbb C)$ into $M_{2n}(\mathbb R)$ via $$Z=X+iY \mapsto \pmatrix{X& Y \cr -Y& X}, $$ where $X,Y \in M_{n}(\mathbb R)$. It can be seen that this embedding is an injective ring homomorphism that sends the multiplicative identity of $M_n(\mathbb{C})$ to the multiplicative identity of $M_{2n}(\mathbb{R})$. Does this embedding also send $\text{GL}_{n}(\mathbb C)$ into $\text{GL}_{2n}(\mathbb R)$? Namely how do I conclude from $$\det (X+iY)\ne 0$$ that $$\det \pmatrix{X& Y \cr -Y& X}\ne 0\,?$$ AI: This is immediate from the fact that your embedding is a ring homomorphism. If $X+iY$ is invertible with inverse $U+iV$, then $\begin{pmatrix}U & V \\ -V & U\end{pmatrix}$ is inverse to $\begin{pmatrix}X & Y \\ -Y & X\end{pmatrix}$, and in particular $\begin{pmatrix}X & Y \\ -Y & X\end{pmatrix}$ is invertible.
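In fact the classical identity $\det \pmatrix{X& Y \cr -Y& X} = |\det(X+iY)|^2$ holds, which answers the determinant question even more directly; a quick NumPy sketch checking both this identity and the block form of the inverse on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
X, Y = rng.normal(size=(n, n)), rng.normal(size=(n, n))
Z = X + 1j * Y
M = np.block([[X, Y], [-Y, X]])

print(np.linalg.det(M), abs(np.linalg.det(Z)) ** 2)  # equal up to rounding

# The inverse embeds the same way: if Z^{-1} = U + iV, then M^{-1} is the
# corresponding block matrix built from U and V.
W = np.linalg.inv(Z)
U, V = W.real, W.imag
print(np.allclose(np.linalg.inv(M), np.block([[U, V], [-V, U]])))  # True
```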
H: Conditional Probability: Two defective monitors A small showroom has $50$ LED Monitors on a shelf that work perfectly and another $5$ that are defective in the same shelf. What is the probability of randomly selecting two defective monitors when purchasing a pair of LED monitors from that showroom? $W=$ working monitors, $NW=$ defective monitors $P(W)=\frac{50}{55}$, $P(NW)=\frac{5}{55}$ $P(2$ defective monitors$)= \frac{5}{55} \cdot \frac{4}{54} = \frac{2}{297}$ This is what I did. Is this correct or am I wrong somewhere? AI: Yes, your solution is correct. The probability of picking a wrong monitor the first time is obviously $\frac{5}{55}$, and the probability for the second time is $\frac{5-1}{55-1}=\frac{4}{54}$, so you multiply those and get the result. Ps: you can also solve it using $\dfrac{\binom{5}{2}}{\binom{55}{2}}$ as the other answer had shown. Anyway, the result is the same.
H: Can we equip the power set $P$ of any set $S$ with a binary operation such that $P$ becomes a group (with some restrictions)? This question is inspired by this one. Please read my answer there to get better context. Settings. Let $S$ be a (not necessarily finite) set, and $P$ the power set of $S$ (i.e., $P$ is the set of all subsets of $S$). A binary operator $*:P\times P\to P$ is said to be elementary if it can be given in terms of the standard set operations: the union operator $\cup$, the intersection operator $\cap$, the set difference operator $\setminus$, the symmetric difference operator $\triangle$, and the complement operator $(\_)^\complement$. Some Examples. This operator $\star$ is considered an elementary binary operator: $$A \star B:= \big((M\setminus A)\cup (B\cap N)\big)^{\complement}\triangle \Big(A\cup B^\complement\Big)\text{ for all }A,B\subseteq S\,,$$ where $M$ and $N$ are fixed subsets of $S$. On the other hand, if $|S|=2$, then this operator $\bullet$ is not an elementary binary operator: $$A\bullet B:=\left\{ \begin{array}{ll} S&\text{if }A\subseteq B\,,\\ \emptyset&\text{otherwise}\,, \end{array}\right.$$ where $A,B\subseteq S$ (a proof can be done by imitating this answer). Question. For which groups $G$ of order $2^{|S|}$ does there exist an elementary binary operator $*:P\times P\to P$ such that $(P,*)$ is a group isomorphic to $G$? If the case where $S$ is an infinite set is too troublesome, then an answer in the case where $S$ is a finite set is very welcome. Let $n:=|S|$. Write $Z_k$ for the cyclic group of order $k$. Trivial Answer. When $G\cong Z_2^n$, then the binary operator $\triangle$ does the work. My conjecture is that there are no other groups. Known Result. When $|S|=2$ and $G\cong Z_4$, then there does not exist such an elementary binary operator. AI: Let us identify $P$ with $\{0,1\}^S$ in the obvious way. Then an elementary operation is just one which is given by applying some binary operation $\{0,1\}\times\{0,1\}\to\{0,1\}$ coordinatewise. Indeed, it is clear that every elementary operation must have this form (since all the basic Boolean operations have this form), and conversely it is a simple exercise in Boolean algebra to build every binary operation on $\{0,1\}$ out of the basic Boolean operations. So, $P$ must just be a product of copies of some binary operation on $\{0,1\}$. As long as $S$ is nonempty, this means $P$ will be a group iff the corresponding operation on $\{0,1\}$ makes it a group (and the case where $S$ is empty is trivial). But there is only one group operation on $\{0,1\}$ up to isomorphism, so $P$ can only be isomorphic to $\mathbb{Z}_2^S$. (In fact, there are only two possible group operations at all: the usual symmetric difference operation and symmetric difference conjugated by swapping $0$ and $1$, which in terms of sets is just the complement of the symmetric difference.)
H: compactness model theory question Let $\sigma$ be a set of first-order formulas including the axioms of equality. Suppose that for every $n\in\mathbb{N}$, $\sigma$ has a satisfying model $M_n$ whose domain is finite and has at least $n$ distinct elements. Prove that the set $\sigma$ must have a model with infinite domain. Edit: Here's my revised attempt. By the compactness theorem, $\sigma$ has a model iff every finite subset of $\sigma$ has a model. To show that $\sigma$ has a model with infinite domain, I need to add sentences to $\sigma$ to construct an infinite model that satisfies $\sigma$ equipped with these sentences and thus $\sigma$, though I'm not sure how to find these sentences. AI: Define sentences $\phi_i$ as follows: $$ \phi_i = \exists x_1, x_2, \ldots, x_i. \bigwedge_{1 \le m < n \le i} x_m \neq x_n $$ I.e., (given the equality axioms), $\phi_i$ holds in a model $M$ iff $M$ has at least $i$ distinct elements. Let $\sigma' = \sigma \cup \{\phi_1, \phi_2, \ldots\}$. Any finite subset of $\sigma'$ contains only finitely many of the $\phi_i$; if $n$ is the largest index that occurs, then by assumption the model $M_n$, which has at least $n$ distinct elements, satisfies that finite subset. Hence by the compactness theorem, $\sigma'$ has a model, $M$, say. But then the equality axioms together with each $\phi_i$ all hold in $M$, so $M$ must be infinite. [Aside: the only equality axiom that is actually relevant here is reflexivity: $\forall x. x = x$.]
H: Find the arclength from 0 to 1 on the function $y =\arcsin(e^{-x})$. I need to find the arclength from 0 to 1 on the function $y = \arcsin(e^{-x})$. I know that $$y' = \frac{-e^{-x}}{\sqrt{1-e^{-2x}}}$$ By applying arc length formula I get this nasty integral: $$\int_{a}^{b} \sqrt{1 + (y')^2} \ dx=\int_{0}^{1}\sqrt{\frac{1-2e^{-2x}}{1-e^{-2x}}}dx$$ However I am unsure how to evaluate this. Putting it through software, I seem to get complex values which I am not sure are possible. Could someone offer an attempt? AI: hint $$(y')^2=\frac{e^{-2x}}{1-e^{-2x}}$$ $$1+(y')^2=\frac{1}{1-e^{-2x}}$$ The length is $$L=\int_a^b\frac{e^{-2x}dx}{e^{-2x}\sqrt{1-e^{-2x}}}$$ Now, make substitution $$u=\sqrt{1-e^{-2x}}$$ to get $$L=\int_A^B\frac{du}{1-u^2}$$ Now, use partial fraction decomposition.
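As a numerical cross-check (a minimal SciPy sketch; quad copes with the integrable singularity at $x=0$), the length equals $\operatorname{artanh}\sqrt{1-e^{-2}}\approx 1.6579$:

```python
import numpy as np
from scipy.integrate import quad

# Arclength integrand 1/sqrt(1 - e^{-2x}) derived above.
L, _ = quad(lambda x: 1 / np.sqrt(1 - np.exp(-2 * x)), 0, 1)

# After u = sqrt(1 - e^{-2x}) the integral is artanh(u) from 0 to B.
B = np.sqrt(1 - np.exp(-2))
print(L, np.arctanh(B))  # both ~ 1.6579
```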
H: Let $\Omega$ be a finite set. Let $\mathcal{F}\subset\mathcal{P}(\Omega)$ be an algebra. Show that $\mathcal{F}$ is a $\sigma$-algebra. Let $\Omega$ be a finite set. Let $\mathcal{F}\subset\mathcal{P}(\Omega)$ be an algebra. Show that $\mathcal{F}$ is a $\sigma$-algebra. MY ATTEMPT Since $\mathcal{F}$ is an algebra, $\Omega\in\mathcal{F}$. Moreover, if $A\in\mathcal{F}$, then $A^{c}\in\mathcal{F}$. Finally, if $A,B\in\mathcal{F}$, then $A\cup B\in\mathcal{F}$. Now we have to prove that the countable union of sets in $\mathcal{F}$ does belong to $\mathcal{F}$. Here is the sketch of my attempt to prove it: since there are finitely many subsets of $\Omega$, the countable union has to have finitely many different sets in its composition. Consequently, such union is a finite union of subsets of $\Omega$, which clearly belongs to $\mathcal{F}$ since it is an algebra. However I am not sure if it is a good approach or how to formalize it. Could someone please help me with this? AI: Your sketch is pretty much a proof already. If you wanted to be more precise about it: suppose you have a countable family $(A_i)_{i\in I}$ of sets in $\mathcal F$, then this is equivalently a function $f:I\to\mathcal F$. Since $2^\Omega$ is finite, so is the image $f(I)$, so write $f(I) = \{B_1,\dots,B_n\}$ and now $\bigcup_{i\in I}A_i = B_1\cup\dots\cup B_n$. As $\mathcal F$ is an algebra, it is closed under binary union, so by induction this union will be in $\mathcal F$ also.
H: Can someone help me understand the question? It's Linear Equations. Decide whether the given number is a solution to the equation. $2x + 3x + 2= 10$; $x = \frac{8}{5}$ Is $x = \frac{8}{5}$ a solution? Help, I don't understand the question and have no idea how to even check if the solution is true or false. AI: If $x=\dfrac85$ then $2x=\dfrac{16}5$ and $3x=\dfrac{24}5$, so $2x+3x+2=\dfrac{16}5+\dfrac{24}5+2=\dfrac{16+24}5+2=8+2=10$, so $x=\dfrac85$ is a solution.
H: Let $A\subset\Omega$ and $\mathcal{B}_{A} = \{B\cap A:B\in\mathcal{B}\}$. Show that $\mathcal{B}_{A}$ is a $\sigma$-algebra on $A$. Let $\Omega$ be a nonempty set and $\mathcal{B}$ be a $\sigma$-algebra on $\Omega$. Let $A\subset\Omega$ and $\mathcal{B}_{A} = \{B\cap A:B\in\mathcal{B}\}$. Show that $\mathcal{B}_{A}$ is a $\sigma$-algebra on $A$. MY ATTEMPT To begin with, notice that $A\in\mathcal{B}_{A}$. Since $\Omega\in\mathcal{B}$, we conclude that $A = \Omega\cap A\in\mathcal{B}_{A}$. Let us suppose that $S_{1},S_{2},\ldots, \in\mathcal{B}_{A}$. Then one has that $S_{i} = B_{i}\cap A$ for some $B_{i}\in\mathcal{B}$, where $i\geq 1$. Consequently, \begin{align*} S = \bigcup_{i=1}^{\infty}S_{i} = \bigcup_{i=1}^{\infty}(B_{i}\cap A) = \left(\bigcup_{i=1}^{\infty}B_{i}\right)\cap A = B\cap A \Rightarrow S\in\mathcal{B}_{A} \end{align*} Could someone help me to prove that $\mathcal{B}_{A}$ is closed under complementation? Any help is appreciated. AI: If $S = A\cap B\in \mathcal B_A$ where $B \in \mathcal B$, then the complement of $S$ in $A$ is $A\setminus S = A\cap (A\cap B)^{c} = A\cap B^{c}$. Since $B^{c} \in \mathcal B$, it follows that $A\setminus S \in \mathcal B_A$.
H: Prove that for every $x\in M$ the sequence $\{T^nx\}$ converges to a fixed point of $T$. Let $(M, d)$ be a complete metric space, let $T:M\to M$ be a continuous map and let $\varphi:M\to\mathbb{R}$ be a function which is bounded below. Assume that together they satisfy $$d(x,Tx)\leq\varphi(x)-\varphi(Tx)$$ Prove that for every $x\in M$ the sequence $\{T^nx\}$ converges to a fixed point of $T$. My idea is to use the Banach fixed point theorem, that is, we need to show $d(Tx,Ty)\leq cd(x,y)$ where $c\in[0,1)$. $$d(Tx,Ty)\leq d(Tx,x)+d(x,y)+d(y,Ty)\leq\cdots$$ I don't see how to use the condition on $\varphi$, and it is unclear to me how $T$ and $\varphi$ interact through the inequality. Hints and good observations are welcome! Thanks in advance! AI: Hints: $d(T^{n}x,T^{n+1}x) \leq \phi (T^{n}x)-\phi (T^{n+1}x)$. From this and the fact that $\phi$ is bounded below show that $\sum d(T^{n}x,T^{n+1}x) <\infty$ [You will have a telescoping sum on the right-hand side]. From this and the triangle inequality show that $(T^{n}x)$ is a Cauchy sequence. If $T^{n}x \to y$ prove that $Ty=y$.
H: I don't understand this question You have a craving for Mrs. Fields gourmet cookies. You have a choice of oatmeal raisin, macadamia nut, triple chocolate, cinnamon sugar, chocolate chip, and peanut butter. If you must choose at least one cookie. How many ways is this possible? AI: The question is perhaps a little ambiguous, but here's one interpretation - there are 6 cookies, one in each of the listed flavours. How many ways can you take at least one cookie, but potentially up to all 6? There are two ways that you can approach this (and if you do it right, they should give you the same results): Method 1: There are $^6C_1 = 6$ ways to pick a single cookie, $^6C_2 = 15$ ways to pick a pair of cookies, and so forth up to $^6C_6 = 1$ way to pick all 6 cookies. Adding them up will give you the total number of ways to choose at least one cookie. Method 2: You can either pick oatmeal raisin or leave it out, giving you two options on that front. Once you've decided on that, you can either add macadamia nut or leave it out, again giving you two options, so at this point there are $2 \times 2 = 4$ different possible selections. You can then either add triple chocolate or skip it, so now there are $4 \times 2 = 8$ choices. You keep doing that for all the arrangements, giving you all the ways to pick out of the 6 cookies. But, if you look carefully, that set of choices includes the case where you avoid all the cookies, which the problem says isn't happening. So once you have your number, subtract one for the disallowed option.
H: Does $y=9$ solve $2y+9(y-4)=52$? What are the steps to properly solve this? Determine whether $y=9$ is the solution to the equation $2y+9(y-4)=52$. AI: "Is a solution to" means the same thing as "Makes the given things true". So to check whether $y = 9$ is a solution to $2y+9(y-4)=52$, we want to see whether choosing $y = 9$ makes that equation true. The right-hand side of the equation is a nice simple constant, so we can leave it as it is. The left-hand side involves $y$, so let's plug our chosen value in: $\begin{eqnarray}2y + 9(y - 4) & = & 2 \times 9 + 9(9 - 4) \mbox{ by substitution} \\ & = & 18 + 9 \times 5 \\ & = & 18 + 45 \\ & = & 63 \end{eqnarray}$ And so we ask, is $63 = 52$? No, the statement is clearly false, and so $y = 9$ is not a solution to the given equation. As shown in the comments, the true solution is in fact $y = 8$, which you can find through standard solution methods, but it's useful to remember that you can test solutions like this because (a) it lets you prove that the answer you found was right, and (b) when solving more complicated equations you can sometimes accidentally introduce additional, incorrect solutions and you need to eliminate them. For example, you might accidentally introduce a division by zero, or take a square root of a negative number, or any of the many ways that show up in various proofs that 1=0 or something.
H: Number of ways of choosing n, m elements from non-disjoint sets A and B? How many ways are there to choose $m$ and $n$ elements from (potentially non-disjoint) sets $A$ and $B$, respectively? If the sets were disjoint, this would be $\binom{|A|}{m} \binom{|B|}{n}$. If they aren't necessarily disjoint however, I can't seem to think of a nice closed-formula. Any help would be appreciated. AI: As asked by the OP: iterate over how many elements are taken from the intersection; say $c=|A\cap B|$ and let $k$ count the chosen elements of $A\cap B$. Then $$\sum _{k=0}^{c}\binom{c}{k}\binom{|B|-c}{n-k}\binom{|A|-c}{m-k}.$$
H: If $f$ is a continuous function and $f(a + b) = f(a) + f(b)$, how do I prove that $f(x) = mx$ where $m=f(1)$? If $ f $ is a continuous function and $ f ( a + b ) = f ( a ) + f ( b ) $, how do I prove that $ f ( x ) = m x $ for any $ x $ in real numbers, where $ m = f ( 1 ) $? I know that I have to start by showing $ f ( x ) = m x $ for any rational $ x $ and then extend that to any real number with continuity. However, I do not know how to go about it. AI: First use induction to show $f(n)=nm$ for $n\in\Bbb N$. Next, $f(0)=f(0)+f(0)$ implies $f(0)=0$, so $0=f(1-1)=m+f(-1)$. This tells us $f(-1)=-m$, and now we can show by induction $f(n)=mn$ for any $n\in\Bbb Z$. Next, we claim that, for any rational number $p/q$, $f(p/q)=mp/q$. We can assume $q$ is a positive integer, and then $$mp=f(p)=f(qp/q)=f(p/q+p/q+\cdots+p/q)=qf(p/q),$$ which gives the result. Now use the fact that $f$ is continuous to extend the result to every real number.
H: if $\frac{p}{q}, \frac{r}{s}$ are positive simplified fractions such that $qr - ps=1$, prove that $\frac{p+r}{q+s}$ is also a simplified fraction if $\frac{p}{q}, \frac{r}{s}$ are positive simplified fractions such that $qr - ps=1$, prove that $\frac{p+r}{q+s}$ is also a simplified fraction It's not hard to prove through Pick's theorem actually. Let $(p,q), (r,s)$ be two points on the plane, then $(0,0), (p,q), (r,s), (p+r, q+s)$ form a parallelogram whose area is $1$. But there is no lattice point within $(0,0), (p,q), (r,s), (p+r, q+s)$, thus $\frac{p+r}{q+s}$ cannot be further reduced. Otherwise it'd produce a lattice point on the diagonal. However Pick's theorem is kind of too deep for this. Is there an elementary proof? AI: $$ r(q+s)-s(r+p) = rq +rs -rs - ps = qr - ps=1\,, $$ so every common divisor of $p+r$ and $q+s$ divides $1$. Hence $\gcd(p+r,q+s)=1$, i.e. $\frac{p+r}{q+s}$ is already in lowest terms.
H: generating functions for the sequence $\{\frac{k(k-1)}{2}\}$ and ${(k+1)(k+2)}$ For the following two sequences: (1) $\{(k+1)(k+2)\}$, (2) $\{\frac{k(k-1)}{2}\}$, I am trying to obtain the generating functions for both of them. I am going through a text on finite difference equations, and its treatment of the method of generating functions is extremely short. I know that both sequences are related to the sequence $y_k = k(k+1)$, and its generating function is $\frac{2s}{(1-s)^3}$. For (1): if I let $Y(s)$ be the generating function of $y_k$, then shouldn't the generating function for $(k+1)(k+2)$ be $\frac{Y(s)-y_0}{s} = \frac{{\frac{2s}{(1-s)^3}}-1}{s} = \frac{2}{(1-s)^3} - \frac{1}{s}$? However the solution is $\frac{2}{(1-s)^3}$. I am not sure what I am doing wrong. For (2) $\{\frac{k(k-1)}{2}\}$, I am not sure how to relate it back to the sequence $k(k+1)$, but the solution for its generating function is $\frac{s^2}{(1-s)^3}$ Thank you in advance AI: For the first one notice that you do not have to take out the constant term because there is no constant term: $k(k+1)$ evaluated at $k=0$ is $0$, so $y_0=0$ and $\frac{Y(s)-y_0}{s}=\frac{Y(s)}{s}=\frac{2}{(1-s)^3}$. For the second one notice that if you have $$F=\sum _{k=0}^{\infty}y_kx^k,$$ then $$xF=\sum _{k=0}^{\infty}y_kx^{k+1}=\sum _{k=1}^{\infty}y_{k-1}x^k,$$ so if $$\frac{2s}{(1-s)^3}=\sum _{k=0}^{\infty}k(k+1)s^k,$$ then $$s\frac{2s}{(1-s)^3}=\sum _{k=0}^{\infty}(k\color{red}{-1})(k+1\color{red}{-1})s^k,$$ divide by $2$ and you are done.
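Both expansions are quick to confirm with SymPy (a minimal sketch):

```python
from sympy import symbols, series

s = symbols('s')
# (1): 2/(1-s)^3 has coefficients (k+1)(k+2): 2, 6, 12, 20, 30, ...
print(series(2 / (1 - s)**3, s, 0, 5))
# (2): s^2/(1-s)^3 has coefficients k(k-1)/2: 0, 0, 1, 3, 6, 10, ...
print(series(s**2 / (1 - s)**3, s, 0, 6))
```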
H: Solve the system of equations for $x$ and $y$? I'm trying to solve this system of equations: $3x^2 - 12y = 0$ $24y^2 -12x = 0$ for $x$ and $y$, but I'm a little confused. I get $x = 0, 2$ and when I plug those into my first equation I get $y = 0, 1$ but when I plug it into my second equation I get $y = 0, 1, -1$. I thought these are supposed to be equivalent. How could I determine which solutions are correct? AI: When you substitute $x = 2$ in your second equation, you get $y = -1, 1$. However, $(2, -1)$ only satisfies the second equation and not the first, but $(2, 1)$ satisfies both equations. Therefore, the only real solutions are $(0,0)$ and $(2, 1)$. As others have shown, your method neglects any complex solutions.
H: Typographical error? Integral curve of $y^2dx-x^2dy=xy$ passing through point (1,1) The following first-order differential equation and boundary condition appears in Section 2.7, Problem 14 of An introduction to the theory of Differential Equations by Walter Leighton. $y^2dx-x^2dy=xy$, passing through (1,1) The solution is given as $x^y=y^x$, which by inspection must reduce to $y=x$ to pass through the point (1,1). Clearly, this does not satisfy the ODE in the problem. The book I've found is quite full of errors and I would not be surprised if the given ODE is incorrect. Is the ODE even solvable in any sense? The lack of differentials on the RHS is disturbing, and my attempts to numerically solve the ODE have failed to converge on a solution. AI: Definitely something is missing. In order that the solution satisfies an equation, one way is to change the equation to $$y^2dx-x^2dy=xy(\ln(y)dx-\ln(x)dy).$$ Or just forget about it and move on.
H: Let $f:[a,b]\to \mathbb{R}$ be Riemann integrable. Let $g:[-b,-a]\to \mathbb{R}$ be defined by $g(x):=f(-x)$. Show that $g$ is Riemann integrable Let $a<b$, $f:[a,b]\to \mathbb{R}$ be Riemann integrable. Let $g:[-b,-a]\to \mathbb{R}$ be defined by $g(x):=f(-x)$. Show that $g$ is Riemann integrable with $\int_{[-b,-a]}g=\int_{[a,b]} f$ I wanted to use change of variables but the statement I have from the text says $\phi$ must be monotone increasing: Let $[a,b]$ be a closed interval and let $\phi:[a,b]\to [\phi(a),\phi(b)]$ be a differentiable monotone increasing function such that $\phi'$ is Riemann integrable. Let $f:[\phi(a),\phi(b)]\to \mathbb{R}$ be Riemann integrable. Then $(f\circ \phi) \phi':[a,b]\to \mathbb{R}$ is Riemann integrable on $[a,b]$ and $\int_{[a,b]} (f\circ \phi)\phi'=\int_{[\phi(a),\phi(b)]} f$ So I wanted to just prove that $\underline\int_{[a,b]} f\leq \int_{[-b,-a]}g\leq \overline\int_{[a,b]} f$ I know that given a partition $P$ of $[a,b]$ I can use $\phi(x)=-x$ to construct a partition of $[-b,-a]$ as $Q=\{\phi(J):J\in P\}$. But I'm not sure exactly what to do from here. I'm assuming there must be some way to use the theorem given that I'm not seeing or I'm supposed to prove that the change of variables still works for monotone decreasing $\phi$. AI: If for each partition $P$ of $[a,b]$ of size $n$ you can find a partition $Q$ on $[-b,-a]$ of size $m$ so that $$\sum_{i=1}^{n-1} (\sup_{x\in[x_i,x_{i+1}]}f(x))(x_{i+1}-x_i) = \sum_{j=1}^{m-1} (\sup_{x\in[y_j,y_{j+1}]}g(x))(y_{j+1}-y_j) $$ then $\overline{\int_a^b} f = \overline{\int_{-b}^{-a}}g$. (Here $Q$ can simply be the reflected partition with points $y_j=-x_{n+1-j}$; then $\sup_{x\in[-x_{i+1},-x_i]}g=\sup_{x\in[x_i,x_{i+1}]}f$ and the subinterval lengths match, so the upper sums agree, and since reflection is its own inverse, every upper sum of $g$ likewise corresponds to one of $f$, so the infima coincide.) The same argument will show that $\underline{\int_a^b} f = \underline{\int_{-b}^{-a}}g$.
H: For a set A is it always possible to find a measurable superset A* such that $\mu^*(A^*-A)=0$ Given a measure space $\langle X,\mu,\mathcal{A}\rangle$ let $\mu^*$ be the outer measure induced by $\mu$ and $A \subseteq X$ with finite outer measure (not necessarily measurable). Can a measurable superset $A^*$ with the same outer measure as $A$ be found that also has the property that $\mu^*(A^*-A)=0$? How do you prove you can find such a set? (Note: a set is a superset of itself; I am not considering only strict supersets.) Edit: I have worked on this for a while and I believe it can be proved that this is never the case for non-measurable sets in complete measures. If the outer measure of a set is $0$ then it is measurable according to Carathéodory. The union of a non-measurable set $A$ and a null set $N$ must be non-measurable in a complete measure. This is because if $A\cup N$ were measurable then $A-N$ would have to be measurable, and by completeness $A \cap N$ would also be measurable, so $A$ would have to be measurable, contradicting the premises. Is this correct? AI: No, it's not always possible. Take $X=\{0,1\}$ with the $\sigma$-algebra $\mathcal{A}=\{\emptyset,X\}$ and $\mu$ given by cardinality, i.e. $\mu(\emptyset)=0$ and $\mu(X)=2$. For $A=\{0\}$ the only measurable superset is $A^*=X$, and $\mu^*(A^*-A)=\mu^*(\{1\})=\mu(X)=2\neq0$, so there is no such superset.
H: "Show that the limit $ \lim\limits_{(x,y) \to (0,0)} \frac{2x^2y^3}{x^4+y^6} $ does not exist." $$ \lim\limits_{(x,y) \to (0,0)} \frac{2x^2y^3}{x^4+y^6} $$ My reasoning after reading the textbook: Direct substitution wouldn't work since it would lead to the indeterminate form $0/0$. We can examine the values of $f$ along parabolic curves that end at $(0,0)$. Along $y=kx^2, x\neq0$, the function has the value $$ \frac{2k^3x^4}{1+k^6x^8}. $$ So, if $(x,y)$ approaches $(0,0)$ along $y=x^2$, then $k = 1$, and the limit is $\frac{2x^4}{1+x^8}$. If $(x,y)$ approaches $(0,0)$ along the $x$-axis, then $k=0$, and the limit is $0$.Is this correct and/or is there another way to go about this? Thanks. AI: As DMcMor observed, unfortunately your reasoning is incorrect as $$\frac{2x^4}{1 + x^8} \to 0$$ as $x \to 0$. But the idea is correct: you want to find two paths through $(0, 0)$ whose limits are different. You might consider the paths $y = x^{2/3}$ and $y = x$, for example. Usually, in problems where we want to show the limit doesn't exist, the strategy for picking two paths is to first find a path where the numerator's degree is higher than the denominator's degree (happening here with $y = x$) --- in which case the limit becomes zero --- and then find a second path through which the denominator and numerator have the same degree (happening here with $y = x^{2/3}$ --- in which case the limit will be nonzero.
H: Why is the well-ordering theorem so important in the set theory? Why is the well-ordering theorem so important in the set theory? Every set can be well-ordered. Mathematicians think the above theorem is very important but the below theorem is not so important. Of course I know that the above theorem is stronger than the below theorem. But why is the above theorem so important? Every set can be totally-ordered. AI: Most simply, well-orderings let us do things that arbitrary total orders don't - namely, they support definition by (transfinite) recursion and proof by (transfinite) induction. The difference is most clearly seen if we look at "constructions" of weird sets of reals: $\mathsf{ZF}$ obviously proves that $\mathbb{R}$ can be totally ordered, but a mere total ordering of $\mathbb{R}$ doesn't help us build a Vitali set, a Bernstein set, an undetermined game on the naturals, or etc. - for any of those, we need a well-ordering.
H: Every continuous open mapping $\mathbb{R} \to \mathbb{R}$ is monotonic Prove that every continuous open mapping from $\mathbb{R} \to \mathbb{R}$ is monotonic I want to prove it only (or mostly) using arguments and concepts from topology, and not from analysis. I don't have anything that I think is useful or correct yet. The only idea I had was this: Let $f:\mathbb{R}\to\mathbb{R}$ be continuous and open. If $x,y \in \mathbb{R}$, $x<y$, then $(x,y)$ is connected and open. Since $f$ is open, $f(x,y)$ must be open, and since $f$ is continuous, $f(x,y)$ must be connected. This way, if I managed to prove that $f(x,y)$ is bounded, then we would have $f(x,y)=(a,b)$ for some $a,b\in\mathbb{R}$. Now, because $f$ is continuous and open, we would have that $f(\partial(x,y))=\partial(a,b)$, that is, $\{f(x),f(y)\}=\{a,b\}$, which implies that $f(x)=a, f(y)=b$, or $f(x)=b, f(y)=a$ and finally $f(x)<f(y)$ or $f(x)>f(y)$. However, I don't know how to prove that $f(x,y)$ is bounded, and even if I did it, I think this would only establish that f is injective, not that it is monotone. I would be thankful if you could give me some hints (not full solutions). AI: Hint: Suppose that $f$ is not monotonic; then there exist an interval $[x,y]$ and a point $t \in (x,y)$ such that $f(t)= \max_{s \in [x,y]}{f(s)}$ or $f(t)= \min_{s \in [x,y]}{f(s)}$. Once you have that, study the set $f((x,y))$.
H: Does a linear function $\mathrm {End}_K(V) \otimes V \to V$ which maps $A \otimes v$ to $\det (A) \cdot v$ exist? $K$ is a field and $V$ a finite dimensional $K$-vector space. Does a linear function $\mathrm {End}_K(V) \otimes V \to V$ which maps $A \otimes v$ to $\det (A) \cdot v$, for all $A \in \mathrm {End}_K(V)$ and all $v \in V$ exist? AI: If such a map existed, the map ${\rm End}_K(V)\times V\to V$ given by $(A,v)\mapsto\det(A)\cdot v$ would have to be bilinear, which is not the case, because $\det$ is not linear: $\det(A+B)\neq\det(A)+\det(B)$ and $\det(cA)\neq c\det(A)$ in general, whenever $\dim_KV>1$.
H: Is the Cartesian product of two bounded sets bounded? What about for compact sets? The title pretty much speaks for itself. I just got this curiosity while self-studying Real Analysis using Baby Rudin and Charles Pugh's "Real Mathematical Analysis" out of boredom. I am still relatively new to abstract math since these two books are my first brush with the subject. AI: If $A,B\subseteq\Bbb R$ are bounded, there is a positive integer $n$ such that $A\cup B\subseteq[-n,n]$. Then $$A\times B\subseteq[-n,n]\times[-n,n]\subseteq\left\{\langle x,y\rangle\in\Bbb R^2:x^2+y^2\le 2n^2\right\}\;,$$ and that last set is plainly bounded: it is the closed disk of radius $n\sqrt2$ centred at the origin. It is a basic result in topology that the product of two compact sets is compact. Indeed, the product of any family of compact sets, no matter how large, is compact: this is the very important Tikhonov theorem.
H: Does "y (2 + x) = 3" represents a straight line equation? I'm new to straight line equation and trying to find out whether $y (2 + x) = 3$ represents a straight line equation or not. Could anybody please help me to figure out how to reach a conclusion here. AI: No it does not, It represents a hyperbola. General equation of a straight line is $ax+by+c=$ where $a,b,c \in\mathbb R$. But your equation when simplified looks like $2y+xy-3=0$ Notice that straight line equation has no $xy$( or coefficient term $xy$ term is zero) hence it does not represent a straight line.
H: To find a smooth planar curve starting at $\vec{r_0}$ stopping at $\vec{r_1}$ with some additional constraints. To find a smooth planar curve starting at $\vec{r_0}$ stopping at $\vec{r_1}$ whose unit tangents at start and stop are $\hat{v_0}$ and $\hat{v_1}$ and has the minimum length. Let us assume that the curve is $\vec{r(t)}=x(t)\hat{x}+y(t)\hat{y}$ parametrized by $0\leq t\leq1$ and having the length $L$ to be minimized: $$L=\int_{0}^{1} |\vec{r'(t)}|dt=\int_{0}^{1} \sqrt{x'^2(t)+y'^2(t)}dt$$ where $$\frac{x'(0)\hat{x}+y'(0)\hat{y}}{\sqrt{x'^2(0)+y'^2(0)}}=\hat{v_0},$$ $$\frac{x'(1)\hat{x}+y'(1)\hat{y}}{\sqrt{x'^2(1)+y'^2(1)}}=\hat{v_1},$$ $$x(0)\hat{x}+y(0)\hat{y}=\vec{r_0},$$ and $$x(1)\hat{x}+y(1)\hat{y}=\vec{r_1}.$$ Also: $$\hat{v_0}\times(\vec{r_1}-\vec{r_0})\bullet\hat{v_1}\times(\vec{r_1}-\vec{r_0})<0$$ Any idea how to minimize $L$? AI: If you draw a picture using very small parts of the curve satisfying the initial and final conditions, it should be obvious that you can connect those parts (after appropriate turns) with a straight line segment, hence the length of the curve can be made arbitrarily close to the distance between the initial and final points. It follows that there is no minimum length unless each of the two given unit tangent vectors is in the direction of the displacement vector, but that would break the specified dot product condition.
H: Finding the maximum value given a system of two equations I was given $(x,y)$ satisfying both of these equations: $4|xy| - y^2 - 2 = 0\\(2x+y)^2 + 4x^2 = 2$ And was asked to find the maximum value of $4x + y$. Solving for $y^2$, I get this equation: $8x^2 + 4|xy| + 4xy - 4 = 0$ I assumed that if I were to find the maximum value of $4x + y$, then both $x$ and $y$ must be positive, so $xy \ge 0$ means that $|xy|$ is equal to $xy$. So, I have $8x^2 + 8xy - 4 = 0$ Solving for $y$, I have $y = \frac{4-8x^2}{8x}$ which makes $4x+y = f(x) = \frac{24x^2 + 4}{8x}$. Finding the extreme point using $f'(x) = 0$, I get $x = \frac{1}{\sqrt{6}}$, $y = \frac{\sqrt{6}}{3}$, and $f(\frac{1}{\sqrt{6}}) = \sqrt{6}$, which is the wrong solution (and this being the minimum point, too, while not satisfying the first equation at the same time ...) The choices were: $1, \sqrt{2}, 2, \sqrt{3}, 4$. Some help would be helpful! AI: From the two equations you get $$4|xy|-y^2=(2x+y)^2+4x^2 \implies 2|xy|=4x^2+y^2+2xy.$$ If $xy \geq 0$, then we get $$2xy=4x^2+y^2+2xy \implies 4x^2+y^2=0 \implies x=y=0.$$ But $(x,y)=(0,0)$ does not satisfy the first equation (it gives $-2=0$), so this case yields no solutions. If $xy < 0$, then we get $$-2xy=4x^2+y^2+2xy \implies 4x^2+y^2+4xy=0 \implies (2x+y)^2=0.$$ Thus $2x+y=0$. Now the expression $\color{blue}{4x+y}=2x+(2x+y)=\boxed{\color{red}{2x}}$. But with $2x+y=0$, we also get $y=-2x$ and plugging that into one of the given equations we get $x=\pm\frac{1}{\sqrt{2}}$. So the max. value of the expression $\color{blue}{4x+y}=\color{red}{2x}=\sqrt{2}$.
H: Probability of having 3 tails consecutively out of 5 tosses Suppose we have a coin tossing game. What is the probability that out of 5 coin tosses, you will get 3 tails consecutively? For this problem, I was thinking of using the Binomial distribution PMF since it seems to describe the number of success out of n trials. The only problem is that I don't think it takes into account the order of the tosses ie 3 consecutive tails. So I was thinking maybe instead of Px(k)= nCk * (p^k) * (1-p)^(n-k) I would replace n choose k with n!. Idk if this would work, or if I am headed in the right direction. Any thoughts on my approach? edit: I forgot to put that in the problem, we are finding the probability of getting either 3 tails or 5 tosses, whichever comes first, thus my suggestion of the binomial distribution PMF. AI: The binomial distribution PMF isn't really describing the same thing - it looks for the possibility of getting a specific exact number of tails. If that's really what you're looking for (the probability that 3 are tails and the rest are heads), the replacement for nCk is n-k+1, or 3, because there are 3 places where there could be 3 consecutive tails (the beginning, the middle 3, or the last 3). Following this should get you 3/32. On the other hand, if you're looking for the probability of there just being 3 consecutive tails (so that 5 tails in a row would also count), you're going to have a lot of difficulties with overcounting. The easiest method in this case is really just to count the possibilities (there are 8 - ttttt,tttth,tttht,ttthh,thttt,htttt,httth,hhttt). Divide this by the total number of possibilities (there are 32 ways to flip a coin 5 times) to get the overall probability of 1/4. If you really want to do the second part in a slightly fancier way, you have to avoid counting sequences like ttttt multiple times. The way to do this is to look for an h followed by 3 t's - if you find httt, there can be no previous sequence of 3 t's, and you avoid overcounting. Therefore, you are looking for ttt??,httt?, and ?httt, which have probabilities of 1/8,1/16,and 1/16 respectively, summing to 1/4. Note that this method won't work for significantly longer sequences, as it's possible to have 3 consecutive t's, then httt. More advanced methods are required. TL;DR depending on what you mean, the answer is probably 1/4, but the method you described isn't particularly useful
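Everything here is small enough to verify by brute force (a minimal sketch):

```python
from itertools import product

seqs = ["".join(s) for s in product("ht", repeat=5)]
print(sum("ttt" in s for s in seqs), len(seqs))  # 8 32 -> probability 1/4

# Stricter reading: exactly three tails, and they are consecutive.
print(sum(s.count("t") == 3 and "ttt" in s for s in seqs))  # 3 -> 3/32
```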
H: $\frac{1}{z^2-5z+4}=\sum_{n=-\infty}^{n=\infty}a_nz^n$. Find $a_{-10}$ and $a_{10}$ Here is the question: By Laurent expansions, there exist constants $a_n$ such that for $1<|z|<4$, $\frac{1}{z^2-5z+4}=\sum_{n=-\infty}^{n=\infty}a_nz^n$. Find $a_{-10}$ and $a_{10}$. My idea: Using partial fraction decomposition, we find that $\frac{1}{z^2-5z+4}=\frac{\frac{1}{3}}{z-4}-\frac{\frac{1}{3}}{z-1}$. Using Laurent expansions, we know that $\frac{\frac{1}{3}}{z-4}=\frac{\frac{1}{3}}{-1-\frac{1}{4}z}=-\frac{1}{3}\sum_{n=0}^\infty \frac{1}{4^{n+1}z^n}$, and that $\frac{\frac{1}{3}}{z-1}=-\frac{1}{3}\sum_{n=0}^{\infty}\frac{1}{z^n}$. Now, I am sort of stuck on where to go from here. I feel like I've done the majority of the problem, but is it really just a matter of playing with indices now? Moreover, I should ask if I made any mistakes, and if there is anything that ought to be considered for the rest of the problem. Furthermore, just out of curiosity, I was wondering if there is any way to maybe do this simply through integration? Any thoughts, suggestions, tips, etc. are greatly appreciated! Thank you! AI: You are in the right direction. However, since you are working in $1<|z|<4$, so for $$\underbrace{\frac{1}{z-1}=\frac{1}{z\left(1-\frac{1}{z}\right)}}_{\because \, \left|\frac{1}{z}\right|<1}=\frac{1}{z}\sum_{n=0}^{\infty}\frac{1}{z^n}=\sum_{n=0}^{\infty}\frac{1}{z^{n+1}}.$$ And $$\underbrace{\frac{1}{z-4}=\frac{1}{4\left(\frac{z}{4}-1\right)}}_{\because \,\left|\frac{z}{4}\right|<1}=\frac{-1}{4}\sum_{n=0}^{\infty}\left(\frac{z}{4}\right)^n=-\sum_{n=0}^{\infty}\frac{z^n}{4^{n+1}}$$ Now you can have \begin{align*} \frac{1}{z^2-5z+4}&=\frac{1}{3}\left[\frac{1}{z-4}-\frac{1}{z-1}\right]\\ &=\frac{-1}{3}\left[\sum_{n=0}^{\infty}\frac{z^n}{4^{n+1}}+\sum_{n=0}^{\infty}\frac{1}{z^{n+1}}\right] \end{align*} Observe that the first series will give all non-negative powers, whereas the second series will give the negative powers. For example, $a_{-10}=\frac{-1}{3}$.
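As a numerical check, the coefficients can also be recovered from the contour-integral formula $a_n=\frac{1}{2\pi i}\oint_{|z|=2}\frac{f(z)}{z^{n+1}}\,dz$ (a minimal mpmath sketch):

```python
import mpmath as mp

f = lambda z: 1 / (z**2 - 5*z + 4)

def coeff(n, r=2):
    # a_n = 1/(2*pi) * integral over [0, 2*pi] of f(r e^{it}) (r e^{it})^(-n) dt
    g = lambda t: f(r * mp.exp(1j * t)) * (r * mp.exp(1j * t)) ** (-n)
    return mp.re(mp.quad(g, [0, 2 * mp.pi])) / (2 * mp.pi)

print(coeff(-10))                   # ~ -0.3333 = -1/3
print(coeff(10), -1 / (3 * 4**11))  # both ~ -7.95e-8 = -1/(3*4^11)
```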
H: Why is the vector dot product scaled? I have scoured all the answers on this website but I still cannot understand why $a\cdot b = |a||b|\cos\theta$. If the dot product is interpreted as the amount of one vector, say $a$, in the same direction as the other, say $b$, then why do we scale it by multiplying it with the magnitude of the vector on which it projects? Why do we need $|b|$? AI: To understand how the dot product is defined, it's better to first look at why the dot product is defined. The idea of the dot product is to have some operation which takes in two vectors (say $a$ and $b$), and returns a single (meaningful) value. As you suggest, if we were merely interested in the length of the projection of $a$ onto $b$ (or, as you put it, the amount of one vector in the same direction as the other), then we would indeed want to use $|a|\cos\theta$. But there are several problems with this ''definition''. Indeed, let's say we did define such a product: say $a\star b=|a|\cos\theta$. Observe this ''product'' has the following problems: We do not have $a\star b=b\star a$. In other words, the product is not commutative. We do not have $a\star (b+c)=a\star b+a\star c$, it is not distributive. We do not have $a\star (xb)=x(a\star b)$ for $x\in \mathbb{R}$, it does not respect scalar multiplication. That is why defining $a\cdot b=|a||b|\cos\theta$ is a much more natural definition. It satisfies the properties we want a usual product to satisfy, and hence tells us more about the vectors $a$ and $b$.
H: How would I find the order of a factor group and determine what it's isomorphic to Using this operation table I'm trying to figure out how to find the order of the factor group $D_{6} / H$ with H = $ \{ \rho_{0} , \rho_{3} \}$ and then figure out which well-known group it's isomorphic to. I know the order of $D_{6}$ on its own is 12 and I know how to find the order of a factor group like $( \Bbb Z_{4} \times \Bbb Z_{2}) \space / \space \langle (2,1) \rangle$, but I can't figure out how to calculate this factor group. Maybe once I can figure it out, then I might be able to find what it's isomorphic to? Or are those completely unrelated? Any advice would be greatly appreciated. AI: Cosets of a normal subgroup are pairwise disjoint and cover all of the group, so for finite groups we get $|G/H|=|G|/|H|$. This is usually very helpful in classifying factor groups of finite groups, and your problem is no exception. As to which order-6 group $D_6/H$ is, we only have two options: $\Bbb Z_6$ or $S_3$. If it is to be $\Bbb Z_6$, then there must be an order 6 element in $D_6/H$. An order 6 element in the factor group must come from an element in the original group that has order which is a multiple of 6. There are only two such: $\rho_1$ and $\rho_5$. However, $\rho_1H$ and $\rho_5H$ have order $3$ as $\rho_1^3=\rho_5^3=\rho_3\in H$, so the group cannot be $\Bbb Z_6$. If it is to be $S_3$, there must be two non-commuting elements in $D_6/H$. Non-commuting elements in the factor group must come from non-commuting elements in the original group. We try with $\rho_1$ and $\mu_1$, because those are the first two non-commuting elements in the table: $$ \rho_1H\cdot\mu_1H=(\rho_1\mu_1)H=\delta_1H=\{\delta_1,\mu_3\}\\ \mu_1H\cdot\rho_1H=(\mu_1\rho_1)H=\delta_3H=\{\delta_3,\mu_2\} $$ These are not the same coset, so $\rho_1H$ and $\mu_1H$ do not commute. So the group must be $S_3$.
H: Find the remainder when $1690^{2608} + 2608^{1690}$ is divided by $7$? Find the remainder when $1690^{2608} + 2608^{1690}$ is divided by $7$. My approach:- $1690 \equiv 3(\bmod 7)$ $1690^{2} \equiv 2(\bmod 7)$ $1690^{3} \equiv-1 \quad(\mathrm{mod} 7)$[ quite easy to determine , $\frac{2*1690}{7}$..so on] $\left(1690^{3}\right)^{869} \cdot 1690 \equiv(-1)^{869}1690 \quad(\mathrm{mod} 7)$ $1690^{2608} \equiv -1690 \quad(\mathrm{mod} 7)$....(1) again for $2608$ $2608 \equiv 4(\bmod 7)$ $2608^{2} \equiv 2(\bmod 7)$ $2608^{3} \equiv1 \quad(\mathrm{mod} 7)$[ quite easy to determine , $\frac{2*2608}{7}$..so on] $\left(2608^{3}\right)^{563} \cdot 2608 \equiv(1)^{563}2608 \quad(\mathrm{mod} 7)$ $2608^{1690} \equiv 2608 \quad(\mathrm{mod} 7)$...(2) Now adding (1) + (2), $1690^{2608} + 2608^{1690}\equiv 918 \quad(\mathrm{mod} 7)$ $\boxed{1690^{2608} + 2608^{1690} \equiv 1 \quad(\mathrm{mod} 7)}$ Is my approach good, or is there a comparatively better approach? AI: Seems fine. Try to work with smaller numbers as soon as you can. Using Fermat's little theorem, $$1690^{2608}\equiv 3^{2608}\equiv 3^{6(434)+4} \equiv 3^4 \pmod{7}$$ $$2608^{1690}\equiv 4^{6(281)+4}\equiv 4^4 \pmod{7}$$ $$3^4+4^4 \equiv (-4)^4+4^4 \equiv 2(4^4) \equiv 2^9 \equiv 2^3 \equiv 1 \pmod{7}$$ Remark: Also, I notice the way you compute $1690^3 \pmod{7}$ is by multiplying $2$ with $1690$ and then getting the remainder when you divide by $7$. You don't have to do that. To compute $1690^3$, just multiply $2$ and $3$ and you get $6$ directly. That is, once you figure out that $1690 \equiv 3\pmod{7}$, we know that $1690^n \equiv 3^n \pmod{7}$, so work with $3$ rather than $1690$. In fact, for prime $p$ and $\gcd(a,p)=1$, $a^n \equiv(a\pmod{p})^{(n \pmod{(p-1)})}\pmod{p}$, which can reduce the magnitude of the numbers that you have to work with.
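Python's three-argument pow gives an instant cross-check via fast modular exponentiation:

```python
print(pow(1690, 2608, 7), pow(2608, 1690, 7))         # 4 4 (i.e. 3^4 and 4^4 mod 7)
print((pow(1690, 2608, 7) + pow(2608, 1690, 7)) % 7)  # 1
```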
H: Problem with proving inequalities Question: Prove that if $x,y,z$ are positive real numbers such that $x+y+z=a$ then $(a-x)(a-y)(a-z)>\frac8{27}a^3$ is not true. My Approach: $$\frac{a-x}{2}=\frac{y+z}2$$ $$\frac{a-y}{2}=\frac{x+z}2$$ $$\frac{a-z}{2}=\frac{x+y}2$$ Using $AM>GM$ we get $$\frac{x+y+z}{3}>\root 3 \of {xyz}$$ Cubing both sides and multiplying by $8$, $$\frac{8a^3}{27}>8xyz$$ Also, by $AM>GM$, $$(\frac{y+z}2)(\frac{x+z}2)(\frac{x+y}2)>8xyz$$ Now, how do I find the relation between $(\frac{y+z}2)(\frac{x+z}2)(\frac{x+y}2)$ and $\frac{8a^3}{27}$? AI: You can proceed like this: $$(a-x)(a-y)(a-z) \leqslant \left( \dfrac{(a-x)+(a-y)+(a-z)}{3} \right) ^3 =\dfrac{8}{27} a^3$$
H: How to solve $\int ^{1}_{-1}\frac {x^{2n}}{\sqrt {1-x^{2}}}dx?$ I couldn't solve this. $$\int ^{1}_{-1}\dfrac {x^{2n}}{\sqrt {1-x^{2}}}dx$$ I tried the following: $$\int ^{1}_{-1}\dfrac {x^{2n}}{\sqrt {1-x^{2}}}dx=\int ^{1}_{-1}\dfrac {1-\left( 1-x^{2n}\right) }{\sqrt {1-x^{2}}}dx\\=\int ^{1}_{-1}\dfrac {1-\sqrt {\left( 1-x^{2n}\right) ^{2}}}{\sqrt {1-x^{2}}}dx\\=\int ^{1}_{-1}\dfrac {1}{\sqrt {1-x^{2}}}dx-\int ^{1}_{-1}\dfrac {\sqrt {\left( 1-x^{2n}\right) ^{2}}}{\sqrt {1-x^{2}}}dx$$ I don't know what to do next. Maybe the procedure so far is wrong. Please tell me how to solve it. AI: Let $x=\sin\theta\implies dx=\cos\theta \ d\theta$ $$\int ^{1}_{-1}\dfrac {x^{2n}}{\sqrt {1-x^{2}}}dx=2\int ^{1}_{0}\dfrac {x^{2n}}{\sqrt {1-x^{2}}}dx$$ $$=2\int ^{\pi/2}_{0}\dfrac {\sin^{2n}\theta}{\cos\theta}\cos\theta \ d\theta$$ $$=2\int ^{\pi/2}_{0}\sin^{2n}\theta\ d\theta$$ Using the formula $\color{blue}{\int_0^{\pi/2}\sin^m\theta\cos^n\theta\ d\theta=\dfrac{\Gamma(\frac{m+1}{2})\Gamma(\frac{n+1}{2})}{2\Gamma(\frac{m+n+2}{2})}} $, $$=2\frac{\Gamma(\frac{2n+1}{2})\Gamma(\frac{0+1}{2})}{2\Gamma(\frac{2n+0+2}{2})}$$ $$=\frac{\Gamma(n+\frac{1}{2})\sqrt{\pi}}{\Gamma(n+1)}$$
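For small $n$ the closed form is easy to confirm numerically (a minimal SciPy sketch):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

for n in range(1, 5):
    val, _ = quad(lambda x, n=n: x**(2 * n) / np.sqrt(1 - x**2), -1, 1)
    closed = gamma(n + 0.5) * np.sqrt(np.pi) / gamma(n + 1)
    print(n, val, closed)  # n=1: pi/2 ~ 1.5708, n=2: 3*pi/8 ~ 1.1781, ...
```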
H: Why don't we need to count the two pre-assigned people in this committee forming probability question? The following question comes from MITx 6.431x. Out of five men and five women, we form a committee consisting of four different people. Assume that 1) each committee of size four is equally likely, 2) Alice and Bob are among the ten people being considered. Calculate the probability that both Alice and Bob are members of the committee. I know the correct solution to this problem; what I do not understand is why isn't $(1/5)^2*\binom{8}{2}/\binom{10}{4}$ the correct way to calculate. $(1/5)^2$ because both Alice and Bob have a 1 out of 5 chances of being picked $\binom{8}{2}$ because after Alice and Bob have been picked, there are two spots left to fill out of 4 men and 4 women $\binom{10}{4}$ because that's the total amount of combinations possible Could anyone help please? In particular, $(1/5)^2$ is not necessary - why? AI: $\left(\frac15\right)^2$ is (incorrectly) the probability of both Alice and Bob being picked. That's it. It is, by itself, a complete (but still incorrect) answer to the problem. It would have been correct if we had wanted the committee to consist specifically of exactly one man and exactly one woman, rather than four people without gender restrictions. The correct answer to this problem is $$ \frac{\text{Number of committees with Alice and Bob}}{\text{Number of possible committees in total}} $$ There are no probabilities in either the numerator or the denominator here. There is no room for $\frac15$ anywhere.
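The correct value is a one-liner to compute:

```python
from math import comb

# committees of size 4 containing both Alice and Bob / all committees
print(comb(8, 2) / comb(10, 4))  # 28/210 = 2/15 ~ 0.1333
```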
H: How to integrate $ \int\frac{x-2}{(7x^2-36x+48)\sqrt{x^2-2x-1}}dx$? How to integrate $$ \int\frac{x-2}{\left(7x^2-36x+48\right)\sqrt{x^2-2x-1}}dx\,\,?$$ The given answer is $$ \color{brown}I=-\dfrac{1}{\sqrt{33}}\cdot \tan^{-1}\left(\frac{\sqrt{3x^2-6x-3}}{\sqrt{11}\cdot (x-3)}\right)+\mathcal{C}.$$ I tried different substitutions, i.e. $\dfrac{x^2 - 2x -1}{x-3} = t$, but I am not getting the desired answer. ORIGINAL QUESTION: This question was asked in our test and the given answer was option D, i.e. none of the given options were correct. AI: $$I=\int \frac{x-2}{(7x^2-36x+48)\sqrt{x^2-2x-1}}\,dx$$ This can be simplified using $$\frac{x-2}{7x^2-36x+48}=\frac 1{7(a-b)}\left(\frac{a-2 } {x-a }+\frac{2-b } {x-b } \right)$$ where $$a=\frac{2}{7} \left(9-i \sqrt{3}\right) \qquad \text{and} \qquad b=\frac{2}{7} \left(9+i \sqrt{3}\right) $$ so we are facing two integrals of the form $$I_c=\int \frac {dx} {(x-c)\sqrt{x^2-2x-1}}$$ Complete the square and let $x=1+\sqrt 2 \sec(t)$ which gives $$I_c=\int \frac{dt}{(1-c) \cos (t)+\sqrt{2}}$$ Now, using the tangent half-angle substitution $$I_c=2\int\frac{du}{\left(c+\sqrt{2}-1\right) u^2-c+\sqrt{2}+1}=\frac{2 }{\sqrt{-c^2+2 c+1}}\tan ^{-1}\left(u\frac{\sqrt{c+\sqrt{2}-1} }{\sqrt{-c+\sqrt{2}+1}}\right)$$ and so on ....
H: How to calculate $ \lim_{x\to\infty} (\frac{x}{x+1})^x$ using L'Hopital's rule? I am trying to calculate $ \lim_{x\to\infty} (\frac{x}{x+1})^x$ using L'Hopital. Apparently without L'Hopital the limit is $$ \lim_{x\to\infty} (\frac{x}{x+1})^x = \lim_{x\to\infty} (1 + \frac{-1}{x+1})^x = \lim_{x\to\infty} (1 - \frac{1}{x+1})^{x+1} \frac{1}{1-\frac{1}{x+1}} = e^{-1} * \frac{1}{1} = \frac{1}{e}$$ I am wondering how one could calculate this limit using L'Hopital's rule. My failed approach: My initial idea was to use the exponential-log trick in combination with the chain rule as follows: $$ \lim_{x\to\infty} (\frac{x}{x+1})^x = e^{\lim_{x\to\infty} x \ln(\frac{x}{x+1})} \quad (1) $$ So, basically the problem that way is reduced to: $$ \lim_{x\to\infty} x \ln(\frac{x}{x+1}) = \lim_{x\to\infty} x * \lim_{x\to\infty}\ln(\frac{x}{x+1}) \quad (2)$$ As far as $\ln(\frac{x}{x+1})$ is concerned, it has the form $f(g(x))$, so using the chain rule for limits and the chain rule for derivatives in order to apply L'Hopital we can rewrite it as: $$ \lim_{x\to\infty} \ln( \lim_{x\to\infty} \frac{(x)'}{(x+1)'}) = \lim_{x\to\infty} \ln(1) \quad (3)$$ But $(2),(3) \to 0 * \infty$, so that failed. Any ideas on how we could approach this in other ways? AI: Caution, $$\lim fg=\lim f\lim g$$ can only be used when the limits on the right both exist, which is not the case here. By L'Hospital $$\lim_{x\to\infty}\log\left(\frac x{x+1}\right)^x=\lim_{x\to\infty}\frac{\log\left(\dfrac x{x+1}\right)}{\dfrac1x}=\lim_{x\to\infty}\frac{\dfrac1x-\dfrac1{x+1}}{-\dfrac1{x^2}}=-\lim_{x\to\infty}\frac x{x+1}=-1.$$ The simplest is, by continuity of the inverse function, $$\lim_{x\to\infty}\left(\frac x{x+1}\right)^x=\frac1{\lim_{x\to\infty}\left(1+\dfrac1x\right)^x}=\frac1e.$$
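A direct numerical check (a minimal sketch):

```python
import math

for x in (10, 1_000, 100_000):
    print(x, (x / (x + 1)) ** x)  # -> 0.367879... = 1/e
print(1 / math.e)
```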
H: Probability conditional or normal? I am struggling with this problem. My work: I did part a) and I think it should be $$\{(0,0),(0,1),(1,0),(1,1),(2,0),(2,1)\}$$ For part b) I am not sure; will it be $$=1-0.4-0.5=0.1\,?$$ I am confused about the difference between "$,$" (joint) and "$|$" (conditional). And I am stumped on part c) and onwards! AI: From the definition of conditional probability, we have $$P(A|B) = \frac{P(A\cap B)}{P(B)}$$ Now, here, we have $$P(I=0|G=0) = \frac{P(I=0,G=0)}{P(G=0)}$$ $$\implies P(I=0,G=0) = 0.4*0.5 = 0.2$$ For the fourth part, you will have to do as above, since you know all the conditional probabilities, and the fact that $P(G=0)=P(G=1)=0.5$. For the 5th part, use $P(I=2 \cap G=1)$ and $P(I=2)$ (just sum over all possible values of $G$), and it would come from the conditional probability definition. For the last part, you have to establish that $$P(G=1,I=2) = P(G=1)\cdot P(I=2)$$
H: Prove that a polynomial ring is integrally closed Let $V \subseteq {\mathbb{A}}^2_{\mathbb{C}}$ be the curve defined by $x^2-y^2+x^3=0$, and let $\mathbb{C}\left [ V \right ]$ be the coordinate ring of $V$. Let $\Theta :=\bar{y}/\bar{x} \in \mathbb{C}\left ( V \right )$. I must show that the ring $B:=\mathbb{C}\left [ V \right ]\left [ \Theta \right ]$ is a UFD. I could show that this ring is an FD (factorization domain), because it is noetherian, but I am not sure how to prove that the factorizations are unique. Any help would be appreciated. I also tried to show that $B$ is isomorphic to a UFD, but I am not sure what domain would be suitable for this argument. (Note: I need to prove that $B$ is a UFD to say that $B$ is integrally closed.) AI: Let us show that $\Bbb{C}[V][\Theta]$ is in fact generated by $\Theta$ as a $\Bbb{C}$-subalgebra of $\Bbb{C}(V).$ Define a morphism $\phi$ as follows: \begin{align*} \phi : \Bbb{C}[x,y]&\to\Bbb{C}[t]\\ x&\mapsto t^2 - 1,\\ y&\mapsto t^3 - t. \end{align*} It is not difficult to check that this factors through the quotient map $\Bbb{C}[x,y]\to\Bbb{C}[V].$ Now, clearly $\phi$ is a surjection onto $\Bbb{C}[t^2 - 1,t^3 - t],$ so that $\Bbb{C}[x,y]/\ker\phi\cong\Bbb{C}[t^2 - 1,t^3 - t].$ We need to prove that $\ker\phi = (x^3 + x^2 - y^2)$. To do so, note that both $\ker\phi$ and $(x^3 + x^2 - y^2)$ are prime. Since $\ker\phi$ is not maximal ($\Bbb{C}[x,y]/\ker\phi$ is visibly not a field), it is properly contained in some maximal ideal $\mathfrak{m}.$ This gives us a chain of prime ideals $$ (0)\subsetneq (x^3 + x^2 - y^2)\subseteq \ker\phi\subsetneq\mathfrak{m}. $$ But $\dim\Bbb{C}[x,y] = 2,$ so that we must have $(x^3 + x^2 - y^2) = \ker\phi.$ Thus, we get an isomorphism $$ \Bbb{C}[x,y]/(x^3 + x^2 - y^2)\cong\Bbb{C}[t^2 - 1,t^3 - t], $$ and $y/x = \Theta$ in the fraction field of the left-hand side corresponds to $t$ in the fraction field of the right-hand side, because $x = \Theta^2 - 1$ and $y = \Theta^3 - \Theta.$ Now, it is easy to see the result, as $\Bbb{C}[V][\Theta]\cong\Bbb{C}[t^2 - 1,t^3 - t][t] = \Bbb{C}[t].$ This implies that $\Bbb{C}[V][\Theta] = \Bbb{C}[\Theta]$ (and that $\Theta$ satisfies no relations over $\Bbb{C}$) as claimed initially. Edit: As user26857 notes, the initial solution I presented (below) is not totally rigorous -- we need some condition on $x$ and $y$ to guarantee that $R[\frac{y}{x}]\cong R[T]/(xT - y).$ In fact, it isn't true that $\Bbb{C}[V][T]/(xT-y)\cong\Bbb{C}[V][y/x]$: the ideal $(xT-y)$ should be $(xT -y, T^2 - x - 1)$ -- this second relation is implicitly assumed and explicitly used. What is below can be made rigorous, either by justifying that the kernel of $\Bbb{C}[V][T]\to\Bbb{C}[V][y/x]$ is precisely $(xT -y, T^2 - x - 1),$ or by writing $x$ and $y$ in terms of $\Theta$ and justifying that $\Theta$ satisfies no additional relations.
First, note that $\Bbb{C}[V]\cong\Bbb{C}[x,y]/(x^3 + x^2 - y^2)$ and observe that $\Theta^2 = \frac{y^2}{x^2} = \frac{x^3 + x^2}{x^2} = x + 1.$ Now, using the fact that $x = \Theta^2-1,$ we find \begin{align*} \Bbb{C}[V][\Theta]&\cong(\Bbb{C}[x,y]/(x^3 + x^2 - y^2))[\Theta]/(x\Theta - y)\\ &= \Bbb{C}[x,y,\Theta]/(x\Theta - y,x^3 + x^2 - y^2)\\ &= \Bbb{C}[y,\Theta]/((\Theta^2 - 1)\Theta - y,(\Theta^2 - 1)^3 + (\Theta^2 - 1)^2 - y^2) \end{align*} However, it is now clear that $y = \Theta^3 - \Theta,$ and hence that \begin{align*} (\Theta^2 - 1)^3 + (\Theta^2 - 1)^2 - y^2 &=(\Theta^2 - 1)^3 + (\Theta^2 - 1)^2 - (\Theta^3 - \Theta)^2\\ &= (\Theta^2 - 1)^2(\Theta^2 - 1 + 1) - (\Theta^3 - \Theta)^2\\ &= \Theta^2(\Theta^2 - 1)^2 - (\Theta^3 - \Theta)^2\\ &= 0. \end{align*} As such, we find that $$((\Theta^2 - 1)\Theta - y,(\Theta^2 - 1)^3 + (\Theta^2 - 1)^2 - y^2) = ((\Theta^2 - 1)\Theta - y),$$ so that \begin{align*} \Bbb{C}[V][\Theta]&\cong \Bbb{C}[y,\Theta]/((\Theta^2 - 1)\Theta - y,(\Theta^2 - 1)^3 + (\Theta^2 - 1)^2 - y^2)\\ &=\Bbb{C}[y,\Theta]/((\Theta^2 - 1)\Theta - y)\\ &=\Bbb{C}[\Theta^3 - \Theta,\Theta]\\ &= \Bbb{C}[\Theta]. \end{align*} A polynomial ring in one variable over a field is clearly a UFD.
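An optional symbolic check (my addition) of the key identity used above, $(\Theta^2-1)^3+(\Theta^2-1)^2-(\Theta^3-\Theta)^2=0$:

```python
import sympy as sp

t = sp.symbols('t')
expr = (t**2 - 1)**3 + (t**2 - 1)**2 - (t**3 - t)**2
print(sp.expand(expr))  # prints 0
```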
H: Evaluating $\lim_{x\to 0} x^{x^{x}}-x^x$ using a graph I came across the question $$\lim_{x\to 0} x^{x^{x}}-x^x$$ I tried plotting the graphs, but the graph of $x^x$ doesn't show up, while $x^{x^x}$ does, which is quite indigestible. When I plotted the graph of $x^{x^x} - x^x$, this graph does exist and gives $-1$ at $x=0$, so I'm totally confused and not able to comprehend it. Any help is welcomed. AI: First find the limit $$\lim_{x\to 0^+} x^x$$ Since $x=e^{\ln x}$, we have $$\lim_{x\to 0^+}x^x=\lim_{x\to 0^+}e^{\ln x^x} = \lim_{x\to 0^+}e^{x\ln x}=\lim_{x\to 0^+}e^{\frac{\ln x}{\frac1x}}$$ Applying L'Hospital to the exponent $\frac{\ln x}{\frac1x}$, it becomes $\frac{1/x}{-1/x^2}=-x\to 0$, so $$\lim_{x\to 0^+}x^x=e^0=1$$ Now $$\lim_{x\to 0^+}x^{x^x}=\lim_{x\to 0^+} e^{x^x\ln x}$$ We know $\lim_{x\to 0^+}x^x=1$, so the exponent behaves like $\ln x$, and $$\lim_{x\to 0^+}e^{\ln x}=\lim_{x\to 0^+}x=0$$ Hence $$\lim_{x\to 0^+}\left(x^{x^x}-x^x\right)=0-1=-1$$
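A numerical illustration (my addition) of why the graph shows $-1$ near the origin:

```python
# x^(x^x) - x^x for small positive x; the values approach -1
for x in [0.1, 0.01, 0.001, 1e-6]:
    print(x, x**(x**x) - x**x)
```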
H: How to show that $\arctan(|x-y|)\le\arctan(|x-z|)+\arctan(|y-z|)$ I have to show that $\delta$ is a metric with: $$\delta(x,y):=\arctan(|x-y|)$$ The first two axioms are really straightforward, but I struggle with showing $$\arctan(|x-y|)\le\arctan(|x-z|)+\arctan(|y-z|)$$ My first try was (since $\arctan$ is monotonically increasing) $$\arctan(|x-y|)\le\arctan(|x-z|+|y-z|)$$ But here I'm stuck. I checked the graph just to make sure; the statement should be correct if we can say: $$\arctan(|x-z|+|y-z|)\le\arctan(|x-z|)+\arctan(|y-z|)$$ By plotting $\arctan(2x)$ and $2\arctan(x)$, the curves led me to believe that the statement $$\arctan(|x-z|+|y-z|)\le\arctan(|x-z|)+\arctan(|y-z|)$$ is indeed correct. But I cannot show it. I tried going with Taylor series, but I definitely cannot see how this is true: $$\sum\limits_{k=0}^\infty (-1)^k\frac{(|x-z|+|y-z|)^{2k+1}}{2k+1}\le\sum\limits_{k=0}^\infty (-1)^k\frac{|x-z|^{2k+1}}{2k+1}+\sum\limits_{k=0}^\infty (-1)^k\frac{|y-z|^{2k+1}}{2k+1}$$ The addition theorems all come with conditions I cannot satisfy for my general statement. Or do I have to make cases? Would be great if someone could give me a hint. AI: In general, let $f : [0, \infty) \to \mathbb{R}$ satisfy: $f(0) = 0$, $f$ is non-decreasing, $f$ is concave. Note that $\arctan(\cdot)$ satisfies these properties. Then by concavity, for any $a, b \geq 0$ such that $a+b > 0$, $$ f(a) = f\left( \frac{a}{a+b} (a+b) + \frac{b}{a+b} (0) \right) \geq \frac{a}{a+b}f(a+b) + \frac{b}{a+b}f(0) = \frac{a}{a+b}f(a+b). $$ By interchanging the roles of $a$ and $b$, we also get $$ f(b) \geq \frac{b}{a+b} f(a+b). $$ Adding the two inequalities proves $$ f(a+b) \leq f(a) + f(b). $$ Finally, since $f$ is non-decreasing, for any $x, y, z \in \mathbb{R}$, $$ f(|x-y|) \leq f(|x-z| + |z-y|) \leq f(|x-z|) + f(|y-z|). $$ Remark. More generally, if $d$ is a metric, then $f\circ d$ is also a metric (provided $f(t)=0$ only for $t=0$). A standard application of this observation is the proof of the fact that any metrizable space admits a bounded metric that realizes its topology.
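An empirical spot-check (my addition) of the subadditivity $\arctan(a+b)\le\arctan(a)+\arctan(b)$ for $a,b\ge 0$, which is the concavity step above:

```python
import math, random

random.seed(0)
ok = all(
    math.atan(a + b) <= math.atan(a) + math.atan(b) + 1e-12  # tolerance for rounding
    for a, b in ((random.uniform(0, 100), random.uniform(0, 100)) for _ in range(10_000))
)
print(ok)  # True
```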
H: How to calculate $\lim_{x \to \infty} \left( \frac{1}{\sin^2(x)} - \frac{1}{x^2} \right)$? I am trying to calculate the limit, using L'Hospital's Rule. $$\lim_{x \to \infty} \left( \frac{1}{\sin^2(x)} - \frac{1}{x^2} \right)$$ My attempt $$\lim_{x \to \infty} \left( \frac{1}{\sin^2(x)} - \frac{1}{x^2} \right) = \lim_{x \to \infty} \frac{(x^2 - \sin^2(x))'}{(x^2\sin^2(x))'} = \lim_{x \to \infty} \frac{2x - \sin(2x)}{2x\sin^2(x) +x^2\sin(2x)} $$ I stopped trying at that point because the limit seems to get overly complicated and I ran out of other ideas. Any tips on how to solve this? Extra side-note question: I generally struggle when I try to solve limits that involve infinity with trigonometric functions. Is there a general rule to reduce these problems to easier ones? AI: This limit does not exist, because $\sin x$ keeps oscillating and never settles as $x \to \infty$. Take the two sequences $x_n=n\pi$ and $x'_n=(n+1/2)\pi$, both of which tend to $\infty$. Along $x'_n$ we have $\frac{1}{\sin^2 x'_n}=1$, so the expression tends to $1$; on the other hand, as $x$ approaches any $x_n$, the term $\frac{1}{\sin^2 x}$ blows up to $\infty$. Since the expression behaves differently along different routes to $\infty$, the limit does not exist.
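A numerical demonstration (my addition) of the two routes to $\infty$ used in the answer:

```python
import math

f = lambda x: 1 / math.sin(x) ** 2 - 1 / x ** 2
for n in [10, 100, 1000]:
    # along (n + 1/2)*pi the value is near 1; just past n*pi it is huge
    print(f((n + 0.5) * math.pi), f(n * math.pi + 1e-3))
```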
H: If $ \lim a_{n}b_{n}=\gamma $ and $ \lim b_{n}=1 $ then $ \lim a_{n}=\gamma $ Let $a_n,b_n$ be sequences such that $ \lim a_{n}b_{n}=\gamma $ (possibly infinity or minus infinity) and $ \lim b_{n}=1 $. Prove or disprove that $ \lim a_{n}=\gamma $. I tried to prove it using the epsilon-delta definition, but I couldn't finish the proof. Any ideas would help. Thanks in advance. AI: If $\gamma$ is not finite, we may assume that it's $+\infty$ (the proof for $-\infty$ is similar; you can do it as an exercise). Let $x>0$ be given. We need to show that there exists an $N$ so that $a_n>x$ for all $n>N$. Since $\lim b_n=1$, we can find an $N_1$ so that $1/2<b_n<3/2$ for all $n>N_1$. Similarly, we can find an $N_2$ so that $a_nb_n>x$ for all $n>N_2$. Rearranging: $$a_n >\frac{x}{b_n}>x \cdot \frac{2}{3}$$ So we can pick any $N>\max\{N_1,N_2\}$. The proof for the case $\gamma \in \mathbb{R}$ is similar: the idea is that if $b_n$ is close to $1$ and $a_nb_n$ is close to $\gamma$, then $a_n=\frac{a_nb_n}{b_n}$ must be close to $\frac{\gamma}{1}=\gamma$.
H: Bolzano Weierstrass Theorem for General Metric Spaces Though $\mathbb{R}$ is not compact, because of the LUB axiom one can conclude the BW theorem, i.e. every bounded sequence has a convergent subsequence. My questions are: In what kind of metric spaces does this result hold? (Example: in compact metric spaces it holds.) Is there a characterization of these spaces? Can we find an example of an unbounded metric space (like $\mathbb{R}$) where the BW property holds? In what kind of metric space is it true that every bounded sequence has a Cauchy subsequence? (Example: in totally bounded metric spaces this holds.) Is there a characterization of these spaces? AI: There are two things going on here. A metric space in which every Cauchy sequence converges is called complete. A metric space in which every sequence has a Cauchy subsequence is precisely a totally bounded one. So what you need is a complete metric space in which every bounded subset is totally bounded. Examples are: any compact metric space, any Euclidean space $\mathbb{R}^n$, any closed subspace of a Euclidean space. In general this property is called the Heine-Borel property.
H: If $A \leq B$ for a positive-definite operator $A$ in finite dimensions, then $B^{-1} \leq A^{-1}$ Exercise 13 from SEC. 82 of Finite-Dimensional Vector Spaces - 2nd Edition by Paul R. Halmos. If a linear transformation $A$ on a finite-dimensional inner product space is strictly positive (positive-definite), and if $A \leq B$, then $B^{-1} \leq A^{-1}$. (The underlying field is not specified as real or complex.) I have been able to establish the assertion in the simple case wherein $A = 1$. Unable to establish the assertion in the other case wherein $A \neq 1$. Would appreciate pointers. Thanks. Proof for the case $A = 1$: let $[B] = (\beta_{ij})$ be the diagonal form for $B$ under a suitable orthonormal basis $X$. Because $A = 1$, the matrix $[A] = (\alpha_{ij})$ of $A$ under $X$ is also a diagonal matrix with each diagonal entry $\alpha_{ii}$ equal to $1$. Now, since $B-A$ is positive, the matrix $[B]-[A]$ is positive, and therefore we have $\beta_{ii} - \alpha_{ii} \geq 0$ for all $i \implies \beta_{ii} \geq 1$ for all $i$. It follows that $\frac{1}{\beta_{ii}} \leq 1$ for all $i$, which in turn results in the diagonal matrix $[C] = \left[A^{-1}\right]-\left[B^{-1}\right]$ having each diagonal entry positive. That is, $[C]$ is a positive matrix. Since $[C]$ together with $X$ defines the transformation $A^{-1}-B^{-1}$, we infer that $A^{-1}-B^{-1}$ is positive, and the assertion follows. AI: For invertible symmetric $S$, the inequality $A \leq B$ gives $SAS \leq SBS$, since $(SASx\mid x) = (A\,Sx\mid Sx) \leq (B\,Sx\mid Sx) = (SBSx\mid x)$. Apply this with $S=H$, where $H$ is the symmetric positive square root of $A^{-1}$, i.e. $H^2=A^{-1}$ (such an $H$ exists by the spectral theorem). You arrive at $1 \leq HBH$. Looking at eigenvalues, it follows that $(HBH)^{-1} \leq 1$, and applying the first observation again with $S=H$, $$H(HBH)^{-1}H \leq H \cdot 1 \cdot H = H^2 = A^{-1}.$$ Finally, $H(HBH)^{-1}H = HH^{-1}B^{-1}H^{-1}H = B^{-1}$, which completes the proof.
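A numerical spot-check (my addition): generate a random pair $0 < A \le B$ and confirm that $A^{-1}-B^{-1}$ is positive semidefinite.

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)      # strictly positive definite
N = rng.standard_normal((4, 4))
B = A + N @ N.T              # B >= A by construction
diff = np.linalg.inv(A) - np.linalg.inv(B)
print(np.linalg.eigvalsh(diff))  # all eigenvalues >= 0 (up to rounding)
```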
H: Prove: $\sum_{n=0}^\infty \frac{a_n}{n!}x^n$ converges for every $x$ if $\sum_{n=0}^\infty a_nx^n$ has radius of convergence $R>0$ $\sum_{n=0}^\infty a_nx^n$ has radius of convergence $R>0$. Prove: $\sum_{n=0}^\infty \frac{a_n}{n!}x^n$ converges for every $x$. Also, I don't understand why $R < 1$ implies $$\lim_{n \to \infty} a_n \neq 0$$ If the limit exists the solution would be easy, but I can't be sure it exists. AI: Hint: $\sum |a_n| (\frac R 2)^{n} <\infty$. This implies that the sequence $|a_n| (\frac R 2)^{n}$ is bounded. Suppose $|a_n| (\frac R 2)^{n} \leq C$. Then $\sum \frac {a_nx^{n}} {n!}$ is dominated by $C \sum \frac {(2|x|/R)^{n}} {n!}$, which converges for every $x$ (it is essentially the exponential series).
H: Cholesky decomposition of a Kronecker product Assume that the $n\times n$ matrix $\mathbf{A}$ has the Cholesky decomposition of the form $\mathbf{A}=\mathbf{L}\mathbf{L}^H$. Now, suppose the matrix $\mathbf{B}$ is the result of a Kronecker product as $\mathbf{B}=\mathbf{I}\otimes\mathbf{I}\otimes\mathbf{A}$ where $\mathbf{I}$ is $2\times 2$ identity matrix. Can we find the Cholesky decomposition of $\mathbf{B}$ in terms of $\mathbf{L}$? AI: Yes. It is $$ I \otimes I \otimes A = (I \otimes I \otimes L)(I \otimes I \otimes L)^H. $$
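A verification sketch (my addition): $\mathbf{I}\otimes\mathbf{I}\otimes\mathbf{L}$ is lower triangular with positive diagonal whenever $\mathbf{L}$ is, so it is genuinely the Cholesky factor; the product identity can be checked numerically.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
A = M @ M.conj().T + np.eye(3)        # Hermitian positive definite
L = np.linalg.cholesky(A)             # A = L L^H
I2 = np.eye(2)
B = np.kron(I2, np.kron(I2, A))
LB = np.kron(I2, np.kron(I2, L))      # candidate Cholesky factor of B
print(np.allclose(LB @ LB.conj().T, B))  # True
```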
H: Unclear problem with $n$-th power matrix and limit Find $$\lim\limits_{n \to \infty} \frac{A_n}{D_n}$$ where $$\begin{pmatrix} 19 & -48 \\ 8 & -21 \\ \end{pmatrix} ^{\! n} = \begin{pmatrix} A_n & B_n \\ C_n & D_n \\ \end{pmatrix}$$ Here $n$ is the power of the matrix, but what are $A_n, B_n, C_n, D_n$ then? Are they the corresponding entries of the matrix raised to the $n$-th power? What is this type of problem called? And what is the way to solve it? AI: $$A:=\begin{pmatrix}19&-48\\8&-21\end{pmatrix}=\begin{pmatrix}2&3\\1&1\end{pmatrix}\begin{pmatrix}-5&0\\0&3\end{pmatrix}\begin{pmatrix}2&3\\1&1\end{pmatrix}^{-1}=PDP^{-1}$$ So $A^n=PDP^{-1}PDP^{-1}\cdots PDP^{-1}=PD^nP^{-1}$ That is, $$A^n=\begin{pmatrix}2&3\\1&1\end{pmatrix}\begin{pmatrix}(-5)^n&0\\0&3^n\end{pmatrix}\begin{pmatrix}2&3\\1&1\end{pmatrix}^{-1}=\begin{pmatrix}-2 (-5)^n + 3^{1 + n}& 6 (-5)^n - 2\times 3^{1 + n}\\-(-5)^n + 3^n& 3 (-5)^n - 2\times 3^n\end{pmatrix}$$ Since $|-5|>3$, divide the numerator and denominator of $\frac{A_n}{D_n}$ by $(-5)^n$: $$\frac{A_n}{D_n}=\frac{-2+3\left(-\frac{3}{5}\right)^{n}}{3-2\left(-\frac{3}{5}\right)^{n}}\longrightarrow -\frac{2}{3}.$$
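A quick numerical confirmation (my addition), using exact integer arithmetic so the huge entries don't overflow:

```python
def matmul(X, Y):
    # 2x2 integer matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

A = [[19, -48], [8, -21]]
P = [[1, 0], [0, 1]]
for _ in range(30):          # P = A^30
    P = matmul(P, A)
print(P[0][0] / P[1][1])     # approximately -2/3
```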
H: Show that if $T,T'$ are edge-distinct minimum spanning trees of $G$, then $T$ has two edges of the same weight Let $G=(V,E)$ be an undirected graph. Let $w:E\mapsto \mathbb{R}$ be a weight function over the edges. Let $T,T'$ be two minimum spanning trees with distinct edges (namely, $T\cap T' = \emptyset$). Show that there exist two different edges $e_1,e_2 \in T$ such that $w(e_1)=w(e_2)$. I tried to prove it using the correctness of Kruskal's algorithm, but I could not derive a contradiction... AI: Let the smallest weight be $m$. If the edges of weight $m$ do not form any cycles, then they must all be included in both trees, contradicting edge-disjointness. If there is a cycle consisting of edges of weight $m$, then there must be at least three edges of weight $m$, and at least two of them must be in each of the trees, and we're done.
H: Distribute $n$ distinguishable balls into $k$ distinguishable baskets Given a number $n$ and $k$ numbers $n_1,n_2,n_3\ldots, n_k \in \mathbb{N}$ such that $n_1+n_2+n_3+\ldots+ n_k=n$, how many ways are there to distribute $n$ distinguishable balls into $k$ distinguishable baskets so that exactly $n_i$ balls are placed in each basket $i$, $i =1,2,\ldots, k$? Also, how many ways are there to distribute $n$ distinguishable balls into $k$ distinguishable baskets if there is no restriction on the number of balls in each basket? I can't really understand the logic of this. I mean, there are $n$ balls by the given formula $n_1+n_2+\ldots+n_k=n$ and there are $k$ baskets? So what's the deal with "$n_1, n_2,\dots$ etc."? Why isn't it "$x_1,x_2,\dots$ etc."? How do you think I should do it? If they were identical balls I would use the ${n+k-1\choose{k-1}}$ formula, but here they are different. I can't really figure out what I should do in both of those questions. Thanks. For the second question, is it going to be $k^n$? ($k$: number of baskets; $n$: number of balls) Edit: The bins are not identical. I thought about it, and if $n_1,n_2,n_3,\dots,n_k$ are simply numbers which represent the amount of balls in each bin (for example $n_1$ balls in bin number $1$, $n_2$ balls in bin number $2$ and so on), then there is only one option, right? Because we already have the exact amount of balls in each basket. But maybe something is fishy, because we can find a lot of options for $n_1+n_2+...+n_k=n$ ... I mean, $n_1$ can be different in each option... AI: You are correct that the number of ways of distributing $n$ distinguishable balls to $k$ distinguishable bins without restriction is $k^n$ since there are $k$ choices for each of the $n$ balls. As for the number of ways of distributing $n = n_1 + n_2 + n_3 + \cdots + n_k$ balls to $k$ distinguishable baskets so that exactly $n_i$ balls are placed in basket $i$, $i = 1, 2, \ldots, k$, select which $n_1$ of the $n$ balls are placed in the first basket, which $n_2$ of the remaining $n - n_1$ balls are placed in the second basket, which $n_3$ of the remaining $n - n_1 - n_2$ balls are placed in the third basket, and so forth until you are left with $n_k$ balls to choose from the remaining $n - n_1 - n_2 - \cdots - n_{k - 1}$ to place in the $k$th basket. This can be done in $$\binom{n}{n_1}\binom{n - n_1}{n_2}\binom{n - n_1 - n_2}{n_3} \cdots \binom{n - n_1 - n_2 - \cdots - n_{k - 1}}{n_k}$$ ways. Let's simplify the above expression. \begin{align*} & \binom{n}{n_1}\binom{n - n_1}{n_2}\binom{n - n_1 - n_2}{n_3} \cdots \binom{n - n_1 - n_2 - \cdots - n_{k - 1}}{n_k}\\ & \qquad = \frac{n!}{n_1!(n - n_1)!} \cdot \frac{(n - n_1)!}{n_2!(n - n_1 - n_2)!} \cdot \frac{(n - n_1 - n_2)!}{n_3!(n - n_1 - n_2 - n_3)!} \cdots \frac{(n - n_1 - n_2 - n_3 - \cdots - n_{k - 1})!}{n_k!(n - n_1 - n_2 - n_3 - \cdots - n_{k - 1} - n_k)!}\\ & \qquad = \frac{n!}{n_1!n_2!n_3! \cdots n_k!(n - n_1 - n_2 - n_3 - \cdots - n_{k - 1} - n_k)!}\\ & \qquad = \frac{n!}{n_1!n_2!n_3! \cdots n_k!0!}\\ & \qquad = \frac{n!}{n_1!n_2!n_3! \cdots n_k!} \end{align*} where we have used the fact that $n = n_1 + n_2 + n_3 + \cdots + n_k$ in the penultimate line. Why does this answer make sense? Imagine lining up all $n$ balls in some order. We can do this in $n!$ ways. Place the first $n_1$ balls in the first box, the next $n_2$ balls in the second box, the next $n_3$ balls in the third box, and so forth until we place the last $n_k$ balls in the $k$th box.
The factors in the denominator represent the number of orders in which the same $n_i$ balls could be placed in the $i$th box without changing the distribution. Addendum: If we impose the additional requirement that there must be at least one ball in each basket, then we must subtract those distributions which leave one or more of the baskets empty. There are $\binom{k}{j}$ ways to exclude $j$ of the baskets from receiving a ball and $(k - j)^n$ ways to distribute the $n$ balls to the remaining $k - j$ baskets. Thus, by the Inclusion-Exclusion Principle, the number of ways of distributing $n$ distinguishable balls to $k$ distinguishable baskets so that no basket is left empty is $$\sum_{j = 0}^{k} (-1)^{j} \binom{k}{j}(k - j)^n$$ This is also the number of surjective functions from a set with $n$ elements to a set with $k$ elements.
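A brute-force cross-check (my addition) of both counts for small parameters, say $n=5$ balls and $k=3$ baskets:

```python
from itertools import product
from math import comb, factorial

n, k = 5, 3
placements = list(product(range(k), repeat=n))  # all k^n assignments of balls to baskets

# multinomial count for the fixed occupancy (n1, n2, n3) = (2, 2, 1)
target = (2, 2, 1)
count = sum(1 for p in placements if tuple(p.count(i) for i in range(k)) == target)
print(count, factorial(n) // (factorial(2) * factorial(2) * factorial(1)))  # 30 30

# surjective distributions versus the inclusion-exclusion sum
onto = sum(1 for p in placements if len(set(p)) == k)
print(onto, sum((-1) ** j * comb(k, j) * (k - j) ** n for j in range(k + 1)))  # 150 150
```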
H: Find the maximum $x$ coordinate of a point so that the area of a quadrilateral is $48$ In the $Oxy$ rectangular coordinate system we're given points $O(0,0), A(0,6)$ and $B(8,0)$. The point $P$ is chosen so that $OAPB$ is a convex quadrilateral with an area of $48$. Find such a $P$ with maximum $x \in \mathbb{Z}$ value. Here's what I did: first off, we can draw a line from $A$ to $B$; we get a right triangle with area $24$. Therefore, the area of triangle $PAB = 48 - 24 = 24$. We also know, from Pythagoras, that $AB = 10$, so the height from point $P$ to side $AB$ will be $\frac{2\cdot24}{10} = 4.8$. I'm not sure how to go on from here. Edit: I've added a picture, so it's easier to see. AI: You're almost there. What is the locus of points that are at a fixed perpendicular distance from a given line? It is a parallel line, at the given distance. Now, the line $AB$ is $6x +8y = 48$. Hence the line you need is of the form $6x+8y = c$, where $c$ is found using the perpendicular distance. To find the distance between the two lines, we use the following: $$d = \frac{|c-48|}{\sqrt{6^2 + 8^2}} \implies \frac{|c-48|}{10} = 4.8$$ $$\implies c = 96$$ If you notice, even $c=0$ would give us the same distance, but that would not give us the maximum $x$ coordinate. Now, to maximize the $x$-coordinate, we need the point on $6x+8y = 96$ with the biggest allowable integer $x$ coordinate. Here $x = 15$ is clearly the largest allowable value, as $x=16$ would collapse $OAPB$ into a triangle ($P$ would land on the $x$-axis), and $x > 16$ would make a concave quadrilateral.
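A shoelace-formula check (my addition) that $P(15, 0.75)$, the point on $6x+8y=96$ with $x=15$, really gives area $48$:

```python
# vertices of OAPB in order
pts = [(0, 0), (0, 6), (15, 0.75), (8, 0)]
area = abs(sum(x1 * y2 - x2 * y1
               for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]))) / 2
print(area)  # 48.0
```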
H: Does $x^n$ belong to $O(e^x)$ for all $n\geq 1$? My question is essentially two-fold. I've been asked to prove that $x^5 \in O(e^x)$ as $x\to \infty$, and trying to do that I decided to plot some functions of the form $x^n$ next to $e^x$, and noted that after some (possibly very large) point $e^x$ tends to outgrow $x^n$. Now, I only tested this up to about $n=7$ as the numbers get extremely large, but I wonder if the pattern holds up for all $n$? I tried thinking about this inductively, i.e. supposing that $x^n\in O(e^x)$ up to some $n$; then for $x^{n+1}$ we have that its rate of growth is $(n+1)x^n$, which is in $O(e^x)$ by the inductive hypothesis, hence $e^x$ must outgrow $x^{n+1}$ eventually. I am not sure, however, if that is correct. Either way, I lack the intuition as to why that happens (if it does), so any explanation would be appreciated. AI: Hint: $e^x = \sum_{k = 0}^{\infty} \frac{x^k}{k!}$, which is a sum of positive terms for $x>0$. In particular, $e^x \geq \frac{x^{n+1}}{(n+1)!}$ for $x > 0$, so $\frac{x^n}{e^x} \leq \frac{(n+1)!}{x} \to 0$.
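A small numerical illustration (my addition) of how decisively $e^x$ wins, even for $n=7$:

```python
import math

n = 7
for x in [10, 50, 100, 200]:
    print(x, x ** n / math.exp(x))  # the ratio collapses toward 0
```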
H: Law of Cosines: Proof Without Words I am trying to prove the Law of Cosines using the following diagram taken from Thomas' Calculus 11th edition. I have an answer, but I think there must be a simpler or better way to do it. Here is my answer: Construct a coordinate system such that $(0,0)$ is located at the bottom right corner of the pictured triangle. Then the red line intersects the hypotenuse at $(-a,0)$ and a leg at $(-b\cos\theta,b\sin\theta)$. Thus the squared distance $c$ from $(-a,0)$ to $(-b\cos\theta,b\sin\theta)$ is \begin{align} c^2&=(-b\cos\theta-(-a))^2 + (b\sin\theta)^2\\ &=a^2-2ab\cos\theta+b^2\cos^2\theta+b^2\sin^2\theta\\ &=a^2+b^2-2ab\cos\theta. \end{align} I feel like there has to be a simpler way, since my proof is basically ignoring the right triangle, the circle, etc. If somebody can show me another proof, that would be great. Thanks. UPDATE: It looks like I needed the Intersecting Chords Theorem from Geometry to write $(a+c)(a-c)=(2a\cos\theta-b)(b)$. AI: The image was a little difficult for me to parse at first, so here's a refinement: Now ... With $A$ the vertex opposite $a$ in the $a$-$b$-$c$ triangle, we can express the power of $A$ with respect to the circle as two chord-chord products to get $$(2a\cos\theta-b)\cdot b = (a-c)\cdot(a+c)$$ and the result follows. $\square$
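For completeness (my addition), expanding the chord-chord identity recovers the law of cosines directly: $$(2a\cos\theta-b)\,b=(a-c)(a+c)\;\Longrightarrow\;2ab\cos\theta-b^2=a^2-c^2\;\Longrightarrow\;c^2=a^2+b^2-2ab\cos\theta.$$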
H: Why can't $a_n$ be zero in a polynomial function? I was looking at the definition of a polynomial function, which is pretty much always stated like this: $$P(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\cdots+ a_{1}x+a_{0}\\ a_{i}\in \mathbb{R}\, , i=0,1,2,\cdots,n\\a_{n}\neq 0$$ I was always wondering why it is that $a_{n}\neq 0$ in this definition? AI: That's because the degree of a polynomial is often a very important quantity, and the specification that $a_n\neq0$ is another way to say that $P$ has degree $n$. Or maybe your quote is used to define what "degree $n$ polynomial" means, and then the specification is crucial and unavoidable. Without the specification, the only thing we know is that $P$ has degree at most $n$. Unless that's exactly what we want, we have to spend a sentence or two to establish the degree and the highest non-zero coefficient. That's cumbersome and mostly unhelpful.
H: Does a linear function $\mathrm {End}_K(V) \otimes \mathrm {End}_K(V) \to \mathrm {End}_K(V)$ which maps $A \otimes B$ to $A \circ B$ exist? $K$ is a field and $V$ a finite dimensional $K$-vector space. Does a linear function $\mathrm {End}_K(V) \otimes \mathrm {End}_K(V) \to \mathrm {End}_K(V)$ which maps $A \otimes B$ to $A \circ B$, for all $A,B \in \mathrm {End}_K(V)$ exist? AI: Based on the fact that you have asked several questions that have essentially the same answer, it seems that what you really need is an explanation of the universal property that characterizes tensor products. In particular: For any vector spaces $V,W,Z$ and any bilinear map $h: V \times W \to Z$, there exists a unique linear map $\phi: V \otimes W \to Z$ that satisfies $\phi(v \otimes w) = h(v,w)$. In your case, we have $h:\operatorname{End}(V)\times \operatorname{End}(V) \to \operatorname{End}(V)$ defined by $h(A,B) = A \circ B$. This is a bilinear map, which is to say that we have $$ h(c_1 A_1 + c_2A_2,B) = c_1h(A_1,B) + c_2h(A_2,B)\\ h(A,c_1 B_1 + c_2B_2) = c_1h(A,B_1) + c_2h(A,B_2) $$ for all $A,B \in \operatorname{End}(V)$ and $c_1,c_2 \in K$. It follows that there is a unique linear map $\phi: \operatorname{End}(V)\otimes \operatorname{End}(V) \to \operatorname{End}(V)$ for which $\phi(A \otimes B) = A \circ B$.
H: Spivak Calculus Chapter 3 Problem 10-(d) I am currently working through Michael Spivak's „Calculus“ 3rd edition all by myself and came across this problem, which might not be that important at all, but I am still curious to find out more about it. English is not my first language, so I apologize in advance for my mistakes. Chapter 3, Problem 10-(d): What conditions must the functions $a$ and $b$ satisfy if there is to be a function $x$ such that $$a(t)x(t)+b(t)=0$$ for all numbers $t$? How many such functions $x$ will there be? My Answer: There are three possible cases. If $a(t)=0$ for all $t$, then $b(t)=0$ for all $t$. It follows that there exist infinitely many functions $x$ so that $a(t)x(t)+b(t)=0$ holds. If $a(t)\neq0$ for all $t$, it follows that there is a unique function $x$ such that $x(t)=-\frac{b(t)}{a(t)}$ for all $t$ so that $a(t)x(t)+b(t)=0$ is true. (This includes the case $x(t)=b(t)=0$ for all $t$.) If $a$ is a function that is not always $0$ but has some roots, i.e. $a(t_i)=0$, then $b$ has to be a function that has the same roots, i.e. $b(t_i)=0$. Because of (1.) there exist infinitely many functions $x$ so that $a(t)x(t)+b(t)=0$ holds. I looked up the correct answer in Spivak's answer book for calculus, which reads as follows: (d) $b(t)$ must $= 0$ whenever $a(t)=0$. If $a(t)\neq0$ for all $t$, then there is a unique such function, namely $x(t)=\frac{a(t)}{b(t)}$. If $a(t)=0$ for some $t$, then $x(t)$ can be chosen arbitrarily, so there are infinitely many such $x$. I find this very confusing. Why does $x$ have to be $x(t)=\frac{a(t)}{b(t)}$, if $a(t)\neq0$ for all $t$? AI: The given solution is wrong and you are right. It should be $$x(t) = -\frac{b(t)}{a(t)}$$
H: Chebychev's inequality over a discrete random variable: $<$ vs $\le$ By Chebyshev's inequality it holds that $$ \begin{split} Pr(|X-\mu |<\epsilon )>1-{\frac {\sigma ^{2}}{\epsilon ^{2}}} \\ \end{split} $$ For a discrete random variable $X$, does the following hold? $$ \begin{split} Pr(|X-\mu | \le \epsilon )>1-{\frac {\sigma ^{2}}{\epsilon ^{2}}} \\ \end{split} $$ Best regards AI: Well, Chebyshev's inequality is $$P(|X- \mu|\geq\epsilon)\leq \frac{\sigma^2}{\epsilon^2}$$ In words, it says that if the variance is small, then the random variable is unlikely to fall too far from the mean. As for your question: since $\{|X-\mu|<\epsilon\}\subseteq\{|X-\mu|\le\epsilon\}$, monotonicity of probability gives $$P(|X-\mu|\le\epsilon)\geq P(|X-\mu|<\epsilon)\geq 1-\frac{\sigma^2}{\epsilon^2},$$ so the second bound holds (discrete or not) whenever the first one does.
H: How many ordered pairs $(m, n)$ can be formed if $m+n=190$ and $m$ and $n$ are coprime positive integers? The question involves concepts from number theory. Kindly provide hints to solve the question, not the entire solution. I don't know how to approach these kinds of problems. AI: If $d=\gcd(m,190-m)$, then $d \mid 190=2 \cdot 5 \cdot 19$. But we want $d=1$, so we need $2,5,19 \nmid m$. So we want $m \in S=\{1,2,3, \ldots, 189\}$ such that $m$ is not divisible by $2, 5$ or $19$. Let $P_i:= \text{property that } m \text{ is divisible by } i$ and $n(P_i):=$ the number of integers in $S$ that have property $P_i$. Then we want $$n(\bar{P_2} \cap \bar{P_5} \cap \bar{P_{19}})=N-\left[n(P_2)+n(P_5)+n(P_{19})\right]+\left[n(P_2 \cap P_5)+n(P_2 \cap P_{19})+n(P_5 \cap P_{19})\right]-\left[n(P_2 \cap P_5 \cap P_{19})\right].$$ Thus, $$n(\bar{P_2} \cap \bar{P_5} \cap \bar{P_{19}})=189-\left[94+37+9\right]+\left[18+4+1\right]-\left[0\right]=\color{red}{72}.$$ So the number of values that $m$ can take is $72$.
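A brute-force confirmation (my addition) of the count:

```python
from math import gcd

print(sum(1 for m in range(1, 190) if gcd(m, 190 - m) == 1))  # 72
```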