H: Find whether the sequence is convergent. Find whether the sequence $(a_n)$ given by $a_{n+1}= \sqrt{a_n}+\sqrt{a_{n-1}}$, where $a_1=1$ and $a_2=2$, is convergent. So, $a_{n+1}-a_{n}= \sqrt{a_n} + \sqrt{a_{n-1}} -a_n = \sqrt{a_n}(1-\sqrt{a_n})+ \sqrt{a_{n-1}}.$ Now I assumed the sequence is $>1$, showed it by induction, and then obtained $a_{n+1}-a_{n} < \sqrt{a_{n-1}}$. Any help from here? AI: We show that the sequence is convergent by showing that it's bounded and monotone. Claim 1. $2 \le a_n \le 4$ for all $n \ge 2.$ Proof. We prove this via induction. For $n = 2, 3$, it is manually verified. Let $P(n)$ denote the statement "$2 \le a_n \le 4$". Assume that $n \ge 4$ and that $P(k)$ is true for all $2 \le k \le n-1$. We prove that $P(n)$ is true. By hypothesis, we have \begin{align} a_n &= \sqrt{a_{n-1}} + \sqrt{a_{n-2}}\\ &\ge \sqrt{2} + \sqrt{2} = 2\sqrt{2}\\ &\ge2. \end{align} Similarly, we have \begin{align} a_n &= \sqrt{a_{n-1}} + \sqrt{a_{n-2}}\\ &\le \sqrt{4} + \sqrt{4}\\ &=4. \end{align} This proves the statement. Claim 2. $a_n \le a_{n+1}$ for all $n \ge 1$. Proof. Let $P(n)$ denote the statement "$a_n \le a_{n+1}$". $P(n)$ can be manually verified for $n = 1, 2, 3.$ Assume that $n \ge 4$ and that $P(k)$ is true for all $1 \le k \le n-1$. We prove that $P(n)$ is true. Using the hypothesis, we see that $$a_n \ge a_{n-1} \ge a_{n-2}.$$ (Note that $n \ge 3$, so all these terms are defined.) By the previous claim, we also see that all the terms are positive and thus, we can conclude $$\sqrt{a_n} \ge \sqrt{a_{n-2}}. \quad (*)$$ Now, we have \begin{align} a_{n+1} - a_n &= \sqrt{a_n} + \sqrt{a_{n-1}} - \sqrt{a_{n-1}} - \sqrt{a_{n-2}}\\ &= \sqrt{a_n} - \sqrt{a_{n-2}}\\ &\ge 0. \end{align} The last inequality follows from $(*)$. This proves this claim as well. By Claim 1, the sequence is bounded and by Claim 2, the sequence is monotone. Thus, the sequence converges. (The value of the limit can be found to be $4$.)
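A quick numerical check of both claims and of the limit $4$ (my own sketch, not part of the original answer; plain Python, no external libraries):

```python
import math

def terms(n_terms=60):
    """Generate a_1, ..., a_{n_terms} for a_{n+1} = sqrt(a_n) + sqrt(a_{n-1})."""
    a = [1.0, 2.0]                      # a_1 = 1, a_2 = 2
    while len(a) < n_terms:
        a.append(math.sqrt(a[-1]) + math.sqrt(a[-2]))
    return a

a = terms()
assert all(2 <= x <= 4 for x in a[1:])          # Claim 1: 2 <= a_n <= 4 for n >= 2
assert all(x <= y for x, y in zip(a, a[1:]))    # Claim 2: the sequence is non-decreasing
print(a[-1])    # close to 4, the solution of L = 2*sqrt(L) with L >= 2
```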
H: Why is the realification of a simple complex Lie algebra a semisimple real Lie algebra? Why is the realification of a simple complex Lie algebra a semisimple real Lie algebra? The realification here means to consider the complex Lie algebra as a real Lie algebra of twice the dimension. The statement was used in the proof of Proposition 12.46 in https://doi.org/10.1016/S0079-8169(08)61672-4. AI: By the Cartan criterion, a Lie algebra $\mathfrak g$ is semisimple if and only if its Killing form is non-degenerate. So, if $\mathfrak g$ is a semisimple complex Lie algebra, its Killing form is non-degenerate; the Killing form of the realification of $\mathfrak g$ is twice the real part of that form, so it is non-degenerate as well.
H: Do the series converge or diverge? $$\sum_{n=1}^{\infty} (-1)^{n} \cdot \frac{n}{5n+3} $$ Using Leibniz $\lim_{n\to\infty} a_{n}= \lim_{n\to\infty}\frac{n}{5n+3}=\frac{1}{5}$ so this is not equal to $0$, divergent $$\sum_{n=1}^{\infty} \frac{n^6}{3^n} $$ I guess that $3^{n}$ is growing faster than $n^{6}$ so $a_{n}$ decreases. $$\sum_{k=1}^{\infty} \frac{1}{\sqrt{k(k+1)}} $$ I still have no idea AI: It diverges, since you don't have$$\lim_{n\to\infty}(-1)^n\dfrac n{5n+3}=0.$$In fact,$$\lim_{n\to\infty}\left|(-1)^n\dfrac n{5n+3}\right|=\frac15.$$ Since$$\sqrt[n]{\frac{n^6}{3^n}}=\frac{\left(\sqrt[n]n\right)^6}3\to\frac13<1,$$your series converges. Since$$\lim_{k\to\infty}\frac{\frac1{\sqrt{k(k+1)}}}{\frac1k}=1$$and the harmonic series diverges, your series diverges too.
H: Probability: Union and Conditional Union I have $P(A\cup C|B)$. Does it equal $P(A|B)+P(C|B)-P(A\cap C |B)$? If $A$, $C$ are mutually exclusive, is it the same as $P(A|B)+P (C|B)$? AI: The short answer is "Yes." We have $$\begin{align} \Pr(A\cup C|B)&=\frac{\Pr((A\cup C)\cap B)}{\Pr(B)}\\ &=\frac{\Pr((A\cap B)\cup (C\cap B))}{\Pr(B)}\\ &=\frac{\Pr(A\cap B)+\Pr(C\cap B)-\Pr(A\cap C\cap B)}{\Pr(B)}\\ &=\Pr(A|B)+\Pr(C|B)-\Pr(A\cap C|B) \end{align}$$ Now if $A$ and $C$ are mutually exclusive, or mutually exclusive given $B$, this reduces to $\Pr(A|B)+\Pr(C|B)$.
H: Minimum cost of a connected graph $G$ is a connected graph with cost $p:E(G)\to\mathbb{R}$ defined on its edges. Let $e' \in E(G)$ be such that $p(e')<p(e)$ for every $e\in E(G)-\{e'\} $. Is it possible to find two spanning trees of minimum weight, $T_1$, $T_2$ ($T_1 \neq T_2$), such that one of them has $e'$ and the other one doesn't? AI: No. Every minimum spanning tree of $G$ contains the strictly cheapest edge $e'$. Suppose not, i.e., some minimum spanning tree $T$ does not contain $e'$. Adding $e'$ to $T$ creates a cycle. Remove any other edge of that cycle, say the one of maximum weight; this breaks the cycle and yields another spanning tree. Since every other edge of the cycle weighs strictly more than $e'$, the new tree has strictly smaller total weight, contradicting the minimality of $T$.
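A brute-force sanity check of this claim on small random graphs (my own sketch, not from the original answer; it enumerates all spanning trees directly, so it is only practical for tiny graphs):

```python
import itertools
import random

def spanning_trees(n, edges):
    """Yield every spanning tree (as a tuple of (u, v, w) edges) of a graph on 0..n-1."""
    for subset in itertools.combinations(edges, n - 1):
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v, w in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:                      # n-1 edges and no cycle => spanning tree
            yield subset

random.seed(0)
for _ in range(50):
    n = 5
    # complete graph with distinct weights, so there is a unique cheapest edge e'
    pairs = list(itertools.combinations(range(n), 2))
    weights = random.sample(range(1, 100), len(pairs))
    edges = [(u, v, w) for (u, v), w in zip(pairs, weights)]
    e_min = min(edges, key=lambda e: e[2])
    trees = list(spanning_trees(n, edges))
    best = min(sum(w for *_, w in t) for t in trees)
    assert all(e_min in t for t in trees if sum(w for *_, w in t) == best)
print("every minimum spanning tree contained the cheapest edge")
```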
H: Speed of decay of $\zeta(x)-1$ as $x \to \infty$ I am trying to find some numerical bound on the Riemann zeta function $$ \zeta(x) = \sum_{n\ge 1} 1/n^x. $$ I am only interested in the case in which $x > 1$, so the above expression is valid. More precisely, what is the decay of the function $$ \zeta(x)-1 \text{ as } x\to \infty? $$ I suppose that the next term is of order $1/2^x$ (due to the definition of the zeta function), but can I also control the constant? That is, do I have the existence of a positive constant $C>0$ such that $$ |\zeta(x)-1| \le C e^{- (\ln 2) \cdot x} $$ and if so, is the value of $C$ known? AI: The first term is $1$, the next is $2^{-x}$. When $x > 2$ (say), if $n \ge 3$ $$n^{-x} = 3^{-x} (n/3)^{-x} \le 3^{-x} (n/3)^{-2}$$ so since $\sum_n n^{-2}$ converges $$ \sum_{n=3}^\infty n^{-x} < c 3^{-x}$$ for some positive constant $c$. Thus $\zeta(x) = 1 + 2^{-x} + O(3^{-x})$.
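A small numerical illustration of this asymptotic behaviour (my own sketch; the cutoff `N` and the function name are arbitrary):

```python
def zeta_minus_one(x, N=100_000):
    """Direct summation of zeta(x) - 1 = sum_{n>=2} n^{-x}; adequate for x >= 3."""
    return sum(n ** -x for n in range(2, N + 1))

for x in [3, 5, 8, 12, 20]:
    tail = zeta_minus_one(x)
    # the leading term is 2^{-x}; the last column stays bounded,
    # consistent with zeta(x) = 1 + 2^{-x} + O(3^{-x})
    print(x, tail, 2 ** -x, (tail - 2 ** -x) * 3 ** x)
```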
H: Proving Linear Independence from a System of Equations I am trying to understand the proof of linear independence of the basis set $\{1, x, x^2, x^3\}$. It is written that - Substituting $3$ other values of $x$ into the above equation yields a system of $3$ linear equations in the remaining $3$ unknowns $c_1, c_2$ and $c_3$. The equation we are considering right now is - $$c_0 \cdot (1) +c_1x+c_2x^2+c_3x^3=0 \cdots (1)$$ First we plug in $x= 0$ to get the value of $c_0$ in equation $(1)$, we get - $$ c_0 \cdot (1) +c_1\cdot 0 +c_2\cdot 0 +c_3\cdot 0=0\implies c_0 =0$$ For arbitrary 3 non-zero values $ x_1, x_2, x_3 \in \mathbb{R}$ where $ x_1, x_2, x_3$, are not solutions of equation $(1)$ and for $c_0=0$, we will get $3$ equations from equation $(1)$- $$c_1x_1+c_2x_1^2+c_3x_1^3=0$$ $$c_1x_2+c_2x_2^2+c_3x_2^3=0$$ $$c_1x_3+c_2x_3^2+c_3x_3^3=0$$ Now, how can it be shown that the only solution of the above system of equations is the trivial solution $c_1 = c_2 = c_3 = 0$ and therefore the set $\{1, x, x^2, x^3\}$ is linearly independent? Thanks. The source of the problem and background is given below - AI: The matrix of that system is$$\begin{bmatrix}x_1&x_1^{\,2}&x_1^{\,3}\\x_2&x_2^{\,2}&x_2^{\,3}\\x_3&x_3^{\,2}&x_3^{\,3}\end{bmatrix}.$$Factoring $x_i$ out of the $i$-th row leaves a Vandermonde matrix, so its determinant is $x_1x_2x_3(x_2-x_1)(x_3-x_1)(x_3-x_2)$. So, if the numbers $x_1$, $x_2$, and $x_3$ are non-zero and distinct, the determinant is not $0$, and the only solution of the system is $(0,0,0)$.
H: Holomorphic function bounded Let $f \in \mathcal{H}(\mathbb{C} \setminus \{0\})$ and assume that $$ |f(z)| \leq |\log|z|| + 1, \quad z \in \mathbb{C} \setminus \{0\} $$ I have to prove that $f$ is constant. My attempt is the following proof. Consider $g(z) = zf(z)$. By hypothesis $$ |z||f(z)| \leq |z||\log|z|| + |z|, $$ so $\lim_{z \to 0} g(z) = 0$ and $g$ extends to an entire function, by Riemann's theorem. By the Cauchy integral formula $$ |g^{(n)}(0)|\leq \dfrac{M}{r^{n-1}}(r\log(r) + r). $$ It is easy to see that for all $n > 2$, $g^{(n)}(0) = 0$ because the right-hand side has limit zero when $r \to \infty$. For $n = 1$, we obtain the same result taking the limit $r \to 0$. So the Taylor expansion ($g(0) = 0$) is $$ g(z) = az^2, \quad a \in \mathbb{C}. $$ For all $z \neq 0$, $f(z) = az$. By hypothesis, if $z \neq 0$ $$ |a| \leq \dfrac{|\log|z|| + 1}{|z|}. $$ Taking $|z| \to \infty$ we obtain $|a| = 0$, so $f \equiv 0$. Is that argument correct? It seems a little strange because the exercise says that $f$ is constant, not necessarily zero at every point. AI: There is an error in your application of the Cauchy integral formula. The correct estimate is $$ |g^{(n)}(0)| \le \frac{n!}{r^n} \max_{|z|=r} |g(z)| \le n! \frac{|\log(r)| + 1}{r^{n-1}} $$ which implies $g^{(n)}(0) = 0$ for $n \ge 2$, so that $g$ is linear, and consequently, $f$ is constant. Finally, setting $z=1$ shows that $f(z) \equiv a$ with $|a| \le 1$.
H: Why is the Penrose triangle "impossible"? I remember seeing this shape as a kid in school and at that time it was pretty obvious to me that it was "impossible". Now I looked at it again and I can't see why it is impossible anymore. Why can't an object like the one represented in the following picture be a subset of $\mathbb{R}^3$? AI: Start at the bottom left-hand corner, taking orthonormal unit vectors $\pmb i$ horizontally, $\pmb j$ inward along the cross-member bottom left-hand edge, and $\pmb k$ upward and perpendicular to $\pmb i$ and $\pmb j$. I'll take the long edge of a member as $5$ times its (unit) width; the exact number doesn't matter. Then, working by vector addition anticlockwise round the visible outer edge to get back to the starting point, we have $$5\pmb i+\pmb k+5\pmb j-\pmb i-5\pmb k-\pmb j=4\pmb i+4\pmb j-4\pmb k=\pmb0,$$which of course is impossible.
H: Minimum absolute difference after dividing the numbers from 1 to n into two groups I am trying to solve an algorithmic problem mentioned at https://www.geeksforgeeks.org/divide-1-n-two-groups-minimum-sum-difference/. In the solution it says "We can always divide sum of n integers in two groups such that their absolute difference of their sum is 0 or 1. So sum of group at most differ by 1". I am struggling to prove this statement. Can someone please help in proving this statement? AI: The point is that if you have a division for $1$ to $n$, then you get another division for $1$ to $n + 4$, by adding $n + 1$ and $n + 4$ to one side, $n + 2$ and $n + 3$ to the other side. Hence it suffices to consider the case of $n = 1, 2, 3, 4$. These are super easy.
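A constructive sketch of this induction in Python (my own illustration, not from the original posts; the helper name `split` is arbitrary):

```python
def split(n):
    """Partition 1..n into two groups whose sums differ by at most 1, following the
    induction from the answer: settle n mod 4 by hand, then repeatedly put
    {m+1, m+4} in one group and {m+2, m+3} in the other (both add the same sum)."""
    base = {0: ([], []), 1: ([1], []), 2: ([1], [2]), 3: ([1, 2], [3])}
    r = n % 4
    a, b = list(base[r][0]), list(base[r][1])
    for m in range(r, n, 4):
        a += [m + 1, m + 4]
        b += [m + 2, m + 3]
    return a, b

for n in range(1, 200):
    a, b = split(n)
    assert sorted(a + b) == list(range(1, n + 1))
    assert abs(sum(a) - sum(b)) <= 1
print("difference is 0 or 1 for every n checked")
```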
H: Proof: Given a finite set of equidimensional proper subspaces of a vector space $V$, $\exists$ $x$ in $V$ that belongs to none of them I am stuck at this statement in a book of linear algebra. Even though the author casually mentions it, I am having a hard time coming up with a proof of why it must be true. Can you guys help? I can think of a few cases where it is true. Like given a finite number of lines through the origin in $\mathbb{R}^2$, we can always find a point which doesn't lie on any of them, and so on. But how to generalize it? AI: @paul garrett gave a counterexample for finite fields. For infinite fields, let $V_1, ..., V_k$ be proper subspaces of a vector space $V$, and let $U=\bigcup_{i=1}^kV_i\subset V$. Now, there are two cases: If $U$ is not a vector space, then it must be a proper subset of $V$, and thus $V\setminus U$ is nonempty. If $U$ is a vector space, then, by this property (extended to a finite union of subspaces) there must be an $i\in\left\{1,...,k\right\}$ such that $V_i$ contains all of $V_1, ..., V_k$. But then $U=V_i$ is a proper subspace, so $V\setminus U$ is nonempty.
H: Is there an intuitive way of justifying why the square of an infinite cardinal is itself? By no means I am an expert in this subject, but I do have some knowledge of ZFC. While there are many proofs which are difficult to recollect, I feel like I have enough knowledge that if I am given a statement about ordinals or cardinals, then I can prove it myself. However, there is one particular thing which didn't seem intuitive to me at all and whose proof is also difficult to remember or recreate, which is: If $\kappa$ is any infinite cardinal, then $\kappa\cdot\kappa = \kappa$. Does anyone know of an intuitive proof of this? The fact that $\aleph_0\cdot\aleph_0 = \aleph_0$ is very intuitive, because this is just saying that cartesian product of two countable sets is countable. If $c = |\mathbb{R}|$, then the fact that $c\cdot c = c$ is also more or less intuitive: that $|(0 , 1)| = c$, we can use decimal expansions of numbers to justify this. Can we visualise this fact somehow, or if not do you know of seeing this more easily? AI: You're wrong about $\aleph_0$. The fact that the product of two countable sets is countable is not "the intuitive reason" for $\aleph_0\cdot\aleph_0=\aleph_0$. It is the very definition of the equality. You've merely recast the symbols into words. It only becomes intuitive once you've seen the proof a few times. The general fact that $\kappa\cdot\kappa=\kappa$ for any infinite cardinal is equivalent to the axiom of choice, so we may as well assume it. In this case, the intuition is sort of the same from the case of $\aleph_0$: we can order the set $\kappa\times\kappa$ in a well-order satisfying the property that every proper initial segment has size $<\kappa$. Much like how we order $\Bbb{N\times N}$ in a way that every proper initial segment is finite. If you want to talk about sets like $\Bbb R$, then you can actually talk about this without the axiom of choice: if $A$ is an infinite set and $|A\times\{0,1\}|=|A|$, then we can use a bijection witnessing that to prove that $|\mathcal P(A)\times\mathcal P(A)|=|\mathcal P(A)|$. Since the real numbers satisfy $|\Bbb R|=|\mathcal P(\Bbb N)|$, the result follows. Alternatively, you can verify that for any set $X$, setting $A=X^\Bbb N$, we have $|A\times A|=|A|$, and again apply this to $X=\Bbb N$ and $\Bbb{|R|=|N^N|}$. Or, more generally, if $|A\times\{0,1\}|=|A|$, then $|X^A\times X^A|=|X^A|$ for any set $X$. But now we see why "intuitive" is the wrong term here. This is not true for every infinite set without assuming the axiom of choice: if AC fails, not every infinite set can be well-ordered, and certainly not every infinite set is a power set, or equipotent to a set of the form $X^A$ with the needed properties.
H: Understanding memoryless property of exponential distribution For an Exponential distribution X with mean 500, we could say that P(X>1000 | X>500) = P(X>500) and the mean of the conditional distribution X|X>500 would also be 500. Good so far? Can the memoryless aspect be extended in the other direction, so that you could say then that P(X<200 | X<400) = P(X<200)? It seems not, because the following problem's solution implied to me that the conditional distribution has a different mean. The task was to find the mean survival time of a component whose lifespan is exponentially distributed with mean 5, if the component survives less than 10 years. The answer was 3.44. So to me this says that the mean of the conditional distribution of X | X<10 is not the same as the mean of X. AI: One can think of the exponential distribution as the waiting time for the first event in a Poisson process. The memoryless property simply says that waiting does not help - the probability of an event occurring in the next 5 minutes is the same, no matter how much you waited already (no matter on what you condition, $X>500$ in your case). Hence, $P(X>500+500\mid X>500)=\Pr(X>500)$. Clearly, the opposite is not true. If you know you've waited for less than 500, then the event time is between 0 and 500, and the bounded support alone should hint that the distribution is not exponential. If components last for less than 10 years and nine already passed, then your component will break very soon, and with probability 1 will break in the next year.
H: Proving a combinatorial identity involving sum and integral I want to prove the identity below (given in the original post as an image that is not reproduced here). Therefore I use that the derivatives of both sides ($\frac{d}{\text{d}p}$) are equal (and that for a fixed p the values of both sides are equal). Has anyone got a clue how to find that the derivative of the left side is equal to $-\frac{n!}{k!(n-k-1)!}p^k(1-p)^{n-k-1}$? I have been trying hard but I cannot figure it out. Thanks in advance. Manuel AI: Using the product rule to take the derivative on the left hand side gives $$ \begin{aligned} &\sum_{j=0}^k\left[n\binom{n-1}{j-1}p^{j-1}(1-p)^{n-j}-n\binom{n-1}{j}p^j(1-p)^{n-j-1}\right]\\ &\quad=n\sum_{j=1}^k\binom{n-1}{j-1}p^{j-1}(1-p)^{n-j}-n\sum_{j=1}^{k+1}\binom{n-1}{j-1}p^{j-1}(1-p)^{n-j}\\ &\quad=-n\binom{n-1}{k}p^k(1-p)^{n-k-1}. \end{aligned} $$ In the first line, the factors stemming from the exponents that dropped down in taking the derivative have been combined with the binomial coefficients. In the second line, we have noticed that the $j=0$ term in the first sum is zero and have shifted the summation index in the second term. In the third line, we have observed that the sum telescopes.
H: Square root of 1 modulo N Given a positive integer N, how do we compute $card(A)$ where $A = \{x\in\mathbb{Z}, 0 < x < N \mid x^{2}\equiv1\pmod N\}$, when the prime factorization of N is known. In other words, how many square roots of 1 modulo N exist? We know that when N is prime, there are only two square roots -> 1 and -1 (except for N = 2, where 1 and -1 coincide). So what does the formula for generic N look like? A formal proof would be appreciated. I don't need to find these roots (this task can be accomplished by using EEA on every pair of factors of N), I only need to compute how many there are. AI: By the Chinese remainder theorem, we only need to consider the case where $N = p^r$ is a prime power. If $p$ is odd, then there are exactly two square roots of $1$. This can be seen from Hensel's lemma or the fact that $\Bbb Z/N\Bbb Z$ is cyclic in this case. If $p = 2$, then it depends on the value of $r$. $r = 1$: there is $1$ root ($1$). $r = 2$: there are $2$ roots ($1, 3$). $r \geq 3$: there are $4$ roots, congruent to $1, 2^{r - 1} - 1, 2^{r - 1} + 1, 2^r - 1$ mod $2^r$, respectively. This again is an exercise in Hensel's lemma.
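The resulting count is multiplicative over the prime-power factors of $N$. Here is a small Python sketch (my own, with arbitrary helper names) that checks the formula against brute force:

```python
def factorize(n):
    """Trial-division factorization; returns {prime: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def count_roots_formula(N):
    """Number of x with 0 < x < N and x^2 = 1 (mod N), computed factor by factor."""
    count = 1
    for p, r in factorize(N).items():
        if p != 2:
            count *= 2                                       # odd prime power: 2 roots
        else:
            count *= 1 if r == 1 else (2 if r == 2 else 4)   # 2^1, 2^2, 2^r with r >= 3
    return count

def count_roots_brute(N):
    return sum(1 for x in range(1, N) if (x * x) % N == 1)

for N in range(2, 2000):
    assert count_roots_formula(N) == count_roots_brute(N), N
print("formula matches brute force up to 2000")
```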
H: What is the shape in the complex plane generated by all possible points $z_1 + z_2$, where $z_1$ and $z_2$ can be any two points on the unit circle? What is the shape in the complex plane generated by all possible points $z_1 + z_2$, where $z_1$ and $z_2$ can be any two points on the unit circle centered at $0$ AI: First take any point $z_1$ on the unit circle. Now adding all points from the unit circle to it is equivalent to drawing a new circle of radius 1 around that point (by viewing them as vectors). Doing this for all points $z_1$ on the unit circle will "smear" new circles centered on these points, around the origin, thus filling the disk of radius 2 centered at the origin.
H: Show that $x^{2}-6y^{2}=523$ has infinitely many integral solutions I want to show that $x^2-6y^2=523$ has infinitely many solutions. For the special case $x^2-dy^2=1$, I know what I need to do. I can get the result by using continued fractions. Also, in the kinds of $x^2-dy^2=m$ for some examples, I can say that there is no solution using modulo prime $p$. But in general, I'm not sure how to find the solution set for $ax^{2}+by^{2}+c=0$ where $a,b,c\in \mathbf{Z}$. I would appreciate if you can help me with this question or direct it to a resource that can help. AI: You can use a twist on Pell's equation in this case. The fundamental Pell solution for $D=6$ is $(x,y)=(5,2).$ So all positive solutions to $$a^2-6b^2=1$$ are given by $$a_k+b_k \sqrt{6}=(5+2\sqrt{6})^k$$ for $k\in \mathbb{Z}_{+}.$ Here's the fun part: since $(x,y)=(23,1)$ is a solution to $$x^2-6y^2 = 523$$ (found by trying out smaller values of $x$ or $y$ and seeing if the other variable turns out to be an integer), a solution is given by $$x_k + y_k\sqrt{6}=(23+\sqrt{6})(a_k+b_k \sqrt{6})$$ for each positive integer $k.$ It is easy to prove that these are indeed solutions because if $a^2-6b^2=1$ then $$(23+\sqrt{6})(a+b\sqrt{6}) = (23a+6b)+(23b+a)\sqrt{6}$$ and \begin{align*} (23a+6b)^2-6(23b+a)^2 &= 23^2 (a^2-6b^2) - 6(a^2-6b^2)\\ &= 23^2-6\\ &=523. \end{align*} This does not necessarily find all solutions, but you do get infinitely many since they are monotonically increasing in some sense. You can read what Keith Conrad has written about finding all solutions.
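A short Python sketch (my own, not from the original answer) that generates the first few solutions exactly with integer arithmetic:

```python
def solutions(k_max=8):
    """Yield (x_k, y_k) with x^2 - 6*y^2 = 523, starting from the seed (23, 1)
    and repeatedly multiplying by the fundamental unit 5 + 2*sqrt(6)."""
    x, y = 23, 1          # seed: 23^2 - 6*1^2 = 523
    a, b = 5, 2           # fundamental solution of a^2 - 6*b^2 = 1
    for _ in range(k_max):
        assert x * x - 6 * y * y == 523
        yield x, y
        # (x + y*sqrt(6)) * (a + b*sqrt(6)) = (a*x + 6*b*y) + (b*x + a*y)*sqrt(6)
        x, y = a * x + 6 * b * y, b * x + a * y

for x, y in solutions():
    print(x, y)           # exact integers, so the assertion never fails
```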
H: vector of random variables and conditional probability problem? I truly don't know how to approach the following problem Consider a sequence of events identically distributed and independent with probability of success $p$. Let $S_i$ be the success of the i-th event. Denote $X_1$ the time at which the first success happened and $X_1+X_2$ the time when the second success happened. Express the events $\{X_1=k, X_2=\ell\}$ for $ k,\ell \in \mathbb{N}$ in terms of $S_i$ and prove that $X_1$ and $X_2$ are independent. I think that $S_i$ is distributed as a Binomial(N,p), where $N$ is the number of trials, and $X_1=\sum_{i=1}^{N} S_i$. Is this correct? Any hint? Thanks! AI: By what I understand from the exercise, $S_i$ is actually a Bernoulli random variable, so $P(S_i = 1)=p$ and $P(S_i = 0)=1-p$, where $1$ is success and $0$ is failure. So, $\sum^n_{i=1} S_i \sim Binomial(n,p)$, but not $S_i$. Now, for the $X$'s. Note that $X_1$ is the smallest index such that $S_i$ is a success. $$X_1(w)= \inf\{n : \sum^n_{i=1}S_i(w) \geq 1 \}$$ $X_2$ is similar to $X_1$, and it represents the number of trials after the first success until the second success. $$ X_2(w)= \inf\{n : \sum^{X_1(w)+n}_{i=X_1(w)+1}S_i(w) \geq 1 \} $$ The distribution of $X_1 + X_2$ is a Negative binomial: for $k \ge 2$, $$ P(X_1+X_2 = k)= (k-1)\,p^2(1-p)^{k-2}, $$ which is exactly the distribution of a sum of two i.i.d. Geometric random variables. To prove the independence, note that $$\{X_1=k,X_2=\ell\} = \{S_1=0,...,S_k=1,S_{k+1}=0,...,S_{k+\ell}=1\} $$ Since the $S_i$ are independent, the probability of this event is $(1-p)^{k-1}p\cdot(1-p)^{\ell-1}p = P(X_1=k)\,P(X_2=\ell)$, so $X_1$ and $X_2$ are independent.
H: Show that for a rational $a$ there is a finite amount of rationals $\frac{p}{q} \neq a$ such that $|a - \frac{p}{q}| < \frac{1}{q^2}$ I want to shot that there for rational number $a \in \mathbb{Q}$, there is a finite amount of rational numbers $\frac{p}{q} \neq a$ such that $|a - \frac{p}{q}| < \frac{1}{q^2}$. I know that for a irrational number $\alpha$, there are infinite such rationals, and it follows as a correlation from Dirichlet's approximation theorem. However, I am not sure how it fails when $a$ is rational. Help would be appreciated. AI: Let $a=m/n$. Then $$\left|a-\frac pq\right|=\left|\frac mn-\frac pq\right|=\frac{|mq-np|}{nq}.$$ Since this is a nonnegative integer over $nq$, it is either $0$ or $\geq \frac{1}{nq}$. Can you finish from here?
H: Show that there are two total orderings of $\textbf{Q}(\sqrt{2})$ under which it is an ordered field. Let $\textbf{Q}(\sqrt{2})$ be the set of all real numbers of the form $r + s\sqrt{2}$, with $r,s\in\textbf{Q}$. Show that $\textbf{Q}(\sqrt{2})$ is a subfield of $\textbf{R}$. Show that there are two total orderings of $\textbf{Q}(\sqrt{2})$ under which it is an ordered field. MY ATTEMPT In order to prove it is a subfield of $\textbf{R}$, it suffices to define: \begin{align*} \begin{cases} (a + b\sqrt{2}) + (c + d\sqrt{2}) = (a + b) + (c+d)\sqrt{2}\\\\ (a + b\sqrt{2})\times(c + d\sqrt{2}) = (ac + 2bd) + (ad + bc)\sqrt{2} \end{cases} \end{align*} and check its corresponding properties. I am mainly interested in the second part, but I do not know how to approach it. Could someone help me to describe such total orderings? AI: Consider the two maps $\mathbb{Q}(\sqrt{2})\to\mathbb{R}$, one given by sending the element $a+b\sqrt{2}$ to that real number, and the second to $a-b\sqrt{2}$. Verify that each of them is a field embedding. The usual order of the reals then induces an ordering on $\mathbb{Q}(\sqrt{2})$ by transport of structure along each of the two maps, giving you two orderings.
H: Number of functions to receive 2 values I have a set $A$ with $5$ items: $1,2,3,4,5$ I want to know how many functions are there that make $|f[A]|=2$ if $f: A->A$ So what I thought is that it's the same like to put $5$ balls in $2$ boxes out of $5$ boxes. My direction to the solution would be: choose $2$ boxes out of the $5$ boxes and put the $5$ balls in the two boxes so you have at least one ball in each box. So there are $10$ ways to choose the two boxes out of $5$, and $D(2,5-2)$ ways to put the $5$ balls in the $2$ boxes. So $10\cdot4 = 40$ However, this is not the solution. What am I missing? AI: Having chosen the two elements in the image of $f$, you have to assign one of the two as the function value for each element of $A$. There are $2$ ways to assign each one, so there are $2^5$ ways to make the assignment. We require that at least one element go to each value, and $2$ of the possible assignments take all five elements to the same value, so there are $30$ ways to assign the values of $f$. The total number of functions is then $10 \cdot 30=300$
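A one-line brute-force confirmation of the count (my own sketch in Python):

```python
from itertools import product

A = range(1, 6)
# each tuple f = (f(1), ..., f(5)) is one function A -> A; keep those whose image has size 2
count = sum(1 for f in product(A, repeat=5) if len(set(f)) == 2)
print(count)   # 300 = C(5,2) * (2^5 - 2)
```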
H: Find $x$ and $y$ where $ax - by = c$ and $x + y$ is minimum. How can I find out the natural numbers $x$ and $y$, such that $$ax - by = c$$ and $x + y$ is minimum? $a, b, c$ are given integers. example: $a = 2, b = 2, c = -2$. $$2x - 2y = -2$$ $$ans: (x = 0, y = 1)$$ Also, how can I know when it is impossible? I guess it's when $c$ is not divisible by $LCD(a, b)$. Is it right? AI: The equation $ax - b y = c$ (where $a$ and $b$ are positive integers and $c$ is an integer) has natural number solutions if and only if $GCD(a,b) \mid c$. Obviously this is necessary as $ax-by$ is divisible by that GCD. To see it is sufficient: if the GCD is $g$, $a x - b y = g$ has integer solutions by Bezout's identity, and you can then multiply by $c/g$ to get a solution of $ax - b y = c$. Note that if $(x,y)$ is an integer solution, so is $(x+kb/g, y+ka/g)$ for any $k$, so for suitable $k$ you'll have a natural number solution. Moreover, these are all the solutions. So to find the natural number solution with the least $x+y$, find $k$ so that $x+kb/g \ge 0$ and $y+ka/g \ge 0$ but $\min(x+(k-1)b/g, y+(k-1)a/g) < 0$.
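A Python sketch of the recipe from the answer (my own code; `extended_gcd` and the other names are arbitrary). It assumes, as in the answer, that $a$ and $b$ are positive and that "natural" includes $0$:

```python
def extended_gcd(a, b):
    """Return (g, u, v) with a*u + b*v = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, u, v = extended_gcd(b, a % b)
    return g, v, u - (a // b) * v

def min_sum_solution(a, b, c):
    """Natural-number solution (x, y) of a*x - b*y = c with x + y minimal,
    or None if none exists; a and b are assumed to be positive integers."""
    g, u, v = extended_gcd(a, b)               # a*u + b*v = g
    if c % g:
        return None                            # gcd(a, b) must divide c
    x0, y0 = u * (c // g), -v * (c // g)       # one integer solution of a*x - b*y = c
    bs, as_ = b // g, a // g
    ceil_div = lambda p, q: -((-p) // q)       # ceiling of p/q for positive q
    # general solution: (x0 + k*bs, y0 + k*as_); x + y is increasing in k,
    # so take the smallest k that makes both coordinates non-negative
    k = max(ceil_div(-x0, bs), ceil_div(-y0, as_))
    return x0 + k * bs, y0 + k * as_

print(min_sum_solution(2, 2, -2))   # (0, 1), matching the example in the question
```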
H: Interpretation of Simple Uniform Marginal Density Example from Mathematical Statistics by Rice I am stuck trying to visualize Example B from Mathematical Statistics and Data Analysis 3rd ed by Rice. The examples revolve around the concept of independence of Random Variables which is defined in the following way: I included Example A from which Example B is derived: A couple of things going on here: He says "rotate the square by $90^{\circ}$" so first off that has to be a typo because that will just give me another square. So we shall assume $45^{\circ}$ to get the desired diamond. He first says "you can see that the marginal density of $X$ is nonegative for $\frac{-1}{2} \leq x \leq \frac{1}{2}$, but is not uniform. Why is it nonnegative? Is it because when we integrate over the bounds of integration our marginal density will be positive? More importantly, why is this marginal not considered uniform anymore? The bounds of integration may have changed, but the joint density is still defined as $f_{XY}(x,y) = 1$. In trying to illustrate the concept of independence he took the result $f_{X}(0.9) > 0$ and $f_{Y}(0.9)>0$. Now using my scattered trig knowledge the bounds of integration would've changed to $\frac{-1}{\sqrt{2}} \leq x \leq \frac{1}{\sqrt{2}}$ and same for $y$. Which would mean $f_{X}(0.9) = 0$ and same for $Y$. I get what he may have been trying to insinuate, if he had chosen $f_{X}(0.65)$ and $f_{Y}(0.65)$ he would've gotten the desired result I believe. Maybe because of my inability to sketch this out properly or put it into some graphing software I'm not seeing what is occurring, but I don't think that would help too much here. I asked a lot of questions here so if one was of most importance it would be #2 and why is the marginal density not a uniform density anymore? AI: The answer is very simple....the error you found is not the only one in the text. Rotating the square of 45° you get a diamond with the same area $A=1$ Where, obviously, a) $y=-\frac{\sqrt{2}}{2}-x$ b) $y=\frac{\sqrt{2}}{2}+x$ c) $y=-\frac{\sqrt{2}}{2}+x$ d) $y=\frac{\sqrt{2}}{2}-x$ Then the joint density is the following $$f_{XY}(x,y)=\mathbb{1}_{(-\frac{\sqrt{2}}{2};0)}(x)\mathbb{1}_{(-\frac{\sqrt{2}}{2}-x;\frac{\sqrt{2}}{2}+x)}(y)+\mathbb{1}_{[0;\frac{\sqrt{2}}{2})}(x)\mathbb{1}_{(x-\frac{\sqrt{2}}{2};\frac{\sqrt{2}}{2}-x)}(y)$$ to derive the marginal X it is enough to integrate in Y (with the joint density expressed as I did the integral extremes are written in the joint density) Thus $$ f_X(x) = \begin{cases} 2x+\sqrt{2}, & \text{if $ -\frac{\sqrt{2}}{2}<x<0 $ } \\ \sqrt{2}-2x, & \text{if $0 \leq x<\frac{\sqrt{2}}{2}$ } \\ 0, & \text{elsewhere } \end{cases}$$ so X density is non negative for $ -\frac{\sqrt{2}}{2}<x<\frac{\sqrt{2}}{2}$ but it is not uniform....it's a triangle!
H: Let $a_{n} = \sqrt{n^{2}+n} - n$, for $n\in\textbf{N}$. Is the sequence $(a_{n})_{n=1}^{\infty}$ monotonic? Let $a_{n} = \sqrt{n^{2}+n} - n$, for $n\in\textbf{N}$. Show that $a_{n}$ converges as $n\to\infty$. What is the limit? Is the sequence $(a_{n})_{n=1}^{\infty}$ monotonic? MY ATTEMPT The answer to the first question is $a_{n}\to 1/2$ as $n\to\infty$. Indeed, \begin{align*} \lim_{n\to\infty}\sqrt{n^{2}+n} - n = \lim_{n\to\infty}\frac{n}{\sqrt{n^{2} + n} + n} = \lim_{n\to\infty}\frac{1}{\sqrt{1 + 1/n} + 1} = \frac{1}{2} \end{align*} To test for monotonicity, we can try to study the behavior of the quotient: \begin{align*} \frac{a_{n+1}}{a_{n}} & = \frac{\sqrt{(n+1)^{2} + n + 1} - n - 1}{\sqrt{n^{2} + n} - n}\\\\ & = \frac{(n+1)^{2} + n + 1 - (n+1)^{2}}{n^{2} + n - n^{2}}\times\frac{\sqrt{n^{2} + n} + n}{\sqrt{(n+1)^{2} + n + 1} + n + 1}\\\\ & = \frac{n+1}{n}\times\frac{\sqrt{n^{2} + n} + n}{\sqrt{(n+1)^{2} + n + 1} + n + 1} \end{align*} But then I get stuck, because the first factor is greater than one and the second is smaller than one. Can someone please finish my attempt or provide an alternative approach? AI: Since the sequence$$\left(\sqrt{1+\frac1n}\right)_{n\in\Bbb N}$$is decreasing and since you proved that$$\sqrt{n^2+n}-n=\frac1{\sqrt{1+\frac1n}+1},$$your sequence is increasing.
H: How can we write the function in definite integral form? How can we write the following Stieltjes function in definite integral form? $$\frac{1}{\log(1+x)}$$ For example: $$\frac{\log(1+x)}{x}=\int_{1}^{\infty} \frac{t^{-1}}{x+t} dt $$ AI: As people have said in the comments, $${\int\frac{1}{\log(1+x)}dx}$$ does not have an elementary antiderivative. However, judging by your edit - this is not what you are asking. You want to write $${\frac{1}{\log(1+x)}=\int_{a}^{b}f(x,t)dt}$$ for some function ${f(x,t)}$. Firstly, differentiate to get $${\frac{d}{dx}\left(\frac{1}{\log(1+x)}\right)=-\frac{1}{(1+x)\log^2(1+x)}}$$ Then clearly, $${\frac{1}{\log(1+x)}=-\int_{\infty}^{x}\frac{1}{(1+t)\log^2(1+t)}dt=\int_{x}^{\infty}\frac{1}{(1+t)\log^2(1+t)}dt}$$ Now simply do ${u=\frac{t}{x}}$ to get $${\Rightarrow \frac{1}{\log(1+x)}=\int_{1}^{\infty}\frac{x}{(1+xu)\log^2(1+xu)}du}$$ You can now rewrite in terms of ${x,t}$ just to make it look nicer ($u$ is just a dummy variable anyway) $${\Rightarrow \frac{1}{\log(1+x)}=\int_{1}^{\infty}\frac{x}{(1+xt)\log^2(1+xt)}dt}$$ This is like a reverse Leibniz rule! (Or "Feynman trick" if you are a Physicist :P)
H: What does it mean when polynomials have closed, exact complex solutions, but not exact real solutions? I was watching this introduction to perturbation theory. His first example is solving $$x^5 + x = 1$$ for which he claims there is no exact real solution. I asked WolframAlpha what it thought. It gives an inexact decimal solution $x \approx 0.75488...$ and some exact complex solutions $$x = -\sqrt[3]{-1}$$ $$x = (-1)^\frac{2}{3}$$ Is there some deep reason as to why the complex roots would have exact forms but not the real root? Could we have an $n$-degree polynomial with $a$ exact solutions and $b$ inexact solutions, for arbitrary $a+b=n$? Can the exact and inexact solutions be distributed arbitrarily between the real line and the rest of the complex plane? Can we say anything in general, or is this just a fluke for this particular polynomial? AI: It is possible to create a polynomial with roots that are irrational in the reals but, for any complex root $z = a + bi$, $a$ and $b$ are rational and therefore exact. One can also create examples that are opposite, like this: $$(x + \pi)(x + i) = 0$$ and $$(x + \pi i)(x + 1)$$ as examples. So to the original question, can we have a polynomial with $a$ exact and $b$ inexact solutions, sure you can, just build it up. Create it from first order polynomials multiplied together, in which $a$ are exact and $b$ have irrational roots. In general, for an arbitrary polynomial of degree five or higher there is no formula for the roots in terms of radicals (Abel-Ruffini), so one couldn't do this construction using only the polynomial's coefficients. You would have to start by placing the roots and then expanding it in order to know what the polynomial is in a standard form.
H: How to express in Legendre's polynomials? How do I express $\cos(3\theta)$ and $\sin^{2}(\theta)$ in Legendre's polynomials, knowing that $x=\cos\theta$? I know that $f(x)=\sum a_{n}P_{n}(x)$ and $P_{n}=\frac{(-1)^{n}}{2^{n}n!}\frac{d^{n}}{dx^{n}}(1-x^{2})^{n}$, but I don't know what to do with them AI: If the functions you are trying to match to Legendre polynomials of $\cos \theta$ are sinusoids, the easiest thing to do is usually to use trigonometric identities to re-express the function in terms of $\cos \theta$. Then start from the highest order, match up the coefficients, and work your way down. For example, suppose $f(\theta) = \cos(4 \theta) - \sin^2 (\theta)$. We have $$ \cos(4 \theta) = 2 \cos^2(2 \theta) - 1 = 2 (2 \cos^2 (\theta) - 1)^2 - 1 = 8 \cos^4 \theta - 8 \cos^2 \theta + 1 = 8x^4 - 8x^2 + 1 $$ and $\sin^2(\theta) = 1 - \cos^2 \theta = 1 - x^2.$ So $$ f(x) = 8 x^4 - 7 x^2. $$ The highest order of $x$ is 4, so the highest Legendre polynomial present is $P_4(x) = \frac{1}{8}(35 x^4 - 30 x^2 + 3).$ Subtract that out: $$ f(x) - \frac{64}{35} P_4(x) = 8 x^4 - 7 x^2 - \frac{8}{35} (35 x^4 - 30 x^2 + 3) = - \frac{1}{7} x^2 - \frac{24}{35} $$ Now we're down to $x^2$, so we can add in a multiple of $P_2(x)$: $$ f(x) - \frac{64}{35} P_4(x) + \frac{2}{21} P_2(x) = - \frac{1}{7} x^2 - \frac{24}{35} + \frac{1}{21} (3 x^2 - 1) = - \frac{11}{15} $$ And $P_0(x) = 1$, so $$ f(x) - \frac{64}{35} P_4(x) + \frac{2}{21} P_2(x) + \frac{11}{15} P_0(x) = 0 \\ f(x) = \frac{64}{35} P_4(x) - \frac{2}{21} P_2(x) - \frac{11}{15} P_0(x). $$
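A quick check of this expansion with NumPy (my own sketch; `poly2leg` converts ordinary polynomial coefficients to Legendre-series coefficients):

```python
import numpy as np
from numpy.polynomial import legendre as L

# f(theta) = cos(4*theta) - sin(theta)^2 equals 8x^4 - 7x^2 with x = cos(theta)
poly_coeffs = [0, 0, -7, 0, 8]             # coefficients in increasing powers of x
leg_coeffs = L.poly2leg(poly_coeffs)
print(leg_coeffs)                          # approx [-0.7333, 0, -0.0952, 0, 1.8286]
print(-11/15, -2/21, 64/35)                # the coefficients found by hand above

# cross-check against the trigonometric definition at a few angles
theta = np.linspace(0.0, np.pi, 9)
assert np.allclose(np.cos(4 * theta) - np.sin(theta) ** 2,
                   L.legval(np.cos(theta), leg_coeffs))
```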
H: What is a small change of a function with 2 variables? I am reading a book and am confused about how this equation is derived. I would have thought that delta C would just be equal to the sum of the partial derivatives? C is a function depending on v1 and v2. AI: The partial derivatives just tell you how fast the function is changing; they don't tell you what it changes TO. It would be like saying that I am currently moving at 100 meters per second. That tells you how fast I'm going, but it doesn't tell you how far I've moved yet. In this case, you'd get just $\frac{dp}{dt}$, so what we need is a change in time that we can apply that speed to in order to get my change in position. When we have a multivariable function we in general can change among any of our independent variables, and we can do so independently, so we need to add up the contributions of each of those changes. Hence we still need those deltas - the changes in the respective variables. When the deltas are "very large" the accuracy of the computation goes down, but that's just calculus - generally you need to assume that the deltas are so small as to be infinitesimal in order to be "accurate".
H: Prove that there exists a bijective function $i:S \to S$ such that $i \circ g = g$ and $g \circ i = g$ for all bijections $g: S \to S$. Let $S$ be a set. ($a$) Prove that there exists a function $i:S \to S$ such that $i \circ g = g$ and $g \circ i = g$ for all bijections $g: S \to S$. ($b$) Prove that $i$ is a bijection. Proof of ($a$): Suppose $y \in S$. Then by surjectivity of $g$, there exists $x \in S$ such that $g(x)=y$. Suppose $i:S \to S$ is defined as $i(x_{0})=x_{0}$ for any $x_{0} \in S$. Then, $y=g(x)=g(i(x)) = (g \circ i)(x)$, since $x = i(x) \in S$. By definition of $i$, $y = g(x) = i(g(x)) = (i \circ g)(x)$. Thus, $(g \circ i)(x) = (i \circ g)(x)$ for any $x \in S$. QED Proof of ($b$): For $x_{1},x_{2} \in S$, $x_{1} \ne x_{2}$, we have that $i(x_{1})=x_{1}$ and $i(x_{2}) = x_{2}$ both imply $i(x_{1}) \ne i(x_{2})$. So, $g$ is injective. Then for any $x \in S$, there exists $x$ in $S$ such that $i(x) = x$. This implies that $g$ is surjective. Since we have shown that $g$ is both injective and surjective, it follows that $g$ is a bijective function. QED I wanted to know if both proofs are correct. AI: (b) is largely correct with the following stylistic flaws: "both" should be replaced by "together" "there exists $x$ in $S$ such that $i(x)=x$" should be replaced by "there exists $y$ in $S$ such that $i(y)=x$, namely $x$". (a) is incomprehensible to me: The definition of $i$ should be the start of the proof. $g$ should be introduced by a sentence like "Let $g$ be a bijection of $S$ onto $S$". You introduce $y$ as if you want to prove a statement of the form $\forall y\in S:\ldots$, but end up proving something of the form $\forall x\in S:\ldots$. You only try to prove $g\circ i=i\circ g$, but not that both terms are equal to $g$. A correct proof of (a) would be: Define $i:S\rightarrow S$ by $i(x_0)=x_0$ for all $x_0\in S$. Let $g$ be a bijection of $S$ onto $S$. Then, for all $x\in S$, $$ g(x)=g\big(i(x)\big)=(g\circ i)(x) $$ and $$ g(x)=i\big(g(x)\big)=(i\circ g)(x). $$ Note that I've not used that $g$ is bijective. That's because it isn't necessary. We have $g\circ i=g=i\circ g$ for all functions $g:S\rightarrow S$. The problem statement is misleading in that regard.
H: How many ways to move around parentheses for finite tensor products? Suppose I have $n$ elements where $n\in\mathbb{N}$ in a place where tensor products make sense and are not strict, say $a_1,\cdots, a_n$. Suppose we only know that $a_i\otimes a_{j}$ is defined for all $i,j\in\{1,\cdots,n\}$. How many ways are there to move around the parentheses for tensoring all the elements together if we have the $a_i$'s in numerical order? I know for 3 elements, there are two ways. $(a\otimes b)\otimes c$ $a\otimes (b\otimes c)$ For four elements, there are 5 ways. $(a\otimes b)\otimes (c\otimes d)$ $a\otimes(b\otimes(c\otimes d))$ $a\otimes ((b\otimes c)\otimes d)$ $(a\otimes(b\otimes c))\otimes d$ $((a\otimes b)\otimes c)\otimes d$ I do not know for any higher number. I am trying to use this to construct graphs which form a single shape and then construct an operad based on this information whose algebras are precisely symmetric monoidal categories with a strict unit. AI: If you have $n$ elements, the answer is ${1\over n}{2(n-1)\choose n-1} $, which is the $(n-1)$-th Catalan number; the Catalan numbers are closely related to exactly this kind of counting problem.
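A short Python check that the "split at the outermost tensor" recursion reproduces the Catalan numbers (my own sketch):

```python
from math import comb
from functools import lru_cache

@lru_cache(maxsize=None)
def parenthesizations(n):
    """Number of full parenthesizations of a tensor product of n factors in a fixed order."""
    if n <= 2:
        return 1
    # split the product as (first k factors) tensor (remaining n-k factors)
    return sum(parenthesizations(k) * parenthesizations(n - k) for k in range(1, n))

for n in range(2, 11):
    catalan = comb(2 * (n - 1), n - 1) // n    # C_{n-1} = (1/n) * binom(2n-2, n-1)
    assert parenthesizations(n) == catalan
    print(n, catalan)
```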
H: Given that for two naturals $p$ and $q$ are coprime, How to show that two naturals $u$ $v$ exist such as $pu-vq =1$ I know by Bézout theorem two integers $u$ and $v$ exist and verify $pu+qv=1$ but to show that $u$ and $v$ are naturals I'm stuck. AI: If $p$ and $q$ are both positive integers and $pu+qv=1$ with integers $u$ and $v$, then one of them is positive and the other is not. If $u$ is the positive one, then you're done (i.e., just change the sign of $v$ and subtract instead of adding). If $u$ is not positive, then let $u'=u+(\text{a gazillion})q$ and $v'=(\text{a gazillion})p-v$, where "a gazillion" is whatever it takes to make $u'$ positive, and note that $$pu'-qv'=p(u+(\text{a gazillion})q)-q((\text{a gazillion})p-v)=pu+qv=1$$
H: There exists an $\aleph_0$-coloring of a graph on the real numbers. I have this question: Let $G = ( \mathbb{R} , E)$ be a graph such that its vertices are the real numbers and its edge set is given by $$E = \big\{ \{u,v\}\,\big |\, u-v \in \mathbb{Q} \setminus \{0 \}\big\}\,.$$ Prove that the graph has a legal coloring in color set $\mathbb{N}$. I have a graph $G$ with vertices set $V$, If there is a legal coloring of the graph $G$ using a set $A$ of colors of cardinality of $a$. Does that mean there is a legal coloring of $G$ using every set of colors with cardinality $a$ ? Thank you! AI: HINT: Show that if $\{u,v\}$ and $\{v,w\}$ are both edges of $G$, then $\{u,w\}$ is also an edge of $G$. Conclude that each component of $G$ is a countably infinite complete graph. Added: I’ll fill in a bit more detail. Let $u$ be any vertex of $G$; the neighborhood of $u$ (i.e., $u$ together with the set of vertices to which it is joined by edges) is $N(u)=\{u\pm q:q\in\Bbb Q\}$. Note that this is a countably infinite set (why?). Use the original hint to show that if $v\in N(u)$, then $N(v)=N(u)$: every vertex in $N(u)$ is connected by an edge to every other vertex in $N(u)$, and no vertex in $N(u)$ is connected to any vertex outside of $N(u)$. That shows that $N(u)$ is both a complete subgraph of $G$ and a component of $G$. And it has countably infinitely many vertices, so there is a bijection between it and $\Bbb N$.
H: Is there an elementary reason $\mathbb{CP}^{n}$ is not homeomorphic to the sphere for $n\ge 2$? My question is similar to this one, but I am asking this for the complex case. In the real case, we can use the fact that the fundamental group of $\mathbb{RP}^{n}$ is non-trivial, so the space cannot be homeomorphic to the sphere for $n\ge 2$. We can't really use this fact in the complex case, because $\mathbb{CP}^{n}$ is always simply connected. Is there any other simple way to argue this? I am hoping to find something similar to the ideas found in the above link that would involve only the simplest of tools from algebraic topology (or as simple as possible if nothing else). Any suggestions are welcome. AI: Without all the tools of algebraic topology, but with the basics of intersection numbers (e.g., Guillemin & Pollack), you can easily see that, for example, $\Bbb CP^2$ and $S^4$ are not diffeomorphic (or homeomorphic). In $\Bbb CP^2$, any two linear copies of $\Bbb CP^1$ intersect once. However, in $S^4$, since $S^4-\{\text{point}\}$ is contractible, any surfaces have zero intersection number. This argument generalizes immediately to $\Bbb CP^n$ and $S^{2n}$ ($n>2$).
H: Using Rule of Inference, How to derive following conclusion from given premises? Question is from the book: Discrete Mathematical Structures with Applications to CS by Tremblay and Manohar. It is an exercise problem. But, unfortunately, there is no help available on answers, or solutions of this book. I have tried to solve this problem but couldn't get the desired conclusion. Premise 1: $P \rightarrow Q$, Premise 2: $(\neg Q \lor R) \wedge \neg R$ Premise 3: $ \neg (\neg P \wedge S)$ Conclusion: $ \neg S$ Solution: $(\neg Q \lor R)$ $\wedge$ $-R$..............[Introducing Premise 2] $(\neg Q \lor R)$.........................[Tautologically Implies, 1, Simplification] $Q \rightarrow R$.............................[Tautologically Implies, 2, Converting Disjuction To Implication] $P \rightarrow Q$.............................[Introducing Premise 1] $P \rightarrow R$.............................[Tautologically Implies, 4, 3, Transitivity Law] $ \neg (\neg P \wedge S)$.......................[Introducing Premise 3] $P \lor \neg S$............................[Tautologically Implies, 6, DeMorgan's Law] $ \neg S \lor P$............................[Tautologically Implies, 7, Commutative Law] $S \rightarrow P$............................[Tautologically Implies, 8, Converting Disjuction to Implication] $S \rightarrow R$............................[Tautologically Implies, 9, 5, Transitivity] $\neg S \lor R$............................[Tautologically Implies, 10, Converting Implication to Disjuction] What wrong I did? I am getting $\neg S \lor R$ instead of $\neg S $ AI: hint From premisse $ 2$, use distributivity to get $ \lnot Q \wedge \lnot R$ because $ R\wedge \lnot R $ is false. by simplification, you have $ \lnot Q$. by contrapositive of premmisse $ 1$, you get $ \lnot P.$ Finally, using premisse $ 3$, De Morgan's law and disjunctive syllogism, you have the conclusion $\lnot S$.
H: Understanding Serge Lang's Definition of Homotopy I have been following Serge Lang's Complex Analysis text book and today I came across a chapter on homotopy. I have trouble visualising and honestly, understanding the definition that he has given in his book. Here is the definition from his book Could somebody explain to me how I can visually interpret this? I would also be really grateful if someone had a graphic or visual that would illustrate what is meant in this definition. Any help will be appreciated. AI: By definition, $\psi(t,c)=\gamma(t)$. Since $\psi$ is continuous, if $c_1$ is slightly bigger than $c$, then $t\mapsto\psi(t,c_1)$ is a path which is close to $\gamma$. And if $c_2$ is slightly bigger than $c_1$, then $t\mapsto\psi(t,c_2)$ is a path which is close to the previous one. And so on, until you reach $d$. So, $\psi$ deforms $\gamma$ into $\eta$.
H: Determining the validity of a basis with unnecessary vectors If we have $W=\{1-x,1-x+x^2,1+x^2,1-x-x^2\}$ find out of this set forms a basis for $P_2$ I put it into an matrix and row reduce it to get: $$\begin{bmatrix}1&0&0&2 \cr 0&1&0&-1\cr 0&0&1&0\end{bmatrix}$$ I noticed that while the first three columns are in RRE the fourth column is not and I was wondering: Could we just take out the last equation in the set $W$ and that be the basis? Is this even considered a basis since we have the last column not in RRE? If we are to use the RRE method to determine if this is a basis would we want $0'$s above and below each pivot points since we want to have all of the coefficients be $0$ after doing the RRE? AI: On the first question: Try it out! Repeat your process and check whether the resulting matrix still has full rank. On the second question: It is not a basis but a generating set, since a basis is usually defined to be linearly independent. In particular, this fixes how many vectors you have in a basis of some vector space $V$ to be $\operatorname{dim} V$, which is $3$ in your case (I think, anyways). On the third question: You would want the coefficient matrix to RRE to yield the identity matrix, yes (can you figure out why given the answers to the 2 questions above?)
H: Why do three non collinears points define a plane? I've just started looking at the axioms of 3D Geometry. The first one that I encountered is this one: "Three non collinear points define a plane" or " Given three non collinear points, only one plane goes through them" I know that it is an axiom and it is taken to be true but I don't understand the intuition behind it. I understand that if I take one point or any number of collinear points, then I can draw infinite planes just by rotating around the line that connects these points, but why do we need 3 non collinear points to define a plane, why not more? And why, given three non collinear points, does only one plane go through them? Why not two or three? AI: Two points determine a line (shown in the center). There are infinitely many infinite planes that contain that line. Only one plane passes through a point not collinear with the original two points:
H: Why are interior products necessary when 1-forms are already dual to vectors? I am reading a book about differential geometry that introduces an operator $i$, called the "interior product," that takes vectors and produces something that can act on 1-forms. Their rules are, $$ i\left(\frac{\partial}{\partial x_i}\right)\mathrm{dx_j} = \delta_{ij} $$ and, $$ i\left(\frac{\partial}{\partial x_i}\right)f = 0 $$ Now, the second rule is unique, and is definitely not what the vector $\partial/\partial x$ would do normally. Still, I don't understand why the first one should apply. Vectors are already dual to one-forms, so why do we need a map $i$ that takes vectors to something that can act on one-forms? This is especially bothersome to me, because interior products usually allow you to measure vectors against other vectors in the same space. I am not sure how to interpret these laws, which apparently describe an "inner product" between spaces that already have a bilinear function to be dual under! AI: No, the point is that interior product maps $(k+1)$-forms to $k$-forms (for all $k\ge 0$). This has nothing to do with inner products. Indeed, it's the process of undoing the wedge product (so-called "adjoint operation"), which sends $k$-forms to $(k+1)$-forms. It's based on contraction (evaluating a $1$-form on a vector), and that, indeed, is the rule they gave you for the case $k=0$. More generally, if we have a vector $v$ and, for example, two $1$-forms $\omega$ and $\eta$, then $$\iota_v(\omega) = \omega(v) \quad\text{and}\quad \iota_v{\eta} = \eta(v),$$ and then $$\iota_v(\omega\wedge\eta) = \omega(v)\eta - \eta(v)\omega.$$ (I'll leave it to you to figure out why the negative sign is there.) An important application of this notion is the following: If you have an oriented surface $S$ in $\Bbb R^3$, with outward pointing unit normal $\vec n$, then you get the area $2$-form on $S$ (written $dS$ or $d\sigma$ in calculus books) by taking $${"}dS{"} = \iota_{\vec n} (dx_1\wedge dx_2\wedge dx_3).$$ This generalizes to oriented hypersurfaces in any dimension.
H: How can I find $\theta$ where $\sin \theta=X$ and $\cos \theta=Y$? I have two variables, $X$ and $Y$. Both are between $-1$ and $1$, inclusive, but I need to find the angle, of which the sine is $X$, and the cosine is $Y$. How can I do that? This is probably a dumb question but it's been troubling me for a while now. AI: First you have to check whether $X^2+Y^2=1$; such an angle exists if and only if this holds. In that case, $\theta=\arctan({X \over Y})+k\pi$ for a suitable $k\in \Bbb Z$: take $k$ even when $Y>0$ and $k$ odd when $Y<0$ (and when $Y=0$, take $\theta=\pm\pi/2$ according to the sign of $X$).
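In code, the two-argument arctangent does this quadrant bookkeeping automatically; a small Python sketch (my own, with an arbitrary tolerance):

```python
import math

def angle_from_sin_cos(X, Y, tol=1e-9):
    """Return theta in [0, 2*pi) with sin(theta) = X and cos(theta) = Y,
    assuming X**2 + Y**2 == 1 (up to tol); otherwise no such angle exists."""
    if abs(X * X + Y * Y - 1.0) > tol:
        raise ValueError("X and Y must satisfy X^2 + Y^2 = 1")
    return math.atan2(X, Y) % (2 * math.pi)   # note the argument order: atan2(sin, cos)

print(angle_from_sin_cos(math.sin(2.5), math.cos(2.5)))   # 2.5
print(angle_from_sin_cos(1.0, 0.0))                       # pi/2
```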
H: Given two points in $\mathbb{R}^d$, A and B, find the point in $\mathbb{R}^d$ that is most nearly x distance from A and y distance from B Consider two points in $\mathbb{R}^d$, $A$ and $B$. Now, consider two scalar quantities, $x$ and $y$. I want to find the point in $\mathbb{R}^d$ that is closest to being $x$ distance from $A$ and $y$ distance from $B$ Here's what I believe is the mathematical representation of what I'm trying to say, but if it still doesn't make any sense, I attached an image below just to demonstrate a simpler example in 2D. $\displaystyle\min_{\forall m \in \mathbb{R}^d} \mbox{ } |\|A - m\| - x| + |\|B - m\| - y|$ Just to recap, I want a way, given the coordinates of two points, $A$ and $B$, in $\mathbb{R}^d$, of finding the point which is $x$ distance from $A$ and $y$ distance from $B$. If no such point exists, I want the point that if closest to satisfying this. AI: We are gonna have 3 cases. Here is a picture in $\mathbb R^2$: If the surface of these balls intersect, i.e. $|x-y|<|A-B|<x+y$, then we have exact solutions. In fact there is an entire space of solutions which looks like $S^{d-1}$. In the plane, we get the pair $M_1$ and $M_2$. The second case is $|A-B|>x+y$. (Sorry the image is out of order). This is when the balls do not intersect. Given a point $m$, we are trying to minimize the discrepancy between $|m-A|$ and $x$, also the discrepancy between $|m-B|$ and $y$. This occurs along the line segment $PQ$ which I drew in the figure, as @MBW noted any point along that line segment is a minimum for $f(m)$. Third case is $|A-B|<|x-y|$. Then the circles are "too close" so to speak. Assume without loss of generality the ball around $A$ is contained inside the ball around $B$. Then any point along the line segment $BP$ as depicted will minimize the distance. If you want this to have a unique solution then replace $f(m)$ by $$ f^\star(m) = ||A-M|^2-x|^2 + ||B-M|^2-y|^2 $$ For this function, in the first case, there is still a space of exact solutions. In the second and third case, along the line segment which was considered, take the midpoint. Edit: There is one error in the figure, on the third figure, the point P should actually be reflected across the point $A$, and instead of $BP$ we consider the segment $PC$ where $C$ is the intersection of the ray $BA$ and the ball centered at $B$.
H: $C^1$ extension of a harmonic function is harmonic Let $B(0,r)$ denote the ball of radius $r$ centered at the origin in $\mathbb{R}^d$, $d\geq 2$. Let $0<r_0<1$. Suppose I have a real-valued function $v$ that is harmonic in the annulus $\{r_0<|x|<1\}$ that vanishes on $\partial B(0,1)$. Now let $\tilde{v}$ be an extension of $v$ to $\{r_0<|x|<1/r_0\}$ defined by $\tilde{v}(x) = -|x|^{2-d}v(|x|^{-2}x)$ for $1<|x|<1/r_0$. It can be shown that $\tilde{v}$ is $C^1$ on $\{r_0<|x|<1/r_0\}$. In a paper that I am reading (Lemma 1), the author seems to makes the claim that from the above, it follows that $\tilde{v}$ is harmonic on $\{r_0<|x|<1/r_0\}$. My question is, what is the general fact that is being used here? Is it true that a $C^1$ extension of a harmonic function is harmonic? My sense here is that this has something to do with the fact that (real) harmonic functions are (real) analytic, but I can't seem to find any reference on this. AI: $\tilde v$ is the Kelvin transform of $v$, not just some regular extension. The Kelvin transform of a harmonic function (in this case with respect to the unit ball) is always harmonic in the complementary domain (interior domains become exterior domains etc. Just find the Laplacian of $\tilde v$ and use the fact that $v$ is harmonic to convince yourself.
H: "Lazy" Random Walk I would like to discuss a slightly different kind of random walk on $Z$ in which we include the probability of being "stuck" in the same place. Let us denote as: $$p/3 \; \text{the prob. of right step (which is +1 on Z)} $$ $$p/3 \; \text{for left step (which is -1 on Z)} $$ $$ 1-\frac{2}{3}p \; \text{for being in the same place.} $$ Here $p \in (0,1) $. How do I get from this to the diffusion equation? I denote as $n_1, n_2, n_3$ the amount-of-steps random variables, respectively to the left, to the right, and staying in place. Obviously $(1) \; N=n_1+n_2+n_3$ is the total number of steps. As expectation values I get: $$ <n_1> = <n_2> = N\frac{p}{3}$$ While variances: $$ V[n_1]=V[n_2]=N\frac{p}{3}(1-\frac{p}{3})$$ I define the distance random variable $d_n=n_1 -n_2 $ as usual, but now I cannot write $n_2(n_1,N)$ as I do in the simpler case of the Bernoulli process; this is because of (1). How can I proceed? AI: Let $P_N(x)$ denote the probability of being at $x$ after $N$ steps, assuming $P_0(0)=1$. Then: $$P_{N+1}(x)= \left(1-\frac{2}{3}p\right)P_N(x)+\frac{p}{3}(P_N(x-1)+P_N(x+1)),$$ which is obtained by conditioning on the walk's possible value at step $N$. If you want the continuous limit, write the above as: $$P_{N+1}(x)-P_N(x)=\frac{p}{3}\left(P_{N}(x-1)-2P_N(x)+P_N(x+1)\right)$$ so that after rescaling your random walk in $x$ and $N$ appropriately (a time step $\Delta t$ and a space step $\Delta x$ with $\Delta x^2/\Delta t$ held fixed), you get: $$\partial_t P_t(x)=\frac{p}{3}\partial_{xx}P_t(x)$$
H: Logic - Identifying domain of discourse, variable, and predicate I have the following question that I am trying to figure out: 'All swans are white' Identify a natural domain of discourse for this sentence, the variable and the predicate. I am trying to study this on my own, and this was my own practice question from Coursera. I believe the statement itself is the predicate, the domain of discourse , "all swans", and the variable is "swan". How should I think about this statement in regards to each of these definitions? AI: I see two ways of formalizing this sentence which give different answers to the questions. One is to have the domain of discourse be all swans. The predicate is "is white" or "are white"-predicates do not care how many things satisfy them but English does. You can write $x$ is white as $W(x)$ and the whole sentence $\forall x~W(x)$. The variable would be one swan, which we refer to as $x$. Another approach is that swans are things in your domain of discourse, but not the whole thing. This statement picks out the swans and says they are all white. A natural domain of discourse would be all birds, all animals, all things on earth, or something similar. Now you have two predicates, $S(x)$ says $x$ is a swan and $W(x)$ says $x$ is white. You would then write the sentence $\forall x~(S(x)\implies W(x))$ Incidentally, the statement is false. I have seen black swans in the wild, but don't have a photo handy.
H: Using sum-to-product formula to solve $\sin(2\theta)+\sin(4\theta)=0$ Trying to use the sum-to-product formula to solve $\sin(2\theta)+\sin(4\theta)=0$ over the interval $[0,2\pi)$, but I'm missing solutions. $$\sin(2\theta)+\sin(4\theta)=0$$ Apply sum-to-product formula: $$2\sin\left(\frac{2\theta+4\theta}{2}\right)\cos\left(\frac{2\theta-4\theta}{2}\right)=0$$ $$2\sin(3\theta)\cos(-\theta)=0$$ By odd-even identities: $\cos(-\theta)=\cos(\theta)$ $$2\sin(3\theta)\cos(\theta)=0$$ $$\sin(3\theta)\cos(\theta)=0$$ By the zero-product property $\sin(3\theta)=0$ or $\cos(\theta)=0$ Then solving for theta gives: $\theta=0, \frac{\pi}{2}, \frac{3\pi}{2}, \pi$. However, there are missing solutions $\frac{\pi}{3}, \frac{2\pi}{3}, \frac{4\pi}{3}, \frac{5\pi}{3}$. A solution online used double angle identities instead: $$\sin(2\theta)+\sin(4\theta)=0$$ $$\sin(2\theta)+\sin(2*2\theta)=0$$ Apply double angle identity for: $\sin(2*2\theta)$ $$\sin(2\theta)+2\sin(2\theta)\cos(2\theta)=0$$ Factor out $\sin(2\theta)$ $$\sin(2\theta)*[1+2\cos(2\theta)]=0$$ Apply double angle identities: $\cos(2\theta)= 1-2\sin^2(\theta)$ $\sin(2\theta)= 2\sin(\theta)\cos(\theta)$ $$2\sin(\theta)\cos(\theta)*[1+2(1-2\sin^2(\theta))]=0$$ $$2\sin(\theta)\cos(\theta)*[-4\sin^2(\theta)+3]=0$$ By the zero-product property $2\sin(\theta)\cos(\theta)=0$ or $-4\sin^2(\theta)+3=0$ Which further simplifies to $\sin(\theta)=0$, $\cos(\theta)=0$, or $-4\sin^2(\theta)+3=0$ Solving for theta now gives all possible solutions over $[0, 2\pi)$. My questions are: (1) Can the sum-to-product formula be used to solve this equation? (2) If so, why were solutions missing when using the sum-to-product formula but not the double angle identities? What was I doing incorrectly? AI: This is an excellent way to proceed with this problem, and the reduction to $\sin(3\theta)\cos(\theta)=0$ is great; this implies that $\sin(3\theta)=0$ or $\cos(\theta)=0$. The solutions to $\cos(\theta)=0$ are $\theta = \dots,\frac\pi2,\frac{3\pi}2,\dots$. The solutions to $\sin(\alpha)=0$ are $\alpha = \dots, 0, \pi, 2\pi, \dots$. But we have $\sin(3\theta)=0$, and so the solutions are $3\theta = \dots, 0, \pi, 2\pi, \dots$, which is the same as $\theta=\dots,0,\frac\pi3,\frac{2\pi}3,\pi,\frac{4\pi}3,\frac{5\pi}3,\dots$.
H: Create Log function I'm a programmer. I'm working with a sensor that has given the following graph. I would like to make a function where I put the value of Rs/R0 in. The outcome of this function is the value of ppm of alcohol. I understand that precision is impossible, so for the function I'm going for a straight line from the point (200 , 2.9) to the point (10000 , 0.68). I tried to google how you create log functions but I got stuck with the problem that I don't know the value of y (in my case Rs/R0) when x (in my case ppm) is 1. All tutorials online I found used that. Is there another way to calculate the function? AI: In your graph $\frac {R_s}{R_0}$ decreases with the concentration in ppm, and fitting a straight line through two points on log-log axes means fitting a power law. You are fitting a line through two points, which you do by choosing your functional form and evaluating the coefficients. If you choose your form to be $\log_{10} \frac {R_s}{R_0}=a+b\log_{10}(c)$ where $c$ is the concentration in ppm, each point gives one equation in $a,b$. You have two equations in two unknowns. $$0.46=a+b\cdot 2.30\\ -0.17=a+b\cdot 4\\ -0.63=1.70b\\ b\approx-0.37\\ a\approx 1.31$$ You can raise $10$ to the power of each side, getting $$\frac {R_s}{R_0}=10^a\cdot c^b\approx 20.4\,c^{-0.37}$$ where the coefficients are rounded to two figures; keep a few more digits while you follow through the calculation. Since you actually want ppm as a function of $R_s/R_0$, invert this: $c=\left(\frac{R_s/R_0}{10^a}\right)^{1/b}$.
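Since the question comes from a programming context, here is a minimal Python sketch of the same two-point fit together with the inverse function (ppm from a measured $R_s/R_0$). The function and variable names are my own; the calibration points are the ones quoted in the question.

```python
import math

# Calibration points read off the sensor chart: (ppm, Rs/R0).
P1 = (200.0, 2.9)
P2 = (10000.0, 0.68)

# Fit log10(Rs/R0) = a + b*log10(ppm) through the two points.
b = (math.log10(P2[1]) - math.log10(P1[1])) / (math.log10(P2[0]) - math.log10(P1[0]))
a = math.log10(P1[1]) - b * math.log10(P1[0])

def ratio_to_ppm(rs_over_r0):
    """Invert Rs/R0 = 10**a * ppm**b to get the alcohol concentration in ppm."""
    return (rs_over_r0 / 10**a) ** (1.0 / b)

print(round(a, 3), round(b, 3))     # roughly 1.31 and -0.37
print(ratio_to_ppm(2.9))            # ~200
print(ratio_to_ppm(0.68))           # ~10000
```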
H: Vector Space - how to visualize it for understanding? I read on Wikipedia about vector spaces, but I don't understand them in a way that I can visualize the vector spaces in my head. During the process of understanding, I had several concepts in my head and I am at a point now, where I am totally confused. Maybe I am in a dead end as well. I have drawn four of these concepts, so you can imagine what happened in my head. Pictures: my approaches for vector spaces Picture A $ \vec{r} $ is the vector space, which means the space is linear on the line of the vector. $ \vec{r} $ contains infinite vectors like $ \vec{a} $, $ \vec{b} $ and $ \vec{c} $. The last three vectors only exist in $ \vec{r} $ or vector spaces which are bigger or equal to themselves. An orthogonal vector of $ \vec{b} $ is not a part of $ \vec{r} $. Picture B The vector space is an area where one or multiple vectors like $ \vec{r} $ and $ \vec{m} $ exist. The space is infinite, which doesn't make much sense to define a space. But it is a space. In the picture it is the striped zone of the diagram. Picture C $ \vec{r} $ can be build by the linear combination of $ \vec{a} $ + $ \vec{b} $, $ \vec{c} $ + $ \vec{d} $ or any other combination of two vectors within the red striped zone. But what is with combinations outside of the red striped zone? Here it destroys my concept probably. Picture D $ \vec{r} $ is the shortest vector to the target point. $ \vec{a} $, $ \vec{b} $, $ \vec{c} $ and $ \vec{d} $ are one linear combination of multiple possible linear combinations to the target. Is the red striped area the vector space or red and yellow together? Is one of my concepts the right concept of vector spaces? I really appreciate your inputs and hope to get a explanation which my brain can visualize. Maybe you could draw it? AI: The following are primary examples of vector spaces (over the real numbers): A one point set, regarding the point as the origin, i.e. the zero vector $\{0\}$. This space is $0$ dimensional. A full line through the origin (basically it's along the lines of your picture A, but we also consider negative and every multiples of its vectors). The lines are $1$ dimensional. A full plane through the origin, including all its points. These are $2$ dimensional. The physical 3d space you can consider as a $3$ dimensional vector space after fixing a point for origin: you can add vectors and multiply them by real numbers: that's what the abstract definition says. We can observe that in all these geometric examples, the elements of the given set can be coordinatized by base vectors, namely we have to fix exactly as many base vectors as the given 'dimension'. This, on one hand, means that the elements of the given set can be represented by a single coordinate (for a line) / a pair of coordinate numbers (for a plane) / a triple of coordinates (for the space). But this thing we can simply continue in the algebraic way: For any positive integer $n$, we can define a (canonical) $n$ dimensional vector space: $\Bbb R^n$ consists of the $n$-tuples of real numbers. You can add them and multiply by any real number, coordinatewise. You can check the conditions that it indeed defines a vector space in the abstract sense.
H: Convergence in laws I'm currently stuck on this exercise and don't know how to start. Let $\{X_n\}_{n \ge 1}$ be a sequence of independent random variables on $(\Omega, \mathbb{A}, \mathbb{P})$ with: $$ F_{X_n}(t) = \left(1 - \frac{1}{1+t^2}\right)1_{[0,\infty)}(t) $$ For which $p \in (0,\infty)$ is $X_1$ in $L^p(\mathbb{P})$? Consider $n \in \mathbb{N}$ and the random variable: $$ Z_n = \min(\sqrt nX_1,\sqrt nX_2,...,\sqrt{n}X_n) $$ Show that $Z_n$ converges in law when $n \rightarrow \infty$ to a random variable with a density $f(x)$ that needs to be determined. My attempt so far: $$ F_{X_1}(t) = \mathbb{P}(X_1 \le t) = \left(1 - \frac{1}{1+t^2}\right)1_{[0,\infty)}(t) = \int_{0}^{\infty} \frac{t^2}{1+t^2}dt $$ and $\int_{0}^{\infty} \left(\frac{t^2}{1+t^2}\right)^p dt$ doesn't converge for any $p$. AI: $F_{X_n}$ is the distribution function of $X_n$, not the density function. Differentiation gives $f_{X_n}(t)=\frac {2t} {(1+t^{2})^{2}}$ for $t\ge 0$. Show that $E|X_1|^{p}=\int |t|^{p} f_{X_n}(t)dt <\infty$ iff $p <2$. Now, for $z\ge 0$, $P(Z_n >z)=(P(X_n >z /\sqrt n))^{n}=(\frac 1{1+z^{2}/n})^{n} \to e^{-z^{2}}$. Hence $Z_n \to Z$ in distribution where $P(Z>z) =e^{-z^{2}}$ for $z\ge 0$. Differentiate this to get the density of $Z$.
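A quick Monte Carlo check of the limit (my own NumPy sketch; the inverse-CDF sampler follows from $F_{X}(t)=1-\frac{1}{1+t^2}$ for $t\ge 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 10000

# Inverse CDF: u = 1 - 1/(1+t^2)  =>  t = sqrt(u / (1 - u)).
u = rng.uniform(size=(reps, n))
x = np.sqrt(u / (1.0 - u))

z = np.sqrt(n) * x.min(axis=1)      # Z_n = min(sqrt(n) X_1, ..., sqrt(n) X_n)

# Compare the empirical tail with the predicted limit P(Z > z) = exp(-z^2).
for t in (0.5, 1.0, 1.5):
    print(t, (z > t).mean(), np.exp(-t ** 2))
```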
H: How to compare which interest rate is better: compounded annually vs compounded 3 times a year Having a little trouble with getting the answer to this question: How much better is the return on a 4% yearly interest rate investment that is compounded 3 times per year as opposed to compounded yearly? I tried to set up the equation as $10000(1.04)^n = 10000(1+.04/3)^{3n}$ with $n=1$, then to compare them: $10400 / 10405.35704 = 0.999485$. I'm guessing I am not setting things up right... I am supposed to get an answer of 1% - 1.5% better. AI: You are doing the right thing in comparing $1.04$ to $\left(1+\frac {0.04}3\right)^3$. You don't want to write it as an equation because they are not equal. The difference is $\left(1+\frac {0.04}3\right)^3-1.04\approx0.0005357$ as you found. The percentage difference you are supposed to find is the increased percentage of interest $$\frac {0.0405357}{0.04} \approx 1.0134$$ which is an increase of $1.34\%$ in the interest received, not an increase of that much in the effective interest rate. Anybody using percentages should make clear what the number is a percentage of. I think it is better to state that you are getting an increase of $0.05357$ percentage points in the effective interest rate (from $4\%$ to about $4.054\%$).
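The same comparison in a few lines of Python (just an illustration of the arithmetic above):

```python
principal = 10_000
annual = principal * 1.04                        # compounded once per year
thirds = principal * (1 + 0.04 / 3) ** 3         # compounded three times per year

print(annual, thirds)                                 # 10400.0 vs ~10405.36
print(100 * (thirds - annual) / principal)            # ~0.054 percentage points more yield
print((thirds - principal) / (annual - principal))    # ~1.0134, i.e. ~1.34% more interest earned
```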
H: Finding global extrema Click here for question I don't know how to find the global extrema of this since taking the partial derivatives with respect to x and y leaves me with no x and y's to find zeros of the function. Please help. AI: In calc 1, how did you find extrema? You took the derivative of the objective function, found where it is zero, or where the derivative was undefined. Then you checked the endpoints. Now you have a second variable to work with. You are going to do the same thing. Take the derivatives, look for where they are zero, then check the boundary. The partial derivatives of $q(x,y)$ are defined everywhere and never zero. So, we jump to the boundary. A couple of ways to go here: One is to parameterize the boundary. $x = \frac 12 \cos t\\ y = \sin t$ $q(t) = 4\cos t + 3\sin t$ and find $t$ to maximize and minimize $q.$ $q(\arctan \frac {3}{4}) = 5\\ q(\arctan \frac {3}{4}+\pi) = -5$ Another would be something along the lines of a Lagrange multiplier $\nabla (8x + 3y) = \lambda \nabla (4x^2 + y^2 - 1)\\ (8,3) = \lambda (8x, 2y)\\ x = \frac {1}{\lambda}\\ y = \frac {3}{2\lambda}$ Constrained by $4x^2 + y^2 = 1$ $\frac {4}{\lambda^2} + \frac {9}{4\lambda^2} = 1\\ 16 + 9 = 4\lambda^2\\ \lambda = \pm \frac {5}{2}\\ x = \pm \frac {2}{5}\\ y = \pm \frac {3}{5}\\ q(\frac {2}{5},\frac {3}{5}) = 5\\ q(-\frac {2}{5},-\frac {3}{5}) = -5$
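The two answers can be sanity-checked numerically; here is a small NumPy sketch of the parameterization approach (assuming, as in the answer, that the objective is $q(x,y)=8x+3y$ on the ellipse $4x^2+y^2\le 1$):

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 100001)
x, y = 0.5 * np.cos(t), np.sin(t)        # boundary 4x^2 + y^2 = 1
q = 8 * x + 3 * y

i_max, i_min = q.argmax(), q.argmin()
print(q[i_max], x[i_max], y[i_max])      # ~ 5 at (0.4, 0.6) = (2/5, 3/5)
print(q[i_min], x[i_min], y[i_min])      # ~ -5 at (-0.4, -0.6)
```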
H: If $E[X^p] < \infty$, then $\lim_{x\to \infty} x P(|X|\gt {x}^{1/p})=0$ In this question, the proof of the following claim was solved: If $E[X] < \infty$, then $\lim_{x\to \infty} x P(|X|\gt x)=0$ . Now, I want to ask about the following claim: If $E[X^p] < \infty$, then $\lim_{x\to \infty} x P(|X|\gt {x}^{1/p})=0$ . I think this statement is true for $^\forall p>0$, but I couldn't prove it. Maybe I can prove it similarly to the former statement. Thank you in advance. AI: By the first statement with $X$ changed to $|X|^{p}$ we see that $E|X|^{p} <\infty$ implies $xP(|X|^{p} >x) \to 0$ or $x P(|X| >x^{1/p}) \to 0$.
H: About essential range and essential supremum $\newcommand{\esssup}{\mathrm{ess\,sup}}$$\newcommand{\essrng}{\mathrm{ess\,range}}$ I am trying to prove that $\esssup(f) = \sup(\essrng(f))$, where we define $$ \esssup(f) = \inf \{b \in \mathbb{R}_+ : \mu(f^{-1}((b, \infty))) = 0\} $$ and similarly $$ \essrng(f) = \{w \in \mathbb{R}_+ : \mu(f^{-1}(B(w, \epsilon))) > 0\}. $$ Actually, I already showed that $\esssup(f) \leq \sup(\essrng(f))$, but I have not been able to prove the other direction. The answer on this related question hasn't been useful for me to prove the desired reverse inequality. If you could give me a hint to prove it, I'll be really grateful. AI: $\newcommand{\esssup}{\mathrm{ess\,sup}}$$\newcommand{\essrng}{\mathrm{ess\,range}}$ You want to prove that $\esssup(f) \geq \sup (\essrng(f))$. For this, it's enough to show that if $w \in \essrng(f)$, then $\esssup(f) \geq |w|$. Hint: You can do this by contradiction: Assume there is $w_0 \in \essrng(f)$ such that $\esssup(f)<|w_0|$ and try to find a contradiction by playing around with the definitions of $\esssup$ and $\essrng$. Solution: (Try not to look at this until you've tried to solve it by yourself) I am assuming that the measure space here is called $X$. By definition of essential supremum we have $|f(x)| \leq \esssup(f)$ a.e. $[\mu]$, so$$ \mu\big( \left\{ x \in X : |f(x)| > \esssup(f) \right\} \big) =0$$Take any $w_0 \in \Bbb{C}$ and suppose that $|w_0| > \esssup(f)$. Then, there is $\varepsilon_0$ such that $|w_0| - \varepsilon_0 >\esssup(f)$. Therefore, if $x \in X$ is such that $|f(x)-w_0|< \varepsilon_0$ the reverse triangle inequality implies that $|f(x)|>|w_0|-\varepsilon_0 > \esssup(f)$. Hence $$\{x \in X : |f(x)-w_0|< \varepsilon_0 \} \subset \{ x \in X : |f(x)| >\esssup(f)\}$$However, if $w_0 \in \essrng(f)$,$$0 < \mu\big( \{x \in X : |f(x)-w_0|< \varepsilon_0 \} \big) \leq \mu\big( \{ x \in X : |f(x)| > \esssup(f) \} \big)=0$$a contradiction! Then we must have $|w| \leq \esssup(f)$ for any $w \in \essrng(f)$, as desired.
H: Suppose $a,b\in \mathbb{Z}$ are relatively prime and $c\in \mathbb{N}$ is a divisor of $a+b$. **Verify my proof** that gcd$(a,c)$=gcd$(b,c)=1$. I am looking for someone to verify my proof of the problem in the title. Of course, if you believe it to be wrong, you are welcome to shred it into a million pieces and if I made any logical errors please point them out. I'll write my solution out in the most coherent way I know how to. I am also interested, even if the proof provided below is correct, is there a "simpler" (whatever you consider that to mean) way to approach this? Proof We have that $c$ divides $a+b$. This means $\exists \lambda \in \mathbb{Z}$, \begin{equation*} a+b = \lambda c. \end{equation*} Note that $a=\lambda c -b$ and $b = \lambda c - a$. This will be immediately useful. Let $x \in \langle a \rangle + \langle c \rangle$. Then $x=sa+tc$ for some $s,t\in\mathbb{Z}$. $sa+tc = s(\lambda c - b) + tc = (-s)b + (s\lambda + t)c \Rightarrow x \in \langle b \rangle + \langle c \rangle$. This means $\langle a \rangle + \langle c \rangle \subseteq \langle b \rangle + \langle c \rangle$. It is easy to verify that $\langle b \rangle + \langle c \rangle \subseteq \langle a \rangle + \langle c \rangle$ in the same way and hence $\langle a \rangle + \langle c \rangle = \langle b \rangle + \langle c \rangle$. Therefore we have gcd$(a,c)=$gcd$(b,c)$. To show that $a$ is relatively prime to $c$ and $b$ is relatively prime to $c$ we suppose that there are some $f,g\in\mathbb{Z}$ with $f\neq \pm 1$ and $g\neq \pm 1$ such that $a=fc$ and $c=fa$, $b=gc$ and $c=gb$. Then \begin{equation*} a=fc=(fg)b \end{equation*} Now, since we have established that $f$ and $g$ are integers, $fg$ is an integer. Furthermore, $fg\neq \pm 1$. This is a contradiction, since $a$ and $b$ are relatively prime. Therefore we conclude that \begin{equation*} \text{gcd}(a,c) = \text{gcd}(b,c) = 1. \end{equation*} AI: $\gcd(a,c)$ divides $a$ and divides $c$. And $c|a+b$ so $\gcd(a,c)|a+b$ so $\gcd(a,c)|b$ so $\gcd(a,c)$ is a common factor of $a$ and $b$. But $a$ and $b$ are relatively prime so $\gcd(a,c) =1$. Same argument shows $\gcd(b,c)= 1$
H: a linear map on $W$ Define $W = \{(a_1, a_2,\cdots) : a_i \in \mathbb{F}, \exists N\in\mathbb{N}, \forall n \geq N, a_n = 0\},$ where $\mathbb{F} = \mathbb{R} $ or $\mathbb{C}$ and $W$ has the standard inner product, which is given by $\langle(a_1,a_2,\cdots), (b_1,b_2,\cdots)\rangle = \sum_{i=1}^\infty a_i \overline{b_i}$ ($\overline{b_i}$ is simply the complex conjugate of $b_i$). Prove that the linear map $T : W \to W$ given by $T(a)_j = \sum_{i=j}^\infty a_i$, where $T(a) = (T(a)_1, T(a)_2,\cdots),$ has no adjoint. I know that to show that $T$ has an adjoint $T^*$, it suffices to show that for all $a,b \in W$, $\langle T(a),b \rangle = \langle a, T^* b\rangle$. So to show that $T$ does not have an adjoint, it suffices to show that there is no linear map $T^*$ so that for all $a,b \in W \langle T(a),b\rangle = \langle a,T^*b\rangle$ For any $a \in W,$ we may find $N\in\mathbb{N}$ st $i \geq N\Rightarrow a_i = 0.$ Hence $$\langle T(a),b\rangle = \sum_{i=1}^\infty \left(\sum_{j=i}^\infty a_j\right) \overline{b_i}=\sum_{i=1}^{N-1} \left(\sum_{j=i}^{N-1} a_j\right)\overline{b_i}.$$ Also, $$\langle a, T^*b\rangle = \sum_{i=1}^\infty a_i\overline{(T^*b)_i}=\sum_{i=1}^{N-1} a_i\overline{(T^*b)_i}.$$ Hence $$\langle T(a),b\rangle = \langle a,T^*b\rangle \iff \sum_{i=1}^{N-1} \left(\sum_{j=1}^{N-1} a_j \overline{b_i}-a_i \overline{(T^*b)_i}\right) = 0.$$ I know I'm supposed to find a $b \in W$ that'll make it impossible for $\langle a,T^*b\rangle = \langle T(a),b\rangle$ for all $a\in W$, but I'm unsure how to find this. AI: Let $e_i\in W$ satisfy $(e_i)_j=1$ if $i=j$ and $(e_i)_j=0$ otherwise. Then $$\langle Te_i,e_j\rangle=\cases{1\,\,{\rm if}\,\,j\leq i,\\0 \,\,{\rm if}\,\,j>i.}$$ Thus if $T^*$ exists: $$\langle e_i,T^*e_j\rangle=\cases{1\,\,{\rm if}\,\,j\leq i,\\0 \,\,{\rm if}\,\,j>i.}$$ So we have $(T^*e_j)_i=1$ for all $i\geq j$ which contradicts $T^*e_j\in W$.
H: Singular Measure with Dense Support Is there a measure $\mu$ with support $S \subseteq [0,1]$ such that it satisfies: (i) $S$ has Lebesgue measure zero but is dense on $[0,1]$ with respect to the standard metric; (ii) $\mu(S)<\infty$; and (iii) $\forall \epsilon>0$, $\forall a,b \in [0,1]$ such that $b-a\geq \epsilon >0$, $\mu((a,b])\geq k(\epsilon)>0$. The Cantor distribution fails (i). It is unclear to me that taking the route of the answers here ensures (iii). The main reason I am asking this is that this possibility is mentioned in Diaconis and Freedman, 1990, p. 1317, but I'm struggling to construct such a measure. Edit: (iii) should hold $\forall \epsilon>0$. Apologies for the imprecision. AI: let $S$ be the set of dyadic rational numbers in $[0,1]$ (i.e. all rational numbers of the form $a/2^k$, with $0\leq a\leq 2^k$ and $a$ odd), and let $\mu(\{a/2^k\})=1/2^{2k}$. Property (i) is immediate. Further,$\mu$ is finite because there are at most $2^k$ dyadic rationals in $[0,1]$ with denominator $2^k$, so the measure of all such rationals is at most $2^k/2^{2k}=1/2^k$, and summing over all possible $k$ gives (ii). to see $(iii)$, suppose $b-a>\epsilon>0$. Choose $k(\epsilon)$ so that $2^{-k(\epsilon)}<\epsilon$. The crucial observation is that there must exist a dyadic rational number in $(a,b)$ with denominator $2^{k(\epsilon)}$ or less. Then $(iii)$ follows immediately from this observation, since $\mu([a,b])$ would then be at least $2^{-2*k(\epsilon)}$. To see why the observation is true, observe that the set of all such dyadic rationals is equal to $0, 1/2^{k( \epsilon)}, 2/2^{k(\epsilon)}, \dots, 1$, i.e. they are evenly spaced with the gap between successive terms being $1/2^{k(\epsilon)}$. Since the distance between $a$ and $b$ is larger than this gap, the observation follows.
H: Suppose that $(z_{n})_{n=1}^{\infty}$ converges to $z$. Show that $\overline{z}_{n}\to\overline{z}$ and $|z_{n}|\to |z|$ as $n\to\infty$. Suppose that $(z_{n})_{n=1}^{\infty}$ is a sequence in $\textbf{C}$ which converges to $z$. Show that $\overline{z}_{n}\to\overline{z}$ and $|z_{n}|\to |z|$ as $n\to\infty$. MY ATTEMPT Let $\varepsilon > 0$. Then there is a natural number $n_{0}\geq 1$ s.t. $|z_{n} - z| < \varepsilon$ whenever $n\geq n_{0}$. Moreover, one has that $|z_{n} - z| = |\overline{z_{n} - z}| = |\overline{z}_{n} - \overline{z}| < \varepsilon$. Thus we conclude that $\overline{z}_{n}\to \overline{z}$ as $n\to \infty$. Similarly, one has that $||z_{n}| - |z|| \leq |z_{n} - z| < \varepsilon$. Thus $|z_{n}|\to|z|$ as $n\to\infty$, and we are done. Could someone please check if the wording of my proof is good enough? Any comments are appreciated. AI: Yes, this is fine. By the way, for the second part it suffices to note that $\lvert \:\cdot\:\rvert:\Bbb{C}\to \Bbb{R}$ is continuous, so that it commutes with limits and $\lim_{n\to\infty} \lvert z_n\rvert=\lvert \lim_{n\to\infty} z_n\rvert=\lvert z\rvert$.
H: Understanding calculation of jth column in matrix AB This is a theorem found in Friedberg's Linear Algebra which I have trouble understanding. A is an $m×n$ matrix, B is an $n×p$ matrix, $u_j$ is the jth column of $AB$ and $v_j$ is the jth column of B. I am having trouble understanding the proof of $u_j=Av_j$. $$u_j=\begin{bmatrix} (AB)_{1j} \\ ... \\ (AB)_{mj} \end{bmatrix}$$ $$=\begin{bmatrix} \sum_{k=1}^n{A_{1k}B_{kj}} \\ ... \\ \sum_{k=1}^n{A_{mk}B_{kj}} \end{bmatrix}$$ Now this is the part where I don't understand. How was is the A factored out? How is this equal to the previous result? $$=A\begin{bmatrix} B_{1j} \\ ... \\ B_{nj} \end{bmatrix}$$ It further states that the column j of $AB$ is a linear combination of the columns of $A$ with the coefficients in the linear combination being the entries of column j of $B$. I cannot see this from the above representation however. Any help explaining this step is appreciated! AI: Perhaps seeing $u_j$ within the context of the entire matrix multiplication result can help: $$\underset{m\times n}A\quad \underset{n\times p}B =\small\begin{bmatrix} \displaystyle\sum_{k=1}^n{A_{1k}B_{k1}} & \displaystyle\sum_{k=1}^n{A_{1k}B_{k2}} & \cdots & \color{blue}{\displaystyle\sum_{k=1}^n{A_{1k}B_{kj}}} &\cdots & \displaystyle\sum_{k=1}^n{A_{1k}B_{kp}} \\ \displaystyle\sum_{k=1}^n{A_{2k}B_{k1}} & \displaystyle\sum_{k=1}^n{A_{2k}B_{k2}} & \cdots & \color{blue}{\displaystyle\sum_{k=1}^n{A_{2k}B_{kj}}} &\cdots & \displaystyle\sum_{k=1}^n{A_{2k}B_{kp}} \\ && \ddots \\ \displaystyle\sum_{k=1}^n{A_{(m-1)k}B_{k1}} & \displaystyle\sum_{k=1}^n{A_{(m-1)k}B_{k2}} & \cdots & \color{blue}{\displaystyle\sum_{k=1}^n{A_{(m-1)k}B_{kj}}} &\cdots & \displaystyle\sum_{k=1}^n{A_{(m-1)k}B_{kp}} \\ \displaystyle\sum_{k=1}^n{A_{mk}B_{k1}} & \displaystyle\sum_{k=1}^n{A_{mk}B_{k2}} & \cdots & \color{blue}{\displaystyle\sum_{k=1}^n{A_{mk}B_{kj}}} &\cdots & \displaystyle\sum_{k=1}^n{A_{mk}B_{kp}} \end{bmatrix}$$ Remember that each entry in the matrix multiplication is a dot product. The expression $\small u_j=\begin{bmatrix} (AB)_{1j} \\ \vdots \\ (AB)_{mj} \end{bmatrix}$ is simply saying "pick all $m$ entries of the column $j$ of the multiplication matrix." This is exactly the result of the $m$ dot products of each of the $m$ rows of $A$ with the $j$ column of $B,$ or $\small A\begin{bmatrix} B_{1j} \\ \vdots \\ B_{nj} \end{bmatrix}.$ It further states that the column $j$ of $AB$ is a linear combination of the columns of $A$ with the coefficients in the linear combination being the entries of column $j$ of $B.$ True, because $$\small A\begin{bmatrix} B_{1j} \\ \vdots \\ B_{nj} \end{bmatrix} = B_{1j}\begin{bmatrix} A_{11} \\ \vdots \\ A_{m1} \end{bmatrix} + B_{2j}\begin{bmatrix} A_{12} \\ \vdots \\ A_{m2} \end{bmatrix} + \dots + B_{nj}\begin{bmatrix} A_{1n} \\ \vdots \\ A_{mn} \end{bmatrix} $$
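A tiny numerical illustration of the identity $u_j = Av_j$ and of the linear-combination reading (my own NumPy check on a random example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(-5, 6, size=(3, 4))
B = rng.integers(-5, 6, size=(4, 2))

AB = A @ B
j = 1
u_j = AB[:, j]              # column j of AB
v_j = B[:, j]               # column j of B

print(np.array_equal(u_j, A @ v_j))   # True: u_j = A v_j
# Column j of AB as a linear combination of the columns of A, coefficients from column j of B:
print(np.array_equal(u_j, sum(v_j[k] * A[:, k] for k in range(4))))   # True
```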
H: Solving an equation using 'replacement' I have this equation: $$ a \cdot 2^a = b$$ and need to find solutions for $a$. And I know it's complicated and has to do something with the $W$ function. But I've come up with a way which does not work for some-reason... and I would like you to have a look and help me find the mistake! We can write any number $a = 2^k$ where $k$ isn't necessarily in $\mathbb{N}$ or $\mathbb{Z}$ so we have: $2^k \cdot 2^{2^k} = b$ which gives: $2^{2^k +1} =b $ and thus: $2^k + 1 = \log_2{b}$ And we get that $k = \log_2(\log_2b -1)$ and thus $a = 2^k = 2^{\log_2(\log_2b -1)} = \log_2b -1$ However this does not work! Why? Here is an example: $a \cdot 2^a = 24$ In my example $a = \log_2(24) -1 \approx 3.58$ but it is actually $3$ ... Edit: I am so dumb.. it should be $2^{2^k + k}$ ... Is there a way to solve this kind of equation? Thank you! AI: $2^k \cdot 2^{2^k} = 2^{k + 2^k}$, not $2^{1+2^k}$. EDIT: The solutions of $a 2^a = b$ are $$a = \frac{W(b \ln(2))}{\ln(2)}$$ where $W$ is a branch of the Lambert W function. If you want real solutions: there is none if $b < -1/(e \ln(2))$. If $b = -1/(e \ln(2))$, the only real solution is $-1/\ln(2)$. If $-1/(e \ln(2)) < b < 0$, there are two real solutions, one with the "$-1$" branch and one with the principal branch of $W$. If $b \ge 0$, the only real solution is with the principal branch.
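If you just need numbers, SciPy exposes the Lambert $W$ function directly. A short sketch (my own; `scipy.special.lambertw` returns complex values, so take the real part on the real branches):

```python
import numpy as np
from scipy.special import lambertw

def solve_a(b, branch=0):
    """Real solution of a * 2**a = b on the given branch, via a = W(b ln 2) / ln 2."""
    return float(np.real(lambertw(b * np.log(2), k=branch))) / np.log(2)

a = solve_a(24)
print(a, a * 2 ** a)          # 3.0...  24.0...

# For -1/(e ln 2) < b < 0 there are two real solutions (principal branch and branch -1):
for k in (0, -1):
    a = solve_a(-0.3, branch=k)
    print(k, a, a * 2 ** a)   # both reproduce b = -0.3
```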
H: Is there a rigorous way to describe $g(x)$ continuously deforming into $h(x)$ and could it be useful? One thing that bothers me about mappings is that they seem to instantaneously transport points from one space to another. I feel like there should be a space of unique, non-intersecting paths that each of the points travel on to get to their new destination. Does anyone agree? For example consider a mapping $F:\Bbb R^2 \to \Bbb R^2$ with $F(x,y)=(e^x,e^y).$ Consider the function $g(x)=\frac{1}{x}$ embedded in the standard $x-y$ cartesian system. We start with $g$ and magically get $h(x)=e^{\frac{1}{\log(x)}}$ with no information about how $g$ was deformed into $h!$ Maybe it's just perspective but I feel like at every point in time we should be able to track the deformations as $g$ morphs into $h.$ I drew a picture with the paths that I think each of the points should follow as they start with $g$ and move to become $h.$ The upper bound path is $y=e^x$ and the lower bound path is $y=\log(x).$ The central path is $y=x.$ Obviously there's not enough rigor here, but I tried my best with what I know. My question is: Is there a rigorous way to describe $g$ continuously deforming into $h$ and could it be useful? AI: It sounds like you're looking for the notion of homotopy. Broadly speaking, the idea behind homotopy is that we're not just interested in individual continuous maps into a space $X$, but rather the manipulation of such maps. E.g. the picture you drew suggests that we should be able to take the map $$\alpha:\mathbb{R}\rightarrow\mathbb{R}: x\mapsto e^x$$ and "deform" it into the map $$\beta:\mathbb{R}\rightarrow\mathbb{R}: x\mapsto \ln(x),$$ for example hitting the map $\gamma:\mathbb{R}\rightarrow\mathbb{R}:x\mapsto x$ along the way. The trick to making this intuition precise is to think about maps from a product space. Specifically, we're going to think of "a continuous map from $A$ to $B$ being deformed over time from $t=0$ to $t=1$" (say) as "a map from $A\times [0,1]$ to $B$." Conversely, given a continuous map $m:A\times [0,1]\rightarrow B$, for each $t\in[0,1]$ we get the "snapshot map" $m_t:A\rightarrow B: a\mapsto m(a,t)$, and we think of $m_0$ and $m_1$ as the "starting" and "finishing" maps. We can then, for example, talk about when one continuous map $f:A\rightarrow B$ can be "deformed into" another continuous map $g:A\rightarrow B$ - namely, when there is a continuous map $m:A\times[0,1]\rightarrow B$ such that $m_0=f$ and $m_1=g$. When such an $m$ exists we say that $f$ and $g$ are homotopic. For example, the homotopy relation isn't very interesting in $\mathbb{R}^2$ (or more generally $\mathbb{R}^n$). Specifically, suppose we have two maps $f,g:X\rightarrow\mathbb{R}^2$. Then consider the map $$m: X\times[0,1]\rightarrow\mathbb{R}^2: m(x,t)=tg(x)+(1-t)f(x).$$ This $m$ is guaranteed to be a homotopy between $f$ and $g$. On the other hand, more complicated spaces make things more interesting. Consider the space $Y=\mathbb{R}^2\setminus\{(0,0)\}$, $f: S^1\rightarrow Y$ the usual map from the circle to the plane, and $g: S^1\rightarrow Y$ the constant map sending everything to $(17,42)$. Are $f$ and $g$ homotopic? And there's a lot more to say. 
Of particular interest is the case when $A$ itself is the unit interval $[0,1]$ and we restrict attention to those homotopies $m:[0,1]\times[0,1]\rightarrow B$ which "keep the endpoints fixed," that is, which satisfy $m(0,0)=m(0,x)$ and $m(1,0)=m(1,x)$ for all $x\in[0,1]$ - these $m$s are the path homotopies and lead to the notion of the fundamental group(oid). The notion of homotopy also leads to a notion of similarity of spaces, namely homotopy equivalence. The wiki page has more information on the topic.
H: Why is the Projection (cB) of Vector A on B perpendicular to Vector A - cB? The following excerpt can be found in Serge Lang's Introduction to Linear Algebra. I am trying to understand mathematically why the vector $\mathbf{A}- c\mathbf{B}$ is perpendicular to the vector $c\mathbf{B}$. I suppose there would be a simple mathematical explanation behind this, but I haven't been able to find any. I have tried taking the dot product of $\mathbf{A} - c\mathbf{B}$ and $c\mathbf{B}$ to equal $0$, but I cannot find any proof as to why this dot product would have to equal $0$. AI: As @Bungo has mentioned, it is not true for an arbitrary value $c\in\textbf{F}$. It just states the projection of $A$ lies in the direction $B$. More precisely, in order to find $c$, it has to satisfy the following relation: \begin{align*} \langle A-cB,cB\rangle = 0 & \Longleftrightarrow \langle A,cB\rangle - \langle cB,cB\rangle = 0\\\\ & \Longleftrightarrow \overline{c}\langle A,B\rangle - c\overline{c}\langle B,B\rangle = 0 \end{align*} If $B\neq 0$ and $c\neq 0$, it results that \begin{align*} \langle A,B\rangle - c\langle B,B\rangle = 0 \Longleftrightarrow c = \frac{\langle A,B\rangle}{\langle B,B\rangle} \end{align*} and we are done. Hopefully it helps.
H: prove this is a closed nowhere dense subset in $L^1$ A UC qualiyfing exam problem goes like this: Let $f$ be a positive continuous function on $\mathbb{R}$ such that $\lim_{|t|\rightarrow\infty} f(t)=0$. Show that the set $\{hf|\,h\in L^1(\mathbb{R}),||h||_1\leq K\}$ is a closed nowhere dense set in $L^1(\mathbb{R})$ for any $K>0$. The second part: let $(f_n)$ be a sequence of positive continuous functions on $\mathbb{R}$ such that for each $n$ we have $\lim_{|t|\rightarrow\infty}f_n(t)=0$. Show that there exists $g\in L^1(\mathbb{R})$ such that $g/f_n\notin L^1(\mathbb{R})$ for any $n$. The second part is clear from the first part and the Baire category theorem. The first part got me stuck for a day now. If we assume closedness, we can see "nowhere denseness" as follows: let $h$ be a function with $||h||_1\leq K$; first we can find a sequence of disjoint subsets $E_n$ of $\mathbb{R}$ such that $|f|<\frac{1}{n}$ on $E_n$ and $\mu(E_n)=1$ for each $n$. Let $g$ be the function that is $\frac{1}{n}$ on $E_n$ and zero elsewhere, then $gf\in L^1$ but $g\notin L^1$. Therefore for any $\epsilon>0$, we see that $hf+\epsilon gf\notin$ the set defined in the problem. This shows that the interior of the set defined in the problem is empty. But for closedness,I couldn't prove directly using measure theory, or the fact that convergence in $L^1$ implies a subsequence converges pointwise. I also tried to use Fourier transform to transform to change multiplication into convolution. But $f$ doesn't have to have a Fourier transform. Can someone give me some hints as to what should I try? Thank you!! AI: Suppose $h_nf\to g$ in $L^1,$ where each $\|h_n\|_1\le K.$ Then $|h_n|f\to |g|$ in $L^1.$ Thus $|h_n|f\to (|g|/f)f$ in $L^1.$ It follows that $|h_{n_k}|f \to (|g|/f)f$ a.e. for some subsequence, which implies $|h_{n_k}| \to (|g|/f)$ a.e. By Fatou's lemma, $$\int (|g|/f) = \int \liminf |h_{n_k}| \le \liminf \int |h_{n_k}| \le K.$$ This shows $g=(g/f)f$ with $\|g/f\|_1\le K,$ and thus the set in question is closed.
H: If $\int\limits_0^{\infty}f^2(x)\ dx$ is convergent, prove $\int\limits_a^{\infty}\frac{f(x)}x\ dx$ is convergent If $\int\limits_0^{\infty}f^2(x) dx$ is convergent, prove $\int\limits_a^{\infty}\frac{f(x)}x dx$ is convergent for any $a> 0$. I use the Cauchy-Schwarz inequality to get: $$\left( \int\limits_a^A \frac{f(x)}{x}dx \right)^2 \leq \left( \int\limits_a^A f^2(x)dx \right) \left( \int\limits_a^A \frac{1}{x^2}dx\right)< +\infty,$$ so for every $A$ the integral $\int\limits_a^A \frac{f(x)}{x} dx$ is bounded. But I cannot rule out that $\int\limits_a^A \frac{f(x)}x dx$ oscillates as $A\to\infty$ (like $(-1)^n$ does), since $f$ is not given to be monotone, so I cannot finish the argument. Could you suggest a way to complete it, or another method? Thank you! AI: Note that the same argument works for $|f|$ instead of $f$, and then the integrand becomes non-negative. $$\lim_{A\to\infty}\int_a^A\dfrac {|f(x)|}{x}dx\le\sqrt{\dfrac 1a\times\int_0^\infty f^2(x)dx}<\infty$$ Now use the fact that absolute convergence implies convergence.
H: Optimizing expectation, unknown parameters, normal distribution. How many restaurants should I try before choosing one for the rest of my n-m meals? Suppose you move to a new city with an infinite number of restaurants and you plan to stay there for a predetermined amount of time. You plan to have n meals at restaurants over the course of your stay in the city. Assume there are no reviews on any of the restaurants and the only way to determine how good a restaurant is is by eating there and giving it a rating. Also assume the quality of restaurants follows a normal distribution but you don't know the parameters, µ and σ, of the distribution. Your goal is to optimize the expectation of your combined restaurant experiences. You do not value variety and would happily eat every single meal at the best restaurant if you could find it. For example, if you simply ate at a different restaurant for every meal you would have an expected combined experience of n$\times$µ. Also if you had a meal at a random restaurant then decided to have all your meals there without trying any others you would again have an expected combined experience of n$\times$µ. But you could improve upon that by trying two restaurants then choosing the better of the two and having all your remaining meals there. Then your expected combined experience would be (n-1)$\times$a1+a2, where a1 is the expected rating of the better of the two restaurants and a2 is the expectation of the worse restaurant. (What would a1 and a2 be in terms of µ and σ for this case? I know (a1+a2)/2=µ but don't know how far apart they would be). You could improve further by trying 3 restaurants and choosing the best of those and so on. If you sampled m restaurants before settling on one for your remaining meals your combined expectation would be (n-m+1)$\times$a1+a2+a3+...+am where again a1 is the highest expectation of these m drawings. Main question: How many restaurants, m, should you try before picking the best of those restaurants for your remaining n-m meals? AI: Use dynamic programming. Let $x_{t-1}$ be the value of the best restaurant you have so far visited in period $t$. Then you can sample a new restaurant or stick with the best you already have. In the final period, $t=n$, the value function is $$ J_n(x_{n-1}) = \max\{ \mu, x_{n-1} \}, $$ and you should search if $\mu > x_{n-1}$. So, go to your existing favorite or try something new. Backwards inducting, at a period $t<n$, you have the same kind of functional equation, $$ J_t(x_{t-1}) = \max \{ \mathbb{E}[ J_{t+1}(x_t)|x_{t-1}], (n-t)x_{t-1} \} $$ So the optimal policy is to stop (which is an absorbing state) if $x_{t-1} \ge (\mathbb{E}[J_{t+1}(x_t)|x_{t-1}])/(n-t)$. Because of the search pattern, $$ \mathbb{E}[J_{t+1}(x_t)|x_{t-1}] = \int_{-\infty}^{x_{t-1}} J_{t+1}(x_{t-1})dF(z) + \int_{x_{t-1}}^\infty J_{t+1}(z)dF(z) = J_{t+1}(x_{t-1})F(x_{t-1}) + \int_{x_{t-1}}^\infty J_{t+1}(z)dF(z). $$ So the break-even $x_{t-1}$ satisfies $$ (n-t) x_{t-1} = J_{t+1}(x_{t-1})F(x_{t-1}) + \int_{x_{t-1}}^\infty J_{t+1}(z)dF(z). $$ You can solve that numerically for the optimal break-even $x_{t-1}$, given $J_{t+1}$. Given the value function $J_{t+1}$, you can solve for the expectation numerically and do another round of backwards induction, all the way back to $t=1$.
The key is that experimenting early is valuable, even if you already have a pretty good restaurant, even substantially above $\mu$, but as time goes on, you will prefer to pick a better-than-$\mu$ restaurant rather than risking a bad meal. If you add discounting and take $n \rightarrow \infty$, you get a simpler problem: the functional equation is $$ J(x) = \max \left\lbrace \dfrac{x}{1-\beta}, \mu + \beta \int_{-\infty}^x J(x)dF(x') +\beta \int_{x}^\infty J(x')dF(x')\right\rbrace $$ That equation has a unique solution $J^*$, and from that you can solve for the optimal cutoff $x^*$, which will be independent of calendar date: you search until you get a restaurant better than $x^*$, then stop. Also look up Bellman equations and discrete time dynamic programming. Sorry, I got excited about the solution and initially didn't understand what you meant by not knowing the parameters, but figured it out once I got to the end. It's easy to add that. Put $\theta = (\mu,\sigma^2)$ into the value function and use Bayes' rule to update from period to period. There are a lot of ways to approach this, including https://en.wikipedia.org/wiki/Recursive_Bayesian_estimation and https://en.wikipedia.org/wiki/Kalman_filter You'll probably want to use a collocation method to treat the state space as continuous instead of discretizing, and then use quadrature to compute the expectations.
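To make the infinite-horizon version at the end concrete, here is a rough value-iteration sketch in Python (my own illustration on a discretized state space, taking the quality distribution as a known standard normal rather than the Bayesian-learning version):

```python
import numpy as np
from scipy.stats import norm

beta = 0.95                        # discount factor
grid = np.linspace(-4, 4, 401)     # x = best quality found so far
p = norm.pdf(grid)
p /= p.sum()                       # discretized N(0,1) quality distribution
mu = grid @ p

J = np.zeros_like(grid)
for _ in range(5000):              # value iteration on J(x)
    stop = grid / (1 - beta)       # keep eating at the best known place forever
    cont = mu + beta * np.array(   # sample once more, then carry the better of old/new best
        [J[i] * p[: i + 1].sum() + p[i + 1:] @ J[i + 1:] for i in range(len(grid))]
    )
    J_new = np.maximum(stop, cont)
    if np.max(np.abs(J_new - J)) < 1e-9:
        J = J_new
        break
    J = J_new

x_star = grid[np.argmax(stop >= cont)]   # reservation quality: stop once the best found exceeds this
print("reservation quality x* is about", round(float(x_star), 3))
```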
H: Optimization on multiplication of function of three variables I need some suggestions on how to go above solving this problem: Suppose I have $n$ vectors: $X_1, X_2, ..., X_n$, and a known vector $Y$. Each vector has $T$ rows. I want to select only $3$ out of $n$ vectors, say $\: X_i, X_j, X_k\: $, along with finding some corresponding thresholds $\: \chi_i, \chi_j, \chi_k \: $ such that the following is maximized (globally). $\frac{1}{T} \, \sum_{t=1}^T Y_t \, * \, \Big[ 1\{\: X_{t, i} > \chi_i \: \& \: X_{t, j} > \chi_j \: \& \: X_{t, k} > \chi_k \} - 1\{\: X_{t, i} < \chi_i \: | \: X_{t, j} < \chi_j \: | \: X_{t, k} < \chi_k \} \Big] $ where $\: 1\{.\}\: $ is the Indicator function, and I need to select $\: i, j, k\: $ such that $\; i \neq j \neq k$. I tried solving this using brute force, running massive loops, over all combinations of $\: i, j, k \: $, i.e. looping over $\: X_i, X_j, X_k \: $, and thresholds $\: \chi_i, \chi_j, \chi_k \: $ . Bruce force gives me good solution, but brute force solutions can take a while. For my problem $n$ is typically $2000$ and $T$ is typically $30,000$ and repeating the bruce force every time on a new data is time-consuming. To speed up, I tried to restructure this problem approximately as a "Greedy" Decision Trees (GBM, Random Forest, etc.) to get some approximate solutions (such as picking up the most promising path among all possible paths), but "Greedy" Decision Tree solutions are typically vastly inferior to running a brute force solutions, e.g. generally, the best path from a Tree fitting gives a value that 20%-30% of the optimal solution. As an alternative to a brute force approach, I was wondering if anyone had any idea (any pointers will do too) on how I could go about converting the above maximization into a optimization problem (convex, or non-convex) where I am hoping to use a solver to speed up my work? Any papers that deal with such a problem will do too. AI: You can solve the problem via mixed integer linear programming as follows. For $i \in \{1,\dots,n\}$ and $s\in\{1,2,3\}$, let binary decision variable $u_{i,s}$ indicate whether vector $X_i$ is selected to be in "slot" $s$. For $s\in\{1,2,3\}$, let continuous decision variable $\chi_s$ be the threshold, let $\overline{\chi_s}$ be an upper bound on $\chi_s$, and let $M_{s,t}=\overline{\chi_s}-\min_i X_{t,i}$. For $t\in\{1,\dots,T\}$, let binary decision variable $v_t$ indicate whether all three selected vectors $X_i$ exceed their respective thresholds in component $t$. The problem is to maximize $\sum_t Y_t v_t$ subject to linear constraints: \begin{align} \sum_i u_{i,s} &= 1 &&\text{for all $s$} \tag1\\ \chi_s - \sum_i X_{t,i} \cdot u_{i,s} &\le M_{s,t}(1-v_t) &&\text{for all $s$ and $t$} \tag2\\ \sum_i i\cdot u_{i,s} + 1 &\le \sum_i i\cdot u_{i,s+1} &&\text{for $s\in\{1,2\}$} \tag3 \end{align} Constraint $(1)$ selects exactly one $X_i$ per slot. Constraint $(2)$ enforces the logical implication $(u_{i,s} = 1 \land v_t = 1) \implies X_{t,i} \ge \chi_s$. Constraint $(3)$ prohibits selecting the same vector twice. You can recover your original objective function value as $\frac{1}{T}\sum_t Y_t(2 v_t-1)$.
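A sketch of this model in Python with the PuLP modeling library (my own translation of the constraints above; variable names follow the answer, and at the question's sizes of $n$ and $T$ you would want a commercial MILP solver rather than PuLP's default CBC):

```python
import pulp

def build_model(X, Y, lo, hi):
    """X: T x n list of lists, Y: length-T list, [lo, hi]: bounds on the thresholds chi_s."""
    T, n = len(X), len(X[0])
    M = hi - min(min(row) for row in X)       # one conservative big-M (can be tightened per s, t)

    prob = pulp.LpProblem("three_vector_selection", pulp.LpMaximize)
    u = pulp.LpVariable.dicts("u", (range(n), range(3)), cat="Binary")
    v = pulp.LpVariable.dicts("v", range(T), cat="Binary")
    chi = pulp.LpVariable.dicts("chi", range(3), lowBound=lo, upBound=hi)

    prob += pulp.lpSum(Y[t] * v[t] for t in range(T))            # objective: sum_t Y_t * v_t
    for s in range(3):
        prob += pulp.lpSum(u[i][s] for i in range(n)) == 1       # (1) exactly one vector per slot
        for t in range(T):                                       # (2) big-M link between v_t and thresholds
            prob += chi[s] - pulp.lpSum(X[t][i] * u[i][s] for i in range(n)) <= M * (1 - v[t])
    for s in range(2):                                           # (3) forbid reusing a vector
        prob += (pulp.lpSum(i * u[i][s] for i in range(n)) + 1
                 <= pulp.lpSum(i * u[i][s + 1] for i in range(n)))
    return prob, u, v, chi
```

After solving, the original objective is recovered as $\frac1T\sum_t Y_t(2v_t-1)$, exactly as noted above.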
H: Motivation behind definition of complex sympletic group One definition of complex sympletic group I have encountered is (sourced from Wikipedia): $$Sp(2n,F)=\{M\in M_{2n\times 2n}(F):M^{\mathrm {T} }\Omega M=\Omega \}$$ What is the motivation for imposing the condition $M^{\mathrm {T} }\Omega M=\Omega$ instead of others such as $M^{-1}\Omega M=\Omega$? AI: In general, if $V$ is a $n$-dimensional $F$-vector space equipped with a bilinear form $b\colon V \times V \to F$ and $T\colon V \to V$ is an endomorphism such that $b(Tx,Ty) = b(x,y)$ for all $x,y \in V$, one may take a basis $\mathcal{B} = (e_1,\ldots, e_n)$ for $V$, let $b_{ij} = b(e_i,e_j)$ and $Te_j = \sum_{i=1}^n T^i_{~j}e_i$, and compute $$b_{ij} = b(e_i,e_j) = b(Te_i,Te_j) = b\left(\sum_{k=1}^n T^k_{~i}e_k, \sum_{\ell=1}^r T^\ell_{~j}e_j\right) = \sum_{k,\ell=1}^n T^k_{~i}T^\ell_{~j}b_{k\ell}.$$If $[T]_{\mathcal{B}} = (T^i_{~j})_{i,j=1}^n$ and $[b]_{\mathcal{B}} =(b_{ij})_{i,j=1}^n$, the above identity then reads $$[b]_{\mathcal{B}} = [T]_{\mathcal{B}}^\top [b]_{\mathcal{B}}[T]_{\mathcal{B}}.$$So, once a basis $\mathcal{B}$ (and hence the matrix $B = [b]_{\mathcal{B}}$) is fixed, the isomorphism $T\mapsto [T]_{\mathcal{B}}$ between ${\rm End}(V)$ and ${\rm Mat}(n,F)$ restricts to an isomorphism $$\{T \in {\rm End}(V) \mid b(Tx,Ty) = b(x,y) \mbox{ for all }x,y \in V \}\cong \{ M \in {\rm Mat}(n,F) \mid M^\top BM = B \}.$$The fact that the dimension of the space $V$ is even and that $\Omega$ is symplectic is irrelevant, this is a general mechanism regarding the relation between the matrix of a bilinear map and the matrix of its pull-back under a linear map.
H: Extend the direct product representation of a subgroup $G$ be finite abelian group. $H$ be subgroup of $G$. $H$ be direct product of cyclic subgroups $H_1$, $H_2$, .., $H_m$. Do there exist cyclic subgroups $H_{m+1}$, .., $H_n$ such that $G$ is direct product of $H_1$, $H_2$, .., $H_n$? I'm quite confused. AI: No, not always. Take $G=C_4$ and $H$ the unique subgroup of order $2$ of $G$. Note the following: you are basically asking whether it is true that if $H$ is a subgroup of the finite abelian group $G$ then $H$ has a complement in $G$. That will certainly not be true if $H \leq \Phi(G)$, as in the example above.
H: Separation properties of a topological space vs. characteristics of the continuum Suppose that a set $X$ has a topology $\mathcal{T}$. Then $$\mathcal{T}\ \text{is T}_1\Rightarrow|\mathcal{T}|\geq|X|.$$ I'm curious about implications in the opposite direction, possibly assuming the negation of the continuum hypothesis. I would also be interested in implications in the same direction which are finer than allowed with CH, i.e. if $|(|X|,|\mathcal{P}(X)|)|>0$ can hold for infinite sets $X$. AI: Let $\langle X,\tau\rangle$ be any infinite space, and let $I=\{0,1\}$ with the indiscrete topology. Then $X\times I$ has the same cardinality as $X$, and the product topology on $X\times I$ has the same cardinality as $\tau$, since the open sets in the product are the sets of the form $U\times I$ for $U\in\tau$, but the product is not even $T_0$. Thus, any combination of cardinalities of $X$ and $\tau$ that is possible at all is possible for a space that is not even $T_0$.
H: How to find $d^4p$ from the four-vector? Let's assume we are given the momentum four-vector $p$, which can be written as: $$p = (p_0, p_t \cos\theta , p_t \sin \theta , p_L ) \qquad (1)$$ where $p_L$ is the longitudinal component. The transverse component can be written easily as $p_t^2 = (p_t \cos\theta)^2 + (p_t \sin\theta)^2$. The differential form I need to show is $$d^4p = \tfrac12 \ dp_0 \ dp_L \ d p_t^2 \ d \theta \qquad (2)$$ I understand that $p_L$ and $p_0$ are the two components that come from $$d^4p = dp_0 \ dp_x \ dp_y \ dp_L \qquad (3)$$ Therefore $dp_x \ dp_y$ has to be equal to $\tfrac12 \, d p_t^2 \ d \theta$. How? My intention is to get equation (2) from equation (1) or (3). AI: Assuming you meant $p_t$ and not $q_t$, we have $$\frac{1}{2}\: d(p_t^2)\:d\theta = p_t\:dp_t\:d\theta$$ which is just the polar coordinates Jacobian.
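The Jacobian in the answer can also be checked symbolically; here is a small SymPy sketch (my own), computing $dp_x\,dp_y$ in the variables $(p_t^2,\theta)$:

```python
import sympy as sp

w, theta = sp.symbols('w theta', positive=True)   # w stands for p_t^2
px = sp.sqrt(w) * sp.cos(theta)                   # p_x = p_t*cos(theta)
py = sp.sqrt(w) * sp.sin(theta)                   # p_y = p_t*sin(theta)

J = sp.Matrix([px, py]).jacobian([w, theta])
print(sp.simplify(J.det()))                       # 1/2, i.e. dp_x dp_y = (1/2) d(p_t^2) dtheta
```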
H: What is the meaning of the derivative of a complex function? For the derivative of a complex function, when it is analytic, it satisfies the Cauchy-Riemann equations. So in this case we have $f'(z_{0})=u_{x}(x_{0},y_{0})+iv_{x}(x_{0},y_{0})$. But if we consider the vector-valued function $g:\mathbb{R^{2}}\rightarrow\mathbb{R^{2}}$, the derivative at $z_{0}=(x_{0},y_{0})$ is $g'(z_{0})$, which is a Jacobi matrix. What makes these two things different? Is it because the binary operation "vector multiplication" makes different sense in the complex plane and in $\mathbb{R^2}$? AI: When considered as a function from $\Bbb{R}^2$ to $\Bbb{R}^2$, a complex function differentiable at a point $z = x + iy$ must be differentiable at $(x, y)$, so in this sense, complex differentiability is stronger than $\Bbb{R}^2$-differentiability. Further, it's strictly stronger, as the Jacobian has to be of the form $$\begin{pmatrix} a & -b \\ b & a \end{pmatrix}.$$ In fact, this matrix corresponds to a complex derivative $a + ib$. The above matrix represents the matrix for the linear transformation $z \mapsto (a + ib)z$, considered as a map from $\Bbb{C}$ to $\Bbb{C}$, with the ground field $\Bbb{R}$, over the basis $(1, i)$. One consequence of this is that complex differentiable functions are conformal wherever the derivative is nonzero: in that case the above matrix is a product of a scaling matrix $\sqrt{a^2 + b^2} I$ and a rotation matrix. This means that the map locally preserves angles, making it conformal. This is not true for general differentiable functions from $\Bbb{R}^2$ to $\Bbb{R}^2$!
H: Show that $P=(\bar{x},\bar{z})$ is a prime ideal in the ring $A:=k[x,y,z]/(xy-z^2)$. Note: $\bar{x},\bar{z}$ denotes the image of $x$ and $z$ in A and $k$ is a field. I have a possible proof but I do not know if its correct: Suppose $\bar{a}\bar{b}\in P$ where $a,b\in k[x,y,z]$. Then $\bar{a}\bar{b}=\bar{c}\bar{x}+\bar{d}\bar{z}$ where $c,d\in k[x,y,z]$. Hence, $\overline{ab}=\overline{cx+dz}$. This implies $ab-cx-dz=g(xy-z^2)$ where $g\in k[x,y,z]$. This implies that $ab=cx+dz+g(xy-z^2)$. Therefore, all the monomials involved in the sum on the RHS have either $x$ or $z$ indeterminates or the sum could be zero. If the RHS sum is zero then either $a=0$ or $b=0$. If the RHS sum is not zero then this implies that $a$ and $b$ are both not zero and either $a$ or $b$ only has monomials containing $x$ or $z$. Hence, either $\bar{a}$ or $\bar{b}$ is in P. Is the proof above valid? Secondly, I would like to show that $\bar{x}\notin P^2$ and $\bar{y}\notin r(P^2)$. But I'm stuck with that. Any ideas or suggestions? AI: The ideal $(\bar x , \bar z)$ in the quotient corresponds to the ideal $(x,z,xy-z^2)$ in $k[x,y,z]$, so it suffices to check primality of the latter. But obviously by closure this latter ideal is simply $(x,z)$ which is prime, and so the original ideal is prime. I find your argument kind of hard to follow and you can see that the above argument is sort of implicit in it. With the above in mind your second question is not too hard to answer. If $\bar x \in P^2$ then translating back to $k[x,y,z]$ there is some $g\in k[x,y,z]$ such that $$x + (xy -z^2)g \in (x,z)^2 = (x^2,xz,z^2)$$ By closure we can drop the $-z^2g$ to get $$x + xy g \in (x^2,xz,z^2)$$ but the LHS clearly cannot be in that ideal since it has a lone $x$ term. Finally, for your last question, if $\bar y \in r(P^2) \subseteq r(P)$ then $\bar y^n \in P$ for some $n$ and hence $\bar y \in P$, which would say that $y\in (x,z)$ in $k[x,y,z]$ which is clearly not the case.
H: Concrete Mathematics: Josephus Problem: Odd induction I am trying to work through the odd induction case of the closed form solution to the Josephus problem. To start with a quick review of the even case - I'm being quite verbose though to help frame the question and also to potentially highlight any mistakes in my understanding that just happen to work in the even case. Quick review of even case Recurrence: $J(2n) = 2J(n) - 1$ Closed form to prove: $J(2^m+l)=2l+1$ First we express it in terms of the recurrence $$J(2^m+l)=2J(2^{m-1}+\frac{l}{2})-1$$ Logically, then, these two are equivalent $$2J(2^{m-1}+\frac{l}{2})-1=2(\frac{2l}{2}+1)-1$$ Which finally gives us what we want $$2(\frac{2l}{2}+1)-1=2(l+1)-1=2l+2-1=2l+1$$ Odd case Odd recurrence: $J(2n+1)=2J(n)+1$ I am trying to apply the closed form in the same way. First in terms of the odd recurrence: $$J(2^m+l)=2J(2^{m-1}+\frac{l}{2})+1$$ Then plugging in the closed form: $$2(2\frac{l}{2}+1)+1$$ But then this does not induct: $$2(\frac{l}{2}+1)+1=2(l+1)+1=2l+3$$ I am not sure what I am misunderstanding. AI: You’re not applying the recurrence for the odd case correctly. Suppose that $2n+1=2^m+\ell$, where $0\le\ell<2^m$. The recurrence is $J(2n+1)=2J(n)+1$, and $n$ here is $\frac12(2^m+\ell-1)$, so $$\begin{align*} J(2^m+\ell)&=2J\left(2^{m-1}+\frac{\ell-1}2\right)+1\\ &=2\left(\frac{2(\ell-1)}2+1\right)+1\\ &=2\ell+1\;, \end{align*}$$ as desired.
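To see the closed form in action, here is a small Python check (my own addition) comparing the closed form, the recurrence, and a direct simulation of the elimination process:

```python
def josephus_closed_form(n):
    """J(2^m + l) = 2l + 1, where 2^m is the largest power of two <= n."""
    m = n.bit_length() - 1
    l = n - (1 << m)
    return 2 * l + 1

def josephus_recurrence(n):
    """J(1) = 1, J(2n) = 2J(n) - 1, J(2n+1) = 2J(n) + 1."""
    if n == 1:
        return 1
    return 2 * josephus_recurrence(n // 2) + (1 if n % 2 else -1)

def josephus_simulation(n):
    """People 1..n stand in a circle and every second person is eliminated."""
    people = list(range(1, n + 1))
    i = 1                                  # index of the next person to eliminate
    while len(people) > 1:
        del people[i]
        i = (i + 1) % len(people)
    return people[0]

assert all(
    josephus_closed_form(n) == josephus_recurrence(n) == josephus_simulation(n)
    for n in range(1, 200)
)
print("closed form, recurrence and simulation agree for n = 1..199")
```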
H: Can I put $x = \dfrac{π}{2}$ in $ \tan{2x} = \dfrac{2\tan{x}}{1 - \tan^2{x}} $? In the textbook it is written about the equation that $ 2x \neq n \pi + \dfrac{\pi}{2} $. Does that mean x can be equal to $ n \pi + \dfrac{\pi}{2} $ ? AI: Notice, $\tan x$ is undefined at $ x = n \pi + \frac{\pi}{2} $. Similarly $\tan 2x$ is undefined at $ 2x =n \pi + \frac{\pi}{2} $ Therefore for defining $\tan x$ & $\tan 2x$ $$x\ne n \pi + \frac{\pi}{2}, \ \ \ \ \ \ 2x\ne n \pi + \frac{\pi}{2}$$
H: If any finite subcollection of finite sets has non-empty intersection, then the infinite intersection is non-empty as well. The space is R with usual metric. Since all the sets are finite, they are compact as well. Suppose {$A_µ$} is the collection of finite sets and the infinite intersection is empty. Let K belong to {$A_µ$}. So, no element of K is contained in any $A_µ$. Let the complement of $A_µ$ be $G_µ$. So {$G_µ$} forms an open cover for K. Since K is compact, K admits a finite subcover, $G_1$, $G_2$,..., $G_n$ such that K is contained in the union of this finite subcover. This implies that K is not contained in the intersection of $A_1$, $A_2$,..., $A_n$ (Complement of union is intersection of complements). Therefore, intersection of K with {$A_µ$}, 1 ≤ µ ≤ n is empty. This contradicts the hypothesis that finite subcollection has non empty intersection. Is my approach correct? Also, can this be done without using compactness of finite sets? If yes, then please give me a hint. Thanks in advance :) AI: Compactness is not needed. Let $\mathscr{F}$ be a family of finite subsets of $\Bbb R$, and suppose that $\bigcap\mathscr{F}=\varnothing$. Fix $F\in\mathscr{F}$. Then for each $x\in F$ there is an $F_x\in\mathscr{F}$ such that $x\notin F_x$. But then $\{F\}\cup\{F_x:x\in F\}$ is a finite subfamily of $\mathscr{F}$ whose intersection is empty.
H: Factorization of $x^b+1$ I came across the result that $(x^a+1)|(x^b+1)$ if and only if $\frac{b}{a}$ is odd. Any intuitive reason why, though? What about $\frac{b}{a}$ being odd makes this true? AI: This follows from the fact that, if $k$ is odd,$$x^k+1=(x+1)\left(x^{k-1}-x^{k-2}+\cdots-x+1\right).$$ Apply this with $x$ replaced by $x^a$ and $k=b/a$: then $x^a+1$ divides $(x^a)^k+1=x^b+1$.
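A quick SymPy check of the divisibility pattern (my own illustration):

```python
import sympy as sp

x = sp.symbols('x')
for a in (1, 2, 3):
    for b in range(a, 13, a):
        divides = sp.rem(x**b + 1, x**a + 1, x) == 0
        print(f"(x^{a}+1) | (x^{b}+1): {divides}   (b/a = {b // a})")
```

The remainder is zero exactly when $b/a$ is odd, matching the identity above.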
H: Infinite product of transcendental numbers approaches 1 I am seeking infinite formulas connect transcendentals and rationals. We know $$e = \sum_{n = 0}^\infty \frac{1}{n!} = \frac{1}{0!} + \frac{1}{1!} + \frac{1}{2!} + \frac{1}{3!} + \frac{1}{4!} + \cdots$$ as an example of infinite sum of rational numbers which approaches the transcendental e. Is there an infinite product exists as a kind of dual counterpart of the above Euler's formula? i.e. which transcendentals $\{x_n\}$ satisfied $$1 = \prod_{n = 0}^\infty x_n$$ The beauty the better, just like Euler's version. AI: The gamma function $\Gamma(x)$ satisfies the identity below, where $\gamma$ denotes the Euler-Mascheroni constant. The identity is evidently a result of the Weierstrass factorization theorem (see the examples section). $$\frac{1}{\Gamma(x)} = xe^{\gamma x} \prod_{n=1}^\infty \left(1 + \frac x n \right) e^{-x/n}$$ Take $x=1$. Then, since $\Gamma(x) = (x-1)!$ for positive integers $x$, and $0!=1$, we get $$1 = e^\gamma \prod_{n=1}^\infty \left(1 + \frac 1 n \right) \frac{1}{\sqrt[n]e}$$ We don't know if $\gamma$ itself is transcendental or not (that question is still open), but the $n^{th}$ roots of $e$ are transcendental as a corollary of Lindemann-Weierstrass. Some potential flaws with this: This isn't a product from $0$ to $\infty$, but a shift of index fixes that $e^\gamma$ isn't known to be transcendental, and it might somehow negate that of the factors $x_n = (1+1/n)e^{-1/n}$ Granted, I think there might be merit in this example - it certainly "feels" likely that the factors are transcendental, even if we can't quite prove that yet. There might also be merit in exploring Weierstrass factorization further for something that can be more surely seen to have transcendental factors. Perhaps with different functions, for instance?
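One can also watch the partial products converge numerically (a short NumPy check I am adding; `np.euler_gamma` is the Euler-Mascheroni constant $\gamma$):

```python
import numpy as np

gamma = np.euler_gamma
for N in (10, 100, 1000, 100000):
    n = np.arange(1, N + 1)
    partial = np.exp(gamma) * np.prod((1 + 1 / n) * np.exp(-1 / n))
    print(N, partial)      # tends to 1 as N grows
```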
H: Name and interpretation of the $\stackrel{d}{=}$ symbol. Context: I have the following statement which uses the symbol $\stackrel{d}{=}$. Let X be a Random Variable, and let $X'$ be an RV that is independent of $X$ and $X'\stackrel{d}{=}X$. We call the Random Variable $$X^s = X −X'$$ symmetrized X. AI: The symbol $\overset d =$ means equal in distribution. More precise, let $(\Omega_1 , \mathcal F _1 , \Bbb P_1)$ and $(\Omega_2 , \mathcal F _2 , \Bbb P_2)$ be two probability spaces and $X_i:\Omega_i \to \Bbb R$ be a random variable on $\Omega_i$. The pushforward measure on $(\Bbb R , \mathcal B (\Bbb R))$ $$\Bbb P_i \circ X_i^{-1}$$ is called the distribution (or law) of $X_i$. Then we say $X_1$ is equal in distribution to $X_2$ (write $X_1 \overset d = X_2$) if $$\Bbb P_1 \circ X_1^{-1} = \Bbb P_2 \circ X_2^{-1}.$$ Note that for this definition, $X_1$ and $X_2$ do not need to be defined on the same probability space. There are also the following symbols (meaning the same as $\overset d =$) in use: $\sim$ $\overset{\mathcal{D}}{=}$ ($\mathcal D$ for "distribution") $\overset{\mathcal{L}}{=}$ ($\mathcal L $ for "law", fortunately seldom used)
H: Exists $f\in L^{\infty}(\Omega)$ such that $G(x, t)\geq f(x) \vert t\vert^{\theta} -\alpha$? Let $\Omega$ be an open bounded subset of $\mathbb{R}^n$ and let $g:\Omega\times\mathbb{R}\to\mathbb{R}$ be such that $$ g(x, \cdot)t\in\mathbb{R}\mapsto g(x, t)\in\mathbb{R} \ \mbox{ is continuous for a.e. } x\in\Omega$$ and $$g(\cdot, t)x\in\Omega\mapsto g(x,)\in\mathbb{R} \ \mbox{ is Lebesgue measurable for all } t\in\mathbb{R}.$$ Define $$G(x, t) = \int_{0}^t g(x, s) ds.$$ Suppose there exists $\theta >2$ such that $$0 <\theta G(x, t)\leq g(x, t) t.$$ Could anyone help me to understand why the above condition implies that $$G(x, t)\geq f(x) \vert t\vert^{\theta} -\alpha,$$ with $f(x) >0$ a.e. in $\Omega$ and $f\in L^{\infty}(\Omega)$, while $\alpha$ denotes a positive constant? Thank you in advance! AI: (This is only a hint, not a full answer.) Consider only the case $t>0$ (the case $t<0$ should be similar). Let $f(x) := G(x, 1) = \int_0^1 g(x, t)\, dt$. By assumption, we have that $f(x) > 0$. Moreover, the function $t\mapsto G(x, t)$ satisfies the differential inequality $$ \begin{cases} G' \geq \frac{\theta}{t}\, G, & t > 0, \\ G(1) = f(x), \end{cases} $$ which gives $$ G(x) \geq f(x) \exp \int_1^t \frac{\theta}{s}\, ds = f(x) \, t^\theta, $$ for every $t\geq 1$. The range $t\in [0,1]$ should be adjusted using the constant $\alpha$. In any case, I think that you need to assume something like $$ \textrm{esssup} \{|g(x,t)|; x\in\Omega, t\in [-1,1]\} < \infty $$ to obtain both the existence of the constant $\alpha$ and the boundedness of $f$. (I don't know if this condition can be deduced from your assumptions.)
H: Showing that $U_{A,B}$ is a subspace Show that $$ U_{A,B} = \{X \in \mathbb{K}^{n \times n}, f(X^T) = f(X)^T \} $$ is a subspace of $V = \mathbb{K}^{n \times n}$ with $f: V \to V, X \mapsto AXB$ and $A,B \in V$. I'm not sure what to show. Showing $f((v+w)^T) = f(v+w)^T $, with $v,w \in V$ does not get me anywhere and it doesn't feel right. But this could just be me lacking intuition. AI: Note that the kernel of any linear map is a subspace. With that said, it suffices to verify that $U_{A,B}$ is indeed the kernel of the linear map $g:V \to V$ defined by $g(X) = f(X^T) - f(X)^T$. (This $g$ is linear because both transposition and $f$ are linear.)
H: Maximising the argument of $\sin$ and $\cos$, given a linear relation between them Here is a problem I have been struggling with for a while: If $$4\sin\theta \cos\phi+2\sin\theta+2\cos\phi+1=0$$ where $\theta,\phi\in[0,2\pi]$, find the largest possible value of $(\theta+\phi)$. I have no idea how to do it; I tried substituting $r\sin\theta=a$ and $r\cos\phi=b$ but this did not work out effectively. Any other approach for it? AI: Hint: We have $$4\cos\phi\sin\theta+2\cos\phi+2\sin\theta+1=(2\cos\phi+1)(2\sin\theta+1),$$ so the equation forces $\cos\phi=-\tfrac12$ or $\sin\theta=-\tfrac12$. Now use the "all-sin-tan-cos" (ASTC) quadrant rule to decide which angles in $[0,2\pi]$ are admissible and pick the ones that make $\theta+\phi$ largest.
H: $A\in GL_n(\mathbb{C})$ has a unitary triangularization Let $A\in GL_n(\mathbb{C})$. Show that $A$ has a unitary triangularization, that is, there exists a unitary $U\in M(n\times n)$ such that $U^{-1}AU$ is an upper triangular matrix. Since the characteristic polynomial of $A$ splits over $\mathbb{C}$, I immediately know that $A$ is similar to an upper triangular matrix. I'm now trying to show that the corresponding change of basis matrix is unitary. One approach was trying to show the sufficient condition that $\left|\left| Ux \right|\right|=\left|\left|x \right|\right|,\, \forall x\in\mathbb{C}^n$, which I haven't been able to show so far. AI: You can prove it by induction. If $n=1$, it is trivial. Let $v$ be an eigenvector of $A$. Now let $V_1=\Bbb Cv$ and let $V_2=V_1^\perp$. If $\{v_1,\ldots,v_{n-1}\}$ is an orthonormal basis of $V_1^\perp$, then $\{v/\|v\|,v_1,\ldots,v_{n-1}\}$ is an orthonormal basis of $\Bbb C^n$ and the matrix of $A$ with respect to that basis is of the form $\left[\begin{smallmatrix}\lambda&*\\0&B\end{smallmatrix}\right]$, where $\lambda\in\Bbb C$, $*$ is some row vector, and $B$ is a $(n-1)\times(n-1)$ matrix. Now, consider the map $A_1\colon V_2\longrightarrow V_2$ defined by $A_1=\pi\circ A|_{V_2}$, where $\pi$ is the orthogonal projection from $\Bbb C^n$ onto $V_2$. And now you apply the induction hypothesis to $A_1$.
H: Prove that none of the integers $11,111,1111,...$ are squares of an integer. Please check my proof. Thank you! Proof: $11,111,1111,...$ can all be written as follows $\underbrace{111...}_{\text{k times}}=1+10(\sum_{i=0}^{k-2}10^n)$ Let us assume $1+10(\sum_{i=0}^{k-2}10^n)=s^2$ where $s\in\Bbb{Z}$. Then this means $s^2|1$ and $s^2|10$. The only possible $s^2$ is then $1$. It is obvious that $1$ does not work. So this means there is no $s$ such that $1+10(\sum_{i=0}^{k-2}10^n)=s^2$. So we conclude that none of $11,111,1111,...$ are squares of an integer. Edit: Once again... this proof is wrong. Please look at the answers below. Correct Attempt: I shall try induction. We see that $11\cong3(\text{mod 4})$ . Now assume that $\underbrace{111...}_{\text{k times}}\cong3(\text{mod 4})$ Then for $\underbrace{111...}_{\text{k+1 times}}$ we see that the last dividend in the long division is $31$. So the largest possible last digit is $7$ and $7\times4=28$ and $31-28=3$. The remainder is therefore $3$. And so, $\underbrace{111...}_{\text{k+1 times}}\cong3(\text{mod 4})$. However, we know that square numbers(mentioned to me by https://math.stackexchange.com/users/279515/brahadeesh ) are either $0$ or $1$ in $\text{mod 4}$. So we conclude that all of them cannot be perfect squares. AI: Let us assume $1+10(\sum_{i=0}^{k-2}10^n)=s^2$ where $s\in\Bbb{Z}$. Then this means $s^2|1$ and $s^2|10$. I don't follow this implication, how do you argue that $s^2 \mid 1$ and $s^2 \mid 10$? Indeed, as @user804886 notes in a comment under your question, $c \mid a + b$ does not imply that $c \mid a$ and $c \mid b$. One way to prove this is to note (exercise!) that a square is always congruent to either $0$ or $1$ modulo $4$. But the numbers in your sequence are all congruent to $3$ modulo $4$, so none of them can be a square.
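The mod-$4$ argument is easy to check computationally (my own quick Python verification):

```python
from math import isqrt

n = 0
for digits in range(2, 20):
    n = 10 * n + 1 if n else 11       # builds 11, 111, 1111, ...
    assert n % 4 == 3                 # each such number is congruent to 3 mod 4
    assert isqrt(n) ** 2 != n         # and hence is not a perfect square
print("verified for 2 to 19 digits")
```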
H: $P = \pi_{1}(P) \times \pi_{2}(P)$? I’m starting my study of functions, following the book “Proofs and Fundamentals”, by Ethan D. Bloch. This is one of the problems of the book and I’m not sure what the solution would be. Let $X$ and $Y$ be sets. Let $P \subseteq X \times Y$. Let $\pi_{1}:X\times Y \rightarrow X$ and $\pi_{2}:X \times Y \rightarrow Y$ be the projection maps defined by $\pi_{1}((x,y))=x$ and $\pi_{2}((x,y))=y$ for all $(x,y) \in X \times Y$. Is it true that $P = \pi_{1}(P) \times \pi_{2}(P)$? Give a proof or a counter-example. Intuitively, I believe this is true (correct me if I’m wrong please). However, I’m having trouble formulating a proof of this result. Any ideas? Thank you for your time! AI: No, this need not be the case. Consider for instance $P=\{(a,b),(a,c),(d,c)\}$, with $a\ne d$ and $b\ne c$. Then $\pi_1(P)\times \pi_2(P)=\{(a,b),(a,c),(d,c),(d,b)\}$, which contains $(d,b)\notin P$, so the two sets differ. (The inclusion $P\subseteq\pi_1(P)\times\pi_2(P)$ does always hold, though.)
H: Prove there are no polynomials $p(x), q(x)$ such that $\arctan(x)=\frac{p(x)}{q(x)}, \forall x \in (0,+\infty)$ I've tried this by contradiction, supposing that $p(x)$ and $q(x)$ exist: $$\lim_{x\to+\infty} \arctan x =\lim_{x\to+\infty} \frac{p(x)}{q(x)}= \frac{\pi}{2}.$$ Then the degrees of $p(x)$ and $q(x)$ are the same, let's say $n$. Now, I differentiate the expression and try to get a contradiction in the degrees in $q^2(x)=(1+x^2)[p'(x)q(x)-p(x)q'(x)]$. The leading term of $[p'(x)q(x)-p(x)q'(x)]$ cancels, so its degree is less than $2n-1$. I know that my professor did it that way, but when I try it, I don't see how to reach the contradiction. Can someone help me? Thanks. AI: Suppose that there are polynomial functions $p$ and $q$ such that$$(\forall x\in(0,\infty)):\frac{p(x)}{q(x)}=\arctan x,$$and assume the fraction is written in lowest terms. Since $p/q$ equals the bounded function $\arctan$ on $(0,\infty)$, $q$ has no zeros in $(0,\infty)$ (near such a zero $p/q$ would be unbounded, because $p$ cannot vanish there as well). For each $x>0$ we then have $\left(\frac pq\right)'(x)=\arctan'x=\frac1{1+x^2}$. But if two rational functions are equal on an interval, they are equal at every point where both are defined; so $\left(\frac pq\right)'=\frac1{1+x^2}$ wherever $q\ne0$. Let $I$ be the largest open interval containing $(0,\infty)$ on which $q$ has no zero. On $I$ the function $\frac pq-\arctan$ has derivative $0$, hence is constant, and the constant is $0$ because the two functions agree on $(0,\infty)$; thus $\frac{p(x)}{q(x)}=\arctan x$ for all $x\in I$. If $I=\Bbb R$: you have $\lim_{x\to\infty}\arctan x=\frac\pi2$, so the polynomials $p(x)$ and $q(x)$ would have to have the same degree, and then$$\lim_{x\to-\infty}\frac{p(x)}{q(x)}=\lim_{x\to\infty}\frac{p(x)}{q(x)}=\frac\pi2,$$whereas$$\lim_{x\to-\infty}\frac{p(x)}{q(x)}=\lim_{x\to-\infty}\arctan(x)=-\frac\pi2,$$a contradiction. If instead $I=(t_0,\infty)$ with $q(t_0)=0$: since the fraction is in lowest terms, $p(t_0)\ne0$, so $\left|\frac{p(x)}{q(x)}\right|\to\infty$ as $x\to t_0^+$, contradicting the boundedness of $\arctan$ on $I$. Either way we reach a contradiction, so no such polynomials exist.
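For completeness, a remark of my own on the limit step: if $p(x)=a_nx^n+\dots+a_0$ and $q(x)=b_nx^n+\dots+b_0$ have the same degree $n$, then $$\frac{p(x)}{q(x)}=\frac{a_n+a_{n-1}/x+\dots+a_0/x^n}{b_n+b_{n-1}/x+\dots+b_0/x^n}\longrightarrow\frac{a_n}{b_n}$$ both as $x\to+\infty$ and as $x\to-\infty$, which is why the two limits of $p/q$ at $\pm\infty$ must coincide, while those of $\arctan$ do not.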
H: Relation $S$ is an equivalence relation on a set $A$. Is the relation $S^{-1}$ also an equivalence relation on $A$? Let $S \subset A^2$ and $S=\{(a,b): aSb\}$. Then $S^{-1}=\{(b,a):aSb\} \subset A^2$. That means $S^{-1}$ is also an equivalence relation, because every pair is in the same relation as in $S$ (so it has the same properties) but with the entries switched. This is my answer to the question, but to me it feels rather shaky, so I am seeking a more algebraic explanation. AI: From the symmetry of the relation $S$, we get $S^{-1}=S$: indeed, $(a,b)\in S^{-1}\iff(b,a)\in S\iff(a,b)\in S$. Thus the fact that $S$ is an equivalence relation immediately implies that $S^{-1}$ is one as well.
H: If $A, B, H \leq G$ such that $A \triangleleft B$ and $H \triangleleft G$, then $HA \triangleleft HB$ There is a lemma that I'm trying to understand in my algebra class and I can't get it done. It says: Given a group $G$, let $A, B, H \leq G$ be subgroups of $G$ such that $A \triangleleft B$ and $H \triangleleft G$. Then $HA \triangleleft HB$ and $\frac{HB}{HA}$ is isomorphic to some quotient of $B/A$. The second part is pretty clear to me and I got the morphism right away, but as much as it seems stupid, I can't get the normality done, and I've tried a lot. And it is not only this problem; I just seem to be HORRIBLE at proving normality of things. Any tips on that? AI: First note that since $H\triangleleft G$ we have $HA=AH$ and $HB=BH$, so $HA$ and $HB$ are indeed subgroups, and $HA\subseteq HB$ because $A\subseteq B$. For any $a\in A$ and $b\in B$, we have $$bab^{-1}=a_1$$ for some $a_1\in A$, since $A\triangleleft B$. Note also that $a_1H=Ha_1$ and $bH=Hb$, since $H\triangleleft G$. Then $$(Hb)(Ha)(Hb)^{-1}=(Hb)(Ha)(Hb^{-1})=H(bab^{-1})=Ha_1\subseteq HA,$$ where the middle equality repeatedly uses $xH=Hx$ to collect all the copies of $H$ on the left. Since $HA=\bigcup_{a\in A}Ha$ and $HB=\bigcup_{b\in B}Hb$, this shows that $x\,(HA)\,x^{-1}\subseteq HA$ for every $x\in HB$, i.e. $HA\triangleleft HB$.
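If the coset manipulation feels slippery, here is an element-by-element version of the same computation (my own sketch): a general element of $HB$ is $h_1b$ with $h_1\in H$, $b\in B$, and a general element of $HA$ is $h_2a$ with $h_2\in H$, $a\in A$. Then $$(h_1b)(h_2a)(h_1b)^{-1}=h_1\,(bh_2b^{-1})\,(bab^{-1})\,h_1^{-1}=h_1h_3a_1h_1^{-1},$$ where $h_3:=bh_2b^{-1}\in H$ because $H\triangleleft G$, and $a_1:=bab^{-1}\in A$ because $A\triangleleft B$. Finally $a_1h_1^{-1}\in AH=HA$, say $a_1h_1^{-1}=h_4a_2$ with $h_4\in H$, $a_2\in A$, so the whole product equals $h_1h_3h_4a_2\in HA$.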
H: Divergence of Improper integral with trigonometric functions I need to test the convergence of the following improper integral: $$ \int_1^{+\infty} |\sin x|^x dx $$ I tried to find a minorant, but I did not find it immediately. Furthermore, I tried to split the integral into a series, by dividing the interval x>1 in intervals of the form $[1,\pi]$, $[k\pi,(k+1)\pi]$, with $k \ge 1$. Can you help me, please? AI: We begin by noting that, for $n \geq 0$, $$ \int_{0}^{\pi} \sin^n x \, \mathrm{d}x = 2 \int_{0}^{\frac{\pi}{2}} \sin^n x \, \mathrm{d}x \geq 2 \int_{0}^{\frac{\pi}{2}} \left(\frac{2x}{\pi} \right)^n \, \mathrm{d}x = \frac{\pi}{n+1}. \tag{1} $$ Here, the inequality $\sin x \geq 2x/\pi$, valid for $0 \leq x \leq \frac{\pi}{2}$, is utilized. Next, we bound the integral from below by \begin{align*} \int_{1}^{\infty} \left| \sin x \right|^x \, \mathrm{d}x &\geq \int_{\pi}^{\infty} \left| \sin x \right|^x \, \mathrm{d}x \\ &= \sum_{n=1}^{\infty} \int_{n\pi}^{(n+1)\pi} \left| \sin x \right|^{x} \, \mathrm{d}x \\ &\geq \sum_{n=1}^{\infty} \int_{n\pi}^{(n+1)\pi} \left| \sin x \right|^{(n+1)\pi} \, \mathrm{d}x \\ &= \sum_{n=1}^{\infty} \int_{0}^{\pi} (\sin x)^{(n+1)\pi} \, \mathrm{d}x. \end{align*} Now this lower bound can be further bounded from below by using $\text{(1)}$ to prove the desired divergence.
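One way to finish (my own completion of the last line): the inequality $(1)$ was stated for integer exponents, but the same proof gives $\int_0^{\pi}(\sin x)^{s}\,\mathrm{d}x\ge\frac{\pi}{s+1}$ for every real $s\ge0$. Applying it with $s=(n+1)\pi$ yields $$\int_{1}^{\infty}\left|\sin x\right|^{x}\,\mathrm{d}x\ \ge\ \sum_{n=1}^{\infty}\frac{\pi}{(n+1)\pi+1},$$ and the series on the right diverges by comparison with the harmonic series, so the integral diverges.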
H: Newton Raphson Method Iteration Scheme My question here is about the 2nd part. The 1st part is straightforward, taking $x^2 - N = 0$ as $f(x)$. How does one go about the second part? What exactly do they mean by applying the scheme two times? Edit: By second part, I mean how does one derive that formula "easily"! AI: Applying the scheme two times just means performing two Newton steps. If $A_k=x_k$ and $B_k=N/x_k$, then the first step gives $x_1=\frac{x_0}2+\frac{N}{2x_0}=\frac{A_0+B_0}2$, and the second step gives $$ x_2=\frac{A_1+B_1}2=\frac{x_1}2+\frac{N}{2x_1}=\frac{A_0+B_0}4+\frac{N}{A_0+B_0}. $$ And as $x_2$ is indeed a better approximation for $\sqrt N$, one can write $\sqrt{N}\approx x_2=\dots$
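A quick numerical illustration (my own; the numbers are not from the original problem): take $N=2$ and start at $x_0=1$, so $A_0=1$, $B_0=2$. Two applications of the scheme give $$x_2=\frac{1+2}{4}+\frac{2}{1+2}=\frac34+\frac23=\frac{17}{12}\approx1.4167,$$ already close to $\sqrt2\approx1.4142$.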
H: Addition of differentiable functions on a regular surface is still differentiable Given two differentiable functions $f,g:V\to \mathbb{R}^n$, where $V\subset S$ is an open subset of a regular surface $S$, prove that $f+g$, defined by $(f+g)(x) = f(x)+ g(x)$, is still differentiable. My attempt: for a given $x\in V$, we need to show $f+g$ is differentiable at this point; to do this we need to find one parametrization (chart map) $z:W\subset\mathbb{R}^2\to S$ around $x$ such that $(f+g)\circ z$ is differentiable. We already have two different parametrizations $a:U\subset\mathbb{R}^2\to S$ and $b:U'\subset\mathbb{R}^2\to S$ such that $f\circ a$ and $g\circ b$ are differentiable functions from open subsets of $\mathbb{R}^2$ to $\mathbb{R}^n$. My question is how to construct such a $z$ based on $a$ and $b$. AI: Hint: While the definition only requires the existence of one chart map, one usually proves next that the composition of a differentiable function with an arbitrary chart map is differentiable, i.e. differentiability does not depend on the chart chosen.
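A sketch of how the hint finishes the problem (my own elaboration, assuming that chart-independence result): pick a single parametrization $a:U\to S$ whose image contains $x$. By chart-independence, both $f\circ a$ and $g\circ a$ are differentiable maps from the open set $U\subset\mathbb{R}^2$ to $\mathbb{R}^n$, and therefore so is their sum $(f+g)\circ a=f\circ a+g\circ a$, since sums of differentiable maps between Euclidean spaces are differentiable. Hence $f+g$ is differentiable at $x$, with $z=a$.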
H: Convergence and calculation of a particular series. Let $$p_n(x)=\frac{x}{x+1}+\frac{x^2}{(x+1)(x^2+1)}+\frac{x^4}{(x+1)(x^2+1)(x^4+1)}+\dots+\frac{x^{2^n}}{(x+1)(x^2+1)(x^4+1)\cdots(x^{2^n}+1)}$$ I know this can be simplified to $$p_n(x)=1-\frac{1}{(x+1)(x^2+1)(x^4+1)\cdots(x^{2^n}+1)},$$ since it is just a telescoping sum. Evaluate: $$\lim_{n\rightarrow \infty}p_n(x)=L$$ Now for $|x|\geq1$ (with $x\neq-1$, where the terms are undefined) we get $L=1$, and for $x=0$ we get $L=0$; these two are quite obvious at a glance. Now here's my question: for $|x|< 1$, does the limit $L$ exist, and if it does, how do we calculate it? AI: Note that, for $x\neq1$, $$(x+1)(x^2+1)(x^4+1)\dots(x^{2^n}+1)=\dfrac{x^{2^{n+1}}-1}{x-1}.$$ Therefore, $$p_n(x)=1-\left(\dfrac{1-x}{1-x^{2^{n+1}}}\right)\xrightarrow{n\to\infty}x\qquad\forall\,|x|<1$$
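In case the product identity is not familiar, here is a quick check (my own remark): multiply both sides by $x-1$ and telescope, $$(x-1)(x+1)=x^2-1,\quad(x^2-1)(x^2+1)=x^4-1,\quad\dots,\quad(x^{2^n}-1)(x^{2^n}+1)=x^{2^{n+1}}-1,$$ so the left-hand side collapses to $x^{2^{n+1}}-1$ after $n+1$ steps. And since $x^{2^{n+1}}\to0$ for $|x|<1$, the stated limit $1-(1-x)=x$ follows.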
H: If $f:X\rightarrow Y$ is not a constant function and if $X$ is first countable then $f$ is not continuous at any isolated point of $X$. Conjecture: If $X$ is first countable and if $f:X\rightarrow Y$ is a function, then $f$ is not continuous at $x_0$ whenever $x_0$ is an isolated point of $X$. If $X$ is first countable then there exists a countable base $\mathscr{B}(x_0):=\{B_n\in\mathcal{U}(x_0):n\in\Bbb{N}\}$ at $x_0$, and if $x_0$ is an isolated point of $X$ there exists $n_0\in\Bbb{N}$ such that $B_{n_0}=\{x_0\}$. Now we may suppose that the basic neighbourhoods in $\mathscr{B}(x_0)$ are nested, $B_m\subseteq B_n$ for any $m\ge n$, so that if for each $n\in\Bbb{N}$ we pick an $x_n\in B_n$ we obtain a sequence $(x_n)_{n\in\Bbb{N}}$ converging to $x_0$, and additionally $x_n=x_0$ for every $n\ge n_0$. Now suppose that the function $f$ is continuous, so that the sequence $f(x_n)$ converges to $f(x_0)$; if the statement were true I should obtain a contradiction here, but unfortunately I can't. So I ask: complete the proof if the statement is true, and otherwise give a counterexample and say whether the statement becomes true under additional hypotheses (we can suppose that $Y$ is Hausdorff or first countable); note that, for example, any real function is continuous at an isolated point of its domain. Could someone help me, please? AI: This is certainly not true as stated. Take $f$ to be a constant function. In fact, every function $f:X\to Y$ is continuous at each isolated point $x_0$, because for any open set $V\subseteq Y$ containing $f(x_0)$ there is an open set $U\subseteq f^{-1}(V)$ containing $x_0$, namely $U=\{x_0\}$, which is open precisely because $x_0$ is isolated.
H: Do such polynomials exist? Let $f$ be a polynomial of degree 3 with integer coefficients such that $f(0) = 3$ and $f(1) = 11$. If $f$ has exactly 2 integer roots, how many such polynomials $f$ exist? Approach: $f(0) = 3$, so the constant term is 3, and $f(x) = ax^3 + bx^2 + cx + 3$ has exactly 2 integer roots. Since the coefficients are integers, the roots are integers, so there is one root of multiplicity 2. Question: Do such polynomials exist? AI: No polynomial $p(x)$ with integer coefficients such that $p(0)=3$ and $p(1)=11$ has integer roots. Let $n\in\Bbb Z$. If $n$ is even, then $n\equiv0\pmod2$ and therefore $p(n)\equiv p(0)(=3)\pmod 2$. So, $p(n)$ is odd. And if $n$ is odd, then $n\equiv1\pmod2$ and therefore $p(n)\equiv p(1)(=11)\pmod2$. So, again, $p(n)$ is odd. Since $p(n)$ is always odd, it cannot be equal to $0$, so $f$ has no integer roots at all, and the number of such polynomials is $0$.
H: Fixed point theorem with a Lipschitz continuous mapping. How can we prove that the map below does not have a fixed point? Define $S:=\{(x_m)\in l^1\mid \sum_{i=1}^\infty |x_i|\leq 1\}$, and consider the self map $\Phi$ on $S$ defined by $\Phi((x_m)):=(1-\sum_{i=1}^\infty |x_i|,x_1,x_2,\dots)$. Show that $S$ is a nonempty, closed, bounded, and convex subset of $l^1$ and $\Phi$ is a Lipschitz continuous map without a fixed point. My solution is, by way of contradiction, to assume that it does have a fixed point. Then, by definition of the mapping, $x_i=\bar{x} \ \ \forall i$. Moreover, the first coordinate of the mapping implies $\sum |\bar{x}|=1$, but this is clearly a contradiction, because the infinite sum of a constant number cannot converge. AI: You don't get $\sum |x_i|=1$. What the fixed-point equation gives is $x_1=1-\sum |x_i|$ together with $x_1=x_2,\ x_2=x_3,\dots$. It follows that $x_n$ is independent of $n$. If this common value $x_1 \neq 0$, then $\sum |x_i|=\infty$, so $(x_m)$ is not even in $l^1$, contradicting the fact that $x_1=1-\sum |x_i|$ (a fixed point must lie in $S$). But $x_n=0$ for all $n$ also contradicts $x_1=1-\sum |x_i|$, since it would give $0=1$. So there can be no fixed point.
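Regarding the Lipschitz part of the exercise, which the answer does not address, here is a quick check (my own sketch): for $x=(x_m)$ and $y=(y_m)$ in $S$, $$\|\Phi(x)-\Phi(y)\|_1=\Big|\sum_i\big(|y_i|-|x_i|\big)\Big|+\sum_i|x_i-y_i|\le\sum_i\big||x_i|-|y_i|\big|+\sum_i|x_i-y_i|\le2\|x-y\|_1,$$ so $\Phi$ is Lipschitz with constant $2$.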
H: The order of a finite field I'm reading a theorem about the order of a finite field, together with its proof. At the end, the author said: It follows that $\mathbb{F}$ is a vector space over $\mathbb{F}_{p}$, implying that its size $q$ is equal to $p^{m}$ for some $m>0$. I do not understand how "$\mathbb{F}$ is a vector space over $\mathbb{F}_{p}$" implies that its size $q$ is equal to $p^{m}$. Could you please elaborate on it? AI: Given any field $k$ and a vector space $V$ of finite dimension $n$ over $k$, the cardinality of $V$ is exactly $\lvert k\rvert^n$. To see this, just fix any basis of $V$ and use the fact that any element of $V$ can be uniquely expressed as a $k$-linear combination of basis vectors, and there are clearly $\lvert k\rvert^n$ possible linear combinations of the $n$ basis vectors. (More generally, if $k$ is any field and $V$ is a $k$-vector space of dimension $\lambda$, then by the same argument, the cardinality of $V$ is $\sum_n \binom{\lambda}{n}(\lvert k\rvert-1)^n$, which for infinite $\lambda$ is equal to $\lambda \cdot \lvert k \rvert$.) In your situation $k=\mathbb{F}_p$, so $\lvert k\rvert=p$, and $\mathbb{F}$, being finite, has some finite dimension $m\ge1$ over $\mathbb{F}_p$; hence $q=\lvert\mathbb{F}\rvert=p^m$.
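A tiny concrete illustration (my own example, not from the quoted text): the field $\mathbb{F}_4$ with four elements has characteristic $2$ and is a $2$-dimensional vector space over $\mathbb{F}_2$; choosing a basis $\{1,\omega\}$, its elements are exactly the $2^2=4$ linear combinations $0,\ 1,\ \omega,\ 1+\omega$.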
H: Show that $\{(1-t)^{\lambda}(1+t)^{2n-1-\lambda}, \lambda=0,1,...,2n-1\}$ forms a basis of $P_{2n-1}$, the polynomial vector space As stated above, I want to check whether $\{(1-t)^{\lambda}(1+t)^{2n-1-\lambda}, \lambda=0,1,...,2n-1\}$ forms a basis of $P_{2n-1}$, where $P_{2n-1}$ is the vector space of polynomials of degree less than or equal to $2n-1$. It is used without any further comment in a paper by Walter Gautschi on Gaussian quadrature formulae, https://www.cs.purdue.edu/homes/wxg/selected_works/section_07/128.pdf, on the third page between (2.5) and (2.6). I thought about showing the linear independence of these vectors, i.e. showing that $$\sum_{\lambda=0}^{2n-1} a_{\lambda}(1-t)^{\lambda}(1+t)^{2n-1-\lambda}=0 $$ implies that $a_{\lambda}=0$, $\lambda=0,1,\dots,2n-1$. It didn't seem like a big problem, but induction on $n \in \mathbb{N}$ didn't work for me, and neither did I get anywhere with the binomial theorem. I'd be grateful for any kind of help. AI: Hint: The set $\{1,t,\dots,t^{2n-1}\}$ is a basis of $P_{2n-1}$, and the map $\Phi:P_{2n-1} \to P_{2n-1}$ defined by $$ \Phi(f(t)) = (1-t)^{2n - 1} f\left(\frac{1+t}{1-t} \right) $$ is linear and invertible.
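A sketch of how the hint is meant to be used (my own elaboration): applying $\Phi$ to the monomials gives $$\Phi(t^{\lambda}) = (1-t)^{2n-1}\left(\frac{1+t}{1-t}\right)^{\lambda} = (1+t)^{\lambda}(1-t)^{2n-1-\lambda},$$ a polynomial of degree $2n-1$, and after reindexing $\lambda\mapsto2n-1-\lambda$ these are exactly the polynomials in the given family. Linearity of $\Phi$ is clear, and $\Phi$ is injective: if $(1-t)^{2n-1}f\!\left(\frac{1+t}{1-t}\right)$ is the zero polynomial, then $f$ vanishes at the infinitely many values $\frac{1+t}{1-t}$, $t\neq1$, so $f=0$. An injective linear map on the finite-dimensional space $P_{2n-1}$ is invertible, and an invertible linear map sends the basis $\{1,t,\dots,t^{2n-1}\}$ to a basis, which is the claimed statement.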
H: What's the general formula for the sum total of the first k terms of the following? I have written a naive piece of code to calculate the sum total of the first k terms of a sequence. It's too complicated for me to write it here in "mathematical" language with nested summations and all. Code: int foobar(int n, int k) { if (k == 1) return n-1; /* base case: f(n,1) = n-1 */ int count = 0; for (int i = 1; i <= n-k; ++i) count = count + foobar(n-i, k-1); /* f(n,k) = sum of f(n-i,k-1) over i = 1..n-k */ return count; } Does there exist a general formula for the sum of the first k terms in terms of n and k? Edit: The formula for the function: $f(n,k)=\begin{cases}n-1,&\hbox{ for }k=1\\ \sum\limits_{i=1}^{n-k}f(n-i,k-1),&\hbox{ for }k\ne1\end{cases}$ AI: $f(n,k)=\frac{(n-1)(n-2)\cdots(n-k)}{k!}=\binom{n-1}{k}$. Edit: this holds because Pascal's rule is satisfied; indeed $$f(n,k)=f(n-1,k-1)+\big(f(n-2,k-1)+\dots+f(k,k-1)\big)=f(n-1,k-1)+f(n-1,k),$$ and the base case $f(n,1)=n-1=\binom{n-1}{1}$ agrees, so induction gives the closed form.
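A quick sanity check of the closed form (my own, not in the original answer): $f(5,2)=\sum_{i=1}^{3}f(5-i,1)=f(4,1)+f(3,1)+f(2,1)=3+2+1=6$, and indeed $\binom{5-1}{2}=\binom{4}{2}=6$.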
H: A general circle through the intersection points of line $L$ and circle $S_1$ has the form $S_1+\lambda L$. What is the significance of $\lambda$? We write a general line $L$ passing through the intersection of two lines $L_1$ and $L_2$ as $L= L_1 + \lambda L_2$, where $\lambda$ is a variable. Similarly, in the family of circles we write a general circle $S$ passing through the points of intersection of a circle $S_1$ and a line $L$ as $S= S_1+\lambda L$. But why do we write it in this way? What is the significance of $\lambda$ and of this form? For example, if we want to find lines through the point of intersection of $3x+4y+5=0$ and $2x+y+4=0$, the required lines would be obtained by substituting different values of $\lambda$ in $3x+4y+5+ \lambda(2x+y+4)=0$. AI: Let us take up the case of lines first. Let $L_1(x,y)$ and $L_2(x,y)$ be two lines which intersect at $(a,b)$. Thus $$L_1(a,b)=0,\qquad L_2(a,b)=0.$$ Now let $L_3(x,y)$ be another line such that $$L_3(x,y)=L_1(x,y)+\lambda L_2(x,y).$$ Now, if we are able to show that $L_3$ passes through $(a,b)$, i.e. the intersection point of $L_1$ and $L_2$, our job will be complete. To do this we put $(a,b)$ into our expression for $L_3$: $$L_3(a,b)=L_1(a,b)+\lambda L_2(a,b)$$ $$\Rightarrow L_3(a,b)=0+\lambda \cdot 0$$ $$\Rightarrow L_3(a,b)=0.$$ So, as you can see, for any value of $\lambda$, our line $L_3$ always passes through the intersection of the lines $L_1$ and $L_2$. You can do the same with any two curves (e.g. two circles).
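As for the significance of $\lambda$ itself (my own remark, using the example from the question): each value of $\lambda$ picks out one particular member of the family $3x+4y+5+\lambda(2x+y+4)=0$, all of which pass through the common point; $\lambda$ is the unknown you solve for once one extra condition is imposed on the required line. For instance, for the member passing through the origin, substituting $(0,0)$ gives $5+4\lambda=0$, so $\lambda=-\frac54$, and the line is $3x+4y+5-\frac54(2x+y+4)=0$, i.e. $2x+11y=0$.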
H: How to convert $x^3+y^3=6xy$ to parametric equations? How does one convert $x^3+y^3=6xy$ to parametric equations? The suggested solution is: $x=\frac{6t}{1+t^3}$, $y=\frac{6t^2}{1+t^3}$ (the denominator must be $1+t^3$, not $1+t^2$, for these to satisfy the equation). But what is the process? AI: Taking $x=r \cos^{\frac{2}{3}} \phi$ and $y = r \sin^{\frac{2}{3}} \phi$, we obtain $r=6 \cos^{\frac{2}{3}} \phi \sin^{\frac{2}{3}} \phi$, and now this can be used for a parametric representation: $$x=6\cos^{\frac{4}{3}} \phi \sin^{\frac{2}{3}} \phi$$ $$y=6\cos^{\frac{2}{3}} \phi \sin^{\frac{4}{3}} \phi$$ (with $\phi\in[0,\frac{\pi}{2}]$, which traces the loop of the curve in the first quadrant).
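The rational form quoted in the question is presumably obtained as follows (my own derivation of the standard process): substitute $y=tx$, where $t$ is the slope of a line through the origin. Then $x^3+t^3x^3=6tx^2$, so for $x\neq0$ we get $x(1+t^3)=6t$, i.e. $$x=\frac{6t}{1+t^3},\qquad y=tx=\frac{6t^2}{1+t^3}\qquad(t\neq-1),$$ while $t=0$ gives the remaining point, the origin.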
H: Is a transitive subgroup $H \leq S_n$ of cardinality $n$ automatically cyclic? It seems very reasonable to assume that if $H \leq S_n$ is transitive and $|H|=n$, then $H$ must be generated by an $n$-cycle, but I cannot seem to prove it, so I really cannot be sure that it is true. Could you please help me with that? Thank you! EDIT: Thank you everyone! Could you tell me if we can at least say anything about the group being abelian? How about solvable? AI: The smallest counterexample is $\{1,(1,2)(3,4),(1,3)(2,4),(1,4)(2,3)\}\leq S_4$: this is the Klein four-group, which is transitive on $\{1,2,3,4\}$ and has order $4$, but is not cyclic. Indeed, any group of order $n$ embeds as a transitive subgroup of $S_n$ by Cayley's theorem, so one cannot expect much in general.
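Regarding the edit (my own remark, not part of the original answer): the Cayley picture also answers the follow-up questions negatively. The left regular representation realizes any group $G$ of order $n$ as a transitive subgroup of $S_n$ of order $n$, so such a subgroup need not be abelian (take $G=S_3$, of order $6$, inside $S_6$) and need not be solvable once $n$ is large enough (take $G=A_5$, of order $60$, inside $S_{60}$).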
H: Help with $\int_{\Bbb D}(x^2 - y^2)\, dx\, dy$ where $\Bbb D=\{|x|+|y|\le2\}$ Consider $\Bbb D=\{|x|+|y|\le2\}$; I'm trying to solve: $$\int_{\Bbb D}(x^2 - y^2)\, dx\, dy$$ My goal is to solve it through a change of variables. I was thinking of something like: $\begin{cases} |x|=u^2 \\ |y|=v^2 \end{cases}$. In such a case the new domain would be a disk; but the transformation is not invertible and I would have to study case by case: $\{x+y\le 2,x\ge0,y\ge0\},\{x-y\le2,x\ge0,y\le0\},\{-x+y\le2,x\le0, y\ge0\},\{-x-y\le2,x\le0,y\le0\}$. Now, another transformation that would fit the problem is: $\begin{cases} x+y=u \\ x-y=v \end{cases}$, but I wasn't able to write the new domain in a comfortable way. Is there another suitable change of variables? Otherwise, can someone work through one of the transformations I just proposed? AI: Note that the region of integration is a tilted square with corners at $(2,0)$, $(0,2)$, $(-2,0)$ and $(0,-2)$. The sides of this square are given by the lines $x-y=2$, $x-y =-2$, $x+y=2$, $x+y=-2$. Also, the integrand can be factored as $x^2 - y^2 = (x-y)(x+y)$. Thus $u=x-y$ and $v=x+y$ is a natural choice here; under it the four boundary lines become $u=\pm2$, $v=\pm2$, so the region becomes the square $[-2,2]\times[-2,2]$ in the $(u,v)$-plane. Then we use $$\iint\limits_R f\left( x,y \right)dx\,dy = \iint\limits_S f\left[ x\left( u,v \right),y\left( u,v \right) \right] \left| \frac{\partial \left( x,y \right)}{\partial \left( u,v \right)} \right|du\,dv ,$$ where $\left|\frac{\partial(x,y)}{\partial(u,v)}\right|$ is the absolute value of the determinant $\begin{vmatrix} \frac{\partial x}{\partial u}&\frac{\partial x}{\partial v}\\ \frac{\partial y}{\partial u}&\frac{\partial y}{\partial v} \end{vmatrix}$. For this term, use $x=\frac{u+v}{2}$ and $y=\frac{v-u}{2}$, getting $\left|\frac{\partial(x,y)}{\partial(u,v)}\right| = \frac12$. So we get $$I = \int_{u=-2}^2 \int_{v=-2}^2 \frac{uv}{2}\, du\, dv,$$ which easily evaluates to $0$.
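A shortcut worth noting (my own remark): the square $\Bbb D$ is symmetric under swapping $x$ and $y$, while the integrand changes sign, $x^2-y^2\mapsto y^2-x^2$, so the integral must be $0$ before any computation. The substitution confirms this, since $\int_{-2}^{2}u\,du=0$ already forces the double integral to vanish.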
H: Evaluate $\iiint_{D}(x^2+y^2)^2\,dx\,dy\,dz$ over a cylinder section Evaluate $$\iiint_{D}(x^2+y^2)^2\,dx\,dy\,dz\,,$$ where $$D=\{(x,y,z):x^2+y^2\leq 1,\ 0\leq z\leq 1,\ 0\leq y\}.$$ I used GeoGebra to represent my domain. It is the half of the cylinder with $y\ge0$, enclosed by the planes $z=0$, $z=1$. As I have a cylinder, is it a good idea to use polar coordinates, or should I use a projection of the domain onto $xOy$? In fact, when do I know which to use? Could you solve this as an example? I don't have many worked examples. Thank you, it would be great. AI: In this particular example, it seems optimal to use polar (cylindrical) coordinates, since you can represent $x^2+y^2$ in a very simple and convenient way. Let $(x,y,z)$ be a point in $\mathbb{R}^3$. You first set $x=r\cos\theta$ and $y=r\sin\theta$, where $\theta$ is the angle between the vector and the positive $x$-axis and $r$ is the length of the vector (in general $r\geq0$ and $\theta \in [0, 2\pi]$). You basically use polar coordinates for the plane and leave the third coordinate untouched. With this setting and the given $D$, $x^2 + y^2=r^2$, and thus $0\leq r\leq 1$, and since $y\geq 0$, $\theta \in [0, \pi]$. Also, the Jacobian of this transformation, which we need in order to change variables in the integral, is $r$. We can also use Fubini, since the integrand is a continuous function on a compact set and thus integrable. \begin{align} \iiint_{D}(x^2+y^2)^2\,dx\,dy\,dz &= \int_0^1 \int_0^{\pi} \int_0^1 r^4 \cdot r \text{ d}z \text{ d}\theta \text{ d}r = \int_0^1 \int_0^{\pi}r^5 \text{ d}\theta \text{ d}r \\&= \pi \int_0^1 r^5 \text{ d}r = \frac{\pi}{6}. \end{align}
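As for the question "when do I know which to use" (a rule of thumb of my own, not from the answer above): when both the region and the integrand are naturally described through $x^2+y^2$ (disks, half-disks, annuli, cylinders), polar/cylindrical coordinates usually simplify the limits and the integrand at the same time; when the region is instead bounded by graphs such as $y=f(x)$ or $z=g(x,y)$, projecting onto the $xOy$ plane and iterating Cartesian integrals tends to be easier. Here both the half-disk and the factor $(x^2+y^2)^2$ point to cylindrical coordinates.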
H: Prove that NAND and NOR are the only Universal Logic Gates. I was watching this lecture: link. $H(x,y)$ is a Boolean function. He says that $H(x,y)$ is a universal logic gate if and only if $H(x,x)$ is $1 - x$. I didn't get this part. So how does one prove that NAND and NOR are the only universal logic gates? AI: If $H(x,x)\ne 1-x$, then constructing the single-input NOT gate is impossible: if $H(0,0)=0$, then every circuit built only from $H$-gates outputs $0$ when all of its inputs are $0$ (this property is preserved under composition), and if $H(1,1)=1$, then every such circuit outputs $1$ on the all-ones input; either way NOT, which sends $0\mapsto1$ and $1\mapsto0$, can never be realized. So a universal $H$ must satisfy $H(0,0)=1$ and $H(1,1)=0$, i.e. $H(x,x)=1-x$ (tying the two inputs together gives a NOT gate). As for the 'only NAND and NOR' part: besides NAND and NOR, the only two-input gates satisfying $H(x,x)=1-x$ are the ones that act as a single-input NOT gate on one input while ignoring the other, and those are not universal, so 'game over'!
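To spell out the counting (my own sketch, not from the lecture): a two-input gate is determined by the four values $H(0,0),H(0,1),H(1,0),H(1,1)$, so there are $16$ gates in all. Requiring $H(0,0)=1$ and $H(1,1)=0$ leaves the two values $H(0,1)$ and $H(1,0)$ free, hence exactly four candidates: NAND ($H(0,1)=H(1,0)=1$), NOR ($H(0,1)=H(1,0)=0$), $H(x,y)=1-x$ and $H(x,y)=1-y$. The last two ignore one of their inputs, so any circuit built from them computes a single input variable or its negation and can never realize AND; hence among all two-input gates only NAND and NOR are universal.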