H: show there is a number that occurs infinitely often in the digital representation of a polynomial I would appreciate some help with the following problem: Let $p(x)$ be a polynomial with integral coefficients (possibly negative). Let $a_n$ be the digital sum in the decimal representation of $p(n)$ for $n\in \mathbb{N}$. Show that there is a number which occurs in the sequence $a_1,a_2,a_3,\ldots$ infinitely often. Note this problem is related to but distinct from the simpler problem here: A question about digital sum of polynomials over $\mathbb Z^+$. AI: We have the polynomial $P(x)$. We first define: $$Q(x)= \begin{cases} P(x) & \text{if the leading coefficient of $P$ is positive} \\ -P(x) & \text{if the leading coefficient of $P$ is negative} \end{cases}$$ This ensures that $Q$ has positive leading coefficient. Now, define: $$R(x)=Q(x+T)$$ where $T$ is a sufficiently large positive integer. We have: $$Q(x)=\sum_{i=0}^d a_ix^i \quad (a_d>0)$$ $$Q(x+T)=\sum_{i=0}^d a_i(x+T)^i=a_dx^d+(dTa_{d}+a_{d-1})x^{d-1}+\cdots+(a_dT^d+\cdots+a_0)$$ Each coefficient of $R(x)$ is a polynomial in $T$ with positive leading coefficient, as $a_d>0$. Thus, a sufficiently large choice of $T$ ensures that $R$ has all positive integer coefficients. Now, for a sufficiently large choice of $M$, $R(10^n)$ has the same sum of digits for all $n \geqslant M$, as the decimal expansion of $R(10^n)$ is simply the coefficients of $R$ separated by strings of $0$s. We have: $$P(10^n+T)=\pm Q(10^n+T) = \pm R(10^n) \quad (n \geqslant M)$$ Thus, $P(10^n+T)$ has the same sum of digits for all $n \geqslant M$ where $M,T$ are sufficiently large.
H: Neighborhood of a point (real number) How can we prove that every real number has infinitely many neighbourhoods? I know that it is true because we can consider a symmetric epsilon neighborhood of a point and there are infinitely many such epsilons. But how can we prove it rigorously? AI: You can prove it rigorously by constructing an infinite set of neighborhoods for an arbitrary real number $x$, and, of course, showing that the set is indeed infinite. Your idea, i.e. "consider a symmetric epsilon neighborhood of a point", is a very good start. Now try to actually construct a set of all such neighborhoods. To show that it is infinite, it is best to find a bijection from that set to some other set which you know is infinite.
H: If G is a simple graph with at least two vertices, prove that G must contain two or more vertices of the same degree. I will use the pigeon-hole principle. For a simple graph of n vertices the maximum degree a vertex can have is (n-1). Let there be (n-1) boxes corresponding to degrees of 1 to (n-1). We are going to put the vertices into the box with the degree it has. As there are n vertices, by the pigeon hole principle, at least two of the vertices must be in the same box so that all vertices can fit into the boxes. Here we assume that each vertex has degree at least 1. If some number k of vertices have degree zero, the argument is still valid as the boxes can simply be reduced to n-k-1 and if k>=2 then at least two other vertices also have the same degree. Edit As mentioned by 5xum below, the problem only occurs when k=1, so the only reduction needed is to n-2 boxes. AI: We notice that vertices that occur the same number of times in the sequence end up having the same degree. (Do I have to prove this?) A short sentence about why this is true in general would be nice, yes. Something like "since each appearance of a vertex in the sequence corresponds to one removal of an edge starting at that vertex, we see that the degree of the vertex is $(n-1)-k$, where $k$ is the number of appearances of the vertex in the sequence". Anyway, your proof has a much bigger problem later on. In particular, you claim the following is true: If some vertex occurs some j times less than (n-1) then by the nature of edges, at least one other edge must also occur j times. yet you provide no proof. How do you know this is true? The way I see it, this claim is just an elaborate restatement of the original theorem that you are yet to prove, so you cannot just assert it is true. Even more in particular, the problem is also here: From this let's remove from the sequence one of the edges (so now it will occur n-2 times), in other words, we are removing one less edge. Then, one other edge also occurs n-2 times. So at least 2 vertices are of the same degree as the others essentially occur the same time. well... sure. But all this shows is that if you remove one edge from a full graph, you get two vertices with degree $n-2$. But then you say: We could do this repeatedly How do you know this? You demonstrated the claim on a very specific type of graph (i.e., a full graph), and then claimed that the same is true on all graphs. You cannot do that. Edit: You have now also demonstrated the claim on another specific type of graph (those that can be constructed from a full graph by removing one edge), but you still haven't shown it on all graphs. Edit #2 Your new approach is a good idea, but it needs some work. The way you did it is ok, but can be made a little clearer. When analyzing the case when $k$ vertices have degree $0$, you can first say that the only issue appears when $k=1$ (otherwise, two vertices have degree $0$ and we are done), and then explain that if one vertex has degree $0$, then the maximum degree of the others is $n-2$.
H: Expected value question, with states! Suppose a light can emit 4 different colors of light. The light starts off by emitting blue light. Each second, the color of the light changes according to the following probabilities- When the light is blue, the next second, the light is red. (Named blue light state 0) When the light is red, the next second, there is a 1/3 probability of the light being blue, and a 2/3 probability of the light being green. (Named red light state 1) When the light is green, the next second, there is a 1/2 probability of the light turning red, and a 1/2 probability of the light turning yellow. (Named green light state 2) When the light is yellow, the next second, there is a 2/3 probability of the light turning green, and a 1/3 probability of the light turning off. (Named yellow light state 3) What is the expected number of seconds it takes for the light to turn off? Recursion was my attempt in solving this problem. I gave variables L_0, L_1, L_2, and L_3 as expected values for each state of the light. Figured L_1=L_0-1. Nevermind, I think I solved part of this problem, thanks to the help in the comments. I got L_0=L_1+1. L_1=1+L_2*(2/3)+L_0*(1/3). L_2=1+L_1*(1/2)+L_3*(1/2). AI: Your state space (call it $S = \{b,r,g,y,o\}$) has five states: four colors, and an off state. The important thing about a random walk is its Markovian nature: if, during a random walk, I hit a point $x$ in a state space, any probabilities associated with the future of the random walk will depend only on the fact that it is at $x$ now, and not on how it reached $x$ (its history). In words, the future, given the present, is independent of the history. Another way of saying it (at least in discrete time): a random walk "renews itself" at every stopping time i.e. if you choose to record the random walk only after some point, then it will look as if the random walk was started from the stop point (that is, we can forget the history of the original random walk and work with the "new" random walk from the point it was at when we started to record). Now, the concept of first-step analysis uses this well. Basically, imagine you want to find the expected time of some random walk starting at a point $a$ to hit a point $c$. Now, you imagine you went a step ahead: so the point $a$ moved to the point $d$ (with some probability of that happening). Now, to get to $c$, we must move from $d$ to $c$, and we can imagine we are studying the random walk started at $d$ now, and see the expected time to reach $c$. In other words, the expected time from $a$ to hit $c$ depends only upon the expected times from the neighbors of $a$ to hit $c$. How? Well, let us define $P(x,y)$ as the probability of moving from $x$ to $y$ for $x,y \in S$. Define $f : S \to \mathbb R^+$ by $f(s) = E[\inf\{n : X_n = o\} \mid X_0 = s]$. That is, $f(s)$ is the expected time for the bulb to go off, given that we started at a certain color. It is obvious that $f(o) = 0$. Now, from the statement I made, $f(s) = 1 + \sum_{t \in S} P(s,t)f(t)$, simply because $s$ must move to some $t$ in the state space (in one second's time), and from there $t$ must move to $o$, so we can imagine we restarted the random walk at $t$. (Of course, note that only $t$ which are neighbours of $s$ will have a non-zero term in the summation above). This gives, in our case: $$ f(b) = 1+f(r) \\ f(r) = 1+\frac 13 f(b) + \frac 23 f(g) \\ f(g) = 1 + \frac 12f(r) + \frac 12f(y)\\ f(y) = 1 + \frac 23 f(g) + \frac 13 f(o)\\ f(o)= 0 $$ Can you solve these equations? 
$f(b)$ is what you need. $f(b) = 18, f(r) = 17,f(g) = 15,f(y)=11$. So it takes $18$ seconds on average for the light to switch off.
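A quick numerical check of the first-step equations above (a minimal Python sketch, not part of the original answer; it assumes NumPy is available and just collects the four equations in the unknowns $f(b), f(r), f(g), f(y)$ with $f(o)=0$):

import numpy as np

# Each row is one first-step equation, moved into the form M x = 1:
#   f(b) - f(r)                    = 1
#   -(1/3) f(b) + f(r) - (2/3) f(g) = 1
#   -(1/2) f(r) + f(g) - (1/2) f(y) = 1
#   -(2/3) f(g) + f(y)             = 1   (using f(o) = 0)
M = np.array([[1,   -1,    0,    0],
              [-1/3, 1,  -2/3,   0],
              [0,  -1/2,   1,  -1/2],
              [0,    0,  -2/3,   1]])
x = np.linalg.solve(M, np.ones(4))
print(x)   # [18. 17. 15. 11.], matching f(b)=18, f(r)=17, f(g)=15, f(y)=11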
H: When defining a vector space is the scalar part of the field or always a real number I've stumbled upon an exercise that takes the set of integers $\Bbb{Z}$, defines addition and multiplication as usual but scalar multiplication as $\lfloor{\alpha}\rfloor * k$, where $\alpha$ is the scalar and $k$ the element of the vector space, and proceeds to claim that this set is not a vector space. Wikipedia says the scalar is in field $\Bbb{F}$: In the list below, let u, v and w be arbitrary vectors in V, and a and b scalars in F. which in this case is the integers $\Bbb{Z}$? However, the solution to the exercise only makes sense if the scalar is in $\Bbb{R}$. What am I misunderstanding? AI: In your case, $\mathbb Z$ plays the role of the set $V$ defined in the Wikipedia article. And you're right, the scalar multiplication defined in your exercise makes sense if the field $\mathbb F$ is the field of the reals $\mathbb R$. So here a vector is just an integer in $\mathbb Z$.
H: How is this proposition false? I have a proposition: ((x v y) <=> (~x => ~y)) When I solve this, I end up getting True but the answer is False. Here's how I solved it: when we have a <=> b, we can write it as ~a.~b + a.b and a => b becomes ~a+b So the above equation becomes: => ~(x + y).~(~x => ~y) + (x + y).(~x => ~y) => ~(x+y).~(x+~y) + (x+y).(x+~y) Using De Morgan's law (~(a+b) = ~a.~b and ~(a.b) = ~a+~b) => (~x.~y).(~x.y) + (x+y).(x+~y) => (~x.~x + ~x.y + ~y.~x + ~y.y) + (x.x + x.~y + y.x + y.~y) => (~x + ~x(y+~y) + 0) + (x + x(~y + y) + 0) => (~x + ~x) + (x + x) => ~x + x which is always True But, according to another solution, <=> is only True when both sides result in the same solution, So, if we go by this: => LHS = x+y and RHS is (x + ~y), which can never be equal and hence, False. So what am I doing wrong? EDIT: PS: This is where I got the question from. AI: After line "Using De Morgan's law" You should have $$(\neg x.\neg y).(\neg x.y) + (x+y).(x+\neg y)$$ $$0+x+x .\neg y+y.x +y.\neg y$$ $$x+x.(y+\neg y) = x$$ So you end up with $x$: the initial equivalence is true precisely when $x$ is true, so it is neither a tautology nor identically false, which is why the claimed answer is "False".
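To double-check the algebra, here is a minimal Python truth table (not from the original answer; it simply evaluates both sides of the equivalence and compares the result with x):

from itertools import product

imp = lambda p, q: (not p) or q              # material implication p => q
for x, y in product([False, True], repeat=2):
    formula = (x or y) == imp(not x, not y)  # (x v y) <=> (~x => ~y)
    print(x, y, formula, formula == x)       # last column is always True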
H: Show $(Y_{n}-a)_{+}\leq (Y_{n})_{+}+\lvert a\rvert$ In a proof, I saw the use of the following inequality $(Y_{n}-a)_{+}\leq (Y_{n})_{+}+\lvert a\rvert(*)$ without any explanation, where $Y_{n}$ is some random variable and $a$ a constant. Note the definition $(X)_{+}:=\max\{0,X\}$. I am aware that $(\cdot)_{+}$ as a function is subadditive, but the problem in $(*)$ is that I have a minus rather than a plus, so subadditivity cannot be used directly, right? But rather I can use the monotonicity of $(\cdot)_{+}$ since clearly $-a\leq \lvert a \rvert$ and thus $Y_{n}-a\leq Y_{n}+\lvert a\rvert$ such that $(Y_{n}-a)_{+}\leq (Y_{n}+\lvert a\rvert)_{+}$. Now I have an upper bound where I can use subadditivity and thus $(Y_{n}+\lvert a\rvert)_{+}\leq (Y_{n})_{+}+ (\lvert a\rvert)_{+}=(Y_{n})_{+}+ \lvert a\rvert$. Is my proof/thinking correct? Or is there a more general way to go about this when dealing with $(\cdot)_{+}$? AI: Notice that \begin{align} Y_n -a = (Y_n)_+ - (Y_n)_- - a \leqslant (Y_n)_+ - a \end{align} because $(Y_n)_-$ is non negative. Then, triangle inequality says that \begin{align} (Y_n)_+ - a \leqslant \left|(Y_n)_+ - a \right| \leqslant |(Y_n)_+| + |a| \end{align} Remark that $|(Y_n)_+| = (Y_n)_+$ as it is non negative.
H: ***For any*** $f: B \to A$ with $(B \ne \emptyset)$, can a function $h:A \to B$ be constructed in such a way that $fhf = f$? I've been stuck on this exercise for a while now, any help would be greatly appreciated. Here's my try. I constructed $h$ as a left inverse of $f$. For any function $f: B \to A$ with $B \ne \emptyset$, there exists a function $h:A \to B$ such that $(i)$ $fhf = f$. $h$ is a left inverse of $f$, Define $hf:B \to B$, We get $(ii)$ $hf(B) = B$, By $(i)$, we have $fhf: B \to A$, Following $(ii)$, $fhf(B) =f(hf(B)) = f(B) =A \implies f(hf) = fhf = f$ AI: You did not define the function $h$, so your argument is not valid. A proof of this requires the Axiom of Choice. Let $C$ be the range of $f$. For each point $y$ in $C$ pick an element $b_y \in B$ such that $f(b_y)=y$. This is possible by the Axiom of Choice. Now define $h(y)=b_y$. This defines $h$ on the range of $f$. For $y$ not in the range of $f$ define $h(y)$ to be any fixed element of $B$ (possible since $B \ne \emptyset$). I will let you verify that $f(h(f(b)))=f(b)$ for all $b \in B$.
H: S/pS is uncountable From Page 163 of Rotman's Homological Algebra book: The direct product of countably many Z is not free. In the proof, a subgroup S is defined where S = {tuples with countably many components: each positive power of p divides almost all components}. He was trying to show that S is not free. Then he deduced that S is uncountable. And were S a free abelian group, then S/pS would be uncountable, for S = + C's (where + is direct) implies S/pS is isomorphic to + (C/pC). I am not quite sure how to see by the argument he gave that S/pS would be uncountable. Any help would be appreciated! Thank you AI: If $S$ were a free Abelian group, it would be a direct sum of $\mathfrak a$ copies of $\Bbb Z$ for some cardinal $\mathfrak a$. Then $S/pS$ would be a direct sum of $\mathfrak a$ copies of $\Bbb Z/p\Bbb Z$. Therefore $|S/pS|\ge\mathfrak a$ since $S/pS$ contains a generating set of size $\mathfrak a$. We know that $S/pS$ is countable so $\mathfrak a$ is a countable cardinal. This means that $S$ must also be countable, which we know isn't true.
H: A deduction in Hoffman Kunze whose explanation is not given I am self-studying linear algebra from Hoffman and Kunze and could not see why a deduction must be true. On page 15 the authors wrote: why, if a solution $x_{1},\ldots,x_{n}$ belongs to $F$, must the system of equations have a solution with $x_{1},\ldots,x_{n}$ in $F_{1}$? Is it due to the fact that in $AX=Y$, both $A$ and $Y$ belong to $F_{1}$, so a solution $X$ must exist in $F_{1}$? AI: Because the augmented matrix $[A|Y]$ has the same row-reduced echelon form in $F_1$ and $F$.
H: Contrapositive arguments I came across the following problem:- Let $S$= {${u_{1},u_{2},...u_{n}}$} $\subseteq \Bbb C^{n}\ $and $T$= {${Au_{1},Au_{2},...Au_{n}}$}, for some square matrix A $\in$$\Bbb M_{n}($C$)$. If $S$ is linearly independent prove that T is linearly independent for every invertible matrix A. I tried to prove it using a contrapositive argument. But I do not understand how to do it. I have two arguments running parallel. (i) If T is linearly dependent then S is linearly dependent for some invertible matrix A (ii) If T is linearly dependent then S is linearly dependent for some non-invertible matrix A Which of the following is the correct one? It would be a great help if someone provides some insight as to how we, in general, take the contrapositive of an argument like this. AI: First of all, I find that this statement is easier to prove directly rather than by contrapositive. In particular, if $S$ is linearly independent, then we use the definition of linear independence to show that $T$ is also linearly independent. If $c_1,\dots, c_n$ are such that $c_1 A u_1 + \cdots + c_n A u_n = 0$, then $$ c_1 A u_1 + \cdots + c_n A u_n = 0 \implies\\ A(c_1 u_1 + \cdots + c_n u_n) = 0 \implies\\ c_1 u_1 + \cdots + c_n u_n = 0 \implies\\ c_1 = \cdots = c_n = 0, $$ as desired (the second implication uses the invertibility of $A$, and the third uses the linear independence of $S$). As for the contrapositive of the statement to be proved: (i) is the correct version of the contrapositive. Here is a justification. Let $p$ denote the statement "$S$ is linearly independent", and let $q$ denote the statement "$T_A$ is linearly independent for every invertible matrix $A$"; I have used the notation $T_A$ to emphasize that the set $T$ depends on our choice of $A$. The statement that you are trying to prove is $p \implies q$; the contrapositive of this would be $\lnot q \implies \lnot p$. It is clear that $\lnot p$ is simply "$S$ is linearly dependent", but deciding what $\lnot q$ should be is trickier. We can break the statement $q$ down into "for every invertible matrix $A$: $T_A$ is linearly independent". If we let $r_A$ denote the statement "$T_A$ is linearly independent", then we can write this symbolically as $$ q = \forall A \in GL_n:r_A. $$ Here, $A \in GL_n$ means "$A$ is invertible". Now, if it is not true that "for all $A \in GL_n$, $r_A$ is true", then there must be a counterexample. That is, there must be an example of an $A \in GL_n$ for which $r_A$ is not true. Symbolically, we might write $$ \lnot (\forall A \in GL_n:r_A) = \exists A \in GL_n : \lnot r_A. $$ Analogously, if we wanted to disprove the statement "every swan is white" (i.e. "for every swan, the color of the swan is white"), then we would have to find a swan that is not white. So, the negation is "there exists a non-white swan" (i.e. "there exists a swan such that the color of the swan is not white"). In this context, your statement (ii) is the equivalent of saying "there exists a non-swan that is not white."
H: Prove domain and whether the function - f(x) = 1/(1 + |x|) is one to one. I am a first year Computer Science student studying Calculus and Linear Algebra this semester. I have two questions: how to find the domain of the following function, and whether it is one-to-one. Question - 1 - Domain - My understanding f(x) = 1/(1+ |x|) 1 + |x|, the denominator, can never be 0. |x| is an absolute value, therefore x can be any real number. Therefore the domain is $(-\infty, \infty)$. However, the solution is as follows and I do not understand this solution. 1 + |x| >= 1 1/(1 + |x|) <= 1 0 < 1/(1 + |x|) <= 1 for all $x \in (-\infty, \infty)$ Question - 2 - One to One function or Not y = 1/(1 + |x|) 1 + |x| = 1/y |x| = 1/y -1 x = +- (1/y -1) From here, I can see that y must be greater than 0 since the denominator cannot be 0. I get stuck here. I had to use the wolframalpha site to plot the graph. Without using this website, how should I proceed to test for a one-to-one function? Graph Plot to check one-to-one function My understanding is that since there are two x values (+-) for each corresponding y, it is not one to one. Is my understanding correct? Thanks very much. https://www.wolframalpha.com/input/?i=f%28x%29+%3D+1%2F+%281+%2B+%7Cx%7C%29 AI: The domain is $(-\infty, \infty)$ as you have observed. The function is not one-to-one because $f(-1)=f(1)$.
H: Question on tensor product with field Let $A$ be a finitely generated $K$-algebra which has no zero divisors. Here $K$ is a field of characteristic $0$. Let $K\subset L$ an algebraic field extension. Now let $f: L\to E$ and $g: \textrm{Quot}(A)\to E$ be two homomorphisms to another field $E$. The universal property of the tensor product gives us a homomorphism $A\otimes_K L \to E$. Is this necessarily injective? AI: $ℂ \otimes_ℝ ℂ → ℂ,~x\otimes y ↦ xy$ is not injective, since $1^2 + \mathrm i^2 = 0$.
H: Proving $\lim_{n \to\ \infty} \sum_{k=0}^{n} \frac{1}{(k+3)k!} = e-2$ $$ \lim_{n \to \infty} \sum_{k=0}^{n} \frac{1}{(k+2)!} \leq \lim_{n \to\ \infty} \sum_{k=0}^{n} \frac{1}{(k+3)k!} \leq \lim_{n \to\ \infty} \sum_{k=0}^{n} \frac{1}{k!} -2 $$ This is what I came up with but the problem is that the upper bound is practically identical to the lower one. Is there any way to find bounds of summation to solve questions involving $e$? AI: By indeterminate coefficients we find the decomposition $$\frac1{(k+3)k!}=\frac{(k+1)(k+2)}{(k+3)!}=\frac2{(k+3)!}-\frac2{(k+2)!}+\frac1{(k+1)!}.$$ Summing each piece against the series $e=\sum_{m\ge0}\frac1{m!}$: the omitted initial terms total $$2\left(\frac1{0!}+\frac1{1!}+\frac12\right)-2\left(\frac1{0!}+\frac1{1!}\right)+\frac1{0!}=2,$$ while the coefficients of $e$ total $$2-2+1=1,$$ so the sum equals $1\cdot e-2=e-2$.
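A quick numerical sanity check of the identity (a minimal Python sketch, not part of the original answer):

from math import factorial, e

s = sum(1 / ((k + 3) * factorial(k)) for k in range(50))
print(s, e - 2)   # both approximately 0.7182818284590452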
H: Sum of measurable extended real-valued functions is measurable? Let $f, g: X \to \overline{\mathbb{R}}$ be measurable (here $\overline{\mathbb{R}}$ denotes the extended real numbers). Let $E_1 = \{ x \in X: f(x) = - \infty, g(x) = +\infty \}, E_2 = \{ x \in X: f(x) = + \infty, g(x) = -\infty \}$. Define $$ h(x) = \begin{cases} f(x) + g(x) & \text{if $x \notin E_1 \cup E_2$} \\\\ 0 & \text{if $x \in E_1 \cup E_2$} \end{cases} .$$ How can I show that $h$ is measurable? Preferably without using limits or such, but only with basic definitions like the definition of a measurable function, etc. I am having a hard time because I don't know how to write the preimage $\{ x \in X: h(x) > a \}$ in terms of already-known measurable sets. I would appreciate any hints or answers. For reference, this is from Bartle's Elements of Integration and Lebesgue Measure (I've paraphrased with some notation). AI: I'll suppress $x$ from the notation. Thus, $\{h>a\}:=\{x\in X:h(x)>a\}$. First of all note that if $(q_n)_{n\in \mathbb{N}}$ is some enumeration of $\mathbb{Q}$, then $$ \{h>a, -\infty<f<\infty,-\infty<g<\infty\}=\bigcup_{n=1}^{\infty} \{q_n<f<\infty,a-q_n<g<\infty\} $$ Indeed, it's perfectly clear that the right-hand side is a subset of the left-hand side. On the other hand, if $x\in \{h>a, -\infty<f<\infty,-\infty<g<\infty\}, $ then $f(x)=h(x)-g(x)>a-g(x)$, implying, since $\mathbb{Q}$ is dense in $\mathbb{R}$ that there exists some $n_0\in \mathbb{N}$ such that $f(x)>q_{n_0}>a-g(x)$. This exactly means that $x\in \{q_{n_0}<f<\infty, a-q_{n_0}<g<\infty\}$, and since $x$ was arbitrary, we conclude that the left-hand side is a subset of the right-hand side and thus, the two sets must be the same. Now, since $f$ and $g$ are both measurable, the right-hand side is a union of measurable sets and we conclude that $\{h>a,-\infty<f<\infty,-\infty < g <\infty\}$ is measurable for any $a\in \mathbb{R}$. To finish, note that if $a\geq 0$, then \begin{align} \{h>a,f=\infty\} &=\{f=\infty\}\setminus \{g=-\infty\}\\ \{h>a, g=\infty\} &= \{g=\infty\} \setminus \{f=-\infty\}, \end{align} both of which are measurable. Similarly, if $a<0$, then $\{h>a,f=\infty\}=\{f=\infty\}$ and $\{h>a,g=\infty\}=\{g=\infty\}$. In conclusion, since $$ \{h>a\}=\{h>a, -\infty<f<\infty,-\infty<g<\infty\}\cup\{h>a,f=\infty\}\cup \{h>a,g=\infty\}, $$ all of which are measurable, we conclude that $\{h>a\}$ is measurable. Note that the difficulty here was really just finding a nice way to write the case where both functions were finite.
H: Prove that $\left(a+\frac{1}{a}\right)^{2}+\left(b+\frac{1}{b}\right)^{2} \geq 8$ For any positive a, b prove that $$\left(a+\frac{1}{a}\right)^{2}+\left(b+\frac{1}{b}\right)^{2} \geq 8$$ My approach: Using the well known inequality, $ \boxed{\mathrm{AM} \geq \mathrm{GM}}$ $\left(a+\frac{1}{a}\right)^{2}+\left(b+\frac{1}{b}\right)^{2} \geq 2 \sqrt{\left(a+\frac{1}{a}\right)^{2}*\left(b+\frac{1}{b}\right)^{2}}$ $= 2\left(a b+\frac{1}{a b}+\frac{a}{b}+\frac{b}{a}\right)$ What to do next? Any hint or suggestion would be greatly appreciated AI: $x+\frac 1 x \geq 2$ for any positive number $x$ by the AM-GM inequality. Hence $(a+\frac 1a)^{2}+(b+\frac 1 b)^{2} \geq 2^{2}+2^{2}=8$.
H: Proving $\sum_{n=0}^\infty\frac{(-1)^n\Gamma(2n+a+1)}{\Gamma(2n+2)}=2^{-a/2}\Gamma(a)\sin(\frac{\pi}{4}a)$ Mathematica gives $$\sum_{n=0}^\infty\frac{(-1)^n\Gamma(2n+a+1)}{\Gamma(2n+2)}=2^{-a/2}\Gamma(a)\sin(\frac{\pi}{4}a),\quad 0<a<1$$ All I did was reindex and then use the series property $\sum_{n=1}^\infty (-1)^n f(2n)=\Re \sum_{n=1}^\infty i^n f(n)$ ; $$\sum_{n=0}^\infty\frac{(-1)^n\Gamma(2n+a+1)}{\Gamma(2n+2)}=\sum_{n=1}^\infty\frac{(-1)^{n-1}\Gamma(2n+a-1)}{\Gamma(2n)}=-\Re\sum_{n=1}^\infty\frac{i^{n}\Gamma(n+a-1)}{\Gamma(n)}$$ and I don't know how to continue; any ideas? Thanks AI: A solution in large steps by Cornel Ioan Valean In the following, I'll focus on the last series. Let's prove that $$\sum_{n=1}^{\infty} x^n \frac{\Gamma(n+a-1)}{\Gamma(n)}=\frac{x}{(1-x)^a}\Gamma(a).$$ Two key steps are necessary: $1)$. Note and use that $$\frac{1}{\Gamma(1-a)}\int_0^1 t^{-a} (1-t)^{n+a-2}\textrm{d}t=\frac{\Gamma(n+a-1)}{\Gamma(n)}.$$ $2)$. (after summing) Employ the following integral representation with a hypergeometric structure (in fact, it may be viewed as a particular case of an integral expressed in terms of a hypergeometric function) $$\int_0^1 \frac{x^{a-1}}{(1-x)^a (1+b x)}\textrm{d}x=\frac{\pi}{\sin(\pi a)}\frac{1}{(1+b)^a}.$$ One useful way to perform the evaluation of the last integral is by using the variable change $x/(1-x)=y$, followed by the variable change $(1+b)y=z$ in order to get precisely a special case of the Beta function. End of story
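The key lemma can at least be checked numerically (a minimal sketch, not from the original solution; it assumes mpmath is available, and $a=1/2$, $x=0.3$ are arbitrary test values with $|x|<1$):

from mpmath import mp, nsum, gamma, inf

mp.dps = 30
a, x = mp.mpf('0.5'), mp.mpf('0.3')
lhs = nsum(lambda n: x**n * gamma(n + a - 1) / gamma(n), [1, inf])
rhs = x * gamma(a) / (1 - x)**a
print(lhs, rhs)   # the two printed values agree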
H: My answer is 36225. Please verify if it is correct A computer program prints out all integers from 0 to ten thousand in base 6 using the numerals 0,1,2,3,4 and 5. How many numerals would it have printed? AI: The numbers from $0$ to $5$ have $1$ digit in base $6$. The numbers from $6$ to $35$ have $2$ digits in base $6$. The numbers from $36$ to $215$ have $3$ digits in base $6$. The numbers from $216$ to $1295$ have $4$ digits in base $6$. The numbers from $1296$ to $7775$ have $5$ digits in base $6$. The numbers from $7776$ to $10000$ have $6$ digits in base $6$. Adding the ranges up, you get $1(6) + 2(36-6) + 3(216-36) + 4(1296-216) + 5(7776-1296) + 6(10001-7776)=50676.$
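A brute-force check of this count (a minimal Python sketch, not part of the original answer):

def base6_len(n):
    # number of base-6 digits of n; 0 is printed as the single numeral 0
    if n == 0:
        return 1
    d = 0
    while n:
        n //= 6
        d += 1
    return d

print(sum(base6_len(n) for n in range(10001)))   # 50676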
H: Does midpoint convexity at a single point imply full convexity at that point? Let $f:[a,b] \to [0,\infty)$ be a continuous function, and let $c \in (a,b)$ be a fixed point. Suppose that $f$ is midpoint-convex at the point $c$, i.e. $$ f((x+y)/2) \le (f(x) + f(y))/2, $$ whenever $(x+y)/2=c$, $x,y \in [a,b]$. Is it true that $f$ is convex at $c$? i.e. does $$ f\left(\alpha x + (1- \alpha)y \right) \leq \alpha f(x) + (1-\alpha)f(y) $$ hold whenever $ \alpha \in [0,1]$ and $x,y \in [a,b]$ satisfy $\alpha x + (1- \alpha)y =c$? Does the answer change if we assume $f$ is strictly decreasing? The classic proofs do not seem to adapt to this case. AI: Pick $c=0$ and $f: [-1,1] \rightarrow [0,\infty), f(x)=-x^3+1$. Then we have for $c=0$ that $x=-y$, so $$ f(0)=1 = \frac{(-x^3+1)+(x^3+1)}{2} = \frac{f(x)+f(y)}{2}. $$ So $f$ is midpoint-convex. However, for $\alpha=2/3$ and $x=-1/2, y=1$ we have $$ \alpha x + (1-\alpha) y= \frac{2}{3}\cdot \frac{-1}{2} + \frac{1}{3} \cdot 1 = 0 = c $$ and $$ \alpha f(x) + (1-\alpha) f(y) = \frac{2}{3} \cdot \frac{9}{8} + \frac{1}{3} \cdot 0 = \frac{3}{4} < 1 = f(0). $$ So our function is not convex at $c=0$ (and $f$ is even strictly decreasing).
H: Is there a metric on a finite-dimensional (non-trivial) vector space $V$ over $\mathbb{R}$ which makes it compact? I know that a finite-dimensional (non-trivial) vector space $V$ over $\mathbb{R}$ which is normed isn't compact, but what about when it has a metric in general? AI: Take any bijection $f: V \to [0,1] $ and define $d(v,w)=|f(v)-f(w)|$. This makes $V$ compact.
H: Smallest subset of integers that you can use to produce 1,2,....,40 Find the smallest subset of integers that you can use to produce $1,2,...,40$ by only using $"+"$ or $"-"$, and each number in the subset can be used at most one time. There is a hint that $0$ must be in the set, but I cannot see any justification as to why you would need to add or take away a $0$. AI: Like $1,3,9,27$ and balanced ternary? Clearly $3$ numbers produce only $3^3=27$ variants (each number can be $\cdot\, 0$, $\cdot\, 1$, or $\cdot\,(-1)$), which is fewer than the $40$ values required, so $4$ is the minimum.
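A quick verification that $\{1,3,9,27\}$ works (a minimal Python sketch, not from the original answer; it enumerates all signed subset sums):

from itertools import product

digits = [1, 3, 9, 27]
reachable = {sum(c * d for c, d in zip(coeffs, digits))
             for coeffs in product([-1, 0, 1], repeat=4)}
print(all(n in reachable for n in range(1, 41)))   # True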
H: The interval in which the function $f(x)=\sin(e^x)+\cos(e^x)$ is increasing is/are? [image with the answer options] I don't understand how to approach such problems. It would be helpful if you could kindly guide me through the process. I have also shared the options image, and the correct answers have a green tick. AI: Hint: $$\sin y+\cos y=\sqrt2\sin\left(y+\frac\pi4\right)$$ is increasing on $$\left[-\frac{3\pi}4,\frac{\pi}4\right]+2k\pi$$ and the transformation $$y=e^x$$ is invertible.
H: Why $P(\limsup A_n) = 1$ when $P(\cup A_n) = 1$? Let us have a sequence of independent sets $\{A_n\}$; suppose for every $n \in \mathbb N$ we have $P(A_n)< 1$. If $P(\cup A_n) = 1$, why is $P(A_n \ \ i.o.) = 1$? AI: This is not true. If the $A_n$'s form a partition of the sample space then $P (\cup A_n)=1$ but the $A_n$'s cannot occur infinitely often because of disjointness. Answer for the edited version: $\prod (1-P(A_n))=P(\cap A_n^{c}) =1-1=0$ and this implies $\sum \ln (1-P(A_n))=-\infty$. Since $\ln (1-x) \sim -x$ as $x \to 0$ this implies $\sum P(A_n) =\infty$. By the second Borel–Cantelli lemma (which uses the independence) this implies $P(\lim \sup A_n)=1$.
H: Why is the set of all bases of a vector space a manifold? Let $V$ be an $\mathbb{R}$ vector space. Then the collection of all ordered bases of $V$, denoted by $P(V)$, is seemingly a manifold. How does one see this? I suppose that second-countability is inherited from some $\mathbb{R}^n$ diffeomorphic to $V$. But how should one think about the locally Euclidean property? Also, what happens in the case where the dimension of $V$ is not finite? AI: The set of all bases of $V$ is in 'natural' bijection with $GL_{\dim(V)}(\mathbb{R})$ by assigning to a matrix $M$ the basis consisting of the vectors that form the columns of $M$ in some fixed basis $\mathcal{B}_0$ of $V$ (or the rows if you will). Hence, the natural manifold structure on $P(V)$ is that of $GL_{\dim(V)}(\mathbb{R})$ seen as an open subset of $M_{\dim(V)}(\mathbb{R})\cong \mathbb{R}^{\dim(V)^2}$.
H: p is prime and $p=4t+3$, I am wondering how to prove that when $p ≡ 1\pmod7$ $$(-1)^\frac{(p+1)}{4} = 1$$ and when $p ≡ 5 \pmod3$ $$(-1)^\frac{(p+1)}{4} = -1$$ My attempt: For question 1, $p=4t+3$, hence $$\frac{(p+1)}{4} = t+1$$ $4t+3≡1\pmod7$, hence $$t+1≡4\pmod7$$ For question 2, $p=4t+3$, hence $$\frac{(p+1)}{4} = t+1$$ $4t+3≡5\pmod3$, hence $$t+1≡0\pmod3$$ Then I do not know how to prove it at all. What do you think about it? Could you please show me? Regards AI: Both statements are false. For example, $p=43$ satisfies $p=4t+3$ and $p\equiv 1\pmod{7}$ but $(-1)^{11}=-1$. Similarly, $p=23$ satisfies $p=4t+3$ and $p\equiv 2\pmod{3}$ but $(-1)^{\frac{p+1}{4}}=(-1)^6=+1$.
H: Application of Identity Theorem Let {$a_n$} and {$b_n$} be sequences of complex numbers such that each $a_n$ is non zero, $$\lim_{n\to\infty}a_n = \lim_{n\to\infty}b_n=0$$ and such that for every natural number k, $$\lim_{n\to\infty}\frac{b_n}{a_{n}^k}=0$$ Suppose f is an analytic function on a connected open subset U of $\mathbb C$ which contains $0$ and all the $a_n$. Show that if $f(a_n)=b_n$ for every natural number n, then $b_n = 0$ for every natural number n. I surmise that we have to apply the Identity Theorem of Complex Analysis and conclude that f is identically zero. This in turn will confirm that all $b_n$ are zero. But I am not able to choose the correct function for applying the Identity Theorem. Help me please. AI: Since $0 \in U$, $f$ has a power series expansion about the point $0$. So $f(z) = \sum_{i = 0}^{\infty}\alpha_iz^i$. We are done if we show that $\alpha_k = 0$ for each $k$. Now by hypothesis for each $n$ \begin{equation} f(a_n) = \sum_{i = 0}^{\infty}\alpha_ia_n^i = b_n \end{equation} Dividing both sides by $a_n^k$, we get \begin{equation} \sum_{i = 0}^{\infty}\frac{\alpha_ia_n^i}{a_n^k} = \frac{b_n}{a_n^k} \end{equation} For $k=0$, letting $n \to \infty$ gives $\alpha_0 = 0$. Now use induction, taking limits as $n \to \infty$, to show that $\alpha_k = 0$ for each $k$.
H: Questions about differentiability Which of these following statements is true? a. Let $f:\mathbb {R}^n \to \mathbb {R}^k$ and $g:\mathbb {R}^k \to \mathbb {R}^m$. If $g\circ f$ is differentiable, then $g$ and $f$ are differentiable. b. Let $f:\mathbb {R}^m \to \mathbb {R}^n$ with $df(p)=0$ for $p \in \mathbb{R}^m$. Then $f$ is constant in a neighbourhood of $p$. c. Let $g:\mathbb {R} \to \mathbb {R}$ be differentiable in $\mathbb{R}$. Then $f:\mathbb {R}^2 \to \mathbb {R}$, $f(x,y) := g(x)$, is differentiable in $\mathbb{R}^2$. I'm pretty sure that a. is false but I'm not really sure about the rest. I think that b. is true and that c. is true, is that correct? AI: As you guessed, $a$ is false. Just build $f$ and $g$ such that $g\circ f = 0$. However, $b$ is false as well: just take $m=n=1$, and $f(x) = x^3$, for which $df(0)=0$ but $f$ is not constant near $0$. Unless what you meant was $\forall p, df(p) = 0$. And finally, yes, $c$ is true. You can easily compute the differential of $f$ in terms of $g'$.
H: Taylor expansion for $\frac1{\cos(x)-\sin(x)}$ I am trying to find a closed expression for the Taylor series of $\frac1{\cos(x)-\sin(x)}$ in a neighborhood of $x = 0$. One can obtain the coefficients from: $$(a_0+a_1x+a_2x^2+\cdots)(1-x-\frac{x^2}2+\frac{x^3}{3!}+\frac{x^4}{4!}--++\cdots) = 1$$ thus $a_0= 1, a_1=1, a_2 = 3/2$, etc. Is it possible to find a recursive or symbolic law (like $(B+1)^n-B^n = 0$ for Bernoulli numbers) for the coefficients $a_n$? Edit: if we let $a_n = b_n/n!$, we can find the following: $$b_n = \binom{n}1b_{n-1}+\binom{n}2b_{n-2}-\binom{n}3b_{n-3}-\binom{n}4b_{n-4}++--\cdots$$ for $n\ge1$, and $b_0 = 1$. Still this is not very appealing. AI: The coefficients of numerators are $$\{1,1,3,11,19,361,307,24611,83579,2873041,12193841,512343611,869783713\}$$ and for denominators $$\{1,1,2,6,8,120,80,5040,13440,362880,1209600,39916800,53222400\}$$ You can find them in the OEIS (sequences A279257 and A279258), but according to those pages they do not show any particular structure. Edit If it were around $x=-\frac \pi 4$ (which is not the case) $$\sec \left(x+\frac{\pi }{4}\right)=\sum_{n=0}^\infty \frac{E_{2 n}}{(2 n)!}\left(x+\frac{\pi }{4}\right)^{2 n}$$ where the Euler numbers $E_{2n}$ appear.
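The convolution relation in the question, $\sum_{j\le n} a_j c_{n-j}=[n=0]$ where $c_m$ are the Taylor coefficients of $\cos x-\sin x$, gives a clean recursion. A minimal Python sketch with exact fractions (not from the original answer; the period-4 sign pattern $+,-,-,+$ encodes $c_m$):

from fractions import Fraction
from math import factorial

sign = [1, -1, -1, 1]                       # sign pattern of cos x - sin x
c = [Fraction(sign[m % 4], factorial(m)) for m in range(10)]

a = []                                      # Taylor coefficients of 1/(cos x - sin x)
for n in range(10):
    s = Fraction(int(n == 0)) - sum(a[j] * c[n - j] for j in range(n))
    a.append(s / c[0])
print(a[:5])   # [1, 1, 3/2, 11/6, 19/8], matching the OEIS numerators/denominators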
H: Individual percentage of a set of numbers I have a set of four numbers: 7.5, 18.5, 424 and 0. A certain percentage is associated with each of them, viz. 1.66%, 4.11%, 94.22% and 0.1% respectively, which in total is 100%. I want to know how these percentages were calculated. Can a generic formula be applied? AI: Simply put: The individual percentage is equal to the individual number divided by the total (and multiplied by $100~\%$). The total is simply $$ \text{total} = 7.5 + 18.5 + 424 + 0 = 450 $$ Therefore, the percentages are $$ \frac{7.5}{\text{total}}, \qquad \frac{18.5}{\text{total}}, \qquad \text{etc.} $$ NB: In this case, some rounding has been applied to the numbers, so don't worry about the minor rounding details (unless you really have to!).
H: Question in proof of multiplication of matrices in linear algebra While self-studying linear algebra from Hoffman and Kunze, I have a question about the proof of a theorem whose image I am adding. [Due to a glitch, both images appear at the end, not here.] Question: How did the authors introduce the second variable $s$ in the proof without it being in the subscripts, as in $\sum_{s} B_{rj} C_{rj}$? I tried putting in $s$ as follows: $\sum_{r} A_{ir}\sum_{s} B_{rjs} C_{srj}$, but this doesn't prove the RHS in the last line, as $\sum_{s} \sum_{r} A_{ir} B_{rjs} C_{srj} = \sum_{s} (AB)_{iks} C_{srj} = \ldots$ But in the next step I don't know how to proceed to get what is to be proved, as the subscripts are $iks$ and $srj$. So, kindly tell me what mistake I am making. AI: It is difficult to understand what exactly has confused you and how you decided that $A,B$ should have three subscripts. However, since it is clear that your issue has something to do with summation notation, perhaps it would be useful to look at a concrete example. Let's consider the specific case where $A,B,C$ are each of size $3 \times 3$. Now, the confusing part of the proof is $$ [A(BC)]_{ij} = \sum_rA_{ir} (BC)_{rj} = \sum_r A_{ir}\sum_s B_{rs}C_{sj}. $$ First of all, let's understand the definition of matrix multiplication. We have $$ (AB)_{ij} = \sum_{r} A_{ir}B_{rj} = A_{i1}B_{1j} + A_{i2}B_{2j} + A_{i3}B_{3j}. $$ With that, we can write $$ [A(BC)]_{ij} = A_{i1}(BC)_{1j} + A_{i2}(BC)_{2j} + A_{i3}(BC)_{3j}\\ = A_{i1}(B_{11}C_{1j} + B_{12}C_{2j} + B_{13}C_{3j})\qquad \\ \qquad + A_{i2}(B_{21}C_{1j} + B_{22}C_{2j} + B_{23}C_{3j})\\ \qquad + A_{i3}(B_{31}C_{1j} + B_{32}C_{2j} + B_{33}C_{3j}). $$ Can you see that the 3-line sum above is the same thing as $$ \sum_{r=1}^3 A_{ir}\left(\sum_{s=1}^3 B_{rs}C_{sj}\right)? $$
H: How do I compute the derivative of this inverse function? Let $$f(x)=\frac{1}{16}(e^{\arctan(\frac{x}{7})} + \frac{x}{7})$$ You are given that $f$ is a one-to-one function and its inverse function $f^{-1}$ is a differentiable function on $\mathbb{R}$. Also $f(0)=\frac{1}{16}$. What is the value of $(f^{-1})'(1/16)$? AI: We have $$(f^{-1})'\left(\frac{1}{16}\right)= \frac{1}{f'\left(f^{-1}\left(\frac{1}{16}\right)\right)}= \frac{1}{f'(0)}.$$ Since $$f'(x)=\frac{1}{16}\left(\frac{7\,e^{\arctan(x/7)}}{49+x^2}+\frac{1}{7}\right),$$ we get $f'(0)=\frac{1}{16}\cdot\frac{2}{7}=\frac{1}{56}$, and therefore $(f^{-1})'(1/16)=56$.
H: Finding plane equation by using parametric equation Find the equation of the plane passing through $(4,-2,6)$ and containing the line $x=3-2t$, $y=t$, $z=5+2t$. How does one solve this type of question? Thanks. AI: Let $P(4,-2,6)$ and the line $AB$: $X_\ell=A+t\,(B-A)=(3,0,5)+t\,(-2,1,2)$; then the plane equation will be $n\cdot (X-A)=0$ where $n=[(P-A)\times(B-A)]$ and $X=(x,y,z)$ is an arbitrary point on the plane. As the mixed product corresponds to a determinant, we have $$\left| \begin{array}{ccc} x-3&y&z-5\\ 4-3&-2-0&6-5\\ -2&1&2 \end{array} \right|=0$$ $$-5 x - 4 y - 3 z + 30=0$$ We can now check that $P$ and the whole line $AB$ lie in the plane (e.g. with WolframAlpha), although it's not needed.
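A minimal NumPy sketch of the same computation (not part of the original answer), checking that both $P$ and the whole line satisfy the resulting equation:

import numpy as np

A = np.array([3, 0, 5])          # point on the line
d = np.array([-2, 1, 2])         # direction of the line
P = np.array([4, -2, 6])         # given point
n = np.cross(P - A, d)
print(n)                         # [-5 -4 -3]
print(np.dot(n, P - A))          # 0: P lies in the plane
print([np.dot(n, (A + t*d) - A) for t in (0.0, 1.0, -2.5)])   # all 0: the line lies in the plane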
H: Given a finite set of points. How can I make a ball around these points, so that the intersection of any two balls is empty I'm looking for an algorithm or a theoretical result for the following problem: Given the finite set of points $X = \{x_{1},\ldots,x_{d}\} \subseteq \mathbb{R}^{n}$. For $x \in X$ we define $B_{r}(x)$ as the ball around $x$ with radius $r \geq 0$, i.e., $$ B_{r}(x) := \{y \in \mathbb{R}^{n} \mid \lVert x -y\rVert_{2} \leq r\}. $$ My question is, how can I efficiently compute radii such that $$ B_{r}(x) ~~\cap ~~ B_{r'}(x') = \emptyset \quad \text{for all distinct } x,x' \in X. $$ AI: This is very simple. Let $S=\{\|x-x'\|:x\neq x', x,x'\in X\}$. Then $S$ is a finite set of positive numbers. Since $S$ is finite, $\min(S)>0$. Take any number $\delta>0$ that satisfies $\delta<\frac{\min(S)}{2}$. Then the balls $\{B_\delta(x)\}_{x\in X}$ are pairwise disjoint for $x\in X$. Let's verify this: If $y\in B_\delta(x)\cap B_\delta(x')$, then $\|x-y\|\leq\delta$ and $\|y-x'\|\leq\delta$. Thus $\|x-x'\|\leq\|x-y\|+\|y-x'\|\leq 2\delta<\min(S) $, a contradiction.
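Computationally this is just a minimum over pairwise distances. A minimal Python sketch (the function name is mine, not from the answer; it assumes NumPy is available, and for the closed balls above one should take any $r$ strictly below the returned value):

import numpy as np
from itertools import combinations

def half_min_distance(points):
    # half the minimum pairwise distance; any radius strictly below this works
    return min(np.linalg.norm(p - q) for p, q in combinations(points, 2)) / 2

X = [np.array(p, dtype=float) for p in [(0, 0), (1, 0), (0, 3)]]
print(half_min_distance(X))   # 0.5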
H: Formula for the number of elements in $S_{10}$ of order $10$. I'm trying to find a formula (mostly in terms of factorials) for the number of elements in $S_{10}$ of order $10$. We first count how many elements of $S_{10}$ have the same cycle structure. So, since $lcm(a,b)=10⟺a=10,b=0$ or $a=5,b=2$, the types are either one $10$-cycle, or one $5$-cycle, two $2$-cycles and one $1$-cycle, and therefore the number of elements in $S_{10}$ with order $10$ is equal to $\frac{10!}{5∗2∗4}+\frac{10!}{10}=9!+ \frac{9!}{4}$. Am I right? AI: Step $1$: any permutation can be written (using the cycle notation) as a product of disjoint cycles (in particular, they commute). Step $2$: the order of a product of disjoint cycles is the lowest common multiple of the lengths of the cycles. This is fairly easy to prove. Step $3$: Since $10=2 \cdot 5$, an element in $S_{10}$ has order $10$ if and only if it is composed of a single cycle of length $10$, or of cycles of lengths $5$ and $2$ (minimum one for each). So you get the following types: $$(..........)\quad,\quad (..)(.....)\quad,\quad (..)(..)(.....)$$ Step $4$: Using combinatorics, compute the possibilities for each and add them up. For a $10$-cycle there are $9!$ possibilities, as by definition $1$ is always the first digit and then you can freely permute the other $9$. For a $(2,5)$-cycle there are ${10 \choose 3} \cdot {7 \choose 2}\cdot 4!$, as first you choose the three elements that are fixed, then among the remaining seven you choose the ones in the two-cycle, and finally you count how many five-cycles you can form using the five elements that remain. For a $(2,2,5)$-cycle there are ${10 \choose 1} \cdot {9 \choose 2} \cdot {7 \choose 2} \cdot 4! \cdot \frac{1}{2}$. The logic is the same as above, but in the end you have to divide by two to account for double counting of the two cycles (with the process above you're counting $(12)(34)$ and $(34)(12)$ as separate elements). Adding these up gives $362880+60480+90720=514080$; note that your attempt counts the $10$-cycles and the $(2,2,5)$ type ($10!/40=9!/4$) but misses the $(2,5)$ type with three fixed points.
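The standard cycle-type counting formula $n!/(\prod_i \ell_i \cdot \prod_j m_j!)$, packaged as a minimal Python sketch (not from the original answer) that reproduces the three counts above:

from math import factorial
from collections import Counter

def count_cycle_type(n, parts):
    # permutations of S_n with the given cycle lengths (1-cycles included)
    assert sum(parts) == n
    denom = 1
    for length in parts:
        denom *= length
    for mult in Counter(parts).values():
        denom *= factorial(mult)
    return factorial(n) // denom

total = (count_cycle_type(10, [10])
         + count_cycle_type(10, [5, 2, 1, 1, 1])
         + count_cycle_type(10, [5, 2, 2, 1]))
print(total)   # 362880 + 60480 + 90720 = 514080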
H: Measurability on subsets | tower property of conditional expectation I have some understanding issues following the proof of the tower property of conditional expectation. The Theorem is the following: Let $F_0, F_1$ be $\sigma$-fields with $F_0 \subseteq F_1 \subseteq F$ and $X$ a random variable with $X \geq 0$. Then, $$ \mathbb{E}[\mathbb{E}[X|F_1]|F_0] = \mathbb{E}[X|F_0] \qquad(1)$$ and $$\mathbb{E}[\mathbb{E}[X|F_0]|F_1] = \mathbb{E}[X|F_0] \qquad(2)$$ $P$ almost surely. The proof given states: For $Y_0\geq0$, $Y_0$ measurable with respect to $F_0$, $\mathbb{E}[Y_0\mathbb{E}[X|F_1]] = \mathbb{E}[Y_0X]$ and this proves (1). Question (only regarding (1)) First point I do not get. $Y_0$ is by definition $F_0$ measurable. So, I would assume that we have $$\mathbb{E}[Y_0\mathbb{E}[X|F_1]|F_0].$$ Then, I assume that we plug in the defining property of the conditional expectation which yields $$\mathbb{E}[Y_0X|F_0]$$ --> Why is this possible? For plugging $X$ in, $Y_0$ has to be $F_1$ measurable, but it is only $F_0$ measurable? --> And then, where does "$|F_0$" go? I don't understand why this part vanishes... So, I am kind of lost. Thanks a million for any help and further explanations! :) AI: $F_0 \subset F_1$. So the fact that $Y_0$ is $F_0$ measurable implies that it is also $F_1$ measurable: $Y_0^{-1}(E) \in F_0 \subset F_1$ for any Borel set $E$ in $\mathbb R$. Second question: $E(E(Z|F_0))=EZ$ for any $Z$. This is immediate from the definition of conditional expectation: $\int_{A} E(Z|F_0) dP =\int_{A} ZdP$ for any $A \in F_0$. Take $A=\Omega$ to get $\int_{\Omega} E(Z|F_0) dP =\int_{\Omega} ZdP$. This means $E(E(Z|F_0))=EZ$.
H: Suppose $\{A_i | i ∈ I\}$ is an indexed family of sets and $I \neq \emptyset$. Prove that $\bigcap_{i\in I}A_i\in\bigcap_{i\in I}\mathscr P(A_i)$. Not a duplicate of Prove that if $I ≠ \emptyset$ then $\bigcap_{i \in I}A_{i} \in \bigcap_{i \in I} \mathscr P (A_{i})$ To Prove $ \bigcap_{i \in I} A_i \in \bigcap_{i \in I} P(A_i) $ This is exercise $3.3.15$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$: Suppose $\{A_i | i ∈ I\}$ is an indexed family of sets and $I \neq \emptyset$. Prove that $\bigcap_{i\in I}A_i\in\bigcap_{i\in I}\mathscr P(A_i)$. Here is my proof: Let $A$ be an arbitrary element of $\bigcap_{i\in I}\mathscr P(A_i)$. Let $x$ be an arbitrary element of $A$. Since $I\neq \emptyset$, let $i$ be an arbitrary element of $I$. From $A\in\bigcap_{i\in I}\mathscr P(A_i)$ and $i\in I$, $A\in\mathscr P(A_i)$ and so $A\subseteq A_i$. From $A\subseteq A_i$ and $x\in A$, $x\in A_i$. Thus if $i \in I$ then $x\in A_i$. Since $i$ was arbitrary, $\forall i\Bigr(i\in I\rightarrow x\in A_i\Bigr)$ and so $x\in\bigcap_{i\in I}A_i$. Thus if $x\in A$ then $x\in\bigcap_{i\in I}A_i$. Since $x$ was arbitrary, $\forall x\Bigr(x\in A\rightarrow x\in\bigcap_{i\in I}A_i\Bigr)$ and so $A\subseteq\bigcap_{i\in I}A_i$ and ergo $A\in\mathscr P(\bigcap_{i\in I}A_i)$. Therefore if $A\in\bigcap_{i\in I}\mathscr P(A_i)$ then $A\in\mathscr P(\bigcap_{i\in I}A_i)$. Since $A$ was arbitrary, $\forall A\Bigr(A\in\bigcap_{i\in I}\mathscr P(A_i)\rightarrow A\in\mathscr P(\bigcap_{i\in I}A_i)\Bigr)$ and so $\bigcap_{i\in I}\mathscr P(A_i)\subseteq\mathscr P(\bigcap_{i\in I}A_i)$. Let $A$ be an arbitrary element of $\mathscr P(\bigcap_{i\in I}A_i)$. This means $A\subseteq\bigcap_{i\in I}A_i$. Since $I\neq\emptyset$, let $i$ be an arbitrary element of $I$. Let $x$ be an arbitrary element of $A$. From $A\subseteq\bigcap_{i\in I}A_i$ and $x\in A$, $x\in \bigcap_{i\in I}A_i$. From $x\in \bigcap_{i\in I}A_i$ and $i\in I$, $x\in A_i$. Thus if $x\in A$ then $x\in A_i$. Since $x$ was arbitrary, $\forall x\Bigr(x\in A\rightarrow x\in A_i\Bigr)$ and so $A\subseteq A_i$ and ergo $A\in\mathscr P(A_i)$. Thus if $i\in I$ then $A\in \mathscr P(A_i)$. Since $i$ was arbitrary, $\forall i\Bigr(i\in I\rightarrow A\in \mathscr P(A_i)\Bigr)$ and so $A\in\bigcap_{i\in I}\mathscr P(A_i)$. Therefore if $A\in\mathscr P(\bigcap_{i\in I}A_i)$ then $A\in \bigcap_{i\in I}\mathscr P(A_i)$. Since $A$ was arbitrary, $\forall A\Bigr(A\in\mathscr P(\bigcap_{i\in I}A_i)\rightarrow A\in\bigcap_{i\in I}\mathscr P(A_i)\Bigr)$ and so $\mathscr P(\bigcap_{i\in I}A_i)\subseteq \bigcap_{i\in I}\mathscr P(A_i)$. Since $\bigcap_{i\in I}\mathscr P(A_i)\subseteq\mathscr P(\bigcap_{i\in I}A_i)$ and $\mathscr P(\bigcap_{i\in I}A_i)\subseteq \bigcap_{i\in I}\mathscr P(A_i)$, then $\mathscr P(\bigcap_{i\in I}A_i)= \bigcap_{i\in I}\mathscr P(A_i)$. Therefore we can rewrite $\bigcap_{i\in I}A_i\in\bigcap_{i\in I}\mathscr P(A_i)$ as $\bigcap_{i\in I}A_i\in\mathscr P(\bigcap_{i\in I}A_i)$ which is equivalent to $\bigcap_{i\in I}A_i\subseteq \bigcap_{i\in I}A_i$ which is by definition true. $Q.E.D.$ Is my proof valid$?$ Thanks for your attention. AI: Your proof is valid but way longer than it needs to be. Really, let $i_0\in I$. Since $\cap_{i\in I} A_i\subseteq A_{i_0},$ you get that $\cap_{i\in I} A_i\in \mathscr{P}(A_{i_0})$. Since $i_0$ was arbitrary, we get that $\cap_{i\in I} A_i\in \cap_{i\in I} \mathscr{P}(A_i)$.
H: Proving A is a group over unusual operation Let $A= R^2 - {(0,0)}$ and operation $*$ over A is defined by $$(a, b)*(a′, b′) = (aa′−bb′, ab′+a′b)$$ Question: Is $(A,*)$ a group? My attempt: First of all, as for all $(a,b)$ in $A$ $$(1,0)*(a,b)=(a.1-0.b,b.1-a.0)=(a,b)=(a,b)*(1,0)$$ $(1,0)$ is the identity element. Secondly, let $(a,b)$ and $(c,d)$ be elements of $A$. Then: $$(a,b)*(c,d)=(ac-bd,ad+cb)$$ As both $ac-bd$ and $ad+cb$ are in $\mathbb R$, $A$ is closed over $*$. I had a problem with proving the existence of an inverse for every element of A. AI: Note: if we identify $\mathbb{R}^2$ with $\mathbb{C}$ via $(a,b)\equiv a+ib$, then $(a,b)*(c,d)=(ac-bd, ad+cb)\equiv(ac-bd)+i(ad+bc)=(a+ib)\cdot(c+id)$. So what you are checking is actually whether $(\mathbb{C}-\{0\},\cdot)$ where $\cdot$ denotes the usual multiplication of complex numbers is a group. But even cats and dogs will say that this is true when asked. In particular, closure holds because a product of nonzero complex numbers is nonzero, and the inverse of $(a,b)$ is $\left(\frac{a}{a^2+b^2},\frac{-b}{a^2+b^2}\right)$, corresponding to $\frac{1}{a+ib}=\frac{a-ib}{a^2+b^2}$.
H: I don't understand this derivation I am trying to understand a derivation and need help. We are given the grid $$ s=-\cos(j\pi/n),\quad s\in[-1,1] $$ and the nonlinear transformation $$ y(s)=C\tan\left[\frac{\pi(s+1)}{4}+\frac{s-1}{2}\arctan\frac{y^*}{C}\right]+y^*,\quad y\in[0,\infty) $$ where $y^*$ and $C$ are constant parameters. The derivative and its stated value are $$ \frac{ds}{dy(s)}=4C/[\pi+2\arctan(y^*/C)]/[C^2+(y(s)-y^*)^2] $$ but I don't understand how to reach this result. AI: Since $y^*$ and $C$ are constants, so is $C'=\frac{1}{2}\arctan(y^*/C)$. Write $\theta(s) = \frac{\pi(s+1)}{4}+ C'(s-1)$, so that $y= C\tan(\theta(s))+ y^*$. Then $$\frac{d\theta}{ds}=\frac{\pi}{4}+C'=\frac{\pi+2\arctan(y^*/C)}{4}.$$ The derivative of $\tan(\theta(s))$ with respect to $s$ is $\sec^2(\theta(s))\frac{d\theta}{ds}$, so $$\frac{dy}{ds}=C\sec^2(\theta(s))\,\frac{\pi+2\arctan(y^*/C)}{4}.$$ Now $\sec^2\theta=1+\tan^2\theta=1+\left(\frac{y-y^*}{C}\right)^2=\frac{C^2+(y-y^*)^2}{C^2}$, hence $$\frac{dy}{ds}=\frac{\pi+2\arctan(y^*/C)}{4C}\left(C^2+(y-y^*)^2\right).$$ To get $\frac{ds}{dy}$, take the reciprocal of that, which is exactly the stated result.
H: What does $\exists^{=1}$ stands for? In this paper, they use $\exists^{=1}$, I know that $\exists$ stands for there exists. But what does $\exists^{=1}$ stand for? My guess is "Only one existential quantifier". But can someone confirm it? AI: For any constant $k\in\mathbb N$, $\exists^{=k}x\,\phi(x)$ is a standard notation for “there exist exactly $k$ elements $x$ such that $\phi(x)$”. Similarly, $\exists^{\ge k}x\,\phi(x)$ denotes “there exist at least $k$ elements $x$ such that $\phi(x)$”, and you can now guess what $\exists^{>k}x\,\phi(x)$, $\exists^{\le k}x\,\phi(x)$, and $\exists^{<k}x\,\phi(x)$ mean. All of these are already definable in the usual first-order logic, hence they should be seen as abbreviations: for example, $$\begin{align*} \exists^{\ge k}x\:\phi(x)&\iff\exists x_1\,\dots\,\exists x_k\:\Bigl(\bigwedge_{1\le i<j\le k}x_i\ne x_j\land\bigwedge_{i=1}^k\phi(x_i)\Bigr),\\ \exists^{>k}x\:\phi(x)&\iff\exists^{\ge k+1}x\:\phi(x),\\ \exists^{<k}x\:\phi(x)&\iff\neg\exists^{\ge k}x\:\phi(x),\\ \exists^{\le k}x\:\phi(x)&\iff\neg\exists^{>k}x\:\phi(x),\\ \exists^{=k}x\:\phi(x)&\iff\exists^{\ge k}x\:\phi(x)\land\exists^{\le k}x\:\phi(x). \end{align*}$$ Another notation in a similar vein is $\exists^\infty x\,\phi(x)$ for “there exist infinitely many $x$ such that $\phi(x)$”. This is not a priori definable in first-order logic, hence it is only used in more specialized contexts: in particular, in theories of arithmetic, where it stands for $\forall y\,\exists x\,\bigl(x\ge y\land\phi(x)\bigr)$, and in the model theory of theories with elimination of infinity.
H: Integration problem where the denominator contains a fractional power I am a bit stuck on this integration problem \begin{equation} \int \frac{1}{a+(x-b)^c} dx \end{equation} Is there a name for this type of integral that I can look up? AI: Substituting $y=x-b$, \begin{equation} I=\int \frac{dx}{a+(x-b)^c} =\int \frac{dy}{a+y^c} =\frac{y}{a}\, _2F_1\left(1,\frac{1}{c};1+\frac{1}{c};-\frac{y^c}{a}\right) \end{equation} where $_2F_1$ is the Gaussian hypergeometric function.
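A quick check with mpmath that differentiating the claimed antiderivative recovers the integrand (a minimal sketch, not from the original answer; $a=2$, $c=1.5$, $y=0.7$ are arbitrary test values inside the series' convergence region):

from mpmath import mp, hyp2f1, diff

mp.dps = 25
a, c = 2.0, 1.5
F = lambda y: (y / a) * hyp2f1(1, 1/c, 1 + 1/c, -y**c / a)
print(diff(F, 0.7), 1 / (a + 0.7**c))   # the two printed values agree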
H: How to simplify the following matrix problem? Taken from the lecture notes: Introduction to the principles and methods of data assimilation in the geosciences - Marc Bocquet, where I am currently working on page 12. Given that: $$\mathrm{P^a=(I-KH)B+[KR-(I-KH)BH^T]K^T} \tag{1.27}$$ the expression in brackets in the RHS is zero when $\mathrm{K=K^*}$ resulting in $$\mathrm{P^a=(I-K^*H)B} \tag{1.26}$$ where $$\mathrm{K^*=BH^T(R+HBH^T)^{-1}}. \tag{1.23}$$ So we want to simplify Eq$(1.27)$ into Eq$(1.26)$ given Eq$(1.23)$. Working with just the RHS terms within the brackets, we want to show $$\mathrm{[KR-(I-KH)BH^T]=0}.$$ Substitute for $\mathrm{K^*}$, $$\mathrm{[BH^T(R+HBH^T)^{-1}R-(I-BH^T(R+HBH^T)^{-1}H)BH^T]}$$ Then, factoring out $\mathrm{BH^T}$, $$\mathrm{[(R+HBH^T)^{-1}R-I+(R+HBH^T)^{-1}HBH^T]}$$ From this point onward, how should I proceed? I considered using the Woodbury matrix identity, $$\mathrm{(R+HBH^T)^{-1} = R^{-1} - R^{-1}H (B^{-1}+H^TR^{-1}H)^{-1} H^TR^{-1}}.$$ However, that seems to end up in an "endless expansion": $$\mathrm{[I- R^{-1}H (B^{-1}+H^TR^{-1}H)^{-1} H^T-I+R^{-1} - R^{-1}H (B^{-1}+H^TR^{-1}H)^{-1} H^TR^{-1}HBH^T]}$$ $$\mathrm{[R^{-1}H (B^{-1}+H^TR^{-1}H)^{-1} H^T+R^{-1} - R^{-1}H (B^{-1}+H^TR^{-1}H)^{-1} H^TR^{-1}HBH^T]}$$ Furthermore, if I have understood the lecture notes correctly, it doesn't imply the usage of the other form of $K^*$ obtained via the Sherman-Morrison-Woodbury form as it indicated on the end of page 11, "Choosing the optimal gain Eq.(1.23)..." AI: You are so close. From: $\mathrm{[(R+HBH^T)^{-1}R-I+(R+HBH^T)^{-1}HBH^T]}$ Note that $I = (R+HBH^T)^{-1} (R+HBH^T)$ So factor out: $(R+HBH^T)^{-1}$ $(R+HBH^T)^{-1}{[R-(R+HBH^T) +HBH^T]} = 0$
H: Finding the basis of a product topology Let $X = \{1, 2, 3\}$, $T = \{\varnothing, \{1\}, \{1, 2\}, X\}$, $Y = \{4, 5\}$, and $U = \{\varnothing, \{4\}, Y\}$. How do I find the basis $B$ for the product topology on $X \times Y$? Definition: A topological space $(X, T)$ is a Hausdorff space provided that if $x$ and $y$ are distinct members of $X$ then there exist disjoint open sets $U$ and $V$ such that $x \in U$ and $y \in V$. With this definition, how do I solve this question? AI: Let's recall the definition of the product topology. The product topology of $(X,\tau_1)$ and $(Y,\tau_2)$ is the topology $\tau_3$ defined on $X\times Y$ that by definition has as a basis the collection $\{U\times V: U\in\tau_1, V\in\tau_2\}$. So the basis here is $$\bigg{\{}\emptyset, \{(1,4)\}, \{(1,4),(1,5)\}, \{(1,4),(2,4)\}, \{(1,4),(2,4),(1,5),(2,5)\}, \{(1,4),(2,4),(3,4)\}, X\times Y\bigg{\}}$$
H: Random vector in a circle of unity radius The random variable $(X,Y)$ is uniformly distributed in the circle $x^2+y^2\leq 1$. Find the joint density $f_{XY}(x,y)$ and the marginal density of $X$ and $Y$. $$ \begin{split} f_{XY}(x,y) &= \frac{1}{\pi} \\ f_X(x) &= \frac{2}{\pi}\sqrt{1-x^2} \\ f_Y(y) &= \frac{2}{\pi}\sqrt{1-y^2} \end{split} $$ Say if $X \perp Y$ or not. $\rightarrow f_X(x)f_Y(y)\neq f_{XY}(x,y)\Rightarrow X,Y$ are not independent Find the conditional density of $X$ given $Y=y$. $$f_{X|Y}(x|y) := \frac{f_{XY}(x,y)}{f_Y(y)} = \frac{1}{2\sqrt{1-y^2}}$$ Calculate $\mathbb{P}(Y<X)$. $$ \mathbb{P}(Y<X) = \int_{-\sqrt{1-x^2}}^{\sqrt{1-x^2}} \left[\int_{-\sqrt{1-y^2}}^{x}\frac{1}{\pi}dy\right]dx $$ Is it correct? In particular, I have a doubt as to the correctness of integration extremes at the fourth point… Should I use, for example, $x\in (-1,1)$ or not? Thanks in advance. AI: The last one is wrong: the inner integral has the integration variable $y$ in its own lower limit, and the limits of the outer $dx$ integral depend on $x$ itself, so the expression is ill-formed. By inspection it is obvious you are asking about the proportion of the unit circle that lies below the line $y=x$, which by symmetry is $1/2$. Since the problem is symmetric in $X,Y$ that conclusion is anyhow warranted even before the geometric argument. If you insist on doing it the hard way, note that the line $y=x$ meets the circle at $x=\pm\frac{1}{\sqrt 2}$, so $$ \mathbb{P}(Y<X) = \frac{1}{\pi}\left[\int_{-1/\sqrt 2}^{1/\sqrt 2}\left(x+\sqrt{1-x^2}\right)dx+\int_{1/\sqrt 2}^{1}2\sqrt{1-x^2}\,dx\right]=\frac{1}{2}. $$
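A Monte Carlo sanity check (a minimal NumPy sketch, not part of the original answer; rejection sampling gives uniform points in the disc):

import numpy as np

rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(10**6, 2))
inside = pts[(pts**2).sum(axis=1) <= 1]          # uniform points in the disc
print((inside[:, 1] < inside[:, 0]).mean())      # approximately 0.5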
H: Is there any easy way to calculate the value of this determinant? \begin{vmatrix} 0 & 3 & 1 & 2 & 10! & e^{-7}\\ 1 & 2 & -1 & 2 & \sqrt{2} & 2 \\ -1 & -2 & 3 & -3 & 1 & -\frac{1}{5} \\ -2 & -1 & 3 & 2 & -2 & -9 \\ 0 & 0 & 0 & 0 & 4 & 2 \\ 0 & 0 & 0 & 0 & 1 & 1 \\ \end{vmatrix} I still can't see any easy way to compute the determinant above. I would appreciate any kind of help. Thanks. AI: Remember that you can add a multiple of a row / column to another row / column, and it does not change the determinant. Then use the fact that some rows are similar. For instance, do the following operations in order: $R_5 \leftarrow R_5 - 4 R_6$ $R_3 \leftarrow R_3 + R_2$ $R_4 \leftarrow R_4 + 2 R_2$ $R_4 \leftarrow R_4 - R_1$ Then flip $R_6$ and $R_5$, and $R_1$ and $R_2$ (each swap multiplies the determinant by $-1$, so the two swaps together leave it unchanged), and you get a triangular matrix with diagonal $(1, 3, 2, 4, 1, -2)$, so you have a determinant of $-48$.
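A numerical cross-check (a minimal NumPy sketch, not from the original answer; as the argument shows, the exotic entries in the last two columns of the first row do not affect the result):

import numpy as np
from math import factorial, exp

M = np.array([
    [ 0,  3,  1,  2, factorial(10), exp(-7)],
    [ 1,  2, -1,  2, 2**0.5,        2      ],
    [-1, -2,  3, -3, 1,            -0.2    ],
    [-2, -1,  3,  2, -2,           -9      ],
    [ 0,  0,  0,  0, 4,             2      ],
    [ 0,  0,  0,  0, 1,             1      ],
])
print(np.linalg.det(M))   # -48.0 (up to floating-point error)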
H: How to find 3D point of a triangle in a 3D space I have a triangle in $3D$ space, with $2$ points defined (let's call them $A(x_1, y_1, z_1)$ and $B(x_2, y_2, z_2)$) and distances to the $3^{rd}$ point known (let's call it $C(x_3, y_3, z_3)$) as well as the $z_3$ known. I need to make a universal formula to find $C$, given coordinates of $A$ and $B$ and the distances $AC$ and $BC$. I can calculate the coordinates for specific points, but I can't wrap my head around creating a universal formula for it. AI: Move all points by $-(x_1,y_1,z_1)$ (so that $A$ is at the origin; below, $x_2,y_2,z_2,z_3$ denote the translated coordinates) and solve the system $$\begin{cases}x^2+y^2=d_{13}^2-z_3^2 \\(x-x_2)^2+(y-y_2)^2=d_{23}^2-(z_3-z_2)^2.\end{cases}$$ By subtraction, the second equation becomes linear in $x,y$: $$-2x_2x-2y_2y+x_2^2+y_2^2=d_{23}^2-(z_3-z_2)^2-d_{13}^2+z_3^2.$$ Now express $y$ in terms of $x$, plug in the first equation and solve the quadratic equation in $x$. You can get a more elegant solution as follows: consider the intersection of the spheres centered at $A$ and $B$ by the plane $z=z_3$ and you get a planar problem: intersect two circles of known centers and radii; translate the first center to the origin, then rotate so that the second center comes on $x$; now the equations are $$\begin{cases}x^2+y^2=r_1^2,\\(x-d)^2+y^2=r_2^2.\end{cases}$$ By subtraction, you get a linear equation in $x$. From $x$ you get two $y$. Apply the inverse rotation and translation. $$x=\frac{d^2-r_2^2+r_1^2}{2d},y=\pm\sqrt{r_1^2-x^2}.$$
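A minimal Python sketch of the "more elegant" route (all names are mine, not from the answer; it assumes NumPy is available): intersect the two circles in the plane $z=z_3$, then map back to the original frame:

import numpy as np

def third_point(A, B, dAC, dBC, z3):
    # intersect the spheres around A and B with the plane z = z3,
    # then intersect the two resulting circles in that plane
    A, B = np.asarray(A, float), np.asarray(B, float)
    c1, r1 = A[:2], np.sqrt(dAC**2 - (z3 - A[2])**2)
    c2, r2 = B[:2], np.sqrt(dBC**2 - (z3 - B[2])**2)
    d = np.linalg.norm(c2 - c1)
    x = (d**2 - r2**2 + r1**2) / (2 * d)   # offset along the c1 -> c2 axis
    y = np.sqrt(r1**2 - x**2)              # perpendicular offset (two solutions)
    ex = (c2 - c1) / d                     # unit vectors of the rotated frame
    ey = np.array([-ex[1], ex[0]])
    return [np.append(c1 + x*ex + s*y*ey, z3) for s in (1, -1)]

for C in third_point((0, 0, 0), (4, 0, 0), 3.0, 3.0, 1.0):
    print(C)   # (2, +/-2, 1), both at distance 3 from A and from B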
H: Why does a local analytic isomorphism imply a mapping onto a disk? In one of the answers to my questions on StackExchange (Open Mapping Theorem Serge Lang Proof) the person answering the question states that "The map u:z↦(z−a)g1(z) is local analytic isomorphism by Theorem 6.1(c) above, so we can take an open neighborhood V⊂U of a that u maps it isomorphically to an open disc D centered at 0." It is clear to me why it is a local analytic isomorphism; however, it is not clear to me why it being one implies that it maps z to an open disc D and why it is centered at zero. If someone could clarify this for me I would greatly appreciate it. AI: If $f:D_1\to D_2$ is an analytic isomorphism between the domains $D_1$ and $D_2$, then you can take any open disc $V\subseteq D_2$ and define $U:=f^{-1}(V)$. Then $f\vert_U$ is an analytic isomorphism between $U$ and $V$. If $f(a)\in V$, then $a\in U$, so $U$ is an open neighborhood of $a$ which is mapped to the open disc $V$ via an analytic isomorphism. Now we just have to center it around $0$ (remember that we can choose the disc to our liking as long as it is contained in $D_2$). If $f(a)=0\in D_2$, we can do this.
H: Suppose every element of $\mathcal F$ is a subset of every element of $\mathcal G$. Prove that $\bigcup \mathcal F\subseteq \bigcap\mathcal G$. Not a duplicate of Prove that if F and G are nonempty families of sets, then $\bigcup \mathcal F \subseteq \bigcap \mathcal G$ Validity of this proof: Prove that $\cup \mathcal{F} \subseteq \cap \mathcal{G}$ Proof that $\bigcup\mathscr F\subseteq\bigcap\mathscr G$, when every element of $\mathscr F$ a subset is of every element of $\mathscr G$ This is exercise $3.3.17$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$: Suppose $\mathcal F$ and $\mathcal G$ are nonempty families of sets, and every element of $\mathcal F$ is a subset of every element of $\mathcal G$. Prove that $\bigcup \mathcal F\subseteq \bigcap\mathcal G$. Here is my proof: Suppose $x$ is an arbitrary element of $\bigcup\mathcal F$. This means that we can choose some $A_0$ such that $A_0\in \mathcal F$ and $x\in A_0$. Let $B$ be an arbitrary element of $\mathcal G$. Since $\forall A\in\mathcal F\forall B\in\mathcal G(A\subseteq B)$, $A_0\subseteq B$. From $A_0\subseteq B$ and $x\in A_0$, $x\in B$. Thus if $B\in \mathcal G$ then $x\in B$. Since $B$ was arbitrary, $\forall B\Bigl(B\in\mathcal G\rightarrow x\in B\Bigr)$ and so $x\in\bigcap \mathcal G$. Therefore if $x\in \bigcup\mathcal F$ then $x\in\bigcap\mathcal G$. Since $x$ was arbitrary, $\forall x\Bigl(x\in \bigcup\mathcal F\rightarrow x\in\bigcap\mathcal G\Bigr)$ and so $\bigcup\mathcal F\subseteq\bigcap\mathcal G$. $Q.E.D.$ Is my proof valid$?$ Thanks for your attention. AI: Your proof is correct! Also, I like how you justify everything. You can also check this alternative approach: Proof. Take $A \in \mathcal F$. Then, as $A \subseteq B$ for all $B \in \mathcal G$, $$A \subseteq \bigcap_{B \in \mathcal G} B = \bigcap \mathcal G.$$ As $A$ was arbitrary, it follows that $$\bigcup \mathcal F = \bigcup_{A \in \mathcal F} A \subseteq \bigcap \mathcal G. \quad \blacksquare$$ In the first part we used that the intersection of a non-empty family of sets is the biggest element that is contained in each element of the family, and in the second part we used that the union of a family of sets is the smallest element that contains each member of the family.
H: Maximize $\boxed{\mathbf{x}+\mathbf{y}}$ subject to the condition that $2 x^{2}+3 y^{2} \leq 1$ Maximize $\mathbf{x}+\mathbf{y}$ subject to the condition that $2 x^{2}+3 y^{2} \leq 1$ My approach $\frac{x^{2}}{1 / 2}+\frac{y^{2}}{1 / 3} \leq 1$ Let $z=x+y$ $\mathrm{Now}, 4 \mathrm{x}+6 \mathrm{y} \frac{d y}{d x}=0 \Rightarrow \frac{d y}{d x}=-\frac{2 x}{3 y}$ $2 x^{2}+3 y^{2}=1$ What to do next? Any suggestion or hint would be greatly appreciated! AI: Use the Cauchy–Schwarz inequality: $$(x+y)^2\le \left(\left(\frac{1}{\sqrt{2}}\right)^2+\left(\frac{1}{\sqrt{3}}\right)^2\right)\left( (\sqrt{2}x)^2+(\sqrt{3}y)^2\right)\le \frac56,$$ so $x+y\le\sqrt{5/6}$, with equality in the Cauchy–Schwarz step exactly when $2x=3y$; together with $2x^2+3y^2=1$ this shows the maximum $\sqrt{5/6}$ is attained.
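A quick numerical check of this value (the maximum of a linear function over the ellipse is attained on the boundary):

```python
import numpy as np

# Parametrize the boundary ellipse 2x^2 + 3y^2 = 1 and compare the
# largest value of x + y against sqrt(5/6).
t = np.linspace(0, 2*np.pi, 10**6)
x, y = np.cos(t)/np.sqrt(2), np.sin(t)/np.sqrt(3)
print((x + y).max(), np.sqrt(5/6))  # both ~ 0.912870929
```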
H: Is this conclusion on orders of magnitude correct? Let $f(n_1,n_2) = \mathcal{O}\left(\frac{n_1n_2^2}{(n_1-n_2)^3}\right)$, where $n_1$ and $n_2$ are natural numbers. If $n_1\propto n_2$, that is, if the two variables grow proportionally, is it true that $f(n_1,n_2) =\mathcal{O}(1)$? AI: It depends on the constant of proportionality. If the ratio $n_1/n_2$ stays bounded away from $1$, yes. If the ratio is merely allowed to approach $1$, $f$ can tend to infinity. For instance, if $n_1(k) = k$ and $n_2(k) = k-1$, it is easy to see that $\frac{n_1n_2^2}{(n_1-n_2)^3} \simeq k^3$ as $k$ tends to infinity. However, if, say, $n_1(k) \geq (1+\delta) n_2(k)$, where $\delta >0$, this cannot happen.
H: Reference request: SLLN for correlated/exchangeable random variables Is there any strong law of large numbers for correlated but identically distributed random variables or exchangeable random variables? A further question: what about random variables taking values in arbitrary Banach spaces? AI: The SLLN when the variables are not IID is often referred to as the Ergodic theorem; see Chapter 6 of Breiman's (1968) Probability text. Have a look at Linear Processes in Function Spaces by Bosq for SLLNs and ergodic theorems in Banach space.
H: volume of solid obtained by rotating the graph about $x=2$ line Find the volume of the solid generated by revolving the region bounded by the graphs of $y=2x^2$, $y=0$ and $x=2$ about the line $x=2$. What I try: Put $x=2$ in $y=2x^2$; we get $y=8$. So the volume of the solid is $$\pi\int^{8}_{0}r^2\,dy$$ I did not understand how I can find the radius $r$ in terms of $y$. Help me please. Thanks AI: HINT...set $r=2-x$ and change $dy$ into $\frac{dy}{dx}dx$. Then you use $x$ values for the limits.
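Carrying the hint through gives $\pi\int_0^2(2-x)^2\,4x\,dx=\frac{16\pi}{3}$; a symbolic cross-check against the independent shell-method set-up (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x')

# Washers in y, following the hint: r = 2 - x, dy = 4x dx
V_washer = sp.pi * sp.integrate((2 - x)**2 * 4*x, (x, 0, 2))
# Cross-check with cylindrical shells about x = 2
V_shell = 2*sp.pi * sp.integrate((2 - x) * 2*x**2, (x, 0, 2))
print(V_washer, V_shell)  # 16*pi/3  16*pi/3
```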
H: $f:\mathbb {R}^2 \to \mathbb {R}$, $f(x,y) = \lvert x \rvert y$. Does $df(1,0)$ exist? Let $f:\mathbb {R}^2 \to \mathbb {R}$, $f(x,y) = \lvert x \rvert y$ I know that $\frac {\partial }{\partial y} f$ exists in $\mathbb {R}^2$ and that $\frac {\partial }{\partial x} f$ doesn't exist in $\mathbb{R}^2$. Does $df(1,0)$ exist? AI: $f_x$ exists at any $(x_0,0)$ type point: $$f_x(x_0,0)=\lim_{h \to 0}\frac{f(x_0+h,0)-f(x_0,0)}{h}=\lim_{h \to 0}\frac{|x_0+h|\cdot 0-|x_0| \cdot 0}{h}=0$$ As we have $f_x(1,0)=0, f_y(1,0)=1$, for differentiability at $(1,0)$ we need $$\frac{f(x,y)-f(1,0)-f_x(1,0)(x-1)-f_y(1,0)y}{\sqrt{(x-1)^2+y^2}}=\frac{|x|y-y}{\sqrt{(x-1)^2+y^2}} \to 0$$ as $(x,y) \to (1,0)$. This holds: near $x=1$ we have $|x|-1=x-1$, and $$\frac{|(x-1)y|}{\sqrt{(x-1)^2+y^2}}\le \sqrt{(x-1)^2+y^2} \to 0,$$ so $df(1,0)$ exists.
H: Find Remainder when $(x+1)^n$ divided by $x^2+1$ I put $(x+1)^n=p(x)(x^2+1)+bx+c$ for some $p(x)$, as in other exercises where we were asked to find the remainder when one polynomial is divided by another polynomial. But to make the $p(x)(x^2+1)$ term vanish so I could find $b,c$, I have to put $x=i$, which is something I thought I shouldn't do. Then an idea popped into my head that the remainder itself is $(x+1)^n$, but I realized that if I put $n=2$ then the remainder is $2x$. Any idea for how to solve this? AI: hint If we replace $ x $ by $ i $ and $ -i $, we get $$(1+i)^n=bi+c$$ $$(1-i)^n=-bi+c$$ thus $$c=\frac{(1+i)^n+(1-i)^n}{2}$$ $$b=\frac{(1+i)^n-(1-i)^n}{2i}$$ with $$1+i=\sqrt{2}e^{i\frac{\pi}{4}}$$ and $$1-i=\sqrt{2}e^{-i\frac{\pi}{4}}$$ this gives $$c=2^{\frac n2}\cos(n\frac{\pi}{4})$$ $$b=2^{\frac n2}\sin(n\frac{\pi}{4})$$
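A quick symbolic check of these formulas for small $n$ (a sketch using sympy):

```python
import sympy as sp

x = sp.symbols('x')
for n in range(1, 13):
    rem = sp.rem((x + 1)**n, x**2 + 1, x)    # polynomial remainder
    b = 2**sp.Rational(n, 2) * sp.sin(n*sp.pi/4)
    c = 2**sp.Rational(n, 2) * sp.cos(n*sp.pi/4)
    assert sp.expand(rem - (b*x + c)) == 0, n
print("b, c match the remainder for n = 1..12")
```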
H: Show that $\{v_1,v_2,\dots,v_n\}$ is a basis of a vector space iff a chain of subspaces is complete. Let $V$ be a vector space over a field $F$. A chain $\{0\}=V_0\subseteq V_1\subseteq\dots\subseteq V_{n-1}\subseteq V_n=V$ of subspaces $V_1,V_2,\dots,V_{n-1}$ of $V$ is said to be complete if there is no subspace $W$ of $V$ such that $V_i\subsetneq W\subsetneq V_{i+1}$ for any $i=0,1,\dots,n-1$. Problem Let $\{0\}=V_0\subseteq V_1\subseteq\dots\subseteq V_{n-1}\subseteq V_n=V$ be a chain of subspaces $V_1,V_2,\dots,V_{n-1}$ of a vector space $V$ over a field $F$. Let $v_1,v_2,\dots,v_n\in V$ such that $v_i\in V_i\setminus V_{i-1}$ for $i=1,2,\dots,n$. Show that $\{v_1,v_2,\dots,v_n\}$ forms a basis for $V$ if and only if the chain is complete. I came across this very interesting (and new to me) problem above. Irrespective of completeness of the chain I could prove that $\{v_1,v_2,\dots,v_n\}$ is linearly independent. But proving that it spans $V$ requires the completeness of the chain, which is where I am stuck. Can anyone please help me with this problem? Thank you. AI: I am expanding on my original answer to help clarify the confusion in the discussion of the problem. Assumptions. We have an arbitrary vector space $V$ (of unspecified dimension) over a field $F$ and a chain $\{0\}=V_{0}\subseteq V_{1}\subseteq \ldots \subseteq V_{n-1} \subseteq V_{n} = V$ of subspaces. We then pick vectors $v_{1}, \ldots, v_{n}$ such that $v_{i}\in V_{i}\setminus V_{i-1}$. Note in particular that each $v_{i}$ is nonzero since $v_i$ is not in $V_{i-1}$, which contains $0$. Remark 1. $v_{1}, \ldots, v_{n}$ are linearly independent. Proof. Suppose $\lambda_{1}v_{1}+ \ldots +\lambda_{n} v_n = 0$. If some $\lambda_{i}$ is nonzero then we may choose the largest such $i$. But then $v_i$ is a linear combination of $v_{1}, \ldots , v_{i-1}$, hence lies in $V_{i-1}$, contradicting our assumption that $v_{i}$ is not in $V_{i-1}$. Main Claim. $\{v_{1}, \ldots, v_{n}\}$ is a basis iff the chain of subspaces is complete. Proof. $\Leftarrow$: Assume the chain is complete. By Remark 1 we only need to prove that $\{v_{1}, \ldots, v_{n}\}$ spans $V$. Note that for any $i$, $V_{i}$ has dimension $1$ over $V_{i-1}$. Indeed, if not, then we can pick two vectors $v$ and $w$ in $V_{i}$ linearly independent over $V_{i-1}$. But then the space $W$ spanned by $V_{i-1}$ and $v$ is strictly between $V_{i-1}$ and $V_{i}$: it contains $V_{i-1}$ properly but omits $w$. So for any $i$, we have that $V_{i}$ is spanned by $V_{i-1}\cup\{v_{i}\}$, since $\{v_i\}$ must be a basis for $V_{i}$ over $V_{i-1}$ by the above conclusion. Then $V_{n}=V$ is spanned by $\{v_{1},\ldots v_{n}\}$ by induction. Another way to see this is to use the general formula $$ dim(V) = dim(V_{n}/V_{n-1}) + dim(V_{n-1}/V_{n-2}) + \ldots + dim(V_{1}/V_{0}) $$ We have shown that each summand on the right is $1$. So $dim(V) = n$ hence $\{v_{1}, \ldots , v_{n}\}$ is a basis. $\Rightarrow$: Assume $\{v_{1}, \ldots, v_{n}\}$ is a basis. Then $dim(V)=n$ by Remark 1. Consider the same formula for $dim(V)$ in terms of the dimensions of the quotient spaces as above. The existence of the $v_{i}$'s ensures each $dim(V_{i}/V_{i-1})$ is at least $1$. So each of these is exactly $1$ since $dim(V)=n$. This forces the chain to be complete, since if $W$ is strictly between $V_{i-1}$ and $V_{i}$ then we would have $$ dim(V_{i}/V_{i-1}) = dim(V_{i}/W) + dim(W/V_{i-1})\geq 2 $$
H: The projective space $\Bbb RP^{2n}$ cannot be the total space of a nontrivial covering map Why is the projective space $\Bbb RP^{2n}$ not the total space of a nontrivial covering map? I've heard this in class but I can't see why it holds. AI: You've already worked out the answer in the comments, but just so that there is an official answer I will write it down. Recall that the Euler characteristic is multiplicative for covering spaces; specifically, if $E \to B$ is a covering space with $k$ sheets where $B$ is of finite type (by which I mean homology is finitely-generated in all degrees and non-zero in finitely many degrees, so that the Euler characteristic is defined) then $\chi(E) = k\cdot \chi(B)$. Note that $\chi(\mathbb{RP}^{2n}) = 1$. If $\mathbb{RP}^{2n} \to B$ is a covering space then by compactness it has finite fibres (say of size $k$) and $B$ is compact and hence of finite type, therefore $$ 1 = k\cdot \chi(B).$$ The only possibility is that $k = 1$ and it is a trivial covering. It's worth noting that $\mathbb{RP}^{2n+1}$ is the total space of non-trivial coverings, as it covers Lens spaces. See for example Covering $\Bbb RP^\text{odd}\longrightarrow X$, what can be said about $X$?
H: show that $e^{f(x)}$ is integrable if $f(x)$ is integrable Let $f(x)$ be integrable on the interval $[a,b]$. I need to prove that $e^{f(x)}$ is also integrable on $[a,b]$. My attempt is to argue that if $f(x)$ is integrable then $F=\int{f(x)}$ is continuous, and then I thought about a way to relate $e^F$ to $e^f$, but I guess this is the wrong method because I do not see any way to do that. AI: hint Let $$g=e^f$$ Since $f$ is integrable on $[a,b]$, it is bounded $(|f|\le M)$. The function $x\mapsto e^x$ satisfies the MVT conditions, so $$\forall (x,y)\in [a,b]^2$$ $$|g(x)-g(y)|=|e^{f(x)}-e^{f(y)}|$$ $$=|f(x)-f(y)|e^c\le e^M|f(x)-f(y)|$$ for some $c$ between $f(x)$ and $f(y)$. From here, you can prove that $$U(g,P)-L(g,P)\le e^M(U(f,P)-L(f,P))$$ and conclude using the Cauchy criterion. In detail, let $P=(x_i)_{i=0,\dots,n}$ be a partition of $[a,b]$. Then for $i=0,1,...,n-1$, set $$I_i=[x_i,x_{i+1}]$$ $$m_i=\inf \{f(x), x\in I_i\}$$ $$M_i=\sup\{f(x),x\in I_i\}$$ so that $$\forall (x,y)\in I_i \quad |f(x)-f(y)|\le M_i-m_i$$ and $$g(x)\le g(y)+e^M(M_i-m_i)$$ thus $$\sup \{g(x), x\in I_i\}\le g(y)+e^M(M_i-m_i)$$ and $$\inf \{g(y),y\in I_i\}\ge \sup\{g(x),x\in I_i\}-e^M(M_i-m_i)$$
H: Prove that if $A^*+A=AA^*$ then $A$ is normal Prove that if $A^*+A=AA^*$ then $A$ is normal. I've tried some basic algebraic operations with no luck. AI: \begin{align*} &A^*+A=AA^*\\ \implies &(A-I)(A^*-I)=I\\ \implies &A-I=(A^*-I)^{-1}\\ \implies &(A^*-I)(A-I)=I\\ \implies &A^*A=A+A^*=AA^*\tag*{$\blacksquare$} \end{align*}
H: exp(A+B) = exp(A)exp(B) for matrices proof In this thread, On the proof: $\exp(A)\exp(B)=\exp(A+B)$ , where uses the hypothesis $AB=BA$?, it was mentioned that absolute convergence is required for swapping sums. What theorem is used precisely? AI: I think actually absolute convergence is only mentioned in the original question that is linked in the question you linked: https://math.stackexchange.com/a/356763/688699 Anyway if you're talking about rearranging an infinite sum then this is just the Riemann Series Theorem. In particular (quoting from Wikipedia) let $X$ be a topological vector space. For example, this could be an additive matrix group, which we can see as $\mathbb{R}^{n}$ for some $n$. Then a series $\sum_{n=0}^{\infty} x_{n}$ is called unconditionally convergent if it converges to some point $x\in X$ and any rearrangement of the order of summation produces a series converging to $x$ also. Then the Riemann Series Theorem says that, for $X=\mathbb{R}^{n}$, a series is unconditionally convergent if and only if it is absolutely convergent. For more details see: https://en.wikipedia.org/wiki/Unconditional_convergence
H: Stochastic order I need help with the following exercise: $X$ and $Y$ are random variables in $\mathbb{R}$ with the distribution functions $F_X$ and $F_Y$, so $X$ is stochastically smaller or equal to $Y$, i.e. $(X\leq_{st}Y)$ if $F_X(t)\geq F_Y(t)$ holds for all $t\in\mathbb{R}$. How can I show the following statement? $X\leq_{st}Y$ implies $E[f(X)]\leq E[f(Y)]$ for every monotone increasing function $f:\mathbb{R}\rightarrow \mathbb{R}$, so that the given expected values exist. AI: Let $\Phi_{X}$ and $\Phi_{Y}$ be functions $\left(0,1\right)\to\mathbb{R}$ prescribed by $u\mapsto\inf\left\{ z\in\mathbb{R}\mid F_{X}\left(z\right)\geq u\right\} $ and $u\mapsto\inf\left\{ z\in\mathbb{R}\mid F_{Y}\left(z\right)\geq u\right\} $ respectively. These are the so-called "inverses" of $F_X$ and $F_Y$, and it is well known that $\Phi_{X}\left(U\right)\stackrel{d}{=}X$ and $\Phi_{Y}\left(U\right)\stackrel{d}{=}Y$ if $U$ has uniform distribution on $(0,1)$. You may want to have a look at this answer about that fact. Further, $X\leq_{st}Y$ implies that $\Phi_{X}\left(u\right)\leq\Phi_{Y}\left(u\right)$ for every $u\in\left(0,1\right)$ and consequently: $$f\left(\Phi_{X}\left(u\right)\right)\leq f\left(\Phi_{Y}\left(u\right)\right)\text{ for every }u\in\left(0,1\right)$$ Then we find: $$\mathbb{E}f\left(\Phi_{X}\left(U\right)\right)=\int_{0}^{1}f\left(\Phi_{X}\left(u\right)\right)du\leq\int_{0}^{1}f\left(\Phi_{Y}\left(u\right)\right)du=\mathbb{E}f\left(\Phi_{Y}\left(U\right)\right)$$ Combined with $\mathbb Ef(X)=\mathbb{E}f(\Phi_{X}(U))$ and $\mathbb Ef(Y)=\mathbb{E}f(\Phi_{Y}(U))$, we are done; all of this under the assumption that the expectations exist, of course.
H: find the solutions of $y^{\prime \prime}-4 y^{\prime}+3 y=8 e^{-x}+9$ s.t $\lim _{x \rightarrow \infty} e^{-x} y(x)=7$ I have the ODE $$y^{\prime \prime}-4 y^{\prime}+3 y=8 e^{-x}+9$$ I am asked to find a solution such that: $$\lim _{x \rightarrow \infty} e^{-x} y(x)=7$$ This question feels a little bit tricky; how should I approach such a question? Should I first solve it like a regular ODE, without the boundary condition? AI: We can solve the homogeneous case using the characteristic equation method and find $$y_H(x)=c_1 e^{x} + c_2 e^{3x}$$ Now we guess a particular solution. This can be done more systematically using variation of parameters, but in this case guessing is rather easy. We guess $$y_P(x)=Ae^{-x}+B$$ Then $${y_P}^{\prime}(x)=-A e^{-x} \text{ and } {y_P}^{\prime \prime}(x)=A e^{-x}$$ Substituting into our ODE, $$A e^{-x} +4Ae^{-x} + 3(Ae^{-x}+B)=8e^{-x}+9$$ So then $$(A+4A+3A)e^{-x}+3B=8e^{-x}+9$$ Clearly $A=1, B=3$. Then, $y(x)=y_H(x)+y_P(x)$, so our solution is $$y(x)= c_1 e^{x} + c_2 e^{3x} + e^{-x}+3$$ So, $$\lim_{x\to\infty}e^{-x}y(x)=\lim_{x\to\infty}(c_1+c_2e^{2x}+e^{-2x}+3e^{-x})$$ In order for the limit to be finite, $c_2=0$. Then, $$\lim_{x\to\infty}(c_1+e^{-2x}+3e^{-x})=c_1$$ Therefore $c_1=7$. Therefore our solution is $$y(x)=7e^x + e^{-x} +3.$$
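Both the general solution and the final answer are easy to verify symbolically (a sketch using sympy; dsolve's constant naming may differ):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = y(x).diff(x, 2) - 4*y(x).diff(x) + 3*y(x) - 8*sp.exp(-x) - 9
print(sp.dsolve(ode, y(x)))  # y(x) = C1*exp(x) + C2*exp(3*x) + exp(-x) + 3

Y = 7*sp.exp(x) + sp.exp(-x) + 3                # the solution found above
print(sp.simplify(ode.subs(y(x), Y).doit()))    # 0: it solves the ODE
print(sp.limit(sp.exp(-x)*Y, x, sp.oo))         # 7: it meets the condition
```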
H: A curious property of exponential sums for rational polynomials? An article led me to generate some graphs of exponential sums of the form $S(N)=\sum_{n=0}^Ne^{2\pi i f(n)}$, where $f(n)= {n\over a}+{n^2\over b}+{n^3\over c}$ with $a,b,c\in\mathbb{N}_{>0},\,$ leaving me amazed at their great variety. Here are some samples (figure not reproduced here): The top four rows show graphs that appear to be cycles (some highly asymmetrical), whereas the bottom two rows show graphs that appear to be truncations of what would continue without bound in a fixed direction. All choices of $a,b,c\in\mathbb{N}_{>0},\,$ appear to give one of these two kinds of behavior! Naturally, I wondered what structures occur in the more general case where $f$ is an arbitrary rational polynomial. Upon viewing the graphs for hundreds of rational polynomials of various degrees with "random" coefficients, I'm struck by the seeming fact that they all have the following (surprising?) property: The "walk" in the complex plane first visits a finite set of (say, $p$) points, then visits the same set of points except displaced by a constant amount $c\in\mathbb{C}$, then visits the same set displaced by $2c$, then by $3c$, etc. (If $c=0$, the walk is a cycle with period $p$, else it consists of a continually displaced copy of the initial set of $p$ points.) In other words, given $f$, there exists $p\in\mathbb{N}_{>0}$ such that $S(n+p) - S(n) = S(p)-S(0)\ \ (= \text{constant $c$})$ for all $n\in\mathbb{N}.$ But why should it be that (apparently) every rational polynomial produces this behavior? This seems preposterous, but I haven't found a single counterexample. Letting $e_n:= \exp(2\pi i f(n)),$ we can restate the conjecture as follows: Conjecture: If $f:\mathbb{N}\to\mathbb{Q}$ is a polynomial with rational coefficients, then there exists a least integer $p>0$ such that $$e_n+e_{n+1}+\cdots+e_{n+p-1}=\text{constant ($c$, say)}\ \ \text{for all $n\in\mathbb{N};$}$$ that is, every block of $p$ consecutive terms in the sequence $\left(e_n\right )_{n\in\mathbb{N}}$ has the same sum! Question 1: Is this a well-known fact? In any case, how to prove it? (Or counterexample, if it turns out to not be true?) Question 2: Is it possible to determine, as a function of the polynomial coefficients, the number $p$ and constant $c$ (or at least whether the graph is cyclic)? (I mean, of course, some way simpler than computing the graph!) EDIT: As a general rule, it appears that $p$ always divides the product of the coefficient denominators (and is very often equal to that product), but I have no idea how to prove it. As far as the constant "block sum" $c$ is concerned, searching has turned up the quadratic Gauss sum formula, which implies that if $f(n)={a\over b}n^2$ with $b$ an odd prime and $a$ an integer, then $$c = \left(\frac{a}{b}\right)\cdot\begin{cases} \sqrt{b} & \text{if}\ b\equiv 1\pmod 4, \\ i\sqrt{b} & \text{if}\ b\equiv 3\pmod 4 \end{cases} $$ where the Legendre symbol is defined by $$\left(\frac{a}{b}\right) = \begin{cases} 1 & \text{if } a \text{ is a quadratic residue modulo } b \text{ and } a \not\equiv 0\pmod b, \\ -1 & \text{if } a \text{ is a non-quadratic residue modulo } b, \\ 0 & \text{if } a \equiv 0 \pmod b. \end{cases}$$ Are any similar formulas available when $f$ is more complicated than such a simple quadratic monomial? AI: Write $f(x)=g(x)/m$ where $g$ has integer coefficients. Then $$\exp(2\pi i f(n))=\zeta^{g(n)}$$ where $\zeta=\exp(2\pi i/m)$. Therefore $e_n$ repeats with period $m$ (for a polynomial $g$ with integer coefficients, $g(n+m)\equiv g(n)\pmod m$), and any block of $m$ consecutive $e_n$ has the same sum.
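Here $m$ is a common denominator of the coefficients, e.g. $m=\operatorname{lcm}(a,b,c)$ for the $f$ in the question (this divides the product of the denominators, consistent with the EDIT's observation). A quick numerical check of the constant block sums:

```python
import numpy as np

a, b, c = 3, 5, 7
m = int(np.lcm.reduce([a, b, c]))     # common denominator: 105
n = np.arange(0, 20*m, dtype=np.int64)
# reduce the integer numerator g(n) mod m exactly, to avoid float phase error
g = (n*(m//a) + n**2*(m//b) + n**3*(m//c)) % m
e = np.exp(2j*np.pi*g/m)
sums = np.array([e[k:k+m].sum() for k in range(len(e) - m)])
print(m, np.abs(sums - sums[0]).max())  # 105, ~1e-12: all block sums agree
```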
H: Similarity transformation and representation of matrices I'm trying to understand this passage of a book: Why this last expressions shows that the $i$th column of $\bar{A}$ is the representation of $Aq_i$ with respect to the basis $\{q_1,\ldots q_n\}$? I can't understand how the author reached this conclusion. AI: This approach may help. I believe if you explicitly extract out the $i$'th column of the left hand side and compare to $Q$ multiplying the $i$'th column of $\bar{A}$ on the right hand side, you will see the claim. For example, let $i=1$, then we have: $$ A\mathbf{q}_{1} = \begin{bmatrix}\mathbf{q}_{1} & \mathbf{q}_{2} & \cdots & \mathbf{q}_{n} \end{bmatrix} \begin{bmatrix} \bar{A}_{11} \\ \bar{A}_{21} \\ \vdots \\ \bar{A}_{n1} \\ \end{bmatrix} $$ $$ A\mathbf{q}_{1} = \bar{A}_{11} \mathbf{q}_{1} + \bar{A}_{21}\mathbf{q}_{2} + \ldots + \bar{A}_{n1}\mathbf{q}_{n} $$ I hope this helps.
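A small numerical illustration of the claim (a sketch; any invertible $Q$ will do):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
Q = rng.standard_normal((4, 4))       # columns q_1, ..., q_4 form a basis
Abar = np.linalg.inv(Q) @ A @ Q       # A in the basis {q_i}: A Q = Q Abar
for i in range(4):
    # A q_i equals Q times the i-th column of Abar, i.e. that column holds
    # the coordinates of A q_i with respect to {q_1, ..., q_4}
    assert np.allclose(A @ Q[:, i], Q @ Abar[:, i])
print("checked")
```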
H: Unknown asymptotic notation $(1 + O\Big(\frac{\log n}{n}\Big))\frac{2^n}{n}$ I am reading through Boolean Function Complexity by Stasys Jukna and I stumbled upon this notation for asymptotic bounds: $$C(f) \leq (1 + \alpha_n)\frac{2^n}{n} \;\text{where}\; \alpha_n = O\Big(\frac{\log n}{n}\Big)$$ What exactly does the equation on the right mean? I am quite uncertain how to interpret it, as I have never seen the asymptotic notation used like that before. AI: $\mathcal O\left(\dfrac{\log n}n\right)$ represents the entire set of functions that do not grow faster than $\dfrac{\log n}n$. If you replace $\alpha_n$ with any of these functions, then (according to the author) the inequality should hold. For example, one such function is $\dfrac 1n$ since $\dfrac 1n\in\mathcal O\left(\dfrac{\log n}n\right)$.
H: Banach fixed point theorem, prove singular solution I'm really having trouble understanding how to apply Banach's fixed-point theorem in this exercise. Let $b_i$ and $c_{ik}$ be real numbers with $1 \leq i,k \leq n$ such that the following equation holds $$ \sum_{i,k=1}^n c_{ik}^2 < 1 $$ Now I have to show the following nonlinear system has exactly one solution $(x_1^*, x_2^*, \dots, x_n^*)$ using the fixed-point theorem. $$ x_i = \sum_{k=1}^n \sin{(c_{ik} x_k)} + b_i, \qquad 1 \leq i \leq n $$ So let's define a function $$ f: \mathbb{R}^n \to \mathbb{R}^n, x = (x_1, x_2, \dots, x_n) \mapsto f(x) = \begin{pmatrix}f_1(x_1, x_2, \dots, x_n) \\ f_2(x_1, x_2, \dots, x_n) \\ \vdots \\ f_n(x_1, x_2, \dots, x_n) \end{pmatrix} $$ $(X, d) = (\mathbb{R}^n, |\;|)$ is complete and $f$ is a self-map on $X$. Now I only need to show the contraction property holds, i.e. $$ d(f(x), f(y)) \leq qd(x,y), \quad q \in (0,1) $$ I think I can use the usual euclidean norm here but I'm not even sure how to write this out. Right now I got this $$ \sum_{i=1}^n \left(f_i(x) - f_i(y)\right)^2 \leq q^2 \left(\sum_{i=1}^n \left(x_i-y_i\right)^2\right) $$ with $f_i(x) = \sum_{k=1}^n \sin{(c_{ik} x_k)} + b_i$ AI: First, note that $\sin$ is a non-expansive mapping, due to its derivative being bounded by $1$. It therefore follows that, using Cauchy-Schwarz inequality, \begin{align*} |f_i(\vec{x}) - f_i(\vec{y})| &= \left|\sum_{k=1}^n(\sin(c_{ik}x_k) - \sin(c_{ik}y_k))\right| \\ &\le \sum_{k=1}^n|c_{ik}|\cdot |x_k - y_k| \\ &\le \sqrt{\sum_{k=1}^n c_{ik}^2}\cdot \sqrt{\sum_{k=1}^n |x_k - y_k|^2} \\ &= \|\vec{x} - \vec{y}\| \cdot \sqrt{\sum_{k=1}^n c_{ik}^2}. \end{align*} Therefore, $$\|f(\vec{x}) - f(\vec{y})\|^2 = \sum_{i=1}^n (f_i(\vec{x}) - f_i(\vec{y}))^2 \le \|\vec{x} - \vec{y}\|^2 \cdot \underbrace{\sum_{i=1}^n\sum_{k=1}^n c^2_{ik}}_{< 1},$$ proving the map is a Banach contraction.
H: Multiplying out a quadratic equation of differential factors acting on a function I am having trouble understanding a multiplying out of a quadratic equation with differential factors, from the following video: $$ (D + A(x))(D + B(x))y(x) = 0\\ (D^2 + AD + AB + B^{'} + BD)y = 0 \\ $$ What I don't get is his remark that the last multiplication produces two terms. It looks like he is using the product rule for differentiation, but in the end, the multiplied-out parenthesis is again acting on $y$, so to me it looks like a double use. I see that $B^{'} = DB$ but I still do not get why two terms are produced? AI: $(D + A(x))(D+B(x))y(x)=0$ (where $D$ is the differentiation operator, $(DX)(x)=dX/dx$) $\implies (D + A(x))(Dy(x)+B(x)y(x))=0$. If you now expand you get $$D(D(y(x)))+D(B(x)y(x))+A(x)D(y(x))+A(x)B(x)y(x)=0$$ The term $D(B(x)y(x))$ can be written as $D(B(x)y(x))=D(B(x))\cdot y(x)+D(y(x))\cdot B(x)$ (by the product rule); this is exactly where the two terms come from. So if you plug that into the previous equation you get $$D(D(y(x)))+D(B(x))\cdot y(x)+B(x)D(y(x))+A(x)D(y(x))+A(x)B(x)y(x)=0$$ And now if you pull $y$ out you get $$(D^2+B'+B(x)D+A(x)D + A(x)B(x))y(x)=0$$
H: Function composition and inflection points Considering two functions in $\mathbb{R}$, $f$ and $g$, both having an inflection point at the same $x$-coordinate, does the function $h=f \circ g$ necessarily have an inflection point at that $x$-coordinate? AI: Say $f''(c)=g''(c)=0$. Compute the second derivative of $f\circ g$: $$((f\circ g)')' = ( f'(g)\cdot g')' = f'(g)\cdot g'' + (g')^2\cdot f''(g) $$ The first term vanishes on setting $x=c$, but the latter term $$(g'(c))^2\cdot f''(g(c))$$ is not necessarily equal to zero: note that $f''$ is evaluated at $g(c)$, which need not equal $c$. So no, $f\circ g$ need not have an inflection point there as well. For a concrete counterexample, take $c=0$, $f(x)=x^3$ and $g(x)=x^3+x+1$: both have an inflection point at $0$, yet $(f\circ g)''(0)=(g'(0))^2\, f''(g(0))=1\cdot f''(1)=6\neq 0$.
H: A convergence problem on bounded linear operators It's, I think, essentially a question about the relation between the pointwise limit and the limit with respect to the operator norm and uniqueness of limits for convergent sequences in a metric space. Define $\mathcal{B}\left(X,Y\right)$ to be a set of all bounded linear operators with the operator norm \begin{equation}\lVert T\rVert= \sup_{x\neq 0} \frac{\lVert Tx\rVert_Y}{\lVert x\rVert_X}. \end{equation} For each $n$, define $T_n:\mathbb{R}\to \mathbb{R}$ by $T_n\left(x\right)=\frac{x}{n}.$ Note that $T_n$ is a bounded linear operator for each $n$ (That is, $T_n\in \mathcal{B}\left(\mathbb{R}\right)$) since $T_n$ is continuous. Define $T\left(x\right)=\lim_{n\to \infty} T_n\left(x\right).$ Note that $T\in \mathcal{B}\left(\mathbb{R}\right)$. Also, $T_n$ converges to $0$, the zero operator since \begin{equation} \lim_{n\to \infty} \lVert T_n-0\rVert=\lim_{n\to \infty} \sup_{x\neq 0}\frac{\lvert \frac{x}{n}-0\rvert}{\lvert x\rvert}=\lim_{n\to \infty} \frac{1}{n}=0. \end{equation} However, we have $T\equiv 0$ by the uniqueness, which sounds strange to me because, particularly, $T_n\left(n\right)=\frac{n}{n}=1$ and hence $\lim_{n \to \infty}T_n\left(n\right)=1 \neq 0.$ I feel so confused about this and I must have something wrong. Could any of you help me with this? Any help is really appreciated. AI: Why does this confuse you? As you proved, $T_n\to 0$ in norm. Also, it is true that $T_n(x)\to 0$ for all $x\in\mathbb{R}$. What you are looking at, $T_n(n)=1$, is true but this is irrelevant. When we say $T_n(x)\to 0$ for all $x$, $x$ is fixed and $n$ tends to infinity. In the expression $T_n(n)$ the argument is not fixed, so it is not relevant to the rest of our observations. It might help to note that if we fix $m\in\mathbb{N}$, then $T_n(m)\to0$ as $n\to\infty$.
H: Problem for gradient and extremum points. I try to compute extremum points for this funciton: $$ f(x,y) = x^{4} - y^{4} - 4xy^{2}-2x^{2} $$ The first step compute gradient: $$ \nabla f(x,y) = [4 x^3-4 x-4 y^2, -8 x y-4 y^3] $$ Next step $$ \nabla f(x,y) = 0 $$ and its fail. It's not easy compute explicitly roots. Do you have any ideas? AI: Your system is$$\left\{\begin{array}{l}x^3-x-y^2=0\\2xy+y^3=0.\end{array}\right.$$If $y=0$, then the second equation becomes $0=0$, whereas the first one becomes $x^3-x=0$. So, the solutions for which $y=0$ are $(\pm1,0)$ and $(0,0)$. If $y\ne 0$, then the second equation is equivalent to $2x+y^2=0$, that is, $y^2=-2x$. But then the first equation becomes $x^3-x+2x=0$, or $x^3+x=0$. Can you take it from here?
H: Congruence equation with binomial coefficient Given a prime $p$ and some $k,t\in\Bbb{Z}^{+}$, when does the congruence equation $${x \choose k} \equiv t\pmod {p}$$ have an integer solution? Is there some necessary and sufficient condition about $p,k,t$? AI: By Lucas's theorem, $${x \choose k} \equiv \prod_j {x_j \choose k_j} \mod p$$ where the base-$p$ representations of $x$ and $k$ have digits $x_j$ and $k_j$ respectively, (with extra $0$'s as needed in $k_j$ when $k$ has fewer digits than $x$). If all $k_j = 0$ or $p-1$, for example, ${x_j \choose k_j} \equiv 0$ or $1 \mod p$, and then${x \choose k} \equiv 0$ or $1 \mod p$. So for a given $k$ and $p$, with $[k_d, \ldots, k_0]$ the base $p$ representation of $k$, the possible nonzero values $t$ are the products $t = \prod_{j=0}^d t_j \mod p$ where $t_j \in \{{0 \choose k_j} \mod p, \ldots, {p-1 \choose k_j} \mod p\}$. For example, with $p=5$ and $k = 123 = 443_5$, ${x \choose 4} \equiv 0$ or $1 \mod 5$, and ${x \choose 3} \equiv 0, 1$ or $4 \mod 5$, so the possibilities for ${x \choose 123}$ are $0, 1$ and $4 \mod 5$.
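This is also easy to explore by brute force; e.g. for $k=123$, $p=5$ the attainable residues are exactly $\{0,1,4\}$ (a sketch; the function name is mine):

```python
from math import comb

def attainable(k, p, x_max=3000):
    """Residues t that C(x, k) mod p actually takes for 0 <= x <= x_max."""
    return sorted({comb(x, k) % p for x in range(x_max + 1)})

print(attainable(123, 5))  # [0, 1, 4], matching the Lucas-theorem analysis
```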
H: What is the proper notation for drawing random variables from processes? I have a written statement of the form "Let $X_1$ and $X_2$ be independent order-$n$ matrices whose elements are i.i.d. normally-distributed random variables". Because I frequently use such statements in my current paper, I require a compact notation for declaring random variables such as $X_1$ and $X_2$. What seems natural to me is to define a symbol $\mathcal{N}(n)$ as "the set of all order-$n$ matrix random variables with i.i.d. normally-distributed elements" and then write $X_1,X_2 \in \mathcal{N}(n)$. Although I'm certain the readership could deduce what's meant by this, the definition of $\mathcal{N}(n)$ is erroneous since there's really only one order-$n$ matrix RV with i.i.d. normally-distributed elements, not a set of them. That is, the sample space is already encapsulated by the variable. If I amend the definition of $\mathcal{N}(n)$ to "the sample space of an order-$n$ matrix random variable with i.i.d. normally-distributed elements", this is also erroneous since it would mean $X_1$ and $X_2$ are deterministic rather than random variables. I could also define $\mathcal{N}(n)$ as "an i.i.d. process returning order-$n$ matrices whose elements are i.i.d. normal random variables", in which case $X_1,X_2 \in \mathcal{N}(n)$ would imply $X_1$ and $X_2$ were realizations of this process, but it seems to me that the first question in the reader's mind will be "Where is this 'process' coming from? What's generating these variables?", for which there's no good answer since 'process' here is just a linguistic contrivance. I'm certain the literature must have standard notation for (what I'm denoting as) $\mathcal{N}(n)$ and $X_1,X_2 \in \mathcal{N}(n)$, as well as a standard definition for $\mathcal{N}(n)$. Can some kind soul please enlighten me as to what these are? AI: Why not use the $\sim$ symbol? If $X\sim N(\mu,\sigma^2)$ is used to define a normal random variable with mean $\mu$ and variance $\sigma^2$, you can use $X\sim N_n(\mu,\sigma^2)$ (or $N_{n\times n}$) for an $n\times n$ matrix where all entries are iid $N(\mu,\sigma^2)$.
H: Finding the angle between subspace and a vector. Find the angle between the vector $x=(2,2,1,1)$ and the space formed by the linear combination of the vectors $a_1 = (3,4,-4,-1)$ and $a_2=(0,1,-1,2)$. I found the vector $y=(3,1,3,1)$ that is perpendicular to both $a_1$ and $a_2$. Then found the angle between $x$ and $y$: $$\varphi = \arccos\frac{\langle x,y \rangle}{|x||y|}=\arccos\frac{3\sqrt 2}{5}$$ As the answer I took $\pi/2 - \varphi$. Is this correct? Update: I found this duplicate but could you please help me understand the difference between my solution and that in the link I provided? Why should we find an orthonormal basis as the accepted answer states? It would be appreciated. AI: EDIT (my original answer was incorrect) To see why your solution is not correct, I will consider a simpler example. Let $a_1 , a_2$ be such that $span\{ a_1 , a_2 \} = span \{ (1,0,0,0) , (0,1,0,0) \}$. Let $x=(0,0,1,0)$. Then any vector that is perpendicular to combinations of $a_1 , a_2$ is of the form $y = (0,0, y_3 , y_4)$. I will show that your solution can lead to different answers: If you found that $y = (0,0,1,0)$, then $\cos \varphi = \langle y,x\rangle /(\|y \| \| x \|)=1$ If you found that $y = (0,0,1,1)$, then $\cos \varphi = \langle y,x\rangle /(\|y \| \| x \|)=1/\sqrt{2}$ The point is that in $\mathbb{R}^4$ the orthogonal complement of a $2$-dimensional subspace is itself $2$-dimensional, so "the" perpendicular vector $y$ is not unique, and different choices give different values of $\pi/2-\varphi$. To see why the linked answer is correct, notice that they simply compute the orthogonal projection of $x$ onto $span \{ a_1 , a_2 \}$. Denote this projection $y$. Then the angle between $x$ and $y$ is exactly what you are looking for. Note that the linked answer builds an orthonormal basis because that is the easiest way to compute the projection by hand; any correct method of orthogonal projection gives the same result.
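For the concrete problem, the projection route gives a clean value, and it differs from the $\pi/2-\varphi$ value, confirming that the original approach is not well-defined (a quick numerical check):

```python
import numpy as np

a1 = np.array([3., 4., -4., -1.])
a2 = np.array([0., 1., -1., 2.])
x  = np.array([2., 2., 1., 1.])

A = np.column_stack([a1, a2])
coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # orthogonal projection of x
p = A @ coef                                   # onto span{a1, a2}
cos_angle = p @ x / (np.linalg.norm(p) * np.linalg.norm(x))
print(np.degrees(np.arccos(cos_angle)))            # 60.0, i.e. pi/3
print(90 - np.degrees(np.arccos(3*np.sqrt(2)/5)))  # ~58.05, so it differs
```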
H: Does the probability of obtaining a sample for which a new element will be larger than the sample approaches 0 as the sample size increases? Choose arbitrary $\epsilon > 0$ and arbitrary probability distribution $D$ over $[0, 1]$. For a given natural number $m$, sample $S_m = (x_1, x_2, ..., x_m)$ as independent identically distributed random variables from $D$. Let $\overline{S_m} = max ({x_1, ... x_m})$. Now consider the probability that a newly sampled random variable $x$ will be larger than $\overline{S_m}$, that is consider $P({S_m})$ = $P_{x \sim D}${x > $\overline{S_m}$} We are interested in the probability $Q(m)$ of obtaining a sample $S_m$, for which $P(S_m) > \epsilon$, that is: $Q(m)$ = $P_{S_m \sim D^m}${$P(S_m) > \epsilon$}. a) Is it true that $Q(m)\rightarrow 0$ as $m$ tends to infinity, regardless of the initial choice of $\epsilon$ and $D$? b) If a) is true, can one state an explicit bound function (parameterised by $\epsilon$ but independent of $D$) $F_\epsilon : N \rightarrow [0,1]$ such that $Q(m) < F_\epsilon(m)$ and $F_\epsilon(m) \rightarrow 0$ for large m? AI: Let $F$ be the CDF for the distribution $D$, and $y = \inf \{v: F(v) \ge 1-\epsilon\}$. Thus $F(y) \ge 1-\epsilon$ but $F(t) < 1-\epsilon$ for $t < y$. If $\max(S_m) \ge y$, then $P(S_m) = 1-F(\max(S_m)) \le 1-F(y)\le \epsilon$, but if $\max(S_m) < y$, $P(S_m) > \epsilon$. Thus $Q(m) = P(\max(S_m) < y) = F(y-)^m$ (where $F(y-) = \lim_{v \to y-} F(v)$). In particular $Q(m) \le (1-\epsilon)^m \to 0$ as $m \to \infty$, which answers a) and gives the explicit bound $F_\epsilon(m)=(1-\epsilon)^m$ for b).
H: complex analysis question line I know that the map $$z\mapsto\frac{R^2}{\overline{z}-\overline{a}}+a$$ takes a point to a symmetric point with respect to the circle $|z-a|=R$, and I am trying to get the line $y=x$ in some form like $z=i\bar z$ but this is for $y=x$. I am unsure of how to approach/start this problem. Any help would be appreciated. AI: When we have expressions in $x,y$ that we would like to have in $z$, we use the identities $Re(z)=\frac{z+\bar{z}}{2}$ and $Im(z)=\frac{z-\bar{z}}{2i}$. Your line is expressed by the equation $$z-\bar{z}=3i(z+\bar{z})$$ Say that $z$ lies on that line. Then its symmetric point is given by $$w=\frac{4}{\bar{z}-4}+4 $$ Solve for $\bar{z}$ and obtain $$\bar{z}=\frac{4}{w-4}+4$$ Replacing in the equation that is satisfied by $z,\bar{z}$ we get $$\frac{1}{\bar{w}-4}-\frac{1}{w-4}=3i(\frac{1}{w-4}+\frac{1}{\bar{w}-4}+2) $$ Multiplying with both denominators $$w-\bar{w}=3i(\bar{w}+w-8+2|w-4|^2)$$ using the trick about the real and the imaginary part again, this will yield an equation that I believe is going to be a circle. I believe you can work out the details from here! EDIT: you changed the line from $y=3x$ to $y=x$. This changes the computations. Since $y=x$ your line is $z-\bar{z}=i(z+\bar{z})$. The formula between $w,\bar{z}$ stays the same so $$\frac{1}{\bar{w}-4}-\frac{1}{w-4}=i(\frac{1}{w-4}+\frac{1}{\bar{w}-4}+2) $$ Multiplying with both denominators $$w-\bar{w}=i(\bar{w}+w-8+2|w-4|^2)$$ dividing both sides by $2i$ $$\operatorname{Im}(w)=\operatorname{Re}(w)-4+|w-4|^2 $$ writing $w=a+ib$ $$(a-4)^2+b^2-4+a-b=0$$ or $$a^2-7a+12+b^2-b=0 $$ This is an equation of a circle. This is elementary analytic geometry, check out the internet for "analytic equation of a circle". In the case of $y=3x$, the result was another circle. As mentioned in the comments, the result could be either a circle or a line. The method is the one I used above to get a 2nd degree analytic equation of the geometric set of points. With some elementary analytic geometry, you can then understand what this equation you get (in the way I explained) represents.
H: Will arithmetic on two sequences ever equal some value I wrote a question a few days ago, but it seems to have disappeared. My question is this: If I have 2 sequences: Sequence1 = {1002, 996, 990... 1 } : essentially S1n-6 Sequence 2 = {2, 10, 18 .... x} : essentially S2n + 8 And if I want to see if nth term in sequence 1 divided by the nth term in sequence 2 = 27. So something like this: S1n/S2n = 27 Example: 1002/2 != 27 996/10 != 27 990/18 != 27 . . and so on. Sequence 1's first term will always be larger than sequence 2's first term . Sequence 1's last term will always be > 0 and Sequence 2's first term will always be >0 As well Sequence 1 and Sequence 2 will always have the same number of terms Thus if I know the first number in sequence 1 the first number is 1001 and will subtract by 6, and I know the first number is sequence 2 is 2 and will add by 8, is there a quick way to see if in any term the division will equal to 27. I am just interested in knowing if the division of any terms are equal to 27 (true or false), I am not interested in knowing what terms they are or what their values are, knowing that would just be an added bonus, but it is not what I am after. Of course, I could go through each term in the element and manually calculate it, but this will be time-consuming so I am just wondering if there is a quicker way with the information I know in the beginning to calculate this? The numbers I use here are just examples, I am looking for a more general method where it can be applied to any numbers. AI: Ideally, you'll find an explicit form for each sequence $S_1,S_2$ in terms of $n$, rather than a recursive definition. Then you can perhaps do some algebra to see how it'd go. For instance, if we let the first term of each sequence be the $n=1$ term and use your stated values (first term $1001$, decreasing by $6$; first term $2$, increasing by $8$), then $$S_{1,n} = 1001 - 6(n-1) \;\;\;\;\; S_{2,n} = 2 + 8(n-1)$$ Then of course, equivalently, $$S_{1,n} = 1007 - 6n \;\;\;\;\; S_{2,n} = -6 + 8n$$ We want to see if there exists a positive integer $n$ such that $$\frac{S_{1,n}}{S_{2,n}} = \frac{1007 - 6n}{-6 + 8n} \overset ? = 27$$ Try to solve for $n$. Then we get that $$1007 - 6n = 27(-6+8n) = 216n-162$$ which, grouping like terms and solving for $n$, implies $$222n = 1169 \implies n = \frac{1169}{222} \approx 5.266$$ This, however, is not an integer, which means there exists no part of either sequence (even if extended infinitely) such that their ratio is $27$.
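In general this is a one-line computation rather than a scan: solve the linear equation for $n$ and check integrality. A sketch (the function name and interface are mine):

```python
from fractions import Fraction

def ratio_hits(first1, step1, first2, step2, target, n_terms):
    """Is (first1 + step1*(n-1)) / (first2 + step2*(n-1)) == target
    for some integer n with 1 <= n <= n_terms?"""
    num = Fraction(target * first2 - first1)
    den = Fraction(step1 - target * step2)
    if den == 0:
        return num == 0        # degenerate: every term works, or none does
    n = 1 + num / den
    return n.denominator == 1 and 1 <= n <= n_terms

print(ratio_hits(1001, -6, 2, 8, 27, 200))  # False: n = 1169/222 isn't integral
```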
H: Finding parametrization of $xz^2=xy^2+y^3$ I am doing a problem on geometry and was stuck at one step. I want to parametrize the curve $xz^2=xy^2+y^3$ using two parameters $s,t$. But I cannot figure it out. I am wondering if this is a famous curve or if anyone has any thoughts? AI: Let $y=s$ and $z=t$, then $x=\frac{s^3}{t^2-s^2}$. But observe that when $t^2=s^2$, the given equation forces $s^3=0 \implies s=0$. Thus the parametrization works for all $(s,t)$ with $t \neq \pm s$.
H: Understanding Grassmannian as $\text{Gr}(k,n) \cong \text{GL}(k,k) \ \backslash \ \text{Mat}^*_{\mathbb{R}}(k,n)$. The Grassmannian $\text{Gr}(k,n)$ can be described as the quotient $$\text{Gr}(k,n) \cong \text{GL}(k,k) \ \backslash \ \text{Mat}^*_{\mathbb{R}}(k,n) $$ where $\text{Mat}^*_{\mathbb{R}}(k,n)$ is the set of real $k \times n$ matrices of rank $k$. Every point of $\text{Gr}(k,n)$ can be represented by a $k \times n$-matrix $M$. The Plücker coordinates on $\text{Gr}(k,n)$ are all the $k \times k$-minors of $M$. I don't really understand this construction. I have been reading about how to construct $\text{Gr}(k,n)$ as a projective variety via embedding into $\mathbb{P}\mathbb{R}$ of dimension $n \choose k$ and expressing the $n \choose k$-minors by vanishing of homogeneous polynomials, but I am not sure how to relate that description to the one above. I also came across a construction of $\text{Gr}(k,n)$ as quotient of $\text{GL}(k,k)$ by some stabilizer, but this seems to be related to its structure as a manifold, not as a variety, as far as I understood. My goal here is to understand another space, which is a subvariety in $\text{Gr}(k,n)$, but I don't really follow this definition of $\text{Gr}(k,n)$ to begin with. Any help/explanation would be really great! AI: The backslash here denotes a quotient (an orbit space), not a set difference: $\text{GL}(k,k)$ acts on $\text{Mat}^*_{\mathbb{R}}(k,n)$ on the left, by $g\cdot M = gM$ (i.e. by row operations), and $\text{GL}(k,k)\,\backslash\,\text{Mat}^*_{\mathbb{R}}(k,n)$ is the space of orbits. You might find it helpful to instead think of this as the quotient $\text{Mat}^*_{\mathbb{R}}(k,n)/\sim$, where $\sim$ is the relation by which $A \sim B$ if $A,B$ have the same row-space.
H: Derivative of magnitude of complex trace (function of matrices with Gateaux derivative) So if I define $y(U)=|Tr(U^*V)|^2$ If I do the Gateaux derivative: $y_U[\tilde{U}] =\frac{d}{d\epsilon}|Tr((U^*+\epsilon\tilde{U})V)|^2 $, Here is where I get confused because of how things are nested - the trace has a real and imaginary part and I'm not sure how to handle this My intuition says that the derivative is like $y_U[\tilde{U}] = 2Tr(V^*)$ with the variation somewhere in here, but I can't figure out how to get that if it should be the case. Some tips would be much appreciated AI: $y(U) = \overline{\operatorname{tr} (U^*V)} \operatorname{tr} (U^*V)$. For real $t$, \begin{eqnarray} y(U+tH) -y(U) &=& \overline{\operatorname{tr} ((U+tH)^*V)} \operatorname{tr} ((U+tH)^*V) - \overline{\operatorname{tr} (U^*V)} \operatorname{tr} (U^*V) \\ &=& (\overline{\operatorname{tr} (U^*V)} + t\overline{\operatorname{tr}(H^*V)} ) ( \operatorname{tr} (U^*V) + t\operatorname{tr} (H^*V) ) - \overline{\operatorname{tr} (U^*V)} \operatorname{tr} (U^*V) \\ &=& 2 t \operatorname{re} ( \overline{\operatorname{tr} (U^*V)} \operatorname{tr} (H^*V) ) + O(t^2) \end{eqnarray} Hence $d y(U,H) = 2 \operatorname{re} ( \overline{\operatorname{tr} (U^*V)} \operatorname{tr} (H^*V) ) $. Note that there are many different notations for the Gateaux derivative. Note, if $H$ is real, with $\operatorname{tr} (U^*V) = a+ib$ and $V= V_r+i V_i$, then $\operatorname{tr}(H^*V) = \operatorname{tr} (V_r^TH) + i \operatorname{tr} (V_i^TH)$, so $d y(U,H) = 2 \operatorname{re} ( (a-ib) ( \operatorname{tr} (V_r^TH) +i \operatorname{tr} (V_i^TH))) = 2 \operatorname{tr}((aV_r+bV_i)^T H)$, and similarly, $\operatorname{tr}((iH)^*V) = \operatorname{tr} (V_i^TH) - i \operatorname{tr} (V_r^TH)$, so $d y(U,iH) = 2 \operatorname{re} ( (a-ib) ( \operatorname{tr} (V_i^TH) -i \operatorname{tr} (V_r^TH))) = 2 \operatorname{tr}((aV_i-bV_r)^T H)$
H: If $\cos (\alpha + \beta) = 4 / 5$ and $\sin (\alpha - \beta) = 5 / 13$, where $\alpha$ lies between $0$ and $\pi/4$, find the value of $\tan2\alpha$. In my textbook, the given answer is only $\frac{56}{33}$. But I think the question has another answer $(\frac{16}{63})$ as well. Please review the attached answer, share your thoughts and correct me if I am wrong. Thank You! AI: Since $\sin(\alpha-\beta)\gt 0$, we must have $0\lt \alpha-\beta\lt \pi \implies \beta\lt \alpha$ and together with $0\lt \alpha \lt \frac{\pi}{4}$, this means that $0\lt \beta \lt \frac{\pi}{4}$. This further implies that $$0\lt \alpha-\beta \lt \frac{\pi}{4}$$ so $$\tan(\alpha-\beta) \gt 0$$ Your textbook is right.
H: If $A^3 = I_n$ then $\operatorname{tr}(A)\in\Bbb Z$ Let $A\in M_{n\times n}(\Bbb R)$ s.t. $A^3 = I_n$. Show that $\operatorname{tr}(A) \in \Bbb Z$. I know that $P(A) = 0$, where $ P(x) = x^3 - 1 = (x-1)(x^2+x+1)$, so every eigenvalue of $A$ is a root of $P$. Also, the trace is the sum of the eigenvalues, and over $\Bbb C$ the possible eigenvalues of $A$ are $1, \omega, \omega^2$. However I'm stuck over $\Bbb R$; I guess it's not obvious that the trace of a real matrix equals the sum of its complex eigenvalues. Can you help me? AI: Hint If $\lambda$ is an eigenvalue of $A$ then $\lambda \in \{1, \omega, \overline{\omega} \}$ where $\omega^2+\omega+1=0$. Let $m,n,k$ be the multiplicities of these eigenvalues (which could be 0). Then $$\mbox{tr}(A)=m+n \omega+k \bar{\omega}$$ Use the fact that $A$ has real entries (or that $\mbox{tr}(A)$ is real) to show that $n=k$. Then use the fact that $\omega+\bar{\omega}=-1$.
H: Prove the limit of parametric integral: Suppose that: 1). $\varphi_{n}(x) \geq 0$, $\forall n \in \mathbb{N}$ on $[-1,1]$. 2). $\varphi_{n}(x) \rightrightarrows 0$ as $n \to \infty$ on $(0< \varepsilon \leq |x|\leq 1)$. 3). $\int_{-1}^{1} \varphi_{n}(x)dx \to 1$ as $n \to \infty$. Prove that if $f \in C[-1,1]$, then $$\lim_{n \to \infty} \int_{-1}^{1} f(x)\varphi_{n}(x)dx=f(0)$$ AI: Here's an answer, modulo a few details; recall that $\rightrightarrows$ denotes uniform convergence, which is what the argument uses on $\varepsilon \leq |x| \leq 1$. Let's split this up into two parts: where $|x|<\varepsilon$ and $|x|\geq \varepsilon$: $$ \int _{-1}^1 f(x)\varphi_n(x)\,dx = \int _{|x|<\varepsilon} f(x)\varphi_n(x)\,dx +\int _{|x|\geq \varepsilon} f(x)\varphi_n(x)\,dx $$ By the First Mean Value Theorem for Integrals, since $\varphi_n$ is non-negative, we have $$ \int _{|x|<\varepsilon} f(x)\varphi_n(x)\,dx = f(\xi_1)\int _{|x|<\varepsilon} \varphi_n(x)\,dx $$ for some $\xi_1\in(-\varepsilon,\varepsilon)$, and likewise $$ \int _{|x|\geq \varepsilon} f(x)\varphi_n(x)\,dx = f(\xi_2)\int _{-1}^{-\varepsilon} \varphi_n(x)\,dx + f(\xi_3)\int _{\varepsilon}^{1} \varphi_n(x)\,dx $$ for some $\xi_2\in[-1,-\varepsilon]$ and $\xi_3\in[\varepsilon,1]$. Now by the second assumption, we can choose $n$ so large that the two tail integrals are arbitrarily small; combined with the continuity and hence boundedness of $f$ on $[-1,1]$, the second piece tends to $0$. Consequently, by the third assumption, $\int_{|x|<\varepsilon}\varphi_n(x)\,dx \to 1$. Since $\xi_1\in(-\varepsilon,\varepsilon)$ and $\varepsilon>0$ is arbitrary, continuity of $f$ at $0$ shows that the first piece can be made arbitrarily close to $f(0)$; in the limit it approaches $f(0)\cdot 1$, as we'd hope.
H: A square is cut into three equal area regions by two parallel lines; find the area of the square. A square is cut into three equal area regions by two parallel lines that are 1 cm apart, each one passing through exactly one of two diagonally opposed vertices. What is the area of the square? AI: (From the figure: square $ABCD$ with $E$ on side $AB$, $F$ on side $CD$, and cutting lines $DE$ and $BF$.) Let $AE=x$. By symmetry (equal area etc.) $FC=x$. Let the side of the square be $a$. Then $DE=BF=\sqrt{a^2+x^2}$. Area of the parallelogram $FBED$ = $\sqrt{a^2+x^2} \cdot 1=\sqrt{a^2+x^2}$. Area of $\triangle DAE$ = Area of $\triangle FCB$ = $\frac{ax}{2}$. $$\frac{ax}{2}=\sqrt{a^2+x^2} \implies \color{red}{x^2=\frac{4a^2}{a^2-4}}.$$ And the area of the square being thrice the area of the triangle gives us $$a^2=\frac{3ax}{2} \implies 2a=3x.$$ Solve these to get $a^2=13$ (the area of the square).
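The two conditions pin the side down immediately; a symbolic check (a sketch using sympy):

```python
import sympy as sp

a, x = sp.symbols('a x', positive=True)
sol = sp.solve([sp.Eq(a*x/2, sp.sqrt(a**2 + x**2)),  # triangle area = strip area
                sp.Eq(2*a, 3*x)],                    # square = 3 * triangle
               [a, x], dict=True)
print([s[a]**2 for s in sol])  # [13]
```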
H: Is the following a formula for expressing a dot product in terms of length? I've come across the following formula and I've been told that it's expressing a dot product in terms of length, but I can't find any sources or derivations for it online. $$\langle u,v\rangle = \frac{|u+v|^2 - |u|^2 - |v|^2 }{2}$$ I found some similar-looking formulae, but nothing exactly of this form. Can you tell me how to derive this? I know about finding the Pythagorean length, but I'm stumped trying to find the exact formula above. There is also this question which seems to be almost the same, but again, it's not quite there. Calculate dot product without the use of angles And in a comment to that question, the hint $$(a+b)^2-(a-b)^2=4ab$$ AI: This is a polarization identity; it can be shown by rewriting the norms in terms of the inner product. In this case, \begin{align*} |u+v|^2 - |u|^2 - |v|^2 &= \langle u+v,u+v \rangle - |u|^2 - |v|^2 \\ &= \langle u,u\rangle + 2 \langle u,v \rangle + \langle v,v \rangle - |u|^2 - |v|^2 \\ &= |u|^2 + 2 \langle u,v \rangle + |v|^2 - |u|^2 - |v|^2 \\ &= 2 \langle u,v \rangle. \end{align*}
H: Find basis and dimension of subspaces If I have $V=\{(x,y,z,w)\in \mathbb{R}^4 : x+y=z+w\}$ how can I find a basis and the dimension? I'm new to this and I don't know how to proceed. AI: hint A vector of $ V $ is of the form $$(x,y,z,w)=(x,y,z,x+y-z)=$$ $$x(1,0,0,1)+y(0,1,0,1)+z(0,0,1,-1)=$$ $$x\vec{v_1}+y\vec{v_2}+z\vec{v_3}$$ Now prove these three vectors are independent; they then form a basis, so $\dim V=3$.
H: How to prove that a set is not a finite union of intervals? Consider the set $\mathbb{R} - \mathbb{Z}$. It is certainly a countable union of intervals. How does one prove that it is not in fact a finite union of intervals? AI: I'm assuming that by interval you mean bounded interval, i.e., sets of the forms $(a,b)$, $(a,b]$, $[a,b)$, and $[a,b]$. (Even if you allow rays, it's not hard to show that $\Bbb R\setminus\Bbb Z$ doesn't contain any rays.) Let $\{I_1,\ldots,I_n\}$ be any finite set of intervals, and for $k=1,\ldots,n$ let $a_k=\inf I_k$ and $b_k=\sup I_k$. Finally, let $b=\max\{b_1,\ldots,b_n\}$. Then $x\le b$ for each $x\in\bigcup_{k=1}^nI_k$, so $\lceil b\rceil+\frac12\in(\Bbb R\setminus\Bbb Z)\setminus\bigcup_{k=1}^nI_k$.
H: Proving roots of quadratic equations While working out, for the required quadratic equation, my result is: $x^2-\left(\frac{{\beta }^3+{\alpha }^3}{{(\alpha \beta )}^3}\right)x+\frac{1}{{\left(\alpha \beta \right)}^3}$ I am unable to move to the next part of the question. Any help will be appreciated. AI: If $\alpha$, $\beta$ are the roots of $ax^2+bx+c$ and $\alpha\beta^2=1$, prove that $a^3+c^3+abc=0$. \begin{align*} \alpha\beta^2&=1\\ \frac ca\beta&=1\\ \frac ca\left(\frac{-b\pm\sqrt{b^2-4ac}}{2a}\right)&=1\\ -bc\pm c\sqrt{b^2-4ac}&=2a^2\\ \pm c\sqrt{b^2-4ac}&=2a^2+bc\\ \left(\pm c\sqrt{b^2-4ac}\right)^2&=(2a^2+bc)^2\\ b^2c^2-4ac^3&=4a^4+b^2c^2+4a^2bc\\ 4a^4+4a^2bc+4ac^3&=0\\ a^3+abc+c^3&=0\\ \end{align*}
H: G is solvable iff factors have prime order A group $G$ is said to be solvable if, and only if, there exists a subnormal series of subgroups $\{e\} = G_0 \triangleleft G_1 \triangleleft \cdots \triangleleft G_n = G$ such that each factor $\displaystyle\frac{G_{i+1}}{G_i}$ of the series is an abelian group. Prove that a finite group $G$ is solvable if, and only if there exists a subnormal series of subgroups $\{e\} = G_0 \triangleleft G_1 \triangleleft \cdots \triangleleft G_n = G$ such that each factor $\displaystyle\frac{G_{i+1}}{G_i}$ of the series has a prime order (i.e. $\left| \displaystyle\frac{G_{i+1}}{G_i} \right | = p$ for some $p$ prime number). This exercise was proposed to us by our algebra teacher and I just keep thinking that there is something wrong here. Every time I looked up in the internet, other conditions were necessary for this to be valid. If it isn't, what are possible counter-examples? If it is valid, however, how can I prove it? Tried a huge amount of stuff and can't figure it out. NOTE: I do not have learned the definitions of solvable groups with derived series and commutators yet, so please do not use them. I mean, i want to learn but I think that this exercise just needs to be solved without them. NOTE 2: No, this is not my homework, this exercise will not be checked for points or anything like that, I just want to learn. AI: This is false in this generality. You require $G$ to be finite for this to be true. A counterexample would be $\mathbb{Z}$ or $\mathbb{Q}$. For finite groups it is true. In order to solve it you first need to prove it for finite abelian groups. Once you have that, then you 'insert' the series you find for each abelian subquotient (quotient of one subgroup by another) into the abelian series you started with, using the correspondence between subgroups of a quotient group and subgroups of a group containing the kernel of the quotient map. For abelian groups $G$, the easiest way is to use induction on $|G|$. If $G$ has no proper non-trivial subgroups then $G$ is cyclic of prime order, and $1\lhd G$ is a series. If $H$ is a proper non-trivial subgroup, by induction both $H$ and $G/H$ have such a series and then you append the series for $G/H$ onto the series for $H$ using the correspondence mentioned above. Thus given $$1=H_0\lhd H_1\lhd H_2\lhd \cdots \lhd H_n=H,$$ $$H/H=G_0/H\lhd G_1/H\lhd \cdots \lhd G_m/H=G/H,$$ we obtain $$ 1=H_0\lhd H_1\lhd H_2\lhd \cdots \lhd H_n=H=G_0\lhd G_1\lhd G_2\lhd \cdots \lhd G_m=G.$$ Now we have a series for an abelian group, we do the general case using exactly the trick in the abelian case. Stitch the series together for each factor $G_{i+1}/G_i$.
H: Compute the phase difference of sine wave at point in 3D space Different points in time have values modulated sinusoidally (like a signal arriving at each point). For the case as illustrated in the picture, all points have the same value at the same time. I have different cases where the wave is rotated about the vertical axis (Value), so that at the same time points have different values. The rotation angle is known. Is there a way to compute the phase difference between the points after the rotation? Any help would be greatly appreciated! AI: The equation of your wave is $$V=A\sin(\omega t+\phi_p)$$ Here $V$ is the value, $t$ is the time, and $p$ is the position. The angle of rotation by itself is not very meaningful, since it depends on the units of position and time. What is meaningful is the time at which each of the points goes through a minimum (or maximum); in other words, where the curves of constant value lie. You can write the first equation as $$V=A\sin(\omega(t+\Delta t_p))$$ And in your case $\Delta t_p$ is linear in $p$, so $$\Delta t_p=\frac{\Delta t_1-\Delta t_0}{p_1-p_0}p+\Delta t_0$$ Plugging it back into $\phi_p$, you get $$\phi_p=\omega\Delta t_p=\omega\frac{\Delta t_1-\Delta t_0}{p_1-p_0}p+\omega\Delta t_0$$ Then here $\frac{\Delta t_1-\Delta t_0}{p_1-p_0}$ is the slope of the constant level curve (tangent of the angle), and $\omega \Delta t_0$ is the phase of the point at the initial position.
H: Proof of identity relating Prime-counting function and Stirling numbers of the second kind In his famous 1973 paper Gallagher showed that the distribution of prime numbers in short intervals tends to a Poisson distribution. To do it he uses in one step: $$\sum_{n \leq N}(\pi (n+h)-\pi(n))^{k}=\sum_{r=1}^{k} \sigma (k,r) \underset{distinct}{\sum_{1 \leq d_{1}< \dots <d_{r} \leq h}} \pi(N; \left \{ d_{1},d_{2},\dots,d_{r} \right \})$$ Where $\sigma (k,r)$ are the Stirling numbers of the second kind and $\pi(N; \left \{ d_{1},d_{2},\dots,d_{r} \right \}$ counts the different integers $x\leq N$ such that $\{x,x+d_{1},\dots,x+d_{r}\}$ are simultaneously primes. I understand that he has used $\sum_{k=0}^n \sigma (n,k)(x)_{k}=x^{n}$. Where $(x)_{k}$ is the falling factorial. Nevertheless, I do not understand why the identity is true. Any advice? AI: In the paper there is an intermediate step, namely that $$\sum _{n\leq N}\left (\pi (n+h)-\pi (n)\right )^k=\sum _{n\leq N} \sum _{n<p_1,\cdots ,p_k\leq n+h}1,$$ and this step just rewrites $(\pi (n+h)-\pi (n))^k$ as the number of $k$ choices of primes (i.e., $p_1,p_2,\cdots ,p_k$) between $n+1$ and $n+h$ inclusive. Now, you may argue that some of these primes need not be distinct, and so what you want is to take into account which of them coincide. So suppose that there is a partition $\sigma \vdash [k]$ such that $p_i=p_j$ iff $i,j$ are in the same block of the partition; in this way we can now work with distinct primes. The number of such partitions with $r$ blocks is the Stirling number of the second kind $\sigma(k,r)$, and the way to choose the distinct numbers is by noticing that if $p_1< p_2<\cdots <p_r$ then there are increasing numbers $d_1,d_2,\cdots , d_r$ such that $p_1=n+d_1,p_2=n+d_2,\cdots , p_r=n+d_r.$ Morally: this equation just factors the choices of the primes into a choice of partition and an injective choice of the primes (which is the combinatorial interpretation of the equation that you provided).
H: Evaluate the integral by interpreting it in terms of areas Evaluate the integral by interpreting it in terms of areas. In other words, draw a picture of the region the integral represents, and find the area using high school geometry. $$ \int_0^4 |5x-2|dx $$ AI: So visually the absolute value means your function can be decomposed into two different lines over the region x: [0,4]. The graph changes direction at x = 0.4, as this is where |5x - 2| changes sign. So if we let y = |5x - 2|, then at x = 0, y = 2; at x = 0.4, y = 0; at x = 4, y = 18. Draw a line between (0,2) and (0.4,0) and find the area over the region x:[0,0.4]. Then draw a line between (0.4,0) and (4,18), and find the area over this region. Using the area formula for triangles, you should get $(1/2)(0.4)(2) + (1/2)(4-0.4)(18) = 32.8$
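A quick numerical check of the two triangle areas $0.4 + 32.4 = 32.8$:

```python
import numpy as np

x = np.linspace(0, 4, 1_000_001)     # grid containing the corner at x = 0.4
print(np.trapz(np.abs(5*x - 2), x))  # ~ 32.8
```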
H: How to find the Fréchet derivative for this functional My functional is $F(u)=\frac{1}{2}\int (|D_x u|^2 + |D_y u|^2)dxdy + \int fu dxdy$ where $f\in C(\Omega)$, $u\in C_{0}^2(\overline{\Omega})$ and $\Omega \subset R^2$. Well, I'm reading Elliptic problems in Nonsmooth domains (P. Grisvard), page 2; my question is: do I need to compute $F(u+\delta)-F(u)$ and try to find the residual part that is linear in $\delta$? And does $\delta$ need to depend on $x,y$? Thank you AI: Let $u, h \in \mathcal{C}^2(\bar{\Omega})$. Then \begin{align} F(u+h) &= \frac{1}{2}\int (|D_x (u+h)|^2 + |D_y (u+h)|^2)dxdy + \int f(u+h) dxdy \\ &= \frac{1}{2}\int (|D_x u|^2+2\langle D_xu,D_xh \rangle +|D_xh|^2 + |D_y u|^2+2\langle D_yu,D_yh \rangle +|D_yh|^2)dxdy\\&~~~~~+ \int (fu + fh) dxdy \\ &= F(u) + \int\left(\langle D_xu,D_xh \rangle + \langle D_yu,D_yh \rangle +fh \right)dxdy + \frac12\int (|D_xh|^2+|D_yh|^2)dxdy \\ &= F(u) + l(h) + O(\|h\|^2) \end{align} where $l(h) = \int\left(\langle D_xu,D_xh \rangle + \langle D_yu,D_yh \rangle +fh \right)dxdy$ is linear in $h$, and the quadratic remainder is $O(\|h\|^2)=o(\|h\|)$. It follows that $$D_uF(h) = \int\left(\langle D_xu,D_xh \rangle + \langle D_yu,D_yh \rangle +fh \right)dxdy$$
H: How to uniformly sample multiple numbers whose product is within some bound Suppose I have 3 positive integers: $n_1$, $n_2$, and $n_3$. How do I uniformly sample $(n_1, n_2, n_3)$ so that $50 < n_1 n_2 n_3 < 100$? I could sample each number independently with bounds between 1 and 100, then keep re-sampling if their product is out of bounds. I need to do this in a computer program with more numbers and larger bounds, so I prefer a way that's efficient both in space and time. Are there other ways to sample than what I described above? EDIT (regarding the accepted answer): At the time of this edit, there were 3 approaches answered so far: complete enumeration (addresses the example in the question, simple, but space- and time-bound for very large problems), rejection sampling (simple, but time-bound for very large problems), and weighted draw ("efficient" in space and time compared to the other approaches, but complex). All answers work best in different situations. I accepted weighted drawing since it seemed to be most complete, in the sense that it addressed the "efficient both in space and time" part for larger problems the best. AI: Depending on how large your bounds are, what you consider 'efficient', and how much preprocessing you're willing to do, one option is to do a weighted draw of $n_1$. For each $n_1$ you can compute how many $(n_2,n_3)$ pairs yield $b_{min}\leq n_1n_2n_3\leq b_{max}$; this will take $O(b_{max}^2)$ time and $O(b_{max})$ space naively. Then build a CDF table of $n_1$ from those pairs; this will ensure the appropriate probability for that variable. Once you have $n_1$ you can either do the same procedure a dimension down (but note that the tables there will be $n_1$ dependent, so you'll have to build them after choosing $n_1$ or have them available for many possible $n_1$) or do rejection sampling, though presumably with a much smaller range. For an example, suppose we have $5\leq n_1n_2n_3\leq 10$. Then we start by making a table $T(i)$ of the number of pairs $a,b$ with $1\leq ab\leq i$, for $1\leq i\leq 10$; this looks like $[1, 3, 5, 8, 10, 14, 16, 20, 23, 27]$. (This table is buildable in $O(i^2)$ time through several means and I suspect it's buildable much faster than that in practice if you wanted to dig into the appropriate algorithms.) Now, for each $n_1$ we can compute $T(\lfloor\frac{10}{n_1}\rfloor)-T(\lceil\frac5{n_1}\rceil-1)$ (taking $T(0)=0$); this represents how many pairs $n_2,n_3$ fit the product within the bounds for that $n_1$. This table will look like $[19, 7, 4, 2, 3, 1, 1, 1, 1, 1]$. Then sampling from this table is a matter of computing the CDF (which here would be $[19, 26, 30, 32, 35, 36, 37, 38, 39, 40]$), computing an $r$ uniformly in $[1\ldots 40]$ and doing the reverse lookup to find the corresponding value of $n_1$.
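Here is a minimal Python sketch of the weighted-draw approach for the toy bounds $5\le n_1n_2n_3\le 10$ (all names are mine, and the counting is done by brute force for clarity rather than with the precomputed $T$ table a production version would use):

```python
import bisect
import random
from collections import Counter
from math import ceil

LO, HI = 5, 10  # the toy bounds from the worked example

def count_n3(lo, hi, n2):
    """Number of n3 >= 1 with lo <= n2 * n3 <= hi."""
    return max(0, hi // n2 - ceil(lo / n2) + 1)

def count_pairs(lo, hi):
    """Number of pairs (n2, n3) with lo <= n2 * n3 <= hi; equals T(hi) - T(lo - 1)."""
    return sum(count_n3(lo, hi, n2) for n2 in range(1, hi + 1))

def draw(weights):
    """Return a 1-based index drawn proportionally to the integer weights."""
    cdf, total = [], 0
    for w in weights:
        total += w
        cdf.append(total)
    r = random.randint(1, total)
    return bisect.bisect_left(cdf, r) + 1

def sample_triple():
    # Weight each n1 by its number of (n2, n3) completions, as in the answer;
    # for LO=5, HI=10 these weights are [19, 7, 4, 2, 3, 1, 1, 1, 1, 1].
    n1 = draw([count_pairs(ceil(LO / n), HI // n) for n in range(1, HI + 1)])
    lo, hi = ceil(LO / n1), HI // n1
    # Same procedure one dimension down for n2 ...
    n2 = draw([count_n3(lo, hi, n) for n in range(1, hi + 1)])
    # ... and n3 is then uniform on the interval that remains.
    n3 = random.randint(ceil(lo / n2), hi // n2)
    return n1, n2, n3

# Empirical check: all 40 valid triples should come out roughly equally often.
counts = Counter(sample_triple() for _ in range(40_000))
print(len(counts), "triples; counts range", min(counts.values()), "-", max(counts.values()))
```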
H: Tangent lines of a differentiable function verifying $f(a) = f(b) = 0$ intersect the $x$ axis at all points outside of $[a, b]$. Let $f$ be a function differentiable on $[a, b]$, $(a<b)$, such that $f(a) = f(b) = 0$. Prove that $\forall d \in \mathbb{R}\setminus[a,b]$, $\exists c \in [a,b]$ such that the tangent line at $c$ intersects the $x$ axis at $(d,0)$. My attempt: for an arbitrary $d \in \mathbb{R}\setminus[a,b]$, let $g_d\colon x \longmapsto f(x)(d-x)$. I thought this would work if I applied Rolle's Theorem to this function, until I differentiated it: we get $g_d'(x) = f'(x)(d-x) - f(x)$, and I had overlooked the minus sign... So how should I proceed? AI: We want to prove that $$\forall d\in \Bbb R\setminus[a,b]\;\; \exists c\in[a,b]\; : $$ $$f'(c)(d-c)+f(c)=0,$$ which says exactly that the tangent line at $c$, namely $y=f(c)+f'(c)(x-c)$, passes through $(d,0)$. For this, define, for $ x\in[a,b]$, $$g(x)=\frac{f(x)}{d-x}.$$ Since $d\notin[a,b]$, the denominator $d-x$ never vanishes on $[a,b]$, so $g$ is well defined. Moreover, $g$ is continuous on $[a,b]$, differentiable on $(a,b)$, and satisfies $$ g(a)=g(b)=0.$$ Then, by Rolle's Theorem, $$\exists c\in (a,b) \;\; : g'(c)=0,$$ but for any $ x\in (a,b) $ $$g'(x)=\frac{f'(x)(d-x)+f(x)}{(d-x)^2}.$$ Done.
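A concrete numeric illustration (my own choice of $f$, not from the question): with $f(x)=\sin(\pi x)$ on $[0,1]$ and $d=2$, root-finding on $f'(c)(d-c)+f(c)$ recovers such a $c$.

```python
import numpy as np
from scipy.optimize import brentq

a, b, d = 0.0, 1.0, 2.0
f = lambda x: np.sin(np.pi * x)            # vanishes at both endpoints
fp = lambda x: np.pi * np.cos(np.pi * x)

# Solve f'(c)(d - c) + f(c) = 0: the tangent at such c passes through (d, 0).
g = lambda c: fp(c) * (d - c) + f(c)
c = brentq(g, a + 1e-9, b - 1e-9)          # sign change: g(0+) > 0, g(1-) < 0
print(c, f(c) + fp(c) * (d - c))           # second value ~ 0
```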
H: If $\sum_{n=1}^{\infty} a_n$ converges and $\sum_{n=1}^{\infty} a_n^2$ diverges, then $\prod_{n=1}^{\infty} (1+a_n)$ diverges to $0$. Prove that if $\sum_{n=1}^{\infty} a_n$ converges and $\sum_{n=1}^{\infty} a_n^2$ diverges, then $\prod_{n=1}^{\infty} (1+a_n)$ diverges to $0$. We assume that $\{a_n\}$ is a real sequence. An infinite product $\prod_{n=1}^{\infty} (1+a_n)$ diverges to $0$ if $1+a_n \ne 0 \,\forall n \in \mathbb{N}$ and $\prod_{n=1}^{\infty} (1+a_n)=0$. Any hint or reference will be appreciated. AI: Hint: $\log(1+t) \sim t - t^2/2$ as $t \to 0$ so $t - t^2/4 \ge \log(1+t)$ for real $t$ with $|t|$ sufficiently small (in fact it is true for $-1 < t < 2.532354756$ approximately). So when $a_i, i = k \ldots m$ are in this interval $$ \prod_{i=k}^m (1 + a_i) = \exp \left(\sum_{i=k}^m \log(1 + a_i) \right) \le \ldots $$
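A concrete instance (my example, not part of the hint): with $a_n=(-1)^n/\sqrt{n+1}$, the series $\sum a_n$ converges (alternating series test) while $\sum a_n^2=\sum\frac1{n+1}$ diverges, and the partial products visibly drift to $0$:

```python
import math

p = 1.0
for n in range(1, 10**6 + 1):
    p *= 1 + (-1)**n / math.sqrt(n + 1)
    if n in (10, 100, 1000, 10_000, 100_000, 1_000_000):
        print(f"n = {n:>9,}   partial product = {p:.6g}")
```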
H: Künneth formula and the tensor product of cohomologies The Künneth formula states that $$\Phi: H^*(M) \otimes H^*(F) \to H^*(M×F)$$ is an isomorphism, where $\Phi$ is the map induced by $$\omega \otimes \tau \mapsto \pi ^* \omega \wedge\rho^*\tau,$$ where $\pi$ and $\rho$ are the projections from $M×F$ to $M$ and $F$ respectively. My book (Differential Forms in Algebraic Topology, page 49) says that if $F = \mathbb{R}^n$, then the Künneth formula is just the Poincaré lemma, which says $H^*(M) \cong H^*(M×\mathbb{R}^n)$. I don't see how this is the case. As far as I know, $\dim(V \otimes W) = \dim(V) \times \dim(W)$, and $H^*(\mathbb{R}^n)$ is zero in degrees greater than zero. That seems to say that $H^*(M) \otimes H^*(\mathbb{R}^n)$ should be zero in every degree where $H^*(\mathbb{R}^n)$ is. Do tensor products work differently on de Rham cohomology? AI: Note that $H^*(M)\otimes H^*(\mathbb{R}^n) \cong H^*(M)\otimes\mathbb{R} \cong H^*(M)$. The point is that this is a tensor product of graded rings, graded by $(A\otimes B)^k = \bigoplus_{i+j=k} A^i\otimes B^j$; since $H^j(\mathbb{R}^n)=0$ for $j>0$, only the summand $H^k(M)\otimes H^0(\mathbb{R}^n)$ survives in each degree. In particular, $\dim (H^*(M)\otimes H^*(\mathbb{R}^n))^k = \dim H^k(M)$ where the exponent denotes the degree $k$ part of the graded ring $H^*(M)\otimes H^*(\mathbb{R}^n)$. For $k > \dim M$ we get zero, but we don't necessarily get zero for $k \leq \dim M$.
H: Showing $\lim_{n\to\infty}\int_{\mathbb{R}} f(x)f(x+n) dx=0$ Problem Let $f(x)\in L^2(-\infty,\infty)$. Prove that $$ \lim_{n\to\infty}\int_{-\infty}^\infty f(x)f(x+n) dx=0. $$ My attempt Let $N\in\mathbb{N}$ and $f_N(x):=f(x)\chi_{[-N,N]}(x)$. For any $n\in\mathbb{N}$, consider $$ \begin{split} \int_{-\infty}^\infty f(x)f_N(x+n)dx&=\int_{-\infty}^\infty f(x)f(x+n)\chi_{[-N,N]}(x+n)dx\\ &=\int_{-\infty}^\infty f(x)f(x+n)\chi_{[-N-n,N-n]}(x)dx\\ &=\int_{-N-n}^{N-n}f(x)f(x+n)dx \end{split} $$ This is where I get stuck. We know for a.e. $x\in\mathbb{R}$ that $\lim_{n\to\infty} f(x+n)=0$. Thus, if I were allowed to interchange limit and integral, I would get the desired conclusion. I tried the Dominated Convergence Theorem, but could not find a way to dominate the integrand by an integrable function not depending on $n$. Maybe there is a different approach. Any hints? I'd like to be able to say the absolute value of the above integral is less than any $\varepsilon>0$ for $n$ large enough, and then since $N$ is arbitrary, the conclusion follows. AI: Hint: There exists a continuous function $g$ with compact support such that $ \|f-g\|_2 <\epsilon$. Then $\int g(x)g(x+n)\,dx=0$ for $n$ sufficiently large, and $\left|\int f(x)f(x+n)-\int g(x)g(x+n)\right| <\epsilon ( \|f\|_2+ \|g\|_2)$ by Hölder's inequality. A simpler proof (continuity of $g$ is not required, so we can simplify as follows): Let $g(x)=f(x)$ for $|x| \leq N$ and $0$ for $|x| >N$. Observe that $ g(x)g(x+n)=0$ for all $x$ if $n >2N$, and that $\int |f-g|^{2} \to 0$ as $N \to \infty$. Hence $\int g(x)g(x+n)dx=0$ for $n >2N$. Now $$\int f(x)f(x+n) -\int g(x)g(x+n)$$ $$=\int f(x)f(x+n) -\int g(x)f(x+n)+\int g(x)f(x+n) -\int g(x)g(x+n).$$ Use Hölder's inequality to show that this quantity tends to $0$ as $N \to \infty$.
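A numeric illustration (with my own choice of $f$, assuming SciPy is available): for $f(x)=1/(1+x^2)\in L^2(\mathbb{R})$, the correlation integral decays as $n$ grows.

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: 1.0 / (1.0 + x * x)          # an explicit L^2 function

for n in [0, 1, 5, 20, 50]:
    val, _ = quad(lambda x: f(x) * f(x + n), -np.inf, np.inf)
    print(f"n = {n:>3}   integral = {val:.6f}")
```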
H: Reducing $\binom{t-2}{n-2}/\binom{t-1}{n-1}$ to $\frac{n-1}{t-1}$ How does $$\frac{\displaystyle\binom{t-2}{n-2}}{\displaystyle\binom{t-1}{n-1}}$$ reduce to $$\frac{n-1}{t-1}?$$ I know that the formula is $$\binom{n}{k}=\frac{n!}{k!(n-k)!}$$ When I unfold the ratio using the formula I get $$\frac{(t-2)!(n-1)!((t-1)-(n-1))!}{(n-2)!((t-2)-(n-2))!(t-1)!}$$ I don't see how this reduces; I know it's probably something simple I'm just not seeing. AI: Use the fact that for any positive integer $n$, $n! = n(n-1)!$. Therefore, $$(t-1)! = (t-1)(t-2)!,\\ (n-1)! = (n-1)(n-2)!,$$ and so forth.
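Spelling the hint out: note that $((t-1)-(n-1))! = ((t-2)-(n-2))! = (t-n)!$, so those factorials cancel outright, and the hint handles the rest: $$\frac{\binom{t-2}{n-2}}{\binom{t-1}{n-1}} = \frac{(t-2)!\,(n-1)!\,(t-n)!}{(n-2)!\,(t-n)!\,(t-1)!} = \frac{(n-1)(n-2)!}{(n-2)!}\cdot\frac{(t-2)!}{(t-1)(t-2)!} = \frac{n-1}{t-1}.$$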
H: Probability with cups and plates I have some trouble solving the next problem: There are 6 pairs of cups and plates (i.e. 6 cups and 6 plates). 2 of them are blue, another 2 pairs are red and the last 2 pairs are white. Each cup is placed on a plate randomly. What's the probability that none of the cups has the same color as its plate? My attempt: For each pair of plates and cups there are $6^2 = 36$ possible combinations. And there are $6^2 - (2)(6) = 24$ possible distinct combinations for each pair. So the probability of a pair being different is $P(U) = \frac{24}{36} = \frac{2}{3}$, where $U$ is the set of all distinct combinations for a single pair. Finally the probability that the six pairs are distinct is $\left(\frac{2}{3} \right)^6$ since each pair is independent of the rest. However I'm not sure if my solution is correct, or even my reasoning. Any help would be greatly appreciated. AI: You can't just multiply the factor $\frac 23$ together because the events are not independent. Let's start with the blue plates. The chance they both get white cups is $\frac 26 \cdot \frac 15=\frac 1{15}$. Similarly, the chance they both get red cups is $\frac 1{15}$. The chance they get one red and one white is $2 \cdot \frac 26 \cdot \frac 25= \frac 4{15}$. This checks, because the chance they get both blue is $\frac 1{15}$ and the chance they get two colors including blue is $2 \cdot \frac 4{15}$, for a total probability of all the possibilities of $1$. Assuming they both get white, the chance of success is $\frac 24 \cdot \frac 13 =\frac 16$ because both red plates have to get blue ones. This gives $\frac 1{90}$ chance of success and we get another $\frac 1{90}$ from them getting both red. Assuming they get a red and a white, we need the reds to get a white and a blue, which happens with probability $2\cdot \frac 14 \cdot \frac 23=\frac 13,$ giving a contribution to the success probability of $\frac 13 \cdot \frac 4{15}=\frac 4{45}$. The overall chance of success is $\frac 1{90}+\frac 1{90}+\frac 4{45}=\frac 19$.
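A quick Monte Carlo check of the $1/9$ answer (a simulation sketch, not a proof):

```python
import random

COLORS = ["blue", "blue", "red", "red", "white", "white"]

def trial():
    cups = COLORS[:]
    random.shuffle(cups)                       # a random cup lands on each plate
    return all(plate != cup for plate, cup in zip(COLORS, cups))

N = 200_000
print(sum(trial() for _ in range(N)) / N, "vs", 1 / 9)  # agree to ~2-3 decimals
```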
H: Suppose that $z = x + iy$, with $y > 0$. Show that there are positive real numbers $u$ and $v$ with $2u^{2} = |z| + x$ and $2v^{2} = |z| - x$ Suppose that $z = x + iy$, with $y > 0$. Show that there are positive real numbers $u$ and $v$ with $2u^{2} = |z| + x$ and $2v^{2} = |z| - x$. Calculate $(u+iv)^{2}$. Show that $z$ has exactly two complex square roots. Show that the same holds when $y < 0$. MY ATTEMPT Let us solve this problem backwards: \begin{align*} \begin{cases} 2u^{2} = |z| + x\\\\ 2v^{2} = |z| - x \end{cases} \Longleftrightarrow \begin{cases} |z| = u^{2} + v^{2}\\\\ x = u^{2} - v^{2} \end{cases} \end{align*} Consequently, one has that \begin{align*} |z|^{2} = (u^{2} + v^{2})^{2} = x^{2} + y^{2} = (u^{2} - v^{2})^{2} + y^{2} & \Longleftrightarrow u^{4} + 2u^{2}v^{2} + v^{4} = u^{4} - 2u^{2}v^{2} + v^{4} + y^{2}\\\\ & \Longleftrightarrow y^{2} = 4u^{2}v^{2} \end{align*} Thus it suffices to choose $x = u^{2} - v^{2}$ and $y = 2uv$, where $u,v\in\textbf{R}_{\geq0}$. Thus we conclude that \begin{align*} (u+iv)^{2} = u^{2} - v^{2} + 2uvi = x + yi = z \end{align*} If $y < 0$, it suffices to take $y = -2uv$. Finally, I did not get what it means to show that $z$ has exactly two complex square roots. Any help on this? If there is a nice way to approach it, I'd be grateful to know. Any contribution is appreciated. AI: I think you accidentally solved for $x$ and $y$ instead of $u$ and $v$. Remember that $z = x+iy$ is fixed, so we're looking for $u$ and $v$ in terms of $x$ and $y$: since $y \neq 0$ we have $|z| + x > 0$ and $|z| - x > 0$, so $u = \sqrt{(|z|+x)/2}$ and $v = \sqrt{(|z|-x)/2}$ are well-defined positive reals. You can probably reuse most of the work you already have for that, though. All they mean by showing that $z$ has exactly two complex square roots is this: since you showed that $(u+iv)^2 = z$, we can write $\sqrt{z} = u+iv$. For the second square root, just use the fact that $(-(u+iv))^2 = (u+iv)^2 = z$, and note that $-(u+iv) \neq u+iv$ since $z \neq 0$. There are no others, because any square root $w$ of $z$ is a root of the degree-two polynomial $w^2 - z$, which has at most two roots in $\mathbb{C}$.
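A quick numerical check of the formulas from the attempt (my snippet): with $u=\sqrt{(|z|+x)/2}$ and $v=\sqrt{(|z|-x)/2}$, squaring $u+iv$ (for $y>0$) or $u-iv$ (for $y<0$) recovers $z$.

```python
import math

for z in [3 + 4j, -1 + 2j, 0.5 - 1.5j]:
    x, y, r = z.real, z.imag, abs(z)
    u = math.sqrt((r + x) / 2)
    v = math.sqrt((r - x) / 2)
    w = u + 1j * v if y > 0 else u - 1j * v
    print(z, w**2, (-w)**2)                    # both squares reproduce z
```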
H: Is there a homeomorphism that is not a diffeomorphism? Is there an example of a homeomorphism from the reals onto itself that fails to be a diffeomorphism? AI: Take $f(x)=x^3$: it is a continuous bijection of $\mathbb{R}$ onto itself with continuous inverse $x\mapsto x^{1/3}$, hence a homeomorphism, but its derivative vanishes at $0$, so its inverse is not differentiable at $0$ and $f$ is not a diffeomorphism.
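A tiny numeric illustration (my addition): the difference quotients of the inverse $g(x)=x^{1/3}$ at $0$ grow without bound, confirming $g$ is not differentiable there.

```python
# Difference quotients (g(h) - g(0)) / h = h^(-2/3) for g(x) = x^(1/3), h > 0.
for h in [1e-2, 1e-4, 1e-6, 1e-8]:
    print(h, h ** (1 / 3) / h)
```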
H: Understanding the proof of: set whose decimal expansion contains only $4, 7$ is perfect I'm trying to understand the proof of the following: Let $E$ be the set of all $x \in [0,1]$ whose decimal expansion contains only the digits $4$ and $7$. Prove that every point of $E$ is a limit point of $E$. The proof that is provided to me is: $E$ is perfect since it is closed and its every point is a limit point of $E$. We have already shown that $E$ is closed. Now, let $p \in E$ where $p = 0.p_1p_2 \dots p_n \dots$. It suffices to show that $p$ is a limit point of $E$. To this end, let $r > 0$. Then, $\exists N$ such that $\frac{3}{10^N} < r$. Let $p' = 0.p_1p_2 \dots p_n s_{n+1}\dots$ where $s_{n+1} = 4$ if $p_{n+1} = 7$ and $s_{n+1} = 7$ if $p_{n+1} = 4$. Then, $p' \in N_r(p), p' \in E,$ and $p' \ne p$. Since $r > 0$ was arbitrary, $p$ is a limit point of $E$. Can someone please explain why "$\exists N$ such that $\frac{3}{10^N} < r$" and why "$p' \in N_r(p)$"? AI: I'll take your questions in reverse order. First, $p'\in N_r(p)$ because it was constructed to ensure that this is the case. Specifically, the decimal expansions of $p$ and $p'$ both begin $0.p_1p_2\ldots p_n$, where $n$ is large enough that $\frac3{10^n}<r$. The next digit of the expansion of $p$ is $p_{n+1}$, and the next digit of the expansion of $p'$ is $7$ if $p_{n+1}=4$ and $4$ if $p_{n+1}=7$. Thus, the largest possible value for $|p-p'|$ is $$0.\underbrace{0\ldots 0}_n3333\ldots=\frac1{3\cdot10^n}<\frac3{10^n}<r\;.$$ Thus, $|p-p'|<r$, and by the definition of $N_r(p)$ that means $p'\in N_r(p)$. Second, the existence of $N$ with $\frac3{10^N}<r$ is just the Archimedean property of the reals: $10^N\to\infty$ as $N\to\infty$, so $\frac3{10^N}\to0$ and is eventually smaller than any fixed $r>0$. Finally, I suspect that the choice of $n$ such that $\frac3{10^n}<r$ embodies a typo, and that the author intended to choose $n$ large enough that $\frac1{3\cdot10^n}<r$, since, as I pointed out above, $\frac1{3\cdot10^n}$ is the largest possible value of $|p-p'|$ when $p'$ is constructed according to the recipe actually used. Either way the argument goes through, because $\frac1{3\cdot10^n}<\frac3{10^n}$.
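A small numeric illustration of the construction (my snippet; the particular digit string and $r$ are arbitrary choices):

```python
import math

digits = [4, 7, 4, 4, 7, 7, 4, 7, 4, 4, 7, 4]     # a point p of E (truncated)
p = sum(d / 10 ** (i + 1) for i, d in enumerate(digits))

r = 1e-4
n = math.ceil(math.log10(3 / r))                  # smallest n with 3/10^n < r
flipped = digits[:]
flipped[n] = 11 - flipped[n]                      # swap 4 <-> 7 in digit n+1
q = sum(d / 10 ** (i + 1) for i, d in enumerate(flipped))
print(abs(p - q), "<", r, "?", abs(p - q) < r)    # |p - q| = 3/10^(n+1)
```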