H: All operators on subspace are scalar I have an infinite dimensional unital $\mathrm{C}^*$-algebra $A\subset B(H)$ and a subspace $U\subset H$ such that for all $f\in A$, $f$ restricted to $U$ is scalar (there exists $\lambda\in\mathbb{C}$ such that): $$f(u)=\lambda u \quad \text{for all } u\in U.$$ Can we conclude that $U$ is (zero or) one dimensional? Follow up in the linked question. AI: No. Take for instance $H = U \oplus V$ where $U$ and $V$ are Hilbert spaces with $U$ more than 1-dimensional and $V$ infinite-dimensional. Take $A$ to be the C*-algebra of operators of the form $\begin{bmatrix} \lambda \cdot \mathrm{id}_U & 0 \\ 0 & T \\ \end{bmatrix}$ where $\lambda \in \mathbb{C}$ and $T \in B(V)$.
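A small numerical illustration of this block construction (a sketch with toy dimensions $\dim U = 2$, $\dim V = 3$; the genuine counterexample of course needs $V$ infinite-dimensional so that $A$ is an infinite-dimensional C*-algebra):

```python
# Toy check: block operators blkdiag(lam * I_U, T) act as the scalar lam on U,
# even though dim U = 2. (Dimensions here are illustrative only.)
import numpy as np

def op(lam, T, dim_u=2):
    n = dim_u + T.shape[0]
    A = np.zeros((n, n), dtype=complex)
    A[:dim_u, :dim_u] = lam * np.eye(dim_u)   # scalar block on U
    A[dim_u:, dim_u:] = T                     # arbitrary operator on V
    return A

T = np.random.default_rng(0).standard_normal((3, 3))
A = op(2 + 1j, T)
u = np.array([1, -4, 0, 0, 0], dtype=complex)   # a vector in U = span(e1, e2)
print(np.allclose(A @ u, (2 + 1j) * u))         # True: A restricted to U is scalar
```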
H: What's the point of obtuse angle trigonometry? What is the point of $\sin$ or $\cos$ or $\tan$ of an obtuse angle? Don't we use these functions to find a missing side in a right angle triangle? So why would I use it on an obtuse angle? I have also done a unit circle. AI: If all you want to do is think about right triangles then you will never encounter obtuse angles and so do not need to know about the values of the trigonometric functions at such angles. But there are triangles with an obtuse angle that you may want to study. Moreover, the trigonometric functions appear in many contexts that do not involve triangles. They are crucial in understanding periodic phenomena. There you want to know about $\sin x$ for all possible values of $x$ (whether negative or positive, and as large as you wish). Since you have "done the unit circle" you have started studying this context.
H: What does this language mean? I have to define a Turing machine for it. I'm tasked with defining a Turing machine for the language below. However I simply do not understand what this language means due to its notation. A Tm M = $( Q,\Sigma ,\Gamma ,\delta , q_0, q_a, q_r)$ is given. We want to define a Tm (not necessarily standard, could have more than one tape) that decides this language: {$\Omega$ uqv $\Omega$ u'q'v' : u,v,u',v' $\in$ ($\Gamma$ \ {blank symbol})*, q,q' $\in$ Q and (u,q,v) $\Rightarrow$ (u',q',v') according to M} on the alphabet $\Sigma ' = Q \cup (\Gamma $ \ {blank symbol}) $\cup$ {$\Omega$}. I have trouble understanding what the language means in the first place. Would love to have an explanation because without it I can't solve this. AI: I think a string here like $uqv$ means that the machine $M$ is in state $q$, that $u$ is the part of the tape to the left of its head, and $v$ is the part of the tape to the right of its head. The first symbol in $v$ is the symbol that is being scanned by $M$'s head. A string like $$\Omega\, uqv\, \Omega\, u'q'v'$$ should be interpreted as a statement about $M$. It means: If $M$ is in state $q$, and the tape has $uv$ on it, with $M$'s head over the first symbol in $v$, then $M$ will go into state $q'$, and leave $u'v'$ on the tape, with its head on the first symbol of $v'$. Your job is to define a machine $M_W$ that accepts the strings that are true statements about what $M$ would do. The mysterious-looking $\Omega$ symbol is just a delimiter, so that your machine $M_W$, looking at an input string, can know where $v$ ends and $u'$ begins. Without the $\Omega$, inputs to $M_W$ would be ambiguous. $M_W$ would get a string like $1011x000101y0000$, and although it could know that $v\,u' = 000101$ it wouldn't know which part of $000101$ was $v$ and which was $u'$. With the $\Omega$ your machine is given $\Omega1011x000\Omega101y0000$ and can see that $v=000$ and $u'=101$, because there is an $\Omega$ symbol separating them. We don't need delimiters in $uqv$ or $u'q'v'$ because $q$ is the name of a state, and separates the $u$ and $v$ parts, which are strings from $M$'s tape alphabet $\Gamma$. There is no ambiguity. $M_W$ will depend on $M$, so it's not really a single machine; it's a family of machines, one for each possible machine $M$ that you might be given. You are being asked not to build a particular machine, but to describe a method which you could use to build $M_W$ once you are told what $M$ is. The input to $M_W$ is not a sequence of configurations as you suggested in the comments. The input is only two configurations: one for before ($uqv$), and one for after ($u'q'v'$). Your machine $M_W$ will decide if $M$, presented with the “before” configuration, would turn it into the “after” configuration. Your machine $M_W$ will get a string. Then:

1. $M_W$ should check first that the string is in the correct form $\Omega\, uqv\, \Omega\, u'q'v'$. If not, it should reject. (It is acceptable for $M_W$ to reject by failing to halt.)
2. It should set up $uv$ on a tape somewhere, perhaps on an auxiliary tape.
3. It should simulate what $M$ would do, if it were in state $q$, looking at the $uv$ on its tape, with its head at the first symbol of $v$.
4. It should check that $M$ would go into state $q'$. If $M$ would go into some other state, $M_W$ should reject.
5. It should check that $M$ would leave $u'v'$ on the tape. If $M$ would leave something else on the tape, $M_W$ should reject.
6. It should check that $M$'s head would be over the first symbol of $v'$ on the tape. If $M$'s head is somewhere else, $M_W$ should reject.

If $u'q'v'$ correctly describes the configuration of $M$ after it has dealt with $uqv$, then $M_W$ should accept. In step 3, the “simulate” step, probably the easiest thing to do is to have $M_W$ actually change the $uv$ on the tape to whatever $M$ would have changed it to. Then $M_W$ can check that the $u'$ and $v'$ from its input match what is actually on the tape. Important: The thing that is missing in all of this is: where is $M$'s input used? I said that $\Omega\, uqv\, \Omega\, u'q'v'$ should be understood to mean that $M$ turns $uqv$ into $u'q'v'$. But $M$ won't do anything if $M$ isn't given any input from $\Sigma^\ast$. To know what $M$ will do, $M_W$ needs to know what the input to $M$ is. $M_W$ definitely doesn't get $M$'s input as part of its own input string, because $\Sigma \not\subset\Sigma'$. Is $M$'s input given to $M_W$ on one of $M_W$'s tapes? Or perhaps $M_W$ is supposed to tell us whether there is any input that would cause $M$ to turn $uqv$ into $u'q'v'$? The problem must say, but you didn't tell us. If this isn't clear, please leave a comment.
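For concreteness, here is a minimal Python sketch (with hypothetical names, not part of the original answer) of the check $M_W$ performs, assuming $M$'s transition function is given as a dictionary `delta[(state, symbol)] = (new_state, written_symbol, move)` and glossing over blank-padding conventions at the tape ends:

```python
# One step of M on configuration (u, q, v): M scans the first symbol of v.
BLANK = '_'

def one_step(u, q, v, delta):
    head = v[0] if v else BLANK
    q2, written, move = delta[(q, head)]
    rest = v[1:]
    if move == 'R':                    # written symbol becomes the last symbol of u
        return u + written, q2, rest
    # move == 'L': the last symbol of u is exposed under the head again
    return u[:-1], q2, (u[-1:] or BLANK) + written + rest

def m_w_accepts(u, q, v, u2, q2, v2, delta):
    # M_W simulates one step of M and compares with the claimed "after" configuration
    return one_step(u, q, v, delta) == (u2, q2, v2)
```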
H: Questions in proof of lemma before Theorem 7 of Section 6.5 of Hoffman & Kunze (Linear Algebra) While self studying Linear Algebra from Hoffman & Kunze, I have some questions in the proof of a lemma in the section "Simultaneous Triangulation; Simultaneous Diagonalization". Since I have asked 4 questions here, I will give a bounty of 50 points (under the criteria of rewarding an existing answer) to anyone who answers all 4, as it will take a good amount of time. Adding image of the lemma (page 207 of the book): This lemma is used in the proof: Questions: (1) Why does the assumption that $\mathcal{F}$ contains only a finite number of operators hold? The author then takes the maximal set to contain finitely many operators; why can't the maximal set have infinitely many operators? (2) How is $V_{1}$ larger than $W$? (3) How is $T(T_{1} - c_{1} I) \beta \in W$ equivalent to the statement $T\beta \in V_1$, for all $\beta \in V_1$ and all $T \in \mathcal{F}$? (4) How is $V_{2}$ invariant under $\mathcal{F}$? Kindly tell me how I should reason about these! AI: (1) Taking a maximal linearly independent subset of $\mathcal{F}$ means that every other function in $\mathcal{F}$ can be expressed as a linear combination of the functions in the maximal set, and of course condition (b) is invariant under linear combinations (if $f_1$, $f_2$ satisfy (b) then their linear combination satisfies (b)). Also -for some reason that I cannot see at the moment, and I hope it's not the main question- this maximal set can be considered finite. (2) I suppose you meant "$V_1$ larger than $W$". Now if $x\in W$, then $T_1(x)\in W$ as $W$ is invariant under $\mathcal{F}$. Hence $T_1(x)-c_1I(x)=T_1(x)-c_1x$ is a linear combination of elements of $W$, and thus is in $W$. So every $x\in W$ is in $V_1$. (3) I would suggest you take a closer look. It doesn't say that the statements are equivalent. He proves that if $T$ commutes with $T_1$ (and that's the case for every operator in $\mathcal{F}$, by definition) then $T(\beta)$ is in $V_1$ for any $\beta \in V_1$. In other words $V_1$ is invariant under $\mathcal{F}$ (keep that in mind for your fourth question). (4) $V_2$ was defined for $T_2$ as $V_1$ was defined for $T_1$, with the additional property that it is a subspace of $V_1$. With the same reasoning as in (3) (and also using (3)), we can show that $T(\beta)\in V_2$ for any $\beta \in V_2$, for all $T\in \mathcal{F}$. In other words (again) $V_2$ is invariant under $\mathcal{F}.$ Edit: Question (1) It's finite because $\dim V=n<+\infty$, for some $n\in \mathbb{N}$. Every linear operator $T$ can be represented by one $n\times n$ matrix. Every such matrix is a linear combination of the (linearly independent) matrices $A_{ij}$ defined by $a_{ij}=1$ and every other entry $a_{i'j'}=0$ for $(i',j')\neq (i,j)$, where $i,j,i',j'\in \{1,2,\ldots , n\}$. Of course the set of all such matrices is finite.
H: Quotient space with usual topology Let $X$ be a closed interval with the usual topology, and $C \subset X$ a closed subinterval. Is the quotient space $X / C$ then homeomorphic to the interval $X \setminus C$ with the usual topology? AI: It suffices to define the continuous surjective map $f\colon [0,5] \to [0,4]$ given by $$f(t) = \begin{cases} t, & t \le 4 \\ 4, & t \ge 4 \end{cases}$$ Observe that $f$ is closed since $f$ is the identity on the subspace $[0,4]$ and constant on $[4,5]$. Now, since $f$ is constant on the subspace $[4,5]$, it factors through $$\overline{f}\colon [0,5]\Big/[4,5] \to [0,4]$$ which is a homeomorphism since $f$ is a quotient map.
H: What does $\Bbb{Z}_{18}$ mean in this question? I am trying to solve this question Let $G=\Bbb{Z}_{18}$ and $H$ a subgroup of $G$ generated by $3$. Talk about normal subgroups and coset multiplication, then find the cosets $(G/H,\bigotimes)$ Does $\Bbb{Z}_{18}$ mean $$\{0,1,2,3,...,16,17\}$$ or $$\{1,a,a^2,...,a^{16},a^{17}\}$$ where $a$ is the generator of $G$? If it is the second definition, then what does $a$ equal here? I am just confused about whether I am supposed to use multiplication or addition here. AI: Your first option sounds good to me, if your $Z_{18}$ is $\Bbb{Z}_{18}$: the group is $\{0,1,\ldots,17\}$ under addition modulo $18$. The cosets in $\Bbb{Z}_{18}/H$ are $0+H=\{0,3,6,9,12,15\}$, $1+H=\{1,4,7,10,13,16\}$ and so on. Can you complete it?
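For intuition, the cosets can also be enumerated directly; a quick sketch of mine, not from the answer:

```python
# List the cosets of H = <3> in Z_18 (addition modulo 18).
n, g = 18, 3
H = sorted({(g * k) % n for k in range(n)})          # subgroup generated by 3
print("H =", H)                                       # [0, 3, 6, 9, 12, 15]
seen = set()
for a in range(n):                                    # each coset a + H
    coset = tuple(sorted((a + h) % n for h in H))
    if coset not in seen:
        seen.add(coset)
        print(f"{a} + H =", list(coset))
# Three cosets appear: 0+H, 1+H, 2+H, so G/H is cyclic of order 3.
```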
H: Prove that the following proposition is true. Let $a \in \mathbb{Z}$ and let $b \in \mathbb{Z}$. If $n$ does not divide $ab$ then $n$ does not divide $a$ and $n$ does not divide $b$. I am currently studying discrete math and I am unsure of how to format this proof in such a way as to get my point across. If anyone could write it out for me that would be very appreciated! Thank you in advance. AI: We can prove that the proposition is true by proceeding by reductio ad absurdum, the form of argument that attempts to establish a claim by showing that the opposite scenario would lead to absurdity or contradiction. If $n$ divided $a$ or $b$, then there would exist $h\in\mathbb{Z}$ such that $a=h\cdot n$ or $b=h\cdot n$. Therefore, it would follow that $ab=hb\cdot n$ or $ab=ha\cdot n$. So in either case, we would get that $n$ would divide $ab$, which contradicts the hypothesis that $n$ does not divide $ab$. Therefore it is not possible that $n$ divides $a$ or $b$, because that leads to absurdity. So we can claim that $n$ does not divide $a$ and $n$ does not divide $b$.
H: A question in a theorem of Section 6.6 of Hoffman Kunze I am reading Linear Algebra from Hoffman Kunze by myself and I have a question in a theorem of the section "Invariant Direct Sums": I am confused by how the authors prove uniqueness of the expression of $\alpha$ in proving that the direct sum representation exists, since to prove that the direct sum representation exists I must show that the representation of $\alpha$ is unique. I got confused because I think the author should take two representations and then prove them both equal, but here the author only took $E_{i} \beta_{i} = \alpha_{i}$. Can you kindly tell me how exactly the author proved uniqueness (i.e. the last 6 lines of the image)? AI: They did. They first established that $$\alpha = E_1\alpha + \cdots + E_n\alpha$$ is one desired decomposition with $E_i\alpha \in W_i$. Then they took a different decomposition $$\alpha = \alpha_1 + \cdots + \alpha_n$$ with $\alpha_i \in W_i$ and showed that actually it must be $\alpha_i = E_i\alpha$ for $1 \le i \le n$. Therefore the decomposition $\alpha = E_1\alpha + \cdots + E_n\alpha$ is unique. For any other decomposition $$\alpha = \beta_1 + \cdots + \beta_n$$ with $\beta_i \in W_i$ we would also get $\beta_i = E_i\alpha$ and hence $\beta_i = \alpha_i$ for all $1 \le i \le n$.
H: Convergence of double integral Let's assume that $f \in L^p(\mathbb{R})$ ($1 \leq p < \infty$). Does it then hold that $$ \lim_{n \rightarrow \infty} \int_0^1 \int_0^1 \left \lvert f\left( \frac{\tilde{r}}{n} \right) -f\left(\frac{r}{n} \right) \right \rvert^p ~\mathrm{d}r \mathrm{d}\tilde{r} = 0 \quad ? $$ And if yes, how can I prove it? I am quite certain that this is true. I tried to use that $\displaystyle \lim_{h \rightarrow 0} \lVert f(\cdot + h) - f \rVert_{L^p} = 0$ but that did not help me, as in this case I have a double integral. Maybe I could use a density argument or a transformation but I don't quite know how. Can anyone help me out? Or does there exist a counterexample? AI: Here is a counterexample with $p=1$. Take $$ f(x) = \mathbf 1_{(\frac 1 2, \frac 3 4)} + \mathbf 1_{(\frac 1 4, \frac 3 8)} + \mathbf 1_{(\frac 1 8, \frac 3 {16})} + \mathbf 1_{(\frac 1 {16}, \frac 3 {32})} + \dots $$ Then $f \in L^1(\mathbb R)$, with $\| f \|_{L^1(\mathbb R)} = \frac 1 2 $. Now observe that $$ x \in [0, 1] \implies f(x) = f\left( \frac x n \right) $$ for all $n$ of the form $n = 2^k$ where $k \in \mathbb N$. So for $n$ of the form $n = 2^k$, we have $$ \int_0^1 \int_0^1 | f\left( \tfrac x n \right) - f\left( \tfrac y n \right) | dx dy = \int_0^1 \int_0^1 | f\left(x \right) - f\left( y \right) | dx dy = \frac 1 2 $$ Hence it's impossible for $ \int_0^1 \int_0^1 | f\left( \tfrac x n \right) - f\left( \tfrac y n \right) | dx dy $ to tend to $0$ as $n \to \infty$.
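A quick numeric sanity check of this counterexample (my sketch; the interval family is $(2^{-k}, \tfrac32\cdot 2^{-k})$ for $k\ge 1$):

```python
import random

def f(x):
    # indicator of the union of intervals (2^-k, 1.5 * 2^-k), k = 1, 2, ...
    return 1.0 if any(2.0**-k < x < 1.5 * 2.0**-k for k in range(1, 60)) else 0.0

random.seed(0)
xs = [random.random() for _ in range(2000)]
assert all(f(x) == f(x / 2) == f(x / 4) for x in xs)   # self-similar under halving

N = 100_000   # Monte Carlo estimate of the double integral of |f(x) - f(y)|
est = sum(abs(f(random.random()) - f(random.random())) for _ in range(N)) / N
print(est)    # ~0.5 at every dyadic scaling, matching the answer
```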
H: How can we tell the relationship between two lim-sup sets? Let $f_n(x)$ be sequence of functions, and $\epsilon>0$. Denote two sets as follows $$E(\epsilon) = \limsup_{n\rightarrow\infty}\{x:|f_n(x)| >\epsilon \}$$ $$F = \limsup_{n\rightarrow\infty}\{x:|f_n(x)| > 1/n \}.$$ Based on the definitions above, can we conclude $E(\epsilon) \subset F$ for any $\epsilon>0$ ? AI: Yes. If $x \in E(\varepsilon)$, this means that for infinitely many $n \in \Bbb{N}$ we have $|f_n(x)| > \varepsilon$ so there exists an increasing sequence $(p(n))_n$ in $\Bbb{N}$ such that $|f_{p(n)}(x)| > \varepsilon$ for all $n \in \Bbb{N}$. Pick $n_0 \in \Bbb{N}$ such that $\frac1{n_0} < \varepsilon$. Then for all $n \in \Bbb{N}$ such that $p(n) \ge n_0$ we have $$|f_{p(n)}(x)| > \varepsilon > \frac1{n_0} \ge \frac1{p(n)}$$ and the set $\{p(n) : n \in \Bbb{N}, p(n) \ge n_0\}$ is still an infinite subset of $\Bbb{N}$ so $x \in F$.
H: Prove that $P(\tau<\infty)=1$ and find the distribution of $X_\tau$ Let $\{X_n\}_{n\ge1}$ be a sequence of i.i.d. random variables with distribution $\mu$. For $A\in\mathcal{B}(\mathbb{R}):=\text{Borel $\sigma$-algebra of $\mathbb{R}$}$ with $\mu(A)\in(0,1)$, define \begin{align*} \tau=\inf{\{k\ge1, X_k\in A\}} \end{align*} (1) Prove that $P(\tau<\infty)=1$. (2) Prove that $X_\tau$ has distribution given by \begin{align*} P(X_\tau\in H)=\frac{\mu(H\cap A)}{\mu(A)},\quad H\in\mathcal{B}(\mathbb{R}). \end{align*} I know that $\tau$ is a stopping time with respect to $F_n:=\sigma(X_1,...,X_n)$, in particular $\tau$ is a hitting time, but I cannot figure out how to use this information to solve the problem. I feel like this is a pretty standard question and I have overthought myself into frustration with it; any help here would be greatly appreciated. AI: Let $\{X_n\}_{n \in \mathbb N}$ be a family of i.i.d. random variables with distribution $\mu$. Let $A \in \mathcal B(\mathbb R)$ be such that $\mu(A) \in (0,1)$. Define $\tau_A := \inf \{ n \ge 1 : X_n \in A \}$. a) Note that $\tau_A$ has geometric distribution with parameter $\mu(A)$. Indeed, for $n \ge 1$: $$ \mathbb P(\tau_A = n) = \mathbb P(X_1,...,X_{n-1} \not \in A, X_n \in A) = \mu(A)(1-\mu(A))^{n-1} $$ So (since $\mu(A) \in (0,1)$!) $$ \mathbb P(\tau_A < \infty) = \sum_{n=1}^\infty \mathbb P(\tau_A = n) = \mu(A)\sum_{n=1}^\infty (1-\mu(A))^{n-1} = \frac{\mu(A)}{1-(1-\mu(A))} = 1 $$ b) Let $X_{\tau_A}(\omega) = X_{\tau_A(\omega)}(\omega)$ (it is well defined up to a set of measure $0$). Take any $H \in \mathcal B(\mathbb R)$. We have: $$ \mathbb P(X_{\tau_A} \in H) = \sum_{n=1}^\infty \mathbb P(\{X_n \in H\} \cap \{ \tau_A = n \}) = \sum_{n=1}^\infty \mathbb P(\{X_1,...,X_{n-1} \not \in A\} \cap \{X_n \in H \cap A \}) $$ The last equality is due to the fact that on $\{\tau_A = n\}$ every one of $X_1,...,X_{n-1}$ must not fall into $A$, and $X_n$ must fall into $A$ and $H$ simultaneously. Since they are independent, we get that each term is equal to $(1-\mu(A))^{n-1} \cdot \mu(H \cap A)$, so: $$ \mathbb P(X_{\tau_A} \in H) = \mu(A\cap H) \sum_{n=1}^\infty (1-\mu(A))^{n-1} = \frac{\mu(A \cap H)}{\mu(A)}$$
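A Monte Carlo spot check (my sketch, with the illustrative choices $X_i \sim N(0,1)$, $A=(1,\infty)$, $H=(1.5,\infty)$):

```python
import random
from statistics import NormalDist

random.seed(1)
nd = NormalDist()
trials, count = 20_000, 0
for _ in range(trials):
    x = random.gauss(0, 1)
    while x <= 1.0:              # resample until the first X_k lands in A
        x = random.gauss(0, 1)
    count += x > 1.5             # did X_tau land in H?
print(count / trials)                            # empirical value
print((1 - nd.cdf(1.5)) / (1 - nd.cdf(1.0)))     # mu(H ∩ A)/mu(A) ≈ 0.421
```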
H: Example of non-local homomorphism on local rings Could someone give an example of a homomorphism between local rings which is not local? I tried finding an example by defining a homomorphism from $\mathbb{Z}/9\mathbb{Z}$ to $\mathbb{Z}/3\mathbb{Z}$ which maps $a+9\mathbb{Z}$ to $a+3\mathbb{Z}$. But it turns out that $0,3,6$ (i.e. the non-units of $\mathbb{Z}/9\mathbb{Z}$) are mapped to $0$ (a non-unit in $\mathbb{Z}/3\mathbb{Z}$). Thus it is a local homomorphism, not the example I need. I usually struggle to find examples on my own. Any suggestions on how to improve? AI: How about $R=\{a/b:a,b\in\Bbb Z, b\text{ odd}\}$ where the maximal ideal is $2R$ and $S=\Bbb Q$. Then the inclusion $R\to S$ is not a local homomorphism.
H: Find the Taylor series for $f(z)=\frac{i}{(z-i)(z-2i)}$ about $z_0=0$. Find the Taylor series for $f(z)=\frac{i}{(z-i)(z-2i)}$ about $z_0=0$ and the disk of convergence. For the Taylor series I got $-\frac{i}{2}+\frac{z(2+i)}{4}-\frac{z(2i+3)}{2!}...$, but I'm not super confident in it. Can someone confirm or deny if it's correct? To find this I simply used the expansion $$f(z_0)+\frac{f'(z_0)}{1!}(z-z_0)+\frac{f''(z_0)}{2!}(z-z_0)^2+\cdots$$ I double checked with a derivative calculator so I know my values are right, I just wanted to double check that this is the right way to do it. I always thought it was, but that's not how my complex analysis class does it and I don't understand their way. AI: Note that, if $z\in\Bbb C\setminus\{i,2i\}$,\begin{align}\frac i{(z-i)(z-2i)}&=\frac1{i-z}-\frac1{2i-z}\\&=-\frac i{1+iz}+\frac12\frac i{1+iz/2}.\end{align}Therefore, if $|z|<1$,\begin{align}\frac i{(z-i)(z-2i)}&=-i\sum_{n=0}^\infty(-iz)^n+\frac i2\sum_{n=0}^\infty\left(-\frac{iz}2\right)^n\\&=\sum_{n=0}^\infty\left((-i)^{n+1}-\left(-\frac i2\right)^{n+1}\right)z^n\\&=\sum_{n=0}^\infty(-i)^{n+1}\left(1-\frac1{2^{n+1}}\right)z^n.\end{align}So, this last power series is the Taylor series that you're after.
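One can confirm the series with a computer algebra system; a short sympy sketch (mine, not from the answer):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.I / ((z - sp.I) * (z - 2 * sp.I))
print(sp.series(f, z, 0, 4))     # -I/2 - 3*z/4 + 7*I*z**2/8 + 15*z**3/16 + O(z**4)
for n in range(4):               # closed form (-i)^(n+1) * (1 - 2^-(n+1))
    print(n, sp.simplify((-sp.I)**(n + 1) * (1 - sp.Rational(1, 2**(n + 1)))))
```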
H: combinatorics: 5 people picking 10 seats when there must be at least one space between them I have this question: How many seating arrangements are there for $5$ people to sit in $10$ seats in a row, when no $2$ people can sit next to each other? My idea: If there must be at least one space between every 2 people, the spaces must be something like this: _ _ s _ s _ s _ s _ There must be $2$ open seats next to each other, so they have $5$ options of where to be, and the person who sits there has $2$ options to choose from. The seating arrangements for $5$ people are $5!$ in a standard row, so overall: $5 \cdot 2 \cdot 5! = 1200$. My answer is wrong, so I was wondering what is a better way to think about it. AI: You must have p_p_p_p_p and one more empty seat somewhere. There are $6$ possible locations for the remaining empty seat: on one end, or between two people. The $5$ people can be arranged in $5!$ different ways in their chosen seats, so altogether there are $6\cdot5!=720$ arrangements. Notice that you don’t have to have two empty seats next to each other: you can have, for instance, _p_p_p_p_p. If you want to use that line of argument, you should notice that there are just $4$ places between two people, so there are $4$ places to put the pair of seats. But then there are also the arrangements _p_p_p_p_p and p_p_p_p_p_, for a total of $6$.
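A brute-force check of the count (my sketch):

```python
from itertools import combinations
from math import factorial

# Seat patterns: 5 occupied seats out of 10 with no two adjacent.
patterns = [s for s in combinations(range(10), 5)
            if all(b - a >= 2 for a, b in zip(s, s[1:]))]
print(len(patterns))                   # 6 valid seat patterns
print(len(patterns) * factorial(5))    # 6 * 5! = 720 arrangements
```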
H: Bisection method for $f(x)=x^4-x-2$ Finding a root for the polynomial $$f(x)=x^4-x-2$$ I just want to double-check if I've done this correctly, as it's my first time doing so. My work so far In the above function, $a=1$ and $b=2$ work, as $f(a)$ and $f(b)$ have opposite signs $$f(1)=(1)^4-(1)-2=-2$$ $$f(2)=(2)^4-(2)-2=+12$$ Thus, I would presume the function is continuous and there must be a root within the interval $[1, 2]$. As the end points of the interval which brackets the root are $a_1=1$ and $a_2=2$, the midpoint must be $$c_1=\frac{2+1}{2}=1.5$$ and the value at the midpoint is $$f(c_1)=(1.5)^4-(1.5)-2=1.5625$$ \begin{array}{|c|c|c|c|c|} \hline \text{Iteration}& a_n & b_n & c_n & f(c_n) \\ \hline 1 & 1 & 2 & 1.5 & 1.5625 \\ \hline 2 & 1.5 & 2 & 1.75 & 5.628906 \\ \hline 3 & 1.5 & 1.75 & 1.625 & 3.3479 \\ \hline 4 & 1.5 & 1.625 & 1.5625 & 2.397964 \\ \hline 5 & 1.5 & 1.5625 & 1.53125 & 1.966493 \\ \hline 6 & 1.5 & 1.53125 & 1.515625 & 1.761131 \\ \hline 7 & 1.515625 & 1.53125 & 1.523438 & 1.862969 \\ \hline 8 & 1.515625 & 1.523438 & 1.519532 & 1.811845 \\ \hline 9 & 1.519532 & 1.523438 & 1.521485 & 1.837354 \\ \hline 10 & 1.519532 & 1.521485 & 1.520509 & 1.824593 \\ \hline 11 & 1.520509 & 1.521485 & 1.520997 & 1.83097 \\ \hline 12 & 1.520997 & 1.521485 & 1.521241 & 1.834161 \\ \hline 13 & 1.521241 & 1.521485 & 1.521363 & 1.521363 \\ \hline 14 & 1.521363 & 1.521485 & 1.521424 & 1.836556 \\ \hline 15 & 1.521363 & 1.521424 & 1.521394 & 1.836163\\ \hline \end{array} After the 13th iteration, it becomes apparent there is a convergence to about 1.521, a root for the polynomial. Is my process correct? Also, I've tried this out for $f(x)=x^3-x-2$ and the root was 1.521 as well. Would this be due to the exponents being close in value? AI: The solutions can be checked against the exact ones. Note $$x^4-x-2=(x+1)(x^3-x^2+x-2)=0$$ which leads to the real roots $x=-1$ and $$x= \frac13\left( 1+\sqrt[3]{\frac{47-3\sqrt{249} }2} + \sqrt[3]{ \frac{47+3\sqrt{249}}2} \right)=1.3532 $$
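For reference, a correct bisection run (my sketch) converges to the answer's real root $\approx 1.3532$ rather than $1.521$; the key is the sign test, which the table above appears to skip when it keeps $[1.5, 2]$ after the first midpoint even though $f(1)<0<f(1.5)$:

```python
# Bisection for f(x) = x^4 - x - 2 on [1, 2].
def f(x):
    return x ** 4 - x - 2

a, b = 1.0, 2.0
for _ in range(40):
    c = (a + b) / 2
    if f(a) * f(c) <= 0:     # sign change in [a, c]: keep the left half
        b = c
    else:                    # otherwise the root is in [c, b]
        a = c
print((a + b) / 2)           # ~1.35321, matching the exact root in the answer
```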
H: Why is $\arctan(\tan(25\pi / 4)) = \pi/4$? Why is $\arctan(\tan(25\pi /4)) = \pi/4$, and how can I get from the expression on the left to the one on the right? AI: The ${\tan(x)}$ function is periodic, meaning it will not be injective and thus will not have an inverse, unless you restrict the domain. For example, in your case $${\tan\left(\frac{25\pi}{4}\right)=\tan\left(\frac{\pi}{4}\right)}$$ So if you apply the inverse function - what value do we take? Both ${\frac{25\pi}{4}}$ and ${\frac{\pi}{4}}$ give the same answer under the tangent function. So we are the ones who must decide which value we take. In exactly the same way that we decide ${\sqrt{4}=2}$, and not ${-2}$ (even though $(-2)^2=4$ as well). The decision we make is called the "principal values". For example we say the principal square root of four is two. We take the principal values of ${\arctan(x)}$ to be from the interval ${\left(-\frac{\pi}{2},\frac{\pi}{2}\right)}$, and since ${\tan\left(\frac{25\pi}{4}\right)=\tan\left(\frac{\pi}{4}\right)}$ and ${-\frac{\pi}{2}< \frac{\pi}{4}< \frac{\pi}{2}}$, that's the answer you will get to the expression: $${\arctan\left(\tan\left(\frac{25\pi}{4}\right)\right)=\frac{\pi}{4}}$$
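A quick numeric confirmation (my sketch): since $\frac{25\pi}{4} = \frac{\pi}{4} + 6\pi$, the tangent repeats and $\arctan$ lands back on the principal value.

```python
import math

x = 25 * math.pi / 4
print(math.atan(math.tan(x)))   # 0.78539816... (up to floating-point noise)
print(math.pi / 4)              # 0.78539816...
```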
H: Calculating a limit using polar coordinates? How to calculate the following limit: $$\lim_{(x,y)\to(0,0)}\frac{-2yx^3}{(x^2+y^2)^2}$$ I replaced $x$ with $r\cos(\theta)$ and $y$ with $r\sin(\theta)$ and got $-2\sin(\theta)\cos^3(\theta)$ and got stuck there... any help? AI: There is no need to use polar coordinates here. Just note that $f(x,x)=-\frac12$ if $x\ne0$, which implies that $\lim_{x\to0}f(x,x)=-\frac12$. But $\lim_{x\to0}f(x,0)=0$. If you want to do it with polar coordinates, your computations are correct: in fact, $f(r\cos\theta,r\sin\theta)=-2\sin(\theta)\cos^3(\theta)$. Now, note that this is equal to $0$ if $\theta=0$ and that it is equal to $-\frac12$ if $\theta=\frac\pi4$. Therefore, there is no limit at $(0,0)$.
H: Evaluating the Laplace transform of {x} I came upon a question on Laplace transforms in my exams. $${\scr L}[\{x\}]$$ where $\{x\}=x - [x]$ denotes the fractional part of $x$. This is how I approached the problem. $$f(x)=\{x\}=\sum_{n=-\infty}^\infty{(x-n)(u_n(x)-u_{n+1}(x))}$$ where $u_a(x)$ denotes the Heaviside function shifted to $x=a$. $$\implies \scr L[f(x)]=\sum_{n=-\infty}^\infty{e^{-ns}\scr L[x-n]} - e^{-(n+1)s}\scr L[x-n]$$ $$=\sum_{n=-\infty}^\infty{\frac{e^{-ns}-e^{-(n+1)s}}{s^2}-\frac{n(e^{-ns}-e^{-(n+1)s})}{s}}$$ $$=\sum_{n=-\infty}^\infty{\frac{e^{-ns}-e^{-(n+1)s}}{s}\left(\frac{1}{s}-n\right)}$$ Is the way I approached the problem correct? I noticed that the last summation is a telescopic series which results in a $e^{\infty s}$ term which is making me doubtful. I've checked the procedure multiple times, and the vanishing property seems to hold too, but still, I'm not sure what to make of the telescopic series term. So, in short, I need an experienced eye on my method to tell me if I went wrong somewhere. Thanks in advance. AI: You almost have it correct, but it appears that you are trying to take the bilateral Laplace Transform of $\{x\}$, which doesn't exist. If you take the unilateral Laplace Transform, then the summation extends from $n=0$ to $\infty$. Also, note that for $f_n(x)=(x-\lfloor x\rfloor)(u_n(x)-u_{n+1}(x))=(x-n)(u_n(x)-u_{n+1}(x))$, we have $$\begin{align} \sum_{n=0}^\infty\mathscr{L}\{ f_n\}(s)&=\sum_{n=0}^\infty \left(e^{-ns}\mathscr{L}\{x\}(s)-e^{-(n+1)s}\mathscr{L}\{x+1\}(s)\right)\\\\ &=\sum_{n=0}^\infty\left(\frac{e^{-ns}-e^{-(n+1)s}}{s^2}-\frac{e^{-(n+1)s}}{s}\right) \end{align}$$ which is not quite what you have. Here is how I would suggest to proceed. Let $f(x)=\{x\}$. Then, the Laplace Transform of $f$ is given by $$\begin{align} \mathscr{L}\{f\}(s)&=\int_0^\infty \{x\}e^{-sx}\,dx\\\\ &=\int_0^\infty \left(x-\lfloor x\rfloor\right)e^{-sx}\,dx\\\\ &=\frac1{s^2}-\int_0^\infty \lfloor x\rfloor e^{-sx}\,dx\\\\ &=\frac1{s^2}-\sum_{n=0}^\infty \int_{n}^{n+1}\lfloor x\rfloor e^{-sx}\,dx\\\\ &=\frac1{s^2}-\sum_{n=0}^\infty ne^{-ns}\int_{0}^{1} e^{-sx}\,dx\\\\ &=\frac1{s^2}-\left(\frac{1-e^{-s}}{s}\right)\sum_{n=0}^\infty ne^{-ns}\\\\ &=\frac1{s^2}-\left(\frac{1-e^{-s}}s\right)\frac{e^{-s}}{(1-e^{-s})^2}\\\\ &=\frac1{s^2}-\frac{e^{-s}}{s(1-e^{-s})}\\\\ &=\frac1{s^2}-\frac{e^{-s/2}}{s(e^{s/2}-e^{-s/2})}\\\\ &=\frac1{s^2}+\frac1{2s}-\frac{\coth(s/2)}{2s} \end{align}$$
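A numeric spot check of the closed form (my sketch, using a crude midpoint Riemann sum for the truncated integral):

```python
import math

def lhs(s, N=200_000, T=40.0):
    # midpoint Riemann sum of integral_0^T frac(x) e^{-s x} dx
    h = T / N
    return h * sum(((k + 0.5) * h % 1.0) * math.exp(-s * (k + 0.5) * h)
                   for k in range(N))

def rhs(s):
    return 1 / s**2 - math.exp(-s) / (s * (1 - math.exp(-s)))

for s in (0.5, 1.0, 2.0):
    print(s, lhs(s), rhs(s))   # the two columns agree to several decimals
```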
H: Bijection between the power set of $\mathbb{Z}_+$ and the set of all infinite sequences $(x_n)_{n=1}^\infty$ with $x_n\in\{0,1\}$ Proposition. There is a bijective correspondence between $\mathscr P{(\mathbb{Z_+})}$, the power set of $\mathbb{Z_+}$, and $X^\omega$, the set of all infinite sequences of elements of $X$, where $X=\{0,1\}$. (Source: Munkres Topology 7.3) Proof Let $f: \mathscr P{(\mathbb{Z_+})} \to X^\omega$. Define $f(A) = (x_i)_{i\in\mathbb{Z_+}}$, for $A \in \mathscr P{(\mathbb{Z_+})}$, such that if $i \in A$, $x_i = 1$, otherwise $x_i = 0$. We first show $f$ is an injection. Suppose $A,A' \in \mathscr P{(\mathbb{Z_+})}$ such that $A=A'$. It follows that $A,A' \subset \mathbb{Z_+}$. Consider $f(A)$ and $f(A')$. Then $f(A) = x \in X^\omega$ and $f(A') = x^{'} \in X^\omega$. By definition of $X^\omega$, $x = (x_i)_{i\in\mathbb{Z_+}}$ and $x^{'} = (x^{'}_i)_{i\in\mathbb{Z_+}}$ for $x_i$, $x^{'}_i \in X$. It follows from the definition of $f$, that $\forall j\in\mathbb{Z_+}$, if $j\in A$, then $x_j = 1$. Since $A = A'$ by assumption, $\forall j\in A$, $j\in A'$ and $x^{'}_j = 1$. Hence $\forall j\in \mathbb{Z_+} \cap A = \mathbb{Z_+} \cap A^{'}$, $x_j = 1 = x^{'}_j$. Further, $\forall j\in \mathbb{Z_+} - A = \mathbb{Z_+} - A^{'}$, $x_j = 0 = x^{'}_j$. Thus $\forall j\in\mathbb{Z_+}$, $x_j = x^{'}_j$. Hence, $x = x^{'}$. Therefore, $f$ is an injection. To show $f$ is a surjection, arbitrarily choose $x \in X^\omega$. Then $x = (x_i)_{\mathbb{i\in Z_+}}$, such that each $x_i\in X = \{0,1\}$. Construct a set $A$ as follows. For each $i\in \mathbb{Z_+}$, if $x_i=1$, let $i$ be an element of $A$. Whether empty or not, $A \subset \mathbb{Z_+}$. Thus $A \in\mathscr{P(\mathbb{Z_+})}$. Hence $\forall x\in X^\omega$, $\exists A\in\mathscr{P(\mathbb{Z_+})}$, such that $x = f(A)$. Therefore, $f$ is a surjection. We have shown there exists a bijective correspondence between $\mathscr P{(\mathbb{Z_+})}$ and $X^\omega$. Question I would appreciate a verification of this proof. Also, is there a better way to notate the definition of $f$ above? AI: Your bijection is fine, but you’ve not shown that it’s injective: to do that you have to show that if $f(A)=f(A')$, then $A=A'$. Clearly you can’t do this by assuming that $A=A'$ to begin with. What you’ve shown is actually that $f$ is well-defined: if $A=A'$, then $f(A)=f(A')$. If you turn the argument around, starting with the assumption that $f(A)=f(A')$ and proving from that that $A=A'$, you’ll have shown that $f$ is injective. The argument can also be tightened up a bit: Let $x=f(A)$ and $x'=f(A')$, and suppose that $x=x'$. Then for each $i\in\Bbb Z_+$ we have $i\in A$ iff $x_i=1$ iff $x_i'=1$ iff $i\in A'$, so $A=A'$. Your proof that $f$ is surjective is fine, though again it can be tightened up a little: just note that if $A=\{i\in\Bbb Z_+:x_i=1\}$, then $f(A)=x$.
H: Find all matrices that are invariant under base changes. Today I was solving a problem about listing all pairs $(a,b)\in \mathbb{R}^2$ such that there exists a unique symmetric $2\times 2$ matrix $A$ with $\det(A) = a$ and $\operatorname{trace}(A)=b$. The solution to this specific problem is all pairs such that $b^2 = 4a$, and the matrix is given by the diagonal matrix with all entries along the diagonal being $a_{ii} = \frac b2$. And for any diagonal matrix (with all entries being equal on the diagonal) we can find such $a,b$. Now, to my actual doubt. Trace and determinant (along with all the coefficients of the characteristic polynomial) are invariant under basis changes. Generalizing this problem and dropping the symmetry condition: in order for such a matrix to exist and be unique, it must be invariant under any change of basis. I have proven that all diagonal matrices with just one eigenvalue fulfill this condition, but I would like to know if there exist any other linear operators such that their matrix representation is the same no matter the basis chosen. AI: All linear maps which are invariant under change of basis are multiples of the identity as you mentioned, in other words if $\cal{B}=\{e_1,...,e_n\}$ is a basis, $\cal{C}=\{e_1',...,e_n'\}$ a different basis and $L$ is a linear map such that $[L]_{\cal{B}} =[L]_\cal{C}$ then $L$ is a multiple of the identity. To prove this take $\cal{C}=\{2e_1,e_2,...,e_n\}$ and use the invariance condition to find what $L$ has to look like.
H: Approximating minimum of $f(x)+g(x)$ by solving for $f(x)=g(x)$ I'm trying to make sense of the following argument from a paper I'm reading. The author has the following upper bound: $$R(T) \le N + O\left(T\sqrt{\frac{\log T}{N}}\right)$$ They then state the following: "Recall that we can select any value for $N$ so long as it is known to the algorithm before the first round. So we can choose $N$ so as to (approximately) minimize the right-hand side. Noting that the two summands are, resp., monotonically increasing and monotonically decreasing in $N$, we set $N$ so that they are (approximately) equal." So, the general argument seems to be that, given two functions, $f(x)$ and $g(x)$, where $f(x)$ is monotonically increasing and $g(x)$ is monotonically decreasing, if we can find the solution, $y$, to $f(y)=g(y)$, then we have $\min_x\{f(x)+ g(x)\} \approx f(y) + g(y)$ But I see no reason why this should be true. In fact, if $g$ decreases faster than $f$ increases then $\min_x \{f(x) + g(x)\} \rightarrow - \infty$, right? There has to be something missing here, or something I'm not seeing. AI: If you restrict your attention to nonnegative monotone functions and take $f,g:\mathbb R\to\mathbb R_{\geq0}$ so that $f$ is nondecreasing and $g$ is nonincreasing, suppose we find some $y$ so that $f(y)=g(y)$. Set $M := f(y)+g(y)$ as our "approximate" minimum for the sum. By the nonnegativity assumption, we will have for $x>y$ that $f(x)+g(x)\geq f(x)\geq f(y) = \frac M2$ and likewise when $x<y$. I'm not sure what the context of this problem is, but this shows that using a solution $f(y)=g(y)$ in this context will always provide a $2$-approximation for the minimum of $f+g$. Since they're only speaking about an "approximate" minimum, maybe this is sufficient for their purposes? In fact, a $2$-approximation is the best you can do this way: take $f(x)=c$ and let $g$ be the nonincreasing function with $g(x)=c$ for $x\leq 0$ and $g(x)=0$ for $x>0$; then $\min(f+g)=c$ (approached for $x>0$), but solving $f(y)=g(y)$ gives $y\leq 0$ and the output $f(y)+g(y)=2c$. Moreover, like you mentioned, dropping the nonnegativity constraint makes this even worse. For a concrete example, you can just take $f=0$ and $g(x)=-x$, then this approximation will output $0$ while the true minimum is unbounded below.
H: Show $\begin{bmatrix}d-\lambda\cr -c\end{bmatrix}$ is eigenvector of $\begin{bmatrix}a &b\cr c&d\end{bmatrix}$ Show that $v=\begin{bmatrix}d-\lambda\cr -c\end{bmatrix}$ is an eigenvector of $A=\begin{bmatrix}a &b\cr c&d\end{bmatrix}$, where $\lambda$ is an eigenvalue of $A$. I first did $\operatorname{det}(A-\lambda I)=0$ and got $\lambda^2+(-a-d)\lambda+(ad-bc)=0$. To that end, I then tried the quadratic formula but that really didn't get me anywhere. I need to somehow get $\lambda$ values to get the eigenvectors, but I can't even get that. AI: $$\begin{bmatrix}a &b\cr c&d\end{bmatrix}\begin{bmatrix}d-\lambda\cr -c\end{bmatrix}=\begin{bmatrix}ad-\lambda a-bc\\cd-\lambda c-cd\end{bmatrix}=\begin{bmatrix}\lambda d-\lambda^2\\-\lambda c\end{bmatrix}=\lambda\begin{bmatrix}d-\lambda\\-c\end{bmatrix},$$ where the second equality follows using $\lambda^2+(-a-d)\lambda+(ad-bc)=0$. Edit: Of course, since an eigenvector must be non-zero, this is only true when $d \neq \lambda$ or $c\neq 0.$
H: Is $f(x) = mx + c$ the only set of solutions to $f(x + 1) - f(x) = \text{constant}$ where $m$, $c$, and $x$ are integers? Is $f(x) = mx + c$ the only set of solutions to $$f(x + 1) - f(x) = \text{constant}$$ where $m$, $c$, and $x$ are integers? I was watching this video on Youtube (https://www.youtube.com/watch?v=uJqbHaFqjmI) and in the video he solves for the function $f$ by realizing that $f(x + 1) - f(x) = \text{constant}$. So $f(x) = mx + c$ and then he solves for $m$ and $c$. Even though these are valid solutions, I wonder if $mx + c$ is the only set of functions with that property. More precisely, how do we know that there is no function $g$ satisfying $g(x+1) - g(x) = \text{constant}$ for which there do not exist $m$ and $c$ such that $g(x) = mx + c$ for all $x$? AI: If ${g(x)}$ is a function with domain being the integers, you can arrive at this conclusion inductively. For example: $${g(1) - g(0) = c}$$ $${\Leftrightarrow g(1) = g(0) + c}$$ $${g(2) - g(1) = c}$$ $${\Leftrightarrow g(2) = g(1) + c = 2c + g(0)}$$ $${...}$$ $${g(n) = nc + g(0)}$$ Likewise, do the same thing and check for negative integers (you will get the same form): $${g(-n) = -nc + g(0)}$$ Hence overall $${\forall\ x \in \mathbb{Z}: g(x) = cx + g(0)}$$ which is of the form ${g(x) = mx + c}$. So it has to be the case, since all we have done is take the defining property that you are interested in and inductively shown the function must be able to be written in the form ${g(x) = mx + c}$. Of course for rigour you can write the whole induction argument out (check the base case, assume yadda yadda yadda). EDIT: This of course only works if the domain of the function in question is ${\mathbb{Z}}$ (since we only checked the integer values, we cannot say anything about ${g}$ at non-integer values, if it's defined at non-integer values). As @Batominovski has pointed out - if the domain was the whole of ${\mathbb{R}}$ - there do exist non-linear solutions, with a really nice example given. But yeah, in our case if ${g(x)}$ is a function with the domain being the set of integers with the property ${g(x+1)-g(x) = c}$, then indeed it must be of the form ${g(x) = mx + c}$.
H: If $f$ is not bounded from above, then $\lim_{x \to b^{-}}f(x) = \infty$ - Feedback on attempted proofs Let $f:[a,b) \to \mathbb{R}$ be a strictly monotone increasing continuous function on a half closed interval $[a,b)$, and let $d$ be a real number. Claim: If $f$ is not bounded from above, then $\lim_{x \to b^{-}}f(x) = \infty$ Attempt (including reasoning): We know: i) $f$ is a continuous monotone increasing function. ii) $f$ is not bounded. Which means by definition: If $f$ is continuous on $[a,b)$, then $f$ is not bounded above if for all $N >0$, there exists $x \in [a,b)$ such that $f(x) > N$. What we want: To show: $\lim_{x \to b^{-}}f(x) = \infty$, which is a symbolic way of saying that the function diverges. This means by definition: For all $M > 0$, there exists a $\delta > 0$ such that if $0< |b - x| < \delta$ then $f(x) > M$. Proof 1: Assume towards contradiction that $\lim_{x \to b^{-}}f(x) = d$. Since $f$ is monotonic, this would imply that $f(a) \leq f(x) \leq d$, which would imply that the function $f$ is bounded above. That would be a contradiction to our original assumption. Proof 2: This is more of a brainstorm. I wanted to try to show the result directly, but was coming up empty. What I envision happening is that I could use the definition of the function being bounded, but where I stumble is trying to find a $\delta$ that lets me connect the bounded-function definition to the definition of divergence. Could I get feedback on Proof 1 and whether the idea for Proof 2 is possible? Just as an advisory: I haven't been introduced to sequences formally in the textbook I'm using. I'm aware there are sequential ways of possibly doing this, but was trying to avoid them. AI: There is indeed a direct way of showing that the function diverges (as alluded to by your "Proof 2"). For any $M>0$, since the function $f$ is unbounded above, you have $f(x)>M$ for some $x\in[a,b)$. By the fact that $f$ is monotone, any $y>x$ will have $f(y)\geq f(x)>M$. Can you infer an appropriate choice of $\delta$ from this?
H: If function is differentiable at a point, is it continuous in a neighborhood? I was reading a proof for the multi-variable chain rule and in the proof the mean-value theorem was used. The use of the theorem requires that a function is continuous between two points. Hence the motivation for the question, if a function is differentiable at a point, is it continuous in some neighborhood? If not, then the multi-variable chain rule proof is fake? AI: To answer the question in the title: a function that is differentiable at a point need not be continuous in a neighbourhood. For example, consider the function $$ f(x) = \begin{cases} x^2, & x\in\mathbb Q \\ 0, & x\notin\mathbb Q\end{cases} $$ then $\frac{\mathrm{d}f}{\mathrm{d}x}(0)=0$, but $\lim_{x\to a}f(x)$ cannot exist for $a\neq0$ because $\lim_{\substack{x\to a\\x\in\mathbb Q}}f(x)=\lim_{x\to a}x^2=a^2$ whereas $\lim_{\substack{x\to a \\ x\notin\mathbb Q}}f(x)=0$. This doesn't mean the multivariable chain rule is fake, however, though perhaps the proof you were reading made stronger assumptions than just being differentiable at the point of interest?
H: Given i+j+k=n, find sum of all possible ijk Just solved this question, correctly (made a computer program to, successfully, verify the formula that I got): Let $S$ denote the set of triples $(i,j,k)$ such that $i+j+k=n$ and $i,j,k\in\mathbb{N}$. Evaluate $$\sum_{(i,j,k)\in S}ijk$$ My solution was as follows: I'll assume that $i,j,k\in\mathbb{N}_0$ and $(i,j,k)$ is an ordered triple (e.g. $(1,2,0)\not\equiv(0,2,1)$) WLOG consider $i$ as it varies. Firstly as a sub-problem (for ease of understanding) fix $i=0\implies j+k=n$. Now this sub-problem reduces to finding all ordered pairs $(j,k)$ such that $j+k=n$, to then find $jk$ for each. Notice that if $j+k=N$ for some $N\in\mathbb{N}$, then $\exists \ N+1$ such pairs, which are $(0,N),(1,N-1),(2,N-2),\ldots,(N-1,1),(N,0)$. So the sum of all our $jk$ is given by, generally as $N$ varies $$ \sum_{j=0}^{N}j(N-j) $$ For $i=0$ we have $N=n$, and the desired sum will be $$ 0\times\sum_{j=0}^{n}j(n-j) $$ Now the sub-problem can easily be generalised to the main problem, as all that is different is that as $i$ varies, we must vary $N$ with it. Specifically, if $i=b$ then $N=n-b$. So we will obtain a double sum. $$ \begin{align} \therefore \sum_{(i,j,k)\in S}ijk &=\sum_{i=0}^{n}i\sum_{j=0}^{n-i}j(n-i-j) \\ &=\sum_{i=0}^{n}i\left((n-i)\sum_{j=0}^{n-i}j-\sum_{j=0}^{n-i}j^2\right) \\ &= \sum_{i=0}^{n}i\left((n-i)(\frac 1 2 (n-i)(n-i+1))-\frac 1 6 (n-i)(n-i+1)(2(n-i)+1)\right) \\ &= \sum_{i=0}^{n}i\left(\frac 1 2 (n-i)^2(n-i+1)-\frac 1 6 (n-i)(n-i+1)(2n-2i+1)\right) \\ &= \sum_{i=0}^{n}i\left(\frac 1 6 (n-i)(n-i+1)(3(n-i)-(2n-2i+1))\right) \\ &= \frac 1 6 \sum_{i=0}^{n}i(n-i)(n-i+1)(n-i-1) \\ &= \frac 1 6 \sum_{i=0}^{n}i(n-i)((n-i)^2-1) \\ &= \frac 1 6 \sum_{i=0}^{n}i((n-i)^3-(n-i)) \\ &= \frac 1 6 \sum_{i=0}^{n}i(n^3-3n^2i+3ni^2-i^3-n+i) \\ &= \frac 1 6 \sum_{i=0}^{n}\left(n(n+1)(n-1)i+(1-3n^2)i^2+3ni^3-i^4\right) \\ &= \frac 1 6 n(n+1)(n-1) \sum_{i=0}^{n}i+\frac 1 6 (1-3n^2) \sum_{i=0}^{n}i^2+\frac 1 2 n \sum_{i=0}^{n}i^3- \frac 1 6 \sum_{i=0}^{n}i^4 \\ &= \frac 1 {12} n^2(n+1)^2(n-1)+\frac 1 {36} n(n+1)(2n+1)(1-3n^2)+\frac 1 {8} n^3(n+1)^2 - \frac 1 {180} n(n+1)(2n+1)(3n^2+3n-1) \\ &= (\frac 1 {12} n^5 + \frac 1 {12} n^4 - \frac 1 {12} n^3 - \frac 1 {12} n^2) + (-\frac 1 {6} n^5 - \frac 1 {4} n^4 - \frac 1 {36} n^3 + \frac 1 {12} n^2 + \frac 1 {36} n) + (\frac 1 {8} n^5 + \frac 1 {4} n^4 + \frac 1 {8} n^3) - (\frac 1 {30} n^5 + \frac 1 {12} n^4 +\frac 1 {18} n^3 -\frac 1 {180} n) \\ &= \frac{1}{120}n^5-\frac{1}{24}n^3+\frac{1}{30}n \\ &= \frac{1}{120}n(n^4-5n^2+4) \\ &= \frac{1}{120}n(n^2-1)(n^2-4) \\ &= \frac{1}{120}n(n+1)(n-1)(n-2)(n+2) \\ &= \frac{(n+2)!}{(n-3)!5!} \\ &= \binom{n+2}{5} \end{align} $$ My question was, how come there is a binomial coefficient as an answer? Are there any cool generalisations/patterns here? It seems unexpected (to me) that a sum of ijk would come out as that, and quite nice, but I can't seem to relate it to anything binomial related. AI: Observe that $$\frac{X}{(1-X)^2}=\sum_{i=1}^\infty iX^i.$$ (At least for small $X$.) Therefore $$\frac{XYZ}{(1-X)^2(1-Y)^2(1-Z)^2}=\sum_{i,j,k=1}^\infty ijkX^iY^jZ^k.$$ Setting $X=Y=Z$ gives $$\frac{X^3}{(1-X)^6}=\sum_{i,j,k=1}^\infty ijkX^{i+j+k}=\sum_{n=3}^\infty a_n X^n$$ where $$a_n=\sum_{i+j+k=n}ijk.$$ As $$\frac1{(1-X)^6}=\sum_{n=0}^\infty\binom{n+5}5X^n$$ we get $$a_n=\binom{n+2}5.$$ This argument will extend to sums like $$\sum_{i_1+\cdots+i_r=n}i_1\cdot \cdots \cdot i_r.$$
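A short brute-force verification of the identity (my sketch, in the same spirit as the asker's program):

```python
from math import comb

def brute(n):
    # sum of i*j*k over all ordered triples (i, j, k) with i + j + k = n
    total = 0
    for i in range(n + 1):
        for j in range(n + 1 - i):
            k = n - i - j
            total += i * j * k
    return total

for n in range(3, 12):
    assert brute(n) == comb(n + 2, 5)
print("verified for n = 3..11")
```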
H: Proof by contradiction Let $x_1, x_2, . . . , x_n$ be $n$ real numbers. Let $$x = \dfrac{x_1 + x_2 + \cdots + x_n}{n}$$ be their average. Prove that at least one of $x_1, x_2, \cdots , x_n$ is greater than or equal to $x$. I am pretty sure this can be proved with the contrapositive and I think I may know how to do that. However, I am wondering how you can do it with a contradiction. Any assistance would be greatly appreciated as I am new to discrete math. Thank you for all the comments in advance! AI: Assume that $x_i<x$ for every $i$. Then: $$x=\frac{x_1+...+x_n}{n}<\frac{x+...+x}{n}=\frac{nx}{n}=x $$ which is a contradiction.
H: Sum of digits of a large power What is $P(P(P(333^{333})))$, where $P$ is the sum of digits of a number? For example $P(35)=3+5=8$. a) 18 b) 9 c) 33 d) 333 f) 5 I tried to find this but I couldn't. I started looking for a pattern; for example the first few powers of $333$ are: $A=333*333=110889 \; \; \; \; \; \; P(A)=3^{3}=27$ $B=110889*333= 36926037 \; \; \; \; \; \; P(B)=36$ $C=36926037*333=12296370321 \; \; \; \; \; \; P(C)=36 $ $D=12296370321*333=4094691316893 \; \; \; \; \; \; P(D)=63$ Can I say it is always 9? So $P(P(P(333^{333})))=9$? AI: $333^{333} $ has $840 $ digits, so the sum of its digits $P(333^{333})$ could be at most $9\times840=7560$. Since $P(333^{333})$ has at most four digits, $P(P(333^{333}))$ could be at most $9\times4=36$. Since $P(P(333^{333}))\le36$, $P(P(P(333^{333})))\lt18.$ Also $333^{333}$ is a multiple of $9$, so $P(P(P(333^{333})))$ is too. Can you take it from here?
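Python's big integers make a direct check easy (my sketch):

```python
def P(n):
    # digit sum of n
    return sum(int(d) for d in str(n))

x = 333 ** 333
print(len(str(x)))                 # 840 digits, as the answer says
print(P(x), P(P(x)), P(P(P(x))))   # the final value is 9
```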
H: Prove that (orthogonal) projections $P_{L_1}x=P_{L_1}(P_{L_2}x)$ for closed subspaces $L_1\subset L_2\subset H $ Let $L_1\subset L_2\subset H $ be closed subspaces of $H$ and let $x_2=P_{L_2}x$ (the orthogonal projection of $x$ onto $L_2$). Prove $P_{L_1}x=P_{L_1}x_2$ ($P_{L_1}x=P_{L_1}(P_{L_2}x)$) According to this functional-analysis book by Eidelman, $P_{L}x$ for a closed subspace $L\subset H$ satisfies $||x-P_Lx||=\inf_{y\in L}||x-y||$ and it is unique. It's clear that $P_{L_1}x_2\in L_1$ and therefore $||x-P_{L_1}x||\le||x-P_{L_1}x_2||$. But how do I proceed showing that $||x-P_{L_1}x_2||\le||x-P_{L_1}x||$ as well? While $x_2-P_{L_1}x_2$ is orthogonal to $L_1$, I am not sure how to work with $x-P_{L_1}x_2$. I know it must be simple and direct, but still, I am stuck. Maybe I am not approaching it the right way. AI: I don't think that the $\inf$ definition is the most illuminating here. If $H$ is a Hilbert space and $M$ is a closed subspace of $H$, we have $M\oplus M^\perp=H$, i.e. $$\forall x\in H,\exists ! (y,z)\in M\times M^\perp, x=y+z$$ Then the projection $P_M$ is characterized by $P_M(x)=y$. In fact, $$x = \underbrace{P_M(x)}_{y\in M} + \underbrace{\left(x-P_M(x)\right)}_{z\in M^\perp}$$ Now let's turn back to your problem with $L_1\subset L_2$. Using the same decomposition as above, $$P_{L_1}(x) = P_{L_1}\left(P_{L_2}(x) + x - P_{L_2}(x)\right) = P_{L_1}\circ P_{L_2}(x) + P_{L_1}\left(x - P_{L_2}(x)\right)$$ But $x-P_{L_2}(x)\in L_2^\perp\subset L_1^\perp$ so $P_{L_1}\left( x-P_{L_2}(x)\right)=0$. It follows that $$P_{L_1} = P_{L_1}\circ P_{L_2}$$ Note that this reasoning holds for all projections, not just orthogonal projections.
H: Prove that $A\setminus B$ is non-countable where $A$ is a non-countable set and $B\subseteq A$ is a countable set. Problem: Let $A$ be a non-countable set and let $B\subseteq A$ be a countable set. Prove that $A\setminus B$ is non-countable. Where, $(X \text{ is non-countable})\;\iff (|X|>|\mathbb{N}|)$. Of course, a way to prove this would be by finding a function $f:A\to A\setminus B$ that's injective to show that $|A|\leq |A\setminus B|$, thus finishing the proof. I can't seem to be able to find such a function. My guess would be that a function like that should exist due to the following special case: $$\big|\mathbb{R}\setminus\big(\;]-\infty,-\pi/2]\cup [\pi/2,\infty[\;\big)\big|=\big|(-\pi/2,\pi/2)\big|=\big|\mathbb{R}\big|$$ Thanks to the existence of the function $\tan:(-\pi/2,\pi/2)\to\mathbb{R}$ which is bijective. How could I find an appropriate function to prove the Problem? Please help adding or removing tags if necessary. Thanks in advance. AI: Since $B$ is countable, there exists an injection $f : B \rightarrow \mathbb N$ Suppose $A \setminus B$ is countable. Then there exists an injection $g : A \setminus B \rightarrow \mathbb N $. Now let $h : A \rightarrow \mathbb N$ be defined by $$ h(x) = \left \{ \begin{array}{l} 2f(x) \quad &\text{if} \quad x \in B \\ 2g(x) + 1 \quad &\text{if} \quad x \in A \setminus B \end{array} \right. $$ $h : A \rightarrow \mathbb N$ is an injection so $A$ is countable, which is a contradiction.
H: Can you square both sides of an equation containing matrices? For example A=λI ⇒ A²=λ²I where A is square, and λ∈ℝ. Or more generally AB=CD ⇒ ABAB=CDCD. Assuming all the necessary matrix products are possible, what other conditions would need to be fulfilled for this to hold? AI: For my money, the answer you’re looking for has nothing whatever to do with matrices. For, in mathematics, when we write$$A=B$$ we’re saying that $A$ and $B$ are the same thing. In your case, you’re not talking about one matrix $AB$ and another matrix $CD$, rather you’re talking about a single matrix that has two factorizations. Thus the principle that “you may always do something to one side of an equation as long as you do the same thing to the other” is not a mathematical principle, but rather a principle of logic, dependent on the mathematical meaning of equality.
H: D&D Probability of 2 Attempts I wanted to get the full probability of 2 attempts made at 60% chance of success. I was looking at a different chain of math and found my probability to hit an enemy is 60% per attack, but I was wondering how it would look across all the outcomes and their probabilities. 6/10 * 6/10 = 36% of both attempts failing and both attempts succeeding, with a 64% chance of at least 1 attempt succeeding? I didn't understand how it would look as a failure adding up to 136%. AI: There are four scenarios:
Scenario 1: Both hit: 6/10 * 6/10 = 36%
Scenario 2: Hit first, miss second: 6/10 * 4/10 = 24%
Scenario 3: Miss first, hit second: 4/10 * 6/10 = 24%
Scenario 4: Miss both: 4/10 * 4/10 = 16%
Notice how this total adds to 100%. If you define success as at least one hit, then you may add scenarios 1-3 or take 1-P(Scenario 4).
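The same enumeration in a few lines of Python (my sketch):

```python
p = 0.6   # chance each attack hits
outcomes = {
    "hit, hit":   p * p,
    "hit, miss":  p * (1 - p),
    "miss, hit":  (1 - p) * p,
    "miss, miss": (1 - p) * (1 - p),
}
print(outcomes)                          # 0.36, 0.24, 0.24, 0.16 -- sums to 1
print(1 - outcomes["miss, miss"])        # 0.84 = chance of at least one hit
```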
H: Let $G$ be a group with order $105 = 3 \cdot 5 \cdot 7$ (a) Prove that a Sylow $7$-subgroup of $G$ is normal (b) Prove that $G$ is solvable Can anyone please tell me if I am correct? (a) For the sake of contradiction suppose $G$ does not have a normal Sylow $7$-subgroup. We first show $G$ has a normal Sylow $5$-subgroup. Then $G$ must have $15$ Sylow $7$-subgroups. So $G$ has $15(7-1) = 90$ elements of order $7$. If $G$ does not have a normal Sylow $5$-subgroup then $G$ has $21$ Sylow $5$-subgroups, so $G$ has $21(5-1) = 84$ elements of order $5$. But $90 + 84 = 174 > 105$. Therefore $G$ has a normal Sylow $5$-subgroup. Let $N$ be the unique Sylow $5$-subgroup, and let $P$ be a Sylow $7$-subgroup. Since $N$ is normal, $NP$ is a subgroup of $G$. Since $N \cap P = 1$ we have $|NP| = |N||P| = 35$. So by Lagrange $|G : NP| = 3$; since $3$ is the smallest prime dividing $|G|$ we have that $NP$ is normal. So by the Frattini argument $G = N_G(P)N$. Finally, since $NP$ is abelian, $NP$ normalizes $P$, so $NP \leq N_G(P)$. But since $3$ divides $|G|$ and $3$ does not divide $|N|$, we have $3$ divides $|N_G(P)|$, so $105$ divides $|N_G(P)|$, thus $G = N_G(P)$, i.e. $P$ is normal, contradicting our assumption. (b) Continuing with the notation above, $NP$ is a normal subgroup of $G$ and $G/NP$ has order $3$, so is clearly abelian. Since $NP$ is abelian, the trivial subgroup $1$ is a normal subgroup of $NP$ and $NP/1$ is abelian. Hence $1 < NP < G$ is our desired chain. Also, if anyone has any nice rules for proving that groups of a certain order are solvable, that would be appreciated. I heard groups with order divisible by at most $2$ distinct primes must be solvable. AI: Here's another way that bypasses the question entirely. It uses the fact that the $5$ is a red herring, and just put there to make the numbers clash. Notice that by standard counting, groups of order $15=3\cdot 5$ and $35=7\cdot 5$ are cyclic, hence both have a normal (and unique) Sylow $5$-subgroup, and the same for the other prime $3$ or $7$. We first claim that the Sylow $p$-subgroup is normal for some prime $p$. If not, then $n_p$, the number of Sylow $p$-subgroups, is given by $n_3=7$, $n_5=21$ and $n_7=15$. Standard element counting gives a contradiction. If $n_5=1$ then $G$ has a normal Sylow $5$-subgroup. If $n_3=1$ or $n_7=1$ then $Q\lhd G$ where $|Q|=3$ or $|Q|=7$. Then $G/Q$ has order $15$ or $35$, and in both cases has a normal Sylow $5$-subgroup. Take the preimage of this to give a normal subgroup of $G$ of order $35$ or $15$. Again this has a normal Sylow $5$-subgroup, so again $G$ has a normal Sylow $5$-subgroup. Quotient out by this. Then $G$ has order $21$, and easily has a normal Sylow $7$-subgroup. But again, take preimages to get a normal subgroup of order $35$, hence a normal Sylow $7$-subgroup as well. Thus any group of order $105$ has a normal Sylow $5$-subgroup and a normal Sylow $7$-subgroup. Since the quotient, of order $3$, cannot act in a non-trivial way on a group of order $5$ (but can on a group of order $7$) one obtains that $G$ is the direct product of $\mathbb{Z}_5$ and a group of order $21$. (There are two such groups.)
H: If $v_i$ is an eigenvector for $T$ with eigenvalue $\lambda_i$ then it is an eigenvector for $T^*$ with eigenvalue $\bar{\lambda}_i$, given normal $T$ Given an inner product space $V$ and a normal operator $T$, prove that $\ker T=\ker TT^*$. The solution I found mentions that using the fact that $T$ is normal we know it is diagonalizable, so we have an orthonormal basis of eigenvectors $\left \{ v_1, \ldots , v_n \right \}$. Now we can notice that if $v_i$ is an eigenvector for $T$ with eigenvalue $\lambda_i$ then it is also an eigenvector for $T^*$ with eigenvalue $ \bar{\lambda}_i$. I don't understand why this is true; an explanation would be appreciated. AI: Hint. Claim 1: If $T$ is normal then $\|Tv\|=\|T^{*}v\|$ (actually it is an iff statement): $$\langle Tv, Tv \rangle=\langle v, T^{*}Tv \rangle=\langle v, TT^{*}v \rangle=\langle T^{*}v, T^{*}v \rangle.$$ Claim 2: If $T$ is normal then $T-\lambda I$ is also normal. See if you can show $$(T-\lambda I)(T-\lambda I)^{*}= \dotsb=(T-\lambda I)^{*}(T-\lambda I).$$ Now use Claim 1 for the normal operator $T-\lambda I$ to get $$\|(T-\lambda_i I)v_i\|=\|(T-\lambda_i I)^{*}v_i\|.$$ So if $v_i$ is an eigenvector for $T$, then the norm on the LHS is $0$. This means $\|(T-\lambda_i I)^{*}v_i\|=0 \implies (T-\lambda_i I)^{*}v_i=0 \implies (T^{*}-\bar{\lambda_i}I)v_i=0$
H: show that there is a continuous function from [0, 1] onto a countable product of copies of [0, 1] with product topology I only have difficulties in problem 11. My efforts: Follow the hint. We have a sequence of functions {$F_k(t)$}. Each $F_k(t)$ maps $t\in$ [0, 1] to a sequence $(t_1,t_2,...)$ in $\Pi_{n\geq 1}[0,1]_n$. In problem 10, we have constructed a continuous function $f^{(k)}(t)$ from [0, 1] onto $[0, 1]^k$. Now we make reference to a result from a previously solved problem. Let $X=[0,1]^k$ and $Y=\Pi_{n>k}[0,1]_n$. Let $y_0$ be the all-zero sequence in $Y$. Then the map $f:X\rightarrow X\times Y$ defined by $f(x)=x\times y_0$ is continuous. Then $F_k(t)$ is the composite of two continuous functions: $F_k(t)=f(f^{(k)}(t))$. This holds for each fixed $k$. Thus each $F_k(t)$ is continuous. Of course it is not onto due to its 0-tail. Recall that $[0, 1]^k$ is using the square metric on $\mathbb{R}^k$: $d_k(x,y)=$ max{$|x_1-y_1|,|x_2-y_2|,...,|x_k-y_k|$}. Also recall that $\mathcal{C}(I,I^k)$ is using the corresponding sup metric: $\rho_k(f,g)=$ sup{$d_k(f(t),g(t))|t\in I$}. Now $\Pi_{n\geq 1}[0,1]_n$ is using product topology. We cannot treat it as a metric space. Accordingly there is no metric for $F_k(t)$. We cannot write something like $\rho(F_7,F_9)$. We cannot prove the convergence of {$F_k(t)$} by saying it is Cauchy. We can only use the open-neighborhood definition of convergence for {$F_k(t)$}. What's the open neighborhood of $F_k(t)$? We can view $F_k(t)$ as a point in the product space $[0,1]\times\Pi_{n\geq 1}[0,1]_n=\Pi_{n\geq 0}[0,1]_n$. Then the convergence of {$F_k(t)$} to $F(t)$ is defined as: Any open neighborhood of $F(t)$ intersects {$F_k(t)$}. More exactly, it is defined as: Given any $n$ and any $\epsilon>0$, {($x,y$)|max{$|x-t|,d_n(y_{1:n},F(t)_{1:n})$}<$\epsilon$} intersects {$F_k(t)$}. That is, given any $n$ and any $\epsilon>0$, there exists $k$ such that $|t-s|<\epsilon$ and $d_n(F_k(s),F(t))<\epsilon$. The convergence cannot be guaranteed by the construction of $f^{(k)}(t)$. We can find a $F_k(s)$ having the same 1-to-$n$ coordinates as F(t), but $|t-s|$ may be greater than $\epsilon$. ------------------------Updated according to the hints given by Brian M. Scott------------------------ Let $h^{(k)}(t)$ be the $k$-time composition of $h(t)$, e.g., $h^{(3)}(t)=h(h(h(t)))$. $f^{(2)}(t)=(g(t),h(t))$ $f^{(3)}(t)=(g(t),g(h(t)),h(h(t)))$ $f^{(4)}(t)=(g(t),g(h(t)),g(h^{(2)}(t)),h^{3}(t))$ $f^{(5)}(t)=(g(t),g(h(t)),g(h^{(2)}(t)),g(h^{(3)}(t)),h^{4}(t))$ Given $n$, $t_n^{(k)}=0$ for $k<n$. We claim that $t_n^{(k)}=g(h^{(n-1)}(t))$ for $k>n$. We show this by induction on $n$. By construction, $f^{(k+1)}(t)=(g(t),f^{(k)}(h(t)))=(g(t),g(h(t)),f^{(k-1)}(h^2(t)))=(g(t),g(h(t)),g(h^{(2)}(t)),f^{(k-2)}(h^3(t)))=...$ The $n=1$ case is that $t_1^{(k)}=g(t)$. This is obvious from the construction. Assume it is true for all $n$. By construction, $t^{(k)}_{n+1}=h(t)^{(k)}_n=g(h^{(n-1)}(h(t)))=g(h^{(n)}(t))$. We have shown that $t_n^{(k)}$ is constant after $k>n$. Next we show $F$ is continuous. We only need to show: Given any open neighborhood $U$ of $\langle t_n:n\geq 1\rangle$, $F^{-1}(U)$ is open. Since $\prod_{n\geq 1}[0,1]_n$ is using product topology, we can take $U$ to be $$U=\prod^N_{n=1}\big((t_n-\epsilon_n,t_n+\epsilon_n)\cap [0,1]_n\big)\times\prod_{n\geq N+1}[0,1]_n$$ for any fixed $N$. 
Since $f^{(N)}$ is continuous when $[0,1]^N$ is using square metric topology and the ball $$B(\langle t_n,n=1,...,N\rangle,\min\{\epsilon_1,\ldots,\epsilon_N\}, d_N)\subset\prod^N_{n=1}\big((t_n-\epsilon_n,t_n+\epsilon_n)\cap [0,1]_n\big)\;,$$ $f^{(N)}$ is also continuous when $[0,1]^N$ is using box topology. Thus $F^{-1}(U)=(f^{(N)})^{-1}\left(\prod^N_{n=1}\big((t_n-\epsilon_n,t_n+\epsilon_n)\cap [0,1]_n\big)\right)$ is open. Finally we need to show $F$ is surjective: Given any $\langle t_n,n\geq 1\rangle\in\prod_{n\geq 1}[0,1]_n$, there exists a $t\in[0,1]$ such that $F(t)=\langle t_n,n\geq 1\rangle$. For any fixed $N$, we can find a $t$ such that $g(h^{(n-1)}(t))=t_n$ for all $n<N$ by construction of $F(t)$. Since $t$ and $t_n$ remain constant as we increase $N$, we have found the desired $t$. AI: The product topology on $\prod_{n\ge 1}[0,1]_n$ actually is metrizable: this is true of any product of countably many metrizable spaces. However, you don’t need that here. HINT: For $t\in[0,1]$ and $k\ge 1$ let $F_k(t)=\left\langle t_n^{(k)}:n\ge 1\right\rangle$. Show that for each $n\ge 1$ the sequence $\left\langle t_n^{(k)}:k\ge 1\right\rangle$ is eventually constant and therefore converges to some $t_n\in[0,1]$. (In fact you should find that $t_1=g(t)$, $t_2=g(h(t))$, $t_3=g(h(h(t)))$, and so on.) Define $F:[0,1]\to\prod_{n\ge 1}[0,1]_n:t\mapsto\langle t_n:n\ge 1\rangle$. Show that $F$ is continuous. Show that $F$ is a surjection.
H: A notation for distinctness (in logic) I want to say formally that $C_0$, ..., $C_{n+1}$ are distinct. Two ways seem to be (1) $\qquad\qquad\qquad\qquad (\forall x,y \in \Bbb N)(x \ne y \to C_x \ne C_y)$, (2) $\qquad\qquad\qquad\qquad C_{0 \le i \le n} \space \ne \space C_{i+(1 \le j \le n+1)}$. Are (1) and (2) interchangeable, assuming (the likely obvious) that $i, j, n \in \Bbb N$? And is there a better way? AI: If you really want to say it symbolically (keep in mind that in most cases it's better to just write precisely in natural language), a useful notation is indexed conjunctions, the $\wedge$-analogue of $\sum$: we write "$\bigwedge_{i\in I}\alpha_i$" to denote that $\alpha_i$ is true for each $i\in I$. (Similarly, we have $\bigvee$ for disjunctions.) And we can further fold abbreviations into the subscript, just as we do for $\sum$; so the proposition $$\bigwedge_{0\le i<j\le n+1}C_i\not=C_j$$ does the job. (Of course there's an implicit assumption that $i,j$ range over naturals, but this is standard here.)
H: Are we comparing the expectation of random variables in "convergence in probability"? I was watching this and this to try to understand what convergence in probability means. In the first video, I was confused about why, in the Excel demonstration, the random variables had their standard deviations depend on $n$. Isn't this law/phenomenon supposed to hold even if my sequence $\{X_1, X_2,\ldots,X_n\}$ was just $n$ standard normal distributions? In which case, what makes it so that $X_n$ is so much closer to our $c$ (or $a$ from the second video) than $X_{n-1}$? I can understand a normal sequence $A = \{a_1, a_2,...a_n\}$ converges to $1$ if $a_n=1 - \frac{1}{n}$, as is a common example. But if we use an example of a sequence of $n$ standard normal variables, why is $X_n$ any closer to the mean (zero) than $X_1$? Isn't it the mean of the sequence that's supposed to be close to zero? My initial understanding was that say I have a set of trials of standard random variables that turned out to be $\{1, 2, -1, -2, 3\}$. In this case the mean is $0.6$. If I had a waaay bigger samples size $\{1, 2, -1, -2, 3, 2, -1, 3, -2,....\}$ I would eventually get the mean to $0.0$. But based on the first video it looks like each term itself is getting closer to zero (in the video it was closer to 1), implying it's more like $\{1, 2, -1, -2, 3, 2, -2, 1, -1, 1, 1,...\}$ AI: The video is not about the central limit theorem as you seem to imply with your question. No averages are taken. The random variables in the video are independent normal variables with mean 1 and decreasing variance. "But if we use an example of a sequence of standard normal variables, why is $X_n$ any closer to the mean (zero) than $X_1$?" If their variances were positive and constant, then you are correct, you wouldn't expect the sequence to converge. At no point are the values of the sequence summed up and divided by the length of the sequence.
H: Solve $\int\limits_0^{1/\sqrt{2}} \frac{au^2}{5(1-u^2)^2}du = 1$ for $a$ Problem: The function $f_U(u) = \frac{au^2}{5(1-u^2)^2}$ is a probability density for the random variable $U$, which is non-zero on the interval $(0, \frac{1}{\sqrt{2}})$. I am supposed to find the value $a$. I understand that that amounts to solving the equation $\int_0^{\frac{1}{\sqrt{2}}} \frac{au^2}{5(1-u^2)^2}du = 1$ for $a$. Hint: The hint I am given is to use integration by parts on the left hand side of $$\int_0^{\frac{1}{\sqrt{2}}} \frac{1}{1-u^2}du = \log(1 + \sqrt2)$$ My Attempt: I don't understand how to use this hint. I know how to integrate by parts. If the standard expression for integrating by parts is $st\bigg\rvert_0^{\frac{1}{\sqrt{2}}} - \int_0^{\frac{1}{\sqrt{2}}}tds$, then I guess I am supposed to find the appropriate $s$ and $dt$ in the expression $\dfrac{u^2}{(1-u^2)^2}$? In particular, I am supposed to choose $s$ and $dt$ such that $tds = \dfrac{1}{1-u^2}$? I am led to believe that this is not a particularly nasty integral, though the online integral calculators do some sort of partial fraction decomposition that looks like a mess. Another idea I had was to do a trigonometric substitution, but I don't think that will work. AI: Note \begin{align} \int_0^{1/\sqrt{2}} \frac{u^2du}{5(1-u^2)^2} & =\frac1{10} \int\limits_0^{1/\sqrt{2}} u\> d\left(\frac{1}{1-u^2}\right)\\ &=\frac{u}{10(1-u^2)}\bigg|_0^{1/\sqrt2}-\frac 1{10} \int\limits_0^{1/\sqrt{2}} \frac{du}{1-u^2}\\ &= \frac{1}{5\sqrt2}-\frac 1{10}\ln(1+\sqrt2) \end{align}
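As a sanity check on the computation above (assuming sympy is available; the snippet is mine, not from the original thread):

import sympy as sp

u = sp.symbols('u', positive=True)
I = sp.integrate(u**2 / (5*(1 - u**2)**2), (u, 0, 1/sp.sqrt(2)))
print(sp.simplify(I))     # an expression equivalent to 1/(5*sqrt(2)) - log(1 + sqrt(2))/10
print(float(1 / I))       # the normalizing constant a, about 18.77

So setting $a\left(\frac{1}{5\sqrt2}-\frac{\ln(1+\sqrt2)}{10}\right)=1$ gives $a\approx 18.77$.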
H: If $\lim_{x \to b^{-}}f(x) = \infty$ then the image of $f$ is the ray $[f(a),\infty)$ - Proof feedback Let $f:[a,b) \to \mathbb{R}$ be a strictly monotone increasing continuous function on a half closed interval $[a,b)$, and let $d$ be a real number. Claim: If $\lim_{x \to b^{-}}f(x) = \infty$ then the image of $f$ is the ray $[f(a),\infty)$ This question is a follow-up to another one I did that most likely applies: If $\lim_{x \to b^{-}} f(x) = d$ then the image of $f$ is the half closed interval $[f(a),d)$ - Proof feedback So in that question we established that the image of $f$ is the half closed interval $[f(a),d)$. For this question, I wanted to apply this result and let $d = \infty$. Then the result will follow. I think this is correct, but something is telling me that I can't treat $\infty$'s in such a cavalier way. So do I need to be more confident in my solutions or am I right in wanting to tread more carefully due to the $\infty$? AI: Claim 1. $\operatorname{im}f \subset [f(a), \infty)$. Proof. Follows from monotonicity. Claim 2. $\operatorname{im}f \supset [f(a), \infty)$. Proof. Let $y_0 \in [f(a), \infty)$. We need to show that there exists $x_0 \in [a, b)$ such that $f(x_0) = y_0$. Let $M = y_0$ (as in the notation in the comments). Let $\delta$ be as in the comments. Choose any $x \in [a, b)$ that satisfies $0 < |x - b| < \delta$. Then, we have $f(x) > M$. On the other hand, we have $f(a) \le y_0 < f(x)$. By IVT, we see that there exists $x_0 \in [a, x)$ such that $f(x_0) = y_0$, as desired. Equality of $\operatorname{im} f$ and $[f(a), \infty)$ now follows.
H: Describing the kernel of a group homomorphism Let $\phi : \mathbb Z \to \mathbb Z_{75}$ be the function defined by $\phi(n) = 27n \mod 75$, for all $n \in \mathbb Z$. I'm trying to describe the kernel of $\phi$ as simply as possible and so far I got... $\phi (n) = 27n\mod {75}$ $\\$ $\phi (n) = 0$ $\\$ $27n \mod 75 = 0$ $\\$ $27n = 75k$ From here I'm not sure how to finish this; I'm assuming $\mbox{Ker}( \phi ) \ne \frac{75}{27} \Bbb Z$ since that doesn't make much sense. AI: So you are looking for $$27n \equiv 0 \pmod{75}.$$ This is same as saying $$9n \equiv 0 \pmod{25}.$$ $9$ has an inverse $14$ mod $25$. So multiply by $14$ on both sides to get $$14(9n) \equiv 14(0) \pmod{25} \implies n \equiv 0 \pmod{25}.$$ Thus $n \in 25\Bbb{Z}=\{25k \, | \, k \in \Bbb{Z}\}$. Note: An intuitive way is to think that you want $27n$ to be $0$ mod $75$. With $27n=3(9n)$, we already have a factor of $3$, so all we need is a factor of $25$ from $9n$ so that the whole thing will be a multiple of $75$. But $\gcd(9,25)=1$ so we can't get anything from $9$, so $n$ has to supply the missing factor of $25$.
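A quick brute-force check over one period of the map (plain Python):

# n is in the kernel iff 27*n is divisible by 75
kernel = [n for n in range(75) if (27 * n) % 75 == 0]
print(kernel)   # [0, 25, 50] -- exactly the multiples of 25, so Ker(phi) = 25Z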
H: If $X$ is a metric space and $E \subset X$, then $\overline{E}$ is closed. I would like to understand how a certain statement below follows from the previous statements. Theorem If $X$ is a metric space and $E \subset X$, then $\overline{E}$ is closed. Proof. If $p \in X$ and $p \notin \overline{E}$ then $p$ is neither a point of $E$ nor a limit point of $E$. Hence $p$ has a neighborhood which does not intersect $E$. The complement of $\overline{E}$ is therefore open. Hence $\overline{E}$ is closed. I was wondering how it follows from "$p$ has a neighborhood which does not intersect $E$" that the complement $(\overline{E})^{c}$ is open. At first I thought that since $p$ has a neighborhood, $N_{r}(p)$, which intersects $E^{c}$, then $N_{r}(p) \subset E^{c} \subset \overline{E^{c}}$ since $\overline{E^{c}} = E^{c} \cup (E^{c}$)'. But, I don't think that $\overline{E^{c}}$ necessarily equals $(\overline{E})^{c}$. AI: The argument goes like this. If $p$ is not in $\operatorname{cl}E$, then by definition $p\notin E$ and $p$ is not a limit point of $E$. This implies that $p$ has a nbhd $N_p$ that does not intersect $E$. We can do this for each $p\in X\setminus\operatorname{cl}E$, so $$X\setminus\operatorname{cl}E\subseteq\bigcup_{p\in X\setminus\operatorname{cl}E}N_p\subseteq X\setminus\operatorname{cl}E\;.$$ But then $$X\setminus\operatorname{cl}E=\bigcup_{p\in X\setminus\operatorname{cl}E}N_p\;,$$ which, being a union of open sets, is open. By definition the complement of $X\setminus\operatorname{cl}E$ is then closed, and $$X\setminus(X\setminus\operatorname{cl}E)=\operatorname{cl}E\;,$$ so $\operatorname{cl}E$ is closed.
H: Proof that the limit is 0 using Fourier series I have this problem and I guess I should solve it using Fourier because of the context: let $f:\mathbb{R}\rightarrow\mathbb{R}$ a continuous function over $\mathbb{R}$ such that $\left|f\left(x\right)\right|\le x^4$ for all $x$. Need to prove: $\lim_{n\to\infty}\int_{0}^{\pi}{\frac{f\left(x\right)}{x}\cos{\left(nx\right)}dx}=0$ I can see a connection to Fourier coefficients and the "Riemann–Lebesgue" lemma maybe, but I don't know how to prove it. AI: Let $g(x)=\frac {f(x)} x$ for $0<x<\pi$ and $g(x)=\frac {f(\pi)} {\pi}$ for $-\pi \leq x \leq 0$. Then $g$ is integrable and the Riemann–Lebesgue Lemma implies that $\int_{-\pi}^{\pi} g(x) \cos (nx)dx \to 0$. Hence the result. Integrability of $g$ comes from the given inequality (which shows that $g$ is bounded).
H: what is the probability that a prime number divides another prime plus 1? What is the probability that a prime number divides another prime plus 1? What I do know is that for $2$ it's 100%. I can show this fact using a function $f(x,y):=$ the number of primes between $1$ and $y$ such that, when you add 1, the result is divisible by $\mathrm{prime}(x)$, divided by $\pi(y)$; here $\pi(x)$ is the prime counting function. $f(1,x)=(\pi(x)-1)/\pi(x)$ because the only time $\mathrm{prime}(x)+1$ doesn't equal an even number is when the prime is $2$: $2+1$ isn't even. And as $x$ goes to infinity, $(\pi(x)-1)/\pi(x)$ goes to 100%. My question is: what is the probability that $3,5,7,...$ divides a random prime number plus 1? Do you know the general formula for $f(x,y)$ as $y$ goes to infinity? AI: If you fix the first prime $p$, then you are looking for primes of the form $pk-1$. By Dirichlet's theorem on primes in arithmetic progressions (a special case of the Chebotarev density theorem), the primes $\equiv -1 \pmod p$ have density $1/\varphi(p)=1/(p-1)$ in the set of all primes.
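An empirical check of this density for small $p$ (sympy assumed for the prime generator; the cutoff $10^6$ is arbitrary):

from sympy import primerange

primes = list(primerange(2, 10**6))
for p in (2, 3, 5, 7):
    frac = sum((q + 1) % p == 0 for q in primes) / len(primes)
    print(p, round(frac, 4), 1 / (p - 1))   # empirical density vs the predicted 1/(p-1)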
H: What would be the arithmetic/algebraic rules for solving the problem $500=\frac{66}{\sqrt{1 - \frac{V^2}{(3 \times10^8)^2}}}$ Every direction I take to solve this problem leaves me with a negative on one side of the equation and $V^2$ on the other. Arithmetic/algebraic rules were the cause of my last question on this site, and it is very irritating. I will provide some of my attempt below and if anyone can suggest texts for me to read to improve my understanding of algebraic rules or point me in the right direction, I would be appreciative. I'm pretty sure I understand at least the basic rules! Which is why this is so frustrating to me. Please don't give the answer directly but hint me in the right direction, please and thank you! Attempt: I squared both sides of the equation which gave me this: $$250,000=\frac{4,356}{1-\frac{V^2}{(3\times10^8)^2}}$$ And then from here, I get: $$250,000-\frac{250,000V²}{(3\times10^8)^2} = 4356$$ From here I realize that I'll be attempting to square a negative which I know isn't possible. Did I mess up at the very first step? If so what is the proper first step? I'm solving for $V^2$ obviously. AI: You are doing fine so far. Now move the $V^2$ term to the other side and subtract $4356$ from each side. You just need $V$ to be a little less than $3\cdot 10^8$ so the fraction is a little less than $1$. Added: $$250,000-\frac{250,000V^2}{(3\times10^8)^2} = 4356\\ 250,000-4356=\frac{250,000V^2}{(3\times10^8)^2}\\ (250,000-4356)\frac{(3\cdot 10^8)^2}{250,000}=V^2\\ \sqrt{(250,000-4356)\frac{(3\cdot 10^8)^2}{250,000}}=V$$
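To see that the numbers work out, here is a quick check of the final rearrangement (plain Python; the variable names are mine):

import math

c = 3e8                                    # (3 x 10^8)
V = c * math.sqrt((250_000 - 4356) / 250_000)
print(V)                                   # about 2.974e8, a little less than c
print(66 / math.sqrt(1 - V**2 / c**2))     # recovers 500.0 exactly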
H: Evaluate $\lim_{x\to 0} \frac{f(x^3)}{x}$ Let $f:\mathbb{R}\to \mathbb{R}$ be a function such that $|f(x)|\leq 2|x|$ for every $x \in \mathbb{R}$. Evaluate $\lim_{x\to 0} \frac{f(x^3)}{x}$. According to the answer key, it is $0$ (which matches mine). I am not so sure about my solution (below) though. Rewriting $\lim_{x\to 0} \frac{f(x^3)}{x}$ yields $$\lim_{x\to 0} \frac{f(x^3)}{x} = \lim_{x\to 0} 2\frac{|x^3|}{x} = 2\lim_{x\to 0} \frac{|x^3|}{x}$$ Analyzing one-sided limits, we have: $$\lim_{x\to 0^-} \frac{-x^3}{x}=\lim_{x\to 0^-} -x^2=0$$ and $$\lim_{x\to 0^+} \frac{x^3}{x}=\lim_{x\to 0^+} x^2=0$$ Both one-sided limits exist and are equal, therefore $$2\lim_{x\to 0} \frac{|x^3|}{x}= 2\cdot0=0$$. Is my solution correct? AI: Subtle point: you cannot simply replace $f(x^3)$ by $2|x^3|$ inside the limit; the hypothesis only gives the bound $|f(x^3)|\leq 2|x^3|$, not equality. What you should write instead is, for $x\neq 0$, $$ 0 \leq \left|\frac{f(x^3)}{x}\right|\leq \frac{2|x^3|}{|x|}=2x^2; $$ then you can proceed by the squeeze theorem, since $2x^2\to 0$ as $x\to 0$.
H: what is the probability that you get a prime number when you pick two random numbers between 0 and 1 ,a and b, and divide b by a and round up? what is the probability that you get a prime number when you pick two random numbers between 0 and 1, a and b, and divide b by a and round up? let's say your random numbers were 0.09121=a, and 0.6163=b, b/a=6.8665 and rounding up you get 7 AI: It will be $$\sum\limits_{p\text{ prime}} \dfrac1{2p(p-1)} \approx 0.3866$$ if you consider something like this
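A numeric check of this closed form (sympy assumed for the prime utilities; both the truncation point and the sample size are arbitrary):

import math, random
from sympy import isprime, primerange

# truncated sum over primes p of 1/(2*p*(p-1))
s = sum(1 / (2*p*(p - 1)) for p in primerange(2, 10**6))
print(s)                                   # about 0.38660

# Monte Carlo sanity check: ceil(b/a) for uniform a, b in (0,1)
n = 200_000
hits = sum(isprime(math.ceil(random.random() / random.random()))
           for _ in range(n))
print(hits / n)                            # should be close to 0.3866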
H: Finding a particular continuous function Let $f:[0,1]\rightarrow\mathbb R$ be a continuous function such that $\int_{0}^{1} f(x)(1-f(x)) dx =\frac {1} {4}$ . How many such functions exist? I really have no idea where to start. How do we solve it? AI: Hint $$f(x)(1-f(x))-\frac{1}{4}=-\left(f(x)-\frac{1}{2}\right)^2.$$ Since $f$ is continuous so $f-\frac{1}{2}$ is continuous as well and we have $$-\int_0^1\underbrace{\left(f(x)-\frac{1}{2}\right)^2}_{\geq 0} \, dx =0$$
H: Proving a group homomorphism Let $\Bbb R [x]$ denote the group of all polynomials with real coefficients, under the operation of addition. Let $\psi : \Bbb R[x] \to \Bbb R$ be the function defined by $\psi (p(x)) = p(3)$, for all $p(x) \in \Bbb R [x]$. I'm trying to prove this to be a group homomorphism and so far this is what I got... Proof: First, we must show that $\psi (p(x+y)) = \psi (p(x)) + \psi (p(y))$ for all $x,y \in \Bbb R$. Therefore, $(x+y)(3) = 3x + 3y = \psi (p(x)) + \psi (p(y)) = \psi (p(x+y))$, proving that $\psi$ is a group homomorphism. Just wondering if I have everything I need in this proof for it to be correct. AI: You’re trying to prove the wrong thing. What you must prove is that if $p,q\in\Bbb R[x]$ (i.e., $p$ and $q$ are polynomials in the indeterminate $x$ with real coefficients), then $$\psi(p+q)=\psi(p)+\psi(q)\;.$$ And this is true: $$\begin{align*} \psi(p+q)&=(p+q)(3)\\ &=p(3)+q(3)\\ &=\psi(p)+\psi(q)\;. \end{align*}$$
H: Prove that a function in $\mathbb R^n$ is surjective I have a function $f:\mathbb R^n \to \mathbb R^n$ defined by $f(\hat{x}) = \hat{x} - 2(\hat{x} \cdot\hat{v})\hat{v}$, with $|\hat{v}| = 1$. I'm looking to prove that this function is surjective. I'm a bit rusty on vectors, however, so I'm struggling to solve for $\hat{x}$, to show that for any $y \in \mathbb R^n$ there exists some $\hat{x} \in \mathbb R^n$ such that $f(\hat{x}) = \hat{y}$. Is there another way to go about proving surjectivity? Or is this it—in which case, what's the right vector manipulation? Thanks! AI: So, I don't know that you're expected to recognize this, but if $v\cdot v =1$, then this is the formula for reflecting the vector $x$ in the plane that is perpendicular to $v$. Which means that if you repeat the reflection you get the same vector back again, i.e. $$ f(f(x)) = x $$ for all vectors $x$. You can verify this by grinding it out. And once you know $f(f(x)) = x$, do you see how you can easily prove surjectivity?
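A small numerical check of the involution identity $f(f(x))=x$ (numpy assumed; the dimension and seed are arbitrary):

import numpy as np

rng = np.random.default_rng(0)
v = rng.standard_normal(5)
v /= np.linalg.norm(v)              # unit vector, as in the problem

def f(x):
    return x - 2 * (x @ v) * v      # reflection in the hyperplane perpendicular to v

x = rng.standard_normal(5)
print(np.allclose(f(f(x)), x))      # True: f is its own inverse

Since $f\circ f=\mathrm{id}$, every $\hat y$ is hit by $\hat x = f(\hat y)$, which is exactly the surjectivity argument the answer hints at.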
H: If $B_n$ is bounded then $\bigcup_{n=0}^{\infty} \varepsilon_n E_n$ is bounded Let $E$ be a metrizable space, that is, there exists a basis $\mathcal{B}:=\{U_n \subset E \; ; \; n\in \mathbb{N}\}$ of neighborhoods of $0 \in E$. I want to prove that: if $\{B_n \subset E \; ; \; n \in \mathbb{N}\}$ is a family of bounded sets, then there exists a sequence $(\varepsilon_n)_{n \in \mathbb{N}} \subset \mathbb{R}_+^{*}$ such that $$B:= \bigcup_{n=0}^{\infty}\varepsilon_n B_n$$ is a bounded subset of $E$. I thought of the following: since, for each $n \in \mathbb{N}$, $B_n \subset E$ is bounded, and $U_n$ is a neighborhood of $0 \in E$, there exists $\lambda_n>0$ satisfying $$B_n \subset \lambda_n U_n.$$ Define, for each $n \in \mathbb{N}$, $\varepsilon_n:=\frac{1}{\lambda_n}$. So, $$B=\bigcup_{n=0}^{\infty}\varepsilon_nB_n \subset \bigcup_{n=0}^{\infty}U_n.$$ From this, how do I conclude that given a neighborhood $V$ of $0 \in E$ there is $\lambda>0$ such that $B \subset \lambda V$? AI: Let $V_1 \supset V_2 \supset \cdots \supset V_m \supset \cdots$ be a countable decreasing basis of neighborhoods of $0$ in $E$ where $V_i$ is balanced for all $i\in \mathbb{N}$. For each $k \in \mathbb{N}$ select $\varepsilon_k>0$ such that $\varepsilon_k B_k \subset V_k$. We claim that the sequence $\{\varepsilon_k\}$ fulfills the desired properties. Indeed, let $n \in \mathbb{N}$ be arbitrary. If $k\geq n$ then $\varepsilon_k B_k \subset V_k \subset V_n$. There is $c_n>0$ such that $c_n \varepsilon_k B_k \subset V_n$ for $k<n$, since $\bigcup_{k=1}^{n}\varepsilon_kB_k$ is bounded. Thus, $B \subset \frac{1}{c_n}V_n \cup V_n$. Let $\lambda_n=\max\{1,\frac{1}{c_n}\}$; then $B \subset \lambda_n V_n$. Therefore, $B$ is bounded.
H: Existence of constant for a "Minkowski-like" inequality to hold on $L_p$ $p<1$. I'm solving some problems to prepare for my PhD qualifying exam on functional analysis and measure theory. I want to prove that given a measure space $(X,\mathcal{A},\mu)$ for every $0<p<\infty$, there exists a constant $C_p>0$ such that for every $f,g\in L_p(\mu)$ we have the inequality $$\lVert f+g \rVert_{L^p} \le C_p(\lVert f \rVert_{L^p}+ \lVert g\rVert_{L^p}).$$ I also need to find the optimal value for $C_p$ when $p<1$. By now, I figured out that the case $p>1$ is a weaker statement than the Minkowski inequality, taking $C_p=1$. I also found this article where it is stated that $\lVert f+g \rVert_p \le 2^\nu (\lVert f \rVert_p+ \lVert g \rVert_p)$, where $\nu=(1-p)/p$, and that it can be proven using the function $(1+x^p)/(1+x)^p$. I don't see how I could relate this function to the desired result. I've tried some ideas but I wasn't able to link this function with a relation with the integral. Also it would be very instructive for me to see the way we approach $L^p$ spaces when the usual tools are out of the picture. Any hint/answer is appreciated, thanks in advance. AI: Note that if you can prove that $ A \leq (1+x^p)/(1+x)^p \leq B $ for $x>0$ and $A,B>0$ independent of $x$, then it follows, by replacing $x$ by $y/x$ (with $x,y>0$) and multiplying up that $$ A(x^p+y^p) \leq (x+y)^p \leq B (x^p+y^p) \tag{1} $$ Replacing $x$ by $\lvert f \rvert$ and $y$ by $\lvert g \rvert$, the right-hand inequality becomes $$ (\lvert f \rvert + \lvert g \rvert )^p \leq B (\lvert f \rvert^p + \lvert g \rvert^p) . $$ The triangle inequality gives the additional inequality $ \lvert f + g \rvert^p \leq (\lvert f \rvert + \lvert g \rvert )^p $ and integrating then gives $$ \lVert f+g \rVert_p^p \leq B( \lVert f \rVert_p^p + \lVert g \rVert_p^p ) , $$ and now we can use the left-hand side of (1), so $$ \lVert f+g \rVert_p^p \leq \frac{B}{A}( \lVert f \rVert_p + \lVert g \rVert_p )^p , $$ and taking $p$th roots will give the result.
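For intuition, here is a numerical look at the bounds $A,B$ for one value of $p<1$ (numpy assumed; the reading that the extremum sits at $x=1$ is mine, suggested by the symmetry of the ratio under $x\mapsto 1/x$):

import numpy as np

p = 0.5
x = np.logspace(-8, 8, 400001)
r = (1 + x**p) / (1 + x)**p
print(r.min(), r.max())      # min -> 1 (as x -> 0 or infinity), max = 2**(1-p) at x = 1
print(2**(1 - p))            # so B/A = 2**(1-p), giving C_p = (B/A)**(1/p) = 2**((1-p)/p)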
H: Continuous surjections from finite-dimensional Banach spaces onto infinite-dimensional Banach spaces I have the following problem but I am not sure how to proceed. I thought I could suppose that there is at least one such continuous surjection and argue toward a contradiction, but afterwards I don't know how to continue. Prove that there is no continuous surjective function from a finite-dimensional Banach space $X$ onto an infinite-dimensional Banach space $Y$. I would appreciate any suggestions. AI: Consider the balls $nB_X$ in $X$, centred at $0$, radius $n$. Note that these sets are compact and union to give $X$. If we have a continuous surjective function $f : X \to Y$, then $f(nB_X)$ are compact and union to give $Y$. But according to my answer here, this would imply $Y$ is finite-dimensional.
H: Determining if $\frac{(-1)^n2n+1}{n-2}$ converges I know a series $S$ with general term $a_n$ diverges if $\lim_{n\to\infty} a_n \neq 0$. My series is $\left\{\frac{(-1)^n2n+1}{n-2}\right\}$. $$\begin{align} \lim_{n\to\infty} a_n &= \lim_{n\to\infty} \frac{(-1)^n2n+1}{n-2} \\ &= \lim_{n\to\infty} \left((-1)^n \frac{2n+1}{n-2}\right) \\ &= (-1)^n \lim_{n\to\infty} \frac{2n+1}{n-2} \\ &= (-1)^n \lim_{n\to\infty} \frac{\frac{2n}{n}+\frac{1}{n}}{\frac{n}{n}-\frac{2}{n}} \\ &= (-1)^n \lim_{n\to\infty} \frac{2+\frac{1}{n}}{1-\frac{2}{n}} \\ &= (-1)^n \left(\frac{2+\frac{1}{\infty}}{1-\frac{2}{\infty}}\right) \\ &= (-1)^n \left(\frac{2+0}{1-0}\right) \\ &= (-1)^n \left(\frac{2}{1}\right) \\ &= 2(-1)^n \end{align}$$ Since the limit can never be zero (either $-2$ or $2$), I wonder if this means the series diverges. Can I do it this way? AI: Just for your curiosity since you already received good answers. $$\sum_{n=3}^\infty\frac{(-1)^n 2n+1}{n-2} x^{n-2}=-\frac{2 x+(1+x) \big[ \log (1-x)+4 \log (1+x)\big]}{1+x }$$ See what happens when $x \to 1^-$.
H: Continuous extension of a partially defined function on a finite subspace Let $X$ and $Y$ be topological spaces, and let $x_1,...,x_m$ be distinct points in $X$. Let $f: \{x_1,...,x_m\} \to Y$ be a function. Is there a continuous extension of $f$ from $\{x_1,...,x_m\}$ to $X$? It's easy to see that this is true if $X = Y = \mathbb R$, for example. But I'm not sure how far this generalizes. Does the result hold for arbitrary topological spaces, or do we need some assumptions about $X$ and $Y$? AI: It does not hold for arbitrary topological spaces if $m>1$: there are spaces $X$ with the property that every real-valued continuous function on $X$ is constant. Thus, if $x$ and $y$ are distinct points of such a space $X$, $r$ and $s$ are distinct real numbers, $f'(x)=r$, and $f'(y)=s$, there is no continuous function $f:X\to\Bbb R$ extending $f'$. It’s not hard to construct such counterexamples if we don’t impose any ‘niceness’ conditions on $X$ — just give $X$ the indiscrete topology, for instance — but in this answer I describe a construction, due to Eric van Douwen, of such spaces starting with any $T_3$-space containing two points that cannot be separated by a continuous real-valued function; in this answer I describe such a $T_3$-space, due to John Thomas. If $X$ is a $T_4$-space, the Tietze extension theorem ensures that every real-valued function on a finite subset of $X$ has a continuous extension to $X$. The accepted answer to this question is also relevant.
H: How was the vector magnitude derived? The magnitude of an $n$-vector is defined as: $$ \sqrt{a_1^2+a_2^2+...+a_n^2} $$ or for those that prefer sigma notation: $$ \sqrt{\sum_{i=1}^n a_i^2} $$ How would this have been derived? Or was this one of those cases where mathematicians went trial-by-error to find a formula that seemed to work and then proved it later? AI: Note that for a vector $[a_1]\in\mathbb{R}$, the magnitude is obviously $\sqrt{a_1^2}=|a_1|$. For a vector $\begin{bmatrix}a_1\\a_2\end{bmatrix}\in\mathbb{R}^2$, using Pythagoras, the magnitude is $\sqrt{a_1^2+a_2^2}$. Then for a vector $\begin{bmatrix}a_1\\a_2\\a_3\end{bmatrix}\in\mathbb{R}^3$, consider a rectangular box with edge lengths $a_1$, $a_2$, $a_3$, where $A$ is one corner, $C$ is the opposite corner of its base, and $B$ is the corner of the box diagonally opposite $A$ (the original answer illustrated this with a figure). In this case, $x=a_1$, $y=a_2$ and $z=a_3$. The length by Pythagoras of $AC$ is $\sqrt{a_1^2+a_2^2}$. Using Pythagoras again to find $AB$, the magnitude of our vector, we get: $$AB=\sqrt{AC^2+a_3^2}=\sqrt{\sqrt{a_1^2+a_2^2}^2+a_3^2}=\sqrt{a_1^2+a_2^2+a_3^2}$$ Similarly for $\begin{bmatrix}a_1\\a_2\\a_3\\a_4\end{bmatrix}\in\mathbb{R}^4$, we get: $$\sqrt{\sqrt{\sqrt{a_1^2+a_2^2}^2+a_3^2}^2+a_4^2}=\sqrt{\sqrt{a_1^2+a_2^2+a_3^2}^2+a_4^2}=\sqrt{a_1^2+a_2^2+a_3^2+a_4^2}$$ While this doesn't provide a rigorous proof for an n-dimensional formula in the Euclidean metric, it provides some intuition into how you generalise the Pythagorean formula to dimensions higher than 2. You can prove the general case with induction, which I will leave to you if you want. I hope this helps!
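The coordinate-at-a-time use of Pythagoras is easy to mimic numerically (numpy assumed; the sample vector is arbitrary):

import numpy as np

a = np.array([1.0, 2.0, 2.0, 4.0])

m = 0.0
for ai in a:                   # fold in one coordinate at a time, as in the induction
    m = np.sqrt(m**2 + ai**2)

print(m, np.linalg.norm(a))    # both 5.0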
H: Why are isometries all and only $f:\mathbb{R}^n \rightarrow \mathbb{R}^n $ :$f(x)=Nx+p$,where $N$ ortogonal A passage in my notes reads: Isometries in metric spaces coincide with the usual ones when considering mappings $f:\mathbb{R}^n \rightarrow \mathbb{R}^m $ with the respective Euclidean metrics. In particular, every isometry $f$ of $\mathbb{R}^n$ with itself is of the form $f(\vec{x})=N\vec{x} + \vec{p}$,where $N \in O(n)$ is an $n \times n$ orthogonal matrix and $\vec{p} \in\mathbb{R}^n.$ Can someone elaborate on this? Why do all isometries of $\mathbb{R}^n$ take this form? AI: I think this argument works but please correct me if wrong, or if I made any assumptions. Consider the function $g(x) = f(x) - f(0)$. If we show that $g$ is a linear isometry, then we are done. The isometry is trivial. $$||g(x) - g(y)|| = ||f(x) - f(0) - f(y) + f(0)|| = ||f(x) - f(y)|| = ||x-y||$$ The tricky part is showing linearity. Okay so, we immediately have $g(0) = f(0) - f(0) = 0$ so that's great. Let's also note that $g$ preserves norms of vectors. Indeed $||g(x)|| = ||f(x) - f(0)|| = ||x-0|| = ||x||$. How do we show $g(x+y) = g(x) + g(y)$? Let's try something slightly different. Let us show that $g(x-y) = g(x) - g(y)$. We can almost say it's the same as showing $g(x+y) = g(x) + g(y)$ but remember we haven't shown that $g$ respects scalar multiplication yet! Now consider a triangle in $\mathbb{R}^n$ with two sides $x$ and $y$ (starting at the origin). Then $x-y$ represents the third side of the triangle. So $x, y, x-y$ as points, get mapped to $g(x), g(y), g(x-y)$. Crucially, this is also a triangle "based" at the origin (since the line segment from $0$ to $x$ gets mapped to the line segment between $0$ and $g(x)$). Note that since $g$ is an isometry, the triangle formed by these three points is congruent to the one formed by $x, y, x-y$. But saying this is exactly saying that $g(x-y) = g(x) - g(y)$. Great, we're almost there. If we want to show that $g(ax) = ag(x)$, we again use a similar argument as before. Consider points $x$ and $ax$. The relevant "triangle" (actually a line here) is formed by $x, ax$ and $(a-1)x$. This gets mapped to a line and so we have $g(ax)$ and $g(x)$ are collinear (with the orientations preserved i.e. if $a > 0$ then they point in the same direction, and if $a < 0$ then they point in opposite directions). But moreover we also know that $||g(ax)|| = ||ax|| = |a| ||x||$ so in fact it preserves the scaling factor. These two facts combined give us $g(ax) = ag(x)$. So we have shown that $g$ is a linear isometry so it's of the form $Nx$ for some $N \in O(n)$. So $f(x) - f(0) = Nx \implies f(x) = Nx + f(0) = Nx + p$. The gist is isometries take the triangle formed by two vectors to another triangle formed by two vectors, and they are congruent. This congruency implies linearity of the "shifted" function. Even more generally, isometries always preserves lengths and angles (so they preserve the dot product). In fact you can generalize this by saying that the group of isometries on any normed vector space $V$ (whose metric is induced by the norm) is equal to $V \rtimes O(V)$.
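A numerical illustration of the easy direction, that every map $x\mapsto Nx+p$ with $N$ orthogonal is an isometry (numpy assumed; the converse is what the argument above establishes):

import numpy as np

rng = np.random.default_rng(1)
N, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix via QR
p = rng.standard_normal(3)

f = lambda x: N @ x + p
x, y = rng.standard_normal(3), rng.standard_normal(3)
print(np.isclose(np.linalg.norm(f(x) - f(y)),
                 np.linalg.norm(x - y)))           # True: distances are preserved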
H: Equivalence relation in construction of Grothendieck group Sorry if this has been asked before but I couldn't find the question I have. Yesterday I read the wikipedia page for a Grothendieck group. It provided two explicit constructions given a commutative monoid $M$. One construction was taking the cartesian product of $M$, defining an equivalence relation on this cartesian product and defining an operation between the equivalence classes. The equivalence relation says $(m_1,m_2)\sim (n_1,n_2)$ iff $m_1+n_2+k=m_2+n_1+k$ for some $k\in M$. It then notes the element $k$ is necessary because the cancellation doesn't necessarily hold on an arbitrary commutative monoid $M$. I didn't understand this comment. So I tried to create an example with a commutative monoid that doesn't have the cancellation law. I took $\mathbb{R}$ with multiplication as my operation, and my equivalence relation without the element $k$, so I could see what fails. For any $x,y\in \mathbb{R}$ we have that $x*0=0*y$ where $x$ and $y$ could be different elements. Therefore for any $x,y\in\mathbb{R}$ we have $(x,0)\sim(y,0)$. I saw this prevents us from having an inverse element for the equivalence class of these elements. We have that $[(1,1)]$ is the identity of the "group" and we're supposed to have $[(0,x)]$ be the inverse element of $[(x,0)]$, however $$[(x,0)]*[(0,x)]=[(x*0,0*x)]=[(0,0)]$$ Then I immediately realized everything fails because the relation I use isn't even an equivalence relation. For any $x\in \mathbb{R}$ we have $(x,0)\sim (0,0)$ and $(0,0)\sim(0,x)$ but we don't have $(x,0)\sim(0,x)$ unless $x=0$. I still don't see how involving an element $k$ saves all of this though. If I take the equivalence relation defined in the wikipedia instead, then with $k=0$ we do have $(x,0)\sim(0,x)$ since $x*x*0=0*0*0$. But what about the inverse elements then? What would be the inverse of $[(0,x)]$ ? Since now we have that $[(0,x)]=[(x,0)]$. Did I do something wrong? Did I overlook anything? AI: The idea behind creating a group from a commutative monoid comes from how you can go from the natural numbers to the integers. For any integer $n\in\mathbb Z$, you have a "positive part" $n^+ := \max(n,0)$ and a "negative part" $n^- := \max(-n,0)$, and from this we always have the identity $n = n^+ - n^-$, and both of these will be natural numbers. Since a number is completely characterised by its positive and negative parts this way, we can instead think of $\mathbb Z$ as a set of ordered pairs $(a,b)$ for $a,b\in\mathbb N$, where the pair $(a,b)$ corresponds to the integer $a-b$. In this way, addition in $\mathbb Z$ is induced by arithmetic in $\mathbb N$: since $(a-b)+(a'-b') = (a+a')-(b+b')$, the addition on ordered pairs should just be given pointwise. The only problem remaining now is that there are many pairs that represent the same integer. We can remedy this by observing that $a-b=a'-b'$ iff $a+b'=a'+b$, so this is how we invoke an equivalence relation on pairs of natural numbers: $(a,b)\sim(a',b')$ iff $a+b'=a'+b$. With all this structure, $(\mathbb N\times\mathbb N)/(\sim)$ will be an abelian group that is isomorphic to $\mathbb Z$. Now, like the wikipedia article mentioned, we can almost do the same thing for an arbitrary commutative monoid $M$. By starting with ordered pairs $(m,n)\in M\times M$, the intuition is that $(m,n)$ "$=$" $m-n$. The problem, however, is that saying $(m,n)\sim(m',n')$ iff $m+n'=m'+n$ may not be an equivalence relation, and the issue here is in transitivity.
Suppose $(m,n)\sim(m',n')$ and $(m',n')\sim(m'',n'')$, then we have $m+n'=m'+n$ and $m'+n''=m''+n'$. For this to imply $(m,n)\sim(m'',n'')$, we need a cancellation law: this way, we can say $$ m+n''+m' = m+(m'+n'') = m+(m''+n') = (m+n')+m'' = (m'+n)+m'' = m''+n+m' $$ and so by cancelling $m'$ we deduce $m+n''=m''+n$ and thus $(m,n)\sim(m'',n'')$. In a general commutative monoid, since we may not have cancellation, we have to adapt the definition to allow us to "put $m'$ there" in my above argument. Therefore, they introduce an element $k$ to serve this role. Now, let's look at your example of the multiplicative monoid on $\mathbb R$. The goal is to make $\mathbb R$ into a group by providing inverses where we need them. As you noticed, the problem lies in the existence of an annihilator $0$, which is capable of killing everything in $\mathbb R$. Note that in comparison, if we remove $0$, we actually already get a group, called the group of units $\mathbb R^\times$. However, the Grothendieck group aims to not remove any elements of $\mathbb R$, at whatever the cost. Unfortunately, the cost is everything: the resulting group will be trivial. For any $x,y\in\mathbb R$, we will have $(x,y)\sim(0,0)$ because by setting $k=0$ we get $x\cdot0\cdot0=0\cdot y\cdot0$. Why? The problem is that inverting $0$ is destructive. Recall that we should think of the ordered pair $(x,y)$ as $xy^{-1}$ (since the operator here is multiplication rather than addition), so let's think about the element $(1,0)$. We can interpret this element as $0^{-1}$, since it exists to serve as an inverse to $0$ in the Grothendieck group, but since $0\cdot x=0$ for any $x$, having an inverse for zero gives, for every $x$, $$x = 1\cdot x = (0^{-1}\cdot 0)\cdot x = 0^{-1}\cdot(0\cdot x) = 0^{-1}\cdot 0 = 1,$$ which implies that every $x$ in the Grothendieck group equals the identity! I know this was a very long answer, but I hope this is helpful.
H: The best way to reduce a set I was always told that the best way to guess what number someone's thinking of is to split the set into half continuously until I reach one item (ex. guess number from 1-100; 1-50; 1-25; 1-12; 1-6; 1-3 ; 1) and that it will generally be guessed in $ \lceil \log_2 n \rceil $ guesses, where $n$ is the set size. However, this video seems to come to a better conclusion on splitting a set. They propose a 3 move solution in a set of 12, where $ \lceil \log_2 12 \rceil = 4$. How are these different? AI: You're getting very different information in the two puzzles. In the guessing game at each stage you simply learn which of two complementary sets contains the number. Since there are only $2^n$ possible sequences of yes and no, $n$ stages can distinguish at most $2^n$ different numbers. In the coin problem each weighing has three possible outcomes (left pan heavier, right pan heavier, or balance), so at each stage you learn whether certain coins are potentially light, potentially heavy, or definitely genuine; $n$ weighings can therefore distinguish up to $3^n$ cases. This is demonstrably more information, since the $12$-coin problem can be solved in $3$ weighings even though $2^3<12$ (while $3^3=27$ is large enough).
H: Let $f(x)=x^5+a_1x^4+a_2x^3+a_3x^2$ be a polynomial function. If $f(1)<0$ and $f(-1)>0$. Then Let $f(x)=x^5+a_1x^4+a_2x^3+a_3x^2$ be a polynomial function. If $f(1)<0$ and $f(-1)>0$. Then $f$ has at least $3$ real zeroes $f$ has at most $3$ real zeroes $f$ has at most $1$ real zero All zeroes are real My attempt:- From the given conditions, we get $f(-1)>0 \implies a_1-a_2+a_3>1$ $f(1)<0 \implies a_1+a_2+a_3<-1$ By the intermediate value theorem, $f$ has at least a zero in $[-1,1]$ $f(0)=0\implies 0$ is a zero of $f(x).$ How do I draw a conclusion from this? AI: We know that $$\lim_{x\to\infty}f(x)=\infty$$ and $$\lim_{x\to-\infty}f(x)=-\infty$$ Therefore there are constants $C_-,C_+$ with $C_-<-1$, $C_+>1$ and so that $f(C_-)<0$, $f(C_+)>0$. Apply the intermediate value theorem on $[C_-,-1]$, $[-1,1]$, $[1,C_+]$ to get at least 3 real zeroes.
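A concrete instance checked numerically (numpy assumed): taking $a_1=a_3=0$ and $a_2=-2$ gives $f(-1)=1>0$ and $f(1)=-1<0$, and indeed all the zeroes are real here:

import numpy as np

coeffs = [1, 0, -2, 0, 0, 0]        # f(x) = x^5 - 2x^3 = x^3 (x^2 - 2)
roots = np.roots(coeffs)
print(np.sort(roots.real))          # -sqrt(2), 0, 0, 0, sqrt(2): at least 3 real zeroes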
H: A combination problem with repeated number We know about the combination problems. However, if we say the numbers are $1,1,2,3,4,5$, then how many unique numbers with $6$ digits can be written with these numbers? AI: First place the two $1$'s: there are $\binom{6}{2}$ ways to choose their positions; then multiply by $4!$ to permute the other four digits, giving $\binom{6}{2}\cdot 4!=360$. Another way to do it: take all $6!$ arrangements and then divide by $2!$. In this case, what you're doing is permuting every digit and then dividing by the permutations of the repeated digit. For instance, if you had $1,1,2,2,3,4$ you'd compute $6!$ divided by $(2!\cdot 2!)$.
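A brute-force confirmation of the count (plain Python):

from itertools import permutations
from math import comb, factorial

print(len(set(permutations([1, 1, 2, 3, 4, 5]))))               # 360
print(comb(6, 2) * factorial(4), factorial(6) // factorial(2))  # 360 360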
H: $\forall x \in \mathbb{R}^+ ( \exists M \in \mathbb{Z}^+ ( x > 1/M > 0))$: Cauchy Sequences Synopsis I'm sure this question can be found elsewhere on this website, but I haven't yet found a perspective though that uses the construction of the reals through Cauchy sequences to prove this statement. The reason why I'm using Cauchy sequences to prove this is because my textbook (Tao Analysis 1) builds up the reals first before delving into the more well known properties of it (like the Axiom of Completeness). I would like to ask the community here just to help verify the validity of my proof. Note that the definitions Tao uses are often non-standard and peculiar. So in my proof, I'll try my best to highlight any definitions I use and expand them out. Also note that we've already proved that for every rational $q$, there exists an integer $N$ such that $N \leq q < N+1$. Exercise Show that for any positive real number $x>0$ there exists a positive integer $N$ such that $x > 1/N > 0$. Proof Since $x$ is a positive real number, $x = \lim_{n \rightarrow \infty}a_n$ for some rational sequence $(a_n)_{n=1}^{\infty}$ positively bounded away from zero. In other words, there exists a rational $c > 0$ such that $a_n \geq c$ for all $n \geq 1$. Because $(a_n)_{n=1}^{\infty}$ is Cauchy, we know that for each $\epsilon > 0$, there exists some $N \geq 1$ such that for each $j,k \geq N$, $|a_j - a_k| \leq \epsilon$. Now take $\epsilon = c/2$ and its corresponding $N$. Then $a_N - \epsilon > 0$. Now construct a new sequence $(b_n)_{n=1}^{\infty}$ such that if $n \leq N$, then $b_n = a_N$, and if $n > N$, then $b_n = a_n$. Since for all $j,k > N$, $-\epsilon \leq a_j - a_k \leq \epsilon$, it is clear that for all $n \geq 1$, $a_N - \epsilon \leq b_n \leq a_N + \epsilon$. Now we wish to find some integer $M > 0$ such that $1/M < a_N - \epsilon$. So we must find some integer $M$ such that $M > 1/(a_N - \epsilon)$. But we know that such an integer exists by a previous proposition, and since $1/M < a_N - \epsilon \leq b_n$ for all $n \geq 1$, we can conclude that $0 < 1/M \leq x$. If $1/M < x$, then the exercise is proven. If $1/M =x$, just take $M+1$. Then the exercise is satisfied. So either way, we have found a positive integer $M$ such that $x > 1/N > 0$. AI: You can make it clearer by specifying that $x$ is the limit of a rational sequence. You also need a bit more detail about how that sequence is bounded away from $0$: there are an $\epsilon>0$ and an $n_0\in\Bbb Z^+$ such that $a_n\ge 2\epsilon$ for each $n\ge n_0$. Now use the fact that the sequence is Cauchy to say that there is an $n_1\ge n_0$ such that $|a_j-a_k|\le\epsilon$ whenever $j,k\ge n_1$: this ensures that $a_{n_1}-\epsilon>0$, since $a_{n_1}\ge 2\epsilon$. Now define the $b_n$ as before: $b_n=a_{n_1}$ if $n\le n_1$, and $b_n=a_n$ otherwise. Your argument that $a_{n_1}-\epsilon\le b_n\le a_{n_1}+\epsilon$ for all $n\ge 1$ is fine, but after that you have a problem: all of a sudden you’re using $N$ for two different integers, my $n_1$ and a new one that I’m going to call $m$. What you want now is some positive integer $m$ such that $\frac1m<a_{n_1}-\epsilon$ or, equivalently, $m>\frac1{a_{n_1}-\epsilon}$. Then you can argue that $\frac1m<a_{n_1}-\epsilon\le b_n$ for all $n\ge 1$ and conclude that $0<\frac1m\le x$. At that point there’s really no need to split the argument into cases depending on whether that last inequality is strict: you might as well just observe that $0<\frac1{m+1}<\frac1m\le x$.
H: Simple conditions on Radon-Nikodym derivative to obtain equivalent measures Are there some simple conditions for the converse statement of the following statement: If $\mu$ and $\nu$ are equivalent (i.e. $\mu \ll \nu$ and $\nu \ll \mu$) then $$ \frac{d\mu}{d\nu}=\left(\frac{d\nu}{d\mu}\right)^{-1} $$ Is strict positivity of the derivative one, for instance? AI: If $\frac{d\mu}{d\nu}$ is not strictly positive $\nu$-almost everywhere, then there is a $\nu$-non-null set on which $\frac{d\mu}{d\nu}$ vanishes; thus $\mu$ vanishes on that set, and hence $\mu$ and $\nu$ are not equivalent. Conversely, if $\frac{d\mu}{d\nu}$ is strictly positive $\nu$-almost everywhere, then on every $\nu$-non-null set the integral of $\frac{d\mu}{d\nu}$ is nonzero, thus $\mu$ and $\nu$ are equivalent.
H: Non existence of closure of an operator A linear operator $\Omega$ on $C(X)$ is closed if its graph is a closed subset of $C(X)\times C(X)$. A linear operator $\bar{\Omega}$ is called the closure of $\Omega$ if $\bar{\Omega}$ is the smallest closed extension of $\Omega$. Not every linear operator has a closure. The difficulty which may arise is that the closure of the graph of the operator may not be the graph of a linear operator. It may instead correspond to a "multi-valued" operator. An example of a linear operator with no closure is: $$D(\Omega)=\{f\in C[0,1],f'(0) \text{ exists}\}\text{ and } \Omega f(\eta)\equiv f'(0) \text{ for } f\in D(\Omega) $$ where $D(\Omega)$ is the domain of $\Omega$. I don't understand how the closure of the graph of the operator defined above is not the graph of a linear operator? AI: The graph consists of pairs $(f,a)$ where $f$ is continuous, $f'(0)$ exists and $a=f'(0)$ (a constant function). Clearly $(0,0)$ belongs to this graph and hence to its closure. Now let $f_n(x)=\frac {(1-x)^{n}} n$. Then $f_n \to 0$ uniformly and $f_n'(0) \to -1$. Hence $(0,-1)$ belongs to the closure of the graph. If this closure is the graph of some operator then the second coordinate is uniquely determined by the first coordinate. Since $(0,0)$ and $(0,-1)$ are both in this set it cannot be a graph of any operator.
H: General Gaussian integrals over the positive real axis. Everyone has a special memory from their multivariable calc class deriving the famous Gaussian integral: $$ \int_0^\infty e^{-x^2} \,dx = \frac{\sqrt\pi}{2}$$ A more general case is easy to find online and (not too hard to do yourself): $$\int_{-\infty}^\infty e^{-ax^2+bx+c} \,dx = \sqrt{\frac{\pi}{a}}e^{\frac{b^2}{4a}+c}$$ While I was trying to solve a certain Laplace Transform, I went looking for the answer to a similar generalization over the postive real axis: $$\int_0^\infty e^{-ax^2+bx+c} \, dx $$ Any help approaching this last question would be extremely appreciated! AI: You can also express the integral in terms of the Lower incomplete Gamma function. $$\int _0^{\infty }e^{-ax^2+bx+c}\:dx=e^c\underbrace{\int _0^{\infty }e^{-a\left(\left(x-\frac{b}{2a}\right)^2-\frac{b^2}{4a^2}\right)}\:dx}_{u=x-\frac{b}{2a}}$$ $$=e^c\int _{-\frac{b}{2a}}^{\infty }e^{-a\left(u^2-\frac{b^2}{4a^2}\right)}\:du$$ $$=e^{c+\frac{b^2}{4a}}\int _{-\frac{b}{2a}}^{\infty }e^{-au^2}\:du=e^{c+\frac{b^2}{4a}}\int _0^{\infty }e^{-au^2}\:du\:-e^{c+\frac{b^2}{4a}}\underbrace{\int _0^{-\frac{b}{2a}}e^{-au^2}\:du}_{u=-u}$$ $$=e^{c+\frac{b^2}{4a}}\int _0^{\infty }e^{-au^2}\:du\:+e^{c+\frac{b^2}{4a}}\underbrace{\int _0^{\frac{b}{2a}}e^{-au^2}\:du}_{t=au^2}$$ $$=\frac{1}{2}\sqrt{\frac{\pi }{a}}\:e^{c+\frac{b^2}{4a}}+\frac{e^{c+\frac{b^2}{4a}}}{2\sqrt{a}}\int _0^{\frac{b^2}{4a}}e^{-t}\:t^{-\frac{1}{2}}\:dt=\frac{e^{c+\frac{b^2}{4a}}}{2\sqrt{a}}\left(\sqrt{\pi }+\gamma \left(\frac{1}{2},\frac{b^2}{4a}\right)\right)$$ If you want to express it in terms of the Error function you can use the following identity $$\gamma \left(\frac{1}{2},x\right)=\sqrt{\pi }\:\text{erf}\left(\sqrt{x}\right)$$
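A numeric check of the closed form via the erf identity (scipy assumed; the parameter values are arbitrary, with $b\geq 0$ so that $\sqrt{b^2/4a}=b/(2\sqrt a)$):

import numpy as np
from scipy import integrate, special

a, b, c = 1.3, 0.7, -0.2
num, _ = integrate.quad(lambda x: np.exp(-a*x**2 + b*x + c), 0, np.inf)

# sqrt(pi) + gamma(1/2, b^2/(4a)) = sqrt(pi) * (1 + erf(b / (2*sqrt(a))))
closed = np.exp(c + b**2/(4*a)) / (2*np.sqrt(a)) \
         * np.sqrt(np.pi) * (1 + special.erf(b / (2*np.sqrt(a))))
print(num, closed)   # the two values agree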
H: Let $G$ be a graph of order $n$. Prove that $n\leq \mathcal{X}(G) \mathcal{X}(\overline{G})$ and $2\sqrt{n}\leq \mathcal{X}(G)+\mathcal{X}(\overline{G})$ Given a graph $G$, the $\textit{chromatic number}$ of $G$, denoted by $\mathcal{X}(G)$, is the smallest integer $k$ such that $G$ is $k-$colorable. $\textbf{Problem.}$ Let $G$ be a graph of order $n$. Prove that $1.$ $n\leq \mathcal{X}(G) \mathcal{X}(\overline{G})$. $2.$ $2\sqrt{n}\leq \mathcal{X}(G)+\mathcal{X}(\overline{G})$. Note: $\overline{G}$ is the complement of $G$. For the second part of the problem, I was trying to use AM-GM together with the first part. I was able to prove the first part, but how could I prove the second part of the problem? AI: The second part of the problem immediately follows from AM-GM and the first part, since $$\sqrt{n} \leq \sqrt{\chi(G) \chi(\bar G)} = GM \leq AM =\frac{\chi(G) + \chi(\bar G)}{2}$$ The first part in my opinion is slightly more challenging. To see why this is, note that if $\chi(G) = k$, then the vertices of $G$ can be divided into $k$ independent sets. Furthermore, one of these independent sets has size at least $n / k$. The vertices in any such independent set become a clique of size at least $n / k$ in $\bar G$. But the chromatic number of a clique is the number of vertices, so the chromatic number of $\bar G$ must be at least $n / k$. It then follows that $\chi(\bar G) \geq n / \chi(G) \iff \chi(G) \chi(\bar G) \geq n$.
H: How to find pair of reflected point of $f(x)$, $f^{-1}(x)$ and $g(x)$? $f(x)=4+\sqrt{x-2}\\f^{-1}(x)=x^2-8x+18\\g(x)=x$ Obtain the graph of $f$ , $f^{-1}(x)$, and $g(x)=x$ in the same system of axes. About what pair (a, a) are (11, 7) and (7, 11) reflected about? After I graphed three equations, there is no pair point met at (11,7) and (7,11). What does this question mean? About what pair (a, a) are (11, 7) and (7, 11) reflected about? AI: Your inverse of $f$ is correct, so graphing all three functions should yield the following: Where by setting all three functions equal to each other and solving for $x$ we get the intersection of the functions at the point $(6,6)$ (a point of the form $(a,a)$ (where $a$ is a real number)). However, this is not the pair on the line $g(x) = x$ where $(11,7)$ and $(7,11)$ are reflected about. Points of the form $(a,b), (b,a)$ (where $b$ is a real number) are reflected about the midpoint between the two points. So, in this case the midpoint between $(11,7)$ and $(7,11)$ is $(9,9)$, i.e. the point of reflection shown in the (piece of the) graph below. Keep in mind: $f(11) = 7$ and $f^{-1}(7) = 11$
H: Whitney extension theorem for Hölder spaces The usual Whitney extension theorem says that Whitney data with remainders like $R_\alpha=o(|x-y|^{k-|\alpha|})$ extend to a $C^k$ function. If we instead have the stronger, quantitative bound $R_\alpha=O(|x-y|^{k+\lambda-|\alpha|})$ for some $\lambda\in(0,1]$, do we get a $C^{k+\lambda}$ function? That is, a function whose $k$th derivatives are $\lambda$-Hölder continuous? AI: Yes, see Stein's Singular Integrals and Differentiability Properties of Functions, Chapter VI, Theorem 4: https://archive.org/details/singularintegral0000stei
H: Prove the condition below mentioned. Let $f(x)$ denote a polynomial in one variable with real coefficients, such that $f(a)=1$ for some real number $a$. Does there exist a polynomial $g(x)$ with real coefficients, such that, if $p(x)=f(x) g(x)$, then $p(a)=1$, $p^{\prime}(a)=0$ and $p^{\prime \prime}(a)=0$? Justify your answer. My approach: $p(x)=f(x) g(x)$, so $p(a)=f(a) g(a)=1\cdot g(a)$. Further than this I am getting no clue. Any hint will be highly appreciated AI: We are going to deduce $g(x)$ from the conclusions: $$p(a)=f(a)g(a)=1\Rightarrow g(a)=1$$ Also $$p'(a)=f'(a)g(a)+g'(a)f(a)$$ $$p'(a)=0=f'(a)+g'(a)\Rightarrow-f'(a)=g'(a)$$ Furthermore $$p''(a)=f''(a)g(a)+f'(a)g'(a)+g''(a)f(a)+g'(a)f'(a)=0$$ From the above then: $$p''(a)=0=f''(a)+g''(a)-2(f'(a))^2\Rightarrow g''(a)=2(f'(a))^2-f''(a)$$ Then $g(x)$ must satisfy $g(a)=1$, $g'(a)=-f'(a)$, $g''(a)=2(f'(a))^2-f''(a)$. Any polynomial with these values works, for instance the quadratic $$g(x)=1-f'(a)(x-a)+\frac{2(f'(a))^2-f''(a)}{2}(x-a)^2,$$ so the answer is yes: such a $g$ always exists.
H: How to plot a plane with the given normal vector (numerically)? I have some point and a normal vector. I want to plot a plane defined with this normal vector and a point. Analytically, we use formula (for simplicity, in 2d case): $n_1 (x-x_0) + n_2(y-y_0) = 0$ So, we get an equation and that is it. But how can I do it in programming languages? I mean, what is an algorithm for it? AI: The following might be a useful approach. Starting with the general expression for a plane in 3D, solve for z. $$ n_{1}\left(x-x_{0}\right)+n_{2}\left(y-y_{0}\right)+n_{3}\left(z-z_{0}\right)=0 $$ $$ n_{1}\left(x-x_{0}\right)+n_{2}\left(y-y_{0}\right)=-n_{3}\left(z-z_{0}\right) $$ $$ -\frac{n_{1}}{n_{3}}\left(x-x_{0}\right)-\frac{n_{2}}{n_{3}}\left(y-y_{0}\right)+z_{0}=z $$ Once you have this, you can generate a grid of $x,y$ values then compute the corresponding $z=f\left(x,y\right)$ and plot. The below is an example code which does this (the resulting figure is omitted here). I hope this helps.

import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d  # registers the 3d projection

fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')

# Plane parameters
n_1 = 2
n_2 = 0
n_3 = 1
x_0 = 0
y_0 = 0
z_0 = 10

# Create a grid of x and y
x = np.arange(-5, 5, 0.1)
y = np.arange(-5, 5, 0.1)
xg, yg = np.meshgrid(x, y)

# Compute z = f(x,y)
z = -n_1/n_3*(xg - x_0) - n_2/n_3*(yg - y_0) + z_0

ax.plot_wireframe(xg, yg, z, rstride=10, cstride=10)
plt.xlabel('x')
plt.ylabel('y')
plt.title('Plane: z=f(x,y)')
plt.show()
H: Short, yet powerful proofs One of my favorite proofs is the following: Claim: There exists irrational numbers $\alpha$ and $\beta$ such that $\alpha^{\beta}$ is rational. Proof: Let $\alpha = \sqrt{2}^{\sqrt{2}}$ and $\beta = \sqrt{2}$ so $\alpha, \beta \notin \mathbb{Q}$. Therefore, $$\alpha^{\beta} = (\sqrt{2}^{\sqrt{2}})^{\sqrt{2}} = \sqrt{2}^{{\sqrt{2}}\cdot{\sqrt{2}}} = \sqrt{2}^{2} = 2 $$ So $\alpha^{\beta} \in \mathbb{Q}$. (Strictly speaking, the claim $\sqrt{2}^{\sqrt{2}}\notin\mathbb{Q}$ needs justification; it follows from the Gelfond–Schneider theorem. The classical non-constructive version sidesteps this: either $\sqrt{2}^{\sqrt{2}}$ is rational, in which case take $\alpha=\beta=\sqrt2$, or it is irrational, in which case the computation above applies.) With that said: are there are any other proofs to claims, theorems, lemma's, etc. that are short and powerful like this one? Please do share. Visual proofs are also welcome! AI: For example, let $x$ be a root of the equation: $$\left(\sqrt2\right)^x=3.$$ Prove that this $x$ is an irrational number (it's not so hard by contradiction).
H: Showing $\operatorname{diag}(x)-xx'$ is positive definite on the tangent space of the unit simplex. Let $x$ be in the unit simplex (i.e. $\sum_i x_i = 1, x_i \geq 0$ ). I want to show that $\operatorname{diag}(x) - xx'$ is positive definite on the tangent space of the simplex. That is, $z'[\operatorname{diag}(x)-xx']z\geq 0$ (with equality only for $z=0$) for all $z$ such that $\sum_i z_i =0$. By $\operatorname{diag}(x)$, I mean the diagonal matrix D with $x$ along the diagonal (i.e. $D_{ii} = x_i$ and $D_{ij}=0$ for $i\neq j$). I am not sure if this result is true but it seems to be presupposed in something I was reading and I have been unable to disprove it by example in matlab. I believe this comes down to showing that $\sum_i z_i^2 x_i - (\sum_i z_i x_i)^2 \geq 0$ for $x$ in the simplex and $z$ in the tangent space. AI: This is not true. Let $x$ be an $n$-vector and let $P=\operatorname{diag}(x)-xx^\ast$. When $x=(1,0)^\top$, we have $P=0$. Hence $P$ is zero (and cannot possibly be positive definite) on every nonzero subspace of $\mathbb R^2$. However, the following statements are true: $P$ is positive semidefinite (and in particular it is PSD on $e^\perp$). $P$ is positive definite on $e^\perp$ if and only if $x>0$ entrywise. Let $y_i=\sqrt{x_i}z_i$. By Cauchy-Schwarz inequality, $$ \left|\sum_ix_iz_i\right|^2 =\left|\sum_i\sqrt{x_i}y_i\right|^2 =\left|\langle\sqrt{x},y\rangle\right|^2 \le\left\|\sqrt{x}\right\|^2\left\|y\right\|^2 =\sum_ix_i|z_i|^2.\tag{1} $$ Hence $P$ is always positive semidefinite. This proves statement $1$. When $x>0$, since every $z\in e^\perp\setminus\{0\}$ is not parallel to $e$, $y$ is not parallel to $\sqrt{x}$. Hence strict inequality holds in $(1)$ and $P$ is positive definite on $e^\perp$. When some $x_i$ is zero, we may assume without loss of generality that $x_i\ne0$ for all $i\le k$ and $x_i=0$ for all $i>k$. Let $z=\sum_{i=1}^ke_i-ke_{k+1}=(1,\ldots,1,-k,0,\ldots,0)^\top$. Then $0\ne z\in e^\perp$ but $y=\sqrt{x}$. Hence equality holds in $(1)$ and $P$ is not positive definite on $e^\perp$. This proves statement 2. Alternatively, you may prove the two statements above by using matrix congruence. Let $$ A=\pmatrix{1&x^\ast\\ x&\operatorname{diag}(x)}. $$ Since $e^\ast x=1$, $A$ is congruent to $$ \pmatrix{1&-e^\ast\\ 0&I}\pmatrix{1&x^\ast\\ x&\operatorname{diag}(x)}\pmatrix{1&0\\ -e&I}=\pmatrix{0&0\\ 0&\operatorname{diag}(x)}\tag{2} $$ and also to $$ \pmatrix{1&0\\ -x&I}\pmatrix{1&x^\ast\\ x&\operatorname{diag}(x)}\pmatrix{1&-x^\ast\\ 0&I}=\pmatrix{1&0\\ 0&P}.\tag{3} $$ Since the RHS of $(2)$ is PSD, so is the RHS of $(3)$. Hence $P$ is PSD. Moreover, since $Pe=0$, $P$ is positive definite on $e^\perp$ if and only if $\operatorname{rank}(P)=n-1$. However, as the RHSs of $(2)$ and $(3)$ are both congruent to $A$, $\operatorname{rank}(P)$ is equal to $\operatorname{rank}(\operatorname{diag}(x))-1$. Therefore $\operatorname{rank}(P)=n-1$ if and only if $\operatorname{diag}(x)$ has rank $n$, i.e. iff every $x_i$ is positive.
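A quick numerical illustration of statement 2 (numpy assumed; the simplex point is random but strictly positive):

import numpy as np

rng = np.random.default_rng(0)
x = rng.random(5)
x /= x.sum()                          # strictly positive point of the simplex
P = np.diag(x) - np.outer(x, x)

print(np.linalg.eigvalsh(P))          # one eigenvalue ~0, the rest positive
print(P @ np.ones(5))                 # ~0: the kernel is spanned by e = (1,...,1),
                                      # so P is positive definite on e-perp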
H: If $\int f(x)dx =g(x)$ then $\int f^{-1}(x)dx $ is equal to If $\int f(x)dx =g(x)$ then $\int f^{-1}(x)dx $ is equal to (1) $g^{-1}(x)$ (2) $xf^{-1}(x)-g(f^{-1}(x))$ (3) $xf^{-1}(x)-g^{-1}(x)$ (4) $f^{-1}(x)$ My approach is as follows: Let $f(x)=y$, therefore $f^{-1}(y)=x$, $\int f^{-1}(f(x))dx =g(f(x))$ On differentiating we get $x=g'(f(x))f'(x)$ After this step, I am not able to proceed. AI: Ignoring the constant of integration the answer is (2):$$\int f^{-1}(x)dx=\int yf'(y)dy=yf(y)-\int f(y)dy$$ (where I have used integration by parts). Hence $$\int f^{-1}(x)dx=f^{-1}(x)x-g(y)=xf^{-1}(x)-g(f^{-1}(x))$$.
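A concrete sanity check of option (2) with $f(x)=e^x$, so that $g(x)=e^x$ and $f^{-1}(x)=\ln x$ (sympy assumed):

import sympy as sp

x = sp.symbols('x', positive=True)
lhs = sp.integrate(sp.log(x), x)           # integral of f^{-1}: x*log(x) - x
rhs = x*sp.log(x) - sp.exp(sp.log(x))      # x*f^{-1}(x) - g(f^{-1}(x))
print(sp.simplify(lhs - rhs))              # 0, up to the constant of integration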
H: How to calculate $ \left| \sin x \right| $ derivative in a more elegant way? I am trying to calculate the derivative of $\left| \sin x \right| $ Given the graphs, we notice that the derivative of $\left| \sin x \right|$ does not exist for $x= k\pi$. Graph for $\left|\sin x\right|$ (figure omitted). We can rewrite the function as $\left| \sin(x) \right| = \left\{ \begin{array}{ll} \sin(x),& 2k\pi < x < (2k+1)\pi \\ -\sin(x), & \text{elsewhere} \\ \end{array} \right. $ Therefore calculate its derivative as: $(\left| \sin(x) \right|)^{'} = \left\{ \begin{array}{ll} \cos(x),& 2k\pi < x < (2k+1)\pi \\ -\cos(x), & \text{elsewhere} \\ \end{array} \right. $ Is there a way to rewrite this derivative, in a more elegant way (as a non-branch function) $(\left| \sin(x) \right|)^{'} = g(x)$? AI: The better way, for me, is as follows: $$f(x)=\left|\sin(x)\right|=\sqrt{\sin^2(x)}$$ Now, differentiate both sides to get $$f'(x)=\frac{1}{2\sqrt{\sin^2(x)}}\cdot2\sin(x)\cos(x)=\frac{\sin(2x)}{2\left|\sin(x)\right|}$$ Therefore, $$\left(\left|\sin(x)\right|\right)'=\frac{\sin(2x)}{2\left|\sin(x)\right|}, \ \ \ \ x \neq k\pi, k\in \mathbb{Z}$$ Addendum: This approach can easily be extended to a general case of finding $\left(|f(x)|\right)'$. First, we rewrite $$|f(x)| = \sqrt{f^2(x)}$$ Then, repeating the work above: $$|f(x)|' = \frac{1}{2\sqrt{f^2(x)}}[2f(x)f'(x)] = \frac{f(x)}{|f(x)|}f'(x)$$ we get $$\boxed{|f(x)|' = \frac{f(x)}{|f(x)|}f'(x)}$$
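A numerical spot check of the boxed formula away from the bad points $x=k\pi$ (numpy assumed):

import numpy as np

x = np.linspace(0.05, 6.2, 1000)
x = x[np.abs(np.sin(x)) > 1e-6]                  # avoid x = k*pi
lhs = np.sin(2*x) / (2*np.abs(np.sin(x)))
rhs = np.sign(np.sin(x)) * np.cos(x)             # the piecewise +-cos(x) form
print(np.allclose(lhs, rhs))                     # True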
H: Prove or disprove: a group with 3 different elements of order 6 can't be cyclic. I have a question of prove or disprove as above: a group $G$ with $3$ different elements of order $6$ can't be cyclic. I tried to find a group of order $6n$ such that it has 3 different elements of order $6$, but I am having trouble doing so. I know it must be isomorphic to $Z_{6n}$, and the elements $n, 5n$ are both of order 6. However, I can't find a third different element (for a numeric example: in $Z_{120}$, the elements $20$ and $100$). That's why I think it may be true. However, I can't think of a way to prove it. Any help will be appreciated! AI: As you noted, if a finite group is cyclic and has an element of order $6$, then $6$ is a divisor of the order of the group. There is exactly one subgroup of order $6$, and every element of order $6$ generates it. And it happens that the cyclic group of order $6$ does not have three distinct generators, only two ($\varphi(6)=2$). (An infinite cyclic group has no elements of finite order other than the identity, so the statement holds there as well.)
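A brute-force confirmation that no cyclic group contains three elements of order $6$ (plain Python; the bound $120$ is arbitrary):

from math import gcd

for m in range(6, 121, 6):                    # cyclic groups Z_m with 6 | m
    order6 = [k for k in range(m) if m // gcd(m, k) == 6]
    print(m, order6)                          # always exactly two such elements

For $m=120$ this prints the elements $20$ and $100$, matching the example in the question.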
H: What are the odds of rolling a 1 and a 6 in four dice throws? There are $\binom42 = 6$ different ways to arrange the two desired rolls in a sequence of $4$ dice, and $6^2$ possible results for the remaining two dice. There are $6^4$ possible total outcomes, so the odds of getting a 1 and a 6 should be $6^3/6^4 = 1/6$, but I get the feeling this is not correct. I think my logic is flawed because I am double-counting some configurations somewhere, but I don't know how to do the computation correctly. AI: There are $6^4$ rolls, of which $5^4$ have no $1$'s and $5^4$ have no $6$'s. However, we've counted the rolls with neither $1$'s nor $6$'s twice. The probability of both a $1$ and a $6$ is $$\frac{6^4-2\cdot5^4+4^4}{6^4}=\frac{151}{648}\approx.23302469$$
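The sample space is small enough to enumerate exactly (plain Python):

from itertools import product

rolls = list(product(range(1, 7), repeat=4))
hits = sum(1 for r in rolls if 1 in r and 6 in r)
print(hits, len(rolls))        # 302 1296
print(hits / len(rolls))       # 0.23302469... = 151/648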
H: What does $C([0,1))$ stand for in the picture below? Could anyone tell me what $C([0,1))$ stands for in the picture below? If I see "$f \in C[0,1)$" anywhere, I'd read it as "the function $f(x)$ is continuous for all $x\in[0,1)$". But this one should be different... Any help is appreciated. AI: $C(I)$ is the set (in fact, vector space) of all continuous functions $I\to\Bbb R$. Here, we want to define a map $\Phi\colon U\to V$ where $U=V=C([0,1))$. To do so, we must specify, for each $g\in U$, some $v\in V$, which we then call $\Phi(g)$. Now $\Phi(g)$ (as well as $g$ itself) is a function $[0,1)\to\Bbb R$, so to spell out what $\Phi(g)$ is, we have to say what $\Phi(g)(x)$ is for every $x\in[0,1)$. This is done here: The function $\Phi(g)$ maps $x\in[0,1)$ to $e^{\int_0^xg(t)\,\mathrm dt}\in \Bbb R$. Or, for $g\in C([0,1))$, the function $v:=\Phi(g)\colon [0,1)\to\Bbb R$ is given by $$ v(x)=e^{\int_0^xg(t)\,\mathrm dt}.$$ I suspect that it was the double parentheses $\Phi(g)(x)$ that confused you here ... Of course, one has to verify that the function $\Phi(g)$ is indeed continuous. But as $g$ is continuous, the integral exists, and it is a continuous function of the upper limit (in fact, it is even continuously differentiable, by the fundamental theorem). Taking the exponential doesn't destroy this continuity.
H: volume of tilted ellipsoid I'm supposed to calculate the volume of $$(2 x+y+z)^2 + (x+2 y+z)^2 + (x+y+2 z)^2 \leq 1$$ simplifying it gives $$6 (x^2 + y^2 + z^2) + 10 (x y + y z + x z) \leq 1$$ After drawing it using GeoGebra, I saw that it's a tilted ellipsoid inside the unit sphere, but I'm unable to think of how to solve this. I tried replacing the coordinates and I tried using spherical coordinates but I was unable to go anywhere with them. The final answer is $\frac{\pi}{3}$ meaning it's $\frac{1}{4}$ of a unit sphere's volume, and still, I wasn't able to conclude anything useful from it. Any hints would be greatly appreciated. AI: Hint. Let $X=2x+y+z$, $Y=x+2y+z$, $Z=x+y+2z$ then the volume of the given ellipsoid $E$ is $$V=\iiint_E 1\, dxdydz=\iiint_{X^2+Y^2+Z^2\leq 1} f(X,Y,Z)\, dXdYdZ$$ where $f(X,Y,Z)$ is a suitable function. Since here the substitution is linear, the function $f$ is constant and the integral on the right reduces to this constant multiplied by the volume of the unit ball. Can you take it from here?
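If you want to check your final computation numerically without spoiling the hint, a rough Monte Carlo estimate will do (a sketch; the box $[-1,1]^3$ suffices since the ellipsoid lies inside the unit ball):

```python
import random

# Estimate the volume of (2x+y+z)^2 + (x+2y+z)^2 + (x+y+2z)^2 <= 1
# by uniform sampling from the cube [-1, 1]^3 (volume 8).
n, inside = 10**6, 0
for _ in range(n):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    if (2*x + y + z)**2 + (x + 2*y + z)**2 + (x + y + 2*z)**2 <= 1:
        inside += 1

print(8 * inside / n)  # should be close to pi/3 ~ 1.0472
```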
H: For any complex $z$, $|z-1|\leq |z-j|+|z-j^2|$ Let $z\in \mathbb C$. Prove that $|z-1|\leq |z-j|+|z-j^2|$ This inequality appears as an exercise in a book for highschoolers. It is marked as very difficult. $j=\exp(2i\pi/3)$ denotes a third root of unity. I tried squaring both sides, and making use of $j^2=-1-j$, as well as $\bar j=j^2$ but I haven't made significant progress... Remark: In retrospect it seems obvious that the problem cannot be solved without geometric insights, that I didn't know of at the time. AI: (The following is equivalent to Aqua's elegant answer. I'm just stating it to demonstrate a self-contained solution.) Start with the identity $$ (z-1)(j-j^2) = (z-j)(1-j^2) + (z-j^2)(j-1) \, . $$ Then apply the triangle inequality and use that $$ |j-j^2| = |1-j^2| = |j-1| \, . $$
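To verify the identity, expand the right-hand side using $j^3=1$: $$(z-j)(1-j^2)+(z-j^2)(j-1)=\bigl(z-zj^2-j+1\bigr)+\bigl(zj-z-1+j^2\bigr)=z(j-j^2)-(j-j^2)=(z-1)(j-j^2).$$ The triangle inequality then gives $|z-1|\,|j-j^2|\le|z-j|\,|1-j^2|+|z-j^2|\,|j-1|$, and since $|j-j^2|=|1-j^2|=|j-1|=\sqrt3$ (the points $1,j,j^2$ form an equilateral triangle on the unit circle), the common factor cancels.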
H: Question on 'taking out' pointwise limit in the $L^p$ norm In functional analysis, many properties of certain spaces are normally derived by taking a pointwise limit out of the norm, i.e. $\lvert \lvert x \rvert \rvert=\lim\limits_{n\to \infty}\lvert \lvert x_{n} \rvert \rvert$ $(*)$. The usual justification for this is that $\lvert \lvert \cdot \rvert \rvert: X \to \mathbb R$ is a continuous function with respect to the norm $\lvert \lvert \cdot \rvert \rvert$, by the reverse triangle inequality. Note that we merely used the definition of a norm to obtain continuity. In particular, $\lvert \lvert \cdot \rvert \rvert_{L^{p}}$ is continuous. However, when we arrive at $L^{p}$ spaces there are convergence theorems, like the Dominated Convergence Theorem, the Monotone Convergence Theorem and Fatou's Lemma, which of course indicate that simply taking the limit out of the norm is not always possible. Why is this not a contradiction to $(*)$? AI: The difference is that in the first case you assume $x_n \to x$ in the topology of the norm $\|\cdot\|$ (this means $\|x_n-x\| \to 0$) and then you conclude $\|x_n\| \to \|x\|$ by continuity of the norm. In the context of convergence theorems in $L^p$ you assume $f_n \to f$ a.e. (this is different from $\|f_n-f\|_{L^p} \to 0$) and then, under the additional hypotheses of those theorems, conclude $\|f_n\|_{L^p} \to \|f\|_{L^p}$.
H: Does $\sum \frac{n!e^n}{n^n}$ converge or diverge? I have tried the root and ratio tests, but they were inconclusive. Thank you very much. AI: Using Stirling's formula, you should find that $$\frac{n!\,\mathrm e^n}{n^n}\sim_\infty\sqrt{2\pi n},$$ so the terms do not tend to $0$ and the series diverges trivially, by the term test.
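A quick numerical illustration of the asymptotics, working with logarithms to avoid overflow (a minimal Python sketch):

```python
import math

# Compare n! e^n / n^n with sqrt(2*pi*n); lgamma(n+1) = log(n!).
for n in (10, 100, 1000, 10000):
    log_term = math.lgamma(n + 1) + n - n * math.log(n)
    ratio = math.exp(log_term - 0.5 * math.log(2 * math.pi * n))
    print(n, ratio)
# The ratios tend to 1, so the terms grow like sqrt(2*pi*n)
# and in particular do not tend to 0.
```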
H: Concavity/convexity of a function whose domain is a singleton set Is a function which is defined on the singleton domain both convex and concave? This question got raised when I read the following from here: "If you look at the definition of concavity, you see that every function is concave on a domain consisting of a single point!" My guess is that convexity/concavity is trivially fulfilled for such function, since such functions are constant functions. Is there any non-trivial argument regarding the same that I am missing? AI: This community wiki solution is intended to clear the question from the unanswered queue. Yes, it is trivial and you are not missing anything.
H: Calculate real matrix inverse of a complex matrix Given a Hermitian positive definite (hence invertible) matrix $A \in \mathbb{C}^{n \times n}$, let $B=A^{-1},\ D=\text{Re}(B),\ C=D^{-1}$, where $D=\text{Re}(B)\Leftrightarrow{}d_{ij}=\text{Re}(b_{ij})$. Can I compute the matrix $C$ from $A$ directly, without computing a matrix inverse twice (or by some method requiring fewer operations)? AI: We have $\operatorname{Re}(B) = \frac 12 (B + \bar B)$. Note that $\bar B = \overline{A^{-1}} = \bar A^{-1}$, so that $$ A(B + \bar B) \bar A = AB\bar A + A\bar B\bar A = \bar A + A = 2\operatorname{Re}(A). $$ That is, we have $$ 2A \operatorname{Re}(B) \bar A = 2\operatorname{Re}(A) \implies \\ \operatorname{Re}(B) = A^{-1}\operatorname{Re}(A)\bar A^{-1} \implies \\ \operatorname{Re}(B)^{-1} = \bar A \operatorname{Re}(A)^{-1} A. $$
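A numerical sanity check of the identity (a NumPy sketch; the random Hermitian positive definite $A$ is just for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
M = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
A = M @ M.conj().T + np.eye(n)  # Hermitian positive definite

# Direct route: two matrix inversions.
C_direct = np.linalg.inv(np.linalg.inv(A).real)

# Identity: Re(B)^{-1} = conj(A) @ Re(A)^{-1} @ A -- only Re(A) is inverted.
C_formula = A.conj() @ np.linalg.inv(A.real) @ A

print(np.allclose(C_direct, C_formula))  # True
```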
H: Ideas for calculating $K_0(l_{\infty})$ and $K_1(l_{\infty})$. Thank you for answering my question. I'm a bit new to K-theory, so I was wondering how I can calculate $K_0(l_{\infty})$ and $K_1(l_{\infty})$. I think if we have one, then by using Bott periodicity we can get the other one. Can anyone help me? AI: It can be shown that $K_0(\ell^\infty)$ is isomorphic to the collection of all bounded functions $\mathbb N\to\mathbb Z$, and $K_1(\ell^\infty)=0$. To see the result about $K_0$, first observe that if $p\in M_n(\ell^\infty)$ is a projection, then $p$ is unitarily equivalent to $p_1\oplus\cdots\oplus p_n$ for some projections $p_k\in\ell^\infty$. Thus $K_0(\ell^\infty)$ is the $\mathbb Z$-linear span of the $K_0$-classes of projections in $\ell^\infty$. Now define a map $\varphi: K_0(\ell^\infty)\to\{f:\mathbb N\to\mathbb Z\text{ bounded}\}$ by linear extension of $$\varphi([\chi_E])=\chi_E$$ for any $E\subset\mathbb N$, where $\chi_E:\mathbb N\to\{0,1\}$ is the characteristic function of $E$. This map gives the desired isomorphism. For $K_1$, we can cheat by observing that $\ell^\infty$ is a von Neumann algebra, so it has a Borel functional calculus (as do all matrix algebras over $\ell^\infty$). This implies that their unitary groups are path connected (if $u$ is a unitary, take a Borel measurable logarithm on its spectrum to find a self-adjoint element $a$ such that $u=e^{ia}$, and thus $u$ is homotopic to $1$). This then implies that $K_1(\ell^\infty)=0$. (This proof holds more generally for any von Neumann algebra.)
H: How are the integers $s_1, s_2$ in the theorem obtained? I encountered this theorem in my introductory algebra textbook. I'm not very good at unpacking some of the definitions. Here is the theorem: Theorem (3.2) Let $m_{1}$, $m_{2}$ be positive integers. Let $d$ be a positive generator for the ideal generated by $m_{1}$ and $m_{2}$. Then $d$ is the greatest common divisor of $m_{1}$ and $m_{2}$. Proof: Since $m_{1}$ lies in the ideal generated by $m_{1}$ and $m_{2}$ (because $m_{1}=1m_{1}+0m_{2}$), there exists an integer $q_{1}$ such that: $m_{1}=q_{1}d$, whence $d$ divides $m_{1}$, Similarly, $d$ divides $m_{2}$. Let $e$ be a non-zero integer dividing both $m_{1}$ and $m_{2}$, say $m_{1}=h_{1}e$ and $m_{2}=h_{2}e$ with integers $h_{1}$, $h_{2}$. Since $d$ is in ideal generated by $m_{1}$ and $m_{2}$, there are integers $s_{1}$, $s_{2}$ such that $d=s_{1}m_{1}+s_{2}m_{2}$. How are the integers $s_1, s_2$ obtained, and what is their role in expressing $d$? I know this question is kinda foolish to ask here. I just want some clarification for this! Thanks. AI: A basic theorem on the ring of integers $\mathbb{Z}$ is that every ideal thereof has a unique nonnegative generator. More precisely, if $I$ is an ideal, then there exists a unique $d\ge0$ such that $I=d\mathbb{Z}$. Now it depends on how you define the greatest common divisor. Since your statement insists on the numbers being positive, I guess the definition is “the largest positive number $d$ such that $d$ divides both $m_1$ and $m_2$”. The ideal $I$ generated by $m_1$ and $m_2$ consists of all integers of the form $x_1m_1+x_2m_2$, with $x_1,x_2$ arbitrary integers, because this is an ideal (prove it) and it is the smallest one containing both $m_1$ and $m_2$ (prove it). By the basic theorem, there exists $d\ge0$ such that $I=d\mathbb{Z}$ and, since $I\ne\{0\}$, we get that $d>0$. In particular $m_1,m_2\in d\mathbb{Z}$, so $d$ divides both. Also, since $d\in I$, we can find integers $s_1,s_2$ such that $d=s_1m_1+s_2m_2$ by the description of $I$ given above. Suppose $e>0$ divides both $m_1$ and $m_2$. Then $m_1=h_1e$ and $m_2=h_2e$, for some integers $h_1,h_2$. Then $$ d=s_1m_1+s_2m_2=s_1h_1e+s_2h_2e=(s_1h_1+s_2h_2)e $$ which implies $e$ is a divisor of $d$. Since both are positive, this forces $e\le d$. Therefore $d$ is the greatest common divisor.
H: Continuity and derivability of a piecewise function $f(x,y)=\frac{x^2}{y}$ Study the continuity and the derivability of the function $$f(x,y)=\begin{cases} \frac{x^2}{y}, \ \text{if} \ y \ne 0 \\ 0, \ \text{if} \ y=0 \end{cases}$$ For the continuity: $f$ is continuous where it is defined because it is a ratio of continuous functions; the only problematic point is the junction point $(x_0,0)$ with a generic $x_0\in\mathbb{R}$. So I have to check if $$\lim_{(x,y)\to(x_0,0)} \frac{x^2}{y}=0$$ I've used the direction $x=\sqrt{y}+x_0$ to show that the limit doesn't exist, since for all $x_0\in\mathbb{R}$ it is $$\lim_{(x,y)\to(x_0,0)} \frac{(\sqrt{y}+x_0)^2}{y}\ne 0$$ So $f$ is continuous for all $(x,y)\in\mathbb{R} \times (\mathbb{R} \setminus \{0\})$. For the derivability it is the same, because $f$ is a ratio of derivable functions where it is defined but, since it isn't even continuous at $(x,0)$, it can't be derivable at that point; so $f$ is derivable for all $(x,y) \in \mathbb{R} \times (\mathbb{R} \setminus \{0\})$ and it is $$f_x(x,y)= \frac{2x}{y}, \ \forall(x,y) \in \mathbb{R} \times (\mathbb{R} \setminus \{0\})$$ $$f_y(x,y)=-\frac{x^2}{y^2}, \ \forall(x,y) \in \mathbb{R} \times (\mathbb{R} \setminus \{0\})$$ Is this correct? I have a question: in studying continuity I've chosen the direction $x=\sqrt{y}+x_0$ because I thought that the restriction must pass through the point $(x_0,0)$; is this correct, or could I have used any restriction such that $y \to 0$ (for instance $x=\sqrt{y}$)? Why, for piecewise defined functions, do we have to use the definition of derivability at junction points instead of the derivation rules? I see that we don't know if $f$ is derivable at $(x_0,0)$ yet, and those rules are valid only if we already know that $f$ is derivable at certain points, but I don't see from the general theory why we have to study the junction points separately. Thanks. AI: Actually,$$\lim_{y\to0}\frac{\left(\sqrt y+x_0\right)^2}y=\lim_{y\to0}\frac{y+2x_0\sqrt y+x_0^{\,2}}y=\lim_{y\to0}1+2\frac{x_0}{\sqrt y}+\frac{x_0^{\,2}}y,$$and for $x_0\ne0$ this limit doesn't exist (in $\Bbb R$), while for $x_0=0$ it equals $1\ne0=f(0,0)$. So, $f$ is discontinuous at all those points.
H: Proving that there exists an uncountable number of distinct bases of the euclidean topology on $\mathbb R$ In my general topology textbook there is the following exercise: (i) - Let $\mathcal B$ be a basis for a topology $\tau$ on a non-empty set $X$. If $\mathcal B_1$ is a collection of subsets of $X$ such that $\mathcal B \subseteq \mathcal B_1 \subseteq \tau$, prove that $\mathcal B_1$ is also a basis of $\tau$ (ii) - Deduce from (i) that there exists an uncountable number of distinct bases of the euclidean topology on $\mathbb R$ I already proved statement (i), but I'm having some trouble thinking of a proof for the second statement. I think that if we consider $\mathcal B= \{]a,b[:a,b\in \mathbb R\}$ as a basis for the euclidean topology $\tau$, then we just need to show that there exists an uncountable number of sets $\mathcal B_1$ such that $\mathcal B \subseteq \mathcal B_1 \subseteq \tau$. But how can I show that? AI: You have the right idea. Suppose we can find an uncountable collection $\mathcal U$ of open sets, none of which belongs to the standard basis $\mathcal B$. Then for each $U \in \mathcal U$ we have $\mathcal B \subseteq \mathcal B \cup \{U\} \subseteq \tau$, and these bases are pairwise distinct (for $U \ne U'$ the bases $\mathcal B\cup\{U\}$ and $\mathcal B\cup\{U'\}$ differ, since neither $U$ nor $U'$ lies in $\mathcal B$), which gives us the result we want. In comparison with $\tau$, $\mathcal B$ is rather small - the vast majority of open sets in the euclidean topology are not open intervals. This is why theorems which let us work with basis elements rather than open sets are so useful - they greatly decrease the number and types of sets that we have to consider. Finding a collection $\mathcal U$ which works is therefore not so difficult. For example, all sets of the form $(a, \infty),\, a \in \mathbb R$ are open, but none of them appear in the standard basis.
H: If $\int_a^b f(x)dx=\left[F(x)\right]_a^b$, Is $\int_a^b \lvert f(x)\rvert dx= \left[\lvert F(x)\rvert \right]_a^b$ true? I was brushing up on some basic inequalities and I attempted to derive an alternate form of the regular triangle inequality by using $(\left|b\right|-\left|a\right|)^2$. $$ \begin{align*}(\left|b\right|-\left|a\right|)^2 &= \left|b\right|^2 + \left|a\right|^2 - 2|a||b|\\ &=b^2+a^2-2|a||b|\\ &\leq(b-a)^2\\&=|b-a|^2\end{align*} $$ I arrived at the conclusion that $|b|-|a|\le|b-a| $. This, however, does not seem to agree with the following: $$\biggl|\int_{a}^{b} f(x) \, dx\biggr| \leq \int_{a}^{b} |f(x)|\,dx$$ $$\bigl|F(b)-F(a)\bigr| \leq \bigl|F(b)\bigr|-\bigl|F(a)\bigr| $$ Edit: I believe there may be a mistake going from line 3 to line 4. It should be $|b|-|a| \le (b-a)$, which only gives $\bigl||b|-|a|\bigr|\le\bigl|b-a\bigr|$. I was wondering if maybe there was a mistake in my initial derivation or if I am using the absolute value signs incorrectly when applying them to the integrals through the fundamental theorem of calculus. Possibly a condition or assumption I am unaware of. It just seems like an odd result to get and I can't figure out why this is. AI: We know that $|b-a|\geq |b|-|a| $. This, however, does not seem to agree with the following: $$\biggl|\int_{a}^{b} f(x) \, dx\biggr| \leq \int_{a}^{b} |f(x)|\,dx\tag{1}$$ $$\bigl|F(b)-F(a)\bigr| \leq \bigl|F(b)\bigr|-\bigl|F(a)\bigr| \tag{2}$$ Where am I going wrong? We know that equation $(1)$ is correct. It is equation $(2)$ that has the problems. If $\int_a^b f(x)dx=\left[F(x)\right]_a^b$, $\int_a^b |f(x)|dx\color{red}{\neq} \left[|F(x)|\right]_a^b$. One has to break the integral to get the latter. For example, \begin{align*} \left|\int_{-1}^{+1}x^3\,dx\right|&=\left|\frac14[x^4]_{-1}^{+1}\right|=0\\ \int_{-1}^{+1}|x^3|\,dx=-\int_{-1}^{0}x^3\,dx+\int_{0}^{+1}x^3\,dx=-\frac14[x^4]_{-1}^{0}+\frac14[x^4]_{0}^{+1}=\frac12&\color{red}{\neq}\frac14[|x^4|]_{-1}^{+1}=0\\ \end{align*}
H: $PSl_n(\mathbb{C})\cong PGl_n(\mathbb{C})$? I was reading about projective linear groups because I was asked to show that $PSl_n(\mathbb{C})\cong PGl_n(\mathbb{C})$. Here $PSl_n(\mathbb{C})$ is the projective space of $Sl_n(\mathbb{C})$, i.e. $Sl_n(\mathbb{C})/(\text{scalar matrices in } Sl_n(\mathbb{C}))$ and $PGl_n(\mathbb{C})$ is the projective space of $Gl_n(\mathbb{C})$, i.e. $Gl_n(\mathbb{C})/(\text{scalar matrices in } Gl_n(\mathbb{C}))$. The inclusion map descends to a map that is well defined and injective $$\frac{Sl_n(\mathbb{C})}{(\text{scalar matrices in } Sl_n(\mathbb{C}))} \to \frac{Gl_n(\mathbb{C})}{(\text{scalar matrices in } Gl_n(\mathbb{C}))}.$$ But I am not sure about surjectivity. If I take $A \in Gl_n(\mathbb{C})$, I know there is a scalar matrix $S$ such that $\det(AS)=\pm 1$, but $Sl_n(\mathbb{C})$ consists of matrices with determinant $+1$. AI: I am spelling out my comments as an answer. To your first question, if $\det(AS) = -1$, then replace $S$ by $ST$, where $T = \lambda E_n$ is a matrix such that $\det(T) = \lambda^n = -1$. Such a $\lambda$ exists, because the equation $\lambda^n = -1$ is solvable over the complex field. As noted in the comments, one could take $S$ such that $\det(S) = 1/\det(A)$ from the beginning. To the second question, which arose in the comments: Yes, it is essential that $\mathbb C$ is algebraically closed. In general, if one wants to prove that $\mathrm{PSL}_n(k)$ and $\mathrm{PGL}_n(k)$ are isomorphic for a fixed $n$, it is sufficient that $n$-th roots of all elements of $k$ exist and are contained in $k$. For example, if $n$ is odd, then $\mathrm{PSL}_n(\mathbb R)$ and $\mathrm{PGL}_n(\mathbb R)$ are isomorphic, because you can take $n$-th roots of any real number and still get a real number. If however $n$ is even, say $n = 2$, it is no longer true that $\mathrm{PSL}_2(\mathbb R)$ and $\mathrm{PGL}_2(\mathbb R)$ are isomorphic. To see this, a little bit of group theory is needed (maybe someone has a simpler argument). Note that $\mathrm{PGL}_2(\mathbb R)$ contains the group $$\left\langle \left[\begin{pmatrix}1 & 0 \\ 0 & -1 \end{pmatrix} \right], \left[\begin{pmatrix}0 & 1 \\ 1 & 0 \end{pmatrix} \right]\right\rangle\cong \mathbb Z/2\mathbb Z \times \mathbb Z/2\mathbb Z,$$ so that it suffices to prove that $\mathrm{PSL}_2(\mathbb R)$ does not contain a subgroup isomorphic to $\mathbb Z/2\mathbb Z \times \mathbb Z/2\mathbb Z$. Indeed, if $C := \begin{pmatrix}a & b \\ c & d\end{pmatrix}\in \mathrm{SL}_2(\mathbb R)$ is an element of order $2$, then $$C^2 = \begin{pmatrix}a & b \\ c & d\end{pmatrix}^2 = \begin{pmatrix}a^2+bc & b(a+d) \\ c(a+d) & d^2+bc\end{pmatrix} = I_2.$$ Note that we also have $\det(C) =ad-bc = 1$. Hence, if $a = -d$, we would have $-a^2-bc = 1$, so the diagonal entries of $C^2$ would be $-1$, contradicting $C^2 = I_2$. Thus $a + d \neq 0$, so the off-diagonal entries force $b = c = 0$; the diagonal entries then give $a^2 = d^2 = 1$, and together with $ad - bc = 1$ and $C \neq I_2$ this yields $C = -I_2$. This proves that the only element of order $2$ in $\mathrm{SL}_2(\mathbb R)$ is $-I_2$. Assume now that $\mathrm{PSL}_2(\mathbb R)$ has a subgroup $H$ isomorphic to $\mathbb Z/2\mathbb Z \times \mathbb Z/2\mathbb Z$. Let $$q \colon \mathrm{SL}_2(\mathbb R) \to \mathrm{PSL}_2(\mathbb R)$$ be the quotient map. Write $H' := q^{-1}(H)$. Since $q$ is surjective, $q(H')=H$, and hence $|H'| = 8$ (since $\mathrm{ker}(q) = \langle -I_2\rangle$ has order $2$). Moreover, the only element in $H'$ whose order is $2$ is $-I_2$. By the classification of groups of order $8$, we have that $$H' \cong Q_8 = \langle a,b \, |\, a^4 = 1, a^2 = b^2, ab = b^{-1}a\rangle,$$ the quaternion group of order $8$.
So what we've done so far is to show the existence of a subgroup $H' \subset \mathrm{SL}_2(\mathbb R)$ which is isomorphic to $Q_8$. Now we somehow need to show that this is not possible. I don't know your background in representation theory, but it can be shown that $Q_8$ does not have any $2$-dimensional faithful real representation. (A faithful $2$-dimensional representation must be irreducible: a sum of two $1$-dimensional representations factors through the abelianization $Q_8/\{\pm 1\}$ and hence is not faithful.) The only $2$-dimensional irreducible representation of $Q_8$ is, up to isomorphism, $$a \mapsto A := \begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix}, \;\;\;b \mapsto B := \begin{pmatrix} i & 0\\ 0 & -i\end{pmatrix}.$$ To show that this is not isomorphic to any real representation, there are two possibilities: use the Frobenius-Schur indicator, or show by hand that there is no $S \in \mathrm{GL}_2(\mathbb C)$ such that both $S^{-1}AS$ and $S^{-1}BS$ are real matrices. We conclude that $\mathrm{SL}_2(\mathbb R)$ does not have a subgroup isomorphic to $Q_8$, which means that $\mathrm{PSL}_2(\mathbb R)$ and $\mathrm{PGL}_2(\mathbb R)$ cannot be isomorphic.
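For what it's worth, the $Q_8$ relations for the matrices $A$ and $B$ above can be checked mechanically (a small NumPy sketch):

```python
import numpy as np

A = np.array([[0, 1], [-1, 0]], dtype=complex)
B = np.array([[1j, 0], [0, -1j]])
I2 = np.eye(2)

# Relations of Q_8: a^4 = 1, a^2 = b^2, ab = b^{-1} a.
print(np.allclose(np.linalg.matrix_power(A, 4), I2))  # True
print(np.allclose(A @ A, B @ B))                      # True
print(np.allclose(A @ B, np.linalg.inv(B) @ A))       # True
```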
H: What does `Sitzber. Heidelberg Akad. Wiss., Math.-Naturw. Klasse. Abt. A' stand for? I would like to cite an article from 1914 by Oskar Perron without any abbreviations. I am unable to figure out what `Sitzber. Heidelberg Akad. Wiss., Math.-Naturw. Klasse. Abt. A' is short for. Can anyone here perhaps help me with this? AI: It is “Sitzungsberichte der Heidelberger Akademie der Wissenschaften, Mathematisch-Naturwissenschaftliche Klasse: Abteilung A, Mathematisch-physikalische Wissenschaften” – Source
H: Joint distribution of $X$ and $Y-X$ (discrete case) The joint distribution of the random variable $(X,Y)$ is $\mathbb{P}(X=x,Y=y)=\frac{e^{-2}}{x!(y-x)!}$, for $x=0,1,\dots$ and $y=x,x+1,\dots$ Find the marginal distribution of $X$ and $Y$. $\rightarrow \mathbb{P}(X=x)=\sum_{y=x}^{+\infty}\mathbb{P}(X=x,Y=y)=\frac{e^{-2}}{x!}\sum_{s=0}^{+\infty}\frac{1}{s!}=\frac{e^{-1}}{x!}\Rightarrow X\sim Poi(1)$; $\rightarrow \mathbb{P}(Y=y)=\sum_{x=0}^{y}\mathbb{P}(X=x,Y=y)=\frac{e^{-2}}{y!}\sum_{x=0}^{y}\frac{y!}{x!(y-x)!}\cdot 1^x =\frac{e^{-2}2^y}{y!}\Rightarrow Y\sim Poi(2);$ $\rightarrow \mathbb{P}(X=x)\cdot \mathbb{P}(Y=y)\neq \mathbb{P}(X=x,Y=y)\Rightarrow X$ and $Y$ are not independent. Find the joint distribution of $X$ and $Y-X$, and say if they are independent or not. If we had been in the continuous case, with $X$ and $Y$ not independent, we'd have $f_{VZ}(v,z)=f_{XY}(v,z)|J|$ with $\left\{\begin{matrix} v=x\\ z=y-x \end{matrix}\right. \Rightarrow \left\{\begin{matrix} x=v\\ y=z+v \end{matrix}\right.$. How does it work in the discrete case? What is the formula to apply? Thanks in advance. AI: $P(X=x,Y-X=z)=P(X=x,Y=x+z)=\frac {e^{-2}} {x!z!}$ for $x,z =0,1,2,...$. Note that $P(X=x,Y-X=z)=\frac{e^{-1}}{x!}\cdot\frac{e^{-1}}{z!}=P(X=x)P(Y-X=z)$, so $X$ and $Y-X$ are independent; in particular, $Y-X\sim Poi(1)$.
H: Is the limit as $k$ approaches infinity of a Taylor polynomial of order $k$, approximating a function $f$, the same as the function itself? Since the Taylor polynomial approximation gets better as its order gets bigger, I was wondering: what happens when this order approaches infinity? Does the approximation equal the function itself and become perfect? Thanks in advance. Loving calculus so far :) AI: In general, no. Take, for instance,$$\begin{array}{rccc}f\colon&\Bbb R&\longrightarrow&\Bbb R\\&x&\mapsto&\begin{cases}e^{-1/x^2}&\text{ if }x\ne0\\0&\text{ if }x=0.\end{cases}\end{array}$$You can check that $(\forall n\in\Bbb Z_+):f^{(n)}(0)=0$. So, $\sum_{n=0}^\infty\frac{f^{(n)}(0)}{n!}x^n=0\ne f(x)$ (unless $x=0$).
H: Set of polynomials of degree less than $N$ that have value $0$ in $x=1$ as vector space? How can I prove that all polynomials of degree less than $N$ that have value $0$ at $x=1$ can be written in this form \begin{equation} p(x) = a_1 (x-1) + a_2 (x-1)^2 + \dots + a_{N-1} (x-1)^{N-1} \end{equation} and that therefore the set of polynomials in the question constitutes a vector space of dimension $N-1$ (the other properties are self-evident)? Of course every such $p(x)$ is a polynomial with $p(1)=0$, but this doesn't answer the question. AI: Basic steps: (1) prove that your set of polynomials is a vector space through the axioms; (2) notice that your set is strictly included in the vector space of all polynomials of degree at most $N-1$, so it has dimension at most $N-1$; (3) notice that $(x-1)^k$ belongs to your set for $k=1,2,\dots,N-1$ and that these are linearly independent; (4) profit.
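Alternatively, a direct argument: every polynomial $p$ of degree less than $N$ can be expanded about $x=1$, $$p(x)=\sum_{k=0}^{N-1}a_k(x-1)^k$$ (substitute $x=(x-1)+1$ and expand, or use Taylor's formula at $x=1$). Evaluating at $x=1$ gives $p(1)=a_0$, so the condition $p(1)=0$ is exactly $a_0=0$, i.e. $p$ has the displayed form.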
H: Double integral of a shifted circle Task: find the double integral $$\iint_D (x+y)dxdy,$$ where $D$ is bounded by $x^2 + y^2 = x + y$. What I have done so far: completing the squares, it turns out to be the circle $$\left(x-\tfrac12\right)^2 + \left(y-\tfrac12\right)^2 = \tfrac12$$ Calculating it as a common double integral is hard because I get something like this: $$\int_{\frac12-\frac{1}{\sqrt2}}^{\frac12+\frac{1}{\sqrt2}} dx \int_{\frac12 - \sqrt{\frac12 - (x-\frac12)^2}}^{\frac12 + \sqrt{\frac12 - (x-\frac12)^2}} (x + y) dy.$$ So, I decided to give up on this. My next idea is to transform it into polar coordinates. And that's where I got stuck. $$dxdy = rdrd\theta \\ x = r \cos{\theta} \\ y = r \sin{\theta}.$$ What to do next? For me, it looks like $$0 \leq\theta \leq 2\pi \\ 0 \leq r \leq \tfrac{1}{\sqrt2},$$ but this seems like a case when the origin of a circle is $(0, 0)$. I have my circle shifted and there should be some tricks. Any help would be appreciated. AI: $\iint_D (x+y)dxdy=\iint_{D'} (u+v+1)dudv$ by the substitution $u=x-\frac12, v=y-\frac12$, $D'$ being $\{(u,v): u^{2}+v^{2} \leq \frac12\}$. By symmetry the integral of $u$ and $v$ over $D'$ is $0$. Hence the value is just $\iint_{D'} 1\,dudv=\operatorname{area}(D')=\frac{\pi}{2}$.
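As a numerical cross-check of $\frac{\pi}{2}$, one can integrate in polar coordinates centred at $(\frac12,\frac12)$ with a simple midpoint rule (a minimal Python sketch):

```python
import math

# Integrate (x+y) over (x-1/2)^2 + (y-1/2)^2 <= 1/2 in shifted polar
# coordinates: x = 1/2 + r cos t, y = 1/2 + r sin t, with Jacobian r.
nr, nt = 400, 400
R = math.sqrt(0.5)
total = 0.0
for i in range(nr):
    r = (i + 0.5) * R / nr
    for k in range(nt):
        t = (k + 0.5) * 2 * math.pi / nt
        total += (1 + r * (math.cos(t) + math.sin(t))) * r
total *= (R / nr) * (2 * math.pi / nt)

print(total, math.pi / 2)  # both ~ 1.5708
```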
H: Any generalization of the complex conjugates in the theory of fields? We know that for any complex number $z = x + \iota y$, where $x$ and $y$ are real numbers, there exists the complex number $\overline{z} = x - \iota y$, and the complex numbers $z$ and $\overline{z}$ are said to be the complex conjugates of each other. Of course, every real number is its own conjugate. Now is there any generalization of this notion of conjugate pairs to elements of a general abstract field? AI: The complex numbers are an extension of the real numbers, and complex conjugation is a field automorphism of the complex numbers fixing the reals. In general, if $F$ is a field and $E$ is an extension of $F$ (that is, a field containing $F$), then one can speak of the automorphisms of $E$ that fix the elements of $F$. That's the idea analogous to complex conjugate. It's a really, really important concept. It forms the basis of Galois Theory, for one thing.
H: Is there any real function that does not obey this rule on limits? Consider the following rule $\lim\limits_{h\to 0} f(x+h)= f(x)$ Does any real function exist that does not satisfy the above rule? AI: Take the function $f:\mathbb{R}\rightarrow\mathbb{R}$ defined by $f(0)=1$ and $f(x)=0$ for all $x\neq 0$. Indeed, $\lim\limits_{h\to 0} f(0+h)=0\neq 1=f(0)$.
H: How to find the integer $x$ closest to a given number $y \in \mathbb{R}$ such that $\sqrt x = k \in \mathbb{Z}$? Background Theory An approximation of a function near a point $x_0$ can be made using derivatives, based on the formula $(1)$ $$f(x_0 + \Delta x) \approx f(x_0) + f'(x_0)\cdot\Delta x\quad (1)$$ For example we can approximate $\sqrt{24}$ by choosing $f(x) = \sqrt x$ and plugging into the equation $x_0 = 25$ and $\Delta x = -1$, i.e.: $$f(25 - 1) \approx f(25) + f'(25)\cdot(-1) \iff \sqrt{24} \approx 4.9$$ Note: We chose $25$ because it is the closest integer to $24$ that has a perfect square root. Therefore, by doing this, we minimize the approximation error. The Question So, I am wondering: how can one find the integer $x$ closest to any given $y \in \mathbb{R}$ such that $\sqrt x = k \in \mathbb{Z}$? Is there a clever mathematical way to do this or should one use a heuristic algorithm? AI: It's the same problem as finding the integer closest to $\sqrt y$, so it amounts to finding a good approximation to $\sqrt y$ (which isn't going to help you, since finding a good approximation to $\sqrt y$ is what you want to do in the first place. Nevertheless,...) one way to do this is to start with any approximation $y_0$ to $\sqrt y$, even something pathetic like $y_0=1$, and then improve it by running a few rounds of Newton's Method. You know Newton's Method?
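A sketch of the suggested approach in Python (the function names are mine). One caveat: the perfect square nearest to $y$ is not always $(\operatorname{round}\sqrt y)^2$ (for $y=12.4$, $\operatorname{round}\sqrt y=4$, yet $9$ is closer to $y$ than $16$), so the sketch compares the neighbouring squares directly:

```python
def newton_sqrt(y, iterations=25):
    """Approximate sqrt(y) for y > 0 by Newton's method on x^2 - y = 0."""
    x = max(y, 1.0)
    for _ in range(iterations):
        x = 0.5 * (x + y / x)  # Newton update
    return x

def nearest_perfect_square(y):
    """Perfect square closest to y > 0."""
    k = round(newton_sqrt(y))
    # Compare neighbouring squares; rounding sqrt(y) alone can pick wrong.
    return min((m * m for m in (k - 1, k, k + 1) if m >= 0),
               key=lambda s: abs(s - y))

print(nearest_perfect_square(24))    # 25
print(nearest_perfect_square(12.4))  # 9
```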
H: Is the statement "$p$ implies $q$" logically equivalent to the statement "$p$ implies only $q$"? I am confused: is the statement "$p$ implies $q$" logically equivalent to the statement "$p$ implies only $q$"? Assuming that the two statements are logically equivalent, the truth value of the statement ... "If $a^2=b$ and $b>0$, then $a=\sqrt{b}$." ... is false, since $a$ can be equal to $\sqrt{b}$ OR $-\sqrt{b}$, not only $\sqrt{b}$. AI: $$a^2=b>0\implies a=\sqrt b$$ is a false statement, full stop. $$a^2=b>0\implies a=\sqrt b\lor a=-\sqrt b$$ is a true statement, full stop. You never assume that "there could be other predicates but they are missing" or anything of this kind. If you want to express that "$a=\text{only}\sqrt b$" with the ulterior motive that "$a=-\sqrt b$" could have been possible, you write $a=\sqrt b$, and there is no need to mention $a\ne-\sqrt b$.
H: Clarification on proof of $\tau_1 \times \tau_2$ is $T_2$ iff $\tau_i $ is $T_2$ for $i=1,2$ Proposition: Let $(X, \tau)$ be a topological space (1) If $\mathscr{B}$ is a base of $\tau$ and $p \in X$, the set $\mathscr{B}_p=\{ B \in \mathscr{B}\mid p \in B\}$ is a base of neighborhoods of $p$ (2) If for every $p \in X$ a base of neighborhoods $\mathscr{B}_p$ is defined, then the set $\mathscr{B}=\bigcup_{p \in X}\mathscr{B}_p$ is a base of $\tau$ Theorem $\tau_1 \times \tau_2$ is $T_2$ iff $\tau_i $ is $T_2$ for $i=1,2$ The reverse implication is easy and I understand it. I have a problem with the forward implication: Let $p_1, q_1 \in X_1$ with $p_1 \neq q_1$. Now for any $p_2 \in X_2$ we have $p=(p_1,p_2) \neq q=(q_1,p_2)$. By the $T_2$ hypothesis, there exist $ U \in \mathscr{U_{p}}$ and $V \in \mathscr{U_{q}}$ with $U\cap V = \emptyset$. Using the previous proposition, there exist $ U_1 \in \mathscr{U_{p_1}}, U_2 \in \mathscr{U_{p_2}},V_1 \in \mathscr{U_{q_1}}$ and $V_2 \in \mathscr{U_{p_2}}$, such that $U_1 \times U_2 \subseteq U$ and $V_1 \times V_2 \subseteq V$.        $(\alpha)$ So $(U_1 \times U_2)\cap (V_1 \times V_2)=\emptyset$. So necessarily $U_1 \cap V_1 =\emptyset$ and so $\tau_1$ is $T_2$. What I don't understand is $(\alpha)$: how are they using that proposition to imply the existence of $ U_1 \in \mathscr{U_{p_1}}, U_2 \in \mathscr{U_{p_2}},V_1 \in \mathscr{U_{q_1}}$ and $V_2 \in \mathscr{U_{p_2}}$, such that $U_1 \times U_2 \subseteq U$ and $V_1 \times V_2 \subseteq V$? AI: You may agree that the collection of all $U\times V$ for open subsets $U\subseteq X$ and $V\subseteq Y$ forms a basis of $X\times Y$. Hence we can see that the collection $$\{U\times V\mid p\in U\text{ and } q\in V\}$$ is a local base of $(p,q)\in X\times Y$. Could you conclude $(\alpha)$ from this observation?
H: Why is $\sqrt{x^2}$ equal to $|x|$? I was trying a problem and was getting the wrong answer, and when I saw the solution on the internet I found this statement written in square brackets: $\sqrt{x^2}$ [note the square is on $x$] is $|x|$. Till now I have learned that by the laws of exponents we can multiply the powers and obtain $\sqrt{x}^2$ as $x$. But this seems confusing to me. Please clear this doubt. And does this apply to $\left({\sqrt x}\right)^2$ [note the square is on the radical], i.e. is $\left({\sqrt x}\right)^2 = |x|$? AI: The square root of $x$ is a non-negative number $y$ such that $y^2=x$. Thus the square root of $x^2$ is a non-negative number $y$ such that $y^2=x^2$. To make sure the square root of $x^2$ is non-negative, you take $y=\lvert x\rvert$. In the second part, writing $(\sqrt{x})^2$ means that $x \geq 0$, so $\lvert x\rvert = x$, and you can safely say $(\sqrt{x})^2 = x$.
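For a concrete instance: with $x=-3$, $\sqrt{x^2}=\sqrt9=3=\lvert-3\rvert\neq -3$, whereas $\left(\sqrt x\right)^2=\left(\sqrt{-3}\right)^2$ is not even defined over the reals.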
H: Number of ways to color sides of square with rotation The edges of a square are to be colored either red, blue, yellow, pink, or black. Each side of the square may only have one color, but a color may color many sides. How many different ways are there to color the square if two ways that can be obtained from each other by rotation are identical? So, it seems to simply be a variation of a circular permutation problem. The only real complication is the fact that there are 5 colors to choose from in a 4 position arrangement. My solution to this is $$3!(5C4)+\frac{3!(5C3)}{2!}+\frac{3!(5C2)}{2!2!}+\frac{3!(5C2)}{3!}+5=90$$ The denominators account for the repetition of colors to fill all 4 sides when only 3 or 2 unique colors are selected. While this was my initial solution, the answer given to us turned out to be 165. How is this? AI: Your first and last terms are correct, but the rest are not. There are two ways of coloring when $3$ colours are to be used (as shown in this picture): either the same coloured edges are opposite or they are adjacent. Accordingly, the numbers of ways of colouring are $5C1\times4C2$ and $5C1\times4C2\times2$, i.e. $30$ and $60$ ways. There are three ways of colouring when $2$ colours are to be used (as shown in the following picture). Accordingly, the numbers of ways of colouring are $5C2\times2$, $5C2$ and $5C2$ respectively. So, the total number of ways of colouring is $$(30)+(30+60)+(20+10+10)+(5)=\boxed{165}$$
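Both counts can be confirmed by brute force, and Burnside's lemma gives the total in closed form as $\frac{5^4+5+5^2+5}{4}=165$, where the four terms count the colourings fixed by the rotations through $0°$, $90°$, $180°$ and $270°$ respectively. A minimal Python sketch:

```python
from itertools import product

def canonical(c):
    """Lexicographically smallest rotation of the colouring tuple c."""
    return min(c[i:] + c[:i] for i in range(4))

# One representative per rotation class of colourings of 4 sides, 5 colours.
distinct = {canonical(c) for c in product(range(5), repeat=4)}
print(len(distinct))               # 165

# Burnside's lemma cross-check.
print((5**4 + 5 + 5**2 + 5) // 4)  # 165
```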