H: Coercive/(weakly) semicontinuous function: extreme values Consider functionals of the form $$\phi : X \rightarrow \mathbb{R} \cup\{+\infty\},$$ where $X$ is an arbitrary normed vector space. In particular, $X$ may be of infinite dimension. I would be fine with restrictions like Banach spaces or separable/reflexive spaces, though, if necessary. Now I have found several different statements about the existence of minimizers/maximizers of $\phi$ and am a little bit confused. So my first question is: which of the following statements are true/false? (For the sake of simplicity I consider only minimum values.) Let $U \subset X$ be compact. If $\phi$ is lower semicontinuous, it attains its minimum on $U$. If $\phi$ is sequentially weakly lower semicontinuous, it attains its minimum on $U$. If $\phi$ is lower semicontinuous, it does not necessarily attain its minimum on $X$. Does it possess an infimum at least? Same as (3.) with sequentially weakly lower semicontinuous $\phi$. Now let $\phi$ additionally be coercive and please consider statements (1)-(4) again. My second question would be if anyone could recommend literature on this topic. AI: Coercivity is a sufficient condition that lets you avoid dealing with bounded constraint sets. If, for all sequences $x_n$ such that $\|x_n\|\rightarrow \infty$, we have $f(x_n) \rightarrow \infty$, then $f$ is coercive. The idea is that if this condition holds, you can restrict $f$ to bounded sets of increasing radius, and there will eventually be a local minimizer that becomes a global minimizer, since the function becomes unbounded in every feasible direction. See Kinderlehrer and Stampacchia. The definition of lower semicontinuity is that for all $x_n \rightarrow x$, $\liminf_{n\to\infty}\phi(x_n) \ge \phi(x)$. So the idea is, let $\underline{\phi} = \inf_{x \in U} \phi(x)$. Take any sequence $x_n$ satisfying $$ \underline{\phi} + \dfrac{1}{n} \ge \phi(x_n) \ge \underline{\phi}. 
$$ In the classic Weierstrass version (as for $\mathbb{R}^N$), compactness of $U$ implies every sequence $x_n$ has a convergent subsequence $x_{n_k} \rightarrow x^*$. Then $$ \lim_{k \rightarrow \infty} \left(\underline{\phi} + \dfrac{1}{n_k}\right) \ge \liminf_{k \rightarrow \infty} \phi(x_{n_k}) \ge \underline{\phi} $$ and lsc implies $$ \underline{\phi} + 0 \ge \liminf_{k \rightarrow \infty} \phi(x_{n_k}) \ge \phi\left(\lim_{k \rightarrow \infty} x_{n_k}\right) \ge \underline{\phi} $$ so that $\underline{\phi} = \phi(x^*)$ and $x^*\in U$ is the minimizer. To generalize the result to more abstract spaces, the problem is the existence of a convergent subsequence $x_{n_k} \rightarrow x^*$. That is all that is going on with the word salad you're trying to sort out. In a metric space, this all still works, because compactness is equivalent to sequential compactness, so any sequence in a compact set has a convergent subsequence. The problems begin when you stop assuming $U$ is compact as a subset of an infinite dimensional vector space. The closed unit ball is not compact in an infinite dimensional normed space, despite being closed and bounded. So the Heine-Borel theorem no longer applies, and if you want to prove $U$ is compact, you must show it is complete and totally bounded in a metric space, or that every open cover has a finite subcover in a topological space. This leads to other characterizations of compactness, like the Arzelà-Ascoli theorem. But you can't always just assume $U$ is compact, because sometimes it isn't. Where does the weak stuff come from? Even though the closed unit ball is not compact in infinite dimensional normed spaces, the closed unit ball in the dual space is weak* compact (Alaoglu's theorem). Because of this, you can find a convergent subsequence in the dual space, so that a weak* lower semicontinuous function on a weak* compact set achieves a minimum. (Luenberger, "Optimization by Vector Space Methods", ch. 5) The other kind of weak result goes like this. 
Kakutani's theorem says that a Banach space is reflexive iff its closed unit ball is compact in the weak topology. Then, if $E$ is a reflexive Banach space and $K \subset E$ is nonempty, closed, convex and bounded, $K$ is compact in the weak topology. Then, a coercive, convex, lsc function $f$ that is not identically $+\infty$ achieves a minimum on any nonempty, closed, convex subset of $E$. The strategy is to exploit the coercivity to get existence on any bounded subset, and then let the bounded subsets grow, and you find a global minimum (Brezis, "Functional Analysis, Sobolev Spaces, and PDEs", ch. 3). There is yet another approach. If you are trying to minimize the distance in a Hilbert space to a convex set, you can exploit the parallelogram equality to extract a convergent subsequence. This works especially well when you have a point $x$ in a high dimensional vector space and have to project it onto a lower dimensional vector subspace, which is essentially all of statistics. (Kreyszig, Ch 4, or Luenberger, ch...3? I forget) So the answer is that compactness+lsc in a metric space gets you existence every time, but all the other concepts come into play once you have an infinite dimensional space and a subset that isn't assumed to be compact. Then you have to start considering other topologies and strategies to find your convergent $x_{n_k}$.
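The "growing balls" strategy behind coercivity can be sketched numerically. Everything below is made up for illustration (the function `f`, the grid search, the radii): once the ball is large enough to contain the global minimizer of a coercive function, enlarging it further no longer changes the answer.

```python
def f(x):
    # A coercive (but nonconvex) example: f(x) -> infinity as |x| -> infinity.
    return x**4 - 3*x**2 + x

def argmin_on_ball(radius, step=1e-3):
    """Grid-search minimizer of f on [-radius, radius]."""
    n = int(radius / step)
    return min((i * step for i in range(-n, n + 1)), key=f)

# For small radii the minimizer sits on the boundary; once the radius is
# large enough, it stabilizes at the (interior) global minimizer.
mins = {R: argmin_on_ball(R) for R in (1, 2, 5, 10)}
```

Here the minimizer over $[-1,1]$ is the boundary point $-1$, while for radius $2$ and beyond it is the same interior point near $-1.3$.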
H: Not knowing $\mathrm{gcd}(a,b)$ and $\mathrm{lcm}(a,b)$ but knowing $\mathrm{gcd}+\mathrm{lcm}$, how to find $a$ and $b$? Here's what we have: $\mathrm{gcd}(a,b)=d$ ; $\mathrm{lcm}(a,b)=m$ ; $a+b=30$ ; $m+d=42$ ; $b>a$. What I tried: if $d$ divides $a$ and $b$, then it divides $a+b$, so $d$ divides $30$. And with $\mathrm{gcd}$ and $\mathrm{lcm}$ rules I found that $md=ab$. But that's all, I didn't know how to continue. Thanks. AI: Hint: This might help narrowing down the search: $d$ also divides $m$ (and hence it divides $42$).
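As a sanity check on the hint (the exhaustive search below is not part of the hint, just a verification): since $d$ divides both $30$ and $42$, the constraints leave only a handful of candidates, and a brute-force pass finds the unique pair.

```python
import math

# Exhaustive search over the pairs allowed by a + b = 30 and b > a.
solutions = []
for a in range(1, 15):          # b > a together with a + b = 30 forces a <= 14
    b = 30 - a
    d = math.gcd(a, b)
    m = a * b // d              # lcm(a, b) = ab / gcd(a, b), i.e. md = ab
    if d + m == 42:
        solutions.append((a, b))
```

The only solution is $(a,b)=(12,18)$: $\gcd=6$, $\operatorname{lcm}=36$, and $6+36=42$.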
H: Upper bound for the nth derivative of $\Gamma(x)^n$ I was trying to find an upper bound for $$ \frac{d^n}{ds^n} \Gamma(s)^{n}|_{s=1}$$ yet I only get a bound for the nth derivative of gamma, as follows: First, the integral representation of the nth derivative of gamma is $$\frac{d^n}{ds^n} \Gamma(s) = \int_0^{\infty} t^{s-1} e^{-t} (\log t)^n dt$$ Note that the integrand, if it changes sign, does so at $t=1$. So let us break the integral into two, one ranging from $0$ to $1$, the other from $1$ to $\infty$. $$I_n = \int_0^{1} e^{-t} (\log t)^n dt + \int_1^{\infty} e^{-t} (\log t)^n dt = K_n + L_n$$ Consider first $K_n$. We have that, on the $[0,1]$ interval, $\exp(-t)\leq 1$. Therefore we can bound $$K_n = \int_0^{1} e^{-t} (\log t)^n dt\leq \int_0^{1} (\log t)^n dt = (-1)^n n!.$$ For $L_n$ we can do a change of variables $t \rightarrow t+1$ to write $$L_n = \frac{1}{e} \int_0^{\infty} \exp(-t)[\log (1+t)]^n dt.$$ Now note that $\log(1+t) \leq \sqrt{t}$ for all positive $t$. Therefore we can write $$L_n \leq \frac{1}{e} \int_0^{\infty} \exp(-t)t^{n/2} dt= \frac{1}{e} \Gamma\left(1+\frac{n}{2}\right).$$ Therefore, we can write $$ \left|\frac{d^n}{ds^n} \Gamma(s)\right|_{s=1} \leq n!+\frac{1}{e} \left(\frac{n}{2}\right)!$$ I'd like some ideas on how to relate this bound to the bound I need, or some way of writing $ \frac{d^n}{ds^n} \Gamma(s)^{n}|_{s=1}$ in terms of the polygamma function if possible. AI: You can just use the general Leibniz rule $$ (f_1\dots f_m)^{(n)}=\sum_{k_1+k_2+\dots+k_m=n}\binom{n}{k_1,k_2,\dots,k_m}\prod_{i=1}^m f_i^{(k_i)} $$ and get $$ \left.\frac{\mathrm{d}^n}{\mathrm{d}s^n}[\Gamma(s)^n]\right\rvert_{s=1} = \sum_{k_1+k_2+\dots+k_n=n}\binom{n}{k_1,k_2,\dots,k_n}\prod_{i=1}^n \left.\frac{\mathrm{d}^{k_i}}{\mathrm{d}s^{k_i}}\Gamma(s)\right\rvert_{s=1}. $$ There is an error in your derivation. Since $\log t<0$ for $t\in(0,1)$, and you bound $0<e^{-t}<1$, your bound for $K_n$ doesn't work with the $(-1)^n$. 
If you put everything inside absolute values you get $\lvert K_n\rvert\leq n!$, which seems to be what you were doing at the end.
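The identity $\int_0^1(\log t)^n\,dt=(-1)^n n!$ used for $K_n$ can be spot-checked numerically via the substitution $t=e^{-u}$, which turns it into $(-1)^n\int_0^\infty u^n e^{-u}\,du=(-1)^n n!$. The quadrature parameters below are arbitrary choices for this sketch:

```python
import math

def integral_log_power(n, upper=60.0, steps=200_000):
    """Approximate int_0^1 (log t)^n dt via t = e^{-u}:
    it equals (-1)^n * int_0^infty u^n e^{-u} du = (-1)^n * n!."""
    h = upper / steps
    total = 0.0
    for i in range(steps + 1):
        u = i * h
        w = 0.5 if i in (0, steps) else 1.0   # trapezoid endpoint weights
        total += w * (u ** n) * math.exp(-u)
    return ((-1) ** n) * total * h

approx = integral_log_power(4)   # should be close to (+1) * 4! = 24
```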
H: Is a locally compact Hausdorff quotient of a locally compact $\sigma$-compact first countable Hausdorff space always first countable? Let $Y$ be a locally compact, $\sigma$-compact, first countable Hausdorff space and $q: Y\to X$ a quotient map with $X$ Hausdorff. Suppose that $X$ is locally compact. Is $X$ first countable? I have spent a while hunting the literature for an answer but have not been able to find one. It works the other way: if $X$ is first countable then $X$ is locally compact, but what about this way? AI: Here is a counterexample. Let $Y=[0,1]\times[0,1]$ with the order topology of the lexicographic order (so $Y$ is first countable, compact, and Hausdorff). Let $X$ be the quotient of $Y$ that identifies $[0,1]\times\{0,1\}$ to a single point. This is a closed subset of $Y$, so the quotient is still compact Hausdorff. However, $X$ is not first-countable: it is the one-point compactification of a disjoint union of uncountably many copies of $(0,1)$ (the image in $X$ of $[0,1]\times(0,1)$, which has the product topology where the first coordinate is discrete and the second has the usual topology), which is not first-countable at the point at infinity.
H: Given an orthogonal matrix $P$, is $P \circ P$ invertible? Let $P$ be an orthogonal matrix, i.e., $P^T P = P P^T =I.$ Then can we say that $P \circ P$ is invertible? P.S: $A \circ B$ is the elementwise product of matrices $A$ and $B$. AI: No. My gut reaction to reading any question like this is skepticism: the elementwise product is an "unnatural" operation on matrices, and rarely bears much relation to ordinary matrix multiplication. So I wouldn't expect the latter to give many guarantees about the former. So I try to turn my skepticism into a concrete counterexample. Well, $P \circ P$ squares all of the entries of $P$. An easy example of a singular matrix is one where all of the entries are equal. The zero matrix won't work, obviously, so what about the all-ones matrix? I just need a basis of vectors that consist of $\pm 1$ entries and are mutually orthogonal. For example, $(1,1)$ and $(1,-1)$ in $\mathbb{R}^2$. So now with a little clean-up to correct for norms, I have the counterexample: $$P = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1\\1 & -1\end{bmatrix}.$$
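A quick numerical check of this counterexample (pure Python, no linear-algebra library assumed): $P$ is orthogonal, yet the Hadamard square has two equal rows and determinant zero.

```python
import math

s = 1 / math.sqrt(2)
P = [[s, s], [s, -s]]

# P is orthogonal: P P^T = I.
PPt = [[sum(P[i][k] * P[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]

# Hadamard square: every entry becomes 1/2, so the rows coincide.
H = [[P[i][j] ** 2 for j in range(2)] for i in range(2)]
det_H = H[0][0] * H[1][1] - H[0][1] * H[1][0]   # determinant of P o P
```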
H: Uniform convergence of $\varphi_n(x)=\int_{-\infty}^{\infty}\frac{\sin(n(y-x))}{n(y-x)}f(y)dy$ Let $f\in L^1(\mathbb{R})$ and $n\in\mathbb{Z}_{+}$, we define $$\varphi_n(x):=\int_{-\infty}^{\infty}\frac{\sin(n(y-x))}{n(y-x)}f(y)dy\ .\ \ (x\in\mathbb{R})$$ (However, if $y=x$, the integrand is interpreted as $\frac{\sin(n(y-x))}{n(y-x)}=1$.) Prove that $\{\varphi_n\}$ converges uniformly to $0$. AI: 1st proof. Let $\operatorname{Leb}$ denote the Lebesgue measure. For any $\epsilon > 0$, there exists $\delta > 0$ such that $\operatorname{Leb}(E) \leq \delta$ implies $\int_{E} |f(x)| \, \mathrm{d}x < \epsilon/2$, and there exists $N \geq 1$ such that $ \frac{2}{n\delta}\int_{-\infty}^{\infty} |f(x)| \, \mathrm{d}x < \epsilon/2$ for all $n \geq N$. Then for any $n \geq N$ and $x \in \mathbb{R}$, \begin{align*} \left| \varphi_n(x) \right| &\leq \left| \int_{|y-x|<\delta/2} \frac{\sin(n(y-x))}{n(y-x)} f(y) \, \mathrm{d}y \right| + \left| \int_{|y-x|>\delta/2} \frac{\sin(n(y-x))}{n(y-x)} f(y) \, \mathrm{d}y \right| \\ &\leq \int_{|y-x|<\delta/2} \left| f(y) \right| \, \mathrm{d}y + \int_{|y-x|>\delta/2} \frac{2}{n\delta} \left| f(y) \right| \, \mathrm{d}y \\ & < (\epsilon/2) + (\epsilon/2) = \epsilon. \end{align*} Therefore $\varphi_n \to 0$ uniformly as $n\to\infty$. 2nd proof. Write $$ S(x) = \int_{-\infty}^{x} \frac{\sin t}{t} \, \mathrm{d} t. $$ We know that $S(x)$ is bounded on $\mathbb{R}$. Let $M > 0$ denote a bound of $S$, and write $C^1_c(\mathbb{R})$ for the set of all compactly supported $C^1$-functions on $\mathbb{R}$. 
Then for any $g \in C^1_c(\mathbb{R})$, \begin{align*} \left| \varphi_n(x) \right| &\leq \left| \int_{-\infty}^{\infty} \frac{\sin(n(y-x))}{n(y-x)} (f(y) - g(y)) \, \mathrm{d}y \right| + \left| \int_{-\infty}^{\infty} \frac{\sin(n(y-x))}{n(y-x)} g(y) \, \mathrm{d}y \right| \\ &\leq \| f - g \|_{L^1} + \left| \frac{1}{n}\int_{-\infty}^{\infty} S(n(y-x)) g'(y) \, \mathrm{d}y \right| \\ &\leq \| f - g \|_{L^1} + \frac{M}{n} \int_{-\infty}^{\infty} |g'(y)| \, \mathrm{d}y. \end{align*} So, taking supremum over $x \in \mathbb{R}$ and taking limsup as $n\to\infty$, $$ \limsup_{n\to\infty} \left( \sup_{x\in\mathbb{R}} \left| \varphi_n(x) \right| \right) \leq \| f - g\|_{L^1}. $$ However, since the left-hand side is independent of $g$ and $C^1_c(\mathbb{R})$ is dense in $L^1(\mathbb{R})$, letting $g \to f$ in $L^1$ proves the desired claim.
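A numerical illustration, not part of either proof: taking $f$ to be the indicator of $[0,1]$ (so $\|f\|_{L^1}=1$) and approximating the integral by a midpoint rule over the support of $f$, the sup norm of $\varphi_n$ visibly decays as $n$ grows. The grids and step counts below are arbitrary.

```python
import math

def sinc(t):
    # The kernel sin(t)/t, extended by its limit value 1 at t = 0.
    return 1.0 if t == 0.0 else math.sin(t) / t

def phi(n, x, steps=2000):
    """phi_n(x) for f = indicator of [0,1], by the midpoint rule."""
    h = 1.0 / steps
    return h * sum(sinc(n * ((i + 0.5) * h - x)) for i in range(steps))

def sup_phi(n):
    xs = [-1 + 0.02 * i for i in range(151)]   # grid over [-1, 2]
    return max(abs(phi(n, x)) for x in xs)

s5, s40 = sup_phi(5), sup_phi(40)   # sup norm shrinks as n grows
```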
H: Evaluating $\lim_{n\to \infty} \prod_{k=1}^{n}\frac{1+2\cos(2x/3^k)}{3}$ $f(x)$ is defined as follows and $g(x)=|f(x)|$. I have to find the number of points of non-differentiability of $g(x)$, which will be much easier once I have dealt with this product and simplified $f(x)$. $$f(x)=\lim_{n\to \infty}\prod_{k=1}^{n}\left(\frac{1+2\cos(2x/3^k)}{3}\right)$$ Any ideas? Thanks. AI: $$1+2\cos2A=1+2(1-2\sin^2A)=\dfrac{\sin3A}{\sin A}\text{ if }\sin A\ne0$$ $$\prod_{k=1}^n\left(1+2\cos\dfrac{2x}{3^k}\right)=\prod_{k=1}^n\dfrac{\sin\dfrac x{3^{k-1}}}{\sin\dfrac x{3^k}}=\dfrac{\sin x}{\sin\dfrac x{3^n}}\text{ by telescoping }$$ Finally $\lim_{h\to0}\dfrac{\sin h}h=1$. Can you identify $h$ here?
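Carrying the hint to its end gives $f(x)=\sin x/x$, since $3^n\sin(x/3^n)\to x$. A quick numerical check of this closed form (the sample point $x=1.3$ and the cutoff $n=25$ are arbitrary; convergence is extremely fast):

```python
import math

def f_partial(x, n):
    """Partial product prod_{k=1}^n (1 + 2*cos(2x/3^k)) / 3."""
    p = 1.0
    for k in range(1, n + 1):
        p *= (1 + 2 * math.cos(2 * x / 3**k)) / 3
    return p

x = 1.3
approx = f_partial(x, 25)
closed_form = math.sin(x) / x   # the telescoped limit
```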
H: Compute the matrix norms of $A=\begin{bmatrix}3&4\\1&-3\end{bmatrix}$ My work so far Using the following $\hspace{30px} L^1\ =\displaystyle \max_{\small 1\le j\le m}(\displaystyle \sum_{i=1}^n |a_{ij}|)\\ \hspace{30px} L^2\ =\sigma_{max}(A)\\ \hspace{30px} L^F\ =\sqrt{\displaystyle \sum_{i} \displaystyle \sum_{j} |a_{ij}|^2}\\ \hspace{30px} L^\infty\ =\displaystyle \max_{\small 1\le i\le n}(\displaystyle \sum_{j=1}^m |a_{ij}|)\\$ Thus, $L^1=\begin{bmatrix}3&\textbf{4}\\1&\textbf{3}\end{bmatrix}=7\\ L^2=?\\ L^F=\sqrt{3^2+4^2+1^2+(-3)^2}=\sqrt{35}\approx 5.916\\ L^\infty=\begin{bmatrix}\textbf{3}&\textbf{4}\\1&-3\end{bmatrix}=7$ However, I'm unsure how to get $L^2$. How would I start off doing this part? AI: To find the $L^2$ norm, compute the eigenvalues of $A A^T$, take the largest one, and take its square root. So: $$ A A^T = \begin{bmatrix}3& 4\\1& -3\end{bmatrix}\cdot \begin{bmatrix}3& 1\\4& -3\end{bmatrix} = \begin{bmatrix}25& -9\\-9& 10\end{bmatrix}\\ $$ $$ \det(\lambda I - A A^T) = \lambda^2-35\lambda + 169 $$Setting this equal to zero and solving gives $\lambda = \frac{35\pm3\sqrt{61}}{2}$, so the $L^2$ norm is $\sqrt{\frac{35+ 3\sqrt{61}}{2}}$.
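To double-check the $L^2$ value without a linear-algebra library, one can approximate the largest eigenvalue of $AA^T$ by power iteration (a standard method; the iteration count below is an arbitrary choice for this sketch):

```python
import math

A = [[3.0, 4.0], [1.0, -3.0]]
AAt = [[sum(A[i][k] * A[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]                       # should be [[25,-9],[-9,10]]

# Power iteration for the dominant eigenvalue of the symmetric matrix AAt.
v = [1.0, 0.0]
for _ in range(200):
    w = [AAt[0][0]*v[0] + AAt[0][1]*v[1], AAt[1][0]*v[0] + AAt[1][1]*v[1]]
    norm = math.hypot(*w)
    v = [w[0] / norm, w[1] / norm]

# Rayleigh quotient v^T (AAt v) gives the eigenvalue estimate.
lam_max = (v[0] * (AAt[0][0]*v[0] + AAt[0][1]*v[1])
           + v[1] * (AAt[1][0]*v[0] + AAt[1][1]*v[1]))

spectral_norm = math.sqrt(lam_max)
expected = math.sqrt((35 + 3 * math.sqrt(61)) / 2)
```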
H: Why must $\int_\gamma f(z)\;d z = 0$ for *any* contour $γ$ to define an antiderivative of $f$? Whilst I was reading the following proposition from Dexter Chua's lecture notes on Complex Analysis: Let $U \subseteq \mathbb{C}$ be a domain (i.e. path-connected non-empty open set), and $f: U \to \mathbb{C}$ be continuous. Moreover, suppose $$ \int_\gamma f(z)\;d z = 0 $$ for any closed piecewise $C^1$-smooth path $\gamma$ in $U$. Then $f$ has an antiderivative. I am not sure where in the proof the property that the integral vanishes on closed paths is used, except for well-definedness. Sketch proof: Pick any point $a_0\in U$ and let $\gamma_w$ be any path from $a_0$ to $w.$ Define $F(w) = \int_{\gamma_w} f(z)\;d z$; we will show it is an antiderivative. The hypothesis that the integral around a closed path vanishes shows that such an $F(w)$ is independent of the path chosen. Since $U$ is open, we can pick $\varepsilon > 0$ such that $B(w; \varepsilon) \subseteq U$. Let $\delta_h$ be the radial path in $B(w, \varepsilon)$ from $w$ to $w + h$, with $|h| < \varepsilon$. Now note that $\gamma_w * \delta_h$ is a path from $a_0$ to $w + h$. Now we can show $$\left|\frac{F(w+h)-F(w)}{h}-f(w)\right|\to 0$$ as $h\to 0$ [I have skipped a great deal of the proof; the full proof can be found here, page 22]. My confusion: Why do we need $F$ to be independent of the path taken? Why can we not have the situation where each path yields a different antiderivative? Moreover, suppose the definition of $F(w)$ does depend on the path taken (so different paths give different $F$); would it not be true that for each path $\gamma_i$, the induced $F_{\gamma_i}(z)$ has the property that $F_{\gamma_i}'(z)=f(z)$, simply because the above limit, where $h\to 0$, will still stand? 
(I know this will not be true since if this is true then any continuous function will have an antiderivative, but I cannot seem to see where, other than well-definedness, the proof uses the vanishing-integral property.) Many thanks in advance! AI: Because well-definedness is not the only place within the proof that that fact is used. It is also used when the author states (page 23) that$$F(w+h)=\int_{\gamma_w*\delta_h}f(z)\,\mathrm dz.$$This is essential for what comes after that.
H: Proving the existence of a non-measurable set I'm asking the same question as raised in this one. There hasn't been an answer yet, so can someone please help? I did read the comments. One of the comments says the following: I see. Then I guess the point of not having $0 \in H$ is so that the union is $(0,1]$ rather than $[0,1]$ (so assuming $0$ is not in $H$ allows the author to avoid some case work). What case work is there to do? The equivalence class of $0$ is essentially the set of rationals in $[0,1]$. What additional work is there to do even if I include $0$ in $H$? Separately, I wanted to ask: in the proof, Rosenthal uses "for definiteness". What does "definiteness" mean here? AI: He has defined $\oplus$ so that it's not quite the usual addition mod $1$: it takes values in $(0,1]$ instead of in $[0,1)$. If we had $0\in H$, we'd have to take $$\bigcup_{r\in\color{red}{(0,1]}\cap\Bbb Q}(H\oplus r)$$ to get a cover of $(0,1]$ instead of Rosenthal's $$\bigcup_{r\in\color{red}{[0,1)}\cap\Bbb Q}(H\oplus r)\;.$$ To avoid having to deal both with the case $0\in H$ and the case $0\notin H$ he chooses one of them. His For definiteness is just saying that he's pinning down one possibility for the representative of the rationals, and at that point the reader can reasonably suspect that this is because the choice makes some minor technical difference to the argument, as is the case here. Where he says For definiteness I'd probably have said Without loss of generality.
H: Understanding the chain rule for differentiation operators Suppose I want to transform a partial derivative operator from spherical to Cartesian coordinates. I have found the following relation based on the chain rule here: $$ \frac{\partial }{\partial \theta } = \frac{\partial x}{\partial \theta} \frac{\partial}{\partial x} + \frac{\partial y}{\partial \theta} \frac{\partial}{\partial y} + \dots $$ As I know from calculus, the chain rule is commonly used when we want to take the derivative of some function with respect to some variables. So, how can the chain rule be defined for derivative operators? I don't exactly understand the chain rule in this context of coordinate transformation. Thank you. AI: Suppose we have $f = f(r,s)$ where $r=r(x,y)$ and $s=s(x,y)$. Then, by the usual chain rule for functions you mentioned: $$\frac{\partial}{\partial x}f(r(x,y),s(x,y)) = \frac{\partial r}{\partial x}\frac{\partial f(r,s)}{\partial r} + \frac{\partial s}{\partial x}\frac{\partial f(r,s)}{\partial s}$$ Thus, if $\frac{\partial}{\partial x}$ is viewed as an operator acting on $f$, you can see $\frac{\partial r}{\partial x}\frac{\partial}{\partial r}+\frac{\partial s}{\partial x}\frac{\partial}{\partial s}$ also as an operator acting on $f$. This is what we mean by: $$\frac{\partial}{\partial x} = \frac{\partial r}{\partial x}\frac{\partial}{\partial r} + \frac{\partial s}{\partial x}\frac{\partial}{\partial s}$$ which can be interpreted as a chain rule for operators.
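A finite-difference check of the operator identity, with throwaway example functions `f`, `r_of`, `s_of` (none of them from the question; they are made up so both sides can be evaluated numerically):

```python
def f(r, s):            # outer function
    return r * r * s

def r_of(x, y):         # inner coordinate maps (arbitrary examples)
    return x + 2 * y

def s_of(x, y):
    return x * y

x, y, h = 1.0, 0.5, 1e-6

# Left side: d/dx of the composite, by central differences.
lhs = (f(r_of(x + h, y), s_of(x + h, y))
       - f(r_of(x - h, y), s_of(x - h, y))) / (2 * h)

# Right side: (dr/dx) df/dr + (ds/dx) df/ds, also by central differences.
r, s = r_of(x, y), s_of(x, y)
dr_dx = (r_of(x + h, y) - r_of(x - h, y)) / (2 * h)
ds_dx = (s_of(x + h, y) - s_of(x - h, y)) / (2 * h)
df_dr = (f(r + h, s) - f(r - h, s)) / (2 * h)
df_ds = (f(r, s + h) - f(r, s - h)) / (2 * h)
rhs = dr_dx * df_dr + ds_dx * df_ds
```

For this example both sides equal $2(x+2y)xy+(x+2y)^2y = 4$ at $(x,y)=(1,0.5)$.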
H: Interpreting almost sure convergence I'm reading: https://en.wikipedia.org/wiki/Convergence_of_random_variables#Almost_sure_convergence and here it says that Given a probability space $(\Omega,\mathcal{F},P)$ and a random variable $X:\Omega \rightarrow \mathbb{R}$, almost sure convergence stands for $$P\left(\omega \in \Omega: \lim_{n \rightarrow \infty} X_n(\omega)=X(\omega)\right)=1.$$ [...] almost sure convergence can also be defined as follows: $$P\left(\limsup_{n \rightarrow \infty} \left\{\omega \in \Omega: |X_n(\omega) - X(\omega)| > \varepsilon\right\}\right)=0, \quad \forall \; \varepsilon>0.$$ My question is, what is the intuition behind this equivalence? I understand the first definition, but why do we use $\limsup$ in the second one to make the equivalence work? Thanks AI: I don't really see intuition here; the equivalence just follows from using the definition of convergence. For a sequence of sets $(A_n)$ the set $\limsup A_n=\{A_n\ \text{i.o.}\}$ is the set of elements which belong to infinitely many of the sets $A_n$. The formal definition of this set is $\cap_{n=1}^\infty \cup_{k=n}^\infty A_k$. Assume $X_n\to X$ almost surely by the first definition and let any constant $\epsilon>0$ be given. Define the sequence $A_{n,\epsilon}:=\{\omega: |X_n(\omega)-X(\omega)|>\epsilon\}$. Note that if $\omega\in\limsup A_{n,\epsilon}$ then it means that $|X_n(\omega)-X(\omega)|>\epsilon$ for infinitely many values of $n$, and hence $X_n(\omega)$ obviously does not converge to $X(\omega)$. So $\limsup A_{n,\epsilon}\subseteq \{\omega: X_n(\omega)\nrightarrow X(\omega)\}$, and by monotonicity of probability: $\mathbb{P}(\limsup A_{n,\epsilon})\leq \mathbb{P}(\{\omega: X_n(\omega)\nrightarrow X(\omega)\})=0$ Second direction: Now assume $X_n\to X$ by the second definition. For each $k\in\mathbb{N}$ define $B_k=\limsup A_{n,\frac{1}{k}}$ where the sets $A_{n,\epsilon}$ are defined like before. 
Then by assumption $\mathbb{P}(B_k)=0$ for all $k$, and hence $\mathbb{P}(\cup_{k=1}^\infty B_k)=0$. Now suppose we have $X_n(\omega)\nrightarrow X(\omega)$ for some $\omega$. This implies that there must be some $m\in\mathbb{N}$ such that $|X_n(\omega)-X(\omega)|>\frac{1}{m}$ for infinitely many natural numbers $n$, and thus $\omega\in B_m\subseteq\cup_{k=1}^\infty B_k$. In other words, we have the inclusion $\{\omega: X_n(\omega)\nrightarrow X(\omega)\}\subseteq\cup_{k=1}^\infty B_k$, and so $\mathbb{P}(\{\omega: X_n(\omega)\nrightarrow X(\omega)\})=0$.
H: Finding the value of $\bigg(\frac{\partial u}{\partial y}\bigg)_{x}$ at the point $(5,1,-3,1)$ Find $\displaystyle \bigg(\frac{\partial u}{\partial y}\bigg)_{x}$ at the point $(u,x,y,z)=(5,1,-3,1)$, given that $u=x^2y^2+yz-z^3$ and $x^2+y^2+z^2=11$. What I tried: $\displaystyle \frac{\partial u}{\partial y}=\frac{\partial }{\partial y}\bigg(x^2y^2+yz-z^3\bigg)=2x^2y+z$ $\displaystyle \bigg(\frac{\partial u}{\partial y}\bigg)_{x}=\frac{\partial }{\partial x}\bigg(2x^2y+z\bigg)=4xy=4(1)(-3)=-12$ I did not understand where I have to use the relation $x^2+y^2+z^2=11$; also please tell me if my solution is right. Thanks AI: Asking for $$\left({\partial u\over \partial y}\right)_x$$ means that you are considering the value of $u$ as a function of $x$ and $y$. This is allowed since in the neighborhood of $(x,y,z):=(1,-3,1)$ the constraint $x^2+y^2+z^2=11$ defines $z$ as $z=\sqrt{11-x^2-y^2}$. It follows that you are actually looking at the function $$u=\psi(x,y):=x^2y^2+y(11-x^2-y^2)^{1/2}-(11-x^2-y^2)^{3/2}\ .$$ Compute now the partial derivative $${\partial\psi\over\partial y}\biggr|_{(1,-3)}\ .$$ During this calculation $x$ is held constant $=1$.
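The recipe can be checked numerically. Differentiating $\psi$ by hand gives $2x^2y+z-y^2/z+3yz$ with $z=\sqrt{11-x^2-y^2}$, which at $(1,-3)$ (where $z=1$) evaluates to $-23$; the finite-difference sketch below (step size and names are arbitrary) agrees:

```python
import math

def psi(x, y):
    """u with z eliminated via z = sqrt(11 - x^2 - y^2), valid near (1,-3,1)."""
    w = 11 - x * x - y * y
    return x * x * y * y + y * math.sqrt(w) - w ** 1.5

h = 1e-6
x0, y0 = 1.0, -3.0
fd = (psi(x0, y0 + h) - psi(x0, y0 - h)) / (2 * h)   # (du/dy)_x by central difference

# Hand-computed derivative 2x^2 y + z - y^2/z + 3yz at (1,-3), z = 1:
analytic = 2 * 1 * (-3) + 1 - 9 + 3 * (-3) * 1
```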
H: Show that the conditional variance of a Gaussian random vector is equal to a constant almost surely. Let $(X,Y)$ be a 2-dimensional Gaussian random variable. (a) Prove that there are constants $a$ and $b$ such that \begin{align*} E[Y\,|\,X]=aX+b \end{align*} (b) Prove that the conditional variance defined as \begin{align*} \text{Var}(Y\,|\,X)=E\big((Y-E[Y\,|\,X])^2\,|\,X\big) \end{align*} is equal to a constant almost surely. I have solved part (a) as follows: Note that $(X,Y)$ Gaussian $\implies$ that $(Y-aX, X)$ is Gaussian for any $a$, as for any $\alpha, \beta\in\mathbb{R}$ we have \begin{align*} \alpha(Y-aX)+\beta X=(-\alpha a +\beta)X+\alpha Y\,\,\text{is a one dimensional normal rv by definition.} \end{align*} Now, we choose $a$ such that $Y-aX$ and $X$ are independent; to this end we must have that \begin{align*} 0&=\text{Cov}(Y-aX, X)\\ &=\text{Cov}(Y, X)-a\text{Cov}(X, X)\\ &=\text{Cov}(Y, X)-a\sigma_X^2\\ &\iff a=\frac{\text{Cov}(Y, X)}{\sigma_X^2} \end{align*} So, with this choice of $a$, $Y-aX$ and $X$ are independent. Thus, we have \begin{align*} E(Y-aX\,|\,X)=E(Y-aX)=EY-aEX:=b \end{align*} But on the other hand, \begin{align*} E(Y-aX\,|\,X)=E(Y\,|\,X)-aE(X\,|\,X)=E(Y\,|\,X)-aX \end{align*} And thus, $b=E(Y\,|\,X)-aX\implies E(Y\,|\,X)=aX+b$, as we wished to show. However, I am having trouble showing part (b); I thought it might have something to do with breaking $Y$ into independent pieces as $Y=(Y-aX)+aX$ but I am having no luck with that. Any help here would be greatly appreciated. AI: Using the $a$ described above, which has the property that $Y-aX$ is independent of $X$, $$\begin{align*} Var(Y|X) &= E[(Y-E[Y|X])^2|X]\\ &= E[(Y - aX-b)^2|X]\\ &= E[(Y - aX)^2 - 2b(Y-aX) + b^2|X] \\ &= E[(Y-aX)^2|X] - 2b E[(Y-aX)|X] + b^2\end{align*}$$ Since $Y-aX$ is independent of $X$ and since Borel measurable functions such as $t\to t^2$ preserve independence of random variables, $(Y-aX)^2$ is also independent of $X$. 
Hence $E[(Y-aX)^2|X] = E[(Y-aX)^2]$ and $E[Y-aX|X] = E[Y-aX]$ by properties of the conditional expectation. Hence $Var(Y|X)$ is a constant a.s.: $$Var(Y|X) = E[(Y-aX)^2 - 2b(Y-aX) + b^2] = E[(Y-aX-b)^2] = \operatorname{Var}(Y-aX),$$ where the middle equality uses $b = E[Y-aX]$.
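A simulation sketch of the constancy claim (the parameters $a=0.6$ and noise variance $0.64$ are arbitrary choices): the variance of the residual $Y-aX$ is the same whether we condition on $X<0$ or $X\ge 0$.

```python
import random

random.seed(0)
a = 0.6                 # Y = aX + N(0, 0.64), so Var(Y|X) = 0.64 for every X

n = 200_000
resid_neg, resid_pos = [], []
for _ in range(n):
    x = random.gauss(0, 1)
    y = a * x + random.gauss(0, 0.8)            # std 0.8 -> variance 0.64
    (resid_neg if x < 0 else resid_pos).append(y - a * x)

def variance(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

var_neg, var_pos = variance(resid_neg), variance(resid_pos)
```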
H: Proving that something is a vector bundle Let $f:X \to Y$ be a surjective morphism of irreducible varieties (over an algebraically closed field) such that, for each $y \in Y$, $f^{-1}(y)$ is a vector space of dimension $r$. Is this information enough to say that this triple is a vector bundle? I am tempted to say that it is not, because of the compatibility conditions, but I don't know any counterexamples. On the other hand, the conditions that both varieties are irreducible and that the fibers all have the same dimension give some hope that this could be true. Any help will be appreciated. AI: One nice counterexample is $$ (\mathbb{P}^1 \times \mathbb{P}^1) \setminus \Delta $$ (where $\Delta$ is the diagonal). It is fibered over $\mathbb{P}^1$, each fiber isomorphic to a 1-dimensional vector space, but the projection has no sections, so it is not a vector bundle.
H: Length of $A/\mathfrak{m}^2$ as an $A$-module Let $A$ be a commutative noetherian ring and $\mathfrak{m}\subset A$ a maximal ideal generated by $1$ element $f\in A$. In that case, $\mathfrak{m}/\mathfrak{m}^2=\langle\overline{f}\rangle_A$ and $\ell(\mathfrak{m}/\mathfrak{m}^2)=1$. Now consider the exact sequence of $A$-modules: $$0\to \mathfrak{m}/\mathfrak{m}^2\to A/\mathfrak{m}^2\to A/\mathfrak{m}\to 0$$ That way $\ell(A/\mathfrak{m}^2)=\ell(\mathfrak{m}/\mathfrak{m}^2)+\ell(A/\mathfrak{m})=1+1=2$. I'm trying to find out whether or not this argument works in general for $n$ generators $f_1,...,f_n$. If $f_1,...,f_n$ is a minimal set of generators of $\mathfrak{m}$, then $\mathfrak{m}/\mathfrak{m}^2=\langle\overline{f_1},...,\overline{f_n}\rangle_A$ and I'm tempted to say that $\ell(\mathfrak{m}/\mathfrak{m}^2)=n$, so that $\ell(A/\mathfrak{m}^2)=n+1$. But I don't know how to prove that $\overline{f_1},...,\overline{f_n}$ is a minimal set of generators for $\mathfrak{m}/\mathfrak{m}^2$, and I don't even know if this is true in general. Maybe some additional hypotheses on $A$ or $\mathfrak{m}$ are necessary? AI: We may assume $A$ is local with maximal ideal $\mathfrak{m}$, since localizing at $\mathfrak{m}$ does not change $A/\mathfrak{m}^2$. By Nakayama's lemma, then, if $M$ is a finitely generated $A$-module and elements $m_1,\dots,m_k$ are such that their images in $M/\mathfrak{m}M$ generate $M/\mathfrak{m}M$, then they generate $M$. So, if $m_1,\dots,m_k$ are a minimal set of generators for $M$, their images must be a minimal set of generators for $M/\mathfrak{m}M$, since no proper subset can generate. Applying this to $M=\mathfrak{m}$ gives you what you want.
H: Prove that a function $f$ is uniformly continuous if and only if there exists a modulus of continuity for $f$ Consider two metric spaces $(X,d_X)$, $(Y,d_Y)$, and a function $f: X\to Y$. A function $w: [0,\infty)\to [0,\infty]$ is called a modulus of continuity for $f$ if: $w(0)=0$ $\lim_{s\to 0}w(s)=0$ for all $x,z\in X$, $d_{Y}(f(x),f(z))\leq w(d_{X}(x,z))$ Then prove that $f$ is uniformly continuous if and only if there exists a modulus of continuity for $f$. I think I have an idea on how to prove one direction: suppose there exists a modulus of continuity $w$ for $f$, then fix $\varepsilon>0$; since $\lim_{s\to 0}w(s)=0$, there exists $\delta>0$ such that for any $|s|<\delta$ we have $w(s)<\varepsilon$; then for any $x,z\in X$ such that $d_{X}(x,z)<\delta$, we have $d_{Y}(f(x),f(z))\leq w(d_{X}(x,z))<\varepsilon$, which means $f$ is uniformly continuous. Could anyone give me some ideas on how to prove the reverse? Thank you! AI: Here's a proof of the converse . . . Suppose $f:(X,d_X)\to (Y,d_Y)$ is uniformly continuous. Let $\delta_1 > 0$ be such that $d_X(a,b) < \delta_1$ implies $d_Y(f(a),f(b)) < 1$. Define $w:[0,\infty)\to [0,\infty]$ by $$ w(s) = \begin{cases} \sup\,\{d_Y(f(a),f(b)){\,\large{\mid}\,} d_X(a,b)\le s\}&&\text{if}\;s \le \delta_1\\[4pt] \infty&&\text{if}\;s > \delta_1 \end{cases} $$ We'll show that $w$ is a modulus of continuity for $f$ . . . By definition of $w$, it's immediate that $w(0)=0\;$and it's clear that $$ d_Y(f(a),f(b))\le w(d_X(a,b)) $$ for all $a,b\in X$. It remains to show $\lim_{\large{{s\to 0^+}}} w(s)=0$. It's easily seen that $w$ is nonnegative and non-decreasing, hence $\lim_{\large{{s\to 0^+}}}w(s)=L$ for some $L\ge 0$, where $L=\inf\;w((0,\infty))$. Let $\epsilon > 0$. By uniform continuity of $f$, there exists $\delta > 0$ such that $d_X(a,b) < \delta$ implies $d_Y(f(a),f(b)) < \epsilon$, hence by definition of $w$, we get $w(\delta)\le \epsilon$. 
Thus $L\le \epsilon$ for all $\epsilon > 0$, hence $L=0$. This completes the proof.
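For a concrete feel, here is a numerically computed modulus for $f(x)=\sqrt{x}$ on $[0,1]$ (function names and grid sizes below are made up for this sketch). Since $\sqrt{\cdot}$ is increasing and concave, the supremum is $\sqrt{s}$, attained at $a=0$, which is why $\sqrt{x}$ is $\tfrac12$-Hölder but not Lipschitz near $0$.

```python
import math

def modulus(f, s, a_lo=0.0, a_hi=1.0, steps=10_000):
    """w(s) = sup{ f(b) - f(a) : a <= b <= a + s } on [a_lo, a_hi].
    For an increasing f this equals sup{ |f(a)-f(b)| : |a-b| <= s }."""
    h = (a_hi - a_lo) / steps
    return max(f(min(a_lo + i * h + s, a_hi)) - f(a_lo + i * h)
               for i in range(steps + 1))

w_vals = {s: modulus(math.sqrt, s) for s in (0.25, 0.04, 0.0001)}
# Expect w(s) = sqrt(s): 0.5, 0.2, 0.01.
```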
H: Evaluate the given limit by recognizing it as a Riemann sum; question regarding interval of integration Problem Find the limit: $\lim_{n\to \infty}\sqrt[n]{(1+1/n)(1+2/n)\cdot...\cdot(1+n/n)}$ which is the same as the problem solved here Evaluate the limit by first recognizing the sum as a Riemann Sum for a function defined on $[0,1]$., but in this case the interval was given as $[0,1]$. We apply the $\log$ function and we use the Riemann sum and we integrate: $$\log\left(\frac{1}{n}\sqrt[n]\frac{(2n)!}{n!}\right)=-\log n+\frac{1}{n}\sum_{k=1}^n\log(k+n)=\frac{1}{n}\sum_{k=1}^n\log(k+n)-\log n\\=\frac{1}{n}\sum_{k=1}^n\log\left(\frac{k}{n}+1\right)\to\int_0^1 \log(1+x)dx=2\log(2)-1$$ hence $$\lim_{n\to\infty}\frac{1}{n}\sqrt[n]\frac{(2n)!}{n!}=\frac{4}{e}$$ Now if the interval is not given, as in my problem, I get the interval $(1,2)$, so I get a different result. If we follow the solution above we get $\frac{1}{n}\sum_{k=1}^n\log\left(\frac{k}{n}+1\right)$. Now from the formula $\int_{a}^{b} f(x)dx=\lim_{n\to\infty}\sum_{i=1}^{n}f(a+i\,\Delta x)\,\Delta x$ I get the interval $a=1$, $b=2$, so I get $\frac{1}{n}\sum_{k=1}^n\log\left(\frac{k}{n}+1\right)\to\int_1^2 \log(1+x)dx$. My question is: Is this correct? Because Wolfram Alpha says this is correct https://www.wolframalpha.com/input/?i=limit+%281%2Fn%29*%28%282n%29%21%2Fn%21%29%5E%281%2Fn%29+as+n+goes+to+infinity%3De%5E%28integrate+log%281%2Bx%29dx%2C+x+goes+from+0+to+1%29 and this is not https://www.wolframalpha.com/input/?i=limit+%281%2Fn%29*%28%282n%29%21%2Fn%21%29%5E%281%2Fn%29+as+n+goes+to+infinity%3De%5E%28integrate+log%281%2Bx%29dx%2C+x+goes+from+1+to+2%29 AI: $\frac{1}{n}\sum_{k=1}^n\log\left(\frac{k}{n}+1\right)\to\int_1^2 \log(1+x)dx$ should be $\frac{1}{n}\sum_{k=1}^n\log\left(\frac{k}{n}+1\right)\to\int_0^1 \log(1+x)dx$. With $a=1$, $b=2$ and $\Delta x=1/n$ the summand is $f(1+k/n)$; for this to equal $\log(1+k/n)$ you would need $f(x)=\log x$, not $\log(1+x)$, and indeed $\int_1^2 \log x\,dx=2\log 2-1$ gives the same value as $\int_0^1\log(1+x)dx$.
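A numerical check of the limit $4/e$, computing the logarithm of the product to avoid overflowing $(2n)!$ (the cutoff $n=100{,}000$ is an arbitrary choice):

```python
import math

def a_n(n):
    """(1/n) * ((2n)!/n!)^(1/n), computed via a sum of logs,
    using (2n)!/n! = prod_{k=1}^n (n+k)."""
    log_val = sum(math.log(1 + k / n) for k in range(1, n + 1)) / n
    return math.exp(log_val)

limit = 4 / math.e          # e^{2 log 2 - 1}
approx = a_n(100_000)
```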
H: If $A\subset X$ is a deformation retract, then do we have $\pi_k(X,A)=0$? Let $A$ be a deformation retract of the topological space $X$; the example I have in mind is $X=(\mathbb C^*)^n$ and $A=(S^1)^n$. Notice that if we consider homology instead, then we know $H_k(X,A)=0$ by using the long exact sequence in homology theory and the fact that $H_*(A)\cong H_*(X)$. However, do we still have $\pi_k(X,A)=0$ for the relative homotopy groups? I am not familiar with them; do we have some similar properties? AI: Let $j:(X,x_0)\to (X,A)$ denote an inclusion of pairs, and $i:A\to X$ the obvious inclusion. There is a long exact sequence of homotopy groups: $$ \cdots \to\pi_{k+1}(X,A)\to\pi_k(A,x_0)\xrightarrow{i_*}\pi_k(X,x_0)\xrightarrow{j_*} \pi_k(X,A)\to\cdots.$$ Since $A\subseteq X$ is a deformation retract of $X$, it follows that $i:A\to X$ induces an isomorphism on homotopy groups and hence that the middle map $i_*$ is an isomorphism in all degrees. Hence, $\pi_k(X,A)=0$ for all $k$ by exactness.
H: Proving $\lim\limits_{x\to a} |g(x)| = 0 \implies$ $\lim\limits_{x\to a} g(x) = 0$ I am trying to prove: Let $g$ be a real valued function for all $x \in \mathbb{R}$. If $a \ne x$ is such that $\lim\limits_{x\to a} |g(x)| = 0$, then $\lim\limits_{x\to a} g(x) = 0$. Proof: Let $x \in \mathbb{R}$, $g(x)$ be a real valued function, and $a \ne x$ be such that $\lim\limits_{x\to a} |g(x)| = 0$. Let $\epsilon>0$. Then, $\exists \delta>0$ such that \begin{equation*} 0<|x-a|<\delta \implies \left||g(x)|-0\right| = |g(x)| = |g(x)-0| < \epsilon \end{equation*} and we conclude that $\lim\limits_{x\to a} g(x) = 0$. Is this proof correct? AI: The idea is good, but the wording is not so good. First of all (and you have already been informed of this in the comments), it makes no sense to say that $a\ne x$. Then I would simply say that the assertion $\lim_{x\to a}|g(x)|=0$ means$$(\forall\varepsilon>0)(\exists\delta>0):|x-a|<\delta\implies\bigl||g(x)|\bigr|<\varepsilon,\tag1$$whereas the assertion $\lim_{x\to a}g(x)=0$ means$$(\forall\varepsilon>0)(\exists\delta>0):|x-a|<\delta\implies|g(x)|<\varepsilon.\tag2$$But, since, for each $b\in\Bbb R$, $\bigl||b|\bigr|=|b|$, it is clear that, in fact, $(1)$ and $(2)$ are the same statement. So, this actually proves that$$\lim_{x\to a}|g(x)|=0\iff\lim_{x\to a}g(x)=0.$$
H: Ratio of polynomials, how to prove $f(t) \ge 0$ for $t > 0$? I am looking at the following function $$ f(t) = \frac{t+1}{t^2} - \frac{16}{(t+1)^3} $$ and am struggling to prove that $f(t) \ge 0$ whenever $t>0$. The statement appears true from plotting and inspecting the graph, but this is far from a proof. AI: Combine the fractions and keep factoring $$\frac{t+1}{t^2}-\frac{16}{(t+1)^3} = \frac{(t+1)^4-16t^2}{t^2(t+1)^3} = \frac{[(t+1)^2-4t][(t+1)^2+4t]}{t^2(t+1)^3}$$ $$ = \frac{(t-1)^2[(t+1)^2+4t]}{t^2(t+1)^3}$$ which is always nonnegative for $t>0$ (and $0$ when $t=1$)
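The factorization in the answer can be double-checked symbolically with sympy (a verification sketch, not part of the original answer):

```python
from sympy import symbols, factor, simplify

t = symbols('t', positive=True)
f = (t + 1)/t**2 - 16/(t + 1)**3

# numerator after putting everything over the common denominator t^2 (t+1)^3
num = factor((t + 1)**4 - 16*t**2)  # factors as (t-1)^2 (t^2 + 6t + 1)

claimed = (t - 1)**2 * ((t + 1)**2 + 4*t) / (t**2 * (t + 1)**3)
diff_expr = simplify(f - claimed)   # 0 iff the rewriting is exact
```

Both the numerator's factorization and the full identity reduce to zero, and a few sample values confirm nonnegativity on $t>0$.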
H: Why is it ok to put all the elements to the exact power if they had different power before that? The question could sound messy, so will demonstrate what I mean. I'm going through Algorithm course from Stanford (full screen), when the teacher makes the proof for $O(n^k)$ notation: $$T(n) = a_kn^k + ... + a_1n+a_0$$ $$T(n) = O(n^k)$$ $$c = |a_k| + ... + |a_1|+|a_0|$$ $$n_0 = 1$$ And, when making the proof he starts with: $$T(n) \le |a_k|n^k + ... + |a_1|n+|a_0|$$ But then decides to power all the elements to $n^k$: $$T(n) \le |a_k|n^k + ... + |a_1|n^k+|a_0|n^k$$ So, my question is - why it's possible? It's not like he multiplied both parts of inequality, he just added a lot to the right part, but the inequality still valid for some reason. Any link or short answer will do, need just to get a general idea. AI: If $b\ge a$, and you add a lot to $b$, is there any way that the resulting number could be less than $a$? The point of doing it here is to be able to pull out the common factor of $n^k$ to see that $$T(n)\le(|a_k|+\ldots+|a_1|+|a_0|)n^k\;:$$ the factor $|a_k|+\ldots+|a_1|+|a_0|$ is a constant that doesn’t depend on $n$. If we call it $M$, we can say that $T(n)\le Mn^k$ for all sufficiently large $n$, which is exactly what we need in order to conclude that $T(n)$ is $O(n^k)$.
H: Combinatoric question on preimage of a function Got stuck on the following combinatoric question. Will be glad for any suggestions. Find the number of functions $f:\{1,2,3,4\} \rightarrow \{1,2,3,4\}$ so that for all $1\le i\le4$, $f^{-1}(\{i\})≠\{i\}$ . (i.e. Find the number of these functions in which the pre-image of a subset with a single member is different from the set containing that member.) Now, finding the number of injective functions that fulfill this is pretty easy (it's called the number of "derangements" of a set and is the number of injective functions with no fixed point, equal in this case to 9) but there are so many other possibilities that checking them all, seems to be too tedious. For example a partly injective function such as $f(1)=2 ,\ f(2)=2, \ f(3)=1, \ f(4)=3$ fulfills the condition in spite of $2$ being a fixed point, since the pre-image of $2$ is $\{1,2\}$ which is different from $\{2\}$. AI: If the function were injective or surjective, and thus bijective, you would indeed be talking about a specific class of permutations called derangements. There are $!4$ such derangements. Since you are not restricting yourself to injective or surjective functions, we can consider all $4^4$ possible functions here. Since the numbers are so small, we can proceed directly by cases. No fixed points: for each $i$ we have $3$ choices for $f(i)$ to be such that $f(i)\neq i$. There are $3^4$ such functions here. Exactly one fixed point: Pick which one value was fixed. From there, in order to prevent the preimage of that point being exactly that point alone, it must be the case that for each of the other points we did not avoid mapping to it. There are $3^3$ functions with only $1$ as a fixed point, $2^3$ of which did not map any other elements to $1$ as well, making the total number of functions with exactly one fixed point $4\cdot (3^3-2^3)$ Exactly two fixed points: Pick which two values were fixed. 
Due to the small number of available elements, we note that it must be the case that each of the two remaining elements are mapped one each to the chosen values. There are $\binom{4}{2}\cdot 2$ such functions. This gives a total of $$3^4+4\cdot (3^3-2^3)+\binom{4}{2}\cdot 2$$ I do not as of yet see a convenient way to approach the general problem were we to consider $\{1,2,3,\dots,n\}$ in place of merely $\{1,2,3,4\}$ as our set in question
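Since the domain is tiny, the casework total $3^4+4\,(3^3-2^3)+\binom42\cdot 2=169$ can be confirmed by brute force over all $4^4$ functions (a verification sketch, not part of the original answer):

```python
from itertools import product

count = 0
for f in product(range(1, 5), repeat=4):  # f[i-1] represents f(i)
    # require that the preimage of {i} differs from {i} for every i
    if all([j for j in range(1, 5) if f[j - 1] == i] != [i] for i in range(1, 5)):
        count += 1

formula = 3**4 + 4 * (3**3 - 2**3) + 6 * 2  # the three cases from the answer
```

The enumeration agrees with the casework, both giving $169$.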
H: Let $y = f(x)$ be the particular solution to the differential equation $ \frac{dy}{dx} $=$y^2$ with the initial condition $f(3) = 1$ Let $y = f(x)$ be the particular solution to the differential equation $ \frac{dy}{dx} $=$y^2$ with the initial condition $f(3) = 1$. Which of the following gives an expression for $f(x)$ and the domain for which the solution is valid? A.) $f(x)= \frac{1}{4-x} $ for $x < 4$ B.) $f(x)= \frac{1}{4-x} $ for $x > 4$ C.) $f(x)= \frac{4x-1}{x} $ for $x > 0$ D.) $f(x)= \frac{4x-1}{x} $ for $x \ne 0$ Would the answer be A since the equation is then valid for when $x=3$? And it couldn't be C or D, as those don't satisfy $f(3)=1$, right? AI: This is a simple separable ODE $$\frac{dy}{dx} = y^2$$ Separate variables, $$\frac{1}{y^2} \frac{dy}{dx} = 1$$ Integrating both sides, $$\int y^{-2} dy = x+ c_1,$$ i.e. $-y^{-1} = x + c_1$, so $y^{-1} = -x + c_1'$ (with $c_1' = -c_1$). Thus, we get that $$y = \frac{1}{c_1' - x}$$ Since $y(3)=1$ we then immediately see that ${c_1'} = {4}$. We find the solution $$y = \frac{1}{4-x}$$ The maximal interval containing the initial point $x=3$ on which this solution is defined is $x < 4$, so A) is the correct answer.
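The choice can be verified symbolically: $1/(4-x)$ satisfies both the ODE and the initial condition, while the C/D expression fails the initial condition (a check sketch, not part of the original answer):

```python
from sympy import Rational, diff, simplify, symbols

x = symbols('x')
y_a = 1/(4 - x)        # candidate from options A/B
y_c = (4*x - 1)/x      # candidate from options C/D

ode_residual_a = simplify(diff(y_a, x) - y_a**2)  # 0 iff y_a solves y' = y^2
ic_a = y_a.subs(x, 3)                             # should equal 1
ic_c = y_c.subs(x, 3)                             # equals 11/3, not 1
```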
H: Solving a system of a system of equations, numerically. I have a system of 4 systems of equations. $$\begin{align*} C - 0 &= 1.02 \\ C - F &= 0.45 \\ C - N &= 0.24 \\ C - I &= -0.21 \\ \end{align*}$$ $$\begin{align*} F - 0 &= 0.59 \\ F - C &= -0.45 \\ F - N &= -0.20 \\ F - I &= -0.68 \\ \end{align*}$$ and so on. Obviously, I can just say that $C = 1.02, F = 0.59$. But then $C - F = 0.43$, not $0.45$, as the second line indicates. Is there a way to solve for the best approximation for $C, F, N, I$, or am I better off not wasting my time and just going with $C = 1.02$, etc.? I've tried setting up a matrix equation for each system, and solving the normal equations symbolically in Matlab, but that didn't work out too well. Thanks! AI: We can arrange all the equations in a whole system in the form $Ax=b$ with $x=(C,F,N,I)$ and solve by least-squares method $$A^TA\hat x=A^Tb \implies \hat x =(A^TA)^{-1}A^Tb$$
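For the two blocks of equations actually listed in the question (the ones for $C$ and $F$; the question elides the rest, which would simply be stacked the same way), the least-squares solution can be computed with numpy (a sketch, not part of the original answer):

```python
import numpy as np

# rows are coefficients of (C, F, N, I); b holds the right-hand sides
A = np.array([
    [1,  0,  0,  0],   # C - 0 = 1.02
    [1, -1,  0,  0],   # C - F = 0.45
    [1,  0, -1,  0],   # C - N = 0.24
    [1,  0,  0, -1],   # C - I = -0.21
    [0,  1,  0,  0],   # F - 0 = 0.59
    [-1, 1,  0,  0],   # F - C = -0.45
    [0,  1, -1,  0],   # F - N = -0.20
    [0,  1,  0, -1],   # F - I = -0.68
], dtype=float)
b = np.array([1.02, 0.45, 0.24, -0.21, 0.59, -0.45, -0.20, -0.68])

# minimizes ||Ax - b||_2, i.e. solves the normal equations A^T A x = A^T b
x_hat, residuals, rank, _ = np.linalg.lstsq(A, b, rcond=None)
```

The fitted $(C,F,N,I)$ compromises between the inconsistent equations instead of privileging the first one in each block.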
H: Why is this map a unitary? Consider following theorem from Murphy's '$C^*$-algebras and operator theory' The proof says that if $p=1$, then the assertion that the map $H \to \bigoplus_\lambda p_\lambda(H)$ is a unitary is clear. I don't see why this is true though. To show it is a unitary, it suffices to show that it is isometric and surjective. I can see it is isometric, but don't see why it should be surjective. I tried the following: Let $(p_\lambda(x_\lambda))_\lambda \in \bigoplus_\lambda p_\lambda(H)$. I guess we must take something like $x= \sum_\lambda x_\lambda$ and show that this still gets mapped to what we want? AI: The map is obviously surjective. Let $y\in \oplus_{\lambda\in \Lambda} p_{\lambda}(H)$ and define $x=\sum_{\lambda} y_{\lambda}$. We just need to argue that $p_{\lambda}(x)=y_{\lambda}$. Since, the $p_{\lambda}$ are orthogonal, we see that $p_{\lambda}(y_{\lambda'})=0$ for $\lambda\neq \lambda'$ and thus, by continuity $$ p_{\lambda}(x)=\sum_{\lambda'}p_{\lambda}(y_{\lambda'})=p_{\lambda}(y_{\lambda})=y_{\lambda} $$ since $p_{\lambda}$ is a projection. This proves the desired.
H: Change of integration variable I have the following relation for a function $\phi_A(\textbf{r})$ and it is known that $\rho_B(\lambda \textbf{r}) = \rho_A(\textbf{r})$ $\phi_A(\textbf{r}) = \int{d^3r' \frac{\rho_A(\textbf{r}')} {|\textbf{r - r}'|}} = \int{d^3r' \frac{\rho_B(\lambda \textbf{r}')} {|\textbf{r - r}'|}}$. Now if I change the integration variable to $\lambda\textbf{r}'$, then $\phi_A(\textbf{r}) = \int{d^3(\lambda r') \frac{\rho_B(\lambda^2 \textbf{r}')} {|\textbf{r} - \lambda \textbf{r}'|}}$. My book says that $\phi_A(\textbf{r}) = \frac{1}{\lambda^2}\int{d^3(\lambda r') \frac{\rho_B(\lambda \textbf{r}')} {|\lambda\textbf{r} - \lambda \textbf{r}'|}}$. I am guessing that this relation is trivial, but I am unable to get the third relation from the second one. AI: I'm assuming $\bf{r}$ and $\bf{r}'$ are vectors in $\mathbb{R}^{3}$, so: $$|{\bf{r}}-{\bf{r'}}| = \sqrt{(x-x')^{2}+(y-y')^{2}+(z-z')^{2}} =\sqrt{\frac{1}{\lambda^{2}}[(\lambda x-\lambda x')^{2}+(\lambda y-\lambda y')^{2}+(\lambda z-\lambda z')^{2}]} = \frac{1}{\lambda}|\lambda{\bf{r}}-\lambda{\bf{r}'}|$$ Here, I assumed $0< \lambda$, ${\bf{r}}=(x,y,z)$ and ${\bf{r}}'=(x',y',z')$. Thus, we have: $$\int d^{3}{\bf{r}'}\frac{\rho_{B}(\lambda {\bf{r}}')}{|{\bf{r}}-{\bf{r}}'|} =\lambda\int d^{3}{\bf{r}'}\frac{\rho_{B}(\lambda {\bf{r}}')}{|\lambda{\bf{r}}-\lambda{\bf{r}}'|} $$ Finally, $d^{3}{\bf{r}}' = \frac{1}{\lambda^{3}}d^{3}(\lambda{\bf{r}}')$, from where it follows that: $$\phi_{A}({\bf{r}}) = \frac{1}{\lambda^{2}}\int d^{3}(\lambda{\bf{r}'})\frac{\rho_{B}(\lambda {\bf{r}}')}{|\lambda{\bf{r}}-\lambda{\bf{r}}'|}$$
H: have to find cartesian-coordinates from the given diagram Honestly, I don't know where to start in this question The cartesian co-ordinates of the point $Q$ in the figure is : (a) $(\sqrt{3}, 1)$ (b) $(-\sqrt{3}, 1)$ (c) $(-\sqrt{3},-1)$ (d) $(\sqrt{3},-1)$ AI: Well, by the vertical angles theorem, the angles opposite each other when two lines cross are always equal. Here, I will denote the intersection between the circumference and the $Ox$ axis, the point which is part of the arc ${RQ}$, as $M$. By the theorem which I stated we then get that $\angle MOQ = \pi/6$ radians $= 30°$. We will now use the sine and cosine functions to find $MQ$ and $OM$, respectively; the radius of the circle, read off the figure, is $OQ = 2$: $$\sin {\angle MOQ} = \frac{MQ}{OQ} \Rightarrow MQ =1 $$ $$\cos {\angle MOQ} = \frac{OM}{OQ} \Rightarrow OM = \sqrt3 $$ Now, notice that $OM = pr_{Ox}{OQ}$ so $-OM=x_Q$ (that is clear from the drawing, since $Q$ lies to the left of the $Oy$ axis) and $MQ=pr_{Oy}OQ$ so $MQ=y_Q$. Thus we get that $Q=(-\sqrt3, 1)$.
H: For which real number $\alpha$ is there a value $c$ for which $\int^c_0 \frac{1}{1+x^\alpha}dx=\int^\infty_c\frac{1}{1+x^\alpha}dx$ For which real number $\alpha$ is there a value $c$ for which $\displaystyle\int^c_0 \frac{1}{1+x^\alpha}\mathrm{d}x=\int^\infty_c\frac{1}{1+x^\alpha}\mathrm{d}x$. What I have tried: Since when $0\leq x\leq c$, $\displaystyle0\leq \frac{1}{1+x^\alpha} \leq 1$ for all $\alpha$, the integral $\displaystyle \int^c_0 \frac{1}{1+x^\alpha}\mathrm{d}x$ converges. When $c\leq x$: for $\alpha>1$, $\displaystyle \int^\infty_c\frac{1}{1+x^\alpha}\mathrm{d}x$ converges, while for $\alpha\leq 1$ it diverges. So only when $\alpha>1$ is the existence of such a $c$ possible. I don't know what to do from here, and there's a hint to use the intermediate value theorem. Then $\displaystyle\int^c_0 \frac{1}{1+x^\alpha}\mathrm{d}x=\frac{c}{1+t{_1}{^\alpha}}, 0<t_1<c$, $\displaystyle \int^m_c\frac{1}{1+x^\alpha}\mathrm{d}x=\frac{m-c}{1+t{_2}{^\alpha}}, m\to\infty, c<t_2<m$. It doesn't look promising. AI: Note that for any fixed $\alpha$, the expression $f_\alpha(c) = \int_0^c \frac{1}{1 + x^\alpha}dx$ is a continuous (and differentiable!), strictly increasing function of $c$ for $c\geq 0$. Additionally, note that the equation in the problem statement is equivalent to $$f_\alpha(c)=\frac{1}{2}\int_0^\infty \frac{1}{1 + x^\alpha}dx$$ and thus by the intermediate value theorem, there exists a root whenever $\alpha$ is such that the integral converges (which, as you noted, is exactly when $\alpha>1$), since $f_\alpha(0)=0$ and $\lim_{c\to\infty}f_\alpha(c)=\int_0^\infty \frac{1}{1 + x^\alpha}dx$. Strict monotonicity also makes this $c$ unique.
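A numeric illustration (not part of the original answer; the helper names are mine) for the convergent case $\alpha=3$: pure-Python Simpson integration plus bisection locates the balancing point $c$, and the total can be compared against the known value $\int_0^\infty \frac{dx}{1+x^3}=\frac{2\pi}{3\sqrt3}$:

```python
import math

ALPHA = 3.0

def integrand(x):
    return 1.0 / (1.0 + x**ALPHA)

def simpson(f, a, b, n=4000):
    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# total on [0, inf): cut off at X; the tail is below ∫_X^∞ x^-3 dx = 1/(2 X^2)
X = 1000.0
total = simpson(integrand, 0.0, X, 200000)

# bisect F(c) = total/2, where F(c) = ∫_0^c is continuous and increasing
lo, hi = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if simpson(integrand, 0.0, mid) < total / 2:
        lo = mid
    else:
        hi = mid
c = 0.5 * (lo + hi)
```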
H: Uniform continuity on different intervals We know that $f_n:(0,1)\rightarrow\mathbb{R}$ is a sequence of nondecreasing functions on interval $(0,1)$ and that $f:(0,1)\rightarrow \mathbb{R}$ is a continuous function. Let $A\subset (0,1)$ be a dense subset of $(0,1)$, which fulfills the condition $$ \forall_{x\in A} \ \lim_{n\rightarrow \infty}f_n(x)=f(x).$$ With this assumptions I have to a) show that $f_n$ converges uniformly to $f$ on every $[a,b]\subset(0,1)$, b) answer if $f_n$ converges uniformly to $f$ on $(0,1)$. My attempts: a) We have to show that $$ \forall_{\epsilon>0} \ \exists_{n_0=n_0(\epsilon)} \forall_{x\in [a,b]}\forall_{n>n_0} \ \ |f(x)-f_n(x)|<\epsilon.$$ For $f_n(x)$, $x\in A$, where $A$ is a dense subset of $(0,1)$ we can observe pointwise convergence, so let's take $[a,b]$ in such a way to make it a dense subset of $(0,1)$. Then for every $\epsilon$ we will be able to choose such $n_0=n_0(\epsilon)$ that $\forall_{x\in [a,b]}\forall_{n>n_0} \ \ |f(x)-f_n(x)|<\epsilon$ (from the point convergence). b) $f_n$ doesn't converge uniformly to $f$ on $(0,1)$ because on this interval doesn't converge even pointwise. Are my solutions sensible? If not, I politely ask for help. Thanks in advance and have a good day. AI: a) First, we show pointwise convergence. For any $x$ we build increasing $x_k \to x$ and decreasing $x'_k \to x$, belonging to $A$. For any $n$, we have $$f_n(x_k) \le f_n(x) \le f_n(x'_k)$$ Since $f_n$ converges on $A$, $$\forall k\ \forall \epsilon\ \exists n_{k,\epsilon}:\ \forall n > n_{k,\epsilon} \ \ |f_n(x_k) - f(x_k)| < \frac \epsilon 2 \tag{1}\label{eq1}$$ On the other hand, since $f$ is continuous: $$\forall \epsilon\ \exists k_{\epsilon}:\ \forall k \ge k_{\epsilon} \ \ |f(x_k) - f(x)| < \frac \epsilon 2 \tag{2}\label{eq2}$$ (The same for $x'_k$). 
Therefore, $\forall n > n_{k_\epsilon,\epsilon}$ \begin{alignat}{1} \text{(by \eqref{eq1} for $k \gets k_\epsilon$)} \quad\quad & f(x_{k_\epsilon}) &- \frac \epsilon 2 &\le f_n(x) &\le f(x'_{k_\epsilon}) &+ \frac \epsilon 2 \Rightarrow \\ \text{(by \eqref{eq2} for $k \gets k_\epsilon$)} \quad\quad & f(x) &- \epsilon &\le f_n(x) &\le f(x) &+ \epsilon \end{alignat} I.e., we have a pointwise convergence at point $x$. Uniform convergence on $[a,b]$ is shown here. b) It's false: consider $f_n(x) = \begin{cases} -\frac 1x, & x < \frac 1n \\ 0, & x \ge \frac 1n \end{cases}$. For each $x$, we have $f_n(x) \to f(x) = 0$. But convergence is not uniform: for any $n$ and $\epsilon$ there exists $x = \min(\frac 1 {2n}, \frac 1 {2 \epsilon})$: $f_n(x) \le -2 \epsilon$.
H: Inequality involving factorial of sum I noticed the following inequality involving factorials as a consequence of a statistics exercise: $$ (x_1+\cdots+x_n)!\leq n^{x_1+\cdots +x_n}\,x_1!\,\cdot\cdots\cdot\,x_n!\,, $$ where $x_1,\ldots,x_n$ are nonnegative integers. I thought such a clean inequality would have a name, but wasn't able to find anything on the internet. Can someone provide an elementary proof of it, or at least one that feels more natural than mine? How I arrived at it: Let $X=(X_1,\ldots,X_n)$ be a random sample from the Poisson($\lambda$) distribution. Consider the statistic $T=X_1+\,\cdots\,X_n\,$. By the superposition property of independent Poisson random variables, $T$ has a Poisson($n\lambda$) distribution. Denoting $t(x)=x_1+\cdots+x_n\,$, the following line shows that $T$ is a sufficient statistic for $\lambda\,$; $$ P\big(X=x\,\big|\,T=t(x)\big)=\frac{P(X=x)}{P\big(T=t(x)\big)}=\frac{\prod_1^nP(X_i=x_i)}{P\big(T=t(x)\big)}=\frac{\big(e^{-\lambda}\big)^n\,\frac{\lambda^{t(x)}}{x_1!\,\cdots \,x_n!}}{e^{-n\lambda}\,\frac{(n\lambda)^{t(x)}}{t(x)!}}=\frac{t(x)!}{n^{t(x)}x_1!\cdots x_n!}\,. $$ As the probability of any event cannot be bigger than one, we must have $t(x)!\leq n^{t(x)}x_1!\cdots x_n!\,$. AI: Combinatorially, $$\frac{(x_1+\dots+x_n)!}{x_1!\cdots x_n!}$$ is a multinomial coefficient that counts the number of ways to partition a set of size $x_1+\dots+x_n$ into sets of sizes $x_1,\dots,x_n$. On the other hand, $$n^{x_1+x_2+\dots+x_n}$$ counts the number of functions from a set of size $x_1+\dots+x_n$ to a set of size $n$, or equivalently (considering the fibers of such a function) the number of partitions of a set of size $x_1+\dots+x_n$ into $n$ (ordered) subsets. Thus $$\frac{(x_1+\dots+x_n)!}{x_1!\cdots x_n!}\leq n^{x_1+x_2+\dots+x_n}$$ and your inequality follows. 
This also shows the inequality is strict unless $n=1$ or $x_i=0$ for all $i$, since otherwise there will exist partitions into $n$ subsets where the subsets do not have sizes $x_1,\dots,x_n$.
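The inequality is easy to spot-check exhaustively for small cases (a verification sketch, not part of the original answer):

```python
from itertools import product
from math import factorial

def inequality_holds(xs):
    # (x1 + ... + xn)!  <=  n^(x1+...+xn) * x1! * ... * xn!
    n, t = len(xs), sum(xs)
    rhs = n**t
    for x in xs:
        rhs *= factorial(x)
    return factorial(t) <= rhs

# every triple of nonnegative integers up to 5 satisfies the inequality
violations = [xs for xs in product(range(6), repeat=3) if not inequality_holds(xs)]
```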
H: Is this Lie algebra decomposition always true: $\mathcal{G} = \text{Ker}(ad_{a}) \oplus \text{Im}(ad_{a})$ I am reading: https://i.stack.imgur.com/QPS4m.png and am not understanding their decomposition $$\mathcal{G} = \text{Ker}(ad_{a}) \oplus \text{Im}(ad_{a})$$ where $\mathcal{G}$ is a Lie algebra and $ad_{a} := [a, \cdot]$. Since a Lie algebra is a vector space plus some conditions and bracket...then isn't this always true by the classic Linear Algebra rank-nullity theorem? If so, why say this? AI: If $a$ is a nilpotent element, i.e., say one where the adjoint map squares to $0$, then the image must be contained in the kernel. The dimensions adding does not mean that one is a complement to the other. For example, the map $f:\mathbb{R}^2\to \mathbb{R}^2$ given by $(x,y)\mapsto (0,x)$ does not satisfy your statement. The image is generated by $(0,1)$, and so is the kernel. Edit: As Torsten Schoeneberg mentioned, the map $f$ is $\mathrm{ad}_x$ for the $2$-dimensional Lie algebra with basis $x,y$, and Lie bracket $[x,y]=x$.
H: Prove that $f$ is Lebesgue integrable on $E$ if and only if $\sum_{n=0}^\infty 2^nm(\{x\in E:f(x)\geq2^n\})<\infty$ Question: Let $E$ be a finite measure space and let $f$ be a nonnegative function on $E$. Prove that $f$ is Lebesgue integrable on $E$ if and only if $\sum_{n=0}^\infty 2^nm(\{x\in E:f(x)\geq2^n\})<\infty$. My Thoughts and Attempt: For the forward direction, if $f$ is integrable on $E$, then $\int_Ef(x)<\infty$. Consider two sets $A=m(\{x\in E:f(x)\geq2^n\})$ and $B=m(\{x\in E:f(x)<2^n\})$. Then, $\int_Ef=\int_Af+\int_Bf$. Since $\int_Ef<\infty$, then, in particular, $\int_Af<\infty$, and so by Markov we get: $2^nm(A)\leq2^n\frac{1}{2^n}\int f(x)<\infty$ For the other direction, suppose $\sum_{n=0}^\infty 2^nm(\{x\in E:f(x)\geq2^n\})<\infty$. So, $\int_Ef=\int_Af+\int_Bf$, and by our hypothesis, we know $\int_Af<\infty$. So, our only concern is about $\int_Bf$..... and this is where I am getting stuck... Any help, suggestions, etc. are greatly appreciated. Also, please let me know if I have any errors in the forward direction. Thank you! AI: You need $f$ to be measurable. Let $C_n=\{x:f(x)\ge 2^n\}$ and consider the indicator function $I_{C_n}$. Define $$g(x)=\sum_{n=0}^\infty 2^nI_{C_n}(x).$$ Then $$\int_E g=\sum_{n=0}^\infty 2^nm(C_n).$$ So you want to show $\int_Ef<\infty$ iff $\int_E g<\infty$. If $2^N\le f(x)<2^{N+1}$ then $g(x)=1+2+4+\cdots+2^N=2^{N+1}-1$. If $f(x)<1$ then $g(x)=0$. Therefore $f(x)\le g(x)+1$ and $f(x)\ge g(x)/2$. These inequalities show that if $\int_Eg$ is finite then so is $\int_Ef$ and vice versa.
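The pointwise comparison $g/2 \le f \le g+1$ that drives the proof can be sanity-checked numerically for sample values of $f(x)$ (a sketch, not part of the original answer):

```python
def g_of(fx, max_n=60):
    # g(x) = sum of 2^n over n >= 0 with f(x) >= 2^n; equals 2^(N+1)-1
    # when 2^N <= f(x) < 2^(N+1), and 0 when f(x) < 1
    return sum(2**n for n in range(max_n) if fx >= 2**n)

samples = [0.0, 0.3, 1.0, 2.5, 7.9, 16.0, 1000.0]
checks = [(fx, g_of(fx)) for fx in samples]
```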
H: Galois Group of $x^4 - 7$ over $\mathbb{F}_5$ I am asked to find the Galois Group of the polynomial $x^4 - 7$ over $\mathbb{F}_5$. I am wondering if the following is correct: The splitting field of $x^4 - 7 = x^4 - 2$ over $\mathbb{F}_5$ is $\mathbb{F}_5(\sqrt[4]{2},i)$ where $i,\sqrt[4]{2}$ lie in a fixed algebraic closure of $\mathbb{F}_5$ and $i^2 = -1$ and $(\sqrt[4]{2})^4 = 2$. Since $2^2 = 4 = -1 \in \mathbb{F}_5$ we may take $i = 2$, so the splitting field is $\mathbb{F}_5(\sqrt[4]{2})$. Now, $x^4 - 2$ does not have any roots in $\mathbb{F}_5$ and so $[\mathbb{F}_5(\sqrt[4]{2}): \mathbb{F}_5] = 2$ or $4$, which means the Galois group has order $2$ or $4$. If it were $2$, then $\sqrt[4]{2} = a + b\sqrt{2}$ where $a,b \in \mathbb{F}_5$. After squaring we must have $a^2 + 2b^2 = 0$ and $2ab = 1$. Then $a,b \neq 0$ and $(a/b)^2 = -2 = 3$, but $3$ is not a square in $\mathbb{F}_5$, an impossibility; so the degree of this extension (and hence the order of the Galois group) is $4$. Consider $\sigma: \mathbb{F}_5(\sqrt[4]{2}) \to \mathbb{F}_5(\sqrt[4]{2})$ given by $\sigma(\sqrt[4]{2}) = 2\sqrt[4]{2}$. This is an automorphism of $\mathbb{F}_5(\sqrt[4]{2})$ of order $4$ and so the Galois group is $\langle \sigma \rangle \cong \mathbb{Z}/4\mathbb{Z}$. AI: I am stating an alternative path. I will show that $x^4-2$ is irreducible over $\mathbb F_5$ and hence $[\mathbb F_5(\sqrt[4] 2):\mathbb F_5]=4$. Also $\mathbb F_5$ contains the fourth roots of unity ($\because a^{4}\equiv 1 \pmod 5$ for all $a\in \mathbb F_5^*$). So $x^4-2$ splits over $\mathbb F_5(\sqrt[4] 2)$: $$x^4-2=\prod_{a\in \mathbb F_5^*}(x-a\sqrt[4]2)$$ Since $x^4-2$ has no root in $\mathbb F_5$, the only way it could be reducible is as a product of two monic quadratics, $$x^4-2=(x^2+ax+b)(x^2-ax+d)$$ (the vanishing $x^3$ coefficient forces opposite linear terms). Comparing the $x$ coefficients gives $a(d-b)=0$. If $d=b$, the constant term gives $b^2=-2=3$; if $a=0$, the $x^2$ coefficient gives $d=-b$ and the constant term then gives $-b^2=-2$, i.e. $b^2=2$. But neither $2$ nor $3$ is a square in $\mathbb F_5$ (the squares are $0,1,4$), so $x^4-2$ is irreducible in $\mathbb F_5$. Then you have $[\mathbb F_5(\sqrt[4] 2):\mathbb F_5]=4$.
After this use the fact that every finite extension over a finite field is cyclic. (you can find a proof here)
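The irreducibility claim can be confirmed by factoring over $\mathbb F_5$ with sympy, and the small quadratic-residue facts used above checked directly (a verification sketch, not part of the original answer):

```python
from sympy import Poly, symbols

x = symbols('x')
p = Poly(x**4 - 2, x, modulus=5)     # x^4 - 2 = x^4 + 3 over F_5
coeff, factors = p.factor_list()
degrees = sorted(f.degree() for f, _ in factors)  # [4] iff irreducible

squares_mod_5 = sorted({(a * a) % 5 for a in range(5)})  # the squares in F_5
```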
H: Kernel of rational matrix has rational elements arbitrarily close to real elements I am trying to understand this: Let $\mathscr{S}$ be a finite system of homogeneous linear equations with rational coefficients, i.e. $Ax = 0$ for some rational matrix $A$. If $\hat{x} \in \mathbb{R}^n$ is a real solution to $\mathscr{S}$, then there exist rational solutions $x' \in \mathbb{Q}^n$ arbitrarily close to $\hat{x}$. (This follows from the solution space having a basis of rational vectors.) In other words, let $A$ be an $m \times n$ matrix with entries from $\mathbb{Q}$. If some $\hat{x} = (\hat{x}_1, \dots, \hat{x}_n) \in \mathbb{R}^n$ is in the "solution space" $\ker{A}$, then there is another vector $x' = (x'_1, \dots, x'_n) \in \mathbb{Q}^n$ that is as "close" as we want to $\hat{x}$. How do we know that $\ker{A}$ will have a basis of rational vectors - couldn't we have $A \hat{x} = 0$ for $\hat{x} \in \mathbb{R}^n \setminus \mathbb{Q}^n$? I'd assume the metric $d$ we use to measure closeness is $d(\hat{x}, x') = ||\hat{x} - x'||_1 = \sum_{i=1}^n |\hat{x}_i - x'_i|$, or perhaps $d(\hat{x}, x') = \sqrt{\sum_{i=1}^n (\hat{x} - x')^2}$ using the dot product on $\mathbb{R}^n$. Since we can get $d(\hat{x}, x') < \epsilon$ for any $\epsilon > 0$, I wonder if it makes a difference? The italicized text is from: Archer, A. F. (2000). On the upper chromatic numbers of the reals. Discrete Mathematics, 214(1-3), 65-75 Related questions: System of linear equations having a real solution has also a rational solution., If Ax=v has a real solution then it must have a rational solution, When a system of rational linear equations have complex solutions does it have rational solutions?. AI: This fact is rather general from linear algebra. The general idea is that homogeneous linear systems are insensitive to scalar extensions – you can’t create a “really new” solution to them by extending the base field. 
There will be solutions in the extended field, sure, but they will be linear combinations of solutions from the smaller field. Let $A$ be a $m \times n$ matrix with entries in a field $F$. You know from linear algebra that $\dim{\ker{A}}+\mathrm{rk}\,A=n$ (all of this over $F$). We want to show that $\dim{\ker{A}}$ (which a priori would depend on $F$) does not change if we replace $F$ with a larger field $K$. Because of the equation above, it is enough to do so for the rank of $A$. But the rank of $A$ is the unique integer $r$ such that $A=PDQ$ with $D$ having exactly $r$ nonzero entries, all equal to one, on its main diagonal, and $P \in GL_m(F),Q \in GL_n(F)$. But then $P,Q,D$ are matrices with entries in $K$ with the same size as in $F$, and $P,Q$ are invertible in $F$ thus in $K$. So the rank of $A$ over $K$ is the same as the rank of $A$ over $F$. So, let $x_1,\ldots,x_p$ a basis in $F^n$ of the kernel over $F$ of $A$. Then $x_1,\ldots,x_p$ generate a subspace $V \subset K^n$ of the kernel of $A$. Now $V$ is the image in $K^n$ of the matrix $B$ (in $F$) with columns $x_1,\ldots,x_p$, so the dimension of $V$ is the rank of $B$ over $K$, thus the rank of $B$ over $F$, which is $p$, which is the dimension of the kernel of $A$ over $K$. So $V$ is the kernel of $A$ over $K$ and we are done.
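A small sympy illustration (not part of the original answer; the example matrix and helper are mine): the nullspace basis of a rational matrix is rational, and truncating the real coefficients of a real kernel vector to nearby rationals yields an exact rational solution arbitrarily close to it:

```python
from fractions import Fraction
from sympy import Matrix, Rational, sqrt, pi

A = Matrix([[2, -1, -1]])      # a rational (integer) coefficient matrix
b1, b2 = A.nullspace()         # basis of ker A; all entries rational

# an irrational real solution, expressed in the rational basis
c1, c2 = sqrt(2) + pi, sqrt(2) - pi
x_hat = c1 * b1 + c2 * b2      # A * x_hat = 0 exactly, entries irrational

def to_rational(v, max_den=10**6):
    # nearby rational with bounded denominator
    fr = Fraction(float(v)).limit_denominator(max_den)
    return Rational(fr.numerator, fr.denominator)

# replacing the coefficients by nearby rationals keeps us exactly in ker A
x_prime = to_rational(c1) * b1 + to_rational(c2) * b2
err = max(abs(float(x_hat[i] - x_prime[i])) for i in range(3))
```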
H: grid "proof" for commutativity of multiplication In this write-up by Tim Gowers on why multiplication is commutative, https://www.dpmms.cam.ac.uk/~wtg10/commutative.html he gives a physical grid model to which multiplication corresponds and says - "This argument, compelling as it is, doesn't quite qualify as a mathematical proof" Why, precisely, doesn't this qualify as a mathematical proof ? AI: From a semi-formal point of view, it doesn't qualify as a mathematical proof because: it is unclear whether it is speaking of the single identity $395\times 428=428\times 395$ or whether the author claims of having given a general argument; it hasn't been shown how the properties of being a rectangular grid and rotating by ninety degrees work together towards the result of preserving the cardinality of a (the?) rectangular grid. It should be pointed out that modern mathematical formalism generally demands that, with a few notable classes of exceptions, a proof should be founded in the language of some set theory, the most common being ZFC, and that the argument of interest does not immediately qualify as such.
H: Is the finite intersection of prime ideals radical? Does there exist a ring $R$ and finitely many prime ideals $P_i$ such that $\cap_{i = 1}^n P_i$ is not radical ideal? In other words, is the finite intersection of prime ideals a radical ideal? AI: Consider a commutative unital ring $R$ and some prime ideals $P_1, \dots, P_n.$ Claim. We have that $I = P_1 \cap \cdots \cap P_n$ is radical, i.e., $\sqrt I = I.$ Proof. Considering that radicals distribute over intersections, we have that $$\sqrt I = \sqrt{P_1 \cap \cdots \cap P_n} = \sqrt{P_1} \cap \cdots \cap \sqrt{P_n}.$$ But prime ideals are radical, so $\sqrt{P_i} = P_i$ implies that $\sqrt I = I.$ QED.
H: Sonin's identity The following identity attributed to N.Y. Sonin states the following: Suppose $f\in C^2[a,b]$. Let $\rho(x)=\frac12-\{x\}$, where $\{x\}$ is the fractional part of $x$, and $\sigma(x)=\int^x_0\rho(t)\,dt$. Then $$ \sum_{a< n\leq b}f(n)=\int^b_a f(t)\,dt +\rho(b-)f(b)-\rho(a)f(a)-\big(\sigma(b)f'(b)-\sigma(a)f'(a)\big) +\int^b_a\sigma(t)\,f''(t)\,dt $$ where summation runs over all integers between $a$ and $b$. This looks like integration by parts and Abel summation kind of thing. I tried to apply Riemann-Stieltjes formula directly but this did not quite work. Hints (or a sketch of proof) would be appreciated. AI: Integrate by parts twice, the second time in the Riemann–Stieltjes sense since $\rho$ jumps at the integers: \begin{align} \int_a^b \sigma(t) f''(t) \, dt &= [\sigma(t) f'(t)]_a^b - \int_a^b \rho(t) f'(t) \, dt \\&= [\sigma(t) f'(t)]_a^b - [\rho(t) f(t)]_{a+}^{b-} + \int_{(a,b)} f(t) \, d\rho(t) . \end{align} Also if $a < x < b$, then $$ \lfloor x\rfloor = \lfloor a \rfloor + \sum_{a<n<b} H(x-n) $$ where $$ H(x) = I_{x \ge 0} .$$ So $$ \rho(x) = \tfrac12 - x + \lfloor x\rfloor = \tfrac12 + \lfloor a \rfloor - x + \sum_{a < n < b} H(x-n). $$ Finally, if $a<n<b$, then $$ \int_a^b f(t) \, dH(t-n) = f(n) .$$
H: Irreducible polynomial for infinitely many values I want to prove that there are infinitely many values of k such that the polynomial $x^{9}+12x^{5}-21x+k$ is irreducible. I sense that I have to use Eisenstein and the number 3 but I don't see exactly how. Any help would be appreciated. AI: General idea: What does Eisenstein with the prime $3$ say if $k=3$? What about $k=6$? What about $k=9$? Can you now find infinitely many $k$ that work?
H: Trouble understanding QCQP Using a graphical method, indicate the feasible region and solve the minimization problem. $$\begin{array}{ll} \text{minimize} & f := x_1^2 + x_2 + 4\\ \text{subject to} & c_1 := -x_1^2-(x_2+4)^2 +16 \ge 0\\ & c_2 := x_1 - x_2 - 6 \ge 0\end{array}$$ I draw the problem as such: And I do understand that to be able to relate the constraints to the subject function, the function needs to be held constant. I do have a hard time understanding where the minimizer actually is. By holding the function constant results in level curves consisting of parabolas, is the minimizer at the top of those parabolas, or have i misunderstood something? AI: First, a small mistake in your figure, the line for the second constraint does not pass through $(0,-4)$. The function that you want to minimize is $f(x)=x_1^2+x_2+4$. Let's say the value for this is some $v$, not necessarily the minimum. We want to plot contours where $f(x)=v$. $$x_2=-x_1^2-4+v$$ These are upside down parabolas, shifted along $x_2$ axis, with the vertex on the $x_1=0$. The the figure below. The legend shows the $v$ value. It is easy to see that the minimum of $f(x)$ given the constraints is $-4$ at $x=(0,-8)$.
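A brute-force grid search over the feasible region (a verification sketch, not part of the original answer) supports the stated minimizer $(0,-8)$ with value $-4$:

```python
best_val, best_pt = float("inf"), None
steps = 400
for i in range(steps + 1):
    for j in range(steps + 1):
        x1 = -5 + 10 * i / steps          # search box [-5, 5] x [-9, 1]
        x2 = -9 + 10 * j / steps
        c1 = -x1**2 - (x2 + 4)**2 + 16    # disk constraint, must be >= 0
        c2 = x1 - x2 - 6                  # half-plane constraint, must be >= 0
        if c1 >= 0 and c2 >= 0:
            v = x1**2 + x2 + 4            # objective f
            if v < best_val:
                best_val, best_pt = v, (x1, x2)
```

On the disk $x_1^2+(x_2+4)^2\le 16$ we have $x_2\ge -8$, so $f = x_1^2+x_2+4 \ge -4$, with equality exactly at $(0,-8)$, which is also feasible for the second constraint.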
H: If $a^2 + b^2 + c^2 = 1$, what is the the minimum value of $\frac {ab}{c} + \frac {bc}{a} + \frac {ca}{b}$? Suppose that $a^2 + b^2 + c^2 = 1$ for real positive numbers $a$, $b$, $c$. Find the minimum possible value of $\frac {ab}{c} + \frac {bc}{a} + \frac {ca}{b}$. So far I've got a minimum of $\sqrt {3}$. Can anyone confirm this? However, I've been having trouble actually proofing that this is the lower bound. Typically, I've solved problems where I need to prove an inequality as true, but this problem is a bit different asking for the minimum of an inequality instead, and I'm not sure how to show that $\sqrt {3}$ is the lower bound of it. Any ideas? AI: Trivially, we have $(x-y)^2 + (y-z)^2 + (z-x)^2 \geq 0$. Expanding, dividing by $2$, and adding $2(xy+yz+xz)$ to both sides, we get $$(x+y+z)^2 \geq 3(xy+yz+xz)$$ Thus by plugging in $x = \frac{ab}{c}$, $y = \frac{bc}{a}$, $z = \frac{ca}{b}$ (so that $xy=b^2$, $yz=c^2$, $zx=a^2$), we get $$\left(\frac{ab}{c} + \frac{bc}{a} + \frac{ca}{b}\right)^2 \geq 3(b^2 + c^2 + a^2) = 3$$ and thus $\frac{ab}{c} + \frac{bc}{a} + \frac{ca}{b} \geq\sqrt{3}$. We attain equality by setting $a=b=c=\frac{\sqrt{3}}{3}$.
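A random spot-check (not part of the original answer) that the expression stays above $\sqrt3$ on the constraint sphere, with equality at $a=b=c=1/\sqrt3$:

```python
import math
import random

random.seed(0)

def expr(a, b, c):
    return a*b/c + b*c/a + c*a/b

target = math.sqrt(3)
vals = []
for _ in range(10000):
    a, b, c = (random.uniform(0.01, 1) for _ in range(3))
    norm = math.sqrt(a*a + b*b + c*c)
    a, b, c = a/norm, b/norm, c/norm   # enforce a^2 + b^2 + c^2 = 1
    vals.append(expr(a, b, c))

min_val = min(vals)
equality_case = expr(*([1/math.sqrt(3)]*3))
```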
H: Does this lead to a contradiction within ZF? Let the following be an axiom (which I will denote P): If $x,y$ are sets and $f:y\to x$ is a surjection, then the existence of an injection $f:x\to y$ guarantees a choice function exists. Does P lead to contradiction within ZF in any obvious Russell's paradox like sense? AI: If I understand you correctly, you want to say that if there is a surjection from $y$ onto $x$, then if there is any injection, there is one splitting the surjection. This cannot lead to contradiction, since it follows from the Axiom of Choice. Another question would be whether or not this implies the Axiom of Choice. This is a bit more complicated. Let "The Partition Principle" denote the statement "if there is a surjection from $y$ to $x$, then there is an injection". And what we're asking is: Does the Partition Principle implies AC provable from $\sf ZF$? And the answer is that we don't know.
H: Show that $u(x)=0$ for all $x\in\Omega$ Suppose $\Omega\subset R^n$ is a bounded open domain and $u(x)$ is a smooth function that satisfies $$\left\{\begin{matrix} \Delta u+x_{1}u^{2}u_{x_1}=0 \text{ for all } x\in\Omega\\ u(x)=0 \text{ for all } x\in\partial\Omega \end{matrix}\right.$$ Show that $u(x)=0$ for all $x\in\Omega$ My attempt: By multiplying the first line with $u$ we have: $I= \int\limits_{\Omega}u\Delta u+x_1u^3u_{x_1}dx=0$ And from Green's identity we have: $\int\limits_{\Omega}u\Delta udx=-\int\limits_{\Omega}|Du|^2dx+\int\limits_{\partial\Omega}u\frac{\partial u}{\partial\nu}ds=-\int\limits_{\Omega}|Du|^2dx$ (since $u=0$ on $\partial\Omega$) Also: $\int\limits_{\Omega}x_1u^3u_{x_1}dx=\frac{1}{4}\int\limits_{\Omega}\frac{\partial}{\partial x_1}(x_1u^4)dx-\frac{1}{4}\int\limits_{\Omega}u^4dx$ Then by substituting in $I$, $I=-\int\limits_{\Omega}|Du|^2dx-\frac{1}{4}\int\limits_{\Omega}u^4dx+\frac{1}{4}\int\limits_{\Omega}\frac{\partial}{\partial x_1}(x_1u^4)dx=0$ So if we define, $$E(x_1)=\int\limits_{\Omega}x_1u^4dx$$ Then $\frac{d}{dx_1}E(x_1)=\int\limits_{\Omega}|Du|^2dx+\frac{1}{4}\int\limits_{\Omega}u^4dx\geq0$ and I couldn't proceed afterwards. Maybe there is a different way to do this problem. Appreciate your help AI: Using integration by parts in the second integral you should gain a vanishing boundary integral: in fact, you get: $$\begin{split} \int_\Omega x_1 u^3 u_{x_1}\ \text{d} x &= \int_\Omega x_1 \ \frac{\partial}{\partial x_1} \left[ \frac{1}{4}\ u^4\right]\ \text{d} x \\ &= \underbrace{\frac{1}{4}\int_{\partial \Omega} x_1\ u^4\ \nu_1\ \text{d} \sigma}_{=0} - \int_\Omega \frac{\partial}{\partial x_1}[x_1]\ \frac{1}{4} u^4\ \text{d} x \\ &= - \frac{1}{4}\int_\Omega u^4\ \text{d} x \end{split}$$ ($\nu_1$ in the vanishing integral is the first coordinate of the exterior normal unit vector $\nu$ to the boundary $\partial \Omega$). 
Therefore you have: $$-\int_\Omega \left[ |\operatorname{D} u|^2 + \frac{1}{4} u^4\right]\ \text{d} x = 0$$ entailing $u=0$ a.e. in $\Omega$, hence everywhere in $\Omega$, since $u$ is smooth.
H: Showing $f_j(0)$ converges to $f(0)$ where $f_j$ and $f$ are rational functions. Let $A=\{z \in \mathbb{C}: \frac{1}{2} < |z| < 1 \}$. Let $\{f_j\}_{j=1}^\infty$ be a sequence of rational functions and let $f$ be a rational function. Suppose none of these functions have poles on $A \cup \{0 \}$, $f_j$ converges uniformly to $f$ on $A$, and each $f_j$ is nonzero on the closed unit disc. I am trying to show that $f_j(0) \rightarrow f(0)$. I am having trouble with this because it is possible that even though each $f_j$ is nonzero, the numerator has a zero not in the closed unit disc. Otherwise, I would be able to assume the numerator was a constant and perhaps proceed from there. I would appreciate any help with how to start this proof. AI: We need the extra condition that $f$ is not identically zero since otherwise the result doesn't hold as $f_n(z)=\frac{1}{(4z)^n+1}$ satisfies $f_n(0)=1, f_n(z) \to 0$ uniformly on A, $f_n$ rational with no poles on $A$ or at $0$ etc Local uniform convergence on $A$ and the rest of the hypothesis is enough to conclude that $f_j \to f$ locally uniformly on the open disc; in particular, since $f_j, f$ have no poles at zero, it follows $f_j(0) \to f(0)$ (where local uniform convergence is meant in the sense of meromorphic functions, so if $w$ is not a pole of $f$ then there is a neighborhood $W$ and an $n(W)$ for which $f_n$ has no pole in $W$ for $n \ge n(W)$ and $f_n \to f$ uniformly on $W$ while if $w$ is a pole of $f$, poles of $f_n$ accumulate there with the corresponding order and $1/f_n \to 1/f$ in a neighborhood of $w$ - so for example if $w$ is a pole of order $2$, then for every small enough neighborhood, there is $n(W)$ for which $f_n$ has no zeroes and either two simple poles or a double pole there for $n \ge n(W)$ and $1/f_n \to 1/f$ uniformly in $W$) Proof: $f$ not identically zero as noted so by Hurwitz $f$ has no zeroes in $A$ and $1/f_n \to 1/f$ locally uniformly in $A$; but if $g_n =1/f_n, g=1/f$, the hypothesis on 
the zeroes of $f_n$ implies that $g_n$ is holomorphic in the open unit disc, $g_n \to g$ locally uniformly in $A$; in particular $g_n$ is locally uniformly bounded in $A$ so by maximum modulus ($\sup_{|z| \le 1-\epsilon}|g_n(z)|=\sup_{|z| = 1-\epsilon}|g_n(z)|$) it follows that $g_n$ is locally uniformly bounded in the open unit disc so it is a normal family; but now every subsequence $s$ of $g_n$ on the open unit disc has a subsequence converging to a holomorphic function $h_s$ on the open unit disc that is $g$ on $A$, hence by the identity principle all the $h_s$ coincide with a single holomorphic function $h$; thus $g_n \to h$ and $h=g$ on $A$, while $h$ is holomorphic on the open unit disc; since $g$ is meromorphic on the open unit disc (rational), it follows that $g=h$ and $1/f$ is holomorphic in the open unit disc too, while $1/f_n \to 1/f$ locally uniformly there and we are done (since that clearly implies $f_n \to f$ in the sense described above)
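The counterexample $f_n(z) = \frac{1}{(4z)^n+1}$ from the start of the answer can be probed numerically (a quick sketch): on the annulus $A$ we have $|4z| > 2$, so $f_n \to 0$ there, while $f_n(0) = 1$ for every $n$.

```python
# f_n(z) = 1/((4z)^n + 1): small on A = {1/2 < |z| < 1}, but f_n(0) = 1
def f(z, n):
    return 1 / ((4 * z) ** n + 1)

n = 60
at_zero = f(0j, n)                 # equals 1 for every n
on_annulus = abs(f(0.75 + 0j, n))  # here |4z| = 3, so this is about 3**-60
```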
H: CW complexes are T$_1$ Given the constructive definition of CW-complexes (i.e. the one Hatcher gives in his Algebraic Topology book) how would one prove that every singleton is closed. He states on page 522 that every point pulls back to closed subsets of the closed discs $D_\alpha^n$ under every characteristic map $\Phi_\alpha$. But I do not see how this is immediate. AI: Fix a point $x$ in your CW-complex $X$ and a cell $e^n_\alpha$ with attaching map $\varphi_\alpha:\partial D^n\to X^{n-1}$ and characteristic map $\Phi_\alpha:D^n\to X$. By induction on $n$, you can assume the $(n-1)$-skeleton $X^{n-1}$ is $T_1$. If $x\not\in X^{n-1}$, then $\Phi_\alpha^{-1}(\{x\})$ has at most one point and thus is closed (since $\Phi_\alpha$ is injective on the interior of $D^n$ and that is the only part of $D^n$ that it maps outside of $X^{n-1}$). If $x\in X^{n-1}$, then $\{x\}$ is closed in $X^{n-1}$ by the induction hypothesis, so $\Phi_\alpha^{-1}(\{x\})=\varphi_\alpha^{-1}(\{x\})$ is closed in $\partial D^n$ and hence also in $D^n$.
H: Total derivative of vector function Let's assume that I have a (vector) function $f(s,t,u,v):\mathbb{R}^4 \rightarrow \mathbb{R}^n.$ I would like to calculate: $\frac{d}{dt}\biggr|_{t=t_0} f(t,t,t,t).$ Intuitively: This should just be given by: $$\frac{d}{dt}\biggr|_{t=t_0} f(t,t,t,t)=\frac{\partial}{\partial s}\biggr|_{(s,t,u,v)=(t_0,t_0,t_0,t_0)}f(s,t,u,v)+\frac{\partial}{\partial t}\biggr|_{(s,t,u,v)=(t_0,t_0,t_0,t_0)}f(s,t,u,v)+\frac{\partial}{\partial u}\biggr|_{(s,t,u,v)=(t_0,t_0,t_0,t_0)}f(s,t,u,v)+\frac{\partial}{\partial v}\biggr|_{(s,t,u,v)=(t_0,t_0,t_0,t_0)}f(s,t,u,v).$$ Am I right? Can someone provide some explanation. I know this is probably a stupid question, but I'm not a mathematician and have trouble finding a theorem or something regarding this; I couldn't find something useful here on math.SE either. It seems like this is just the total derivative of the function, but in this case Wikipedia just provides the formula for a scalar function. AI: Generally, under appropriate conditions, when $f$ has $n$ variables $z=f(y_1, \cdots, y_n)$ and we take its composition with functions $y_i=\psi_i(x_1, \cdots, x_m)$, the well-known chain rule is: $$\frac{\partial f \circ \psi}{\partial x_k}= \frac{\partial f}{\partial y_1}\frac{\partial \psi_1}{\partial x_k}+\frac{\partial f}{\partial y_2}\frac{\partial \psi_2}{\partial x_k}+ \cdots+\frac{\partial f}{\partial y_n}\frac{\partial \psi_n}{\partial x_k}$$ In your case $m=1, n=4$ and $\forall k=\overline{1,4}$ we have $\psi_k(t)=t$, which gives $\psi_k^{'}=1$. So your formula is right and can be written as: $$\frac{df}{dt}= \sum_{i=1}^{4}f_{y_i}^{'}=f_{y_1}^{'}+f_{y_2}^{'}+f_{y_3}^{'}+f_{y_4}^{'}$$
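A numerical sketch of this chain rule, with a hypothetical smooth scalar $f$ (each component of a vector-valued $f$ behaves the same way):

```python
# Compare d/dt f(t,t,t,t) with the sum of the four partial derivatives.
def f(s, t, u, v):
    return s * t + u ** 2 * v + s ** 3   # a sample smooth function

t0, h = 0.7, 1e-6

# Total derivative via a central difference of g(t) = f(t,t,t,t)
total = (f(t0 + h, t0 + h, t0 + h, t0 + h)
         - f(t0 - h, t0 - h, t0 - h, t0 - h)) / (2 * h)

# Each partial derivative at (t0, t0, t0, t0), also by central differences
def partial(i):
    p, q = [t0] * 4, [t0] * 4
    p[i] += h
    q[i] -= h
    return (f(*p) - f(*q)) / (2 * h)

partial_sum = sum(partial(i) for i in range(4))
# Analytically g(t) = t^2 + 2t^3, so g'(0.7) = 1.4 + 2.94 = 4.34
```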
H: Given 6 distinct points in $3$-$D$ space, can the distances between $3$ of the points be determined if all other distances between points are known? In the figure below of points in $3$-$D$ space, suppose the lengths of the blue segments are all known. Is it possible to determine the lengths of the red segments? Each point is connected to every other point by a line segment. The triangle $XYZ$ is red (lengths unknown), but all other lengths are known (blue). [Figure: $6$ points in $3$-$D$ space. The points are labeled $A$, $B$, $C$, $X$, $Y$, and $Z$. Each point is connected to each other point by a line segment. The segments of the triangle $XYZ$ are colored red. The other twelve line segments are colored blue.] My geometry is rusty (especially in $3$-$D$), so I am not sure where to begin this problem. I would think that since $12$ of the segments are known and only $3$ are unknown, it may be possible to set up a system of equations to solve for the unknown lengths. But I am stumped on how to even create the equations. I found some equations about tetrahedra, but there is not an obvious (to me) way to combine them to create a solvable system. The two most similar questions I found by searching are this one and this one. The first has a similar premise, but in that problem, the lengths are only known between two desired points and many other arbitrary points. The distances between the arbitrary points in that question are unknown, but they are known in my question. The second has more information about the points in a coordinate system, but in my problem no coordinates are known, just the distances between points. If this problem can not be solved in the general case, could we add some assumptions to make it solvable? E.g. that triangles $ABC$ and $XYZ$ do not intersect, no three points are co-linear, triangle $ABC$ is/is not co-planar/parallel to triangle $XYZ$, etc. AI: This is not possible. 
Counterexample: Suppose the distances between A, B and C are such that we can somehow place them on the unit circle in the xy-plane. Further suppose $|AX|=|BX|=|CX|=\sqrt{d_X^2+1}$ and that similar conditions for Y and Z hold. Then X can either be at $(0,0,d_X)$ or at $(0,0,-d_X)$ and similarly for Y and Z. Depending on the choice of the sign of the z-coordinate for each of the points, the distances $|XY|, |YZ|, |ZX|$ will be different. The general problem at play is the following: First of all, it is not an issue that you don't know any coordinates of A, B and C. As you are only interested in the distances between X, Y and Z, you can WLOG place A at the origin and B, C in the xy-plane. The triangle ABC is uniquely determined by its sidelengths, any possible rotations around the z-axis don't matter for the desired lengths. Everything you know about X, for example, is that it lies on a sphere with radius $|XA|$ centered at A, on a sphere with radius $|XB|$ centered at B and on a sphere with radius $|XC|$ centered at C. However, in general, three spheres intersect in two points, either of which could be X. Similarly for Y and Z. Depending on which of the two possible locations for each of X, Y and Z you choose, the triangle XYZ will have different sidelengths. EDIT: Given the additional assumption that it is known on which side of the plane ABC the points X, Y, Z lie, respectively, they are determined uniquely (see comment). The OP requested an example for $|AB|=1149, |BC|=1730, |CA|=1016, |AX|=1054, |AY|=1872, |AZ|=1914$, $|BX|=1818, |BY|=2445, |BZ|=2163, |CX|=102, |CY|=856, |CZ|=1020$. Assume further that X, Y and Z lie all in the same half space determined by the plane ABC. We can proceed as follows: Set $A=(0,0,0)$ and $B=(1149, 0, 0)$. C can be chosen as the intersection of the circles with radii 1016 and 1730 around A and B, respectively, in the xy-plane. 
I.e., in order to find a suitable C, we solve $$x^2+y^2=1016^2, \hspace{0.5cm} (x-1149)^2+y^2=1730^2$$ WLOG we choose the solution with $y>0$ and obtain $x=-\frac{213481}{766}$ and $y=\frac{15\sqrt{2489370063}}{766}$. We can obtain X by finding the intersection of the spheres (a) around A with radius 1054, (b) around B with radius 1818, (c) around C with radius 102, that has positive z-coordinate. That is, we need to solve $$x^2+y^2+z^2=1054^2, \hspace{0.5cm} (x-1149)^2+y^2+z^2=1818^2, \hspace{0.5cm} \left(x+\frac{213481}{766}\right)^2+\left(y-\frac{15\sqrt{2489370063}}{766}\right)^2+z^2=102^2$$ yielding $x=-\frac{874007}{2298}, y=\frac{338107548509}{6894\sqrt{2489370063}}, z=\frac{2}{9}\sqrt{\frac{1938961458551}{2489370063}}$. Solving similar systems, one may also find the coordinates of Y and Z (always choosing the solution with positive z-coordinate as X, Y and Z are supposed to be located in the same half space with respect to the plane ABC), from which one can then easily deduce the lengths $|XY|, |YZ|$ and $|ZX|$.
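As a sanity check on the numbers above (a numerical sketch, not part of the derivation): $C$ can be verified exactly with rational arithmetic, and the claimed coordinates of $X$ checked against the three sphere equations in floating point.

```python
from fractions import Fraction as Fr
import math

# Exact check of C = (-213481/766, 15*sqrt(2489370063)/766, 0)
cx = Fr(-213481, 766)
cy2 = Fr(15 ** 2 * 2489370063, 766 ** 2)  # exact value of Cy^2
ca2 = cx ** 2 + cy2                        # should equal 1016^2
cb2 = (cx - 1149) ** 2 + cy2               # should equal 1730^2

# Floating-point check of X against the three sphere equations
x = -874007 / 2298
y = 338107548509 / (6894 * math.sqrt(2489370063))
z = (2 / 9) * math.sqrt(1938961458551 / 2489370063)
cy = 15 * math.sqrt(2489370063) / 766
ax = math.sqrt(x ** 2 + y ** 2 + z ** 2)               # should be 1054
bx = math.sqrt((x - 1149) ** 2 + y ** 2 + z ** 2)      # should be 1818
cxd = math.sqrt((x - float(cx)) ** 2 + (y - cy) ** 2 + z ** 2)  # should be 102
```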
H: Finding gradient of line using graph equation and given point Consider a parabola y = x$^2$. The line that goes through the point (0, $\frac{3}{2}$) and is orthogonal to a tangent line to the part of the parabola with x > 0 is y = Ax + $\frac{3}{2}$. Find the value of A The answer is supposedly $\frac{-1}{2}$ but I'm not sure how they got to that answer. AI: Suppose that $(x_0,x_0^2)$ is the point on the parabola $y=x^2$ where the tangent line and its perpendicular line through $(0,\frac{3}{2})$ intersect. Note that $y'=2x$, which means the slope of the tangent line at $(x_0,x_0^2)$ is $2x_0$, hence the slope of its perpendicular line is $-\frac{1}{2x_0}$. Thus, we have $A = -\frac{1}{2x_0}$. Because the perpendicular line passes through $(0,\frac{3}{2})$, we have \begin{align*} y = Ax+\frac{3}{2} \implies x_0^2 &= -\frac{1}{2x_0}x_0+\frac{3}{2},\\ &= -\frac{1}{2}+\frac{3}{2},\\ &= 1. \end{align*} Given that $x>0$, we find $x_0 = 1$, which means $(x_0,x_0^2) = (1,1)$. And so, we find that $\boxed{A = -\frac{1}{2}}$.
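A quick check of the numbers in this answer (a sketch): the tangency point is $(1,1)$, the tangent slope is $2$, and the line $y=-\frac{x}{2}+\frac{3}{2}$ passes through that point and is perpendicular to the tangent.

```python
A = -0.5                        # claimed slope of the perpendicular line
x0 = 1.0                        # x-coordinate of the tangency point
tangent_slope = 2 * x0          # derivative of x^2 at x0
on_parabola = x0 ** 2           # y-coordinate on the parabola
on_line = A * x0 + 1.5          # y-coordinate on the line y = Ax + 3/2
slopes_product = tangent_slope * A   # perpendicular lines: product is -1
```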
H: Proving that $X^n-a$ is irreducible if $a$ is no $p$-th power for any prime $p$ dividing the degree Let $F$ be a field containing a primitive $n$-th root of unity (for $n \geq 2$) and let $E = F(\alpha)$ where $\alpha \in E$ is an element whose $n$-th power (but no smaller power) is in $F$. Let $\alpha^n := a$. Question: Assuming $p$ is an arbitrary prime dividing $n$ and $a$ is not a $p$-th power, why is the polynomial $X^n - a$ irreducible in $F[X]$? What I did by myself? I know that the roots of $X^n - a$ must be $\zeta^i \alpha$ for $i=0,\dots,n-1$ and $\zeta \in F$ being the primitive $n$-th root of unity from the assumption. I also know that $E/F$ is a cyclic Galois of order $n$. A generator is $\sigma: \alpha \mapsto \zeta \alpha$. Is there a way to use one of these facts to prove the statement? If not, what can I do differently? Thanks in advance! AI: I don't see why you need the assumption of $a$ not being a $p$-th power. Maybe I'm missing something, here's a (hopefully correct) answer: Suppose that $X^n-a$ is reducible, so that it has a proper divisor $g$. Its roots are some of the $\zeta^i\alpha$, and their product is, up to sign, $g$'s constant term. There exists then $J \subsetneq \{0, \ldots, n-1\}$ such that $\prod_{j \in J}(\zeta^j \alpha) \in F$. Since $\zeta \in F$, we get $\alpha^{|J|} \in F$. But $|J| < n$, which is absurd. Edit: if you already know that $E/F$ has degree $n$, then no calculation is needed. Since $\alpha$ is a root of $X^n-a$, we have that $m(\alpha,F) \mid X^n-a$. Equality follows from the fact that $$\deg m(\alpha,F) = [E:F] = n$$ and so $X^n-a$ is irreducible (since it is a minimal polynomial).
H: Finding expectation of a random variable Let $X$ be a random variable whose distribution function is given by $$ \mathrm{F}\left(x\right) = \left\{\begin{array}{lcl} {\displaystyle 0} & \mbox{if} & {\displaystyle x < 2} \\[1mm] {\displaystyle{1 \over 3}\,x} & \mbox{if} & {\displaystyle 2 \leq x \leq 3} \\[1mm] {\displaystyle 1} & \mbox{if} & {\displaystyle x > 3} \end{array}\right. $$ Then find $E\left(X\right)$ and $E\left(X^{2}\right)$. Given random variable is neither discrete nor continuous. Then by Jordan decomposition theorem first we have to write $X$ as a sum of a step function and a continuous function. But how should I do that ?. AI: Note that $F$ has a jump of size $\frac{2}{3}$ at $x=2$ (so $P(X=2)=\frac{2}{3}$) and density $\frac{1}{3}$ on $(2,3)$. You can use the Stieltjes integral formula: $E(X^k)=\int_{\mathbb{R}} x^kdF(x)=\frac{2}{3}2^k+\int_2^3 \frac{x^k}{3}dx$
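Reading off from $F$ a point mass of $\frac{2}{3}$ at $x=2$ plus density $\frac{1}{3}$ on $(2,3)$, the moments given by this formula can be computed exactly (a verification sketch in rational arithmetic):

```python
from fractions import Fraction as Fr

# E(X^k) = (2/3) * 2^k  +  integral from 2 to 3 of x^k / 3 dx
def moment(k):
    atom = Fr(2, 3) * Fr(2) ** k
    integral = Fr(1, 3) * (Fr(3) ** (k + 1) - Fr(2) ** (k + 1)) / (k + 1)
    return atom + integral

total_mass = moment(0)  # should be 1: the two parts form a probability law
EX = moment(1)          # 4/3 + 5/6  = 13/6
EX2 = moment(2)         # 8/3 + 19/9 = 43/9
```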
H: How to build a strong foundation for university mathematics? My objective: To study pure mathematics at university level next year. I find Abstract Algebra, Number Theory and Foundations of Maths(Set Theory, FOL, etc.) to be intriguing yet mostly inaccessible with my current mathematical maturity, but I would love to learn more about them in the time to come. I want to make sure when that time comes, I can digest most of the material without getting bogged down. Ultimately, I would like to make a meaningful contribution to the field of pure mathematics :) My background: I recently finished high school, covering single variable calculus(Calculus 1), some matrices-determinants(mostly computational problems) and some elementary notions of sets, relations, functions, combinatorics, basics of vector algebra and discrete probability. I happen to have a year of time at hand, and I can devote multiple hours of study consistently. However, I've tried learning a variety of topics for some time now, but disorganised learning makes me lose track of my progress. I want to utilise the time to come gainfully, so as to get a better insight into what mathematics is about, while also building a strong foundation. I tried organising a broad outline of how I could possibly study during this time, divided into 3 tracks: Track 1-Continuation of school mathematics: Continuing with Calculus 1, I could start diving into Calculus 2 and 3; similarly extend my knowledge of matrices-determinants to basic linear algebra. While I do this, I could lay some more emphasis on proving and understanding results as opposed to just performing mechanical computations. Track 2-Studying for high-school maths olympiads: This doesn't imply that I would be enrolling for any olympiad; rather, I'd be covering maths that is usually not taught at school but constitutes questions in maths olympiads geared towards high school students. 
I'll try to cover topics such as elementary number theory, Euclidean geometry, functional equations, inequalities, theory of equations, combinatorics and probability, etc. Track 3-Start diving into undergraduate mathematics: Due to the currently prevailing circumstances, there has been a flurry of online learning resources for all levels of learners. MOOCs on higher mathematics are no exception. Thus, I could start studying some basic real analysis, introductory linear and abstract algebra, set theory and logic. I like to prove things, but I cannot figure out how to hone this skill. I have ample learning resources available with me (a plethora of maths textbooks such as Analysis 1 and 2 by T. Tao, Contemporary Abstract Algebra by Gallian, Ordinary Differential Equations by M. Tenenbaum to name a few; I wouldn't shy away from buying more reasonably priced textbooks which are necessary to further my objective). However, here's what I'm confused about: 1) Which of the aforementioned tracks is best suited to achieve my objective? I most certainly don't expect to become a jack of all trades within a year, but I want a firm footing in later years of my maths education. Detailed suggestions about any other track are also welcome. 2) In the absence of an instructor, how do I evaluate and monitor my progress in a time-bound manner? Of course, nothing can substitute for actually studying maths at university, but what's the least I could do to evaluate my work? I want to be sure that I don't get bogged down in the middle, not sure where I'm going with my studies. My preference in these tracks goes as $3>2>1$. I'm enthralled at the prospect of learning higher mathematics (I did some basic group theory while in high school), but eventually gave up because even though I used to frame a single proof in 40-50 minutes of time, I wasn't sure if it was correct at all in the end. 
Further, topics at higher levels are interconnected and hence require some background/prerequisites (part of the reason I stopped studying group theory was my lacking background in modular arithmetic) along with mathematical maturity, which sometimes become a barrier to learning. Nevertheless, I welcome any and every suggestion which comes from a community of maths students, teachers and professionals. Sidenote: I have already checked out several questions asked on related themes on MSE and elsewhere, but could not reasonably relate to any of them. AI: I would strongly recommend you to go for "Track 3". To say why: olympiads aren't the best choice for you right now, since you have already finished high school and building a good foundation for high-school olympiads takes at least a year on its own. And continuing with high-school exercises (rather than problems) isn't really a good choice either, if you are curious to learn more and to think more abstractly in mathematics. I would recommend you start with real analysis, like I did. It's been 4 months since I started real analysis from Tao's book, and with a lot of free time available I have completed Tao I, parts of Bartle and Sherbert's real analysis, done linear algebra, and am now working through metric spaces, all in self-study. Personally I did math most of the day, 5 days a week. So if you can, start as soon as possible; within the next year you can develop the mathematical maturity for university maths and, moreover, self-study the topics which attract you more.
H: What is the idea behind the derivation of the $z$-score formula? I can't fully wrap my head around the reasoning behind the $z$-score formula $z = (x - \mu)/\sigma.$ Can someone explain how this formula was derived or how it works? Edit: I understand why and how the formula and concept are used. I just don't get how that exact formula is able to create a mean of 0 and standard deviation of one. I haven't seen many tutorials or sites explain this thoroughly; they just give the formula and say that it produces a mean of zero and standard deviation of one. Yes, I see explanations saying that you subtract the mean from the random variable $X$ to make the mean zero, and that you divide that entire quantity by the standard deviation to make the standard deviation one. But can someone explain why it works? I don't get why it works. AI: This boils down to two facts for (real-valued) random variables: first, linearity of expectation: $$ E[aX+b] = aE[X]+b $$ and this property of variance: $$ \text{Var}[aX+b] = a^2\text{Var}[X] $$ Applying these formulas, if $X$ is a normal random variable with mean $\mu$ and variance $\sigma^2$, and $Z=(X-\mu)/\sigma$ then $$E[Z]=E\left[\frac{X-\mu}{\sigma}\right]=\frac{E[X]-\mu}{\sigma}=\frac{\mu-\mu}{\sigma}=0$$ and $$\text{Var}[Z]=\text{Var}\left[\frac{X-\mu}{\sigma}\right]=\frac{\text{Var}[X]}{\sigma^2}=\frac{\sigma^2}{\sigma^2}=1$$ Therefore $Z$ has mean $0$ and variance $1$. Also, we know that a linear transform of a normal random variable is still normal, therefore $Z$ has the standard normal distribution.
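Numerically, standardizing any finite data set this way produces mean 0 and (population) standard deviation 1; a sketch with a hypothetical sample:

```python
import math

data = [2, 4, 4, 4, 5, 5, 7, 9]
mu = sum(data) / len(data)                                        # 5.0
sigma = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))   # 2.0

# z-scores: subtract the mean, then divide by the standard deviation
z = [(x - mu) / sigma for x in data]
z_mean = sum(z) / len(z)
z_sigma = math.sqrt(sum((v - z_mean) ** 2 for v in z) / len(z))
```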
H: natural isomorphism by right exactness R is a local ring with maximal ideal $\mathfrak{m}$ and residue field k. M is a finitely generated R-module with a projective cover: $0 \to$ N $\to$ F $\to$ M $\to 0$. Tensor $0\to$ $\mathfrak{m}$ $\to$ R $\to $ k $\to 0$ by N; right exactness gives a natural isomophism $\tau_N$: N $\otimes_R$ k $\to$ N/$\mathfrak{m}$N I am not sure how to reach the last natural isomorphism by right exactness. I don't think the answer here Showing that if $R$ is a commutative ring and $M$ an $R$-module, then $M \otimes_R (R/\mathfrak m) \cong M / \mathfrak m M$. is correct as M might not be flat. Any help would be appreciated! AI: By tensoring your exact sequence by $N$ you get an exact sequence $$N\otimes_R\mathfrak{m}\to N\otimes_RR\to N\otimes(R/\mathfrak{m})\to0$$ by right-exactness. This means that $N\otimes(R/\mathfrak{m})$ is isomorphic to the cokernel of $\phi:N\otimes_R\mathfrak{m}\to N\otimes_RR$. The image of $\phi$ is generated by the elements of the form $n\otimes a$ with $n\in N$ and $a\in\mathfrak{m}$. Now $N\otimes_RR$ is isomorphic to $N$ via the map taking $n\otimes a$ to $an$. Under this isomorphism the image of $\phi$ corresponds to the submodule $N'$ of $N$ generated by the $an$ for $a\in\mathfrak{m}$ and $n\in N$, that is $N'=\mathfrak{m}N$. So the cokernel of $\phi$ is isomorphic to $N/N'$ which is isomorphic to $N/\mathfrak{m}N$. But this cokernel is also isomorphic to $N\otimes_R(R/\mathfrak{m})\cong N\otimes_Rk$.
H: Determinant as a polynomial I am trying to understand the notion of a determinant of an $n \times n$ matrix as a polynomial of degree $n$ in the entries of a matrix. If I wrote a matrix of the form $$\begin{bmatrix} a & b \\ c & d\end{bmatrix},$$ then its determinant is $ad - bc$, which is a function of four variables, but only of order $1$. Is there something that I'm missing? AI: The polynomial is of degree $1$ in any individual variable, but is of "total degree $n$" in that every term is the product of $n$ of the variables. An example for why this is important is that if you have an expression like $\det(A+xB)$, where $A$ and $B$ are fixed and $x$ varies, it is a polynomial of degree $n$ in the single variable $x$.
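The Leibniz (permutation-sum) formula makes the degree count explicit: each term is a signed product of exactly $n$ entries, one per row and one per column, so $\det$ has total degree $n$ but degree $1$ in each individual entry; in particular $\det(tA)=t^n\det(A)$. A small sketch:

```python
from itertools import permutations

def sign(p):
    # sign of a permutation via its inversion count
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def det(A):
    # Leibniz formula: sum over permutations of sign * product of n entries
    n = len(A)
    total = 0
    for p in permutations(range(n)):
        term = sign(p)
        for i in range(n):
            term *= A[i][p[i]]   # exactly n factors, one per row/column
        total += term
    return total

A = [[2, 5, 1], [0, 3, 4], [7, 1, 6]]
d = det(A)                                          # 147
t = 3
d_scaled = det([[t * a for a in row] for row in A])  # t^3 * 147
```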
H: Let $\frac{1}{2}<\cos2A<1$ and $6\tan A-6\tan^3A=\tan^4A+2\tan^2A+1$, find $\tan 2A$ Let $\dfrac{1}{2}<\cos2A<1$ and $6\tan A-6\tan^3A=\tan^4A+2\tan^2A+1$, find $\tan 2A$ My attempt: \begin{align*} 6\tan A(1-\tan^2A)&=\tan^4A+2\tan^2A+1\\ 12\tan^2A&=\tan2A\tan^4A+2\tan2A\tan^2A+\tan2A\\ 0&=\tan2A(\tan^4A)+(2\tan2A-12)\tan^2A+\tan2A\\ \because\tan2A&\in\mathbb{R}\\ \therefore \tan2A&\leqslant3 \end{align*} From $\dfrac{1}{2}<\cos2A<1$ gives $0\leqslant\tan2A<\sqrt{3}$ After using the two inequalities, I still can't find the exact value of $\tan2A$ AI: We obtain: $$6\tan{A}(1-\tan^2A)=(1-\tan^2A)^2+4\tan^2A$$ or $$\frac{6\tan{A}}{1-\tan^2A}=1+\frac{4\tan^2A}{(1-\tan^2A)^2}$$ or $$\tan^22A-3\tan2A+1=0,$$ which gives $$\tan2A=\frac{3+\sqrt5}{2}$$ or $$\tan2A=\frac{3-\sqrt5}{2}.$$ Also, $$\tan^22A=\frac{1}{\cos^22A}-1<4-1=3,$$ which gives $$\tan2A=\frac{3-\sqrt5}{2}.$$
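A quick numerical check of the value $\tan 2A=\frac{3-\sqrt5}{2}$ (a sketch: choosing $2A=\arctan u$, which also satisfies the constraint on $\cos 2A$):

```python
import math

u = (3 - math.sqrt(5)) / 2   # claimed value of tan 2A
two_A = math.atan(u)         # an angle 2A with tan 2A = u
A = two_A / 2
t = math.tan(A)

lhs = 6 * t - 6 * t ** 3         # left side of the original equation
rhs = t ** 4 + 2 * t ** 2 + 1    # right side of the original equation
c = math.cos(two_A)              # must lie in (1/2, 1)
```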
H: What did I do wrong on this inequality problem? The problem is: If $b < a$, what is the range for $x$ if $ax+a < bx+b$? Below is my work: $$ax+a < bx + b$$ $$(a+1)x < (b+1)x$$ Divide both sides by $x$ and you get: $$a+1 < b+1$$ Which can be simplified to $a<b$ if $x$ is positive. But $a<b$ contradicts the statement given by the problem ($b<a$). Therefore $x$ is negative. The range for $x$ is $x<0$ However, the answer is $x<-1$. What did I do wrong? Edit: Sorry everyone I made a typo on the original inequality. The wrong simplification is what I did wrong. AI: First, your initial algebra is in error: $ax+a$ equals $a(x+1)$, not $(a+1)x$, and likewise $bx+b$ equals $b(x+1)$. What you should have done is factor: $ax+a<bx+b$ is $a(x+1)<b(x+1)$, i.e. $(a-b)(x+1)<0$. We know that $a>b$, so $a-b>0$, and the product $(a-b)(x+1)$ is negative if and only if $x+1$ is negative. Thus, the solution is $x<-1$, matching the given answer. But there is a second major error after the original algebraic one. Suppose for a moment that you really were dealing with the inequality $(a+1)x<(b+1)x$. When you multiply or divide both sides of an inequality by a negative number, the inequality reverses its direction. For instance, $2<3$, but $(-1)\cdot 2=-2>-3=(-1)\cdot3$. And here you actually have three cases: If $x>0$, then $a+1>b+1$, and you do indeed get a contradiction, so $x\not>0$. If $x=0$, you can’t divide by it at all, but in that case $(a+1)x=0=(b+1)x$, so $a=b$, and again you get a contradiction. Thus, $x\ne 0$. If $x<0$, dividing by $x$ reverses the inequality: $a+1\color{red}<b+1$, so $a<b$, which is fine. Had your initial algebra been correct, this case analysis would have given $x<0$; applying the same sign analysis to the correct factorization $(a-b)(x+1)<0$ gives $x<-1$.
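For concreteness, a brute-force check with hypothetical values $a=3$, $b=1$ shows the solution set of $ax+a<bx+b$ is exactly $x<-1$:

```python
# Sample values with b < a; compare the solution set on a grid of x values
a, b = 3.0, 1.0
xs = [x / 10 for x in range(-50, 50)]          # -5.0, -4.9, ..., 4.9
holds = [x for x in xs if a * x + a < b * x + b]
expected = [x for x in xs if x < -1]
```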
H: Test the convergence of the series with alternating signs The series is given by ($a>0$) $$\frac{1}{a} +\frac{1}{a+1} - \frac{1}{a+2} +\frac{1}{a+3} +\frac{1}{a+4} -\frac{1}{a+5} +\cdots$$ So how can I move forward? I can not find the general term of the series. If someone can give the general term then I can go further. AI: I am editing this response to make it more readable and to incorporate the analysis in the comments posted subsequent to this response. Let $s_n$ denote the nth term (i.e. $s_1 = 1/a$). The proof is in two parts: (1) showing that $s_1 + s_4 + s_7 + \cdots$ diverges. (2) showing that as a result of (1) above, the overall series is divergent. Fact: the harmonic series is known to be divergent. (1) Let M be a positive integer > a. Let R = $s_1 + s_4 + s_7 + \cdots.$ Let S = 1/M + 1/(M+3) + 1/(M+6) + ... Let T = 1/M + 1/(M+1) + 1/(M+2) + ... Then R > S and 3S > T. Further, since the harmonic series is divergent and since T is the same as the harmonic series, with a finite group of terms excluded, T is divergent. Therefore, S is divergent. Therefore R is divergent. (2) Since $(s_2 + s_3) > 0, (s_5 + s_6) > 0, \cdots$ it is immediate that $(s_1 + s_2 + s_3) + (s_4 + s_5 + s_6) + \cdots > s_1 + s_4 + s_7 + \cdots = R,$ which was shown to be divergent in part (1). Addendum: response to maa's comment below First of all, my knowledge of convergent/divergent series in Real Analysis is somewhat limited, so I could well be mistaken. However, I've checked my analysis and I don't see any flaw. Based on the comment that I am responding to, it looks like the analysis in part (1) is being accepted and the analysis in part (2) is being questioned. It also seems like the following intermediate conclusions in part (2) are being accepted. If not, please advise. 
$(s_1 + s_2 + s_3) > s_1$ $(s_4 + s_5 + s_6) > s_4$ $(s_7 + s_8 + s_9) > s_7$ $\cdots$ Define a new notation: Let $b_1 \equiv (s_1 + s_2 + s_3).$ Let $b_2 \equiv (s_4 + s_5 + s_6).$ Let $b_3 \equiv (s_7 + s_8 + s_9).$ $\cdots$ Now contrast the following two infinite sums: $s_1 + s_2 + s_3 + s_4 + s_5 + s_6 + s_7 + s_8 + s_9 + \cdots\;\;$ and $b_1 + b_2 + b_3 + \cdots.$ Notice that the 2nd infinite sum (above) duplicates the exact order in which the terms occur in the 1st infinite sum (above). Therefore, the 1st infinite sum (above) will be divergent if and only if the 2nd infinite sum above is divergent. Edit I imagine that it is possible to provide a counter-example to the analysis directly above. For instance, a series that looks like: $(0 +5 -5) + (0 +5 -5) + (0 +5 -5) + (0 +5 -5) + \cdots.$ However, I don't think that such a bizarre counter-example is relevant here. Remember, in the original series, $|s_n|$ is strictly decreasing and goes to zero. Edit-2nd Also, in the analysis below, the result I really need is that if $b_1 + b_2 + b_3 + \cdots$ is a divergent series then so is $s_1 + s_2 + s_3 + \cdots.$ I am having a hard time imagining a counter-example where this one-way implication breaks down. Now consider the following two infinite sums: $s_1 + s_4 + s_7 + \cdots \;\;$ and $b_1 + b_2 + b_3 + \cdots .$ From the previous analysis we know that: $s_1 + s_4 + s_7 + \cdots$ is a divergent series $b_1 > s_1$ $b_2 > s_4$ $b_3 > s_7$ $\cdots$ Therefore, $b_1 + b_2 + b_3 + \cdots$ is a divergent series. Therefore, $s_1 + s_2 + s_3 + s_4 + s_5 + s_6 + s_7 + s_8 + s_9 + \cdots$ is a divergent series.
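Numerically, the partial sums for a hypothetical $a=1$ do grow without bound, roughly like $\frac{1}{3}\log N$, consistent with the divergence argument above (a sketch):

```python
# Series 1/a + 1/(a+1) - 1/(a+2) + 1/(a+3) + 1/(a+4) - 1/(a+5) + ...
# Each block of three terms contributes roughly 1/(a+3k).
a = 1.0

def partial_sum(N):
    total = 0.0
    for n in range(N):   # n-th term is 1/(a+n), with a minus when n % 3 == 2
        term = 1.0 / (a + n)
        total += -term if n % 3 == 2 else term
    return total

# Growth between N = 9000 and N = 90000 should be about (1/3) * log(10) ~ 0.77
growth = partial_sum(90000) - partial_sum(9000)
```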
H: Convergence of series via ratio test I am attempting to show that a sequence which I defined as $\{a_n\}$ where $a_n =\sum_{p=0}^{n}\frac{x^p}{p!}$ converges, in order to do so I have attempted the ratio test at which point I arrive at the term $a_n =\sum_{p=0}^{n}\frac{x^p}{p!}$ and $a_{n+1} =\sum_{p=0}^{n+1}\frac{x^p}{p!}$ then taking the ratio I arrive at $$\frac{a_{n+1}}{a_n}=\frac{\sum_{p=0}^{n+1}\frac{x^p}{p!}}{\sum_{p=0}^{n}\frac{x^p}{p!}}$$ Taking the limit of the above is proving to be difficult. I am attempting the ratio test. If this proves impossible I would like to show that it converges in some other form. I have proved that in the case that $x=1$ we have convergence because it is bounded above and is increasing, however I want to show it for this case or for all $x$. How can we show this? I have also thought about showing it is Cauchy in some form, which would imply that it is convergent. AI: The following are equivalent (by definition of series convergence): (I) The sequence $\displaystyle\big(\sum_{p=0}^n \frac{x^p}{p!}\big)_{n=0}^{\infty}$ of partial sums $\displaystyle\sum_{p=0}^n \frac{x^p}{p!}$ of the infinite series $\displaystyle\sum_{p=0}^{\infty}\frac{x^p}{p!}$ converges. (II) The infinite series $\displaystyle\sum_{p=0}^{\infty}\frac{x^p}{p!}$ converges. The latter is implied by the following: (III) The limit $\displaystyle\lim_{n\to\infty} \left|\frac{x^{n+1}/(n+1)!}{x^n/n!}\right| $ exists and is less than $1$. The implication (III) $\Rightarrow$ (II) is an application of the ratio test: Correct Ratio Test. An infinite series $\displaystyle\sum_{p=0}^{\infty} t_p$ converges if $\displaystyle\lim_{n\to\infty}|t_{n+1}/t_n|<1$. You seem to be misinterpreting the ratio test as follows: Wrong Ratio Test. An infinite sequence $(s_n)_{n=0}^{\infty}$ converges if $\displaystyle\lim_{n\to\infty}|s_{n+1}/s_n|<1$. You seem to be applying this wrong ratio test with $\displaystyle s_n=\sum_{p=0}^n\frac{x^p}{p!}$.
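Numerically, the contrast between the two tests is easy to see (a sketch with a hypothetical $x=5$): the term ratio $|x|/(n+1)$ tends to $0<1$, while the ratio of successive partial sums tends to $1$, which is exactly why the "wrong ratio test" gives no information.

```python
import math

x = 5.0
n = 50

# Correct ratio test: applied to the TERMS t_p = x^p / p!
term_ratio = (x ** (n + 1) / math.factorial(n + 1)) / (x ** n / math.factorial(n))
# equals x / (n + 1), which tends to 0 < 1, so the series converges

# Ratio of successive PARTIAL sums tends to 1 instead
def partial(n):
    return sum(x ** p / math.factorial(p) for p in range(n + 1))

sum_ratio = partial(n + 1) / partial(n)
```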
H: Is $\phi =\angle A"OB" = \measuredangle(AB,A"B")=\measuredangle(A'B',A"B")$? [Doubt] Can someone clarify this doubt? We denote the spiral similarity by $S$, the rotation centered at $O$ with angle $\phi$ by $\rho_{O,\phi}$, and the homothety centered at $O$ with ratio $k$ by $\chi_{O,k}$; then $S_{O,k,\phi} = \rho_{O,\phi} \circ \chi_{O,k}$. Consider the following image: We are given a triangle $ABC$ which is being dilated by a spiral similarity $S$ centred at $O$ with ratio $k$ and angle $\phi$. I noted that since angles are preserved by rotation and homothety, we get that $\Delta ABC \sim \Delta A"B"C"$. Also $\Delta OAB \sim \Delta OA"B"$. And we have $\angle A"OB"=\angle AOB$. And we also have $\measuredangle(AB,A"B")=\measuredangle(A'B',A"B")$ (since $A'B'||AB$). But I couldn't understand how $\measuredangle(AB,A"B")=\measuredangle(A'B',A"B")=\phi$. Isn't $\phi =\angle A"OB"$? Here is the whole explanation of the book: AI: It should be $\phi = \angle A'OA''$. The rotation takes $A'$ to $A''$ and $B'$ to $B''$, so it takes $A'B'$ to $A''B''$, so we have $$\angle (A''B'',A'B') = \phi$$ Since the homothety takes $A$ to $A'$ and $B$ to $B'$ we have $AB||A'B'$, so it is obvious that $$\angle (A''B'',A'B') =\angle (A''B'',AB)$$
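Modelling points as complex numbers with $O$ at the origin (my own sketch, with assumed values for $k$, $\phi$, $A$, $B$), the spiral similarity is multiplication by $k e^{i\phi}$, and the directed angle from $AB$ to $A''B''$ comes out exactly $\phi$, as in the answer:

```python
import cmath

k, phi = 1.7, 0.6          # arbitrary assumed ratio and angle
A, B = 2 + 1j, -1 + 3j     # arbitrary points; O is the origin

s = k * cmath.exp(1j * phi)    # spiral similarity S_{O,k,phi} as one multiplication
A2, B2 = s * A, s * B          # images A'', B''

# directed angle between segment AB and segment A''B''
angle = cmath.phase((B2 - A2) / (B - A))
print(abs(angle - phi) < 1e-12)  # True
```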
H: Show that $\frac{X-\mu}{\sigma}\sim N(0,1)$ using moment functions Let $X\sim N(\mu,\sigma^2)$. Show that $Z=\frac{X-\mu}{\sigma}\sim N(0,1)$ using moment generating functions. \begin{align*} M_Z(t)&=M_{\frac{X-\mu}{\sigma}}(t)\\ &=M_{X-\mu}\left(\frac t\sigma\right)\\ &=e^{-\mu t}M_X\left(\frac t\sigma\right)\\ &=e^{-\mu t}\cdot e^{\frac{t\mu}{\sigma}+\frac{t^2}{2}}\\ &=e^{-\mu t+\frac{t\mu}{\sigma}+\frac{t^2}{2}} \end{align*} I'm not sure how to conclude that $Z\sim N(0,1)$. AI: As mentioned in the comments, your error is that $$M_{X-\mu}(t/\sigma) = e^{-\mu t/\sigma}M_X(t/\sigma)$$ Fixing that gives a final expression of $M_Z(t) = e^{t^2/2},$ as expected.
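For a numeric sanity check (my own addition), one can compute $E[e^{tZ}]$ for $Z\sim N(0,1)$ by quadrature against the normal density and compare it with the claimed $M_Z(t)=e^{t^2/2}$:

```python
import math

def normal_pdf(z, mu=0.0, sigma=1.0):
    return math.exp(-((z - mu) / sigma) ** 2 / 2) / (sigma * math.sqrt(2 * math.pi))

def mgf_numeric(t, mu=0.0, sigma=1.0, lo=-12.0, hi=12.0, n=20000):
    # composite Simpson's rule for E[e^{tZ}] with Z ~ N(mu, sigma^2)
    h = (hi - lo) / n
    s = math.exp(t * lo) * normal_pdf(lo, mu, sigma) + math.exp(t * hi) * normal_pdf(hi, mu, sigma)
    for i in range(1, n):
        z = lo + i * h
        s += (4 if i % 2 else 2) * math.exp(t * z) * normal_pdf(z, mu, sigma)
    return s * h / 3

t = 0.8
print(abs(mgf_numeric(t) - math.exp(t * t / 2)) < 1e-9)  # True
```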
H: Does a bounded linear transformation from a Banach space to the real numbers map a bounded closed set to a closed set? Specifically, I need to know whether a bounded linear transformation from the sequence space $\ell_1$ to the real numbers maps the closed unit ball to a closed set. I've been thinking about this problem for two days. I'll be thankful for any help. AI: Let $T(x_n)=\sum (1-\frac 1 n) x_n$ for all $(x_n) \in \ell^{1}$. Note that $|T(x_n)| \leq 1$ for all $(x_n)$ in the closed unit ball. Since $Te_n=1-\frac 1 n$ it follows that the supremum of the image of the unit ball is $1$. I will let you check that this supremum is never attained. Hence the image is not closed.
H: How can I find the distance between the centres of two circles? I'm a senior year maths student and I stumbled upon a question from a maths competition from a previous year. I seem to be on the cusp of solving it but I am unable to solve for the radius (to give me the answer). The question reads as follows: A rectangle has sides of length 5 and 12 units. A diagonal is drawn and then the largest possible circle is drawn in each of the two triangles. What is the distance between the centres of these two circles? Image below for reference: What I have attempted so far is connecting the points of tangency for each circle for their respective centres and labelled them $r$. From this, I was able to label sides $5-r$ and $12-r$. I noticed that the diagonal and both widths of the rectangle were lines of tangency that met in the top left and bottom right corners, and therefore those parts of the diagonal from the corner to the point of tangency were also $5 - r$. From this, I could label the middle part of the diagonal $3 + 2r$. Drawing the line I had to solve for, I broke this middle part into equal sections of $3/2 + r$. From there I found out I could use Pythagoras' theorem to calculate the hypotenuse which was half of the length of the line I was trying to find. I ended up calculating this to be $\sqrt{8r^2+12r+9}$. The only problem is I am unsure of how to solve for $r$. Help is much appreciated. AI: Hint The inradius of a right-angled triangle is given by $\frac{1}{2}(a+b-c)$, where $a,b$ are the legs and $c$ is the hypotenuse. A proof of this is here on the line after the word 'Proof'. Alternatively, you can observe the following diagram: The area of the triangle is $\frac{1}{2} (5)(12)$. However, the area of the triangle is also $\frac{1}{2} (5r + 12r + 13r)$ by adding the areas of $\Delta CDA, \Delta ADB, \Delta BDC$ together. 
Therefore: $$\frac{1}{2} (5r + 12r + 13r) = \frac{1}{2}(5)( 12) \Rightarrow 30r=60 \Rightarrow r=2$$ A generalisation of this for any triangle gives the fact that $A = rs \Rightarrow r = A/s$, where $s$ is the semiperimeter $\frac{a+b+c}{2}$.
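Putting the numbers together (my own check, placing the rectangle's corners at $(0,0)$, $(12,0)$, $(12,5)$, $(0,5)$ with the diagonal from $(0,5)$ to $(12,0)$): the inradius is $r=2$, the incentres sit at $(r,r)$ and $(12-r,5-r)$, and their distance agrees with the asker's expression $\sqrt{8r^2+12r+9}=\sqrt{65}$:

```python
import math

a, b = 5, 12
c = math.hypot(a, b)            # hypotenuse: 13
r = (a + b - c) / 2             # inradius of a right-angled triangle

c1 = (r, r)                     # incentre of the triangle with legs on the axes
c2 = (12 - r, 5 - r)            # incentre of the opposite triangle
d = math.hypot(c2[0] - c1[0], c2[1] - c1[1])

print(r)                                              # 2.0
print(abs(d - math.sqrt(8*r*r + 12*r + 9)) < 1e-12)   # True: both equal sqrt(65)
```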
H: Prove that if $U$ is a linear operator on $V$, then $UT=TU$ if and only if $U=g(T)$ for some polynomial $g(t)$. Let $T$ be a linear operator on a vector space $V$, and suppose that $V$ is a $T$-cyclic subspace of itself. Prove that if $U$ is a linear operator on $V$, then $UT=TU$ if and only if $U=g(T)$ for some polynomial $g(t)$. $\textbf{My attempt:}$ Suppose that $U: V \to V$ is a linear operator, $U=g(T)$ for some polynomial $g(t)$, and $V$ is generated by $v$; then the set $$\beta=\{v,T(v),T^{2}(v),\ldots,T^{k-1}(v)\}$$ where $\dim(V)=k$ is a basis for $V$. We need to prove that $UT=TU$, but $T(T^{k})=(T^{k})T=T^{k+1}$ and $T$ is linear, so $UT=TU$. $\textbf{My final attempt:}$ We need to prove that if $U$ is a linear operator on $V$, then $UT=TU$ if and only if $U=g(T)$ for some polynomial $g(t)$. $(\Longleftarrow)$ If $U=g(T)$ for some polynomial $g(t)$, then $$UT=\left(\sum_{k=0}^{n}a_{k}T^{k}\right)T=T\left(\sum_{k=0}^{n}a_{k}T^{k}\right)=TU$$since $$T(T^k)=(T^k)T=T^{k+1}$$ and $T$ is a linear transformation, so $TU=UT$. $(\implies)$ Suppose $V$ is generated by $v \in V$; then the set $$\beta=\{v,T(v),\ldots,T^{k-1}(v)\}$$ is a basis. So the vector $U(v)$ can be written as a linear combination of the basis $\beta$. So $U(v)=g(T)(v)$ for some polynomial $g(t)$. If $UT=TU$, we want to prove that $U=g(T)$; for that we can prove $\forall \hat{v} \in \beta: U(\hat{v})=g(T)(\hat{v})$. Indeed $$U(T^{n}(v))=T^{n}(U(v))=T^{n}g(T)(v)=g(T)(T^{n}(v)), \quad \forall n \in \mathbb{N}$$ Finally, I proved that if $U$ is a linear operator on $V$, then $UT=TU$ if and only if $U=g(T)$ for some polynomial $g(t)$.$\Box$ AI: Note that the set $S = \{U : UT = TU\}$ is closed under scalar multiplication, addition, and multiplication by $T$; further, note that it contains the identity matrix. Thus, $S$ contains all $g(T)$. Now suppose we have $U \in S$. Take $v$ such that $V$ is spanned by $\{T^n v : n \in \mathbb{N}\}$. Then we can write every $x \in V$ as $g(T) v$ for some polynomial $g$.
In particular, take $g$ such that $g(T) v = U v$. It can be shown by induction that for all $n$, $g(T) T^n v = U T^n v$ (also using the fact that $g(T) \in S$). Since $g(T)$ and $U$ agree on a set which spans $V$, they are equal.
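A small finite-dimensional illustration of this argument (my own sketch, in plain Python): take $T$ the cyclic-shift matrix on $\mathbb{R}^3$, for which $e_1$ is a cyclic vector. A circulant matrix $U$ commutes with $T$, and reading off the coordinates of $U e_1$ in the basis $\{e_1, Te_1, T^2e_1\}$ recovers the polynomial $g$ with $U = g(T)$:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def matadd(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def scal(c, A):
    return [[c * x for x in row] for row in A]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
T = [[0, 0, 1], [1, 0, 0], [0, 1, 0]]   # cyclic shift: T e1 = e2, T e2 = e3, T e3 = e1
U = [[1, 2, 3], [3, 1, 2], [2, 3, 1]]   # a circulant matrix; it commutes with T

assert matmul(U, T) == matmul(T, U)

# U e1 = (1, 3, 2)^T = 1*e1 + 3*(T e1) + 2*(T^2 e1), so g(t) = 1 + 3t + 2t^2
gT = matadd(matadd(I, scal(3, T)), scal(2, matmul(T, T)))
print(gT == U)  # True
```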
H: Evaluating $\int_0^\infty \frac{\tan^{-1}(t)}{e^{2\pi t}-1}\,\mathrm{d}t$ As stated in the title, I want to evaluate the integral $$\int_0^\infty \frac{\tan^{-1}(t)}{e^{2\pi t}-1}\,\mathrm{d}t$$ Because of the $e^{2\pi t}$, it seems that complex analysis techniques are required for this one, which I am not so familiar with. I have tried some substitutions and integration by parts. Besides, I tried using the property $$\int_0^\infty f(t)\,g(t) \,\mathrm{d}t =\int_0^\infty \mathscr{L}[f](s)\cdot \mathscr{L}^{-1}[g](s)\,\mathrm{d}s$$ But no luck here. I would like a solution without complex analysis techniques, if that's possible. Thank you! AI: $\displaystyle \int_{0}^{\infty}{\arctan\left(x\right) \over \mathrm{e}^{2\pi x} - 1}\,\mathrm{d}x\ =\ \bbox[10px,border:1px solid navy]{{1 \over 2} - {1 \over 4}\ln\left(2\pi\right)}$ See $\color{black}{\bf 6.1.50}$ in A & S. The result comes from the Second Binet Formula.
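A quick numeric confirmation of the boxed closed form (my own check via Simpson's rule; the integrand extends continuously to $t=0$ with value $1/(2\pi)$, and the tail beyond $t=10$ is negligible):

```python
import math

def integrand(t):
    if t == 0.0:
        return 1.0 / (2.0 * math.pi)  # limiting value as t -> 0
    return math.atan(t) / math.expm1(2.0 * math.pi * t)

# composite Simpson's rule on [0, 10]
n, a, b = 20000, 0.0, 10.0
h = (b - a) / n
s = integrand(a) + integrand(b)
for i in range(1, n):
    s += (4 if i % 2 else 2) * integrand(a + i * h)
numeric = s * h / 3.0

closed_form = 0.5 - 0.25 * math.log(2.0 * math.pi)
print(abs(numeric - closed_form) < 1e-9)  # True
```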
H: Why is the following method of finding a conserved quantity wrong? Let a system be defined by $\dot{x}=y; \dot{y}=f(x).$ And let $E(x,y)$ be a conserved quantity of the system. Then $$ \frac{\partial{E}}{\partial{x}}\dot{x}+\frac{\partial{E}}{\partial{y}}\dot{y}=0. $$ My question is why can't I just rewrite this equation as $$ \frac{\frac{\partial{E}}{\partial{x}}}{{\frac{\partial{E}}{\partial{y}}}}=\frac{-\dot{y}}{\dot{x}}. $$ $$ \frac{dy}{dx}=\frac{-\dot{y}}{\dot{x}}. $$ Now separation of variables would easily give me a curve in $(x,y)$ space which can also be thought of as a conserved quantity. Looking at examples in Strogatz, I figured out that this method is wrong, but I am unable to find out why. AI: The question is already answered in the comment. The mistake is that in general, $$\tag{1}\frac{\frac{\partial E}{\partial x}}{\frac{\partial E}{\partial y} }\neq \frac{\mathrm d y}{\mathrm d x}.$$ Probably the confusion comes from a similar result (used a lot in ODEs) where $$ \frac{ y' }{x'} = \frac{\mathrm dy}{\mathrm dx}.$$ This is essentially the chain rule in one dimension. But when $E$ has two variables, the chain rule is more complicated and we do not have (1).
H: A question based on location of roots using information about the derivative at some points I am trying some questions asked in the masters of mathematics exam of my university and I was unable to solve this particular problem. Let $f:\mathbb{R} \rightarrow \mathbb{R}$ be a twice continuously differentiable function with $f(0) = f(1) = f'(0) = 0$. Then which one of the following is true: 1. $f''$ is the zero function. 2. $f''(0)=0$. 3. $f''(x) = 0$ for some $x \in (0, 1)$. 4. $f''$ never vanishes. Options 1., 2. and 4. can be negated by taking $f(x) = x^{2} \cdot (x-1)$. Can anyone please tell how to rigorously prove the $3$rd option? AI: Assuming you mean $f\in C^2$, i.e. $f$ twice continuously differentiable. Recall Rolle's theorem. Hint: Apply it twice. Details, if you really don't want to work: First for $f\colon [0,1]\to\mathbb{R}$, using $f(0)=f(1)=0$: we have some $c_1\in(0,1)$ such that $f'(c_1)=0$. Then for $f'\colon[0,c_1]\to\mathbb{R}$, using $f'(0)=f'(c_1)=0$: you have option (3).
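Checking the example $f(x)=x^2(x-1)$ numerically (my own addition): $f''(x)=6x-2$ vanishes at $x=1/3\in(0,1)$, consistent with option 3, while $f''(0)=-2$ rules out options 1 and 2, and the interior zero rules out option 4:

```python
def f(x):
    return x * x * (x - 1)

def fpp(x):
    return 6 * x - 2  # exact second derivative of f

assert f(0) == 0 and f(1) == 0   # and f'(x) = 3x^2 - 2x gives f'(0) = 0
print(fpp(0))                    # -2: options 1 and 2 fail
print(abs(fpp(1 / 3)) < 1e-12)   # True: f'' vanishes inside (0,1)
```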
H: Does there exist a second category set that is not a Baire space? Can someone give an example of a second-category set $Y$ in a metric space $X$ such that $Y$ is not a Baire space? We know that Baire space $\implies$ second category. But I am trying to show that the converse is not true. Clearly, $Y$ must not be complete. AI: Let $X=\Bbb R$ with the usual topology, and let $Y=[0,1]\cup(\Bbb Q\cap[2,3])$; $Y$ is second category in $X$, but it’s not a Baire space, since the sets $Y\setminus\{q\}$ for $q\in\Bbb Q\cap[2,3]$ are dense and open in $Y$, but their intersection $[0,1]$ is not dense in $Y$.
H: A characterization of the identity operator on Hilbert space Let $H$ be a Hilbert space and $T\in B(H)$ be a bounded linear operator on $H$; then $T=I$ $\Longleftrightarrow$ $\langle\psi,T\psi\rangle=1$ for every $\|\psi\|=1$. It is easy to verify "$\Longrightarrow$". But how to show the opposite implication? AI: This is false if the scalar field is $\mathbb R$. A counter-example is $T=I+S$ where $S$ is rotation by $90^{\circ}$ in $\mathbb R^{2}$. In a complex Hilbert space it is well known that if $S$ is a bounded operator then $\langle \psi, S (\psi) \rangle=0$ for all $\psi$ of norm $1$ (and hence for all $\psi$) implies that $S=0$. Apply this to $S=T-I$ and you get the conclusion easily.
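The real counterexample is easy to check directly (my own illustration): with $S$ the $90^{\circ}$ rotation of $\mathbb{R}^2$ we have $\langle\psi,S\psi\rangle=0$ for every $\psi$, so $T=I+S$ satisfies $\langle\psi,T\psi\rangle=1$ on unit vectors even though $T\neq I$:

```python
import math

def T(v):
    x, y = v
    # T = I + S, where S is rotation by 90 degrees: S(x, y) = (-y, x)
    return (x - y, y + x)

for k in range(12):                    # a sample of unit vectors
    t = 2 * math.pi * k / 12
    psi = (math.cos(t), math.sin(t))
    Tpsi = T(psi)
    inner = psi[0] * Tpsi[0] + psi[1] * Tpsi[1]
    assert abs(inner - 1.0) < 1e-12    # <psi, T psi> = 1 on the unit sphere

print(T((1.0, 0.0)))  # (1.0, 1.0): so T is not the identity
```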
H: Why is a point a circle with radius zero? I was reading this: What is a point circle, a real circle and an imaginary circle? and I got confused by the statement written in the accepted answer, i.e. A point "circle" is just a point; it's a circle with a radius of zero But a point itself looks like a circle, and when we zoom in enough there is still a radius $>0$, so how is it a circle of radius zero? AI: A point is defined to have no size. It is simply a location in space. I think the confusion arises because if you put a "point" of ink on a piece of paper and look closely enough, it seems to have a size. But that dot of ink isn't really a point; it has a size. If you wanted a true mathematical point on paper, you would need a dot of ink that is so tiny, no other dot can be made that is smaller than it.
H: How to prove that the induced topology is the coarsest and the identification topology is the finest topology that keeps the map continuous? I am reading about maps between topological spaces from Isham, Chris J. Modern differential geometry for physicists. Vol. 61. World Scientific, 1999. Here he defines the induced topology and the identification topology in the following way If $(Y,\tau)$ is a topological space and $f$ is a map from $X$ to $Y$ then the induced topology on $X$ is defined to be $$ f^{-1}(\tau):=\{f^{-1}(O)|O \in \tau\} $$ The key property of the induced topology is that it is the coarsest topology such that $f$ is continuous Another important example arises when $(Y,\tau)$ is a topological space and there is a surjective map $p: Y \to X$. The identification topology on $X$ is defined as $$ p(\tau) := \{A\subset X | p^{-1}(A) \in \tau\} $$ The key property of this topology is that it is the finest one on $X$ such that $p$ is continuous. I want to prove that the induced topology is the coarsest topology such that $f$ is continuous the identification topology is the finest topology such that $p$ is continuous I know that a map $f:(W,\tau) \to (V,\tau')$ is continuous if for all $O \in\tau', f^{-1}(O) \in \tau$. I also know that in general, any map between two sets ($f:X\to Y$) has the property $$ f^{-1}(A\cap B) = f^{-1}(A) \cap f^{-1}(B) \\ f^{-1}(A \cup B) = f^{-1}(A) \cup f^{-1}(B) $$ for subsets $A, B \subseteq Y$. This comes in handy for proving that both the induced and identification topologies are topologies. But I don't know where to start with the proofs. AI: It's an exercise in definitions: Recall that given $f:X \to (Y, \tau_Y)$, we define $\tau_f = \{f^{-1}[O]\mid O \in \tau_Y\}$, which is indeed a topology on $X$, from the properties of inverse images you name, among other things. It's trivial that $f$ is continuous when $X$ is given the induced topology $\tau_f$ wrt $(Y,\tau_Y)$: let $O \in \tau_Y$ be open.
By definition of the induced topology $\tau_f$, $f^{-1}[O] \in \tau_f$. So $f$ is continuous as a map $(X,\tau_f) \to (Y, \tau_Y)$. In fact, if $\tau$ is any topology on $X$ such that $f:(X,\tau) \to (Y, \tau_Y)$ is continuous, then when $O \in \tau_f$, we know $O=f^{-1}[O']$ for some $O' \in \tau_Y$, and as $f$ is assumed to be continuous, by definition of continuity of $f$, $f^{-1}[O'] \in \tau$, so $O \in \tau$ and $\tau_f \subseteq \tau$. So $\tau_f$ is the coarsest among all topologies that make $f$ continuous with codomain $(Y, \tau_Y)$. Now to the identification topology (also known as the quotient topology or final topology), when $(X, \tau_X)$ is pre-given and $\tau_i:= \{O \subseteq Y: f^{-1}[O] \in \tau_X\}$ is the identification topology. Now suppose $\tau$ is any topology on $Y$ such that $f:(X, \tau_X) \to (Y, \tau)$ is continuous, and let $O \in \tau$ be arbitrary. Then, as $f$ is continuous, by definition, $f^{-1}[O] \in \tau_X$. But this means that $O$ obeys the defining property for $\tau_i$, so $O \in \tau_i$. And as $O$ is arbitrary, $\tau \subseteq \tau_i$, so the identification topology is the finest (largest) topology among all topologies on $Y$ that make $f$ continuous with domain $(X, \tau_X)$.
H: Largest number in a singleton I have a funny question: can I call the one and only element of a singleton the largest number? For example, I have the singleton $\{1\}$. Can I call 1 the largest number? Here is what I think: the largest number $n$ from a set $L$ is a number for which $n - k > 0$ holds for every other $k$ (also from $L$). And if we have no number to compare with, 1 is not a largest number. Also, here we have things like: 1 would be both the largest and the lowest number, so largest = lowest? So can I call 1 the largest number? And if I can't, why not? AI: How do you define the largest element of a set $A$ with respect to a partial ordering relation $\le$ on it? The usual definition is: $a\in A$ is the largest element if and only if $(\forall x\in A)x\le a$. This is satisfied for the only element $a$ in a singleton $A=\{a\}$. What I am guessing is that you are starting with a different definition, e.g. $(\forall x\in A, x\ne a) x\lt a$. ($p\lt q$ means $p\le q$ and $p\ne q$.) In this case, the set of those $x$ is empty and you may be confused whether you can claim something is true for all elements of an empty set. Indeed, the very useful convention in math is that you can. This is known as "vacuous truth": something is true "in all cases" because there are no examples where it is false - because there are no examples at all! See: https://en.m.wikipedia.org/wiki/Vacuous_truth .
H: How many unique "$\phi$-nary" expansions are there for $1$? I was playing around the expansions of numbers in irrational bases, namely base $\phi=\frac{1+\sqrt5}{2}$. Of course, I should immediately define what it means to symbolize digits in a non-integer base. At least in my case, the expansions consist of $\lceil\phi\rceil=2$ unique digits, (0 & 1). Hence, I've dubbed it "phi-nary". Due to the base being the golden ratio, it carries along several unique properties, such as $$1.1_\phi=10_\phi=\phi$$ Which got me thinking: This base is able to express a number in multiple unique terminating expansions! Immediately, I was curious to see how many there were for 1. I found these 3: $$1_\phi=0.11_\phi=0.1011_\phi$$ Using $\phi^2=\phi+1$ and $\phi^{-1}=\phi-1$, here's the proof for $0.11_\phi$: $0.11_\phi=\phi^{-1}+\phi^{-2}=(\phi-1)+(\phi^{-1})^2=(\phi-1)+(\phi-1)^2=(\phi-1)+(\phi^2-2\phi+1)=-\phi+(\phi+1)=1$ The third expansion follows the same modes of deduction. I also found the non-terminating expansion $0.\bar{10}_\phi=1$ My intuition tells me there are a (countably) infinite amount, but I do not know how to go about proving that. Are those the only three terminating expansions? In other words, in general for what $S\subset\mathbb{Z}$ does $$\sum_{k\in S}\phi^k=1$$ AI: There are countably infinitely many finite expansions. For starting with $1$, we can replace the terminating $1$ in the $n$th phi-nimal place by $011$ in the $n$th, $n+1$th, and $n+2$th places respectively. Now suppose given an infinite binary sequence $b$ such that $\sum b_n \phi^{-n} = 1$. Consider the following possibilities: $b_0 = 1$. Then $b$ is a single $1$ followed by infinite zeroes. $b_0 = 0$ and $b_1 = 1$. Then we have $\sum b_{n + 2} \phi^{-n} = 1$. $b_0 = 0$ and $b_1 = 0$. Then we have $\sum b_{n + 2} \phi^{-n} \leq \frac{1}{1 - \phi^{-1}} = \phi^2$, and equality can only hold when every $b_i$ for $i \geq 2$ is 1. 
Thus, it is apparent that either $b$ is the alternating sequence $0, 1, 0, 1, ...$; or $b$ begins with a prefix of the alternating sequence $0, 1, ...$ but eventually terminates with a $1$ in an evenly indexed position; or $b$ begins with a prefix of the alternating sequence $0, 1, ...$ but eventually has a $0$ in an odd-indexed position, followed by an endless sequence of $1$s. So the set of all $\phi$-nary representations of $1$ is countable.
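The rewrite rule behind the countability claim, replacing a terminating $1$ in place $n$ by $0,1,1$ in places $n$, $n+1$, $n+2$ (valid since $\phi^{-n}=\phi^{-(n+1)}+\phi^{-(n+2)}$), can be iterated numerically; the helper names below are my own:

```python
import math

phi = (1 + math.sqrt(5)) / 2

def value(digits):
    # digits[n] is the coefficient of phi^(-n); digits[0] is the units place
    return sum(d * phi ** (-n) for n, d in enumerate(digits))

def expand(digits):
    """Replace the terminating 1 (place n) by 0,1,1 (places n, n+1, n+2)."""
    return digits[:-1] + [0, 1, 1]

d = [1]                               # "1" in base phi
expansions = []
for _ in range(6):
    expansions.append(d)
    d = expand(d)

# every rewritten expansion still represents 1:
# [1] -> [0,1,1] (= 0.11) -> [0,1,0,1,1] (= 0.1011) -> ...
print(all(abs(value(e) - 1.0) < 1e-9 for e in expansions))  # True
```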
H: What is the actual difference between transitivity and quasitransitivity? I have been trying to construct a q.t. relation, but I always get a transitive relation. It seems to me that transitivity includes q.t. OK, but what would a purely q.t. relation look like? Examples and definitions seem to me (in my opinion, which might be wrong) to contradict each other. Here: https://en.wikipedia.org/wiki/Quasitransitive_relation I can't help myself, but it seems to me there is a contradiction between the definition and the example given in the properties. Further searching on the internet didn't clear it up for me. Please give me explicit examples. Thank you all kindly. AI: What is wrong with their example of people being indifferent between $7g$ and $8g$ of sugar, and also indifferent between $8g$ and $9g$ of sugar, but preferring $9g$ to $7g$? The relation on $X=\{7,8,9\}$ is given as $\le=\{(7,7),(7,8),(7,9),(8,7),(8,8),(8,9),(9,8),(9,9)\}=X^2\setminus\{(9,7)\}$. I will leave it to you to prove that the implication in the definition of quasitransitivity: $$(a\text{ T }b)\land\lnot(b\text{ T }a)\land(b\text{ T }c)\land\lnot(c\text{ T }b)\implies(a\text{ T }c)\land\lnot(c\text{ T }a)$$ is always true because the left side of it is always false (as the first two terms imply $a=7,b=9$ but the second two terms imply $b=7,c=9$). However, the relation is not transitive as $9\le 8$ and $8\le 7$ but $9\not\le 7$.
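The sugar example can be verified mechanically (my own brute-force script): the relation on $\{7,8,9\}$ missing only $(9,7)$ is quasitransitive, because its strict part is just $\{(7,9)\}$, yet it is not transitive:

```python
from itertools import product

X = {7, 8, 9}
R = set(product(X, X)) - {(9, 7)}     # the relation from the example

def transitive(R):
    return all((a, c) in R for (a, b) in R for (b2, c) in R if b == b2)

def quasitransitive(R):
    # strict part P: a P b iff a R b and not b R a
    P = {(a, b) for (a, b) in R if (b, a) not in R}
    return all((a, c) in P for (a, b) in P for (b2, c) in P if b == b2)

print(transitive(R))        # False: 9 R 8 and 8 R 7 but not 9 R 7
print(quasitransitive(R))   # True: the strict part is just {(7, 9)}
```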
H: differentials and tangent space of a fibre The setup I have is as follows: Let $f: X \to Y$ be a morphism of non-singular $n$-dimensional varieties (separated reduced irreducible schemes of finite type over $k$) over $k$ an algebraically closed field. For all closed points $y \in Y$, the fibre $f^{-1}(y)$ is a finite set of reduced points. Assume $Y = \operatorname{Spec}B$ Let $y = f(x)$ for some closed point $y \in Y$. In the proof I'm going through we have deduced that $f^{-1}(y)$ is locally $\operatorname{Spec}k[X_1, \ldots, X_N]/( \bar{f}_1,..., \bar{f}_N )$. Up to here I understand, but I am struggling to see the next two lines and I'd appreciate explanations. He states (this is from Mumford's red book, Theorem 4, III.5): By assumption, this fibre has no tangent space at all at $x$. Therefore, the differentials $d \bar{f}_i$ must be independent at $x$. Thank you! edit. I forgot to mention finite type over $k$ and this has been added AI: On the one hand, the tangent space is the dual of the cotangent space. On the other, the cotangent space is $\Omega_{X/Y,x}\otimes k(x)$, and $\Omega_{X/Y,x}$ has a presentation (near $x$) as a free module on the $dX_i$ with relations coming from the $df_j$. The assumptions imply that the tangent space vanishes, so its dual must vanish as well, which is equivalent to the $df_j$ being independent at $x$: if they weren't, the module wouldn't vanish, and $\Omega_{X/Y,x}\otimes k(x)$ would give a nonzero module in contradiction to our assumptions.
H: Show that the inequality $\left|\int_{0}^{1} f(x)\,dx\right| \leq \frac{1}{12}$ holds for certain initial conditions Given that a function $f$ has a continuous second derivative on the interval $[0,1]$, $f(0)=f(1)=0$, and $|f''(x)|\leq 1$, show that $$\left|\int_{0}^{1}f(x)\,dx\right|\leq \frac{1}{12}\,.$$ My attempt: This looks to be a maximization/minimization problem. Since the largest value $f''(x)$ can take on is $1$, then the first case will be to assume $f''(x)=1$. This is because it is the maximum concavity and covers the most amount of area from $[0,1]$ while still maintaining the given conditions. Edit: Because of the MVT and Rolle's Theorem, there exists extrema on the interval $[0,1]$ satisfying $f'(c)=0$ for some $c\in[0,1]$. These extrema could occur at endpoints. Then $f'(x)=x+b$ and $f(x)=\frac{x^2}{2}+bx+c$. Since $f(0)=0$, then $c=0$ and $f(1)=0$, then $b=-\frac{1}{2}$. Remark: Any function with a continuous, constant second derivative will be of the form $ax^2+bx+c$ and in this case, $a=-b$ and $c=0$. Now, $$\begin{align*}\int_{0}^{1}f(x)\,dx&=\frac{1}{2}\int_{0}^{1}(x^2-x)\,dx\\&=\frac{1}{2}\bigg[\frac{x^3}{3}-\frac{x^2}{2}\bigg]_{x=0}^{x=1}\\&=-\frac{1}{12}\end{align*}$$ Next, we assume that $f''(x)=-1$ and repeating the process yields $$ \begin{align*}\int_{0}^{1}f(x)\,dx&=\frac{1}{2}\int_{0}^{1}(-x^2+x)\,dx\\&=\frac{1}{2}\bigg[\frac{-x^3}{3}+\frac{x^2}{2}\bigg]_{x=0}^{x=1}\\&=\frac{1}{12}\end{align*}$$ Thus we have shown that at the upper and lower bounds for $f''(x)$ that $\frac{-1}{12}\leq\int_{0}^{1}f(x)\,dx\leq \frac{1}{12} \Longleftrightarrow \left|\int_{0}^{1}f(x)\,dx\right|\leq\frac{1}{12}$ because $f''(x)$ is continuous on $[0,1]$. I was wondering if this was 'rigorous' enough to be considered a full proof and solution to the problem. AI: Consider the following integral: $$\int_{0}^{1}\left(\frac{x^{2}}{2}-\frac{x}{2}\right)f^{\prime\prime}(x)\, dx. 
$$ By integrating by parts twice, you get $$\int_{0}^{1}\left(\frac{x^{2}}{2}-\frac{x}{2}\right)f^{\prime\prime}(x)\, dx = \underbrace{\left(\frac{x^{2}}{2}-\frac{x}{2}\right)f'(x)\bigg|_0^1}_{0} - \int_0^1\bigg(x-\frac{1}{2}\bigg)f'(x)dx=$$$$= - \int_0^1\bigg(x-\frac{1}{2}\bigg)f'(x)dx= \underbrace{- \bigg(x-\frac{1}{2}\bigg)f(x)\bigg|_0^1}_{0} + \int_0^1f(x)dx$$ Therefore, $$\boxed{\int_{0}^{1}f(x)\, dx = \int_{0}^{1}\left(\frac{x^{2}}{2}-\frac{x}{2}\right)f^{\prime\prime}(x)\, dx}$$ Now use the following inequality: $$\left|\int_{a}^{b}f(x)g(x)\,dx\right| \leq \int_{a}^{b}|f(x)||g(x)|\, dx$$ Since $g(x)=\frac{x^{2}}{2}-\frac{x}{2}$ is the expression you got, this should yield the desired result. $$\left|\int_0^1 f(x)\,dx\right|=\left| \int_{0}^{1}\left(\frac{x^{2}}{2}-\frac{x}{2}\right)f^{\prime\prime}(x)\, dx\right|\le\frac{1}{2}\int_{0}^{1}|x^2-x|\,dx=\frac{1}{12}$$
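The bound is sharp: $f(x)=(x^2-x)/2$ satisfies $f(0)=f(1)=0$ and $f''\equiv 1$, and exact rational arithmetic (my own check) gives $\int_0^1 f = -1/12$:

```python
from fractions import Fraction

# f(x) = (x^2 - x)/2 has f(0) = f(1) = 0 and f''(x) = 1, so |f''| <= 1.
# An antiderivative is F(x) = x^3/6 - x^2/4.
def F(x):
    x = Fraction(x)
    return x ** 3 / 6 - x ** 2 / 4

integral = F(1) - F(0)
print(integral)                          # -1/12
assert abs(integral) == Fraction(1, 12)  # the bound 1/12 is attained
```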
H: Binary numbers and measure Suppose I express all the numbers in $[0,1]$ as binary numbers. Define the set $X:=\{x\in[0,1]\,|\,x=(0.1x_{1}1x_{2}1x_{3}...)_2,\ x_{i}\in\{0,1\}\}$. Now I believe this set is uncountable and totally disconnected. But what about the measure of $X$? Can it be measure zero? My intuition tells me that if this is true, then its complement $[0,1]\setminus X=\{x\in[0,1]\,|\,x=(0.x_{1}x_{2}...)_2,\ x_{i}\in\{0,1\},\ x_{2j}=0$ for some $j\in\mathbb{N}\}$ should have full measure, but its expression is almost identical to that of $X$, and it is also uncountable and totally disconnected; how can I tell which one has measure zero (if either of them does)? Edit: Thanks to the comment, I notice the complement is wrong. Now the question becomes how to determine the measure of $X$. AI: To find the measure, note that we may write $X = X_0 \cup X_1$ where $X_0 = \{\frac{2 + x}{4} : x \in X\}$ and $X_1 = \{\frac{3 + x}{4} : x \in X\}$ (since $X_0$ accounts for binary sequences beginning with zero and $X_1$ accounts for binary sequences beginning with $1$). Now we see that $\mu(X_0) = \mu(X_1) = \frac{\mu(X)}{4}$ by translation invariance and scaling. And $0 \leq \mu(X) \leq \mu(X_0) + \mu(X_1) = \frac{\mu(X)}{2}$. Therefore, $\mu(X) = 0$.
H: Singular values of matrices which preserve the ellipse $\frac{x^2}{a^2} + \frac{y^2}{b^2} \le 1$ Let $0<a<b$, $ab=1$, and let $$ D_{a,b}=\biggl\{(x,y) \,\biggm | \, \frac{x^2}{a^2} + \frac{y^2}{b^2} \le 1 \biggr\} $$ be the ellipse with diameters $a,b$. Let $A \in \operatorname{SL}_2(\mathbb R) \setminus \operatorname{SO}(2)$ and suppose that $AD_{a,b}=D_{a,b}$. Question: Must the singular values of $A$ be $\bigl\{\frac{a}{b},\frac{b}{a}\bigr\}$? This option is always possible since we can take $A=\begin{pmatrix} \frac{a}{b} & 0 \\\ 0 & \frac{b}{a}\end{pmatrix}R_{\pi/2}$. Here is a (very) partial attempt: The condition $AD_{a,b}=D_{a,b}$ implies* that $A$ is similar to an orthogonal matrix, i.e. $A=CQC^{-1}$, where $C \in \operatorname{SL}_2(\mathbb R) , Q \in \operatorname{SO}(2)$. $AD_{a,b}=D_{a,b}$ implies that $Q\tilde D=\tilde D$, where $\tilde D=C^{-1}D_{a,b}$. If $Q$ is an irrational rotation (of infinite order), then $\tilde D$ must be the unit disk, which implies that the singular values of $C$ are $a,b$. This mean that $C=U\Sigma V^T$, where $\Sigma=\operatorname{diag}(\sigma_1,\sigma_2)$. Thus, $$ A=CQC^{-1}=U\Sigma V^T Q V\Sigma^{-1}U^T, $$ so up to left and right multiplication by rotations, $A=\Sigma R\Sigma^{-1}$, where $R= V^T Q V$. I don't see how to continue from here. I also don't know how to start analyzing the case where $Q$ is of finite order. *You may see this answer and the comments below it. AI: That's not true in general: the singular values of $A$ are by definition the square roots of the eigenvalues of $A^*A$. 
Set \begin{align*}A(\theta) & = \begin{pmatrix} a& 0 \\ 0 & b \end{pmatrix} \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos \theta \end{pmatrix}\begin{pmatrix} 1/a& 0 \\ 0 & 1/b \end{pmatrix}\\ &= \begin{pmatrix} \cos\theta & -\frac ab \sin\theta \\ \frac ba \sin\theta & \cos \theta \end{pmatrix} \end{align*} Then it is clear that there are choices of $\theta$ such that the singular values are not $\{ a/b, b/a\}$ when $a\neq b$ (For example, note that the singular values are continuous functions (see remark) of $\theta$, and one has $\{1, 1\}$ at $\theta = 0$ and $\{ a/b, b/a\}$ at $\theta = \pi/2$). Remark One can calculate that the eigenvalues of $A^*A$ are $$ \frac{1}{2} \left(2\cos ^2 \theta + \left( \frac{a^2}{b^2} + \frac{b^2}{a^2} \right) \sin^2\theta \right) \pm \sqrt{\frac 14\left(2\cos ^2 \theta + \left( \frac{a^2}{b^2} + \frac{b^2}{a^2} \right) \sin^2\theta \right)^2 -1}, $$ which are clearly continuous in $\theta$.
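A quick numeric check of this (my own sketch, plain Python for the $2\times 2$ case, with assumed values $a=1/2$, $b=2$, $\theta=1$): $A(\theta)$ maps the ellipse boundary to itself, yet its singular values lie strictly between $1$ and $b/a=4$, so they are neither $\{1,1\}$ nor $\{a/b,b/a\}$:

```python
import math

a, b = 0.5, 2.0                 # ab = 1
theta = 1.0
c, s = math.cos(theta), math.sin(theta)
A = [[c, -(a / b) * s], [(b / a) * s, c]]   # A(theta) from the answer

def apply(M, p):
    return (M[0][0] * p[0] + M[0][1] * p[1], M[1][0] * p[0] + M[1][1] * p[1])

# A preserves the ellipse x^2/a^2 + y^2/b^2 = 1
for k in range(8):
    t = 2 * math.pi * k / 8
    x, y = apply(A, (a * math.cos(t), b * math.sin(t)))
    assert abs((x / a) ** 2 + (y / b) ** 2 - 1) < 1e-12

# singular values: square roots of the eigenvalues of A^T A; det A = 1,
# so the two singular values multiply to 1
tr = A[0][0] ** 2 + A[1][0] ** 2 + A[0][1] ** 2 + A[1][1] ** 2   # trace of A^T A
big = (tr + math.sqrt(tr * tr - 4)) / 2
sv = (math.sqrt(big), 1 / math.sqrt(big))
print(sv)   # roughly (3.45, 0.29): strictly between {1,1} and {4, 1/4}
assert 1.0 < sv[0] < b / a
```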
H: Question about proof that a sub-$*$-algebra of $B(H)$ is strongly dense in its bicommutant. Consider the following fragment from Murphy's "$C^*$-algebras and operator theory": In the proof of Lemma 4.1.4, why does $u(x) \in K$ follow from $pu = up$? AI: First, we will show that $u(K) \subseteq K$. Let $y \in K$; then $p(y) = y$ and $$u(y) = u(p(y)) = p(u(y)) \in K.$$ Now, as $id_H \in A$, we have $x = id_H (x) \in K$ from the definition of $A$. Hence, $u(x) \in K$.
H: Confusion on proof of limit laws epsilon delta at https://www.youtube.com/watch?v=9tYUmwvLyIA from 35:33 to 39:33, Herb Gross says: \begin{align} f(x) &= L+ [f(x)-L] \\ g(x) &= M+ [g(x)-M] \end{align} Multiplying these 2, we get: \begin{align} f(x)g(x) &= LM + L[g(x) -M] + M[f(x) -L] +[f(x) -L][g(x)-M] \\ f(x)g(x) - LM &= L[g(x) -M] + M[f(x) -L] +[f(x) -L][g(x)-M] \end{align} From here we can say: \begin{align*} |f(x)g(x) - LM| &= |L[g(x) -M] + M[f(x) -L] +[f(x) -L][g(x)-M]| \\ &\leq |L[g(x) -M]| + |M[f(x) -L]| +|[f(x) -L][g(x)-M]| \end{align*} From here we can impose: \begin{align} |L[g(x) -M]| &< \frac{\epsilon}{3} \\ |M[f(x) -L]| &<\frac{\epsilon}{3} \\ |f(x) -L] &< \sqrt{\frac{\epsilon}{3}} \\ |g(x)-M| &< \sqrt{\frac{\epsilon}{3}} \end{align} My problem is: \begin{align} |L[g(x) -M]| &< \frac{\epsilon}{3} \implies|g(x) -M| &< \frac{\epsilon}{3|L|} \\ |M[f(x) -L]| &<\frac{\epsilon}{3} \implies |f(x) -L| <\frac{\epsilon}{3|M|}\\ |f(x) -L] &< \sqrt{\frac{\epsilon}{3}} \\ |g(x)-M| &< \sqrt{\frac{\epsilon}{3}} \end{align} So does that mean: $$|f(x) -L] < \min \{ \sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|M|} \}$$ $$|g(x) -M] < \min \{ \sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|L|} \}$$ Is this how it would look like in a proof: Given $\epsilon > 0$ Let $\epsilon_1 =\min \{ \sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|M|} \} $ then $\exists \delta_1 >0$ such that: $$0<|x-a|<\delta_1 \implies |f(x) - L| < \epsilon_1$$ Let $\epsilon_2 =\min \{ \sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|L|} \} $ then $\exists \delta_1 >0$ such that: $$0<|x-a|<\delta_2 \implies |g(x) - M| < \epsilon_2$$ then let $\delta \leq \min\{\delta_1,\delta_2\}$ \begin{align*} 0<|x-a|<\delta &\implies |L||g(x) -M| + |M||f(x) -L| +|f(x) -L||g(x)-M| \\ &< |L|\times\min \{ \sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|L|} \} + |M|\times\min \{ \sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|M|} \} + \min \{ \sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|L|} \} \times \min \{ 
\sqrt{\frac{\epsilon}{3}}, \frac{\epsilon}{3|M|} \}\\ &< \epsilon \end{align*} This last part is not mentioned in the video; is it all correct? AI: That is needlessly complicated. (1). Prove that if $A$ is a constant and $\lim_{x\to y}u(x)=V$ then $\lim_{x\to y}A+u(x)=A+V.$ (2). Prove that if $A$ is a constant and $\lim_{x\to y}u(x)=V$ then $\lim_{x\to y}Au(x)=AV.$ (3). Prove that if $\lim_{x\to y}u_1(x)=V_1$ and $\lim_{x\to y}u_2(x) =V_2$ then $\lim_{x\to y}u_1(x)+u_2(x)=V_1+V_2.$ (4). Prove that if $\lim_{x\to y}u_1(x)=0=\lim_{x\to y}u_2(x)$ then $\lim_{x\to y}u_1(x)u_2(x)=0$.... See (**) below. Suppose $\lim_{x\to y}f(x)=L$ and $\lim_{x\to y}g(x)=M.$ Let $f(x)=L+h(x)$ and $g(x)=M+i(x).$ By (1) with $A=-L$ and $u=f$ and $V=L$ we have $\lim_{x\to y}h(x)=0.$ Similarly we have $\lim_{x\to y}i(x)=0. $ Now $f(x)g(x)-LM=h(x)M+i(x)L+h(x)i(x).$ By (2) we have $\lim_{x\to y}h(x)M=0\cdot M=0.$ And similarly $ \lim_{x\to y}i(x)L=0.$ So by (3) with $u_1(x)=h(x)M$ and $u_2(x)=i(x)L$ and $V_1=V_2=0,$ we have $\lim_{x\to y}h(x)M+i(x)L=0.$ By (4) we have $\lim_{x\to y}h(x)i(x)=0.$ So by (3) with $u_1(x)=h(x)M+i(x)L$ and $u_2(x)=h(x)i(x)$ we have $\lim_{x\to y}f(x)g(x)-LM=0.$ Finally by (1) with $u(x)=f(x)g(x)-LM$ and $A=LM$ we have $\lim_{x\to y}f(x)g(x)=LM.$ (**). To prove (4): Given $e>0$ let $e'=\min (1,e).$ Note that $0<(e')^2\le e'\le e.$ For $j\in \{1,2\}$ take $d_j>0$ such that $0<|x-y|<d_j\implies |u_j(x)|<e'.$ Let $d=\min(d_1,d_2).$ Then $d>0$ and $0<|x-y|<d\implies |u_1(x)u_2(x)|<(e')^2\le e.$
H: $\mathbb{R}$ is not isomorphic to a proper subfield of itself let $\mathbb{R}$ be the field of real numbers. I found it stated in the nice paper On Groups that Are Isomorphic to a Proper Subgroup that there is no proper subfield $K$ of $\mathbb{R}$ which is isomorphic to $\mathbb{R}$ itself. Does someone have a proof of this fact? Thank you very much for your help in advance. NOTE1. Contrast this situation with the case of the field $\mathbb{C}$ of complex numbers, for which there exist proper subfields isomorphic to $\mathbb{C}$ itself: see e.g. Automorphisms of the Complex Numbers, Concluding Remark 2. NOTE2. This issue arose in my post Proper Subgroup of O_2(R) Isomorphic to O_2(R) about whether the orthogonal group $O_2(\mathbb{R})$ is co-Hopfian or not. AI: Say $K$ is a proper subfield of $\mathbb{R}$ and let $\phi:\mathbb{R}\rightarrow\mathbb{R}$ be an isomorphism mapping $\mathbb{R}$ onto $K$. Then, $\phi$ is an injective ring map. As $\phi$ is a ring map, $\phi(1)=1$, so $\phi(n) =n$ for $n\in\mathbb{Z}$ and also $\phi(q)=q$ for $q\in\mathbb{Q}$. First, I want to show that $\phi$ preserves the order: If $x>0$, then there exists $y>0$ such that $y^2=x$. Then, $\phi(x)=\phi(y^2)=\phi(y)^2>0$. Since $\phi$ preserves positivity, it preserves the order, i.e. if $a<b$ then $\phi(a)<\phi(b)$. Now, I want to show that $\phi$ is continuous: Say $a_i$ is a convergent sequence with limit $a\in\mathbb{R}$. Then, for each $\epsilon\in\mathbb{Q}_{>0}$, there exists $N\in\mathbb{N}$ such that $|a-a_i|<\epsilon$ for $i>N$. But then, $|\phi(a)-\phi(a_i)|=|\phi(a-a_i)|<\phi(\epsilon)=\epsilon$. Hence, $\phi(a_i)$ is also convergent and converges to $\phi(a).$ Each real number $x\in\mathbb{R}$ can be written as a limit of a rational sequence $(q_i)$. But now, $\phi(x) = \phi(\lim_i q_i) = \lim_i \phi(q_i)=\lim_i q_i=x$. Hence, $\phi$ is the identity map.
H: Approximating an expression for $\rho$ tends to zero I need to approximate the following expression within $-M\leq z \leq M$ as $\rho$ tends to zero, $\frac{\sqrt{\rho^2+(z+M)^2}-(z+M)}{\sqrt{\rho^2+(z-M)^2}-(z-M)}$, where $M$ is a constant and $z$ is a variable. My lecturer insists that a Taylor expansion about $\rho=0$ will produce the appropriate approximation; however, even with Mathematica, I was not able to get any sensible approximation. Can someone help me out? AI: I suppose your lecturer was thinking of the following. For $-M<z<M$ we have $z+M>0$, so $$\sqrt{\rho^2+(z+M)^2}-(z+M)=\frac{\rho^2}{\sqrt{\rho^2+(z+M)^2}+(z+M)}=\frac{\rho^2}{2(z+M)}+O(\rho^4),$$ while $z-M<0$, so $$\sqrt{\rho^2+(z-M)^2}-(z-M)=2(M-z)+\frac{\rho^2}{2(M-z)}+O(\rho^4).$$ Dividing the two, $$\frac{\sqrt{\rho^2+(z+M)^2}-(z+M)}{\sqrt{\rho^2+(z-M)^2}-(z-M)}=\frac{\rho^2}{4(M^2-z^2)}+O(\rho^4).$$
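For what it's worth, the leading term for $-M<z<M$ works out to $\rho^2/\bigl(4(M^2-z^2)\bigr)$, and this can be reproduced with sympy; a sketch, writing $a=z+M>0$ and $b=M-z>0$ so that the positivity assumptions are explicit:

```python
import sympy as sp

rho = sp.symbols('rho', positive=True)
a, b = sp.symbols('a b', positive=True)   # a = z + M > 0,  b = M - z > 0

num = sp.sqrt(rho**2 + a**2) - a          # sqrt(rho^2 + (z+M)^2) - (z+M)
den = sp.sqrt(rho**2 + b**2) + b          # sqrt(rho^2 + (z-M)^2) - (z-M)
expr = num / den

# coefficient of rho^2 in the expansion about rho = 0
lead = sp.limit(expr / rho**2, rho, 0)
assert sp.simplify(lead - 1 / (4*a*b)) == 0   # i.e. rho^2 / (4*(M^2 - z^2))
```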
H: Example of number field with certain conditions on ramification index and degree I am looking for a number field with degree $n$ over $\mathbb{Q}$ and with a ramified prime $p$ with ramification index $e$ such that $\textrm{gcd}(n, p-1) = 1$ and $\textrm{gcd}(e, p-1)>1$. I would also be interested in a slightly stronger condition, namely where $\textrm{gcd}(n, p-1) = 1$ and $e|(p-1)$. I would prefer the number field to be as simple as possible. Simple here could mean small degree, or small absolute value of the discriminant of the extension. So far, I have had no luck with trying simple cases for quadratic, cubic and quartic extensions. For those cases I either get complete ramification ($e = n$) or for $\mathbb{Q}[\sqrt{p}, \sqrt{q}]$ I get $e=2$ and $n=4$. Cyclotomic extensions also do not work, since there we have complete ramification as well. AI: Such a number field $K$ cannot be Galois over $\mathbb Q$. Indeed, if $K$ is Galois, then every prime ideal above $p$ has the same ramification index and residue degree, so $e\mid n$; but then $\gcd(e, p-1)$ divides $\gcd(n, p-1) = 1$, contradicting $\gcd(e, p-1)>1$. That rules out all quadratic, biquadratic and cyclotomic extensions. For an example when $K$ is not Galois, one place to start would be to try to find an odd-degree extension in which $p=3$ has ramification index $2$. An example from LMFDB is $K = \mathbb Q(\alpha)$ where $\alpha$ is a root of $x^3 - x^2 + 2x + 1$. In this extension, there are two primes $\mathfrak p_1, \mathfrak p_2$ above $3$. One is unramified, and the other has ramification index $2$.
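The splitting of $3$ in this field can be read off from the factorization of the defining polynomial modulo $3$ (legitimate here because $\operatorname{disc}(f)=-87$ is squarefree, so $\mathcal O_K=\mathbb Z[\alpha]$); a quick sympy check:

```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - x**2 + 2*x + 1

# squarefree discriminant => Z[alpha] is the full ring of integers
assert sp.discriminant(f, x) == -87

# factor f modulo 3: one double root and one simple root,
# i.e. ramification indices e = 2 and e = 1 above p = 3
_, factors = sp.factor_list(f, modulus=3)
assert sorted(e for _, e in factors) == [1, 2]
```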
H: Conditions for interchanging order of limits and summations Let $f: \mathbb{N} \times \mathbb{N} \to \overline{\mathbb{R}}$. Then under which conditions is the expression $\lim\limits_{n\to\infty}\sum\limits_{m=1}^\infty f(m,n)=\sum\limits_{m=1}^\infty \lim\limits_{n\to\infty} f(m,n)$ valid? Would anyone have a rigorous answer to this? Any proof using measure theory, or elementary calculus, is more than welcome. I know that a very similar question has been asked here: Under what condition we can interchange order of a limit and a summation?, but I would need more detail. For example, one of the answers states that the dominated convergence theorem suffices as 'sums are just integrals with respect to the counting measure on $\mathbb{N}$'. I am unable to see how this works; I don't know how this 'counting measure' can be used with the dominated convergence theorem to provide the conclusion. AI: While you look for more advanced considerations, let me suggest some simple criteria. Suppose we have a double sequence $a_{n,m}$. If $\lim_\limits{m \to \infty}a_{n,m}=a_n$ exists for each $n\in \mathbb{N}$, and the series $\sum\limits_{n=1}^{\infty}a_{n,m}$ converges uniformly in $m$, then we can interchange the order of limit and summation. Suppose the series $\sum\limits_{m,n =1}^{\infty}a_{n,m}$, $\sum\limits_{n =1}^{\infty}\sum\limits_{m =1}^{\infty}a_{n,m}$ and $\sum\limits_{m =1}^{\infty}\sum\limits_{n =1}^{\infty}a_{n,m}$ all converge. Then they are all equal to one and the same value. Suppose $f(x,y)$ is defined on some set $E$, which includes all points of some rectangle with center $(x_0,y_0)$, except possibly the lines $y=y_0$ and $x=x_0$. If the double limit of $f$ with respect to $E$ exists, and for any $y \ne y_0$ in some neighbourhood of $y_0$ the limit $\lim\limits_{x \to x_0}f(x,y) = g(y)$ exists, then $\lim\limits_{y \to y_0}g(y)$ exists and $$\lim\limits_{y \to y_0}\lim\limits_{x \to x_0}f(x,y) = \lim\limits_{(x,y) \to (x_0,y_0)}f(x,y)$$
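A standard example showing that some condition is genuinely needed is $f(m,n)=1$ if $m=n$ and $0$ otherwise: here $\lim_{n\to\infty}\sum_m f(m,n)=1$ while $\sum_m\lim_{n\to\infty}f(m,n)=0$ (the series does not converge uniformly in $n$). A truncated-sum sketch:

```python
# f(m, n) = 1 if m == n else 0: each row converges pointwise to 0,
# but for every n the sum over m equals 1.
M_CUT = 10_000   # truncation of the infinite sum over m

def inner_sum(n):
    # sum over m of f(m, n), truncated at M_CUT (exact whenever n <= M_CUT)
    return sum(1 if m == n else 0 for m in range(1, M_CUT + 1))

# lim_{n -> inf} sum_m f(m, n) = 1 ...
assert all(inner_sum(n) == 1 for n in range(1, 200))

# ... but sum_m lim_{n -> inf} f(m, n) is a sum of zeros
assert sum(0 for m in range(1, M_CUT + 1)) == 0
```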
H: Can we imply $\exists x\phi(x)\rightarrow \exists x\psi(x)$ from $\exists x(\phi(x)→\psi(x))$ Let $\Sigma$ be a consistent set of formulas in first-order logic which implies $\exists x(\phi (x)→\psi(x))$. Which one of these statements is a logical consequence of $\Sigma$? a) $\forall x\phi(x)→\forall y\psi(y) $ b) $\exists x\phi(x)→\forall y\psi(y)$ c) $\exists x\phi(x)→\exists x\psi(x) $ d) $\forall x\phi (x)→\exists y\psi(y)$ I think this sentence implies $\exists x\phi(x)→\exists x\psi(x) $, but the answer is $\forall x\phi (x)→\exists y\psi(y)$. How can we imply that there exists a $y$ for which $\psi(y)$ holds? I can't understand it. AI: For a counter-example to (c), you just need a single $x$ for which $\phi(x)$ is false. Then for this $x$, we have $\phi(x)\to$ anything at all. So $\exists x(\phi(x)\to\psi(x))$ is true. But we have no reason to infer that $\exists x(\psi(x))$. As for your final paragraph: it looks like you are interpreting $$\forall x\phi (x)\to\exists y\psi(y)$$ as $$\forall x(\phi (x)\to\exists y\psi(y))$$ whereas the correct interpretation is $$(\forall x\phi (x))\to(\exists y\psi(y))$$
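The counterexample to (c) can even be machine-checked by brute force over a two-element model (domain $\{0,1\}$, $\phi$ true only at $0$, $\psi$ never true), in the spirit of the answer; a small sketch:

```python
dom = [0, 1]
phi = lambda t: t == 0     # phi holds only at 0
psi = lambda t: False      # psi never holds

def implies(p, q):
    return (not p) or q

premise = any(implies(phi(t), psi(t)) for t in dom)               # ∃x(φ(x)→ψ(x))
c = implies(any(phi(t) for t in dom), any(psi(t) for t in dom))   # ∃xφ(x) → ∃xψ(x)
d = implies(all(phi(t) for t in dom), any(psi(t) for t in dom))   # ∀xφ(x) → ∃yψ(y)

assert premise      # the hypothesis holds in this model...
assert not c        # ...but (c) fails: ∃xφ is true while ∃xψ is false
assert d            # (d) survives (here vacuously, since ∀xφ is false)
```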
H: If $R$ is a Euclidean domain that is not a field, is $R[X]$ a PID? I have a question in ring theory whose answer I am looking for. Consider $R$ to be a Euclidean domain such that $R$ is not a field. Is the polynomial ring $R[X]$ then always a PID or not? Attempt: $R$ is an ED implies $R$ is a PID. But can someone please tell me what I can say about $R[X]$ from this? AI: In this case $R[X]$ is never a PID. Let $\pi$ be a prime element in $R$ (a nonzero generator of a maximal ideal). Then in $R[X]$ the ideal $(\pi,X)$ is non-principal ($R[X]$ is a UFD and $\pi$ and $X$ have no nontrivial common factor).
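For the concrete case $R=\mathbb Z$, $\pi=2$: every element of $(2,X)\subset\mathbb Z[X]$ has even constant term, so $1\notin(2,X)$; since $\gcd(2,X)=1$, a principal generator would have to be a unit, which is impossible. A randomized sympy sketch of the constant-term observation (the degree bound and coefficient range are arbitrary choices for the demo):

```python
import random
import sympy as sp

x = sp.symbols('x')
random.seed(0)

# gcd(2, X) = 1 in Z[X], so a generator of (2, X) would have to be a unit
assert sp.gcd(sp.Integer(2), x) == 1

# ...but every combination 2*f + X*g has even constant term, so 1 is not in (2, X)
for _ in range(100):
    f = sum(random.randint(-9, 9) * x**k for k in range(3))
    g = sum(random.randint(-9, 9) * x**k for k in range(3))
    const_term = sp.expand(2*f + x*g).subs(x, 0)
    assert const_term % 2 == 0
```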
H: Given $\gamma(t)$ a $\mathcal{C}^1$ path on $\mathbb{C}^{\times}$, why is it true that $|\gamma(t)|$ is also $\mathcal{C}^1$? Suppose $\gamma:[0,1]\to\mathbb{C}^{\times}$ is continuously differentiable. There is a proof I saw that uses the fact that $s(t)=|\gamma(t)|$ must also be continuously differentiable, but I cannot quite convince myself why this is the case. (Of course, the differentiability of $s(t)$ is in the sense of real variables.) So we want to show that $\lim_{h\to 0}\frac{|\gamma(t+h)|-|\gamma(t)|}{h}$ exists given that $\gamma(t)$ is continuously differentiable. I have thought about using the triangle inequality; with a bit of manipulation, one can obtain $$\lim_{h\to 0}\frac{|\gamma(t+h)|-|\gamma(t)|}{h}\leq \lim_{h\to 0}\frac{|\gamma(t+h)-\gamma(t)|}{h}$$ Now I am a bit stuck. I think we can say that since $\gamma(t)\in \mathcal{C}^1$, the limit $\lim_{h\to 0}\frac{|\gamma(t+h)-\gamma(t)|}{h}$ must exist, but what is next? I can see the statement to be true but I wanted to construct something more concrete. Many thanks in advance! AI: The absolute value function from $\Bbb C\setminus\{0\}$ into $\Bbb R$ is real-differentiable everywhere, and its derivative is continuous; in other words, it's a class $C^1$ function. So, $t\mapsto|\gamma(t)|$ is the composition of two class $C^1$ functions, and therefore it is a class $C^1$ function too.
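Concretely (an illustration, not the proof), the chain rule gives $s'(t)=\operatorname{Re}\bigl(\overline{\gamma(t)}\,\gamma'(t)\bigr)/|\gamma(t)|$, which is where $\gamma(t)\neq0$ is needed; a numerical check against a finite difference for the hypothetical test curve $\gamma(t)=e^{it}(2+t)$:

```python
import cmath

def gamma(t):
    return cmath.exp(1j * t) * (2 + t)

def dgamma(t):
    # derivative of exp(it)*(2 + t): i*exp(it)*(2 + t) + exp(it)
    return 1j * cmath.exp(1j * t) * (2 + t) + cmath.exp(1j * t)

def s_prime(t):
    # chain-rule formula: d|gamma|/dt = Re(conj(gamma) * gamma') / |gamma|
    return (gamma(t).conjugate() * dgamma(t)).real / abs(gamma(t))

t, h = 0.7, 1e-6
finite_diff = (abs(gamma(t + h)) - abs(gamma(t - h))) / (2 * h)
assert abs(s_prime(t) - finite_diff) < 1e-6
```

(For this curve $|\gamma(t)|=2+t$, so both sides should be close to $1$.)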
H: Coordinate Geometry Question using matrices Let on the x-y plane, the distance between the points $A(x_1,y_1)$ and $B(x_2,y_2)$ be $d$. Another point $P(a,b)$ satisfies the equations $x_1a+y_1b=1$ and $x_2a+y_2b=1$ and the distance between point $P$ from the origin,$O(0,0)$ is $p$. Find the area of triangle $\Delta OAB$. My solution: Define a non-singular square matrix $$A=\begin{bmatrix} x_{1} & y_{1} \\ x_{2} & y_{2} \end{bmatrix}$$. The required area, $\Delta = \frac{1}{2} \begin{vmatrix} x_{1} & y_{1} \\ x_{2} & y_{2} \end{vmatrix} = \frac{1}{2} \det(A)$ The given condition of point $P$ can be written as $$\begin{bmatrix} x_{1} & y_{1} \\ x_{2} & y_{2} \end{bmatrix} \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$ $$\implies \begin{bmatrix} a \\ b \end{bmatrix} = A^{-1} \begin{bmatrix} 1 \\ 1 \end{bmatrix}$$ (on pre multiplication of both sides with $A^{-1}$) The inverse of matrix $A$, $A^{-1} = \frac{1}{2 \Delta} \begin{bmatrix} y_{2} & -y_{1} \\ -x_{2} & x_{1} \end{bmatrix}$ $$\implies \begin{bmatrix} a \\ b \end{bmatrix} = \frac{\begin{bmatrix} y_{2}-y_{1} \\ -x_{2}+x_{1} \end{bmatrix}}{2 \Delta}$$. Comparing element wise and taking sum of their squares, $p^2 = \frac{d^2}{4 \Delta^2} \implies \Delta = \frac{d}{2p}$ Is there any other way to solve this question? AI: Points $A$ and $B$ lie on the line $\ell \ldots ax+by-1=0$. The distance of $\ell$ from the origin is $$d(\ell,0) = \frac{|a\cdot 0+b\cdot 0-1|}{\sqrt{a^2+b^2}}=\frac1p$$ since $\sqrt{a^2+b^2} = d(P,O) = p$. The area of your triangle is $$\text{area} = \frac12 \,\text{base}\cdot \text{height} = \frac12 d(A,B)\cdot d(\ell,O) = \frac12 \cdot d \cdot \frac1p = \frac{d}{2p}.$$
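A quick numerical sanity check of the identity $\Delta=\dfrac{d}{2p}$, with hypothetical coordinates for $A$ and $B$ (any choice with $O$, $A$, $B$ not collinear works):

```python
import numpy as np

# hypothetical points A and B
A = np.array([3.0, 1.0])
B = np.array([1.0, 4.0])

M = np.vstack([A, B])               # rows (x1, y1) and (x2, y2)
P = np.linalg.solve(M, np.ones(2))  # (a, b) from x_i*a + y_i*b = 1

d = np.linalg.norm(A - B)           # distance |AB|
p = np.linalg.norm(P)               # distance |OP|
area = 0.5 * abs(np.linalg.det(M))  # area of triangle OAB

assert np.isclose(area, d / (2 * p))
```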
H: Complement of family of binary sequence Consider the set of binary sequences. We start with the element $(0,0,0,...)$, then construct a family of sequences by changing single entries, so that each element of the family differs from the zero sequence in exactly one entry. Denote these elements by $x_{n}$. We then proceed in the same way: starting from each element of this family, we again change single entries to construct new families of sequences, denoting by $x_{nm}$ the elements constructed from $x_{n}$. We do the same for all elements we obtain, producing $x_{nmi...}$. Finally, taking the union of all the elements, denote it by $X$. What is the complement of $X$? AI: The base ($0$-th) step gives the single sequence with zero nonzero entries. The $1$st step gives all sequences with at most $1$ nonzero entry. The $n$-th step gives all sequences with at most $n$ nonzero entries. Altogether the steps yield exactly the sequences with a finite number of nonzero entries. The complement is now apparent: it consists of the sequences with infinitely many nonzero entries.
H: bilinear form of positive matrix If $A$ is a positive-definite matrix and $x$, $y$ are nonnegative vectors, is $f(A)=x^TAy$ positive? AI: The answer is no; we have $$\left\langle \begin{bmatrix} 2 & -1 \\ -1 & 1\end{bmatrix}\begin{bmatrix} 0 \\ 1\end{bmatrix}, \begin{bmatrix} 1 \\ 0\end{bmatrix}\right\rangle = \left\langle \begin{bmatrix} -1 \\ 1\end{bmatrix}, \begin{bmatrix} 1 \\ 0\end{bmatrix}\right\rangle=-1.$$
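The counterexample is easy to verify numerically (the eigenvalue check confirms that $A$ is positive-definite):

```python
import numpy as np

A = np.array([[2.0, -1.0],
              [-1.0, 1.0]])
x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# A is symmetric positive-definite: both eigenvalues are positive
assert np.all(np.linalg.eigvalsh(A) > 0)

# yet the bilinear form at the nonnegative vectors x, y is negative
assert x @ A @ y == -1.0
```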
H: Calculation of covariant derivative being chart-dependent Ref. Schuller's lecture 7 on gravity and light (derivation starts at this timestamp). I'm watching a lecture that introduces connection coefficients - specifically the part of the video showing for the first time how to calculate the covariant derivative. Relevant timestamp here, the calculation is being done for $\nabla_XY$ in the particular chart $(U,x)$. In short, it goes like this (let $X,Y$ be smooth vector fields on a smooth manifold $M$): $$\nabla_XY=X^i\nabla_i(Y^m\partial_m)=X^i(\nabla_iY^m)\partial_m+X^iY^m(\nabla_i\partial_m)$$ Regarding the $\nabla_i\partial_m$ term, the lecturer says: What is the result? It's a vector field because this ($\partial_m$) is a vector field. We act on a vector field to get a vector field. Whatever it is, we can expand it in the chart [...] as $\partial_q$ and it has some coefficient function - let's call it $\Gamma^q$. But these coefficients also depend on which basis vectors (i.e. $i$ and $m$) we chose, so it becomes $\Gamma^q_{\ \ mi}$. So first of all, when defining the covariant derivative towards the start of the lecture (see definition on the board here), the implication is that $\nabla$ takes in vector fields $A,B$ and gives back a vector field $\nabla_AB$. This is fine, but I want to confirm: is the reason why we can take $A=\partial_i$ and $B=\partial_m$ (which are effectively chart-dependent vector fields), as has been done in the last term of the above equation, that we're confining our attention only to a particular chart $(U,x)$? In other words, if we change the chart map (but not the neighborhood, i.e. $(U,y)$), $\nabla_i\partial_m$ will yield an entirely different vector field, and also $\nabla_AB$ will be entirely different, right? AI: The expression $\nabla_i\partial_m$ only makes sense in a coordinate chart $(U, (x^1, \dots, x^n))$.
In different coordinates $(U, (y^1, \dots, y^n))$ we won't necessarily have $\partial_{x^i} = \partial_{y^i}$, so $\nabla_{\partial_{x^i}}\partial_{x^m}$ and $\nabla_{\partial_{y^i}}\partial_{y^m}$ will be different in general. Note however that $\nabla_XY$ is perfectly well-defined (when you change coordinates, the functions $X^i$ and $Y^j$ also change).
H: The cardinality of an equivalence relation over a set I'm trying to prove the following statement: let $A$ be an infinite set, then for any equivalence relation $E$ on $A$, $ |E| = |A|$ But I'm really stuck. Trying to show a bijection from $E$ to $A$ made sense only when dealing with natural numbers, using maybe $f(x,y) = 2^x3^y$ but what do I do when these are real numbers? Any idea on how this can be shown? Thanks in advance. AI: Since $E$ is an equivalence relation, it is reflexive in particular. Thus, you have an injection $A \hookrightarrow E$ given by $x \mapsto (x, x)$. Thus, $|A| \le |E|$. On the other hand, by definition, $E \subset A \times A$ and thus, $|E| \le |A|^2$. Now, since $A$ is infinite, we have $|A| = |A|^2,$ assuming the Axiom of Choice. Then, by the Schröder–Bernstein theorem, we have $|E| = |A|$. Note that you do indeed need the Axiom of Choice. Else, we may have an infinite set $A$ such that $|A| \neq |A \times A|.$ Then, taking $E = A \times A$ (which is indeed an equivalence relation), we get a counterexample.
H: Statistical Inversion Problem $F = Ku + \mathcal{E}$ derive conditional probability density $p(f | u)$ Consider the following inversion problem $f = Ku + \varepsilon$ where $f \in \mathbb{R}^{m}$, $u \in \mathbb{R}^{n}$, $K \in \mathbb{R}^{m,n}$ and $\varepsilon$ is an additive, Gaussian noise. In the Bayesian approach towards inverse problems, where you don't rely on explicit regularizers, you consider this problem as $F = Ku + \mathcal{E}$ where $F$ and $\mathcal{E}$ are random variables, $\mathcal{E} \sim \mathcal{N}(0, \Sigma_{\varepsilon})$. Apparently one can then determine the conditional probability density of $F$ given $u$ as $$ p(f | u) \propto \operatorname{exp}(-\frac{1}{2}\|f - Ku\|^2_{{\Sigma_{\varepsilon}}^{-1}}) $$ where $\|y\|^2_{A} := y^{T}Ay$. How is this derived? AI: In general, one can use the expression for conditional densities, $p(f | u) = p(f, u) / p(u)$. In this case, however, no one would do that; it is simpler to use basic properties of Gaussians. The distribution of a Gaussian random variable is completely determined by two things: its mean and its covariance matrix. Moreover, if $X$ is a Gaussian random vector, then $AX + b$ is Gaussian for any deterministic matrix $A$ and vector $b$. Applying this to your situation: given $u$, $Ku$ is a deterministic vector. Hence $Ku + \mathcal{E}$ is Gaussian with mean $\mathbb{E}[Ku + \mathcal{E}] = Ku$ and covariance $\mathbb{E}[(Ku + \mathcal{E} - Ku)(Ku + \mathcal{E}- Ku)^T ] = \mathbb{E}[\mathcal{E}\mathcal{E}^T]=\Sigma_\varepsilon$. Knowing this, and the fact that the density $p(z)$ of a Gaussian with mean $\mu$ and covariance $\Sigma$ is simply $$p(z) \propto \exp\left(-\frac{1}{2}\|z - \mu\|^2_{\Sigma^{-1}}\right),$$ you can write down the conditional density above, simply by treating $u$ (and hence $Ku$) as being "fixed".
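A toy simulation illustrates the statement $F\mid u\sim\mathcal N(Ku,\Sigma_\varepsilon)$: with $u$ fixed, the empirical mean and covariance of samples of $Ku+\mathcal E$ recover $Ku$ and $\Sigma_\varepsilon$ (all the numbers below are made up for the sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

K = np.array([[1.0, 2.0],
              [0.0, 1.0],
              [1.0, -1.0]])        # toy K: m = 3, n = 2
u = np.array([0.5, -1.0])          # fixed u
Sigma = np.diag([0.2, 0.5, 0.1])   # noise covariance

N = 200_000
eps = rng.multivariate_normal(np.zeros(3), Sigma, size=N)
F = K @ u + eps                    # samples of F given u

# empirical mean ~ Ku, empirical covariance ~ Sigma
assert np.allclose(F.mean(axis=0), K @ u, atol=0.01)
assert np.allclose(np.cov(F.T), Sigma, atol=0.01)
```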
H: Prove that $E(Y_i \bar{Y}) = \frac{\sigma^2}{n}+\mu^2$ Given $Y_1, Y_2,...Y_n$ are i.i.d random variable which follows a distribution of $N(\mu, \sigma^2)$, I'm trying to prove that $$E(Y_i \bar{Y}) = \frac{\sigma^2}{n}+\mu^2$$ Here's what I've tried: $$E(Y_i \bar{Y}) = E(Y_i \frac{1}{n} \sum_{i=1}^{n}Y_i )$$ $$ = \frac{1}{n} \sum_{i=1}^{n}E(Y_i Y_i )$$ Since $E(Y_i^2) = Var(Y_i)+E(Y_i)^2 = \sigma^2 +\mu^2$ $$E(Y_i \bar{Y}) = \frac{1}{n} \sum_{i=1}^{n}E(Y_i^2)$$ $$ = \frac{1}{n} n (\sigma^2 +\mu^2)$$ $$ = (\sigma^2 +\mu^2)$$ which does not prove it right. I am unsure of where i got this wrong. Can anyone help me with this? Thank you. AI: Here is the error in your argument caused by the choice of the index letter that is already used. $E(Y_i \bar{Y}) = E(Y_\color{red}{i} \frac{1}{n} \sum_{\color{red}{i=1}}^{n}Y_\color{red}{i} )$ To be correct, \begin{align} E(Y_i \bar{Y}) = E(Y_i \frac{1}{n} \sum_{\color{blue}{j=1}}^{n}Y_\color{blue}{j} ) = E(\sum_{j=1}^{n} \frac{1}{n}Y_iY_j) =\frac{1}{n}\sum_{j=1}^{n} E(Y_iY_j) \end{align} If $i \neq j$, we have $E(Y_i Y_j)=E(Y_i)E(Y_j)=\mu^2$. If $i=j$, $E(Y_i^2)=Var(Y_i) + E(Y_i)^2=\sigma^2+\mu^2$ Thus \begin{align} E(Y_i \bar{Y}) = \frac{1}{n}\sum_{j=1}^{n} E(Y_iY_j)&=\frac{1}{n} \left( (n-1)\mu^2+\ \sigma^2 + \mu^2\right) \\ &=\frac{1}{n}(n\mu^2 + \sigma^2)=\mu^2 + \frac{1}{n}\sigma^2 \end{align}
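The identity also checks out in simulation (hypothetical parameters $\mu=1$, $\sigma=2$, $n=5$, so the target value is $\sigma^2/n+\mu^2=1.8$):

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n = 1.0, 2.0, 5

reps = 400_000
Y = rng.normal(mu, sigma, size=(reps, n))
Ybar = Y.mean(axis=1)

estimate = np.mean(Y[:, 0] * Ybar)   # Monte Carlo estimate of E[Y_1 * Ybar]
theory = sigma**2 / n + mu**2        # 4/5 + 1 = 1.8

assert abs(estimate - theory) < 0.05
```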
H: Going-down theorem hypothesis Something I don't get from the hypothesis of this theorem. If $C$ is the integral closure of $A$, and $A$ is integrally closed (since $A$ is an integral domain, it's integrally closed in its field of fractions), then $A=C$. If $B$ is integral over $A$, then $B\subset C$. Since $A\subset B$, we have $A=B$... What is wrong (beyond myself)? AI: $B$ being integral over $A$ does not imply that $B$ is contained in $C$, as $A$ being integrally closed need not imply that $A$ is integrally closed in $B$.
H: Operation in the notation for group homomorphisms A group homomorphism $\varphi$ from the additive group of real numbers to the multiplicative group of non-zero real numbers... Could this be written as $$\varphi :\, (\mathbb{R},+)\to (\mathbb{R}\setminus \{0\},\cdot )?$$ I'm not sure, because I have only encountered the following notation: $$\varphi :\, \mathbb{R}\to\mathbb{R}\setminus\{0\}$$ where the operations on the groups are specified elsewhere. AI: It is clearer to write $$\varphi :\, (\mathbb{R},+)\to (\mathbb{R}\setminus \{0\},\times),$$ or simply $$\varphi :\, (\mathbb{R},+)\to \mathbb{R}^\times,$$ as @Bernard commented.
H: The number of integral points on the hyperbola $x^2 - y^2 = (2000)^2$ is The number of integral points on the hyperbola $x^2 - y^2 = (2000)^2$ is ____? (An integral point is a point both of whose coordinates are integers. My attempt: The equation can be rewritten as: $$(x-y)(x+y) = 2000^2 = 2^8 \cdot 5^6$$ Now, both of $x+y$ and $x-y$ need to be odd or even. It is impossible for both of them to be odd, hence both of them must be even. The number of even pairs of factors of $2^8 \cdot 5^6$ is $7 \times 7 = 49$. Hence, the total number of integral points is $\boxed{49}$ The answer: The textbook claims the answer to be $\boxed{98}$ and counts the number of even pairs of factors as $7 \times 7 \times 2 = 98$. Could someone point out where I am going wrong? (or if the textbook has made an error) EDIT: This question has been answered but I wanted to add that the 2000 seemed conspicuous, so I went to Approach0. This problem is taken from AIME 2000. the referenced AoPS link also does a great job explaining the solution. AI: The number of even factors of $2^8\cdot 5^6$ is $8\cdot 7=56$. You can have $1$ to $8$ factors of $2$ and $0$ to $6$ factors of $5$. You need both factors to be even, so the first one can have from $1$ to $7$ factors of $2$, which gives $49$ factorizations into two even numbers. We insist that $|x| \ge |y|$ so that the difference of squares is positive. The $49$ factorizations include $24$ with the absolute values of factors different, each in two orders, plus the one factorization $2000 \cdot 2000$. Each of the $24$ that have $|x| \gt |y|$ give four choices of signs, while $2000^2-0^2$ gives two choices of signs, for a total of $98$ solutions.
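A brute-force count over factorizations of $2000^2$ confirms the total of $98$; the sketch below enumerates $x-y=d$, $x+y=N/d$ over all divisor pairs and sign choices:

```python
from math import isqrt

N = 2000**2   # (x - y)(x + y) = N

solutions = set()
for d in range(1, isqrt(N) + 1):
    if N % d:
        continue
    for lo, hi in {(d, N // d), (N // d, d)}:
        # x - y = lo, x + y = hi requires lo + hi even (here: both factors even)
        if (lo + hi) % 2 == 0:
            x, y = (lo + hi) // 2, (hi - lo) // 2
            for sx in (x, -x):
                for sy in (y, -y):
                    solutions.add((sx, sy))

assert all(sx**2 - sy**2 == N for sx, sy in solutions)
assert len(solutions) == 98
```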
H: Weaker conditions for differentiating under the integral sign Standard theorems of real analysis give conditions under which it holds $$\int_0^1 \partial_x f(x,y)dy = \frac{d}{dx}\int_0^1 f(x,y)\,.$$ In most of the formulations that I have found, it is required that, for almost every $y$, $f$ is everywhere differentiable. I'm wondering if this condition can be weakened, at least in some particular setting. Consider an integral operator $F$ on $L^2(0,1)$ which maps an element $\phi$ to $$ F\phi(x) = \int_0^1 k(x,y)\phi(y)dy\,.$$ $k(x,y)$ is supposed to be some bounded continuous function on $(0,1)^2$. If $k$ is of class $C^1$, then all functions in the image of $F$ are of class $C^1$. But can we give some weaker condition in order to have an image at least differentiable? For example if $k(x,y)=|x-y|$, then it can be proven explicitly (just by writing down the definition of derivative and by bounding the remainder) that it holds $$\frac{d}{dx}F\phi(x) = \int_0^1sign(x-y)\phi(y)dy\,.$$ which is $C^0$ and so $F\phi(x)$ is even $C^1$. Is this a particular case of some general and well known result? AI: I am not an expert of this topic, and would love to see some nice references to well-known results of this kind. Meanwhile, let me try to first perform a heuristic computation and see what kind of conditions we can pick up to justify each step. Let $k $ be measurable. Then \begin{align*} F\phi(x_1) - F\phi(x_0) &= \int_{0}^{1} (k(x_1,y) - k(x_0,y))\phi(y) \, \mathrm{d}y \\ &= \int_{0}^{1} \left( \int_{x_0}^{x_1} \partial_x k(x, y) \, \mathrm{d}x \right) \phi(y) \, \mathrm{d}y \tag{1} \\ &= \int_{x_0}^{x_1} \int_{0}^{1} \partial_x k(x, y) \phi(y) \, \mathrm{d}y\mathrm{d}x. \tag{2} \end{align*} $\text{(1)}$ is justified if $x \mapsto k(x, y)$ is absolutely continuous on any compact intervals for any $y$. 
$\text{(2)}$ is justified by the Fubini-Tonelli's Theorem if $\int_{x_0}^{x_1} \int_{0}^{1} \left| \partial_x k(x, y) \phi(y) \right| \, \mathrm{d}y\mathrm{d}x < \infty$ for any interval $[x_0, x_1]$. In particular, this occurs if $y \mapsto \partial_x k(x, y)$, regarded as a family of maps indexed by $x$, is dominated by an $L^2$ function. Under the above conditions, it follows that $F\phi$ is absolutely continuous and $$ \frac{\mathrm{d}}{\mathrm{d}x}F\phi(x) = \int_{0}^{1} \partial_x k(x, y) \phi(y) \, \mathrm{d}y \tag{*} $$ almost everywhere. Example 1. Suppose that $k(x, y)$ is uniformly Lipschitz in the variable $x$, i.e., there exists $L \geq 0$ such that $\left| k(x_1,y) - k(x_0,y) \right| \leq L\left|x_1 - x_0\right|$ for any $x_0$, $x_1$, and $y$. Then $x \mapsto k(x, y)$ is absolutely continuous and $\left| \partial_x k(x, y) \right| \leq L$, and so, both conditions are satisfied and the above conclusion holds. Example 2. Suppose that the kernel is of the form $k(x-y)$. If $k$ is locally absolutely continuous and its derivative is locally $L^2$, then the conditions are satisfied and we have $$ \frac{\mathrm{d}}{\mathrm{d}x}F\phi(x) = \int_{0}^{1} k'(x - y) \phi(y) \, \mathrm{d}y. $$ Moreover, by the $L^p$-continuity of translation operator, it follows that $\frac{\mathrm{d}}{\mathrm{d}x}F\phi$ is continuous.
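Example 1 covers the kernel $k(x,y)=|x-y|$ from the question (it is uniformly Lipschitz in $x$ with $L=1$), and the resulting formula $(*)$ can be checked numerically; a sketch with the hypothetical choice $\phi(y)=y^2$ and a simple trapezoidal quadrature:

```python
import numpy as np

y = np.linspace(0.0, 1.0, 20001)
dy = y[1] - y[0]
phi = y**2                      # a hypothetical phi in L^2(0, 1)

def trap(vals):
    # composite trapezoidal rule on the fixed grid
    return float(np.sum(vals[:-1] + vals[1:]) * 0.5 * dy)

def F(x):
    # F(x) = int_0^1 |x - y| * phi(y) dy
    return trap(np.abs(x - y) * phi)

def F_prime(x):
    # formula (*): int_0^1 sign(x - y) * phi(y) dy
    return trap(np.sign(x - y) * phi)

x0, h = 0.37, 1e-5
finite_diff = (F(x0 + h) - F(x0 - h)) / (2 * h)
assert abs(finite_diff - F_prime(x0)) < 1e-3
```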
H: Is there an easier prime factorization method for the sum of a prime's powers? I need to obtain prime factorizations of numbers of the type: $\sum_{i=0}^n p^i$, for any prime number $p$ (not the same one each time). Do you know if there is a quicker algorithm to calculate these factorizations than those used for other natural numbers? I don't know if there is a known solution. My only lead is that all Mersenne primes are of the form $\sum_{i=0}^n 2^i$. Edit: by prime factorization I mean, for example, if $p$ is 3 and $n$ is 6, the number is 364, and the prime factorization I'm looking for is 2^2, 7 and 13. AI: You have a geometric series, so $\sum_{i=0}^n p^i=\frac {p^{n+1}-1}{p-1}$. When the prime is $2$ this does not give a factorization because the denominator is $1$. For all other primes it does.
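With the question's example $364 = 1+3+9+27+81+243 = (3^6-1)/(3-1)$, one structural shortcut worth knowing is the cyclotomic factorization $\frac{p^m-1}{p-1}=\prod_{d\mid m,\,d>1}\Phi_d(p)$, which splits the number into smaller pieces before any generic factoring is attempted; a sympy sketch:

```python
import sympy as sp

p, terms = 3, 6                       # six terms: 1 + 3 + ... + 3^5
s = sum(p**i for i in range(terms))
assert s == (p**terms - 1) // (p - 1) == 364   # geometric series

# cyclotomic factorization of (p^m - 1)/(p - 1) for m = 6:
# Phi_2(3) = 4, Phi_3(3) = 13, Phi_6(3) = 7, and 4 * 13 * 7 = 364
prod = 1
for dv in sp.divisors(terms):
    if dv > 1:
        prod *= sp.cyclotomic_poly(dv, p)
assert prod == s

assert sp.factorint(s) == {2: 2, 7: 1, 13: 1}  # 364 = 2^2 * 7 * 13
```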
H: What is the range of $x,y,z$ when $n$ is a known natural number in: $n=x^5+y^5+z^5$ I have the following question: What is the range of the sum of three distinct natural numbers to the fifth power that are equal to a known natural number? Mathematically speaking: $$n=x^5+y^5+z^5\tag1$$ When $n\in\mathbb{N}$ is known, what is the range that $x,y,z\in\mathbb{N}$ can lie in, when we know that $x\ne y\ne z$? I think that the range should be: $1\le x,y,z\le\left(\left\lceil\sqrt{n}\right\rceil\right)^5$ but I am not sure why that should be true. AI: If $x$ is maximal among them, then $y\leq x-1$ and $z\leq x-2$ (or vice versa), so $$n=x^5+y^5+z^5\leq x^5+(x-1)^5+(x-2)^5<3x^5,$$ which gives $$x>\sqrt[5]{\frac n3}.$$ On the other hand, the smallest possible value of $y^5+z^5$ for distinct naturals is $1^5+2^5=33$, so $x^5\leq n-33$. Hence the maximal one of the three satisfies $$\sqrt[5]{\frac n3}< x\leq \sqrt[5]{n-33},$$ and in particular all of $x,y,z$ lie in $\{1,\dots,\lfloor\sqrt[5]{n-33}\rfloor\}$ (for a representation to exist at all, one needs $n\geq 1^5+2^5+3^5=276$).
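For the maximal variable one always has $n\le x^5+(x-1)^5+(x-2)^5<3x^5$ and $y^5+z^5\ge1^5+2^5=33$, so $\sqrt[5]{n/3}<x\le\sqrt[5]{n-33}$; a brute-force check of these bounds over all distinct triples up to a small cutoff (exact integer comparisons avoid floating-point fifth roots):

```python
from itertools import combinations

LIMIT = 12
count = 0
for z, y, x in combinations(range(1, LIMIT + 1), 3):  # z < y < x, all distinct
    n = x**5 + y**5 + z**5
    # lower bound: n <= x^5 + (x-1)^5 + (x-2)^5 < 3*x^5, so x > (n/3)^(1/5)
    assert n < 3 * x**5
    # upper bound: y^5 + z^5 >= 1 + 32 = 33, so x <= (n - 33)^(1/5)
    assert x**5 <= n - 33
    count += 1

assert count == 220   # all C(12, 3) triples were checked
```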
H: Convergence in $p$th mean does not imply convergence in mean. My book proves the following: Let $(\Omega,\mathcal{A},\mu)$ be a finite measure space. Then every sequence $(f_n)$ in $\mathcal{L}^p$ which converges in $p$th mean to an $f \in \mathcal{L}^p$ for some $p\geq 1$ also converges to $f$ in mean. It then asks me to provide an example where this fails when $\mu$ is not finite, and hints at the following previous example: $\Omega=\mathbb{N}$, $\mathcal{A}=\mathcal{P}(\mathbb{N})$, and $\mu$ defined by $\alpha_n:=\mu(\{n\})=\frac{1}{\sqrt{n}}$ for each $n\in\mathbb{N}$. Then if $f$ is the function on $\Omega$ defined by $f(n)=\alpha_n$ for each $n$, we see that $f\in\mathcal{L}^2$ but $f\notin\mathcal{L}^1$. I am having trouble using this to construct the aforementioned example. Any help is greatly appreciated. AI: We have $f \in \mathcal{L}^2(\Omega)$ because $$ \int f^2 d\mu = \sum_{n = 1}^{\infty} f(n)^2 \mu(\{n\}) = \sum_{n = 1}^{\infty} \frac1n\cdot \frac1{\sqrt{n}} < \infty. $$ We have $f\not\in \mathcal{L}^1(\Omega)$ because $$ \int f d\mu = \sum_{n = 1}^{\infty} f(n) \mu(\{n\}) = \sum_{n = 1}^{\infty} \frac1{\sqrt{n}}\cdot \frac1{\sqrt{n}} = \sum_{n = 1}^{\infty} \frac1n = \infty. $$ Now define for $m \in \mathbb N$ the function $g_m:\Omega \rightarrow \mathbb R$ by $$ g_m(n) = f(n)\quad \text{if} \quad n < m,\qquad g_m(n) = 0\quad \text{if} \quad n \geq m. $$ We find $$ \int (f - g_m)^2 d\mu = \sum_{n = m}^{\infty} f(n)^2 \mu(\{n\}) = \sum_{n = m}^{\infty} \frac1n\cdot \frac1{\sqrt{n}} \rightarrow 0 \quad \text{for} \quad m \rightarrow \infty, $$ and hence $$ g_m \rightarrow f \quad \text{in} \quad \mathcal{L}^2(\Omega). $$ We also find $$ \int (f - g_m) d\mu = \sum_{n = m}^{\infty} f(n) \mu(\{n\}) = \sum_{n = m}^{\infty} \frac1n \not\rightarrow 0 \quad \text{for} \quad m \rightarrow \infty, $$ and hence $$ g_m \not\rightarrow f \quad \text{in} \quad \mathcal{L}^1(\Omega). $$ In fact, it's not so difficult to check along the lines of the calculations above that the sequence $g_m$ is not Cauchy in $\mathcal{L}^1(\Omega)$, and hence does not converge at all in $\mathcal{L}^1(\Omega)$.
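Numerically the contrast is visible in truncated tail sums: $\|f-g_m\|_{\mathcal L^2}^2=\sum_{n\ge m}n^{-3/2}\to0$, while the corresponding $\mathcal L^1$ quantity $\sum_{n\ge m}n^{-1}$ stays large (it diverges for every $m$); a quick sketch:

```python
# mu({n}) = 1/sqrt(n) and f(n) = 1/sqrt(n); tail "norms" of f - g_m,
# truncated at a large N for the (divergent) L^1 sum
def tail_L2_sq(m, N=10**6):
    # ||f - g_m||_2^2 = sum_{n >= m} f(n)^2 * mu({n}) = sum n^(-3/2)
    return sum(n**-1.5 for n in range(m, N))

def tail_L1(m, N=10**6):
    # ||f - g_m||_1 = sum_{n >= m} f(n) * mu({n}) = sum 1/n  (harmonic tail)
    return sum(1.0 / n for n in range(m, N))

small = tail_L2_sq(10**4)   # roughly 2/sqrt(10^4) = 0.02
large = tail_L1(10**4)      # roughly log(10^6 / 10^4) ~ 4.6, grows with N

assert small < 0.03
assert large > 4.0
```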