H: How to find roots of this equation? I have this equation and want to find its roots. $\left(a^2+1\right) \cosh (a (c -b))- \cosh (c a)=0 $. Any comment is welcome. AI: One has \begin{equation} 1 \le f(a) = \frac{(a^2-1)^2}{(a^2+1)^2}\cosh(2\pi a) + \frac{4 a^2}{(a^2+1)^2}< \cosh(2\pi a) \end{equation} because this expression is a convex combination of $\cosh(2\pi a)$ and $1$. Then the equation implies \begin{equation} b = \pi \pm \frac{1}{2a}\cosh^{-1}(f(a))\in (0, 2\pi) \end{equation} Hence for every $a$ there are two solutions $b$, except in the case where $f(a)=1$ which happens only when $a=1$. In this case there is only one solution $b = \pi$. Plotting $b$ as a function of $a$ (with the $+$ sign) indicates that $b$ decreases from $2\pi$ to $\pi$ when $a$ varies from $0$ to $1$ and increases from $\pi$ to $2\pi$ when $a$ varies from $1$ to $+\infty$, so there should be two solutions $a$ for each $b\in (\pi, 2\pi)$, one of them in $(0, 1)$ and the other one in $(1, +\infty)$. The condition $a<10$ is irrelevant and should be removed.
H: Given the solution of $Ax = b$, do we know whether $A$ spans $\mathbb{R}^3$? Let $A$ be a $3 \times 4$ matrix such that all solutions to the equation $A\mathbf{x} = \mathbf{b}$ may be written as: $$ \left[ \begin{matrix} 1 \\ 2 \\ 3 \\ 4 \end{matrix} \right] + x_3 \left[ \begin{matrix} 3 \\ 2 \\ 1 \\ 0 \end{matrix} \right] $$ with $x_3 \in \mathbb{R}$. Does $A$ span $\mathbb{R}^3$, or can't we tell? AI: I assume that by "does $A$ span $\Bbb R^3$", you mean whether the columns of $A$ span $\Bbb R^3$, that is, whether the image of (the linear transformation induced by) $A$ is $\Bbb R^3$. (And that $A$ is a $3 \times 4$ matrix.) Recall the following: If $x_0$ is a particular solution to the system $Ax = b$, then the complete set of solutions is given by $$x_0 + \cal N(A),$$ where $\cal N(A)$ denotes the null space of $A$. In this case, we see that $\cal N (A)$ is spanned by one-vector, that is, it is one dimensional. Recalling the rank-nullity theorem, we see that $$\operatorname{rank} A = 4 - \operatorname{nullity} A = 3.$$ Since the image of $A$ is a subspace of $\Bbb R^3$, we see that $\operatorname{rank} A = 3$ gives us that $A$ does span $\Bbb R^3$.
H: Rudin's Real and Complex Analysis, Section 9.16 In Section 9.16 of Rudin's RCA, it says Let $\hat{M}$ be the image of a closed translation-invariant subspace $M \subset L^2$, under the Fourier transform. Let $P$ be the orthogonal projection of $L^2$ onto $\hat{M}$ (Theorem 4.11): To each $f \in L^2$ there corresponds a unique $Pf \in \hat{M}$ such that $f - Pf$ is orthogonal to $\hat{M}$. Hence $$f - Pf \perp Pg, \quad (f \text{ and } g \in L^2)$$ and since $\hat{M}$ is invariant under multiplication by $e_\alpha = e^{-i\alpha t}$, we also have $$ f - Pf \perp (Pg)e_\alpha \quad (f \text{ and } g \in L^2, \, \alpha \in \mathbb{R}^1)$$ If we recall how the inner product is defined in $L^2$, we see that the above statement is equivalent to $$ \int_{-\infty}^\infty (f -Pf)\cdot \overline{Pg}\cdot e_{-\alpha} \,dm = 0, \quad (f \text{ and } g \in L^2, \, \alpha \in \mathbb{R}^1)$$ and this says that the Fourier transform of $$ (f-Pf)\cdot \overline{Pg}$$ is $0$. So far so good! However, he goes on to say "This remains true if $\overline{Pg}$ is replaced by $Pg$". Why is that!? Could you help me to understand this? Thank you in advance. AI: If $a,b:\mathbb R\to\mathbb C$ are functions in $L^2$ and $a\,\bar b=0$, then this occurs pointwise a.e. If now $a,b\in\mathbb C$ and $a\bar b=0$, then either $a=0$ or $b=0$, so $ab=0$.
H: What are the homology groups $H_k(\mathbb{R}\setminus \{0\})$? Is there an easy way to calculate the homology groups $$H_k(\mathbb{R}\setminus \{0\}), k\geq 0$$ I was able to calculate $H_k(\mathbb{R}^m\setminus \{0\}), m > 1$ because we have a homeomorphism onto the sphere $S^{m-1}$ whose homology groups I know, but we do not have $\mathbb{R}\setminus \{0\} \cong S^0 = \{-1,1\}$ so for $m=1$ this approach fails. Of course, $H_0(\mathbb{R}\setminus \{0\}) \cong \mathbb{Z} \oplus \mathbb{Z}$ since there are two path components. AI: Hint: $\mathbb{R}\backslash\{0\}$ deformation retracts to two discrete points. Also, $\mathbb{R}^m\backslash\{0\}$ is not homeomorphic to a sphere (again, deformation retract).
H: Exercise on finitely generated $A$-modules Here is the exercise I'm trying to solve: Let $M$ be a finitely generated $A$-module (where $A$ is a commutative ring) and let $g:M\rightarrow A^n$ be a surjective $A$-module morphism. Prove that $\text{Ker}(g)$ is finitely generated. Here is what I'd do: Since $g$ is surjective I'd choose $\{x_1,x_2,\dots,x_n\}\subset M$ s.t. $g(x_i)=e_i$ for all $i=1\dots n$, where $e_i$ is the i-th canonical generator of the direct product $A^n$. Then I'd consider $$\begin{align}\varphi: A^n \oplus\text{Ker}(g) &\longrightarrow M\\ ((a_1,\dots,a_n),y) &\longmapsto a_1 x_1 + \dots + a_n x_n + y\end{align}$$ which is an $A$-module isomorphism. Hence $$M \cong A^n\oplus \text{Ker}(g)$$ Since $M, A^n$ are both finitely generated as $A$-modules, $\text{Ker}(g)$ also needs to be finitely generated. Is this acceptable? I have the feeling that it could be solved in a more concise way, without having to explicitly construct a morphism. Thanks! AI: I doubt that there is any better way than noting that the kernel is a direct summand of $M$, hence, finitely generated too. You could circumvent this a little by noting that if $f\colon A^n\to M$ is the map $e_i\mapsto x_i$ (the first component of your $\varphi$), then $\mathrm{id}_M-fg$ maps $M$ surjectively onto $\ker g$. Perhaps this feels a bit more concise. In case you just don't want to deal with elements, here is a closely related argument in a slightly more abstract phrasing. Consider the short exact sequence $0\to \ker g\to M\to A^n\to 0 $ and the associated exact sequence after applying $\hom(-,\ker g)$: $$0\to\hom(A^n,\ker g)\to\hom(M,\ker g)\to\hom(\ker g,\ker g)\to \mathrm{ext}^1_A(A^n,\ker g).$$ Since $A^n$ is projective, $\mathrm{ext}^1_A(A^n,\ker g)=0$ and so the map $\hom(M,\ker g)\to\hom(\ker g,\ker g)$, $\varphi\mapsto \varphi|_{\ker g}$, is surjective. Thus, there exists a morphism $\varphi\colon M\to \ker g$ such that $\varphi|_{\ker g}=\mathrm{id}_{\ker g}$. In particular, $\varphi$ is surjective and since $M$ is finitely generated, so is $\ker g$.
H: How to solve this ODE with the Laplace transform? I want to solve this ODE $$ y'''-y''-y'+y= -10 \cos (2t-1)+5 \sin(2t-1) $$ with $y( \frac12)= 1 $ , $ y'( \frac12 )=2 $, $y''( \frac12 )=1 $ $ t \in [ \frac12 , + \infty [ $ using the Laplace transformation. Usually I use the differential approach: $$ ( L( f^{(k)}))(s)=s^k(Lf)(s)- \sum_{j=0}^{k-1} s^j f^{(k-1-j)} (0) $$ so setting $Y(s)= L \{ y(t) \} $ I get $s^3 Y(s)-s^0y''(0)-s^1y'(0)-s^2y(0)-s^2Y(s)+s^0 y'(0)+s^1y(0)-sY(s)+s^0y(0)+Y(s)... $ I can't continue here, because of the initial values. How can I solve this ODE? Do I need to shift it somehow? Or do $ y''(0)$ etc. vanish, because $ t \in [ \frac12, \infty[ $? Appreciate any help! EDIT Thank you for the help so far! After applying the shift theorem I solve $$z'''-z''-z'+z=-10 \cos (2t) +5 \sin(2t) $$ with $ z(0)=1,z'(0)=2,z''(0)=1 $ Using the differential approach it comes to $$s^3 Z(s)-1-2s-s^2-s^2Z(s)+2+s-sZ(s)+1+Z(s)= \frac{-10s+10}{s^2+4} $$ $$ \Leftrightarrow Z(s)(s^3-s^2-s+1)= \frac{-10s+10}{(s^2+4)} -2+s^2+s $$ $$ \Leftrightarrow Z(s)= \frac{(-10s+10)-(s^2+4)(2+s^2+s)}{(s^3-s^2-s+1)(s^2+4)} $$ $$ \Leftrightarrow Z(s)= - \frac32 \frac{1}{s-1}- 2 \frac{1}{(s-1)^2} + \frac12 \frac{1}{s+1} + \frac{2}{s^2+4} $$ Looking up the inverse it comes to $z(t)= - \frac32 e^t - 2t e^t+ \frac12 e^{-t}+ \sin(2t) $ but $ z'(t)= \cos(t)-2te^t- \frac{7 e^t}{2}- \frac{e^{-t}}{2} $ and so $ z'(0)=-2 \neq 2 $ Where is my mistake? AI: $$y'''-y''-y'+y= -10 \cos (2t-1)+5 \sin(2t-1)$$ Shift the variable: set $z(t)=y\left(t+\frac 12\right)$, so that the equation becomes: $$z'''-z''-z'+z= -10 \cos (2t)+5 \sin(2t)$$ with the initial conditions: $$z(0)=1,\ z'(0)=2,\ z''(0)=1$$ $$t \in [0 , + \infty [$$ $$\Leftrightarrow Z(s)(s^3-s^2-s+1)= \frac{-10s+10}{(s^2+4)} -2+s^2+s$$ Divide both sides by $s-1$: $$ Z(s)(s^2-1)= \frac{-10}{(s^2+4)} +s+2$$ $$ Z(s)= \frac{2}{(s^2+4)} - \frac{2}{(s^2-1)}+\dfrac 1{s-1}+\dfrac 1{s^2-1}$$ $$ Z(s)= \frac{2}{(s^2+4)} - \frac{1}{(s^2-1)}+\dfrac 1{s-1}$$ $$ Z(s)= \frac{2}{(s^2+4)} + \frac{1}{2(s+1)}+\dfrac 1{2(s-1)}$$ $$z(t)=\sin(2t)+\dfrac 12e^{-t}+\dfrac 12 e^t$$ $$\implies z(t)=\sin(2t)+\cosh (t)$$ And you have that $$z(0)=1,z'(0)=2,z''(0)=1$$ As expected.
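A quick symbolic check of the solution above, as a minimal sketch assuming SymPy is available (not part of the original exchange): it verifies that $y(t)=z(t-\tfrac12)=\sin(2t-1)+\cosh(t-\tfrac12)$ satisfies the ODE and the initial conditions at $t=\tfrac12$.

```python
import sympy as sp

t = sp.symbols('t')
y = sp.sin(2*t - 1) + sp.cosh(t - sp.Rational(1, 2))   # candidate solution y(t) = z(t - 1/2)

lhs = y.diff(t, 3) - y.diff(t, 2) - y.diff(t) + y
rhs = -10*sp.cos(2*t - 1) + 5*sp.sin(2*t - 1)
print(sp.simplify(lhs - rhs))   # 0, so the ODE is satisfied

half = sp.Rational(1, 2)
print([expr.subs(t, half) for expr in (y, y.diff(t), y.diff(t, 2))])   # [1, 2, 1]
```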
H: Building a sequence that alternates for odd numbers without using cases I was wondering whether it is possible to build a sequence that alternates for odd numbers, such as $$a_n = \begin{cases} 1, & \text{if} \ n = 4k+1, k\in \mathbb{N} \\ -1, & \text{if} \ n = 4k+3, k\in\mathbb{N} \end{cases}$$ but without using cases. An option would be $a_n = \sin(\frac{n}{2}\pi)$ but I wonder whether there is some kind of alternative that would e.g. use something like $a_n = (-1)^{v(n)}$ where $v: \mathbb{N}\to\mathbb{N}, \ n \mapsto v(n)$. The result for $a_n$ if $n$ is even doesn't matter. AI: You can use $$a_n=(-1)^{\frac{n-1}{2}}$$
H: Can't prove $(B\oplus C)\cap D=(B\cap D)\oplus (C\cap D)$. Prove $(B\oplus C)\cap D=(B\cap D)\oplus (C\cap D)$. Note: $B\oplus C=(B\cup C)-(B\cap C)$ and $B-C=B\cap C^C$. I have tried to prove it as below. \begin{align*} (B\oplus C)\cap D &= ((B\cup C)-(B\cap C))\cap D\\ &=((B\cup C)\cap(B\cap C)^C)\cap D\\ &=((B\cup C)\cap D\cap(B\cap C)^C)\\ &=((B\cap D)\cup (C\cap D))\cap(B\cap C)^C\\ &=((B\cap D)\cup (C\cap D))-(B\cap C)... (1) \end{align*} I'm stuck trying to get the right-hand side. Now I try to expand from the right-hand side as below. \begin{align*} (B\cap D)\oplus (C\cap D)&= ((B\cap D)\cup (C\cap D))-((B\cap D)\cap (C\cap D)) ...(2) \end{align*} Now I'm stuck, because I can't show that eq. (1) equals eq. (2). Can anyone give me a hint? AI: There's just one more fact you need to get from your last line to the right hand side. Since $(B\cap D) \cup (C \cap D)$ is a subset of $D$, you can write: $$\big((B\cap D) \cup (C \cap D)\big) - (B\cap C) = \big((B\cap D) \cup (C \cap D)\big) - \big((B\cap C)\cap D\big)$$ Now $D = D\cap D$, so using associativity and commutativity of the intersection, you can write: $$\big((B\cap D) \cup (C \cap D)\big) - \big((B\cap C)\cap D\big) = \big((B\cap D) \cup (C \cap D)\big) - (B\cap D\cap C \cap D)$$ So changing the last term with $(B\cap D\cap C \cap D) = (B\cap D)\cap (C \cap D)$, you get the symmetric difference $(B\cap D)\oplus(C\cap D)$ on the right.
H: Find gradient & angle of inclination of the tangent where x = 3 Differentiate $f(x) = x^2 - 3x - 6$. Find the gradient and the angle of inclination of the tangent at the point where $x = 3$. I have already differentiated and got $2x - 3$, but I am not sure what to do next. AI: The angle of inclination ($\theta$) is just $\tan^{-1}(m)$, where $m$ is the slope. For your function the gradient (slope) at $x=3$ is $2(3)-3=3$, so $\theta = \tan^{-1}(3)$, which is approximately $1.249$ radians. The reason why $m=\tan(\theta)$ is that $m=\frac{\text{rise}}{\text{run}}$, which is the same as $$\frac{\sin(\theta)}{\cos(\theta)},$$ as you can see by drawing a right-angled triangle.
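A small numeric illustration of the answer above, as a sketch using only the standard library (the helper name is made up here):

```python
import math

def inclination(slope):
    """Angle of inclination (radians) of a line with the given slope."""
    return math.atan(slope)

slope_at_3 = 2 * 3 - 3                          # f'(x) = 2x - 3 evaluated at x = 3
print(slope_at_3)                               # 3
print(inclination(slope_at_3))                  # ~1.2490 radians
print(math.degrees(inclination(slope_at_3)))    # ~71.57 degrees
```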
H: Are all vertical asymptotes the points where the denominator is zero? Question 1: Are all vertical asymptotes the points where the denominator is zero? Question 2: For all rational functions (fractions of polynomials), are all vertical asymptotes the points where the denominator is zero? Thank you so much. AI: For question 1: No. In general, vertical asymptotes do not need to be zeroes of the denominator. Consider the function $f(x)=\ln(x)$. It has a vertical asymptote at $x=0$, but it does not even have a denominator. For question 2: Yes. For all rational functions (fractions of polynomials), vertical asymptotes are always at the zeroes of the denominator. The only values at which a rational function can be undefined are the zeroes of its denominator, because polynomials are continuous everywhere. So necessarily, all vertical asymptotes must be at the zeroes of the denominator, as these are the only places the function is undefined.
H: Hall and knight inequality question If $a^2+b^2=1$ and $x^2+y^2=1$, show that $ax+by<1$ AI: This is probably what you're looking for:$$1=(a^2+b^2)(x^2+y^2)=(ax+by)^2+(ay-bx)^2\geqslant (ax+by)^2\implies \vert ax+by\vert\leqslant 1$$
H: Yet another Rouché's theorem example for $f(z) = e^z-3z^{2019}$ - solution verification Use Rouché's Theorem to conclude that $$ f(z) = e^z-3z^{2019} $$ has exactly 2019 zeroes in $D(0,1)$. My main question is, since we're working for $|z| = 1$ can I conclude that $|e^z| \sim e^1 = e \sim 2.71...$? Because if that is true then taking $f(z) = -3z^{2019}$ and $g(z) = e^z -3z^{2019}$ we could utilise Rouché's Theorem as follows: $$ |f - g| = |-e^z| = 2.71... < 3 = |-3z^{2019}| = |f| $$ and since $-3z^{2019}$ has 2019 zeroes, then so does $g(z) = e^z -3z^{2019}$. Is that correct? If not, how can one solve this example? AI: I don't know what is it that you mean when you write $|e^z|\sim e^1$. If $|z|=1$, then$$|e^z|=e^{\operatorname{Re}z}\leqslant e^1=e<3=|3z^{2\,019}|.$$That's all you need.
H: How to calculate distance run on athletics track first time poster and definitely no maths expert. I am trying to solve a basic problem using an athletics track. The total distance around a standard athletics track is 400m: If you run in the first lane you run 400m, I am trying to work out the formula to estimate the distance run in the 2nd, 3rd, 4th lane etc if they all start at the same point (no staggers) If I assume that the distance between lane 1 and two is 1m how would I go about calculating? Any help would be much appreciated! Thanks AI: Let $L$ be the distance of the straight part, and $R$ be the radius of the turns. Now, the total length (when you're running on lane 1) is $400~\text{m}$, or $$\tag{1} 2L +2 \pi R = 400 $$ We also assume that the straights are exactly $100~\text{m}$. Then we see that on lane 1, the radius of curvature is $R = \frac{400-2\cdot 100}{2\pi}\approx 31.83~\text{m}$. When moving from the first lane to the next one, the radius $R$ increases by 1 meter, and the straight parts remain the same. Therefore, as we can see from Equation (1), the total length increases by $2\pi$ meters per every meter that the radius increases.
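A minimal sketch of the calculation in the answer above, assuming (as the answer does) 100 m straights and 1 m between lane centres:

```python
import math

STRAIGHT = 100.0                              # metres of straight per side (assumption)
R1 = (400 - 2 * STRAIGHT) / (2 * math.pi)     # radius of lane 1, ~31.83 m

def lap_length(lane, lane_width=1.0):
    """One lap in the given lane (lane 1 is the innermost), with no stagger."""
    r = R1 + (lane - 1) * lane_width
    return 2 * STRAIGHT + 2 * math.pi * r

for lane in range(1, 5):
    print(lane, round(lap_length(lane), 2))
# 400.0, 406.28, 412.57, 418.85: each lane adds about 2*pi ≈ 6.28 m per extra metre of radius
```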
H: How to prove that $1^n+2^n+...+(p-1)^n \equiv 0\pmod p$? I have a homework for the university and I am 'on this' for the entire week, so I really need help. The question: let $p>2$ be a prime number and $n\in \Bbb N$, $\ p-1\nmid n$. Prove that $1^n+2^n+...+(p-1)^n \equiv 0\pmod p$. I thought: it is pretty clear that $p-1$ is composite so I can write $p-1=q_1^{t_1}q_2^{t_2}...q_k^{t_k}=\prod_{i=0}^{k} q_i^{t_i}$ and I know that $2 \le q_i\le p-1 \ , \ \ 0 \le i \le k \ $ and that $ \ 2\mid p-1$ so $ \ p-1=2k$. The sum is $\sum_{i=0}^{p-1} i^n=1^n+2^n+...+(p-1)^n \equiv 0\pmod p$ but I don't know what can I understand from that. I really need help. Thank you AI: Consider a primitive root $g \bmod{p}$. We have: $$1^n+2^n+\ldots+(p-1)^n \equiv 1+g^n+g^{2n}+\ldots+g^{(p-2)n}=\frac{g^{(p-1)n}-1}{g^n-1} \pmod{p}$$ As $p-1 \nmid n$, we have $p \nmid (g^n-1)$. As $p-1 \mid (p-1)n$, we have $p \mid (g^{(p-1)n}-1)$. That shows that: $$p \mid \frac{g^{(p-1)n}-1}{g^n-1} \implies p \mid (1^n+2^n+\ldots+(p-1)^n)$$
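A brute-force check of the claim proved above, as a small sketch with the standard library only: for odd primes $p$, the power sum is $0$ modulo $p$ exactly when $p-1\nmid n$, and $-1$ otherwise.

```python
def power_sum_mod(p, n):
    # 1^n + 2^n + ... + (p-1)^n modulo p
    return sum(pow(i, n, p) for i in range(1, p)) % p

for p in (5, 7, 11, 13):
    for n in range(1, 25):
        if n % (p - 1) != 0:
            assert power_sum_mod(p, n) == 0
        else:
            assert power_sum_mod(p, n) == p - 1   # when p-1 | n, every term is 1
print("checked")
```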
H: Isomorphisms on $L^p$ and $\ell ^p$ spaces. I'm studying real analysis and I have some questions about isomorphisms on $L^p$ and $\ell ^p$ spaces. First of all, I want to see whether there exist infinite sets like $A$ and $B$ such that $\ell ^p(B)$ is not isomorphic to a subspace of $\ell ^p(A)$, with $p \in [1, +\infty]$? And secondly, let $(X, \mathcal{M}, \mu)$ and $(Y, \mathcal{N}, \lambda)$ be two probability spaces and let $p \in [1, +\infty]$. Now, I want to show that $L^p (X, \mathcal{M}, \mu)$ is isomorphic to a subspace of $L^p(X\times Y, \mathcal{M} \otimes \mathcal{N}, \mu \times \lambda)$. Any help is appreciated. Also, if you know any books containing these things, please let me know. I really need to know more about these spaces and their properties. Thanks. AI: $\ell^{p}(\mathbb R)$ is not isomorphic to any subspace of $\ell^{p}(\mathbb N)$ because the former is non-separable and the latter is separable. [In $\ell^{p}(\mathbb R)$ define $e_t(x)=1$ if $t=x$ and $0$ otherwise. Then $\|e_t-e_s\|_p=2^{1/p}$ whenever $t \neq s$ and this implies that $\ell^{p}(\mathbb R)$ is not separable]. Fix any $g \in L^{p} (\lambda)\setminus \{0\}$. Then the map $T$ defined by $(Tf)(x,y)=f(x)g(y)$ is an isomorphism from $L^{p}(\mu)$ into $L^{p}(\mu \times \lambda)$. In fact $\frac 1 {\|g\|_p} T$ is an isometric isomorphism.
H: Find largest number divided by which each element of vector is integer This is my first question on math.stackexchange, and as you will notice, I am not a mathematician at all, and this may be a very simple question. Apologies. I also don't know if I used the right terms in the title, so here is an example of what I am looking for (the c() is from R, it concatenates to a vector, in this case of numerics or integers). I am looking for a generalized way to find the largest number divided by which each element of a vector would result in an integer. c(1, 3, 5) # expected result = 1 c(0.2, 0.4, 0.6) # expected result = 0.2 c(0.3, 0.5, 0.7) # expected result = ??? this is where I am stuck AI: I don't know that much about linear algebra either, but I think I can answer this question. You have a vector $V$. Let's denote $V[i]$ as the i'th element of $V$ and $L$ as the length of $V$. Now, you want to find a number, let's say $n$, such that if you divide each $V[i]$ by $n$, every element of $V$ becomes an integer. The simplest way that I can think of would be taking $n = 10^{-d}$, where you can set $d$ by examining $V$. Let's denote by $f(x)$ a function which returns the number of digits $x$ has after the decimal point. For example, $f(1.512) = 3$ because there are 3 digits after the decimal point in $1.512$. Now we have to find the maximum value of $f(V[i])$ for $i \in [1, L]$, and $d$ has to be equal to that maximum value. Let's take the last example that you mentioned in your question, $V = (0.3, 0.5, 0.7)$: $f(0.3) = 1$ $f(0.5) = 1$ $f(0.7) = 1$ So, the maximum of those three values is $1$, hence $d = 1$ and $n = 10^{-1}$. Now we have to divide each element of $V$ by $n$. After that we get $V' = (3, 5, 7)$. But you wanted the largest $n$, so we have to take one more step: we multiply $n$ by $\gcd(V')$, where $\gcd$ means Greatest Common Divisor and $\gcd(V')$ means the $\gcd$ of all the elements of $V'$. Now you have the greatest $n$ such that dividing each element of $V$ by it results in an integer.
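Since the question was posed with R in mind, here is a hedged Python sketch of the same idea (the helper name is made up for this sketch). Working with exact fractions combines the answer's two steps: the result $10^{-d}\cdot\gcd(V')$ is just the gcd of the values as rationals, i.e. the gcd of the numerators over the lcm of the denominators.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def largest_common_step(values):
    """Largest d such that every value divided by d is an integer (values given as decimals)."""
    fracs = [Fraction(str(v)) for v in values]               # exact rationals from the decimals
    num = reduce(gcd, (f.numerator for f in fracs))          # gcd of numerators
    den = reduce(lambda a, b: a * b // gcd(a, b),
                 (f.denominator for f in fracs))             # lcm of denominators
    return Fraction(num, den)

print(largest_common_step([1, 3, 5]))        # 1
print(largest_common_step([0.2, 0.4, 0.6]))  # 1/5, i.e. 0.2
print(largest_common_step([0.3, 0.5, 0.7]))  # 1/10, i.e. 0.1
```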
H: Complex projective plane: $\mathbb C^2$ vs $\mathbb C^3$ I have a question about Phillip A. Griffiths - Introduction to algebraic curves. In Chapter I.1, it seems to say $\mathbb C^3 \cup L_{\infty} = P^2\mathbb C$, but later on it says $P^2\mathbb C \setminus L_{\infty} = \mathbb C^2$. What's going on please? Guesses: (1) I'm wrong: It's not that 'together' means '$\cup$'. (2) I'm wrong: It's not that $\setminus$ means 'set minus'. (Quotient?) (3) I'm right, and so is the book: 'Together' means '$\cup$' and $\setminus$ means 'set minus'. However, no one ever said $\mathbb C^3 \cap L_{\infty} = \emptyset$, so when we take off $L_{\infty}$ from $P^2\mathbb C$, there's still something to take out of $\mathbb C^3$...sooo actually, $\mathbb C^3 \setminus L_{\infty} = \mathbb C^2$ because $L_{\infty} = $ (or $\cong$) $\mathbb C$ or something. (4) I'm right, but the book is wrong: it should be both $\mathbb C^2$ or both $\mathbb C^3$. AI: The answer is (4). There is a typographical error in the first statement; it should read $$\mathbb{C}^2 \cup L_{\infty} = P^2\mathbb{C}$$
H: Hartogs set of a well ordered set So I understood that the Hartogs set of a well-ordered set $A$ is defined as $H(A)$, the minimal ordinal such that $H(A)\nleq A$ (there is no injection from $H(A)$ to $A$), and I also understood the proof of the existence of such an ordinal. The only thing I'm having trouble understanding is: given a well-ordered set $A$, can I find its Hartogs set? Because it feels pretty much like the Hartogs set will equal $S(\alpha) = \alpha \cup \{ \alpha \}$ (where $\alpha$ is the ordinal that is isomorphic to $A$). Of course I can't just say that because I feel like it, so I'll be glad if someone could please give me an example where it doesn't work, or maybe just explain why we need this. Thank you! AI: No, that's not true. The Hartogs number is not the least ordinal which does not "order embed" into a set, because "a set" does not have any well-ordering on it (although sometimes we have a natural candidate, e.g. $\in$ in the case of an ordinal). Instead the Hartogs number is only about injections. So $\omega+1=\omega\cup\{\omega\}$ certainly injects into $\omega$. Indeed, every countable ordinal does. So the Hartogs number of $\omega$ is by definition an uncountable ordinal, in fact the least uncountable ordinal, for obvious reasons. In other words, it is $\omega_1$. How do we construct it? Well, we can't really construct it with "elementary operations". In some sense the Hartogs function is an elementary operation, although in the proof of existence we go through: Look at all well-ordered chains of subsets of $X$ up to isomorphism; define the natural well-order on the equivalence classes; argue this well-order does not embed into $X$, and that it is the minimal one; in the case we want to use von Neumann ordinals (which is the usual case in $\sf ZF$), find the matching ordinal. We don't start adding elements one by one to an ordinal, or go through some transfinite recursion. We utilise, in some sense, much larger sets than $X$, whereas ordinals smaller than $X$ are, well, by definition smaller than $X$. But if $X$ can be well-ordered, then we can say that $H(X)$ is the smallest ordinal strictly larger in cardinality than $X$.
H: Find local extrema of the function. I have the following function and I want to find its local extrema: $f(x,y)=x^3+3xy^2-15x-12y+4$ Is my work correct? Wolfram Alpha says that the only extreme values are $-24$ and $32$, so I am a little bit confused. I have done the following calculations: $\nabla f(x,y)=[3x^2+3y^2-15,6xy-12]$ $\nabla f(x,y)=0 \Leftrightarrow \begin{cases} 3x^2+3y^2-15=0 \\6xy-12=0 \end{cases}$ $\begin{cases} 3x^2+3y^2-15=0 \\6xy-12=0 \end{cases}$ $\begin{cases} x=\frac 2y \\ \frac{12}{y^2} + 3y^2 -15 =0 \end{cases} \Leftrightarrow \begin{cases} x=\frac 2y \\ \frac{4+y^4-5y^2}{y^2} =0 \end{cases} $ $y\neq0$ $4+y^4-5y^2 = 0$ Let's substitute $y^2=r$: $r^2-5r+4=0$ $r_1=1$ $r_2=4$ So there are 4 solutions $y=1 \lor y=-1 \lor y=2 \lor y=-2$ $\begin{cases} y=1 \\ x=2 \end{cases}$ $\begin{cases} y=-1 \\ x=-2 \end{cases}$ $\begin{cases} y=2 \\ x=1 \end{cases}$ $\begin{cases} y=-2 \\ x=-1 \end{cases}$ There are 4 local extrema: -24 minimum 32 maximum -22 minimum 30 maximum AI: Your stationary points are correct, but not all of them are extrema. The Hessian of $f$ has entries $f_{xx}=6x$, $f_{yy}=6x$, $f_{xy}=6y$, so its determinant is $36(x^2-y^2)$. At $(2,1)$ the determinant is positive and $f_{xx}>0$, so $f$ has a local minimum there with value $-24$; at $(-2,-1)$ the determinant is positive and $f_{xx}<0$, so $f$ has a local maximum there with value $32$. At $(1,2)$ and $(-1,-2)$ the determinant is negative, so these are saddle points, not extrema. That is why Wolfram Alpha reports only the values $-24$ and $32$.
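A short check of the classification above, as a sketch assuming SymPy is available:

```python
from sympy import symbols, diff, solve, hessian

x, y = symbols('x y', real=True)
f = x**3 + 3*x*y**2 - 15*x - 12*y + 4

critical = solve([diff(f, x), diff(f, y)], [x, y])      # the four real stationary points
H = hessian(f, (x, y))
for px, py in critical:
    Hp = H.subs({x: px, y: py})
    det, fxx = Hp.det(), Hp[0, 0]
    kind = "saddle" if det < 0 else ("local min" if fxx > 0 else "local max")
    print((px, py), f.subs({x: px, y: py}), kind)
# (2, 1) -> -24, local min;  (-2, -1) -> 32, local max;  (1, 2) and (-1, -2) -> saddle points
```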
H: Least squares problem regarding distance between two vectors in $\mathbb{R}^3$ I'm solving an exercise problem and was facing some confusion regarding how to solve it. The problem is (roughly translated to English): Given the following: $$\mathbf{A} = \begin{bmatrix} 2 & 0 \\ 1 & 1 \\ 0 & 1 \end{bmatrix},\ \mathbf{w} = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$$ There exists no vector $\mathbf{x} = (x, y)$ that satisfies $\mathbf{A} \mathbf{x} = \mathbf{w}$. Find the values of $x$ and $y$ that minimize the distance between $\mathbf{A}\mathbf{x}$ and $\mathbf{w}$ in $\mathbb{R}^3$. My Approach The result of $\mathbf{A}\mathbf{x}$ is $(2x, x + y, y)$. Since we're minimizing distance in $\mathbb{R}^3$ I thought that finding the minima for the following would do the trick: $$ \begin{align} d & = (2x - 1)^2 + (x + y - 1)^2 + (y - 1)^2 \\ & = 5x^2 - 6x + 2xy + 2y^2 - 4y + 2 \end{align} $$ This is where my confusion mainly stems from, but in order to get the values for both the $x$ and $y$ coordinates, I obtained the partial derivatives of the above equation and set them to $0$. $$ \begin{align} & \frac{\partial d}{\partial x} = 10x - 6 + 2y \\ & \frac{\partial d}{\partial y} = \phantom{0}2x + 4y - 4 \end{align} $$ Setting both of these to $0$ and solving gives us: $$ x = \frac{4}{9},\ y = \frac{7}{9} $$ Is this approach correct? Any tips are appreciated. Thanks. AI: Yes, your approach is completely correct. For what it's worth, we can systematically obtain "least squares" solutions to such equations by solving the equation $A^TA \mathbf x = A^T \mathbf w$.
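A numeric confirmation of the answer above, as a sketch assuming NumPy: solving the normal equations $A^TAx=A^Tw$ and NumPy's built-in least-squares routine both return $(4/9,\,7/9)$.

```python
import numpy as np

A = np.array([[2.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
w = np.array([1.0, 1.0, 1.0])

x_normal = np.linalg.solve(A.T @ A, A.T @ w)      # normal equations A^T A x = A^T w
x_lstsq, *_ = np.linalg.lstsq(A, w, rcond=None)   # built-in least squares, same answer
print(x_normal, x_lstsq)                          # both ~ [0.4444, 0.7778] = [4/9, 7/9]
```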
H: Given a fixed perimeter, which shape will have the minimum area? I understand that a circle will have the largest area for a given perimeter but I don't get The smallest area. Is it a triangle because it has the least amount of sides so must be smaller? AI: There is no lower-bound to the area of a shape with given perimeter $C$. To prove this, we can take a rectangle with sides $\epsilon$ and $(C/2-\epsilon)$, and area $\epsilon(C/2-\epsilon)$. As $\epsilon \rightarrow 0 $, the area goes to zero.
H: $F(x,y,z)=3x{\bf{i}}-4y{\bf{j}}+5z{\bf{k}}$ is a vector field, $S: x^2+y^2+z^2 = a^2$ is an outward oriented surface. Find $\iiint\text{div}\ F\ dV$ $F(x,y,z)=3x\textbf{i}-4y\textbf{j}+5z\textbf{k}$ is the vector field and $S: x^2+y^2+z^2 = a^2$ is an outward oriented surface. Evaluate $\iiint\ \text{div}\ F\ dV.$ I have been trying to solve this question but couldn't get my head around it. Any help will be much appreciated. AI: $\operatorname{div} F= 3-4+5=4$, so the volume integral is just $4V$, where $V=\frac{4}{3}\pi a^3$ is the volume of the ball, i.e. the integral equals $\frac{16}{3}\pi a^3$.
H: Quadratic matrix bounds Let A be a singular matrix with a simple (non-repeated) zero eigenvalue. Does the following inequality hold? $$\|Ax\|^2\geq\sigma_2\|x\|^2, \qquad \forall x\notin Null(A)$$ where $\sigma_2$ is the smallest nonzero singular value of the matrix $A$. If it is true, where can I find a proof? AI: No, it's not true. Consider the $2\times2$ matrix $A = \begin{pmatrix} \sigma_2 & 0\\ 0 & 0 \end{pmatrix}$, taking for instance $\sigma_2=\tfrac12$, and any vector $x = \begin{pmatrix} a \\ b \end{pmatrix}$. Then for any $b \neq 0$, we have $\left\lVert Ax\right\rVert^2 <\sigma_2\left\lVert x\right\rVert^2$
H: Do statement 1 and 2 imply statement 3? Statement 1 : All locks are safety tools Statement 2 : Some safety tools are chain Statement 3 : Some chain are not locks I tried to answer this myself but i'm having a problem to decide the correct venn diagram for the statement "Some B are C" I tried to draw both venn diagram, and want to know whether both scenarios are possible, or not. And if not, why is that? Diagram 1 Diagram 2 Also i still need the answer to the original question in the title. Do both statement 1 and 2 imply statement 3? AI: $$L\subseteq S\land S\cap C\ne\emptyset\implies C\setminus L\ne\emptyset$$ is not true. Indeed assume that there are only elements in $C\cap L\cap S$. Then the premises are verified, but the consequence is false.
H: How to show $x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1 = 0$ have five different roots? I have to prove this fact when determining the elementary divisors of a matrix. The numerical solution confirms that this equation indeed has five different roots (one real root, two pairs of conjugate complex roots), but how to show it theoretically? I looked into its derivatives but it didn't solve the problem. AI: To tell if $f$ has any multiple zeros, take the GCD of $f$ and $f'$. (Any multiple zero of $f$ is also a zero of $f'$.) In this case $$ \gcd(f,f') = 1 , $$ so there are no multiple zeros, so there are $5$ simple zeros. Euclidean algorithm computation. $f(x) = x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1$, $f'(x) = 5x^4+20x^3+12x^2+6x+2$. Divide $x^5 + 5x^4 + 4x^3 + 3x^2 + 2x + 1$ by $5x^4+20x^3+12x^2+6x+2$: Remainder is $-{\frac {12\,{x}^{3}}{5}}-\frac35\,{x}^{2}+\frac25\,x+\frac35$. Divide $5x^4+20x^3+12x^2+6x+2$ by $-{\frac {12\,{x}^{3}}{5}}-\frac35\,{x}^{2}+\frac25\,x+\frac35$. Remainder is ${\frac {391\,{x}^{2}}{48}}+{\frac {83\,x}{8}}+{\frac{107}{16}}$. Divide $-{\frac {12\,{x}^{3}}{5}}-\frac35\,{x}^{2}+\frac25\,x+\frac35$ by ${\frac {391\,{x}^{2}}{48}}+{\frac {83\,x}{8}}+{\frac{107}{16}}$. Remainder is $-{\frac {116000\,x}{152881}}-{\frac{216624}{152881}}$. Divide ${\frac {391\,{x}^{2}}{48}}+{\frac {83\,x}{8}}+{\frac{107}{16}}$ by $-{\frac {116000\,x}{152881}}-{\frac{216624}{152881}}$. Remainder is the constant $\frac{13220690237}{841000000}$. Conclusion: $f$ has no multiple zero. If $a$ were a multiple zero, then $x-a$ divides both $f$ and $f'$, and therefore all the remainders in the calculation. But of course $x-a$ cannot divide a constant.
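The gcd computation in the answer above can be reproduced in one line with a computer algebra system; a sketch assuming SymPy:

```python
from sympy import symbols, gcd, diff

x = symbols('x')
f = x**5 + 5*x**4 + 4*x**3 + 3*x**2 + 2*x + 1

print(gcd(f, diff(f, x)))   # 1: f and f' share no root, so all five zeros of f are simple
```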
H: Bounded operators on a Hilbert space Let $H$ be a Hilbert space and consider the set $L(H,H)$ of bounded linear operators $T: H\rightarrow H$. Now we know that this is going to be a Banach space since $H$ itself is a Banach space. Now, is this going to be a Hilbert space? I have tried getting a counterexample to the parallelogram law but I got nowhere. Also, is it possible that there exists an element $z$ with $||z||\leq 1$ such that $||T(z)||=||T||$? Any tips are appreciated. Thanks in advance. AI: For an example not satisfying the parallelogram law, almost any pair of $2 \times 2$ matrices will do. For an easy one, try $A=\pmatrix{1 & 0\cr 0 & 0\cr}$ and $B =\pmatrix{0 & 0\cr 0 & 1\cr}$ which have $\|A\|=\|B\| = \|A+B\|=\|A-B\|=1$. Depending on the operator $T$, there may or may not be $x$ with $\|x\|=1$ and $\|Tx\|=\|T\|$. An example where there isn't is $H = L^2([0,1])$ with $Tf(x) = x f(x)$.
H: Help with Lagrange multipliers I need to find the absolute minima and maxima of the function $f(x,y) = 12 x^2 + 12 y^2 - x^3 y^3 -5$ in the region bounded by the disk $x^2 + y^2 \le 1$. I know that $f(x,y)$ has three critical points in its domain, but only one point of the three, namely $(0,0)$, fits in the region bounded by the disk. This point will be an absolute minimum because $f_{xx}(0,0) >0$ and $f_{yy}(0,0) >0$ and $(0,0)$ is the only minimum in the region bounded by the disk. Therefore I need to find the other "candidate" points for absolute maxima. The system of equations for Lagrange multipliers is the following: $$ \begin{split} -3x^2 y^3 + 24x - 2kx &= 0\\ -3y^2 x^3 + 24y - 2ky &= 0\\ x^2 +y^2 -1 &= 0. \end{split} $$ As you can see, this system of equations is more complex than the typical examples on the internet. What ideas do you have to solve this problem? Thanks. AI: Note the function is symmetric in $x,y$ so the answers are expected to be symmetric as well. Case I. $x = 0$ Then $y = \pm 1$ and you get the points $(0, \pm 1)$. Case II. $y = 0$ Then $x = \pm 1$ and you get the points $(\pm 1, 0)$. Case III. $x,y \ne 0$ Then the first equation can be divided by $x$ and the second by $y$. Can you finish this case and individually evaluate the candidates? UPDATE Note in case 3, the system becomes $$ \begin{split} -3x y^3 &= 2k - 24\\ -3y x^3 &= 2k - 24\\ x^2 +y^2 &= 1. \end{split} $$ It immediately follows that $xy^3 = yx^3$ and since $x,y\ne 0$ we divide by $xy$ to get $x^2=y^2$. Use the last constraint to solve.
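A numeric cross-check of the candidates above, as a sketch assuming NumPy: on the boundary $x^2+y^2=1$ the function reduces to $7-(xy)^3$, so scanning the parametrization $(\cos t,\sin t)$ recovers the boundary extrema, to be compared with the interior value $f(0,0)=-5$.

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 400001)
x, y = np.cos(t), np.sin(t)
f = 12 * x**2 + 12 * y**2 - x**3 * y**3 - 5      # equals 7 - (x*y)**3 on the unit circle

i_max, i_min = f.argmax(), f.argmin()
print(f[i_max], x[i_max], y[i_max])              # 7.125 = 57/8 at (±1/√2, ∓1/√2)
print(f[i_min], x[i_min], y[i_min])              # 6.875 = 55/8 at (±1/√2, ±1/√2)
# Combined with the interior critical point f(0, 0) = -5, the absolute minimum on the
# disk is -5 at the origin and the absolute maximum is 57/8 on the boundary.
```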
H: Non-decreasing function in $f:A\rightarrow B$ Let $f:A\rightarrow B$ be a function where set A contains 4 elements and set B contains 3 elements. Then find the total number of non-decreasing functions. As per the book the solution is "Number of non decreasing function is equal to number of non negative integral solutions of equation x + y + z = 4" which is $^6C_2=15$. I am not able to understand it , can some one provide me more lucid answer to this problem. AI: Let $A=\{a_1,a_2,a_3,a_4\}$ with $a_1<a_2<a_3<a_4$ and $B=\{b_1,b_2,b_3\}$ with $b_1\lt b_2\lt b_3$. Let $x_1,x_2,x_3$ be the number of pre-images of each of the three elements in $B$ (respectively) under a map $f:A\rightarrow B$, that is, $\vert f^{-1}(\{b_i\})\vert=x_i$. Then, we certainly have $$x_1+x_2+x_3=4\:\text{ with }\:x_1,x_2,x_3\geqslant0$$ There are $15$ solutions to the above equation in non-negative integers as the OP has mentioned. It is clear that each solution gives rise to a certain number of desired functions (we haven't fixed the values of the function yet). But, with the additional constraint that we want only non-decreasing functions, with each of the $15$ solutions, we can associate precisely $1$ function that satisfies the requirements of the problem. To see this, let us take a solution $(y_1,y_2,y_3)$, and let $f$ be a function with $\vert f^{-1}(\{b_i\})\vert=y_i$. Since we want non-decreasing maps $f:A\rightarrow B$, we must have the following: $$ f(a_i)= \begin{cases} b_1, & \text{if $0\lt i \leqslant y_1$} \\ b_2, & \text{if $y_1\lt i \leqslant y_1+y_2$} \\ b_3, & \text{if $y_1+y_2 \lt i \leqslant 4$} \end{cases}$$ Since this fixes the values of $f$, there is precisely one function with the required properties associated with each of the solutions to the aforementioned equation. So, indeed, there are $\boxed{15}$ such functions.
H: Are there trigonometric and hyperbolic identities that are true in $\mathbb{R}$ but not true in $\mathbb{C}$ Question: Are there trigonometric and hyperbolic identities that are true in $\mathbb{R}$ but not true in $\mathbb{C}?$ For instance, $\cos^2(z)+\sin^2(z)=1$ is still true when we move to complex plane but are there identities that are no longer true when we in complex plane? What about hyperbolic identities? The most identities I encountered seem to be still true in complex. Many thanks in advance! AI: If an identity of analytic functions holds on a set with a limit point, then it holds everywhere. Of course $\mathbb R$ is a set with a limit point. So a counterexample must involve non-analytic functions. Like this $$ x^2 = |x|^2 $$ holds on $\mathbb R$, but $$ z^2 = |z|^2 $$ fails on most of $\mathbb C$. Another example, deadly to calculus students: $$ \int\frac{dx}{x}=\log|x|+C $$ true in real calculus, but $$ \int \frac{dz}{z} = \log|z|+C $$ is false in complex calculus. The remedy: $$ \int\frac{dx}{x}=\log x +C $$ is true even in the case of a real variable, but may have a complex constant $C$ since $\log x$ may be complex when $x<0$.
H: How to prove Cauchy Schwartz Inequality for norms in Lebesgue Integration I am self studying Apostol ( Mathematical Analysis) but I couldn't prove this particular theorem given in text despite the hint given . So, I am asking here. Its part (e) , I have no idea how to use RHS from the inequality to prove the CS inequality. Any help will be really appreciated. AI: We have $$(f(x)g(y)-g(x)f(y))^2=f(x)^2g(y)^2-2f(x)g(x)f(y)g(y)+g(x)^2f(y)^2.$$ It follows $$ \int_I(f(x)g(y)-g(x)f(y))^2 dy=f(x)^2||g||^2-2f(x)g(x) (f,g)+g(x)^2||f||^2.$$ Hence $$ \int_I[ \int_I(f(x)g(y)-g(x)f(y))^2dy]dx= ||f||^2 ||g||^2-2(f,g)^2+||g||^2||f||^2.$$ From $$ \int_I[ \int_I(f(x)g(y)-g(x)f(y))^2dy]dx \ge0$$ we get $$||f||^2 ||g||^2-2(f,g)^2+||g||^2||f||^2 \ge 0$$ and Cauchy - Schwarz follows.
H: Distribution function of $Y = \max\{X \sim \text{Exp}(2), 3\}$ Let $X$ be an exponentially distributed random variable with $\lambda =2$ and $Y$ the random variable that takes the maximum of $X$ and $3$. What is the distribution function of $Y$? So how can I calculate $P(Y\leq y) = P(\max\{X,3\} \leq y) = P(3\leq y, X < 3) + P(3\leq X \leq y)$, since I think $X,Y$ are not independent? $P(X < 3) = 1-e^{-6}$ and $ P(3 \leq X) = e^{-6}$ are easy but what about the $y$? AI: It is a mixed random variable: when $X<3$, which happens with probability $1-e^{-6}$, $Y$ is discrete and equal to $3$; otherwise $Y$ has the same distribution as $X$. $$F_Y(y) = \begin{cases} 0, & \text{if $y<3$} \\ 1-e^{-6}, & \text{if $y=3$} \\ 1-e^{-2y}, & \text{if $y>3$ } \end{cases}$$ As you can see, the CDF of $Y$ has "a jump" at $y=3$, that is, the law is discrete there. You can easily understand this with a drawing of the transformation function $y=\max(x,3)$: the CDF of $Y$ can be read off without doing any calculation, only by looking at that graph. The domain of $Y$ is $[3,\infty)$, and all the probability that $X$ spreads over the interval $(0,3)$ is collected at $Y=3$. That is, $Y$ has a positive mass of probability there; on the rest of the interval, $Y=X$ in distribution. If you want, you can also derive the mixed pdf of $Y$. Be careful that you cannot calculate the derivative of the CDF at $y=3$ to get the pdf, because the law is discrete there: you have to take $F_Y(3)-F_Y(3^-)$.
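A Monte Carlo check of the mixed distribution described above, as a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1/2, size=1_000_000)   # Exp(rate 2) has scale 1/2
Y = np.maximum(X, 3)

print(np.mean(Y == 3), 1 - np.exp(-6))           # atom at 3: P(Y = 3) = 1 - e^{-6} ≈ 0.9975
for y in (3.2, 4.0):
    print(np.mean(Y <= y), 1 - np.exp(-2 * y))   # for y > 3 the CDF matches 1 - e^{-2y}
```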
H: Probabilities in the expectation of a hypergeometric random variable So I've read this explanation and the answer given by Andi R helped to give me some intuition of my problem. Now would like to understand it more mathematically rigorously. Suppose we have a jar of marbles with $r$ red balls and $w$ white balls. We draw n balls. Find the expected number of red balls. From the expectation of a hypergeometric random variable know that the expected number of red balls is going to be $\frac{nr}{r + w}$ Let's now use the method of indicators random variables to point at my problem. Let $X$ be the number of red balls. $$X = X_1 + X_2 + X_3 +... + X_n$$ $$\begin{align*} X_i&=\begin{cases} 1,&\text{if }i\text{-th ball selected is red}\\ 0,&\text{otherwise} \end{cases}\\ \end{align*}$$ By linearity of expectation, $$E[X] = \sum_i{E[X_i]} = \sum_i {P(X_i = 1)}$$ That all makes sense to me. Here's where I start to get confused. $$P(X_i = 1) = \frac{r}{r+w}\quad \forall i$$ However let's look at our second ball for example, using the law of total probability: $$P(X_2 = 1) = P(X_2 = 1| X_1 = 1)P(X_1 = 1) + P(X_2 = 1| X_1 = 0)P(X_1 = 0)$$ $$P(X_2 = 1) = \frac{r - 1}{r + w - 1}\frac{r}{r+w} + \frac{r}{r + w - 1}\frac{w}{r+w} \neq \frac{r}{r+w}$$ Where am I going wrong? Did I use the total law of probability wrong? We draw the balls successively so the probabilities can't be the same, can it? AI: OP here- I made an algebraic error as @drhab and @saulspatz pointed out. From what I wrote: $$P(X_2 = 1) = \frac{r - 1}{r + w - 1}\frac{r}{r+w} + \frac{r}{r + w - 1}\frac{w}{r+w} \\ = \frac{r(r-1)}{(r+w-1)(r+w)} + \frac{rw}{(r+w-1)(r+w)} \\ = \frac{r(r-1) + rw}{(r+w-1)(r+w)} \\ = \frac{r^2 +rw-r}{(r+w-1)(r+w)}\\ = \frac{r(r + w-1)}{(r+w-1)(r+w)}\\ = \frac{r}{r+w} $$ And a very similar algebraic motions can be followed for $X_3 = 1$, $X_4 = 1$, and so forth...
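A quick simulation confirming the identity used above, $E[X]=\frac{nr}{r+w}$ (the helper name is made up for this sketch):

```python
import random

def mean_red(r, w, n, trials=200_000, seed=0):
    """Average number of red balls when drawing n balls without replacement."""
    rng = random.Random(seed)
    urn = ['R'] * r + ['W'] * w
    return sum(rng.sample(urn, n).count('R') for _ in range(trials)) / trials

r, w, n = 5, 7, 4
print(mean_red(r, w, n))   # ~1.666
print(n * r / (r + w))     # 1.666... = nr/(r+w)
```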
H: Prove that the tangents $A$ and $B$ are perpendicular. Let $f(x)=x^x$ and $g(x)=\Big(\frac{1}{x}\Big)^{x}$. Let $A$ be the tangent of $f(x)$ at $x=1$ and $B$ the tangent of $g(x)$ at $x=1$. Then $A$ and $B$ are perpendicular. Proof without the first derivative: Clearly, since the equation $x^x=x$ has a unique solution, which is $x=1$, we conclude that $y=x$ is a tangent of the function $f(x)$, using convexity and the three chord lemma. The same reasoning applies to the equation $\Big(\frac{1}{x}\Big)^{x}=2-x$ and, using convexity, leads to the conclusion that $y=2-x$ is a tangent of the function $g(x)$. Now it's easy to conclude. So I think it's a problem with many solutions, so feel free to share yours! Thanks in advance for all your advice, comments and answers! AI: Let $f=h$ and $g=1/h$. Then $f=g\iff h^2(x)=1$. Furthermore, $$f'\cdot g'=h'\cdot(-1)\cdot\frac{1}{h^2}h'.$$ Thus the tangents at the point of intersection are perpendicular iff $(h'(x))^2=1.$
H: There exists $M \in \mathscr{M}(n \times n,\mathbb{R})$ such that $e^M=L^2$? Problem: Define $C=\left(\begin{matrix} a & -b \\ b & a \end{matrix}\right)$ and $\Delta=\left(\begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix}\right)$. Define $L=\left(\begin{matrix} C & \Delta & \\ &C & \Delta & \\ & & \ldots \\ & & & C & \Delta \\ & & & & C \end{matrix}\right) \in \mathscr{M}(n \times n,\mathbb{R})$ where $\mathscr{M}(n \times n,\mathbb{R})$ denotes the space of real $n \times n$ matrices. Remembering that we define $e^B=\sum_{k \geq 0}\frac{B^k}{k!}$ for $B \in \mathscr{M}(n \times n,\mathbb{R})$, how can we conclude that there exists $M \in \mathscr{M}(n \times n,\mathbb{R})$ such that $e^M=L^2$? I had this problem reading this answer and I would like to know if we can conclude the statement above without using such advanced (for me) arguments of algebra (i.e. Lie groups, Lie algebras and so on). Remark: I am not sure if this post really needs the tag "lie-algebras". AI: Define $D$ to be the block-diagonal $D = \operatorname{diag}(C,\dots,C)$, and let $N = L - D$. First, we find a $Q$ such that $e^{Q} = D^{-1}L = I + D^{-1}N$. We see that $D^{-1}N$ is nilpotent of order $n/2$, just like $N$. Define $$ Q = \ln(I + D^{-1}N) = D^{-1}N - \frac 12 [D^{-1}N]^2 +\cdots + (-1)^{n/2 - 1}[D^{-1}N]^{n/2 - 1}. $$ Let $K$ be such that $e^K = C$, and let $P = \operatorname{diag}(K,\dots,K)$. It follows that $D = e^P$. Because $P$ and $Q$ commute, we have $$ e^{P + Q} = e^P e^{Q} = D(D^{-1}L) = L. $$ So, taking $M = P + Q$ gives us $e^M = L$. For the case of $L^2$, we simply have $L^2 = (e^M)^2 = e^{2M}$.
H: How can I find the interval of x? I apply ratio test and get indeterminate. $\sum_{n=1}^\infty({(n+1)(n+2)....(2n)\over n^n})\space (x-2)^n$ $$\sum_{n=1}^\infty({(n+1)(n+2)....(2n)\over n^n})\space (x-2)^n$$ Find the radius and interval of convergence of the power series given above. Hi! I am trying to find the interval of x first; however, whenever I apply the ratio test, I get an indeterminate form. How can I find the interval of x? Here is what I've done: Convergence: $$\lim_{n\to\infty}{\cfrac{\lvert(n+2)(n+3)...(2n+2)\space(x-2)^{n+1}\rvert}{\lvert(n+1)^{(n+1)}\rvert}\over\cfrac{\lvert(n+1)(n+2)...(2n)\space(x-2)^n\rvert}{\lvert n^n\rvert}}<1$$ $=>$ $$\lim_{n\to\infty}\cfrac{\lvert 2(2n+1)(x-2)n^n\rvert}{\lvert(n+1)^{n+1}\rvert}<1$$ $=>$L'Hospital: $$\lvert x-2\rvert\space\lim_{n\to\infty}\cfrac{\lvert (4n+2)\space n^n\rvert}{\vert(n+1)^{n+1}\rvert}<1$$ $=>$ $$\lvert x-2\rvert\space\lim_{n\to\infty}\cfrac{\lvert 4n^n+(4n+2).n^n.\ln(n)\rvert}{(n+1)^{n+1}.\ln(n).(\ln1)}$$ As you can see, ln1 makes it indeterminate and I am unable to find it. How can I find the interval of x, so that I can find the interval of convergence and then the radius of convergence? AI: $$\lvert x-2\rvert\space\lim_{n\to\infty}\cfrac{\lvert (4n+2)\space n^n\rvert}{\vert(n+1)^{n+1}\rvert}<1\\ \iff|x-2|\lim_{n\to\infty}\dfrac{4n+2}{n}\times\lim_{n\to\infty}\dfrac{n\cdot n^{n}}{(n+1)^{n+1}}<1\\ \iff4|x-2|\times\lim_{n\to\infty}\left(1-\dfrac{1}{n+1}\right)^{n+1}<1\\ \iff|x-2|<\dfrac e4$$ Therefore, the required radius of convergence is $\boxed{\dfrac e4}$.
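A numeric sanity check of the limit computed above, as a stdlib-only sketch: the ratio of consecutive coefficients, simplified by hand to $2(2n+1)\,n^n/(n+1)^{n+1}$, approaches $4/e$, so the radius is $e/4$.

```python
from math import e

def ratio(n):
    # |c_{n+1} / c_n| for c_n = (n+1)(n+2)...(2n) / n^n
    return 2 * (2 * n + 1) * n**n / (n + 1)**(n + 1)

for n in (10, 100, 1000):
    print(n, ratio(n))   # approaches 4/e ≈ 1.4715
print(4 / e)             # 1.4715...
```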
H: Number of irreducible factors of $x^{p^n + 1} - 1$ over $\Bbb F_p$ Let $p,n$ be two odd prime numbers. I want to show that the number of irreducible factors of $x^{p^n + 1} - 1$ over $\Bbb F_p$ is $$N = \frac{p^n-p}{2n} + \frac{p-1}{2} + 2$$ I know that this is equal to $\sum_{d\mid (p^n+1)}\frac{\phi(d)}{\operatorname{ord}_d(p)}$ (see How many irreducible factors does $x^n-1$ have over finite field?). I checked the equality with SAGE for some pairs $(p,n)$. But how would I prove the above equality? Thank you! AI: The number of irreducible factors of $X^m-1$ for $p\nmid m$, is the number of cycles of the Frobenius map $F:\alpha\to\alpha^p$ in its action on $\mu_m$, the set of $m$-th roots of unity in an algebraic closure of $\Bbb F_p$. This is the same as the number of orbits of $\phi:a\mapsto pa$ on $\Bbb Z/m\Bbb Z$. In our case $m=p^n+1$. Then $\phi^n(a)=-a$ and so $\phi^{2n}(a)=a$. Every orbit of $\phi$ on $\Bbb Z/m\Bbb Z$ has length dividing $2n$. For $k\mid 2n$, the number of solutions to $\phi^k(a)=a$ is the number of solutions to the congruence $(p^k-1)a\equiv0\pmod{p^n+1}$ and equals $a_k=\gcd(p^k-1,p^n+1)$. Now $a_1=\gcd(p-1,p^n+1)=\gcd(p-1,p+1)=2$. Also $a_2=\gcd(p^2-1,p^n+1)=\gcd(p^2-1,p+1)=p+1$ as $p^n\equiv p\pmod{p^2-1}$. Finally $a_n=\gcd(p^n-1,p^n+1)=\gcd(p^n-1,2)=2$. The orbits of length $1$ comprise two elements, those of length $2$ comprise $(p+1)-2 = p-1$ elements and there are no orbits of length $n$. Therefore the orbits of length $2n$ comprise $$p^n+1-2-(p-1)=p^n-p$$ elements and so there are $2$ orbits of length $1$, $(p-1)/2$ of length $2$ and $$\frac{p^n-p}{2n}$$ of length $2n$. The number of irreducible factors is the total number of orbits, namely $$\frac{p^n-p}{2n}+\frac{p-1}2+2.$$
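The formula can also be checked directly by factoring over $\Bbb F_p$ with a computer algebra system, mirroring the SAGE check mentioned in the question; a sketch assuming SymPy:

```python
from sympy import symbols, factor_list

x = symbols('x')

def count_irreducible_factors(p, n):
    # x^(p^n + 1) - 1 is squarefree over F_p, so this counts distinct irreducible factors
    return len(factor_list(x**(p**n + 1) - 1, modulus=p)[1])

def formula(p, n):
    return (p**n - p) // (2 * n) + (p - 1) // 2 + 2

for p, n in [(3, 3), (5, 3), (3, 5)]:
    print((p, n), count_irreducible_factors(p, n), formula(p, n))   # the two counts agree
```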
H: A case where the Euler-Lagrange equations produce $1=0$ As a reference, I asked the same question (https://physics.stackexchange.com/questions/561449/equations-of-motions-of-mathcall-phi-x-phix) in the physics community but I am interested in the mathematical reason why this produces $1=0$. Suppose a Lagrangian of this form $$ L[f,x]=f[x] \tag{1} $$ where $f:\mathbb{R}\to\mathbb{R}$. The Euler-Lagrange equations are: $$\partial_\mu\frac{\partial L}{\partial(\partial_\mu f)}=\frac{\partial L}{\partial f}$$ There are no derivatives of $f$, thus the left-most term is equal to zero: $$\partial_\mu\frac{\partial L}{\partial(\partial_\mu f)}=0$$ Finally, the right-most term is: $$ \frac{\partial L}{\partial f}=1 $$ Thus, the result is $1=0$. Now, I am just overall skeptical of the whole thing; why are the Euler-Lagrange equations able to produce a contradiction? I suspect that $L[f,x]=f[x]$ violates one of the assumptions used to derive the Euler-Lagrange equations? AI: A solution $f$ to the E-L equations is a stationary point of the action $ S = \int_a^b L[f]dx = \int_a^b f dx $. The E-L equations in this case admit no solution, as there is no stationary point of this functional.
H: Compute number of elements in a cycle Given numbers from 1 to N and a number K can we find the number of cycles beginning at number 1. K is the distance to the next number. Elements 1 to N are arranged in a circular order.(N and 1 are adjacent). For eg : N = 7 and K = 2 then, 1 --> 3 --> 5 --> 7 --> 2 --> 4 --> 6 --> 1 and the cycle terminates. Number of elements covered is 7. AI: There is only one such cycle, determined by the value of $K$. The title and your example suggest that what you mean to ask about is the length of the cycle. For that question here's a hint. Work out all the examples for $N=6$ and think about the greatest common divisor of $N$ and $K$. (You can make nice pictures of stars with chords of a circle.)
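A small sketch of the gcd hint in the answer above, using only the standard library: the function simulates the walk and the printout compares the visited count with $N/\gcd(N,K)$.

```python
from math import gcd

def cycle_length(N, K):
    """Number of distinct positions visited starting at 1 and stepping by K around 1..N."""
    pos, count = 1, 0
    while True:
        count += 1
        pos = (pos - 1 + K) % N + 1
        if pos == 1:
            return count

for N, K in [(7, 2), (6, 2), (6, 4), (12, 8)]:
    print(N, K, cycle_length(N, K), N // gcd(N, K))   # the two numbers always agree
```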
H: UC Berkeley Integral Problem: Show that $\int_0^{2\pi} \frac{\min(\sin x, \cos x)}{\max(e^{\sin x},e^{\cos x})}\ {\rm d}x = -4\sinh(1/{\sqrt2})$. Show that $$\int_0^{2\pi} \frac{\mathrm{min}(\sin{x},\, \cos{x})}{\mathrm{max}\left(e^{\sin{x}},\, e^{\cos{x}}\right)}\ \mathrm{d}x = -4\sinh\left(\frac{1}{\sqrt{2}}\right).$$ This problem comes from the 2020 UC Berkeley Integration Bee and was not solved by either of the contestants. Any hints? My initial approach was to compute the maximum and minimum of the specified function by observing the graph for $x\in (0, 2\pi)$ but could not get very far. Thank you! AI: Because the function is periodic, the integral over any interval of length $2 \pi$ leads to the same result. Note that $\sin x \geq \cos x$ on $[\pi/4, 5\pi/4]$ and $\sin x \leq \cos x$ on $[5\pi/4, 9\pi/4]$. With that said, rewrite this as $$ \int_{\pi/4}^{9 \pi /4} f(x)\,dx = \int_{\pi/4}^{5\pi/4} f(x)\,dx + \int_{5 \pi/4}^{9\pi/4} f(x)\,dx\\ = \int_{\pi/4}^{5 \pi/4} \frac{\cos(x)}{e^{\sin(x)}}\,dx + \int_{5 \pi/4}^{9 \pi/4} \frac{\sin(x)}{e^{\cos(x)}}\,dx\\ = \int_{\pi/4}^{5 \pi/4} e^{- \sin(x)}\cos(x)\,dx + \int_{5 \pi/4}^{9 \pi/4} e^{- \cos(x)}\sin(x)\,dx. $$ The integrals can be handled separately, via $u$-substitution.
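A numeric check of the value, as a sketch assuming NumPy: a simple trapezoidal approximation of the integral agrees with $-4\sinh(1/\sqrt2)\approx-3.0701$.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 200001)
integrand = np.minimum(np.sin(x), np.cos(x)) / np.maximum(np.exp(np.sin(x)), np.exp(np.cos(x)))

print(np.trapz(integrand, x))        # ~ -3.0701
print(-4 * np.sinh(1 / np.sqrt(2)))  # -3.0701...
```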
H: Convolution with Dirac delta How to solve this expression: $$\int_{-\infty}^{\infty} \left[ \delta(k-k_0)f(k)\right]*f(k)dk=?$$ Here $\delta$ represents the Dirac delta function and $*$ represents the convolution over the $k$ variable. What I think: $$\int_{-\infty}^{\infty} \left[ \delta(k-k_0)f(k)\right]*f(k)dk=\int_{-\infty}^{\infty} \delta(k-k_0)*\left[f(k)f(k)\right]dk = \int_{-\infty}^{\infty} f(k-k_0)^2dk =\int_{-\infty}^{\infty}f(k)^2dk $$ However, I have doubts about the solution as the influence of the Dirac function seems to disappear? AI: Two approaches. Option 1: $$ \int_{-\infty}^{\infty} \left[ \delta(k-k_0)f(k)\right]*f(k)dk=\\ \int_{-\infty}^{\infty} \int_{-\infty}^\infty \left[ \delta(\tau-k_0)f(\tau)\right]f(k-\tau)\, d\tau \,dk = \\ \int_{-\infty}^{\infty} \left[\int_{-\infty}^\infty [f(\tau)f(k-\tau)]\,\delta(\tau-k_0)\, d\tau\right] \,dk = \\ \int_{-\infty}^{\infty} f(k_0)\,f(k-k_0) \,dk = \\ f(k_0)\int_{-\infty}^\infty f(u)\,du. $$ Option 2: note that $\delta(k - k_0)f(k) = f(k_0) \delta(k - k_0)$. It follows that $$ \left[ \delta(k-k_0)f(k)\right]*f(k) = f(k_0) [\delta(k - k_0) * f(k)] = f(k_0)f(k-k_0). $$ That brings us to $\int_{-\infty}^{\infty} f(k_0)\,f(k-k_0) \,dk$, like before.
H: Convergence of an improper irrational integral Determine the convergence or divergence of $$\int^{\infty}_{0}\frac{1}{\sqrt{x^6+1}}dx$$ What I tried: I am trying to prove $(x^2+1)\leq \sqrt{x^6+1}$ for all $x\geq 0$: $(x^2+1)^2\leq (x^6+1)\Longrightarrow x^4+2x^2+1\leq x^6+1$, getting $x^4+2x^2\leq x^6$, which is false. How do I show that the integral converges? Help me, thanks. AI: $$(\forall x\ge 1)\;\; x^6+1>x^6$$ $$\implies \; (\forall x\ge 1)\;\; \sqrt{x^6+1}> \sqrt{x^6}$$ $$\implies \;\;(\forall x\ge 1)\;\; \frac{1}{\sqrt{x^6+1}}<\frac{1}{x^3}$$ but $\int_1^{+\infty}\frac{dx}{x^3} $ converges, thus, by the comparison criterion, $\int_1^{+\infty}\frac{dx}{\sqrt{x^6+1}}$ converges and finally $$\int_0^{+\infty}\frac{dx}{\sqrt{x^6+1}} \;\; \text{is convergent}$$
H: $(X,Y)$ on a triangle Let $(X,Y)$ a random variable uniformly distributed on the triangle $(0,0)$, $(0,1)$, $(1,0)$. Find the density of $(X,Y)$. $\rightarrow f_{X,Y}(x,y)=2$ Determine if $X$ and $Y$ are independent or not, and find $cov(X,Y)$. $\rightarrow$ $X,Y$ are not independent, so $cov(X,Y)=\mathbb{E}[XY]-\mathbb{E}[X]\mathbb{E}[Y]=\frac{1}{6}-\frac{1}{3}(1-y)$. Let $A=({(x,y):x<\frac{1}{2},y>\frac{1}{2},y<\frac{1}{2}+x})$. Find $\mathbb{P}(A)$. $\rightarrow \mathbb{P}(A)=\mathbb{P}[(x<\frac{1}{2}) \cap (\frac{1}{2}<y<\frac{1}{2}+x)]=\int_{0}^{\frac{1}{2}}[\int_{\frac{1}{2}}^{\frac{1}{2}+x}f_{X,Y}(x,y)dy]dx=\frac{1}{4}$ Is it correct? I fear I may have done a few mistakes in the points 2) and 3). Thanks in advance. EDIT: Looking tommik's answer I wrote: if $y<x \Rightarrow 0<y<x<1 \Rightarrow \left\{\begin{matrix} 0<y<1-x\\ 0<x<1\end{matrix}\right.$ so $f_X(x)=\int_{0}^{1-x}2dy=2(1-x)$ if $y>x \Rightarrow 0<x<y<1 \Rightarrow \left\{\begin{matrix} 0<x<1-y\\ 0<y<1\end{matrix}\right.$ so $f_Y(y)=\int_{0}^{1-y}2dx=2(1-y)$ Being $X,Y \sim U(0,1)$: $\mathbb{E}(X)=\int_{0}^{1}xf_X(x)dx=\int_{0}^{1}2x(1-x)dx=\frac{1}{3}=\mathbb{E}(Y)$ and $\mathbb{E}(XY)=\int_{0}^{1}[\int_{0}^{1-x}2xydy]dx$ AI: The variables are not independent as the joint domain is not a rectangle (necessary condition for independence) now I check the covariance you calculated The correct covariance calculation is the following $$E(XY)=2\int_0^1 x dx\int_0^{1-x} y dy=\frac{1}{12}$$ $$E[X]=2\int_0^1 x dx\int_0^{1-x} dy=\int_0^1 2x(1-x)dx=\frac{1}{3}$$ $E[Y]=E[X]$ for self evident symmetry. Thus $$cov(X,Y)=\frac{1}{12}-\frac{1}{9}=-\frac{1}{36}$$
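A Monte Carlo check of the covariance computed in the answer above, as a sketch assuming NumPy: sampling the triangle by folding the unit square gives $E[X]=E[Y]\approx 1/3$ and $\operatorname{cov}(X,Y)\approx -1/36$.

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.random(1_000_000), rng.random(1_000_000)
flip = u + v > 1
x = np.where(flip, 1 - u, u)    # fold the upper half of the unit square onto the triangle
y = np.where(flip, 1 - v, v)

print(x.mean(), y.mean())             # both ~ 1/3
print(np.cov(x, y)[0, 1], -1 / 36)    # ~ -0.02778
```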
H: Convergence of a rearrangement of conditionally convergent series $\{a_n\}$ is a sequence of real numbers. $\sum_{n=1}^{\infty} a_{2n}$ and $\sum_{n=1}^{\infty} a_{2n-1}$ are both conditionally convergent. Is it possible that $\sum_{n=1}^{\infty} a_{n}$ is divergent? I understand that $\sum_{n=1}^{\infty} a_{2n}$ and $\sum_{n=1}^{\infty} a_{2n-1}$ being conditionally convergent means that $\sum_{n=1}^{\infty} (a_{2n}+a_{2n-1})$ is also conditionally convergent, and that $\sum_{n=1}^{\infty} (a_{2n}+a_{2n-1})$ is also a rearrangement of $\sum_{n=1}^{\infty} a_{n}$. I also know that due to the Riemann series theorem there should be such rearrangements that are both conditionally convergent and divergent. I just don't know if specifically $\sum_{n=1}^{\infty} a_{n}$ can be divergent. How could I find such a series? Or show that there isn't any? AI: No. Asserting that the series $\sum_{n=1}^\infty a_{2n}$ converges is equivalent to asserting that the series$$0+a_2+0+a_4+0+a_6+\cdots\tag1$$converges. And asserting that the series $\sum_{n=1}^\infty a_{2n-1}$ converges is equivalent to asserting that the series$$a_1+0+a_3+0+a_5+0+\cdots\tag2$$converges. But the sum of the series $(1)$ and $(2)$ is the series$$a_1+a_2+a_3+a_4+\cdots,$$which therefore converges.
H: How can I calculate the self-convolution-like integration? Assume $f(x)$ is a real, differentiable and continuous function over $\mathbb{R}$, and I want to calculate the integral: $$ \lim_{a\to \infty} \int_0^a f^m(x) \frac{df(a-x)}{d(a-x)}dx. $$ where $m$ is any positive integer. The conditions are: $f(x) = f(-x)$, $f(0) = 1$, $\frac{df}{dx}|_{x=0} = -1$, and $f(\infty)$ is also known, defined as $f_{\infty}$. I tried in this way: $$ \begin{align} &\int_0^a f^m(x) \frac{df(a-x)}{d(a-x)}dx \\ &= -\int_0^a f^m(x) \frac{df(a-x)}{dx}dx \\ &= -\frac{d}{dx}\int_0^a f^m(x) f(a-x) dx \\ \end{align} $$ Here I use the relation that differentiation is distributive over the convolution. I am also not sure whether this calculation is correct. Then I can not go any further. Could someone give me some suggestions on this problem please? AI: $$f^m(x)\frac{df(a-x)}{d(a-x)}dx\tag{1}$$ notice that: $$\frac{d(a-x)}{dx}=-1$$ and so: $$\frac{1}{d(a-x)}=-\frac 1{dx}$$ which gives: $$(1)=-f^m(x)df(a-x)$$ and so your integral is: $$-\int_0^af^m(x)df(a-x)$$ if we let $u=(a-x)$ then $dx=-du$ and the limits swap ($x=0$ gives $u=a$ and $x=a$ gives $u=0$), so: $$I=-\int_a^0f^m(a-u)df(u)=\int_0^af^m(a-u)f'(u)du$$ as $a\to\infty$ I believe we have: $$I=\int_0^\infty f_\infty^mf'(u)du=f_\infty^m\left[f(u)\right]_0^\infty=f_\infty^{m+1}-f_\infty^mf(0)=f_\infty^{m+1}-f_\infty^m$$
H: Line $CD$ meets the leg $\overline{AB}$ at $P$ and line $BF$ meets the leg $\overline{AC}$ at $Q$. Prove $|AP|=|AQ|$. $\triangle ABC$ is a right-angled triangle at $A$. There are squares $AEDB$ and $ ACFG$ described from the outside on the legs $\overline{AB}$ and $\overline{AC}$, respectively. Line $CD$ meets the leg $\overline{AB}$ at $P$ and line $BF$ meets the leg $\overline{AC}$ at $Q$. Prove $|AP|=|AQ|$. My attempt: If we assume $|AP|=|AQ|$ indeed, then $\measuredangle APQ=\measuredangle PAD=45^\circ\implies\overline{PQ}\parallel\overline{FD}$. Then $\triangle APQ$ and $\triangle PQF$ have a common edge $\overline{PQ}$ and equal altitudes and, therefore, $\operatorname{Area}(\triangle APQ)=\operatorname{Area}(\triangle PQF)$. Let $S$ be the intersection point of $BF$ and $CD$. Then $$\boxed{\operatorname{Area}(\triangle DSQ)=\operatorname{Area}(\triangle PSF)}\tag 1$$ Since $BD\parallel AC$, $\triangle DBQ$ and $\triangle DBC$ with a common edge $\overline{BD}$ have equal altitude and, therefore, $\operatorname{Area}\triangle DBQ=\operatorname{Area}\triangle DBC$ $$\implies\operatorname{Area}(\triangle DSQ)=\operatorname{Area}(\triangle DBQ)-\operatorname{Area}(\triangle DBS)=\operatorname{Area}(\triangle DBC)-\operatorname{Area}(\triangle DBS)=\operatorname{Area}(\triangle BCS)$$ From $(1)$, it follows: $$\boxed{\operatorname{Area}(\triangle PSF)=\operatorname{Area}(\triangle BCS)}$$ Picture: but I'm not sure if that information is relevant in this proof. May I ask for advice on how to solve this task? Thank you in advance! AI: Let $AB=c$ and $AC=b$. Thus, $$\frac{AP}{DE}=\frac{AC}{EC},$$ which gives $$AP=\frac{bc}{b+c}.$$ By the same way $$AQ=\frac{bc}{b+c}$$ and we are done!
H: Infinitely iterating the cosine function yields the Dottie number For simplicity’s sake, Let’s define our function to be cos(x). For any value of x, iterating this function will yield some constant, take a calculator and try it. But quite surprisingly, I recognized that number to be the solution to the equation cos(x)=x. I do have an intuitive but not so rigorous answer though, and I may present it: We will start by this really obvious equality cos(cos(cos(cos(...cos(x)=cos(cos(cos(cos(...cos(x) (note that this iteration is infinite somehow, and it will be clear why in the next step) Let’s set y=cos(cos(cos(...cos(x)) Substituting in our new variable yields this transcendental equation cos(y)=y so y= 0.73908513322... (which is known as the Dottie number) But does this apply to any function f(x)? I have so many questions. AI: For a given function $f : X \to X$, we call $x^*$ a fixed point iff $f(x^*) = x^*$. You've observed that $\cos : \Bbb R \to \Bbb R$ has a fixed point $x_0=0.73908\ldots$. Moreover, since for $\cos$ there is some interval $I$ containing $x_0$ such that $\cos^{(n)} x \to x_0$ (as $n \to \infty$), for all $x \in I$ (in fact, we can take $I = \Bbb R$ here), we call $x_0$ an attracting fixed point of $\cos$. Some functions have no fixed points, let alone attracting ones: For example, for $$g: \Bbb R \to \Bbb R, \qquad g(x) = x + 1,$$ there is no value $x^*$ such that $g(x^*) = x^*$. (Indeed, $g^{(n)}(x) = x + n$, so $g^{(n)}(x) \to \infty$ for all $x \in \Bbb R$.) Other functions have more than one fixed point. For example, the fixed points of $$h: \Bbb R \to \Bbb R, \qquad h(x) = x + \sin x,$$ are $x^* = k \pi$, $k \in \Bbb Z$, but $k \pi$ is an attracting fixed point only when $k$ is odd. There are conditions that guarantee a given function $f : X \to X$ has some fixed point. A special case of the Banach Fixed-Point Theorem says that if $f : \Bbb R \to \Bbb R$ is a differentiable function and $I$ is some interval, and there is some $\alpha \in [0, 1)$ such that $|f'(x)| < \alpha$ for all $x \in I$, then $f$ has some attracting fixed point $x^* \in I$ and $f^{(n)}(x) \to x^*$ for all $x \in I$. We cannot apply this result immediately to $\cos : \Bbb R \to \Bbb R$, since $\left\vert\cos'\left(\frac{\pi}{2}\right)\right\vert = 1$, so for $I = \Bbb R$ there is no $\alpha$ satisfying the above condition. But $\cos(\Bbb R) = [-1, 1]$, and on that interval, $|\cos'(x)| \leq \sin 1 < 1$, so the theorem tells us that there is an attracting fixed point $x_0$ in $[-1, 1]$, and we can conclude that $\cos^{(n)}(x) \to x_0$ for all $x$.
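A tiny numeric illustration of the attracting fixed point discussed above, using only the standard library:

```python
import math

x = 1.0                      # any real starting value works
for _ in range(100):
    x = math.cos(x)

print(x)                     # 0.7390851332..., the Dottie number
print(math.cos(x) - x)       # ~0: it is a solution of cos(x) = x
```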
H: Which of these functions are in the range of $T$? Let $C[0,1]$ be the vector space of continuous functions on the interval $[0,1]$, and let $T: C[0,1] \to C[0,1]$ be the linear transformation that takes a function $f(x)$ to $\int_0^x f(t) \ dt$. Which of the following functions are in the range of $T$? (Select all that apply.) Select one or more: a. $g(x) = x + 1$ b. $g(x) = x^2$ c. $g(x) = \cos x$ d. $g(x) = \sin x$ e. $g(x) = \dfrac{1}{x+2}$ I'm confused on how to approach questions with continuous functions, so can I just have a hint or two about how to approach this problem. AI: For each $f\in C[0,1]$, $T(f)(0)=\int_0^0f(t)\,\mathrm dt=0$. So, that excludes a., c., and e. Now, concerning b., can you think about some function $f\in C[0,1]$ such that$$(\forall x\in[0,1]):\int_0^xf(t)\,\mathrm dt=x^2?$$And what about d.?
H: Creating new metrics by combination of other metrics Once we have some metrics, e.g., $d_1$ and $d_2$, we can perform some operations to create new metrics $d$ based on the former ones. For example: $d=\dfrac{d_1}{1+d_1}$ $d= \text{min}(d_1, 1)$ (See equivalent metrics in wikipedia) $d= d_1 + d_2$ (See question here) Is there general rule(s) for creating new metrics from given metrics? Which are the most general rules for creating new metrics from given metrics? AI: Suppose $d = f(d_1, d_2)$, i.e. $d(x,y) = f(d_1(x,y), d_2(x,y))$. The only requirements for a metric are: $d \ge 0$, $d(x,y)=0$ iff $x=y$, $d(x,y) = d(y,x)$, and $d(x,z) \le d(x,y) + d(y,z)$. Suppose these are metrics on a set $M$. Let $S$ be the set of pairs $\{(d_1(x,y), d_2(x,y)): x,y \in M\}$. This is a subset of $[0,\infty)^2$. For (1) you need $f(a,b) \ge 0$ for all $(a,b) \in S$. For (2) (noting that $(0,0) \in S$ but $(0,x) \notin S$ for $x \ne 0$), you need $f(0,0) = 0$ and $f(a,b) > 0$ for all other $(a,b) \in S$. (3) is automatically true. (4) is more complicated, because in principle it requires knowing the set of sextuples $T = \{ (d_1(x,y), d_1(y,z), d_1(x,z), d_2(x,y), d_2(y,z), d_2(x,z)): x,y,z \in M\}$. You need $$ f(t_3, t_6) \le f(t_1, t_4) + f(t_2, t_5)\ \text{for all}\ (t_1,t_2,t_3,t_4,t_5,t_6) \in T$$ A sufficient condition would be $$ f(t_3, t_6) \le f(t_1, t_4) + f(t_2, t_5)\ \text{whenever}\ t_3 \le t_1 + t_2 \ \text{and}\ t_6 \le t_4 + t_5 $$
H: question about limes inferior/superior Does $\lim\limits_{n \to \infty}\sup x_n \leq \lim\limits_{n\to \infty} \inf x_n$ implies $\lim\limits_{n \to \infty}\sup x_n = \lim\limits_{n\to \infty} \inf x_n=\lim\limits_{n \to \infty} x_n$? AI: Yes, as by definition: $$ \inf_{k \geq n} x_k \leq x_n \leq \sup_{k \geq n} x_k $$ and: $$ \lim_{n \to \infty} \inf_{k \geq n} x_k = \liminf_{n \to \infty} x_n \leq \limsup_{n \to \infty} x_n = \lim_{n \to \infty} \sup_{k \geq n} x_k $$ Your first inequality forces equality for the second identity, so by squeeze theorem, the sequence must converge.
H: Matrix differential equation set to zero I have a bijective continuous function $f$, which maps an $(n\times1)$ dimensional column vector $t=[t_1,...t_n]'$ to another $(n\times1)$ dimensional column vector $f(t)=[f_1(t),...f_n(t)]'$. I define the matrix derivative $\frac{df(t)}{dt}$ as follows: $$\frac{df(t)}{dt}=\begin{bmatrix}\frac{\partial f_1(t)}{\partial t_1} & ... & \frac{\partial f_n(t)}{\partial t_1} \\ \vdots & \ddots & \vdots \\ \frac{\partial f_1(t)}{\partial t_n} & ... & \frac{\partial f_n(t)}{\partial t_n} \end{bmatrix}$$ Let $a$ be a fixed vector of dimension $(n \times 1)$. I want to find a $t_0$ satisfying the below, expressed in terms of $A$ and $f$ (and the derivatives of $f$ if necessary): $$\frac{d f(t)}{d t}\bigg|_{t=t_0}a-f(t_0)=0$$ Where the $|_{t=t_0}$ represents that the derivative is being evaluated at $t=t_0$. Is it possible to find an analytic solution to this problem? If so, how? And where can I read more on this sort of problem? Perhaps there is a tensor based solution? AI: This isn't even feasible when $n = 1$. Let $f$ be any horrible transcendental expression or polynomial with roots that cannot be expressed in radicals you like, say $$ f(t) = \begin{pmatrix} t_1^5 - t_1 - 1 \end{pmatrix} \text{.} $$ Then your equation is $$ (5 t_0^4 - 1)a - (t_0^5 - t_0 - 1) = 0 \text{,} $$ the same thing as $$ -t_0^5 +5at_0^4 +t_0-a+1 = 0 \text{,} $$ which, for instance, does not have roots representable by radicals for any $a$ which is an integer in $[-10,10]$ except $a = 0$. So in the lowest dimensional case, there is no analytical formula that works for general $a$. The best one can say is "whatever the roots of this polynomial are". An example in the transcendental direction is to take $f(t) = \sin \cos t_1$. Then you want to solve $$ -a (\cos \cos t_0)\sin t_0 - \sin \cos t_0 = 0 \text{,} $$ which is hopeless. Without serious constraints on the functions appearing in $f$, the problem you pose contains the general version of "find the zeroes of an arbitrary function", which is hopeless.
H: Prove that $a$ is primitive root modulo $p^2$ I really need to answer this question quickly for my homework due tomorrow: Let $a,p \in \Bbb N$, $p$ is prime, $a$ is a primitive root modulo $p$ that $p^2\nmid (a^{p-1}-1)$. Prove that $a$ is primitive root modulo $p^2$. My thoughts: I proved that $a^{\phi (p^2)} = a^{{(p-1)}^p} \equiv 1\ mod\ p^2$ but I don't know how to continue from here. Thank you for helping AI: Denote $$n:=ord_a(p^2)$$ which is the smallest positive integer $\ n\ $ satisfying $$a^n\equiv 1\mod p^2$$ Since $\ \varphi(p^2)=p(p-1)\ $ , we know $\ n\mid p(p-1)\ $ We also know $\ n\nmid p-1\ $ otherwise we would have $\ a^{p-1}\equiv 1\mod p^2\ $ So, $\ n\ $ must be a multiple of $\ p\ $. Assume $\ n=kp\ $ with $\ k<p-1\ $ Then, we would have $\ a^{pk}=(a^p)^k\equiv a^k\equiv 1\mod p\ $ , but this is impossible since $\ a\ $ is a primitive root mod $\ p\ $. Hence, we must have $\ n=p(p-1)\ $
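The statement is easy to sanity-check for small primes (a Python sketch written for this purpose; the multiplicative-order helper below is hand-rolled, not taken from any library):

```python
def order(a, m):
    """Multiplicative order of a modulo m (assumes gcd(a, m) = 1)."""
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

for p in (3, 5, 7, 11, 13, 17, 19):
    for a in range(2, p):
        if order(a, p) == p - 1 and pow(a, p - 1, p * p) != 1:
            # hypotheses of the exercise hold, so a should generate mod p^2
            assert order(a, p * p) == p * (p - 1)
print("checked: each such a is a primitive root modulo p^2")
```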
H: in which bases we have $\frac 1x=0.\bar x$? In base ten we obtain the fact:$$1/3=0.\bar3=0.3333333333333\dots$$I want to know in which other bases there is a number like that and what is that number. AI: You're actually asking for which base $a$ there is a number $b$ such that:$$\frac 1b=\sum_{n=1}^\infty ba^{-n}$$ The right-hand side is a geometric series, $\sum_{n=1}^\infty ba^{-n}=b\cdot\frac{1/a}{1-1/a}=\frac{b}{a-1}$, so the condition becomes $$\frac 1{b^2}=\frac{1/a}{1-1/a}=\frac 1{a-1},$$ that is, you want $$b=\sqrt{a-1}.$$ I guess you want $b$ to be an integer, so you need a base which is one above a square of a natural number (such as $2,5,10,17,\dots$), and the corresponding $b$ is $\sqrt{a-1}$.
H: Question about the definition of the right limit Let $\lim_{x\rightarrow 5^+}f(x)=7$. Then: a) exists $a>5: \forall x \in [5,a): f(x)>6$ b) exists $a\in [4,7): \forall x \in (5,a]: f(x)>6$ c) exists $a\in[5,7]: \forall x \in [5,a): f(x)<8$ d) exists $a>7: \forall x \in [7,a): f(x)<6$ for the definition of limit: $\forall r>0$ exists $\epsilon >0: 5<x<5+\epsilon$ then $7-r<f(x)<7+r$ so the d) can be false. AI: Use the definition with strict inequalities: for every $\varepsilon>0$ there exists $\delta>0$ such that $$ 0<x-5< \delta \Leftrightarrow 5<x<5+\delta $$ implies $$ |f(x)-7| < \varepsilon \Leftrightarrow 7 - \varepsilon < f(x)< 7 + \varepsilon. $$ Now set $\varepsilon =1$: there is a $\delta>0$ with $6 <f(x)<8$ for all $x\in(5,5+\delta)$. So a) is correct, and, as you observed, d) need not hold, since the hypothesis says nothing about the behaviour of $f$ near $7$.
H: Sets related combinatorics How many subsets of the set {1, 2, 3, 4, . . . , 30} have the property that the sum of the elements of the subset is greater than 232? I really have no idea how to move ahead with this problem. AI: Gathering comments and including the punchline: Note that $1+2+3+\dots+30 = \dfrac{30\times 31}{2}=15\times 31 = 465$ is the sum of all elements in the set. Recognize that $232+233=465$, so $232$ is the floor of half of the total sum of all elements. We conjecture that the subsets whose sum is strictly more than $232$ are in bijection with the subsets whose sum is less than or equal to $232$. Indeed, suppose you have a set whose sum is $s> 232$. Then its complement has sum $465-s<233$, and vice versa. As such, we learn that the number of subsets with sum more than $232$ is exactly equal to the number of subsets with sum less than or equal to $232$, and so the number of subsets whose sum is more than $232$ is exactly half of the total number of subsets, $2^{30}$. The final answer is then: $$2^{29}$$
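For readers who want to double-check, a dynamic-programming count over all possible subset sums confirms $2^{29}$ (a Python sketch; the table size $466$ is just the total sum $465$ plus one):

```python
# counts[s] = number of subsets of {1, ..., 30} whose elements sum to s
counts = [0] * 466          # possible sums are 0, 1, ..., 465
counts[0] = 1
for k in range(1, 31):
    for s in range(465, k - 1, -1):
        counts[s] += counts[s - k]

above = sum(counts[233:])   # subsets with sum > 232
print(above, above == 2**29)  # 536870912 True
```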
H: b) If $ det A = 0 $ then $ Kerf $ contains a vector different from the zero vector Let $f : R^n → R^n$ be a linear mapping and let $A = [f]$ be the matrix of f. Decide whether the following statements are true or not: a) If $ Kerf $ contains a vector different from the zero vector, then $ det A = 0 $. b) If $ det A = 0 $ then $ Kerf $ contains a vector different from the zero vector My opinion is that if $ det A = 0 $ it means vectors which span $ Kerf $ are dependent and contains vector different from the zero vector but I am not sure. AI: Both (a) and (b) are true. For (a): if $v \in\ker f$ and $v \neq 0$, then $Av=0$ thus $0$ is an eigenvalue and determinant is $0$. For (b): if $\det A=0$, then $A$ is not invertible. Thus columns of $A$ are not linearly independent, hence $Ax=0$ has a non-trivial solution. So $\ker f$ must contain a non-zero vector.
H: Calculate the distance between 2 points on a sinusoidal curve I'm fairly novice at maths so please excuse my imperfect description of the problem at hand. I'm trying to find the formula that would calculate the length of the $y = \sin(\pi x)$ curve contained between to x points. See this curve for reference, here with a $x$ range = $[0.2, 0.7]$: sin(πx) curve I would like to know the distance on the red curve between $x = 0.2$ and $x = 0.7$. AI: As people have mentioned - indeed this problem screams out for Calculus; and in fact we have a formula for calculating the exact thing you are talking about - it's called the "arc length". Generally, the arc length does not have a nice closed form (by closed form, I mean it cannot be written nicely in terms of other numbers we know, like ${\pi,e,\sqrt{2}}$ etc etc). In fact I believe this is one of those cases. Not to say it always won't have a nice answer, but with a lot of functions, it mostly doesn't. Ultimately, if you just want a numerical value, you can simply approximate the arc length (to arbitrary precision) by "slicing up" the sine curve between ${x=0.2,0.7}$ into smaller and smaller slices and approximating it in each slice to be a straight line (being the hypotenuse of a right-angled triangle); the following graphic explains what I mean with $4$ slices for example: And so the arc length overall can be approximated by using the Pythagorean theorem, and summing up those hypotenuses (do ${\sqrt{\Delta x^2 + \Delta y^2}}$ for each slice then add them up). As you do more and more slices, the answer will be closer and closer to the true value (and each slice will obviously be becoming smaller). And this is essentially what Calculus does! It takes what's called the "limit" as we use more and more of these slices (and as a result the slices are getting closer and closer to $0$), and gives us a final answer at the end to what it's "approaching". If you take a look at the Calculus formula for arc length $${\int_{x=a}^{x=b}\sqrt{1+\left(\frac{dy}{dx}\right)^2}dx}$$ you can see the similarity between what we were doing with the whole square root stuff and the Calculus answer. And that big long ${S}$ shaped symbol essentially means we are summing up pieces that are getting infinitely small (as these slices would). I didn't know whether you knew Calculus, so I tried to write this answer in such a way that it'd be comprehensible either way. If you do know Calculus - now you have a general formula for any function above you can use (as the formula suggests though, the function needs to be differentiable so it can be locally approximated to be linear. If you don't understand what this means, don't worry) :D. I hope this helps explain
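The slicing idea above can be tried directly (a Python sketch; the slice counts are arbitrary, and increasing them shows the estimate stabilising near $0.82$):

```python
import math

def arc_length(a, b, slices):
    """Polygonal approximation of the length of y = sin(pi*x) over [a, b]."""
    total = 0.0
    for i in range(slices):
        x0 = a + (b - a) * i / slices
        x1 = a + (b - a) * (i + 1) / slices
        total += math.hypot(x1 - x0, math.sin(math.pi * x1) - math.sin(math.pi * x0))
    return total

for n in (4, 40, 400, 4000):
    print(n, arc_length(0.2, 0.7, n))   # the values settle near 0.82
```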
H: Let $p$ be a prime number. For all $1 \le k,r < p$, there exists $n \in \mathbb N$ such that $nk \equiv r \pmod{p}$ I'm solving a problem in group theory. A key argument in my attempt is to assess if the following statement is true. Let $p$ be a prime number. For all $1 \le k,r < p$, there exists $n \in \mathbb N$ such that $nk \equiv r \pmod{p}$. I tested for some small numbers on MS Excel for which it holds true. Could you please elaborate if it's actually true? Thank you so much! AI: Consider the $p-1$ numbers $nk\bmod p$ for $n=1,2,\ldots,p-1$. These numbers are all different, because if $nk=mk\bmod p$, that means that $(n-m)k$ is a multiple of $p$, which can only happen if $n-m$ is a multiple of $p$ (because $p$ is prime). This in turn can only happen if $m=n$ (because both $m$ and $n$ are less than $p$). Note also that none of them can be zero, for a similar reason. So the $p-1$ numbers $nk\bmod p$ are equal to the $p-1$ numbers $1,2,\ldots,p-1$ in some order. Hence one of them must be equal to $r$.
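A brute-force verification for small primes (Python sketch; it simply checks that $n \mapsto nk \bmod p$ hits every residue $1,\dots,p-1$):

```python
for p in (2, 3, 5, 7, 11, 13, 17):
    for k in range(1, p):
        residues = {n * k % p for n in range(1, p)}
        assert residues == set(range(1, p))   # every r in {1,...,p-1} is hit
print("checked: n*k mod p runs over all nonzero residues")
```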
H: $\left(\sum_i a_i\right)^2\ge (n-1)\sum_i a_i^2 + b\implies 2a_i a_j \ge b/(n-1) \quad (\forall i\ne j)$ Let $a_1, \dots, a_n(n\ge 2), b$ be real numbers, and $$ \left(\sum_{i=1}^n a_i\right)^2\ge (n-1)\sum_{i=1}^n a_i^2 + b, $$ Then we have $$ 2a_i a_j \ge b/(n-1) \quad (\forall i\ne j) $$ $n=2$, clear. Suppose that for some $n>2$, we have $$ \left(\sum_{i=1}^n a_i\right)^2\ge (n-1)\sum_{i=1}^n a_i^2 + b\implies 2a_i a_j \ge b/(n-1) \quad (\forall i\ne j). $$ I want to show that $$ \left(\sum_{i=1}^{n+1} a_i\right)^2\ge n\sum_{i=1}^{n+1} a_i^2 + b\implies 2a_i a_j \ge b/n \quad (\forall i\ne j), $$ but it is a little diffcult for me to show $ \left(\sum_{i=1}^{n+1} a_i\right)^2\ge n\sum_{i=1}^{n+1} a_i^2 + b\implies \left(\sum_{i=1}^{n} a_i\right)^2\ge (n-1)\sum_{i=1}^{n} a_i^2 + b $ first. Could you please give me a better solution? Thanks! AI: Let $n>2$, $b>2(n-1)a_ia_j,$ and $\sum\limits_{k\neq i,k\neq j}a_k=t$ Thus, $$\left(\sum_{k=1}^na_k\right)^2\geq(n-1)\sum_{k=1}^na_k^2+b>(n-1)\sum_{k=1}^na_k^2+2(n-1)a_ia_j,$$ which is a contradiction because we'll prove now that $$(n-1)\sum_{k=1}^na_k^2+2(n-1)a_ia_j\geq\left(\sum_{k=1}^na_k\right)^2.$$ Indeed, by C-S $$(n-1)\sum_{k=1}^na_k^2+2(n-1)a_ia_j=(n-1)(a_i+a_j)^2+\frac{n-1}{n-2}\cdot(n-2)\sum\limits_{k\neq i,k\neq j}a_k^2\geq$$ $$\geq(n-1)(a_i+a_j)^2+\frac{n-1}{n-2}\cdot t^2.$$ Thus, it's enough to prove that: $$(n-1)(a_i+a_j)^2+\frac{n-1}{n-2}\cdot t^2\geq(a_i+a_j+t)^2$$ or $$(n-2)(a_i+a_j)^2+\frac{1}{n-2}\cdot t^2\geq2(a_i+a_j)t,$$ which is true by AM-GM. Id est, it's enough to solve our problem for $n=2$, which you made.
H: Primitive root modulo $2p$ The question: Let $a,p \in \Bbb N$,$ \ $ $p$ is an odd prime, $a$ is a primitive root modulo $p$. Prove that: if $a$ is odd, $a$ is a primitive root modulo $2p$; if $a$ is even, $a+p$ is a primitive root modulo $2p$. Thank you AI: $\textbf{First one:}$ Since $\phi(2p)=p-1$, we first prove that $a^{p-1} \equiv 1 \pmod{2p}$. But $a^{p-1} \equiv 1 \pmod{p}$ and $a^{p-1} \equiv 1 \pmod{2}$ (as $a$ is odd), so the result follows immediately from the Chinese remainder theorem. If $a^d \equiv 1 \pmod{2p}$ for some smaller $d$, then $2p \mid a^d-1 \implies p \mid a^d-1 \implies a^d \equiv 1 \pmod{p}$, a contradiction. For the second one, first we prove that $(a+p)^{p-1} \equiv 1 \pmod{2p}$. But $(a+p)^{p-1} \equiv a^{p-1} \equiv 1 \pmod{p}$ and $(a+p)^{p-1} \equiv 1 \pmod{2}$ (as $a+p$ is odd), so the result follows immediately from the Chinese remainder theorem. If $(a+p)^d \equiv 1 \pmod{2p}$ for some smaller $d$, then $2p \mid (a+p)^d-1 \implies p \mid (a+p)^d-1 \implies p \mid a^{d}-1\implies a^d \equiv 1 \pmod{p}$, a contradiction.
H: Is this notation for a domain standard? I guess they are defining domain in a very general way. So you can say a function $f: \mathbb{R} \rightarrow \mathbb{R}$ is taking values (as a function) in $\mathbb{R}^{\mathbb{R}}$. Is this a common thing? I read a lot of mathematics (applied generally) and I have not seen this before, at least that I can remember. AI: What I have seen is this: A function $f$ on $\mathbb{R}$ is a relation such that $f :=$ {$(a,b): a \in \mathbb{R}, b \in \mathbb{R}$} $\subseteq \mathbb{R} \times \mathbb{R}$. However, as $f$ is a subset of $\mathbb{R}^2$, we interpret the function $f$ as assigning to every $a$ in $\mathbb{R}$ exactly one $b$ in $\mathbb{R}$ and write $f : \mathbb{R} \to \mathbb{R}$ such that $f(a):=b$. As mentioned in the comments, for any sets $X$ and $Y$, the set of all functions $f : X \to Y$ is denoted $Y^X$.
H: Area of $\{(x,y)\in \mathbb{R}^2:\ 1\leq x y\leq 2, x \leq y \leq 2x\}$ Trying to evaluate the area spanned by the set $\{(x,y)\in \mathbb{R}^2:\ 1\leq x y\leq 2, x \leq y \leq 2x\}$, I use the following reasoning: \begin{equation} 1\leq x y \leq 2\ \Longrightarrow\ \frac{1}{y}\leq x\leq \frac{2}{y} \end{equation} But $x\leq y\leq 2x$, so \begin{equation} \frac{1}{y}\leq y \leq \frac{4}{y}\ \Longrightarrow\ 1 \leq y^2 \leq 4 \Longrightarrow y \in [-2,-1] \cup [1,2] \end{equation} To calculate the total area, we integrate the area element $dx\,dy$ over these boundaries. Now, given that the area spanned by a set of points $(x,y)$ satisfying $\frac{1}{y}\leq x\leq \frac{2}{y}$ is the same for either $y\in [-2,-1]$ or $y \in [1,2]$, I conclude that the total area is twice the area spanned by $\{(x,y):\ \frac{1}{y}\leq x\leq \frac{2}{y}, y\in [1,2] \}$. Hence, the total area is \begin{equation} 2\int_1^2\int_{1/y}^{2/y}dx\,dy = 2 \log{2} \end{equation} However I am told the correct value of the area is $\frac{1}{2}\log{2}$. I wonder therefore what is wrong with my reasoning and how do we get the correct answer. AI: Besides the simple and nice way Carlo described, you can do it directly...but observe you have an irregular domain, so you must find intersection points and subdivide the domain. Note first that the conditions force $x,y>0$: if $x<0$, then $x\le y\le 2x$ would require $y\ge x$ and $y\le 2x<x$ at once, which is impossible. So there is no second branch and nothing to double. Also note that, for fixed $y$, the constraint $x\le y\le 2x$ means $\frac y2\le x\le y$, so integrating $x$ over the whole interval $\left[\frac1y,\frac2y\right]$, as you did, overcounts. The corners of the region are $$\begin{cases}y=2x=\frac1x\implies x=\frac1{\sqrt2}\\{}\\ y=2x=\frac2x\implies x=1\\{}\\ y=x=\frac1x\implies x=1\\{}\\ y=x=\frac2x\implies x=\sqrt2\end{cases}$$ and thus $$A=\int_{1/\sqrt2}^{1}\int_{1/x}^{2x} dy\,dx+\int_{1}^{\sqrt2}\int_x^{2/x} dy\,dx=\int_{1/\sqrt2}^{1}\left(2x-\frac1x\right)dx+\int_{1}^{\sqrt2}\left(\frac2x-x\right)dx=$$ $$=\left.\left(x^2-\log x\right)\right|_{1/\sqrt2}^{1}+\left.\left(2\log x-\frac12x^2\right)\right|_{1}^{\sqrt2}=\left(1-\frac12-\frac12\log2\right)+\left(\log2-1+\frac12\right)=\color{red}{\frac12\log2}$$ Of course, much easier and faster with a change of variables...
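A numerical cross-check of the value $\frac12\log 2\approx0.3466$, following the same subdivision of the domain (a Python sketch; the midpoint rule and the number of sample points are arbitrary choices):

```python
import math

def width(x):
    """Vertical extent of the region at abscissa x."""
    lo, hi = max(x, 1 / x), min(2 * x, 2 / x)
    return max(hi - lo, 0.0)

a, b, n = 1 / math.sqrt(2), math.sqrt(2), 100000
h = (b - a) / n
area = sum(width(a + (i + 0.5) * h) for i in range(n)) * h   # midpoint rule
print(area, 0.5 * math.log(2))    # both ~ 0.34657
```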
H: Prove that $\int |\nabla u||\nabla v|+|u||v|\leq |u|_{H^1}|v|_{H^1}$ Prove that $\int |\nabla u||\nabla v|+|u||v|\leq |u|_{H^1}|v|_{H^1}$. Here $H^1$ is Sobolev Space. Then my attempt is first (im not sure) apply Inequality'Holder so $ \int |\nabla u||\nabla v|+|u||v|\leq |\nabla u|_{L^2}|\nabla v|_{L^2}+| u|_{L^2}|v|_{L^2}$. My second step (im not sure) i want to apply $ab\leq \frac{a^2+b^2}{2}$ but with this my last inequality $\leq \frac{|u|_{H^1}+ |v|_{H^1}}{2}$. I will apprecciate one hint for finish the statement, thank you!!! Im working with $|U|_{H^1}^{2}=|u|_{L^2}+|\nabla u|_{L^2}$ AI: Applying AM-GM as in your idea, you get $$\int (|\nabla u| |\nabla v| + |u| |v|) \le \frac{1}{2}(\|\nabla u\|_{L^2}^2 + \|u\|_{L^2}^2+ \|\nabla v\|^2_{L^2} + \|v\|^2_{L^2}) =\frac{1}{2} (\|u\|_{H^1}^2 + \|v\|_{H^1}^2). $$ This actually renders your Hölder's inequality step unnecessary. Now let $c>0$ be an arbitrary constant and apply the above inequality with $u$ replaced by $cu$ and $v$ replaced by $c^{-1} v$. This yields the improvement $$\int (|\nabla u| |\nabla v| + |u| |v|) \le \frac{1}{2}(c^2 \|u\|_{H^1}^2 + c^{-2} \|v\|_{H^1}^2).$$ Then optimize over $c$; the minimum on the right side is attained when $$c = \sqrt{\frac{\|v\|_{H^1}}{\|u\|_{H^1}}}$$ in which case you have as desired $$\int (|\nabla u| |\nabla v| + |u| |v|) \le \|u\|_{H^1} \|v\|_{H^1}.$$ This standard trick is sometimes called a "Peter-Paul inequality", after the saying about robbing Peter to pay Paul: you make one term smaller at the expense of making the other larger.
H: A basic property of convex sequences A sequence $\{a_n:n\in\mathbb{Z}_+\}\subset(0,\infty)$ is convex if $$ a_{n-1}+a_{n+1}-2a_n\geq0,\quad n\geq1 $$ This is equivalent to saying that $\{a_n-a_{n+1}:n\in\mathbb{Z}_+\}$ is a monotone non-increasing sequence. Convex sequences that converge to $0$ (i.e. $\lim_na_n=0$) are useful in determining integrability properties of trigonometric series. In this setting, it is obvious that $$ \sum_n(a_n-a_{n+1})=a_0 \tag{1}\label{one} $$ What I am trying to determine is why $\lim_nn(a_n-a_{n+1})=0$ holds. I can see this is not too surprising for if $b_n=a_n-a_{n+1}$, then from the convergence of $\eqref{one}$ it follows that $b_n\geq0$ and so, $\sum_k 2^kb_{2^k}<\infty$ by a theorem of Cauchy. Hence $\lim_n nb_n=0$ holds along the subsequence $n_k=2^k$. A hint will be appreciated. AI: You may refer to this: Series converges implies $\lim{n a_n} = 0$ HINT Let $b_n = a_n - a_{n+1}$. $(b_n)$ is non-negative and non-increasing, and as you mentioned, $\sum_n b_n$ converges. Try finding an upper bound of $n b_n$ that consists of a sum of $n$ terms of the sequence. Now show that this upper bound converges to $0$ as $n$ goes to infinity, using the fact that the series $\sum_n b_n$ converges. SOLUTION Since $(b_k)$ is non-increasing, $0 \leq n\, b_{2n} \leq \sum_{k=n+1}^{2n} b_k = \sum_{k=0}^{2n} b_k - \sum_{k=0}^{n} b_k \xrightarrow[n \to \infty]{} 0$, so $2n\, b_{2n} \to 0$; moreover $(2n+1)\,b_{2n+1}\leq (2n+1)\,b_{2n} = 2n\,b_{2n}+b_{2n}\to 0$, so $n\,b_n \to 0$ along the full sequence.
H: Littlewood's Second Principle : Simple function is continuous on restricted domain I am stuck on understanding one point Proposition 11 (Real Analysis by Royden and Fitzpatrick, 4th edition, Page-66) Proposition 11: Let$ f$ be a simple function defined on a set E of finite measure. Then for each $ε>0$, there is a continuous function g on R and a closed set F contained in E for which $f=g$ on F and $m(E−F)<ε$. Proof(in short): Let $a_1,a_2,…,a_n$ be the finite number of distinct values taken by $f$, and let them be taken on the sets $E_1,E_2,…,E_n$ respectively. The collection ${{E_n}}_{n=1}^{k}$ is disjoint since the $a_k$'s are distinct. According to Theorem 11 of Chapter 2, we may choose closed sets $F_1,F_2,…,F_n$ such that for each index $k,1≤k≤n$, $F_k⊆E_k$ and $m(E_k−F_k)<ε/n$. Define $ g$ on $F$ to take the value $a_k$ on $F_k$ for $1≤k≤n$. Since the collection ${F_k}_{k=1}^{n} $ is disjoint, $g$ is properly defined Now $m(E-F) =m(\bigcup_{i=1}^{n} [E_k-F_k])=\sum_{n=1}^{n} {(E_k−F_k)<ε}$ The Part i am having trouble is Assuming $ m(E_k−F_k)<ε/n$ why $\sum_{n=1}^{n} {(E_k−F_k)<ε} $ ? How $ε+ ε/2+ ε/3 +.....=ε/n<ε?$ AI: You are mixing the index with the number of elements. The sum is $$ \sum_{k=1}^n\frac{\varepsilon}n=\frac{\varepsilon}n\,\sum_{k=1}^n1=\varepsilon. $$
H: Probability of getting 1 Head and 2 Tails with a Fair Coin If a fair coin is tossed three times what is the probability that heads shows once and tails shows twice? I thought the answer would be $1/4 + 1/2$ because the probability of getting heads is $1/2$ and the probability of getting tails twice is $1/2 * 1/2 = 1/4,$ but this is incorrect. I am not sure what I did wrong. AI: This can happen in $\dbinom{3}{1}$ ways. Therefore probability $= 3 \times \dfrac{1}{2} \times \dfrac{1}{2}\times \dfrac{1}{2}$
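Since there are only $2^3=8$ equally likely outcomes, one can simply enumerate them (Python sketch):

```python
from itertools import product

outcomes = list(product("HT", repeat=3))            # 8 equally likely outcomes
favourable = [o for o in outcomes if o.count("H") == 1]
print(len(favourable), "/", len(outcomes))          # 3 / 8
```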
H: Derivative of a multivariable, piecewise function I was asked to show that the function $h(t)=f(g(t))$ is differentiable at $t=0$, where $f(x,y)=\begin{cases} \frac{x^2y}{x^2+y^2} & (x,y)\neq 0 \\ 0 & (x,y)=0 \end{cases}$ and $\vec{g}(t)=(kt,pt)$ for constants $k$ and $p$. I made a diagram and said "$h$ is a function of $x$ and $y$, which are functions of $t$. Hence, $h'(t)=f_x(x,y)k+f_y(x,y)p$." I then plugged in $t=0$, which gave me $h'(0)=0$. (I had calculated $f_x$ and $f_y$ beforehand and found them to be $0$ at $(x,y)=(0,0)$.) However, my answer is marked wrong and the correct answer is $\frac{k^2p}{k^2+p^2}$. Why am I wrong, and why is this the correct answer? AI: To see that this is the correct answer, we compute the derivative from first principles $$h'(0)=\lim_{t\to 0} \frac{h(t)-h(0)}{t}$$ which reduces to $$h'(0)=\lim_{t\to 0}\frac{k^2pt^3}{k^2t^3+p^2t^3}=\frac{k^2p}{k^2+p^2}$$ The reason your calculation is wrong is because $f(x,y)$ is not differentiable at $(0,0)$, which is to say there doesn't exist a linear map $df_0$ such that $$\lim_{(x,y)\to(0,0)}\frac{||f(x,y)-f(0,0)-df_0(x,y)||}{||(x,y)||}=0$$ In the proof of chain rule, we use this fact, which is why chain rule doesn't apply here. Conceptually, this is saying the directionally derivative doesn't change in a "sensible" way, which we would need to be able to write it in terms of the partial derivatives
H: Yes/No: Do there exist disjoint subsets $A$ and $B$ of $\mathbb{R}$ such that $m^*( A \cup B) = m^*(A) + m^*(B)?$ Do there exist disjoint subsets $A$ and $B$ of $\mathbb{R}$ such that $m^*( A \cup B) = m^*(A) + m^*(B)?$ My attempt : If I take $A= [-1,1]$ and $B= [-2,2]$ Here $m^*([-1,1]) = 2, m^*([-2,2]) = 4$ then $m^*( A \cup B) \neq m^*(A) + m^*(B) $ since $m^*( A \cup B) = m^*([-1,1] \cup [-2,2])= m* ([-1,2])= 3$ AI: Yes, such sets exist. In general outer measure is only countably subadditive, but whenever the disjoint sets are Lebesgue measurable (for instance disjoint intervals, or countable sets), additivity does hold: if $\{A_n\}$ is a sequence of pairwise disjoint measurable subsets of $\mathbb{R}$, then $$ m^*\Big( \bigcup_n A_n \Big) = \sum_n m^*(A_n). $$ As an example: Let $A = \mathbb{N}$ and $B = \{ - n \: | \: n \in \mathbb{N} \}$. Evidently, these sets are disjoint and $m^*(A) = m^*(B) = 0$. The union $A \cup B$ is just the set of nonzero integers, which again has outer measure zero. So: $$ m^*(A \cup B) = m^*(A) + m^*(B). $$
H: How can I evaluate: $\int_a^b \frac{1}{\sqrt{1-\cos \theta}}d\theta$? I want to evaluate: $$\int_a^b \frac{1}{\sqrt{1-\cos \theta}}d\theta$$ I get stuck if I try a $u$ substitution and I have tried changing the expression using trigonometric identities but still no luck. How would I go about evaluating this ? AI: Use the fact that $1-\cos\theta=2\sin^2\left(\frac{\theta}{2}\right)$ to transform the integral in question into $$\frac{1}{\sqrt{2}} \int_a^b \left\vert \:\csc\left(\frac{\theta}{2}\right) \right\vert \, d\theta$$ Does that answer your question?
H: Is it a necessary condition for an even function to have a local extremum (for $f(x)=k,$ derivative${}=0$) at $x=0$ Let $f(x)$ be an even function ($f(-x)=f(x)$) if $f(x)$ is continuous and differentiable at $x = 0$ will it be necessary for it to have a local extremum? Or more generally, have its derivative $=0$ at $x=0$? I thought this as:$$f(x+h)-f(x)=f(x-h)-f(x) \\ \text{(for $x=0 , h>0$)}$$ so the derivative should also be zero. Am I correct or is there a counter example? AI: The absolute value function is an even function about which it is not true that its derivative at $0$ is $0,$ since its derivative is not defined at $0$ at all. However, all even functions $f$ that are differentiable at $0$ satisfy $f'(0)=0,$ thus: $$ f'(0) = \lim_{h\to0} \frac{f(h)-f(0)} h = \lim_{h\to0} \frac{f(-h)-f(0)} h = -\lim_{h\to0} \frac{f(-h)-f(0)} {-h} = -f'(0). $$ Since $f'(0) = -f'(0),$ we have $f'(0)=0.$ Now suppose $$ f(x) = \begin{cases} x^2\cos(1/x) & \text{if } x\ne0, \\ 0 & \text{if } x=0. \end{cases} $$ Then $f$ is an even function that is differentiable at $0,$ but it does not have a local extremum at $0.$
H: Is This Proof for "If $\sup A < \sup B$, show that there exists an element $b\in B$ that is an upper bound for $A$" correct? This question is from Understanding Analysis (Stephan Abbot) Exercise $1.3.9$. The Question is If $\sup A < \sup B$, show that there exists an element $b \in B$ that is an upper bound for $A$. My proof is as follows: If there exists an element $b\in B$ that is an upper bound for $A$ than $(\exists b\in B)(\forall a\in A) a <b$. Assume (for the sake of contradiction) that $\sup(A) < \sup(B)$ but $(\forall b\in B)(\exists a\in A)b \leq a$. Since $\sup(A) \geq a \geq b (\forall a \in A, b \in B)$ and $\sup(A) < \sup(B)$, $\sup(A)$ is an upper bound for B which is less than $\sup(B)$ which is a contradiction. Therefore, if $\sup A < \sup B$, there exists an element $b\in B$ that is an upper bound for $A$. Is this proof correct? AI: Your proof is correct but be careful with quantifiers. $(\forall a \in A ,b \in B\, a \geq b)$ is not true. We only know that for each $b \in B$ there exists $a \in A$ such that $a \geq b$, but in general this is not true for every element $a \in A$. Here is another proof which I find be more natural: Let $c \in ]\sup A, \sup B[$. There exists $b \in B$ such that $c \leq b$, and thus for all $a \in A$ we have $a \leq \sup A \leq c \leq b$ so $b$ is an upper bound for $A$. You will find that this proof is rather intuitive when making a drawing of the situation.
H: $ Av=0 $ is $ r(A) We now that $ n \times m $ matrix $ A $ and the vector $ v \in R^m $, $ v \neq 0$ $Av=0$. Is $ r(A)<m$ or $ r(A)<n$? I think since $Av=0 $ has infinitely many solutions then $r(A)<m$ But I am not sure Update: The question is active and there is no accepted answer AI: $r(A)$ is the dimension of the column space of $A$. Since $A$ is $n\times m$, $r(A) \leq m$. Moreover, knowing that $Av=0$ with $v\neq 0$ means that the columns of $A$ are linearly dependent, so the dimension of the column space is less than $m$. Note that $r(A)$ is also the dimension of the row space of $A$. So $r(A)\leq n$ but we cannot say $r(A)<n$ without additional information. example $n=2$, $m=3$, $r(A)=2$ \begin{equation} A = \left[ \begin{matrix} 1 & 1 & 1 \\ 1 & 0 & 0 \\ \end{matrix} \right] \end{equation}
H: Let $f$ be a measurable function on $[0,1]$ such that $\int_0^1 f(x)\,dx<\infty$ Let $U_1,\dots$ be a sequence of i.i.d Uniform $(0,1)$ I have problems with this sequence, don't know how to start. Could anyone give me a hint, please? Let $f$ be a measurable function on $[0,1]$ such that $\int_0^1 f(x) \, dx < \infty$ Let $U_1,\dots$ be a sequence of i.i.d Uniform $(0,1).$ Define $$I_n=\frac{f(U_1)+\cdots+f(U_n)} n.$$ Show that $I_n \overset{\mathbb{P}}\to\int_0^1 f(x) \, dx$ AI: The $f(U_i)$ are i.i.d with finite expected value $E[f(U_1)] = \int_0^1 f(x) dx$ so by the strong law of large numbers $\frac{f(U_1) + \dots + f(U_n)}{n}$ converges to $E[f(U_1)] = \int_0^1 f(x) dx$ almost surely and thus in probablity.
H: Analogue of $(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2$ for vectors The Brahmagupta–Fibonacci identity $(a^2 + b^2)(c^2 + d^2) = (ac - bd)^2 + (ad + bc)^2$ allows us to write a product of squares as a sum of squares. Is there an analogue of this identity when $a, b, c, d$ are vectors in $\mathbb{R}^n$ and the multiplication is the ordinary scalar product? One of the main hurdles is of course $(a \cdot c)(b \cdot d) \neq (a \cdot b)(c \cdot d)$. The Cauchy-Binet identity gives an expression for $(a \cdot c)(b \cdot d) - (a \cdot b)(c \cdot d)$, however I do not see a way to turn this into a simple expression. EDIT: I do not mind having extra terms in the end: something like $(a\cdot a + b \cdot b)(c \cdot c + d \cdot) = (a\cdot c - b\cdot d)^2 + (a\cdot d + b\cdot c)^2 + \mbox{other terms}$ where the remaining terms can be written explicitly. AI: I assume (perhaps unreasonably closed-mindedly) that you want an analogue of the form \begin{align} \left(\left<a,a\right>^2 + \left<b,b\right>^2\right) \left(\left<c,c\right>^2 + \left<d,d\right>^2\right) = \left<v,v\right>^2 + \left<w,w\right>^2 \end{align} that holds for any vectors $a, b, c, d \in \mathbb{Q}^n$, where $v$ and $w$ are two vectors in $\mathbb{Q}^n$ whose entries are fixed degree-$2$ homogeneous polynomials in the entries of $a, b, c, d$. Such an analogue exists only if $n \in \left\{1,2,4\right\}$. Indeed, if we number the entries of the vectors $a, b, c, d$ such that $a = \left(x_1, x_2, \ldots, x_n\right)^T$ and $b = \left(x_{n+1}, x_{n+2}, \ldots, x_{2n}\right)^T$ and $c = \left(y_1, y_2, \ldots, y_n\right)^T$ and $d = \left(y_{n+1}, y_{n+2}, \ldots, y_{2n}\right)^T$, then the above analogue would imply that the polynomial $\left(x_1^2 + x_2^2 + \cdots + x_{2n}^2\right) \left(y_1^2 + y_2^2 + \cdots + y_{2n}^2\right)$ can be written as $u_1^2 + u_2^2 + \cdots + u_{2n}^2$ for some quadratic forms $u_1, u_2, \ldots, u_{2n}$ in the $x_i$ and $y_i$. But such quadratic forms only exist if $n \in \left\{1,2,4\right\}$, according to Theorem 1.1 in Keith Conrad, The Hurwitz theorem on sums of squares by representation theory. (Note that my $2n$ is his $n$.)
H: For every $k \in \Bbb Z$ there is $0 \le x \le p-1$ such as $x^3\equiv k \pmod {p}$ This question is looking like an easy one but I have been trying to solve it for the last couple days and I haven't been able to prove it - so I need some help. The question: Let $p$ be a prime number, $p\equiv 5 \pmod{6}$ prove that for every $k \in \Bbb Z$ there is $0 \le x \le p-1$ such as $x^3\equiv k \pmod{p}$. I thought to prove it with induction but it didn't work, although I still sure this is the right way. Thanks for your help AI: If $k \bmod p=0$, then $x=0$ is a solution to $x^3\equiv k\pmod p$. Otherwise, can you show that $x=k^{(2p-1)/3} \bmod p$ is a solution? (Note that $p\bmod6=5\implies p\bmod3=2\implies (2p-1)/3$ is an integer.)
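The explicit formula $x=k^{(2p-1)/3} \bmod p$ can be tested for small primes $p\equiv5\pmod 6$ (a Python sketch; the list of primes is an arbitrary sample):

```python
for p in (5, 11, 17, 23, 29, 41):      # primes congruent to 5 mod 6
    e = (2 * p - 1) // 3               # an integer precisely because p % 3 == 2
    for k in range(p):
        x = 0 if k % p == 0 else pow(k, e, p)
        assert pow(x, 3, p) == k % p   # x^3 = k (mod p)
print("checked: k^((2p-1)/3) is a cube root of k modulo p")
```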
H: Proof verification: $g$ is continuous iff $g^{-1}(B)$ is open whenever $B \subseteq \mathbb{R}$ is an open set. I am trying to prove: Let $g$ be defined on all of $\mathbb{R}$. If $B$ is a subset of $\mathbb{R}$, define the set $g^{-1}(B)$ by \begin{equation*} g^{-1}(B) = \{x \in \mathbb{R}: g(x) \in B\} \end{equation*} Show that $g$ is continuous iff $g^{-1}(B)$ is open whenever $B \subseteq \mathbb{R}$ is an open set. My attempt: $(\Rightarrow)$Suppose $g$ is continuous on $\mathbb{R}$ and $B \subseteq \mathbb{R}$ is open. Let $b \in B$. Then, $\exists N_r(b)$ such that $N_r(b) \subseteq B$, that is, $(b - r, b+ r) \in B$. Since $g$ is continuous at $b - r \in \mathbb{R}$, for $r > 0, \exists \delta > 0$ such that \begin{equation*} \left|(b+ r) - (b- r)\right| < \delta \implies \left|g(b+ r) - g(b- r)\right| < r \end{equation*} That is, $g[N_r(b)] \subseteq B \implies N_r(b) \subseteq g^{-1}(B) \implies g^{-1}(B)$ is open. $(\Leftarrow)$ Suppose $B \subseteq \mathbb{R}$ is open so that $g^{-1}(B)$ is open. Let $b \in B \textrm{ such that } g(b) \in B$. The openness of $B$ tells us that $\exists N_r(b)$ such that $N_r(b) \subseteq B$, that is, \begin{equation*} \left|(b+r) - (b-r)\right| \le \operatorname{diam} B < \operatorname{diam} B +1 \end{equation*} But, the openness of $B$ implies that $g^{-1}(B)$ is open. So, $N_r(b) \subseteq g^{-1}(B) \implies g[N_r(b)] \subseteq B$, that is, \begin{equation*} \left|g(b+r) - g(b-r)\right| < \color{blue}{\operatorname{diam} B + 1} \end{equation*} My questions: Is my proof correct so far? In the proof for $(\Leftarrow)$, I ideally want to replace the blue expression with something like an arbitrary $\epsilon > 0$. How can this be made possible? Then, the continuity of $g$ would follow immediately. (Currently, $\operatorname{diam} B > 0$ is not arbitrary.) Thanks! AI: I think you are mixing up different notions in your proof, which makes it incorrect. Here is a simple proof: Suppose $g$ is continuous and that $B$ is a (non-empty) open set in $\mathbb R$. Let $x\in g^{-1}(B)$. We have $g(x) \in B$ and $B$ is open so there exists $\varepsilon > 0$ such that $N_\varepsilon(g(x)) \subset B$. By continuity of $G$, there exists $\delta >0$ such that for all $x'$, $|x'-x|<\delta$ implies $|g(x') - g(x)| < \varepsilon$. Therefore $g(N_\delta(x)) \subset N_{\varepsilon}(g(x)) \subset B$ so $N_\delta(x) \subset g^{-1}(B)$. Therefore $g^{-1}(B)$ is open. Now suppose $g^{-1}(B)$ is open for all open subset $B$. Take $x\in \mathbb R$ and $\varepsilon > 0$. $N_\varepsilon(g(x))$ is open so the $g^{-1}(N_\varepsilon(g(x)))$ is also open. Since $x$ is in $g^{-1}(N_\varepsilon(g(x))$, there exists $\delta > 0$ such that $N_\delta(x) \subset g^{-1}(N_\varepsilon(g(x))$. This translates as $$ \forall x' \in \mathbb R, |x'-x| <\delta \Longrightarrow \ |g(x) - g(x')| < \varepsilon. $$ so $g$ is continuous.
H: Recursive equation answer Consider the following recursive equations: $$3A_n = 2A_{n-1}+B_{n-1}$$ $$3B_n = A_{n-1}+2B_{n-1}$$ Let $A_0=2$ and $B_0=-1$. I know there are supposed to be different ways to solve this but I was thinking about solving one equation and replace it in the other to find the final answer I tried but I couldn't find the result I will be glad if you help me. Any suggestions would be sincerely appreciated. Thanks in advance. AI: If you use the observation by J. W. Tanner in the comments, you can reduce the problem to a single first-order recurrence: $A_n+B_n$ is constant as $n$ varies, so $A_n+B_n=A_0+B_0=1$ for all $n\ge 0$. Thus, $3A_n=2A_{n-1}+1-A_{n-1}=A_{n-1}+1$, and we have the recurrence $$A_n=\frac13A_{n-1}+\frac13\;,$$ which is easily solved in a variety of ways. E.g., let $x_n=A_n-d$ for some $d$ as yet to be determined; then $$x_n+d=\frac13(x_{n-1}+d)+\frac13\;,$$ so $$x_n=\frac13x_{n-1}-\frac{2d}3+\frac13=\frac13x_{n-1}$$ if we set $d=\frac12$. It is then immediate that $$x_n=x_0\left(\frac13\right)^n=(A_0-d)\left(\frac13\right)^n=\frac32\left(\frac13\right)^n=\frac1{2\cdot3^{n-1}}$$ for $n\ge 0$ and hence that $$A_n=x_n+d=\frac1{2\cdot3^{n-1}}+\frac12=\frac12\left(1+\frac1{3^{n-1}}\right)$$ for $n\ge 0$. Finally, $$B_n=1-A_n=\frac12\left(1-\frac1{3^{n-1}}\right)\;.$$
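A few iterations of the recursion compared against the closed forms above (Python sketch; seven steps are shown, and the differences print as $0$ up to floating-point error):

```python
A, B = 2.0, -1.0                       # A_0 and B_0
for n in range(1, 8):
    A, B = (2 * A + B) / 3, (A + 2 * B) / 3
    A_closed = 0.5 * (1 + 3.0 ** (1 - n))
    B_closed = 0.5 * (1 - 3.0 ** (1 - n))
    print(n, round(A - A_closed, 12), round(B - B_closed, 12))   # both ~ 0
```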
H: Under what condition is a complete bipartite graph $K_{m,n}$ a regular graph This is a quick question and I know all about these graphs what I am supposed to know. But I am unable to explain myself in words. Like I know for regular graph the vertex must have same degree and bipartite graph is a complete bipartite iff it contain all the elements m.n(say) I am looking for a mathematical explanation. AI: In the complete bipartite graph $K_{m,n}$, the vertices have degree $m$ or degree $n$ (and both of these degrees are reached). Thus, if you want it to be regular, a sufficient and necessary condition is $n=m$.
H: Let $G$ be a finite group. Then the number of elements of prime order $p$ is divisible by $p − 1$ I'm trying to prove the following proposition from textbook Groups, Matrices, and Vector Spaces - A Group Theoretic Approach to Linear Algebra by James B. Carrell. Let $G$ be a finite group. Then the number of elements of prime order $p$ is divisible by $p − 1$. Could you please verify if my attempt is fine or contains errors? Thank you so much for your help! My attempt: Let $H$ be the set of all elements of prime order $p$. For $x \in H$, $\langle x \rangle := \{x, x^2, \ldots, x^{p-1}, 1\}$ is a sub-group. Together with $p$ is prime, we get $\langle x \rangle$ is the only sub-group of itself. Consequently, either $\langle x \rangle = \langle y \rangle$ or $\langle x \rangle \cap \langle y \rangle = \emptyset$ for all $x,y \in H$. It follows from $p$ is prime that if $1 \neq y \in \langle x \rangle$ then $y \in H$. Let $k$ be the cardinality of $\{\langle x \rangle \mid x \in H\}$. Then $|H|= |\cup_{x \in H} \left (\langle x \rangle \setminus \{1\} \right )| = k(p-1)$. AI: This is essentially correct, except that $\langle x\rangle$ has two subgroups, including the trivial subgroup, and $\langle x\rangle \cap \langle y\rangle$ is never empty, but rather equal to the trivial subgroup if $y\notin\langle x\rangle$. You could rephrase it as saying that a subgroup of order $p$ has $p-1$ elements of order $p$, and two subgroups of order $p$ that are different intersect trivially, thus if $k$ is the number of subgroups of order $p$ we obtain $k(p-1)$ nonidentity elements, hence this is the number of elements of order $p$. I tend to favor less notation though, and your way is also fine.
H: Show that if $A$ is a $m \times n$ matrix with $AA^T = 2J + 5I$ then $n \geq m$ ($J$ is matrix of ones) I am studying for a qualifying exam and I got stuck on this question: Show that if $A$ is a $m \times n$ matrix with $AA^T = 2J + 5I$ then $n \geq m$. Here $J$ is the matrix of all $1$s. The hint for the problem is to use Fischer's inequality. I was reading about this on wikipedia but I fail to see how I can apply this to this problem. I was able to prove the previous part of the problem which was to show that $J$ is diagonalizable and to find the eigenvectors. To prove this is not a homework problem, here is a link to the practice exam. I would really appreciate a hint. Thank you AI: Hint: Show that $2J+5I$ is invertible. Then $$m = \mathrm{rank}(AA^T) \le \mathrm{rank}(A) \le \min(m,n).$$
H: Represent ${f(x) = \arctan(x) + x\ln(4+x^2)-x(1+\ln(4))}$ by a Maclaurin series. I have the following problem: Represent the function: $${f(x) = \arctan(x) + x\ln(4+x^2)-x(1+\ln(4))}$$ by a Maclaurin series. I do not know how to represent this part: ${x\ln(4+x^2)}$ Could you please help me? AI: Okay, so notice that $${\ln(1+x)=\sum_{n=1}^{\infty}\frac{(-1)^{n+1}x^n}{n}}$$ which converges for ${-1<x\leq 1}$. We can write ${\ln(4+x^2)=\ln\left(4\left(1+\frac{x^2}{4}\right)\right)=\ln(4)+\ln\left(1+\frac{x^2}{4}\right)}$. Replace $x$ with ${\frac{x^2}{4}}$, and you get $${\ln\left(4+x^2\right)=\ln(4)+\sum_{n=1}^{\infty}\frac{(-1)^{n+1}x^{2n}}{n\times 4^n}}$$ And this will converge for ${0 \leq \frac{x^2}{4}\leq 1\Leftrightarrow -2\leq x\leq 2}$. Now, just multiply both sides by ${x}$ to get your expansion for ${x\ln(4+x^2)}$. Notice as well that the ${x\ln(4)}$ will cancel out in the end result for ${f(x)}$, against the $-x\ln(4)$ coming from the last term. I found it odd that that term was just lurking around, but it makes sense now. Your teacher probably just wanted to cancel out the awkward term at the front.
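A numerical comparison of $x\ln(4+x^2)$ with partial sums of the expansion above (a Python sketch; the number of terms and the sample points are arbitrary, chosen inside the interval of convergence $[-2,2]$):

```python
import math

def series(x, terms=200):
    # x*ln(4 + x^2) = x*ln(4) + x * sum_{n>=1} (-1)^(n+1) (x^2/4)^n / n
    s = x * math.log(4)
    u = x * x / 4
    for n in range(1, terms + 1):
        s += (-1) ** (n + 1) * x * u ** n / n
    return s

for x in (-1.5, -0.5, 0.3, 1.0, 1.9):
    print(x, x * math.log(4 + x * x), series(x))   # the two columns agree
```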
H: Manipulating $C=\frac{C_a-C_b}{a-b}+\frac{aC_b-bC_a}{(a-b)m}$ into $C=\frac{1}{m}\Bigl(\frac{m-b}{a-b}C_a+\frac{a-m}{a-b}C_b\Bigr)$ I need to know the step to get this: $$C=\frac{1}{m}\Bigl(\frac{m-b}{a-b}C_a+\frac{a-m}{a-b}C_b\Bigr)$$ from: $$C=\frac{C_a-C_b}{a-b}+\frac{aC_b-bC_a}{(a-b)m}$$ Thank you! AI: $$C=\frac{C_a-C_b}{a-b}+\frac{aC_b-bC_a}{(a-b)m}$$ Multiply numerator and denominator in first term by $m$ $$C=\frac{C_a m-C_b m}{(a-b)m}+\frac{aC_b-bC_a}{(a-b)m}$$ Give $1/m$ as factor $$C=\frac 1m\left(\frac{C_a m-C_b m}{a-b}+\frac{aC_b-bC_a}{a-b}\right)$$ Group terms with $C_a$ and separately $C_b$ $$C=\frac 1m\left(\frac{C_am-bC_a}{a-b}+\frac{-C_bm+aC_b}{a-b}\right)$$
H: Compute $\pi_2(S^2 \vee S^2)$ As far as I know, there are two ways to calculate higher homotopy groups. One way is if we have a fibration then we get a long exact sequence in homotopy. The other is if we know a space is $(n-1)$-connected, then by Hurewicz Theorem, $\pi_n \cong H_n$. I know $H_2(S^2 \vee S^2)=\mathbb{Z} \oplus \mathbb{Z}$ and $\pi_1(S^2 \vee S^2)$ is trivial as the pushout of $\pi_1(S^2) \leftarrow \pi_1(\ast) \rightarrow \pi_1(S^2)$ (using van Kampen's Theorem). Is it true that a wedge of $n$-spheres is $(n-1)$-connected? If so, why? If this is the case, Hurewicz applies. Also, is there any fibration involving $S^2 \vee S^2$? I thought about the Hopf fibration $S^1 \rightarrow S^3 \rightarrow S^2$, but intuitively I doubt that this produces another fibration $S^1 \vee S^1 \rightarrow S^3 \vee S^3 \rightarrow S^2 \vee S^2$. Any help you can provide would be appreciated! AI: I think the question meant to ask about $\pi_2(S^2 \vee S^2)$. Since $\pi_0$ and $\pi_1$ vanish, the Hurewicz map gives an isomorphism $\pi_2(S^2 \vee S^2) \to H_2(S^2 \vee S^2) = \mathbb{Z}^2.$
H: How to find the inverse of the complex function $ f(z)=\dfrac{z}{\sqrt{1+|z|^{2}}}$ I'm trying to calculate the inverse function of : $$ f(z)=\dfrac{z}{\sqrt{1+|z|^{2}}}$$ where $z\in$ $\mathbb{C}$. Can someone help me? AI: If you mean the inverse, $1-|f|^2=\frac{1}{1+|z|^2}\implies z=f\sqrt{1+|z|^2}=\frac{f}{\sqrt{1-|f|^2}}$.
H: Continuity of this Piecewise function $f:\mathbb{R}^2\to \mathbb{R}$ I have shown that $f:\mathbb{R}^2\to\mathbb{R}$ given by $f(0,0) = 0$ and $\displaystyle f(x,y)=\frac{x|y|}{\sqrt{x^2+y^2}}$ if $(x,y)\ne (0,0)$ isn't differentiable at $(0,0)$, now I'm trying to show whether it is continuous or not. My attempt: I must show that $\displaystyle\lim_{(x,y)\to(0,0)}\frac{x|y|}{\sqrt{x^2+y^2}}=f(0,0) = 0$. But $x^2+y^2-2|xy| = (|x|-|y|)^{2}\ge 0$ so $\displaystyle|xy|\le \frac{x^2+y^2}{2}$. So $\displaystyle\left|\frac{x|y|}{\sqrt{x^2+y^2}}\right| =\frac{|xy|}{\sqrt{x^2+y^2}}\le \frac{x^2+y^2}{\sqrt{x^2+y^2}} = (x^2+y^2)^{1/2}$ and once $(x^2+y^2)^{1/2}\to0$ as $(x,y)\to(0,0),$ we would have it. Is this correct? AI: Let us consider the directional derivative of $f$ in the direction $v = (a,b)$ at the point $(0,0)$. We have \begin{align*} D_{v}f(0,0) & = \lim_{t\to 0}\frac{f((0,0) + t(a,b)) - f(0,0)}{t}\\\\ & = \lim_{t\to 0}\frac{ta|tb|}{t\sqrt{t^{2}a^{2}+t^{2}b^{2}}} = \frac{a|b|}{\sqrt{a^{2}+b^{2}}} \end{align*} On the other hand, its partial derivatives are given by \begin{align*} \begin{cases} \displaystyle\frac{\partial f}{\partial x}(0,0) = \lim_{t\to 0}\frac{f((0,0) + t(1,0)) - f(0,0)}{t} = 0\\\\ \displaystyle\frac{\partial f}{\partial y}(0,0) = \lim_{t\to 0}\frac{f((0,0) + t(0,1)) - f(0,0)}{t} = 0 \end{cases} \end{align*} Finally, if $f$ was differentiable, we should have that \begin{align*} D_{v}f(0,0) = a\frac{\partial f}{\partial x}(0,0) + b\frac{\partial f}{\partial y}(0,0) = a\times 0 + b\times 0 = 0 \end{align*} which is different from the previous result. Thus $f$ is not differentiable at $(0,0)$. EDIT Your approach is fine. Here I propose another way to solve it. To begin with, notice that $|y| = \sqrt{y^{2}} \leq \sqrt{x^{2} + y^{2}}$. Consequently, one has \begin{align*} \frac{|y|}{\sqrt{x^{2}+y^{2}}} \leq 1 \Rightarrow \frac{|xy|}{\sqrt{x^{2}+y^{2}}} \leq |x| \xrightarrow{(x,y)\rightarrow(0,0)} 0 \end{align*} Hence $f$ is continuous at $(0,0)$.
H: Computing singular points of curves, exercise 5.1 (Hartshorne) I am just trying to cross check my answer as it slightly differs from https://math.berkeley.edu/~reb/courses/256A/1.5.pdf to be sure of any mistake I am making. Here $k$ is an algebraically closed field with ${\mathrm{char}}~k \neq 2$. 5.1(a) The curve $Y \subseteq {\mathbb{A}}^2(k)$ is given by $F(x,y) = x^4 + y^4 - x^2$. The Jacobian $$ J_P = \Big( ~{\frac {\partial F}{\partial x}}, {\frac {\partial F}{\partial y}}~ \Big) = \Big( 4x^3 - 2x, 4y^3 \Big) $$ Surely, $y=0$ and if $x \neq 0$, then from $J_P$ we have $2x^2 = 1$ and from the curve $x^2 = 1$. This is absurd and hence $(0,0)$ is the only singular point. 5.1(c) $Y$ is given by $F(x,y) = x^4 + y^4 - x^3 + y^2$. $$ J_P = \Big( 4x^3 - 3x^2, 4y^3 + 2y \Big). $$ I checked that $(0,0)$ is always a singular point. Now solving the equations in ${\mathrm{char}}~k = 7$, we get $(-1, \pm {\sqrt{3}})$ seems to be singular points, where ${\sqrt{3}} \in {\mathbb{F}}_7({\sqrt{3}}) \cong {\mathbb{F}}_{49} \subseteq k$. Similarly, in ${\mathrm{char}}~k = 13$, we get $(4, \pm {\sqrt{6}})$ seems to be singular points, where ${\sqrt{6}} \in {\mathbb{F}}_{13}({\sqrt{6}}) \cong {\mathbb{F}}_{169} \subseteq k$. I somehow can't see any mistake here. AI: Your numbering is a little wacky: I think you might have made a typo when you say "5.4(c)". Your solution to the first exercise is correct. You may alternately see that if $y=0$, then $x^4-x^2=0$, or $x^2(x^2-1)=0$, while at the same time we need $2x(2x^2-1)=0$, and the only root in common of these equations is $x=0$. For the second, you are indeed correct that there are singular points which are not the origin in characteristics $7$ and $13$ (and no other characteristics). Here is a slightly different approach which verifies your solution. Take partial derivatives of $x^3=y^2+x^4+y^4$ and set them equal to zero, obtaining the equations $4x^3-3x^2=x^2(4x-3)=0$ and $2y+4y^3=2y(1+2y^2)=0$. Clearly $x=y=0$ satisfies all of these equations for every field. If $\operatorname{char} k\neq 2$, then we may have singular points which aren't the origin if $x=\frac34$ and $y^2=\frac12$. Plugging in to our original equation and simplifying, we see that such a point satisfies our equation iff $91=7\cdot13=0$ in $k$, or $k$ is of characteristic $7$ or $13$, as you have noted.
H: How prove the finite induction Theorem (Finite induction) Let be P$(x)$ a property. So we suppose that P$(0)$, P$(n)\rightarrow$P$\big(S(n)\big)$, for all $n<k$ So P(n) is true for all $n<k$. Unfortunately I can't formally prove the theorem. So could someone help me, please? AI: Let $\pi:=\{h\in k:\mathbf{P}(h)\}$. So if it was $k\setminus \pi\neq\varnothing$ then there exist $m:=\min{k\setminus \pi}$ so we analise the case where $m=0$ and the case where $m\neq 0$. So for the hypothesis 1 it is impossible that $m=0$. Finally if $m\neq 0$ then $m=S(l)$ for some $l\in k$ and then $l\in\pi$ so that $\mathbf{P}(l)$ is true and so $\mathbf{P}(m)$ is true and this would be impossible.
H: Matrix with spectral radius 1 that converges Let $\mathbf{A}$ be a matrix of size $n \times m$ with $\sum_{j = 1}^m A_{ij} = 1$ and $\mathbf{E}$ be a matrix of size $n \times m$ with $\sum_{i = 1}^n E_{ij} = 1$ and $\forall i,j \ \ \ \ 1 \geq E_{ij} \geq 0, \ \ \ 1 \geq A_{ij} \geq 0$. Consider the following iterative process: $$\mathbf{x}(t) = \mathbf{A}^T \mathbf{E} \mathbf{x}(t-1) $$ where $\mathbf{x}$ is a vector of size $m$ I have experiments for randomly initialized matrices that satisfy the above conditions. All of these matrices have spectral radii less than or equal to 1. For the case when the spectral radius is equal to 1, the process still converges in my experiment. (Note that the matrix $(\mathbf{A}^T \mathbf{E})^k$ converges to a non-zero matrix as $k \to \infty$). Two questions: How could I prove a bound on the spectral radius of the matrix $\mathbf{A}^T \mathbf{E}$? How could I prove that the process converges when the spectral radius is equal to 1. Are there any conditions that I can put on the matrices $\mathbf{A}$ and $\mathbf{E}$ such that even if the spectral radius is 1, the process will still converge? P.S.: I know that an iterative process converges when the spectral radius is strictly less than 1 and that it diverges when the spectral radius is strictly greater than 1. However, I am having trouble understanding the case when the spectral radius is 1, as I have seen discussion making use of the Jordan normal form to prove convergence when the spectral radius is equal to 1. Could you point me to any resources that could allow me to solve this? AI: To prove the bound on the spectral radius, consider the transpose $E^TA$ which has the same spectral radius. Let $x$ be an eigenvector of eigenvalue $\lambda$ and let $x_k$ be the entry of $x$ with the largest absolute value. Then $$|(E^TAx)_k| = \left|\sum_{j=1}^m(E^TA)_{kj}x_j \right|=|\sum_{j=1}^m\sum_{i=1}^nE^T_{ki}A_{ij}x_j|=|\sum_{i=1}^nE_{ik}\sum_{j=1}^mA_{ij}x_j|\leq\sum_{i=1}^nE_{ik}\sum_{j=1}^m A_{ij}|x_j|\leq\\ \sum_{i=1}^nE_{ik}\sum_{j=1}^m A_{ij}|x_k|\leq \sum_{i=1}^nE_{ik}|x_k|=|x_k|$$ We have also that $|(E^TAx)_k|=|\lambda x_k|$ so we have shown that $|\lambda x_k|\leq |x_k|$, which implies that result. Now, it is not true that the process will necesarily converge if the spectral radius is equal to $1$. This is to be expected as there is a reason theorems about convergence are stated for spectral radius strictly less than $1$. To see a counter-example, take $n=m=2$, $E=I_2$ and $A=\begin{pmatrix}0&1\\ 1&0\end{pmatrix}$ so that $A^TE=A$. Then $A^n=\begin{cases}A& \text{n is odd}\\I_2&\text{n is even}\end{cases}$ so the process will not converge when applied for instance to $x=(1,0)$.
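A small experiment matching this setup (a Python/NumPy sketch; the sizes $n=6$, $m=4$ and the random seed are arbitrary choices, and for generic random matrices the spectral radius comes out strictly below $1$):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 6, 4
A = rng.random((n, m)); A /= A.sum(axis=1, keepdims=True)   # rows of A sum to 1
E = rng.random((n, m)); E /= E.sum(axis=0, keepdims=True)   # columns of E sum to 1

M = A.T @ E                                   # the m x m iteration matrix
rho = max(abs(np.linalg.eigvals(M)))
print("spectral radius of A^T E:", rho)       # bounded by 1, here strictly below

x = rng.random(m)
for _ in range(500):
    x = M @ x
print("iterate after 500 steps:", x)          # shrinks toward 0 when rho < 1
```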
H: Three-Body Problem - how to find the Figure-eight solution? Suppose the coordinates of the Earth and the Moon are fixed and let $(u,v)$ be the coordinates of the satellite. I'm looking for the numerical solution of the three-body problem: $$u'' = 2v + u - \frac{c_1(u+c_2)}{((u+c_2)^2 + v^2)^\frac{1}{2}} - \frac{c_2(u-c_1)}{((u-c_1)^2 + v^2)^\frac{1}{2}}$$ $$v'' = -2u + v - \frac{c_1v}{((u+c_2)^2 + v^2)^\frac{1}{2}} - \frac{c_2v}{((u-c_1)^2 + v^2)^\frac{1}{2}}$$ where $c_1 \in (0,1)$ and $c_2 = 1 - c_2.$ Initial conditions are $[u,u',v,v']' = [1,0,0,-1]'.$ I need to find the constants such that the solution becomes a figure 8. I'm not sure if I can vary only $c_1$ or also the initial conditions, but any approach or idea would be helpful. I've read about similar problems but this one is very specific and very simplified and I can't find anything that helps. Thanks in advance! AI: I would start your simulation on the line through the Earth and Moon, on the outside of the Earth. The nice thing about that start is you know $u'=0$, so the motion is fully in the $y$ direction. You now have a two parameter question instead of four (starting position and velocity in one vs. two dimensions). You need the total energy to be negative so the orbit is bound. If you compute the apogee of the orbit ignoring the moon, it has to be at least close to the gravitational balance point between the Earth and Moon. This gives you a range of starting velocities depending on the starting altitude. I am guessing that if you start too slow, the spacecraft will just orbit the earth with the ellipse distorted a bit by the Moon's gravity. Somewhat faster will get your figure 8. Faster yet and you will do one orbit around the two bodies. Finally you will escape.
H: If $X$ and $Y$ are two ordered sets, how many orderings of $X \times Y$ exist that preserve the orderings of $X$ and $Y$? Suppose $X$ and $Y$ are two totally ordered sets with $|X| = n_X$ and $|Y|=n_Y$. We'll say an ordering ($\preceq$) of $X \times Y$ preserves the orderings of $X$ and $Y$ if for any elements $x_1,\,x_2 \in X$ and $y_1,\,y_2 \in Y$, we have $$x_1 \leq x_2 \implies (x_1, y_1) \preceq (x_2, y_1)$$ and similarly $$y_1 \leq y_2 \implies (x_1, y_1) \preceq (x_1, y_2)$$ How many orderings existing with this property? This order preservation property naturally induces a partial ordering on $X \times Y$ and the number of linear extensions of a poset is a $\sharp P$-complete problem but given the natural structure of this order preservation, I'm hoping there may be a close-form solution or a polynomial-time algorithm that answers the question. AI: I will use $a$ and $b$ in place of $n_X$ and $n_Y$. Linear extensions of the poset $X\times Y$ are in bijection with standard young tableaux whose underlying Ferrer's diagram is a an $a \times b$ rectangle. The number of standard young tableaux is enumerated by the hook length formula, which for a rectangle simplifies, assuming $a<b$, to $$ \frac{(ab)!}{1^1\cdot 2^2\cdots a^a\cdot (a+1)^a(a+2)^a\cdots b^a(b+1)^{a-1}(b+2)^{a-2}\cdots(a+b-1)^{1} } $$ This can be written more simply and symmetrically as $$ (ab)!\cdot \frac{(\prod_{k=1}^{a-1}k!)(\prod_{k=1}^{b-1}k!)}{(\prod_{k=1}^{a+b-1}k!)} $$ For example, when $a=b=3$, the number of posets would be $$9!\cdot \frac{1!\cdot 2!\cdot 1!\cdot 2!}{1!\cdot 2!\cdot 3!\cdot 4!\cdot 5!}=42.$$ For more information, see https://oeis.org/A060854, which references your problem directly.
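The hook-length formula can be cross-checked against a brute-force count of linear extensions for tiny rectangles (a Python sketch; brute force is only feasible up to about $3\times3$, since it scans all $(ab)!$ rankings):

```python
from itertools import permutations
from math import factorial

def extensions_brute(a, b):
    cells = [(i, j) for i in range(a) for j in range(b)]
    count = 0
    for perm in permutations(range(a * b)):
        rank = dict(zip(cells, perm))
        if all(rank[i, j] < rank[i + 1, j] for i in range(a - 1) for j in range(b)) and \
           all(rank[i, j] < rank[i, j + 1] for i in range(a) for j in range(b - 1)):
            count += 1
    return count

def extensions_formula(a, b):
    num = factorial(a * b)
    for k in range(1, a): num *= factorial(k)
    for k in range(1, b): num *= factorial(k)
    den = 1
    for k in range(1, a + b): den *= factorial(k)
    return num // den

for a, b in [(2, 2), (2, 3), (3, 3)]:
    print(a, b, extensions_brute(a, b), extensions_formula(a, b))   # 2 2, 5 5, 42 42
```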
H: Three series theorem - convergence of $\sum X_n$ with $f_n(x) = \frac{1}{\pi} \frac{n}{1+(nx^2)}$ Suppose X_s are independent r.v.s with densities: $$ f_n(x) = \frac{1}{\pi} \frac{n}{1+(nx^2)} $$ Does the following series converge with probability 1: $$ \sum_{n=0}^\infty X_n $$ Now, I've calculated the CDF to be $$ F_n(t) = \frac{1}{\pi}\arctan(nt)+\frac{1}{2} $$ and tried using $c=1$ in Kolmogorov's Three Series Theorem: $$ \sum_{n}^{\infty}P(|X_n| > 1) =\sum_{n}^{\infty} F_n(-1) + 1 - F_n(1) = \sum_{n}^{\infty} \frac{1}{\pi}\arctan(-n)+\frac{1}{2}+1-(\frac{1}{\pi}\arctan(n) + \frac{1}{2}) = \\ = \sum_{n}^{\infty}\frac{1}{\pi}\arctan(-n)-\frac{1}{\pi}\arctan(n)+1 $$ What could be done next to show this series is divergent(I already know the answer)? AI: "What could be done next to show that this series is convergent (I already know the answer)?" The answer is "nothing can be done to show that the series converges." It diverges. Note that for $x>0$, $\arctan(x)=\pi/2-\arctan(1/x)$. Thus, for $n\ge1$, we have $$\begin{align} \arctan(-n)-\arctan(n)+\pi&=2\arctan(1/n)\\\\ &=\frac2n +O\left(\frac1{n^3}\right) \end{align}$$ Inasmuch as the harmonic series, $ \sum_{n=1}^\infty \frac1n$, diverges, the series of interest, $ \sum_{n=1}^\infty \left(\frac1\pi \arctan(-n)-\frac1\pi \arctan(n)+1\right)$, diverges likewise.
H: If $M$ has only two definable subsets, must it have a transitive automorphism group? I was told that a sufficient condition for a structure $M$ to have only the empty set and $M$ itself as parameter-free definable subsets, is for $M$ to have a transitive automorphism group. Is the converse true? AI: No, it's not - there can be obstacles to transitivity which are "invisible to first-order logic." The simplest example is probably just a structure $M$ consisting of an equivalence relation with two classes, one of which is countably infinite and the other of which is uncountable. By downwards Lowenheim-Skolem, $M$ has a countable elementary substructure $M_0$; it's not hard to show that $M_0$'s automorphism group acts transitively. That means that $M_0$ has no nontrivial definable sets, which by elementarity means that $M$ doesn't either. But $M$'s automorphism group does not act transitively: no element of the countable class is automorphic to any element of the uncountable class. Of course in some sense this is a bit unsatisfying - the reliance on cardinality leaves open the question of whether the same situation can happen with countable structures, which - per the downward Lowenheim-Skolem theorem - is arguably the right question. It turns out this still happens for countable structures, although less trivially. In my opinion, the most satisfying argument is the following: Given linear orders $A,B$, let $AB$ be the linear order consisting of "$B$-many copies of $A$." Specifically, $AB$ has domain $A\times B$ and is ordered via $$\langle a,b\rangle\le \langle c,d\rangle\iff b<d\vee (b=d\wedge a\le c).$$ What can you say about the definable sets in $\mathbb{Z}B$ for any linear order $B$? (A good tool for this is Ehrenfeucht-Fraisse games.) Can you find a linear order $B$ such that $\mathbb{Z}B$'s automorphism group does not act transitively? (HINT: think about what the transitivity of the action of $Aut(\mathbb{Z}B)$ on $\mathbb{Z}B$ would imply about the action of $Aut(B)$ on $B$.)
H: find all $n$ such that $\varphi(\sigma(2^n)) = 2^n$ Problem: Find all positive integers $n$ such that $\varphi(\sigma(2^n)) = 2^n$, where $\varphi(n)$ is Euler's totient function and $\sigma(n)$ is the sum of all divisors of $n$. I know that $\sigma(2^n) = 1+2+2^2+2^3+\dots+2^n = 2^{n+1}-1$, so we only need to find all $n$ such that $\varphi(2^{n+1}-1) = 2^n$. Trying out a few $n$, we find that $n=1,3,7,15,31$ work. Not sure how to prove this though. Any answers? AI: This problem beautifully connects Mersenne numbers to Fermat numbers. @RossMillikan hints that $n=2^k-1$ is special. By noting $\prod_{j=0}^{k-1}(2^{2^j}+1)$ is a product of coprime Fermat numbers, you can prove $\varphi(2^{2^k}-1)=2^{2^k-1}$, provided those Fermat numbers are prime. Now for the converse. If $\varphi(2^{n+1}-1)$ is a power of two, so is $\varphi(p)$ for each prime factor $p$ of $2^{n+1}-1$, so such primes are $1$ more than a power of $2$, i.e. are Fermat primes. Note $\prod_{j\ge1}(1+2^{-2^j})=\frac43$ by the above telescoping logic. So while a product of Fermat numbers exceeds the product of the powers of $2$ below them, the closest it can get to the next power of $2$ is to be $1$ less than it, by using all Fermat numbers up to a certain point, giving us the previous case.
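A brute-force check of the small cases (a sketch in R, not part of the answer; phi is a hypothetical helper computing Euler's totient by trial division):

    phi <- function(m) {
      # Euler's totient via trial division; exact for m below 2^53 and fast for m up to about 2^32
      result <- m
      p <- 2
      while (p * p <= m) {
        if (m %% p == 0) {
          while (m %% p == 0) m <- m %/% p
          result <- result - result %/% p
        }
        p <- p + 1
      }
      if (m > 1) result <- result - result %/% m
      result
    }
    which(sapply(1:31, function(n) phi(2^(n + 1) - 1) == 2^n))  # 1 3 7 15 31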
H: Evaluate $\int_0^{\frac{\pi}{4}} \left( \frac{\sin^2{(5x)}}{\sin^2{x}} -\frac{\cos^2{(5x)}}{\cos^2{x}} \right)\mathop{dx}$ Evaluate $$\int_0^{\frac{\pi}{4}} \left( \frac{\sin^2{(5x)}}{\sin^2{x}} -\frac{\cos^2{(5x)}}{\cos^2{x}} \right)\mathop{dx}$$ I tried substitutions like $u=\frac{\pi}{4}-x$, and trig identities like $\cos^2{x}=1-\sin^2{x}$ after getting a common denominator. $$4\int_0^{\frac{\pi}{4}} \frac{\sin^2{(5x)\cos^2{x}-\cos^2{(5x)}\sin^2{x}}}{\sin^2{(2x)}} \mathop{dx}$$ Where should I go from here? Any help is appreciated! AI: Rewriting the integral where you left off: $$4\int_0^{\frac{\pi}{4}} \frac{\left(\sin{(5x)}\cos{x}-\cos{(5x)}\sin{x}\right) \left(\sin{(5x)}\cos{x}+\cos{(5x)}\sin{x}\right)}{\sin^2{(2x)}} \;dx$$ $$=4\int_0^{\frac{\pi}{4}} \frac{\sin{(4x)}\sin{(6x)}}{\sin^2{(2x)}} \; dx$$ $$=8\int_0^{\frac{\pi}{4}} \frac{\cos{(2x)}\sin{(6x)}}{\sin{(2x)}} \; dx$$ There are several ways to continue from here. I will let $u=2x$: $$=4\int_0^{\frac{\pi}{2}} \frac{\cos{u}\sin{(3u)}}{\sin{u}} \; du$$ Again, there are multiple ways to proceed from here. I will use the identity for $\sin{(3x)}$: $$\sin^3{x}={\left(\frac{e^{ix}-e^{-ix}}{2i}\right)}^3=-\frac{1}{4} \cdot \frac{e^{3ix}-e^{-3ix}-3e^{ix}+3e^{-ix}}{2i}=-\frac{1}{4} \left(\sin{(3x)}-3\sin{x}\right)$$ $$\sin{(3x)}=3\sin{x}-4\sin^3{x}$$ $$=4\int_0^{\frac{\pi}{2}} \frac{\cos{u} \left(3\sin{u}-4\sin^3{u}\right)}{\sin{u}} \; du$$ Let $t=\sin{u}$: $$=4\left(3t-\frac{4t^3}{3}\right) \bigg \rvert_0^1$$ And so, $$\int_0^{\frac{\pi}{4}} \left( \frac{\sin^2{(5x)}}{\sin^2{x}} -\frac{\cos^2{(5x)}}{\cos^2{x}} \right)\mathop{dx}=\boxed{\frac{20}{3}}$$
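As a numerical sanity check (a sketch in R, not part of the derivation; the lower limit is nudged slightly above $0$ only to avoid the removable $0/0$ at $x=0$):

    f <- function(x) sin(5 * x)^2 / sin(x)^2 - cos(5 * x)^2 / cos(x)^2
    integrate(f, 1e-8, pi / 4)$value   # about 6.666667, i.e. 20/3
    20 / 3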
H: Soft Question Regarding the Divergence of $\sum\limits_{k=1}^{\infty}\frac{(-1)^k k}{3k+2}$ Determine whether $\sum\limits_{k=1}^{\infty}\frac{(-1)^k k}{3k+2}$ converges or diverges. Consider the function $f(x)=\frac{x}{3x+2}$ that generates the unsigned terms of our series. Taking the derivative of $f$ we have $f'(x)=\frac{2}{(3x+2)^2}$ Since $f'(x)>0$ for all $x$, we know that $f(x)$ is increasing for $x>1$, therefore the alternating series test is inconclusive. Taking $\lim\limits_{x\rightarrow\infty}f(x)=\lim\limits_{k\rightarrow\infty}\frac{k}{3k+2}=\frac{1}{3}$ we see that by the divergence test, our series must diverge. I'm fairly confident that I've reached the correct conclusion here. However, some subsequent computing has piqued my curiosity. Certainly the plot itself doesn't say a ton but after computing $$S_{10^2}\approx0.08320124$$ $$S_{10^3}\approx0.08418853$$ $$S_{10^4}\approx0.0842884$$ $$S_{10^5}\approx0.0842984$$ $$S_{10^6}\approx0.0842994$$ the series certainly seems to be converging to something. Am I seeing something in the wrong way here or is it just a case of the series diverging so slowly as to appear like it is converging? I realize this kind of soft question is generally discouraged here but I was genuinely startled by this series and was hoping for some clarity. Sincerely, calculus student trying to build intuition. AI: Look at the sum of the first $n$ terms. $\begin{array}\\ \sum\limits_{k=1}^{n}\frac{(-1)^k k}{3k+2} &=\sum\limits_{k=1}^{n}(-1)^k\frac{ k}{3k+2}\\ &=\sum\limits_{k=1}^{n}(-1)^k\left(\frac{ k}{3k+2}-\frac13+\frac13\right) \qquad\text{(because the terms tend to } \frac13)\\ &=\sum\limits_{k=1}^{n}(-1)^k\left(\frac{ -2}{3(3k+2)}+\frac13\right)\\ &=\sum\limits_{k=1}^{n}(-1)^k\left(\frac{ -2}{3(3k+2)}\right)+\sum\limits_{k=1}^{n}(-1)^k\frac13\\ &=-\frac23\sum\limits_{k=1}^{n}\frac{ (-1)^k}{3k+2}+\frac13\sum\limits_{k=1}^{n}(-1)^k\\ \end{array} $ The first sum converges by the alternating series test, while the second sum alternates between $-\frac13$ and $0$. Therefore the sum does not converge. Your plots seem to show convergence probably because you only tested even values of $n$.
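One can also see the even/odd behaviour directly (a sketch in R, not part of the answer): consecutive partial sums stay about $\frac13$ apart, so the partial sums have two different limit points and cannot converge.

    k <- 1:10^6
    S <- cumsum((-1)^k * k / (3 * k + 2))
    S[c(10^6 - 1, 10^6)]        # about -0.249 and 0.084: odd and even partial sums settle near different values
    diff(S[c(10^6 - 1, 10^6)])  # about 1/3, the size of the last term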
H: Why is $\mathbb{R}^2\setminus \mathbb{Q}^2$ not a topological manifold? I would like to understand why $\mathbb{R}^2\setminus \mathbb{Q}^2$ endowed with the subspace topology is not a topological manifold. It seems to me it is Hausdorff and second countable. So I am wondering: why is it not locally Euclidean? AI: No neighbourhood of any point in $\Bbb R^2\setminus\Bbb Q^2$ is simply connected, so no neighbourhood can be homeomorphic to any $\Bbb R^n$, since every $\Bbb R^n$ is simply connected. Indeed, a neighbourhood $U$ of a point $p$ contains $D\setminus\Bbb Q^2$ for some open disc $D$ around $p$, and $D$ contains a point $q\in\Bbb Q^2$. A circle around $q$, small enough to lie in $D$ and with its radius chosen outside the countably many values for which it would meet $\Bbb Q^2$, is then a loop in $U$ with winding number $1$ about $q$; since $q\notin U$, this loop is not null-homotopic in $\Bbb R^2\setminus\{q\}\supseteq U$, hence not in $U$.
H: Misconception about basic mixed fractions We know $$3 \frac12=3+\frac12$$ Then, if we have $$3 \frac12 ÷ 3 \frac12$$ does it mean: $a)\,\frac72 ÷ \frac72=1$ or $b)\, 3+\frac12 ÷ 3+\frac12 =\frac{7}2$ or $c)\, 3 + 1 ÷ 2 ÷ 3 + 1 ÷ 2=\frac{11}3$ Which one is true? Sorry, maybe this appears in another question, but I want to be sure about the choice between b) and c). AI: This is correctly read as $${3{1\over 2}\over 3{1\over 2}}=1,$$ essentially because juxtaposition always binds tighter than explicitly-written infix operations like $\div$. However, I would rather not read it at all! Not only is the notation a bit ambiguous, but mixed-fraction notation is generally pretty unwieldy for arithmetic operations: when looking at anything more complicated than addition of fractions with the same denominator, so-called "improper fractions" are better than mixed fractions.
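To see the gap between readings a) and c) concretely (a sketch in R, not part of the answer), compare the grouped and ungrouped forms under ordinary operator precedence:

    (3 + 1/2) / (3 + 1/2)    # 1: reading a), with each mixed number grouped, as juxtaposition intends
    3 + 1 / 2 / 3 + 1 / 2    # 3.666667 = 11/3: reading c), plain left-to-right precedence with no grouping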
H: Does adding a constant or reducing all the numbers by 5% change the z score? I have a set of numbers. I have two cases: If I add a constant to them, does the mean, standard deviation and z score change? What I think: Mean changes, std deviation doesn't change and z score doesn't change. If I reduce all the numbers by 5%, does the mean, standard deviation and z score change? What I think: Mean changes by 5%, standard deviation changes but the percentage difference remains the same, and the z score changes by 2.5%. Am I thinking correctly? AI: Suppose that you have $x_1, x_2, \cdots, x_n$. Using the definitions of those statistics, you have $$\bar{x} =\frac{\sum x_i}{n}$$ $$s=\sqrt{\frac{\sum (x_i-\bar{x})^2}{n-1}}$$ $$\text{z-score of } x_i=\frac{x_i-\bar{x}}{s}$$ Now if you add a constant to all observations, it is easy to see that $\bar{x}$ changes by the same amount, but the other two statistics remain the same. If you reduce all numbers by 5%, each $x_i$ becomes $0.95x_i$. Thus, both $\bar{x}$ and $s$ decrease by 5% as well, but the z-score remains the same.
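A tiny illustration of both facts (a sketch in R, not part of the answer): shifting every observation by a constant, or rescaling all of them by the same factor, leaves the z-scores unchanged.

    x <- c(12, 15, 9, 20, 14)
    z <- function(v) (v - mean(v)) / sd(v)
    z(x)
    z(x + 7)     # identical to z(x): the shift moves the mean but not the sd or the z-scores
    z(0.95 * x)  # identical to z(x): mean and sd both shrink by 5%, so their ratio is unchanged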
H: Central limit theorem with a biased die We want to know the probability $p \in (0,1)$ that a biased die rolls a $6$. We roll the die $n$ independent times: $X_i=\begin{cases} 1 & \textrm{if the i-th roll is a 6} \\ 0 & \textrm{else} \\ \end{cases} $ Calculate with the CLT the approximate minimum number of rolls so that $p$ can be estimated with $\bar{X_n}=\frac{1}{n} \sum_{i=1}^n X_i$ and an uncertainty of $0.01$ with a probability of at least $0.9$. Therefore I have to calculate $P(|\bar X_n-p|\le 0.01)\ge 0.9$. If I use $S=\sum X_i=n\bar X_n$ we have $P(|\frac{\sum X_i}{n}-p|\le 0.01)\ge 0.9$. But I don't know how to continue. AI: Hint: Using the central limit theorem means using $\ \mathcal{N}\left(\mu_n, \sigma_n^2\right)\ $ to approximate the distribution of $\ \overline{X}_n\ $, where $\ \mu_n\ $ is the mean of $\ \overline{X}_n\ $ and $\ \sigma_n^2\ $ is its variance. What are its mean and variance? Reply to query from OP: Since \begin{align} P\left(\left|\,\overline{X}_n-p\,\right|\le 0.01\right)&=P\left(-0.01\le\overline{X}_n-p\le 0.01\right)\\ &=P\left(-\frac{0.01}{\sigma_n}\le\frac{\overline{X}_n-p}{\sigma_n}\le \frac{0.01}{\sigma_n}\right)\\ &\approx{\cal N}(0,1)\left(\frac{0.01}{\sigma_n}\right)-{\cal N}(0,1)\left(-\frac{0.01}{\sigma_n}\right)\\ &=2{\cal N}(0,1)\left(0.01\sqrt\frac{n}{p(1-p)}\right)-1\ , \end{align} and $\ {\cal N}(0,1)\ $ is a strictly increasing function, you need to find the (unique) $\ x\ $ such that $\ {\cal N}(0,1)\left(x\right)=\frac{1+0.9}{2}=0.95\ $. The value of $\ n\ $ you want is then the smallest integer such that $\ 0.01\sqrt\frac{n}{p(1-p)}\ge x\ $—that is, such that $\ n\ge 10000\,p(1-p)x^2\ $.
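Numerically (a sketch in R, not part of the hint): since $p$ is unknown, one can bound $p(1-p)\le\frac14$ to get a sample size that works for every $p$.

    x <- qnorm(0.95)                       # the 0.95 quantile of N(0,1), about 1.645
    ceiling(10000 * 0.25 * x^2)            # 6764 rolls suffice for any p, using p(1-p) <= 1/4
    ceiling(10000 * (1/6) * (5/6) * x^2)   # about 3758: what the bound gives if p happened to be near 1/6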
H: calculate expected value and variance of $Y := 1 - X$ I'm struggling with the following exercise: Given the random variable $X$ with expectation value $\mu$ and variance $\sigma^2$: What is the expectation value and variance of $Y := 1 - X$? Isn't it just $E(Y) = 1-\mu$? Thanks in advance AI: Comment: And the same variance. In general, $$Var(a + bX) = b^2Var(X).$$ So here $Var(1 - X) = (-1)^2Var(X) = \sigma^2.$ Example: Suppose $X \sim \mathsf{Gamma}(\mathrm{shape}= 3, \mathrm{rate}=1/6)$ and we use R to take a large sample of size $n=100\,000.$

    set.seed(624)
    x = rgamma(10^5, 3, 1/6)
    mean(x); var(x)
    [1] 18.03389   # aprx E(X) = 18
    [1] 109.3349   # aprx Var(X) = 108
    y = 1-x
    mean(y); var(y)
    [1] -17.03389  # aprx E(Y) = 1-E(X) = -17
    [1] 109.3349   # aprx Var(Y) = Var(X) = 108
    par(mfrow=c(1,2))
    hist(x, prob=T, br=50, col="skyblue2")
    curve(dgamma(x, 3, 1/6), add=T, col="red", lwd=2)
    hist(y, prob=T, br=50, col="skyblue2")
    par(mfrow=c(1,1))

If you look closely you can see evidence that the support of $Y$ is $(-\infty, 1).$

    summary(x)
      Min. 1st Qu. Median    Mean 3rd Qu.    Max.
    0.2528 10.3442 16.0457 18.0339 23.5705 90.9903
    summary(y)
      Min.   1st Qu.   Median     Mean  3rd Qu.   Max.
    -89.9903 -22.5705 -15.0457 -17.0339 -9.3442 0.7472
H: Prove that for every collection of partitions $T$, there exist $\inf{T}$ and $\sup{T}$ I am self-studying Hrbacek and Jech's Introduction to Set Theory (3rd edition), and I want to know if the following solution to problem 5.10 (c) is correct. Unfortunately the book contains no answers and the only solution manual for the book I found thought it was too trivial to include its solution. For problem 5.10 (d) I would like some sort of clue, since the hint given in the book is, I think, a typo. Let $A$ be a nonempty set and $\operatorname{Pt}{(A)}$ be the set of all partitions of $A$. Define a relation $\preceq$ in $\operatorname{Pt}{(A)}$ by: $S_1 \preceq S_2$ if and only if for every $C \in S_1$ there is $D \in S_2$ such that $C \subseteq D$. Take $T \subseteq \operatorname{Pt}{(A)}$. Prove $\inf{T}$ exists. The idea is to find the greatest partition $S$ of $A$ such that $S \preceq Q$ for every $Q \in T$. Since $A \neq \emptyset$, take $x \in A$ and consider the set $P_x = \{ z \in A: \forall Q \in T, \exists E \in Q, (x,z \in E) \}$. Define $S = \{P_x: x \in A \}$. I want to prove $S = \inf T$. First we show $S$ is indeed a partition of $A$: $S$ contains nonempty sets: This is easy, since for any $x \in A$, $x \in P_x$ (Remember, for every $Q \in T, A = \bigcup Q$). $S$ is a collection of disjoint sets: Take $P_x, P_y \in S$ and $z \in P_x \cap P_y$. Then for every $Q \in T$, there is some $E \in Q$ and some $F \in Q$ such that $x,z \in E$ and $y, z \in F$. Since $E$ and $F$ belong to the partition $Q$ and $z \in E \cap F$, we must have $E = F$. This implies $P_x = P_y.$ $A = \bigcup S$: $x \in A \Longleftrightarrow$ for every $Q \in T, x \in \bigcup Q \Longleftrightarrow$ for every $Q \in T$ there is some $E \in Q$ such that $x \in E \Longleftrightarrow x \in P_x,$ for $x \in A$ $\Longleftrightarrow x \in \bigcup S$. So $S \in \operatorname{Pt}{(A)}$, as desired. We now proceed to prove $S$ is indeed the infimum of $T$: $S$ is a lower bound of $T$: We want to show that $S \preceq R$, for every $R \in T$. So let $R$ be any partition in $T$. If $P_x \in S,$ for $x \in A$, let $F$ be the set in $R$ that contains $x$ (there is one and only one such set); we show $P_x \subseteq F$. If $z \in P_x$, then for every $Q \in T$ there is some $E \in Q$ such that $x,z \in E$. In particular, for partition $R$ we have $x,z \in F$. Then $P_x \subseteq F$ and we conclude $S \preceq R$, as wanted. $S = \inf{T}$: Let $S_0 \in \operatorname{Pt}{(A)}$ be such that $S_0 \preceq Q$, for every $Q \in T$. We prove $S_0 \preceq S$, so let us take $F \in S_0$ and $x \in F$ (the set $F$ is not empty). Then, for every $Q \in T$ there is some $E \in Q$ such that $F \subseteq E$; so every element in $F$ (in particular $x$) is contained in all those sets $E$, hence $F \subseteq P_x$, with $P_x \in S$. We conclude $S_0 \preceq S$. We conclude $\inf{T} = S$. Prove $\sup{T}$ exists. For this the book suggests forming $T_0 = \{ R \in \operatorname{Pt}{(A)}: \forall Q \in T, Q \preceq R \}$, and then showing $\sup{T_0} = \inf{T}$; however I think this is a typo, and the authors meant $\sup{T} = \inf{T_0}$. For this part, actually, I am a bit lost. I tried writing the definitions, but couldn't even show $\inf{T_0}$ is an upper bound for $T$. In problem 5.10 (c) I would like any feedback, especially on the very last part ($S = \inf{T}$), as it feels a bit sketchy to me. Thank you for the help. Update: Thank you for the answer.
I will fill in the details in your argument, if only for the sake of future reference: Prove $\inf{T}$ exists. Let us start by recalling that there is an associated equivalence relation $\sim_Q$ in $A$ for every $Q \in \operatorname{Pt}{(A)}$: $x \sim_Q y \Longleftrightarrow$ there is some $C \in Q$ such that $x, y \in C$. Let $Q, R \in \operatorname{Pt}{(A)}$. Then, $Q \preceq R \Longleftrightarrow \sim_Q \subseteq \sim_R$: First the ($\Longrightarrow$) direction: Assume $Q \preceq R$. If $x \sim_Q y$, then there is some $C \in Q$ such that $x,y \in C.$ But this implies the existence of some $D \in R$ such that $C \subseteq D$ (definition of $\preceq$). Hence $x, y$ both belong to some set in $R$ (namely $D$), so $x \sim_R y$. We conclude $\sim_Q \subseteq \sim_R$. Now the ($\Longleftarrow$) direction: Assume $\sim_Q \subseteq \sim_R$. If $C \in Q$, then $C \neq \emptyset$, so we take some $x \in C$. The relation $\sim_Q$ is reflexive so $x \sim_Q x$, which implies $x \sim_R x$. That is, $x$ belongs to some $D \in R$. Notice this set $D$ is unique, for if $y \in C$ and $x \neq y$, then $x \sim_Q y$; this again implies $x \sim_R y$, which means $x$ and $y$ belong to the same set in $R$. But $R$ is a partition, so $x$ can belong to only one set in this partition, the set $D$. Thus, $C \subseteq D$ and $Q \preceq R$. Now we define $\sim = \bigcap \{\sim_Q: Q \in T\}$, which is an equivalence relation in $A$ (we leave this easy verification for the reader). This relation induces a partition of $A$, $A/\sim = \{[x]_\sim: x \in A\}$, which will be denoted by $S$ for simplicity. We show $\inf{T} = S$: $S$ is a lower bound of $T$: We know $\sim \subseteq \sim_Q$, for every $Q \in T$. Then, $S \preceq Q$, for every $Q \in T$. $\inf{T} = S$: Let $S_0 \in \operatorname{Pt}{(A)}$ be such that $S_0 \preceq Q$, for every $Q \in T$. Then $\sim_{S_0} \subseteq \sim_Q$, for every $Q \in T$. Hence, $\sim_{S_0} \subseteq \sim$, which implies $S_0 \preceq S$ as desired. We conclude $\inf{T} = S$. Prove $\sup{T}$ exists. Let $T_0 = \{ R \in \operatorname{Pt}{(A)}: \forall Q \in T, Q \preceq R \}$. By the previous exercise, $\inf{T_0} = F$ exists and its induced equivalence relation is: $\sim_F = \bigcap\{\sim_R: R \in T_0\}$. We show $\sup{T} = F$. $F$ is an upper bound of $T$: Take $Q \in T$. Then we have $Q \preceq R$, for every $R \in T_0$, which implies $\sim_Q \subseteq \sim_R$, for every $R \in T_0$. Hence $\sim_Q \subseteq \sim_F$, and thus $Q \preceq F$. So $F$ is an upper bound of $T$. $\sup{T} = F$: The partition $\inf{T_0} = F$ is an upper bound for $T$, and since $F \preceq R$ for every $R$ in $T_0$, the set of all upper bounds for $T$, it is the least upper bound for $T$. AI: Your proof of the first part is fine. You’re right about the typo in the second part: you should indeed be trying to show that $\sup T=\inf T_0$. For each $R\in\operatorname{Pt}A$ let $\sim_R$ be the associated equivalence relation on $A$: $a\sim_Rb$ iff there is an $E\in R$ such that $a,b\in E$. Then $\sim_R$ is a subset of $A\times A$, and it’s not hard to verify that for any $Q,R\in\operatorname{Pt}A$, $Q\preceq R$ iff $\sim_Q\subseteq\sim_R$. Let $\sim=\bigcap\{\sim_R:R\in T_0\}$; then $\sim$ is an equivalence relation on $A$, so it induces a partition, and it’s not hard to check that this partition is $\inf T_0$. (In fact this is an alternative approach to the first part of the problem.) Now let $Q\in T$. Then for each $R\in T_0$ we have $Q\preceq R$ and hence $\sim_Q\subseteq\sim_R$.
It follows that $\sim_Q\subseteq\bigcap\{\sim_R:R\in T_0\}=\sim$ and hence that $Q\preceq\inf T_0$. Thus, $\inf T_0$ is an upper bound for $T$, and since $T_0$ is the set of upper bounds for $T$, $\inf T_0$ is actually the least upper bound for $T$.
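For a concrete feel for the two constructions (a sketch in R, not part of the answer; meet and join are my own helper names), one can compute $\inf\{P,Q\}$ and $\sup\{P,Q\}$ for partitions of a small finite set, encoding a partition of $\{1,\dots,n\}$ as a vector of block labels:

    # Two elements i, j lie in the same block exactly when their labels agree.
    meet <- function(p, q) {
      # inf: intersect the two equivalence relations, i.e. take the common refinement
      as.integer(interaction(p, q, drop = TRUE))
    }
    join <- function(p, q) {
      # sup: merge labels until they are constant on every block of p and of q,
      # giving the smallest partition that both p and q refine
      labels <- seq_along(p)
      repeat {
        new <- labels
        for (blocks in list(p, q)) {
          for (b in unique(blocks)) {
            members <- which(blocks == b)
            new[members] <- min(new[members])
          }
        }
        if (identical(new, labels)) break
        labels <- new
      }
      labels
    }
    p <- c(1, 1, 2, 2, 3, 3)  # blocks {1,2}, {3,4}, {5,6}
    q <- c(1, 2, 2, 3, 3, 3)  # blocks {1}, {2,3}, {4,5,6}
    meet(p, q)                # blocks {1}, {2}, {3}, {4}, {5,6}: the common refinement
    join(p, q)                # one block {1,...,6}: here the sup is the coarsest partition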