H: parametrization and orientation of the Möbius band I know that the Möbius band is a nonorientable surface. However, the following exercise seems to contradict this. A Möbius band can be constructed as a ruled surface by $x(u,v)=\beta(u)+v\gamma(u)$, where $-1/3<v<1/3$, $\beta(u)=(\cos u, \sin u, 0)$ and $\gamma(u)=(\cos [u/2]\cos u, \cos [u/2]\sin u,\sin[u/2])$. Then the mapping $x$ from an open set of $\mathbf{R}^2$ to the band is regular and one to one. Its unit normal vector field can be obtained as the cross product of the partial derivatives of $x$. This seems to contradict the nonorientability of the Möbius band, because it has a unit normal vector field. Where am I mistaken? AI: In order to get a parametrization defined on an open subset of $\mathbf{R}^{2}$ you need to restrict your angular parameter $u$ to the open interval $(0, 2\pi)$. But the image of $x(u,v)$ in this case is not precisely the Möbius band, but the band minus one vertical line where it should "close". If you take $u$ belonging to the interval $[0, 2 \pi]$, then you cover the band, but you also obtain a normal field that is not globally continuous. Indeed, doing the calculations for the central line of your band (that is, for the curve $v = 0$), you may check that $N(0,0) = - N(2\pi, 0)$, where $N$ is a normal field obtained by taking the cross product of the coordinate derivatives $x_{u}$ and $x_{v}$.
H: Construct a topological embedding For an arbitrary discrete space $X$, construct a compact topological space $Y$ and a topological embedding of $X$ into $Y$. I am thinking of constructing $X=\{0,1\}$ equipped with the discrete topology, and taking the topology on $Y$ to be $\{\emptyset,\{0,1\},\{1\}\}$, which is the Sierpinski space. Thus $Y$ is immediately compact. I try to define the map as $f(\emptyset)=\emptyset, f(X)=X, f(\{0\})=f(\{1\})=\{1\}$. Then $f:X\to f(X)$ is a homeomorphism. Would anyone help me to check if my idea is correct? AI: If $X$ is finite, $X$ is already compact, and you might as well let $Y=X$ with the identity map as your embedding. There’s no reason to add an extra point. The exercise becomes interesting only when $X$ is infinite, and then adding another point is necessary. There is a way to do it by adding just one point: it’s called the Alexandrov one-point compactification of $X$. See if you can work it out from that little article; if you get stuck, feel free to leave a question for me.
H: $4$ and $a_{2n + 1}$ are coprime? Suppose $a_i$ is a sequence of positive integers. Define $a_1 = 1$, $a_2 = 2$ and $a_{n+1} = 2a_n + a_{n-1}$. Does it follow that $$ \gcd(a_{2n+1} , 4 ) = 1 $$ ??? I'm trying to see this by induction: assuming the above holds, we need to see that $\gcd(a_{2n+3} , 4 ) = 1$. But, $\gcd(a_{2n+3} , 4 ) = n_0(2a_{2n+1} + a_{2n-1}) + 4n_1$ for integers $n_0, n_1$. But this quantity does not seem to give me $1$. Can someone help me with this problem? Thanks. AI: Here is your inductive step. Note that the statement $\gcd(a,4)=1$ says just that $a$ is odd, i.e. $a \bmod{2} = 1$. So, here is the inductive step: $$ (a_{2n+3} \bmod{2}) = (2a_{2n+2} \bmod{2}) + (a_{2n+1} \bmod{2})=0+1=1. $$
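As a quick sanity check (not a proof), one can generate the sequence and test the parity claim directly; a minimal Python sketch:

```python
from math import gcd

def fib_like(count):
    # a_1 = 1, a_2 = 2, a_{n+1} = 2*a_n + a_{n-1}  (1-based indexing)
    seq = [1, 2]
    while len(seq) < count:
        seq.append(2 * seq[-1] + seq[-2])
    return seq

seq = fib_like(50)
# odd-indexed terms a_1, a_3, a_5, ... sit at even 0-based positions
odd_indexed = seq[0::2]
# every one of them is odd, hence coprime to 4
all_coprime = all(gcd(a, 4) == 1 for a in odd_indexed)
```

The even-indexed terms, by contrast, alternate in parity pattern with the recurrence, which is exactly what the inductive step above exploits.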
H: Evaluating $\int _{-1}^{e} \frac{1}{x}dx$ Here, very easily by the Fundamental Theorem of Calculus, $$\int _{-1}^{e} \frac{1}{x}dx=\ln(e)-\ln(-1)$$ From Euler's identity $e^{i \pi}=-1$ we can easily deduce that $\ln(-1)=i \pi$. Thus the integral becomes $$\int _{-1}^{e} \frac{1}{x}dx=1-i \pi$$ But we know that calculating an integral is calculating area, and area cannot be a complex number (which is what we get here); moreover $\ln(0)=- \infty$, so how come the bounds $(-1,e)$ give a finite value? What's wrong here? AI: The function $1/x$ is not Riemann-integrable over $[-1,e]$ (just try plotting it and see what happens to the "area under the curve" near $x=0$; it has a non-integrable singularity there), which is why the integral you wrote down does not exist. You can't apply the fundamental theorem of calculus to such an undefined integral. To actually make sense of the integral, you need to say that it is an improper integral, and say that you are looking for its Cauchy principal value. See this (using the same example) and this. The reason you got a complex value where you expected a real value is that you incorrectly calculated the integral. The subintegrals over $[-1,0)$ and $(0,1]$ should cancel, and the correct (principal) value of the improper integral is $$ \mathop{\mathrm{P.V.}}\int_{-1}^e \frac{dx}{x} = \int_1^e \frac{dx}{x} = 1. $$ See also this and this.
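The cancellation argument can be illustrated numerically; the sketch below (my own, not from the linked answers) uses a composite Simpson rule with a symmetric exclusion window around $0$, so the two singular pieces over $[-1,-\epsilon]$ and $[\epsilon,1]$ cancel because $1/x$ is odd, leaving only the integral over $[1,e]$:

```python
import math

def simpson(f, a, b, n=10_000):
    # composite Simpson's rule with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: 1.0 / x
eps = 1e-3
# mirrored grids over [-1, -eps] and [eps, 1] cancel by oddness;
# the remaining piece over [1, e] carries the principal value
pv = simpson(f, -1, -eps) + simpson(f, eps, 1) + simpson(f, 1, math.e)
```

The result approximates $1$, the principal value stated above.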
H: Is this an identification map or not? The definition of identification map is the following: A surjective map $f: X \to Y$ is an identification map iff $U$ in $Y$ is open if and only if $f^{-1}(U)$ is open in $X$. I have an idea of what identification spaces should look like. Like for example you have a space and you identify all the points in one subset. That should give a new space. The problem is that this seems to violate the definition above. Consider this example: Let $X$ be the unit disk $D^2$ centered at the origin in $\mathbb R^2$ with the subspace topology and let $S \subset D^2$ be an open ball around the origin of radius $1/2$. Define $Y$ to be the quotient space $X/\sim$ where $x \sim x'$ iff $x,x' \in S$. Let $f: X \to X/\sim$ be the map $x \mapsto x$ if $x \notin S$ and $x \mapsto (0,0)$ otherwise. Then $f^{-1}((0,0)) = S$, and $f$ seems not to be an identification even though it should be. What is going on here? AI: You do have a perfectly good identification space; it’s just that when you identify a non-closed set to a point, you get results that may at first seem a little odd. The equivalence relation $\sim$ needs to be defined on all of $X$: $x\sim y$ iff $x=y$ or $\{x,y\}\subseteq S$. Then the points of $X/\sim$ are the sets $\{x\}$ for $x\in X\setminus S$, and the set $S$. Let $Y=\{\langle 0,0\rangle\}\cup(X\setminus S)$; $Y$ is a closed annulus together with one extra point, the origin. Let $\varphi:Y\to X/\sim$ be defined by $\varphi(x)=\{x\}$ if $x\in X\setminus S$, and $\varphi(\langle 0,0\rangle)=S$; then $\varphi$ is a bijection. Let $$g=\varphi^{-1}\circ f:X\to Y\;;$$ instead of talking about $X/\sim$ and $f$, I’ll talk about $Y$ and $g$, pretending that $Y$ is the quotient and $g$ the quotient (or identification) map. (You did this implicitly; I’m being a little more careful, at least mentioning that $Y$ isn’t actually the same as $X/\sim$.) By definition a set $U\subseteq Y$ is open in $Y$ iff $g^{-1}[U]$ is open in $X$. 
If $U$ is disjoint from the closed disk of radius $\frac12$ centred at the origin, then $g^{-1}[U]=U$, so $U$ is open in $Y$ iff it’s open in $X$. Thus, if $\overline{S}$ is the Euclidean closure of $S$, the quotient (or identification) topology on $Y\setminus\overline{S}$ is just Euclidean. What are open nbhds of $\langle 0,0\rangle$ in $Y$? If $\langle 0,0\rangle\in U\subseteq Y$, then $$g^{-1}[U]=S\cup g^{-1}[U\setminus\{\langle 0,0\rangle\}]=S\cup\big(U\setminus\{\langle 0,0\rangle\}\big)=S\cup U\;.$$ If $U=\{\langle 0,0\rangle\}$, this is just $S$, which is open in $X$, so $\langle 0,0\rangle$ is an isolated point in $Y$. I’ll leave it to you to check that if a point $p$ lies on the circle $C$ of radius $\frac12$ centred at the origin, and $p\in U\subseteq Y$, then $g^{-1}[U]$ is open in $X$ iff $\langle 0,0\rangle\in U$ and $U\setminus\{\langle 0,0\rangle\}$ is open in the relative Euclidean topology on $X\setminus S$. Thus, every point of $C$ is an accumulation point of $\{\langle 0,0\rangle\}$, and $Y$ is not a $T_1$ space, let alone Hausdorff: $\operatorname{cl}_Y\{\langle 0,0\rangle\}=\{\langle 0,0\rangle\}\cup C$. This really isn’t surprising when you think it through: every point of $C$ is in the closure of $S$ in $X$, and the identification doesn’t do anything to $X\setminus S$, so we should expect these points still to be in the closure of the point-formerly-known-as-$S$.
H: If arrivals are not occurring at random, why do arrival times still have a probability distribution? If the arrival times are, say, uniformly distributed U(0,10), I don't quite understand why the arrivals in this case are said to be not occurring at random (this claim was implied in my textbook chapter on queues), since they aren't deterministic either. Sure, the next arrival is certain to occur within 10 mins, and the arrival time is not memoryless, but it isn't deterministic either... Does all this mean that a variate can be neither random nor deterministic? I'm not sure whether my comprehension issue is with the logic or with the terminology. AI: It's just a terminology issue. The word "random" is vague and its use should in general be avoided. For example, for a lot of people "a random situation" means "all events are equally likely"; these people would have a hard time understanding why "random arrivals" follow the Poisson and not the Uniform. Sometimes the word "random" is used as a "verbal substitute" for the word "independence". In most cases, the Poisson distribution models essentially the allocation of probabilities over the possible values of a sum of independent indicator functions (and this is why it is so closely related to the binomial distribution). To obtain the Poisson, we need to make two assumptions related to stochastic independence: a) That each indicator function (will I call the call center or not in the current minute?) is independent of all the others (whether you call does not affect the probability that I will). (So for example, when the service is down, you can see that calls to the call center will stop being independent from each other, since calls will have a common source.) And b) Each indicator function, viewed as a stochastic process, is independent of its own past: whether or not I called the call center one minute ago does not affect the probability that I will call in the current minute. 
(For this to be realistic one should decide on the length of the interval judiciously, which is part of the applied art of working with the Poisson: it is unrealistic to claim that whether I called in the last minute does not affect the probability that I will call in the current minute, but perhaps, if you set your time period as "day" or "week", the Poisson becomes acceptable.) It is these two assumptions related to independence that people express by the use of the word "random" in this case, which is very confusing indeed. Now, what would it mean to say that "arrivals follow the uniform"? It would mean that it is equally probable that we will observe, say, 1 arrival or 1,000 arrivals. Or that it is equally probable that out of the $N$ indicator functions, only one will take the value unity, or that 1,000 of them will take the value unity. Or any other value from 1 to $N$, for that matter. So what kind of random variables are these, that their sum follows the uniform?
H: Question in analysis from an entrance exam paper This cropped up while I was going through some old question papers. If $f \in C^1[0,1]$ so that $ \lim \limits_{x\to \infty} \dfrac{x\,f(x)}{f'(x)}=2$, then: 1) Show that for $s<2$, $\lim\limits_{x \to \infty} x^{-s}f(x) = \infty$. 2) Find $\dfrac{\int \limits_{0}^{1}f(x)\,\mathrm dx}{x\,f'(x)}$. The first I tried employing L'Hospital's rule, though I am still not getting the answer. The second I thought might be approached by writing $f(x) = f(0) + x\int_{0}^{1}f'(tx)\,\mathrm dt$, though again I am not sure where this leads. AI: First problem: We want to show that $$\lim_{x\rightarrow \infty}f=\infty.$$ If not, then $\lim_{x\rightarrow \infty}f=C>0$, so $\lim_{x\rightarrow \infty}f'=0$, which contradicts the hypothesis $\lim \frac{xf}{f'}=2$. If $ \lim_{x\rightarrow \infty} f=0$, then $$ 0> \lim_{x\rightarrow \infty} \frac{x^2}{\ln f} = \lim_{x\rightarrow \infty}\frac{2x}{\frac{f'}{f}}=4,$$ which is also a contradiction. So $$ \lim_{x\rightarrow \infty} \frac{f}{x^s }= \lim_{x\rightarrow \infty} \frac{f'}{sx^{s-1}}= \lim_{x\rightarrow \infty} \frac{f'}{sx^{s-1}}\frac{xf}{2 f'}= \lim_{x\rightarrow \infty} \frac{x^{2-s}f}{2s}=\infty$$ where $s<2$.
H: problem on similar triangles In the adjacent figure, $\frac{AG}{GD} = \frac{3}{4}$ and $\frac{BD}{DC}=\frac{4}{7}$ and $AE=12$ cm. Find the length of $EC$. 1) 33 2) 36 3) 44 4) 48 Figure: (not shown) I think similarity of triangles is to be used but can't really find the target triangles. AI: You can use Menelaus' Theorem (note that $\frac{BD}{DC}=\frac{4}{7}$ gives $\frac{BD}{BC}=\frac{4}{11}$): $$\frac{BD}{BC}\cdot\frac{CE}{AE}\cdot\frac{AG}{GD}=1\Rightarrow\frac{4}{11}\cdot\frac{CE}{12}\cdot\frac{3}{4}=1\Rightarrow CE=44$$
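The Menelaus relation above can be solved for $CE$ with exact fractions; a minimal sketch (variable names are mine):

```python
from fractions import Fraction

# BD/DC = 4/7 gives BD/BC = 4/(4+7) = 4/11
BD_over_BC = Fraction(4, 11)
AG_over_GD = Fraction(3, 4)
AE = 12

# Menelaus: (BD/BC) * (CE/AE) * (AG/GD) = 1  =>  CE = AE / ((BD/BC)*(AG/GD))
CE = AE / (BD_over_BC * AG_over_GD)
```

This gives $CE = 44$, matching option 3.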
H: Find the domain and range of $f(x) = y$ $ y = \sec^{-1}(2x - x^2)$. If you graph the equation, the domain and range can be inferred, but I don't know how to solve it algebraically. AI: This is a partial answer. $$ \sec:\mathbb{R}-\{\frac{k\pi}{2}\}\rightarrow(-\infty,-1]\cup[1,+\infty) $$ $$ \sec^{-1}:(-\infty,-1]\cup[1,+\infty)\rightarrow\mathbb{R}-\{\frac{k\pi}{2}\} $$ where $k=\pm1,\pm3,\pm5,...$ so $$ 2x-x^{2}\leq-1\Rightarrow x\geq1+\sqrt{2},\quad x\leq1-\sqrt{2} $$ and $$ 2x-x^{2}\geq1\Rightarrow x=1 $$ Therefore the domain will be $$ \left\{ x\in\mathbb{R\mid}\quad x\leq1-\sqrt{2}\: or\: x=1\: or\: x\geq1+\sqrt{2}\right\} $$
H: Prove that the $\sigma$ - algebras are equal I want to show that $\sigma$-algebras on $\mathbb{R}$ generated by $(a,b), \ (a,b], [a, b), [a,b], (-\infty, a), (-\infty, a], (b, +\infty), [b, +\infty)$ for $a,b \in \mathbb{R}$ and $a,b \in \mathbb{Q}$ are all equal. Here is what I've come up with so far: ($\mathcal{M} $ - sigma-algebra) Would it suffice to say that $\sigma$-algebras generated by $(a,b), \ (a,b], [a, b), [a,b], \ a,b \in \mathbb{R}$ are the same because $(a,b) \in \mathcal{M} \ \Rightarrow \mathbb{R} \setminus (a,b) = (-\infty, a] \cup [b, +\infty) \in \mathcal{M}$, so $[a,b] = \bigcap_{n \in \mathbb{N_+}} (a-\frac{1}{n}, b+\frac{1}{n}) = \bigcap_{n \in \mathbb{N_+}} (\mathbb{R} \setminus (-\infty, a-\frac{1}{n}] \cup [ b+\frac{1}{n}, +\infty) )=$ $ = \mathbb{R} \setminus(\bigcup_{n \in \mathbb{N_+}} (-\infty, a-\frac{1}{n}] \cup [ b+\frac{1}{n}, +\infty) ) \in \mathcal{M}$ Similarly, $[a,b) = \bigcap_{n \in \mathbb{N_+}} (a-\frac{1}{n}, b)$, $(a,b] = \bigcap_{n \in \mathbb{N_+}} (a, b +\frac{1}{n})$. When it comes to $(-\infty, a), (-\infty, a], (b, +\infty), [b, +\infty)$ for $a,b \in \mathbb{R}$, would it be enough to say that $(-\infty, a) = \bigcup_{n \in \mathbb{N}}(-n, a)$ and etc? In case of $a,b \in \mathbb{Q}$ would it be all right to say that $\mathbb{Q}$ is dense in $\mathbb{R}$ so for every $r \in \mathbb{R}$ we will find a sequence $\{q_n\}_{n \in \mathbb{N}}$ that converges to $r$, so for example $(r, b) = \bigcup_{n \in \mathbb{N}}(q_n, b)$ ? Could you tell me if my approach is right? Please, help. Thank you. AI: The easiest way is to show that any $\sigma$-algebra containing rational intervals must contain all real intervals. You've got the right idea: the rationals are a countable dense subset of the real numbers, so you can always find a sequence of rational numbers converging to any real number, which you can use to construct countable unions and intersections to make all the types of interval you want.
H: Visualize the effect of adding another constraint I have these constraints: $$ x_1+4x_3\leq4$$ $$ x_2+4x_3\leq4 $$ $$x_1\geq0$$ $$x_2\geq x_3\geq0$$ By drawing the geometrical figure I get vertices with coordinates $(0,0,0), (4,0,0), (0,0,1), (0,4,0), (4,4,0)$. I want to include an inequality $$x_1+x_2+x_4+x_5\geq \epsilon$$ I can't visualize the result of adding this. Is the point $(0,0,1)$ outside the geometrical figure after adding the constraint? Can you please help me to visualize the effect of adding another constraint? AI: To start, try and visualize the solutions to $x_1+4x_3-4=0$. Compare with $mx-y=0, x^2+y^2-1=0.$ Then consider the regions that are separated from each other by this curve. If only two regions meet, one must be greater than 0 and the other less than it. Now, you can consider this region in 3D if you use 3 coordinates in your original formulation, but to combine it with constraints not in exactly those 3 variables, you must extend that figure along the span of those other variables. For example, $x^2+y^2 \leq 1$ and $-1\leq z \leq 1$ would combine to describe a cylinder from $-1$ to $1$. With this setup in place, one simply takes the intersection of all the regions described by the individual constraints to obtain the region described by the simultaneous inequalities. All the constraints can be viewed as having coefficient-0 terms for the variables that don't appear, which is another way of seeing that the figure satisfying the inequality will have unconstrained axes for each variable beyond those appearing in that inequality.
H: some integral of curvature I have a problem: Let $C$ be a curve defined by $C := \{\ (x,y)\in \mathbb{R}^2 \mid x^4 + y^4 = 1\}$, and let $k$ be its curvature. Compute the integral $\int_C k$. This is a problem from an exam. What is the definition of the integral? And how can I evaluate it? AI: Use the global Gauss-Bonnet theorem: $$ \int_{C = \partial R} k_g +\int\int_R K =2\pi \chi (R)\ (\ast)$$ Since $R$ is a plane region, $\chi(R)=1$, and the Gauss curvature $K$ is $0$. Also, the geodesic curvature $k_g$ equals the curvature $k$ because the curve lies in the plane. Hence the integral is $2\pi$. (The given curve is just a simple closed curve with no singularities, so there is no external angle term in $(\ast)$.)
H: How to show $\frac{1}{e}\le R\le 1$ Question: let $a_{n}> 0$ be such that $\displaystyle\sum_{n=1}^{\infty}a_{n}$ converges, and let $$b_{m}=\sum_{n=1}^{\infty}\left(1+\dfrac{1}{n^m}\right)^na_{n}.$$ Show that $$\dfrac{1}{e}\le R\le 1$$ where $R$ is the radius of convergence of $\displaystyle\sum_{m=1}^{\infty}b_{m}x^m$. My try: I know this theorem: If the radii of convergence of $\sum_{n=0}^{\infty}a_{n}x^n$ and $\sum_{n=0}^{\infty}b_{n}x^n$ are $R_{1}$ and $R_{2}$, then the radius of convergence $R$ of $\sum_{n=0}^{\infty}a_{n}b_{n}x^n$ satisfies $R\ge R_{1}R_{2}$. But I can't make this work for my problem. Thank you. AI: For every $m\geqslant1$ and $n\geqslant1$, one has $1\leqslant\left(1+\frac1{n^{m}}\right)^n\leqslant\left(1+\frac1{n}\right)^n\leqslant3$. Hence, if every $a_k$ is nonnegative, $\alpha\leqslant b_m\leqslant3\alpha$ for every $m\geqslant1$, where $\alpha=\sum\limits_{k=1}^\infty a_k$. Thus, as soon as every $a_k$ is nonnegative, not every $a_k$ is zero, and the series $\sum\limits_ka_k$ converges, the radius of convergence of the series $\sum\limits_{m=1}^\infty b_mx^m$ is $1$.
H: compare complex number with 0? I'm reading the book "What Is Mathematics" and found this question: Calculate $\sqrt{5+12i}$. I followed the hint and wrote the equation $\sqrt{5+12i} = x+yi$ and solved it, with these results: $x=3, y=2$ or $x=-3, y=-2$. So here my intuition told me maybe I should pick $3+2i$, but I really don't know why. How do I know whether a complex number is $>0$ or $<0$? Or maybe I was wrong? AI: The function $\sqrt{\ }$ is not defined on $\mathbb C$, roughly for the reason you explain. Note that it is not even defined on $\mathbb R$. What is well defined is a function $\sqrt{\ }:\mathbb R_+\to\mathbb R_+$, which everybody knows. Thus, "compute $\sqrt{x}$" when $x$ is in $\mathbb C\setminus\mathbb R_+$ can only mean providing the two complex numbers $z$ and $-z$ such that $z^2=x$. If $x=5+12\mathrm i$, you showed that $\{z,-z\}=\{3+2\mathrm i,-3-2\mathrm i\}$. Edit: As mentioned in the comments, one can define a function $v$, continuous on the angular sector $(-\pi,\pi]$, and such that $v(z)^2=z$ for every $z$. Each $z$ in $\mathbb C$, $z\ne0$, can be uniquely written as $z=r\mathrm e^{\mathrm it}$ for some $r\gt0$ and $-\pi\lt t\leqslant\pi$. Define $v(0)=0$ and $v(z)=\sqrt{r}\cdot\mathrm e^{\mathrm it/2}$ for every such $z\ne0$. Note that $v$ is continuous but for a twisted topology of the complex plane $\mathbb C$, where the (small) neighborhoods of every $z$ not in $\mathbb R_-^*$ are the usual (small) ones and the (small) neighborhoods of $z$ in $\mathbb R_-^*$ are the intersections of the usual (small) ones with the halfplane $\Im\geqslant0$. In particular, for every $r\gt0$, $v(r\mathrm e^{\mathrm it})\to-\mathrm i\sqrt{r}$ when $t\to-\pi$, $t$ in $(-\pi,\pi]$, while $v(r\mathrm e^{\mathrm it})\to+\mathrm i\sqrt{r}$ when $t\to\pi$, $t$ in $(-\pi,\pi]$.
H: Finding branch points and branch cuts of arctan I am studying complex analysis and I do not yet fully understand branch points and branch cuts. I am trying to figure out how it works by looking at the following: $z \rightarrow \frac{1}{2i} \log(\frac{1+iz}{1-iz})$ ($\arctan(z)= \frac{1}{2i}\log(\frac{1+iz}{1-iz})$). Now how do I find the branch points and branch cuts? AI: Hint: for every $a,b\in\mathbb{C}$, find those $z\in\mathbb{C}$ such that $\frac{z-a}{z-b}\in\mathbb{R}$. Then find exactly when the given fraction is positive, and when it's negative. Edit: for any $u,w\in\mathbb{C}$ we have $\arg{\frac{u}{w}} = \arg{u}-\arg{w}$. It follows that the quotient is real iff $u,w$ have the same argument (they 'point' in the same direction) or opposite arguments. How can we apply that to the first hint? Edit: the next part is more explicit, and I'll try to give some intuition based on your familiarity with Möbius transformations (however, if you're not done thinking about the problem, you might wish to delay reading it). As mentioned by @AndrewD.Hwang in the comments above, an analytic branch of logarithm exists in any simply connected domain not including zero. Put differently, when you think of the complex plane as the Riemann sphere (infinity as the 'north' pole), the logarithm has branch points at the poles (zero and infinity), and removing any arc connecting both poles (that arc becomes the branch cut) will yield a simply connected surface on which an analytic branch of logarithm indeed exists. Now, $\varphi(z) = \frac{1+iz}{1-iz}$ is a Möbius transformation. This means that it's conformal on the entire Riemann sphere (if you'd like, it 'distorts' the sphere in general, but locally it behaves very similarly to shifts and rotations). 
Note that it maps $i\mapsto 0$, $-i\mapsto \infty$, implying that the image of any arc connecting $\pm i$ under $\varphi$ is an arc connecting $0,\infty$, and indeed $\pm i$ are the branch points of $\arctan := \log\circ\varphi$, and any arc connecting them would be a branch cut (in other words, removing such an arc would yield a domain in which an analytic branch of $\arctan$ exists). Now, as AndrewD.Hwang also mentioned, the standard 'choice' of branch cut for logarithm is the non-positive real line. It is here that my original hint could help us, as it allows us to find the arc mapped by $\varphi$ to that cut. As I said, for any $a,b\in\mathbb{C}$ we have $\frac{z-a}{z-b}\in\mathbb{R}$ iff $(z-a),(z-b)$ point in similar or opposite directions. This happens exactly when $z$ lies on the unique line passing through $a,b$, and the quotient is negative iff $z$ lies on the segment connecting $a,b$ (because that is when the arrows from $a,b$ to $z$ point in opposite directions). More rigorously, one notes: $$\frac{z-a}{z-b}=t \iff z-a=t(z-b) \iff (1-t)z = a - tb = (1-t)a - t(b-a)\\ \iff z = a + \frac{t}{t-1}(b-a),$$ and indeed $\frac{t}{t-1}\in(0,1)$ ($z$ lies between $a,b$) iff $t$ is negative. Finally, note that $$\varphi(z) = \frac{i(z-i)}{i(-i-z)} = -\frac{z-i}{z+i}$$ is a non-positive real exactly when $z$ lies on the imaginary axis but not between $\pm i$, i.e. on $$\{it\mid t\in\mathbb{R}, |t|\geq 1\}.$$ (This, you see, is an arc in the Riemann sphere connecting $\pm i$, which has the 'nice' property of passing through infinity, the point on the sphere that we 'dislike' working with. Or, if you'd like, it's the only arc connecting $\pm i$ which doesn't pass through the real line, allowing us to truly extend the familiar $\arctan$ on the reals.)
H: Are complex eigenvalues special? I've noticed that complex eigenthings are treated as a whole separate topic to real eigenthings (I say things to mean values and vectors). I see no reason for this distinction: yes, it makes the geometric interpretations different, but it's not like one has to use a completely different method to find them. So my question is: should I go read about complex eigenvalues in great detail, as if they were a new topic, or just plough into it? My situation is that there's some homework and I'm thinking "is this actually different to what I know already?" I am sure of course there is value to studying them and I shall, but is it a "big deal"? That is: is the question "find the complex eigenvectors and eigenvalues" different in any significant way from "find the eigenvectors and eigenvalues" (assumed to be real)? Sorry, this is a bit of a null question; I just want to know if I've missed something. Addendum: I've thought about it and there's no reason for the definitions I know already to only work with real numbers. AI: Are complex roots of polynomials special? Yes, they are, in the sense that a polynomial may have no real roots, or fewer of them than its degree (even if its coefficients are all real!), while it will have exactly the same number of complex roots (counted with their multiplicity) as its degree. In the same way, complex eigenvalues are "special". A real matrix may have no real eigenvalues. For example, $\left[\begin{smallmatrix} 0 & 1 \\ -1 & 0 \end{smallmatrix}\right]$ has eigenvalues $\pm i$. However, when the matrix is real, its complex eigenvalues come in conjugate pairs (if $\lambda \in \mathbb{C}$ is an eigenvalue, then $\overline{\lambda}$ is also an eigenvalue), which also correspond to some matrices of order $2$, which is why the real Schur decomposition has the form it has. 
This is not very different from the fact that each real polynomial can be written in the form $$\prod_i q_i(x) \prod_j l_j(x),$$ where $l_j$ are linear polynomials, while $q_i$ are irreducible quadratic polynomials. This is not really surprising, given that eigenvalues are the roots of the polynomial $\det(A - \lambda{\rm I})$. Personally, I prefer doing everything with complex numbers and then, if needed, restricting that to the real vector space. I leave it to you to decide if this makes complex eigenthings special or not.
H: How do I get $\int^\infty_{-\infty} ue^{-u^2/2} du = [-e^{-u^2/2}]$ How do I get: $$\int^\infty_{-\infty} ue^{-u^2/2} du = [-e^{-u^2/2}]^\infty_{-\infty}$$ AI: Oh, I should use integration by substitution. Let $y = -u^2/2$; then $dy = -u\,du$, so $$\int ue^{-u^2/2}\,du = -\int e^{y}\,dy = -e^{y}+C = -e^{-u^2/2}+C,$$ which gives the antiderivative in the brackets.
H: When can you replace some random variable $X$ with another random variable $Y$? Is there some condition under which one random variable $X$ can be replaced by some other random variable $Y$, provided that $Y$ has the same distribution as $X$? AI: If you are consistent, then renaming will do no harm. Replacing is another thing. For instance, if you are asked to calculate the expectation of $X^{2}=X\times X$ where $X$ is a symmetrically distributed rv (i.e. $X$ and $-X$ have the same distribution), then you cannot replace the second $X$ by $-X$, which would lead to the calculation of the expectation of $X\times (-X)=-X^{2}$.
H: Solve $z+ \bar{z}=|z^2+1|$ Solve this equation: $$z+ \bar{z}=|z^2+1|$$ I tried the following. $$x+iy+x-iy=|z^2+1|$$ $$2x=|z^2+1|$$ $$x=|z^2+1|/2$$ and I came to a dead end. How can I proceed? AI: Hints: $$z^2+1=(x+iy)^2+1=(x^2-y^2+1)+i(2xy)$$ $$|a+ib|^2=a^2+b^2$$
H: Solving an inequality. Did I do it right? Solve the following inequality: $$5(y-2)-3(y+4)\ge2y-20$$ I made calculations and I found: $$ 0\ge2 $$ What does it mean? Are my calculations right? Here's how I did it: $$ 5(y-2)-3(y+4)\ge2y-20\\ 5y-10-3y-12\ge2y-20\\ 2y-22\ge2y-20\\ 2y\ge2y+2\\ 2y-2y\ge2\\ 0\ge2 $$ AI: Your calculations are correct. The final line $0\ge2$ is false, which tells you that no $y\in\mathbb{R}$ satisfies the inequality $\displaystyle 5(y-2)-3(y+4)\geqslant2y-20$.
H: The limit as $x \to \infty$ of $ \frac {\sqrt{x+ \sqrt{ x+\sqrt x}} }{\sqrt{x+1}}$ $$\lim_{x\to\infty} \frac {\sqrt{x+ \sqrt{ x+\sqrt x}} }{\sqrt{x+1}}$$ I managed to find the limit of the above function, which according to my calculations is $1$, but I don't know how to prove my answer is correct. AI: If you want a definition-based proof ($M-\epsilon$), just note that for large $x$: $$ \left|\frac{\sqrt{x+\sqrt{x+\sqrt{x}}}}{\sqrt{x+1}}-1\right|=\\ \left|\frac{x+\sqrt{x}-1}{\sqrt{x+1}\cdot \left(\sqrt{x+\sqrt{x+\sqrt{x}}}+\sqrt{x+1}\right)\cdot\left(\sqrt{x+\sqrt{x}}+1\right)}\right|\leq\frac{2x}{x\sqrt{x}}=\frac{2}{\sqrt{x}}. $$
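A quick numeric check that the ratio tends to $1$ and that the answer's bound $2/\sqrt{x}$ dominates the deviation for large $x$:

```python
import math

def ratio(x):
    # the expression whose limit is claimed to be 1
    return math.sqrt(x + math.sqrt(x + math.sqrt(x))) / math.sqrt(x + 1)

# pairs of (deviation from 1, claimed bound 2/sqrt(x))
checks = [(abs(ratio(x) - 1), 2 / math.sqrt(x)) for x in (1e2, 1e4, 1e6)]
```

The deviations shrink roughly like $1/(2\sqrt{x})$, comfortably inside the bound.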
H: How to find this limit: $\displaystyle \lim_{n\to\infty} \int_0^1 (1+x)^{-n-1}e^{x^2}\ dx$ I wish to find: $$\lim_{n\to\infty} n\int_0^1 (1+x)^{-n-1}e^{x^2}\,dx \quad (n=1,2,\cdots)$$ Perhaps I have to evaluate: $$\int_0^1 (1+x)^{-n-1}e^{x^2}\,dx$$ But I can't; can someone give me some help? AI: We can partially evaluate the integral, by integrating by parts, $$\begin{align} \int_0^1 \frac{n}{(1+x)^{n+1}}e^{x^2}\,dx &= \left[-\frac{e^{x^2}}{(1+x)^n} \right]_0^1 + \int_0^1 \frac{2xe^{x^2}}{(1+x)^n}\,dx\\ &= 1 - \frac{e}{2^n} + \int_0^1 \frac{2xe^{x^2}}{(1+x)^n}\,dx \end{align}$$ The last two terms converge to $0$, so the limit is $1$.
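The claimed limit can be checked numerically; the sketch below uses a simple midpoint rule (the step count is an ad-hoc choice of mine):

```python
import math

def integral(n, steps=20_000):
    # composite midpoint rule for the integral of (1+x)^(-n-1) * e^(x^2) over [0,1]
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += (1 + x) ** (-n - 1) * math.exp(x * x)
    return total * h

# n * integral(n) should approach 1 as n grows
vals = [n * integral(n) for n in (10, 100, 1000)]
```

Each successive value sits closer to $1$, consistent with the integration-by-parts argument.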
H: Spot my error in solving a linear system I almost always get the unit matrix if I try to get to a row reduced echelon form. I probably always make a mistake. Can you spot the error? What illegal operations could a beginner do while trying to solve a linear system? \begin{array}{} 1 & 1 & 3 \\ -4 & -3 & -8 \\ -2 & -1 & -2 \\ 1 & 2 & 7 \end{array} 2. \begin{array}{} 1 & 1 & 3 \\ 0 & 1 & 6 \\ 0 & 1 & 4 \\ 0 & -1 & -4 \end{array} 3. \begin{array}{} 1 & 1 & 3 \\ 0 & 1 & 6 \\ 0 & 1 & 4 \\ 0 & 0 & 0 \end{array} 4. \begin{array}{} 1 & 1 & 3 \\ 0 & 1 & 6 \\ 0 & 0 & 2 \\ 0 & 0 & 0 \end{array} 5. \begin{array}{} 1 & 1 & 3 \\ 0 & 1 & 6 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} 6. \begin{array}{} 1 & 1 & 3 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} 7. \begin{array}{} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} 8. \begin{array}{} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ 0 & 0 & 0 \end{array} AI: In the move from $1$ to $2$, you incorrectly obtained $6$ when you should have obtained $4$: $4 \times R_1 + R_2 \to R_2 = 0\;\;1\;\; 4$, thus giving two identical rows (rows 2 and 3). \begin{pmatrix} 1 & 1 & 3 \\ -4 & -3 & -8 \\ -2 & -1 & -2 \\ 1 & 2 & 7 \end{pmatrix} 2. \begin{pmatrix} 1 & 1 & 3 \\ 0 & 1 & 4 \\ 0 & 1 & 4 \\ 0 & -1 & -4 \end{pmatrix} 3. \begin{pmatrix} 1 & 1 & 3 \\ 0 & 1 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} 4. \begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 4 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}
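The corrected reduction can be verified with exact rational row reduction; the small `rref` helper below is my own sketch, not a library routine:

```python
from fractions import Fraction

A = [[Fraction(v) for v in row]
     for row in [[1, 1, 3], [-4, -3, -8], [-2, -1, -2], [1, 2, 7]]]

def rref(m):
    # in-place reduced row echelon form with exact arithmetic
    rows, cols = len(m), len(m[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        m[r] = [v / m[r][c] for v in m[r]]
        for i in range(rows):
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        r += 1
    return m

R = rref(A)
# the matrix has rank 2, so no 3x3 identity block appears
```

Running this reproduces step 4 of the corrected answer: two pivot rows and two zero rows.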
H: What are the values of a and b such that $a^4 + 4 b^4$ is prime What values of $a$ and $b$ will ensure $a^4 + 4 b^4$ is prime? AI: HINT: $$a^4+4b^4=(a^2)^2+(2b^2)^2=(a^2+2b^2)^2-2\cdot a^2\cdot2b^2=(a^2+2b^2)^2-(2ab)^2$$ $$=(a^2+2b^2-2ab)(a^2+2b^2+2ab)=\{(a-b)^2+b^2\}\{(a+b)^2+b^2\}$$ For primality, one of the two factors must be $1$. Assuming $a,b$ to be integers, $(a-b)^2+b^2=1$ forces either $a-b=\pm1,\ b=0\implies a=\pm1$, or $a-b=0,\ b=\pm1\implies a=b=\pm1$.
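A brute-force search corroborates the factorisation's consequence: for positive integers $a,b$, the only prime value is $5$, attained at $a=b=1$ (where the factor $(a-b)^2+b^2$ equals $1$). A minimal sketch:

```python
def is_prime(n):
    # trial division, fine for this small search
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

hits = [(a, b) for a in range(1, 51) for b in range(1, 51)
        if is_prime(a ** 4 + 4 * b ** 4)]
```

The search over $1 \le a, b \le 50$ finds exactly one pair.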
H: Volume Integrals I encountered this problem in Griffiths' Introduction to Electrodynamics. The first problem under volume integrals (Example 1.8). It reads: Calculate the volume integral of $T = xyz^2$ over the prism in Fig. 1.24. I understand how to perform integrals, but in this particular one I get lost while going through the solution provided. It's as if a step is jumped. The integral performed along $x$ confuses me since it has limits of $0$ and $(1 -y)$. How does $y$ not appear in the answer? It's as if some nicety is performed. I hope someone has come across the question in Griffiths before (and maybe even has the book at hand) AI: How does $y$ not appear in the answer? Because you are integrating $y$ over the range 0 to 1: $$ \int_0^1\left(y\int_0^{1-y}dx\,x\right)dy = \int_0^1\,y\cdot\frac12\left(1-y\right)^2dy=\left.\frac12\left(\frac{y^2}{2}-\frac{2y^3}{3}+\frac{y^4}{4}\right)\right|_0^1=\frac1{24} $$ The reason you integrate $x$ from 0 to $1-y$ in the first place is because the maximum $x$ depends on your $y$ position. If $y=0$, then the maximum $x$ you have is 1; if $y=1$, then the maximum $x$ you have is 0; any $y$ in between has a linear relation with $x$.
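The inner two integrals can be cross-checked with exact rational arithmetic; a minimal sketch (the range $0\le z\le 3$ is an assumption about Fig. 1.24, following Griffiths' stated result of $3/8$ for the full integral):

```python
from fractions import Fraction

# after integrating x over [0, 1-y], the integrand is y*(1-y)^2/2,
# which expands to (y - 2y^2 + y^3)/2; integrate each monomial over [0, 1]
coeffs = {1: Fraction(1, 2), 2: Fraction(-1), 3: Fraction(1, 2)}
xy_part = sum(c / (k + 1) for k, c in coeffs.items())

# assumed z-range 0 <= z <= 3: the z-integral of z^2 contributes 9
total = 9 * xy_part
```

With that assumed $z$-range, the $x$-$y$ part times $9$ recovers the textbook's $3/8$.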
H: Largest palindrome made from the product of two 3-digit numbers I am doing a python problem and I need to find the largest palindrome made from the product of two 3-digit numbers. So, how would I get all possible combinations? I imagine it is multiplying all combinations of 100 to 999 but am a bit stuck on how to do this. I hope this makes sense. def genYieldThreeValues(stop): j = 999 while stop >99 and j > 99: multiples = str(stop * j) front = multiples[:3] # get front three numbers back = multiples[-3:] # get last three numbers stop-=1 j-=1 yield [front,back,multiples] # yield a list with first three, last three and all numbers def highestPalindrome(n): for x in genYieldThreeValues(n): if x[1] ==x[0][::-1]: # compare first three and last three digits reversed return x[2] # if they match return value print(highestPalindrome(999)) AI: So you've started Project Euler? Since you are asked for the largest palindrome from two 3-digit numbers, then why not start two loops and cycle downwards: for i in range(999,100,-1): for j in range(999,100,-1): product = i*j Then you just have to reverse it & compare the product with the reverse. There is an easy way to do this in Python, but part of PE is that you figure it out for yourself :).
H: Solving $\frac{5}{t-3}-2=\frac{30}{t^2-9}$ I need help with solving this equation: $$ \frac{5}{t-3}-2=\frac{30}{t^2-9} $$ I tried to solve, but I always get false result. The result should be $-\frac{1}{2}$ but I always get $-\frac{1}{2}$ and $3$. This is how I did it: $$\begin{align} \frac{5}{t-3}-2&=\frac{30}{t^2-9}\\ \frac{5-2\cdot(t-3)}{t-3}&=\frac{30}{t^2-9}\\ \frac{5-2t+6}{t-3}&=\frac{30}{t^2-9}\\ \frac{-2t+11}{t-3}&=\frac{30}{t^2-9}\\ \frac{-2t+11}{t-3}\cdot\left(t^2-9\right)&=30\\ \frac{(-2t+11)\cdot\left(t^2-9\right)}{t-3}&=30\\ \frac{(-2t+11)\cdot(t-3)\cdot(t+3)}{t-3}&=30\\ (-2t+11)\cdot(t+3)&=30\\ -2t^2-6t+11t+33&=30\\ -2t^2+5t+33&=30\\ -2t^2+5t+33-30&=0\\ -2t^2+5t+3&=0\\ \end{align}$$ Solving $$ -2t²+5t+3=0 $$ I get: $$ -\frac{1}{2}\text{ and }3 $$ Am I doing something wrong? AI: The original equation is not defined for $t = 3$, hence, we need to omit that as a "solution" to the original equation. Important: Always test possible solutions to see if they are defined for, and/or solve, the original equation. If not, we cannot include those as solutions to the original equation.
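One can confirm the conclusion with exact rational arithmetic: $t=-\frac12$ satisfies the equation, while $t=3$ makes both denominators vanish (helper names are mine):

```python
from fractions import Fraction

def lhs(t):
    return Fraction(5) / (t - 3) - 2

def rhs(t):
    return Fraction(30) / (t * t - 9)

t = Fraction(-1, 2)
print(lhs(t) == rhs(t))    # True: both sides agree at t = -1/2
try:
    lhs(Fraction(3))
except ZeroDivisionError:
    print("t = 3 is excluded: division by zero")
```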
H: What direction does a vector with more than two entries point at? Say you are given these two vectors: u = (1, -2, 4) v = (-2, 4, 8) Since there are three entries, how do you know if they point in the opposite/same/different direction? AI: Regardless of the dimension, two vectors are in the same direction if the ratio between all pairs of corresponding components is the same and positive. They are opposite if the ratio is the same and negative. Otherwise they are in different directions. In your case, $\frac {-2}1=-2, \frac 4{-2}=-2,\frac 84=2$ Since they disagree, the vectors are in different directions. How could you change one component of one vector to make them opposite? How could you change two components of one vector to make them the same?
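The ratio test works in any dimension; here is a small sketch (my own helper, assuming the first vector has no zero components):

```python
from fractions import Fraction

def direction(u, v):
    """Classify v relative to u: 'same', 'opposite', or 'different'."""
    ratios = [Fraction(b, a) for a, b in zip(u, v)]   # componentwise ratios v_i / u_i
    if all(r == ratios[0] for r in ratios):
        return 'same' if ratios[0] > 0 else 'opposite'
    return 'different'

print(direction((1, -2, 4), (-2, 4, 8)))  # 'different'
```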
H: A positive real number has a finite binary expansion iff it is of the form $m/2^n$ Prove that a positive real number ($x \in \mathbb{R}$, $x > 0$) has a finite binary expansion if and only if it is of the form $\frac{m}{2^n}$, where $m, n \in \mathbb{N}$. I found this solution: Floating point arithmetic. But I don't understand a few things: 1) What does $\alpha=\sum_{k=d}^\infty e_k 2^{-k}$ mean, why can we use this representation, and why does the sum start at $k = d$ and not at $k = 0$? 2) What is this $e_k \in \{0, 1\}$? 3) Where in the proof is it shown that the expansion is finite? 4) What does $2^n\alpha = 2^n\sum_{k=d}^n e_k 2^{-k}= \sum_{k=d}^n e_k 2^{n-k}\in\mathbb Z$ give us? I added this as a separate question because I can't comment on the one above. Thanks for any answers. AI: The sum $\alpha=\sum_{k=d}^\infty e_k 2^{-k}$ with $e_k\in\{0,1\}$ is the binary expansion of $\alpha$. Starting from $d\in \mathbb Z$ means that $\alpha$ can have a nonzero integer part. Note that $\alpha$ has a finite binary expansion if and only if $2^n\alpha\in\mathbb Z$ for some $n\in \mathbb N$.
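The equivalence can be watched computationally: the long-division digits of $\frac{m}{2^n}$ in base 2 terminate, while e.g. $\frac13$ repeats forever (a quick sketch, helper name mine):

```python
def binary_fraction_digits(num, den, limit=40):
    """Digits after the binary point of num/den, up to `limit` of them."""
    digits = []
    r = num % den
    while r and len(digits) < limit:
        r *= 2
        digits.append(r // den)
        r %= den
    return digits, r == 0    # r == 0 means the expansion terminated

print(binary_fraction_digits(3, 8))   # ([0, 1, 1], True): 3/8 = 0.011 in base 2
print(binary_fraction_digits(1, 3))   # repeating 01 pattern, never terminates
```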
H: showing $\arctan(\frac{2}{3}) = \frac{1}{2} \arctan(\frac{12}{5})$ I'm having problem with showing that: $$\arctan(\frac{2}{3}) = \frac{1}{2} \arctan(\frac{12}{5})$$ I would need some help in the right direction AI: $(3+2i)^2=5+12i$, so $\arctan\frac23+\arctan\frac23=\arctan\frac{12}{5}$. Alternatively, study this diagram:
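The complex-number identity behind the proof is easy to check numerically: the argument of $3+2i$ is $\arctan\frac23$, and squaring a complex number doubles its argument:

```python
import cmath
import math

z = 3 + 2j
print(z * z)                                              # (5+12j)
assert math.isclose(2 * cmath.phase(z), cmath.phase(z * z))
assert math.isclose(math.atan(2 / 3), 0.5 * math.atan(12 / 5))
```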
H: How can I show that sample mean has the smallest variance? Let the population distribution be $N(\mu,1)$. Sample mean: $\bar{X_n}=\frac{\sum_{i=1}^{n} X_i}{n}$ Then $E(\bar{X_n})=\mu$ and $V(\bar{X_n})=\frac{1}{n}$ It is an unbiased estimator, and as $n \rightarrow \infty$ it converges to $\mu$. But I want to show that it is the minimum variance estimator, for example, take another unbiased estimator $Y_n$ and show $V(\bar{X_n}) \le V(Y_n)$. AI: If you wanted to show only that the sample mean has a smaller variance than every other weighted average of the observations, then this would be an exercise in Lagrange multipliers. But if you want to include all unbiased estimators of $\mu$ based on $X_1,\ldots,X_n$ (for example, the sample median is one such estimator, and is not a weighted average of the observations), then this becomes equivalent to the one-to-one nature of the two-sided Laplace transform. Observe that the conditional distribution of $(X_1,\ldots,X_n)$ given $\bar X = (X_1+\cdots+X_n)/n$ does not depend on $\mu$. (I could add the details of how to find the conditional distribution if necessary.) In other words, the sample mean $\bar X$ is a sufficient statistic for $\mu$. Therefore, the Rao–Blackwell theorem tells us that any minimum-variance estimator is to be found only among functions of $\bar X$. Therefore it is enough to show that the only function $g(\bar X)$ of $\bar X$ (where of course, which function $g$ is, is not allowed to depend on $\mu$; i.e. $g(\bar X)$ is actually a statistic) that is an unbiased estimator of $\mu$ is $\bar X$ itself. The density function of $\bar X$ is $$ x\mapsto \text{constant}\cdot \exp\left(\frac{-1}{2}\cdot\left(\frac{x-\mu}{1/\sqrt{n}}\right)^2\right). $$ In order that the function $g(\bar X)$ be an unbiased estimator of $\mu$, we must have $g(\bar X)-\bar X$ be an unbiased estimator of $0$.
Let $h(x) = g(x)-x$; then we must have $$ \int_{-\infty}^\infty (\text{same constant})\cdot h(x) \exp\left(\frac{-1}{2}\cdot\left(\frac{x-\mu}{1/\sqrt{n}}\right)^2\right) \, dx = 0 $$ for all values of $\mu$. Hence $$ \text{same constant}\cdot \exp\left(\frac{-n\mu^2}{2}\right) \cdot \int_{-\infty}^\infty \left(h(x) \exp\left(\frac{-n}{2} x^2\right)\right) \exp\left(nx\mu\right) \, dx = 0 $$ regardless of the value of $\mu$. Thus the two-sided Laplace transform of the function $$ x\mapsto h(x)\exp\left( \frac{-nx^2}{2} \right) $$ is $0$ for all values of $\mu$. Since the two-sided Laplace transform is one-to-one, it can map only one function to the identically zero function.
H: Writing a parametrization of the cissoid by using $\theta$ The cissoid of Diocles is the curve whose equation in terms of polar coordinates $(r,\theta)$ is $$r = \sin\theta \tan\theta, −\frac{\pi}{2}<\theta <\frac{\pi}{2}$$ Write down a parametrization of the cissoid using $\theta$ as a parameter and show that $$\gamma(t)=\left(t^2, \frac{t^3}{\sqrt{1-t^2}}\right)$$ for $-1\lt t\lt 1$ is a parametrization of it. I researched what the cissoid of Diocles is, and I reached the following graph result. I found the question in a differential geometry textbook while studying by myself. Please help me solve the question. Thank you:) AI: You almost have the answer already. Recall the definition of polar coordinates: $$x=r\cos\theta,\qquad y=r\sin\theta.$$ Now you have an expression for $r$ in terms of $\theta$. Replacing it in these formulas we get \begin{align} &x=\sin\theta\tan\theta\cdot\cos\theta=\sin^2\theta,\\ &y=\sin\theta\tan\theta\cdot\sin\theta=\frac{\sin^3\theta}{\cos\theta}. \end{align} This is a parameterization of $x$ and $y$ by $\theta$. Now if instead of $\theta$ we use $t=\sin\theta$ as a new parameter, these formulas will transform into $$x=t^2,\qquad y=\frac{t^3}{\sqrt{1-t^2}}.$$
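A quick numeric check that $\gamma(t)$ lands on the polar curve: converting each point back to $(r,\theta)$ should satisfy $r=\sin\theta\tan\theta$ (helper names are mine):

```python
import math

def gamma(t):
    return t * t, t ** 3 / math.sqrt(1 - t * t)

def on_cissoid(t):
    x, y = gamma(t)
    r = math.hypot(x, y)
    theta = math.atan2(y, x)     # lies in (-pi/2, pi/2) since x = t^2 >= 0
    return math.isclose(r, math.sin(theta) * math.tan(theta))

print(all(on_cissoid(t) for t in [-0.9, -0.5, 0.3, 0.7]))  # True
```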
H: The definition of Borel sigma algebra In the text Probability Essentials by J. Jacod & P. Protter, there is a theorem: The Borel $ \sigma $-algebra of $R$ is generated by intervals of the form $(-\infty,a ]$, where $a \in Q$. As far as I know, the Borel sigma algebra is generated by all open subsets of $R$, which surely contains sets like $(x, y)$ where $x$ is irrational. So my question is: how can intervals with rational endpoints $a$ generate such a set $(x, y)$? AI: $(x,y) = \displaystyle\bigcup_{\substack{a,b\in\mathbb{Q} \\ x<a<b<y}} (a,b)$.
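To connect this with the half-lines in the theorem itself, note that each rational-endpoint interval can in turn be built from sets of the form $(-\infty,a]$ with $a\in\mathbb{Q}$ (a sketch of the remaining step):

```latex
(a,b) \;=\; \Bigl(\,\bigcup_{\substack{c \in \mathbb{Q} \\ c < b}} (-\infty, c]\Bigr) \setminus (-\infty, a]
```

Here the union over rational $c<b$ produces $(-\infty,b)$, and removing $(-\infty,a]$ leaves exactly $(a,b)$.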
H: composition of a strict monic with a section In Abstract and Concrete Categories by Adamek, Herrlich and Strecker, I'm dealing with exercise 7Dd. It says that a strict monomorphism $f:A\to B$ followed by a section $g:B\to C$ gives rise to a strict monomorphism $fg:A\to C$. I try to prove it: Let $f'$ be such that, for any $r,s:C\to\cdot$, we have $fgr=fgs$ implies $f'r=f's$. We have to show that $f'=afg$ for a unique $a$. Let $h$ be such that $gh=1$. Then $fu=fv$ implies $fghu=fghv$, which implies $f'hu=f'hv$. Since $f$ is strict, by assumption, we obtain $f'h=af$ for a unique $a$. I have difficulties deducing $f'=afg$ from this. AI: Let $E$ denote the domain of $f'$. The trick is to factor $f': E \to C$ through $g$ first, using the fact that $g$ is the equalizer of the pair $hg, 1_C: C \to C$. Indeed, since $fg(hg) = fg(1_C)$, we have $f'(hg) = f' 1_C$ by hypothesis. Because of the observation of $g$ being the equalizer, we can therefore write $f' = bg$ for some (unique) $b: E \to B$. Again by hypothesis, we have that $f' = bg$ equalizes every pair $r, s: C \to D$ that is equalized by $fg$, i.e., $$fgr = fgs \Rightarrow bgr = bgs.$$ In particular, using $gh = 1_B$, we now have a sequence of implications $$fr' = fs' \Rightarrow fg(hr') = fg(hs') \Rightarrow bg(hr') = bg(hs') \Rightarrow br' = bs'$$ so that by strictness of $f$, we may write $b = af$ for some (unique) $a$. We conclude $f' = bg = afg$.
H: Limit as n goes to infinity of $(1+x^{n})^{\frac{1}{n}}$ Question: I can't seem to show that the limit of $(1+x^{n})^{\frac{1}{n}}$ as $n\rightarrow\infty$ is $x$, where $x$ is in the interval $[1,2]$. Attempt: Let $y_{n}=(1+x^{n})^{\frac{1}{n}}$. Then $\lim_{n\rightarrow\infty}{y_{n}}=\lim_{n\rightarrow\infty}{(1+x^{n})^{\frac{1}{n}}}$. Taking logarithms we have $\lim_{n\rightarrow\infty}{\ln{y_{n}}}=\lim_{n\rightarrow\infty}{\frac{1}{n}\ln{(1+x^{n})}}$. Here I decided to substitute $v=1/n$, so the limit is now taken as $v$ goes to $0$. Then $\lim_{n\rightarrow\infty}{\ln{y_{n}}}=\lim_{v\rightarrow{0}}{v\ln{(1+x^{\frac{1}{v}})}}$. Now the RHS is equivalent to $\lim_{v\rightarrow{0}}{v}\cdot\lim_{v\rightarrow{0}}{\ln{(1+x^{\frac{1}{v}})}}$. The problem now is that this becomes the indeterminate $0\cdot\infty$. Any help will be appreciated :) AI: Since $1\leq x\leq 2$, we have $x^n\leq 1+x^n\leq 2x^n$, that is, $x\leq (1+x^n)^{\frac{1}{n}}\leq 2^{\frac{1}{n}}x$. Since $2^{\frac{1}{n}}\to 1$, the squeeze theorem gives $\lim_{n\to\infty}(1+x^n)^{\frac{1}{n}}=x$.
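The squeeze $x \le (1+x^n)^{1/n} \le 2^{1/n}x$ is easy to watch numerically (keeping $n$ moderate so $x^n$ stays within float range):

```python
import math

n = 1000
for x in [1.0, 1.3, 1.7, 2.0]:
    val = (1 + x ** n) ** (1 / n)
    # squeeze: x <= val <= 2**(1/n) * x (tiny cushion for float rounding)
    assert x - 1e-9 <= val <= 2 ** (1 / n) * x + 1e-9
    assert math.isclose(val, x, rel_tol=1e-3)
print("squeeze verified for n = 1000")
```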
H: Does $\bigcup _{j\in\mathbb{N}}A_j=A$ I'm reading R. Schilling's Measure, Integrals and Martingales and in a proof he makes the following statement. "Since $A=\bigcup _{j\in\mathbb{N}}A_j$...", is this always true? The context: The objective is to verify the third property of a $\sigma$-algebra, that is $(A_j)_{j\in\mathbb{N}}\subset\mathcal{A}\implies\bigcup _{j\in\mathbb{N}}A_j\in\mathcal{A}$, where $\mathcal{A}$ is a $\sigma$-algebra, for the trace $\sigma$-algebra: Let $E\subset X$ be any set and let $\mathcal{A}$ be some $\sigma$-algebra in $X$. Then $\mathcal{A}_{E}:=\{E\cap A: A\in\mathcal{A}\}$ is a $\sigma$-algebra in $E$. To prove this Schilling starts with: Let $(B_j)_{j\in\mathbb{N}}\subset \mathcal{A}_{E}$. Then there are $(A_j)_{j\in\mathbb{N}}\subset \mathcal{A}$ such that $B_j=E\cap A_j$. Since $A=\bigcup _{j\in\mathbb{N}}A_j$..." There is no statement about the sequence or definition of $A$. Thanks in advance! Alexander AI: Here $A$ is not a previously defined set: Schilling is simply introducing $A$ as a name for the union $\bigcup_{j\in\mathbb{N}}A_j$, so the line is a definition rather than a claim. The closure property itself goes through directly: with $B_j = E\cap A_j$ as above, we have $\bigcup_j B_j = \bigcup_j (E\cap A_j) = E\cap\bigcup_j A_j = E\cap A$, and since $A = \bigcup_j A_j\in\mathcal{A}$, the union $\bigcup_j B_j$ lies in $\mathcal{A}_E$. Hence the trace is closed under countable unions.
H: Another math contest problem: $\int_0^{\frac{\ln^22}4}\,\frac{\arccos\frac{\exp\sqrt x}{\sqrt2}}{1-\exp\sqrt{4\,x}}dx$ Prove: $$ {\Large\int_{0}^{\ln^{2}\left(2\right) \over4}}\, \frac{\arccos\left(\vphantom{\huge A} {\exp\left(\vphantom{\large A}\sqrt{x\,}\right) \over \sqrt{\vphantom{\large A}2\,}}\right)} {1-\exp\left(\sqrt{4x\,}\,\right)} \,{\rm d}x = -\,\frac{\,\,\pi^{3}}{192} $$ I haven't solved it yet. AI: With the substitution $x \mapsto \frac{1}{4}\log^{2}\left(\frac{2}{x^{2}+1}\right)$, it follows that \begin{align*} \int_{0}^{\frac{\log^{2}2}{4}} \frac{\arccos\left( \frac{\exp\sqrt{x}}{\sqrt{2}} \right)}{1 - \exp\sqrt{4x}} \, dx &= \int_{0}^{1} \frac{x \arctan x}{1 - x^{2}} \log\left(\frac{1+x^{2}}{2}\right) \, dx \\ &= -\frac{1}{2} \int_{-1}^{1} \frac{\arctan x}{1 + x} \log\left(\frac{1+x^{2}}{2}\right) \, dx. \end{align*} Now you can refer to this solution. Actually, I obtained this integral representation by applying the following chain of much more human-friendly substitutions: $$ \exp\sqrt{4x} = t, \qquad t = 2\cos^{2}u, \qquad x = \tan u $$
H: Confused by Example in Herstein's "Topics in Algebra" The following comes from I.N. Herstein's "Topics in Algebra", just after defining subgroups. He gives the following example Let $S$ be any set and $A(S)$ be the set of one-to-one mappings of $S$ onto itself, made into a group under the composition of mappings. For any $x_0 \in S$ define $H(x_0) = \{ \phi \in A(S) : x_0\phi = x_0\}$. $H(x_0)$ is a subgroup of $A(S)$. If $x_1 \neq x_0 \in S$, what is $H(x_0) \cap H(x_1)$? To me it seems like $H(x_0)$ is just the trivial group $\{e\}$, since $$ x_0\phi = x_0 = x_0e \implies \phi = e$$ by left cancellation. Am I wrong? If so, can you help me devise a counterexample? If I am right, do you have any idea why Herstein chose such a trivial example? AI: Since $x_0$ is not a group element you cannot cancel it. Remember $\phi\in A(S)$ and $x_0\in S$. The group $H(x_0)$ is called the stabilizer; it is the set of all functions that fix $x_0$. Obviously in general more than one bijection on a set $S$ can fix a given point. For example on the set of reals the three bijections $x\mapsto x$ (identity), $x\mapsto -x$ (negation) and $x\mapsto x^3$ (cubing) all fix $0$, but clearly these three functions are all distinct from each other.
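For a concrete finite case (using permutations of $\{0,1,2\}$ as $A(S)$), one can enumerate a stabilizer and see it is bigger than $\{e\}$; for this particular 3-element set, the intersection of the stabilizers of two distinct points is just the identity:

```python
from itertools import permutations

S = (0, 1, 2)
A_S = list(permutations(S))            # all bijections of S, as tuples p with p[i] = image of i

H0 = [p for p in A_S if p[0] == 0]     # stabilizer of x0 = 0
H1 = [p for p in A_S if p[1] == 1]     # stabilizer of x1 = 1
print(H0)                              # [(0, 1, 2), (0, 2, 1)] -- not just the identity
print([p for p in H0 if p in H1])      # [(0, 1, 2)] -- only the identity fixes both points here
```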
H: Simplify the boolean expression to two literals Expression: $$[AB'(C+BD) + A'B']C$$ I start off using the distributive law, and then nowhere to go. I need help. AI: Indeed, we do need the distributive law multiple times. It's used in the first line, again in the second line, and once more in the fourth line. $$\begin{align} [AB'(C+BD) + A'B']C & = [AB'C + A\underbrace{B'B}_{=\; 0}D+ A'B']C\\ \\ & = AB'\underbrace{CC}_{=\;C} + 0 + A'B'C \\ \\ & = AB'C + A'B'C \\ \\ & = [\underbrace{A + A'}_{=\;1}](B'C) \\ \\ & = B'C\end{align}$$
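Exhaustive truth-table checking confirms the simplification (a brute-force sketch; function names are mine):

```python
from itertools import product

def original(A, B, C, D):
    # [AB'(C + BD) + A'B']C
    return ((A and (not B) and (C or (B and D))) or ((not A) and (not B))) and C

def simplified(A, B, C, D):
    # B'C
    return (not B) and C

for A, B, C, D in product([False, True], repeat=4):
    assert original(A, B, C, D) == simplified(A, B, C, D)
print("equivalent on all 16 assignments")
```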
H: Integration of $sin^2(x)$ using trig substitution Hi I'm just learning integration, so I'm sorry if this is really basic. I know this is true: $\displaystyle \int sin^2(x)d x = \frac{1}{2} (x - sin(x)cos(x)) + C$ Because you can use Power reducing substitution http://www.youtube.com/watch?v=VRNuPqA_Vo8 But my question is why can't you use trig substitution like below? $$\int sin^2x$$ $$= \int \frac{tan^2x}{sec^2x} $$ $$Let\;a = tan x$$ $$As sin^2(x)+cos^2(x)=1...$$ $$= \int \frac{a^2}{1+a^2}$$ $$= 1-tan^-1(a)$$ $$= 1-x$$ Please let me know where I'm assuming something incorrect. AI: Putting $\tan x=u,$ $\displaystyle x=\arctan u\implies dx=\frac{du}{1+u^2}$ and $\sec^2x=1+\tan^2x=1+u^2$ $$\implies\int\sin^2xdx=\int\frac{\tan^2x}{\sec^2x}dx=\int\frac{u^2}{(1+u^2)^2}du$$
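Independently of which substitution you use, the closed form can be checked against a numeric integral (midpoint rule; the helper is my own):

```python
import math

def midpoint_integral(f, a, b, n=100000):
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

w = 1.2
numeric = midpoint_integral(lambda x: math.sin(x) ** 2, 0.0, w)
closed = 0.5 * (w - math.sin(w) * math.cos(w))
print(abs(numeric - closed) < 1e-8)   # True
```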
H: $\{A_\alpha\}$ closed, locally finite, $f\restriction_{A_\alpha}$ continuous; show $f$ continuous Let $\{ A_\alpha \}$ be a collection of subsets of $X$; let $X=\bigcup_\alpha A_\alpha$. Let $f:X\rightarrow Y$; suppose that $f\restriction_{A_\alpha}$, is continuous for each $\alpha$. Show that if $\{ A_\alpha \}$ is locally finite and each $A_\alpha$ is closed then $f$ is continuous. I have seen a solution to this problem, my question was concerning the method I tried to use to prove this. I was wondering if I could complete it the way I was going: Say each $\alpha \in J$ is the index set. If $J$ is finite, then the result follows from a previous part to this question (If $\{A_\alpha\}$ is finite and each $A_\alpha$ is closed, then $f$ is continuous.). Suppose $J$ were infinite. It follows as $\{A_\alpha\}$ is locally finite, $\bigcap A_\alpha = \varnothing.$ If we let $U_\alpha=X-A_\alpha$, then each $U_\alpha$ is open and it follows that $\{ U_\alpha \}$ is a collection of sets whose union is $X$. We have a theorem in the text which says that $f:X\rightarrow Y$ is continuous if $X$ can be written as the union of open sets $V_\alpha$ such that $f\restriction_{V_\alpha}$ is continuous for each $\alpha$. My question is how could I show that in my case, that each $f\restriction_{U_\alpha}$ is continuous which would mean that $f$ is continuous by the theorem in the text. I've tried picking open sets in $Y$ and trying to look at their pre-image under some $f\restriction_{U_\alpha}$ and show that that must be open in $U_\alpha$. I've not had much luck here as I couldn't find out how to write the pre-image set as the union of open sets. 
The problem I was having is that I wanted to relate $U_\alpha=X-A_\alpha$ to $A_\alpha$ somehow, but they are disjoint sets; if I picked an open set in $Y$, its pre-image under $f\restriction_{A_\alpha}$ is open in $A_\alpha$, and when I tried to translate this to the subspace $U_\alpha$ of $X$, all I could produce was a closed subset of $U_\alpha$, which isn't helpful, as I need something open. Any hints? AI: The problem is that if you say that $\bigcap A_α$ is empty, then you are only using that the collection is point finite, i.e. each point is only in a finite number of sets. However, the collection is locally finite, and this is essential for the proof. If $f$ is continuous on each $A_α$ for a point finite cover, it need not be continuous in $X$. For an example, take $X=I$ and $A_n=\left[\frac1{n+2}, \frac1n\right]$ for natural $n>0$ and $\ A_\infty=\{0\}$. This gives a point finite closed cover $(A_n)_{n\in\Bbb N^*}$. If you set $f|_{A_n}=1$ and $f(0)=0$, then this map won't be continuous. To prove your statement for locally finite closed covers, try to show that a set $B\subseteq X$ is closed in $X$ if $B\cap A_α$ is closed in $A_α$ for each $α$.
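The counterexample in the answer can be probed numerically: the pointwise limit of $f$ along a sequence tending to $0$ disagrees with $f(0)$, which is exactly the failure of continuity (a small sketch, encoding $f$ by hand):

```python
def f(x):
    # f = 1 on each A_n (i.e. on (0, 1]) and f(0) = 0
    return 0.0 if x == 0 else 1.0

sequence = [1 / n for n in range(1, 50)]   # converges to 0
values = [f(x) for x in sequence]
print(set(values), f(0))   # {1.0} 0.0 -- the limit along the sequence is 1, not f(0)
```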
H: If $X$ invertible is $XX^*$ PD? I am wondering if $X$ invertible implies $XX^*$ is positive definite (PD)? I know $XX^*$ is PSD but I am looking for PD. Thanks. AI: First, for any matrix $X$, we have that $XX^*$ is positive semi-definite, since for any vector $a$, we have $$a^*XX^*a = \Vert X^*a\Vert_2^2 \geq 0$$ All we need to show now is that if $a \neq 0$, then $X^*a \neq 0$, so that we have $\Vert X^*a\Vert_2^2 > 0$. If $\Vert X^*a\Vert_2^2 = 0$, this means that $\Vert X^*a\Vert_2 = 0$, which in turn means that $X^*a = 0$. However, since we are given that $X$ is invertible (and hence $X^*$ is invertible too), the only $a$ such that $X^*a = 0$ is the vector $a=0$. Hence, for any non-zero vector $a$, we have $\Vert X^*a\Vert_2^2 > 0$. Hence, $XX^*$ is positive definite.
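A numeric illustration in the real case (so $X^* = X^\top$): the quadratic form $a^\top XX^\top a = \lVert X^\top a\rVert_2^2$ is strictly positive for every nonzero $a$ when $X$ is invertible. The matrices and helper names are mine:

```python
import random

def gram(X):
    """X X^T for a real square matrix given as a list of rows."""
    n = len(X)
    return [[sum(X[i][k] * X[j][k] for k in range(n)) for j in range(n)] for i in range(n)]

def quad_form(M, a):
    n = len(M)
    return sum(a[i] * M[i][j] * a[j] for i in range(n) for j in range(n))

X = [[2.0, 1.0], [0.0, 3.0]]          # invertible: det = 6
M = gram(X)
for _ in range(1000):
    a = [random.uniform(-1, 1), random.uniform(-1, 1)]
    if max(abs(c) for c in a) > 1e-9:
        assert quad_form(M, a) > 0    # strictly positive on nonzero vectors
print("quadratic form positive on all sampled nonzero vectors")
```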
H: Finding an equation for a growth formula Given a tree in which every node has three children, I want to find the formula that predicts the total number of nodes for a given tree height. I fitted the data into Numbers with an exponential function and it gave me a numeric formula. But I'd like to know how to derive a non-numeric (calculus type) formula for this. What I did was find the growth formula y(t) = a * e^(kt) where a = initial value y = value after time/steps k = growth factor t = time (or step) But fitting my data into this formula doesn't give right predictions. For example: a = 1 t = 4 y(t) = 40 Solving for k in step 4 (t=4): (1) 40 = 1 * e^(4k) (2) ln(40) = 4k (3) k = ln(40) / 4 = 0.9222 Predicting number of all nodes in step 5 (t=5): (4) y(5) = e^(0.9222*5) = 12.53 The answer is wrong because the tree has at 5th step already 121 nodes. What am I doing wrong? What is the correct way to calculate this? Thanks. Pom. AI: The total number of nodes after $k$ steps is the geometric sum $$1+3+3^2+\cdots+3^{k-1}=\frac{3^k-1}{2},$$ where $k$ is the number of steps; this matches your data ($k=4$ gives $40$, $k=5$ gives $121$). A single-term exponential $a e^{kt}$ cannot reproduce this sum exactly (note the $-1$ and the factor $\frac12$), which is why the fitted formula mispredicts.
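In code, the closed form is just a geometric sum (the function name is mine):

```python
def total_nodes(k):
    """Total nodes in a full ternary tree after k steps: 1 + 3 + ... + 3**(k-1)."""
    return (3 ** k - 1) // 2

print([total_nodes(k) for k in range(1, 6)])   # [1, 4, 13, 40, 121]
```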
H: what is the significance of the annulus in laurent series? I am looking at ways which you can write a function as a series. I am aware that one can use the Taylor series. I am currently trying to understand the Laurent series. I understand there are cases where the Taylor series will not work as all the terms in it cannot have a negative degree. The Laurent series is there to aid this kind of problem. Upon examining texts, including the wikipedia page I see that for a function $f:C\subset \mathbb{C} \rightarrow \mathbb{C}$, the Laurent series at a point $z_0 \in C$ is given by \begin{equation} f(z) = \sum_{n=-\infty}^{\infty}a_n(z-z_0)^n \end{equation} where the $a_n$ is a somewhat a generalized version of the Cauchy integral formula. I have the following questions with regards to this: Why do you need to have an annulus around $z_0$ When calculating the $a_n$'s, can you pick any closed curves surrounding $z_0$ within the annulus and integrate along their boundaries? AI: If $f$ is not analytic at $z_0$ (and the singularity is isolated and not removable), then the Laurent series at $z_0$ will converge in an annulus around $z_0$. Why an annulus? What follows assumes you accept why Taylor series gives you a radius/disk of convergence. If you ignore negative powers, then it is like a regular Taylor series and you already know Taylor series have a radius of convergence (outer circle) $|z-z_0| < R$. Now if you focus on the convergence of the negative powers, think of it as a Taylor series of a different change of variable. $\sum_{n > 0} a_{-n} (z-z_0)^{-n}$ becomes $\sum_{n>0} a_{-n} y^n$ for $y = (z-z_0)^{-1}$. Note that there is a radius of convergence for this series, $|y| < r$, which translates to convergence when $|z-z_0| > r^{-1}$. Conclude convergence in annulus $r^{-1} < |z-z_0| < R$. This is the largest region for which the Laurent series converges (since this is the case for the individual Taylor series). As for the second part, the answer is yes. 
In fact the curve is even allowed to leave the annulus, as long as you avoid the other singularities of the function; this is a consequence of Cauchy's theorem, which tells you that you can deform the contour while preserving the integral so long as you don't cross any singularities.
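Both points can be checked numerically: the generalized Cauchy coefficients computed on any circle inside the annulus recover the Laurent coefficients. For $f(z)=\frac{1}{z(1-z)}$ on $0<|z|<1$, the expansion is $z^{-1}+1+z+z^2+\cdots$, so every $a_n$ with $n\ge-1$ equals $1$ (helper names are mine):

```python
import cmath
import math

def laurent_coeff(f, n, radius=0.5, samples=4096):
    """a_n = (1/2*pi*i) * integral of f(z) / z**(n+1) over the circle |z| = radius."""
    total = 0j
    for k in range(samples):
        z = radius * cmath.exp(2j * math.pi * k / samples)
        # with dz = i*z*dtheta, the i*z cancels the 1/i and one power of z
        total += f(z) * z ** (-n)
    return total / samples

f = lambda z: 1 / (z * (1 - z))       # isolated singularities at 0 and 1
coeffs = [laurent_coeff(f, n) for n in (-1, 0, 1, 2)]
print([abs(c - 1) < 1e-9 for c in coeffs])   # [True, True, True, True]
```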
H: Triple chain rule problem $$ f(x) = \log _4 \log _2 \tan x $$ $$ f'(x) = \frac{1}{log _2 (\tan x) \ln (4)} \cdot \frac{1}{\cos ^2x \tan x \ln (2)}= $$ Is there anything else I can do with this besides changing $\tan x$ to $\frac{\sin x}{\cos x}$ and getting rid of cos's square? AI: As $\displaystyle\frac{d(\ln y)}{dy}=\frac1y$ and $\displaystyle\log_ab=\frac{\log_cb}{\log_ca}$ $$\log_4(\log_2\tan x)=\frac{\ln (\log_2\tan x)}{\ln 4}$$ Now, $$\frac{d\{\ln (\log_2\tan x)\}}{dx}=\frac{d\{\ln (\log_2\tan x)\}}{d(\log_2\tan x)}\frac{d(\log_2\tan x)}{dx}=\frac1{\log_2\tan x}\frac{d(\log_2\tan x)}{dx}$$ Again, $\displaystyle\log_2\tan x=\frac{\ln\tan x}{\ln2}$ Now, $$\frac{d(\ln\tan x)}{dx}=\frac{d(\ln\tan x)}{d(\tan x)}\frac{d(\tan x)}{dx}=\frac1{\tan x}\sec^2x$$
H: Cross product intersection sets Here's what I'm trying to prove. Let $A, B, C$ be non-empty sets. Prove that $A \times (B \cap C) \subseteq (A \times B) \cap (A \times C)$ First I would need to prove $A \times (B \cap C) = (A \times B) \cap (A \times C)$ If I used the distributivity rule then I get: $(A \times B) \cap (A \times C) = (A \times B) \cap (A \times C)$ I feel like using distributivity here is wrong because both sides equal each other, which becomes a proper subset. Am I wrong? Then I would get $(A \times B) \cap (A \times C) \subset (A \times B) \cap (A \times C)$ AI: Edit: From the comment, it seems you are being asked to show equality (distributivity of the product over intersection): $$A \times (B \cap C) = (A \times B) \cap (A \times C)$$ You can do this by "element chasing", i.e., show that both of the following hold: $$(x,y) \in \Big[A \times (B \cap C)\Big] \implies (x, y) \in\Big[ (A \times B) \cap (A \times C)\Big] $$ $$ (x,y) \in\Big[ (A \times B) \cap (A \times C)\Big] \implies (x, y) \in \Big[A \times (B \cap C)\Big]$$ and you're done. But note that you cannot use what you are asked to prove (distributivity of the cross product). Use the definitions of the cross-product and the definition of set intersection to prove the above (and also distributivity over conjunction/set intersection). You can also start with unpacking the definition of $A \times (B\cap C)$ using set-builder notation, and through step by step equivalency, arrive at the set defining $(A \times B) \cap (A \times C)$, showing that we do in fact have that equality holds. For example: $$\begin{align} A \times (B\cap C) & = \{(x, y)\mid x \in A \land y \in (B \cap C) \}\\ \\ & = \{(x, y)\mid x \in A \land (y \in B \land y \in C)\} \\ \\ & = \qquad \vdots \\ \\ & = \{(x, y)\mid (x \in A \land y \in B) \land (x \in A \land y \in C)\} \\ \\ &= (A \times B) \cap (A \times C)\end{align}$$
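Python's set type makes the identity easy to spot-check on small examples (a brute-force sanity check, not a proof):

```python
from itertools import combinations

def cross(X, Y):
    return {(x, y) for x in X for y in Y}

A, B, C = {1, 2}, {2, 3}, {3, 4}
print(cross(A, B & C) == cross(A, B) & cross(A, C))   # True

# exhaustive check over all subsets of a 3-element universe
U = (0, 1, 2)
subsets = [set(s) for r in range(4) for s in combinations(U, r)]
assert all(cross(A, B & C) == cross(A, B) & cross(A, C)
           for A in subsets for B in subsets for C in subsets)
```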
H: Does the 13th sphere fit in? I've been searching for a formula, but couldn't find any. Here is the question: What is the highest number of equal nonoverlapping spheres that can touch a unit sphere? The distance between the points at which the outer spheres touch the inner sphere won't be any smaller than 1 unit, with this distance measured along the surface of the inner sphere. AI: One radian is about $57.29577951^\circ.$ The maximum mutual distance of 13 points on the unit sphere is about $57.1367^\circ,$ which is very slightly smaller. As a result, it is not possible to place 13 points on the unit sphere such that the angular distance between any two of the points is at least one radian. See the first three pages in Musin and Tarasov. This is generally called the Tammes problem: http://en.wikipedia.org/wiki/Tammes_problem They refer to chapter 6, section 4 of L. Fejes Toth, Lagerungen in der Ebene, auf der Kugel und in Raum, Springer-Verlag, 1953; Russian translation, Moscow, 1958.
H: The Banach-Steinhaus theorem for seminormed spaces Assume that we have a vector space $X$ over reals with a countable sequence of seminorms $p_n$ on $X$ such that: $$ p_n(x)\leq p_{n+1}(x) \textrm{ for } n\in \mathbb N, x\in X, $$ $$ \textrm{ for } x\in X\setminus \{0\} \textrm{ there is } n\in \mathbb N \textrm{ such that } p_n(x)\neq 0. $$ Then $X$ is a metric space with the metric $$ d(x,y)=\sum_{n=1}^\infty \frac{1}{2^n} \frac{p_n(x-y)}{1+p_n(x-y)}, \ x,y\in X. $$ Such a space $X$ is called the countably seminormed space. Let's consider in a complete countably seminormed space $X$ a sequence $T_k:X\rightarrow \mathbb R$ of linear continuous mappings such that for each $x\in X$ the sequence of numbers $(T_k(x))_{k \in \mathbb N}$ is bounded. Then by the Banach-Steinhaus theorem for the Frechet spaces the family $(T_k)_{k\in \mathbb N}$ is equi-continuous. Moreover, by properties of linear functionals on countably seminormed spaces, for each $k\in \mathbb N$ there exists $N_k$ such that $T_k$ is continuous with respect to the seminorm $p_{N_k}$. I.e. $|T_k(x)|\leq M p_{N_k}(x)$ for all $x\in X$, where $M>0$ is some constant depending on $k$. Does there then exist an $N\in \mathbb N$ and $M>0$, not depending on $k\in \mathbb N$, such that $$ |T_k(x)|\leq M p_N(x) \textrm{ for } x\in X, k \in \mathbb N ? $$ AI: Yes, that is what equicontinuity means. That the family is equicontinuous means there is one neighbourhood $V$ of $0$ such that $$T_k(V) \subset \mathbb{D}$$ for all $k$ (where $\mathbb{D}$ is the open unit disk/interval). Since the family of seminorms is increasing, there is one $N$ with $V \supset \{ x : p_N(x) < r\}$ for some $r > 0$. But then $$\lvert T_k(x)\rvert \leqslant \frac1r p_N(x)$$ for all $x$ and $k$.
H: Expectation of concave function I want to prove that the function $\phi:\mathbb R\rightarrow\mathbb R: \phi(\lambda)=\mathbb{E}U(w+\lambda X)$, which is everywhere finite-valued, is concave. $U:\mathbb R\rightarrow\mathbb R$ is concave and the random variable $X$ has zero mean. My first idea was to use a Taylor expansion to show that the second derivative is at most $0$. When I use the Taylor expansion I get approximately $U(w)+U'(w)\mathbb{E}(\lambda X)+\frac12 U''(w)\mathbb{E}(\lambda^2X^2)=U(w)+\frac{\lambda^2}{2}U''(w)\mathbb E{X^2}$ and I know that $U''(w)\le0$ because $U$ is concave, but how can I continue? AI: Hint: $U$ is concave iff $$ U(\eta w_1+(1-\eta) w_2) \ge \eta U(w_1)+(1-\eta)U(w_2),\;\;\forall w_1, w_2 \mbox{ and } \eta \in [0,1] $$ Now $$ EU(\eta w_1+(1-\eta) w_2 + \lambda X)=EU\big(\eta (w_1+ \lambda X)+(1-\eta) (w_2 + \lambda X)\big)\ge\ldots $$
H: 90,8 by 30,1 egyptian division I am having trouble using the Egyptian method of division. So far I have 1 30,1 2 60,2 90,8 - 90,3 -> need 5 additional units. I know at this point I need to take parts of 30,1 but I am not sure which direction I should go from here. I tried the 2/3 part. 1 30,1 2/3 20, 2/3 1/3, 1/9 13, 2/3, 1/9 I am not sure where to go from here. AI: I have figured this out on my own. All lines that are used in the final answer are denoted by a "\" placed in front of that line. Continuing after 2 \ 1 30,1 \ 2 60,2 As I stated earlier we need 5 additional units to reach 90,8 but we can decrease this by taking other parts \ 1/8 3, 1/2, 1/4, 1/8 \ 1/31 1 From here we only need an 1/8 more to reach 90,8. However the Egyptians never repeated unit fractions in their notation (1/8 was used when taking the 1/8 part). So we must change 1/8 into other unit fractions. 1/8 = .125 = .1, .025 = 1/10, 1/40 1 30,1 \ 10 300,10 \ 40 1000, 200, 40 We now take the numbers on the right and reciprocate them. These form the unit fractions that sum to 1/8. Our final answer is 3, 1/8, 1/31, 1/310, 1/1240. Checking this with your simplest method for division you will find that this equates to 98/31 or 3.161290323....
H: Prove by induction that $n < 2 ^n $ where $n \in \mathbb{N}$ Example question in a textbook that I don't understand. Proof works for n = 1 Setting for k makes $k < 2^k $ Setting for k + 1 makes $k+1 < 2^{k+1} $. Here, I would be stuck, the book takes the equation to: $k+1<2^k +1\leq 2^k+2^k = 2 \cdot 2^k=2^{k+1}$. NB: $<2^k +1$ is not a typo. There doesn't seem to be a good explanation for this in the book (although it does mention adding 1 to both sides of the equation), could I have some advice on how the method used works? AI: Claim: $n < 2^n$ for all $n \in \mathbb{N}$. Proof by induction. For $n = 1$ it is true: $1 < 2^1 = 2$. Suppose it is true for $n = k$: $$ k < 2^k \quad (*)$$ Multiplying $(*)$ by $2$, we get $$ 2k < 2^{k+1} \quad (A) $$ and we already have $k+1 \le 2k$ for all $k \ge 1$ (this is just $1 \le k$). Therefore $k+1 \le 2k < 2^{k+1}$, so $k+1 < 2^{k+1}$, and by induction the claim holds for all $n$. As for the book's chain: it adds $1$ to both sides of $(*)$ to get $k+1 < 2^k+1$, and then, since $1 \leq 2^k$, continues with $2^k + 1 \leq 2^k + 2^k = 2\cdot 2^k = 2^{k+1}$.
H: Convergence of a series We are considering the series of general term: $(1+\frac{1}{n})^n$ I need to find if this series converges or diverges. 1) The Alembert rule can't be applied since we find the limit equal to 1. 2) I therefore tried rewriting: $(1+\frac{1}{n})^n= \exp(n\ln(1+\frac{1}{n}))\\= \exp(n(\frac{1}{n}-\frac{1}{2n^2}+o\big(\frac{1}{n^2}\big)\big)\\=\exp(1-\frac{1}{2n}+ O\big(\frac{1}{n^2}\big)\big)$ How can I continue from here ? AI: Hint: $$\left(1+\frac1n\right)^n\xrightarrow[n\to\infty]{}e\neq 0$$
H: Find functions such that under the Cartesian coordinate system $F(x, y) = f(x) g(y)$ but under the polar coordinate system $F(x, y) = h(r)$. Find all non-constant functions $F(x, y)\in C^2(\mathbb{R}^2)$ such that under the Cartesian coordinate system $F(x, y) = f(x) g(y)$ but under the polar coordinate system $F(x, y) = h(r)$. My thoughts: setting x=0 and y=0 we get $F(x,0)=F(0,x) = h(|x|)$. So $f(0)g(x)=f(x)g(0)$. WLOG, $g(0)=1$. Now taking partial derivative w.r.t $\theta$ in $F(r\cos\theta,r\sin\theta)=h(r)$, we get $\frac{g'(x)}{xg(x)}=\frac{f'(x)}{xf(x)}=\frac{g'(y)}{yg(y)}$ where $x=r\cos\theta,y=r\sin\theta$. So this equality is true for any $(x,y)$ with $x^2+y^2=r^2$. But if we suppose $\frac{g'(x)}{xg(x)}=\frac{g'(y)}{yg(y)}$ is true for any $x,y\in[-r,r]$, then we can set it equal to some constant $c_0$ and solve the differential equation. And we should get $F(x,y)=k\exp(c(x^2+y^2))$. But the challenge is to show this form of solution is unique. AI: It looks as if you have already solved the problem, but didn't realise this. Your argument shows that there exists an absolute (i.e. dependent only on $f,g,h$) constant $c_0$ such that $f'(x)/x f(x) = c_0$. (You don't really have to worry about the constraint $x,y \in [-r,r]$ - just argue that changing $r$ does not change $c_0$, and then take any $r$ large enough.) It follows that $f(x) = k_1 e^{c x^2}$ with $c = c_0/2$, and $g(y) = k_2 e^{c y^2}$, and finally $F(x,y) = k e^{c (x^2 + y^2)} = k e^{c r^2}$. The above argument shows that: if $F$ satisfies the assumptions, then it is of the form $F(x,y) = k e^{c (x^2 + y^2)}$ - no other $F$ are under consideration. Now, of course, any function of the above form satisfies the assumptions - so, you have indeed classified them all.
H: Let $\mathcal{U}(4)$ be a subspace of $\mathcal{P}(4)$ consisting of all polynomials that are even functions Let $\mathcal{U}(4)$ be a subspace of $\mathcal{P}(4)$ consisting of polynomials that are even functions. Show that there exists a subspace $W \subset \mathcal{P}(4)$ such that $$\mathcal{P}(4) = \mathcal{U}(4) \oplus W.$$ How do I approach this problem? I'm not certain where I should start. I know that a function $f:\mathbb{R} \mapsto \mathbb{R}$ is even if $f(x) = f(-x)$ for all $x$. AI: Hint: $$\begin{align} f(x) = \frac{f(x)+f(-x)}{2} + \frac{f(x)-f(-x)}{2}. \end{align} $$
H: Connectedness of a union of sets. Assume that $E$ and $F$ are connected subsets of the metric space $X$, such that $\bar{E} \cap F \neq \emptyset$. Prove that $E \cup F$ is connected as well. When I draw a picture the statement appears quite logical to me, but I don't know how I should prove it. Here are my attempts. First of all, it's easy to show $\bar{E} \cup F$ is connected. Maybe I could somehow prove that this implies that $E \cup F$ is connected, but I failed to do so. I know that we can make a sequence that converges to a point in $F$. Can you please give me a hint to go on? AI: HINT: Suppose that $U$ and $V$ are open sets in $X$ such that $E\cup F\subseteq U\cup V$, $U\cap V\cap(E\cup F)=\varnothing$, and $$U\cap(E\cup F)\ne\varnothing\ne V\cap(E\cup F)\;.$$ Without loss of generality we may assume that $U\cap E\ne\varnothing$. Show that $E\subseteq U$. Show that if $U\cap F\ne\varnothing$, then $F\subseteq U$, which would contradict $V\cap(E\cup F)\ne\varnothing$; thus $U\cap F=\varnothing$, and $F\subseteq V$. Finally, pick a point of $\bar E\cap F$: it lies in the open set $V$, so $V$ must meet $E\subseteq U$, contradicting $U\cap V\cap(E\cup F)=\varnothing$.
H: If a subring of a ring R has identity, does R also have the identity? I know it is not true that if a subring of a ring $R$ is commutative, then $R$ is also commutative. (For example, the set consisting of the matrices whose entries are all zero except the (1,1)-entry is a subring of the ring of 2x2 real matrices.) I also considered the case of a ring containing a subring with identity, but I had no ideas. Maybe it fails here, too. Can anyone give me some examples supporting my guess? AI: Let $\mathbb R^{(\left\{1,2,3,...\right\})}$ be the subring of $\mathbb R^{\left\{1,2,3,...\right\}}$ consisting of all $\left(a_1,a_2,a_3,...\right)$ such that all but finitely many $i\in\left\{1,2,3,...\right\}$ satisfy $a_i = 0$. Then, $\mathbb R^{(\left\{1,2,3,...\right\})}$ is a (strictly) nonunital ring, but its subring formed by all $\left(a_1,a_2,a_3,...\right)$ such that all $i \geq 2$ satisfy $a_i = 0$ is a unital ring. (Its unit $(1,0,0,\ldots)$ is an identity only inside the subring; it fails to be an identity for the larger ring, since multiplying by it kills every entry past the first.)
H: Additive form of a spectral decomposition? I am in a class on the mathematical foundations of quantum mechanics. My professor has been talking about spectral decompositions, but they are of a different form than the ones I am used to. I have mostly used the form $A=S\Lambda S^{-1}$ where $S$ is a matrix of eigenvectors, and $\Lambda$ is a matrix of eigenvalues. However, he has phrased the entire discussion using an "additive form" $A=\displaystyle \sum_{Ax=\lambda x} \lambda P_\lambda$ where the $P_\lambda$ are projection matrices. I was wondering if these really represent the same concept, and if so, if there is a convenient translation between the two. AI: These two are the same. Let $E_1, \cdots, E_n$ be the eigensubspaces and $P_i$ be the projection onto $E_i$. Let $x$ be a vector; then $Ax = A(x_1+ \cdots + x_n)$ for some $x_i \in E_i$. Hence $Ax = Ax_1 + \cdots +Ax_n = \sum \lambda_i x_i= \sum \lambda_i P_i (x)$. On the other hand, if you choose a unitary basis $\{v_1, \cdots, v_n\}$ such that $v_i \in E_i$, then the matrix $S$ with column vectors $v_i$ satisfies $A = S\Lambda S^*$. It is more convenient to use projections, as one needs to deal with infinite dimensional spaces and one cannot use matrices to describe linear maps on such spaces. In QM, for example, the state space is infinite dimensional and the Schrödinger operator is an operator on the state space.
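A small concrete illustration of the additive form (a sketch with an example matrix of my own choosing, not from the original discussion): $A = \begin{pmatrix}2&1\\1&2\end{pmatrix}$ has eigenvalues $1$ and $3$ with orthonormal eigenvectors $(1,-1)/\sqrt2$ and $(1,1)/\sqrt2$, and $A = 1\cdot P_1 + 3\cdot P_3$ can be checked directly:

```python
# Projections onto the two eigenlines (P = v v^T for a unit eigenvector v):
P1 = [[0.5, -0.5], [-0.5, 0.5]]   # projection onto span{(1, -1)}
P3 = [[0.5, 0.5], [0.5, 0.5]]     # projection onto span{(1, 1)}

# Additive spectral form: A = 1*P1 + 3*P3
A = [[1 * P1[i][j] + 3 * P3[i][j] for j in range(2)] for i in range(2)]
print(A)  # [[2.0, 1.0], [1.0, 2.0]]
```

The same matrices reassemble into the multiplicative form $S\Lambda S^*$ with $S$ having the two unit eigenvectors as columns.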
H: Show that if $S\subset \mathbb{R}$, $S\neq \emptyset$, and $S$ is bounded, then $\inf S \leqslant \sup S$ I proved this for a finite and/or countably infinite subset of $\mathbb{R}$, but then it quickly dawned upon me that $S$ doesn't have to be countably infinite or finite. Any hints? thanks! AI: Let $s\in S$. Then $(\inf S) \le s \le (\sup S)$.
H: How to evaluate indeterminate form of a limit I can't evaluate an indeterminate form of a limit like this: $$\lim \limits_{x\to \infty} \left (\frac {x-1}{x+4}\right)^{3x+2}$$ I tried to solve this problem by multiplying the fraction's top and bottom by the conjugate of the denominator. I did it many times without any success, and I don't even know whether this approach is right or wrong. How can this limit be solved? AI: It will be easier to rewrite this limit in the form: $$ \lim_{x} \left( \frac{1-1/x}{1+4/x} \right)^{3x+2}. $$ Now, it will suffice to learn how to compute limits of the form: $$ \lim_{x} (1+a/x)^x.$$ This is not very difficult. You can show, for example, that $\ln(1+a/x) = a/x + O(1/x^2)$, so $1+a/x = e^{a/x + O(1/x^2)}$, and finally $(1+a/x)^x = e^{a+O(1/x)}$. Therefore, $\lim_{x} (1+a/x)^x = e^a$. Applying this in the limit you want to compute (the leftover $+2$ in the exponent only contributes a factor tending to $1$), we get: $$\lim_{x} \left( \frac{1-1/x}{1+4/x} \right)^{3x+2} = \left(\frac{e^{-1}}{e^4}\right)^3= 1/e^{15}.$$ Note that the "big O" notation was used. Vaguely speaking, $O(1/x^2)$ stands for a function that tends to $0$ at least as fast as $1/x^2$ does for $x \to \infty$.
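As a numerical sanity check (my addition, not part of the answer): evaluating the expression at a large $x$ should land very close to $e^{-15} \approx 3.06\times 10^{-7}$.

```python
import math

def f(x):
    return ((x - 1) / (x + 4)) ** (3 * x + 2)

# For x = 10^6 the value agrees with e^{-15} to about 5 significant digits.
print(f(1_000_000.0), math.exp(-15))
```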
H: Linear Algebra Problem about Direct Sums, Kernel and Image. Let $V$ be a vector space, $W$ a subspace of $V$ and $T : V \to V$ a linear operator. Suppose $V = \text{Im } T \oplus W$ and $W$ is $T$-invariant. I have to prove that (a) $W \subseteq \text{Ker }T$, where $\text{Ker }T$ is the kernel of $T$. (b) If $V$ is finite dimensional, then $W = \text{Ker }T$. For (a), I take $w \in W$, and I have prove that $T(w)=0$. Since $W$ is $T$-invariant, then $T(w) \in W$. I really don't know what's next, because I don't know how to use the fact that $V = \text{Im } T \oplus W$. Can I say $W = \text{Im } T \oplus V$ from this fact? Also, I don't have any idea of how to prove (b). Any hints or ideas will be very appreciated. Thanks. AI: a) Since $V = \operatorname{im} T \oplus W$, then $\operatorname{im} T \cap W = \{0\}$. To see why, let $v \in \operatorname{im} T \cap W$. We have $v = v + 0 = 0 + v$. $v$ can be uniquely written as the sum of two vectors in each of $\operatorname{im} T$ and $W$. Therefore we must have $v = 0$. Now let $w \in W$. Since $T w \in W$ and $T w \in \operatorname{im} T$, it follows that $T w = 0$ and $w \in \ker T$. b) We have $\dim V = \dim \operatorname{im} T + \dim \ker T = \dim \operatorname{im} T + \dim W$. Thus $\dim \ker T = \dim W$. Since $W \subset \ker T$, it follows that $\ker T = W$.
H: Explanation of Lagrange Interpolating Polynomial Can anybody explain to me what the Lagrange interpolating polynomial is, with examples? I know the formula but it doesn't seem intuitive to me. AI: The Lagrange interpolating polynomial is a tool which helps us construct a polynomial which goes through any desired set of points. Let's say we want a polynomial that goes through the points $(1,3), (3,4), (5,6)$ and $(7,-10)$. First we define the polynomial $P(x)=(x-1)(x-3)(x-5)(x-7)$. This has roots at the x-coordinates of each of the points we want to interpolate. Then we construct the following polynomials from this, $$f_1(x) = P(x)/(x-1)$$ $$f_2(x) = P(x)/(x-3)$$ $$f_3(x) = P(x)/(x-5)$$ $$f_4(x) = P(x)/(x-7)$$ Notice that in particular $f_1(x)=(x-3)(x-5)(x-7)$. This function has the following property: It is zero at $x=3,5,$ and $7$ and nonzero at $x=1$. This means that it is "on" when we are at the first x-coordinate and "off" at the others. Each of them is designed to work this way. Now consider the following expression, $$ L(x) = 3 \frac{f_1(x)}{f_1(1)} + 4 \frac{f_2(x)}{f_2(3)} + 6 \frac{f_3(x)}{f_3(5)} -10 \frac{f_4(x)}{f_4(7)} $$ Notice that this function goes through all four designated points. When we plug in the desired value of $x$ only one of the four functions $f_j$ is turned on and the others are zero. The coefficients are designed to force the expression to equal the corresponding $y$-coordinates. In particular consider $L(5)$, $$ L(5) = 3 \frac{f_1(5)}{f_1(1)} + 4 \frac{f_2(5)}{f_2(3)} + 6 \frac{f_3(5)}{f_3(5)} -10 \frac{f_4(5)}{f_4(7)} $$ $$ L(5) = 0 + 0 + 6 \frac{f_3(5)}{f_3(5)} -0 $$ $$ L(5) = 6 \frac{f_3(5)}{f_3(5)} $$ $$ L(5) = 6 (1) $$ $$ L(5) = 6 $$ So we have the desired point $(5,6)$. Try explicitly writing out the polynomial and plugging in the other points to really see it work.
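The construction above can be sketched in a few lines of Python (an illustration of mine; the names `xs`, `ys`, and `L` are not from the answer). Each term $y_j\,f_j(x)/f_j(x_j)$ is "on" at node $x_j$ and "off" at the others:

```python
# The construction from the answer, for the points (1,3), (3,4), (5,6), (7,-10).
xs = [1, 3, 5, 7]
ys = [3, 4, 6, -10]

def L(x):
    total = 0.0
    for j in range(len(xs)):
        # f_j(x) = product of (x - x_k) over k != j; vanishes at every node but x_j
        num = 1.0
        den = 1.0
        for k in range(len(xs)):
            if k != j:
                num *= x - xs[k]
                den *= xs[j] - xs[k]   # f_j(x_j), the normalising constant
        total += ys[j] * num / den
    return total

print([L(x) for x in xs])  # [3.0, 4.0, 6.0, -10.0]
```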
H: Show that Function Compositions Are Associative My intent is to show that a composition of bijections is also a bijection by showing the existence of an inverse. But my approach requires the associativity of function composition. Let $f: X \rightarrow Y, g: Y \rightarrow Z, h: Z \rightarrow W$ be functions. $((f \circ g) \circ h)(x) = h((f \circ g)(x)) = h(g(f(x)))$, and $(f \circ (g \circ h))(x) = (g \circ h)(f(x)) = h(g(f(x)))$. However, I am having problems in justifying that the two compositions, $(f \circ g) \circ h$ and $f \circ (g \circ h)$, have the same domain and range. When I consulted ProofWiki, whose link is at the bottom, I got even more confused. Specifically, for $(f \circ g) \circ h = f \circ (g \circ h)$ to be defined, ProofWiki requires that dom$g =$ codom$f$ and dom$h =$ codom$g$. First of all, I think that it should be dom$g =$ range$f$ .... Moreover, as you can see in the example below, you actually have to adjust domains and ranges of $f, g, h$ for the requirement to hold true. Let $f: \mathbb R \rightarrow \mathbb R$ be $f(x) = 2x$, $g: \mathbb R^+ \rightarrow \mathbb R$ be $g(y) = ln(y)$, $h: \mathbb R \rightarrow \mathbb R$ be $h(z) = z - 10$. Then $((f \circ g) \circ h)(x) = ln(2x) - 10 = (f \circ (g \circ h))(x)$, with dom$((f \circ g) \circ h) = \mathbb R^+$ = dom$(f \circ (g \circ h))$. As a result, we need to set dom$f = \mathbb R^+$, range$f = \mathbb R^+$; dom$g$, range$g$, dom$h$, and range$h$ remain the same. Am I allowed to do that? This adjustment implies that when we say dom$f = X$, $f$ must be defined for all elements in $X$, but $X$ may not be the entire set of elements for which $f$ is defined. http://www.proofwiki.org/wiki/Composition_of_Mappings_is_Associative AI: Usually, when $f\colon X\to Y$ and $g\colon Y\to Z$ are maps, their composition is written $g\circ f$, rather than $f\circ g$: in this way you write $$ g\circ f(x)=g(f(x)) $$ by definition. You seem to confuse codomain and range. 
The range, or image, of $f$ is the subset of the codomain $Y$ consisting of the elements $f(x)$, for $x\in X$. The range has no role whatsoever when composition of maps is considered. At least, when maps are supposed to be defined on the whole domain, as is the case when talking of surjectivity or bijectivity. Associativity is almost obvious. If you have another function $h\colon Z\to W$, you have, by definition, that $g\circ f\colon X\to Z$ and $h\circ g\colon Y\to W$. Thus one can consider also the compositions $$ h\circ(g\circ f) \qquad\text{and}\qquad (h\circ g)\circ f $$ and both are maps $X\to W$, so it makes sense to ask if they are equal. They are, because for each $x\in X$ we have $$ h\circ(g\circ f)(x)=h(g\circ f(x))= h(g(f(x)))=h\circ g(f(x))=(h\circ g)\circ f(x). $$ If you can't parse this, just set $y=f(x)$, $z=g(y)$, $F=g\circ f$ and $G=h\circ g$, so that $F(x)=g(f(x))=g(y)=z$. Then $$ h\circ(g\circ f)(x)=h\circ F(x)=h(F(x))=h(z) $$ and $$ (h\circ g)\circ f(x)=G\circ f(x)=G(y)=h\circ g(y)=h(g(y))=h(z) $$ so the two elements are the same.
H: Defining an inductive set I'm having some difficulty solving an induction task. Here is the task I'm working on: Give an inductive definition of the language below: $\{(ab)^n\mid n\in\{0,1,2,\dots\}\} = \{\Lambda,ab,abab,ababab,...\}$ I'm very new to induction. What I don't understand is the statement above: what are $n$ and $(ab)^n$? I just can't figure out how I can solve induction tasks like this. I would appreciate it if you could tell me step by step what to do to solve this task. I've tried to google how to solve tasks like this, but with no luck really. I hope I can get the help I need here. Thanks a lot! AI: Here $(ab)^n$ denotes the string $ab$ repeated $n$ times, with $(ab)^0 = \Lambda$, the empty string. If you're looking just at the set $$ L = \{\Lambda,ab,abab,ababab, \ldots\}$$ then one possible inductive definition of $L$ might be something like: The empty string is in $L$. Whenever $w\in L$ for some string $w$, then $wab$ is in $L$ too. Nothing else is in $L$. It is possible that in the particular context where you found the exercise, you're supposed to phrase the inductive definition in a particular formal way. For example, in certain contexts you might be expected to say $L$ is the least fixed point of the function $$ S \mapsto \{\Lambda\} \cup \{wab\mid w \in S \} $$ That depends a lot on context you have not given, though.
H: Prove Divisibility test for 11 Prove the divisibility test for 11: "If you repeatedly subtract the ones digit from the rest and eventually reach 0, the number is divisible by 11." Example: 11825 -> 1182 - 5 = 1177; 1177 -> 117 - 7 = 110; 110 -> 11 - 0 = 11; 11 -> 1 - 1 = 0. Therefore 11825 is divisible by 11. Note 11825 = 1075*11. I was thinking that we let $x = a_ka_{k-1}\ldots a_1a_0$, where the $a_i$ are the digits. AI: If you split off the ones digit and subtract it, you transform the number from $10\cdot a + b$ to $a - b$. Now, $$10\cdot a + b = (11 - 1)\cdot a + b = 11\cdot a - (a-b),$$ so $10\cdot a + b$ is divisible by $11$ if and only if $-(a-b)$ is divisible by $11$, which is the case if and only if $a-b$ is divisible by $11$. Repeat until you have a one-digit number.
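A quick way to gain confidence in the test before proving it is to implement the repeated subtraction and compare against direct division (a sketch of mine, not part of the proof; taking absolute values handles the case where an intermediate difference goes negative, which does not change divisibility by 11):

```python
def div11(n):
    """Repeatedly subtract the ones digit from the rest; n is divisible
    by 11 iff the process reaches 0."""
    n = abs(n)
    while n >= 10:
        n = abs(n // 10 - n % 10)   # a - b, made nonnegative
    return n == 0

print(div11(11825))  # True, since 11825 = 1075 * 11
print(all(div11(n) == (n % 11 == 0) for n in range(1, 10000)))  # True
```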
H: Theorem 3.55 Rudin (rearrangement and convergence) If $\sum a_n$ is a series of complex numbers which converges absolutely, then every rearrangement of $\sum a_n$ converges, and they all converge to the same sum. Proof: Let $\sum a_n'$ be a rearrangement, with partial sums $s_n'$. Given $\epsilon > 0$ there exists an integer $N$ such that $m \geq n \geq N$ implies $$\sum\limits_{i=n}^m \vert a_i\vert \leq \epsilon. \tag{26}\label{26}$$ Now choose $p$ such that the integers $1, 2, 3, \ldots, N$ are all contained in the set $k_1, k_2, \ldots, k_p$. [Here $\{k_n\}, n= 1, 2, 3, \ldots$ is a sequence where every positive integer appears once and only once; $a_n' = a_{k_n}$ for each $n= 1, 2, 3, \ldots$. Then we say that $\sum a_n'$ is a rearrangement of $\sum a_n$.] Then if $n > p$, the numbers $a_1,...,a_N$ will cancel in the difference $s_n - s_n'$, so that $\vert s_n - s_n' \vert \leq \epsilon$, by \eqref{26}. Hence $\{s_n'\}$ converges to the same sum as $\{s_n\}$. My question is: how can I conclude $\vert s_n - s_n' \vert \leq \epsilon$ from \eqref{26}? AI: The sum $s_n$ comprises $a_1,\, \dotsc,\, a_N$, and also $a_{N+1},\, \dotsc,\, a_n$. The sum $s_n'$ comprises $a_1,\, \dotsc,\, a_N$, and also the $a_k$ for $k$ in a finite set $F$ disjoint from $\{1,\, \dotsc,\, N\}$. Let $G = \{N+1,\, \dotsc,\, n\}$. Then $$s_n - s_n' = \sum_{k\in G} a_k - \sum_{k\in F} a_k = \sum_{k\in G\setminus F} a _k - \sum_{k \in F\setminus G} a_k,$$ whence $$\lvert s_n - s_n'\rvert \leqslant \sum_{k \in G\setminus F} \lvert a_k\rvert + \sum_{k \in F\setminus G} \lvert a_k\rvert = \sum_{k \in (G\setminus F) \cup (F\setminus G)} \lvert a_k\rvert \leqslant \varepsilon,$$ since $(G\setminus F) \cup (F\setminus G) \subset \{N+1,\, \dotsc,\, m\}$ for large enough $m$.
H: Another probability word problem Again, forgive me if this is basic: Assume you are taking two courses this semester (A and B). The probability that you will pass course A is 0.835 and the probability that you will pass both courses is 0.276. The probability that you will pass at least one of the courses is 0.981. What is the probability that you will pass course b? So I figure out (0.835 - 0.276 (both)) is .559 and then because of the wording I assume I'm doing the addition law again so I do 0.835 + .559 - 0.981 to arrive at 0.413. Is this logic correct? AI: Hint: $0.559$ is the chance you will pass $A$ only. I don't understand your second calculation. If you draw a Venn diagram, you have the values in three of the four regions, missing only the chance that you pass $B$ only. If you deduct the two that involve failing $B$ from $1$ you are there.
H: Given the coordinates of two of three collinear points such that $PM=MQ$, find the third point $P$, $M$ and $Q$ are three collinear points and $PM=MQ$. If $P$ is the point $(-1,4)$ and $M$ is the point $(5,8)$, find the coordinates of the point $Q$. What I've Done: I've found the distance $MQ$, which is $\sqrt{52}$. I've made the equation $$\sqrt{52}=\sqrt{(5-x_1)^2 + (8-y_1)^2}$$ where $x_1$, $y_1$ are the coordinates of $Q$. What do I do now? Help would be greatly appreciated. AI: This is easier to solve using the midpoint formula: If $Q=(x,y)$, then you have $$\big(\frac{x+(-1)}{2},\frac{y+4}{2}\big)=(5,8);$$ so now you can solve for $x$ and $y$. If you want to continue with your solution, though, find the equation for the line which passes through $P$ and $M$, and then use this to solve for $y$ in terms of $x$ and substitute into your formula.
H: On functions whose derivative equals zero almost everywhere Suppose $f: [0,1] \rightarrow \mathbb{R}$ is continuous everywhere and differentiable almost everywhere in $[0,1]$, and $f'(x)=0$ whenever the derivative exists. Is it true that $f(x)$ equals a constant? It seems like the answer should be yes. However, I'm having trouble finding a version of the fundamental theorem of calculus which would apply to $f$. AI: More dramatically, the Cantor singular function $f$ is increasing, continuous, $f' = 0$ a.e., $f(0) = 0$ and $f(1) = 1$. http://en.wikipedia.org/wiki/Cantor_function
H: About the multiplication map $I\otimes M\rightarrow M$ so I am learning about flat modules and I found this criterion for flatness. An $R$-module $M$ is flat iff for every finitely generated ideal $I$, we get that the multiplication map $I\otimes_R M\rightarrow M$ is an injection (Eisenbud Proposition 6.1) I am confused because say we have any $M$ (an $R$ module), and say that $i\otimes m\mapsto 0$. This means that $im=0$, but wouldn't this imply that $i\otimes m=1\otimes im=1\otimes 0=0$, so the kernel is trivial, and hence the map is injective always? I am obviously doing something wrong, so I'd appreciate anyone that could clarify things a bit! AI: You can't write $i\otimes m=1\otimes im$ unless $1\in I$, i.e., unless $I=R$.
H: Is the definition of linearity redundant? For two functions f(x) and g(y) and a transformation T, T is linear if: 1. T(f(x) + g(y)) = T(f(x)) + T(g(y)) 2. T(cf(x)) = cT(f(x)) for c in the reals. This definition seems redundant because the first property gives: T(cf(x)) = T(f(x) + f(x) ... [c times] + f(x)) = T(f(x)) + T(f(x)) + ... [c times] + T(f(x)) = cT(f(x)) So why is the second property necessary for the definition of linearity? AI: It's not redundant. Your argument from the first property only applies to positive integer coefficients. Notice: $ 3 f(x) = f(x) + f(x) +f(x)$, but there is no such corresponding expression for $\pi f(x)$. (Additivity alone does imply homogeneity for rational $c$, but not for irrational $c$: without an extra assumption such as continuity, one can construct additive maps on $\mathbb{R}$ that are not homogeneous, using a Hamel basis.) The misconception is related to thinking of multiplication as repeated addition. This is emphatically not true when multiplying by non-integers. See "If multiplication is not repeated addition" for more on this concept.
H: How to have this definite integral? Suppose that a function $f$ of $x$ and $y$ is defined as follows:$$f(x,y) = \begin{cases} \frac{21}{4}x^2y & \text{for $x^2 \leq y\leq 1$,} \\ 0 & \text{otherwise. } \\ \end{cases}$$ I have to determine the value of the integral over the region where $y\leq x$ also holds. The answer is $\frac{3}{20}$, and I can see it from a figure for $x$ and $y$, but I don't know how to get it by calculation. AI: The given conditions $y\le x$ and $x^2\le y\le1$ together imply $x^2\le y\le x$. Since $x^2\le x$, we need $0\le x\le 1$. $$\int_{0}^1\int_{x^2}^x\frac{21}{4}x^2y\,dy\,dx=\int_{0}^1\left[\frac{21}{8}x^2y^2\right]^x_{x^2}dx=\int_{0}^1\frac{21}{8}x^2(x^2-x^4)\,dx=\frac{21}{8}\left[\frac{x^5}{5}-\frac{x^7}{7}\right]^1_{0}=\frac{3}{20}$$
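A numerical cross-check (my addition, not in the original answer): integrating the inner result $\frac{21}{8}x^2(x^2-x^4)$ over $[0,1]$ with a simple midpoint Riemann sum should give approximately $3/20 = 0.15$.

```python
# Midpoint Riemann sum for the outer integral in x over [0, 1].
N = 100_000
h = 1.0 / N
total = 0.0
for i in range(N):
    x = (i + 0.5) * h
    total += (21 / 8) * x**2 * (x**2 - x**4) * h

print(total)  # ≈ 0.15 = 3/20
```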
H: Finding local normal form of a holomorphic function So I'm trying to find local coordinates to compute the local normal form of a holomorphic function. I have $f : \mathbb{P}^1 \to \mathbb{P}^1$ given by $f(z) = \frac{z}{(z-1)^2}$. Now we have a nice coordinate system $\varphi(z)= \frac{1}{z-1}$ and $ \psi(z) = \frac{-1 + \sqrt{1+4z}}{2}$ around $\infty$ and $f(\infty) = 0$ respectively. So we have the lovely fact that $\psi \circ f \circ \varphi^{-1}(z) = z$. Now I believe this is the local normal form of $f$ around $\infty$. This should also work nicely with only some mild alterations to $\varphi$ for most points in the domain. With I believe the only exception of $z = 1 \in \mathbb{P}^1$. I think that the normal form around $z=1$ should have power 2, since there is a pole of that order there. I can't for the life of me figure it out though. Any ideas? AI: Around $z = 1$, you want a normal form of $w \mapsto w^2$. Since $f$ has a pole in $1$, you can choose the chart $\psi(z) = \frac1z$ around $\infty$, so you want $$w^2 = \frac{(z-1)^2}{z},$$ with a little abuse of notation. Thus you can use the chart $$\varphi(z) = \frac{z-1}{\sqrt{z}}$$ around $1$. I suggest using the branch of the square root that has $\sqrt{1} = 1$. A small computation shows $$\varphi^{-1}(w) = \left(\sqrt{1+ \frac{w^2}{4}} + \frac{w}{2}\right)^2,$$ and as desired $$\psi \circ f \circ \varphi^{-1} \colon w \mapsto w^2.$$
H: When does the triangle have the smallest area? The following triangle has area $S$, and the sides $AO$ and $BO$ have lengths $a$ and $b$, respectively. There is a fixed point $X$ at $(x,y)$. A point $C$ is put on the line segment $OA$, and the point $D$ is put on the intersection between the line segment $OB$ and the line $CX$. When does the area of the triangle $DCO$ have the smallest value? I think it is either when $DX=XC$, when $D$ is at $B$, or when $C$ is at $A$. Yet, if I try to prove this, the calculation becomes very complicated. AI: Be warned that there is an instant "flash of insight" solution from sitting and pondering the problem long enough, and you are not far from it, so you might lose some enjoyment reading the answer. Hint: remove $A$ and $B$ from the picture. Solution (presented as hidden spoiler text in the original, with increasing detail): consider the parallelogram which is centered at $X$ and is made by rotating the figure $180$ degrees.
H: If $(4,-1)$ is on the graph of $f$, what point must be on the graph of $g$, where $g(x)= 2f(-2(x+1))-2$? Given that $g(x)= 2f(-2(x+1))-2$ and the point $(4, -1)$ is on the graph of $f(x)$. What point must exist for $g(x)$? I don't know how to start this problem. AI: Hint: If the only thing you know is $f(4)$, then try making the inside of $f$ look like $4$; that is, set $$-2(x + 1) = 4$$ Then solve for $x$ and evaluate $g(x)$.
H: Computing the integral $\int(x-ab)x^{a-1}e^{-\frac{x}{b}}dx$ Can someone justify to me why $\int(x-ab)x^{a-1}e^{-\frac{x}{b}}dx = -bx^ae^{-\frac{x}{b}}$? WolframAlpha gives the answer but does not explain why. I'm absolutely new to this kind of integration. Thanks! AI: The right hand side is an antiderivative of the integrand; the quickest justification is to differentiate $-bx^ae^{-\frac{x}{b}}$ and check that you recover the integrand. By the way, since you have an indefinite integral, there should be a constant of integration on the RHS. We can expand: \begin{align} \int (x-ab)x^{a-1}e^{-\frac{x}{b}}\ \mathrm{d}x&=\int x^ae^{-\frac{x}{b}}\ \mathrm{d}x+\int ax^{a-1}(-be^{-\frac{x}{b}})\ \mathrm{d}x\\ &\equiv\mathcal{I}+\mathcal{II} \end{align} and use integration by parts for $\mathcal{II}$: \begin{equation} \mathcal{II}=-bx^ae^{-\frac{x}{b}}-\int x^ae^{-\frac{x}{b}}\ \mathrm{d}x+c \end{equation} where $c$ is the constant of integration. Then we have \begin{equation} \mathcal{I}+\mathcal{II}=-bx^ae^{-\frac{x}{b}}+c \end{equation} which is your right hand side.
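The differentiation check can also be done numerically (a sketch of mine; the sample values of $a$ and $b$ are arbitrary positive choices): the central difference of $F(x) = -bx^a e^{-x/b}$ should match the integrand.

```python
import math

a, b = 2.5, 1.3   # arbitrary sample parameters (assumed positive)

def F(x):
    return -b * x**a * math.exp(-x / b)

def integrand(x):
    return (x - a * b) * x**(a - 1) * math.exp(-x / b)

x, h = 2.0, 1e-6
numeric = (F(x + h) - F(x - h)) / (2 * h)   # central-difference derivative of F
print(numeric, integrand(x))                # the two values agree closely
```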
H: Proving Equality of 2 Functions I have a general question, illustrated by a specific example. The general question is how to methodically prove that two functions are equal. Much like trying to prove an "if-and-only-if" statement (by first proving the positive, then the converse), or writing a proof by induction (by proving the base case, then writing the induction step), I think there is a general method for this. The specific example is how to show that for a bijection $f:A \rightarrow B$, the following is true: $f^{-1} \circ f = {\text {id}}_{A}$, where $\text{id}$ is the identity function. The intuition itself seems obvious to me, I just don't know how to best write the proof! AI: Two functions $f$ and $g$ are equal if and only if they have the same domain and $f(x) = g(x)$ for all $x$ in the domain. So in your example you will have to prove the following assertions: The domain of $f^{-1} \circ f$ is equal to $A$ (the domain of id$_A$) For any $x \in A$ we have $(f^{-1} \circ f) (x) =$ id$_A(x)$
H: List of (pre-graduate level) exercises I am about to get my undergraduate degree in (pure) mathematics, but I feel like I'm ill prepared to go through a graduate program. This is why I'm looking for texts like this one http://www.math.kent.edu/~white/qual/list/all.pdf in other fields (algebra, analysis and topology). I've already looked at "Berkeley problems in mathematics", and I found it pretty tough. Sorry if this question is too broad, or the site is not really for these kinds of questions (I've seen some in related topics, though). Edit: I don't know how to make this question community wiki. Do you still have that feature? AI: http://homepages.uconn.edu/~rib02005/real.html This one, titled Real Analysis for graduate students (don't be scared by the title, it's not a graduate-level textbook!), is a good book full of exercises that I discovered surfing randomly on Amazon. As you can see, the index is amazing: measure theory, Lebesgue integration, topology, probability, harmonic functions, distributions, basic functional analysis and so on. The level, as I was saying, is not extremely high, so it could be more enjoyable than the Berkeley one. Just a small remark: the author included so many things that you can't expect everything to be as detailed as it would be in a course on that subject, but it is a good starting point (and it is free :D). I hope you'll find it interesting :D
H: How can I prove $288\mid 7^{2n+1}-48n-7$? How can I prove $$288\mid 7^{2n+1}-48n-7$$ for all nonnegative integers $n$? My only thought was to write $$7^{2n+1}-7-48n=7(7^n+1)(7^n-1)-48n.$$ This didn't seem beneficial at all. Please help me understand the problem. AI: Work by induction on $n$. The base case is easy. Assume that the statement is true for some $n$. We find $7^{2n+3} - 48(n+1) - 7 = 49 \cdot 7^{2n+1} - 48n - 48 - 7 = (7^{2n+1} - 48n - 7) + 48(7^{2n+1} - 1)$. The term $7^{2n+1} - 48n - 7$ is divisible by $288$ by the induction hypothesis. As $288 = 48 \cdot 6$, it remains to show that $6 \:|\: 7^{2n+1} - 1$. Can you complete the proof from here? (Hint: $7 \equiv 1 \pmod 6$.)
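A brute-force check over small $n$ is reassuring before (or after) the induction (a sanity test of mine, not a proof):

```python
# Verify 288 | 7^(2n+1) - 48n - 7 for n = 0, 1, ..., 199.
ok = all((7 ** (2 * n + 1) - 48 * n - 7) % 288 == 0 for n in range(200))
print(ok)  # True
```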
H: Deducing $\sin x + \cos x \ge 1$ from $(\sin x + \cos x)^2 = 1 + 2 \sin x \cos x$ So in trig, say I have an acute angle $x$. One can intuitively conclude that $\sin x + \cos x \ge 1$, but how does the fact that $$(\sin x + \cos x)^2 = 1 + 2 \sin x \cos x$$ tell me that it is true that $\sin x + \cos x \ge 1$? I don't quite see the connection. Thanks AI: If $x$ is acute, $\sin x$ and $\cos x$ are both positive, and therefore $$2\sin x\cos x\ge 0\\ 1+2\sin x\cos x\ge 1\\ \sin^2x+\cos^2x+2\sin x\cos x\ge 1\\ \left(\sin x+\cos x\right)^2\ge 1$$ Since both $\sin x$ and $\cos x$ are positive, so is their sum, and taking the (positive) square root of both sides gives: $$\sin x+\cos x\ge 1$$
H: $\mathbb{R}^n\times\{0\}$ has measure zero in $\mathbb{R}^{n+1}$ I want to show that $\mathbb{R}^n\times\{0\}$ has measure zero in $\mathbb{R}^{n+1}$. For example, take $n=1$. I want to show that the $x$-axis has measure zero in the plane. I cover it with the sets $[-1,1]\times[-\epsilon/8,\epsilon/8]$, $[-2,2]\times[-\epsilon/32,\epsilon/32]$, $\ldots$ The goal is to have the volumes be $\epsilon/2, \epsilon/4, \ldots$ so that they sum to $\epsilon$. I think this method generalizes to $\mathbb{R}^n$, simply by choosing $[-1,1]^{n-1}\times[-\epsilon/2^{n+2},\epsilon/2^{n+2}],\ldots$. I don't think it is very elegant though. Is there a "better" way to do this? AI: You can do something similar with cubes in $\mathbb{R}^n$ : Show that $[-k,k]^n \times \{0\}$ has measure zero for each $k \in \mathbb{N}$ Proof : For $\epsilon > 0$, choose $\delta > 0$ so that $$ 2\delta \prod_{i=1}^n (2k+2\epsilon) < \epsilon $$ Then note that $$ U = (-k-\epsilon, k+\epsilon)^n \times (-\delta, \delta) $$ contains $[-k,k]^n\times \{0\}$, and has measure $< \epsilon$ Note that $$ m(\mathbb{R}^n \times \{0\}) = \lim_{k\to \infty} m([-k,k]^n\times \{0\}) $$
H: Finding a directional derivative Find the directional derivative of $f(x,y,z)=3xy+z^2$ at the point $(5,1,−4)$ in the direction of a vector making an angle of $π/3$ with $∇f(5,1,−4)$. $f_\vec u(5,1,−4)=D_\vec uf(5,1,−4)=?$ I know how to do directional derivative questions but I have no idea about this one. I'm guessing that I'm thinking about the question wrong. $D_\vec uf=f_x\vec u_x+f_y\vec u_y+f_z\vec u_z =\nabla f \cdot \vec u$ where $\|\vec u\|=1$ So $\nabla f= \langle 3y, 3x, 2z \rangle$ and $\nabla f(5, 1, -4)= \langle3,15,-8\rangle$ Then it says $\vec u$ makes a $\pi/3$ angle with $\nabla f(5, 1, -4)$ which would mean $\nabla f(5, 1, -4) \cdot \vec u = \|\nabla f(5, 1, -4)\| * \| \vec u\| * \cos(\pi/3)$ and knowing $\|\vec u\| = 1$ gives $3\vec u_x+15\vec u_y-8\vec u_z=\sqrt {298} * \cos(\pi/3)$ and with $\vec u_x^2+\vec u_y^2+\vec u_z^2=1$ gives an unsolvable system of equations (I think). So... I'm wondering where I went wrong. AI: You don't need to know $\;u\;$. Since your function is differentiable everywhere, we get that $$D_uf(5,1,-4)=\nabla f(5,1,-4)\cdot\frac u{||u||}$$ But you also know that $$\frac12=\cos\frac\pi3=\frac{u\cdot\nabla f(5,1,-4)}{||u||\,||\nabla f(5,1,-4)||}\implies$$ $$\nabla f(5,1,-4)\cdot\frac u{||u||}=\frac12||\nabla f(5,1,-4)||=\ldots$$ ...and voila!
H: Proving $mn=nm$ for all $n,m\in \mathbb N$ How can I prove that $mn=nm ,\forall m,n \in \mathbb N$, by using just the following axioms? $(\forall n \in \mathbb N)(n\cdot 0=0)$ $(\forall m,n \in \mathbb N)(m(n+1)=mn+m)$, The axioms and properties of addition. AI: Claim 1: $0 \cdot n = 0$ for every natural number $n$. Proof: We induct on $n$. The base case is $0 \cdot 0 = 0$, but that follows immediately from the definition of the product, since $m \cdot 0 = 0$ for every $m$, in particular when $m=0$. Now suppose inductively that the claim holds for $n$. We have to show that it also holds for $n+1$. $0 \cdot (n+1) = 0 \cdot n + 0$. But by hypothesis we know that $0 \cdot n = 0$, so $ 0 \cdot n + 0 = 0 +0 = 0$, which closes the induction. Claim 2: $(m+1) \cdot n = m \cdot n+n$ for all natural numbers $m, n$. Proof: We induct on $n$. When $n=0$ we have $(m+1) \cdot 0 = 0$ and $m \cdot 0+0 = 0$. Assume that the claim holds for $n$, in other words assume that $(m+1) \cdot n = m \cdot n+n$; we wish to show that it also holds for $n+1$. $(m+1) \cdot (n+1) = (m+1) \cdot n + (m+1)$, and by the inductive hypothesis we know that $(m+1) \cdot n = m \cdot n+n $. Thus we have: $(m+1) \cdot n + (m+1) = (m \cdot n+n )+ m +1= m \cdot (n+1) +(n+1)$ as desired. Proposition: $m \cdot n = n\cdot m$ for all natural numbers $m, n$. Proof: We induct on $n$. When $n =0$, by definition we know that $m \cdot 0 =0$, and by Claim 1 it follows that $0\cdot m = 0$, which proves the base case. Now suppose the claim holds for $n$; we shall show that it also holds for $n+1$. $m \cdot (n+1) = m \cdot n + m$ by the definition of the product, and $(n+1)\cdot m = n \cdot m +m$ by Claim 2. Furthermore, by the inductive hypothesis we know that $ m \cdot n = n \cdot m$. Thus $m \cdot (n+1) = (n+1)\cdot m$, which closes the induction.
H: List all cosets of H and K Let $G = \mathbb Z_3 \times \mathbb Z_6$, $H = \langle (1,2)\rangle$ and let $K = \langle (1,3)\rangle$. List all cosets of $H$ and $K$. Can somebody please explain to me how to do this problem? First of all, I'm having trouble with how to obtain the sets $H$ and $K$. Definition: Let $H$ be a subgroup of the group $G$, and let $a$ be an element of $G$. The set $aH$ = {x element of G | x = ah for some h element of H} is called the left coset of $H$ in $G$ determined by $a$. Similarly, the right coset of $H$ in $G$ determined by $a$ is the set $Ha$ = {x element of G | x = ha for some h element of H}. The number of left cosets of $H$ in $G$ is called the index of $H$ in $G$, and is denoted by $[G : H]$. I need some explanation on how to do these kinds of problems; I really want to understand. Any help will be appreciated. AI: Let's do this for $H$, and leave $K$ as a similar exercise: $H = \langle (1,2)\rangle = \{(1,2), (2,4), (0,0)\}$ since $$ (3,6) \equiv (0,0) $$ You need to check that there are no other elements. Hence, $|H| = 3$ and $[G:H] = 18/3 = 6$. Thus, there are 6 cosets you need to find. The cosets are all of the form $(i,j) + H$, where $(i,j) \in G$. $$ (0,0) + H, (1,0) + H, \ldots, (2,4)+H, (2,5) + H $$ There are 18 such elements; but there are overlaps: $$ (i,j) + H = (k,l) + H \Leftrightarrow (i-k,j-l) \in H $$ $$ \Leftrightarrow (i-k,j-l) \in \{(1,2), (2,4), (0,0)\} $$ Now we list the cosets: $$ (0,0) + H = (1,2) + H = (2,4) + H $$ $$ (1,0) + H = (2,2) + H = (0,4) + H $$ $$ (2,0) + H = (0,2) + H = (1,4) + H $$ $$ (0,1) + H = (1,3) + H = (2,5) + H $$ $$ (1,1) + H = (2,3) + H = (0,5) + H $$ $$ (2,1) + H = (0,3) + H = (1,5) + H $$ These are all the cosets of $H$. Can you do the same thing with $K$?
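The enumeration can be automated as a double-check (a sketch of mine; the helper names `add`, `H`, `cosets` are not from the answer):

```python
from itertools import product

# G = Z3 x Z6; H is the cyclic subgroup generated by (1, 2).
G = list(product(range(3), range(6)))

def add(p, q):
    return ((p[0] + q[0]) % 3, (p[1] + q[1]) % 6)

# Generate H by repeatedly adding the generator (1, 2) until it cycles.
H = set()
g = (0, 0)
while g not in H:
    H.add(g)
    g = add(g, (1, 2))

# Each coset a + H, collected as frozensets so duplicates collapse.
cosets = {frozenset(add(a, h) for h in H) for a in G}
print(sorted(H))    # [(0, 0), (1, 2), (2, 4)]
print(len(cosets))  # 6, matching [G:H] = 18/3
```

Replacing the generator with `(1, 3)` runs the same check for $K$.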
H: Closure Criterion for convergence of sequences I know that $\{z\}=\bigcap\{\operatorname{cl}\,\{x_n\mid n\in S\} \mid S\subseteq \mathbb{N}\ \text{and}\ S\ \text{is infinite}\}$ is one of the criteria of convergence of sequences in a metric space. Here is my question: Can the intersection of the closures of the tails of a sequence be a singleton set without the sequence converging? AI: $$(1,0,2,0,3,0,4,0,5,0,\ldots)$$ Each tail $\{x_n \mid n\ge k\}$ of this sequence is a closed, discrete subset of $\mathbb{R}$, and the only point common to all the tails is $0$; yet the sequence clearly does not converge.
H: Modulo operation of large powers I came across this property in a cryptography book. $(ab)\bmod n=\bigl((a \bmod n)(b \bmod n)\bigr)\bmod n$. There is an example in the book, $10^n\bmod 3= \bigl((10\bmod 3)^n\bigr)\bmod 3$. Now if I have to calculate $8^{15} \bmod 17$, can I calculate $8 \bmod 17$ and multiply the answer (which is $9$) $15$ times, so the answer becomes $16$? AI: Yes, except that $8 \bmod 17$ is just $8$. You can also continually square to save some operations. So you get: $$8 \equiv 8 \pmod {17},$$ $$8^2 = 64 \equiv 13 \pmod {17},$$ $$8^4 \equiv 13^2 = 169 \equiv 16 \pmod {17},$$ $$8^8 \equiv 16^2 = 256 \equiv 1 \pmod {17},$$ $$8^{15} = 8^8 \cdot 8^4 \cdot 8^2 \cdot 8 \equiv 1 \cdot 16 \cdot 13 \cdot 8 = 1664 \equiv 15 \pmod {17}.$$
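The repeated-squaring computation in the answer is the standard square-and-multiply algorithm; a minimal sketch (the function name is mine):

```python
def power_mod(base, exp, mod):
    # square-and-multiply: reduce mod `mod` after every product,
    # so intermediate values never exceed mod**2
    result, base = 1, base % mod
    while exp:
        if exp & 1:                  # current binary digit of exp is 1
            result = result * base % mod
        base = base * base % mod     # square for the next binary digit
        exp >>= 1
    return result

assert power_mod(8, 15, 17) == 15    # matches the answer's hand computation
```

Python's built-in `pow(8, 15, 17)` performs the same modular exponentiation.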
H: Two part question about $\sup_y\inf_xf(x,y)\leqslant\inf_x\sup_yf(x,y)$ Background Info: Let $X,Y\neq\emptyset$ and let $f:X\times Y\to\mathbb{R}$ have a bounded range in $\mathbb{R}$. Also, let $f_{1}(x)=\sup\{f(x,y):\: y\in Y\}$ and $f_{2}(y)=\sup\{f(x,y):\: x\in X\}$. Establish the Principle of Iterated Suprema: \begin{align*} \sup\{f(x,y):\: x\in X,y\in Y\} &=\sup\{f_{1}(x):\: x\in X\}\\ &=\sup\{f_{2}(y):\: y\in Y\} \end{align*} I proved the Principle of Iterated Suprema, but now I am stuck on the question that follows. The Question: Let $f$ and $f_{1}$ be as in the preceding exercise and let $$g_{2}(y)=\inf\{f(x,y):\: x\in X\}$$ Prove that $$\sup\{g_{2}(y):y\in Y\}\leqslant\inf\{f_{1}(x):\: x\in X\}$$ Show that strict inequality can hold. We sometimes express this inequality as $$\underset{y\quad x}{\sup\inf}f(x,y)\leqslant\underset{x\quad y}{\inf\sup}f(x,y)$$ I don't really know where to start. A hint would be greatly appreciated! Thanks AI: Hints: $g(y) \leq \sup\limits_y\ g(y)$ $h(x) \leq k(x) \Rightarrow \inf\limits_x\ h(x) \leq \inf\limits_x\ k(x)$ $l(y) \leq M \Rightarrow \sup\limits_y\ l(y) \leq M$
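For the strict-inequality part, one standard example (not from the hints above) takes $X=Y=\{0,1\}$ and $f(x,y)=1$ if $x=y$, $0$ otherwise; a quick finite check:

```python
X = Y = [0, 1]
f = lambda x, y: 1 if x == y else 0   # the "diagonal" example

sup_inf = max(min(f(x, y) for x in X) for y in Y)   # sup_y inf_x f(x, y)
inf_sup = min(max(f(x, y) for y in Y) for x in X)   # inf_x sup_y f(x, y)

# 0 = sup inf < inf sup = 1, so the inequality can be strict
assert sup_inf == 0 and inf_sup == 1
```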
H: $\int^{\infty}_{-\infty} \exp \{-\frac{1}{2} y^2\} \; dy$ $$\int^{\infty}_{-\infty} \exp \{-\frac{1}{2} y^2\} \; dy$$ I tried letting $u = -\frac{1}{2} y^2$ then $dy = - \frac{1}{\color{red}y} du$... but theres still a $y$ term in $dy$? AI: Hint: If $I$ is the desired integral, then $$I^2 = \int_{-\infty}^{\infty} e^{-x^2/2} dx \int_{-\infty}^{\infty} e^{-y^2/2} dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(x^2 + y^2)} dy dx$$ Now convert to polar coordinates.
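Carrying the polar-coordinate hint through gives $I^2=2\pi$, hence $I=\sqrt{2\pi}$. A numerical sanity check of that value (my own addition), truncating the integral to $[-10,10]$ where the tails are negligible:

```python
import math

a, n = 10.0, 200_000
h = 2 * a / n
# midpoint rule on [-10, 10]; the tails beyond contribute less than 1e-20
total = h * sum(math.exp(-0.5 * (-a + (i + 0.5) * h) ** 2) for i in range(n))

assert abs(total - math.sqrt(2 * math.pi)) < 1e-6
```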
H: Showing that the theory DTO is consistent Toward the end of Kunen's Models of Set Theory section in his most recent Set Theory text, after talking about relativization, he begins to mention the idea of relative consistency proofs. I've been looking at the following exercise in this section. Give a purely finitistic proof that the theory $DTO$ of dense total orders without first or last elements is consistent. Here, $\mathcal{L} = \{ < \}$. There is a hint for the exercise: Give a purely finitistic definition of the notion $\mathbb{Q} \models \varphi[s]$, and then prove that $\mathbb{Q} \models \varphi$ whenever $\varphi$ is a sentence that is provable from $DTO$. Not having worked with consistency proofs before, nor having seen the theory $DTO$ anywhere in the previous sections of the text, I'm not entirely sure what to do. Also, what does he mean by "purely finitistic?" How would I go about giving a purely finitistic definition of $\mathbb{Q} \models \varphi[s]$? I would greatly appreciate it if anyone would be able to help me with this one. AI: DTO would refer to "dense linear orders without endpoints" as described in Wikipedia. The axioms would be as follows: $( \forall x ) ( \neg x < x )$; $(\forall x ) ( \forall y ) ( \forall z ) ( ( x < y \wedge y < z ) \rightarrow x < z )$; $( \forall x ) ( \forall y ) ( x < y \vee x = y \vee y < x )$; $( \forall x ) ( \forall y ) ( x < y \rightarrow ( \exists z ) ( x < z \wedge z < y ) )$; $( \forall x ) ( \exists y ) ( y < x )$; $( \forall x ) ( \exists y ) ( x < y )$. I think that here "purely finitistic" refers to only ever making finitely many checks to see if a formula is true. With this interpretation the basic idea is that given any formula $\varphi ( x , y , z )$ and rationals $a < b$, then for all $s,t$ which share the same relationship with both $a$ and $b$ we have that $\mathbb{Q} \models \varphi [ s,a,b ]$ iff $\mathbb{Q} \models \varphi [ t,a,b ]$: the model $\mathbb{Q}$ "cannot distinguish rationals within an interval". 
So to check whether $\mathbb{Q} \models \exists x \psi$ it suffices to only check representatives in the intervals. So let us finitistically define $\mathbb{Q} \models \varphi [ s_1 , \ldots , s_n ]$ for all $\varphi$. if $\varphi \equiv x = y$, then $\mathbb{Q} \models \varphi [ s , t ]$ iff $s = t$; if $\varphi \equiv x < y$, then $\mathbb{Q} \models \varphi [ s , t ]$ iff $s < t$; if $\varphi \equiv \neg \psi$, then $\mathbb{Q} \models \varphi [s_1 , \ldots , s_n ]$ iff $\mathbb{Q} \not\models \psi [ s_1 , \ldots , s_n ]$; if $\varphi \equiv \psi \wedge \theta$ then $\mathbb{Q} \models \varphi [ s_1 , \ldots , s_n ]$ iff $\mathbb{Q} \models \psi [ s_1 , \ldots , s_n ]$ and $\mathbb{Q} \models \theta [ s_1 , \ldots , s_n ]$; if $\varphi \equiv \exists x \psi$, then given $s_1 , \ldots , s_n \in \mathbb{Q}$ rearrange them as $s_1^\prime \leq \cdots \leq s_n^\prime$, then $\mathbb{Q} \models \varphi [ s_1 , \ldots , s_n ]$ iff at least one of the following holds: $\mathbb{Q} \models \psi [ s_1^\prime - 1 , s_1 , \ldots , s_n ]$; $\mathbb{Q} \models \psi [ s_1^\prime , s_1 , \ldots , s_n ]$; $\mathbb{Q} \models \psi [ \frac{s_1^\prime + s_2^\prime}{2} , s_1 , \ldots , s_n ]$; $\mathbb{Q} \models \psi [ s_2^\prime , s_1 , \ldots , s_n ]$; $\vdots$ $\mathbb{Q} \models \psi [ \frac{s_{n-1}^\prime + s_n^\prime}{2} , s_1 , \ldots , s_n ]$; $\mathbb{Q} \models \psi [ s_n^\prime , s_1 , \ldots , s_n ]$; $\mathbb{Q} \models \psi [ s_n^\prime + 1 , s_1 , \ldots , s_n ]$; (If $\varphi \equiv \exists x \psi$ where $\psi$ has no variables other than $x$ free, then $\mathbb{Q} \models \varphi$ iff $\mathbb{Q} \models \psi [ 0 ]$.) 
Let us show that this works with the following axiom of DTO: $$( \forall x ) ( \forall y ) ( x < y \rightarrow ( \exists z ) ( x < z \wedge z < y ) )$$ which is logically equivalent to $$ \neg ( \exists x ) ( \exists y ) ( x < y \wedge \neg ( \exists z ) ( x < z \wedge z < y ) ) ).$$ $\mathbb{Q} \models \neg ( \exists x ) ( \exists y ) ( x < y \wedge \neg ( \exists z ) ( x < z \wedge z < y ) ) )$ iff $\mathbb{Q} \not\models ( \exists x ) ( \exists y ) ( x < y \wedge \neg ( \exists z ) ( x < z \wedge z < y ) ) )$ iff $\mathbb{Q} \not\models ( \exists y ) ( 0 < y \wedge \neg ( \exists z ) ( 0 < z \wedge z < y ) ) )$ iff each of the following are true $\mathbb{Q} \not\models ( 0 < -1 \wedge \neg ( \exists z ) ( 0 < z \wedge z < -1 ) ) )$; $\mathbb{Q} \not\models ( 0 < 0 \wedge \neg ( \exists z ) ( 0 < z \wedge z < 0 ) ) )$; $\mathbb{Q} \not\models ( 0 < 1 \wedge \neg ( \exists z ) ( 0 < z \wedge z < 1 ) ) )$; Look at these three separately: $\mathbb{Q} \not\models ( 0 < -1 \wedge \neg ( \exists z ) ( 0 < z \wedge z < -1 ) ) )$ iff $\mathbb{Q} \not\models 0 < -1 $ or $\mathbb{Q} \not\models \neg ( \exists z ) ( 0 < z \wedge z < -1 ) ) )$, which is true since $\mathbb{Q} \not\models 0 < -1$. $\mathbb{Q} \not\models ( 0 < 0 \wedge \neg ( \exists z ) ( 0 < z \wedge z < 0 ) ) )$ iff $\mathbb{Q} \not\models 0 < 0 $ or $\mathbb{Q} \not\models \neg ( \exists z ) ( 0 < z \wedge z < 0 ) ) )$, which is true since $\mathbb{Q} \not\models 0 < 0$. 
$\mathbb{Q} \not\models ( 0 < 1 \wedge \neg ( \exists z ) ( 0 < z \wedge z < 1 ) ) )$ iff either $\mathbb{Q} \not\models 0 < 1$ or $\mathbb{Q} \not\models \neg ( \exists z ) ( 0 < z \wedge z < 1 )$ iff (since $\mathbb{Q} \models 0 < 1$) $\mathbb{Q} \not\models \neg ( \exists z ) ( 0 < z \wedge z < 1 )$ iff $\mathbb{Q} \models ( \exists z ) ( 0 < z \wedge z < 1 )$ iff at least one of the following is true $\mathbb{Q} \models 0 < -1 \wedge -1 < 1$; $\mathbb{Q} \models 0 < 0 \wedge 0 < 1$; $\mathbb{Q} \models 0 < \frac 12 \wedge \frac 12 < 1$; $\mathbb{Q} \models 0 < 1 \wedge 1 < 1$; $\mathbb{Q} \models 0 < 2 \wedge 2 < 1$; and the third one is clearly true.
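The recursive definition above is directly implementable: formulas become nested tuples, an assignment is a tuple of rationals, and the $\exists$ clause only tests the finitely many representative points (the given values, their midpoints, and one point beyond each end). The encoding below is my own sketch of that procedure, not code from the text:

```python
from fractions import Fraction

def sat(phi, env=()):
    """Finitistic Q |= phi[env]; variables are indices into env, and
    ('exists', psi) binds the new index len(env)."""
    op = phi[0]
    if op == 'lt':
        return env[phi[1]] < env[phi[2]]
    if op == 'eq':
        return env[phi[1]] == env[phi[2]]
    if op == 'not':
        return not sat(phi[1], env)
    if op == 'and':
        return sat(phi[1], env) and sat(phi[2], env)
    if op == 'exists':
        if env:
            s = sorted(set(env))
            cands = [s[0] - 1, s[-1] + 1] + s \
                  + [(x + y) / 2 for x, y in zip(s, s[1:])]
        else:
            cands = [Fraction(0)]   # no parameters: one representative suffices
        return any(sat(phi[1], env + (c,)) for c in cands)
    raise ValueError(op)

# the density axiom, in the negated form checked in the answer:
# not exists x exists y (x < y and not exists z (x < z and z < y))
density = ('not', ('exists', ('exists', ('and', ('lt', 0, 1),
           ('not', ('exists', ('and', ('lt', 0, 2), ('lt', 2, 1))))))))
assert sat(density)
assert not sat(('exists', ('lt', 0, 0)))   # irreflexivity survives the check
```

Only finitely many rationals are ever inspected per quantifier, which is the "purely finitistic" point of the exercise.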
H: Alternative proof for the principle of the Iterated Suprema The back of the book gave a proof similar to the proof here: Proving principle of the Iterated Suprema, but I proved it the following way before I checked the back of the book. Could someone verify this proof? Let $X,Y\neq\emptyset$ and let $f:X\times Y\to\mathbb{R}$ have a bounded range in $\mathbb{R}$. Also, let $f_{1}(x)=\sup\{f(x,y):\: y\in Y\}$ and $f_{2}(y)=\sup\{f(x,y):\: x\in X\}$. Prove that \begin{align*} \sup\{f(x,y):\: x\in X,y\in Y\}&=&\sup\{f_{1}(x):\: x\in X\}\\&=&\sup\{f_{2}(y):\: y\in Y\} \end{align*} Proof. Since $\text{Ran}f$ is a bounded subset of $\mathbb{R}$ we have that the set $\{f(x,y):\: x\in X,y\in Y\}$ indeed has a supremum (and an infimum). If we let $u=\sup\{f(x,y):\: x\in X,y\in Y\}$ then we have that $(\forall x\in X)(\forall y\in Y)(f(x,y)\leqslant u)$; this implies that $(\forall x\in X)(f_{1}(x)\leqslant u)$ and $(\forall y\in Y)(f_{2}(y)\leqslant u)$, hence $\sup f_{1}(x)\leqslant u$ and $\sup f_{2}(y)\leqslant u$. Now suppose $\sup f_{1}(x)<u$. Then, setting $u'=\sup f_{1}(x)$, we have $f(x,y)\leqslant f_{1}(x)\leqslant u'<u$ for all $x$ and $y$, hence $u'$ is an upper bound on $f(x,y)$; but this contradicts our selection of $u$ as the supremum of $\{f(x,y):\: x\in X,y\in Y\}$, thus $(u\leqslant\sup f_{1}(x)\leqslant u)\implies\sup f_{1}(x)=u$. Likewise, if we suppose that $\sup f_{2}(y)<u$ then, setting $u''=\sup f_{2}(y)$, we have $f(x,y)\leqslant f_{2}(y)\leqslant u''<u$ for all $x$ and $y$, which means that $u''$ is an upper bound on $f(x,y)$; again this contradicts our selection of $u$ as $\sup\{f(x,y):\: x\in X,y\in Y\}$. Consequently, we have that $(u\leqslant\sup f_{2}(y)\leqslant u)\implies(\sup f_{2}(y)=u)$. So indeed we have that \begin{align*} \sup\{f(x,y):\: x\in X,y\in Y\}&=&\sup\{f_{1}(x):\: x\in X\}\\&=&\sup\{f_{2}(y):\: y\in Y\} \end{align*} AI: The proof you give is correct.
H: Is it necessarily true that if a polynomial is irreducible in $\mathbb Z_n$ ($n$ is prime) then it is irreducible in $\mathbb{Q}$? I have played around with a couple of examples and I've consistently seen a pattern where the polynomials that are irreducible in $\mathbb{Z}_n$ are irreducible in $\mathbb{Q}$. Can anyone challenge this statement? How about in the reals - that is, if a polynomial is irreducible in $\mathbb Z_n$ does the same hold in $\mathbb{R}$? I have not taken a formal course in abstract algebra, but I am curious as an outsider what the reasoning is like from a mathematician's point of view. AI: Edit: I am interpreting $\mathbb Z_n$ as $\mathbb Z/n\mathbb Z$; is this the case here? This is indeed true if the leading coefficient of the polynomial is equal to $1$ and the coefficients are in $\mathbb Z$: if $p$ is reducible over $\mathbb Q$, then $p=qr$, where (by Gauss's lemma) $q$ and $r$ can be taken to have integer coefficients and leading coefficient $1$; then $q$ and $r$ remain nonconstant in $\mathbb Z_n$, so $p$ is reducible in $\mathbb Z_n$. If the leading coefficient is not $1$, then this is not always true: $2x^2+3x+1$ is irreducible over $\mathbb Z_2$, but not irreducible over $\mathbb Q$, since it is equal to $(2x+1)(x+1)$. Finally, $x^2-2$ is irreducible in $\mathbb Z_3$, but not in $\mathbb R$.
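Both counterexamples in the answer are easy to verify numerically (for a quadratic over a field, irreducibility is equivalent to having no root):

```python
# x^2 - 2 has no root mod 3, hence is irreducible over Z_3,
# yet over R it factors as (x - sqrt(2))(x + sqrt(2))
assert all((x * x - 2) % 3 != 0 for x in range(3))

# 2x^2 + 3x + 1 is reducible over Q: it equals (2x + 1)(x + 1)
p = lambda x: 2 * x * x + 3 * x + 1
assert all(p(x) == (2 * x + 1) * (x + 1) for x in range(-10, 11))
```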
H: Does the limit of function of two variables exist If $(a,b) \in \Bbb R^2$ and $a+b=1$, calculate the following limit or demonstrate that there is none: $$ \lim_{(x,y) \to (a,b)} \frac {y\sin(\pi x)}{x+y-1}$$ I'm doing this: $\lim_{(x,y) \to (a,b)} \frac {y\sin(\pi x)}{x+y-1}=\lim_{(x,y) \to (0,0)} \frac {(y+b)\sin(\pi (x+a))}{x+y}$ $$ f(x,y)= \frac { (y+b) \sin(\pi (x+a))}{x+y} \Rightarrow f(0,\frac {1}{n})= \frac{(\frac{1}{n}+b) \sin \pi a}{ \frac{1}{n}}= \sin \pi a + n b \sin \pi a $$ From this I get: if $a \in \Bbb Z$ then $\lim _{n \to \infty} f(0,\frac {1}{n})=0$, and if $a \notin \Bbb Z$ the function has no limit along this path. Why does this function have no limit even when $a \in \Bbb Z$? Help me, please. AI: The denominator $x+y-1$ is null on a whole line passing through your limit point $(a,b)$. The numerator $y\sin(\pi x)$, on the other hand, is null only on the axis $y=0$ and on the lines $x=k$ with $k \in \Bbb Z$. Hence close to the point $(a,b)$ you will easily find points where the function is arbitrarily large: just take any point very close to the line $x+y=1$ but not too close to the lines mentioned above.
H: calculate velocity using parametric functions If I have the following parametric functions, where distance is in meters and time in seconds: $x = 8t$, $y = -5t^2 + 6t$, and I want to find the initial velocity, can I do the following: $v^2 = 8^2 + 6^2 = 100$, so $v = 10$ m/s. Is this correct? Also, if I were to add air friction into the horizontal movement (but not into the vertical movement), how would I go about calculating velocity? Let's say $x$ is now: $x = 8t - 0.1t^2$. AI: Yeah, you are correct, but it is better to do it like this: $dx/dt = 8$ and $dy/dt = -10t+6$. Here, the $x$-component of velocity is constant, therefore initially it will also be 8. The $y$-component of velocity varies with time, so we put $t=0$ to get the initial vertical velocity, which comes out to be 6. Now, $v^2 = (dx/dt)^2\big|_{t=0} + (dy/dt)^2\big|_{t=0}$, which gives a magnitude of initial velocity of 10. Calculating velocity when there is air friction also becomes easy when we follow this approach.
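A small sketch of the answer's computation (names are mine): differentiate each component, evaluate at $t=0$, and combine with the Pythagorean theorem.

```python
import math

def speed(t, drag=False):
    vx = 8.0 - (0.2 * t if drag else 0.0)   # d/dt of 8t, or of 8t - 0.1 t^2
    vy = -10.0 * t + 6.0                    # d/dt of -5 t^2 + 6 t
    return math.hypot(vx, vy)               # sqrt(vx**2 + vy**2)

assert speed(0) == 10.0              # initial speed, as computed above
assert speed(0, drag=True) == 10.0   # the drag term -0.2 t vanishes at t = 0
```

With the drag model $x=8t-0.1t^2$ the initial speed is unchanged, since the extra term contributes nothing to the derivative at $t=0$; at later times $v_x=8-0.2t$ must be used.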
H: Finding root between two points The function $f:[0,1]\to \mathbb{R}$ is continuous, $f(0)<0$, $f(1)>0$ and there is one root in between. Using $f(0)$ and $f(1)$, the expression $\frac{1\cdot f(0)-0\cdot f(1)}{f(0)-f(1)}$ would approximate the root. Question is, if $f'(0)$ and $f'(1)$ are available too, what would an expression look like? I have been trying to solve and simplify some cubic equations with no success, and I think the problem might be simpler. AI: If you write a cubic equation you then have to find the solution of that equation... which is not very simple. Instead I would suggest taking the two tangent lines at the points $(0,f(0))$ and $(1,f(1))$, finding the two intersections of these lines with the $x$-axis, and taking the mean value of them if both points are inside the interval. Otherwise, if only one point is inside the interval, take that point. If both points are outside, fall back to your original method. But see also Newton's method.
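A sketch of the suggested heuristic (the fallback logic follows the answer's description; the function name is mine):

```python
def tangent_estimate(f, df, a=0.0, b=1.0):
    """Estimate a root of f in [a, b] from endpoint values and derivatives:
    average the x-intercepts of the two tangent lines when they land inside
    [a, b]; otherwise fall back to the secant (regula falsi) point."""
    inside = []
    for x in (a, b):
        if df(x) != 0:
            r = x - f(x) / df(x)        # x-intercept of the tangent at x
            if a <= r <= b:
                inside.append(r)
    if inside:
        return sum(inside) / len(inside)
    return (b * f(a) - a * f(b)) / (f(a) - f(b))   # the original expression

# for an affine f both tangents hit the root exactly
est = tangent_estimate(lambda x: x - 0.3, lambda x: 1.0)
assert abs(est - 0.3) < 1e-12
```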
H: Finding the remainder when a polynomial is divided by a product of numbers whose remainders are known We have a polynomial $f(x)$ with rational roots that leaves remainders $15, 2x + 1$ when divided by the polynomials $x - 3, (x-1)^2$ respectively. What is the remainder when $f(x)$ is divided by $(x - 3)(x-1)^2$? From the remainder theorem we know that, $f(3) = 15$. I thought quite hard about this and tried various methods, but couldn't come up with anything useful. AI: Write $$f(x)=A(x-3)(x-1)^2+B(x-1)^2+C(x-3)+D$$ where $A$ is an integer polynomial or constant $\implies 15=f(3)=4B+D\implies D=15-4B$ $\implies f(x)=A(x-3)(x-1)^2+B(x-1)^2+C(x-3)+15-4B$ Now, $f(x)\equiv C(x-3)+15-4B\pmod{(x-1)^2}\equiv Cx+15-4B-3C $ But $f(x)\equiv 2x+1\pmod{(x-1)^2}$ $\implies C=2$ and $15-4B-3C=1$ and so on HINT: Write $$f(x)=A(x-3)(x-1)^2+Bx^2+Cx+D$$ $\implies 15=f(3)=9B+3C+D\ \ \ \ (1)$ Now, $f(x)\equiv Bx^2+Cx+D\pmod{(x-1)^2}\equiv(2B+C)x+D-B\pmod{(x-1)^2} $ But $f(x)\equiv2x+1\pmod{(x-1)^2}$ $\implies 2B+C=2\ \ \ \ (2)$ and $D-B=1\ \ \ \ (3)$ Can you solve for $B,C,D$ from $(1),(2),(3)$?
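Solving the hint's three linear equations gives $B=2$, $C=-2$, $D=3$, i.e. remainder $r(x)=2x^2-2x+3$ (this completes the "and so on"; the check below is my own addition):

```python
r = lambda x: 2 * x * x - 2 * x + 3     # candidate remainder 2x^2 - 2x + 3

# consistency with the hint's equations: 9B + 3C + D = 15, 2B + C = 2, D - B = 1
B, C, D = 2, -2, 3
assert 9 * B + 3 * C + D == 15 and 2 * B + C == 2 and D - B == 1

assert r(3) == 15                       # remainder 15 on division by x - 3
# r(x) - (2x + 1) = 2(x - 1)^2, so r leaves remainder 2x + 1 mod (x - 1)^2
assert all(r(x) - (2 * x + 1) == 2 * (x - 1) ** 2 for x in range(-10, 11))
```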
H: Testing the convergence of the given integral and finding its value Does the integral $$\int_{-1}^1\sqrt{1+x\over 1-x}dx$$ exist? If so, find its value. I compared it with $1\over \sqrt {1-x}$ and showed that it is convergent. To find its value I used the substitution $x=\sin\theta$ after multiplying the integrand by its conjugate, which gives me the value of the integral as $-\pi$. Is this correct? Is there a direct/better way of finding the integral using the convergence? AI: If $0<a<1$ then $$\int_{-a}^a\sqrt{\frac{1+x}{1-x}}dx=\int_{-a}^a\frac{1}{\sqrt{1-x^2}}dx+\int_{-a}^a\frac{x}{\sqrt{1-x^2}}dx =2\arcsin(a)+0.$$ The last two integrals can be evaluated by finding the respective antiderivatives $\arcsin(x)$ and $-\sqrt{1-x^2}$, although the second integral is $0$ because the integrand is odd, so you don't need to find the antiderivative. Letting $a\to 1^-$ shows that the integral converges to $2\arcsin(1)=\pi$.
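A numerical cross-check (my own addition): under $x=\sin t$ the integrand times $\cos t$ simplifies to $1+\sin t$ on $(-\pi/2,\pi/2)$, since $\sqrt{\tfrac{1+s}{1-s}}=\tfrac{1+s}{\sqrt{1-s^2}}$, and its integral is $\pi$, matching $2\arcsin(1)$:

```python
import math

n = 100_000
h = math.pi / n
# midpoint rule for the transformed integrand 1 + sin(t) on (-pi/2, pi/2)
total = h * sum(1 + math.sin(-math.pi / 2 + (i + 0.5) * h) for i in range(n))

assert abs(total - math.pi) < 1e-6
```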
H: Injection from $\Bbb N$ to the set of functions from the naturals to $\{0,1\}$. Let $\Bbb N$ be the set of natural numbers and let $F$ be the set of total functions from $\Bbb N$ to $\{0,1\}$. Construct a total injective function $g_1\colon \Bbb N\to F$. Sounds easy, but sorta lost. Thank you. AI: For each $n\in\Bbb N$, $g_1(n)$ must be a function from $\Bbb N$ to $\{0,1\}$, and you want to design $g_1$ so that if $m\ne n$, then $g_1(m)\ne g_1(n)$. Suppose that $A$ is any subset of $\Bbb N$; I can define a function $$\chi_A:\Bbb N\to\{0,1\}:n\mapsto\begin{cases} 1,&\text{if }n\in A\\ 0,&\text{if }n\notin A\;. \end{cases}$$ This function $\chi_A$ is called the indicator function or characteristic function of $A$. Notice that if $A,B\subseteq\Bbb N$, and $A\ne B$, then $\chi_A\ne\chi_B$: there is at least one $n\in\Bbb N$ such that $\chi_A(n)\ne\chi_B(n)$. Thus, one way to solve the problem is to find for each $n\in\Bbb N$ a subset $A_n$ of $\Bbb N$ such that $A_m\ne A_n$ whenever $m\ne n$, and let $g_1(n)=\chi_{A_n}$: since the sets $A_n$ are all distinct, their indicator functions $\chi_{A_n}$ are all distinct, and $g_1$ will be injective.
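A concrete choice, sketched in code: take $A_n=\{0,1,\dots,n\}$, so $g_1(n)=\chi_{A_n}$; for $m\neq n$ the two indicator functions differ at $\max(m,n)$.

```python
def g1(n):
    # indicator function of A_n = {0, 1, ..., n}
    return lambda k: 1 if k <= n else 0

# injectivity witnessed on a finite window: g1(m) and g1(n)
# disagree at the input max(m, n)
N = 15
tables = [tuple(g1(n)(k) for k in range(N + 1)) for n in range(N)]
assert len(set(tables)) == N
assert g1(3)(3) == 1 and g1(3)(4) == 0
```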
H: Find the fundamental group of torus with two points removed I'm trying to find a fundamental group of Torus \ {two points}. Any help would be really appreciated. AI: Can you see that if you remove two points from the torus, that the resulting topological space $X$ deformation retracts to the wedge of three circles? It follows that the fundamental group is the free group on three generators, $\mathbb{Z}*\mathbb{Z}*\mathbb{Z}$.
H: Find the value of: $\lim_{x\to\infty} \sqrt{x+\sqrt{x}}-\sqrt{x}$ I tried to multiply by the conjugate: $\displaystyle\lim_{x\to\infty} \frac{\left(\sqrt{x+\sqrt{x}}-\sqrt{x}\right)\left(\sqrt{x+\sqrt{x}}+\sqrt{x}\right)}{\sqrt{x+\sqrt{x}}+\sqrt{x}}=\displaystyle\lim_{x\to\infty} \frac{x-x+\sqrt{x}}{\sqrt{x+\sqrt{x}}+\sqrt{x}}=\displaystyle\lim_{x\to\infty} \frac{\sqrt{x}}{\sqrt{x+\sqrt{x}}+\sqrt{x}}$ I don't even know if my rewriting has helped at all. How would I go about doing this? AI: Your rewriting has helped: dividing the numerator and denominator by $\sqrt{x}$ we get $$\lim_{x\to\infty}\frac{1}{\sqrt{1+\frac1{\sqrt x}}+1}=\frac12$$
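Numerically, the rewritten form makes the limit $\tfrac12$ visible, and it is also the stable way to evaluate the expression for large $x$ (the original form loses digits to cancellation):

```python
import math

f = lambda x: math.sqrt(x + math.sqrt(x)) - math.sqrt(x)       # original form
g = lambda x: 1 / (math.sqrt(1 + 1 / math.sqrt(x)) + 1)        # rewritten form

assert abs(g(1e12) - 0.5) < 1e-6
assert abs(f(1e12) - g(1e12)) < 1e-4   # both forms agree near the limit
```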
H: How is the formula for partition of a set derived. I don't understand how we get from the first step to the second step. I understand that it would be the product of all of the subsets, but I don't understand the simplification that is made thereafter AI: You’re starting with $$\binom{n}{n_1}\binom{n-n_1}{n_2}\binom{n-n_1-n_2}{n_3}\ldots\binom{n-n_1-\ldots-n_{r-1}}{n_r}\;.\tag{1}$$ In general $\dbinom{n}k=\dfrac{n!}{k!(n-k)!}$, so a typical factor in $(1)$ is $$\binom{n-n_1-\ldots-n_m}{n_{m+1}}=\frac{(n-n_1-\ldots-n_m)!}{n_{m+1}!\color{brown}{(n-n_1-\ldots-n_m-n_{m+1})!}}\;,\tag{2}$$ and $(1)$ is equal to $$\frac{n!}{n_1!(n-n_1)!}\cdot\frac{(n-n_1)!}{n_2!(n-n_1-n_2)!}\cdot\ldots\cdot\frac{(n-n_1-\ldots-n_{r-1})!}{n_r!(n-n_1-\ldots-n_r)!}\;.\tag{3}$$ Notice that the brown factor $\color{brown}{(n-n_1-\ldots-n_m-n_{m+1})!}$ in the denominator of the fraction in $(2)$ is the numerator of the next factor in $(1)$, which is $$\binom{n-n_1-\ldots-n_m-n_{m+1}}{n_{m+2}}=\frac{\color{brown}{(n-n_1-\ldots-n_m-n_{m+1})!}}{n_{m+2}!(n-n_1-n_2-\ldots-n_m-n_{m+1}-n_{m+2})!}\;.$$ Thus, every numerator in $(3)$ except the first one, $n!$, cancels with the second factorial in the previous denominator: the second numerator, $(n-n_1)!$, cancels with the $(n-n_1)!$ in the first denominator, the third numerator, $n-n_1-n_2)!$, cancels with the second factorial in the second denominator, and so on. The only things that don’t cancel are the first numerator, $n!$, the first factorials in each denominator, and the second factorial in the last denominator. Thus, the product $(3)$ collapses to $$\frac{n!}{n_1!n_2!n_3!\ldots n_r!(n-n_1-\ldots-n_r)!}\;,\tag{4}$$ and since $n_1+n_2+\ldots+n_r=n$, $$(n-n_1-n_2-\ldots-n_r)!=0!=1\;,$$ and $(4)$ turns out to be just $$\frac{n!}{n_1!n_2!n_3!\ldots n_r!}\;.$$
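The telescoping can also be confirmed computationally: the product of binomial coefficients and the collapsed form $n!/(n_1!\cdots n_r!)$ agree. A small sketch (function names are mine):

```python
from math import comb, factorial

def multinomial_product(ns):
    # C(n, n1) * C(n - n1, n2) * ... as in the first displayed expression
    remaining, out = sum(ns), 1
    for k in ns:
        out *= comb(remaining, k)
        remaining -= k
    return out

def multinomial_closed(ns):
    # the collapsed form n! / (n1! n2! ... nr!)
    out = factorial(sum(ns))
    for k in ns:
        out //= factorial(k)
    return out

assert multinomial_product([3, 2, 4]) == multinomial_closed([3, 2, 4]) == 1260
```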
H: prove Cauchy sequence I have a problem with this exercise: Suppose that $(a_n)$ is a sequence such that $a_{2n} \le a_{2n+2} \le a_{2n+3} \le a_{2n+1}$ for all $n \ge 0$. Show that this sequence is Cauchy iff $\lim_{n \to \infty} |a_n-a_{n+1}|=0.$ Please give me some suggestions about my work. Thank you very much. AI: If $(a_n)$ is a Cauchy sequence then it is convergent, so $\lim_{n\to\infty}|a_n-a_{n+1}|=0$. Conversely, with the hypothesis, the two sequences $(a_{2n})$ and $(a_{2n+1})$ are adjacent (the first is increasing, the second is decreasing, $a_{2n}\le a_{2n+1}$, and $|a_{2n}-a_{2n+1}|\to 0$), so they converge to the same limit; then a simple use of the definition shows that $(a_n)$ is convergent, hence a Cauchy sequence.