H: Integration of an exponential function with an extra constant I have an integral as shown below. $$ \int_{-a}^a e^{jkx(sin\theta cos\phi-\alpha) }dx$$ Normally I would define the $sin\theta co\phi$ as $x$ and solution of the integral would become $$a\dfrac{sinX}{X}$$ What does $\alpha$ term change in the calculations? Note that $j$ is the imaginary unit like $i$. AI: Well, we have the following integral: $$\mathcal{I}_\text{n}\left(\alpha\right):=\int_{-\alpha}^\alpha\exp\left(\text{n}x\right)\space\text{d}x\tag1$$ Substitute $\text{u}=\text{n}x$, so: $$\mathcal{I}_\text{n}\left(\alpha\right)=\frac{1}{\text{n}}\int_{-\text{n}\alpha}^{\text{n}\alpha}\exp\left(\text{u}\right)\space\text{du}=\frac{1}{\text{n}}\cdot\left[\exp\left(\text{u}\right)\right]_{-\text{n}\alpha}^{\text{n}\alpha}=\frac{\exp\left(\text{n}\alpha\right)-\exp\left(-\text{n}\alpha\right)}{\text{n}}\tag2$$ Now, you can substitute: $$\text{n}=\text{k}\left(\sin\left(\theta\right)\cos\left(\phi\right)-\beta\right)i\tag3$$ So: $$\exp\left(\alpha\text{k}\left(\sin\left(\theta\right)\cos\left(\phi\right)-\beta\right)i\right)=$$ $$\cos\left(\alpha\text{k}\left(\sin\left(\theta\right)\cos\left(\phi\right)-\beta\right)\right)+\sin\left(\alpha\text{k}\left(\sin\left(\theta\right)\cos\left(\phi\right)-\beta\right)\right)i\tag4$$ $$\exp\left(-\alpha\text{k}\left(\sin\left(\theta\right)\cos\left(\phi\right)-\beta\right)i\right)=\exp\left(\alpha\text{k}\left(\beta-\sin\left(\theta\right)\cos\left(\phi\right)\right)i\right)=$$ $$\cos\left(\alpha\text{k}\left(\beta-\sin\left(\theta\right)\cos\left(\phi\right)\right)\right)+\sin\left(\alpha\text{k}\left(\beta-\sin\left(\theta\right)\cos\left(\phi\right)\right)\right)i\tag5$$ So: $$\frac{\exp\left(\text{n}\alpha\right)-\exp\left(-\text{n}\alpha\right)}{\text{n}}=\frac{2 \sin (\alpha \text{k} (\beta -\sin (\theta ) \cos (\phi )))}{\beta \text{k}-\text{k} \sin (\theta ) \cos (\phi )}\tag6$$
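A quick numerical sanity check of the closed form in (6) is sketched below. The parameter values for $a$, $k$, $\theta$, $\phi$, $\alpha$ are arbitrary illustrative choices (not from the question), and a simple midpoint Riemann sum stands in for the integral.

```python
import numpy as np

# Illustrative values only (assumptions, not taken from the question)
a, k, theta, phi, alpha = 1.3, 2.0, 0.7, 0.4, 0.25
u = np.sin(theta) * np.cos(phi) - alpha          # the constant multiplying j*k*x

# Midpoint Riemann sum of the integral over [-a, a]
x = np.linspace(-a, a, 400001)
dx = x[1] - x[0]
numeric = np.sum(np.exp(1j * k * (x[:-1] + dx / 2) * u)) * dx

closed = 2 * np.sin(k * a * u) / (k * u)         # the sinc-type closed form above
print(numeric, closed)   # imaginary part ~ 0 by symmetry; real parts agree
```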
H: Is every invertible matrix a composition of elementary row operations? If $A$ is invertible that means we can multiply it on the left by matrices $E_1, ..., E_k$ that correspond to elementary row operations until we get the identity matrix $I$. In other words, we get $E_1 \cdots E_k A = I$. Hence the inverse of $A$ is $E_1 \cdots E_k$, which is a composition of elementary row operations. Since the inverse of a composition of elementary row operations is also a composition of elementary row operations, $A$ is also a composition of elementary row operations. I'm just wondering if this means that the set of all compositions of elementary row operations is the same as the set of all invertible matrices. I just find this interesting and somewhat surprising because I've never heard about it described this way. Am I thinking about it correctly or is there a flaw in my reasoning? AI: Given an $n$-dimensional vector space $V$ and an ordered basis $\mathscr B$ of $V,$ it is true that one can identify a linear operator $T : V \to V$ with an $n \times n$ matrix $A.$ Explicitly, we can compute $T(v_i)$ for each of the vectors $v_i \in \mathscr B,$ and we can subsequently form the matrix $A$ whose $i$th column is the coordinate vector of $T(v_i)$ with respect to $\mathscr B.$ Like you mentioned, if $A$ is an invertible $n \times n$ matrix, then one can compute the inverse of $A$ by a sequence of elementary row operations $E_1, \dots, E_k.$ Each elementary row operation is a linear operator $E_i : V \to V.$ Composition of linear operators corresponds to multiplication of the matrices that represent the linear operators, so as you said, we find that $E_k \cdots E_1 A = I.$ (Here, I am slightly abusing notation and using $E_i$ for both the linear operator and the matrix that represents it with respect to the ordered basis $\mathscr B.$) From this, you can see (as you have) that an invertible $n \times n$ matrix gives rise to a composition of elementary row operations. Conversely, we may start with a composition $E_k \circ \cdots \circ E_1$ of elementary row operations $E_i : V \to V.$ Observe that for each elementary row operation $E_i,$ there exists an elementary row operation $F_i$ such that $F_i \circ E_i = I,$ from which it follows that $(F_1 \circ \cdots \circ F_k) \circ (E_k \circ \cdots \circ E_1) = I.$ (Basically, $F_i$ is the linear operator that does the "opposite" of what $E_i$ does. For instance, if $E_i$ sends $R_1$ to $3R_1 - R_2,$ then $F_i$ sends $R_1$ to $\frac{1}{3}(R_1 + R_2),$ and we have that $F_i \circ E_i = I = E_i \circ F_i$). Consequently, we have that $T = E_k \circ \cdots \circ E_1$ is an invertible linear operator, hence there exists an invertible $n \times n$ matrix that corresponds to $T.$ (Use the construction from the first paragraph above.) From this, you can see (as you have) that a composition of elementary row operations gives rise to an invertible $n \times n$ matrix.
H: Trying to solve $\frac{f(x) f(y) - f(xy)}{3} = x + y + 2$ for $f(x)$ Let $f : \mathbb{R} \to \mathbb{R}$ be a function such that $$\frac{f(x) f(y) - f(xy)}{3} = x + y + 2$$ for all $x,y \in \mathbb{R}$. Find $f(x)$. I started by multiplying both sides by $3$, which gets $$f(x)f(y)-f(xy)=3x+3y+6.$$ I tried to find something by substituting $y=0$, so $$f(x)f(0)-f(0)=3x+6.$$ However, I don't see anything useful. How would I continue on this problem. AI: Setting $x=y=0$ in the functional equation yields $$f(0)^2-f(0)-6=0 \implies f(0)=3 \text{ or } f(0)=-2$$ Set $y=0$ to obtain $$f(0)(f(x)-1)=3x+6$$ Now $f$ can be determined by substituting the values obtained for $f(0)$. Substituting back into the original functional equation tells us that only one of them is valid. Finally, we have a unique solution for $f$: $$f(x) = x+3$$
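As a quick check of the final step (which candidate value of $f(0)$ survives), here is a small SymPy sketch that substitutes both candidates back into the functional equation; only the candidates themselves come from the answer above.

```python
import sympy as sp

x, y = sp.symbols('x y')

# Candidates from f(0) = 3 and f(0) = -2 via f(0)(f(x) - 1) = 3x + 6
candidates = [x + 3, -sp.Rational(3, 2) * x - 2]

for f in candidates:
    lhs = (f * f.subs(x, y) - f.subs(x, x * y)) / 3
    print(f, sp.simplify(lhs - (x + y + 2)) == 0)   # only x + 3 prints True
```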
H: How should I calculate $||\underline{u}-\underline{w}||_{2}$? I'm trying to calculate $||\underline{u}-\underline{w}||_{2}$ where: $$ u=\begin{bmatrix}1 & 3\\ 2 & 2\\ 3 & 1 \end{bmatrix},\,\,\, w=\begin{bmatrix}3 & 1\\ 2 & 2\\ 1 & 3 \end{bmatrix} $$ I'm not familiar with the $|| \cdot ||_2$ operator and I'm not sure how to search it in the search engine. How should I calculate $||\underline{u}-\underline{w}||_{2}$? AI: What you're looking for is Matrix Norm $\|\cdot\|_p$ given by $$\|A\|_p:=\max_{|x|_p=1}|Ax|_p$$ where $|x|_p$ is the vector norm. In particular for $p=2,$ $$\|A\|_2=\sqrt{\lambda_{\max}(A^*A)}$$ where $\lambda_{\max}(A)$ denotes the largest eigenvalue of $A.$
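For the specific matrices in the question, a short NumPy sketch of the spectral-norm computation (assuming the intended norm is the induced 2-norm described in the answer):

```python
import numpy as np

u = np.array([[1., 3.], [2., 2.], [3., 1.]])
w = np.array([[3., 1.], [2., 2.], [1., 3.]])
A = u - w

lam_max = np.max(np.linalg.eigvalsh(A.T @ A))   # largest eigenvalue of A^T A
print(np.sqrt(lam_max))                         # 4.0
print(np.linalg.norm(A, 2))                     # same value via the built-in spectral norm
```

If $\underline{u}$ and $\underline{w}$ were instead meant as stacked vectors, the Frobenius/Euclidean norm `np.linalg.norm(A)` would be the relevant quantity; for this particular $A$ it also happens to equal $4$.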
H: Question about notation: linear map For a linear map $A\in Hom(U,V)$ and a linearly independent subset $L\subseteq U$ where $U$ and $V$ are vector spaces, what does the following statement mean: $A$ is injective $\left.\Rightarrow A\right|_{\text {Span } L}$ is injective $\left.\Leftrightarrow A\right|_{L}$ is injective and $A(L)$ is linearly independent. I do not know what the notation $\left.A\right|_{\text {Span } L}$ means and therefore don't understand the statement. AI: Since $L \subset U$, one can talk of the linear span of $L$. The span of a subset is the smallest subspace that contains the given set. The span of a subset always exists (it is the intersection of all the subspaces that contain the given set). The span of a subset $S$ is usually denoted $span(S)$. Since $A$ is a linear transformation on $U$, you can restrict it to the subspace $span(L)$ and this new linear transformation is denoted by $A|_{span(L)}$.
H: Proving a given set is not a vector space Let $V$ denote the set of ordered pairs of real numbers. If $(a_1,a_2)$ and $(b_1,b_2)$ are elements of $V$ and $c\in \mathbb{R}$, define $$(a_1,a_2)+(b_1,b_2)=(a_1+b_1,a_2b_2)$$ and $$c(a_1,a_2)=(ca_1,a_2)$$ Is $V$ a vector space over $\mathbb{R}$ with these operations? Justify your answer. Here is my answer: If $V$ were a vector space, then since $(a_1,a_2)+(0,1)=(a_1,a_2)$,$\;$$(0,1)$ would be the zero vector. But for the scalar $0$, we have $0(0,2)=(0,2)\neq(0,1)$. This violates the following theorem about vector spaces: $\forall \;x\in V\;(0x=0)$. Is my answer correct? I worry because this is an indirect argument. I don't explicitly show that some vector space axiom fails to hold. Can problems like this one be solved indirectly this way? This is my first time studying linear algebra so I'm trying to be extra careful so I don't mess up my foundations. AI: Yes, this is completely correct. Well done! If you want a more direct approach: what axiom(s) in the vector space definitions are not satisfied?
H: How many kinds of average are there? I have learned that the mean, median and mode are three kinds of average. Are there other kinds of average or are there just these three? AI: There are two main categories of averages. Mathematical averages use computation on the values to find the "average"; these are the arithmetic, geometric, and harmonic means. Positional averages are concerned with finding values within the data set only; these are the median and the mode. However, if you wish to, you can arbitrarily create new "averages" for use in different situations.
H: Finding $x$ such that $2^{4370} \equiv x \ (\mathrm{mod} \ 31)$ How to find $x$ such that $2^{4370} \equiv x \ (\mathrm{mod} \ 31)$? The task is to compute $2^{4370} \ (\mathrm{mod} \ 4371$). I know it's $4371=3 \cdot 31 \cdot 47$, so it's $2 \equiv -29 \ (\mathrm{mod} \ 31)$. With Fermat's little theorem it's $-29^{30} \equiv 1 \ (\mathrm{mod} \ 31)$ $\Rightarrow 2^{4370} \equiv -29^{4370} \equiv -29^{145 \cdot 30+20} \equiv -29^{20} \ (\mathrm{mod} \ 31)$. But how to continue? I want to find a smaller number than $-29^{20}$ without a calculator. The calculator says $x=1$, but how to find it without? AI: One way to proceed is to find an $n$ to get $2^n$ close (either on the left or right) of $31$. Well $\quad 2^5 = 32 \equiv 1 \;(\text{ mod 31})$ Couldn't come out that much better; yes, $0 \lt 1$, but... So $\quad \displaystyle 2^{4370} = ({2^5})^{874} \equiv (1)^{874} \;(\text{ mod 31}) \equiv 1 \;(\text{ mod 31})$ Fermat's little theorem works like a charm for modulus $3$ (resp. $47$) since $3 -1 = 2$ divides $4370$ (resp. $47 - 1 = 46$ divides $4370$). But even though $30$ doesn't divide $4370$, we can still use it when working in modulus $31$. Copying J.W.Tanner's comment, $\quad 2^{4370}\equiv2^{4350}2^{20}\equiv(2^{30})^{145}2^{20}\equiv 2^{20} \bmod31$ Applying any 'divide and conquer' tactic you'll find that $\quad 2^{20} \equiv1\bmod31$
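The whole computation can be sanity-checked with Python's three-argument `pow`, which performs modular exponentiation directly:

```python
print(pow(2, 4370, 31))                    # 1, matching (2^5)^874 ≡ 1 (mod 31)
print(pow(2, 4370, 3), pow(2, 4370, 47))   # 1 and 1, by Fermat's little theorem
print(pow(2, 4370, 4371))                  # 1, combining the three prime moduli 3, 31, 47
```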
H: How do I obtain the pdf of a random variable, which is a function of random variable. A random variable, $X$, has a value of zero with probability $1/3$, and follows a uniform distribution over $[-1, 1]$ with probability $2/3$. How can I derive the pdf of $X$? In my opinion, $X$ can be formulated as $$X=\cases{0, &if $Y \le 1,$\\Z, &if $Y>1$,}$$ where $Y$ is a random variable uniformly distributed over $[0,3]$ and $Z$ is a random variable uniformly distributed over $[-1, 1]$. If the conditions (e.g., $Y\le1$ and $Y >1$) are defined in terms of $X$ (e.g., $X\le1$ and $X>1$), the pdf of $X$ may be obtained by finding the cdf of $X$ and differentiating it. However, $Y$ makes me crazy. How can I find the pdf of $X$? Actually, my final goal is to find $\mathbb{E}\left[-\log_2 f_X(x) \right]$, where $f_X(x)$ denotes the pdf of $X$. AI: Finding the cdf is the best route forward. For a uniform distribution on $[-1,1]$, the cdf is $\frac{1}{2}(x+1)$ for $x \in [-1,1]$ With a probability of 1/3 at 0, you have to split it up to 3 cases, where x is negative, 0, and positive. x has a 1/3 probability of being negative, with a uniform distribution, indicating that the cdf is $\frac{1}{3}(x+1)$ on $[-1,0)$. For $x=0$ the cdf will equal $\frac{2}{3}$. For $x>0$, the cdf will equal $\frac{2}{3}+\frac{1}{3}x$. To find the pdf, you can differentiae, but you will not get a finite value at $x=0$, which is to be expected for a discrete mass point. For a pdf, it is sometimes written with a delta function.
H: If I have two consecutive integers and I have the following formula $n(m+1)^2$, is it even or odd? I am helping my sister study for the Praxis exam from this study book, and I reviewed a question based on number theory. I see it involves consecutive integers; my question is: If $m$ and $n$ are consecutive integers, which can never be even? Choose all that apply. However, I am focusing on this particular one, \begin{equation}n(m+1)^2 \end{equation} My question is based on these two substitutions which tell me that this equation is odd: \begin{equation}1(2+1)^2 =9\end{equation}\begin{equation}3(2+1)^2=27 \end{equation} These are odd. However, in the back of the book it says that this can be even; are my substitutions wrong? AI: The expression $n(m+1)^2$ being even or odd depends on $n$: $$n \text{ even}\Longrightarrow n(m+1)^2\text{ even}$$ $$n \text{ odd}\Longrightarrow n(m+1)^2\text{ odd}$$ That's because $n$ and $m$ are consecutive integers, so $n$ even implies $m+1$ even (and analogously for odd). Seeing that the product of two even integers is even (and the product of two odd integers is odd), and that the square of an even integer is even (and the square of an odd integer is odd), which is a corollary of the statement above (these statements have an easy proof using prime factors), you can conclude the implications I gave at first.
H: Prove that $\forall x \in \mathbb{R}:f(x + 2\pi) = f(x)$ I'm trying to prove that if $f : \mathbb{R} \rightarrow \mathbb{R} \phantom{2}$ is a function that verifies : $\exists\, K \in \mathbb{R^+}, \phantom{1}\forall\, x,y \in \mathbb{R}: \lvert f(y)-f(x) \rvert \le K\lvert \cos y - \cos x \rvert$ then $\forall x \in \mathbb{R}:f(x + 2\pi) = f(x)$ Maybe it could be useful the fact that I recently proved that f is a Lipschitz function since by mean value theorem $\dfrac{|\cos y - \cos x|}{|y-x|}\le 1 \implies |\cos y - \cos x|\le|x-y|$ $\forall x,y \in \mathbb{R} $ as $\cos$ is differentiable over all of $\mathbb R$. So $$\exists\, K \in \mathbb{R^+}, \forall\, x,y \in \mathbb{R}: |f(y)-f(x)| \le K|\cos y - \cos x|\le K|x-y|$$ The proof seems easy but I'm not sure how can I prove it, any suggestions? AI: Put $y=x+2\pi.$ We get $|f(x+2\pi)-f(x)|\leq K|\cos(x+2\pi)-\cos x|=0$ as $\cos$ is $2\pi$-periodic.
H: Finding all zeros of $f(z)=\sin(\frac{z}{\pi})$ I'm trying to find the zeros of the function $f(z)=\sin(\frac{z}{\pi})$. I began by noting that we can define $g(z)=\sin(z)$ as $g(z)=\frac{1}{2i}\left(e^{iz}-e^{-iz}\right)$, so: $$f(z)=\frac{1}{2i}\left(e^{i\frac{z}{\pi}}-e^{-i\frac{z}{\pi}}\right)$$ Now we need to find all values of $z$ that satisfy: $$e^{i\frac{z}{\pi}}=e^{-i\frac{z}{\pi}} \Leftrightarrow e^{2i\frac{z}{\pi}}=1$$ This is only true if $2\frac{z}{\pi}=2k\pi$ with $k \in \mathbb{Z}$. So we end up with $z=k\pi^2$. But my complex analysis textbook says that the zeros of that funcion are all $z\in \mathbb{Z}$. So my question is: Is the textbook wrong? Or am I making some kind of obvious mistake that I did no notice? AI: $\displaystyle \sin{\frac z{\pi}} = 0 = \sin(k\pi)$ $\displaystyle \frac z{\pi} = k\pi$ $\displaystyle z = k\pi^2$ Your textbook answer is wrong for the problem you've given. If the problem had been to find the zeroes of $\displaystyle \sin(z\pi)$, that would've been correct.
H: Finding a diagonal matrix $B$ and a unitary matrix $C$ that satisfy $B=C^{-1}AC$. The matrix $A$ is given as $$A=\frac{1}{9} \begin{bmatrix} 4+3i & 4i & -6-2i \\ -4i & 4-3i & -2-6i \\ 6+2i & -2-6i & 1 \end{bmatrix}$$ Find a diagonal matrix $B$ and a unitary matrix $C$ that satisfy $B=C^{-1}AC$. Could anyone help me deal with this + explain the algorithm to find $B$ and $C$ in a little bit more detailed way? I didn't understand everything well by reading the book. I've already found the eigenvalues $\lambda_1=1, \ \ \lambda_2=i, \ \ \lambda_3=-i$ and the eigenvectors $v_1=(-2i \ \ -2 \ \ 1)^T, \ \ v_2=(i \ \ -1/2 \ \ 1)^T, \ \ v_3=(-i/2 \ \ 1 \ \ 1)^T$. AI: Now, you divide each of those vectors by its norm, thereby getting$$w_1=\frac13(-2i\ \ -2\ \ 1)^T,\ w_2=\frac13(2i\ \ -1\ \ 2)^T\text{, and }w_3=\frac13(-i\ \ 2\ \ 2)^T.$$And then you take$$C=\frac13\begin{bmatrix}-2i&2i&-i\\-2&-1&2\\1&2&2\end{bmatrix},$$whose columns are the vectors $w_1$, $w_2$, and $w_3$. And you're done: with this $C$, the diagonal matrix is $B=C^{-1}AC=\operatorname{diag}(1,\,i,\,-i)$, the eigenvalues appearing in the same order as the corresponding columns of $C$.
H: why $x_m$ converges weakly to $x_\infty$? Let $(X,\|.\|)$ be reflexive Banach space and $Y$ be a closed separable subspace of $X$ $\big((Y ,\|.\|)$is clearly a separable reflexive Banach space$\big)$, then the dual space $Y^*$ of $Y$ is separable. Let $\{y_n^*\}$ be a countable dense subset of $Y^*$. Let $\{x_m\}$ be a bounded sequence in $X$, such that $$ \langle y_n^*, x_m\rangle\underset{m}{\to }z_n\qquad \forall n $$ With $z_n\in\mathbb{R}$. We suppose that the sequence $\{x_m\}$ has a subsequece $\{x_{m_i}\}$ weakly convergente in $Y$ to an element $x_\infty$. Then $$ \langle y_n^*, x_\infty\rangle=z_n\qquad \forall n\qquad (*) $$ Since $\{y_n^*\}$ separates the points of $Y$, it follows from $(*)$ that every limit point of $\{x_m\}$ must equal $x_\infty$. My problem I don't understand why : we can conclude that $x_m$ converges weakly to $x_\infty$ This result was used in the article Infinite-Dimentional Extension of a Theorem of Komlos of Erik J.Balder, on pages 186-187. In the context of the article, the auther says that: "$\{s_n(t)\}$ converges weakly to a point $y_t$ in $Y$." But i don't understand why. An idea please. AI: By Banach-Alaoglu we know that any norm bounded sequence in a reflexive Banach space has a weakly convergent subsequence. Thus any subsequence of $(x_m)$ must have have a weakly convergent subsequence, which by $(*)$ must have $x_\infty$ as a limit. Thus every subsequence has a further subsequence converging weakly to $x_\infty$, so we must have the original sequence converging weakly to $x_\infty$. This holds due to the following fact: In any topological space, if any sequence $(x_n)$ satisfies the property that every subsequence contains a convergent subsequence converging to the same limit, then $(x_n)$ itself converges to that limit. This is easy to prove by contradiction.
H: Prove $\sqrt{a^2 + ab + b^2} + \sqrt{b^2 + bc + c^2} + \sqrt{c^2 + ac + a^2} \ge \sqrt{3}(a + b + c)$ Prove $\sqrt{a^2 + ab + b^2} + \sqrt{b^2 + bc + c^2} + \sqrt{c^2 + ac + a^2} \ge \sqrt{3}(a + b + c)$ So, using AM-GM, or just pop out squares under square roots we can show: $$\sqrt{a^2 + ab + b^2} + \sqrt{b^2 + bc + c^2} + \sqrt{c^2 + ac + a^2} \ge \sqrt{3}(\sqrt{ab} + \sqrt{bc} + \sqrt{ca}),$$ i.e. we need next to show that $(\sqrt{ab} + \sqrt{bc} + \sqrt{ca}) \ge (a + b + c)$, but i don't know how to do it. Any help appreciated AI: By Minkowski (triangle inequality) $$\sum_{cyc}\sqrt{a^2+ab+b^2}=\sum_{cyc}\sqrt{\left(a+\frac{b}{2}\right)^2+\frac{3}{4}b^2}\geq$$ $$\geq\sqrt{\left(\sum_{cyc}\left(a+\frac{b}{2}\right)\right)^2+\frac{3}{4}\left(\sum_{cyc}b\right)^2}=\sqrt3(a+b+c).$$
H: Is $f$ differentiable at $0$? and if it is what is the value of $f'(0)$ I'm studing that if $f : \mathbb{R} \rightarrow \mathbb{R} \phantom{2}$ is a function that verifies : $\exists\, K \in \mathbb{R^+}, \phantom{1}\forall\, x,y \in \mathbb{R}: \lvert f(y)-f(x) \rvert \le K\lvert \cos y - \cos x \rvert \Rightarrow f$ is a Lipschitz function $\forall x \in \mathbb{R}:f(x + 2\pi) = f(x)$ Then f is differentiable at $0$ The exercise seems easy but I'm not sure how can I prove it, any suggestions?And if it's differentiable what's the value of $f'(0)$? AI: Use identity $$ \cos x - \cos y = -2 \sin \frac{x+y}{2}\cdot \sin\frac{x-y}{2} $$ to obtain the estimate (say, WLOG $x\ge y$, so we can use $|\sin \alpha| \le \alpha$ for positive $\alpha$) $$ |f(x) - f(y)| \le 2K \left| \sin \frac{x-y}{2} \cdot \sin \frac{x+y}{2}\right| \le K|x-y|\left|\sin\frac{x+y}{2}\right| $$ Can you now prove that $f$ is Lipschitz and establish differentiability at zero?
H: Find $\lim_\limits{x\to 0}\frac{1-(\cos x)^{\sin x}}{x}$ using little o $\lim_\limits{x\to 0}\frac{1-e^{\sin(x) \ln(\cos x)}}{x}=\lim_\limits{x\to 0}\frac{1-e^{(x+o(x))\ln(1-\frac{x^2}{2}+o(x^2))}}{x}=\lim_\limits{x\to0}\frac{1-e^{(x+o(x))(-\frac{x^2}{2}+o(x^2))}}{x}$. If this is correct, what will happen with $o()$ after multiplication? Will it be $o(x),o(x^2)$ or $o(x^3)$ and how to finish it afterwards? I am solving this problem using this method because the question asks me to do so. AI: Proceed like this: \begin{align} & \frac{1 - (\cos x)^{\sin x}}{x} \\ = & \frac{1 - \exp(\sin x \ln(\cos x))}{x} \\ = & \frac{1 - \exp\left(\sin x \ln(1 - \frac{1}{2}x^2 + o(x^3))\right)}{x} \quad (\text{expand } \cos x)\\ = & \frac{1 - \exp\left(\sin x \times \left(- \frac{1}{2}x^2 + o(x^3)\right)\right)}{x} \quad (\text{expand } \ln(1 + x)) \\ = & \frac{1 - \left[1 + \left(\sin x \times \left(- \frac{1}{2}x^2 + o(x^3)\right)\right) + o(x^4)\right]}{x} \quad (\text{expand } e^x) \\ = & \frac{\frac{1}{2}x^2\sin x + o(x^3)}{x} \\ = & \frac{1}{2}x\sin x + o(x^2) \to 0 \end{align} as $x \to 0$.
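A quick symbolic cross-check of the final value (not of the little-$o$ bookkeeping itself) using SymPy:

```python
import sympy as sp

x = sp.symbols('x')
print(sp.limit((1 - sp.cos(x) ** sp.sin(x)) / x, x, 0))   # 0, as derived above
```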
H: How to find such U and V such that U$\cap$V=$\emptyset$ Assume that A and B are closed disjoint subsets. Then there exist open sets U$\supset$A and V$\supset$B with U$\cap$V =$\emptyset$ I am said to deduce it from (Urysohn's Lemma). Let A, B be two disjoint closed subsets of a metric space. There exists a continuous function f: X$\rightarrow$R such that 0$\leq$f$\leq$1 and f = 0 on A and f = 1 on B. To prove this one i considered the function f(x)= $\frac{d_A(x)}{d_A(x)+d_B(x)}$ as suggested and proved the following. But i have no idea how to use this to conclude U$\cap$V=$\emptyset$ AI: Well, your function is particular for a metric space. In this case, it is well defined, is equal to $1$ exactly on $B$ and to $0$ exactly on $A$. Moreover, it is continuous so that the preimage of any open subset of $[0,1]$ is open in $E$. Let $U = f^{-1}([0,1/4))$ and $V=f^{-1}((3/4,1])$. Prove they are solutions of your problem.
H: Is a symmetric positive semi-definite matrix always decomposable? Given a symmetric positive semi-definite matrix $A \in R^{n\times n}$ with $Det(A) \geq 0$, does there always exist a real matrix $B$ such that $A = B \cdot B^T$? If so why? Or why not? AI: The answer is positive. There exist several decompositions like that. See for example Cholesky_decomposition. Another way is to use the fact that a symmetric positive semi-definite real matrix is diagonalizable in an orthonormal basis with non-negative eigenvalues. See symmetric matrices.
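A minimal numerical sketch of the second construction mentioned in the answer (spectral decomposition), on a small illustrative matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])               # a small symmetric PSD example

lam, Q = np.linalg.eigh(A)               # A = Q diag(lam) Q^T with lam >= 0
B = Q @ np.diag(np.sqrt(np.clip(lam, 0.0, None)))

print(np.allclose(B @ B.T, A))           # True: A = B B^T
```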
H: Limit problem $\lim_{T \to 0} \frac{1}{T} \int_0^T S_u du$ Consider $dS_t = \mu S_t dt + \sigma S_t dW_t$ with initial $S_0 > 0$. We may obtain that $S_t = S_0 \exp\left[(\mu - \frac{1}{2}\sigma^2) t +\sigma W_t\right]$. Hence we may consider average value of $S_T$. $A(T) = \displaystyle \frac{1}{T}\int_0^T S_0 e^{(\mu - \frac{1}{2}\sigma^2)t + \sigma W_t} dt$. We want to determine $\lim_{T \to 0} A(T)$. The main problem that we can't manage substitutions like $t = Tu$, because it's SDE. We should be careful about $W_t$. I've tried to consider $\xi \stackrel{d}{=} e^{\sigma W_t}$ and obtain $\mathbb{E} \displaystyle \frac{1}{T}\int_0^T \xi e^{(\mu - \frac{1}{2}\sigma^2)t}dt$. But that gives me limit in $L_1$. Not actually we interested in. Any ideas? Maybe I miss something? AI: The fundamental theorem of calculus shows that $$\lim_{T \to 0} \frac{1}{T} \int_0^T f(t) \, dt = f(0)$$ for any continuous function $f$. Applying this result for $f(t):=S_0(\omega) \exp\left(t \left[\mu-\frac{\sigma^2}{2} \right] + \sigma W_t(\omega) \right)$ with $\omega$ fixed, we find that $$\lim_{T \to 0}\frac{1}{T} \int_0^T S_0(\omega) \exp\left(t \left[\mu-\frac{\sigma^2}{2} \right] + \sigma W_t(\omega) \right) \, dt = S_0(\omega).$$
H: If $a$ is relatively prime to $m$ and $a \equiv b\ (\textrm{mod}\ m)$, is $b$ relatively prime to $m$? If $a$ is relatively prime to $m$ and $a \equiv b\ (\textrm{mod}\ m)$, is $b$ relatively prime to $m$? Hint: Recall that $a \equiv b\ (\textrm{mod}\ m)$ if and only if $a$ and $b$ differ by a multiple of $m$. So we have that $\gcd(a, m)=1$ and we would like to see if $\gcd(b,m) = 1.$ So since $a$ and $m$ are relatively prime doesn’t that imply that $m$ is a prime? If so, then the only way that $\gcd(b, m) = 1$ would hold is if $b$ is a prime(?). From the hint we have that $b=a-k\cdot m$, and since we’re multiplying $m$ by some constant $k$ the only way that $b$ would be a prime is that if $a$ is some number that results in $b$ being prime when subtracted from $k \cdot m$(?). By the same logic that $m$ would be prime since $\gcd(a,m)=1$ we would also have that $a$ is a prime. Now do we have some theorem that would state that subtracting some prime $p$ from some number that’s not a prime would result in a prime or am I going totally in a wrong direction here? All help would be appreciated. AI: The answer is yes. Note that if $d$ is a common divisor of $b$ and $m$, and $a = b + mk$ (for some integer $k$), then $d$ must also be a divisor of $a$; since $\gcd(a,m)=1$, this forces $d=\pm1$, so $\gcd(b,m)=1$. (Note also that $\gcd(a,m)=1$ does not require $m$ to be prime.)
H: Help with Conditional probability for a future event I have the following problem, in a statement I am given the following conditional probabilities $$P(x_i | x_{i-1}) = 0.7$$ $$P( \overline{x_i} | x_{i-1}) = 0.3$$ $$P(x_i | \overline{x_{i-1}}) = 0.4$$ $$P(\overline{x_i} | \overline{x_{i-1}}) = 0.6$$ These indicate the probability of an event occurring or not occurring given whether or not it occurred the day before. Based on that, the following tree of probabilities is made up until a day $i=3$, but in theory it would be up to a day $i=n$ tree of probabilities here We are asked to express the probability of the event occurring on a day $i$ given that it did not occur today, i.e. $$P(x_i | \overline{x_0})$$ I can't find an expression for what is required for any day $i$ in the future. Until day two, based on the probability tree, it is clear that there would be two possible paths and the probability would be $$P( x_2 | \overline{x_{1}}) P(\overline{x_{1}}|\overline{x_0}) + P(x_2 | x_{1}) P(x_{1}|\overline{x_0})$$ $$ 0.6 \cdot 0.4 + 0.4 \cdot 0.7 $$ But when generalizing and finding an expression for a day $i>2$, since we haven't seen the random variable theory, I don't know how to do it. AI: Describe the problem using the tools of Linear Algebra and Matrices. Let $A=\begin{bmatrix}0.7&0.4\\0.3&0.6\end{bmatrix}$ and let $v_n=\begin{bmatrix}P(x_n)\\P(\overline{x_n})\end{bmatrix}$ We have $v_{n}=Av_{n-1}=A^nv_0$ and in particular $P(x_n\mid \overline{x_0})$ will be the first entry of $A^n\begin{bmatrix}0\\1\end{bmatrix}$. Armed with this set up, one can now apply all the usual tools and methods available from Linear algebra, in particular the use of eigenvalues and of diagonalization. By diagonalizing $A$ as $SDS^{-1}$ with $D$ diagonal, it follows that $A^n = SD^nS^{-1}$ at which point we can simplify $A^n v_0$. I leave the details to you to work out.
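A short NumPy sketch of the matrix formulation in the answer, reproducing the hand-computed value $0.52$ for day $2$ and showing how $P(x_i\mid\overline{x_0})$ is obtained for any $i$:

```python
import numpy as np

A = np.array([[0.7, 0.4],
              [0.3, 0.6]])
v0 = np.array([0.0, 1.0])     # start state: the event did not occur today

for i in (1, 2, 3, 10):
    p = (np.linalg.matrix_power(A, i) @ v0)[0]   # first entry = P(x_i | not x_0)
    print(i, round(p, 6))
# 1 -> 0.4, 2 -> 0.52, 3 -> 0.556, ...; the values approach the stationary value 4/7
```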
H: Inequality involving ranks I'm trying to prove the following inequality, $$ \rho(AB) + \rho(BC) \le \rho(B) + \rho(ABC) $$ where $A, B, C \in L(V)$, $V$ is a finite-dimensional vector space and $\rho(A)$ means the rank of the linear operator $A$. I know that $\min(\rho(AB), \rho(BC)) \le \rho(B)$. So I tried to prove that $\max(\rho(AB), \rho(BC)) \le \rho(ABC))$ but couldn't get anywhere. Any hint is appreciated, thanks. AI: Hint: It is helpful to rewrite the inequality as follows: $$ \rho(B) - \rho(AB) \geq \rho(BC) - \rho(ABC) $$ Similarly, we could write $\rho(B) - \rho(BC) \geq \rho(AB) - \rho(ABC)$. Further hint: Note that $\rho(AB) = \rho(B) - \dim(\ker(A) \cap \operatorname{im}(B))$, where "im" denotes the image/range and "ker" denotes the kernel/nullspace. We can prove this as follows: let $T$ denote the map $T:\operatorname{im}(B) \to V$ defined by $T(x) = A(x)$ (i.e. $T = A|_{\operatorname{im}(B)}$, the restriction of $A$ to $\operatorname{im}(B)$). We note that $\rho(T) = \rho(AB)$, and by the rank-nullity theorem we have $$ \rho(T) = \dim\operatorname{im}(B) - \dim \ker (T) = \rho(B) - \dim (\ker(A) \cap \operatorname{im} (B)). $$ Similarly, we observe that $\rho(A[BC]) = \rho(BC) - \dim(\ker(A) \cap \operatorname{im}(BC))$.
H: Modified Newton method and contraction principle I am studying Newton's method modified by the book Zorich, Mathematical analysis II, page 39,40: It seems to me, if I make no mistakes, that there is a problem in the derivative of $ A (x) $. The author says that $ | A '(x) | = | [f' (x_0)] ^ {- 1} \cdot f '(x) | $, while I would say that: $$ | A '(x) | = | 1- [f' (x_0)] ^ {- 1} \cdot f '(x) | $$ Am I wrong? AI: Yes, your observation is correct. This is also confirmed by the fact that this corrected expression is the smaller the closer $f'(x_0)$ is to $f'(x)$, that is, the closer the step is to the Newton method. Perhaps they mixed this up with the derivative of the Newton step where this first term indeed cancels, $$ N(x)=x-[f'(x)]^{-1}f(x)\implies N'(x)=I-I+[f'(x)]^{-1}[f''(x)][f'(x)]^{-1}f(x) $$
H: Show that $K_{r, s}$ is planar if and only if $\min$ {r, s} ≤ 2. So I've done some draws and this is true, but How can I argument to prove that, by the maximum number of edges in $K$ ? Or by $d(v)$ Any help? AI: This basically boils down to a special case of Kuratowski's theorem. In the special case of bipartite graphs, this will say that your graph will be planar if and only if it does not contain $K_{3, 3}$ as a minor. First, you would show that $K_{3, 3}$ is not planar (and hence neither is any bipartite graph containing $K_{3, 3}$ as a minor), and then show that $K_{r, 2}$ is in fact planar for any $r$ (you can just explicitly draw $K_{r, 2}$ as a plane graph in this case to demonstrate planarity).
H: Range of the function $f:\mathbb{Z} \to (\mathbb{Z}/4\mathbb{Z},\mathbb{Z}/6\mathbb{Z})$ Let $f:\mathbb{Z} \to (\mathbb{Z}/4\mathbb{Z},\mathbb{Z}/6\mathbb{Z})$ be the function given by $f(n)=(n$ mod 4,$n $ mod $6)$. Then $(1)(0$ mod $ 4 ,3$ mod $6)$ is in the image of $f$ $(2)(a$ mod $ 4 ,b$ mod $6)$ is in the image of $f$ ,for all even integers $a$ and $b$. $(3)$ image of $f$ has exactly $6$ elements. $(4)$kernel of $f=24\mathbb{Z}$ Some general observation by me:- Let $n\in \mathbb{Z}$ .Then $n=4q+r=6q_1+r_1$ for some integers $q,q_1$ and $0\le r\lt 4, 0\le r_1 \lt 6$ Hence $r-r_1=6q_1-4q=2(3q_1-2q)$ , thus the difference of the remainders is always even which discards option $(1)$ kernel of $f=12\mathbb{Z}$ . So $(4)$ false Even integers under congruence modulo 4 are of the type $4q,4q+2$ and that under congruence modulo $6$ are of the type $6q',6q'+2,6q'+4$ Whatever even integers $a$ and $b$ may be $(a$ mod $4,b$ mod $6)$ will belong to $\{0,2\}×\{0,2,4\}$ .Now I show each of the ordered pairs $(0,0),(0,2),(0,4),(2,0),(2,2),(2,4)$ is in the image of $f$ by the following list of numbers:- $4×2=6×1+2 \to (0,2)$ $4×3=6×2 \to (0,0)$ $4×4=6×2+4 \to (0,4)$ $4×1+2=6×1 \to (2,0)$ $4×3+2=6×2+2\to (2,2)$ $4×5+2=6×3+4 \to (2,4)$ Hence $(2)$ is true. Obviously $(3)$ is false since $f(25)=(1,1)$ and others are also there. The problem with my answer is that it seems too childish and long. Please give a review on my answer and suggest a better approach. Thanks a lot. AI: Since $\mathbb Z$ is generated by $1$, the image of $f$ is generated by $f(1)=(1,1)$, which has order $\operatorname{lcm}(4,6)=12$. Therefore, the image has size $12$, which of course is consistent with $\ker f = 12 \mathbb Z$. This settles (3) and (4). Finally, by the Chinese remainder theorem, the equations $n \equiv a \bmod 4$ and $n \equiv b \bmod 6$ have a common solution iff $a \equiv b \bmod \gcd(4,6)$. This settles (1) and (2).
H: $\frac{w_k}{x-x_k}$ expansion into decreasing powers of $x$ How can $\dfrac {(w_k)}{(x-x_k)}$ becomes:$$\dfrac {w_k}x+\dfrac {w_kx_k}{x^2}+\dfrac {w_kx_k^2}{x^3}+...$$ I couldnt figured out the process. AI: Hint: You can use this: $$\dfrac 1 {1-x}=\sum_{n=0}^\infty x^n$$ For $|x| <1$ and note that you have: $$\dfrac {w_k}{(x-x_k)}=\dfrac {w_k}{x}\dfrac {1}{(1-x_k/x)}$$
H: Proof that the set of polynomial in $Z[x]$ with linear term coefficient equal to $0$ is a domain Can anybody help me on that? I'm having trouble AI: Hint: Let $R$ be the set in question. It is enough to prove $R$ is a subring of $\mathbb Z[x]$. It'll automatically be a domain. To prove that $R$ is a subring, it is enough to prove that it is closed under subtraction and multiplication, since it clearly contains $0$ and $1$. There is no need to check associativity etc, because these hold in $\mathbb Z[x]$. All the above is generic. To actually prove something, you'll need to use the definition of $R=\mathbb Z + x^2 \mathbb Z[x]$ or as the set of all polynomials $f$ such that $f'(0)=0$.
H: write down functions in terms of complex coordinate $z=x+iy$ Indentifying $\Bbb R^2$ with the complex plane $\Bbb C$ via the map $(x,y)→ x+iy$, write down the following functions in terms of complex coordinate $z = x + iy$. i) the translation by the vector (1,2) ii) a rotation anticlockwise by $\theta$ iii) a reflection in the x-axis iv) reflection in the line $y=x+10$ v) reflection in the line $y=x$ vi) inversion in the circle centered at $(0,0)$ with radius r Here are my answers, I am just looking for verification. Thanks! ANSWERS: i) $t(z) = z + 1 + 2i$ ii) $t(z) = e^{i\theta}z$ iii) $t(z) = \overline z$ iv) ? v) ? vi) ? AI: Think about what a reflection about the line $y = x+10$ means. For instance, the point $(0,10)$ is on the line and gets mapped to the same point. Similarly, the point $(1,11)$ stays where it is. So the point $(1,10)$ would be mapped to $(0,11)$ and vice versa. What kind of transformation does this? Well, in general, it seems that you would want to take flip the $x$- and $y$-values, and then add $(-10,10)$; i.e., $$(x,y) \mapsto (y-10,x+10).$$ Well, while the above works, it seems to lack some rigor. So let's think about this more methodically. You may note that the reflection about the line $y = x$ is just $$(x,y) \mapsto (y,x).$$ So if we do a translation of $y = x+10$ by $(0,-10)$, followed by the reflection, then another translation back by $(0,10)$, we have the composition of mappings $$(x,y) \mapsto (x,y-10) \mapsto (y-10,x) \mapsto (y-10,x+10),$$ which is what we found earlier. How do we write this transformation in terms of complex numbers? We clearly want $$z = x + yi \mapsto (y-10) + (x+10)i.$$ But is there a function of $z$ that does this? Well, one way is to observe $$z + \bar z = (x+yi)+(x-yi) = 2x, \\ z - \bar z = (x+yi)-(x-yi) = 2yi,$$ so that $$(y-10) + (x+10)i = \left(\frac{z - \bar z}{2i} - 10 \right) + \left(\frac{z + \bar z}{2} + 10\right)i = -10 + (\bar z + 10)i.$$ I will skip the next part and go to the last. Note that inversion in the circle with radius $r$ requires that $$|t(z)||z| = r^2,$$ that is to say, the product of the magnitudes of the inverted point and the original point must equal the square of the radius. Moreover, we must also have $$\arg(t(z)) = \arg(z),$$ meaning their angles remain the same. So if $$z = |z|e^{i\arg(z)},$$ what is $t(z)$?
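The derived formula for the reflection in $y=x+10$ can be spot-checked numerically; a small sketch:

```python
# t(z) = -10 + (conj(z) + 10) * i, the reflection in the line y = x + 10
def t(z):
    return -10 + (z.conjugate() + 10) * 1j

for z in (0 + 10j, 1 + 11j, 1 + 10j):
    print(z, "->", t(z))
# points on the line (0+10j, 1+11j) are fixed; 1+10j -> 0+11j, as in the derivation
```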
H: How to prove that $\lim_{n\to\infty}\int_{0}^{2}\frac{x^n}{x+1}=\infty$ I'm asked to prove that $$\lim_{n\to\infty}\int_{0}^{2}\frac{x^n}{x+1}=\infty$$ I've tried using the fact that for every $$x\in[0,2], \frac{x^n}{x+1}\ge\frac{x^n}{3}$$ but I can't seem to be able to calculate $\int_{0}^{2}\frac{x^n}{3}$ using only darboux sums. I did notice that for partition of [0,2] to n equal intervals, where every interval is in the general form of $[\frac{2k-2}{n},\frac{2k}{n}]$, the upper and lower Darboux sums satisfy $$U(f,p)=\frac{2^{n+1}}{n^{n+1}}\sum_{k=1}^{n}k^n\gneq \frac{2^{n+1}}{n+1}\gneq\frac{2^{n+1}}{n^{n+1}}\sum_{k=1}^{n-1}k^n=L(f,p)$$ but I'm not sure that I can determine anything from that because the aforementioned partition is not arbitrary and moreover does not satisfy $$\lim_{n\to\infty}\Omega(f,p)=0$$ Any idea on how to calculate $\int_{0}^{2}\frac{x^n}{3}$ using only darboux sums or is there another approach I'm missing? I'm not allowed to calculate the integral by using antiderivative, as we haven't formalized it yet AI: Hint: $ \int_0^2 \frac{x^n}{1+x}dx \geq \int_{3/2}^2 \frac{x^n}{1+x}dx \geq \int_{3/2}^2 \frac{(3/2)^n}{1+2}dx = (1/6)(3/2)^n $.
H: Is there an infinite set with a discrete cyclic order? Let's call a cyclic order of a set discrete if every cut of the order is a jump. A cut of a cyclic order is a linear order $<$ such that $x < y < z \implies (x, y ,z)$ for any elements $x$, $y$, $z$ of the set. A cut of a cyclic order is a jump if it has the least and the greatest elements. Clearly, the induced cyclic order of integers is not discrete since the natural linear order of integers does not have the least and the greatest elements. However, there are other ways of ordering integers cyclically, e.g. https://math.stackexchange.com/a/2196717/427611. I am wondering if it is possible to find a discrete cyclic order of integers or maybe of some other infinite set. If it is not possible, what would be the easiest way to prove that? By cyclic order I mean a total strict cyclic order defined in here: https://en.wikipedia.org/wiki/Cyclic_order#The_ternary_relation AI: Given a cyclic order on $A$ and an element $a\in A$, we can define $<$ as $$ x<y\iff [x,y,a]\lor x\ne y=a$$ (i.e., we "cut" immediately behind $a$). This obviously has $a$ as a maximal element. Assume that there is also a minimal element, no matter what $a$ we pick. Call it $S(a)$, and we have a successor map on $A$. By the same argument, we obtain a predecessor map and this is clearly inverse to the successor map. Using these (and picking an element $a_0\in A$) we can map $\iota\colon\Bbb Z\to A$ such that no elements of $A$ are between the images of consecutive integers. If $\iota$ is not injective, then it must be periodic and so $\iota(\Bbb Z)$ finite. In that case $\iota$ must be onto because there is no way to "squeeze" any further elements of $A$ in-between. As we are interested in the case of infinite $A$, we can ignore this case. [Thanks to a comment by Eric Wolsey] Now we can make a new cut "above $\Bbb Z$", i.e., we define $$x\prec y\iff \exists n\in\Bbb Z\colon [x,y,\iota(n)]. $$ This does not have a maximal element.
H: $f_n:(0,\infty) \to\mathbb{R}, f_n(x)=\frac{1}{1+nx}$ uniform convergence If $(f_n)_{n\in \mathbb{N}}$ is pointwise convergent: Is the limit function continuous? Is it uniform convergent? $f_n:\mathbb{R} \to\mathbb{R}, f_n(x)=xsin(nx)$ $f_n:(0,\infty) \to\mathbb{R}, f_n(x)=\frac{1}{1+nx}$ diverges $\lim_{n\to\infty}(\frac{1}{1+nx})=0$, so this is pointwise convergent and also continuous. I don't know how to prove that it's uniform convergent. How do I continue with $0-\frac{1}{1+nx}<\epsilon$? AI: It is not uniformly convergent, since it converges pointwise to $0$, but $(\forall n\in\Bbb N):f_n\left(\frac1n\right)=\frac12$.
H: Trouble getting to an explicit solution Given the ODE: $(y^2-1)\frac{dy}{dx} =4xy^2$ I can get to an implicit solution easily enough: $y+\frac{1}{y} = 2x^2$+c. However, I've been given an explicit solution: $y(x)=x^2-c_2\pm\sqrt{(c_2-x^2)^2-1}$ and I can't figure out how to get there. I'm teaching myself ODEs so there's a very good chance that there is some obvious algebraic manipulation trick that I'm missing. I've been wracking my brains for hours over this so any help would be appreciated. AI: Your equation has the form $$ y+ y^{-1}= f(x) $$Multiply by $y$, isolate, and use the quadratic formula: $$ y^2 - f(x)\cdot y + 1 = 0 $$ $$ y = \frac{f(x) \pm \sqrt{f(x)^2-4}}{2} $$
H: Finding the general formula for the sequence with $d_0=1$, $d_1=-1$, and $d_k=4 d_{k-2}$ Suppose that we want to find a general formula for the terms of the sequence $$d_k=4 d_{k-2}, \text{ where } d_0=1 \text{ and } d_1=-1$$ I have done the following: \begin{align*}d_k=4d_{k-2}&=2^2d_{k-2} \\ &=2^2\left (2^2d_{(k-2)-2}\right )=2^4d_{k-4} \\ & =2^4\left (2^2d_{(k-4)-2}\right )=2^6d_{k-6} \\ & = 2^6\left (2^2d_{(k-6)-2}\right )=2^8d_{k-8} \\ & = \ldots \\ & = 2^id_{k-i}\ , \ \ i \text{ even}\end{align*} If $k$ even, then at the last step we have for $i=k$ (since $k$ is the maximum even number $\leq k$) : $d_k=2^kd_{k-k}=2^kd_0=2^k$. If $k$ odd, then at the last step we have for $i=k-1$ (since $k-1$ is the maximum even number $\leq k$) : $d_k=2^{k-1}d_{k-(k-1)}=2^{k-1}d_1=-2^{k-1}$. How can we find the general form for the terms of the recurrence relation? Or do we distinguish cases when $k$ is even and odd? I am interested to find the general formula without using the characteristic equation. AI: You found $d_k=2^k$ when $k$ is even and $d_k=-2^{k-1}$ when $k$ is odd. To put this in one formula, note that $\dfrac{1+(-1)^n}2$ is $0$ when $n$ is odd and $1$ when $n$ is even, whereas $\dfrac{1-(-1)^n}2$ is $1$ when $n$ is odd and $0$ when $n$ is even. So you could say $d_k=2^k\dfrac{1+(-1)^k}2-2^{k-1}\dfrac{1-(-1)^k}2,$ which simplifies to $2^{k-2}\left[1+3(-1)^k\right]$.
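A tiny Python check that the combined closed form agrees with the recurrence for the first dozen terms:

```python
# Recurrence d_k = 4 d_{k-2}, d_0 = 1, d_1 = -1
d = [1, -1]
for k in range(2, 12):
    d.append(4 * d[k - 2])

# Closed form 2^(k-2) * (1 + 3*(-1)^k), written with integer arithmetic
closed = [2 ** k * (1 + 3 * (-1) ** k) // 4 for k in range(12)]
print(d == closed)   # True
```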
H: Find maximum $\theta$ such that $|x + \theta a| \leq b$ I have an optimisation problem: $$ \max_{\theta} \quad \theta \\ \text{such that} \qquad |x + \theta a| \leq b $$ where $x, a \in \mathbb{R}^{n}$. We know that $|x| \leq b$. The norm referred to here is the $\ell_{1}$-norm. Is there a simple way to solve this kind of optimisation problem that does not require using a full blown convex optimisation technique? Given that the problem is one dimensional my only thought was to solve this problem for different intervals of $\theta$ manually. i.e. for $\theta \in [0, 10]$ we may know that $x_{1} + \theta a_{1} \leq 0$ and $x_{i} + \theta a_{i} \geq 0$ for all $ 1< i \leq n$. Then we can replace the constraint with a simple sum and solve. But this would still require a number of checks dependent on the dimension of $a$ which I would like to avoid if possible. Does this problem have a well known analytical solution? AI: Here is one computational solution: We can write the constraint as $\sum_{a_k \neq 0} |x_k+\theta a_k| = b-\sum_{a_k = 0} |x_k|$, so we can suppose that $a_k \neq 0$ for all $k$. Let $f(\theta) = \|x+\theta a\|_1$, we are given that $f(0) \le b$. Note that $f$ is convex and piecewise affine. Let $\theta_k^*$ solve $x_k+\theta a_k = 0$ and note that $f(\theta) = \sum_k |a_k| |\theta-\theta_k|$. Let $B = \{ \theta_k^* | \theta_k^* >0 \} \cup \{0\}$. Sort the collection into $t_0=0,t_1,...,t_m$ If $f(t_m) \le b$ and $\theta \ge t_m$ then $f(\theta) = \sum_k |a_k| (\theta-\theta_k)$ and so the solution is given by $\theta^* = {b + \sum_k \theta_k^* |a_k| \over \sum_k |a_k| }$. Otherwise we have $f(t_m) >b$. Find the largest index $k$ such that $f(t_k) \le b$, then we know that the solution lies in $[t_k,t_{k+1})$ and is given by $\theta^* = \lambda t_{k+1} + (1-\lambda)t_k$ where $\lambda = {b - f(t_k) \over f(t_{k+1})-f(t_k)}$.
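If the exact sort-based procedure above is more than needed, the same convexity observations already justify a much simpler bisection sketch. This is a different, approximate approach from the one described in the answer; it assumes $a\neq 0$ and that only $\theta\geq 0$ is of interest, and the data at the bottom are illustrative.

```python
import numpy as np

def max_theta(x, a, b, tol=1e-10):
    """Largest theta >= 0 with ||x + theta*a||_1 <= b, assuming ||x||_1 <= b and a != 0."""
    f = lambda t: np.abs(x + t * a).sum()
    hi = 1.0
    while f(hi) <= b:                 # grow a bracket; f is convex and tends to infinity
        hi *= 2.0
    lo = 0.0
    while hi - lo > tol:              # the feasible set {f <= b} is an interval containing 0
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) <= b else (lo, mid)
    return lo

x = np.array([0.3, -0.2]); a = np.array([0.5, 1.0]); b = 1.0
theta = max_theta(x, a, b)
print(theta, np.abs(x + theta * a).sum())   # ~0.6 and 1.0 for this toy data
```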
H: Is Banach–Tarski paradox false without axiom of choice? I know that you need axiom of choice to prove Banach–Tarski paradox. But what happens with paradox when we remove axiom of choice? Does theorem become false? Or is there just no proof of it without axiom of choice? AI: As explained in the last paragraph here, if you combine ZF set theory with the assumption that the axiom of choice is false, the Banach-Tarski paradox becomes undecidable rather than refutable. Indeed, ZF plus something weaker than AC called the ultrafilter lemma renders the BT a theorem; it doesn't need full ZFC. For more details, see the 1991 paper that proved this. As a result, the models of ZF in which AC is false, i.e. the models of ZF$\neg$C, include some in which BT follows because the ultrafilter lemma is true, but also some (such as the Solovay model) in which the BT is false. This is why the BT is undecidable in ZF$\neg$C. (These specific examples are owed to @Reveillark.)
H: What is the time complexity of the function $5^{\log_3(n)}+n^{1.5}\sum_{j=0}^{log_3n-1}\left(\frac{5}{3^{1.5}}\right)^j$? I need to find the $\Theta$ complexity of this function: $$5^{\log_3(n)}+n^{1.5}\sum_{j=0}^{log_3n-1}\left(\frac{5}{3^{1.5}}\right)^j$$ It shouldn't be too hard, and I already have simplified it, the problem is, the result should be $\Theta\left(n^{1.5}\right)$. P.S how do you write mathematical expressions on this site?? Thanks! AI: It's correct, on the following grounds: $$5^{\log_3(n)}=\left(e^{\log(5)}\right)^{\log_3(n)}=\left(e^{\log(n)}\right)^{\frac{\log(5)}{\log(3)}}=n^{\frac{\log(5)}{\log(3)}}\in\mathcal{O}\left(n^{1.5}\right)$$ and $$\frac{5}{3^{1.5}}<1\quad\Rightarrow\quad\sum_{j=0}^{\text{anything}}\left(\frac{5}{3^{1.5}}\right)^j\leq\frac{1}{1-\frac{5}{3^{1.5}}}\in\mathcal{O}(1)$$
H: If $f(x)$ is a polynomial in $\mathbb{Z}$ and $f(a)\equiv k\pmod{n}$, prove that, for all integers $m$, $f(a+mn)\equiv k\pmod{n}$ I'm trying to solve this exercise: If $f(x)$ is a polynomial in $\mathbb{Z}$ and $f(a)\equiv k\pmod{n}$ . Prove that for all integers $m$, $f(a+mn)\equiv k\pmod{n}$. I know that if $f(a)\equiv k\pmod{n}$ then there exists some $p\in\mathbb{Z}$ such that $f(a)-k=np$ But after this I don't know what to do. Any suggestion? Thanks! Edit: I can only use basic properties of congruence. AI: Let $f(x)=c_0+c_1x+c_2x^2+\cdots+c_dx^d$. Note that from the binomial expansion $(a+mn)^j\equiv a^j\bmod n$ for every exponent $j$. Therefore $f(a+mn)=c_0+c_1(a+mn)+c_2(a+mn)^2+\cdots+c_d(a+mn)^d$ $\equiv c_0+c_1a+c_2a^2+\cdots+c_da^d=f(a)\bmod n$.
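A quick empirical spot-check of the statement with an arbitrary integer polynomial (the polynomial and the values of $a$ and $n$ below are illustrative choices, not from the question):

```python
def f(x):
    return 3 * x**4 - 5 * x**2 + 7 * x + 11   # any polynomial with integer coefficients

a, n = 4, 9
print({(f(a + m * n) - f(a)) % n for m in range(-5, 6)})   # {0}: f(a+mn) ≡ f(a) (mod n)
```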
H: Probabilistic Proof of a Hausdorff-Young Type Inequality Let $1 \leq p <2$ and let $q$ be the Holder conjugate of $p$ so that $\frac{1}{p} + \frac{1}{q} = 1$. Show that for any $\epsilon >0$, there exists a Schwartz function $f \in S(\mathbb{R}^d)$, such that: $$ \|\hat{f}\|_{L^{q}(\mathbb{R}^d)} \leq \epsilon \|f\|_{L^p(\mathbb{R}^d)} $$ The exercise suggests that as a hint, one should use Khintchine's inequality: If $\epsilon_{n}$ is a IID sequence of $\mathrm{Unif}(\{-1,1\})$ random variables (random choice of signs) and $x_n$ is (finite) sequence of complex numbers we have a constant $C(p) >0$ with: $$ \frac{1}{C(p)}\left(\sum_{n = 1}^{N}|x_n|^2\right)^{1/2} \leq \left(\mathbb{E}\left[\left(\sum_{n = 1}^N\epsilon_nx_n\right)^p\right]\right)^{1/p} \leq C(p)\left(\sum_{n = 1}^{N}|x_n|^2\right)^{1/2} $$ Does anyone have any ideas as to how one should properly apply this inequality? AI: We exploit a trick called randomisation. Broadly, the idea of the trick is to introduce some random signs into a sum and then use the Khintchine inequality to see that there is some deterministic choice of those signs that has some desired behaviour. Fix a non-negative, non-zero smooth function $\varphi$ supported in the unit ball. Now fix a probability space $(\Omega, \mathcal{F}, \mathbb{P})$ supporting a sequence of IID Rademacher random variables $(\epsilon_n)_{n \geq 1}$ as suggested in the hint. We also fix for now an $N$ which will later be taken to be suitably large (in a way that depends only on $\varepsilon$, $p$ and $\varphi$). Choose points $x_1, \dots, x_N$ such that $\varphi_j(\cdot) = \varphi(\cdot - x_j)$ have disjoint support. Define for $\omega \in \Omega$, $$\Phi_\omega(x) = \sum_{j=1}^N \epsilon_j(\omega) \varphi_j(x)$$ First note that by the disjoint support condition we have that $\|\Phi_\omega\|_{L^p} \sim N^{1/p}$ where the constant depends only on the choice of $\varphi$. Also $$\mathbb{E} \left[ \|\hat{\Phi}_\omega\|_{L^q}^q \right ] = \mathbb{E}\left[ \int \left | \sum_{j=1}^N \epsilon_j e^{-2\pi i \langle x_j, x \rangle}\hat{\varphi}(x) \right|^q dx \right] \lesssim \int \left(\sum_{j=1}^N |\hat{\varphi}(x)|^2 \right)^{q/2} dx \sim N^{q/2}$$ where the inequality follows by applying Fubini's theorem followed by Khintchine's inequality. It follows that there exists a fixed $\omega \in \Omega$ (which just means a choice of signs for the $\epsilon_j$) such that $$\|\hat{\Phi}_\omega\|_{L^q} \lesssim N^{1/2}.$$ Hence for this $\omega$, $$\frac{\|\hat{\Phi}_\omega\|_{L^q}}{\|\Phi_\omega\|_{L^p}} \lesssim N^{\frac12 - \frac1p}.$$ Since $1 \leq p <2$, the right hand side of this goes to $0$ as $N \to \infty$. This means that the argument shows that for every $N$ there is a $\Phi_\omega$ such that inequality holds with an implicit constant that is independent of $N$, which means that by taking sufficiently large $N$ we get the desired result.
H: Proving that $I_{n}+\lambda C^{T}C$ is a positive definite matrix I'm trying to prove that the matrix $A=I_{n}+\lambda C^{T}C$ is positive definite (PD) for $\lambda >0$ and some $C_{n\times m}$. I have already proved that the matrix $A$ is symmetric and of order $n\times n$. I'm trying to prove that for every $\underline{x}\neq 0$ we get $\underline{x}^T A\underline{x}>0$. I get: $$ \underline{x}^T A\underline{x}=\underline{x}^T\left(I_{n}+\lambda C^{T}C\right)\underline{x}=\underline{x}^T\underline{x}+\lambda\,\underline{x}^T C^{T}C\underline{x} $$ But how should I proceed? AI: Your $A$ is the sum of the positive definite matrix $I_n$ and the positive semi-definite matrix $\lambda C^TC$ (recall $\lambda>0$), therefore $A$ is positive definite. $C^TC$ is always positive semi-definite because: $$\underline{x}^T C^T C\underline{x} = (C\underline{x})^T (C\underline{x}) = \|C\underline{x}\|^2 \geq 0$$ for each $\underline{x} \in \mathbb{R}^n$.
H: Combinatorics - how many ways to divide balls in two groups Suppose I have: 8 black balls 3 white balls 5 blue balls how many ways there are to divide those balls into two different groups (note that there is no need to divide into two groups with even number of balls, one group could have 1 e the other 15). In my first attempt I did this: $${ 16! \over (3!\cdot 5!\cdot 8!) } \cdot 17$$ The number of permutations times the number of ways to divide into two groups. After some time I conclude this is wrong because it over calculate the permutations inside which group. My second attempt I wrote this: $$ \sum_{i=0}^{16} {16 \choose i} $$ And again wrong, I figure out that I can't handle like having more than one ball per color. I hope this isn't too confusing and if anyone can help me it would be very helpful. AI: I think it's just $ (8+1) \times (3+1) \times (5+1) =216$, by the product principle. The "plus 1's" come from the fact that you could have zero black balls on one side, (or zero White balls or zero blue balls). Now, $216$ is the answer only if (a) you consider: 3 Black, 1 White and 4 Blue in "left group" and 5 Black, 2 White and 1 Blue in "right group" different to: 5 Black, 2 White 1 Blue in "left group" and 3 Black, 1 White and 4 Blue in "right group" and you're OK with having one group being empty. If you consider swapping groups as above to be the same, then just halve the 216. And there are 2 out of 216 where 1 group is empty. So you need to clarify your question a bit...
H: Linear map triangularizable What's the definition of a linear map that is triangularizable? I can't find it anywhere. In addition, I was asked to find a linear map that doesn't have any invariant sub-spaces. I know that if a map is triangularizable it does have invariant sub-spaces, hence my request for the exact definition. Do you know of a linear map that doesn't have invariant subspaces? Thanks so much! AI: Here's a definition: Let $V$ be a finite-dimensional vector space over a field $\Bbb{F}$, and let $T:V \to V$ be linear. We say $T$ is triangularizable over $\Bbb{F}$ if there is a basis $\beta$ of $V$ such that the matrix representation $[T]_{\beta}$ is a triangular matrix. By the way, every linear map has invariant subspaces, namely $\{0\}$ and $V$. You're probably interested in non-trivial invariant subspaces, i.e. a subspace $W$ such that $\{0\}\subsetneq W \subsetneq V$ and $T[W] \subset W$. The following theorem addresses this question: Let $V$ be a non-zero finite-dimensional vector space over a field $\Bbb{F}$, and $T:V \to V$ be linear. Then, $\{0\}$ and $V$ are the only $T$-invariant subspaces if and only if the characteristic polynomial of $T$ is irreducible over $\Bbb{F}$. So, this tells you exactly when a linear map has no (non-trivial) invariant subspaces.
H: Need help understanding statement "By linear algebra we know $\left|A,B,C\right|=-(A\times C)\cdot B=-(C\times B)\cdot A$ I am reading a paper for a famous ray-triangle intersection procedure https://cadxfem.org/inf/Fast%20MinimumStorage%20RayTriangle%20Intersection.pdf They use Cramer's rule to solve a set of equations but do the simplification described in the title: $\left|A,B,C\right|=-(A\times C)\cdot B=-(C\times B)\cdot A$ I'm wondering how it is that these are equal. Posting image from paper, here are some variable explanations: $D$ = ray direction $E1, E2$ = triangle edges $T$ = translation of ray start to origin $t$ = ray distance $u,v$ = two of three barycentric coordinates AI: These are properties of the scalar triple product https://en.wikipedia.org/wiki/Triple_product The first ingredient is the identity $\left|A,B,C\right|=\det(A,B,C)=A\cdot(B\times C)$, and the second is the cyclic invariance of the triple product, $A\cdot(B\times C)=(A\times B)\cdot C=B\cdot(C\times A)$. Combining these with the antisymmetry of the cross product ($A\times C=-(C\times A)$, and similarly for the other pair) gives $\left|A,B,C\right|=B\cdot(C\times A)=-(A\times C)\cdot B$ and $\left|A,B,C\right|=A\cdot(B\times C)=-(C\times B)\cdot A$.
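A quick numerical confirmation of the two identities in the title, using random vectors (a sketch, not code from the linked paper):

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))             # three random 3-vectors

det = np.linalg.det(np.column_stack([A, B, C]))   # |A, B, C| with A, B, C as columns
print(np.allclose(det, -np.dot(np.cross(A, C), B)),
      np.allclose(det, -np.dot(np.cross(C, B), A)))   # True True
```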
H: Solution of the following integral Equation $\varphi(x) - \lambda\int\limits_{-1}^1 x e^t\varphi(t) \: dt=x$ Consider that the following equation is solvable then analyze with respect to $\lambda$ $$\varphi(x) - \lambda\int\limits_{-1}^1 x e^t\varphi(t) \: dt=x$$ Can someone tell me how I can solve it? AI: If there is no typo in the equation this is very easy. Since $c=\int_{-1}^{1}e^{t}\phi(t)dt$ is just a constant we get $\phi (x)=\lambda cx+x=x(1+c\lambda)$. Now multiply $\phi(t)=t(1+c\lambda)$ by $e^{t}$ and integrate. You get $c=\int_{-1}^{1}te^{t}dt\,(1+c\lambda)$. Solve this for $c$ and you get your solution.
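For completeness, a SymPy sketch that carries out the remaining step (computing the integral and solving for $c$); the closed form noted in the comment is what that computation yields, valid when $\lambda \neq e/2$:

```python
import sympy as sp

x, t, lam, c = sp.symbols('x t lambda c')

I = sp.integrate(t * sp.exp(t), (t, -1, 1))          # = 2/e
c_val = sp.solve(sp.Eq(c, (1 + lam * c) * I), c)[0]  # c = 2/(e - 2*lambda)
phi = sp.simplify(x * (1 + lam * c_val))
print(I, phi)   # phi(x) is mathematically e*x/(e - 2*lambda)
```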
H: Does $f_{n}(x)=\frac{x^{2n}}{1+x^{2n}}$ converge pointwise / uniformly? Since our lectures were cancelled because of the ongoing situation, I have to essentially self-study for my analysis exam in two months. Understandably this comes with a great deal of trouble, so I would like someone to help me with the following question: Question: Show that $f_{n} : \mathbb{R}\rightarrow\mathbb{R}$, $f_{n}(x):=\frac{x^{2n}}{1+x^{2n}}$ pointwise converges on $\mathbb{R}$. Examine whether it uniformly converges on $I=[0,2]$, $J=[2,\infty)$. Pointwise convergence: From what I've understood so far, I have to show $\lim_{n \to \infty}f_n(x) = f(x)$ for every $x$ to prove it. This is my solution: $\lim_{n\rightarrow\infty}\frac{x^{2n}}{1+x^{2n}}=\lim_{n\rightarrow\infty}\frac{1}{\frac{1}{x^{2n}}+1}$ $f_n(x)\rightarrow f(x) =\begin{cases}0 &if& |x|<1\\\frac{1}{2} & if&|x| = 1\\1&if&|x|>1\end{cases}$ Uniform convergence: Here, I can show that: $\big|~f_n(x) - f(x)~\big| = \displaystyle \begin{cases}\displaystyle\frac{x^{2n}}{1+x^{2n}} & \text{if }~ |x|<1\\ 0 &\text{if } |x|=1\\ \frac{x^{2n}}{1+x^{2n}}-1 &\text{if } |x|>1 \end{cases} $ Since $\frac{x^{2n}}{1+x^{2n}}-1$ = $\frac{-1}{1+x^{2n}}$ it would be equal to $0$ for $|x|>1$ as well. This would mean that this function does indeed uniformly converge on $J=[2,\infty)$. On the other hand, $f_{n}(x)$ doesn't converge uniformly on $I=[0,2]$ since it converges to a discontinuous function on this interval. Is this correct? AI: Your argument for $x\geq 2$ is not at all clear. A complete proof would be to say that $|f_n(x)-1|=\frac 1 {1+x^{2n}} \leq \frac 1 {x^{2n}} \leq \frac 1 {2^{2n}}$ for all $x\geq 2$ and hence $\sup_{x \geq 2}|f_n(x)-1| \to 0$.
H: Show that $\Gamma(x) \sim \sqrt{2 \pi} e^{-x}x^{x-\frac12}$. The gamma function is defined by $$\Gamma(x) = \int_0^\infty t^{x-1} e^{-t} dt $$ where $x > 0$. Show that $\Gamma(x) \sim \sqrt{2 \pi} e^{-x}x^{x-\frac12}$. $\sim$ denotes that the ratio between the left and the right side tends to $1$. I think that it is equivalent to showing $$ \lim_{x \to \infty} \frac1{\sqrt{2\pi}} \int_0^\infty t^{x-1}e^{-t+x} x^{\frac12 -x} dt =1.$$ This means that the limit of this integral tends to $\sqrt{2\pi}$, but I don't know how to show this. I would appreciate if you give some help. AI: It suffices to show that $$ \Gamma(x+1) = x\Gamma(x) \sim \sqrt{2\pi} \, x^{x+\frac{1}{2}}e^{-x}. $$ Substitute $t = x + \sqrt{x}s$ and define $$ f_x(s) := \left(1 + \frac{s}{\sqrt{x}}\right)_+^{x} e^{-\sqrt{x}s}, $$ where $x_+ := \max\{0, x\}$ is the positive part of $x$. Then the integral defining $\Gamma(x+1)$ boils down to: \begin{align*} \Gamma(x+1) &= \int_{-\sqrt{x}}^{\infty} \left(x + \sqrt{x}s\right)^{x} e^{-(x + \sqrt{x}s)} \sqrt{x} \, \mathrm{d}s \\ &= x^{x+\frac{1}{2}}e^{-x} \int_{-\sqrt{x}}^{\infty} \left(1 + \frac{s}{\sqrt{x}}\right)^{x} e^{-\sqrt{x}s} \, \mathrm{d}s \\ &= x^{x+\frac{1}{2}}e^{-x} \int_{-\infty}^{\infty} f_x(s) \, \mathrm{d}s, \end{align*} Now we note the following observations: $\displaystyle \lim_{x\to\infty} f_x(s) = e^{-\frac{s^2}{2}} $ It is easy to prove that $(1 + x)_+ \leq e^{x - \frac{x^2}{2(1+x_+)}}$ for all $x \in \mathbb{R}$. Using this, we get $$ 0 \leq f_x(s) \leq e^{-\frac{s^2}{2(1+s_+)}} \qquad \text{for all} \quad x \geq 1, \ s \in \mathbb{R}.$$ So, by the dominated convergence theorem, we can interchange the order of limit and integration to get: $$ \lim_{x\to\infty}\int_{-\infty}^{\infty} f_x(s) \, \mathrm{d}s = \int_{-\infty}^{\infty} \lim_{x\to\infty}f_x(s) \, \mathrm{d}s = \int_{-\infty}^{\infty} e^{-\frac{s^2}{2}} \, \mathrm{d}s = \sqrt{2\pi} $$ Therefore the desired conclusion follows.
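Stirling's formula can also be checked numerically; a small sketch using `math.gamma`:

```python
import math

for x in (5, 20, 100):
    stirling = math.sqrt(2 * math.pi) * math.exp(-x) * x ** (x - 0.5)
    print(x, math.gamma(x) / stirling)   # ratios ~1.0167, ~1.0042, ~1.0008: they tend to 1
```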
H: Every neighborhood is an open set proof question Theorem: Every neighborhood is an open set. Proof: Consider a neighborhood $E=N_r(p)$, and let $q$ be any point of $E$. $$\text{There is a positive real number $h$ such that $d(p,q)=r-h$} \tag 1$$ $$\text{For all points s such that $d(q,s) \lt h$, we have $d(p,s) \leq d(p,q)+d(q,s) \lt r-h+h=r$, so that $s \in E$.} \tag 2 $$ Thus $q$ is an interior point of $E$. Definition: A neighborhood of $p$ is a set $N_r(p)$ consisting of all q such that $d(p,q) \lt r$, for some $r \gt 0$. The number $r$ is called the radius of $N_r(p)$. I know from definition, to prove a set $E$ is open, we need to show every point of $E$ is an interior point of $E$. I still don't get those tagged equations as to why. AI: I'm assuming that your definition of "neighborhood" is a ball of the form $N_r(p)=\{ x \mid d(x, p) \lt r \}$ and that your definition of an "open set" is a set that has the property that every point of the set has a ball around it that's completely contained in the set. Then seeing that balls are in fact open sets is just using the triangle inequality. Any point $x$ in a ball around $p$ must be some positive (i.e., non-zero) distance $\varepsilon$ from the boundary of the ball. That's because we use strict inequality to define the ball -- our proof would fail if we used the alternative definition $\overline{N_r(p)}= \{x \mid d(x, p) \leq r \}$ because some points of this "closed ball" lie exactly on the boundary, not some positive distance away from it. So now think about a ball of radius $\frac \varepsilon 2$ around $x$. The triangle inequality tells us that entire ball around $x$ must be contained in the ball around $p$. Since $x$ was an arbitrary point in the ball around $p$, we've proved that any point in the ball around $p$ is in fact in its interior, so the ball is open and we have justified our use of the term "open ball" for a ball of this form.
H: Prove that a group homomorphism is injective iff $\ker(\phi) = \{e\}$
Heyall, would appreciate some help with abstract algebra because my undergrad brain is fried from doing all the proofs my prof asked me to do. I've hit a bit of a wall with this one; it involves group homomorphisms - super basic but the proof has got to be quite sophisticated cuz my mind is blank. there are two groups $G$ and $H$ and a group homomorphism $\phi : G \rightarrow H$ the kernel is defined as $\ker (\phi ) := \left \{ x\in G : \phi (x) = e_{H} \right \}$ whereby $e_H$ is the identity element in $H$ to prove: $\phi$ is injective iff $\ker(\phi) = \{e_{G}\}$ whereby $e_G$ is the identity element in $G$ would appreciate any tips too, thanks a bunch xx
AI: Suppose $\phi$ is injective. As $\phi$ is a group homomorphism, $\phi(e_G) = e_H$, right? So, $g \in G$ is in the kernel of $\phi$ if and only if $\phi(g) = \phi(e_G)$, and by injectivity, $g$ is in the kernel if and only if $g = e_G$. Conversely, assume that $\ker \phi$ consists only of $e_G$. To prove injectivity, suppose that two elements $g$ and $h$ of $G$ satisfy $\phi(g) = \phi(h)$; we want to prove that $g = h$. Since $\phi(g) = \phi(h)$ we have $$e_H = \phi(g)\phi(h)^{-1} = \phi(gh^{-1}),$$ and so $gh^{-1} \in \ker \phi$, that is, $gh^{-1} = e_G$, and then $g = h$.
H: finiteness of Koszul groups A basic question about Koszul homology from Matsumura's Commutative Ring Theory In Theorem 16.5(ii) it is assumed that $(A,m)$ is a local ring and $x_1,\ldots,x_n \in m$, and $M$ is a finite $A$-module. Then it is claimed without much explanation that the Koszul homology groups $H_p(X,M)$ are finite $A$-modules for all $p$. Why is this so obviously true? AI: If $A$ is Noetherian, then this is indeed obvious: the homology group $H_p(X,M)$ is defined as $\ker g/\operatorname{im}f$ for certain maps $$M^i\stackrel{f}\to M^j\stackrel{g}\to M^k$$ and certain $i,j,k\in\mathbb{N}$. Since $M$ is finitely generated, so is $M^j$, and thus so is $\ker g$ since $A$ is Noetherian, and thus so is $H_p(X,M)$. If $A$ is not assumed to be Noetherian, then this is not true. For instance, $A$ could be $k[t_1,t_2,t_3,\dots]/(t_1,t_2,t_3,\dots)^2$ for a field $k$. Then for $M=A$, $n=1$, and $x_1=t_1$, the Koszul complex is just $$0\to A\stackrel{t_1}\to A\to 0.$$ So $H_1(X,A)$ is just the kernel of $t_1:A\to A$ which is the maximal ideal $(t_1,t_2,t_3,\dots)$, which is not finitely generated.
H: Limit of the exponent of Random Variables
Suppose $X_1, X_2,\ldots$ are i.i.d. normal ($\mu=0, \sigma^2=1$) random variables and let $S_n$ denote the sum of the first $n$ $X_i$'s. Show that $$\lim_{n\to \infty} \exp(2S_n - 2n)=0$$ I think I am supposed to use the Martingale Convergence Theorem here, where $M_n=\exp(2S_n-2n)$ is a martingale with respect to $\mathcal{\{F_n\}}$ (information in $X_1, \ldots, X_n$). I have shown that this martingale satisfies the properties of MCT so the limit exists, but I don't know how to show that it is $0$.
AI: Let $X_1,\ldots$ be i.i.d. normal variables $\mathcal N(0,1)$. Let $S_n = \sum_{k=1}^n X_k$ and $M_n = \exp(2S_n - 2n) = \exp(2(S_n - n))$. You showed it's a martingale with respect to the filtration $(\mathcal F_n)$, and by the martingale convergence theorem ($M_n$ is nonnegative) the limit $M_\infty = \lim_{n \to \infty} M_n$ exists almost surely. To identify it, note that $S_n - n = n(\frac{S_n}{n} - 1)$. You should already know that $\frac{S_n}{n} \to 0$ almost surely by the SLLN, so $S_n - n \to -\infty$ almost surely, which means that $M_n = \exp(2(S_n-n)) \to 0$ almost surely; hence $M_\infty = 0$.
H: How to come up with a set of three linearly dependent vectors in a systematic way Give an example of three linearly dependent vectors in $\mathbb{R}^{3}$ such that none of the three is a multiple of another. Three vectors that satisfy this property are the vectors :$\{(-1,2,1), (3,0,-1),(-5,4,3)\}$. Now that's all fine and dandy, but I only got this solution by looking at the answer. Prior to looking at the solution I was trying to come up in some systematic way the necessary set of vectors. But I failed at that task. My question is how could I come up with a set of vectors in a systematic way instead of depending on blind luck and hoping to get a correct set? AI: Take two non-zero vectors $\vec u$ and $\vec v$ that are not multiples of one another (i.e., there is no real number such that $\vec v = c\vec u$). Take a linear combination of them $\vec w=a\vec u + b\vec v$, with $a,b\in\mathbb R\setminus\{0\}$. Then $\vec u, \vec v$, and $\vec w$ are linearly dependent because $a\vec u + b\vec v - \vec w=\vec 0$, but, if $\vec w=d\vec u$, then $\vec v=\dfrac{d-a}b \vec u$, contradicting the choice of $\vec u$ and $\vec v$ as not multiples of each other, and a similar argument shows that $\vec w$ is not a multiple of $\vec v$ either. The solution you gave is an example of this, with $\vec u =(-1,2,1), \vec v=(3,0,-1), a=2, $ and $b=-1$.
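If you want to double-check a candidate triple produced this way, a tiny numerical test (a sketch in Python; `det3` is my own helper name) is to confirm that the $3\times 3$ determinant vanishes:

```python
def det3(r1, r2, r3):
    """Determinant of the 3x3 matrix with rows r1, r2, r3."""
    return (r1[0] * (r2[1] * r3[2] - r2[2] * r3[1])
            - r1[1] * (r2[0] * r3[2] - r2[2] * r3[0])
            + r1[2] * (r2[0] * r3[1] - r2[1] * r3[0]))

u, v = (-1, 2, 1), (3, 0, -1)       # chosen so that neither is a multiple of the other
a, b = 2, -1                        # any two non-zero scalars work
w = tuple(a * ui + b * vi for ui, vi in zip(u, v))

print(w)             # (-5, 4, 3), the third vector from the answer key
print(det3(u, v, w)) # 0, so the three vectors are linearly dependent
```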
H: Good reference that discusses NP hardness in the context of optimization? Sometimes I read a book on optimization and the author states (without proof) that finding a certain solution to the (non-convex) optimization problem is NP hard. I've learned about complexity theory in the past through a course in CS, and I remember we started by alphabets, language, string, certificate, Turing machine, and sets like $\Sigma^\star$ (forgot what they called - Ok apparently they are Kleene star operator) http://www.cs.toronto.edu/~sacook/csc463h/notes/np_463.pdf https://cs.uwaterloo.ca/~watrous/CS360.Spring2017/Lectures/19.pdf None of these notions ever appears in optimization literature. How is there such a gap between the usage of complexity theory between these two fields? Can anyone recommend a reference that explains complexity theory in the context of optimization? AI: There is a book called Network flows: theory, algorithms and applications by Ahuja, Magnanti and Orlin. In its appendix B NP-Completeness and NP-Hardness are introduced. I haven't learned the complexity theory in the CS way, so I don't know exactly how different this is from that in CS. But since this book mainly talks about optimization, I believe it will be helpful to you.
H: In a triangle, G is the centroid of triangle ADC. AE is perpendicular to FC. BD = DC and AC = 12. Find AB.
G is the centroid of the triangle ADC. AE is perpendicular to FC. BD = DC and AC = 12. Find AB. According to the solution manual, we can let the midpoint of AC be H. D, G, and H are collinear as G is the centroid. Given that AGC is a right triangle, GH is 6, and DG is 12. How come GH is 6 and DG is 12?
AI: Given AG $\perp$ CG, the midpoint H of AC is the circumcenter of triangle AGC (the midpoint of the hypotenuse of a right triangle is equidistant from all three vertices), which yields GH = $\frac12$AC = 6. Since the centroid G divides the median DH in the ratio DG : GH = 2 : 1, we get DG = 2GH = 12 and DH = 3GH = 18. Then AB = 2DH = 36, since D and H are the midpoints of BC and AC, so DH is a midsegment parallel to AB.
H: Entire function $f(\frac{1}{p})=\frac{1}{1+p}$ for all prime $p$.
I'm working on this problem: "Find all entire functions $f$ with $f(\frac{1}{p})=\frac{1}{1+p}$ for all prime $p$." My approach is to use the identity theorem, but in this case it does not seem to work. We have $$f(1/p)=\frac{1/p}{1+1/p}$$ So naturally, I set $g(z)=f(z)-\frac{z}{z+1}$. Consider the plane with a small disc around $z=-1$ removed. Then $g$ is analytic in that region. And $g=0$ on the set $\{1/p\}$, which has a limit point. So $g=0$ on the region constructed. But how do I extend it to $-1$? Or can't I?
AI: There is no such entire function $f$. Take$$\begin{array}{rccc}g\colon&\Bbb C\setminus\{-1\}&\longrightarrow&\Bbb C\\&z&\mapsto&\dfrac z{z+1}.\end{array}$$Then, for each prime $p$, $f\left(\frac1p\right)=g\left(\frac1p\right)$. Since $f$ and $g$ are holomorphic, since $\Bbb C\setminus\{-1\}$ is connected and since $\{z\in\Bbb C\setminus\{-1\}\mid f(z)=g(z)\}$ has an accumulation point ($0$), the identity theorem implies that$$(\forall z\in\Bbb C\setminus\{-1\}):f(z)=g(z).$$But that's impossible, since $\lim_{z\to-1}f(z)=f(-1)$, whereas the limit $\lim_{z\to-1}g(z)$ doesn't exist in $\Bbb C$.
H: Generate a Poisson random variable from a standard uniform random variable.
I can't solve the following exercise: A random number generator generates random values $U \sim \text{U}(0,1)$ from the standard uniform distribution. Use $U$ to generate a random variable $P \sim \text{Pois}(\lambda = 5)$ from a Poisson distribution with rate parameter equal to five. Comment: In previous tasks I was asked to use $U$ to generate an exponential random variable $E \sim \text{Exp}(\lambda)$. The solution was to take $E \equiv -\tfrac{1}{\lambda} \ln(1-U)$. I think that this can be helpful because of the relation between Poisson distributions and exponential, but I'm not sure.
AI: There are many ways to do this; some of which are more computationally efficient than others. If you want to use a single generated value $U \sim \text{U}(0,1)$ then you can use inverse transform sampling using the cumulative distribution function for the Poisson distribution. This gives the output: $$P \equiv \min \Bigg\{ p =0,1,2,... \Bigg| U \leqslant \exp(-\lambda) \sum_{i=0}^p \frac{\lambda^i}{i!} \Bigg\}.$$ Alternatively, if you are willing to use multiple independent generated values $U_1,U_2,U_3, ... \sim \text{IID U}(0,1)$ then you can use the fact that a Poisson random variable with parameter $\lambda$ is given by the number of sequential events occurring in time $\lambda$ where the times between the events are independent exponential random variables with unit rate. Applying this relationship yields the alternative method: $$P \equiv \max \Bigg\{ p =0,1,2,... \Bigg| - \sum_{i=1}^p \ln(1-U_i) \leqslant \lambda \Bigg\},$$ with the convention that the empty sum ($p=0$) equals zero. In both cases you can program these with a simple while loop in an appropriate computational platform with a uniform pseudo-random number generator.
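Here is a rough sketch of both recipes in Python (the function names are mine; the second method consumes several uniforms per sample):

```python
import math, random

def poisson_inverse_cdf(lam):
    """Inverse-transform sampling: smallest p with Poisson CDF(p) >= U."""
    u = random.random()
    p = 0
    term = math.exp(-lam)      # Poisson probability at p = 0
    cdf = term
    while u > cdf:
        p += 1
        term *= lam / p        # next Poisson probability lam^p e^{-lam} / p!
        cdf += term
    return p

def poisson_from_exponentials(lam):
    """Count unit-rate exponential inter-arrival times that fit into time lam."""
    p, total = 0, 0.0
    while True:
        total += -math.log(1.0 - random.random())   # an Exp(1) variate
        if total > lam:
            return p
        p += 1

samples = [poisson_inverse_cdf(5) for _ in range(100_000)]
print(sum(samples) / len(samples))   # should be close to 5, the mean of Pois(5)
```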
H: Show that if $f:X\to\textbf{R}$ is a continuous function, so is the function $|f|:X\to\textbf{R}$ defined by $|f|(x) = |f(x)|$. Show that if $f:X\to\textbf{R}$ is a continuous function, so is the function $|f|:X\to\textbf{R}$ defined by $|f|(x) = |f(x)|$. MY ATTEMPT According to the definition of continuity, for every $x_{0}\in X$ and every $\varepsilon > 0$, there is a $\delta > 0$ such that \begin{align*} x\in X,\,d(x,x_{0}) < \delta \Rightarrow ||f(x)| - |f(x_{0})|| \leq |f(x) - f(x_{0})| < \varepsilon \end{align*} whence we conclude that $|f|$ is also continuous. Is the wording of my proof correct? I am curious if there is another way to solve it too. AI: The version of the proof my answer was based on: MY ATTEMPT According to the definition of continuity, for every $x_{0}\in X$ and every $\varepsilon > 0$, there is a $\delta > 0$ such that for every $x\in X$ it holds \begin{align*} d(x,x_{0}) < \delta \Rightarrow ||f(x)| - |f(x_{0})|| \leq |f(x) - f(x_{0})| < \varepsilon \end{align*} whence we conclude that $|f|$ is also continuous. Its basically right since it has the key step (application of the reverse triangle inequality $||a|-|b||\le |a-b|$) but you compressed the entire proof into one line that doesn't make much logical sense. What you wrote is not the definition of continuity of $f$, and not of $|f|$, but a sentence that combines everything I mentioned thus far. Properly written, maybe as a text book might have, it should first have the definition of continuity of $f$, then the above inequality, and then you point out that $|f|$ satisfies the definition of being continuous.
H: In the isosceles triangle, the two squares (white) both have an area of four. Find the area of the shaded.
In the isosceles triangle below, the two squares (white) both have an area of four. Find the area of the shaded. According to my answer key, the answer is $9\sqrt{2}$ square units. How can I show the solution through the parallel line theorem?
AI: I'm not entirely sure how you would answer this using theorems about parallel lines. Here's a sketch of a solution using similar triangles. Let $x$ be the distance from the top of the triangle to the middle of the top edge of the top most white square. Let $y$ be the distance from the top of the triangle to the centre of the lower white square. By similar triangles we have $x / 1 = y/ \sqrt{2}$ (the triangle's half-width is $1$ at depth $x$ and $\sqrt{2}$ at depth $y$). Since $y = x + 2 + \sqrt{2}$, solving gives $x = 4 + 3 \sqrt{2}$. So the height of the whole isosceles triangle is $h = 6 + 5\sqrt{2}$. Again by similar triangles we can determine that the length of the base is $b = h(2/x)$. And so the area of the whole isosceles triangle is $bh/2 = h^2/x = 8 + 9\sqrt{2}$. Subtracting the two white squares, whose total area is $4 + 4 = 8$, leaves the shaded area $9\sqrt{2}$, as in your answer key.
H: Spivak's Calculus Q 1-20 Question 1-20: Prove that if $|x-x_0| < \frac{\epsilon}{2}$ and $|y-y_0| < \frac{\epsilon}{2}$ then $|(x+y) - (x_0 + y_0)| < \epsilon$ and $|(x-y) - (x_0 - y_0)| < \epsilon$.** I have proven the first inequality by expanding the absolute value into $x_0 - \frac{\epsilon}{2} < x < x_0 + \frac{\epsilon}{2}$ and similarily for $y$. Then, I concatenated both of the inequalities and simplified to end up with the desired inequality $|(x-y) - (x_0 - y_0)| < \epsilon$. How do I do the same for part two of this question (**)? I tried to assume that $\epsilon > 0$ but that isn't a given. By assuming $\epsilon > 0$ I ended up that $|(x-y) - (x_0 - y_0)| < 0 < \epsilon$, the desired conclusion. How can I circumvent this $\epsilon > 0$ issue? Edit: The last line is utter bupkis, considering the absolute value forces $(x-y) - (x_0 - y_0) > 0$ AI: Use the Triangle Inequality: $$ |(x+y)-(x_0+y_0)|\leq |x-x_0|+|y-y_0| < 2\cdot \varepsilon/2 = \varepsilon $$ $$ |(x-y)-(x_0-y_0)|\leq |x-x_0|+|-y+y_0| < 2\cdot \varepsilon/2 = \varepsilon $$
H: Obtain for n > 0 a relation of the form $I(m, n) = kI(m, n − 1)$
The function $I(m, n)$, where $m ≥ 0$ and $n ≥ 0$ are integers, is defined by $$I(m, n) = \int_{0}^{1} x^m(-\ln x)^n dx$$ Obtain for $n > 0$ a relation of the form $I(m, n) = kI(m, n − 1)$, where $k$ is to be found. Hence obtain an explicit formula for $I(m, n)$, and show that $I(5, 4) = \dfrac{1}{324}$
AI: This is just a problem of integration by parts $$u=(-\log (x))^n \implies du=-\frac{n (-\log (x))^{n-1}}{x}\,dx$$ $$dv=x^m dx \implies v=\frac{x^{m+1}}{m+1}$$ $$\int x^m(-\ln x)^n dx=\frac{x^{m+1} (-\log (x))^n}{m+1}+\frac n {m+1}\int{ x^m (-\log (x))^{n-1}}\,dx$$ Evaluating between $0$ and $1$, the boundary term vanishes at both endpoints (at $x=1$ because $(-\log 1)^n=0$ for $n>0$, and as $x\to 0^+$ because $x^{m+1}$ beats any power of $\log x$), so $$I(m,n)=\frac{n}{m+1}\,I(m,n-1), \qquad \text{i.e. } k=\frac{n}{m+1}.$$ Iterating down to $I(m,0)=\int_0^1 x^m\,dx=\frac{1}{m+1}$ gives $$I(m,n)=\frac{n!}{(m+1)^{n+1}},$$ and in particular $I(5,4)=\dfrac{4!}{6^5}=\dfrac{24}{7776}=\dfrac{1}{324}$.
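As a numerical cross-check of the closed form (a sketch in Python using a simple midpoint rule; the integrand is bounded on $(0,1)$, so nothing delicate is needed):

```python
import math

def I_numeric(m, n, steps=200_000):
    """Midpoint-rule approximation of the integral defining I(m, n)."""
    h = 1.0 / steps
    return sum(((k + 0.5) * h) ** m * (-math.log((k + 0.5) * h)) ** n
               for k in range(steps)) * h

m, n = 5, 4
print(math.factorial(n) / (m + 1) ** (n + 1))   # 0.0030864197... = 1/324
print(I_numeric(m, n))                          # agrees closely with the value above
```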
H: Can the proof about direct sum decomposition of the inner product space be generalized to infinite-dimensional spaces?
There is a theorem about finite-dimensional inner product spaces. Suppose a finite-dimensional inner product space $V$ with a subspace $W$, then $V=W\bigoplus W^{\bot}$. And the proof is as follows: Suppose an orthonormal basis of $W$ is $u_1, \cdots, u_m$, then for $\alpha \in V$: $$(\alpha-\sum_{i=1}^{m}a_iu_i,\sum_{j=1}^{m}b_ju_j)=0.$$ $$\Leftrightarrow(\alpha-\sum_{i=1}^{m}a_iu_i,u_j)=0, j=1,2,\cdots, m.$$ $$\Leftrightarrow a_i=(\alpha,u_i).$$ $$\Leftrightarrow \alpha-\sum_{i=1}^{m}(\alpha,u_i)u_i\in W^{\bot}$$ And since every nonzero vector in $W^{\bot}$ is independent of the vectors in $W$, the above decomposition is a direct sum decomposition. But I am confused whether this proof itself can be generalized to infinite-dimensional inner product spaces. Actually, throughout my linear algebra learning I was always confused about which properties can be generalized to infinite-dimensional spaces. Can you give me some direction or a reference about that? Thank you in advance.
AI: An inner product space that is complete with respect to the norm induced by its inner product is called a Hilbert space (the interesting new phenomena appear in the infinite-dimensional case). It turns out that in a Hilbert space, your statement does not necessarily hold true. In particular a subspace $W$ of a Hilbert space $\mathcal H$ will satisfy $\mathcal H = W \oplus W^\perp$ if and only if $W$ is (topologically) closed. For example, we necessarily have $\mathcal H = W \oplus W^\perp$ when $W$ is a finite dimensional subspace of $\mathcal H$. To be more specific about where the direct generalization of your proof fails, a subspace $W$ that fails to be closed does not have a Schauder basis. The study of infinite dimensional vector spaces like this one falls under the domain of functional analysis. If you are interested in a relevant reference, you might want to try reading Kreyszig's Introductory Functional Analysis with Applications, which I find to be "beginner friendly" (yet fairly comprehensive) relative to similar texts.
H: Show that the direct sum $f\oplus g:X\to\textbf{R}^{2}$ defined by $f\oplus g(x) = (f(x),g(x))$ is uniformly continuous. Let $(X,d_{X})$ be a metric space, and let $f:X\to\textbf{R}$ and $g:X\to\textbf{R}$ be uniformly continuous functions. Show that the direct sum $f\oplus g:X\to\textbf{R}^{2}$ defined by $f\oplus g(x) = (f(x),g(x))$ is uniformly continuous. MY ATTEMPT Let $\varepsilon/2 > 0$. Then there exist $\delta_{1} > 0$ and $\delta_{2} > 0$ such that for every $x,y\in X$ \begin{align*} \begin{cases} d_{X}(x,y) < \delta_{1} \Rightarrow |f(x) - f(y)| < \varepsilon/2\\\\ d_{X}(x,y) < \delta_{2} \Rightarrow |g(x) - g(y)| < \varepsilon/2 \end{cases} \end{align*} Let us equip $\textbf{R}^{2}$ with the Euclidean metric. Since the following inequality holds \begin{align*} \sqrt{|f(x) - f(y)|^{2} + |g(x) - g(y)|^{2}} \leq |f(x) - f(y)| + |g(x) - g(y)| \end{align*} for every $\varepsilon > 0$ there corresponds a $\delta = \min\{\delta_{1},\delta_{2}\}$ such that for every $x,y\in X$ \begin{align*} \sqrt{|f(x) - f(y)|^{2} + |g(x) - g(y)|^{2}} \leq |f(x) - f(y)| + |g(x) - g(y)| < \varepsilon \end{align*} whenever $d_{X}(x,y) < \delta$, and the proposed result is valid. Does anyone want to make any suggestion or critique? Any of them are welcome. AI: This is entirely correct, but I'd like to make 2 small nitpicks: It's probably better to start the proof by saying "Let $\varepsilon > 0$" (and if you like you can proceed to say "so that $\varepsilon/2 > 0$"), because $\varepsilon$ is the thing you really need to pick arbitrarily. It only makes sense to ask if a function is uniformly continuous when its domain and codomain are metric spaces: if you haven't specified a metric on $\mathbb{R}^2$, it doesn't even make sense to ask if $f \oplus g$ is uniformly continuous. So you don't get to make the choice of which metric to equip $\mathbb{R}^2$ with, but you're right that the Euclidean metric is (probably) what's intended here. You should rephrase your homework solution(?) to begin with "I will assume that $\mathbb{R}^2$ has the standard Euclidean metric", or something like that.
H: How can $z = xa + x$ be differentiated with only chain rule? I am trying to put some rigour to my understanding of the Chain Rule (with Leibniz Notation). I came across this question and the second answer there (David K's) states, $\frac{dz}{dx} = a + 1 + x\frac{da}{dx}$, as you surmised, though you could also have gotten that last result by considering $a$ as a function of $x$ and applying the Chain Rule. I understand how could one evaluate $\frac{dz}{dx}$ using the product rule, \begin{align*} \frac{dz}{dx} & = \frac{d(xa + x)}{dx} \\ & = \frac{d(xa)}{dx} + \frac{d(x)}{dx} \\ & = \Bigg(\frac{d(x)}{dx}a + x\frac{d(a)}{dx}\Bigg) + 1 && \text{Product Rule}\\ & = a + x\frac{da}{dx} + 1 \end{align*} Though, I seem to be getting something wrong when using the Chain Rule, \begin{align*} \frac{dz}{dx} & = \frac{d(xa + x)}{dx} \\ & = \frac{d(xa)}{dx} + \frac{d(x)}{dx} \\ & = \Bigg(\frac{d(xa)}{d(xa)} \times \frac{d(xa)}{d(a)} \times \frac{d(a)}{d(x)} \Bigg) + 1 && \text{Chain Rule} \\ & = \Bigg(1 \times x \times \frac{d(a)}{d(x)} \Bigg) + 1 \\ & = x\frac{d(a)}{d(x)} + 1 \end{align*} I am missing the $a$ in the chain rule variant. Did I expand the chain rule correctly in terms of the Leibniz notation? What am I missing here? EDIT: I think my mistake was the assumption that I can differentiate $z = xa + x$ without any form of product rule. With that being said, here is how one could do it with chain rule first and product rule second, \begin{align*} \frac{dz}{dx} & = \frac{d(xa + x)}{dx} \\ & = \frac{d(xa)}{dx} + \frac{d(x)}{dx} \\ & = \Bigg(\frac{d(xa)}{d(xa)} \times \frac{d(xa)}{d(a)} \times \frac{d(a)}{d(x)} \Bigg) + \frac{d(x)}{dx} && \text{Chain Rule} \\ & = \Bigg(\frac{d(xa)}{d(xa)} \times \Big(\frac{d(x)} {d(a)}a + \frac{d(a)}{d(a)}x\Big) \times \frac{d(a)}{d(x)} \Bigg) + \frac{d(x)}{dx} && \text{Product Rule} \\ & = \Bigg(\frac{d(xa)}{d(xa)} \times \Big(\frac{d(x)} {d(a)}\frac{d(a)}{d(x)}a + \frac{d(a)} {d(a)}\frac{d(a)}{d(x)}x\Big)\Bigg) + \frac{d(x)}{dx} \\ & = \Bigg(1 \times \Big(a + \frac{d(a)}{d(x)}x\Big)\Bigg) + 1 \\ & = a + \frac{da}{dx}x + 1 \end{align*} AI: The chain rule refers here rather to the function $$z=z(a,x)$$ $$\Rightarrow \frac{dz}{dx} = \frac{\partial z}{\partial a}\cdot\frac{da}{dx} + \frac{\partial z}{\partial x} \cdot \frac{dx}{dx}$$ $$ = x\frac{da}{dx}+ a +1$$
H: Why is $f$ differentiable on each straight line through the origin because on the straight line $y = m x$, it has the value $\frac{m x}{m^2 + x^2}$? I am reading "Analysis on Manifolds" by James R. Munkres. Let $f(0, 0) := 0$ and $f(x, y) := \frac{x^2 y}{x^4 + y^2}$ for $(x, y) \ne (0, 0)$. There are the following sentences in this book: "The function $f$ is particularly interesting. It is differentiable (and hence continuous) on each straight line through the origin. (In fact, on the straight line $y = m x$, it has the value $\frac{m x}{m^2 + x^2}$.) But I cannot understand the above sentences. Why is $f$ differentiable at any point on each straight line through the origin because on the straight line $y = m x$, it has the value $\frac{m x}{m^2 + x^2}$? By the way, I know $f$ is differentiable at any point on each straight line through the origin except the origin because $f$ is a $C^1$ function on $\mathbb{R}^2 - \{(0, 0)\}$. Maybe Munkres just wanted to say that all directional derivatives of $f$ exist at any point on $\mathbb{R}^2$. AI: You should read that as “the restriction of $f$ to a straight line through the origin is differentiable at each point on the line.” That is, if you view the restriction of $f$ to a line through the origin as a function $\phi:\mathbb R\to\mathbb R$, then $\phi$ is differentiable everywhere. The point of this example is that differentiability of multi-variable functions is a rather strong condition: even having all of its directional derivatives exist isn’t sufficient for a function to be differentiable. Just as with limits, you have to consider all paths that lead to the point. Looking only along straight lines isn’t enough.
H: Understand significance of theorem related to normal subgroups I found in an online book about abstract algebra the following theorem: The following theorem is fundamental to our understanding of normal subgroups. Theorem 10.3. Let $G$ be a group and $N$ be a subgroup of $G$. Then the following statements are equivalent. The subgroup $N$ is normal in $G$. For all $g \in G\text{,}$ $gNg^{-1} \subset N\text{.}$ For all $g \in G\text{,}$ $gNg^{-1} = N\text{.}$ Since I'm learning right now about normal groups and factor groups, I wonder why the author says that the "theorem is fundamental to understanding normal subgroups." I found the proofs rather complicated, because we first show that $gNg^{-1} \subset N$, then the other way round that $N \subset gNg^{-1}$ to conclude $gNg^{-1} = N$ for a subgroup $N$ that is normal in a group $G$. Somehow I fail to understand why this is so fundamental, can someone explain it? AI: The map $G \to G$, $h \to ghg^{-1}$ is important in group theory, and is known as conjugation by $g$. The significance of the theorem is in showing normal subgroups are exactly the subgroups of $G$ which are invariant under conjugation by any element $g \in G$.
H: Is my limit development correct?
I have this limit to find: $\lim\limits_{x\to0}\frac{\sqrt[n]{1+x}-1}{x}$ My development was: Let $\large{u^n = 1+x}$, from here if $x\to 0$ implies that $\large{u^n \to 1}$ And i got: $\Large{\lim_{u^n \to 1}\frac{u-1}{u^n - 1}}$ and using that $\Large{u^n - 1 = (u-1)\sum_{j=0}^{n-1}{u^j}}$ Finally i got $\Large{\lim_{u^n\to1}\frac{1}{\sum_{j=0}^{n-1}{u^j}} = \dfrac{1}{n}}$ I know that the result is correct, but i want to know if all my steps are correct. Thanks in advance.
AI: Your steps are correct, except that the limit should be written as $u\to 1$ rather than $u^n\to 1$: as $x\to 0$ we have $u=(1+x)^{1/n}\to 1$. Here is another simple approach $$\lim_{x\to 0}\frac{\sqrt[n]{1+x}-1}{x}$$ $$=\lim_{x\to 0}\frac{(1+x)^{1/n}-1}{x}$$ $$=\lim_{x\to 0}\frac{\left(1+\frac{1}{n}\cdot x+\left(\frac1n\right)\left(\frac1n-1\right)\frac{x^2}{2!}+\ldots\right)-1}{x}$$ $$=\lim_{x\to 0}\left(\frac{1}{n}+\frac{(1-n)x}{2n^2}+\ldots\right)$$ $$=\frac1n$$
H: the family of analytic functions with positive real part is normal. I'm reviewing Complex Analysis and I don't quite understand the concept of normal family. There is an exercise in Ahlfors' Complex Analysis: Prove that in any region the family of analytic functions with positive real part is normal. I believe this is wrong. Theorem 15 says a family of analytic functions is normal with respect to $\mathbb{C}$ iff the functions are uniformly bounded on every compact set. However, this set contains the constant functions with arbitrary real part. So there is no way those functions are locally bounded. Where am I wrong? AI: In the same section Ahlfors gives a modified definition ("Classical definition") of normal families. A family is normal if every sequence in it has a subsequence converging uniformly on compacts sets or a subsequence tending to $\infty$ uniformly on compacts sets .Your example fails with this definition.
H: A sequence $a_1=f'(0),a_2=f''(0),...$
I am working on this problem from my past Qual: "Give a sequence such that there is no analytic function $f:D\to \mathbb{C}$ with $a_1=f'(0),a_2=f''(0),...$", where $D$ is the unit disk. The only thing I can think of is Cauchy's integral formula$$f^{(n)}(0)=\frac{n!}{2\pi i} \int \frac{f(w)}{w^{n+1}}dw$$ But that's it. I don't see a relation between these to construct a counterexample. How do I proceed?
AI: Take $a_n=(n!)^{2}$. If such a function existed then $\sum \frac {f^{(n)} (0)} {n!} z^{n}$ would converge for $|z| <1$. But $\sum \frac {f^{(n)} (0)} {n!} z^{n}=\sum (n!)z^{n}$ (since $f^{(n)}(0)=a_n=(n!)^2$), and this series converges only for $z=0$. [If you prefer you can take $a_n=n^{n}n!$].
H: Let $S \subset \mathbb{R^2}$ consisting of all points $(x,y)$ in the unit square $[0,1] × [0,1]$ for which $x$ or $y$, or both, are irrational. Full question: Let $S$ be the subset of $\mathbb{R^2}$ consisting of all points $(x,y)$ in the unit square $[0,1] × [0,1]$ for which $x$ or $y$, or both, are irrational. With respect to the standard topology on $\mathbb{R^2}$, $S$ is: A - closed, B - open, C - connected, D - totally disconnected, E - compact. So if $x$ is irrational and $y$ is rational, we just have the unit square and I'd say thats $A,C$, and $E$. And if both are irrational, I'd say its $D$ because the set of $\mathbb{I}^2$ is totally disconnected. However the answer key says the answer is $C$. Can someone help explain? AI: It is path-connected, and we can simply find a path between any two points in S. For example, if $a$ is rational and $b,c,d$ are irrational, we construct a path $(a,b) \to (c,d)$ as follows. First pick some irrational number $r$, then go $(a,b) \mapsto (r,b) \mapsto (r,d) \mapsto (c,d)$, taking straight lines between the mentioned points. Other cases have similar paths. The result then follows since path-connected $\implies$ connected.
H: Injective function from unit circle
Let $S$ denote the set of points on the unit circle centred at $(0,0)$. Does there exist an injective function $f : S \rightarrow S \setminus \{(1,0)\}$?
AI: Let $v_1=(1,0), v_2=(\cos 1, \sin 1), v_3=(\cos \frac1 2, \sin \frac 1 2 ),v_4=(\cos \frac 1 3, \sin \frac 1 3 ),....$ These points are pairwise distinct, since the angles $0, 1, \frac12, \frac13, \ldots$ are distinct and lie in $[0, 2\pi)$. Define $f(v)=v$ if $v \notin \{v_1,v_2,...\}$, and $f(v_1)=v_2,f(v_2)=v_3,...$. This gives a bijection from $S$ onto $S\setminus \{(1,0)\}$.
H: Where is $f(x) = |x^2(x+1)|$ differentiable? And where is it $C^1$ and $C^2$?
I'm a maths student taking a real-analysis paper and I'm currently working down my problem sheet. I've been asked the above question. First I define a piece-wise function to describe the absolute function above. $$f(x)= \begin{cases} x^2(x+1) & x \geq -1 \\ -(x^2)(x+1) & x < -1 \\ \end{cases} $$ $$f(x)= \begin{cases} x^3+x^2 & x \geq -1 \\ -x^3-x^2 & x < -1 \\ \end{cases} $$ My guess is that since both pieces are polynomials, $f(x)$ is smooth and thus infinitely differentiable everywhere. I'm still wrapping my head around all this $C^1$ and $C^2$, smoothness, etc. So if anyone has any tips or tricks I'd love to know! Thanks for your time!
AI: The function $f$ is not differentiable at $-1$, since$$\lim_{x\to-1^+}\frac{f(x)-f(-1)}{x+1}=\lim_{x\to-1^+}x^2=1$$and$$\lim_{x\to-1^-}\frac{f(x)-f(-1)}{x+1}=\lim_{x\to-1^-}(-x^2)=-1;$$therefore the limit $\lim_{x\to-1}\frac{f(x)-f(-1)}{x+1}$ doesn't exist. But $f$ is differentiable at every point $a\ne-1$ since: if $a>-1$, $\displaystyle\lim_{x\to a}\frac{f(x)-f(a)}{x-a}=\lim_{x\to a}\frac{x^2(x+1)-a^2(a+1)}{x-a}=3a^2+2a$; if $a<-1$, $\displaystyle\lim_{x\to a}\frac{f(x)-f(a)}{x-a}=\lim_{x\to a}\frac{-x^2(x+1)+a^2(a+1)}{x-a}=-3a^2-2a$. On $\Bbb R\setminus\{-1\}$ the function agrees locally with a polynomial, so it is $C^\infty$ (in particular $C^1$ and $C^2$) there; on any interval containing $-1$ it is continuous but not $C^1$ (and hence not $C^2$), because it is not even differentiable at $-1$.
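A quick numerical illustration of those one-sided limits (a minimal sketch in Python):

```python
def f(x):
    return abs(x ** 2 * (x + 1))

for h in (1e-2, 1e-4, 1e-6):
    right = (f(-1 + h) - f(-1)) / h       # one-sided difference quotient from the right
    left = (f(-1 - h) - f(-1)) / (-h)     # one-sided difference quotient from the left
    print(h, right, left)
# right tends to 1 and left tends to -1, so the one-sided derivatives at -1 disagree
```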
H: Is the restriction of an inverse function invertible?
Let $f:A\rightarrow B$, where $A$ and $B$ are open sets of $\mathbb{R^n}$ for some n, be invertible. Let $C$ and $D$ be open subsets of $A$ and $B$ respectively. Is $f:C \rightarrow D$ invertible?
AI: One can discuss this question more generally, without restriction to $\mathbb{R}^n$ and open sets: "Let $$f:A\rightarrow B\tag{1.1}$$ be an invertible function and $C\subset A$ and $D\subset B$. Is the function $$f:C\rightarrow D\tag{1.2}$$ invertible, too?" I think this question is not well posed. If $f$ is a function from $A$ to $B$ it cannot be a function from $C$ to $D$ at the same time, at least with my definition of a function. So I would write the question this way: Let $$f:A\rightarrow B\tag{2.1}$$ be an invertible function and $C\subset A$ and $D\subset B$. Is the function $g$ invertible, too, where $$g:C\rightarrow D\tag{2.2}$$ and $$g(x)=f(x), \forall x \in C\tag{2.3}$$ This still does not satisfy me, because without further restriction we cannot guarantee that $(2.2)$ and $(2.3)$ are meaningful definitions, unless we require that $$f(x) \in D, \forall x \in C.$$ So finally I would pose the following question: Let $$f:A\rightarrow B\tag{3.1}$$ be an invertible function, $C\subset A$ and $f(C)\subset D$. Is the function $g$ invertible, too, where $$g:C\rightarrow D\tag{3.2}$$ and $$g(x)=f(x), \forall x \in C\tag{3.3}$$ Now $g$ is a correctly defined function. A function $h:X\rightarrow Y$ is invertible if and only if $h$ is injective: $$\forall x_1 \in X\;\forall x_2 \in X\!: h(x_1)=h(x_2) \implies x_1=x_2$$ and $h$ is surjective: $$\forall y \in Y \;\exists x \in X\!: h(x)=y$$ From $$g(x_1)=g(x_2)$$ follows $$f(x_1)=f(x_2)$$ and if $f$ is injective then $$x_1=x_2$$ So if $f$ is injective then $g$ is injective. Now take $y \in D$. For $g$ to be surjective we need some $x\in C$ with $f(x)=y$, but such an $x$ need not exist. We can find simple examples where this fails, e.g. $$f:x\mapsto x$$ where $A=B=\{1,2,3\}$ and $C=\{1\}, D=\{1,2\}$. Here $g(x)=2$ does not have a solution. But if $D=f(C)$ then $g$ is surjective, too. The requirement that $A,B,C,D$ are open subsets of $\mathbb{R}^n$ does not change anything. So finally we have: Let $$f:A\rightarrow B$$ be a function, $C\subset A$ and $f(C)\subset D$, and let $g$ be the function $$g:C\rightarrow D$$ where $$g(x)=f(x), \forall x \in C.$$ If $f$ is injective, then $g$ is injective. $g$ is surjective if and only if $D=f(C)$. If $f$ is injective, then $g$ is invertible (bijective) if and only if $f(C)=D.$ The restriction of $f$ to $C$ is often written $f|_C$ and is usually defined as $$f|_C:C\rightarrow B$$ $$f|_C(x)=f(x),\forall x \in C$$ If $f$ is injective then $f|_C$ is injective. If $f$ is surjective then $f|_C$ need not be surjective.
H: What's the relation between $\mathbf E(X)$ and $\mathbf E(e^X)$? Given $\mathbf E(X)=0$ and $-1\le X\le 1$, show that $\mathbf E(\text{exp}(\sqrt{2}X))\le e^{\sqrt{2}}-\sqrt{2}$. It seems that Jensen's inequality will help, but I have no idea. Thanks in advance. AI: Hint: Since $-1 \le X \le 1$, we have $X^k \le 1$, and thus, $E[X^k] \le 1$ for all integers $k \ge 2$. Also, you are given $E[X] = 0$. Now, use $e^{X\sqrt{2}} = \displaystyle\sum_{k = 0}^{\infty}\dfrac{(\sqrt{2})^k}{k!}X^k$ along with linearity of expectation and see what you get.
H: What tools do I need to show the following?
Let $X,Y$ be NLS. Let $T: X\rightarrow Y$ be a linear map. Prove that $T$ is continuous iff it is continuous at $0\in X$. Honestly I don't understand this question: if $T$ is continuous on $X$ then it is continuous at each point of $X$, and since $0\in X$ it is continuous at $0$. What does this question mean? What tools do I need to know to prove this? I already know that the norm $\Vert\cdot\Vert$ is continuous on $X$ and $Y$.
AI: One direction is trivial, as you say; for the other direction, all you need is linearity: Given $\epsilon >0$ there exists $\delta >0$ such that $\|Tz\|=\|Tz-T0\|<\epsilon$ whenever $\|z-0\|<\delta$. Hence $\|Tx-Ty\|=\|T(x-y)\|<\epsilon$ whenever $\|x-y\|<\delta$. And this proves continuity, in fact uniform continuity, of $T$.
H: What is the minimum score after which all numbers can be scored?
There are two types of scores present in a game: 4 and 7. What is the minimum score after which all numbers can be scored? I found the answer '18' without any rigorous reasoning. This is a math Olympiad problem. Can you please give me a method?
AI: The link posted by @Matti P gives a general method to solve it. If we are allowed score increments of $x$ and $y$, where $x$ and $y$ are coprime, then the maximum score which can't be obtained (the Frobenius number) is $xy - x - y$. Putting in our values of $x=4$, $y=7$ gives 17. So any score greater than 17 can be obtained, i.e. 18 is the minimum score after which every total is achievable.
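A brute-force check of this (a sketch in Python; `representable` is my own helper name) lists the scores that cannot be made from 4s and 7s:

```python
def representable(n, scores=(4, 7)):
    """Can n be written as a non-negative integer combination of the given scores?"""
    reachable = [False] * (n + 1)
    reachable[0] = True
    for k in range(1, n + 1):
        reachable[k] = any(k >= s and reachable[k - s] for s in scores)
    return reachable[n]

print([n for n in range(1, 30) if not representable(n)])
# [1, 2, 3, 5, 6, 9, 10, 13, 17]: the largest is 17, the Frobenius number,
# so every score from 18 onwards is attainable, matching the answer of 18.
```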
H: Show that $|\sin(0.1) - 0.1| \leq 0.001$ with the lagrange remainder Show that $|\sin(0.1) - 0.1| \leq 0.001$ I know that's a basic exercise on taylor polynomial but I have made a mistake somewhere that I don't find out. Anyway, here's my attempt : Because the function $f: \mathbb{R} \rightarrow \mathbb{R}$, $x \rightarrow \sin(x)$ is $1$ time derivable, by the Taylor polynomial formula we find: \begin{equation*} \sin(x) = x + R^1_0 \sin(x) \end{equation*} Therefore, \begin{equation*} |sin(0.1) - 0.1| = |R^1_0 \sin(0.1)| \end{equation*} Because $f$ est 2 times derivable, by the Lagrange remainder formula, $\exists c \in ]0, 0.1[$ such that \begin{align*} R^1_0 \sin(0.1) &= f^{(2)}(c) \frac{(0.1)^2}{2!} \\ &= - sin(c) \frac{0.01}{2} \\ |R^1_0 \sin(0.1)| &= |sin(c) \cdot 0.005| \end{align*} Because $|sin(x)| \leq 1$, $\forall x \in \mathbb{R}$ \begin{align*} |R^1_0 \sin(0.1)| &\leq |0.005| \end{align*} However, $0.005 > 0.001$ so I'm wondering where I did a mistake ? AI: You also know that $\vert \sin x \vert \le \vert x \vert$. Hence $$\vert \sin(c) \vert \frac{0.01}{2} \le 0.1 \frac{0.01}{2} = 0.0005 < 0.001$$
H: How to find the pre-image of a relation given the interval? $$ \begin{aligned} &\begin{array}{l} \text { 2) Given the following relations: } \\ \qquad f=\left\{(x, y) \text { , } x, y \in Z, y=x^{4}+4\right\}, \text { a relation from } Z \text { to } Z \text { . } \end{array}\\ &\begin{array}{l|l} \mathrm{g}=\{(x, y) & \left.x, y \in \mathbf{R}, x^{2}+y^{2}=4\right\}, \text { a relation from } \mathbf{R} \text { to } \mathbf{R} . \\ \mathrm{h}=\{(x, y) & \left.x, y \in \mathbf{R}, x^{2}=y-1\right\}, \text { a relation from } \mathbf{R} \text { to } \mathbf{R} . \end{array} \end{aligned} $$ $$ \text { c) For each of } f, g, \text { and } h, \text { find the preimage of the interval } B=[-2,2] $$ Can someone please explain how to find the pre-image of the interval $B[-2,2]$? AI: For example, in the first case, you are looking for all integers $x\in\mathbb{Z}$ such that $-6\leq x^4\leq -2$, which is clearly the empty set. In the second case, $g^{-1}([-2,2])$ is the set of all real numbers $x\in\mathbb{R}$ such that $(x, \pm\sqrt{4-x^2})$ is a point of the circle of radius $2$ around the origin, hence $x\in[-2,2]$. As for $h$, the closed interval $[-1,1]$ will work.
H: What will be the remainder when 7^2020 is divided by 4? Problem: "What will be the remainder when $7^{2020}$ is divided by $4$?" I can't get a step to approach such type of question but all I know is the answer is $1$. AI: $7\equiv -1\pmod{4}$ So, $7^{2020}\equiv (-1)^{2020}\equiv 1\pmod{4}$
H: Expectation of score function (partial derivative of the log-likelihood function) according to the Wikipedia: https://en.wikipedia.org/wiki/Score_(statistics), expected value of a score function should equals to zero and the proof is following: \begin{equation} \begin{aligned} \mathbb{E}\left\{ \frac{ \partial }{ \partial \beta } \ln \mathcal{L}(\beta|X) \right\} &=\int^{\infty}_{-\infty} \frac{\frac{ \partial }{ \partial \beta } p(X|\beta)}{p(X|\beta)} p(X|\beta) dX \\ &= \frac{ \partial }{ \partial \beta }\int^{\infty}_{-\infty} p(X|\beta) dX = \frac{ \partial }{ \partial \beta } 100\% = 0 \end{aligned} \end{equation} My question is why the probability density function of random variable $\frac{ \partial }{ \partial \beta } \ln p(X|\beta)$ is $p(X|\beta)$? Many thanks!! AI: $$\frac{\partial}{\partial \beta} \left[\log \mathcal L(\beta \mid \boldsymbol X)\right]$$ is a function of the sample $\boldsymbol X$, thus is a function on the joint density $p_{\boldsymbol X}(\boldsymbol x \mid \beta)$ of $\boldsymbol X$. So its expectation is naturally $$\int_{\boldsymbol x \in \Omega} \frac{\partial}{\partial \beta} \left[\log \mathcal L(\beta \mid \boldsymbol x)\right] p_{\boldsymbol X}(\boldsymbol x) \, d\boldsymbol x$$ where $\Omega$ is the support of $\boldsymbol X$. This is analogous to the much simpler "law of the unconscious statistician" $$\operatorname{E}[g(X)] = \int_{x \in \Omega} g(x) f_X(x) \, dx$$ in the univariate case.
H: Sum of arithmetic progression formula
The question that is asked is to find the series described, and then calculate the sum of the first n terms. Now I have done some research and I found a formula I thought might work, which is $$ s_n = \frac{n}{2}(2a_1 + (n-1)d)$$ where the series is of the form $a_n = a_1 + (n - 1)d$ Now I have got the following series: $ 1 + 5 + 9 + 13$ $+$ $...$ This is what I thought to be $s_n$, so that $a_1 = 1$, $a_2 = 5$, etc. I found the series description: $a_n = a_1 + 4(n-1)$ So I think $a_1 = 1$ and $d = 4$. When I substitute these values in the formula I get: $$ s_n = \frac{n}{2} (2 + 4n - 4) $$ $$ s_n = \frac{2n + 4n^2 - 4n}{2} $$ $$ s_n = 2n^2 - 2n $$ Yet, when I check the solution to the problem, it is expected to be $$ s_n = n(2n - 1)$$ $$ s_n = 2n^2 - n$$ I can't seem to figure out why I have almost the same result, yet a small difference.
AI: You made a small arithmetic slip in the last step: $\frac{2n-4n}{2}=\frac{-2n}{2}=-n$, not $-2n$ as you have typed, so $s_n = 2n^2 - n$.
H: How to prove the Riemann zeta function tends to infinity when $x$ tends to $1$
The Riemann zeta function is convergent over the interval $(1,\infty)$, and $\sum_{n=1}^{\infty}\frac{1}{n^x}$ tends to infinity when $x\rightarrow 1^{+}$; one can feel this is right because at $x=1$ the series is infinite and the function is monotonically decreasing over the interval $(1,\infty)$. But since $\sum_{n=1}^{\infty}\frac{1}{n^x}$ is not uniformly convergent on the interval $(1,1+\delta)$, I can't simply exchange the limit order as follows: $$\lim_{x\rightarrow1}\sum_{n=1}^{\infty}\frac{1}{n^x}=\sum_{n=1}^{\infty}\frac{1}{n}$$, so how can I compute this: $$\lim_{x\rightarrow1}\sum_{n=1}^{\infty}\frac{1}{n^x}$$
AI: We show that for every $M \in \Bbb R_{>0}$, there exists $\delta > 0$ such that $\zeta(x) > M$ for all $x \in (1, 1+\delta)$. Assume that such an $M$ is given. As $\sum 1/n$ diverges, there exists $N \in \Bbb N$ such that $$\sum_{n=1}^N\dfrac{1}{n} > 2M.$$ Also, note that for each $n \in \{1, \ldots, N\}$, we have a $\delta_n > 0$ such that $$\dfrac{1}{2n} < \dfrac{1}{n^x} , \qquad (*)$$ for all $x \in (1, 1+\delta_n).$ (This is because $n^x \to n$ as $x\to1^+$.) By choosing $\delta = \min\{\delta_n \mid n \in \{1, \ldots, N\}\}$, we see that $(*)$ holds for all $x \in (1, 1 + \delta)$ and for all $n\in\{1,\ldots,N\}$. Thus, we have \begin{align} \sum_{n=1}^N\dfrac{1}{n^x} &> \sum_{n=1}^N\dfrac{1}{2n}\\ &> \dfrac{1}{2}(2M)\\ &= M, \end{align} for all $x \in (1, 1 + \delta)$. This gives us that \begin{align} \zeta(x) &= \sum_{n=1}^\infty \dfrac{1}{n^x}\\ &>\sum_{n=1}^N \dfrac{1}{n^x}\\ &> M, \end{align} for all $x \in (1, 1 + \delta)$, as desired.
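A rough numerical illustration of this blow-up (a sketch in Python; the tail of the series is estimated by the integral $\int_N^\infty t^{-x}\,dt$, which is an assumption of the sketch, not part of the proof above):

```python
def zeta_estimate(x, terms=100_000):
    """Partial sum of sum 1/n^x plus an integral estimate of the tail (assumes x > 1)."""
    partial = sum(n ** (-x) for n in range(1, terms + 1))
    return partial + terms ** (1 - x) / (x - 1)

for x in (2.0, 1.5, 1.1, 1.01, 1.001):
    print(x, zeta_estimate(x))
# the values grow roughly like 1/(x - 1) as x -> 1+, consistent with zeta(x) -> infinity
```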
H: sigma algebra created by partition
So there is a countable (disjoint) partition of $\Omega = \bigcup_{i \in \mathbb{N}}B_i$ and now I'm interested in the $\sigma$-algebra created by this partition, $\sigma(\{B_i:i\in\mathbb{N}\})$. I've been wondering whether this is just the powerset $\mathcal{P(\Omega)}$, because all singletons are available, but then I remembered the Vitali set. There is supposed to be an explicit form, but I really don't know where to start here. Thanks in advance.
AI: It is the collection of all possible unions of the $B_i$'s (i.e. sets of the form $\bigcup_{i \in I} B_i$ where $I$ is some subset of $\{1,2,3,...\}$). This class contains each $B_i$ and it is a sigma algebra: closure under countable unions is obvious, and the complement of a union of the $B_i$'s is the union of the remaining $B_j$'s. It follows that the sigma algebra generated by the partition is contained in this collection. The reverse inclusion is obvious.
H: If $Z_n = X_n + Y_n$ for $X_n\in M$ and $Y_n\in N$ then $(X_n)$ and $(Y_n)$ converge Let $H$ be a Hilbert space (infinite dim) with $M,N\subset H$ being closed subspaces satisfying $N\subset M^\perp$. I'm trying to show that $M+N$ is closed. If $(Z_n)_{n=1}^\infty \subset M+N$ is a sequence then $Z_n = X_n+Y_n$ for some sequences $(X_n)_{n=1}^\infty$ and $(Y_n)_{n=1}^\infty$ in $M$ and $N$. I can show $M+N$ is closed if I know that $X_n\to X$ and $Y_n\to Y$ for some $X\in M$ and $Y\in N$, so my question is: why do these two sequences converge? AI: From the orthogonality of $Y_n-Y_m$ and $X_n-X_m$ we get $\|(X_n+Y_n)-(X_m+Y_m)\|^{2} =\|X_n-X_m\|^{2}+\|Y_n-Y_m\|^{2}$. Hence $\|X_n-X_m\|^{2} \leq \|(X_n+Y_n)-(X_m+Y_m)\|^{2}\to 0$. So the Cauchy sequence $X_n$ converges to some $X$. Since $X_n+Y_n$ also converges we see that $Y_n$ converges too. Can you finish?
H: Prove that $\det(B^TB) \neq 0$ I have a matrix $A_{N \times M}$ such that $$A=U^T_{N \times N} \cdot B_{N\times M} \cdot V_{M \times M},$$ where $U,V$ orthogonal and $B_{ij}$ may has nonzero values only for $i\le j \le i+1$ and $A$ is full order matrix. How to prove that $B^TB$ also has $\det(B^TB) \neq 0$. I think that it is easy conclusion from the fact that $A$ has full order matrix but I don't know how exactly to prove it. AI: I guess that “full order” means that $A$ has full column rank, that is, with your notation, $\operatorname{rk}A=M$. This is equivalent to $\det(A^T\!A)\ne0$. Indeed, if $\det(A^T\!A)\ne0$, you can consider $L=(A^T\!A)^{-1}A^T$ and immediately see that $L$ is a left inverse of $A$. Conversely, it's not difficult to prove that $A$ and $A^T\!A$ have the same rank. Multiplying a matrix on the left or on the right by an invertible matrix doesn't change the rank. Since $$ B=V^TAU $$ the rank of $B$ is the same as the rank of $A$.
H: whats the simplest way to find this circle's center if known its tangent line the circle has a tangent line $y = 2x + 1$ at $(2,5)$ and its center on the line $y = 9 - x$. If that's circle intersect the $x$ -axis at $x_1, x_2$ what's $x_1 + x_2$ ? i understand than $x_1 + x_2 = 2x_0$ when $x_0$ is the circle's center. we can use $(x-a)^2 + (y-b)^2 = r^2$ when $b = 9-a$ can we use different method? using $y=mx \pm r\sqrt{m^2+1}$ seems complicated. AI: The perpendicular line to $y=2x+1$ at $P: (2,5)$ intersects the line $y=9-x$ in $C: (6,3)$ which is the center of your circle. Now $CP$ is a radius, so the equation of the circle is $(x-6)^2+(y-3)^2=20.$ Can you take it from here?
H: Prove that the function $f :\Bbb R \to \Bbb R$ defined by $f(x) = e^{-\cos(x)^2}$, for all $x \in\Bbb R$, has a unique fixed point on $\Bbb R$. Hint: some arguments might be simpler if you recall the trigonometric formula $2\sin(x)\cos(x) = \sin(2x)$. Remember also that $\cos$ and $\sin$ are $2\pi$-periodic functions. I am a bit lost with this question. I wanted to apply the contraction mapping theorem but it involves a closed set which $\Bbb R$ isn't. So I started off with let $a \in \Bbb R$ and $a > 0$, then the set $[-a,a] = I$ is a subset of $\Bbb R$ which $f(x)$ is defined on. Then assume for some $x_1 \in I$, $f(x_1) = x_1$ and $x_2$. I was then going to apply the mean value theorem where $f'(c) = \dfrac{f(x_1) -f(x_2)} {x_1 - x_2}$ which would mean $f'(c) = 0$, however $f '(x)$ is $2e^{-\cos(x)^2}\cos(x)\sin(x)$ which will not equal to $0$? I am not sure if I am heading in the right direction because so far it doesn't look right AI: Note that $\Bbb R$ is a complete metric space, which is all we need to appeal to the contraction mapping theorem. Assume $x_1, x_2 \in \Bbb R$ with $x_1 < x_2$. Then, by MVT, we have that $$f'(c) = \dfrac{f(x_2) - f(x_1)}{x_2 - x_1} \qquad (*)$$ for some $c \in (x_1, x_2)$. Thus, to show that $f$ is a contraction, it would suffice to find a nice bound on $f'(c)$. We turn our attention to this. Note that $f'(x) = e^{-\cos^2x}\sin2x.$ We now show that there exists $k \in (0, 1)$ such that $|f'(x)| < k$ for all $x\in \Bbb R$. Note that $f'$ is periodic with period $\pi$ and thus, it suffices to consider $f'$ on $[0, \pi]$. However, note that $I = [0, \pi]$ is compact and so, by the extreme value theorem (EVT), $|f'|$ attains its maximum (supremum) on it. Let $k = \displaystyle\sup_{x\in I}|f'(x)|.$ By EVT, $k = |f'(x_0)|$ for some $x_0 \in I$. We now wish to show that $k < 1$. (Emphasis on the strict inequality. It is clear that $k \le 1$.) Note that $e^{-\cos^2x} \le 1$ and $|\sin 2x| \le 1$ for all $x \in I$. Thus, if $k = 1$, then we must have both $$e^{-\cos^2x_0} = 1 \text{ and } |\sin 2x_0| = 1.$$ However, it can be seen that the former is possible iff $x_0 = \pi/2$ corresponding to which we get that $\sin 2x_0 = 0$, a contradiction. Thus, we see that $k = 1$ is not possible. With the above result, we can rearrange $(*)$ to get \begin{align} \left|\dfrac{f(x_2) - f(x_1)}{x_2 - x_1}\right| &= |f'(c)|\\ &< k\\ \implies |f(x_2) - f(x_1)| &< k|x_2 - x_1|, \end{align} for some $k \in (0, 1)$ and all $x_1, x_2 \in \Bbb R$. You can now appeal to the contraction mapping theorem and conclude the result.
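Since $f$ is a contraction, the contraction mapping theorem also tells you how to locate the fixed point: iterate $f$ from any starting value. A quick sketch in Python:

```python
import math

def f(x):
    return math.exp(-math.cos(x) ** 2)

x = 0.0
for _ in range(100):      # the contraction constant k < 1 guarantees convergence
    x = f(x)

print(x, f(x) - x)        # x settles near 0.44, and the residual f(x) - x is essentially 0
```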
H: Monotonicity and strict order relations
Suppose we have a function $g$ that is differentiable (and hence continuous) and monotonically increasing on the interval $[P,Q]$. I know that this alone is not enough to imply that if $a,b\in [P,Q]$ and $a>b$, then $g(a)>g(b)$, because this is a strict order relation and we only know that $g$ is monotonically increasing. However, it is definitely true that $\forall m,n\in\mathbb{R}$, if $m>n$ then $m^3>n^3$. This strict order relation is definitely true, even though the cubic function $x^3$ is only monotonically increasing on $(-\infty,\infty)$. My thought is that this is true because the cubic function $x^3$ is strictly increasing except at $x=0$. This gave me the following thought: Question Suppose there is a function $f(x)$ that is differentiable for all $x\in [P,Q]$. Also, $f(x)$ is strictly increasing for $x\in[P,Q]$, except for finitely many values of $x$. And, over the interval $[P,Q], f$ is monotonically increasing. Is the following statement true?: $\forall m,n\in [P,Q]$, if $m>n$ then $f(m)>f(n)$ To clarify, the function is monotonically increasing on $[P,Q]$, and it is only stationary at finitely many points. I made this restriction so that the graph of $f(x)$ has no "horizontal lines" in $[P,Q]$, and this should mean that the strict order relation holds. To me, this is intuitively true, although I haven't been able to come up with a formal proof. Like the function $x^3$, the strict order relations will hold for my function $f$, because the function is only stationary at a finite number of points. Thanks in advance.
AI: Just prove it by contradiction. If $m>n$ and $f(n)=f(m)$ then $f(m)\geq f(x) \geq f(n)$ for all $x$ between $n$ and $m$ (by monotonicity), which makes $f$ constant throughout the interval $(n,m)$. But this contradicts your hypothesis.
H: contour integration complex variables
Using Contour integration, evaluate $$ \int_{-\infty}^{\infty} \frac{\cos x}{ (x^2 +1)^2}\, dx $$
AI: Consider integrating the function $$f(z) = \frac{e^{iz}}{(z^2+1)^2}$$ along the contour $C_R$ that runs from $-R$ to $R$ along the real axis and then back from $R$ to $-R$ along the semicircle of radius $R$ in the upper half plane. For $R>1$, by the residue theorem and $f$ being holomorphic in $\{Im(z)>0\} \setminus \{i\}$ we have $$\oint_{C_R}f(z) dz = 2\pi i \,\text{Res}_f(i)$$ Now $f$ has a pole of order 2 at $i$, so $$\text{Res}_f(i) = \left. \frac{d}{dz}\left(\frac{e^{iz}}{(z+i)^2}\right)\right|_{z=i} = -\frac{i}{2e}$$ Then let $R \rightarrow \infty$ and use Jordan's lemma to get that the integral along the semicircular arc of radius $R$ tends to 0. So $$\int_{-\infty}^{\infty}f(t) dt = 2\pi i\left(-\frac{i}{2e}\right) = \frac{\pi}{e}$$ Hence by taking the real part $$\int_{-\infty}^{\infty}\frac{\cos(x)}{(x^2+1)^2} dx = \frac{\pi}{e}$$
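A crude numerical check of the value $\pi/e$ (a sketch in Python; truncating to $[-L, L]$ is harmless because the integrand decays like $1/x^4$):

```python
import math

def integrand(x):
    return math.cos(x) / (1 + x ** 2) ** 2

L, steps = 100.0, 1_000_000
h = 2 * L / steps
approx = sum(integrand(-L + (k + 0.5) * h) for k in range(steps)) * h   # midpoint rule

print(approx, math.pi / math.e)   # both are about 1.15573
```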
H: Continuous function on $[0,1]\to [0,1]$ that is not Lipschitz Continuous?
Continuous function on $[0,1]\to [0,1]$ that is not Lipschitz continuous? One example I could perhaps think of is $f(x)=\sin(\frac{1}{x})$ where we define $f(0)=0$. Then this function has the required domain. Now, I was wondering: by defining $f(0)=0$, would we have continuity at $x=0$? Also, I guess this is not Lipschitz continuous since the derivative $f'(x)=-\frac{1}{x^2}\cos(\frac1x)$ is unbounded? How could I improve my argument? Thanks!
AI: Your function is not continuous at $0$, so it does not work. It also cannot be made continuous, since the limit as $x \to 0$ does not exist. Instead, consider $$f: [0,1] \to [0,1]: x \mapsto \sqrt{x}.$$ This is continuous but not Lipschitz-continuous, since the derivative is unbounded near $0$; concretely, $\frac{|\sqrt{x}-\sqrt{0}|}{|x-0|}=\frac{1}{\sqrt{x}}\to\infty$ as $x\to 0^+$, so no Lipschitz constant can work.
H: Find inverse of $[x+1]$ in factor ring $\mathbb{Q}[x]/\left\langle x^3-2 \right\rangle$
Find inverse of $[x+1]$ in factor ring $\mathbb{Q}[x]/\left\langle x^3-2 \right\rangle$. I remember that I need to use the extended Euclidean algorithm, but it has been some time, so I am a bit rusty. Thanks in advance! Edit: I tried it and, by solving $(x+1)(ax^2+bx+c)=1$, with $x^3=2$, I got the solution for the inverse: $[x^2/3 -x/3 + 1/3]$. Are the method and result correct?
AI: Yes, your undetermined-coefficients computation and the result $[x^2/3 - x/3 + 1/3]$ are correct. The extended Euclidean algorithm gives the same thing: $x^3 -2 = (x^2 -x +1) (x+1) - 3 $ and $ x+1 \, \, = (-x/3)(-3) + 1$. The second equation we may write as $(-x/3)(-3) = (x+1) -1$. So finally, $(x^3 - 2) (-x/3) = (x^2 -x +1) (x+1) (-x/3) - 3 (-x/3) $. The right side further becomes $(x^2 -x +1) (x+1) (-x/3) + (x+1) - 1 = (x+1) \big[p(x) \big] - 1 $ with $p(x)=(x^2-x+1)\left(-\frac x3\right)+1$. Thus $$ (x^3 -2)\left(-\frac{x}{3}\right) + 1 = (x+1)\,p(x), $$ so $[x+1][p(x)]=[1]$ in the factor ring. Reducing $p(x)=-\frac{x^3}{3}+\frac{x^2}{3}-\frac{x}{3}+1$ modulo $x^3-2$ (replace $x^3$ by $2$) gives $[x+1]^{-1}=\left[\frac{x^2}{3}-\frac{x}{3}+\frac13\right]$, matching your answer.
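An exact-arithmetic check of this in Python (a sketch; `mul_mod` is my own helper that multiplies coefficient lists and repeatedly replaces $x^3$ by $2$):

```python
from fractions import Fraction as F

def mul_mod(p, q):
    """Multiply two polynomials (coefficient lists, constant term first)
    and reduce modulo x^3 - 2, i.e. repeatedly replace x^3 by 2."""
    prod = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    while len(prod) > 3:
        c = prod.pop()                  # coefficient of x^k with k >= 3
        prod[len(prod) - 3] += 2 * c    # x^k = 2 * x^(k-3)
    return prod

x_plus_1 = [F(1), F(1), F(0)]               # 1 + x
candidate = [F(1, 3), F(-1, 3), F(1, 3)]    # 1/3 - x/3 + x^2/3
print(mul_mod(x_plus_1, candidate))         # [Fraction(1, 1), Fraction(0, 1), Fraction(0, 1)], i.e. 1
```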
H: How many possible passwords are there for 8-100 characters? Requirements/Restrictions: Minimum of 8. Maximum of 100. At least 1 letter from the latin alphabet (capitalisation doesn’t matter—g is same as G, 26 letters), at least 1 number (0-9, 10 numbers) and it may also include special characters (33 special characters/symbols). Here’s what I did: $26 \times 10 \times 69^6 \times 70^{92} = \\1574300283675196381393274771319731729003339411333808462\\ 6274484198008813380319383474217713060000000000000000000\\ 00000000000000000000000000000000000000000000000000000000000000000000000000$ One of the characters must be a letter (26), then another must be a number ($26 \times 10$), there are 6 spots left to reach the minimum, all the possible characters added together is $69 (= 33 + 26 + 10)$ so ($26 \times 10 \times 69^6$). Then there are 92 spots left, since they don’t have to be filled, the user has the option of leaving it blank, so now instead of 69 options, it’s 70. Hence, ($26 \times 10 \times 69^6 \times 70^{92}$). I would appreciate it if someone confirmed whether I am correct or not. Thank you :) AI: 26 alphabet characters, 10 numeric characters, 33 special characters (including blank space, which can be placed anywhere) - that’s 69 total For an N-length password: If there were no “at least 1” conditions, you’d have $69^N$ posibilities. Then, to satisfy “atleast 1 letter”, we need to remove all options without a letter: $(69-26)^N$. And to satisfy “at least 1 number”, we need to remove all options without a number: $(69-10)^N$. But then we need to add back in all options without a number OR a letter, i.e., with only special characters (since we’ve removed those twice): $33^N$ So, for an N-length password, the number of possibilities is $69^N - 43^N - 59^N + 33^N$ So for the total you want the sum of each different possible length, i.e., your answer is $$\sum_{N=8}^{100}(69^N - 43^N - 59^N + 33^N)$$ Stick this in some code to calculate it nice and quickly, and you get 7784830887958863955006123907413479732322179460547538436324402442938556231147248052155742398941820197907332216822738647248662040228937662527224075073302550548119136634116724084874159040 $\approx 7.78 \times 10^{183}$ options.
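Following the answer's suggestion to "stick this in some code", here is a minimal sketch in Python (its integers are arbitrary precision, so the exact 184-digit value comes out directly):

```python
total = sum(69 ** n - 43 ** n - 59 ** n + 33 ** n for n in range(8, 101))

print(total)              # the exact count, a 184-digit integer
print(f"{total:.3e}")     # roughly 7.78e+183, matching the quoted answer
```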
H: Uniform Convergence Infinite Series I have been given the following series: $$ \sum_{n=1}^\infty e^{-n^2x^2} $$ Let now $a>0$. Argue that the series converge uniformly on the interval $[a,\infty)=\{x\in\mathbb{R}:x\geq a\}$. To do this i have been using Weierstrass' M-test. First i have said that as the exponential function grows faster than the power function the following should be true: $ \frac{1}{e^{n^2x^2}}\leq \frac{1}{n^2} $. We know that $\sum_{n=1}^\infty \frac{1}{n^2}$ is convergent which makes it a convergent majorant series. Pr. Weierstrass' M-test the series $\sum_{n=1}^\infty e^{-n^2x^2}$ is then uniformly convergent. Is this approach okay? AI: You have to prove that your upper bound is valid. Use the fact that $e^{x} \geq x$ for $x >0$ (which follows from the Taylor expansion). Now $e^{n^{2}x^{2}} \geq e^{n^{2}a^{2}} \geq n^{2}a^{2}$ so $\frac 1 {e^{n^{2}x^{2}}} \leq \frac 1 {n^{2}a^{2}}$, now compare with the series $ \sum \frac 1 {n^{2}a^{2}}$ which is convergent.
H: Using definition of derivative with an inequality
The question is really simple but I'm not sure how I can prove it. Let $f : \mathbb{R} \rightarrow \mathbb{R}$ be a function that verifies: $\exists\, K \in \mathbb{R^+}, \phantom{1}\forall\, x,y \in \mathbb{R}: \lvert f(y)-f(x) \rvert \le K\lvert \cos y - \cos x \rvert$. Then $f$ is differentiable at $0$. I proved that $f$ is a Lipschitz function, since by the mean value theorem $$ \forall x,y \in \mathbb{R} \phantom{3}:\frac{|\cos y - \cos x|}{|y-x|}\le 1 \implies |\cos y - \cos x|\le|x-y| $$ Then $$\exists K \in \mathbb{R^+} \forall x,y \in \mathbb{R}: |f(y)-f(x)| \le K|\cos y - \cos x|\le K|x-y|$$ So taking $y = 0$ we have that $|f(0)-f(x)| \le K|0-x|$. But I'm still confused about how I can apply this inequality to $$ \lim_{x\to c} \frac{f(x) - f(c)}{x-c} $$ And if it's differentiable, what's the value of $f'(0)$?
AI: Observe that $$ \lvert f(y)-f(x) \rvert \le K\lvert \cos y - \cos x \rvert;\forall\, x,y \in \mathbb{R},\\ \\ \implies \frac{| f(y)-f(x)|}{|y-x|}\le K\frac{\lvert \cos y - \cos x \rvert}{|y-x|}, \forall x\ne y $$ $\therefore \lim_{x\to 0} \frac{| f(0)-f(x)|}{|0-x|}\le K\cdot\lim_{x\to 0}\frac{\lvert \cos 0 - \cos x \rvert}{|0-x|}=K\cdot |\sin 0|=0$. This is because $\cos$ is differentiable at $0$, with $\cos'(0)=-\sin 0=0$. So you have $\lim_{x\to 0} \frac{| f(0)-f(x)|}{|0-x|}=0$. You know that $\lim_{x\to c}g(x)=0 \Leftrightarrow \lim_{x\to c}|g(x)|=0 $, and hence $$f'(0)=\lim_{x\to 0}\frac{f(x)-f(0)}{x}=0$$.
H: Calculating positive elements of $a_n$ with formula for $a_{n+1}$
I know this is a very simple high-school problem, but there is one detail that won't let me sleep. The question: For all $n\in \mathbb{N}_+$, the sequence $\{a_n\}$ satisfies the following equations: $$ a_n+a_{n+1}=\frac{-n^2+3n+17}{n^2+1}\\ a_n-a_{n+1}=\frac{6n+19}{n^2+1} $$ Calculate which elements of the sequence are positive. Of course the task is very easy, you just need to add these two equations to get the formula for $a_n$ and from there you get that $a_n$ is positive when $n\in(-3;12)$, so the elements we are looking for are: $a_1,a_2,...,a_{11}$. However, my student asked me why, when trying to derive the formula for $a_{n+1}$ instead of $a_n$ (by subtracting these two equations), we get a completely different (and wrong) answer, instead of just shifting the interval by $1$. Unfortunately I couldn't answer his question, could you enlighten me?
AI: To summarize the discussion in the comments: The problem here is that the two recursions are not consistent. As a result there is no such sequence and the problem can not be answered as stated. To illustrate the problem, consider a simpler example: $$b_n-b_{n+1}=1\quad \&\quad b_n+b_{n+1}=1$$ Adding we get $b_n\equiv 1$ while subtracting yields $b_{n+1}\equiv 0$. Again, the issue is that the two recursions are inconsistent. I don't know that it's obvious that the given pair of recursions is inconsistent. The student's method (computing $a_{n+1}$ in two ways and getting two different answers) may well be optimal or close to it. As another method, you can use the closed form "solution" obtained via addition and check whether or not it satisfies the two recursions.
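The inconsistency is easy to see numerically; here is a sketch in Python with exact rational arithmetic (the helper names are mine):

```python
from fractions import Fraction as F

def S(n):   # right-hand side of  a_n + a_(n+1)
    return F(-n * n + 3 * n + 17, n * n + 1)

def D(n):   # right-hand side of  a_n - a_(n+1)
    return F(6 * n + 19, n * n + 1)

for n in (1, 2, 3):
    a_next_from_diff = (S(n) - D(n)) / 2          # a_(n+1), from subtracting the equations at n
    a_next_from_sum = (S(n + 1) + D(n + 1)) / 2   # a_(n+1), from adding the equations at n+1
    print(n, a_next_from_diff, a_next_from_sum)   # these should agree, but they never do
```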
H: How to compute $\parallel f \parallel_{L_2(\mathbb{R}^2)}$ for $f(x,y)=\frac{1}{1+(x-y)^2}$? So I want to compute $$\int\limits_\mathbb{R} \int\limits_\mathbb{R} \frac{1}{(1+(x-y)^2)^2} dxdy.$$ As I understand, I cannot reduce it to $1$-dimensional integrals, since Fubini's theorem requires the measure of the whole space to be finite. Thank you. AI: The integrand is nonnegative, so you can use Tonelli's theorem (the version of Fubini's theorem for nonnegative measurable functions, which only needs the measures to be $\sigma$-finite): \begin{align} \int_{\mathbb{R}^2} \dfrac{1}{\left(1+(x-y)^2\right)^2} \mathrm{d}\lambda &= \int_{\mathbb{R}} \left(\int_\mathbb{R}\dfrac{1}{\left(1+(x-y)^2\right)^2} \mathrm{d}x \right)\mathrm{d}y \\ &= \int_{\mathbb{R}} \left(\int_\mathbb{R}\dfrac{1}{\left(1+u^2\right)^2} \mathrm{d}u \right)\mathrm{d}y \text{ by substitution } u=x-y \\ &= \int_{\mathbb{R}} C\,\mathrm{d}y \\ &= +\infty \end{align} where $C = \int_\mathbb{R}\frac{\mathrm{d}u}{(1+u^2)^2} = \frac{\pi}{2}$ is a finite positive constant. In particular the double integral diverges, so $f \notin L_2(\mathbb{R}^2)$.
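For a numerical sanity check of both steps (an illustrative sketch using scipy; the truncation radii below are arbitrary): the inner integral is the finite constant $\pi/2$, while integrating that constant over larger and larger $y$-ranges grows without bound.

```python
import numpy as np
from scipy.integrate import quad

# Inner integral: C = integral over R of du / (1+u^2)^2, which equals pi/2.
C, _ = quad(lambda u: 1.0 / (1.0 + u**2)**2, -np.inf, np.inf)
print(C, np.pi / 2)                 # both about 1.5708

# Outer integral of the constant C over |y| <= R grows like 2*R*C, i.e. it diverges.
for R in [10, 100, 1000]:
    print(R, 2 * R * C)
```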
H: How to solve $ x^\top A x = 0$? We could assume $A$ is a positive-definite matrix, if that makes a difference. How does one solve the equation $x^\top A \; x = 0$ ? Is there a name for such an $x$ which is a solution to the above equation? AI: If $A$ is positive-definite, then by definition $x^\top A x > 0$ for every $x \neq 0$, so $x$ has to be the zero vector (in $\mathbb{R^n}$ or $\mathbb{C}^n$). This is a different story if $A$ is, say, only positive semi-definite: for a symmetric positive semi-definite $A$ one has $x^\top A x = 0$ exactly when $Ax = 0$, so the solutions form the null space (kernel) of $A$.
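A tiny numerical illustration of the two cases, with arbitrarily chosen example matrices:

```python
import numpy as np

A_pd  = np.array([[2.0, 1.0], [1.0, 2.0]])   # positive definite (eigenvalues 1 and 3)
A_psd = np.array([[1.0, 0.0], [0.0, 0.0]])   # positive semi-definite and singular

x = np.array([0.0, 1.0])
print(x @ A_pd  @ x)   # 2.0 -> nonzero, as for every nonzero x with a positive-definite A
print(x @ A_psd @ x)   # 0.0 -> a nonzero solution; here x spans the null space of A_psd
```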
H: How to solve this integral with transformation to polar coordinates? How do I determine the new limits when transforming to polar coordinates? I have this example, and I don't know how to solve it correctly. $$ \iint_D \frac{\ln\left(x^2+y^2\right)}{x^2+y^2}\,dx\,dy $$ where $D: 1\leq x^2+y^2\leq e^2.$ So I transformed $x$ and $y$ to polar coordinates: $x=r\cos\phi$ and $y=r\sin\phi$ ; $x^2+y^2=r^2\cos^2 \phi +r^2\sin^2 \phi=r^2.$ I got $$ \iint \frac{\ln\left(x^2+y^2\right)}{x^2+y^2}\,dx\,dy = \int d\phi \int \frac{\ln r^2}{r^2}r\,dr. $$ My question is how to determine the new limits of integration after transforming them to polar coordinates. AI: First off, $D$ is described in polar coordinates by $1\leq r\leq e$, and $0\leq \phi\leq 2\pi$. That's the easy part. You read the bounds for $r$ straight off the inequality for $x$ and $y$ (that inequality literally says $1\leq r^2\leq e^2$). And $D$ goes all the way around the origin, so $\phi$ goes from $0$ to $2\pi$. $D$ is rotationally symmetric, and bounded by circles centered at the origin, so this is as easy as it gets in polar coordinates. The slightly more tricky part is that $dx\,dy$ does not just become $dr\,d\phi$. Just like in the one-dimensional case, there is an additional factor appearing here when doing substitution, namely the Jacobian determinant of the change of variables. In this case, $dx\,dy$ becomes $r\,dr\,d\phi$. So what you want is $$ \int_1^e\left(\int_0^{2\pi}\frac{\ln(r^2)}{r^2}r\,d\phi \right)dr = 2\pi\int_1^e\frac{2\ln(r)}{r}\,dr $$ (where the inner $\phi$ integral disappears because the integrand does not depend on $\phi$).
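As a cross-check (a sketch, not part of the method): the reduced integral evaluates to $2\pi\int_1^e \frac{2\ln r}{r}\,dr = 2\pi\big[\ln^2 r\big]_1^e = 2\pi$, and a crude Cartesian Riemann sum over the annulus lands near the same value. The step size below is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import quad

# Reduced 1-D integral from the polar form: 2*pi * integral_1^e of 2*ln(r)/r dr.
val_polar, _ = quad(lambda r: 2 * np.log(r) / r, 1.0, np.e)
print(2 * np.pi * val_polar, 2 * np.pi)          # both about 6.2832

# Crude Riemann sum over the annulus 1 <= x^2 + y^2 <= e^2 in Cartesian coordinates.
h = 0.005
g = np.arange(-np.e, np.e, h)
X, Y = np.meshgrid(g, g)
R2 = X**2 + Y**2
mask = (R2 >= 1.0) & (R2 <= np.e**2)
print(np.sum(np.log(R2[mask]) / R2[mask]) * h * h)   # roughly 2*pi, up to discretization error
```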
H: Find $f(x,y)$ when $f(x)$ and $f(y)$ are known I have a problem related to the combination of 2 relations. I know the relation between the diffusion coefficient and the temperature (say D(T)) and I know the relation between the diffusion coefficient and the humidity (say D(H)). Now, I would like to write a function for the relation between the diffusion coefficient and both temperature and humidity (say D(T,H)). $D(T)=\frac{e^{-4.054-\frac{3151.5}{T}}}{3600}$ $D(H)=5.128\cdot 10^{-13}\cdot e^{13.5 H}$ Can I combine D(T) and D(H) to make a function D(T,H), and if so, then how can I do this? Thanks for the advice! AI: You can't do that. $f(x)$ is known only for a single $y_0$, and similarly $g(y)$ is known only for a single $x_0$. $f(x,y_0)$ and $f(x_0,y)$ do not allow you to extrapolate reliably to arbitrary $f(x,y)$. E.g. assume we have $f(x,1)=x$ and $f(1,y)=y$. This is compatible with $$f(x,y)=x+y-1$$ but also $$f(x,y)=xy,$$ two pretty different functions. Update: There are physical situations where you have enough insight into the modelled phenomenon to know that the effects of the variables are independent. For example, if you have multiplicative coefficients, you may adopt a separable model, such as $$f(x,y)=g(x)h(y).$$
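A minimal sketch of the non-uniqueness example from the answer: both candidate surfaces reproduce the one-variable data along the lines $y=1$ and $x=1$, yet disagree as soon as both variables move.

```python
f1 = lambda x, y: x + y - 1
f2 = lambda x, y: x * y

# Both reproduce the "measured" slices f(x, 1) = x and f(1, y) = y ...
print(f1(3.0, 1.0), f2(3.0, 1.0))   # 3.0 3.0
print(f1(1.0, 5.0), f2(1.0, 5.0))   # 5.0 5.0

# ... but they differ as soon as both variables move:
print(f1(2.0, 2.0), f2(2.0, 2.0))   # 3.0 4.0
```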
H: Wrong proof of $TM$ is diffeomorphic to $M\times \mathbb{R^m}$ I want to see what's wrong here: Let $M$ be a smooth manifold with dimension $m$. I will show $TM$ is diffeomorphic to $M\times \mathbb{R^m}$. proof) Define $F:TM\rightarrow M\times \mathbb{R^m}$ by $F(p,v)=(p,v^1,...,v^m)$ where $v=v^i\frac{\partial}{\partial x^i}\in T_pM$. Let $(U,\phi)$ be a chart containing $p$. Then, $(\pi^{-1}(U),\widetilde{\phi})$ is a chart containing $(p,v)$, where $\pi:TM\rightarrow M$ is given by $\pi(p,v)=p$ and $\widetilde{\phi}(p,v)=(\phi(p),v^1,...,v^m)$. And $(U\times \mathbb{R^m},\phi \times Id)$ is a chart containing $F(p,v)$. Using the above, $(\phi\times Id)\circ F\circ \widetilde{\phi}^{-1}:\widetilde{\phi} (\pi^{-1}(U))\rightarrow \phi(U)\times \mathbb{R^m}$ is the identity map (note that $\widetilde{\phi} (\pi^{-1}(U))$ is $\phi(U)\times \mathbb{R^m}$ by calculation). Thus $F$ is smooth. $F^{-1}:M\times \mathbb{R^m}\rightarrow TM$ is given by $F^{-1}(p,v^1,...,v^m)=(p,v)$ where $v=v^i\frac{\partial}{\partial x^i}\in T_pM$. With the above charts, we have that $\widetilde{\phi}\circ F^{-1}\circ (\phi\times Id)^{-1}:\phi(U)\times \mathbb{R^m}\rightarrow \widetilde{\phi}(\pi^{-1}(U))$ is also the identity map. Thus $F^{-1}$ is smooth. $\blacksquare$ But I know $TM$ may not be diffeomorphic to $M\times \mathbb{R^m}$. What's wrong in my proof? AI: First of all, the $F$ you have defined is not actually a map on the tangent bundle of the manifold. The point is, when you are explicitly using coordinates to define a map, what you need to check is whether such a thing is independent of the chart chosen. The reason is that the map you are actually defining takes a point on the manifold and gives you an output, but the same point on the manifold may have different coordinate representations. That is why you cannot just pick a chart and work with respect to it; the result simply might not give you a map on the manifold at all. In your case, for example, if I use a different local chart at that point, then a point on $TM$ ends up having two different coordinate representations, so what should $F$ be then? Edit: I thought I should mention this. Another very common mistake with using charts to define maps is a 'proof' that all smooth $n$-manifolds are orientable. Indeed, one just takes the 'everywhere positive $n$-form' given by $\omega = dx^1\wedge dx^2\wedge \cdots \wedge dx^n$. Spot the error. :-)
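A concrete numerical illustration of the chart dependence (a minimal sketch on $\mathbb{R}^2\setminus\{0\}$, using the standard Cartesian and polar charts): the same tangent vector at the same point gets two different component tuples, so a formula like $F(p,v)=(p,v^1,...,v^m)$ only makes sense after a chart has been fixed.

```python
import math

# Same point and the same tangent vector, expressed in two different charts on R^2 \ {0}.
x, y = 1.0, 1.0
vx, vy = 1.0, 0.0            # components in the Cartesian chart (d/dx, d/dy)

r = math.hypot(x, y)
vr     = (x * vx + y * vy) / r          # component along d/dr
vtheta = (-y * vx + x * vy) / r**2      # component along d/dtheta

print((vx, vy))              # (1.0, 0.0)
print((vr, vtheta))          # (~0.707, -0.5): a different tuple for the very same vector
```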
H: $\Lambda(f) = f(x)$ for each $x$ is a bounded linear functional on $N^*$ of norm $||x||$. In this case $N$ is a normed linear space and $N^*$ is the dual space with norm $$\|f\| = \sup_{\|x\|\leq 1} \{ |f(x)| \} $$ I am required to show that the mapping $\Lambda : f \to f(x)$ for each $x\in N$ is a bounded linear functional on $N^*$ with norm $\|x\|$. So far I have been able to show that $N^*$ is a Banach space. I gather that this site is discussing something similar http://mathonline.wikidot.com/the-natural-embedding-j but it is instead from $X^*$ to $X^{**}$ so I was unsure how similar it actually was. Any help would be really appreciated. (This is from Rudin Chapter 5). AI: From the very definition of $\|f\|$, you know that for every $x$ and every bounded linear functional $f$, \begin{align} | f(x) | \leqslant \|f\| \|x\|, \end{align} thus, if $x$ is fixed, the mapping $f \mapsto f(x)$ is bounded on the unit sphere $\left\{f~ |~ \|f\| = 1\right\}$ by $\|x\|$; in other words, $\|\Lambda\| \leq \|x\|$. For the reverse inequality, try to find an $f$ with $\|f\| = 1$ for which the bound is attained (for $x \neq 0$ the Hahn–Banach theorem produces such an $f$ with $f(x) = \|x\|$), or at least a sequence $f_n$ of norm $1$ with $|f_n(x)| \to \|x\|$; either way you get $\|\Lambda\| = \|x\|$.
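A finite-dimensional illustration (a sketch in $\mathbb{R}^2$ with the Euclidean norm, where functionals are dot products against vectors $w$ and the dual norm of $f_w$ is $\|w\|_2$): over unit-norm functionals, $|f_w(x)|$ never exceeds $\|x\|_2$, and the bound is attained at $w=x/\|x\|_2$.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.array([3.0, 4.0])                      # ||x|| = 5

# Sample unit-norm functionals f_w(.) = w . (.) and record |f_w(x)|.
ws = rng.normal(size=(10000, 2))
ws /= np.linalg.norm(ws, axis=1, keepdims=True)
vals = np.abs(ws @ x)

print(vals.max(), np.linalg.norm(x))          # the max stays <= 5 and gets very close to it
print(abs((x / np.linalg.norm(x)) @ x))       # exactly 5: the bound is attained
```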
H: Does $[-2, 3]\subset \operatorname{Im} f'$ for the defined function? I'm trying to prove that if a function $$ f : [-1, 1] \rightarrow \mathbb{R}$$ is continuous on $[-1, 1]$, differentiable on $(-1, 1)$ and verifies $$ f(-1) = 1,\phantom{1} f(0) = -1, \phantom{1} f(1) = 2 $$ then the interval $[-2, 3]$ is contained in the image of the derivative $f'$. I tried to solve it using the Intermediate Value Theorem, since $f(-1) \gt f(0) \Rightarrow \forall k \in (-1,1) \phantom{2}\exists c \in (-1,0):f(c) = k $ $f(0) \lt f(1) \Rightarrow \forall k \in (-1,2) \phantom{2}\exists c \in (0,1):f(c) = k $ But that didn't get me anywhere; any suggestions? AI: Define $$ q(x)=f(x)-f(x-1)$$ on $[0,1]$. Then $q$ is continuous, $q(0)=-2$, $q(1)=3$. If $-2\le c\le 3$, the IVT gives us $\xi\in[0,1]$ with $q(\xi)=c$. Then the MVT, applied to $f$ on the interval $[\xi-1,\xi]$ of length $1$, gives us $\eta\in (\xi-1,\xi)$ with $f'(\eta)=q(\xi)=c$.
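A concrete illustration (one arbitrary function meeting the data, not a proof of the general statement): the quadratic $f(x)=\tfrac{5}{2}x^2+\tfrac{1}{2}x-1$ takes the prescribed values, and for every $c\in[-2,3]$ there is a point of $(-1,1)$ where $f'$ equals $c$.

```python
import numpy as np

# An arbitrary example meeting the data: f(-1) = 1, f(0) = -1, f(1) = 2.
f  = lambda x: 2.5 * x**2 + 0.5 * x - 1.0
fp = lambda x: 5.0 * x + 0.5                  # its derivative

print(f(-1.0), f(0.0), f(1.0))                # 1.0 -1.0 2.0

# For every c in [-2, 3] there is eta in (-1, 1) with f'(eta) = c,
# mirroring the point the IVT/MVT argument guarantees in general.
for c in np.linspace(-2.0, 3.0, 6):
    eta = (c - 0.5) / 5.0                     # solve f'(eta) = c explicitly for this example
    print(c, eta, fp(eta))                    # fp(eta) reproduces c, and eta lies in (-1, 1)
```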