H: Total Variation Distance between Two Matrices I have two $n \times n$ matrices $P$ and $Q$. They are given as follows: $$P = \begin{bmatrix}p_{11}&p_{12}&\cdots&p_{1n}\\p_{21}&p_{22}&\cdots&p_{2n}\\\vdots&\vdots&\ddots&\vdots\\p_{n1}&p_{n2}&\cdots&p_{nn}\end{bmatrix}$$ $$Q = \begin{bmatrix}q_{11}&q_{12}&\cdots&q_{1n}\\q_{21}&q_{22}&\cdots&q_{2n}\\\vdots&\vdots&\ddots&\vdots\\q_{n1}&q_{n2}&\cdots&q_{nn}\end{bmatrix}$$ Now, I know that the Total Variation Distance between $P$ and $Q$ is at most $\delta$, i.e., $d_{TV}(P,Q) \leq \delta$. Is there any way I can relate this $\delta$ to the difference between each entry, for example $(p_{11} - q_{11})$, or $(p_{12} - q_{12})$, and so on? AI: No. In general, the best you can say is that $|p_{ij} - q_{ij}| \leq 2\delta$ for all $i,j$. This is clearly tight: take $P = 2\delta e_i e_j^T$ and $Q = 0$. (Note: this is an instance of the general fact that an $\ell_1$-norm bound gives only a very pessimistic bound on the $\ell_\infty$ norm.)
H: If $STU=\operatorname{Id}_V$ find $T^{-1}$ Suppose $V$ is a vector space of finite dimension and $S,T,U:V\to V$ are linear transformations. Suppose further that $STU=\operatorname{Id}_V$. Show that $T$ is invertible and determine $T^{-1}$. Show the statement is not necessarily true if the hypothesis that $V$ is finite-dimensional is removed. Well, I know that $S$, $T$ and $U$ are invertible, since $STU=\operatorname{Id}_V$ implies $S$, $T$ and $U$ are invertible. After that, $(ST)U=S(TU)$, so $(TU)S=\operatorname{Id}_V$, and applying $T^{-1}$ on both sides gives $US=T^{-1}$. Am I right or wrong? And I'm stuck on the last point. AI: For the first part you need to use the multiplicative property of the determinant. For the second part consider operations on the vector space of infinite sequences of elements of the field. To simplify matters let one of $S$ or $U$ be the identity. Can you think of two simple operations you can perform on a sequence, whose composition is the identity one way, but not if you compose them the other way?
H: How to find the closed form of $\int _0^{\infty }\frac{\ln \left(x^n+1\right)}{x^n+1}\:\mathrm{d}x$ Is there a closed form for $$\int _0^{\infty }\frac{\ln \left(x^n+1\right)}{x^n+1}\:\mathrm{d}x$$ I tried multiple techniques such as the elementary ones but none really work out, which leads me to think that it can maybe be expressed in special functions. Can you please help me find out whether this has a closed form or not? AI: Consider the following identity, $$\int _0^{\infty }\frac{1}{\left(x^n+1\right)^m}\:dx=\frac{1}{n}\:\frac{\Gamma \left(\frac{1}{n}\right)\Gamma \left(m-\frac{1}{n}\right)}{\Gamma \left(m\right)}$$ If we differentiate both sides with respect to $m$ we get, $$\int _0^{\infty }\frac{\ln \left(x^n+1\right)}{\left(x^n+1\right)^m}\:dx=\frac{1}{n}\frac{\Gamma \left(\frac{1}{n}\right)\Gamma \left(m-\frac{1}{n}\right)\left(\psi \left(m\right)-\psi \left(m-\frac{1}{n}\right)\right)}{\Gamma \left(m\right)}$$ Now setting $m=1$ will get us the result of your integral, $$\boxed{\int _0^{\infty }\frac{\ln \left(x^n+1\right)}{x^n+1}\:dx=-\frac{1}{n}\Gamma \left(\frac{1}{n}\right)\Gamma \left(1-\frac{1}{n}\right)\left(\gamma +\psi \left(1-\frac{1}{n}\right)\right)}$$ Where $\gamma$ is the Euler–Mascheroni constant and $\psi$ the digamma function. Some interesting values can be obtained with this, $$\int _0^{\infty }\frac{\ln \left(x^2+1\right)}{x^2+1}\:dx=-\frac{1}{2}\Gamma \left(\frac{1}{2}\right)\Gamma \left(\frac{1}{2}\right)\left(\gamma +\psi \left(\frac{1}{2}\right)\right)=-\frac{\pi }{2}\left(\gamma -\gamma -2\ln \left(2\right)\right)$$ $$=\pi \ln \left(2\right)$$
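As a quick sanity check (my own numerical sketch, not part of the original answer), one can compare the boxed formula against direct quadrature; `digamma` comes from scipy.special and `gamma` from the standard library:

```python
import math
import numpy as np
from scipy.integrate import quad
from scipy.special import digamma

def closed_form(n):
    # -(1/n) * Gamma(1/n) * Gamma(1 - 1/n) * (euler_gamma + psi(1 - 1/n))
    return -(1/n) * math.gamma(1/n) * math.gamma(1 - 1/n) * (np.euler_gamma + digamma(1 - 1/n))

def numeric(n):
    val, _ = quad(lambda x: math.log(x**n + 1) / (x**n + 1), 0, np.inf)
    return val

for n in (2, 3, 5):
    print(n, numeric(n), closed_form(n))
# n = 2 should give pi*ln(2) ~ 2.177586 in both columns
```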
H: Is this a lattice of $6$ elements? I am reading "Set Theory & General Topology" by Fuichi Uchida. In this book, the author wrote down all lattices ($15$ lattices) of $6$ elements. I wonder whether the following is also a lattice of $6$ elements, but it is not in the list written by the author. Is the following a lattice or not? AI: Call the two atoms (elements in the lower row) $a$ and $b$, and the two co-atoms (in the upper row) $x$ and $y$. This is not a lattice because $a$ and $b$ do not have a supremum (join). In particular, $x$ and $y$ are both upper bounds for $a$ and $b$, but are incomparable.
H: Need help with Alternative Factoring method I was working on some factoring, as I have always been terrible at it, when I found 3B1B's video on an easier method. There's a TL;DR at the bottom if you're familiar. The basics are as follows: Imagine the graph of a quadratic, $x^2 - 1$ for example. It's got 2 roots $r$ and $s$, each the same distance $d$ from a midpoint $m$. The method only works for equations that look like $x^2 + bx + c = 0$, so if you've got an $a$, scale everything else down by dividing everything by $a$. So now you've got that $r+s=-b$ and $r \cdot s = c$. We also know that $r=m-d$ and $s=m+d$. We can do some algebra to the above to realize the following: $m=\frac{-b}{2}$ and $d = \sqrt{m^2-c}$. So neat! We've got a simple way to factorize! Except something must have gone wrong, because I've gotten a wrong result. $2x^2 -5x -3 = 0$ scales down to $x^2 -\frac{5}{2}x -\frac{3}{2} = 0$, and after some crunching, I get that $r$ and $s$ are $1$ and $\frac{6}{4}$, respectively. That's not accurate: the right answers are $-\frac{1}{2}$ and $3$. What gives? TL;DR: Using the alternative quadratic method on $x^2 -\frac{5}{2}x -\frac{3}{2} = 0$ gets me the wrong x-intercepts. Why? And how can I make sure this doesn't happen again? AI: Given $$x^2+bx+c=x^2-\frac{5}{2}x-\frac{3}{2}$$ We get $m=\frac{-b}{2}=\frac{5}{4}$ and $$d=\sqrt{m^2-c}=\sqrt{\frac{25}{16}+\frac{3}{2}}=\sqrt{\frac{49}{16}}=\frac{7}{4}$$ Therefore our roots are $\frac{5}{4}\pm\frac{7}{4}=\{\frac{-1}{2},3\}$.
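A short script (my own illustration, not from the thread) that carries out the same midpoint-and-distance recipe on the example above:

```python
import math

# Solve a*x^2 + b*x + c = 0 via the midpoint/distance method:
# scale to x^2 + (b/a) x + (c/a), midpoint m = -(b/a)/2, half-distance d = sqrt(m^2 - c/a).
def roots(a, b, c):
    b, c = b / a, c / a
    m = -b / 2
    d = math.sqrt(m * m - c)   # assumes real roots, i.e. m^2 >= c
    return m - d, m + d

print(roots(2, -5, -3))  # (-0.5, 3.0)
```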
H: simple method for expanding binomial with 3 or more terms? I've seen Binomial Theorem Question (Expansion of Three Terms), Binomial Theorem with Three Terms, and Expanding Equation with Binomial Theorem, but I'm not such a math expert; I need things explained in simple terms. Basically I've heard that to solve $(x+y)^2$, it's essentially $(x+y)(x+y)$: you multiply the first terms (leftmost), then the first term of one with the last term of the other (outer), then the second term of one with the first term of the other (inner), then the two last terms of each (rightmost), and add them up, so $x \cdot x + x \cdot y + y \cdot x + y \cdot y = x^2 + 2xy + y^2$. That is pretty much all I know. Now if I want to expand a more complicated expression, with three or more terms, for example $(x + y + z)(x + y + z)$, would I use a similar method? Meaning, do I start with the leftmost terms, then $x \cdot y$, then what? Then do I do $x \cdot z$ and then move on to the next term, $y$, and do $y \cdot x + y \cdot y + y \cdot z$, and then do the same for $z$, meaning $z \cdot x + z \cdot y + z \cdot z$? Am I missing something here, or is that it? AI: You do not need any theorems or results other than the distributive property: $a(b+c)=ab+ac$. Applying this once, we have $(x+y)(x+y)=x(x+y)+y(x+y)$. Here, the left $x+y$ is playing the role of $a$, and the right $x$ and $y$ are playing the roles of $b$ and $c$ respectively. Then we can use the same property again to see $x(x+y)=x^2+xy$ and $y(x+y)=yx+y^2$. Since $xy=yx$, we have \begin{align} (x+y)(x+y)&=x(x+y)+y(x+y)\\ &=x^2+xy+yx+y^2\\ &=x^2+xy+xy+y^2\\ &=x^2+2xy+y^2. \end{align} For three terms we can do the same thing: \begin{align} (x+y+z)(x+y+z)&=x(x+y+z)+y(x+y+z)+z(x+y+z)\\ &=x^2+xy+xz+xy+y^2+yz+xz+yz+z^2\\ &=x^2+y^2+z^2+2xy+2xz+2yz. \end{align}
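If you want to check such expansions mechanically, a computer algebra system applies exactly this repeated distribution (a small illustration of mine, using sympy):

```python
from sympy import symbols, expand

x, y, z = symbols('x y z')
print(expand((x + y) * (x + y)))          # x**2 + 2*x*y + y**2
print(expand((x + y + z) * (x + y + z)))  # x**2 + 2*x*y + 2*x*z + y**2 + 2*y*z + z**2
```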
H: Can the interval defined in the fundamental theorem of calculus be $[-\infty,\infty]$? With regards to the fundamental theorem of calculus, the statement defines a continuous function $f$ inside a closed interval $[a,b]$. Most examples I can find online uses finite numbers for $a$ and $b$. However, in problem 3 c) starting at the bottom of page 12 of this problem set, the solution involves using the theorem where the one of the limits is $\infty$ or $-\infty$. Is this valid because $(-\infty,\infty)$ is considered both open and closed per this answer? AI: Riemann integrals are defined on bounded intervals. Functions can be Riemann integrable on these intervals, a term which I won't define, but is weaker than continuity but stronger than boundedness. That is, a Riemann integrable function on a bounded interval is always bounded (i.e. there exists some $M$ such that $|f(x)| \le M$ for all $x$ in the interval), and every continuous function is always integrable (a consequence of the fundamental theorem of calculus). We can extend this definition of Riemann integration to unbounded intervals, or intervals for which the function is unbounded (e.g. when asymptotes are involved), using limits. These are known as improper integrals. I won't go into the asymptotes; let's examine the integrals on unbounded intervals. The improper integral $\int_{-\infty}^a f(t) \, \mathrm{d}t$ is defined to be $$\int_{-\infty}^a f(t) \, \mathrm{d}t = \lim_{x\to -\infty}\int_x^a f(t) \, \mathrm{d}t.$$ If the definite integrals inside the limit don't exist, or the limit of this function of $x$ doesn't exist, then the improper integral doesn't exist. Similarly, $$\int_a^\infty f(t) \, \mathrm{d}t = \lim_{x\to \infty}\int_a^x f(t) \, \mathrm{d}t.$$ On the other hand, we define $$\int_{-\infty}^\infty f(t) \, \mathrm{d}t = \int_{-\infty}^a f(t) \, \mathrm{d}t + \int_a^\infty f(t) \, \mathrm{d}t,$$ where $a$ is some arbitrary element of $\Bbb{R}$ (it doesn't matter which; changing it will not change the result). Note that this requires both the one-sided improper integrals to exist independently of each other; if one or both of them do not exist, then the full integral does not exist. So, let's examine this with respect to the fundamental theorem of calculus. Note that, for a continuous function on $\Bbb{R}$, it is not possible to substitute in $\infty$ or $-\infty$, as such points simply don't exist in the domain. It is sometimes possible, however, to compute limits to $\infty$ or $-\infty$. Let's say we have a continuous function $f : \Bbb{R} \to \Bbb{R}$ and a corresponding antiderivative $F : \Bbb{R} \to \Bbb{R}$. Then, $$\int_{-\infty}^a f(t) \, \mathrm{d}t = \lim_{x \to -\infty} \int_x^a f(t) \, \mathrm{d}t = \lim_{x \to -\infty} [F(a) - F(x)].$$ Since $a$ is constant with respect to $x$, this limit will exist if and only if $\lim_{x \to -\infty} F(x)$ exists, and so $$\int_{-\infty}^a f(t) \, \mathrm{d}t = F(a) - \lim_{x \to -\infty} F(x),$$ which is as close to the FTC applying to unbounded intervals as you're going to get. Similarly, $$\int_a^\infty f(t) \, \mathrm{d}t = \lim_{x \to \infty} F(x) - F(a).$$ We finally get, $$\int_{-\infty}^\infty f(t) \, \mathrm{d}t = \int_{-\infty}^a f(t) \, \mathrm{d}t + \int_a^\infty f(t) \, \mathrm{d}t = \lim_{x \to \infty} F(x) - \lim_{x \to -\infty} F(x),$$ which again exists if and only if both limits exist independently. So, short answer is, yes, the FTC applies with unbounded intervals, provided you know to take limits rather than "substitute" in infinity.
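To make the last formula concrete (my own example, not from the answer): with $f(x)=1/(1+x^2)$ and $F=\arctan$, both limits exist and the improper integral over $(-\infty,\infty)$ equals $\pi$.

```python
from sympy import atan, oo, limit, symbols, integrate

x = symbols('x')
F = atan(x)  # an antiderivative of 1/(1+x**2)
total = limit(F, x, oo) - limit(F, x, -oo)
print(total)                                    # pi
print(integrate(1 / (1 + x**2), (x, -oo, oo)))  # pi, same result
```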
H: Prove that $n!$ is a divisor of $\prod_{k=0}^{n-1}\left(2^{n}-2^{k}\right)$ Prove that for any natural number $n$, $n!$ is a divisor of $\prod_{k=0}^{n-1}\left(2^{n}-2^{k}\right)$. I have already seen it here: $\prod_{i=0}^{n-1}(2^n-2^i)$ can be divided by $n!$, but my doubt is different. Solution - for $p=2$ we can easily prove that $v_{2}\left(\prod_{k=0}^{n-1}\left(2^{n}-2^{k}\right)\right)=\sum_{k=0}^{n-1} v_{2}\left(2^{n}-2^{k}\right) \geq n-1 \geq n-S_{2}(n)=e_{2}(n)$. Now let $p>2$; the author proves that $ v_{p}\left(\prod_{k=0}^{n-1}\left(2^{n}-2^{k}\right)\right) \geq\left\lfloor\frac{n}{p-1}\right\rfloor $ (using Fermat's little theorem) and $ e_{p}(n)=\frac{n-s_{p}(n)}{p-1} \leq \frac{n-1}{p-1}<\frac{n}{p-1} $, and since $e_{p}(n)$ is an integer, we must have $ e_{p}(n) \leq\left\lfloor\frac{n}{p-1}\right\rfloor $. Now I did not get this last step: why is $ e_{p}(n) \leq\left\lfloor\frac{n}{p-1}\right\rfloor $? We know that $x\ge\lfloor x\rfloor$, so how do we get the reversed sign? Thank you. AI: If you have an integer $n<5.5$, then it follows that $n\le \lfloor5.5\rfloor=5$. This is the same thing: since $e_p(n) < \frac{n}{p-1}$ and $e_p(n)$ is an integer, we have that $e_p(n) \le \lfloor \frac{n}{p-1}\rfloor$.
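A brute-force confirmation of the statement itself (my own check, independent of the valuation argument):

```python
from math import factorial, prod

for n in range(1, 25):
    p = prod(2**n - 2**k for k in range(n))
    assert p % factorial(n) == 0
print("n! divides the product for n = 1..24")
```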
H: A property associated with restricted class of analytic functions Question: Is my argument, described below, right? Let $f(z)$ be an analytic function in a region containing the unit disc with $f(z)\neq 0$ in $|z|<1,$ and suppose for some fixed $M>0,$ $$ \Re \frac{zf'(z)}{Mf(z)}\leq \frac{1}{2} $$ for all $z$ on $|z|=1$ for which $f(z)\neq 0.$ Then $$ \left|1-\frac{zf'(z)}{Mf(z)}\right|\geq \left|\frac{zf'(z)}{Mf(z)}\right| $$ for all $z$ on $|z|=1$ for which $f(z)\neq 0.$ But then $$ |Mf(z)-zf'(z)|\geq |f'(z)|, $$ on $|z|=1$ which further implies $$ |Mf(z)-zf'(z)|> |\lambda f'(z)| $$ on $|z|\leq 1$ and for any complex number $\lambda$ with $|\lambda|<1. $ Therefore for $|z|\leq 1$ and $|\lambda|<1, $ $$Mf(z)-zf'(z)-\lambda f'(z)\neq 0.$$ AI: What you have done is correct if you split the proof into the two cases $f'(z)=0$ and $f'(z) \neq 0$. In the first case use the hypothesis that $f(z)\neq 0$, so $Mf(z)-zf'(z)-\lambda f'(z)=Mf(z) \neq 0$. In the second case your argument works fine.
H: Integral of ydx + xdy I know this is a very simple question but why is this wrong?$$\int(xdy+ydx)=\int xdy+\int ydx=x\int dy+y\int dx=2xy$$ I saw a similar question on Stack Exchange, but it was too complicated for me to understand. I am in 11th Grade and I have just done basic differentiation and integration for physics. Any help would be appreciated! AI: Since $x,\,y$ are in general not independent, you can't treat $x$ as a constant as in $\int xdy=x\int dy$. Your original problem would make this clearer if you wrote $x(y)dy+y(x)dx$. In fact,$$\int(xdy+ydx)=\int d(xy)=xy+C.$$
H: How to describe ceiling- and floor-like functions that round to a specific decimal place? I am trying to describe floors and ceilings with non-integer factors. Rather than rounding up or down to the nearest integer, I need to for example round to the nearest 0.1. For example, in what I'm writing, $\lfloor3.21\rfloor$ should give $3.2$ How can I represent that? $\lfloor3.21\rfloor^{0.1}$ ? $\lfloor3.21\rfloor_{0.1}$ ? I could just guess and use a superscript or subscript, as shown above but I wanted to know if there's a more formal way of representing such non-integer floors/ceilings and I have not been able to find any. AI: You are of course free to devise your own notation, provided you define it before you use it. However, your notation should be designed to avoid confusion or ambiguity when used with other established notations; e.g., a superscript would be confused with exponentiation, as well as become unwieldy if you actually wanted to use both operations; e.g., what does $$(\lfloor 3.21 \rfloor^{0.1})^2$$ mean? But there is no "standard" or canonical notation for what you propose. The way we would mathematically write such an operation is to do what was proposed in one of the comments: for some base $b$, the expression $$\frac{\lfloor b^m n \rfloor}{b^m}$$ represents the greatest number less than or equal to $n$ to within $m$ digits of base-$b$ precision; e.g. for $b = 10$ and $m = 1$, this is the greatest tenth of an integer less than or equal to $n$. For $b = 2$ and $m = 3$, this is the greatest binary number to within $3$ bits precision less than or equal to $n$, so if $n = (11.3257)_{10}$, that is to say, we wrote this in base $10$, $$\frac{\lfloor 8n \rfloor}{8} = (11.25)_{10} = (1011.010)_{2}.$$
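If you also need to compute such a rounding (rather than just notate it), here is a small helper implementing $\lfloor b^m n\rfloor / b^m$ (my own illustration):

```python
import math

def floor_to(n, b=10, m=1):
    """Greatest multiple of b**(-m) that is <= n."""
    scale = b**m
    return math.floor(n * scale) / scale

print(floor_to(3.21))               # 3.2
print(floor_to(11.3257, b=2, m=3))  # 11.25, i.e. 3 bits of binary precision
```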
H: Proof verification of a number theory problem involving sequences. $\textbf{Question:}$ Does there exist an infinite sequence of integers $a_1, a_2, \ldots$ such that $\gcd(a_m, a_n) = 1$ if and only if $|m - n| = 1$? $\textbf{My solution:}$ Suppose we have an $n$-element sequence that satisfies the condition, say $a_1,\cdots,a_n$. Now take $n-1$ distinct primes that divide none of the elements of this sequence; call the primes $p_1,\cdots,p_{n-1}$. Then the sequence $a_1p_1,\cdots,a_{n-1}p_{n-1},a_n$ also satisfies the condition. Now, simply take $a_{n+1}=p_1\cdots p_{n-1}$. Then $a_1p_1,\cdots,a_{n-1}p_{n-1},a_n,a_{n+1}$ satisfies the condition. Hence, we can always increase the size of the sequence. In addition, $a,b$ with $(a,b)=1$ is a two-element sequence that satisfies the condition. Hence we can form an infinite sequence that satisfies the given conditions. If there is any flaw in my argument, do tell me. AI: As has been pointed out, you show the existence of arbitrarily long finite sequences. To construct an infinite sequence, you must come up with a construction that leaves the old terms unchanged. This should work: $$ a_n=p_{2n-1}p_{2n}\prod_{1\le i< 2n-3\atop i\equiv n\pmod 2}p_{i}$$ as $a_n$ and $a_{n-1}$ have no prime in common, whereas one of $p_{2m-1},p_{2m}$ divides $a_n$ if $1\le m<n-1$.
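One can test the proposed formula directly (my own check; `sympy.prime(i)` returns the $i$-th prime):

```python
from math import gcd, prod
from sympy import prime

def a(n):
    # p_{2n-1} * p_{2n} times the primes p_i with 1 <= i < 2n-3 and i = n (mod 2)
    extra = [prime(i) for i in range(1, 2*n - 3) if i % 2 == n % 2]
    return prime(2*n - 1) * prime(2*n) * prod(extra)

terms = [a(n) for n in range(1, 9)]
for m in range(1, 9):
    for n in range(m + 1, 9):
        assert (gcd(terms[m-1], terms[n-1]) == 1) == (n - m == 1)
print("gcd(a_m, a_n) = 1 exactly when |m - n| = 1, for 1 <= m < n <= 8")
```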
H: Countability of the set Let $f$ be differentiable function from $\mathbb{R}$ to $\mathbb{R}$. Consider the set $$A_y=\{x \in \mathbb{R} : f(x)=y \}$$ I want to know whether $A_y$ is countable for each $y\in \mathbb{R}$. I can verify using simple function like polynomial , exponential function, sine, cosine function; it is countable there. Is it true for any differentiable function? AI: No, for the simple reason that constant functions exist. But there are also nontrivial examples: the function $$ f(x) = \begin{cases} e^{-1/x^2}, &\text{if } x>0, \\ 0, &\text{if } x\le 0 \end{cases} $$ is differentiable everywhere.
H: Probability of a password not having 14 as the first two digits We have a password that consists of $4$ digits, where each digit is a numerical value from $0-9$. What is the probability that the first $2$ digits are not $14$? So, I know that the sample space is $10 \cdot 10 \cdot 10 \cdot 10 = 10 ^4$, but I was told that the total amount of passwords starting with 14 is $10^2$. Can anyone please explain to me why? I can't seem to wrap my head around how they got that answer. Is it because it's the first two digits, and therefore ($10$ possible outcomes for the first digit $\times\ 10$ possible outcomes for the second digit) $= 10^2$? AI: You have a $4$-digit password, and $2$ digits are fixed. The fact that the digits are $1$ and $4$ doesn't matter; whether the two chosen digits are the same or different also doesn't matter. The fact that it is the first two digits that are fixed also doesn't matter. The important information is that $2$ out of $4$ digits are already chosen; there is no choice regarding these. So when you're "choosing" your four-digit password you can ignore the first two digits. This leaves a choice of $2$ digits, each with $10$ possible states $(0-9)$, so there are $10^2$ possible passwords whose first two digits are $14$. Since there are $10^2$ passwords whose first two digits are $14$, there are $10^4-10^2$ ($10^4$ is the cardinality of the sample space) passwords whose first two digits are not $14$. So the probability that the first two digits of the password are not $14$ is $$\frac{10^4-10^2}{10^4}=99\%$$
H: Selection of four distinct non-consecutive natural numbers Four distinct numbers are randomly selected out of the set of the first 20 natural numbers. Find the probability that no two of them are consecutive. Let the set be $A=\{1,2,3,4,\ldots,19,20\}$. The number of ways of selecting 4 natural numbers is $^{20}C_4$. The answer is $\frac{^{17}C_4}{^{20}C_4}$, with no proper explanation provided. I would like to do this with the inclusion-exclusion principle, where we treat the cases of two, three, and then four consecutive numbers. AI: You can do this without the inclusion-exclusion principle, since the form of the given answer certainly suggests that way. First, select 4 arbitrary balls from 17 white balls that are lined up, by marking them red. Now insert one white ball after the first, second, and third red balls. This makes the total 20. Number the balls from 1 to 20. Do the numbers of the red balls satisfy the requirement (i.e. non-consecutive)? Do they cover all the possible cases? Indeed, the "non-consecutive" requirement can always be removed this way.
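A direct enumeration confirming the count (my own check): the number of $4$-subsets of $\{1,\dots,20\}$ with no two consecutive elements equals $^{17}C_4$.

```python
from itertools import combinations
from math import comb

good = sum(
    1
    for c in combinations(range(1, 21), 4)
    if all(b - a > 1 for a, b in zip(c, c[1:]))  # no two chosen numbers adjacent
)
print(good, comb(17, 4))   # 2380 2380
print(good / comb(20, 4))  # the probability, ~0.4912
```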
H: Will $\sqrt{h \sum_{i =0}^{N-1} (1 - u_i^2)^2} < C_1$ imply there exists $C_2$ satisfying $\sqrt{h \sum_{i=0}^{N-1} u_i^2} < C_2$ Assume the interval $[a, b]$ is divided by the uniform grid $x_i = a + ih$, $i = 0, 1, \cdots, N$, where $h = \frac{b - a}{N}$, and $\mathbf u = [u_0, \cdots, u_{N-1}]^T$ is a grid function. Will the boundedness of $\|1 -\mathbf u^2\|_{l^2}$, i.e. \begin{align*} \sqrt{h\sum_{i = 0}^{N-1} (1 - u_i^2)^2} \leq C_1, \end{align*} imply the boundedness of $\|\mathbf u\|_{l^2}$, i.e. that there exists $C_2$ satisfying \begin{align*} \sqrt{h \sum_{i = 0}^{N-1} u_i^2} \leq C_2? \end{align*} AI: $ a \in \mathbb R^N \mapsto (h\sum_{i=1}^N a_i^2)^{1/2}$ is a norm on $\mathbb R^N$. In particular, it satisfies the triangle inequality. Therefore $$ \| u^2\| \leq \| u^2 - 1\| + \| 1\| \leq C_1 + \sqrt {b-a} $$ and by the Cauchy–Schwarz inequality $$ \| u\|^2 \leq \| 1 \| \|u^2\| = \sqrt{b-a}\,\|u^2\| \leq \sqrt{b-a}\,(C_1 + \sqrt {b-a}) $$ so $$ \| u\| \leq C_2 $$ with $C_2 = \sqrt{\sqrt{b-a}\,(C_1 + \sqrt {b-a})}$. Details for the use of the Cauchy–Schwarz inequality: $$ \| u\|^2 = h \sum_i u_i^2 \leq h \left(\sum_i 1^2 \right)^{1/2} \left(\sum_i u_i^4 \right)^{1/2} = \| 1\| \| u^2 \| $$
H: Find all the elements of order $63$ in $S_{50}$ Find all the elements of order 63 in the permutation group $S_{50}$. I know that the number of elements of $S_{50}$ is $50!$, and $63=9\cdot7$, and that the number of cycles is $62!$, but I don't quite know how to provide an answer. I am new to this type of problem and I do not have many examples; could you provide a full proof, or at least in the form of an answer, such that it would serve as a model for similar problems I encounter? Thank you very much!!! AI: One of the cycle types of order $63$ in $S_{50}$ consists of a $7$-cycle and a $9$-cycle. For the $7$-cycle, there are $50$ possibilities for the first entry, $49$ for the second, $48$ for the third, $47$ for the fourth, $46$ for the fifth, $45$ for the sixth, and $44$ for the seventh; that's $50\times49\times48\times47\times46\times45\times44$ possibilities. However, since $(abcdefg)=(bcdefga)=(cdefgab)=(defgabc)=(efgabcd)=(fgabcde)=(gabcdef)$, we divide by $7$, to get $71916768000$ possibilities. Likewise, for the $9$-cycle, there are $43\times42\times41\times40\times39\times38\times37\times36\times35/9=22737334838400$ possibilities. Overall, the number of possibilities for a $7$-cycle and a $9$-cycle is the product of those two large numbers. You would also have to do similar computations to get the other possibilities with other cycle types.
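The arithmetic in the answer can be reproduced in a few lines (my own check; it covers only the $7{+}9$ cycle type, as in the answer):

```python
from math import factorial

seven = factorial(50) // factorial(43) // 7  # ordered choices for a 7-cycle, mod rotations
nine  = factorial(43) // factorial(34) // 9  # then a 9-cycle on the remaining 43 points
print(seven)         # 71916768000
print(nine)          # 22737334838400
print(seven * nine)  # elements that are exactly a 7-cycle, a 9-cycle, and fixed points
```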
H: Find all of the x-intercepts using the Newton Method. Use the Newton Method and find all of the x-intercepts of the function: $$f(x)=x^3-4x^2+1$$ The Newton Method for finding an x-intercept is: $$x_{i+1}=x_i - \frac{f(x_i)}{f'(x_i)}$$ Step $1$: $$f(0)=(0)^3-4(0)^2+1=1 > 0$$ $$f(1)=(1)^3-4(1)^2+1=-2 < 0$$ Thus: $$\frac{0+1}{2}=0.5=x_1$$ Step $2$: $$f(0.5)=0.125$$ $$f'(0.5)=-3.25$$ $$x_2=0.5- \frac{0.125}{-3.25}=0.5385$$ Step $3$: $$f(0.5385)=-0.00377...$$ $$f'(0.5385)=-3.43805325...$$ $$x_3=0.5385 - \frac{-0.00377...}{-3.43805325...}=0.53740...$$ It's fair to say that $x \approx 0.53740...$. However, from the Fundamental Theorem of Algebra we know that $f(x)$ should have $2$ more x-intercepts. How can I find those? AI: You're going to have to find out where else the function switches sign. The Newton method for finding intercepts basically takes you to the intercept closest to your initial point. So find out where else the function switches sign and repeat what you did here. It's a tad tedious, but that's how it is sometimes.
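A short script (my own sketch) running the same iteration from three starting points bracketing the sign changes; the cubic has one root in each of $(-1,0)$, $(0,1)$, and $(3,4)$:

```python
def newton(f, df, x, steps=50):
    for _ in range(steps):
        x = x - f(x) / df(x)
    return x

f  = lambda x: x**3 - 4*x**2 + 1
df = lambda x: 3*x**2 - 8*x

# f changes sign on (-1, 0), (0, 1) and (3, 4), so start once inside each bracket
for x0 in (-0.5, 0.5, 3.5):
    print(newton(f, df, x0))
# ~ -0.4729, 0.5374, 3.9355
```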
H: Is $U_{7^{n}}$ cyclic? Let $(U,\cdot)$ be a group such that $U=\{z\in \mathbb{C}|\exists n \in \mathbb{N}, z^{7^{n}}=1\}$. Prove that any proper subgroup of $U$ is cyclic. Is $U$ cyclic? I know that our group $U$ is $U_{7^{n}}$ and that any $z\in U$ can be written with the de Moivre formula, but I am not sure how to handle it from here on. I am new to this type of problem and I do not have many examples; could you provide a full proof, or at least in the form of an answer, such that it would serve as a model for similar problems I encounter? Thank you very much!!! AI: It's easy to show that $U$ can't be cyclic. First, note that $U$ is infinite. But $z \in U \Rightarrow z \in U_{7^n}$ for some $n \in \Bbb N$, so $z$ has finite order $7^n$ and the group it generates is necessarily finite. Thus, $z$ can't generate all of $U$. Now assume that $H \subseteq U$ is a proper subgroup and assume $z \in U \setminus H$. Choose $n$ such that $z \in U_{7^n}$. If $m \geq n$ and $w \in H$ with $\operatorname{order}(w) = 7^m$, then $\exists k ~ (w^k=z)$, which contradicts the assumption that $z \notin H$. Thus, $H \subseteq U_{7^m}$ for some $m \lt n$, which means $H$ is a subgroup of a cyclic subgroup, and therefore must itself be cyclic.
H: Wrong arrangement leads to wrong answer 10 ambassadors are being arranged uniformly at random in a row. What is the probability that the French ambassador is next to the Russian ambassador? The answer in the textbook is: "By viewing the two ambassadors as one, we infer that there are exactly 2! · 9! possibilities in which the French ambassador is next to the Russian ambassador". The answer is $\frac{2!\cdot 9!}{10!}$. I don't understand why it is written there as $9!$ instead of $8!$, because we have $2$ taken seats and are left with $8$ that can be arranged ($8!$ ways). AI: There is an alternative method which uses $8!$: There are $9$ adjacent pairs of seats to place these two ambassadors: $1\&2, 2\&3, 3\&4,\ldots,9 \&10$. These two ambassadors can be seated $2!$ ways in the selected pair of seats. The other eight ambassadors can be seated $8!$ ways in the remaining seats. Divide by $10!$ to reflect the number of originally equally likely possibilities, and that gives a result of $\frac{9\cdot 2!\cdot8!}{10!}= \frac15$ as before.
H: Union of finitely generated submodule is a finitely generated submodule?? Let $A_i$ be a finitely generated submodule of $M$, for all $i\in\mathbb{N}$. Then $\bigcup_{i\in\mathbb{N}}A_i$ is a finitely generated submodule of $M$. I know for normal submodule this is not true. AI: $\Bbb Q^{(\Bbb N)}:=\{f:\Bbb N\to\Bbb Q\,\mid\, \left\lvert\Bbb N\setminus f^{-1}(0)\right\rvert<\aleph_0\}$ is not finitely generated as a $\Bbb Q$-module, yet $\Bbb Q^{(\Bbb N)}=\bigcup_{x\in\Bbb Q^{(\Bbb N)}} \Bbb Qx$. This is a counterexample to your claim because $\left\lvert\Bbb Q^{(\Bbb N)}\right\rvert=\aleph_0$.
H: Prove that $\ker(T)$ of $T:V \rightarrow W$ is a subset of $V$ I understand how $\ker(T)$ would be a subspace of $V$ from the following post: Proof that a Kernel of a Linear Mapping is a Subspace. But how do we know that vectors in $\ker(T)$ would be in $V$ in the first place? Why is that a valid assumption? AI: It is based on the definition of the kernel: $\ker(T)$ is defined as the set of vectors in $V$ that map to the zero vector in $W$, or equivalently $$\ker(T)=\{\mathbf{v}\in V\mid T(\mathbf{v})=\mathbf{0}\},$$ so membership in $V$ is built into the definition. $T$ is a linear transformation, i.e. just a function that maps from $V$ to $W$ and fulfills the following conditions: $$T(u+v)=T(u)+T(v),$$ $$T(av)=aT(v),$$ $$T(0)=0$$ (the last of which follows from the second). That linear transformation can be represented as a matrix $A$ by defining the following functions: $$G^{-1}:\mathbb{R}^n\rightarrow V$$ $$H:W\rightarrow \mathbb{R}^m$$ Thus, our matrix $A$ can be written as: $$HTG^{-1}=A$$ By applying those functions to the basis vectors of $V$ you can write out the matrix $A$, where the columns of that matrix are the images of each basis vector.
H: Finite simple group has order a multiple of 3? Checking the list of finite simple groups, it seemed to me that all groups have order a multiple of $3$. This is clear for alternating groups and checked case by case for sporadic groups. For groups of Lie type it looked like the orders are always multiples of $q(q^2 - 1)$ for a prime power $q$, and this quantity is always a multiple of $3$. Upon closer inspection there is an outlier, namely the Suzuki groups. Are these the only exception? Is there a reason why this is the case, or is it just a corollary of the classification? I have seen that there are many constructions of the Suzuki groups. Could you recommend a reference to read about them? AI: First of all: we must assume that the group is non-abelian, otherwise the cyclic groups of prime order, except $\mathbb Z_3$, would already be counterexamples. The Suzuki groups have orders not divisible by $3$; however, $5$ is always a prime factor of the order of those groups. All the other finite simple non-abelian groups have order divisible by $3$.
H: Let $f$ be continuous. If $f(x) = 0 \implies f$ is strictly increasing at $x$, then $f$ has at most one root. This is similar to this question I asked yesterday. I just need someone to check my proof (or offer an alternative proof) of the following statement: Let $f : \mathbb R \rightarrow \mathbb R: x \mapsto f(x)$ be a continuous function. If $f(x) = 0 \implies f$ is strictly increasing on an open neighbourhood of $x$, then $f$ has at most one root. Here's my attempt at a proof by contradiction. Case 1. Let $x_1 < x_2$ be two roots with no other root in $(x_1,x_2)$. Since $f$ is strictly increasing on a neighbourhood of each root we can find $\delta > 0$ such that $f> 0$ on $(x_i,x_i+\delta)$ and $f<0$ on $(x_i-\delta,x_i)$. Using the intermediate value theorem we can find another root $c$ somewhere between $x_1$ and $x_2$, a contradiction. Case 2. By the first part we can always find a root of $f$ between any two given roots of $f$. Let $x_1 < x_2$ be two roots. We will show that $f = 0$ on $(x_1,x_2)$, which contradicts the fact that $f$ is strictly increasing at its roots. Let $\tilde x \in (x_1,x_2).$ Define $$x_1' = \sup \{ x \in [x_1,\tilde x] : f(x) = 0 \}$$ $$x_2' = \inf\{x \in [\tilde x,x_2]: f(x) = 0\}.$$ Since $x_i'$ is the $\inf$ (or $\sup$) of a bounded set we can find a sequence of roots which converges to $x_i'$, so by continuity of $f$ we have $f(x_1') = f(x_2') = 0.$ Clearly $ x_1' \leq \tilde x \leq x_2'$ so we need only consider the two following cases. If $\tilde x = x_1'$ or $\tilde x = x_2'$ then $f(\tilde x) = 0.$ If $\tilde x \in (x_1',x_2')$ then since $x_1'$ and $x_2'$ are roots we can find a new root $c$ in $(x_1',x_2')$. If $\tilde x \leq c$ then we have a contradiction with the definition of $x_2'$, and similarly $c \leq \tilde x$ contradicts the definition of $x_1'$. Therefore we must have $f(\tilde x) = 0.$ Therefore $f$ cannot have several roots, since $f$ would then be equal to $0$ on an interval, which contradicts the fact that $f$ is strictly increasing on a neighbourhood of its roots. AI: It looks correct, except for one thing. In Case 1, you wrote "Let $x_1<x_2$ be two roots with no other root in $(x_1,x_2)$." What you wrote after that is fine. But then, in Case 2, you wrote "By the first part we can always find a root of $f$ between any two given roots of $f$". But in Case 1 you had an extra assumption, namely that there is no root between $x_1$ and $x_2$. So, you cannot apply Case 1 to any two given roots of $f$. My suggestion then is this: do your proof in two steps: prove that between any two distinct roots of $f$ there has to be another root; use this to prove what you want to prove.
H: Is there any closed curve whose area is proportional to its perimeter? The question is in the title: Is there any closed curve whose area is proportional to its perimeter? If not, why is it so? Can it be proved? I tried all the simple shapes I know, but couldn't find a solution. AI: You cannot get more proportional than equality. There is a family of rectangles whose perimeters are equal to their areas: They have sides of the form $x + \sqrt{x^2-4x}$ and $x - \sqrt{x^2-4x}$ with $x \ge 4$, and so perimeters and areas each equal to $4x$. Nice examples from this family are $x = 4$ gives $4\times4 = 16$, $x = 4.5$ gives $6\times3 = 18$, $x = 6.25$ gives $10\times2.5 = 25$.
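Checking the family numerically (my own snippet): for each $x \ge 4$ the rectangle with these sides has perimeter and area both equal to $4x$.

```python
import math

for x in (4, 4.5, 6.25, 10):
    r = math.sqrt(x*x - 4*x)
    a, b = x + r, x - r
    print(f"{a} x {b}: perimeter = {2*(a+b)}, area = {a*b}")
# e.g. 6.0 x 3.0: perimeter = 18.0, area = 18.0
```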
H: Clarification regarding the outcome space of a stochastic process. In his book Stochastic Differential Equations - An Introduction with Applications, Øksendal gives the following definition of a stochastic process: A stochastic process is a parametrized collection of random variables $$\{ X_t\}_{t\in T} $$ defined on a probability space $(\Omega, \mathcal{F}, P)$ and assuming values in $\mathbb{R}^n$. He then notes that it may be useful to think of $t$ as time and each $\omega \in \Omega$ as an individual experiment, such that $X_t(\omega)$ would represent the result at time $t$ of the experiment $\omega$. He also notes that a path of a stochastic process is obtained by the mapping $t \mapsto X_t(\omega)$ for a fixed $\omega \in \Omega$. This seems to indicate that the outcome space $\Omega$ does not vary with time, and that the set of possible outcomes for each experiment, parametrized by $t$, is not dependent on $t$. However, it is not clear to me how this view would represent such experiments in this context. Take for instance the example of a random walk. At each time $t \in \mathbb{N}^+$ a coin is flipped. If the outcome is $H$, a step is taken vertically upwards; if $T$, a step downwards. If each $X_t$ would represent the step taken at time $t$, would not the outcome of the experiment (the coin toss at time $t$) be $\omega \in \{ H, T, \emptyset \} = \Omega$? But then, fixing $\omega' \in \Omega$, for each time $t$ the variable $X_t(\omega')$ would have the same outcome, so this cannot be the correct interpretation. The question becomes: What would each experiment be in this context, and is it then true that $\Omega = \{H, T, \emptyset\}$? In this context, how would the accumulated position of the random walk at time $t$ be formulated? AI: $\Omega$ often takes the form of the so-called canonical space: $\Omega=(\mathbb{R}^n)^T,$ rather than the $\mathbb{R}^n$ you seem to propose. In this case, we can take a slightly different approach. A natural choice would be $\Omega=\{1,-1\}^{\mathbb{N}^+}$. You might then define $X_t(\omega)=\sum_{i=1}^{\lfloor t\rfloor}\omega_i$.
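A concrete rendering of this setup (my own sketch): an outcome $\omega$ is an entire sequence of coin flips fixed up front, and the walk's position is $X_t(\omega)=\sum_{i\le t}\omega_i$.

```python
import random

T = 20
omega = tuple(random.choice((1, -1)) for _ in range(T))  # one point of Omega = {1,-1}^T

def X(t, omega):
    """Accumulated position of the walk at time t for the fixed outcome omega."""
    return sum(omega[:t])

path = [X(t, omega) for t in range(1, T + 1)]  # the path t -> X_t(omega)
print(path)
```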
H: Bourbaki's definition of function I saw this definition and I got confused by it: "Let E and F be two sets, which may or may not be distinct. A relation between a variable element x of E and a variable element y of F is called a functional relation in y if, for all x ∈ E, there exists a unique y ∈ F which is in the given relation with x. We give the name of function to the operation which in this way associates with every element x ∈ E the element y ∈ F which is in the given relation with x, and the function is said to be determined by the given functional relation. Two equivalent functional relations determine the same function." The thing that confused me in the definition above was the sentence "We give the name of function to the operation which in this way associates with every element x ∈ E the element y ∈ F which is in the given relation with x..." (he didn't define the word "operation"). In 1954, Bourbaki defined a function as a triple f = (F, A, B). Here F is a functional graph, meaning a set of pairs where no two pairs have the same first member, and he didn't use the term "operation", which he hadn't defined in the first definition. My problem with this definition is the fact that it does not resemble the notion of function as a process. My questions are: why did he define function as an operation in the first definition (without defining what an operation is in the first place)? Where does the notion of function as a process appear in either of those two definitions? Thank you for your patience and time! The definitions appear in the following links, paper, and book: https://en.wikipedia.org/wiki/History_of_the_function_concept "Evolution of the Function Concept: A Brief Survey by Israel Kleiner" https://en.wikipedia.org/wiki/Function_(mathematics) Nicolas Bourbaki - Set theory (book) AI: Maybe some more context will help... See Elements of Mathematics: Theory of Sets (Engl. transl. 1968). The "usual" mathematical object called a relation in set theory is called by Bourbaki a graph, i.e. a set of ordered pairs [II.3.1]. A graph is said to be functional [II.3.4: Def.9] when the "functionality" condition is satisfied. What Bourbaki calls a relation is an expression of the language, i.e. an atomic formula based on a predicate symbol, or a boolean combination, etc. [see I.1.1, page 16 and Remark page 20: "intuitively terms represent objects and relations represent assertions"]. What is a functional relation (as defined in I.5.3)? In a nutshell, it is a formula $\varphi(x,y)$ satisfying the condition: if $\varphi(x,y)$ and $\varphi(x,z)$, then $y=z$. Thus, a functional graph is a mathematical object while a functional relation is a linguistic object. Wiki's quote above is the English translation of the text of the first French edition: Bourbaki (1939). We can find in the Archives de l'Association des Collaborateurs de Nicolas Bourbaki the corresponding "manuscript". See page 8: "Soit $E$ et $F$ deux ensembles... Une relation..." ("Let $E$ and $F$ be two sets... A relation..."). If my conjecture is correct, the 2nd edition moved the "relation" name to the language of the theory and replaced it with "graph" and "correspondence" for the mathematical object.
H: $\mathbb Z$ has no accumulation point in $\mathbb C$? We say that $z_0$ is an accumulation point in a domain $D$ if there exists a sequence $(z_n)_{n\in \mathbb N}\subset D$ s.t. $z_n$ converges to $z_0$. I would like to know, using this definition, why $\Bbb Z$ has no accumulation point in $\Bbb C$. AI: You are using the wrong definition. It should be: $z_0$ is an accumulation point of a domain $D$ if there exists a sequence $(z_n)_{n\in\Bbb N}$ of elements of $D\setminus\{z_0\}$ such that $z_n$ converges to $z_0$. With this definition, $\Bbb Z$ has no accumulation points in $\Bbb C$: distinct integers are at distance at least $1$ from one another, so a convergent sequence of integers must be eventually constant, and its limit is then a term of the sequence, which the requirement $z_n\in D\setminus\{z_0\}$ rules out.
H: Sum $ \sum_{k=0}^\infty \frac{k^2}{4^k}$ I am an Economics undergraduate who was reading through a textbook on statistical theory. On one of the questions, I had to find the variance of $X$ for the joint probability distribution $f(x,y)=\frac{1}{4^{x+y}}$, where $x$ and $y$ were discrete random variables, $x=0,1,2,\ldots$ and $y=0,1,2,\ldots$ When calculating $Var(X)$ and trying to find $E(X^2)$, I got stuck at the summation of $\frac{X^2}{4^X}$ for $0\le X$. Previously, when I calculated $E(X)$, I was able to sum $\frac{X}{4^X}$ using an AGP. However, for the variance portion, I'm not sure what kind of series this is and what method I can use to derive an answer. Any help would be greatly appreciated! AI: So, you want the value of $$S := \sum_{k=0}^\infty \frac{k^2}{4^k}$$ Let us start with the geometric series, with $x \in (-1,1)$: $$\sum_{k=0}^\infty x^k = \frac{1}{1-x}$$ Take the derivative of this on both sides, multiply by $x$, then do both again. (The derivative of the sum can be taken term-by-term since the series converges absolutely.) You'll get that $$\sum_{k=0}^\infty k^2 x^k = \frac{x(x+1)}{(1-x)^3}$$ This can be applied to your case if you notice that $$S = \sum_{k=0}^\infty k^2 \left( \frac 1 4 \right)^k$$ i.e. use $x=1/4$.
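Verifying both the partial sums and the closed form (my own check; at $x=1/4$ the formula gives $\frac{(1/4)(5/4)}{(3/4)^3}=\frac{20}{27}$):

```python
s = sum(k * k / 4**k for k in range(100))  # partial sum; the tail is negligible
print(s)                                   # 0.7407407407407407
print(20 / 27)                             # 0.7407407407407407
print((1/4) * (1/4 + 1) / (1 - 1/4)**3)    # same value, from x(x+1)/(1-x)^3
```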
H: Finding the maximum volume of a tetrahedron with 3 concurrent edges Can someone help me out? I am not good at math. Thank you. Find the maximum volume of the tetrahedron that has three concurrent edges and satisfies the following conditions: the sum of the three edges is constant, and one edge is double the length of another edge. I found a formula to calculate an irregular tetrahedron without having to know the height, but I have no idea how to know which volume is larger, since the given formula is in a matrix format. Am I heading in the right direction on this question? The source of the formula (very new here, don't know how to insert a matrix): http://mathforum.org/dr.math/faq/formulas/faq.irreg.tetrahedron.html#:~:text=It%20is%20irregular%20if%20and,not%20all%20of%20equal%20measure. AI: If you use the relations between $x_1,x_2,x_3$ you are given, you can define the volume $V$ as a function of $x_2$ this way: $$ x_1=2x_2 \phantom{a},\phantom{a} x_2=x_2 \phantom{a},\phantom{a} x_3=L-3x_2. $$ So you end up with the volume being (it is $V=\frac{1}{6}x_1x_2x_3$ because that's the maximal-volume configuration for the irregular tetrahedron, with the edges making right angles between them): $$ V(x_2)=\frac{1}{6}x_1\cdot x_2\cdot x_3=\frac{1}{3}Lx^2_2-x^3_2. $$ If you differentiate this expression, you end up with $$ V'(x_2)=x_2\left(\frac{2L}{3}-3x_2\right). $$ The equation $V'(x_2)=0$ gives you two solutions. One is obviously $x_2=0$, but that's the minimum, not what we want. The other solution is the maximum, which is $\boxed{x_2=\frac{2L}{9}}$. This gives us the final results: $$\boxed{x_1=\frac{4L}{9},\quad x_2=\frac{2L}{9},\quad x_3=\frac{L}{3}}$$ $$\boxed{V=\frac{1}{6}\frac{8L^3}{3^5}=\frac{4L^3}{3^6}}$$ Tell me if there's anything you don't understand from my solution; I'll try to explain it to you.
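Double-checking the optimization with a CAS (my own snippet, not part of the answer):

```python
from sympy import symbols, diff, solve, Rational, simplify

x2, L = symbols('x2 L')
V = Rational(1, 6) * (2*x2) * x2 * (L - 3*x2)  # V = x1*x2*x3/6 with x1=2*x2, x3=L-3*x2
print(solve(diff(V, x2), x2))                  # [0, 2*L/9]
print(simplify(V.subs(x2, 2*L/9)))             # 4*L**3/729, i.e. 4L^3/3^6
```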
H: Can I solve this equation, that always gives me square root? I have this equation: Fig.1 I need to solve it for b, so I can square it: Fig.2 and use: Fig.3 But problem is, that I still have a square root there and I can't do anything more than just square it over and over again. Does anyone have an idea how to solve it? Thanks. AI: Do not square, but rewrite in the form $$p\cos b=q\sin b$$ or $$\tan b=\frac pq.$$
H: How is the sum of two Lebesgue integrable functions? I'm practicing for the final exam in real analysis and I am at the chapter on measure theory and integration. I found this exercise, but I don't know how to solve it... Could you please help me? Let $(X, \cal{A}, \mu)$ be a measure space and $f, g : X → \mathbb{R}$. Determine if the following implications hold in general: (i) both functions $f$ and $g$ are integrable $⇒ f + g$ is integrable; (ii) $f + g$ is integrable $⇒$ at least one of the functions $f$ or $g$ is integrable. AI: I assume that when you say integrable you mean that the integral of the function is finite. I believe that for item $(i)$ you can use the property that the integral is additive, and you will get your answer. For $(ii)$ I believe it is false, because you can consider $f=-x$ and $g=x$, and you get that $f+g=0$.
H: Applications of dominated convergence theorem for Lebesgue integrals I have been working through measure theory, specifically the dominated convergence of Lebesgue integrals and its applications such as differentiating under the integral sign. There I came across the following example: For $t>0$ it holds that $\int_{-\infty}^{\infty} e^{-x^2} \cos(tx) dx = \sqrt{\pi}e^{-t^{2}/4}$ In the solution, I see they first rewrite $\cos(tx) = \lim_{N\to\infty} S_N$, where $S_N=\sum_{n=0}^N \frac{(-1)^n (tx)^{2n}}{(2n)!},$ and then reorganize it to use the dominated convergence theorem as \begin{align} |e^{-x^2}S_N|&\leq \Big |e^{-x^2}\sum_{n=0}^N \frac{(-1)^n (tx)^{2n}}{(2n)!}\Big|\\ & \leq e^{-x^2} \sum_{n=0}^N \Big|\frac{(-1)^n (tx)^{2n}}{(2n)!}\Big| \\ &\leq e^{-x^2} \sum_{n=0}^N \frac{(tx)^{2n}}{(2n)!}\\ &\leq e^{-x^2} \sum_{n=0}^N \frac{(t_0x)^{2n}}{(2n)!} \\ &=e^{-x^{2}+t_{0} x} \\ &=e^{-(x-\frac{t_{0}}{2})^{2}} e^{\frac{t_{0}^2}{4}}=: g(x) \in L(\mathbb{R}) \end{align} where $t_0 > 0$ is a fixed number such that $t\in (0,t_0)$. I don't really see how $e^{-(x-t_0/2)^2} e^{{t_0}^2 /4} =: g \in L(\mathbb{R})$; here $L(X)$ denotes the set of all Lebesgue integrable functions. AI: The function $g:x\mapsto\exp(-(x-t_0/2)^2)\exp(t_0^2/4)$ lies in $L^1(\Bbb R)$ for each $t_0\in\Bbb R$. We can write $g(x)=A\exp(-(x-B)^2)$ where $A=\exp(t_0^2/4)$ and $B=t_0/2$. Since $x\mapsto\exp(-x^2)$ is Lebesgue integrable, so is its translate $x\mapsto\exp(-(x-B)^2)$, and so also the multiple of that, $x\mapsto A\exp(-(x-B)^2)$.
H: Proving that $f(x)=rx+x_0 $ is open in $(\mathbb{R}^n,\varepsilon_n)$ I have the following example in my lecture notes: In $(\mathbb{R}^n,\varepsilon_n)$ let $ B=B(O,1)$ be the open ball with center the origin $O$ and radius $1$. $B$ is homeomorphic to any open ball $ B(x_0,r)$ with respect to the euclidean metric. In fact, it is enough to observe that the mapping $f(x)=rx+x_0 $ has as an inverse $f^{-1}(y)=r^{-1}(y-x_0)$, that $f$ and $ f^{-1}$ are both continuous, and that $f(B)=B(x_0,r)$ (that is, $f$ is open: it maps open sets to open sets). I am having trouble proving $f(B)=B(x_0,r)$. Intuitively I know it must be like that, but I am having trouble writing it down formally. Besides, I am not so comfortable with the notation, since these look like numbers, but since we are in $\mathbb{R}^n$ we are dealing with vectors and matrices here. So far I have only that the origin is mapped to the new center $x_0$: $f(O)=f((0,0,\ldots,0))=r0+x_0=x_0$ How do I go about it? AI: The statement $f(B)=B(x_0,r)$ is a statement about set equality, so use that style of proof: Suppose that $y\in B$, then $\|y\|<1$. Then $f(y)=ry+x_0$, which means that we scale $y$ by $r$ and add $x_0$. Then, $\|x_0-f(y)\|=\|ry\|=r\|y\|<r$. Therefore, $f(y)\in B(x_0,r)$. On the other hand, suppose that $z\in B(x_0,r)$. Then $\|z-x_0\|<r$. Let $y=\frac{1}{r}(z-x_0)$. Then, $f(y)=r\left(\frac{1}{r}(z-x_0)\right)+x_0=z.$ Moreover, $\|y\|=\frac{1}{r}\|z-x_0\|<1$. Therefore, $z\in f(B)$.
H: Prove that there is a non-zero continuous function $f$ on $[-1,2]$ for which $\int_{-1}^2 x^{2n} f(x) \; dx = 0$ for all $n \geqslant 0$. I've been trying to find an example of a function $f \in \mathcal{C}[-1,2]$ with $f \neq 0$ such that $\int_{-1}^2 x^{2n} f(x) \; dx = 0$ for all $n \geqslant 0$, but I'm finding it very difficult. I thought about just proving existence (without finding an explicit function), but I'm not sure how to do it. I'm fairly certain it has something to do with the Stone-Weierstrass Theorem or the Weierstrass Approximation Theorem though. Any ideas to point me in the right direction would be greatly appreciated. AI: Since $x^{2n}$ is an even function $\forall n \in \mathbb{Z}$, for any odd function $f$ we have $\int_{-a}^{a}x^{2n}f(x)dx=0$. Under the condition that $f(x)$ continuously tapers off to $0$ as $x$ approaches $\pm a$, and $f(x)=0$ when $|x|\geq a$, we then have $\int_{-a}^{b}x^{2n}f(x)dx=0$ for any $b \ge a$. One such example is $f(x)=\arcsin(\sin(k\pi x))\lfloor e^{-x^2}+1-\frac{1}{e}\rfloor$ for any $k \in \mathbb{Z}$. An important note is that $f$ doesn't have to have kinks either; take for instance $f(x)=(1-\cos(2k\pi x))\lfloor e^{-x^2}+1-\frac{1}{e}\rfloor \operatorname{sgn}(x)$, where $\operatorname{sgn}$ is the sign function. In general, any odd function $O$ and any even function $A$ such that $A(x)=0$ when $|x|$ is larger than some $a$ satisfy: $$\int_{x_0}^{x_1}AO\,dx=0$$ for any bounds such that $\min(|x_0|,|x_1|) \geq a$ or $|x_0|=|x_1|$.
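A numerical spot-check of the first example on the interval $[-1,2]$ from the question (my own snippet; the floor factor is just the indicator of $|x|\le 1$):

```python
import numpy as np
from scipy.integrate import quad

k = 3
f = lambda x: np.arcsin(np.sin(k * np.pi * x)) * (abs(x) <= 1)

for n in range(4):
    val, _ = quad(lambda x: x**(2*n) * f(x), -1, 2, limit=200)
    print(n, val)   # all values close to 0, up to quadrature error
```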
H: How is the product of two Lebesgue integrable functions? Let $(X, \cal{A}, \mu)$ be a measure space and $f, g : X → \mathbb{R}$. Determine if the following implications hold in general: (i) both functions $f$ and $g$ are integrable $⇒ f \cdot g$ is integrable; (ii) $f \cdot g$ is integrable $⇒$ at least one of the functions $f$ or $g$ is integrable. For $(i)$, I think it follows immediately from the theory? For $(ii)$, I was thinking to find a counterexample, namely $f \cdot g$ integrable but neither $f$ nor $g$ integrable, but I don't know if it is possible... EDIT: For $(i)$, if we take $X=(0,1)$ and $$f(x)=g(x)=x^{-1/2},$$ then $f,g\in\mathcal{L}(X)$, but $fg=1/x\notin\mathcal{L}(X)$. AI: (i) It might well happen that $\int|f|d\mu<\infty$ and $\int f^2d\mu=+\infty$. (ii) Let $A$ be a set with $\mu(A)=\infty=\mu(A^{\complement})$. Then let $f$ be the indicator function of $A$ and let $g$ be the indicator function of $A^{\complement}$.
H: PDF goes unbounded. Is probability of event infinite? This is a follow-up from here: Curve above $x$ axis but area is negative? I have a PDF which has unit area, but it goes unbounded to infinity at $x=b$ (please refer to the attached link). Does it mean the probability of events near $b$ is infinite? AI: No. Assume $$f_X(x)=\begin{cases}-\ln x&,\quad 0< x\le 1\\0&,\quad \text{otherwise}\end{cases}$$ then the CDF would be $$F_X(x)=\begin{cases} 0&,\quad x\le 0\\ x-x\ln x&,\quad 0< x\le 1\\1&,\quad x>1\end{cases}$$ which is bounded (by $1$, like every CDF), even though it rises astonishingly steeply near $x=0$. Probabilities are areas under the density, and an unbounded density can still enclose only finite area, so no event ever has probability greater than $1$.
H: How do I find the integer solutions that satisfy $xyz = 288$ and $xy + xz + yz = 144$? Find all integers $x$, $y$, and $z$ such that $$xyz = 288$$ and $$xy + xz + yz = 144\,.$$ I did this using brute force, where $$288 = 12 \times 24 = 12 \times 6 \times 4$$ and found that these set of integers satisfy the equation. How do I solve this without using brute force? AI: Without loss of generality, suppose that $x\geq y\geq z$. From the given system of Diophantine equations, we obtain an Egyptian fraction problem: $$\frac{1}{x}+\frac{1}{y}+\frac{1}{z}=\frac{yz+zx+xy}{xyz}=\frac{144}{288}=\frac12\,.\tag{*}$$ Since $xyz=288>0$, the number of variables with negative values among $x$, $y$, and $z$ is either $0$ or $2$. We consider two cases. Case I: $x>0>y\geq z$. Let $u:=-y$ and $v:=-z$. Then, $$\frac{1}{x}-\frac1{u}-\frac1{v}=\frac{1}{2}\,.$$ Thus, $\dfrac{1}{x}>\dfrac12$, making $x<2$. Therefore, $x=1$. This implies $$yz=xyz=288$$ and $$y+z=x(y+z)=144-yz=144-288=-144\,.$$ Consequently, the polynomial $$q(t):=t^2+144t+288$$ has two roots $y$ and $z$. It is easily seen that $q(t)$ has no integer roots, so this case is invalid. Case II: $x\geq y\geq z>0$. Then, $$\frac{3}{z}\geq \frac{1}{x}+\frac{1}{y}+\frac{1}{z}=\frac12\,.\tag{#}$$ This shows that $z\leq 6$. Furthermore, it is clear that $z>2$. Hence, there are four possible values of $z$, which are $3$, $4$, $5$, and $6$. If $z=6$, then by (#), we conclude that $x=6$ and $y=6$. However, $xyz\neq 288$. This subcase yields no solutions. If $z=5$, then this is impossible, as $xyz=288$ implies that $z$ divides $288$. This subcase is eliminated. If $z=4$, then $$xy=\dfrac{288}{z}=\dfrac{288}{4}=72$$ and $$x+y=\dfrac{144-xy}{z}=\dfrac{144-72}{4}=18\,.$$ Thus, $t=x$ and $t=y$ are the roots of the quadratic polynomial $$t^2-18t+72=(t-6)(t-12)\,.$$ This means $x=12$ and $y=6$. If $z=3$, then $$xy=\dfrac{288}{z}=\dfrac{288}{3}=96$$ and $$x+y=\dfrac{144-xy}{z}=\frac{144-96}{3}=16\,.$$ Thus, $t=x$ and $t=y$ are the roots of the quadratic polynomial $t^2-16t+96$, but this polynomial has no real roots. In conclusion, all integer solutions $(x,y,z)$ to the required system of Diophantine equations are permutations of $(4,6,12)$. Remark. Note that all $(x,y,z)\in\mathbb{Z}^3$ that satisfy (*) are permutations of the triples listed below. $$(1,-3,-6)\,,\,\,(1,-4,-4)\,,\,\,(k,2,-k)\,,\,\,(4,3,-12)\,,\,\,(5,3,-30)\,,$$ $$(6,6,6)\,,\,\,(10,5,5)\,,\,\,(20,5,4)\,,\,\,(12,6,4)\,,\,\,(8,8,4)\,,$$ $$(42,7,3)\,,\,\,(24,8,3)\,,\,\,(18,9,3)\,,\text{ and }(12,12,3)\,,$$ where $k$ is any positive integer.
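A brute-force cross-check of the conclusion (my own snippet): since $x(yz)=288$, each variable divides $288$, so searching $|x|,|y|\le 288$ is exhaustive.

```python
solutions = set()
for x in range(-288, 289):
    for y in range(-288, 289):
        if x == 0 or y == 0:
            continue
        # xyz = 288 forces z = 288/(x*y) to be an integer
        if 288 % (x * y) == 0:
            z = 288 // (x * y)
            if x*y + y*z + z*x == 144:
                solutions.add(tuple(sorted((x, y, z))))
print(solutions)   # {(4, 6, 12)}
```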
H: Show that a partial derivative exists at $(0,0)$ $f: \mathbb{R}^2 \rightarrow \mathbb{R}$ $f(x,y)=\begin{cases} xy \frac{x^2-y^2}{x^2+y^2} & ;(x,y)\neq(0,0) \\ 0 & {; (x,y)=(0,0)} \end{cases}$ Show that all partial derivatives of $f$ exist everywhere and calculate them. Distinguish between $(x,y)=(0,0)$ and $(x,y)\neq(0,0)$. Show that $D_1D_2f(0,0)$ and $D_2D_1f(0,0)$ exist but $D_1D_2f(0,0)\neq D_2D_1f(0,0)$. $\frac{\partial}{\partial x}\left(xy\frac{x^2-y^2}{x^2+y^2}\right)=y\left(\frac{x^2-y^2}{x^2+y^2}+\frac{4x^2y^2}{(x^2+y^2)^2}\right)$, $\frac{\partial}{\partial y}\left(xy\frac{x^2-y^2}{x^2+y^2}\right)=x\left(\frac{x^2-y^2}{x^2+y^2}-\frac{4x^2y^2}{(x^2+y^2)^2}\right)$ Can someone help me with this? AI: From the definition we have $f_x(0,0)=f_y(0,0)=0$. Using these we obtain: $$f_{xy}(0,0) = \lim_{y \to 0}\frac{f_x(0,y)-f_x(0,0)}{y} = \lim_{y \to 0}\frac{-y^3}{y^3} = -1$$ $$f_{yx}(0,0) = \lim_{x \to 0}\frac{f_y(x,0)-f_y(0,0)}{x} = \lim_{x \to 0}\frac{x^3}{x^3} = 1$$ It can be shown that the mixed partial derivatives are discontinuous at $(x,y) = (0,0)$. Let's consider $(x,y) \ne (0,0)$: $$f_{xy}(x,y)=f_{yx}(x,y)=\frac{x^2-y^2}{x^2+y^2}\left(1+\frac{8x^2y^2}{(x^2+y^2)^2} \right)$$ Now if we approach the origin along $(\frac{a}{n}, \frac{1}{n})$, this expression tends to $\frac{a^2-1}{a^2+1}\left(1+\frac{8a^2}{(a^2+1)^2} \right)$, which depends on $a$, so the mixed partials have no limit at $(0,0)$.
H: How to solve $\omega^4-[(\frac{eB}{m})^2+2\omega_0^2]\omega^2+\omega_0^4=0$ in the simplest way I was solving a normal-mode problem and got a different result for the quadratic equation. The book provides a simpler solution than mine so I suspect I am the one who's wrong. Let's check it out. Let us start from the following determinant $$ \begin{vmatrix} \omega_o^2-\omega^2 & \frac{-ieB\omega}{m} \\ \frac{ieB\omega}{m} & \omega_o^2-\omega^2 \\ \end{vmatrix}= (\omega_o^2-\omega^2)^2-\Big(\frac{eB\omega}{m}\Big)^2=\omega^4-\Big[\Big(\frac{eB}{m}\Big)^2+2\omega_0^2\Big]\omega^2+\omega_0^4 $$ OK so far. From here on I proceeded as follows; I looked for the roots, i.e. $\omega^4-\Big[\Big(\frac{eB}{m}\Big)^2+2\omega_0^2\Big]\omega^2+\omega_0^4=0$ $$\omega^2= \frac 1 2 \Big[\Big(\frac{eB}{m}\Big)^2+2\omega_0^2 \pm\ \sqrt{\Big[\Big(\frac{eB}{m}\Big)^2+2\omega_0^2\Big]^2-4\omega_0^4}\Big]$$ This leads to pretty ugly roots for $\omega$. However, the book states that $(\omega_o^2-\omega^2)^2-\Big(\frac{eB\omega}{m}\Big)^2$ leads to $\omega^2 \pm \frac{eB\omega}{m} - \omega_o^2$. This leads to good looking roots for $\omega$. My struggle is that I do not see how to show that's indeed the case. AI: Factorise $(\omega_0^2 - \omega^2)^2 - \left({\dfrac {eBw} m}\right)^2$ by difference of two squares and you get: $\left({\omega_0^2 - \omega^2 - \left({\dfrac {eBw} m}\right) }\right) \left({\omega_0^2 - \omega^2 + \left({\dfrac {eBw} m}\right) }\right)$ which gets you practically there. Note that the equation you are left with is itself a quadratic which has not yet been solved.
H: Problem with summation by method of difference Question: What would be the result of: $$\sum_{k=1}^{n}\frac{1}{k(k+2)}$$ My Approach: Let $T_n$ denote the $n^{th}$ term of the given series. Then we have $$T_1=\frac12 \left(\frac11-\frac13\right)$$ $$T_2=\frac12 \left(\frac12-\frac14\right)$$ $$T_3=\frac12 \left(\frac13-\frac15\right)$$ And so on up till $$T_n=\frac12 \left(\frac1n-\frac1{n+2}\right)$$ I can see that the series telescopes and the terms start to cancel each other after a gap of one term. My only problem is, how do I find the terms that remain in the end? AI: For all $k \geq 1$ you have $$ \frac{1}{k(k+2)} = \frac{1}{2} \left( \frac{1}{k} - \frac{1}{k+2} \right) $$ so $$ \begin{aligned} \sum_{k=1}^n \frac{1}{k(k+2)} &= \frac{1}{2}\sum_{k=1}^n \left( \frac{1}{k} - \frac{1}{k+2} \right) \\ &= \frac{1}{2}\left( \sum_{k=1}^n \frac{1}{k} - \sum_{k=1}^n \frac{1}{k+2} \right) \\ &= \frac{1}{2}\left( \sum_{k=1}^n \frac{1}{k} - \sum_{k=3}^{n+2} \frac{1}{k} \right) \\ &= \frac{1}{2}\left( 1 + \frac{1}{2} - \frac{1}{n+1} - \frac{1}{n+2}\right) \\ \end{aligned} $$
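The closed form is easy to confirm with exact arithmetic (my own check):

```python
from fractions import Fraction

def partial(n):
    return sum(Fraction(1, k * (k + 2)) for k in range(1, n + 1))

for n in (1, 5, 50):
    closed = Fraction(1, 2) * (1 + Fraction(1, 2) - Fraction(1, n + 1) - Fraction(1, n + 2))
    print(n, partial(n) == closed)   # True for every n
```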
H: How to Calculate center of mass for 20 dimensions particles I have a problem with calculating the center of mass of 20-dimensional particles, something like this: A = [1 6 8 54 6 8 5 4 8 9 6 4 7 9 6 6 3 8 43 9] , Mass = 0.25 B = [2 6 3 4 6 8 4 4 8 5 6 4 2 2 6 6 3 8 1 1] , Mass = 0.6 C = [4 3 4 53 6 2 5 21 8 1 6 2 37 2 6 2 3 1 43 9] , Mass = 0.05 Could anyone guide me to find the center of mass in each dimension ($x_1, x_2, \ldots, x_{20}$)? Thanks AI: The center of mass is simply a weighted average. That is, we will have $$ P_{\text{center}} = \frac{0.25 A + 0.6 B + 0.05 C}{0.25 + 0.6 + 0.05}. $$
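In code, this is a single weighted average along the particle axis (my own sketch, using numpy; the arrays are the ones from the question):

```python
import numpy as np

A = np.array([1, 6, 8, 54, 6, 8, 5, 4, 8, 9, 6, 4, 7, 9, 6, 6, 3, 8, 43, 9])
B = np.array([2, 6, 3, 4, 6, 8, 4, 4, 8, 5, 6, 4, 2, 2, 6, 6, 3, 8, 1, 1])
C = np.array([4, 3, 4, 53, 6, 2, 5, 21, 8, 1, 6, 2, 37, 2, 6, 2, 3, 1, 43, 9])
masses = np.array([0.25, 0.6, 0.05])

# Weighted average over the three particles, one value per dimension x1..x20
center = np.average(np.stack([A, B, C]), axis=0, weights=masses)
print(center)
```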
H: If a finite sum is a unit, then it has a term that is a unit. Source: Theorem 19.1 (A First Course in Noncommutative Rings by T.Y. Lam), Local Ring on Wikipedia. Theorem 19.1 For any nonzero ring $R$, the following statements are equivalent: (1) $R$ has a unique maximal left ideal. (2) $R$ has a unique maximal right ideal. (3) $R/\operatorname{rad} R$ is a division ring. (4) $R\setminus U(R)$ is an ideal of $R$. (5) $R\setminus U(R)$ is a group under addition. (6) For any $n$, $a_1+\cdots+a_n\in U(R)$ implies that some $a_i\in U(R)$. (7) $a+b\in U(R)$ implies that $a\in U(R)$ or $b\in U(R)$. In the sketch of the proof by Lam, it is said that (4)=>(5)=>(6)=>(7) are tautologies. Suppose $R$ is commutative and $R$ is a local ring; then it satisfies (1) and (2). How does one prove (6)? AI: If you mean "prove 6 from 5", then I'll comment on that. Suppose (5) holds and $\sum a_i\in U(R)$. Then if all the $a_i\in R\setminus U(R)$, it would follow from (5) that $\sum a_i\notin U(R)$, since $\sum a_i\in R\setminus U(R)$. So apparently one of the $a_i$'s has to fall in $U(R)$. I guess you did not mean "prove 7 from 6", because 7 is just a special case of 6.
H: Check if a sequence converges. Suppose that a sequence $(x_n)$ in $\mathbb R$ satisfies $x_{n+1} = 1 -\sqrt{1 - x_n}$ for all $n \in \mathbb N$. Show that $(x_n)$ converges. To what does it converge? Does $(x_{n+1}/x_n)$ converge? I have solved the first part and found that, except for $x_1=1$, all other initial values make the sequence converge to $0$. For the second part (if the ratio converges): if the initial value is $1$, the ratio surely converges to $1$, as all terms are $1$. If the initial value is $0$, it doesn't, as none of the terms of the ratios are defined. For the intermediate values I am not sure how to prove that the ratio converges to some number (which my intuition tells me to be $1$), as I can't use the limit-of-ratios result, since the limits are $0$. Constraint: I can't use Taylor series, as the book hasn't reached there yet. AI: If the sequence converges to some limit $z$, it must satisfy $$ z = 1-\sqrt{1-z} \Rightarrow (1-z)^2 = 1-z \Leftrightarrow z=0 \vee z=1. $$ So, if the sequence converges, it must converge to either $0$ or $1$. In fact, the sequence will converge to $0$, unless $x_0=1$, in which case it will converge to 1. Now, if $x_0<1$, $$ \lim \frac{x_{n+1}}{x_n} = \lim \frac{1-\sqrt{1-x_n}}{x_n}=\lim\frac{x_n}{x_n (1+\sqrt{1-x_n})}=\frac 12. $$ If $x_0=1$, $\lim \frac{x_{n+1}}{x_n} = \frac{1}{1}=1$. If $x_0 = 0$, the sequence $\frac{x_{n+1}}{x_n}$ is not even defined.
H: Implicit function theorem for $f(x,y,z)=z^2x+e^z+y$ $f: \mathbb{R}^3 \rightarrow \mathbb{R}$, $f(x,y,z)=z^2x+e^z+y$ Show that there exist a neighbourhood $V$ of $(1,-1)$ in $\mathbb{R}^2$ and a continuously differentiable function $g:V \rightarrow \mathbb{R}$ with $g(1,-1)=0$ and $f(x,y,g(x,y))=0$ for $(x,y) \in V$. Calculate $D_1g(1,-1)$ and $D_2g(1,-1)$. $\nabla f(x,y,z)=\begin{pmatrix} f_x\\ f_y\\ f_z \end{pmatrix}=\begin{pmatrix} z^2\\ 1\\ e^z+2zx \end{pmatrix}$ Can someone help me with this? AI: Show that $f(1,-1,0)=0$ and $f_z(1,-1,0) \ne 0$. The implicit function theorem then shows that the first claim holds. For $(x,y) \in V$ we then have $$(*) \quad 0=xg(x,y)^2+e^{g(x,y)}+y.$$ Differentiate $(*)$ with respect to $x$, plug in $(x,y)=(1,-1)$, then you can compute $D_1g(1,-1).$ Differentiate $(*)$ with respect to $y$, plug in $(x,y)=(1,-1)$, then you can compute $D_2g(1,-1).$
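Carrying the suggested computation to the end symbolically (a minimal SymPy sketch; it uses the standard implicit-function-theorem formula $D_i g = -f_{x_i}/f_z$, which is exactly what differentiating $(*)$ and solving gives):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = z**2 * x + sp.exp(z) + y

point = {x: 1, y: -1, z: 0}
fz = sp.diff(f, z)
print(f.subs(point), fz.subs(point))        # 0 and 1, so the theorem applies
print((-sp.diff(f, x) / fz).subs(point))    # D_1 g(1,-1) = 0
print((-sp.diff(f, y) / fz).subs(point))    # D_2 g(1,-1) = -1
```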
H: Finding the total number of possible matches Consider six players $P_1, P_2, P_3, P_4, P_5$ and $P_6$. A team consists of two players. (Thus, there are $15$ distinct teams.) Two teams play a match exactly once if there is no common player. For example, team $\{P_1, P_2\}$ can not play with $\{P_2, P_3\}$ but will play with $\{P_4, P_5\}$. Then the total number of possible matches is? I've tried to count it in an easy way but I always end up getting confused, can someone help? AI: Hint: how many teams does $\{P_1,P_2\}$ have to play? The answer is the same for each team. If you add up the number of matches played by each team, you have counted every match twice.
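Following the hint: each team plays the $\binom{4}{2}=6$ teams formed from the remaining four players, and summing over all $15$ teams counts every match twice, giving $15\cdot 6/2 = 45$. A brute-force check (a minimal Python sketch):

```python
from itertools import combinations

teams = list(combinations(range(1, 7), 2))           # 15 two-player teams
matches = [(s, t) for s, t in combinations(teams, 2)
           if not set(s) & set(t)]                   # disjoint teams meet exactly once
print(len(teams), len(matches))                      # 15 45
```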
H: Density of $\sqrt{Z}=\sqrt{X+Y}$ Let $(X,Y)$ be a random variable with density $f_{XY}(x,y)=\frac{1}{2}(x+y)e^{-(x+y)},x>0,y>0$. Verify that it is indeed a density i.e : $\rightarrow \frac{1}{2}\int_{0}^{+\infty}[\int_{0}^{+\infty}(xe^{-x}e^{-y}+ye^{-x}e^{-y})dy]dx=1$ Find the marginal densities and the expected value of $X$. $\rightarrow f_X(x)=\frac{1}{2}e^{-x}(x+1)\mathbb{I}_{[0,+\infty)}(x);f_Y(y)=\frac{1}{2}e^{-y}(y+1)\mathbb{I}_{[0,+\infty)}(y);\mathbb{E}(X)=\frac{3}{2}$ Find the density of $Z=X+Y$. $\rightarrow X+Y\sim \Gamma(3,1)$ Find the density of $\sqrt{Z}$. For point 4), I wrote $\left\{\begin{matrix} \sqrt{x+y}=u\\ x=v\end{matrix}\right.\Rightarrow \left\{\begin{matrix} y=u^2-v\\ x=v\end{matrix}\right.\rightarrow \mathbb{E}[g(\sqrt{Z})]=\int_{0}^{+\infty}[\int_{\sqrt{v}}^{+\infty}u^3e^{-u^2}du]dv=\frac{1}{2}\int_{0}^{+\infty}[\int_{v}^{+\infty}te^{-t}dt]dv=\frac{\Gamma(3)}{2}$. Where am I wrong? Thanks in advance for any help. AI: point 4) is requesting the density of $\sqrt{Z}$ Without a lot of calculation, having already found $f_Z(z)=\frac{z^2}{2}e^{-z}$ $z>0$ Applying the known formula $f_Y(y)=f_X[g^{-1}(y)]|\frac{d}{dy}g^{-1}(y)|$ immediately you get $f_W(w)=\frac{w^4}{2}e^{-w^2}2w=w^5e^{-w^2}$ $w>0$
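A Monte Carlo sanity check of the result (a minimal NumPy sketch): sample $Z\sim\Gamma(3,1)$ as found in part 3, set $W=\sqrt Z$, and compare the empirical histogram with $f_W(w)=w^5e^{-w^2}$.

```python
import numpy as np

rng = np.random.default_rng(0)
w = np.sqrt(rng.gamma(shape=3.0, scale=1.0, size=10**6))

hist, edges = np.histogram(w, bins=60, range=(0.0, 4.0), density=True)
mids = (edges[:-1] + edges[1:]) / 2
print(np.max(np.abs(hist - mids**5 * np.exp(-mids**2))))   # small: Monte Carlo error only
```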
H: A quadratic form problem. Given a symmetric $n\times n$ real matrix $A$,if we have for all $x\in \mathbb{R}^n$ ,$\|x\|_2=1$,and $x^tAx = c$ for some constant $c$. Prove that $A = \lambda I$ for some $\lambda$. My solution is since $A$is symmetric,we can take some orthogonal transformation $Q$ that makes $Q^tAQ$ as diagonal matrix $D = diag(d_1,d_2,...,d_n)$, and we know $x \to q (=Qx)$ is an isomorphism on the sphere, so $q^tAq = c$ i,e, $\sum_n d_ix_i^2 = c$ so all the $d_i$ take the same value,so $A = dI$. It's there some better solution? AI: Let $S=A-cI$. Then $x^TSx=0$ for every vector $x$. It follows from the polarisation identity that $$ x^TSy=\frac14\left[(x+y)^TS(x+y)-(x-y)^TS(x-y)\right]=0 $$ for all vectors $x$ and $y$. In particular, $\|Sx\|^2=x^TS(Sx)=0$. Hence $Sx$ is identically zero, i.e. $A=cI$.
H: Evaluating series using operator Consider $$ S= \sum_{k=0}^{\infty} \frac{k!}{x^{k}} (-1)^k.$$ Now this is $$ S = ( 1 +D+D^2 +D^3+\cdots) \left( \frac{1}{x}\right),$$ and using the geometric series $$ 1+D+D^2+\cdots = \frac{1}{1-D}$$ we get $$ S= \frac{1}{1-D}\, \frac{1}{x},$$ $$ S = \frac{1}{x-1}.$$ Therefore, for $x<1$, $$ S= \sum_{k=0}^{\infty}(-1)^k \frac{k!}{x^{k}}=\frac{1}{x-1}.$$ Is this proof correct? Can I find an expression for the l.h.s. for $x>1$? AI: The equation $$ S= \frac{1}{1-D} \frac{1}{x} $$ should be interpreted as the inverse operator $$ S = (1-D)^{-1}\frac{1}{x} . $$ It would mean $$ (1-D)S = \frac{1}{x} . $$ Solve the differential equation $S(x) - S'(x) = \frac{1}{x}$ to get $$ S = \operatorname{Ei}_1(x)\;e^x , $$ involving an exponential integral function. This is, indeed, (almost) the Borel sum of the divergent series $S$.
H: If I have an $n$-bit binary number $x$ and add $1$ at its $m$-th bit ($m>n$), how does the corresponding base-10 number change? If $x$ (base-10) is an $n$-bit number in binary, such as $(x)_{10}=\underbrace{11\cdots 1}_{n}$, and I add $1$ in the $m$-th bit position, it becomes $(y)_{10}=\underbrace{10\cdots0011\cdots 1}_{m}$. What is the formula relating $y$ and $x$? AI: $x$ and $y$ are numbers irrespective of what base you represent them in. What is constant as you change bases is the value, not the representation. The form you give for $x$ is the binary form. The subscript $2$ should go on the right to indicate that. $x$ does not need a subscript as you have not committed to what base it is represented in. You can write $13_{10}=1101_2$. These are two different representations of the same number. If we start counting the bits from $1$ and put a $1$ in the $m^{th}$ place, that bit is worth $2^{m-1}$ and we are adding that much to our number. In this example, for $m=7$ we get $1001101_2=77_{10}$. We have added $2^{6}=64_{10}=1000000_2$
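In code the same statement is a single bitwise OR (a minimal Python sketch; $n=4$, $m=7$ are arbitrary choices):

```python
n, m = 4, 7
x = (1 << n) - 1            # n ones in binary: 0b1111 = 15
y = x | (1 << (m - 1))      # set the m-th bit, counting bits from 1
print(bin(x), bin(y), y - x == 2**(m - 1))   # 0b1111 0b1001111 True
```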
H: Smallest residue over $\Bbb Z[\omega]$ I'm asked to prove that $\Bbb Z[\omega]$, where $\omega^2+\omega+1=0$, is a Euclidean domain. The norm is $N(a+b\omega)=(a+b\omega)(a+b\omega^2)$. My strategy is to write $\alpha=\beta\gamma+\rho$, then look at $$\frac{\alpha}{\beta}=\gamma+\frac{\rho}{\beta}$$ for $\alpha,\beta,\gamma,\delta\in\Bbb Z[\omega]$, and sketch the area where $\gamma$ is the closest Eisenstein integer, in order to bound $N(\rho)$ by $N(\beta)$. Sketching this area is where my confusion starts. I start by doing this with $0$. I divide all the distances from $0$ to neighboring Eisenstein integers by $2$, then mark them. These are the red dots. Then i draw lines between them. $\frac{\rho}{\beta}$ needs to stay inside the star for $\gamma$ to be the closest Eisenstein integer. According to the book, the shaded region should be: What am I doing wrong? Edit: I now see that tiling with parallelograms works far better, and that my star had gaps. Let $\frac{\rho}{\beta}=a+b\omega$. The longest straight line inside an equilateral triangle, with side length $1$, is the height of value $\frac{\sqrt 3}{2}$. Therefore $$\left|\frac{\rho}{\beta}\right|^2=a^2+b^2\leq\frac{3}{4}\\N\left(\frac{\rho}{\beta}\right)=a^2-ab+b^2\leq\frac{3}{4}-ab$$ If the blue dot hits the upper left corner it will have the coordinates $$\frac{\sqrt 3}{2}(-\cos(30),\sin(30))=\frac{\sqrt 3}{2}(-\frac{\sqrt 3}{2},\frac{1}{2})$$ This gives $$N\left(\frac{\rho}{\beta}\right)\leq\frac{3}{4}-\left(-\frac{\sqrt 3\sqrt 3}{2\cdot 2}\right)\left(\frac{\sqrt 3}{2\cdot 2}\right)= \frac{3}{4}+\frac{3\sqrt{3}}{16}>1$$ This gives a norm greater than $1$, how could this happen? Is it not a good idea to consider the parallelogram? AI: You only need to show that, for any complex number $z$, there exists an Eisenstein integer $x$ such that $|z - x| < 1$. This is quite obvious from your picture: if $z$ lies in a triangle, then at least one vertex of the triangle has distance $\leq \frac 1 {\sqrt 3}$ to $z$. Once you know that, you can perform Euclidean division as follows. For any pair of elements $a, b\in \Bbb Z[\omega]$ with $b \neq 0$, let $z$ be the quotient $a / b$ and let $x$ be an Eisenstein integer such that $|z - x| < 1$. We then have $|a - bx| = |b|\cdot |z - x| < |b|$, which gives $|a - bx|^2 < |b|^2$.
H: The correlation between the residuals and the prediction: $Cov(e,\hat{Y}) =0 $ Assume a linear regression model: $y_i = \beta_0 + \beta_1x_{i1} + \cdots + \beta_px_{ip} + \epsilon_i$. I'm asked to prove that $Cov(e,\hat{Y}) = 0$, where $e$ = the residuals vector and $\hat{Y}$ = the predicted vector of $Y$. Hint: use the fact that $X^Te = 0$ (I already proved this fact). AI: You can write your estimator: $$\hat{Y} = X\hat{\beta}$$ Therefore, and by rules of Cov, you can take the constant matrix out (right side with transpose): $$Cov(e, \hat{Y}) = Cov(e, X\hat{\beta}) = Cov(e, \hat{\beta})X^T$$ Now you can prove that: $$Cov(e, \hat{\beta}) = 0$$ Since: $$Cov(e, \hat{\beta}) = Cov(Y - \hat{Y}, \hat{\beta})= Cov(Y, \hat{\beta})-Cov(\hat{Y}, \hat{\beta})$$ $$Cov(Y, \hat{\beta}) = Cov(Y, (X^TX)^{-1}X^TY) = Cov(Y,Y)\times((X^TX)^{-1}X^T)^T = \sigma^2IX(X^TX)^{-1} = \sigma^2X(X^TX)^{-1}$$ $$Cov(\hat{Y}, \hat{\beta}) = Cov(X\hat{\beta}, \hat{\beta}) = X[Cov(\hat{\beta},\hat{\beta})] = X[\sigma^2(X^TX)^{-1}] = \sigma^2X(X^TX)^{-1}$$ Therefore: $$Cov(Y, \hat{\beta})-Cov(\hat{Y}, \hat{\beta}) = \sigma^2X(X^TX)^{-1}-\sigma^2X(X^TX)^{-1}=0$$ Both equations rely on knowledge about the distribution of $\hat{\beta}\sim N\big(\beta, \sigma^2(X^TX)^{-1}\big)$ and that $\hat{Y} = X\hat{\beta} = X(X^TX)^{-1}X^TY$
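A simulation makes the two orthogonality facts concrete (a minimal NumPy sketch; the design matrix and coefficients are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
y = X @ np.array([1.0, 2.0, -0.5]) + rng.normal(size=n)

beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
y_hat = X @ beta_hat
e = y - y_hat
print(np.round(X.T @ e, 10))        # ~ 0: the hinted fact X^T e = 0
print(round(np.dot(e, y_hat), 10))  # ~ 0: residuals orthogonal to fitted values
```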
H: Show that the set $\{(x,-2x)\mid x \in \mathbb Z\}$ is denumerable. Show that $A = \{(x,-2x)\mid x \in \mathbb Z\}$ is denumerable. I know that I have to show that a bijection between $A$ and the set of natural numbers (or the set of integers since both are known to be denumerable) exists but I'm not sure what the function would be. AI: $f(z)=(z,-2z)$, where $z\in\Bbb{Z}$ works. Look at a graph of the points in $A$. They all lie on a straight line and every integer is the $x$ coordinate for one of the points.
H: What are the odds of drawing the same card $3$ times in a row in a $4$ card deck ($3$ of the same card and $1$ joker) I made the question simple but there are $2$ things that I'd like to know: In a deck of $4$ randomly shuffled cards with $3$ aces and $1$ joker, what are the odds of drawing $3$ aces in a row, and what are the odds of the joker being the last of those cards (in the same conditions, so $4$ cards randomly shuffled, $3$ of which are aces and $1$ is a joker)? And if they are any different, why? (The steps or method for calculating them would be better.) It's a dumb question probably, but I feel like I am missing something or doing something wrong. Thanks EDIT: So to clear some doubts, what I mean by drawing is taking a card out of the deck and not placing it back, so every time I draw I find myself with $1$ less card in the deck I am drawing from. Also, for the first part of the question, I want to know the odds of drawing $3$ aces in a row and what it takes to calculate that. The second part is referring to the odds I have of the joker being the last card in the deck, and why that is different from drawing $3$ aces in a row (if there is any difference). Am I being clear? Sorry if I am not. I will clarify further if needed. AI: Drawing (but not replacing) from a shuffled deck of $4$ cards consisting of $3$ aces and a joker, what is the probability that The first three are aces. The last one to be drawn is the joker. Are they different? If yes, why? \begin{align*} P(\text{the joker is the last card to be drawn})&=P(\text{three aces are drawn one after the other})\\ &=P(\text{first is an ace})\cdot P(\text{second is also an ace})\cdot P(\text{third is also an ace})\\ &=\frac34\cdot \frac 23\cdot \frac 12=\frac14\\\end{align*} They are not different because drawing three cards in a row with a certain probability leaves the joker as the last card definitely.
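Enumerating all $4! = 24$ equally likely shuffles confirms that both probabilities are the same (a minimal Python sketch):

```python
from fractions import Fraction
from itertools import permutations

orders = list(permutations(['A', 'A', 'A', 'J']))
total = len(orders)                                                    # 24 orders
print(Fraction(sum(o[:3] == ('A', 'A', 'A') for o in orders), total))  # 1/4
print(Fraction(sum(o[3] == 'J' for o in orders), total))               # 1/4
```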
H: Implicit differentiation of $x^2+y^2-1$ $f(x,y) = x^2+y^2-1$ $0=x^2+y^2-1 \Rightarrow y=\sqrt{1-x^2}$ I differentiated $g(x)=\sqrt{1-x^2}$ with the chain rule and got $g'(x)=-\frac{x}{\sqrt{1-x^2}}$. Can someone tell me how to do it with implicit differentiation? I tried this formula: $y'(x)=-\frac{f_x}{f_y}=-\frac{x}{y}$; the solution should obviously be the same, so I guess I might not be allowed to use the formula here? We had the implicit function theorem and $dF(x,y)\begin{pmatrix}h\\k\\\end{pmatrix}=\begin{pmatrix}h\\df(x,y)(h,k)\\\end{pmatrix}$ but I don't really know how to apply this here. AI: Notice that you have made the definition $g(x)=y$. Your implicit differentiation formula tells you that $$y'=-\frac{x}{y}$$ but you said earlier $$y=g(x)=\sqrt{1-x^2}$$ substituting this into your result for $y'$, you are left with $$y'=-\frac{x}{\sqrt{1-x^2}}$$ so your method was correct, the "problem" is that you are left with a $y$ in your solution which is actually fine. If you dislike it, you can always solve for $y$ in the equation $$0=x^2+y^2-1$$ and substitute it in, which we have already done.
H: Reference Request: $H^1(\mathfrak g, V)=0$ for semisimple Lie algebra $\mathfrak g$ and $\mathfrak g$-module $V$ I read the following theorem in the lecture note of Victor Kac. Let $\mathfrak g$ be a finite-dimensional semisimple Lie algebra over an algebraically closed field of characteristic zero. Theorem(Vanishing Theorem) If $V$ is a finite-dimensional $\mathfrak g$-module, then $H^1(\mathfrak g, V)=0$ I would like to request a reference for this theorem. In particular, I seek an introductory textbook which covers this theorem. AI: A different reference is: Hilton Stammbach "A course in Homological Algebra", Chap VII, Proposition 5.6 and 6.1. Moreover, I have to mention that all the Chapter VII is an introduction to cohomology of Lie Algebras and that section 5 and 6 analyze the special case of semisimple Lie algebras.
H: Commutativity of multiplication for natural numbers (Terence Tao's Analysis I exercise 3.6.5) Exercise 3.6.5: Let $A$ and $B$ be sets. Show that $A\times B$ and $B\times A$ have equal cardinality by constructing an explicit bijection between the two sets. Then use Proposition 3.6.14 (the one about cardinal arithmetic) to conclude an alternative proof of Lemma 2.3.2 (this lemma proves the commutativity of multiplication). The bijection is quite easy. But I've no idea why he is asking me to prove a property of multiplication with the cardinality of Cartesian products. He defined the natural numbers and their operations with the Peano axioms, not with cardinality, so Tao hasn't really provided a construction of the naturals using only set theory. What's the point of the exercise? Am I supposed to provide this construction, define the multiplication operation and then prove it, or am I missing something? AI: Following the hint you're given for 3.6.14, I see: (e) Let $X$ and $Y$ be finite sets. Then the Cartesian product $X \times Y$ is finite and $\#(X \times Y ) = \#(X) \times \#(Y )$. So if you have sets $A,B$ with $|A|=n$ and $|B|=m$, $|A\times B|=n\times m$. Using exercise 3.6.5, one would also see $|A\times B|=|B\times A|=m\times n$, proving $n\times m = m\times n$ in a different way.
H: Contexts in Natural Deduction This is my first post. I have a basic question about the use of context in natural deduction. If $A$ is true in an empty context, written as $\vdash A$, then, by monotonicity, $A$ is true in any context $\Gamma$ as well, written as $\Gamma\vdash A.$ However, the interpretation of $\Gamma\vdash A$ is usually that $A$ is true under the assumptions in $\Gamma$. My question is: is there any way to signal that $A$ is true and independent of any assumption in $\Gamma$ in the sequent $\Gamma\vdash A$? In other words, how is it possible to keep the interpretation of $\vdash A$ while adding a context $\Gamma$ (by monotonicity) in front of the sequent? Thanks! AI: The interpretation of $\Gamma \vdash A$ is usually that $A$ is true under the assumptions in $\Gamma$. Not quite: It is "under at most the assumptions in $\Gamma$". The definition of $\vdash$ reads $\Gamma \vdash A$ iff there is a derivation $\mathcal{D}$ with end formula $A$ and $\text{Hyp}(\mathcal{D}) \subseteq \Gamma$ (where $\text{Hyp}(\mathcal{D})$ is the set of open assumptions of the derivation). Note the $\subseteq$ rather than $=$: It is nowhere required that all or even any of the assumptions in $\Gamma$ actually be used in the derivation of $A$. The possibility that there are formulas in $\Gamma$ on which $A$ is not dependent, or conversely, the ability to add arbitrary premises to the left-hand side of the turnstile, is already built into the definition of derivability. To express that an empty context suffices, write just that: $\vdash A$. And to express that all premises in a given context are necessary for the derivation and can not be reduced further, write that: "$\Gamma \vdash A$, and there exists no $\Delta \subsetneq \Gamma$ such that $\Delta \vdash A$".
H: Probabilities of bivariates $$ P\left(X \geq \tfrac12 \,\middle|\, Y \geq \tfrac12\right) $$ Is it correct to use the intervals from $1/2$ to infinity? I don't get it. AI: $$P[X>\frac{1}{2}|Y>\frac{1}{2}]=\frac{\int_{\frac{1}{2}}^{2}dx \int_{\frac{1}{2}}^{1}\frac{3}{2}y^2 dy}{\int_{\frac{1}{2}}^{1}3y^2dy}=\int_{\frac{1}{2}}^2\frac{1}{2} dx=P[X>\frac{1}{2}]=0.75$$ This is because $X,Y$ are independent.
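The question as posted omits the joint density, but the integrals in the answer correspond to $X\sim\mathrm{Uniform}(0,2)$ independent of $Y$ with density $3y^2$ on $(0,1)$. Under that assumption, a Monte Carlo check (a minimal NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
x = rng.uniform(0.0, 2.0, n)              # X ~ Uniform(0, 2)
y = rng.uniform(0.0, 1.0, n) ** (1 / 3)   # inverse CDF of F_Y(y) = y^3 on (0, 1)
print((x[y > 0.5] > 0.5).mean())          # ~ 0.75
```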
H: Show that $x^4 + 8x - 12$ is irreducible in $\mathbb{Q}[x]$. Is there a nice way to show that $x^4 + 8x - 12$ is irreducible in $\mathbb{Q}[x]$? Right now I'm going with the rational root theorem to show there are no linear factors and this result, involving the cubic resolvent, to show there are no irreducible quadratic factors. AI: Let $f(x):=x^4+8x-12$. Then, the polynomial $$g(x):=f(x+1)=x^4+4x^3+6x^2+12x-3$$ satisfies the hypothesis of the Extended Eisenstein's Criterion with respect to the prime $3$. This means $g(x)$ has an irreducible factor of degree at least $3$. If $f(x)$ is reducible, then $g(x)$ is reducible, so $g(x)$ must have a linear factor. You now need to show that $g(x)$ has no linear factors, which is not too difficult (i.e., you simply need to check that $g(x)\neq 0$ for $x\in\{\pm 1,\pm3\}$), so the assumption that $f(x)$ is reducible cannot be true. Remark. In general, if you are given a polynomial $f(x)\in\mathbb{Z}[x]$ and you want to find a prime natural number $p$ such that there exists a "shift" of $f(x)$ to which the Extended Eisenstein's Criterion can be applied, then you look at the discriminant $\Delta(f)$ of $f(x)$. This prime $p$ should divide $\Delta(f)$. For $f(x)=x^4+ax+b$, where $a$ and $b$ are integers, $\Delta(f)=256b^3-27a^4$. Particularly, when $f(x)=x^4+8x-12$, we get $$\Delta(f)=-552960=-2^{12}\cdot 3^3\cdot 5\,,$$ which mean the choices of possible $p$ are $2$, $3$, and $5$. Now, $5$ has only exponent $1$ in $\Delta(f)$, which means that no matter how you shift $f$ so that the constant term and the linear term are both $0$ modulo $5$, the quadratic term will not be $0$ modulo $5$. Therefore, $p=5$ is not a good choice. The remaining candidates are $p=2$ and $p=3$. It is clear that $p=2$ will not work well (since $f(x)\equiv x^4\pmod{2}$, so there is not much information to gain). The best candidate is $p=3$.
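Both the irreducibility claim and the discriminant factorization can be checked symbolically (a minimal SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Poly(x**4 + 8*x - 12, x, domain='QQ')
print(f.is_irreducible)                                # True
print(sp.factorint(-sp.discriminant(f.as_expr(), x)))  # {2: 12, 3: 3, 5: 1}
```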
H: How to show that the following set is connected? Let $X$ be a (metric) space. Let $S$ and $L_i$ ($i\in I$) be connected subsets of $X$. Assume that $S\cap L_i \neq \emptyset$. Show that $S\cup (\cup_i L_i)$ is a connected subset of $X$. My work: I know that the union of two connected sets is connected if the intersection is non-empty. Using this fact, it is easy to see that $S\cup L_i$ is a connected subset for all $i$. But I cannot proceed from here, because I was trying to use the fact that if $x,y\in S\cup ( \cup_i L_i)$, then there exists some connected set $A$ such that $x,y\in A$. Now case 1: $x,y\in S$. Case 2: WLOG $x\in S$ and $y\in L_i$ for some $i$. Case 3: $x,y\in L_i$ for some $i$. For these cases I see that $x,y$ lie in the same connected subset, but what about the case $x\in L_i$ and $y\in L_j$ for $i\neq j$? It is not given whether the $L_i$'s are disjoint or not. Help me to understand this. AI: You're on the right track, you just need to use induction to show that the whole union is connected. If $I$ is not assumed to be countable, this may be a bit awkward, if you're not comfortable with transfinite induction. So another way is to argue as follows: Set $Y=S\cup \bigcup_i L_i$, let $s_0$ be some point in $S$, and let $R$ be the connected component of $Y$ which contains $s_0$. Since $S$ is connected, necessarily $R$ contains all of $S$. Let $i_0\in I$; we know $L_{i_0}$ intersects $S$, so it intersects $R$, so (since $L_{i_0}$ is connected and $R$ is a connected component of $Y$) necessarily $R$ contains all of $L_{i_0}$. Since $i_0$ was an arbitrary element of $I$, we get that $R$ contains $L_i$ for every $i$. Hence $R$ contains $S\cup \bigcup_i L_i=Y$. But $R$ was a connected component of $Y$, so it is contained in $Y$ as well, which means $Y=R$. By definition, $R$ is connected, so $Y=R$ is connected, as needed.
H: A quiz question in real analysis I am trying to solve this quiz question from a senior batch in Real Analysis: For disproving options (A) and (B), $f(x) =x^{6}$ was sufficient. But I am unable to think how to prove/disprove (C) and (D); the problem arises due to the function being given as bounded in (C) and infinite differentiability being asked in (D). AI: To rule out (D) start with a nonnegative continuous function that's not differentiable - perhaps $g(x) = |x|$. Then construct $f$ by integrating twice, so that $g$ is its second derivative. Then the integral of $f$ will be only three times differentiable.
H: There are only two six-digit integers $N$, each greater than $100,000$. for which $N^2$ has $N$ as its final six digits There are only two six-digit integers $N$, each greater than 100,000 for which $N^2$ has $N$ as its final six digits (or $N^2-N$ is divisible by $10^6$). What are these two numbers? Is the problem solvable by the Chinese Remainder Theorem? If so, how? AI: Yes, we can solve $N^2-N\equiv0\bmod10^6$ with the Chinese remainder theorem to get $N\equiv0 $ or $1\bmod 2^6=64$ and $N\equiv0$ or $ 1\bmod5^6=15625$. We don't want the solutions $N\equiv0\bmod2^6$ and $5^6$ or $N\equiv1\bmod2^6 $ and $5^6$. We want the solutions $N\equiv1\bmod 2^6$ and $N\equiv0\bmod5^6$ and vice versa. The extended Euclidean algorithm gives $15625=244\times64+9$ and $64=7\times9+1$, so $1=1709\times64-7\times15625$. Thus, one solution is $1709\times64$, and the other is $(64-7)\times15625$.
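A brute-force search confirms the two CRT solutions (a minimal Python sketch):

```python
solutions = [n for n in range(100000, 1000000) if (n * n - n) % 10**6 == 0]
print(solutions)                       # [109376, 890625]
print(109376 % 64, 109376 % 15625)     # 0 1: the (0 mod 2^6, 1 mod 5^6) solution
print(890625 % 64, 890625 % 15625)     # 1 0: the (1 mod 2^6, 0 mod 5^6) solution
```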
H: For a $2 \times 2$ matrix having eigenvalues 1,1 will the matrix satisfy a two degree monic polynomial other than characteristic polynomial? For a 2 X 2 matrix (except the identity matrix) having eigenvalues 1,1 is it necessary for the matrix to satisfy a two degree monic polynomial (X-1)(X-K) for some real K (K is not equal to 1) (the matrix clearly cannot satisfy a monomial). For example, an identity matrix can satisfy X(X-1) (the identity matrix also has eigenvalues 1,1) but can we always find an n degree polynomial except the characteristic polynomial for all n x n matrices having a repeated eigenvalue. AI: The matrix must satisfy its characteristic polynomial, i.e. $(X-I)^2=0$. If also $(X-I)(X-kI) = 0$, then their difference $(X-I)^2 - (X-I)(X-kI) = (X-I)(k-1) = 0$ so if $k \ne 1$, $X-I = 0$ i.e. $X$ is the identity matrix.
H: Find upper bound for $P(X>Y+15)$ Let $E(X)=E(Y)=75$, $Var(X)=10$, $Var(Y)=12$ and $Cov(X,Y)=-3$. Find upper bounds for the following: a) $P(X>Y+15)$ b) $P(Y>X+15)$ I tried to solve this question by calculating $E(X^2), E(Y^2), E(XY)$, but I haven't found the upper bound from these data. AI: You can find a nontrivial upper bound by noting that $$ P(X>Y+15)\leq P(X-Y>15)\leq P(|X-Y|>15)=P((X-Y)^2>15^2)\leq\frac{E[(X-Y)^2]}{15^2}, $$ where the last step is Markov's inequality. You can compute $E[(X-Y)^2]=EX^2+EY^2-2EXY$ from the information given: since $E[X-Y]=0$, it equals $Var(X-Y)=Var(X)+Var(Y)-2Cov(X,Y)=10+12+6=28$, so the bound is $28/225$. By symmetry, the same bound works for b).
H: Given a probability density, how can I sample from the induced distribution? Let $f$ be an integrable function such that $\int f(x) dx=1$. If we want to take random samples from this, using whatever programming language one pleases, we should compute $F(t)=\int_{-\infty}^t f(x) dx$, invert this function and feed it numbers drawn from a uniform distribution on $[0,1]$. However I now want to sample coordinates $(u,v)\in \mathbb{R}^2 \backslash \lbrace 0 \rbrace$ such that the probability of $(u,v)$ lying in a set $E$ is given by $$ \int_E (u^2+v^2)^{-\frac{3}{2}}du dv $$ I don't see how I can now try to find $$F(s,t)=\int_{-\infty}^s\int_{-\infty}^t (u^2+v^2)^{-\frac32} du dv$$ as I run into trouble close to zero. What other way is there to obtain a sample following this distribution? A note for the context: If we consider all lines in the Euclidean plane with Cartesian coordinates, not passing through the origin, they can be represented via $ux+vy=1$. If we impose the condition that the probability density should be invariant under Euclidean transformations, then we arrive at the above distribution. See here AI: Your probability density on $u,v$ is not a valid probability density. A probability density should have $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(u,v)\mathrm{d}u\mathrm{d}v=1$ If you convert to polar coordinates you will see that $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}(u^2+v^2)^{-3/2}\mathrm{d}u\mathrm{d}v=\int_0^{\infty} 2\pi r^{-2} \mathrm{d}r=\infty$ Removing the point $\{0\}$ from $\mathbb{R}^2$ won't help either - you have to remove a neighborhood of zero in order to make your density normalisable.
H: If every $x \in \mathbb{R}^{n}$ has a neighbourhood whose intersection with the set $A$ is closed, then $A$ is closed. How would I go about proving this statement? Denoting the each neighbourhood by $V_{x}$, I have tried to use the following facts: $$ \bigcup V_{x} = \mathbb{R}^{n} $$ and thus $$ \bigcup \left(A \cap V_{x}\right) = A \cap \left(\bigcup V_{x}\right) = A $$ But the left hand side is an infinite union of closed sets and so this path of thinking does not seem that useful. AI: Let $x$ be a limit point of $A$. Then there is a sequence $x_n \in A$ such that $\lim x_n = x$. By assumption there is a neighbourhood $U$ of $x$ such that $U \cap A$ is closed. Then there is an $N \in \Bbb N$ such that for $n \geq N $, $x_n \in A \cap U$. So $x$ is a limit point of $U \cap A$. So $x \in U\cap A$(since it's closed). Now $x \in A\cap U \subset A$. So $A$ is closed.
H: Is the radius of convergence related to the ratio limit or half of the interval of convergence? I have a series $S$ with general terms $a_n=\frac{(-1)^n(x-1)^n}{(2n-1)2^n}$, $n\ge 1$: $$S = \sum_{n=1}^\infty \frac{(-1)^n(x-1)^n}{(2n-1)2^n}$$ Finding the ratio $\left|\frac{a_{n+1}}{a_n}\right|$ and then finding the limit of the ratio as $n\to\infty$, I find the limit to be $1$ and the interval to be $-1 \lt x \lt 3$. More declaratively, the interval is $\left|\frac{x-1}{2}\right| \lt 1$ which I've refined to what was said earlier. I've read conflicting sites that state the radius $R$ of convergence is $\frac{1}{N}$, where $N$ is the limit as found earlier, but also that it's half the interval length. Here's my work: $$\begin{align} \left|\frac{a_{n+1}}{a_n}\right| &= \left|\frac{\frac{(-1)^{n+1}(x-1)^{n+1}}{(2(n+1)-1)2^{n+1}}}{\frac{(-1)^{n}(x-1)^{n}}{(2n-1)2^{n}}}\right| \\ &= \left|\frac{(-1)^{n+1}(x-1)^{n+1}(2n-1)(2^n)}{(-1)^n(x-1)^n(2(n+1)-1)(2^{n+1})}\right| \\ &= \left|\frac{(-1)(x-1)(2n-1)}{(2n+2-1)(2)}\right| \\ &= \left|\frac{-(x-1)}{2}\right| \times \left|\frac{2n-1}{2n+1}\right| \end{align}$$ Then, finding the limit $L$: $$\begin{align} L &= \lim_{n\to\infty} \left(\left|\frac{-(x-1)}{2}\right| \times \left|\frac{2n-1}{2n+1}\right|\right) \\ &= \left|\frac{-(x-1)}{2}\right| \times \lim_{n\to\infty} \left|\frac{2n-1}{2n+1}\right| \\ &= \left|\frac{-(x-1)}{2}\right| \times \lim_{n\to\infty} \left|\frac{\frac{2n}{n}-\frac{1}{n}}{\frac{2n}{n}+\frac{1}{n}}\right| \\ &= \left|\frac{-(x-1)}{2}\right| \times \lim_{n\to\infty} \left|\frac{2-\frac{1}{n}}{2+\frac{1}{n}}\right| \\ &= \left|\frac{-(x-1)}{2}\right| \times \left|\frac{\lim_{n\to\infty} \left(2-\frac{1}{n}\right)}{\lim_{n\to\infty} \left(2+\frac{1}{n}\right)}\right| \\ &= \left|\frac{-(x-1)}{2}\right| \times \left|\frac{2}{2}\right| \\ &= \left|\frac{-(x-1)}{2}\right| \times 1 \\ &= \left|\frac{-(x-1)}{2}\right| \end{align}$$ Then I know my interval is $\left|\frac{-(x-1)}{2}\right| \lt 1$: $$\left|\frac{-(x-1)}{2}\right| \lt 1 \\ -1 \lt \frac{x-1}{2} \lt 1 \\ -2 \lt x-1 \lt 2 \\ -1 \lt x \lt 3$$ If the limit found earlier is $1$, the radius would be $R = \frac{1}{1} = 1$, yet I've found the interval to be $(-1, 3)$, which would imply $R = 2$. Where have I made an error? AI: For a power series $$ \sum_{n=0}^\infty c_n (z-a)^n, $$ the radius of convergence is $R = \frac1N,$ where $$ N = \lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right| $$ provided that the limit exists and is a real number. Other sources say simply that the radius is $$ R = \lim_{n\to\infty}\left|\frac{c_n}{c_{n+1}}\right|, $$ which is equivalent except (arguably) in the case $N=0.$ See Ratio test and the radius of convergence. Note that $c_n$ is not a term of the series; it's only a coefficient of a term of the series. The $n$th term is $a_n = c_n(z-a)^n.$ If you're looking at a site that says the radius of convergence is $\frac1N,$ this is the way they are most likely applying the ratio test. (Another possibility is that you have found a page with misinformation. Such things do exist on the web!) You have defined $$a_n=\frac{(-1)^n(x-1)^n}{(2n-1)2^n},$$ so $a_n$ is not $c_n$ in the expression above. Instead, $a_n$ is a function of $x$ and the limit $$ \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| $$ depends on the value of $x$ at which you evaluate it, as you showed in your calculations (which are correct). That's not the limit of ratios from which people derive the radius of convergence on pages like the ones you described.
It would be nonsense for the radius of convergence to be a function of $x.$ The use of the limit $$ N = \lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right| $$ to find the radius of convergence actually is based on the general ratio test that is defined for a general series. Namely, if you have a power series whose $n$th term is $a_n = c_n(x-a)^n,$ then $$ \lim_{n\to\infty}\left|\frac{a_{n+1}}{a_n}\right| = \lim_{n\to\infty}\left|\frac{c_{n+1}(x-a)^{n+1}}{c_n(x-a)^n}\right| = \lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right| \lvert x-a\rvert = N \lvert x-a\rvert, $$ where $N \geq 0,$ provided that the limits exist and are real numbers. We have convergence by the general ratio test when $N \lvert x - a\rvert < 1,$ which (if $N > 0$) is true exactly when $$ \lvert x - a\rvert < \frac1N. $$ If we take the limit $N$ in the way it is meant to be taken on one of those "$\frac1N$" pages, we have $$c_n=\frac{(-1)^n}{(2n-1)2^n}$$ (note: everything that is in $a_n$ except the factor $(x-1)^n$) and therefore $$ N = \lim_{n\to\infty}\left|\frac{c_{n+1}}{c_n}\right| = \frac12, $$ and the radius of convergence is $\frac1N = 2.$ This actually agrees with your calculations. You found that the limit of your ratio of terms was $\left|\frac{-(x-1)}{2}\right|.$ The thing is, $N$ is supposed to be multiplied by $|x-a|,$ not by $\left|\frac{-(x-a)}{2}\right|.$ But if you see that $$ \left|\frac{-(x-1)}{2}\right| = |x-1|\times\frac12 $$ then that factor $\frac12$ is your $N.$ Alternatively, we can compute $$ R = \lim_{n\to\infty}\left|\frac{c_n}{c_{n+1}}\right| = \lim_{n\to\infty}\left|\frac{\left(\frac{(-1)^n}{(2n-1)2^n}\right)} {\left(\frac{(-1)^{n+1}}{(2(n+1)-1)2^{n+1}}\right)}\right| = \lim_{n\to\infty}\left|\frac{-2(2n+1)}{2n-1}\right| = 2 $$ in order to get the radius of convergence $R.$ And then, since $a = 1,$ indeed the interval of convergence is $$\left(a - \frac{1}{N}, a + \frac{1}{N}\right) = (a - R, a + R) = (1 - 2, 1 + 2) = (-1, 3).$$ You're correct in your own calculations, but you're comparing them to a different set of calculations that are done in a slightly different way, even though they are justified by the same theorem and produce the same interval of convergence.
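A quick numerical look at the coefficient ratio bears this out (a minimal Python sketch):

```python
from fractions import Fraction

def c(n):   # coefficient of (x - 1)^n
    return Fraction((-1)**n, (2 * n - 1) * 2**n)

for n in (1, 10, 100, 1000):
    print(n, float(abs(c(n) / c(n + 1))))   # -> 2, the radius of convergence
```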
H: A $m$-dimensional differentiable manifold that has a non zero continuous $m$-form is orientable. If $M$ is a $m$-dimensional differentiable manifold and $\omega$ is a continuous $m$-form on $M$ such that $\omega(x) \neq 0$ for every $x \in M$, then $M$ is orientable. The author takes $A$ as the set of parameterizations $\varphi: U_0 \to U$ such that, for each point $x = \varphi(u)$, we have $\omega(x) = a(u)du_1 \wedge \cdots \wedge du_m$ with $a(u) > 0$. Assuming $U_0$ connected, we can see that $a > 0$ for every $u \in U_0$. Here is my doubt: the author says it implies that $A$ is an atlas. I cannot see why the parameterizations of $A$ covers $M$. AI: If $p \in M$ then let $\phi' : U_0 \to U \ni p$ be a chart containing $p$. Since $\omega$ is nowhere zero and a top-degree form there is a nonzero function $a : U \to \mathbb{R}$ such that $\omega = a(u)du_1 \wedge \cdots \wedge du_m$. This function $a$, by continuity, is either always positive or always negative. If it is always positive we are done. If it is always negative, then switch coordinates $u_1$ and $u_2$, say, so that after this switch, the new volume form (call it $dv_1 \wedge \cdots \wedge dv_m$) is of the opposite sign as $du_1 \wedge \cdots \wedge du_m$. The minus sign can be absorbed into the function $a$, so now $$ (-a(v))dv_1 \wedge \cdots \wedge dv_m = a(u) du_1 \wedge \cdots \wedge du_m = \omega. $$
H: Cardinality of the set of all the subsets of $X$ which have cardinality less than $|X|$ Let $X$ be an infinite set of cardinality $|X|=\kappa$, and let $\mathcal{P}_{< \kappa}(X)$ be the set of all subsets $S$ of $X$ such that $|S| < \kappa$. Is it true that $|\mathcal{P}_{< \kappa}(X)| < 2^{\kappa}$? I do not know the anser to the question, and any idea is welcome. Thank you very very much in advance for your help. NB. I have an elementary knowledge of set theory. All that I know about this issue is what I found stated and proved in Jech, Set Theory, Third Millenium Edition, pp. 51- 52: \begin{equation} | \mathcal{P}_{< \kappa}(X) | = \kappa^{< \kappa}, \end{equation} where $\kappa^{< \kappa}$ is defined as \begin{equation} \kappa^{< \kappa}= \sup \{ \kappa^{\mu}: \mu \textrm{ is a cardinal and } \mu < \kappa \}. \end{equation} AI: It’s consistent that $2^\omega=2^{\omega_1}$, so that $2^{\omega_1}=2^\omega\le\omega_1^\omega\le\omega_1^{\omega_1}=2^{\omega_1}$, and in that case $$|\wp_{<\omega_1}(\omega_1)|=\omega_1^{\omega}=2^{\omega_1}\;.$$
H: Rank of matrix over $GF(2)$ whose rows have exactly $k$ elements $1$ Consider the $\binom{n}{k}\times n$ matrix $A$ whose rows have $k$ $1$'s and $n-k$ $0$'s. There are no repeated rows. What is the rank of $A$ over $GF(2)$? AI: You’re asking for the dimension over $\mathbb{F}_2$ of the span of the functions $\{1,\ldots,n\}\rightarrow \mathbb{F}_2$ that have a support of size exactly $k$. If $k=1$, it’s clearly $n$. If $k=0$, it’s clearly $0$; if $k=n$, it’s $1$. Now we assume $2 \leq k < n$. In particular, if $1 \leq a < b \leq n$ are integers, there is a subset $S$ of $\{1,\ldots,n\}$ containing neither $a$ nor $b$ with size $k-1$. Consider the difference of the functions with support $S \cup \{a\}$ and $S \cup \{b\}$: it’s a function with support $\{a,b\}$ exactly. So if $S_{n,k}$ is the space generated by these functions, then $S_{n,k} \supset S_{n,2}$. But it’s easy to see that $S_{n,2}$ is the space of functions whose values sum to $0$ (a hyperplane). So $S_{n,k}$ has rank $n$, unless all of its generators are in $S_{n,2}$ (which is equivalent to $k$ being even) and then $S_{n,k}=S_{n,2}$ has rank $n-1$. Finally: if $k=n$, $A$ has rank $1$, if $k=0$, $A$ has rank $0$, if $1 \leq k < n$, $A$ has rank $n-1$ when $k$ is even and $n$ when $k$ is odd.
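The case analysis is easy to confirm computationally (a minimal Python sketch; rows are stored as integer bitmasks and the rank is computed by Gaussian elimination over GF(2)):

```python
from itertools import combinations

def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks."""
    rows = list(rows)
    rank = 0
    while rows:
        pivot = rows.pop()
        if pivot:
            rank += 1
            lsb = pivot & -pivot   # lowest set bit of the pivot row
            rows = [r ^ pivot if r & lsb else r for r in rows]
    return rank

n = 6
for k in range(n + 1):
    rows = [sum(1 << i for i in s) for s in combinations(range(n), k)]
    print(k, gf2_rank(rows))
# k = 0 -> 0; odd k -> n; even k with 2 <= k < n -> n - 1; k = n -> 1
```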
H: Convergence to 0 of certain integral by DCT I need to prove the following property: Let $f:\mathbb{R}^N\to \mathbb{R}$ be an integrable function on $B(0,1)$. Then it is satisfied that $$\lim_{\varepsilon\to 0}\int_{|x|<\varepsilon}f(x)dx=0.$$ My attempt consists of trying to use the Dominated Convergence Theorem. I write $$\int_{|x|<\varepsilon}f(x)dx=\int_{\mathbb{R}^N}1_{B(0,\varepsilon)}f(x)dx.$$ For $\varepsilon<1$ we can bound $1_{B(0,\varepsilon)}f(x)$ by an integrable function, but I don't have a rigorous proof of the fact that $1_{B(0,\varepsilon)}f(x)\xrightarrow[]{\varepsilon\to0}0$ for each $x$. Sorry if this is obvious, but I don't see it at the moment. Any help will be welcome. AI: This is very simple: suppose that $x_0\neq0$. Then $\|x_0\|>0$. For each $\varepsilon\in(0,\|x_0\|)$ we have $1_{\|x\|<\varepsilon}(x_0)=0$, so for any $x_0\neq0$ we get $$\lim_{\varepsilon\to0^+}1_{\|x\|<\varepsilon}(x_0)f(x_0)=0,$$ as this quantity is constantly $0$ eventually. So $1_{\|y\|<\varepsilon}(x)f(x)\xrightarrow{\varepsilon\to0}0$ almost everywhere, since $\{0\}$ has measure $0$. As you said, for any $\varepsilon>0$ we have $|1_{\|y\|<\varepsilon}(x)f(x)|\leq|f(x)|\in L^1$ (i.e. your functions are dominated), so you may apply the DCT to get your result.
H: $\int_0^\infty \frac{1}{(x^p+2020)^q} \,dx $ Let $p,q>0$, when $$\int_0^\infty \frac{1}{(x^p+2020)^q} \,dx $$ converges? I know when $q=1$, $p\ge2$ is the condition, and if $p\ge2$, $q\ge1$ is the condition, but in other case, I have no idea. Thank you for your help in solving this. AI: Hint $$ \int_0^\infty \frac{1}{(x^p+2020)^q} \,dx$$ converges if and only if $$\int_1^\infty \frac{1}{(x^p+2020)^q} \,dx$$ Now apply the Limit Comparison Theorem for $$\int_1^\infty \frac{1}{(x^p+2020)^q} \,dx \, \mbox{ and } \, \int_1^\infty \frac{1}{x^{pq}} \,dx$$
H: Finding the determinant of a matrix by using the adjoint Problem: Find the inverse of the following matrix by finding its adjoint: $$ \begin{bmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{bmatrix} $$ Answer: The first step is to find the determinant of the matrix. \begin{align*} \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= -1 \begin{vmatrix} 5 & 6 \\ 8 & 9 \\ \end{vmatrix} - 2 \begin{vmatrix} 4 & 6 \\ 7 & 9 \\ \end{vmatrix} + 3 \begin{vmatrix} 4 & 5 \\ 7 & 8 \\ \end{vmatrix} \\ \begin{vmatrix} 5 & 6 \\ 8 & 9 \\ \end{vmatrix} &= 45 - 48 = -3 \\ \begin{vmatrix} 4 & 6 \\ 7 & 9 \\ \end{vmatrix} &= 36 - 42 = -6 \\ % \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= 3 - 2(-6) + 3 \begin{vmatrix} 4 & 5 \\ 7 & 8 \\ \end{vmatrix} \\ % \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= 3 + 12 + 3( 32 - 35) \\ \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= 15 - 3(3) = 6 \end{align*} We are now going to find the cofactors. \begin{align*} C_{11} &=\begin{vmatrix} 5 & 6 \\ 8 & 9 \\ \end{vmatrix} = 45 - 48 \\ C_{11} &= -3 \\ C_{12} &= - \begin{vmatrix} 4 & 6 \\ 7 & 9 \\ \end{vmatrix} = -(28 - 42) = -28 + 42 \\ C_{12} &= 14 \\ C_{13} &=\begin{vmatrix} 4 & 5 \\ 7 & 8 \\ \end{vmatrix} = 32 - 35 \\ C_{13} &= -3 \\ C_{21} &= - \begin{vmatrix} 2 & 3 \\ 8 & 9 \\ \end{vmatrix} = -( 18 - 24) \\ C_{21} &= 6 \\ C_{22} &=\begin{vmatrix} -1 & 3 \\ 7 & 9 \\ \end{vmatrix} = -9 - 21 \\ C_{22} &= -30 \\ C_{23} &= - \begin{vmatrix} -1 & 3 \\ 7 & 8 \\ \end{vmatrix} = -( -8 - 21 ) \\ C_{23} &= 29 \\ C_{31} &= \begin{vmatrix} 2 & 3 \\ 5 & 6 \\ \end{vmatrix} = 12 - 15 \\ C_{31} &= -3 \\ C_{32} &= - \begin{vmatrix} -1 & 3 \\ 4 & 6 \\ \end{vmatrix} = -( -6 - 12 ) \\ C_{32} &= 18 \\ C_{33} &= \begin{vmatrix} -1 & 2 \\ 4 & 5 \\ \end{vmatrix} = -5 - 8 \\ C_{33} &= 13 \\ \end{align*} Now we need to find the adjoint of the matrix. $$ C = \begin{bmatrix} -1 & 14 & 3 \\ 6 & -30 & 29 \\ -3 & 18 & 13 \\ \end{bmatrix} $$ Now, here is the adjoint of the original matrix: $$ \begin{bmatrix} -1 & 6 & -3 \\ 14 & -30 & 18 \\ 3 & 29 & 13 \\ \end{bmatrix} $$ Now to find the inverse of the original matrix we divide the adjoint by the determinant. This gives us the following matrix: $$ \begin{bmatrix} -\frac{1}{6} & \frac{6}{6} & -\frac{3}{6} \\ \frac{14}{6} & - \frac{30}{6} & \frac{18}{6} \\ \frac{3}{6} & \frac{29}{6} & \frac{13}{6} \\ \end{bmatrix} $$ Simplifying the matrix we get: $$ \begin{bmatrix} -\frac{1}{6} & 1 & -\frac{1}{2} \\ \frac{7}{3} & -5 & 3 \\ \frac{1}{2} & \frac{29}{6} & \frac{13}{6} \\ \end{bmatrix} $$ However, SciLab gets the following matrix for the inverse. Where did I go wrong? $$ \begin{bmatrix} -0.5 &1.& -0.5 \\ \frac{7}{3} & -5 & 3 \\ \frac{1}{2} & \frac{29}{6} & \frac{13}{6} \\ \end{bmatrix} $$ Based upon comments from the group, I have updated my answer. I now believe it is correct. I am hoping that somebody can confirm that or tell me why I am wrong. Here is my updated answer. The first step is to find the determinant of the matrix.
\begin{align*} \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= -1 \begin{vmatrix} 5 & 6 \\ 8 & 9 \\ \end{vmatrix} - 2 \begin{vmatrix} 4 & 6 \\ 7 & 9 \\ \end{vmatrix} + 3 \begin{vmatrix} 4 & 5 \\ 7 & 8 \\ \end{vmatrix} \\ \begin{vmatrix} 5 & 6 \\ 8 & 9 \\ \end{vmatrix} &= 45 - 48 = -3 \\ \begin{vmatrix} 4 & 6 \\ 7 & 9 \\ \end{vmatrix} &= 36 - 42 = -6 \\ % \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= 3 - 2(-6) + 3 \begin{vmatrix} 4 & 5 \\ 7 & 8 \\ \end{vmatrix} \\ % \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= 3 + 12 + 3( 32 - 35) \\ \begin{vmatrix} -1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \\ \end{vmatrix} &= 15 - 3(3) = 6 \end{align*} We are now going to find the cofactors. \begin{align*} C_{11} &=\begin{vmatrix} 5 & 6 \\ 8 & 9 \\ \end{vmatrix} = 45 - 48 \\ C_{11} &= -3 \\ C_{12} &= - \begin{vmatrix} 4 & 6 \\ 7 & 9 \\ \end{vmatrix} = -(36 - 42) = -36 + 42 \\ C_{12} &= 6 \\ % C_{13} &=\begin{vmatrix} 4 & 5 \\ 7 & 8 \\ \end{vmatrix} = 32 - 35 \\ C_{13} &= -3 \\ C_{21} &= - \begin{vmatrix} 2 & 3 \\ 8 & 9 \\ \end{vmatrix} = -( 18 - 24) \\ C_{21} &= 6 \\ C_{22} &=\begin{vmatrix} -1 & 3 \\ 7 & 9 \\ \end{vmatrix} = -9 - 21 \\ C_{22} &= -30 \\ % C_{23} &= - \begin{vmatrix} -1 & 2 \\ 7 & 8 \\ \end{vmatrix} = -( -8 - 14 ) = 8 + 14 \\ C_{23} &= 22 \\ C_{31} &= \begin{vmatrix} 2 & 3 \\ 5 & 6 \\ \end{vmatrix} = 12 - 15 \\ C_{31} &= -3 \\ C_{32} &= - \begin{vmatrix} -1 & 3 \\ 4 & 6 \\ \end{vmatrix} = -( -6 - 12 ) \\ C_{32} &= 18 \\ C_{33} &= \begin{vmatrix} -1 & 2 \\ 4 & 5 \\ \end{vmatrix} = -5 - 8 \\ C_{33} &= -13 \\ \end{align*} Now we need to find the adjoint of the matrix. $$ C = \begin{bmatrix} -3 & 6 & -3 \\ 6 & -30 & 22 \\ -3 & 18 & -13 \\ \end{bmatrix} $$ Now, here is the adjoint of the original matrix: $$ \begin{bmatrix} -3 & 6 & -3 \\ 6 & -30 & 18 \\ -3 & 22 & -13 \\ \end{bmatrix} $$ Now to find the inverse of the original matrix we divide the adjoint by the determinant. This gives us the following matrix: $$ \begin{bmatrix} \frac{-3}{6} & \frac{6}{6} & -\frac{3}{6} \\ \frac{6}{6} & - \frac{30}{6} & \frac{18}{6} \\ \frac{-3}{6} & \frac{22}{6} & -\frac{13}{6} \\ \end{bmatrix} $$ Simplifying the matrix we get: $$ \begin{bmatrix} -\frac{1}{2} & 1 & -\frac{1}{2} \\ 1 & -5 & 3 \\ -\frac{1}{2} & \frac{11}{3} & -\frac{13}{6} \\ \end{bmatrix} $$ AI: It should be $$C_{12} = - \begin{vmatrix} 4 & 6 \\ 7 & 9 \\ \end{vmatrix} = -(\color{red}{36} - 42) = 6 $$
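A quick NumPy check of the final result (a minimal sketch):

```python
import numpy as np

A = np.array([[-1, 2, 3],
              [ 4, 5, 6],
              [ 7, 8, 9]])
print(np.linalg.det(A))    # 6.0 (up to floating-point rounding)
print(np.linalg.inv(A))    # [[-1/2, 1, -1/2], [1, -5, 3], [-1/2, 11/3, -13/6]]
```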
H: $\inf\{d(x,y) : x\in A,\ y\in B\}>0$ Let $A$ be the set of points of a sequence which converges to a point $a\in\mathbb{R}^n$. For a closed subset $B$ of $\mathbb{R}^n$ such that the closure of $A$ and $B$ have no intersection, can we say that $\inf\{d(x,y) : x\in A,\ y\in B\}>0$? I guess it is true, but I cannot prove it. Any help would be appreciated, thank you. P.S. Sorry, the first question was trivial; I mean that the closure of $A$ and $B$ have no intersection, sorry. AI: No. Consider the closed set $B=\{(x,0): x\in\mathbb{R}\}\subset\mathbb{R}^2$ and $A=\{(0,1/n): n\geq1\}\subset\mathbb{R}^2$. Then $\inf_{x\in A, y\in B}d(x,y)=0$. If both $A,B$ are closed, this is still not true (Brian's comment provides a counter-example). On the other hand: the result is true if one set is closed and the other is compact.
H: Are projective modules over $\mathbb{Z}[x_1,...,x_m]$ free? The Quillen-Suslin theorem states that any finitely generated projective module over $\mathbb{k}[x_1,...,x_m]$ is free, for $\mathbb{k}$ a field. Is it known whether this statement is true in the case that $\mathbb{k}=\mathbb{Z}$, rather than a field? Alternatively, a counter-example would be great. AI: Finitely generated projective modules are free over $R[x_1,\dots,x_m]$ for any PID $R$. This was proved by Quillen in his original proof; I'm not sure about Suslin's proof. See Lam's Springer monograph "Serre's Problem on Projective Modules". (In fact all projective modules are free by a 1963 result of Bass.)
H: Prove that $a_{n}\to L$ as $n\to\infty$ iff $\limsup_{n\to\infty}(a_{n}) = \liminf_{n\to\infty}(a_{n}) = L$ Suppose that $(a_{n})_{n=0}^{\infty}$ is a bounded sequence and that $L\in\textbf{R}$. Then $a_{n}\to L$ as $n\to\infty$ if and only if \begin{align*} \limsup_{n\to\infty}(a_{n}) = \liminf_{n\to\infty}(a_{n}) = L \end{align*} Proof I am mainly interested in the direction $(\Rightarrow)$. The book which I am reading provides the following argument: "Suppose that $a_{n}\to L$ as $n\to\infty$. Then if $\varepsilon > 0$ there exists $n_{0}$ s.t. $L - \varepsilon/2 < a_{n} < L + \varepsilon/2$ for $n\geq n_{0}$. Thus $L - \varepsilon < M_{n} < L + \varepsilon$ for $n\geq n_{0}$ so that $M_{n}\to L$ as $n\to\infty$. Thus $\limsup_{n\to\infty}(a_{n}) = L$. Similar reasoning applies to the $\liminf_{n\to\infty}(a_{n})$." Here, the notation $M_{n}$ means $\sup\{a_{j}:(j\in\textbf{N})\wedge(j\geq n)\}$. My question I did not understand why $L - \varepsilon/2 < a_{n} < L + \varepsilon/2$ implies that $L - \varepsilon < M_{n} < L + \varepsilon$. Could someone help me interpreting this? AI: For all $ k \geq n_0$, we have $L-\epsilon/2 < a_k < L+ \epsilon/2$. So for all $ n \geq n_0$ $$ L-\epsilon/2 \leq \sup_{k \geq n} a_k \leq L+ \epsilon/2 $$ and thus $$ L-\epsilon < M_n < L+ \epsilon. $$ Note how the strict inequalities become large inequalities when taking the supremum.
H: Clarifying why compactness in a topology, implies compactness in a coarser topology If $(X,\tau) $is compact and $\tau'\subseteq \tau$, then $(X,\tau')$ is compact. I have already read several posts on the subject, but it is still unclear to me. The usual argument is: "In a coarser space, more sets are compact, essentially because there are fewer open covers to need finite subcovers. That is, if a set is compact in the finer topology then it is compact in the coarser topology." (as found here What does compactness in one topology tell us about compactness in another (coarser or finer) topology?) But still I am not very convinced, specifically because, if I go to a coarser topology, some open sets are missing with respect to the initial topology, and what if I needed those sets for the extract the fine subcover, what guarantees they aren't needed? AI: The usual argument is the proof, which is very short and straightforward: Let $\mathscr{U}\subseteq\tau'$ be a $\tau'$-open cover of $X$. Then $\mathscr{U}\subseteq\tau$, so $\mathscr{U}$ is a $\tau$-open cover of $X$, and there is therefore a finite $\mathscr{R}\subseteq\mathscr{U}$ that covers $X$. $\mathscr{R}\subseteq\mathscr{U}\subseteq\tau'$, so $\mathscr{R}$ is a finite $\tau'$-open subcover of $\mathscr{U}$, and $\langle X,\tau'\rangle$ is therefore compact. In words, if we start with a $\tau'$-open cover $\mathscr{U}$, it is also automatically a $\tau$-open cover, so it has a finite subfamily that covers the $X$. The members of that subfamily are members of $\mathscr{U}$, so we have the desired finite subcover; no extra sets can possibly be needed, because we’re using only sets that are in the original cover $\mathscr{U}$. It would be different if we were asking for an open refinement with some particular property instead of for a subcover: then we might actually need some of the sets in $\tau\setminus\tau'$. For example, let $\tau'$ be any non-paracompact topology on $X$, and let $\tau$ be the discrete topology. Then $\tau'\subseteq\tau$, $\langle X,\tau\rangle$ is paracompact, and $\langle X,\tau'\rangle$ is not paracompact.
H: Prove that the following maps are group homomorphisms. Show that this map is (not) injective and/or (not) surjective. $ \mathbb{R}^+ \to \mathbb{C}^{\ast}$ with $x \mapsto e^{2\pi i x}$. To prove that this is a group homomorphism, we prove $f(a+b) = f(a) \cdot f(b)$. $$f(a+b) = e^{2\pi i (a+b)} = e^{2\pi i a + 2 \pi i b} = e^{2\pi i a} \cdot e^{2\pi i b} = f(a) \cdot f(b)$$ Thus group homomorphism is proven. Next we show $f$ is injective. To prove this we show that $\forall x,y \in \mathbb{R}^+$ s.t. if $f(x) = f(y)$ then $x=y$. $$f(x) = f(y)$$ $$e^{2\pi i x} = e^{2\pi i y}$$ $$2\pi i x = 2\pi i y$$ $$ix = iy$$ $$x = y$$ Thus injectivity is proven. Now my problem is to show that the map is (not) surjective. The definition for surjectivity is $\forall x \in \mathbb{C}^{\ast}, \exists y \in \mathbb{R}^+$ s.t. $f(y) = x$. Is showing that $$y = e^{2\pi i x}$$ $$ln(y) = 2 \pi i x$$ $$\frac{ln(y)}{2 \pi i} = x$$ enough to conclude that the map is surjective, since the inverse exists $\forall x \in \mathbb{C}^{\ast} $? ps. I use the fact that I know that the map is injective to check if a bijection exists (bijective iff. surjective and injective). I am unsure how to prove that the map is surjective without using this fact. AI: For surjectivity: note that $f$ maps to only those complex numbers that lie on the unit circle since $|e^{2\pi i x}|=1$. So choose any complex number outside the unit circle to show it is not surjective. Your proof for injectivity also needs to be checked: Note that $e^{2\pi i x}=e^{2\pi i y} \implies 2\pi i x = 2\pi i y$ is not true. Think about $e^{2\pi i(1)}=e^{2\pi i (2)}=1$. So the given $f$ is a homomorphism but it is not one-one and not onto.
H: $f:\aleph_{\omega_1}\to\aleph_{\omega_1}$ strictly increasing and continuous with $\aleph_1$ fixed points. Can $f$ exist? Continuous function: for every limit ordinal $\lambda$, $f(\lambda)=\underset{\gamma<\lambda}\bigcup{f(\gamma)}$. If I prove that $Fix(f):=\{\alpha\in\aleph_{\omega_1}\mid f(\alpha)=\alpha\}$ is unbounded in $\aleph_{\omega_1}$, then $|Fix(f)|\geq \operatorname{cf}(\aleph_{\omega_1})=\operatorname{cf}(\omega_1)=\aleph_1$. For every $\alpha\in \aleph_{\omega_1}$ I define by countable recursion $\begin{cases} a_0=\alpha \\ a_{n+1}=f(a_n) \end{cases}$ . $\{a_n\}_{n\in\omega} $ is an increasing sequence, so $\underset{n\in\omega}{\bigcup}a_n=\lambda \;$ is a limit ordinal and $f(\lambda)=f(\underset{\gamma\in\lambda}{\bigcup}\gamma)=\underset{\gamma\in\lambda}{\bigcup}f(\gamma)=\underset{n\in\omega}{\bigcup}f(a_n)\leq\underset{n\in\omega}{\bigcup}a_{n+1}=\lambda$, and $f(\lambda)\geq\lambda$ because $f$ is strictly increasing on a well-ordered set. Now, my problem is to prove that $|Fix(f)|\leq\aleph_1$ and to show that such a function exists. AI: Define $f:\omega_{\omega_1}\to\omega_{\omega_1}$ as follows: $$f(\xi)=\begin{cases} \omega_2+\xi,&\text{if }\xi\le\omega_1\\ \omega_{\alpha+2}+\eta,&\text{if }\xi=\omega_\alpha+\eta\in(\omega_\alpha,\omega_{\alpha+1}]\text{ for some }\alpha<\omega_1\\ \xi,&\text{if }\xi=\omega_\gamma\text{ for some }\gamma<\omega_1\text{ such that }\operatorname{cf}\gamma=\omega\;. \end{cases}$$
H: Given $n$ slots and $k$ objects to fill the slots, what is the probability of a given slot being filled? Problem Given: $n$ slots, numbered from $1\ldots n$; $k$ objects; a slot can be filled by one object. What is the probability that the slot at some position $i$ is filled? Some Notation For a visual representation we can denote the slot at position $i$ with $\square_i$ if it is empty or $\blacksquare_i$ if it is filled. A sample configuration can be: $$ (\blacksquare_1, \blacksquare_2, \square_3, \ldots, \square_n) $$ where slots $1$ and $2$ are filled, and the other $k-2$ filled slots are somewhere between $4$ and $n-1$. A special configuration is: $$ (\blacksquare_1, \blacksquare_2, \ldots, \blacksquare_k, \square_{k+1}\ldots, \square_n) $$ where slots from $1$ to $k$ are filled, and the following are empty. Therefore we are interested in the value of $\Pr(\blacksquare_i)$. A possible approach We can count all the possible configurations with combinations, by thinking of how we can pick $k$ numbers from $(1, 2, \ldots, n)$ regardless of the order; a picked number is a filled slot: $$ \texttt{total} = \frac{n!}{k!(n-k)!} $$ How would we count the ones that match $\blacksquare_i$? By just removing the $i$th slot and ending up with $n-1$ slots and $k-1$ fills, and doing the same calculation as above, only on a smaller set $(1, 2, \ldots, i-1, i+1, \ldots, n)$: $$ \texttt{remaining} = \frac{(n-1)!}{(k-1)!(n-1-k+1)!} $$ If we divide them we get our result: $$ \frac{\texttt{remaining}}{\texttt{total}} = \ldots = \frac{k}{n} $$ Discussion I am having a hard time understanding intuitively the final result: $$ \Pr(\blacksquare_i) = \frac{k}{n} $$ Is this the most intuitive way of explaining that the position is not relevant (maybe by counting the probabilities of each position)? Is there some math concept we can use here? How can this problem be reduced to: What is the probability of a random slot being filled? Or, if we like to think of balls in a bag that come in 2 colors, white $\square$ and black $\blacksquare$, the question would be: What is the probability that the extracted ball is black? Thanks! AI: The intuition is such: you have $k$ balls and $n-k$ voids, and you need to pick what to put at slot $i$. You have $k$ objects that will make it full, so the probability is indeed $\tfrac{k}{n}$. Due to symmetry, the answer is independent of $i$. Choosing the slots at random will not change this result, as they are all symmetric and their names don't matter.
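Enumerating all $\binom{n}{k}$ configurations for small $n$ and $k$ shows every slot is filled with probability exactly $k/n$ (a minimal Python sketch; $n=7$, $k=3$ are arbitrary choices):

```python
from fractions import Fraction
from itertools import combinations

n, k = 7, 3
configs = list(combinations(range(1, n + 1), k))   # all equally likely fillings
for i in range(1, n + 1):
    p = Fraction(sum(i in c for c in configs), len(configs))
    print(i, p)                                    # 3/7 for every slot i
```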
H: Do we need gammas to determine $\nabla$? I know that something must be wrong with the following calculation - otherwise, the covariant derivative could be defined intrinsically on a differentiable manifold - but I don't seem to be able to find the mistake. Let $(M,\mathcal{O},\mathcal{A},\nabla)$ be a differentiable manifold with a linear connection, taking a pair of a vector field and a $(p,q)$-tensor to another $(p,q)$-tensor. For any vector fields (($1,0$)-tensors) $X,Y\in\Gamma(TM)$, considered within a chart $u\in\mathcal{A}$, we have $$\nabla_XY\stackrel{u}{=}X^i\cdot\nabla_{e_i}Y\stackrel{u}{=}X^i\cdot\nabla_{e_i}Y^j\cdot e_j$$ where we use the Einstein summation convention and $e_n(p)$, $1\leq n\leq\dim M$, are the chart induced basis vectors at $T_pM$. We recall the Leibniz rule for covariant derivatives, i.e., $$\nabla_X(T(f))=(\nabla_XT)(f)+T(\nabla_Xf)$$ where $T$ is a vector field. Hence, $$(\nabla_{e_i}Y^j\cdot e_j)(f)=\nabla_{e_i}(Y^j\cdot e_j(f))-Y^j\cdot e_j(\nabla_{e_i}f)=e_i(Y^j\cdot e_j(f))-Y^j\cdot e_j(e_i(f))$$ Due to the Leibniz rule for differentiation, we have $$e_i(Y^j\cdot e_j(f))-Y^j\cdot e_j(e_i(f))=e_i(Y^j)\cdot e_j(f)+Y^j\cdot e_i(e_j(f))-Y^j\cdot e_j(e_i(f))$$ Therefore: $$\nabla_XY=X^i\cdot e_i(Y^j)\cdot e_j(f)+X^i\cdot Y^j\cdot e_i(e_j(f))-X^i\cdot Y^j\cdot e_j(e_i(f))$$ So we derived an expression of the covariant derivative on a vector field that does not depend on the Gammas, $$\Gamma^q_{ij}\cdot e_q:=\nabla_{e_i}e_j\text{,}$$ that usually represent the choice necessary for differentiating vector fields on a manifold consistently. Where is the mistake in the above derivation? Did I misuse one of the two Leibniz rules somehow? AI: I think the issue is your first "Leibniz rule" for covariant derivatives. In that equation (and in a few other places) you seem to be trying to take the covariant derivative of a function, i.e. writing something like $\nabla_X f$, while $\nabla_X$ is actually a function on vector fields. So I'll assume you mean $\nabla_X f = X(f)$. But even if this is the case, then that equation becomes $$ X(T(f)) = (\nabla_XT)(f) + T(X(f)) \implies (\nabla_XT)(f) = X(T(f))-T(X(f)) = [X,T](f), $$ or in other words, $$ \nabla_XT = [X,T]. $$ Clearly this isn't the "arbitrary" connection you started with, and in fact, it's not even a connection at all, since we must have $\nabla_{fX} T = f\nabla_X T$ for all $f: M \to \mathbb{R}$ if $\nabla_X$ is a connection, but $[fX,T] \neq f[X,T]$ in general. Edit (to reflect question in the comments). In short, (12.8) in "Manifolds and differential geometry" is expressing "how do I covariantly differentiate $(p,q)$-tensor fields?" The equation you wrote is trying to express "what happens when I covariantly differentiate $T(f)$?" These are totally different questions, explaining why (12.8) isn't relevant for what you were trying to do. To understand this better, let's look at (12.8) more closely. Let's fix some notation first. Fix a vector field $X$ on our manifold $M$, and let $\mathscr{T}^{(p,q)}(M)$ be the space of $(p,q)$-tensor fields on $M$. Note that $\mathscr{T}^{(1,0)}(M) = \mathscr{X}(M)$, the space of vector fields on $M$, and $\mathscr{T}^{(0,1)}(M) = \Omega^1(M)$, the space of $1$-forms on $M$. Here's how (12.8) works. There, we assume that all we know is $\nabla_X: \mathscr{X}(M) \to \mathscr{X}(M)$. For consistency, let's write this as $\nabla_X: \mathscr{T}^{(1,0)}(M) \to \mathscr{T}^{(1,0)}(M)$.
Now we want to define $\nabla_X: \mathscr{T}^{(p,q)}(M) \to \mathscr{T}^{(p,q)}(M)$ using the Leibniz rule (12.8). To see how this works, let's practice by defining $\nabla_X: \mathscr{T}^{(0,1)}(M) \to \mathscr{T}^{(0,1)}(M)$. That is, given a $1$-form $\alpha \in \mathscr{T}^{(0,1)}(M) = \Omega^1(M)$, we want to define the $1$-form $\nabla_X\alpha \in \mathscr{T}^{(0,1)}(M) = \Omega^1(M)$. I can specify $\nabla_X\alpha$ by telling you the function $(\nabla_X\alpha)(Y)$ for all vector fields $Y \in \mathscr{X}(M)$. (Think about this pointwise; for some $x \in M$, I can specify a linear map $(\nabla_X\alpha)_x: \mathsf{T}_xM \to \mathbb{R}$ by telling you the number $(\nabla_X\alpha)_x(Y_x)$ for all vectors $Y_x \in \mathsf{T}_xM$.) So I'll define $$ (\nabla_X\alpha)(Y) := \underbrace{X(\underbrace{\alpha(Y)}_{\text{function}})}_{\text{function}} - \underbrace{\alpha(\underbrace{\nabla_XY}_{\text{vector field}})}_{\text{function}}, $$ which is (12.8) in our present case, $p=0$ and $q=1$. If we want to define $\nabla_X: \mathscr{T}^{(0,2)}(M) \to \mathscr{T}^{(0,2)}(M)$, then for an $(0,2)$-tensor field $\beta \in \mathscr{T}^{(0,2)}(M)$, I can similarly define the $(0,2)$-tensor field $\nabla_X\beta\in \mathscr{T}^{(0,2)}(M)$ by specifying $(\nabla_X\beta)(Y_1,Y_2)$ for all vector fields $Y_1,Y_2 \in \mathscr{X}(M)$. Now, let's use this line of thinking to see how to translate $\nabla_X: \mathscr{T}^{(1,0)}(M) \to \mathscr{T}^{(1,0)}(M)$ into the setup of (12.8). Above, when I wanted to specify a $(0,1)$- or $(0,2)$-tensor field, I told you how it acted on all vector fields. This time, given a vector field $Y \in \mathscr{T}^{(1,0)}(M)$, I want to specify a $(1,0)$-tensor field $\nabla_X Y\in \mathscr{T}^{(1,0)}(M)$ to you. What does $\nabla_X Y$ act on? It acts on $1$-forms. (This is just linear algebra; for a finite-dimensional vector space $V$, we have $V \cong V^{**}$, so $V$ acts on $V^*$.) So to specify $\nabla_X Y\in \mathscr{T}^{(1,0)}(M)$, I can tell you the function $(\nabla_XY)(\omega)$ for all $1$-forms $\omega \in \Omega^1(M)$. This leads us to $$ (\nabla_XY)(\omega) = \underbrace{X(\underbrace{Y(\omega)}_{\text{function}})}_{\text{function}}-\underbrace{Y(\underbrace{\nabla_X\omega}_{\text{$1$-form}})}_{\text{function}}, $$ which is (12.8) in our present case, $p=1$ and $q=0$. In your question, the $1$-form $\omega$ was replaced by a function $f$. Here, $Y$, being (locally) a function on $\Omega^1(M)$, literally takes $1$-forms like $\omega$ as inputs pointwise; at each $x \in M$, we evaluate $Y_x$ on $\omega_x$. On the other hand, $Y(f)$ is just notation; $Y$ is not being evaluated at any elements.
H: Calculate partial derivatives of $f(x,y) =\begin{cases}0, & xy\neq0 \\ 1, & xy=0\end{cases}$ Calculate the partial derivatives of: $$ f(x,y) = \begin{cases}0, & xy\neq0 \\ 1 , & xy=0\end{cases} $$ I'm not sure how to evaluate it. Can anyone give me a direction? AI: So $xy = 0$ is only true if $x = 0$ or $y = 0$ or both, which lets us know that these lines are going to be special. In the case where $x \ne 0$ and $y \ne 0$ (away from the special lines) we can use the limit definition of the derivative to determine that the partial derivatives in these regions will be 0. In the case where $x = 0$ and $y \ne 0$, using the limit definition of the derivative, the partial with respect to $x$ is undefined due to the discontinuity. However the partial with respect to $y$ is simply 0. Vice versa for $y = 0$ and $x \ne 0$. For the case of $x = 0$ and $y = 0$ the partial derivatives in both orthogonal directions will be zero by the same logic as the previous two cases. However, in any other direction it will be undefined. You can summarize this like this (note that at the origin both partials exist and equal zero, consistent with the previous paragraph): $$ \begin{split} \frac{\partial f(x,y)}{\partial x} &= \begin{cases} \text{undefined}, \quad x = 0 \ \text{and} \ y \ne 0 \\ 0, \quad \text{otherwise} \end{cases} \\ \frac{\partial f(x,y)}{\partial y} &= \begin{cases} \text{undefined}, \quad y = 0 \ \text{and} \ x \ne 0 \\ 0, \quad \text{otherwise} \end{cases} \\ \text{For} \, a \ne 0, b \ne 0, u = ax + by:& \\ \frac{\partial f(x,y)}{\partial u} &= \begin{cases} \text{undefined}, \quad x = 0 \, \text{or} \, y = 0 \\ 0, \quad \text{otherwise} \end{cases} \\ \end{split} $$ As requested, the limit definition of the derivative is: $$ \lim_{h \rightarrow 0} \frac{f(x + h) - f(x)}{h} $$ Observe that in the cases when the limit is undefined, the difference $f(x + h) - f(x)$ does not get any smaller as $h \to 0$. It should be straightforward to prove the non-existence of the limit from there.
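To see the undefined partials concretely, one can evaluate the difference quotients numerically. A small sketch (point and step sizes chosen arbitrarily): at $(0, 1)$ the $x$-quotient blows up like $-1/h$ while the $y$-quotient is identically $0$.

```python
def f(x, y):
    return 1.0 if x * y == 0 else 0.0

x0, y0 = 0.0, 1.0                          # a point on the special line x = 0
for h in (1e-1, 1e-2, 1e-3):
    qx = (f(x0 + h, y0) - f(x0, y0)) / h   # equals -1/h: diverges, so f_x is undefined here
    qy = (f(x0, y0 + h) - f(x0, y0)) / h   # stays 0, so f_y = 0 here
    print(h, qx, qy)
```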
H: Finding $a$ such that these two vectors are orthogonal Suppose we have an inner product space V, with inner product $<x,y>$. In this space, we have two nonzero vectors u and v. I am trying to find an arbitrary, real $a$ for which the following two vectors are orthogonal: $av-u$ and $v$. I know two vectors are orthogonal when their inner product equals zero. When I take the inner product of these two vectors, I get $(av-u)v = 0$. Is it correct that this means I should solve $(av-u)=0$? And would $a$, therefore, be $u/v$? Am I doing this correctly? thanks! AI: Note that you cannot simply solve $av-u=0$: that equation has a solution only when $u$ and $v$ are parallel, and there is no such thing as dividing one vector by another. Instead, just expand the inner product as follows (assuming $v \neq 0$; if $v = 0$, the inner product is always zero, $v$ is orthogonal to every vector, and any $a$ works): $$ <av-u|v> = a<v|v> - <u|v> = a||v||^2 - <u|v> = 0 \implies a = \frac{<u|v>}{||v||^2} $$
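As a sanity check, here is a small numerical verification of the formula in, say, $\mathbb{R}^4$ with the standard dot product (the vectors are arbitrary random examples):

```python
import numpy as np

rng = np.random.default_rng(0)
u, v = rng.normal(size=4), rng.normal(size=4)

a = (u @ v) / (v @ v)       # a = <u, v> / ||v||^2
print((a * v - u) @ v)      # ~0 up to rounding: av - u is orthogonal to v
```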
H: Proof Verification: $\text{Hom}(\mathbb Z[x],S)=S$ (as rings) How to prove $\text{Hom}(\mathbb Z[x],S)=S$ (as rings), where S is any ring? My attempt: took an element $b$ in $S$, defined a map $b: \mathbb Z[x]\to S$ which maps $f(x)$ to $f(b)$. Clearly $b$ is a ring homomorphism, hence we proved one side inclusion. For the other side, take $b$ in $\text{Hom}(\mathbb Z[x],S)$ with the same mapping; since $b$ is a ring homomorphism from $\mathbb Z[x]$ to $S$, clearly $f(b)$ is in $S$. Since $f(x)$ is in $\mathbb Z[x]$, surely $b$ is in $S$. Is my proof valid? Also, to show the generalised result for $n$ variables, can I use induction? AI: As I understand, you give a map, let's call it $\phi : S \rightarrow Hom(\mathbb{Z}[x], S)$, which takes $b \in S$ to the map $\phi_b: \mathbb{Z}[x] \rightarrow S$ which sends a polynomial $p \in \mathbb{Z}[x]$ to $\phi_b(p) = p(b)$. Note $S$ has an identity element, call it $e$, and $\phi_b(1) = e$. To check this map is well defined, for each $b \in S$ we should check $\phi_b$ is indeed a ring homomorphism. So we would show things like $\phi_b(p\cdot q) = (p\cdot q)(b) = p(b)\cdot q(b) = \phi_b(p) \cdot \phi_b(q)$. To show that this map is indeed a bijection we can construct an inverse. Let's make a map $\psi : Hom(\mathbb Z[x], S) \rightarrow S$. We want $\psi(\phi_b) = b$ so a natural choice for $\psi$ is a map sending any homomorphism $f : \mathbb Z[x] \rightarrow S$ to $\psi(f) = f(x)$ where $x \in \mathbb Z[x]$ is a polynomial. We have $\psi(\phi_b) = \phi_b(x) = b$ as desired. We just need to show that $\phi_{\psi(f)} = f$ for any $f \in Hom(\mathbb Z[x], S)$. So take any $p \in \mathbb Z[x]$, we have $\phi_{\psi(f)}(p) = p(\psi(f)) = p(f(x)) = f(p)$. The last equality $p(f(x)) = f(p)$ should be justified carefully! And this is the part which (I think) will break when you try to generalise this to $n$ variables. Addendum for non-unital rings. If $S$ doesn't have an identity then $Hom(\mathbb Z[x], S)$ may be trivial when $S$ is non-trivial. E.g. take $S = 2\mathbb Z$, a non-unital ring contained in $\mathbb Z$, then take $ f \in Hom(\mathbb Z[x], S)$. We have $f(1) = 2n$ for some $n \in \mathbb Z$. Since $f$ is a homomorphism of rings, $2n = f(1) = f(1^2) = f(1)^2 = 4n^2$, hence $n = 0$. So $Hom(\mathbb Z[x], S)$ contains only the zero map, but $S$ is not the zero ring. To avoid these pathologies we have assumed $S$ has an identity element.
H: Gauss curvature derived from unit normal vector I want to know more about the differential geometry of surfaces, especially Gaussian curvature. Obviously, we can get the mean curvature of a surface from the divergence of the unit normal vector of the surface. However, can the Gaussian curvature be derived from the divergence or curl of the unit normal vector of the surface? Perhaps there is also some historical / background information about their importance? Thank you in advance. Supplement: The mean curvature of a surface specified by an equation $\displaystyle\,\!F(x,y,z)=0$ can be calculated by using the gradient $\displaystyle\nabla F=\left(\frac{\partial F}{\partial x}, \frac{\partial F}{\partial y}, \frac{\partial F}{\partial z} \right)$ and the divergence of the unit normal. A unit normal is given by $\displaystyle\frac{\nabla F}{|\nabla F|}$ and the mean curvature is $\displaystyle H = -{\frac{1}{2}}\operatorname{div}\left(\frac{\nabla F}{|\nabla F|}\right)$ AI: You can refer to Diferential Geometry Of Three Dimension Vol II by Charles Ernest Weatherburn https://archive.org/details/diferentialgeome031396mbp/page/n91/mode/2up $$\displaystyle K_G = \dfrac{1}{2}\operatorname{div}\left[\frac{\nabla F}{|\nabla F|}\cdot\operatorname{div}\left(\frac{\nabla F}{|\nabla F|}\right)+\frac{\nabla F}{|\nabla F|}\times\operatorname{curl}\left(\frac{\nabla F}{|\nabla F|}\right)\right]$$ Weatherburn refers to the Gaussian curvature as the second curvature of the surface, and to the mean curvature as the first curvature of the surface.
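One can check this formula symbolically on a simple example. The sketch below (sympy, with hand-written grad/div/curl helpers) applies it to the unit sphere $F = x^2+y^2+z^2-1$, where the Gaussian curvature should come out as $1$ on the surface:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
F = x**2 + y**2 + z**2 - 1        # unit sphere; expect K = 1 on the surface

def grad(f):
    return sp.Matrix([sp.diff(f, c) for c in coords])

def div(V):
    return sum(sp.diff(V[i], coords[i]) for i in range(3))

def curl(V):
    return sp.Matrix([sp.diff(V[2], y) - sp.diff(V[1], z),
                      sp.diff(V[0], z) - sp.diff(V[2], x),
                      sp.diff(V[1], x) - sp.diff(V[0], y)])

gF = grad(F)
n = gF / sp.sqrt(sum(c**2 for c in gF))       # unit normal grad F / |grad F|
inner = n * div(n) + n.cross(curl(n))         # the vector inside the outer div
K = sp.simplify(div(inner) / 2)

# Evaluate at a point on the sphere, e.g. the north pole (0, 0, 1):
print(K.subs({x: 0, y: 0, z: 1}))             # -> 1
```

(Off the surface the expression evaluates to $1/(x^2+y^2+z^2)$, which restricts to $1$ on the sphere.)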
H: Subgradient of argmax and chain rule Let $\mathcal{X} \subset \mathbb{R}^n$ and $c \in \mathbb{R}^n$. Moreover, define $$ f(c) := \max_{x\in\mathcal{X}} \ x^\top c \quad \text{and} \quad \bar{x}_c := \text{arg}\max_{x\in\mathcal{X}} \ x^\top c. $$ In this paper, Proposition 3.1, it is argued that $\bar{x}_\hat{c}$ is a subgradient of $f$ at $\hat{c}$. They prove it using the following argument: For any $c \in \mathbb{R}^n$, $$ f(c) - f(\hat{c}) = \max_{x\in\mathcal{X}} \ x^\top c - \max_{x\in\mathcal{X}} \ x^\top \hat{c} = \max_{x\in\mathcal{X}} \ x^\top c - \bar{x}_\hat{c}^\top \hat{c} \geq \bar{x}_\hat{c}^\top (c - \hat{c}). $$ My question is twofold: In order to show that $\bar{x}_\hat{c}$ is a subgradient of $f$ at $\hat{c}$, is it enough to show that $f(\hat{c}) + \bar{x}_\hat{c}^\top (c - \hat{c})$ lower bounds $f(c)$ for any $c$, as was done in the paper? By definition, $f(c) = \bar{x}_c^\top c$. Since $\bar{x}_c$ is a function of $c$, can we derive a (sub)gradient of $f$ w.r.t. $c$ using some kind of chain rule? Moreover, does it yield the same result as shown in the paper? AI: Yes, that's exactly the definition of the subgradient: $\partial f(\hat{c}) = \{g : f(c) - f(\hat{c}) \ge g^\top (c-\hat{c}), \quad \forall c \in \mathbb{R}^n\}$. That's called the envelope theorem: when differentiating the value function, the dependence of the maximizer on $c$ contributes nothing, so $\nabla f(c) = \bar{x}_c$ wherever the maximizer is unique, which agrees with the paper's result. Geometrically, $\nabla f(c) = \bar{x}_c$ means that $\{x : x^\top c = f(c)\}$ is a supporting hyperplane of the choice set $\mathcal{X}$ at $\bar{x}_c$.
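The subgradient inequality is easy to verify numerically when $\mathcal{X}$ is a finite set. A sketch with numpy and a randomly generated $\mathcal{X} \subset \mathbb{R}^3$ (all parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))              # a finite choice set in R^3

f = lambda c: np.max(X @ c)               # the support function f(c) = max_x x.c
xbar = lambda c: X[np.argmax(X @ c)]      # a maximizer, the claimed subgradient

c_hat = rng.normal(size=3)
g = xbar(c_hat)

# Check f(c) >= f(c_hat) + g.(c - c_hat) at many random points c
cs = rng.normal(size=(1000, 3))
gaps = np.array([f(c) - f(c_hat) - g @ (c - c_hat) for c in cs])
print(gaps.min())                          # >= 0 (up to rounding)
```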
H: What does positive dimensional variety mean? I recently heard the term 'positive dimensional variety'. Does this simply mean that the variety is nonempty and not a point? Or am I misunderstanding this? AI: Yes, a variety is zero dimensional if it's a finite set of points (over an algebraically closed field), and positive means at least one-dimensional.
H: Interpretation of zero angle between two elements in an inner product space Take $f,g \in V$, where $V$ is an inner product space. Let $\langle \cdot, \cdot \rangle : V \times V \to \mathbb{R}$ denote the inner product on $V$. Let the "angle" $\theta$ between $f$ and $g$ be defined through the rule $$ \cos(\theta) = \frac{\langle f, g\rangle}{{\left\|f\right\| \left\|g\right\|}} $$ where norms on $V$ are defined in terms of the inner product as $\left\|\cdot\right\| \doteq \sqrt{\langle \cdot, \cdot \rangle} $. My question is simple: if $\cos(\theta) = 1$, what conclusions can be made? In particular, I would like to know if I can conclude that $f = g$ almost everywhere, and if not, I would like to know what extra assumptions are needed to get that result. In particular, I am interested to find out if $f = g \text{ }\mathrm{a.e.}$ when $\cos(\theta) = 1$ for the restricted case when $V$ is the space of bounded, real valued functions whose domain is a closed interval in the real line. Thank you very much for your help! AI: (The question is tagged with [real-analysis], therefore I'll assume that $V$ is an inner product space over $\Bbb R$.) If $$ \tag{*} \frac{\langle f, g\rangle}{{\|f\| \|g\|}} = 1 $$ then $f$ and $g$ are necessarily non-zero, and $\langle f, g\rangle$ is a positive real number. It follows that $$ |\langle f, g\rangle| = \langle f, g\rangle = \|f\| \|g\| \, , $$ i.e. we have equality in the Cauchy–Schwarz inequality, which is the case if and only if $f$ and $g$ are linearly dependent. Since both are non-zero, we have $$ f = c g \text{ for some } c \in \Bbb R $$ and since $\langle f, g\rangle > 0$ $$ \tag{**} f = c g \text{ for some } c > 0 \, . $$ Conversely, if $(**)$ holds for non-zero $f, g \in V$ then $$ \langle f, g\rangle = c \langle g, g\rangle = c \| g \|^2 = \| f \|\| g \| $$ so that $(*)$ and $(**)$ are actually equivalent. If $V$ is the space $L_2(I)$ of square-integrable functions on some interval $I$ with the inner product $$ \langle f, g\rangle = \int_I f(x) g(x) \, dx $$ then functions which agree almost everywhere are identified. In that case $f=cg$ in $L_2$ means that $f(x) =cg(x)$ a.e. in $I$.
H: Is it true for all values in probability? - Intersection of $2$ sets I know that B can have values only from $0$ to $1$. If I have this probability: $\text{Pr}(A=a, B \leq 1)$ Is it true to say that: $\text{Pr}(A=a, B \leq 1) = \text{Pr}(A=a)$ It seems trivial but I am not sure.. Thank you! AI: In general if $\mathsf{P}(B)=1$, then $\mathsf{P}(A\cap B)=\mathsf{P}(A)$ for any event $A$. To see this, note that $\mathsf{P}(A\cap B)\le \mathsf{P}(A)$ and $\mathsf{P}(A^c \cup B^c)\le \mathsf{P}(A^c)+\mathsf{P}(B^c)=\mathsf{P}(A^c)$. Taking complements in the second inequality gives $\mathsf{P}(A\cap B) = 1-\mathsf{P}(A^c \cup B^c) \ge 1-\mathsf{P}(A^c) = \mathsf{P}(A)$, and the two inequalities together give the equality. In your case the event $\{B\le 1\}$ has probability $1$, so the claim holds.
H: Evaluate $\int_0^{\frac{\pi}{2}} \frac{\sin^3{(2x)}}{\ln{\left(\csc{x}\right)}} \mathop{dx}$ Challenge problem by friend is $$\int_0^{\frac{\pi}{2}} \frac{\sin^3{(2x)}}{\ln{\left(\csc{x}\right)}} \mathop{dx}$$ I know you can write $\ln{\left(\csc{x}\right)}=-\ln{\sin{x}}$ and $\sin{(2x)}=2\sin{(x)}\cos{(x)}$. I tried rewriting the integral but then could not go further. Even Wolfram Alpha (https://www.wolframalpha.com/input/?i=integral+of+sin%5E3%282x%29%2F%28log%28csc%28x%29%29%29+dx+from+0+to+pi%2F2) could not get a closed form!? Is it even possible. AI: You have the correct idea of expanding $\sin{(2x)}$: $$I=8\int_0^{\frac{\pi}{2}} \frac{\sin^3{x}\cos^3{x}}{\ln{\left(\csc{x}\right)}} \; dx$$ Substituting $u=\ln{\left(\csc{x}\right)}$ will be very helpful: $$I=8\int_{\infty}^0 \frac{\sin^3{x}\cos^3{x}}{u} \cdot \frac{du}{-\cot{x}}$$ $$=8\int_{0}^{\infty} \frac{\sin^4{x}\left(1-\sin^2{x}\right)}{u} \; du$$ $$=8\int_{0}^{\infty} \frac{e^{-4u}\left(1-e^{-2u}\right)}{u} \; du$$ $$=8\int_{0}^{\infty} \frac{e^{-4u}-e^{-6u}}{u} \; du$$ Now, this is a simple application of the Frullani integral: $$8 \ln{\left(\frac{6}{4}\right)}=\boxed{8\ln{\left(\frac{3}{2}\right)}}$$ So, it turns out that this integral does have a closed form expression.
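A quick numerical check of the closed form using scipy's adaptive quadrature ($8\ln(3/2)\approx 3.2437$):

```python
import numpy as np
from scipy.integrate import quad

integrand = lambda x: np.sin(2 * x)**3 / np.log(1 / np.sin(x))
value, err = quad(integrand, 0, np.pi / 2)

print(value, 8 * np.log(1.5))   # both ~3.2437
```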
H: Can a finite set have a topology with an infinite number of open sets? Can a finite set have a topology with an infinite number of open sets? ..(1) The question originated when my professor gave us as an example that if $X$ is finite or $\tau$ is finite, $(X, \tau)$ is compact And that that was so, even if, in the case of finite $X$ , $\tau$ had an infinite number of open sets... Now if the topology has a finite number of open sets it is clear we can always extract a finite subcover, which is the initial cover itself, but what about the infinite case, why is it true, provided (1) is possible? AI: A finite set can only have finitely many subsets. Therefore every topology on a finite set is finite itself. What may confuse you, however, is that the proof that every finite space is compact does not go through "the power set is finite, therefore we are done", but rather by saying that if $\{U_i\mid i\in I\}$ is an open cover, then for every $x\in X$ we can choose some $i_x$ such that $x\in U_{i_x}$, and therefore $\{U_{i_x}\mid x\in X\}$ is a finite subcover. This is a "more correct" proof, because it actually shows that every finite set is compact in every topology. Even if the space itself is infinite, every finite subset is compact.
H: Evaluating: $\lim_{t\to\infty}\frac1t\int_0^t\sin(\alpha x)\cos(\beta x)dx$ I tried evaluating the integral but maybe there's an easier way. Please help. Here is what I did (for $\alpha\neq\pm\beta$): $\begin{aligned}\lim_{t\to\infty}\frac1t\int_0^t \sin(\alpha x)\cos(\beta x)dx&=\lim_{t\to\infty}\frac1t\int_0^t\frac12(\sin(\alpha x+\beta x)+\sin(\alpha x-\beta x))dx\\&=\lim_{t\to\infty}\frac1{2t}\left(\frac{1-\cos((\alpha+\beta)t)}{\alpha+\beta}+\frac{1-\cos((\alpha-\beta)t)}{\alpha-\beta}\right)\end{aligned}$ AI: There is indeed a simple way. Express each sine and cosine in terms of exponentials via Euler formula. Multiply things together. The "infinite time average" of exponentials is straightforward. Make sure you pay attention to all the cases (e.g. $\alpha=\pm \beta$).
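Numerically, the time average does die off as $t$ grows, as a quick Riemann-sum sketch shows (the values of $\alpha,\beta$ are arbitrary):

```python
import numpy as np

alpha, beta = 2.0, 3.0
for t in (10.0, 100.0, 1000.0, 10000.0):
    x, dx = np.linspace(0.0, t, 2_000_000, retstep=True)
    avg = np.sum(np.sin(alpha * x) * np.cos(beta * x)) * dx / t
    print(t, avg)    # tends to 0 as t grows
```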
H: Not getting the right answer in this limit with absolute value $\lim_{x \to a} \dfrac{\sqrt{ax}-|a|}{ax-a^2}$ , a<0 im getting: $\lim_{x \to a} \dfrac{1}{\sqrt{ax}+a}$ So my final answer is: $\dfrac{1}{|a|+a}$ But the right answer is: $\dfrac{1}{2|a|}$ Im not sure why, can you help me please? AI: It should be $$ \lim_{x \to a} \dfrac{\sqrt{ax}-|a|}{ax-a^2}= \lim_{x \to a} \dfrac{\sqrt{ax}-|a|}{ax-a^2}\cdot\dfrac{\sqrt{ax}+|a|}{\sqrt{ax}+|a|}=\lim_{x \to a} \dfrac{1}{\sqrt{ax}+|a|}=\frac{1}{2|a|} $$
H: If matrix $A-I$ is positive semidefinite, does $\lambda_{\inf} \geq 1$ hold? If matrix $A-I$ is positive semidefinite, does the following hold? $$\lambda_{\inf} \geq 1$$ where $\lambda_{\inf}$ is the infimum of the set of all eigenvalues of $A$. If so, why? Thanks in advance. AI: HINT: Suppose you had an eigenvector $v$ with corresponding eigenvalue $\lambda$. What does $\langle (A-I)v,v\rangle\ge 0$ tell you?
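Completing the hint numerically (assuming, as the spectral statement suggests, that $A$ is symmetric): build $A = I + M$ with $M$ positive semidefinite and check the smallest eigenvalue.

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))
M = B @ B.T                  # M = B B^T is positive semidefinite
A = np.eye(5) + M            # so A - I is PSD by construction

print(np.linalg.eigvalsh(A).min())   # >= 1, up to floating-point rounding
```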
H: Proof of $n-$dimensional Brownian motion identities for components of $B_t$ The person in this post (Proving Kolmogorov's continuity condition holds for Brownian motion?) used the following two identities in their proof for an $n$ dimensional Brownian motion: $$ \mathbb{E}((B_{t,i}-B_{s,i})^4) = 3(t-s)^2$$ and $$ \mathbb{E}((B_{t,i}-B_{s,i})^2(B_{t,j}-B_{s,j})^2) = (t-s)^2$$ $$ $$ For the first identity I will use that $B_t$ is a Gaussian process so that for any $u_1, u_2 \in \mathbb{R}$ we must have that $$ \mathbb{E}(\exp(i u_1 B_{t,j} + iu_2 B_{s,j})) = \exp\Big(-\frac{1}{2}\sum_{k,m=1}^2u_k c_{km}u_m + i(u_1 M + u_2M)\Big)$$ where $\mathbb{E}(B_{t,j})=\mathbb{E}(B_{s,j}) = M$ and $C = [c_{k,m}]$ is the covariance matrix of $(B_{s,j},B_{t,j})$. Choosing $u_1 = u$ and $u_2 =-u$ for any $u\in\mathbb{R}$ and assuming that $s\leq t$, we are left with $$ \mathbb{E}(\exp(i u( B_{t,j} - B_{s,j}))) = \exp\Big(-\frac{1}{2}u^2(t-2s+s)\Big) =\exp\Big(-\frac{1}{2}u^2(t-s)\Big) $$ Now taking the Taylor Expansion of both sides we are left with $$ \sum_{n=0}^\infty \frac{(iu)^n}{n!}\mathbb{E}((B_{t,j}-B_{s,j})^n) = \sum_{k=0}^\infty \frac{(-1)^ku^{2k}(t-s)^{k}}{k!2^k}$$ Now the right hand side is strictly real, so the left handside must also be strictly real. Thus we can do some trickery and conclude that the sum of the imaginary components is 0. Hence we may rewrite as the following $$ \sum_{k=0}^\infty \frac{(-1)^k u^{2k}}{(2k)!}\mathbb{E}((B_{t,j}-B_{s,j})^{2k}) = \sum_{k=0}^\infty \frac{(-1)^ku^{2k}(t-s)^{k}}{k!2^k}$$ This must hold for all $u\in\mathbb{R}$, by the Linear Independence of polynomials we conclude that $$ \mathbb{E}((B_{t,j}-B_{s,j})^{2k}) = \frac{(2k)!}{k!2^k}(t-s)^{k}$$ In particular for $k=2$ we are left with $$ \mathbb{E}((B_{t,j}-B_{s,j})^{2k}) = 3(t-s)^2$$ $$ $$ For the second statement I understand that the component of $B_t$ are independent and thus uncorrelated. I want to be able to do something as simple as $$ \mathbb{E}((B_{t,i}-B_{s,i})^2(B_{t,j}-B_{s,j})^2) = \mathbb{E}((B_{t,i}-B_{s,i})^2)\ \mathbb{E}((B_{t,j}-B_{s,j})^2) = (t-s)^2$$ In order to do that I need to show that $B_{t,i}-B_{s,i}$ and $B_{t,j}-B_{s,j}$ are independent and thus $(B_{t,i}-B_{s,i})^2 $ and $(B_{t,j}-B_{s,j})^2$ are independent as well. All I have to go on is that $B_{t,i}$ and $B_{t,j}$ are independent variables and similarly $B_{s,i}$ and $B_{s,j}$ are as well. It seems that theres a simple transformation rule that I am missing. Edit: Including this link because it shows the claim that $\sigma(B_{t,i}:t\geq 0)$ and $\sigma(B_{t,j}:t\geq 0)$ are independent if $i\not=j$. Showing that two Gaussian processes are independent AI: All I have to go on is that $B_{t,i}$ and $B_{t,j}$ are independent variables and similarly $B_{s,i}$ and $B_{s,j}$ are as well. That is not all you have to go on! The definition of $n$ dimensional Brownian motion includes the property that the processes $(B_{t,i} : t \ge 0)$ and $(B_{t,j} : t \ge 0)$ are independent (indeed, mutually independent as $i$ varies). This means that the $\sigma$-fields $\sigma(B_{t,i} : t \ge 0)$ and $\sigma(B_{t,j} : t \ge 0)$ are independent; in particular, given any multivariate Borel functions $f,g$ and any finite number of indices $t_1, \dots, t_n$, $s_1, \dots, s_m$, the random variables $$f(B_{t_1, i}, \dots, B_{t_n, i}), \quad g(B_{s_1,j}, \dots, B_{s_m,j})$$ are independent. (It also holds for countably many indices once you define what that means.) 
Observe that this is much stronger than merely saying that $B_{t,i}, B_{t,j}$ are independent for each $t$. So in fact, $B_{t,i}-B_{s,i}$ is independent of $B_{t,j}-B_{s,j}$, and so your proposed argument is perfectly well justified. The first problem really has very little to do with Brownian motion. You know that $B_{t,i}-B_{s,i}$ has a normal distribution with mean 0 and variance $t-s$, so this is really just asking you to compute the fourth moment of a normal random variable, i.e. $E[(\sqrt{t-s} Z)^4] = (t-s)^2 E[Z^4]$ where $Z \sim N(0,1)$. There are several ways to verify that $E[Z^4]=3$: write down an integral involving the Gaussian density and integrate by parts four times; use the moment generating function or characteristic function; or look it up on Wikipedia.
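Both identities are easy to confirm by Monte Carlo, since the increments are independent $N(0, t-s)$ variables (the sample size and times below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, N = 0.3, 1.0, 5_000_000

# Increments of two independent components of Brownian motion: N(0, t - s)
di = rng.normal(0.0, np.sqrt(t - s), N)
dj = rng.normal(0.0, np.sqrt(t - s), N)

print(np.mean(di**4), 3 * (t - s)**2)        # ~1.47 vs 1.47
print(np.mean(di**2 * dj**2), (t - s)**2)    # ~0.49 vs 0.49
```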
H: Meromorphic function with a removable singularity and a few poles Here is the question: Let $f$ be a meromorphic function on $\mathbb{C}$, having poles at the following three points: $z=5$, $z=1+3i$ and $z=3-4i$. Also, let $f$ have one removable singularity at $z=3$. For the following, find the value or explain why not enough information is given to find the quantity. a) $\lim_{z\rightarrow5}|f(z)|$ b) $\lim_{z\rightarrow1+2i}(z-1-2i)f(z)$ c) $\lim_{z\rightarrow\infty}|f(z)|$ My thoughts: I was wondering if I would be able to write $f$ as a rational function, such as $f(z)=\frac{(z-3)^m}{(z-3)^m(z-5)^n(z-(1+3i))^k(z-(3-4i))^l}$ for $m, n, k, l\in\mathbb{Z^{+}}$? This just doesn't feel right, because I am not really sure how this would help me do $(a)$ or $(b)$. For $(c)$, wouldn't the limit just be $0$ (I suppose if I can write $f$ in the above form). My other idea was to try and write $f$ as a Laurent series, but I am not quite sure how to "give" $f$ the removable singularity as well as all the poles. I suppose I could try and think of a Laurent series that would satisfy the conditions, prove that it satisfies the conditions, and then try and find $(a), (b), (c)$, but I am not sure if this would be the most efficient way, or if there would be another way. I appreciate any ideas, thoughts, etc. Thank you! AI: Consider the fact that if $f:\mathbb C\backslash S\to\mathbb C$ is a function satisfying your given conditions, where $S$ is the set containing the four given singularities, and $g$ an entire function, then $\tilde f:=f+g$ also satisfies the given conditions, while possibly resulting in different limits. I recommend not to try to find a general term describing your function, and instead relying on general facts about meromorphic functions and their singularities. a) There's a theorem which says that an isolated singularity $z_0$ is a pole iff $\lim_{z\to z_0}\vert f(z)\vert=\infty$. b) I think you're missing a $z$. And maybe you wanted to make it 3, not 2? Is it supposed to be $\lim_{z\to(1+3\mathrm i)}(z-1-3\mathrm i)f(z)$? If yes, remember the definition of the order of a pole, and consider that you don't know the order. c) You found an example where this limit is $0$ (though you should really remove the numerator in your fraction for it to work). Now add an arbitrary entire function as I did above. Is the limit still $0$?
H: Calculating lim, why not infinity? Why is the limit of the following expression $1$, and not infinity, when $n$ goes to infinity? Why do I believe so? The greatest power in the numerator is -2.5 and in the denominator it's 1/17 -5; since the greatest power is in the numerator, the limit should be infinity. AI: When $n \to \infty$, we need to find the largest power of $n$ in the numerator and denominator. In this case, the largest power in both cases is 0 since $1 = n^0$ and all the other exponents are negative. Since the largest powers are equal, we take the ratio of the coefficients, which is 1/1 = 1.
H: Copula with a certain correlation What does it mean for values to be "drawn from a normal copula with correlation $\rho\in [0, 1]$"? Is that a normal distribution with a covariance matrix whose entries are uniform random in $[0, 1]$? Would sampling from a multivariate normal distribution using a probability package (like numpy) be equivalent to "drawn from a normal copula with correlation $\rho\in [0, 1]$"? AI: It certainly does not mean what you suggest. The number $\rho\in[0,1]$ is not random. It means you have two random variables $U,V$, each marginally uniformly distributed, such that $X = \Phi_{\mu,\sigma^2}^{-1}(U)$ and $Y=\Phi_{\nu,\tau^2}^{-1}(V)$ are normally distributed, the joint distribution of $U,V$ is such that $\operatorname{corr}(X,Y)=\rho$, and every constant (i.e. non-random) linear combination of $X$ and $Y$ is normally distributed. You can also do this with more than two random variables, in which case the correlation between any two of them is $\rho.$ The joint distribution of $(U,V)$ is the copula. Generally, a copula is a multivariate distribution whose marginals are $\operatorname{Uniform}(0,1).$
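For concreteness, here is one standard way to draw from a bivariate Gaussian copula with parameter $\rho$, sketched with numpy/scipy: sample a correlated bivariate normal and push each coordinate through the standard normal CDF.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
rho = 0.7
cov = [[1.0, rho],
       [rho, 1.0]]

Z = rng.multivariate_normal([0.0, 0.0], cov, size=100_000)  # (X, Y) with corr(X, Y) = rho
UV = norm.cdf(Z)                                            # each column marginally Uniform(0, 1)

print(np.corrcoef(Z.T)[0, 1])    # ~0.7: the correlation parameter lives on the normal scale
```

Note that merely sampling the multivariate normal is not yet sampling the copula; the CDF step is what produces the uniform marginals.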
H: Convergent sum of an Harmonic-like series Let $b$ be an integer greater than $1$ and let $d$ be a digit $0\leq d<b$. Let $A$ denote the set of all $k\in \mathbb{N}$ such that the $b$-adic expansion of $k$ fails to contain the digit $d$. If $a_k=1/k$ for $k\in A$ and $a_k=0$ otherwise, prove $\sum_{k=1}^\infty a_k<\infty$. I would appreciate if someone can give a hint on how to prove this statement. I have done the following but I can't see a clear relation to bound the sum from above. The $b$-adic expansion of a natural number $n$ is $$ n= \sum_{j=0}^p r_jb^j, $$ where $0\leq r_j< b$ and $r_p\neq 0$. Now, for $p$ fixed, there are $(b-1)^{p+1}$ numbers that fail to contain digit $d$. Moreover they form an increasing sequence as $$ k_1^{(p=0)}<k_2^{(p=0)}<\dots<k_{b-1}^{(p=0)}<k_1^{(p=1)}<\dots <k_{(b-1)^2}^{(p=1)}<\dots $$ This means that the partial sums $S_m = \sum_{k=1}^m a_k$ are strictly increasing. Obviously if $S_m<|M|$ for $M$ fixed, the series would converge by monotone convergence. However, I can't find such $M$. Another option could be to bound the sum as \begin{equation} \begin{split} &\frac{1}{k_1^{(p=0)}}+\frac{1}{k_2^{(p=0)}}+\dots+\frac{1}{k_{b-1}^{(p=0)}}+\frac{1}{k_1^{(p=1)}}+\dots+\frac{1}{k_{(b-1)^2}^{(p=1)}}+\dots \\ &< \frac{b-1}{k_1^{(p=0)}}+\frac{(b-1)^2}{k_1^{(p=1)}}+\dots \\ &= \sum_j \frac{(b-1)^{j+1}}{k_1^{(p=j)}}, \end{split} \end{equation} but here I can't see how the last sum can converge. AI: Let's assume $d \ne 0$. There are $(b-2)(b-1)^{m-1}$ $m$-digit numbers with no $d$'s (as the leading digit is anything but $0$ and $d$), and each is at least $b^{m-1}$, so its reciprocal is at most $b^{1-m}$. The sum of those reciprocals is at most $(b-2)\left(\frac{b-1}{b} \right)^{m-1}$. The sum of this over $m$ from $1$ to $\infty$ is a convergent geometric series, since $\frac{b-1}{b}<1$.
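A brute-force computation illustrates the (very slow) convergence. The sketch below sums $1/k$ over all $k < b^{m}$ whose base-$b$ expansion avoids the digit $d$; for $b=10$, $d=9$ this is the classical Kempner series, whose full sum is known to be about $22.92$.

```python
def partial_sum(b, d, m):
    """Sum of 1/k over 1 <= k < b**m whose base-b digits all avoid d."""
    total = 0.0
    for k in range(1, b**m):
        n, ok = k, True
        while n:
            if n % b == d:   # digit d found: skip this k
                ok = False
                break
            n //= b
        if ok:
            total += 1.0 / k
    return total

for m in range(1, 7):
    print(m, partial_sum(10, 9, m))   # creeps up toward ~22.92
```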