H: Topology exercises - closure, frontier Are my proofs correct? The topology is $\mathbb{R^n}$ Exercises. Prove that a set $A$ is closed iff $Fr(A)\subseteq A$ A set $A$ is closed iff $A=Cl(A)$ For any set, $A$, $Fr(A)$ is closed For any set $A\subseteq \mathbb{R^n}$, $Fr(A)=Fr(\mathbb{R^n}-A)$ For any set $A\subseteq \mathbb{R^n}$, $Fr(A)=Cl(A)\cap Cl(\mathbb{R^n}-A)$ A set $A$ is open iff $A=Int(A)$ Proofs. The closure of a set $A$ is defined by $Cl(A)=A\cup Fr(A)$ Suppose $Fr(A)\subseteq \mathbb{R^n}-A$. As $A$ is closed, $\forall x\in \mathbb{R^n}-A, \exists N_x$ such that $N_x\cap A=\emptyset$. This is true for every $x\in Fr(A)$ (by assumption). But $x\in Fr(A) \implies$ every neighborhood of $x$ intersects $A$ non-trivially; a contradiction. So $Fr(A)\subseteq A$. Conversely, suppose $Fr(A)\subseteq A$ and let $x\notin A$, then $\exists N$ neighborhood of $x$ such that $N\cap A=\emptyset$ thus $A$ is closed. Suppose $A$ is closed, then $Fr(A)\subseteq A \implies Fr(A)\cup A \subseteq A \implies Cl(A)\subseteq A$. As $A\subseteq Cl(A)$ for any $A$, it follows that $A=Cl(A)$. Conversely, suppose $A=Cl(A)=A\cup Fr(A)$, then $Fr(A)\subseteq A$ thus $A$ is closed (by the previous result). As $Fr(Fr(A))\subseteq Fr(A)$, it follows (by 1.) that $Fr(A)$ is closed. Let $x\in Fr(A)$, then $\exists$ neighborhood $N$ such that $N\cap A\neq \emptyset$ and $N\cap (\mathbb{R^n}-A)\neq \emptyset$. As $A=\mathbb{R^n}-(\mathbb{R^n}-A)$, it follows that $N\cap (\mathbb{R^n}-(\mathbb{R^n}-A))\neq \emptyset$ and $N\cap (\mathbb{R^n}-A)\neq \emptyset$ thus $x\in Fr(\mathbb{R^n}-A)$. The converse can be proved similarly. $Cl(A)\cap Cl(\mathbb{R^n}-A) = (A\cup Fr(A))\cap ((\mathbb{R^n}-A)\cup Fr(A))=Fr(A)\cup (A\cap (\mathbb{R^n}-A))=Fr(A)$ Suppose $A=Int(A)$. As $Int(A)$ is a union of open sets it's open thus $A$ is open. Conversely, suppose that $A$ is open, then there is an open neighborhood of every point $x\in A$ such that $x\in N\subseteq A$. As $x\in N$, $x$ is in any union of open sets containing $N$ so $x\in Int(A)$. As $Int(A)\subseteq A$ for any $A$, it follows that $A=Int(A)$ AI: $\newcommand{\fr}{\operatorname{Fr}}\newcommand{\cl}{\operatorname{cl}}$You get off on the wrong foot in the proof of (1): the negation of $\fr(A)\subseteq A$ is $\fr(A)\nsubseteq A$, not the much stronger statement $\fr(A)\subseteq\Bbb R^n\setminus A$. For that direction of the equivalence you can argue as follows: Suppose that $A$ is closed and $x\notin A$; then $x$ has a nbhd $N$ such that $N\cap A=\varnothing$. This clearly implies that $x\notin\fr(A)$, and it follows at once that $\fr(A)\subseteq A$. For (3) I take it that you’ve already shown elsewhere that $\fr(\fr(A))\subseteq\fr(A)$ for any set $A$. In (4) you have the wrong quantifier: if $x\in\fr(A)$, then for every nbhd $N$ of $x$ it’s true that $N\cap A\ne\varnothing\ne N\cap(\Bbb R^n\setminus A)$, and you need this fact for every nbhd of $x$, not for just one, in order for your argument actually to yield the conclusion that $x\in\fr(\Bbb R^n\setminus A)$. The rest looks fine, however.
H: Given $\mathbf{\alpha}$, Find the positive-entried vectors $\mathbf{x}$ maximizing $x_1^{\alpha_1}\dotsb x_n^{\alpha_n}$ A problem from an old Advanced Calculus qualifying exam: "Choose positive real numbers $\alpha_1, \dotsc ,\alpha_n$ such that $\sum_1^n \alpha_i = 1$ and let $f:[0,\infty)^n\to \mathbb{R}:(x_1, \dotsc , x_n)\mapsto x_1^{\alpha_1}x_2^{\alpha_2}\dotsb x_n^{\alpha_n}$. Define also $C:=\{\mathbf{x}\in [0,\infty)^n \mid \sum_1^n \alpha_i x_i =1 \}$. (a) Show that there exists $\mathbf{a}\in [0,\infty)^n$ such that $f(\mathbf{a})=\sup_{x\in C}f(x)$, and $a_i >0$ for all $i$. (b) Find all points such that $f(\mathbf{a})=\sup_{\mathbf{x}\in C}f(\mathbf{x})$. Deduce that for any $x\in C$, $x_1^{\alpha_1}x_2^{\alpha_2}\dotsb x_n^{\alpha_n}\leq 1$ (c) Deduce that for any $x\in [0, \infty)^n$, $x_1^{\alpha_1}x_2^{\alpha_2}\dotsb x_n^{\alpha_n}\leq \sum_i^n \alpha_i x_i$. When does equality hold?" I believe I've solved part (a). The relevant domain of $\mathbf{x}$ is actually compact, even though it may not look like it at first. First, if one or more $\alpha$, say $\alpha_{i_1}$ is zero, we are not concerned about $x_{i_1}$, because it does not contribute to the function, and we may regard the function as over only the other variables. Take $m=\min_{i}\{\alpha_i \mid \alpha_i \neq 0\}$, and we find that the $x_i$ that count must all be less than $1/m$. So it attains its maximum. I have been less successful at parts (b) and (c). Here are some ideas I've been playing with: Cauchy-Schwarz says that $(\sum_1^n \alpha_i x_i)^2 \leq (\sum_{1}^n \alpha_i^2)(\sum_1^n x_i^2)$. Since $\sum_1^n \alpha_i =1$ and $\alpha_i >0$, we have that $\alpha_i^2 \leq \alpha_i$, so $\sum_1^n \alpha_i^2 \leq \sum_1^n \alpha_i$, and putting this with what we had before, we have $(\sum_1^n \alpha_i x_i)^2 \leq \sum_1^n x_i^2$. For $\mathbf{x}\in C$, $(\sum_1^n \alpha_i x_i)^2=1$, so we have that $\|x\|\geq 1\,\,\,\forall \mathbf{x}\in C$. Also, since $\sum_1^n \alpha_i =1$, we have $\min_{i}x_i \leq x_1^{\alpha_1}x_2^{\alpha_2}\dotsb x_n^{\alpha_n} \leq \max_i x_i$. If we consider Lagrange multipliers, we have to have $\nabla f - \lambda \nabla g = 0$, for $f = x_1^{\alpha_1}x_2^{\alpha_2}\dotsb x_n^{\alpha_n}$ and $g = \alpha_1x_1 + \dotsb + \alpha_nx_n$, that is, $(\alpha_1 x_1^{\alpha_1 -1}\dotsb x_n^{\alpha_n}, \dotsc, \alpha_n x_1^{\alpha_1}\dotsb x_n^{\alpha_n -1}) - \lambda \vec{\alpha} = 0$. But that's as far as I've gotten. Any ideas? AI: Hint: for b) you can analyze the expression $\log(x_1^{\alpha_1}x_2^{\alpha_2}\dotsb x_n^{\alpha_n})$. After some simple algebraic work you can apply Jensen's inequality. Then b) implies c), and you can use the equality condition of Jensen's inequality to analyze the equality case.
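Not part of the original exchange, but the inequality in part (c) is the weighted AM-GM inequality, and it is easy to sanity-check numerically. The Python sketch below samples random weights and points and verifies $\prod_i x_i^{\alpha_i} \le \sum_i \alpha_i x_i$, with equality when all the $x_i$ coincide.

```python
import random

def sides(x, alpha):
    """Return (arithmetic side, geometric side) for positive x and weights alpha summing to 1."""
    geo = 1.0
    for xi, ai in zip(x, alpha):
        geo *= xi ** ai
    return sum(ai * xi for xi, ai in zip(x, alpha)), geo

random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(4)]
    alpha = [wi / sum(w) for wi in w]               # normalize the weights to sum to 1
    x = [random.uniform(0.1, 10) for _ in range(4)]
    arith, geo = sides(x, alpha)
    assert geo <= arith + 1e-12                     # part (c)'s inequality

arith, geo = sides([3.0] * 4, [0.25] * 4)           # equality when all x_i are equal
assert abs(arith - geo) < 1e-12
```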
H: Combination of items with and without replacement How do you determine the number of combinations of items if some items can be replaced and some cannot? For example, I have $2$ lists: $X = \{A,B,C\}$ - cannot be replaced $Y = \{1,2,3\}$ - can be replaced How many combinations are there when combining both lists? I know that all of the combinations with replacement is $\dfrac{(n+r-1)!}{r!(n-1)!}$ and without replacement is $\dfrac{n!}{r!(n-r)!}$. How do I combine the two? Thanks in advance for your help. AI: If you want $r$ of the no replacement type, and $s$ of the replacement type, where $r$ and $s$ are given, then just count separately the number of ways to obtain $r$ no replacement objects, and the number of ways to obtain $s$ replacement objects, using the formulas you quoted. Then multiply. If only the total number of items to be picked is specified, like $13$, then calculate separately the number of ways to get $0$ of the no replacement type and $13$ of the replacement type, then $1$ of no replacement and $12$ of replacement, and so on, using the method of the first paragraph, and add up.
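As a concrete illustration of the answer's first paragraph (my addition; the function name is made up), multiply the two counts and check them against brute-force enumeration for the example lists:

```python
from math import comb
from itertools import combinations, combinations_with_replacement

def mixed_count(n_no_repl, r, m_repl, s):
    """Ways to pick r distinct items from n_no_repl and s items (repeats allowed) from m_repl."""
    return comb(n_no_repl, r) * comb(m_repl + s - 1, s)

# brute-force check against the example lists X = {A,B,C}, Y = {1,2,3}, picking 2 of each
X, Y = "ABC", "123"
by_formula = mixed_count(len(X), 2, len(Y), 2)
by_listing = len(list(combinations(X, 2))) * len(list(combinations_with_replacement(Y, 2)))
print(by_formula, by_listing)   # both 3 * 6 = 18
```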
H: A sequence of measurable functions and the sup, lim sup of them So I have stumbled upon the following theorem: Let $\left\{f_n\right\}$ be a sequence of measurable functions. For $x \in X$, put $$ g(x) = \sup \left\{ f_n (x) \mid n \in \mathbb{N} \right\} \\ h(x) = \limsup_{n \to \infty} f_n (x) $$ Then $g$ and $h$ are measurable The proof is as follows: $$ \left\{ x \mid g(x) > a \right\} = \bigcup_{n=1}^{\infty} \left\{ x \mid f_n (x) > a \right\} $$ At least to me, this seems to not be true, so I was wondering if someone could help me understand where I go awry. It seems to me that you could have a sequence of functions $f_n (x) = x + \frac{x}{n} : n \neq 3 , f_3 (x) = 1000$. From the proof $$ g(x) = \sup \left\{ x + \frac{x}{n} \mid n \in \mathbb{N} \right\} = \left\{ \begin{array}{rr} 2x & : x \ge 0 \\ x & : x < 0 \end{array} \right. $$ Now it seems to me that each $f_n (x)$ is measurable in $\mathbb{R}$ however the statement in the proof isn't true here. Put $a = 3$ then $\left\{ x \mid g(x) > 3 \right\} = \left( \frac{3}{2}, \infty \right)$ however $\bigcup_{n=1}^{\infty} \left\{ x \mid f_n (x) > 3 \right\} = (- \infty , \infty )$ because $f_3 (x) > 3 \; \forall \; x \in \mathbb{R}$, so wouldn't this be a counter example to the proof given? Thanks in advance! AI: You have a sequence $\langle f_n\rangle$ and you set $$g(x)=\sup_{n\geqslant 1}\{f_n(x)\}$$ The claim is that $$\left\{ x \mid g(x) > a \right\} = \bigcup_{n=1}^{\infty} \left\{ x \mid f_n (x) > a \right\}$$ First, suppose $x$ is such that $g(x)>a$. This means $$\sup_{n\geqslant 1}\{f_n(x)\}>a$$ This means that there exists $m$ such that $f_m(x)>a^{\dagger}$, so $x\in \{x\mid f_m(x)>a\}$ and thus $x$ is in that union. Conversely, suppose $x$ is in the union. Then for some $m$ we have $f_m(x)>a$, thus $\sup\limits_{n\geqslant 1 }\{f_n(x)\}=g(x)>a$. $\dagger$: Note that if for each $n$ we had $f_n(x)\leq a$ then the $\sup$ would be $\leq a$. Thus, by contraposition, we must have $f_m(x)>a$ for at least one $m$.
H: Nonhomogeneous Linear ODE Method of Solution Question So I have the following differential equation: $$ \frac{dy}{dt}-0.07y=5000 $$ I tried solving it using an integrating factor and ended up getting $y=Ce^{0.07t}-350$. I plugged the ODE into Wolfram Alpha and it seemed to solve the problem as a separable equation, obtaining the result $Ce^{0.07t}-71428.6$. Have I made a mistake in choosing to solve the equation with an integrating factor, and if I am mistaken, then when am I allowed to use this method and why not in this case? AI: Any linear first order differential equation can be solved using an integrating factor. As this is indeed linear and first order, you are certainly on the right track. Your error was multiplying 5000 by 0.07. Indeed, this is exactly what you would do if you were to differentiate $5000e^{-0.07t}$. However, in using the integrating factor method, you are instead supposed to integrate this term.
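For what it's worth, SymPy reproduces the corrected result, with constant term $-5000/0.07 = -500000/7 \approx -71428.6$ (a check added here, not part of the original answer):

```python
import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')

# y' - 0.07*y = 5000, solved symbolically with exact rationals
sol = sp.dsolve(sp.Eq(y(t).diff(t) - sp.Rational(7, 100) * y(t), 5000), y(t))
print(sol)   # y(t) = C1*exp(7*t/100) - 500000/7, and -500000/7 ≈ -71428.6
```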
H: Big $\Omega$ question! Prove $(n-1)(n-2)(n-3)$ is $\Omega(n^3)$ Problem Prove $(n-1)(n-2)(n-3)$ is $\Omega(n^3)$. Attempt @ Solution $f(n) = n^3(1-6/n+11/n^2-6/n^3)$ $g(n) = n^3$ Show that there exists a $C > 0$ and $n_0$ such that $f(n) \ge Cg(n)$ for all $n > n_0$. I tried plugging in different numbers for $n$ that would make $f(n) > n^3$. I found that setting $n = 7$ makes sure that $f(n)$ is greater than $g(n)$. So, is that my answer? Evaluating the expression with $n=7$ to solve for $C$, and setting $n_0$ as $7$? Is that a sufficient proof? Also, Does my constant have to be a Natural number, or can it simply be a Rational number? AI: You will not be able to show that $f(n)\gt g(n)$, because it is in fact smaller. What I would suggest is that if $n\ge 6$, then $n-3\ge \frac{n}{2}$, as are $n-2$ and $n-1$. Thus for $n\ge 6$, we have $f(n)\ge \frac{1}{8}g(n)$. So we can take $C=\frac{1}{8}$. And $C$ certainly does not have to be an integer. In our particular problem, we cannot even find a positive integer $C$ with the desired property. Remark: Dividing by $n^3$ like you did was a good idea, expanding was not. When we divide by $n^3$ we get $$\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\left(1-\frac{3}{n}\right).$$ Now you can take your favourite $n\ge 4$. Let's pick $6$. Then if $n\ge 6$, the above expression is $\ge \frac{5}{6}\cdot \frac{4}{6}\cdot \frac{3}{6}$. We can pick this for our $C$.
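A quick script, added here as a sanity check, confirms both halves of the answer: the witness pair $C=\tfrac18$, $n_0=6$ works, and no $C\ge 1$ can, since $f(n)<n^3$ for every $n$:

```python
def f(n):
    return (n - 1) * (n - 2) * (n - 3)

# the witness pair (C, n0) = (1/8, 6) from the answer
assert all(8 * f(n) >= n**3 for n in range(6, 10_000))

# no C >= 1 can work, since f(n) < n^3 for every n
assert all(f(n) < n**3 for n in range(1, 10_000))
```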
H: Word problem involving equations Two bike riders $X$ and $Y$ both start at 2 PM riding towards each other from $40$ km apart. $X$ rides at $30 \frac{\mathrm{km}}{\mathrm{h}}$, and $Y$ at $20 \frac{\mathrm{km}}{\mathrm{h}}$. If they meet after $t$ hours, find when and where they meet. AI: Let $40 \text{ km}= x + y$, where $x$ is the distance covered by $X$ and $y$ the distance covered by $Y$. So $y=40-x$. From their speeds we can say: $30t=x$ and $20t=y$, so $t=\frac x{30}$ and $t=\frac{40-x}{20}$. Now just rearrange a little: $\frac x{30}=\frac{40-x}{20}$, so $x/3=(40-x)/2$, $2x=120-3x$, $5x=120$, $x=24$ km. Ergo $y=16$ km and the time is $t=0.8$ h, so they meet after $48$ minutes, at 2:48 pm.
H: How can I solve this equation $|x|\hat x= 1991x$ I found this problem in an old exam and I want to know how to do it, since I couldn't at the time; it's in Spanish, so I'll give my translation and the original: Solve this equation $|x|\hat x= 1991x$. Here $|x|$ is the biggest integer less than or equal to $x$, and $\hat x=x-|x|$, i.e., it's the fractional part of $x$. Resuelve la ecuacion $|x|\hat x= 1991x$. Aquí $|x|$ denota al mayor entero menor o igual que $x$, y $\hat x=x-|x|$, es decir, es la parte fraccionaria de $x$. AI: For the sake of simplicity denote $\hat x=x-|x|=a$. Then rewrite the equation as $$(x-a)a=1991x.$$ Find $x$: $$x=\frac{a^2}{a-1991}.$$ We know $0\le a<1$, hence $-1<x\le0$. If $a=0$ we get the trivial solution $x=0$; otherwise $x<0$ and $x-a=\left\lfloor x \right\rfloor=-1$. By solving $$-a=1991(a-1)$$ we get $a=\frac{1991}{1992}$. We know $x-a=-1$, so $x=-\frac{1}{1992}$
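The answer can be confirmed with exact rational arithmetic (my addition): for $x=-\frac{1}{1992}$ the floor is $-1$, the fractional part is $\frac{1991}{1992}$, and both sides of the equation equal $-\frac{1991}{1992}$.

```python
from fractions import Fraction
from math import floor

x = Fraction(-1, 1992)
fl = floor(x)              # floor is exact on Fraction: -1
frac = x - fl              # fractional part: 1991/1992
assert fl * frac == 1991 * x   # both sides equal -1991/1992
```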
H: Figuring out which functions are Big-O of other functions (of a of 9 different functions). Where do I start? Problem I need to arrange the following functions in order, so that each function is big-oh of the next function. Functions Attempt @ Solution Understanding: I don't understand what to do here. My best guess is to determine what each function is Big-Oh of, and then rank the functions in order of magnitude of Big-Oh. Is that a way to start? AI: The problem is essentially to rank the nine functions in order of increasing growth rates. You know, for instance, that $2^n$ grows faster than $n^2$, and hence that $n^2$ is $O(2^n)$; thus, $n^2$ will come somewhere before $2^n$ in your ranking. The first five are already fairly basic functions: you should definitely know the proper order for $n!$, $2^n$, $n^2$, $\sqrt n$, and you should probably know where $\log_2n$ fits into that list. The other four will require some work; in each case it would be helpful to find a simple function $g(n)$ such that the given function is $\Theta\big(g(n)\big)$, i.e., both $O\big(g(n)\big)$ and $\Omega\big(g(n)\big)$. I’ll get you started on one of them and add some hints for the rest. From the formula for the sum of a geometric series you know that $$\sum_{i=0}^n2^i=\frac{2^{n+1}-2^0}{2-1}=2^{n+1}-1\;.$$ This suggests that you should try to show that $\sum_{i=0}^n2^i$ is $\Theta(2^{n+1})$, since for large $n$ that $-1$ is going to be insignificant. You may have to dig it out, but there’s a known formula for the sum of the first $n$ perfect squares; you can use that to work out the growth rate of $\sum_{i=0}^ni^2$. Use the properties of logarithms to simplify $\sum_{i=1}^n\log_2i$; it’s the log base $2$ of what function of $n$? As for the last one, what limit does it approach as $n\to\infty$?
H: Theorem 2.13 in Walter Rudin's Principles of Mathematical Analysis While reading Walter Rudin's Principles of Mathematical Analysis, I ran into the following theorem: Theorem 2.13. Let $A$ be a countable set, and let $B_n$ be the set of all $n$-tuples $\left(a_1,\dots,a_n\right)$, where $a_k\in A$ ($k=1,\dots,n$), and the elements $a_1,\dots,a_n$ need not be distinct. Then $B_n$ is countable. Proof. That $B_1$ is countable is evident since $B_1=A$. (pause) Okay, from the theorem's statement, I can see that $B_1=\left\{a_1:a_1\in A\right\}=A$ (assuming $\left(a_1\right)=a_1$). So far so good. (continue) Suppose $B_{n-1}$ is countable ($n=2,3,4,\dots$). The elements of $B_n$ are of the form $$ \left(b,a\right)\qquad\left(b\in B_{n-1},a\in A\right).\text{ (pause)} $$ This makes sense because $B_n$ differs from $B_{n-1}$ in the sense that $B_n$ has an additional element, $a$. Now, this is what throws me off: (continue) For every fixed $b$, the set of pairs $\left(b,a\right)$ is equivalent to $A$. (pause) I am going to end right here. How is this true? I mean, if I take $b\in B_1$, then $\left(b,a\right)$ is a $2$-tuple, and $A$ has no element that is a $2$-tuple. What am I missing? AI: Fix any $b\in B_{n-1}$, and let $S_b=\{\langle b,a\rangle:a\in A\}$; the assertion is that $S_b$ is equivalent to $A$. By this he means simply that there is a bijection between $S_b$ and $A$, and it’s easy to describe one: the map $$f_b:S_b\to A:\langle b,a\rangle\mapsto a$$ is such a bijection.
H: Doubling Time for certain bacteria Say a culture of bacteria doubles in weight every 24 hours. If it originally weighed 10g, what would be its weight after 18 hours? I know how to calculate half-life but don't know about doubling time. What is the easiest formula to use in order to solve this? AI: Since the initial weight is $10$g, and the weight doubles every $24$ hours, you know that after $t$ hours the weight will have doubled $\frac{t}{24}$ times. Thus, if $w(t)$ is the weight after $t$ hours, we must have $$w(t)=10\cdot2^{t/24}\;.\tag{1}$$ You want $w(18)$, so just substitute $t=18$ into $(1)$. Note that this is exactly the way that half-lives behave, but with a power of $2$ (for doubling) instead of a power of $\frac12$ (for halving). That is, if the half-life were $24$ hours, the weight after $t$ hours would be $$10\left(\frac12\right)^{t/24}\;.$$ Thus, it’s not really anything new. You should be able to work just as well with a tripling time, or the time it takes for a reduction to one-tenth of the weight, or any similar way of giving a rate of geometric growth or decay.
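Plugging $t=18$ into $(1)$ numerically (added for convenience):

```python
w18 = 10 * 2 ** (18 / 24)   # w(t) = 10 * 2^(t/24) at t = 18
print(round(w18, 3))        # ≈ 16.818 g
```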
H: Question from Introduction to Topology by Mendelson I'm self studying Intro to Topology by Mendelson(3rd ed.) right now and I'm stuck on a book problem. In case anyone has the book handy, its problem 2 of chapter 3 section 6. The problem is as follows, Let $O$ be an open subset of a topological space $X$. Prove that a subset $A$ of $O$ is relatively open in $O$ if and only if it is an open subset of $X$. I was able to prove the forward direction by showing that $A$ is an intersection of open sets in $X$ and hence open. I'm having trouble proving the other direction. I know that since $O$ is a subspace of $X$ all the open sets of $O$ take the form $O'\cap O$ for some open set $O'\subset X$. What I'm thinking is to show that $A=A\cap O$ or $A=O'\cap O$. The reason for the first is to show that implicitly $A\subset O$. I also realized that there is an open set $O''\supset O$ since $O=O''\cap O$. Thanks for any help. AI: Don't over-complicate matters! You have it. If $A\subset O$ then $A\cap O = A$. Remember that you're starting with the assumption that $A$ is a subset of $O$.
H: If $X$ is a separable Banach space $ \Rightarrow$ $ X=\overline{\cup_{n=1}^{\infty} X_n}$ where $\dim X_n=n$ Let $X$ be a separable Banach space. How can we prove that there is a sequence of subspaces: $X_1\subset\ X_2 \subset \cdots \subset \ X_n \subset \cdots $ of $X$ such that $\displaystyle X=\overline{\bigcup_{n=1}^{\infty} X_n}$ where $\dim X_n=n$ Any hints would be appreciated. AI: Hints: How do you usually specify an $n$-dimensional space? Recall that the definition of separability gives you a sequence of points or vectors lying in $X$. How can you use these to build the $X_n$? PS: Note that your result requires $X$ to be infinite dimensional because otherwise the sequence cannot exist. Consider the real line.
H: Mathematical Induction --- $a_n=2a_{n-1}-1$ Problem Finish the following mathematical induction showing that $a_0 = 2$ and $a_n = 2a_{n-1}-1$ implies $a_n = 2^n +1$. Basis: Prove that $a_0 = 2^0 + 1$ Proof: $a_0$ = $________$ = $________$ = $2^0 + 1$ Induction: Assume $k \ge 0$ and $a_k = 2^k +1$ Want to show that $a_{k+1} = 2^{k+1} + 1$ In the induction proof, use $a_{k+1} = 2a_k - 1$ Also use $2^{k+1}$ and $2*2^k$. Proof: $k\ge0$ and $a_k = 2^k + 1$ and $a_{k+1} = 2a_k - 1$ implies: $a_{k+1}$ = $________$ = $________$ = $________$ = $2^{k+1} + 1$ Attempt @ Solution I have not attempted to solve this, because I don't know where to begin. In plain-english, what is the problem asking for, and what is an outline of steps I need to do to find a solution to this problem? I'm not asking for the problem to be solved. I don't even understand what the problem is asking for. AI: (As a commenter points out, all you need to do to solve the problem is fill in the blanks. But it seems like you may have a bit of confusion about induction, so I will try and explain that). Induction is a method of proof for the natural numbers. Specifically, say you have a subset $X \subseteq \mathbb{N}$ then if you can show that $0 \in X$ and if $n \in X$ then $n +1 \in X$ then you know $X$ is all the natural numbers, i.e. $X = \mathbb{N}$. In your problem, your $X$ is $$X = \{ n \in \mathbb{N} \mid a_n = 2^n +1 \}$$. So first, we have to show that $0 \in X$. What does this mean? This means that we want to show $a_0 = 2^0 + 1$. (You are given $a_0 = 2$). Next, to prove $n \in X \rightarrow n+1 \in X$ we assume that $n \in X$ in other words, $a_n = 2^n+1$. Now we want to show, using this assumption, that $a_{n+1} = 2^{n+1}+1$. Then, once we have shown this, by induction we know that $X =\mathbb{N}$, which means that $a_n = 2^n +1$ for all $n \in \mathbb{N}$. Some intuition: One way to think of induction is like a line of dominoes falling over. If you set up a bunch of dominoes all you have to do is push the first one over, and make sure that each domino can reach the next in the line. Then you "know" the whole line will fall over.
H: Combinatorics - How many bit strings of length 8 with exactly two 0's are there for which the 0's are not adjacent How many bit strings of length 8 with exactly two 0's are there for which the 0's are not adjacent? I'm having a lot of trouble with this seemingly simple problem. I'm trying to do this with stars and bars. Am I correct or on the right track at least? First, I put two bars in for my two 0's, and then put one 1 in between them to make sure the 0's are not adjacent. |x| This leaves me with 5 more 1's to place in the bit string. example: x|xx|xxx would be one distribution of the 5 remaining 1's. So, this gives me c(3+5-1,3-1), or c(7,2)=21 ways. The only other way I could think to do this problem would be to get the total # of ways to have a bit string of length 8 with exactly two 0's, and subtract the total number of ways to have the 0's be adjacent. So, c(8,2) for the total number of ways to have exactly two 0's, and subtract the total number of ways I can have exactly two 0's in a bit string where the 0's are adjacent, which is 7. This gives me c(8,2)=28 and 28-7=21. This confirms my answer (if either of these methods is correct, that is). Thanks! AI: Both your approaches are right. Perhaps you will find the following approach easier. Put down our $6$ ones, like this: $$1\qquad 1\qquad 1\qquad 1\qquad 1\qquad 1$$ There are $7$ places where $0$'s could go, the $5$ gaps between $1$'s and the $2$ ends. We must choose $2$ of these places, so the answer is $\binom{7}{2}$. This generalizes immediately to say a bit string of length $16$, with $5$ $0$'s no two of which can be adjacent.
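Both counting arguments are easy to confirm by brute force over all $2^8$ strings (a check I've added, not part of the original thread):

```python
from itertools import product

count = 0
for bits in product("01", repeat=8):
    s = "".join(bits)
    if s.count("0") == 2 and "00" not in s:
        count += 1
print(count)   # 21 == C(7, 2)
```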
H: Finding volume of a figure given by relations. I am stuck on the problem of finding the volume of the figure given by $0 \leq z \leq 2, x^{2} + y^{2} \leq 2, x^{2} + y^{2} + z \leq 2x$. I have tried three different coordinate systems, but the problem is the $2x$ in the last relation. Thanks ahead for help! Add. Sorry that I didn't mention this before. But I want a full solution without "brute-force" computation. I did try many different tricks, so if any of those works, it means that I do not know how to work out the details. AI: Denote by $D$ the bottom face of the solid (let the volume of the solid be $V$), which can be expressed as the set $\{(x,y)\in \mathcal{R}^2 \mid x^2+y^2\leq 2\} \cap \{(x,y)\in\mathcal{R}^2 \mid x^2+y^2\leq 2 x\}$. Observing that the solid is symmetric with respect to the plane $y=0$, we have: the total volume of the solid $= 2 \times$ that of the part with $y\geq 0$. Now denote $D_+:=D\cap \{y>0\}$, then we have $$\begin{aligned}V&=2\iint_{D_+} (2x-x^2-y^2)dxdy\\ &=2\int_0^1\bigg(\int_{1-\sqrt{1-y^2}}^{\sqrt{2-y^2}} (2x-x^2-y^2)dx\bigg)dy ~~(D_+ \text{ is $x$-simple})\\ &=2-\pi/4. \end{aligned}$$ Hope this helps.
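The claimed value $2-\pi/4\approx 1.2146$ can be sanity-checked numerically (my addition, using SciPy) with the same $x$-simple limits:

```python
from math import sqrt, pi
from scipy.integrate import dblquad

# V = 2 * ∫_0^1 ∫_{1-√(1-y²)}^{√(2-y²)} (2x - x² - y²) dx dy
val, err = dblquad(lambda x, y: 2 * x - x * x - y * y, 0, 1,
                   lambda y: 1 - sqrt(1 - y * y),
                   lambda y: sqrt(2 - y * y))
print(2 * val, 2 - pi / 4)   # both ≈ 1.2146
```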
H: Converting a distance matrix into Euclidean vector I have a distance matrix between different elements. Now I want to calculate the Euclidean vectors that have resulted in that matrix. Is there any efficient method that can do so? AI: Distance matrices only give the pairwise distances within a finite set of points. The positions of at least three points must be known for the distance matrix to be enough to determine the positions of the others as vectors from the origin. Given three known points, any other point is at the intersection of the circles with centre one of the three points and radius the distance from that point.
H: Differentiability of $|xy|^{\alpha}$ I was asked to prove that $|xy|^{\alpha}$ is differentiable at $(0,0)$ if $\alpha > \frac{1}{2}$. Since both the partial derivatives are zero, I concluded that this function is differentiable if and only if the following holds: $$ \lim\limits_{(x,y)\to (0,0)} \frac{|xy|^{\alpha}}{\sqrt{x^{2} + y^{2}}} = 0$$ However, I am not sure how to show this. What I tried is: $$\frac{|xy|}{x^{2}+y^{2}} \leq \frac{1}{2}$$ Hence the given expression is less than or equal to $$ \frac{1}{\sqrt{2}}|xy|^{\alpha - \frac{1}{2}}$$ Now I conclude that this goes to zero as $x,y$ go to zero? I was just wondering if I am correct in all my steps? Any help would be appreciated. EDIT: Can I use other norms on $\mathbb{R}^{2}$ instead of the Euclidean norm to conclude matters of differentiability? I ask this as it would be easier to work with other norms. AI: Polar co-ordinates. $\displaystyle \lim_{(x,y)\to(0,0)}\frac{|xy|^{\alpha}}{\sqrt{x^2+y^2}}=\lim_{r\to 0}r^{2\alpha-1}|\sin \theta \cos \theta|^{\alpha}$ which $\to 0$ if $\alpha>\frac{1}2$
H: How can I get the $(x,y)$ of a sub-line, which has 0.45 length of the original line, between two points? Given two points $(x_1, y_1)$ and $(x_2, y_2)$, they form a straight line. The target is to find a point $(x_t,y_t)$ between these two points such that the length from $(x_1, y_1)$ to $(x_t,y_t)$ is 0.45 of the distance from $(x_1, y_1)$ to $(x_2, y_2)$. Thanks! AI: Let $p=\langle x_1,y_1\rangle$ and $q=\langle x_2,y_2\rangle$. Points on the line passing through $p$ and $q$ have the form $$p(t)=\Big\langle x_1+t(x_2-x_1),y_1+t(y_2-y_1)\Big\rangle\;.$$ You can easily check that $p(0)=p$ and $p(1)=q$. It’s also easy to check that $p\left(\frac12\right)$ is the midpoint of the segment $\overline{pq}$. More generally, if $0\le t\le 1$, then $p(t)$ is the point on $\overline{pq}$ that is $t$ fraction of the way from $p$ to $q$. This is a very useful way of describing the line segment $\overline{pq}$. The points $p(t)$ with $0\le t\le 1$ are said to be convex combinations of $p$ and $q$ and are often written in the following equivalent form: $$p(t)=\Big\langle(1-t)x_1+tx_2,(1-t)y_1+ty_2\Big\rangle=(1-t)p+tq\;.$$ If we allow $t$ to range over all of $\Bbb R$ instead of just $[0,1]$, we get the entire line through $p$ and $q$. The points on the line that are on the opposite side of $p$ from $q$ are those with $t<0$; the points that are on the opposite side of $q$ from $p$ are those with $t>1$.
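In code, the convex-combination formula is a two-liner; the helper below (my naming) returns $p(t)$, so $t=0.45$ answers the original question:

```python
def point_at_fraction(p, q, t):
    """Point t of the way from p to q along segment pq (0 <= t <= 1)."""
    (x1, y1), (x2, y2) = p, q
    return ((1 - t) * x1 + t * x2, (1 - t) * y1 + t * y2)

print(point_at_fraction((0, 0), (10, 4), 0.45))   # (4.5, 1.8)
```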
H: Verifying whether a map is a polynomial ring automorphism On pg.1, this article talks about an automorphism $f:R[x_{1},x_{2}]\to R[x_{1},x_{2}]$ ($R$ is a ring) defined by $$f(a)=a, \forall a\in R$$ $$f(x_{1})=x_{1}+x_{2}$$ $$f(x_{2})=x_{2}$$ An automorphism as described here is an isomorphism, which means it has to be surjective. But is this mapping really surjective? Which element is mapped to $x_{1}x_{2}$, for instance? Thanks in advance! AI: It suffices to show that each of $R$, $x_1$ and $x_2$ lie in the image of $f$ (do you see why?). Clearly $R$ does, since $f(a) = a$ for every $a \in R$. On the other hand, we see that $f(x_2) = x_2$ and $x_1 = f(x_1) - f(x_2) = f(x_1 - x_2)$. Hence, the map is surjective. Now that we have pre-images for $x_1$ and $x_2$, do you see how to get a pre-image for $x_1 x_2$? (Do keep in mind, though, that one also must check that $f$ is injective).
H: Basic Set Theory: Existence of Three Specific Sets Do there exist sets $A$, $B$ and $C$ such that $A\cap B \neq \emptyset$, $A\cap C = \emptyset$ and $(A\cap B)\setminus C = \emptyset$? AI: No. We argue by contradiction. Suppose instead that these sets really did exist. Then since $A \cap B \neq \emptyset$, we know that there exists some $x \in A \cap B$ so that $x\in A$ and $x\in B$. There are exactly two cases to consider. Case 1: Suppose that $x\in C$. Then since $x\in A$, we know that $x\in A\cap C$. But this contradicts the fact that $A \cap C = \emptyset$. Case 2: Suppose that $x\notin C$. Then since $x \in A \cap B$, we know that $x\in (A \cap B) \setminus C$. But this contradicts the fact that $(A \cap B) \setminus C = \emptyset$. In either case, we derived a contradiction. Thus, no such sets exist, as desired.
H: $H(\kappa)$-absoluteness of a formula Let $\varphi(x,y)$ be an $\in$-formula which is absolute between transitive models of ZF minus powerset axiom. Then $\exists x\, \varphi(x,y)$ is $H(\kappa)$-absolute, where $H(\kappa)$ is the set $\{x\, |\, card(TC(\{x\}))<\kappa\}$. $\kappa$ is uncountable regular. This is a homework question and I would prefer a hint instead of a solution. AI: Assuming ZFC in $V$. Since $\kappa$ is uncountable regular, $H(\kappa)$ models all axioms of ZFC except powerset. (*) The relativization of $\exists x \, \varphi(x,y)$ to $H(\kappa)$ is $\exists x\in H(\kappa) \,\varphi(x,y)$, $\varphi$ is absolute between $V$ and $H(\kappa)$ because of (*). So it needs to be shown that, if for some $y\in H(\kappa)$ there is $x\in V$ with $\varphi(x,y)$, then there is $x\in H(\kappa)$ with $\varphi(x,y)$. For $y\in H(\kappa)$, let $T_y:=TC(\{y\})\subseteq H(\kappa)$ with $|T_y|<\kappa$. Let $\kappa'\geq \kappa$ regular be so that if there is $x\in V$ with $\varphi(x,y)$, there is $x\in H(\kappa')$ with $\varphi(x,y)$. Let $S_y$ be a Skolem hull of $T_y$ in $H(\kappa')$ and let $M_y$ be the transitive collapse of $S_y$, with $\pi:S_y\to M_y$ being a $\in\!\!-\!\!\in$ isomorphism. $|S_y|<\kappa$ and $|M_y|<\kappa$. Assume there is a $x'\in V$ with $\varphi(x',y)$. Then there is $x\in S_y$ with $\varphi(x,y)$. Since $S_y$ is an elementary substructure of $H(\kappa')$, it is a model of ZF without powerset axiom and $\varphi^{S_y}(x,y)$ holds, because $\varphi$ is absolute. Because $\pi$ is a isomorphism and its restriction to the transitive set $T_y$ is the identity, $\varphi^{M_y}(\pi(x),y)$ holds. Because $M_y$ is isomorphic to $S_y$, ZF without powerset is true in $M_y$ and therefore $\varphi(\pi(x),y)$. Now $TC(\{M_y\})=\{M_y\}\cup M_y$ has cardinality less than $\kappa$, so $M_y\in H(\kappa)$. Because $H(\kappa)$ is transitive and $\pi(x)\in M_y$, $\pi(x)\in H(\kappa)$.
H: Intro to Topology Mendelson I'm self studying intro to topology by Mendelson and I'm stuck on a book problem. The problem is, Let $Y$ be a subspace of $X$ and let $A\subset Y$. Denote by $\operatorname{Int}_X(A)$ the interior of $A$ in the topological space $X$, and by $\operatorname{Int}_Y(A)$ the interior of $A$ in the topological space $Y$. Show that $\operatorname{Int}_X(A)\subset \operatorname{Int}_Y(A)$. So I see that $A\subset Y\subset X$ and so the $\operatorname{Int}(A)\subset X$. What I don't understand is how the interior of $A$ can vary as you change topological spaces, especially in the case where one is contained in the other. Also, when we're looking at $A\subset Y$, $A=O\cap Y$ for some open set $O\subset X$, but if we are looking at $A$ in the topological space $X$, does it still have a similar form? Any hints toward how to approach this would be much appreciated. AI: $\newcommand{\int}{\operatorname{int}}$Let $X=\Bbb R$ with the usual topology, and let $Y=\Bbb Z$. Let $A=\{0\}$. Then $\int_XA=\int_{\Bbb R}\{0\}=\varnothing$, but $\int_YA=\int_{\Bbb Z}\{0\}=\{0\}$. Here, then, is an example in which $\int_XA\subsetneqq\int_YA$. If we take $A=Y$, then $\int_YA=\int_YY=Y$, since $Y$ is certainly an open subset of itself. But if $Y$ is not an open subset of $X$, then $\int_XY\ne Y$, and again interiors with respect to $X$ and $Y$ are different. For instance, we might take $X=\Bbb R$ and $A=Y=\Bbb Q$: then $\int_XA=\int_{\Bbb R}\Bbb Q=\varnothing$, but $\int_YA=\int_{\Bbb Q}\Bbb Q=\Bbb Q$. HINT: To show that in general we have $\int_XA\subseteq\int_YA$, let $x\in\int_XA$. This means that there is a $U\subseteq X$ such that $x\in U\subseteq A$, and $U$ is open in $X$. Let $V=U\cap Y$; $V$ is open in $Y$. Use $V$ to show that $x\in\int_YA$. Since $x$ was an arbitrary element of $\int_XA$, you’ll then be entitled to conclude that $\int_XA\subseteq\int_YA$.
H: Solving differential equation $x^2y''-xy'+y=0, x>0$ with non-constant coefficients using characteristic equation? Whenever you deal with non-constant coefficients you usually use Laplace transform to solve a given differential equation, at least that's how how I learned it. But how would you solve the equation using the characteristic equation? I've got $x^2r^2-xr+1=0$, but not sure where to go from here? I am asked to obtain one solution from the equation. AI: Problem Version 1: We are given: $$\tag 1 x^2y''+y=0, ~~x \gt 0$$ This is a Euler-Cauchy type DEQ. We can let $y = x^m$, so we have: $y'(x) = mx^{m-1}, ~~y''(x) = m(m-1)x^{m-2}$. Substituting this back into $(1)$, yields: $x^2y''+ y = x^2(m(m-1)x^{m-2}) + x^m = x^m(m^2 - m + 1) = 0$. So, we have a characteristic equation: $$m^2 - m + 1 \rightarrow m_{1,2} = \dfrac{1}{2} \pm \dfrac{i \sqrt{3}}{2}$$ Now, because we are given $x \gt 0$ (if not, all the $x$ terms would have absolute values), we can write: $$\displaystyle y(x) = y_1(x) + y_2(x) = c_1 x^{m_1} + c_2 x^{m_2} = c_1 x^{\large \frac{1}{2} + \frac{i \sqrt{3}}{2}} + c_2 x^{\large \frac{1}{2} - \frac{i \sqrt{3}}{2}}$$ Using Euler's identity, some algebra, and the fact that $x \gt 0$, we can write this as: $$y(x) = c_1 \sqrt{x} \cos \left(\frac{\sqrt{3}}{2} \ln x\right) + c_2 \sqrt{x} \sin\left(\frac{\sqrt{3}}{2} \ln x\right)$$ Problem Version 2: $$\tag 2 x^2y''-x y'+ y = 0, x \gt 0$$ This requires the use of the Frobenius method to find a solution near $x = 0$. I will assume you are studying this, so I am going to map it out for you and you fill in the missing details. Here, we want the form $y'' + P(x) y' + Q(x) y = 0$, so: Dividing both sides by $x^2$, we get $P(x) = -\dfrac{1}{x}, ~ Q(x) = \dfrac{1}{x^2}$ So, $x = 0$ is a singular point and the method of Frobenius is usable. Next, to use this method, we substitute $y$ and its derivatives from: $\displaystyle y = \sum_{n=0}^\infty a_nx^{\lambda+n}$ $y' = \lambda a_0 x^{\lambda - 1} + \ldots (\lambda+n+1)a_{n+1}x^{\lambda+n}+\ldots$ $y'' = \lambda(\lambda-1)a_0x^{\lambda-2} + (\lambda + n + 1)(\lambda+n)a_{n+1}x^{\lambda + n -1}+\ldots$ We substitute these into $(2)$, collect like terms and set them equal to zero. This gives us: $$x^{\lambda}(\lambda-1)^2a_0 + x^{\lambda + 1} \lambda^2 a_1 + \ldots x^{\lambda + n}\left((\lambda + n)^2 - 2(\lambda + n) + 1\right)a_n + \ldots = 0$$ From this, we get: $\tag 3 (\lambda - 1)^2 a_0 = 0$ and generally $\tag 4 \left((\lambda + n)^2 - 2(\lambda + n) +1\right)a_n = 0$ The first expression is called the indicial equation and it gives us: $(\lambda-1)^2 = 0$, which has a double root $\lambda_{1,2} = 1$, which we substitute into the second equation, yielding $n^2 a_n = 0$, which implies $a_n = 0, ~ n \ge 1$, thus: $$y_1(x) = a_0 x$$ Now, we want a second linearly independent solution, but the indicial equations roots are equal. This complicates matters! To find a second family of solutions, we first solve for $a_n$ in terms of $\lambda$, where $\lambda$ is thought of as an independent variable in $(4)$. 
We obtain $a_n = 0$ for $n \ge 1$ and then substitute back into the original $y$ we used, with $\lambda$ as the independent variable, so: $$y = y(\lambda, x) = a_0x^\lambda$$ Another family of solutions to $(2)$ can be obtained by: $$y_2 = \dfrac{\partial y(\lambda, x)}{\partial \lambda} ~\text{evaluated at}~ \lambda = 1$$ Since (recall that we are given $x \gt 0$, else we would need absolute values): $y = a_0x^\lambda = a_0 e^{\lambda \ln x}$ $\dfrac{\partial y(\lambda, x)}{\partial \lambda} = a_0 \ln x e^{\lambda \ln x} = a_0 x^\lambda \ln x$, and evaluated at $\lambda = 1$, this yields: $a_0 x \ln x = y_1(x) \ln x$ This gives us the second solution as $y_2(x) = a_0 x \ln x$. Putting this together, our solution is (test this by substituting it into the original DEQ and seeing that it satisfies it): $$y(x) = y_1(x) + y_2(x) = c_1 x + c_2 x \ln x$$ Lastly, it is worth noting that Wolfram Alpha finds a solution using modified Bessel and the Gamma functions.
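As an independent check (added here, not part of the original answer), SymPy recognizes the Euler-Cauchy equation of Version 2 and returns exactly $c_1 x + c_2 x \ln x$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')

# Version 2: x^2 y'' - x y' + y = 0
ode = sp.Eq(x**2 * y(x).diff(x, 2) - x * y(x).diff(x) + y(x), 0)
print(sp.dsolve(ode))   # expected: y(x) = x*(C1 + C2*log(x))
```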
H: Prove $f : A\rightarrow B, g: B\rightarrow C$ , and $g\circ f: A \overset{1-1}{\rightarrow}C$, then $f:A\overset{1-1}{\rightarrow}B$ Could anyone please explain how to approach this problem, I'm honestly having a hard time figuring out where to start the problem. I know that I have to show that $\forall x,y\in A$ , if $f(x)=f(y)$, then $x=y$. Therefore, my educated guess would be to start by letting $x,y\in A$ s.t. $f(x)=f(y)$ AI: Choose $x, y \in A$ subject to $f(x) = f(y)$. Take $g$ on both sides to get $$g(f(x)) = g(f(y))$$ Writing this as composition gives $$(g\circ f)(x) = (g \circ f)(y)$$ What can you now conclude, knowing that $g\circ f$ is 1-1?
H: Finding original amount in half-life problem Say the half-life of an element is 1590 years. If 10g of the element is left after 1000 years, how much was there originally? AI: Since $$\text{Amount remaining} =\text{Original Amount} \times \bigg(\frac{1}{2}\bigg)^{\text{number of half lives}} $$ solve for $X$ in the equation $$10 = X \times \bigg(\frac{1}{2}\bigg)^{\frac{1000}{1590}}$$
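Numerically (added for convenience):

```python
X = 10 / (1 / 2) ** (1000 / 1590)   # solve 10 = X * (1/2)^(1000/1590)
print(round(X, 2))                  # ≈ 15.46 g originally
```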
H: Calculating the number of times a digit is written when given two numbers My homework asks me the following: If a student writes the integers from 5 to 305 inclusive by hand, how many times will she write the digit 5? I started out by writing every number that contains 5 and I got 31, but 31 is not among the answers possible: 5, 15, 25, 35, 45, 55, 65, 75, 85, 95, 105, 115, 125, 135, 145, 155, 165, 175, 185, 195, 205, 215, 225, 235, 245, 255, 265, 275, 285, 295, 305 I counted 55, 155, and 255 as two each since there are two occurrences of the digit 5 in each. I can't figure out what I'm doing wrong. In addition, suppose I were given the numbers 1 and 100,000 - writing them out isn't efficient, and I would assume there's a formula for this but I can't figure that out either. AI: Count 000 up to 299. Of the 300 unit digits, $\frac1{10}$ are 5. Of the 300 tens digits, $\frac1{10}$ are 5. None of the hundreds digits are 5. Adding the one in 305, I count 61=30+30+1 in the integers from 5 to 305.
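The count of $61$ is easy to confirm by brute force (a check added here):

```python
print(sum(str(n).count("5") for n in range(5, 306)))   # 61
```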
H: How can I calculate this exponential growth? I'm reading the book "Singularity is near", and there is a passage where the author says: "It takes 100 years to achieve this, with current rate of progress, but because we're doubling the rate of progress every decade, we'll achieve a progress of century in 25 years". Can you please help me with the calculation, i.e., the 25 years? AI: I want to eat 100 Mars bars. At my current rate of 1 Mars bar a year, it will take me 100 years. But if I start eating 2 a year after 10 years, and 4 a year after 20 years, then in 25 years I'll eat $$10\times1+10\times2+5\times4=50$$ Seems to me it would actually take about 34 years: $$10\times1+10\times2+10\times4+4\times8=102$$ But this is assuming the increase is discrete, happening only at the end of each 10-year period. Most likely, the author has a continuous model in mind, and one needs to perform an integration rather than an addition. How are you on integral calculus?
H: Fibonacci Cubes: $F_n^3 + F_{n+1}^3 - F_{n-1}^3 =F_{3n}$ Prove $$F_n^3 + F_{n+1}^3 - F_{n-1}^3 =F_{3n}$$ I've tried induction, either it's just very long or a neat trick is required in the inductive step but for some odd reason it's not working out. Ideally I would like any suggestions for the inductive proof. AI: Here you'll sort of see my thought process; hopefully it helps. First you need to define the Fibonacci sequence as $F_1 = 1, F_2 = 1, F_{n+1} = F_{n} + F_{n-1}$ for this to hold. If you somehow want to use induction, you need to write $F_{n+1}$ with the recurrence relation, whence $S = F_{n}^3 - F_{n-1}^3 + (F_{n}+F_{n-1})^3 = F_{n}^3 - F_{n-1}^3 + 3F_n^2 F_{n-1} + 3F_n F_{n-1}^2 + F_{n}^3 + F_{n-1}^3.$ Writing $-F_{n-2} = F_{n-1} - F_{n}$ gives $-F_{n-2}^3 = F_{n-1}^3 - F_{n}^3 + 3F_{n}^2F_{n-1} - 3F_n F_{n-1}^2$ and thus, $S = F_n^3 + F_{n-1}^3 - F_{n-2}^3 + 2(F_n^3 - F_{n-1}^3 + 3F_n F_{n-1}^2) = F_{3n-3}+2(F_n^3 - F_{n-1}^3 + 3F_n F_{n-1}^2).$ To make it clear what exactly we need, if $S = F_{3n},$ then $S = F_{3n-1} + F_{3n-2} = 2F_{3n-2} + F_{3n-3},$ so in fact we want to prove that $F_{n}^3 + 3F_{n}F_{n-1}^2 - F_{n-1}^3 = F_{3n-2}.$ Looking back at our first equation for $S,$ this means we also need $F_{n}^3 + 3F_{n}^2F_{n-1} + F_{n-1}^3 = F_{3n-1}.$ Claim: $F_{n}^3 + 3F_{n}F_{n-1}^2 - F_{n-1}^3 = F_{3n-2}$ and $F_{n}^3 + 3F_{n}^2F_{n-1} + F_{n-1}^3 = F_{3n-1}.$ Proof: The base cases are easy to check. We proceed by double induction. That is, assume the claim holds for some $n.$ Then for $n+1,$ we have $\begin{align*} F_{n+1}^3 + 3F_{n+1}F_{n}^2 - F_{n}^3& = (F_n + F_{n-1})^3 + 3F_n^3 + 3F_n^2 F_{n-1} - F_n^3 \\&= \underbrace{F_{n}^3 + 3F_{n}^2 F_{n-1}} + 3F_{n}F_{n-1}^2 + \underbrace{F_{n-1}^3} + 2F_{n}^3 + 3F_{n}^2F_{n-1} \\&= F_{3n-1} + 2F_{n}^3 + 3F_{n}^2F_{n-1} + 3F_{n}F^2_{n-1}\\ &= F_{3n-1} + (F_n^3 + 3F_{n}^2F_{n-1} + F_{n-1}^3) + (F_{n}^3+3F_{n}F_{n-1}^2 - F_{n-1}^3)\\ &=F_{3n-1} + F_{3n-1} + F_{3n-2} = F_{3n-1} + F_{3n} = F_{3n+1} = F_{3(n+1) - 2}. \end{align*}$ Similarly, one shows that $F_{n+1}^3 + 3F_{n+1}^2 F_n + F_{n}^3 = F_{3(n+1) - 1},$ whence we are done. tl;dr: Abuse the Fibonacci recurrence relation and the inductive hypotheses.
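A quick numerical check of the main identity and of the two auxiliary claims used in the double induction (my addition):

```python
# Fibonacci with F_1 = F_2 = 1, indexable up to F_60
F = [0, 1, 1]
while len(F) < 61:
    F.append(F[-1] + F[-2])

for n in range(2, 20):
    assert F[n]**3 + F[n+1]**3 - F[n-1]**3 == F[3*n]               # main identity
    assert F[n]**3 + 3*F[n]*F[n-1]**2 - F[n-1]**3 == F[3*n - 2]    # first claim
    assert F[n]**3 + 3*F[n]**2*F[n-1] + F[n-1]**3 == F[3*n - 1]    # second claim
```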
H: Algebra has me hitting a wall Math is not my strong suit. As far as I can tell, this is what I'm looking to solve. A+B=C and B=C*D If one knows what A and D equal, can one determine the value of B and C? So far, I've tried: A+B=C A+C*D=C C*D=C-A C*D-C=-A A=C-C*D And then I get stuck. Also: A+B=C B=C-A C*D=C-A C=(C-A)/D 1=((C-A)/D)/C Stuck again. Is this doable? AI: From eqn. 1 put the value of $C$ in eqn. 2 and you get \begin{equation} \begin{split} B=&(A+B)D \\ \Rightarrow \ B-BD=&AD\\ \Rightarrow B=\frac{AD}{1-D} \end{split} \end{equation}. The last step follows by taking the $BD$ in RHS to LHS and then taking some common factor. Since you know $A,\ D$, so you know B. Now put this value of $B$ in eqn. 1 to get $$C=A+\frac{AD}{1-D}=\frac{A}{1-D}$$
H: How to solve inequalities with one variable by number line I can solve simple types of inequalities by number line. For example, if I want to solve $x>6$ by number line, then I have to plot the solution on the number line as described in the following image. How can I solve inequalities with one variable by number line? Suppose I have the following inequality: $$x^2-5x+6<0$$ Remember, I have to solve this inequality by number line. Suppose I have applied the following procedure to solve this: the solution to the above inequality is $2<x<3$. Then I have plotted the solution $2<x<3$ on the number line. This solution is not acceptable because I have to solve the above inequality by number line. Can you guys please help me to solve this problem? AI: To start with I would talk to the teacher, asking why certain solutions are not accepted. For the inequality $$x^2-5x+6 <0\tag{1}$$ it is easier to factorise $x^2-5x+6$ first. That is, we look for two factors $A$ and $B$ such that $x^2-5x+6=A\cdot B$, because solving $$A\cdot B <0\tag{2}$$ is easy: (2) can only happen when both $A<0$ and $B>0$, or both $A>0$ and $B<0$. Now, to factorise $x^2-5x+6$ we first complete the square \begin{eqnarray}x^2-5x+6& =& \left(x - \frac{5}{2}\right)^2 -\left(\frac{5}{2}\right)^2+6\\ &=& \left(x - \frac{5}{2}\right)^2 -\frac{25}{4}+6\\ &=& \left(x - \frac{5}{2}\right)^2 -\frac{25}{4}+\frac{6\cdot4}{4}\\ &=& \left(x - \frac{5}{2}\right)^2 -\frac{25}{4}+\frac{24}{4}\\ &=& \left(x - \frac{5}{2}\right)^2 -\frac{1}{4} \end{eqnarray} and use the conjugation rule $a^2-b^2 = (a+b)(a-b)$, so that \begin{eqnarray} x^2-5x+6 &=& \left(x - \frac{5}{2}\right)^2 -\frac{1}{4} \\&=& \left(x - \frac{5}{2} +\frac12\right)\left(x - \frac{5}{2} -\frac12\right)\\ &=& \left(x - 2\right)\left(x - 3\right) \end{eqnarray} Now, for (1) we get the inequalities (or rather systems of inequalities) $$\left(x - 2\right)<0 \text{ and }\left(x - 3\right)>0\tag{3}$$ and $$\left(x - 2\right)>0 \text{ and }\left(x - 3\right)<0\tag{4}$$ which are linear and can be "solved using number line" (or by the algebraic method of adding a number to both sides of the inequalities). Note that (3) has no solution, while (4) is the line segment you found.
H: Calculate $\deg(f)$ According to Guillemin and Pollack, Differential Topology, page 109, when $f: X \to Y$ is appropriate for intersection theory ($X,Y$ are boundaryless oriented manifolds, $X$ is compact) and $Y$ is connected and has the same dimension as $X$, we define the degree of an arbitrary smooth map $f: X \to Y$ to be the intersection number of $f$ with any point $y$, $\deg(f) = I(f,\{y\})$. Notice that in order to calculate $\deg(f)$, one simply selects any regular value $y$ and counts the preimage points $\{x:f(x) = y\}$, except that a point $x$ makes a contribution of $+1$ or $-1$ to the sum, depending on whether the isomorphism $df_x: T_x(X) \to T_y(Y)$ preserves or reverses orientation. So I am confused here - what are the points making a contribution of $+1$ or $-1$ to the sum, and why do we want to exclude them? Thank you~ AI: They don't want to exclude anything. They are trying to say that you have to count the points with sign. That is, they take the inverse image. Enumerate it $\{x_1,x_2,\dots,x_n\}$. Here the author says that you can't just count them and say that the degree is $n$. I now elaborate what the author tries to say this way: if $df_{x_i}$ preserves orientation, set $m_i=1$, else set $m_i=-1$. Now add the $m_i$s to get the degree.
H: For what natural numbers is $n^3 < 2^n$? Prove by induction Problem For what natural numbers is $n^3 < 2^n$? Attempt @ Solution For $n=1$, $1 < 2$ Suppose $n^3 < 2^n$ for some $n = k \ge 1$ It looks like the inequality is true for $n = 0$, $n = 1$ and $n\ge10$ But, how can I prove this through induction? AI: You need to gather more data first: $$\begin{array}{rcc} n:&1&2&3&4&5&6&7&8&9&10&11\\ n^3:&1&8&27&64&125&216&343&512&729&1000&1331\\ 2^n:&2&4&8&16&32&64&128&256&512&1024&2048 \end{array}$$ Notice that $n^3$ is not less than $2^n$ for $n=2,3,\ldots,9$; the fact that $1^3<2^1$ is an isolated success. The simplest reasonable guess at this point is that $n^3<2^n$ if $n\ge 10$. Thus, the basis step for your induction will be the calculation that $10^3<2^{10}$. For your induction step you’ll assume as induction hypothesis that $k^3<2^k$ for some integer $k\ge 10$. Your goal, then, will be to show that $(k+1)^3<2^{k+1}$. Note that $$(k+1)^3=k^3\cdot\frac{(k+1)^3}{k^3}=k^3\left(\frac{k+1}k\right)^3\;.$$ Since $k\ge 10$, $\frac{k+1}k=1+\frac1k\le 1+\frac1{10}=\frac{11}{10}$, and therefore $$\left(\frac{k+1}k\right)^3\le\left(\frac{11}{10}\right)^3=\frac{1331}{1000}<2\;,$$ and therefore $(k+1)^3<2k^3$. By the induction hypothesis $k^3<2^k$, so ... ?
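The conjecture gathered from the table is easy to machine-check before proving it (added here):

```python
holds = [n for n in range(60) if n**3 < 2**n]
print(holds)   # [0, 1, 10, 11, ..., 59]: the inequality fails exactly for n = 2..9
```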
H: Can an LP optimization problem have exactly two solutions? For example a linear model defined by the equation Min[5 x + 7 y,8 x + 4 y] <= 7 - 5 x has a feasible region shown as below. This model forms a concave feasible region that has two corner points $(x_1,y_1)$, $(x_2,y_2)$ in the second and fourth quadrants respectively. Let's say our objective function is of the form $ax+by=c$, where $a>0$, $b>0$ and $c$ is the value we wish to maximize. Also, $ax_1+by_1=c$ and $ax_2+by_2=c$. The maximum value of our $c$ will be achieved only at these two points and nowhere else. Hence, we can have exactly two optimal solutions. Sub-question: 1) Does the region defined by the Min function correctly define a linear model? AI: Your problem is not a linear program. You cannot express your constraint by a collection of linear inequalities. It is actually a somewhat complicated logical constraint. Linear programs have convex feasible regions; yours does not. If a linear program has more than one solution, it has infinitely many, as all convex combinations will be solutions. I would not consider this to be a linear-algebra question. More like operations research.
H: Prove $z \to z^m$ has degree $m$. I am hoping to prove this obeying the author's intention - following his hint. But I am wondering if I shouldn't employ Euler's Formula, and should use a more primitive method? Also, is my proof below correct? Prove $z \to z^m$ has degree $m$. Hint: calculate with local parametrizations derived from the map $\theta \to (\cos \theta, \sin \theta)$ of $\mathbb{R}^1 \to S^1$. So we want to show $I(f, \{y\}) = m$. Since any $y$ will work, we pick $(0,1)$ for convenience. By Euler's Formula that $$e^{ix} = \cos x + i \sin x,$$ or equivalently, $$(e^{ix})^m = e^{i(mx)}=\cos (mx) + i \sin (mx).$$ Since when $\cos (mx) = 0$ implies $\sin(mx) = 1$, we only need to solve for one. We solve $\cos (mx) = 0$, clearly $mx =\frac{\pi+2k\pi}{2},$ hence $$x =\frac{\pi}{2m}+\frac{k\pi}{m}.$$ Clearly, $x$ has $m$ solutions, namely, $k = 0,1,\dots, m-1.$ AI: Heuristic Argument: You see that under this map, as $z$ moves ever so slightly in the anticlockwise direction, then so does $z^m$. So the map preserves orientation at all points and hence the degree is $m$, as the inverse image has cardinality $m$. Rigorous argument: Orient $S^1$ by anticlockwise tangent vectors. An anticlockwise tangent vector must be mapped onto an anticlockwise tangent vector by the derivative of the map $z\mapsto z^m$ at any point $z\in S^1$. It must be so, in particular, for the $m$ points in the inverse image. To see why the above happens, a curve going anticlockwise at uniform angular velocity w.r.t. time $t$ must go to a curve going uniformly at $m$ times that velocity. So just draw the velocity vector (i.e., tangent vector) at the initial point of the curve. It must go to that of the image curve. They must be in the same direction, anticlockwise! So at each point, the map is orientation-preserving, so each point contributes a $+1$ and the degree ought to be $m$.
H: The no. of values of k for which $(16x^2+12x+39) + k(9x^2 -2x +11)$ is perfect square is: I wanted to know how can I determine the number of values of $k$ for which $(16x^2+12x+39) + k(9x^2 -2x +11)$ is a perfect square ($x \in R$). I have tried: since $x$ is real the discriminant must be $\geq 0$, i.e. $D = 4(6-k)^2 -4(16+9k)(11k+39) \geq 0,$ which gives $k \in [-4,-1.5]$. But how can I determine the values where the expression will be a perfect square? Any help appreciated. Thanks. AI: HINT: Let $(16x^2+12x+39) + k(9x^2 -2x +11)=(ax+b)^2$ $\implies (16+9k)x^2+(12-2k)x+39+11k=a^2x^2+2abx+b^2$ Comparing the coefficients of $x^2,x,x^0$ we have $a^2=16+9k,2ab=12-2k\implies ab=6-k, b^2=39+11k$ $\implies (6-k)^2=(ab)^2=(16+9k)(39+11k)$ which is a Quadratic Equation in $k$ on re-arrangement
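With the constant term written correctly as $39+11k$, SymPy confirms (my addition) that the equation $(6-k)^2=(16+9k)(39+11k)$ has exactly two roots, and they are the endpoints of the interval found from the discriminant:

```python
import sympy as sp

k = sp.symbols('k')
print(sp.solve(sp.Eq((6 - k)**2, (16 + 9*k)*(39 + 11*k)), k))   # [-4, -3/2]
```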
H: What is the value of $\log_i i$? What is the value of $\log_i i$? How should I start? Is the expression mathematically meaningful? AI: If you want to define $\log_ab=\frac{\log b}{\log a}$, then $\log_ii$ exists and has multiple values. Starting from $\exp(i(4n+1)\pi/2)=i$ you may find $\log_ii=\frac{4n+1}{4m+1}$ for $n,m \in \mathbb{Z}$.
H: solve the inequation : $-|y| + x - \sqrt{x^2 + y^2 -1} \geq 1$ I wanted to know how to solve the following inequality: $-|y| + x - \sqrt{x^2 + y^2 -1} \geq 1$. I did $x-|y| \geq\sqrt{x^2 + y^2 - 1} +1 \geq 0$ which gives $x \geq |y|$; what to do next... any help appreciated. Thanks AI: EDIT The answer refers to the inequality in the original OP question. We begin by considering only those $(x,y)\in\mathbb R^2$ s.t. $x^2+y^2-1\geq 0$. If $y\geq 0$, then the given inequality is $$x-y\geq \sqrt{x^2+y^2-1}~(\geq 0),$$ while for $y<0$ one has $$x+y\geq \sqrt{x^2+y^2-1}~(\geq 0).$$ Now we can apply the second power to both sides of the inequalities, arriving at $$2xy\leq 1 \cap y\geq 0 $$ $$2xy\geq-1 \cap y\leq 0$$ or $$\{xy\leq \frac{1}{2} \cap y\geq 0\cap x^2+y^2-1\geq 0\}\cup \{-\frac{1}{2}\leq xy \cap y\leq 0\cap x^2+y^2-1 \geq 0\}. $$ EDIT We consider now the edited inequality $x-|y|\geq 1+ \sqrt{x^2+y^2-1}$ Considering the inequality $$x-|y|\geq 1+ \sqrt{x^2+y^2-1}\geq 1+0 $$ we need to start imposing $\{x-|y|\geq 1\}\cap \{x^2+y^2-1\geq 0\}$. Squaring both sides, we arrive at $$-x|y|\geq \sqrt{x^2+y^2-1},$$ which implies $-x|y|\geq 0$. Considering the absolute value $|y|$ and squaring we arrive at the inequalities $$\{x-|y|\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{-x|y|\geq 0\}\cap\{y\geq 0\}\cap\{x^2y^2\geq x^2+y^2-1\},$$ and $$\{x-|y|\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{-x|y|\geq 0\}\cap\{y\leq 0\}\cap\{x^2y^2\geq x^2+y^2-1\}.$$ In the first chain the condition $y\geq 0$ (which implies $|y|=y$) gives $$\{x-y\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{-x|y|\geq 0\}\cap\{y\geq 0\}\cap\{x^2y^2\geq x^2+y^2-1\}=\{x-y\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{x\leq 0\}\cap\{x^2y^2\geq x^2+y^2-1\}.$$ In the second chain the inequality $y\leq 0$ (which implies $|y|=-y$) gives $$\{x-|y|\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{-x|y|\geq 0\}\cap\{y\leq 0\}\cap\{x^2y^2\geq x^2+y^2-1\}=\{x+y\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{x\leq 0\}\cap\{x^2y^2\geq x^2+y^2-1\},$$ instead. In summary, the solutions $(x,y)$ to the original inequality are given by $$\left(\{x-y\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{x\leq 0\}\cap\{x^2y^2\geq x^2+y^2-1\}\right) \cup \left(\{x+y\geq 1\}\cap \{x^2+y^2-1\geq 0\}\cap\{x\leq 0\}\cap\{x^2y^2\geq x^2+y^2-1\}\right)$$
H: Is it necessarily the case that $f$ is measurable, if it's measurable in each variable? A well-known example of a function that is continuous in each variable but fails to be jointly continuous is: $$ f(x,y) = \begin{cases} \frac{xy}{x^2 + y^2} & (x,y) \neq (0,0) \\ 0 & (x,y) = (0,0) \\ \end{cases} $$ Question $1$: Is this function "jointly" measurable? It seems to me it suffices to ask whether the preimage of each element in the $\pi$-system that generates $\mathcal B(\Bbb R)$ is measurable. Thus it reduces to whether $f^{-1}((-\infty,a])$ is measurable, and it is (union of a Borel set and a singleton, or the relative complement of a singleton with respect to a Borel set). Question $2$: Given $f : X \times Y \to Z$. For each $x_0 \in X$, $f(x_0,\cdot): Y \to Z$ is a measurable function, and for each $y_0 \in Y$, $f(\cdot, y_0): X \to Z$ is also a measurable function. Is it necessarily the case that $f$ is measurable? AI: These two questions are mainly unrelated. The answer to Question 1 is yes, as you showed yourself in the post. The answer to Question 2 is no: consider $X=Y=Z=\mathbb R$ endowed with Borel sigma-algebras and $f$ such that $f(x,y)=1$ if $x=y$ and $x\in A$ and $f(x,y)=0$ otherwise, where $A\subset\mathbb R$ is not measurable. Then every $f(x,\ )$ and $f(\ ,y)$ is either identically zero or identically zero except at one point hence it is measurable, while $f^{-1}(\{1\})=\{(x,x)\mid x\in A\}$ is not measurable.
H: representations and modules I am reading about representations of Lie algebras in Humphreys. He defines representations as $L$-modules. In the case of group representations we have the correspondence between representations and modules over the group ring. But the two settings use different-looking definitions of a module: in the group case a representation is a module over the group ring, which is just the usual definition of a module over a ring, while in the $L$-module case we have the extra condition $[xy].v = x.y.v - y.x.v$. Why do we need this condition? One more question: given any algebraic structure and its representations, is there a general definition of a module over that structure? How should one see modules, say, "categorically"? AI: This definition of an $L$-module is the analogue of $\mathbb{C}G$-modules. A difference comes from the fact that $L$ is "only" a Lie algebra - it is not an associative algebra. Consider the following. A representation of a finite group $G$ is essentially a homomorphism of groups $G\to GL(V)$ for some vector space $V$, where we use the natural group structure of composing linear isomorphisms in $GL(V)$. A representation of a Lie algebra $L$ is a homomorphism of Lie algebras $L\to {\frak gl}(V)$, where we use the natural Lie algebra structure of commutators in ${\frak gl}(V)$. That is precisely what the condition $$[xy]\cdot v=x\cdot(y\cdot v)-y\cdot(x\cdot v)\qquad(*) $$ conveys. In other words, it is not an "extra" requirement any more than the group action requirement $(gh)\cdot v=g\cdot(h\cdot v)$ is for $G$-modules. This may appear to be a bit confusing at first, because Lie algebras are often the first (and in most cases the only) non-associative structure we encounter. Do remember the trick of turning an associative algebra into a Lie algebra by "forgetting" the associative product, and replacing it with the commutator. Again you see that condition $(*)$ is then a natural consequence. After you have made quite a bit more headway into Humphreys, you will learn to "reverse" this process. Associated with every Lie algebra $L$ is an associative algebra $U(L)$, called the universal enveloping algebra. It is in a sense the smallest associative algebra with the property that every $L$-module is also a $U(L)$-module (in the sense of modules of algebras). It has the disadvantage of being infinite dimensional. Historically the Lie algebras arose (IIRC) as tangent spaces of Lie groups, groups that are also differentiable manifolds, such as $SL_n(\mathbb{C})$. A Lie group $G$ has a Lie algebra associated with it. Not surprisingly the Lie algebra of $SL_n(\mathbb{C})$ turns out to be what you know as ${\frak sl}_n(\mathbb{C}).$ If a Lie group $G$ has a representation (as a group) with the extra requirement that the group action is also differentiable in a natural sense, then such a $G$-module becomes also a module of the Lie algebra in a relatively natural way. For example, the condition $(*)$ arises (loosely speaking) by differentiating the $G$-module condition along the commutator of two paths going via the identity element of $G$. Humphreys is not interested in the Lie group side. He has done a lot of research on the algebraic group side, which is a purely algebraic way of getting a theory similar to that of Lie group representations, but with the manifolds and derivatives replaced with structures from algebraic geometry, and with extra difficulties in positive characteristics.
So his goal is to get the students quickly to the rep theory and skip the analytic machinery required by Lie groups, as those are neither needed nor available in the algebraic group side anyway. Finally, you do get a category of $L$-modules in the usual way. The objects of the category are $L$-modules, and morphisms are homomorphisms of $L$-modules. Nothing new or unusual there.
H: Number of trees with a fixed edge Consider a vertex set $[n]$. By Cayley's formula there are $n^{n-2}$ trees on $[n]$, but how can one count the following slightly modified version: what is the number of trees on $[n]$ in which the edge $\{1,2\}$ is contained? AI: We can actually do it directly from Cayley's formula, without making use of a proof of that result. For each $e=\{k,\ell\}\subseteq[n]$ with $k\ne\ell$ let $S_n(e)$ be the number of trees on $[n]$ that contain the edge $e$, let $e_0=\{1,2\}$, and let $S_n=S_n(e_0)$; clearly $S_n(e)=S_n$ for all $e\in E$, where $E$ is the set of possible edges. Then $$\sum_{e\in E}S_n(e)=\binom{n}2S_n$$ counts each tree on $[n]$ $n-1$ times, once for each of its edges. There are $n^{n-2}$ trees on $[n]$, so $$\binom{n}2S_n=(n-1)n^{n-2}\;,$$ and $$S_n=2n^{n-3}\;.$$
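A brute-force cross-check of $S_n=2n^{n-3}$ is possible via the Prüfer bijection between labelled trees on $[n]$ and sequences in $[n]^{n-2}$ (a sketch; $n=5$ keeps the enumeration cheap):

```python
from itertools import product

def prufer_to_tree(seq, n):
    # decode a Prüfer sequence (entries in 1..n) into the edge set of its tree
    degree = {v: 1 for v in range(1, n + 1)}
    for v in seq:
        degree[v] += 1
    edges = []
    for v in seq:
        leaf = min(u for u in degree if degree[u] == 1)  # smallest current leaf
        edges.append(frozenset((leaf, v)))
        degree[leaf] -= 1
        degree[v] -= 1
    u, w = [v for v in degree if degree[v] == 1]         # two vertices remain
    edges.append(frozenset((u, w)))
    return edges

n = 5
count = sum(frozenset((1, 2)) in prufer_to_tree(seq, n)
            for seq in product(range(1, n + 1), repeat=n - 2))
print(count, 2 * n ** (n - 3))  # both print 50
```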
H: Find a formula for this sequence (and prove it). This is a 2 part problem. Part I I need help finding a formula for this sequence of numbers: $$\frac{1} {1\times 2} + \frac {1} {2\times3} + \cdots + \frac {1} {n(n+1)}$$ Part II I need to prove the formula conjectured in Part I. AI: HINT: $$\frac1{k(k+1)}=\frac1k-\frac1{k+1}$$ Now telescope.
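Telescoping collapses the partial sums to $1-\frac1{n+1}=\frac{n}{n+1}$, which is the conjectured formula for Part I (Part II then follows by induction). A short exact-arithmetic check of the conjecture (a sketch):

```python
from fractions import Fraction

def partial_sum(n):
    # 1/(1*2) + 1/(2*3) + ... + 1/(n(n+1)), computed exactly
    return sum(Fraction(1, k * (k + 1)) for k in range(1, n + 1))

for n in (1, 2, 5, 10, 100):
    assert partial_sum(n) == Fraction(n, n + 1)
print("partial sums match n/(n+1)")
```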
H: Let $f(x) = \int \frac{x}{1-x^{8}}dx\,$ Let $f(x) = \int \frac{x}{1-x^{8}}dx\,$. Represent $f(x)$ by a power series $\sum^{\infty}a_{n}x^{n}$. (Find $a_{n}$.) What is the radius of convergence of the series? Two curves are generated by polar equations $r=1+\sin\theta$ and $r=-\sin\theta$. Find the area of the region that lies inside both curves. Find the length of the part of the curve $r=1+\sin\theta$ that lies inside the curve $r=-\sin\theta$. AI: Hint: For $(1)$ you need the identity $$ \frac{1}{1-t}=\sum_{k=0}^{\infty} t^k. $$ Added: $$ f(x)= \int\frac{x}{1-x^8} dx = \int \sum_{k=0}^{\infty}x^{8k+1}dx = \sum_{k=0}^{\infty}\frac{x^{8k+2}}{8k+2} + c. $$ Now, try to find the radius of convergence using some standard techniques.
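For the radius of convergence: the singularities of $\frac{x}{1-x^8}$ are the eighth roots of unity, all on $|x|=1$, and the ratio test applied to the coefficients $\frac1{8k+2}$ gives the same answer, so the radius is $1$. A small sympy check of the expansion before integrating (a sketch):

```python
import sympy as sp

x = sp.symbols('x')
# geometric expansion of the integrand x/(1 - x^8)
print(sp.series(x / (1 - x**8), x, 0, 20))
# x + x**9 + x**17 + O(x**20); integrating term by term gives
# x**2/2 + x**10/10 + x**18/18 + ..., i.e. a_n = 1/n when n ≡ 2 (mod 8), else 0
```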
H: Finding a basis for vector space $U$ Let $U$ denote the subspace of $M_{2\times 2}(\mathbb{C})$ defined by $$U=\left\lbrace\left(\begin{matrix}a&b\\ c&0\end{matrix}\right):a + b + c=0\right\rbrace.$$ How would one find a basis for that vector space? Any clues please. AI: Well, the only requirement is that $a+b+c=0$, and since we don't require the matrix to be invertible, a general matrix is then $$\left(\begin{matrix}a&b\\ -a-b&0\end{matrix}\right)$$ Can you work out the basis from here?
H: How to solve $i^z = \ln z$ How to solve $i^z = \ln z$? Putting $z = re^{i\theta}$ and $i = e^{i\pi/2}$ gives: $$ e^{i\frac{\pi}{2}\,r e^{i\theta}}= \ln r +i\pi $$ How to continue? Thanks AI: Use Euler's formula $e^{i\theta}=\cos\theta+i\sin\theta$ and equate the real and imaginary parts of both sides of the equation you've obtained to get the equations \begin{eqnarray} %\begin{split} e^{-\frac{\pi}{2}r\sin\theta}\cos\left(\frac{\pi}{2}r\cos\theta\right)=&\ln r \\ \ e^{-\frac{\pi}{2}r\sin\theta}\sin\left(\frac{\pi}{2}r\cos\theta\right)=&\pi %\end{split} \end{eqnarray} From these equations you'll get $$\cos\theta=\frac{2}{\pi r}\arctan\left(\frac{\ln r}{\pi}\right)$$ Use this in any one of the previous two equations to eliminate $\theta$ and get an equation only in $r$. Of course it will be a messy transcendental equation in $r$. You can numerically solve for $r$ and then use the last equation to get $\theta$. Wolfram Alpha gives the solution $r=1.17187209744601...$ and from that we get $\theta \approx 88.429765^{\circ}$
H: Find the value of this logarithmic expression involving a fifth root of unity. Let $\alpha$ be a fifth root of unity with $\alpha\ne1$. We then want to evaluate the expression $$\log |1 + \alpha + \alpha^2 + \alpha^3 - 1/\alpha |$$ Thanks in anticipation for your help in solving this! AI: HINT: $$1+\alpha+\alpha^2+\alpha^3=\frac{1-\alpha^4}{1-\alpha}=\frac{1-\frac1\alpha}{1-\alpha}\text{ as }\alpha^5=1$$ $$\implies 1+\alpha+\alpha^2+\alpha^3=\frac{1-\alpha}{-\alpha(1-\alpha)}=-\frac1\alpha\text{ as }\alpha\ne1$$
H: How can I check the nature of a critical point of a three-variable function I have been studying multivariate calculus. What is the best way to determine the nature of a critical point of a real-valued function of three variables? For a function of two variables we can use $$D = f_{xx}(x,y)f_{yy}(x,y) - (f_{xy}(x,y))^2$$ to check the nature of a critical point. Thank you for all answers. AI: In general, for a non-degenerate critical point, you need to check if the Hessian matrix is positive definite (local minimum) or negative definite (local maximum). If the Hessian has both positive and negative eigenvalues the critical point is called a saddle point.
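As a concrete illustration of the Hessian test (a sketch; the function below is just an assumed example), one can compute the Hessian at a critical point symbolically and inspect its eigenvalues:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = x**2 + y**2 + z**2 - x*y - y*z                 # sample smooth function

grad = [sp.diff(f, v) for v in (x, y, z)]
crit = sp.solve(grad, (x, y, z), dict=True)[0]     # here: {x: 0, y: 0, z: 0}

H = sp.hessian(f, (x, y, z)).subs(crit)
print(H.eigenvals())
# all eigenvalues positive -> local minimum; all negative -> local maximum;
# mixed signs -> saddle point (a zero eigenvalue makes the test inconclusive)
```

For this example the eigenvalues are $2$ and $2\pm\sqrt2$, all positive, so the origin is a local minimum.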
H: How to prove inequality on quadratic form and orthogonal projection This is from a paper I'm reading. I don't know how to prove it. Assume that $\mathbf A$ is an $n\times n$ positive semi-definite matrix which has $k$ non-zero eigenvalues. We can assume that all positive eigenvalues are larger than some positive constants $b$ and $b'$. The remaining $n-k$ eigenvalues of $\mathbf A$ are zeros. Let $\mathbf P$ be the orthogonal projection onto the image of $\mathbf A$ and $\mathbf Q$ is the projection onto its kernel, so that $\mathbf P + \mathbf Q = \mathbf I$. Question 1: The paper says 'it is easy' but I don't know how to prove it. Can anyone explain it? As $\mathbf P (\mathbf A - b'\mathbf I)^{-1} \mathbf P \succeq \mathbf 0$ and $\mathbf P (\mathbf A - b\mathbf I)^{-1} \mathbf P \succeq \mathbf 0$, it is easy to check that $$ (b-b')\mathbf P (\mathbf A - b'\mathbf I)^{-2}\mathbf P \preceq \mathbf P (\mathbf A - b\mathbf I)^{-1} \mathbf P - \mathbf P (\mathbf A - b'\mathbf I)^{-1} \mathbf P$$ Question 2: The paper also mentioned this equation, which I don't know how to prove: $$tr\left(\mathbf L^T \mathbf Q(\mathbf A - b'\mathbf I)^{-2}\mathbf Q \mathbf L\right)=\frac{||\mathbf Q\mathbf L||_F^2}{b'^2}$$ Any hints or suggestions? AI: The paper states that $b' = b - \delta$. I couldn't find what $\delta$ is and I don't feel like reading the whole paper word-by-word, but I think it's safe to assume that $\delta > 0$, so $b > b'$. In that case, $$A - b {\rm I} \preceq A - b' {\rm I},$$ so $$(A - b {\rm I})^{-1} \succeq (A - b' {\rm I})^{-1},$$ i.e., $$(A - b {\rm I})^{-1} (A - b' {\rm I})^{-1} \succeq (A - b' {\rm I})^{-2}.$$ Now, \begin{align} P (A - b {\rm I})^{-1} P &- P (A - b' {\rm I})^{-1} P = P ((A - b {\rm I})^{-1} - (A - b' {\rm I})^{-1}) P \\ &= P ((A - b {\rm I})^{-1} (A - b' {\rm I}) (A - b' {\rm I})^{-1} - (A - b {\rm I})^{-1} (A - b {\rm I}) (A - b' {\rm I})^{-1}) P \\ &= P (A - b {\rm I})^{-1} ((A - b' {\rm I}) - (A - b {\rm I})) (A - b' {\rm I})^{-1} P \\ &= P (A - b {\rm I})^{-1} (A - b' {\rm I} - A + b {\rm I}) (A - b' {\rm I})^{-1} P \\ &= (b-b') P (A - b {\rm I})^{-1} (A - b' {\rm I})^{-1} P \\ &\succeq (b-b') P(A - b' {\rm I})^{-2} P. \end{align} As for your second question, note that $(A - b'I)^{-1} = p(A - b'I)$ for some polynomial $p$ (see Horn, Johnson, "Matrix analysis", 2nd ed., Corollary 2.4.3.4). Using the fact that $Q$ is a projector on kernel, i.e., $AQ = 0$, \begin{align} (A - b'I)^{-1} Q &= p(A - b'I) Q = p((A - b'I) Q) = p(AQ - b'Q) = p(-b'Q) \\ &= p(-b'{\rm I})Q = -(b')^{-1} Q. \end{align} Obviously, from this we have $$(A - b'I)^{-2} Q = (A - b'I)^{-1} (A - b'I)^{-1} Q = -(b')^{-1} (A - b'I)^{-1} Q = (b')^{-2} Q.$$ Also, since $Q$ is a projector, it is symmetric, so $L^TQ = (Q^TL)^T = (QL)^T$. Using all this, $$\mathop{\rm tr}\left( L^T Q (A - b'I)^{-2} QL \right) = (b')^{-2} \mathop{\rm tr} (L^T Q QL) = (b')^{-2} \mathop{\rm tr} ((QL)^T(QL)) = \frac{\|QL\|_F^2}{(b')^2}.$$
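A quick numerical sanity check of the trace identity from Question 2 (a sketch with randomly generated data; the dimensions and constants are assumed for illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 6, 3
U, _ = np.linalg.qr(rng.standard_normal((n, n)))     # random orthogonal matrix
eigs = np.concatenate([rng.uniform(2.0, 5.0, k), np.zeros(n - k)])
A = U @ np.diag(eigs) @ U.T                          # PSD, rank k, eigenvalues > b

b, bp = 1.0, 0.5                                     # b' = b - delta
P = U[:, :k] @ U[:, :k].T                            # projection onto image of A
Q = np.eye(n) - P                                    # projection onto kernel of A
L = rng.standard_normal((n, 4))

inv = np.linalg.inv(A - bp * np.eye(n))
lhs = np.trace(L.T @ Q @ inv @ inv @ Q @ L)
rhs = np.linalg.norm(Q @ L, 'fro')**2 / bp**2
print(lhs, rhs)                                      # agree to machine precision
```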
H: How does one show a topological space is metrizable? Using text Intro. to Topo. by Mendelson I'm self studying Intro. to Topology by Mendelson. The problem I'm looking at is, Prove that for each set $X$, the topological space $(X,2^X)$ is metrizable. I'm not having so much trouble with this problem per se, but with the idea of metrizable spaces. From what I've understood from the reading, a metric space $(X,d)$ satisfies a theorem which is exactly the definition of a topological space using only open sets from $X$, and so these metric spaces are known as metrizable spaces. Is my thought process sound? Going back to this problem, would I need to find a metric $d$ that can induce/create the topology on $X$? Would I need to look at the open sets of $X$? Am I even approaching this problem correctly? Thanks for any feedback or hints. AI: It is better to think of topological metrizable spaces as "metric spaces with the metric forgotten". You see, a metric space has an additional structure, namely the metric, but it also happens to induce a topology via its open sets. So, the reason for having the two names, "metrizable space" and "metric space", is to differentiate between what we're looking at: If we say "a metric space", we mean a set with a metric function which defines distance between points, satisfies the triangle inequality and so forth. If we say "a metrizable space", we mean a topological space, with open sets as its entire structure, but we also know that this topological space came from a metric, and hence satisfies many "good" qualities such as separation axioms. Metric spaces are trivially topological spaces, but proving that a topological space is metrizable is generally hard. One way to do it is to "recall" the metric. If you can realize what the metric that induced the topological space was, and show that the topology induced by it is exactly the topology of the given topological space, then you've shown the space is metrizable. I was going to let you figure out what the metric is in your case, but Aneesh already told you.
H: Let $R, P , Q$ be relations, prove that the following statement is tautology Let $P, R, Q$ be relations. Prove that: $\exists x(R(x) \vee P(x)) \to (\forall y \neg R(y) \to (\exists xQ(x) \to \forall x \neg P(x)))$ is a tautology. How do I do so? please help. AI: This isn't a tautology. Consider a universe with one single object such that the object satisfies $P$ and it doesn't satisfy $R$. For simplicity let $Q$ mean the same as $P$. Then it is true that $\exists x(R(x) \lor P(x))$. It is also true that $\forall y \neg R(y)$. Furthermore it is true that $\exists xQ(x)$. However it is false that $\forall x \neg P(x)$ because of our initial hypothesis.
H: Solution of $a x+\sin x -L =0$ How to find $x$ such that $a x+\sin x -L =0$, where $L,a$ are constants and $a>0$? Thank you. AI: Equations of this type are called transcendental equations and are generally not solvable in closed form. However, you can use numerical techniques like the Newton-Raphson method, the bisection method, etc. to find the solution very accurately.
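A minimal Newton-Raphson sketch for $f(x)=ax+\sin x-L$ (the values of $a$, $L$ and the starting point are assumed for illustration; note that $f'(x)=a+\cos x$ can vanish when $a\le1$, in which case bisection is the safer choice):

```python
import math

def solve_ax_plus_sin(a, L, x0=0.0, tol=1e-12, max_iter=100):
    """Newton-Raphson for f(x) = a*x + sin(x) - L."""
    x = x0
    for _ in range(max_iter):
        fx = a * x + math.sin(x) - L
        dfx = a + math.cos(x)        # f'(x); never zero when a > 1
        x_new = x - fx / dfx
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

root = solve_ax_plus_sin(a=2.0, L=3.0)
print(root, 2.0 * root + math.sin(root) - 3.0)   # residual ~ 0
```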
H: The limit of $f(x)=\sin \frac{1}{x}$ at $x=0$ On page 96 of Spivak's Calculus, 4th Edition, he writes: ... For this function it is false that $f$ approaches $0$ near $0$. This amounts to saying that it is not true for every number $\epsilon > 0$ that we can get $|f(x)-0| \lt \epsilon$ by choosing $x$ sufficiently small, and $\neq 0$. To show this we simply have to find one $\epsilon > 0$ for which the condition $|f(x)-0| \lt \epsilon$ cannot be guaranteed, no matter how small we require $|x|$ to be. In fact, $\epsilon = \frac{1}{2}$ will do. It is impossible to ensure that $|f(x)| < \frac{1}{2}$ no matter how small we require $|x|$ to be; for if $A$ is any interval containing $0$, there is some number $x=\frac{1}{(\frac{1}{2}\pi+2n\pi)}$ which is in this interval and for this $x$ we have $f(x)=1$. My questions are: How did he find $x=\frac{1}{(\frac{1}{2}\pi+2n\pi)}$ and what does the $n$ stand for? How can we show that $x=\frac{1}{(\frac{1}{2}\pi+2n\pi)}$ is contained in any interval that contains $0$? Thank you in advance for any help provided. AI: Solve $\sin(\frac{1}{x})=c$ where $c\gt\frac{1}{2}$. Here $c=1.$ $n$ stands for any integer because $\sin x$ is a periodic function. To find $x$ given $c=1$, you have to know that $\sin y=1$ when $y=\frac{1}{2}\pi+2n\pi$. For any $\epsilon$, you can find an $n$ such that $x=\frac{1}{(\frac{1}{2}\pi+2n\pi)}\lt\epsilon$. That means $x$ is contained in any interval that contains $0$. To find $n$, solve $\frac{1}{(\frac{1}{2}\pi+2n\pi)}\lt\epsilon$; it is $\frac{1}{2}\pi+2n\pi \gt \frac{1}{\epsilon}$. So, $n\gt\frac{1}{2\pi}\cdot{}(\frac{1}{\epsilon}-\frac{1}{2}\pi)$.
H: Immediate consequence of Riemann-Roch Let $X$ be an algebraic curve, $D$ a divisor and $\mathscr{O}(D)$ the line bundle associated to $D$ in the canonical way. The following implication should follow immediately from the Riemann Roch formula $$deg(D)<0 \implies h^0(X, \mathscr{O}(D)) =0.$$ Could you help me to see why this is the case? AI: This has nothing to do with Riemann-Roch: The vector space of global sections $H^0(X, \mathscr{O}(D)) $ is equal to the set of rational functions $f\in \operatorname{Rat}(X)$ for which the divisor $E=\operatorname{div}(f)+D$ is effective. Effectivity of $E$ implies $ \deg(E)=\deg(\operatorname{div}(f))+\deg(D) \geq 0$. Since for a non-zero $0\neq f$ we have $\deg(\operatorname{div}(f))=0$, we see that if a non-zero $f\in H^0(X, \mathscr{O}(D)) $ existed we would have $ \deg D \geq 0$. Contrapositively the assumption $\deg(D)\lt 0$ forces $H^0(X, \mathscr{O}(D)) =0$, just as you required.
H: Proving uniform convergence of a sequence I have to prove the uniform convergence of this sequence $f_n(x)=\tan^{-1}nx$ in $[a,b],a>0$ What I have reached so far: $$|f_n(x)-f(x)|=\left|\tan^{-1}nx-\frac\pi 2\right|=\frac\pi 2-\tan^{-1}nx<\epsilon$$ How do I proceed further? AI: Recalling the Taylor series of $\arctan(t)$ at $t=\infty$ $$ \frac{\pi}{2} -{\frac {1}{t}}+O \left( {t}^{-3} \right) $$ $$ \implies \arctan( nx )=\frac{\pi}{2}-{\frac {1}{nx}}+O \left( {n}^{-3} \right) $$ $$ \implies \arctan( nx )-\frac{\pi}{2}= -\frac{1}{nx}+O \left( {n}^{-3} \right) $$ $$ \implies \arctan( nx )-\frac{\pi}{2} \sim -\frac{1}{nx}. $$ Now, you can advance to finish the problem.
H: Transform quadratic ternary form to normal form Does anyone know of an integral transform which transforms the normal form $Ax^2 + By^2 + Cz^2 + Dxy + Eyz + Fzx = 0$ to the form $ax^2 + by^2 + cz^2 = 0$ ? Thanks in advance. AI: It is a two step iterative algorithm: Take the first variable $x$. If $x^2$ appears there with non-zero coefficient, complete the square with $x^2$ and the double product $xy$ with the next variable. Continue with the next variables. If $x^2$ is not there, only the double product with the next variable $xy$, then make the substitution $x=u+v$, $y=u-v$. This makes the term $u^2$ appear. Go to step $1$ with the variable $u$. This will give you a change of variable with rational coefficients. You will have to multiply the original form by a convenient factor to clear denominators. Example: $$xy+y^2+z^2.$$ The order to input the variables to the algorithm could be other, but let us do it with $x$ as the first variable. We need step $2$ because there is no $x^2$. We get $$(u+v)(u-v)+(u-v)^2+z^2=2u^2-2uv+z^2.$$ Now $u$ is our first variable. We go to step $1$. $$2(u^2-uv+v^2/4)-v^2/2+z^2=2(u-v/2)^2-v^2/2+z^2.$$ In the new variables $z_1=u-v/2$, $z_2=v$, $z_3=z$ (where $u=(x+y)/2$ and $v=(x-y)/2$) we get $$2z_1^2-z_2^2/2+z_3^2.$$ Notice we can, in this case, multiply the whole form by $16$ and get rid of the denominators in the change of variable.
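A short sympy verification of the worked example (a sketch):

```python
import sympy as sp

x, y, z, u, v = sp.symbols('x y z u v')
q = x*y + y**2 + z**2

# step 2: there is no x^2 term, so substitute x = u + v, y = u - v
q2 = sp.expand(q.subs({x: u + v, y: u - v}))
print(q2)                              # 2*u**2 - 2*u*v + z**2

# step 1: complete the square in u; z1 = u - v/2, z2 = v, z3 = z
diag = 2*(u - v/2)**2 - v**2/2 + z**2  # the diagonal form 2*z1^2 - z2^2/2 + z3^2
print(sp.expand(diag - q2))            # 0, so the two forms agree
```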
H: Show that the boundary of $A$ is empty iff $A$ is closed and open. I'm reading Intro to Topology by Mendelson. The problem at hand is, Show that $\text{Bdry}(A)=\emptyset$ if and only if $A$ is closed and open. This was all the problem statement had, but I'm in the chapter covering closure, interior and boundary with respect to topological spaces. Here is my attempt at the proof, Suppose that Bdry$(A)=\emptyset$. Then $\bar{A}=A$ $\cup$ Bdry$(A)=A\cup\emptyset=A$. Hence, $\bar{A}=A$, which implies that $A$ is closed. To show that $A$ is also open we will show that $\overline{C(A)}=C(A)$, that is, $C(A)$ is closed and hence $A$ open. We already know that $C(A)\subset\overline{C(A)}$, thus it suffices to only show that $\overline{C(A)}\subset C(A)$. This is the case since we know that Bdry$(A)=\bar{A}\cap\overline{C(A)}=\emptyset$, which implies that $\overline{C(A)}\subset C(\bar{A})=$ Int$(C(A))\subset C(A)$. Thus, $A$ is open. Suppose now that $A$ is both open and closed. Since $A$ is open we know that Int$(A)=A$. Also, since $A$ is closed $\bar{A}=A=$ Int$(A)$. Now we know that Bdry$(A)=\bar{A}\cap\overline{C(A)}=$ Int$(A)$ $\cap$ $C($Int$(A))=\emptyset.$ I used the identity $\bar{A}=A\cup\text{Bdry}(A)$, which was asked later in the problem set. I'm wondering if I should use this, since I was able to prove it, or attempt the proof assuming I'm unaware of the identity. I have also been trying to write more concise proofs and this one is definitely not one of them. Would removing some of the words help or is there a cleaner approach that can be pointed out? Thanks for any feedback! AI: As you said $\bar{A} = A \cup \operatorname{Bd}(A)$, but there's a stronger statement that you can and should prove: $\bar{A} = \operatorname{Int}(A) \cup \operatorname{Bd}(A)$. Now it should be fairly easy to prove that $\operatorname{Bd}(A) = \emptyset \iff \text{A is closed and open}$: (1) If $\operatorname{Bd}(A)=\emptyset$, then $\bar{A}=\operatorname{Int}(A)$, and since $\operatorname{Int}(A)\subset A \subset \bar{A}$ we conclude that $\operatorname{Int}(A) = A = \bar{A}$. $\operatorname{Int}(A) = A$ shows that $A$ is open and $A = \bar{A}$ shows that $A$ is closed. (2) If $A$ is closed and open: since $A$ is open, $A=\operatorname{Int}(A)$, and since $\bar{A} = \operatorname{Int}(A) \cup \operatorname{Bd}(A)$ we see that $\bar{A} = A \cup \operatorname{Bd}(A)$. But $A$ is also closed! This means that $\bar{A} = A$, therefore $A = A \cup \operatorname{Bd}(A)$. Now remember that $\operatorname{Int}(A) \cap \operatorname{Bd}(A) = \emptyset$, so $A \cap \operatorname{Bd}(A) = \emptyset$. Because of $A = A \cup \operatorname{Bd}(A)$ and $A \cap \operatorname{Bd}(A) = \emptyset$ we conclude $\operatorname{Bd}(A) = \emptyset$
H: About the convergence of a sequence in $L^1$ Suppose that $f_n$ is a sequence of nonnegative functions such that $\int f_n d\mu=1$ for all $n$, and $f_n\to f$ in $L^1$. Let $p>1$. Is it then true that $f_n^p\to f^p$ in $L^1$? AI: This is false. Because as Chris remarks above, $f_n^p$ needn't be in $L^1$. For example: If the concerned measure space is $\mathbb R$ with the Borel $\sigma$-field and the Lebesgue measure and $f$ is as defined below:$$\begin{cases}x<0&f(x)=0\\0<x\le1&f(x)=1\\1+\frac1{2^{p+1}}+...\frac1{(n-1)^{p+1}}<x\le1+\frac1{2^{p+1}}+...\frac1{n^{p+1}}&f(x)=n\\1+\frac1{2^{p+1}}+...\frac1{n^{p+1}}+...<x&f(x)=0\end{cases}$$ So on an interval of length$\displaystyle \frac1{n^{p+1}}$, $f$ has value $n$, so the $L^1$ norm is the sum $\sum\frac1{n^p}$ which converges for $p>1$. But $f^p$ is $n^p$ on the respective intervals. So its $L^1$ norm is $\sum\frac1n$ which diverges!
H: Evaluating $\lim\limits_{x \to 0} \frac1{1-\cos (x^2)}\sum\limits_{n=4}^{\infty} n^5x^n$ I'm trying to solve this limit but I'm not sure how to do it. $$\lim_{x \to 0} \frac1{1-\cos(x^2)}\sum_{n=4}^{\infty} n^5x^n$$ I thought of finding the function that represents the sum but I had a hard time finding it. I'd appreciate the help. AI: HINT: As $x\to0, |x|<1\implies \sum_{0\le n<\infty}x^n=\frac1{1-x}$ (Proof) Differentiating wrt $x,$ $$\sum_{0\le n<\infty} nx^{n-1}=\frac1{(1-x)^2}\implies \sum_{0\le n<\infty} nx^n=\frac x{(1-x)^2}$$ Again differentiating wrt $x$ $$\sum_{0\le n<\infty}n^2x^{n-1}=\frac1{(1-x)^2}+\frac{2x}{(1-x)^3}\implies \sum_{0\le n<\infty}n^2x^n=\frac x{(1-x)^2}+\frac{2x^2}{(1-x)^3}$$ Can you continue the process to find $\sum_{0\le n<\infty}n^5x^n ,$ hence $\sum_{4\le n<\infty}n^5x^n ?$
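For a cross-check: near $0$ the numerator is $\sum_{n\ge4}n^5x^n=4^5x^4+O(x^5)$ while $1-\cos(x^2)=\frac{x^4}{2}+O(x^8)$, so the limit should be $2\cdot4^5=2048$. A sympy sketch following the differentiate-the-geometric-series hint:

```python
import sympy as sp

x = sp.symbols('x')
s = 1 / (1 - x)                  # geometric series: sum_{n>=0} x^n for |x| < 1
for _ in range(5):               # applying (x * d/dx) five times gives sum n^5 x^n
    s = sp.cancel(x * sp.diff(s, x))
s = s - sum(k**5 * x**k for k in range(1, 4))    # drop the n = 1, 2, 3 terms
print(sp.limit(s / (1 - sp.cos(x**2)), x, 0))    # prints 2048
```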
H: Root of an exponential equation Let $0 \le a \le 1$ and $-\infty < b < \infty$. I am looking for a solution of the exponential equation. $$ a^x + abx = 0. $$ I guess closed form expression of the root in terms of $a$ and $b$ may not be there. In that case, an asymptotic expansion of the root in terms of $a$ and $b$ would be just as fine. AI: Note that $$ \begin{align} a^x+abx&=0\\ abx&=-e^{\log(a)x}\\ abx\,e^{-\log(a)x}&=-1\\ -\log(a)x\,e^{-\log(a)x}&=\frac{\log(a)}{ab} \end{align} $$ Thus, we can use the Lambert W function to get $-\log(a)x$: $$ \begin{align} -\log(a)x&=\mathrm{W}\left(\frac{\log(a)}{ab}\right)\\ x&=-\frac1{\log(a)}\mathrm{W}\left(\frac{\log(a)}{ab}\right) \end{align} $$ $\mathrm{W}(x)$ has real values for $x\ge-\frac1e$. For non-negative $x$, there is one real branch. For negative $x$, there are two real branches (which coincide at $-\frac1e$).
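A numerical cross-check with SciPy's Lambert W (the values of $a$ and $b$ here are assumed samples; the argument $\frac{\log(a)}{ab}$ must be at least $-1/e$ for a real branch to exist):

```python
import numpy as np
from scipy.special import lambertw

a, b = 0.5, 4.0
arg = np.log(a) / (a * b)                   # must be >= -1/e for a real solution
x = -lambertw(arg, k=0).real / np.log(a)    # k=-1 gives the second real branch
print(x, a**x + a * b * x)                  # residual ~ 1e-16
```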
H: Intuition for Multiple Summation becoming One Summation - Nothing too formal/rigorous please Source. I grok addition is associative and commutative, and a term can be moved into other summations iff these other summations aren't summing this term. Hence I grok $$\sum_{i,j} g_{ij} \sum_r a_{ir} x_r \sum_s a_{js} x_s = \sum_{i,j} \sum_{r} \sum_s g_{ij} a_{ir}x_r a_{js}x_s = \color{blue}{ \sum_{i} \sum_{j} \sum_{r} \sum_s g_{ij} a_{ir}x_r a_{js}x_s}.$$ $$ \text{But } \color{blue}{ \sum_{i} \sum_{j} \sum_{r} \sum_s g_{ij} a_{ir}x_r a_{js}x_s} = \sum_{i, j, r, s} g_{ij} a_{ir}x_r a_{js}x_s \,???$$ (Follow-up 1) The R.S. of $\color{green}{\sum_{i_k} A_{i_1,i_2,\dots , i_k} = A^1_{i_1,i_2,\dots , i_{k-1}}}$ eliminates $i_k$ and introduces superscript $1$. Similarly, R.S. of $\sum_{i_{k - 1}} \color{green}{A^1_{i_1,i_2,\dots , i_{k-1}}}= \color{#7A7676}{A^2_{i_1,i_2,\dots ,i_{k - 3}, i_{k-2}}}$ eliminates $i_{k - 1}$ and introduces superscript $2$. Similarly, R.S. of $\sum_{i_{k - 1}}\color{#7A7676}{A^2_{i_1,\dots ,i_{k - 3}, i_{k-2}}}= A^3_{i_1,\dots ,i_{k - 4}, i_{k-3}}$ eliminates $i_{k - 2}$ and introduces superscript $3$ and so forth... But what do the $A^{\text{# of index removed}}_{\text{one less index than before}}$ and this whole process mean, other than writing out the $k - 1$ summation symbols? (II) More generally, if $i_k$ satisfies some property $P(i_k)$, then how to rewrite with only 1 summation $$\sum\limits_{i_1 \; : \; P(i_1)} \cdots \sum\limits_{i_{k - 1} \; : \; P(i_{k - 1})} \; \sum\limits_{i_k \; : \; P(i_k)} A(i_1, ..., i_k) \; ?$$ (III) You wrote $\sum_{i_1,i_2,\dots , i_k} A_{i_1,i_2,\dots , i_k}= \sum_{i_1}\bigl(\sum_{i_2} \dots \color{green}{\left[\sum_{i_k} A_{i_1,i_2,\dots , i_k}\right]} \cdots \bigr)$. Why are there dots after the green sum? It is the last sum, so nothing comes after? (Follow-up 2) (II) Sadly I do not understand the first three sentences in your answer. Can you please elaborate? Is $\sum_{j=1}^{r} B_j$ one sum or a multiple sum already rewritten as one sum? Also, thank you for recommending writing multiple sums. Actually, I like them better too! But I am addled by $S=\sum_{i,j} \epsilon_{ij} = -\sum_{i,j} \epsilon_{ji} =-\sum_{j,i} \epsilon_{ji} =-S$. You summed over $i, j$ but there is only one sum here. Why not two sums? Which is better? AI: The notation $\sum_{i_1,i_2,\dots , i_k}$ is just short-hand for the iterated sums $\sum_{i_1}$, $\sum_{i_2}, \dots , \sum_{i_k}$. I would say (my convention) starting with $i_k$ and proceeding outward to $i_1$: $$ \sum_{i_1,i_2,\dots , i_k} A_{i_1,i_2,\dots , i_k}= \sum_{i_1}\bigl(\sum_{i_2} \dots \color{green}{\left[\sum_{i_k} A_{i_1,i_2,\dots , i_k}\right]} \cdots \bigr) $$ In particular, if we denote $\color{green}{\sum_{i_k} A_{i_1,i_2,\dots , i_k} = A^1_{i_1,i_2,\dots , i_{k-1}}}$ then $$ \sum_{i_1,i_2,\dots , i_k} A_{i_1,i_2,\dots , i_k}= \sum_{i_1}\bigl(\sum_{i_2} \dots \sum_{i_{k - 2}} \underbrace{\sum_{i_{k-1}} \color{green}{\left[A^1_{i_1,i_2,\dots , i_{k-1}}\right]}}_{\Large{A^2_{i_1,\dots ,i_{k - 3}, i_{k-2}}}} \cdots \bigr) $$ and so forth until we're down to $\sum_{i_1,i_2,\dots , i_k} A_{i_1,i_2,\dots , i_k} =\sum_{i_1} A^{k-1}_{i_1} $. This would be my default interpretation of such an expression. Some obvious questions to ask: how am I sure it wasn't done in a different order, say starting with $i_1$ and proceeding outward until finally we finish with the sum over $i_k$? wait, does it even matter which order the summations are taken? 
The simplest version of this is: does $\sum_i (\sum_j A_{ij}) = \sum_j(\sum_i A_{ij})$ ? If the answer to (2.) is no, then the answer to (1.) is that the order of summation does not matter. Here, we're assuming that (2.) extends to $k$-sums. But that's clear since we can always break a $k$-sum into iterated $2$-sums, in other words $\sum\limits_{i_1}\left(\sum\limits_{i_2,...,i_k}\right) = \sum\limits_{i_k}\left(\sum\limits_{i_1,...,i_{k - 1}}\right)$ So, let us address (2.). To keep it easy to understand let's look at $n=2$: $$ \sum_{i=1}^2\sum_{j=1}^2 A_{ij} = \sum_{i=1}^2 (A_{i1}+A_{i2}) = (A_{11}+A_{12})+(A_{21}+A_{22}). $$ Compare against: $$ \sum_{j=1}^2\sum_{i=1}^2 A_{ij} = \sum_{j=1}^2 (A_{1j}+A_{2j}) = (A_{11}+A_{21})+(A_{12}+A_{22}). $$ So as Hagen von Eitzen has commented, it's just rearranging parentheses. Now, if these summations pass to infinite upper bounds (series) then we cannot rearrange these so easily. Some analytical conditions concerning uniformity of the convergence must be met. But, so long as the sums are finite, we can reorder them. Incidentally, if you did want to prove these things carefully, you'll need a definition for the finite sum. May I recommend that $\sum_{i=1}^{1} A_i = A_1$ and $\sum_{i=1}^{n+1}A_i = A_{n+1}+\sum_{i=1}^{n}A_i$. Most authors think these things are too trivial to put in books. Following the follow-up: I.) the superscript notation in my example is merely to emphasize the idea that the summations can be thought of as happening one at a time. It's much the same idea as the iterated integral $\int_0^1 \int_{0}^{x}\int_0^{1-x-y} xy\,dz \, dy \, dx$ we integrate over $z$ leaving $\int_0^1 \int_{0}^{x} \underbrace{[xy(1-x)-xy^2]}_{\text{like} \ A_1} \, dy \, dx$ next, integrate over $y$ leaving $\int_0^1 \underbrace{[x\frac{x^2}{2}(1-x)-x\frac{x^3}{3}]}_{\text{like} \ A_2} \, dx$ finally we're left with an integral in just one variable $\int_0^1 \underbrace{[x\frac{x^2}{2}(1-x)-x\frac{x^3}{3}]}_{\text{like} \ A_2} \, dx = \frac{-1}{24} $ My idea was to suppress the indices of summation to emphasize that after the sum is complete that index is gone for the summations that follow. Just like $z$ or $y$ is gone as we iterate the integral inside out. II.) writing multiple sums as one sum? Well, I suppose the sum is just an addition of finitely many terms, thus we can place the possible indices in an ordered set and label those indices from say $1$ to $r$ where $r$ is the total number of summands; then the iterated sum becomes $\sum_{j=1}^{r} B_j$. However, I don't recommend this. The point of writing multiple sums is found both from their natural origin from compound summative processes (for example, the finite sum which sets-up the double integral) as well as the nice property that repeated sums allow us to exploit symmetries between certain subsets of the summands $B_1, \dots B_r$. For example, $\sum_{i,j} \epsilon_{ij} = 0$ since, by definition, $\epsilon_{ij}=-\epsilon_{ji}$ and so: $$ S=\sum_{i,j} \epsilon_{ij} = -\sum_{i,j} \epsilon_{ji} =-\sum_{j,i} \epsilon_{ji} =-S $$ which shows $S=0$. III.) this one is easier: those dots indicate the many parentheses I did not write. In response to Following the follow-up (2): I meant to indicate that a multiple finite sum is still just the sum of finitely many things. For example, $$ \sum_{i=1}^3 \sum_{j=1}^3 A_{ij} = \sum_{r=1}^9 B_r $$ provided I define $B_1 = A_{11}, B_2 = A_{12}, \dots , B_9 = A_{33}$. This would not usually be a wise step since it hides any nice symmetries of the summands $A_{ij}$.
Getting back to my other comment, to be more pedantic, \begin{align} S &= \sum_i \sum_j \epsilon_{ij} \\ &= -\sum_i \sum_j \epsilon_{ji} \qquad \text{since $\epsilon_{ij} = -\epsilon_{ji}$} \\ &= -\sum_j \sum_i \epsilon_{ji} \qquad \text{property of finite sums, can swap order}\\ &=-S \end{align} and thus $S=0$.
H: Number of times $g(p_1)$ occurs in $\sum_{d\mid n}g(d)$ $$ g(n)=\begin{cases} 1 & \text{if }n=1 \\[10pt] \sum_{d\mid n,\ d\ne n} g(d) & \text{else} \end{cases} $$ How can I calculate $g(n)$ efficiently? I was trying to collect all the $g(p)$ terms after complete decomposition of $n$. After some googling I found that what I am looking for is a perfect partition, which is the same as the number of ordered factorizations of $n$. There is a formula called MacMahon's formula to compute the number of ordered factorizations of $n$. Can someone explain how MacMahon's formula is derived? Also see the original question. AI: Let $n$ be square-free. Then if $n$ has $\nu(n)$ distinct prime factors, $$f\left(\nu(n)\right):=g(n)=\sum_{d|n,\ d\ne n}g(d)=\sum_{k=0}^{\nu(n)-1}\binom{\nu(n)}{k} f(k)$$ where we denote by $f(k)$ the value of $g(n)$ when $\nu(n)=k$ (because clearly $g(n)$ depends upon the number of prime factors only, not on the prime numbers themselves). So, the function $f(n)$ is defined by $$f(n)=\left\{ \begin{array}{lr} 1 && \mbox{if}\ n=0\\ \sum_{k=0}^{n-1}\binom{n}{k} f(k) && \mbox{if}\ n\geq1 \end{array} \right. $$ I find a striking resemblance of the function $f(n)$ to the Bell number $B(n)$, which is also defined very similarly with a slight (but maybe significant) difference; in fact these $f(n)$ are the ordered Bell (Fubini) numbers $1, 1, 3, 13, 75, \ldots$ To find $g(n)$ for $n$ not square-free, we proceed as below. First, let $n=p^k$ for some prime $p$ and $k\in \mathbb{Z}^+$. Then, for $k\ge2$, $$g(p^k)=\sum_{d|p^k,\ d\ne p^k}g(d)=\sum_{d|p^{k-1},\ d\ne p^{k-1}}g(d)+g(p^{k-1})=2g(p^{k-1}),$$ while $g(p)=g(1)=1$. So iteratively, $$g(p^k)=2^{k-1}g(p)=2^{k-1}$$ Now, let $n=p^kq$ where $p, q$ are distinct prime numbers. Then $$g(p^kq)=2g(p^{k-1}q)+g(p^k)=2g(p^{k-1}q)+2^{k-1}$$ Iteratively, $$g(p^kq)=2^kg(q)+k2^{k-1}=(k+2)2^{k-1}$$ If $n=p^kq^2$ then, similarly, we will get the recurrence relation $$g(p^kq^2)=2g(p^{k-1}q^2)+g(p^k)+g(p^kq)$$ Since we know $g(p^kq)$ from the previous computation we can find $g(p^kq^2)$. Likewise, extending the idea, we get $$g(p^kq^r)=2g(p^{k-1}q^r)+\sum_{i=0}^{r-1}g(p^kq^i)$$ In general, if $n=p_1^{k_1}p_2^{k_2}\cdots\ p_r^{k_r}$, then, to find $g(n)$, I guess we have to solve this multidimensional recursive equation $$f_r(k_1,k_2,\cdots\ ,k_r)=\sum_{0\le a_1\le k_1,\cdots\ ,0\le a_r\le k_r}\frac{(a_1+a_2+\cdots\ a_r)!}{a_1!a_2!\cdots\ a_r!}f_r(a_1,a_2,\cdots\ ,a_r)$$ I don't quite know though how to solve this equation. Right now, I don't see any easier method.
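A direct memoized computation of $g$ straight from the definition (a sketch, fine for moderate $n$); it confirms the closed forms $g(p^k)=2^{k-1}$ and $g(p^kq)=(k+2)2^{k-1}$ derived above:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def g(n):
    # number of ordered factorizations: g(1) = 1, g(n) = sum of g over proper divisors
    if n == 1:
        return 1
    return sum(g(d) for d in range(1, n) if n % d == 0)

print([g(2**k) for k in range(1, 6)])   # [1, 2, 4, 8, 16] = 2^(k-1)
print(g(24), (3 + 2) * 2**(3 - 1))      # 24 = 2^3 * 3: both print 20
```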
H: K-theory - dependence on algebraic structure: What is the K-theory of a direct product? I want to figure out how the K-class of a finitely generated projective ring depends on its algebraic structure. For example consider $K_0(\mathbb{C})\cong \mathbb{Z}$ and $K_0(\mathbb{R}\times\mathbb{R})$, where the algebraic structure on $\mathbb{C}$ is the standard multiplication and on $\mathbb{R}\times\mathbb{R}$ the multiplication is given by: $(r_1,r_2)\times (r_1',r_2')\mapsto (r_1*r_1',r_2*r_2')$. Are $K_0(\mathbb{C})$ and $K_0(\mathbb{R}\times\mathbb{R})$ isomorphic? What can we say in general? AI: Presumably algebraic K-theory preserves finite direct sums. In that case $K_0(\mathbb R\times\mathbb R)$ cannot be $\mathbb Z$.
H: number of days needed : 48-hour project with 4 employees working 6 hrs/day? The paving of a road takes 48 hours if done by an employee. As a Project Manager, calculate the number of days required if you have a workforce of 4 people who can work for 6 hours a day? AI: $$4\;\text{persons}\times \frac{6\;\text{hours}}{\text{day}} \times x\;\text{days} = 48 \;\text{person-hours}$$ $$\text{Solve for x}: \quad 24x = 48 \iff x = \dfrac {48}{24} = 2\;\text{days}$$
H: Quartiles of an exponentially distributed function I am doing an exercise where I'm supposed to calculate the quartiles of the exponentially distributed function $f_\mathbb{X}(x)=\lambda e^{-\lambda x}$. So, first I calculate the distribution function, $F_\mathbb{X}(x)$, to be $F_\mathbb{X}(x) = 1 - e^{-\lambda x}$ I know this is correct. Then, to calculate the first quartile $x_{0.25}$, I set $F_\mathbb{X}(x_{0.25}) = 1-0.25$ I then perform the following calculations: $F_\mathbb{X}(x_{0.25}) = 1-e^{-\lambda x} = 1-0.25$ $-e^{-\lambda x} = -0.25$ $e^{-\lambda x} = 0.25$ $-\lambda x = \ln(0.25)$ $x = -{\ln(0.25)\over\lambda}$ The same calculations are made for the other two quartiles, resulting in the following three quartiles: $F_\mathbb{X}(x_{0.25}) = 1-0.25 \implies x = -{\ln(0.25)\over\lambda}$ $F_\mathbb{X}(x_{0.5}) = 1-0.5 \implies x_{0.5} = -{\ln(0.5)\over\lambda}$ $F_\mathbb{X}(x_{0.75}) = 1-0.75 \implies x_{0.75} = -{\ln(0.75)\over\lambda}$ which, to me, seems pretty reasonable. The book, however, provides these answers: $x_{0.25} = -{\ln(0.75)\over\lambda}$ $x_{0.5} = -{\ln(0.5)\over\lambda}$ $x_{0.75} = -{\ln(0.25)\over\lambda}$ These answers do, on the other hand, not seem reasonable at all. Is the book wrong? And, if it's not, where am I wrong? AI: The usual definition of the first quartile is the place $q_1$ such that $\Pr(X\le q_1)=0.25$. In our case, $F_X(x)=1-e^{-\lambda x}$ and therefore we want $1-e^{-\lambda q_1}=0.25$. This manipulates to $e^{-\lambda q_1}=1-0.25=0.75$. Taking logarithms, we get the book's answer of $-\frac{\ln (0.75)}{\lambda}$. For the second quartile, $\ln(0.75)$ is replaced by $\ln(0.5)$, and for the third by $\ln(0.25)$. Remark: Note that $-\ln(0.75)\lt -\ln(0.5)\lt -\ln(0.25)$. This feels as if it is going the wrong way. It isn't. For the logarithms are all negative. I think the book's answer is (though correct) not presented in a good way. Better would be the equivalent $q_1=\ln(1/0.75)\cdot \frac{1}{\lambda}$. Then everything is positive. Shouldn't we all be positive?
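A quick numerical cross-check with SciPy, which parametrizes the exponential distribution by $\text{scale}=1/\lambda$ (a sketch with $\lambda=2$ assumed):

```python
import numpy as np
from scipy.stats import expon

lam = 2.0
print(expon.ppf([0.25, 0.5, 0.75], scale=1/lam))   # the three quartiles
print(-np.log([0.75, 0.5, 0.25]) / lam)            # same values
```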
H: compute the discrete (sampled) time process noise matrix Given the continuous time state transition equation as follows, $$ \frac{dx}{dt}=Ax+\nu $$ where $\nu \sim N(\mathbf{0},Q)$, $Q$ being the process noise matrix, one can compute the discrete (sampled) time process noise matrix as follows, $$ Q(\delta t)=\int\limits_{0}^{\delta t}{{{e}^{A\tau }}Q{{e}^{{{A}^{T}}\tau }}d\tau } $$ I want to ask why. AI: Indeed this is used often in electrical engineering such as design of Kalman filters, if I recall correctly. $$\mathbf Q(k)=\int\limits_{k\,T}^{(k+1)\,T}{{\mathbf \phi((k+1)T,\tau)\;}\mathbf Q(\tau){\;\mathbf \phi((k+1)T,\tau)}^\top d\tau }$$ gives the definition of the discrete-time noise covariance (usually defined for a certain time interval); it follows the mathematical definition of covariance and autocorrelation. Herein $\mathbf Q(t)$ is the continuous time process noise covariance matrix. The integral can also be approximated by a sum. In your notation, $\delta t$ corresponds to the time interval $[kT,(k+1)T]$. Further, $\mathbf \phi$ represents the state transition determined by your first equation. In your case this is the solution of your first differential equation (noise excluded, of course), determining state transitions following $\phi(t)\sim x(t)=\exp(\mathbf A t)$, hence: $$\mathbf \phi(\tau)=\exp(\mathbf A \tau)$$ $$\mathbf \phi^\top(\tau)=\exp(\mathbf A^\top \tau)$$ where ${}^\top$ indicates transposition, all capital bold symbols are matrices and $\mathbf \phi$ is the state-transition matrix. Once you apply the above integral to your case, you will see that the elements of the matrix $\mathbf Q$ act as autocorrelations of the continuous process noise on the state variables; these are commonly denoted $\sigma_{ij}$, where $i,j$ are indices.
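A small numerical sketch of the sampled-time covariance $Q(\delta t)=\int_0^{\delta t}e^{A\tau}Qe^{A^\top\tau}d\tau$ (the double-integrator $A$ and $Q$ below are assumed toy values, chosen because this integral has a well-known closed form):

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Q = np.array([[0.0, 0.0],
              [0.0, 1.0]])      # continuous-time process noise covariance
dt = 0.1

def integrand(tau):
    Phi = expm(A * tau)         # state-transition matrix exp(A*tau)
    return Phi @ Q @ Phi.T

Qd, _ = quad_vec(integrand, 0.0, dt)
print(Qd)
print(np.array([[dt**3/3, dt**2/2],
                [dt**2/2, dt]]))   # closed form for this A; matches Qd
```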
H: Solve a recursion using generating functions: $F_n+F_{n-1}+⋯+F_0=3^n , n\geq0$? Given the recursive equation : $$F_n+F_{n-1}+⋯+F_0=3^n , n\geq0$$ A fast solution that I can think of is placing $n-1$ instead of $n$ , and then we'll get : $$F_{n-1}+F_{n-2}+⋯+F_0=3^{n-1} $$ Now subtracting both equations : $$F_n+F_{n-1}+⋯+F_0 - (F_{n-1}+F_{n-2}+⋯+F_0) = 3^n - 3^{n-1} $$ $$ F_n = 3^n - 3^{n-1}$$ But how can I do that using generating functions ? any hints ? Thanks AI: Define $f(z):=\sum F_n z^n$ and $g(z):=\sum z^n=\frac{1}{1-z}$. You can see $\sum_{k=0}^{n}F_k$ is the coefficient of $f(z)g(z)$ in the $n$-th term. So we get $$f(z)g(z)=\sum_{k=0}^{\infty}3^nz^n.$$ From this we get $f(z)=\frac{1-z}{1-3z}=\frac{1}{3}+\frac{2}{3}\frac{1}{1-3z}=1+\sum_{k=1}^{\infty}\frac{2}{3}3^kz^k$. The coefficient of this is your $F_n$. The way to think about these two functions $f$ and $g$ is to write $$\sum_{k=0}^{n}F_k$$ as $$\sum_{k=0}^{n}F_{k}G_{n-k},$$which is a coefficient in a product of two series. In this case we need $G_k=1$. That is why the two functions should be the $f$ and $g$ above. So, yes, you multiply both sides by $z^n$ (or $z^{-n}$, or sometimes $z^n/n!$, or $z^n/n$; which generating functions are going to be useful depends on the recurrence) summed over all $n$. On both sides you get some series. The trick is to write them in terms of known functions and $f(z)$. There are some known operations on series that have easy translations into operations on the function. See here. There is also this very good, and free(!) book.
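A quick series check of $f(z)=\frac{1-z}{1-3z}$ (a sketch):

```python
import sympy as sp

z = sp.symbols('z')
f = (1 - z) / (1 - 3*z)
poly = sp.Poly(sp.series(f, z, 0, 8).removeO(), z)
print(poly.all_coeffs()[::-1])                         # [1, 2, 6, 18, 54, 162, 486, 1458]
print([1] + [3**n - 3**(n - 1) for n in range(1, 8)])  # F_0 = 1, F_n = 3^n - 3^(n-1)
```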
H: What is the smallest amount of the provision? Provisions for three companies totaling $ 48 million allocated in the ratio of 8:3:1. What is the smallest amount of the provision? This is my calculation: = 8 +3 +1 = 12 = 8x12: 3x12: 1x12 = 96: 36: 12 Provision of the smallest amount is 12 million. => Refer to my exercise book, the answer is 4 million. Are my calculations wrong? AI: We have the ratios of $8x : 3x: 1x$, where $x$ is in millions. That gives us a total of $8x + 3x + 1\cdot x = 12x$ which is twelve partitions of $48$ million: 12 groups of $x$-million. So we can solve for $x$: $$12 x = 48 \;\text{million} \iff x = \dfrac{48\;\text{million}}{12} = 4 \;\text{million}$$ So that gives us a ratio of provisions with $$(8\cdot 4\;\text{million}) : (3\cdot 4\;\text{million}): (1 \cdot 4\;\text{million})$$ So the largest provision is $\;8\cdot 4 = 32 \;\text{million}$, and the smallest provision is $4$ million.
H: If a vector $v$ is an eigenvector of both matrices $A$ and $B$, is $v$ necessarily an eigenvector of $AB$? I'm preparing for my final and this question came up in one of the practices. I am tempted to say no, but I've been having trouble proving this. If a vector $v$ is an eigenvector of both matrices $A$ and $B$, is $v$ necessarily an eigenvector of $AB$? AI: Hint: Matrix multiplication is associative: $$AB(v)= A(Bv)$$
H: $\forall A\exists L(\mathcal P(L)=A)$ To be honest, this idea is not mine; I saw this axiom somewhere and I don't remember where. It is the axiom saying that there exists a "logarithm set" $L$ for every set $A$: "$\forall A\exists L(\mathcal P(L)=A)$" Using intuition from naive set theory we can see that such an $L$ cannot always exist, and we can derive some contradictions too (in the naive setting). For example: we have that $\forall X (\varnothing \subset X)$, hence $\forall X (\varnothing \in \mathcal P(X))$. But if we allow the logarithm set axiom, then $\forall A\exists L(\mathcal P(L)=A)$ implies $\forall A(\varnothing \in A)$, which is not true. In addition we have that the logarithm set $L$ of $A$ always belongs to $A$, since $L \in \mathcal P(L) = A$. If I understand well, this means that the logarithm axiom is useless, because it states the existence of an element $L$ of the set $A$ for every set $A$, and is thus in contradiction with the empty set axiom. At this point I must ask why and when this axiom can be used: 1. Are the contradictions that I've found enough to kick this axiom "out of the game" for ever? 2. Can there be an alternative collection of set theory axioms that allows this axiom to be interesting? Maybe we can extend the universe of sets with "exotic" sets that make us always able to take the "logarithm set" of every set? Sorry if I made some grammar errors and thanks in advance. AI: Yes, the contradiction that you have found is sufficient to conclude that this axiom is very much contradictory (i.e. it contradicts one of the fundamental properties of set theory, namely the existence of the empty set). One can perhaps modify this to be an argument of cardinality. That is, $$\forall A\exists L(A\neq\varnothing\rightarrow|2^L|=|A|),$$ but that is also not going to work. Note that the integers are not closed under taking $\log_2$. If you want this axiom to make sense somehow then you need to circumvent this fact. There will never be any set whose power set contains exactly three elements. So maybe if we require that $A$ is infinite? But that is also going to fail. $\aleph_0$ has the property that $|2^X|\neq\aleph_0$, no matter what $X$ is. And it is not the only infinite cardinal that has this property. So you're going to try and work harder and harder on that. This will also negate the existence of inaccessible cardinals, which have the same property as $\aleph_0$ here. So let's assume that we want to maximize the class of sets equipotent with power sets. What do I mean by maximize? Well, given an infinite cardinal $\aleph_\alpha$, I want the collection of cardinals smaller than $\aleph_\alpha$ which are equipotent with power sets to be as large as possible without causing contradictions for smaller $\alpha$'s. Assume that we can formalize this; let us observe the consequences and then see what is the correct formalization. So we know what happens with finite sets, and we know that not all infinite sets can be equipotent with power sets. $\aleph_0$ can't be. But $\aleph_1$ can be. In that case $2^{\aleph_0}=\aleph_1$. Then we want $\aleph_2$ to be a power set, and so $2^{\aleph_1}=\aleph_2$. We can continue. It is not hard to see that by the time we reach $\aleph_\omega$, the first limit cardinal, $\sf GCH$ holds below it. We have to skip $\aleph_\omega$ because it provably can't be equipotent to a power set.
So we continue in a similar fashion, and it is apparent that by the time we reach $\aleph_{\omega_1}$ the power sets of sets of smaller cardinality are exactly $\aleph_{\alpha+1}$ for $\alpha<\omega_1$. While $\aleph_{\omega_1}$ is eligible as a power set (e.g. it is consistent that $2^{\aleph_0}$ is $\aleph_{\omega_1}$), since we already have that it is a strong limit cardinal, it can't be a power set. We can continue on and on, and so we have that in fact $2^{\aleph_\alpha}=\aleph_{\alpha+1}$ for every ordinal $\alpha$. Essentially, we require $\sf GCH$ to hold. That we can formalize.
H: How to show that a polynomial has real root between two given values? Let $C_0,C_1,\ldots,C_n$ are real constants. It is given that $$C_0 + \frac{C_1}{2} + \ldots + \frac{C_{n-1}}{n} + \frac{C_n}{n+1}= 0$$ We need to prove that the equation $C_0 + C_1 x + \ldots + C_{n-1} x^{n-1} + C_{n}x^{n} = 0$ has at least one real root between 0 and 1. This is taken from Rudin pg-114. I Need some help in the last part of the following proof: Partial proof: Let $f(x)= C_0 + C_1 x + \ldots + C_{n-1} x^{n-1} + C_{n}x^{n}$. We have: $$f'(x)= C_1+ 2C_2 x + 3C_3 x^2 + \ldots +nC_n x^{n-1}$$ $$f'(0)=C_1, \ f'(1)= C_1 + 2C_2 + 3C_3 + \ldots +(n-1)C_{n-1}+ nC_{n}$$ I would probably want to show that $f'(0)$ and $f'(1)$ are opposite in sign so that we can claim that there exists at least one $x \in (0,1)$ such that $f'(x)=0$. This is possible because $f(x)$ is a polynomial and hence continuous, differentiable function. Going forward, we use: $$C_0 + \frac{C_1}{2} + \ldots +\frac{C_{n-1}}{n} + \frac{C_n}{n+1}= 0$$ to get: $$C_1 = -2\left(C_0 + \frac{C_2}{3}+ \ldots +\frac{C_n}{n+1}\right)$$ Using this we get the following: $$f'(1)=-2C_0 + \sum_{k=2}^{n} \frac{(k+2)(k-1)}{k+1} C_{k}$$ $$f'(0)=-2C_0 -2 \sum_{k=2}^{n} \frac{1}{k+1} C_{k}$$ But how do we conclude that $f'(1)$ is opposite in sign of $f'(0)$ ? AI: Consider the polynomial $$ Q(x) = \int_0^x f(t)dt$$ We find that $$ Q(x) = C_{0}x + C_{1}\frac{x^{2}}{2} + \cdots + C_{n}\frac{x^{n+1}}{n+1} $$ Then with the given condition you find that $Q(1) = 0$ and obviously, $Q(0) = 0$. Now, use Rolle's Theorem.
H: Homogeneous ordinary equation The question is: $(x-y)dx + xdy = 0$ Trying to solve: $ \\M(x,y) = (x-y) \\N(x,y) = x $ $ \\Kx - Ky = K(x-y) \Rightarrow \text{ Homogeneous} \\Kx = K(x) \Rightarrow \text{Homogeneous}$ $ \\y = vx \\dy = vdx+xdv$ $ \\(x-vx)dx+x(vdx+xdv)=0 \\xdx + x^2dv = 0 $ I'm stuck here. I know I have to integrate now. The answer is $$x=e^{-y/x}+c$$ but I can't get there. What I did is: $$ \\x^2dv=-xdx \\\int dv=-\int\frac{x}{x^2}dx \\\int dv=-\int\frac{1}{x}dx \\v+c_1=-\ln(x)+c_2 \\\frac{y}{x}=-\ln(x)+c_2-c_1 \\-x(\ln(x)+c_1-c_2)=y $$ And now? What should I do? AI: I'm a little confused by your notation earlier so I haven't checked it through, but at the end it's simple enough to rearrange your last line by dividing by $-x$ and then exponentiating. (Let $c=c_1-c_2$.) You get almost what you were looking for. (See my comment.) Exponentiating means raising $e$ to the powers given. $$a=b \implies e^a=e^b$$ Note that $e^{\ln y}=y$.
H: $AB=BA$ if there is an orthonormal basis of $\mathbb{R}^n$ of eigenvectors Show that if there is an orthonormal basis of $\mathbb{R}^n$ that consists of eigenvectors of both of the $n \times n$ matrices $A$ and $B$, then $AB = BA$. I'm not sure if what I have done suffices to solve the problem, but let $(v_1,v_2,v_3,v_4)$ be the basis of orthonormal eigenvectors of $A$ and $B$. Then for example, $Av_1=\lambda_1 v_1$ and $Bv_1=\mu_1 v_1$ (where $\lambda$ and $\mu$ are the corresponding eigenvalues). Then we have $$ABv_1=A\mu_1 v_1=\mu_1 A v_1=\mu_1 \lambda_1 v_1 = \lambda_1 B v_1=BAv_1$$ And since we can do this for all $(v_1,v_2,v_3,v_4)$, it follows that $AB=BA$. If that does not suffice, could anybody point out why? Thanks! AI: That's fine. If two linear things agree on a basis then they agree everywhere. You should definitely make sure you can prove that!
H: Question on a functional analysis exercise. These days I am doing some independent study of functional analysis. While solving problems, I could not handle the following part of an exercise (exercise 13, chapter 1 of Rudin's Functional Analysis). Let $C$ be the vector space of all complex continuous functions on $[0,1]$. Define $$d(f,g)=\int_0^1 \frac{|f(x)-g(x)|}{1+|f(x)-g(x)|}dx.$$ Let $(C,\sigma)$ be $C$ with the topology induced by this metric. Let $(C,\tau)$ be the topological vector space defined by the seminorms $$p_x(f)=|f(x)| \quad (0\le x \le 1).$$ Prove that every $\tau-$bounded set in $C$ is also $\sigma-$bounded and that the identity map $id:(C,\tau)\to (C,\sigma)$ therefore carries bounded sets into bounded sets. *Because of the definition of our seminorm, a set $E$ is $\tau-$bounded if and only if for each $x\in [0,1]$ there exists $M_x \ge 0$ such that $p_x(f)=|f(x)| \le M_x$ for all $f\in E$. To show that $E$ is $\sigma-$bounded, I tried to prove that for any $\delta>0$, $B(0,\delta)$ absorbs $E$, i.e. there exists $t>0$ such that $\frac{1}{t} E \subset B(0,\delta)$, but I am stuck here. I would really appreciate it if you could give me some sketches or hints on this. Thank you. AI: Sketch: Define $b_E(x) := \sup\limits_{f \in E} \lvert f(x)\rvert$. $b_E$ is the supremum of continuous functions, hence it is lower semicontinuous, hence measurable. Apply the dominated convergence theorem to $\frac1t b_E$.
H: How to prove that $\lim\limits_{x\to0}\frac{\tan x}x=1$? How to prove that $$\lim\limits_{x\to0}\frac{\tan x}x=1?$$ I'm looking for a method besides L'Hospital's rule. AI: Strong hint: $$\displaystyle \lim \limits_{x\to 0}\left(\frac{\tan (x)}{x}\right)=\lim \limits_{x\to 0}\left(\frac{\tan (x)-0}{x-0}\right)=\lim \limits_{x\to 0}\left(\frac{\tan(x)-\tan(0)}{x-0}\right)=\cdots$$
H: How To Count Shuffle permutations Let $n\in \mathbb{N}$ and $S(n)$ the permutation group on $\{1,\ldots,n\}$. For any $p,q\in \mathbb{N}$ with $p+q=n$, the set $Sh(p,q)\subset S(n)$ is the set of all permutations $\tau$ such that $\tau(1)< \cdots <\tau(p)$ and $\tau(p+1)<\cdots <\tau(n)$, usually called the set of $(p,q)$-shuffles. More generally for any $k;p_1,\ldots,p_k\in\mathbb{N}$ with $p_1+\cdots +p_k=n$, the set of $(p_1,\ldots,p_k)$-shuffles $Sh(p_1,\ldots,p_k)\subset S(n)$ is the set of all permutations $\tau$ such that $\tau(1)< \cdots <\tau(p_1)$, $\tau(p_1+1)<\cdots <\tau(p_1+p_2)$, and so on up to $\tau(n-p_k+1)<\cdots <\tau(n)$. ==== Now the question is: how do I calculate the cardinality $|Sh(p_1,\ldots,p_k)|$? I know that $|Sh(p,q)|=\binom{p+q}{q}$ but just as a fact. From empirical tests I would 'guess' $|Sh(p_1,\ldots,p_k)|=\binom{n}{p_1,\ldots,p_k}$, where the expression on the right means the multinomial coefficient, but I'm more interested in deriving such an expression. AI: Choose $p$ numbers from the set $\{1,\ldots,n\}$: there is a unique shuffle $\tau$ for which $\tau(1),\ldots,\tau(p)$ take those $p$ values (necessarily in increasing order) and the remaining $q$ values fill the remaining places, again in increasing order; this gives $|Sh(p,q)|=\binom{n}{p}$. The multinomial coefficient $n\choose p_1...p_k$ is the number of ways to partition $\{1,\ldots,n\}$ into subsets of cardinality $p_i$, and since within each block the values must appear in increasing order, each such partition corresponds to a unique shuffle.
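A brute-force check of the count against the multinomial coefficient for a small case (a sketch):

```python
from itertools import permutations
from math import factorial

def count_shuffles(parts):
    n = sum(parts)
    blocks, s = [], 0
    for p in parts:                      # index ranges of each block of tau
        blocks.append((s, s + p))
        s += p
    return sum(
        all(list(tau[a:b]) == sorted(tau[a:b]) for a, b in blocks)
        for tau in permutations(range(1, n + 1))
    )

def multinomial(parts):
    r = factorial(sum(parts))
    for p in parts:
        r //= factorial(p)
    return r

print(count_shuffles((2, 2, 1)), multinomial((2, 2, 1)))   # 30 30
```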
H: Some theorems in euclidean geometry have incomplete proofs I have seen that, in euclidean geometry, proofs of some theorems use one instance of the 'geometric shape' (on which the theorem is based) to prove the theorem. For example, the proof of 'A straight line that divides any two sides of a triangle proportionally is parallel to the third side' uses only one instance of a triangle, like: ∆ABC is the instance. Then, constructions are added to this diagram to prove the theorem. Clearly, the proof is not general, because only one triangle is in view. Therefore, this proof is not precise. There must be a general proof, but I haven't yet visualized what the general proof might be. So, why do people call the above type of proof a proof? Is it a complete mathematical proof? AI: The diagram is meant to make it easier for you to explain and jot down facts. You cannot use "obvious properties" in the diagram to motivate your argument. For example, you cannot assume that all angles of a triangle must be acute. It is possible for geometric proofs that are heavily based on diagrams to be wrong. A common example is that "All triangles are isosceles", in which the flaw lies in making an innocuous assumption about the position of a point. Other instances include arguments about side lengths and angles. For example, if points $A, B, C$ are on a line, is it true that $|AC| = |AB|+|BC|$? In a general setting, this requires the use of directed lengths.
H: Completion of a metric space in categorical terms Is it possible to define the completion of a metric space using categorical terms? AI: The completion of a metric space $X$ is an initial object in the category $\mathcal{U}_X$ whose objects are uniformly continuous maps $\iota_Y \colon X \to Y$, where $Y$ is a complete metric space, and whose morphisms are uniformly continuous maps $f \colon Y \to Z$ such that $f \circ \iota_Y = \iota_Z$.
H: Predicate logic example While studying predicate logic, I came across these examples as exercises, but can't figure them out. Can anyone help me? (i) If a brick is on another brick, it is not on the table. (ii) Every brick is on the table or on another brick. (iii) No brick is on a brick which is also on a brick. EDIT: My answers are: i) ∀x∃y brick_on(x,y) -> ¬brick_on(x,table) ii) ∀x brick(x) -> onTable(x) ∨ (∃y on(x,y)) iii) ∀x∃y∃z brick_on(x,y) -> ¬brick_on(z,x) Could anyone tell me whether these are right or wrong? AI: In your posted work, you've got a great start, in terms of the logic you used. However, we need to "clean up" parentheses, make a few corrections, define the notation you are using, and use this notation consistently Let's start with defining notation, and then let's stick with it. To simplify matters, we'll let the domain of our universe consist of bricks. Let $T(x)$ denote "x is on the table". Let $O(x, y)$ denote "x is on top of y". Now to the translations: Pay particular attention to the parentheses, which are used to enclose everything which is within the scope of a quantifier which precedes the parentheses. First sentence: (i) If a brick is on another brick, it is not on the table. $$(i)\quad \forall x \Big(\exists y \big(O(x, y)\big) \rightarrow \lnot T(x)\Big)$$ Then we have (ii) Every brick is on the table or on another brick: $$(ii)\quad \forall x\,\Big(T(x) \lor \exists y\,\big(O(x, y)\big)\Big)$$ So far, your work has been very close to the above. Now let's look at the final sentence: (iii) No brick is on a brick which is also on a brick. In loglish: "For all bricks x and y, if x is on y, then there does not exist any z such that O(y,z)." Now, the full translation: $$(iii)\quad \forall x\, \forall y\, \Big(O(x,y) \rightarrow \lnot\exists z\big(O(y,z)\big)\Big)$$ Or alternatively, we can say in loglish "For all bricks x and y, if x is on y, then for all bricks z, it is not the case that O(y,z)": $$(iii)\quad \forall x\, \forall y\, \Big(O(x,y) \rightarrow \forall z\,\big(\lnot O(y,z)\big)\Big)$$
H: What does "prove by induction" mean? What does "Prove by induction" mean? Would you mind giving me an example? AI: Proof by induction means that you prove something for all natural numbers by first proving that it is true for $0$, and that if it is true for $n$ (or sometimes, for all numbers up to $n$), then it is true also for $n+1$. An example: Proof that $1+2+3+\dots+n = n(n+1)/2$: For $n=0$, on the left hand side you've got the empty sum, which by definition is $0$. On the right hand side, you've got $0(0+1)/2=0$. So the equation is true for $n=0$. Assume that the formula is true for $n$. Then we can prove it for $n+1$ as follows: By assumption, $1+2+\dots+n+(n+1)=n(n+1)/2 + (n+1)$. But the right hand side is easily calculated to equal $(n+1)((n+1)+1)/2$. Now since we have proven it for $0$, and for $n+1$ assuming it's true for $n$, we have proven it by induction for all $n$. The first part is known as the induction start, and the second part as the induction step. Now of course you want to know: Why does it work? Imagine that you want to know whether it is true for $n=5$. Of course you could just calculate it directly, but assume you've forgotten how to calculate it, and all you remember is that you've proven it for $n=0$ and that if it is true for $n$, then it is true also for $n+1$. OK, you know it is true for $n=0$. If it is true for $n=0$, then by the induction step, which you've proven, it is true for $n+1=1$. But if it is true for $n=1$, then by the induction step it is true for $n+1=2$. But if it is true for $n=2$, then by the induction step it is true for $n+1=3$. But if it is true for $n=3$, then by the induction step it is true for $n+1=4$. But if it is true for $n=4$, then by the induction step it is true for $n+1=5$. Therefore you have proven it for $n=5$. It is easy to see that this allows you to mechanically formulate a proof for any individual $n$, and thus it is obvious that it must be true for all $n$.
H: 1st order linear DE with step function input The 1st order linear equation is: $y'(t) + \frac D M y(t) = f(t)$ with constants $D = 100\,kg/s$, $M = 1000\,kg$, and $f(t) = Fu(t)$, i.e. the force $F$ times the unit step function, and an initial condition $y(0) = 20.8\,m/s$. The input is a step function scaled by the force $F$ ($Fu(t)$). We need to solve the DE and then find the force needed to make the final velocity $27.8\,m/s$. Also a block diagram with the Laplace transform: $f(t) \longrightarrow \dfrac{1/M}{s + D/M}$. Thank you! Here's what I have so far... First I plugged in the constants: $y'(t) + 0.1 y(t) = 0.001f(t)$. Using an integrating factor $\mu$ in the linear DE and the initial condition $y(0) = 20.8$, I got $y(t) = 0.01 + 20.79 e^{-0.1t}$. The problem is I can't figure out what to do with the right side of the equation, the step function scaled by force. I need help integrating the right side, $Fu(t)$. I need to solve the equation to a point where I can input a constant value for the force in order to aim for the target velocity of $27.8\,m/s$. AI: We are given: $$\tag 1 y'(t) + \frac D M y(t) = \dfrac{1}{M}f(t)$$ where: $D = 100kg/s$ $M = 1000kg$ $f(t) = Fu(t)$, Force $\times$ Heaviside unit step function Initial Condition (IC): $y(0) = 20.8m/s$ Rewriting $(1)$ yields: $$\tag 2 y'(t) + \dfrac{1}{10} y = \dfrac{F}{1000} u(t)$$ Taking the Laplace Transform of $(2)$ yields: $$\mathcal{L}\left(y'(t) + \dfrac{1}{10} y = \dfrac{F}{1000} u(t)\right) = s y(s) - y(0) + \dfrac{1}{10} y(s) = \dfrac{F}{1000 s}$$ We want to group the $y(s)$ term on the LHS and everything else on the RHS, so we have: $$y(s)\left(s + \dfrac{1}{10}\right) = y(0) + \dfrac{F}{1000 s} = 20.8 + \dfrac{F}{1000 s}$$ So we have (that last part is a partial fraction expansion): $$\tag 3 y(s) = \dfrac{20.8 + \dfrac{F}{1000 s}}{s + \dfrac{1}{10}} = \dfrac{0.01 (F+20800 s)}{s (10 s+1)} = \left(\dfrac{20.8-0.01F}{s+0.1} + \dfrac{0.01 F}{s}\right)$$ Now, we need to find the Inverse Laplace Transform of $(3)$, so we have: $$ \mathcal{L}^{-1}~(y(s)) = y(t) = \mathcal{L}^{-1}~\left(\dfrac{20.8-0.01F}{s+0.1} + \dfrac{0.01 F}{s}\right) = 0.01 \left(F-(F-2080) e^{-t/10}\right)$$ So, we have: $$y(t) = 0.01 \left(F-(F-2080) e^{-t/10}\right)$$ Now, we need to find $F$ such that the final velocity is $27.8~m/s$. We are given a final time for this velocity at $t = 100$, so we would have: $$y(100) = 0.01 \left(F-(F-2080) e^{-10}\right) = 27.8 \rightarrow F = 2780.03$$ Thus, we have: $$y(t) = 27.8003-7.0003 e^{-t/10}$$ (The original answer concluded with a plot of this solution, rising from $20.8\,m/s$ at $t=0$ toward the limit $27.8\,m/s$.)
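As a numerical cross-check of the final step (a sketch; it assumes, as above, that the target velocity $27.8\,m/s$ is to be reached at $t=100\,s$):

```python
import numpy as np
from scipy.optimize import brentq

D, M, y0, target, tf = 100.0, 1000.0, 20.8, 27.8, 100.0

def y_final(F):
    # solution of y' + (D/M) y = F/M with y(0) = y0 and constant input F
    yss = F / D                                  # steady-state velocity F/D
    return yss + (y0 - yss) * np.exp(-(D / M) * tf)

F = brentq(lambda F: y_final(F) - target, 0.0, 10_000.0)
print(F)                                         # about 2780.03, as found above
```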
H: Is proving that a mapping maps every element of the domain and is surjective sufficient to prove that it is a ring automorphism? The ring under consideration is $R[x_1,x_2,\dots, x_n]$. Shouldn't proving that a mapping $f:R[x_1,x_2,\dots, x_n]\to R[x_1,x_2,\dots, x_n]$ maps every element of the domain and is surjective be sufficient to prove that it is an automorphism? Motivation: a mapping is either one-one or many-one, and you can't have a many-one automorphism if the mapping is surjective, provided every element of the domain is mapped. For some time, as directed by others, I have been trying to prove that a mapping is an automorphism by proving the existence of the inverse mapping. Is that really necessary at all? Thanks in advance! AI: First, you did not mention showing that $f$ is a homomorphism, which of course you need to demonstrate in addition to bijectivity. Second, you are correct that you cannot have a many-one automorphism--that is by definition. When you say "maps every element of the domain and is surjective", I'm not sure what you mean by the first part. To prove it's an automorphism, you must demonstrate that it is a homomorphism, that it is injective, and that it is surjective. If you are dealing with a finite set, it suffices to show that it is a homomorphism and that it is surjective.
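To illustrate that last remark about finite sets (a sketch of the pigeonhole point only; it does not apply to the polynomial ring, which is infinite): for a self-map of a finite set, surjectivity and injectivity coincide, as a brute-force check confirms:

```python
# Enumerate every self-map of {0, 1, 2} and verify that each one is
# injective exactly when it is surjective (pigeonhole principle).
from itertools import product

n = 3
for f in product(range(n), repeat=n):      # f[i] is the image of i
    injective = len(set(f)) == n
    surjective = set(f) == set(range(n))
    assert injective == surjective
print("injective <=> surjective for every self-map of a finite set")
```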
H: Why would a statistician or mathematician want to find the ratio between two maximum likelihoods in a likelihood-ratio test? Why would a statistician or mathematician want to find the ratio between two maximized likelihood functions in a likelihood-ratio test? I know that maximum likelihood means maximizing the likelihood function over the parameters, but what is the likelihood-ratio test used for? AI: This is a consequence of the Neyman-Pearson lemma: for testing a simple null hypothesis against a simple alternative, the test that rejects when the likelihood ratio is large is the most powerful test of its significance level.
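As a concrete illustration (a sketch with made-up Gaussian hypotheses, assuming NumPy and SciPy are available), the likelihood ratio compares how well two candidate parameter values explain the same data:

```python
# Log-likelihood ratio for H0: mu = 0 versus H1: mu = 1, with known
# sigma = 1 and i.i.d. normal data; large values favor H1.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=20)   # data actually drawn under H1
ll0 = norm.logpdf(x, loc=0.0, scale=1.0).sum()
ll1 = norm.logpdf(x, loc=1.0, scale=1.0).sum()
print("log likelihood ratio (H1 minus H0):", ll1 - ll0)
```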
H: Liapunov functions I would really like to see some very simple examples worked out, or at least with some well-pointed hints. I have two textbooks that outline the idea behind them, but each gives only one example, and those feel very contrived to me (Nonlinear Dynamics and Chaos by Steven H. Strogatz, and Differential Equations, Dynamical Systems, and an Introduction to Chaos by Morris W. Hirsch). Textbooks or links to instructive material on this subject are much appreciated. (I'm an undergrad and not from math, so please don't bludgeon me to death, thanks.) AI: Note: Be careful that the name is spelled two different ways, "Liapunov" or "Lyapunov", so when searching, try both! These papers might be instructive: Lyapunov function methods MIT Lectures on Dynamic Systems and Control MIT - Finding Lyapunov Functions A comparison of three methods of constructing Lyapunov functions Stability theory for systems of differential equations with applications to power control in wireless networks Books These each devote a handful of pages to the topic, and I like the first two most. Modern Control Theory, W. Brogan (very nice write-up on the matter, with physical and electrical circuits) The Qualitative Theory of Ordinary Differential Equations, An Introduction, F. Brauer and J.A. Nohel Ordinary Differential Equations, V.I. Arnold Dynamics and Bifurcations, J. Hale and H. Kocak Nonlinear Systems, H.K. Khalil There may be some applied materials from Physics areas that might also help.
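As one minimal worked example to complement the references (a sketch; the system and the candidate function are my own choices, assuming SciPy is available): for the damped oscillator $x'=y$, $y'=-x-y$, the candidate $V(x,y)=x^2+y^2$ satisfies $\dot V = 2x\,x' + 2y\,y' = 2xy + 2y(-x-y) = -2y^2 \le 0$, so $V$ is nonincreasing along trajectories, which one can also confirm numerically:

```python
# Check numerically that V(x, y) = x^2 + y^2 is nonincreasing along
# trajectories of the damped oscillator x' = y, y' = -x - y.
import numpy as np
from scipy.integrate import solve_ivp

sol = solve_ivp(lambda t, s: [s[1], -s[0] - s[1]], (0.0, 20.0), [1.0, 0.0],
                dense_output=True)
t = np.linspace(0.0, 20.0, 200)
x, y = sol.sol(t)
V = x**2 + y**2
print("V nonincreasing along the trajectory:", bool(np.all(np.diff(V) <= 1e-6)))
```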
H: What does $R_{0}^{+}$ mean? I'm reading this paper: http://www.aaai.org/Papers/KDD/1996/KDD96-027.pdf, and the authors use the symbol $R_{0}^{+}$ in the definition of the Exact Exception Problem, as in $D: P(I) \rightarrow R_{0}^{+}$. Could anyone please help me understand what the symbol $R_{0}^{+}$ means? It seems to refer to the set of real numbers, but if that's the case, what are the subscript and superscript for? Thanks, AI: It is the half line $[0,+\infty)$: the superscript $+$ restricts to the positive reals, and the subscript $0$ signals that $0$ is included as well.
H: Double integral Calculate the iterated integral $$\int_{1} ^4\int_{1} ^2 \left(\frac xy+\frac yx\right)\,dy\,dx$$ This is the work that I've done, but it led me to an answer that doesn't match the book, so either I did it completely wrong or I made an error in my calculation. $$\int_{1} ^4\int_{1} ^2 \left(\frac xy+\frac yx\right)\,dy\,dx= \int_{1} ^4 \left[x\ln(y)+{y^2\over 2x} \right]_{1} ^2 dx=\int_{1} ^4 \left[x\ln(2)+ {3\over 2x}\right]dx= \left[{\ln(2)\,x^2\over 2}+\frac32\ln(x)\right]_{1} ^4= {15\over 2}\ln(2)+ {3\over 2}\ln(4)$$ The answer in the back of the book says it's ${21\over 2}\ln(2)$ AI: Your answer is correct: note that $\ln(4)=\ln(2^2)=2\ln(2)$, so your solution becomes ${15\over 2}\ln(2)+ {3\over 2}\ln(4)={15\over 2}\ln(2)+ 3\ln(2)={21\over 2}\ln(2)$.
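For what it's worth, a quick symbolic check (a sketch assuming SymPy is available) confirms that your value and the book's coincide:

```python
# Evaluate the iterated integral symbolically and compare with (21/2) ln 2.
import sympy as sp

x, y = sp.symbols('x y', positive=True)
val = sp.integrate(sp.integrate(x / y + y / x, (y, 1, 2)), (x, 1, 4))
print(sp.simplify(val - sp.Rational(21, 2) * sp.log(2)))  # prints 0
```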
H: convert ceil to floor Mathematically, why is this true? $$\left\lceil\frac{a}{b}\right\rceil= \left\lfloor\frac{a+b-1}{b}\right\rfloor$$ Assume $a$ and $b$ are positive integers. Is this also true if $a$ and $b$ are real numbers? AI: This is not true in general, e.g. take $b = 1/2$ and $a = 2,$ so the LHS is $4$ while the RHS is $3.$ Suppose $b\nmid a$ (the case $b\mid a$ is trivial). Use the division algorithm to write $a = qb + r,$ where $q \in \mathbb{N}\cup \{0\}$ and $0 < r < b.$ Then the LHS is $q+1$ while the RHS is $\lfloor (q+1) + \frac{r-1}{b} \rfloor.$ Since $r$ is in fact an integer, $b > r \ge 1$ yields $\frac{b-1}{b} > \frac{r-1}{b} \ge 0,$ whence $\lfloor (q+1) + \frac{r-1}{b} \rfloor = q+1,$ as desired. Edit: I was a bit hasty in giving the general condition. Here it is corrected. If $b \mid a,$ then the LHS is $q$ and $r = 0,$ so you need $b \ge 1$ for the RHS to also be $q.$ If $b\nmid a,$ we write $a = qb + r,$ where $q \in \mathbb{Z}$ and $0 < r < |b|.$ So we need $0 \le \frac{r-1}{b} < 1.$ If $b > 0,$ this occurs so long as $r \ge 1$ (which in particular means that $b > 1$). If $b < 0,$ then we need $r > 1 + b.$ Now, we also have $r < |b| = -b,$ which gives $-b > 1 + b,$ or $b < -1/2.$ For $b \le -1,$ $r > 1+b$ holds trivially, and for $-1 < b < -1/2,$ one has to actually check that $r > 1 +b.$
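A quick empirical check (a Python sketch) of both halves of the answer: the identity over a grid of positive integers, and the real-valued counterexample $a=2$, $b=1/2$:

```python
# Verify ceil(a/b) == floor((a + b - 1)/b) for positive integers,
# then reproduce the counterexample from the answer.
import math

assert all(math.ceil(a / b) == (a + b - 1) // b
           for a in range(1, 200) for b in range(1, 200))
a, b = 2, 0.5
print(math.ceil(a / b), math.floor((a + b - 1) / b))  # 4 versus 3
```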
H: Zariski tangent space of a point viewed as a point of a subvariety Let $X \subset \mathbb{C}^n$ be an affine variety (not irreducible). Let $Y$ be a subvariety of $X$ (again not irreducible). How can we relate the Zariski tangent space at $P \in Y$ and at $P \in X$? (Corrected after Mariano's comments) Based on my understanding, we do have a homomorphism $T_P Y \rightarrow T_P X$ of vector spaces, but can we say something more? For example, what can we say about the dimensions of the two vector spaces $T_PY$ and $T_PX$? AI: The natural $\Bbb{C}$-linear map $$ T_P(Y) \rightarrow T_P(X) $$ is indeed injective. This follows from the fact that it is dual to the $\Bbb{C}$-linear map $$ \mathfrak{m}_{X,P}/\mathfrak{m}_{X,P}^2 \rightarrow \mathfrak{m}_{Y,P}/\mathfrak{m}_{Y,P}^2 $$ which is surjective, since $Y$ is a subvariety of $X$. Hence one always has $$ \dim T_P(Y) \leq \dim T_P(X) $$ and this result cannot be improved; you can have strict inequality (e.g. $X=\Bbb{A}^1$, $Y=P$ for some point $P \in \Bbb{A}^1$) and you can have equality (e.g. $X=\Bbb{A}^2$, $Y=V(y^2-x^3)$ and $P=(0,0)$).
H: Evaluate the following Riemann Stieltjes integral Let $\alpha(x) = 3[x]$ where $[x]$ is the greatest integer function. Evaluate $$\int_{0}^{2}{\alpha\left(\dfrac{x}{\sqrt{2}}+1\right) \mathrm{d}\alpha(x)}$$ Attempt: If I apply the same idea of evaluating R-S integrals w.r.t. jump functions, I get that $$\int_{0}^{2}{\alpha\left(\dfrac{x}{\sqrt{2}}+1\right) \mathrm{d}\alpha(x)} = \alpha\left(\dfrac{1}{\sqrt{2}}+1\right)\cdot [\alpha(1)-\alpha(0)] = 3\cdot 3 =9$$ My questions would be: (1) Evaluating the integral in the above fashion relies on the theorem which says that when $f$ is continuous and $\alpha(x)$ is the monotone jump function then $$\int_{a}^{b}{f \,\mathrm{d}\alpha} = \sum_{n=1}^{\infty}{c_n f(s_n)}$$ where the $c_n$ are the jumps of the function $\alpha(x)$ and the $s_n$ are the points at which the jumps occur. For the above integral $$f = \alpha\left(\dfrac{x}{\sqrt{2}}+1\right)$$ is not continuous, so the theorem would fail. Is the continuity of $f$ required? In the proof, continuity is used to establish that $f\in \mathscr{R}(\alpha)$. (2) If the theorem fails and I cannot apply it to the above integral, any other ideas? I tried using integration by parts; however, it circles back to the issue in (1) above. Thanks!!! AI: The theorem can be stated in a weaker form. Let $[a,b]$ be an interval and let $f,\alpha:[a,b]\to\Bbb R$, where $\alpha$ is a step function with a single jump at $c$: $$\alpha(x)=\begin{cases} \alpha(a) &a\leq x<c\\\alpha(b)&c< x\leq b\\\alpha(c)&x=c\end{cases}$$ Suppose $f,\alpha$ are such that at least one of $\alpha,f$ is continuous on the left of $c$ and one on the right of $c$. Then $f\in\mathscr R(\alpha)$ and $$\int_a^b f\,d\alpha=f(c)\cdot\left(\alpha(c^+)-\alpha(c^-)\right)$$ In particular, as is pointed out in the comments, if $f$ is continuous at the jumps and $\alpha$ has left or right jumps, the integral exists and is what you claim.
H: A counter-example of the second isomorphism theorem for topological groups Let $G$ be a topological group and $H$ and $N$ subgroups. Suppose $H$ is contained in the normalizer of $N$, then by using arguments of the second isomorphism theorem we can show that there is a canonical continuous isomorphism $$\phi:H/H\cap N\rightarrow HN/N$$ Are there cases in which this fails to be an actual homeomorphism? AI: Presumably $H$ and $HN$ take the subspace topology from $G$ and project it onto the quotient groups? Would the following be a counterexample? Let $G=\mathbb{R}$ be the additive group of reals, $H=\mathbb{Z}\cdot\sqrt2$ and $N=\mathbb{Q}$. Then $H\cap N$ is trivial, so $H/(H\cap N)$ inherits the discrete topology from $H$. On the other hand $N$ is dense in $G$, so $HN/N$ has only trivial closed sets, i.e. it has the trivial topology.
H: $f(z) = z^k + kz$ is injective in the unit disc How would I go about proving that $f(z) = z^k + kz$ is injective in the open unit disc in $\mathbb{C}$, for each natural $k$? I don't really know where to start. I observed that its derivative is nonzero, but that was all that I came up with. Any help is appreciated. Thanks. AI: If $z_1^k+kz_1=z_2^k+kz_2$ then $$(z_1-z_2)\left( z_1^{k-1}+z_1^{k-2}z_2+...+z_2^{k-1}+k\right)=0$$ Hint: if $z_1,z_2$ are in the open unit disk, then by the triangle inequality $$\left| z_1^{k-1}+z_1^{k-2}z_2+...+z_2^{k-1} \right| < k \,.$$
H: How to Tell If Matrices Are Linearly Independent If I have two matrices, for example: $\begin{pmatrix}1&0\\2&1 \end{pmatrix}$ and $\begin{pmatrix} 1&2\\4&3\end{pmatrix},$ how do I determine if they are linearly independent or not in $\mathbb{R}^4$? I am familiar with checking for independence with vectors, such as by checking that the determinant is non-zero, or using the definition of linear independence such as $a(1,2)+b(2,3)=(0,0)$ and checking whether $a=b=0$ is the only solution. AI: To show that two matrices are independent, you do exactly what you always do: if your matrices are $A$ and $B$, you want to show that $\alpha A+\beta B=0$ for $\alpha,\beta\in\mathbb{R}$ (or $\mathbb{C}$, depending) if and only if $\alpha=\beta=0$. Here, comparing the $(1,2)$ entries in $\alpha A+\beta B=0$ gives $2\beta=0$, and then comparing the $(1,1)$ entries gives $\alpha=0$, so your two matrices are linearly independent.
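Equivalently (a sketch assuming NumPy is available), you can flatten each matrix into a vector in $\mathbb{R}^4$ and check the rank of the stacked array:

```python
# Flatten the two 2x2 matrices to vectors in R^4; they are linearly
# independent iff the stacked 2x4 matrix has rank 2.
import numpy as np

A = np.array([[1, 0], [2, 1]])
B = np.array([[1, 2], [4, 3]])
stacked = np.vstack([A.ravel(), B.ravel()])
print(np.linalg.matrix_rank(stacked) == 2)  # True: independent
```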
H: isomorphic polynomial rings I'm certain that this is a dumb question, but I'll ask anyway. I know that if $\theta : F \to K$ is a field isomorphism then we get an induced isomorphism $\varphi:F[x] \to K[x]$ such that $\varphi|F = \theta$. We construct such a $\varphi$ by setting $\varphi|F = \theta$ and $\varphi(x)=x$. My question is the following: do we need to set $\varphi(x)=x$, or does this just follow? In other words, if we have an isomorphism $\varphi:F[x] \to K[x]$ such that $\varphi|F = \theta$ then do we necessarily have $\varphi(x)=x$? Thanks. AI: No, you could have an isomorphism that agrees with $\theta$ on $F$ yet sends $x$ to $-x$ (or to $ax$ for any fixed $a\in K-\{0\}$).
H: Combinatorial - Ways to create subcommittees of a certain size out of a committee? Each member of a 10-member committee must be assigned to exactly one of 3 subcommittees (management, supervisor, employee). If these subcommittees are to contain 1, 3, and 6 members respectively, how many different assignments of subcommittees can be appointed? I solved this by doing this: c(10,1) for management, c(9,3) for supervisor, c(6,6) for employees. c(10,1) * c(9,3) * c(6,6) = the number of different ways to arrange the subcommittees. Is this correct? If not, what did I do wrong and what's a better way? AI: Yes, indeed, your answer is correct. There is in fact a simpler way to approach this sort of problem: through the use of multinomial coefficients (see the two rightmost terms below): $$\binom{10}{1}\cdot\binom{9}{3} \cdot\binom 66 =\dfrac {10!}{\require{cancel}1!\cdot\color{blue}{\cancel{ 9!}}} \cdot \dfrac {\color{blue}{\cancel{9!}}}{\color{blue}{\cancel{6!}}\cdot 3!}\cdot \dfrac{\color{blue}{\cancel{6!}}}{6!} = \dfrac{10!}{1!\cdot 3!\cdot 6!} = \binom{10}{1, 3, 6}$$ Notice the nice cancellations in the factorials above! Do take a moment to read over the linked entry in Wikipedia: multinomial coefficients. Multinomial coefficients are perfectly suited to just this sort of problem, and simply save you some calculations.
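A one-line check in Python (3.8+, where `math.comb` is available) gives the count explicitly and confirms the multinomial shortcut:

```python
# The product of binomial coefficients equals the multinomial coefficient.
from math import comb, factorial

print(comb(10, 1) * comb(9, 3) * comb(6, 6))                          # 840
print(factorial(10) // (factorial(1) * factorial(3) * factorial(6)))  # 840
```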
H: Hausdorff Space that is a non Normal Hausdorff Space Can someone give me an example of a Hausdorff space (i.e., $T_2$) that is not a normal Hausdorff space (i.e., $T_4$)? AI: There are many examples. One simple one is the $K$-topology on $\Bbb R$. Let $K=\left\{\frac1n:n\in\Bbb Z^+\right\}$. Let $$\mathscr{B}=\{(a,b):a,b\in\Bbb R\text{ and }a<b\}\cup\{(a,b)\setminus K:a,b\in\Bbb R\text{ and }a<b\}\;;$$ then $\mathscr{B}$ is a base for the $K$-topology. This topology is finer than the usual one, so it is certainly Hausdorff. It is not regular, because in this topology $K$ is a closed set, $0\notin K$, and $0$ and $K$ cannot be separated by disjoint open sets. (Since points are closed here, a normal space would also be regular, so failing regularity rules out normality.) Another fairly simple example is the Sorgenfrey plane. This post from Dan Ma's Topology Blog gives much information on it, including a proof that it is not normal.
H: Finding the lowest common value in repeating sequences Assume I have $N$ sequences of ones and zeros. Each sequence repeats with its own period $p$. I want to find the minimum position where all sequences evaluate to "1". Here is an example set of sequences for $N = 3$ and $p = \{2, 3, 5\}$: $$ \begin{align} a_1 & = \{1, 0, 1, 0, ...\} \\ a_2 & = \{0, 1, 1, 0, 1, 1, ...\} \\ a_3 & = \{0, 1, 0, 0, 1, 0, 1, 0, 0, 1,...\} \\ \end{align} $$ Observing that $14 \equiv 0 \pmod 2$, $14 \equiv 2 \pmod 3$, and $14 \equiv 4 \pmod 5$, all of which correspond to "1"s in the above sequences, we can verify that $14$ is a possible solution. By simply extending the pattern we can find that the 14th element is indeed the first to yield "1" for each sequence. I attempted to solve this analytically by converting the patterns into closed-form equations; however, this seems impractical once $N$ or $p$ become non-trivial. I may be wrong, but I can't find a good method for solving large systems of such equations over the integers. For my problem it can be assumed that all sequence periods $p$ are coprime. It can also be assumed that $100 < N < p < 1000$. That is, I could be looking at 100 to 1000 sequences that have patterns that repeat every 1000 terms or so. Each pattern will have approximately $p/2$ "1"s. Therefore, trying to brute-force the solution by solving every set of congruences is not efficient. Is there a general name for this type of problem? What are some efficient ways of finding the lowest common term position? AI: In my view this problem is hopeless in terms of computing an actual answer. If you let $p_i$ be an index where $a_i$ is 1 ($1\le p_i\le p$), and you do this for all $i$ ($1\le i\le N$), then you can solve this specific problem with the CRT. However there are over $50^N$ such problems that you are minimizing over, and the function you're trying to minimize is not particularly natural here, since $\mathbb{Z}/n\mathbb{Z}$ has no innate order, and you need to project to $\mathbb{N}$ to exploit the order there. The simple method of just computing all the terms and waiting for all 1's won't work, because it will take on average $2^N$ steps to completion. Induction won't help, because the optimal solution for $N$ sequences need have nothing at all to do with the optimal solution for $N+1$ sequences.
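For a single choice of residues, the CRT step mentioned above is easy to code (a sketch; Python 3.8+ for the modular inverse via `pow`), and it reproduces the example value $14$:

```python
# Combine one residue per sequence with the Chinese Remainder Theorem,
# assuming the moduli are pairwise coprime as stated in the question.
from math import prod

def crt(residues, moduli):
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # modular inverse of Mi mod m
    return x % M

print(crt([0, 2, 4], [2, 3, 5]))  # 14, matching the worked example
```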
H: Sampling 100 widgets to test for defective ones Given 100 widgets, the probability of a widget being defective is $\frac{1}{2}$. Let $A$ be the event that $k$ sampled widgets are all functioning properly, for $0\leq k\leq 100$, and let $B$ be the event that $6$ or more of the $100$ widgets are defective. What is the minimum number of widgets $k$ which must be sampled to ensure that $P(A\cap B)< .1$? We can write $P(A\cap B)=P(A\cap (B_6\;\cup ...\cup\; B_{100})\;)$, where $B_i$ is the event that exactly $i$ widgets are defective. Now since the $B_i$ are disjoint, we can write this as $$P(\; (A\cap B_6)\;\cup ...\cup\;(A\cap B_{100})\;)=\sum_{i=6}^{100}P(A\cap B_i)=\sum_{i=6}^{100}P(A \mid B_i)P(B_i).$$ Now by the hypergeometric distribution we know that $$P(A\mid B_i)=\frac{\binom{100-i}{k}}{\binom{100}{k}}\;.$$ And $$P(B_i)=\frac{\binom{100}{i}}{2^{100}}\;.$$ Thus we obtain $$\sum_{i=6}^{100}P(A \mid B_i)P(B_i)=\sum_{i=6}^{100}\frac{\binom{100-i}{k}}{\binom{100}{k}}\frac{\binom{100}{i}}{2^{100}}=\frac{1}{2^{100}}\sum_{i=6}^{100}\binom{100-k}{i}\;.$$ However, the answer in the back of the book is $k=32$, which when I plug it into my formula gives me a number which is far too small to be correct. Have I gone wrong somewhere in my reasoning? AI: Given that each widget is defective with probability $\frac 12$, $P(B)$ is very close to $1$, so let's just concentrate on getting $P(A) \lt 0.1$. Now each widget is a coin flip, so even four tests give a chance of only $\frac 1{16}$ of none being defective. $k=32$ is clearly wrong. The binomial approximation gives a standard deviation of $\sqrt{100(\frac 12)^2}=5$, so at the $2$ SD level there are at least $40$ defectives, and we need only have $0.6^k \lt 0.1$, which requires $k \gt 4.5$.
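Evaluating the asker's closed form directly (a sketch using `math.comb`, which returns $0$ when $i > 100-k$) supports this: already a modest $k$ drives $P(A\cap B)$ below $0.1$, so $k=32$ is far larger than needed:

```python
# P(A and B) = 2^(-100) * sum_{i=6}^{100} C(100 - k, i), per the derivation.
from math import comb

def p_a_and_b(k: int) -> float:
    return sum(comb(100 - k, i) for i in range(6, 101)) / 2**100

for k in (0, 5, 10, 32):
    print(k, p_a_and_b(k))
```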
H: What is the correct order when multiplying both sides of an equation by matrix inverses? So my question is: let's say you were asked to solve for $A$, and you have something like this: $$BAC=D$$ where $B$, $C$, and $D$ are matrices. The way I would solve this would be to multiply both sides by $B^{-1}$ and $C^{-1}$ (the inverses of $B$ and $C$), but since the order of multiplication matters, $A = DB^{-1}C^{-1}$ would be different from $A = B^{-1}DC^{-1}$. My question (maybe stupid, or I am just missing something) is: how do you know which order is the correct one? AI: You must be sure to multiply on the correct side. To get rid of the $B$ in $BAC$, you must multiply on the left by $B^{-1}$, so you must do the same on the righthand side of the equation: $$AC=B^{-1}BAC=B^{-1}D\;.$$ To get rid of the $C$ in $AC$, you must multiply $AC$ on the right by $C^{-1}$, so you must do the same thing on the other side of the equation: $$A=ACC^{-1}=B^{-1}DC^{-1}\;.$$
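A quick numeric spot-check (a sketch with random matrices, assuming NumPy; random Gaussian matrices are invertible with probability 1) confirms that only the order $B^{-1}DC^{-1}$ recovers $A$:

```python
# Verify that A = B^{-1} D C^{-1} satisfies B A C = D, while the
# wrongly ordered D B^{-1} C^{-1} generally does not.
import numpy as np

rng = np.random.default_rng(1)
B, C, D = rng.normal(size=(3, 3, 3))   # three random 3x3 matrices
A_good = np.linalg.inv(B) @ D @ np.linalg.inv(C)
A_bad = D @ np.linalg.inv(B) @ np.linalg.inv(C)
print(np.allclose(B @ A_good @ C, D))  # True
print(np.allclose(B @ A_bad @ C, D))   # False (almost surely)
```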
H: Using the definition of big-oh notation, show that for any $k,\gamma>1$, $n^k=O(\gamma^n)$. This question was on my midterm in a course I took last year: Prove that for any $k,\gamma>1$, $n^k=O(\gamma^n)$. Intuitively, this makes sense. Even the fastest exponential algorithm (for example, $1.001^n$) will eventually take a longer time to finish than the slowest polynomial algorithms (for example, $n^{999}$). Apparently, the easiest way to do this question is to show that: $$ \lim_{n\to\infty}\dfrac{n^k}{\gamma^n}=0 $$ by applying L'Hopital's Rule $\lceil k \rceil$ times. However, I want to try to prove this using the definition of big-oh notation. That is, I want to explicitly come up with some constants $C,n_0$ such that: For all $n \ge n_0$, $0 \le n^k \le C\gamma^n$. I've tried some small examples, hoping that the process would generalize (for example, I can show that $n^3<2^n$ for any integer $n\ge10$ using induction, although I don't know how to prove this for all real numbers). I've tried to prove the easier problem of showing that $n^k \le k^n$ for large enough $n$ by using induction and the binomial theorem, but that was a mess. I've tried working backwards and taking logs of both sides, with no luck. I'd appreciate your thoughts. Apologies if this is a duplicate; this is likely not the first time this question has been asked. AI: We show that there is a constant $C$ such that if $n\ge 1$ then $n^k \lt C\gamma^n$. A preliminary simplification is useful. It is enough to show that there is a constant $D$ such that if $n\ge 1$, then $n\lt D(\gamma^{1/k})^n$. Note that $\gamma^{1/k}\gt 1$. Call it $1+\delta$. By the Bernoulli Inequality we have $(1+\delta)^n \ge 1+\delta n \gt \delta n$. Thus if we take $D=1/\delta$ and $n\ge 1$, then $n \lt D(1+\delta)^n = D(\gamma^{1/k})^n$, and raising both sides to the $k$-th power gives $n^k\lt D^k\gamma^n$.
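To see the inequality kick in numerically (a sketch; the exponent $3$ and base $1.001$ are just sample values echoing the intuition above), one can scan for the last $n$ at which $n^3 \ge 1.001^n$:

```python
# Find the last n below 100000 where n^3 still beats 1.001^n; beyond it,
# the exponential dominates (the threshold is around n = 31000).
last = max(n for n in range(1, 100000) if n**3 >= 1.001**n)
print(last, (last + 1)**3 < 1.001**(last + 1))
```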
H: Notation in propositional logic If in propositional logic one is trying to simplify a formula by evaluating its subformula, would it be considered an abuse of notation to actually substitute the bits $\{0,1\}$ in for the formula, to say something like "$0\wedge 1\equiv 0$" or "$0\wedge 1=0$". AI: Judging by the comments it seems that what you are trying to do is fine. I think that it would be slightly clearer to use some predetermined symbols for "True" and "False", e.g. $\top$ and $\perp$ or $\Bbb T$ and $\Bbb F$. This makes it easier to understand that you're talking about truth values. Of course, if the context is clear that $0$ and $1$ are truth values, then writing those is fine as well. Let me add my usual advice, be sure to write something like this at the start: Given a proposition $A$ and an assignment $\sigma$, we shall write $1$ if $A$ is true in that assignment, and $0$ otherwise. This way you can hint the reader how you are going to abuse them. Or the notation.
H: If $f_i(X)$ is connected for all $i=1,2,...,n$ then $X$ is connected. For each $i\in\{1,...,n\}$, consider the map $f_i:\mathbb{R}^n\to \mathbb{R}$ defined by $f_i(x)=x_i$ for all $x=(x_1,...,x_n)\in\mathbb{R}^n$. I would like to know if the following statement is true or false: if $f_i(X)$ is connected for all $i=1,...,n$, then $X$ is connected. AI: No. Let $X$ be the graph of $y=\sqrt{1-x^2}$ together with the origin, a closed semicircle together with the centre of the circle. The projections are $[-1,1]$ on the $x$-axis and $[0,1]$ on the $y$-axis; these are both connected, but $X$ is not: every point of the semicircle lies at distance $1$ from the origin, so the origin is an isolated point of $X$.