H: A bishop on a grid Suppose that we have an $n\times m$ chessboard and a bishop on the square $(1,1)$. It starts to move diagonally with the following rules: If the bishop is in any corner square except $(1,1)$, it stops moving. If the bishop meets a boundary square which is not a corner, it changes direction, continuing in the only direction that is not the opposite of the path just traveled (i.e. it will travel along positions $(3,1) \to (2,2) \to (1,3) \to (2,4) \to \cdots$). Is it possible to determine where the bishop will stop and its path? I found the case $\min\{n,m\}\not\mid\max \{n,m\}$ hard. AI: A hint: Denote the centers of the squares by $(i,j)$ with $i$, $j$ integers $\geq0$. The size of the board is then $n'\times m'$ with $n'=n-1$, $\>m'=m-1$, and the starting point is $(0,0)$. Replace the board with its universal cover, i.e. the integer lattice ${\mathbb Z}^2$, and mark all points $(k n', l m')$, $\ k,l\in{\mathbb Z}$ with at least one of $k$, $l$ odd, with a black star. The bishop's complicated travel can now be "lifted" to ${\mathbb Z}^2$ and appears as a straight line. In this way it becomes easier to analyze under which circumstances this journey will hit a "forbidden corner" or will become periodic.
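The bouncing dynamics described in the question are easy to simulate directly, which is a handy cross-check for anything derived from the universal-cover picture. A minimal sketch (the function name and the step cap are my own; the periodicity guard covers a return to $(1,1)$):

```python
def bishop_stop(n, m, max_steps=10**6):
    """Simulate the bishop on an n x m board, starting at (1, 1) moving (+1, +1).

    Returns the corner where it stops, or None if it revisits (1, 1)
    (a periodic path) or the step budget runs out.
    """
    r, c, dr, dc = 1, 1, 1, 1
    for _ in range(max_steps):
        # Reflect off any wall the next step would cross.
        if not 1 <= r + dr <= n:
            dr = -dr
        if not 1 <= c + dc <= m:
            dc = -dc
        r, c = r + dr, c + dc
        if (r, c) in {(1, m), (n, 1), (n, m)}:
            return (r, c)      # stopping corner
        if (r, c) == (1, 1):
            return None        # back at the start: the path is periodic
    return None

print(bishop_stop(2, 3))  # (1, 3)
print(bishop_stop(4, 4))  # (4, 4)
```

For example, on a $2\times 3$ board the path is $(1,1)\to(2,2)\to(1,3)$, matching the reflection rule in the question.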
H: Trigonometry - How do I simplify this expression? We have the expression $$ 13 \sin [ \tan ^{-1} (\dfrac{12}{5}) ] $$ Apparently the answer is 12, and I have to simplify it, and I'm assuming it means I have to show it's 12, without using a calculator. Normally I show my own work in the questions, but in this case I have absolutely no clue how to. The only thing I know that might help is that $\tan(x) = \dfrac{\sin(x)}{\cos(x)}$. AI: Hint: There is a right triangle with side lengths $5$, $12$ and $13$. Draw this triangle, and choose one of the two non-right angles $t$ for which $$\tan{t} = \frac{12}{5}$$ Recall that the tangent is the opposite side over the adjacent side.
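A quick numerical sanity check of the 5-12-13 triangle argument (not a proof, just confirmation that $\sin(\arctan(12/5)) = 12/13$):

```python
import math

# By the 5-12-13 right triangle, sin(arctan(12/5)) = 12/13,
# so 13 * sin(arctan(12/5)) = 12.
value = 13 * math.sin(math.atan(12 / 5))
print(value)
```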
H: On subspace verification I am struggling with the following problem: \begin{align}Y= \lbrace (x^4-y^4,0,0,0) \mid x,y \in \mathbb{R} \rbrace \subset \mathbb{R}^4 \end{align} Question: is the given set a subspace of $\mathbb{R}^4$? (Answer given by my tutor: Yes) I thought about it as follows \begin{align}(x^4-y^4,0,0,0)=\underbrace{(x^4,0,0,0)}_{:=u}+\underbrace{(-y^4,0,0,0)}_{:=v} \end{align} I defined two new vectors, because the notation $x,y$ is a bit confusing for me (usually the statements only depend on $x$). Then I could say that $u \in Y$ and $v \in Y$ and clearly, the sum would be in $Y$ too. Intuitively that seems fine to me, but I have the daunting feeling that I am being cyclic in this problem or making things too easy for myself. However, if I continue this way I would also need to verify that if $u \in Y$ then $\lambda u$ must be in $Y$ too. Here I see the following problem: if $u \in Y$, then applying $\lambda$ to this vector would scale the $x$-component, which seems to stop $Y$ from being a subspace. I used WolframAlpha to visualize my idea: http://www.wolframalpha.com/input/?i=%28x%5E4-y%5E4%2C0%2C0%29+ (before lambda) http://www.wolframalpha.com/input/?i=%285x%5E4-y%5E4%2C0%2C0%29+ (after lambda being applied) AI: This just has to do with the fact that $f\colon\mathbb{R}\rightarrow\left[0,\infty\right)$ with $f\left(a\right)=a^{4}$ is onto. Suppose $u,v\in Y$. Then \begin{align*} u+v & =\left(x_{u}^{4}-y_{u}^{4},0,0,0\right)+\left(x_{v}^{4}-y_{v}^{4},0,0,0\right)\\ & =\left(\underbrace{\left(x_{u}^{4}+x_{v}^{4}\right)}_{r_{1}}-\underbrace{\left(y_{u}^{4}+y_{v}^{4}\right)}_{r_{2}},0,0,0\right) \end{align*} Since $r_{1},r_{2}\in\left[0,\infty\right)$, we can take $x=r_{1}^{1/4}$ and $y=r_{2}^{1/4}$ to get $$ u+v=\left(x^{4}-y^{4},0,0,0\right)\in Y. $$ Scalar multiples work the same way: for $\lambda\geq0$ we have $\lambda\left(x^{4}-y^{4}\right)=\left(\lambda^{1/4}x\right)^{4}-\left(\lambda^{1/4}y\right)^{4}$, and for $\lambda<0$ swap the roles of $x$ and $y$, using $\left|\lambda\right|^{1/4}$. In fact $x^{4}-y^{4}$ takes every real value, so $Y=\lbrace\left(t,0,0,0\right)\mid t\in\mathbb{R}\rbrace$, a line through the origin and hence visibly a subspace.
H: Why can't you count real numbers this way? Sorry but this is probably a naive question. Why can't you generate real numbers by a*10^b, the same way as rational numbers by a/b? a and b could be integers so that you would start counting real numbers like: a\b 0 1 -1 2 -2 0 0 0 0 0 0 1 1 10 0.1 100 0.01 -1 -1 -10 -0.1 -100 -0.01 2 2 20 0.2 200 0.02 -2 -2 -20 -0.2 -200 -0.02 That would just take all of the integers and also apply a decimal point anywhere on those integers, thus making the real numbers no? Which ones would be missing? Plus I don't understand the diagonal argument because the real number set is infinite, so surely the diagonal would just go on forever so you can never check them all since there will be more and more, never ending. AI: The numbers that would be missing from this scheme would include every single irrational number, and many rationals, too. Specifically, this scheme includes only the rationals with terminating decimal representations. The upshot of the diagonal argument has nothing to do with checking. What we do is take a countable list of real numbers, and then construct a real number that is not on that list (it will be real by completeness of the reals), by making sure that it fails to match every number on the list in (at least) one decimal place. This argument works for any countable list of real numbers, meaning that no countable list of real numbers will include every real number, meaning that the set of all reals is uncountable.
H: Finding the limit of $\frac{\sqrt{x}}{\sqrt{x}+\sin\sqrt{x}}$ How would one find the limit of $\displaystyle\lim_{x\to 0}\frac{\sqrt{x}}{\sqrt{x}+\sin\sqrt{x}}$ I know I have to use the L'Hospital rule. $\displaystyle\lim_{x\to 0}\frac{\frac{1}{2}x^{-1/2}}{\frac{1}{2}\frac{1}{\sqrt{x}}+\frac{1}{2}\frac{1}{\sqrt{x}}\cos\sqrt{x}}$ But I find myself stuck AI: Continuing from where you left off: Simply cancel the common factor of $\frac {1}{2 \sqrt x}$ from numerator and denominator: $$\frac{\frac{1}{2\sqrt x}}{\frac{1}{2}\frac{1}{\sqrt{x}}+\frac{1}{2}\frac{1}{\sqrt{x}}\cos\sqrt{x}} = \dfrac 1{1 + \cos \sqrt x}$$ Now evaluate the limit as $x \to 0$. You should arrive at a limit of $\dfrac 12$.
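As a numerical cross-check of the value $\frac12$ (my own illustration, evaluating the expression at a few small $x$):

```python
import math

def f(x):
    # the expression sqrt(x) / (sqrt(x) + sin(sqrt(x)))
    return math.sqrt(x) / (math.sqrt(x) + math.sin(math.sqrt(x)))

for x in [1e-2, 1e-4, 1e-8]:
    print(x, f(x))  # the values approach 0.5 as x -> 0+
```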
H: Question about SOT and compact operators I need some help with functional analysis / Hilbert space theory. If you have a favorite text to recommend, please let me know~ Here is my question: Given $v_t$ be the "squeeze operator" on $H=L^2[0, 1]$, where $v_t: L^2[0,1] \to L^2[0, \frac{2-t}{2}]$ acts on $f \in L^2[0,1]$ by squeezing the domain of the function. We have that $\{v_t\}$ for $t \in [0, 1]$ is a family of SOT continuous operators. I am wondering why given any $p \in \mathbb{K}(H)$, we have $\{ v_tpv_t^* \}$ is continuous in norm. I found the following facts on a reference suggested by Wikipedia (Hilbert Space Operators in Quantum Physics), but I am not sure how to prove them or how I may use them... $T_n \to^{SOT} T$ implies that for any $p \in \mathbb{K}(H)$, we have $T_np \to Tp$ in norm. Hints or suggestions would be greatly appreciated! AI: This is essentially the definition of the strong operator topology; the SOT is the topology generated by the evaluation maps $\mathcal{L}(X,Y) \ni T \mapsto Tx \in Y$ (for a fixed $x$). We want all of these maps to be continuous (w.r.t. the topology induced by the norm on $Y$), or equivalently that $T_n \to T$ in this topology on $\mathcal{L}(X,Y)$ if and only if $T_nx \to Tx$ in the topology on $Y$. Other good references off the top of my head: Folland Real Analysis, ch. 5; Rudin Functional Analysis, maybe Reed and Simon Functional Analysis. EDIT: ok, I misunderstood the question a bit. If $T_n \to T$ strongly, and $A$ is compact, we need to show that $$ \sup_{x \in B_X}\left||(T_nA - TA)x \right|| \to 0 $$ where $B_X = \{x\in X: |x| = 1\}$. Equivalently, $$ (*):\quad\quad\sup_{y \in A(B_X)}\left||(T_n - T)y \right|| \to 0. $$ Now, we know that, for each $y$, $||(T_n - T)y|| \to 0$, but things could go wrong if this doesn't happen uniformly in $Y$. But fortunately, $A(B_X)$ is compact, so we have better control of the convergence. 
In particular, given a fixed $\epsilon$ for each $y \in A(B_X)$, there is a neighborhood of $y$ and $N_y$ so that $||(T_n - T)\tilde y|| < \epsilon$ when $\tilde y$ is in this neighborhood, and when $n > N_y$. Then, because the set is compact, a finite number of these neighborhoods cover $A(B_X)$, and taking $N = \max\{N_y\}$ among the finite number of $N_y$'s associated to these neighborhoods demonstrates $(*)$.
H: Is the proof of the claim correct? Is the claim true? We say that an integer a is divisible by the nonzero integer b, if a = bc for some integer c. When a is divisible by b, we write b | a and say b divides a. Claim: Let a and b be nonzero integers. If a | b and b | a, then a = b. AI: The claim is false: $2\mid (-2)$ and $(-2)\mid 2$. I hope you agree that $2\ne-2$. The error in the proof is when from $k_0k_1=1$ you argue $k_0=1$. It might be $k_0=k_1=-1$. The claim would be true if natural numbers are considered, rather than integers. The “nonzero” clause is not needed. If $a,b\in\mathbb{N}$, with $a\mid b$ and $b\mid a$, then we have $$ b=ax,\quad a=by $$ whence $$ a=axy $$ We have two cases: If $a=0$, then $ax=b=0$, so $a=b$. If $a\ne0$, from $a=axy$ we deduce $xy=1$, so $x=y=1$, therefore $a=by=b$. Adapt to that style of proof.
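Both halves of the answer can be brute-force checked over a small range; a sketch (the `divides` helper and the convention that $0\mid a$ only when $a=0$ are my own):

```python
def divides(b, a):
    """b | a, with the convention that 0 | a only when a == 0."""
    return a == 0 if b == 0 else a % b == 0

# Counterexample over the integers: 2 | -2 and -2 | 2, yet 2 != -2.
assert divides(2, -2) and divides(-2, 2)

# Over the natural numbers, mutual divisibility does force equality.
for a in range(30):
    for b in range(30):
        if divides(b, a) and divides(a, b):
            assert a == b
print("checked")
```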
H: A question about convex sets I need to prove that the closed set $C\subseteq \mathbb{R}_{+}$ is convex. Let $x$, $y$ be arbitrarily given in $C$; I have proved that $1/2(x+y)\in C$. Does this mean that $C$ is convex? AI: If $C$ is a closed subset of $\Bbb R_+,$ then it does indeed suffice to prove that $C$ is midpoint-convex. Indeed, suppose that $C$ were not convex, so that there exist $x,y\in C$ such that $tx+(1-t)y\notin C$ for some $0<t<1$. Without loss of generality, suppose that $x<y,$ and put $z=tx+(1-t)y$, so that $x<z<y.$ Since the complement of $C$ is open and $x,y\in C,$ then $z$ lies in some open interval $(a,b)$ disjoint from $C$ with $a,b\in C.$ (Why?) But then $\frac{a+b}2\notin C,$ so $C$ is not midpoint-convex. Thus, midpoint-convex implies convex by contrapositive (convex of course implies midpoint-convex). Now, it's worth noting that not all closed subsets of $\Bbb R_+$ will be (midpoint-)convex. Consider $\{1,2\},$ for example. Given that, I must assume that one of the following is true: You've been given a particular closed set $C$ that you just haven't specified. You've been told that $C$ is an arbitrary closed subset of $\Bbb R_+,$ asked to prove or disprove that it is convex, and made a mistake in your "proof" of midpoint-convexity. You've been given an arbitrary closed midpoint-convex set, and asked to prove that it is convex. In the first case, there's no problem, since midpoint-convex implies convex for closed subsets of $\Bbb R_+$ using the proof outlined above. In the second case, just use the above counterexample (and take a good hard look at your "proof" to see where you went wrong). In the third case, you need only complete the proof outlined above.
H: Probability of a random graph being bipartite We start from an "empty" graph with $n$ vertices standing alone. Each vertex has $s$ chances to choose one vertex per chance as its neighbor, uniformly and independently from the $n$ vertices, including itself, with replacement. A vertex chooses its neighbors one by one. So a vertex can choose itself, and can choose a vertex many times. The graph is directed so if $u$ chooses $v$ but $v$ doesn't choose $u$, then $(u,v)\in E$ but $(v,u) \not\in E$ . $s \ge 2$ is a constant integer. I want to show when $n\to +\infty$, it is unlikely that we will get a bipartite graph. That is, for $G(V,E)$, $\exists A \subset V$, all edges in $E$ are between $A$ and $V-A$ and no edge in $E$ is inside $A$ or inside $V-A$, which should be unlikely. I did some experiments with MATLAB and it seems the probability could be exponentially small. (Please don't be affected by my result; it could be wrong.) However, I only want to show it goes to $0$ faster than $O(n^{-1})$. Thank you! AI: The usual way to do this is as follows: Suppose that your graph IS bipartite. Then there is a partition $[n]=A\cup B$, $A\cap B=\varnothing$, such that all edges in the graph have one end in $A$ and one end in $B$. Let $S$ be the number of such partitions -- that is, the number of ways we can write $[n]=A\cup B$, $A\cap B=\varnothing$, such that there are no edges inside $A$ and no edges inside $B$. Then the probability that your graph is bipartite is precisely $P(S>0)$. But, by Markov's inequality, we know that $$ P(S>0)=P(S\geq 1)\leq\frac{\mathbb{E}[S]}{1}=\mathbb{E}[S], $$ where $\mathbb{E}$ denotes the expectation. But, since expectations break up over addition, we have $$ \mathbb{E}[S]=\sum_{\substack{[n]=A\cup B\\A\cap B=\varnothing}}P(\text{no edges within $A$ or $B$}). $$ Consider this probability. This is equivalent to saying "No vertex in $A$ chooses a neighbor in $A$ and no vertex in $B$ chooses a neighbor in $B$". 
Since the choices are made independently, this simplifies a lot: $$ P(\text{no edges within $A$ or $B$})=\prod_{v\in A}P(\text{$v$ chooses no neighbors in $A$})\cdot\prod_{v\in B}P(\text{$v$ chooses no neighbors in $B$}) $$ For a fixed $v\in A$ and a fixed set $A$, $$ P(\text{$v$ chooses no neighbors in $A$})=\left(\frac{b}{n}\right)^{s}, $$ where $\newcommand{\order}[1]{\lvert #1 \rvert}b:=\order{B}$. Why? Because $v$ not choosing elements of $A$ means that every one of its $s$ choices lands in $B$, and the choices are independent and uniform, so each lands in $B$ with probability $b/n$. (Note that the $s$ choices form an ordered sequence of independent draws, so all $n^s$ sequences are equally likely; counting unordered multisets here would be incorrect, since multisets are not equiprobable.) We get a similar result for $v\in B$, except using $a:=\order{A}$ in place of $b$. So, this says $$ \begin{align*} \mathbb{E}[S]&=\sum_{a+b=n}\sum_{\substack{[n]=A\cup B\\\order{A}=a,\order{B}=b}}\left(\frac{b}{n}\right)^{sa}\left(\frac{a}{n}\right)^{sb}\\ &=\sum_{a=1}^{n-1}\binom{n}{a}\left(\frac{b}{n}\right)^{sa}\left(\frac{a}{n}\right)^{sb},\qquad b=n-a. \end{align*} $$ Here, we have used the fact that determining $a$ determines $b$, and determining $A$ determines $B$... and the fact that there are $\binom{n}{a}$ ways to choose $A$, given $a$.
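For tiny $n$, both $P(\text{bipartite})=P(S>0)$ and the first moment $\mathbb{E}[S]$ can be computed exactly by exhaustive enumeration under the stated model (every one of the $(n^s)^n$ choice-sequence profiles equally likely), which gives a concrete check of the Markov bound. A sketch; the helper names are my own:

```python
from itertools import product
from fractions import Fraction

def no_internal_edges(choices, part):
    """True if every chosen edge crosses the 0/1 partition `part`."""
    return all(part[u] != part[v] for v, picks in enumerate(choices) for u in picks)

def is_bipartite(choices, n):
    # bipartite iff some partition has no internal edges
    return any(no_internal_edges(choices, part) for part in product((0, 1), repeat=n))

n, s = 3, 2
single = list(product(range(n), repeat=s))   # one vertex's s ordered choices
configs = list(product(single, repeat=n))    # all (n^s)^n equally likely profiles

p_bip = Fraction(sum(is_bipartite(c, n) for c in configs), len(configs))
e_s = Fraction(sum(no_internal_edges(c, part)
                   for c in configs
                   for part in product((0, 1), repeat=n)), len(configs))
print(p_bip, e_s)  # Markov's inequality guarantees p_bip <= e_s
```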
H: What is the difference between these two definitions of a $T_3$ space? Definition (1): A space $X$ is $T_3$ iff for any closed set $F$ and a point not in $F$, there exist non-overlapping open neighborhoods. Definition (2): A space $X$ is $T_3$ iff for any nonempty closed set $F$ and a point not in $F$, there exist non-overlapping open neighborhoods. First of all, Wikipedia refers to the second definition. However, ProofWiki refers to the first definition. And I don't understand why Wikipedia specifically restricted the definition to nonempty closed sets. Which one is widely accepted? Moreover, what's a regular space? Many texts define it differently from one another. Is it just $T_3$ generally? AI: Both definitions are equivalent. For $F=\varnothing$ the condition holds trivially (take $\varnothing$ and $X$ as the separating open sets), so it is redundant to restrict to nonempty closed sets. According to Kelley, the condition you stated is the definition of a regular space, and a regular $T_1$ space is a $T_3$ space.
H: How can I prove by induction that $9^k - 5^k$ is divisible by 4? Recently had this on a discrete math test, which sadly I think I failed. But the question asked: Prove that $9^k - 5^k$ is divisible by $4$. Using the only approach I learned in the class, I substituted $n = k$, and tried to prove for $k+1$ like this: $$9^{k+1} - 5^{k+1},$$ which just factors to $9 \cdot 9^k - 5 \cdot 5^k$. But I cannot factor out $9^k - 5^k$, so I'm totally stuck. AI: $$\begin{align} 9\cdot 9^k - 5\cdot 5^k & = (4 + 5)\cdot 9^k - 5\cdot 5^k \\ \\ & = 4\cdot 9^k + 5 \cdot 9^k - 5\cdot 5^k \\ \\ & = 4\cdot 9^k + 5(9^k - 5^k)\\ \\ & \quad \text{ use inductive hypothesis}\quad\cdots\end{align}$$
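The divisibility and the algebraic identity behind the inductive step can both be machine-checked over a range of exponents (a confirmation, not a substitute for the induction proof):

```python
# Check divisibility directly, and the identity used in the inductive step:
#   9^(k+1) - 5^(k+1) = 4 * 9^k + 5 * (9^k - 5^k)
for k in range(100):
    assert (9**k - 5**k) % 4 == 0
    assert 9**(k + 1) - 5**(k + 1) == 4 * 9**k + 5 * (9**k - 5**k)
print("verified for k < 100")
```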
H: Two polynomials $r_1, r_2 \in R[X]$ are equal if and only if the coefficients $a_i, b_i$ are equal for all $i, 0 \leq i \leq n$ - Purely a definition? I've read that two polynomials $r_1, r_2 \in R[X]$ of the form $r = a_nX^n + ... + a_1X + a_0$ are equal if and only if the coefficients of $r_1, r_2$: $a_i, b_i$ are equal for all $i, 0 \leq i \leq n$. Here $R[X]$ denotes the polynomial ring on $R$, where $R$ is a ring. Is this purely a definition or can I prove this? Can two polynomials with different coefficients achieve the same values for all $x \in R$? I can write $a_nX^n + ... a_1X + a_0 = b_nX^n + ... b_1X + b_0$, set $X = 0$ and deduce $a_0 = b_0$. Then $a_nX^n + ... a_1X = b_nX^n + ... b_1X$ implies $X(a_nX^{n-1} + ... a_1) = X(b_nX^{n-1} + ... b_1)$ but now I can't conclude $a_1 = b_1$? Thanks AI: Yes, this is purely by definition. If you were to formally define/construct polynomials, you'd define $R[X]$ as the set of functions $r \colon {\mathbb N} \to R$ that have only finitely many non-zero values (equipped with the suitable structure of a ring). The values $r(i)$ are the ''coefficients'' and two such functions are equal if all their ''coefficients'' are equal. As for your second question: yes, in general two polynomials with different coefficients can take the same value at every $x \in R$. For example, over the two-element field $\mathbb{F}_2$ the distinct polynomials $X$ and $X^2$ induce the same function, since $0^2=0$ and $1^2=1$. Over an infinite integral domain such as $\mathbb{R}$ this cannot happen: if $r_1(x)=r_2(x)$ for all $x$, then $r_1-r_2$ is a polynomial with infinitely many roots, hence the zero polynomial, so the coefficients agree.
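The finite-ring phenomenon is small enough to verify by hand or by machine; a tiny illustration over $\mathbb{F}_2=\{0,1\}$ (my own example, not from the question):

```python
# Over F_2, the polynomials X and X^2 have different coefficient lists
# ([0, 1] versus [0, 0, 1]) but induce the same function, since 0^2 = 0
# and 1^2 = 1. So "equal coefficients" is strictly stronger than
# "equal as functions" over a finite ring.
F2 = [0, 1]
same_function = all((x % 2) == (x * x % 2) for x in F2)
print(same_function)
```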
H: For $x>0$, $x + \frac1x \ge 2$ and equality holds if and only if $x=1$ Prove that for $x>0$, $x + \frac1x \ge 2$ and equality holds if and only if $x=1$. I have proven that $x+ \frac1x \ge 2$ by re-writing it as $x^2 -2x +1 \ge0$ and factoring to $(x-1)^2\ge0$ which is true because you cannot square something and it be negative. Now I am stuck on the part where I have to prove equality to hold if and only if $x=1$. Any suggestions? AI: You want to show that $x+\frac{1}{x}=2$ if and only if $x=1$. Continuing what you did, we have $(x-1)^2=0$ if and only if $x-1=0$ if and only if $x=1$.
H: Partitioning techniques for finding large matrix determinants I'm in a linear algebra class and we're doing determinants right now. I got this matrix to do: $\begin{matrix} 2 & 1 & 0 & 0 & 0 \\ 3 & -1 & 2 & 0 & 0 \\ 0 & 4 & 1 & -1 & 2 \\ 0 & 0 & -3 & 2 & 4 \\ 0 & 0 & 0 & -1 & 3 \end{matrix}$ It wouldn't be hard to solve via row reduction but I hate to see so many 0s in such a near symmetric form go to waste... I had problems close to the one I posted above where I figured out through googling about Schur complements, which is convenient if you can cancel the ugly term! This one doesn't have the proper 0 positioning for that though, although it's very close. What's a nice way to solve this without row reduction? AI: This one isn't going to be very friendly, but aside from row-reduction (really the best method) a not-too-terrible way to proceed would be as follows: Start by cofactor expansion along the first row, so that $$\left|\begin{matrix} 2 & 1 & 0 & 0 & 0 \\ 3 & -1 & 2 & 0 & 0 \\ 0 & 4 & 1 & -1 & 2 \\ 0 & 0 & -3 & 2 & 4 \\ 0 & 0 & 0 & -1 & 3 \end{matrix}\right|=2\cdot\left|\begin{matrix} -1 & 2 & 0 & 0 \\ 4 & 1 & -1 & 2 \\ 0 & -3 & 2 & 4 \\ 0 & 0 & -1 & 3 \end{matrix}\right|-1\cdot\left|\begin{matrix} 3 & 2 & 0 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & -3 & 2 & 4 \\ 0 & 0 & -1 & 3 \end{matrix}\right|$$ The second $4\times 4$ cofactor matrix is very friendly, and we find that $$\begin{align}\left|\begin{matrix} 3 & 2 & 0 & 0 \\ 0 & 1 & -1 & 2 \\ 0 & -3 & 2 & 4 \\ 0 & 0 & -1 & 3 \end{matrix}\right| &= 3\cdot\left|\begin{matrix}1 & -1 & 2 \\ -3 & 2 & 4 \\ 0 & -1 & 3 \end{matrix}\right|\\ &= 3\cdot\bigl(6+0+6-0-9-(-4)\bigr)\\ &=3\cdot 7\\ &= 21,\end{align}$$ using the general formula for determinant of a $3\times 3$ matrix, so that $$\left|\begin{matrix} 2 & 1 & 0 & 0 & 0 \\ 3 & -1 & 2 & 0 & 0 \\ 0 & 4 & 1 & -1 & 2 \\ 0 & 0 & -3 & 2 & 4 \\ 0 & 0 & 0 & -1 & 3 \end{matrix}\right|=2\cdot\left|\begin{matrix} -1 & 2 & 0 & 0 \\ 4 & 1 & -1 & 2 \\ 0 & -3 & 2 & 4 \\ 0 & 0 & -1 & 3 
\end{matrix}\right|-21.$$ The first $4\times 4$ cofactor isn't as nice, but again using cofactor expansion along the first row, we have $$\begin{align}\left|\begin{matrix} -1 & 2 & 0 & 0 \\ 4 & 1 & -1 & 2 \\ 0 & -3 & 2 & 4 \\ 0 & 0 & -1 & 3 \end{matrix}\right| &= -1\cdot\left|\begin{matrix} 1 & -1 & 2 \\ -3 & 2 & 4 \\ 0 & -1 & 3 \end{matrix}\right|-2\left|\begin{matrix} 4 & -1 & 2 \\ 0 & 2 & 4 \\ 0 & -1 & 3 \end{matrix}\right|\\ &= -1\cdot7-2\cdot 4\cdot\left|\begin{matrix} 2 & 4 \\ -1 & 3 \end{matrix}\right|\\ &= -7-2\cdot 4\cdot\bigl((2)(3)-(4)(-1)\bigr)\\ &=-7-80\\ &=-87,\end{align}$$ so $$\left|\begin{matrix} 2 & 1 & 0 & 0 & 0 \\ 3 & -1 & 2 & 0 & 0 \\ 0 & 4 & 1 & -1 & 2 \\ 0 & 0 & -3 & 2 & 4 \\ 0 & 0 & 0 & -1 & 3 \end{matrix}\right|=2\cdot-87-21=-174-21=-195.$$
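The hand computation above can be double-checked with an exact (integer) Laplace expansion along the first row; a small recursive sketch:

```python
def det(m):
    """Determinant by cofactor expansion along the first row (exact for ints)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, a in enumerate(m[0]):
        if a == 0:
            continue  # skip zero entries, just as in the hand computation
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * a * det(minor)
    return total

M = [[2, 1, 0, 0, 0],
     [3, -1, 2, 0, 0],
     [0, 4, 1, -1, 2],
     [0, 0, -3, 2, 4],
     [0, 0, 0, -1, 3]]
print(det(M))  # -195
```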
H: Limit of a sequence: $x_n = \frac{2n^2 + 3}{n^3 + 2n}$ Consider the sequence $ x_n = \frac{2n^2 + 3}{n^3 + 2n}, n \in \mathbb{N}$. Show that $ \lim_{n\to \infty} x_n = 0$. I have no idea how to find my $n_{\epsilon} $ such that $ n > n_{\epsilon} \Rightarrow \left| \frac{2n^2 + 3}{n^3 + 2n} \right | < \epsilon $. I've tried to show it is Cauchy or find another 2 sequences to use the squeeze theorem, but I had no success. Can you help me to prove this (a hint would be great!)? Thanks! AI: Note that our expression is positive and $\lt \frac{2n^2+3n^2}{n^3}=\frac{5}{n}$. Now finding an $n_\epsilon$ that works should be easy. Remark: The structure of the answer was chosen to make writing out an $\epsilon$-$N$ argument straightforward. If we are allowed to use other tools, just divide top and bottom by $n^3$. The new bottom has limit $1$, the new top has limit $0$.
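The bound $x_n < 5/n$ converts directly into an explicit threshold $n_\epsilon = \lceil 5/\epsilon\rceil$; a quick check of both the bound and the threshold (variable names are mine):

```python
import math

def x(n):
    return (2 * n**2 + 3) / (n**3 + 2 * n)

for eps in [0.5, 0.1, 0.01]:
    n_eps = math.ceil(5 / eps)
    # x_n < 5/n holds for all n >= 1, so beyond n_eps every term is below eps
    assert all(x(n) < 5 / n < eps for n in range(n_eps + 1, n_eps + 200))
print("thresholds verified")
```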
H: A man is known to speak truth 3 out of 4 times. He throws a die and reports that it is a six. Find the probability that it is actually a six. I have this question as an example in my maths school book. The solution given there is:- E = the man reports six P(S1)= Probability that six actually occurs = $\frac{1}{6}$ P(S2)= Probability that six doesn't occur= $\frac{5}{6}$ P(E|S1)= Probability that the man reports six when six has actually occurred = $\frac{3}{4}$ P(E|S2)= Probability that the man reports six when six has not occurred = $1-\frac 3 4=\frac 1 4$ Therefore, by Bayes' Theorem, $P(S1|E)=\frac{(\frac{1}{6}\cdot\frac{3}{4})}{(\frac{1}{6}\cdot \frac{3}{4})+(\frac{5}{6}\cdot \frac{1}{4})} =\frac{3}{8} $ I have its solution but my teacher said that the solution given is incorrect and told that the actual solution would be something else:- $P(S1|E)=\frac{\frac 1 6\cdot\frac3 4}{(\frac 1 6\cdot \frac 3 4)+(\frac 5 6\cdot \frac1 4\cdot\frac1 5)} = \frac 3 4$ So, I want to ask which one is correct. Thank you. AI: The difference in solutions comes in the estimation of the probability that the man reports six when six has not occurred. If the man randomly chooses a number to report when he lies (which seems like a reasonable statement), then the probability he chooses 6 is 1/5. If you multiply your calculation of P(E|S2) by this, you get your teacher's solution.
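The two answers correspond to two different models of how the man lies. Computing both posteriors exactly (with the hypothetical assumption, implicit in the teacher's version, that a lie names one of the five false values uniformly at random) shows where they diverge:

```python
from fractions import Fraction

p_six = Fraction(1, 6)
p_truth = Fraction(3, 4)
num = p_six * p_truth  # P(six occurs and he truthfully reports it)

# Book's model: P(reports "six" | not six) = P(lies) = 1/4
book = num / (num + (1 - p_six) * (1 - p_truth))

# Teacher's model: a liar picks the reported value uniformly among the 5 wrong ones,
# so P(reports "six" | not six) = 1/4 * 1/5
teacher = num / (num + (1 - p_six) * (1 - p_truth) * Fraction(1, 5))

print(book, teacher)  # 3/8 and 3/4
```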
H: Extension of a linear map to a commutative graded algebra Let's fix the notation: $V=\bigoplus_{i\geq 0}{V^i}$ is a graded vector space and $\Lambda V$ is the free commutative graded algebra on $V$. I have been struggling to understand this example: Consider a graded vector space $V$ with basis $\{a, b\}$ such that $a \in V^2$ and $b \in V^5$. Now define a linear map $d$ (of degree $1$) by $da = 0$ and $db = a^3$. It follows that $d$ extends uniquely to a derivation $d : \Lambda V → \Lambda V$. The point of the example is to show that the derivation on $\Lambda V$ is completely determined by its values on $V$. So if I understand well, he considers a linear map $d:V\longrightarrow \Lambda V$ of degree one defined by $$d_2:V^2\longrightarrow \Lambda^3V; a\mapsto 0$$ (here $\Lambda^kV$ is the set of elements of word length $k$) and $$d_5:V^5\longrightarrow \Lambda^6V; b\mapsto a^3$$ The first question that I'm stuck on is $d_5(b)=a^3$: I mean, $a^3$ has word length $3$, so how can it be in $\Lambda^6V$? I really appreciate any help. Thanks !!!! AI: The superscripts in $\Lambda^3V$ and $\Lambda^6V$ here refer to the degree, not the word length, so $\Lambda^kV$ is not necessarily the set of elements of word length $k$: rather than just word length, you need to consider the grading as well. Because $a$ has degree $2$, the word $a^3$ has degree $3\cdot 2=6$ (even though its word length is $3$). This is consistent with $d$ having degree $1$: $b$ has degree $5$, and $db=a^3$ has degree $6$.
H: Can you help solve this cubic in root x? Here's the original equation: $$\frac{1}{\beta}\arctan\left(\frac{\sqrt{2rx+x^{2}}}{r}\right)+\left(r\sqrt{2rx+x^{2}}-r^{2}\arctan\left(\frac{\sqrt{2rx+x^{2}}}{r}\right)\right)=\frac{\pi}{2\beta}$$ which I've expanded: $$\frac{\sqrt{2}}{\beta\sqrt{r}}x^{1/2}+\left(\frac{8\beta r^{2}-5}{6\beta r^{3/2}\sqrt{2}}\right)x^{3/2}=\frac{\pi}{2\beta}$$ because the top expression only needs to be valid for small $x$. Can you solve either expression for $x$? Perhaps by writing $x = x_0 + x_1$ and approximating a solution? I don't just want a huge mess; I'd much rather have an expression that is slightly off but much simpler. Cheers! AI: You have $$ax^{1/2}+bx^{3/2}=x^{1/2}(a+bx)=c.$$ Squaring gives $c^{2}=x(a+bx)^{2}$, a cubic in $x$ which can be solved by the usual (messy) techniques.
H: How to calculate this limit I need to find the following limit: $\lim_{n\to\infty}\sqrt[n]{1^{\pi}+2^{\pi}+\cdots+n^{\pi}}$. Please help me. Thanks for your attention. AI: We will use the following property: if $\lim_{n\to\infty}a_n=a$ and $\lim_{n\to\infty}b_n=a$, then any sequence $(c_n)$ with the property $a_n\leq c_n \leq b_n$ for all $n\in\mathbb{N}$ is convergent and $\lim_{n\to\infty}c_n=a$. Since $\pi<4$, we have $k^{\pi}\leq n^{\pi}<n^4$ for $1\le k\le n$ (and $n\ge 2$), hence $$ 1\leq \sqrt[n]{1^{\pi}+2^{\pi}+\cdots+n^{\pi}}<\sqrt[n]{n^4+n^4+\cdots+n^4}=\sqrt[n]{n\cdot n^4}=\left(\sqrt[n]{n}\right)^5. $$ Because $\lim_{n\to\infty}\left(\sqrt[n]{n}\right)^5=1$, by the above property we have: $$ \lim_{n\to\infty}\sqrt[n]{1^{\pi}+2^{\pi}+\cdots+n^{\pi}}=1. $$
H: If 4 people have 5 different cars to choose from and two people cannot pick the same. How many different ways could people pick the cars? If 4 people have 5 different cars to choose from and two of those people cannot pick the same (the remaining two people could have the same car). How many different ways could people pick the cars? At first I was thinking First Person: 5 choices Second Person: 4 choices, because they cannot have the same car as person 1. Third Person 5 choices Fourth Person 5 choices Overall 5*4*5*5 = 500 ways, but I'm just not sure if this would assure people 1 and 2 don't get the same car. To help with confusion let's just give the people names to make this more clear. Abby, Bob, Chris, and Dan. They have 5 different types of cars to choose from but Abby and Bob cannot have the same type of car. Meaning Abby, Chris, and Dan could have the same type of car or Bob, Chris, and Dan could have the same car. The only restriction is that Abby and Bob do not have the same type of car. AI: If no one can have the same car, and/or no pair of people can have the same car, then we have $$\;5!\;= \;5\cdot 4\cdot 3\cdot 2 = 120$$ ways, since no one can have a car the same as anyone else. Your logic was correct about Person $2$ not having as a choice the car selected by Person $1$. By the same logic, Person $3$ cannot have as a choice either the car selected by Person $1$ or the car selected by Person $2$, leaving Person $3$ only 3 choices, and so on. Given your clarification, then we have that the only restriction is that Person 1 and Person 2 do not have the same car, then your calculation $5\cdot 4\cdot 5\cdot 5$ is correct.
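Exhaustive enumeration confirms the count of 500 under the clarified restriction (only Abby and Bob must differ):

```python
from itertools import product

# Every assignment of one of 5 cars to each of the 4 people,
# keeping only those where Abby and Bob differ.
count = sum(1 for abby, bob, chris, dan in product(range(5), repeat=4)
            if abby != bob)
print(count)  # 500 = 5 * 4 * 5 * 5
```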
H: Finding the limit of $x \sin\frac{\pi}{x}$ How can I find the limit of the following as $x\rightarrow\infty$: $x \sin\frac{\pi}{x}$. I did $\dfrac{\sin\frac{\pi}{x}}{\frac{1}{x}}$. I took the derivative using L'Hôpital and got $\dfrac{-1x^{-2} \cos \dfrac{\pi}{x}}{-1x^{-2}}$. Simplifying I get $\cos \frac{\pi}{x}$ but I am stuck. Another problem I have is $\dfrac{\ln(x)}{\cot x}$ as $x\rightarrow0$. I did $\dfrac{\dfrac{1}{x}}{-\csc^2(x)}$ But I am unsure how to go on. AI: Careful: the derivative of $\sin\frac{\pi}{x}$ is $\cos\left(\frac{\pi}{x}\right)\cdot\left(-\pi x^{-2}\right)$; the chain rule contributes a factor of $\pi$ that your work dropped. After cancelling $-x^{-2}$ you are left with $$\pi\cos\frac{\pi}{x}.$$ Since $\frac{\pi}{x}\to 0$ as $x\to\infty$ and $\cos$ is continuous, the limit is $\pi\cos 0=\pi$. (You can also see this without L'Hôpital: $x\sin\frac{\pi}{x}=\pi\cdot\frac{\sin(\pi/x)}{\pi/x}\to\pi\cdot 1=\pi$.) For the second limit, you can rewrite it as $$\frac{1/x}{-\csc^2(x)} = -\frac{\sin^2(x)}{x} = -\sin(x)\,\frac{\sin(x)}{x}.$$ You should know what $\lim_{x\to 0}\dfrac{\sin(x)}{x}$ is, and then use 'limit of product is product of limits'.
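A numerical cross-check of both limits (note the chain rule contributes a factor of $\pi$, so the first limit is $\pi$, not $1$):

```python
import math

# First limit: x * sin(pi / x) as x -> infinity
for x in [10.0, 1e3, 1e6]:
    print(x, x * math.sin(math.pi / x))  # the products approach pi

# Second limit: ln(x) / cot(x) as x -> 0+
for x in [1e-2, 1e-4, 1e-6]:
    print(x, math.log(x) / (1 / math.tan(x)))  # the values approach 0
```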
H: A question about compact Hausdorff space Let $X$ be a compact Hausdorff space and $C(X)$ be the set of continuous functions on $X$. And $F$ is a closed subspace of $X$. If the $f\in C(X)$ such that $f|_{F}=0$ is only zero function( i.e. $f=0$), then $F=X$??? AI: Yes, by Urysohn's Lemma : If $F \neq X$, then there is some $x\in X\setminus F$, and a continuous function $f$ such that $f(x) = 1$, and $f = 0$ on $F$.
H: Odds of 2 players meeting in an 8 person single elimination tournament I have an 8 person tournament. For the sake of this problem let's say odds of winning are 50% for each player. What is the formula to figure out the odds of any 2 players meeting at any point in the tournament? AI: With random seeding, by symmetry each pair is equally likely to play, but only $2^n - 1$ pairs actually do play, one in each game; hence the probability is $\frac{2^n - 1}{\binom{2^n}{2}}$. For example, with 4 players ($n=2$): B, C, D are equally likely to be A's first-round opponent, so B plays A in the first round with probability $\frac 13$. With probability $\frac 23$ they start in opposite halves of the bracket, and then both have to win their first-round games to meet in the final, so the answer is $\frac 13 + \frac 23 \cdot \frac 12 \cdot \frac 12 = \frac 12$, matching the formula. For your 8-player tournament ($n=3$) this gives $\frac{7}{28}=\frac 14$.
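A Monte Carlo simulation of the bracket agrees with the formula; a sketch (function name, trial count, and seed are my own choices):

```python
import random

def meet_probability(rounds, trials=100_000, seed=1):
    """Estimate the chance that players 0 and 1 meet in a single-elimination
    bracket of 2**rounds players, with random seeding and fair coin-flip games."""
    rng = random.Random(seed)
    players = list(range(2 ** rounds))
    hits = 0
    for _ in range(trials):
        field = players[:]
        rng.shuffle(field)          # random seeding
        met = False
        while len(field) > 1:
            nxt = []
            for i in range(0, len(field), 2):
                a, b = field[i], field[i + 1]
                if {a, b} == {0, 1}:
                    met = True
                nxt.append(a if rng.random() < 0.5 else b)
            field = nxt
        hits += met
    return hits / trials

est = meet_probability(3)
print(est)  # should be close to (2**3 - 1) / 28 = 0.25
```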
H: Prove the following with equivalence statements. I need to prove the following statement with equivalence statements. $\exists x \in D,(P(x) \Rightarrow Q(x)) \ \text{is equivalent to} \ (\forall x \in D, P(x)) \Rightarrow (\exists x \in D, Q(x))$ At the moment, I don't see how they can be possibly equivalent, as it seems like the quantifier statement is distributed among the predicates. Is this a legitimate operation in logic? Cheers! AI: Assume $\exists x\in D, (P(x)\Rightarrow Q(x))$ and let $x_0\in D$ be such that $P(x_0)\Rightarrow Q(x_0)$. Case 1. $P(x_0)$. Then by modus ponens $Q(x_0)$, hence $\exists x\in D,Q(x)$. We may add an arbitrary premise, so $(\forall x\in D,P(x))\Rightarrow (\exists x\in D,Q(x))$. Case 2. $\neg P(x_0)$. Then $\neg \forall x\in D,P(x)$. Ex falso quodlibet, i.e. $(\forall x\in D,P(x))\Rightarrow (\exists x\in D,Q(x))$. Since the result follows in both cases we indeed conclude $(\forall x\in D,P(x))\Rightarrow (\exists x\in D,Q(x))$, and thus have shown $$ \tag1\exists x\in D, (P(x)\Rightarrow Q(x))\implies \Big((\forall x\in D,P(x))\Rightarrow (\exists x\in D,Q(x))\Big).$$ Now assume $(\forall x\in D,P(x))\Rightarrow (\exists x\in D,Q(x))$. Case 1. $\forall x\in D,P(x)$. Then by modus ponens $\exists x\in D,Q(x)$, say $Q(x_1)$ with $x_1\in D$. Then also $P(x_1)\Rightarrow Q(x_1)$ and hence $\exists x\in D, (P(x)\Rightarrow Q(x))$. Case 2. $\neg\forall x\in D,P(x)$, i.e. $\exists x\in D,\neg P(x)$. Let $x_2\in D$ with $\neg P(x_2)$. Again using ex falso quodlibet, we have $P(x_2)\Rightarrow Q(x_2)$ and hence $\exists x\in D, (P(x)\Rightarrow Q(x))$. Since the result follows in both cases we indeed conclude $\exists x\in D, (P(x)\Rightarrow Q(x))$, and thus have shown $$\tag2 \Big((\forall x\in D,P(x))\Rightarrow (\exists x\in D,Q(x))\Big)\implies \exists x\in D, (P(x)\Rightarrow Q(x)).$$ Combining (1) and (2) we have $$ \exists x\in D, (P(x)\Rightarrow Q(x))\iff \Big((\forall x\in D,P(x))\Rightarrow (\exists x\in D,Q(x))\Big).$$
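The equivalence can also be confirmed by brute force over small nonempty domains, enumerating every possible pair of predicates; a sketch (helper name mine):

```python
from itertools import product

def equivalent(n):
    """For |D| = n and all predicates P, Q on D, check that
    (exists x, P(x) -> Q(x))  iff  ((forall x, P(x)) -> (exists x, Q(x)))."""
    D = range(n)
    for P in product([False, True], repeat=n):
        for Q in product([False, True], repeat=n):
            lhs = any((not P[x]) or Q[x] for x in D)
            rhs = (not all(P[x] for x in D)) or any(Q[x] for x in D)
            if lhs != rhs:
                return False
    return True

print(all(equivalent(n) for n in range(1, 5)))  # True
```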
H: Specify limit of x and y in math equation I have the following linear equation $y = -0.13x + 3$ This calculates a ratio (y) depending on a computer size in gigabytes. The domain in negative is irrelevant ( x > 0) and the minimum ratio(y) is 1.7. How can I modify this function to better integrate those limits? AI: To have $y \gt 1.7$, you must have $x \lt 10$ Given your comment, you can write $$y=\begin {cases} -0.13x+3& 0 \le x \lt 10\\ 1.7& x \ge 10 \end {cases}$$
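The piecewise definition can be written compactly with `max`, since the line sits above the floor exactly when $x<10$; a small sketch (the function name `ratio` is my own):

```python
def ratio(x):
    """Ratio as a function of size x in GB: linear until the 1.7 floor,
    which is reached at x = 10 (solve -0.13 * x + 3 = 1.7)."""
    if x < 0:
        raise ValueError("size must be non-negative")
    return max(-0.13 * x + 3, 1.7)

print(ratio(0), ratio(5), ratio(10), ratio(50))
```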
H: Find the area of the largest rectangle A rectangle is formed by bending a length of wire of length $L$ around four pegs. Calculate the area of the largest rectangle which can be formed this way (as a function of $L$). How am I supposed to do this? If I'm interpreting the question correctly, a square would have an area of $\dfrac{1}{16}L^2$. But I don't know how to find the maximum area. I'm guessing it involves finding the stationary point of some function of $L$, but which function that might be eludes me at the moment. AI: If the rectangle is $h$ by $w$, we have the area is $A=wh$ and we have $2w+2h=L$. You solve the constraint to get $w=\frac 12(L-2h)$, and plug that into the other to get $A=\frac 12h(L-2h)$. Now take $\frac {dA}{dh}$ set it to zero, solve for $h$ and you are there. You will get the result you guessed.
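Carrying out the last step: $\frac{dA}{dh} = \frac{L}{2} - 2h = 0$ gives $h = \frac{L}{4}$, i.e. a square with area $\frac{L^2}{16}$, confirming the guess. A quick numerical confirmation by grid search (my own check, with $L=1$):

```python
L = 1.0

def area(h):
    # A(h) = h * (L - 2h) / 2, from w = (L - 2h) / 2
    return h * (L - 2 * h) / 2

# Scan h over (0, L/2); the maximum should land at h = L/4 with area L^2/16.
best = max(area(k / 10**4) for k in range(1, 5 * 10**3))
print(best)  # 0.0625 = 1/16
```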
H: At how many points is this function continuous? Question: Let $f$ be a function with domain $[-1, 1]$ such that the coordinates of each point $(x,y)$ satisfy $x^2 + y^2 = 1$. What is the total number of points at which f is necessarily continuous? My Answer: I think the answer should be zero. Here's why: the graph may just be a bunch of discrete points lying on the unit circle. At each of these points, the graph is discontinuous. However, the correct answer is 2. Can someone explain where I am wrong? AI: We know something about the function's behavior, since $y^2 = 1 - x^2$, we have $y = \pm \sqrt{1-x^2}$, and the 'worst' it could be is if it jumps between the positive and negative square roots a lot. That is, if $x$ changes a little, then $\sqrt{1 - x^2}$ (the positive root) only changes a little, and the only way to make this discontinuous is by jumping to the other root. I think we could construct this worst-case behavior by doing something like 'take the positive root if $x \in \mathbb{Q}$, negative root if irrational.' So, with that in mind, what are the points $x$ where choosing the positive vs. the negative root doesn't make that much difference?
H: Negating $(\forall a \in A)(\exists b \in B)(a \in C \leftrightarrow b\in C)$? I'm not quite sure how to go about doing this. When negating, I know the quantifiers themselves will be negated, meaning that $\forall$ would become $\exists$ and vice-versa. Also I know that $\leftrightarrow$ can be written for example as $(a \notin C \lor b \in C)\land(b \notin C \lor a \in C)$. And this can be negated using De Morgan's laws. However, what about the $\in$? Would I have to negate those too? Can you please show me how that's done. AI: $$\lnot(x \in X) \equiv x \notin X$$ $$\lnot\Big((\forall a \in A)(\exists b \in B)(a \in C \leftrightarrow b\in C)\Big)\tag{1}$$ $$\equiv \lnot \Big[(\forall a \in A)(\exists b \in B)\Big((a \notin C \lor b \in C)\land( b \notin C \lor a \in C)\Big)\Big]\tag{2}$$ $$\equiv (\exists a \in A)(\forall b \in B) \Big( \lnot(a\notin C \lor b \in C) \lor \lnot(b\notin C \lor a\in C)\Big)\tag{3}$$ $$ \equiv (\exists a \in A)(\forall b \in B) \Big((a\in C \land b \notin C) \lor (b\in C \land a\notin C)\Big)\tag{4}$$ $(1)$ is the negation of the given proposition. $(2)$ is equivalent to the negated proposition, as you note. $(3)$ Negation moves inward, changing the quantifiers, respectively, finally negating the quantified expression, and applies DeMorgan's Rule. $(4)$ By DeMorgan's.
H: Abby, Bob, Chris, and Dan have 5 vehicles to choose from how many ways can this be done? Abby, Bob, Chris, and Dan have 5 types of vehicles to choose from: a red car, red truck, green jeep, brown suv, and grey convertible. More than one person can have the same type of vehicle. If Abby and Bob cannot have the same color of vehicle how many ways are there to distribute the vehicles among the four people? I was thinking Abby has 5 choices. Bob then has 3 choices because Abby might pick a red vehicle. Chris and Dan would each have 5 choices because it does not matter what color they have for a total of 5*3*5*5 = 375 ways. Do I need multiple cases due to the fact Abby might not pick a red vehicle or how could I solve this? AI: Chris and Dan are not important, so they give a factor of $5\cdot 5$. Now for Abby + Bob there are $3_\text{not red} \cdot 4_\text{other colors} + 2_\text{red} \cdot 3_\text{not red} = 18$ possibilities, so the total is $$5\cdot5\cdot 18 = 450$$
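The case split above can be verified by brute force: enumerate all $5^4$ assignments and count those where Abby's and Bob's colors differ (a sketch; the vehicle list is taken straight from the problem statement).

```python
from itertools import product

# (color, type) for each of the five vehicles in the problem
vehicles = [("red", "car"), ("red", "truck"), ("green", "jeep"),
            ("brown", "suv"), ("grey", "convertible")]

# Abby, Bob, Chris, Dan each pick one vehicle type;
# only an Abby/Bob color clash is forbidden.
total = sum(1 for a, b, c, d in product(vehicles, repeat=4)
            if a[0] != b[0])
print(total)  # 450
```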
H: How to prove this set is a closed set? Example: how do I prove that the sphere $$\left\{ (x,y,z)∈ℝ^3∣x^2+y^2+z^2=1 \right\}$$ is a closed set? I tried to use the definition of a closed set, but it did not work out for me. AI: My hint would be: show that the "inside" is open and the "outside" is open. To add a bit more details: Let $\tilde{x}$ denote an arbitrary point in $\mathrm{Outside}=\{(x,y,z) \mid x^2 + y^2+z^2 > 1\}.$ Thinking about it geometrically, the ball centered at $\tilde{x}$ of radius $|\tilde{x}| -1$ should be a subset of $\mathrm{Outside}$. So, you should try showing this.
H: Convergence almost everywhere and convergence in measure Let $(\mathbb{R},\mathcal{L},m)$, let $f_{n}(x)=n\chi_{[0,\frac{1}{n}]}$ then the sequence converges to $0$ everywhere except at $x=0$ thus $f_{n}$ converges a.e. Then in my book (Folland) we have that if $f_{n}\to f$ a.e and $|f_{n}|\le g\in L^{1}$ then $f_{n}\to f$ in $L^{1}$ too. The above conditions are met. Finally by another proposition we have that if $f_{n}\to f$ in $L^{1}$ then $f_{n}\to f$ in measure. I wonder if these relations holds for both finite and infinite measurable spaces? AI: Both implications hold on arbitrary measure spaces, finite or infinite: neither the dominated convergence theorem nor the proposition "$L^1$ convergence implies convergence in measure" uses finiteness of the measure. For example, $(\mathbb R, \mathcal B, \lambda)$ is a $\sigma$-finite but infinite measure space on which both work. Be careful with your particular example, though: for $f_n=n\chi_{[0,1/n]}$ there is no dominating $g\in L^1$, since $\sup_n f_n(x)=\lfloor 1/x\rfloor$ is not integrable near $0$; indeed $\int f_n\,dm = 1\not\to 0$, so $f_n\not\to 0$ in $L^1$ (although $f_n\to 0$ in measure, as $m(\{f_n>\varepsilon\})=\frac1n\to 0$).
H: Finding the limit of $\frac{\sqrt{1+x^2}}{x^2}$ I am kind of confused when it comes to finding this limit: $\displaystyle\lim_{x\rightarrow\infty}\frac{\sqrt{1+x^2}}{x^2}$ I did $\dfrac{\dfrac{1}{2}\dfrac{1}{\sqrt{\arctan(x)}}}{2x}$ then I am kind of stuck I know I can multiply the complex fraction and get $\dfrac{1}{4x \arctan(x)}$ but it does not make sense. AI: Let $x>1$ so $2x^2 > 1 + x^2$ $$\frac{\sqrt{1+x^2}}{x^2} < \frac{\sqrt{2x^2}}{x^2} = \sqrt 2 \frac{|x|}{|x|^2} = \frac{\sqrt 2}{|x|}$$ From here the limit should be clear, since $$\frac{\sqrt{1+x^2}}{x^2} \geq 0 \qquad \forall\ x\in \mathbb R$$
H: How can prove that $-(-x)=x$? I need to prove the following property, but I don't know how: $$-(-x)=x.$$ Please help me. Thanks for your attention. AI: $-(-x)=-(-x)+0=-(-x)+x+(-x)=[-(-x)+(-x)]+x=0+x=x$
H: Simple upper bound for $\binom{n}{k}$ I remember seeing an upper bound for the binomial $\binom{n}{k}$ with an exponential function, something like $\binom{n}{k}\leq \left(ne/k\right)^k$. What exactly is it, and are there other similar good upper bounds for $\binom{n}{k}$? Edit: As the link in Macavity's comment shows, the bound is indeed $\binom{n}{k}\leq \left(ne/k\right)^k$. How can we prove this? AI: I assume you are looking for the simple bound $$\binom{n}{k} < \left(\dfrac{e n}{k}\right)^k$$ Proof: \begin{align} \binom{n}{k} &= \frac{n(n-1)\dots(n-k+1)}{k!} \\ &= 1\left(1-\frac{1}n \right)\cdots\left(1-\frac{k-1}n \right) \frac{n^k}{k!}\\ &\le \frac{n^k}{k!} \qquad \text{as all factors on the left are }\le 1. \end{align} From the Taylor series of $e^x$, we know $\forall k \in \mathbb{N}, \; e^k > \dfrac{k^k}{k!}$, i.e. $k! > (k/e)^k$. Combining this with the above gives the desired bound: $$\binom{n}{k}\le\frac{n^k}{k!}<n^k\cdot\frac{e^k}{k^k}=\left(\frac{en}{k}\right)^k.$$
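The strict inequality can be spot-checked exhaustively for small parameters:

```python
from math import comb, e

# check binom(n, k) < (e*n/k)**k for all 1 <= k <= n <= 40
ok = all(comb(n, k) < (e * n / k) ** k
         for n in range(1, 41) for k in range(1, n + 1))
print(ok)  # True
```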
H: Is every normed vector space, an inner product space Let $V$ be a vector space over $\mathbb{C}$. If $V$ is an inner product space, then $V$ is normed (where the norm is defined as $\|x\|=\sqrt{(x,x)}$ ). Now if $V$ is normed, does it follow that $V$ is an inner product space ? I suspect no. I would like to see an example. Thank you. After reading my question again, I think it needs some clarification: Suppose that $V$ is normed with norm $\|\ \|$. Can $V$ be given an inner product space structure such that $(x,x)=\|x\|^2$ ? AI: For an example of a norm that is not induced by an inner product, consider Euclidean space $\Bbb R^n$ (where $n\ge 2$) with the norm $$\lVert \vec x\rVert_1:=\sum_{k=1}^n |x_k|.$$ To see that no inner product can induce this norm, check the parallelogram law $\lVert x+y\rVert^2+\lVert x-y\rVert^2=2\lVert x\rVert^2+2\lVert y\rVert^2$, which every norm coming from an inner product must satisfy: it fails for $\lVert\cdot\rVert_1$, e.g. with $x=e_1$ and $y=e_2$.
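The failure of the parallelogram law $\lVert x+y\rVert^2+\lVert x-y\rVert^2=2\lVert x\rVert^2+2\lVert y\rVert^2$ (satisfied by every inner-product norm) can be checked in a few lines for the 1-norm, using $x=e_1$, $y=e_2$ in $\mathbb R^2$:

```python
def norm1(v):
    # the l^1 norm: sum of absolute values of the components
    return sum(abs(t) for t in v)

x, y = (1.0, 0.0), (0.0, 1.0)
add = (x[0] + y[0], x[1] + y[1])
sub = (x[0] - y[0], x[1] - y[1])
lhs = norm1(add) ** 2 + norm1(sub) ** 2       # ||x+y||^2 + ||x-y||^2
rhs = 2 * norm1(x) ** 2 + 2 * norm1(y) ** 2   # 2||x||^2 + 2||y||^2
print(lhs, rhs)  # 8.0 4.0 -- the law fails, so no inner product induces this norm
```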
H: Area of a circular segment. See the picture below: How can I calculate the area in black, using no handy formulas which will give me the answer if I plug in the right values? I had the idea to take $\displaystyle \int_{0.5r}^{r}$, but the problem is I don't know which function to take the integral of. AI: The area of a segment is the area of the sector minus the area of the triangle. If $\theta$ is the angle of the arc (in radians), the area of the sector is $\frac12\theta\cdot R^2$, and the area of the triangle is $2\cdot\frac12R\sin\frac\theta2\cdot R\cos\frac\theta2=\frac12R^2\sin\theta$. The area of the segment is therefore: $A=\frac12R^2(\theta-\sin\theta)$. Now, we have that $\cos\frac\theta2=\frac12$ which means that $\frac\theta2=\frac13\pi$ and $\theta=\frac23\pi$. Replacing, and using $\sin\frac23\pi=\frac{\sqrt3}{2}$: $$A=\frac12R^2\left(\frac23\pi-\frac{\sqrt3}{2}\right)=\left(\frac13\pi-\frac{\sqrt3}{4}\right)R^2$$
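As a cross-check, the asker's integral idea works directly: the segment area equals $\int_{R/2}^{R}2\sqrt{R^2-x^2}\,dx$, which evaluates to $\left(\frac13\pi-\frac{\sqrt3}{4}\right)R^2\approx0.6142\,R^2$. A midpoint-rule sketch with $R=1$:

```python
from math import sqrt, pi

R = 1.0
n = 200000                      # midpoint rule on [R/2, R]
h = (R - R / 2) / n
approx = sum(2 * sqrt(R ** 2 - (R / 2 + (i + 0.5) * h) ** 2)
             for i in range(n)) * h
exact = (pi / 3 - sqrt(3) / 4) * R ** 2
print(approx, exact)  # both ~0.6142
```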
H: Is the set $\{0,1\}$ opened or closed? I think it is neither open nor closed because it contains none of boundary points and interior point. I'm not sure if its correct, help please! AI: $\{0,1\}$ is closed in $\Bbb R,$ since its complement is $(-\infty,0)\cup(0,1)\cup(1,\infty),$ which is a union of open sets, and so open. Alternately, you can prove that it's closed because it's finite, so has no limit/accumulation points, and so vacuously contains all of its limit/accumulation points. To show that there is no continuous function $f:\Bbb R^n\to\Bbb R$ having range $\{0,1\},$ though, you could make use of a property called connectedness, by noting that $\Bbb R^n$ is connected, while $\{0,1\}$ is not, and then using the fact that continuous images of connected sets are connected. Alternatively, if you're unfamiliar with connectedness, note that $\{0\}$ is both relatively closed and relatively open in $\{0,1\}$--relatively closed because it is closed in $\Bbb R$, and relatively open because $(-\frac12,\frac12)\cap\{0,1\}=\{0\}$. Thus, the preimage of $\{0\}$ under any continuous function $\Bbb R^n\to\{0,1\}$ will be both open and closed in $\Bbb R^n.$ Which subsets of $\Bbb R^n$ are both open and closed? Given the answer to that question, can a continuous function $\Bbb R^n\to\{0,1\}$ be surjective?
H: Boolean formula over 64 Boolean variables X This question comes from this homework assignment from ECS20 at UC Davis. Chess is played on an 8 x 8 board. A knight placed on one square can move to any unoccupied square that is at a distance of two squares horizontally and one square vertically, or else two squares vertically and one square horizontally. The complete move therefore looks like the letter L (in some orientation). A knight cannot move off the board. Unlike other chess pieces, the knight can jump over" other pieces in going to its destination. Consider a chess board on which we place any number $m \in$ {0,1,...,64} of knights, at most one knight on each square. Call the configuration of knights valid if no knight can move to a square occupied by another knight. Carefully specify a Boolean formula over 64 Boolean variables $X$ where the number of truth assignments to $\phi$ is exactly the number of valid knight configurations. This question has left me baffled. How could one solve this? AI: Your formula has a variable $K_{i,j}$ for $1 \le i, j \le 8$. $K_{i,j}$ is intended to mean that there is a knight on square $(i, j)$. You use implications to represent the constraints: if putting a knight on square $(i, j)$ means you can't put a knight on square $(m, n)$, then you represent that constraint by an implication $K_{i,j} \implies \lnot K_{m, n}$. The resulting formula is a conjunction of implications $K_{i,j} \implies \lnot K_{i\pm2,j\pm1}$ and $K_{i,j} \implies \lnot K_{i\pm1,j\pm2}$ (where you discard any implication where one of the subscripts is not between $1$ and $8$, i.e., you ignore constraints on knights that are not on the chessboard).
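The same formula scales down, so on a small board the truth assignments can actually be enumerated. A sketch for a 4×4 board (the board size, function names, and brute-force loop are my own, not part of the exercise): an assignment of knights is valid exactly when every implication $K_{i,j}\Rightarrow\lnot K_{m,n}$ holds.

```python
from itertools import product

N = 4  # a 4x4 board keeps the enumeration at 2**16 assignments

# all ordered pairs (square, square a knight there attacks),
# i.e. the implications K_ij -> not K_mn of the formula
moves = [(2, 1), (2, -1), (-2, 1), (-2, -1), (1, 2), (1, -2), (-1, 2), (-1, -2)]
squares = [(i, j) for i in range(N) for j in range(N)]
constraints = [((i, j), (i + di, j + dj))
               for (i, j) in squares for (di, dj) in moves
               if 0 <= i + di < N and 0 <= j + dj < N]

def valid(assignment):
    """assignment maps each square to True/False; check every implication."""
    return all(not (assignment[a] and assignment[b]) for a, b in constraints)

count = sum(valid(dict(zip(squares, bits)))
            for bits in product([False, True], repeat=N * N))
print(count)  # number of satisfying assignments = number of valid configurations
```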
H: How to write the proof for this? Let $a,b,c \in \mathbb{Z}$, and $a \neq 0$. Use a proof by contradiction to show that if $(a \nmid (bc))$ then $(a \nmid b)$. The symbol $\nmid$ stands for "does not divide". I got the layout, but I don't know how to go about this. Assume x in D: Assume P(x) Assume ¬Q(x) What to do here? Then Q(x) Then P(x) implies Q(x): Then for all x in D, P(x) implies Q(x) AI: Suppose that $a \nmid bc$, but $a \mid b$. Since $a \mid b$, $b=ma$ for some $m \in \mathbb{Z}$. Let $b=ma$. Now by our supposition, $a \nmid bc \Rightarrow a \nmid mac$. But $\frac{mac}{a}=mc \in \mathbb{Z}$, and so $a \mid bc$. This is a contradiction. In general terms of your layout abstraction, the above would go as follows: To prove $P \Rightarrow Q$: Assume $(P \wedge \neg Q)$ (the negation). Use $\neg Q$ to directly prove $\neg P$. Hence $(P \wedge \neg P)$, the contradiction. Therefore $P \Rightarrow Q$. As a condensed sketch starting with the negation of the implication and ending with that which was to be proven: $$(P \wedge \neg Q) \Rightarrow \neg Q \Rightarrow \ldots \Rightarrow \neg P \Rightarrow (P \wedge \neg P) \Rightarrow (P \Rightarrow Q).$$
H: Derived algebra of a lie algebra contained in an ideal Let $\mathfrak{g}$ be a Lie algebra over $\mathbb{R}$ or $\mathbb{C}$. Assume $\mathfrak{i}$ is an ideal with $\mathfrak{g/i}$ abelian. Then the derived algebra $[\mathfrak{g},\mathfrak{g}]\subseteq \mathfrak{i}$. I don't see why this is true. I am new to Lie algebras and am probably missing something obvious. I would appreciate if someone could show me why. Thanks! AI: If $x$ and $y$ are arbitrary elements of $\mathfrak{g}$, we can denote by $\bar{x}$ and $\bar{y}$ their images in $\mathfrak{g}/i$. Since the quotient is abelian, we have $[\bar{x},\bar{y}] = 0$, which means that $[x,y]\in i$, and hence that $[\mathfrak{g},\mathfrak{g}]\subseteq i$.
H: Evaluating integral of $\int e^{-ax} \,dP$ where $P$ is the normal distribution $N(\mu,\sigma^2)$. I realize questions regarding integrating the normal distribution are numerous, but I wasn't able to find an already answered question that helped me with this. The integral is: \begin{align*} \int_{\mathbb{R}} e^{-ax}\,dP = \int^\infty_{-\infty} \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{(x-\mu)^2}{2\sigma^2}-ax}\, dx \end{align*} but where to take it from here is leaving me clueless. Maple tells me it should evaluate to $e^{\frac{1}{2}a(a\sigma^2-2\mu)}$, but how do I show this? AI: \begin{align*} &\,\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu)^2}{2\sigma^2}-ax\right)\mathrm{d}x\\=&\,\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu+a\sigma^2)^2}{2\sigma^2}-a\mu+\frac{a^2\sigma^2}{2}\right)\mathrm{d}x\\=&\,\exp\left(\frac{a^2\sigma^2}{2}-a\mu\right)\underbrace{\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}\sigma}\exp\left(-\frac{(x-\mu+a\sigma^2)^2}{2\sigma^2}\right)\mathrm{d}x}_{=1}.\end{align*} The last integral is $1$ because the integrand is just the probability density function of $\mathcal{N}(\mu-a\sigma^2,\sigma^2)$.
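A numeric sanity check of the completed square, via trapezoidal integration in plain Python (the particular values $\mu=0.3$, $\sigma=0.8$, $a=1.2$ are arbitrary test choices):

```python
from math import exp, sqrt, pi

mu, sigma, a = 0.3, 0.8, 1.2   # arbitrary test values

def integrand(x):
    # density of N(mu, sigma^2) times e^{-a x}
    return exp(-(x - mu) ** 2 / (2 * sigma ** 2) - a * x) / (sqrt(2 * pi) * sigma)

# trapezoid rule over a range wide enough to cover both Gaussians
lo, hi, n = -12.0, 12.0, 200000
h = (hi - lo) / n
numeric = h * (sum(integrand(lo + i * h) for i in range(1, n))
               + 0.5 * (integrand(lo) + integrand(hi)))
closed_form = exp(a * a * sigma * sigma / 2 - a * mu)
print(numeric, closed_form)  # both ~1.106
```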
H: Theorem for calculating the coordinates of standard basis vectors with respect to a given basis Is there a theorem that tells how to calculate the coordinates of each of the standard basis vectors in $\mathbb{R}^n$ with respect to a given basis for $\mathbb{R}^n$? AI: Let $M$ be the matrix whose columns are the vectors of the given basis. If $e_i$ denotes the $i$-th vector of the standard basis, then $M^{-1}e_i$ is a vector whose components are the coordinates of $e_i$ with respect to the given basis.
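A tiny worked instance of the answer's recipe (the basis $\{(1,1),(1,-1)\}$ of $\mathbb R^2$ is my own example): the coordinates of $e_1$ are the entries of $M^{-1}e_1$, computed here with the explicit 2×2 inverse.

```python
# basis vectors as columns of M = [[1, 1], [1, -1]]
a, b, c, d = 1.0, 1.0, 1.0, -1.0          # M = [[a, b], [c, d]]
det = a * d - b * c

def coords(x, y):
    """Return M^{-1} (x, y): the coordinates of (x, y) w.r.t. the basis."""
    return ((d * x - b * y) / det, (-c * x + a * y) / det)

c1, c2 = coords(1.0, 0.0)                  # coordinates of e_1
print(c1, c2)  # 0.5 0.5, since e_1 = 0.5*(1,1) + 0.5*(1,-1)
```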
H: $\frac{dx}{dt} = |x|^{1/2}$ I'm looking to find 4 solutions to the ODE: $\frac{dx}{dt} = |x|^{1/2} , x(0)=0$. Clearly, $x=0$ is one solution. Using separation of variables for $x>0$ yields $x= t^2/4$ as another solution, and if we consider $x<0$, I find that $x = -t^2/4$. Could someone give a hint as to where I am missing the last solution? Thanks! AI: I claim that a solution (the 'basic' solution) is $$ x = \begin{cases} t^2/4 &: t>0 \\ 0&: t \leq 0 \end{cases} $$ Verify that this not only satisfies the DE and IC, but it is continuous everywhere, with a continuous derivative. (You only have to check the continuity and differentiability at $t=0$ since it is obviously smooth everywhere else.) Now think about what happens if you translate this function to the right. EDIT additional thoughts Another way to go about this that avoids translation would be to show that you can take $x$ to be either $t^2/4$ or $0$ on $[0,\infty)$ and either $-t^2/4$ or $0$ on $(-\infty,0)$, and that no matter which of these four options you choose, the result satisfies the DE, IC, and is continuous w/ continuous derivative. I.e. $$ x = \begin{cases} t^2/4 &: t>0 \\ -t^2/4 &: t \leq 0 \end{cases} $$ $$ x = \begin{cases} t^2/4 &: t>0 \\ 0 &: t \leq 0 \end{cases} $$ etc. are all solutions.
H: Fourier Series: Shifting in time domain I am reading "Fourier Transformation for Pedestrians" from T. Butz. He speaks about what happens to the Fourier coefficients when the function is shifted in time. I have copied the equation I have a problem with: I don't understand the logic behind going from $f(t-a)$ in the first integral to $f(t')e^{-i\omega_k t'}e^{-i\omega_k a}\,dt'$. It seems like he is using the identity $e^{a+b}=e^ae^b$ but I don't understand the complete logic. Also he then applies the same thing without the complex notation: Why does that work? Why does shifting in time correspond to replacing $A_k$, for example, by $A_k\cos \omega_k a - B_k \sin \omega_k a$? I don't understand where this is the case. If someone could explain it would be great. Thank you. AI: He is substituting $t'=t-a$; note that he is also shifting the interval of the integral, and also $t=t'+a$, replaced in the exponent and applying the exponent rule. For the non-complex version, he is shifting from $\mathbb C$ to $\mathbb R^2$, but just by notation. $$C_k=A_k+iB_k=\{A_k;B_k\}\\ e^{i\theta}=\cos\theta+i\sin\theta=\{\cos\theta;\sin\theta\}\\ C_ke^{i\theta}=(A_k\cos\theta-B_k\sin\theta)+i(A_k\sin\theta+B_k\cos\theta)=\{A_k\cos\theta-B_k\sin\theta;A_k\sin\theta+B_k\cos\theta\}\\ $$ where $\theta=\omega_ka$.
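The same bookkeeping can be checked numerically in the discrete setting: circularly shifting a sequence by $s$ samples multiplies its $k$-th DFT coefficient by $e^{-2\pi iks/N}$, which is exactly the $e^{-i\omega_k a}$ factor above. A hand-rolled sketch (the signal values and shift are arbitrary choices, not from the book):

```python
import cmath

N = 8
x = [0.0, 1.0, 3.0, 2.0, -1.0, 0.5, 2.5, -2.0]   # arbitrary real signal
s = 3                                             # shift by 3 samples

def dft(seq):
    n = len(seq)
    return [sum(seq[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

X = dft(x)
X_shifted = dft([x[(t - s) % N] for t in range(N)])   # x(t - a), circularly

# shift theorem: X_shifted[k] == X[k] * exp(-i * 2*pi*k*s / N)
ok = all(abs(X_shifted[k] - X[k] * cmath.exp(-2j * cmath.pi * k * s / N)) < 1e-9
         for k in range(N))
print(ok)  # True
```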
H: Integrals Regulated functions I'm stuck on an example for this question: Give an example of a regulated function $f \colon [a,b] \to \mathbb{R}$ with the properties that $\forall x \in [a,b]\ f(x) \ge 0$, $\int_a^b f = 0$, and there is $c \in [a,b]$ with $f(c) > 0$. I think a function that fits would be one where $f(0) = y_0$ with $y_0 > 0$, and $f$ is $0$ everywhere else, but I'm not sure how to show that this is regulated. I'm not even sure if it is. A function $f:[a,b] \to \mathbb{R}$ is a regulated function if given $\epsilon>0$ there is a step function $\phi\colon[a,b]\to\mathbb{R}$ s.t. $||f-\phi||_\infty < \epsilon$ (equivalently $|f(x) - \phi(x) | < \epsilon\ \forall x \in [a,b]$). AI: I assume that $f\colon [a,b]\to\mathbb R$ is defined as regulated iff $\lim_{t\to x^-}f(t)$ exists for all $x\in(a,b]$ and $\lim_{t\to x^+}f(t)$ exists for all $x\in [a,b)$. In that case, for example $$ f(x)=\begin{cases}y_0&\text{if }x=a,\\y_1&\text{if }x=b,\\0&\text{otherwise}\end{cases}$$ is a regulated function (all limits to be considered are $0$). If you pick $y_0>0$ and/or $y_1>0$, you get an example as required (indeed, the integral $\int_a^bf\,\mathrm dx$ does not depend on values at single points). Edit: The definition I used above is known to be equivalent to what you added to the question (cf. Dieudonné, Jean (1969), Foundations of Modern Analysis). But in fact $f$ itself is a step function, that is the linear combination of indicator functions of intervals. It is just that we need to recognize single points as special cases of intervals: $$f= y_0\cdot 1_{[a,a]}+y_1\cdot 1_{[b,b]}.$$
H: Probability I lost when my friend told me I lost I make a bet with a friend. There's a 1/999 chance that I will lose. I don't directly know the results of this bet so there's a 99/100 chance that he will tell the truth about the results regardless of whether he wins or loses. Suppose he tells me that I lost. What is the probability that I actually lost? So I have: $A$ = chances of losing $\frac{1}{1000}$; $A^c$ = chances of winning $\frac{999}{1000}$; $B$ = chances of him telling the truth $\frac{99}{100}$; $B^c$ = chances of him lying $\frac{1}{100}$. By Law of Total Probability, I should have P(A) = $\frac{1}{1000}$ $\frac{1}{1000}$ + $\frac{99}{100}$ $\frac{99}{100}$ which is .980101 but that seems incorrect. So I'm pretty sure I got something wrong except I don't know what. Hints, clarifications, explanations would be much appreciated. Thanks EDIT: Also, I think there's a much more simple solution for this problem since this question is from a math competition for high school level students. AI: Use Bayes's rule: \begin{align*} &\,\mathbb P (\text{lost}\,|\,\text{“lost”})=\frac{\mathbb P(\text{“lost”}\,|\,\text{lost})\mathbb P(\text{lost})}{\mathbb P(\text{“lost”}\,|\,\text{lost})\mathbb P(\text{lost})+\mathbb P(\text{“lost”}\,|\,\text{won})\mathbb P(\text{won})}\\=&\,\frac{0.99\times\dfrac{1}{999}}{0.99\times\dfrac{1}{999}+0.01\times\dfrac{998}{999}}\approx0.0902. \end{align*} Quotation marks mean what your friend told you. Intuitively, that your friend lied is unlikely, but that you lost is way unlikelier. Don't trust him if he tells you you lost.
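Plugging the numbers into Bayes's rule is a one-liner, but it makes the $\approx0.0902$ concrete:

```python
p_lost = 1 / 999          # prior probability of losing the bet
p_truth = 99 / 100        # probability the friend tells the truth

num = p_truth * p_lost                        # he says "lost" and you lost
den = num + (1 - p_truth) * (1 - p_lost)      # ...or he says "lost" and you won
posterior = num / den
print(posterior)  # ~0.0902
```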
H: Difficult Integral Question I'm trying to evaluate the following integral; $$\int e^{(x^2 - z^2)} (2x \cos(2xz) - 2z \sin(2xz)) dz$$ I've tried splitting it up, and using integration by parts, but it just isn't coming out in a simple way. I've been stuck on this for hours. I'm sure there's some rule or trick I can use, but I'm really not sure. Any assistance would be fantastic. :) AI: Hint: Observe that the first $e^{x^2}$ factors out of the integral. After removing it, notice that what you have looks sort of like the output of a product rule. Can you work backwards to find the product?
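Working backwards as the hint suggests, one candidate antiderivative (my reading of the hint, not stated in the answer) is $F(z)=e^{x^2-z^2}\sin(2xz)$. A central-difference sanity check that $F'(z)$ matches the integrand, at an arbitrary sample point:

```python
from math import exp, sin, cos

x = 0.7  # treat x as a constant parameter

def F(z):
    # candidate antiderivative: e^{x^2} * e^{-z^2} sin(2xz)
    return exp(x * x - z * z) * sin(2 * x * z)

def integrand(z):
    return exp(x * x - z * z) * (2 * x * cos(2 * x * z) - 2 * z * sin(2 * x * z))

z, h = 0.3, 1e-6
deriv = (F(z + h) - F(z - h)) / (2 * h)   # numerical F'(z)
print(deriv, integrand(z))                # agree to high precision
```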
H: A question from GRE math sub 9367, problem 59 Two subgroups H and K of a group G have orders 12 and 30, respectively. Which of the following could NOT be the order of the subgroup of G generated by H and K? A. 30 B. 60 C. 120 D. 360 E. Countable infinity A is the answer because H, with order 12 that doesn't divide 30, can't be a subgroup of K. But anybody can help me construct a concrete example of E, the subgroup generated by H and K with order of countable infinity? AI: With $\alpha=\frac{2\pi}{12}$ and $\beta=\frac{2\pi}{30}$, let $H$ be the subgroup of $SO(3,\mathbb R)$ generated by $$\begin{pmatrix}\cos\alpha&\sin\alpha&0\\ -\sin\alpha&\cos\alpha&0\\ 0&0&1\end{pmatrix} $$ and $K$ by $$\begin{pmatrix}1&0&0\\ 0&\cos\beta&\sin\beta\\ 0&-\sin\beta&\cos\beta\end{pmatrix}. $$ Together they generate a group that is not among the few well-known finite subgroups of $SO(3,\mathbb R)$. On the other hand, as a finitely generated group, it cannot be bigger than countably infinite.
H: Integrals of continuous functions (as an approximation of step functions) I need to: Show that a continuous function $f \colon [a,b] \to \mathbb{R}$ with the properties $\forall x \in [a,b]\ f(x) \ge 0$ and $\int_a^b f = 0$, must be identically 0. Now I can see why this is true. It's continuous, hence 'smooth' between any two distinct points, meaning the integral would not be equal to 0 unless the function is 0 itself, right? But I'm not quite sure how to translate this. AI: Let's assume that $f(x_1)>0$ for some $x_1\in(a,b)$, and let $\varepsilon=\frac12f(x_1)$ for that $x_1$. As $f$ is continuous, there is a $\delta\in\mathbb R_{>0}$ such that $|f(x)-f(x_1)|<\varepsilon$ (hence $f(x)>\varepsilon$) for any $x\in(x_1-\delta,x_1+\delta)$; shrinking $\delta$ if necessary, we may assume $(x_1-\delta,x_1+\delta)\subset(a,b)$. So now $$\int_{x_1-\delta}^{x_1+\delta}f(x)\,dx>\int_{x_1-\delta}^{x_1+\delta}\varepsilon\, dx=2\delta\varepsilon>0$$ and \begin{align} \int_a^bf(x)\,dx&=\int_a^{x_1-\delta}f(x)\,dx+\int_{x_1-\delta}^{x_1+\delta}f(x)\,dx+\int_{x_1+\delta}^{b}f(x)\,dx\\ &\ge 0+2\delta\varepsilon+0>0, \end{align} contradicting $\int_a^b f=0$. Note: if $f$ were positive only at $x_1=a$ or $x_1=b$, then by continuity $f$ would also be positive at some point strictly between $a$ and $b$, so assuming $x_1\in(a,b)$ loses no generality.
H: What is meant by 'the completion of Z'? In the first chapter of Algebraic Number Theory (lecture notes collected by Cassels-Fröhlich), page 28 has the following paragraph: "We suppose now that $k$ is a finite field of characteristic $p$ with $q=p^m$ elements. Denote by $\bar{\mathbb{Z}}$ the completion of $\mathbb{Z}$ with respect to the topology defined by the subgroups $n\mathbb{Z}$ ($n>0$). Then $\Gamma(\bar{k}^s/k)$ is an isomorphic copy of $\bar{\mathbb{Z}}$ under the map $$v \mapsto w_q^v$$ where $$\alpha w_q = \alpha^q."$$ $\Gamma(\bar{k}^s/k)$ refers to the Galois group of the maximal seperable extension of $k$ over $k$, and given an element $\sigma$ of the Galois group Fröhlich writes $x\sigma := \sigma(x)$. Firstly, I'm not positive what it means when it says the topology defined by those subgroups; is it saying the topology generated by taking those sets as a basis? Even if so, I don't see what the completion here would be (i.e. how it could be described), and if the elements aren't integers, I don't see how one 'exponentiates' the automorphism $w_q$. AI: Yes, the ideals $n\mathbf Z$ are taken as a basis of neighborhoods of $0$ (it is sufficient to describe a neighborhood basis of $0$ in a topological group, by translation). A Cauchy sequence in this topology is a sequence $\{a_k\}$ of integers such that, modulo any integer $n>0$, $a_k-a_l$ is eventually congruent to $0$. A more abstract definition of the completion (not using Cauchy sequences) is as the inverse limit $\widehat{\mathbf Z} = \varprojlim \mathbf Z/n\mathbf Z$. The group $\widehat{\mathbf Z}$ is an example of a profinite group (an inverse limit of finite groups). It is an uncountable topological group, which is compact and totally disconnected (somewhat like a Cantor set). By the Chinese Remainder Theorem, it is actually isomorphic to the direct product of all $\mathbf Z_p$'s, the additive groups of the $p$-adic integers, over all primes $p$. 
In the category of profinite abelian groups, $\widehat{\mathbf Z}$ plays the role of $\mathbf Z$, being the profinite abelian group freely generated by a single element (sometimes called a "topological generator"). This generator is the image of $1$ under the canonical map from $\mathbf Z$ to its completion (or, if you prefer, the constant Cauchy sequence $\equiv 1$). Under your setup, the Frobenius automorphism is the topological generator. It is a deep and important fact that the absolute Galois group of a finite field is canonically isomorphic with $\widehat{\mathbf Z}$ (as anon explains). This should be very surprising if you are used to Galois theory over a field like $\mathbf Q$. Indeed, the absolute Galois group of $\mathbf Q$ is an incredibly complicated object, very far from being abelian, let alone of having a simple explicit description!
H: what's the difference between a rational number and an irrational number? I tried to understand the difference between rational numbers and irrational numbers. I understand what is a rational number (a number that can be expressed as the ratio of two integers p/q). what makes an irrational number, irrational? how do you prove in a simple way that an irrational number is irrational? why does the fact that the ratio of two numbers that can be divided by 2 is irrational? (am I right?) I couldn't understand the chosen answer in What's the difference between rationals and irrationals - topologically? so please bear in mind that my math skills and understanding are currently weak (understatement) and I'm working to improve them. Thanks! AI: Joke (but true): The difference between a rational number and an irrational number is irrational. Serious answer: Your question already expressed it. A rational number can be written $\frac mn$ for some integer $m$ and some positive integer $n$. An irrational number is a real number that cannot be written like that. To show that a number is rational, the most common approach by far is to find $m$ and $n$, and prove that the number in fact equals their ratio. To show that a number is irrational is often a good deal harder, and is usually done using some sort of proof by contradiction. For example, it took a long time for mathematicians to even prove that $\pi$ is irrational. According to https://mathoverflow.net/questions/40145/irrationality-of-pie-pipi-and-epi2, no one even knows whether $\pi^{\pi^{\pi^\pi}}$ is an integer, let alone whether it is rational (but just about anyone would bet that it's irrational). It turns out that in several senses, almost all real numbers are irrational, and in fact even transcendental (a nastier sort of beast). 
There are also various techniques available for manufacturing great gobs of irrational (and even transcendental) numbers, but most of the numbers people are actually interested in are either trivially rational, trivially algebraic (not transcendental), or mysterious—no one knows for certain whether they are rational or irrational. Part of the reason for this is that while it's very easy to put together rational numbers to get more rational numbers, you can't really put together irrational numbers to get more irrational numbers in very many ways. For example, the sum or product of two rational numbers is always rational, but the sum or product of two irrational numbers may be rational.
H: Why would $f_n(x) = (\lfloor 2^nf(x)\rfloor/2^n)\wedge n$ converge to $f(x)$? Why would $$f_n(x)=\frac{\lfloor 2^nf(x)\rfloor}{2^n}\land n$$ converge to $f(x)$? I saw this step in the proof of change of variable formula in Rick Durrett's Probability Theory and Examples. AI: Write $f(x)$ in base two. Multiplying by $2^n$ shifts the binary point $n$ places to the right, taking the integer part removes everything to the right of the moved binary point, and dividing by $2^n$ moves the binary point back to its original position. The net effect is to keep only the first $n$ places to the right of the binary point. Clearly, then, $$\left\langle\frac{\lfloor 2^nf(x)\rfloor}{2^n}:n\in\Bbb N\right\rangle$$ converges to $f(x)$. And there is certainly an $m\in\Bbb N$ such that $f(x)<m$, so that taking the minimum with $n$ has no effect when $n\ge m$.
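A quick numerical illustration at a fixed point, taking $f(x)=\pi$ (an arbitrary choice): for small $n$ the cap at $n$ is active, and once $n$ exceeds $f(x)$ the approximation is the dyadic truncation, within $2^{-n}$ below the true value.

```python
from math import floor, pi

def f_n(value, n):
    """The dyadic approximation min(floor(2^n f(x)) / 2^n, n)."""
    return min(floor(2 ** n * value) / 2 ** n, n)

approx = [f_n(pi, n) for n in range(1, 31)]
print(approx[:6])  # capped at n for n = 1, 2, then dyadic truncations of pi
```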
H: Show that $x$ is square free iff for any $y,z$ positive integers $x=yz \Rightarrow \mathrm{hcf}(y,z) = 1$ Show that $x$ is squarefree if and only if $$x = yz\Rightarrow\mathrm{hcf}(y,z) = 1$$ for all positive integers $y$ and $z$. I have tried using coprime factorisation, leading to $$1 = jy + kz,$$ but I can't get any further. Help appreciated. AI: $x$ is not squarefree: Then $x=a^2b=a\cdot ab$ for some $a,b\in\mathbb Z, a\geq 2$, but $\mathrm{hcf}(a,ab)=a\neq 1$. $x$ is squarefree: $\mathrm{hcf}(y,z)\mid y, z$, this implies $(\mathrm{hcf}(y,z))^2\mid yz=x$, so $\mathrm{hcf}(y,z)=1$.
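Both directions can be machine-checked for small $x$ (an exhaustive sketch; the range 1–300 is arbitrary):

```python
from math import gcd

def squarefree(x):
    # no square d*d with d >= 2 divides x
    return all(x % (d * d) for d in range(2, int(x ** 0.5) + 1))

def coprime_factorizations(x):
    # every factorization x = y * z has gcd(y, z) = 1
    return all(gcd(y, x // y) == 1 for y in range(1, x + 1) if x % y == 0)

ok = all(squarefree(x) == coprime_factorizations(x) for x in range(1, 301))
print(ok)  # True
```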
H: What is the proof that SVD can be used to solve the least squares problem with norm equality constraint? I've seen it claimed that the solution to the minimization problem: $$\begin{align*} \arg \min_{b} \quad & {\left\| A b \right\|}_{2}^{2} \\ \text{subject to} \quad & {\left\| b \right\|}_{2} = 1 \end{align*}$$ is given by first finding the singular value decomposition of $A$, $$A = U \Sigma V^*$$ and then taking the column of $V$ corresponding to the smallest singular value. Can someone present a proof that this is so? AI: Norm $\| \cdot \|$ is invariant under unitary transformation so: $$\|Ab\| =\| U\Sigma V^* b\| = \|\Sigma b'\|$$ Where $b' = V^* b$, so $\|b'\| = \|V^* b\| = \|b\| = 1$. Next we have that: $$\text{argmin}_b \|\Sigma V^* b\| = V\text{argmin}_{b'} \| \Sigma b' \|$$ This is because $V^*$ maps unit sphere onto unit sphere. And that $b'$ which minimizes $\|\Sigma b'\|$ is $(0,\dots,0,1)^T$, assuming the singular values on the diagonal of $\Sigma$ are ordered decreasingly, so that the smallest one comes last. Finally $V (0,\dots,0,1)^T$ is equal to the last column of $V$.
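A numeric illustration, assuming NumPy is available (`np.linalg.svd` returns $V^*$ as `Vh` with singular values sorted descending, so the minimizer is the last row of `Vh`, i.e. the last column of $V$): the achieved minimum is the smallest singular value, and random unit vectors never beat it.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))              # arbitrary random 6x4 matrix

U, s, Vh = np.linalg.svd(A)              # A = U @ diag(s) @ Vh, s sorted descending
b = Vh[-1]                               # last row of V^* = last column of V

best = np.linalg.norm(A @ b)             # should equal the smallest singular value

# compare against many random unit vectors: none should do better
trials = rng.normal(size=(1000, 4))
trials /= np.linalg.norm(trials, axis=1, keepdims=True)
worse = all(np.linalg.norm(A @ u) >= best - 1e-9 for u in trials)
print(best, s[-1], worse)
```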
H: Can someone explain the solution to this statement? Say C: set of courses P(x,y): 'x is a prerequisite for course y' statement: 'some courses have several prerequisites' symbolically: ∃ x ∈ C, ∃ y ∈ C, ∃ z ∈ C, P(y, x) ∧ P(z, x) ∧ y ≠ z I don't really understand how you get the symbolic expression from the verbal expression. Also, might there be a simpler way of writing this in logical notation? In addition! How would you write this: No course has more than two prerequisites. Thank you. AI: The statement $$\exists x\in C\,\exists y\in C\,\exists z\in C\Big(P(y,x)\land P(z,x)\land y\ne z\Big)$$ says that there is a course $x$ such that there are courses $y$ and $z$ that are prerequisites for $x$ and are different courses. In less convoluted language, this says that there is a course $x$ that has at least two prerequisites, here called $y$ and $z$. The $y\ne z$ clause ensures that $y$ and $z$ aren’t just two names for the same course, i.e., that we really do have two prerequisites here, not one with an alias. The word several in the original statement is being interpreted as at least two. If we defined several to mean at least three, we’d need a more complicated expression: $$\exists x\in C\,\exists y\in C\,\exists z\in C\,\exists w\in C\Big(P(y,x)\land P(z,x)\land P(w,x)\land y\ne z\land y\ne w\land z\ne w\Big)$$ Here the $y\ne z\land y\ne w\land z\ne w$ part says that no two of the prerequisites $y,z$, and $w$ are really the same course: we really do have three distinct prerequites here for the course $x$.
H: To what extent can I square both sides of an absolute equation? I am working on some absolute value problems like the following: $$\begin{align} & {|x-4|} \lt 1 \\ & 1 \le |x| \le 4 \\ & |x+3| = |2x+1| \end{align}$$ Now, for each of these, I simply squared both sides to get rid of the absolute value and then continued solving from there. Now my question is: when can I not do this and what is the alternative if I can't? Thanks a bunch! AI: The order of a relation is preserved when you apply a strictly monotonically increasing function (for $\leq, \geq$ you can drop the "strictly"). $$f: x \mapsto x^2$$ is strictly monotonically increasing on $[0, \infty)$, so you can square whenever both sides are guaranteed to be $\geq 0$.
H: Why is $|z-a|=\rho$ equivalent to $|z|^2-a(z+\overline{z})=\rho^2-a^2$? I have some problems to understand the following statement from a book about reflections in poincare half-plane modell: For $z,\overline{z} \in \mathbb{C}$ and $ a,\rho \in \mathbb{R}$ we have: $$|z-a|=\rho \quad \text{ equivalent to } \quad |z|^2-a(z+\overline{z})=\rho^2-a^2$$ My wrong (!) approach (first implication isn't correct): \begin{aligned} |z-a|&=\rho\\ \Rightarrow (z-a)^2&=\rho^2\\ \Leftrightarrow z^2-2az+a^2&=\rho^2&\\ \Leftrightarrow z^2-z(a+a)&= \rho^2-a^2\\ \Leftrightarrow z^2-z(a+a+bi-bi)&= \rho^2-a^2\\ \Leftrightarrow z^2-z(z+\overline{z})&= \rho^2-a^2 \end{aligned} It looks fine, except for two points: How do I get $|z|^2$ instead of $|z|$ and $a(z+\overline{z})$ instead of $z(z+\overline z)$? Any ideas are welcome. AI: Recall that $w\bar w=|w|^2$. Then let $w=z-a$. Your first implication is incorrect, be careful. ADD Since $\rho >0$, $|z-a|=\rho \iff |z-a|^2=\rho^2$. Note also that since $a\in\Bbb R$, $\overline{z-a}=\overline z-a $.
H: Is this symbolic expression correct? Say C: set of courses P(x,y): 'x is a prerequisite for course y' statement: 'some courses have the same prerequisites' Is this symbolic expression correct? If not, how would I write this with implication? Also how would I write this without implication? ∃ x ∈ C, ∃ y ∈ C, ∃ z ∈ C, P(y, x) ∧ P(z, x) ∧ y = z Thanks again! AI: What you’ve written simply says that there is a course with a prerequisite (that you happen to have mentioned twice, once as $y$ and once as $z$). First we need to say that there are (at least) two courses: $$\exists x\in C\,\exists y\in C(x\ne y)\;.$$ Now we need to add something to that to say that these courses $x$ and $y$ have a prerequisite in common; I’ve included an answer, but I’ve left it spoiler-protected. $$\exists x\in C\,\exists y\in C\,\exists z\in C\Big(x\ne y\land P(z,x)\land P(z,y)\Big)$$ My answer does not use an implication, and I don’t think that there is a really natural way of saying this that does use an implication.
H: Placing different color balls into distinguishable boxes In how many ways can you place 4 red balls, 5 blue balls, and 6 yellow balls in 4 distinguishable boxes? (Balls with same color are indistinguishable) AI: HINT: If you had only the $4$ red balls, this would be a standard stars-and-bars problem; the same is true if you had only the $5$ blue balls or only the $6$ yellow balls. Solve each of these three problems separately, and combine the solutions appropriately. Note that the three problems really are independent of one another.
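Following the hint, each colour is an independent stars-and-bars problem, and the three counts multiply. A short sketch (the numeric total is my own computation, so verify it against your own):

```python
from math import comb

# Stars and bars: identical balls of one colour into 4 distinguishable
# boxes gives C(balls + boxes - 1, boxes - 1) arrangements; the three
# colours are independent, so the counts multiply.
def distributions(balls, boxes=4):
    return comb(balls + boxes - 1, boxes - 1)

total = distributions(4) * distributions(5) * distributions(6)
print(total)  # 35 * 56 * 84 = 164640
```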
H: Is $f =g$ when $g=\limsup f_n$? $f$ and $g$ are two functions. Is $f =g$ $\mu$-a.e. when $g=\limsup_{n\to\infty} f_n$? We have $f_n \to f$ $\mu$-a.e., and $f_n$ is measurable for all $n\in\mathbb{N}$. I think they are equal, because at each $x$ where $f_n(x) \to f(x)$ we have $g(x)=\limsup_n f_n(x) = f(x)$. AI: Yes these are the same. The pointwise limit is the same as the pointwise limsup everywhere where the limit exists.
H: Finding $\int \frac{x^2}{(a^2-x^2)^{\frac{3}{2}}}dx$ using trigonometric substitution. Where did I go wrong? Evaluate the following integral using trigonometric substitution $$\int \frac{x^2}{(a^2-x^2)^{\frac{3}{2}}}dx$$ I used the substitution $x=a \sin(u)$, then $dx = a \cos(u) du$. The integral then becomes: $$\int \frac{a^2 \sin^2(u) a \cos(u)}{(a^2-a^2 \cos^2(u))^{\frac{3}{2}}}du = \int \frac{a^3 \sin^2(u) \cos(u)}{(a^2 \sin^2(u))^{\frac{3}{2}}}du =\int \frac{\cos(u)}{\sin(u)}du = \ln | \sin(u) | + C$$ The last equality comes from the substitution $v=\sin(u)$. Now from the first substitution we have $x=a\sin(u)$ and thus $\sin(u) = \frac{x}{a}$. This gives us $$\int \frac{x^2}{(a^2-x^2)^{\frac{3}{2}}}dx = \ln \left| \frac{x}{a}\right| + C = \ln|x| + C'$$ Where $C' = C - \ln|a|$. This however is of course not correct (unless I am missing something...). Can anyone tell me where I went wrong on this one? Thanks a lot! AI: The bottom should be $a^3\cos^3 u$, so you are essentially integrating $\tan^2 u$, that is, $\sec^2 u-1$.
H: Differentiating both sides of a non-differential equation I'm working on solving for $t$ in the expression $$\ln t=3\left(1-\frac{1}{t}\right)$$ and although I can easily tell by inspection and by graphing that $t=1$, I'd like to prove it more rigorously. I got stuck trying to solve this algebraically, so I tried to take the derivative of each side with respect to $t$ to get $$\frac{1}{t}=3\left(\frac{1}{t^{2}}\right).$$ However, this implies that $t=3$, which is incorrect. Why can't I take the derivative of each side like this? What am I doing wrong or misunderstanding? AI: To answer the general question of "Why can't [one] differentiate each side [of an equation]?": Your original equation, of the form $f(t) = g(t)$, acts as a condition (i.e., is only true for some real $t$, in this case finitely many), not as an identity (true for all $t$ in some open interval). When you differentiate a function $f$ at one point $a$, you implicitly use the values of $f$ in some neighborhood of $a$. Since your $f$ and $g$ are not equal in any open neighborhood, you can't expect differentiating to yield a new true condition. In case an example clarifies, take $f(t) = t$ and $g(t) = 0$. The equation $t = 0$ certainly has a solution, but differentiating both sides gives $1 = 0$. By contrast, it's safe to differentiate both sides of, e.g., $\cos(2t) = \cos^2 t - \sin^2 t$, since this equation is true for all real $t$.
H: Generators of permutation group I want to prove that $S_n$ is generated by the set of transpositions $\{(1,2),(1,3), \ldots , (1,n)\}$ using that $(k,j) = (1,j)(1,k)(1,j)$, but I don't know how to continue. I know this is an easy problem but I don't know what to do. AI: Any permutation $\sigma \in S_n$ can be written as the product of disjoint cycles. Any cycle $(a_1, \dotsc, a_m) \in S_n$ can be written as the product of transpositions by noting $$ (a_1, a_2, a_3,\dotsc, a_m) = (a_m, a_1)(a_{m-1}, a_1)\dotsc(a_3, a_1)(a_2, a_1). $$ Therefore, the set of all transpositions generates $S_n$. Since $$ (1, j)(1, k)(1, j) = (j, k), $$ the set generated by $(1, 2), \dotsc, (1, n)$ contains all transpositions. Therefore, the set generated by $(1, 2), \dotsc, (1, n)$ is $S_n$.
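Both steps can be checked by brute force in a small case, representing permutations as tuples and using $0$ in place of $1$ (plain Python, no libraries):

```python
from itertools import permutations

n = 5

def transposition(a, b):
    t = list(range(n))
    t[a], t[b] = t[b], t[a]
    return tuple(t)

def compose(p, q):
    # (p o q)(x) = p(q(x))
    return tuple(p[q[x]] for x in range(n))

# The identity (1,j)(1,k)(1,j) = (j,k), with 0 playing the role of 1.
for j in range(1, n):
    for k in range(1, n):
        if j != k:
            t = compose(transposition(0, j),
                        compose(transposition(0, k), transposition(0, j)))
            assert t == transposition(j, k)

# The closure of {(0,1),...,(0,n-1)} under composition is all of S_n.
gens = {transposition(0, i) for i in range(1, n)}
group = gens | {tuple(range(n))}
while True:
    new = {compose(p, q) for p in group for q in gens} - group
    if not new:
        break
    group |= new

assert group == set(permutations(range(n)))
print(len(group))  # 5! = 120
```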
H: Why if $n \mid m$, then $(a^n-1) \mid (a^m-1)$? My Number Theory book says that for $n, m$ be positive integers and $a>1$, then $(a^n -1)\mid(a^m -1)$ if and only if $n\mid m$. I understand the proof for only if part, but in if part the autor says "it is clear". However a tried to prove that but a get stuck. Can you give a hint? AI: So you want to show $(a^n-1) \mid (a^m-1)$ if $n \mid m$. If $n \mid m$ then $m = nk$ for some integer $k$, so you want to show $(a^n - 1) \mid (a^{nk}-1)$. Now use the fact that $1^k = 1$ and recall that there is a factorisation for $x^k - y^k$; one of the terms will be $a^n - 1$. Let me know if you need further clarification.
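A numerical sanity check of the "if" direction, together with the factorization the hint points to, $a^{nk}-1=(a^n-1)\left(a^{n(k-1)}+\cdots+a^{n}+1\right)$:

```python
# Whenever n divides m (m = n*k), a^n - 1 divides a^m - 1,
# via the geometric-sum factorization of x^k - 1 with x = a^n.
for a in range(2, 8):
    for n in range(1, 6):
        for k in range(1, 6):
            m = n * k
            assert (a**m - 1) % (a**n - 1) == 0
            assert a**m - 1 == (a**n - 1) * sum(a**(n * i) for i in range(k))
print("ok")
```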
H: Is the Koch Snowflake a Compact Space? I am taking an introductory topology class, and we recently defined the notion of compactness. Earlier in the chapter, the Koch snowflake is described, and I am wondering: is the Koch snowflake a compact set? Intuitively I think the answer is yes: it is an infinite union of closed sets (not necessarily closed) and its limiting area is bounded, so it is a bounded and closed set of $\mathbb{R}^{2}$. Am I just waving my hands, or is this a solid argument? AI: The union of infinitely many closed sets is not necessarily closed. However, the Koch snowflake is compact: it’s the range of a continuous function defined on the compact set $[0,1]$, and continuous functions preserve compactness.
H: Using the partition theorem in probability questions So there is one box and it contains 1 white ball and 1 red ball. When a ball is drawn, it is replaced and another ball of that colour is added to the box. $A_i$ : the event that the ball is red on round $i$ So for example, $P(A_1)$ is $\frac 12$ and $P(A_2|A_1)=\frac 23$ since another red ball was added after getting a red ball on round one. I must use the multiplication lemma to find the probability that the first ball chosen is white, the second ball is red and the third is white. Then I must find the probability that exactly one of the first three balls chosen is red. The way in which I have calculated the first part is: $P(A^{c}_1 ∩ A_2 ∩ A^{c}_3) = P(A^{c}_1)P(A_2|A^{c}_1)P(A^{c}_3|A^{c}_1∩A_2)$ which equals: $\frac 12 * \frac 13 * \frac 12 = \frac {1}{12}$ Now is this correct? I don't think I've used the multiplication lemma correctly in this question as I'm not sure what it is. AI: Yes, your first computation is correct: $P(A^{c}_1 \cap A_2 \cap A^{c}_3) = \frac 12 \cdot \frac 13 \cdot \frac 12 = \frac 1{12}$, and the factorization you wrote is exactly the multiplication rule. The multiplication rule is also known as the "chain rule": if $A, B, C$ are 3 events then $P(A,B,C)=P(A)P(B|A)P(C|A,B)$, which reduces to $P(A)P(B)P(C)$ if the events are independent. Note that "white, red, white" is not the same event as "exactly one red ball in 3 turns": the red ball can come up on any one of the 3 draws, so for the second part you must sum the probabilities of the three sequences red–white–white, white–red–white and white–white–red, each computed with the chain rule. (Because a drawn ball is replaced together with another ball of its colour, it is worth computing all three; if you do, you will find they are equal, so the order does not matter here, but that has to be checked rather than assumed.)
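An exact enumeration of this urn scheme with `fractions.Fraction` (a sketch; the `"W"`/`"R"` sequence notation is mine) confirms the $\frac1{12}$ and handles the second part:

```python
from fractions import Fraction

# Urn starts with 1 white, 1 red; after each draw the ball is replaced
# together with another ball of the drawn colour.
def prob(seq):
    w, r = 1, 1
    p = Fraction(1)
    for c in seq:
        if c == "W":
            p *= Fraction(w, w + r)
            w += 1
        else:
            p *= Fraction(r, w + r)
            r += 1
    return p

print(prob("WRW"))  # 1/2 * 1/3 * 1/2 = 1/12
exactly_one_red = sum(prob(s) for s in ("RWW", "WRW", "WWR"))
print(exactly_one_red)
```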
H: $\sigma\mathcal C$ is the $\sigma$-algebra generated by $\mathcal C$. Show $\sigma\mathcal C\subset\sigma\mathcal D$ if $\mathcal C\subset\mathcal D$. If $\mathcal{C}$ and $\mathcal{D}$ are two collections of subsets of $E$. How do I prove the following: $$\mathcal{C}\subset\mathcal{D}\implies\sigma\mathcal{C}\subset\sigma\mathcal{D}?$$ AI: You could also use the fact that $\sigma\mathcal{A}$ is the smallest $\sigma$-algebra containing $\mathcal{A}$. If you know this fact then you can derive the desired result in a few easy steps: $\sigma\mathcal{D}$ is the smallest $\sigma$-algebra containing $\mathcal{D}$; $\mathcal{C}$ is contained in $\mathcal{D}$; $\sigma\mathcal{D}$ is a $\sigma$-algebra containing $\mathcal{C}$; $\sigma\mathcal{C}$ is the . . . Therefore $\sigma\mathcal{C}$ is contained in $\sigma\mathcal{D}$.
H: Finding the CDF of a random variable that has uniform distribution of outcomes. I want to find the CDF of a random variable $X(\omega) = e^\omega$, with the sample space $\Omega = [-1,1]$. The outcomes of $\Omega$ are uniformly distributed. What I've managed so far is to get to this point: $F_X(x) = P(X\le x) = P(e^\omega \le x) = P(\omega \le \ln x)$ But I don't know where to go from here. I can't find anything about it in my books or from what I could gather on the net. All the help is welcome! AI: Let $W$ be uniformly distributed on $[-1,1]$. Let $X=e^W$. We want the cdf of $X$. We have $$F_X(x)=\Pr(X\le x)=\Pr(e^W\le x)=\Pr(W\le \ln x).$$ So far, apart from a minor notational change, this is exactly what you wrote. If $\ln x\lt -1$, then $\Pr(W\le \ln x)=0$. Thus $F_X(x)=0$ if $x\lt e^{-1}$. If $\ln x\ge 1$, then $\Pr(W\le \ln x)=1$. Thus $F_X(x)=1$ if $x\ge e$. For $-1\le \ln x\lt 1$, we have $\Pr(W\le \ln x)=\frac{\ln x-(-1)}{2}$. Thus in the interval $[e^{-1},e)$ we have $F_X(x)=\frac{\ln x+1}{2}$. If the density function $f_X(x)$ is desired, differentiate.
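A Monte Carlo sanity check of the derived cdf (plain standard-library Python; the tolerance is comfortably loose for 200,000 samples):

```python
import math
import random

# The cdf derived above for X = e^W, W uniform on [-1, 1].
def F(x):
    if x < math.exp(-1):
        return 0.0
    if x >= math.e:
        return 1.0
    return (math.log(x) + 1) / 2

random.seed(0)
samples = [math.exp(random.uniform(-1, 1)) for _ in range(200_000)]
for x in (0.5, 1.0, 1.5, 2.0, 3.0):
    empirical = sum(s <= x for s in samples) / len(samples)
    assert abs(empirical - F(x)) < 0.01
print("cdf matches simulation")
```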
H: Partial Fractions and power of a factor with $x^2$ I just started working with partial fractions and hit a wall with splitting this one: $$ \frac{3x^2 + 2x + 1}{(x + 2)(x^2 + x + 1)^2} $$ I get here: $$ \frac{Ax + B}{(x^2 + x + 1)^2} + \frac{Cx + D}{x^2 + x + 1} + \frac{E}{x + 2}$$ Then on to: $$ (Ax + B)(x + 2) + (Cx + D)(x^2 + x + 1)(x + 2) + E(x^2 + x + 1)^2 $$ I find that $E = 1$ by using $x = -2$. I am unsure how to proceed from here. AI: You already have \begin{align*} &\frac{3x^2 + 2x + 1}{(x + 2)(x^2 + x + 1)^2}\\ &= \frac{Ax + B}{(x^2 + x + 1)^2} + \frac{Cx + D}{x^2 + x + 1} + \frac{E}{x + 2}\\ &= \frac{(Ax + B)(x + 2) + (Cx + D)(x^2 + x + 1)(x + 2) + E(x^2 + x + 1)^2}{(x+2)(x^2+x+1)^2}. \end{align*} Multiplying both sides by $(x+2)(x^2+x+1)^2$ we have $$3x^2 + 2x + 1 = (Ax + B)(x + 2) + (Cx + D)(x^2 + x + 1)(x + 2) + E(x^2 + x + 1)^2.$$ The right hand side is a quartic equation. If you expand all the brackets, you can collect all the like terms to get an equation as follows: $$3x^2 + 2x + 1 = k_4x^4 + k_3x^3+k_2x^2+k_1x+k_0$$ where $k_i$ depends on $A, B, C, D,$ and $E$. Then you can compare the coefficients of $1, x, x^2, x^3,$ and $x^4$ to get five equations in five unknowns which will be enough to determine $A, B, C, D,$ and $E$.
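For reference, carrying out the coefficient comparison gives $A=0$, $B=-1$, $C=-1$, $D=1$, $E=1$ (my own computation, so check it against yours); exact evaluation at a handful of points then verifies the resulting identity:

```python
from fractions import Fraction

# Candidate coefficients from comparing powers of x.
A, B, C, D, E = 0, -1, -1, 1, 1

def lhs(x):
    return Fraction(3*x**2 + 2*x + 1, (x + 2) * (x**2 + x + 1)**2)

def rhs(x):
    q = x**2 + x + 1
    return Fraction(A*x + B, q**2) + Fraction(C*x + D, q) + Fraction(E, x + 2)

# After clearing denominators both sides are quartic polynomials, so
# agreement at more than five points forces the identity to hold.
for x in (0, 1, 2, 3, -1, 5):
    assert lhs(x) == rhs(x)
print("A,B,C,D,E =", (A, B, C, D, E))
```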
H: Find the number of possible triangles An interview question. We are given three positive integers p, q, r such that: p + q + r = 27 and p<q<r. Find the number of triangles that are possible using p, q, and r. AI: HINT: Since $\frac{27}3=9$, it’s clear that $r$ must be at least $10$. In order for $p,q$, and $r$ to be the sides of a non-degenerate triangle, it’s necessary and sufficient that $p+q>r$, so $r$ cannot be more than $13$. Thus, $10\le r\le 13$, and it’s not hard to count the possibilities for each value of $r$.
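The hint can be confirmed by brute force (a short sketch in plain Python):

```python
# Positive integers p < q < r with p + q + r = 27 forming a
# non-degenerate triangle (p + q > r).
triangles = [(p, q, r)
             for r in range(1, 27)
             for q in range(1, r)
             for p in range(1, q)
             if p + q + r == 27 and p + q > r]

# Every solution indeed has 10 <= r <= 13, as the hint argues.
assert all(10 <= r <= 13 for _, _, r in triangles)
print(len(triangles))
```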
H: How to prove $x-y = x+(-y)$ in ring theory. Okay, I have talked with a lot of people about this silly question. And I have thought about this way longer than is good for me. Everybody seems to disagree with me, and that is the reason I think I can't get this out of my head, because I feel like, purely mathematically, I'm right. If you look at the expression $x - y$ in ring theory, what does this mean? Okay of course, everybody here knows/agrees that this means $x+(-y)$, where $-y$ is the additive inverse of $y$ and $+$ is the binary operation "addition". So what is your problem, Kasper? Well, if you just look at the definitions in ring theory, then I wouldn't be able to conclude this. There is no definition saying that $x-y=x+(-y)$ (in the book I read about ring theory). What I am able to conclude is that if I define $z=-y$, then: $$x{-y}=xz=x⋅z=x⋅-y$$ because it is defined (at least in my book) that $x\cdot y =xy$ where "$⋅$" is the binary operation "multiplication". As the symbol "$-$" is only defined in the context of $-x$ being the additive inverse of $x$, this is the only conclusion I can logically find. Talking with other mathematicians I get the impression I'm a little bit alone in this view, but I think I should be able to justify every step in a math proof, using definitions/theorems/axioms etc. And I shouldn't justify my step because I already know since high school what is meant by $x-y$. Being able to trace back every math proof to axioms and definitions is part of the beauty of math in my opinion. So, I would say you need to define "$-$" in the context of $-x$ (unary operation) where it denotes the additive inverse, and define "$-$" in the context of $x-y$ (binary operation) where it denotes $x+(-y)$. But I talk to my teacher, and he says no, in ring theory you only need "$-$" as a unary operation. If I talk here in the chatroom, they tell me the same. Okay, going even deeper into this problem than is good for me.
In a comment on another question: Suppose you wrote the axioms for a group using "addition" as the operation suppressed instead of "multiplication". That is, every group has an "addition" operation. Or equivalently xy means (x+y). Then, the inverse axiom for groups says "for all x, there exists a -x such that x-x=0." Again, I've suppressed the addition operation, so x-x=0 implicitly best gets read as meaning (x+-x)=0. So, no, you don't need to define a binary operation of subtraction for this sort of problem. You just need to recognize that x(y-z) has multiplication suppressed first and addition suppressed second. This makes a little bit of sense to me, but still. If you look at groups, and write $xy=x+y$, then you also write $x^{-1}$ instead of $-x$. And in ring theory they define $xy=x⋅y$ not $xy=x+y$ (at least in my book). AI: Edited to match revised question. When $x-y$ is defined, it is of course defined to be $x+(-y)$, but we often don’t bother to define it. After all, there is no actual need to define subtraction in a ring: it’s just a convenience, and you can do everything without it. Yes, a careful author of a textbook would explicitly define $x-y$ if he planned to use the notation, but failure to do so is a pretty minor failing: the definition is easy to pick up from context even if it’s not stated explicitly. I’d even go so far as to say that learning to pick things up from context is part of learning to read mathematics as it is actually written.
H: derivative of a summation with variable upper limit Is the following statement correct, and if yes, does it need to satisfy specific requirements to be correct: $${ d \over dt} \sum_{j=1}^{N(t)} f(t,j) = \sum_{j=1}^{N(t)} {df(t,j) \over dt} + f(t,N(t)) {dN(t) \over dt}$$ AI: There are two approaches for this problem: one is to define: $$f_j(t)=\left\{\begin{array}{ll}f(t,j)&j\le N(t)\\0&j>N(t)\end{array}\right.$$ Then $$\sum_{j=1}^{N(t)}f(t,j)=\sum_{j=1}^{\infty}f_j(t)$$ and continue as you already know: $$\frac d{dt}\sum_{j=1}^{N(t)}f(t,j)=\sum_{j=1}^{\infty}f_j'(t)$$ Beware of the points where each $f_j$ is not continuous. The other approach is to collect all points $\{t_i\}_{i\in I}$ at which $N(t)$ takes a jump, meaning $N(t_i^-)\ne N(t_i^+)$. If $\{t_i\}_{i\in I}$ is a well-ordered set, you can write $0<t_1<t_2<\cdots<t_k<\cdots<T$ (with $t_0=0$) and define $N_i=N(t)$ for $t_{i-1}<t<t_i$. Note that $N_i$ is well defined, as there are no jumps in $N$ between $t_{i-1}$ and $t_i$. So now you differentiate the function piece by piece: $$\left.\frac d{dt}\sum_{j=1}^{N(t)}f(t,j)\right|_{t_{k-1}<t<t_k}=\sum_{j=1}^{N_k}\frac{df(t,j)}{dt}$$ The derivative at each $t_k$ does not exist unless both one-sided limits exist and are equal.
H: Is the abelianization of a subgroup $H$ a subgroup of the abelianization of a group $G$? Let $G$ be a finite group and $H<G$. Then, is it true that $H^{ab} < G^{ab}$, that is, the abelianization of $H$ is a subgroup of the abelianization of $G$? To me, it would make sense if is was indeed true. However, I do not know exactly how to prove it. For any group $G$, we know that $G^{ab} = G/G' < G$, where $G'$ is the commutator subgroup of $G$. Thus, $H^{ab} < H < G$ and so by transitivity, $H^{ab} < G$ and so we obtain $H^{ab} < G$ and $G^{ab} < G$. I am stuck here... Thanks for your help. AI: No, the abelianization functor is not left exact. For instance, consider the group $G$ which is the free product of two copies of $\mathbf Z/2\mathbf Z$. It has the presentation $\langle a,b | a^2 = b^2 = 1\rangle$. Its abelianization is $(\mathbf Z/2\mathbf Z)^2$. But $G$ contains a copy of the abelian group $\mathbf Z$, generated by $ab$. (It is true, however, that abelianization is right exact, as it is left adjoint to the forgetful functor from the category of abelian groups to the category of groups.)
H: I have a recursively defined function, and another function involving powers of a matrix. How can I show that they are equal? The problem is Let $A$ be the $n \times n$ adjacency matrix of a graph $G=(V,E)$ on $n$ vertices, i.e. $A=(a_{ij})$ and $$a_{ij}=\begin{cases} 1 & ij\in E \\ 0 & ij\notin E \end{cases}$$ Show that the $i,j$ entry of $A^k$ is the number of $i-j$ walks in $G$ that use exactly $k$ edges. I'm almost done but I don't quite believe my solution. Here's what I have. Let $w_k(i,j)$ be the number of $i-j$ walks of length $k$. Clearly, $$w_{1}(i,j)=\begin{cases} 1 & ij\in E \\ 0 & ij\notin E \end{cases}$$ The number of $i-j$ walks of length $k$ is equal to the number of walks of length $k-1$ from $i$ to a vertex in $N(j)$, where $N(j)$ is the set of vertices adjacent to $j$, so we have $$w_k(i,j)=\sum_{x \in N(j)}w_{k-1}(i,x)$$ Now, $[A^k]_{ij}=[A^{k-1}A]_{ij}=[A^{k-1}]_i^T [A]_j$, where the subscripts denote columns of the matrix in brackets (since $A$ is symmetric and hence powers thereof are also symmetric). We can write this as $$[A^k]_{ij}=\sum_{l=1}^n[A^{k-1}]_{il}[A]_{lj}$$ And by the definition of $A$, this is equivalent to $$[A^k]_{ij}=\sum_{x\in N(j)}[A^{k-1}]_{ix}$$ Obviously these recursive definitions look very similar and I can't decide if I am done or not. How can I show that these are in fact the exact same function? AI: You have the right idea, but you’ve not really made it clear. What you want to do is show by induction on $k$ that $w_k(i,j)=[A^k]_{ij}$ for all $i,j\in\{1,\ldots,n\}$. This is clearly the case for $k=1$. If $k>1$, and if it holds for $k-1$, then $$\begin{align*} w_k(i,j)&=\sum_{\ell\in N(j)}w_{k-1}(i,\ell)\\ &=\sum_{\ell\in N(j)}[A^{k-1}]_{i\ell}&\text{induction hypothesis}\\ &=\sum_{\ell=1}^n[A^{k-1}]_{i\ell}[A]_{\ell j}&\text{since }[A]_{\ell j}=\begin{cases}1,&\ell\in N(j)\\0,&\ell\notin N(j)\end{cases}\\ &=[A^k]_{ij}\;, \end{align*}$$ and we’re home free.
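The induction can be sanity-checked on a small example by comparing matrix powers against a direct enumeration of walks (plain Python; the graph below is an arbitrary choice):

```python
from itertools import product

# A 4-cycle with one chord.
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
A = [[0] * n for _ in range(n)]
for u, v in edges:
    A[u][v] = A[v][u] = 1

def matmul(X, Y):
    return [[sum(X[i][l] * Y[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def walks(i, j, k):
    # count vertex sequences i = v_0, v_1, ..., v_k = j along edges
    return sum(all(A[p[t]][p[t + 1]] for t in range(k))
               for p in product(range(n), repeat=k + 1)
               if p[0] == i and p[-1] == j)

Ak = A
for k in range(1, 5):
    if k > 1:
        Ak = matmul(Ak, A)
    for i in range(n):
        for j in range(n):
            assert Ak[i][j] == walks(i, j, k)
print("matrix powers count walks for k = 1..4")
```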
H: Are power series in a normal matrix themselves normal? Are (convergent) power series in a normal matrix themselves normal? I have looked around for this result, and not found it. How might we prove it? AI: Yes, because a matrix is normal if and only if it is unitarily diagonalizable, we can simultaneously diagonalize a matrix and analytic functions of that matrix, given that said function is analytic in a domain containing the spectrum of the matrix. More concretely, if $X = S \Lambda S^{-1}$, with $S$ unitary, where $\Lambda$ is the diagonal matrix of eigenvalues, and if $$ f(z) = \sum_{n=0}^{\infty} a_n z^n $$ is a function that is holomorphic in some domain $\Omega$ containing all the eigenvalues of $X$, then $$ f(X) = \sum_{n=0}^{\infty} a_n (S\Lambda S^{-1})^n = \sum_{n=0}^{\infty} a_n S \Lambda^n S^{-1} = S f(\Lambda) S^{-1}$$ which makes $f(X)$ a matrix that is unitarily diagonalizable, hence normal.
H: Simple Newton's method problem Estimate the number of iterations of Newton's method needed to find a root of $f(x)=\cos(x)-x$ to within $10^{-100}$. The answer is $7$ iterations, but I have no idea how it was solved by my instructor. AI: The idea behind the reasoning is the quadratic convergence of Newton's algorithm (if the zero is simple). When you are near the zero $\alpha$, an iteration takes you from $\alpha + \delta$ to $$(\alpha + \delta) - \frac{f(\alpha+\delta)}{f'(\alpha+\delta)} \approx \alpha + \delta - \frac{f'(\alpha)\delta + f''(\alpha)\frac{\delta^2}{2}}{f'(\alpha) + f''(\alpha)\delta} \approx \alpha + \frac{f''(\alpha)}{2f'(\alpha)}\delta^2,$$ so each step roughly doubles the number of correct digits. If you start with approximately one correct digit, after seven steps, you have roughly $2^7 = 128$ correct digits.
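The digit-doubling estimate can be reproduced in high-precision arithmetic (a sketch assuming the `mpmath` library is available; the starting point $x_0=1$ is my choice):

```python
from mpmath import mp, mpf, cos, sin, fabs

mp.dps = 200  # work with 200 significant digits

# Newton iteration for f(x) = cos(x) - x, with f'(x) = -sin(x) - 1.
x = mpf(1)
for _ in range(7):
    x = x - (cos(x) - x) / (-sin(x) - 1)

residual = fabs(cos(x) - x)
print(residual < mpf('1e-100'))  # True: 7 steps suffice
```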
H: $\frac{dy}{d \theta} = {e^y\sin^2(\theta)\over {y\sec(\theta)}}$ Please help me solve the above differential equation. I'm confused as to the steps required to obtain the answer AI: Hint: The equation is separable, so rewrite and integrate both sides as: $$\displaystyle \int e^{-y}~ y ~dy = \int \cos \theta~ \sin^2 \theta ~d \theta$$
H: Prove that $x^3 + x^2 = 1$ has no rational solutions? Is this enough for a proof?: $$x^3+x^2 = 1$$ I would factor and get: $x^2(x+1) = 1$ I would show that $x = \sqrt1$, which is rational but then what else would I have to show? $x+1=1$ which gives me $x=0$ and since $x$ cannot equal to $0$ as this would make the statement false ($0$ times anything is $0$). Is it enough to simply state this falsity or is there another way to express it? Thanks! AI: By the rational root theorem, a rational root would have to be $x=1$ or $x=-1$, but neither works.
H: Suppose $g$ is even and let $h=f \circ g$. Is $h$ always an even function? I came across one of the following problems in my homework set: $$ \text{Suppose} \, g \, \text{is even and let} \, h=f \circ g. \text{Is} \, h \, \text{always an even function?}$$ I came to the conclusion through examples that "yes" the answer was true. Rather than just random examples, I also tried thinking about in different way. If we have a function $f$ and we compose it with $g$ and perform the even function test then the even function, $g$ will always evaluate to itself and which makes entire function equivalent. For example, let $g$ be $x^2$ and let $f$ be $\sin x$ then $f \circ g$ is $\sin (x^2)$ and $f(x) = f(-x)$ because $\sin (x^2) = \sin ((-x)^2)$. Is my second approach better than my first? Is there a different and more concrete way to do this? Thanks! AI: Here's my take on a concrete-as-can-be proof of the assertion: "Let $h$ be the composition $f\circ g$. Then by the definition of even function, $h$ is an even function if $h(-x) = h(x)$ for all $x$. Since $g$ is specified as an even function, we have that $g(-x) = g(x)$ for all $x$; therefore, since one property of a function is that $x=y\implies f(x)=f(y)$, by substitution we have $f(g(-x)) = f(g(x))$ for all $x$. But written in terms of the composition $h$, this is the statement that $h(-x) = h(x)$ for all $x$; in other words, that $h$ is an even function." This is essentially your argument made concrete; while various steps could be taken to put it into more formal proof syntax, this should be good enough for all intents and purposes. You can be more concrete than this (e.g., this could be turned into a truly formal proof, one that could be verified by a theorem prover), but it's difficult to really imagine a different approach to the problem that wouldn't be worse; all this one uses are the relevant defining properties of 'function' and 'even function'. 
As I mentioned in a comment, I encourage you to try and perform the analogous proof for the assertion that "if $g$ is an odd function and $f$ an arbitrary function, then $h=f\circ g$ is an odd function"; is the statement true or false, and why? If the statement is false, can you find conditions on $f$ to make it true? Working through this may give you a better sense for how to use these definitions in formal proofs.
H: Does my proof make sense? Theorem: For groups $(\Bbb R,+)$ and $(\Bbb R,*)$ (both only dealing with positive integers) there is a function $\phi$ that turns $(\Bbb R,+)\to(\Bbb R,*)$ and vice versa. Proof: Assume $(\Bbb R,+)\to(\Bbb R,*)$. So there is a function where elements $x_1,x_2$ going from additive operation to multiplicative operation where $x_1+x_2 \mapsto x_1 *x_2$. So there is some $\phi$ where $\phi(x_1*x_2)= \phi(x_1)*\phi(x_2)$ (here * is an operation) Take $\phi(x)=e^x$, so $e^{x_1+x_2}=e^{x_1}*e^{x_2}$. Now the inverse of $\phi$ is $\ln$, so $\ln(x_1 *x_2) = \ln(x_1)+\ln(x_2)$ I know there is a lot missing from the proof or at least it's not concrete by looking at it. I just need a little help in cleaning up the theorem and proof. AI: "Positive integers" should be "positive reals". Integers are whole numbers. Positive reals are usually denoted $\mathbb R_{>0}$. Your map, properly written, should go $(\mathbb R, +) \to (\mathbb R_{>0}, \times)$. Also, why are you assuming the outcome you are trying to prove? You're asked to provide a function doing the job; just write it down, as you did: $f(x) = e^x$. No need to assume anything. Other than that, it is fine (and quite concrete).
H: Is this language regular ? [automata] Is this a regular language : $$L = \{w : w \in \{a,b\}^*\text{ and }abw = wba\}$$ Does my automata only need to start with $a$ and $b$, then loop on $a,b$ and finish with $b\to a$, or do I don't understand the language? AI: HINT: Prove that $L$ is generated by the regular expression $(ab)^*a$. If $L_0$ is the language generated by $(ab)^*a$, it’s not hard to prove that $L_0\subseteq L$. To prove that $L\subseteq L_0$, I suggest assuming that $L_0\setminus L\ne\varnothing$, letting $w$ be a word of minimal length in $L_0\setminus L$, and getting a contradiction. (If you get stuck finding the contradiction, feel free to leave a comment.)
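The hinted equivalence can be checked exhaustively for short words (plain Python; 12 is an arbitrary length bound):

```python
from itertools import product
import re

pattern = re.compile(r'(ab)*a$')

# Compare the defining condition ab·w = w·ba with the regular
# expression (ab)*a for every word over {a, b} up to length 12.
for length in range(13):
    for letters in product('ab', repeat=length):
        w = ''.join(letters)
        in_L = ('ab' + w == w + 'ba')
        assert in_L == bool(pattern.match(w))
print("abw == wba holds exactly for the words matching (ab)*a")
```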
H: $\mathbb{E}[X] = \mathbb{E}[\mathbb{E}[X \mid Y]] = \mathbb{E}[yp] = p\mathbb{E}[Y]$ It's given $\mathbb{E}[X \mid Y] = yp$ $$\mathbb{E}[X] = \mathbb{E}[\mathbb{E}[X \mid Y]] = \mathbb{E}[yp] = p\mathbb{E}[Y]$$ What I do not understand is the 3rd to last step. Can I just change $\mathbb{E}[y]$ to $\mathbb{E}[Y]$ like that? Or is this not what happens? AI: On the one hand you have $$E[X\mid Y=y]=yp=f(y)$$ Since the value of $Y$ is random, this means that $E[X\mid Y]$ (without writing $Y=$ something) is a random variable, so it makes sense to take expectation. More precisely, it is the random variable $pY$ (you can in fact define a random variable with any real valued function $f(y)$ whose domain contains the support of the random variable $Y$ and denote it $f(Y)$). The double expectation theorem reads as: $$ E[X]=E[E[X\mid Y]]$$ notice that it would not make much sense to write $E[E[X\mid Y=y]]$ since in this case $E[X\mid Y=y]$ is a fixed number, a realization (since lower-case $y$ is a number, not a random variable). This nonetheless is a very commonly seen notation. So your formula should be $$ E[X]=E[E[X\mid Y]] = E[pY] = pE[Y] .$$ Also, a nice thing to do whenever you use this formula is to make clear which variable the expectation $E$ is taken over. That is, $$ \sum_{\mathrm{supp}(X)} xP(X=x)= E_X[X]=E_Y[E[X\mid Y]] = \sum_{\mathrm{supp}(Y)} E[X\mid Y=y]P(Y=y).$$
H: If $\alpha$ is even show that $\beta \alpha \beta^{-1}$ is even. Right then I have $\alpha,\beta \in S_n$ for some n and that $\alpha$ is even. I want to show that $\beta \alpha \beta^{-1}$ is even. What I came up with: I know that I can write $\alpha$ and $\beta$ as the product of 2-cycles (if n > 1) and I've shown that if $\beta = \sigma_1\sigma_2...\sigma_n $ where each $\sigma_i$ is a 2-cycle then $\beta^{-1}$ can be written as $\beta^{-1} = \sigma_n \sigma_{n-1} ... \sigma_1$so I know that if $\beta$ is even then so is $\beta^{-1}$ and likewise for odd. So then I can say $\beta \alpha \beta^{-1}$ is a product of $\alpha$'s 2-cycles which we have an even number of and twice the number of $\beta$'s 2-cycles which gives us a second even number of 2-cycles so $\beta \alpha \beta^{-1}$ is even as desired. Does this work alright? I think it's kinda poorly worded but I'm having trouble fixing that. Also for $n = 1$. The only permutation in $S_1$ is the identity right? Since the identity is even it follows immediately right? Thoughts on how to clean this up (or indeed if it is wrong please point that out) are appreciated. AI: If $\alpha = \tau_1 \cdots \tau_{2m}$, then $\beta \alpha \beta^{-1} = (\beta \tau_1 \beta^{-1}) \cdots (\beta \tau_{2m} \beta^{-1})$. Now note that $\beta \tau_i \beta^{-1}$ is also a 2-cycle. Therefore, $\beta \alpha \beta^{-1}$ is a product of an even number of 2-cycles and so is even.
H: Finding the associated matrix of a linear transformation to calculate the characteristic polynomial Let $T : M_{n \times n}(\Bbb R) \to M_{n \times n}(\Bbb R)$ be the function given by $T(A)=A^t$ (the transpose of $A$). I need to find the minimal polynomial and the characteristic polynomial of $T$. So, to find the characteristic polynomial, I'm trying to find the associated matrix of $T$. I did it for the case $n=2$. The coordinates of a matrix $\left( \begin{array}{ccc} a & b \\ c & d \end{array} \right)$ in the canonical basis $\beta$ is $[X]_\beta=\left( \begin{array}{c} a \\ b \\ c \\ d \end{array} \right)$. Let $$A = \left( \begin{array}{ccc} 1 & 0 & 0 & 0 \\ 0 & 0 & 1 &0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 &0 & 1 \end{array} \right)$$ Then $$A[X]_\beta=[A^t]_\beta==\left( \begin{array}{c} a \\ c \\ b \\ d \end{array} \right).$$ So, $A$ is the associated matrix of $T$. I know I can do the same for any $n$, but I don't know how to generalize it, and I need it to find the characteristic polynomial, solving $\det (\lambda I-A)=0$. Maybe there's an easier way to find the characteristic polynomial, if you know it, please let me know. If not, how can I generaize this matrix $A$ for any $n$ to find $\det (\lambda I-A)=0$? Thanks so much for your help, AI: Simply note that $T^2=I$ but $T\ne I$. This gives you the minimal polynomial of $T$. The characteristic polynomial has the same irreducible factors as the minimal polynomial and degree equal to the dimension of the space $T$ acts on.
H: Symmetry groups in algebra Recently I was going over online notes regarding symmetry groups and I came across the following notation: $S_3=\{1,x,x^2,y,xy,x^2y\}$ is generated by $\{x,y\}$. What does this mean? Aren't the elements in $S_3$ of the form $\{(12),(123),(23),(132),e, (13)\}$. Can someone please explain? AI: Take $x=(123)$ and $y=(12)$. Note that $x^3=e$ and $y^2=e$.
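One can verify the claim by brute force (my own check, using 0-indexed one-line tuple notation): starting from the identity and repeatedly multiplying by $x=(123)$ and $y=(12)$ produces all six elements of $S_3$.

```python
from itertools import permutations

def comp(p, q):
    # composition p∘q on {0,1,2}
    return tuple(p[q[i]] for i in range(3))

e = (0, 1, 2)
x = (1, 2, 0)   # the 3-cycle (123)
y = (1, 0, 2)   # the transposition (12)

# close {e} under multiplication by the generators
elems = {e}
while True:
    new = {comp(g, s) for g in elems for s in (x, y)} - elems
    if not new:
        break
    elems |= new

assert elems == set(permutations(range(3)))          # all 6 elements
assert comp(comp(x, x), x) == e and comp(y, y) == e  # x^3 = y^2 = e
```

The list $\{1, x, x^2, y, xy, x^2y\}$ is then just one way of writing those six products.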
H: If I weigh 250 lbs on earth, how much do I weigh on the moon? One of my homework questions is to determine how much a 250 lb person weighs on the moon. I first googled a calculator for this and found that the weight is 41.5 lbs. So I tried to derive it myself and I cannot seem to get the correct answer. Here is what I'm doing: $$F=ma$$ I first converted $250$ lbs to Newtons: $$250lb\frac{4.448 N}{1 lb}=1112N$$ So I then figured I'd plug values into the the formula $F=ma$ $$1112N=113.5kg(1.6\frac{m}{s^2})$$ But no matter how I solve this, I cannot seem to get the correct answer. What am I doing wrong? AI: $1112$ N is the force on earth: it’s (approximately) $$113.5\text{ kg}\cdot 9.8\frac{\text{m}}{\text{s}^2}\;.$$ To get the force on the moon you want $$113.5\text{ kg}\cdot 1.625\frac{\text{m}}{\text{s}^2}\;,$$ which you’ll then have to convert to pounds. Of course you could simply multiply $250$ by the ratio of gravitational accelerations, $\dfrac{1.625}{9.8}$.
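The arithmetic, with $g_{\text{earth}} = 9.8$ and $g_{\text{moon}} = 1.625\ \mathrm{m/s^2}$ as in the answer (a quick check of my own — note that since weight scales with $g$, the detour through Newtons is optional):

```python
g_earth = 9.8      # m/s^2
g_moon = 1.625     # m/s^2 (approximate)

weight_earth_lb = 250.0
# weight scales linearly with g, so no unit conversion is actually needed
weight_moon_lb = weight_earth_lb * g_moon / g_earth

# the same result via Newtons: 250 lb ≈ 1112 N, mass ≈ 113.5 kg
mass_kg = 250 * 4.448 / g_earth
force_moon_N = mass_kg * g_moon

assert abs(weight_moon_lb - 41.45) < 0.05          # ≈ 41.5 lb
assert abs(force_moon_N / 4.448 - weight_moon_lb) < 1e-9
```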
H: Notation for Permuting Sets If I have some arbitrary sets $A_i : i \in I$ and I want to permute their intersections pairwise, how would I write such a permutation? Would I use some permutation tensor? Essentially I want to permute $A_i \cap A_k \, \, \forall i,k \in I$. How would I notate this formally? AI: You want $$\left\{A_i\cap A_k:\{i,k\}\in[I]^2\right\}\;.$$ The notation $[X]^\kappa$, where $X$ is any set and $\kappa$ is any cardinal number, is defined to be $$[X]^\kappa=\{S\subseteq X:|S|=\kappa\}\;,$$ the family of subsets of $X$ of cardinality $\kappa$. This is a standard notation, but it’s not universally familiar, so you’d probably want to define it first.
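In code, $[I]^2$ is exactly what `itertools.combinations` produces. A small illustration (the particular sets are mine, not from the question):

```python
from itertools import combinations

# a sample indexed family A_i, i in I = {'i', 'j', 'k'}
A = {'i': {1, 2, 3}, 'j': {2, 3, 4}, 'k': {3, 5}}

# [I]^2 : the 2-element subsets of the index set I
pairs = [frozenset(s) for s in combinations(A, 2)]
assert len(pairs) == 3  # C(3, 2)

# the family { A_i ∩ A_k : {i,k} ∈ [I]^2 }
inters = {p: A[min(p)] & A[max(p)] for p in pairs}
assert inters[frozenset({'i', 'j'})] == {2, 3}
```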
H: Deriving the Laurent series How do I derive the Laurent series $$\frac{1}{z^2}=\sum_{n=2}^{\infty} \frac{(-1)^n(n-1)}{(z-1)^n}$$ from $$\frac{1}{(1-z)^2}=\sum_{n=0}^{\infty}(n+1)z^n$$ It looks like I can do some sort of substitution $$z'=\frac{1}{z-1}$$ however I cannot simplify the result. Any hint would be appreciated AI: The target series converges for $\left\vert z - 1\right\vert > 1$, so substitute $z' = -\frac{1}{z-1}$ (which satisfies $\left\vert z'\right\vert < 1$) into the known expansion. Writing $w = z - 1$: $$ {1 \over z^{2}} = {1 \over \left(1 + w\right)^{2}} = {1 \over w^{2}}\,{1 \over \left[1 - \left(-1/w\right)\right]^{2}} = {1 \over w^{2}}\sum_{n = 0}^{\infty}\left(n+1\right)\left(-{1 \over w}\right)^{n} = \sum_{n = 0}^{\infty}{\left(-1\right)^{n}\left(n+1\right) \over \left(z-1\right)^{n+2}} $$ Reindexing with $m = n+2$, so that $\left(-1\right)^{n} = \left(-1\right)^{m}$ and $n+1 = m-1$, gives $$ {1 \over z^{2}} = \sum_{m = 2}^{\infty}{\left(-1\right)^{m}\left(m-1\right) \over \left(z-1\right)^{m}} $$
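A numerical spot check (my own): the series from the question converges to $1/z^2$ on the annulus $\left\vert z-1\right\vert > 1$, which partial sums confirm at a few sample points.

```python
def laurent_partial(z, terms=120):
    # partial sum of sum_{n>=2} (-1)^n (n-1) / (z-1)^n
    return sum((-1) ** n * (n - 1) / (z - 1) ** n for n in range(2, terms))

for z in (3.0, 1 + 2j, -0.5):          # all satisfy |z - 1| > 1
    assert abs(laurent_partial(z) - 1 / z ** 2) < 1e-12
```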
H: how to show that $A(x)\nabla u\in L_\mathrm{loc}^{2}(\Omega) $ for $u\in H_\mathrm{loc}^{1}(\Omega)$ Let $\Omega\subset \mathbb{R}^n$ be a connected open set containing $0$, $u\in H_\mathrm{loc}^{1}(\Omega)$, $|A(x)|\leq C|x|^{-1+\epsilon}$, where $\epsilon$ is small, and we also have $$ \|\nabla u\|_{L^2(|x|\leq R)}\leq C_{N}R^N,~~R\to 0, $$ for all $N>0$ , then do we have $A(x)\nabla u\in L_\mathrm{loc}^{2}(\Omega) $? AI: The answer is yes. The only problem can be caused around zero. There, if $n\in\mathbb N$ and $B_n$ is the ball with center $0$ and radius $\frac{1}{n}$, then on $B_n\setminus B_{n+1}$ we have $|x|\geq\frac{1}{n+1}$, and since the exponent $-2+2\varepsilon$ is negative, $$\int_{B_n\setminus B_{n+1}}|A(x)|^2|\nabla u(x)|^2\,dx\leq C\int_{B_n\setminus B_{n+1}}|x|^{-2+2\varepsilon}|\nabla u(x)|^2\,dx\leq$$$$C(n+1)^{2-2\varepsilon}\int_{B_n}|\nabla u(x)|^2\,dx\leq CC_N\frac{(n+1)^{2-2\varepsilon}}{n^N}.$$ So, by choosing $N=4-2\varepsilon$, you get a convergent series, which shows that $A\nabla u$ is square integrable around $0$.
H: probability of 4 of a kind from a deck of 52 5 cards from a deck of 52 , how many ways of four of a kind can be dealt ? I have (13c1) to determine the rank of card so (13c1)(4c4) is all I can think of so far. The answer is 2(13c2)(4c4)(4c1). Can someone explain how this works? AI: There are indeed $\binom{13}{1}$ ways of choosing the kind we have $4$ of. And once that is done, that part of the hand is determined. But then there is the useless fifth card, which can be chosen in $\binom{48}{1}$ ways. So the number of $4$ of a kind hands is $(13)(48)$. For the probability, divide by $\binom{52}{5}$. Remark: The person who did the counting you quote did it in I think less clear way. The $4$ of a kind hand will have two denominations, which can be chosen in $\binom{13}{2}$ ways. For each choice, we have $2$ choices as to which denomination we will have $1$ of. And then the actual card can be chosen in $\binom{4}{1}$ ways, for a total of $2\binom{13}{2}\binom{4}{1}$.
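Both counts above agree, which a few lines of Python confirm (my own check):

```python
from math import comb

hands = comb(52, 5)
four_of_a_kind = 13 * 48           # choose the rank, then the fifth card

assert hands == 2598960
assert four_of_a_kind == 624
assert 2 * comb(13, 2) * comb(4, 1) == four_of_a_kind  # the quoted count

prob = four_of_a_kind / hands      # ≈ 0.00024
assert abs(prob - 0.00024) < 2e-6
```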
H: Setting up a triple integral in cylindrical coordinates I'm confused on how to get my $\theta$ limits for my triple integral. The question reads as follows: Let D be the region inside a cylinder whose base in the $xy$-plane is the circle $r=3\cos\theta$ and whose top is in the plane $z = 5 - x$. Set up an interated integral for calculating $\iint_D \left( x^2 + y^2 \right) \, dV$. Is there a way I am able to find my $\theta$ from $r=3\cos\theta$? I'm confused my teacher gave us the limits as $-\frac{\pi}{2}$ and $\frac{\pi}{2}$ and I have no idea how they came to that conclusion. AI: The circle $r=3\cos \theta$ can be transformed to Cartesian coordinates as $r^2=3r\cos \theta; x^2+y^2=3x; \left(x-\frac 32\right)^2+y^2=\frac 94$, so it has center $(\frac 32,0)$ and radius $\frac 32$. It passes through the origin with a vertical tangent, so is entirely in the first and fourth quadrants. This represents the range in $\theta$ of $[-\frac \pi2,\frac \pi 2]$.
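As a sanity check on these limits (my own addition — the exact value $729\pi/32$ is my hand evaluation, not part of the original question), a midpoint-rule approximation of the resulting iterated integral $\int_{-\pi/2}^{\pi/2}\int_0^{3\cos\theta}\int_0^{5-r\cos\theta} r^2 \cdot r \, dz\, dr\, d\theta$:

```python
import math

n = 500
total = 0.0
dth = math.pi / n
for i in range(n):
    th = -math.pi / 2 + (i + 0.5) * dth
    rmax = 3 * math.cos(th)          # the r-limit depends on theta
    dr = rmax / n
    for j in range(n):
        r = (j + 0.5) * dr
        # inner z-integral done exactly: height is 5 - r cos(theta)
        total += r ** 3 * (5 - r * math.cos(th)) * dr * dth

exact = 729 * math.pi / 32           # ≈ 71.57, my evaluation via Wallis integrals
assert abs(total - exact) < 0.1
```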
H: Group isomorphism from subgroup of $U(n)\times \mathbb{Z}_n$ to $D_n$ the dihedral group of order 2n. I have a group $G_n = U(n)\times \mathbb{Z}_n$ with the operation $(a,x)(b,y) = (ab,ay+x)$ and I have a subgroup $H_n = \{(a,b) \in G_n | a = \pm 1\}$ which I want to show is isomorphic to $D_n$ the dihedral group of order 2n. I know for an isomorphism $\phi:H_n \to D_n$ I need $\phi(ab) = \phi(a)\phi(b)$ for $a,b \in H_n$. I know they have the same number of elements so that's good I suppose but I'm having trouble seeing how to preserve group operations with $\phi$. I've tried to take $\phi: H_n \to \zeta_n$ the set of complex nth roots of unity under multiplication and conjugation (which is isomorphic to $D_n$ right?) because I thought it might be easier and I could then rely on the composition of isomorphisms being an isomorphism but I've not managed to find a working map $\phi$ and I'm very much starting to doubt that it's easier to go this route. Can anybody point me in the right direction? AI: Indeed, $H_n$ and $D_n$ are isomorphic groups. First you may consider $D_n=\langle\tau^j\sigma^i\mid\;\tau^2=1=\sigma^n,\sigma\tau=\tau\sigma^{n-1}\rangle$, and define $\phi:H_n\to{D_n}$ such that $\phi(-1,0)=\tau$ and $\phi(1,1)=\sigma$. For instance you may take $\phi(a,b)=\sigma^b\tau^{\frac{1-a}{2}}$; using the relation $\sigma\tau=\tau\sigma^{n-1}$, one checks directly that $\phi\bigl((a,x)(b,y)\bigr)=\phi(a,x)\phi(b,y)$, and since $\phi$ is a bijection between two sets of $2n$ elements, it is an isomorphism.
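A concrete way to see it (my own construction, not from the original answer): the map $(a,x)\mapsto(k\mapsto ak+x \bmod n)$ sends $H_n$ onto the symmetries of a regular $n$-gon — rotations for $a=1$, reflections for $a=-1$ — and it converts the operation $(a,x)(b,y)=(ab,ay+x)$ into composition of maps. A brute-force verification for $n=7$:

```python
n = 7
elems = [(a, x) for a in (1, -1) for x in range(n)]

def op(p, q):
    # the group law of H_n: (a,x)(b,y) = (ab, ay+x)
    (a, x), (b, y) = p, q
    return (a * b, (a * y + x) % n)

def to_sym(p):
    # φ(a,x): vertex k ↦ a·k + x (mod n); rotation if a = 1, reflection if a = -1
    a, x = p
    return tuple((a * k + x) % n for k in range(n))

def compose(f, g):
    return tuple(f[g[k]] for k in range(n))

# φ is a homomorphism …
for p in elems:
    for q in elems:
        assert to_sym(op(p, q)) == compose(to_sym(p), to_sym(q))

# … and injective, hence an isomorphism onto a group of order 2n
assert len({to_sym(p) for p in elems}) == 2 * n
```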
H: How to find the minimum value of $n$ such that $n^2\equiv 1\pmod{1007}$ Let $n>1$ be a positive integer. How can we find the minimum value of $n$ such that $$n^2-1\equiv 0\pmod {1007}$$ My try: $$n^2-1=(n+1)(n-1)$$ and $1007=19\cdot 53$, so I guess $n_\min=1006$, but how can one prove it? AI: You want $x^2\equiv1\pmod{19}$ and $x^2\equiv1\pmod{53}$ simultaneously. There are four cases: $x\equiv1\pmod{19}$ & $x\equiv1\pmod{53}$, or $x\equiv-1\pmod{19}$ & $x\equiv-1\pmod{53}$, or $x\equiv1\pmod{19}$ & $x\equiv-1\pmod{53}$, or $x\equiv-1\pmod{19}$ & $x\equiv1\pmod{53}$. The solutions to the first two cases are obviously $x\equiv1\pmod{1007}$ and $x\equiv-1\equiv1006\pmod{1007}$. The solutions to the other two cases are $x\equiv476\pmod{1007}$ and $x\equiv-476\equiv531\pmod{1007}$. I think the one you want is $476$. Here's how you can solve the simultaneous congruences $x\equiv1\pmod{19}$ & $x\equiv-1\pmod{53}$. Write $x=53t-1$ from the second congruence and substitute in the first congruence: $$53t-1\equiv1\pmod{19}$$$$-4t\equiv2\pmod{19}$$$$-20t\equiv10\pmod{19}$$$$-t\equiv10\pmod{19}$$$$t\equiv-10\pmod{19}$$$$t\equiv9\pmod{19}$$$$t=9+19u$$$$x=53t-1=53(9+19u)-1=476+1007u$$$$x\equiv476\pmod{1007}$$
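A brute-force check of the four solutions (my own, confirming that the guess $n_\min = 1006$ is too large):

```python
M = 1007
sols = [n for n in range(1, M) if (n * n - 1) % M == 0]
assert sols == [1, 476, 531, 1006]     # exactly the four CRT solutions

n_min = min(n for n in sols if n > 1)
assert n_min == 476
# 476 ≡ 1 (mod 19) and 476 ≡ -1 (mod 53), matching the case solved above
assert 476 % 19 == 1 and 476 % 53 == 52
```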
H: Need help in determining the volume of styrofoam used with dimensions $2.50ft + 1.50ft + 1.00ft$ So to elaborate on the title, the question is this: The average density of Styrofoam is $1.00 \frac{kg}{m^3}$. If a Styrofoam cooler is made with outside dimensions of $3.00ft$ $x$ $2.00ft$ $x$ $1.50ft$ and inside dimensions of $2.50ft$ $x$ $1.50ft$ $x$ $1.00ft$, determine the (a) volume of Styrofoam used in the cubic meters... So the word that is throwing me off is used. Am I to calculate the volume of the inside and outside and simply add them? I was doing the following: $$V=lwh$$ $$V=(3.00)(2.00)(1.50)$$ (Outer)$V=9ft^3$ $$V=(2.50)(1.50)(1.00)$$ (Inner)$V=3.75ft^3$ (Total)$V=12.75ft^3$ Am I on the right track at all? AI: The outer volume gives you the amount of Styrofoam that it would take to make a cooler of the same size that was completely useless (because there was no space inside, as it was just a block of Styrofoam). The inner volume is the amount of Styrofoam that would have to be removed from such a block to make such a cooler. The amount of Styrofoam used, then, is the original/outer/useless volume minus the inner/removed volume. That is, the amount used is $5.25$ cubic feet. Since the problem asks for cubic meters, convert using $1\,\text{ft} = 0.3048\,\text{m}$: $5.25\,\text{ft}^3 \times (0.3048\,\text{m/ft})^3 \approx 0.149\,\text{m}^3$.
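The arithmetic, including the unit conversion the problem asks for (my own check):

```python
outer = 3.00 * 2.00 * 1.50        # ft^3
inner = 2.50 * 1.50 * 1.00        # ft^3
used_ft3 = outer - inner          # subtract, not add
assert abs(used_ft3 - 5.25) < 1e-12

FT_TO_M = 0.3048                  # exact definition of the foot
used_m3 = used_ft3 * FT_TO_M ** 3
assert abs(used_m3 - 0.1487) < 0.0005

# at 1.00 kg/m^3 the Styrofoam itself has mass ≈ 0.149 kg
mass_kg = 1.00 * used_m3
```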
H: Differential equation application question The air in a room with volume $200m^3$ contains 0.15% carbon dioxide initially. Fresher air with only 0.05% carbon dioxide flows into the room at a rate of $2m^3/min$ and the mixed air flows out at the same rate. a) Find the amount of carbon dioxide in the room as a function of time. b) What is the level of carbon dioxide in the long run? AI: Let $y(t)$ be the amount of carbon dioxide (in $m^3$) in the room at time $t$ (in $min$). Since $0.05\% = 0.0005$, the rate of change, $y'(t)$, of $y(t)$ is given by $y'(t) = (\textrm{rate in}) - (\textrm{rate out}) = 2 \cdot 0.0005 -2 \cdot \frac{y(t)}{200} = -\frac{1}{100}(y(t) - 0.1)$. This is a separable differential equation and you can rewrite it as $\frac{y'(t)}{y(t) - 0.1} = -\frac{1}{100}$. You can integrate both sides with respect to $t$ to find $y(t)$. This will involve a constant of integration $c$. You can use $y(0) = 0.0015 \cdot 200 = 0.3$ to find $c$.
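Reading the percentages as volume fractions ($0.05\% = 0.0005$), the solution works out to $y(t) = 0.1 + 0.2\,e^{-t/100}$ cubic meters (my computation, not spelled out in the original answer), which sympy can confirm:

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)

# candidate solution (volumes in m^3, time in minutes)
y = sp.Rational(1, 10) + sp.Rational(1, 5) * sp.exp(-t / 100)

# ODE: y' = (rate in) - (rate out) = 2*0.0005 - 2*y/200
assert sp.simplify(sp.diff(y, t) - (sp.Rational(1, 1000) - y / 100)) == 0

assert y.subs(t, 0) == sp.Rational(3, 10)           # 0.15% of 200 m^3 initially
assert sp.limit(y, t, sp.oo) == sp.Rational(1, 10)  # long run: 0.1 m^3 = 0.05%
```

So in the long run the room's CO$_2$ level decays to that of the incoming air.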
H: How to prove a number is not a prime number (without a computer) Show that $$5994937829$$ is not a prime number. How can I prove it using mathematical methods? I know that this can be checked with a computer, but I want a proof using only mathematical methods. AI: If $p = 5994937829$ were prime, then Fermat's little theorem implies that $$a^p \equiv a \pmod{p}$$ for all $a$. But this does not hold, since $$2^p \equiv 1030766071 \pmod{p}$$
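A hedged aside (my own observation, not in the original answer): the number happens to be a difference of squares, $5994937829 = 77427^2 - 50^2 = 77377 \cdot 77477$, which certifies compositeness directly and also lets Python confirm that the Fermat test fails:

```python
p = 5994937829

# difference-of-squares factorization: p = (77427 - 50)(77427 + 50)
assert 77427 ** 2 - 50 ** 2 == p
assert 77377 * 77477 == p

# Fermat's little theorem would force 2^p ≡ 2 (mod p) if p were prime;
# three-argument pow does the modular exponentiation efficiently
assert pow(2, p, p) != 2
```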
H: Linear transformation / Polynomial Question $T:P_{3}\rightarrow P_{3}$ defined by $T(p(t))=tp'(t)+p(0)$ is a linear transformation. Determine whether $T$ is invertible. If yes, find $T^{-1}(q(t))$, where $q(t)$ is a polynomial of degree at most three. Thank you. AI: Hint: Just take a polynomial $p \in P_3$, say $p(X) = aX^3 + bX^2 + cX + d$ and compute $q := T(p)$. Then check whether you can invert the transformation of the coefficients.
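Working the hint out on the coefficient vector $(d,c,b,a)$ of $p(t)=d+ct+bt^2+at^3$ (my computation, not part of the original answer): $T$ sends it to $(d, c, 2b, 3a)$, which is clearly invertible, with $T^{-1}$ dividing the coefficient of $t^k$ by $k$ for $k \geq 1$. A quick check with exact arithmetic:

```python
from fractions import Fraction as F

def T(c):
    # T(p) = t p'(t) + p(0): coefficient of t^k becomes k*c_k for k >= 1,
    # and the constant term p(0) is kept
    return [c[0]] + [k * c[k] for k in range(1, len(c))]

def T_inv(c):
    return [c[0]] + [c[k] / k for k in range(1, len(c))]

p = [F(4), F(3), F(-2), F(5)]          # 4 + 3t - 2t^2 + 5t^3
assert T(p) == [F(4), F(3), F(-4), F(15)]
assert T_inv(T(p)) == p
assert T(T_inv(p)) == p
```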
H: Evaluating $\lim_{x\to 0} \dfrac{\sqrt{1-x}-\sqrt{1+x}}{x^2-3x}$ $$\lim_{x\to 0} \dfrac{\sqrt{1-x}-\sqrt{1+x}}{x^2-3x}$$ I am stuck at radicals. Division by 1/x doesn't help. AI: HINT: Rationalize the numerator $$\sqrt{1-x}-\sqrt{1+x}=\frac{(1-x)-(1+x)}{\sqrt{1-x}+\sqrt{1+x}}=\frac{-2x}{\sqrt{1-x}+\sqrt{1+x}}$$ Then cancel out $x$ form the numerator & denominator as $x\ne0$ as $x\to0$
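After the cancellation the limit evaluates to $\frac{-2}{2\cdot(-3)} = \frac13$ (my computation of the final value), which a numerical check confirms:

```python
import math

def f(x):
    # the original expression
    return (math.sqrt(1 - x) - math.sqrt(1 + x)) / (x ** 2 - 3 * x)

def g(x):
    # rationalized form after cancelling x: -2 / ((sqrt(1-x)+sqrt(1+x))(x-3))
    return -2 / ((math.sqrt(1 - x) + math.sqrt(1 + x)) * (x - 3))

assert abs(g(0) - 1 / 3) < 1e-15      # the limit value: -2/(2·(-3)) = 1/3
assert abs(f(1e-4) - 1 / 3) < 1e-3    # raw form agrees near 0
assert abs(f(1e-4) - g(1e-4)) < 1e-8  # both forms agree where defined
```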
H: What is an example of a non-convex region? In complex analysis, the proof of Morera's Theorem "for $f\in C(D) $ such that D is a region, if for any triangle $\triangle$ in D, $\int_{\triangle}f = 0$ is True, then f is analytic in D." splits the proof into two cases: for convex and non-convex regions D. I'd like some intuition before my exam on what a non-convex region might look like. Can you provide an example? EDIT: I apologize gentlemen, this question has shown me that the definition of a region is regrettably localized by textbook. In Bak-Newman, a region is defined as an open, connected set in $\mathbb C$, which would make @Did's examples valid if considered with their boundaries removed in $\mathbb C$. AI: A banana. A necklace. A tire. An eight (actually no Roman digit is convex except "one" when it is written as a vertical bar instead of as $1$). Take the drawing of the digit 7 for example. The segment joining its two ends is not included in the drawing. If the figure 7 was convex, it would contain this whole segment.
H: Find a particular solution for second order ODEs using undetermined coefficients method Match the appropriate form of the particular solution labelled A through J with the differential equations below. Enter K if all of the particular solutions are incorrect. $$y''-5y'-24y = 3xe^{2x}, (1)$$ $$y''-4y'+4y = -3xe^{2x}, (2)$$ $$y''-2y' = -8e^{2x}, (3)$$ $$y''-25y = -3x^3e^{2x}, (4)$$ $A. y_p = Ae^{2x}$ $B. y_p = Axe^{2x}$ $C. y_p = (Ax+B)e^{2x}$ $D. y_p = Ax^2e^{2x}$ $E. y_p = (Ax^2+Bx)e^{2x}$ $F. y_p = (Ax^2+Bx+C)e^{2x}$ $G. y_p = Ax^3e^{2x}$ $H. y_p = (Ax^3+Bx^2)e^{2x}$ $I. y_p = (Ax^3+Bx^2+Cx)e^{2x}$ $J. y_p = (Ax^3+Bx^2+Cx+D)e^{2x}$ K. None of the above I chose C for (1), G for (2), B for (3) and J for (4). But none of them are correct, can anyone help me here? AI: Let $a_ny^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_1y'+a_0y=Q(x)$, where $a_n\ne 0$ and $Q(x)\ne 0$ on an interval $I$, and let $y_c(x)$ be the general solution of the associated homogeneous equation $$a_ny^{(n)}+a_{n-1}y^{(n-1)}+\cdots+a_1y'+a_0y=0.$$ If no term of $Q(x)$ duplicates a term of $y_c(x)$, then $y_p(x)$ is a linear combination of the terms of $Q(x)$ and all of their linearly independent derivatives. For example, if $y_c(x)=C_1e^{ax}+C_2e^{bx}$ and $Q(x)=x^{t}e^{dx}$ with $$a\ne b\neq d\ne a,$$ then $$y_p(x)=A_tx^te^{dx}+A_{t-1}x^{t-1}e^{dx}+\cdots+A_1xe^{dx}+A_0e^{dx}.$$ If instead $d$ is a characteristic root of multiplicity $s$, multiply this trial solution by $x^s$. For $(4)$ the characteristic roots are $\pm5\ne2$, so $J$ is correct. For $(2)$, however, $2$ is a double root, so $(Ax+B)e^{2x}$ must be multiplied by $x^2$, giving $H$ rather than $G$.
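For equation (2) the characteristic root $2$ is double, so the appropriate form is H; the resulting coefficients work out to $A=-\tfrac12$, $B=0$ (my computation, not in the original), which sympy can verify:

```python
import sympy as sp

x, A, B = sp.symbols('x A B')

# trial solution of form H for y'' - 4y' + 4y = -3x e^{2x}
y = (A * x ** 3 + B * x ** 2) * sp.exp(2 * x)
L = sp.diff(y, x, 2) - 4 * sp.diff(y, x) + 4 * y

# L e^{-2x} is a polynomial in x; match it against -3x
residual = sp.expand(L * sp.exp(-2 * x)) + 3 * x
sol = sp.solve(sp.Poly(residual, x).coeffs(), [A, B], dict=True)[0]
assert sol[A] == sp.Rational(-1, 2) and sol[B] == 0

# plug back in: y_p = -(x^3/2) e^{2x} really solves equation (2)
yp = y.subs(sol)
check = sp.diff(yp, x, 2) - 4 * sp.diff(yp, x) + 4 * yp + 3 * x * sp.exp(2 * x)
assert sp.simplify(check) == 0
```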