H: How does the ring of smooth functions fail to be a field? So if we have a differentiable manifold $M$, the set of smooth functions $C^{\infty}(M)$ on $M$ can be equipped with point-wise addition and multiplication. Apparently it is important that this object is a ring but not a field, however I am having trouble seeing how it fails to be a field. I know pointwise addition on this set gives an abelian group. So $C^{\infty}(M)$ without $\{f=0\}$ and with pointwise multiplication must fail to have one (or more) of the following: commutativity, associativity, a neutral element or inverses? Or maybe the distributive rule of multiplication over addition? But I'm just not sure which property does not hold and why; surely all do?! AI: It's not a field because not every non-zero smooth function has a multiplicative inverse. For instance, for some open $U\subsetneq M$, pick $f$ to be any non-zero smooth function with compact support which vanishes outside $U$. Then, $f|_{M\setminus U}$ is identically $0$ and hence, there is no smooth function $g$ such that $g(x)f(x)=1$ for all $x\in M\setminus U$. In particular, there is no smooth $g$ such that $g(x)f(x)=1$ for all $x\in M$.
H: Increments in Derivatives Say, I have an equation $y^2=x^3$. So I can say $dy^2/dx=3x^2$. So the very small increment in $y^2$ when $x$ increases by $dx$ is $dy^2$, which in this case is $3x^2dx$. I also know that $dy^2/dy=2y$, so I can say $dy^2=2ydy$ and equate the increments, but how do I know the increments $dy^2$ are the same in both cases? AI: Divide the second equation you obtained by $dx$ on both sides $$\frac{dy^2}{dx} = 2y \frac{dy}{dx}$$ Then substitute the values $y = x^{\frac 32}$ and $\frac{dy}{dx} = \frac{3}{2}\sqrt{x}$, which are obtained from the initial equation $y^2 = x^3$. This gives you $$\frac{dy^2}{dx} = 3x^2$$ which is identical to equation 1.
H: LR Test in Beta($\theta$, 1) with $H_0\colon \theta=\theta_0$ I'm trying to obtain the $\alpha$-level LR test where $(X_1, \dots, X_n)$ are from Beta($\theta$, 1), with $H_0\colon \theta = \theta_0$ against $H_1\colon \theta \neq \theta_0$. I'm looking for $$ \lambda(X) = \frac{\sup_{\theta \in \Theta_0}l(\theta)}{\sup_{\theta \in \Theta}l(\theta)} $$ Suppose $T := \sum^n_{i=1}\ln{X_i}$ and we know that the MLE of Beta$(\theta, 1)$ equals $$ \hat{\theta}=\frac{-n}{T} $$ Our $\lambda(X)$ is then: $$ \lambda(X) = \frac{\theta_{0}^{n} (X_1 \cdot ... \cdot X_n)^{\theta_0 - 1}}{(\frac{-n}{T})^n(X_1 \cdot ... \cdot X_n)^{\frac{-n}{T} - 1}} = \left(\frac{- \theta_0 T}{n}\right)^n(X_1 \cdot ... \cdot X_n)^{\theta_0 + \frac{n}{T}} $$ We want to find $\lambda(X) < c$ but we may as well look for $\ln\lambda(X) < \ln c = \hat c $. Taking the logarithm: $$ \ln\lambda(X) = n \ln{\frac{- \theta_0 T}{n}}+\left({\theta_0 + \frac{n}{T}}\right)T =n \ln{\frac{- \theta_0 T}{n}} + \theta_0T + n $$ This has to be less than some $\hat c$ $$ n \ln{\frac{- \theta_0 T}{n}} + \theta_0T + n < \hat c $$ We define $f(x) = n \ln{\frac{- \theta_0 x}{n}} + \theta_0x + n$ and see how it behaves: $$ f'(x) = \frac{n}{x} + \theta_0 $$ Since $x<0$ here, $f'(x) > 0$ iff: $$ \frac{n}{x} > - \theta_0 \iff n < -\theta_0 x \iff x < -\frac{n}{\theta_0} $$ This critical point is negative, but that does not raise my concern because $x=\sum \ln X_i$ where $X_i \in (0,1)$, so $\ln X_i < 0$ and $T$ is indeed negative. I'd then get the LR test looking like: $$ \varphi(X) = \begin{cases} 1 & T < d_1 \text{ or } T > d_2 \\ \gamma_1 & T = d_1 \\ \gamma_2 & T = d_2 \\ 0 & T \in (d_1, d_2) \end{cases} $$ where $d_1 < -\frac{n}{\theta_0} < d_2$ for $d_1, d_2$ calculated to meet $\alpha$. Is my solution correct so far? I have noticed the question has been answered here, but I could not fully get the grasp of it and wanted to go step by step. AI: I didn't check your computations in detail, but they look right. If you plot $\ln\lambda(\mathbf{x})$ as a function of $T$ (it increases up to its maximum at $T=-n/\theta_0$ and decreases afterwards), you can see that $\ln\lambda(\mathbf{x})<c$ is equivalent to $T<c_1$ or $T>c_2$. As suggested, I think it is better to set $T=-\sum_{i}\ln X_i$. Now it is easy to check that $Y=-\ln X\sim \operatorname{Exp}(\theta)$, so $T\sim \operatorname{Gamma}(n,\theta)$. Thus under the null hypothesis $2\theta_0 T\sim \chi_{(2n)}^2$ and you can now solve the problem with the tables.
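A quick Monte Carlo sanity check of the distributional claim (an illustrative Python sketch; the values $\theta_0=2$ and $n=20$ are arbitrary, and the equal-tail cutoffs shown are only an approximation to the exact LR cutoffs):

```
import numpy as np
from scipy import stats

theta0, n, reps = 2.0, 20, 100_000
rng = np.random.default_rng(0)

# Draw Beta(theta0, 1) samples and form T = -sum(log X_i) as in the answer
X = rng.beta(theta0, 1.0, size=(reps, n))
T = -np.log(X).sum(axis=1)

# Under H0 we should have 2*theta0*T ~ chi^2 with 2n degrees of freedom
print(stats.kstest(2 * theta0 * T, stats.chi2(df=2 * n).cdf))  # large p-value

# Approximate alpha-level cutoffs for T via equal chi^2 tails
alpha = 0.05
lo, hi = stats.chi2(df=2 * n).ppf([alpha / 2, 1 - alpha / 2]) / (2 * theta0)
print(f"reject when T < {lo:.3f} or T > {hi:.3f}")
```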
H: Is there a way to represent a Max Flow problem as a dynamic programming task? I've recently started practising some graph theory problems, and I wanted to know if there is a method which would allow us to approach the Max Flow problem through dynamic programming. I cannot seem to find any resources where they outline a similar approach, and in most places, they seem to utilise either Linear Programming or algorithms, such as the Ford–Fulkerson algorithm. I would be extremely grateful for any suggestions about books/videos where they consider it, or for any ideas that would help me to derive the method by myself. Thanks in advance! AI: Hi: This is not an answer but I'm putting it here because it's easier to type and there's more room. The link below says that a greedy algorithm can be used to get an approximation but it does not result in optimality. To me, that's a hint that dynamic programming will not lead to an optimal solution either: dynamic programming needs the problem to decompose into subproblems whose optimal solutions combine into an optimal whole, and max flow does not break up that way in any obvious fashion (an augmenting path chosen early can block better choices later, which is exactly why naive greedy fails). So, I'm waving my hands here for sure, but my guess is that you'll be going into a dark cave if you try to solve maximum flow using dynamic programming. https://www.geeksforgeeks.org/max-flow-problem-introduction/ Also, just from a practical standpoint, really smart people have been working on this problem for a long time. So, if a dynamic programming approach has not been used, chances are that it's not the way to proceed. Although, at the same time, if you did come up with such a result, it would most likely be viewed as an amazing achievement.
H: Which calculation for the eigenfunction and eigenvalue is wrong? Given $\Omega = (0,1)$ and the eigenvalue problem $$-u''(x)+u(x) = \lambda u(x), \quad u'(0)=u(1)=0$$ or equivalently $-u''(x)=(\lambda -1)u(x).$ I first derived $u(x)=\cos(k\pi x)+1$ and $\lambda =k^2\pi^2+1$ which in my opinion solves the problem. However, a friend of mine calculated $u(x)=\cos(\sqrt{\lambda-1}x )$ and $\lambda=(k\pi+\frac{\pi}{2})^2+1$ which also seems to work. Did one of us miscalculate? Is the eigenfunction 'allowed' to contain the eigenvalue itself as a parameter? AI: One should use the most general solution of the ODE. Let $\mu=\sqrt{\lambda-1}$, then the most general solution is $$u(x)=A \sin \mu x+ B \cos \mu x \implies u'(x)=\mu A \cos \mu x - \mu B \sin \mu x$$ $u'(0)=0 \implies A=0$, so the compatible solution is $$u(x)=B \cos \mu x, \quad u(1)=0 \implies \cos \mu =0 \implies \mu=(n+1/2)\pi$$ $$\implies \lambda=(n+1/2)^2\pi^2+1, \quad n=0,1,2,\dots$$ So your friend is right, and yes, the eigenfunction may contain the eigenvalue as a parameter. Your $u(x)=\cos(k\pi x)+1$ does not actually solve the problem: plugging it in gives $-u''+u=(k^2\pi^2+1)\cos(k\pi x)+1$, which is not $\lambda u$ for any $\lambda$, since the constant terms don't match.
H: Defining a new function to solve the original one more easily. If I have some $f : \mathbb A \to \mathbb A$ that needs to be solved, can I define $g : \mathbb B \to \mathbb B$, where $\mathbb A$ is a subset of $\mathbb B$, solve for $g(x)$, and then conclude that $f(x) = g(x)$? For example, if I have $f : \mathbb N \to \mathbb N$, can I define $g : \mathbb R \to \mathbb R$ so that when I get $g(x) = ???$ $\forall x \in \mathbb R$, I'll also get $f(x) = g(x)$ $\forall x \in \mathbb N$? AI: No, this is not allowed. Defining the domain over a larger set gives you more information, whereas defining the range over a larger set can allow more freedom in the function. Take, for example, the functional equation $$f(x+y) = f(x) + f(y)$$ for $f: \mathbb R^+ \to \mathbb R^+$. This only has linear solutions, but if you extend it to $f: \mathbb R \to \mathbb R$, there are nonlinear solutions. Even if you use $\mathbb B \to \mathbb A$ instead of $\mathbb B \to \mathbb B$, there are other counterexamples; I'll leave this as an exercise for you to find.
H: Evaluation of a line integral along the parabola $y=x^2$ Evaluation of $$\int_{C}xydx+(x+y)dy$$ along the curve $y=x^2$ from $(-2,4)$ to $(1,1)$. What I tried: Let $\vec{F}=\bigg<xy,x+y\bigg>$ and let $\vec{r}=\bigg<x,y\bigg>$. So we have to calculate $$\int_{C}\vec{F}\cdot \vec{dr}$$ Now let us parametrize the curve $y=x^2$. So we take $x=t$ and $y=t^2$. Then $-2\leq t\leq 1$. So $$\int^{1}_{-2}\vec{F(t)}\cdot \frac{d}{dt}\bigg<\vec{r(t)}\bigg>dt$$ $$\int^{1}_{-2}\bigg<t\cdot t^2,t+t^2\bigg>\cdot \bigg<1,2t\bigg>dt$$ $$\int^{1}_{-2}\bigg(t^3+2t(t+t^2)\bigg)dt$$ Can someone please tell me if my process is right? If not, how do I solve it? Help me please. AI: We could also directly plug in the functions $$\int_C xy\:dx + (x+y)\:dy = \int_{-2}^1 x^3\:dx + \int_4^0 -\sqrt{y}+y\:dy + \int_0^1 \sqrt{y} + y \: dy= -\frac{21}{4}$$ with no extra parametrization work necessary.
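A one-line numerical check that the parametrized integral in the question agrees with the answer's value (illustrative Python):

```
from scipy.integrate import quad

# F(r(t)) . r'(t) = t^3 + 2t(t + t^2) along x = t, y = t^2
val, _ = quad(lambda t: t**3 + 2*t*(t + t**2), -2, 1)
print(val)   # -5.25 = -21/4, matching the direct computation
```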
H: Clarification of Baire category theorem: a (counter-?)example I am trying to understand the statement of Baire's category theorem. Why is $\bigcap_{n \in \mathbb{N}} A_n := \bigcap_{n \in \mathbb{N}} \big[(n, n+\frac{1}{2}) \cap \mathbb{Q}\big] = \emptyset$ dense in $\mathbb{R}$? As I understand $\mathbb{R}$ is a complete metric space and for each $n$, $A_n$ is an open dense set in $\mathbb{R}$, which would yield the claim by Baire's theorem. But it does not seem in accordance with any conceptual idea I have of a dense set. AI: It seems like the issue here is confusion over the definition of a dense set. A subset $S$ of $X$ is dense in $X$ if every point of $X$ is either an element of $S$ or a limit point of $S$. It should be clear here that none of the $A_n$ here are dense in $\mathbb R$, because $-1$ is not in any $A_n$, nor is it a limit point of any $A_n$.
H: How to calculate the index $|\mathcal{O}_K/ \mathfrak{a}|$ in sage Let $K=\mathbb{Q}(\sqrt{2})$. I want to calculate $|\mathcal{O}_K/ \mathfrak{a}|$ in sage with $\mathfrak{a}=3\mathbb{Z}$. This code: sage: K.<sqrt(2)> = NumberField(x^2-2) sage: O1 = K.order(sqrt(2)) sage: OK = K.ring_of_integers() sage: O1.index_in(OK) does not really work. AI: You mean $\mathfrak{a} = (3) = 3\mathcal{O}_K$, because $3\mathbb{Z}$ is not an ideal in $\mathcal{O}_K$ (it is not closed under multiplication by $\sqrt{2}$). This index $[\mathcal{O}_K:\mathfrak{a}] =|\mathcal{O}_K/\mathfrak{a}|$ is also called the norm of the ideal $\mathfrak{a}$: sage: a = OK.ideal(3) sage: a.norm() 9 Since $\mathfrak{a}=(3)$ is prime in $\mathcal{O}_K$ (hence maximal, because $\mathcal{O}_K$ is a Dedekind domain), the quotient $\mathcal{O}_K/\mathfrak{a}$ is a field called the residue field, and we can also get its order: sage: a.is_prime() True sage: a.residue_field().order() 9
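For completeness, a minimal Sage session that sets up the objects the answer uses; the failure in the question comes from using `sqrt(2)` as a generator name, which is not valid Sage syntax (the generator name `a` below is an arbitrary choice):

```
sage: K.<a> = NumberField(x^2 - 2)   # a plays the role of sqrt(2)
sage: OK = K.ring_of_integers()
sage: a3 = OK.ideal(3)
sage: a3.norm()
9
```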
H: H1N1 Probability Problem Suppose that the probability of being affected by the H1N1 flu virus is 0.02, and that among those affected by the H1N1 virus the probability of regularly washing one's hands is 0.3. In general, the probability of washing one's hands regularly among all people (whether affected by the virus or not) is 0.6. Find the probability that a person who regularly washes their hands is affected by the virus. Proposed Solution: let A be the event that a person is affected by the H1N1 virus and B be the event that a person regularly washes their hands. In this case P(A) = 0.02 P(A and B) = 0.3 P(B) = 0.6 P(A|B) = ? Rule: P(A|B) = P(A and B) / P(B) = 0.3 / 0.6 = 0.5 Well, my question is: Are my assumptions and the proposed solution right or not? If not, please correct my answer. Thanks in advance. :) AI: If I correctly understood your problem, you should use Bayes' formula: $$ P(\text{infect}\mid \text{wash}) = \frac{P(\text{wash}\mid \text{infect})\,P(\text{infect})}{P(\text{wash})} $$ where $P(\text{infect}) = 0.02$, $P(\text{wash})=0.6$, $P(\text{wash}\mid \text{infect}) = 0.3$. Note that the given $0.3$ is the conditional probability $P(B\mid A)$, not the joint probability $P(A \text{ and } B)$, which is where the proposed solution goes wrong.
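Plugging the given numbers into the formula above: $$P(\text{infect}\mid \text{wash}) = \frac{0.3 \times 0.02}{0.6} = 0.01,$$ i.e. a $1\%$ chance, rather than the $0.5$ obtained from the misidentified joint probability.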
H: Does order matter when writing the roots of a quadratic equation? Equation: $x^2-x-6=0$ The two roots of this equation are $3$ and $-2$. When writing the answer can I also write it as $-2, 3$ or do I have to maintain a certain order? AI: If the question offers the two choices you gave us in a comment, $-2, 3$ and $3, -2$, then the question is wrong. Either answer is correct. Perhaps one of the suggested answers should have had the signs the other way, $2, -3$ (not the order the other way). Then they might be looking for a particular error you might have made.
H: Inverse trigonometric function of trigonometric function This is a simple question. We know that $\sin( \arcsin(2\pi) ) = 2\pi$ because $\arcsin$ is the inverse function of $\sin$. So logically, the following should be true as well, since $\sin$ is the inverse of $\arcsin$: $\arcsin( \sin(2\pi) ) = 2\pi$, but it actually equals $0$. Why so? AI: No, $\arcsin$ is not the inverse of the sine function. Rather, it is the inverse of the restriction of that function to $\left[-\frac\pi2,\frac\pi2\right]$. And the domain of $\arcsin$ is $[-1,1]\left(=\sin\left(\left[-\frac\pi2,\frac\pi2\right]\right)\right)$. So, $\arcsin(2\pi)$ isn't even defined.
H: How many seconds of a YouTube video will be played if I press the play/pause button at rational timesteps? Suppose that I want to watch a continuous YouTube video. I start watching it when my clock shows $t_0=0$ seconds. Each time my clock shows a rational number of seconds I instantly press the play/pause button. How much of the video will have been played when my clock shows $t_1$ seconds? Intuition tells me that it should be $\frac{t_1}{2}$ seconds, but how to prove it? AI: The question does not make sense as this is a supertask. There is no next rational number after $0$, so the time you stop is not defined. Your intuition of half the time seems to assume there are the "same number" of rationals and irrationals. That is not correct. The irrationals are uncountable while the rationals are countable. One can prove that the measure of the rationals is zero.
H: How do you compute the value of the right derivative of $f(x)= \sin (x)^{\cos (x)} +\cos (x)^{\sin (x)}$ when $x=0$. How do you compute the value of the right derivative of $f(x)= \sin (x)^{\cos (x)} +\cos (x)^{\sin (x)}$ when $x=0$? I'm trying to learn calculus, so some explanation wouldn't be so bad. I got stuck computing the limit of $\sin (x)^{\cos (x)} \cdot \big( \frac{\cos ^2 (x)}{\sin (x)} - \sin (x) \cdot \ln (\sin (x))\big)$ as $x \rightarrow 0$. Sorry for the grammar mistakes but I'm not English. AI: You should learn logarithmic differentiation: $$h(x)={(\sin{x})}^{\cos{x}}$$ $$\ln{(h(x))}=\cos{x} \cdot \ln{\sin{x}}$$ $$\frac{h'(x)}{h(x)}=-\sin{x} \cdot \ln{\sin{x}}+\frac{\cos^2{x}}{\sin{x}}$$ $$h'(x)={(\sin{x})}^{\cos{x}} \left(-\sin{x} \cdot \ln{\sin{x}}+\frac{\cos^2{x}}{\sin{x}}\right)$$ Do this for $g(x)={(\cos{x})}^{\sin{x}}$: $$g'(x)= {(\cos{x})}^{\sin{x}} \left( \cos{x} \cdot \ln{\cos{x}} -\frac{ \sin^2{x}}{\cos{x}} \right)$$ To sum it up, $$f(x)=g(x)+h(x) \implies f'(x)=g'(x)+h'(x)$$ $$f'(x)={(\sin{x})}^{\cos{x}} \left(-\sin{x} \cdot \ln{\sin{x}}+\frac{\cos^2{x}}{\sin{x}}\right)+{(\cos{x})}^{\sin{x}} \left( \cos{x} \cdot \ln{\cos{x}} -\frac{ \sin^2{x}}{\cos{x}} \right)$$ As $x \to 0^+$ we have ${(\sin{x})}^{\cos{x}}\approx x$, so the first summand behaves like $x\cdot\frac{\cos^2{x}}{\sin{x}}\approx x\cdot\frac{1}{x}=1$ (while $\sin{x}\cdot\ln{\sin{x}}\to 0$); in the second summand ${(\cos{x})}^{\sin{x}}\to 1$ and both $\cos{x}\cdot\ln{\cos{x}}$ and $\frac{\sin^2{x}}{\cos{x}}$ tend to $0$. Hence $$\lim_{x \to 0^+} f'(x)=\boxed{1}$$
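A numerical check of this limit via one-sided difference quotients, using $f(0)=0^1+1^0=1$ (illustrative Python):

```
import numpy as np

def f(x):
    return np.sin(x)**np.cos(x) + np.cos(x)**np.sin(x)

# right difference quotient at 0; should approach 1
for h in [1e-2, 1e-4, 1e-6]:
    print(h, (f(h) - 1.0) / h)
```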
H: Question based on finer and coarser topological spaces I am trying exercise 1.2 (question 12) of C. Wayne Patty's Foundations of Topology. I am stuck on it. I am unable to see why such a space in (1) must exist, as the union of topologies is not always a topology. (I could only think of the union of these topologies, but it's not a topology.) Also, can someone please give some hint for (2)? It would be really helpful. AI: The relevant topology $\mathscr{T}$ is the one generated by $\bigcup_{\alpha \in \Lambda} \mathscr{T}_\alpha$ as a subbasis: its open sets are the (arbitrary) unions of finite intersections $S_{\alpha_1}\cap\dots\cap S_{\alpha_n}$ with each $S_{\alpha_i} \in \mathscr{T}_{\alpha_i}$. For part (2), suppose $\mathscr{T}'$ is finer than all $\mathscr{T}_\alpha$. Then $\mathscr{T}'$ contains all open sets of all the $\mathscr{T}_\alpha$; since $\mathscr{T}'$ is itself a topology, it is closed under finite intersections and arbitrary unions, so it contains every element of $\mathscr{T}$. Hence all elements of $\mathscr{T}$ are elements of $\mathscr{T}'$, and so $\mathscr{T}$ is coarser than $\mathscr{T}'$.
H: Correct way to state a theorem Reading the following statement: Theorem: An operator $A$ has the property $P$ if $\lim_{\lambda \rightarrow 0}\left( A-\lambda \right) ^{-1}$ exists. Does the reader understand that $\left( A-\lambda \right) ^{-1}$ is defined on a neighborhood of $0$? Or must the theorem be stated like this: Theorem: An operator $A$ has the property $P$ if there is some neighborhood $V\subset \mathbb{C}$ of $0$ such that $\left( A-\lambda \right) ^{-1}$ and $\lim_{\lambda \rightarrow 0}\left( A-\lambda \right) ^{-1}$ exist. Short statements are more beautiful. If there are any rules to follow, can somebody tell me about them? AI: I think a good habit when writing down a theorem is to add the essential details about the objects involved in the statement. Other properties may be added elsewhere or not added at all if not strictly important. In both cases, it is not clear where the operator $A$ lives. That is an essential thing to add so as to make the reader able to understand why property $P$ is meaningful for $A$ or why $(A-\lambda)^{-1}$ makes sense. I think it is clear that $(A-\lambda)^{-1}$ must be defined around $\lambda = 0$ in order to be able to take the limit.
H: Testing Series $ \sum\limits_{n = 3}^{\infty} \frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}} $ I have a problem which is related to testing the divergence or convergence for the sum of a series. For more details: $$ \sum\limits_{n = 3}^{\infty} \frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}} $$ To solve this problem, I think that we need to separate $u_{n}$ into two parts: the alternating series and the cosine series. In general, I find it hard to deal with $\ln(n)$ and $\cos(\alpha n)$ when $n \rightarrow \infty$ because I cannot use limits or Taylor expansions for them. AI: Note that $$\left|\frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}}\right|\leq \frac{3}{n(\ln(n))^{\frac{3}{2}}}$$ for $n\geq 3$. Now, $f(x)=\frac{3}{x(\ln x)^{\frac{3}{2}}}$ is positive and decreasing on $[3,\infty)$, and $$\int_3^\infty f(x)dx=\int_3^\infty \frac{3}{x(\ln x)^{\frac{3}{2}}}dx=\left.-\frac{6}{(\ln x)^{\frac{1}{2}}}\right|_{3}^\infty<\infty.$$ By integral test, the series $\displaystyle\sum_{n=3}^\infty \left|\frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}}\right|$ converges. Therefore, the series $\displaystyle\sum_{n=3}^\infty\frac{(-1)^n + 2\cos(\alpha n)}{n(\ln(n))^{\frac{3}{2}}}$ converges absolutely, hence is convergent.
H: Convergence in Polish Spaces If I have a point $x \in S$ where $S$ is some Polish space with metric $d$, a closed set $F \subset S$, and $\inf\,\{d(x,f) \mid f\in F\}=0$, then why can we say that there exists $f' \in F$ such that $d(x,f')=0$? If we had some kind of compactness it would be easy, but my topology class was ages ago and I can't see a solution right now. AI: For each $n > 0$, there is $y_n \in F$ with $d(y_n,x) \leq 1/n$. So by definition, $y_n \rightarrow x$, so that $x$ is in the closure of $F$. Thus ($F$ closed) $x \in F$.
H: Root of the derivative is unique in some intervals Suppose that $p_n$ is a real polynomial of degree $n$ and that it has $n$ simple real roots, say $x_{n,1}<x_{n,2}<\dots <x_{n,n}$. Then, by Rolle's theorem, $p_n'$ has a real root in each of $(x_{n,1},x_{n,2})$, $(x_{n,2},x_{n,3})$, etc. How do you show that this root is unique in these intervals? AI: Suppose that $p_n'$ had multiple roots in one of the intervals. Then $p_n'$ would have more than $n-1$ roots. But since $p_n'$ has degree $n-1$, it can only have $n-1$ roots, a contradiction.
H: Does $\lim\limits_{n\to\infty}|X_n-X|=0$ imply $\lim\limits_{n\to\infty}|X_n-X|^p=0\hspace{0.2cm}\forall p>1$? Let $\left(X_n\right)_{n\geq1}$ be a sequence of random variables and $X$ a random variable as well. Does $\lim\limits_{n\to\infty}|X_n-X|=0$ imply that $\lim\limits_{n\to\infty}|X_n-X|^p=0\hspace{0.2cm}\forall p>1$ as well? That is, is it true that $$\lim\limits_{n\to\infty}|X_n-X|=0\Rightarrow\lim\limits_{n\to\infty}|X_n-X|^p=0\hspace{0.2cm}\forall p>1\;?$$ AI: Yes, it is true. Given $p > 1$, the function $x \mapsto |x|^p$ is continuous. Thus if we have a sequence $(a_n)_n$ in $\mathbb{R}$ with $\lim_n a_n = a$, then by continuity of $x \mapsto |x|^p$ also $\lim_n |a_n|^p =|a|^p$. Hence, we can conclude.
H: Prove using bisection that if $f$ is continuous on $[a, b]$ and $f(a)<0<f(b)$, then $f$ has a zero in $(a,b)$ Let's define the following process: 1) If $f(\frac{a+b}{2})=0$, then we're done. 2) If $f(\frac{a+b}{2})\neq0$, then either $f$ changes sign on $[a, \frac{a+b}{2}]$, or on $[\frac{a+b}{2}, b]$. So consider next the interval where $f$ changes sign. So we end up with a set of intervals $I_1=[a_1,b_1], I_2=[a_2,b_2],$ etc., with the properties that $a_n \leq a_{n+1}$, $b_{n+1} \leq b_n$, and $a_m \leq b_n$ for all $m, n$. Consider the set $A$ of all $a$'s, and the set $B$ of all $b$'s. Clearly, $A\neq\emptyset$ and $B\neq\emptyset$, $A$ is bounded above by any $b$, $B$ is bounded below by any $a$. Hence, there exist $\sup A$ and $\inf B$, $\sup A \leq \inf B$. Since the intervals $I_n$ are closed, $\sup A \in A$ and $\inf B \in B$. So $[\sup A, \inf B]$ must be the 'last' interval in the process above. Question: Can I conclude now that the existence of $\sup A$ and $\inf B$ means that the process above had ended with $f\big(\frac{\sup A + \inf B}{2}\big)=0$? AI: Since intervals $I_n$ are closed, $\sup A\in A$ and $\inf B\in B$. That is not true. Consider, for example, the collection $$I_n = \left[-\dfrac{1}{n}, \dfrac{1}{n}\right].$$ With $A$ and $B$ being as before, it is easy to see that $\sup A = 0 = \inf B$ but $0\notin A$ and $0\notin B$. Now, let us assume that we get an infinite sequence of the intervals. That is, we never stop at any point. This is possible only if $f\left(\dfrac{a_n+b_n}{2}\right)$ is never $0$ at any stage. (As you pointed out, we are clearly done if we ever do reach a stage where we get a $0$.) In this case, your final claim does hold. In fact, we will have $\sup A = \inf B$. It is a standard exercise that if you have a sequence $I_1\supset I_2 \supset \cdots$ of nested closed intervals, then their intersection is non-empty. Moreover, since the size ($b_n - a_n$) tends to $0$, one can also show that the intersection contains precisely one element: Call it $\xi.$ It should be clear that $\sup A = \xi = \inf B$. In particular, $$\xi = \dfrac{\sup A + \inf B}{2}.$$ Moreover, note that $\xi$ has the following properties: $a_1 \le a_2 \le \cdots \le \xi \le \cdots \le b_2 \le b_1$, $f(a_n) < 0 < f(b_n).$ Since $\xi = \sup A$, the first property tells us that $a_n \to \xi$. Since $f$ is continuous, we have $f(a_n) \to f(\xi)$. From the second property, we conclude that $f(\xi) \le 0$. A similar analysis with $(b_n)$ shows us that $f(\xi) \ge 0$. This gives us that $$f(\xi) = 0,$$ as desired.
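The proof mirrors the usual bisection algorithm; here is a small Python sketch of the process being formalized (the tolerance is an arbitrary stopping rule, standing in for the limit argument in the proof):

```
def bisect(f, a, b, tol=1e-12):
    # assumes f continuous with f(a) < 0 < f(b)
    while b - a > tol:
        m = (a + b) / 2
        if f(m) == 0:
            return m            # case 1): exact zero found
        if f(m) < 0:
            a = m               # sign change now on [m, b]
        else:
            b = m               # sign change now on [a, m]
    return (a + b) / 2          # the xi that a_n and b_n squeeze onto

print(bisect(lambda x: x*x - 2, 0.0, 2.0))   # ~1.4142135..., i.e. sqrt(2)
```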
H: Let $X = \{ \sqrt{p} : p \text{ is prime} \}$, $Y \subseteq X$ and $\sqrt{p} \not\in Y$. Show that $[\mathbb{Q}(Y)(\sqrt{p}) : \mathbb{Q}(Y)] = 2$. I am trying to solve Problem 22 from Chapter 5 of Patrick Morandi's Field and Galois Theory: Let $K = \mathbb{Q}(X)$, where $X = \{ \sqrt{p} : p \text{ is prime} \}$. Show that $K$ is Galois over $\mathbb{Q}$. If $\sigma \in \operatorname{Gal}(K/\mathbb{Q})$, let $Y_\sigma = \{ \sqrt{p} : \sigma(\sqrt{p}) = - \sqrt{p} \}$. Prove the following statements. (a) If $Y_\sigma = Y_\tau$, then $\sigma = \tau$. (b) If $Y \subseteq X$, then there is a $\sigma \in \operatorname{Gal}(K/\mathbb{Q})$ with $Y_\sigma = Y$. (c) If $\mathcal{P}(X)$ is the power set of $X$, show that $\lvert \operatorname{Gal}(K/\mathbb{Q})\rvert = \lvert \mathcal{P}(X) \rvert$ and that $\lvert X \rvert = [K : \mathbb{Q}]$, and conclude that $\lvert \operatorname{Gal}(K/\mathbb{Q}) \rvert > [K : \mathbb{Q}]$. (Hint: A Zorn's lemma argument may help in (b). You may want to verify that if $Y \subseteq X$ and $\sqrt{p} \not\in Y$, then $[\mathbb{Q}(Y)(\sqrt{p}):\mathbb{Q}(Y)]=2$. The inequality $\lvert \mathcal{P}(X) \rvert > \lvert X \rvert$ is proved in Example 2.2 of Appendix B.) To complete part (c), I need to show that $[K : \mathbb{Q}]$ is not finite. If I can do this, then I will have shown that $[K : \mathbb{Q}]$ is countably infinite, since $K/\mathbb{Q}$ is an algebraic extension. Since $X$ is also countably infinite, this will show that $\lvert X \rvert = [K : \mathbb{Q}]$. The hint asks me to verify that if $Y \subseteq X$ and $\sqrt{p} \not\in Y$, then $[\mathbb{Q}(Y)(\sqrt{p}):\mathbb{Q}(Y)]=2$. This I am not able to do. I understand that if I show this, then it will imply that $[K : \mathbb{Q}]$ is not finite, because (by induction) for every $n \in \mathbb{N}$, there is an intermediate field $L$ with $[L:\mathbb{Q}] = 2^n$, namely $L = \mathbb{Q}(X_n)$ where $X_n$ is any subset of $X$ of cardinality $n$. The problem essentially boils down to the question, if $\sqrt{p} \not\in Y$, is it still possible that $\sqrt{p} \in \mathbb{Q}(Y)$? (And we seek to show that the answer is "No".) So, one idea I had was to assume that $\sqrt{p} \in \mathbb{Q}(Y)$ and somehow derive a contradiction, but I had no luck in doing that. Another idea was to try and show the existence of a non-trivial automorphism of $\mathbb{Q}(Y)(\sqrt{p})$ over $\mathbb{Q}(Y)$. Since $[\mathbb{Q}(\sqrt{p}):\mathbb{Q}]=2$, we have that $[\mathbb{Q}(Y)(\sqrt{p}):\mathbb{Q}(Y)]\leq 2$, so showing that a non-trivial automorphism exists is enough. In fact, we know exactly what this automorphism should look like: it must act as the identity on $Y$ (and $\mathbb{Q}$, trivially) and it must map $\sqrt{p}$ to $-\sqrt{p}$. But I am not able to argue for why such an automorphism must exist. I know that the non-trivial embedding of $\mathbb{Q}(\sqrt{p})$ into $\mathbb{C}$ can be lifted to an embedding of $\mathbb{Q}(Y)(\sqrt{p})$ into $\mathbb{C}$, but there is no reason for this lift to automatically act as the identity on $Y$, right? I am aware of an earlier question asking to show that for distinct primes $p_1,\dotsc,p_n \in \mathbb{N}$, $\sqrt{p_1},\dotsc,\sqrt{p_n}$ are linearly independent over $\mathbb{Q}$, but in my case I need to show the stronger result that they are algebraically independent over $\mathbb{Q}$, if I'm not mistaken. Any help is appreciated. References Morandi, Patrick, Field and Galois theory, Graduate Texts in Mathematics. 167. New York, NY: Springer. xvi, 281 p. (1996). ZBL0865.12001. 
AI: In the answer below I follow your hint exactly, and show that $[\mathbb{Q}(Y)(\sqrt{p}):\mathbb{Q}(Y)]=2$. As you already noted, it suffices to show that $\sqrt{p}\not\in \mathbb{Q}(Y)$. When $Y$ is finite, this follows from Bill Dubuque's proof in the accepted answer to the MSE question you linked to. Suppose now that $Y$ is infinite. Let $\lbrace y_k \rbrace_{k\geq 1}$ be an enumeration of $Y$. Suppose by contradiction that $\sqrt{p}\in \mathbb{Q}(Y)$. By definition of $\mathbb{Q}(Y)$ for an infinite $Y$, this means that $\sqrt{p}\in \mathbb{Q}(y_1,y_2,\ldots,y_N)$ for some finite $N$, and then we can apply Bill Dubuque's result again. This finishes the proof.
H: Invertible Matrices and basis Consider the $n\times n$ matrix $A$ and the basis $\{\vec{v_1}\ldots \vec{v_n}\}$ for $\mathbb{R}^n$. Prove that if $\{A\vec{v_1} \ldots A\vec{v_n}\}$ is a basis for $\mathbb{R}^n$, then $A$ is invertible. If we let $B=\{A\vec{v_1} \ldots A\vec{v_n}\}$, does this mean the column vectors form a basis and thus $B$ is invertible? How do we prove $A$ is invertible from there? I think I have to start with $c_1(A\vec{v_1})+\ldots+c_n(A\vec{v_n})=\vec{0}$ and show that $c_1=\ldots=c_n=0$, but I am not sure where to go after that. AI: Hint: Write $v_1,\dots,v_n$ in the basis $Av_1,\dots,Av_n$.
H: Can the cross section of a parallelepiped be a regular pentagon? Came across this question in a children's recreational mathematics book. Apparently, the cross section of a cube cannot be a regular pentagon. It could be an irregular pentagon though. But if we generalize this problem, can the cross section of a parallelepiped be a regular pentagon? How do we prove that? AI: No, it's impossible. Consider two sides of the pentagon that lie in opposite faces of the parallelepiped. The sides lie in the plane of the pentagon, and they also lie in two parallel planes, so they must themselves be parallel, but a regular pentagon has no parallel sides.
H: Prove that quadratic form is differentiable from the definition I am trying to show that a quadratic form $Q: \mathbb{R}^n \to \mathbb{R}, Q(x)=x^T A x$ is differentiable from the definition of the differential. I started by considering $Q(x+h)=(x+h)^T A (x+h)=x^T A x + x^T A h + h^T A x + h^T A h$. We need to find a linear map $L_Q: \mathbb{R}^n \to \mathbb{R}$ s.t. $\lim \limits_{h \to 0} \frac{Q(x+h)-Q(x)-L_Q(h)}{\|h\|}=0$. Note that $x^T (A + A^T) h = \langle x, (A + A^T) h \rangle$ is a linear map in $h$, so this is my candidate for the differential of $Q(x)$, but I am struggling to show that the error term $r_Q(h)=h^T A h$ decays sublinearly. What I tried was to use the CS inequality to show that $\frac{|r_Q(h)|}{\|h\|} = \frac{|\langle h, Ah\rangle|}{\|h\|} \leq \frac{\|h\| \|Ah\|}{\|h\|} = \|Ah\|$, but I don't see how I can show that the right-hand side has a limit of 0. Can someone please tell me if I am on the right track and give me some guidance? This is a problem in my lecture notes right after the definition of the differential, so it should be possible to solve it without much more knowledge. Thanks a lot! AI: You made a mistake in the second last displayed formula. In reality you have $$\bigl|\langle h,Ah\rangle\bigr|\leq|h|\>|Ah|\leq\|A\|\>|h|^2\ ,$$ where $\|A\|$ is the operator norm of $A$, and therefore $${\langle h,Ah\rangle^2\over|h|^2}\leq\|A\|^2\>|h|^2\ .$$
H: Transitive models of ZFC and power set Let $M$ be a transitive model of ZFC. From my understanding, if $x \in M$ then what $M$ believes to be its power set $\mathcal{P}(x)^M$ does not necessarily agree with the external power set $\mathcal{P}(x)$ (i.e. $\mathcal{P}(x)^M \neq \mathcal{P}(x)$), because $M$ might not contain all subsets of $x$. Here is where my confusion begins: Let $\varphi(x,p) = \forall y (y \in p \leftrightarrow y \subseteq x)$ be the formula saying that $p$ is the power set of $x$. As $M$ is a model of ZFC we have $\varphi^M (x ,p) \leftrightarrow \varphi(x, p)$ for any $x,p \in M$. But $\varphi^M (x , \mathcal{P}(x)^M)$ holds, which implies that $\mathcal{P}(x)^M = \mathcal{P}(x)$ and that $M$ is closed under subsets by transitivity. This doesn't agree with my understanding of transitive models mentioned above. I should note, that I don't have much background in model theory and it is very likely that I'm missing something obvious. AI: Transitive models are closed under elementhood, not under subsets. In other words, $M$ is transitive if $x\in y\in M$ implies $x\in M$, and not as you suggest, $x\subseteq y\in M$ implies $x\in M$. You are correct that if $M$ is transitive, and $x,y\in M$, then $M\models x\subseteq y$ if and only if $x\subseteq y$ (in $V$, that is). The only problem is that perhaps $x\notin M$. But what you can say is that if $M$ is transitive, then $\mathcal P(x)^M=\mathcal P(x)\cap M$.
H: Determining the differential of a map defined on a submanifold of $\Bbb R^n$ Let $M=\{(x,y,z,w)\in \Bbb R^4:x^3+y^3+z^3-3xyz=1\}$ and consider the function $f:M\to \Bbb R^2$ defined by $f(x,y,z,w)=(x+y+z+w,w^3+w)$. It is easily checked that $1$ is a regular value of the function $g:\Bbb R^4\to \Bbb R$, $(x,y,z,w)\mapsto x^3+y^3+z^3-3xyz$, so $M$ is a regular submanifold of $\Bbb R^4$, of dimension $3$. I am trying to determine the differential $f_*:T_p(M)\to T_{F(p)}\Bbb R^2$ at a point $p=(x_0,y_0,z_0,w_0)\in M$. I have shown that the tangent space $T_pM \subset T_p\Bbb R^4=\Bbb R^4$ is precisely the set $\{v\in \Bbb R^4: \text{grad}\,g(p)\cdot v=0\}$. But then how do I have to compute the differential of $f$ at $p$? AI: Hints: $1).\ $ Fix $(x_0,y_0,z_0,w_0)\in M$, and find a slice chart $(\varphi, U)$ for $M$ about $(x_0,y_0,z_0,w_0)$. Try letting $F(x,y,z,w)=x^3+y^3+z^3-3xyz-1$ and taking $\varphi=(F,y,z,w).$ $2).\ $Now check that $$\begin{pmatrix} F_x &F_y &F_z &F_w \\ 0&1 &0 &0 \\ 0& 0&1 &0 \\ 0&0 &0 &1 \end{pmatrix}=\begin{pmatrix} 3x^2-3yz&3y^2-3xz &3z^2-3xy &0 \\ 0&1 &0 &0 \\ 0& 0&1 &0 \\ 0&0 &0 &1 \end{pmatrix}$$ is nonsingular at $(x_0,y_0,z_0,w_0)$ (this needs $F_x(x_0,y_0,z_0,w_0)\neq 0$; since $1$ is a regular value, at least one of $F_x,F_y,F_z$ is nonzero at the point, and after permuting the roles of $x,y,z$ if necessary we may assume it is $F_x$). $3).\ $ Find the matrix representation of $f_*$ by using the standard coordinates $(\psi, V)$ on $\mathbb R^2$ about $f(x_0,y_0,z_0,w_0)$, and computing the Jacobian matrix of $\psi\circ f\circ \varphi^{-1}$ at $\varphi(x_0,y_0,z_0,w_0)$.
H: how to prove φ(n) tends to infinity as $n$ grows? I am wondering how I can prove $\lim\limits_{n \rightarrow \infty} \varphi(n)=\infty$. My attempt: For a prime $n$, we have $\varphi(n)=n-1$, so the claim holds along the primes. However, how can I prove it when $n$ is a composite number? What do you think about it? Could you please show me? Regards AI: Since we have the following inequality: $$\varphi(n) \ge \sqrt{\frac{n}{2}}$$ (a standard bound: $\varphi$ is multiplicative and $\varphi(p^k)=p^{k-1}(p-1)\ge p^{k/2}$ for every prime power except $p^k=2$, which costs at most a factor of $\sqrt{2}$), it is obvious that $\lim_{n\to\infty}\varphi(n)=\infty$ holds.
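A quick empirical spot-check of the bound (illustrative Python, using SymPy's totient):

```
from math import sqrt
from sympy import totient

# verify phi(n) >= sqrt(n/2) for small n
assert all(totient(n) >= sqrt(n / 2) for n in range(1, 10_000))
print("bound holds for all n < 10^4")
```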
H: What does "x-y" denote in boolean logic? I came across this equation in set theory : x-y = y'-x' where x and y are sets If it was a "+" or "." , I could easily correlate it with OR and AND function. But what does this "-" indicate in boolean logic? It indicates Set difference in set theory. Is it complement ? But complement is denoted by " ' " here. AI: This says that $x$ and $y$ are sets. Denoting the complement of $z$ by $z'$, we have that $$x-y=x\cap y'$$ Intuitively, you are taking only the elements of $x$ that are not elements of $y$. This does not require $y$ to be a subset of $x$. As you can work out, $$y'-x'=y'\cap (x')'=y'\cap x$$ so you can see the equivalence. For the connection with Boolean logic, this is $x.y'$, or in the notation I am used to $x\wedge \neg y$.
H: A question involving faithful flatness, support of a module, and Spectrum of a ring The following theorem is taken from Matsumura's Commutative Ring Theory [M] Theorem 7.3(i) and the paragraph before it. My questions only concern the proof of the Theorem below. A ring homomorphism $f:A\longrightarrow B$ induces a map ${}^{a\!}f:\mathrm{Spec}(B)\longrightarrow\mathrm{Spec}(A)$, under which a point $\mathfrak{p}\in\mathrm{Spec}(A)$ has an inverse image \begin{equation*} {}^{a\!}f^{-1}(\mathfrak{p})=\{P\in\mathrm{Spec}(B):P\cap A=\mathfrak{p}\} \end{equation*} which is homeomorphic to $\mathrm{Spec}(B\otimes_{A}\kappa(\mathfrak{p}))$. Theorem. Let $f:A\longrightarrow B$ be a ring homomorphism and $M$ a $B$-module. If $M$ is faithfully flat over $A$, then ${}^{a\!}f(\mathrm{Supp}(M))=\mathrm{Spec}(A)$. The proof of the theorem given by [M] is as follows: For $\mathfrak{p}\in\mathrm{Spec}(A)$, since $\kappa(\mathfrak{p})\neq 0$, we have $M\otimes_{A}\kappa(\mathfrak{p})\neq 0$. Hence, if we set $C=B\otimes_{A}\kappa(\mathfrak{p})$ and $M'=M\otimes_{A}\kappa(\mathfrak{p})=M\otimes_{B}C$, the $C$-module $M'$ is non-zero, so that there is a $P^{\ast}\in\mathrm{Spec}(C)$ such that $M'_{P^{\ast}}\neq 0$. Now set $P=P^{\ast}\cap B$. Then \begin{align*} M_{P^{\ast}}'=M\otimes_{B}C_{P^{\ast}}=M\otimes_{B}\left(B_{P}\otimes_{B_{P}}C_{P^{\ast}}\right)=M_{P}\otimes_{B_{P}}C_{P^{\ast}} \end{align*} so that $M_{P}\neq 0$, that is, $P\in\mathrm{Supp}(M)$. But $P^{\ast}\in\mathrm{Spec}(B\otimes\kappa(\mathfrak{p}))$, so that as we have seen $P\cap A=\mathfrak{p}$. Therefore, $\mathfrak{p}\in{}^{a\!}f(\mathrm{Supp}(M))$. [M] seems to have skipped a few lines here and there in the proof of the theorem, and I've not been able to see how [M] obtains the following: Why is $M'$ a non-zero $C$-module? (My guess is that $M\otimes_{A}\kappa(\mathfrak{p})\neq 0$ as an $A$-module, and so is non-zero as a $C$ module. Is this a correct understanding?) Does such a $P^{\ast}$ exists such that $M_{P^{\ast}}\neq 0$? (I have completely no clue on this) Any help or advice would be appreciated. I also do not have any knowledge on algebraic geometry, and as such, if there are books that I should look at in this aspect, please do recommend too. Thanks! AI: For question 1, your understanding is correct. For question 2, I suppose you mean why is there a $P^*$ such that $M\color{red}'_{\!P^*}\ne\{0\}$. That is because as $M'$ is nonzero, its support in $\operatorname{Spec}C$ is nonempty.
H: What does it mean, "Nonindependent identically distributed and nonergodic static behavior." I have heard "independent identically distributed (iid) process" a lot. It means mathematically that, for any $t$ and $t’$, $X(t)$ and $X(t’)$ follow the same distribution and are independent of each other. It can then also be considered an ergodic process. As an example in my field (i.e., from the communication engineer perspective), there is a Rayleigh wireless channel, which is modeled as an iid process. In detail, for all $t$, $X(t) \sim \mathcal{CN}(0,1)$. However, I had never heard of a non-independent identically distributed (niid) process, and have now finally encountered it in a paper. Although its "literal" meaning can be understood, I cannot understand it well. Can someone explain its physical meaning? Or can someone give me some examples? AI: Non-independent identically distributed is what one would typically expect if $X(t)$ is a stationary Markov process, or the output of a stable filter driven by an i.i.d. input signal. For example, suppose $Y(t)\sim\mathcal{CN}(0,1)$ is an iid "noise" sequence and the signal of interest is $$X(t)=\sum_n a_n Y(t-n),$$ where $(a_n)$ is some fixed, non-random, sequence of complex numbers such that $\sum_n |a_n|^2=1$. The $X(t)$ sequence will typically not be iid. But if the $(a_n)$ sequence is such that all $a_n=0$ except for a single particular value of $n$, then the $X(t)$ sequence will be iid, after all.
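A small simulation of the filtered-noise example, using real instead of complex Gaussians for simplicity and the arbitrary choice $a_0=a_1=1/\sqrt{2}$ (so that $\sum_n|a_n|^2=1$): each $X(t)$ is standard normal, but neighbouring samples are correlated, hence identically distributed yet not independent.

```
import numpy as np

rng = np.random.default_rng(1)
y = rng.standard_normal(100_001)            # iid noise Y(t)
x = (y[1:] + y[:-1]) / np.sqrt(2)           # X(t) = (Y(t) + Y(t-1)) / sqrt(2)

print(np.var(x))                            # ~1.0: each X(t) is N(0, 1)
print(np.corrcoef(x[:-1], x[1:])[0, 1])     # ~0.5: adjacent samples correlate
```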
H: What is the meaning of the union of ascending chain $\bigcup_{k} N_{k}$ With $A\in \operatorname{End}(V)$ and $N(A^k)$ the nullspace of $A^k$, what does the 'union of ascending chain' mean, defined by: $$\bigcup_{k} N(A^k)$$ I would assume that it means: $$N(A)\cup N(A^2)\cup...\cup\ N(A^k)$$ Is this correct? AI: Correct, except that this union may run over more indices, perhaps infinitely many. More generally, $$ x\in\bigcup_{i\in I} X_i\iff \exists i\in I\colon x\in X_i.$$ Here, presumably $\bigcup_k$ is short for $\bigcup_{k\in\Bbb N}$.
H: Is $S_5=\left\{ \begin{pmatrix} x\\ y \end{pmatrix}\in \mathbb{C}^2 ;\, y=\bar{x}\, \right\}$ a subspace of $(\mathbb{C}^2,+,\bullet,\vec{0},1)$ Is $S_5=\left\{ \begin{pmatrix} x\\ y \end{pmatrix}\in \mathbb{C}^2 ;\, y=\bar{x}\, \right\}$ a subspace of $(\mathbb{C}^2,+,\bullet,\vec{0},1)$ With $\begin{pmatrix}x_1 \\ y_1\end{pmatrix} \dotplus \begin{pmatrix} x_2\\ y_2\end{pmatrix}:=\begin{pmatrix} x_1+x_2\\ y_1+y_2\end{pmatrix}$ that is, the usual addition in $\mathbb{C^2}$ $\lambda \bullet \begin{pmatrix} x\\ y\end{pmatrix}=\begin{pmatrix}\lambda x\\ \lambda y\end{pmatrix}$ for $\lambda \in \mathbb{C}$ and $\begin{pmatrix} x\\ y\end{pmatrix}\in \mathbb{C}^2$ AI: No. Hint: Have you considered the effect of multiplying by a non-real complex number?
H: Regular and irregular points of second order ODE I want to find the regular and irregular singular points of this ODE $$x \sin (x) y'' + 3y' + xy = 0$$ What can I do? AI: To start off, I think you can see that $x=0$, $x=n\pi$ and $x=\infty$ might be singular points. Divide the whole equation by $x\sin(x)$ and see if the corresponding coefficients are analytic or not. What I mean by that is to check whether $\frac{1}{\sin(x)}$ and $\frac{x}{\sin(x)}$ are analytic about $x=0$. This is because, recall, you can write the most prototypical ODE in the form $y''+\frac{p(x)}{x-x_0}y'+\frac{q(x)}{(x-x_0)^2}y=0$, and then one has to check if $p$ and $q$ are analytic about $x_0$. Do the same for $n\pi$ where $n$ is non-zero. Then do the same for the point at infinity via the change of variables $x=\frac{1}{z}$.
H: Operation that returns a unique result for each unordered set of numbers What operation $f$ can I apply to any two numbers $a$ & $b$, such that $$f(a,b) = f(b,a)$$ where $f(a,b)$ is unique for any combination of a & b in the set of whole numbers? P.S. I'm really not sure what tag to use here, I'd appreciate someone adding the correct one. A word on why: I have a database table with two columns $a$ and $b$. I'd like to ensure that there are no duplicate rows in my table. However, I don't care about the order of $a$ and $b$. a | b --+-- 1 | 2 2 | 1 Is effectively a duplicate. AI: A possible solution uses the fact that the set of pairs of whole numbers is countable: there is an explicit pairing function (a bijection between pairs of whole numbers and whole numbers), and applying it to the sorted pair $(\min(a,b), \max(a,b))$ gives a function that is symmetric in its two arguments yet still unique for each unordered pair.
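A concrete sketch in Python, using the Cantor pairing function on the sorted pair (assuming "whole numbers" means non-negative integers):

```
def cantor(a: int, b: int) -> int:
    # Cantor pairing: a bijection from pairs of naturals to naturals
    return (a + b) * (a + b + 1) // 2 + b

def unordered_key(a: int, b: int) -> int:
    # sorting first makes the key symmetric, yet it stays unique
    return cantor(min(a, b), max(a, b))

assert unordered_key(1, 2) == unordered_key(2, 1)
assert unordered_key(1, 2) != unordered_key(1, 3)
```

For the database use case, an alternative with the same effect is a unique index over the sorted column pair, e.g. on (LEAST(a,b), GREATEST(a,b)), where the database supports expression indexes.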
H: Showing that $\varnothing$ and $X$ are open sets in a metric space $(X, \rho)$. Let $(X, \rho)$ be a metric space. Define the open ball with center $x_0 \in X$ and radius $r > 0$ by $$B(x_0, r) = \{x \in X: \rho(x, x_0) < r\}$$ We say a subset $E$ of $X$ is an open set if for each $x \in E$, there is an $r > 0$ with $B(x, r) \subset E$ . Let $\mathcal{C}$ be the collection of all open sets in a metric space $(X, \rho)$. I wish to show that $\varnothing, X \in \mathcal{C}$. Now, I have done tons of Google searching and searching throughout MSE regarding this problem, but I haven't been able to get a complete explanation of how the proof works. This is my understanding of the proof: We have $\varnothing \in \mathcal{C}$ because since there are no elements in $\varnothing$, the claim that $\varnothing$ is an open set is vacuously true. We have $X \in \mathcal{C}$ because... and I'm a bit lost there. Every example I've seen that attempts to prove $X \in \mathcal{C}$ states that either this claim is obvious, or says that $B(x, 1) \subset X$. However, I'm failing to see why either of these two things would be true. AI: ... because $\{t\in X\,:\, Q(t)\}\subseteq X$ for any predicate $Q$. In particular, every ball $B(x, r) = \{t \in X : \rho(t, x) < r\}$ is by its very definition a subset of $X$, so for each $x \in X$ any radius at all (say $r = 1$) gives $B(x, r) \subset X$, and $X$ is open. Your reasoning for $\varnothing$ is correct.
H: Comparability graphs are perfect graphs (reference request ) By whom it was initially shown that the family of comparability graphs is a subclass of perfect graphs? I am a first year math student and i am working on the project with my group mates, at certain point of the "paper" we claim that comparability graphs is a subclass of perfect graphs, however what do we put as a reference? Where it has been proven first? P.s. I tried to find information online but didn't succeed. AI: First, you check Wikipedia. Going to https://en.wikipedia.org/wiki/Perfect_graph and clicking on "Families of graphs that are perfect" gives you a list that includes comparability graphs, and a citation for the list: West's Introduction to Graph Theory. Then, you look in that textbook. The index tells us that comparability graphs are mentioned on pages 228, 231, and 329-31, with a definition on page 228. Two paragraphs down from the definition, we have 5.3.25.* Proposition. (Berge [1960]) Comparability graphs are perfect. The proposition is followed by a short proof. It would often be acceptable to cite the textbook for this fact (as Wikipedia does), especially if you're citing the textbook for multiple facts, but in this case, we have more information: we can go to Appendix F: References and see the citation Berge C., Les problèmes de coloration en théorie des graphes. Publ. Inst. Statist. Univ. Paris 9 (1960), 123-160. And now you have the original source. A note on consulting a source before you cite it: Wikipedia is not always reliable, so I would not claim that the theorem is proved in West's Introduction to Graph Theory before actually checking what the textbook says. However, West is always reliable, so I would be fine citing the paper above even if you haven't read it, or don't know French. If you want to be more careful, you should cite the textbook, and mention that it attributes the result to Claude Berge in such-and-such paper. Or, you could read the paper and find the proof there, in which case you're safe - but should still cite both the textbook and the paper, because the textbook might be easier for readers to find.
H: Generating function of a Fibonacci series but with certain variation. Let $f_n$ denote the $n$th Fibonacci number; what is the generating function for the sequence $f_0,0,f_2,0,f_4,0,...$? $$\text{Attempt}$$ It's known that the sequence has the following two properties.\begin{align} \smash[b]{\sum_{i=1}^n F_{2i-1}}&=F_1+F_3+F_5+\cdots+F_{2n-1}\\ &=F_{2n}\\ \end{align} $$\begin{align} \smash[b]{\sum_{i=1}^n F_{2i}}&=F_2+F_4+F_6+\cdots+F_{2n}\\ &=F_{2n+1}-1\\ \end{align}$$ I will intentionally not start from $0$. $$B(x)=F_2x^1+0+F_4x^2+..=\sum_{k=1}^{\infty}F_{2k}x^k \implies B(1)=\sum_{k=1}^{\infty}\sum_{i=1}^{k}F_{2i-1}$$ But I don't see how to proceed from here. Any help will be appreciated! AI: Hint. If $f$ is the generating function of any sequence $f_0,f_1,f_2,f_3,f_4\dots$ then $$\frac{f(x)+f(-x)}{2}=f_0+f_2x^2+f_4x^4+\dots.$$
H: Finding The Tangent Line to $\sqrt{x} + \sqrt{y} = 1$ Hello everyone, I have the curve $\sqrt{x} +\sqrt{y} = 1$ and a tangent line to this curve that cuts the axes at $A, B$. How can I prove that $OA + OB = 1$, where $O = (0,0)$? I tried setting $A = (a, 0)$, $B = (0, b)$, finding the tangent line through $A$ and $B$, and making some equations with the curve, but I got nowhere. AI: Let $M(a,b)$ be a touching point. Thus, since $$\frac{1}{2\sqrt{x}}+\frac{y'}{2\sqrt{y}}=0,$$ we obtain for the slope: $$m=-\sqrt{\frac{b}{a}}$$ and the equation of the tangent: $$y-b=-\sqrt{\frac{b}{a}}(x-a)$$ or $$y=-\sqrt{\frac{b}{a}}x+b+\sqrt{ab}.$$ Now, for $x=0$ we get $y=b+\sqrt{ab}$ and for $y=0$ we obtain $x=a+\sqrt{ab}$. Id est, $$OA+OB=b+\sqrt{ab}+a+\sqrt{ab}=(\sqrt{a}+\sqrt{b})^2=1.$$
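The computation can also be verified symbolically (illustrative Python with SymPy; the touching point is parametrized by $a$, with the curve solved explicitly for $y$):

```
import sympy as sp

x, a = sp.symbols('x a', positive=True)
y = (1 - sp.sqrt(x))**2              # sqrt(x) + sqrt(y) = 1 solved for y
b = y.subs(x, a)                     # touching point M(a, b)
m = sp.diff(y, x).subs(x, a)         # slope of the tangent at M

x_int = a - b / m                    # x-intercept: set y = 0 in the tangent
y_int = b - m * a                    # y-intercept: set x = 0
print(sp.simplify(x_int + y_int))    # 1
```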
H: Subgroups of order 5 and 6 in a group $\mathbb{Z}_{10}$ According to my solution, we use Lagrange's Theorem and the fact that all subgroups of a finite group have an order dividing the order of the group. As a result, we can say that the orders of the subgroups of the group $\mathbb{Z}_{10}$ are $\{1,2,5,10\}$, which means that we cannot have a subgroup of order $6$. Then as for the subgroups of order $5$, we have the subgroup consisting of the even residues. I wonder if my judgment is correct or if there are any other ways to solve this type of question. AI: Generally the best you can do in solving these problems is doing an "educated" exhaustive search. You are right that a subgroup of order $6$ cannot exist. For a subgroup of order $5$ you have also found that the even numbers work. For this case you can check the rest of the elements and see if they lie in a subgroup of order $5$. It is the odd numbers we need to check. You can note that $1$, $3$, $7$, and $9$ generate the group, so cannot be contained in a subgroup of order $5$. We have also that $5$ generates a subgroup of order $2$; any subgroup that contains $5$ must contain a subgroup of order $2$, hence cannot be of order $5$ (in fact the only other one is the whole group). Having eliminated all possibilities, we see that the even elements are all that is left — in other words, the even residues form the only subgroup of order $5$.
H: Two candidates for definition of splitting field Definition 1. [Bourbaki] A splitting field of $f\in \Bbbk[x]$ is an extension $\Bbbk\subset \mathbb K$ which splits $f$ (into possibly repeated linear factors) and satisfies $\mathbb K=\Bbbk(Z(f,\mathbb K))$ where $Z(f,\mathbb K)$ is the set of roots of $f$ in $\mathbb K$. Definition 2. A splitting field of $f\in \Bbbk[x]$ is a weakly initial object in the category of field extensions of $\Bbbk$ which split $f$. We have $1\implies 2$ by mapping zeros to zeros. Conversely, suppose $\Bbbk\subset \mathbb K$ is a splitting field for $f\in \Bbbk[x]$. We wish to prove it is isomorphic over $\Bbbk$ to $\Bbbk(Z(f,\mathbb K))$. By assumption we have a field extension morphism $\mathbb K\subset \Bbbk(Z(f,\mathbb K))$. As a $\Bbbk$-algebra morphism out of a field, this is a $\Bbbk$-linear injection. Moreover, the target is a finite dimensional $\Bbbk$-linear space since $f$ has finitely many zeros. So the $\Bbbk$-algebra morphism is a $\Bbbk$-linear injection of $\Bbbk$-linear spaces of equal $\Bbbk$-dimension whence bijective and therefore a $\Bbbk$-algebra isomorphism. The proof of $2\implies 1$ seems to fail for infinite families of polynomials, where we cannot know the extension generated by all their zeros is finite. Question. What is an example where $2\implies 1$ fails? Is there any reason to take the seemingly weaker definition 2 in the case of infinitely many polynomials? AI: Let $\{f_i\}_{i\in I}$ be a family of polynomials in $\Bbb k[X]$. Let $\mathcal C$ be the category of field extensions of $\Bbb k$ in which all $f_i$ split into linear factors. Let $\Bbb K$ be weakly initial in $\mathcal C$. Let $Z=\{\,a\in\Bbb K\mid \exists i\in I\colon f_i(a)=0\,\}$. Let $\Bbb L=\Bbb k[Z]$. Then each $f_i$ splits in $\Bbb L$ because $\Bbb L$ contains the zeroes we know to be in $\Bbb K$. So $\Bbb L$ is an object in $\mathcal C$. Hence there exists a homomorphism $ \Bbb K\to\Bbb L$ (which is the identity on $\Bbb k$!). Together with the inclusion $\Bbb L\to \Bbb K$, this induces an endomorphism of $\Bbb L$, which again is determined by a permutation of $Z$ (which in fact permutes zeroes only per $f_i$). The inverse permutation does define an endomorphism of $\Bbb L$ as well, which is of course the inverse of the above (so it turns out to be an automorphism). We can use this to obtain a homomorphism $\Bbb K\to\Bbb L$ that is the identity on $\Bbb L$. As homomorphisms of fields are injective, we conclude that $\Bbb K=\Bbb L$.
H: Determine unknown probability by observing results Disclaimer: Sorry, I'm self-taught and a non-native speaker; apart from the basics, I don't know the proper terms. I'm pretty sure there is a part of statistics that would help me deal with my problem, but somehow I'm unable to word my question well enough for Google to be helpful. Say I'm observing a fake coin being tossed. The coin's heads-chance is some random number (could be anything from 0% to 100%, all numbers equally likely... I hope uniform distribution is the correct term). Now I saw the coin being flipped X times (could be 10, 100 or millions) and got heads Y times. Now obviously I could say "This is most probably a coin with heads-chance Y/X", but what if I wanted something like a histogram? Something that would let me say "I'm 90% sure that the coin's heads-chance is between 60% and 80%, 4% sure it's between 80% and 90%, 4% sure it's between 50% and 60%, and so on..." I'd appreciate even non-complete answers or pointers; "Go study X-E-omega-effect" or such would be helpful as well. AI: What you are thinking of is Bayesian statistics. Call the chance of heads $p.$ Then what you are basically saying is Suppose $p$ is uniformly distributed on $[0, 1]$ and I observe a sample of $X$ (independent) coin flips each with a probability $p$ of being heads. What is the probability distribution of $p$ given my sample $X$? This is exactly the idea of Bayesian statistics. The distribution of $p$ before you observe the data is the prior (in this case a uniform distribution). The distribution of $p$ after you've observed the $X$ coin flips is called the posterior distribution (which you need to work out mathematically). I could point you towards some more sources, but it's hard to do so without a better idea of your current level. The terms Bayesian statistics and Bayesian estimation should help you help yourself though.
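A short sketch of that posterior in Python: with a uniform prior and $Y$ heads in $X$ independent flips, the posterior for $p$ is the Beta$(Y+1,\,X-Y+1)$ distribution (the data $X=100$, $Y=70$ below are made up for illustration).

```
from scipy.stats import beta

X, Y = 100, 70                        # hypothetical observed flips and heads
post = beta(Y + 1, X - Y + 1)         # posterior for p under a uniform prior

# exactly the kind of statements asked for in the question:
print(post.cdf(0.8) - post.cdf(0.6))  # P(0.6 < p < 0.8 | data)
print(post.ppf([0.05, 0.95]))         # central 90% credible interval for p
```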
H: Notation question: $R(A)=\mathbb{C}^2$ with $A\in \operatorname{End}(\mathbb{R}^2)$ I have a map, represented by the matrix $A=\left(\begin{array}{ll}1 & 0 \\1 & 1\end{array}\right)\in \operatorname{End}(\mathbb{R}^2)$. My teacher wrote in his lecture notes that because it is invertible, we can conclude that $R(A)=\mathbb{C}^2$. To me it looks like he means that the range of the map is $\mathbb{C}^2$ with the imaginary part of all entries equal to zero, which seems a bit odd when he defined $A\in \operatorname{End}(\mathbb{R}^2)$. Am I correct, or does $\mathbb{C}$ not denote the complex numbers in this case? AI: We don't have your notes, but yes, the most likely interpretation is that this is a typo for $\mathbb{R}^2$.
H: Median (and consequently the mean) of an evenly-spaced list Why is it the case you can find the median (and consequently the mean) of an evenly-spaced list by taking the mean of opposite terms? Where opposite refers to opposite positions (e.g. first and last). What intuition allows you to most easily reconcile this fact? e.g. [1,2,...,49,50] (1 + 50)/2 = 25.5 = mean = median AI: The mean is the median, since we can write the list as $m - k, m - (k - 1), \dots, m, m + 1, \dots, m +k$ if there are an odd number of terms, or similarly $m - (k - 0.5), m - (k - 1.5), \dots, m - 0.5, m + 0.5, \dots, m + (k - 0.5)$ if there are an even number of terms, where $m$ is the median. In either case, the mean will be $m$, since the mean is \begin{align} \frac{(m - k) + (m - k + 1) + \dots + m + \dots + (m + k)}{2k + 1} & = \frac{(2k + 1)m + (-k) + k + (-k + 1) + (k - 1) + \dots}{2k + 1} \\ & = \frac{(2k + 1) m }{2k + 1} \\ & = m, \end{align} since we can group the $(2k + 1)$ $m$ terms together, and the rest of the terms in the numerator cancel out. Essentially the same thing happens if there are an even number of terms. And the mean of the two outermost terms is also $m$, since that's just $((m - k) + (m + k)) / 2 = m$.
H: How can we prove that $3t$ cannot be a perfect cube for any integer $t$ except 9? If $t \in \mathbb{Z}$ then prove that $3t$ can never be a perfect cube except for $t=9$. How can we prove things like these? I'm pretty new to number theory and I find it difficult to prove things like these. Mathematical induction cannot be used here (because the statement has an exception for $t=9$), and even if we use a computer it is not wise to iterate $t$ from $-\infty$ to $\infty$. If we try to use proof by contradiction, our main question just gets transformed: Let's say there exists a $t$ other than 9 such that $$3t = n^3$$ Well this means $$t=\frac{n^3}{3}$$ that is, there exists a perfect cube which is divisible by 3 (other than 27). But how to prove that there doesn't exist any perfect cube which is divisible by 3 (other than 27)? I just need a hint for how to proceed for proofs like these. AI: Consider the number $t=3^2a^3$. We have $$3(3^2a^3)=3^3a^3=(3a)^3$$ Clearly $a$ need not be $1$, hence there are many $t$ that make $3t$ a perfect cube.
H: linear algebra, given the dimension of the kernel of the transformation and finding k. If the dimension of the kernel of the transformation $$T \colon \mathbb{R}^3 \to \mathbb{R}^3,\ T((x, y, z)) = (2x + y, x + z, kx + 2y - z)$$ is $1$, find $k$. I found, $\operatorname{kernel} = \{1,-2,-1\}$ and $k$ is $3$. is that correct? AI: Yes, it is correct, except that $\ker T$ is not $(1,-2,-1)$; it is spanned by it. The determinant of the matrix of $T$ with respect to the standard basis of $\Bbb R^3$ is $k-3$, and therefore $3$ is the only value that $k$ can take for which $\ker T\ne\{0\}$. And it is easy to check that, for this value of $k$, $\ker T$ is spanned by $(1,-2,-1)$.
H: Finding integral for this question- $$\int_0^{\pi/2} \frac{1}{(\sqrt{\sin(x)}+\sqrt{\cos(x)} )^4}dx$$ Any help would be highly appreciated. I first used an online integral finder but it only displayed the final answer, which was 0.33333. PS: I am still in high school so complex answers wouldn't be of any help. AI: Divide the numerator and denominator by $\cos^2{x}$: $$I=\int_0^{\frac{\pi}{2}} \frac{\sec^2{x}}{{\left(\sqrt{\tan{x}}+1\right)}^4} \; dx$$ Let $u=\tan{x}$: $$I=\int_0^{\infty} \frac{du}{{\left(\sqrt{u}+1\right)}^4}$$ Let $w=\sqrt{u}+1$: $$I=-\frac{1}{w^2}+\frac{2}{3w^3} \bigg \rvert_1^{\infty}=\boxed{\frac{1}{3}}$$
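A numerical confirmation of the closed form (illustrative Python):

```
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: (np.sqrt(np.sin(x)) + np.sqrt(np.cos(x)))**-4,
              0, np.pi / 2)
print(val)   # 0.33333..., i.e. 1/3
```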
H: How to solve a fraction of imaginary numbers? I have the following equation. $a = \frac{(1/2) - (3/2)i}{(3/2) + (3/2)i}$ The solution says that $a^2 = 5/9$. I don't know how I can perform the steps, could I get some feedback? Thanks! AI: Hint: Multiply numerator and denominator by the complex conjugate of the latter first: $$\frac{\frac12-\frac32 i}{\frac32+\frac32i}=\frac{\frac12}{\frac32}\frac{1-3i}{1+i}=\frac13\frac{(1-3i)(1-i)}{(1+i)(1-i)}$$ Can you proceed? (You should find $a=\frac{-1-2i}{3}$; note that the quoted value is really $|a|^2 = a\bar a = \frac{5}{9}$, whereas $a^2$ itself is $\frac{-3+4i}{9}$.)
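Python's built-in complex numbers make a handy check here (illustrative):

```
a = (0.5 - 1.5j) / (1.5 + 1.5j)
print(a)              # (-0.333...-0.666...j), i.e. (-1 - 2i)/3
print(abs(a)**2)      # 0.5555..., i.e. |a|^2 = 5/9
print(a**2)           # (-0.333...+0.444...j), i.e. (-3 + 4i)/9
```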
H: Understanding units mod $n$ are relatively prime to $n$ I am trying to prove that the only invertible elements in $\mathbb{Z}_n$ are those that are relatively prime to $n$. The first half is straightforward. If $i$ is relatively prime to $n$, so $\gcd(i,n) = 1$, we have $\alpha i + \beta n = 1$, so $\alpha i - 1 = (-\beta)n$, so $\alpha i \equiv 1 \text{ mod $n$}$, and $\alpha$ is the inverse of $i$. The second half is less straightforward. I am trying to follow some proofs of this fact that I have found, but none of them are clear about the important step. Suppose $a$ is invertible. Then there exists $b$ such that $ab \equiv 1 \text{ mod $n$}$. So $ab - 1 = kn$ for some $k \in \mathbb{Z}$, so $ab = 1 + kn$. The claim at this point is that we can deduce immediately that $\gcd(a,n) = 1$. To show this, I believe we would need to show that $1$ divides both $a$ and $n$ (which is rather trivially true) and that if any other integer divides both $a$ and $n$, then it must divide $1$. Suppose $\gamma$ divides $a$ and $n$. That it divides $a$ implies that it divides $ab$, so it must divide $1 + kn$. It certainly divides $kn$ because it divides $n$, so it must also divide $1$, which gives the result. How does this proof sound? AI: Your proof is good. we can tie this all together: $m$ is invertible $\mod n$ if and only if there is an integer $k$ so that $km \equiv 1\pmod n$ (Def) if and only if $km -1$ is divisible by $n$ (Def) if and only if there is an integer $j$ so that $jn = km-1$ (Def) if and only if $km +(-j)n = 1$ (Obvious algebraic manipulation). .... So we need to prove such $k,-j$ exist if and only if $m,n$ are relatively prime. .... If $m$ and $n$ are relatively prime then Bezout's lemma states precisely that such $k$ and $-j$ exist. And if such $k,-j$ exist then $\gcd(m,n)$ divides the LHS so $\gcd(m,n)|1$ and so $\gcd(m,n) =1$. So $m$ and $n$ are relatively prime.
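The Bézout step of the proof is constructive; here is a sketch of the extended Euclidean algorithm that actually produces the inverse (illustrative Python):

```
def ext_gcd(a, b):
    # returns (g, x, y) with a*x + b*y == g == gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(m, n):
    g, x, _ = ext_gcd(m, n)
    if g != 1:
        raise ValueError(f"{m} is not invertible mod {n}")
    return x % n

print(inverse_mod(7, 10))   # 3, since 7 * 3 = 21 = 1 (mod 10)
```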
H: Show that u is radially symmetric. Consider $\Delta u=1$ on an annulus $a<r<b$ in $\Bbb{R}^2$, with $u$ vanishing on both the inner and outer circles. Here $0<a<b$, and $r=|x|=({x_1}^2+{x_2}^2)^{\frac{1}{2}}$ for $x=(x_1,x_2) \in \Bbb{R}^2$. Show that $u$ is radially symmetric. I am not sure how to get to that, but I know that the Laplace equation is invariant under all rigid motions. And the given Jacobian matrix is $J=\begin{bmatrix} \cos \theta & \sin \theta \\ -r \sin \theta & r \cos \theta \end{bmatrix}$. AI: Step 1: Rewrite the PDE in polar coordinates. https://en.m.wikipedia.org/wiki/Laplace_operator Step 2: Assume the solution depends only on $r$ and solve the ODE. Step 3: Use or prove uniqueness for the Poisson equation. https://en.m.wikipedia.org/wiki/Uniqueness_theorem_for_Poisson%27s_equation Let me know if you need further assistance with one of the steps.
H: Independence of discrete random variables Let $U_i$, $V_i,\ i=1,\dots,n$ be $2n$ i.i.d. real random variables. Are $(U_i-V_i)^2,\ i=1,\dots, n$ independent? I should check that $$P((U_i-V_i)^2=k,(U_j-V_j)^2=p)=P((U_i-V_i)^2=k)P((U_j-V_j)^2=p)$$ Is there any general theorem about such independence, or should I use generating functions to make the computation? AI: I think Theorem 2.1.10 of Rick Durrett's Probability: Theory and Examples is what you are looking for.
H: Arithmetic Series Word problem Taking a calculus class and trying to solve a series and sequence word problem, but I am struggling on the easiest problem. Please, someone talk me through a baby step on this question? 1) Xin has been given a 14 day training schedule by her coach. Xin will run for A minutes on day 1, where A is a constant. She will then increase her running time by (d+1) minutes each day, where d is a constant. a) Show that on day 14, Xin will run for ```(A + 13d +13) minutes``` At this point I think I understand that I can write the formula as a+(n-1)d where a = A and d=(d+1) and n=14 2) Yi has also been given a 14 day training schedule by her coach. Yi will run for (A-13) minutes on day 1. She will then increase her running time by (2d-1) minutes each day. Given that Yi and Xin will run for the same length of time on day 14, b) find the value of d. My understanding is that the formula can be written as a = (A-13), n = 14, d=(2d-1) At this point I don't know how to find the answer to question b. Something is telling me it's a problem where I can use a system of linear equations to find the value of d. But I am not certain. Please can someone help me? Thank you very much. AI: If you are looking to plug numbers into the formula, don't use $d$ for both $d$ and $d+1$. Your formula for an arithmetic sequence seems to be: if $a = $ the value on the first day and $d=$ the amount changed each day, then $a_n = $ the value on day $n = a+ (n-1)d$. This is fine. But as the variable $d$ is given to mean something arbitrarily different, we have to use a different variable in our formula. And variables are just notation; we could use anything. I'll just put the formula with a capital $D$, so that $a_n = $ the value on day $n = a + (n-1)D$, where we have $D= d+1$ and $a=A$. So the formula is $a_{14} = a + (14-1)D= a+13D = A+13(d+1) = A + 13d + 13$. For 2) We have $a = (A-13)$ and $D = 2d-1$ and the formula is that on the $14$th day she will run $a + (14-1)D = a+13D = (A-13) +13(2d-1) = A -13 + 26d-13=A+26d - 26$. So we are told that Xin, who ran $A+13d + 13$ on the $14$th day, and Yi, who ran $A+26d-26$ on the fourteenth day, ran the same amount on the 14th day. So $A + 13d + 13 = A + 26d-26$. Solve for $d$. Note: We don't need to know what $A$ is. If we subtract $A$ from both sides we have $A+13d + 13 = A + 26d -26$ so $A+13d + 13-A =A+26d -26 -A$ so $(A-A) + 13d + 13 = (A-A) + 26d -26$ so $13d + 13 = 26d - 26$ continue...
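(To finish: $13d + 13 = 26d - 26$ gives $39 = 13d$, so $d = 3$. As a check, both runners then do $A + 52$ minutes on day 14, since $A + 13\cdot 3 + 13 = A + 52$ and $(A-13) + 13(2\cdot 3 - 1) = A + 52$.)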
H: Prove that $v_{1} + W = v_{2} + W$ if and only if $v_{1} - v_{2}\in W$. Let $W$ be a subspace of a vector space $V$ over a field $\textbf{F}$. For any $v\in V$ the set $\{v\} + W = \{v+w:w\in W\}$ is called the coset of $W$ containing $v$. It is customary to denote this coset by $v + W$ rather than $\{v\} + W$. (a) Prove that $v + W$ is a subspace of $V$ if and only if $v\in W$. (b) Prove that $v_{1} + W = v_{2} + W$ if and only if $v_{1} - v_{2}\in W$. MY ATTEMPT (a) If $v\in W$, then $v + W = W$, which is a subspace by assumption. If $v + W$ is a subspace of $V$, then $0\in v + W$. Thus there is a $w\in W$ such that $v + w = 0$, that is to say, $v = -w\in W$, and we are done. (b) If $v_{1} + W = v_{2} + W$, then $v_{1}\in v_{2} + W$. Consequently, $v_{1} = v_{2} + w$, that is to say, $v_{1} - v_{2} = w \in W$, and we are done. Conversely, if $v_{1} - v_{2}\in W$, then $v_{1} - v_{2} = w\in W$. Hence $v_{1} = v_{2} + w$. Then we have that $v_{1} + W = (v_{2} + w) + W = v_{2} + W$. Any comments on my solution? Any contribution is appreciated. AI: You can think of this geometrically. For example, in $\mathbb{R}^2$ the only subspaces are $0$, $\mathbb{R}^2$ and lines passing through the origin. Take a non-trivial subspace $W$. Then $W$ is a line passing through the origin. Now, $v+W$ is a translate of $W$. So, if $v \notin W$ then we have translated $W$ away from the origin $\implies v+W$ cannot contain the zero vector, hence it fails to be a subspace of $\mathbb{R}^2$.
H: What does it mean for an ODE to be conservative? What does it mean for an ODE to be conservative? For example, I already read somewhere that the equation $$w\cdot y''-y+y^{2k+1}=0,$$ with $w>0$ and $k\in \mathbb{N}$ fixed constants, is conservative. In practice, what does this mean? AI: Given the differential equation $wy'' - y + y^{2k + 1} = 0, \tag 1$ we may multiply it through by $y'$: $wy'y'' - yy' + y^{2k + 1}y' = 0, \tag 2$ and observe that $\left ( \dfrac{w}{2} (y')^2 \right )' = wy''y', \tag 3$ and $\left ( -\dfrac{y^2}{2} + \dfrac{y^{2k + 2}}{2k + 2} \right )' = - yy' + y^{2k + 1}y'; \tag 4$ then (2) may be written $\left ( \dfrac{w}{2} (y')^2 \right )' + \left ( -\dfrac{y^2}{2} + \dfrac{y^{2k + 2}}{2k + 2} \right )' = 0, \tag 5$ or $\left ( \dfrac{w}{2} (y')^2 -\dfrac{y^2}{2} + \dfrac{y^{2k + 2}}{2k + 2} \right )' = 0; \tag 6$ thus, $\dfrac{w}{2} (y')^2 -\dfrac{y^2}{2} + \dfrac{y^{2k + 2}}{2k + 2} = C, \; \text{a constant} \tag 7$ along the solution curves $(x, y(x))$ of (1). That is, the quantity on the left of (7) is a conserved quantity of the equation (1); hence we deem (1) a conservative ordinary differential equation, since the function $F(y, y') = \dfrac{w}{2} (y')^2 -\dfrac{y^2}{2} + \dfrac{y^{2k + 2}}{2k + 2} \tag 8$ is invariant in value on the solution curves. In general, a conservative second-order equation or system is one for which a function such as $F(y, y')$ exists. For these $F(y, y')$, it follows that $\dfrac{\partial F(y, y')}{\partial y} y' + \dfrac{\partial F(y, y')}{\partial y'}y'' = \dfrac{dF(y, y')}{dx} = 0, \tag 9$ and thus $y(x)$ satisfies the differential equation (9). We may in fact invoke these principles to construct differential equations corresponding to many different $F(y, y')$; for example, if $F(y, y') = \cos y + e^{y'}, \tag{10}$ then we have $- y' \sin y + e^{y'}y'' = 0, \tag{11}$ or $y'' = e^{-y'}y'\sin y. \tag{12}$
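As a numerical illustration (a sketch assuming the contributed deSolve package is installed, with arbitrarily chosen $w=2$, $k=1$ and initial data), one can integrate (1) and watch the conserved quantity stay flat:
```
library(deSolve)
w <- 2; k <- 1
rhs <- function(t, s, parms) {
  y <- s[1]; v <- s[2]
  list(c(v, (y - y^(2 * k + 1)) / w))  # rewrite (1) as y'' = (y - y^{2k+1})/w
}
sol  <- ode(y = c(y = 0.5, v = 0), times = seq(0, 20, 0.01), func = rhs, parms = NULL)
Fval <- (w / 2) * sol[, "v"]^2 - sol[, "y"]^2 / 2 + sol[, "y"]^(2 * k + 2) / (2 * k + 2)
range(Fval)  # essentially constant along the solution, as (7) predicts
```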
H: Sigma Algebra property Let $\mathcal{C}=\{(-\infty,x]:x\in\mathbb{R}\}$ and $\mathcal{C}_0=\{[a,b]:-\infty<a<b<\infty\}$. I want to show that $\sigma(\mathcal{C})=\sigma(\mathcal{C}_0)$. I think the general idea of the proof is to show that $A\subseteq \mathcal{C}\implies A\subseteq\sigma(\mathcal{C}_0)\implies\sigma(\mathcal{C}_0)\subseteq\sigma(\mathcal{C})$ and vice versa. My difficulties lie in understanding why if $A\subseteq \mathcal{C}\implies A\subseteq\sigma(\mathcal{C}_0)$ is true, then, $\sigma(\mathcal{C}_0)\subseteq\sigma(\mathcal{C})$. Can someone help me with this step? AI: The general principle you need is this: if every element of a collection $\mathcal{A}$ belongs to a $\sigma$-algebra $\mathcal{F}$, then $\sigma(\mathcal{A})\subseteq\mathcal{F}$, because $\sigma(\mathcal{A})$ is by definition the smallest $\sigma$-algebra containing $\mathcal{A}$. (Note the direction: from $\mathcal{C}\subseteq\sigma(\mathcal{C}_0)$ one concludes $\sigma(\mathcal{C})\subseteq\sigma(\mathcal{C}_0)$, not the reverse.) Now apply this in both directions. First, $(-\infty,x]= \bigcup_{n=1}^{\infty}[-n,x]$, and since each $[-n,x]\in \mathcal{C}_0$ we get $\bigcup_{n=1}^{\infty}[-n,x] \in \sigma(\mathcal{C}_0)$, hence $\sigma(\mathcal{C})\subseteq \sigma(\mathcal{C}_0)$. Conversely, $[a,b]= [a,\infty)\cap(-\infty,b]$. Since each $\left(a-\frac{1}{n},\infty\right) = (-\infty,a-\frac{1}{n}]^c \in \sigma(\mathcal{C})$, we have $[a,\infty)= \bigcap_{n=1}^{\infty}\left(a-\frac{1}{n}, \infty\right) \in \sigma(\mathcal{C})$, and therefore $[a,b] = [a,\infty)\cap(-\infty,b] \in \sigma(\mathcal{C})$, which gives $\sigma(\mathcal{C}_0) \subseteq \sigma(\mathcal{C})$.
H: Deriving the equation of a plane $Ax+By+Cz=D$. Defining a plane as the span of two linearly independent vectors, I've been trying to derive the equation $$Ax+By+Cz=D$$ without much success. The equation seems to indicate that a vector $$\vec{v}=\begin{bmatrix} x \\ y \\ z\end{bmatrix}$$ is in the plane if and only if $Ax+By+Cz=D$. I was wondering if anyone could at least point me in the right direction as to how to prove the two definitions are equivalent. AI: The span of vectors contains the origin, but in general $$A\,0+B\,0+C\,0\ne D.$$ A plane can be defined as the affine set $$\vec p=\lambda\vec a+\mu\vec b+\vec c.$$ We can eliminate $\lambda,\mu$ by forming the dot product with $\vec a\times\vec b$, $$\vec a\times\vec b\cdot\vec p=\vec a\times\vec b\cdot\vec c,$$ which is of the form $$Ax+By+Cz=D.$$
H: Different boundaries of a set, its closure and interior in $\mathbb{R}$ Denoting by $\overline A$ the closure of $A$ and by $A^\circ$ the interior of $A$: I've proven that in a general topological space $(X,\mathcal{O})$ the following is true: $\partial\overline A \subseteq \partial A$ and $\partial A^\circ \subseteq \partial A$, where $A \subseteq X$ and $\partial A$ is defined as $\partial A := \overline A \setminus A^\circ$. Now I should consider the standard topology on $\mathbb{R}$ and find a subset $A\subseteq \mathbb{R}$ such that the boundaries $\partial\overline A$, $\partial A^\circ$ and $\partial A$ are all different. I've tried considering $A=[a,b]$, $A=\,]a,b[$, $A=\,]a,b]$, $A=[a,b] \cup\{c\}$, but none of these satisfy the condition. I'm relatively new to topology... hopefully you have an idea :) AI: You'll need a set of which the closure is larger than the set, and the interior is smaller than the set. Three of your candidates don't satisfy this criterion. Furthermore, note that $\partial \mathring{A} = \overline{\mathring{A}} \setminus \mathring{\mathring{A}} = \overline{\mathring{A}} \setminus {\mathring{A}}$ (taking the interior of an open set yields the open set!). So we need a set where $\overline{\mathring{A}} \neq \overline{A}$, otherwise simply $\partial \mathring{A} = \partial A$. Note that your idea of adding a 'separate' singleton ($\{c\}$) helps in this respect. Let $A = [a,b] \cup \{c\}$, as in your example. $c$ lies in the closure of $A$, but is gone when we take $\overline{\mathring{A}}$! Note that $\partial \overline{A} = \overline{\overline{A}} \setminus \mathring{\overline{A}} = \overline{A} \setminus \mathring{\overline{A}}$. So we need a set where $\mathring{\overline{A}} \neq \mathring{A}$, as otherwise $\partial A = \partial \overline{A}$. The interior of the closure of the set can be larger than the interior of the set. This holds for $A = [0,1) \cup (1,2]$, as here $1$ lies in the interior of the closure, but not in the interior. Now try combining those ideas.
H: Prove that if $A$ is open in $(X, d)$ and $B$ is any subset of $X$, then $\overline{(A\cap \overline{B})}=\overline{(A \cap B)}$ Hello friends, could you help me with the following, please: Prove that if $A$ is open in $(X, d)$ and $B$ is any subset of $X$, then $\overline{(A\cap \overline{B})}=\overline{(A \cap B)}$. I have tried to do it using the fact that $A\cap \overline{B} \subset \overline{A \cap B}$, but that property has not let me see anything; I do not see what relationship it may have. AI: Clearly $A\cap B\subseteq A\cap\operatorname{cl}B$, so $\operatorname{cl}(A\cap B)\subseteq\operatorname{cl}(A\cap\operatorname{cl}B)$, but we still have to show that $\operatorname{cl}(A\cap\operatorname{cl}B)\subseteq\operatorname{cl}(A\cap B)$. Note, though, that $A\cap\operatorname{cl}B\subseteq\operatorname{cl}B$, which is a closed set; what does this tell you about $\operatorname{cl}(A\cap\operatorname{cl}B)$?
H: Negating an alternate definition of a limit point I know that one way to define $x$ as a limit point of set $A$ is to say that there is some sequence $\{a_n\}$ contained in $A$ which converges to $x$ and $a_n \neq x$ $\forall n \in \mathbb{N}$. I'm trying to negate this definition to say $x$ is not a limit point of set $A$. My attempt has been to say: $x$ is not a limit point of set $A$ if for all sequences $\{a_n\}$ contained in $A$, $a_n \rightarrow x$ implies there is some $n \in \mathbb{N}$ such that $a_n = x$. I'm not sure if that's correct, or if there is a better way to say it. Any tips would be appreciated. AI: Your negation is correct. You can also say for all sequences $(a_n)$ of elements in $ A $, $a_n$ does not converge to $ x$ OR there exists $ n \in \Bbb N $ such that $ a_n=x$. $$(\forall (a_n)\in A^{\Bbb N})\;$$ $$ \lim_{n\to+\infty}a_n\ne x \;or\; (\exists n\in \Bbb N)\;:\; a_n=x$$
H: Proper subsets and arbitrary subset of the containing set Suppose there is a set $A$ and $P$ is a proper subset of $A$. Also, suppose that $B$ is any subset (not necessarily a proper subset) of $A$. Then, are we justified in writing $$P \subseteq B \subseteq A$$ In other words, can we be sure that it is not the case that $$B \subseteq P \subseteq A$$ Edit: $A, B, P$ are all finite sets. AI: No. For example, take $A = \{ 1, 2 \}$ and $P = \{ 1 \} \subsetneq A$. If $B = \{ 1,2 \} \subseteq A$, then $P \subseteq B \subseteq A$; If $B = \varnothing \subseteq A$, then $B \subseteq P \subseteq A$; and If $B = \{ 2 \} \subseteq A$, then neither $B$ nor $P$ is a subset of the other, so neither $B \subseteq P \subseteq A$ nor $P \subseteq B \subseteq A$ is true. So there isn't really much you can say about the relationship between a proper subset and a general subset. I should also add: $P \subseteq B \subseteq A$ and $B \subseteq P \subseteq A$ are not mutually exclusive (e.g. they're both true when $P=B$), and one being false does not imply that the other is true (as the above example demonstrates), so your "in other words" is inaccurate.
H: Correct Interpretation of Notation I was reading a parametrization and they used a peculiar way to write their equations which I am unfamiliar with, so I am not sure how to interpret it properly. $K$ refers to Kelvin in this case, and what I am particularly struggling with is the symbolic meaning of the $\min[1, \max[0,f(x)]]$ structure. $ Tscale_{i} = \min[1, \max[0, \frac{Tsurf_i-268.16 K}{Tmelt_i-268.16 K}]] $ Hope you guys can help me understand it. Regards. AI: $\min[a,b]$ is a function returning the smaller of $a$ and $b$. $\max[b,c]$ is a function returning the larger of $b$ and $c$. $\min[a,\max[b,c]]$ is a function returning the smaller of $a$ and (the larger of $b$ and $c$). $i$ is an index. Speculation: $T$ seems to refer to temperature, $K$ seems to be a unit (Kelvin), $T_{surf}$ might mean surface temperature, $T_{melt}$ may be melting point. If our suppositions are true, the function is intended for application on a sequence of paired surface temperatures/melting points. It returns a measure called $T_{scale}$ with no units. This measure ranges from zero to one inclusive. More specifically, it equals either $0$, $1$, or $\frac{T_{surf}-268.16}{T_{melt}-268.16}$ when this value is between zero and one.
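In code this is just a clamp to $[0,1]$. For instance, in R (the function and argument names below are made up for illustration):
```
tscale <- function(tsurf, tmelt) {
  # clamp the scaled temperature to the interval [0, 1]
  pmin(1, pmax(0, (tsurf - 268.16) / (tmelt - 268.16)))
}
tscale(tsurf = c(260, 270, 280), tmelt = 273.15)  # 0.000 0.369 1.000
```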
H: Laplace equation on unit disc Let $u(x,y)$ be the solution of $$u_{xx}+u_{yy}=64$$ in the unit disc $\{(x,y): x^2+y^2<1\}$, such that $u$ vanishes on the boundary of the disc; find $u(\frac{1}{4},\frac{1}{\sqrt{2}})$. What I tried: I know how to solve Laplace's equation, so I made the transformation $v = u -32x^2$. With this my problem converts into $v_{xx}+v_{yy}=0$, with $v= -32x^2$ on the boundary of the disc. Then I shifted to polar coordinates because of the unit disc; I also know that the solution of the Laplace equation in polar coordinates is $$ v(r,\theta)= a_0 + \sum_{n=1}^{\infty} a_n r^n \cos(n\theta)+\sum_{n=1}^{\infty} b_n r^n \sin(n\theta)$$ The boundary condition becomes $v(1,\theta)= - 32 \cos^2\theta= -16( 1+\cos 2\theta)$. Matching this condition with the solution I got $$a_0= -16 = a_2$$ and the other coefficients are zero, so the solution becomes $$ v(r,\theta)= -16-16 r^2 \cos 2\theta$$ How do I calculate $u(\frac{1}{4},\frac{1}{\sqrt{2}})$ from there? Please help. AI: One of the common tricks in solving Laplace's equation is to exploit the symmetry, as the shape of the domain is usually what determines the solution. In this case, your domain is the unit disc, which is radially symmetrical. As such, it would make more sense to change to polar coordinates instead. Letting $$x = r\cos \theta \qquad \qquad y = r\sin \theta$$ the problem becomes $$\frac 1r \frac{\partial}{\partial r}\bigg(r\frac{\partial u}{\partial r}\bigg) + \frac{1}{r^2} \frac{\partial^2 u}{\partial \theta^2} = 64 \qquad \qquad u(1,\theta) = 0$$ where now $u = u(r,\theta)$. Since the problem is radially symmetrical, we expect that $u$ only depends on $r$, so that the problem simplifies to $$\frac 1r \frac{\partial}{\partial r}\bigg(r\frac{\partial u}{\partial r}\bigg) = 64 \qquad \qquad u(1) = 0$$ where now $u = u(r)$. The general solution is $$u(r) = 16r^2 + A\ln (r) + B$$ We require the solution to be at least continuous in the domain, so we must have $A=0$ in order to avoid the blowup at $r=0$. We then have $$u(1) = 0 \implies B = -16$$ It follows that $$u(r) = 16(r^2-1)$$ Switching back to Cartesian coordinates, we find that $$u(x,y) = 16(x^2+y^2-1) \qquad u \bigg(\frac 14, \frac{1}{\sqrt 2}\bigg) = -7$$ EDIT: Just realised that you actually had the correct answer. You just have to undo all of the transformations. In Cartesian coordinates, $$v(x,y) = -16-16r^2\cos(2\theta) = -16-16r^2 (\cos^2 (\theta) - \sin^2 (\theta)) = -16-16x^2+16y^2$$ and $u$ is calculated via $$u(x,y) = v(x,y) + 32x^2 = -16+16x^2+16y^2 = 16(x^2+y^2-1)$$
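A one-line check of the final formula in R:
```
u <- function(x, y) 16 * (x^2 + y^2 - 1)
u(1/4, 1/sqrt(2))  # -7
```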
H: Maximum size of the automorphism group of a graph given some constraint? Are there any results on the maximum size of the automorphism group of a graph with $n$ vertices and $m$ edges? What about for a graph with $n$ vertices and is $d$-regular? AI: For your second question, Wormald, On the number of automorphisms of a regular graph proves an upper bound for connected $d$-regular graphs; there's a complicated expression which implies in particular that for connected $3$-regular graphs the number of automorphisms is at most $3n2^n$. Without looking too closely at the bound for $d=3$ or in general, we can notice that it's at most exponential, and so if you don't care if your graph is connected, we can do better by having around $\frac n{d+1}$ copies of $K_{d+1}$. Then the number of automorphisms is roughly $(\frac{n}{d+1})! (d+1)!^{n/(d+1)}$, which grows much faster than exponential. This is likely to be best possible. For a lower bound in the $m$-edge case, note that $K_{k,n-k}$ has $k(n-k)$ edges and $k!(n-k)!$ automorphisms, which is really close to $n!$ when $k$ is small.
H: Tensor product $E\otimes_{A} F$ of modules $E,F$ where $F$ has a basis Let $A$ be a ring, $E$ a right $A$-module and $F$ a left $A$-module. Let $(b_\mu)_{\mu\in M}$ be a basis of $F$. Then every element of $E\otimes_AF$ can be written uniquely in the form $\sum_{\mu\in M}(x_\mu\otimes b_\mu)$ where $x\in E^{(M)}$. Attempt: The mapping $v:\bigoplus_{\mu\in M}Ab_\mu\rightarrow F,\,y\mapsto\sum_{\mu\in M} y_\mu$ is an $A$-module isomorphism. Furthermore, there exists a $\mathbf{Z}$-linear bijection $$g:E\otimes_A\bigoplus_{\mu\in M}Ab_\mu\rightarrow\bigoplus_{\mu\in M}(E\otimes_AAb_\mu)$$ such that $g(x\otimes(y_\mu)_{\mu\in M})=(x\otimes y_\mu)_{\mu\in M}$ for $x\in E$ and $y\in\bigoplus_{\mu\in M}Ab_\mu$. Thus the mapping $$[1_E\otimes v]\circ g^{-1}:\bigoplus_{\mu\in M}(E\otimes_AAb_\mu)\rightarrow E\otimes_A F$$ is a $\mathbf{Z}$-isomorphism. At this point, I don't know how to deduce the required property: that every element $z\in E\otimes_A F$ can be uniquely written in the form $\sum_{\mu\in M}(x_\mu\otimes b_\mu)$ where $x\in E^{(M)}$. Any suggestions? AI: Start with a pure tensor $x\otimes y$, with $x\in E$ and $y\in F$. By assumption, $$ y=\sum_{\mu\in M}a_\mu b_\mu $$ and therefore $$ x\otimes y=\sum_{\mu\in M}(xa_\mu)\otimes b_\mu $$ Now passing to generic elements of $E\otimes_A F$ is easy. What about uniqueness? Consider the isomorphism $g\colon A^{(M)}\to F$, $g((a_\mu)_\mu)=\sum_\mu a_\mu b_\mu$. Then define $$ \tau\colon E\times F\to E^{(M)},\qquad \tau(x,g((a_\mu)_\mu))=(xa_\mu)_\mu $$ Prove it is balanced, so it induces a group homomorphism $E\otimes_A F\to E^{(M)}$ and show that an obvious map is its inverse.
H: Find the remainder of the polynomial $f(x)$ divided by $(x-b)(x-a)$ given its remainder when divided by $x-a$ and $x-b$ I am hoping to get feedback on my solution to the following problem, and if there are better solutions I would love to take a look. Thank you for your time. Let $f(x)$ be a polynomial with remainders $A$ and $B$ when divided by $x-a$ and $x-b$ respectively, where $a\not=b$. What’s the remainder when $f(x)$ is divided by $(x-a)(x-b)$? My solution: $$f(x)=(x-a)g(x)+A$$ And so $$(x-b)f(x)=(x-a)(x-b)g(x)+A(x-b)\;\;\;(1)$$ On the other hand, $$f(x)=(x-a)(x-b)s(x)+c(x)$$ And so $$xf(x)=(x-a)(x-b)xs(x)+xc(x)\;\;\;\;(2)$$ Subtracting the first expression from the second, $$bf(x)=(x-a)(x-b)(xs(x)-g(x))+xc(x)-A(x-b)$$ Moreover, if we let $f(x)=(x-b)h(x)+B$, we have in a similar manner, $$af(x)=(x-a)(x-b)(xs(x)-h(x))+xc(x)-B(x-a)$$ Hence, $$(b-a)f(x)=(x-a)(x-b)(h(x)-g(x))+B(x-a)-A(x-b)$$ Therefore, $$f(x)=(x-a)(x-b)\dfrac{(h(x)-g(x))}{(b-a)}+\dfrac{B(x-a)-A(x-b)}{b-a}$$ And so the second term is the remainder. AI: Here is a simpler approach: $$ f(x)=(x-a)(x-b)q(x)+ux+v $$ Then $$ A = f(a) = ua + v, \quad B = f(b) = ub + v $$ gives a linear system for $u,v$ whose solution is $$ u = \frac{A-B}{a-b}, \quad v = \frac{a B -A b}{a-b} $$
H: Finding probability density - a result of an algoritm I am supposed to find probability density of the random variable $Y$, which is an output of the following algorithm: Gen $X \sim U(0,1)$ Gen $U \sim U(0,1)$ If $X<U$ then $Y:=X$ else $Y:=1-X$ return Y In R: X <- runif(10000, 0, 1) U <- runif(10000, 0, 1) res <- ifelse(X <= U, X, 1-X) My attempts: $$ P(Y \le y) = \int \limits_0^1 P(Y\le y|U=u) f_U(u) du $$ $Y \in [0,1]$, so lets consider $y \in [0,1]$ $$ P(Y\le y|U=u) = P(Y\le y, \{ X \le u, X>u \}|U=u)= $$ $$ P(Y\le y, X \le u|U=u) + P(Y\le y, X>u |U=u) = $$ $$ P(X\le y, X \le u|U=u) + P(1-X\le y, X>u |U=u)= $$ $$ P(X\le y, X \le u|U=u) + P(X \ge 1-y, X>u |U=u)= $$ $$ P(X < \min(y,u)|U=u) + 1 - P(X \le 1-y, X<u |U=u)= $$ $$ P(X < \min(y,u)|U=u) + 1 - P(X < \min(1-y,u) |U=u)= $$ $$ \min(y,u) + 1 - \min(1-y,u) $$ $$ P(Y \le y) = \int \limits_0^1 \min(y,u) + 1 - \min(1-y,u) du = y - \frac{y^2}{2} +1 -\frac{1}{2} + \frac{y^2}{2} $$ $$ P(Y \le y) = y - 1/2 $$ That would mean that $Y \sim U[0,1]$ but if I perform the simulation in R it turns out that the PDF is equal to $-2x +2$ X <- runif(10000, 0, 1) U <- runif(10000, 0, 1) res<- ifelse(X <= U, X, 1-X) hist(res, prob=TRUE, asp=1) curve(-2*x+2, lwd=2, col="red", add=TRUE) Where am I making a mistake? AI: One mistake -- Your transition from $$P(X\le y, X \le u|U=u) + P(X \ge 1-y, X>u |U=u)=$$ to $$P(X < \min(y,u)|U=u) + 1 - P(X \le 1-y, X<u |U=u)$$ is incorrect, because the complement of the event $(Z\ge a, Z>b)$ is not $(Z\le a, Z<b)$; it is $(Z<a \ \color{red}{\text{or}}\ Z\le b)$. A better way to proceed is to write $(Z\ge a, Z>b)=(Z>\max(a,b))$. From here you can continue with the complement device. With this correction you will get the final form $$\min(y,u) + 1 - \max(1-y,u).$$ Integrating this over $u$ and then differentiating wrt $y$ will get you the correct form for the density.
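For completeness, carrying out that last step: $\int_0^1 \min(y,u)\,du = y - \frac{y^2}{2}$ and $\int_0^1 \max(1-y,u)\,du = \frac{(1-y)^2}{2} + \frac{1}{2}$, so $$P(Y \le y) = \left(y - \frac{y^2}{2}\right) + 1 - \left(\frac{(1-y)^2}{2} + \frac{1}{2}\right) = 2y - y^2,$$ and differentiating gives the density $f_Y(y) = 2 - 2y$ on $[0,1]$, which is exactly the curve $-2x+2$ observed in the simulation.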
H: Prove that $\lim_{s \to \infty} \sum_{x=1}^{2s} (-1)^x\sum_{n=1}^{x}\frac{1}{n!}=\cosh (1) -1$ How can we prove that $$\lim_{s \to \infty} \sum_{x=1}^{2s} (-1)^x\sum_{n=1}^{x}\frac{1}{n!}=\cosh (1) -1$$ It seems like this is some kind of telescopic series, but I don't know how to find the limit of this sum. Any help would be greatly appreciated. AI: HINT: Note that we can write $$\begin{align} \sum_{x=1}^{2s}(-1)^x \sum_{n=1}^x\frac1{n!}&=\sum_{x=1}^s\left(-\sum_{n=1}^{2x-1}\frac1{n!}+\sum_{n=1}^{2x}\frac1{n!}\right)\\\\ &=\sum_{x=1}^s\frac1{(2x)!} \end{align}$$
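A quick numerical confirmation in R (the remainder beyond $s=10$ is already negligible):
```
s <- 10
x <- 1:(2 * s)
sum(sapply(x, function(j) (-1)^j * sum(1 / factorial(1:j))))  # partial sum for s = 10
cosh(1) - 1                                                   # 0.5430806...
```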
H: $x, y, z$ for dimensions We tend to use $x$ for an arbitrary first dimension, $y$ for one at right angles to it, $z$ for one at right angles to both of those ... what is the letter for $4\textrm{D}$? AI: I think the letter $t$, because the fourth dimension is (usually) $\textrm{time}$, which is often symbolized with $t$. However, when people use many dimensions they often use the notation $x_1, x_2, ...$ since it is easier than having to remember which letter corresponds to which dimension.
H: How can $\mathbb{Z} \times \mathbb{Z}$ be generated by $(1,1)$ and $(0, 1)$? I read somewhere that $\mathbb{Z} \times \mathbb{Z}$ can be generated by $(1,1)$ and $(0, 1)$ (taken together, I presume). I am trying to understand how this is true. I found this answer, based on which I developed the following argument: The subgroup generated by $(1,1)$ is of the form $\{(m, n)\}$ where $m= n$. The subgroup generated by $(0, 1)$ is of the form $\{(0, n)\}$. But $\mathbb{Z} \times \mathbb{Z}$ also has elements of the form $\{(m, 0)\}$. How are they being generated by $(1,1)$ and $(0, 1)$? AI: Consider the following: $(m,n)=m(1,1)+(n-m)(0,1)$. This shows that $\Bbb{Z} \times \Bbb{Z} \subseteq \langle (1,1), (0,1)\rangle$. The other inclusion is obvious. To answer your second question: If you want $(m,0)$, then we can have $$(m,0)=m(1,1)+(-m)(0,1).$$
H: How to find number of edges in a graph? Let $G(V,E)$ be an undirected graph: $$V={\{0,1\}}^n$$ $E$: There is an edge between $A$ and $B$ iff $A$ and $B$ differ in exactly one index. For example (when $n=4$, which is the length of each word): there is an edge connecting $0000,0001$ and another edge connecting $0100,0110$, but there is no edge connecting $0000,1010$. I need to find the number of edges in this graph. I proved that each vertex is connected to $n$ other vertices. AI: The number of vertices is $2^n$ and each vertex (according to you) is adjacent to $n$ other vertices. So the total degree is $2^n \,n$. So the number of edges will be half of it, as each edge will be counted twice.
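For small $n$ one can count the edges by brute force in R and compare with the resulting formula $n\,2^{n-1}$:
```
n <- 4
v <- 0:(2^n - 1)
is_pow2 <- function(x) x > 0 && bitwAnd(x, x - 1) == 0   # differ in exactly one bit?
deg_sum <- sum(outer(v, v, Vectorize(function(a, b) is_pow2(bitwXor(a, b)))))
deg_sum / 2      # 32 edges (ordered pairs counted twice)
n * 2^(n - 1)    # 32
```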
H: How to prove that there exists a specific path? Let $G(V,E)$ be an undirected graph. If there exists a vertex $u$ whose degree is odd, then it is connected to another vertex $v$ whose degree is also odd. I tried to prove this by contradiction but reached a dead end; may I get help with this? Note: the statement says there exists one, so I can't choose $v$ and $u$ specifically. AI: I take it that "$u$ is connected to $v$" means that there is a path from $u$ to $v$. Let $C$ be the component of $G$ containing $u$, that is, the largest connected subgraph of $G$ containing $u$. Note that the vertices of $C$, other than $u$ itself, are precisely those vertices $v$ such that there is a path from $u$ to $v$. If the theorem is false, then $C$ is a graph with exactly one vertex of odd degree. What lemma have you learned that shows this is impossible?
H: Linear transformation bounded iff its kernel is closed in infinite-dimensional Banach spaces I am working on Problem 8, Chapter 6, in Luenberger's Optimization by Vector Space Methods. It states: "Show that a linear transformation mapping one Banach space into another is bounded if and only if its nullspace is closed." I am having a bit of trouble with the converse. In particular, if we let $f:X \rightarrow Y$ be a linear transformation, Luenberger doesn't assume that either $X$ or $Y$ be finite-dimensional. Do you have any idea how to proceed? I have thought (without success) of considering the quotient space $\hat{X}/\ker f$. AI: This is not true in general: Let $X$ be any infinite-dimensional Banach space with (Hamel) basis $B$ and let $\{b_n\}_{n\in \mathbb N}$ be a countable subset of $B$. Then define $T:X\to X$ on the basis $B$ by $T(b_n)=nb_n$ and $T(b)=b$ for $b\in B\setminus \{b_n\}_{n\in \mathbb N}$ and extend linearly. Then $T$ is an unbounded operator, with $\ker T=\{0\}$ closed.
H: Question about linear operator on polynomial space $L:\mathbb{R}[x,y]_{\leq 2} \rightarrow \mathbb{R}[x,y]_{\leq 2}$ I'm trying to find the matrix of a linear operator mapping the set of real polynomials in two variables of degree less than or equal to 2 to itself, defined by the following rule: $L(f(x,y)) = (x^2 + 1)\cdot f(1,1) + f(-x+y, 2x) - f(x,y)$ This is stupid, but I'm not sure about the $f(-x + y,2x)$ part: if we take, for example, $L(x)$, should $f(-x +y, 2x)$ evaluate to $-x + y$ or to $-x$ and $y = 0$? In other words, in this expression, should we take $y$ from the vector space as a vector or should it evaluate to $y$ that is on the left hand side, inside the polynomial? AI: If $f(x,y)=x$ then $f(x,y)$ does not depend on the second variable. It just outputs the first coordinate. So $f(-x+y,2x)=-x+y$.
H: How to find the Moment Generating function of a function of random variables from their joint Moment Generating function? Given the joint Moment Generating Function (MGF) of the random vector $X =(Y,Z)$ $$M_{Y,Z}(t_1,t_2) =e^{(t_1^2 + t_2^2 + t_1 t_2)/2} $$ how can I find the MGF of $Y+Z$ and $Y-Z$? Is there any generalisation for the MGF of a function $G(Y,Z)$? The way my textbook has solved this is: The MGF of $Y+Z$ can be obtained by putting $t_1 = t_2 = t $ in the MGF $M_{Y,Z}(t_1,t_2)$. The MGF of $Y-Z$ can be obtained by putting $t_1 =t $ and $ t_2 = -t $ in the MGF $M_{Y,Z}(t_1,t_2)$. I have no idea why this is true. I think I am missing an important result. Please help me. AI: Recall the definition $M_{Y,Z}(t_1, t_2):= E[e^{t_1 Y + t_2 Z}]$. Then, $$M_{Y+Z}(t) = E[e^{t(Y+Z)}] = E[e^{tY + tZ}] = M_{Y,Z}(t, t)$$ $$M_{Y-Z}(t) = E[e^{t(Y-Z)}] = E[e^{tY + (-t)Z}] = M_{Y,Z}(t, -t)$$
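As for the generalisation: the same argument shows that for any linear combination $aY + bZ$ we have $M_{aY+bZ}(t) = E[e^{t(aY+bZ)}] = E[e^{(at)Y + (bt)Z}] = M_{Y,Z}(at, bt)$, which covers both cases above. For a non-linear function $G(Y,Z)$ there is no comparable shortcut, since the joint MGF only encodes expectations of exponentials of linear combinations of $Y$ and $Z$.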
H: Symplectic structure on $\mathbb{S}^{2}$ This question has been asked several times but I cannot find a satisfactory answer. Consider $\mathbb{S}^{2} \subseteq \mathbb{R}^{3}$ and define, for every $p \in \mathbb{S}^{2}$ and every $u,v \in T_{p}\mathbb{S}^{2}$, the $\mathbb{R}$-bilinear form $\omega_{p}(u,v) := \langle p, u \times v \rangle = \det(p,u,v)$. It is clear that the assignment $p \longmapsto \omega_{p}$ defines a symplectic form on $\mathbb{S}^{2}$. What I want to show is that, in cylindrical polar coordinates $(\theta,z)$, $\omega$ can be written as $\omega = d\theta \wedge dz$. I guess that the idea is to write first $\omega$ in the canonical coordinates $(x,y,z)$ in $\mathbb{R}^{3}$ and then do a change of coordinates. But, how can I write $\omega$ in standard coordinates first? Here it is answered, but I don't understand how $\omega$ is written in standard coordinates. AI: We write everything in terms of $(x, y, z)$-coordinates. Let $p = (x, y, z)$ and let $\Omega_p (v, w) = \det (p \ v \ w)$. In these standard coordinates, \begin{align} \frac{\partial}{\partial x} &= (1,0,0), \\ \frac{\partial}{\partial y} &= (0,1,0),\\ \frac{\partial}{\partial z} &= (0,0,1). \end{align} So \begin{align} \Omega_p \left(\frac{\partial}{\partial x} , \frac{\partial}{\partial y}\right) &= \det \begin{bmatrix} x & 1 & 0 \\ y &0 & 1\\ z & 0 & 0\end{bmatrix}=z \end{align} and similarly $$ \Omega_p \left(\frac{\partial}{\partial y} , \frac{\partial}{\partial z}\right) = x, \ \ \ \Omega_p \left(\frac{\partial}{\partial z} , \frac{\partial}{\partial x}\right) = y$$ Thus $$ \Omega = z dx\wedge dy + x dy \wedge dz + y dz\wedge dx.$$
H: Bochner integral in a direct sum of Banach spaces Let $\mathcal{B} = \mathcal{B}_1\oplus\ldots\oplus \mathcal{B}_n$ be a direct sum of Banach spaces $\mathcal{B}_i$ each with norm $\|\cdot\|_{\mathcal{B}_i}$. The Banach space $\mathcal{B}$ has many equivalent norms. For instance, letting $v = (v_1,\ldots,v_n)\in\mathcal{B}, $ $$\|v\|_\infty = \max_{i}\|v_i\|_{\mathcal{B}_i}$$ and $$\|v\|_1 = \|v_1\|_{\mathcal{B}_i} + \ldots + \|v_n\|_{\mathcal{B}_n}.$$ Let $(\Omega,\Sigma,\mathbb{P})$ be a probability space. My question is to do with the Bochner integrability of a measurable function $f:\Omega\rightarrow \mathcal{B}$ of the form $f(\omega) = (f^{(1)}(\omega),\ldots,f^{(n)}(\omega))$. Claim: A function $f:\Omega\rightarrow\mathcal{B}$ is Bochner integrable if and only if $f^{(i)}:\Omega\rightarrow\mathcal{B}_i$ is Bochner integrable for each $i$. Proof: Suppose first that each $f^{(i)}$ is Bochner integrable and so let $s^{(i)}_k$ be the corresponding sequences of simple functions. Then, using $\|\cdot\|_1$ we have $$\lim_{k\rightarrow\infty} \left[\sum_{i=1}^n\int_\Omega\|f^{(i)}-s^{(i)}_k\|_{\mathcal{B}_i}\,\mathrm{d}\mathbb{P}\right] = \lim_{k\rightarrow\infty} \left[\int_\Omega\|f - s_k\|_1\,\mathrm{d}\mathbb{P}\right] = 0,$$ where $s_k = (s^{(1)}_k,\ldots,s^{(n)}_k)$. Thus $f$ is Bochner integrable. Suppose now that $f$ is Bochner integrable and let $s_k$ be the corresponding sequence of simple functions. Then, now using $\|\cdot\|_\infty$, since for each $i$, $0\leq \|f^{(i)}-s_k^{(i)}\|_{\mathcal{B}_i}\leq \|f-s_k\|_\infty$ and by the squeeze theorem, $$ \lim_{k\rightarrow\infty} \left[\int_\Omega\|f - s_k\|_\infty\,\mathrm{d}\mathbb{P}\right] = 0 \implies \lim_{k\rightarrow\infty} \left[\int_\Omega\|f^{(i)} - s^{(i)}_k\|_{\mathcal{B}_i}\,\mathrm{d}\mathbb{P}\right] = 0.$$ Thus each $f_i$ is Bochner integrable. Question: Is this proof valid? In particular, am I allowed to freely interchange the choice of norm for $\mathcal{B}$ since they are equivalent? EDIT: Is the claim also true for a countably infinite product of Banach spaces $\mathcal{B} \subseteq \prod_{i\in \mathbb{N}}\mathcal{B}_i$, where $\mathcal{B}$ consists of elements $v$ such that $\|v\|_\infty$ is finite? Is the claim also true where $\mathcal{B}$ consists of elements $v$ such that $\|v\|_1$ is finite? AI: Yes, you can use either norm in the proof. Note that $\|v\|_{\infty} \leq \|v\|_1\leq n \|v\|_{\infty}$ and $n$ is fixed.
H: Finding the order of $(1, 1) + \langle(2, 2)\rangle$ in the factor group $\mathbb{Z} \times \mathbb{Z} / \langle (2, 2)\rangle$ I read somewhere that the order of $(1, 1) + \langle(2, 2)\rangle$ in the factor group $\mathbb{Z} \times \mathbb{Z} / \langle (2, 2)\rangle$ is $2$. I am trying to understand how this is true. I have: $(1, 1) + (-2, -2) = (-1, -1) \neq (0, 0)$ $(1, 1) + (0, 0) = (1, 1)$ $(1, 1) + (2, 2) = (3, 3)$ $(1, 1) + (4, 4) = (5, 5)$ $(1, 1) + (6, 6) = (7, 7) \neq (0, 0)$ How is the order of $(1, 1) + \langle(2, 2)\rangle$ in the factor group $\mathbb{Z} \times \mathbb{Z} / \langle (2, 2)\rangle$ equal to $2$? AI: Good typesetting, by the way! The calculations you are doing are finding elements of the coset $(1, 1) + \langle(2, 2)\rangle$; but that's not what you're asking. The order of the coset $(1, 1) + \langle(2, 2)\rangle$ (that is, its order in the factor group $(\Bbb Z\times \Bbb Z)/\langle(2, 2)\rangle$) is the smallest positive integer $m$ such that $m$ times $(1, 1) + \langle(2, 2)\rangle$ is equal to the identity element of $(\Bbb Z\times \Bbb Z)/\langle(2, 2)\rangle$. So to answer this question, you'll need to be able to answer these sub-questions: What is the identity element of $(\Bbb Z\times \Bbb Z)/\langle(2, 2)\rangle$? How can you tell in general whether $(a, b) + \langle(2, 2)\rangle$ is equal to this identity element? What is the definition of "$m$ times $(1, 1) + \langle(2, 2)\rangle$"? Is $1\times\big( (1, 1) + \langle(2, 2)\rangle \big)$ equal to the identity element of $(\Bbb Z\times \Bbb Z)/\langle(2, 2)\rangle$? Is $2\times\big( (1, 1) + \langle(2, 2)\rangle \big)$ equal to this identity element? Is $3\times\big( (1, 1) + \langle(2, 2)\rangle \big)$ equal to this identity element? ...
H: Compute integral using Cauchy Principal Value Using the Cauchy Principal Value, I need to compute the following integral $$\int_{-\infty}^\infty\frac{\cos(ax) - \cos(bx)}{x^2}dx$$ I have used the standard semi-circle contour with an indentation around the singularity at $x=0$. However, integrating around the outer semicircle and the smaller one around $0$, I find they have no contribution to the integral, and the residue is also $0$. However, I know the integral is not equal to $0$. Where have I gone wrong? AI: Consider the contour of a semi-circle that avoids the singularity at $(0,0)$ with an indentation of radius $\varepsilon$. Notice that the line integral over the outer arc of radius $R$ tends to $0$ as $R \to \infty$, by Jordan's lemma. Also, consider $\cos{x}=\Re{\left(e^{ix}\right)}$, so we will take the real part of the integral. By Cauchy's 1st theorem you're left with: $$I=\int_{-\infty}^{\infty}\frac{\cos{ax}-\cos{bx}}{x^2} \; dx =-\int_{\pi}^0 \frac{e^{ia \varepsilon e^{i \theta}}-e^{ib \varepsilon e^{i \theta}}}{{\left(\varepsilon e^{i \theta}\right)}^2} \; i\varepsilon e^{i \theta} d\theta$$ Use the Maclaurin series of $e^x$; the constant terms cancel and the higher-order terms vanish as $\varepsilon \to 0$: $$I=i\int_0^{\pi} \frac{ia \varepsilon e^{i \theta}-ib \varepsilon e^{i \theta}}{\varepsilon e^{i \theta}} \; d\theta$$ $$I=-\int_0^{\pi} {a -b } \; d\theta$$ $$I=\Re{\left(\pi \left(b-a\right)\right)}=\boxed{\pi \left(b-a\right)}$$
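Since $|\cos ax - \cos bx|/x^2 \le 2/x^2$ away from the origin and the singularity at $0$ is removable (the integrand tends to $(b^2-a^2)/2$ there), the integral in fact converges absolutely, so a rough numerical check in R is possible; here $a=1$, $b=2$, and the expected value is $\pi$:
```
a <- 1; b <- 2
f <- function(x) ifelse(x == 0, (b^2 - a^2) / 2, (cos(a * x) - cos(b * x)) / x^2)
2 * integrate(f, 0, Inf, subdivisions = 1000L)$value  # ~ 3.14159 = pi * (b - a)
```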
H: If $A,B$ are linear combinations based on common "underlying" random variables, can they still be independent? Apologies if I am just having a mental block and missing something very obvious. Here is a conjecture that I think is obviously true, and yet I cannot prove it: Let $X_1, X_2, \ldots, X_n, Y, Z$ be mutually independent, real-valued, non-constant random variables. (They need not be identically distributed.) Let $A = \sum_{j=1}^n a_j X_j + Y, B = \sum_{j=1}^n b_j X_j + Z$ where all the coefficients $a_j, b_j$ are non-zero. Prove or provide counter-example: For $X_j, Y, Z, A, B$ defined as above, $A, B$ cannot be independent. Further thoughts: If some coefficients are zero, the subset of $X_j$'s that actually "affects" $A$ can be distinct from the subset that actually "affects" $B$, and then $A,B$ can of course be independent. But my statement explicitly rules this out. Also, if instead of summation, we have general functions e.g. $A' = f(\vec{X}) + Y, B' = g(\vec{X}) + Z$, then even if each function $f, g$ must be affected by all components, we can still define them s.t. they are independent and therefore $A', B'$ are independent. However, I am not allowing arbitrary functions, but instead summations (linear combinations). To be clear, the summation is over reals. (I would be curious to see a counter example in a finite field, but that's not my main question, and even so, you cannot have the $+$ in $A$ be in a different field than the $+$ in $B$, so to speak.) AI: The joint distribution of two i.i.d. normal variables $X, Y$ is radially symmetric, so actually $$X \cos \theta + Y \sin \theta, \; - X \sin \theta + Y \cos \theta$$ are independent i.i.d. normal variables, by applying a rotation matrix. It should be clear how to construct a counterexample to your conjecture from here.
H: existence of negative root I am asking whether there exists a non-zero polynomial $p(x)= a_0 + a_1x + a_2x^2 + \cdots + a_nx^n$ with positive integer coefficients ($a_i\in \mathbb N$ for all $i$) such that $p(-\sqrt 2)=0$. AI: The minimal polynomial for $-\sqrt{2}$ over $\mathbb{Z}$ is $x^2-2$. Thus any such polynomial having $-\sqrt{2}$ as a root must be divisible by $x^2-2$. Using the notation from your question, write $p(x)=a_0+a_1x+\ldots+a_nx^n$ where each $a_i\in\mathbb{N}$. Supposing $p(-\sqrt{2})=0$, we have $$ p(x)=q(x)(x^2-2) $$ for some $q(x)\in\mathbb{Z}[x]$. Write $q(x)=b_0+b_1x+\ldots+b_{n-2}x^{n-2}$ where each $b_i\in\mathbb{Z}$. Expanding the product, we have $$ p(x)=(b_0x^2+b_1x^3+\ldots+b_{n-2}x^n)-2(b_0+b_1x+\ldots+b_{n-2}x^{n-2}). $$ By our assumption, all coefficients of $p(x)$ are positive, so the coefficient on any $x^j$ for $2\leq j\leq n-2$ is $b_{j-2}-2b_j$ and satisfies $b_{j-2}-2b_j>0$. This means $b_{j-2}>2b_j$ for each $2\leq j\leq n-2$. The coefficient on $x^n$ is $b_{n-2}$, so $b_{n-2}>0$. This forces every coefficient of $q(x)$ to be positive, and in particular, $b_0>0$. But the constant term of $p(x)$ is $-2b_0$, contradicting our assumption that $p(x)$ had all positive coefficients.
H: $\int_\gamma F \cdot dr$, where $F=x^2i+y^2j+z^2k$ Compute $\int_\gamma F \cdot dr$ where $F=x^2i+y^2j+z^2k$ and $\gamma$ is the intersection of the sphere $x^2+y^2+z^2=a^2$ with the plane $y=z$. I know it sounds quite silly and easy to compute, but the exercise states that the intersection is taken in the first octant, from $(0,0,a)$ to $(\frac{a}{\sqrt{2}}, \frac{a}{\sqrt{2}},0)$, and this confuses me. I used to do this with my eyes closed, but it has been a while since I stopped practising; I know this would be easier with Stokes' theorem. Can you please help me out with a hint or any suggestions? AI: You don't necessarily need Stokes' theorem. A more basic fact would be that $F$ is conservative $$F = \nabla\left(\frac{x^3+y^3+z^3}{3}\right)$$ and the closed loop line integral of any conservative vector field will be $0$ by the fundamental theorem of line integrals.
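For completeness, applying the fundamental theorem of line integrals with the potential above, $\varphi = \frac{x^3+y^3+z^3}{3}$, to the stated (non-closed) arc from $(0,0,a)$ to $(\frac{a}{\sqrt{2}}, \frac{a}{\sqrt{2}},0)$: $$\int_\gamma F \cdot dr = \varphi\left(\tfrac{a}{\sqrt 2},\tfrac{a}{\sqrt 2},0\right) - \varphi(0,0,a) = \frac{2\,(a/\sqrt 2)^3 - a^3}{3} = \frac{(\sqrt 2 - 2)\,a^3}{6}.$$ The integral vanishes only when $\gamma$ is traversed as a full closed loop.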
H: Is there a closed form to this summation? I cannot get a closed form for $\sum\limits_{r=0}^{m} \frac{(m+r)!}{(m-r)!\,(2r)!}$. Does anyone have any idea what it is? AI: This sum is the Fibonacci number $F_{2 m+1}$.
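A quick empirical check in R for the first few values of $m$ (with a small throwaway Fibonacci helper, using the convention $F_1=F_2=1$):
```
fib <- function(n) { a <- 0; b <- 1; for (i in seq_len(n)) { tmp <- a + b; a <- b; b <- tmp }; a }
for (m in 1:6) {
  r <- 0:m
  s <- sum(factorial(m + r) / (factorial(m - r) * factorial(2 * r)))
  cat(m, s, fib(2 * m + 1), "\n")  # the last two columns agree: 2, 5, 13, 34, 89, 233
}
```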
H: Exactly one root of $p_n$ between two consecutive roots of $p_{n+1}$ Let $p_n$ be a polynomial of exactly degree $n$, with positive leading coefficient, and suppose that it has $n$ simple real roots. Let $y_1<\dots <y_{n+1}$ be real (simple) roots of $p_{n+1}$. Assume that $p_n/p_{n+1}$ is decreasing in each interval free of zeros of $p_{n+1}$. Then we must have $\lim_{x\to y_i^{\pm}}p_n(x)/p_{n+1}(x)=\pm \infty$. Why can we conclude from this that $p_n$ has exactly one root between two consecutive roots of $p_{n+1}$? I was thinking about using the Intermediate Value Theorem, but the interval, for example, $(y_1,y_2)$ is not closed. What to do then? AI: Fix $n$ and put $f=p_n/p_{n+1}$. You have observed that $\lim_{x\to y_i^{\pm}}f(x)=\pm \infty $. Hence given $i$, there exists $a>y_i$ and $b<y_{i+1}$ such that $f(a)>0$ and $f(b)<0$. Furthermore, $a$ and $b$ can be chosen such that \begin{align} a-y_i&<\frac{y_{i+1}-y_i}{2}\, \\ y_{i+1}-b&<\frac{y_{i+1}-y_i}{2}\,. \end{align} Then $y_i<a<b<y_{i+1}$. By the intermediate value theorem, $f(r)=0$ for some $r\in(a,b)$. Suppose, for contradiction, that $p_n$ has two distinct roots $r_1<r_2$ between $y_i$ and $y_{i+1}$. Then $f(r_1)=f(r_2)=0$. Since $f$ is decreasing, it must be that $p_n(x)=0$ for all $x\in[r_1,r_2]$, which is impossible.
H: $\sigma$ algebra question So I'm learning a little bit of measure theory currently, and I am a little in the dark as to the motivation for the definition of the $\sigma$ algebra. My biggest question is why the $\sigma$ algebra must be used rather than the power set (since a $\sigma$ algebra is a proper subset of the power set, right?). I had read somewhere that it has something to do with the Banach-Tarski paradox, but I am unsure why if that is the case. Can someone help shed some light on this for me? Thanks! AI: The relevant theorem about using the whole power set rather than some smaller $\sigma$-algebra is the following. There is no function $\mu$ assigning to every subset $A$ of $\mathbb R$ an extended-real number $\mu(A)\in[0,\infty]$ such that: (1) For any interval $[a,b]$ (for $a<b$ in $\mathbb R$), we have $\mu([a,b])=b-a$ (so $\mu$ agrees with our usual notion of length when the latter is available), (2) For any $A\subseteq\mathbb R$ and any $t\in\mathbb R$, we have $\mu(\{a+t:a\in A\})=\mu(A)$ (i.e., $\mu$ is invariant under translations), and (3) If $A_n$ (for $n\in\mathbb N$) are pairwise disjoint, then $\mu(\bigcup_nA_n)=\sum_n\mu(A_n)$. In other words, there is no "reasonable" measure defined on the whole power set of $\mathbb R$. There is, however, a reasonable measure defined on the "nice" subsets of $\mathbb R$, and the meaning of "nice" here is broad enough to encompass the sets people (or at least analysts and probabilists) normally need to work with. The idea is to use (1) above to define $\mu$ on intervals and then use (3) (and its consequence that, if $A\subseteq B$ then $\mu(A)\leq\mu(B)$) to extend $\mu$ to lots of other sets. Roughly speaking, the sets that get assigned a measure this way are the Lebesgue measurable sets, and they form a $\sigma$-algebra. It turns out that lots of other situations lead to measure functions defined on $\sigma$-algebras. For example, if you had a material object with density $\rho$ varying in space, then there would be a measure, as described above, except it's defined on "nice" subsets of space $\mathbb R^3$ and, in place of (1) you'd have $\mu$ agreeing with ordinary mass on rectangular boxes. Similarly for probability densities. So the $\sigma$-algebra context is broad enough to cover lots of important situations. On the other hand, the $\sigma$-algebra context is narrow enough to allow proofs of useful theorems. Specifically, this context allows us to impose requirement (3) above, countable additivity, on measures, and that requirement plays a role in numerous proofs.
H: Doubts about a proof of the existence of a lower bound on $\varphi(n)$ I recently attempted to solve the following question (exercise 0.2.10 from Dummit & Foote - Abstract Algebra): Prove for any given positive integer $N$ there exist only finitely many integers $n$ with $\varphi(n) = N$, where $\varphi$ denotes Euler’s $\varphi$-function. Conclude in particular that $\varphi(n)$ tends to infinity as $n$ tends to infinity. I ended up with something much like these: Is there an integer $N>0$ such that $\varphi(n) = N$ has infinitely many solutions? Prove for any given positive integer $N$ there exist only finitely many integers $n$ with $\varphi(n)=N$ These show that, for any given $N$, there are finitely many $n$ such that $\varphi(n) = N$. I am OK with this. However, the question also asks whether $\varphi(n)$ tends to infinity as $n$ does; 2. attempts to conclude that from 1, and that doesn't feel right to me. Let me explain: All answers reach a point where for $\varphi(n) = N$, then $n$ must be a product of finitely many bounded primes, each raised to a bounded exponent. However, nothing is said about a lower bound on such $n$, and thus it seems wrong to conclude that $N$ grows with $n$; what if $N$ oscillates without a lower bound as $n$ grows bigger? The argument doesn't seem to address this possibility. I know that $\varphi(n)$ is in fact bounded below, and many good answers can be found here: Is the Euler phi function bounded below? What I ask, then, is whether that conclusion is false, needs some tinkering to work, or is fine and I missed something. AI: The conclusion is fine. From the observation that the solution set of $\varphi(x) = N$ is finite for each fixed $N$, it follows that $\varphi$ does not "oscillate" like that. The proof follows, so skip the rest if you'd like to work it out yourself! Recall that the sequence $\varphi$ tends to infinity precisely if for any $M \in \mathbb{R}$ we can find an integer $K$ such that for all $n > K$, $\varphi(n) > M$. So take any number $M$. There are finitely many integers $i$ that lie between $0$ and $M$. We know by our observation that for each of these finitely many integers $i$ the solution set of the equation $\varphi(x) = i$ is itself finite. But the union of finitely many finite sets is still finite, so there are finitely many solutions to $\varphi(x) \leq M$. The finite set of solutions has a maximal element, denote it $K$. Now for all $n > K$, $n$ is not a solution to $\varphi(n) \leq M$, so in fact we have $\varphi(n) > M$.
H: Do all homogeneous systems with non-trivial solutions have columns of zeros? I'm trying to think about this problem I'm faced with. My peer stated that a non-trivial homogeneous system (which is square) has a column/row of zeros, but I'm trying to make sense of that. It's pretty mind-boggling at the moment. Can anyone help? AI: Suppose we have a homogeneous system with $n$ equations and $n$ unknowns. What this represents is the system $Ax = 0$ for some square matrix $A$. To say that there is a non-trivial solution to this system means that there is a nonzero vector $x$ such that $Ax = 0$. That is, the null space of $A$ has a nonzero vector, and hence it has dimension at least 1. By the rank-nullity theorem, the rank of the matrix is strictly less than the number of columns. But this corresponds to saying that the row-reduced echelon form of the matrix has at most $n - 1$ pivot columns. Hence there is at least one zero row. However, it is not necessarily the case that we always have a zero column. Consider $\begin{pmatrix}1&1\\ 1&1\end{pmatrix}$ which has RREF of $\begin{pmatrix}1&1\\ 0&0\end{pmatrix}$. This has no zero column, but it has a non-trivial solution, e.g. $(1,-1)$.
H: Proof of ∀(x,y)∈R[|x-y| ≥ |x| - |y|] Is the following proof correct? To prove: ∀(x,y)∈R[|x-y| ≥ |x| - |y|]; where R is the set of real numbers. Proof: Lemma: ∀(x,y)∈R[|x+y| ≤ |x| + |y|] Since x and y are arbitrary real numbers we have, ∀(x,y)∈R[|x+(-y)| ≤ |x| + |-y|] Since |y| = |-y|, |x - y| ≤ |x| + |y| ⇔ -|x - y| ≥ -|x| - |y|⇒|x - y| ≥ -|x| - |y| ⇔|x - y| + |y| ≥ -|x| Applying the Lemma we get, |x - y| + |y| ≥ |x - y + y| = |x| Therefore, |x - y| ≥ |x| - |y| This concludes the proof. AI: Or more simply: Proof. Applying the lemma we get $|x| = |(x-y)+y| \leq |x-y| + |y|$, and then adding $-|y|$ to both sides we get the desired result. $\blacksquare$
H: Does *smooth manifold* imply unique tangent space at each point? The paper "A micro Lie theory for state estimation in robotics" speaks of the space tangent to $M$ at $X$, which we denote $T_X M$, and claims: The smoothness of the manifold, i.e., the absence of edges or spikes, implies the existence of a unique tangent space at each point. I'm wondering why the smoothness of the manifold implies the uniqueness of the tangent space. The confusing part is: what about a flat two-dimensional surface (a plane) in Euclidean space? The wiki says that such a surface is a two-dimensional manifold. But as far as I can tell, the tangent spaces at its points all coincide with each other. Can someone explain this? Update: In my opinion, any unstructured object is a manifold. In elementary math, we are dealing with structured objects which are exactly what we see everyday in life, e.g. boxes, cups, bands. But as the level of abstraction grows as we dive into math, there's no way to handle the upcoming problems if we insist on structured objects. Hence, we define a "manifold" as something that locally resembles Euclidean space. With this characteristic, we're able to apply the rules limited to structured objects, though with some generalization and modification, to everything we see in the math world, without loss of rigour. AI: The tangent spaces are "unique" insofar as the things they contain can be considered different. With regards to a plane (which is a two-dimensional manifold), the tangent space at every point certainly "seems" the same; and if we are to think of it as the tangent plane at that point of the manifold, it coincides entirely. However, the typical definition of a "tangent vector" at a point is dependent on the point itself, so the different tangent spaces contain "different things" in an abstract sense. Indeed, for a smooth $n$-dimensional manifold, the tangent spaces at any two points will be isomorphic; but they are not isomorphic in any "important" or "useful" way, and it generally does not make sense to suppose they are the same thing. Intuitively, you might be able to understand this by thinking about the statement that locally, a point's tangent space doesn't care about the "rotation" nor "position" of its tangent plane; thus, the tangent spaces of two points on a plane, considered as a surface, are no more "equal" than the tangent spaces of two points on a $2$-sphere, or two points on any other surface you might care to name. Immediately after the sentence you mention, the paper goes on to point out that The structure of such tangent spaces is the same everywhere. i.e. the tangent spaces at all points are isomorphic; so it's clear that when the paper says the tangent spaces are "unique" they are talking about something else, and I suspect it is something along the above lines. To summarise, while the use of the word "unique" is perhaps a little confusing, it's definitely correct under the authors' intended interpretation.
H: How to find this discrete limit? While proving some discrete Hardy-type inequalities I tried to prove the following limit for non-negative sequence $a(n)$ and $p>1$ $$\lim_{p\rightarrow1}\frac{1}{p-1}\left[\sum_{n=1}^{r}a(n)^{p}-\left(\sum_{n=1}^{r}a(n)\right)^{p}\right]=\sum_{n=1}^{r}a( n)\log \frac{a( n) }{\sum_{n=1}^{r}a( n)}$$ Since the limit is $\frac{0}{0}$, I tried to use Stolz–Cesàro Theorem but could not reach the result. I will be very appreciative of any help. AI: For convenience, it is not hard to note that the equality is preserved when multiplying all elements of $a(n)$ by a fixed positive $\lambda$; thus, we can prove the result for sequences such that $\sum_{n=1}^{r} a(n) = 1$, and the result will follow for all other sequences. (This doesn't handle the case that $a$ is the zero sequence, but I assume that is impossible, due to the $\sum_{n=1}^{r} a(n)$ on the denominator of a fraction. And if one defines the RHS by continuity for this case, then the equality holds anyway.) Now, in fact, this can be tackled directly by L'Hôpital's. We have all the necessary conditions. $$ \lim_{p \to 1} \frac{\left(\sum_{n=1}^{r} a(n)^p\right) - 1^p}{p-1} = \lim_{p \to 1} \frac{\sum_{n=1}^{r} \log (a(n)) a(n)^p}{1} = \sum_{n=1}^{r} a(n)\log(a(n)). $$
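One can sanity-check the identity numerically in R with a random nonnegative sequence and $p$ close to $1$:
```
set.seed(1)
a <- runif(5)          # a random nonnegative sequence a(1), ..., a(r)
p <- 1 + 1e-7
lhs <- (sum(a^p) - sum(a)^p) / (p - 1)
rhs <- sum(a * log(a / sum(a)))
c(lhs, rhs)            # the two values agree to several digits
```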
H: A question about a localization of a graded ring Let $R=\oplus_{i\in\mathbb{Z}} R_i$ be a (commutative) graded ring of type $\mathbb{Z}$. It can be shown that if $S$ is a multiplicative set consists of homogeneous elements, $R_S$ have a natural grading structure of type $\mathbb{Z}$. My question is: If $\mathfrak{p}\in \mathrm{Spec}(R)$ (possibly not homogeneous), then is it true that $(R_S, \mathfrak{p}R_S)$ is a local ring, where $S=\{F\in R \mid F$ is homogeneous and $F\not\in \mathfrak{p}\}$? I know that the subring (of degree $0$) $R_{S,0} \subset R_S$ is local, and if a graded ring $A$ is local then the subring (of degree $0$) $A_0 \subset A$ is local, too. AI: This is false. Let $R=k[x]$ for $k$ a field and let $\mathfrak{p}=(x)$. Then there are no nonconstant homogeneous elements not in $(x)$, so $S=k^\times$ and $R_S=k[x]$ which is not local.
H: A trivial question about continuity $f:\mathbb{R}\longrightarrow\mathbb{R}:x\mapsto x^2$ This function is continuous, as we all know: for every point in the domain we will always be able to draw a $\delta\epsilon$-rectangle, for every $\epsilon$, which captures every point of $f(x)$ if it captures $x$. When I first started looking at continuity I thought it exists to help clarify whether a curve has "holes", or points where the curve diverges. It made sense to me, but then I thought: what does it mean to be pointwise continuous? $h:\lbrace 1,2,3 \rbrace\longrightarrow\mathbb{R}:x\mapsto x^2$ Is this function also continuous? Why shouldn't it be? I mean, if I can pointwise look at a curve and take all the points out of the domain where it is discontinuous, the curve will be continuous. But I could also take a lot of points where the function is continuous out of the domain, and it should stay continuous, even if I leave the function only with a set of, for example, 3 elements in the domain. Is this correct? And if yes, what is the advantage of stating that a function like $h$ is still continuous, if it has nothing left to do with curves? So why do we need this property? Shouldn't it also have some value for $h$ to be continuous? I know this question is really basic, and stupid, but it really interests me to understand what "pointwise continuity" really means for a function. Thank you AI: So, we need to be careful. Continuity is first and foremost a topological term. In the category of topological spaces, continuous maps are the morphisms and topological spaces the corresponding objects. That means, in order to be able to talk about continuity, we need to make sure the spaces we are talking about are in fact topological spaces. The definition of continuity in the language of topological spaces is the following: Let $X,Y$ be topological spaces with topologies $\mathcal{T}_X$ and $\mathcal{T}_Y$ respectively. A map $f:X\to Y$ is continuous, if for every open set $V\subset Y$ in $\mathcal{T}_Y$, the set $f^{-1}(V) \subset X$ is open in $\mathcal{T}_X$. So you notice, continuity does not "come out of thin air", but depends on the given topological structure the corresponding sets admit. Given your example of $\{1,2,3\}$ we can actually define a topology $\mathcal{T}$ on it by declaring its open sets to be $\{1,2,3\}, \varnothing, \{1\}, \{2\},\{3\}$ and any union of these elements. To sum it up we get $$\mathcal{T} = \{\{1,2,3\}, \varnothing, \{1\}, \{2\},\{3\},\{1,2\},\{1,3\},\{2,3\}\}$$ Now we can actually use this topology $\mathcal{T}$ on $\{1,2,3\}$ together with your map $f:\{1,2,3\}\to \mathbb{R}$ to induce a topology on $\mathbb{R}$ (called the final topology) via $$\mathcal{T}_\mathbb{R} = \{ U \subset \mathbb{R} \mid f^{-1}(U) \in \mathcal{T}\}$$ What we are doing here is, we define the open subsets of $\mathbb{R}$ to be exactly those subsets for which $f^{-1}(U)$ is an element of $\mathcal{T}$, i.e. of the given topology on $\{1,2,3\}$. What do the open sets of $\mathbb{R}$ with respect to $\mathcal{T}_\mathbb{R}$ look like? Since $\mathcal{T}$ here is the full power set of $\{1,2,3\}$ (the discrete topology), $f^{-1}(U)\in\mathcal{T}$ holds for every $U\subseteq\mathbb{R}$, so $\mathcal{T}_\mathbb{R}$ is in fact the discrete topology on $\mathbb{R}$. For our map $$f\colon \{1,2,3\} \to \mathbb{R},\ x \mapsto x^2$$ the open sets that actually matter are $$\{ \mathbb{R}, \varnothing,\{1\},\{4\},\{9\},\{1,4\},\{1,9\},\{4,9\},\{1,4,9\} \}$$ Now by construction, your map $f$ is actually continuous on $\{1,2,3\}$! That's simply because we built the topology on $\mathbb{R}$ in exactly the way that the preimage of $f$ for every declared open subset of $\mathbb{R}$ is an open subset of $\{1,2,3\}$.
Now you can't draw your function $f$ in the same way you would expect it to look if we had a function $f:\mathbb{R} \to \mathbb{R}$. But any visualization is fine, as long as it reflects what's actually happening, namely that the preimage of every open subset in $\mathcal{T}_\mathbb{R}$ is an element of $\mathcal{T}$. Do not hesitate to ask questions if you have any.
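Since both topologies here are finite, the continuity claim can be checked mechanically. Below is a minimal Python sketch (the names are my own); it represents each open set of $\mathbb{R}$ by its intersection with the image $\{1,4,9\}$, which is all that matters when computing preimages, and verifies that every preimage is open in the discrete topology on $\{1,2,3\}$.

```python
from itertools import combinations

def powerset(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return {frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)}

X = {1, 2, 3}                     # domain
f = lambda x: x ** 2              # the map x -> x^2

T_X = powerset(X)                 # discrete topology: every subset is open

# Open sets of R, represented by their intersection with the image {1, 4, 9};
# e.g. the whole of R corresponds to {1, 4, 9} and the empty set to {}.
T_R = powerset({f(x) for x in X})

def preimage(V):
    return frozenset(x for x in X if f(x) in V)

# f is continuous iff the preimage of every open set is open.
assert all(preimage(V) in T_X for V in T_R)
print("f is continuous")
```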
H: How to show function property holds for integers I want to show that for all integers $x$ greater than 1, $$f(x)=\left\lfloor{\frac{4x^2}{2x-1}-\left\lfloor{\frac{4x^2-4x}{2x-1}}\right\rfloor}\right\rfloor=3.$$ Upon graphing $f$, it's clear that this is probably true. I considered a monotonicity argument, but I'm pretty sure that's not going to fly here. AI: Let's start with the inner floor. $$ \left\lfloor\frac{4x^2-4x}{2x-1}\right\rfloor $$ Using long division, we can reduce this as much as possible: $$ \left\lfloor2x-1-\frac1{2x-1}\right\rfloor $$ Now, since $x$ is a positive integer, $2x-1$ will also be an integer, so it has no effect on the rounding from applying the floor function. So we have $$ 2x-1+\left\lfloor-\frac1{2x-1}\right\rfloor $$ Now, the part inside the floor lies in $[-1,0)$ for all positive integers $x$, so its floor is just $-1$. So, for integers, the inner floor is just $2x-2$. So the whole thing is now $$ \left\lfloor{\frac{4x^2}{2x-1}-2x+2}\right\rfloor. $$ Again, using long division, reduce the fraction as much as possible: $$ \left\lfloor{2x+1+\frac1{2x-1}-2x+2}\right\rfloor $$ $$ \left\lfloor{3+\frac1{2x-1}}\right\rfloor $$ $$ 3+\left\lfloor{\frac1{2x-1}}\right\rfloor $$ And finally, the fraction is clearly between $0$ and $1$ for all integers greater than one, so its floor is just $0$, and we are left with $3$.
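As a sanity check, here is a short Python snippet (my own, using exact rational arithmetic so floating-point rounding cannot interfere with the floors) confirming $f(x)=3$ over a range of integers $x>1$:

```python
from fractions import Fraction
from math import floor

def f(x):
    inner = (4 * x**2 - 4 * x) // (2 * x - 1)            # exact floor division
    return floor(Fraction(4 * x**2, 2 * x - 1) - inner)  # outer floor, exact

assert all(f(x) == 3 for x in range(2, 10_000))
print("f(x) == 3 for all integers 1 < x < 10000")
```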
H: Proving ${ \left\{\sum \left( ab+{b}^{2}+{c}^{2}+ac \right)\right\} }^{4}\geq 27\,{ \sum} ( ab+{b}^{2}+{c}^{2}+ac ) ^{3} ( c+a) ( a+b) $ For $a,b,c>0$, prove$:$ $$ \left\{ \sum\limits_{cyc} \left( ab+{b}^{2}+{c}^{2}+ac \right) \right\}^{4}\geq 27\,{ \sum\limits_{cyc}} \left( ab+{b}^{2}+{c}^{2}+ac \right) ^{3} \left( c+a \right) \left( a+b \right) $$ I found an SOS proof for it, but it is very ugly. We have$:$ $$\text{LHS}-\text{RHS}=\sum\limits_{cyc} f(a,b,c) (a-b)^2 \geq 0$$ where $$\begin{align*} f(a,b,c)&=8\,{a}^{6}+26\,{a}^{5}b+96\,{a}^{4}{b}^{2}+20\,{a}^{4}bc+152\,{a}^{3}{ b}^{3}+130\,{a}^{3}{b}^{2}c\\ &\quad +96\,{a}^{2}{b}^{4}+130\,{a}^{2}{b}^{3}c+ 106\,{a}^{2}{b}^{2}{c}^{2}+100\,{a}^{2}{c}^{4}\\ &\quad +26\,a{b}^{5}+20\,a{b}^{ 4}c+278\,ab{c}^{4}+8\,{b}^{6}+100\,{b}^{2}{c}^{4} \\ & \geq 0\end{align*} $$ I am hoping for an alternative solution that does not use $uvw$. Thanks! AI: Fully expanding gives $$\sum_{sym}(4a^8+5a^7b+26a^6b^2-7a^5b^3+22a^4b^4+10a^6bc+45a^5b^2c-16a^4b^3c-36a^4b^2c^2-53a^3b^3c^2)\geq0,$$ which is true by Muirhead.
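Before hunting for a nicer proof, the inequality can at least be stress-tested numerically. The Python sketch below is my own; the cyclic convention $a\to b\to c\to a$ for $\sum_{cyc}$ is my reading of the statement. Equality holds at $a=b=c$, hence the small relative tolerance in the assertion.

```python
import random

def sides(a, b, c):
    # cyclic terms under a -> b -> c -> a, paired with (c+a)(a+b) cycled the same way
    terms = [(a*b + b**2 + c**2 + a*c, (c + a) * (a + b)),
             (b*c + c**2 + a**2 + b*a, (a + b) * (b + c)),
             (c*a + a**2 + b**2 + c*b, (b + c) * (c + a))]
    lhs = sum(t for t, _ in terms) ** 4
    rhs = 27 * sum(t**3 * p for t, p in terms)
    return lhs, rhs

random.seed(0)
for _ in range(100_000):
    a, b, c = (random.uniform(1e-3, 10.0) for _ in range(3))
    lhs, rhs = sides(a, b, c)
    assert lhs >= rhs * (1 - 1e-9)
print("LHS >= RHS on all random samples")
```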
H: Factor a quadratic when the leading coefficient is not equal to 1 and you can't factor by grouping? If one has a quadratic, for example $5x^2-10x-2$, whose real roots via the quadratic formula are $(5\pm \sqrt{35})/5$, can one find its factored form? As I understand it, $5x^2-10x-2\ne(x-\text{root1})(x-\text{root2})$. How can you turn this type of quadratic into factored form? AI: Take the leading coefficient (in this quadratic it is $5$) and multiply it by $(x-\text{root}_1)(x-\text{root}_2)$, where $\text{root}_1$ and $\text{root}_2$ are the two roots of the quadratic (here $\frac{5+\sqrt{35}}{5}$ and $\frac{5-\sqrt{35}}{5}$). The factored form is: $5\left(x-\frac{5+\sqrt{35}}{5}\right)\left(x-\frac{5-\sqrt{35}}{5}\right)$
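A quick symbolic check (a sketch assuming SymPy is available) confirms that multiplying the root factors by the leading coefficient recovers the original quadratic:

```python
import sympy as sp

x = sp.symbols('x')
p = 5 * x**2 - 10 * x - 2

r1, r2 = sp.solve(p, x)             # the roots (5 -+ sqrt(35))/5
factored = 5 * (x - r1) * (x - r2)  # leading coefficient times root factors

assert sp.expand(factored - p) == 0
print(factored)
```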
H: "Partition" without the disjoint condition A partition of a set $A$ is defined as a set of pairwise disjoint sets whose union is $A$. I'm interested in a related concept, where for a set $A$ you have $Q = \{A_1 \ldots A_n\}$ such that union of all $A_i$ is $A$ but $A_i$ needn't be pairwise disjoint. I'm looking for standard term for this concept so I can look up further literature on it. The term will read as "$Q$ is the superpartition for $A$", with superpartition being replaced by the actual term. AI: Perhaps cover (or covering) For example, "open cover" is used in topology (not to be confused with covering as a projection in topology) and the term "covering system" in elementary number theory. I don't think it's standard in a purely elementary set theory context, but it's already effectively used this way in two places in math so it shouldn't be too great a stretch to generalize.
H: Differentiability of $\cos \lvert x\rvert$ I know that $f(x) = \cos\lvert x\rvert$ is differentiable at $x=0$ and I know what its graph looks like. But if I differentiate $f(x)$ with respect to $x$, I will have to apply the chain rule, i.e., $\frac {df(x)} {dx} = -\sin\lvert x\rvert\cdot \frac {d\lvert x\rvert} {dx}$. But $\lvert x\rvert$ is not differentiable at $x= 0$, which would make $f(x)$ non-differentiable. So, where did I go wrong? Thanks AI: The chain rule asserts that if $f$ and $g$ are both differentiable functions, then $(f \circ g)' = (f' \circ g)g'$. In your case, $g(x) = |x|$ is not differentiable, hence the chain rule does not apply. This, however, does not imply that the original function is not differentiable. Indeed, since cosine is even, $\cos\lvert x\rvert = \cos x$ for all $x$, so $f$ is differentiable everywhere with $f'(x) = -\sin x$.
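A numerical illustration of the point (my own sketch): both one-sided difference quotients of $\cos\lvert x\rvert$ at $0$ tend to $-\sin 0 = 0$, so the derivative at $0$ exists even though $\lvert x\rvert$ is not differentiable there.

```python
import math

f = lambda x: math.cos(abs(x))

# One-sided difference quotients at 0 from both directions.
for h in (1e-1, -1e-1, 1e-4, -1e-4, 1e-7, -1e-7):
    print(h, (f(h) - f(0)) / h)   # both sequences approach 0
```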
H: Find the number of series with a certain condition Q: Count the sequences $a_1a_2a_3 \dots a_n$ of length $n$ with each $a_i \in \{0,1,2,3\}$ and no occurrence of $3$ to the right of a $0$. Meaning: no $i,j \in \mathbb{N}$ exist such that $1 \leq i < j \leq n$ and $a_i = 0$, $a_j = 3$. I tried to approach this combinatorially, by drawing $ \_ ,\_, \dots\_ $ and filling in what I know. Starting with a $3$ in the first slot, we cannot use $0$ anywhere else, so the number of options is $3^{n-1}$. We do this for each slot (moving one position forward and repeating the whole thing), and what I get is $\sum_1^n 3^{n-1} = 3^{n-1} \cdot n$. However, I am not at all sure this is the solution; I can't seem to see the trick here... thank you! AI: Suppose that altogether we have $j\leq n$ $0$'s and $3$'s. There are $\binom nj$ places we can choose to place them in the sequence, and only one order they can go in: first all the $0$'s, then all the $3$'s. There can be from $0$ to $j$ $0$'s, so $j+1$ possibilities. Now each of the remaining $n-j$ places can be occupied by a $1$ or a $2$, so we have $2^{n-j}$ ways to complete the sequence. Altogether the number of admissible sequences is $$\begin{align} S:&=\sum_{j=0}^n (j+1)2^{n-j}\binom nj\\ &=2^n\sum_{j=0}^n j2^{-j}\binom nj+2^n\sum_{j=0}^n 2^{-j}\binom nj\tag1 \end{align}$$ Recall the binomial theorem: $$(1+x)^n=\sum_{j=0}^n\binom nj x^j\tag2$$ Differentiating both sides of $(2)$, we have $$ n(1+x)^{n-1}=\sum_{j=0}^n j\binom nj x^{j-1}\tag3$$ Now if we set $x=\frac12$, $(2)$ shows that the second term of $(1)$ is $$2^n\left(1+\frac12\right)^n=3^n,$$ and $(3)$ shows that the first term of $(1)$ is $$n\cdot2^{n-1}\left(1+\frac12\right)^{n-1}=n\cdot3^{n-1}$$ so the final answer is $$\boxed{(n+3)3^{n-1}}$$ I must say I couldn't follow your calculation, but it seems like you may just have missed the sequences with no $3$'s at all. EDIT Now that I know the answer, I see a better way of doing it. There are $n$ places the last $0$ in the sequence can be. Any number before it can be $0,1,\text{ or }2$, and any number after it can be $1,2,\text{ or }3$, which gives $n\cdot3^{n-1}$ sequences containing a $0$, and of course there are $3^n$ admissible sequences with no $0$. Your argument must be similar, except that you are considering the first $3$ instead of the last $0$.
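The closed form is easy to confirm by brute force for small $n$; here is a short Python check (my own) that enumerates all $4^n$ sequences and filters on the forbidden pattern:

```python
from itertools import product

def admissible(seq):
    seen_zero = False
    for a in seq:
        if a == 0:
            seen_zero = True
        elif a == 3 and seen_zero:   # a 3 to the right of some 0
            return False
    return True

for n in range(1, 9):
    count = sum(admissible(s) for s in product(range(4), repeat=n))
    assert count == (n + 3) * 3 ** (n - 1)
print("brute force matches (n+3)*3^(n-1) for n = 1..8")
```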
H: Closed form solution to recurrence: $g(n)=(k-2)g(n-1)+(k-1)g(n-2)$ For the number of paths with exactly $n$ hops from one node to another in a $k$-node fully connected graph, we get the following recurrence: $$g_n=(k-2)g_{n-1}+(k-1)g_{n-2}$$ with $g_1=1$, $g_0=0$ and $g_m=0 \;\; \forall \;\; m<0$. Is there a way to get a closed form for this recurrence? For $k=3$, this yields the Jacobsthal numbers. For future reference, this recurrence counts the paths from one node to another in a $k$-node fully connected undirected graph and has the solution: $$g(n) = \frac{(k-1)^n-(-1)^n}{k}$$ My attempt: Let's try to construct the generating function. $$B(x)=\sum\limits_{n=0}^\infty g_n x^n$$ We get: $$B(x) = \sum\limits_{n=0}^\infty ((k-2)g_{n-1}+(k-1)g_{n-2})x^n$$ $$\implies B(x) = (k-2)x\sum\limits_{n=0}^\infty g_{n-1}x^{n-1}+(k-1)x^2\sum\limits_{n=0}^\infty g_{n-2}x^{n-2}$$ $$=(k-2)x B(x)+(k-1)x^2 B(x)$$ But this just makes $B(x)$ cancel out, and we don't get an expression for it. AI: The issue is that your attempt to find the generating function doesn't actually make use of the initial conditions $g_0 = 0$, $g_1 = 1$ at all; you need to encode these somehow into the expression to actually get a useful result. (Also, the recurrence doesn't even hold for $g_1$, so we need to distinguish that case at the very least.) In particular, we can write $$ B(x) = 0 \cdot x^0 + 1 \cdot x^1 + \sum_{n=2}^{\infty} ((k-2)g_{n-1} + (k-1)g_{n-2}) x^n. $$ Can you take it from here? Alternatively, instead of using generating functions, just construct the characteristic polynomial $\lambda^2 - (k-2) \lambda - (k-1) = 0$, which factorises as $(\lambda - (k-1))(\lambda + 1)=0$. So we can write the general form as $$g_n = A\cdot (k-1)^n + B \cdot (-1)^n,$$ from which you can use your initial conditions to finish.
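Here is a small Python check (my own sketch) that the stated closed form agrees with the recurrence. Note $(k-1)^n \equiv (-1)^n \pmod k$, so the integer division below is exact.

```python
def g_closed(n, k):
    return ((k - 1) ** n - (-1) ** n) // k   # exact: k divides the numerator

for k in range(3, 8):
    g = [0, 1]                               # g_0 = 0, g_1 = 1
    for n in range(2, 20):
        g.append((k - 2) * g[n - 1] + (k - 1) * g[n - 2])
    assert all(g[n] == g_closed(n, k) for n in range(20))
print("closed form matches the recurrence for k = 3..7")
```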
H: Combinatorics Problem : Additional Clause A firm needs to obtain 5 van loads of mineral. The five vans available can go to any of 11 places of mineral. The basic question is how many possible ways this can be achieved. The 11 places all have different kinds of elements in the mineral, so it always matters how many van loads come from which place. Find the number of ways in each case. 1) It does not matter which van brings which kind of mineral, and any place can be used by any number of vans. --> $\binom{15}{5}$, since the number of unordered selections with repetition is $\binom{n+r-1}{r}$. 2) The conditions above hold, but additionally one of the places may be used at most twice due to lack of supply (the other places are unrestricted). I'm not sure how to proceed with the additional clause. AI: For positive integers $n,r$, by the Stars-and-Bars formula, there are exactly $$\binom{r+n-1}{n-1}$$ $n$-tuples $(x_1,...,x_n)$ of nonnegative integers satisfying $$x_1+\cdots +x_n=r$$ For $i\in\{1,...,11\}$, let $x_i$ be the number of vans which load at the $i$-th supply source. Then for the first question we get the equation $$x_1+\cdots +x_{11}=5$$ so by the Stars-and-Bars formula, the number of solutions is $$ \binom{5+11-1}{11-1}=\binom{15}{10}=\binom{15}{5} $$ matching your answer. For the second question, we can just subtract the count of those solutions with $x_1\ge 3$. Writing $x_1=3+y_1$, we get the equation $$y_1+(x_2+\cdots +x_{11})=2$$ so by the Stars-and-Bars formula, the number of solutions with $x_1\ge 3$ is $$ \binom{2+11-1}{11-1}=\binom{12}{10}=\binom{12}{2} $$ hence the number of solutions with $x_1\le 2$ is $$ \binom{15}{5}-\binom{12}{2} $$
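Both counts are small enough to verify by enumeration. A Python sketch (my own; `math.comb` requires Python 3.8+) that lists all multisets of 5 loads over 11 places:

```python
from itertools import combinations_with_replacement
from math import comb

places, vans = 11, 5
choices = list(combinations_with_replacement(range(places), vans))

assert len(choices) == comb(15, 5)                       # part 1

restricted = [c for c in choices if c.count(0) <= 2]     # place 0: at most 2 loads
assert len(restricted) == comb(15, 5) - comb(12, 2)      # part 2
print(len(choices), len(restricted))                     # 3003 2937
```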
H: Does a smooth function exist? Let $f: \mathbb R \rightarrow [0,1]$ with $f(0)=1$, $f(1)=0$ be a differentiable function such that $f^{(n)}(0)$ and $f^{(n)}(1)$ exist for all $n \geq 1$ and are equal to $0$ (here $f^{(n)}$ is the $n$-th derivative). Can such an $f$ be smooth on $[0,1]$? I tried modelling the function on $\sin$, $\cos$, or the exponential, but I couldn't find a solution. Maybe I need more advanced math topics to solve it; still, I would like to know how one would do it. AI: These can be constructed using bump functions. Define $$\phi(x)=\cases{e^{-1/x}&if $x>0$,\\0&if $x\leq 0$.}$$ Then $\phi$ is smooth and all its derivatives vanish at $0$. Now set $$\psi(x)=\phi(x)\phi(1-x).$$ Then $\psi$ is smooth, all its derivatives are zero at $0$ and $1$, $\psi(x)>0$ on $(0,1)$, and $\psi$ is zero elsewhere. Define $$\chi(x)=\int_0^x\psi(t)\,dt.$$ Then $\chi$ is smooth, non-decreasing, zero for $x\le0$, and a positive constant for $x\ge1$. Finally, define $$f(x)=1-\frac{\chi(x)}{\chi(1)}.$$ Then $f$ does what you want.
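For intuition, the construction is easy to evaluate numerically. Below is a sketch of my own, assuming SciPy's `quad` is available for the integral defining $\chi$; by the symmetry of $\psi$ about $x=\tfrac12$, we expect $f(1/2)=\tfrac12$.

```python
import math
from scipy.integrate import quad

def phi(x):
    return math.exp(-1.0 / x) if x > 0 else 0.0

def psi(x):
    # smooth bump: positive on (0, 1), zero outside,
    # with all derivatives vanishing at 0 and 1
    return phi(x) * phi(1.0 - x)

chi_1, _ = quad(psi, 0.0, 1.0)          # the normalising constant chi(1)

def f(x):
    x = min(max(x, 0.0), 1.0)           # chi is constant outside [0, 1]
    chi_x, _ = quad(psi, 0.0, x)
    return 1.0 - chi_x / chi_1

print(f(0.0), f(0.5), f(1.0))           # -> 1.0  0.5  0.0 (up to quadrature error)
```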