Is $\int_E \frac{1}{(x^2+y^2)^2}dxdy$ convergent? I have to tell whether this integral is convergent: $$\int_E \frac{1}{(x^2+y^2)^2}dxdy$$ where $E=\{0\leq y \leq x^a\} \cap \{x^2+y^2\leq 1\}$. I'm asked for which $a \geq 0$ the integral converges. How should I proceed when I meet this kind of integral? I mean, these domains determined by intersections of a curve with $[0,1]^2$ or with $B_n(0,0)$. Thanks in advance. EDIT: I suppose I should do it for $x \geq 0$ even if not specified.
If $x<0$ we have that $x^a$ is not necessarily defined, so I am going to assume that the actual problem is to discuss the convergence of $$I(a)=\iint_E \frac{dx\,dy}{(x^2+y^2)^2},\qquad E=\{(x,y):x^2+y^2\leq 1, x> 0, 0<y<x^a\}.$$ With these assumptions, switching to polar coordinates gives $$ I(a) = \int_{0}^{1}\frac{L(\rho)}{\rho^4}\,d\rho, $$ where $L(\rho)$ denotes the length of the arc $\{x^2+y^2=\rho^2\}\cap E$, hence the problem boils down to estimating $L(\rho)$ for $\rho\to 0^+$. If $a\leq 1$ we have $L(\rho)\geq c\rho$ and the integral is clearly divergent, so we may assume $a>1$, in which case $x^a$ is a convex function on $[0,1]$. This easily leads to $$ L(\rho)\sim \rho^a\quad\text{as }\rho\to 0^+ $$ and to the fact that the integral is convergent for $\color{red}{a>3}$.
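As a numerical illustration of the $a>3$ threshold (the bisection helper and the discretization are my own choices, and this is a sanity check rather than a proof), one can compare truncated integrals $\int_\epsilon^1 L(\rho)\rho^{-4}\,d\rho$: for $a=2$ they blow up as $\epsilon\to0^+$, while for $a=4$ they stabilize.

```python
import math

def theta(rho, a):
    # angle t in (0, pi/2) at which the circle of radius rho meets y = x^a,
    # i.e. rho*sin(t) = (rho*cos(t))**a, located by bisection
    lo, hi = 0.0, math.pi / 2
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if rho * math.sin(mid) < (rho * math.cos(mid)) ** a:
            lo = mid
        else:
            hi = mid
    return lo

def I_partial(a, eps, n=2000):
    # midpoint rule for the truncated integral over [eps, 1], using the
    # substitution rho = exp(t) to resolve the behaviour near rho = 0;
    # L(rho) = rho * theta(rho), so the integrand is theta(rho) / rho**3
    t0 = math.log(eps)
    h = -t0 / n
    total = 0.0
    for i in range(n):
        rho = math.exp(t0 + (i + 0.5) * h)
        total += theta(rho, a) / rho ** 3 * rho * h  # d rho = rho dt
    return total

print([round(I_partial(2, e), 1) for e in (1e-2, 1e-4)])   # keeps growing
print([round(I_partial(4, e), 4) for e in (1e-2, 1e-4)])   # settles down
```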
{ "language": "en", "url": "https://math.stackexchange.com/questions/3702259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Guidance requested for vector dot product question. Helping my child out with their year 11 exam preparation, specifically vectors and dot products, I think I may have figured out the answer but I'd like to get some confirmation or, more likely, a short sharp shock of education :-) Keep in mind it's some thirty-plus years since I've had to tackle this stuff. The question is phrased thus: If vector a is perpendicular to vector b-a, which of the following are necessarily true? 1) a.(b-a) = 0 2) a.b = a.a 3) a = b 4) a.b = $|a|^2$ The ones they stated as necessarily true were all but 3. So here is my reasoning. Consider the vectors as follows. If b-a is perpendicular, then the b vector must be like this (although the triangle could of course be oriented in other ways):

          /|
         / |
        /  |
     b /   | b-a
      /    |
     /     |
    /______|
        a

Now, obviously, item 1 is true because the dot product is |a||b-a|cosθ, where θ = 90° hence cosθ = 0. In terms of the other three statements, I used Pythagoras on the magnitudes to work out: $${b^2} = {a^2} + {(b-a)^2}$$ $${b^2} = {a^2} + {b^2 -2ab + a^2}$$ $${b^2} = {2a^2} + {b^2 -2ab}$$ $${2a^2} = {2ab}$$ $${a} = {b}$$ So it appears the magnitude of a and b is the same. That would, of course, mean the triangle is not so much a triangle as two congruent lines. This would explain why items 2 and 4 were true - b-a becomes a zero-length vector which I suppose could be considered perpendicular to a. But the only reason why I can think that item 3 could be false is if that zero-length vector may corrupt things. The other three statements deal with magnitudes only but it may be that a zero-length vector may be rewritten as zero units north or zero units west, and they may be considered different. Other than that, I'm not sure why item 3 would not be true as well. Of course, it's quite possible that I've made some mistake in the reasoning above, in which case I'd appreciate some guidance so I can once again become a hero to my son :-)
I don't know what your son is supposed to know about the dot product, which is critical for the answer. However, if he knows that the dot product is distributive over addition, then $$a \cdot (b-a) = a \cdot b - a \cdot a=0.$$ Therefore 2. is clear, and so is 4., since $a \cdot a = \vert a \vert^2$.
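A concrete check with example vectors of my own choosing makes the distinction visible: statements 1, 2 and 4 hold while 3 fails. Note also that $a\cdot b = a\cdot a$ does not mean $|a|=|b|$, since $a\cdot b=|a||b|\cos\theta$, not $|a||b|$.

```python
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

a = (3.0, 0.0)
b = (3.0, 4.0)                 # b - a = (0, 4) is perpendicular to a
b_minus_a = (b[0] - a[0], b[1] - a[1])
norm_a_sq = a[0] ** 2 + a[1] ** 2

print(dot(a, b_minus_a))       # statement 1: 0.0
print(dot(a, b) == dot(a, a))  # statement 2: True
print(dot(a, b) == norm_a_sq)  # statement 4: True
print(a == b)                  # statement 3: False
# |a| = 3 while |b| = 5: perpendicularity does not force equal magnitudes
```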
{ "language": "en", "url": "https://math.stackexchange.com/questions/3702395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Affine function is a diffeomorphism? Given an affine function $f:\mathbb{R}^n \to \mathbb{R}^n$ defined for all $x\in \mathbb{R}^n$ by $$f(x)=T(x)+a$$ such that $T$ is an invertible linear map and $a\in \mathbb{R}^n$, is $f$ a diffeomorphism?
It is a diffeomorphism iff $T$ is invertible. It is easy to see that $f$ is invertible iff $T$ is invertible, with inverse $$f^{-1}(x) = T^{-1}(x - a).$$ Both $f$ and $f^{-1}$ are compositions of a linear map and a translation, hence smooth, so $f$ is a diffeomorphism.
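A concrete sketch (the sample matrix $T$, vector $a$ and helper names are mine) checking that the displayed inverse really undoes $f$:

```python
# f(x) = T x + a for a sample invertible T and translation a
T = [[2.0, 1.0], [1.0, 1.0]]        # det T = 1, so T is invertible
T_inv = [[1.0, -1.0], [-1.0, 2.0]]  # inverse of T

a = [5.0, -3.0]

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def f(x):
    Tx = mat_vec(T, x)
    return [Tx[0] + a[0], Tx[1] + a[1]]

def f_inv(x):
    # the formula from the answer: T^{-1}(x - a)
    return mat_vec(T_inv, [x[0] - a[0], x[1] - a[1]])

x = [1.5, -2.0]
print(f_inv(f(x)), f(f_inv(x)))   # both recover the original x
```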
{ "language": "en", "url": "https://math.stackexchange.com/questions/3702579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do Tarski's (geometry) axioms imply that all zero segments are congruent? Tarski's axioms are an alternate formalization of geometry (similar to axiom sets of Euclid and later Hilbert). Do these axioms imply: $$\forall\; x,y\in \text{points},\; x x\equiv y y?$$ If yes, what is the proof? My feeling is that the proof must use Tarski's Identity of Congruence $$xy\equiv zz \rightarrow x = y,$$ but I am unable to find a proof.
This seems surprisingly tricky. Identity of Congruence isn't enough by itself since it goes in the wrong direction. I want to use the Five Segment Axiom. Let me call your two points $p,q$ instead, to avoid a conflict with Wikipedia's notation. If $p=q$ then we are done by reflexivity, so assume $p \ne q$. Set $u=z=x'=p$ and $u'=z'=x=q$. Let $y=y'$ be the midpoint of $pq$ (it takes some more work to prove that it exists), so $Bpyq$ and $py \equiv qy$. Now we verify that the hypothesis of the Five Segment Axiom is satisfied, and conclude $zu \equiv z'u'$ which is to say $pp \equiv qq$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3702720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
SL-Eigenvalue/function problem with arbitrary boundary values The problem is to find all eigenvalues and eigenfunctions for the following SL system. $u'' + \lambda u = 0, x \in [a,b]$ $u'(a) = u'(b) = 0$ I know the general idea of how to do these problems and can do them for simpler boundary conditions, say $u(0) = u(L) = 0$. I know the general solution to the ODE is: $$u(x) = A\cos(\sqrt{\lambda}x) +B\sin(\sqrt{\lambda}x)$$ (do I need the constants for eigenvalue problems?) My issue is with how to apply the boundary conditions to get it in the form given in the solution. The ansatz given in the solution (without explanation) says "Explicit left boundary condition implies $u(x) = \cos(\sqrt{\lambda}(x-a))$". I can see how this solution obviously satisfies $u'(a) = 0$ but I'm not sure how you would get there from: $u'(a) = -\sqrt{\lambda}A\sin(\sqrt{\lambda}a) + \sqrt{\lambda}B\cos(\sqrt{\lambda}a) = 0$ They also drop the constant in the ansatz, and I understand that's because any multiple of an eigenfunction is an eigenfunction, so we may as well make it 1, but I'm also confused about when and where the constants should be disregarded (and if they should be included anywhere at all). Thanks in advance!
If you want a solution with $u'(a)=0$, then you may safely assume $u(a)=1$, because $u'(a)=u(a)=0$ would force $u\equiv 0$. Any non-trivial solution will be a multiple of the solution with $u'(a)=0,u(a)=1$, and that solution is $$ u(x) = \cos(\sqrt{\lambda}(x-a)). $$ To complete the solution, set $u'(b)=0$, which gives $$ \sin(\sqrt{\lambda}(b-a))\sqrt{\lambda}=0. $$ $\lambda=0$ does give a solution $u(x)=1$. For $\lambda\ne 0$, the solutions are $$ \sqrt{\lambda}(b-a)=\pi,2\pi,3\pi,\cdots \\ \lambda = \frac{n^2\pi^2}{(b-a)^2},\;\;\; n=1,2,3,\cdots. $$ The corresponding non-trivial eigenfunctions are non-zero scalar multiples of $$ u_n(x)=\cos\left(\frac{n\pi(x-a)}{b-a}\right),\;\;\; n=0,1,2,3,\cdots. $$
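A numerical check (the sample interval $[0.3,\,2.1]$ and finite-difference helpers are my own choices) that each $u_n$ with the displayed $\lambda_n$ satisfies both the ODE and the Neumann conditions:

```python
import math

a, b = 0.3, 2.1   # sample interval endpoints

def u(n, x):
    return math.cos(n * math.pi * (x - a) / (b - a))

def d1(g, x, h=1e-6):
    # central first difference
    return (g(x + h) - g(x - h)) / (2 * h)

def d2(g, x, h=1e-4):
    # central second difference
    return (g(x + h) - 2 * g(x) + g(x - h)) / h ** 2

for n in (0, 1, 2, 3):
    lam = (n * math.pi / (b - a)) ** 2
    residual = d2(lambda t: u(n, t), 1.0) + lam * u(n, 1.0)
    print(n,
          abs(residual) < 1e-4,                    # u'' + lambda u = 0
          abs(d1(lambda t: u(n, t), a)) < 1e-6,    # u'(a) = 0
          abs(d1(lambda t: u(n, t), b)) < 1e-6)    # u'(b) = 0
```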
{ "language": "en", "url": "https://math.stackexchange.com/questions/3702924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Limit of $\frac{e^{xy}}{x+1}$ as it goes to $0$ I have to compute the following limit: $$\lim_{(x,y)\rightarrow (0,0)}\frac{e^{xy}}{x+1}$$ I found that along $x = 0$ the limit is $1$, but along $y = 0$ I got an undefined limit. Taking $x=y$ gets me to the same result as the first path. Is the limit then $1$, or in what other way can I evaluate it to see whether it exists and, if so, what its value is? Thanks for the help.
Actually, if $y=0$, it's $\lim\limits_{x\to0}\dfrac1{x+1}=\color{red}1$ too. In fact, the numerator and denominator are both continuous and the denominator tends to $1\neq0$, so the limit exists and equals $\dfrac{e^0}{0+1}=1$, regardless of the path.
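A quick numerical probe along several paths (illustrative only, since continuity is what actually proves the claim):

```python
import math

def f(x, y):
    return math.exp(x * y) / (x + 1)

for t in (0.1, 0.01, 0.001):
    # approach (0,0) along the axes, the diagonals, and a parabola
    paths = (f(t, 0), f(0, t), f(t, t), f(t, -t), f(t, t ** 2))
    print([round(v, 4) for v in paths])   # every entry tends to 1
```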
{ "language": "en", "url": "https://math.stackexchange.com/questions/3703042", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Shilov Chapter 4 Problem 16 I am working on the captioned problem, which is reproduced below. And the hint for this problem is the following: But I have no idea of how to use Chapter 3 Prob 12 for this problem. That problem is reproduced here: I have no problem in finding out that the three equations for the unknown elements of A and B lead to equations for three minors of a 2x3 matrix. But I have no clue as to how to proceed using Chap3 Prob 12. Thank you.
The equation $$ \pmatrix{a&b\\ c&d}\pmatrix{x&y\\ z&w}=\pmatrix{P&-Q\\ R&-P} $$ can be rewritten as \begin{aligned} bz-cy &= P,\\ (a-d)y-b(x-w) &= -Q,\\ c(x-w)-(a-d)z &= R. \end{aligned} That is, if $$ A=\pmatrix{b&c&a-d\\ y&z&x-w},\tag{$\ast$} $$ then the minors obtained by deleting respectively the third column, the second column and the first column of $A$ are given by $P,Q$ and $R$. So, if one can construct $A$, one can pick two arbitrary values for $d$ and $w$ and recover $a,b,c,x,y,z$ from $(\ast)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3703181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An identity of Arithmetic Functions Problem: Show that for all positive integers $n$, $$ \sum_{a=1, (a,n)=1}^{n} (a-1, n) = d(n)\phi(n)$$ where $(a, b)$ stands for $\gcd(a, b)$ and $d, \phi$ are the divisor and Euler's totient function, i.e., $\phi(n)$ is the number of integers up to $n$ that are coprime to $n$. I find this one really fascinating because of the $a-1$. This problem is from Niven and Zuckerman, 'Introduction to the Theory of Numbers'. My approach is to show that the L.H.S. is a multiplicative function, because it is easy to compute it for powers of primes. Let $d_1d_2=n$ where $(d_1, d_2)=1$. I want to show that $ \sum_{a=1, (a,n)=1}^{n} (a-1, n) = (\sum_{a=1, (a,d_1)=1}^{d_1} (a-1, d_1))( \sum_{a=1, (a,d_2)=1}^{d_2} (a-1, d_2)) $ but I am not able to proceed other than showing that some terms are cancelling. The main problem is that there are $x$ such that $(x, n)=1$ but $x > d_1, d_2$. Please help. Any hints are appreciated.
For a divisor $d\mid n$, the number of $a$ with $1 < a \leq n$ such that $\gcd(a-1,n)=d$ is equal to $\left|\left\{1 \leq q \leq \tfrac{n-1}{d} : (qd,n)=d\right\}\right|=\left|\left\{1 \leq q \leq \tfrac{n-1}{d} : (q,n/d)=1\right\}\right|=\varphi(n/d)$. Now there's a multiplicative function. Also, it is known that if $f$ is multiplicative, then $\sum_{d|n}f(d)$ is also multiplicative, so having an expression of this form is useful.
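Before hunting for a proof it is reassuring to brute-force the identity for small $n$ (plain definitions, nothing beyond the statement of the problem):

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count
    return sum(1 for a in range(1, n + 1) if gcd(a, n) == 1)

def d(n):
    # number of divisors by direct count
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def lhs(n):
    # the left-hand side; note gcd(0, n) = n handles the a = 1 term
    return sum(gcd(a - 1, n) for a in range(1, n + 1) if gcd(a, n) == 1)

assert all(lhs(n) == d(n) * phi(n) for n in range(1, 200))
print("identity verified for n = 1..199")
```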
{ "language": "en", "url": "https://math.stackexchange.com/questions/3703407", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate $\sum_{n=1}^\infty\frac{n^x}{n!}$ I want to evaluate the function defined by the following sum: $$\sum_{n=1}^\infty\frac{n^x}{n!}$$ I was thinking about writing a Taylor series expansion for it. However, my attempt resulted in a sum that looks even harder to calculate: $$\sum_{n=1}^{\infty}\frac{\ln^k(n)}{n!}$$ Thanks for all the help in solving this problem.
Proof of Dobinski's formula: note that $$\sum_{n=0}^{\infty} \frac{n^k}{n!}=\underbrace{\frac{d}{dx}\left(x\frac{d}{dx}\left(x\cdots\left(\frac{d}{dx}e^x\right)\right)\right)}_{k\ \text{derivatives}}\Bigg|_{x=1}=\underbrace{\frac{d}{dx}\left(x\frac{d}{dx}\left(\cdots\frac{d}{dx}\left(x^2e^x+xe^x\right)\right)\right)}_{k-2\ \text{derivatives}}\Bigg|_{x=1}.\tag{1}$$ Also, writing $B_k$ for the $k$-th Bell number, $$eB_k=\frac{d^k}{dt^k}e^{e^t}\Bigg|_{t=0}=\frac{d^{k-2}}{dt^{k-2}}\left((e^t)^2e^{e^t}+e^te^{e^t}\right)\Bigg|_{t=0}.\tag{2}$$ Now $(1)$ and $(2)$ have exactly the same structure: via $x=e^t$, each $\frac{d}{dt}$ corresponds to $x\frac{d}{dx}$. Moreover $e^t=1$ and $e^{e^t}=e$ at $t=0$, while $e^x=e$ at $x=1$. Hence we get $$eB_k=\sum_{n=0}^{\infty} \frac{n^k}{n!}.$$
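A numerical check of Dobinski's formula for small $k$ (the Bell-triangle recurrence used below is my own choice of how to compute $B_k$):

```python
import math

def bell(k):
    # Bell numbers via the Bell triangle: each row starts with the last
    # entry of the previous row, and each entry adds its left neighbour
    row = [1]
    for _ in range(k):
        new = [row[-1]]
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

def series(k, terms=60):
    # partial sum of sum_{n>=0} n^k / n!  (0**0 == 1 in Python)
    return sum(n ** k / math.factorial(n) for n in range(terms))

for k in range(6):
    print(k, bell(k), round(series(k) / math.e, 6))
```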
{ "language": "en", "url": "https://math.stackexchange.com/questions/3703587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Bound on a complex integral using polar form I'm a little bit confused and also a bit rusty on complex analysis... Here is my problem: consider a function $f:\mathbb{R}\to\mathbb{C}$. We can express this function in polar coordinates as $f(x) = A(x)e^{ia(x)}$, with $A(x)\in\mathbb{R}_+$ and $a(x)\in \mathbb{R}$. My question is: does the following inequality hold? $$\left|\int_q^p A(x)e^{ia(x)}\mathrm{d}x \right|\le\left(\sup_{x\in[q,p]} A(x)\right)\cdot \left| \int_q^p e^{ia(x)}\mathrm{d}x\right|$$ I would say yes, but I'm not familiar enough with complex integrals to be sure that this is indeed the case. Sidenote: I define $\mathbb{R}_+$ as the set of all non-negative real numbers.
I found a counterexample showing that this does not hold true. Choose $A(x)=e^{-x^2}$, $a(x)=x$ and $p=-q=10$. Then the inequality would state that $1.38\ldots\le 1.08\ldots$, which is false.
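The quoted decimals can be reproduced with a crude midpoint Riemann sum (the discretization is my own choice); analytically the two sides are $\sqrt{\pi}\,e^{-1/4}\approx1.380$ and $|2\sin 10|\approx1.088$.

```python
import cmath, math

def riemann(f, q, p, n=40000):
    # midpoint Riemann sum of a complex-valued integrand on [q, p]
    h = (p - q) / n
    return sum(f(q + (i + 0.5) * h) for i in range(n)) * h

q, p = -10.0, 10.0
lhs = abs(riemann(lambda x: cmath.exp(-x * x + 1j * x), q, p))
sup_A = 1.0                                  # sup of exp(-x^2) on [q, p]
rhs = sup_A * abs(riemann(lambda x: cmath.exp(1j * x), q, p))
print(round(lhs, 3), round(rhs, 3))          # lhs > rhs: inequality fails
```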
{ "language": "en", "url": "https://math.stackexchange.com/questions/3703933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is considering only the quadratic in one of the variables of a two-variable quadratic sufficient for calculating roots Find the positive integral solutions to $7x^2-2xy+3y^2-27=0$. My solution: Viewing this as a quadratic in $x$, if we assume one root to be integral, the other has to be rational (since $y$ must be an integer, the product of the roots is rational). For the roots to be rational the discriminant has to be a perfect square. We get the discriminant $\Delta=4(189-20y^2)$, which has to be a perfect square. So we get $y^2=1$ or $9$, hence $y=1$ or $3$; putting the values back we get the pair $(x,y)=(2,1)$. If we instead make a quadratic in $y$ we get the same solution. Hence considering a quadratic in $x$ only is sufficient. My Question: I don't get the fact or intuition behind why considering the quadratic in either $x$ or $y$ is sufficient on its own, as the equation is not symmetric. If someone could provide me the intuition or proof behind why both of them lead to the same results, it would be of great help. Note: All the similar questions I have encountered can be solved by considering only the quadratic in either $x$ or $y$, so I assume this is general. Thanks.
Assuming $a$ and $c$ are nonzero, we can solve the equation$$ax^2+bxy+cy^2+d=0 \tag{1}\label{1}$$for $x$ by considering the equation as a quadratic in $x$, namely$$x=\frac{-by \pm \sqrt{(by)^2-4a(cy^2+d)}}{2a}, \tag{2}\label{2}$$and for $y$ by considering the equation as a quadratic in $y$, namely$$y=\frac{-bx \pm \sqrt{(bx)^2-4c(ax^2+d)}}{2c}.\tag{3}\label{3}$$ Please note that \ref{1}, \ref{2}, and \ref{3} have exactly the same solution pairs $(x,y)$, so solving any one of them alone is sufficient to find all roots. So, for example, if you want to find all positive integer roots of the equation$$7x^2-2xy+3y^2-27=0,$$both \ref{2} and \ref{3} will give you the same results.
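For the concrete equation, a brute-force search over a small box confirms that $(2,1)$ is the only positive solution; the box bound follows from $2xy\le x^2+y^2$, which gives $6x^2+2y^2\le 27$.

```python
# 6x^2 + 2y^2 <= 27 forces x <= 2 and y <= 3; search a generous box anyway
solutions = [(x, y) for x in range(1, 30) for y in range(1, 30)
             if 7 * x * x - 2 * x * y + 3 * y * y - 27 == 0]
print(solutions)   # [(2, 1)]
```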
{ "language": "en", "url": "https://math.stackexchange.com/questions/3704050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Find $\dim S$. Problem taken from Apostol, Calculus Volume 2, page 13. Let $P$ denote the linear space of all real polynomials of degree $\le n$, where $n$ is fixed. Let $S$ denote the set of all polynomials $f$ in $P$ satisfying the condition given below, and find $\dim S$. 1. $f$ is even. 2. $f$ is odd. My attempt: we know that $\dim P_n = n+1$. I think that if $f$ is even then $\dim S= \frac{n}{2}$, and if $f$ is odd then $\dim S= \frac{n+1}{2}$. Is this true?
Case 1: $n$ is even, say $n=2k$. An even $f$ will be of the type $c_0+c_1x^2+c_2x^4+\cdots+c_kx^{2k}$ (*), hence $\dim S =k+1=n/2+1$. An odd $f$ will be of the type $d_1x+d_3x^3+\cdots+d_{2k-1}x^{2k-1}$ (**), hence $\dim S =k=n/2$. Similarly consider case 2: $n$ is odd. PS: (*) because an even $f$ satisfies $f(x) =\frac{ f(x) +f(-x)} {2}$, which kills the odd-degree coefficients; (**) because an odd $f$ satisfies $f(x) =\frac{f(x) - f(-x)} {2}$, which kills the even-degree ones.
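Since the dimensions just count monomials $x^j$, $0\le j\le n$, of the right parity, a tiny script (my own phrasing of the count) tabulates both cases:

```python
def dim_even(n):
    # even polynomials of degree <= n are spanned by 1, x^2, x^4, ...
    return sum(1 for j in range(n + 1) if j % 2 == 0)

def dim_odd(n):
    # odd polynomials of degree <= n are spanned by x, x^3, x^5, ...
    return sum(1 for j in range(n + 1) if j % 2 == 1)

for n in range(7):
    print(n, dim_even(n), dim_odd(n))
# even n = 2k: dimensions k+1 and k; odd n = 2k+1: both equal k+1
```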
{ "language": "en", "url": "https://math.stackexchange.com/questions/3704221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Poisson Arrival Process and Uniform Distribution I'm brushing up on some basic probability and have this question: If we have a Poisson arrival process with arrivals $A_{1}, A_{2}, \dots$, and we know that there is one and only one arrival in a time period, say $[t_{1}, t_{2}]$. Does this mean that the one arrival is distributed uniformly on $[t_{1},t_{2}]$? How would one go about "proving" (it might be trivial, but not sure) such a thing?
Let's call your process $N(t)$, i.e. $N(t)$ is the number of arrivals that have happened up to and including time $t$. And given an interval $(a,b]$, let $N((a,b])$ denote the number of arrivals in the interval $(a,b]$. The way you would go about proving it is to fix some time $s \in (t_1,t_2]$ and calculate $$ \mathbb{P}\bigl(N((t_1,s]) = 1 \,\big|\, N((t_1,t_2]) = 1\bigr). $$ You are hoping the answer is $$ \frac{s-t_1}{t_2 - t_1}. $$ The conditioning event ''exactly one arrival in $(t_1,t_2]$'' is ''$N((t_1,s])+N((s,t_2])=1$'', and by independence of increments the numerator $\mathbb{P}(N((t_1,s])=1,\ N((s,t_2])=0)$ factors into Poisson probabilities; the exponential factors cancel against the denominator, leaving exactly the ratio above.
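A Monte Carlo sanity check of the uniformity claim (the rate, interval and sample size are my own choices): conditioning on exactly one arrival in $[2,3]$, the arrival time should be uniform there, so its mean should be near $2.5$ with half the mass below the midpoint.

```python
import random

random.seed(0)
rate, t1, t2, horizon = 1.0, 2.0, 3.0, 6.0
samples = []
for _ in range(100000):
    t, inside = 0.0, []
    while True:
        t += random.expovariate(rate)   # exponential inter-arrival gaps
        if t > horizon:
            break
        if t1 <= t <= t2:
            inside.append(t)
    if len(inside) == 1:                # condition: exactly one arrival
        samples.append(inside[0])

mean = sum(samples) / len(samples)
below_mid = sum(1 for s in samples if s < (t1 + t2) / 2) / len(samples)
print(round(mean, 2), round(below_mid, 2))   # close to 2.5 and 0.5
```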
{ "language": "en", "url": "https://math.stackexchange.com/questions/3704384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Resolving indeterminacy of map on projective spaces induced by a linear map Let $V$ and $W$ be two $k$-vector spaces, where $k$ can be assumed to be $\mathbb{C}$, and $f:V\longrightarrow W$ a non-null $k$-linear map with kernel $K$. The map $f$ naturally induces a rational map $F:\mathbb{P}(V)\dashrightarrow\mathbb{P}(W)$, where $F$ is defined on the non-empty open set $\mathbb{P}(V)\backslash\mathbb{P}(K)$. Then via blowing up we resolve the map $F$, that is: let $\pi: Bl_{\mathbb{P}(K)}(\mathbb{P}(V))\longrightarrow\mathbb{P}(V)$ be the blow up of $\mathbb{P}(V)$ along $\mathbb{P}(K)$, and $F':Bl_{\mathbb{P}(K)}(\mathbb{P}(V))\longrightarrow\mathbb{P}(W)$ be the map which resolves $F$, or more precisely $F = F'\circ\pi$. I would like to know two things, and any help by indicating books, a solution, or any hint would make me happy. 1 - Is it simple to describe the blow up in this specific case? 2 - What is the pull back $(F')^*\mathcal{O}_{\mathbb{P}(W)}(1)$? Thank you so much for any help!!
Let me assume $f$ is surjective (otherwise, replace $W$ by $Im(f)$). Then $$ Bl_{\mathbb{P}(K)}(\mathbb{P}(V)) = \mathbb{P}_{\mathbb{P}(W)}(K \otimes \mathcal{O} \oplus \mathcal{O}(-1)). $$ Next, if $H_V$ and $H_W$ are the pullbacks of the hyperplane classes of $\mathbb{P}(V)$ and $\mathbb{P}(W)$, respectively, and $E$ is the exceptional divisor of the blowup, then $$ H_W = H_V - E. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3704531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove$:$ $\sum\limits_{cyc} (\frac{a}{b+c}-\frac{1}{2}) \geqq (\sum\limits_{cyc} ab)\Big[\sum\limits_{cyc} \frac{1}{(a+b)^2}\Big]-\frac{9}{4}$ For reals $a,b,c$ with $a+b+c>0$, $ab+bc+ca>0$ and $(a+b)(b+c)(c+a)>0$, prove$:$ $$\frac{a}{b+c}+\frac{b}{c+a}+\frac{c}{a+b} -\frac{3}{2} \geqq (\sum\limits_{cyc} ab)\Big[\sum\limits_{cyc} \frac{1}{(a+b)^2}\Big]-\frac{9}{4}$$ My proof$:$ $$4(a+b+c) \prod (a+b)^2 (\text{LHS}-\text{RHS})$$ $$=\prod (a+b) \Big[\sum\limits_{cyc} (ab+bc-2ca)^2+(ab+bc+ca)\sum\limits_{cyc} (a-b)^2 \Big]$$ $$+(a+b+c)(a-b)^2 (b-c)^2 (c-a)^2 \geqq 0$$
From $$ \sum \frac{a}{b+c} -\frac{3}{2} - \left(\sum \frac{ab+bc+ca}{(a+b)^2} -\frac{9}{4}\right) = \frac14 \sum \frac{(a-b)^2}{(a+b)^2} \geqslant 0$$ we can see that the inequality is in fact true for all real numbers $a,\,b,\,c$ satisfying the stated conditions.
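The displayed identity can be checked numerically on random admissible triples (the sampling scheme and tolerance are my own choices; triples with a pairwise sum near zero are skipped to avoid floating-point blow-up):

```python
import random

random.seed(1)

def difference(a, b, c):
    # (LHS - RHS of the problem) minus the sum-of-squares expression
    s1 = a / (b + c) + b / (c + a) + c / (a + b) - 1.5
    q = a * b + b * c + c * a
    s2 = q * (1 / (a + b) ** 2 + 1 / (b + c) ** 2 + 1 / (c + a) ** 2) - 2.25
    sos = 0.25 * (((a - b) / (a + b)) ** 2 + ((b - c) / (b + c)) ** 2
                  + ((c - a) / (c + a)) ** 2)
    return (s1 - s2) - sos

checked = 0
while checked < 1000:
    a, b, c = (random.uniform(-5, 5) for _ in range(3))
    if (a + b + c > 0 and a * b + b * c + c * a > 0
            and min(abs(a + b), abs(b + c), abs(c + a)) > 0.1
            and (a + b) * (b + c) * (c + a) > 0):
        assert abs(difference(a, b, c)) < 1e-7
        checked += 1
print("identity confirmed on", checked, "admissible triples")
```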
{ "language": "en", "url": "https://math.stackexchange.com/questions/3704840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Conditions that a function is analytic in the complex plane of its independent variable? I don't have an undergraduate mathematics background, so I am very sorry if this question is too naive. Consider a simple example: $f(x)=\vert x \vert^3$ and $g(x)=x^3$ where $x\in \mathbb{C}$. Why is $f(x)$ not analytic in the complex $x$ plane, while $g(x)$ is analytic in the entire complex plane of $x$? In other words, what are the conditions for a function to be analytic in the complex plane of its independent variable? Please explain in as much detail as possible, but please do not use too much jargon. Thank you very much.
Start with the definition of analytic. The function of a complex variable $z$ is analytic at $z \in \mathbb C$ if it is differentiable at $z$, which means $$\begin{align} \frac{f(z+h) - f(z)}{h} \tag 1 \end{align}$$ has a unique limit as $\lvert h \rvert \to 0$, denoted $f'(z)$. The limit has to exist regardless of how and in which direction $h$ approaches zero. This is a strong requirement and requires $f(z)$ to satisfy the Cauchy-Riemann equations. These are obtained as follows: write $z=x+iy$ and $f(z) = u(x,y)+iv(x,y)$ and consider the complex derivative when $h = \delta x$ and $h = i\delta y$ for real $\delta x,\delta y$. If $f$ is required to be differentiable at $z$ then both must be the same, so we obtain $$\frac{\partial f}{\partial x} = f'(z) = -i \frac{\partial f}{\partial y}.$$ Now write this in terms of $u,v$ to obtain the Cauchy-Riemann equations, $$ \begin{align} \frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}, \quad \frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}. \tag 2 \end{align}$$ When applied to $\lvert z \rvert^3$ these break down. We have $u(x,y) = (x^2+y^2)^{3/2}$ and $v(x,y) = 0$. It is not difficult to see that $(2)$ is then only satisfied by exception, at $x = y = 0$. Thus $\lvert z \rvert^3$ is complex-differentiable only at the single point $z = 0$, and since analyticity requires differentiability on a whole neighbourhood, it is analytic nowhere. I hope this is useful.
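The direction-dependence of the difference quotient $(1)$ for $|z|^3$ is easy to see numerically (the sample point and step size are my own choices):

```python
def f(z):
    return abs(z) ** 3

def dq(z, h):
    # difference quotient (f(z+h) - f(z)) / h for a complex step h
    return (f(z + h) - f(z)) / h

z, h = 1 + 1j, 1e-6
horizontal = dq(z, h)         # step along the real axis
vertical = dq(z, 1j * h)      # step along the imaginary axis
print(horizontal, vertical)   # clearly different: not differentiable at z
print(dq(0, h), dq(0, 1j * h))  # both essentially 0: differentiable at 0
```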
{ "language": "en", "url": "https://math.stackexchange.com/questions/3704966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cardinality of $\operatorname{Hom}_{\mathbb{C}}(A,\mathbb{C})$ Question Prove there is no finitely generated algebra $A$ over $\mathbb{C}$ such that the cardinality of $\operatorname{Hom}_{\mathbb{C}}(A,\mathbb{C})$ is exactly $\aleph_0$. I need to prove it using commutative algebra tools such as the Hilbert basis & Nullstellensatz theorems, Noether normalization theorem, etc., but I can't figure out how. Thank you!
Hint: Noether normalization says there's an injective map $R=\Bbb C[x_1,\cdots,x_n]\to A$ which makes $A$ into a finite $R$-algebra (where $n=\dim A$). This induces a map on the hom-sets $\operatorname{Hom}_{\Bbb C}(A,\Bbb C)\to\operatorname{Hom}_{\Bbb C}(R,\Bbb C)$. What can you say about this map? What can you say about $\operatorname{Hom}_{\Bbb C}(R,\Bbb C)$? More details under the spoiler, though I encourage you to make an effort without looking under the spoiler first. This map of hom-sets is surjective and finite-to-one. Why? Every element in $A$ satisfies some polynomial with coefficients from $R$, so if you know what happens to elements of $R$, then you know what happens with elements of $A$, except possibly up to some finite ambiguity. Now all you have to do is to argue that $\operatorname{Hom}_{\Bbb C}(R,\Bbb C)$ is finite or at least the cardinality of the continuum, which should not be hard using the Nullstellensatz once you remember that maximal ideals are in bijection with maps to a field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3705123", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$f(x)=(\sin(\tan^{-1}x)+\sin(\cot^{-1}x))^2-1, |x|>1$ Let $f(x)=(\sin(\tan^{-1}x)+\sin(\cot^{-1}x))^2-1, |x|>1$. If $\frac{dy}{dx}=\frac12\frac d{dx}(\sin^{-1}(f(x)))$ and $y(\sqrt3)=\frac{\pi}{6}$, then $y(-\sqrt3)=?$ $$f(x)=(\frac{x}{\sqrt{x^2+1}}+\frac{1}{\sqrt{x^2+1}})^2-1=\frac{2x}{1+x^2}$$ $$\frac{dy}{dx}=\frac12\frac d{dx}(\sin^{-1}(\sin(2\tan^{-1}x)))$$ $$y=\tan^{-1}x+c$$ Using, $y(\sqrt3)=\frac{\pi}{6}$, I get, $c=-\frac\pi6$. Thus, $y(-\sqrt3)=-\frac\pi2$. But the answer is given as $\frac{5\pi}6$.
$\sin(\cot^{-1}x)=\sin\left(\dfrac\pi2-\tan^{-1}x\right)=\cos(\tan^{-1}x)$ $$\implies f(x)=\left(\sin(\tan^{-1}x)+\sin(\cot^{-1}x)\right)^2-1=\sin\left(2\tan^{-1}x\right)$$ Now $\sin^{-1}\left(\sin(2\tan^{-1}x )\right)=\begin{cases} \pi-2\tan^{-1}x &\mbox{if } 2\tan^{-1}x>\dfrac\pi2\iff x>\tan\dfrac\pi4=1 \\ -\pi-2\tan^{-1}x& \mbox{if } 2\tan^{-1}x<-\dfrac\pi2\iff x<-1 \end{cases}$
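Both branches of the case analysis, together with the simplification $f(x)=\frac{2x}{1+x^2}$ found in the question, can be verified numerically (the test points are my own choices):

```python
import math

def f(x):
    # (sin(arctan x) + sin(arccot x))^2 - 1, using sin(arccot x) = 1/sqrt(1+x^2)
    s = math.sin(math.atan(x)) + 1 / math.sqrt(1 + x * x)
    return s * s - 1

for x in (1.5, 3.0, -1.5, -3.0):
    assert abs(f(x) - 2 * x / (1 + x * x)) < 1e-12
    lhs = math.asin(f(x))
    rhs = math.pi - 2 * math.atan(x) if x > 1 else -math.pi - 2 * math.atan(x)
    print(x, abs(lhs - rhs) < 1e-9)   # True on both branches
```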
{ "language": "en", "url": "https://math.stackexchange.com/questions/3705302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
1st order linear differential equation $y'+\frac{xy}{1+x^2} =x$ Can anyone help me with this task? I need to solve this first-order linear equation: $$y'+\frac{xy}{1+x^2} = x.$$ I got stuck when this result came out: $$e^{\ln|y|}=e^{-\frac{1}{2}\ln|1+x^2|}\cdot e^C.$$ I tried to solve this with Wolfram, getting $$y=\frac{1}{\sqrt{x^2+1}}\cdot C$$ but when I try to calculate $y'$, I get a strange equation. I think I must have gone wrong somewhere. I will be grateful for your help.
This is a linear ODE, so $$ y=y_h+y_p\\ y'_h + \frac{x}{1+x^2}y_h = 0\\ y'_p + \frac{x}{1+x^2}y_p = x $$ The homogeneous equation is separable, giving $$ y_h = \frac{c_0}{\sqrt{1+x^2}} $$ Now, using the method of variation of constants due to Lagrange, we set $y_p = \frac{c_0(x)}{\sqrt{1+x^2}}$ and substituting we obtain $$ \frac{c_0'(x)}{\sqrt{x^2+1}}-x=0 $$ giving $$ c_0(x) = \frac{1}{3} \left(x^2+1\right)^{3/2} $$ and finally $$ y = \frac{c_0}{\sqrt{1+x^2}}+\frac{1}{3} \left(x^2+1\right)^{3/2}\frac{1}{\sqrt{1+x^2}} = \frac{c_0}{\sqrt{1+x^2}}+\frac{1}{3} \left(x^2+1\right) $$
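A quick residual check of the final answer (the sample constant $c_0=2$ and the test points are my own choices):

```python
import math

def y(x, c0=2.0):
    # general solution found above: homogeneous part + particular part
    return c0 / math.sqrt(1 + x * x) + (1 + x * x) / 3

def residual(x, h=1e-6):
    # y' + x*y/(1+x^2) - x should vanish; y' via central difference
    dy = (y(x + h) - y(x - h)) / (2 * h)
    return dy + x * y(x) / (1 + x * x) - x

for x in (-2.0, 0.5, 3.0):
    print(x, abs(residual(x)) < 1e-6)   # True: the ODE is satisfied
```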
{ "language": "en", "url": "https://math.stackexchange.com/questions/3705414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
A problem regarding the ratios $\frac{f(x)}{g(x)}$ and $\frac{g(x)}{h(x)}$, assuming $f(x)g(y) = h\big(\sqrt{x^2+y^2}\big)$ $\mathbf {The \ Problem \ is}:$ Let, $f,g,h$ be three functions defined from $(0,\infty)$ to $(0,\infty)$ satisfying the given relation $f(x)g(y) = h\big(\sqrt{x^2+y^2}\big)$ for all $x,y \in (0,\infty)$, then show that $\frac{f(x)}{g(x)}$ and $\frac{g(x)}{h(x)}$ are constant. $\mathbf {My \ approach} :$ Actually, by putting $x$ in place of $y$ and vice-versa, we can show that $\frac{f(x)}{g(x)}$ is a constant, let it be $c .$ Then, I tried that $g(x_i)g(y_i)=g(x_j)g(y_j)$ whenever $(x_i,y_i)$, $(x_j,y_j)$ satisfies $x^2+y^2 =k^2$ for every $k \in (0,\infty)$ . But, I can't approach further. Any help would be greatly appreciated .
To make the formulas look simpler, define the functions $ \tilde f $, $ \tilde g $ and $ \tilde h $ from $ ( 0 , + \infty ) $ to $ ( - \infty , + \infty ) $ by: $$ \tilde f ( x ) = \log \frac { f \left( \sqrt x \right) } { f ( 1 ) } \qquad \tilde g ( x ) = \log \frac { g \left( \sqrt x \right) } { g ( 1 ) } \qquad \tilde h ( x ) = \log \frac { h \left( \sqrt x \right) } { h \left( \sqrt 2 \right) } $$ Then the functional equation $$ f ( x ) g ( y ) = h \left( \sqrt { x ^ 2 + y ^ 2 } \right) \tag 0 \label 0 $$ transforms to $$ \tilde f ( x ) + \tilde g ( y ) = \tilde h ( x + y ) \text , \tag 1 \label 1 $$ together with $ \tilde f ( 1 ) = \tilde g ( 1 ) = \tilde h ( 2 ) = 0 $. Now, substituting $ y $ for $ x $ and $ x $ for $ y $ in \eqref{1} and comparing the result with \eqref{1}, you get $ \tilde f ( x ) + \tilde g ( y ) = \tilde f ( y ) + \tilde g ( x ) $. Letting $ y = 1 $ in the last equation, you have $$ \tilde f ( x ) = \tilde g ( x ) \text . \tag 2 \label 2 $$ Now, use \eqref{1} twice to get $$ \tilde f ( x + 1 ) + \tilde g ( y ) = \tilde h \big( ( x + 1 ) + y \big) = \tilde h \big( x + ( y + 1 ) \big) = \tilde f ( x ) + \tilde g ( y + 1 ) \text. $$ Letting $ y = 1 $ in the above equation you have $ \tilde f ( x + 1 ) = \tilde f ( x ) + \tilde g ( 2 ) $, which using \eqref{1} and \eqref{2} gives $$ \tilde g ( x + y ) = \tilde f ( 1 ) + \tilde g ( x + y ) = \tilde h \big( 1 + ( x + y ) \big) = \tilde h \big( ( x + 1 ) + y \big) \\ = \tilde f ( x + 1 ) + \tilde g ( y ) = \tilde f ( x ) + \tilde g ( 2 ) + \tilde g ( y ) = \tilde g ( x ) + \tilde g ( y ) + \tilde g ( 2 ) \text . $$ Substituting $ \frac x 2 $ for both $ x $ and $ y $ in the above equation you get $ \tilde g ( x ) = 2 \tilde g \left( \frac x 2 \right) + \tilde g ( 2 ) $. Now, using \eqref{1} and \eqref{2} you get $$ \tilde h ( x ) = \tilde h \left( \frac x 2 + \frac x 2 \right) = \tilde f \left( \frac x 2 \right) + \tilde g \left( \frac x 2 \right) = 2 \tilde g \left( \frac x 2 \right) \text . $$ This, together with the previous result shows that $$ \tilde g ( x ) = \tilde h ( x ) + \tilde g ( 2 ) \text . \tag 3 \label 3 $$ You can now rewrite \eqref{2} and \eqref{3} in terms of the original functions and get $$ \frac { f ( x ) } { g ( x ) } = \frac { f ( 1 ) } { g ( 1 ) } $$ and $$ \frac { g ( x ) } { h ( x ) } = \frac { g \left( \sqrt 2 \right) } { h \left( \sqrt 2 \right) } \text , $$ as desired. Of course, you could avoid defining $ \tilde f $, $ \tilde g $ and $ \tilde h $, and use \eqref{0} and some messier equations corresponding to the above ones, in terms of $ f $, $ g $ and $ h $. But I find this way more elegant, and in fact I think this way you can see the simple idea behind the solution more easily.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3705580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Name of the set that forms a topological space with a topology Let's assume I have a topological space $(X, \tau)$, where $X$ is a set and $\tau$ is a topology. Now I have $Y\subset X$, but $Y$ is not necessarily an element of $\tau$. What do I call $X$ in my publication? It is not the "topological space", because that would be $(X, \tau)$, right? How can I refer to $X$? "Y is a subset of a topology-imbued set"? Sounds a bit weird, and "Y is a subset of a set which is used to construct a topological space..." is not exactly elegant. The same issue would arise for a metric space. Can I say "topological space set"? https://en.wikipedia.org/wiki/Vector_space also makes an abuse of terminology, I guess, when saying that "A vector space ... is a collection of objects called vectors, which ...".
You can say that $Y$ is a subset of the underlying set of the topological space $(X, \tau)$. Usually, when a set $X$ is endowed with some "structure", you can address the set itself by calling it the underlying set. What you are doing is "forgetting" the structure and considering just the set of elements. This works for topological spaces, as well as for groups and in general whenever you have a set with operations defined between its elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3705676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Pushout of categories in Set I found this answer, Understanding an Example of a Pushout, very helpful regarding disjoint unions, but wanted to know how to understand a pushout of categories in Set, say from $f: A\to B$ and $g: A\to C$, where $f$ and $g$ are functors with shared domain $A$. There is an equivalence relation on objects, but how does one deal with morphisms?
As you say, the pushout $P$ has objects $\mathrm{ob}(P)=\mathrm{ob} B\sqcup \mathrm{ob} C/\sim$, where $\sim$ is the equivalence relation generated by the relation containing $(b,c)$ whenever $b$ and $c$ are the image of the same object in $A$. Now to construct morphisms, if $p_1,p_2\in \mathrm{ob}(P)$, then we start with $$P'''(p_1,p_2)=\bigsqcup_{p_1=[b_1],p_2=[b_2]} B(b_1,b_2)\sqcup \bigsqcup_{p_1=[c_1],p_2=[c_2]} C(c_1,c_2).$$ Now, $P'''$ is only a graph, not a category. We let $P''$ be the category freely generated by $P'''$, so that a morphism $p_1\to p_2$ in $P''$ is a path $p_1\to p_2$ in $P'''$. Next, we construct $P'$ by imposing relations on $P''$ asserting that the mappings $B\to P',C\to P'$ are functors. Namely, we require that the path $(f,g)$ is identified with the path $(g\circ f)$ when the latter composition is defined in $B$ or in $C$, and that the paths $(\mathrm{id}_b)$ are identified with $\mathrm{id}_p$ for every $p\in \mathrm{ob} P$ and $b$ such that $p=[b]$. Constructed in this way, $P'$ is the pushout of $B$ and $C$ over $\mathrm{ob} A$. Finally, we construct $P$ itself by imposing further relations asserting that the two functors $A\rightrightarrows P$ must be equal. Now a functor $P\to Q$ is naturally identified with a graph morphism $P'''\to U(Q)$, where $U$ denotes the underlying graph, which respects the compositions and identities of $B$ and $C$ and coequalizes the two maps from $A$; that is, a functor $P\to Q$ is uniquely identified with a pair of functors from $B$ and $C$ which act equally on $A$. Thus $P$ has the desired universal property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3705846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $A=\{(x,y)\in\mathbb{R}^2:(y\neq 0)\vee (x>0)\}$ is connected. I'm working on a multivariable integration problem and, at some point, I need to use the Poincaré Lemma. To verify the Lemma's hypotheses, I need to prove that the set $$A=\{(x,y)\in\mathbb{R}^2:(y\neq 0)\vee (x>0)\}$$ is connected. In fact, proving it's simply connected will work as well for my exercise; I don't know which way is easier, and in any case how to prove it cannot be represented as the union of two or more disjoint non-empty open subsets. I will thank any help or advice.
The space you presented is just $\mathbb{R}^2\setminus \{ (x,0) : x\leq0 \}$. That is, you are removing from the real plane the half-line starting at the origin and going through the negative part of the $x$ axis. You can easily show it is path connected, thus it is connected. It is also simply connected, but from the phrasing of your question I would think you are not entirely familiar with this concept, am I right? Simply connected means that the first homotopy group of the space is trivial. If you do not know the definition of this group: this is equivalent to the fact that every loop in your space can be contracted to its basepoint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3705945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
An Interesting Property Concerning a Sequence of Integers A non-decreasing sequence of positive integers $a_1,a_2,\dots a_n\ (n\geq 3)$ is good if for each $3\leq k\leq n$ there are $1\leq i\leq j<k$ such that $a_i+a_j=a_k$. Let $\ell,m$ be positive integers, and consider the set $[\ell]=\{1,2,\dots,\ell\}$. We say that $[\ell]$ is of type $P(m,1)$ if for any partition of $[\ell]$ into $m$ non-empty disjoint subsets $S_1,\dots,S_m$, there exists an $i\leq m$ such that one can choose, possibly with repetition, some elements in $S_i$ to form a good sequence. Otherwise $[\ell]$ is of type $P(m,0)$. Let $f(m)$ denote the smallest positive integer of type $P(m,1)$. My question is, what is $f(3)$? I managed to prove that $\bullet$ if $\ell$ is of type $P(m,0)$, then so are $1,2,\dots,\ell-1$ $\bullet$ if $\ell$ is of type $P(m,1)$, then so are $\ell+1,\ell+2,\dots$ $\bullet$ $f(1)=2$ $\bullet$ $f(2)=5$ Indeed, note that $[4]=\{1,4\}\cup \{2,3\}$. So $4$ is of type $P(2,0)$, and therefore so are $1,2,3$. Now assume that some $n\geq 5$ is of type $P(2,0)$ and that $[n]=S_1\cup S_2$, where $S_1,S_2$ are non-empty disjoint subsets of $[n]$. WLOG, $1\in S_1,2\in S_2$. (Note that if $1,2$ are in the same $S_i$, then there would be a good sequence $1,1,2$.) Let $r$ be the largest element in $S_1$. If $r=1$, then $2,4\in S_2$ but $2,2,4$ is a good sequence. So $r>1$. If $r<n$, then $r+1\leq n$ and $r+1\in S_2$. Thus, $r-1=(r+1)-2\not\in S_2$, meaning that $r-1\in S_1$. Now $1,r-1,r$ is a good sequence in $S_1$, a contradiction. Therefore, $r=n$. But then $n-1=r-1\not\in S_1$. So $n-1\in S_2$, and $n-3=(n-1)-2\in S_1$. Finally, $3=n-(n-3)\in S_2$, and $5=2+3\in S_1$. Then $4=5-1\in S_2$, creating a good sequence $2,2,4$. We conclude that any $n\geq 5$ is of type $P(2,1)$. 
$\bullet$ $12<f(3)$ (because $[12]=\{1,3,10,12\}\cup \{2,5,8,11\}\cup \{4,6,7,9\}$) $\bullet$ $f(m)\leq \left[\sum_{j=0}^m\frac{1}{j!}\right]m!\ \forall m\geq 1$ (which follows from a repeated use of pigeonhole principle) In particular, $13\leq f(3)\leq 16$ But I'm not able to see if $13,14,15$ are of type $P(3,1)$. So any help is appreciated. Thanks! I'm also wondering if the following holds. $$f(m)=\left[\sum_{j=0}^m\frac{1}{j!}\right]m!\ \forall m\geq 1$$ Equivalently, is it true that $$(m+1)f(m)+1\leq f(m+1)\ \forall m\geq 1$$
Partial solution: $f(3) \neq 13$ because $\{1,4,7,10,13\}, \{2,3,11,12\}, \{5,6,8,9\}$ Note the first part is $1 \pmod 3$, and we certainly cannot get a good sequence out of that. I wonder if that's a generally good approach...
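A quick computational remark: a good sequence can be formed from elements of a part $S$ (with repetition allowed) exactly when there are $a,b\in S$, not necessarily distinct, with $a+b\in S$, since the length-$3$ sequence $(a,b,a+b)$ with $a\le b$ is already good, and conversely any good sequence produces such a triple from its third term. So verifying that a partition witnesses type $P(m,0)$ amounts to checking that each part is sum-free, which is exactly the Schur-partition condition; if I recall the known Schur numbers correctly ($S(3)=13$), this would give $f(3)=14$. A small sketch (helper name is mine) checking both the $[12]$ partition from the question and the $[13]$ partition above:

```python
from itertools import combinations_with_replacement

def sum_free(part):
    """True if no a, b in part (repeats allowed) have a + b in part."""
    s = set(part)
    return all(a + b not in s for a, b in combinations_with_replacement(sorted(part), 2))

partition_12 = [{1, 3, 10, 12}, {2, 5, 8, 11}, {4, 6, 7, 9}]
partition_13 = [{1, 4, 7, 10, 13}, {2, 3, 11, 12}, {5, 6, 8, 9}]

print(all(sum_free(p) for p in partition_12))  # True: 12 is of type P(3,0)
print(all(sum_free(p) for p in partition_13))  # True: 13 is of type P(3,0)
```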
{ "language": "en", "url": "https://math.stackexchange.com/questions/3706029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to calculate average growth when it's negative? We have annual reports for a company's revenue and can calculate annual growth as $yg = {y_{i+1} \over y_i}$. And then we can calculate the average monthly growth as $mg = ({y_{i+1} \over y_i})^{1 \over 12}$. So for reports 2000-12 $1m and 2001-12 $2m the average monthly growth would be 1.06. But how do we calculate monthly growth when the revenue becomes negative? For reports 2000-12 revenue = $1m and 2001-12 revenue = $-1m? P.S. I need it for a simple prediction. For example, with 2000-12 $1m and 2001-12 $2m the revenue in 2002-02 could be predicted as $2 \times 1.06^2 = 2.25$
An example for a negative growth rate: $y_0=100, y_1=80$. The growth rate from $t=0$ to $t=1$ is $g_{01}=\frac{80}{100}-1=0.8-1=-0.2$. So you can use the formula for the growth rate no matter whether the growth is positive or negative: $$g_{t,t+1}=\frac{y_{t+1}}{y_t}-1$$ Btw, the growth factor $1+g_{01}$ is still positive: $1-0.2=0.8$. To apply the formula for the growth rate you need a meaningful zero point. That means that the values are ratio scaled. If $y_t$ can be negative as well, then a growth rate cannot be determined.
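To make the arithmetic concrete, here is a short Python sketch (function names are mine) of both the growth rate and the implied average monthly growth factor from the question. Note that for the $\$1$m to $-\$1$m case the ratio $y_{t+1}/y_t$ is negative, so no real 12th root exists and the monthly-growth model simply breaks down, matching the point about ratio scaling:

```python
def growth_rate(y0, y1):
    """Growth rate from period 0 to period 1: g = y1/y0 - 1."""
    return y1 / y0 - 1

def monthly_factor(y0, y1):
    """Average monthly growth factor implied by two annual figures."""
    ratio = y1 / y0
    if ratio <= 0:
        raise ValueError("ratio must be positive: no real 12th root exists")
    return ratio ** (1 / 12)

print(growth_rate(100, 80))    # -0.2 up to float rounding: a 20% decline
mf = monthly_factor(1.0, 2.0)  # about 1.0595, i.e. roughly 6% per month
print(2.0 * mf ** 2)           # about 2.245: predicted 2002-02 revenue
```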
{ "language": "en", "url": "https://math.stackexchange.com/questions/3706167", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Compute an $n\times n$ determinant with factorial and powers of $x$ Compute $$ D_{n}= \begin{vmatrix} 1 & 0 & 0 & 0 & 0 & \ldots & 1\\ 1 & 1! & 0 & 0 & 0 & \ldots & x\\ 1 & 2 & 2! & 0 & 0 & \ldots & x^{2}\\ 1 & 3 & 3\cdot2 & 3! & 0 & \ldots & x^{3}\\ \ldots & \ldots & \ldots & \ldots & \ldots & \ddots & \ldots\\ 1 & n & n\left( n-1\right) & n\left( n-1\right) \left( n-2\right) & n\left( n-1\right) \left( n-2\right) \left( n-3\right) & \ldots & x^{n} \end{vmatrix} $$ My attempt: I tried to make an expansion along the first row, but I didn't obtain anything; I also tried to compute some particular cases but I didn't see how to obtain a general formula such that to be capable to compute $D_{n}$. Any ideas?
Let $M_n$ be the matrix of interest (the argument of the determinant in your question). We claim that $$M_n = A_n B_n, ~ A_n = \begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ 1 & 1! & 0 & \cdots & 0 \\ 1 & 2 & 2! & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & n & n(n - 1) & \cdots & n! \end{bmatrix}, ~ B_n = \begin{bmatrix} I_n & \vec{v}_{n-1} \\ 0 & (x - 1)^n / n!\end{bmatrix}$$ where $\vec{v}_{n-1}^T = \begin{bmatrix} 1 & x - 1 & (x - 1)^2 / 2 & \cdots & (x - 1)^{n-1} / (n-1)!\end{bmatrix}$. The correctness of the first $n$ columns of the product are trivial to verify. To verify the correctness of the last column, note that $$(A_n B_n)_{i, ~n+1} = \sum_{j = 0}^{i - 1} \frac{(i - 1)!}{(i - j - 1)!} \frac{(x - 1)^j}{j!} = \sum_{j = 0}^{i - 1} {i - 1 \choose j} (x - 1)^j = x^{i - 1}$$ by the Binomial Theorem. (Note here that the indexing of the matrix starts with $1$, not zero.) As determinants are multiplicative, it follows that $$\det(M_n) = \det(A_n) \cdot \det(B_n) = \left[ \prod_{k = 0}^n k! \right] \cdot (x - 1)^n / n! = \left[ \prod_{k = 0}^{n-1} k! \right] \cdot (x - 1)^n$$ since $\det(A_n)$ is simply the product of its diagonal entries (it's lower triangular) and $\det(B_n) = (x - 1)^n / n!$ by Laplace expanding along the last row. It's reassuring to see that this calculation confirms saulspatz's computer simulations. $\square$
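The closed form $\det(M_n)=\big[\prod_{k=0}^{n-1}k!\big](x-1)^n$ is easy to sanity-check numerically. The sketch below (helper names are mine; `math.perm` needs Python 3.8+) builds the $(n+1)\times(n+1)$ matrix with rows $i=0,\dots,n$, entries given by the falling factorials $i!/(i-j)!$ and last column $x^i$, and compares an exact integer cofactor-expansion determinant with the formula:

```python
from math import perm, factorial

def build_matrix(n, x):
    """(n+1) x (n+1) matrix: falling factorials, last column x^i."""
    return [[x**i if j == n else (perm(i, j) if j <= i else 0)
             for j in range(n + 1)] for i in range(n + 1)]

def det(m):
    """Exact determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, entry in enumerate(m[0]):
        if entry:  # skip zero entries to save work
            minor = [row[:j] + row[j + 1:] for row in m[1:]]
            total += (-1) ** j * entry * det(minor)
    return total

def formula(n, x):
    prod = 1
    for k in range(n):
        prod *= factorial(k)
    return prod * (x - 1) ** n

for n in range(1, 6):
    for x in (-2, 0, 3, 5):
        assert det(build_matrix(n, x)) == formula(n, x)
print("formula verified for n = 1..5")
```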
{ "language": "en", "url": "https://math.stackexchange.com/questions/3706311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $A = [-5, 3)$ and $B = (1, \infty)$, is $3$ a member of $A \cap B$? If $A = [-5, 3)$ and $B = (1, \infty)$, what is $A \cap B$? Since $3$ is not a member of $A$, do we include it in the intersection?
We do not include it in the intersection because it is not a member of $A$, so it is not a member of both sets, so it is not in the intersection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3706447", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
For $G \sim G(n, 0.5)$, $I$ a $k$-set of vertices, and event $E_I = \{G[I] \cong K_k \text{ or } K_k^c \}$, how many events does $E_I$ depend on? The note in my lecture says that $E_I$ is independent of all events with disjoint edge-sets, and so it depends on at most ${k \choose 2}\left({{n-2}\choose {k-2}}-1\right)$ other events. I don't get this formula. Let $S$ be a set of $k$ vertices in $G$. I'd construct the events that $E_S$ depends on as follows: for a set $S'$ of $j$ vertices in $S$, with $j=2,...,k-1$, there are ${{n-k} \choose {k-j}}$ $(k-j)$-sets in $G \setminus S$ which, together with $S'$, form some $k$-sets whose distinct events $E_S$ depends on. It seems to me the formula only takes into account the case $j = 2$ (with that extra bit of "$-1$" that I don't get).
The formula is an overestimate. There are $\binom k2$ ways to pick two vertices in $I$. For each one of them, there are $\binom{n-2}{k-2}$ sets of size $k$ containing those two vertices and possibly other vertices in $I$. One of those sets of size $k$ is $I$ itself; if we don't count it, there are $\binom{n-2}{k-2} -1$ other sets. Multiplying, we get that formula. A set $J \ne I$ that shares $j>2$ vertices with $I$ will be counted multiple times; for each of the $\binom j2$ ways to pick two vertices in the overlap, $J$ will be one of the $\binom{n-2}{k-2}-1$ sets distinct from $I$ that shares those two vertices with $I$, so it will be counted $\binom j2$ times in total. In your formula, on the other hand, we'd take $\binom k2 \binom{n-k}{k-2}$ to find the number of sets of size $k$ with overlap exactly $2$. Note the $n-k$ in the binomial coefficient rather than the $n-2$; that's how you make sure you don't pick any more vertices of $I$. If we did that, then you are right that we would have to consider overlaps of size $3, 4, \dots, k-1$ as well - but then the resulting sum $$ \sum_{j=2}^{k-1} \binom kj \binom{n-k}{k-j} $$ would be exact and not an overestimate. It's not worth doing this, because the $j=2$ term dominates for small $k$, and so the overestimate is a simpler expression that's still very close to the truth. The only thing I don't understand is this: if we're taking an overestimate anyway, why include the $-1$ when $\binom k2 \binom{n-2}{k-2}$ is simpler to work with?
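The three counts (brute force, the exact sum, and the lecture's overestimate) are easy to compare for small parameters; a sketch, with my own variable names, taking $n=9$, $k=4$, and $I=\{1,2,3,4\}$:

```python
from itertools import combinations
from math import comb

n, k = 9, 4
universe = range(1, n + 1)
I = set(range(1, k + 1))

# Brute force: k-sets J != I sharing at least 2 vertices with I.
brute = sum(1 for J in combinations(universe, k)
            if set(J) != I and len(I & set(J)) >= 2)

# Exact count: sum over overlap sizes j = 2, ..., k-1.
exact = sum(comb(k, j) * comb(n - k, k - j) for j in range(2, k))

# The lecture's bound, which counts a set with overlap j a total of C(j,2) times.
overestimate = comb(k, 2) * (comb(n - 2, k - 2) - 1)

print(brute, exact, overestimate)  # 80 80 120
assert brute == exact <= overestimate
```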
{ "language": "en", "url": "https://math.stackexchange.com/questions/3706714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Countability of the set $(0,1)$ I am trying to prove that the set $(0,1)$ is uncountable from "A First Course in Analysis by Yau". I have a question about a particular step. In the text, the result is proved by contradiction. It is supposed that the set $(0,1)$ is countable, which it is then written that there must exist a bijection $f:\mathbb{N}\rightarrow (0,1)$ (which is ultimately contradicted). My question is, why does the bijective map have to exist? If we suppose that $(0,1)$ is countable, shouldn't there exist an injective map $g:(0,1)\rightarrow\mathbb{N}$?
There is a 1-1 correspondence between $A$ and $B$ if and only if (a) there is an injection from $A$ to $B$ and, at the same time, (b) there is an injection from $B$ to $A$; this is the Cantor-Bernstein theorem. So, if you can demonstrate such an injection from $(0,1)$ to $\mathbb N$, then yes, you have demonstrated that $(0,1)$ is countable. But more: since there is also an obvious injection $\mathbb N \to (0,1)$, Cantor-Bernstein yields a bijection, so you would have proved it to be countably infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3706918", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Calculate $\mathbb E(Y^2\mid X)$ Let $X,Y$ be random variables with a jointly normal (bivariate normal) distribution such that $$\mathbb EX=\mathbb EY=0, \operatorname{Var} X=1, \operatorname{Var} Y=5, \operatorname{Cov}(X,Y)=-2$$ Calculate $\mathbb E(Y^2\mid X)$ From this task I can also calculate * *$\mathbb E(XY)=\operatorname{Cov}(X,Y)+\mathbb EX \cdot \mathbb EY=-2$ *$\mathbb EX^2 =\operatorname{Var}X+(\mathbb EX)^2 =1$ *$\mathbb EY^2=5$ However, I know that $$\mathbb E(Y^2\mid X)=\int_{\Omega} Y^2 d \mathbb P_X$$ so this information is unhelpful and I don't know how to calculate $\mathbb E(Y^2\mid X)$.
$Y|X\sim N(\mu_Y+\rho\frac{\sigma_Y}{\sigma_X}(x-\mu_X);\sigma_Y^2(1-\rho^2))$ And $\mathbb{E}[Y^2|X]=\mathbb{V}[Y|X]+\mathbb{E}^2[Y|X]$
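Plugging in the given numbers: $\rho = \operatorname{Cov}(X,Y)/(\sigma_X\sigma_Y) = -2/\sqrt5$, so $\mathbb E[Y\mid X] = -2X$, $\operatorname{Var}(Y\mid X) = 5(1-4/5) = 1$, and hence $\mathbb E[Y^2\mid X] = 1 + 4X^2$. A quick Monte Carlo sanity check (a sketch, using the representation $Y = -2X + Z$ with $Z\sim N(0,1)$ independent of $X$, which reproduces the required variances and covariance):

```python
import random

random.seed(0)
N = 400_000
x0 = 1.0                      # condition on X near x0
total, count = 0.0, 0
for _ in range(N):
    x = random.gauss(0, 1)
    z = random.gauss(0, 1)
    y = -2 * x + z            # Var(Y) = 4 + 1 = 5, Cov(X, Y) = -2
    if abs(x - x0) < 0.05:    # crude conditioning on X close to x0
        total += y * y
        count += 1

estimate = total / count
print(estimate)               # close to 1 + 4 * x0**2 = 5
```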
{ "language": "en", "url": "https://math.stackexchange.com/questions/3707214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
How do you find the coefficient of $x$ in $(x + 1)^2$? I want to learn how can I find out the coefficient of the variable $x$ in the expression $(x + 1)^2$. It is a case of a perfect square expansion.
This is the distributive property of multiplication: $(a+b)c=ac+bc$. What is $(x+a)^2$? Well, $$(x+a)^2=(x+a)(x+a)$$ Let's write one of the factors $(x+a)=c$, and distribute: $$(x+a)^2=(x+a)c=xc+ac$$ Replacing $c$ back with $x+a$ we get $$(x+a)^2=(x+a)(x+a)=x(x+a)+a(x+a)$$ And we distribute again $$(x+a)^2=x(x+a)+a(x+a)=x^2+xa+ax+a^2$$ At the end we get $$(x+a)^2=x^2+2ax+a^2$$
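For the original question, taking $a=1$ gives $(x+1)^2 = x^2 + 2x + 1$, so the coefficient of $x$ is $2$. The same expansion can be done mechanically by multiplying coefficient lists (a convolution); a small sketch, with my own helper name:

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists, lowest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

x_plus_1 = [1, 1]                       # 1 + x
square = poly_mul(x_plus_1, x_plus_1)
print(square)                           # [1, 2, 1], i.e. 1 + 2x + x^2
print(square[1])                        # 2, the coefficient of x
```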
{ "language": "en", "url": "https://math.stackexchange.com/questions/3707362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Using three points to create a quadratic equation produces an equation, which doesn't seem to pass through the original points I've got three (x,y) points from which I am trying to create a quadratic equation, which are: (2325, 5500) (1880, 3700) (1400, 2360) Using those three points, I create three simultaneous equations: 5500 = 5405625a + 2325b + c 3700 = 3534400a + 1880b + c 2360 = 1960000a + 1400b + c From there, this is the quadratic equation I produce: y = 0.0014x^2 - 1.6524x + 2017.7 This seems to be the right equation. In addition, when I generate the equation through Excel and online tools, the same one results. However, when I input the x-values from my original three points, they're all out quite significantly. Compared to the original points, these are the points (using the same x-values) that the equation produces: (2325, 5744) (1880, 3859) (1400, 2448) There must be something I have done wrong (and most likely something quite basic) but shouldn't the equation I create from the three points give the correct y-values when I use the same x-values? Can someone please point out to me what is going wrong, and how I can fix it? Thanks!
You're simply using too few significant digits in the most important part, the coefficient $a$: I have it as $0.0013548942$ to 8sf on an online solver. Note that the error to $0.0014$ is a full $3.3\%$, which is enough to see the deviations in your crosscheck.
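To see this concretely, one can fit the parabola exactly with rational arithmetic (Lagrange interpolation through the three points; helper names are mine) and compare with the rounded coefficient:

```python
from fractions import Fraction

pts = [(1400, 2360), (1880, 3700), (2325, 5500)]

def interp_coeffs(pts):
    """Exact quadratic a*x^2 + b*x + c through three points."""
    a = b = c = Fraction(0)
    for i, (xi, yi) in enumerate(pts):
        others = [x for j, (x, _) in enumerate(pts) if j != i]
        denom = (xi - others[0]) * (xi - others[1])
        w = Fraction(yi, denom)          # weight of the i-th Lagrange basis poly
        a += w
        b -= w * (others[0] + others[1])
        c += w * others[0] * others[1]
    return a, b, c

a, b, c = interp_coeffs(pts)
print(float(a))   # 0.001354894...; rounding this to 0.0014 causes the drift
for x, y in pts:
    assert a * x * x + b * x + c == y   # exact fit at all three points
```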
{ "language": "en", "url": "https://math.stackexchange.com/questions/3707511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
equivalent condition for a quadratic number field to have a solution of the equation $x^2+y^2=-1$ My question is related to the realization problem of the quaternion group $Q_{8}$. Let $d$ be a positive square-free integer. It is well known that the unique irreducible 2-dimensional representation of $Q_{8}$ realizes over $K=\mathbb{Q}\left(\sqrt{-d}\right)$ if and only if there exist $x,y\in K$ such that $x^2+y^2=-1$. Is there an equivalent condition on $d$ for $K$ to have a solution of the equation $x^2+y^2=-1$?
When $d\equiv-1\pmod 8$, $K$ embeds into the $2$-adic numbers $\Bbb Q_2$, and in $\Bbb Q_2$ the equation $x^2+y^2+1=0$ is insoluble (this boils down to congruences modulo $8$). Suppose $d\not\equiv-1\pmod 8$. Then the equation $x^2+y^2=-1$ is soluble in $K$ iff the quadratic form $X^2+Y^2+Z^2$ is isotropic over $K$. By the Hasse-Minkowski theorem for number fields, this is the case iff $X^2+Y^2+Z^2$ is isotropic over all completions of $K$, that is iff $x^2+y^2=-1$ is soluble over all completions of $K$. Certainly $x^2+y^2=-1$ is soluble over $\Bbb C$ and over $\Bbb Q_p$ for all odd primes $p$. It suffices to prove it is soluble over all $2$-adic completions of $K$. But the only $2$-adic completion of $K$ is $L=\Bbb Q_2(\sqrt{-d})$ which is a quadratic extension of $\Bbb Q_2$. There are only seven different quadratic extensions of $\Bbb Q_2$, and one can check individually that $x^2+y^2=-1$ is soluble in each of them.
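The congruence obstruction mentioned for $\Bbb Q_2$ is easy to exhibit at the level of $2$-adic integers: a sum of two squares is never $\equiv -1 \pmod 8$. (This residue check is only the first step of the full argument, since elements of $\Bbb Q_2$ need not be integral; a scaling/valuation argument reduces to this case.) A quick sketch:

```python
squares_mod_8 = {a * a % 8 for a in range(8)}
print(sorted(squares_mod_8))     # [0, 1, 4]

sums_mod_8 = {(s + t) % 8 for s in squares_mod_8 for t in squares_mod_8}
print(sorted(sums_mod_8))        # [0, 1, 2, 4, 5]
print(-1 % 8 in sums_mod_8)      # False: 7 is never a sum of two squares mod 8
```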
{ "language": "en", "url": "https://math.stackexchange.com/questions/3707718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find the minimum of $\sum_{i=1}^{n} d_{i}{x_{i}}^2$ where $d_{i}>0$ $\forall i=1, ... , n $, when $x$ is solution of $Ax=b$ For $A\in\mathcal M_{m\times n}(\Bbb R)$, let $x$ be solution of $Ax=b$ when $A$ has full row rank. Find the minimum of $\sum\limits_{i=1}^n d_{i}x_i^2$ where $d_i>0$ $\forall i\in\{1,\ldots,n\}$. I believe the question is related to least norm problem since with full row rank, one can find the solution of least norm problem as $x^*=A^{T}(AA^T)^{-1}b$, so I tried to divide $x$ into two components, by $x=x^*+x_N$, where $x_N$ is the component of $x$ in null space of $A$. However, it didn't help me out to find the solution for the case when $d_i>0$. Could someone enlighten me on how to solve this problem?
Consider the Lagrangian $L(x,\lambda)=x^TDx - \lambda^T (Ax-b)$, where $D=\operatorname{diag}(d_1,\ldots,d_n)$; the problem is convex. Differentiating with respect to $x$, we have $$2Dx-A^T\lambda =0$$ $$x=\frac12D^{-1}A^T\lambda$$ Along with $Ax=b$, we have $$\frac12 AD^{-1}A^T\lambda = b$$ Now, you can solve for $\lambda$ (note that $AD^{-1}A^T$ is invertible since $A$ has full row rank) and then solve for $x$. Alternatively, express $x=x^*+x_N=x^*+B\mu$, where $B \in \mathbb{R}^{n \times k}$ is a matrix whose columns form a basis of the nullspace of $A$. Then we want to minimize $$(x^*+B\mu )^TD(x^*+B\mu)$$ Hence, we just have to minimize an unconstrained quadratic in $\mu$.
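Here is a small worked instance of the Lagrangian route (hand-sized so everything stays exact; names and the concrete data are mine): minimize $x_1^2 + 2x_2^2 + 4x_3^2$ subject to $x_1+x_2+x_3 = 1$. With $A = (1,1,1)$ the multiplier equation $\tfrac12 A D^{-1} A^T \lambda = b$ is scalar:

```python
from fractions import Fraction as F

d = [F(1), F(2), F(4)]       # diagonal of D
b = F(1)                     # single constraint: sum(x) = 1

# Solve (1/2) * A D^{-1} A^T * lam = b with A = (1, 1, 1).
lam = b / (F(1, 2) * sum(1 / di for di in d))   # lam = 8/7
x = [lam / (2 * di) for di in d]                # x = (1/2) D^{-1} A^T lam

print(x)                     # x = (4/7, 2/7, 1/7)
print(sum(x))                # 1: the constraint holds
obj = sum(di * xi * xi for di, xi in zip(d, x))
print(obj)                   # 4/7, the minimum value

# Spot-check optimality: perturbing along null-space directions of A
# keeps feasibility and strictly increases the objective.
for nvec in ([1, -1, 0], [0, 1, -1]):
    xp = [xi + F(1, 10) * ni for xi, ni in zip(x, nvec)]
    assert sum(di * xi * xi for di, xi in zip(d, xp)) > obj
```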
{ "language": "en", "url": "https://math.stackexchange.com/questions/3707889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How can we express that a partial order is more complete than another one? Suppose we have two partial orders $R$ and $I$ on $\mathbb{C}$ (complex numbers) such that: * *$R$ is a total order on $\mathbb{R}$ (real numbers). *$I$ is also a total order on $\mathbb{R}$ and, additionally, on $\mathbb{I}$ (imaginary numbers). Is there a technical term for expressing that $I$ is more complete than $R$ in the sense that it has more pairs in its domain? Would there be a different terminology for the following cases? * *The case where the hierarchy defined by $I$ for $\mathbb{R}$ is the same as $R$'s. *The case where the hierarchy defined by $I$ for $\mathbb{R}$ is different from $R$'s.
From Partially ordered set - Wikipedia: A partial order $\leq^*$ on a set $X$ is an extension of another partial order $\leq$ on $X$ provided that for all elements $x$ and $y$ of $X,$ whenever $x \leq y,$ it is also the case that $x \leq^* y.$ A linear extension is an extension that is also a linear (i.e., total) order. Apart from Wikipedia, it seems to be surprisingly hard to find a reference for this general use of the term extension, except in the special case of a linear extension. Even Wikipedia's definition occurs under the latter heading. Order theory aside, however, the term extension is used fairly widely in mathematics, with a meaning that is at least vaguely similar to the one given in Wikipedia. I can't imagine anyone objecting to it. We'll see! Fortunately, in the case you're interested in (at least, in the first of the two cases you listed), $I$ is a linear extension of $R.$ So there is a well-established technical term that says exactly what you want. It's not clear to me what would be meant by $I$ having (in your words) more pairs than $R$ when it is not a superset of $R,$ as seems to be implied by this sentence in the question: The case where the hierarchy defined by $I$ for $\mathbb{R}$ is different [from] $R$'s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3708092", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do you compute the value of the right derivative of $f(x)= \sin (x)^{\cos (x)} +\cos (x)^{\sin (x)}$ when $x=0$. How do you compute the value of the right derivative of $f(x)= \sin (x)^{\cos (x)} +\cos (x)^{\sin (x)}$ when $x=0$. I'm trying to learn calculus so some explanations wouldn't be so bad. I got stuck computing the limit of $\sin (x)^{\cos (x)} \cdot \big( \frac{\cos ^2 (x)}{\sin (x)} - \sin (x) \cdot \ln (\sin (x))\big)$ as $x \rightarrow 0$. Sorry for the grammar mistakes but I'm not English.
You should learn logarithmic differentiation: $$h(x)={(\sin{x})}^{\cos{x}}$$ $$\ln{(h(x))}=\cos{x} \cdot \ln{\sin{x}}$$ $$\frac{h'(x)}{h(x)}=-\sin{x} \cdot \ln{\sin{x}}+\frac{\cos^2{x}}{\sin{x}}$$ $$h'(x)={(\sin{x})}^{\cos{x}} \left(-\sin{x} \cdot \ln{\sin{x}}+\frac{\cos^2{x}}{\sin{x}}\right)$$ Do this for $g(x)={(\cos{x})}^{\sin{x}}$: $$g'(x)= {(\cos{x})}^{\sin{x}} \left( \cos{x} \cdot \ln{\cos{x}} -\frac{ \sin^2{x}}{\cos{x}} \right)$$ To sum it up, $$f(x)=g(x)+h(x) \implies f'(x)=g'(x)+h'(x)$$ $$f'(x)={(\sin{x})}^{\cos{x}} \left(-\sin{x} \cdot \ln{\sin{x}}+\frac{\cos^2{x}}{\sin{x}}\right)+{(\cos{x})}^{\sin{x}} \left( \cos{x} \cdot \ln{\cos{x}} -\frac{ \sin^2{x}}{\cos{x}} \right)$$ Using the Maclaurin series approximations of $\sin{x}\approx x$ and $\cos{x} \approx 1-\frac{x^2}{2}$ near $x=0$ (each term written as $0$ below tends to $0$, e.g. $\sin{x}\cdot\ln{\sin{x}}\to 0$ as $x\to 0^+$): $$\lim_{x \to 0^+} f'(x)=\lim_{x \to 0^+}\left[x \left(0+ \frac{1-x^2+\frac{x^4}{4}}{x}\right)+1\cdot\left(0-0\right)\right]=\boxed{1}$$
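A floating-point check of the right-hand limit (only a consistency check, not a proof): since $f(0^+)=0^1+1^0=1$, the right derivative is $\lim_{h\to 0^+}\frac{f(h)-1}{h}$, and the difference quotients should approach $1$:

```python
from math import sin, cos

def f(x):
    return sin(x) ** cos(x) + cos(x) ** sin(x)

# f(0+) = 0^1 + 1^0 = 1, so the right derivative is lim (f(h) - 1) / h.
for h in (1e-2, 1e-3, 1e-4):
    print(h, (f(h) - 1) / h)   # approaches 1 as h shrinks
```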
{ "language": "en", "url": "https://math.stackexchange.com/questions/3708275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Use the Chinese Remainder Theorem to determine the value of $x$. I'm trying to solve the following modular arithmetic question using the Chinese Remainder Theorem, using this link. (We learned a different method in our class, but I found this easier to grasp). $$x \equiv 1 (\text{mod} \ 5)$$ $$x \equiv 2 (\text{mod} \ 7)$$ $$x \equiv 3 (\text{mod} \ 9)$$ $$x \equiv 4 (\text{mod} \ 11)$$ I then represented $x$ as a sum of $4$ boxes, such that the first term is "related" to $\text{mod} \ 5$ (i.e. the $1^{st}$ term will not be made $0$ due to the $\text{mod} \ 5$), the second term is related to $\text{mod} \ 7$ and so on. Here's what I mean by "related": If we only consider $\text{mod} \ 5$, the value of box $1$ is $693$, the value of box $2$ is $495$, then $693 \ \text{mod} \ 5 = 3$ but $495 \ \text{mod} \ 5 = 0$. Likewise, if we only consider $\text{mod} \ 7$, then the value of box $1$ is $693 \ \text{mod} \ 7 = 0$ but $495 \ \text{mod} \ 7=5$. And so on... After doing all that, I have $$x = (7 \times 9 \times 11) + (5 \times 9 \times 11) + (5 \times 7 \times 11) + (5 \times 7 \times 9)$$ The next step is applying the $\text{mod} \ 5$ to $x$: $$\begin{align} x \ \text{mod} \ 5 &\equiv 693 \ \text{mod} \ 5 + 495 \ \text{mod} \ 5 + 385 \ \text{mod} \ 5 + 315 \ \text{mod} \ 5 \\ &\equiv 693 \ \text{mod} \ 5 + 0 + 0 + 0 \\ &\equiv 693 \ \text{mod} \ 5 \\ &\equiv 3 \ (\text{mod} \ 5) \end{align}$$ This is where I get stuck. The video doesn't explain how to deal with such a scenario. PS - If there is a more "intuitive" or more efficient version of the Chinese Remainder Theorem, I'd be grateful if you could share it. PPS - Sorry if the question is a bit awkwardly formulated. As you can guess this is my first time doing this.
There should be $x = (7 \times 9 \times 11)\cdot(7 \times 9 \times 11)^{-1}_5\cdot 1 $ ${}+ (5 \times 9 \times 11)\cdot(5 \times 9 \times 11)^{-1}_7\cdot 2 $ ${}+ (5 \times 7 \times 11)\cdot(5 \times 7 \times 11)^{-1}_9\cdot 3 $ ${}+ (5 \times 7 \times 9)\cdot (5 \times 7 \times 9)^{-1}_{11}\cdot 4$ for this approach.
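Here $(\cdot)^{-1}_m$ denotes the inverse modulo $m$, which is what the question's construction was missing. In code (Python 3.8+, where `pow(a, -1, m)` computes modular inverses and `math.prod` exists):

```python
from math import prod

residues = [1, 2, 3, 4]
moduli = [5, 7, 9, 11]        # pairwise coprime
N = prod(moduli)              # 3465

# x = sum over i of r_i * (N/m_i) * ((N/m_i)^{-1} mod m_i), reduced mod N
x = sum(r * (N // m) * pow(N // m, -1, m)
        for r, m in zip(residues, moduli)) % N
print(x)                      # 1731
for r, m in zip(residues, moduli):
    assert x % m == r         # all four congruences hold
```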
{ "language": "en", "url": "https://math.stackexchange.com/questions/3708408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Deriving the equation of a plane $Ax+By+Cz=D$. Defining a plane as the span of two linearly independent vectors, I've been trying to derive the equation $$Ax+By+Cz=D$$ without much success. The equation seems to indictate that a vector $$\vec{v}=\begin{bmatrix} x \\ y \\ z\end{bmatrix}$$ is in the plane if and only if $Ax+By+Cz=D$. I was wondering if anyone could at least point me in the right direction as to how to prove the two definitions are equivalent.
The span of vectors contains the origin, but in general $$A\,0+B\,0+C\,0\ne D.$$ A plane can be defined as the affine set $$\vec p=\lambda\vec a+\mu\vec b+\vec c.$$ We can eliminate $\lambda,\mu$ by forming the dot product with $\vec a\times\vec b$, $$\vec a\times\vec b\cdot\vec p=\vec a\times\vec b\cdot\vec c,$$ which is of the form $$Ax+By+Cz=D.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3708689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
Does $\left|A\right|=\left|\mathbb{N}\right|$ and $\left|\mathbb{N}\right|=\left|\mathbb{Z}^+\right|$ imply $\left|\mathbb{Z}^+\right|=\left|A\right|$ Suppose $A$ is a countably infinite set. Is it true that if $\left|A\right|=\left|\mathbb{N}\right|$ and $\left|\mathbb{N}\right|=\left|\mathbb{Z}^+\right|$, then $\left|\mathbb{Z}^+\right|=\left|A\right|$? It can be show by the Cantor-Bernstein theorem that $|\mathbb{N}|=|\mathbb{Z}^+|$, as \begin{align} f&:\mathbb{N}\hookrightarrow\mathbb{Z}^+, \ \text{as} \ f(x)=x+1\implies \left|\mathbb{N}\right|\leq\left|\mathbb{Z}^+\right| \\ g&:\mathbb{Z}^+\hookrightarrow\mathbb{N}, \ \text{as} \ g(x)=x-1\implies \left|\mathbb{Z}^+\right|\leq\left|\mathbb{N}\right|. \end{align} If we know $\left|A\right|=\left|\mathbb{N}\right|$ and $\left|\mathbb{N}\right|=\left|\mathbb{Z}^+\right|$, can we deduce that $\left|\mathbb{Z}^+\right|=\left|A\right|$?
Yes: generally, if you know $\lvert A\rvert=\lvert B\rvert$ and $\lvert B\rvert=\lvert C\rvert$, you can conclude $\lvert A\rvert=\lvert C\rvert$ because if $f:A\to B$ is a bijection and $g:B\to C$ is a bijection, then $g\circ f:A\to C$ is a bijection.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3708973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Cauchy problem for PDE Cauchy problem $$y z_x-xz_y=0$$ and $x_0(s)=\cos (s), y_0(s)=\sin (s), z_0(s)=1, s>0$. I use Lagrange's method $$\frac{dx}{y}=\frac{dy}{-x}=\frac{dz}{0}$$ From the 1st and 2nd, $x^2+y^2=c_1$, and from the last relation, $z=c_2$. So the solution is of the form $z = f(x^2+y^2)$. By the initial condition $$1=f(\cos^2 s+\sin^2 s) \implies 1 = f(1)$$ So what can I conclude from this about the solution to the problem? Please help.
Thank you very much, I got your point. One more question: is there a condition which quickly tells whether a Cauchy problem has a unique solution, no solution, or infinitely many solutions, without solving the Cauchy problem?
{ "language": "en", "url": "https://math.stackexchange.com/questions/3709106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Weak-* convergence in $L^{\infty}$ implies weak convergence in $L^p$ on bounded set In a lecture I found the following result: "We remark that when $\Omega$ is bounded the weak-* convergence of $u_{n}$ in $L^{\infty}(\Omega)$ to some $u \in L^{\infty}(\Omega)$ implies weak convergence of $u_{n}$ to $u$ in any $L^p$, $1 \leq p < \infty$." Does anyone know a book or some other source where this result is proved?
I don't know of a reference but that might be because the result isn't so hard to see. Since $\Omega$ is bounded, we have that $L^q(\Omega) \subseteq L^{1}(\Omega)$ for every $1 \leq q \leq \infty$. Now weak-$*$ convergence in $L^\infty(\Omega)$ means that for every $v \in L^1(\Omega)$, $$\int u_n v dx \to \int u v dx$$ as $n \to \infty$. To prove the weak convergence in $L^p(\Omega)$ we need to prove that if $v \in L^q(\Omega)$ where $p^{-1} + q^{-1} = 1$, then $\int u_n v \to \int u v$. But since $L^q(\Omega) \subseteq L^1(\Omega)$, this is immediate by the weak-$*$ convergence in $L^\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3709379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cone over $X$, equivalence relation I have a question about the definition of the cone over $X$. If $X$ is a space, define an equivalence relation on $X\times [0,1]$ by $(x,t)\sim (x',t')$ if $t=t'=1$. Denote the equivalence class of $(x,t)$ by $[x,t]$. The cone over $X$, denoted by $CX$, is the quotient space $X\times [0,1]/\sim$. I actually do not get this definition of the equivalence relation.... Sure, two points $(x,t), (x',t')$ are equivalent if $t=t'=1$. But which points are equivalent to (for example) $(x,\tfrac12)$? The relation does not tell anything about the cases when $t\neq 1$ or $t\neq t'$, which feels incomplete. But I think I am making a horrible mistake here. How does this relation include every pair $(x,t)\in X\times [0,1]$, when the relation is only defined for $t=1$? I am currently studying "Introduction to algebraic topology" by Joseph J. Rotman. An exercise goes as follows: For fixed $t$ with $0\leq t<1$, prove that $x\mapsto [x,t]$ defines a homeomorphism from a space $X$ to a subspace of $CX$. Which revealed my misunderstanding. So I am not understanding which equivalence classes there are. For every point $(x,t)$ with $t\neq 1$, the equivalence class should just contain this one point. Can you elaborate more? Thanks in advance.
The short answer to your question "which points are equivalent to $(x,1/2)$?": Only $(x,1/2)$ itself. The cone should really look like the geometric cone you know. For a generic example, start with a cylinder and identify all the points of the top face to a single point. This is the motivation for the name cone for $CX$. Up to pathological examples, the other examples look similar. You start with a topological space $X$, make a cylinder with $X$ as a face, so we get $X \times [0,1]$. Then, we identify the points of one face to a single point.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3709567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Separated morphisms are stable under base change Suppose that the map $f$ in the following diagram is a separated morphism (i.e. $\Delta_{X/S}:X\rightarrow X\times_{S}X$ is a closed immersion). I want to prove that $p_{2}$ is also a separated morphism. $$\require{AMScd}$$ \begin{CD} X\times_{S}Y @>{p_{1}}>> X\\ @VV{p_{2}}V @VV{f}V\\ Y @>{h}>> S \end{CD} To prove that $p_{2}$ is also separated we have to show that the diagonal morphism $\Delta_{X\times_{S}Y/Y}: X\times_{S}Y\rightarrow (X\times_{S}Y)\times_{Y}(X\times_{S}Y)$ is a closed immersion. My strategy was to construct a cartesian diagram containing the $\Delta_{X/S}$ and $\Delta_{X\times_{S}Y/X}$ and use the fact that closed immersions are stable under base change, i.e. if we have a cartesian diagram $$\require{AMScd}$$ \begin{CD} Z @>>> Y\\ @VVV @VVV\\ X @>>> S \end{CD} such that $X\rightarrow S$ is a closed immersion, then also $Z\rightarrow Y$ is a closed immersion. Unfortunately I couldn't find such a cartesian diagram.
I assume that the right vertical arrow in your first diagram is $f: X \to S$, and that $p_{2}: X \times_S Y \to Y$ is the pullback of $f$ along the horizontal arrow $Y \to S$ (you also called it $f$, but in your first sentence you reserved $f$ for $X \to S$). Let us call the horizontal arrow $Y \to S$ $h$. Clearly the diagram below is a pullback, because $X \times_{ X \times_S X} (X\times_{S}Y \times_Y X\times_{S}Y) = X \times_{ X \times_S X} X \times_S X \times_S Y=X\times_{S}Y$ (use the universal property of the fiber product). \begin{CD} X\times_{S}Y @>{p_{1}}>> X\\ @VV\Delta_{}V @VV\Delta_{X/S}V\\ X\times_{S}Y \times_Y X\times_{S}Y @>{pr \times_h pr}>> X \times_S X \end{CD} Now closed immersions are preserved under base change, and you assumed $\Delta_{X/S}:X\rightarrow X\times_{S}X$ to be a closed immersion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3709708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Don't understand how to solve line integral dot product I am trying to solve the integral: $$\int_{C} \mathbf F \cdot d\mathbf r$$ between $x=0,y=0,z=0$ and $x=0,y=1,z=1$ where $\textbf {F} = (0,y,1-y^2-z)$ and $C$ is $z=2y-y^2$. I have the solution; however, I do not understand why the "$-z$" is replaced by $C$. I know that it is easier to solve this way but wouldn't this change $F$? I am confused and perhaps what I am saying doesn't even make sense. Could someone please walk me through how to solve such a question? Solution: $$\frac{dr}{dy}=(0,1,2-2y)$$ $$\int_{0}^{1}(0,y,1-y^2-(2y-y^2)) \cdot (0,1,(2-2y))dy$$ $$=\int_{0}^{1}(2-5y+4y^2)dy=\frac{5}{6}$$
When we're solving line integrals, it is often useful to use parametrizations even if the parametrizations are obvious, as it is in your case. Your curve $C$ is given by $z = 2y-y^{2}$, so if $\gamma= \gamma(t)$ denotes your parametrized curve, you should have: $$\gamma(t) = \begin{cases} \displaystyle x = 0 \\ \displaystyle y = t \\ \displaystyle z = 2t-t^{2} \end{cases}$$ where I simply used the change of variables $y=t$, which is trivial but gives a nicer way to look at the problem. Here $0\le t \le 1$. Now, as you know: $$\int_{C}\vec{F}\cdot d\vec{r} = \int_{0}^{1}\vec{F}(\gamma(t))\cdot \gamma'(t)dt$$ But now $\gamma'(t) = (0,1,2-2t)$ and $\vec{F}(\gamma(t)) = (0,t, 1-t^{2}-(2t-t^{2}))$. Thus: $$\int_{C}\vec{F}\cdot d\vec{r} = \int_{0}^{1}(0,t,1-t^{2}-(2t-t^{2}))\cdot (0,1,2-2t)\,dt$$ which is the integral in your solution. In other words, your solution was simply briefer and skipped the parametrization steps. Because the parametrization was fairly obvious (it basically just changes variables), it could be avoided if you wanted. But this step-by-step solution may make things clearer.
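As a sanity check on the arithmetic, the same line integral can be approximated numerically. The sketch below (plain Python; the vector field and parametrization are taken from the solution above) uses a midpoint Riemann sum and lands on $5/6$.

```python
def F(x, y, z):
    # the vector field from the problem
    return (0.0, y, 1.0 - y**2 - z)

def gamma(t):
    # parametrization of C: x = 0, y = t, z = 2t - t^2
    return (0.0, t, 2.0 * t - t * t)

def gamma_prime(t):
    return (0.0, 1.0, 2.0 - 2.0 * t)

def line_integral(n=100000):
    # midpoint Riemann sum of F(gamma(t)) . gamma'(t) over [0, 1]
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        Fv = F(*gamma(t))
        gp = gamma_prime(t)
        total += (Fv[0] * gp[0] + Fv[1] * gp[1] + Fv[2] * gp[2]) * h
    return total

print(line_integral())  # ~0.833333, i.e. 5/6
```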
{ "language": "en", "url": "https://math.stackexchange.com/questions/3709848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Explanation on the limitation of the proof of the Law of Cosines Regarding this method of proving the Law of Cosine: It is noted on that page that This proof isn't perfect. We should have been worried about angles. This can be avoided by using directed angles. I don't understand why we need to worry about the angles, as throughout high school, I have always been treating both the angles and the lengths of the sides as scalar quantities.
When $\pi/2 < \alpha < \pi$, we have an obtuse angle at $A$ and we must consider the geometry of the figure accordingly. The altitude $h$ from $B$ to $AC$ is no longer "inside" the triangle. It extends to some point, say $B'$, on the line containing $AC$ such that $|B'C| > |AC|$; in other words, the signed distance $r$ would need to be negative and $b-r > b$. That said, the relationship $$r = c \cos \alpha$$ does take this into account, since when $\pi/2 < \alpha < \pi$, $-1 < \cos \alpha < 0$, consequently $-c < r < 0$. You can also see this in the final identity $$a^2 = b^2 + c^2 - 2bc \cos \alpha,$$ since again, when $\cos \alpha < 0$, the RHS exceeds $b^2 + c^2$, which is what we would have if $\alpha$ were a right angle. If we think of $b$ and $c$ as fixed and $\alpha$ allowed to vary continuously from $0$ to $\pi$, you would find that the length of $a$ increases from $0$ to $b+c$.
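To see the obtuse case concretely, here is a small numeric check (plain Python; the triangle coordinates are my own choice) with a $135^\circ$ angle at $A$: $\cos\alpha$ comes out negative, and $a^2 = b^2 + c^2 - 2bc\cos\alpha$ still holds exactly.

```python
import math

# A triangle with an obtuse angle at A (135 degrees between AB and AC).
A, B, C = (0.0, 0.0), (-1.0, 1.0), (1.0, 0.0)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b, c = dist(B, C), dist(C, A), dist(A, B)   # sides opposite A, B, C

# angle at A via the dot product of AB and AC
dot = (B[0] - A[0]) * (C[0] - A[0]) + (B[1] - A[1]) * (C[1] - A[1])
alpha = math.acos(dot / (c * b))

print(math.degrees(alpha), math.cos(alpha) < 0)         # ~135, True
print(a**2, b**2 + c**2 - 2 * b * c * math.cos(alpha))  # both ~5
```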
{ "language": "en", "url": "https://math.stackexchange.com/questions/3710024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Triangles and inequalities Let ABC be a triangle and let O be any point in space. How can I show that $AB ^ 2 + BC ^ 2 + CA ^ 2 \leq 3 (OA ^ 2 + OB ^ 2 + OC ^ 2)$? I know a proof of this using inner products, but is it possible to show it by Euclidean geometry?
I'll use $P$ instead of $O$ through the course of this answer; $O$ is a little confusing because it usually denotes a centre. Let $D$, $E$ and $F$ be midpoints of $BC$, $CA$, and $AB$ respectively; let $G$ be the intersection of $AD$, $BE$, and $CF$, i.e. the triangle's centroid. It is well known that $AG$ = $2GD$ (and similar for the other medians). By Apollonius's theorem, along with the fact that $AD = \frac 3 2 AG$, we note that $$ \frac 9 2 AG^2 + \frac 1 2BC^2 = AB^2 + AC^2; $$ summing the symmetric equations for the other medians we have $$ \frac 9 2 (AG^2 + BG^2 + CG^2) = \frac 3 2 (AB^2 + BC^2 + CA^2). $$ In other words, $G$ makes the inequality tight. Thus we are clearly motivated to try and do something with $G$. Consider the following diagram. By Stewart's theorem (a slight generalisation of Apollonius's theorem, and only a few steps removed from direct Pythagoras bashing) we obtain the result that $$ \frac 2 3 PD^2 + \frac 1 3 PA^2 = PG^2 + \frac 2 9 AD^2 = PG^2 + \frac 1 2 AG^2. $$ To get rid of the $PD^2$, we use Apollonius on $\bigtriangleup \!\! PCB$: $$ 2PD^2 + \frac 1 2 BC^2 = PB^2 + PC^2. $$ Multiplying the former expression by $3$ and subtracting the latter we obtain $$ PA^2 - \frac 1 2 BC^2 = 3PG^2 + \frac 3 2 AG^2 - PB^2 - PC^2. $$ Summing cyclically for the corresponding expressions oriented about medians $BE$ and $CF$, and substituting in our previous expression for the sum of square distances at the centroid $G$, we obtain $$ 3(PA^2 + PB^2 + PC^2) = 9PG^2 + AB^2 + BC^2 + CA^2, $$ so we are done by non-negativity of squares.
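The closing identity, $3(PA^2+PB^2+PC^2)=9PG^2+AB^2+BC^2+CA^2$, is also easy to spot-check numerically, which makes the inequality (and its equality case $P=G$) visible at once. A plain-Python sketch with random points:

```python
import random

def d2(p, q):
    # squared Euclidean distance
    return sum((u - v) ** 2 for u, v in zip(p, q))

random.seed(0)
A, B, C, P = [tuple(random.uniform(-5, 5) for _ in range(3)) for _ in range(4)]
G = tuple((u + v + w) / 3 for u, v, w in zip(A, B, C))  # centroid

lhs = 3 * (d2(P, A) + d2(P, B) + d2(P, C))
rhs = 9 * d2(P, G) + d2(A, B) + d2(B, C) + d2(C, A)
print(abs(lhs - rhs))  # ~0; since 9*PG^2 >= 0, the inequality follows
```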
{ "language": "en", "url": "https://math.stackexchange.com/questions/3710161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Describing a kind of probability scenario: chance of two heads over three coin flips A fair coin is tossed three times. What is the probability it will land on heads exactly twice? We know that flipping a single coin has two mutually exclusive outcomes, and that multiple flips constitute independent events. This language suffices to explain why, for the trivial case of two coin flips, we can compute the odds of landing heads twice as P(H and H) = 1/2 x 1/2 = 1/4. What other language can we use to describe the primary question? What are some attributes of probability problems that can be solved using the binomial coefficient? I'm hoping that reading said language will develop my intuition as to why we apply n-choose-k to solve this problem.
The language is the language of 'random variables' and 'probability distributions'. These are fundamental ideas in probability theory.
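Concretely for the original question: the number of heads in three fair flips is a binomial random variable, and $\binom{3}{2}$ counts the equally likely sequences with exactly two heads. A brute-force enumeration (a plain-Python sketch) agrees with the formula $\binom{3}{2}(1/2)^2(1/2)^1 = 3/8$:

```python
from itertools import product
from math import comb

n, k = 3, 2
outcomes = list(product("HT", repeat=n))              # 8 equally likely sequences
favorable = [o for o in outcomes if o.count("H") == k]

print(len(favorable), "/", len(outcomes))             # 3 / 8
print(comb(n, k) * 0.5**k * 0.5**(n - k))             # 0.375
```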
{ "language": "en", "url": "https://math.stackexchange.com/questions/3710361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a "non-curved" manifold? I've seen some visualizations of manifolds. It seems that they are all "curved" shapes. Is there a "non-curved" manifold?
Yes, you can get a flat, compact manifold embedded in $\Bbb R^3$. This is a consequence of the Nash embedding theorem. In particular, a couple of years ago, this was done in practice with a torus, and the results are quite visually interesting. Here are the first three steps in the construction: It may look curved, but it does actually give a $C^1$ isometric embedding of the flat torus $S^1\times S^1$ in three-dimensional Euclidean space. ($C^1$, i.e. continuously differentiable, is important because otherwise distances along the surface are not necessarily well-defined.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3710504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find an angle in the given quadrilateral In the following problem, I want to find the angle marked as $x$. It seems so simple and yet I am out of ideas. It is very easy to get all angles except two of them: angle ADB and angle CBD. Is there a calculation for the angle $x$ that only uses parallel lines and no circles? Edit: A nice solution by using a circle is given below. In the book, this question was asked at the start of the chapter after introducing parallel lines and all the angles in that setup. Sum of angles of a triangle was proved and this question was asked in the exercise. From geogebra, I know the answer is 60 degrees. But I do not know how to argue that the answer is 60 degrees.
Given: 1) $\angle ABC=30^\circ$ 2) $\angle BAD=80^\circ$ 3) $\angle DAC=20^\circ$ 4) $\angle ACB=50^\circ$ 5) $\angle BCD=50^\circ$ 1) (the key point of this solution) Let $F\in AB$ such, that $\angle BCF$ $=\angle FBC=30^\circ$, $\angle ACF$ $=\angle CAD=20^\circ$ 2) $\angle ADC $ $= 180^\circ-\angle DAC-\angle ACB-\angle BCD$ $=180^\circ-20^\circ-50^\circ-50^\circ$ $=60^\circ$ 3) $\angle AFC $ $= \angle FCB+\angle FBC$ $=30^\circ+30^\circ$ $=60^\circ=\angle ADC$ $\Rightarrow$ 4) $ACDF$ is cyclic $\Rightarrow$ 5) $\angle CFD=\angle CAD=20^\circ$ 6) $\angle CDF$ $=180^\circ-\angle CFD-\angle FCB-\angle BCD$ $=180^\circ-20^\circ-30^\circ-50^\circ$ $=80^\circ$ $\Rightarrow$ 7) $\angle FCD=\angle FDC$ $\Rightarrow$ 8) $DF=CF$ 9) $CF=FB$ $\Leftarrow$ $\angle FBC=\angle FCB=30^\circ$ 10) (8-9$\Rightarrow$)$DF=FB$ $\Rightarrow$ $\angle FDB$ $=\angle FBD$ $=\frac{180^\circ-\angle BFD}{2}$ $=\frac{180^\circ-(\angle BFC-\angle CFD)}{2}$ $=\frac{\angle BCF+\angle FBC+\angle CFD}{2}$ $=\frac{30^\circ+30^\circ+20^\circ}{2}=40^\circ$ 11) $\angle ADF=\angle ACF$ $\Leftarrow$ (4) 12) $\angle ADB=\angle ADF+\angle FDB=20^\circ+40^\circ=60^\circ$ 13) Bonus: $\angle DFB=\angle CAB$ $\Rightarrow$ $AC||DF$ I can't see any parallel lines or circles as well. So, construct them) Seriously, $\angle BCF=30^\circ$ is the first thing that it wants to construct. If needed with no circles, $\triangle AEF$ and $\triangle CED$ are congruent, thus $\triangle DEF$ is equilateral and similar to $\triangle CEA$, but I think it's a bit longer.
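Independently of the synthetic argument, the configuration can be rebuilt in coordinates to confirm $\angle ADB = 60^\circ$. In the sketch below (plain Python; the placement with $A=(0,0)$, $C=(1,0)$ is my own), $B$ and $D$ are located via the law of sines from the given angles.

```python
import math

def deg(x):
    return math.radians(x)

A, C = (0.0, 0.0), (1.0, 0.0)

# Triangle ABC: angles A = 100, B = 30, C = 50, with AC = 1.
AB = math.sin(deg(50)) / math.sin(deg(30))
B = (AB * math.cos(deg(100)), AB * math.sin(deg(100)))

# Triangle ACD: angle DAC = 20, angle DCA = 50 + 50 = 100, so angle ADC = 60.
AD = math.sin(deg(100)) / math.sin(deg(60))
D = (AD * math.cos(deg(20)), AD * math.sin(deg(20)))

DA = (A[0] - D[0], A[1] - D[1])
DB = (B[0] - D[0], B[1] - D[1])
cos_x = (DA[0] * DB[0] + DA[1] * DB[1]) / (math.hypot(*DA) * math.hypot(*DB))
print(math.degrees(math.acos(cos_x)))  # ~60
```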
{ "language": "en", "url": "https://math.stackexchange.com/questions/3710644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
An increasing sequence whose terms contain only odd digits Consider the increasing sequence: $13579, 13597, \dots,199153773,\dots$, where every term contains all (and only) the digits $1,3,5,7,9$ (every digit must appear at least once in every term, so repetition is allowed). What is the $1992^\text{nd}$ term in the sequence? What is the order (the term number) of $199153773$? I am not sure how to start. I am just thinking that the $1992^\text{nd}$ term contains $\left \lfloor \frac{1992}{5!} \right \rfloor = \left \lfloor \frac{1992}{120} \right \rfloor = \left \lfloor 16.6 \right \rfloor = 16$ digits. I am not sure. And I am not asking for the answer, I am just asking for help/hints, then I will edit my post to show you my attempt, if right or wrong. Thanks a lot! Edit: I give up. Barry Cipra and Wolfgang Kais commented (really appreciated). I was just confused about counting the $6$-digit numbers.
$1992^{nd}$ term is 1137597. Index of term 199153773 is 306430.

def isValidNumList(numList):
    for x in [1, 3, 5, 7, 9]:
        if x not in numList:
            return False
    return True

def nextNumList(numList):
    numList.reverse()
    valid = False
    while not valid:
        overflow = 1
        for i in range(0, len(numList)):
            if overflow == 0:
                break
            if numList[i] == 9:
                numList[i] = 1
            else:
                numList[i] += 2
                overflow = 0
        if overflow > 0:
            numList.append(1)
        valid = isValidNumList(numList)
    numList.reverse()

def getNumber(numList):
    res = 0
    for digit in numList:
        res *= 10
        res += digit
    return res

def findNumber(index):
    n = 1
    numList = [1, 3, 5, 7, 9]
    while n != index:
        nextNumList(numList)
        n += 1
    return getNumber(numList)

def findIndex(number):
    n = 1
    numList = [1, 3, 5, 7, 9]
    while getNumber(numList) != number:
        nextNumList(numList)
        n += 1
    return n

# prints 1137597
print(findNumber(1992))

# prints 306430
print(findIndex(199153773))
{ "language": "en", "url": "https://math.stackexchange.com/questions/3710766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof to extending a dynamical system with discrete time to one with continuous time Let $S$ be a dynamical system on a metric space $X$ with discrete time $\mathbb{N}_0$. In our script we have a theorem that says one can extend such a system to one, here called $\tilde{S}$, with continuous time $[0,\infty)$ on a larger space $Y$ and $\tilde{S}(1)|_X=S(1)$. We don't have a proof, only the hint to use $Y=C^0([0,1])$. Can anyone help me with a proof?
You construct an equivalence relation on $X\times [0,1]$ so that $(x,1)$ is equivalent to $(f(x),0)$ where $f=S(1,\cdot)$ is the step-1 map of the dynamical system $S$. Then define $$\tilde S(t,(x,s))=(f^n(x),\alpha)$$ where $s+t=n+\alpha$, $n\in\Bbb N_0$ and $\alpha\in[0,1)$. Adapt to your notation convention if I guessed the interpretation of your symbols wrong. Justification: The task description tells that $\tilde S$ acts on a bigger space $\tilde X$ where $X$ is embedded, $\iota:X\to\tilde X$. What we originally know with certainty is that the dynamic on the embedded set should be inherited, $$\tilde S(n,\iota(x))=\iota(S(n,x))=\iota(f^n(x)).$$ What this construction does is to find the most trivial space that satisfies the demands of the task, essentially turning the problem description into a solution. So a value of $\tilde S(t,\iota(x))$ is needed for $t\in(0,1)$. The easiest construction is to give the pair $(x,t)\in X\times\Bbb R$ as that value. Next one has to ensure continuity at $t=1$, $$ \lim_{t\to 1}\tilde S(t,\iota(x))=\tilde S(1,\iota(x))=\iota(f(x)). $$ This leads directly to the construction of an equivalence relation $(x,1)\sim (f(x),0)$, easily generalized to $(x,t+n)\sim (f^n(x),t)$. Then $\tilde X=(X\times\Bbb R)/\sim$, the topology is inherited from $X\times\Bbb R$, the continuity of $\tilde S$ in all arguments follows from the continuity of $f$.
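The construction can be made concrete in code. The sketch below (plain Python; the sample map is my own choice) implements the semiflow $\tilde S(t,(x,s))=(f^n(x),\alpha)$ with $s+t=n+\alpha$, which restricts to the discrete system at integer times:

```python
def suspend(f):
    """Continuous-time semiflow on X x [0,1) suspending the step-1 map f."""
    def S(t, point):
        x, s = point                  # s in [0, 1) is the fiber coordinate
        n, alpha = divmod(s + t, 1.0)
        for _ in range(int(n)):       # apply the discrete map n times
            x = f(x)
        return (x, alpha)
    return S

f = lambda x: 0.5 * x + 1.0           # sample discrete system (a contraction)
S = suspend(f)

print(S(1.0, (0.0, 0.0)))             # (1.0, 0.0) = (f(0), 0)
print(S(2.5, (0.0, 0.0)))             # (1.5, 0.5) = (f(f(0)), 1/2)
```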
{ "language": "en", "url": "https://math.stackexchange.com/questions/3710952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Index of commutator subgroup in the commutator group I need to prove (or find a counter example) that if $G$ is solvable and $H \leq G$ is a subgroup of finite index, then the commutator subgroup $D(H)$ is also a subgroup of finite index in $D(G)$. This would allow me to say : If $G_1$ and $G_2$ are commensurable (i.e. there are $H_1 \leq G_1$ and $H_2 \leq G_2$ such that $|G_i : H_i| < \infty$ and $H_1 \cong H_2$) then $D^n(G_1$) and $D^n(G_2)$ are commensurable for all $n$. It's easy to show that $D(H)$ is a subgroup but I can't even figure out how to start the main part of this proof. Does anyone have a clue ? EDIT : Thanks to Tsemo Aristide who answered the original question. The fact that it's not always true doesn't help me at all for my research, so here's a follow up question : (not sure that editing is the right way to ask a follow up question though so please let me know if I shouldn't be doing this !) What if $G$ was nilpotent instead of solvable ? Would it work then ? Thanks in advance !
This MathOverflow question answers your question about nilpotent groups in the positive, by induction on the Hirsch length. https://mathoverflow.net/questions/107679/index-of-derived-subgroup-in-derived-group
{ "language": "en", "url": "https://math.stackexchange.com/questions/3711066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$ f(x)=\begin{cases}\sin x& \text{if $x$ is rational}\\{\cos x} &\text{if $x$ is irrational}\end{cases} $, finding continuous point Problem: Where is the following function continuous?$$f(x)=\begin{cases}\sin {x}& \text{$x\in\mathbb Q$}\\ \cos{x} &\text{$x\notin\mathbb Q$}\end{cases} $$ We can predict that $f$ might be continuous at $x={\pi\over4}+k\pi$ where $k\in\mathbb Z$, and discontinuous everywhere else. I want to prove this using an $\epsilon$-$\delta$ argument. At $x={\pi\over4}$, for example, it was difficult to produce a suitable $\delta$ to derive that $|f(x)-{1\over\sqrt2}|\lt\epsilon$. I learned that it can be proven by using the Dirichlet function $d(x)$, which is $$d(x)=\begin{cases}0&x\in\mathbb Q \\1&x\notin\mathbb Q\end{cases}$$ Set $f(x):=\sin{x}+d(x)(\cos{x}-\sin{x})$ and $h(x):=\cos{x}-\sin{x}$. We first focus on $d(x)h(x)$. If $h(a)=0$, then $d(x)h(x)$ is continuous at $a$, since $$\forall\epsilon\gt0, \exists\delta>0 \text{ s.t. } 0\lt|x-a|\lt\delta\implies |d(x)h(x)-d(a)h(a)|=|d(x)h(x)|\le|h(x)|\lt\epsilon$$ Note that $h$ is continuous. If $h(a)\ne0$, we claim that $d(x)h(x)$ is discontinuous at $a$. Suppose $d(x)h(x)$ were continuous; since $h(x)$ is also continuous and nonzero near $a$, so is $d(x)=\frac{d(x)h(x)}{h(x)}$. However, we can easily prove that $d(x)$ is discontinuous at every point, which is a contradiction; hence $d(x)h(x)$ is discontinuous. When $h(x)=0$, $x={\pi\over4}+n\pi$; at such $x$, $f$ is continuous, as we proved. But I do want to know a solution that does not use the Dirichlet function. I tried, but it was difficult to set up $\delta$. Is there an exact way to solve this? Thanks very much. NOTE I have to prove not only continuity at $x={\pi\over4}+k\pi$ but also discontinuity at other points! NOTE I cannot use limits of sequences in the test, so I really want a solution with $\epsilon$-$\delta$ arguments.
$f$ is continuous at a point $p$ if and only if for every convergent sequence $p_n$ such that $\lim p_n = p$, $\lim f(p_n) = f(p)$. Now we can approach a point $p$ by both rational sequences and irrational sequences. So given $p$, there is a rational sequence $p'_n$ and an irrational sequence $p_n''$ both of which converge to $p$. So if $f$ is continuous $p$, then $f(p) = \lim f(p_n') = \lim f(p_n'')$. But by definition this is the same as $f(p) = \lim \sin(p'_n) = \lim \cos(p_n'')$. But since $\cos$ and $\sin$ are continuous functions, this boils down to $f(p) = \cos(p) = \sin(p)$. So $f$ is continuous at those points $p$ where $\cos(p) = \sin(p)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3711256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
$f(x)= \sqrt{\frac{(x+1)^3}{x}}$ find constant values $a,b,c \in\mathbb{R}$ such that $f(x)=ax+b+\frac{c}{x}+o(\frac{1}{x})$ when $x \to +\infty$ Would anyone be kind enough to explain to me how to do these types of problems? My professor gave us a couple of these problems for homework and we don't have solutions. This is my first time doing something like this. This is the easiest of them. Please write me a solution and I will find the pattern myself. Thanks in advance.
I thought that it might be instructive to present a way forward that forgoes use of calculus and relies instead on elementary, pre-calculus tools only. To that end, we proceed. For $x>0$, we can write $$\sqrt{\frac{(x+1)^3}{x}}=(x+1)\sqrt{1+\frac1x}\tag1$$ As $x\to \infty$, $t=1/x \to 0$. So, let us examine the behavior of the function $f(t)=\sqrt{1+t}$ for "small" values of $t$. Let $e(t)$ denote the function $e(t)=\sqrt{1+t}-a-bt-ct^2$ so that $\sqrt{1+t}=a+bt+ct^2+e(t)$. Note that $e(t)$ is the error between $\sqrt{1+t}$ and a quadratic polynomial $a+bt+ct^2$ that we view as an "approximation" for $\sqrt{1+t}$. Hence, we have $$e(t)=\frac{(1-a^2)+(1-2ab)t-(2ac+b^2)t^2-2bct^3-c^2t^4}{\sqrt{1+t}+a+bt+ct^2}\tag2$$ Now for $a=1$, $b=1/2$, and $c=-1/8$, $(2)$ becomes $$e(t)=-\frac1{64}\left(\frac{t-8}{\sqrt{1+t}+1+t/2-t^2/8}\right)t^3$$ and the error function behaves like $Ct^3$ as $t\to 0$ (i.e., $e(t)=O(t^3)$). To see this, simply observe that $\lim_{t\to 0}e(t)/t^3=\frac1{16}$ and so, $e(t)=O(t^3)$ indeed. Moreover, we find that $$\sqrt{1+t}=1+\frac12t-\frac18 t^2+O(t^3)\tag3$$ Finally, letting $x=\frac1t$ in $(3)$ and substituting in $(1)$ reveals $$\sqrt{\frac{(x+1)^3}{x}}=(1+x)\left(1+\frac1{2 x}-\frac1{8x^2}+O\left(\frac1{x^3}\right)\right)$$ And now you can finish.
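Carrying out the last multiplication gives $(1+x)\left(1+\frac1{2x}-\frac1{8x^2}+O(x^{-3})\right)=x+\frac32+\frac3{8x}+O\left(\frac1{x^2}\right)$, so $a=1$, $b=3/2$, $c=3/8$. A quick numeric check in plain Python: the remainder times $x^2$ should settle near the next coefficient, $-1/16$.

```python
import math

def f(x):
    return math.sqrt((x + 1) ** 3 / x)

def approx(x):
    # candidate expansion a*x + b + c/x with a = 1, b = 3/2, c = 3/8
    return x + 1.5 + 0.375 / x

for x in (1e2, 1e3, 1e4):
    r = f(x) - approx(x)
    print(x, r * x * x)   # tends to -1/16 = -0.0625
```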
{ "language": "en", "url": "https://math.stackexchange.com/questions/3711389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
What's $-2^{-2^{-2^{-2...}}}$? Pretty simple question that I know someone has probably asked before: What's $-2^{-2^{-2^{-2...}}}$? Or, more specifically, what is the number that this repeated sequence approaches? I have calculated it out to about $-0.641185$, but I have no idea what that number is or where you could get it from. Keep in mind that I am not asking for $(-2)^{(-2)^{(-2)...}}$, this sequence doesn't really work. The order of operations looks like this: $-(2^{-(2^{-(2^{-(2...)})})})$ It's basically like taking $-2$, exponentiating it by 2 to get $2^{-2}$, then making that negative $-2^{-2}$, then exponentiating it again, etc.
Recursively, we might define this as the limit as $n \to \infty$ of $$a_n = -(2)^{a_{n-1}}$$ with $a_1 = -2$. Letting $n \to \infty$ on the assumptions of continuity and convergence of $a_n$ to some value $L$ gives us $$L = -2^L$$ This value $L$ is fundamentally what you seek. This equation will have to be solved by means of the Lambert W function and doesn't have a closed form outside of that, to my knowledge anyhow. Note that $-2^L = -e^{\ln(2) \cdot L}$. Thus, follow the following steps: * *Use the identity above *Multiply both sides by $-\ln(2)$ *Multiply both sides by $e^{-\ln(2)L}$ Then you get that $$L = -2^L \implies -\ln(2) L e^{-\ln(2) L}= \ln(2)$$ The Lambert W function is the inverse to $f(x) = xe^x$. That is, $W(xe^x) = x$. Applying it to both sides of the above, we obtain that $$W(-\ln(2) L e^{-\ln(2) L})= W(\ln(2)) \implies -\ln(2)L = W(\ln(2))$$ Solving for $L$ gives us $$ L = \frac{-W(\ln(2))}{\ln(2)}$$ Of course, this is about as much simplification as we can get; many values for the W function have to be approximated. Of course, we also should note this is the principle value since it is a multivalued function otherwise. If we approximate the value by, say, Wolfram, we see that $$L \approx -0.641186$$
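Since $|g'(L)| = \ln(2)\,2^L \approx 0.44 < 1$ near the root, the recursion itself converges, so the value can be checked by simply iterating (a plain-Python sketch), together with the Lambert-W relation $(-\ln 2\, L)e^{-\ln 2\, L}=\ln 2$ derived above:

```python
import math

a = -2.0
for _ in range(200):
    a = -(2.0 ** a)           # a_n = -2^(a_{n-1})

print(a)                      # ~ -0.6411857
print(a + 2.0 ** a)           # residual of L = -2^L, ~ 0

w = -math.log(2.0) * a        # w * e^w should equal ln 2
print(w * math.exp(w), math.log(2.0))
```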
{ "language": "en", "url": "https://math.stackexchange.com/questions/3711763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find solution set of $\dfrac{8^x+27^x}{12^x+18^x}=\dfrac{14}{12}$ What I've done is factoring it. $$\dfrac{2^{3x}+3^{3x}}{2^{2x}\cdot 3^{x}+3^{2x}\cdot{2^{x}}}=\dfrac{7}{2\cdot 3}$$ This looks like it can be factored more but it doesn't work from my attempts.
$$\dfrac{2^{3x}+3^{3x}}{2^{2x}\cdot 3^{x}+3^{2x}\cdot{2^{x}}}=\dfrac{7}{2\cdot 3}$$ $$3\cdot 2^{3x+1}+2\cdot 3^{3x+1}=7\cdot 2^{2x}\cdot 3^{x}+7\cdot 3^{2x}\cdot 2^{x}$$ Divide both the sides by $2^{3x+1}$, we get $$3+3 \left(\frac{3}{2}\right)^{3x}=\frac72 \left(\frac{3}{2}\right)^{x}+\frac72 \left(\frac{3}{2}\right)^{2x}$$ Let $\left(\frac{3}{2}\right)^x=t\quad \forall \ \ t>0$ $$3+3t^3=\frac{7}{2}t+\frac{7}{2}t^2$$ $$6t^3-7t^2-7t+6=0$$ $$(t+1)(3t-2)(2t-3)=0$$ $$t=\frac32 \implies \left(\frac32\right)^x=\frac{3}{2}\iff x=1$$ $$t=\frac 23\implies \left(\frac32\right)^x=\frac{2}{3}\iff x=-1$$
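Both roots are easy to check against the original equation (note $\frac{14}{12}=\frac76$); a plain-Python sketch:

```python
def lhs(x):
    return (8.0**x + 27.0**x) / (12.0**x + 18.0**x)

for x in (1.0, -1.0):
    print(x, lhs(x))   # both values equal 7/6 = 1.1666...
```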
{ "language": "en", "url": "https://math.stackexchange.com/questions/3711923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Simplifying: $\sum_{n=0}^\infty\sum _{i=0}^n \left(\frac{a}2\right)^{2n}\frac1{i!n!(n-i)!}\cdots$ I'm trying to simplify the following infinite sum: $$\sum_{n=0}^\infty\sum_{i=0}^n \left(\frac{a}2\right)^{2n}\frac1{i!n! (n-i)!}\frac{\partial^{2n}f(x,y)}{\partial x^{2i}\, \partial y^{2(n-i)}},\ \ a,x,y\in\Bbb R$$ Where $f:\Bbb{R}^2\to\Bbb{R}$ is a sufficiently well-behaved function. The only thing I've thought about is to use a binomial coefficient, i.e. $$\sum_{n=0}^\infty\frac{\left(\frac{a}2\right)^{2n}}{(n!)^2}\sum_{i=0}^n \binom{n}i\frac{\partial^{2n}f(x,y)}{\partial x^{2i}\, \partial y^{2(n-i)}}$$ Maybe the substitution $m=n-i$ will make the sum more symmetric? However, I don't know how to change the limits of summation in that case. I'm interested to hear what general techniques people use when dealing with such infinite sums. EDIT I suspect that this sum can be written as an infinite sum over 'diagonal terms'. If we construct a matrix for the above elements, say $A_{i,j}$, then it should be possible to write this sum as $\sum_{k=0}^\infty$ over indices such that $k=n+i$. Alternatively, writing $$\sum_{n=0}^\infty \frac{{\left(\frac{a}2\right)}^{2n}}{(n!)^2}x^n=I_0(a\sqrt{x})$$ with $I_0$ the modified Bessel function of first kind, maybe one can 'hide' the summation completely?
From $$\sum_{n=0}^\infty\frac{\left(\frac{a}2\right)^{2n}}{(n!)^2}\sum_{i=0}^n \binom{n}i\frac{\partial^{2n}f}{\partial x^{2i}\, \partial y^{2(n-i)}}$$ We can identify $$\Delta^n(f)=\sum_{i=0}^n \binom{n}i\frac{\partial^{2n}f}{\partial x^{2i}\, \partial y^{2(n-i)}}$$ where $\Delta^n$ is the Laplacian composed $n$ times. $$\sum_{n=0}^\infty\frac{\left(\frac{a}2\right)^{2n}}{(n!)^2}\Delta^n(f)=\left(\sum_{n=0}^\infty\frac{\left(\frac{a}2\right)^{2n}}{(n!)^2}\Delta^n\right)f$$ We can treat the term in brackets as the definition of a certain Differential operator through Taylor expansion. Writing $$\sum_{n=0}^\infty \frac{{\left(\frac{a}2\right)}^{2n}}{(n!)^2}x^n=I_0(a\sqrt{x})$$ with $I_0$ the modified Bessel function of first kind, we could write the following form: $$\left(\sum_{n=0}^\infty\frac{\left(\frac{a}2\right)^{2n}}{(n!)^2}\Delta^n\right)f=I_0(a\sqrt{\Delta})(f)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3712064", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For $A, B \subset \mathbb{R}^+$, $\sup(A \cdot B) = \sup A \sup B$. I am trying to prove that: For nonempty subsets of the positive reals $A,B$, both of which are bounded above, define $$A \cdot B = \{ab \mid a \in A, \; b \in B\}.$$ Prove that $\sup(A \cdot B) = \sup A \cdot \sup B$. Here is what I have so far. Let $A, B \subset \mathbb{R}^+$ be nonempty and bounded above, so $\sup A$ and $\sup B$ exist by the least-upper-bound property of $\mathbb{R}$. For any $a \in A$ and $b \in B$, we have $$ab \leq \sup A \cdot b \leq \sup A \cdot \sup B.$$ Hence, $A \cdot B$ is bounded above by $\sup A \cdot \sup B$. Since $A$ and $B$ are nonempty, $A \cdot B$ is nonempty by construction, so $\sup(A \cdot B)$ exists. Furthermore, since $\sup A \cdot \sup B$ is an upper bound of $A \cdot B$, by the definition of the supremum, we have $$\sup(A \cdot B) \leq \sup A \cdot \sup B.$$ It suffices to prove that $\sup(A \cdot B) \geq \sup A \cdot \sup B$. I cannot figure out the other half of this. A trick involving considering $\sup A - \epsilon$ and $\sup B - \epsilon$ for some $\epsilon > 0$ and establishing that $\sup(A \cdot B) < \sup A \cdot \sup B + \epsilon$ did not seem to work, though it did in the additive variant of this proof. I haven't anywhere used the assumption that $A$ and $B$ are contained in the positive real numbers, and it seems to me that this assumption must be important, probably as it pertains to the inequality sign, so I assume that at some point I will need to multiply inequalities by some positive number. I cannot seem to get a good start on this, though. A hint on how to get started on this second half would be very much appreciated.
Hint: Rather than $\sup A - \varepsilon$ and $\sup B - \varepsilon,$ subtract appropriate multiples of $\varepsilon$ from $\sup A, \sup B$ respectively. You'll need to assume that $\varepsilon$ isn't too big. Full proof: [I'm sorry, I can't get the wretched spoiler mechanism to work, so I'm afraid you'll have to avert your eyes!] Let $s = \sup A > 0,$ and $t = \sup B > 0.$ You have already proved that $\sup AB \leqslant st.$ For every $\varepsilon$ such that $\varepsilon > 0$ and $\varepsilon < 2st,$ there exist $a \in A$ and $b \in B$ such that \begin{align*} a & > s - \frac\varepsilon{2t} > 0, \\ b & > t - \frac\varepsilon{2s} > 0. \end{align*} Therefore $$ ab > \left(s - \frac\varepsilon{2t}\right)\left(t - \frac\varepsilon{2s}\right) = st - \frac\varepsilon2 - \frac\varepsilon2 + \frac{\varepsilon^2}{4st} > st - \varepsilon. $$ Therefore $\sup AB \geqslant st,$ therefore $\sup AB = st = (\sup A)(\sup B).$
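For finite sets of positive reals the supremum is a maximum, so the statement can be sanity-checked directly (a plain-Python sketch; for positive numbers the largest product is attained at the pair of maxima):

```python
import random

random.seed(1)
A = [random.uniform(0.1, 5.0) for _ in range(40)]
B = [random.uniform(0.1, 5.0) for _ in range(40)]
AB = [a * b for a in A for b in B]

print(max(AB), max(A) * max(B))   # equal
```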
{ "language": "en", "url": "https://math.stackexchange.com/questions/3712256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Combinatorics question where someone has to be at least one seat away from anyone else? In a doctor’s waiting room, there are 14 seats in a row. Eight people are waiting to be seen. There is someone with a very bad cough who must sit at least one seat away from anyone else. If all arrangements are equally likely, what is the probability that this happens? My logic is this: Number the 8 people and let person 8 have the cough. We need to find the number of arrangements in which person 8 is next to someone and then subtract this from the total to get the number of arrangements where he sits at least one seat away. So treat person 8 and person 7 as one ‘object’. There are 13!*2!/6! distinct arrangements of this (if you say the seats are identical objects). We can repeat this principle 7 more times, pairing up person 8 with person 6, then 5 etc., so we multiply the previous expression by 7 to get 121080960 permutations where the coughing person is next to someone. However, this is way too much. How would you do this problem?
There are $8 \cdot \frac{14!}{8! \cdot 6!}$ configurations possible in total, where the added $8$ arises from the fact that person 8 is 'distinguishable'. If person 8 sits on the edges, this leaves $\frac{12!}{7! \cdot 5!}$ configurations open for the other people to sit ($12 = 14 - 1 - 1$, where $1$ is the edge seat and $1$ the seat next to the edge seat). We multiply this by two, as there are two edge seats. If person $8$ sits on a non-edge seat, this leaves $\frac{11!}{7! \cdot 4!}$ configurations for the other people to set. We multiply this by twelve, as there are twelve non-edge seats. The total amount of 'valid' configurations is therefore $ 2 \cdot \frac{12!}{7! \cdot 5!} + 12 \cdot \frac{11!}{7! \cdot 4!}$.
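The count can be cross-checked by brute force: enumerate every set of occupied seats together with the coughing person's seat, and compare with the closed form above, which simplifies to $3/13$. A plain-Python sketch:

```python
from itertools import combinations
from math import comb

total = valid = 0
for occupied in combinations(range(14), 8):
    seats = set(occupied)
    for c in occupied:                              # coughing person's seat
        total += 1
        if (c - 1) not in seats and (c + 1) not in seats:
            valid += 1

formula = 2 * comb(12, 7) + 12 * comb(11, 7)
print(valid, formula)                               # 5544 5544
print(valid / total)                                # 0.230769... = 3/13
```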
{ "language": "en", "url": "https://math.stackexchange.com/questions/3712414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
what is $\dim \{f\in P_n|T(f)=f\}$ Let $P_n$ be polynomials of degree $\leq2n$ on $\mathbb{R}$. Clearly $P_n$ is a finite dimensional vector space. Let $$ T(f)=\int_{\mathbb{R}}e^{(x^2-y^2)\pi}\cos(2\pi yx)f(y)dy $$ It's easy to show that $T$ maps $P_n$ to itself. The problem is what is $\dim \{f\in P_n|T(f)=f\}$. My idea: Suppose $f(x)=x^k$; we can observe that $T(x^k)=0$ when $k$ is odd. When $k$ is even, by Fourier transformation, I prove that $$ T(f)=\int_{\mathbb{R}}e^{(x^2-y^2+2ixy)\pi}f(y)dy=\frac{1}{(2\pi i)^k}e^{x^2\pi}\frac{\mathrm{d}^k}{\mathrm{d}x^k}e^{-\pi x^2} $$ When $k$ is even, it's easy to prove that $T(x^k)$ is a polynomial of degree $k$ whose terms all have even degree. And let $\{x^k\}_{k=0}^{n}$ be a basis. We can write $T$ as a matrix, which is an upper triangular matrix with diagonal $\{1,0,-1,0,1,0,-1,\cdots\}$, so I guess the answer is $\lfloor n/2 \rfloor+1$, but I can't prove it. I calculated $n=2$, and the answer is right.
Write $ \mathcal{F}f(x) = \int_{\mathbb{R}} e^{-2\pi i x y} f(y) \, \mathrm{d}y $ for the Fourier transform, and let $\mathcal{M}f(x) = e^{-\pi x^2}f(x)$. Also let $$ E_n = \{ f \in P_n : \text{$f$ is even} \}. $$ Then it is easy to check that: (1) If $f \in P_n$ solves $Tf = f$, then $f \in E_n$. (2) For $f \in E_n$, we have $Tf = \mathcal{M}^{-1}\mathcal{F}\mathcal{M}f$. (3) Equip $E_n$ with the inner product $\langle f, g \rangle := \int_{\mathbb{R}} \overline{\mathcal{M}f(x)}\mathcal{M}g(x) \, \mathrm{d}x $. Then, by Plancherel's Theorem, $$ \langle Tf, Tg \rangle = \int_{\mathbb{R}} \overline{\mathcal{F}\mathcal{M}f(x)}\mathcal{F}\mathcal{M}g(x) \, \mathrm{d}x = \int_{\mathbb{R}} \overline{\mathcal{M}f(x)}\mathcal{M}g(x) \, \mathrm{d}x = \langle f, g \rangle, $$ and so $T$ restricted to $E_n$ is an isometry with respect to this inner product. This proves that $T$ restricted to $E_n$ is diagonalizable. So the geometric multiplicity of $1$ coincides with the algebraic multiplicity of $1$, which is $\lfloor n/2\rfloor + 1$ as computed by other users.
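The claimed action of $T$ on low even powers can be spot-checked numerically. The sketch below (my own check, using a plain midpoint Riemann sum, no special libraries) verifies $T(1)=1$ and $T(y^2)(x)=-x^2+\frac{1}{2\pi}$, which is consistent with the diagonal entries $\{1,0,-1,\dots\}$:

```python
import math

def T(f, x, half_width=8.0, n=200_000):
    """Midpoint-rule approximation of (Tf)(x) = ∫ e^{(x²−y²)π} cos(2πxy) f(y) dy."""
    h = 2 * half_width / n
    acc = 0.0
    for i in range(n):
        y = -half_width + (i + 0.5) * h
        acc += math.exp((x * x - y * y) * math.pi) * math.cos(2 * math.pi * x * y) * f(y)
    return acc * h

x0 = 0.7                                   # arbitrary sample point
t_const = T(lambda y: 1.0, x0)             # T fixes the constant polynomial 1
t_square = T(lambda y: y * y, x0)          # T(y²) should be −x² + 1/(2π)
expected_square = -x0 * x0 + 1 / (2 * math.pi)
```

The leading coefficient of $T(y^2)$ is $-1$, matching the third diagonal entry.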
{ "language": "en", "url": "https://math.stackexchange.com/questions/3712528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Group element that normalizes finite subgroup that is generated by a subset of $G$ Let $N$ be a finite subgroup of a group $G$, and assume $N = \langle S \rangle$ for some subset $S$ of $G$. Prove that an element $g \in G$ normalizes $N$ if and only if $gSg^{-1} \subset N$. My question is about the forward direction. That is, we assume that $g$ normalizes $N$, i.e. $gNg^{-1} = N$, and we want to prove that if $s \in S$, then $gsg^{-1} \in N$. But I cannot see how to do this. In fact, if we let $N = \{e\}$ and $S$ be any nontrivial subset of $G$, then every $g$ normalizes $N$, but $gsg^{-1} \not \in N$ if $s \neq e$.
If $S$ generates $N$, then each element of $N$ is of the form $s_1\ldots s_n$, where each $s_i$ is either an element of $S$ or the inverse of an element of $S$ or the identity element of $G$. In particular, $S$ is contained in $N$. Hence if an element $g$ of $G$ normalizes $N$, then it conjugates every element of $S$ into $N$: for $s\in S$ we have $gsg^{-1}\in gNg^{-1}=N$.
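Since everything here is finite, the equivalence can also be confirmed exhaustively for a small group. Below is my own illustration (not part of the answer), with $G=S_4$, permutations written as tuples of images, and $N=\langle S\rangle$ for a couple of generating sets:

```python
from itertools import permutations

def compose(p, q):                    # (p∘q)(i) = p[q[i]]
    return tuple(p[j] for j in q)

def inverse(p):
    inv = [0] * len(p)
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

def generated(S):
    """Subgroup generated by the set S of permutations (closure under products)."""
    n = len(next(iter(S)))
    N = {tuple(range(n))} | set(S) | {inverse(s) for s in S}
    while True:
        new = {compose(a, b) for a in N for b in N} - N
        if not new:
            return frozenset(N)
        N |= new

G = list(permutations(range(4)))      # G = S_4
equivalence_holds = True
for S in [{(1, 0, 2, 3)}, {(1, 2, 0, 3), (1, 0, 2, 3)}]:
    N = generated(S)
    for g in G:
        gi = inverse(g)
        normalizes = {compose(compose(g, x), gi) for x in N} == N
        conjugates_S_into_N = all(compose(compose(g, s), gi) in N for s in S)
        if normalizes != conjugates_S_into_N:
            equivalence_holds = False
```

The first $S$ generates a subgroup of order 2, the second a copy of $S_3$ of order 6; the equivalence holds for every $g\in S_4$ in both cases.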
{ "language": "en", "url": "https://math.stackexchange.com/questions/3712700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to find the number of solutions of $6|\cos x|=x$? Now, I think the only way to solve this problem for a high school student was graphically. However using pen and paper to draw the graph, it was virtually impossible to justify or refute the existence of the "Fourth" solution. Using desmos, I realised that we must must work on the the existence of the Fourth solution analytically because $ y=x$ is really close to $6|\cos x|$ at the potential Fourth solution. You can see for yourself. So, this being a high school problem, is there any way to predict whether $y=x$ will intersect $6|\cos x|$ or not? With enough zoom we can see that $y=x$ does not intersect $y=6|\cos x|$. However, how could have I predicted this with a pen and paper? I am acquainted with calculus but had no clue how to go about it. Thanks for your time!
[Using some calculus] It's enough to consider $3\pi/2 \lt x \lt 2\pi$. In that range the cosine is positive, so we can dispense with the absolute value and consider the function $f(x) = x - 6\cos(x)$, show that its minimum is positive and that its second derivative is positive. To get the minimum, we set the derivative equal to 0: $$f'(x) = 1 + 6\sin(x) = 0 \implies \sin(x) = -\tfrac16 \implies x = 2\pi - \sin^{-1}(1/6) \approx 6.1157 > 6,$$ remembering the restriction on the values of x we are considering. Since the cosine cannot be greater than 1, $f(x) = x - 6\cos(x) > x - 6 > 0$ at $x \approx 6.1157$, so the extremum is positive. The second derivative is $f''(x) = 6\cos(x) > 0$ since the cosine is positive in the interval we are considering, so the extremum is a minimum and the function is concave up throughout that interval. Therefore $f(x)$ has a positive minimum and is concave up, so it is never 0 on the given interval.
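Both claims (exactly three roots in total, and strict positivity of $f(x)=x-6\cos x$ on $(3\pi/2,2\pi)$) are easy to confirm numerically. A small check of my own:

```python
import math

def g(x):                      # g(x) = 6|cos x| − x; the roots of g are the solutions
    return 6 * abs(math.cos(x)) - x

# Count sign changes of g on [0, 2π] over a fine grid; beyond 2π we have
# x > 6 ≥ 6|cos x|, so there are no further roots.
n = 200_000
sign_changes = 0
prev = g(0.0)
for i in range(1, n + 1):
    cur = g(2 * math.pi * i / n)
    if prev * cur < 0:
        sign_changes += 1
    prev = cur

# Minimum of f(x) = x − 6cos x on (3π/2, 2π) sits at x = 2π − arcsin(1/6).
x_min = 2 * math.pi - math.asin(1 / 6)
f_min = x_min - 6 * math.cos(x_min)    # ≈ 0.1997 > 0, so no fourth root
```

The minimum value is about $0.2$, comfortably away from zero, which is why the graphs look tangent but never meet.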
{ "language": "en", "url": "https://math.stackexchange.com/questions/3712823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
A group of order $pq$, both primes with $q>p$ (help with solution) I want to prove that if $G$ has order $pq$, with $p,q$ primes, $q>p$ and $p$ does NOT divide $q-1$, then $G$ is cyclic. My attempt: By Sylow's theorems, $n_p \equiv 1 \pmod p$ and $n_q \equiv 1 \pmod q$, and moreover $n_p \mid q$ and $n_q \mid p$. From here we get that $n_p=1$ or $q$ and $n_q=1$ or $p$; combining with what is given by the question, we see that $n_p=1=n_q$, therefore there is only one subgroup of each of these orders. Let $K$ be the one of order $p$ and $H$ the one of order $q$. Since both have prime order, they are cyclic. Then only the neutral element lies in the intersection of the two, hence $G=K \times H$ because $|G|=pq$ and both $K$ and $H$ are normal, since each is the unique subgroup of its order. So, since $p$ and $q$ are coprime, $G$ is cyclic. Is that correct? This seems just too simple and I am thinking I am making some silly mistake, but I can't see it. Thanks to everyone.
This is correct, and the normality of the Sylow subgroups is where the condition that $p$ not divide $q-1$ comes in. If $p$ does divide $q-1$, then we can have $q$ subgroups of order $p$ and a unique nonabelian group structure arises out of this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3713009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finitely generated projective resolution Let $K$ be a field, $A$ be a finite dimensional $K$-algebra and $M$ be a finitely generated $A$-module. Is it true that $M$ admits a projective resolution by finitely generated projective $A$-modules?
As $A$ is a finite algebra over $K$ it is noetherian. As $M$ is finitely generated there is a surjection $A^{\oplus n} \longrightarrow M$. $A^{\oplus n}$ is noetherian as $A$ is. Let $N$ be the kernel of this map. By noetherianness, it is finitely generated, so there is a surjection $A^{\oplus m} \longrightarrow N$ and hence an exact sequence $A^{\oplus m} \longrightarrow A^{\oplus n} \longrightarrow M \longrightarrow 0$. Repeat this process to get a projective (in fact free) resolution by finitely generated modules. Note that we didn't need the full strength of the assumption that $A$ is finite over $K$ - only that it was noetherian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3713163", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving that $2+\sqrt{2}$ is irreducible in $\mathbb{Z}[\sqrt{2}]$. I'm asked to show that $x=2+\sqrt{2}$ is irreducible in $\mathbb{Z}[\sqrt{2}]$ by using the norm map $$N:\mathbb{Z}[\sqrt{2}]\rightarrow \mathbb{Z}^+:a+\sqrt{2}b\mapsto |a^2-2b^2|$$ Now, if $x=yz$, then $2=N(x)=N(y)N(z)$ forcing wlog $N(y)=1$. I'm now stuck trying to show that $y$ must be a unit and would appreciate any help.
Use the definition of the norm. If $y=c+d\sqrt{2}$, then $N(y)=1$ means $(c+d\sqrt{2})(c-d\sqrt{2})=c^2-2d^2=\pm 1$, so $\pm(c-d\sqrt{2})\in\mathbb{Z}[\sqrt{2}]$ is an inverse of $y$, and $y$ is a unit.
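To make this concrete: encoding $a+b\sqrt2$ as the pair $(a,b)$, a few lines of code (my own sketch, not part of the answer) confirm that the norm is multiplicative and that every norm-1 element in a search box is explicitly invertible:

```python
def mul(x, y):
    """(a + b√2)(c + d√2) = (ac + 2bd) + (ad + bc)√2."""
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

def norm(x):
    a, b = x
    return abs(a * a - 2 * b * b)

# The norm is multiplicative: N(xy) = N(x)N(y), checked on a grid.
rng = range(-4, 5)
multiplicative = all(
    norm(mul((a, b), (c, d))) == norm((a, b)) * norm((c, d))
    for a in rng for b in rng for c in rng for d in rng
)

# Every norm-1 element found in a small box is a unit: with s = c² − 2d² = ±1,
# the element s·(c − d√2) is an inverse inside Z[√2].
units_invertible = True
for c in range(-20, 21):
    for d in range(-20, 21):
        s = c * c - 2 * d * d
        if abs(s) == 1 and mul((c, d), (s * c, -s * d)) != (1, 0):
            units_invertible = False
```

Since $N(2+\sqrt2)=2$ is prime, any factorization forces one factor to have norm $1$, hence to be a unit.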
{ "language": "en", "url": "https://math.stackexchange.com/questions/3713331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find three sets $A$, $B$, $C$, each of them non-empty, such that $(A\cap B)\cup C=A\cap(B\cup C)$ and $(A\cap B)\cup C\neq A\cap(B\cup C)$ I need to find three sets for both statements I have above. I have tried drawing Venn diagrams and shading appropriately then adding numbers in each shaded region to try and guess and check for sets $A$, $B$ and $C$ but am unsure how to find an appropriate solution.
$$B \subset C \subset A$$ $$A \subset B \subset C$$ The first choice gives equality; the second gives two different sets. For a general view it is good to write the right-hand side, using distributivity, as $$A\cap(B\cup C)=(A \cap B) \cup(A \cap C),$$ and now you see that the difference between the two sides of the original equation is the difference between the last sets in each union: on the left you have $C$, on the right $A \cap C$. The two sides are equal precisely when $C=A\cap C$, i.e. when $C \subseteq A$, and in this way you can create any example you would like.
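Concrete witnesses for both patterns, checked with Python's set type (my own sketch; the particular elements are arbitrary):

```python
# Equality case: B ⊂ C ⊂ A, so C ⊆ A and the two sides agree.
A, B, C = {1, 2, 3}, {1}, {1, 2}
equal_case = ((A & B) | C) == (A & (B | C))

# Inequality case: A ⊂ B ⊂ C, where C ⊄ A breaks the identity.
A2, B2, C2 = {1}, {1, 2}, {1, 2, 3}
lhs2 = (A2 & B2) | C2      # {1, 2, 3}
rhs2 = A2 & (B2 | C2)      # {1}
unequal_case = lhs2 != rhs2
```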
{ "language": "en", "url": "https://math.stackexchange.com/questions/3713443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
To find $A$ given that $2I + A +A^2 = B$ where $B$ is given. How to find a matrix $A$ such that the following holds: $$2I + A +A^2 = B,$$ where the matrix $B$ is given. I tried with char poly of $B$ but not getting any idea. Note that it is also given that $B$ is invertible. P.S. $B = \begin{pmatrix}-2&-7&-4\\ \:12&22&12\\ \:-12&-20&-10\end{pmatrix}$.
Here is an ad hoc method: It is straightforward (if tedious) to find null spaces of $B-2I, B-4I, (B-4I)^2$ and determine the Jordan form. With $V=\begin{bmatrix} 3 & -1 & 1 \\ -6 & 1 & 0 \\ 6 & -1 & -1 \end{bmatrix}$ we see that $V^{-1}BV = \begin{bmatrix} 4 & 1 & 0 \\ 0 & 4 & 0 \\ 0 & 0 & 2 \end{bmatrix}$, and since $2+0+0^2 = 2$ we can look for $A$ of the form $A=\begin{bmatrix} \lambda & \alpha & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & 0 \end{bmatrix}$. Then from $A^2+A+2I=B$ we get $\lambda^2+\lambda +2 = 4$ and so $\lambda \in \{-2,1\}$. Then we need $2 \alpha \lambda + \alpha = 1$ from which we get $\alpha = {1 \over 2 \lambda +1}$. Hence $\lambda =-2, \alpha = -{1 \over 3}$ and $\lambda =1, \alpha = {1 \over 3}$ are two solutions (or rather $V A V^{-1}$).
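Both candidate solutions can be verified without ever inverting $V$: since $V$ is invertible, $V^{-1}BV=J$ is equivalent to $BV=VJ$, and it suffices to check $A_J^2+A_J+2I=J$ in Jordan coordinates; conjugating by $V$ then solves the original equation. A sketch of that check in exact rational arithmetic (my own, not part of the answer):

```python
from fractions import Fraction as F

def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[-2, -7, -4], [12, 22, 12], [-12, -20, -10]]
V = [[3, -1, 1], [-6, 1, 0], [6, -1, -1]]
J = [[4, 1, 0], [0, 4, 0], [0, 0, 2]]

# BV == VJ is equivalent to V⁻¹BV = J, with no matrix inversion needed.
jordan_ok = matmul(B, V) == matmul(V, J)

# Both (λ, α) pairs satisfy A² + A + 2I = J in the Jordan coordinates.
candidates_ok = True
for lam, alpha in [(1, F(1, 3)), (-2, F(-1, 3))]:
    A = [[F(lam), alpha, F(0)], [F(0), F(lam), F(0)], [F(0), F(0), F(0)]]
    A2 = matmul(A, A)
    lhs = [[A2[i][j] + A[i][j] + (2 if i == j else 0) for j in range(3)]
           for i in range(3)]
    if lhs != J:
        candidates_ok = False
```

The actual solutions of the problem are then $VA_JV^{-1}$ for each candidate $A_J$.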
{ "language": "en", "url": "https://math.stackexchange.com/questions/3713545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Interval of convergence and integration of a power series The $\arctan(x)$ can be expanded as a MacLaurin series starting from the integral $$\arctan(x) = \int \frac{1}{1 + x^2} \mathrm{d}x$$ and using $$\frac{1}{1 + x^2} = \sum_{n = 0}^{\infty} (-1)^n x^{2n}$$ as suggested in this answer. This series converges for $x \in (-1,1)$, but, after integration, it can be shown that the resulting MacLaurin series $$\arctan(x) = \sum_{n = 0}^{\infty} (-1)^n \frac{x^{2n + 1}}{2n + 1}$$ converges for $x \in \left[ -1, 1 \right]$. The integration of a series is possible only when the series is evaluated within its interval of convergence, $x \in (-1,1)$: the MacLaurin series for $x = \pm 1$ shouldn't even be considered, because the above result for $\arctan(x)$ should not be available! * *Why instead, here two more points ($x = \pm 1$) can be added to the interval of convergence? Integration does not change the radius of convergence of a series. * *What are then the differences between the interval of convergence and the radius of convergence? Do $(-1,1)$ and $\left[ -1, 1 \right]$ correspond to the same radius of convergence? I read also this question, answer and comments am I am not familiar with Cauchy-Hadamard Radius Formula. A comment here states (given an interval of convergence $(a - R, a + R)$): The issue of convergence at the points $x= a ± R$ is independent of the convergence within the interval $(a−R,a+R)$. My questions above essentially are: why?
In general, you won't be able to say things about endpoints of an interval after integrating. In this case, however, we can just check that the sum converges at the endpoints directly; if $x = \pm 1$, then $x^{2n+1} = x$, so the sum becomes $$x \sum_{n=0}^\infty \frac{(-1)^n}{2n+1},$$ which converges because $\frac{(-1)^n}{2n+1}$ is an alternating series with $\frac{1}{2n+1}$ decreasing and going to $0$.
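At the endpoint $x=1$ this is the Leibniz series for $\pi/4$, and the alternating-series bound $|S_N-\pi/4|\le\frac{1}{2N+1}$ can be watched directly (my own sketch):

```python
import math

def partial_sum(N):
    """S_N = Σ_{n<N} (−1)^n / (2n+1), the Maclaurin series of arctan at x = 1."""
    return sum((-1) ** n / (2 * n + 1) for n in range(N))

target = math.pi / 4
# The error after N terms never exceeds the first omitted term, 1/(2N+1).
bound_holds = all(
    abs(partial_sum(N) - target) <= 1 / (2 * N + 1)
    for N in (10, 100, 1000, 10_000)
)
```

The convergence is very slow (the error decays like $1/N$), which is exactly the boundary-of-convergence behaviour one expects at $x=\pm1$.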
{ "language": "en", "url": "https://math.stackexchange.com/questions/3713705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Floor function of a product I'm reading a book about proofs and I'm currently stuck in this problem. Prove that for all real numbers $x$ and $y$ we have that: $$\lfloor x\rfloor \lfloor y\rfloor \leq \lfloor xy\rfloor \leq \lfloor x\rfloor \lfloor y \rfloor + \lfloor x \rfloor + \lfloor y \rfloor$$ I though I could do it by cases, considering when the number is a pure integer and when is an integer plus some real number. But by doing this I end up with having a lot of cases to show. Is there any better, simpler and clever approach? Thank you! PS: I end up with a lot of cases because there is a point where I will have to consider "subcases" of cases specifically when the integer part is multiplying with the positive "rest" less than one
It can help to write $x=a+r$ where $a$ is an integer and $0\leq r <1$, so that $\lfloor x \rfloor = a.$ Similarly, let $y=b+s$ where $b$ is an integer and $0\leq s <1.$ Then $$\lfloor x \rfloor \lfloor y \rfloor = ab$$ and $$\lfloor x y \rfloor = \lfloor ab+as + br +sr \rfloor = ab +\lfloor as + br +sr \rfloor.$$ (Assuming $x$ and $y$ are positive,) the last term above has to be at least $0$, so that's your first inequality. Then take that last term, and since $r$ and $s$ are less than one: $$\lfloor as + br +sr \rfloor \leq \lfloor a + b +sr \rfloor = a + b + \lfloor sr \rfloor.$$ The last term is $0$, so there's your second inequality.
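The inequality (for nonnegative $x,y$, as the proof assumes) can be stress-tested with exact rational arithmetic; the same sketch also shows why the positivity assumption matters, via the counterexample $x=y=-\tfrac12$ (my own check):

```python
import math
import random
from fractions import Fraction

random.seed(0)
holds = True
for _ in range(20_000):
    x = Fraction(random.randrange(0, 20_000), 1000)   # exact rationals in [0, 20)
    y = Fraction(random.randrange(0, 20_000), 1000)
    fx, fy, fxy = math.floor(x), math.floor(y), math.floor(x * y)
    if not (fx * fy <= fxy <= fx * fy + fx + fy):
        holds = False

# For negative inputs the lower bound can fail:
# floor(-1/2)·floor(-1/2) = 1, but floor(1/4) = 0.
x = y = Fraction(-1, 2)
lower_bound_fails = math.floor(x) * math.floor(y) > math.floor(x * y)
```

Using `Fraction` avoids any floating-point rounding near integer thresholds, so the check is exact.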
{ "language": "en", "url": "https://math.stackexchange.com/questions/3714040", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Asymptotic behavior of recursive sequence Suppose, the real sequence $x_{n+1}=\frac{1}{2}(x_n+\sqrt{x_n^2+c})$ with c>0 is given. Find the asymptotic behavior of this sequence. I have shown, that this sequence goes to infinity as $n\to\infty$ per contradiction. I guess, that it holds $x_n \approx \frac{1}{2}\sqrt{cn}$ which is motivated by $x_{n+1}^2=\frac{1}{2}x_n^2(1+\sqrt{1+\frac{c}{x_n}})+\frac{1}{4}c \approx x_n^2 + \frac{1}{4}c$ where I used the approximation $\sqrt{1+\frac{c}{x_n}} \approx 1$ as $n \to \infty$. I have problems, to turn this into a formal proof, since it holds for the error term for my approximation $\frac{1}{2}x_n^2(1-\sqrt{1+\frac{c}{x_n}}) \to \infty$
If we set $y_n=x_n^2$ then $$y_{n+1}=\frac{y_n}2+\frac c4+\frac{y_n}2\sqrt{1+\frac{c}{y_n}}=y_n+\frac{c}2+O(y_n^{-1}).$$ This proves that $(y_n)$ grows at least linearly, so that $y_n^{-1}=O(n^{-1})$. Therefore $$y_n=\frac{cn}2+O(\ln n)$$ and $$x_n=\sqrt{\frac{cn}2}+O\left(\frac{\ln n}{\sqrt n}\right).$$
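The conclusion $x_n\sim\sqrt{cn/2}$ (note: not the $\frac12\sqrt{cn}$ guessed in the question; the two differ by a factor $\sqrt2$) can be seen by simply iterating the recurrence. A sketch of my own, with an arbitrary starting value:

```python
import math

def ratio(c, n, x0=1.0):
    """x_n / sqrt(c n / 2) for the recurrence x_{k+1} = (x_k + sqrt(x_k² + c)) / 2."""
    x = x0
    for _ in range(n):
        x = 0.5 * (x + math.sqrt(x * x + c))
    return x / math.sqrt(c * n / 2)

r_small = ratio(1.0, 10**6)    # both ratios tend to 1 as n grows,
r_large = ratio(5.0, 10**6)    # with an O(log n / n) relative correction
```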
{ "language": "en", "url": "https://math.stackexchange.com/questions/3714229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving $A - B \subset A - (B - A)$ I believe I have been able to prove that for sets $A$ and $B$, $A - (B - A) \supset A - B$, but my proof is not particularly elegant. I was hoping someone knew of a more clever or straightforward way to show this. My proof is: Let $x \in A - B$. Then $x \in A$ and $x \not \in B$. So $x \not \in \{y \mid y \in B, \; y \not \in A \}$, so $x \not \in B - A$. Since $x \in A$ and $x \not \in B-A$, $x \in A - (B - A)$, so $A - B \subset A - (B - A)$.
If $C \subset D$, then we have $A - D \subset A-C$. We have $B - A \subset B$, hence $A- B \subset A - (B-A)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3714375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Generalisation of a former question concering holomorphic continuation of $z\mapsto \dfrac{1}{z^k}$ Let $\mathbb{D} = \{ z\in \mathbb{C}: |z| < 1\}$ be the unit disk. I want to show that for any $k\in\mathbb{N}$, there is no holomorphic function $f$ which extends continuously to $\partial \mathbb{D}$, such that $$f(z) = \dfrac{1}{z^k}\quad \forall z\in\partial\mathbb{D}$$ There have been attempts to solve this question for $k=1$, see e.g. here. However, I cannot use the mean value theorem for holomorphic functions, because in our lecture, it is defined as follows: Let $f$ be holomorphic on a region $G$ and $B_r(z)$ (the closed ball around $z$ with radius $r$) be a proper subset of $G$. Then $$f(z) = \dfrac{1}{2\pi} \int_0^{2\pi} f(z+re^{it}) dt$$ The proof from here uses $r=1$ and $z=0$, but clearly, $B_1(0)$ is not a proper subset of the region $\mathbb{D}$. I also tried Schwarz' Lemma using $g(z) = z^k f(z)$ (to ensure $g(0) = 0$), but I couldn't conclude what I want.
First proof: Let $h(z)=z^kf(z)-1$. Then $h$ is holomorphic in $\mathbb{D}$, continuous on $\mathbb{D}\cup\partial\mathbb{D}$ and $h=0$ on $\partial\Bbb{D}$. Applying the maximum modulus principle, we obtain that $h\equiv 0$, which implies that $\lim_{z\to 0}f(z)=\infty$, contradicting the fact that $f$ is holomorphic near $0$. Second proof: First note that the condition implies that, on the boundary, $$|f(z)|=1\ \ (*)$$ I claim that $(*)$ implies that $f$ is, up to a unimodular constant, a finite Blaschke product. This in turn rules out $f(z)=\frac{1}{z^k}$ on all of $\partial\mathbb{D}$: a finite Blaschke product extends holomorphically across $\partial\mathbb{D}$, so $z^kf(z)-1$ would be holomorphic near $\partial\mathbb{D}$ and vanish on it, hence vanish identically by the identity principle, forcing $f(z)=\frac{1}{z^k}$, which is not holomorphic at $0$. To prove the claim, first note that $(*)$ implies that $f$ has a finite number of zeros in $\Bbb{D}$. Let us call those different from $0$ $\{z_1,\dots,z_k\}$ (with the respective multiplicities), and let $m$ be the multiplicity of $0$ as a zero of $f$. Consider now $$g(z)=\frac{f(z)}{z^m}\prod_{j=1}^k\frac{1-\bar{z}_jz}{z-z_j}$$ This function does not have any zeros in $\mathbb{D}$ and satisfies $(*)$ again (since every factor of the product has modulus one on $\partial\Bbb{D}$). The fact that $g(z)$ is a constant $c$ with $|c|=1$ follows from the maximum modulus principle applied to $g$ and $\frac{1}{g}$ (which forces $|g|\equiv 1$), and this in turn implies that $$f(z)=c\,z^m\prod_{j=1}^k\frac{z-z_j}{1-\bar{z}_jz},$$ a finite Blaschke product, as claimed. Note: The factors I included in the product are the inverses of the Blaschke factors, an incredibly useful tool for bounded holomorphic functions on $\mathbb{D}$.
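The one computational ingredient of the second proof, that each factor $\frac{1-\bar z_jz}{z-z_j}$ has modulus $1$ on $\partial\mathbb{D}$, is easy to confirm numerically (my own sketch; the sample points $z_j$ are arbitrary):

```python
import cmath
import math

def factor_modulus(z, zj):
    """|(1 − conj(z_j)·z) / (z − z_j)| for |z| = 1 and z_j in the open unit disk."""
    return abs((1 - zj.conjugate() * z) / (z - zj))

# On the unit circle, |1 − conj(z_j) z| = |conj(z − z_j)| = |z − z_j|,
# so the modulus is identically 1.
unimodular = all(
    abs(factor_modulus(cmath.exp(2j * math.pi * k / 360), zj) - 1) < 1e-9
    for zj in (0.3 + 0.4j, -0.7 + 0.1j, 0.05 - 0.9j)
    for k in range(360)
)
```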
{ "language": "en", "url": "https://math.stackexchange.com/questions/3714530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the trace of this object non-negative? $\phi$ is an arbitrary superposition of powers of a single hermitian matrix, $M^k$: $$\phi =\sum_{k} \alpha_{k} M^{k}$$ Why is the following statement true? $$\langle \mathrm{tr}\left(\phi^\dagger \phi \right)\rangle \geq 0$$ I have recently asked a related question, Why is the trace of a hermitian matrix raised to an even power greater than or equal to 0?. I understand that the trace over any $M^{2k}$ would be non-negative. But here the product $\phi^\dagger \phi$ will in general have odd powers, and so there could be negative terms in the trace, according to my understanding. I am coming across this statement in Eq. (3.9) of https://arxiv.org/abs/2002.08387
Actually if $M$ is any matrix we have $\operatorname{tr}(M^*M) \geq 0$: the $j$-th diagonal entry of $M^*M$ is $\sum_i \overline{M_{ij}}M_{ij}=\sum_i |M_{ij}|^2$, so $\operatorname{tr}(M^*M)=\sum_{i,j}|M_{ij}|^2\geq 0$, with no diagonalizability assumption needed.
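A quick numerical illustration of this identity on a random complex matrix (my own check, plain Python with no linear-algebra library):

```python
import random

random.seed(1)
n = 4
M = [[complex(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(n)]
     for _ in range(n)]

# M* is the conjugate transpose; form M*M explicitly and take its trace.
Mstar = [[M[j][i].conjugate() for j in range(n)] for i in range(n)]
MstarM = [[sum(Mstar[i][k] * M[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]

trace = sum(MstarM[i][i] for i in range(n))
entry_sum = sum(abs(M[i][j]) ** 2 for i in range(n) for j in range(n))
```

The trace is real, nonnegative, and equals the sum of squared entry moduli, exactly as the identity predicts.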
{ "language": "en", "url": "https://math.stackexchange.com/questions/3714708", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A divergent Definite Integral I am trying the study a definite integral $$\int_{0}^{\pi}\frac{1}{(x-\frac{\pi}{2})^3+\cos{x}} dx$$ It is a divergent integral, but I am struggling to show that fact. Since the discontinuous point is at $\frac{\pi}{2}$, so $$\int_{0}^{\pi}\frac{1}{(x-\frac{\pi}{2})^3+\cos{x}} dx= \lim_{a \rightarrow \frac{\pi}{2}} \int_{0}^{a} \frac{1}{(x-\frac{\pi}{2})^3+\cos{x}} dx + \lim_{a \rightarrow \frac{\pi}{2}} \int_{a}^{\pi} \frac{1}{(x-\frac{\pi}{2})^3+\cos{x}} dx$$ Then I study the first term on RHS which is $\lim_{a \rightarrow \frac{\pi}{2}} \int_{0}^{a} \frac{1}{(x-\frac{\pi}{2})^3+\cos{x}} dx$. For $x \in [0,\frac{\pi}{2}]$, I try to bound the function $\frac{1}{(x-\frac{\pi}{2})^3+1} \leq \frac{1}{(x-\frac{\pi}{2})^3+\cos{x}} \leq \frac{1}{(x-\frac{\pi}{2})^3}$. I can show $\int_{0}^{a}\frac{1}{(x-\frac{\pi}{2})^3} dx$ is divergent. But by comparison, I cannot say that the required function is also divergent. How can I continue to show that the function is divergent? Thanks~
One approach may be to change variables to $u = x-\pi/2$: $$ \begin{split} \int_0^\pi \frac{dx}{(x-\frac{\pi}{2})^3+\cos{x}} &= \int_{-\pi/2}^{\pi/2} \frac{du}{u^3+\cos(u + \pi/2)} \\ &= \int_{-\pi/2}^{\pi/2} \frac{du}{u^3-\sin u}. \end{split} $$ The integrand is odd, so it is enough to show that $\int_0^{\pi/2} \frac{du}{u^3-\sin u}$ diverges: then each of the two halves diverges (with opposite signs), and the improper integral does not exist. Now for $0 < u \le 1/2$ we have $\sin u \ge u - \frac{u^3}{6} > u^3$ and $\sin u \le u$, hence $-u \le u^3-\sin u < 0$ and therefore $$ \frac{1}{u^3-\sin u} \le -\frac{1}{u}. $$ Since $\int_0^{1/2}\frac{du}{u}$ diverges, $\int_0^{\pi/2} \frac{du}{u^3-\sin u}$ diverges to $-\infty$ at the lower endpoint. (There is a further singularity where $u^3=\sin u$, near $u\approx 0.93$, but the divergence at $0$ already settles the question.)
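Numerically the divergence is visible as logarithmic growth near $u=0$, where $u^3-\sin u=-u+O(u^3)$ makes the integrand behave like $-1/u$. A small check of my own:

```python
import math

def f(u):
    return 1.0 / (u ** 3 - math.sin(u))

# The integrand behaves like −1/u near 0: u·f(u) → −1.
near_zero_ratio = 1e-4 * f(1e-4)

# Consequently ∫_{ε/10}^{ε} f(u) du should be close to −ln 10 for small ε.
def chunk(eps, steps=100_000):
    a, b = eps / 10, eps
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h   # midpoint rule

piece = chunk(1e-3)
```

Each decade closer to $0$ contributes roughly another $-\ln 10$, so the truncated integrals run off to $-\infty$.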
{ "language": "en", "url": "https://math.stackexchange.com/questions/3714849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Volume of the solid generated by the region bounded by the curves $y=\sqrt{x},y=\frac{x-3}{2},y=0$ about the $x$ axis Use the shell method to find the volume of the solid generated by revolving the region bounded by $$y=\sqrt{x},\qquad y=\frac{x-3}{2},\qquad y=0$$ about the $x$ axis. What I try: Solving the two given curves, $$\sqrt{x}=\frac{x-3}{2}\Longrightarrow x^2-10x+9=0.$$ We have $x=1$ (invalid) and $x=9$ (valid). Putting $x=9$ in $y=\sqrt{x}$, we have $y=3$. Now the volume of the solid formed by rotation about the $x$ axis is $$\int^{9}_{0}2\pi y\bigg(y^2-2y-3\bigg)dy.$$ Is my volume integral right? If not, then how do I solve it? Help me please.
Using the disk/washer method, I would instead split this up into two integrals: $$\pi\int_0^3{(\sqrt{x})^2}\,dx + \pi\int_3^9{(\sqrt{x})^2-\left(\frac{x-3}{2}\right)^2}\,dx.$$ Using the shell method: $$\int_0^3{2\pi y(2y+3-y^2)}\,dy.$$
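Both set-ups evaluate to the same volume ($\frac{45\pi}{2}$ by my computation), which a straightforward numerical quadrature confirms:

```python
import math

def midpoint(f, a, b, n=200_000):
    """Midpoint-rule approximation of ∫_a^b f."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

# Disk/washer method, integrating along x.
disks = (math.pi * midpoint(lambda x: x, 0.0, 3.0)
         + math.pi * midpoint(lambda x: x - ((x - 3) / 2) ** 2, 3.0, 9.0))

# Shell method, integrating along y: shells run from x = y² to x = 2y + 3.
shells = midpoint(lambda y: 2 * math.pi * y * (2 * y + 3 - y * y), 0.0, 3.0)
```

Note the shell height $2y+3-y^2$ and the limits $0\le y\le 3$, which is where the integral proposed in the question went wrong.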
{ "language": "en", "url": "https://math.stackexchange.com/questions/3714995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Hypothesis testing with an exponential distribution I have the following problem: Given the data $X_1, X_2, \ldots, X_{15}$ which we consider as a sample from a distribution with a probability density of $\exp(-(x-\theta))$ for $x\ge\theta$. We test the $H_0: \theta=0$ against the $H_1: \theta>0$. As test statistic $T$ we take $T = \min\{x_1, x_2, \ldots, x_{15}\}$ . Big values for $T$ indicate the $H_1$. Assume the observed value of $T$ equals $t=0.1$. What is the p-value of this test? Hint: If $X_1, X_2,\ldots,X_n$ is a sample from an $\operatorname{Exp}(\lambda)$ distribution, than $\min\{X_1, X_2,\ldots,X_n\}$ has an $\operatorname{Exp}(n\lambda)$ distribution. The solution says 0.22. I know that the first question you have to ask youself regarding the p-value is: "What is the probability that the H0 would generate a sample θ>0?" So I assume H0 is true and take θ = 0. The probability-density function becomes: f(x) = Exp(-x). I take up the hint, so I make it f(x) = Exp(-nx) This is where I get stuck. I don't know how to proceed with the information given: Assume the observed value of T equals t=0.1. Can I have feedback on this problem? Thanks, Ter
You know that the test statistic under the null hypothesis has distribution $$T\sim \operatorname{Exp}(n)=\operatorname{Exp}(15).$$ The weight of the tail of this distribution is $$\operatorname{P}(T>t)=\exp(-15t).$$ We reject the null at significance level $\alpha$ exactly when $$\alpha \geq \operatorname{P}(T>t)=\exp(-15t).$$ The p-value is the smallest significance level at which we reject the null, so $$\text{p-value}=\exp(-15t)=\exp(-15\times 0.1)\approx 0.22.$$
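A short simulation (my own sketch) confirms both the hint (the minimum of $15$ standard exponentials is $\operatorname{Exp}(15)$) and the resulting p-value:

```python
import math
import random

random.seed(42)

# Closed form: p-value = P(T > 0.1) with T ~ Exp(15).
p_value = math.exp(-15 * 0.1)

# Monte Carlo estimate of P(min(X_1, …, X_15) > 0.1) with X_i ~ Exp(1).
trials = 100_000
hits = sum(
    min(random.expovariate(1.0) for _ in range(15)) > 0.1
    for _ in range(trials)
)
estimate = hits / trials
```

With $10^5$ trials the Monte Carlo standard error is about $0.0013$, so the estimate lands well within $0.01$ of the closed form $e^{-1.5}\approx 0.223$.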
{ "language": "en", "url": "https://math.stackexchange.com/questions/3715139", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Expanding linear factors of a polynomial Expanding $$a(x-r_1)(x-r_2)\cdots (x-r_n)$$ should give $$ax^n-a(r_1+r_2+\cdots r_n)x^{n-1}+a(r_1 r_2+r_1 r_3+\cdots r_{n-1}r_n)x^{n-2}+\cdots (-1)^{n}ar_1 r_2\cdots r_n$$ but I fail to prove it. I was only able to do cases $n=1$, $n=2$ and $n=3$ (and even the result for $n=3$ seems a bit different): $$\begin{align*}a(x-r_1)&=ax-ar_1\\ a(x-r_1)(x-r_2)&=(ax-ar_1)(x-r_2)\\&=ax^2-axr_2-axr_1+axr_1 r_2\\&=ax^2-a(r_1+r_2)x+ar_1 r_2 x\\a(x-r_1)(x-r_2)(x-r_3)&=(ax^2-a(r_1 +r_2)x+ar_1 r_2 x)(x-r_3)\\&=ax^3-a(r_1 +r_2)x^2+ar_1 r_2 x^2-ar_3 x^2+ar_3 (r_1 +r_2)x-ar_1 r_2 r_3x\\&=ax^3-x^2(a(r_1 +r_2)-ar_1 r_2+ar_3)+x(ar_3 (r_1+r_2)-ar_1 r_2 r_3)\\&=ax^3-a(r_1+r_2-r_1r_2+r_3)x^2+a(r_1r_3+r_2r_3-r_1r_2r_3)x\end{align*}$$ Could someone help me to prove the general case for all $n$? Edit: There's an error in my computation.
Let us first examine some examples to guess the general case:$$(x-r_1)(x-r_2)=x^2-(r_1+r_2)x+r_1r_2$$ $$(x-r_1)(x-r_2)(x-r_3)=x^3-(r_1+r_2+r_3)x^2+(r_1r_2 + r_1r_3+r_2r_3)x-r_1r_2r_3.$$So, we can guess the following identity:$$\prod_{i=1}^n(x-r_i)=\sum_{k=0}^n\sum_{1 \le j_1 \lt ... \lt j_k \le n}(-1)^kr_{j_1} ... r_{j_k}x^{n-k}.$$Let us prove the claim by induction. The base case is trivial. So, let us assume that the claim is correct for $n=m$, that is,$$\prod_{i=1}^m(x-r_i)=\sum_{k=0}^m\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^kr_{j_1} ... r_{j_k}x^{m-k}.$$ So, we need to prove the claim for $n=m+1$ as follows.$$\prod_{i=1}^{m+1}(x-r_i)=\left ( \prod_{i=1}^m(x-r_i) \right ) \left ( \vphantom{\prod_{i=}^n} x-r_{m+1} \right )$$ $$=\left (\sum_{k=0}^m\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^kr_{j_1} ... r_{j_k}x^{m-k} \right ) \left ( \vphantom{\prod_{i=}^n} x-r_{m+1} \right )$$ $${=\sum_{k=0}^m\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^kr_{j_1} ... r_{j_k}x^{(m+1)-k} -\sum_{k=0}^m\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^kr_{j_1} ... r_{j_k}r_{m+1}x^{m-k}}$$ $$=\left (x^{m+1}+\sum_{k=1}^m\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^kr_{j_1} ... r_{j_k}x^{(m+1)-k} \right ) - \left ( \sum_{k=0}^{m-1}\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^kr_{j_1} ... r_{j_k}r_{m+1}x^{m-k}+ (-1)^m r_1 ... r_m r_{m+1} \right )$$ $$=\left (x^{m+1}+\sum_{k=1}^m\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^kr_{j_1} ... r_{j_k}x^{(m+1)-k} \right ) - \left ( \sum_{k=1}^{m}\sum_{1 \le j_1 \lt ... \lt j_k \le m}(-1)^{k-1}r_{j_1} ... r_{j_{k-1}}r_{m+1}x^{m-(k-1)}+ (-1)^m r_1 ... r_m r_{m+1} \right )\tag{*}\label{*}$$ $$= \left ( x^{m+1} + (-1)^{m+1} r_1 ... r_m r_{m+1} + \sum_{k=1}^m (-1)^k \left ( \sum_{1 \le j_1 \lt ... \lt j_k \le m}r_{j_1} ... r_{j_k}+ r_{j_1} ... r_{j_{k-1}}r_{m+1} \right ) x^{(m+1)-k} \right )$$ $${= \left ( x^{m+1} + (-1)^{m+1} r_1 ... r_m r_{m+1} + \sum_{k=1}^m (-1)^k \sum_{1 \le j_1 \lt ... \lt j_k \le m+1}r_{j_1} ... 
r_{j_k} x^{(m+1)-k} \right )}$$ $$=\sum_{k=0}^{m+1} \sum_{1 \le j_1 \lt ... \lt j_k \le m+1}(-1)^k r_{j_1} ... r_{j_k} x^{(m+1)-k}.\tag{**}\label{**}$$Thus, by induction we proved that for any natural number $n$ the following identity holds:$$\prod_{i=1}^n(x-r_i)=\sum_{k=0}^n\sum_{1 \le j_1 \lt ... \lt j_k \le n}(-1)^kr_{j_1} ... r_{j_k}x^{n-k}.$$ Footnote \ref{*} is followed from the following property of summation:$$\sum_{i=m}^nA_i=\sum_{i=m+1}^{n+1}A_{i-1}.$$ \ref{**} is followed from considering the fact that for any fixed $k$ one can decompose the sum $\sum_{1 \le j_1 \lt ... \lt j_k \le m+1} r_{j_1} ... r_{j_k}$ into two sums: (i) the sum of terms not containing $r_{j_{m+1}}$, that is, $\sum_{1 \le j_1 \lt ... \lt j_k \le m} r_{j_1} ... r_{j_k}$, and (ii) the sum of terms containing $r_{j_{m+1}}$, that is $\sum_{1 \le j_1 \lt ... \lt j_k \le m} r_{j_1} ... r_{j_{k-1}}r_{j_{m+1}}$.
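The identity (Vieta's formulas, up to sign) can also be cross-checked mechanically: multiply out $\prod_i(x-r_i)$ and compare each coefficient with the corresponding elementary symmetric sum. A sketch of that check in exact integer arithmetic (my own, not part of the proof):

```python
from itertools import combinations
from math import prod

def poly_from_roots(roots):
    """Coefficients of ∏(x − r_i), highest degree first."""
    coeffs = [1]
    for r in roots:
        new = coeffs + [0]                  # multiply the current poly by x ...
        for i in range(len(coeffs)):
            new[i + 1] -= r * coeffs[i]     # ... and subtract r times it
        coeffs = new
    return coeffs

def esym(roots, k):
    """Elementary symmetric sum e_k = Σ r_{j1}···r_{jk} over j1 < … < jk."""
    return sum(prod(c) for c in combinations(roots, k))

identity_holds = True
for roots in ([2, -3, 5, 7, -1], [1, 1, 4], [-2]):
    coeffs = poly_from_roots(roots)
    for k in range(len(roots) + 1):
        if coeffs[k] != (-1) ** k * esym(roots, k):
            identity_holds = False
```

Repeated roots are handled too, since nothing in the identity requires the $r_i$ to be distinct.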
{ "language": "en", "url": "https://math.stackexchange.com/questions/3715254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Find all the endomorphisms of the multiplicative group $\mathbb R^+$. I am looking for the set $End(\mathbb R^+)$, i.e. the set of all endomorphisms from $\mathbb R^{+}$ to itself, where the operation is multiplication. Can someone help me to find them explicitly? I doubt whether they can be found explicitly or are merely shown to exist. Do we require Zorn's Lemma here?
$\mathbb R^+$ is isomorphic to the additive group $\mathbb R$ via the logarithm. This is a $\mathbb Q$-vector space. Any endomorphism of this as a group will be $\mathbb Q$-linear, since it is $\mathbb Z$-linear. Thus the set of endomorphisms is all $\mathbb Q$-linear maps. You won't find an explicit description of all of these since we can't even write $\mathbb R$ as a $\mathbb Q$-vector space explicitly, but this is at least a complete description of them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3715438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is a group of prime-power order always abelian? Let $G$ be a group of order $p^n$, with $p$ prime. By Sylow's first theorem, there exists at least one subgroup of order $p^n$ (the number of subgroups of order $p^i$ is $1$ mod $p$ per $i$). The subgroups with order $p^n$ are all Sylow-$p$ groups. Now, by Sylow's third theorem, because the group is of order $p^n$, the number $m_{p^{n}}$ of such subgroups must divide $\#G/p^n =1$, and only $1$ divides $1$, so there is only one subgroup of order $p^n$. By Sylow's second theorem, all Sylow-$p$ groups are conjugated to each other by at least one element $g\in G$, so, for any $S$ and $S'$, we have $S=gS'g^{-1}$. In this case, there is only one Sylow-$p$ group, so it is conjugated to itself. Of course, that one subgroup is the group itself. We now have $gG=Gg$ for some $g$ in $G$. Can we get to the entire group being abelian, from here? I ask because my textbook on Abstract Algebra states that any group of order $p^2$ is abelian, and I'm curious whether it generalises. Edit: As has been pointed out, everything I've proved above is quite trivial. Below it is discussed that the essential question is actually "How does one prove that groups of order $p^2$ are abelian using Sylow theory?", since my textbook explicitly mentions this property as an application of Sylow's theorems. Edit 2: One of the authors has confirmed that they accidentally mixed some classic classification theorems into the list of applications of Sylow theory, and that this was one of them.
You can construct a nonabelian group of order $p^n$, for $p$ an odd prime, $n\gt2$, by selecting a nontrivial homomorphism $\varphi:\Bbb Z_p\to\rm{Aut}(\Bbb Z_{p^{n-1}})\cong\Bbb Z_{p^{n-1}-p^{n-2}}$. Let $G=\Bbb Z_{p^{n-1}}\rtimes_\varphi\Bbb Z_p$. If, on the other hand, $p=2$, consider dihedral groups.
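A different, very concrete witness (my own illustration, complementing the semidirect-product construction above) is the Heisenberg group of upper unitriangular $3\times3$ matrices over $\mathbb Z/p$, which is nonabelian of order $p^3$ for every prime $p$:

```python
from itertools import product

p = 3   # any prime works; the group below has order p³ = 27

def mul(m, n):
    """Multiply 3×3 matrices with entries taken mod p."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(3)) % p for j in range(3))
        for i in range(3)
    )

# Upper unitriangular matrices [[1,a,b],[0,1,c],[0,0,1]] over Z/p.
elements = {
    ((1, a, b), (0, 1, c), (0, 0, 1))
    for a, b, c in product(range(p), repeat=3)
}

order = len(elements)
closed = all(mul(x, y) in elements for x in elements for y in elements)
abelian = all(mul(x, y) == mul(y, x) for x in elements for y in elements)
```

Closure confirms these 27 matrices form a group under multiplication mod $p$, and the abelian check fails, so order $p^2$ is genuinely the cutoff.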
{ "language": "en", "url": "https://math.stackexchange.com/questions/3715585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Showing that disjoint intervals implies independent number of arrivals for random point set If we have a number of i.i.d. random variables, $X_1, X_2, ..., X_z$ (where $z$ is the realisation of a random variable $Z\sim Po(\lambda)$ independent of each $X_i$) with pdf $f$ that form a random point set, then I want to show that for two disjoint sets (in this case intervals), we have that $N(A_1)$ and $N(A_2)$ are independent. These are the number of "arrivals" (as in a Poisson Process - but I haven't shown it to be one yet) in said intervals. I will denote $p_i:=\int_{A_i} f\,dx$ to be the probability of being in the interval $A_i$. I want to show $P(N(A_1)=n\space\cap\space N(A_2)=m)=P(N(A_1)=n)P(N(A_2)=m)$. If we only consider the cases where $n+m\leq z$, then $$P(N(A_1)=n\space\cap\space N(A_2)=m)=P(N(A_1)=n)P(N(A_2)=m|N(A_1)=n)$$ So I want that $P(N(A_2)=m|N(A_1)=n)=P(N(A_2)=m)$. However, $$P(N(A_2)=m|N(A_1)=n)=\binom{z-n}{m}p_2^m(1-p_2)^{z-n-m}$$ is definitely not the same expression as the desired one, i.e. $\binom{z}{m}p_2^m(1-p_2)^{z-m}$. Where did I go wrong? I know for a fact that the set must describe a Poisson Process, so I must have made an error somewhere.
The point process you are considering is not a Poisson point process (PPP) but rather a binomial point process (BPP). Indeed, for a PPP, $N(A)$ is Poisson distributed while in your case $N(A)$ is a binomial random variable. In particular, even if $A_1$ and $A_2$ are disjoint, $N(A_1)$ and $N(A_2)$ are dependent since $$N(A_1) + N(A_2) = N(A_1 \cup A_2) \leq z.$$ I should note however that there is an error in your calculation. We have $$P(N(A_1) = n, \, N(A_2) = m) = \binom{z}{n} \binom{z-n}{m}p_1^np_2^m (1-p_1-p_2)^{z-n-m}\tag{1},$$ which is saying that we want $n$ points in $A_1$, $m$ points in $A_2$ AND $z-n-m$ points in $\mathbb{R}\setminus (A_1\cup A_2)$. Final note: if you want a PPP then you should take $z$ Poisson distributed (instead of deterministic) and independent of the $X_i$. EDIT: Let $Z$ be Poisson distributed with mean $\lambda$ and set $N = \sum_{i=1}^Z\delta_{X_i}$. Then $$\begin{align} P(N(A_1) = n, \, N(A_2) = m) &= \sum_{z=0}^\infty P(N(A_1) = n,\, N(A_2) = m, \, Z = z) \\ &= \sum_{z=n+m}^\infty P(N(A_1) = n,\, N(A_2) = m, \, Z = z)\\ &=\sum_{z=n+m}^\infty P(N(A_1) = n,\, N(A_2) = m| Z = z)e^{-\lambda}\frac{\lambda^z}{z!}, \end{align}$$ where I used that the summand is zero if $z < n+m$ for the second equality and that $Z \sim Po(\lambda)$ for the last. Now, conditionally on $Z=z$, the computation above is valid and gives that $$\begin{align}P(N(A_1) = n, \, N(A_2) = m|Z=z) &= \binom{z}{n} \binom{z-n}{m}p_1^np_2^m (1-p_1-p_2)^{z-n-m} \\ &= \frac{z!}{n! m!(z-n-m)!}p_1^n p_2^m (1-p_1-p_2)^{z-n-m}. \end{align}$$ We deduce that $$\begin{align} P(N(A_1) = n, \, N(A_2) = m) &= \frac{p_1^n p_2^m e^{-\lambda}}{n!m!}\sum_{z=n+m}^\infty \frac{\lambda^z (1-p_1-p_2)^{z-n-m}}{(z-n-m)!} \\ &= \frac{p_1^n p_2^m \lambda^{n+m}e^{-\lambda}}{n!m!}\sum_{z=0}^\infty \frac{\lambda^z (1-p_1-p_2)^{z}}{z!} \\ &= \frac{p_1^n p_2^m \lambda^{n+m}}{n! m!} e^{-\lambda(p_1+p_2)} \tag{2}. 
\end{align}$$ On the other hand, summing over all $m \in \mathbb{N}$ in $(2)$, we get that $$P(N(A_1) = n) = \frac{(\lambda p_1)^n}{n!}e^{-\lambda p_1},$$ i.e. $N(A_1) \sim Po(\lambda p_1)$. Similarly $N(A_2) \sim Po(\lambda p_2)$. It follows that $$P(N(A_1) = n, N(A_2) = m) = P(N(A_1) =n)P(N(A_2) =m).$$
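None of this is needed for the proof, but the conclusion of the EDIT is easy to sanity-check by simulation. The sketch below (plain Python; the rate $\lambda$ and the two disjoint intervals are made-up values) draws a Poisson number of iid uniform points and looks at the counts in $A_1$ and $A_2$:

```python
import math
import random

random.seed(7)

LAM = 6.0                        # made-up rate for Z ~ Po(LAM)
A1, A2 = (0.0, 0.3), (0.3, 0.5)  # disjoint intervals; f = uniform density on [0,1]

def sample_poisson(lam):
    # Knuth's multiplication method (fine for small lam)
    threshold, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= random.random()
        if prod < threshold:
            return k
        k += 1

trials = 20000
n1, n2 = [], []
for _ in range(trials):
    z = sample_poisson(LAM)
    xs = [random.random() for _ in range(z)]
    n1.append(sum(A1[0] <= x < A1[1] for x in xs))
    n2.append(sum(A2[0] <= x < A2[1] for x in xs))

m1 = sum(n1) / trials                                          # ~ LAM * 0.3 = 1.8
m2 = sum(n2) / trials                                          # ~ LAM * 0.2 = 1.2
cov = sum(a * b for a, b in zip(n1, n2)) / trials - m1 * m2    # ~ 0
print(round(m1, 2), round(m2, 2), round(cov, 2))
```

The empirical means come out near $\lambda p_1$ and $\lambda p_2$ and the empirical covariance near $0$, as expected for independent Poisson counts.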
{ "language": "en", "url": "https://math.stackexchange.com/questions/3715756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $x$ be an eigenvector of $A.$ Is it true that if $x^{\perp}$ is invariant under $A,$ then $A$ is normal? Let $A$ be an $n \times n$ matrix with entries in $\mathbb C,$ and let $x$ be an eigenvector of $A.$ If $x^{\perp}$ is invariant under A, is it true that $A$ is normal? Here is my idea. Let $\lambda$ be an eigenvalue of $A$ such that $Ax = \lambda x.$ Let $y \in x^{\perp}.$ Then, we have that $\overline{\lambda} y \in x^{\perp}$ and $A(\overline{\lambda} y) \in x^{\perp}$ so that $\left<A(\overline{\lambda}y),x\right>=\left<Ay,\lambda x\right>=\left<Ay,Ax\right>.$ So, $A$ is normal. Thanks in advance for your help.
This is false. $x=(1,0,0)^T$ is an eigenvector of $A=\pmatrix{0&0&0\\ 0&0&1\\ 0&0&0}$ and $x^\perp$ is $A$-invariant, but $A$ isn't normal. The statement can be corrected by requiring that $x^\perp$ is $A$-invariant for every eigenvector $x$ of $A$. The corrected statement can be proved by mathematical induction on the size of $A$ or by constructing an orthogonal eigenbasis of $A$.
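A quick plain-Python check of this counterexample (no linear-algebra library needed; since $A$ is real, the adjoint is just the transpose):

```python
A = [[0, 0, 0],
     [0, 0, 1],
     [0, 0, 0]]

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

At = [list(col) for col in zip(*A)]        # transpose

assert [row[0] for row in A] == [0, 0, 0]  # A e1 = 0, so e1 is an eigenvector
assert A[0][1] == 0 and A[0][2] == 0       # A e2 and A e3 stay in the e1-perp plane

print(matmul(A, At) == matmul(At, A))      # False: A A* != A* A, so A is not normal
```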
{ "language": "en", "url": "https://math.stackexchange.com/questions/3715855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the domain and range of $f(x) = \frac{x+2}{x^2+2x+1}$: The domain is: $\forall x \in \mathbb{R}\smallsetminus\{-1\}$ The range is: first we find the inverse of $f$: $$x=\frac{y+2}{y^2+2y+1} $$ $$x\cdot(y+1)^2-1=y+2$$ $$x\cdot(y+1)^2-y=3 $$ $$y\left(\frac{(y+1)^2}{y}-\frac{1}{x}\right)=\frac{3}{x} $$ I can't find the inverse... my idea is to find the domain of the inverse, and that would then be the range of the function. How to show otherwise what is the range here?
Why look for an inverse? Just write the function as follows: $$f(x) = \frac{(x+1)+1}{(x+1)^2} = \frac{x+1}{(x+1)^2}+ \frac{1}{(x+1)^2} $$ and then, with $u=\frac{1}{x+1}$, $$f(x)= \frac{1}{(x+1)}+\frac{1}{(x+1)^2}=u+u^2 = (u+0.5)^2-0.25 = \left(\frac{1}{x+1}+0.5\right)^2-0.25$$ From here you should be able to continue: as $x$ ranges over the domain, $u$ takes every real value except $0$, and transforming back gives the range of $f(x)$.
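A numeric spot-check of where this rewriting leads (the sample grid below is arbitrary): since $f=(u+\tfrac12)^2-\tfrac14$ with $u=\tfrac1{x+1}$ ranging over the nonzero reals, the range should be $[-\tfrac14,\infty)$, with the minimum attained at $u=-\tfrac12$, i.e. $x=-3$.

```python
def f(x):
    return (x + 2) / (x * x + 2 * x + 1)

assert f(-3) == -0.25      # the minimum, at u = -1/2, i.e. x = -3
vals = [f(k / 100) for k in range(-1000, 1001) if k != -100]   # avoid x = -1
print(min(vals), max(vals))
```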
{ "language": "en", "url": "https://math.stackexchange.com/questions/3715987", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Eigenvalues of offset multiplication tables Consider the $n$ x $n$ 'multiplication table' constructed as (using Mathematica language) $$ M_n^s = \text{M[n_,s_]:=Table[ k*m , {k,1+s, n+s}, {m, 1+s, n+s} ] } $$ For example, $$M_4^0 = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 2 & 4 & 6 & 8 \\3 & 6 & 9 & 12 \\4 & 8 & 12 & 16 \end{pmatrix} $$ and $$M_4^1 = \begin{pmatrix} 4 & 6 & 8 & 10 \\ 6 & 9 & 12 & 15 \\8 & 12 & 16 & 20 \\10 & 15 & 20 & 25 \end{pmatrix} $$ Amazingly, these matrices have all zero eigenvalues, except for a single value. The non-zero eigenvalue exhibits an interesting pattern as a function of $n$ and $s.$ Starting with $n=2,$ the sequences read as follows $$ \text{eigv }M_n^0=\{1,5,14,30,55,91,140...\} = (n-1)(2n^2-n)/6 $$ $$ \text{eigv }M_n^1=\{4,13,29,54,90,139,203...\} = (n-1)(2n^2 + 5n + 6)/6 $$ $$ \text{eigv }M_n^2=\{9,25, 50, 86, 135, 199,280...\} = (n-1)(2n^2 + 11n + 24)/6 $$ $$ \text{eigv }M_n^3=\{25,61,110,174,255,355...\} = (n-1)(2n^2 + 17n + 54)/6 $$ I've worked out many of these, and it appears that, in considering the quadratic polynomial, and using $[n^1]$ to mean the coefficient of $n,$ $$ [n^2] = 2, \, [n^1]=6s-1, \, [n^0] = 6s^2 , \ s=1,2,3... $$ The question is: can these observations be proved?
All rows are linearly dependent: by the construction with entries $k*m$, row $k$ is just $(s+k)$ times the vector $(s+1,\ldots,s+n)$, so the matrix has rank one and hence at most one non-zero eigenvalue. Since the sum of the eigenvalues equals the trace, this eigenvalue is $$\lambda=\sum_{j=1}^n (s+j)^2$$
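Since the answer is exact integer arithmetic, it can be verified mechanically; a small script (function names are made up) checking the rank-one structure and the eigenvalue claim for several $n$ and $s$:

```python
# v = (s+1, ..., s+n) is an eigenvector of M[n, s] with eigenvalue trace(M),
# and every row of M is a multiple of v (rank one). All checks are exact.
def check(n, s):
    v = [s + j for j in range(1, n + 1)]
    M = [[a * b for b in v] for a in v]
    lam = sum(x * x for x in v)
    assert lam == sum(M[i][i] for i in range(n))                 # trace
    assert all(M[i] == [v[i] * x for x in v] for i in range(n))  # rank one
    Mv = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    assert Mv == [lam * x for x in v]                            # M v = lam v

for n in range(1, 8):
    for s in range(5):
        check(n, s)
print("ok")
```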
{ "language": "en", "url": "https://math.stackexchange.com/questions/3716140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Maximizing the probability of Bernoulli Sums Suppose I have $X_1,\cdots X_n$ with $X_i \sim \text{Bernoulli}(p_i)$, here we only assume $X_1,\cdots X_n$ are independent but not neccessarily identical so there are n degree of freedom. Now, I want to show that the probability that $$\mathbb{P}(\sum_{i=1}^n X_i = m)$$ is maximized at $p_i = \frac{m}{n}$. How do I do that? It seems intuitive proving it is hard.
Your question does not make sense as posed: with $n$ free parameters you can do better (for instance, taking $p_i=1$ for $m$ of the indices and $p_i=0$ for the rest gives $\mathbb{P}(\sum_i X_i=m)=1$). Consider these changes to your exercise and tell me if this is what you are looking for: * *the $n$ rvs $X_i$ are iid Bernoulli with common parameter $p$ *$$Y= \sum_i X_i \sim Bin(n;p)$$ So $Y$ is again a rv taking values in $y=\{0;1;2;\dots;n\}$. Let's suppose that the result of the sum is a fixed $0 \leq m \leq n$. The goal is to maximize the probability $$\mathbb{P}[Y=m]=\binom{n}{m}p^m(1-p)^{n-m}$$ As we maximize with respect to $p$, we can drop the binomial coefficient and consider only $$p^m(1-p)^{n-m}$$ Take the log (a monotone transformation, so it doesn't change the argmax): $$\log P=m\log p+(n-m)\log(1-p)$$ Differentiate with respect to $p$: $$\frac{\partial}{\partial p}\log P=\frac{m}{p}-\frac{n-m}{1-p}$$ Setting it $=0$ and solving with respect to $p$ you find $p=\frac{m}{n}$. It is obviously a maximum, but you can check it by computing the second derivative.
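A grid search (the values of $n$ and $m$ below are arbitrary sample values) agreeing with the calculus:

```python
from math import comb

n, m = 10, 3   # sample values; the argmax should land at p = m/n = 0.3

def pmf(p):
    return comb(n, m) * p ** m * (1 - p) ** (n - m)

best_p = max((i / 1000 for i in range(1, 1000)), key=pmf)
print(best_p)   # 0.3
```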
{ "language": "en", "url": "https://math.stackexchange.com/questions/3716308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why does $x=e^t+2e^{-t},y=e^t-2e^{-t}$ plot to a straight line? This parametrization satisfies $x^2-y^2=8$, so I was expecting a hyperbola. But what I got was a straight line. Why though? https://www.wolframalpha.com/input/?i=parametric+plot+%28e%5Et%2B2e%5E%28-t%29%2Ce%5Et-2e%5E%28-t%29%29 EDIT- I tried a different range for $t$ and the point (8,0) isn't even on the line that gets plotted
Try to make the plot for $t$ between for example $-3$ and $3$, then you should see it. Also, the point $(\sqrt 8,0)$ should be on there, not $(8,0)$. Regarding the comments, it does not give a straight line: we can write $y = x - 4e^{-t}$, and the 'constant' term $4e^{-t}$ depends on $t$, which varies. However, the reason why it looks like a straight line is that for 'large' $t > 0$ we have that $x \approx e^t$ and $y \approx e^t$, so that $y \approx x$. A similar thing happens for large negative $t$.
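A small numeric check of both points (the sample $t$ values are arbitrary):

```python
import math

def x(t): return math.exp(t) + 2 * math.exp(-t)
def y(t): return math.exp(t) - 2 * math.exp(-t)

# the parametrization really does trace the hyperbola x^2 - y^2 = 8 ...
for t in (-3, -1, 0, 0.5, 2):
    assert abs(x(t) ** 2 - y(t) ** 2 - 8) < 1e-9

# ... and its vertex (sqrt 8, 0) is hit at t = ln(sqrt 2)
t0 = 0.5 * math.log(2)
print(x(t0), y(t0))
```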
{ "language": "en", "url": "https://math.stackexchange.com/questions/3716516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving: $\lim_{x\to 0}\left(\frac{\pi ^2}{\sin ^2\pi x}-\frac1{x^2}\right)=\frac{\pi ^2}3$ without L'Hospital Evaluating $$\lim_{x\to 0}\left(\frac{\pi ^2}{\sin ^2\pi x}-\frac{1}{x^2}\right)$$ with L'Hospital is so tedious. Does anyone know a way to evaluate the limit without using L'Hospital? I have no idea where to start.
Well, you've got your answer, and it's a good one, I'd use series expansions always in such case, but then, the answerer couldn't know you've ever heard of those expansions, and some of your comments show you aren't too familiar with them. That's why SE encourages sharing information about your mathematical background, btw. Most people ignore that. But then, you risk to get an answer like the following, without any l'Hospitals, based only on elementary principles: "From the elementary identity $$\frac1{\sin^2x}-\frac1{x^2}=\sum^\infty_{k=1}3^{-2k}\,\frac{\frac83-\frac{16}9\sin^2\frac{x}{3^k}}{\left(1-\frac43\sin^2 \frac{x}{3^k}\right)^2},$$ letting $x\to0,$ we get $$\frac1{\sin^2x}-\frac1{x^2}\to\sum^\infty_{k=1}3^{-2k}\cdot\frac83=\frac13,$$ and the result we're looking for follows after replacing $x\to\pi x.$" The joke is: that identity is an elementary consequence of the triplication formula $$\sin3y=3\sin y-4\sin^3y$$ and the limit $\sin y/y\to1$ as $y\to0,$ indeed. Of course, such an answer isn't helpful, not just because it is rather obscure, but also because the method is applicable only in exceptional cases.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3716619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Factorize $(a+b+c)^5-(a+b-c)^5-(a+c-b)^5-(b+c-a)^5$ We can see that $a=0$ makes expression $0$, similarly others make expression $0$.This implies that its factorization has $abc$ in it.I substituted $a+b+c=s$ tried to find remaining factors.From here I did not progress any further Later I went on finding this on wolframalpha and it's factorization is $80abc(a^2+b^2+c^2)$. How can we get its factorization by hand without actually expanding whole thing?Is there any slick way?
Let $a+b=x,\,a-b=y$ \begin{align*} (a + b + c)^5 - (a + b - c)^5 - (a + c - b)^5 - (b + c - a)^5=\\ (x+c)^5+(c-x)^5-(y+c)^5+(y-c)^5=\\ c^5 + 5 c^4 x + 10 c^3 x^2 + 10 c^2 x^3 + 5 c x^4 + x^5+\\ c^5 - 5 c^4 x + 10 c^3 x^2 - 10 c^2 x^3 + 5 c x^4 - x^5+\\ -c^5 - 5 c^4 y - 10 c^3 y^2 - 10 c^2 y^3 - 5 c y^4 - y^5+\\ -c^5 + 5 c^4 y - 10 c^3 y^2 + 10 c^2 y^3 - 5 c y^4 + y^5=\\ 20 c^3 (x^2-y^2)+10c(x^4-y^4)=\\ 20 c^3 (x-y)(x+y)+10c(x-y)(x+y)(x^2+y^2)=\\ 80 c^3 b a + 80 c b a (a^2+b^2)=\\ 80 abc(c^2+a^2+b^2) \end{align*}
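Since both sides are polynomials, the identity can be spot-checked exactly at integer points; a small script (the test triples are chosen arbitrarily):

```python
def lhs(a, b, c):
    return ((a + b + c) ** 5 - (a + b - c) ** 5
            - (a + c - b) ** 5 - (b + c - a) ** 5)

def rhs(a, b, c):
    return 80 * a * b * c * (a * a + b * b + c * c)

for t in [(1, 2, 3), (0, 5, -7), (-4, -4, 9), (11, 2, 2), (3, -1, 8), (2, 2, 2)]:
    assert lhs(*t) == rhs(*t)
print(lhs(1, 2, 3))   # 6720 = 80 * 1 * 2 * 3 * (1 + 4 + 9)
```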
{ "language": "en", "url": "https://math.stackexchange.com/questions/3716780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Let $G$ be a group. Let $x,y,z \in G$ such that $[x,y]=y$, $[y,z]=z$, $[z,x]=x$. Prove that $x=y=z=e$. Let $G$ be a group. Let $x,y,z \in G$ such that $[x,y]=y$, $[y,z]=z$, $[z,x]=x$ (the commutators; $[x,y]=xyx^{-1}y^{-1}$). Prove that $x=y=z=e$. I tried to show it by proving that $zx^mz^{-1}=x^{2m}$ with induction. Therefore, if the order of $x$ is even, we can take $zx^{|x|/2}z^{-1}=x^{|x|}=e$ and thus, $x^{|x|/2}=e$ proving it. However, what if the order of $x$ is infinite (or even just odd)? I don't know what to do about that cases. Thank you very much in advance!
We have $$xyx^{-1}y^{-1}=y$$ so $$xyx^{-1}=y^2$$ Similarly $$yzy^{-1}=z^2$$ and $$zxz^{-1}=x^2$$ Note also that since $xyx^{-1}y^{-1}=y$, we have $$yx^{-1}y^{-1}=x^{-1}y$$ so $$yxy^{-1}=y^{-1}x$$ Thus $$yzxz^{-1}y^{-1}=z^2y^{-1}xz^{-2}=y^{-1}xy^{-1}x$$ hence $$yzxz^{-1}y^{-1}x^{-1}=y^{-1}xy^{-1}$$ so that $$yzxz^{-1}x^{-1}y^{-2}=y^{-1}xy^{-1}$$ so that $$yzxz^{-1}x^{-1}=y^{-1}xy$$ Thus $$yzxz^{-1}x^{-1}yzxz^{-1}x^{-1}=x$$ so that $$yzxz^{-1}x^{-1}yzxz^{-1}=e=yx^2yx^{-1}yx^2=yxy^3x^2$$ so $$x^{-1}y^{-1}=y^3x^2$$ so that $$xy^4x^2=x$$ and hence $$xy^4x=e$$ so that $$xxy^4xx^{-1}=xy^8x=e$$ so that $$xy^4xx^{-1}y^{-8}x^{-1}=xy^{-4}x^{-1}=y^{-8}=e$$ It follows that at least one of $y,y^2,y^4,y^8$ is the identity, and since each of these can be obtained by conjugating $y$ by some power of $x$ it follows that they all are. It is easy from there to see that $x=e$ and $z=e$ as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3716955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
solving this probability paragraph Let $B_n$ denote the event that $n$ fair dice are rolled once, with $P(B_n)=1/2^n$ where $n$ is a natural number. Hence $B_1,B_2,B_3,\dots,B_n$ are pairwise mutually exclusive events as $n$ approaches infinity. The event $A$ occurs with at least one of the events $B_1,B_2,B_3,\dots,B_n$, and $S$ denotes the sum of the numbers appearing on the dice. If an even number of dice has been rolled, then show that the probability that $S=4$ is very close to $1/16$. Next show that the probability that the greatest number on the dice is $4$, if three dice are known to have been rolled, is $37/216$. Finally, if $S=3$, then prove that $P(B_2|S)=24/169$. My approach: well, I tried using the conditional probability formula in part one and Bayes' theorem in the final one, but I am unable to get to the correct answer. Kindly help me out; all help is greatly appreciated.
probability that greatest number on the dice is 4 if three dice are known to have been rolled is $37/216$ This is about a conditional probability $|B_3$. We have $P=P’-P’’$, where $P’=\left(\frac 46\right)^3=\frac {64}{216}$ is a probability that the greatest number on a dice is at most $4$ and $P’’=\left(\frac 36\right)^3=\frac {27}{216}$ is a probability that the greatest number on a dice is at most $3$. If even number of dice has been rolled,then show that probability that $S=4$ is very close to $1/16$ I tried two interpretations for an event $A$ which is $S=4$, but obtained the following answers. If $A$ means that at least one thrown dice has $4$ then we have $$P=P\left(A{\Huge|}\bigcup_{k=1}^\infty B_{2k}\right)= \frac{1}{P\left(\bigcup_{k=1}^\infty B_{2k}\right)}\sum_{k=1}^\infty P(A|B_{2k})P(B_{2k}) =$$ $$\frac{1}{\sum_{k=1}^\infty \frac 1{2^{2k}}}\sum_{k=1}^\infty \left(1-\left(\frac 56\right)^{2k}\right)\frac 1{2^{2k}}= 1-\frac{\sum_{k=1}^\infty\left(\frac{5}{12}\right)^{2k}}{\sum_{k=1}^\infty \frac 1{2^{2k}}}=$$ $$1-\frac{\frac{\left(\frac{5}{12}\right)^2}{1-\left(\frac{5}{12}\right)^2}}{\frac {\frac 1{2^2}}{1-{\frac 1{2^2}}} }= 1-\frac{\frac{1}{\left(\frac{12}{5}\right)^2-1}} {\frac 1{\frac {2^2}1-1}}= 1-\frac{2^2-1}{{\left(\frac{12}{5}\right)^2-1}}=$$ $$1-\frac{3\cdot 5^2}{{12^2-5^2}}=1-\frac{75}{119}=\frac{44}{119}.$$ If $A$ means that all thrown dices have $4$ then we have $$P=P\left(A{\Huge|}\bigcup_{k=1}^\infty B_{2k}\right)= \frac{1}{P\left(\bigcup_{k=1}^\infty B_{2k}\right)}\sum_{k=1}^\infty P(A|B_{2k})P(B_{2k}) =$$ $$\frac{1}{\sum_{k=1}^\infty \frac 1{2^{2k}}}\sum_{k=1}^\infty \frac 1{6^{2k}}\cdot\frac 1{2^{2k}}= \frac{\frac 1{12^2} \cdot\frac {1}{1-\frac 1{12^2}}}{\frac 1{2^2} \cdot\frac {1}{1-\frac 1{2^2}} }= \frac {\frac{1}{12^2-1}}{\frac{1}{2^2-1}}=\frac 3{143}.$$ if $S=3$, then prove that $P(B_2/S)=24/169$ $P(B_2|A)=\frac{P(A\cap B_2)}{P(A)}$, but because of the above there is a problem how to interpret $A$.
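The $37/216$ computation and the geometric sums are easy to confirm mechanically (plain Python; the series are summed in closed form with exact rationals):

```python
from itertools import product
from fractions import Fraction

# P(greatest of three dice is 4): direct enumeration of all 6^3 outcomes.
hits = sum(max(r) == 4 for r in product(range(1, 7), repeat=3))
assert Fraction(hits, 216) == Fraction(37, 216)

# The two geometric series in the 'at least one 4' computation, in closed form:
s_num = Fraction(25, 144) / (1 - Fraction(25, 144))  # sum of (5/12)^(2k), k >= 1
s_den = Fraction(1, 4) / (1 - Fraction(1, 4))        # sum of (1/2)^(2k),  k >= 1
print(hits, 1 - s_num / s_den)   # 37 and 44/119
```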
{ "language": "en", "url": "https://math.stackexchange.com/questions/3717071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Suppose M is a finitely generated non-zero R-module, where R is a commutative unital ring. Show that the tensor product of M with itself is non-zero. Suppose M is a finitely generated non-zero R-module, where R is a commutative unital ring. Show that the tensor product of M with itself is non-zero. I know one way to show this is to find an R-bilinear map which is nonzero, but am not sure how to find it.
Not sure this is the best proof, but here goes. Note that $M\otimes_R M = 0$ if and only if $(M\otimes_R M)_{\mathfrak p} = 0$ for all primes $\mathfrak p\subset R$ (Atiyah-MacDonald 3.8). But $$ (M\otimes_R M)_{\mathfrak p}\cong M_\mathfrak p\otimes_{R_{\mathfrak p}}M_{\mathfrak p} $$ by Atiyah-MacDonald 3.7. Hence we have reduced to the case where $R$ is a local ring. The statement now follows from the fact that tensor product is faithful for finitely-generated modules over local rings, e.g. exercise 3 in the second chapter of Atiyah-MacDonald. (This is a consequence of Nakayama's lemma, as far as I remember.) Remark. So why doesn't the argument given here show that tensor products are faithful over any ring via reduction to the local case? Simply because a pair of distinct modules $M$ and $N$ might be supported on disjoint sets of primes. For instance, this is the case when $R = \mathbb Z$, $M = \mathbb Z/p$ and $N = \mathbb Z/q$ for distinct primes $p,q$. Then $\mathbb Z /p\otimes\mathbb Z/q = 0$ because, with $r$ ranging over primes, $(\mathbb Z/p)_{(r)} = 0$ for $r\neq p$ and similarly for $q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3717144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Rouché's theorem in annulus $1<|z|<2$ I have to find number of roots of the polynomial $p(z)=z^4-8z+10$ in the annulus $1<|z|<2$ . I'm trying to do this using Rouché's theorem.And, by this theorem , I have that $p$ does not have zeros in $|z|<1$ , which means that number of zeros in the given annulus is the same as the number of zeros on $|z|<2$ . Then I try to see how does this polynomial 'behave' in $|z|=2$ and I see that the coefficients $2^4$ and $|-8*2|$ are the same ,and in that way I cannot decide what should I choose for $g$ so that $|p-g|<|g|$ and so to apply Rouché's theorem . Can someone help me do this? Any advice is appreciated. Thank you in advance!
We plan to use the version of Rouché's theorem which says that if $|g(z)| < |f(z)|$ on the boundary of our region then $f$ and $f+g$ have the same number of zeros with multiplicity in the region. Let's first set a goal for ourselves. If we can show that $|p(z)| > 3$ for $|z| = 2$, then Rouché's theorem (applied with $f = p$ and $g = -3$) tells us that the number of zeros of $p(z)$ and $q(z) = z^4 - 8z + 7$ agree. We see that $z = 1$ is a zero of $q(z)$, so using polynomial long division we get that $q(z)= (z-1)(z^3 + z^2 + z - 7)$. Note this also rules out zeros of $q$ on $|z| = 2$, and ruling out zeros of $p$ in $|z| \le 1$ is already handled by analysis you mention having done in this question. The factor $r(z) = z^3 + z^2 + z - 7$ then yields to ad-hoc analysis. By checking $1$ and $3/2$, the intermediate value theorem finds a root $x_0$ of $r(z)$ in $(1, 3/2)$. Furthermore, $r'(z) = 3z^2 + 2z + 1$, which the quadratic formula tells us has only non-real roots, hence $r(z)$ is monotone on $\mathbb{R}$. We conclude that $x_0$ is the only real root of $r(z)$, and so the other roots of $r(z)$ are conjugate. Call them $\alpha$ and $\overline{\alpha}$. Then factoring shows $-7 = -x_0 |\alpha|^2$, and our bounds on $x_0$ force $|\alpha|^2 > 4$, hence $|\alpha| > 2$. So $q$ has exactly two zeros in $|z| < 2$, namely $1$ and $x_0$, hence so does $p$; together with what you've already shown, this proves that $p(z)$ has exactly $2$ zeroes in the annulus. So we just need to make this estimate. It suffices to show that $|p(z)|^2 > 9$ for $|z| = 2$. For $|z| = 2$, doing out the multiplication and using $z\bar z = 4$ to collect the cross terms (for instance $-8(z\bar z^4 + \bar z z^4) = -32(z^3+\bar z^3) = -64\,\text{Re}(z^3)$) gives $$|p(z)|^2 = (z^4 - 8z + 10)(\bar{z}^4 - 8\bar{z} + 10) = 612 + 20\,\text{Re}(z^4) - 64\, \text{Re}(z^3) - 160\, \text{Re}(z),$$ where $612$ arises as $2^8 + 8^2 \cdot 2^2 + 100$. Bounding the three $\text{Re}$ terms separately is not enough here, so instead write $x = \text{Re}(z) \in [-2,2]$; since $|z| = 2$ we have $\text{Re}(z^3) = 4x^3 - 12x$ and $\text{Re}(z^4) = 8x^4 - 32x^2 + 16$, so that $$|p(z)|^2 = 160x^4 - 256x^3 - 640x^2 + 608x + 932 = 160\left(x^2 - \tfrac{4}{5}x - \tfrac{58}{25}\right)^2 + \tfrac{352}{25}x + \tfrac{44260}{625}.$$ We conclude that for $x \in [-2,2]$, $$|p(z)|^2 \geq \tfrac{44260}{625} - \tfrac{704}{25} = \tfrac{26660}{625} > 42 > 9,$$ and this completes our proof.
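A numerical cross-check of the two facts used above, that $p$ has exactly two zeros in the annulus and that $|p|$ stays above $3$ on $|z|=2$ (the root finder below is a generic Durand-Kerner iteration, not part of the argument):

```python
import cmath
import math

def p(z):
    return z ** 4 - 8 * z + 10

# Durand-Kerner iteration: refine all four roots of p simultaneously.
roots = [(0.4 + 0.9j) ** k for k in range(1, 5)]
for _ in range(500):
    new = []
    for i, z in enumerate(roots):
        d = 1.0
        for j, w in enumerate(roots):
            if i != j:
                d *= z - w
        new.append(z - p(z) / d)
    roots = new

assert max(abs(p(z)) for z in roots) < 1e-9       # all four residuals tiny
in_annulus = sum(1 < abs(z) < 2 for z in roots)

# |p| sampled on the circle |z| = 2 stays far above 3:
mn = min(abs(p(2 * cmath.exp(1j * math.pi * k / 1000))) for k in range(2000))
print(in_annulus, round(mn, 2))
```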
{ "language": "en", "url": "https://math.stackexchange.com/questions/3717307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Signature and Environment in Type Theory Signature and Environment are both related to the description of constants. I feel confused about the two notions in type theory. Could anyone explain their main difference? Thanks!
Functionally, environment (or contexts) and signatures behave quite similarly, and in some settings you can emulate signatures using environments to place the axioms as variables in the context. However, they should be thought of differently. An environment or context is typically a list of typed variables, e.g. $x_1 : A_1, \ldots, x_n : A_n$. Terms in this environment may project out the variables, e.g. $x_1 : A_1, x_2 : A_2 \vdash \langle x_2, x_1 \rangle : A_2 \times A_1$. The environment may differ during the course of a derivation, for instance if binding operators, like $\lambda$-abstractions, are used. In a certain sense, environments are a notion internal to the type theory. A signature in type theory is a set of constants and function symbols (and potentially base types) that are given as axioms for the type theory (the exact form will differ depending on the treatment). These axioms may be used throughout a derivation, just like ordinary rules. However, the signature is fixed for a type theory: the signature may not change over the course of a derivation, unlike an environment. You can think of the type theory as being parameterised by the signature. Therefore, signatures are in some sense an external notion. If the type theory in question has function types, we can represent function symbols from a signature as variables (of function types) in an environment, but this will only behave the same way as a signature if we do not bind those variables. Another way of thinking about it is that variables in an environment are local (and hence may not be available everywhere), whereas constants in a signature are global (and may be used at any point in a derivation).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3717470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Name of the rule allowing the exchanging $\sin$ and $\cos$ in integrals with limits $0$ and $\pi/2$? As in $0$ to $\frac{\pi}{2}$ limits the area under curve of $\sin \theta$ and $\cos \theta$ are same, so in integration if the limits are from $0$ to $\frac{\pi}{2}$ we can replace $\sin \theta$ with $\cos \theta$ and vice versa. Example- \begin{align*} \int\limits_{0}^{\frac{\pi}{2}} \frac{\sin^3x-\cos x}{\cos^3x-\sin x} dx &=\int\limits_{0}^{\frac{\pi}{2}} \frac{\sin^3x-\sin x}{\sin^3x-\sin x} dx\\ &=\int\limits_{0}^{\frac{\pi}{2}}dx\\ &=\frac{\pi}{2} \end{align*} I want to know what the name of this rule.
This is just plain wrong. Indeed, in your original integral the denominator $\cos^3x-\sin x$ vanishes at a point of $\left(0,\frac{\pi}{2}\right)$ where the numerator $\sin^3x-\cos x$ does not, so the integrand has a non-integrable singularity and the integral does not even converge; in particular it is not $\frac{\pi}{2}$. What is correct is this: For any continuous function $f(x,y)$, the substitution $\theta\mapsto\frac{\pi}{2}-\theta$ shows that $$\int_0^{\pi/2} f(\sin\theta,\cos\theta)\,d\theta = \int_0^{\pi/2} f(\cos\theta,\sin\theta)\,d\theta.$$
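A numeric illustration of the correct rule on a sample integrand (midpoint rule; the function $f$ below is made up):

```python
import math

def integral(g, n=100000):
    # midpoint rule on [0, pi/2]
    h = (math.pi / 2) / n
    return h * sum(g((k + 0.5) * h) for k in range(n))

def f(u, v):                      # arbitrary continuous test function
    return u ** 3 * v + math.exp(u) - v ** 2

I1 = integral(lambda t: f(math.sin(t), math.cos(t)))
I2 = integral(lambda t: f(math.cos(t), math.sin(t)))
print(I1, I2)                     # the two values agree to high precision
```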
{ "language": "en", "url": "https://math.stackexchange.com/questions/3717583", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 4 }
Find the condition where the product of two factorials is minimized I have to find the maximum value of a combination ${N \choose H} = \dfrac{N!}{H!(N-H)!}$ where $1 \leq H \leq N \leq 16$. My reasoning was to find the maximum value you have to maximize the numerator $N!$ and minimize the denominator $H!(N-H)!$. The maximum value of numerator occurs when $N = 16$. Then the term becomes, $\dfrac{16!}{H!(16-H)!}$. As, $1 \leq H \leq 16$, I can manually check for which $H$ value the term is maximized. But is there any other way to find out that the denominator is minimized when $H = 8$? I tried to break now into three conditions, namely when $H > (16-H)$, $H < (16 - H)$ and $ H = 16 - H$, but I couldn't establish the relation between the three conditions. Any help would be appreaciated.
From the complementary combination $\displaystyle\binom{N}{H} = \binom{N}{N-H}$ we conclude that $\displaystyle\binom{N}{H}$ is symmetric about $\displaystyle H=\frac{N}{2}$. Now we show that $\displaystyle\binom{N}{H}$ is monotonically increasing for $\displaystyle 0\le H \le \bigg\lceil{\frac{N}{2}}\bigg\rceil$ keeping $N$ fixed. Equivalently we will show $$\text{If } \displaystyle 0\le H_1 < H_2\le \bigg\lceil{\frac{N}{2}}\bigg\rceil \text{ then } \binom{N}{H_1} \le\binom{N}{H_2},$$ with equality only in the boundary case where $N$ is odd, $H_1=\frac{N-1}{2}$ and $H_2=\frac{N+1}{2}$ (the two equal middle coefficients). Proof: Let $H_2 = H_1 + k$ with $k\ge1$. Now $$\begin{equation*} \begin{aligned} \binom{N}{H_2} &= \frac{N\cdot(N-1)\cdots(N-H_2+1)}{1\cdot2\cdot3\cdots H_2}\\ &=\frac{N\cdot(N-1)\cdots (N-H_1+1)\cdot(N-H_1)\cdot(N-H_1-1)\cdots(N-H_1-k+1)}{1\cdot 2\cdot 3\cdots H_1\cdot(H_1+1)\cdots(H_1+k)} \\ &=\binom{N}{H_1}\cdot\frac{(N-H_1)(N-H_1-1)\cdots(N-H_1-k+1)}{(H_1+1)(H_1+2)\cdots(H_1+k)}\\ &=\binom{N}{H_1}\cdot\frac{N-H_1}{H_1+k}\cdot\frac{N-H_1-1}{H_1+k-1}\cdots\frac{N-H_1-k+1}{H_1+1} \end{aligned} \end{equation*}$$ We need to show now that every fraction of the RHS is $\ge1$, and to track when one can equal $1$. Notice that every fraction of the RHS is of the form $\displaystyle \frac{N-H_1-r}{H_1+k-r}$ where $0\le r \le k-1$ $$\begin{equation}\displaystyle \begin{aligned} \frac{N-H_1-r}{H_1+k-r}&=\frac{N-H_2+k-r}{H_2-r}\\ &\ge \frac{N-H_2+1}{H_2-r} \text{ as } k-r\ge1 \\ &\ge \frac{N-\bigg\lceil\displaystyle\frac{N}{2}\bigg\rceil+1}{\bigg\lceil\displaystyle\frac{N}{2}\bigg\rceil-r} \\ &=\frac{\bigg\lfloor\displaystyle\frac{N}{2}\bigg\rfloor+1}{\bigg\lceil\displaystyle\frac{N}{2}\bigg\rceil-r} \end{aligned} \end{equation}$$ If $r > 0$ this fraction is $>1$, and if $r=0$ it is at least $\frac{\lfloor N/2\rfloor+1}{\lceil N/2\rceil}$, which is $>1$ when $N$ is even and $\ge 1$ in general; the value $1$ occurs only for $N$ odd in the boundary case noted above. In particular, for $N=16$ every fraction is $>1$, so the coefficients increase strictly up to the middle and the maximum of $\binom{16}{H}$ is attained exactly at $H=8$. $\blacksquare$
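For the concrete bound $N=16$ in the question, a two-line check of where the maximum lands:

```python
from math import comb

N = 16
best = max(range(1, N + 1), key=lambda H: comb(N, H))
assert all(comb(N, H) < comb(N, H + 1) for H in range(N // 2))  # strict growth to the middle
print(best, comb(N, best))   # 8 12870
```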
{ "language": "en", "url": "https://math.stackexchange.com/questions/3717719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $2^{\aleph_0}\neq \aleph_{\alpha+\omega}$ for any ordinal $\alpha$. Show that $2^{\aleph_0}$ $\neq$ $\aleph_{\alpha+\omega}$ for any ordinal $\alpha$. What I did was the following: I first used ordinal addition where $\alpha+\omega$ = $\sup\{\alpha+n:n \in \omega\}=\sup \: \omega$ = $\omega$. Thus, $\aleph_{\alpha+\omega}$ = $\aleph_{\omega}$. Then there is a corollary in my text book that says: $\aleph_0 \in cf(2^{\aleph_0})$, so $2^{\aleph_0}$ $\neq$ $\aleph_{\omega}$, as $cf(\aleph_\omega)=\aleph_0$. Hence, $2^{\aleph_0}$ $\neq$ $\aleph_{\alpha+\omega}$. Have I done something wrong here, as I got zero points on this task. Thanks for your help!
You got zero points, as you should have. It is true that $\alpha+\omega=\sup\{\alpha+n\mid n<\omega\}$ the rest is absolutely false. Note, for example, that $\omega_1+n$ is uncountable, for any $n<\omega$, but you are claiming that $\sup\{\omega_1+n\mid n<\omega\}$ is a countable ordinal. How is that even possible? What is true, however, is that the cofinality of $\aleph_{\alpha+\omega}$ is countable, as witnessed by $\aleph_{\alpha+n}$ for $n<\omega$, being a cofinal sequence. Then we can apply the theorem stating that $\aleph_0<\operatorname{cf}(2^{\aleph_0})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3717862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Maximal ideal in valuation ring In the lectures I've attended on algebraic number theory, there is a standard definition of a valuation ring, $$\mathcal{O} = \{x \in K: v(x) \geq 0\}$$ $$\mathcal{P} = \{x \in K: v(x) > 0\},$$ where $K$ is a field and $v$ an exponential valuation. $\mathcal{P}$ is then easily proven to be a unique maximal ideal of $\mathcal{O}$. However, a more general setting is introduced right after that: let $\mathcal{O}$ be an integral domain and $K$ its field of fractions. Then $\mathcal{O}$ is called a valuation ring, if $$\forall x \in K^*: (x \in \mathcal{O}) \lor (x^{-1} \in \mathcal{O}),$$ and then $$\mathcal{P} = \{x \in \mathcal{O}:x^{-1} \notin \mathcal{O}\}$$ is claimed to be its unique maximal ideal. I have two questions: * *First of all, how does one prove that $\mathcal{P}$ in the second case is indeed an ideal, and also maximal and unique? *Second, are the two definitions equivalent? Meaning if we define an arbitrary $v$ on $K$ in the second scenario, will then $\mathcal{O}$ be the same as in the first one?
For 1., note that every $x\in \mathcal{O}$ not in $\mathcal{P}$ is invertible in $\mathcal{O}$, so once one has that $\mathcal{P}$ is an ideal it is automatically the unique maximal ideal. Now if $x,y\in\mathcal{P}$ then since either $\frac{x}{y}\in\mathcal{O}$ or $\frac{y}{x}\in\mathcal{O}$, you get $\frac{x+y}{x}\in\mathcal{O}$ or $\frac{x+y}{y}\in\mathcal{O}$. So if $x+y\notin\mathcal{P}$ it would follow that $\frac{1}{x}\in\mathcal{O}$ or $\frac{1}{y}\in\mathcal{O}$, a contradiction. Similarly, if $x\in\mathcal{P}$ and $r\in\mathcal{O}$ then $rx\in\mathcal{P}$: otherwise $(rx)^{-1}\in\mathcal{O}$, and then $x^{-1}=r\,(rx)^{-1}\in\mathcal{O}$, a contradiction. For 2. the answer depends a bit on the generality of the definition of an exponential valuation in your course. If an exponential valuation is a homomorphism $v:K^\times\rightarrow\mathbb{R}$ with $v(x+y)\geq\min\{v(x),v(y)\}$, then the answer is NO. Valuation rings in the general sense correspond to valuations $v:K^\times\rightarrow\Gamma$ for arbitrary ordered abelian groups $\Gamma$. For more on this see e.g. Chapters 1 and 2 of the book Valued Fields by Engler-Prestel.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3718197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many solutions to the $a+b+c+d=100$ exist? Given a,b,c,d belonging to the set of whole numbers and given the equation $a+b+c+d=100$ How many solutions like $(a,b,c,d)=(80,10,5,5)\; ; \; (a,b,c,d)=(0,1,2,97)$ exist? We can repeat elements and the order does not matter.
We're looking for the non-negative integer solutions for the equation $x_1 + x_2 + x_3 + x_4 = 100$. Instead of using numbers for writing the solutions, we will use strokes, so for instance we represent the solution $ x_1 = 1, x_2 = 1, x_3 = 1, x_4 =97 $, or 1 + 1 + 1 + 97, like this: | + | + | + ||| $ \cdots$ [97 strokes]. Now, each possible solution is an arrangement of 100 strokes and 3 plus signs, so the number of arrangements is P(103; 100, 3) = $\frac{103!}{100!\,3!} = 176851$. The general solution for such questions -combinations with repetition- is: $\mathit{P}(n+r-1;r,n-1) = \frac{(n+r-1)!}{r!(n-1)!}=\binom{n+r-1}{r}$
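A brute-force cross-check of the count against the stars-and-bars formula (pure Python, exhausting all solutions):

```python
from math import comb

# Enumerate a, b, c; d = 100 - a - b - c is then automatically >= 0.
count = sum(1
            for a in range(101)
            for b in range(101 - a)
            for c in range(101 - a - b))
print(count, comb(103, 3))   # both 176851
```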
{ "language": "en", "url": "https://math.stackexchange.com/questions/3718369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Modulus operation to find unknown If the $5$ digit number $538xy$ is divisible by $3,7$ and $11,$ find $x$ and $y$ . How to solve this problem with the help of modulus operator ? I was checking the divisibility for 11, 3: $5-3+8-x+y = a ⋅ 11$ and $5+3+8+x+y = b⋅3$ and I am getting more unknowns ..
From modulus 11, $$ 53800 + 10x + y \equiv 5 - 3 + 8 - x + y \equiv -1-x+y \equiv 0 \pmod {11}\\ \implies y\equiv 1+x \pmod {11} $$ but $y$ and $x$ are digits, so $0\le y\le 9$ and $1\le 1+x \le 10$, so it must hold that $y=1+x$. From modulus 3, $$ 53800 + 10x + y \equiv 5 + 3 + 8 + x + (x+1) \equiv 2+2x \equiv 0 \pmod {3}\\ \implies x\equiv -1 \pmod {3} $$ so $x=2,5,8$. Eventually, from modulus 7, $$ 53800 + 10x + y \equiv 5 + 3x + (x+1) = 6+4x \equiv 0 \pmod {7}\\ \implies x\equiv 2 \pmod {7} $$ so $x=2,9$. The only choice is thus $x=2$, $y=3$.
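A brute-force check over the two digits (divisibility by $3$, $7$ and $11$ is divisibility by $231$):

```python
sols = [(x, y) for x in range(10) for y in range(10)
        if (53800 + 10 * x + y) % 231 == 0]
print(sols)   # [(2, 3)]  ->  53823 = 3 * 7 * 11 * 233
```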
{ "language": "en", "url": "https://math.stackexchange.com/questions/3718474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Sum of first K primes is triangle number I was reading something this morning and came across the fact that 28 is both the sum of the first five prime numbers and of the first seven natural numbers. Naturally, I then tried to find other numbers U such that for some integers n and k $$U=\sum_{a=1}^{n}a=\sum_{a=1}^{k}p_a$$ I quickly noticed that 10 is both the sum of the first four natural numbers and of the first three prime numbers, but that I couldn't find any others off the top of my head. After sitting at a computer, I found that the next such number is 133386, which is the sum of the first 516 natural numbers and of the first 217 prime numbers. There were no other examples that for which $k\leq1000$. Before sitting at the computer, I hypothesized that there were no other examples, and tried to go about proving it. Based on the fact that the sum of the first n natural numbers is $\frac{n(n+1)}{2}$, I was able to proceed: $$\frac{n(n+1)}{2}=U$$ $$n^2+n-2U=0$$ $$n=\frac{-1+\sqrt{1+8U}}{2}$$ Is there any way of proceeding past this point, either proving that there are infinitely many numbers k that $1+8\sum_{a=1}^{k}p_a$ is a perfect square or that there are only a finite number that fulfill this criterion?
You noted that $10$, $28$, and $133386$ were the first three numbers that were initial sums both of primes and of naturals. We can then search Sloane's (the On-line Encyclopedia of Integer Sequences) for those terms and get A066527. That page reveals that the next terms are $4218060$, $54047322253$, $14756071005948636$, $600605016143706003$, $41181981873797476176$, $240580227206205322973571$, and $1350027226921161196478736$. And since the keyword "more" is listed on the page, it is thought that there are likely more numbers like this to be found, but no proof is given/referenced. The next term, if it exists, is greater than $6640510710493148698166596$ according to the late Donovan Johnson.
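The small terms can be recomputed in a few lines (a sieve bound of $2100$ covers the first $300$ primes; a number $S$ is triangular iff $8S+1$ is a perfect square):

```python
from math import isqrt

LIMIT = 2100                       # the 300th prime is 1987 < LIMIT
sieve = [True] * (LIMIT + 1)
sieve[0] = sieve[1] = False
for i in range(2, isqrt(LIMIT) + 1):
    if sieve[i]:
        for j in range(i * i, LIMIT + 1, i):
            sieve[j] = False
primes = [i for i, ok in enumerate(sieve) if ok]

found, s = [], 0
for p in primes[:300]:             # partial sums of the first 300 primes
    s += p
    r = isqrt(8 * s + 1)
    if r * r == 8 * s + 1:         # s is triangular
        found.append(s)
print(found)   # [10, 28, 133386]
```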
{ "language": "en", "url": "https://math.stackexchange.com/questions/3718591", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Prove that $\{f_n\} _{n=1}^{\infty}$ uniformly converges to $ f(x)=\int_{0}^{1}g(x,t)\mathrm{dt}$ Let $g:(0,\infty)\times [0,1]\to {\mathbb{R}}$ be continuous with respect to each variable separately and $$f_n=\frac{1}{n}\sum_{i=1}^{n}g\left(x,\frac{i}{n}\right)$$ How can show that $\{f_n\} _{n=1}^{\infty}$ uniformly converges to $\displaystyle f(x)=\int_{0}^{1}g(x,t)\mathrm {dt}$ on $[m,M]$, each subset of $(0,\infty)$.
I will show you an example where $f$ is not continuous on $[0, 1]$; it is easy to adapt it to $(0,\infty)$. In particular, your statement is not valid in general. For $x > 0$ let $g(x, \cdot)$ be the hat function with support on $[0, 1/x]$ and maximum value $x$. Further let $g(0, \cdot) = 0$. Then $g$ is separately continuous and $f$ is discontinuous at $x=0$. Notice: the statement is valid if you add an equicontinuity-type condition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3718715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }