Proof of an integral property $$\int_0^1 f(x)g'(x)dx=\pi$$ If $f(1)g(1)=f(0)g(0)$ then $$\int_0^1 f'(x)g(x)dx= -\pi$$ So I have to prove this and I have absolutely no idea how to do it. I am guessing I will have to use the fundamental theorem of calculus and show that the rate of change is $1$ because it didn't change from $f(1)g(1)$ to $f(0)g(0)$
Hint: $$\dfrac{d(f(x)\cdot g(x))}{dx}=?$$ Integrate both sides with respect to $x$ between $[0,1]$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3120855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Discretization matrix for 3D Poisson equation It is known that the 2D Poisson equation defined on a domain $\Omega$ (let's say $\Omega := (0,1)^2$) with Dirichlet boundary conditions $u(x,y)_{|\partial \Omega}=g(x,y)$, $$u_{xx} + u_{yy}=f$$ can be discretized, using finite differences, to obtain a system of linear equations of the form $$A\vec{u} = \vec{f}+\vec{b}$$ where $A$ is a coefficient matrix, $\vec{u}$ the discrete solution to solve for, and $\vec{b}$ contains terms from the boundary conditions. The matrix $A$ for the 2D Poisson equation has the block form \begin{bmatrix} D & I & \dots &0 \\ I & \ddots & \ddots & \vdots\\ \vdots & \ddots & \ddots & I\\ 0 & \dots & I & D \end{bmatrix} where $$D=\begin{bmatrix} -4 & 1 & 1 &0 & \dots& 0 \\ 1 & -4 & 1 & 1 & \dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots\\ \vdots & \ddots & \vdots & \dots & -4 & 1\\ 0 & \dots & \dots & 1 & 1 & -4 \end{bmatrix}$$ I want to discretize the 3D Poisson equation $$u_{xx} + u_{yy}+u_{zz}=f$$ on $\Omega := (0,1)^3$. So, I suppose, that the discretized set $\Omega_{\Delta x}$ must be a 3D mesh. The resulting discretization equations will be $$\Delta_h u_{i,j,k} = \frac{u_{i+1,j,k}+u_{i,j+1,k}+u_{i,j,k+1}-6u_{i,j,k}+u_{i-1,j,k}+u_{i,j-1,k}+u_{i,j,k-1}}{(\Delta x)^2}=f_{i,j,k}+b_{i,j,k}$$ So that $A$ should now look like this: $A=\begin{bmatrix} -6&1 & 1 & 1 &0 & 0 & 0&\dots& 0 \\ 1 & 1 & -6 & 1 & 1 & 1 &0&\dots & 0 \\ \vdots & \vdots & \vdots & \vdots & \vdots & \vdots &\vdots&\ddots&0\\ \vdots & \vdots & \vdots & \vdots & \dots & \dots & \dots&-6 & 1\\ 0 & \dots & 0 & 0 & 0 & 1 & 1 & 1 & -6 \end{bmatrix}$ I'm not quite sure, however, what the form of $D$ exactly is. I.e. is $A$ still block-tridiagonal, with two block diagonals comprised of $I$'s, and the main diagonal comprised of $D$'s, but with higher dimensions? Would appreciate some insight.
I very much like the Kronecker product representation. If $A_1$ is the one-dimensional second-derivative approximation, i.e. $$ A_1 = \begin{pmatrix} 2 & -1 & \\ -1 & 2 & \ddots \\ & \ddots & \ddots \end{pmatrix}, $$ then you can write the $3d$ Laplacian as $$ A_3 = I \otimes I \otimes A_1 + I \otimes A_1 \otimes I + A_1 \otimes I \otimes I. $$ For the $2d$ case it is the same, i.e., $A_2 = I \otimes A_1 + A_1 \otimes I$, and this generalizes to higher dimensions. You can have a look at a small instance of this matrix, and its properties (e.g., sparsity pattern) will be clear.
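This construction is easy to check numerically; the following sketch (my addition, not part of the answer) builds $A_3$ with NumPy's `kron` and confirms the expected structure:

```python
import numpy as np

def lap1d(n):
    """1D second-difference matrix: 2 on the diagonal, -1 off it."""
    return 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

def lap3d(n):
    """3D Laplacian via A3 = I(x)I(x)A1 + I(x)A1(x)I + A1(x)I(x)I."""
    I, A1 = np.eye(n), lap1d(n)
    return (np.kron(np.kron(I, I), A1)
            + np.kron(np.kron(I, A1), I)
            + np.kron(np.kron(A1, I), I))

A3 = lap3d(4)   # 64 x 64 matrix for a 4^3 grid
```

Every diagonal entry is $2+2+2=6$ and each interior row carries six $-1$ entries, matching the 7-point stencil (up to sign and the $h^2$ factor).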
{ "language": "en", "url": "https://math.stackexchange.com/questions/3120948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If a set $A$ is finite then $A\cap B$ is a finite set. Background: Theorem - If $A\subseteq \mathbb{N}_n$ then $A$ is a finite set and $|A|\leq n$. Question: Let $A$ be a finite set and $B$ be some set. If $A$ is a finite set, then $A\cap B$ is a finite set. Attempted proof: Let $A$ be a finite set and let $C = A\cap B$. If $C = \emptyset$ then $C$ is finite. Suppose $C \neq \emptyset$; since $C\subseteq A$, the set $A\neq \emptyset$ and there exists $k\in\mathbb{N}$ such that $A\sim \mathbb{N}_k$. That is, there exists $k\in\mathbb{N}$ and a one-to-one correspondence $f:A\to\mathbb{N}_k$. The restriction $f|_C$ of $f$ to the set $C$ is a one-to-one function from $C$ onto $f(C)$. Therefore, $C\sim f(C)$. By the theorem above, since $f(C)$ is a subset of $\mathbb{N}_k$, $f(C)$ is a finite set; therefore, since $C\sim f(C)$, $C$ is finite as well. I am not sure if this is completely right; any feedback or other approaches would be appreciated.
Suppose $A\cap B$ is infinite. Then we can pick distinct elements $x_1,x_2,x_3,\ldots$ all in $A\cap B$. But then each of them is also in $A$, so $A$ has infinite cardinality. A contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
References for injectivity/surjectivity $\pi_i(U(n-k)) \rightarrow \pi_i(U(n))$ I have read (Theorem 29.3, line 7, pg 144) the following statement: $\pi_i(U(n-k)) \rightarrow \pi_i(U(n))$ is surjective for $i \le 2(n-k)+1$ and injective for $i \le 2(n-k)$. and Therefore $\pi_i(St_k(U(n)))=0$ for $i \le 2(n-k)$. What results are used here and where can I find a reference for this?
I think they have made an off-by-$1$ error; they might have been thinking about the map $BU(n-k) \to BU(n)$. (The notes you're reading are unpublished and there is a disclaimer at the start about not being proofread, so an off-by-$1$ error isn't so bad.) A great reference for a lot of results like this is Mimura and Toda's "Topology of Lie Groups" I and II. They compute a lot of homotopy and homology groups for classical Lie groups and their classifying spaces. For this particular example, look at Corollary 3.17 on page 68. The statement includes: ... For $i < 2n$, $\pi_i U(n) \cong \pi_i U(n+1)$, ... Changing variables we see $\pi_iU(n-1)\cong \pi_i U(n)$ for $i < 2(n-1)$, and their argument actually also shows surjectivity in degree $2(n-1)$. Typically you use the long exact sequence of the fibration $$ U(n - 1) \to U(n) \to S^{2n - 1}$$ to compute the connectivity of $U(n - 1)\to U(n)$. The degree $i = 2n - 2$ is the last degree where $\pi_i S^{2n-1} = 0$, so our map induces surjective homomorphisms in all degrees $i \leq 2(n-1)$, and moreover they are injective if $i < 2(n-1)$. The (correct) connectivity of your map, and hence of the Stiefel manifold, follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this subspace isomorphic to $\ell^1$? Let $X$ be a Banach space. Suppose there exists a sequence $(x_n)$ in $X$ such that for all finite $A\subseteq\mathbb{N}$ we have that $\|\sum_{n\in A}x_n\|$ equals the number of elements in $A$. Does this imply that the subspace spanned by $\{x_n\}$ is isomorphic to $\ell^1$?
For the subspace spanned by $(x_n)$ to be isomorphic to $\ell^1$ it has to be closed; hence, we should really talk about the closure of the span of $(x_n)$. If $X$ is reflexive, then this subspace is also reflexive, and so cannot be isomorphic to $\ell^1$. If $X$ is a Hilbert space (or any strictly convex space) then no such sequence can exist. Also the example by @Jochen ($X=\mathbb R^1$, $x_n=1$) shows that the span of $(x_n)$ need not be infinite-dimensional.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121289", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove convexity of set I've had a hard time proving this statement. The objective is to prove that the set $M$ is convex, where $f(y)$ can be any function. The task is to prove it using the triangle inequality. I've looked at threads like Proving Convexity of an Open Disk but I still can't wrap my head around it. If anyone could give me a hint in the right direction I would be very grateful. $$M = \{\, x\,\big| \, ||x-y|| \leq f(y) \, \text{ for all } y \in S\,\} \quad \text{where } S \subseteq \mathbb{R}^n$$ Best Regards
Another hint: $M$ is the intersection of a certain collection of closed balls.
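To unpack both hints (my addition, not part of the original answer): each condition $||x-y|| \leq f(y)$ cuts out the closed ball $\bar B(y, f(y))$, so $M = \bigcap_{y\in S} \bar B(y, f(y))$, and each ball is convex by the triangle inequality. For $x_1, x_2 \in M$, $y \in S$ and $\lambda \in [0,1]$:

$$\|\lambda x_1 + (1-\lambda)x_2 - y\| = \|\lambda (x_1 - y) + (1-\lambda)(x_2 - y)\| \leq \lambda\|x_1 - y\| + (1-\lambda)\|x_2 - y\| \leq \lambda f(y) + (1-\lambda) f(y) = f(y),$$

so the convex combination again lies in $M$; an arbitrary intersection of convex sets is convex by exactly this kind of pointwise argument.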
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Sample Space of a Fair Coin A coin is tossed until, for the first time, the same result appears twice in succession. Define a sample space for this experiment. The solution in the back of the book is: $\{x_1x_2\ldots x_n: n \ge 1,\ x_i \in \{H, T\};\ x_i \ne x_{i+1},\ 1 \le i \le n-2;\ x_{n-1}= x_n\}$ I don't have a clue how this result was achieved. Besides knowing that the sample space of a single toss of a fair coin is $\{H, T\}$ I am completely lost.
It's a list of the first $n$ coin flips, given as $x_i$, with the restriction that the only consecutive tosses with the same outcome are $x_{n-1}$ and $x_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can we prove that infinitely many primes begin with any given digit string? With Dirichlet's theorem, we can easily prove that infinitely many primes end with a given digit string with final digit $1,3,7$ or $9$. Can we also prove that infinitely many primes begin with a given digit string with first digit non-zero? Intuitively, this should be the case, but I wonder whether we can show it rigorously. I thought of prime gaps, but I am not sure whether the best proven prime gaps are sufficient.
Yes. All you need is a prime gap bound of the form $p_{n+1}-p_n\lt (p_n)^\theta$ for some $\theta\lt 1$; this is well-known (Wikipedia suggests that Hoheisel was the first to prove a bound of this form). Once you've got that, it becomes a matter of simple math; let $K$ be the initial digit string, of length $k=\lceil\log_{10}(K)\rceil$. Then the difference between e.g. $10^n\cdot K$ and $10^n\cdot(K+1)$ is $10^n$, whereas we have $10^n\cdot K\lt 10^{n+k}$. Now just choose $n$ such that $\theta\cdot (n+k)\lt n$; then the gap between $10^n\cdot K$ and $10^n\cdot (K+1)$ is larger than the largest possible prime gap there. Note that this proves even more than was asked: not just that there are infinitely many primes of the given form, but that given an initial digit string, for every sufficiently large $n$ there's at least one $n$-digit prime beginning with that digit string.
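As an illustration only (my addition, not part of the argument), a quick search confirms that for small $n$ the window $[10^n K,\; 10^n(K+1))$ already contains a prime:

```python
def is_prime(m):
    """Trial division; fine for small illustrative searches."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def first_prime_with_prefix(K, n):
    """First prime in [K * 10**n, (K+1) * 10**n), i.e. beginning with digits K."""
    for m in range(K * 10**n, (K + 1) * 10**n):
        if is_prime(m):
            return m
    return None

print(first_prime_with_prefix(123, 2))  # a prime whose decimal expansion starts with 123
```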
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Determining the basis for strings with cyclic and permutation symmetry I'd like to determine a basis set of strings for strings of length $m$ composed of $n$ letters that are compatible with cyclic symmetry of the string and permutation symmetry of the letters. It's relatively straightforward to do this by hand in individual cases. For example, for $m=5, n=3$ with the letters labelled $0,1,2$, the minimal set of strings I believe is $$ {00000}, {00001},{00011},{00101},{00012},{00112},{00102}, {00121}, {01012}$$ where I can generate all the other strings by cyclically permuting the indices of the strings above, or by permuting the letters $\{0,1,2\}$ amongst themselves. Is there a way of determining the number of basis strings for each $m,n$? Or at least the number of basis strings constructed in this way.
The problem of counting the basis strings is an instance of Power Group Enumeration as defined by Harary and Palmer in the text Graphical Enumeration. We have the cyclic group acting on the slots with cycle index $$Z(C_m) = \frac{1}{m} \sum_{d|m} \varphi(d) a_d^{m/d}.$$ The group acting on the colors is the symmetric group $S_n$, with the recurrence by Lovasz for the cycle index $Z(S_n)$: $$Z(S_n) = \frac{1}{n} \sum_{l=1}^n a_l Z(S_{n-l}) \quad\text{where}\quad Z(S_0) = 1.$$ With these two cycle indices it suffices to run the PGE algorithm as documented e.g. at the following MSE link. This yields e.g. for four colors and $m$ slots the sequence $$1, 2, 3, 7, 11, 39, 103, 367, 1235, 4439, \\ 15935, 58509, 215251, 799697, \ldots$$ which points us to OEIS A056292 where we find confirmation of these data. Similarly for six colors and $m$ slots we obtain $$1, 2, 3, 7, 12, 43, 126, 539, 2304, 11023, \\ 54682, 284071, 1509852, 8195029, \ldots$$ which points to OEIS A056294, again for confirmation. The algorithm (Maple) is shown below.

    with(numtheory);

    pet_cycleind_cyclic :=
    proc(n)
    option remember;
        1/n*add(phi(d)*a[d]^(n/d), d in divisors(n));
    end;

    pet_cycleind_symm :=
    proc(n)
    option remember;
        if n=0 then return 1; fi;
        expand(1/n*add(a[l]*pet_cycleind_symm(n-l), l=1..n));
    end;

    neckl_pg :=
    proc(m, n)
    option remember;
    local idx_slots, idx_colors, res, term_a, term_b,
        v_a, v_b, inst_a, inst_b, len_a, len_b, p, q;

        if m = 1 or n = 1 then return 1 fi;

        idx_slots := pet_cycleind_cyclic(m);
        idx_colors := pet_cycleind_symm(n);

        res := 0;

        for term_a in idx_slots do
            for term_b in idx_colors do
                p := 1;
                for v_a in indets(term_a) do
                    len_a := op(1, v_a);
                    inst_a := degree(term_a, v_a);
                    q := 0;
                    for v_b in indets(term_b) do
                        len_b := op(1, v_b);
                        inst_b := degree(term_b, v_b);
                        if len_a mod len_b = 0 then
                            q := q + len_b*inst_b;
                        fi;
                    od;
                    p := p*q^inst_a;
                od;
                res := res + lcoeff(term_a)*lcoeff(term_b)*p;
            od;
        od;

        res;
    end;
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121744", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Integral $\int_0^{2π} e^{e^{ix}} dx$ Work out the integral $$\int_0^{2π} e^{\large e^{ix}} \, dx.$$ I am now stuck with this for $2$ days, so please help! Here is my try: $$I=\int_0^{2π} e^{\large e^{ix}} dx=\int_0^{2\pi} e^{\large{\cos x+i\sin x}}dx$$ $$=\int_0^{2\pi}e^{cos x}\left(\cos(\sin x) +i\sin(\sin x)\right) dx$$ $$\overset{\large2\pi -x \rightarrow x}=\int_0^{2\pi} e^{\cos x} \left(\cos (\sin x)+i\sin(\sin(2\pi- x)\right)dx$$ $$\Rightarrow 2I=\int_0^{2\pi} e^{{\cos x}} \cdot 2\cos (\sin x)dx$$ $$\text{as}\ \sin(2\pi -x )=-\sin x$$ $$\Rightarrow I=\int_0^{2\pi} e^{\cos x}\cos (\sin x)dx$$ $$=\int_0^{2\pi} \left(e^{\cos x}\cos (\sin x)+e^{\cos (\pi-x)}\cos (\sin x)\right)dx$$ $$=\int_0^{\pi} 2 \left(\frac{e^{\cos x}+e^{-\cos x}}{2}\right)\cos(\sin x)dx$$ $$=2\int_0^\pi \operatorname{cosh}(\cos x)\cos(\sin x)dx $$
Use $z= e^{it}$ and integrate over the circle of radius $1$. Since $dz = iz\,dt$, your integral becomes $$ \int_{C} \frac{e^z}{iz}dz,$$ which has a singularity at $z=0$ with residue $\frac{1}{i}$. Using the residue theorem, the integral is $2\pi i \; \operatorname{Res}_0(f) = 2\pi i \cdot \frac{1}{i} = 2\pi$.
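Since the integrand is smooth and $2\pi$-periodic, a plain Riemann sum converges extremely fast, so the residue computation is easy to sanity-check numerically (my addition):

```python
import numpy as np

N = 4096
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dt = 2 * np.pi / N

vals = np.exp(np.exp(1j * t))   # e^{e^{it}}
I = np.sum(vals) * dt           # trapezoid rule for a periodic integrand

print(I.real, I.imag)           # ≈ 2*pi and ≈ 0
```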
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121869", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 2 }
Is there a name for a square matrix with constant diagonal and off-diagonal elements? I am interested in real symmetric matrices of the form: $$\mathbf{M} = \begin{bmatrix} a & t & t & \cdots & t \\ t & a & t & \cdots & t \\ t & t & a & \cdots & t \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ t & t & t & \cdots & a \text{ } \text{ } \\ \end{bmatrix}.$$ This is a simple matrix form where the diagonal elements have a constant value $a \in \mathbb{R}$ and the off-diagonal elements have a constant value $t \in \mathbb{R}$. Some useful special cases of this matrix form are the centering matrix and the equicorrelation matrix. (This matrix form is also a particular case of the Toeplitz matrix, but it is much simpler than that general form.) Question: Does this matrix form have a name?
In old statistics literature, these are sometimes called completely symmetric matrices. I have made a light effort to revive the term in my writing or teaching (which is usually a combinatorial context).
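Whatever one calls it, writing $\mathbf M = (a-t)I + tJ$, with $J$ the all-ones matrix, makes its spectrum transparent: $J$ has eigenvalue $n$ on the all-ones vector and $0$ on its orthogonal complement, so $\mathbf M$ has eigenvalue $a+(n-1)t$ once and $a-t$ with multiplicity $n-1$. A quick numerical confirmation (my addition):

```python
import numpy as np

n, a, t = 5, 2.0, 0.3
M = (a - t) * np.eye(n) + t * np.ones((n, n))  # constant diagonal a, off-diagonal t

eig = np.sort(np.linalg.eigvalsh(M))
print(eig)  # n-1 copies of a - t, then a + (n-1)*t
```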
{ "language": "en", "url": "https://math.stackexchange.com/questions/3121971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why $P(X=Y)=0$ if $X$ and $Y$ are i.i.d. continuous r.v.s? I read this conclusion in a textbook: If $X$ and $Y$ are i.i.d. continuous r.v.s, then $P(X=Y)=0$, but it doesn't give any proof. I am a bit confused here; what is the rationale for this conclusion? Is it because for a continuous r.v., the probability of it equaling any particular value is $0$? Thanks.
For a legalistic argument: $P(X=Y)=E( \mathbb 1_{X=Y}) = E( E( \mathbb 1_{X=Y}|Y)) = E( 0) = 0$ since $E(\mathbb 1_{X=Y}|Y) = 0 \text{ a.s.}$. That is, the OP's observation that continuous random variables take on particular values with probability zero, when used with Fubini's theorem (a.k.a. the law of total expectation), does the trick.
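A simulation can't prove a measure-zero statement, but it illustrates it (my addition): among a million i.i.d. pairs of uniforms, exact ties essentially never occur, while a discretized version of the same variables produces plenty.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(1_000_000)
y = rng.random(1_000_000)

exact_ties = np.count_nonzero(x == y)                          # continuous: ~0
rounded_ties = np.count_nonzero(np.round(x, 2) == np.round(y, 2))  # discrete: many
print(exact_ties, rounded_ties)
```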
{ "language": "en", "url": "https://math.stackexchange.com/questions/3122192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
A noncomplete graph $H$ of order $5$ or more with the property that $\deg(u) \neq \deg(v)$ for every pair $u, v$ of nonadjacent vertices of $H$. I'm working on the following graph theory exercise: Give an example, or explain why no such example exists, of a noncomplete graph $H$ of order $5$ or more with the property that $\deg(u) \neq \deg(v)$ for every pair $u, v$ of nonadjacent vertices of $H$. I've reached the conclusion that it is impossible for such a graph to exist, because edges cannot be added indefinitely. Is there a formalization that could help me show it? Thanks in advance for any hint or help.
Does $$ H = ([5], \{(1,2),(2,3),(2,4),(3,4),(3,5),(4,5)\}). $$ satisfy your requirements? I found it by considering the complement, which has the property that every pair of adjacent vertices has distinct degrees.
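A mechanical check of the proposed graph (my addition): compute every degree and verify that all nonadjacent pairs have distinct degrees.

```python
from itertools import combinations

edges = {(1, 2), (2, 3), (2, 4), (3, 4), (3, 5), (4, 5)}
vertices = range(1, 6)
adj = edges | {(v, u) for u, v in edges}          # symmetrize
deg = {v: sum((v, w) in adj for w in vertices) for v in vertices}

for u, v in combinations(vertices, 2):
    if (u, v) not in adj:                         # nonadjacent pair
        assert deg[u] != deg[v], (u, v)
print(deg)  # {1: 1, 2: 3, 3: 3, 4: 3, 5: 2}
```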
{ "language": "en", "url": "https://math.stackexchange.com/questions/3122274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is the percentage symbol a constant? * *Isn't the percentage symbol actually just a constant with the value $0.01$? As in $$ 15\% = 15 \times \% = 15 \times 0.01 = 0.15. $$ *Isn't every unit actually just a constant? But why do we treat them in such a special way then?
Well, it really depends. In Chinese schools, students are told that $100\%=1$ and $40\%=2/5$, so $\%$ is a constant. In the UK examination system, it appears that $\%$ is treated as a unit: students are NOT expected to write the above two expressions. However, it is agreed around the world that you should not write something like "$250\%$ liters of water". So it is a good idea to think of it as a constant, but not to write it as one. Other units like cm, mm, kg are more like basis vectors of a vector space, or like the imaginary unit $i$ with $i^2=-1$. They are NOT like usual numbers, because different units cannot be added together.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3122554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "54", "answer_count": 15, "answer_id": 3 }
The set of even numbers less than 6 The set of even numbers less than $6$ is? $E = \{0, 2, 4\}$ or $E = \{\ldots, -2, 0, 2, 4\}$? Which answer is correct? The definition of an even number is any number that can be written as $x = 2n$. Is $$n \in \mathbb Z$$ or $$n\in \mathbb N ? $$
The even integers less than $6$ are $\ldots, -2, 0, 2, 4$. The even whole numbers less than $6$ are $0,2,4$. The even natural numbers less than $6$ are either $0,2,4$ or perhaps only $2,4$. "Even number" is even more ambiguous than "even natural number".
{ "language": "en", "url": "https://math.stackexchange.com/questions/3122635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Approximate monotonicity of $\epsilon$-covering number This is from Exercise 4.2.10 in Roman Vershynin's book, High-Dimensional Probability: An Introduction with Applications in Data Science. Let $(T;d)$ be a metric space and $N(A,d,\epsilon)$ be the $\epsilon$-covering number of $A\subset T$ (the minimal cardinality of an $\epsilon$-net of $A$). Then for $A\subset B\subset T$, we have $$N(A,d,\epsilon)\le N(B,d,\epsilon/2), \forall\epsilon>0$$ How do we prove this inequality? In my opinion this is probably related to the exterior covering number, since Exterior Covering Number of $\epsilon /2$ is greater of equal than Covering Number of $\epsilon$, but I cannot get any further from here. Any help will be appreciated, thank you!
Yes, we can use the exterior covering numbers: \begin{equation} N(A,d, \epsilon) \le N^{ext}(A,d,\epsilon/2) \le N^{ext}(B,d,\epsilon/2) \le N(B,d,\epsilon/2) \end{equation} The first and third inequalities are from Exercise 4.2.9, and the second one follows from the inclusion and the definition of exterior covering numbers (any exterior covering of $B$ is also an exterior covering of $A$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3122771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Circle measurement of Archimedes Let $f_n$ and $F_n$ be the areas of the regular $n$-gon inscribed in, or circumscribed about, the unit circle. Show $f_{2n}=\sqrt{f_nF_n}$ and $F_{2n}=\frac{2f_{2n}F_{n}}{f_{2n}+F_n}$ In the solutions there is only the remark that this follows from elementary geometric considerations. I tried to explain the result by elementary geometry, but I did not succeed. I have already searched the net for approaches. For example, one can construct a $2n$-gon out of an $n$-gon inscribed in a circle by adding a point between two adjacent vertices. To clarify what I mean I have added a sketch using the example of a square. Why is the geometric mean of the areas of the outer and inner squares the area of the inner octagon? I have no approach to the formula for the outer $2n$-gon. How can I construct the $2n$-gon from the outer $n$-gon, and why is its area the harmonic mean of the area of the inner $2n$-gon and the area of the outer $n$-gon? I hope someone can help me.
Let $l_n$ and $L_n$ be the lengths of the sides of the regular $n$-polygon inscribed in, or circumscribed to, the unit circle. The inscribed polygon with $n$ sides is formed by $n$ equal triangles, with base $l_n$ and altitude $\sqrt{1-l_n^2/4}$. We have then: $$ f_n={n\over2}l_n\sqrt{1-l_n^2/4},\quad F_n={n\over2}L_n. $$ On the other hand $L_n:l_n=1:\sqrt{1-l_n^2/4}$, that is: $$ L_n={l_n\over\sqrt{1-l_n^2/4}},\quad\text{and}\quad F_n={n\over2}{l_n\over\sqrt{1-l_n^2/4}}. $$ The area of the inscribed polygon with $2n$ sides is obtained by adding to $f_n$ the area of $2n$ small triangles: $$ f_{2n}=f_n+2n\cdot{1\over2}{l_n\over2}\big(1-\sqrt{1-l_n^2/4}\big)= {n\over2}l_n=\sqrt{f_n\cdot F_n}. $$ For $F_{2n}$ one can do a similar reasoning: $$ l_{2n}^2=l_n^2/4+\big(1-\sqrt{1-l_n^2/4}\big)^2=2\big(1-\sqrt{1-l_n^2/4}\big), $$ $$ \tag{*} F_{2n}={L_{2n}^2\over l_{2n}^2}f_{2n}={1\over1-l_{2n}^2/4}f_{2n}= {2\over1+\sqrt{1-l_n^2/4}}\cdot{n\over2}l_n $$ and finally: $$ {1\over f_{2n}}+{1\over F_{n}}={2\over nl_n}\big(1+\sqrt{1-l_n^2/4}\big) ={2\over F_{2n}}. $$ EDIT. To obtain the last result in $(*)$, we must substitute $l_{2n}^2=2\big(1-\sqrt{1-l_n^2/4}\big)$ and $f_{2n}={n\over2}l_n$ into $F_{2n}=(1-l_{2n}^2/4)^{-1}f_{2n}$: $$ F_{2n}=\left(1-{l_{2n}^2\over4}\right)^{-1}f_{2n}= \left(1-{2\big(1-\sqrt{1-l_n^2/4}\big)\over4}\right)^{-1}{n\over2}l_n= \left({1\over2}+{1\over2}\sqrt{1-l_n^2/4}\right)^{-1}{n\over2}l_n= \left({1+\sqrt{1-l_n^2/4}\over2}\right)^{-1}{n\over2}l_n= {2\over 1+\sqrt{1-l_n^2/4}}\cdot{n\over2}l_n. $$
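These two recurrences are exactly Archimedes' doubling scheme; starting from the hexagon ($l_6 = 1$, so $f_6 = \frac{3\sqrt3}{2}$ and $F_6 = 2\sqrt3$), both sequences converge to $\pi$. A numerical sketch (my addition):

```python
import math

n = 6
l = 1.0                                      # side of the inscribed hexagon
f = (n / 2) * l * math.sqrt(1 - l**2 / 4)    # f_6 = 3*sqrt(3)/2
F = (n / 2) * l / math.sqrt(1 - l**2 / 4)    # F_6 = 2*sqrt(3)

for _ in range(20):                          # double the number of sides 20 times
    f_new = math.sqrt(f * F)                 # f_{2n} = sqrt(f_n * F_n)
    F = 2 * f_new * F / (f_new + F)          # F_{2n} = 2 f_{2n} F_n / (f_{2n} + F_n)
    f = f_new

print(f, F)  # both ≈ 3.14159265...
```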
{ "language": "en", "url": "https://math.stackexchange.com/questions/3122876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Finding a smooth atlas of $M=\{ (x,y,z) \in \mathbb{R}^3 \; | \; x^2+y^2-z^2=1 \}$ We know that $M=\{ (x,y,z) \in \mathbb{R}^3 \; | \; x^2+y^2-z^2=1 \}$ is a submanifold of $\mathbb{R}^3$ of dimension $2$. The point is that I have to find an atlas which defines the structure of a $C^{\infty}$ manifold on $M$. I just discovered these objects, and I have to say that I don't know how to proceed. As $M$ is a submanifold of dimension $2$, I think I should find some open sets $(U_i)_i$ of $M$, some open sets $(V_i)_i$ of $\mathbb{R}^2$, and some homeomorphisms $\psi_i : U_i \rightarrow V_i$, such that the transition maps are $C^\infty$. My first idea was to use the implicit function theorem to express $z$ as a function of $x,y$, and then I would be able to solve the problem, I think. But all I know is that $(x,y,z) \in M \iff z = \pm \sqrt{x^2+y^2-1}$, and from that I tried several things, but it doesn't lead where I wanted. Could someone help me and explain how to do it, and what the idea behind those objects is? Thank you!
The space is topologically equivalent to a punctured plane, which suggests that we can cover $M$ with a single smooth chart. Since the defining equation has the form $F(x^2 + y^2, z) = C$, $M$ is invariant under rotations about the $z$-axis $Z$, which suggests that we look for a map $M \to \Bbb R^2 - \{ 0 \}$ that is suitably compatible with those rotations (more precisely, that is equivariant for the two rotation actions). In particular, this suggests that if we view $\Bbb R^2$ as the $xy$-plane in $\Bbb R^3$, we map the arc given by the intersection of $M$ with a half-plane bounded by $Z$ to the corresponding ray from the origin in $\Bbb R^2$, via $$(x, y, z) \mapsto f(z)\left(\frac{x}{\sqrt{x^2+y^2}}, \frac{y}{\sqrt{x^2+y^2}}\right)$$ for a suitable bijection $f : \Bbb R \to \Bbb R^+$. A natural choice is $f(z) := \exp z$, giving $$(x, y, z) \mapsto \frac{e^z}{\sqrt{x^2+y^2}}\,(x, y) .$$
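A numerical sanity check of such a chart and its inverse (my addition; using $f(z)=e^z$, so the radius of the image point recovers $z = \log r$, and on $M$ one has $\sqrt{x^2+y^2}=\sqrt{1+z^2}$):

```python
import math
import random

def chart(x, y, z):
    """(x, y, z) on M  ->  (u, v) in R^2 \\ {0}."""
    r = math.hypot(x, y)              # = sqrt(1 + z^2) on M
    s = math.exp(z) / r
    return s * x, s * y

def chart_inv(u, v):
    """(u, v) in R^2 \\ {0}  ->  the point of M mapping to it."""
    r = math.hypot(u, v)
    z = math.log(r)                   # |image| = e^z
    s = math.sqrt(1 + z * z) / r
    return s * u, s * v, z

random.seed(1)
for _ in range(100):
    z = random.uniform(-3, 3)
    th = random.uniform(0, 2 * math.pi)
    rho = math.sqrt(1 + z * z)
    x, y = rho * math.cos(th), rho * math.sin(th)   # a point on M
    x2, y2, z2 = chart_inv(*chart(x, y, z))
    assert abs(x - x2) < 1e-9 and abs(y - y2) < 1e-9 and abs(z - z2) < 1e-9
print("roundtrip ok")
```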
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123014", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Derivative of sigmoid function that contains vectors Could someone show me how to take the derivative of this function with respect to $w_i$? $f(w) = \frac{1}{1+e^{-w^Tx}}$ $w$ and $x$ are both vectors $\in \mathbb{R}^D$ How would this be different from taking the derivative with respect to $w$ itself?
With $x = (x_1, x_2, \ldots, x_n)^T \tag 1$ and $w = (w_1, w_2, \ldots, w_n)^T, \tag 2$ we have $w^Tx = \displaystyle \sum_1^n w_i x_i; \tag 3$ we observe that $\dfrac{\partial (w^Tx)}{\partial w_j} = x_j, \; 1 \le j \le n; \tag 4$ we may write $f(w) = \dfrac{1}{1 + e^{-w^Tx}} = (1 + e^{-w^Tx})^{-1}, \tag 5$ and deploy the chain rule: $\dfrac{\partial f(w)}{\partial w_j} = \dfrac{df(w)}{d(w^Tx)} \dfrac{\partial (w^Tx)}{\partial w_j}$ $= -(1 + e^{-w^Tx})^{-2} (-e^{-w^Tx}) \dfrac{\partial (w^Tx)}{\partial w_j} = (1 + e^{-w^Tx})^{-2} (e^{-w^Tx}) x_j = \dfrac{e^{-w^Tx} x_j}{(1 + e^{-w^Tx})^{2}}. \tag 6$ The above shows how to form the derivatives with respect to the individual $w_i$; to take the derivative with respect to $w$, we again call on the chain rule, but rather than (6) we have $\dfrac{\partial f(w)}{\partial w} = \dfrac{df(w)}{d(w^Tx)} \dfrac{\partial (w^Tx)}{\partial w} = (1 + e^{-w^Tx})^{-2} (e^{-w^Tx})\dfrac{\partial (w^Tx)}{\partial w}, \tag 7$ where it remains to evaluate $\dfrac{\partial (w^Tx)}{\partial w}; \tag 8$ but this is straightforward; we form the difference $(w + \Delta w)^T x - w^Tx = w^Tx + \Delta w^T x - w^Tx = \Delta w^T x, \tag 9$ whence $(w + \Delta w)^T x - w^Tx - \Delta w^T x = 0, \tag{10}$ which yields $\Vert (w + \Delta w)^T x - w^Tx - \Delta w^T x \Vert = 0, \tag{11}$ independently of $\Vert \Delta w \Vert$; since it then follows that $\dfrac{\Vert (w + \Delta w)^T x - w^Tx - \Delta w^T x \Vert}{\Vert \Delta w \Vert} = 0, \; \forall \Delta w \ne 0, \tag{12}$ we may conclude that the linear map $\Delta w \to \Delta w^T x \tag{13}$ is the sought-for derivative (8); it follows then that (7) may be written $\dfrac{\partial f(w)}{\partial w} = \dfrac{df(w)}{d(w^Tx)} \dfrac{\partial (w^Tx)}{\partial w} = (1 + e^{-w^Tx})^{-2} (e^{-w^Tx})(\cdot)^Tx; \tag{14}$ we note that this is the linear mapping from $\Bbb R^n \to \Bbb R$ given by $\Delta w \mapsto \dfrac{ e^{-w^Tx}}{ (1 + e^{-w^Tx})^2} \Delta w^T x. \tag{15}$ We may cast this in a somewhat more normative form via the observation that, since $\Delta w^T x \in \Bbb R$ is a scalar quantity, $\Delta w^T x = (\Delta w^T x)^T = x^T (\Delta w^T)^T = x^T \Delta w, \tag{16}$ and therefore (15) becomes $\Delta w \mapsto \dfrac{ e^{-w^Tx}}{ (1 + e^{-w^Tx})^2} x^T \Delta w, \tag{17}$ so we at last find $\dfrac{\partial f(w)}{\partial w} = \dfrac{df(w)}{d(w^Tx)} \dfrac{\partial (w^Tx)}{\partial w} = \dfrac{ e^{-w^Tx}}{ (1 + e^{-w^Tx})^2} x^T, \tag{18}$ is the derivative we seek.
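Note that $\frac{e^{-u}}{(1+e^{-u})^2} = \sigma(u)\,(1-\sigma(u))$ for the sigmoid $\sigma(u)=(1+e^{-u})^{-1}$, so the gradient is $\sigma(w^Tx)(1-\sigma(w^Tx))\,x$. A finite-difference check of this formula (my addition):

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def f(w, x):
    return sigmoid(w @ x)

def grad_f(w, x):
    s = sigmoid(w @ x)
    return s * (1 - s) * x            # = e^{-u} / (1 + e^{-u})^2 * x, with u = w^T x

rng = np.random.default_rng(0)
w, x = rng.normal(size=4), rng.normal(size=4)

eps = 1e-6                            # central differences, one coordinate at a time
num = np.array([(f(w + eps * e, x) - f(w - eps * e, x)) / (2 * eps)
                for e in np.eye(4)])
assert np.allclose(num, grad_f(w, x), atol=1e-8)
```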
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Klein bottle with constant curvature is flat A torus (equipped with a Riemannian or Lorentzian metric) which has constant curvature must be flat because of Gauss-Bonnet theorem. Is it true that a Klein bottle (equipped with a Riemannian or Lorentzian metric) which has constant curvature must also be flat? My first idea was to argue with Gauss-Bonnet, but then I realized that this only works for oriented surfaces. Can we lift the metric to the torus and then argue that this lifted metric is flat?
The Klein bottle, like the torus, has $\Bbb R^2$ as universal covering; its group of deck transformations is generated by a translation and a glide reflection. If we endow the Klein bottle with a metric of constant curvature and pull it back to $\Bbb R^2$ we get a complete metric of constant curvature admitting this fixed-point-free group acting as isometries, and the flat metric is the only such. Alternatively, the Klein bottle admits the torus as a twofold cover, so you can apply the Gauss-Bonnet argument to the constant-curvature metric on the torus obtained by pulling back the metric on the bottle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that all normals to $\gamma(t)=(\cos(t)+t\sin(t),\sin(t)-t\cos(t))$ are the same distance from the origin. Show that all normals to $\gamma(t)=(\cos(t)+t\sin(t),\sin(t)-t\cos(t))$ are the same distance from the origin. My attempt: Let $\vec{p}=(\cos(t_0)+t_0\sin(t_0),\sin(t_0)-t_0\cos(t_0))$ be any arbitrary point for $t_0\in\mathbb{R}$. Then the tangent vector at $\vec{p}$ is given by $\dot\gamma(t_0)=(t_0\cos(t_0),t_0\sin(t_0))\implies$ the slope of the tangent vector at any point is given by $m=\tan(t_0)\implies$ the slope of any normal line is given by $m_{\perp}=-\cot(t_0)$. Now we calculate the normal line at any point $\vec{p}:$ $$y-(\sin(t_0)-t_0\cos(t_0))=-\cot(t_0)(x-\cos(t_0)-t_0\sin(t_0))\implies$$ $$\cot(t_0)x+y+(2t_0\cos(t_0)+\cot(t_0)\cos(t_0)-\sin(t_0))=0$$ Recall that the distance from the line $Ax+By+C=0$ to $Q(x_0,y_0)$ is: $$|l,Q|=\frac{|Ax_0+By_0+C|}{\sqrt{A^2+B^2}}$$ Hence $$|l,Q|=\frac{\sqrt{4t_0^2\cos^2(t_0)+\cot^2(t_0)\cos^2(t_0)+\sin^2(t_0)}}{\sqrt{\cot^2(t_0)+1}}$$ How can I proceed from here? Thanks in advance! Further progress: Following the advice of user429040, the parametric form of any normal line is: $$\mathscr l=(x(t_0)-tt_0\sin(t_0), y(t_0)+tt_0\cos(t_0))$$ The goal is to now minimize the norm of this parametric line over $t$, and show that this minimum does not depend on $t_0$: $$|\mathscr l|=|(x(t_0)-tt_0\sin(t_0), y(t_0)+tt_0\cos(t_0))|=((t-1)^2(t_0^2+1))^\frac{1}{2}\implies\min|\mathscr l|=0.$$ However, we can confirm graphically, and by Prof. Blatter's answer above, that this conclusion is incorrect. Where do I go from here?
As a variant: The point $\gamma(t)$ is obtained by taking the point on the circle $r(t)=(\cos(t), \sin(t))$ and following circle's tangent line at $r(t)$, which we denote by $L$, and which is in the direction $Rot_{90} r(t)=(\sin(t), -\cos(t))$, for time $t$. Thus after we compute the tangent to $\gamma(t)$ to have direction parallel to $(\cos(t), \sin(t))$ i.e. to $r(t)$, we know that the normal to $\gamma$ at $\gamma(t)$ is parallel to $L$ and passes through $\gamma(t)$, which $L$ also does. So the normal line is nothing but $L$ itself. Of course $L$ is tangent to the unit circle (at $r(t)$), and so is unit distance from the origin.
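This can also be confirmed numerically (my addition): the normal at $\gamma(t)$ passes through $\gamma(t)$ with direction $(\sin t, -\cos t)$, and the distance from the origin to a line through $p$ with direction $d$ is $|p_x d_y - p_y d_x|/\|d\|$.

```python
import math

def dist_origin_to_normal(t):
    # point on the involute
    px = math.cos(t) + t * math.sin(t)
    py = math.sin(t) - t * math.cos(t)
    # direction of the normal line (perpendicular to the tangent direction (cos t, sin t))
    dx, dy = math.sin(t), -math.cos(t)
    # distance from the origin to the line through (px, py) with direction (dx, dy)
    return abs(px * dy - py * dx) / math.hypot(dx, dy)

for k in range(1, 60):
    assert abs(dist_origin_to_normal(0.1 * k) - 1.0) < 1e-12
print("all sampled normals at distance 1")
```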
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 5 }
Prove that there exists a $k$ such that the sequence obtained by listing each element of $S$ a total of $k$ times is a degree sequence of some graph. I'm currently working on the following graph theory exercise: Let $S = \{2, 6, 7\}$. Prove that there exists a positive integer $k$ such that the sequence obtained by listing each element of $S$ a total of $k$ times is a degree sequence of some graph. What is the minimum value of $k$? I'm basing my attempt on a theorem, but I haven't found a way to make it fit my solution. Thanks in advance for any hint or help.
This is a partial answer which addresses only the first part of the question: proving the existence of $k$, without trying to determine the minimal value. Given any set $S=\{s_1,s_2,\dots,s_n\}$ of $n$ positive integers, let $k$ be the least common multiple of the numbers $s_1+1,s_2+1,\dots,s_n+1$. Then the graph $G=\frac k{s_1+1}\cdot K_{s_1+1}+\frac k{s_2+1}\cdot K_{s_2+1}+\cdots+\frac k{s_n+1}\cdot K_{s_n+1}$ (a disjoint union of complete graphs) has $k$ vertices of degree $s_i$ for each $i$. For the given set $S=\{2,6,7\}$ this dumb construction uses $k=\operatorname{lcm}(3,7,8)=168$, which is far from the minimum, as we can see from the accepted answer.
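As a quick sanity check of this construction (a sketch of my own, not part of the answer; the function names are mine, and `math.lcm` requires Python 3.9+), one can list the degrees of the disjoint union directly:

```python
from math import lcm
from collections import Counter

def construction_degrees(S):
    # k = lcm(s_i + 1); take k/(s_i + 1) disjoint copies of the complete
    # graph K_{s_i+1}, each of which has s_i + 1 vertices of degree s_i
    k = lcm(*(s + 1 for s in S))
    degrees = []
    for s in S:
        copies = k // (s + 1)
        degrees.extend([s] * (copies * (s + 1)))
    return k, Counter(degrees)

k, counts = construction_degrees([2, 6, 7])
print(k, dict(counts))  # 168 {2: 168, 6: 168, 7: 168}
```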
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How many ways are there to place $5$ balls into $8$ boxes? $8$ boxes are arranged in a row. In how many ways can $5$ distinct balls be put into the boxes if each box can hold at most one ball and no two boxes without balls are adjacent?
HINTS: Start by satisfying the claim that "no two empty boxes are adjacent". For this you will need $4$ balls, and I let you figure out in how many ways these can be arranged so that there are no empty boxes next to each other. Finally you have $1$ ball left. How many empty boxes are left? Hope this helped. If you need more guidance, please let me know.
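The hint can be checked by brute force. Below is a sketch of my own (not part of the hint): enumerate which boxes are occupied, keep the choices whose three empty boxes are pairwise non-adjacent, and multiply by $5!$ orderings of the distinct balls.

```python
from itertools import combinations
from math import factorial

def count_arrangements(boxes=8, balls=5):
    total = 0
    for occupied in combinations(range(boxes), balls):
        empty = set(range(boxes)) - set(occupied)
        # no two empty boxes may be adjacent in the row
        if all(abs(a - b) > 1 for a in empty for b in empty if a != b):
            total += factorial(balls)  # the distinct balls can be permuted
    return total

print(count_arrangements())  # 2400  (= C(6,3) * 5! = 20 * 120)
```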
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Interior solution of a Variational Inequality in Hilbert Space I'm trying to understand the proof of the following claim: Let $K \subset \mathbb{R}^n$ be compact and convex and let $$F:K \rightarrow (\mathbb{R}^n)'$$ be continuous. Then there exists an $x\in K$ s.t. the dual pairing $$\langle F(x), y-x \rangle \geq 0 \quad \forall y \in K$$ Further, if $x$ satisfying the above inequality is on the interior of $K$ then $F(x) = 0$. I understood the existence, but not the proof for $F(x) = 0$ which proceeds as follows: If $x$ is interior to $K$ then the points $y-x$ describe a neighborhood of the origin -- ie for any $q \in \mathbb{R}^n$ there exists an $\epsilon \geq 0$ and $y \in K$ s.t. $q = \epsilon(y-x)$ so that: $$\langle F(x), q \rangle = \epsilon \langle F(x), y-x \rangle \geq 0 \quad \forall q \in \mathbb{R}^n$$ This implies $F(x) = 0$. In particular, I don't quite follow the statements in bold. * *I don't see any translation to the origin or anything, so this confused me, are they assuming $x$ is the origin or something? *How does $F(x) = 0$ follow from the inequality?
* *I don't see any translation to the origin or anything, so this confused me, are they assuming $x$ is the origin or something? They chose $x$ as the origin of a new coordinate system by the change of variables $y\mapsto y-x$, i.e. they simply translate the starting coordinate system. *How does $F(x)=0$ follow from the inequality? Since $F(x)\in(\Bbb R^n)^\prime$, the duality pairing $\langle\:,\,\rangle:(\Bbb R^n)^\prime\times \Bbb R^n\to\Bbb R$ is simply the euclidean scalar product in $\Bbb R^n$, therefore for all $(u,v)\in (\Bbb R^n)^\prime\times \Bbb R^n\equiv \Bbb R^n\times \Bbb R^n$ $$ \langle u, \lambda v\rangle = \lambda \langle u, v\rangle\quad\forall\lambda\in\Bbb R.\label{1}\tag{SP} $$ Now let $y$ be any vector contained in a spherical neighborhood $V_x$ of $x$ (with $V_x\Subset K$). On one hand the hypothesis gives $\langle F(x), y-x \rangle\ge 0$, so $\langle F(x), x-y \rangle=-\langle F(x), y-x \rangle\le 0$ by \eqref{1}. On the other hand $2x-y\in V_x$ as well, so the hypothesis applied to $2x-y$ gives $$ \langle F(x), (2x-y)-x \rangle = \langle F(x), x-y \rangle \ge 0, $$ and therefore $$ \langle F(x), y-x \rangle=0 \quad\forall y\in V_x, $$ and this implies $F(x)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Confused about $\csc(x)$ The title says it all, let's take this simple example: I want to find $c$. Using $\sin$ I get: * *$\sin(30)$ = $4 / c$ *$c = 4 / \sin(30) = 8$ Now, using $\csc$: * *$\csc(30) = c / 4$ *$c = \csc(30) * 4 = 8$ But also: * *$\csc(30) = c / 4$ *$\tan(30) = 4 / a$ *$4 = \tan(30) * a$ *$c = \csc(30)\cdot \tan(30)\cdot c = c\cdot (c / 4)\cdot (4 / a) = c$ *OR: $\csc(30) \cdot \tan(30)\cdot c = \sec(30)\cdot c \approx c$ What's gone wrong here, I assumed $c$ would cancel out? EDIT 1: Why isn't $c$ cancelling out in the last example, even after the fix? EDIT 2: Fixed steps 2 and 3 (changed $c$ to $a$). Thank you for your help, silly mistakes.
$$\sec30^{\circ}=\frac{1}{\cos30^{\circ}}=\frac{2}{\sqrt3}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3123998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
"Expectation exists" vs "Expectation is finite" Are the statements * *"Expectation exists" *"Expectation is finite" equivalent? If not, could someone please provide a counterexample. In case it's relevant, I don't know measure theory, but am confortable with probability theory & statistics at the undergraduate level.
A counterexample is the Cauchy distribution. It is symmetric around $0$ but both tails of the distribution are 'too' heavy: $$E[X]=\int_{-\infty}^{\infty} x\cdot\frac{1}{\pi(1+x^2)}\,\mathrm{d}x = \int_{-\infty}^{0} x\cdot\frac{1}{\pi(1+x^2)}\,\mathrm{d} x + \int_{0}^{\infty} x\cdot\frac{1}{\pi(1+x^2)}\,\mathrm{d} x =\infty -\infty$$
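To see quantitatively how the tails fail, note that the truncated integral over $[0,T]$ has closed form $\frac{\ln(1+T^2)}{2\pi}$, which grows without bound as $T\to\infty$. A small numerical check of my own (the function name is mine):

```python
import math

def truncated_positive_part(T, steps=100_000):
    # midpoint rule for the integral of x / (pi (1 + x^2)) over [0, T];
    # the closed form is log(1 + T^2) / (2 pi)
    h = T / steps
    total = 0.0
    for i in range(steps):
        m = (i + 0.5) * h
        total += m / (math.pi * (1 + m * m))
    return total * h

for T in (10, 100, 1000):
    print(T, truncated_positive_part(T), math.log(1 + T * T) / (2 * math.pi))
```

The two columns agree, and the value keeps growing with $T$, which is exactly the divergence of the positive part.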
{ "language": "en", "url": "https://math.stackexchange.com/questions/3124119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find all the group morphisms from $C_3$ to $C_4$ I know that since the gcd of 3 and 4 is 1, there can only be one group morphism but I'm unsure of how to find it. Any help would be greatly appreciated!
Hint: If $\phi:C_{3}\to C_{4}$ is a group homomorphism, what will be the kernel and image of $\phi$? In particular, what are the possible orders of $\ker(\phi)$ and of $\mathrm{img}(\phi)$? Think about Lagrange's theorem and the first isomorphism theorem.
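For a concrete check (a sketch of my own, not part of the hint): writing both groups additively as $\Bbb Z_m$ and $\Bbb Z_n$, a homomorphism is determined by $a=\phi(1)$, and $k\mapsto ak \bmod n$ is well defined iff $am\equiv 0 \pmod n$. A brute-force enumeration:

```python
from math import gcd

def homs(m, n):
    # homomorphisms Z_m -> Z_n correspond to a = phi(1) with a*m = 0 (mod n)
    return [a for a in range(n) if (a * m) % n == 0]

print(homs(3, 4))                   # [0]: only the trivial homomorphism
print(len(homs(6, 4)), gcd(6, 4))   # in general there are gcd(m, n) of them
```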
{ "language": "en", "url": "https://math.stackexchange.com/questions/3124236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\operatorname{Tr}(AB)=\operatorname{Tr}(A)\operatorname{Tr}(B)$ Problem. Show that $\operatorname{Tr}(AB)=\operatorname{Tr}(A)\operatorname{Tr}(B)$, if $A^2+3AB+B^2=BA$ and $\det(A)=0$, where all matrices involved are real-valued $2\times2$ matrices. First I rewrote $A^2+3AB+B^2=BA$ as $$ A(A+3B)=B(A-B) $$ and I took the determinant. Then I obtained that $$ \det(B)\det(A-B)=0, $$ because $\det(A)=0$. So, $\det(B)=0$ or $\det(A-B)=0$. Case I ($\det(B)=0$). Then, by the Cayley-Hamilton theorem, I wrote that $$ A^2=(\operatorname{Tr}(A))A \text{ and } B^2=(\operatorname{Tr}(B))B. $$ Multiplying the above two equations, I got $(AB)^2=(\operatorname{Tr}(A)\operatorname{Tr}(B))AB$. Another use of the Cayley-Hamilton theorem gives $(AB)^2=(\operatorname{Tr}(AB))AB$. So, $\operatorname{Tr}(A)\operatorname{Tr}(B)$ must be equal to $\operatorname{Tr}(AB)$ only if $AB=BA$. But I don't have such a hypothesis. Case II ($\det(A-B)=0$). I don't know how to handle this one in order to get the answer.
Here is a somewhat messy solution: Lemma. Let $F$ be a field with characteristic $\neq 2$ and let $A$ and $B$ be $2\times 2$ matrices over $F$. * *If $A$ and $B$ commute, then either $A = pB + qI$ or $B = pA + qI$ for some $p, q \in F$. *If $A^2 = k(AB - BA)$, then $A^2 = 0$. In particular, if $k \neq 0$, then $AB = BA$. We postpone the proof of this lemma to the end. By rearranging the equality in the assumption, we obtain $(A+B)^2 = 2[(A+B)A-A(A+B)]$. So, by the Lemma, $(A+B)^2 = 0$ and $AB = BA$. Now we claim the following. Claim. Either $A = 0$ or $B = kA$ for some $k$. Once this is true, the rest is straightforward: If $A = 0$, there is nothing to prove. If $B = kA$ for some $k$, then $$ \operatorname{Tr}(AB) = k\operatorname{Tr}(A^2) = k\operatorname{Tr}(A)^2 = \operatorname{Tr}(A)\operatorname{Tr}(B), $$ where the second step follows from the identity $\operatorname{Tr}(A^2) = \operatorname{Tr}(A)^2$, which itself is a consequence of both the Cayley-Hamilton theorem and $\det(A) = 0$. $\square$ So it remains to show the claim. Assume that $A \neq 0$. Since $\det(A) = 0$, we may use the Lemma to write $$A+B = aA + bI $$ for some $a, b$. Indeed, this is obvious when $B = pA + qI$. On the other hand, if $A = pB + qI$, then $p \neq 0$ by the assumption, and so, $A+B = \frac{p+1}{p}A - \frac{q}{p}I$. Next, we consider two possibilities: * *If $a = 0$, then $0 = (A+B)^2 = b^2 I$, and so, $b = 0$. This implies $B = -A$. *If $a \neq 0$, then by rearranging $0 = (A+B)^2$ we obtain $A\left(A + \frac{2b}{a}I\right) = -\frac{b^2}{a^2} I$. Since $A$ is not invertible, this must imply $b = 0$ and hence $B = (a-1)A$. This completes the proof of Claim. $\square$ Proof of Lemma. 
For the first part, replacing $A$ by $A-\frac{1}{2}\operatorname{Tr}(A)I$ and $B$ by $B-\frac{1}{2}\operatorname{Tr}(B)I$ if needed, we may assume that $A$ and $B$ take the form $$ A = \begin{pmatrix} a & b \\ c & -a \end{pmatrix}, \qquad B = \begin{pmatrix} p & q \\ r & -p \end{pmatrix} $$ Now write $u = (a, p)$, $v = (b, q)$, $w = (c, r)$. Then comparing both sides of $AB - BA = 0$, we find that $AB = BA$ if and only if $\det(u, v) = \det(v, w) = \det(w, u) = 0$. In particular, they are parallel to each other. This implies that $A$ and $B$ are parallel, and so, the desired conclusion follows. For the second part, we first check $\det(A) = 0$. To this end, assume otherwise. Then $$ 2 = \operatorname{Tr}(I) = \operatorname{Tr}(A^{-1}A^2A^{-1}) = k \operatorname{Tr}(BA^{-1} - A^{-1}B) = 0,$$ a contradiction. Then taking the trace of both sides, by the Cayley-Hamilton theorem, $$\operatorname{Tr}(A)^2 = \operatorname{Tr}(A^2) = k \operatorname{Tr}(AB-BA) = 0, $$ and so, $\operatorname{Tr}(A) = 0$. Therefore the desired claim follows from the Cayley-Hamilton theorem. $\square$
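The identity $\operatorname{Tr}(A^2)=\operatorname{Tr}(A)^2$ for singular $2\times2$ matrices, used above, is easy to spot-check: every singular $2\times2$ matrix has rank at most one, i.e. is of the form $uv^{T}$. A small check of my own (helper names are mine):

```python
def mat(u, v):
    # rank-<=1 (hence singular) 2x2 matrix u v^T
    return [[u[0] * v[0], u[0] * v[1]], [u[1] * v[0], u[1] * v[1]]]

def tr(M):
    return M[0][0] + M[1][1]

def mul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

for u in [(1, 2), (3, -1), (0, 5)]:
    for v in [(2, 7), (-4, 1), (6, 0)]:
        A = mat(u, v)
        assert A[0][0] * A[1][1] - A[0][1] * A[1][0] == 0  # det(A) = 0
        assert tr(mul(A, A)) == tr(A) ** 2                 # Tr(A^2) = Tr(A)^2
```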
{ "language": "en", "url": "https://math.stackexchange.com/questions/3124375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
If $S$ is compact then the separating classes are the same as the convergence determining classes Suppose $(S,\rho)$ is a metric space with Borel $\sigma$-algebra $\mathcal S$. A class of subsets $\mathcal A\subset\mathcal S$ is called a separating class if whenever two Borel probability measures $P$ and $Q$ satisfy $P(A)=Q(A)$ for all $A\in\mathcal A$, then $P=Q$ on $\mathcal S$. A class of subsets $\mathcal A\subset\mathcal S$ is called a convergence determining class if, for any sequence of Borel probability measures $P_n$ and any Borel probability measure $P$, the convergence $P_n(A)\to P(A)$ for all $P$-continuity sets $A\in \mathcal A$ implies $P_n\Rightarrow P$, i.e. $P_n(A)\to P(A)$ for all $P$-continuity sets $A\in\mathcal S$. In general, a separating class need not be a convergence determining class, although the converse is always true. However, I want to show that if $S$ is compact, then every separating class is also a convergence determining class. Any help will be appreciated.
In the case of a compact metric space the space of all Borel probability measures is relatively compact (by Prohorov's Theorem). Let $A$ be a $P-$ continuity set in $\mathcal S$. It is enough to show that every subsequence of $\{P_n(A)\}$ has a further subsequence converging to $P(A)$. But every subsequence of $\{P_n\}$ has a further subsequence converging weakly to some probability measure $Q$. Hence $Q(A)=P(A)$ for every $P$ continuity set $A$. This finishes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3124517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Limiting value of a sequence when n tends to infinity Q) Let $a_{n} \;=\; \left ( 1-\frac{1}{\sqrt{2}} \right ) \cdots \left ( 1- \frac{1}{\sqrt{n+1}} \right )$, $n \geq 1$. Then $\lim_{n\rightarrow \infty } a_{n}$ (A) equals $1$ (B) does not exist (C) equals $\frac{1}{\sqrt{\pi }}$ (D) equals $0$ My approach: I could not find a particular direction or procedure to simplify $a_{n}$ and compute its value as $n$ tends to infinity, so I tried simply substituting values to estimate the limiting value: $\left ( 1-\frac{1}{\sqrt{1+1}} \right ) * \left ( 1-\frac{1}{\sqrt{2+1}} \right )*\left ( 1-\frac{1}{\sqrt{3+1}} \right )*\left ( 1-\frac{1}{\sqrt{4+1}} \right )*\left ( 1-\frac{1}{\sqrt{5+1}} \right )*\left ( 1-\frac{1}{\sqrt{6+1}} \right )*\left ( 1-\frac{1}{\sqrt{7+1}} \right )*\left ( 1-\frac{1}{\sqrt{8+1}} \right )*\cdots*\left ( 1-\frac{1}{\sqrt{n+1}} \right )$ =$(0.293)*(0.423)*(0.5)*(0.553)*(0.622)*(0.647)*(0.667)* \dots$ $=0.009\cdots$ So the value appears to tend to zero, and I think option $(D)$ is correct. I have also tried $\left ( \frac{\sqrt{2}-1}{\sqrt{2}} \right )*\left ( \frac{\sqrt{3}-1}{\sqrt{3}} \right )*\left ( \frac{\sqrt{4}-1}{\sqrt{4}} \right )*\cdots*\left ( \frac{\sqrt{(n+1)}-1}{\sqrt{n+1}} \right )$ = $\left ( \frac{(\sqrt{2}-1)*(\sqrt{3}-1)*(\sqrt{4}-1)*\cdots*(\sqrt{n+1}-1)}{{\sqrt{(n+1)!}}} \right )$ Now I am again stuck on how to simplify further and find the value to which $a_{n}$ converges as $n$ tends to infinity. Please help if there is a procedure to solve this question.
As we multiply positive factors below $1$, the sequence is positive and strictly decreasing, hence convergent (ruling out B). As already $a_1<1$, we also rule out A. If we multiply $a_n$ by $b_n:=\left(1+\frac1{\sqrt 2}\right)\cdots \left(1+\frac1{\sqrt {n+1}}\right)$, note that the product of corresponding factors is $\left(1-\frac1{\sqrt {k}}\right)\left(1+\frac1{\sqrt {k}}\right)=1-\frac1k<1$, hence $a_nb_n<1$. On the other hand, by expanding the product and dropping lots of positive terms $$b_n\ge 1+\frac1{\sqrt 2}+\frac1{\sqrt 3}+\ldots+\frac1{\sqrt {n+1}} $$ so that $b_n$ gets arbitrarily large. We conclude that $a_n$ gets arbitrarily small positive. In other words $a_n\to 0$.
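The conclusion can be backed up numerically. A small sketch of my own (not part of the answer):

```python
import math

def a(n):
    # a_n = product over k = 2 .. n+1 of (1 - 1/sqrt(k))
    p = 1.0
    for k in range(2, n + 2):
        p *= 1 - 1 / math.sqrt(k)
    return p

for n in (10, 100, 1000):
    print(n, a(n))  # strictly decreasing, heading to 0 roughly like exp(-2 sqrt(n))
```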
{ "language": "en", "url": "https://math.stackexchange.com/questions/3124615", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Probability of getting a sequence of at least $k$ heads in $n$ coin flips. Let's say we are flipping a coin $n$ times. What is the probability of getting a run of at least $k$ consecutive heads? So far I've come up with this: If we assume $k=2$, and the number of events satisfying the conditions is $m$ for $N$ throws, then for $N+1$ throws the number of satisfying events is: $2m + \frac{2^N - m}{2}$. $2m$ comes from the situation when the sequence was already there, so we can get either heads or tails. $\frac{2^N - m}{2}$ is when there was no sequence but the last result was heads, so we have to get heads again. This can be made into an equation and solved, I guess. However, it's more difficult to do it for higher $k$. How to proceed further to get a result for higher $k$?
This is not a complete answer to your question but only works under the extra condition that $n\leq2k$. Flip a fair coin again and again (so do not stop). Let $A_{k,n}$ denote the event that among the first $n$ tosses there is a consecutive sequence of at least $k$ heads. Let $T$ denote the index of the first occurence of a tail (so e.g. $T=4$ iff the sequence starts with $HHHT$). Then $P(T=t)=2^{-t}$. We find that $P(A_{k,n}\mid T=t)=1$ if $t>k$ and $P(A_{k,n}\mid T=t)=P(A_{k,n-t})$ otherwise. Based on that we find:$$P(A_{k,n})=\sum_{t=1}^{\infty}P(A_{k,n}\mid T=t)P(T=t)=\sum_{t=1}^kP(A_{k,n-t})2^{-t}+\sum_{t=k+1}^{\infty}2^{-t}=\sum_{t=1}^kP(A_{k,n-t})2^{-t}+2^{-k}$$where $A_{k,n-t}=\varnothing$ and consequently $P(A_{k,n-t})=0$ if $k>n-t$. Based on this equality by induction it can be proved that: $$P(A_{k,n})=2^{-k}\left(\frac12n-\frac12k+1\right)\text{ for }n\in\{k,k+1,\dots,2k\}\tag1$$ As base case we have $P(A_{k,k})=2^{-k}$ by substitution $n=k$ and it is evident that this is correct. Presupposing that the equality holds for every $n\in\{k,k+1,\dots,m-1\}$ where $k<m\leq2k$ we find:$$P(A_{k,m})=\sum_{t=1}^kP(A_{k,m-t})2^{-t}+2^{-k}=\sum_{t=1}^{m-k}P(A_{k,m-t})2^{-t}+2^{-k}=\sum_{n=k}^{m-1}P(A_{k,n})2^{n-m}+2^{-k}=$$$$2^{-k}\sum_{n=k}^{m-1}\left(\frac12n-\frac12k+1\right)2^{n-m}+2^{-k}=2^{-k}\left(\frac12m-\frac12k+1\right)$$ If the coin is unfair and there is a probability of $p$ on heads and $q=1-p$ on tails then the same route can be followed to arrive at the more general formula: $$P\left(A_{k,n}\right)=p^{k}\left(\left(n-k\right)q+1\right)\text{ for }n\in\{k,k+1,\dots,2k\}$$ Also see this question.
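Formula $(1)$ can be checked by exhaustive enumeration for small $k$. A sketch of my own (the function names are mine):

```python
from itertools import product
from fractions import Fraction

def run_prob(k, n):
    # exact P(run of >= k heads in n fair flips), by enumerating all 2^n outcomes
    hits = sum(1 for s in product("HT", repeat=n) if "H" * k in "".join(s))
    return Fraction(hits, 2 ** n)

def closed_form(k, n):
    # formula (1), claimed valid for k <= n <= 2k
    return Fraction(1, 2 ** k) * (Fraction(n - k, 2) + 1)

for k in range(1, 5):
    for n in range(k, 2 * k + 1):
        assert run_prob(k, n) == closed_form(k, n)
print("formula (1) verified for k = 1..4")
```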
{ "language": "en", "url": "https://math.stackexchange.com/questions/3124684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Isomorphic field extensions have the same degree Let $k_1 \subseteq k'_1$ and $k_2 \subseteq k_2'$ be field extensions. Suppose there is a field isomorphism $\phi: k'_1 \to k'_2$ where $\phi(k_1)=k_2$. Show that $[k_1':k_1]=[k'_2:k_2]$. Now my first instinct was to try and show $k_1'$ is isomorphic to $k'_2$ as vector spaces but this is nonsense since the base fields aren't equal. Therefore I want to somehow show that $k_1=k_2$ but I am not really sure this is even true. I can't seem to understand why this ought to be true.
Let $1_{k_1},a_1,\dots, a_n$ be a linear basis of $k_1'$ over $k_1$. Show that $\phi(1_{k_1}),\phi(a_1),\dots, \phi(a_n)$ is a linear basis of $k_2'$ over $k_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3124767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is $a=b$ the only possibility? In the following: $$2a-2b+(b^2-a^2)\operatorname{sinh} (by)\cos(ax)=0, \quad \forall x,y \in \mathbb{R},$$ does it follow automatically that $a=b$ in order for this to hold? Is there any other possibility? Why?
Suppose $a\ne b$. Since $2a-2b+(b^2-a^2)\sinh(by)\cos(ax)=(a-b)\left[2-(b+a)\sinh(by)\cos(ax)\right]$, dividing by $a-b$ gives $$2-(b+a)\sinh(by)\cos(ax)=0$$ for each $x,y$. So for $y=0$ we get $2=0$, a contradiction. Hence $a=b$ is the only possibility.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3125035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that embeddings, diffeomorphisms, etc. are stable classes of maps This is part of Problem 16 in Chapter 6 of Lee's Smooth Manifolds. Let $N,M,S$ be smooth manifolds. A smooth family of maps is a collection $\{F_s:N\to M \;|\; s\in S\}$ such that $F_s(x)=F(x,s)$ for some smooth $F:N\times S\to M.$ A class $\mathcal F$ of maps is called stable if whenever $\{F_s:N\to M \;|\; s\in S\}$ is a smooth family of maps and $F_{s_0}\in\mathcal F$ for some $s_0\in S$, then there is some neighborhood $U$ of $s_0$ in $S$ so that $F_s\in \mathcal F$ for each $s\in U.$ The problem I am asked to do is show that, under the assumption that $N$ is compact, the following classes of maps are stable: immersions, submersions, embeddings, diffeomorphisms, local diffeomorphisms, and maps which are transverse to a given properly embedded submanifold $X\subseteq M.$ I have solved the problem for immersions and submersions using the fact that maps of this form are maps which have maximal rank, and rank is lower semicontinuous. I am stuck when it comes to the other classes of maps, however. For example, when $F_{s_0}$ is a diffeomorphism, it is clear to me that under small perturbations of $s,$ the matrix for $d(F_s)_x$ will be invertible for each $x,$ since $GL_n(\mathbb R)$ is open in $M_{n\times n}(\mathbb R).$ What is not clear to me is why injectivity and surjectivity should be preserved under small perturbations. I'm having similar troubles with the other parts. Something that I'm sure of is that compactness of $N$ is a necessary assumption for all of these classes of maps to be stable. I used this crucially in my solutions for the immersions and submersions, and I have done the next exercise demonstrating that none of these classes are stable when $N$ is not compact. Any help or hints are appreciated. EDIT: I have reduced the parts for the class of embeddings and diffeomorphisms to showing that injectivity is preserved under perturbations. I am unsure of how to show this.
I found a solution on page 13 of http://math.berkeley.edu/~kozai/m141f14/m141_notes.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/3125171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that $\det(A^{2019} +B^{2019} )$ and $\det(A^{2019} -B^{2019} )$ are divisible by $4$ Let $A, B \in M_2(\mathbb{Z})$ be such that $$\det A=\det B=\frac{1}{4} \det(A^2+B^2)=1$$ Prove that $\det(A^{2019} +B^{2019} )$ and $\det(A^{2019} -B^{2019} )$ are divisible by $4$. The only observation I have made is that $\det(A^{2019} +B^{2019}) +\det(A^{2019} - B^{2019} )=4$, so it suffices to prove that one of them is divisible by $4$. EDIT : This was featured on the regional stage of the maths olympiad in Romania on Saturday, so it is an actual problem, not something I came up with. It is relevant to me because I am preparing for the next stage. EDIT : $AB=BA$ indeed, I forgot to include it when I posted the problem and I am sorry for this. Since there are people in the comments eager to see the official paper, here it is : https://imgur.com/a/z8v3VZp As you can see, this contest took place on the 24th of February (I know we have a pretty strange date format, but this is what 24.02.2019 means) and it wasn't an online competition, so I really was honest when I said I was not cheating.
The other answer demonstrates that the result is false unless $AB=BA$; here's a proof in that case. Note that $A^2+B^2=\dfrac12\left((A+B)^2+(A-B)^2\right)$. Therefore, $$4=\det(A^2+B^2)=\frac14\det\left((A+B)^2+(A-B)^2\right),$$ i.e. $$\det\left((A+B)^2+(A-B)^2\right)=16.$$ But because $\det(X+Y)=2\det X+2\det Y-\det(X-Y)$ for any two $2\times 2$ matrices $X,Y$, we have $$\det\left((A+B)^2+(A-B)^2\right)=2\det(A+B)^2+2\det(A-B)^2-\det(4AB),$$ so by the condition, $$16 = \det(A+B)^2+\det(A-B)^2.$$ Now, $A+B$ and $A-B$ are both integer matrices, so their determinants squared are nonnegative perfect squares. The only two nonnegative perfect squares that sum to $16$ are $0,16$, therefore one of $\det(A+B),\det(A-B)$ is $0$ and the other is $\pm 4$. We are done because of the factorisations $$A^{2019}+B^{2019}=(A+B)(A^{2018}-A^{2017}B+\dots+B^{2018}),$$ $$A^{2019}-B^{2019}=(A-B)(A^{2018}+A^{2017}B+\dots+B^{2018}).$$
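The $2\times 2$ identity $\det(X+Y)=2\det X+2\det Y-\det(X-Y)$ used above can be verified by expanding, or spot-checked numerically. A sketch of my own:

```python
def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def add(M, N, sign=1):
    return [[M[i][j] + sign * N[i][j] for j in range(2)] for i in range(2)]

pairs = [([[1, 2], [3, 4]], [[5, -1], [0, 2]]),
         ([[0, 1], [1, 0]], [[2, 2], [-1, 3]]),
         ([[7, 0], [0, -3]], [[1, 1], [1, 1]])]
for X, Y in pairs:
    # det(X+Y) = 2 det X + 2 det Y - det(X-Y) for 2x2 matrices
    assert det2(add(X, Y)) == 2 * det2(X) + 2 * det2(Y) - det2(add(X, Y, -1))
```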
{ "language": "en", "url": "https://math.stackexchange.com/questions/3125319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 1 }
Wave Equation with Initial Conditions on Characteristic Curves I am trying to solve the initial value problem: $$\begin{cases} u_{tt}-u_{xx} =0\\ u|_{t = x^2/2} = x^3, \quad |x| \leq 1 \\ u_{t}|_{t = x^2 / 2} = 2x, \quad |x| \leq 1 \\ \end{cases} $$ I'm unsure if we can use d'Alembert's formula in this problem, so what I have attempted is just using the general solution and going from there: $u(x, t) = f(x - t) + g(x + t)$, which means (after plugging in the initial conditions): $$f(x - \frac{x^2}{2}) + g(x + \frac{x^2}{2}) = x^3, \ \mathrm{and}$$ $$-f'(x - \frac{x^2}{2}) + g'(x + \frac{x^2}{2}) = 2x.$$ I am unsure where to go from here, or if my approach is correct. Could someone help me out?
The general solution is $u(x,t)=f(x-t)+g(x+t)$ $u|_{t=\frac{x^2}{2}}=x^3$ : $f\left(x-\dfrac{x^2}{2}\right)+g\left(x+\dfrac{x^2}{2}\right)=x^3......(1)$ $u_t(x,t)=f_t(x-t)+g_t(x+t)=g_x(x+t)-f_x(x-t)$ $u_t|_{t=\frac{x^2}{2}}=2x$ : $g_x\left(x+\dfrac{x^2}{2}\right)-f_x\left(x-\dfrac{x^2}{2}\right)=2x$ $g\left(x+\dfrac{x^2}{2}\right)-f\left(x-\dfrac{x^2}{2}\right)=x^2+c......(2)$ $\therefore f\left(x-\dfrac{x^2}{2}\right)=\dfrac{x^3-x^2-c}{2}$ , $g\left(x+\dfrac{x^2}{2}\right)=\dfrac{x^3+x^2+c}{2}$ $f\left(-\dfrac{x^2-2x}{2}\right)=\dfrac{x^3-x^2-c}{2}$ , $g\left(\dfrac{x^2+2x}{2}\right)=\dfrac{x^3+x^2+c}{2}$ $f\left(-\dfrac{(x-1)^2-1}{2}\right)=\dfrac{x^3-x^2-c}{2}$ , $g\left(\dfrac{(x+1)^2-1}{2}\right)=\dfrac{x^3+x^2+c}{2}$ $f\left(-\dfrac{x^2-1}{2}\right)=\dfrac{(x+1)^3-(x+1)^2-c}{2}$ , $g\left(\dfrac{x^2-1}{2}\right)=\dfrac{(x-1)^3+(x-1)^2+c}{2}$ $f\left(-\dfrac{x^2-1}{2}\right)=\dfrac{x^3+2x^2+x-c}{2}$ , $g\left(\dfrac{x^2-1}{2}\right)=\dfrac{x^3-2x^2+x+c}{2}$ $f\left(-\dfrac{x^2-1}{2}\right)=\dfrac{x(x+1)^2-c}{2}$ , $g\left(\dfrac{x^2-1}{2}\right)=\dfrac{x(x-1)^2+c}{2}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3125467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Image of $\gamma(t)=(\sin(t+k)\cos(t), \sin(t+k)\sin(t))$ is a circle. I would like to show the image of $\gamma(t)=(\sin(t+k)\cos(t), \sin(t+k)\sin(t))$ is a circle. I was given the hint to appeal to the isoperimetric inequality. Hence I calculate the area that the simple, closed, positively oriented curve $\gamma(t)$ encloses: $$A(\gamma(t))=\frac{1}{2}\int_0^T(x\dot y-\dot xy )dt$$ where $T=2\pi$ in our case. $$\dot x=\cos(t+k)\cos(t)-\sin(t+k)\sin(t), \quad \dot y=\cos(t+k)\sin(t)+\sin(t+k)\cos(t)\implies x\dot y-\dot xy= \sin^2(t+k)\implies A(\gamma(t))=\frac{1}{2}\int_0^{2\pi}\sin^2(t+k)dt=\frac{\pi}{2}$$ Similarly, I calculate the closed curve's length $l(\gamma(t))$: $$l(\gamma(t))=\int_0^{2\pi}|\dot \gamma(t)|dt=\int_0^{2\pi}1dt=2\pi$$ By the equality case of the isoperimetric inequality, $$A(\gamma(t))=\frac{1}{4\pi}l(\gamma(t))^2$$ if and only if $\gamma(t)$ is a circle. Clearly, $\frac{\pi}{2}\ne\frac{1}{4\pi}(2\pi)^2$. What am I missing here?
The problem is in setting $T = 2\pi$, because your function is actually $\pi$ periodic. Hence the area enclosed is $$A = \frac 1 2 \int_0^{\pi} x\dot y - y \dot x = \frac 1 2 \int_0^{\pi} \sin^2(t + k) \, dt = \frac{\pi}{4}$$ while the corresponding length is $$\ell(\gamma(t)) = \int_0^{\pi} \, dt = \pi.$$ Now the equality $$A = \frac{\ell^2}{4\pi}$$ holds. For a purely algebraic calculation that also reveals the true periodicity of the curve, note that $$x(t) = \frac 1 2 \big(\sin(2t + k) + \sin(k)\big)$$ and $$y(t) = -\frac 1 2 \big(\cos(2t + k) - \cos(k)\big).$$
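The closing identities say, in other words, that the image is the circle of radius $\frac12$ centred at $\left(\frac{\sin k}{2}, \frac{\cos k}{2}\right)$. A quick numerical confirmation of my own:

```python
import math

def gamma(t, k):
    return math.sin(t + k) * math.cos(t), math.sin(t + k) * math.sin(t)

k = 0.7  # an arbitrary phase
cx, cy = math.sin(k) / 2, math.cos(k) / 2  # centre read off from the identities
for i in range(100):
    t = i * 2 * math.pi / 100
    x, y = gamma(t, k)
    # every point of the curve is at distance 1/2 from the centre
    assert abs(math.hypot(x - cx, y - cy) - 0.5) < 1e-12
```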
{ "language": "en", "url": "https://math.stackexchange.com/questions/3125586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Preduals and $c_0$ I know that $c_0$ does not have a predual, but if we put an equivalent norm on $c_0$, can this space have a predual?
No. In a separable dual space, every closed convex set is the closed convex hull of its extreme points. (Krein-Milman property). But in $c_0$ with an equivalent norm, the original unit ball is still a closed convex set with no extreme points. Reference: Diestel, J.; Uhl, J. J., Vector measures, Mathematical Surveys No. 15, American Mathematical Society, Providence, R.I., 1977. ZBL0369.46039.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3125834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How is this property called for mod? We have a name for the property of integers to be $0$ or $1$ $\mathrm{mod}\ 2$ - parity. Is there any similar name for the remainder for any other base? Like a generalization of parity? Could I use parity in a broader sense, just to name the remainder $\mathrm{mod}\ n$?
Either AlessioDV's good answer, or you could say "of the form $nq+r$" as a representation of $\equiv r \pmod n$. For example, a number $N$ with $$N\equiv 1\pmod 3$$ has the property $N=3q+1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3125933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 3 }
Interpolation between "arithmetic mean" and "geometric mean" Define a function $f(X; p)$, which measures a statistic on a set of points $X = \{x_1, \ldots, x_n\}$ (with $x_i \in (0, 1)$), with respect to a parameter $p$. I am looking for ways to define the function $f(\cdot)$ such that at one extreme it becomes the arithmetic mean, and at the other extreme it becomes the geometric mean: * *if $p=0$ then $ f(X; p=0) = \frac{1}{n}\sum_i x_i $ *if $p=1$ then $ f(X; p=1) = (\prod_i x_i)^{1/n} $ Note: I am not looking for an "interpolation", but rather a function that could be deformed into either "mean" (similar to how an $L_p$ norm can be multiple different operations, depending on the value of $p$).
$$f(X;p) = M_{1-p}(x_1, \ldots, x_n)=\left( \frac1n \sum_{i=1}^n x_i^{1-p}\right)^\frac1{1-p}$$ When $p=0$, we clearly have the AM. Moreover, $M_p$ satisfies $\lim\limits_{p \to 1}M_{1-p}(x_1, \ldots, x_n)=\lim\limits_{p \to 0}M_p(x_1, \ldots, x_n)$, which equals the geometric mean. This is known as the generalized mean.
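A minimal sketch of this family (my own; the limit at $p=1$ is handled by switching to the geometric mean once the exponent $1-p$ is effectively $0$):

```python
import math

def f(xs, p, eps=1e-9):
    # f(X; p) = M_{1-p}(X); the exponent r = 1 - p hits 0 at p = 1,
    # where the power mean is taken as its limit, the geometric mean
    r = 1 - p
    if abs(r) < eps:
        return math.exp(sum(math.log(x) for x in xs) / len(xs))
    return (sum(x ** r for x in xs) / len(xs)) ** (1 / r)

xs = [0.2, 0.5, 0.9]
print(f(xs, 0), sum(xs) / len(xs))               # arithmetic mean
print(f(xs, 1), math.prod(xs) ** (1 / len(xs)))  # geometric mean
print(f(xs, 0.999))                              # already close to the GM
```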
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Stuck on a Geometry Problem $ABCD$ is a square, $E$ is a midpoint of side $BC$, points $F$ and $G$ are on the diagonal $AC$ so that $|AF|=3\ \text{cm}$, $|GC|=4\ \text{cm}$ and $\angle{FEG}=45 ^{\circ}$. Determine the length of the segment $FG$. How can I approach this problem, preferably without trigonometry?
OK, after some calculation, I figured out how to solve this problem. Let's draw the vertical line $V$ that goes through $E$, call $\alpha$ and $\frac{\pi}{4}-\alpha$ the angles we get on the left and right of the 45° angle cut by $V$, and call $c$ the length of the side of the square. We can then do some trigonometry to get : * *$c \cdot \tan(\alpha) - (c-\frac{3}{\sqrt{2}}) = c \cdot \tan(\alpha)$ (Continue the line EF to cut AD) *$ \tan(\frac{\pi}{4}-\alpha) = c \cdot \frac{\sqrt{2}}{8}-1$ (G is $\frac{4}{\sqrt{2}}$ away vertically from the right side and $\frac{c}{2}-\frac{4}{\sqrt{2}}$ away horizontally from the middle) With a little trigonometry, we have 2 equations in $c$ and $\tan(\alpha)$ that we can solve, which allows us to solve the whole problem. This makes me think that there is no "simple" answer without trigonometry :( Edit : Well, it seems there was a simple answer after all. Really nice proof, I'm glad I was wrong!
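The answer can also be pinned down numerically. Below is a sketch of my own (the coordinate setup is my choice, not part of the answer above): place $A=(0,0)$, $B=(s,0)$, $C=(s,s)$, $D=(0,s)$, so that $E=(s,s/2)$, $F=A+\frac{3}{\sqrt2}(1,1)$ and $G=C-\frac{4}{\sqrt2}(1,1)$, and solve $\angle FEG = 45^\circ$ for the side $s$ by bisection:

```python
import math

def angle_feg(s):
    a = 3 / math.sqrt(2)       # F = (a, a) on the diagonal, |AF| = 3
    b = s - 4 / math.sqrt(2)   # G = (b, b) on the diagonal, |GC| = 4
    ef = (a - s, a - s / 2)    # E = (s, s/2), the midpoint of BC
    eg = (b - s, b - s / 2)
    dot = ef[0] * eg[0] + ef[1] * eg[1]
    return math.degrees(math.acos(dot / (math.hypot(*ef) * math.hypot(*eg))))

lo, hi = 6.0, 12.0  # a bracket on which the angle increases through 45 degrees
for _ in range(100):
    mid = (lo + hi) / 2
    if angle_feg(mid) < 45:
        lo = mid
    else:
        hi = mid
s = (lo + hi) / 2
fg = s * math.sqrt(2) - 7  # |FG| = |AC| - |AF| - |GC|
print(s, fg)               # s = 6*sqrt(2) = 8.485..., |FG| = 5
```

So the segment has length $|FG|=5$, consistent with $|FG|^2=|AF|^2+|GC|^2$.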
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126169", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 12, "answer_id": 9 }
Negating statements help I need to negate (move the negation inside) or write the contrapositive statements of the following statements: * *Write the contrapositive of: If everyone is here, then someone will leave. *Write the negation of: If Alice and Bob go, then Carol or Dave will come. *Write the negation of Every integer is even or odd, but no integer is even and odd. *rewrite as if then statement, then write its contrapositive: Every integer bigger than $1$ is divisible by some prime. *rewrite as if then statement, then negate statement: Every Integer that is divisible by $2$ and $3$ is divisible by $6$. This is what I’ve done: * *If someone didn’t leave, then not everyone was here. *Alice and Bob went; however, neither Carol or Dave came along. *There is an integer that is neither even or odd, or there is an integer that is even or odd. *If $x$ is an integer bigger than $1$, then it is divisible by some prime. Negation: $x$ is an integer bigger than $1$, however $x$ is not divisible by any prime. *If $x$ is an integer divisible by $2$ and $3$ then $x$ is also divisible by $6$. contrapositive: If $x$ is an integer not divisible by $6$, then $x$ is not divisible by $2$ and $3$. These are not entirely correct, so I was hoping you guys would tell me what's wrong with them and how to fix them.
* *Write the contrapositive of: If everyone is here, then someone will leave. * *If someone didn’t leave, then not everyone was here. "Did not" is not the complement for "will". "Was not" is not the complement for "is". Don't change the tenses. *Write the negation of: If Alice and Bob go, then Carol or Dave will come. *Alice and Bob went; however, neither Carol or Dave came along. Don't tamper with the verbs; just leave them as they are, or negate them if required. "Go" either remains "go" or negates to "did not go". So on. *Write the negation of Every integer is even or odd, but no integer is even and odd. *There is an integer that is neither even or odd, or there is an integer that is even or odd. The negation of "all are" is "some are not", but the negation of "not some are" is just "some are". Also the negation of "are even or odd" is "are not-even and not-odd", which you may say as "are odd and even". *rewrite as if then statement, then write its contrapositive: Every integer bigger than $1$ is divisible by some prime. *If $x$ is an integer bigger than $1$, then it is divisible by some prime. Negation: $x$ is an integer bigger than $1$, however $x$ is not divisible by any prime. Where did $x$ come from? Don't introduce terms not used in the original statement. Start with "If any integer is blah blah blah, then it is rhubarb rhubarb," then negate that. Hint: That is still a universally quantified statement. *rewrite as if then statement, then negate statement: Every integer that is divisible by $2$ and $3$ is divisible by $6$. *If $x$ is an integer divisible by $2$ and $3$ then $x$ is also divisible by $6$. contrapositive: If $x$ is an integer not divisible by $6$, then $x$ is not divisible by $2$ and $3$. Same. Try again
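The propositional patterns behind these corrections can be machine-checked with a truth table. A small Python sweep (my own sketch, not part of the original answer) confirms that the negation of an implication is "antecedent and not-consequent", and that De Morgan turns a negated disjunction into a conjunction of negations:

```python
from itertools import product

for A, B in product([False, True], repeat=2):
    implies = (not A) or B                        # A -> B
    # Negating an implication: "A and not B"
    assert (not implies) == (A and not B)
    # De Morgan: not (A or B) == (not A) and (not B)
    assert (not (A or B)) == ((not A) and (not B))
```

This only checks the propositional skeleton; the quantifier flips ("all are" negating to "some are not") still have to be argued as in the answer above.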
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
If Two Independent Geometric Random Variables are equal Let $W_1$ and $W_2$ be independent geometric random variables with parameters $p_1$ and $p_2$, respectively. Find $P(W_1 = W_2)$. To me this question seems unsolvable. I think we can assume $W_1=p_1$ and $W_2=p_2$. Also, I think we can assume that $W_1+W_2=1$. However it mentions that they are geometric random variables, which we have not learned in class. So, I am guessing that it is very closely related to the Poisson distribution. For context, the section this comes from was about the Poisson distribution. $$\binom{n}{k}p^k(1-p)^{(n-k)}\rightarrow \frac{e^{-\mu}\mu^{k}}{k!}$$ $$\text{as }n\rightarrow \infty \text{ and } p\rightarrow 0 \text{ with } np=\mu$$
A geometric random variable with parameter $p$ has probability mass function $$ p(k) = (1-p)^{k-1}p,\quad k=1,2,\ldots. $$ So if $W_1\sim\mathrm{Geo}(p_1)$ and $W_2\sim\mathrm{Geo}(p_2)$ are independent, we have \begin{align} \mathbb P(W_1=W_2) &= \mathbb P\left(\bigcup_{k=1}^\infty\{W_1=W_2, W_1=k\} \right)\\ &= \sum_{k=1}^\infty \mathbb P(W_1=W_2,W_1=k)\\ &= \sum_{k=1}^\infty \mathbb P(W_1=k,W_2=k)\\ &= \sum_{k=1}^\infty \mathbb P(W_1=k)\mathbb P(W_2=k)\\ &= \sum_{k=1}^\infty (1-p_1)^{k-1}p_1(1-p_2)^{k-1}p_2\\ &= p_1p_2\sum_{k=0}^\infty ((1-p_1)(1-p_2))^k\\ &= \frac{p_1p_2}{1-(1-p_1)(1-p_2)}. \end{align}
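As a sanity check on the closed form, a small Monte Carlo simulation should land close to $\frac{p_1p_2}{1-(1-p_1)(1-p_2)}$. This is my own sketch; the sampler and parameter values $p_1=0.3$, $p_2=0.5$ are arbitrary choices, not from the question:

```python
import random

def p_equal(p1, p2):
    # Closed form for P(W1 = W2) derived above.
    return p1 * p2 / (1 - (1 - p1) * (1 - p2))

def geometric(p, rng):
    # Trials up to and including the first success: support {1, 2, 3, ...}.
    k = 1
    while rng.random() >= p:
        k += 1
    return k

rng = random.Random(42)
p1, p2 = 0.3, 0.5
trials = 200_000
hits = sum(geometric(p1, rng) == geometric(p2, rng) for _ in range(trials))
estimate = hits / trials   # should be near p_equal(0.3, 0.5) = 0.15/0.65
```

With 200,000 trials the standard error is about $0.001$, so the estimate should agree with the formula to two decimal places.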
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determinant of $E^T H E$ where $E$ is semi-orthogonal and $H$ is positive definite Is there a way to simplify/obtain alternative forms of $\text{det} \left(E^T H E \right)$ where $E \in \mathbb{R}^{m \times n}$, $m > n$ is semi-orthogonal (meaning that its columns are orthonormal) and $H \in \mathbb{R}^{m \times m}$ is positive definite?
The Cauchy interlacing theorem states that if $\lambda_1,\dots,\lambda_m$ are the eigenvalues of $H$ in decreasing order, and $\mu_1,\dots,\mu_n$ are the eigenvalues of $E^THE$, then we have the inequalities $$ \lambda_j \geq \mu_j \geq \lambda_{j+(m-n)} $$ for $j = 1,\dots,n$. Conversely, for any tuple of real numbers $\mu_1,\dots,\mu_n$ satisfying the above inequalities for all $j$, there exists a semi-orthogonal $E$ such that the $\mu_j$ are the eigenvalues of $E^THE$. Consequently: denote $$ a= \prod_{j=m-n+1}^m \lambda_j, \qquad b = \prod_{j=1}^n \lambda_j $$ we can state that $$ a \leq \det(E^THE) = \prod_{j=1}^n \mu_j \leq b $$ and that any determinant in the interval $[a,b]$ can be attained with a suitable semi-orthogonal matrix $E$. In terms of the exterior product (AKA wedge product, AKA alternating tensor product), we can state that $a = \lambda_{\min}(\wedge^n H)$, and $b = \lambda_{\max}(\wedge^n H)$. Note that $\wedge^n H$ will be a size-$\binom mn$ square matrix. It might also be useful to observe that we have $$ \lambda_m^n \leq a \leq \det(H)^{n/m} \leq b \leq \lambda_1^n$$
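These bounds are easy to probe numerically. The NumPy sketch below is my own construction (random positive definite $H$, semi-orthogonal $E$ via QR; the sizes are arbitrary) and checks $a \le \det(E^THE) \le b$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 6, 3

M = rng.standard_normal((m, m))
H = M @ M.T + m * np.eye(m)                       # positive definite, m x m
E, _ = np.linalg.qr(rng.standard_normal((m, n)))  # orthonormal columns: E.T @ E = I_n

lam = np.sort(np.linalg.eigvalsh(H))[::-1]        # lambda_1 >= ... >= lambda_m
a = np.prod(lam[m - n:])                          # product of the n smallest eigenvalues
b = np.prod(lam[:n])                              # product of the n largest eigenvalues

d = np.linalg.det(E.T @ H @ E)                    # must land in [a, b]
```

The chain $a \le \det(H)^{n/m} \le b$ from the last display can be verified the same way.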
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that $2018^{2019}> 2019^{2018}$ without induction, without Newton's binomial formula and without Calculus. Prove that $2018^{2019}> 2019^{2018}$ without induction, without Newton's binomial formula and without Calculus. This inequality is equivalent to $$ 2018^{1/2018}>2019^{1/2019} $$ One of my high school students asked why the inequality is true. The whole class became interested in the problem. A demonstration that such an inequality is true, using calculus, can be found here. But my students are not familiar with calculus. I can also show by induction and Newton's binomial formula that $ n^{(n + 1)}> (n + 1)^n $ for $ n> 3$, but my students are not familiar with mathematical induction. Another limitation of my students is that they have not yet learned Newton's binomial formula. How can one prove the inequality $2018^{2019}> 2019^{2018}$ without induction, without Newton's binomial formula and without calculus? That is, how can one prove this inequality to high school students without using Newton's binomial formula?
It is the same as $2018>(1+\frac{1}{2018})^{2018}$, so we only need to show that $$a_n=\left(1+\frac{1}{n}\right)^n$$ is bounded. There are many proofs of this. Here is one possibility. Let $b_n=(1+\frac{1}{n})^{n+1}$. We observe that $b_n$ is decreasing since $$\begin{aligned}b_{n-1}=\left(\frac n{n-1}\right)^n&=\left[\text{Geometric mean of }\underbrace{\frac n{n-1},\ldots,\frac{n}{n-1}}_n,1\right]^{n+1}\\ &\ge \left[\text{Harmonic mean of }\underbrace{\frac n{n-1},\ldots,\frac{n}{n-1}}_n,1\right]^{n+1}\\ &=\left(\frac{n+1}n\right)^{n+1}=b_n.\end{aligned}$$ Hence, $a_n\le b_n\le b_1=4$. However, you may need to make them believe that geometric mean is larger than harmonic mean.
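For students with access to a computer, both the claim itself and the bound used above can be checked with exact arithmetic; Python's big integers and `Fraction` introduce no rounding, so this is a genuine verification (the range of $n$ sampled is my own choice):

```python
from fractions import Fraction

# The inequality itself, with exact big integers:
assert 2018**2019 > 2019**2018

# a_n = (1 + 1/n)^n stays below b_1 = 4, and b_n = (1 + 1/n)^(n+1) decreases:
prev_b = None
for n in range(1, 40):
    a_n = Fraction(n + 1, n) ** n
    b_n = Fraction(n + 1, n) ** (n + 1)
    assert a_n < b_n <= (prev_b if prev_b is not None else b_n)
    assert a_n < 4
    prev_b = b_n
```

Of course this is only evidence, not a proof; the mean-inequality argument above is what actually settles it for all $n$.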
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 13, "answer_id": 3 }
Shift in Time Domain by altering Fourier-Coefficients I want to apply a circular shift to a time-series by changing the phase of the Fourier coefficients. But I run into trouble. In the current state, the true sequence starts in the middle of the time-series. So I thought that I could do a phase shift in the frequency domain by running this matlab command: [x,fs]= audioread("sound.wav"); x= x(:,1)'; x = real(ifft(fft(x).*exp(-i*pi))) Since the signal can be seen as a linear combination of periodic functions, I should be able to find a phase term to make my desired shift in the time domain. But it does not behave as I expected. Is it possible to do it like that? What am I not seeing here? The following shows the signal I want to modify. I want it to be shifted like this:
A delay of $\tau$ seconds corresponds to a multiplication with $e^{-\tau s} = e^{- i \tau \omega}$ in the Laplace/Fourier domain. You need to multiply the exponent by a radial frequency vector instead of using the constant exp(-i*pi): with the normalized frequency vector w = 2*pi*(0:N-1)/N, the factor exp(-1i*tau*w) produces a circular shift by tau samples. The Matlab code becomes [x,fs]= audioread("sound.wav"); x= x(:,1)'; N = length(x); w = 2*pi*(0:N-1)/N; tau = 123; % shift in samples; choose an appropriate value x = real(ifft(fft(x).*exp(-1i*tau*w)));
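The same idea is easy to verify in NumPy (a Python sketch of the approach, not the original Matlab; the signal and shift are arbitrary test values): multiplying the spectrum by $e^{-i\omega\tau}$ with $\omega_k = 2\pi k/N$ reproduces exactly a circular shift by $\tau$ samples, which is what `np.roll` computes directly.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

N = len(x)
tau = 5                                    # shift in samples
w = 2 * np.pi * np.arange(N) / N           # normalized angular frequency vector

shifted = np.real(np.fft.ifft(np.fft.fft(x) * np.exp(-1j * tau * w)))

# A circular delay by tau samples is exactly np.roll:
assert np.allclose(shifted, np.roll(x, tau))
```

Non-integer `tau` also works and gives a band-limited interpolation of the shifted signal, though then the result no longer matches any `np.roll`.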
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Q: on Lemma I.5.11 proof from Tourlakis 2003 I'm unsure of the exact form of the I.H. in this case (the $\exists_x B \in Wff(M)$ rule). I was thinking it could be $(B[x \leftarrow t])^\mathcal{J} = (B[x \leftarrow \bar i])^\mathcal{J}$, but it is possible that $w$ occurs free in $B$, so interpretation may not be defined ($B$ is not closed). But in the proof he says $((B[w \leftarrow \bar j])[x \leftarrow \bar i])^\mathcal{J}$ comes from I.H., so does this mean that I.H. is: For any $m \in M$, $(B[w \leftarrow \bar m][x \leftarrow t])^\mathcal{J} = (B[w \leftarrow \bar m][x \leftarrow \bar i])^\mathcal{J}$ ? Is this correct? Are we allowed to say something like this?
First line of the proof : the formula $\mathscr A$ is written in full. Second step, the substitution $[x \leftarrow t]$ acts only on the formula $\mathscr B$ in the scope of the quantifier. Third line : the semantic clause for the existential quantifier is applied (Def.1.5.6, page 55). Here we have an object $j \in M$ and we use the corresponding term : $\overline j$. Fourth line : the two substitutions commute (already proved). Fifth line : the simultaneous substitution is transformed into two successive substitutions. According to the statement of the Lemma, $\mathscr A$ has only $x$ free. Thus, $\mathscr A [x \leftarrow t] := ((\exists w) \mathscr B)[x \leftarrow t]$ has no free variables. When we remove the leading existential quantifier, formula $\mathscr B [x \leftarrow t]$ has only $w$ free, and thus $(\mathscr B [x \leftarrow t][w \leftarrow \overline j])$ has no free variables. This in turn means that $\mathscr B [w \leftarrow \overline j]$ has only one free variable : $x$. But formula $\mathscr B [w \leftarrow \overline j]$ has a "lower complexity" than $\mathscr A$, and thus we apply the Induction Hypothesis : sixth line. This means that, according to the I.H., the formula $((\mathscr B [w \leftarrow \overline j])[x \leftarrow t])$ - that is, the formula $\mathscr B [w \leftarrow \overline j]$ with $x$ free where we have substituted the term $t$ in place of $x$ - has the same truth value as formula $((\mathscr B [w \leftarrow \overline j])[x \leftarrow \overline i])$, where $\overline i$ is the term associated to the element $i \in M$ such that : $t^{\mathscr I}=i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3126911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geodesically complete implies exponential map defined on all of $T_pM$ This should be completely straightforward, as every book I have read on the Hopf-Rinow theorem states this is "obvious". But I can't for the life of me justify it. Geodesically complete means every geodesic $\gamma$ extends to all of time $\mathbb{R}$. How is that related to the velocity $\gamma'$ (which hence relates to exp)? If the curve extends for all time, are they implying the speed also extends for all time? Does that even make sense, since speed is constant for a geodesic, so why would that be attached to the time parameter?
Let $(M,g)$ be a Riemannian manifold. Recall that for all $p\in M$ and $v \in T_pM$ there exists some $\varepsilon > 0$ and a $C^\infty$ curve $\gamma\colon (-\varepsilon, \varepsilon)\to M$ such that $\gamma$ is the unique geodesic passing through $p$ at time zero with velocity $\dot{\gamma}(0) = v$. The fact that the exponential map is defined for some $v \in T_pM$ has nothing to do with the existence of a geodesic through $p$ with velocity $v$ (this always happens!), but with the fact that $\gamma(1)$ is defined ($\varepsilon > 1$). One has to be careful in thinking that a reparameterization would change the speed of a geodesic. A much more careful treatment is given in the first five pages of Chapter 3 in do Carmo's book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3127076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
When is maximising a definite integral the same as maximising the integrand? When is maximising $$\int_a^b f(x) \text{d}x$$ the same as maximising $f(x)$? Context: I was trying to find the most probable location of an electron in the ground state hydrogen atom, where the wavefunction is $\text{exp}(-r/a_0)$, and hence the probability density is $\text{exp}(-2r/a_0)$. The probability is therefore proportional to $$\iiint e^{-\frac{2r}{a_0}}\text{d}^3{\bf r}.$$ Since in spherical coordinates, the volume of a spherical shell is $4\pi r^2 \text{d}r$, I am told that to maximise the probability, I should maximise $r^2 \text{exp}(-2r/a_0)$. However, I seem to be maximising the integrand, not the integral, here. If $\text{d}P \propto r^2 \text{exp}(-2r/a_0) \text{d}r$, is $\text{d}P/\text{d}r$ not just $r^2 \text{exp}(-2r/a_0)$?
Think of the definite integral as giving you the area under the curve of $f(x)$ above the interval $[a,b]$. Suppose at all values on the interval, you had another function $g$ which is always more than $f$ (i.e. $g(x)\geq f(x)$ for all $x\in [a,b]$). The area under $g$ is obviously greater. Therefore, when the integrand is bigger, the integral is bigger, and vice versa. (Note that you run into trouble here if on the interval $f$ is sometimes greater than $g$ and sometimes smaller than $g$, I'm assuming we don't need to care about this issue here.)
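For the hydrogen example in the question, this is why maximizing the integrand $r^2e^{-2r/a_0}$ is the right move: it maximizes the probability captured by a thin shell $\mathrm{d}P$, and setting $\mathrm{d}P/\mathrm{d}r = 0$ gives $r = a_0$. A quick numerical scan (my own illustration, with $a_0 = 1$ in arbitrary units) confirms where the peak sits:

```python
import numpy as np

a0 = 1.0                                 # Bohr radius, in arbitrary units
r = np.linspace(0, 10 * a0, 100_001)
P = r**2 * np.exp(-2 * r / a0)           # radial probability density, up to a constant

r_max = r[np.argmax(P)]                  # peak of the shell probability
```

Analytically, $\frac{d}{dr}\left(r^2e^{-2r/a_0}\right) = e^{-2r/a_0}\left(2r - \tfrac{2r^2}{a_0}\right)$ vanishes at $r = 0$ (a minimum) and $r = a_0$ (the maximum), matching the scan.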
{ "language": "en", "url": "https://math.stackexchange.com/questions/3127189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proving that $\chi(G)=\omega(G)$ if the complement of G is bipartite. Please note I am aware that this question was already asked here: Proving that $ \chi(G) = \omega(G) $ if $ \bar{G} $ is bipartite. However, I have a reason to believe the answer given in that topic is flawed. What I have tried so far: I divided the complement of G into parts V and U, with all isolated vertices belonging to V and |V| bigger than |U|. $|V|\leq\omega(G)$ so I gave the vertices of V distinct colors (they are a clique in G); now all that is left to show is that the vertices of U can be colored in G without adding extra colors. The answer in the linked topic suggests that since every vertex of U has a neighbor in V in the complement of G, you can give each vertex in U the color of one of its neighbors. That is not really the case, since two vertices of U can potentially only be adjacent to the same vertex in V, and since U is a clique in G, you cannot give them the same color. This method can only work if one can prove that the complement of G has a matching that spans all vertices of U. This is not the case for certain graphs, and so this method fails. edit: here is a complement graph for which |V| is smaller than $\omega(G)$. (top two blue vertices and lower four red vertices form a 6 clique in G.)
Using the notation $G^c$ for the complement and $\alpha(G)$ for the stability number of $G$: The proposed proof went just a bit fast on the definition of $U$ and $V$. With its definition, you could have that $V$ is not the biggest independent set in $G^c$. The key point is that you need $\vert V \vert \geq \vert U \vert$ before taking into account the isolated vertices of $G^c$, so that $\vert V\vert = \omega(G)=\alpha(G^c)$. Then if two elements of $U$, $u_1$ and $u_2$, are connected to the same vertex $v$ of $V$, then at least one of them must be connected to another vertex $v' \in V$. The detailed written proof is a bit tedious, but try drawing an appropriate graph; it's much easier to see. If need be I'll put the proof here tonight.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3127355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does a generalization of the Borel-Cantelli lemma mean intuitively? I know the first two BC lemmas. However, I was looking at this generalization of it and I don't understand the intuition behind it. In the 2nd BC lemma, we require independence of events. The generalization is used to replace the condition of pairwise independence. Pairwise independence can be replaced with the weaker condition of $P(A_k A_j) \leq P(A_k)P(A_j)$ for every $k$ and $j$ such that $k\neq j$. What does this mean in simple terms? I want to know how I can explain it without having to use math. What results ensue from not requiring independence and replacing it with the weaker condition?
This is known as the "pairwise negative correlation" property. It is found, for example, in the paper Balanced Matroids by Feder and Mihail. Intuitively, the pairwise condition says that the occurrence of one event makes the occurrence of the other no more likely, and typically less likely.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3127482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simplify $\frac{4^{-2}x^3y^{-3}}{2x^0}$ to $\frac{x^3}{32y^3}$ I am to simplify $\frac{4^{-2}x^3y^{-3}}{2x^0}$ and I know that the solution is $\frac{x^3}{32y^3}$ I understand how to apply rules of exponents to individual components of this expression but not as a whole. For example, I know that $4^{-2}$ = $\frac{1}{4^2}$ = $1/16$ But how can I integrate this 1/16 to the expression? Do I remove the original $4^{-2}$ and replace with 1 to the numerator and a 16 to the denominator like this? $\frac{1x^3y^{-3}}{16*2x^0}$ How can I simplify the above expression to $\frac{x^3}{32y^3}$? Would be grateful for a granular set of in between steps, even if they are most basic to others.
Do it step by step : First, as you said $4^{-2} = \frac{1}{16}$, replace it in the given expression : $$ \dfrac{4^{-2}x^3y^{-3}}{2x^0} = \dfrac{x^3y^{-3}}{16\cdot 2 x^0}$$ Then, simplify the denominator, $16\cdot 2 = 32$ and $x^0 = 1$ so that : $$\dfrac{4^{-2}x^3y^{-3}}{2x^0} = \dfrac{x^3\cdot y^{-3}}{32} $$ Then, because $y^{-3} = \dfrac{1}{y^3}$, you can find the final expression : $$ \dfrac{4^{-2}x^3y^{-3}}{2x^0} = \dfrac{x^3}{32y^3}$$
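If you want to convince yourself the simplification is right, you can evaluate both sides at a few sample points; the values must agree for every choice of $x$ and $y\neq 0$. This is my own quick check, not part of the algebra above:

```python
def original(x, y):
    # The expression as given: 4^(-2) x^3 y^(-3) / (2 x^0)
    return (4**-2 * x**3 * y**-3) / (2 * x**0)

def simplified(x, y):
    # The simplified form: x^3 / (32 y^3)
    return x**3 / (32 * y**3)

for x in (0.5, 1.0, 2.0, 3.7):
    for y in (0.25, 1.0, 5.0):
        assert abs(original(x, y) - simplified(x, y)) < 1e-9
```

Spot checks like this don't replace the derivation, but they catch sign and exponent slips quickly.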
{ "language": "en", "url": "https://math.stackexchange.com/questions/3127720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Determinant of A transpose times B equals determinant of A times B transpose I am reading Franz Hohn's Elementary Matrix Algebra (1973) and having trouble solving the following exercise: Prove that, if $A$ and $B$ are both of order $n$, (a) $\det A^{T}B = \det A B^T = \det A^T B^T = \det AB$ (b) $\det A^*B^* = \overline{\det AB}$. My trouble is that the author has not yet proven the multiplicative property $\det AB = \det A \det B.$ If I could use that property (together with $\det A^T = \det A$ and $(AB)^T = B^T A^T$) then the exercise would be trivial. If I could get the first equality in (a) then I could get the rest of the problem. While attempting to solve this exercise I ended up just proving the multiplicative property, but I don't think that's what the author intends. Am I missing something simple? Any hint is greatly appreciated.
We can use that $\det(C^T)=\det(C)$ and $\det(AB)=\det(BA)$ as follows: $$ \det(A^TB)=\det(A^TB)^T=\det(B^TA)=\det(AB^T) $$ At some point we need to prove the "classical" properties, like $\det(AB)=\det(BA)$ or even better, $\det(AB)=\det(A)\det(B)$. And then we do not care too much whether the text has it at page $xy$ or not.
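The chain of equalities is easy to spot-check numerically; the NumPy snippet below (my own sketch, with an arbitrary size and seed) compares all four determinants, including $\det(BA)=\det(AB)$, which the argument above relies on:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, n))

d = np.linalg.det(A @ B)
# det(A^T B) = det(A B^T) = det(A^T B^T) = det(B A) = det(A B)
for C in (A.T @ B, A @ B.T, A.T @ B.T, B @ A):
    assert np.isclose(np.linalg.det(C), d)
```

A numerical check is no substitute for the proof, but it is a quick way to make sure the identities were stated in the right order.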
{ "language": "en", "url": "https://math.stackexchange.com/questions/3127847", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Can a holomorphic function be globally represented by a power series on an open connected set? I have a theorem in my book (Stein) which says: Suppose $f$ is holomorphic in an open set $\Omega$. If $D$ is a disc centered at $z_0$ and whose closure is contained in $\Omega$, then $f$ has a power series expansion at $z_0$ $$f(z) = \sum_{n=0}^{\infty} a_n(z-z_0)^n$$ for all $z \in D$. Let's say our open set $\Omega$ contains $0$; let $P(z)$ be the power series of $f$ centered at $0$. Obviously, if $\Omega$ is not connected, $P(z)$ does not have to represent $f$ at all points of $\Omega$. However, what about the case where $\Omega$ is connected? Are there any counterexamples?
The set in which a power series converges is always an open disk together with some subset of the boundary of the disk. So, if a function $f$ can be represented by a single power series centered at $0$ on an open set $\Omega$, then $\Omega$ must be contained in an open disk $D$ (possibly of infinite radius) around $0$ such that $f$ extends holomorphically to $D$ (and conversely, since if $f$ is holomorphic on such a disk $D$ then its Taylor series at $0$ converges to it on the whole disk). This gives lots of counterexamples. For instance, if $f(z)=\frac{1}{z-1}$, then $f$ is holomorphic on $\Omega=\mathbb{C}\setminus\{1\}$ but cannot be represented by a power series centered at $0$ on this domain since otherwise $f$ would need to extend holomorphically to all of $\mathbb{C}$ since $\mathbb{C}$ is the only disk centered at $0$ that contains $\Omega$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3128090", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Discrepancy between calculus methodologies - Is it significant? Two of the ways of doing calculus with algebra are non-standard analysis NSA and smooth infinitesimal analysis SIA. NSA has a technique called 'taking the standard part' which neglects incremental (or infinitesimal) terms at the end of derivations, whereas SIA has the nilsquare rule which neglects higher power incremental terms during derivations. For example if we differentiate $y = x^2$ from first principles we have: $$(x + h)^2 = x^2 + hx^{2'}$$ $$x^2 + 2hx + h^2 = x^2 + hx^{2'}$$ $$x^{2'} = 2x + h$$ $$x^{2'} = 2x$$ in NSA, whilst in SIA we have: $$(x + h)^2 = x^2 + hx^{2'}$$ $$x^2 + 2hx + h^2 = x^2 + hx^{2'}$$ $$2hx = hx^{2'}$$ $$x^{2'} = 2x$$ It's tempting to think the two methods are equivalent. However, consider the formula for secant length: $$s^2 = h^2 + (hy')^2$$ $$s = h \sqrt{1 + y'^2}$$ Taking the standard part of $y'$ while evaluating $s$ would result in the first RHS term being set to zero, rendering the equation useless, whereas the nilsquare rule doesn't have this effect. What does this tell us, if anything, about the validity and/or usefulness of NSA and SIA relative to eachother?
Taking the standard part is not really a technique for neglecting infinitesimal terms at the end of a derivation. Instead, non-standard analysis has a bona fide standard part function $st: \mathbb{R}^* \rightarrow \mathbb{R}$, a map that assigns an (extended) real number to each hyper-real number. With this in mind, your first derivation, where you claim to differentiate the squaring function using NSA, is invalid. In particular, the claimed equality $(x + h)^2 = x^2 + h(x^2)'$ does not hold at all; even if it did, you are not allowed just neglect infinitesimals like you do when you pass from $(x^2)' = 2x + h$ to $(x^2)' = 2x$. None of this works. What you should say instead is that the squaring function has derivative $2x$ because $$ st\left( \frac{(x+h)^2 - x^2}{h} \right) = 2x $$ for each real number $x \in \mathbb{R}$ and infinitesimal hyperreal $h \in \mathbb{R}^*$. The same thing goes for your third derivation: you cannot just take the standard part in the middle of an equality and expect the resulting equation to hold. As an aside: in its present form, that third derivation does not hold up in Synthetic Differential Geometry (Smooth Infinitesimal Analysis) either - it amounts to inferring $s = h\sqrt{1 + (y')^2}$ from $s^2 = 0$. This should answer your question: these arguments don't tell us anything about the validity and/or usefulness of NSA and SDG relative to each other, simply because they are not correct. That said, the methods of NSA and SDG are far from equivalent: NSA strictly includes usual (standard) analysis, while SDG is specifically a way of doing differential calculus in a smooth topos - which excludes many mathematical artifacts that analysts are concerned with, but greatly simplifies a lot of differential-geometric concepts or computations. Then there are models of SDG in which both nilsquare SDG-style and proper NSA-style infinitesimals coexist. 
I'll stop here: a detailed look at these would be best suited as an answer to another (softer) question.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3128254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Conjugacy of CSAs: Proof in Humphries' Intro to Lie Algebras and Rep Theory I am reading Chapter 16 of the title mentioned. On the middle of page 82 the statement is made "it is clear that if $\phi: L \rightarrow L'$ is a surjective homomorphism (of Lie algebras) then $\phi(L_a(ad(y)))={L'}_a(ad(\phi(y)))$". Here $L_a(ad(y))$ means $ker(ad(y)-aI)^m$ for $m$ sufficiently large (so that the kernel is as large as possible). The forward inclusion seems clear to me but the reverse inclusion I do not see. Is there something obvious I am missing?
I think what we have here is this linear algebra set-up. There are finite-dimensional vector spaces $V$ and $V'$ ($L$ and $L'$ here) and a surjective linear map $\phi:V\to V'$. I'll let $K$ denote its kernel. One also has an endomorphism $A$ of $V$ (here, $\operatorname{ad} y$) with $A(K)\subseteq K$ so that $A$ induces an endomorphism $A'$ of $V'$ (here, $\operatorname{ad} \phi(y)$). I'll assume we are working over $\Bbb C$ or a similar field. Then $V$ splits as a direct sum of generalised eigenspaces $V_a$ where $V_a$ consists of all vectors with $(A-aI)^mv=0$ for some $m$. Likewise $V'$ splits into a direct sum of $V_a'$ where these are now the generalised eigenspaces for $A'$. As you note, $\phi(V_a)\subseteq V_a'$. So $\phi\left(\bigoplus_{b\ne a} V_b\right)\subseteq\bigoplus_{b\ne a}V_{b}'$. Call these direct sums $W_a$ and $W_a'$; they are complements to $V_a$ and $V'_a$ in $V$ and $V'$. If $u\in V_a'$, then $u=\phi(v+w)=\phi(v)+\phi(w)$ where $v\in V_a$ and $w\in W_a$. But then $\phi(w)=0$ since $\phi(w)=u-\phi(v)\in V_a'$ while also $\phi(w)\in W_a'$, which is a complement of $V_a'$ in $V'$. Then $u=\phi(v)\in \phi(V_a)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3128482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving diffusion equation with variable dissipation I've been asked to solve the diffusion equation with variable dissipation: $$\frac{\partial u}{\partial t} - D \frac{\partial^2 u}{\partial x^2} + e^{-pt}u = 0, \qquad x\in(-\infty,\infty),\ t\in[0,\infty),$$ subject to $u(x,0) = \varphi(x)$, where $D>0$ and $p>0$ are given constants, and $\varphi(x)$ is a given function. I think I need to use the substitution $u(x,t) = v(x,t)h(t)$ for some function $h(t)$. I would be grateful for a solution rather than hints. Thanks for any help.
You're on the right track. Let's try to figure out what substitution will work. Let $u(x,t) = h(t)v(x,t)$ then $$ h_tv + hv_t - Dhv_{xx} + e^{-pt}hv = 0 $$ $$ \implies v_t - Dv_{xx} + \left(\frac{h_t}{h}+e^{-pt}\right)v = 0 $$ $v$ will satisfy the heat equation if the third term on the LHS is zero, i.e. $$ \frac{h_t}{h} + e^{-pt} = 0 $$ Can you take it from here? Edit: Once you've found $h$ from the above ODE (a particular solution will do), then $v$ satisfies $$ v_t - Dv_{xx} = 0 $$ with initial condition $$ v(x,0) = \frac{u(x,0)}{h(0)} = \frac{\varphi(x)}{h(0)} $$
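For reference, the ODE $h'/h = -e^{-pt}$ integrates to $\log h = e^{-pt}/p + C$, so one particular solution is $h(t) = \exp(e^{-pt}/p)$. The sketch below (my own check, with arbitrary values of $p$ and $t$) verifies it against the ODE by a central finite difference:

```python
import math

def h(t, p):
    # Particular solution of h'(t)/h(t) + e^{-p t} = 0:
    # integrating gives log h = e^{-p t} / p (dropping the constant).
    return math.exp(math.exp(-p * t) / p)

p, eps = 0.7, 1e-6
for t in (0.0, 0.5, 1.3, 4.0):
    dh = (h(t + eps, p) - h(t - eps, p)) / (2 * eps)   # approximate h'(t)
    assert abs(dh / h(t, p) + math.exp(-p * t)) < 1e-6
```

With this $h$, the rescaled function $v = u/h$ satisfies the plain heat equation with initial data $\varphi(x)/h(0) = \varphi(x)e^{-1/p}$, which can then be solved with the standard heat kernel.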
{ "language": "en", "url": "https://math.stackexchange.com/questions/3128597", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there a proof for the Determinant of the Log-Euclidean-Tensor-Interpolation yielding the geometric mean of the Determinants of its sampling points? Given a set of $N \in \mathbb{N}$ symmetric positive definite tensors $\boldsymbol{A}_i$, $i = 1,\dots,N$, and the corresponding weights $w_i$ with $w_i>0$ and $\sum_i w_i = 1$. The Log-Euclidean mean is defined as $ \bar{\boldsymbol{A}} = \exp(\sum_i w_i\log(\boldsymbol{A}_i))$. After some testing on random sets, I assume the determinant of the interpolated value is equal to the geometric mean of the determinants of its sampling tensors: $\det{\bar{\boldsymbol{A}}} = \prod_i \left(\det{\boldsymbol{A}_i} \right)^{w_i}$ Does anyone know whether a nice proof exists for this identity? TIA
Using some well-known properties connecting exponents, determinants, traces, and logarithms, as well as multiplicativity of determinant and linearity of trace, this boils down to some boring algebra: $$\det \bar A = \det \exp\sum\limits_i w_i \log(A_i)=\exp\operatorname{tr}\sum\limits_i w_i \log(A_i) = \\ = \exp \sum\limits_i \operatorname{tr} w_i \log(A_i) = \prod \left(\exp\operatorname{tr}\log A_i\right)^{w_i}=\prod \left(\det\exp \log A_i\right)^{w_i} = \\ = \prod \left(\det A_i\right)^{w_i}$$
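The identity is also easy to reproduce numerically. The sketch below is my own: it implements the matrix log/exp of symmetric positive definite matrices via eigendecomposition (the sizes, weights, and seed are arbitrary) and compares the two sides:

```python
import numpy as np

rng = np.random.default_rng(3)

def spd_log(A):
    # Matrix logarithm of a symmetric positive definite matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(A)
    return vecs @ np.diag(np.log(vals)) @ vecs.T

def sym_exp(S):
    # Matrix exponential of a symmetric matrix via eigendecomposition.
    vals, vecs = np.linalg.eigh(S)
    return vecs @ np.diag(np.exp(vals)) @ vecs.T

def random_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)      # eigenvalues >= n > 0

As = [random_spd(3) for _ in range(4)]
w = rng.random(4)
w /= w.sum()                            # weights sum to 1

mean = sym_exp(sum(wi * spd_log(Ai) for wi, Ai in zip(w, As)))
lhs = np.linalg.det(mean)
rhs = float(np.prod([np.linalg.det(Ai) ** wi for Ai, wi in zip(As, w)]))
assert np.isclose(lhs, rhs)
```

The agreement holds to machine precision, as the trace/determinant argument above predicts.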
{ "language": "en", "url": "https://math.stackexchange.com/questions/3128712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Graphical examination for existence of 2nd order ODE given a solution curve Given any solution curve $y(t)$, what are some graphical criterion to determine whether there exists some 2nd order linear, homogeneous, with continuous but possibly non-constant coefficients, to which the curve is a solution? Each of the plots below show graphs of three functions $y(t)$. For which options do there exist a second order linear homogeneous ode with continuous but possibly non-constant coefficients to which all three functions are solutions of?
Not a complete answer, but too long for a comment. Upon closer inspection, I'll concede that (b) is in fact not a solution of the original question. Note that the solution curves in (b) all differ by a constant. If there exists a second-order ODE that admits all of these curves as a solution, then it must also admit a constant as a solution. In general, an IVP that admits a constant solution looks something like this $$ y''(t) + p(t)y'(t) = 0, \quad y(t_0) = y_0, \ y'(t_0) = 0 \tag{1} $$ where $p(t)\ne 0$. If the ODE in $(1)$ has a non-constant solution, said solution must not have a zero derivative at any point (since if a solution has a zero derivative, it must be constant due to the Uniqueness Theorem). We see that the non-constant curves in (b) do not satisfy this, therefore (b) is not correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3128825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Method to find solution for $a^x \equiv 1 \mod n$ The congruence $5^x \equiv 1 \mod 36$ has a solution because $5$ and $36$ are relatively prime, i.e. $5$ and $2^2 3^2$ have no common factors. Is there a method to find $x$? All I can see is that $5^3 \equiv 5 \mod 6$.
You could always use the Chinese Remainder Theorem for congruences like this. For $$ 5^x\equiv 1 \ \bmod 36 $$ you should solve $$ \begin{cases} 5^x\equiv 1 \ \bmod 4\\ 5^x\equiv 1 \ \bmod 9 \end{cases} $$ This way you have to deal with smaller moduli.
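Concretely, the smallest such $x$ is the multiplicative order of $5$ modulo $36$, and the CRT split says it is the lcm of the orders modulo $4$ and $9$. A brute-force order computation in Python (my own sketch; any similar loop works for moduli this small) confirms this:

```python
from math import gcd, lcm

def order(a, n):
    # Multiplicative order of a modulo n; requires gcd(a, n) == 1.
    assert gcd(a, n) == 1
    x, v = 1, a % n
    while v != 1:
        x += 1
        v = v * a % n
    return x

# 5 ≡ 1 (mod 4), so the order mod 4 is 1; the order mod 9 turns out to be 6,
# so the order mod 36 is lcm(1, 6) = 6.
assert order(5, 4) == 1
assert order(5, 9) == 6
assert order(5, 36) == lcm(order(5, 4), order(5, 9)) == 6
```

Every solution of $5^x \equiv 1 \pmod{36}$ is then a multiple of $6$.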
{ "language": "en", "url": "https://math.stackexchange.com/questions/3128930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
How to restrict coefficients of polynomial, so the function is strictly monotonic? I need to restrict the degree of freedom of the coefficients of a polynomial, so the function is always strictly monotonic in the domain $x\in\left[0, 1\right]$ and $y\in\left[0, 1\right]$. The polynomial also has to go through the points $(0, 0)$ and $(1, 1)$. The general formula for a polynomial of degree 3 is $$a_3\cdot x^3 + a_2\cdot x^2 + a_1\cdot x + a_0\quad .$$ The constraints $f(0) = 0$ and $f(1) = 1$ give $$a_0 = 0$$ and $$a_3 = 1 - a_2 - a_1$$ yielding the fitted polynomial $$(1 - a_2 - a_1)\cdot x^3 + a_2\cdot x^2 + a_1\cdot x\quad .$$ The number of parameters is already reduced to 2, but I also need the function to be strictly monotonic, so $f'(x) \geq 0$. The additional constraints $f'(0) \geq 0$ and $f'(1) \geq 0$ give $$a_1 \geq 0$$ and $$a_2 \leq 3 - 2\cdot a_1$$ but this does not imply $f'(x) \geq 0$ in general. Is there a simple way to achieve what I want, even for the general case, where the degree of the polynomial is not restricted to 3 and can be any integer number? As a result, I want to choose the coefficients of the polynomial according to some constraints and the function is always strictly monotonic, includes the points $(0, 0)$ and $(1, 1)$ and is inside the unit square (see curves). EDIT: I performed a Monte Carlo simulation to determine the constraints graphically (see monte carlo). The black lines correspond to $$a_1 \geq 0$$ and $$a_2 \leq 3 - 2\cdot a_1\quad .$$ The rest looks elliptic to me. All dots inside the yellow area give monotonic increasing functions (see array of curves). EDIT2: The accepted answer is correct. See the visual proof.
The $n=3$ case is tractable: Your polynomial is increasing if the derivative is always positive, so it's enough to show that it is never zero. (Never zero means it has the same sign everywhere, $f(0) < f(1)$ ensures that that sign is positive.) Since the derivative is a quadratic, having no zeros is equivalent to its discriminant being negative:$$ (2a_2)^2 -4\cdot 3 \cdot (1-a_2-a_1)\cdot a_1 \leq 0 $$ I'd bet that that's the ellipse you're seeing in your simulation. Unfortunately I can't see any way for this to generalize to $n > 3$. Edit: I have changed the condition to include $0$; a simple counterexample to the strict inequality is the polynomial $4(x-1/2)^3 + 1/2$.
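A randomized sanity check of the sufficiency direction (plain Python; the sampling box $[-3,3]^2$ is an arbitrary choice): whenever the discriminant condition $(2a_2)^2 - 12(1-a_2-a_1)a_1 \le 0$ holds, the derivative of $f(x) = (1-a_2-a_1)x^3 + a_2x^2 + a_1x$ is nonnegative on a fine grid over $[0,1]$.

```python
import random

random.seed(0)

def fprime(a1, a2, x):
    # derivative of f(x) = (1 - a2 - a1) x^3 + a2 x^2 + a1 x
    a3 = 1 - a2 - a1
    return 3*a3*x*x + 2*a2*x + a1

checked = 0
for _ in range(20000):
    a1, a2 = random.uniform(-3, 3), random.uniform(-3, 3)
    disc = (2*a2)**2 - 12*(1 - a2 - a1)*a1
    if disc <= 0:
        checked += 1
        m = min(fprime(a1, a2, i/400) for i in range(401))
        assert m >= -1e-9, (a1, a2, m)
print(checked, "sampled pairs satisfied the discriminant condition")
```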
{ "language": "en", "url": "https://math.stackexchange.com/questions/3129051", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show homeomorphism between two quotient topologies Consider two subspaces of $\Bbb R^2$ in the usual topology: Square $I^2$ := $\{(x, y) ∈ \Bbb R^2: 0 \leq x, y \leq 1\}$ Annulus $A$:= $\{(x, y) \in \Bbb R^2: 1 \leq x^2 + y^2 \leq 4\}$, Define the following equivalence relations $∼$ and $≈$ on $I^2$ and $A$ respectively by $(x, y) ∼ (x, y)\ \forall (x, y) \in I^2$, $(0, y) ∼ (1, y)$ and $(x, 0) ∼ (x, 1)$ if $0 \leq x, y \leq 1$ $(x, y) ≈ (x, y)\ \forall(x, y) \in A,\ (x, y) ≈ (2x, 2y)$ if $x^2 + y^2 = 1$. I wish to prove that in the respective quotient topologies of $[I^2]_∼$ and $[A]_≈$ are homeomorphic . It is perhaps helpful to view quotient spaces as homeomorphic to the torus; so intuitively they should be homeomorphic. So, we can show show each quotient space is homeomorphic to the torus. But does there exist an explicit homeomorphism for this or something? Is there a proof that need not necessarily involve the torus or by pictures, if possible hmm~ *Just an additional remark my classmate and I were working on an exercise in a new topology course where we encountered the above problem and just realise that he had a similar post at Show two topological spaces are homeomorphic but I will be eligible and happy to offer a bounty hopefully to an answer that is helpful for us, thank you!
Identifying $\mathbb{R}^2$ with $\mathbb{C}$, we have $A = \{ z \in \mathbb{C} \mid 1 \le \lvert z \rvert \le 2 \}$. Define $$q : I^2 \to A, q(x,y) = (x+1)e^{2\pi iy} .$$ This is a well-defined continuous map because $\lvert q(x,y) \rvert = x+1 \in [1,2]$. It is a closed map because $I^2$ is compact and $A$ is Hausdorff. Moreover, $q$ is surjective: If $z \in A$, then $z = \lvert z \rvert e^{2\pi it}$ for some $t \in [0,1]$. Hence $(\lvert z \rvert -1,t) \in I^2$ and $q(\lvert z \rvert -1,t) = z$. So $q$ is a continuous closed surjection, hence a quotient map. Obviously we have $q(x,y) = q(x',y')$ if and only if $x = x'$ and either $y = y'$ or $\{y, y' \} = \{0, 1 \}$. Note that the latter implies $(x,y) \sim (x',y')$. Let $p : A \to T = A/\approx$ denote the quotient map. Then $r = p \circ q : I^2 \to T$ is a quotient map. We claim that $r(x,y) = r(x',y') \Leftrightarrow (x,y) \sim (x',y')$. "$\Leftarrow$" We have $q(x,0) = q(x,1)$, hence $r(x,0) = r(x,1)$, and we have $q(0,y) = e^{2\pi iy}, q(1,y) = 2e^{2\pi iy} = 2q(0,y)$, hence $r(0,y) = r(1,y)$. "$\Rightarrow$" In case $q(x,y) = q(x',y')$ we are done. So let $q(x,y) \ne q(x',y')$. Hence w.l.o.g. (1) $\lvert q(x,y) \rvert = 1$ and (2) $q(x',y') = 2q(x,y)$. But (2) implies (3) $\lvert q(x',y') \rvert = 2$ and (4) $y = y'$ or $\{y, y' \} = \{0, 1 \}$. From (1) and (3) we conclude $x = 0$ and $x' = 1$. Then (4) shows that $(x,y) \sim (x',y')$. Our above claim proves that $r$ induces a homeomorphism $\hat{r} : I^2 /\sim \phantom{.} \to A/\approx$. See also When is a space homeomorphic to a quotient space?.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3129188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Two basic questions in Wasserstein spaces We denote by $P (\mathbb{R}^{d})$ the space of probability measures on $\mathbb{R}^{d}$ and for $p\geqslant 1$ the Wasserstein space by \begin{equation*} P^p (\mathbb{R}^{d}) = \{ \mu \in P(\mathbb{R}^{d}) \,|\, \int |x|^p d\mu(x) <\infty \} \end{equation*} Also, we say that $\mu_N \rightarrow \mu $ in $P^p (\mathbb{R}^{d})$ iff $\mu_N \rightarrow \mu$ weakly and $\int |x|^p d\mu_N(x) \rightarrow \int |x|^p d\mu$. I am trying to clarify the following: 1) If $q<p$, do we have that $P^p (\mathbb{R}^{d}) \subseteq P^q (\mathbb{R}^{d})$? 2) If so, does convergence in $P^p (\mathbb{R}^{d})$ imply convergence in $P^q (\mathbb{R}^{d})$? Intuitively, both should be correct, but I can't even prove the first one. I tried Hölder but I can't finish it. Could someone help me by at least telling me whether the statements are correct or wrong?
If $p > q$, then a probability distribution $X$ with a $p$th moment has a $q$th moment. This follows by applying Jensen's inequality to the function $\phi(x) = x^{p/q}$. Note that $\phi''(x) = (p/q)(p/q - 1) x^{p/q - 2}$, so $\phi$ is convex on $[0,\infty)$ exactly when $p/q > 1$. So we can compute: $E [ ( |X|^q)^{p/q} ] \geq E [|X|^q]^{p/q}$. The same logic will tell you that if $X_n \to 0$ in $L^p$, then it also does in $L^q$, when $q < p$. (This is only true for probability distributions.)
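Since a finite sample defines an empirical probability measure, the moment inequality can be sanity-checked on data (plain Python; the Gaussian sample is an arbitrary choice, any sample works):

```python
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(1000)]

def moment(r):
    # E|X|^r under the empirical measure of the sample
    return sum(abs(x)**r for x in xs) / len(xs)

p, q = 4.0, 2.0                                   # p > q
print(moment(p) >= moment(q) ** (p/q))            # Jensen: E|X|^p >= (E|X|^q)^{p/q} -> True
print(moment(q) ** (1/q) <= moment(p) ** (1/p))   # L^q norm <= L^p norm -> True
```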
{ "language": "en", "url": "https://math.stackexchange.com/questions/3129318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why are there two different recurrences for Gegenbauer polynomials? As I mentioned previously, I've been reading up on Gegenbauer polynomials in preparation for a blog post on the kissing number problem—specifically, the Delsarte method. To make a long story short, the method involves expressing a particular function as a non-negative linear sum of Gegenbauer polynomials. In various publications on this topic (see, for example, Musin, "The Kissing Number in Four Dimensions" in the July 2008 Annals of Mathematics), these polynomials have the following recurrence: $$ G^{(n)}_0(t) = 1 $$ $$ G^{(n)}_1(t) = t $$ $$ G^{(n)}_k(t) = \frac{(2k+n-4)tG^{(n)}_{k-1}(t)-(k-1)G^{(n)}_{k-2}(t)}{k+n-3} $$ However, in the Wikipedia and Wolfram MathWorld articles on Gegenbauer polynomials, the recurrences are different (and not just up to a constant factor). In both, converting to consistent symbols, we have $$ G^{(n)}_0(t) = 1 $$ $$ G^{(n)}_1(t) = 2nt $$ $$ G^{(n)}_k(t) = \frac{2t(k+n-1)G^{(n)}_{k-1}(t)-(k+2n-2)G^{(n)}_{k-2}(t)}{k} $$ Both definitions are normalized to $G^{(n)}_0(t) = 1$. What accounts for the difference between the two?
I'll use $G$ for the second definition and $\widetilde G$ for the first definition. $G$ has the generating function $(1 - 2 x t + t^2)^{-n}$: $$(1 - 2 x t + t^2)^{-n} = \sum_{k \geq 0} G_k^{(n)}(x) \,t^k.$$ $\widetilde G$ is constructed by first taking $C$ with the gf $(1 - 2 x t + t^2)^{1 -n/2}$ and then normalizing by $C(1)$: $$(1 - 2 x t + t^2)^{1 - n/2} = \sum_{k \geq 0} C_k^{(n)}(x) \,t^k, \\ \widetilde G_k^{(n)}(x) = \frac {C_k^{(n)}(x)} {C_k^{(n)}(1)}.$$ Therefore $$\widetilde G_k^{(n)}(x) = \frac {(-1)^k} {\binom {2 - n} k } G_k^{(n/2 - 1)}(x).$$
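Both recurrences are easy to run side by side; the sketch below (plain Python, values at an arbitrary point $x = 0.3$ with $n = 5$) checks that the first (Musin-style) polynomials are the second ones renormalized by their value at $1$, i.e. $\widetilde G^{(n)}_k(x) = C^{(\lambda)}_k(x)/C^{(\lambda)}_k(1)$ with $\lambda = n/2 - 1$:

```python
def musin_G(n, kmax, t):
    # first recurrence, normalized so that G_k(1) = 1
    G = [1.0, t]
    for k in range(2, kmax + 1):
        G.append(((2*k + n - 4)*t*G[k-1] - (k - 1)*G[k-2]) / (k + n - 3))
    return G

def gegenbauer_C(lam, kmax, t):
    # second recurrence, generating function (1 - 2xt + t^2)^(-lam)
    C = [1.0, 2*lam*t]
    for k in range(2, kmax + 1):
        C.append((2*t*(k + lam - 1)*C[k-1] - (k + 2*lam - 2)*C[k-2]) / k)
    return C

n, x = 5, 0.3
lam = n/2 - 1
G = musin_G(n, 6, x)
C = gegenbauer_C(lam, 6, x)
C1 = gegenbauer_C(lam, 6, 1.0)
diffs = [abs(G[k] - C[k]/C1[k]) for k in range(7)]
print(max(diffs))  # tiny: the two definitions agree after renormalization
```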
{ "language": "en", "url": "https://math.stackexchange.com/questions/3129429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $f$ divides $g$ in $S[x]$, show that $f$ divides $g$ in $R[x]$ for $R$ a sub-ring of $S$. Let $R$ be a sub-ring of a ring $S$. Let $f,g$ be non-zero polynomials in $R[x]$ and assume that the leading coefficient of $f$ is a unit in $R$. If $f$ divides $g$ in $S[x]$, show that $f$ divides $g$ in $R[x]$. Thoughts: If $f$ divides $g$ in $S[x]$, then $g(x) = f(x)q(x)$ for some $q(x)$ in $S[x]$. From here I'm not sure what the question is asking. Do I have to show that $q(x)$ is an element of $R[x]$ ? Insights appreciated.
Well, as I mentioned in the comments and Bill Dubuque has explained it well in his answer, you can use the Euclidean division algorithm. Note that for the Euclidean division algorithm to work, all that you need is to know that the leading coefficient of the divisor is a unit. To see this, try to divide a polynomial $g(x)$ by another polynomial $f(x)$ of lower degree and you'll see that you can always cancel the term of highest degree in $g(x)$ when the leading coefficient of $f(x)$ is a unit. For a better insight and seeing what can go wrong when the leading coefficient is not a unit, try to divide $3x^2+1$ by $2x-1$ in $\mathbb{Z}[x]$. You will immediately see that you can't get rid of $3x^2$ because $2 \not\mid 3$. However, since in this problem your leading coefficient is a unit, you can assume that $f(x)$ is a monic polynomial. Since $1$ divides anything in the ring, no such problem can arise. Addendum: without using the uniqueness of the divisor and the remainder polynomials, we can argue as follows: Suppose that $g(x)=f(x)k(x)+r(x)$ in $R[x]$. Since $R[x] \subseteq S[x]$, the same equation holds in $S[x]$. On the other hand, you had assumed that $g(x)=f(x)q(x)$ in $S[x]$; so we get that $f(x)q(x)=f(x)k(x)+r(x)$. Hence, $f(x)\big( q(x) -k(x) \big) = r(x)$. Since the leading coefficient of $f(x)$ is $1$ and $1$ is never a zero-divisor, unless $q(x)-k(x) = 0$, we have $\deg r(x) \geq \deg f(x)$ which is a contradiction. So, $q(x)-k(x)=0$ and therefore, $r(x)=0$. Thanks to Bill Dubuque for pointing out that $f \mid r$ still implies $\deg r \geq \deg f$ because $f$ is monic.
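The argument can be watched in action with a short division routine (plain Python, an illustrative sketch using integer coefficient lists, not any particular CAS API): because $f$ is monic, every step uses only ring operations, so dividing a polynomial $g \in \mathbb{Z}[x]$ by a monic $f \in \mathbb{Z}[x]$ produces a quotient and remainder that are again in $\mathbb{Z}[x]$.

```python
def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

def polydiv_monic(g, f):
    """Divide g by a *monic* f; lists of coefficients, index i = coeff of x^i.
    Returns (q, r) with g = f*q + r and deg r < deg f.  Only ring operations
    are used (the sole 'division' is by the unit leading coefficient 1), so
    if g and f lie in a subring R[x], then so do q and r."""
    assert f[-1] == 1, "leading coefficient must be a unit (here: monic)"
    r = list(g)
    q = [0] * max(1, len(g) - len(f) + 1)
    while len(r) >= len(f) and any(r):
        shift = len(r) - len(f)
        c = r[-1]
        q[shift] += c
        for i, fi in enumerate(f):
            r[shift + i] -= c * fi
        while len(r) > 1 and r[-1] == 0:
            r.pop()
    return q, r

f = [1, 3, 1]                 # x^2 + 3x + 1, monic in Z[x]
g = polymul(f, [-5, 2])       # g = f * (2x - 5) in Z[x]
print(polydiv_monic(g, f))    # -> ([-5, 2], [0]): the quotient stayed in Z[x]
```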
{ "language": "en", "url": "https://math.stackexchange.com/questions/3129521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Prove that there are at least $4(p-3)(p-1)^{p-4}$ functions $f:S\to S$ satisfying $\sum \limits_{x\in T} x^{f(x)}\equiv a \pmod p$ This question is from the third round of the Iranian olympiad exams and has not been answered for several years now. I think there are many people here who may be able to solve this problem. From AoPS. Problem: $a$ is an integer and $p$ is a prime number and we have $p\ge 17$. Suppose that $S=\{1,2,....,p-1\}$ and $T=\{y|1\le y\le p-1,\operatorname{ord}_p(y)<p-1\}$. Prove that there are at least $4(p-3)(p-1)^{p-4}$ functions $f:S\longrightarrow S$ satisfying $$\sum_{x\in T} x^{f(x)}\equiv a \pmod p$$ It seems that this problem can be solved by generating functions, but I don't know how to start. Thank you.
First, if you don't have the restriction $\operatorname{ord}(y)<p-1$: you can choose the values of $f$ on $p-2$ elements arbitrarily and then choose $f(g)$ (where $g$ is a primitive root) so that the equality works. The best thing after $g$ is $g^{\pm 2}$, so choose $f$ arbitrarily on the other elements and then choose $f$ on these two elements so that the equality works; this gives the bound in the problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3129824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why would a facility use room numbers $3233$, $53$, $61$, $\infty$, $10^{100}$, $1729$, $4$, $1.61803$, $3.14159$, $1.33333$, $\sqrt{-1}$, $0$, $1$? I was part of TCS Ignite training after my BSc graduation from July 2007 to October 2007. The training space had a numbering system for its rooms such as $$3233,\; 53,\; 61, \;\infty, \; 10^{100}, \; 1729, \;4, \; 1.6180, \;3.1415, \;1.3333, \; \sqrt{-1}, \;0, \;1$$ What do these numbers mean in our life? I had this question in mind during training. Before I ask this question to somebody, I happened to leave the organisation unexpectedly. Can someone please explain?
* 3233 is semiprime and the product of the other two prime numbers in the list, 53 and 61. 3233 is the biggest room in the facility and can accommodate 200 people. Rooms 53 and 61 can accommodate 100 people each. All other rooms are labs accommodating 24 people with computers. In the RSA algorithm these three numbers 3233, 53, 61 can be used.
* I think 4 signifies the four color theorem of graph theory.
* As per the book link below, 1.333 is called a Pythagorean number, derived from the Pythagorean triple (4, 3, 5). https://books.google.co.in/books?id=_w9MDwAAQBAJ&pg=PA74&lpg=PA74&dq=%22pythagorean+number%22+(1.333)&source=bl&ots=qO0wQGLdXc&sig=ACfU3U2RF0tFmhVdY9Cut7IYztQ3eKqVBw&hl=en&sa=X&ved=2ahUKEwj-k5bzzODgAhUYCTQIHVKfD88Q6AEwBXoECAkQAQ#v=onepage&q=%22pythagorean%20number%22%20(1.333)&f=false
The above facility was designed by people who worked in various universities and organizations such as the NSA of the United States and returned to India after 20 years. The facility head's name is Srinivasan Raman, which resembles the mathematician's name Srinivasa Ramanujan. Often I used to think this space was not created accidentally; it has a purpose behind it. This training space is full of knowledge, but I was not lucky enough to continue working in this space. Sometimes I used to think about these numbers and figure out what they mean. I had an idea about a few numbers, but I am still not sure about the number 1.333.
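The answer notes that $3233 = 53 \cdot 61$ can be used in RSA. Here is a toy illustration (plain Python; the public exponent $e = 17$ is a common textbook choice, not something from the facility):

```python
p, q = 53, 61
n = p * q                      # 3233
phi = (p - 1) * (q - 1)        # 3120
e = 17                         # public exponent, coprime to phi
d = pow(e, -1, phi)            # private exponent via modular inverse (Python 3.8+)

m = 42                         # a message, 0 <= m < n
c = pow(m, e, n)               # encrypt
assert pow(c, d, n) == m       # decrypt recovers the message
print(n, e, d)  # -> 3233 17 2753
```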
{ "language": "en", "url": "https://math.stackexchange.com/questions/3129924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Noncentral Wishart Expected Value - Solving a Matrix Integral Let $\mathbf{V} \sim ncWish\left(\nu_1, \, \mathbf{\Sigma}, \, \frac{\nu_1}{\nu_2-p-1} \mathbf{\Theta}\right)$ follow a noncentral Wishart distribution according to Theorem 3.5.1. in Gupta, Nagar - Matrix Variate Distributions. What is $$ \mathbf{E}\left[ \mathbf{V} |\mathbf{V}|^{\frac{\nu_2}{2}} \right] = \;? $$ To answer this question we need to find a closed form solution to the following integral, where the part in blue is the kernel density of the noncentral Wishart distribution $$ \int_{\mathbf{V}>\mathbf{O}} \mathbf{V} \, |\mathbf{V}|^\frac{\nu_2}{2} \color{blue}{ \exp\left(-\frac{1}{2}\text{tr}(\mathbf{\Sigma}^{-1}\mathbf{V})\right) |\mathbf{V}|^{\frac{\nu_1-p-1}{2}} {}_0F_1\left(\frac{\nu_1}{2}; \frac{1}{4}\frac{\nu_1}{\nu_2-p-1} \mathbf{\Theta} \mathbf{\Sigma}^{-1} \mathbf{V} \right)} \mbox{d} \mathbf{V}, $$ where $\nu_1$ and $\nu_2$ are scalars, $\mathbf{\Theta}$ is a real matrix, $\mathbf{V}$ and $\mathbf{\Sigma}$ are real, symmetric, positive definite matrices and ${}_0F_1$ is the hypergeometric function of a matrix argument.
I suppose that all matrices in question are $p \times p$. It is clear that no closed-form result for the integral can be obtained if $p=1$, so it is doubtful that such an expression exists for general $p$, especially since there is no such expression for the generalized hypergeometric function ${}_0F_1$ of matrix argument. Next, I note that in multivariate statistical analysis, where this integral was derived, the notation $\color{blue}{\mathbf{\Theta\Sigma^{-1}}}$ is shorthand for a symmetric matrix $\mathbf{M}$ that has the same eigenvalues as the matrix $\color{blue}{\mathbf{\Theta\Sigma^{-1}}}$. I provide an approach to the integral that is derived from the paper, "Integral transform methods in goodness-of-fit testing, II: The Wishart distributions," Ann. Inst. Statist. Math. 72 (2020), 1317-1370; see p. 1328, Proposition 2 in that paper. Let us rewrite the integral in the form $$ f(\mathbf{M}_1,\mathbf{M}_2) := \int_{\mathbf{V}>\mathbf{O}} \mathbf{V} \, |\mathbf{V}|^\alpha \exp(-{\rm{tr}} \, \mathbf{M}_1^{-1}\mathbf{V}) \, {}_0F_1(\beta; \mathbf{M}_2 \mathbf{V}) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}}, $$ where $\alpha$, $\beta$, $\mathbf{M}_1$, and $\mathbf{M}_2$ are trivially expressible in terms of your original notation, e.g., $\mathbf{M}_1 = 2 \mathbf{\Sigma}$, etc. With this notation, $\mathbf{M}_1$ is a positive definite (and symmetric) matrix, and $\mathbf{M}_2$ is a symmetric matrix. Make the substitution $\mathbf{V} \to \mathbf{M}_1^{1/2} \mathbf{V} \mathbf{M}_1^{1/2}$.
It is well-known that the measure $\mbox{d}\mathbf{V}/|\mathbf{V}|^{(p+1)/2}$ is invariant under this substitution; so, after simplification, we obtain \begin{align*} f(\mathbf{M}_1,\mathbf{M}_2) = |\mathbf{M}_1|^\alpha \mathbf{M}_1^{1/2} \int_{\mathbf{V}>\mathbf{O}} \mathbf{V} |\mathbf{V}|^\alpha \exp(-{\rm{tr}} \, \mathbf{V}) \, {}_0F_1\left(\beta; \mathbf{M}_1^{1/2} \mathbf{M}_2 \mathbf{M}_1^{1/2} \mathbf{V} \right) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}} \mathbf{M}_1^{1/2}. \end{align*} That is, $$ f(\mathbf{M}_1,\mathbf{M}_2) = |\mathbf{M}_1|^\alpha \mathbf{M}_1^{1/2} f(\mathbf{I}_p,\mathbf{M}) \mathbf{M}_1^{1/2}, $$ where $\mathbf{I}_p$ denotes the $p \times p$ identity matrix, and $\mathbf{M} = \mathbf{M}_1^{1/2} \mathbf{M}_2 \mathbf{M}_1^{1/2} $. Now consider the integral, $$ g(\mathbf{M}) := f(\mathbf{I}_p,\mathbf{M}) = \int_{\mathbf{V}>\mathbf{O}} \mathbf{V} \, |\mathbf{V}|^\alpha \exp(-{\rm{tr}} \, \mathbf{V}) \, {}_0F_1(\beta; \mathbf{M} \mathbf{V}) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}}, $$ where $\mathbf{M}$ is any $p \times p$ symmetric matrix. Observe that $g(\mathbf{M})$ is a symmetric $p \times p$ *matrix*, each of whose entries is an integral. That the integral defining $g(\mathbf{M})$ converges absolutely for all $\alpha > (p-1)/2$ and all $\mathbf{M}$ can be proved using the Poisson integral for the Bessel functions of matrix argument; see Herz (Ann. Math., 61 (1955), 474-523). Denote by $O(p)$ the group of all $p \times p$ orthogonal matrices.
Note that if $\mathbf{H} \in O(p)$ then $$ g(\mathbf{H} \mathbf{M} \mathbf{H}^{-1}) = \int_{\mathbf{V}>\mathbf{O}} \mathbf{V} \, |\mathbf{V}|^\alpha \, \exp(-{\rm{tr}} \, \mathbf{V}) \, {}_0F_1\left(\beta; \mathbf{H} \mathbf{M} \mathbf{H}^{-1} \mathbf{V} \right) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}}, $$ Making the substitution $\mathbf{V} \to \mathbf{H} \mathbf{V} \mathbf{H}^{-1}$, using the invariance of the measure $\mbox{d}\mathbf{V}/|\mathbf{V}|^{(p+1)/2}$, and simplifying, we obtain $$ g(\mathbf{H} \mathbf{M} \mathbf{H}^{-1}) = \mathbf{H} \int_{\mathbf{V}>\mathbf{O}} \mathbf{V} |\mathbf{V}|^\alpha \exp(-{\rm{tr}} \, \mathbf{V}) \, {}_0F_1(\beta; \mathbf{M} \mathbf{V}) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}} \mathbf{H}^{-1}. $$ In short, we have shown that, for all $p \times p$ symmetric matrices $\mathbf{M}$ and all $\mathbf{H} \in O(p)$, $$ g(\mathbf{H} \,\mathbf{M} \,\mathbf{H}^{-1}) = \mathbf{H} \,g(\mathbf{M}) \,\mathbf{H}^{-1}. $$ This property has appeared earlier in the 1970's and 1980's papers of K. I. Gross and R. A. Kunze on generalized Bessel functions of matrix argument. Gross and Kunze called it an orthogonal "covariance" property to distinguish it from the property of invariance under $O(p)$. Since every symmetric matrix can be diagonalized by a transformation of the form $\mathbf{M} \to \mathbf{H} \mathbf{M} \mathbf{H}^{-1}$, the covariance property also shows that, in calculating $g(\mathbf{M})$, it suffices to assume that $\mathbf{M}$ is diagonal. The above covariance property is the best that one can do for general $p$. For $p=2$, I suggest that you try to evaluate $g(\mathbf{M})$ by using an explicit formula for the ${}_0F_1$ function when $p=2$ (see Muirhead's book, *Aspects of Multivariate Statistical Theory*, Wiley, New York, 1982) and then calculating each entry of $g(\mathbf{M})$ term-by-term.
For the case in which $\mathbf{M} = \mathbf{I}_p$ (or a multiple of the identity), there is some hope that the matrix $g(\mathbf{M})$ can be calculated explicitly for general $p$. For $\mathbf{M} = \mathbf{I}_p$, the covariance property reduces to $$ g(\mathbf{I}_p) = \mathbf{H} \, g(\mathbf{I}_p) \, \mathbf{H}^{-1} $$ for all $\mathbf{H} \in O(p)$. By Schur's lemma, it follows that $g(\mathbf{I}_p) = c \, \mathbf{I}_p$ for some constant $c$. Therefore, all off-diagonal entries of $g(\mathbf{I}_p)$ are equal to zero. As for the diagonal entries, by taking traces, we obtain \begin{align*} c p = {\rm{tr}} \,(c \,\mathbf{I}_p) &= {\rm{tr}} \, g(\mathbf{I}_p) \\ &= {\rm{tr}} \, \int_{\mathbf{V}>\mathbf{O}} \mathbf{V} \, |\mathbf{V}|^\alpha \, \exp(-{\rm{tr}} \, \mathbf{V}) \, {}_0F_1(\beta; \mathbf{V}) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}} \\ &= \int_{\mathbf{V}>\mathbf{O}} ({\rm{tr}} \, \mathbf{V}) \, |\mathbf{V}|^\alpha \, \exp(-{\rm{tr}} \, \mathbf{V}) \, {}_0F_1(\beta; \mathbf{V}) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}} \\ &= - \frac{\partial}{\partial t} \int_{\mathbf{V}>\mathbf{O}}|\mathbf{V}|^\alpha \, \exp(- t \, {\rm{tr}} \, \mathbf{V}) \, {}_0F_1(\beta; \mathbf{V}) \frac{\mbox{d}\mathbf{V}}{|\mathbf{V}|^{(p+1)/2}}\Bigg|_{t=1}. \end{align*} Expanding the function ${}_0F_1(\beta; \mathbf{V})$ in a series of zonal polynomials $Z_\kappa$, integrating term-by-term using a formula for the Laplace transform of $Z_\kappa$ (DLMF 35.4.8, https://dlmf.nist.gov/35.4.E8), and then differentiating term-by-term, one obtains a final result in terms of an infinite series of zonal polynomials.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3130049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Whitehead product $[i_2, i_2]_W$ Let $i_2$ be the generator of $\pi_2(S^2)$ and $\eta$ be the Hopf fibration from $S^3$ to $S^2$. How would one go about showing that $[i_2,i_2]_W=2\eta$ (up to a sign)? This is one of the exercises in a topology course I am currently self-studying. I have seen some solutions using the Hopf invariant, but as far as I understand, it is related to cohomology, which is the next topic. I believe that there must be some way to calculate the above product from first principles... I would like to use that to calculate the first stable homotopy group, but the only other answer I found goes the other way and uses the fact $\pi_4(S^3)=\mathbb{Z}_2$ to show $[i_2,i_2]_W=2\eta$.
The Hopf fibration $S^1\to S^3\to S^2$ gives the long exact sequence in homotopy groups. Since higher homotopy groups of $S^1$ are trivial, we get $\pi_3(S^3)\cong\pi_3(S^2)$, with the generator given by the Hopf map. We can define the inverse isomorphism $H:\pi_3(S^2)\to \mathbb{Z}$ called the Hopf invariant as follows. Let $x_1,x_2\in S^2$ be two distinct points and assume we pick a representative of a class $\alpha\in\pi_3(S^2)$ which is a smooth map $f:S^3\to S^2$ with regular values $x_1$ and $x_2$. We can then consider $L_i:=f^{-1}(x_i)$ for $i=1,2$ which is an oriented submanifold of dimension $1$ in $S^3$, i.e. an oriented link. Define: $$ H(\alpha):=lk(L_1,L_2)$$ the linking number of two oriented links, which is defined as the sum of pairwise linking numbers of each components of $L_1$ with each component of $L_2$. One then checks that this is a well-defined invariant of $\alpha$. Since both maps are very explicit, one can calculate that: $$H(\eta)=1$$ $$H([i_2,i_2]_W)=2$$ In the Hopf fibration $L_1$ and $L_2$ are two circle fibres and it is an exercise to prove that they form a Hopf link, so their linking number is $1$. For the Whitehead product use that it is given as the composition $S^3\to S^2\vee S^2\to S^2$ of the universal Whitehead product (the attaching map of the top cell in $S^2\times S^2$) and the fold map. Both $L_1$ and $L_2$ are $2$-component links (together this forms a $0$-framed push-off of the Hopf link!) and out of four linking numbers, precisely two are $1$. To see this you might want to use that linking numbers can be computed from surfaces (here disks) that links bound in $D^4$ (see Rolfsen, for example).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3130243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Solve for $x$ : $\sqrt{x-6} \, + \, \sqrt{x-1} \, + \, \sqrt{x+6} = 9$? I want to solve the following equation for $x$: $$\sqrt{x-6} \, + \, \sqrt{x-1} \, + \, \sqrt{x+6} = 9$$ My approach: Let the given equation be $$\sqrt{x-6} \, + \, \sqrt{x-1} \, + \, \sqrt{x+6} = 9 \tag {i}$$ On rearranging, we get: $$\sqrt{x-6} \, + \, \sqrt{x+6} = 9 \, - \, \sqrt{x-1} $$ On squaring both sides, we get: $$(x-6) + (x+6) + 2\sqrt{x^2-36} = 81 + (x-1) - 18\sqrt{x-1}$$ $$\implies 2x + 2\sqrt{x^2-36} = 80 + x - 18\sqrt{x-1}$$ $$\implies x + 2\sqrt{x^2-36} = 80 - 18\sqrt{x-1} \tag{ii}$$ Again we get an equation in radical form. But in the Wolfram app I get the answer $x=10$; see it in WolframAlpha. So, how can this equation be solved? Please help...
I'm not entirely sure if this approach will find all possible $x$ in any such question, nevertheless here it is giving the correct answer and is relatively short. Since the RHS $=9$ is an integer, all three terms under the radicals on the LHS must be perfect squares. $$\therefore, x-6=p^2$$ $$x-1=q^2$$ $$x+6=r^2$$ where $p,q,r$ are integers. $$\Rightarrow x=p^2+6=q^2+1=r^2-6$$ Here we need the value of only one of $p,q,r$ to find $x$ $\Rightarrow (p+q)(p-q)=-5$ $\Rightarrow \big((p+q),(p-q)\big)=(-1,5), (1,-5), (5,-1), (-5,1)$ $\Rightarrow p=±2$ Since $x=p^2+6$, $$\therefore \fbox{x=10}$$
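Whatever one makes of the perfect-square argument, the candidate checks out numerically, and since each radical is increasing in $x$, the LHS is strictly increasing on $[6,\infty)$, so $x=10$ is the only real solution (plain Python):

```python
import math

def lhs(x):
    return math.sqrt(x - 6) + math.sqrt(x - 1) + math.sqrt(x + 6)

print(lhs(10))  # -> 9.0, since 2 + 3 + 4 = 9

# each radical is increasing, so lhs is strictly increasing on [6, oo)
assert lhs(9.99) < 9 < lhs(10.01)
```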
{ "language": "en", "url": "https://math.stackexchange.com/questions/3130395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Finding the position of two lines for each value of k I have the two lines r and s $$ r: \begin{cases} x+y=1 \\x+z=1 \end{cases} $$ $$ s: \begin{cases} x-ky=k \\z-x=k \end{cases}$$ First, I create one single system with all equations $$ \begin{cases} x+y=1 \\x+z=1\\x-ky=k\\x-z=k \end{cases} $$ which I rewrite as $$\begin{matrix} 1 & 1 & 0 & 1 \\1&0&1&1\\1&-k&0&k\\1&0&-1&k\end{matrix}$$ I then transform it in row echelon form, getting $$ \begin{matrix} 1&1&0&1\\0&-1&1&0\\0&0&-k-1&k-1\\0&0&0& \frac{-(k-1)^2}{-k-1} \end{matrix} $$ I find the values of k by getting the determinant of the matrix $$ -(k-1)^2 = 0 \rightarrow k = 1 $$ So if I write the ref matrix with 1 instead of k I get $$\begin{matrix} 1&1&0&1\\0&-1&1&0\\0&0&-2&0\\0&0&0&0 \end{matrix} $$ The matrix has rank 3, which is not its maximum rank, meaning the system has infinite solutions. If the system has infinite solutions, does that means that the lines are always parallel when k=1? And since the rank of the matrix is 4 for each value of k other than 1, does that mean that the two lines always intersect when k is not 1?
You made a sign error when you combined the two systems of equations: the second equation for $s$ is $z-x=k$, but the last equation in your combined system is $x-z=k$, which is not the same thing. If you reorder the variables, it should be either $-x+z=k$ or $x-z=-k$. This error cascades through the further calculations. With the correct equation, the last row of the reduced matrix is $$\begin{bmatrix}0&0&0&{k^2+4k-1\over k+1}\end{bmatrix}.$$ This system is therefore consistent, and so has an infinite number of solutions as you stated, when $k^2+4k-1=0$. (As it turns out, the determinant of the reduced matrix is also equal to $k^2+4k-1$, but there’s not really any need to compute it.) The lines are coincident, not parallel, for those values of $k$. If they were parallel, there would be no solutions. You also need to examine the case $k=-1$ separately since in that case the row-reduction you performed is invalid because of a division by zero. After substituting this value of $k$ into the original equations, one can see at a glance that the system is inconsistent: for $r$ you have $x+y=1$, but for $s$ you have $x+y=-1$. The planes defined by these two equations are parallel, so there are no points in common between the two lines at all. Your conclusion that the system is consistent and therefore has a single solution for other values of $k$ than the three above is incorrect, however. The rank of the augmented matrix is indeed $4$, but the rank of the coefficient matrix cannot be greater than $3$, so the system has no solutions for these values of $k$. To put it another way, the last row of the reduced matrix above represents the equation $$0={k^2+4k-1\over k+1}$$ (and a similar equation for your erroneous computation), which has no solutions unless the right-hand side vanishes. Finally, having no intersection in 3-D doesn’t mean that the lines are parallel as it does in two dimensions. The lines might be skew: they simply pass by each other in different directions.
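The residual driving the consistency condition can be re-derived by direct elimination (plain Python with exact fractions; valid for $k \neq -1$): solve the first three equations for $x, y, z$ and substitute into the fourth.

```python
from fractions import Fraction

def residual(k):
    # from x + y = 1 and x + z = 1:  y = z = 1 - x
    # from x - k*y = k:              x(1 + k) = 2k   (requires k != -1)
    x = 2 * k / (1 + k)
    y = 1 - x
    z = 1 - x
    assert x + y == 1 and x + z == 1 and x - k*y == k
    return -x + z - k          # residual of the fourth equation, z - x = k

# matches -(k^2 + 4k - 1)/(k + 1): zero exactly when k^2 + 4k - 1 = 0
for k in map(Fraction, (1, 2, -3, "1/2")):
    assert residual(k) == -(k**2 + 4*k - 1) / (k + 1)
print(residual(Fraction(1)))  # -> -2: for k = 1 the system is inconsistent
```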
{ "language": "en", "url": "https://math.stackexchange.com/questions/3130524", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show that matrix of $T$ has at least dim range $T$ nonzero entries. Suppose $V$ and $W$ are finite-dimensional and $T \in \mathcal{L}(V, W)$. Show that with respect to each choice of bases of $V$ and $W$, the matrix of $T$ has at least dim range $T$ nonzero entries. I have no idea on how to solve this.
This is an exercise from Linear Algebra Done Right, so here's an answer in line with Axler's style: Choose bases $v_1,v_2,...,v_n$ of $V$ and $w_1,w_2,...,w_m$ of $W$ and let $\mathcal{M}(T)$ be the matrix of $T$ with respect to these bases. Column $j$ of $\mathcal{M}(T)$ will consist entirely of zeros iff $Tv_j=0$; equivalently column $j$ has at least one nonzero entry iff $Tv_j \not=0$. So out of the $n$ basis vectors of $V$ say $k \leq n$ of them are mapped to zero; assume without loss that $Tv_1,Tv_2,...,Tv_k=0$. Then $\mathrm{range} \: T = \mathrm{span}(Tv_{k+1},Tv_{k+2},...,Tv_n)$, which has dimension $\leq n-k$ (since these vectors need not be linearly independent.) Now for each $j=k+1,k+2,...,n$, $Tv_j \not=0$ so at least one of the scalars in its basis expansion with respect to the $w_i$'s is nonzero; hence the corresponding column of the matrix has at least one nonzero entry. So each of these $n-k$ vectors contributes at least one nonzero entry to the matrix, so we have at least $n-k$ nonzero entries (which is at least $\mathrm{dim} \: \mathrm{range} \: T$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/3130797", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Why does this sequence of R.V.s converge in distribution but not in probability? Why does this sequence of random variables converge in distribution but not in probability? Probability space $([0,1],B,m)$. ($B$ consists of all Borel sets of $[0,1]$, $m$ is the Lebesgue measure.) Let $X_{2n}(ω)=ω$, $X_{2n−1}(ω)=1−ω$. My intuition says that I have to show that $X_{2n}$ and $X_{2n−1}$ have the same distribution, which is a uniform distribution, and after this, show that they converge in distribution, which is obvious. But how can I show this? And also, how can I show that $X_{n}$ does not converge in probability? Any help, guys?
You're right about the convergence in distribution part. To show they have a uniform distribution, calculate the CDF. For instance, for $x\in (0,1),$ $P(X_{2n}\le x) = P(\omega \le x) = m(\{\omega :\omega \le x\})=m([0,x]) = x,$ and you can compute similarly for $1-\omega.$ To show that it doesn't converge in probability, observe that the sequence oscillates between $\omega$ and $1-\omega.$ The distance between the two is $|1-2\omega|.$ Thus for $m$ even, $n$ odd, and any $0<\epsilon<1$, $P(|X_n-X_m|>\epsilon) =P(|1-2\omega|>\epsilon)=1-\epsilon>0.$ Thus $X_n$ is not Cauchy in probability and therefore not convergent in probability.
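A quick check of the last step (my addition, not part of the original answer): on $([0,1],m)$ the set $\{\omega : |1-2\omega|>\epsilon\}$ is $[0,\tfrac{1-\epsilon}2)\cup(\tfrac{1+\epsilon}2,1]$, of measure $1-\epsilon>0$. Approximating that measure on a fine midpoint grid:

```python
# m({w in [0,1] : |1 - 2w| > eps}) = 1 - eps, estimated on a midpoint grid;
# this is P(|X_n - X_m| > eps) for n odd and m even.
N = 1_000_000
eps = 0.5
hits = sum(1 for i in range(N) if abs(1 - 2 * (i + 0.5) / N) > eps)
measure = hits / N
```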
{ "language": "en", "url": "https://math.stackexchange.com/questions/3130908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A Version of Fubini-Tonelli Theorem for Hilbert Space Valued Functions I'm currently working on a project in which we define a new type of integral, and I'm trying to interchange the integral with expectation, something like $\mathbb{E} \left[ (\mathcal{N})\int f dW \right]=(\mathcal{N})\int \left[ \mathbb{E} f dW \right]$, where $\mathcal{N}$ denotes the defined integral, $f$ is an operator-valued stochastic process and $W$ a Hilbert-space-valued $Q$-Wiener process. Under which conditions can I do such a thing? What I give might be vague. But could this involve something like the Fubini-Tonelli theorem? I also haven't seen a version of this theorem for functions in infinite dimensions. If there is one, please cite it. Thank you in advance! :)
It seems that the theorem you need is Theorem 2 in Chapter X of the book The Bochner Integral (ISBN 9780124958500).
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Leaving recurrence summation in terms of $k$, $\sum_{i=0}^{k-1}\frac{3^i\sqrt{\frac n{3^i}}}{\log\frac n{3^i}}$ I have an exercise where I need to use the substitution method to solve the following recurrence and determine its complexity. $$t(n)=3t(n/3) + \frac{\sqrt n}{\log n}$$ After some iterations, I got the following pattern. $$t(n)=3^kt\left(\frac{n}{3^k}\right)+\sum_{i=0}^{k-1}\frac{3^i\sqrt{n/3^i}}{\log (n/3^i)}$$ Honestly, I do not know what kind of approach I could use to evaluate the summation and leave everything in terms of $k$.
Hint 1. As regards the asymptotic analysis (complexity?), you may use the Master Theorem. For $t(n)=3t(n/3)+f(n)$ the critical exponent is $c_{crit}=\log_3(3)=1$, and $f(n)=\frac{n^{1/2}}{\log(n)}=O(n^{1/2})=O(n^{c_{crit}-1/2})$. What about $T(n)$? Hint 2. Note that $$t(3^m)=3^mt(1)+\sum_{k=0}^{m-1}\frac{3^k\sqrt{3^{m-k}}}{\log (3^{m-k})} =3^mt(1)+3^m\sum_{k=1}^{m}\frac{(1/\sqrt{3})^k}{k\log (3)}\sim C\cdot 3^m$$ because the series $\sum_{k\geq 1}\frac{(1/\sqrt{3})^k}{k\log (3)}$ is convergent (note that $1/\sqrt{3}<1$). Hence $t(n)=\Theta(n)$.
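Numerically, the question's recurrence does settle to linear growth. A quick check along $n=3^m$, with the hypothetical base case $t(1)=1$ (my addition):

```python
import math

t = 1.0                      # hypothetical base case t(1) = 1
ratios = []
for m in range(1, 25):
    n = 3 ** m
    t = 3 * t + math.sqrt(n) / math.log(n)   # unroll once: t now equals t(3^m)
    ratios.append(t / n)
# t(3^m)/3^m should settle to a constant, i.e. t(n) = Theta(n)
final_gap = abs(ratios[-1] - ratios[-2])
```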
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving $3\mid p^3 \implies 3\mid p$ I want to prove $3\mid p^3 \implies 3\mid p$ (Does it?) The contrapositive would be $3 \nmid p \implies 3 \nmid p^3$ I believe. $3\nmid p \implies p = 3q + r$ ($0<r<3$), so $p^3 = 27q^3+27q^2r+9qr^2+r^3$. Factoring $3$ out of the first three terms we get $p^3 = 3(9q^3 + 9q^2r + 3qr^2) + r^3$. Is this correct so far? How do I finish the proof please? Do I need to show that $r^3$ can never be a multiple of $3$?
$$p^3\bmod3=(p\bmod3)^3\bmod 3,$$ so that $$p\bmod3=0\to p^3\bmod3=0$$ $$p\bmod3=1\to p^3\bmod3=1$$ $$p\bmod3=2\to p^3\bmod3=2$$
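The three-case table can also be confirmed exhaustively over a range of integers (my addition):

```python
# p^3 mod 3 equals p mod 3 for every p, so 3 | p^3 forces 3 | p.
table_ok = all((p ** 3) % 3 == p % 3 for p in range(10000))
implication_ok = all(p % 3 == 0 for p in range(10000) if p ** 3 % 3 == 0)
```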
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Paper folding and continued fractions This question is suggested by a prior question, which has not received a complete answer. One corner of a rectangular sheet of paper $P_1$ is folded down to make a trapezoid, and then a right triangle is cut off to make another rectangle $P_2.$ The process is repeated with $P_2$ to produce a rectangle $P_3$ and so on. We assume that the ratio of the sides of $P_1$ is irrational, so none of the rectangles is a square, and the process goes on forever. Let $S_n$ be the area of $P_n$ and suppose $$\lim_{n\to\infty}{S_{n+2}\over S_n}$$ exists. What are the possible values of the limit? In the original question, the OP stated that he believed that, in order for the limit to exist, it must be the case that $P_n$ and $P_{n+2}$ are similar, for large $n,$ but that he couldn't prove it. In a partial answer, I corrected some calculation errors, and pointed out the relation of the problem to continued fractions, but I haven't been able to prove the OP's hypothesis either, though I think it's very likely true. Can you prove that $P_n$ and $P_{n+2}$ must eventually be similar? Alternatively, can you see how to modify the analysis so as not to use this hypothesis?
There is another possible limit $\color{blue}{3-2\sqrt{2}=(1+\sqrt{2})^{-2}}$. How? Suppose rectangle $n$ has length $a$ and width $b$ with $a>2b$. Then to get to rectangle $n+2$ you shorten the $a$ dimension twice leaving a rectangle with dimensions $a-2b$ and $b$. If $\dfrac{a-2b}{b}=\dfrac{b}{a}$ then the rectangles are similar forcing a fixed value of $S_{n+2}/S_n$. Note that the intermediate rectangle would not be similar so we would not have $S_{n+1}/S_n=\sqrt{S_{n+2}/S_n}$. The above equation for $a$ and $b$ is solved for a positive root $a/b=1+\sqrt{2}>2$. Thereby $\dfrac{S_{n+2}}{S_n}=\dfrac{a-2b}{a}=3-2\sqrt{2}$. With $a/b=1+\sqrt{2}>2$ one can verify that also $\dfrac{S_{n+3}}{S_{n+1}}=3-2\sqrt{2}$.
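A quick simulation of the process starting from the side ratio $1+\sqrt2$ (my addition): each fold-and-cut step replaces an $a\times b$ rectangle, $a\ge b$, by an $(a-b)\times b$ one, re-sorting the sides, and the two-step area ratio stays at $3-2\sqrt2$ as claimed.

```python
import math

a, b = 1 + math.sqrt(2), 1.0       # side ratio a/b = 1 + sqrt(2)
areas = []
for _ in range(12):
    areas.append(a * b)
    a, b = max(a - b, b), min(a - b, b)   # one fold-and-cut step
ratios = [areas[n + 2] / areas[n] for n in range(10)]
target = 3 - 2 * math.sqrt(2)
max_err = max(abs(rr - target) for rr in ratios)
```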
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131458", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Strong Induction for Fibonacci I'm a little lost on how to use strong induction to prove the following for the Fibonacci sequence: $F_n < 2^n$ for all natural numbers $n$. Any help would be very much appreciated!!
Hint: $F_0=0<2^0$, $F_1=1<2^1$. If the statement holds for every $m<n$, where $n\ge2$, then $$ F_n=F_{n-1}+F_{n-2} $$ and $n-1<n$, $n-2<n$, so…
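Before writing out the induction, the inequality is easy to spot-check numerically (my addition):

```python
fibs = [0, 1]                         # F_0, F_1
while len(fibs) < 90:
    fibs.append(fibs[-1] + fibs[-2])  # F_n = F_{n-1} + F_{n-2}
holds = all(fibs[n] < 2 ** n for n in range(90))
```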
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
A Continuous function $f: \overline{B_1(0)} \subset \ell^2\to \mathbb{R}$ which does not reach the maximum? If necessary, recall that $$ \ell^2 = \{x=\{x_n\}_n\subset \mathbb{R} : \|x\|^2:=\sum_n |x_n|^2<\infty\} $$ and $ \overline{B_1(0)} $ is the closed unit ball with respect to that norm. Can we find an explicit example of a continuous function $f: \overline{B_1(0)} \subset \ell^2\to \mathbb{R}$ which does not attain its maximum? The point is that this ball is not a compact set. Thank you.
Try $$f(x) = \sum_{n=1}^\infty (1-1/n) x_n^2$$ Note that $f(x) < 1$ for all $x \in \overline{B_1(0)}$, and you can get arbitrarily close to $1$...
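To see why this works, note $f(e_n)=1-1/n$ on the standard basis vectors $e_n$, so the supremum over the ball is $1$; yet $f(x)=\sum_n(1-1/n)x_n^2<\sum_n x_n^2\le 1$ for every nonzero $x$ in the ball, so the value $1$ is never attained. A small numerical sketch of the first point (my addition):

```python
# f(e_n) = 1 - 1/n, where e_n is the n-th standard basis vector of l^2.
values = [1 - 1 / n for n in range(1, 10001)]
all_below_one = all(v < 1 for v in values)
increasing = all(u < v for u, v in zip(values, values[1:]))
sup_gap = 1 - max(values)     # how far the best value is from the sup = 1
```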
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show that the following sequence converges. Please Critique my proof. The problem is as follows: Let $\{a_n\}$ be a sequence of nonnegative numbers such that $$ a_{n+1}\leq a_n+\frac{(-1)^n}{n}. $$ Show that $a_n$ converges. My (wrong) proof: Notice that $$ |a_{n+1}-a_n|\leq \left|\frac{(-1)^n}{n}\right|\leq\frac{1}{n} $$ and since it is known that $\frac{1}{n}\rightarrow 0$ as $n\rightarrow \infty$, we see that we can make $|a_{n+1}-a_n|$ arbitrarily small. Thus, $a_n$ converges. My question: This is a question from a comprehensive exam I found and am using to review. Should I argue that we should select $N$ so that $n>N$ implies $\left|\frac{1}{n}\right|<\epsilon$ as well? Notes: Currently working on the proof.
Consider $b_n = a_n + \sum_{k=1}^{n-1} \frac{(-1)^{k-1}}{k}$. Then $$ b_{n+1} = a_{n+1} + \sum_{k=1}^{n} \frac{(-1)^{k-1}}{k} \leq a_n + \frac{(-1)^n}{n} + \sum_{k=1}^{n} \frac{(-1)^{k-1}}{k} = b_n, $$ which shows that $(b_n)$ is non-increasing. Moreover, since $\sum_{k=1}^{\infty} \frac{(-1)^{k-1}}{k}$ converges by alternating series test and $(a_n)$ is non-negative, it follows that $(b_n)$ is bounded from below. Therefore $(b_n)$ converges, and so, $(a_n)$ converges as well.
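A numerical check of this argument (my addition): take the extremal case $a_{n+1}=a_n+(-1)^n/n$ with the hypothetical start $a_1=1$. Then $b_n$ is in fact constant, and $a_n\to 1-\ln 2$.

```python
import math

a_n, s = 1.0, 0.0       # a_n, and s = sum_{k=1}^{n-1} (-1)^(k-1)/k
b_vals = []
for n in range(1, 100001):
    b_vals.append(a_n + s)            # b_n
    a_n = a_n + (-1) ** n / n         # equality case of the recurrence
    s += (-1) ** (n - 1) / n
limit_err = abs(a_n - (1 - math.log(2)))
non_increasing = all(x >= y - 1e-10 for x, y in zip(b_vals, b_vals[1:]))
drift = abs(b_vals[0] - b_vals[-1])   # b_n should stay (numerically) constant
```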
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
How fast is the height of the water in a cylindrical tank increasing? A cylindrical tank with radius $5m$ is being filled with water at a rate of $3m^{3}/\min$. How fast is the height of the water increasing. The radius $r=5m$ The rate of water is $\dfrac{dV}{dt}=3m^{3}/\min$ The height of the water in the cylinder is $h$ The volume of a cylinder is given by the formula $V=\pi r^{2}h$ Differentiating both sides I have: $\dfrac{dV}{dt}=\pi2rh\dfrac{dr}{dt}+\pi r^{2}h\dfrac{dh}{dt}$ Is this step correct? Here I am stuck since I don't know to get $h$ for height. How should I proceed?
Each quantity is a function of time except for the radius $r$ because it does not change with time (well, technically speaking, you still could think of it as a function of time, but it would be a constant function then: $r(t)=5\ m$). All those quantities are related by this expression: $$V(t)=\pi r^2 h(t)$$ This equality states that at any given time, the volume of the water in the tank as a function of time equals the height of the water in the tank as a function of time multiplied by $\pi r^2$. You know the rate at which the volume of the water in the tank is increasing and you know what $r$ is. You also know that $V(t)$ is equivalent to $\pi r^2 h(t)$. If they are equivalent, their derivatives must also be equivalent: $$V'(t)=[\pi r^2 h(t)]'=\pi r^2 h'(t)$$ Now, just solve for $h'(t)$, which is exactly what the problem is asking you to find: how fast the height of the water in the tank is increasing: $$h'(t)=\frac{V'(t)}{\pi r^2 }=\frac{3}{25\pi}\ m/min.$$
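A tiny numerical version of the same computation (my addition): with $V(t)=3t$ and hence $h(t)=V(t)/(25\pi)$, a finite-difference estimate of $h'$ matches $3/(25\pi)$.

```python
import math

r = 5.0                     # radius in meters (constant)
dV_dt = 3.0                 # m^3 / min
h = lambda t: dV_dt * t / (math.pi * r ** 2)   # height if V(t) = 3t
rate = (h(2.0 + 1e-6) - h(2.0)) / 1e-6         # finite-difference h'(t)
exact = 3 / (25 * math.pi)
err = abs(rate - exact)
```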
{ "language": "en", "url": "https://math.stackexchange.com/questions/3131936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
direct image functor $f_*$ left exact I have to apologize in advance to everybody familiar with algebraic geometry for the following question, since this might be a quite trivial problem that causes me some irritation: we consider a morphism $f: Z \to Y$ between schemes. Then the induced direct image functor $f_*$ is claimed to be left exact. The proofs I found always use the following argument: We consider an exact sequence of sheaves on $Z$ $$0 \to \mathcal{F} \to \mathcal{G} \to \mathcal{H} \to 0$$ choose an arbitrary open subset $V \subset Y$ and apply first $f_*$ and then the $\Gamma(V, -)$ functor. Since $\Gamma(V, -)$ is left exact for sheaves we obtain the exact sequence $$0 \to \mathcal{F}(f^{-1}(V)) \to \mathcal{G}(f^{-1}(V)) \to \mathcal{H}(f^{-1}(V))$$ At this point the proofs end. Why are we now done? Why is this sufficient? I thought that a sequence of sheaves is exact if and only if the induced sequence at every stalk is exact, namely we have to verify that $$0 \to (f_*\mathcal{F})_y \to (f_*\mathcal{G})_y \to (f_*\mathcal{H})_y$$ is exact at every $y \in Y$. At that moment I encounter the problem that I don't know how to explicitly calculate the stalk $(f_*\mathcal{F})_y$ of the direct image sheaf. On the other hand, exactness for all sequences of the second type seems to be much weaker than the condition of exactness on stalks as in the last one. Or are these two criteria for exactness equivalent?
Left exact on all open sets implies left exact on stalks. This follows from exactness of direct limits for categories of modules, cf. Why do direct limits preserve exactness?.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Why does wolfram answer as such in this example for surface area and volume of revolution on an area crossing the axis? Compute the volume and surface area of the solid $S$ obtained by revolving the region $R$ (pictured below) enclosed by the graphs $y=q(x)=-(2x^2-7x+3),x=1,x=2$ and $y=-1$, around the $x$-axis, which crosses the region. Volume: I can just consider the region $T$ enclosed by the graphs $y=q(x),x=1,x=2$ and the $x$-axis (instead of $y=-1$) because the overlap (Volume of revolution on an area crossing the axis), seen precisely in the inequality $|q| > |-1|$ for $x = 1$ to $2$, means that the solid $P$ from the region $T$ is identical to $S$. That is $\text{Vol}(P) = \text{Vol}(S)$ because $P=S$? Update: Wolfram (Solid P and Solid S) says $\pi + \frac{107 \pi}{15} = \text{Vol}(P) \ne \text{Vol}(S) = \frac{107 \pi}{15}$. * *I notice $\pi$ is the volume of the solid $M$ obtained by rotating the square $N$ enclosed by $y=0$ and $y=-1$ from $x=1$ to $2$. *Could it be that Wolfram interprets the calculation for the volume of Solid S as $'\text{Vol}(S)' = \text{Vol}(P) + \text{Vol}(M)$ ? *Perhaps not because Wolfram looks like it's computing $$\int_1^2 \pi |\color{red}{-1} + (-q)^2| dx = \int_1^2 \pi |-(-1)^2 + (q)^2| dx$$ So, the '$\color{red}{-1}$' is actually '$-(-1)^2$' rather than the original '$y=-1$'? Hopefully, Wolfram assumes the axis doesn't cut the interior of the region. Surface area: For surface area, do we have $\text{SA}(P) = \text{SA}(S)$ because $P=S$ also? Update: Wolfram (Solid P and Solid S) says $\text{SA}(P) = \text{SA}(S)$, but actually I think the answer should be $\text{the SA}(P)\text{ that wolfram gives} - 13 \pi$. Where does this extra $13 \pi$ come from please?
It appears that in the interpretation of Wolfram Alpha, two regions on opposite sides of the axis cancel each other out rather than simply reaffirming that the points swept by both are part of the solid. Hence as we see in the figures on the two pages you linked to, the figure that is bounded at the $x$-axis produces a solid with no holes, whereas the figure bounded by $y = -1$ produces a solid with a cylindrical hole of radius $1$ drilled through the axis of the solid. The result is even more complicated if the larger region extends farther from the $x$ axis in the positive direction for some values of $x$ and farther in the negative direction for other values of $x,$ as in this example, bounded by the line $y=-3$. I suppose one could argue whether this is a good interpretation. It could come down to this: what is the purpose of rotating a region that crosses the axis of rotation? For the surface area, Wolfram Alpha apparently counts all of the surface area of the solid, including the area at the flat "ends" of the solid, not just the parts formed by the rotation of parts of the curves $y=2x^2-7x+3$ and $y=-1.$ For Solid P, which does not have a hole drilled through it, the total surface area includes the disk of radius $2$ (area $4\pi$) at $x = 1$ and the disk of radius $3$ (area $9\pi$) at $x=2,$ contributing $13\pi$ to the total surface area of the solid. For Solid S, the hole removes a disk of radius $1$ from the disk at each flat end of the solid, which reduces the total area by $2\pi,$ but it adds the lateral area of the cylindrical hole, which increases the total area by $2\pi,$ so the final result is the same. This happens only because the length and radius of the cylindrical hole are equal; try merely increasing the radius of the hole (while still staying inside Solid P), for example using the line $y=-2$, and you will see a smaller total surface area.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Show that $\left<\mathcal{M}\right>=V$ iff $v_i$ is a linear combination of the elements of $\mathcal{M}$. I'm struggling with the following problem: Let $V$ be a finitely generated vector space over the field $F$. Let $\left( v_1, \ldots, v_n \right)$ be a basis of $V$. Let $\mathcal{M}$ be a family of vectors in $V$. Show that $\left<\mathcal{M}\right>=V$ if and only if, for $i=1,\ldots,n$; $v_i$ is a linear combination of the elements of $\mathcal{M}$. I have the following: $\Rightarrow$) Suppose $\left<\mathcal{M}\right>=V$ such that $m_i \in \mathcal{M}$ for $i=1,\ldots,n$. Which means that $\left<m_1,\ldots,m_n\right>=V$. Given that $\left( v_1, \ldots, v_n \right)$ is a basis of $V$, $\left< v_1, \ldots, v_n \right>=V=\left<m_1,\ldots,m_n\right>$. So does this mean that $v_i$ is a linear combination of the elements of $\mathcal{M}$? I think I might have some right idea for the above but it might not be sufficient. $\Leftarrow$) If I assume that $v_i$ is a linear combination of the elements of $\mathcal{M}$ for $i=1,\ldots,n$, this would mean $\lambda_1 m_1+\ldots+\lambda_n m_n=\left( v_1, \ldots, v_n \right)$ for $\lambda_1,...,\lambda_n \in F$. How should I continue to show that $\left<\mathcal{M}\right>=V$?
Do note that it is not necessary for $M$ to have $n$ elements. It is only a generating set, so all you know is that it has at least $n$ elements. Suppose $\langle M\rangle=V$. So, every element of $V$ can be written as a linear combination of elements of $M$. Now, $(v_i)_1^n$ is a basis of $V$, so each $v_i$ must be an element of $V$, so each $v_i$ can be written as a linear combination of elements of $M$. Now, conversely, suppose that each $v_i$ can be written as a linear combination of elements of $M$. So, $$v_i=\sum_{m\in M}\lambda_mm~~~\textrm{for some scalars }\lambda_m$$ Now, since $(v_i)$ is a basis of $V$, every $v\in V$ can be written as a linear combination of the $v_i$'s, so, $$v=\sum_i\alpha_i v_i=\sum_i\alpha_i\sum_{m\in M}\lambda_mm=\sum_i\sum_{m\in M}\alpha_i\lambda_m m$$ with the last equality showing that each $v\in V$ is a linear combination of the elements of $M$, so $\langle M\rangle=V$
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132239", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area between trigo function and a line How can I compute the area of the region enclosed by $y=\cos^2x$ and $y=1-\frac{2}{\pi}x$? I am just stuck on the first step, finding their intersection points. I know I can plot the graph using online resources, but is there any numerical method to obtain the intersections?
Define $f(x)=\cos^2x$ and $g(x)=1-2/\pi \cdot x$. Two intersection points can simply be found by plugging in $x=0$ and $x=\pi/2$. For the third intersection point, you may use numerical methods such as Newton-Raphson, Secant and the like. But if this is a problem on a test, which it seems to be because of the "nice" $x$-values we're getting as intersection points, you might try out other special angles and you'll observe that $x=\pi/4$ is an intersection point as well. All you're now left to do is to figure out in what intervals, what function is greater and integrate from the suitable points. Clearly, $g(x)$ has a negative slope for all $x$ and the derivative of $\cos^2x$ is $-\sin2x$ which tells us it is zero at $x=0$ and $x=\pi/2$. Plug in any value between $0$ and $\pi/2$ to figure out the sign of the slope of $f(x)$, say $x=\pi/4$, which tells you this is negative too. Plug in any values between $x=0$ and $x=\pi/4$ to figure out which function would be greater on $[0, \pi/4]$ and do the same for the interval $[\pi/4,\pi/2]$. $$\text{Area}=\int_{0}^{\pi/4}\left(\cos^2x-1+\dfrac{2}{\pi}x\right)\mathrm dx+\int_{\pi/4}^{\pi/2}\left(1-\dfrac{2}{\pi}x-\cos^2x\right)\mathrm dx $$
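A numerical evaluation of this area (my addition); the two integrals are equal by symmetry and the exact value works out to $\frac12-\frac\pi8$:

```python
import math

f = lambda x: math.cos(x) ** 2
g = lambda x: 1 - 2 * x / math.pi

def integrate(func, a, b, n=20000):
    """Composite trapezoid rule on [a, b]."""
    step = (b - a) / n
    total = 0.5 * (func(a) + func(b)) + sum(func(a + i * step) for i in range(1, n))
    return total * step

area = (integrate(lambda x: f(x) - g(x), 0, math.pi / 4)
        + integrate(lambda x: g(x) - f(x), math.pi / 4, math.pi / 2))
exact = 0.5 - math.pi / 8
err = abs(area - exact)
```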
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Closed form for these polynomials? I have a recurrence relation for these polynomials $p_i(x) $: \begin{align} p_0(x)&=0 \\ p_1(x)&=1 \\ p_{2i}(x)&=p_{2i-1}(x)-p_{2i-2}(x) \\ p_{2i+1}(x)&=xp_{2i}(x)-p_{2i-1}(x) \\ \end{align} I've been unable to find a closed form, but the recurrence is simple so it seems there might be one. However the polynomials arise in a deep context, so maybe not. Is it possible to find one? The context: Let $G=\mathbb Z_2*\mathbb Z_2$, with generators $s, t$. Let $M$ be the free module over the ring $\mathbb Z[c, d] $ of rank $2$ with basis $x_1,x_2$. Define \begin{align} s(x_1)&=-x_1 \\ t(x_2)&=-x_2 \\ s(x_2)&=cx_1+x_2 \\ t(x_1)&=x_1+dx_2 \\ \end{align} Define elements \begin{align} y_0&=x_1 \\ y_{2i+1}&=t(y_{2i}) \\ y_{2i}&=s(y_{2i-1}) \\ \end{align} Then $$y_i=p_{2\lfloor i/2\rfloor+1}(cd)x_1+dp_{2\lceil i/2\rceil}(cd)x_2$$
Writing $v_n=\begin{pmatrix} p_{2n+1} \\ p_{2n}\end{pmatrix}$, the recurrence can be written as $v_{n+1}=Av_n$ where $$A=\begin{pmatrix} x-1 & -x \\ 1 & -1\end{pmatrix}.$$ So, we have $v_n=A^nv_0$ for all $n$, and the question is just about computing powers of the matrix $A$. This can be done by diagonalizing: the characteristic polynomial of $A$ is $q(t)=t^2-(x-2)t+1$. Picking a root $r$ of $q$ over an algebraic closure of $\mathbb{Q}(x)$, then $r$ and $s=x-2-r$ are the eigenvalues of $A$ with eigenvectors $\begin{pmatrix}r+1\\ 1\end{pmatrix}$ and $\begin{pmatrix}s+1\\ 1\end{pmatrix}$. So we can write $$A=B\begin{pmatrix} r & 0 \\ 0 & s\end{pmatrix}B^{-1}$$ where $B=\begin{pmatrix} r+1 & s+1 \\ 1 & 1\end{pmatrix}$. If I have not screwed up the calculation, this gives the following "explicit" formula for $A^n$: $$A^n=\frac{1}{r-s}\begin{pmatrix} r^n(r+1)-s^n(s+1) & (r+1)(s+1)(s^n-r^n) \\ r^n-s^n & s^n(r+1) - r^n(s+1)\end{pmatrix}.$$ Applying this to $v_0=\begin{pmatrix} 1 \\ 0\end{pmatrix}$ we get $$p_{2n+1}=\frac{r^n(r+1)-s^n(s+1)}{r-s}$$ and $$p_{2n}=\frac{r^n-s^n}{r-s}.$$ Note that by the quadratic formula, we can write $$r=\frac{x-2+\sqrt{x^2-4x}}{2}$$ and $$s=\frac{x-2-\sqrt{x^2-4x}}{2}$$ and so substituting these into the formulas above gives a closed form for $p_{2n+1}$ and $p_{2n}$ in terms of $x$. Or, for an explicit formula without any radicals, we can expand out $r^n$ and $s^n$ to get $$p_{2n}=\frac{1}{2^{n-1}}\sum_{0\leq 2k<n}\binom{n}{2k+1}(x^2-4x)^k(x-2)^{n-2k-1}$$ and then $p_{2n+1}$ is just $p_{2n}+p_{2n+2}$.
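A quick check of these closed forms against the recurrence at a sample value, say $x=5$ (so $r,s=(3\pm\sqrt5)/2$) (my addition):

```python
import math

x = 5.0                                     # sample parameter value
r = (x - 2 + math.sqrt(x ** 2 - 4 * x)) / 2
s = (x - 2 - math.sqrt(x ** 2 - 4 * x)) / 2

p = [0.0, 1.0]                              # p_0, p_1; build p_2 .. p_20
while len(p) < 21:
    i = len(p)
    # even index: p_i = p_{i-1} - p_{i-2}; odd index: p_i = x p_{i-1} - p_{i-2}
    p.append(p[-1] - p[-2] if i % 2 == 0 else x * p[-1] - p[-2])

even_err = max(abs(p[2 * n] - (r ** n - s ** n) / (r - s))
               for n in range(10))
odd_err = max(abs(p[2 * n + 1] - (r ** n * (r + 1) - s ** n * (s + 1)) / (r - s))
              for n in range(10))
```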
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
How can the correct form of the partial fractions decomposition be found for arbitrary rational functions? What is the reasoning or intuition that leads to the assumption that $$r(x) =\frac{x^2 + 2}{ (x+2)(x-1)^2}$$ can be expressed as $$r(x) = \frac{A}{x-1} + \frac{B}{(x-1)^2} + \frac{C}{x+2}$$ (For the sake of context, this problem arose when trying to solve an integral by the method of partial fractions.) Moreover, how can the correct form of the partial fractions decomposition be found for other rational functions?
Start with the rational function $\frac{N(x)}{D(x)},$ where $N(x)$ and $D(x)$ are polynomials over whichever field we happen to be working in (such as the real numbers or complex numbers) and the degree of $N(x)$ is less than the degree of $D(x).$ Consequences of Bezout's Identity Suppose we can factor $D(x) = P_1(x)P_2(x),$ where $P_1(x)$ and $P_2(x)$ have no common factor (and therefore no common root). Then by Bezout's theorem for polynomials (aka Bezout's identity for polynomials), there are polynomials $F_1(x)$ and $F_2(x)$ such that $F_1(x)P_1(x) + F_2(x)P_2(x) = 1.$ (I believe the application of Bezout's identity here is why calculus books give the technique of partial fraction decomposition without proof. Bezout's identity comes from abstract algebra, which normally is not taught until you've had at least a couple of years of calculus; you wouldn't normally see it before university, and even then you'd probably only see it if you majored in math. Personally, I think it's a shame the curriculum is sequenced this way--I spent most of my time in first-year calculus griping about why we needed to memorize all that ugly ____, and only a few years later, when I finally got to the upper level courses, realized it was actually beautiful and made perfect sense--but that's enough ranting for one answer.) By polynomial division we also have \begin{align} N(x)F_1(x) &= Q_2(x)P_2(x) + R_2(x), \\ N(x)F_2(x) &= Q_1(x)P_1(x) + R_1(x) \end{align} where the degree of $R_i(x)$ is less than the degree of $P_i(x).$ Therefore \begin{align} N(x) &= N(x)(F_1(x)P_1(x) + F_2(x)P_2(x)) \\ &= N(x)F_1(x)P_1(x) + N(x)F_2(x)P_2(x) \\ &= Q_2(x)P_1(x)P_2(x) + P_1(x)R_2(x) + Q_1(x)P_1(x)P_2(x) + P_2(x)R_1(x) \\ &= (Q_1(x) + Q_2(x))P_1(x)P_2(x) + P_1(x)R_2(x) + P_2(x)R_1(x). 
\end{align} Since $\deg(R_1(x)) < \deg(P_1(x))$ and $\deg(R_2(x)) < \deg(P_2(x)),$ it follows that $\deg(P_2(x)R_1(x)) < \deg(P_1(x)P_2(x))$ and $\deg(P_1(x)R_2(x)) < \deg(P_1(x)P_2(x)).$ Since we must also have $\deg(N(x)) < \deg(P_1(x)P_2(x)),$ we must have $\deg((Q_1(x) + Q_2(x))P_1(x)P_2(x)) < \deg(P_1(x)P_2(x)),$ which is possible only if $Q_1(x) + Q_2(x) = 0.$ Therefore we can more simply write $$ N(x) = P_1(x)R_2(x) + P_2(x)R_1(x).$$ Therefore \begin{align} \frac{N(x)}{D(x)} &= \frac{P_1(x)R_2(x) + P_2(x)R_1(x)}{P_1(x)P_2(x)} \\ &= \frac{R_2(x)}{P_2(x)} + \frac{R_1(x)}{P_1(x)}. \tag1 \end{align} Taking out a first-degree factor To apply this to partial fraction decomposition, if $x - a$ divides $D(x)$ we find the greatest power of $x - a$ that divides $D(x).$ Suppose that this is the $n$th power. Set $P_1(x) = (x - a)^n$ and $P_2(x) = \frac{D(x)}{(x - a)^n}.$ Then $P_1(x)$ and $P_2(x)$ have no common factor, and the result $(1)$ above says that $$ \frac{N(x)}{D(x)} = \frac{R_2(x)}{P_2(x)} + \frac{R_1(x)}{(x - a)^n} $$ where $\deg(R_2(x)) < \deg(P_2(x))$ and $\deg(R_1(x)) < n = \deg((x - a)^n).$ Taking out an irreducible quadratic factor If we are doing real analysis and don't allow polynomials to have complex coefficients, then $D(x)$ might have a factor of the form $x^2 + bx + c$ that cannot be factored into first-degree polynomials (that is, it is irreducible). 
In that case, if the highest power of $x^2 + bx + c$ that divides $D(x)$ is the $m$th power, then we can write $P_1(x) = (x^2 + bx + c)^m$ and $P_2(x) = \frac{D(x)}{(x^2 + bx + c)^m}.$ It follows that $P_1(x)$ and $P_2(x)$ have no common factor, and therefore (according to $(1)$ again) $$ \frac{N(x)}{D(x)} = \frac{R_2(x)}{P_2(x)} + \frac{R_1(x)}{(x^2 + bx + c)^m} $$ where $\deg(R_2(x)) < \deg(P_2(x))$ and $\deg(R_1(x)) < 2m = \deg((x^2 + bx + c)^m).$ Completing the decomposition Provided that we are able to find all the first- and second-degree factors of the polynomial $D(x),$ we can repeatedly take out either first-degree factors or irreducible quadratic factors from $D(x)$ and then from the polynomial $P_2(x)$ that we get after taking out the previous factor, until we end up with a $P_2$ that is itself a first-degree polynomial or an irreducible quadratic. We end up with something that looks like this: $$ \frac{N(x)}{D(x)} = \frac{S_1(x)}{(x - a_1)^{n_1}} + \cdots + \frac{S_h(x)}{(x - a_h)^{n_h}} + \frac{T_1(x)}{(x^2 + b_1x + c_1)^{m_1}} + \cdots + \frac{T_1(x)}{(x^2 + b_kx + c_k)^{m_k}}. $$ The final step of the proof is to show that if the degree of $U(x)$ is less than the degree of $(V(x))^p,$ then $$ \frac{U(x)}{(V(x))^p} = \frac{U_1(x)}{V(x)} + \frac{U_2(x)}{(V(x))^2} + \cdots + \frac{U_p(x)}{(V(x))^p} $$ where the degree of each $U_i(x)$ is less than the degree of $V(x).$ We can get this result by dividing $U(x)$ by $V(x)$ (the remainder is $U_p(x)$), then dividing the quotient of that division by $V(x)$ again (the remainder is $U_{p-1}(x)$), and so on repeatedly until we get a quotient whose degree is less than that of $V(x),$ which will happen after at most $p-1$ divisions. That's why, when you have a factor that occurs more than once in the factorization of $D(x)$, you get a term for each power of that factor up to the highest power that divides $D(x).$
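For the example at the top, the decomposition works out to $A=\frac13$, $B=1$, $C=\frac23$ (solve $x^2+2=A(x-1)(x+2)+B(x+2)+C(x-1)^2$ at $x=1$, at $x=-2$, and by comparing $x^2$ coefficients). A pointwise numerical check (my addition):

```python
r = lambda x: (x ** 2 + 2) / ((x + 2) * (x - 1) ** 2)
A, B, C = 1 / 3, 1.0, 2 / 3
decomp = lambda x: A / (x - 1) + B / (x - 1) ** 2 + C / (x + 2)

samples = [-5.0, -0.7, 0.0, 0.5, 3.0, 10.0]   # avoid the poles x = 1, x = -2
max_gap = max(abs(r(x) - decomp(x)) for x in samples)
```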
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Where $a$ is introduced in $a + bi$ when solving polynomials using complex numbers Trying to wrap my head around complex numbers and almost there. I am looking for problems that show me how to introduce $i$ into an equation. What I'm finding a lot of is "Simplify 2i + 3i = (2 + 3)i = 5i", where the $i$ has already been introduced somehow magically. The only primitive examples I've tried so far is "Simplify $\sqrt{-9}$" and by the definition of $i = \sqrt{-1}$ we get $3i$. That part makes sense for now. But it's just $3i$, or $bi$ from the equation, there is no $a$. I don't see in what situations you get the $a$ and how you know how/where to add it. For example, on Wikipedia they show: In this case the solutions are −1 + 3i and −1 − 3i, as can be verified using the fact that i2 = −1: ${\displaystyle ((-1+3i)+1)^{2}=(3i)^{2}=\left(3^{2}\right)\left(i^{2}\right)=9(-1)=-9,}$ ${\displaystyle ((-1-3i)+1)^{2}=(-3i)^{2}=(-3)^{2}\left(i^{2}\right)=9(-1)=-9.}$ I am not skilled enough yet to know how they solved this, but I am wondering if they are saying $−1 + 3i$ is the form $a + bi$, or that $-1$ is separate. Wondering if one could start off with a simple polynomial equation without any presence of $i$, and then show how you introduce $i$ in two different cases/examples: * *Where it's just $bi$, not $a + bi$ *Where it's $a + bi$ That way it should help explain how to introduce $i$ into a polynomial equation. I'm imagining something like, or something more complicated if this doesn't have the $a$: $(x + 3)^2 = -10$ I've started by doing: $x + 3 = \sqrt{-10} = \sqrt{10}i$ $x = -3 + \sqrt{10}i$ Not sure if this means that $-3 + \sqrt{10}i$ is the complex number, or just $\sqrt{10}i$. Not sure if you need to be adding complex numbers to both sides, etc.
A complex number is always of the form $a+bi$, where $a,b\in \mathbb R$ but $a$ and/or $b$ can be zero. Also, when you were solving the equation $(x+3)^2=-10$ you forgot the second solution $x=-3-\sqrt{10} i$. Remember that $-10=(\pm \sqrt{10}i)^2$
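Both solutions can be checked directly with Python's built-in complex type (my addition):

```python
import cmath

# The asker's equation (x + 3)^2 = -10 has the two roots -3 +/- sqrt(10) i.
roots = [-3 + cmath.sqrt(10) * 1j, -3 - cmath.sqrt(10) * 1j]
errs = [abs((x + 3) ** 2 - (-10)) for x in roots]

# The Wikipedia example quoted in the question: (x + 1)^2 = -9.
wiki_errs = [abs((x + 1) ** 2 - (-9)) for x in (-1 + 3j, -1 - 3j)]
```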
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Integral inequality of length of curve Let $f:\mathbb{R}\to \mathbb{R}$ be a continuously differentiable function. Prove that for any $a,b\in \mathbb{R}$ $$\left (\int_a^b\sqrt{1+(f'(x))^2}\,dx\right)^2\ge (a-b)^2+(f(b)-f(a))^2.$$ Source: This is from TIFR GS stage 2. I think the mean value theorem kills it but I can't make it work; I even tried the Cauchy-Schwarz inequality but reached no conclusion. Geometrically it's true, since the arc length of a curve with endpoints $(a,f(a))$ and $(b,f(b))$ is greater than or equal to the distance between those two points.
An easy way to do this is to note that since distance is invariant under rotations, without loss of generality, we may assume that $f(a)=f(b).$ And now, since $\sqrt{1+(f'(x))^2}\ge 1$ on $[a,b]$, with equality exactly where $f'(x)=0$, the function in $C^1([a,b])$ that minimizes the integral is the one that minimizes the integrand pointwise, and clearly this happens when $f'(x)=0$ for all $x\in [a,b]$; that is, when $f$ is constant on $[a,b].$ Then the integral is at least $b-a$, and since $f(b)-f(a)=0$, the result follows. If you want to do this without the wlog assumption, then argue as follows: Let $\epsilon>0,\ f\in C^1([a,b])$ and choose a partition $P=\{a,x_1,\cdots,x_{n-2},b\}$. The length of the polygonal path obtained by joining the points $(x_i,f(x_i))$ is $\sum_i \sqrt{(\Delta x_i)^2+(\Delta y_i)^2}$ and by the triangle inequality this is $\ge \sqrt{(b-a)^2+(f(b)-f(a))^2}$. (You can make this precise by using an induction argument on $n$.) And this is true for $\textit{any}$ partition $P$. But the above sum is also $\sum_i\sqrt{1+\left(\frac{\Delta y_i}{\Delta x_i}\right)^2}\,\Delta x_i$ and now, upon applying the MVT, we see that what we have is a Riemann sum for $\sqrt{1+(f'(x))^2}$. To finish, choose $P$ such that $\left |\int^b_a\sqrt{1+(f'(x))^2}\,dx- \sum_i\sqrt{1+(f'(c_i))^2}\,\Delta x_i \right |<\epsilon $. (The $c_i$ are the numbers $x_{i-1}<c_i<x_{i}$ obtained from the MVT.) Then, $\sqrt{(b-a)^2+(f(b)-f(a))^2}\le \sum_i\sqrt{1+(f'(c_i))^2}\,\Delta x_i<\int^b_a\sqrt{1+(f'(x))^2}\,dx+\epsilon.$ Since $\epsilon$ is arbitrary, squaring gives the result. For a slick way to do this, use a variational argument: assuming a minimum $f$ exists, consider $f+t\phi$ where $t$ is a real parameter and $\phi$ is an arbitrary function in $C^1([a,b])$ vanishing at the endpoints. Substitute it into the integral: $l(t)=\int_a^b \sqrt{1+(f'+t\phi')^2}\,dx$. Since $f$ minimizes this integral, the derivative of $l$ at $t=0$ must be equal to zero. Then, $0=l'(0)= \int_a^b \dfrac{f'\phi'}{\sqrt{1+(f')^2}}\,dx$. After an integration by parts, we get $\dfrac{f'}{\sqrt{1+(f')^2}} = c$ for some constant $c\in \mathbb R,$ from which it follows that $f'$ is constant.
And this means, of course, that the graph of $f$ is a straight line connecting $(a,f(a))$ and $(b,f(b)).$ The desired inequality follows.
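As a purely numerical sanity check (not part of any of the proofs above), one can compare a Riemann sum for the arc length with the chord length for a sample function; the choices of $f$, $a$, $b$ below are arbitrary:

```python
import math

# Sanity check (not a proof): for a sample smooth f, the squared arc
# length of its graph over [a, b] should dominate (b-a)^2 + (f(b)-f(a))^2.
# The choices f = sin, [a, b] = [0, 2] are arbitrary.

def arc_length(f_prime, a, b, n=100000):
    # midpoint Riemann sum for the integral of sqrt(1 + f'(x)^2)
    h = (b - a) / n
    return h * sum(math.sqrt(1 + f_prime(a + (i + 0.5) * h) ** 2)
                   for i in range(n))

a, b = 0.0, 2.0
L = arc_length(math.cos, a, b)          # f = sin, so f' = cos
chord_sq = (b - a) ** 2 + (math.sin(b) - math.sin(a)) ** 2
print(L ** 2 >= chord_sq)  # True
```

The inequality holds with room to spare here, since $\sin$ is far from the straight line realizing equality.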
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
Absolute convergence of Fourier series implies uniform convergence? I've read many statements in my textbook that include something along the lines of "the Fourier series of $f$ converges absolutely, and hence uniformly to $f$". I'm not sure why this is true. If the Fourier series converges absolutely, then it converges. That is, $ \lim_{n \to \infty} S_n(\theta) $ converges for each $\theta$. Why does it follow then that this limit is uniform?
Let $$S_n(\theta)=\sum_{|k|\leq n}c_ke^{2i\pi k\theta}$$ be the partial Fourier series, and suppose the series converges absolutely, i.e. $$\sum_{k\in\mathbb Z}|c_k|<+\infty$$ Then the partial sums are uniformly Cauchy: let $\varepsilon>0$; there exists $N>0$ such that for all $n>m>N$, $$\sum_{m<|k|\leq n}|c_k|<\varepsilon$$ So for all $\theta$, $$\left|S_n(\theta)-S_m(\theta)\right| \leq \sum_{m<|k|\leq n}|c_k| < \varepsilon$$ So $\{S_n\}_{n\in\mathbb N}$ is uniformly Cauchy, and hence it converges uniformly.
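A finite numerical illustration of the uniform Cauchy estimate, with assumed coefficients $c_k = 1/(1+k^2)$ (any absolutely summable choice would do): the sup-norm gap between two partial sums is bounded by the coefficient tail, uniformly in $\theta$.

```python
import cmath

def c(k):
    return 1.0 / (1 + k * k)        # absolutely summable coefficients (a toy choice)

def S(n, theta):
    # partial Fourier sum over |k| <= n
    return sum(c(k) * cmath.exp(2j * cmath.pi * k * theta)
               for k in range(-n, n + 1))

m, n = 20, 50
thetas = [i / 200 for i in range(200)]
sup_gap = max(abs(S(n, t) - S(m, t)) for t in thetas)
tail = 2 * sum(c(k) for k in range(m + 1, n + 1))   # sum over m < |k| <= n
print(sup_gap <= tail + 1e-12)  # True: the gap is controlled by the tail
```

The bound is independent of $\theta$, which is exactly what uniform convergence needs.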
{ "language": "en", "url": "https://math.stackexchange.com/questions/3132902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help with an application of Young's inequality I am reading through a set of notes about concentration of Gaussian measure, and on page 56, they make the following claim that I am failing to see the proof of: Now we estimate the second summand at the right hand side. Using Young's inequality with $y = e^{\lambda \tilde Z} \geq 0$ and $x = \Sigma^2 - (e-1)E\Sigma^2$, we get \begin{align*}E(\Sigma^2e^{\lambda\tilde Z} &= (e-1)(E\Sigma^2)Ee^{\lambda \tilde Z}+E\left[(\Sigma^2 - (e-1)E\Sigma^2)e^{\lambda \tilde Z}\right]\\ & \leq (e-1)(E\Sigma^2)Ee^{\lambda \tilde Z} +\lambda E(\tilde Ze^{\lambda \tilde Z})-Ee^{\lambda \tilde Z} + E e^{\Sigma^2 - (e-1)E\Sigma^2}\end{align*} This inequality is baffling to me. I'm assuming they mean Young's product inequality, but stated in its usual form, Young's product inequality does not allow us to put "$x$" in an exponent. The best I could think of was using the fact that $\require{enclose} \enclose{horizontalstrike}{\frac{x^p}{p} = \frac{e^{p\log x}}{p} = \frac{x e^p}{p}}$ so that if we chose $p = x$, we could get from Young's inequality: $$\require{enclose} \enclose{horizontalstrike}{x\cdot y \leq e^{x} +y^{\frac{x}{x-1}}\cdot \frac{x-1}{x}}$$ for $\require{enclose} \enclose{horizontalstrike}{x,y \geq 0}$, but even then, it seems to me that the second term becomes quite intractable. Where are the authors getting this inequality from? Note: A lot of variables in the above quote are defined in the preceding parts of the proof, so I may have left out important context for this problem that can be found in the link.
They use a generalized version of Young's inequality (see here, the last example under "Generalization using Fenchel-Legendre transform"). In particular, given two real numbers $x > 0$, $y$, one gets: $$xy \leq x \ln x - x +e^y.$$ This version of Young's inequality was already used in the document you linked, at the bottom of page 43. Note by the way that there is a mistake in your computation ($e^{p \ln x} \neq x e^p$).
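A quick numerical spot-check of that generalized inequality (random samples, not a proof; equality holds at $y=\ln x$):

```python
import math, random

# Spot-check of the generalized Young inequality
#   x*y <= x*ln(x) - x + e^y   for x > 0 and real y.
# A small tolerance absorbs floating-point rounding.

random.seed(0)
ok = all(
    x * y <= x * math.log(x) - x + math.exp(y) + 1e-6
    for x, y in ((random.uniform(1e-6, 50.0), random.uniform(-10.0, 10.0))
                 for _ in range(10000))
)
print(ok)  # True

# equality case: x = 2, y = ln 2
lhs = 2 * math.log(2)
rhs = 2 * math.log(2) - 2 + math.exp(math.log(2))
print(abs(lhs - rhs) < 1e-12)  # True
```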
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Non-nesting action on Real Trees Background Let $G$ be a group acting by homeomorphisms on an $\mathbf{R}$-tree $T$. The element $ g \in G $ is elliptic if the fixed point set $\operatorname{Fix}(g)$ is non-empty. The group action is non-nesting if no $ g \in G $ maps an arc properly into itself. Claim Let $ g \in G $ be elliptic. If $ x \not \in \operatorname{Fix}(g)$, then $\operatorname{Fix}(g) \cap [x, gx]$ consists of precisely one point. Available Proof Given $x \not \in \operatorname{Fix}(g)$, choose $ x_0 \in \operatorname{Fix}(g)$, and define $u$ by $ [x_0, x] \cap \operatorname{Fix}(g) = [x_0, u]$. Non-nesting implies $\operatorname{Fix}(g) \cap [x, gx] = \{ u \} $. My doubt Why does non-nesting imply that $\operatorname{Fix}(g) \cap [x, gx] = \{ u \}$?
First, the property of being at most a singleton is true without assumption on $g$: Let $T$ be an $\mathbf{R}$-tree. Then for an arbitrary self-homeomorphism $g$ of $T$ and $x\in T$, the intersection $\mathrm{Fix}(g)\cap [x,gx]$ is at most 1 point. Indeed, if $y,z$ are fixed points in $[x,gx]$, say with $y\in [x,z]$, then $y=gy\in [gx,gz]=[z,gx]$, which forces $y=z$. Next: If $g$ is elliptic and both $g$ and $g^{-1}$ are non-nesting then this is indeed a singleton. Let $x_0$ be a fixed point. Let $v$ be the median point of the triple $\{x_0,x,gx\}$. Since both $g$ and $g^{-1}$ are non-nesting, none of $[x_0,v]$ and $[x_0,gv]$ is properly contained in the other. Hence, $gv=v$ and $v\in [x,gx]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivative of $tr(A(C \circ X)BB'(C' \circ X')A')$ Can we differentiate this function: $tr(A(C \circ X)BB'(C' \circ X')A')$ w.r.t $X$? Also, $tr(A(C \circ X)Y)$ w.r.t $X$.
The Frobenius product is a convenient notation for the trace $\,A:B = {\rm Tr}(A^TB)$ Define the matrices $$\eqalign{ Y &= C\circ X &\implies dY &= C\circ dX \cr W &= AYB &\implies dW &= A\,dY\,B \cr }$$ Write the function in terms of these new variables. Then calculate its differential and gradient. $$\eqalign{ \phi &= W:W \cr d\phi &= W:dW + dW:W \cr &= 2W:dW \cr &= 2W:(A\,dY\,B) \cr &= 2(A^TWB^T):dY \cr &= 2(A^TWB^T):(C\circ dX) \cr &= 2\big(C\circ (A^TWB^T)\big):dX \cr &= 2\Big(C\circ\big(A^TA(C\circ X)BB^T\big)\Big):dX \cr \frac{\partial\phi}{\partial X} &= 2C\circ\big(A^TA(C\circ X)BB^T\big) \cr }$$
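To double-check the gradient formula numerically (a sanity check with arbitrary matrix sizes, assuming NumPy is available):

```python
import numpy as np

# Finite-difference check of the gradient formula
#   dphi/dX = 2 * C o (A^T A (C o X) B B^T),
# where "o" is the Hadamard product and
#   phi(X) = tr(A(C o X) B B^T (C o X)^T A^T) = ||A (C o X) B||_F^2.
# All sizes below are arbitrary test choices.

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 6))
X = rng.standard_normal((4, 6))
B = rng.standard_normal((6, 5))

def phi(X):
    W = A @ (C * X) @ B          # C * X is elementwise (Hadamard) product
    return np.sum(W * W)         # Frobenius norm squared equals the trace

G = 2 * C * (A.T @ A @ (C * X) @ B @ B.T)

# central finite differences, entry by entry
eps = 1e-6
G_num = np.zeros_like(X)
for i in range(X.shape[0]):
    for j in range(X.shape[1]):
        E = np.zeros_like(X)
        E[i, j] = eps
        G_num[i, j] = (phi(X + E) - phi(X - E)) / (2 * eps)

print(np.allclose(G, G_num, rtol=1e-4, atol=1e-4))  # True
```

Since $\phi$ is quadratic in $X$, the central difference is exact up to rounding, so the agreement is very tight.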
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133272", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\lim\limits_{n \to \infty} \int_{1-\epsilon}^1 \ln(1+x+...+x^{n-1})dx$ Evaluate $$\lim_{n \to \infty} \int_{1-\epsilon}^1 \ln(1+x+...+x^{n-1})dx$$ for $0<\epsilon<1$. The problem is at the point $x=1$, because there the formula $$1+x+...+x^{n-1}=\frac{1-x^n}{1-x}$$ can't be applied. I thought of using the mean value theorem for integrals: there would be $c_n \in [1-\epsilon,1]$ such that $$\int_{1-\epsilon}^1 \ln(1+x+...+x^{n-1})dx=\epsilon\ln(1+c_n+...+c_n^{n-1})$$ but taking limits gives an indeterminate form. I do not know Lebesgue's dominated convergence theorem. I would appreciate an elementary proof.
You can replace the upper limit with $k<1$ and apply $\lim_{k\rightarrow 1}$ to the whole integral; then the formula can be applied. So the integral becomes $$\int_{c}^{k} \big(\log(1-x^n) - \log (1-x)\big)\,dx$$ Since $x<1$ on the given interval, $\log(1-x^n)$ tends to $0$ as $n$ tends to infinity. So the limit is \begin{align*} & =-\int_{c}^{k} \log (1-x)\,dx \\ & =\Big[(1-x)\log(1-x)-(1-x)\Big]_{c}^{k} \\ \end{align*} that is, $$=(1-k)\log(1-k)-(1-k)-\big((1-c)\log(1-c)-(1-c)\big)$$ The limit of the first two terms is $0$ as $k\rightarrow 1$. What remains is $$=(1-c)(1-\log(1-c))$$ Now $c=1-\varepsilon$ as in the question, so the answer is $$=\varepsilon(1-\log\varepsilon)$$
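A quick numerical sanity check of the final value (the choices of $\varepsilon$ and $n$ below are arbitrary); the midpoint rule conveniently avoids the troublesome endpoint $x=1$:

```python
import math

# Numerically check that the integral approaches eps*(1 - ln(eps)) for
# large n; eps = 0.3 and n = 2000 are arbitrary test choices.

def I(n, eps, steps=200000):
    a = 1 - eps
    h = eps / steps
    total = 0.0
    for i in range(steps):
        x = a + (i + 0.5) * h
        # log(1 + x + ... + x^(n-1)) = log((1 - x^n)/(1 - x)) for x != 1
        total += math.log((1 - x ** n) / (1 - x))
    return total * h

eps = 0.3
limit = eps * (1 - math.log(eps))
val = I(2000, eps)
print(abs(val - limit) < 1e-2)  # True
```

The integrand increases with $n$, so the convergence to the limit is monotone from below.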
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the number of ways to express 1050 as sum of consecutive integers I have to solve this task: Find the number of ways to present $1050$ as sum of consecutive positive integers. I was thinking if factorization can help there: $$1050 = 2 \cdot 3 \cdot 5^2 \cdot 7 $$ but I am not sure how to use that information (if there is a sense) example I can solve something similar but on smaller scale: \begin{align} 15 &= 15 \\ &= 7+8 \\ &=4+5+6 \\ &= 1+2+3+4+5 \end{align} ($4$ ways)
For the sum of consecutive integers we may use the formula for the sum of an arithmetic sequence $$\sum_{k=1}^na_k=\frac n2(a_1+a_n)$$ So \begin{aligned} 1050 &= \frac n2(a_1+a_n) \\ 2100 &= n(a_1+a_n) \\ 2100&= n(a_1+a_1+(n-1)) \\ 2^2\cdot3\cdot5^2\cdot7&= n(2a_1+n-1) \end{aligned} Now: * *If $n$ is even, then $(2a_1+n-1)$ is odd, so $$n=2^2\cdot3^x\cdot5^y\cdot7^z$$ where $x \in \{0,1\},\; y \in \{0,1,2\}, \;z \in \{0,1\}$, so there are $2 \times 3 \times 2 = 12$ possibilities for $n$. *If $n$ is odd, then similarly $$n=3^x\cdot5^y\cdot7^z$$ and we obtain another $12$ possibilities for $n$. So there are $24$ solutions altogether, half of them, i.e. $\color{red}{12}$, for only positive integers, because for positive integers we must have $a_1 \ge 1$, and consequently $(2a_1+n-1) >n$, so in the product $n(2a_1+n-1)$ the first factor must be smaller than the second one.
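A brute-force cross-check of the count (including the one-term sum $N$ itself, as in the example with $15$):

```python
# Count the ways to write N as a + (a+1) + ... + (a+n-1) with n >= 1 and
# a >= 1, i.e. the solutions of n*(2a + n - 1) = 2N.

def ways(N):
    count = 0
    n = 1
    while n * (n + 1) <= 2 * N:          # a >= 1 forces n(n+1) <= 2N
        if (2 * N) % n == 0:
            twice_a = (2 * N) // n - n + 1
            if twice_a > 0 and twice_a % 2 == 0:
                count += 1
        n += 1
    return count

print(ways(15), ways(1050))  # 4 12
```

This matches the example ($15$ has $4$ representations) and confirms the count of $12$ for $1050$.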
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133472", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
4 Spheres all touching each other?? If there are 4 spheres all touching each other and 3 of them have diameters 4, 6 and 12 what is the diameter of the fourth one? I imagine it like 3 balls on a flat table touching each other and then we are supposed to put another one on top of them but in my imagination the top sphere could be any size basically right?
Not any size. If the fourth sphere is sufficiently small, it can fit in the hole in the middle, also resting on the table, without touching any of the other three spheres. I’m guessing this problem is asking for the minimum diameter of the fourth sphere which guarantees contact with all of the other three.
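An exploratory computation, not part of the answer above: if one adds the extra constraint that the fourth sphere, like the other three, also touches the table, the configuration becomes rigid, and the Soddy–Gosset relation for five mutually tangent spheres in 3D, $(k_1+\dots+k_5)^2 = 3(k_1^2+\dots+k_5^2)$, with the plane counted as a sphere of curvature $0$, pins down the two candidate curvatures (the interpretation of the problem as requiring table tangency is an assumption here):

```python
import math

# Exploratory sketch (assumes the fourth sphere also touches the table):
# Soddy's relation (k1+...+k5)^2 = 3*(k1^2+...+k5^2), with the plane as
# curvature 0, gives a quadratic in the unknown curvature k.

k1, k2, k3, k_plane = 1 / 2, 1 / 3, 1 / 6, 0.0   # radii 2, 3, 6, plus the table
s = k1 + k2 + k3 + k_plane
q = k1 ** 2 + k2 ** 2 + k3 ** 2 + k_plane ** 2
# (s + k)^2 = 3*(q + k^2)  <=>  2k^2 - 2sk + (3q - s^2) = 0
disc = math.sqrt(4 * s * s - 8 * (3 * q - s * s))
k_hi = (2 * s + disc) / 4     # larger curvature -> smaller sphere
k_lo = (2 * s - disc) / 4     # smaller curvature -> larger sphere
print(2 / k_hi, 2 / k_lo)     # the two candidate diameters
```

One root is a small sphere nestled in the gap between the three, the other a large sphere resting on top of them and the table; without the table constraint there is a whole one-parameter family, matching the intuition in the question.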
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Determine whether the given map $\phi$ makes these two binary structures isomorphic $\langle \mathbb{Q}, + \rangle$ with $\langle \mathbb{Q}, + \rangle$ where $\phi(x) = x + 1$ for $x \in \mathbb{Q}$ In order to show that two binary structures are isomorphic, I need to check whether $\phi$ is one-to-one, onto, and a homomorphism. a) one-to-one check (pass) $$ \begin{align} \phi(x) &= \phi(y) \\ x + 1 &= y + 1 \\ x &= y \end{align} $$ (subtracting $1$ from both sides in the last step) b) onto check (pass) $$\phi(x+1) = (x+1) + 1 = x + 2 \textrm{ for } x \in \mathbb{Q}$$ c) homomorphism check (fail?) $$ \phi(x + y) = (x + y) + 1 \neq \phi(x) + \phi(y) = (x+1) + (y+1) = x + y + 2$$ I think that the homomorphism check failed here and that means these two binary structures are not isomorphic for $\phi$. Did I come to the right conclusion? Am I missing any considerations in any of the checks?
Your check for a) is fine, though you might want to start with "For all $x,y\in\Bbb{Q}$...". Your check for b) is incomplete, and has a typo (you write $\Bbb{Z}$ instead of $\Bbb{Q}$). It would suffice to say For all $x\in\Bbb{Q}$ we have $x=\phi(x-1)$, so $\phi$ is surjective. Your check for c) is fine, and shows that $\phi$ is indeed not an isomorphism. However, this doesn't prove that the two binary structures aren't isomorphic. In fact they are the same, so of course they are isomorphic. In hindsight it suffices to check that c) fails, and there is no need to bother checking a) and b).
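For concreteness, a tiny computation showing the failure of the homomorphism condition at one sample pair (the choice of $x,y$ is arbitrary):

```python
from fractions import Fraction

# phi(x) = x + 1 is a bijection of Q but not a homomorphism of (Q, +):
# phi(x + y) differs from phi(x) + phi(y) by exactly 1.

def phi(x):
    return x + 1

x, y = Fraction(1, 2), Fraction(1, 3)
print(phi(x + y), phi(x) + phi(y))   # 11/6 and 17/6
```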
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
question on the meaning of supremum I know that supremum means least upper bound. If I have a sequence of events, $\{A_n\}_{n=1}^\infty$ then $$\limsup_{n\rightarrow \infty} A_n = \lim_{n\rightarrow \infty} \sup_{j\geq n} A_j$$ I'm having trouble understanding this statement: "The supremum of a collection of elements in a partially ordered set is its least upper bound, so $\sup_{j\geq n} A_j$ should be a set and it should hold that $A_j \subset \sup_{j\geq n} A_j$ for all $j \geq n $. Because the supremum should also be the smallest upper bound it is not hard to see that $$\sup_{j\geq n} A_j= \bigcup_{j=n}^\infty A_j$$" Why does $j \geq n $ matter? Also, I don't understand what set is partially ordered, and also why the supremum of $A_j$ is itself a set. Shouldn't it just be an element? And I also don't see how the supremum is the same thing as the union.
Probability events are (the measurable) subsets of the probability space $\Omega$, and the partial order among them is simply the relation 'being subset of'. So, for any countable set of events $B_i$, their supremum is just $\bigcup_i B_i$ (which is still measurable). By the way, we also have $$\limsup_n A_n\ =\ \lim_n\sup_{j\ge n} A_j\ =\ \bigcap_n\bigcup_{j\ge n} A_j$$
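A finite illustration of the set-theoretic formula (toy events, truncated at $N$ terms, so this is only suggestive of the infinite definition): an outcome lies in $\limsup_n A_n$ exactly when it belongs to $A_j$ for infinitely many $j$.

```python
# Toy events: A_n = {0} for even n, {1} for odd n, truncated at N terms.
# limsup A_n is computed as an intersection of tail unions.

N = 100
A = [{0} if n % 2 == 0 else {1} for n in range(N)]

def tail_union(n):
    out = set()
    for j in range(n, N):
        out |= A[j]
    return out

limsup = set.intersection(*(tail_union(n) for n in range(N - 1)))
print(limsup)  # {0, 1}: both outcomes occur infinitely often
```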
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133831", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Having the sequence $(a_{n})_{n\geq1}$, $a_{n}=\int_{0}^{1} x^{n}(1-x)^{n}dx$, find $\lim\limits_{n{\rightarrow}\infty} \frac{a_{n+1}}{a_{n}}$ Having the sequence $(a_{n})_{n\geq1}$, $a_{n}=\int_{0}^{1} x^{n}(1-x)^{n}dx$, find $\lim\limits_{n{\rightarrow}\infty} \frac{a_{n+1}}{a_{n}}$. Is this limit equal to $\lim\limits_{n{\rightarrow}\infty}(a_{n})^{\frac{1}{n}}$? I am unsure if I can apply l'Hospital to this limit, as the integral is a real value. I tried applying integration by parts but I got nowhere. Later edit: I am so sorry, it is $(1-x)^{n}$, not $(1-x^{n})$; I made this mistake out of tiredness.
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\bbox[10px,#ffd]{\int_{0}^{1}x^{n}\pars{1 - x}^{n}\dd x} = \int_{-1/2}^{1/2}\pars{{1 \over 4} - x^{2}}^{n}\dd x = {1 \over 2^{2n}}\int_{0}^{1}\pars{1 - x^{2}}^{n}\dd x \\[5mm] \stackrel{\mrm{as}\ n\ \to\ \infty}{\sim}\,\,\,& {1 \over 2^{2n}}\int_{0}^{\infty}\expo{-nx^{2}}\dd x = {\root{\pi} \over 2^{2n + 1}\root{n}} \end{align} such that $$ {a_{n + 1} \over a_{n}} \,\,\,\stackrel{\mrm{as}\ n\ \to\ \infty}{\sim}\,\,\, {1/4 \over \root{1 + 1/n}} \,\,\,\stackrel{\mrm{as}\ n\ \to\ \infty}{\to}\,\,\,\bbx{1 \over 4} $$
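A closed form makes the limit checkable exactly: the integral is the Beta function $B(n+1,n+1)=\frac{(n!)^2}{(2n+1)!}$, so the ratio is $\frac{(n+1)^2}{(2n+2)(2n+3)}\to\frac14$. A quick exact computation:

```python
from fractions import Fraction
from math import factorial

# a_n = Beta(n+1, n+1) = (n!)^2 / (2n+1)!, computed exactly as a rational.

def a(n):
    return Fraction(factorial(n) ** 2, factorial(2 * n + 1))

ratio = a(1001) / a(1000)
print(float(ratio))   # close to 1/4
```

The exact value $a_1 = 1/6$ agrees with $\int_0^1 x(1-x)\,dx$, which is a quick consistency check on the formula.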
{ "language": "en", "url": "https://math.stackexchange.com/questions/3133954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
How to show that "If cancellation laws hold in $\mathbb{Z}_n$, then $n$ must be prime" I was assigned to prove that the following statements are equivalent: * *$\mathbb{Z}_n$ has no zero divisors. *$\mathbb{Z}_n$ is an integral domain. *The cancellation laws hold in $\mathbb{Z}_n$ *$n$ is prime. For the proof of $(1)\implies (2)$, $(2)\implies (3)$ and $(4)\implies (1)$ I encountered no problem. I am having difficulty proving the implication $(3)\implies (4)$. But I found a counterexample illustrating the contrapositive. For instance, since $n=4$ is not prime, we have in $\mathbb{Z}_4$: $2(3)=2(1)=2$ but $3\neq 1$. So I assume that the proof is by contradiction. Here is my attempt: Let $a\neq 0$ be an element of $\mathbb{Z}_n$. Since the cancellation laws hold, $ab=ac\implies b=c$. This means that $b\equiv c \pmod{n}$, or $n\mid(b-c)$. Now suppose $n$ is composite. Then $b-c=kn$ for $k<b-c<n$. From here, I do not know how to proceed. Is my proof flow correct? If not, how can we prove the implication $(3)\implies (4)$? Thanks in advance for all your help.
You are almost there; here's a hint: suppose that $n$ is not prime, so that $n = st$ with $1 < s, t < n$. Look at this equation modulo $n$. Derive a contradiction from the fact that $st \equiv 0 = s \cdot 0\pmod{n}$ but $0 < t < n$, so $t \not\equiv 0 \pmod{n}$ — contradicting the cancellation law applied to $s \neq 0$.
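An exhaustive check for small $n$ (a sanity check, of course, not a substitute for the proof):

```python
# Verify for small n that cancellation (a*b = a*c mod n with a != 0
# implies b = c) holds exactly when n is prime.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

def cancellation_holds(n):
    for a in range(1, n):
        for b in range(n):
            for c in range(n):
                if (a * b) % n == (a * c) % n and b != c:
                    return False
    return True

print(all(cancellation_holds(n) == is_prime(n) for n in range(2, 20)))  # True
```

For composite $n$, the search finds witnesses like the $n=4$ counterexample in the question ($2\cdot 3 \equiv 2\cdot 1 \pmod 4$).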
{ "language": "en", "url": "https://math.stackexchange.com/questions/3134259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }