This is not a solution, but it's too long for a comment. As I wrote in the comment above, if $n$ is composite, then the statement is easily proved. Write $n=x \cdot y$ and pick $(a,b)=(x, xy-x)$. Since $$a^2+b^2=x^2(1+(y-1)^2)$$ is divisible by $x^2$, it is not a prime. This reduces the study of this problem to primes $n \ge 7$. This is equivalent to the following: $$\exists a \in \left\{ 1, \dots , \frac{n-1}{2} \right\}: \ a^2+(n-a)^2 \mbox{ is composite}$$ Now, for all $a \in \left\{ 1, \dots , \frac{n-1}{2} \right\}$, set $$s=n-2a.$$ Note that $s$ is odd and that $s \in \left\{ 1, 3, 5, \dots , n-2 \right\}$. Then $$\frac{n^2+s^2}{2} = \frac{2n^2-4an+4a^2}{2} = n^2-2an+2a^2 = a^2+(n-a)^2,$$ which is composite for a suitable choice of $s$. Hence your conjecture is equivalent to the following: For all primes $n \ge 7$ there exists an odd integer $s \in \{ 1, \dots , n-2\}$ such that $\frac{n^2+s^2}{2}$ is composite.
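The equivalent statement at the end is easy to test numerically. A quick sketch (the cutoff of 200 is an arbitrary choice for illustration; note $(n^2+s^2)/2$ is an integer since $n$ and $s$ are both odd):

```python
def is_prime(m):
    """Trial division; fine for the small numbers checked here."""
    if m < 2:
        return False
    i = 2
    while i * i <= m:
        if m % i == 0:
            return False
        i += 1
    return True

def has_composite_witness(n):
    """Is there an odd s in {1, ..., n-2} with (n^2 + s^2)/2 composite?"""
    return any(not is_prime((n * n + s * s) // 2) for s in range(1, n - 1, 2))

# Check the equivalent statement for every prime 7 <= n < 200:
primes = [n for n in range(7, 200) if is_prime(n)]
print(all(has_composite_witness(n) for n in primes))  # True
```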
I find this all easier to understand if I write $\phi$ as a map between two different manifolds (where the two might coincidentally be the same manifold). Also, it all comes from abstract linear algebra: If $A_*: V \rightarrow W$ is a linear map (i.e., a pushforward), then there is a natural dual map $A^*: W^* \rightarrow V^*$ (i.e., a pullback), such that for any $v \in V$ and $\ell \in W^*$,$$(A^*\ell)(v) = \ell(A_*v).$$If you choose bases for $V$ and $W$ and use the dual bases for $V^*$ and $W^*$, then you can write $A_*$ and $A^*$ as matrices. Now you can apply this to the differential of a map $\phi: M \rightarrow N$, which is a linear map $\phi_*: T_pM \rightarrow T_{\phi(p)}N$, analogous to the map $A_*$ above and defined as follows: Given $v \in T_pM$, there exists a curve $c: (-\delta,\delta) \rightarrow M$ such that $c(0) = p$ and $c'(0) = v$. You can compose $\phi$ with $c$ to get a curve in $N$ and define$$\phi_*v = \left.\frac{d}{dt}\right|_{t=0}\phi(c(t)) \in T_{\phi(p)}N.$$Since $\phi_*: T_pM \rightarrow T_{\phi(p)}N$ is a linear map like $A_*$ above, there is a dual map $\phi^*: T_{\phi(p)}^*N \rightarrow T_p^*M$. When you push forward a vector field $v$, you're just applying the linear map $\phi_*$ to $v(p)$ for each $p \in M$. Notice that if $\phi$ is either not injective or not surjective, $\phi_*v$ need not be a well-defined vector field on $N$. Similarly, the pullback of a differential form $\omega$ on $N$ is simply obtained by applying $\phi^*$ to $\omega(\phi(p))$ for each $p$. Notice that, in contrast to the pushforward, the pullback of a smooth differential form on $N$ is always a smooth differential form on $M$. Since everything above was defined without using local coordinates, we now know these operations don't depend on any choice of coordinates. If you now choose local coordinates on $M$ near $p \in M$ and on $N$ near $\phi(p)$, then you get a basis of $T_pM$ by holding all but one coordinate on $M$ fixed and differentiating the resulting curve with respect to the remaining coordinate. 
You can do the same using the coordinates on $N$. You can now write $\phi_*: T_pM \rightarrow T_{\phi(p)}N$ as a matrix, just as for $A_*$ above. Using the corresponding dual bases, you can write $\phi^*: T^*_{\phi(p)}N \rightarrow T^*_pM$ as a matrix, just as described for $A^*$. Now you can check that the matrices for $\phi_*$ and $\phi^*$ are essentially the Jacobian matrix of partial derivatives of $\phi$ written with respect to the local coordinates on $M$ and $N$. Finally, to minimize confusion, I recommend never talking about the pushforward of a differential form or the pullback of a vector field. If $\phi$ is a diffeomorphism, then there is a pushforward of vector fields on $N$ by $(\phi^{-1})_*$ and a pullback of differential forms on $M$ by $(\phi^{-1})^*$. Such precision in language makes it much less likely you'll get confused or make mistakes.
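The curve definition and the Jacobian description can be checked against each other numerically. A minimal sketch (the map $\phi(x,y)=(x^2, xy)$, the point $p$, and the vector $v$ are illustrative choices, not from the text): differentiating $\phi(c(t))$ along the line $c(t)=p+tv$ reproduces the Jacobian acting on $v$.

```python
import numpy as np

# Illustrative smooth map phi: R^2 -> R^2 (not from the text above).
def phi(u):
    x, y = u
    return np.array([x ** 2, x * y])

def pushforward(phi, p, v, h=1e-6):
    """phi_* v at p via the curve definition: differentiate phi(p + t v)
    at t = 0 with a central difference."""
    return (phi(p + h * v) - phi(p - h * v)) / (2 * h)

p = np.array([1.0, 2.0])
v = np.array([1.0, 1.0])

# Jacobian of phi at p, written in the standard coordinate bases:
J = np.array([[2 * p[0], 0.0],
              [p[1],     p[0]]])

print(pushforward(phi, p, v), J @ v)  # the two agree
```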
Elementary discrete dynamical systems problems Problem 1 Consider the dynamical system \begin{align*} x_{n+1} &= f(x_n) \quad \text{for $n=0,1,2,3, \ldots$ ,} \end{align*} where the function $f$ is graphed along with the diagonal $x_n = x_{n+1}$, below. Find the equilibria of the dynamical system. Indicate them on the graph and also list them with their approximate numerical values. Using cobwebbing, determine the stability of each equilibrium. Be sure to indicate the direction of your cobwebbing with arrows. Problem 2 Consider the dynamical system \begin{align*} y_{t+1} &= g(y_t) \quad \text{for $t=0,1,2,3, \ldots$ ,} \end{align*} where the function $g$ is graphed along with the diagonal $y_t = y_{t+1}$, below. Find the equilibria of the dynamical system. Indicate them on the graph and also list them with their approximate numerical values. Using cobwebbing, determine the stability of each equilibrium. Be sure to indicate the direction of your cobwebbing with arrows. Problem 3 Consider the dynamical system \begin{align*} q_{t+1} - q_t = \frac{a q_t +b}{c}, \quad \text{for $t=0,1,2,3, \ldots$} \end{align*} where $a$, $b$, and $c$ are parameters. Find all equilibria. Problem 4 Consider the dynamical system \begin{align*} m_{n+1} - m_n = \frac{b m_n +c}{d}, \quad \text{for $n=0,1,2,3, \ldots$} \end{align*} where $b$, $c$, and $d$ are parameters. Find all equilibria. Problem 5 Consider the dynamical system \begin{align*} r_{t+1} = \frac{a r_t +b}{c}, \quad \text{for $t=0,1,2,3, \ldots$ } \end{align*} where $a$, $b$, and $c$ are parameters. Find all equilibria. Problem 6 Consider the dynamical system \begin{align*} c_{t+1}-c_{t} = \frac{(c_t -a)(c_t-b)}{d}, \quad \text{for $t=0,1,2,3, \ldots$ } \end{align*} where $a$, $b$, and $d$ are parameters. Find all equilibria. 
Problem 7 Consider the dynamical system \begin{align*} b_{n+1}-b_{n} = \frac{(b_n -\alpha)(b_n-\beta)(b_n-\gamma)}{\delta}, \quad \text{for $n=0,1,2,3, \ldots$ } \end{align*} where $\alpha$, $\beta$, $\gamma$, and $\delta$ are parameters. Find all equilibria. Problem 8 Consider the dynamical system \begin{align*} m_{t+1} - m_t = -\frac{a}{b} m_t + \frac{a}{b}c, \quad \text{for $t=0,1,2,3, \ldots$ } \end{align*} where $a$, $b$, and $c$ are parameters. If one doubles the parameter $a$ and doubles the parameter $b$, how does the dynamical system change? If you know the value of $\lambda = \frac{a}{b}$, do you need to know the values of $a$ and $b$ individually to determine the evolution of the dynamical system? Why or why not? Rewrite the dynamical system in terms of $\lambda$ and $c$ (with no dependence on $a$ or $b$ individually). Problem 9 Consider the dynamical system \begin{align*} l_{n+1} = \frac{ab}{c} l_n e^{-l_n a b /c}, \quad \text{for $n=0,1,2,3, \ldots$ } \end{align*} where $a$, $b$, and $c$ are parameters. If one doubles the parameter $a$ and doubles the parameter $c$, how does the dynamical system change? If one doubles the parameter $b$ and doubles the parameter $c$, how does the dynamical system change? If one doubles the parameter $a$ and halves the parameter $b$, how does the dynamical system change? If you know the value of $\gamma = \frac{ab}{c}$, do you need to know the values of $a$, $b$, and $c$ individually to determine the evolution of the dynamical system? Why or why not? Rewrite the dynamical system in terms of $\gamma$ (with no dependence on $a$, $b$, or $c$ individually). Problem 10 What is an equilibrium of a dynamical system? What does it mean for an equilibrium to be stable? To be unstable? Problem 11 Consider the dynamical system \begin{align*} h_{t+1} -h_t &= \frac{h_t(h_t -1)}{2} \quad \text{for $t=0,1,2,3, \ldots$ .} \end{align*} Find the equilibria. If $h_0 = -0.1$, calculate $h_1$, $h_2$, $h_3$, and $h_4$. 
If $h_0= 0.1$, calculate $h_1$, $h_2$, $h_3$, and $h_4$. Based on the above calculations, what can you infer about the stability of one of the equilibria? If $h_0 = 0.99$, calculate $h_1$, $h_2$, $h_3$, and $h_4$. If $h_0= 1.01$, calculate $h_1$, $h_2$, $h_3$, and $h_4$. Based on the above calculations, what can you infer about the stability of one of the equilibria? Problem 12 Consider the dynamical system \begin{align*} S_{t+1} -S_t &= \frac{S_t(1-S_t)}{2} \quad \text{for $t=0,1,2,3, \ldots$ .} \end{align*} Find the equilibria. If $S_0 = -0.01$, calculate $S_1$, $S_2$, $S_3$, and $S_4$. If $S_0= 0.01$, calculate $S_1$, $S_2$, $S_3$, and $S_4$. Based on the above calculations, what can you infer about the stability of one of the equilibria? If $S_0 = 0.9$, calculate $S_1$, $S_2$, $S_3$, and $S_4$. If $S_0= 1.1$, calculate $S_1$, $S_2$, $S_3$, and $S_4$. Based on the above calculations, what can you infer about the stability of one of the equilibria? Problem 13 Consider the dynamical system \begin{align*} q_{n+1} -q_n &= a q_n \quad \text{for $n=0,1,2,3, \ldots$ } \end{align*}where $a$ is a parameter. Solve the dynamical system to give a formula for $q_n$ in terms of the initial condition $q_0$, the parameter $a$, and the number $n$. Find the equilibria. For each equilibrium, determine for which values of $a$ the equilibrium is stable. Problem 14 Consider the dynamical system \begin{align*} p_{n+1} &= b p_n \quad \text{for $n=0,1,2,3, \ldots$ } \end{align*}where $b$ is a parameter. Solve the dynamical system to give a formula for $p_n$ in terms of the initial condition $p_0$, the parameter $b$, and the number $n$. Find the equilibria. For each equilibrium, determine for which values of $b$ the equilibrium is stable. Once you have worked on a few problems, you can compare your solutions to the ones we came up with.
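For hand-calculation problems like 11 and 12, the iteration itself is only a couple of lines of code; here is a sketch for the Problem 11 map, using the starting points from the problem statement:

```python
def step(h):
    """One iterate of h_{t+1} = h_t + h_t (h_t - 1)/2 from Problem 11."""
    return h + h * (h - 1) / 2

def orbit(h0, n=4):
    """Return [h_0, h_1, ..., h_n]."""
    hs = [h0]
    for _ in range(n):
        hs.append(step(hs[-1]))
    return hs

print(orbit(0.1))    # iterates move toward the equilibrium h = 0
print(orbit(-0.1))   # again drawn toward 0: evidence that 0 is stable
print(orbit(1.01))   # iterates move away from h = 1: evidence it is unstable
```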
Let $X$ be a discrete random variable (r.v.) taking distinct values $x_1,x_2,\dots$, and let $p_i:=P(X = x_i)$. Then the entropy of $X$ is defined by the formula \begin{equation} H(X): = \sum_i p_i \log \frac1{p_i}. \tag{1}\end{equation}Note that it does not matter whatsoever in what set/space the values $x_1,x_2,\dots$ are assumed to be; it does not matter if some of these values are close to, or far away from, one another -- in any sense. What matters is that the $p_i$'s are the probabilities of distinct values of the r.v. $X$. Clearly, if we aggregate some of these values, then the entropy will go down; and if we split some of these values, then the entropy will go up. Therefore, the formula $H(X) = \int dx\, p(x) \log \frac1{p(x)}$ will hardly make sense if, say, the integral is understood in the Riemann sense, implying the rather arbitrary grouping of the values $x$ according to the standard metric on $\mathbb R$. Moreover and much more importantly, the Riemann sums $\sum_i p(x_i)\Delta x_i \log \frac1{p(x_i)}\,$ for this integral are quite different from the sums $\sum_i p(x_i)\Delta x_i \log\frac1{p(x_i)\Delta x_i}$ that would genuinely correspond to the reality-based definition (1). Also, these latter sums will usually be very large if the $\Delta x_i$'s are very small, and the values of these sums may fluctuate wildly depending on the choice of the $\Delta x_i$'s. The more general formula $H(X) = \int p(x) \log \frac1{p(x)}\, \mu(dx)$, where $\mu$ is a measure and the grouping of the values $x$ occurs according to the closeness of the corresponding values of $p(x)$ (!!), will hardly make more sense than the Riemann integral. The only exception here would be when $\mu$ is the counting measure, with which no actual grouping (or splitting) of any values occurs. Then for the density (say $p$) of the distribution of the r.v. 
$X$ with respect to the counting measure $\mu$, the condition $\int p\,d\mu=1$ can be rewritten as $\sum_x p(x)=1$, which will imply that $p(x)\ne0$ only for (at most) countably many values of $x$, so that the r.v. $X$ is necessarily discrete -- and then we can write \begin{equation*} H(X) = \int p(x) \log \frac1{p(x)}\,\mu(dx) =\sum_x p(x) \log \frac1{p(x)}, \end{equation*}which is the same as (1), up to the change in notation. So, if the r.v. $X$ is not discrete, then the only reasonable value to assign to the entropy of $X$ appears to be $\infty$, at least from the viewpoint of information theory. As for the integral $\tilde H(X):=\int p(x) \log \frac1{p(x)}\, \mu(dx)$ in the case when $\mu$ is the Lebesgue measure, the main interest in it seems to be the easily seen fact (see e.g. Barron) that the maximum of $\tilde H(X)$ over all absolutely continuous r.v.'s $X$ with a fixed variance is attained when the distribution of $X$ is normal; moreover, the absolute value of the difference between $\tilde H(X)$ and its maximum equals the relative entropy $\int p(x)\log\frac{p(x)}{\varphi(x)}\, dx$, where $\varphi$ is the normal density with the same mean and variance as $p$. However, what is actually used in the proofs is the relative entropy $\int \log\frac{dP}{dQ}\,dP$ (also known as the Kullback--Leibler divergence), which is well defined for any probability measures $P$ and $Q$ such that $P$ is absolutely continuous with respect to $Q$.
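The blow-up of the genuine discretized sums described above is easy to see concretely. A small sketch: for a Uniform(0,1) r.v. cut into bins of width $\Delta$, each bin has probability $p_i = \Delta$, so the discrete entropy from (1) equals $\log(1/\Delta)$ and diverges as $\Delta \to 0$.

```python
import math

# Discretize a Uniform(0, 1) r.v. into bins of width delta.  Each bin has
# probability p_i = delta, so the genuine discrete entropy from (1) is
# sum_i delta * log(1/delta) = log(1/delta), which blows up as delta -> 0.
def discretized_entropy(delta):
    n = round(1 / delta)
    return sum(delta * math.log(1 / delta) for _ in range(n))

for delta in (0.1, 0.01, 0.001):
    print(delta, discretized_entropy(delta))  # grows like log(1/delta)
```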
I thought I had great intuition and mathematical understanding of the Metropolis-Hastings algorithm, until closer inspection... as I started compiling my notes, I realized I do not understand the rejection step of the algorithm. Here is what I understood: We have a target distribution $\pi(x)$, and we construct a transition kernel $K(x' \mid x)$ such that the detailed balance equation holds. $$\pi(x)K(x' \mid x) = \pi(x') K(x \mid x')$$ We can choose $$K(x' \mid x) = \alpha(x, x')\,q(x' \mid x) \quad \text{for } x' \neq x,$$ where $\alpha$ is the Metropolis-Hastings ratio and $q$ is some proposal distribution. This particular construction of $\alpha$ corrects the discrepancies in our detailed balance equation, thus providing us flexibility in choosing $q$. Where I am having problems: How do I think about $K$ as a distribution (or even visualize it)? In particular, what is $\alpha(x, x')$? What's going on with the sampling step where we reject and stay at $x$? Originally I thought of it as some correction function, but the rejection means $X' := X$, and thus instinctively I want to think of $K(X, X')$ as a mixture of a $\delta_{\{X\}}(X')$ and $q(X'|X)$; however, the mass associated with this Dirac delta varies depending on $x'$...? Not quite a mixture model. Should I be looking to interpret $\alpha$ as some form of accept-reject algorithm? How do I write $K$ as a density? Edit: Maybe this should be a question, not a comment: With regards to the order of derivations (i.e. motivation), is the following a reasonable thought process? We want to construct some transition kernel invariant with respect to our target distribution. We select some proposal distribution and notice that it breaks the detailed balance equation. We correct it with an acceptance probability. Because of this correction, we need some action corresponding to the complement of accepting the proposed state: we remain at our current state. Question: Is this choice of "remain at current state" arbitrary?
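To make the "stay at $x$" step concrete, here is a minimal sketch of one MH transition in Python (the symmetric Gaussian proposal and standard normal target are illustrative choices): the kernel is a density part on $x' \neq x$ plus a point mass at $x$ whose weight is the total rejection probability.

```python
import math
import random

def mh_step(x, log_target, proposal_sd=1.0):
    """One Metropolis-Hastings transition with a symmetric Gaussian proposal.
    The kernel has a density part (accepted moves) plus a point mass at the
    current state x (rejections) -- the mixture discussed in the question."""
    x_prop = x + random.gauss(0.0, proposal_sd)
    log_alpha = log_target(x_prop) - log_target(x)  # q is symmetric, so it cancels
    if math.log(random.random()) < log_alpha:
        return x_prop  # accept the proposal
    return x           # reject: stay at x (the delta part of the kernel)

log_pi = lambda x: -0.5 * x * x  # standard normal target, up to a constant

random.seed(0)
chain = [0.0]
for _ in range(20000):
    chain.append(mh_step(chain[-1], log_pi))
samples = chain[5000:]
print(sum(samples) / len(samples))  # sample mean near 0, as expected for N(0,1)
```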
Using Path Difference in a Two Source Interference Pattern to Find the Wavelength of a Source The diagram below shows two loudspeakers A and B a distance 0.8 m apart producing coherent sound of the same frequency and wavelength. 2 m away from the line joining the loudspeakers is a parallel line along which a detector C is moved. When ABC forms a right angle, constructive interference occurs and the path difference between AC and BC is one wavelength. The distance AC can be found using Pythagoras' theorem: \[AC^2 =BC^2 +AB^2 =2^2 +0.8^2 =4.64 \rightarrow AC= \sqrt{4.64} = 2.1541 \text{ m}\] to four decimal places. The wavelength is \[\lambda=2.1541-2=0.1541 \text{ m}.\]
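The arithmetic above can be checked in a couple of lines:

```python
import math

AB = 0.8  # loudspeaker separation, m
BC = 2.0  # distance from B to the detector line, m

AC = math.hypot(AB, BC)   # Pythagoras: AC^2 = AB^2 + BC^2
wavelength = AC - BC      # the path difference is one wavelength here
print(round(AC, 4), round(wavelength, 4))  # 2.1541 0.1541
```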
I have a problem with solving the following question. Let $\mathcal{P} = \{\mathbb{P}_\theta : \theta \in \Theta\}$ be a statistical family of discrete distributions with state space $\mathcal{X}$ and let $X$ denote the corresponding random variable. Recall the definition of the Kullback–Leibler divergence: $K_{X}(\theta_{1}, \theta_{2}):=\mathbb{E}_{\theta_{1}}\left[\ln\frac{p(X,\theta_{1})}{p(X,\theta_{2})}\right]$. Let $Y=g(X)$. Prove that $K_{X}(\theta_{1}, \theta_{2}) \ge K_{Y}(\theta_{1}, \theta_{2})$, with equality if and only if $Y$ is sufficient for $\theta$. In fact I don't know where to start. Should I use any theorem?
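Not a proof, but the inequality (a form of the data-processing inequality) can be illustrated numerically. In the sketch below, the pmfs and the merging map $g$ are made-up toy choices:

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence between two pmfs given as lists."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# A made-up discrete family on X in {0, 1, 2} (numbers purely illustrative):
p1 = [0.5, 0.3, 0.2]  # pmf of X under theta_1
p2 = [0.2, 0.5, 0.3]  # pmf of X under theta_2

# Y = g(X) merges states 1 and 2, so its pmf sums the merged probabilities:
q1 = [p1[0], p1[1] + p1[2]]
q2 = [p2[0], p2[1] + p2[2]]

print(kl(p1, p2), kl(q1, q2))  # K_X >= K_Y, as the exercise asserts
```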
Geometric Drawings Lines Two lines, \(a_1x+b_1y+c_1=0\) and \(a_2x+b_2y+c_2=0\), intersect if \(a_1b_2\neq a_2b_1\). The point of intersection is $$x=(b_1c_2-b_2c_1)/(a_1b_2-a_2b_1)$$ $$y=(a_2c_1-a_1c_2)/(a_1b_2-a_2b_1)$$ Line Segments Circle Bézier Curve Linear Bézier curves A linear Bézier curve is simply a straight line between two points, which can be defined as linear interpolation between those two points $$\mathbf {B}_{linear\,\mathbf {P} _{0}, \mathbf {P} _{1}} (t)=(1-t)\mathbf {P} _{0}+t\mathbf {P} _{1}{\mbox{ , }}0\leq t\leq 1$$ Quadratic Bézier curves A quadratic Bézier curve can be defined as linear interpolation between corresponding points on two linear Bézier curves. $${\displaystyle \mathbf {B}_{quadratic\,\mathbf {P} _{0}, \mathbf {P} _{1}, \mathbf {P} _{2}} (t)=(1-t)\mathbf {B}_{linear\,\mathbf {P} _{0}, \mathbf {P} _{1}}(t)+t\mathbf {B}_{linear\,\mathbf {P} _{1}, \mathbf {P} _{2}}(t){\mbox{ , }}0\leq t\leq 1}$$ Cubic Bézier curves And a cubic Bézier curve is linear interpolation between corresponding points on two quadratic Bézier curves. $$\mathbf {B}_{cubic{\,\mathbf {P} _{0}, \mathbf {P} _{1}, \mathbf {P} _{2},\mathbf {P} _{3}}} (t)=(1-t)\mathbf {B} _{quadratic\,\mathbf {P} _{0},\mathbf {P} _{1},\mathbf {P} _{2}}(t)+t\mathbf {B} _{quadratic\,\mathbf {P} _{1},\mathbf {P} _{2},\mathbf {P} _{3}}(t)$$ $$=(1-t)^{3}\mathbf {P} _{0}+3(1-t)^{2}t\mathbf {P} _{1}+3(1-t)t^{2}\mathbf {P} _{2}+t^{3}\mathbf {P} _{3}{\mbox{ , }}0\leq t\leq 1$$ B-Spline Curve \(P\) is a set of control points and \(t\) is a vector of non-decreasing numbers called the "knot vector", which has \(number\;of\;control\;points\;+\;order\;of\;the\;curve\,(n)\;+\;1\) elements, e.g. \((0, 0, 0, 1, 2, 3, 3, 3)\). The B-spline basis function \(B\) is defined on the knot vector and used to weight the control points. 
$${\displaystyle Spline_{n,t}(x)=\sum _{i}P _{i}B_{i,n}(x)}$$ $$B_{i,0}(x):={\begin{cases}1&\mathrm{if}\;{t}_{i}\leq{x}<{t}_{i+1}\\0&\mathrm{otherwise}\end{cases}}$$ $${\displaystyle B_{i,k}(x):={\frac {x-t_{i}}{t_{i+k}-t_{i}}}B_{i,k-1}(x)+{\frac {t_{i+k+1}-x}{t_{i+k+1}-t_{i+1}}}B_{i+1,k-1}(x)}$$ Learn more Circumcircle and Circumcenter Circumcircle The circle that passes through the three vertices of a triangle. Circumcenter The center of the circumcircle; the intersection of the perpendicular bisectors of a triangle. Incircle and Incenter Incircle The circle tangent to the three sides of a triangle. Incenter The center of the incircle; the intersection of the angle bisectors of a triangle. Excircles and Excenters Excircles The circles tangent to one of a triangle's sides and to the extensions of the other two. Excenters The centers of the excircles; points where the external angle bisectors of a triangle intersect. Orthocenter Orthocenter The intersection of the three altitudes of a triangle. Centroid Centroid The arithmetic mean position of all the points in a shape. The centroid of a triangle is the intersection of its three medians. Reuleaux Triangle A Reuleaux triangle can rotate within a square while touching all four sides of the square.
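The expanded cubic Bernstein form from the Bézier section can be evaluated directly. A minimal Python sketch (the control points are illustrative):

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate the expanded cubic Bernstein form, coordinate by coordinate."""
    s = 1 - t
    return tuple(
        s ** 3 * a + 3 * s ** 2 * t * b + 3 * s * t ** 2 * c + t ** 3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Illustrative control points:
P0, P1, P2, P3 = (0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)
print(cubic_bezier(P0, P1, P2, P3, 0.0))  # the curve starts at P0
print(cubic_bezier(P0, P1, P2, P3, 1.0))  # and ends at P3
print(cubic_bezier(P0, P1, P2, P3, 0.5))  # a point in between
```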
I believe that category theory is one of the most fundamental theories of mathematics, and it is becoming a fundamental theory for other sciences as well. It allows us to understand many concepts on a higher, unified level. Categorical methods are general, but of course they can be applied to specific categories and thereby help us to solve specific problems. I am not asking for canonical applications in which category theory is used. I have read all answers to similar math.SE questions on applications of category theory, but they don't fit my question below. I would like to ask for applications of the notions of "category", "functor", and "natural transformation" (perhaps also "limit" and "adjunction"), which go beyond descriptions but really solve specific problems in an elegant way. I am aware of many, many proofs of theorems which have category-theoretic enhancements, in particular by means of the Yoneda Lemma, but I'm not looking for this kind of application either. So my question is (even though I know that this is not the task of category theory): Can you name a specific and rather easy-to-understand theorem, whose statement naturally does not contain any categorical notions, but whose proof introduces a suitable category / functor / natural transformation in a crucial way and uses some basic category theory? The proof should not just depend on a large theory (such as arithmetic geometry) whose development has used category theory over decades. The proof should not just be a categorical version of a proof which was already known. So here is an example of this kind, taken from Hartig's wonderful paper "The Riesz Representation Theorem Revisited", and hopefully there are more of them: Let $X$ be a compact Hausdorff space, $M(X)$ the Banach space of Borel measures on $X$ and $C(X)^*$ the dual of the Banach space of continuous functions on $X$. 
Integration provides a linear isometry $$\alpha(X) : M(X) \to C(X)^*, ~ \mu \mapsto \bigl(f \mapsto \int f \, d\mu\bigr).$$ The Riesz Representation Theorem asserts that this is an isomorphism. For the "categorical" proof, observe first that the maps $\alpha(X)$ are actually natural, i.e. provide a natural transformation $\alpha : M \to C^*$. Using naturality and facts from functional analysis such as the Hahn-Banach Theorem, one shows that if $X$ satisfies the claim and admits a surjective map to $Y$, then $Y$ satisfies the claim. Since every compact Hausdorff space is the quotient of an extremally disconnected space, namely the Stone–Čech compactification of its underlying set equipped with the discrete topology, we may therefore assume that $X$ is extremally disconnected. Now here comes the actual mathematics, and I will just say that there are enough clopen subsets, which allow you to construct enough continuous functions. The general case has been reduced to a very easy one, using the concept of natural transformation.
$S^3=\{(x_1,x_2,x_3,x_4)\in \mathbb{R}^4 ~|~~ x_1^2+x_2^2+x_3^2+x_4^2=4\}$ with the induced metric. For all $p\in S^3$ and $X\in T_pS^3$ with $||X||=2$, how does one show that $\alpha(t)=p\cos t+X\sin t$ is a geodesic of $S^3$? I found local coordinates $$ u:(A,B,C)\rightarrow(2\cos A \cos B, 2\cos A\sin B, 2\sin A\cos C , 2\sin A\sin C), $$ then computed $\frac{\partial u}{\partial A},\frac{\partial u}{\partial B},\frac{\partial u}{\partial C}$ (it is a little complex to write out), then computed the $g_{ij}$: only $g_{11}=4$, $g_{22}=4\cos^2 A$, $g_{33}=4\sin^2 A$ are nonzero. Then the connection coefficients are $$ \Gamma^1_{ij}=0 ~(i\ne j ~\text{or}~ i=j=1)~,~\Gamma_{22}^1=\sin A\cos A~,~\Gamma^1_{33}=-\sin A\cos A \\ \Gamma^2_{12}=\Gamma^2_{21}=-\tan A ~,~ \text{other} ~\Gamma^2_{ij}=0 \\ \Gamma^3_{13}=\Gamma_{31}^3=\cot A ~,~ \text{other} ~\Gamma^3_{ij}=0. $$ In fact, I have computed $\alpha(t)$ and $\alpha'(t)$ in local coordinates, but $\alpha'(t)$ can't be represented in terms of $\frac{\partial u}{\partial A},\frac{\partial u}{\partial B},\frac{\partial u}{\partial C}$, so I must have made a mistake somewhere. I think this is a good exercise, but I failed to work it out.
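A coordinate-free route avoids the Christoffel symbols entirely: since $\alpha''(t) = -\alpha(t)$, the acceleration is proportional to the position vector, hence normal to the sphere, so the covariant (tangential) acceleration vanishes and $\alpha$ is a geodesic. A quick numerical check of these two facts, with an illustrative choice of $p$ and $X$:

```python
import numpy as np

# Illustrative choice: p on the radius-2 sphere, X tangent at p with ||X|| = 2.
p = np.array([2.0, 0.0, 0.0, 0.0])
X = np.array([0.0, 2.0, 0.0, 0.0])

alpha = lambda t: p * np.cos(t) + X * np.sin(t)

for t in np.linspace(0.0, 2 * np.pi, 7):
    a = alpha(t)
    # alpha stays on the sphere of radius 2:
    assert abs(a @ a - 4.0) < 1e-12
    # alpha''(t) = -alpha(t): the acceleration is parallel to the position
    # vector, i.e. normal to the sphere, so the tangential part vanishes.
    a2 = -p * np.cos(t) - X * np.sin(t)
    assert np.allclose(a2, -a)
print("geodesic check passed")
```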
I have a very precise question concerning pp. 82-83 of Stein's book "Singular integrals and differentiability properties of functions". Actually it is a calculation problem. For $f \in L^{2}(\mathbb{R}^n)$, denote by $u(x,y)$ the Poisson integral of $f$: $$u(x,y)=\int_{\mathbb{R}^n} P_{y}(t) f(x-t) dt.$$ Set $$|\nabla u(x,y)|^2=\left| \frac{\partial u}{\partial y}\right|^2 + \sum_{j=1}^{n} \left| \frac{\partial u}{\partial x_{j}} \right|^2.$$ After a calculation, one gets $$\frac{\partial u}{\partial y}=\int_{\mathbb{R}^n} - 2 \pi |t| \hat{f}(t) e^{-2\pi i t \cdot x}e^{-2 \pi |t| y} dt$$ and $$\frac{\partial u}{\partial x_{j}}=\int_{\mathbb{R}^n} - 2 \pi i t_{j} \hat{f}(t) e^{-2\pi i t \cdot x}e^{-2 \pi |t| y} dt.$$ Ok. But then, it is written that $$\int_{\mathbb{R}^n} |\nabla u (x,y)|^2 dx = \int_{\mathbb{R}^n} 8 \pi^2 |t|^2 |\hat{f}(t)|^{2}e^{-4 \pi |t|y}dt.$$ I don't understand how to get that. In particular, I don't see how the square outside the integral in, for instance, $\left| \frac{\partial u}{\partial y} \right|^2$ manages to come inside the integral. Any hint is welcome. Hint: write $u(\cdot,y) = P_y \ast f$. Now for the Fourier transform of a convolution, we have a nice identity. A slightly more detailed answer follows. Denote by $\mathcal{F}_x$ the Fourier transform in the variable $x$. We have, by Plancherel's identity and $\mathcal{F}(P_y \ast f) = \hat{P_y} \hat{f}$, that $$ \int \lvert \nabla u(x,y) \rvert^2 dx = \int 8\pi^2 \lvert \xi \rvert^2 \lvert \hat{u}(\xi, y)\rvert^2 d\xi = \int 8\pi^2 \lvert \xi \rvert^2 \lvert \hat{P_y}(\xi) \rvert^2 \lvert \hat{f}(\xi) \rvert^2 d\xi. $$ One needs to compute the Fourier transform of the Poisson kernel to finish. Thanks for your answer. However I'm still confused. 
Considering the very definition of $|\nabla u (x,y)|^2$, $$\int_{\mathbb{R}^n} \left|\nabla u (x,y)\right|^2 dx = \int_{\mathbb{R}^n} \left|\frac{\partial u}{\partial y} (x,y)\right|^2 dx + \sum_{j=1}^n \int_{\mathbb{R}^n} \left|\frac{\partial u}{\partial x_{j}} (x,y)\right|^2 dx.$$ Now I agree on the fact that $$\sum_{j=1}^n \int_{\mathbb{R}^n} \left|\frac{\partial u}{\partial x_{j}} (x,y)\right|^2 dx = \int_{\mathbb{R}^n} 4\pi^2|\xi|^2|\hat{P_{y}}(\xi)|^2|\hat{f} (\xi)|^2 d\xi.$$ But my main problem is precisely how to deal with $\int_{\mathbb{R}^n} \left|\frac{\partial u}{\partial y} (x,y)\right|^2 dx = \int_{\mathbb{R}^n} \left| \int_{\mathbb{R}^n} 2\pi |t|\hat{f}(t) e^{-2\pi i t \cdot x}e^{-2\pi|t|y}\,dt\right|^2 dx$...
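The mechanism that moves the square inside the integral is Plancherel's identity applied to each partial derivative separately. A discrete sketch of the same identity using NumPy's FFT (with NumPy's normalization convention, the identity reads $\sum |f|^2 = \frac{1}{N}\sum |\hat f|^2$; the signal is random illustrative data):

```python
import numpy as np

# Discrete analogue of Plancherel's theorem, the step that moves the square
# inside the integral: sum |f|^2 = (1/N) * sum |f_hat|^2 for NumPy's DFT.
rng = np.random.default_rng(0)
f = rng.standard_normal(256)
f_hat = np.fft.fft(f)

lhs = np.sum(np.abs(f) ** 2)
rhs = np.sum(np.abs(f_hat) ** 2) / f.size
print(lhs, rhs)  # equal up to rounding
```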
Mini Research Project Time Updates Added CCSD(T) $n_i$ and dipole moments and tweaked the discussion (the delay was caused by a system-wide storage upgrade on the machines, which took nearly a week to complete). Preamble This response is in no way meant to be contrary to what Geoff has already posted. I happen to enjoy these types of questions and I like to tackle them as little research projects for fun. Therefore, my answer will be brutally detailed and in-depth. That said, this should be a good introduction to how I would approach this problem if I were going to do this at the 'production level' of research, but this definitely goes far beyond the purpose of any reasonable response for an SE answer. It's All About the Dipoles, Baby Okay, so I wouldn't use THAT heading in a paper, but maybe a talk, depending on who my audience was... We can easily determine the dipole moment of cis-2-butene via electronic structure theory. I have modeled two conformers of cis-2-butene as shown below. I will refer to the geometry on the left as StructA and the geometry on the right as StructB. A couple of things to note since I don't include any captions to tables: energies are always reported in $\mathrm{kcal\ mol}^{-1}$ and dipole moments ($\mu$) are given in Debye. Computational Methods Full geometry optimizations and corresponding harmonic vibrational frequency computations were performed with second-order Møller–Plesset perturbation theory (MP2) and a variety of density functional theory (DFT) methods using the Gaussian 09 software package for the conformers of cis-2-butene. Both conformers were characterized in $C_{2v}$ symmetry. The DFT methods implemented include B3LYP, B3LYP-GD3(BJ), M06-2X, MN12SX, N12SX, and APFD. The B3LYP-GD3(BJ) method employs Grimme's 3rd-generation dispersion correction as well as the Becke-Johnson damping function. All DFT computations employed a pruned numerical integration grid having 90 radial shells and 590 angular points per shell. 
The heavy-aug-cc-pVTZ basis set was employed for these computations, where only the heavy (non-hydrogen) atoms were augmented with diffuse functions (i.e. cc-pVTZ for H and aug-cc-pVTZ for carbon). This basis set is abbreviated as haTZ. The CCSD(T) method (i.e. the coupled-cluster method that includes all single and double substitutions as well as a perturbative treatment of the connected triple excitations) was similarly employed using the CFOUR software package. The magnitudes of the components of the residual Cartesian gradients of the optimized geometries were less than $6.8\times 10^{-6}\ E_h\ a_0^{-1}$. Single point energies were computed with the explicitly correlated MP2-F12 [specifically MP2-F12 3C(FIX)] and CCSD(T)-F12 [specifically CCSD(T)-F12b with unscaled triples contributions] methods in conjunction with the haTZ. These computations were performed with the Molpro 2010.1 software package using the default density fitting (DF) and resolution of the identity (RI) basis sets. Natural bond orbital (NBO) analyses were performed for the MP2 optimized structures using the haTZ basis set and the SCF density. All computations employed the frozen core approximation (i.e. the 1s$^2$ electrons of carbon were frozen). Computational Methods in English Two different conformations of cis-2-butene were characterized with a variety of cheap (but usually okay) approximations (i.e. DFT methods) as well as reliable (but generally more expensive) wave function methods (i.e. MP2 and CCSD(T)). The wave function methods are necessary to validate the DFT results. CCSD(T) is the gold standard and gives very good results for single-reference, closed-shell, well-behaved systems, so we will use this as our 'best estimate'. We use a variety of methods in order to look for agreement in the results. If we see good agreement across the board, we can be confident in our results. If we see massive discrepancies, then we will have to be careful when we analyze the results. 
Geometries have been converged to a tight threshold (i.e. we have good molecules) and our DFT computations use a relatively dense integration grid (which leads to more accurate results). Notice that I employ a heavy-aug-cc-pVTZ (haTZ) basis set. Why leave the diffuse functions off hydrogen? The purpose of diffuse functions is to describe electron density far away from the nucleus of an atom. Therefore we slap these functions onto the carbons, which are relatively large atoms compared to hydrogen. Hydrogen, on the other hand, has only one electron and therefore has a small electron density even when isolated. In cis-2-butene, hydrogen is bonded to carbon via a rather short bond distance. The electron density around the hydrogen is even more reduced than that of an isolated hydrogen atom in the gas phase. Therefore, it would be impractical to include diffuse functions on hydrogen. Doing so may even lead to erroneous results, since we would be trying to describe electron density far away from the hydrogen nucleus when in reality there is virtually none to be found. Finally, we perform single point energies using explicitly correlated methods. Because the CCSD(T) opts and freqs will likely not be done in time for this posting, we can gauge how the resulting geometry from each optimization procedure varies from a geometry given by a different method. If all of the energies are similar (within a few tenths of a $\mathrm{kcal\ mol}^{-1}$), then we can be confident that our geometries are not only very similar but that small deviations in the geometry will have little effect on the corresponding energies, at least in this region of the potential energy surface (PES). Large deviations will usually mean that the method which produced the 'outlier' is not a good approximation for the system (I do not expect cis-2-butene to be problematic at all). Explicitly correlated methods accelerate convergence to the CBS limit. 
These methods have been shown to give results that a large basis set and a canonical method would provide, but with a much smaller basis set. For example, the result that I would get with regular CCSD(T)/aug-cc-pV5Z could be obtained using CCSD(T)-F12/aug-cc-pVTZ. This makes the computations less intensive and much more feasible. Results The number of imaginary frequencies ($n_i$), relative MP2-F12 and CCSD(T)-F12 energies ($\Delta E^{\mathrm{MP2-F12}}$ and $\Delta E^{\mathrm{CC-F12}}$, respectively, in $\mathrm{kcal\ mol^{-1}}$) and dipole moment ($\mu_z$ in Debye) are given in the following table for both conformers of cis-2-butene for a variety of methods. The relative energies were determined by taking the difference between the respective geometry and the reference CCSD(T) geometry [e.g. E(CCSD(T))-E(MP2)]. StructB is a second-order saddle point ($n_i = 2$) on every single PES considered and therefore is not a minimum energy structure. StructA is, however, a minimum ($n_i = 0$) on every PES considered. The characterization of the nature of the stationary point is consistent between CCSD(T) (our best estimate), MP2, and the DFT methods. Single point energies reveal negligible differences in the optimized geometries. The energies associated with the MP2 optimized structures serve as the reference point for all other relative energies. Deviations grow no larger than 0.27 $\mathrm{kcal\ mol}^{-1}$ for the MP2-F12 and CCSD(T)-F12 relative energies. In addition, there is good agreement between the MP2-F12 and CCSD(T)-F12 relative energies, suggesting that higher-order correlation effects are small. The dipoles for StructA and StructB are very similar, with very small magnitudes on the order of a couple tenths of a Debye. To put these quantities into perspective, the dipole moment of water is 1.85 D. We all know that water has a pretty large dipole moment, so by comparison cis-2-butene has a very WEAK dipole moment. 
You can compare to the dipole moments of other molecules by referring to this NIST reference. Clearly, the rotation of the methyl groups has a very small effect on the dipole moment of each conformer. The MP2 and DFT dipole moments deviate from the best estimate by no more than 0.03 D, but remain in qualitative agreement for both StructA and StructB. The figure below shows the directionality of the dipole. The head of the (unscaled) arrow points toward the negative pole while the tail is oriented toward the positive pole. The numbers on the atoms represent 'natural charges' from a Natural Bond Orbital (NBO) analysis of the MP2 optimized StructA. Clearly the carbon atoms carry a small negative charge, as these atoms are sucking (great scientific term here) electron density away from the neighboring hydrogen atoms. This is because the nuclear charge (i.e. the number of protons) on carbon is much greater than that of hydrogen (6 vs. 1). It was suggested that I add some data highlighting the energy difference between the two conformers. The following table presents the energy difference (where $\Delta E_{\mathrm{A-B}}$ is equivalent to E(A)-E(B)) of each optimized geometry using the MP2-F12 and CCSD(T)-F12 single point energies (in $\mathrm{kcal\ mol}^{-1}$). We can see that StructA is about 1.5 $\mathrm{kcal\ mol}^{-1}$ lower in energy than StructB. This makes sense because StructB is a higher-order saddle point and StructA is a minimum on the PES. (I'm actually quite surprised that the energy difference is this large for a couple of methyl rotations...)

Conclusions

The dipole moment for two conformers of cis-2-butene has been examined using seven different computational approaches and a triple-$\zeta$ quality basis set. The performance of these methods has been tested by evaluating the energies of each geometry with the 'gold standard' CCSD(T) method (the explicitly correlated variant).
Good agreement is seen across the board with respect to the Hessian indices, energies, and dipole moments. Only StructA was a minimum on each PES. Both conformers of cis-2-butene have a very weak dipole moment on the order of 0.2 -- 0.3 D. The positive pole is in the vicinity of the methyl groups whereas the negative pole is centered around the $sp^2$ hybridized carbons.

FAQ

So you may be asking yourself (or rather, should be asking yourself) questions such as those listed below. I will tackle them one at a time. 1.) Why did we look at two conformers of cis-2-butene and why does it matter? StructA is a minimum, which means that if you were to characterize this guy in the gas phase, you'd find StructA rather than StructB. This is important because if we were to report these results to other scientists, they will want to know what they can expect to find without wondering. Therefore, StructA is going to be our conformer of interest rather than StructB. The latter will still provide insightful results, but that is about it for the purposes of this examination. 2.) Why did we use a variety of methods to characterize these molecular systems? Computational methods are simply approximations. They are not guaranteed to give the 'correct' answer. So we try to address this by using a variety of methods (i.e. approximations) and we analyze the results accordingly. If the results are in agreement, then we can feel confident that the results are correct, since we tested them against a list of methods. You can never rely on just one method unless it is rigorous, well-tested and has been shown repeatedly to perform well in the literature. When we say 'perform well', this usually means that the computational results are in reasonable agreement with experimental results. This is important because experimental results are the LAW (for all intents and purposes).
If the computations disagree with experiment, 99.9% of the time this means that your computational approach sucked and that your approximation was either flawed or misapplied. By using a host of methods, we can put a little more faith in the results we observe, because massive disagreement between the results of the methods is very unlikely in a normal situation. 3.) What is the point of doing a bunch of energy points? Again, because we used a slew of methods to characterize the geometries of cis-2-butene, we end up with non-identical geometries each time we use a new approximation. For instance, the methyl C-H bond lengths from B3LYP are going to be a little bit different from those obtained with MP2. So then that begs the question, "How do these small differences affect the property of the system that we are interested in?" Generally, minute differences will have little effect on the resulting energies of each molecule under consideration. Energy is a very important property that chemists love to look at. So, if we take each geometry (and each one is unique from the others) and we evaluate the energy of the molecule at the same level of theory (in this case, MP2-F12 and CCSD(T)-F12), then we can quickly see how 'resolved' each geometry was. There should be very good agreement between the relative energies of each geometry (probably to within a few tenths of a $\mathrm{kcal\ mol}^{-1}$). 4.) Okay, so why MP2-F12 AND CCSD(T)-F12? We use MP2-F12 AND CCSD(T)-F12 to test for 'higher-order correlation effects'. MP2 methods are much cheaper than CCSD(T) methods, but MP2 is not as rigorous and can be error prone in a host of molecular systems. Therefore, we test the performance of MP2 by busting out our 'gold-standard' CCSD(T) method. If MP2 agrees well with CCSD(T), then we can feel confident in our MP2 results and never have to revisit the difficult, time-consuming CCSD(T) computations ever again.
Also, CCSD(T) will tell us how DFT performed. DFT methods must always be calibrated against something more rigorous, since DFT is known for 'getting the right answer for the wrong reasons' and it isn't always right. The 'F12' bit just means that these are 'explicitly-correlated' methods. Rather than give an introduction to what this means, you should understand why we use it instead. You may have noticed that whenever we do a computational job, we specify a method AND a basis set (e.g. heavy-aug-cc-pVTZ). These basis sets can be measured by how many atomic orbitals (or functions) are given in the set. The more that are given, the better the 'basis-set approximation'. Think of this in terms of Riemann sums, where you try to approximate the area under a curve using a set of rectangles. Each rectangle is a basis function, and the set of rectangles you use forms a basis set. The more rectangles you use, the better your approximation of the area under that curve will be. Basis sets in computational chemistry behave the same way. As you approach an infinite number of rectangles, you approach the exact answer. As you approach an infinite number of basis functions, you approach what is called the CBS (complete basis set) limit. At the CBS limit, you have an exact answer. We cannot implement an infinite basis set in chemistry (for obvious reasons), and very large basis sets are cost prohibitive. Therefore, people have devised these F12 approximations, which are constructed in such a way as to give results comparable to those you'd get with a large basis set, but using a relatively small basis set instead! This is a powerful approach to convergent quantum chemistry that saves you a lot of time while maintaining a set of very good results. 5.) Why didn't you provide more pretty pictures? That's just the nature of the beast. Computational chemistry is usually short on graphics but very dense on spreadsheets. Plus... I'm no artist.
I actually spent a good couple hours trying to get some electrostatic potentials posted but the new version of G09 hates the molecular viewer programs I currently use so I ditched that idea.
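As an aside, the Riemann-sum analogy from FAQ 4 can be made concrete. In this little sketch the rectangle count plays the role of the basis-set size, and the exact integral plays the role of the CBS limit (the function and interval are arbitrary choices for illustration):

```python
import math

def riemann(f, a, b, n):
    # Midpoint-rule Riemann sum with n rectangles on [a, b].
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

exact = 2.0  # integral of sin(x) from 0 to pi
for n in (4, 16, 64, 256):
    # The error shrinks as the "basis set" (rectangle count) grows.
    print(n, abs(riemann(math.sin, 0.0, math.pi, n) - exact))
```

Just as with basis sets, each step toward more rectangles buys a systematically better answer at a systematically higher cost.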
A cipher $E_k(m)$ is malleable if there is a nontrivial binary relation $\sim$ on messages such that given $c = E_k(m)$, it is easy to find $c' = E_k(m')$ with $m \sim m'$. For example, AES-CTR is malleable because for any $m$ and $m'$ with $m' = m \oplus \delta$, it is easy to compute $$c' = c \oplus \delta = E_k(m) \oplus \delta = E_k(m \oplus \delta) = E_k(m'),$$ in which case $m \sim m' = m \oplus \delta$. A hash-then-encrypt variant where we encrypt $m \mathbin\| H(m)$ instead is still malleable because it is easy to compute $\delta = (m \oplus m') \mathbin\| [H(m) \oplus H(m')]$ for any desired message $m'$. Similarly, textbook RSA, where the ciphertext for a message $m$ is $m^e \bmod n$, is malleable because for any $m$ and $m'$ with $m' = m \cdot \eta \bmod n$, it is easy to compute $$c' \equiv c \cdot \eta^e \equiv m^e \cdot \eta^e \equiv (m \cdot \eta)^e \equiv (m')^e \pmod n,$$ in which case $m \sim m' = m \cdot \eta \bmod n$. As defined by Dolev–Dwork–Naor (paywall-free) and used throughout the literature, a cipher is nonmalleable (NM-) under an attack model (-CPA, -CCA, -CCA2, etc.) if, after interacting with the oracles in the attack model, and upon being presented a challenge ciphertext $c$ for a message of a form chosen by the adversary, the adversary cannot find—even using further interaction with the oracles—a nontrivial relation $\sim$ and a ciphertext $c'$ whose plaintext is related by $\sim$ to the plaintext of $c$. (Here we rule out trivial relations like $m \sim m$.) In practical terms, nonmalleability means an adversary can't selectively modify ciphertexts; the worst they can do is denial of service or wholesale replacement. 
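To see the stream-cipher malleability concretely, here is a toy sketch. The keystream function is a hash-based stand-in, not real AES-CTR, but the bit-flipping arithmetic is exactly the $c' = c \oplus \delta$ attack described above:

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Stand-in keystream (NOT real AES-CTR output); any fixed pseudorandom
    # byte stream suffices to illustrate malleability of XOR encryption.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key = b"secret key"
m = b"PAY ALICE $100"
c = xor(m, keystream(key, len(m)))      # "encrypt"

# The adversary knows neither the key nor the keystream -- only c and the
# XOR difference between the real message and the target message.
m_target = b"PAY ALICE $900"
delta = xor(m, m_target)
c_forged = xor(c, delta)

# The recipient decrypts the forged ciphertext to the attacker's message.
assert xor(c_forged, keystream(key, len(m))) == m_target
```

The same arithmetic goes through against the hash-then-encrypt variant once $\delta$ also covers the hash field, as noted above.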
The Bellare–Namprempre 2000 paper you cited on generic composition of symmetric ciphers and MACs doesn't formally define nonmalleability because nonmalleability is largely not important for the symmetric setting: when the sender and recipient share a key, they generally care about eavesdropping and forgery, but not about selective modification—detecting forgery means detecting selective modifications, and rejecting forgeries prevents abusing selective modifications to leak secrets. Nonmalleability matters more in the setting of public-key anonymous encryption, like an activist leaking a document to a journalist, where there is no notion of authentication to prevent forgery per se, since anyone can anonymously submit documents. While selective modification may not directly reveal to the adversary what a message was, it can often be exploited in a larger system (like a PGP mail client, as in EFAIL) to leak the message. So nonmalleability is defined in the Bellare–Desai–Pointcheval–Rogaway 1998 paper on generic relations between notions of security for public-key encryption.
Consider the following equations $$ \begin{array}{ccc}\tag{1} z+1&=&\frac{1}{z},\\ z+2&=&\frac{1}{z^2},\\ \vdots &=& \vdots\\ z+k &=&\frac{1}{z^k}, \end{array} $$ where $k$ is a positive integer. Question: How can we find all positive real solutions of $(1)$ when $k$ is given? Example: The only positive real solution of $(1)$ when $k=1$ and $k=2$ is $z=\frac{1}{\mu}$, where $\mu=\frac{1+\sqrt{5}}{2}$ (the golden ratio). My try: I tried to prove that the system $(1)$ has no positive real solution for $k>2$. Proof: Suppose $(1)$ has a positive real solution $z$. Then, using the $(k-1)$-st equation $z+k-1=\frac{1}{z^{k-1}}$, we get $$z+k=\frac{1}{z^k} \Longleftrightarrow (z+k-1)+1=\frac{1}{z^k} \Longleftrightarrow \frac{1}{z^{k-1}}+1=\frac{1}{z^k} \Longleftrightarrow z^k+z-1=0,$$ but the equation $z^k+z-1=0$ has no positive real solution for $k>2$. Is my proof correct? Thanks for any suggestion. Edit(1): My proof is incorrect, since the equation $z^k+z-1=0$ does have a positive real solution for $k>2$. Could you help me improve my proof or give a correct proof for the question? Thanks.
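Regarding Edit(1): a quick numerical check confirms that $z^k+z-1=0$ has a positive real root for every $k\ge 1$. Since $f(z)=z^k+z-1$ satisfies $f(0)=-1<0$ and $f(1)=1>0$, the intermediate value theorem puts a root in $(0,1)$, and bisection finds it. A sketch (an illustration, not part of any proof):

```python
def positive_root(k: int, tol: float = 1e-12) -> float:
    # Bisection on (0, 1): f(0) = -1 < 0 and f(1) = 1 > 0, so a root exists.
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mid**k + mid - 1 < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for k in (1, 2, 3, 5, 10):
    z = positive_root(k)
    print(k, z, abs(z**k + z - 1))

# For k = 2 the root is 1/mu = (sqrt(5) - 1)/2, matching the golden-ratio example.
```

Of course, a root of the single equation $z^k+z-1=0$ need not satisfy the whole system $(1)$ simultaneously, which is where the original proof breaks down.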
I'm trying to come up with examples to better understand the definition of the cell decomposition of a topological space $X$. The simplest example I could think of would be $X =[0, 1] \subseteq \mathbb{R}$. This is the definition I'm working with. If $X$ is a nonempty topological space, a cell decomposition of $X$ is a partition $\Gamma$ of $X$ into subspaces that are open cells of various dimensions, such that the following condition is satisfied: for each cell $e \in \Gamma$ of dimension $n \geq 1$ there exists a continuous map $\Phi$ from some closed $n$-cell $D$ into $X$ (called a characteristic map for $e$) that restricts to a homeomorphism from $\text{Int}(D)$ onto $e$ and maps $\text{Bd}(D)$ into the union of all cells of $\Gamma$ of dimensions strictly less than $n$. So for $X = [0, 1]$, if I set $\Gamma = \{(0,1), \{0\}, \{1\}\}$ I run into a problem: although $(0, 1)$ is homeomorphic to the open ball $\mathbb{B}^1$, $\{0\}$ and $\{1\}$ are both homeomorphic to the closed ball $\overline{\mathbb{B}^0}$, so unless there's a wilder decomposition, I can't partition $X$ into subspaces that are homeomorphic to open balls. I'm guessing, however, that $[0, 1]$ does have a cell decomposition, so there must be a way to decompose $X$ into subspaces which are open cells whose union equals $X$. If so, what would an example of a cell decomposition of $X$ be?
The equation for the rate constant ($\pu{s^{-1}}$) for Förster (or dipole-dipole resonance) energy transfer at separation $R$ is $$ k_R= \alpha\frac{\kappa^2\phi}{\tau R^6}\int_0^{\infty} \frac{F(\nu)\epsilon(\nu)}{\nu^4} d\nu$$ where the constant $\alpha =(9000\ln(10))/(128\pi^5n^4N)$, $n$ is the solution refractive index and $N$ Avogadro's number. The quantum yield of the donor is $\phi$, its excited state lifetime (in the absence of quencher) is $\tau$, the orientation term is $\kappa$ and $R$ the separation of donor and acceptor. In the (overlap) integral, $F$ is the fluorescence spectrum measured in frequency (not wavelength), and the area under $F(\nu)$ is normalised to unity. The molar extinction coefficient of the acceptor is $\epsilon(\nu)$, also measured on a frequency scale, normally in units $\pu{dm^3mol^{-1}cm^{-1}}$. The rate constant is more commonly written as $$k_R=\frac{1}{\tau} \left( \frac{R_0}{R} \right)^6 $$ where $R_0$ is the critical distance at which the energy transfer rate constant equals the fluorescence rate constant ($1/\tau$); it is also a measure of the overlap of fluorescence from the donor and absorption by the acceptor. Thus the decay rate of a molecule that is fluorescing and undergoing energy transfer at separation $R$ is $k=k_f+k_{isc}+k_{ic}+k_R$, where isc and ic are intersystem crossing and internal conversion respectively. In your questions: (A) the bigger this is, the larger $k_R$ is. (B) and (C) there is no real cut-off; $R_0$ for chlorophyll is $\approx 8$ nm, so transfer can occur beyond this distance, it just falls off as $1/R^6$. (D) Clearly the smaller $\phi$ is, the lower the rate of transfer, in direct proportion. If the molecule has a low fluorescence yield, clearly it cannot transfer energy well, as the energy is directed elsewhere. (E) If $\epsilon$ is small then the overlap integral is small, so the rate $k_R$ is small.
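To make the $1/R^6$ falloff concrete, here is a small sketch of the $k_R = (1/\tau)(R_0/R)^6$ form above. The numbers ($R_0 = 5$ nm, $\tau = 4$ ns) are illustrative assumptions, not values from this answer, and the efficiency formula ignores the isc and ic channels:

```python
tau = 4e-9   # donor lifetime in s (assumed, for illustration)
R0 = 5.0     # critical distance in nm (assumed, for illustration)

def k_forster(R: float) -> float:
    # Transfer rate constant: k_R = (1/tau) * (R0/R)^6
    return (1.0 / tau) * (R0 / R) ** 6

def efficiency(R: float) -> float:
    # Transfer efficiency vs. fluorescence only: E = k_R / (k_R + 1/tau),
    # which simplifies to R0^6 / (R0^6 + R^6).
    kR = k_forster(R)
    return kR / (kR + 1.0 / tau)

for R in (2.5, 5.0, 10.0):
    print(R, k_forster(R), efficiency(R))
```

At $R = R_0$ the transfer rate equals $1/\tau$, so the efficiency is exactly one half, and doubling the distance cuts the rate by a factor of $2^6 = 64$.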
Mathematician:John Lewis Selfridge

Mathematician. Proved in $1962$ that $78 \ 557$ is a Sierpiński number of the second kind.

Nationality: American

History
Born: February 17, 1927, Ketchikan, Alaska, United States
Died: October 31, 2010, DeKalb, Illinois

Theorems and Definitions
Baillie-PSW Primality Test (with Robert Baillie, Carl Bernard Pomerance and Samuel Standfield Wagstaff Jr.)
Erdős-Selfridge Function (with Paul Erdős)

Publications
1960: E1408: The Highest Power of $2$ in the Numerator of $\sum_{i = 1}^k 1 / \left({2 i - 1}\right)$ (Amer. Math. Monthly Vol. 67: 924 – 925) (with D.L. Silverman) www.jstor.org/stable/2309478
1974: A New Function Associated with the prime factors of $\binom n k$ (with E.F. Ecklund Jr. and Paul Erdős)
1975: Not Every Number is the Sum or Difference of Two Prime Powers (Math. Comp. Vol. 29, no. 129: 79 – 81) (with Frederick R. Cohen) (in which Not Every Number is the Sum or Difference of Two Prime Powers is presented) www.jstor.org/stable/2005463
July 1980: The Pseudoprimes to $25 \cdot 10^9$ (Math. Comp. Vol. 35, no. 151: 1003 – 1026) (with Carl Pomerance and Samuel S. Wagstaff, Jr.) www.jstor.org/stable/2006210
1983: Factorizations of $b^n \pm 1$ up to high powers (Contemporary Mathematics Vol. 22: 1 – 178) (with John Brillhart, D.H. Lehmer, Samuel S. Wagstaff Jr. and Bryant Tuckerman)
Dec. 1986: Pairs of Squares with Consecutive Digits (Math. Mag. Vol. 59, no. 5: 270 – 275) (with C.B. Lacampagne) www.jstor.org/stable/2689401
1988: Factorizations of $b^n \pm 1, b = 2, 3, 5, 6, 7, 10, 11, 12$ up to high powers (2nd ed.) (Contemporary Mathematics Vol. 22: 1 – 226) (with John Brillhart, D.H. Lehmer, Samuel S. Wagstaff Jr. and Bryant Tuckerman)
An inequality concerning the derivatives of vector functions $f:\mathbb R^n\to \mathbb R^n$. Assuming that $f$ is continuously differentiable, we denote by $Df$ the Jacobian matrix of its differential and by $D^s f$ its symmetric part, namely the matrix with entries
\[\frac{1}{2} \left(\frac{\partial f_j}{\partial x_i} + \frac{\partial f_i}{\partial x_j}\right)\, .\]
Denoting by $|Df|$ and $|D^s f|$ the corresponding Hilbert-Schmidt norms, the original inequality of Korn (see [K2]) states that, if $f\in C^1_c (\mathbb R^n)$, then
\[\int |Df|^2 \leq 2 \int |D^s f|^2\, .\]
In fact, when $f$ is $C^2$ a simple integration by parts yields the identity
\[\int |D^s f|^2 = \frac{1}{2} \int |D f|^2 + \frac{1}{2} \int ({\rm div}\, f)^2\, ,\]
from which Korn's inequality is obvious. A standard approximation procedure then yields the general statement; for the same reason the inequality holds for functions in the Sobolev class $H^1_0$, obtained by completing $C^1_c$ in the $H^1$ norm. Korn's inequality can also be concluded easily using the Fourier transform, and it generalizes to $L^p$ norms for $1 < p < \infty$:
\[\|Du\|_{L^p} \leq C \|D^s u\|_{L^p}\, ,\]
where the constant $C$ depends, additionally, upon $p$. The latter generalization uses the Calderon-Zygmund estimates for singular integral operators, see for instance [C]. The cases $p = 1, \infty$ of the inequality are false, as implied by a more general theorem of Ornstein about the failure of $L^1$ estimates for general singular integral operators, see [O].
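For completeness, the integration by parts behind the identity can be spelled out (for $f$ smooth and compactly supported, with summation over repeated indices):

```latex
% Integrate by parts twice, first in x_i and then in x_j:
\[
\int \partial_i f_j\,\partial_j f_i
  = -\int f_j\,\partial_j \partial_i f_i
  = \int \partial_j f_j\,\partial_i f_i
  = \int ({\rm div}\, f)^2\, .
\]
% Expand the symmetric part pointwise:
\[
|D^s f|^2
  = \frac14 \sum_{i,j} (\partial_i f_j + \partial_j f_i)^2
  = \frac12 |Df|^2 + \frac12 \sum_{i,j} \partial_i f_j\,\partial_j f_i\, ,
\]
% and integrate to obtain the identity in the text:
\[
\int |D^s f|^2 = \frac{1}{2} \int |D f|^2 + \frac{1}{2} \int ({\rm div}\, f)^2\, .
\]
```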
For a modern proof the reader might consult [CFM]. The Korn inequality has several applications in the theory of nonlinear elasticity (and was in fact originally derived by Korn in linear elasticity, see [K]); cf. [C2], [F]. [C] P. G. Ciarlet, "On Korn's inequality", Chinese Ann. Math., Ser B 31 (2010), pp. 607-618. [C2] P. G. Ciarlet, "Mathematical Elasticity", Vol. I : Three-Dimensional Elasticity, Series “Studies in Mathematics and its Applications”, North-Holland, Amsterdam, 1988. [CFM] S. Conti, D. Faraco, F. Maggi, "A new approach to counterexamples to $L^1$ estimates: Korn’s inequality, geometric rigidity, and regularity for gradients of separately convex functions", Arch. Rat. Mech. Anal. 175, (2005), pp. 287-300. [K] A. Korn, "Solution general du probleme d'equilibre dans la theorie de l'elasticite", Annales de la Faculte de Sciences de Toulouse, 10, (1908), pp. 705-724 [K2] A. Korn, "Ueber einige Ungleichungen, welche in der Theorie der elastischen und elektrischen Schwingungen eine Rolle spielen", Bulletin internationale de l'Academie de Sciences de Cracovie, 9, (1909), pp. 705-724 [O] D. Ornstein, "A non-inequality for differential operators in the $L^1$ norm", Arch. Rational Mech. Anal., 11, (1962), pp. 40–49 [F] G. Fichera, "Existence theorems in elasticity theory", Handbuch der Physik, VIa/2, Springer (1972) pp. 347–389
Cars, of random length $L$, arrive at a gate. The first car parks against the gate. Each subsequent car parks behind the previous one at a distance uniformly distributed on $[0,1]$. Let $N(t)$ be the number of cars parked within a distance $t$ of the gate. Find: $$ \lim_{t\to \infty} E[N(t)]/t $$ I have not been given a distribution for $L$. If there were no space between cars I could just use the continuous renewal equation; I'm not sure how to incorporate the space between cars here. I think the expected value of the total space between cars is $\frac{1}{2}(N(t)-1)$. Assuming I have found this correctly, could I just add this expected value to the expected value of the general continuous renewal process?
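A quick way to sanity-check the intuition here is simulation. Since each car advances the front of the queue by $L$ plus a $U[0,1]$ gap, the elementary renewal theorem suggests $E[N(t)]/t \to 1/(E[L] + 1/2)$, i.e. the mean gap of $1/2$ folds into the inter-arrival mean. The car-length distribution below (exponential with mean 2) is an assumption for illustration only, since the problem leaves $L$ unspecified:

```python
import random

random.seed(0)
mean_L = 2.0     # assumed mean car length (the problem leaves L unspecified)
t = 10_000.0

def count_cars(t: float) -> int:
    # The first car parks against the gate (no initial gap); each later car
    # leaves a Uniform[0,1] gap behind the previous one.
    pos, n = 0.0, 0
    while True:
        pos += random.expovariate(1.0 / mean_L)  # car length L
        if pos > t:
            return n
        n += 1
        pos += random.random()                   # gap U ~ Uniform[0,1]

estimate = sum(count_cars(t) for _ in range(20)) / (20 * t)
print(estimate, 1.0 / (mean_L + 0.5))  # simulated vs. renewal-theorem value
```

The two printed numbers agreeing is not a proof, but it supports treating $L + U$ as the inter-renewal distance rather than adding the gap term afterwards.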
In terms of energy, the 3-dimensional MB distribution gives the probability for a particle to have an energy between $E$ and $E + dE$: $$f(E)\,dE = \frac{2}{\sqrt \pi} \cdot \bigg(\frac{1}{k_BT}\bigg)^{\frac{3}{2}} \cdot e^{-\frac{E}{k_BT}} \cdot \sqrt{E} \cdot dE$$ It is said that the 1-dimensional MB distribution, giving the probability for a particle to have an energy between $E_x$ and $E_x + dE_x$ in one degree of freedom, is: $$f(E_x)\,dE_x = \sqrt{\frac{1}{\pi E_x k_B T}} \cdot e^{-\frac{E_x}{k_BT}} \cdot dE_x$$ How is this 1D MB distribution derived from scratch and from the 3D MB distribution? The MB distribution for one degree of freedom in terms of momentum is: $$ w(p_x) dp_x = \frac1{\sqrt{2\pi m k_BT}} e^{-\frac{p_x^2}{2mk_BT}} dp_x $$ The relation between $p_x$ and $E$ is $$ E = \frac{p_x^2}{2m}\ \longleftrightarrow \ p_x(E) = \pm\sqrt{2mE}. $$ Probability "conservation" requires $$ f(E)\, dE = 2\, w(p_x(E))\, dp_x(E), $$ where the branch $p_x = \sqrt{2mE}$ is chosen and the factor of $2$ accounts for the two possible signs of the momentum. From the last equation it follows that $$ f(E) = 2\ w(p_x(E))\ |p_x'(E)|. $$ This formula leads to the needed expression. Consider a canonical ensemble of a single particle in a 1D container of length $L$. According to the (classical) Boltzmann probability distribution, the probability density function for the state of the system being the microstate with position $x'$ and momentum $p'$ is $$ f_{x,p}(x',p') = \frac{e^{-\beta H(x', p')}}{\int\limits_0^L dx'' \int\limits_{-\infty}^\infty dp'' e^{-\beta H(x'',p'')}}$$ where $ H(x, p) = p^2/2m $ is the energy of the particle with position $x$ and momentum $p$, and $\beta = 1/k_BT$.
The marginal probability density function for the momentum is $$ f_p(p') = \int\limits_0^L dx' f_{x,p}(x',p') = \frac{\int\limits_0^L dx'e^{-\beta p'^2/2m}}{\int\limits_0^L dx'' \int\limits_{-\infty}^\infty dp'' e^{-\beta p''^2/2m}} = \frac{L e^{-\beta p'^2/2m}}{L\int\limits_{-\infty}^\infty dp'' e^{-\beta p''^2/2m}} = \sqrt{\frac{\beta}{2\pi m}} e^{-\beta p'^2/2m}. $$ To calculate the probability distribution function for the energy, note that the probability that the energy is less than $E'$ is given by $$ \int\limits_0^{E'} dE'' f_E(E'') = \int\limits_{-\sqrt{2mE'}}^{\sqrt{2mE'}} dp'' f_p(p'') = 2\int\limits_0^{\sqrt{2mE'}} dp'' f_p(p'') . $$ Differentiating both sides with respect to $E'$ and using the fundamental theorem of calculus, $$f_E(E') = 2 \sqrt{\frac{m}{2E'}} f_p(\sqrt{2mE'}) = \sqrt{\frac{2m}{E'}} \sqrt{\frac{\beta}{2\pi m}} e^{-\beta E'} = \sqrt{\frac{\beta}{\pi E'}} e^{-\beta E'}. $$ If you don't know where the Boltzmann probability distribution that I started with comes from, it is a standard result of statistical mechanics that should be found in any statistical mechanics textbook. If you want to ask about that, I think that probably deserves its own question.
If you wish to start with a particle free to move in a 3D container of volume $V$, $$ f_{\vec{x},\vec{p}}(\vec{x}',\vec{p}') = \frac{e^{-\beta H(\vec{x}', \vec{p}')}}{\int\limits_V d^3\vec{x}'' \int\limits_{-\infty}^\infty d^3\vec{p}'' e^{-\beta H(\vec{x}'',\vec{p}'')}}$$ $$ f_\vec{p}(\vec{p}') = \int\limits_V d^3\vec{x}' f_{\vec{x},\vec{p}}(\vec{x}',\vec{p}') = \frac{\int\limits_V d^3\vec{x}'e^{-\beta p'^2/2m}}{\int\limits_V d^3\vec{x}'' \int\limits_{-\infty}^\infty d^3\vec{p}'' e^{-\beta p''^2/2m}} = \frac{V e^{-\beta p'^2/2m}}{V\int\limits_{-\infty}^\infty d^3\vec{p}'' e^{-\beta p''^2/2m}} = \frac{e^{-\beta p'^2/2m}}{\left( \int\limits_{-\infty}^\infty dp_x'' e^{-\beta p_x''^2/2m} \right) \left( \int\limits_{-\infty}^\infty dp_y'' e^{-\beta p_y''^2/2m} \right) \left( \int\limits_{-\infty}^\infty dp_z'' e^{-\beta p_z''^2/2m} \right)} = \left( \frac{\beta}{2\pi m} \right)^{3/2} e^{-\beta p'^2/2m} = \left( \frac{\beta}{2\pi m} \right)^{3/2} e^{-\beta (p_x'^2 + p_y'^2 + p_z'^2)/2m}. $$ This is the joint probability density function for all three components of the momentum. If you want the marginal probability density function for the $x$-component only, $$ f_{p_x}(p_x') = \int\limits_{-\infty}^\infty dp_y' \int\limits_{-\infty}^\infty dp_z' f_\vec{p}(\vec{p}') = \left( \frac{\beta}{2\pi m} \right)^{3/2} e^{-\beta p_x'^2/2m} \int\limits_{-\infty}^\infty dp_y' e^{-\beta p_y'^2/2m} \int\limits_{-\infty}^\infty dp_z' e^{-\beta p_z'^2/2m} = \sqrt{\frac{\beta}{2\pi m}} e^{-\beta p_x'^2/2m} $$ so you end up with the same distribution as the 1D case. If you want the distribution of the energy due to the motion along the $x$ direction (i.e. $E_x = p_x^2/2m$), the math to derive it is identical to the 1D case above, and you end up with $$f_{E_x}(E_x') = \sqrt{\frac{\beta}{\pi E_x'}} e^{-\beta E_x'}. $$
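If you want a numerical sanity check of the final result: sampling $p_x$ from the Gaussian momentum distribution above (variance $m/\beta$) and converting to $E_x = p_x^2/2m$ should reproduce $\langle E_x\rangle = 1/(2\beta) = k_BT/2$, which is the mean of $f_{E_x}$. A quick Monte Carlo sketch, in illustrative units where $k_BT = m = 1$:

```python
import math
import random

random.seed(1)
m, beta = 1.0, 1.0           # units chosen so that k_B*T = 1 (illustrative)
sigma = math.sqrt(m / beta)  # standard deviation of p_x under f_p

# Draw momenta, convert to kinetic energies E_x = p_x^2 / (2m):
samples = [random.gauss(0.0, sigma) ** 2 / (2 * m) for _ in range(200_000)]
mean_E = sum(samples) / len(samples)
print(mean_E)  # should be close to 1/(2*beta) = 0.5
```

The sample mean landing near $k_BT/2$ is exactly the equipartition value carried by the $\sqrt{\beta/(\pi E_x)}\, e^{-\beta E_x}$ density.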
(Convexity is closed under intersection) Given a family of convex sets $C_i$, where $i\in I$ for some index set $I$, the intersection $\cap_{i\in I}C_i$ is convex. In other words, the intersection of convex sets is convex. Proof. Suppose two points $p$ and $q$ are in $\cap_{i\in I}C_i$. That is, for each $i\in I$, $p\in C_i$ and $q\in C_i$. Consider $L$, the line segment between $p$ and $q$. By the definition of convexity, $L\subseteq C_i$ for each $i\in I$. That is, $L\subseteq \cap_{i\in I}C_i$. By the definition of convexity again, $\cap_{i\in I}C_i$ is convex. (Convexity is closed under translation) If $C$ is a convex set, then $C_t=\{x+t\mid x\in C\}$ is also convex. In other words, if you translate a convex set, you get a convex set. The proof is skipped. (Convexity is closed under erosion) Suppose $A$ is a convex set, and let $B$ be another set. Then the erosion of $A$ by $B$ is convex. Proof. I will let you figure this out given the above two lemmas. Another hint is in the statement itself: you do not need $B$ to be convex at all. Is the erosion of $A$ by $B$ always empty? Imagine both $A$ and $B$ are disks centered at the origin, where $A$ is huge and $B$ is very small. Then the erosion by $B$ can only remove a small strip along the circumference of $A$; the inner part of $A$ will not be eroded. So the erosion of $A$ by $B$ keeps a significant part of $A$. You can also check an example on Wikipedia.
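Here is a 1-D sketch of how the lemmas combine: erosion can be written as an intersection of translates, $A \ominus B = \bigcap_{b\in B}(A-b)$, and for intervals on the real line that intersection is just a max/min computation. (The interval representation and the finite $B$ are simplifying assumptions for illustration.)

```python
def erode_interval(A, B):
    # A = (lo, hi), a closed interval (a convex subset of R);
    # B = iterable of real shifts.
    # Erosion as intersection of translates: A - b = (lo - b, hi - b).
    lo, hi = A
    los = [lo - b for b in B]
    his = [hi - b for b in B]
    # The intersection of intervals is an interval (possibly empty if lo > hi),
    # so it is automatically convex.
    return (max(los), min(his))

A = (0.0, 10.0)
B = (-1.0, 0.5, 1.0)
print(erode_interval(A, B))  # intersection of the three translates
```

As in the disk example, a small $B$ only shaves a strip off each end of $A$, and nothing about the argument requires $B$ to be convex.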
Unitriangular matrix group:UT(3,p)

Latest revision as of 11:21, 22 August 2014

This article is about a family of groups with a parameter that is prime. For any fixed value of the prime, we get a particular group. View other such prime-parametrized groups

Contents

1 Definition
2 Families
3 Elements
4 Arithmetic functions
5 Subgroups
6 Linear representation theory
7 Endomorphisms
8 GAP implementation
9 External links

Definition

Note that the case $p = 2$, where the group becomes dihedral group:D8, behaves somewhat differently from the general case. We note on the page all the places where the discussion does not apply to $p = 2$.

As a group of matrices

The group is defined as the group, under matrix multiplication, of unipotent upper-triangular $3 \times 3$ matrices over the prime field $\mathbb{F}_p$:

$$\left \{ \begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & 1 & a_{23} \\ 0 & 0 & 1 \\ \end{pmatrix} \mid a_{12},a_{13},a_{23} \in \mathbb{F}_p \right \}$$

The multiplication of the matrices with entries $a_{12}, a_{13}, a_{23}$ and $b_{12}, b_{13}, b_{23}$ gives the matrix with entries $c_{12}, c_{13}, c_{23}$ where:

$$c_{12} = a_{12} + b_{12}, \qquad c_{13} = a_{13} + b_{13} + a_{12}b_{23}, \qquad c_{23} = a_{23} + b_{23}$$

The identity element is the identity matrix. The inverse of the matrix with entries $a_{12}, a_{13}, a_{23}$ is the matrix with entries $-a_{12}$, $-a_{13} + a_{12}a_{23}$, $-a_{23}$. Note that all addition and multiplication in these definitions is happening over the field $\mathbb{F}_p$.

In coordinate form

We may define the group as the set of triples $(a_{12},a_{13},a_{23})$ over the prime field $\mathbb{F}_p$, with the multiplication law given by:

$$(a_{12},a_{13},a_{23})(b_{12},b_{13},b_{23}) = (a_{12} + b_{12},\, a_{13} + b_{13} + a_{12}b_{23},\, a_{23} + b_{23}),$$

$$(a_{12},a_{13},a_{23})^{-1} = (-a_{12},\, -a_{13} + a_{12}a_{23},\, -a_{23}).$$

The matrix corresponding to the triple $(a_{12},a_{13},a_{23})$ is:

$$\begin{pmatrix} 1 & a_{12} & a_{13} \\ 0 & 1 & a_{23} \\ 0 & 0 & 1 \\ \end{pmatrix}$$

Definition by presentation

The group can be defined by means of the following presentation:

$$\langle x,y,z \mid [x,y] = z,\ xz = zx,\ yz = zy,\ x^p = y^p = z^p = e \rangle$$

where $e$ denotes the identity element.
These commutation relations resemble Heisenberg's commutation relations in quantum mechanics, and so the group is sometimes called a finite Heisenberg group.

The generators $x,y,z$ correspond to the matrices:

$$x = \begin{pmatrix} 1 & 1 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}, \qquad y = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 0 & 1 \\ \end{pmatrix}, \qquad z = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \\ \end{pmatrix}$$

Note that in the above presentation, the generator $z$ is redundant, and the presentation can thus be rewritten as a presentation with only the two generators $x$ and $y$.

As a semidirect product

This group of order $p^3$ can also be described as a semidirect product of the elementary abelian group of order $p^2$ by the cyclic group of order $p$, with the following action. Denote the base of the semidirect product as ordered pairs $(a_{13}, a_{23})$ of elements from $\mathbb{F}_p$. The action of the generator of the acting group is as follows:

$$(a_{13}, a_{23}) \mapsto (a_{13} + a_{23},\, a_{23})$$

In this case, for instance, we can take the subgroup with $a_{12} = 0$ as the elementary abelian subgroup of order $p^2$, i.e., the elementary abelian subgroup of order $p^2$ is the subgroup:

$$\{ (0, a_{13}, a_{23}) \mid a_{13}, a_{23} \in \mathbb{F}_p \}$$

The acting subgroup of order $p$ can be taken as the subgroup with $a_{13} = a_{23} = 0$, i.e., the subgroup:

$$\{ (a_{12}, 0, 0) \mid a_{12} \in \mathbb{F}_p \}$$

Families

These groups fall in the more general family $UT(n,p)$ of unitriangular matrix groups. The unitriangular matrix group $UT(n,p)$ can be described as the group of unipotent upper-triangular matrices in $GL(n,p)$, which is also a $p$-Sylow subgroup of the general linear group $GL(n,p)$. This further can be generalized to $UT(n,q)$ where $q$ is a power of a prime $p$. $UT(n,q)$ is the $p$-Sylow subgroup of $GL(n,q)$.

These groups also fall into the general family of extraspecial groups. For any number of the form $p^{1+2n}$, there are two extraspecial groups of that order: an extraspecial group of "+" type and an extraspecial group of "-" type. $UT(3,p)$ is an extraspecial group of order $p^3$ and "+" type. The other type of extraspecial group of order $p^3$, i.e., the extraspecial group of order $p^3$ and "-" type, is the semidirect product of the cyclic group of prime-square order and the cyclic group of prime order.
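The coordinate form described above is easy to experiment with directly. The following Python sketch (my illustration, not part of the original wiki article) implements the triple multiplication law, checks the stated inverse formula, and confirms that the center consists of the $p$ triples $(0, a_{13}, 0)$:

```python
from itertools import product

def mul(a, b, p):
    """Multiply two triples (a12, a13, a23) in UT(3,p) using the coordinate law."""
    return ((a[0] + b[0]) % p,
            (a[1] + b[1] + a[0] * b[2]) % p,
            (a[2] + b[2]) % p)

def inv(a, p):
    """Inverse formula from the article: (-a12, -a13 + a12*a23, -a23)."""
    return ((-a[0]) % p, (-a[1] + a[0] * a[2]) % p, (-a[2]) % p)

p = 5
G = list(product(range(p), repeat=3))
e = (0, 0, 0)

# every element times its claimed inverse is the identity
assert all(mul(g, inv(g, p), p) == e for g in G)

# the center is exactly {(0, a13, 0)}, so it has order p
center = [g for g in G if all(mul(g, h, p) == mul(h, g, p) for h in G)]
assert center == [(0, a13, 0) for a13 in range(p)]
```

The same `mul` function also exhibits the commutator relation $[x,y] = z$: with $x = (1,0,0)$ and $y = (0,0,1)$, the products $xy$ and $yx$ differ exactly in the middle (central) coordinate.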
Elements

Further information: element structure of unitriangular matrix group:UT(3,p)

Summary

Item | Value
number of conjugacy classes | $p^2 + p - 1$
order | $p^3$. Agrees with the general order formula $q^{n(n-1)/2}$ for $UT(n,q)$, with $n = 3$, $q = p$
conjugacy class size statistics | size 1 ($p$ times), size $p$ ($p^2 - 1$ times)
orbits under automorphism group | Case $p = 2$: size 1 (1 conjugacy class of size 1), size 1 (1 conjugacy class of size 1), size 2 (1 conjugacy class of size 2), size 4 (2 conjugacy classes of size 2 each). Case odd $p$: size 1 (1 conjugacy class of size 1), size $p - 1$ ($p - 1$ conjugacy classes of size 1 each), size $p^3 - p$ ($p^2 - 1$ conjugacy classes of size $p$ each)
number of orbits under automorphism group | 4 if $p = 2$, 3 if $p$ is odd
order statistics | Case $p = 2$: order 1 (1 element), order 2 (5 elements), order 4 (2 elements). Case $p$ odd: order 1 (1 element), order $p$ ($p^3 - 1$ elements)
exponent | 4 if $p = 2$, $p$ if $p$ is odd

Conjugacy class structure

Note that the characteristic polynomial of all elements in this group is $(t-1)^3$, hence we do not devote a column to the characteristic polynomial. For reference, we consider matrices of the form:

$$\begin{pmatrix} 1 & a & b \\ 0 & 1 & c \\ 0 & 0 & 1 \\ \end{pmatrix}$$

Nature of conjugacy class | Jordan block size decomposition | Minimal polynomial | Size of conjugacy class | Number of such conjugacy classes | Total number of elements | Order of elements in each such conjugacy class | Type of matrix
identity element | 1 + 1 + 1 | $t - 1$ | 1 | 1 | 1 | 1 | $a = b = c = 0$
non-identity element, but central (has Jordan blocks of size two and one respectively) | 2 + 1 | $(t-1)^2$ | 1 | $p - 1$ | $p - 1$ | $p$ (2 if $p = 2$) | $a = c = 0$, $b \ne 0$
non-central, has Jordan blocks of size two and one respectively | 2 + 1 | $(t-1)^2$ | $p$ | $2(p-1)$ | $2p(p-1)$ | $p$ (2 if $p = 2$) | $ac = 0$, but not both $a$ and $c$ are zero
non-central, has Jordan block of size three | 3 | $(t-1)^3$ | $p$ | $(p-1)^2$ | $p(p-1)^2$ | $p$ if $p$ odd, 4 if $p = 2$ | both $a$ and $c$ are nonzero
Total (--) | -- | -- | -- | $p^2 + p - 1$ | $p^3$ | -- | --

Arithmetic functions

Compare and contrast arithmetic function values with other groups of prime-cube order at Groups of prime-cube order#Arithmetic functions

For some of these, the function values are different when $p = 2$ and/or when $p = 3$. These are clearly indicated below.
Arithmetic functions taking values between 0 and 3

Function | Value | Explanation
prime-base logarithm of order | 3 | the order is $p^3$
prime-base logarithm of exponent | 1 | the exponent is $p$. Exception when $p = 2$, where the exponent is 4.
nilpotency class | 2 |
derived length | 2 |
Frattini length | 2 |
minimum size of generating set | 2 |
subgroup rank | 2 |
rank as p-group | 2 |
normal rank as p-group | 2 |
characteristic rank as p-group | 1 |

Arithmetic functions of a counting nature

Function | Value | Explanation
number of conjugacy classes | $p^2 + p - 1$ | $p$ elements in the center, and each other conjugacy class has size $p$
number of subgroups | 10 when $p = 2$, $p^2 + 2p + 4$ when $p$ is odd | See subgroup structure of unitriangular matrix group:UT(3,p)
number of normal subgroups | $p + 4$ | See subgroup structure of unitriangular matrix group:UT(3,p)
number of conjugacy classes of subgroups | 8 for $p = 2$, $2p + 5$ for odd $p$ | See subgroup structure of unitriangular matrix group:UT(3,p)

Subgroups

Further information: Subgroup structure of unitriangular matrix group:UT(3,p)

Note that the analysis here specifically does not apply to the case $p = 2$. For $p = 2$, see subgroup structure of dihedral group:D8.

Table classifying subgroups up to automorphisms

Automorphism class of subgroups | Representative | Isomorphism class | Order of subgroups | Index of subgroups | Number of conjugacy classes | Size of each conjugacy class | Number of subgroups | Isomorphism class of quotient (if exists) | Subnormal depth (if subnormal)
trivial subgroup | identity matrix alone | trivial group | 1 | $p^3$ | 1 | 1 | 1 | prime-cube order group:U(3,p) | 1
center of unitriangular matrix group:UT(3,p) | the subgroup given by $a_{12} = a_{23} = 0$.
group of prime order | $p$ | $p^2$ | 1 | 1 | 1 | elementary abelian group of prime-square order | 1
non-central subgroups of prime order in unitriangular matrix group:UT(3,p) | Subgroup generated by any element with at least one of the entries $a_{12}, a_{23}$ nonzero | group of prime order | $p$ | $p^2$ | $p + 1$ | $p$ | $p^2 + p$ | -- | 2
elementary abelian subgroups of prime-square order in unitriangular matrix group:UT(3,p) | join of center and any non-central subgroup of prime order | elementary abelian group of prime-square order | $p^2$ | $p$ | $p + 1$ | 1 | $p + 1$ | group of prime order | 1
whole group | all elements | unitriangular matrix group:UT(3,p) | $p^3$ | 1 | 1 | 1 | 1 | trivial group | 0
Total (5 rows) | -- | -- | -- | -- | $2p + 5$ | -- | $p^2 + 2p + 4$ | -- | --

Tables classifying isomorphism types of subgroups

Group name | GAP ID | Occurrences as subgroup | Conjugacy classes of occurrence as subgroup | Occurrences as normal subgroup | Occurrences as characteristic subgroup
Trivial group | (1,1) | 1 | 1 | 1 | 1
Group of prime order | (p,1) | $p^2 + p + 1$ | $p + 2$ | 1 | 1
Elementary abelian group of prime-square order | (p^2,2) | $p + 1$ | $p + 1$ | $p + 1$ | 0
Prime-cube order group:U3p | (p^3,3) | 1 | 1 | 1 | 1
Total | -- | $p^2 + 2p + 4$ | $2p + 5$ | $p + 4$ | 3

Table listing number of subgroups by order

Group order | Occurrences as subgroup | Conjugacy classes of occurrence as subgroup | Occurrences as normal subgroup | Occurrences as characteristic subgroup
1 | 1 | 1 | 1 | 1
$p$ | $p^2 + p + 1$ | $p + 2$ | 1 | 1
$p^2$ | $p + 1$ | $p + 1$ | $p + 1$ | 0
$p^3$ | 1 | 1 | 1 | 1
Total | $p^2 + 2p + 4$ | $2p + 5$ | $p + 4$ | 3

Linear representation theory

Further information: linear representation theory of unitriangular matrix group:UT(3,p)

Item | Value
number of conjugacy classes (equals number of irreducible representations over a splitting field) | $p^2 + p - 1$. See number of irreducible representations equals number of conjugacy classes, element structure of unitriangular matrix group of degree three over a finite field
degrees of irreducible representations over a splitting field (such as $\mathbb{C}$ or $\overline{\mathbb{Q}}$) | 1 (occurs $p^2$ times), $p$ (occurs $p - 1$ times)
sum of squares of degrees of irreducible representations (equals order of the group) | $p^3$: see sum of squares of degrees of irreducible representations equals order of group
lcm of degrees of irreducible representations | $p$
condition for a field (characteristic not equal to $p$) to be a splitting field | The polynomial $x^p - 1$ should split completely.
For a finite field of size $q$, this is equivalent to $q \equiv 1 \pmod p$.
field generated by character values, which in this case also coincides with the unique minimal splitting field (characteristic zero) | Field $\mathbb{Q}(\zeta_p)$ where $\zeta_p$ is a primitive $p$-th root of unity. This is a degree $p - 1$ extension of the rationals.
unique minimal splitting field (characteristic $\ell \ne p$) | The field of size $\ell^d$ where $d$ is the order of $\ell$ mod $p$.
degrees of irreducible representations over the rational numbers | 1 (1 time), $p - 1$ ($p + 1$ times), $p(p-1)$ (1 time)
Orbits over a splitting field under the action of the automorphism group | Case $p = 2$: Orbit sizes: 1 (degree 1 representation), 1 (degree 1 representation), 2 (degree 1 representations), 1 (degree 2 representation). Case odd $p$: Orbit sizes: 1 (degree 1 representation), $p^2 - 1$ (degree 1 representations), $p - 1$ (degree $p$ representations). number: 4 (for $p = 2$), 3 (for odd $p$)
Orbits over a splitting field under the multiplicative action of one-dimensional representations | Orbit sizes: $p^2$ (degree 1 representations), and $p - 1$ orbits of size 1 (degree $p$ representations)

Endomorphisms

Automorphisms

The automorphisms essentially permute the $p + 1$ subgroups of order $p^2$ containing the center, while leaving the center itself unmoved.

GAP implementation

GAP ID

For any prime $p$, this group is the third group among the groups of order $p^3$. Thus, for instance, if $p = 7$, the group is described using GAP's SmallGroup function as:

SmallGroup(343,3)

Note that we don't need to compute $7^3 = 343$; we can also write this as:

SmallGroup(7^3,3)

As an extraspecial group

For any prime $p$, we can define this group using GAP's ExtraspecialGroup function as:

ExtraspecialGroup(p^3,'+')

For odd $p$, it can also be constructed as:

ExtraspecialGroup(p^3,p)

where the argument $p$ indicates that it is the extraspecial group of exponent $p$. For instance, for $p = 5$:

ExtraspecialGroup(5^3,5)

Other descriptions

Description | Functions used
SylowSubgroup(GL(3,p),p) | SylowSubgroup, GL
SylowSubgroup(SL(3,p),p) | SylowSubgroup, SL
SylowSubgroup(PGL(3,p),p) | SylowSubgroup, PGL
SylowSubgroup(PSL(3,p),p) | SylowSubgroup, PSL
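For readers without GAP at hand, the counts quoted in this article can also be checked by brute force in the coordinate form of the group. The following Python sketch (my addition, not part of the wiki page) verifies, for small odd primes, that the number of conjugacy classes is $p^2 + p - 1$ and that the exponent is $p$:

```python
from itertools import product

def mul(a, b, p):
    """Coordinate multiplication law of UT(3,p) on triples (a12, a13, a23)."""
    return ((a[0] + b[0]) % p,
            (a[1] + b[1] + a[0] * b[2]) % p,
            (a[2] + b[2]) % p)

def inv(a, p):
    return ((-a[0]) % p, (-a[1] + a[0] * a[2]) % p, (-a[2]) % p)

def conjugacy_classes(p):
    """Partition UT(3,p) into conjugacy classes by direct conjugation."""
    G = list(product(range(p), repeat=3))
    seen, classes = set(), []
    for g in G:
        if g not in seen:
            cls = {mul(mul(h, g, p), inv(h, p), p) for h in G}
            seen |= cls
            classes.append(cls)
    return classes

for p in (3, 5):
    classes = conjugacy_classes(p)
    assert len(classes) == p * p + p - 1        # p^2 + p - 1 classes

    # exponent p for odd p: g^p is the identity for every element g
    for g in product(range(p), repeat=3):
        x = (0, 0, 0)
        for _ in range(p):
            x = mul(x, g, p)
        assert x == (0, 0, 0)
```

The exponent check fails for $p = 2$, as the article notes: dihedral group:D8 has exponent 4, not 2.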
I have the function $$f(x)=\frac{2x}{10+x}$$ and I am asked to find its power series representation, which I found to be $$\sum_{n=0}^{\infty} (-1)^{n} \cdot \frac{2x^{n+1}}{10^{n+1}}$$ and I found the radius of convergence to be $R=10$. All up to here is clear and easy, but when I am asked to find the first few terms I tried the following: For $c_0$ I plugged the value $x=0$ into my original function $f(x)$, which equals $0$ [correct answer]. For $c_1$ I plugged the value $x=0$ into the 1st derivative of $f(x)$, which equals $\frac{1}{5}$ [correct answer]. For $c_2$ I plugged the value $x=0$ into the 2nd derivative of $f(x)$ [incorrect answer]. For $c_3$ I plugged the value $x=0$ into the 3rd derivative of $f(x)$ [incorrect answer]. For $c_4$ I plugged the value $x=0$ into the 4th derivative of $f(x)$ [incorrect answer]. So if $c_0$ and $c_1$ are correct, why would the others not be as well? Am I missing something profoundly important?
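A quick exact-arithmetic check (mine, not the asker's) shows what is going on: the Taylor coefficient is $c_n = f^{(n)}(0)/n!$, not $f^{(n)}(0)$ itself, so plugging $0$ into the bare derivatives agrees with the series only for $n = 0$ and $n = 1$, where $0! = 1! = 1$. Writing $f(x) = 2 - 20/(10+x)$ gives the closed form $f^{(n)}(0) = (-1)^{n-1}\, 20\, n!/10^{n+1}$ for $n \ge 1$, which the sketch below compares against the series coefficient $(-1)^{n-1}\, 2/10^n$:

```python
from fractions import Fraction
from math import factorial

def series_coeff(n):
    """Coefficient of x^n in sum_{m>=0} (-1)^m * 2 x^(m+1) / 10^(m+1)."""
    if n == 0:
        return Fraction(0)
    return Fraction((-1) ** (n - 1) * 2, 10 ** n)

def nth_derivative_at_0(n):
    """f(x) = 2 - 20/(10+x)  =>  f^(n)(0) = (-1)^(n-1) * 20 * n! / 10^(n+1)."""
    if n == 0:
        return Fraction(0)
    return Fraction((-1) ** (n - 1) * 20 * factorial(n), 10 ** (n + 1))

# the coefficient is the derivative divided by n!, for every n
for n in range(6):
    assert series_coeff(n) == nth_derivative_at_0(n) / factorial(n)

assert nth_derivative_at_0(1) == Fraction(1, 5)    # c_1 matches f'(0) since 1! = 1
assert series_coeff(2) != nth_derivative_at_0(2)   # but c_2 is f''(0)/2!, not f''(0)
```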
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02) The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Determining the amount of time a process requires calls for a timer. These devices can be simple kitchen timers (not very precise) or complex systems that can measure to a fraction of a second. Accurate time measurement is essential in kinetics studies for assessing rates of chemical reactions. Determining the Rate Law from Experimental Data In order to experimentally determine a rate law, a series of experiments must be performed with various starting concentrations of reactants. The initial rate law is then measured for each of the reactions. Consider the reaction between nitrogen monoxide gas and hydrogen gas to form nitrogen gas and water vapor. \[2 \ce{NO} \left( g \right) + 2 \ce{H_2} \left( g \right) \rightarrow \ce{N_2} \left( g \right) + 2 \ce{H_2O} \left( g \right)\] The following data were collected for this reaction at \(1280^\text{o} \text{C}\) (see table below). Table 18.10.1 Experiment \(\left[ \ce{NO} \right]\) \(\left[ \ce{H_2} \right]\) Initial Rate \(\left( \text{M/s} \right)\) 1 0.0050 0.0020 \(1.25 \times 10^{-5}\) 2 0.010 0.0020 \(5.00 \times 10^{-5}\) 3 0.010 0.0040 \(1.00 \times 10^{-4}\) Notice that the starting concentrations of \(\ce{NO}\) and \(\ce{H_2}\) were varied in a specific way. In order to compare the rates of reaction and determine the order with respect to each reactant, the initial concentration of each reactant must be changed while the other is held constant. Comparing experiments 1 and 2: The concentration of \(\ce{NO}\) was doubled, while the concentration of \(\ce{H_2}\) was held constant. The initial rate of the reaction quadrupled, since \(\frac{5.00 \times 10^{-5}}{1.25 \times 10^{-5}} = 4\). Therefore, the order of the reaction with respect to \(\ce{NO}\) is 2. In other words, \(\text{rate} \propto \left[ \ce{NO} \right]^2\). Because \(2^2 = 4\), the doubling of \(\left[ \ce{NO} \right]\) results in a rate that is four times greater. 
Comparing experiments 2 and 3: The concentration of \(\ce{H_2}\) was doubled while the concentration of \(\ce{NO}\) was held constant. The initial rate of the reaction doubled, since \(\frac{1.00 \times 10^{-4}}{5.00 \times 10^{-5}} = 2\). Therefore, the order of the reaction with respect to \(\ce{H_2}\) is 1, or \(\text{rate} \propto \left[ \ce{H_2} \right]^1\). Because \(2^1 = 2\), the doubling of \(\ce{H_2}\) results in a rate that is twice as great. The overall rate law then includes both of these results. \[\text{rate} = k \left[ \ce{NO} \right]^2 \left[ \ce{H_2} \right]\] The sum of the exponents is \(2 + 1 = 3\), making the reaction third-order overall. Once the rate law for a reaction is determined, the specific rate constant can be found by substituting the data for any of the experiments into the rate law and solving for \(k\). \[k = \frac{\text{rate}}{\left[ \ce{NO} \right]^2 \left[ \ce{H_2} \right]} = \frac{1.25 \times 10^{-5} \: \text{M/s}}{\left( 0.0050 \: \text{M} \right)^2 \left( 0.0020 \: \text{M} \right)} = 250 \: \text{M}^{-2} \text{s}^{-1}\] Notice that the rate law for the reaction does not relate to the balanced equation for the overall reaction. The coefficients of \(\ce{NO}\) and \(\ce{H_2}\) are both 2, while the order of the reaction with respect to the \(\ce{H_2}\) is only one. The units for the specific rate constant vary with the order of the reaction. So far, we have seen reactions that are first or second order with respect to a given reactant. Occasionally, the rate of a reaction may not depend on the concentration of one of the reactants at all. In this case, the reaction is said to be zero-order with respect to that reactant. Summary The process of using experimental data to determine a rate law is described. Contributors CK-12 Foundation by Sharon Bewick, Richard Parsons, Therese Forsythe, Shonna Robinson, and Jean Dupon.
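The worked comparisons in this section can be automated: the order with respect to each reactant is the log-ratio of the initial rates over the log-ratio of the varied concentration, and $k$ then follows by substitution. A short Python sketch using the data from the table above:

```python
from math import log

# (initial [NO], initial [H2], initial rate in M/s) for experiments 1-3
expts = [(0.0050, 0.0020, 1.25e-5),
         (0.010,  0.0020, 5.00e-5),
         (0.010,  0.0040, 1.00e-4)]

# Experiments 1 -> 2: [NO] doubles while [H2] is held constant
order_NO = log(expts[1][2] / expts[0][2]) / log(expts[1][0] / expts[0][0])
# Experiments 2 -> 3: [H2] doubles while [NO] is held constant
order_H2 = log(expts[2][2] / expts[1][2]) / log(expts[2][1] / expts[1][1])

assert round(order_NO) == 2 and round(order_H2) == 1

# Specific rate constant from experiment 1: rate = k [NO]^2 [H2]
no, h2, rate = expts[0]
k = rate / (no ** 2 * h2)
assert abs(k - 250.0) < 1e-6      # 250 M^-2 s^-1, matching the text
```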
For what $k\in\mathbb N$ is $\sqrt{n}+\sqrt{n+k}$ irrational for all $n\in\mathbb N$? Well, a possible but perhaps in the long run not exhaustive method, since you must find for what $k \in \mathbb{N}$ this is true; consider $$(\sqrt{n}+\sqrt{n+k})(-\sqrt{n}+\sqrt{n+k})=-n+(n+k)=k$$ Now suppose $\sqrt{n}+\sqrt{n+k}$ is rational. Since $k$ is rational, the factor $\sqrt{n+k}-\sqrt{n}$ is rational as well, and hence so are $\sqrt{n}$ and $\sqrt{n+k}$ (half the difference and sum of the two). Thus \begin{align} n&=&p^{2} \\ n+k&=& q^{2} \end{align} Perhaps you can see where I am going with this and finish it off? Hint: If $k=2m+1$, $m\ge1$, then $\sqrt{n}+\sqrt{n+k}$ fails to be an irrational number for $n=m^2$: $$\sqrt{n}+\sqrt{n+k}=\sqrt{m^2}+\sqrt{m^2+2m+1}=m+(m+1)\in\mathbb{Q}$$ If $k=4m$, $m>1$, then $\sqrt{n}+\sqrt{n+k}$ fails to be an irrational number for $n=(m-1)^2$: $$\sqrt{n}+\sqrt{n+k}=\sqrt{(m-1)^2}+\sqrt{(m-1)^2+4m}=(m-1)+(m+1)\in\mathbb{Q}$$ This result is not true in general: for example, suppose that $n$ is a square, $n=l^2$, and $k=0$; then $\sqrt{l^2+0}+\sqrt{l^2}$ is rational. Or suppose that $n=9$, $k=16$; then $\sqrt{16+9}+\sqrt{9}$ is rational. The question is: given $n$, for what values of $k$ is $\sqrt{n+k}+\sqrt{n}$ irrational? Proposition. Suppose that $k,n\in \mathbb N$ and $n(n+k)$ is not a square; then $\sqrt{n+k}+\sqrt{n}$ is irrational. Proof: $(\sqrt{n}+\sqrt{n+k})^2=n+n+k+2\sqrt{n}\sqrt{n+k}$ $((\sqrt{n}+\sqrt{n+k})^2-2n-k)^2=4n(n+k)$ Consider $P(X)= (X^2-2n-k)^2-4n(n+k)=X^4-2(2n+k)X^2+(2n+k)^2-4n(n+k)=X^4-2(2n+k)X^2+k^2$ $\sqrt{n+k}+\sqrt{n}$ is a root of $P(X)$. Consider $Q(U)=U^2-2(2n+k)U+k^2$. The discriminant of $Q(U)$ is $4(2n+k)^2-4k^2=4(4n^2+4nk+k^2)-4k^2 =16n(n+k)$; thus if $n(n+k)$ is not a square, $Q$ has no rational roots. Since $(\sqrt{n}+\sqrt{n+k})^2$ is a root of $Q$, it is irrational, and hence so is $\sqrt{n}+\sqrt{n+k}$. Solution: $\sqrt{n}+\sqrt{n+k}$ is irrational for every $n$ exactly when $k \equiv 2 \pmod 4$, that is, when $k$ is twice an odd number. (Obviously $k$ cannot equal $0$.) Indeed, the sum can be rational only when $k = q^2 - m^2$ is a difference of two squares, and a difference of two squares is either odd or divisible by $4$; it is never twice an odd number.
If $0$ is not considered a natural number, then $k = 1$ and $k = 4$ are further exceptional values: $\sqrt{n}+\sqrt{n+1}$ and $\sqrt{n}+\sqrt{n+4}$ are rational only for $n = 0$. $\sqrt{n} + \sqrt{n + k} = r = a/b; \gcd(a,b) = 1 \implies n + k = r^2 + n - 2r\sqrt{n} \implies k = r(r - 2\sqrt{n}) \implies k = (a/b)(a/b - 2m); n = m^2$ for some integer $m$. Which implies $bk = a^2/b - 2am \in \mathbb Z \implies b = 1$. So the sum can be rational only when $k = a(a - 2m)$ for some integers $a, m$ with $n = m^2$; otherwise it is irrational. If $k = 2j + 1$ is odd, we can take $a = k$ and $m = j$, giving $k = k(k - 2j)$, and indeed $\sqrt{j^2} + \sqrt{j^2 + 2j + 1} = j + (j+1)$ is rational. If $k = 4c$ with $c \ge 2$, we can take $a = 2c$ and $m = c - 1$, giving $k = 2c(2c - 2(c-1)) = 4c$, so the sum is rational for $n = (c-1)^2$. On the other hand, $a$ and $a - 2m$ always have the same parity, so $a(a - 2m)$ is either odd or divisible by $4$; hence if $k \equiv 2 \pmod 4$ we can never have $k = a(a - 2m)$, and the sum is irrational for every $n$. Suppose that $n$ is not a perfect square and that $$\sqrt n +\sqrt {n+k}=r, \quad r\in\mathbb{Q}$$ Now we have: $$\sqrt {n+k} =r-\sqrt n $$ $$n+k=r^2-2r\sqrt n +n$$ $$2r\sqrt n=r^2-k$$ $$\sqrt n=\frac{r^2-k}{2r}$$ This is a contradiction: the left-hand side $\sqrt n$ is irrational, while the right-hand side $\frac{r^2-k}{2r}$ is rational. The contradiction is due to the wrong assumption that $\sqrt n+\sqrt {n+k}$ was a rational number.
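The characterizations in this thread are easy to confirm by brute force. The Python sketch below (my check, not part of the thread) encodes the proven equivalence that $\sqrt{n}+\sqrt{n+k}$ is rational iff both $n$ and $n+k$ are perfect squares, then verifies that for $n \ge 1$ the sum is irrational for every $n$ exactly when $k \in \{1, 4\}$ or $k \equiv 2 \pmod 4$:

```python
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

def sum_is_rational(n, k):
    """sqrt(n) + sqrt(n+k) is rational iff both n and n+k are perfect squares."""
    return is_square(n) and is_square(n + k)

# For each k, search n >= 1 large enough to catch n = ((k-1)/2)^2 when k is odd
N = 4000
for k in range(1, 120):
    hit = any(sum_is_rational(n, k) for n in range(1, N))
    if k % 4 == 2 or k in (1, 4):
        assert not hit        # irrational for every n >= 1
    else:
        assert hit            # some n >= 1 makes the sum rational
```

Note that $k = 8 = 2^3$ gives a rational value at $n = 1$ (namely $\sqrt{1}+\sqrt{9} = 4$), so odd powers of $2$ beyond $2^1$ do not work; only $k \equiv 2 \pmod 4$ does.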
Galois field finite field A field with a finite number of elements. First considered by E. Galois [1]. The number of elements of any finite field is a power $p^n$ of a prime number $p$, which is the characteristic of this field. For any prime number $p$ and any natural number $n$ there exists a (unique up to an isomorphism) field of $p^n$ elements. It is denoted by $\mathrm{GF}(p^n)$ or by $\mathbb{F}_{p^n}$. The field $\mathrm{GF}(p^m)$ contains the field $\mathrm{GF}(p^n)$ as a subfield if and only if $m$ is divisible by $n$. In particular, any field $\mathrm{GF}(p^n)$ contains the field $\mathrm{GF}(p)$, which is called the prime field of characteristic $p$. The field $\mathrm{GF}(p)$ is isomorphic to the field $\mathbb{Z}/p\mathbb{Z}$ of residue classes of the ring of integers modulo $p$. In any fixed algebraic closure $\Omega$ of $\mathrm{GF}(p)$ there exists exactly one subfield $\mathrm{GF}(p^n)$ for each $n$. The correspondence $n \leftrightarrow \mathrm{GF}(p^n)$ is an isomorphism between the lattice of natural numbers with respect to division and the lattice of finite algebraic extensions (in $\Omega$) of $\mathrm{GF}(p)$ with respect to inclusion. The lattice of finite algebraic extensions of any Galois field within its fixed algebraic closure is such a lattice. The algebraic extension $\mathrm{GF}(p^n)/\mathrm{GF}(p)$ is simple, i.e. there exists a primitive element $\alpha \in \mathrm{GF}(p^n)$ such that $\mathrm{GF}(p^n) = \mathrm{GF}(p)(\alpha)$. Such an $\alpha$ will be any root of any irreducible polynomial of degree $n$ from the ring $\mathrm{GF}(p)[X]$. The number of primitive elements of the extension $\mathrm{GF}(p^n)/\mathrm{GF}(p)$ equals $$ \sum_{d|n} \mu(d) p^{n/d} $$ where $\mu$ is the Möbius function. The additive group of the field $\mathrm{GF}(p^n)$ is naturally endowed with the structure of an $n$-dimensional vector space over $\mathrm{GF}(p)$. As a basis one may take $1,\alpha,\ldots,\alpha^{n-1}$. 
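The count of primitive elements just given can be checked concretely for a small case. The following Python sketch (my illustration, not part of the encyclopedia entry) realizes $\mathrm{GF}(2^4)$ as polynomials over $\mathrm{GF}(2)$ modulo $x^4 + x + 1$, computes the degree of each element as the size of its Frobenius orbit, and compares the number of degree-4 (primitive) elements with $\sum_{d \mid 4} \mu(d)\, 2^{4/d} = 16 - 4 + 0 = 12$:

```python
MOD = 0b10011          # x^4 + x + 1, irreducible over GF(2)

def gf16_mul(a, b):
    """Carry-less multiplication of bitmask polynomials modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def frobenius(x):
    return gf16_mul(x, x)          # x -> x^2 is the Frobenius map over GF(2)

def degree(x):
    """Degree of x over GF(2) = size of its orbit under the Frobenius map."""
    orbit, y = {x}, frobenius(x)
    while y not in orbit:
        orbit.add(y)
        y = frobenius(y)
    return len(orbit)

def mobius(m):
    return {1: 1, 2: -1, 3: -1, 4: 0}[m]   # enough for divisors of 4

n, p = 4, 2
primitive_count = sum(1 for x in range(16) if degree(x) == n)
mobius_sum = sum(mobius(d) * p ** (n // d) for d in (1, 2, 4))
assert primitive_count == mobius_sum == 12
```

The 12 elements counted here are exactly those of $\mathrm{GF}(2^4)$ lying in neither $\mathrm{GF}(2^2)$ nor $\mathrm{GF}(2)$, matching the inclusion-exclusion behind the Möbius formula.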
The non-zero elements of $\mathrm{GF}(p^n)$ form a multiplicative group, $\mathrm{GF}(p^n)^*$, of order $p^n-1$, i.e. each element of $\mathrm{GF}(p^n)^*$ is a root of the polynomial $X^{p^n-1}-1$. The group $\mathrm{GF}(p^n)^*$ is cyclic, and its generators are the primitive roots of unity of degree $p^n-1$, the number of which is $\phi(p^n-1)$, where $\phi$ is the Euler function. Each primitive root of unity of degree $p^n-1$ is a primitive element of the extension $\mathrm{GF}(p^n)/\mathrm{GF}(p)$, but the converse is not true. More exactly, out of the $$ \frac{1}{n} \sum_{d|n} \mu(d) p^{n/d} $$ irreducible unitary polynomials of degree $n$ over $\mathrm{GF}(p)$ there are $\phi(p^n-1)/n$ polynomials of which the roots are generators of $\mathrm{GF}(p^n)$. The set of elements of $\mathrm{GF}(p^n)$ coincides with the set of roots of the polynomial $X^{p^n} - X$ in $\Omega$, i.e. $\mathrm{GF}(p^n)$ is characterized as the subfield of elements from $\Omega$ that are invariant with respect to the automorphism $\tau : x \mapsto x^{p^n}$, which is known as the Frobenius automorphism. If $\mathrm{GF}(p^m) \supset \mathrm{GF}(p^n)$, the extension $\mathrm{GF}(p^m)/\mathrm{GF}(p^n)$ is normal (cf. Extension of a field), and its Galois group $\mathrm{Gal}\left({\mathrm{GF}(p^m)/\mathrm{GF}(p^n)}\right)$ is cyclic of order $m/n$. The automorphism $\tau$ may be taken as the generator of $\mathrm{Gal}\left({\mathrm{GF}(p^m)/\mathrm{GF}(p^n)}\right)$. References [1] E. Galois, "Écrits et mémoires d'E. Galois" , Gauthier-Villars (1962) [2] B.L. van der Waerden, "Algebra" , 1–2 , Springer (1967–1971) (Translated from German) [3] N.G. [N.G. Chebotarev] Tschebotaröw, "Grundzüge der Galois'schen Theorie" , Noordhoff (1950) (Translated from Russian) [4] N. Bourbaki, "Algebra" , Elements of mathematics , 1 , Springer (1989) pp. Chapt. 1–3 (Translated from French) How to Cite This Entry: Galois field. 
Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Galois_field&oldid=34238
2019-09-04 12:06 Soft QCD and Central Exclusive Production at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The LHCb detector, owing to its unique acceptance coverage $(2 < \eta < 5)$ and a precise track and vertex reconstruction, is a universal tool allowing the study of various aspects of electroweak and QCD processes, such as particle correlations or Central Exclusive Production. The recent results on the measurement of the inelastic cross section at $ \sqrt s = 13 \ \rm{TeV}$ as well as the Bose-Einstein correlations of same-sign pions and kinematic correlations for pairs of beauty hadrons performed using large samples of proton-proton collision data accumulated with the LHCb detector at $\sqrt s = 7\ \rm{and} \ 8 \ \rm{TeV}$, are summarized in the present proceedings, together with the studies of Central Exclusive Production at $ \sqrt s = 13 \ \rm{TeV}$ exploiting new forward shower counters installed upstream and downstream of the LHCb detector. [...] LHCb-PROC-2019-008; CERN-LHCb-PROC-2019-008.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : The XXVII International Workshop on Deep Inelastic Scattering and Related Subjects, Turin, Italy, 8 - 12 Apr 2019 2019-08-15 17:39 LHCb Upgrades / Steinkamp, Olaf (Universitaet Zuerich (CH)) During the LHC long shutdown 2, in 2019/2020, the LHCb collaboration is going to perform a major upgrade of the experiment. The upgraded detector is designed to operate at a five times higher instantaneous luminosity than in Run II and can be read out at the full bunch-crossing frequency of the LHC, abolishing the need for a hardware trigger [...] LHCb-PROC-2019-007; CERN-LHCb-PROC-2019-007.- Geneva : CERN, 2019 - mult.p.
In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-08-15 17:36 Tests of Lepton Flavour Universality at LHCb / Mueller, Katharina (Universitaet Zuerich (CH)) In the Standard Model of particle physics the three charged leptons are identical copies of each other, apart from mass differences, and the electroweak coupling of the gauge bosons to leptons is independent of the lepton flavour. This prediction is called lepton flavour universality (LFU) and is well tested. [...] LHCb-PROC-2019-006; CERN-LHCb-PROC-2019-006.- Geneva : CERN, 2019 - mult.p. In : Kruger2018, Hazyview, South Africa, 3 - 7 Dec 2018 2019-02-12 14:01 XYZ states at LHCb / Kucharczyk, Marcin (Polish Academy of Sciences (PL)) The latest years have observed a resurrection of interest in searches for exotic states motivated by precision spectroscopy studies of beauty and charm hadrons providing the observation of several exotic states. The latest results on spectroscopy of exotic hadrons are reviewed, using the proton-proton collision data collected by the LHCb experiment. [...] LHCb-PROC-2019-004; CERN-LHCb-PROC-2019-004.- Geneva : CERN, 2019 - 6. Fulltext: PDF; In : 15th International Workshop on Meson Physics, Kraków, Poland, 7 - 12 Jun 2018 2019-01-21 09:59 Mixing and indirect $CP$ violation in two-body Charm decays at LHCb / Pajero, Tommaso (Universita & INFN Pisa (IT)) The copious number of $D^0$ decays collected by the LHCb experiment during 2011--2016 allows the test of the violation of the $CP$ symmetry in the decay of charm quarks with unprecedented precision, approaching for the first time the expectations of the Standard Model. We present the latest measurements of LHCb of mixing and indirect $CP$ violation in the decay of $D^0$ mesons into two charged hadrons [...] LHCb-PROC-2019-003; CERN-LHCb-PROC-2019-003.- Geneva : CERN, 2019 - 10.
Fulltext: PDF; In : 10th International Workshop on the CKM Unitarity Triangle, Heidelberg, Germany, 17 - 21 Sep 2018 2019-01-15 14:22 Experimental status of LNU in B decays in LHCb / Benson, Sean (Nikhef National institute for subatomic physics (NL)) In the Standard Model, the three charged leptons are identical copies of each other, apart from mass differences. Experimental tests of this feature in semileptonic decays of b-hadrons are highly sensitive to New Physics particles which preferentially couple to the 2nd and 3rd generations of leptons. [...] LHCb-PROC-2019-002; CERN-LHCb-PROC-2019-002.- Geneva : CERN, 2019 - 7. Fulltext: PDF; In : The 15th International Workshop on Tau Lepton Physics, Amsterdam, Netherlands, 24 - 28 Sep 2018 2018-12-20 16:31 Simultaneous usage of the LHCb HLT farm for Online and Offline processing workflows LHCb is one of the 4 LHC experiments and continues to revolutionise data acquisition and analysis techniques. Already two years ago the concepts of "online" and "offline" analysis were unified: the calibration and alignment processes take place automatically in real time and are used in the triggering process, such that Online data are immediately available offline for physics analysis (Turbo analysis). The computing capacity of the HLT farm has been used simultaneously for different workflows: synchronous first-level trigger, asynchronous second-level trigger, and Monte Carlo simulation. [...] LHCb-PROC-2018-031; CERN-LHCb-PROC-2018-031.- Geneva : CERN, 2018 - 7.
Fulltext: PDF; In : 23rd International Conference on Computing in High Energy and Nuclear Physics, CHEP 2018, Sofia, Bulgaria, 9 - 13 Jul 2018
The Timepix3 Telescope and Sensor R&D for the LHCb VELO Upgrade / Dall'Occo, Elena (Nikhef National institute for subatomic physics (NL)) The VErtex LOcator (VELO) of the LHCb detector is going to be replaced in the context of a major upgrade of the experiment planned for 2019-2020. The upgraded VELO is a silicon pixel detector, designed to withstand a radiation dose up to $8 \times 10^{15}\ 1\,\text{MeV}\ n_{\text{eq}}\,\text{cm}^{-2}$, with the additional challenge of a highly non-uniform radiation exposure. [...] LHCb-PROC-2018-030; CERN-LHCb-PROC-2018-030.- Geneva : CERN, 2018 - 8.
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A\) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason to. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy to modern computers. But duality has been studied very deeply in category theory; I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
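By the way, the adjunction \( \vee \dashv \Delta \) and the monotonicity claim of Puzzle 45 can be checked by brute force on a small example. Here is a sketch in Python; the poset of subsets of \(\{0,1,2\}\) ordered by inclusion is my choice of example, not from the lecture:

```python
from itertools import combinations

# The poset A: all subsets of {0, 1, 2}, ordered by inclusion.
S = {0, 1, 2}
A = [frozenset(c) for r in range(len(S) + 1) for c in combinations(S, r)]

def leq(x, y):
    """The order on A (subset inclusion)."""
    return x <= y

def join(x, y):
    """Binary join in A: the union."""
    return x | y

# The adjunction: a v a' <= b  iff  (a, a') <= Delta(b) = (b, b) in A x A.
for a in A:
    for a2 in A:
        for b in A:
            assert leq(join(a, a2), b) == (leq(a, b) and leq(a2, b))

# Monotonicity (Puzzle 45): a <= a2 and b <= b2 imply a v b <= a2 v b2.
for a in A:
    for a2 in A:
        for b in A:
            for b2 in A:
                if leq(a, a2) and leq(b, b2):
                    assert leq(join(a, b), join(a2, b2))
print("join is left adjoint to the diagonal on this poset")
```

Of course a finite check is no proof, but it is a pleasant way to see the biconditional \( a \vee a' \le b \iff a \le b \textrm{ and } a' \le b \) in action.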
Let $\zeta(n)$ denote the Riemann zeta function, for positive integers $n>1$, as usual by: $$ \zeta(n)=\sum_{m=1}^{\infty}m^{-n}. $$ There are fast-converging series for $\zeta(2)$ and $\zeta(3)$, but not for the others. In the spirit of Apéry's $$ {\displaystyle {\begin{aligned}\zeta (3)&={\frac {5}{2}}\sum _{k=1}^{\infty }{\frac {(-1)^{k-1}}{{\binom {2k}{k}}k^{3}}}\end{aligned}}}, $$ quick numerical computations suggest: Conjecture 1. $$ {\displaystyle {\begin{aligned}\zeta (4)&={\frac {36}{17}}\sum _{k=1}^{\infty }{\frac {1}{{\binom {2k}{k}}k^{4}}}\end{aligned}}}.$$ However, I am unable to prove this statement. I have tried using $$ 2(\sin^{-1}x)^2 =\sum_{k=1}^{\infty}{\frac{(2x)^{2k}}{{\binom {2k}{k}}k^{2}}}, $$ but to no avail. Any help is appreciated.
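For what it's worth, the claimed identity is easy to test numerically to high precision; the central binomial coefficient makes the terms shrink roughly like $4^{-k}$, so a handful of terms suffices. A quick sketch:

```python
from math import comb, pi

def zeta4_series(terms: int) -> float:
    """Partial sum of the conjectured series (36/17) * sum 1/(C(2k,k) k^4)."""
    return (36 / 17) * sum(1 / (comb(2 * k, k) * k ** 4)
                           for k in range(1, terms + 1))

exact = pi ** 4 / 90                   # zeta(4) = pi^4 / 90
print(abs(zeta4_series(30) - exact))   # agrees to double precision
```

This of course only supports the conjecture; it does not prove it.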
This is an extremely interesting question, with complexity due to the extreme flexibility of the quotient maps $q: X \rightarrow X/\sim$ compared to covering spaces $p:\tilde Y \rightarrow Y$. One important difference is that the construction of a covering space constrains the local structure: the map $p$ adds structure to $\tilde Y$. In contrast, the map $q$ can delete all structure of $X$ or add as much structure as one desires. The vein of my ideas is to capture on which subsets of the domain the quotient map acts as a covering map. A condition that yields very trivial lifts is the following: let $A \subset X$ be defined as $A =\{x \in X : \forall y \in X\setminus\{x\},\ \neg(x \sim y)\}$. This is exactly the subset of $X$ on which $q$ identifies nothing, so that $q|_A$ is injective. Now if a function $f:Y \rightarrow X/ \sim$ is such that $f(Y) \subset q(A)$ we may define the lift $\tilde f:Y \rightarrow X$ by regarding the image of $f$ as a subset of $X$. We immediately run into problems if we wish to extend the image of $f$ onto a point of identification of $X / \sim$. For one, the preimage by $q$ of $f(Y)$ can become quite disconnected, destroying any hopes of continuity. If we restrict ourselves to subsets $B \subset X/ \sim$ such that for each $ x \in B$ there is a neighborhood $U$ for which $q^{-1}(U)$ is a disjoint union of open sets, each mapped homeomorphically onto $U$ by $q$, then $q$ behaves like a covering map over these neighborhoods, and it follows that if $f(Y) \subset B$, then $f$ has a lift iff $f_*(\pi_1(Y)) \subset q_*(\pi_1(X))$, just as if $q$ were a covering map. Another way of viewing this, I suppose, would be to restrict to relations for which the quotient is a covering space. An example of this of course would be a group action $G$ on $X$ and the relation $x \sim y \iff orb(x) = orb(y)$, where $orb(x)$ denotes the orbit of $x$ under the action of $G$. Then, provided the action is sufficiently nice (e.g. free and properly discontinuous), we are guaranteed by a classic result that the map $q:X \rightarrow X/ G$ is a covering map.
Tue, 31 Oct 2017 [ The Atom and RSS feeds have done an unusually poor job of preserving the mathematical symbols in this article. It will be much more legible if you read it on my blog. ] Lately I've been enjoying a book of Jean-Yves Girard's lectures on logic. It is not written in the usual dry mathematical-text style, presenting the material as a perfect and aseptic distillation of absolute truth. Instead, one sees the history of logic, the rise and fall of different theories over time, the interaction and relation of many mathematical and philosophical ideas, and Girard's reflections about it all. It is a transcription of a lecture series, and reads like one, including all of the speaker's incidental remarks and offhand musings, but written down so that each can be weighed and pondered at length. Instead of wondering in the moment what he meant by some intriguing remark, then having to abandon the thought to keep up with the lecture, I can pause and ponder the significance. Girard is really, really smart, and knows way more about logic than I ever will, and his offhand remarks reward this pondering. The book really gets going with its discussion of Gentzen's sequent calculus in chapter 3. Between around 1890 (when Peano and Frege began to liberate logic from its medieval encrustations) and 1935 when the sequent calculus was invented, logical proofs were mainly in the “Hilbert style”. Typically there were some axioms, and some rules of deduction by which the axioms could be transformed into other formulas. A typical example consists of the axioms $$A\to(B\to A)\\(A \to (B \to C)) \to ((A \to B) \to (A \to C)) $$(where !!A, B, C!! are understood to be placeholders that can be replaced by any well-formed formulas) and the deduction rule of modus ponens: from !!A!! and !!A\to B!!, deduce !!B!!. In contrast, sequent calculus has few axioms and many deduction rules.
It deals with sequents: expressions !!Γ ⊢ Δ!!, where !!Γ!! and !!Δ!! are lists of formulas; the usual interpretation is that if every formula in !!Γ!! is provable, then some formula in !!Δ!! is provable. A typical deductive rule in sequent calculus is: $$ \begin{array}{c} Γ ⊢ A, Δ \qquad Γ ⊢ B, Δ \\ \hline Γ ⊢ A ∧ B, Δ \end{array} $$ Here !!Γ!! and !!Δ!! represent any lists of formulas, possibly empty. The premises of the rule are !!Γ ⊢ A, Δ!! and !!Γ ⊢ B, Δ!!, and from these premises, the rule allows us to deduce !!Γ ⊢ A ∧ B, Δ!!. The only axioms of sequent calculus are utterly trivial: $$ \begin{array}{c} \phantom{A} \\ \hline A ⊢ A \end{array} $$ There are no premises; we get this deduction for free: if we can prove !!A!!, we can prove !!A!!. (!!A!! here is a metavariable that can be replaced with any well-formed formula.) One important point that Girard brings up, which I had never realized despite long familiarity with sequent calculus, is the symmetry between the left and right sides of the turnstile ⊢. As I mentioned, the interpretation of !!Γ ⊢ Δ!! I had been taught was that it means that if every formula in !!Γ!! is provable, then some formula in !!Δ!! is provable. But instead let's focus on just one of the formulas !!A!! on the right-hand side, hiding in the list !!Δ!!. The sequent !!Γ ⊢ Δ, A!! can be understood to mean that to prove !!A!!, it suffices to prove all of the formulas in !!Γ!!, and to disprove all of the other formulas in !!Δ!!. The all-some correspondence, which had previously caused me to wonder why it was that way and not something else, perhaps the other way around, has turned into a simple relationship about logical negation: the formulas on the left are positive, and the ones on the right are negative.[2] With this insight, the sequent calculus negation laws become not merely simple but trivial: $$ \begin{array}{cc} \begin{array}{c} Γ, A ⊢ Δ \\ \hline Γ ⊢ \lnot A, Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A, Δ \\ \hline Γ, \lnot A ⊢ Δ \end{array} \end{array} $$ For example, in the right-hand deduction: what is sufficient to prove !!A!! is also sufficient to disprove !!¬A!!. (Compare also the rule I showed above for ∧: It now says that if proving everything in !!Γ!! and disproving everything in !!Δ!!
is sufficient for proving !!A!!, and likewise sufficient for proving !!B!!, then it is also sufficient for proving !!A\land B!!.) But none of that was what I planned to discuss; this article is (intended to be) about sequent calculus's “cut rule”. I never really appreciated the cut rule before. Most of the deductive rules in the sequent calculus are intuitively plausible and so simple and obvious that it is easy to imagine coming up with them oneself. But the cut rule is more complicated than the rules I have already shown. I don't think I would have thought of it easily: $$ \begin{array}{c} Γ ⊢ A, Δ \qquad Λ, A ⊢ Π \\ \hline Γ, Λ ⊢ Δ, Π \end{array} $$ (Here !!A!! is a formula and !!Γ, Δ, Λ, Π!! are lists of formulas, possibly empty lists.) Girard points out that the cut rule is a generalization of modus ponens: taking !!Γ, Δ, Λ!! to be empty and !!Π = \{B\}!! we obtain: $$ \begin{array}{c} ⊢ A \qquad A ⊢ B \\ \hline ⊢ B \end{array} $$ The cut rule is also a generalization of the transitivity of implication: $$ \begin{array}{c} X ⊢ A \qquad A ⊢ Y \\ \hline X ⊢ Y \end{array} $$ Here we took !!Γ = \{X\}, Π = \{Y\}!!, and !!Δ!! and !!Λ!! empty. This all has given me a much better idea of where the cut rule came from and why we have it. In sequent calculus, the deduction rules all come in pairs. There is a rule about introducing ∧, which I showed before. It allows us to construct a sequent involving a formula with an ∧, where perhaps we had no ∧ before. (In fact, it is the only way to do this.) There is a corresponding rule (actually two rules) for getting rid of ∧ when we have it and we don't want it: $$ \begin{array}{cc} \begin{array}{c} Γ ⊢ A\land B, Δ \\ \hline Γ ⊢ A, Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A\land B, Δ \\ \hline Γ ⊢ B, Δ \end{array} \end{array} $$ Similarly there is a rule (actually two rules) about introducing !!\lor!! and a corresponding rule about eliminating it. The cut rule seems to lie outside this classification. It is not paired. 
But Girard showed me that it is paired after all: the axiom $$ \begin{array}{c} \phantom{A} \\ \hline A ⊢ A \end{array} $$ can be seen as an introduction rule for a pair of !!A!!s, one on each side of the turnstile. The cut rule is the corresponding rule for eliminating !!A!! from both sides. Sequent calculus proofs are much easier to construct than Hilbert-style proofs. Suppose one wants to prove !!B!!. In a Hilbert system the only deduction rule is modus ponens, which requires that we first prove !!A\to B!! and !!A!! for some !!A!!. But what !!A!! should we choose? It could be anything, and we have no idea where to start or how big it could be. (If you enjoy suffering, try to prove the simple theorem !!A\to A!! in the Hilbert system I described at the beginning of the article.) In sequent calculus, there is only one way to prove each kind of thing, and the premises in each rule are simply related to the consequent we want. Constructing the proof is mostly a matter of pushing the symbols around by following the rules to their conclusions. (Or, if this is impossible, one can conclude that there is no proof, and why.[3]) Construction of proofs can now be done entirely mechanically! Except! The cut rule spoils this: its premises involve a formula !!A!! that appears nowhere in the conclusion, so proof search would have to guess it. The good news is that Gentzen, the inventor of sequent calculus, showed that one can dispense with the cut rule: it is unnecessary. Gentzen's demonstration of this shows how one can take any proof that involves the cut rule, and algorithmically eliminate the cut rule from it to obtain a proof of the same result that does not use cut. Gentzen called this the “Hauptsatz” (“principal theorem”) and rightly so, because it reduces construction of logical proofs to an algorithm and is therefore the ultimate basis for algorithmic proof theory.
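To see how mechanical cut-free proof search really is, here is a toy backward-chaining prover for classical propositional sequents. This is my own illustration, not anything from the book or from Gentzen: formulas are atoms as strings or tuples like `('->', A, B)`, and since every rule below is invertible, the search is a decision procedure.

```python
def provable(left, right):
    """Backward proof search for the sequent left |- right (cut-free)."""
    # Axiom: some atomic formula appears on both sides.
    if any(f in right for f in left if isinstance(f, str)):
        return True
    # Apply a left rule to the first non-atomic formula on the left.
    for i, f in enumerate(left):
        if not isinstance(f, str):
            rest = left[:i] + left[i + 1:]
            op = f[0]
            if op == 'not':
                return provable(rest, right + [f[1]])
            if op == 'and':
                return provable(rest + [f[1], f[2]], right)
            if op == 'or':
                return (provable(rest + [f[1]], right)
                        and provable(rest + [f[2]], right))
            if op == '->':
                return (provable(rest, right + [f[1]])
                        and provable(rest + [f[2]], right))
    # Apply a right rule to the first non-atomic formula on the right.
    for i, f in enumerate(right):
        if not isinstance(f, str):
            rest = right[:i] + right[i + 1:]
            op = f[0]
            if op == 'not':
                return provable(left + [f[1]], rest)
            if op == 'and':
                return (provable(left, rest + [f[1]])
                        and provable(left, rest + [f[2]]))
            if op == 'or':
                return provable(left, rest + [f[1], f[2]])
            if op == '->':
                return provable(left + [f[1]], rest + [f[2]])
    return False           # only atoms remain, and none is shared

print(provable([], [('->', 'A', 'A')]))           # True
print(provable(['A', ('->', 'A', 'B')], ['B']))   # True (modus ponens as a sequent)
print(provable([], [('or', 'A', ('not', 'A'))]))  # True (excluded middle)
print(provable([], ['A']))                        # False
```

Each rule strictly reduces the number of connectives, so the search always terminates, exactly the "pushing the symbols around" described above.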
The bad news is that the cut-elimination process can super-exponentially increase the size of the proof, so it does not lead to a practical proof-search procedure. Another pair of structural rules I had never thought hard about is contraction, which lets us merge two copies of the same formula: $$ \begin{array}{cc} \begin{array}{c} Γ, A, A ⊢ Δ \\ \hline Γ, A ⊢ Δ \end{array} & \qquad \begin{array}{c} Γ ⊢ A, A, Δ \\ \hline Γ ⊢ A, Δ \end{array} \end{array} $$ And suddenly Girard's invention of linear logic made sense to me. In linear logic, contraction is forbidden; one must use each formula in one and only one deduction. Previously it had seemed to me that this was a pointless restriction. Now I realized that it was no more of a useless hair shirt than the intuitionistic rejection of proof by contradiction: not a stubborn refusal to use an obvious tool of reasoning, but a restriction of proofs to produce proofs with more useful structure. The book is going to get into linear logic in the next chapter. I have read descriptions of linear logic before, but never understood what it was up to. (It has two logical and operators, and two logical or operators; why?) But I am sure Girard will explain it marvelously. Sun, 15 Oct 2017 [ I started this article in March and then forgot about it. Ooops! ] Back in February I posted an article about how there are exactly 715 nondecreasing sequences of 4 digits. I said that !!S(10, 4)!! was the set of such sequences and !!C(10, 4)!! was the number of such sequences, and in general $$C(d,n) = \binom{n+d-1}{d-1} = \binom{n+d-1}{n}$$ so in particular $$C(10,4) = \binom{13}{4} = 715.$$ I described more than one method of seeing this, but I didn't mention the method I had found first, which was to use the Cauchy-Frobenius-Redfield-Pólya-Burnside counting lemma. I explained the lemma in detail some time ago, with beautiful illustrated examples, so I won't repeat the explanation here. The Burnside lemma is a kind of big hammer to use here, but I like big hammers. And the results of this application of the big hammer are pretty good, and justify it in the end.
To count the number of distinct sequences of 4 digits, where some sequences are considered “the same”, we first identify a symmetry group whose orbits are the equivalence classes of sequences. Here the symmetry group is !!S_4!!, the group that permutes the elements of the sequence, because two sequences are considered “the same” if they have exactly the same digits but possibly in a different order, and the elements of !!S_4!! acting on the sequences are exactly what you want to permute the elements into some different order. Then you tabulate how many of the 10,000 original sequences are left fixed by each element !!p!! of !!S_4!!, which depends only on the number of cycles of !!p!!. (I have also discussed cycle classes of permutations before.) If !!p!! contains !!n!! cycles, then !!p!! leaves exactly !!10^n!! of the !!10^4!! sequences fixed:

Cycles   How many permutations?   Sequences fixed by each   Total
  4               1                      10,000             10,000
  3               6                       1,000              6,000
  2              11                         100              1,100
  1               6                          10                 60

(Skip this paragraph if you already understand the table. The four rows above are an abbreviation of the full table, which has 24 rows, one for each of the 24 permutations of order 4. The “How many permutations?” column says how many times each row should be repeated. So for example the second row abbreviates 6 rows, one for each of the 6 permutations with three cycles, which each leave 1,000 sequences fixed, for a total of 6,000 in the second row, and the total for all 24 rows is 17,160. There are two different types of permutations that have two cycles, with 3 and 8 permutations respectively, and I have collapsed these into a single row.) Then the magic happens: We average the number left fixed by each permutation and get !!\frac{17160}{24} = 715!! which we already know is the right answer. Now suppose we knew how many permutations there were with each number of cycles. Let's write !!\def\st#1#2{\left[{#1\atop #2}\right]}\st nk!! for the number of permutations of !!n!! things that have exactly !!k!! cycles.
For example, from the table above we see that $$\st 4 4 = 1,\quad \st 4 3 = 6,\quad \st 4 2 = 11,\quad \st 4 1 = 6.$$ Then applying Burnside's lemma we can conclude that $$C(d, n) = \frac1{n!}\sum_i \st ni d^i .\tag{$\spadesuit$}$$ So for example the table above computes !!C(10,4) = \frac1{24}\sum_i \st 4i 10^i = 715!!. At some point in looking into this I noticed that $$\def\rp#1#2{#1^{\overline{#2}}}%\def\fp#1#2{#1^{\underline{#2}}}%C(d,n) =\frac1{n!}\rp dn$$ where !!\rp dn!! is the so-called “rising power” of !!d!!: $$\rp dn = d\cdot(d+1)(d+2)\cdots(d+n-1).$$ I don't think I had a proof of this; I just noticed that !!C(d, 1) = d!! and !!C(d, 2) = \frac12(d^2+d)!! (both obvious), and the Burnside's lemma analysis of the !!n=4!! case had just given me !!C(d, 4) = \frac1{24}(d^4 +6d^3 + 11d^2 + 6d)!!. Even if one doesn't immediately recognize this latter polynomial it looks like it ought to factor, and then on factoring it one gets !!d(d+1)(d+2)(d+3)!!. So it's easy to conjecture !!C(d, n) = \frac1{n!}\rp dn!! and indeed, this is easy to prove from !!(\spadesuit)!!: The !!\st n k!! obey the recurrence $$\st{n+1}k = n \st nk + \st n{k-1}\tag{$\color{green}{\star}$}$$ (by an easy combinatorial argument[1]), and the coefficients of the rising powers obey the same recurrence.[2] In general !!\rp nk = \fp{(n+k-1)}k!!, so we have !!C(d, n) = \frac1{n!}\rp dn = \frac1{n!}\fp{(n+d-1)}n = \binom{n+d-1}n = \binom{n+d-1}{d-1}!! which ties the knot with the formula from the previous article. In particular, !!C(10,4) = \binom{13}9!!. I have a bunch more to say about this but this article has already been in the oven long enough, so I'll cut the scroll here. [1] The combinatorial argument that justifies !!(\color{green}{\star})!! is as follows: The Stirling number !!\st nk!! counts the number of permutations of order !!n!! with exactly !!k!! cycles. To get a permutation of order !!n+1!! with exactly !!k!! cycles, we can take one of the !!\st nk!! permutations of order !!n!! with !!k!! cycles and insert the new element into one of the existing cycles after any of the !!n!! elements.
Or we can take one of the !!\st n{k-1}!! permutations with only !!k-1!! cycles and add the new element in its own cycle. [2] We want to show that the coefficients of !!\rp nk!! obey the same recurrence as !!(\color{green}{\star})!!. Let's say that the coefficient of the !!n^i!! term in !!\rp nk!! is !!c_i!!. We have $$\rp n{k+1} = \rp nk\cdot (n+k) = \rp nk \cdot n + \rp nk \cdot k $$ so the coefficient of the !!n^i!! term on the left is !!c_{i-1} + kc_i!!.
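The whole Burnside computation is small enough to verify by brute force; here is a quick sketch in Python (the function name is mine):

```python
from itertools import permutations
from math import comb

def cycle_count(p):
    """Number of cycles of a permutation given in one-line notation."""
    seen, cycles = set(), 0
    for i in range(len(p)):
        if i not in seen:
            cycles += 1
            j = i
            while j not in seen:
                seen.add(j)
                j = p[j]
    return cycles

# Burnside: sum, over all 24 permutations in S_4, the number of
# 4-digit sequences (10^cycles) left fixed by each, then average.
total = sum(10 ** cycle_count(p) for p in permutations(range(4)))
print(total)        # 17160
print(total // 24)  # 715
assert total // 24 == comb(13, 4)
```

The two printed numbers are exactly the 17,160 and 715 from the table and the averaging step above.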
The "integer square root" of a non-negative integer \$ n \$ is defined as the largest integer not greater than \$ \sqrt{n} \$: $$ \operatorname{isqrt}(n) = \lfloor \sqrt{n} \rfloor = \max \{ k \in \Bbb N_0 \mid k^2 \le n \}$$ It is for example needed in prime factorization, as an upper bound for the possible factors. A simple approach is to compute the floating point square root and truncate the result to an integer. In Swift that would be

func isqrt_simple(_ n: Int) -> Int {
    return Int(Double(n).squareRoot())
}

As observed in Computing the square root of a 64-bit integer, this can produce wrong results for large numbers, because an IEEE 64-bit floating point number with its 53 bit significand cannot represent large integers exactly. Here is an example:

let n = 9223371982334239233
let r = isqrt_simple(n)
print(r)          // 3037000491
print(r * r <= n) // false

The correct result would be 3037000490, since $$ \sqrt{9223371982334239233} \approx 3037000490.9999996957524364127605120353 $$ (computed with PARI/GP). The following implementation uses the ideas from DarthGizka's answer to the above mentioned question to implement a "correct" integer square root function in Swift 4:

func isqrt(_ n: Int) -> Int {
    precondition(n >= 0, "argument of isqrt() must be non-negative")
    var r = Int(Double(n).squareRoot()) // Initial approximation
    // Try to increase:
    while case let prod = r.multipliedReportingOverflow(by: r),
        !prod.overflow && prod.partialValue < n {
        r += 1
    }
    // Decrease if necessary:
    while case let prod = r.multipliedReportingOverflow(by: r),
        prod.overflow || prod.partialValue > n {
        r -= 1
    }
    return r
}

Example:

let n = 9223371982334239233
let r = isqrt(n)
print(r)                 // 3037000490
print(r * r <= n)        // true
print((r+1) * (r+1) > n) // true

Remarks: I have chosen Int as argument and result type even if the square root is defined only for non-negative integers. The reason is that Int is the prevalent integer type in Swift and already used for quantities that can not be negative (e.g.
the count of an array, or the MemoryLayout<T>.size of a type). Int can be a 32-bit or 64-bit quantity, therefore I cannot check against a constant for overflow (as r < UINT32_MAX in DarthGizka's C++ solution). multipliedReportingOverflow() is used instead to check if squaring a candidate causes an overflow. The code worked correctly in all my tests. Here are some tests which all succeed:

func testSqrt(_ n: Int) {
    let r = isqrt(n)
    if r * r > n {
        print("Too large:", n, r)
    } else if (r+1) * (r+1) <= n {
        print("Too small:", n, r)
    }
}

testSqrt(4503599761588224)
testSqrt(4503599895805955)
testSqrt(4503600030023688)
testSqrt(4503600164241423)
testSqrt(9223371982334241080)
testSqrt(9223371982334239233)
testSqrt(9223372024852248003)
testSqrt(9223372024852247041)
testSqrt(9223372030926249000)
testSqrt(9223372030926247424)

These tests fail if isqrt_simple() is used instead. All feedback is welcome, in particular suggestions how to improve the performance.
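Not a Swift suggestion, but the same correct-the-float-estimate idea is easy to cross-check in Python, where integers are arbitrary precision and math.isqrt (Python 3.8+) provides a reference implementation. A sketch:

```python
import math

def isqrt(n: int) -> int:
    """Largest r with r*r <= n, by correcting the float approximation."""
    if n < 0:
        raise ValueError("argument of isqrt() must be non-negative")
    r = int(math.sqrt(n))          # initial approximation, may be off by a little
    while r * r > n:               # decrease if necessary
        r -= 1
    while (r + 1) * (r + 1) <= n:  # increase if necessary
        r += 1
    return r

n = 9223371982334239233
print(isqrt(n))                    # 3037000490
assert isqrt(n) == math.isqrt(n)
```

No overflow checks are needed here, which is exactly what Swift's multipliedReportingOverflow() has to supply.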
A type $T$ is a specification. A term $t$ of type $T$ is an implementation together with a proof of correctness. Dependent types are more expressive than simple types found in programming languages. Via the propositions-as-types correspondence they allow us to express logical statements which comprise a specification, rather than just "bare" typing information. For instance, the type$$\prod_{k : \mathbb{N}} \sum_{m : \mathbb{N}} \mathsf{prime}(m) \times (k < m)$$can be read in any of the following ways: As a proposition: for every natural number $k$ there is a prime $m$ larger than $k$. As a type: the type of functions which take as input a number $k$ and output a triple $(m, p, q)$ where $m$ is a number, $p$ is a proof that $m$ is prime, and $q$ is a proof that $k < m$. As a specification: implement a function which takes a number and returns a prime larger than it. Fancier specifications can be expressed just as well. For instance, we can express the specification for a dictionary as a dependent sum (or a record type if it's available)$$\sum_{D : \mathsf{Type}} \sum_{K : \mathsf{Type}} \sum_{V : \mathsf{Type}} \sum_{\mathsf{empty} : D} \sum_{\mathsf{add} : K \to V \to D \to D} \sum_{\mathsf{lookup} : K \to D \to 1 + V} \cdots$$which is read as follows: we need to specify the type of dictionaries $D$, the type of keys $K$, the type of values $V$, the empty dictionary, and the addition and lookup functions. The $\cdots$ would express the required properties of dictionaries, i.e., logical statements governing the behavior of a dictionary.
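The first example can be written down almost verbatim in a proof assistant. Here is a sketch in Lean 4, assuming Mathlib for `Nat.Prime` and the lemma `Nat.exists_infinite_primes : ∀ n, ∃ p, n ≤ p ∧ p.Prime`; the names `PrimeAbove` and `primeAbove` are my own:

```lean
import Mathlib

-- The specification, read as a type: a function producing, for each k,
-- a prime m together with a proof that k < m.
def PrimeAbove : Type :=
  (k : ℕ) → Σ' m : ℕ, Nat.Prime m ∧ k < m

-- One implementation, extracted from Mathlib's proof that the
-- primes are unbounded.  (k < m is definitionally k + 1 ≤ m.)
noncomputable def primeAbove : PrimeAbove := fun k =>
  let h := Nat.exists_infinite_primes (k + 1)
  ⟨h.choose, h.choose_spec.2, h.choose_spec.1⟩
```

A term of the type is accepted only if the proof obligations are discharged, which is exactly the "implementation together with a proof of correctness" reading above.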
I suspect this has been asked here before, but I didn't find anything using Search. Why is Newton's second law only second-order in position? For instance, could there exist higher-order masses $m_i$ with $$F(x) = m\ddot{x} + \sum_{i=3}^{\infty} m_i x^{(i)}?$$ Are there theoretical reasons why $m_i$ must be exactly zero for $i>2$? If not, if these masses existed but were extremely small, would we be able to tell experimentally (e.g. by observing galactic motion)?
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
If $A_1,A_2,...$ is a sequence of subsets of a topological space, prove $\overline{\bigcup_{k=1}^{\infty}A_k} = \bigcup_{k=1}^{\infty}A_k \cup \bigcap_{k=1}^{\infty}\bigg(\overline{\bigcup_{l=0}^{\infty}A_{k+l}}\bigg)$ I am first trying to decipher the right-hand side of the equation. Let $x\in$ RHS $\Rightarrow$ $x\in\bigcup_{k=1}^{\infty}A_k$ or $x\in\bigcap_{k=1}^{\infty}\bigg(\overline{\bigcup_{l=0}^{\infty}A_{k+l}}\bigg)$. Now $x\in\bigcap_{k=1}^{\infty}\bigg(\overline{\bigcup_{l=0}^{\infty}A_{k+l}}\bigg)$ $\Rightarrow$ $x\in \bigcap_{k=1}^{\infty}\overline{ \left \{ A_{k+0} \cup A_{k+1} \cup A_{k+2} \cup A_{k+3} ... \right \}} $ $\Rightarrow$ $x\in \overline{(A_1 \cup A_2 \cup A_3...)} \cap\overline{(A_2 \cup A_3 \cup A_4...)} \cap \overline{(A_3 \cup A_4 \cup A_5...)}\cap.... $ But this is not taking me anywhere. I would appreciate it if someone can point me in the right direction. @Brian: Here is how I am trying to attempt the corrected problem. As per your comment the correct problem should be $\overline{\bigcup_{k=1}^{\infty}A_k} = \bigcup_{k=1}^{\infty}\overline{A_k} \cup \bigcap_{k=1}^{\infty}\bigg(\overline{\bigcup_{l=0}^{\infty}A_{k+l}}\bigg)$ Let $x\in \overline{\bigcup_{k=1}^{\infty}A_k}$; this implies that every nbhd $U$ of $x$ intersects some $A_k$ where $k\geq 1$. To prove one direction: $x\in \bigcup_{k=1}^{\infty}\overline{A_k}$ implies that $x$ belongs to the closure of at least one of the $A_k$ where $k\geq 1$. Thus every neighborhood of $x$ intersects that particular $A_k$. I believe that this would be good enough to imply the inclusion $\bigcup_{k=1}^{\infty}\overline{A_k} \subset$ LHS. Please let me know if it does not.
Conversely, let $x\in \bigcup_{k=1}^{\infty}\overline{A_k} \cup \bigcap_{k=1}^{\infty}\bigg(\overline{\bigcup_{l=0}^{\infty}A_{k+l}}\bigg)$. As you explained, $x\in \bigcap_{k=1}^{\infty}\bigg(\overline{\bigcup_{l=0}^{\infty}A_{k+l}}\bigg)$ implies that for each $k\geq1$ and each nbhd $U$ of $x$ there exists an $l\geq k$ such that $(U\cap A_l)\ne\varnothing$. Thus for each $k\geq1$ every nbhd of $x$ intersects some $A_l$ (where $l\geq k$). Also, as mentioned before, $x\in \bigcup_{k=1}^{\infty}\overline{A_k}$ implies that $x$ belongs to the closure of at least one of the $A_k$ where $k\geq 1$. I am having a hard time using these two deductions to imply that $x\in$ LHS. I would appreciate it if you can help me.
Since $On⊂L⊆V$, properties of ordinals that depend on the absence of a function or other structure (i.e. $\Pi_1^{ZF}$ formulas) are preserved when going down from $V$ to $L$. Hence initial ordinals of cardinals remain initial in $L$. Regular ordinals remain regular in $L$. Weak limit cardinals become strong limit cardinals in $L$ because the generalized continuum hypothesis holds in $L$. Weakly inaccessible cardinals become strongly inaccessible. Weakly Mahlo cardinals become strongly Mahlo. And more generally, any large cardinal property weaker than the existence of $0^{\#}$ (see the list of large cardinal properties) will be retained in $L$. Let $\kappa$ be a Mahlo cardinal in $V$, i.e. let $\kappa$ be inaccessible such that $S:= \{ \alpha \in \kappa \mid \alpha \text{ is regular} \}$ is stationary in $\kappa$. First, note that $L \models \kappa \text{ is inaccessible}$. Indeed, if $L \models \kappa \text{ is not a cardinal}$, then there is some $\mu < \kappa$ and some $f \in L$ such that $L \models f \colon \mu \to \kappa \text{ is surjective}$. However, this is a $\Sigma_{0}$ property and hence, in $V$, $f \colon \mu \to \kappa$ is surjective. Contradiction. Since $\kappa > \omega$ (as an ordinal), this also yields that $L \models \kappa \text{ is uncountable}$. Repeating this argument with cofinal $f \colon \mu \to \kappa$ yields that $L \models \kappa \text{ is regular}$. Since $L \models \operatorname{GCH}$, it now suffices to prove that $L \models \kappa \text{ is a limit cardinal}$. This is trivial, because for any ordinal $\gamma < \kappa$, we have that $(\gamma^{+})^V < \kappa$ and since cardinals in $V$ are cardinals in $L$, this proves $$L \models \forall \gamma < \kappa \; \exists \mu \; (\gamma < \mu < \kappa \text{ and } \mu \text{ is a cardinal}).$$ Now let $T := \{ \alpha \in \kappa \mid L \models \alpha \text{ is regular} \}$. By the argument given above, any $\alpha$ that is regular in $V$ is regular in $L$ and hence $S \subseteq T$. Suppose that $L \models \kappa \text{ is not Mahlo}$.
Then there is some $C \subseteq \kappa$ such that $L \models C \text{ is club in } \kappa \text{ and } C \cap T = \emptyset$. Being club is a $\Sigma_0$ property and hence $C$ is club in $\kappa$ in $V$. Since $S \subseteq T$, we have that $C \cap S = \emptyset$ and hence $S$ is not stationary in $V$. This is a contradiction and we therefore must have that $L \models T \text{ is stationary}$. Thus, $\kappa$ remains Mahlo in $L$.

Observe that for any set $S\subset On$, if $S\in L$ then

(1) $\forall a\in On \;[\;\{b\cap S : b\in a\} \in L\;]$, and

(2) $\forall a\in On\;[\;a=\cup (a\cap S)\iff L\models a= \cup (a\cap S)\;]$.

(3) Also observe that $\forall a\in On\;[\;a=|a|\implies L\models a=|a|\;]$.

Let $C(k)$ be the set of club subsets of $k$. From (1) and (2) we have $C(k)\supset (C(k))^L$. Let $R(k)=\{l<k: l=|l|\}$. From (3), we have $R(k)\subset (R(k))^L$. So for any $c\in (C(k))^L$ we have $c\in C(k)$, so $\emptyset\ne c\cap R(k)\subset c\cap (R(k))^L$.

Remark. If $k=|k|>\omega$ and $R(k)$ is stationary in $k$, then $k$ must be weakly inaccessible. Obviously $k$ can't be a successor cardinal. If $k$ is singular then:

(i) If $k>cf(k)=\omega$, let $f\colon \omega \to k$ be a cofinal strictly increasing map with $f(0)=\omega$. Let $g(n)=f(n)+1$ for $n\in \omega$. Then $\{g(n):n\in \omega\}$ is club in $k$ and contains no cardinals.

(ii) If $k>cf(k)=l>\omega$, let $S\subset k$ with $|S|=l$ and $\cup S=k$. Let $C=\{a\in k: l<a=\cup (a\cap S)\}$. It is easy to show that $C$ is club in $k$. For $a\in C$ we have $cf(a)\leq |a\cap S|\leq l<a$, so $C$ has no regular members.
With complex numbers you have to take care with power functions. In complex analysis a number has infinitely many logarithms. A logarithm function $l$ in a region $D$ is a holomorphic function in $D$ such that $\exp(l(z))=z$ for all $z \in D$. As $\exp$ is not injective on $\mathbb{C}$, a complex number can have infinitely many logarithms. But you also know that the set of periods of the complex exponential function is $2\pi i \mathbb{Z}$, so if $\bar l$ is a logarithm function in $D$, then the other logarithm functions in $D$ are given by $l = \bar l + 2 \pi i k$ with $k \in \mathbb{Z}$. The principal branch of the logarithm is defined on $\mathbb{C}$ without the negative real axis by $l(z)= \log|z|+i \theta$, where $\theta$ is the principal argument of $z$ and $\log$ is the usual real logarithm. Once you fix a logarithm function $l$, you can define $z^\sigma = \exp(\sigma l(z))$. With the principal branch, $l(i)=\log|i|+i\frac{\pi}{2}=i\frac{\pi}{2}$, so $i^i = \exp(i\, l(i))=\exp(i^2\frac{\pi}{2})=\exp(-\frac{\pi}{2})$. Over all branches, the values of $i^i$ are $\exp(i(i\frac{\pi}{2}+2\pi i k))=\exp(-\frac{\pi}{2}-2\pi k)$ with $k \in \mathbb{Z}$, each one a positive real number. With the principal branch ($k=0$) you get the single point $\exp(-\frac{\pi}{2})$, which is why your plot shows just one point.
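For what it's worth, this can be checked numerically; Python's `cmath.log` implements the principal branch (a quick sketch of mine):

```python
import cmath

z = 1j
principal = cmath.log(z)            # log|i| + i*pi/2 = i*pi/2
assert cmath.isclose(principal, 1j * cmath.pi / 2)

# i**i via the principal branch: exp(i * l(i)) = exp(-pi/2)
val = cmath.exp(z * principal)
print(val)                          # (0.20787957635076193+0j), matching (1j)**1j

# Other branches l(i) = i*pi/2 + 2*pi*i*k give exp(-pi/2 - 2*pi*k):
# a different positive real number for each integer k
others = [cmath.exp(z * (principal + 2j * cmath.pi * k)) for k in (-1, 1)]
```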
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Let $V$ be a vector space and $T:V\rightarrow V$ a linear transformation with the property that $T(W)\subseteq W$ for every subspace $W$ of $V$. Prove that $T$ is scalar multiplication, i.e. there is an element $\lambda$ in the field of scalars such that $T(v)=\lambda v$ $\forall v\in V$. My attempt: I gather that for any element $w$ in a subspace $W$ with basis $\{w_1,\dots,w_n\}$, we have $w = a_1w_1+\dots+a_nw_n$ for scalars $a_1,\dots,a_n$. We also know that $T(w) = T(a_1w_1)+\dots+T(a_nw_n)$, and that since for each $i$, span$\{w_i\}$ is a subspace, $T(w_i)=\alpha_i(w_i)$ for some scalar $\alpha_i$. I feel like this should be enough for the solution, but I can't get there. Any help appreciated!
Mathematics > Classical Analysis and ODEs

Title: Lower semicontinuity via W^{1,q}-quasiconvexity

(Submitted on 14 Jun 2011 (v1), last revised 22 Dec 2012 (this version, v7))

Abstract: We isolate a general condition, that we call "localization principle", on the integrand L:\MM\to[0,\infty], assumed to be continuous, under which W^{1,q}-quasiconvexity with q\in[1,\infty] is a sufficient condition for I(u)=\int_\Omega L(\nabla u(x))dx to be sequentially weakly lower semicontinuous on W^{1,p}(\Omega;\RR^m) with p\in]1,\infty[. Some applications are given.

Submission history: From: Jean-Philippe Mandallena [v1] Tue, 14 Jun 2011 21:31:03 GMT (8kb) [v2] Sat, 26 Nov 2011 18:08:40 GMT (10kb) [v3] Thu, 12 Jul 2012 08:44:54 GMT (12kb) [v4] Thu, 19 Jul 2012 09:35:08 GMT (13kb) [v5] Sat, 15 Dec 2012 18:41:47 GMT (12kb) [v6] Wed, 19 Dec 2012 20:48:31 GMT (12kb) [v7] Sat, 22 Dec 2012 08:40:07 GMT (12kb)
I am currently reading Topological Classification and Stability of Fermi Surfaces by Y. X. Zhao and Z. D. Wang (PRL 110, 240404 (2013)). They remark that the Green's function (along the complex frequency axis) can be viewed as a mapping $S^p \to \mathrm{GL}(N,\mathbb{C})$, where $p$ is the co-dimension of the Fermi surface. It is then natural to classify the Fermi surface by the topological character of this mapping, which falls into some instance of $\pi_p(\mathrm{GL}(N,\mathbb{C}))$. In particular, one can define the winding number $$ N_p = C_p \int_{S^p} \mathrm{tr}~(G\textbf{d} G^{-1})^p, $$ where $C_p = -~p!~/~( (2p+1)! (2\pi i )^{p+1})$. I am now wondering how these ideas are related to the topology of band insulators: since the Brillouin zone is periodic, an insulating band represents a compact manifold of some co-dimension $p$ (similar to a Fermi surface), and hence I should be able to write down the winding number $N_p$ in terms of Green's functions. Is this a straightforward generalisation of the formula presented above? I suppose that in this case the Green's function needs to be viewed along the real frequency axis in order to pick up the correct singularities. Is this correct?
Can we divide two vector quantities? For example, pressure (a scalar) equals force (a vector) divided by area (a vector). No, in general you cannot divide one vector by another. It is possible to prove that no vector multiplication in three dimensions will be well-behaved enough to have division as we understand it. (This depends on exactly what one means by 'well-behaved enough', but the core result here is Hurwitz's theorem.) Regarding force, area and pressure, the most fruitful way is to say that force is area times pressure: $$\vec F=P\cdot \vec A.$$ As it turns out, pressure is not actually a scalar but a matrix (or, more technically, a rank-2 tensor). This is because, in certain situations, an area with its normal vector pointing in the $z$ direction can also experience forces along $x$ and $y$, which are called shear stresses. In this case, the correct linear relation is $$\begin{pmatrix}F_x\\ F_y \\ F_z \end{pmatrix}=\begin{pmatrix}p_x & s_{xy} & s_{xz} \\ s_{yx} & p_y & s_{yz} \\ s_{zx} & s_{zy} & p_z\end{pmatrix}\begin{pmatrix}A_x\\ A_y \\ A_z \end{pmatrix}.$$ In a fluid, shear stresses are zero and the pressure is isotropic, so all the $p_j$'s are equal, and therefore the pressure tensor $P$ is a scalar matrix. In a solid, on the other hand, shear stresses can occur even in static situations, so you need the full matrix; in this case the matrix is referred to as the stress tensor of the solid. As an aside, you can actually divide two vectors. The only question is how you want to interpret the objects and, more importantly, the operation. For example, you can map the vectors to objects in a quaternion space quite simply as $$ \phi:V \rightarrow H: \vec{v} \mapsto (0,\vec{v}) , $$ and then division is well defined. But your answer will, in general, quite obviously be a general quaternion $(r,\vec{u})$, and you then need a physical interpretation for this. In the specifics of your question, you see, the objects and the operation are fixed by nature.
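To make the matrix relation above concrete, here is a minimal numerical sketch (the numbers are made up): a nonzero shear entry produces a force component along $x$ even though the area's normal points along $z$.

```python
import numpy as np

# Hypothetical stress tensor: pressures on the diagonal, shears off it
P = np.array([[1.0, 0.2, 0.3],
              [0.2, 1.0, 0.0],
              [0.3, 0.0, 1.5]])
A = np.array([0.0, 0.0, 2.0])   # area vector with normal along z
F = P @ A
print(F)                        # [0.6 0.  3. ] : a shear force along x plus a normal force
```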
Force and area are vectors related by a tensor called pressure as $$ \vec{F} = P \vec{A}, $$ where the operation of $P$ on $\vec{A}$ is defined to be the tensor action. In this setup there is no unique way to define division of two vectors to produce a tensor: the definition of the operation admits no sensible inverse. To define vector division as the scalar result of one vector "divided" by another, where the scalar times the denominator vector would then give us the numerator vector, we can write the following: \begin{align*} \vec u&=w\vec v\\ \vec u\cdot\vec v&=w\vec v\cdot\vec v\\ \therefore w&=\frac{\vec u\cdot\vec v}{v^2} \end{align*} The math for a scalar quotient works. That is one way to divide out a vector. It depends on the context. Division is usually defined as the inverse of multiplication. If $$x\cdot\vec{v}=\vec{u}$$ then, if there is only one $x$ that satisfies the above relation, you can say that $x=\frac{\vec{u}}{\vec{v}}$. The $x$ here can be a scalar (so you multiplied a vector with a scalar), which is only meaningful if you consider vectors pointing in the same direction. $x$ could be a matrix, and other answers have shown cases where the matrix is not unique. $x$ could also be a vector, and you could consider either the dot or the cross product; again there are cases when this works and when it does not. So you cannot always divide: some divisions cannot be defined, but that's fine; you cannot divide by zero in the reals either. You just have to understand what you are doing and whether the inverse is unique and definable at all. There are cases where vector division makes sense and is useful. For example, let's consider the Lorentz force on a charge that's moving in a magnetic field.
$$\vec{F}=q \vec{v}\times\vec{B}$$ If you can measure the force and one of the quantities on the right-hand side, the other is the division (however, beware whether it's the inverse of right-side or left-side multiplication) of the force and the measured right-hand-side quantity. It could be written as $$\vec{v}=\frac{\vec{F}}{\vec{B}}\ (\text{left})$$ where "left" and "right" is a matter of convention. However, as Jerry pointed out, the solution is not unique. So whenever you can multiply, you can check whether an inverse exists. There are cases when there is no unique inverse, but if there is one, you can call it the division. Vectors are not totally on one side or the other: you can usually find a set of vectors for which a certain division is meaningful. As per the Wolfram MathWorld page, in general there is no unique matrix solution to the matrix equation $$\mathbf y=\mathbb A\mathbf x.$$ An example is then given for $\mathbf y=2\mathbf x=(2,4)$ in which there are 3 different solutions. Suppose we take $A = TB$, where $A$ and $B$ are vectors and $T$ is a tensor. Now if $A$ and $B$ are given and vector division is possible, we can find the value of $T$. If we take a simple example $A = (a_1,a_2,a_3)$, $B = (b_1,b_2,b_3)$ and $T$ a $3\times 3$ matrix: $$T=\left(\begin{matrix}t_{11} & t_{12} & t_{13} \\ t_{21} & t_{22} & t_{23}\\t_{31} & t_{32} & t_{33}\end{matrix}\right)$$ then from the above relation we get three equations with nine unknowns, which never give a unique solution, so vector division is impossible unless we take $A$ and $B$ parallel and $T$ a scalar.
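Both of the workable cases above, the scalar quotient and the Lorentz-force example, can be checked numerically. This is a sketch of my own; in the Lorentz case the minimum-norm candidate $\vec v = (\vec B\times\vec F)/(q|\vec B|^2)$ is just one member of the non-unique family $\vec v + \lambda\vec B$.

```python
import numpy as np

# Scalar quotient w = (u . v)/|v|^2, exact when u is parallel to v:
u, v = np.array([2.0, 4.0, 6.0]), np.array([1.0, 2.0, 3.0])
w = np.dot(u, v) / np.dot(v, v)
assert np.isclose(w, 2.0) and np.allclose(w * v, u)

# Lorentz-force "division": F = q v x B fixes v only up to a component
# along B; the minimum-norm candidate is v = (B x F)/(q |B|^2).
q = 2.0                                 # made-up charge
B = np.array([0.0, 0.0, 1.0])
v_true = np.array([2.0, 3.0, 5.0])      # the component along B is unrecoverable
F = q * np.cross(v_true, B)
v_min = np.cross(B, F) / (q * np.dot(B, B))
print(v_min)                            # [2. 3. 0.] : v_true minus its B-component
assert np.allclose(q * np.cross(v_min, B), F)
```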
I wanna clarify some issues about renormalization in the $\bar{MS}$ scheme that I glossed over when I first learnt about this stuff. I am following http://arxiv.org/abs/1411.7853 section 3.1. The gluon part of the QCD Lagrangian is considered and the renormalized coupling and gluon field are written $$g=\bar{\mu}^{\epsilon}Z_gg_R\qquad{}A_{\mu}=\sqrt{Z_A}A_{\mu}^R\tag{10}$$ where $\bar{\mu}=\frac{\mu}{\sqrt{2\pi}}e^{\gamma_E/2}$. It is immediately stated that the renormalization constant takes the form $$Z_g=1+\frac{\alpha_s(\mu)}{4\pi}\frac{Z_{11}}{\epsilon}+\bigg(\frac{\alpha_s(\mu)}{4\pi}\bigg)^2\bigg(\frac{Z_{22}}{\epsilon^2}+\frac{Z_{21}}{\epsilon}\bigg)\\+\bigg(\frac{\alpha_s(\mu)}{4\pi}\bigg)^3\bigg(\frac{Z_{33}}{\epsilon^3}+\frac{Z_{32}}{\epsilon^2}+\frac{Z_{31}}{\epsilon}\bigg)+\ldots{}\tag{13}$$ I don't see why it should be obvious that $Z_g$ should take this form. What justifies this?
Is the usual \(\leq\) ordering on the set \(\mathbb{R}\) of real numbers a total order?

Yes:

1. Reflexivity: for any \( a \in \mathbb{R} \), \( a \le a \).
2. Transitivity: for any \( a, b, c \in \mathbb{R} \), \( a \le b \) and \( b \le c \) implies \( a \le c \).
3. Antisymmetry: for any \( a, b \in \mathbb{R} \), \( a \le b \) and \( b \le a \) implies \( a = b \).
4. Totality: for any \( a, b \in \mathbb{R} \), we have either \( a \le b \) or \( b \le a \).

So, yes.

Perhaps, due to our interest in things categorical, we can enjoy (instead of Cauchy sequence methods) seeing the order of the (extended) real line as the [Dedekind–MacNeille completion of the rationals](https://en.wikipedia.org/wiki/Dedekind%E2%80%93MacNeille_completion#Examples). Matthew has told us interesting things about it [before](https://forum.azimuthproject.org/discussion/comment/16714/#Comment_16714). Hausdorff, on his part, in the book I mentioned [here](https://forum.azimuthproject.org/discussion/comment/16154/#Comment_16154), [says](https://books.google.es/books?id=M_skkA3r-QAC&pg=PA85&dq=each+everywhere+dense+type&hl=en&sa=X&ved=0ahUKEwjLkJao-9DaAhWD2SwKHVrkBcIQ6AEIKTAA#v=onepage&q=each%20everywhere%20dense%20type&f=false) that any total order that is dense and without \( (\omega,\omega^*) \) [gaps](https://en.wikipedia.org/wiki/Hausdorff_gap) has the real line embedded in it. I don't have a handy reference for an isomorphism instead of an embedding ("everywhere dense" just means dense here).
I believe the [hyperreal numbers](https://en.wikipedia.org/wiki/Hyperreal_number) give an example of a dense total order that embeds the reals without being isomorphic to it. (I can't speak to the gaps condition though, and it's just plausible that they're isomorphic at the level of mere posets rather than ordered fields.)

That's an interesting question, Jonathan.

[Jonathan Castello](https://forum.azimuthproject.org/profile/2316/Jonathan%20Castello) wrote:

> I believe the hyperreal numbers give an example of a dense total order that embeds the reals without being isomorphic to it. (I can't speak to the gaps condition though, and it's just plausible that they're isomorphic at the level of mere posets rather than ordered fields.)

In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets, as you intuited. First, we can observe that \(|\mathbb{R}| = |^\ast \mathbb{R}|\). This is because \(^\ast \mathbb{R}\) embeds \(\mathbb{R}\) and is constructed from countably infinitely many copies of \(\mathbb{R}\) by taking a [quotient algebra](https://en.wikipedia.org/wiki/Quotient_algebra) modulo a free ultrafilter. We have been talking about quotient algebras and filters in a couple of other threads. Next, observe that all [unbounded dense linear orders](https://en.wikipedia.org/wiki/Dense_order) of cardinality \(\aleph_0\) are isomorphic. This is due to a rather old theorem credited to Georg Cantor. Next, apply the [Morley categoricity theorem](https://en.wikipedia.org/wiki/Morley%27s_categoricity_theorem). From this we have that all unbounded dense linear orders with cardinality \(\kappa \geq \aleph_0\) are isomorphic. This is referred to in model theory as *\(\kappa\)-categoricity*.
Since the hyperreals and the reals have the same cardinality, they are isomorphic as unbounded dense linear orders.

**Puzzle MD 1:** Prove Cantor's theorem that all countable unbounded dense linear orders are isomorphic.

Hi Matthew, nice application of the categoricity theorem! One question if I may.
You said:

> In fact, while they are not isomorphic as lattices, they are in fact isomorphic as mere posets as you intuited.

But in my understanding the lattice and poset structures are inter-translatable, as in [here](https://en.wikipedia.org/wiki/Lattice_(order)#Connection_between_the_two_definitions). Can two lattices be isomorphic and their associated posets not?

(EDIT: I clearly have no idea what I'm saying and I should probably take a nap. Disregard this post.)

> Can two lattices be isomorphic and their associated posets not?

If two lattices are isomorphic preserving *infima* and *suprema*, i.e. *limits*, then they are order isomorphic. The reals and hyperreals provide a rather confusing counterexample to the converse. I am admittedly struggling with this myself, as it is highly non-constructive. From model theory we have two maps \(\phi : \mathbb{R} \to\, ^\ast \mathbb{R} \) and \(\psi :\, ^\ast\mathbb{R} \to \mathbb{R} \) such that:

- if \(x \leq_{\mathbb{R}} y\) then \(\phi(x) \leq_{^\ast \mathbb{R}} \phi(y)\)
- if \(p \leq_{^\ast \mathbb{R}} q\) then \(\psi(p) \leq_{\mathbb{R}} \psi(q)\)
- \(\psi(\phi(x)) = x\) and \(\phi(\psi(p)) = p\)

Now consider \(\{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\). The hyperreals famously violate the [Archimedean property](https://en.wikipedia.org/wiki/Archimedean_property). Because of this, \(\bigwedge_{^\ast \mathbb{R}} \{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\) does not exist. On the other hand, if we consider \( \bigwedge_{\mathbb{R}} \{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\), that *does* exist by the completeness of the real numbers (as it is bounded below by \(\psi(0)\)). Hence

$$\bigwedge_{\mathbb{R}} \{ \psi(x) \, : \, x \in \mathbb{R} \text{ and } 0 < x \} \neq \psi\left(\bigwedge_{^\ast\mathbb{R}} \{ x \, : \, x \in \mathbb{R} \text{ and } 0 < x \}\right)$$

So \(\psi\) cannot be a complete lattice homomorphism, even though it is part of an order isomorphism.

However, just to complicate matters, I believe that \(\phi\) and \(\psi\) are a mere *lattice* isomorphism, preserving finite meets and joins.
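As a nudge toward Puzzle MD 1, the classical back-and-forth construction can be sketched in code (my own illustration; the enumeration and the bound of 20 rounds are arbitrary). It extends a finite order-isomorphism between two copies of \(\mathbb{Q}\) one point at a time, alternating sides so that every point is eventually matched.

```python
from fractions import Fraction

def rationals(count):
    """First `count` rationals in a fixed enumeration of Q."""
    out, h = [Fraction(0)], 1
    while len(out) < count:
        for p in range(-h, h + 1):
            for q in range(1, h + 1):
                f = Fraction(p, q)
                if f not in out:
                    out.append(f)
        h += 1
    return out[:count]

def match(x, dom, ran):
    """Pick an image for x that extends the finite order-isomorphism
    dom[i] -> ran[i]; uses density and unboundedness of the target."""
    below = [ran[i] for i in range(len(dom)) if dom[i] < x]
    above = [ran[i] for i in range(len(dom)) if dom[i] > x]
    if below and above:
        return (max(below) + min(above)) / 2   # density: a point in between
    if below:
        return max(below) + 1                  # no greatest element
    if above:
        return min(above) - 1                  # no least element
    return Fraction(0)

xs, ys = rationals(60), rationals(60)
dom, ran = [], []
for _ in range(20):
    a = next(x for x in xs if x not in dom)    # "forth": next unmatched source
    b = match(a, dom, ran)
    dom.append(a); ran.append(b)
    b = next(y for y in ys if y not in ran)    # "back": next unmatched target
    a = match(b, ran, dom)
    dom.append(a); ran.append(b)

# The finite map stays order-preserving at every stage:
n = len(dom)
assert all((dom[i] < dom[j]) == (ran[i] < ran[j])
           for i in range(n) for j in range(n))
```

Density always supplies a fresh image strictly between the neighbours, and unboundedness supplies one beyond the extremes, so the extension step never gets stuck; the union of the stages is the desired isomorphism.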
I would like to check whether two subpopulations of my data have the same parameters in a model. Model 1 is based on subpopulation 1 and Model 2 on subpopulation 2. Model 1: $y=x^\alpha + \gamma +\varepsilon$ Model 2: $y=x^\beta+ \theta +\varepsilon$ The parameters of the two models are estimated with nonlinear least squares. The hypothesis I want to test is therefore: H0: $\gamma = \theta$ and $\alpha = \beta$. Normally I would use a Chow test/F-test for this hypothesis. However, the residuals ($\varepsilon$) of the two models are heavy-tailed. Since the F-test is sensitive to non-normality and will probably result in too-small p-values, I would like to use another test. What test would be suitable?
Please correct me if I got you wrong, but let me elaborate your question a bit and provide possible answers for two different cases. Say you have a discrete HMM model, i.e., transition probabilities $A_{ij} = P(q_{t+1}=S_i~|~q_t=S_j)$ and emission probabilities $B_{i}(k) = P(o_t = V_k~|~q_t=S_i)$, where $q$'s denote states in the sequence, $o$'s denote observations, the state space is $\mathcal{S} = \{S_1,\dots,S_N\}$, the possible outcomes are $\mathcal{V} = \{V_1,\dots,V_K\}$, and $t$ indexes the time (position in a sequence). Now, if you know (somehow) the transition probabilities $A$ and emission probabilities $B$, and want to incorporate some priors $B^{\text{prior}}$ you believe in, the best way is to use the multiplication $B^{\text{new}}_{i}(k) = B_{i}(k) \cdot B^{\text{prior}}_{i}(k)$ and then renormalize the conditional probabilities. The meaning of this is to weight the original emission probabilities according to your prior knowledge. Another scenario is when you want to learn the transition matrix and emission probabilities from a set of given sequences $\mathcal{O} = \{\vec O_1, \dots, \vec O_L\}$. In this case, we should examine the M-step of the commonly used EM algorithm for HMM learning a bit more in depth. According to the M-step, we have the following update rule $$\bar B_j(k) = \frac{\sum_{l=1}^L\left(\sum_{t=1}^{T_l}\gamma_t^l(j)\cdot\mathbb{1}[o_t^l = V_k]\right)}{\sum_{l=1}^L\left(\sum_{t=1}^{T_l}\gamma_t^l(j)\right)} = \frac{\sum_{l=1}^L\left(\sum_{t=1}^{T_l}P(q_t = S_j, o_t^l = V_k~|~\vec O_l; \theta)\right)}{\sum_{l=1}^L\left(\sum_{t=1}^{T_l}P(q_t = S_j~|~\vec O_l; \theta)\right)},$$ where $\gamma$ is one of the statistics precomputed on the E-step, $\gamma_t^l(j) = P(q_t = S_j~|~\vec O_l; \theta)$, $\mathbb{1}[\cdot]$ is the indicator function, and $\theta$ denotes the current model parameters.
Now, in order to incorporate $B^{\text{prior}}$ into the model, the right way would probably be to weight all the observation sequences according to this prior, i.e., change the M-step formula to the following $$\tilde B_j(k) = \frac{\sum_{l=1}^L\left(\sum_{t=1}^{T_l}P(q_t = S_j, o_t^l = V_k~|~\vec O_l; \theta)\right)P_{\text{prior}}(\vec O_l;\theta)}{\sum_{l=1}^L\left(\sum_{t=1}^{T_l}P(q_t = S_j~|~\vec O_l; \theta)\right)P_{\text{prior}}(\vec O_l; \theta)},$$ where $P_{\text{prior}}(\vec O_l; \theta)$ is computed according to the transition matrix of the current model $\theta$ and the prior emissions $B^{\text{prior}}$ you have. The intuition behind this way of including the prior is that you do not regard all the observation sequences as equally possible, but judge them according to your prior knowledge.
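The simpler multiplicative scheme from the first case can be sketched directly (the numbers here are made up):

```python
import numpy as np

# Hypothetical emission matrix B[i, k] = P(o = V_k | q = S_i) and a prior
B = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3]])
B_prior = np.array([[0.8, 0.1, 0.1],
                    [0.2, 0.2, 0.6]])

# Elementwise product, then renormalize each row to a probability vector
B_new = B * B_prior
B_new /= B_new.sum(axis=1, keepdims=True)
print(B_new[0])   # first row re-weighted toward V_1, the prior's favourite
```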
Let $X_1, X_2, \dots, X_n$ be a random sample from a distribution whose PDF is given by $f(x; \theta)=(\theta+1)x^{\theta}$ for $0 \leq x \leq 1$ and $0$ otherwise. Find the method-of-moments estimator for $\theta$. So I have done the following: $\mathbb{E}(X)=\int_{-\infty}^{\infty} xf(x)\,dx = (\theta +1)\int_{0}^{1}x^{\theta+1}\,dx=\frac{\theta +1}{\theta+2}$. I am unsure now how to equate this to $\bar{X}$.
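Setting the first population moment equal to the sample moment, $\frac{\theta+1}{\theta+2} = \bar X$, and solving gives $\hat\theta = \frac{2\bar X - 1}{1 - \bar X}$. A quick simulation check (my own sketch; note $f$ is the Beta$(\theta+1,1)$ density, so inverse-transform sampling applies):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 3.0
# f(x) = (theta+1) x^theta on [0,1] has CDF x^(theta+1), so
# inverse-transform sampling gives X = U^(1/(theta+1))
x = rng.random(100_000) ** (1.0 / (theta_true + 1.0))

x_bar = x.mean()
theta_hat = (2 * x_bar - 1) / (1 - x_bar)   # method-of-moments estimate
print(theta_hat)                            # close to 3.0
```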
Let $1\leq d$ be an integer. Consider the $d$-dimensional moment curve $\mu\colon \mathbb R\to \mathbb R^d$ given by $t\mapsto (t,t^2,\dots, t^d)$. Given a finite subset $S\subset \mathbb R$ of cardinality $\geq d+1$, the $d$-dimensional cyclic polytope $C(d,S)$ is the convex hull of $\mu(S)$ in $\mathbb R^d$. It is well known that the combinatorial type of the polytope $C(d,S)$ depends (for fixed $d$) only on the cardinality of $S$. Cyclic polytopes and their triangulations can be studied by varying $d$ and exploiting the relationships arising from the obvious projection map $C(d+1,S)\to C(d,S)$ (forgetting the last coordinate); see e.g. Rambau's thesis. The cyclic polytope $C(d,S)$ can be equipped with additional structure as follows: every subset $T\subset S$ of cardinality $d$ determines an affine hyperplane spanned by $\mu(T)$ in $\mathbb R^d$; altogether this gives a hyperplane arrangement $H(d,S)$. If we restrict this hyperplane arrangement to the cyclic polytope, we get a partition $P(d,S)$ of $C(d,S)$ into convex pieces with pairwise disjoint interiors. What is known about the combinatorics of the hyperplane arrangement $H(d,S)$ and about the corresponding partition $P(d,S)$? Is there a systematic study à la Rambau? Note that $H(d,S)$ is no longer invariant if one replaces $S$ by a different set of the same cardinality. Let us denote by $H(d,n)$ the "standard" arrangement where $S=[n]=\{0,\dots, n\}$. For which $S$ is $H(d,S)$ combinatorially equivalent to $H(d,n)$? For which $S$ is $P(d,S)$ combinatorially equivalent to $P(d,n)$? Finally, let me ask a more specific question which is motivated by some pictures I drew in dimension two. Given a surjective weakly monotone map $f\colon[n]\to[d]$, let $U_f$ denote the collection of those $d$-dimensional simplices $\Delta^I=C(d,I)$ spanned by the vertices $\mu(I)$ for some $I\subset [n]$ such that $f|I\colon I\to [d]$ is a bijection.
My two-dimensional pictures suggest: Is it true that, for every $f$ as above, the polytope $\bigcap U_f \subset C(d,n)$ is (A) a piece of the partition $P(d,n)$ and (B) a $d$-dimensional simplex?
Let it be given that $A$ is a real-valued $m \times n$ matrix with entries $A_{ij} = p_j\left(\frac{i}{m-1}\right)$ for $i=0,\dots,m-1$ and $j=0,\dots,n-1$, where $p_j$ is a Legendre polynomial. Show that $$\frac{\|{x}\|_2}{2} \leq \sqrt{\frac{2}{m}}\|{Ax}\|_2\leq \frac{3\|{x}\|_2}{2}$$ when $m\geq Cn^2$ for some $C\in\mathbb{R}$ and for all $x\in\mathbb{C}^n$. I am given a hint: $$\int_{-1}^1 |p'(x)|^2\,dx\leq cn^4\int_{-1}^{1}|p(x)|^2\,dx$$ for all $p\in\mathbb{P}_n$ and some $c>0$. Would someone be able to help me with this? I am somewhat stuck and do not really know where to start. I know that the Legendre polynomials are defined as polynomials $p_0,p_1,\dots$ satisfying $p_n\in\mathbb{P}_n$ and $$\int_0^1 p_n(x)p_m(x)\,dx = \delta_{n,m}$$ for $n,m=0,1,2,\dots$
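A numerical sanity check of the claimed inequality (a sketch of mine, with assumptions: I take "Legendre polynomial" to mean the shifted polynomials $p_j(x)=\sqrt{2j+1}\,P_j(2x-1)$, orthonormal in $L^2[0,1]$ as in the normalization above, and I pick $C=16$ arbitrarily):

```python
import numpy as np
from numpy.polynomial.legendre import legval

n = 8
m = 16 * n * n                       # m >= C n^2 with C = 16, an arbitrary choice
grid = np.arange(m) / (m - 1)        # the nodes i/(m-1), i = 0, ..., m-1
A = np.empty((m, n))
for j in range(n):
    c = np.zeros(j + 1)
    c[j] = np.sqrt(2 * j + 1)        # normalization: int_0^1 p_j^2 dx = 1
    A[:, j] = legval(2 * grid - 1, c)

# (1/m) A^T A is a Riemann sum for the Gram matrix, hence close to I,
# so the singular values of sqrt(2/m) A cluster near sqrt(2)
s = np.linalg.svd(np.sqrt(2.0 / m) * A, compute_uv=False)
print(s.min(), s.max())              # both within [1/2, 3/2]
```

Since $\frac{1}{m}A^{T}A$ is a Riemann sum for $\int_0^1 p_i p_j\,dx = \delta_{ij}$, the singular values sit near $\sqrt 2 \approx 1.41$, just inside the upper bound; the hint's Markov-type inequality is presumably what controls the Riemann-sum error once $m \gtrsim n^2$.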
Simple pursuit Zombies moving towards you will always catch you, but due to their lack of intelligence, your survival time increases exponentially with your relative speed. In $k=O(1)$ dimensional space ($k=2$ in the problem), the expected survival time is $d⋅(1/Θ(μd^k))^{(1+1/v)(1±o(1))/(k-1)}$ if $v$ is bounded below 1 and $μd^k→0$. I conjecture that for fixed $v$, the above $o(1)$ is unnecessary. If $μ$ depended on the distance $r$ from the origin, the threshold value for survival (at large $r$ and constant $v<1$) is $μ(r) = r^{-(k-1)v/(1+v)±o(1)}$ (the $o(1)$ is likely negative and necessary here). Consider first the continuous field version of the problem: The initial density is $μ$ and the player loses when the mass within distance $d$ of the player reaches (or exceeds) 1. Let $r(t)$ be the trajectory of the player; $r(0) = 0$. If $a$ and $b$ are trajectories of two possible zombies, we have: - $a'(t) = v \frac{r(t)-a(t)}{|r(t)-a(t)|}$ - $|b(t)-a(t)|$ is nonincreasing - For small $|b(t)-a(t)|$, $b'(t) = a'(t) - \frac{v}{|a(t)-r(t)|} (b(t)-a(t))_⊥ + O(\frac{|b(t)-a(t)|}{|r(t)-a(t)|})^2$ where the orthogonal projection $(b(t)-a(t))_⊥ = b(t)-a(t)-(b(t)-a(t))⋅(r(t)-a(t)) \frac{r(t)-a(t)}{|r(t)-a(t)|^2}$. - If $f_T(a(0)) = a(T)$, then the density at time $T$ at $a(T)$ is $μ/\det J_{f_T}$, and by integrating the above $b'$ equation, we get $\log \det J_{f_T} = -(k-1)v \int_0^T \frac{dt}{|r(t)-a(t)|}$. Now among all $a$ and $r$ with $|a(T)-r(T)|≤d$, the integral (and thus the density) is minimized if $|a(0)|=(1+v)T+d$, which requires moving in a straight line from the origin at maximum speed. Furthermore, in this case, the density at $a(T)$ matches the average density within distance $d$ up to a constant factor, and the precise bounds in the first paragraph follow. Lower bounds For the nonfield version, we can get the lower bounds by escaping in a nearly straight line while avoiding traps. 
This is possible here even if the player velocity is always within $ε$ (if $ε$ is $Θ(1)$) of moving with speed 1 in the positive $x$ direction. Intuitively, on a straight line, the average distance between traps of size $O(d)$ is $O(1/(μ_1 d^{k-1}))$ (where $μ_1$ is the relevant field density; $1-v=Ω(1)$), and using the (approximate) independence, the frequency of larger traps drops exponentially with trap size, with the lower bounds (in the first paragraph) reached with high probability unless the initial position is a trap. However, formalization of traps is a bit tricky, so we instead observe that we can trace out a trajectory of speed $1+v$ of sufficient clearance against the initial configuration, and then evolve it the same way as the zombies to get the escape trajectory. As long as all tangents of the initial trajectory are at an angle $≤α$ (for $α≤45°$) to the $x$ axis, this property will hold throughout the trajectory evolution, allowing us to ensure that the clearance will not shrink too much. For fixed $v$ ($μ$ does not depend on $r$), we can remove the $o(1)$ from the lower bounds for survival time by choosing a trajectory with variable $α$ with, at each point, $1/α$ at least polynomial in the distance between the point and the final destination. Also, for $v=1$ (and small $d$) and variable $μ(r) = r^{-k/2+1/6-ε}$, you can survive by making your trajectory increasingly smooth: Zombies following you at a small distance gain on you at a speed proportional to the square of the path curvature (and the square of the distance to you), so with curvature $r^{-0.5-ε}$ you can avoid $d→0$ as $r→∞$ for those zombies. In a straight line path and $d=Θ(1)$, you will encounter zombies at typical intervals $s = Θ(1/(μ(r) r^{(k-1)/2}))$. Avoiding an incoming zombie from a distance $s$ uses correction $O(\sqrt s)$, corresponding to curvature $O(s^{-1.5})$, which is $O(r^{-0.5-ε_2})$ for the above $μ(r)$. Also, for $v≤1$ and $d=0$, you can survive indefinitely (i.e. 
not lose at finite time) by simply moving at speed 1 in a straight line in any unoccupied direction — or even, with probability 1, by moving with speed 1 along any curve with bounded curvature, with the curve chosen independently of zombie positions.

**Upper bound**

For the upper bound, it suffices to consider a single point and a linear approximation to the problem. To escape, for every $R>0$, the player would have to cross (at time $O(R/v)$) an $a(t)$ that starts at the sphere $|a(0)|=R$. A player cannot approach $a(t)$ to within distance $d$ without increasing the field density at $a(t)$ by a factor of $ρ =(R/d)^{(k-1)/(1+1/v)}$. Furthermore, as long as the increase in density at $a(t)$ is by a factor of $o(ρ)$, the cumulative relative nonlinearity within distance $O(d)$ of $a(t)$ is $o(1)$. (Proof outline: If the remaining density increase is by a factor of $ρ_1$, then the distance of the relevant points to $a$ is $D=O(ρ_1^{1/(k-1)} d)$, and with the player far enough compared to $D^2/d$, the nonlinearity is small enough.) From there, for large enough $R$, $\{b(0):|b(t)-a(t)|≤d\}$ contains a volume $ω(\log R)$ ellipsoid (contained within distance $O(R)$ from the origin). By a counting argument, with probability $1-o(1)$ all such ellipsoids contain at least one relevant point, as required. For the bounds without $o(1)$, we are off by a factor of $\log(1/(μd^k))$ inside the $Θ(μd^k)$. However, I expect that the $\log$ factor can be eliminated by using the player path rather than a single point $a(t)$ and by analyzing the impact of nonlinearities.

**Intelligent pursuit**

If zombies (in $k = O(1)$ dimensional space) could strategize and cooperate, and letting $D=\frac{1}{μd^{k-1}v}$, they could surround you with gaps $<d$ in time $O(D)$ and capture you in time $O(D + \frac{\log^{1/k}(1+Dμ^{1/k})}{μ^{1/k}v})$ (and even do this with a strategy independent of your movement; also, the second summand is typically small and stems from random fluctuations in zombie density).
You cannot escape for $μ(r) = ω(1/r^{k-1})$. I do not know whether you can escape if $μ(r) = O(1/r^{k-1})$ (and $d$ is small enough and $v<1$); Conway's angel problem has similar subtleties. At $v<1$, a fixed finite number of zombies cannot capture you at a small enough $d$, since you can perturb your path (with sufficient smoothness and clearance) to avoid the first zombie, use a smaller-scale perturbation against the second one, and so on. For the naive implementation, you might have to avoid the $n$th zombie $2^{n-1}$ times, and your clearance drops exponentially with $n$. However, by considering groups of zombies, you can cut off a fraction at each length scale, allowing a polynomial clearance, and you can escape at density $μ(r) = r^{-k+ε}$ for a small enough $ε>0$ dependent on $v$ (and small enough $d$ dependent on $v,ε$). At $v=1$, $k+1$ well-placed zombies (i.e. 3 for the plane) can win in finite time even at $d=0$. The reason is that a single pursuer can guard a half-space. They can even (deterministically) win if they have nonzero but small enough reaction time proportional to distance (i.e. the speed of light is finite). A single pursuer can guard a slowly receding half-space, allowing 1 (effectively 2) out of $k+1$ pursuers to advance.
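As a sanity check on the basic pursuit dynamics $a'(t) = v \frac{r(t)-a(t)}{|r(t)-a(t)|}$ from the beginning of this answer: a zombie starting directly behind a straight-line escape falls behind at rate exactly $1-v$. A minimal simulation sketch (the parameters here are illustrative choices of mine):

```python
import math

def final_distance(ax, ay, v, T=10.0, dt=1e-3):
    """Euler-integrate a'(t) = v (r - a)/|r - a| for one zombie chasing a
    player who moves at unit speed along the x axis: r(t) = (t, 0).
    Returns the zombie-player distance at time T."""
    steps = round(T / dt)
    for i in range(steps):
        t = i * dt
        dx, dy = t - ax, -ay              # r(t) - a(t)
        dist = math.hypot(dx, dy)
        ax += v * dx / dist * dt
        ay += v * dy / dist * dt
    return math.hypot(T - ax, -ay)

# A zombie 5 units directly behind, with v = 1/2, ends at distance
# 5 + (1 - v) * T = 10 at T = 10 (the colinear case is exact under Euler).
d = final_distance(-5.0, 0.0, v=0.5)
```

The colinear case integrates exactly because the chase direction is constant; off-axis starting positions trace genuine pursuit curves.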
It rather depends on what you mean by "get". In general you can't obtain population quantities from sample information. However, you can often obtain estimates, though in this case the estimates may not be very good. If you have them, you can readily calculate the parameters from the population mean and median; if $\tilde{m}=\exp(\mu)$ is the population median and $m=\exp(\mu+\frac12\sigma^2)$ is the population mean then $\mu=\log(\tilde{m})$ and $\sigma^2=2\log(\frac{m}{\tilde{m}})=2(\log(m)-\log(\tilde{m}))$. You could similarly attempt to use the sample mean and sample median in some kind of estimator of the population quantities. If the only things you have are the sample mean and median from a lognormal ($\bar{x}$ and $\tilde{x}$ respectively) then you could at least use the obvious strategy of replacing population quantities by sample ones*, combining method of moments and method of quantiles ... $\hat{\mu}=\log(\tilde{x})$ and $\hat{\sigma}^2=2\log(\frac{\bar{x}}{\tilde{x}})=2(\log(\bar{x})-\log(\tilde{x}))$. I believe these estimators will be consistent. However, in small samples these are sure to be biased, and may not be very efficient, but you may not have a lot of choice without considerable analysis. Of course, in reality, you don't really know your data are drawn from a lognormal distribution - that's pretty much a guess. However, in practice it might be a quite serviceable assumption. Ideally one would work out the joint distribution of the sample mean and median from a lognormal, and then try to maximize the likelihood over the parameters on that bivariate distribution; that should do about as well as possible, but that's more a decent research problem (well worth a paper if it hasn't been done before) than a matter of a few paragraphs of answer. One could conduct some simulation investigations into the properties of the joint distribution of sample mean and median. 
For example, consider that the distribution of the ratio of mean to median should be scale-free -- a function of $\sigma$ only. Even if we can't compute it algebraically, we can look at how the ratio (for example) behaves as $\sigma$ changes. One might then be able to choose the $\sigma$ that approximately maximizes the chance of getting the ratio you observed ($\mu$ could be estimated in a variety of ways, but the obvious one - the log of the median, as mentioned earlier - would not be terrible). * Warning: it's perfectly possible for the sample median to exceed the sample mean. In that case the simple estimator suggested above is no help, since it relies on the mean being above the median (it will give a negative estimate for a positive parameter).
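If it helps, the plug-in estimators above take only a few lines; a quick simulation sketch (the true parameter values and sample size here are arbitrary illustrations of mine):

```python
import numpy as np

rng = np.random.default_rng(0)

def lognormal_from_mean_median(xbar, xmed):
    """Plug-in estimates from the sample mean and sample median.

    Note sigma2_hat comes out negative whenever the sample median exceeds
    the sample mean (the warning above), so it is not always usable.
    """
    mu_hat = np.log(xmed)
    sigma2_hat = 2.0 * (np.log(xbar) - np.log(xmed))
    return mu_hat, sigma2_hat

# Consistency check on a large simulated sample with known parameters.
mu, sigma = 1.0, 0.5
x = rng.lognormal(mean=mu, sigma=sigma, size=200_000)
mu_hat, sigma2_hat = lognormal_from_mean_median(np.mean(x), np.median(x))
```

With $n$ this large, both estimates land close to the true $\mu=1$ and $\sigma^2=0.25$, consistent with the (conjectured) consistency of the estimators.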
I've recently found an article (referred to somewhere on this site) criticizing the use of common rules of algebra on infinite series. To be honest, the video referred to is one of the Numberphile videos I liked the most. I mean, informally, to say a rule doesn't hold, I think one should find an example (in modern logic, a $\forall$ statement is true by default, and an $\exists$ false); say, associativity of addition for infinite series: $$ S_1=(1-1)+(1-1)+(1-1)+(1-1)\cdots=0\\ S_2=1+(-1+1)+(-1+1)+(-1+1)+\cdots=1\\ \therefore S_1\neq S_2 $$ But what inconsistency does: $$ \begin{align} S=1&-1+1-1+\cdots\\ S+S=1&-1+1-1+\cdots+\\ &+1-1+1-1+\cdots=1\iff\\ \iff2S=1&\iff S=\frac12 \end{align} $$ create? I'm not even getting into Cesàro summation. Why does the limit of a sum have to equal the sum itself? Why can't we have $$ \frac12=\sum_{n=0}^\infty\ (-1)^n\neq \lim_{x\to\infty}\sum_{n=0}^x\ (-1)^n= \text{st}\sum_{n=0}^H\ (-1)^n= \text{undefined} $$ After all, here $x$ is an arbitrarily big real, $H$ is a positive infinite hyperinteger, and $\infty$ is NaN, not a number. Where is the inconsistency? NaN Edit: There are already a lot of comments, and I feel I haven't made myself clear. Maybe the question is more philosophical than I thought. Here is an attempt to make my still-developing points clearer: $\cdots$ means the continuation to infinity of a series that continues the simplest pattern. Example: $\displaystyle\sum_{n=0}^\infty\ (-1)^n$ means that however many terms you take in the partial sum, you are as far from the result as you were at the beginning. As by $6.$, such a non-converging sum cannot be computed directly. In the first example, it is proven that associativity does not hold for all infinite series, at least for divergent series, the same way $\sqrt a \sqrt b=\sqrt{ab}$ does not hold in $\mathbb C$.
However, non-contradicting laws for associativity can be found: Associativity may not work infinitely for numbers within the same series, as $S\neq S_1\neq S_2\neq S$ shows. However, it works pairwise between infinite series. $$ \begin{align} S+S=1&-1+1-1+\cdots+\\ &+1-1+1-1+\cdots \end{align} $$ is the same as $$ S+S=1+(-1+1)+(+1-1)+(-1+1)+\cdots $$ which is $1$, as we have seen. This rule is consistent. Other pairwise associations for this will either give the same, or $2-2+2-2+\cdots$, which is also $1$. In other words, for any numbers, associativity works. So it works pairwise (2 series, 2 numbers per application), an infinite number of times. But it doesn't work within the same series, as each set of numbers would be a finite series; infinity is not a number, so it can't have not-a-number of times per application. That is, $$ S_1=(1-1)+(1-1)+(1-1)+\cdots $$ is the same as $$ \begin{align} S_1=1+&1+1+\cdots\\ -1-&1-1+\cdots \end{align} $$ and not the same as $$ \begin{align} S=1-&1+\\ +1-&1+\\ +1-&1+\\ +\cdots \end{align} $$ The limit of a sum equals the sum when the sum converges. Example: $$ \sum_{n=0}^\infty 2^{-n}=\lim_{x\to\infty}\sum_{n=0}^x 2^{-n}=2 $$ The limit of a non-converging sum does not exist or is infinity, not a number. All sums have a value, even though their limits might not have one, or the value of the sum is infinity. If that is the case, infinity, as not a number, cannot be directly summed with another sum (eliminating problems such as $\infty-\infty$ by reason of lack of information). If, according to non-contradictory rules, a value can be assigned to a sum, that is the value of the sum. See the example above for $1-1+1-1+\cdots$. To distinguish between the values of two non-convergent sums, they first must be computed according to non-contradictory rules. Then their values can be compared, by transitivity of equality.
A divergent sum cannot be computed directly (the reason why $S\neq S_1\neq S_2\neq S$), as by definition of infinity, one cannot reach it. Again, use non-contradictory rules, making a finite number of changes that maintain the value of the divergent sum (see the examples' consistency). Thank you for reading,
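P.S. The Cesàro value $\frac12$ for $1-1+1-1+\cdots$ can at least be checked numerically, by averaging the partial sums (a quick sketch of mine):

```python
# Partial sums s_n of Grandi's series 1 - 1 + 1 - 1 + ... oscillate between
# 1 and 0 forever, but their running average (the Cesàro mean) settles at 1/2.
N = 1_000_000
partial = 0
total_of_partials = 0
term = 1
for n in range(N):
    partial += term              # s_{n+1} = 1, 0, 1, 0, ...
    term = -term
    total_of_partials += partial
cesaro_mean = total_of_partials / N
```

For even $N$ the mean is exactly $\frac12$, while the partial sums themselves never settle.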
Applet: Lotka-Volterra model, visualized as functions of time Illustration of the solution to the predator-prey system \begin{align*} \diff{r}{t} &= \alpha r - \beta r p\\ \diff{p}{t} &= - \gamma p + \delta r p\\ r(t_0) &= r_0\\ p(t_0) &= p_0 \end{align*} for the population sizes $r(t)$ of prey and $p(t)$ of predators at time $t$. The left panel shows plots of the solutions $r(t)$ (blue curve) and $p(t)$ (red curve) as functions of time. The initial conditions $r_0$ and $p_0$ can be changed by dragging the blue and red points on the line $t=t_0$ or by typing values in the corresponding boxes. Green points on the blue and red curves illustrate particular values of $r(t)$ and $p(t)$ for the value of $t$ shown on the green slider. You can change $t$ by dragging the green point on the slider, the green points on the blue and red curves, or by clicking the play button (triangle) in the lower left corner of one of the panels, which starts an animation where $t$ increases steadily. The values of $r(t)$ and $p(t)$ for the value of $t$ on the green slider are displayed at the top and are illustrated by the number (capped at 10,000) of blue points (prey individuals) and red diamonds (predator individuals) shown in the right panel. The Greek letters $\alpha$, $\beta$, $\gamma$, and $\delta$ are parameters that can be changed by typing values in the boxes. The maximum values of $r$ and $p$ that are shown in the plots as well as the range of times $t_0 \le t < t_f$ can be changed by typing values in their corresponding boxes. Applet file: lotka_volterra_versus_time.ggb Applet links General information about Geogebra Web applets This applet was created using Geogebra. In most Geogebra applets, you can move objects by dragging them with the mouse. In some, you can enter values with the keyboard. To reset the applet to its original view, click the icon in the upper right hand corner. 
You can download the applet onto your own computer so you can use it outside this web page or even modify it to improve it. You simply need to download the above applet file and download the Geogebra program onto your own computer.
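For readers who want to reproduce the applet's curves outside Geogebra, here is a minimal numerical-integration sketch (the parameter values, initial condition, and step size are illustrative choices, not the applet's defaults). It uses the classical fourth-order Runge-Kutta method and tracks the quantity $\delta r - \gamma\ln r + \beta p - \alpha\ln p$, which is conserved along exact solutions, as an accuracy check:

```python
import math

# Illustrative parameters and initial condition (not the applet's defaults).
alpha, beta, gamma, delta = 1.1, 0.4, 0.4, 0.1
r0, p0 = 10.0, 5.0

def deriv(r, p):
    """Right-hand side of the predator-prey system."""
    return alpha * r - beta * r * p, -gamma * p + delta * r * p

def rk4_step(r, p, h):
    """One classical Runge-Kutta step of size h."""
    k1r, k1p = deriv(r, p)
    k2r, k2p = deriv(r + h / 2 * k1r, p + h / 2 * k1p)
    k3r, k3p = deriv(r + h / 2 * k2r, p + h / 2 * k2p)
    k4r, k4p = deriv(r + h * k3r, p + h * k3p)
    return (r + h / 6 * (k1r + 2 * k2r + 2 * k3r + k4r),
            p + h / 6 * (k1p + 2 * k2p + 2 * k3p + k4p))

def invariant(r, p):
    """Conserved along exact Lotka-Volterra trajectories."""
    return delta * r - gamma * math.log(r) + beta * p - alpha * math.log(p)

r, p, h = r0, p0, 0.001
for _ in range(50_000):          # integrate up to t = 50 (several cycles)
    r, p = rk4_step(r, p, h)
```

Plotting the stored $(r, p)$ values against $t$ reproduces the periodic blue and red curves of the left panel.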
We know the Sobolev embedding theorem of Saloff-Coste: $\Big(\int_B|F|^{2q}d\mu\Big)^{\frac1q}\le e^{C(1+\sqrt KR)}V^{-2/n}R^2\int_B\Big(|\nabla F|^2+R^{-2}F^2\Big)d\mu$ with $Ric\ge-(n-1)K$, for all balls $B$ of radius $R$ and volume $V$, $F\in C^{\infty}_0(B)$, $q=n/(n-2)$. My question is whether this inequality has been established in the smooth metric measure space setting, i.e. $(M,g,e^{-f}d\mu)$ with Bakry-Émery Ricci curvature bounded below, $Ric_f=Ric+\operatorname{Hess} f\ge-(n-1)K$? Thank you!
Apologies for my inability to share intuition, a frequently subjective issue... I have learned a lot, in practice, by reading the Steuernagel group's work on numerical flows and the topological features of such flows. For a recent discussion/proof of the zeros, singularities, and negative-probability-density features (hence your source-sink query) in anharmonic quantum systems, see Kakofengitis, Oliva & Steuernagel, 2017. Basically, all bets are off when you (a point in phase space) and your neighbors enter a phase-space cell of order $\hbar$, by dint of the uncertainty principle, and that includes the very definition of what a trajectory is. If you watch the nifty movies of Cabrera and Bondar in the WP article you are linking to, for the Morse and quartic potentials, you actually see this in real time, as a lump (you) spreads all over the phase space in a highly organized way... I defy you to discern trajectories there! There is powerful topology at work, but I'd defer to Steuernagel for that. As a practical reassurance, I'll work out a trivial exercise from our book, on compressibility of Euler flows. For a Hamiltonian $H=p^2/(2m)+V(x)$, the Moyal evolution equation amounts to an Eulerian probability transport continuity equation, $$\frac{\partial f(x,p)}{\partial t} +\partial_x J_x + \partial_p J_p=0~,$$ where, for $\mathrm{sinc}(z)\equiv \sin z/z$, the phase-space flux is $$J_x=pf/m~ ,\\ J_p= -f \,\mathrm{sinc} \left( {\hbar \over 2} \overleftarrow {\partial _p} \overrightarrow {\partial _x} \right)~~ \partial_x V(x).$$ Classical mechanics is crucially different, in that the phase-space current is always $ {\bf J}=(p/m,-\partial_xV(x))f$, and the velocity ${\bf v}=(p/m,-\partial_xV(x))$ is manifestly divergenceless in phase space. Now note that for the oscillator, $V_1= x^2/2$, we have $J_p=-fx$, so the phase-space velocity is ${\bf v}=(p/m,- x)$ and $\nabla \cdot {\bf v}=0$: incompressibility.
This is a reminder that the quantum oscillator is basically classical, and its wave packets do not spread, as iconically pointed out by Schroedinger... coherent states. But this is a crying exception. For a more generic potential, like the quartic, $V_2= x^4/4$, $$v_p=J_p/f= -x^3 +\hbar^2 x ~\partial_p^2f /f , \\\nabla \cdot {\bf v}= \hbar^2 x ~\partial_p (\partial_p^2 f(x,p) /f(x,p))\neq 0,$$ so the flow is modified by $O(\hbar^2)$ to compressible. So the strictly quantum difference between the quantum Moyal bracket and the classical Poisson bracket is the crucial element in increasing or decreasing the amount of (quasi)probability in a comoving phase-space region $\Omega$, since$${d \over dt}\! \int_{\Omega}\! \! dx dp ~f= \int_{\Omega}\!\! dx dp \left ({\partial f \over \partial t}+ \partial_x (\dot{x} f) + \partial_p (\dot{p} f ) \right) = \int_{\Omega}\! \!\! dx dp~ (\{\!\!\{ H,f\}\!\!\}-\{H,f\})\neq 0 ~. $$ Note added: It is even odder. Quantum flows display physically significant viscosity!
Since the dawn of quantum mechanics, entanglement has been a central notion of the theory [18], with a considerable body of work dedicated to understanding, classifying, measuring, and characterizing this property of quantum states. Once it was recognized that it is computationally hard to decide whether a given quantum state is entangled or separable (i.e. classically correlated), researchers turned their attention to finding entanglement criteria, that is, computationally efficient tests which guarantee the presence of quantum entanglement, without also being necessary for it. In order to define entangled quantum states, we start with the opposite notion, that of separable states: a quantum state $\rho_{AB}$, shared between two parties, Alice and Bob, is called separable if it can be written as a convex mixture of product states between our two protagonists: \begin{equation}\label{eq:separable-decomposition}\tag{1} \rho_{AB} = \sum_{i=1}^r p_i \sigma_A^{(i)} \otimes \sigma_B^{(i)}, \end{equation} where $\sigma_A^{(1)}, \ldots, \sigma_A^{(r)}$ are states on Alice’s system, the $\sigma_B^{(i)}$ are states on Bob’s system, and the $p_i$ are convex weights. In other words, separable states are the quantum states that Alice and Bob can prepare locally, using only shared randomness. States which are not separable are called entangled. The prototypical entangled state is the maximally entangled state of two qubits: \begin{equation}\label{eq:maximally-entangled-state}\tag{2} | \psi_{me} \rangle = \frac{1}{\sqrt 2} (|00\rangle + |11\rangle) \in \mathbb C^2 \otimes \mathbb C^2. \end{equation} For unit-rank quantum states $\rho = | \psi \rangle \langle \psi |$, called pure, deciding whether the state is separable or entangled is easy: one needs to decide whether the reshaped matrix $\Psi$ has rank one, a task which is computationally cheap, using standard algorithms for computing singular values or eigenvalues.
Recall that to a quantum state $|\psi\rangle = \sum_{ij} x_{ij} |ij \rangle$, one associates a $d \times d$ matrix $\Psi = \sum_{ij} x_{ij} |i \rangle \langle j|$. The tensor $\psi$ is called the vectorization of $\Psi$, while $\Psi$ is said to be a reshaping of $\psi$ into a matrix. For the example of the maximally entangled state above, one computes $\Psi_{me} = I_2 / \sqrt 2$, which is a matrix of rank 2, certifying the entanglement of the pure state $|\psi_{me}\rangle$. In the case of mixed quantum states (positive semidefinite matrices of unit trace), the situation is much more complicated. The decision problem associated to the (weak) membership problem for the set of separable states has been proven by Gurvits [8] to be NP-hard, in some precise way of encoding the size of the input and the required precision. The result of Gurvits means that there is no universal, computationally cheap, and exact criterion for entanglement and separability; one needs to allow for some errors if one desires some fast way of testing entanglement. Computationally efficient entanglement tests have existed for a long time, the most important being the Peres-Horodecki positive partial transpose (PPT) test [17,10]: if the partial transposition of a quantum state is not positive semidefinite, then the state is entangled: $$[\operatorname{id} \otimes \operatorname{transp}](\rho_{AB}) \ngeq 0 \implies \rho_{AB} \text{ is entangled.}$$ The operation $[\operatorname{id} \otimes \operatorname{transp}](X)$ is cleverly denoted by $X^\Gamma$, the $\Gamma$ superscript being meant to represent half of the transpose. 
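The reshaping and rank test just described take only a few lines in practice; a small numerical sketch (my own illustration) for the maximally entangled state:

```python
import numpy as np

# Coefficients x_{ij} of |psi_me> = (|00> + |11>)/sqrt(2) in the basis |ij>.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# Reshape the length-d^2 coefficient vector into the d x d matrix Psi.
Psi = psi.reshape(2, 2)

# A pure state is separable iff its reshaping has rank one;
# here Psi = I_2 / sqrt(2) has rank 2, certifying entanglement.
rank = np.linalg.matrix_rank(Psi)
```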
For the rank-one (pure) case of the maximally entangled state from \eqref{eq:maximally-entangled-state}, the partial transposition criterion gives $$\frac 1 2 \begin{bmatrix} 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 1 & 0 & 0 & 1 \end{bmatrix}^\Gamma = \frac 1 2 \begin{bmatrix} 1 & 0 & 0 & 0\\ 0 & 0 & 1 & 0\\ 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 1 \end{bmatrix}, \text{ with spectrum $(-1, 1, 1, 1)/2$}.$$ The presence of a negative eigenvalue in the spectrum above proves that the maximally entangled state is, indeed, entangled. In their seminal work [10], the Horodeckis proved a very intriguing result, giving an equivalent characterization of separability: a quantum state $\rho_{AB} \in \mathcal M_d(\mathbb C) \otimes \mathcal M_d(\mathbb C)$ is separable if and only if for every positive map $\Phi : \mathcal M_d(\mathbb C) \to \mathcal M_d(\mathbb C)$, the matrix $[\operatorname{id} \otimes \Phi](\rho_{AB})$ is positive semidefinite. Here, we use the notion of positivity in the $C^*$ algebra sense: a linear map $\Phi$ between two matrix algebras is called positive if it sends positive semidefinite matrices to positive semidefinite matrices, see [16]. Hence, to show that a quantum state $\rho_{AB}$ is entangled, it suffices to find a single positive map $\Phi$ whose partial action on the quantum state renders it non-positive semidefinite; such a map is called an entanglement witness (to be more precise, in the literature the Choi-Jamio{\l}kowski matrix [12,5] of $\Phi$ is called an entanglement witness [11, Sections VI B 2,3]). The partial transposition criterion corresponds to the very important choice $\Phi = \operatorname{transp}$. The problem with the Horodeckis’ characterization of separability is that one needs to check the positive semidefinite condition for all positive maps.
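The partial transposition in the displayed computation is equally mechanical to check numerically; a minimal sketch (my own illustration), which transposes Bob's index of the reshaped density matrix:

```python
import numpy as np

# Density matrix of the maximally entangled two-qubit state.
psi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
rho = np.outer(psi, psi)

# Partial transpose on Bob's qubit: view rho with indices (i, j; k, l),
# where (i, k) belong to Alice and (j, l) to Bob, and swap j with l.
rho_pt = rho.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)

eigenvalues = np.linalg.eigvalsh(rho_pt)   # ascending order
```

The eigenvalues come out as $(-1, 1, 1, 1)/2$, matching the spectrum quoted above.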
One can get around this in small dimensions: for $2 \otimes 2$ and $2 \otimes 3$ systems, it is enough to check the positivity of the partial transposition; this fact is a non-trivial result in operator algebra due to Woronowicz [20], see also [2, Section 2.4.5] for a very pleasant treatment of the $2 \otimes 2$ case. In dimension larger than $6 = 2\times 3$, the transposition map does not suffice: there exist PPT entangled states. Moreover, one does not get away with using “more” positive maps, see [19,1,7]. In [13], the authors investigate the duality relation between positive maps and entanglement from a different perspective. Instead of trying to find a subclass of positive maps necessary and sufficient for characterizing entanglement, they look at the set of density matrices which are certified as entangled by a given positive map. The larger this set is, the better the given fixed map is at detecting entanglement. Precisely, the authors introduce the non-$m$-positive dimension of a positive map $\Phi:\mathcal M_d(\mathbb C) \to \mathcal M_d(\mathbb C)$, which measures how large a subspace of $\mathbb C^d \otimes \mathbb C^d$ can be if every quantum state supported on the subspace is rendered non-positive semidefinite under the partial action of $\Phi$. Equivalently, this is the maximal number of negative eigenvalues that the adjoint map $\operatorname{id}_m \otimes \Phi^*$ can produce from a positive semidefinite input, where the identity map acts on $m \times m$ matrices. This number, denoted $\nu_m(\Phi)$, has been previously computed by one of the authors in the case of the transposition map [14]: $$\nu_m(\operatorname{transp}_d) = (m-1)(d-1).$$ The authors then proceed to study many properties of the quantities $\nu_m$, and, importantly, to define a regularized version thereof, $$\nu(\Phi):=\lim_{m \to \infty} \frac{\nu_m(\Phi)}{m}.$$ It is important to show that the limit above exists, this being a non-trivial result.
The authors proceed then to give important lower bounds for the quantities $\nu_m$ and $\nu$, which are then applied to some important classes of positive maps in quantum information theory, such as the reduction map [4], the Choi map [6], or the Breuer-Hall map [3,9]. Overall, the results in [13] are an important contribution to the theory of positive linear maps, which is, mainly due to the close relation to entanglement theory, a very active field of research. Recently, many important mathematical contributions have been made from research groups with a quantum theory background ([15] or [21] just to mention two recent ones), evidence of the strong interactions between the two fields. [1] Guillaume Aubrun and Stanislaw Szarek. Dvoretzky's theorem and the complexity of entanglement detection. Discrete Analysis, page 1242, 2017. 10.19086/da.1242. https://doi.org/10.19086/da.1242 [2] Guillaume Aubrun and Stanisław J Szarek. Alice and Bob Meet Banach: The Interface of Asymptotic Geometric Analysis and Quantum Information Theory, volume 223. American Mathematical Soc., 2017. [3] Heinz-Peter Breuer. Optimal entanglement criterion for mixed quantum states. Physical review letters, 97(8):080501, 2006. 10.1103/PhysRevLett.97.080501. https://doi.org/10.1103/PhysRevLett.97.080501 [5] Man-Duen Choi. Completely positive linear maps on complex matrices. Linear algebra and its applications, 10(3):285-290, 1975. 10.1016/0024-3795(75)90075-0. https://doi.org/10.1016/0024-3795(75)90075-0 [7] Hamza Fawzi. The set of separable states has no finite semidefinite representation except in dimension $3\times 2$. arXiv preprint arXiv:1905.02575, 2019. https://arxiv.org/abs/1905.02575. arXiv:1905.02575 [8] Leonid Gurvits. Classical complexity and quantum entanglement. Journal of Computer and System Sciences, 69(3):448-484, 2004. 10.1016/j.jcss.2004.06.003. https://doi.org/10.1016/j.jcss.2004.06.003 [9] William Hall. A new criterion for indecomposability of positive maps. 
Journal of Physics A: Mathematical and General, 39(45):14119, 2006. 10.1088/0305-4470/39/45/020. https://doi.org/10.1088/0305-4470/39/45/020 [10] Michał Horodecki, Paweł Horodecki, and Ryszard Horodecki. Separability of mixed states: necessary and sufficient conditions. Physics Letters A, 223(1):1-8, 1996. 10.1016/S0375-9601(96)00706-2. https://doi.org/10.1016/S0375-9601(96)00706-2 [11] Ryszard Horodecki, Paweł Horodecki, Michał Horodecki, and Karol Horodecki. Quantum entanglement. Reviews of Modern Physics, 81(2):865, 2009. 10.1103/RevModPhys.81.865. https://doi.org/10.1103/RevModPhys.81.865 [12] Andrzej Jamiołkowski. Linear transformations which preserve trace and positive semidefiniteness of operators. Reports on Mathematical Physics, 3(4):275-278, 1972. 10.1016/0034-4877(72)90011-0. https://doi.org/10.1016/0034-4877(72)90011-0 [13] Nathaniel Johnston, Benjamin Lovitz, and Daniel Puzzuoli. The Non-m-Positive Dimension of a Positive Linear Map. Quantum, 3:172, August 2019. 10.22331/q-2019-08-12-172. https://doi.org/10.22331/q-2019-08-12-172 [14] Nathaniel Johnston. Non-positive-partial-transpose subspaces can be as large as any entangled subspace. Physical Review A, 87(6):064302, 2013. 10.1103/PhysRevA.87.064302. https://doi.org/10.1103/PhysRevA.87.064302 [15] Alexander Müller-Hermes, David Reeb, and Michael M Wolf. Positivity of linear maps under tensor powers. Journal of Mathematical Physics, 57(1):015202, 2016. 10.1063/1.4927070. https://doi.org/10.1063/1.4927070 [16] Vern Paulsen. Completely bounded maps and operator algebras, volume 78. Cambridge University Press, 2002. [19] Łukasz Skowronek. There is no direct generalization of positive partial transpose criterion to the three-by-three case. Journal of Mathematical Physics, 57(11):112201, 2016. 10.1063/1.4966984. https://doi.org/10.1063/1.4966984 [20] Stanisław Lech Woronowicz. Positive maps of low dimensional matrix algebras.
Reports on Mathematical Physics, 10(2):165-183, 1976. 10.1016/0034-4877(76)90038-0. https://doi.org/10.1016/0034-4877(76)90038-0 [21] Yu Yang, Denny H Leung, and Wai-Shing Tang. All 2-positive linear maps from M3(C) to M3(C) are decomposable. Linear Algebra and its Applications, 503:233-247, 2016. 10.1016/j.laa.2016.03.050. https://doi.org/10.1016/j.laa.2016.03.050 This View is published in Quantum Views under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. Copyright remains with the original copyright holders such as the authors or their institutions.
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ = 7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ... Transverse sphericity of primary charged particles in minimum bias proton-proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV (Springer, 2012-09) Measurements of the sphericity of primary charged particles in minimum bias proton--proton collisions at $\sqrt{s}$=0.9, 2.76 and 7 TeV with the ALICE detector at the LHC are presented. The observable is linearized to be ... Pion, Kaon, and Proton Production in Central Pb--Pb Collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-12) In this Letter we report the first results on $\pi^\pm$, K$^\pm$, p and pbar production at mid-rapidity (|y|<0.5) in central Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV, measured by the ALICE experiment at the LHC. The ... Measurement of prompt J/$\psi$ and beauty hadron production cross sections at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV (Springer-Verlag, 2012-11) The ALICE experiment at the LHC has studied J/$\psi$ production at mid-rapidity in pp collisions at $\sqrt{s}$ = 7 TeV through its electron pair decay on a data sample corresponding to an integrated luminosity $L_{\rm int}$ = 5.6 nb$^{-1}$. The fraction ... Suppression of high transverse momentum D mesons in central Pb--Pb collisions at $\sqrt{s_{NN}}=2.76$ TeV (Springer, 2012-09) The production of the prompt charm mesons $D^0$, $D^+$, $D^{*+}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at the LHC, at a centre-of-mass energy $\sqrt{s_{NN}}=2.76$ TeV per ...
J/$\psi$ suppression at forward rapidity in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE experiment has measured the inclusive J/ψ production in Pb-Pb collisions at √sNN = 2.76 TeV down to pt = 0 in the rapidity range 2.5 < y < 4. A suppression of the inclusive J/ψ yield in Pb-Pb is observed with ... Production of muons from heavy flavour decays at forward rapidity in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV (American Physical Society, 2012) The ALICE Collaboration has measured the inclusive production of muons from heavy flavour decays at forward rapidity, 2.5 < y < 4, in pp and Pb-Pb collisions at $\sqrt {s_{NN}}$ = 2.76 TeV. The pt-differential inclusive ... Particle-yield modification in jet-like azimuthal dihadron correlations in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2012-03) The yield of charged particles associated with high-pT trigger particles (8 < pT < 15 GeV/c) is measured with the ALICE detector in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV relative to proton-proton collisions at the ... Measurement of the Cross Section for Electromagnetic Dissociation with Neutron Emission in Pb-Pb Collisions at √sNN = 2.76 TeV (American Physical Society, 2012-12) The first measurement of neutron emission in electromagnetic dissociation of 208Pb nuclei at the LHC is presented. The measurement is performed using the neutron Zero Degree Calorimeters of the ALICE experiment, which ...
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ... Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ... Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ... Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ... J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...
Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ... K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ... Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ... Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ... Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
I found the following problem on a comprehensive exam: Let $f : \mathbb{R}^2 \to \mathbb{R}$ be a continuous function and consider the function $F: \mathbb{R}^2 \to \mathbb{R}$ given by $$F(x,y) = \int_{D_{x,y}} f(u,v)\,du\,dv, \qquad D_{x,y} = \left\{(u,v) \in \mathbb{R}^2 \,\middle|\, u^2 + v^2 \leq x^2 + y^2 \right\}$$ Is $F(x,y)$ differentiable? If yes, find the differential $DF$. To me, it seems that this is a straightforward application of the Leibniz rule, but I've never applied it in such a setting: Treating $y$ as a constant, we can describe $F(x,y)$ (somewhat inelegantly) as $$\int_{h_1(x)}^{h_2(x)} \int_{g_1(x,v)}^{g_2(x,v)} f\,du\,dv$$ where $h_1,h_2$ are differentiable. Then if we let $$G(v) = \int_{g_1(x,v)}^{g_2(x,v)} f\,du$$ we get $\frac{d}{dx}F(x,y) = G(h_2(x))h_2'(x) - G(h_1(x))h_1'(x)$. Does this seem correct? This does not seem like a very adequate answer. I'd like to ask for some more insight into problems of this form, verification of whether my attempt had any validity, and a more complete answer if possible. Thanks.
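Not part of the original post, but the expected answer can be sanity-checked numerically. Since $F$ depends on $(x,y)$ only through $R=\sqrt{x^2+y^2}$, writing the integral in polar coordinates and differentiating the upper limit of the radial integral gives $\partial F/\partial x = x\int_0^{2\pi} f(R\cos\theta, R\sin\theta)\,d\theta$. A rough Python sketch comparing this formula against a central finite difference of a Riemann-sum evaluation of $F$ (the test integrand $f(u,v)=u^2+v$ and the point $(1.2, 0.7)$ are arbitrary choices):

```python
import math

def F(x, y, f, nr=400, nt=400):
    # F(x, y) = integral of f over the disk of radius R = sqrt(x^2 + y^2),
    # computed as a midpoint Riemann sum in polar coordinates.
    R = math.hypot(x, y)
    dr, dt = R / nr, 2 * math.pi / nt
    total = 0.0
    for i in range(nr):
        r = (i + 0.5) * dr
        for j in range(nt):
            t = (j + 0.5) * dt
            total += f(r * math.cos(t), r * math.sin(t)) * r * dr * dt
    return total

def dFdx(x, y, f, nt=4000):
    # Claimed formula: dF/dx = x * integral_0^{2pi} f(R cos t, R sin t) dt
    R = math.hypot(x, y)
    dt = 2 * math.pi / nt
    return x * dt * sum(f(R * math.cos((j + 0.5) * dt),
                          R * math.sin((j + 0.5) * dt)) for j in range(nt))

f = lambda u, v: u * u + v          # arbitrary smooth test integrand
x, y, h = 1.2, 0.7, 1e-3
fd = (F(x + h, y, f) - F(x - h, y, f)) / (2 * h)   # central finite difference
print(fd, dFdx(x, y, f))            # the two should agree to about 1%
```

For this particular $f$ the disk integral is $\pi R^4/4$, so the exact partial derivative is $\pi R^2 x$, which both numbers should approximate.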
This question follows on from a previous question I asked which was answered. It turns out my question lacked some important details, which was revealed by the answer posted on that thread. This is thus an edited version with the relevant details included. Given a random vector $X \in \mathbb{R}^k$, with a known pdf given by $f_X$. If $Y, Z \in \mathbb{R}^k$ are defined by $Y = AX$, $Z = BX$, where $A,B \in \mathbb{R}^{k\times k}$ are different, given, real-valued, singular matrices. I know how to calculate pdfs of $Y$ and $Z$ on their own. But how do I derive the joint pdf of $Y$ and $Z$? To be more specific: $f_X$ is a mixture of $0$-mean multivariate gaussians, each component in the mixture with a different, diagonal covariance matrix (but not of the form $\Sigma = \sigma^2 I$). given a unit vector $v \in \mathbb{R}^k$, my matrix $A = v v^\top$ is the projection of $X$ onto the $v$ direction, and $B = I - A$, such that $Y$ is orthogonal to $Z$. Any help would be much appreciated. For some context: My goal is to check for the $f_X$, $A$ and $B$ mentioned above, whether the vectors $Y$ and $Z$ are independent. This means I need to check whether the joint distribution of $Y$ and $Z$ factorises into the product of the marginals. There are at least some cases when this is true: if, for example, $X \sim \mathcal{N}(0,\sigma^2 I)$ and $A$, $B$ were projections described above. But proving it is not true in my case would also be helpful. Hence my need to derive the joint distribution of $Y$ and $Z$.
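For what it's worth, dependence can often be established without the full joint pdf: since $\mathbb{E}[YZ^\top] = A\,\Sigma_{\rm mix}B^\top$ with $\Sigma_{\rm mix} = \sum_k p_k \Sigma_k$, any nonzero cross-covariance entry already proves $Y$ and $Z$ are dependent. A Monte Carlo sketch in $k=2$ with made-up mixture parameters (two equally weighted zero-mean components with diagonal covariances diag(1,4) and diag(9,1), and $v = (1,1)/\sqrt{2}$); for these numbers the exact value of $\mathrm{Cov}(Y_1,Z_1)$ works out to $0.625$:

```python
import math
import random

random.seed(0)

# Hypothetical mixture: two equally weighted zero-mean Gaussians in R^2
# with diagonal covariances diag(1, 4) and diag(9, 1).
variances = [(1.0, 4.0), (9.0, 1.0)]

def sample_x():
    s1, s2 = variances[0] if random.random() < 0.5 else variances[1]
    return (random.gauss(0, math.sqrt(s1)), random.gauss(0, math.sqrt(s2)))

v = (1 / math.sqrt(2), 1 / math.sqrt(2))   # unit vector; A = v v^T, B = I - A

N = 200_000
acc = 0.0
for _ in range(N):
    x = sample_x()
    c = v[0] * x[0] + v[1] * x[1]          # <v, x>
    y = (c * v[0], c * v[1])               # Y = A x, projection onto v
    z = (x[0] - y[0], x[1] - y[1])         # Z = (I - A) x, orthogonal part
    acc += y[0] * z[0]                     # both are zero-mean
cov = acc / N                              # estimates Cov(Y_1, Z_1), exactly 0.625 here
print(cov)
```

Since $\mathrm{Cov}(Y_1,Z_1)\neq 0$ here, $Y$ and $Z$ fail to be independent for this anisotropic mixture, in contrast with the isotropic case $\Sigma = \sigma^2 I$ mentioned in the question.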
Difference between revisions of "Fujimura's problem"

Revision as of 18:23, 13 February 2009 Let [math]\overline{c}^\mu_n[/math] be the largest subset of the triangular grid [math]\Delta_n := \{ (a,b,c) \in {\Bbb Z}_+^3: a+b+c=n \}[/math] which contains no equilateral triangles [math](a+r,b,c), (a,b+r,c), (a,b,c+r)[/math] with [math]r \gt 0[/math]; call such sets triangle-free. (It is an interesting variant to also allow negative r, thus allowing "upside-down" triangles, but this does not seem to be as closely connected to DHJ(3).) Fujimura's problem is to compute [math]\overline{c}^\mu_n[/math]. This quantity is relevant to a certain hyper-optimistic conjecture. n=0 [math]\overline{c}^\mu_0 = 1[/math]: This is clear. n=1 [math]\overline{c}^\mu_1 = 2[/math]: This is clear. n=2 [math]\overline{c}^\mu_2 = 4[/math]: This is clear (e.g. remove (0,2,0) and (1,0,1) from [math]\Delta_2[/math]). n=3 [math]\overline{c}^\mu_3 \geq 6[/math]: For the lower bound, delete (0,3,0), (0,2,1), (2,1,0), (1,0,2) from [math]\Delta_3[/math]. For the upper bound: observe that with only three removals each of these (non-overlapping) triangles must have one removal: set A: (0,3,0) (0,2,1) (1,2,0) set B: (0,1,2) (0,0,3) (1,0,2) set C: (2,1,0) (2,0,1) (3,0,0) Consider choices from set A: (0,3,0) leaves triangle (0,2,1) (1,2,0) (1,1,1) (0,2,1) forces a second removal at (2,1,0) [otherwise there is a triangle at (1,2,0) (1,1,1) (2,1,0)] but then none of the choices for third removal work (1,2,0) is symmetrical with (0,2,1) n=4 [math]\overline{c}^\mu_4=9[/math]: The set of all [math](a,b,c)[/math] in [math]\Delta_4[/math] with exactly one of a,b,c =0, has 9 elements and is triangle-free.
(Note that it does contain the equilateral triangle (2,2,0),(2,0,2),(0,2,2), so would not qualify for the generalised version of Fujimura's problem in which [math]r[/math] is allowed to be negative.) Let [math]S\subset \Delta_4[/math] be a set without equilateral triangles. If [math](0,0,4)\in S[/math], there can only be one of [math](0,x,4-x)[/math] and [math](x,0,4-x)[/math] in S for [math]x=1,2,3,4[/math]. Thus there can only be 5 elements in S with [math]a=0[/math] or [math]b=0[/math]. The set of elements with [math]a,b\gt0[/math] is isomorphic to [math]\Delta_2[/math], so S can have at most 4 elements in this set. So [math]|S|\leq 4+5=9[/math]. Similarly if S contains (0,4,0) or (4,0,0). So if [math]|S|\gt9[/math], S doesn't contain any of these. Also, S can't contain all of [math](0,1,3), (0,3,1), (2,1,1)[/math]. Similarly for [math](3,0,1), (1,0,3),(1,2,1)[/math] and [math](1,3,0), (3,1,0), (1,1,2)[/math]. So now we have found 6 elements not in S, but [math]|\Delta_4|=15[/math], so [math]|S|\leq 15-6=9[/math]. n=5 [math]\overline{c}^\mu_5=12[/math]: The set of all (a,b,c) in [math]\Delta_5[/math] with exactly one of a,b,c=0 has 12 elements and doesn't contain any equilateral triangles. Let [math]S\subset \Delta_5[/math] be a set without equilateral triangles. If [math](0,0,5)\in S[/math], there can only be one of (0,x,5-x) and (x,0,5-x) in S for x=1,2,3,4,5. Thus there can only be 6 elements in S with a=0 or b=0. The set of elements with a,b>0 is isomorphic to [math]\Delta_3[/math], so S can have at most 6 elements in this set. So [math]|S|\leq 6+6=12[/math]. Similarly if S contains (0,5,0) or (5,0,0). So if |S| > 12, S doesn't contain any of these. S can only contain 2 points in each of the following equilateral triangles: (3,1,1),(0,4,1),(0,1,4) (4,1,0),(1,4,0),(1,1,3) (4,0,1),(1,3,1),(1,0,4) (1,2,2),(0,3,2),(0,2,3) (3,2,0),(2,3,0),(2,2,1) (3,0,2),(2,1,2),(2,0,3) So now we have found 9 elements not in S, but [math]|\Delta_5|=21[/math], so [math]|S|\leq 21-9=12[/math]. 
General n A lower bound for [math]\overline{c}^\mu_n[/math] is 2n for [math]n \geq 1[/math], obtained by removing (n,0,0), the triangle (n-2,1,1) (0,n-1,1) (0,1,n-1), and all points on the edges of and inside the same triangle. In a similar spirit, we have the lower bound [math]\overline{c}^\mu_{n+1} \geq \overline{c}^\mu_n + 2[/math] for [math]n \geq 1[/math], because we can take an example for [math]\overline{c}^\mu_n[/math] (which cannot be all of [math]\Delta_n[/math]) and add two points on the bottom row, chosen so that the triangle they form has its third vertex outside of the original example. An asymptotically superior lower bound for [math]\overline{c}^\mu_n[/math] is 3(n-1), made of all points in [math]\Delta_n[/math] with exactly one coordinate equal to zero. A trivial upper bound is [math]\overline{c}^\mu_{n+1} \leq \overline{c}^\mu_n + n+2[/math], since deleting the bottom row of an equilateral-triangle-free set gives another equilateral-triangle-free set. We also have the asymptotically superior bound [math]\overline{c}^\mu_{n+2} \leq \overline{c}^\mu_n + \frac{3n+2}{2}[/math], which comes from deleting two bottom rows of a triangle-free set and counting how many vertices are possible in those rows. Another upper bound comes from counting the triangles. There are [math]\binom{n+2}{3}[/math] triangles, and each point belongs to n of them. So you must remove at least (n+2)(n+1)/6 points to remove all triangles, leaving (n+2)(n+1)/3 points as an upper bound for [math]\overline{c}^\mu_n[/math]. Asymptotics The corners theorem tells us that [math]\overline{c}^\mu_n = o(n^2)[/math] as [math]n \to \infty[/math]. By looking at those triples (a,b,c) with a+2b inside a Behrend set, one can obtain the lower bound [math]\overline{c}^\mu_n \geq n^2 \exp(-O(\sqrt{\log n}))[/math].
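The small values claimed above (1, 2, 4, 6, 9 for n = 0,...,4) can be verified by exhaustive search over all subsets of the grid, which is feasible up to n = 4 where the grid has 15 points. A brute-force sketch using the r > 0 convention (no upside-down triangles):

```python
from itertools import product

def delta(n):
    # lattice points of the triangular grid a + b + c = n
    return [(a, b, n - a - b) for a in range(n + 1) for b in range(n + 1 - a)]

def has_triangle(S, n):
    # an equilateral triangle (a+r,b,c), (a,b+r,c), (a,b,c+r) with r > 0
    # is indexed by a base point (a,b,c) with a + b + c = n - r < n
    for a in range(n):
        for b in range(n - a):
            for c in range(n - a - b):
                r = n - (a + b + c)
                if ((a + r, b, c) in S and (a, b + r, c) in S
                        and (a, b, c + r) in S):
                    return True
    return False

def max_triangle_free(n):
    pts = delta(n)
    best = 0
    for mask in range(1 << len(pts)):      # all subsets; fine for n <= 4
        S = {pts[i] for i in range(len(pts)) if (mask >> i) & 1}
        if len(S) > best and not has_triangle(S, n):
            best = len(S)
    return best

print([max_triangle_free(n) for n in range(5)])   # [1, 2, 4, 6, 9]
```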
I have a question of a more mathematical nature on the mathSE (Symmetric Direct Product Distributive?) that received a good answer, but I think an answer more oriented to chemists would be a useful resource here. I'm trying to determine the symmetry of the second overtone band of the degenerate $\Pi_u$ bend of $\ce{CO2}$. I'm told I need to use the formula$$\chi_v(\hat{R})=\frac{1}{2}[\chi(\hat{R})\chi_{v-1}(\hat{R})+\chi(\hat{R}^v)]$$ where $v$ is the number of quanta in the mode, $\hat{R}$ is an operation and $\chi_{v}$ is the character of that operation for the $(v-1)^{\text{th}}$ overtone. I arrive at the correct answer, which is that the second overtone has symmetry $\Pi_u\oplus\Phi_u$. But this formula seemed complicated so I wanted to see if I could get the result just from taking the symmetric direct product of the state three times like so: $$\Pi_u\otimes\Pi_u\otimes\Pi_u=(\Sigma^+_g\oplus\Delta_g)\otimes\Pi_u=2\Pi_u\oplus\Phi_u$$ Following the direct product tables I seem to get this, which has an extra $\Pi_u$. I know this is wrong because the representation should only be quadruply degenerate, corresponding to the 4 different ways of distributing the quanta between the original degenerate modes: $(3,0) (2,1) (1,2) (0,3)$. Why does this approach fail? Does the symmetric direct product not distribute over the direct sums?
This is a perfect storm of notational dissonance between QM and QFT. Your statement I had read that the symmetry is spontaneously broken if $A \left|\psi \right>_n^{(1)}\neq 0 $ and symmetric if $A \left|\psi \right>_n^{(1)}= 0. $ is inapposite and misconstrued—justly paradoxical. I suspect labels on symmetric phase versus SSB might not be productive here. Review of SSQM $$Q=\begin{pmatrix} 0& 0\\ A& 0 \end{pmatrix}, \qquad Q^\dagger=\begin{pmatrix} 0& A^\dagger\\ 0& 0 \end{pmatrix},$$so $$H= Q Q^\dagger+ Q^\dagger Q= \begin{pmatrix} A^\dagger A& 0\\ 0& A A^\dagger \end{pmatrix}\equiv \begin{pmatrix} H_1& 0\\ 0& H_2 \end{pmatrix}.$$Now $$A|\psi_n^{(1)}\rangle = |\psi_{n-1}^{(2)}\rangle, \qquad A^\dagger |\psi_n^{(2)}\rangle = |\psi_{n+1}^{(1)}\rangle, \qquad E^{(1)}_{n+1}= E_n^{(2)} \qquad E_0^{(1)}=0.$$So $\psi^{(2)}_n$ and $\psi^{(1)}_{n+1}$ have the same eigenvalue under H, and they are degenerate pairs-- (unbroken symmetry),$$Q \begin{pmatrix} \psi^{(1)}_{n+1}\\ 0 \end{pmatrix} = \begin{pmatrix} 0\\ \psi^{(2)}_{n} \end{pmatrix} .$$ But the ground state is unique, an unpaired susy-singlet, and non-degenerate:$$Q \begin{pmatrix} \psi^{(1)}_{0}\\ 0 \end{pmatrix} = 0, \qquad A|\psi^{(1)}_0\rangle = 0 ,$$so, if you chose, you might call it SSBroken (but Witten calls it unbroken). A Susy rotation does not couple it with other states. So the bottom of the spectrum, the ground state, lacks the symmetry of the rest of the spectrum. This is a narrow answer to your question, but does not alleviate the perfect storm of dissonance. The QFT paradigm The above stands in sharp contrast to QFT with its infinite d.o.f. To avoid confusion, I'll call the infinitesimal Susy charge here ${\cal Q}$, so that the real correspondent to the finite Q above is, instead, the full group super-transformation, $\exp i\bar\theta {\cal Q}$, for a Grassmann angle θ. 
In QFT, the symmetric (unbroken) phase is characterized by $${\cal Q}|0\rangle= 0, \Longrightarrow \exp (i\bar\theta {\cal Q})|0\rangle=|0\rangle .$$For $[H,{\cal Q}]=0$, eigenstates of the hamiltonian,$$H \phi |0\rangle= E \phi |0\rangle $$are susy-rotated to degenerate ones,$$H(e^{i\bar\theta {\cal Q}} \phi e^{-i\bar \theta {\cal Q}} )|0\rangle = e^{i\bar\theta {\cal Q}} H\phi |0\rangle = E (e^{i\bar\theta {\cal Q}} \phi e^{-i\bar \theta {\cal Q}} )|0\rangle,$$evocative of "what you had read". So, it is here that this is the hallmark of degeneracy of the spectrum, while the vacuum is unique, similarly to above. In the SSB phase, we have the opposite: most states are non-degenerate, by failure of the above argument, but the vacuum is now degenerate. It is not unique:$${\cal Q}|0\rangle\neq 0 \Longrightarrow \qquad |\Omega\rangle= \exp (i\bar\theta {\cal Q})|0\rangle\neq |0\rangle .$$This state is degenerate with the vacuum and roils and bubbles with goldstinos. So, at the end of the day, your question dramatizes the stark difference between QM and QFT. The takeaway may well be that sticking to facts rather than labels might be the sanest option.
Main Page The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. Density Hales-Jewett (DHJ) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (final call) (500-599) TBA (600-699) A reading seminar on density Hales-Jewett (active) A spreadsheet containing the latest lower and upper bounds for [math]c_n[/math] can be found here. Unsolved questions Gowers.462: Incidentally, it occurs to me that we as a collective are doing what I as an individual mathematician do all the time: have an idea that leads to an interesting avenue to explore, get diverted by some temporarily more exciting idea, and forget about the first one. I think we should probably go through the various threads and collect together all the unsolved questions we can find (even if they are vague ones like, “Can an approach of the following kind work?”) and write them up in a single post. 
If this were a more massive collaboration, then we could work on the various questions in parallel, and update the post if they got answered, or reformulated, or if new questions arose. IP-Szemeredi (a weaker problem than DHJ) Solymosi.2: In this note I will try to argue that we should consider a variant of the original problem first. If the removal technique doesn’t work here, then it won’t work in the more difficult setting. If it works, then we have a nice result! Consider the Cartesian product of an IP_d set. (An IP_d set is generated by d numbers by taking all the [math]2^d[/math] possible sums. So, if the d numbers are independent then the size of the IP_d set is [math]2^d[/math]. In the following statements we will suppose that our IP_d sets have size [math]2^d[/math].) Prove that for any [math]c\gt0[/math] there is a [math]d[/math], such that any c-dense subset of the Cartesian product of an IP_d set (it is a two dimensional pointset) has a corner. The statement is true. One can even prove that the dense subset of a Cartesian product contains a square, by using the density HJ for [math]k=4[/math]. (I will sketch the simple proof later.) What is promising here is that one can build a not-very-large tripartite graph where we can try to prove a removal lemma. The vertex sets are the vertical, horizontal, and slope -1 lines having a nonempty intersection with the Cartesian product. Two vertices are connected by an edge if the corresponding lines meet in a point of our c-dense subset. Every point defines a triangle, and if you can find another, non-degenerate, triangle then we are done. This graph is still sparse, but maybe it is well-structured for a removal lemma. Finally, let me prove that there is a square if [math]d[/math] is large enough compared to [math]c[/math]. Every point of the Cartesian product has two coordinates, a 0,1 sequence of length [math]d[/math]. 
It has a one to one mapping to [4]^d; Given a point ((x_1,…,x_d),(y_1,…,y_d)) where x_i,y_j are 0 or 1, it maps to (z_1,…,z_d), where z_i=0 if x_i=y_i=0, z_i=1 if x_i=1 and y_i=0, z_i=2 if x_i=0 and y_i=1, and finally z_i=3 if x_i=y_i=1. Any combinatorial line in [4]^d defines a square in the Cartesian product, so the density HJ implies the statement. Gowers.7: With reference to Jozsef’s comment, if we suppose that the d numbers used to generate the set are indeed independent, then it’s natural to label a typical point of the Cartesian product as (\epsilon,\eta), where each of \epsilon and \eta is a 01-sequence of length d. Then a corner is a triple of the form (\epsilon,\eta), (\epsilon,\eta+\delta), (\epsilon+\delta,\eta), where \delta is a \{-1,0,1\}-valued sequence of length d with the property that both \epsilon+\delta and \eta+\delta are 01-sequences. So the question is whether corners exist in every dense subset of the original Cartesian product. This is simpler than the density Hales-Jewett problem in at least one respect: it involves 01-sequences rather than 012-sequences. But that simplicity may be slightly misleading because we are looking for corners in the Cartesian product. A possible disadvantage is that in this formulation we lose the symmetry of the corners: the horizontal and vertical lines will intersect this set in a different way from how the lines of slope -1 do. I feel that this is a promising avenue to explore, but I would also like a little more justification of the suggestion that this variant is likely to be simpler. Gowers.22: A slight variant of the problem you propose is this. Let’s take as our ground set the set of all pairs (U,V) of subsets of \null [n], and let’s take as our definition of a corner a triple of the form (U,V), (U\cup D,V), (U,V\cup D), where both the unions must be disjoint unions. This is asking for more than you asked for because I insist that the difference D is positive, so to speak. 
It seems to be a nice combination of Sperner’s theorem and the usual corners result. But perhaps it would be more sensible not to insist on that positivity and instead ask for a triple of the form (U,V), ((U\cup D)\setminus C,V), (U, (V\cup D)\setminus C), where D is disjoint from both U and V and C is contained in both U and V. That is your original problem, I think. I think I now understand better why your problem could be a good toy problem to look at first. Let’s quickly work out what triangle-removal statement would be needed to solve it. (You’ve already done that, so I just want to reformulate it in set-theoretic language, which I find easier to understand.) We let all of X, Y and Z equal the power set of \null [n]. We join U\in X to V\in Y if (U,V)\in A. Ah, I see now that there’s a problem with what I’m suggesting, which is that in the normal corners problem we say that (x,y+d) and (x+d,y) lie in a line because both points have the same coordinate sum. When should we say that (U,V\cup D) and (U\cup D,V) lie in a line? It looks to me as though we have to treat the sets as 01-sequences and take the sum again. So it’s not really a set-theoretic reformulation after all. O'Donnell.35: Just to confirm I have the question right… There is a dense subset A of {0,1}^n x {0,1}^n. Is it true that it must contain three nonidentical strings (x,x’), (y,y’), (z,z’) such that for each i = 1…n, the 6 bits

[ x_i x'_i ]   [ 0 0 ]  [ 0 0 ]  [ 0 1 ]  [ 1 0 ]  [ 1 1 ]  [ 1 1 ]
[ y_i y'_i ] = [ 0 0 ], [ 0 1 ], [ 0 1 ], [ 1 0 ], [ 1 0 ], [ 1 1 ]
[ z_i z'_i ]   [ 0 0 ]  [ 1 0 ]  [ 0 1 ]  [ 1 0 ]  [ 0 1 ]  [ 1 1 ]

are equal to one of these six column-triples? McCutcheon.469: IP Roth: Just to be clear on the formulation I had in mind (with apologies for the unprocessed code): for every $\delta>0$ there is an $n$ such that any $E\subset [n]^{[n]}\times [n]^{[n]}$ having relative density at least $\delta$ contains a corner of the form $\{a, a+(\sum_{i\in \alpha} e_i ,0),a+(0, \sum_{i\in \alpha} e_i)\}$. 
Here $(e_i)$ is the coordinate basis for $[n]^{[n]}$, i.e. $e_i(j)=\delta_{ij}$. Presumably, this should be (perhaps much) simpler than DHJ, k=3. High-dimensional Sperner Kalai.29: There is an analogue for Sperner but with high dimensional combinatorial spaces instead of "lines", but I do not remember the details (Kleitman(?) Katona(?) those are the usual suspects.) Fourier approach Kalai.29: A sort of generic attack one can try with Sperner is to look at f=1_A and express using the Fourier expansion of f the expression \int f(x)f(y)1_{x<y} where x<y is the partial order (=containment) for 0-1 vectors. Then one may hope that if f does not have a large Fourier coefficient then the expression above is similar to what we get when A is random, and otherwise we can raise the density for subspaces. (OK, you can try it directly for the k=3 density HJ problem too but Sperner would be easier;) This is not unrelated to the regularity philosophy. Gowers.31: Gil, a quick remark about Fourier expansions and the k=3 case. I want to explain why I got stuck several years ago when I was trying to develop some kind of Fourier approach. Maybe with your deep knowledge of this kind of thing you can get me unstuck again. The problem was that the natural Fourier basis in \null [3]^n was the basis you get by thinking of \null [3]^n as the group \mathbb{Z}_3^n. And if that’s what you do, then there appear to be examples that do not behave quasirandomly, but which do not have large Fourier coefficients either. For example, suppose that n is a multiple of 7, and you look at the set A of all sequences where the numbers of 1s, 2s and 3s are all multiples of 7. If two such sequences lie in a combinatorial line, then the set of variable coordinates for that line must have cardinality that’s a multiple of 7, from which it follows that the third point automatically lies in the line. So this set A has too many combinatorial lines. 
But I’m fairly sure — perhaps you can confirm this — that A has no large Fourier coefficient. You can use this idea to produce lots more examples. Obviously you can replace 7 by some other small number. But you can also pick some arbitrary subset W of \null[n] and just ask that the numbers of 0s, 1s and 2s inside W are multiples of 7. DHJ for dense subsets of a random set Tao.18: A sufficiently good Varnavides type theorem for DHJ may have a separate application from the one in this project, namely to obtain a “relative” DHJ for dense subsets of a sufficiently pseudorandom subset of {}[3]^n, much as I did with Ben Green for the primes (and which now has a significantly simpler proof by Gowers and by Reingold-Trevisan-Tulsiani-Vadhan). There are other obstacles though to that task (e.g. understanding the analogue of “dual functions” for Hales-Jewett), and so this is probably a bit off-topic. Bibliography H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119.
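The quantity c_n defined at the top of this page can be checked directly from the definition for very small n. A brute-force sketch, feasible only for n ≤ 2 since it enumerates all 2^(3^n) subsets (the known values are c_0 = 1, c_1 = 2, c_2 = 6):

```python
from itertools import product

def c(n):
    # size of the largest line-free subset of [3]^n, by exhaustive search
    pts = list(product("123", repeat=n))
    # each combinatorial line comes from a template over {1,2,3,x}
    # containing at least one wildcard x
    lines = [[tuple(d if ch == "x" else ch for ch in tpl) for d in "123"]
             for tpl in product("123x", repeat=n) if "x" in tpl]
    best = 0
    for mask in range(1 << len(pts)):
        S = {pts[i] for i in range(len(pts)) if (mask >> i) & 1}
        if len(S) > best and not any(all(p in S for p in line)
                                     for line in lines):
            best = len(S)
    return best

print([c(n) for n in range(3)])   # [1, 2, 6]
```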
The Annals of Statistics Ann. Statist. Volume 5, Number 4 (1977), 646-657. Upper Bounds on Asymptotic Variances of $M$-Estimators of Location Abstract If $X_1, \cdots, X_n$ is a random sample from $F(x - \theta)$, where $F$ is an unknown member of a specified class $\mathscr{F}$ of approximately normal symmetric distributions, then an $M$-estimator of the unknown location parameter $\theta$ is obtained by solving the equation $\sum^n_{i=1} \psi(X_i - \hat{\theta}_n) = 0$ for $\hat{\theta}_n$. A suitable measure of the robustness of the $M$-estimator is $\sup \{V(\psi, F): F \in \mathscr{F}\}$, where $V(\psi, F) = \int \psi^2 dF/(\int \psi' dF)^2$ is (under regularity conditions) the asymptotic variance of $n^{\frac{1}{2}}(\hat{\theta}_n - \theta)$. A necessary and sufficient condition for $F_0$ in $\mathscr{F}$ to maximize $V(\psi, F)$ is obtained, and the result is specialized to evaluate $\sup \{V(\psi, F):F \in \mathscr{F}\}$ when the model for $\mathscr{F}$ is the gross errors model or the Kolmogorov model. Article information Source Ann. Statist., Volume 5, Number 4 (1977), 646-657. Dates First available in Project Euclid: 12 April 2007 Permanent link to this document https://projecteuclid.org/euclid.aos/1176343889 Digital Object Identifier doi:10.1214/aos/1176343889 Mathematical Reviews number (MathSciNet) MR443197 Zentralblatt MATH identifier 0381.62033 JSTOR links.jstor.org Citation Collins, John R. Upper Bounds on Asymptotic Variances of $M$-Estimators of Location. Ann. Statist. 5 (1977), no. 4, 646--657. doi:10.1214/aos/1176343889. https://projecteuclid.org/euclid.aos/1176343889
Could anyone help with this problem? Thanks. A joint density function is given as follows: $$f(x,y) =\begin{cases} 0.125\cdot (x+y+1) & \text{for } -1<x<1,\ 0<y<2 \\ 0 & \text{otherwise} \end{cases}$$ Calculate $P(X>Y)$. Just recall what the density function represents: the probability of an event $A$ is the integral of the density function over $A$. So you have to integrate the function over the set of points $A= \{(x,y) \mid x>y\}$. So $x$ can be any number in $[-1, 1]$, and $y$ has to be smaller than $x$. Hence, compute $\int\limits_{-1}^{1} \int\limits_{0}^{x} f(x,y) \, dy \, dx$. As the inner integral vanishes whenever $x$ is negative (the density is zero for $y<0$), this is the same as $\int\limits_{0}^{1} \int\limits_{0}^{x} f(x,y) \, dy \, dx$. You can easily compute this.
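The suggested integral evaluates to $\tfrac18\int_0^1\left(\tfrac32 x^2+x\right)dx = \tfrac18$, which can be cross-checked numerically with a midpoint Riemann sum over the support:

```python
# Midpoint Riemann sum of f(x, y) = 0.125 * (x + y + 1) over the part of
# the support (-1, 1) x (0, 2) where x > y.
n = 400
dx = 2.0 / n                      # x step on (-1, 1)
dy = 2.0 / n                      # y step on (0, 2)
total = 0.0
for i in range(n):
    x = -1.0 + (i + 0.5) * dx
    for j in range(n):
        y = (j + 0.5) * dy
        if x > y:
            total += 0.125 * (x + y + 1) * dx * dy
print(total)                      # close to the exact value 1/8 = 0.125
```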
Say $$\mathcal{C'}\to \mathcal{C}\leftarrow \mathcal{D}$$ is a diagram of model categories and (e.g. Left) Quillen functors. I want to write down a (hopefully simple) model category $\mathcal{D}'$, or at least a category with weak equivalences, such that its $\infty$-categorical localization is the homotopy limit of the localizations of this diagram in the $(\infty,1)$ category of $(\infty, 1)$ categories. Is there a nice way to do this? I'm willing to impose any reasonable niceness conditions on the categories in the diagram. Philippe Gaucher is right. This problem was solved by Julie Bergner, here. I recently asked a question that summarized some of her work on this problem. The point is that the homotopy limit of your diagram is a category $M$ whose objects are 5-tuples $(x_1,x_2,x_3,u,v)$ with $x_1 \in C'$, $x_2 \in D$, $x_3\in C$, and $F(x_1) \stackrel{u}{\to} x_3 \stackrel{v}{\gets} G(x_2)$ in $C$, where $F$ and $G$ are the two functors in your diagram. The morphisms in this category of 5-tuples are obvious. This category $M$ can be given a model structure where the weak equivalences and cofibrations are levelwise (on each $x_i$), and that model structure can be localized if desired to force $u$ and $v$ to be weak equivalences in the local objects of $M$. Bergner then proves $M$ has the correct homotopy type, meaning that, upon passage to complete Segal spaces (i.e. $(\infty,1)$-categories), it becomes the actual homotopy pullback of the diagram. She has to assume the model categories she starts with are combinatorial, but this seems a standard assumption now from the $\infty$-categorical perspective (i.e. assuming presentability). Bergner uses a right Bousfield localization, so you need to assume right properness, or pass to right semi-model categories like Barwick does in this paper. The difference between a semi-model structure and a full model structure is invisible to the underlying $(\infty,1)$-category. 
EDIT (in answer to comments): Bergner uses the notation $L_DX$ for the category I called $M$ above. It's the lax homotopy limit. The homotopy limit is the full subcategory where the maps $u$ and $v$ have been forced to be weak equivalences. She does not claim it has a model structure in general, but it does in some special cases, e.g. if $L_DX$ is right proper and combinatorial. This occurs if each of your categories $C,C',D$ is combinatorial and has all objects fibrant, for example. This assumption can be avoided (and Bergner points this out, right after Theorem 3.2 in the linked paper) by using Barwick's method of right Bousfield localization without right properness. The result is a right semi-model structure on $Lim_DX$, and such categories have associated $(\infty,1)$-categories just like model categories do. And Bergner proves that the associated $(\infty,1)$-category is the homotopy limit in the category of $(\infty,1)$-categories, as you'd expect (working in the model of Complete Segal Spaces).
We are given a set of grayscale image patches obtained from images after edge detection. Each patch is 10x10 pixels with intensity varying between 0 and 255 for each pixel. This set may contain very few (maybe one or two) or a large number of patches. We want to represent the set by a single 10x10 pixel patch or “feature” such that the feature captures the invariance or common properties of the set. Intuitively speaking, the feature should be the intersection of the edges in the image patches in the set. For example, if all the patches contain a vertical edge at a fixed position and also contain other edges in different orientations and positions, then our feature should contain only the vertical edge at the fixed position and no other edge. Unfortunately, the operation of intersection is not well-defined for images. How can we approach computing such a feature for a set of image patches? I may be wrong if I have not understood the question! I am trying to give a rather elementary introduction here; I can refine things and be more rigorous as suited. What you are looking for is, out of 100 (or 1000) patches, the patch that is most representative of them all. For simplicity, suppose the size of a patch is 1x1, so it is just a scalar. In this case, you can just find a simple average over the $n$ patches, $RepScalar = Avg = \frac{1}{n}\sum_i P_i$. You can also apply the median rather than averaging if appropriate. A more generalized way is to treat this as a random variable and find the expected value $E(P) = \sum_x PDF(x) \cdot x$. I don't know if you are really conversant with this, so I am leaving you with a tutorial on it: http://www.dartmouth.edu/~chance/teaching_aids/books_articles/probability_book/Chapter6.pdf Now in your case $P$ is a vector instead of a scalar, so the averaging becomes $$ RepPatch[i] = \frac{1}{n}\sum_{k} P_k[i] $$ where $k$ is a patch number and $i \in \{ 1, \dots, 100 \}$ for a $10 \times 10$ patch size. If you know the
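The pixel-wise averaging (and the more robust median) described above can be sketched in a few lines of NumPy. The random patch data, the threshold 128, and the 0.9 agreement ratio below are made-up values for illustration, not part of the question:

```python
import numpy as np

# A hypothetical stack of 50 edge-detected 10x10 patches (values 0..255).
rng = np.random.default_rng(0)
patches = rng.integers(0, 256, size=(50, 10, 10)).astype(float)

# Pixel-wise mean: the "representative patch" under the averaging idea above.
rep_mean = patches.mean(axis=0)

# Pixel-wise median: more robust when a few patches are outliers.
rep_median = np.median(patches, axis=0)

# A rough "intersection of edges": keep only pixels that are edge-like
# (above a brightness threshold) in almost every patch, zero out the rest.
threshold = 128
agreement = (patches > threshold).mean(axis=0)   # fraction of patches per pixel
rep_intersection = np.where(agreement > 0.9, rep_mean, 0.0)

print(rep_mean.shape, rep_median.shape, rep_intersection.shape)
```

The last step is one way to make the informal "intersection of edges" concrete: a pixel survives only if it is consistently bright across the set.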
Modulo Multiplication on Reduced Residue System is Closed Theorem Let $m \in \Z_{> 0}$ be a (strictly) positive integer. Let $\Z'_m$ be the reduced residue system modulo $m$: $\Z'_m = \set {\eqclass k m \in \Z_m: k \perp m}$ Then $\Z'_m$ is closed under modulo multiplication, in the sense that: $\forall a, b \in \Z'_m: a \times_m b \in \Z'_m$ Proof Let $\eqclass r m, \eqclass s m \in \Z'_m$. Then by definition of reduced residue system: $r, s \perp m$ By Bézout's Lemma: $\exists u_1, v_1 \in \Z: u_1 r + v_1 m = 1$ $\exists u_2, v_2 \in \Z: u_2 s + v_2 m = 1$ Then: $$\begin{aligned} 1 &= \paren {u_1 r + v_1 m} \paren {u_2 s + v_2 m} \\ &= u_1 u_2 r s + v_1 u_2 s m + u_1 v_2 r m + v_1 v_2 m^2 \\ &= \paren {u_1 u_2} r s + \paren {v_1 u_2 s + u_1 v_2 r + v_1 v_2 m} m \end{aligned}$$ Hence, by Bézout's Lemma again, $r s \perp m$, so $\eqclass r m \times_m \eqclass s m = \eqclass {r s} m \in \Z'_m$. That is, $\struct {\Z'_m, \times_m}$ is closed. $\blacksquare$ Sources 1964: Walter Ledermann: Introduction to the Theory of Finite Groups (5th ed.) ... (previous) ... (next): $\S 6$: Examples of Finite Groups: $\text{(iii)}$: $(1.31)$
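The theorem is also easy to check by brute force. A short Python sketch verifying closure of $\Z'_m$ under multiplication modulo $m$ for the first few moduli:

```python
from math import gcd

def reduced_residues(m):
    """The reduced residue system Z'_m: classes k with gcd(k, m) = 1."""
    return {k for k in range(m) if gcd(k, m) == 1}

# Closure check: for every a, b in Z'_m, the product a*b mod m is again in Z'_m.
for m in range(2, 50):
    zm = reduced_residues(m)
    assert all((a * b) % m in zm for a in zm for b in zm)

print("closure verified for m = 2..49")
```

This is only a finite spot check, of course; the Bézout argument above is what proves the statement for every $m$.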
Electronic Journal of Statistics Electron. J. Statist. Volume 10, Number 2 (2016), 3894-3944. Robustness in sparse high-dimensional linear models: Relative efficiency and robust approximate message passing Abstract Understanding efficiency in high dimensional linear models is a longstanding problem of interest. Classical work with smaller dimensional problems dating back to Huber and Bickel has illustrated the clear benefits of efficient loss functions. When the number of parameters $p$ is of the same order as the sample size $n$, $p\approx n$, an efficiency pattern different from the one of Huber was recently established. In this work, we study relative efficiency of sparse linear models with $p\gg n$. In the interest of deriving the asymptotic mean squared error for $l_{1}$ regularized M-estimators, we propose a novel, robust and sparse approximate message passing algorithm (RAMP), that is adaptive to the error distribution. Our algorithm includes many non-quadratic and non-differentiable loss functions. We derive its asymptotic mean squared error and show its convergence, while allowing $p,n,s\to \infty$, with $n/p\in (0,1)$ and $n/s\in (1,\infty)$. We identify new patterns of relative efficiency regarding $l_{1}$ penalized $M$-estimators. We show that the classical information bound is no longer reachable, even for light-tailed error distributions. Moreover, we show new breakdown points regarding the asymptotic mean squared error. The asymptotic mean squared error of the $l_{1}$ penalized least absolute deviation estimator (P-LAD) breaks down at a critical ratio of the number of observations per number of sparse parameters in the case of light-tailed distributions; whereas, in the case of heavy-tailed distributions, the asymptotic mean squared error breaks down at a critical ratio of the optimal tuning parameter of P-LAD to the optimal tuning parameter of the $l_{1}$ penalized least squares estimator. Article information Source Electron. J.
Statist., Volume 10, Number 2 (2016), 3894-3944. Dates Received: August 2015 First available in Project Euclid: 13 December 2016 Permanent link to this document https://projecteuclid.org/euclid.ejs/1481598073 Digital Object Identifier doi:10.1214/16-EJS1212 Mathematical Reviews number (MathSciNet) MR3581957 Zentralblatt MATH identifier 1357.62215 Citation Bradic, Jelena. Robustness in sparse high-dimensional linear models: Relative efficiency and robust approximate message passing. Electron. J. Statist. 10 (2016), no. 2, 3894--3944. doi:10.1214/16-EJS1212. https://projecteuclid.org/euclid.ejs/1481598073
You not only can, but also must treat symbols for units by the ordinary rules of algebra, since unit symbols are mathematical entities and not abbreviations. The value of a quantity is expressed as the product of a number and a unit. That number is called the numerical value of the quantity expressed in this unit. This relation may be expressed in the form $$Q = \left\{ Q \right\} \cdot \left[ Q \right]$$ where $Q$ is the symbol for the quantity, $\left[ Q \right]$ is the symbol for the unit, and $\left\{ Q \right\}$ is the symbol for the numerical value of the quantity $Q$ expressed in the unit $\left[ Q \right]$. For example, the mass of a sample is $$m = 100\ \mathrm g$$ Here, $m$ is the symbol for the quantity mass, $\mathrm g$ is the symbol for the unit gram (a unit of mass), and $100$ is the numerical value of the mass expressed in grams. Thus, the value of the mass is $100\ \mathrm g$. It is important to distinguish between the quantity $Q$ itself and the numerical value $\left\{ Q \right\}$ of the quantity expressed in a particular unit $\left[ Q \right]$. The value of a particular quantity $Q$ is independent of the choice of unit $\left[ Q \right]$, although the numerical value $\left\{ Q \right\}$ will be different for different units. For example, changing the unit for the mass in the previous example from the gram to the kilogram, which is $10^3$ times the gram, leads to a numerical value which is $10^{-3}$ the numerical value of the mass expressed in grams, whereas the value of the mass stays the same. $$m = 100\ \mathrm g = 0.100\ \mathrm{kg}$$ Since symbols for units are mathematical entities, both the numerical value and the unit may be treated by the ordinary rules of algebra. For example, the equation $m = 100\ \mathrm g$ may equally be written $$m/\mathrm g = 100$$ It is often convenient to label the axes of a graph in this way, so that the tick marks are labelled only with numbers. 
The quotient of a quantity and a unit may also be used in this way for the heading of a column in a table, so that the entries in the table are all simply numbers. Performing the mathematical operations of quantities is called quantity calculus. Quantities are multiplied and divided by one another according to the rules of algebra, resulting in new quantities. The quotient of two quantities, $Q_1$ and $Q_2$, satisfies the relation$$\begin{align}\frac{Q_1}{Q_2} &= \frac{ \left\{ Q_1 \right\} \cdot \left[ Q_1 \right] }{ \left\{ Q_2 \right\} \cdot \left[ Q_2 \right] } \\[6pt]&= \frac{ \left\{ Q_1 \right\} }{ \left\{ Q_2 \right\} } \cdot \frac{ \left[ Q_1 \right] }{ \left[ Q_2 \right] }\end{align}$$Thus, the quotient $\left\{ Q_1 \right\}/\left\{ Q_2 \right\}$ is the numerical value $\left\{ Q_1/Q_2 \right\}$ of the quantity $Q_1/Q_2$, and the quotient $\left[ Q_1 \right]/\left[ Q_2 \right]$ is the unit $\left[ Q_1/Q_2 \right]$ of the quantity $Q_1/Q_2$. For example, assuming a volume of $V = 0.127\ \mathrm{l}$, the density $\rho$ of the above-mentioned sample is$$\begin{align}\rho &= \frac{m}{V} \\[6pt]&= \frac{ 0.100\ \mathrm{kg} }{ 0.127\ \mathrm{l} } \\[6pt]&= \frac{ 0.100 }{ 0.127 } \cdot \frac{ \mathrm{kg} }{ \mathrm{l} } \\[6pt]&= 0.79\ \mathrm{kg/l}\end{align}$$ Similarly, the product of two quantities, $Q_1$ and $Q_2$, satisfies the relation$$\begin{align}Q_1 \cdot Q_2 &= \left( \left\{ Q_1 \right\} \cdot \left[ Q_1 \right] \right) \cdot \left( \left\{ Q_2 \right\} \cdot \left[ Q_2 \right] \right) \\[6pt]&= \left\{ Q_1 \right\}\left\{ Q_2 \right\} \cdot \left[ Q_1 \right] \left[ Q_2 \right]\end{align}$$ Thus, the product $\left\{ Q_1 \right\}\left\{ Q_2 \right\}$ is the numerical value $\left\{ Q_1Q_2 \right\}$ of the quantity $Q_1Q_2$, and the product $\left[ Q_1 \right]\left[ Q_2 \right]$ is the unit $\left[ Q_1Q_2 \right]$ of the quantity $Q_1Q_2$. 
For example, considering the standard acceleration of free fall $g_\mathrm n = 9.80665\ \mathrm{m/s^2}$, the weight $F_\mathrm g$ of the above-mentioned sample is$$\begin{align}F_\mathrm g &= m \cdot g_\mathrm n \\[6pt]&= 0.100\ \mathrm{kg} \times 9.80665\ \frac{\mathrm m}{\mathrm{s^2}} \\[6pt]&= 0.100 \times 9.80665 \times \mathrm{kg} \cdot \frac{\mathrm m}{\mathrm{s^2}} \\[6pt]&= 0.98\ \frac{\mathrm {kg\ m}}{\mathrm{s^2}} \\[6pt]&= 0.98\ \mathrm{N}\end{align}$$ In forming products and quotients of unit symbols, the normal rules of algebraic multiplication or division apply. For example, the expansion work $W$ at constant pressure $p = 100\,000\ \mathrm{Pa} = 100\,000\ \mathrm{kg\ m^{-1}\ s^{-2}}$ associated with a volume change of $\Delta V = 0.5\ \mathrm{m^3}$ is $$\begin{align}W &= p \cdot \Delta V \\[6pt]&= 100\,000\ \frac{\mathrm{kg}}{\mathrm{m\ s^{2}}} \times 0.5\ \mathrm{m^3}\\[6pt]&= 50\,000\ \frac{\mathrm{kg}}{\mathrm{m\ s^{2}}}\cdot\mathrm{m^3}\\[6pt]&= 50\,000\ \frac{\mathrm{kg\ m^2}}{\mathrm{s^{2}}}\\[6pt]&= 50\,000\ \mathrm J\end{align}$$ Two or more quantities cannot be added or subtracted unless they belong to the same kind. The expression shall be written as the sum or difference of expressions for the quantities$$l=12\ \mathrm m-7\ \mathrm m$$or parentheses shall be used to combine the numerical values, placing the common unit symbol after the complete numerical value$$l=\left(12-7\right)\ \mathrm m$$but it is not permissible to write$$l=12-7\ \mathrm m\quad\color{red}{\small\text{(wrong!)}}$$For the same reason, quantities on each side of an equal sign in an equation must be of the same kind$$\begin{align}m_\text{total} &= m_1+m_2 \\1.8\ \mathrm{kg} &= 1.5\ \mathrm{kg}+0.3\ \mathrm{kg}\end{align}$$However, quantities of the same kind do not necessarily have the same unit.$$250\ \mathrm g = 0.250\ \mathrm{kg}$$$$10\ \mathrm{m/s} = 36\ \mathrm{km/h}$$Anyway, quantities on each side of an equal sign in an equation must not be of different kinds. 
$$1\ \mathrm{mol} = 22.414\ \mathrm l\quad\color{red}{\small\text{(wrong!)}}$$
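The quantity calculus described above can be mimicked directly in code: represent a quantity as a numerical value together with integer exponents of base units, and let multiplication and division act on both parts, exactly as in the relations $Q_1 Q_2 = \{Q_1\}\{Q_2\} \cdot [Q_1][Q_2]$ above. The small class below is invented for illustration (a real program would use a units library):

```python
class Quantity:
    """A value {Q} together with a unit [Q], stored as base-unit exponents,
    so Q = {Q} . [Q] and the algebra of units works out mechanically."""
    def __init__(self, value, **units):
        self.value = value
        self.units = {u: e for u, e in units.items() if e != 0}

    def __mul__(self, other):
        # {Q1}{Q2} for the numerical parts, [Q1][Q2] for the units.
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) + e
        return Quantity(self.value * other.value, **units)

    def __truediv__(self, other):
        # {Q1}/{Q2} for the numerical parts, [Q1]/[Q2] for the units.
        units = dict(self.units)
        for u, e in other.units.items():
            units[u] = units.get(u, 0) - e
        return Quantity(self.value / other.value, **units)

    def __repr__(self):
        unit = " ".join(f"{u}^{e}" if e != 1 else u
                        for u, e in sorted(self.units.items()))
        return f"{self.value} {unit}".strip()

m = Quantity(0.100, kg=1)           # the mass from the example above
g_n = Quantity(9.80665, m=1, s=-2)  # standard acceleration of free fall
F = m * g_n                         # weight in kg m / s^2, i.e. newtons
print(F)
```

The weight comes out with the derived unit $\mathrm{kg\ m\ s^{-2}}$, mirroring the hand calculation of $F_\mathrm g$ above.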
Short answer I think the formula for the expected successes is this: \begin{align}E &= n \cdot \frac{d - t + 1}{e-1}, &\text{where } & 1 ≤ t ≤ e ≤ d\end{align} While the variance could be this (not tested): \begin{align} V = n \cdot \left(\frac{d-t+1}{d-1} - \frac{(e-t)^2-(d-e+1)^2}{(d-1)^2}\right)\end{align}Here is what all the variables mean: \$d\$ ... number of sides a single die has (in Shadowrun \$d = 6\$ - we roll plain old six-sided dice) \$n\$ ... number of such dice in the pool (usually \$n = Attribute + Skill\$ in Shadowrun) \$e\$ ... minimum roll for a die to explode (\$e = 6\$ in Shadowrun - only the 6 explodes) \$t\$ ... minimum roll for a success (\$t = 5\$ in Shadowrun - 5 and 6 are successes) \$h\$ ... number of hits, i.e. dice with a result \$≥t\$ in the roll (not needed here) Knowing the average spread (from the variance) is nice too, because you'll also want to know if it is still a frequent occurrence to get, I don't know, 12 successes on a roll of just 16 dice, or if 8 hits is already very unlikely. I.e. with a lower explosion threshold, higher hit counts become more likely. However, the expectation value might be very similar to that of a lower hit-threshold \$t\$ at higher explosion-threshold \$e\$. The Math behind Exploding on 6 only: If you want formulae, I thought I might give a brief summary of my question about exploding die pools and its answers.
You can show the formulae below to be true for probabilities of exactly \$h\$ hits, the expectation values of hits \$E\$ and their variances \$V\$: \begin{align}p^\text{non-exp}_{d,n,t,h} &= \binom{n}{h}\left(\frac{d-t+1}{d}\right)^h\left(1-\frac{d-t+1}{d}\right)^{n-h}\\E^\text{non-exp}_{d,n,t} &= n\ \frac{d-t+1}{d}\\V^\text{non-exp}_{d,n,t} &= n\ \frac{(t-1)(d-t+1)}{d^2}\\%p^\text{exp}_{d,n,t,h} &= \frac{(t-1)^n}{d^{n+h}} \sum_{k=0}^{\min(h,n)} \binom{n}{k}\binom{n+h-k-1}{h-k}\left[\frac{d(d-t)}{t-1}\right]^k\\E^\text{exp}_{d,n,t} &= n\ \frac{d-t+1}{d-1}\\V^\text{exp}_{d,n,t} &= n\ \frac{t\,(d-t+1)}{(d-1)^2}\\\end{align} The ideas for proofs can be found on math stackexchange. Now this assumes, that dice only explode at the maximum roll of 6 in your case. So it can't tell you anything about rolls where dice explode e.g. on 5 and 6. Except, that it stands to reason that a roll of a six-sided die where 1 and 2 are no successes, 3 and 4 are successes without re-rolls and 5 and 6 are successes with explosion is equal to a roll of a three-sided die where 1 is not a success, 2 is a success without re-roll and 3 is an exploding success. I've put together a small web-page (useful for Shadowrun or the oWoD) for this and tested it with a simulation: Arbitary explosion thresholds: The formulae should be fairly easy to modify for arbitrary explosion thresholds with the same reasoning used in my link. Let's call the explosion threshold \$e\$. So if the roll explodes on 5 and 6, then \$e = 5\$ in this case (for Shadowrun we'd have \$e = d = 6\$). The expectation value \$E_1\$ of a single die has to fulfill this equation: $$ E_1 = 0 \cdot \frac{t-1}{d} + 1 \cdot \frac{e-t}{d} + (E_1+1) \cdot \frac{d-e+1}{d}$$ Zero successes with a probability \$\frac{t-1}{d}\$, one success and no explosion with a probability of \$\frac{e-t}{d}\$, and in the case of an exploding die we have a probability of \$\frac{d-e+1}{d}\$ to get \$E_1\$ more successes on top of the one just rolled. This can be solved for \$E_1\$.
Now the expectation value for \$n\$ dice is just \$n\$ times that for one die (\$E = n E_1\$): \begin{align}E &= n \cdot \frac{d - t + 1}{e-1}, &\text{where }& 1 ≤ t ≤ e ≤ d\end{align} For \$e = d\$ this reduces to the tested formula \$n\,\frac{d-t+1}{d-1}\$ above. Note, that while the formulae for exploding on the highest value are thoroughly tested, I did not test the above formula.
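As a sanity check on the thoroughly tested case (explosion only on the maximum face, \$e = d\$), here is a small Monte Carlo simulation. The function accepts other thresholds too, but only the \$e = d\$ expectation \$n\,\frac{d-t+1}{d-1}\$ is verified here; the trial count is an arbitrary choice:

```python
import random

def roll_pool(n, d=6, t=5, e=6, rng=random):
    """Count successes for n exploding dice: a die shows a success on
    t..d and is rerolled (explodes) as long as it shows e..d."""
    hits = 0
    for _ in range(n):
        while True:
            r = rng.randint(1, d)
            if r >= t:
                hits += 1
            if r < e:       # no explosion: this die is done
                break
    return hits

random.seed(1)
n, d, t, e = 12, 6, 5, 6          # Shadowrun: 12 dice, 5+ hits, 6 explodes
trials = 20000
avg = sum(roll_pool(n, d, t, e) for _ in range(trials)) / trials
expected = n * (d - t + 1) / (d - 1)   # 12 * 2 / 5 = 4.8
print(avg, expected)
```

With 20000 trials the sample mean should sit within a few hundredths of the predicted 4.8 hits.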
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y .$$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \quad \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \quad \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
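Here is one concrete toy instance of Puzzle 148, sketched in Python: the schema is the free category on a single arrow Emp → Dept, each functor is a pair of tables plus a "department of" function, and the natural isomorphism is a pair of bijections making the naturality square commute. All the names are invented for the example:

```python
# Functor F: two sets and a function F(dept) between them.
F_emp = {"alice", "bob"}
F_dept = {"math", "cs"}
F_dept_of = {"alice": "math", "bob": "cs"}

# Functor G: the "same" data under different identifiers.
G_emp = {"e1", "e2"}
G_dept = {"d-math", "d-cs"}
G_dept_of = {"e1": "d-math", "e2": "d-cs"}

# A candidate natural isomorphism: one bijection per object of the schema.
alpha_emp = {"alice": "e1", "bob": "e2"}
alpha_dept = {"math": "d-math", "cs": "d-cs"}

def is_bijection(f, src, tgt):
    """f is a bijection src -> tgt (total, injective, surjective)."""
    return (set(f) == src and set(f.values()) == tgt
            and len(set(f.values())) == len(f))

# Naturality square for the one arrow:
#   alpha_dept ∘ F(dept) = G(dept) ∘ alpha_emp  on every employee.
natural = all(alpha_dept[F_dept_of[x]] == G_dept_of[alpha_emp[x]]
              for x in F_emp)

print(is_bijection(alpha_emp, F_emp, G_emp),
      is_bijection(alpha_dept, F_dept, G_dept),
      natural)  # True True True
```

Since each component is a bijection (an isomorphism in Set) and the square commutes, this α is a natural isomorphism: the two databases differ only in their choice of identifiers.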
On the shape of planar Brownian paths We establish a formula describing the shape of the convex hull of sample paths in the case of planar Brownian motion: viz. the average number of edges joining paths' points separated by a time-lapse $\Delta \tau \in [\Delta \tau_1, \Delta \tau_2]$ is equal to $2\log (\Delta \tau_2 / \Delta \tau_1)$, regardless of the total duration $T$ of the motion. The formula exhibits invariance when the time scale is multiplied by any factor. Apart from its theoretical importance, our result provides new insights regarding the shape of two-dimensional objects modelled by stochastic processes' sample paths (e.g. polymer chains): in particular, for a total time (or parameter) duration $T$, the average number of edges on the convex hull ("cut off" to discard edges joining points separated by a time-lapse shorter than some $\Delta \tau$ much smaller than $T$) will be given by $2 \log (T / \Delta \tau)$. Thus it will only grow logarithmically, rather than at some higher pace.
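The statement can be probed numerically: simulate a discrete approximation of planar Brownian motion, take the convex hull of the path, and count hull edges whose endpoint indices differ by a time-lapse in $[\Delta\tau_1, \Delta\tau_2]$. A rough sketch in plain Python (the step counts and the window $[100, 1000]$ are arbitrary choices, and the discrete walk only approximates the continuum formula):

```python
import math, random

random.seed(2)

def convex_hull_indices(pts):
    """Andrew's monotone chain; returns indices of hull vertices in order."""
    order = sorted(range(len(pts)), key=lambda i: pts[i])
    def cross(o, a, b):
        return ((pts[a][0] - pts[o][0]) * (pts[b][1] - pts[o][1])
                - (pts[a][1] - pts[o][1]) * (pts[b][0] - pts[o][0]))
    lower, upper = [], []
    for i in order:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], i) <= 0:
            lower.pop()
        lower.append(i)
    for i in reversed(order):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], i) <= 0:
            upper.pop()
        upper.append(i)
    return lower[:-1] + upper[:-1]

def hull_edge_timelapses(n_steps):
    """Time-lapses (in steps) between endpoints of each convex-hull edge
    of a planar Gaussian random walk."""
    x = y = 0.0
    pts = []
    for _ in range(n_steps):
        x += random.gauss(0, 1)
        y += random.gauss(0, 1)
        pts.append((x, y))
    v = convex_hull_indices(pts)
    return [abs(v[k] - v[k - 1]) for k in range(len(v))]  # k=0 wraps around

# Average number of hull edges with time-lapse in [t1, t2]; the formula
# predicts about 2 * log(t2 / t1) ~ 4.6 here, independent of n_steps.
t1, t2 = 100, 1000
runs = 40
avg = sum(sum(1 for lap in hull_edge_timelapses(20000) if t1 <= lap <= t2)
          for _ in range(runs)) / runs
print(avg, 2 * math.log(t2 / t1))
```

The count is noisy run to run, so the comparison is only order-of-magnitude at these sizes; increasing the number of runs tightens it.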
Group cohomology of elementary abelian group of prime-square order Latest revision as of 21:34, 24 October 2011 Suppose $p$ is a prime number. We are interested in the elementary abelian group of prime-square order $E_{p^2} = (\mathbb{Z}/p\mathbb{Z})^2 = \mathbb{Z}/p\mathbb{Z} \oplus \mathbb{Z}/p\mathbb{Z}$. This article gives specific
information, namely, group cohomology, about a family of groups, namely: elementary abelian group of prime-square order. View group cohomology of group families | View other specific information about elementary abelian group of prime-square order Particular cases Homology groups for trivial group action FACTS TO CHECK AGAINST(homology group for trivial group action): First homology group: first homology group for trivial group action equals tensor product with abelianization Second homology group: formula for second homology group for trivial group action in terms of Schur multiplier and abelianization|Hopf's formula for Schur multiplier General: universal coefficients theorem for group homology|homology group for trivial group action commutes with direct product in second coordinate|Kunneth formula for group homology Over the integers The homology groups below can be computed using the homology groups for the group of prime order (see group cohomology of finite cyclic groups) and combining it with the Kunneth formula for group homology. The even and odd cases can be combined giving the following alternative description: The first few homology groups are given below: rank of as an elementary abelian -group -- 2 1 3 2 4 Over an abelian group The homology groups with coefficients in an abelian group are given as follows: Here, is the quotient of by and . These homology groups can be computed in terms of the homology groups over integers using the universal coefficients theorem for group homology. Important case types for abelian groups Case on Conclusion about odd-indexed homology groups, i.e., Conclusion about even-indexed homology groups, i.e., is uniquely -divisible, i.e., every element of can be divided uniquely by . This includes the case that is a field of characteristic not . all zero groups all zero groups is -torsion-free, i.e., no nonzero element of multiplies by to give zero. 
is -divisible, but not necessarily uniquely so, e.g., , any natural number is a finite abelian group isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of is a finitely generated abelian group all isomorphic to where is the rank for the -Sylow subgroup of the torsion part of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of all isomorphic to where is the rank for the -Sylow subgroup of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of Cohomology groups for trivial group action FACTS TO CHECK AGAINST(cohomology group for trivial group action): First cohomology group: first cohomology group for trivial group action is naturally isomorphic to group of homomorphisms Second cohomology group: formula for second cohomology group for trivial group action in terms of Schur multiplier and abelianization In general: dual universal coefficients theorem for group cohomology relating cohomology with arbitrary coefficientsto homology with coefficients in the integers. |Cohomology group for trivial group action commutes with direct product in second coordinate | Kunneth formula for group cohomology Over the integers The cohomology groups with coefficients in the integers are given as below: The odd and even cases can be combined as follows: The first few cohomology groups are given below: 0 rank of as an elementary abelian -group -- 0 2 1 3 2 Over an abelian group The cohomology groups with coefficients in an abelian group are given as follows: Here, is the quotient of by and . These can be deduced from the homology groups with coefficients in the integers using the dual universal coefficients theorem for group cohomology. 
Important case types for abelian groups Case on Conclusion about odd-indexed cohomology groups, i.e., Conclusion about even-indexed cohomology groups, i.e., is uniquely -divisible, i.e., every element of can be divided by uniquely. This includes the case that is a field of characteristic not p. all zero groups all zero groups is -torsion-free, i.e., no nonzero element of multiplies by to give zero. is -divisible, but not necessarily uniquely so, e.g., , any natural number is a finite abelian group isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of isomorphic to where is the rank (i.e., minimum number of generators) for the -Sylow subgroup of is a finitely generated abelian group all isomorphic to where is the rank for the -Sylow subgroup of the torsion part of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of all isomorphic to where is the rank for the -Sylow subgroup of and is the free rank (i.e., the rank as a free abelian group of the torsion-free part) of Tate cohomology groups for trivial group action PLACEHOLDER FOR INFORMATION TO BE FILLED IN Growth of ranks of cohomology groups Over the integers With the exception of the zeroth homology group and cohomology group, the homology groups and cohomology groups over the integers are all elementary abelian -groups. For the homology groups, the rank (i.e., dimension as a vector space over the field of elements) is a function of that is a sum of a linear function (of slope 1/2) and a periodic function (of period 2). The same is true for the cohomology groups, although the precise description of the periodic function differs. For homology groups, choosing the periodic function so as to have mean zero, we get that the linear function is and the periodic function is . For cohomology groups, choosing the periodic function so as to have mean zero, we get that the linear function is and the periodic function is .
Note that: The intercept for the cohomology groups is 1/4, as opposed to the intercept of 3/4 for the homology groups. This is explained by the somewhat slower start of cohomology groups on account of being torsion-free. The periodic parts for homology groups and cohomology groups are negatives of each other, indicating an opposing pattern that is explained by looking at the dual universal coefficients theorem for group cohomology. Over the prime field If we take coefficients in the prime field , then the ranks of the homology and cohomology groups both grow as linear functions of . The linear function in both cases is . Note that in this case, the homology groups and cohomology groups are vector spaces over and the cohomology group is the vector space dual of the homology group. Note that there is no periodic part when we are working over the prime field.
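The linear growth of ranks over the prime field can be checked mechanically. A minimal sketch (the only inputs assumed here are standard facts: over $\mathbb{F}_p$, each cohomology group $H^q(\mathbb{Z}/p; \mathbb{F}_p)$ is one-dimensional, and the Künneth formula over a field turns graded dimensions of a direct product into a convolution; the degree cutoff 10 is an arbitrary choice):

```python
# Graded dimensions of H^*(Z/p; F_p): one-dimensional in every degree q >= 0.
N = 10
dims_cyclic = [1] * (N + 1)

# Kunneth over a field: dim H^q(G x H) = sum_{i+j=q} dim H^i(G) * dim H^j(H)
def kunneth(d1, d2):
    n = min(len(d1), len(d2))
    return [sum(d1[i] * d2[q - i] for i in range(q + 1)) for q in range(n)]

dims_rank2 = kunneth(dims_cyclic, dims_cyclic)
print(dims_rank2)  # [1, 2, 3, ..., 11]: dim H^q((Z/p)^2; F_p) = q + 1
```

The output is the linear function $q + 1$ with no periodic part, matching the statement above for the prime field case.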
Let $\mathscr A$ be an algebra of sets. Let $\Sigma$ be the smallest sigma algebra (also the smallest monotone class) containing $\mathscr A$. Let $A_0 \in \Sigma$. Then $A_0 \cap \Sigma := \{A_0 \cap S : S \in \Sigma\}$ is a sigma algebra and is also the smallest sigma algebra (or monotone class) containing the algebra $\mathscr A \cap A_0$. Proving that $A_0 \cap \Sigma$ is a sigma algebra is easy, but I'm having trouble showing it is the smallest such sigma algebra. In particular, I'm trying to use the monotone class theorem (cf. Lieb & Loss, Analysis, Ch. 1) to do so. To that end, let $\{\Pi_\alpha\}_\alpha$ be the collection of all monotone classes containing $\mathscr A \cap A_0$. Let $$\Pi = \bigcap_{\alpha} \Pi_\alpha.$$ We wish to show $\Pi = A_0 \cap \Sigma$. It is clear that $\Pi \subset A_0 \cap \Sigma$, since $A_0 \cap \Sigma = \Pi_\beta$ for some $\beta$. I'm unclear how to show the reverse inclusion. A hint as to how to proceed would be helpful, as opposed to a full answer. This seems as though it should be fairly straightforward, seeing as it appears as an aside in a proof in Lieb & Loss, but it hasn't turned out to be so...
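Not a hint so much as a finite sanity check of the statement itself (everything here, the universe, the generating family, and the choice of $A_0$, is an invented toy example; in the finite case closing under complements and pairwise unions already gives the generated sigma algebra):

```python
from itertools import combinations

def generate_sigma_algebra(universe, generators):
    # Finite case: close under complement and pairwise union until stable.
    universe = frozenset(universe)
    sets = {frozenset(g) for g in generators} | {frozenset(), universe}
    while True:
        new = {universe - s for s in sets} | {a | b for a, b in combinations(sets, 2)}
        if new <= sets:
            return sets
        sets |= new

X = {1, 2, 3, 4, 5, 6}
gens = [{1, 2}, {3, 4}]                      # toy generating family
Sigma = generate_sigma_algebra(X, gens)
A0 = frozenset({1, 2, 3, 4})                 # note: A0 lies in Sigma here

trace = {A0 & s for s in Sigma}              # A0 intersect Sigma
Sigma0 = generate_sigma_algebra(A0, [A0 & frozenset(g) for g in gens])

print(trace == Sigma0, len(Sigma))  # True 8
```

The trace of $\Sigma$ on $A_0$ coincides with the sigma algebra on $A_0$ generated by $\mathscr A \cap A_0$, as the claim predicts.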
Definition:Particular Negative

Definition

A particular negative is a categorical statement of the form: Some $S$ is not $P$ where $S$ and $P$ are predicates. In the language of predicate logic, this can be expressed as: $\exists x: \map S x \land \neg \map P x$ Its meaning can be amplified in natural language as: There exists at least one object with the property of being $S$ which does not have the quality of being $P$. $\left\{{x: S \left({x}\right)}\right\} \cap \left\{{x: \neg P \left({x}\right)}\right\} \ne \varnothing$ or, more compactly: $S \cap \complement \left({P}\right) \ne \varnothing$

Also denoted as

Traditional logic abbreviated the particular negative as $\mathbf O$. Thus, when examining the categorical syllogism, the particular negative $\exists x: \map S x \land \neg \map P x$ is often abbreviated: $\map {\mathbf O} {S, P}$ The abbreviation $\mathbf O$ for a particular negative originates from the second vowel in the Latin word nego, meaning I deny.

Also see

Sources 1965: E.J. Lemmon: Beginning Logic... (previous) ... (next): $\S 4.4$: The Syllogism 1973: Irving M. Copi: Symbolic Logic (4th ed.) ... (previous) ... (next): $4.1$: Singular Propositions and General Propositions 2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.) ... (previous) ... (next): Entry: syllogism
Is there a proper proof of the following property? Let $p$ be a prime number. The number of invertible elements in $\mathbb{Z}/p^n\mathbb{Z}$ is $(p-1)p^{n-1}$.

An element $\overline{a}$ (where $0\leq a \leq n-1$) of the ring $\mathbb{Z}/n$ is invertible precisely when $a$ is coprime to $n$, by a standard result of elementary number theory. The number of positive integers less than or equal to $n$ which are coprime to $n$ is given by the Euler phi function. Hence it suffices to compute $\varphi(p^n)$. You can show that $\varphi(p^n)=(p-1)p^{n-1}$ by noting that the only positive integers less than or equal to $p^n$ which are not coprime to it are the multiples of $p$, namely $kp$ for $k=1,\dotsc, p^{n-1}$. Hence $$\varphi(p^n)=p^n-p^{n-1}=(p-1)p^{n-1}.$$

Let $G=\mathbb{Z}/p^{n}\mathbb{Z}$. Lemma. For $a,m\in\mathbb{N}$, $ax\equiv 1\pmod{m}$ has a solution if and only if $\gcd(a,m)=1$. Proof. $ab\equiv 1\pmod{m}\iff m\mid ab-1\iff\exists k\in\mathbb{Z}$ such that $mk=ab-1\iff1=a(b)+m(-k)\iff\gcd(a,m)=1$.// Now, by our lemma we know that the invertible elements in $G$ are precisely those that are relatively prime to $p^{n}$. If you are familiar with the Euler totient function, then you know that the number of such elements is $\varphi(p^{n})=p^{n}-p^{n-1}=(p-1)p^{n-1}$.
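The count is also easy to confirm by brute force for small cases; a quick sketch (the helper function is ours, not part of either answer):

```python
from math import gcd

def count_units(m):
    # Invertible residues mod m are exactly those coprime to m.
    return sum(1 for a in range(1, m + 1) if gcd(a, m) == 1)

p, n = 3, 3
print(count_units(p ** n), (p - 1) * p ** (n - 1))  # 18 18
```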
Find the limit of the sequence: $$a_{n + 1} = \int_{0}^{a_n}\left(1 + \frac{1}{4} \cos^{2n + 1} t\right)dt,$$ where $a_0 \in (0, 2 \pi)$. This was one of the tasks in an Olympiad. Here is my approach. First, I wanted to simplify the integral: $\int_{0}^{a_n}(1 + \frac{1}{4} \cos^{2n + 1} t)\,dt = \int_{0}^{a_n}dt + \frac{1}{4} \int_{0}^{a_n} \cos^{2n + 1} (t)\, dt$ That leads to the following relation: $$a_{n + 1} = a_n + \frac{1}{4} \int_{0}^{a_n} \cos^{2n + 1} (t)\, dt$$ Now, there is a $\cos t$ with some power, which reminded me of the standard integral $\int \cos^n(x)\, dx$. We can find a recursive formula for it in the following way: $I_n = \int \cos^n(x) dx = \int \cos(x) \cos^{n - 1}(x) dx = \sin x \cos^{n - 1}x - (n - 1)\int \sin(x) \cos^{n - 2}(x) (- \sin (x)) dx.$ This leads to $I_n = \sin x \cos^{n - 1}x + (n - 1) I_{n - 2} - (n - 1) I_n$ and the final recurrence relation is $$I_n = \frac{1}{n} \sin x \cos^{n - 1}x + \frac{n - 1}{n} I_{n - 2}$$ For a long time I have been trying to make a connection between the original integral $\int_{0}^{a_n} \cos^{2n + 1} (t)\, dt$ and this recurrence relation, but I have failed to come up with anything meaningful at the moment.
Well, I guess we can just plug in $2n + 1$ instead of $n$ and we get $$I_{2n + 1} = \frac{1}{2n + 1} \sin x \cos^{2n}x + \frac{2n}{2n + 1} I_{2n - 1}$$ Ok, now if we try to evaluate this as definite integral we should get $I_{2n + 1}(a_n) - I_{2n + 1}(0) = (\frac{1}{2n + 1} \sin a_n \cos^{2n}a_n + \frac{2n}{2n + 1} I_{2n - 1}(a_n)) - (0 + \frac{2n}{2n + 1} I_{2n - 1}(0))$ $I_{2n + 1}(a_n) - I_{2n + 1}(0) = \frac{1}{2n + 1} \sin a_n \cos^{2n}a_n + \frac{2n}{2n + 1} I_{2n - 1}(a_n) - \frac{2n}{2n + 1} I_{2n - 1}(0).$ So, $$\frac{1}{4} \int_{0}^{a_n} \cos^{2n + 1} (t) dt = \frac{1}{4(2n + 1)} \sin a_n \cos^{2n}a_n + \frac{2n}{4(2n + 1)} \big[ I_{2n - 1}(a_n) - I_{2n - 1}(0) \big] $$ $$\frac{1}{4} \int_{0}^{a_n} \cos^{2n + 1} (t) dt = \frac{1}{4(2n + 1)} \sin a_n \cos^{2n}a_n + \frac{2n}{4(2n + 1)} \big[ I_{2n - 1}(a_n) - \cos a_0 \big] $$ I would appreciate any help if you provide me with some insights or clues on how to proceed.
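Not an answer, but a numerical experiment can at least suggest the target value. A sketch (midpoint-rule integration; the step count and the starting value are arbitrary choices): iterating the recurrence, the iterates appear to settle near $\pi$ for starting values tried in $(0, 2\pi)$.

```python
import math

def step(a, n, pts=8000):
    # a_{n+1} = a_n + (1/4) * integral_0^{a_n} cos(t)^(2n+1) dt  (midpoint rule)
    h = a / pts
    s = sum(math.cos((k + 0.5) * h) ** (2 * n + 1) for k in range(pts))
    return a + 0.25 * h * s

a = 1.0  # arbitrary a_0 in (0, 2*pi)
for n in range(300):
    a = step(a, n)
print(a)  # the iterates appear to approach pi = 3.14159...
```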
Fix $r>0$. For $h>0$ let $u_h\in W^{1,2}(B_h,B_r)$, where $B_h$ and $B_r$ are the balls of radius $h$ and $r$ centered at the origin in $\mathbb R^n$, respectively. I want to approximate the $u_h$ by Lipschitz functions $f_h^\lambda\in C^{0,1}(B_h,B_r)\cap W^{1,2}(B_h,B_r)$. There is a theorem which says (https://onlinelibrary.wiley.com/doi/abs/10.1002/cpa.10048, Appendix A.1): Theorem: Let $U\subset\mathbb R^n$ be a bounded Lipschitz domain. Then there exists a constant $C(U)$ with the following property: For each $u\in W^{1,2}(U,\mathbb R^n)$ and each $\lambda>0$ there exists $v:U\rightarrow\mathbb R^n$ such that 1. $\|dv\|_{L^\infty(U)}\leq C\lambda$, 2. $|\{x\in U: u(x)\neq v(x)\}|\leq\frac{C}{\lambda^2}\int_{\{x\in U:|du(x)|>\lambda\}}|du|^2 dx$, 3. $\|du-dv\|_{L^2(U)}^2\leq C\int_{\{x\in U:|du(x)|>\lambda\}}|du|^2 dx$. If I fix $h>0$ and take $f_h^\lambda$ as in the theorem, then as $\lambda$ grows I get the problem that $\|df_h^\lambda\|_{L^\infty}$ could blow up, and I can no longer guarantee that the image of $f_h^\lambda$ is contained in $B_r$. For me it would be enough that as $h\rightarrow 0$, $$\frac{1}{\mathrm{Vol}(B_h)}\int_{B_h}\mathrm{dist}(du(x),SO(n))-\mathrm{dist}(df_h^\lambda(x),SO(n))\,dx$$ tends to zero, where the distance is taken w.r.t. the Frobenius norm (almost everywhere). Although I have 2. in the theorem, it does not solve my problem, or does it? I mean, even if the set where $u_h$ and $f_h^\lambda$ differ shrinks, I cannot say that the image of $f_h^\lambda$ is contained in $B_r$, since $|df_h^\lambda|$ potentially becomes very large.
The elementary "opposite over hypotenuse" definition of the sine function defines the sine of an angle, not a real number. As discussed in the article "A Circular Argument" [Fred Richman, The College Mathematics Journal Vol. 24, No. 2 (Mar., 1993), pp. 160-162. Free version here. Thanks to Aaron Meyerowitz's answer to question 72792 for the reference.], angles might be measured either by the area of a sector of unit radius having the angle or by the arc length of such a sector. If the former convention is adopted then it can be proven using a completely unexceptionable Euclidean argument that $\lim_{x\to 0} \sin(x)/x = 1$. Also, whichever convention is adopted (or so it seems to me), using completely unexceptionable Euclidean arguments, it is possible to prove the angle addition formulas for sine and cosine. Using these two ideas, it is straightforward to find the derivatives of sine and cosine, and from there one can derive an algorithm for computing digits of sine and cosine (and for computing $\pi$) using the relatively sophisticated mean-value version of Taylor's theorem. The equivalence of the two definitions of sine (or of angle measurement) apparently depends on something like Archimedes' postulate: "If two plane curves C and D with the same endpoints are concave in the same direction, and C is included between D and the straight line joining the endpoints, then the length of C is less than the length of D." (Again, thanks to Aaron Meyerowitz.) Of course, it is just this postulate that Archimedes needed to prove that the area of a circle is equal to the area of a triangle with base the circumference of the circle and height the radius. And something like it is surely necessary to derive any algorithm for computing digits of $\pi$.
(Except, and this confuses me a bit, it seems that if we used the area definition of angle, we could derive an algorithm for computing sine without depending on this postulate, and from there we could get an algorithm for computing digits of $\pi$ since $\sin(\pi)=0$.) I am looking in general for elucidation of the conceptual connections between the ideas I have so far discussed and of their background. But here are two more specific questions. First, in what sense is a postulate like Archimedes' needed in the foundations of geometry? (I wonder, in particular, if in a purely formal development we might get by without it, but we would somehow be left without assurance that what we had axiomatized was really geometry.) Also, are there more intuitive alternatives to Archimedes' postulate? Second, what is really needed to get an algorithm for computing digits of sine? Does it really require such complicated technology as Taylor series? It seems like if one uses the area definition of angle, one might be able to give an algorithm using unexceptionable Euclidean techniques and without so much as invoking the notion of limit.
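For reference, the "relatively sophisticated" Taylor route mentioned above can be made concrete. A sketch (of the standard analytic algorithm, not of the Euclidean alternative the question asks about; the tolerance is an arbitrary choice): the mean-value form of the remainder bounds the truncation error by the first omitted term, which is what makes this a digit-computing algorithm.

```python
def sin_taylor(x, eps=1e-12):
    # Partial sums of x - x^3/3! + x^5/5! - ...; by the mean-value form of
    # Taylor's theorem the truncation error is at most the first omitted term.
    s, term, k = 0.0, x, 0
    while abs(term) > eps:
        s += term
        k += 1
        term = -term * x * x / ((2 * k) * (2 * k + 1))
    return s

print(sin_taylor(1.0))  # 0.8414709848..., agreeing with sin(1)
```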
We have a positive series $\displaystyle\sum^\infty_{n=1}a_n$. Does the following series converge or diverge? $$\displaystyle\sum^\infty_{n=1}\frac{a_n}{1+n^2a_n}$$ Suppose $\displaystyle\sum^\infty_{n=1}a_n$ converges; then by the comparison test the given series also converges. Suppose $\displaystyle\sum^\infty_{n=1}a_n$ does not converge: If $a_n$ is a bounded sequence with a bound $M$ then: $\forall n \ a_n\le M \Rightarrow \large\frac{a_n}{1+n^2a_n}\ge\frac{a_n}{1+n^2M}$ So the given series diverges. If $a_n$ isn't bounded, it has a subsequence that tends to infinity, so we have: $\displaystyle\frac{a_{n_k}}{1+{n_k}^2a_{n_k}}\xrightarrow{\,k\to\infty\,}\infty$ so the given series will diverge.
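One sanity check worth running on the bounded case (this observation is ours, not part of the attempt above): taking the divergent series with $a_n = 1$, the transformed terms are $1/(1+n^2)$, which converge by comparison with $\sum 1/n^2$, so boundedness of $a_n$ alone cannot force divergence.

```python
# Counterexample check: a_n = 1 gives a divergent sum(a_n), yet
# sum a_n/(1 + n^2 a_n) = sum 1/(1 + n^2) converges (compare with 1/n^2).
partial = 0.0
for n in range(1, 100001):
    partial += 1.0 / (1.0 + n * n)
print(partial)  # approaches (pi*coth(pi) - 1)/2 = 1.0766...
```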
The idea behind Stokes' theorem Green's theorem states that, given a continuously differentiable two-dimensional vector field $\dlvf$, the integral of the “microscopic circulation” of $\dlvf$ over the region $\dlr$ inside a simple closed curve $\dlc$ is equal to the total circulation of $\dlvf$ around $\dlc$, as suggested by the equation \begin{align*} \dlint = \iint_\dlr \text{“microscopic circulation of $\dlvf$” } dA. \end{align*} We often write that $\dlc = \partial \dlr$ as fancy notation meaning simply that $\dlc$ is the boundary of $\dlr$. Green's theorem requires that $\dlc = \partial \dlr$. The “microscopic circulation” in Green's theorem is captured by the curl of the vector field and is illustrated by the green circles in the below figure. Green's theorem applies only to two-dimensional vector fields and to regions in the two-dimensional plane. Stokes' theorem generalizes Green's theorem to three dimensions. For starters, let's take our above picture and simply embed it in three dimensions. Then, our curve $\dlc$ becomes a curve in the $xy$-plane, and our region $\dlr$ becomes a surface $\dls$ in the $xy$-plane whose boundary is the curve $\dlc$. Even though $\dls$ is now a surface, we still use the same notation $\partial$ for the boundary. The boundary $\partial \dls$ of the surface $\dls$ is a closed curve, and we require that $\partial \dls = \dlc$. The next question is what the microscopic circulation along a surface should be. For Green's theorem, we found that \begin{align*} \text{“microscopic circulation”} = (\curl \dlvf) \cdot \vc{k}, \end{align*} (where $\vc{k}$ is the unit vector in the $z$-direction). We wanted the component of the curl in the $\vc{k}$ direction because this corresponded to microscopic circulation in the $xy$-plane. Similarly, for a surface, we will want the microscopic circulation along the surface.
This corresponds to the component of the curl that is perpendicular to the surface, i.e., \begin{align*} \text{“microscopic circulation”} = (\curl \dlvf) \cdot \vc{n}, \end{align*} where $\vc{n}$ is a unit normal vector to the surface. You can see this using the right-hand rule. If you point the thumb of your right hand perpendicular to a surface, your fingers will curl in a direction corresponding to circulation parallel to the surface. In summary, to go from Green's theorem to Stokes' theorem, we've made two changes. First, we've changed the line integral living in two dimensions (Green's theorem) to a line integral living in three dimensions (Stokes' theorem). Second, we changed the double integral of $\curl \dlvf \cdot \vc{k}$ over a region $\dlr$ in the plane (Green's theorem) to a surface integral of $\curl \dlvf \cdot \vc{n}$ over a surface floating in space (Stokes' theorem). The required relationship between the curve $\dlc$ and the surface $\dls$ (Stokes' theorem) is identical to the relationship between the curve $\dlc$ and the region $\dlr$ (Green's theorem): the curve $\dlc$ must be the boundary $\partial \dlr$ of the region or the boundary $\partial \dls$ of the surface. We write Stokes' theorem as: \begin{align*} \dlint = \ssint{\dls}{\curl \dlvf \cdot \vc{n}} = \sint{\dls}{\curl \dlvf} \end{align*} (Recall that a surface integral of a vector field is the integral of the component of the vector field perpendicular to the surface.) We see that the integral on the right is the surface integral of the vector field $\curl \dlvf$. Stokes' theorem says the surface integral of $\curl \dlvf$ over a surface $\dls$ (i.e., $\sint{\dls}{\curl \dlvf}$) is the circulation of $\dlvf$ around the boundary of the surface (i.e., $\dlint$ where $\dlc = \partial \dls$). Once we have Stokes' theorem, we can see that the surface integral of $\curl \dlvf$ is a special integral.
The integral does not change if we replace the surface $\dls$ by any other surface, as long as the boundary of $\dls$ is still the curve $\dlc$. It cannot change because it still must be equal to $\dlint$, which doesn't change if we don't change $\dlc$. (In analogy to how the gradient $\nabla f$ is a path-independent vector field, you could say that $\curl \dlvf$ is a “surface-independent” vector field, but we don't usually use that term.) For example, starting with a planar surface such as sketched above, we see that the surface $\dls$ doesn't have to be the flat surface inside $\dlc$. We can bend and stretch $\dls$, and the above formula is still true. In the below applet, you can move the green point on the top slider to change the surface $\dls$. For any of those surfaces, the integral of the “microscopic circulation” $\curl \dlvf$ over that surface will be the total circulation $\dlint$ of $\dlvf$ around the curve $\dlc$ (shown in red). The important restriction is that the boundary of the surface $\dls$ is still the curve $\dlc$. Macroscopic and microscopic circulation in three dimensions. The relationship between the macroscopic circulation of a vector field $\dlvf$ around a curve (red boundary of surface) and the microscopic circulation of $\dlvf$ (illustrated by small green circles) along a surface in three dimensions must hold for any surface whose boundary is the curve. No matter which surface you choose (change by dragging the green point on the top slider), the total microscopic circulation of $\dlvf$ along the surface must equal the circulation of $\dlvf$ around the curve. (We assume that the vector field $\dlvf$ is defined everywhere on the surface.) You can change the curve to a more complicated shape by dragging the blue point on the bottom slider, and the relationship between the macroscopic and total microscopic circulation still holds.
The surface is oriented by the shown normal vector (moveable cyan arrow on surface), and the curve is oriented by the red arrow. Stokes' theorem allows us to do even more. We don't have to leave the curve $\dlc$ sitting in the $xy$-plane. We can twist and turn $\dlc$ as well. If $\dls$ is a surface whose boundary is $\dlc$ (i.e., if $\dlc = \partial \dls$), it is still true that \begin{align*} \dlint = \sint{\dls}{\curl \dlvf}. \end{align*} For example, in the above applet, you can also move the blue point on the bottom slider to change the curve $\dlc$. The surface $\dls$ also changes as you change $\dlc$, since its boundary has to be $\dlc$. Note that moving the green point on the top slider does not change the value of either integral in the above formulas. Since the curve $\dlc$ does not change, the left line integral doesn't change, which means the value of the right surface integral cannot change. On the other hand, moving the blue point on the bottom slider does change the values of the integrals since the curve $\dlc$ changes. The important point is that, even in this case, the left line integral and the right surface integral are always equal. There is one more subtlety that you have to get correct, or else you may be off by a sign. You need to orient the surface and boundary properly. The cyan normal vector to the surface and the orientation of the curve (shown by red arrow) in the above applet are chosen with the proper relative orientations so that Stokes' theorem applies. You can read some examples here.
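The surface-independence described above is easy to verify numerically for one concrete field. A minimal sketch (our own toy example: $\dlvf = (-y, x, 0)$, whose curl is $(0,0,2)$, with the unit circle as boundary curve, compared across the flat unit disk and the unit upper hemisphere):

```python
import math

# F = (-y, x, 0) has curl F = (0, 0, 2). C is the unit circle in the
# xy-plane. Stokes' theorem says the circulation around C equals the flux
# of curl F through ANY surface bounded by C; we try the flat unit disk
# and the unit upper hemisphere (where curl F . n = 2z, outward normal).
N = 20000

# Circulation around C: integral of F(c(t)) . c'(t) dt over [0, 2*pi]
h = 2.0 * math.pi / N
circ = 0.0
for k in range(N):
    t = (k + 0.5) * h
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)   # c'(t)
    circ += (-y * dx + x * dy) * h

# Flux through the flat disk: integrand (curl F) . k = 2, so 2 * area
flux_disk = 2.0 * math.pi

# Flux through the hemisphere: integrand 2*cos(phi), surface element
# sin(phi) dphi dtheta on the unit sphere
hp = (math.pi / 2.0) / N
flux_hemi = 0.0
for k in range(N):
    phi = (k + 0.5) * hp
    flux_hemi += 2.0 * math.pi * 2.0 * math.cos(phi) * math.sin(phi) * hp

print(circ, flux_disk, flux_hemi)  # all three equal 2*pi = 6.28318...
```

Both surfaces give the same flux, and both match the circulation, exactly as the theorem requires.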
This has been stuck in my head and although I've found quite some info, I can't get to the final answer. I'm probably overthinking something, so I hope you can point it out or help me out otherwise. I'm looking for the minimum spot size of: A 1 µm laser of quality $M^2 = 1.5$ Leaving the system through a 40 cm diameter lens Being focused by that lens at a range of, say, 5 km (f = 5000 m). EDIT: It concerns an in-atmosphere beam (say, at sea level) but turbulence, scintillation and similar effects can be ignored. For all intents and purposes, it could be in vacuum. It's right in between the 'minimum spot size for microscopic applications' and 'spot size after thousands of kilometers of space travel', resulting in me getting slightly confused about what is applicable and what is not. Basically I've found a number of equations like $ D'=\frac{4\lambda f}{\pi D} $ and $\omega_0= \frac{\lambda f}{\pi a}$ (with $a$ = beam radius at $1/e^2$ intensity), giving me a minimum diameter of 0.016 m / radius of 0.008 m at the provided range respectively, or $\omega(z) = \omega_0\sqrt{1+(\frac{z}{z_r})^2}$, coming up with a radius of 0.20016 m. (This one assumes the beam waist is at the lens itself, which would be valid for much larger ranges but probably not so much for 5 km.) While they all make sense, I "feel" they're not considering the general divergence of the beam. The answers provided here (Physics of Focusing a Laser) seem to be in the right direction. For example, $\omega_0 = \frac{M^2 \lambda }{\pi \Theta }$ comes with the note "Note that if you know the $M^2$ and measure the divergence of a beam, then you can calculate the waist radius." Now I can't measure the divergence of my laser (theoretical case :) ) but I don't think I can calculate it from that equation either, since although I know the radius at the lens, that is not the beam waist. As you see, I'm a bit stuck. Maybe I've just been looking at it for too long, garbling up my head. It'd be great if someone can clear this up for me!
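For what it's worth, the arithmetic under the usual idealization can be laid out explicitly. A sketch, not a definitive answer (assumptions: the 20 cm aperture radius is treated as the $1/e^2$ beam radius at the lens, and the waist is placed at the target, i.e. the familiar estimate $w_0 = M^2 \lambda f / (\pi a)$ scaled by beam quality; the Rayleigh range is included only to judge the depth of focus):

```python
import math

wavelength = 1.0e-6   # m
M2 = 1.5              # beam quality factor
f = 5000.0            # m, focal length = target range (assumption)
a = 0.20              # m, beam radius at the lens (40 cm diameter aperture)

# Focused waist radius for an M^2 beam: w0 = M^2 * lambda * f / (pi * a)
w0 = M2 * wavelength * f / (math.pi * a)

# Rayleigh range of the focused beam, to judge how tight the focus is
zr = math.pi * w0 ** 2 / (M2 * wavelength)

print(w0, zr)  # w0 about 0.0119 m radius; zr about 298 m
```

Under these assumptions the diffraction-limited figure of about 8 mm radius simply gets multiplied by $M^2 = 1.5$, which is where the roughly 12 mm radius comes from.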
In almost all Quantum Field Theory textbooks the same approach to quantization is presented as the first example: one considers the scalar real Klein-Gordon field $\phi$ and just writes it as $$\phi(x) =\int_{} \dfrac{d^3 p}{(2\pi)^3\sqrt{2\omega_p}} (a(p) e^{-i p_\mu x^\mu}+a^\dagger(p) e^{i p_\mu x^\mu}),$$ with $a(p),a^\dagger(p)$ operators satisfying $[a(p),a^\dagger(q)]=(2\pi)^3\delta(p-q)$ and $[a(p),a(q)]=[a^\dagger(p),a^\dagger(q)]=0$. Then textbooks continue by stating that $a(p),a^\dagger(p)$ are annihilation and creation operators in a Fock space. Putting this all in a more organized way, what one has done here is: given that one wants to turn classical fields into quantum fields obeying the canonical commutation relations, one possible approach is this one, where the Hilbert space turns out to be a Fock space. It turns out that this is not the only way. Unlike in QM, there are infinitely many non-equivalent representations of the CCR. My question here is: what is the history of this specific approach based on the Fock space? How did physicists first derive it? How was it discovered that one approach to achieve a representation of the CCR for a system of uncountably many degrees of freedom, i.e., a field, was this one based on the Fock space? What were the first derivations of the Fock space itself as it appears in this approach?
Definition:Probability Density Function

Definition

Let $\struct {\Omega, \Sigma, \Pr}$ be a probability space. Let $X: \Omega \to \R$ be a continuous random variable on $\struct {\Omega, \Sigma, \Pr}$. Let $\Omega_X = \Img X$, the image of $X$. Then the probability density function of $X$ is the mapping $f_X: \R \to [0, +\infty)$ defined as: $\forall x \in \R: \map {f_X} x = \begin {cases} \displaystyle \lim_{\epsilon \mathop \to 0^+} \frac {\map \Pr {x - \frac \epsilon 2 \le X \le x + \frac \epsilon 2} } \epsilon & : x \in \Omega_X \\ 0 & : x \notin \Omega_X \end {cases}$

Also known as

Probability density function is often conveniently abbreviated as p.d.f. or PDF. Sometimes it is also referred to as the density function.
Electronic Journal of Probability Electron. J. Probab. Volume 23 (2018), paper no. 20, 27 pp. Evolution systems of measures and semigroup properties on evolving manifolds Abstract An evolving Riemannian manifold $(M,g_t)_{t\in I}$ consists of a smooth $d$-dimensional manifold $M$, equipped with a geometric flow $g_t$ of complete Riemannian metrics, parametrized by $I=(-\infty ,T)$. Given an additional $C^{1,1}$ family of vector fields $(Z_t)_{t\in I}$ on $M$, we study the family of operators $L_t=\Delta _t +Z_t $ where $\Delta _t$ denotes the Laplacian with respect to the metric $g_t$. We first give sufficient conditions, in terms of space-time Lyapunov functions, for non-explosion of the diffusion generated by $L_t$, and for existence of evolution systems of probability measures associated to it. Coupling methods are used to establish uniqueness of the evolution systems under suitable curvature conditions. Adopting such a unique system of probability measures as reference measures, we characterize supercontractivity, hypercontractivity and ultraboundedness of the corresponding time-inhomogeneous semigroup. To this end, gradient estimates and a family of (super-)logarithmic Sobolev inequalities are established. Article information Source Electron. J. Probab., Volume 23 (2018), paper no. 20, 27 pp. Dates Received: 16 August 2017 Accepted: 2 February 2018 First available in Project Euclid: 27 February 2018 Permanent link to this document https://projecteuclid.org/euclid.ejp/1519722149 Digital Object Identifier doi:10.1214/18-EJP147 Mathematical Reviews number (MathSciNet) MR3771757 Zentralblatt MATH identifier 1390.60287 Subjects Primary: 60J60: Diffusion processes [See also 58J65] 58J65: Diffusion processes and stochastic analysis on manifolds [See also 35R60, 60H10, 60J60] 53C44: Geometric evolution equations (mean curvature flow, Ricci flow, etc.) Citation Cheng, Li-Juan; Thalmaier, Anton.
Evolution systems of measures and semigroup properties on evolving manifolds. Electron. J. Probab. 23 (2018), paper no. 20, 27 pp. doi:10.1214/18-EJP147. https://projecteuclid.org/euclid.ejp/1519722149
Main Page The Problem Let [math][3]^n[/math] be the set of all length [math]n[/math] strings over the alphabet [math]1, 2, 3[/math]. A combinatorial line is a set of three points in [math][3]^n[/math], formed by taking a string with one or more wildcards [math]x[/math] in it, e.g., [math]112x1xx3\ldots[/math], and replacing those wildcards by [math]1, 2[/math] and [math]3[/math], respectively. In the example given, the resulting combinatorial line is: [math]\{ 11211113\ldots, 11221223\ldots, 11231333\ldots \}[/math]. A subset of [math][3]^n[/math] is said to be line-free if it contains no lines. Let [math]c_n[/math] be the size of the largest line-free subset of [math][3]^n[/math]. [math]k=3[/math] Density Hales-Jewett (DHJ(3)) theorem: [math]\lim_{n \rightarrow \infty} c_n/3^n = 0[/math] The original proof of DHJ(3) used arguments from ergodic theory. The basic problem to be considered by the Polymath project is to explore a particular combinatorial approach to DHJ, suggested by Tim Gowers. Some background to this project can be found here, and general discussion on massively collaborative "polymath" projects can be found here. Threads (1-199) A combinatorial approach to density Hales-Jewett (inactive) (200-299) Upper and lower bounds for the density Hales-Jewett problem (active) (300-399) The triangle-removal approach (inactive) (400-499) Quasirandomness and obstructions to uniformity (final call) (500-599) TBA (600-699) A reading seminar on density Hales-Jewett (active) We are also collecting bounds for Fujimura's problem. Here are some unsolved problems arising from the above threads. Bibliography M. Elkin, "An Improved Construction of Progression-Free Sets", preprint. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem for k=3“, Graph Theory and Combinatorics (Cambridge, 1988). Discrete Math. 75 (1989), no. 1-3, 227–241. H. Furstenberg, Y. Katznelson, “A density version of the Hales-Jewett theorem“, J. Anal. Math. 57 (1991), 64–119. B.
Green, J. Wolf, "A note on Elkin's improvement of Behrend's construction", preprint. K. O'Bryant, "Sets of integers that do not contain long arithmetic progressions", preprint. R. McCutcheon, “The conclusion of the proof of the density Hales-Jewett theorem for k=3“, unpublished.
Difference between revisions of "Quasirandomness"

Revision as of 17:50, 15 March 2009

A possible definition of quasirandom subsets of [math][3]^n[/math]

As with all the examples above, it is more convenient to give a definition for quasirandom functions. However, in this case it is not quite so obvious what should be meant by a balanced function. Here, first, is a possible definition of a quasirandom function from [math][2]^n\times [2]^n[/math] to [math][-1,1].[/math] We say that f is c-quasirandom if [math]\mathbb{E}_{A,A',B,B'}f(A,B)f(A,B')f(A',B)f(A',B')\leq c.[/math] However, the expectation is not with respect to the uniform distribution over all quadruples (A,A',B,B') of subsets of [math][n].[/math] Rather, we choose them as follows. (Several variants of what we write here are possible: it is not clear in advance what precise definition will be the most convenient to use.) First we randomly permute [math][n][/math] using a permutation [math]\pi[/math]. Then we let A, A', B and B' be four random intervals in [math]\pi([n]),[/math] where we allow our intervals to wrap around mod n. (So, for example, a possible set A is [math]\{\pi(n-2),\pi(n-1),\pi(n),\pi(1),\pi(2)\}.[/math]) As ever, it is easy to prove positivity.
To apply this definition to subsets [math]\mathcal{A}[/math] of [math][3]^n,[/math] define f(A,B) to be 0 if A and B intersect, [math]1-\delta[/math] if they are disjoint and the sequence x that is 1 on A, 2 on B and 3 elsewhere belongs to [math]\mathcal{A},[/math] and [math]-\delta[/math] otherwise. Here, [math]\delta[/math] is the probability that (A,B) belongs to [math]\mathcal{A}[/math] if we choose (A,B) randomly by taking two random intervals in a random permutation of [math][n][/math] (in other words, we take the marginal distribution of (A,B) from the distribution of the quadruple (A,A',B,B') above) and condition on their being disjoint. It follows from this definition that [math]\mathbb{E}f=0[/math] (since the expectation conditional on A and B being disjoint is 0 and f is zero whenever A and B intersect). Nothing that one would really like to know about this definition has yet been fully established, though an argument that looks as though it might work has been proposed to show that if f is quasirandom in this sense then the expectation [math]\mathbb{E}f(A,B)f(A\cup D,B)f(A,B\cup D)[/math] is small (if the distribution on these "set-theoretic corners" is appropriately defined).
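The sampling procedure for the quadruple (A,A',B,B') can be sketched in code. This is a hypothetical illustration (the function names and the choice of a uniformly random interval length are my own assumptions, since the text leaves the exact distribution open); it is mainly useful for experimenting numerically with candidate functions f.

```python
import random

def random_cyclic_interval(perm):
    """Pick a random interval of the permuted ground set, wrapping mod n."""
    n = len(perm)
    start = random.randrange(n)
    length = random.randrange(n + 1)  # allow the empty interval (an assumption)
    return frozenset(perm[(start + i) % n] for i in range(length))

def sample_quadruple(n):
    """Draw (A, A', B, B') as four random cyclic intervals of a common
    random permutation of [n], as in the proposed definition."""
    perm = list(range(n))
    random.shuffle(perm)
    return tuple(random_cyclic_interval(perm) for _ in range(4))

def quadruple_mean(f, n, samples=2000):
    """Monte Carlo estimate of E f(A,B) f(A,B') f(A',B) f(A',B')."""
    total = 0.0
    for _ in range(samples):
        A, A2, B, B2 = sample_quadruple(n)
        total += f(A, B) * f(A, B2) * f(A2, B) * f(A2, B2)
    return total / samples
```

For the constant function f = 1 the estimate is exactly 1, which is a trivial check of the sampling machinery; the interesting experiments use the balanced f built from a set-system as described above.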
Segment 7 Calculation Problems 1. Prove the result the "mechanical" way. <math> \begin{align} \Delta^2 &= < (x-a)^2 > \\ &= <x^2> -2a<x> + a^2 \\ \frac{d{\Delta^2}}{da} &= -2<x> + 2a = 0 \\ 2(a - <x>) &= 0\\ a &= <x> \end{align} </math> 2. Thought process while solving the problem: It is easier to construct a piecewise function with its maximum near zero. To ensure that the function is a probability distribution, I decided to split the function at 0 to 1 and 1 to <math>\infty</math>. In order that <math>M_4</math> not exist, the tail of the function should go as the <math>-5</math> power of x, so that the integrand of <math><x^4></math> falls off only like <math>x^{-1}</math>. <math> p(x) = \begin{cases} 0, & \text{if } x \le 0\\ \frac34, & \text{if } 0 \le x \le 1 \\ \frac1{x^5}, & \text{if } 1 \le x < \infty \\ \end{cases} </math> <math> \begin{align} \int_{-\infty}^{\infty}p(x)\,dx &= \frac34 + \frac14 = 1\\ <x^3> &= \frac3{16} + 1 = \frac{19}{16} \\ <x^4> & \text{ does not exist, since } \textstyle\int_1^\infty x^{-1}\,dx \text{ diverges} \end{align} </math> 3. Positives and negatives of using the median over the mean. Positives: the mean, because of its sensitivity to outliers, can give a skewed picture of the central tendency, while the median is a more robust estimate of the true value. Negatives: the mean makes fuller use of the data; if the distribution is close to a normal distribution, the mean is the more efficient estimator of the central tendency, and the median is less efficient for a normal distribution. Food for Thought Problems Class Activity 1. What does a joint uniform prior on w and b look like? Let P(X,Y) be P(w=X, b=Y); it is uniform, so the probability density is the same for every admissible X,Y. <math> \int_0^1\int_0^{1-X} P(X,Y)\,dY\,dX = 1</math> We get <math> P(X,Y) \cdot (X-\frac{X^2}2)|_0^1 = \frac{P(X,Y)}2 = 1 \rightarrow P(X,Y) = 2</math> 2. Suppose we know that w=0.4, b = 0.3, and d = 0.3. If we watch N = 10 games, what is the probability that W = 3, B = 5, and D = 2? <math> P(3,5,2) = \binom{10}{3} \cdot \binom{7}{5} \cdot w^3 \cdot b^5 \cdot d^2 = 0.0353</math> 3. For general w, b, d, W, B, D, what is P(W, B, D | w, b, d)?
<math> P(W,B,D|w,b,d) = \binom{N}{W} \cdot \binom{N-W}{B} \cdot w^W \cdot b^B \cdot d^D </math> 4. Applying Bayes, what is P(w, b, d | W, B, D)? What is the Bayes denominator? <math> P(w,b,d | W, B, D) = \frac{P(W,B,D|w,b,d) \cdot P(w,b,d)}{\int_0^1 \int_0^{1-w} P(W,B,D|w,b,d) \cdot P(w,b,d)\,db\,dw}=\frac{w^W \cdot b^B \cdot d^D}{\frac{W! \cdot B! \cdot D!}{(W+B+D+2)!}} </math> The denominator is: <math>\frac{W! \cdot B! \cdot D!}{(W+B+D+2)!}</math> 5. Using the data from last Friday, count the outcomes of the first N games and produce a visualization of the joint posterior of the win rates for N = 0, 3, 10, 100, 1000, and 10000. An interesting observation is that the more games we count, the smaller the region of appreciable posterior probability becomes, which means we become more confident in the probabilities we estimate by counting more games. (Posterior plots for N = 0, 3, 10, 100, 1000, 10000.)
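Problems 2-4 above can be sketched numerically. This is a minimal sketch (the function names are my own); it evaluates the multinomial likelihood from problem 2 and the posterior density with the Dirichlet normalizer derived in problem 4.

```python
import math

def multinomial_likelihood(W, B, D, w, b, d):
    """P(W, B, D | w, b, d): multinomial probability of W wins, B losses
    and D draws in N = W + B + D games."""
    N = W + B + D
    coeff = math.comb(N, W) * math.comb(N - W, B)
    return coeff * w**W * b**B * d**D

# Problem 2: w = 0.4, b = 0.3, d = 0.3, outcome (W, B, D) = (3, 5, 2).
p = multinomial_likelihood(3, 5, 2, 0.4, 0.3, 0.3)
print(round(p, 4))  # 0.0353

def posterior(W, B, D, w, b):
    """Posterior density P(w, b, d | W, B, D) with d = 1 - w - b, under the
    uniform prior P(w, b) = 2 on the simplex; the Bayes denominator is the
    Dirichlet normalizer W! B! D! / (W + B + D + 2)!."""
    d = 1.0 - w - b
    if d < 0:
        return 0.0
    denom = (math.factorial(W) * math.factorial(B) * math.factorial(D)
             / math.factorial(W + B + D + 2))
    return w**W * b**B * d**D / denom
```

With no data (W = B = D = 0) the posterior is the flat prior, density 2 everywhere on the simplex, which is a quick consistency check.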
So I have a graph and need to find the shortest path between two points in it. I need¹ to do it using bidirectional search. The bidirectional search should be goal-directed, i.e. A*. So let $l(u,v)$ be the length of the (oriented) edge $(u,v)$, $\pi_f(v)$ the potential of vertex $v$ in the forward search, $\pi_r(v)$ the potential of vertex $v$ in the reverse search, and $d(u,v)$ the length of the shortest path from $u$ to $v$. Let $s$ be the start vertex and $t$ the goal vertex. The algorithm selects vertices by $d(s,v)+\pi_f(v)$ in the forward direction and $d(v,t)+\pi_r(v)$ in reverse. Let's call $\mu$ the length of the shortest path found so far, $n_f$ the vertex on top of the forward queue, and $n_r$ the vertex on top of the reverse queue. I found two ways. The obvious option is to stop forward when $d(s,n_f)+\pi_f(n_f)\geq\mu$ and reverse when $d(n_r,t)+\pi_r(n_r)\geq\mu$. It is also not needed to process edges that were already processed in the other direction. Here $\pi_f$ and $\pi_r$ are independent and can be very specific, but the algorithm may need to continue quite long after the shortest path was actually found if the potential function significantly underestimates. The other option is to create a pair of consistent potential functions as defined in this lecture. The requirement is given as $$\pi_f(u) + \pi_r(u) = \pi_f(v) + \pi_r(v)$$ for each edge $(u,v)$ (which really means the sum has to be constant over the whole graph). Without loss of generality we can make $\pi_r(v) = -\pi_f(v)$ and use the normal stopping condition from non-goal-directed search, expressed as $$d(s,n_f)+\pi_f(n_f) + d(n_r,t)+\pi_r(n_r) \geq \mu+\pi_r(t)$$ (assuming we shift $\pi$ so that $\pi_f(s) = 0$). This allows an earlier stop, but the potential function then only indicates whether $v$ is closer to the start or to the goal, and its value for a vertex (equally) far from both will be the same as for a vertex in the middle of the shortest path. Therefore it will be less specific.
Now what I am looking for is: anything that would give me an idea which of the two would be more efficient (without having to implement both and test them), and whether the second can even be used if the heuristic is not monotone, i.e. when $d(u,v) - \pi_f(u) + \pi_f(v) \ge 0$ does not hold (the linked lecture assumes that, but not doing so could save me a lot of data, and I/O is a bottleneck, so I would prefer not to, even though it means occasionally having to reprocess a vertex). ¹ Some important optimization techniques can only be applied to bidirectional search.
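The standard way to build a consistent pair from two independent heuristics is to average them. The sketch below is my own illustration (the toy graph, coordinates, and names are assumptions, not from the question): given a feasible forward heuristic $\pi_f(v)\approx d(v,t)$ and a feasible reverse heuristic $\pi_r(v)\approx d(s,v)$, the averaged pair satisfies the consistency condition while keeping reduced edge costs non-negative.

```python
import math

def averaged_potentials(pi_f, pi_r):
    """From a forward heuristic pi_f(v) ~ d(v, t) and a reverse heuristic
    pi_r(v) ~ d(s, v), build a consistent pair with p_f(v) + p_r(v) = 0
    for every vertex, at the cost of a weaker (averaged) estimate."""
    def p_f(v):
        return (pi_f(v) - pi_r(v)) / 2.0
    def p_r(v):
        return -p_f(v)
    return p_f, p_r

# Toy example: vertices with coordinates, Euclidean lower bounds as heuristics.
coords = {'s': (0, 0), 'a': (1, 1), 'b': (2, 0), 't': (3, 1)}
edges = {('s', 'a'): 1.6, ('s', 'b'): 2.2, ('a', 'b'): 1.5,
         ('a', 't'): 2.1, ('b', 't'): 1.5}

def euclid(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x2 - x1, y2 - y1)

pi_f = lambda v: euclid(v, 't')   # lower bound on d(v, t)
pi_r = lambda v: euclid('s', v)   # lower bound on d(s, v)
p_f, p_r = averaged_potentials(pi_f, pi_r)

# Consistency: p_f + p_r is constant (identically 0 here) over the graph,
# and reduced costs stay non-negative when both heuristics are feasible.
for (u, v), l in edges.items():
    assert abs((p_f(u) + p_r(u)) - (p_f(v) + p_r(v))) < 1e-12
    assert l - p_f(u) + p_f(v) >= -1e-12
```

The averaging step is exactly why the second option is "less specific": the consistent potential is the mean of two bounds, so it is never sharper than the better of the two originals.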
In my textbook (Chemistry Part - I for Class XI published by NCERT), there is an equation for the energy of an electron in an energy state: $$E_n = -R_\mathrm H\left(\frac{1}{n^2}\right)$$ and there is a paragraph below it with the following text: where $R_\mathrm H$ is called the Rydberg constant and its value is $2.18\times10^{-18}\ \text{J}$. There is another section with the expression for the wavenumber ($\overline{\nu}$): $$\overline{\nu}=109\,677 \left(\frac{1}{n_1^2} - \frac{1}{n_2^2}\right)\ \text{cm}^{-1}$$ with a paragraph with the following text: The value $109\,677\ \text{cm}^{-1}$ is called the Rydberg constant for hydrogen. I checked online and found that on most (all) websites (incl. Wikipedia), the value of the Rydberg constant is $109\,677\ \text{cm}^{-1}$. But when I searched for its value in joules, I found this website with the value of the Rydberg constant $= 2.18\times10^{-18}\ \text{J}$. How can the Rydberg constant be written in joules?
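The two values are the same constant in different units: multiplying the wavenumber form by $hc$ gives the energy form, $E = hcR_\mathrm H$. A quick numerical check (my own sketch, using CODATA values for $h$ and $c$):

```python
# Convert the Rydberg constant from wavenumbers (cm^-1) to energy (J).
h = 6.62607015e-34      # Planck constant, J s
c = 2.99792458e10       # speed of light in cm/s, to match cm^-1
R_H = 109677.0          # Rydberg constant for hydrogen, cm^-1

E = h * c * R_H         # E = h c R_H
print(E)                # ~2.18e-18 J, the textbook's "Rydberg constant" in joules
```

So the textbook is simply quoting $hcR_\mathrm H$ and calling it the Rydberg constant as well, which is a common (if loose) convention.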
The Suzuki coupling reaction (also called the Suzuki-Miyaura coupling reaction; Ref. 1) is the coupling of an aryl or vinyl boronic acid with an aryl or vinyl halide or triflate using a palladium(0) catalyst, similar to the Heck and Negishi reactions in mechanistic aspects. In particular, the Negishi reaction uses organozinc reagents instead of organoboronic ...

I offer the following thoughts on the paper you cite regarding the formation of artemisinin 9 from carboxylic acid 1. You are correct that the allylic hydroperoxide 2 is formed by an ene reaction with singlet oxygen (1O2). The stereochemistry of the hydroperoxide group is the result of the ene reaction occurring on the convex face of the unsaturated cis-...

I just looked in the literature, and it has been done even without a catalyst in a "reactive distillation column" at high temperatures (patent US201113194873). If you can remove the water formed, you can technically drive the reaction forward, even if it is slow. A catalyst will just make that process easier, requiring less heat and time. Phosphoric acid is a ...

It seems the most used way is the reaction of sulphuric acid with salts of volatile mineral acids: $$\ce{H2SO4 + NaCl -> NaHSO4 + HCl ^}$$ Reaction with an excess of the acid $$\ce{H2SO4 + NaOH ->[H2SO4] NaHSO4 + H2O}$$ has several drawbacks: it is much more exothermic than the former one, it releases extra water, and it needs a somewhat diluted solution not to ...

Note that the equation $$pV=nRT$$ is valid for a gas only (more exactly, an ideal gas), not for a liquid. Liquid: $$\frac nt = \frac {V}{t} \cdot \frac{\rho }{ M}$$ Beware of the proper units. Instead of standard ones, these are more suitable for liquids: $$\mathrm{[mol / s] = [ mL / s ] \cdot \frac {[g/mL]}{[g/mol]}}$$ Gas: $$\frac nt = \frac {V}{t} \...

Find some nice instructions: http://nzetc.victoria.ac.nz/tm/scholarly/tei-Bio06Tuat03-t1-body-d2.html As picric acid is stored under a layer of water, we can look up its concentration in the water.
It should be enough for most applications.

Temperature (°C) | grams picric acid / 100 grams solution
0  | 0.67
10 | 0.80
20 | 1.10
30 | 1.38

If accuracy is not essential, weigh a small amount drained for a fixed time but still wet, and then weigh again dried, to find the ratio of wet-to-dry weight (carefully disposing of the dried, sensitized explosive without destroying the scale or personnel). Though this could give a rough idea of the actual dried weight, it would vary from batch to batch and ...

Lyophobic means solvent-hating, and here the solvent is water, so the lyophobic ends (presumably the tails, if we are discussing a normal detergent) will associate to form the hydrophobic core of the micelles. The headgroups meanwhile will preferentially interact with water, forming an interface between solvent and core. If you add detergent to a solution ...

You might try a solution of cobalt(II) chloride, $\ce{CoCl2}$, in plain water (which might freeze) or in a mixture of water and ethanol or water and isopropyl alcohol. At some concentration, which you'd experimentally determine, it should turn from pink to blue on cooling to 273 K. See Flinn Scientific's site for more details. BTW, you might as well use ...

It's not an all-or-nothing situation, where either we can reliably predict reaction products, or we have not tried to. For instance, drug companies have for many years made extensive use of increasingly powerful, sophisticated, and refined in silico predictive models to choose likely candidates for in vitro testing. And I would be surprised ...
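For temperatures between the tabulated points, linear interpolation of the solubility table is usually adequate. A minimal sketch (the function name and table layout are my own):

```python
# Picric acid solubility table: (temperature in Celsius, g per 100 g solution).
TABLE = [(0, 0.67), (10, 0.80), (20, 1.10), (30, 1.38)]

def solubility(temp_c):
    """Estimate solubility at temp_c by linear interpolation between the
    tabulated points; only valid inside the tabulated range 0-30 C."""
    if not TABLE[0][0] <= temp_c <= TABLE[-1][0]:
        raise ValueError("temperature outside tabulated range")
    for (t0, s0), (t1, s1) in zip(TABLE, TABLE[1:]):
        if t0 <= temp_c <= t1:
            return s0 + (s1 - s0) * (temp_c - t0) / (t1 - t0)

print(solubility(15))  # ~0.95 g per 100 g solution
```

Note the table is not linear overall (solubility rises faster at higher temperatures), so interpolating within one interval is safer than fitting a single line through all four points.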
I have a question about understanding the proof of Theorem 4.11 in the paper A Potential Theory for Monotone Multivalued Operators (accessible here). The authors claim to construct a convex functional and I'm not sure I follow their argument. My specific question is at the end, but I provide some background from the paper first. Background: The paper shows how, for a pair of dual locally convex topological vector spaces $(X,X')$ and a monotone set-valued operator $M:X \to X'$, one can define a notion of path integral along polygonal paths (as the restriction of $M$ to any straight line in its domain is monotone and hence Riemann-integrable). The authors call $M$ conservative if its path integral around any closed polygonal path in its domain (the set of points of $X$ where it is non-empty valued) is zero. The authors define the integral of $M$ along any line segment $[x,y] \subseteq \textrm{dom}(M)$ via: $$ \int_{0}^1 \langle M(x + t(y-x)), y-x \rangle \, dt = \sup \bigg\{\sum_{i=0}^{n-1}\langle x_i^*, x_{i+1} - x_i\rangle\bigg\} = \inf\bigg\{\sum_{i=0}^{n-1}\langle x_{i+1}^*, x_{i+1}- x_i\rangle \bigg\} $$ where $x_i^* \in M(x_i)$, and the sup/inf are over all refinements of the line segment, and just follow from their respective arguments being the left/right Riemann sums of monotone increasing functions. The authors then state the following theorem (4.11, p. 623), which I reproduce below. Theorem 4.11: To any conservative monotone multivalued map $M:X \to X'$ with a polygonally path connected domain, there corresponds, to within an arbitrary additive constant, a convex potential $f: X \to \mathbb{R} \cup \{+\infty\}$, which is the restriction on $\textrm{dom}(M)$ of a lower semicontinuous proper convex functional.
The potential $f$ is assumed to be $+\infty$ outside $\textrm{dom}(M)$ and is defined on $\textrm{dom}(M)$ by: $$ \begin{aligned} f(x) - f(x_0) & = \int_\pi \langle M(z), dz\rangle = \\ & =\sup\bigg\{\sum_{i=0}^{n-1}\langle x_i^*, x_{i+1}- x_i\rangle + \langle x_n^*, x- x_n\rangle \bigg\}\\ & = \inf\bigg\{\sum_{i=0}^{n-1} \langle x_{i+1}^*, x_{i+1} -x_i\rangle + \langle x^*, x-x_n\rangle\bigg\} \end{aligned} $$ where the sup/inf are again over all refinements of the polygonal path $\pi$. Source of confusion: The proof argues that, by definition, on $\textrm{dom}(M)$, $f$ is equal to the lower semicontinuous proper convex function defined as the pointwise supremum of a family of continuous affine functions, the Riemann sums. I don't follow this step. Normally, when I have seen results stating that the supremum of a family of affine functionals is convex, the family of functionals does not vary from point to point, whereas here it seems to, as long as $\textrm{dom}(M)$ is not convex (which the authors are explicitly allowing for). For example, if I have two points $x,y \in \textrm{dom}(M)$ for which the line segment connecting them is not contained in $\textrm{dom}(M)$, it is not clear to me how the supremum of the set of affine functionals given by refining a path $\pi_x$ from $x_0$ to $x$ relates to the supremum over refinements of a given path $\pi_y$ from $x_0$ to $y$. I'd be happy to provide my attempt to verify too and where I get stuck, but as this question is already fairly long I'll leave it for now, and I suspect the answer is probably something simple. Question: Why is $f$ lower semicontinuous/convex?
The process $X$ is not gaussian and its increments are not independent. Note first that $X$ is a Brownian martingale, hence a Brownian motion with a change of time, thus, it is distributed like $(\beta_{\langle X\rangle_t})$, where $\beta$ is a Brownian motion independent of $X$. For example, $X_1$ has the distribution of $\beta_{\langle X\rangle_1}=\sqrt{\alpha}\cdot\gamma$ where $\gamma$ is standard normal independent of $(X_t)$ and $\alpha=\langle X\rangle_1$. Thus, $E[X_1]=0$, $E[X_1^2]=E[\alpha]\cdot E[\gamma^2]=E[\alpha]$ and $E[X_1^4]=E[\alpha^2]\cdot E[\gamma^4]=3E[\alpha^2]$. Since $E[Z^4]=3E[Z^2]^2$ for every centered normal random variable $Z$, these remarks show that if $X_1$ is normal then $E[\alpha^2]=E[\alpha]^2$, that is, $\alpha$ is almost surely constant. But $\alpha=\int\limits_0^1B_t^4\,\mathrm dt$ hence this is not so and $X_1$ is not normal. To study the independence of the increments of $X$, fix some $s\geqslant0$ and consider the sigma-algebras $\mathcal F^X_s=\sigma(X_u;u\leqslant s)$ and $\mathcal F^B_s=\sigma(B_u;u\leqslant s)$, and the Brownian motion $C$ defined by $C_u=B_{s+u}-B_s$ for every $u\geqslant0$. Then $C$ is independent of $\mathcal F^B_s$. Furthermore, for every $t\geqslant0$,$$X_{t+s}=X_s+\int_0^t(B_s+C_u)^2\mathrm dC_u=X_s+B_s^2C_t+2B_s\int_0^tC_u\mathrm dC_u+\int_0^tC_u^2\mathrm dC_u.$$Rewrite this as$$X_{t+s}-X_s=B_s^2C_t+B_sD_t+G_t,$$where $D_t$ and $G_t$ are functionals of $C$ hence independent of $\mathcal F^B_s$.
Thus,$$E[(X_{t+s}-X_s)^2\mid\mathcal F^B_s]=B_s^4E[C_t^2]+B_s^2E[D_t^2]+E[G_t^2]+2B_s^3E[C_tD_t]+2B_s^2E[C_tG_t]+2B_sE[D_tG_t].$$One can check that $E[C_tD_t]=E[D_tG_t]=0$, $E[C_t^2]=t$, $E[D_t^2]=2t^2$, $E[G_t^2]=t^3$ and $E[C_tG_t]=\frac12t^2$ hence$$E[(X_{t+s}-X_s)^2\mid\mathcal F^B_s]=tB_s^4+3t^2B_s^2+t^3.$$Note that $\mathrm d\langle X\rangle_s=B_s^4\mathrm ds$ and that $\langle X\rangle$ is $\mathcal F^X$-adapted hence $B_s^4$ and every function of $B_s^4$, for example $B_s^2$, are measurable with respect to $\mathcal F^X_s$. This yields$$E[(X_{t+s}-X_s)^2\mid\mathcal F^X_s]=tB_s^4+3t^2B_s^2+t^3.$$The RHS is not almost surely constant hence $(X_{t+s}-X_s)^2$ is not independent of $\mathcal F^X_s$, in particular the increments of $X$ are not independent. Edit: One may feel that the computation of the conditional expectation of $(X_{t+s}-X_s)^2$ above is rather cumbersome (it is) and try to replace it by the (definitely simpler) computation of the conditional expectation of $X_{t+s}-X_s$. Unfortunately,$$E[X_{t+s}-X_s\mid\mathcal F^X_s]=0,$$hence this computation is not sufficient to decide whether the conditional distribution of $X_{t+s}-X_s$ conditionally on $\mathcal F^X_s$ is constant or not (which is the reformulation of the independence of a random variable and a sigma-algebra this solution relies on). Another way of looking at the situation is that, fortunately, already the conditional second moments are not constant.
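As a numerical sanity check (my own addition, not part of the argument above), one can simulate $X_1=\int_0^1 B_t^2\,\mathrm dB_t$ by an Itô (left-point) Euler sum and observe the excess kurtosis that rules out normality, together with $E[X_1^2]=E[\alpha]=\int_0^1 3t^2\,\mathrm dt=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20000, 400
dt = 1.0 / n_steps

# Simulate X_1 = int_0^1 B_t^2 dB_t by the Ito (left-point) Euler sum.
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1) - dB          # left endpoints B_{t_i}, with B_0 = 0
X1 = np.sum(B**2 * dB, axis=1)

m2 = np.mean(X1**2)                     # theory: E[X_1^2] = int_0^1 3 t^2 dt = 1
kurt = np.mean(X1**4) / m2**2           # a normal law would give exactly 3

print(m2, kurt)
```

The empirical kurtosis comes out well above 3, consistent with $E[X_1^4]=3E[\alpha^2]>3E[\alpha]^2$ since $\alpha=\int_0^1B_t^4\,\mathrm dt$ is not almost surely constant.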
We can estimate this using the Pólya-Vinogradov method. We get a main term, which comes from the fact that two elements of $\mathbb F_p$ that sum to something greater than $p$ are more likely to sum to something a little bit greater than $p$ than a lot, and an error term. The formula is: $$ \frac{ i p}{2\pi} + O( \sqrt{p}\log{p} )$$ View the sum as a sum of the product of two characteristic functions and an exponential function: $$\sum_{x,y\in \mathbb F_p}\mathbf 1_{\{xy=1\} } e(x+y) \mathbf 1_{\{x+y>p\}}$$ Let $f(a,b)$ be the Fourier transform of $\mathbf 1_{\{xy=1\} }$. Let $g(a,b)$ be the Fourier transform of $\mathbf 1_{\{x+y>p\}}$. Then by Plancherel's formula, this sum is: $$\frac{\sum_{a,b\in \mathbb F_p} f(a+1,b+1) \overline{g} ( a,b)}{p^2} $$ This sum, it turns out, is easier to estimate. Our first function: $$f(a,b) = \sum_{x \in \mathbb F_p} e(ax+ bx^{-1} ) = K(ab)$$ is a Kloosterman sum, unless $a=0$ or $b=0$, in which case it is $-1$, unless both $a$ and $b$ are $0$, in which case it is $p-1$. In particular, it is bounded by $2 \sqrt{p}$, unless $a=b=0$, in which case it is $p-1$. Our second sum we may estimate by more elementary means: $$g(a,b) = \sum_{0\leq x,y<p, x+y>p} e(ax + by) = \sum_{1\leq x <p} e(ax) \left( \sum_{p+1-x \leq y \leq p-1} e(by) \right) =\sum_{1\leq x <p} e(ax) \frac{ e(bp) - e(b (p+1-x))}{e(b)-1}= \frac{\sum_{1\leq x< p} e(ax+bp) }{e(b)-1} - \frac{\sum_{1\leq x< p} e((a-b) x+b) }{e(b)-1} $$ The first term depends on whether $a=0$, equaling $\frac{(p-1) e(bp)}{e(b)-1}$ if $a=0$ and $\frac{- e(bp)}{e(b)-1}$ otherwise. The second term depends on whether $a=b$, equaling $\frac{-(p-1) e(b)}{e(b)-1}$ if $a=b$ and $\frac{e(b)}{e(b)-1}$ otherwise. The whole equation is wrong if $b=0$, but we can use symmetry to handle that, unless $a=0$ and $b=0$, in which case the sum is obviously $(p-1)(p-2)/2$. Altogether, the $L_1$-norm of $g$ is $O(p^2 \log p)$.
Since each term of $f$ but one is bounded by $2\sqrt{p}$, this gives a contribution of at most $O(\sqrt{p} \log{p})$. This is the error term. The leading term comes from $f(0,0)$, which is $p-1$, summing against $\overline{g}(-1,-1)$, which is $- p e(1) / (e(1)-1)=\frac{i p^2}{2 \pi} + O(p)$. This gives a contribution of $\frac{ip}{2\pi}+O(1)$.
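Two exact facts used in the argument, the count behind $g(0,0)$ and Weil's bound $|K|\leq 2\sqrt{p}$, are easy to check numerically. This sketch (my own, with assumed function names) also evaluates the original sum for a small prime, for comparison with the main term $ip/2\pi$:

```python
import cmath
import math

def e_p(k, p):
    """Additive character e(k) = exp(2 pi i k / p)."""
    return cmath.exp(2j * math.pi * k / p)

def kloosterman(a, b, p):
    """K(a, b) = sum over x in F_p^* of e(a x + b x^{-1}); Weil's bound
    gives |K| <= 2 sqrt(p) when (a, b) != (0, 0)."""
    return sum(e_p(a * x + b * pow(x, -1, p), p) for x in range(1, p))

p = 101

# Exact count behind g(0, 0): pairs 0 <= x, y < p with x + y > p.
count = sum(1 for x in range(p) for y in range(p) if x + y > p)
assert count == (p - 1) * (p - 2) // 2

# Weil bound check for a few Kloosterman sums.
for a, b in [(1, 1), (2, 5), (7, 3)]:
    assert abs(kloosterman(a, b, p)) <= 2 * math.sqrt(p) + 1e-9

# The original sum over xy = 1 with x + y > p, next to the main term ip/(2 pi).
S = sum(e_p(x + pow(x, -1, p), p)
        for x in range(1, p) if x + pow(x, -1, p) > p)
print(S, 1j * p / (2 * math.pi))
```

For $p$ this small the $O(\sqrt{p}\log p)$ error term is comparable to the main term, so the printed values only agree in rough size and sign of the imaginary part.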
(Updated) I have looked at the draft of Ch. 4 of the book "Abelian Varieties" by Gerard van der Geer and Ben Moonen. It looks like, in order to see the group scheme structure on G/H, one should consider the fppf quotient: it is easier to see the group scheme structure on the fppf quotient, and one can prove that the fppf quotient is equal to the categorical quotient. Is this the standard way? (In fact, I am curious whether we need the notion of a Grothendieck topology to see the group scheme structure on G/H.) === Let me just mention the original question for this topic, which is about the quotient of a scheme by a finite group scheme action. In SGA 3, the (general) definition is as follows: Consider a diagram $$ X_1 { \xrightarrow[]{d_0} \atop \xrightarrow[d_1]{} } X_0 \xrightarrow{ \ p \ } Y$$ We call $(Y,p)$ a quotient if $p \circ d_0 = p \circ d_1$ and for any $q: X_0 \rightarrow Z$ such that $q \circ d_0 = q \circ d_1$, there exists a unique $r: Y \rightarrow Z$ such that $q = r \circ p$. The existence of the quotient $Y$ is equivalent to the representability of the functor $K: T \mapsto K(T)$, i.e. $K=\mathrm{Hom}(Y,-)$, where $K(T)$ is the kernel of $$ \mathrm{Hom}(X_0, T) { \xrightarrow[]{T(d_0)} \atop \xrightarrow[T(d_1)]{} } \mathrm{Hom}(X_1, T) $$ In SGA 3 it is proved that the quotient exists in some cases. On the other hand, on Wikipedia (group scheme), it is written that: "For a subgroup scheme H of a group scheme G, the functor that takes an S-scheme T to G(T)/H(T) is in general not a sheaf, and even its sheafification is in general not representable as a scheme. However, if H is finite, flat, and closed in G, then the quotient is representable, and admits a canonical left G-action by translation. If the restriction of this action to H is trivial, then H is said to be normal, and the quotient scheme admits a natural group law.
Representability holds in many other cases, such as when H is closed in G and both are affine.[1]" It looks like the two definitions of quotient are different: the first one considers morphisms to an object $T$, while the second considers morphisms from $T$. The first definition seems more natural to me for a quotient. My question is: are these two definitions equivalent, under the following assumptions: $X_0 = G$ is a group scheme, $X_1 = H \times G$ for a finite closed subgroup scheme $H$, $d_0 = m$ is the morphism induced by the multiplication, and $d_1$ is the second projection? P.S.: When I tried to figure out how to give a multiplication on the quotient $G/H$ (of course, one needs the condition "normal"), I had the first definition of quotient in mind, and couldn't see why it "...admits a canonical left G-action by translation". Using the second definition, it is easy to see.