The Boundary of Any Set is Closed in a Topological Space

Recall from The Boundary of a Set in a Topological Space page that if $(X, \tau)$ is a topological space and $A \subseteq X$, then a point $x \in X$ is said to be a boundary point of $A$ if $x$ is contained in the closure of $A$ and not in the interior of $A$, i.e., $x \in \bar{A} \setminus \mathrm{int}(A)$. We also noted that the set of all boundary points of $A$ is called the boundary of $A$ and is denoted:

$$\partial A = \bar{A} \setminus \mathrm{int}(A) \tag{1}$$

We will now look at a nice theorem that says the boundary of any set in a topological space is always a closed set.

Theorem 1: Let $(X, \tau)$ be a topological space and $A \subseteq X$. Then $\partial A$ is closed.

Proof: To show that $\partial A$ is closed we only need to show that $(\partial A)^c = X \setminus \partial A$ is open. Notice that $X \setminus \partial A$ can be written as:

$$X \setminus \partial A = (A \setminus \partial A) \cup (A^c \setminus \partial A) \tag{$*$}$$

We therefore want to show that $A \setminus \partial A$ and $A^c \setminus \partial A$ are both open sets.

Let $x \in A \setminus \partial A$. Then $x \in A$ and $x$ is not on the boundary of $A$. Since $x \in A \subseteq \bar{A}$ and $x \not\in \partial A = \bar{A} \setminus \mathrm{int}(A)$, we must have $x \in \mathrm{int}(A)$, so there exists an open neighbourhood $U$ of $x$ ($U \in \tau$) that intersects $A^c$ trivially, i.e., $U \cap A^c = \emptyset$. Therefore $U \subseteq A$, and since $U$ is open, $U \subseteq \mathrm{int}(A)$, so $U \cap \partial A = \emptyset$. Hence $x \in U \subseteq A \setminus \partial A$, which shows that every point of $A \setminus \partial A$ is an interior point, i.e., $\mathrm{int}(A \setminus \partial A) = A \setminus \partial A$, so $A \setminus \partial A$ is open.

Let $x \in A^c \setminus \partial A$. Then $x \in A^c$ and $x$ is not on the boundary of $A$. Since $x \not\in A$ and $x \not\in \partial A$, we have $x \not\in \bar{A}$, so there exists an open neighbourhood $U$ of $x$ ($U \in \tau$) that intersects $A$ trivially, i.e., $U \cap A = \emptyset$. Therefore $U \subseteq A^c$. Moreover, since $U$ is open and disjoint from $A$, no point of $U$ is a closure point of $A$, so $U \cap \bar{A} = \emptyset$ and in particular $U \cap \partial A = \emptyset$. Hence $x \in U \subseteq A^c \setminus \partial A$, so $\mathrm{int}(A^c \setminus \partial A) = A^c \setminus \partial A$ and $A^c \setminus \partial A$ is open.
From $(*)$ we see that $(\partial A)^c = X \setminus \partial A$ is the union of two open sets and so $(\partial A)^c$ is open. Therefore $\partial A$ is closed. $\blacksquare$
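Theorem 1 can be sanity-checked exhaustively on a small finite topological space. This is a hedged sketch: the space $X$, the topology $\tau$, and the set $A$ below are made-up illustrative choices.

```python
# A small finite check of Theorem 1: compute the boundary of A in a
# finite topological space and confirm its complement is open.
# The space X, the topology tau, and the set A are illustrative choices.
X = frozenset({1, 2, 3})
tau = [frozenset(), frozenset({1}), frozenset({1, 2}), X]

def interior(A):
    # union of all open sets contained in A
    return frozenset().union(*[U for U in tau if U <= A])

def closure(A):
    # intersection of all closed sets (complements of open sets) containing A
    return X.intersection(*[X - U for U in tau if A <= X - U])

A = frozenset({1})
boundary = closure(A) - interior(A)
print(sorted(boundary), (X - boundary) in tau)  # → [2, 3] True
```

Here $\bar{A} = X$ and $\mathrm{int}(A) = \{1\}$, so $\partial A = \{2, 3\}$, whose complement $\{1\}$ is indeed open.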
The function is $f(K,L)= AK^{a}L^{b}$ on the set of points $(K,L)$ with $K\geq 0$ and $L\geq 0$, assuming $A>0$. How do I find the Hessian and check for concavity? As @Herr K. stated, the starting point is being able to take a derivative. The Hessian is the matrix analogue of a second-order derivative, sometimes denoted $\nabla^{2}$. Start by finding the gradient, $\nabla$, which is a vector of first-order derivatives with respect to every variable in the function. In your case, this would be $$ \nabla= \left[ {\begin{array}{cc} \frac{\partial{f}}{\partial{K}} \\ \frac{\partial{f}}{\partial{L}} \\ \end{array} } \right] = \left[ {\begin{array}{cc} aAK^{a-1}L^{b} \\ bAK^{a}L^{b-1} \\ \end{array} } \right] $$ The Hessian is then the next (second) derivative of each of the above with respect to both variables, and it takes the form: $$ \nabla^{2}= \left[ {\begin{array}{cc} \frac{\partial^{2}f}{\partial{K}^2} & \frac{\partial^{2}f}{\partial{K}\partial{L}} \\ \frac{\partial^{2}f}{\partial{L}\partial{K}} & \frac{\partial^{2}f}{\partial{L}^2} \\ \end{array} } \right] $$ Using the Hessian in your case (left to you to compute), check its definiteness. Note that the determinant alone does not settle concavity: for a $2\times 2$ Hessian you also need the signs of the diagonal entries. With values for your parameters $a$ and $b$ (or at least their signs, as with the other variables and parameters), the function is concave when the Hessian is negative semidefinite, i.e., $\frac{\partial^{2}f}{\partial K^{2}} \le 0$, $\frac{\partial^{2}f}{\partial L^{2}} \le 0$, and the determinant is $\ge 0$. If the Hessian were instead positive definite or positive semidefinite, this would imply strict or non-strict convexity, respectively. 
If you don't know how to take a determinant, it is simply done as follows: for a matrix $$ A= \left[ {\begin{array}{cc} a_{1} & a_{2} \\ a_{3} & a_{4} \\ \end{array} } \right] $$ the determinant is given by the operation (without explanation, for the sake of brevity): $$ \det A= {\begin{vmatrix} a_{1} & a_{2} \\ a_{3} & a_{4} \\ \end{vmatrix} } = a_{1}a_{4}-a_{2}a_{3} $$ Using this with the above, you should be able to figure the rest out and determine whether or not it's concave! (Hint: for $a,b\geq 0$ and $a+b\leq1$ it is concave.)
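If you want to check the result numerically rather than by hand, here is a minimal sketch using central finite differences. The parameter values $A=2$, $a=b=0.25$ and the evaluation point $(K,L)=(1,1)$ are illustrative choices, not part of the question.

```python
# Numerical concavity check for f(K, L) = A * K**a * L**b via a
# finite-difference Hessian; A, a, b and the point (1, 1) are examples.
A, a, b = 2.0, 0.25, 0.25

def f(K, L):
    return A * K**a * L**b

def hessian(K, L, h=1e-4):
    # central second differences for f_KK, f_LL and the mixed partial f_KL
    f_KK = (f(K + h, L) - 2 * f(K, L) + f(K - h, L)) / h**2
    f_LL = (f(K, L + h) - 2 * f(K, L) + f(K, L - h)) / h**2
    f_KL = (f(K + h, L + h) - f(K + h, L - h)
            - f(K - h, L + h) + f(K - h, L - h)) / (4 * h**2)
    return f_KK, f_KL, f_LL

f_KK, f_KL, f_LL = hessian(1.0, 1.0)
det = f_KK * f_LL - f_KL**2

# negative semidefinite (hence concave here): f_KK <= 0, f_LL <= 0, det >= 0
print(f_KK <= 0 and f_LL <= 0 and det >= 0)  # → True
```

This agrees with the hint: for $a = b = 0.25$ we have $a + b \le 1$, and the test confirms concavity at the sample point.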
Background Let $V_n$ be the $\mathbb{C}$-module spanned by the set of derangements (permutations with no fixed points) inside the group ring of $S_n$. We make $V_n$ into a $\mathbb{C}S_n$-module with $S_n$ acting on $V_n$ by conjugation extended by linearity. That is \begin{align} V_n &:= \{\sum\limits_i c_i \sigma_i \,| \, \sigma_i \in S_n, \, \sigma_i \,\text{has no 1-cycles}, \, c_i \in \mathbb{C} \} \\ g\cdot \sigma_i &:= g \sigma_i g^{-1}, \quad g \in S_n \end{align} $V_n$ is the direct sum of $\mathbb{C}S_n$-invariant submodules where each submodule is spanned by derangements of the same cycle type: \begin{align} V_n = \bigoplus_{\mu} V_{\mu} \end{align} where we denote cycle type as an integer partition $\mu = (\mu_1,\dots, \mu_k )$. So for example \begin{align} V_4 \cong V_{(4)} \oplus V_{(2,2)} \end{align} because the only derangements when $n=4$ have cycle type $(4)$ or $(2,2)$. If we choose some derangement $x_{\mu} \in S_n$ where $x_{\mu}$ has cycle type $\mu$ and define \begin{align} H := \text{Stab}_{S_n}(x_{\mu}) = \{ g \, | \, g x_{\mu} g^{-1} = x_{\mu}, \, g \in S_n \} \end{align} then the $S_n$-action on $V_{\mu}$ is the representation induced from the trivial action (by conjugation) of $H$ on the one-dimensional $\mathbb{C}H$-module spanned by the single element $x_{\mu}$. Explicitly, let \begin{align} W(x_{\mu}) &:= \{c x_{\mu} \, |\, c \in \mathbb{C} \} \\ \rho &: H \rightarrow \text{End}(W(x_{\mu})) \\ \rho(h)\cdot w &:= h w h^{-1} \\ &= w \quad (\text{by definition of the Stabilizer}) \end{align} then \begin{align} \text{Ind}_{H}^{S_n} \rho : S_n \rightarrow \text{End}(V_{\mu}) \end{align} is the induced representation describing the $S_n$-action on $V_{\mu}$. 
I don't know if this is tedious or unnecessary, but writing things this way buys us the following: by Frobenius reciprocity, the multiplicity of an irreducible representation of $S_n$ labelled by the integer partition $\lambda = ({\lambda_1, \dots , \lambda_j})$ in a decomposition of $V_{\mu}$ is given by \begin{align} m^{\lambda} = \frac{1}{|H|} \sum\limits_{h\in H} \chi_{S_n}^{\lambda}(h) \end{align} And so, because characters of $S_n$ are straightforward to compute, we know how $V_{\mu}$, and thus $V_n$, decomposes into irreducibles. The Question It's known that given any $n$-fold tensor product of a vector space $M$ \begin{align} M^{\otimes n} := M \otimes \dots \otimes M, \quad (n \, \text{times}) \end{align} one can choose a basis for each $S_n$-invariant irreducible submodule of $M^{\otimes n}$ such that there is a basis element for each standard Young tableau with $n$ boxes and all of these basis elements are mutually orthogonal in the sense that if $e_i$ is the element of the group ring of $S_n$ whose image is the span of the basis element in question, then $e_i e_j = \delta_{ij} e_i$, where the multiplication is the normal group ring multiplication (see here, or here). For clarity: one example of a non-orthogonal list of group algebra elements whose image would be a non-orthogonal basis for $M^{\otimes n}$ would be the set of Young symmetrizers. However, applying the cited methods for obtaining orthogonal bases to $V_n$ will not produce a fully orthogonal basis, but it will produce an orthogonal basis for each submodule $V_{\mu}$. The reason for this is that the same irreducible submodule indexed by a partition $\lambda$ can appear in, say, $V_{\mu}$ and $V_{\nu}$ $(\mu \neq \nu)$, and there is no reason why the basis for $V_{\mu}^{\lambda}$ should be orthogonal to that of $V_{\nu}^{\lambda}$. 
For example, I find that (remember subscripts label cycle type of derangements, superscripts label irreducible modules) \begin{align} V_{(4)} &\cong V_{(4)}^{(4)} \oplus V_{(4)}^{(2,2)} \oplus V_{(4)}^{(2,1,1)} \\ V_{(2,2)} &\cong V_{(2,2)}^{(4)} \oplus V_{(2,2)}^{(2,2)} \end{align} and so the 6 basis elements of $V_{(4)}$ will be mutually orthogonal and the 3 basis elements of $V_{(2,2)}$ will be mutually orthogonal, but the basis elements of $V_{(4)}^{(4)}$ will not be orthogonal to those of $V_{(2,2)}^{(4)}$ using the procedures cited above. So my question is: what is a canonical way to ensure that submodules corresponding to derangements of different cycle types are orthogonal to each other? Is such a procedure possible/obvious? In general this is possibly an unreasonable thing to ask, but maybe the structure of derangements and cycle types makes this a tractable problem? Or maybe at least this particular setup of irreducible representations of the $\mathbb{C}S_n$-module spanned by derangements ($S_n$ action by conjugation) is known under some other name, which could lead me to resources and ideas? Attempt at a solution I have wondered if the solution lies in working with non-standard tableaux, or higher $m$ such that $n < m$, but honestly I don't really see a preliminary "proof of concept" in these ideas and haven't gone further. Of course I could just apply Gram-Schmidt, but I'm wondering if there's something "canonical" that I don't know about.
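The $V_4$ example above can be sanity-checked by brute force. This is a quick sketch; the helper `cycle_type` is an ad-hoc function written just for this check.

```python
# Enumerate the derangements of S_4 and group them by cycle type,
# confirming dim V_(4) = 6 and dim V_(2,2) = 3, as stated above.
from itertools import permutations

def cycle_type(p):
    # cycle type of a permutation given in one-line notation on {0,..,n-1}
    seen, lengths = set(), []
    for i in range(len(p)):
        if i not in seen:
            length, j = 0, i
            while j not in seen:
                seen.add(j)
                j = p[j]
                length += 1
            lengths.append(length)
    return tuple(sorted(lengths, reverse=True))

derangements = [p for p in permutations(range(4))
                if all(p[i] != i for i in range(4))]
counts = {}
for p in derangements:
    t = cycle_type(p)
    counts[t] = counts.get(t, 0) + 1

print(counts)  # 6 of type (4,), 3 of type (2, 2); 9 derangements in total
```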
Answer

Work Step by Step

We use the equation for wavelength, $\lambda = v/f$, to find:

a) $\lambda= \frac{1500\ \mathrm{m/s}}{8\times10^6\ \mathrm{Hz}}=1.9\times10^{-4}\ \mathrm{m}$

b) $\lambda= \frac{1500\ \mathrm{m/s}}{3.5\times10^6\ \mathrm{Hz}}=4.3\times10^{-4}\ \mathrm{m}$
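The arithmetic can be reproduced in a couple of lines (a minimal sketch; $v = 1500\ \mathrm{m/s}$ is the speed used above):

```python
# Reproduce the two wavelength computations above with lambda = v / f.
v = 1500.0  # speed of sound, m/s

def wavelength(freq_hz):
    return v / freq_hz

print(f"{wavelength(8e6):.1e} m")    # → 1.9e-04 m
print(f"{wavelength(3.5e6):.1e} m")  # → 4.3e-04 m
```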
In [1], we discuss several aspects of wave propagation in a computational framework. In particular, we consider the following one-dimensional wave equation
\begin{equation}\label{main_eq_1d} \partial_t^2 u - \partial_x^2 u = 0 \end{equation}
and we discuss the propagation properties of its finite-difference solutions on uniform and non-uniform grids, by establishing comparisons with the usual behavior of the continuous waves. Our approach is based on the study of the propagation of high-frequency Gaussian beam solutions (that is, solutions originated from highly concentrated and oscillating initial data), both in continuous and discrete media. Roughly speaking, the idea at the basis of this technique is that the energy of Gaussian beam solutions propagates along bi-characteristic rays, which are obtained from the Hamiltonian system associated to the symbol of the operator under consideration. At the continuous level of equation \eqref{main_eq_1d}, these rays are straight lines and travel with a uniform velocity. On the other hand, the finite-difference space semi-discretization of \eqref{main_eq_1d} may introduce different dynamics, with a series of unexpected propagation properties at high frequencies. For instance, one can generate spurious solutions traveling at arbitrarily small velocities which, therefore, show a lack of propagation in space. In addition, the introduction of a non-uniform mesh for the discretization may generate further pathologies such as internal reflections, meaning that the waves change direction without hitting the boundary. In order to illustrate these pathological phenomena, we perform simulations on a uniform space-mesh $\mathcal G_h$ of $N$ points and mesh-size $h=2/(N+1)$, and on two non-uniform ones produced by applying to $\mathcal G_h$ the transformations $g_1$ and $g_2$. Moreover, the initial data are constructed starting from a Gaussian profile.

Low frequency simulations

We present in Figure 1 our simulations for low-frequency solutions of \eqref{main_eq_1d}. 
In particular, we considered initial position and frequency $(x_0,\xi_0)=(0,\pi/4)$ and a time horizon $T=5s$. In this case, the numerical solutions behave basically like the continuous ones: they start traveling to the left along the straight characteristic line $x+t = \text{const.}$ and, after having hit the boundary, they reflect following the Descartes-Snell law and continue propagating, this time to the right along the other branch of the characteristic ($x-t = \text{const.}$).

Figure 1. Numerical solutions of \eqref{main_eq_1d} with $(x_0,\xi_0)=(0,\pi/4)$ on uniform and non-uniform meshes.

High frequency simulations

When increasing the frequency, the situation changes and we encounter several interesting phenomena and pathologies:

• The so-called umklapp or U-process, also known as internal reflection, consisting in the reflection of waves without their touching one or both of the endpoints of the space interval. This phenomenon is typical for the semi-discretization of high-frequency solutions of \eqref{main_eq_1d} on non-uniform meshes, which may produce waves oscillating in the interior of the computational domain and reflecting without touching the boundary (see Figure 2 - middle) or touching the boundary only at one of the endpoints (see Figure 2 - right).

Figure 2. Numerical solutions of \eqref{main_eq_1d} with $(x_0,\xi_0)=(1/2,\pi)$ on uniform and non-uniform meshes.

• Non-propagating waves, corresponding to equilibrium (fixed) points on the phase diagram (see Figure 3).

Figure 3. Numerical solutions of \eqref{main_eq_1d} with $(x_0,\xi_0)=(0,\pi)$ on uniform and non-uniform meshes.

These phenomena are related to the particular nature of the discrete group velocity which, in the finite-difference setting, is given by $\partial_\xi \omega(\xi) = \cos(\xi/2)$ and vanishes for $\xi = (2k+1)\pi$, $k\in\mathbb{Z}$. Moreover, they can be understood by looking at the phase portrait of the Hamiltonian system associated to the finite-difference semi-discretization of \eqref{main_eq_1d} (see Figure 4).

Figure 4. 
Phase portrait of the Hamiltonian system for the numerical wave equation corresponding to the grid transformations $g_1$ (left) and $g_2$ (right).

In particular, the solutions displayed in Figure 2 correspond to trajectories which always remain in the red area of these phase portraits, while Figure 3 shows solutions starting from the equilibrium point $(0,\pi)$ (the green one in Figure 4). Notice that, for the grid transformation $g_1$, this equilibrium point is a center (stable), while it is a saddle (unstable) for the grid transformation $g_2$. For this reason, in the first case the non-propagating wave remains concentrated along the vertical ray, while in the second case the wave exhibits markedly dispersive features.

Two-dimensional simulations

Analogous phenomena can also be detected in the case of the finite-difference semi-discretization of the two-dimensional wave equation
\begin{equation}\label{main_eq_2d} \partial_t^2 u - \partial_x^2 u - \partial_y^2 u = 0 \end{equation}
Also in this case, we perform simulations on a uniform mesh of $N$ points in both the $x$ and $y$ directions with mesh-sizes $h_x=2/(N+1)=h_y$, and on two non-uniform ones produced by applying to $\mathcal G_h$ the transformations $g_1$ and $g_2$ introduced above. Moreover, the initial data are constructed once again starting from a Gaussian profile. Then, it can be observed in Video 1 that, as in the one-dimensional case, at low frequencies the solution remains concentrated and propagates along straight characteristics which reach the boundary, where it is reflected according to the Descartes-Snell law. This holds independently of whether we use a uniform or a non-uniform mesh.

Video 1. Numerical solutions of \eqref{main_eq_2d} with $(x_0,y_0,\xi_0,\eta_0)=(0,0,\pi/4,\pi/4)$ on uniform and non-uniform meshes.

Nevertheless, when increasing the frequencies, phenomena similar to the one-dimensional case show up. 
For instance, on the non-uniform mesh corresponding to the transformation $g_1$, we observe the so-called rodeo effect, in which waves that should propagate along straight lines are trapped along closed circles (Video 2 - middle).

Video 2. Numerical solutions of \eqref{main_eq_2d} with $(x_0,y_0,\xi_0,\eta_0)=(0,\tan(\arccos(\sqrt[4]{1/2})),\pi/2,\pi)$ on uniform and non-uniform meshes.

Finally, waves starting from the point $(x_0,y_0,\xi_0,\eta_0)=(0,0,\pi,\pi)$, which is an equilibrium of the Hamiltonian system, cannot move, and remain trapped around the point $(0,0)$ in the physical plane for all time (see Video 3).

Video 3. Numerical solutions of \eqref{main_eq_2d} with $(x_0,y_0,\xi_0,\eta_0)=(0,0,\pi,\pi)$ on uniform and non-uniform meshes.

Conclusions

Summarizing, our analysis shows that the finite-difference semi-discretization of one- and two-dimensional waves may modify the dynamics of the continuous model. In particular, as a result of the accumulation of the local effects introduced by the heterogeneity of the employed grid, numerical high-frequency solutions can bend in a singular and unexpected manner. Moreover, this phenomenon has to be added to the well-known numerical dispersion effect, which causes the high-frequency discrete group velocity to vanish, even on uniform grids. Our results constitute a warning both for adaptivity and for the treatment of control and inverse problems. In broad terms, the goal of adaptivity is to refine a mesh on the support of the solution, keeping it coarse where the solution has little oscillation and energy. Our analysis shows that, in this context, adaptivity has to be performed with some care. Indeed, if one is not careful enough when refining the mesh, spurious effects can be produced, due to the fact that waves feel the fictitious numerical boundaries that are generated where the grid passes from fine to coarse. 
Finally, our results are also a signal that the dangers already observed in [2] for numerical meshes in the study of control and inverse problems may be enhanced when the mesh is non-uniform. In more detail, the heterogeneity of the grid may introduce additional trapping effects, which need to be avoided in order to prove convergence in the context of controllability, stabilization or inversion algorithms.

References

[1] U. Biccari, A. Marica and E. Zuazua, Propagation of one and two-dimensional discrete waves under finite difference approximation. Submitted.

[2] E. Zuazua, Propagation, observation, control and numerical approximation of waves. SIAM Rev. 47, 2 (2005), 197-243.
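As a closing numerical aside, the vanishing of the discrete group velocity used in the analysis above can be checked directly. For the standard three-point finite-difference semi-discretization of \eqref{main_eq_1d}, the ($h$-rescaled) dispersion relation is $\omega(\xi) = 2\sin(\xi/2)$, so the group velocity is $\cos(\xi/2)$; this is a minimal sketch consistent with the vanishing at $\xi = (2k+1)\pi$ stated above.

```python
# Discrete group velocity cos(xi/2) for the three-point finite-difference
# scheme: near 1 at low frequency, zero at xi = pi (non-propagating wave).
import math

def group_velocity(xi):
    # derivative of omega(xi) = 2*sin(xi/2)
    return math.cos(xi / 2)

print(round(group_velocity(math.pi / 4), 4))   # low frequency: close to 1
print(round(group_velocity(math.pi), 12))      # → 0.0
```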
I am new to Machine Learning and Data Science. By spending some time online, I was able to understand the perceptron learning rule fairly well. But I am still clueless about how to apply it to a set of data. For example, we may have the following values of $x_1$, $x_2$ and $d$ respectively: \begin{align}&(0.6 , 0.9 , 0)\\ &(-0.9 , 1.7 , 1)\\ &(0.1 , 1.4 , 1)\\ &(1.2 , 0.9 , 0)\end{align} I can't think of how to begin. I think we need to follow these rules: $$W_i = W_i + \Delta W_i$$ $$\Delta W_i = \eta(d - y)x_i$$ $$y = \begin{cases} 1 & \text{if } \sum_i w_i x_i \ge 0 \\ 0 & \text{otherwise} \end{cases}$$ $$x_0 (\text{bias input}) = 1$$ where $d$ is the target value, $y$ is the output value, $\eta$ is the learning rate and $x_i$ is the input value. Any help is appreciated. Thanks!
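One possible way to begin is to run the update rule over the four points until nothing changes. This is a hedged sketch: the learning rate $\eta = 0.5$, the zero initial weights, and the epoch count are arbitrary choices, and the update used is $\Delta w_i = \eta(d - y)x_i$ with a constant bias input of 1.

```python
# Train a perceptron on the four (x1, x2, d) points from the question.
data = [((0.6, 0.9), 0), ((-0.9, 1.7), 1), ((0.1, 1.4), 1), ((1.2, 0.9), 0)]
eta = 0.5
w = [0.0, 0.0, 0.0]                    # [w0 (bias weight), w1, w2]

def predict(w, x):
    s = w[0] + w[1] * x[0] + w[2] * x[1]
    return 1 if s >= 0 else 0

for _ in range(100):                   # more than enough epochs here
    for x, d in data:
        y = predict(w, x)
        update = eta * (d - y)
        w[0] += update                 # bias input x0 = 1
        w[1] += update * x[0]
        w[2] += update * x[1]

print([predict(w, x) for x, _ in data])  # → [0, 1, 1, 0]
```

On this data the weights stop changing after the first epoch, and the final predictions match the targets, since the four points are linearly separable.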
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y .$$ I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)? 
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, and there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\), then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145. 
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147. 
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
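In the spirit of Puzzle 148, here is a hedged, set-level illustration of checking a naturality square concretely. Everything below (the schema with one edge \(f : v_1 \to v_2\), the particular sets and functions) is a made-up example, not a full solution to the puzzle.

```python
# Two "databases" F, G on the schema with one edge f : v1 -> v2, and a
# candidate natural isomorphism alpha between them, with each component
# a bijection.  We check the naturality square by direct computation.
F_obj = {'v1': [0, 1], 'v2': ['a', 'b']}
G_obj = {'v1': ['x', 'y'], 'v2': ['p', 'q']}

F_f = {0: 'a', 1: 'b'}                 # F applied to the edge f
G_f = {'x': 'p', 'y': 'q'}             # G applied to the edge f

alpha = {'v1': {0: 'x', 1: 'y'},       # component at v1 (a bijection)
         'v2': {'a': 'p', 'b': 'q'}}   # component at v2 (a bijection)

# Naturality square: alpha_v2 ∘ F(f) == G(f) ∘ alpha_v1
lhs = {s: alpha['v2'][F_f[s]] for s in F_obj['v1']}
rhs = {s: G_f[alpha['v1'][s]] for s in F_obj['v1']}
print(lhs == rhs)  # → True
```

Since each component of \(\alpha\) is a bijection and the square commutes, \(\alpha\) is a natural isomorphism by Puzzle 147: the two "databases" hold the same information up to a consistent renaming of entries.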
Consider the following statement (which follows easily from various results found in the literature): (†) There exists a primitive recursive (“p.r.”) relation $T$ on the ordinals such that, if $(f_e)_{e<\omega}$ is a standard numbering of the p.r. functions of one variable on the ordinals, we have $f_e(x)=y$ iff $\exists z.T(e,x,y,z)$; moreover, there then exists such a $z$ which is less than the smallest p.r.-closed ordinal $\geq x$. One straightforward consequence of (†) is a Kleene normal form theorem for $\alpha$-recursion: there exists a p.r. relation $T'$ on the ordinals such that, if $(\varphi_e)_{e<\omega}$ is a standard numbering of the partial recursive functions of one variable on the admissible ordinal $\alpha$, we have $\varphi_e(x)\simeq y$ iff $\exists z<\alpha.T'(e,x,y,z)$. (One straightforward consequence of that is that there exists a “universal” partial recursive $g$ on $\alpha$ with the property that $g(e,x) \simeq \varphi_e(x)$.) Now every proof of (†) that I was able to find boils down to something like this: if $f_e(x)=y$ then for $\beta$ large enough (larger than all the intermediate computation values) we have $L_\beta \models f_e(x)=y$, and “$L_\beta \models f_e(x)=y$” is a p.r. relation of the ordinals $e,x,y,\beta$ because it is a p.r. relation of the sets $e,x,y,\beta$ and p.r. relations on ordinals and on sets that happen to be ordinals coincide. This proof is explicit, in the sense that it actually gives $T$, but the $T$ in question seems rather insanely complicated (as a p.r. relation on ordinals) because of the process of converting a p.r. relation on sets to one on ordinals and because of the route through formulas and the $L_\beta$. Question: Can the statement (†) above be proved entirely at the level of p.r. functions and relations on the ordinals? Variant: Can we give an explicit $T$ that is reasonably short? In the case of ordinary recursion, it is not too difficult. 
What I would like to understand is whether there is some reason why the transfinite ordinal case should be harder or whether it is just an artefact of the way things are written in the literature. Motivation: Partially from trying to get a better understanding of this question I asked recently; partially from thinking about how, if $\alpha$-recursion is viewed as a transfinite programming language, one would proceed to write an “interpreter” for the language in the language itself (=universal function).
Quotient Theorem for Sets

Theorem

Let $f: S \to T$ be a mapping, and let $\mathcal R_f$ denote the equivalence induced on $S$ by $f$. Then:

$f = i \circ r \circ q_{\mathcal R_f}$

where:

$q_{\mathcal R_f}: S \to S / \mathcal R_f : \map {q_{\mathcal R_f} } s = \eqclass s {\mathcal R_f}$ is the quotient mapping

$r: S / \mathcal R_f \to \Img f: \map r {\eqclass s {\mathcal R_f} } = \map f s$ is the renaming mapping

$i: \Img f \to T: \map i t = t$ is the inclusion mapping.

This can be illustrated using the commutative diagram below.

Proof

From Factoring Mapping into Surjection and Inclusion, $f$ can be factored uniquely into:

a surjection $g: S \to \Img f$, followed by:

the inclusion mapping $i: \Img f \to T$ (an injection).

The surjection $g$ can in turn be factored into:

the quotient mapping $q_{\mathcal R_f}: S \to S / \mathcal R_f$ (a surjection), followed by:

the renaming mapping $r: S / \mathcal R_f \to \Img f$ (a bijection).

Thus:

$f = i \circ \paren {r \circ q_{\mathcal R_f} }$

As Composition of Mappings is Associative, it can be seen that $f = i \circ r \circ q_{\mathcal R_f}$.

The commutative diagram is as follows:

$\begin{xy} \xymatrix@+1em{ S \ar@{-->}[rrr]^*{f = i \circ r \circ q_{\mathcal R_f} } \ar[ddd]_*{q_{\mathcal R_f} } \ar@{..>}[drdrdr]_*{g = r \circ q_{\mathcal R_f} } & & & T \\ \\ \\ S / \mathcal R_f \ar[rrr]_*{r} & & & \Img f \ar[uuu]_*{i} } \end{xy}$

$\blacksquare$

Also known as

Otherwise known as the factoring theorem or factor theorem. This construction is known as the canonical decomposition of $f$.

Sources

1967: George McCarty: Topology: An Introduction with Application to Topological Groups: $\text{I}$: Factoring Functions: Theorem $11$
1967: George McCarty: Topology: An Introduction with Application to Topological Groups: $\text{I}$: Exercise $\text{K}$
1975: T.S. Blyth: Set Theory and Abstract Algebra: $\S 6$. Indexed families; partitions; equivalence relations: Theorem $6.6$
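The canonical decomposition $f = i \circ r \circ q_{\mathcal R_f}$ can be illustrated on a concrete finite mapping. This is a hedged sketch; the sets $S$, $T$ and the mapping $f$ are arbitrary examples.

```python
# Build q (quotient), r (renaming) and i (inclusion) for a finite map f,
# and verify that their composite recovers f on all of S.
S = [0, 1, 2, 3, 4]
T = ['a', 'b', 'c', 'z']
f = {0: 'a', 1: 'a', 2: 'b', 3: 'b', 4: 'c'}

# q: S -> S/R_f sends s to its equivalence class under "same image".
classes = {}
for s in S:
    classes.setdefault(f[s], []).append(s)
q = {s: tuple(classes[f[s]]) for s in S}

# r: S/R_f -> Img f renames each class to the common image value (bijection).
r = {tuple(members): v for v, members in classes.items()}

# i: Img f -> T is the inclusion (identity on elements of Img f).
def i(t):
    return t

assert all(i(r[q[s]]) == f[s] for s in S)
print("f = i ∘ r ∘ q verified on all of S")
```

Note that $q$ is surjective, $r$ is a bijection, and $i$ is injective but not surjective here (the element `'z'` of $T$ is outside $\operatorname{Img} f$), matching the roles in the proof.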
I have read about how Alice can send Bob a qubit $\alpha |0\rangle + \beta|1\rangle$ if they share an EPR pair. This gives an initial state: $(\alpha |0\rangle + \beta|1\rangle) \otimes (\frac{1}{\sqrt{2}}|00\rangle + \frac{1}{\sqrt{2}}|11\rangle)$ The first two qubits belong to Alice, the third belongs to Bob. The first step is for Alice to apply a controlled-NOT from the first qubit onto her half of the EPR pair. This gives the result: $\frac{1}{\sqrt{2}} \big(\alpha (|000\rangle + |011\rangle) + \beta (|110\rangle + |101\rangle)\big)$ Next, let us say that Alice measures her second qubit. This has a 50/50 chance of resulting in a zero or a one. That leaves the system in one of two states: $\alpha |000\rangle + \beta |101\rangle \quad\text{OR}\quad \alpha |011\rangle + \beta |110\rangle$ If Alice measures the second qubit as zero, she is in the first state. She can tell Bob: "Your half of the EPR is now the qubit I wanted to send you." If Alice measures the second qubit as one, she is in the second state. She can tell Bob: please apply the matrix $ \begin{bmatrix} 0 & 1\\ 1 & 0 \end{bmatrix} $ to your qubit to flip the roles of zero and one. Hasn't Alice teleported her qubit at this point? The only problem I see is this: Alice must continue not to measure her original qubit. If her unmeasured qubit were to be measured, that would force Bob's qubit to collapse as well. Is this therefore why Alice needs to apply a Hadamard matrix to her first qubit? Let us apply the Hadamard to the state $\alpha |000\rangle + \beta |101\rangle$ (This is one of the two possibilities from above). We get: $ \frac{1}{\sqrt{2}} \big( (\alpha |000\rangle + \beta |001\rangle) + (\alpha |100\rangle - \beta |101\rangle) \big) $ Alice measures her first qubit now. If it is a zero, she can tell Bob: your qubit is fine. If it is one, she can tell Bob: you need to fix it from $\alpha |100\rangle - \beta |101\rangle$ (by an appropriate rotation). 
Finally, my questions are: If Alice is okay with sharing an entangled copy of the transferred qubit with Bob, can she send just the first classical bit? Is the application of the Hadamard simply to separate Alice's first qubit from Bob's qubit? It is the application of the Hadamard to Alice's first qubit, followed by the measurement, which may disturb Bob's qubit, possibly necessitating a "fixup." The second classical bit is transferred to communicate whether the fixup is needed. Am I correct? The reason Alice wants Bob to have a qubit unentangled from her own is probably that it is burdensome for Alice to keep an entangled copy from being measured. Correct? Sorry for the very long and rambly question. I think I understand, but maybe this writeup will help someone else.
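The state-by-state walkthrough above can be checked numerically. This is a hedged sketch in pure Python: the 3-qubit state is stored as 8 amplitudes with basis ordering $|q_1 q_2 q_3\rangle$, and the amplitudes $\alpha = 0.6$, $\beta = 0.8$ are arbitrary example values.

```python
# Simulate Alice's CNOT (qubit 1 -> qubit 2) and Hadamard on qubit 1,
# then look at the branch where qubits 1 and 2 both measure 0: Bob's
# qubit should carry the original amplitudes (a, b) with no correction.
import math

a, b = 0.6, 0.8                        # example qubit, |a|^2 + |b|^2 = 1
s = 1 / math.sqrt(2)

# initial state (a|0> + b|1>) ⊗ (|00> + |11>)/sqrt(2)
psi = [0.0] * 8
for q1, amp in ((0, a), (1, b)):
    for q2q3 in (0b00, 0b11):
        psi[(q1 << 2) | q2q3] = amp * s

# CNOT: qubit 1 controls qubit 2
out = [0.0] * 8
for idx, amp in enumerate(psi):
    q1, q2, q3 = (idx >> 2) & 1, (idx >> 1) & 1, idx & 1
    out[(q1 << 2) | ((q2 ^ q1) << 1) | q3] += amp
psi = out

# Hadamard on qubit 1: |0> -> (|0>+|1>)/sqrt(2), |1> -> (|0>-|1>)/sqrt(2)
out = [0.0] * 8
for idx, amp in enumerate(psi):
    q1, rest = idx >> 2, idx & 0b011
    out[rest] += amp * s
    out[0b100 | rest] += amp * s * (-1 if q1 else 1)
psi = out

# branch where Alice measures q1 = 0 and q2 = 0, renormalized
branch = [psi[0b000], psi[0b001]]
norm = math.sqrt(sum(x * x for x in branch))
print([round(x / norm, 6) for x in branch])  # → [0.6, 0.8]
```

The other three measurement branches give the same amplitudes up to the X and/or Z fixups communicated by the two classical bits, which matches the description above.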
In short, the experience of information construction in the field of urban planning over the past ten-plus years makes it clear that it is time to settle the theory and methods of urban planning information systems.

Considering the influence of wind direction, wind speed, atmospheric dispersion parameters, chimney height and other factors, we use MATLAB to calculate the dispersion concentration at a given time, as a daily average, and as an annual average, and to simulate forecast charts for SO$_2$ and NO$_2$ under different scenarios, for a chimney at the design height of 210 m and at the evaluation height of 240 m.

The proof is based on a variant of Moser's method using time-dependent vector fields.

We present two-sided singular value estimates for a class of convolution-product operators related to time-frequency localization.

The Balian-Low theorem (BLT) is a key result in time-frequency analysis, originally stated by Balian and, independently, by Low, as: if a Gabor system $\{e^{2\pi imbt} \, g(t-na)\}_{m,n \in \mathbb{Z}}$ ...

Gabor Time-Frequency Lattices and the Wexler-Raz Identity. Gabor time-frequency lattices are sets of functions of the form $g_{m \alpha , n \beta} (t) = e^{-2 \pi i \alpha m t} g(t-n \beta)$ generated from a given function $g(t)$ by discrete translations in time and frequency.

This paper presents an economical algorithm for static storage allocation in a FORTRAN implementation. The CALL relation between program units is used to overlay repeated allocations, economizing run-time storage. The algorithm can be used for the allocation of local and temporary variables and for the optimal assignment of array index registers.

This paper introduces the principle, operation and application of a source program for calculating the coordinates of the shaft centers of a multiple-spindle gearbox in an aggregate machine tool by means of an electronic computer. The program offers the possibility of automatically selecting the calculation process and method for the shaft-center coordinates. As a result, it saves a great deal of time and makes preparing the input data easier. It is convenient for a designer to handle and use. The paper also discusses the analysis of possible calculation mistakes and the approach to finding them with the help of the source program.

As a part of LSI CAD, software for the CAD photomask system ZB-761 was designed and put into use on a DJS-130 computer in 1976. Since then, several dozen IC photomasks have been made. The input language of the software is well suited to describing IC mask graphics. The data structure is designed so that a minicomputer without external memory, such as a DJS-130, can handle the large volume of mask-graphic data. By optimizing the object program, the working time of the specific photomask equipment is greatly reduced. The software can also check data and carry out man-machine dialogue to a certain extent. This paper describes in some detail the particular features of the input language, the data structure, the transformation algorithm of the compiler, and the optimization of the object program, and suggests schemes for implementing LSI mask design automation.
Homotopic Mappings Relative to a Subset of a Topological Space

Table of Contents Homotopic Mappings Relative to a Subset of a Topological Space

Definition: Let $X$ and $Y$ be topological spaces, $A \subset X$, and let $f, g : X \to Y$ be two continuous functions such that $f(a) = g(a)$ for all $a \in A$. Then $f$ is Homotopic to $g$ Relative to $A$, written $f \simeq_A g$, if there is a continuous function $H : X \times I \to Y$ such that: a) $H_0 = f$. b) $H_1 = g$. c) $H_t(a) = f(a) = g(a)$ for all $a \in A$ and for all $t \in I$. If such a function $H$ exists, then $H$ is said to be a Homotopy from $f$ to $g$ relative to $A$.

Definition: Let $X$ and $Y$ be topological spaces and let $f, g : X \to Y$ be two continuous functions. Then $f$ is Homotopic to $g$, written $f \simeq g$, if there is a continuous function $H : X \times I \to Y$ such that: a) $H_0 = f$. b) $H_1 = g$. If such a function $H$ exists, then $H$ is said to be a Homotopy from $f$ to $g$. Observe that $f$ being homotopic to $g$ is the same thing as $f$ being homotopic to $g$ relative to $A = \emptyset$.

Theorem 1: Let $X$ and $Y$ be topological spaces and let $A \subset X$. Then the relation "$f$ is homotopic to $g$ relative to $A$" is an equivalence relation on the set of continuous functions $f, g : X \to Y$ such that $f(a) = g(a)$ for all $a \in A$.

Proof: Let $f, g, h : X \to Y$ be continuous functions such that $f(a) = g(a) = h(a)$ for all $a \in A$. Consider the function $H : X \times I \to Y$ defined for all $t \in I$ by: \begin{align} \quad H(x, t) = f(x) \end{align} Then $H$ is a continuous function since $f$ is a continuous function. Furthermore, $H_0 = f$, $H_1 = f$, and $H_t(a) = f(a)$ for all $a \in A$. So $f$ is homotopic to $f$ relative to $A$. Suppose that $f$ is homotopic to $g$ relative to $A$. Then there exists a continuous function $H : X \times I \to Y$ such that $H_0 = f$, $H_1 = g$, and $H_t(a) = f(a) = g(a)$ for all $a \in A$ and for all $t \in I$.
Define a new function $H' : X \times I \to Y$ by: \begin{align} \quad H'(x, t) = H(x, 1 - t) \end{align} Then $H'$ is a continuous function since $H$ is a continuous function. Furthermore, $H'_0 = H_1 = g$, $H'_1 = H_0 = f$, and $H'_t(a) = H_{1-t}(a) = f(a) = g(a)$ for all $a \in A$ and all $t \in I$. So $g$ is homotopic to $f$ relative to $A$. Now let $f$ be homotopic to $g$ relative to $A$ and let $g$ be homotopic to $h$ relative to $A$. Then there exist continuous functions $H' : X \times I \to Y$ and $H'' : X \times I \to Y$ such that $H'_0 = f$, $H'_1 = g$, $H''_0 = g$, $H''_1 = h$, $H'_t(a) = f(a) = g(a)$ for all $a \in A$ and for all $t \in I$, and $H''_t(a) = g(a) = h(a)$ for all $a \in A$ and for all $t \in I$. Define a new function $H : X \times I \to Y$ by: \begin{align} \quad H(x, t) = \left\{\begin{matrix} H'(x, 2t) & \mathrm{if}\: 0 \leq t \leq \frac{1}{2} \\ H''(x, 2t - 1) & \mathrm{if} \: \frac{1}{2} \leq t \leq 1 \end{matrix}\right. \end{align} This is well-defined at $t = \frac{1}{2}$ since $H'(x, 1) = g(x) = H''(x, 0)$. Then $H$ is a continuous function since $H'$ and $H''$ are continuous, by The Gluing Lemma. Furthermore, $H_0 = H'_0 = f$, $H_1 = H''_1 = h$, and $H_t(a) = f(a) = h(a)$ for all $a \in A$ and for all $t \in I$. So $f$ is homotopic to $h$ relative to $A$. So indeed, the relation of being homotopic relative to $A$ is an equivalence relation. $\blacksquare$
Accumulation Points of a Set in a Topological Space Examples 1

Recall from the Accumulation Points of a Set in a Topological Space page that if $(X, \tau)$ is a topological space and $S \subseteq X$ then a point $x \in X$ is an accumulation point of $S$ if every open neighbourhood of $x$ contains points of $S$ different from $x$, that is, for all $U \in \tau$ with $x \in U$ we have that $S \cap (U \setminus \{ x \}) \neq \emptyset$. We will now look at some examples of accumulation points in topological spaces.

Example 1

Let $(X, \tau)$ be a topological space. Prove that $\{ x \} \in \tau$ if and only if $x$ is not an accumulation point of $X$.

Suppose that $\{ x \} \in \tau$. Then $\{ x \}$ is an open neighbourhood of $x$. But the open neighbourhood $\{ x \}$ contains no points of $X$ different from $x$. Therefore $x$ is not an accumulation point of $X$, nor indeed of any subset $S \subseteq X$.

Now suppose that $x$ is not an accumulation point of $X$. Then there exists an open neighbourhood $U$ of $x$ that contains no points of $X$ different from $x$. But then $U = \{ x \}$, and so $\{ x \} \in \tau$.

Example 2

Let $(X, \tau)$ be the discrete topological space, i.e., $\tau = \mathcal P (X)$. What are the accumulation points of $X$?

Since $(X, \tau)$ is the discrete topological space, we have that $\{ x \} \in \tau$ for all $x \in X$. So for each $x \in X$, $\{ x \}$ is an open neighbourhood of $x$ that contains no points of $X$ different from $x$. Therefore, no $x \in X$ is an accumulation point of any subset $S \subseteq X$. So the discrete topological space has no accumulation points.

Example 3

Let $(X, \tau)$ be the indiscrete topological space, i.e., $\tau = \{ \emptyset, X \}$. What are the accumulation points of $X$?

Let $x \in X$. Then the only open neighbourhood of $x$ is $X$. If $X$ contains more than $1$ element, then every $x \in X$ is an accumulation point of $X$.
If $X$ contains only $1$ element, then $x$ is not an accumulation point of $X$, and therefore $X$ has no accumulation points.

Example 4

Let $X = \{ a, b, c, d, e \}$ and $\tau = \{ \emptyset, \{ a \}, \{ b, c, d, e \}, X \}$. What are the accumulation points of $S = \{ a, b, c \} \subseteq X$? What are the accumulation points of $T = \{ a, b \} \subseteq X$?

We note that $a \in X$ is not an accumulation point of $S$ since the open set $\{ a \} \in \tau$ does not contain any points of $S$ different from $a$. However, the points $b, c, d, e \in X$ are accumulation points of $S$ since the only open neighbourhoods of these points are $\{b, c, d, e \}$ and $X$. For $b \in X$, both of these open neighbourhoods contain elements of $S$ (namely $c \in S$) different from $b$. For $c \in X$, both of these open neighbourhoods contain elements of $S$ (namely $b \in S$) different from $c$. For $d \in X$, both of these open neighbourhoods contain elements of $S$ (namely $b, c \in S$) different from $d$. Lastly, for $e \in X$, both of these open neighbourhoods contain elements of $S$ (namely $b, c \in S$) different from $e$.

Now we also note that $a \in X$ is not an accumulation point of $T$ since the open set $\{ a \} \in \tau$ does not (once again) contain any points of $T$ different from $a$. The point $b \in X$ is also not an accumulation point of $T$. The open neighbourhoods of $b \in X$ are $\{b, c, d, e \}$ and $X$. In particular, the open neighbourhood $\{ b, c, d, e \}$ does not contain any elements of $T$ different from $b$. However, the points $c, d, e \in X$ are accumulation points of $T$ since the only open neighbourhoods of these points are $\{ b, c, d, e \}$ and $X$. For $c \in X$, both of these open neighbourhoods contain elements of $T$ (namely $b \in T$) different from $c$. For $d \in X$, both of these open neighbourhoods contain elements of $T$ (namely $b \in T$) different from $d$.
Lastly, for $e \in X$, both of these open neighbourhoods contain elements of $T$ (namely $b \in T$) different from $e$.
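For finite spaces such as Example 4, the definition can be checked mechanically: a point $x$ is an accumulation point of a set exactly when every open set containing $x$ also meets the set in a point other than $x$. A small sketch (the helper names `accPoints`, `spaceX`, `tau4` are mine):

```haskell
-- Accumulation points of s inside the finite space xs with topology tau:
-- keep x when every open set u containing x also contains some y in s, y /= x.
accPoints :: Eq a => [[a]] -> [a] -> [a] -> [a]
accPoints tau xs s =
  [ x | x <- xs
      , all (\u -> notElem x u || any (\y -> y `elem` s && y /= x) u) tau ]

spaceX :: String
spaceX = "abcde"

tau4 :: [String]            -- the topology of Example 4
tau4 = ["", "a", "bcde", "abcde"]

main :: IO ()
main = do
  print (accPoints tau4 spaceX "abc")  -- accumulation points of S: "bcde"
  print (accPoints tau4 spaceX "ab")   -- accumulation points of T: "cde"
```

Running it reproduces the case analysis of Example 4.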
Mathematically it is a relationship between a bipartite linear operator vector space $L(X\otimes Y)$ and a superoperator vector space $C(X, Y): L(X)\to L(Y)$ (maps of linear operators to linear operators). Bipartite density matrices are contained in the former, and quantum channels in the latter. The real "physical" meaning of the isomorphism for quantum ...

As @NorbertSchuch said in a comment, MATLAB has a function for taking the logarithm of a matrix: logm. In general, there is a standard method for calculating a function $f(\sigma)$ of a matrix $\sigma$. You first diagonalise the matrix: $$\sigma=UDU^\dagger,$$ where $U$ is a unitary and $D$ is diagonal. We then say $$f(\sigma)=Uf(D)U^\dagger,$$ where $...

You are running into problems because $\rho$ is not a density operator. A mixed state density operator has $\text{tr}(\rho^2) < 1$, but even a mixed state density operator must have $\text{tr}(\rho)=1$. This is necessary because $\text{tr} (\rho) = \sum \limits_i p_i \, \text{tr}\left(\vert \psi_i \rangle \langle \psi_i \vert \right) = \sum \limits_i ...

$\def\braket#1#2{\langle#1|#2\rangle}\def\bra#1{\langle#1|}\def\ket#1{|#1\rangle}$ I was being silly :) The sign change that happens is in some sense associated either with the $\ket{a}$ or with the ancilla qubit. By taking the ancilla qubit to be (say) $\left(\frac{\ket{0}-\ket{1}}{\sqrt{2}}\right)$, upon XORing with $f(x)$ we get a sign change that we ...

In principle, yes, you can always do it. The Bloch representation can be generalised to arbitrary dimensions, and you can always parametrise states in it by their "angle coordinates". For example, you can write an arbitrary three-level pure state as $$|\psi\rangle=\cos\alpha|0\rangle + e^{i\theta}\sin\alpha\cos\beta|1\rangle+e^{i\phi}\sin\alpha\sin\beta|2\rangle...$$

The state of a single qubit can be described as a point on the Bloch sphere. All the allowed transformations of a single qubit can then be described as rotations on the Bloch sphere.
Unfortunately, bigger quantum systems can no longer be described as fitting on a sphere-like geometry. As a result, this idea of considering transformations as rotations does not ...
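The diagonalise-then-apply recipe quoted above ($f(\sigma) = U f(D) U^\dagger$) can be sketched without any linear-algebra library for a real symmetric $2\times 2$ matrix, using its explicit spectral decomposition. This is a toy stand-in for the general Hermitian case; the type and helper names are mine:

```haskell
type M2 = ((Double, Double), (Double, Double))   -- symmetric [[a,b],[b,c]]

-- f(sigma) via the spectral decomposition
-- sigma = l1 * v1 v1^T + l2 * v2 v2^T  =>  f(sigma) = f(l1) v1 v1^T + f(l2) v2 v2^T
matFun :: (Double -> Double) -> M2 -> M2
matFun f ((a, b), (_, c))
  | b == 0    = ((f a, 0), (0, f c))             -- already diagonal
  | otherwise = addM (scaleM (f l1) (outer v1)) (scaleM (f l2) (outer v2))
  where
    d  = sqrt ((a - c) ^ 2 + 4 * b * b)
    l1 = (a + c + d) / 2                          -- eigenvalues
    l2 = (a + c - d) / 2
    v1 = unit (l1 - c, b)                         -- unit eigenvectors
    v2 = unit (l2 - c, b)
    unit (x, y) = let n = sqrt (x * x + y * y) in (x / n, y / n)
    outer (x, y) = ((x * x, x * y), (x * y, y * y))
    scaleM s ((p, q), (r, t)) = ((s * p, s * q), (s * r, s * t))
    addM ((p, q), (r, t)) ((p', q'), (r', t')) = ((p + p', q + q'), (r + r', t + t'))

main :: IO ()
main = print (matFun exp ((0, 1), (1, 0)))  -- ((cosh 1, sinh 1), (sinh 1, cosh 1))
```

For example, the matrix exponential of the Pauli-$X$-like matrix $\begin{pmatrix}0&1\\1&0\end{pmatrix}$ comes out as $\cosh 1$ on the diagonal and $\sinh 1$ off it, as expected from the eigenvalues $\pm 1$.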
Neural networks is a topic that recurrently appears throughout my life. Once, when I was a BSc student, I got obsessed with the idea of building an "intelligent" machine 1. I spent a couple of sleepless nights thinking. I read a few essays shedding some light on this philosophical subject, among which the most prominent, perhaps, stand Marvin Minsky's writings 2. As a result, I came across the idea of neural networks. It was 2010, and deep learning was not nearly as popular as it is now 3. Moreover, no one made much effort to link neural networks to calculus or linear algebra curricula. Even people doing classical optimization and statistics sometimes seemed puzzled when hearing about neural nets. A year later, I implemented a simple neural net with sigmoid activations as part of a decision-making course. At the time I realized that the existing state of our knowledge was still insufficient to really build "thinking computers" 4. In 2012, at a conference in Crimea, Ukraine, I attended a brilliant talk by Prof. Laurent Larger. He explained how to build high-speed hardware for speech recognition using a laser. The talk really inspired me, and a year later I started a PhD with the aim of developing reservoir computing: recurrent neural networks implemented directly in hardware. Finally, now I am using deep neural networks as a part of my job.

In this series of posts I will highlight some curious details of problem solving with neural networks. I am pretty much against duplicated effort, so I will avoid repeating aspects described many times elsewhere. For those who are completely new to the subject, instead of introducing neural networks all over again I would refer to Chapter 1.2 of my PhD thesis. Here I will only summarize that neural networks are to some extent inspired by biological neurons. Similarly to its biological counterpart, an artificial neuron receives many inputs, performs a nonlinear transformation, and produces an output.
The equation below formalizes this kind of behavior: $$ \begin{equation} y = f(w_1 x_1 + w_2 x_2 + \dots + w_N x_N) = f\Big(\sum_i w_i x_i\Big), \end{equation} $$ where $N$ is the number of inputs $x_i$, $w_i$ are synaptic weights 5, and $y$ is the result. Surprising as it may appear, in a modern neural network $f$ can be practically any nonlinear function. This nonlinear function is often called an activation function. Congratulations, we have arrived at a neural network from the 1950s.

To continue reading this article, some mathematical background is useful, but not mandatory. Anyone can develop an intuition about neural networks!

A Word On Haskell

This series of posts practically illustrates all the concepts in the Haskell programming language. To motivate you, here are some Q & A you always wanted to know.

Q: Is there anything that makes Haskell particularly good for neural networks, or are you simply doing it because you prefer to use Haskell?

A: Neural networks are very much "function objects". A network is just a big composition of functions. These kinds of things are very natural in a functional language.

Q: As I am completely new to Haskell, I would like to know what the benefits are of using Haskell vs Python or other languages?

A: The benefits of using Haskell: It is much easier to reason about what your program is doing. Great when refactoring an existing code base. Haskell shapes your thinking towards problem solving in a pragmatic way. Haskell programs are fast.

Q: I figured my lack of Haskell knowledge would make it hard to read, but your code examples still make sense to me.

A: Thank you. I find Haskell to be very intuitive when explaining neural nets.

Gradient Descent: A CS Freshman Year

The very basic idea behind neural network training and deep learning is a local optimization method known as gradient descent. For those who have a hard time remembering their freshman year, just watch an introductory video explaining the idea.
How does a concept as simple as gradient descent work for a neural network? Well, a neural network is only a function 6, mapping an input to some output. By comparing the neural network's output to some desired output, one can obtain another function known as an error function. This error function has a certain error landscape, like in the mountains. By using gradient descent, we modify our neural network in such a way that we descend this landscape, aiming to find an error minimum. The key concept of the optimization method is that the error gradient gives us a direction in which to change the neural network. In a similar way one would be able to descend a hill covered with thick fog: in both cases only a local gradient (slope) is available.

Gradient descent can be described by a formula: $$\begin{equation} x_{n+1} = x_n - \gamma \cdot \nabla F(x_n), \end{equation} $$ where the constant $\gamma$ is what is referred to in deep learning as the learning rate, i.e. the amount of learning per iteration $n$. In the simplest case, $x$ is a scalar variable. The gradient descent method can be implemented in a few lines of Haskell.

Running it outputs the following sequence: $0.0, 2.4, 2.88, 2.976, 2.9952, 2.99904, 2.999808, 2.9999616, \dots$ Indeed, the value minimizing the function $f(x)$ is $3$, i.e. $\min f(x) = f(3) = 0$, and the sequence gradually converges towards that number.

Let us look more carefully at what the code does. First, we define the gradient descent method, which iteratively applies the function step implementing equation (2); we report the intermediate results by taking the first iterN values. Suppose we would like to optimize the function $f(x) = (x-3)^2$. Its gradient gradF_test is then $\nabla f(x) = 2\cdot(x-3)$. Finally, we run our gradient descent using the learning rate $\gamma = 0.4$.
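Concretely, the descend routine just described can be sketched as follows. The names descend, step, and gradF_test follow the commentary above; the exact original listing may have differed in detail:

```haskell
-- Gradient descent, equation (2): x_{n+1} = x_n - gamma * gradF x_n.
-- Returns the first iterN intermediate values.
descend :: (Double -> Double) -> Int -> Double -> Double -> [Double]
descend gradF iterN gamma x0 = take iterN (iterate step x0)
  where step x = x - gamma * gradF x

-- Gradient of f(x) = (x - 3)^2.
gradF_test :: Double -> Double
gradF_test x = 2 * (x - 3)

main :: IO ()
main = print (descend gradF_test 8 0.4 0.0)
-- a sequence starting 0.0, 2.4, 2.88, ... and approaching 3.0
```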
It is crucial to realize that the value of $\gamma$ affects the convergence. When $\gamma$ is too small, the algorithm will take many more iterations to converge; when $\gamma$ is too large, the algorithm will never converge. At the moment, I am not aware of a good way to determine the best $\gamma$ for a given problem, so often different $\gamma$ values have to be tried. Feel free to modify the code and see what comes out! For example, you can try different values of gamma, such as gamma = 0.01, gamma = 0.1, gamma = 1.1. The method generalizes to $N$ dimensions: essentially, we would replace the gradF_test function with one operating on vectors rather than scalars.

Neural Network Ingredients For Classification

Now that we understand how gradient descent works, we may want to train a moderately useful network. Let's say we want to perform Iris flower classification using four distinctive features 7: sepal length, sepal width, petal length, and petal width. There are three classes of flowers we want to be able to recognize: Setosa, Versicolour, and Virginica. Now, there is a problem: how do we encode those three classes so that our neural network can handle them?

Naive Solution And Why It Does Not Work

The simplest way to indicate each species would be to use natural numbers. For instance, Iris Setosa can be encoded as 1; Versicolour, as 2; and Virginica, as 3. There is, however, a problem with this kind of encoding: we impose a bias. First, by encoding those classes as numbers, we impose a linear order over the three classes: we start our count with Setosa, then Versicolour, and then we arrive at Virginica. In reality, however, it does not matter whether we end with Virginica or Versicolour. Second, we also assume that the distance between Virginica and Setosa, 3 - 1 = 2, is larger than that between Virginica and Versicolour, 3 - 2 = 1, which is a priori wrong.
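For this particular quadratic, the effect of $\gamma$ can even be read off in closed form: each step multiplies the distance to the minimum by $(1 - 2\gamma)$, so the iteration converges exactly when $|1 - 2\gamma| < 1$. A small sketch to experiment with (the helper names are mine):

```haskell
-- One gradient step for f(x) = (x - 3)^2, whose gradient is 2 * (x - 3).
step :: Double -> Double -> Double
step gamma x = x - gamma * 2 * (x - 3)

-- Position after n steps starting from x = 0.
afterN :: Double -> Int -> Double
afterN gamma n = iterate (step gamma) 0.0 !! n

main :: IO ()
main = mapM_ (\g -> print (g, afterN g 50)) [0.01, 0.1, 0.4, 1.1]
-- gamma = 0.4 lands essentially on 3.0; gamma = 1.1 blows up
```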
One-Hot Encoding

So which kind of encoding do we need? First, we do not want to impose any restriction on ordering, and second, we want the distances between classes to be equal. Therefore, we would prefer the encodings of the classes to be orthogonal, i.e. independent from one another. That becomes possible if we use three-dimensional vectors. Now the Setosa class is encoded as $[1, 0, 0]$; Versicolour, as $[0, 1, 0]$; and Virginica, as $[0, 0, 1]$. The Euclidean distance between any two classes is the same, $\sqrt{2}$.

Putting It All Together

Now that we are familiar with basic neural networks and gradient descent, and also have some data to play with 7, let the fun begin! First, we create a network of three neurons. To do that, we generalize formula (1): $$\begin{equation} y_{i}=f \Big(\sum_k w_{ik} x_k\Big), \end{equation} $$ where $(x_1, x_2, x_3, x_4) = \mathbf{x}$ is a 4D input vector, $w_{ik} \in \mathbf{W}, i=1 \dots 3, k=1 \dots 4$ is the synaptic weight matrix, and the result $(y_1, y_2, y_3) = \mathbf{y}$ is a 3D vector. Generally speaking, we perform a matrix-vector multiplication with a subsequent element-wise activation: $$\begin{equation} \mathbf y = f (\mathbf {W x}). \end{equation} $$ As the nonlinear activation function $f$ we will use the sigmoid function 8 $\sigma(x) = [1 + e^{-x}]^{-1}$. We will use the hmatrix Haskell library for linear algebra operations such as matrix multiplication. With hmatrix, equation (4) can be written as (the code uses the row-vector convention, i.e. $\mathbf y = f(\mathbf x \mathbf W)$):

import Numeric.LinearAlgebra as LA

sigmoid = cmap f
  where f x = recip $ 1.0 + exp (-x)

forward x w = let h = x LA.<> w
                  y = sigmoid h
              in [h, y]

where <> denotes the matrix product function from the LA module. Note that x can be a vector, but it can also be a dataset matrix; in the latter case, forward will transform our entire dataset. Notice that we provide not only the result of our computation y, but also the intermediate step h, since it will be reused later for the w gradient computation.
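The one-hot encoding itself takes one line; a minimal sketch (the helper name oneHot is mine):

```haskell
-- One-hot encoding: class k (0-based) out of n becomes the k-th
-- standard basis vector, e.g. oneHot 3 0 encodes Setosa as [1,0,0].
oneHot :: Int -> Int -> [Double]
oneHot n k = [ if i == k then 1 else 0 | i <- [0 .. n - 1] ]

main :: IO ()
main = mapM_ (print . oneHot 3) [0, 1, 2]
```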
Each neuron $y_i$ is supposed to fire when it 'thinks' that it has detected one of the three species. E.g. for an output [0.89, 0.1, 0.2], we would assume that the first neuron is the most 'confident', i.e. we interpret the result as Setosa. In other words, this output is treated as similar to [1, 0, 0]: the maximal element is set to one and the others to zero. This is the so-called 'winner takes all' rule.

Before training the neural network, we need some measure of error, or loss function, to minimize. For instance, we can use the squared Euclidean distance $\text{loss} = \sum_i (\hat y_i - y_i)^2$, where $\hat y_i$ is a prediction and $y_i$ is the real answer from our dataset:

loss y tgt = let diff = y - tgt
             in sumElements $ cmap (^2) diff

For the sake of gradient descent illustration, we will reuse the descend function defined above. Now we have to specify the gradient function for our neural network, equation (4). We use what is called the backpropagation (or backprop, for short) method, which is essentially a result of the chain rule and is illustrated for an individual neuron in the figure below 9.
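For this single-layer network, the chain rule spells out as follows (matching the loss and sigmoid defined above, with $\hat{\mathbf y}$ the prediction, $\mathbf y$ the target, and $m$ the number of dataset rows):

$$ d\mathbf{E} = 2\,(\hat{\mathbf{y}} - \mathbf{y}), \qquad d\mathbf{Y} = d\mathbf{E} \odot \sigma(\mathbf{h}) \odot \big(1 - \sigma(\mathbf{h})\big), \qquad d\mathbf{W} = \frac{1}{m}\,\mathbf{x}^{\top} d\mathbf{Y}, $$

which are exactly loss', sigmoid', and linear' chained together.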
Now we can calculate the weight gradient dW using the backprop method from above:

grad (x, y) w = dW
  where
    [h, y_pred] = forward x w
    dE = loss' y_pred y
    dY = sigmoid' h dE
    dW = linear' x dY

Here linear', sigmoid', and loss' are the gradients of the linear operation (multiplication), the sigmoid activation $\sigma(x)$, and the loss function. Note that by operating on matrices rather than scalar values we calculate the gradient matrix dW, containing every synaptic weight gradient $dw_i$. Below are those "vectorized" function definitions in Haskell using the hmatrix library 10:

linear' x dy = cmap (/ m) (tr' x LA.<> dy)
  where m = fromIntegral $ rows x

sigmoid' x dY = dY * y * (ones - y)
  where y = sigmoid x
        ones = (rows y) >< (cols y) $ repeat 1.0

loss' y tgt = let diff = y - tgt
              in cmap (* 2) diff

$ stack --resolver lts-10.6 --install-ghc runghc --package hmatrix-0.18.2.0 Iris.hs
Initial loss 169.33744797846379
Loss after training 61.41242708538934
Some predictions by an untrained network:
(5><3)
[ 8.797633210095851e-2, 0.15127581829026382, 0.9482351750129188
, 0.11279346747947296, 0.1733431584272155, 0.9502442520696124
, 0.10592462402394615, 0.17057190568339017, 0.9367875655363787
, 0.10167941966201806, 0.20651101803783944, 0.9300343579182122
, 8.328154248684484e-2, 0.15568011758813116, 0.940816298954776 ]
Some predictions by a trained network:
(5><3)
[ 0.6989749292681016, 0.14916793398555747, 0.1442697900857393
, 0.678406436711954, 0.1691062984304366, 0.2052955124240905
, 0.6842327447503195, 0.16782087736820395, 0.16721778476233148
, 0.6262988163006756, 0.19656943129188192, 0.17521133197774072
, 0.6905553549763312, 0.15299944611286123, 0.12910826989854146 ]
Targets
(5><3)
[ 1.0, 0.0, 0.0
, 1.0, 0.0, 0.0
, 1.0, 0.0, 0.0
, 1.0, 0.0, 0.0
, 1.0, 0.0, 0.0 ]

That's all for today. Please feel free to play with the code. Hint: you may have noticed that grad calls sigmoid twice on the same data: once in forward and once in sigmoid'. Try optimizing the code to avoid this redundancy.
As soon as you understand the basics of neural networks, make sure you continue to Day 2. In the next post you will learn how to make your neural network operational. First of all, we will highlight the importance of multilayer structure. We will also show that nonlinear activations are crucial. Finally, we will improve neural network training and discuss weight initialization.

1. There exists even a term describing exactly what I dreamed to achieve.
2. For instance, check Why people think computers can't.
3. With Google Trends in hand, we can witness the rise of global interest in deep learning.
4. Despite people having fantasized about it long before Alan Turing.
5. Essentially, a synaptic weight $w_i$ determines the strength of the connection to the $i$-th input.
6. With recurrent neural networks this is not true. Those have an internal state, making such networks equivalent to computer programs, i.e. potentially more complex than maps.
7. Here we refer to the classical Iris dataset. If you prefer another light dataset, please let me know.
8. In this example a nonlinear activation is not essential. However, as we will see in future posts, in multilayer neural networks nonlinear activations are strictly required.
9. We look closer at the backpropagation mechanics here.
10. And here are their mathematical derivations.
I am looking for a LaTeX package that will allow me to generate an exam with questions drawn from a particular question bank. Each question within the bank would be a self-contained block of LaTeX code. For example, in the spirit of the exam package, I might have the questions:

\question
$2+2=$
\begin{choices}
  \choice 3
  \choice 0
  \choice 4
  \choice $\sqrt{2}$
  \choice $-\pi$
\end{choices}

\question
$\int_0^1 x^2\,dx=$
\begin{choices}
  \choice $-1$
  \choice $1/3$
  \choice $\infty$
  \choice $1/2$
  \choice None of the above.
\end{choices}

This would be a bank containing two questions. Each question is a block of LaTeX code that, if it were to be "drawn" from the bank and inserted into a "parent", compilable LaTeX file, would thereby generate an exam (presumably what such a package would do). Being greedy, I'd really like to be able to specify the number of questions $q_1$ to be drawn from question bank $B_1$, $q_2$ from $B_2$, etc., where each bank $B_i$ would cover a specific topic. If this already exists, I have not been able to find it. Preserving the functionality of the exam document class (or something like it) would make assigning points and/or generating answer keys simultaneous with (random) exam creation.
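One way to sketch the idea without a dedicated package: store each bank question in its own macro and draw from a pgfmath random list. The macro names and the pgf-based draw below are my own choices, not an existing package interface; drawing $q_i$ questions without repetition would take more machinery:

```latex
\documentclass{exam}
\usepackage{pgf}
% Bank B1: each question is a self-contained macro.
\newcommand{\bankAqA}{\question $2+2=$
  \begin{choices}\choice 3\choice 0\choice 4\choice $\sqrt{2}$\choice $-\pi$\end{choices}}
\newcommand{\bankAqB}{\question $\int_0^1 x^2\,dx=$
  \begin{choices}\choice $-1$\choice $1/3$\choice $\infty$\choice $1/2$
  \choice None of the above.\end{choices}}
\pgfmathdeclarerandomlist{bankA}{{\bankAqA}{\bankAqB}}
\begin{document}
\begin{questions}
  \pgfmathrandomitem{\drawn}{bankA}\drawn  % one random draw from B1
\end{questions}
\end{document}
```

Since this stays inside the exam class, point assignment and answer keys keep working as usual.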
Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This makes it possible to select events with the same centrality ...

First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ...

First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ...

D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm NN}}$ ...

Dielectron production in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2018-09-12) The first measurement of e$^+$e$^-$ pair production at mid-rapidity ($|\eta_{\rm e}|$ < 0.8) in pp collisions at $\sqrt{s}$ = 7 TeV with ALICE at the LHC is presented. The dielectron production is studied as a function of the invariant mass ($m_{\rm ee}$ ...
Anisotropic flow of identified particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Springer, 2018-09-03) The elliptic ($v_2$), triangular ($v_3$), and quadrangular ($v_4$) flow coefficients of $\pi^\pm$, K$^\pm$, p+$\bar{\rm p}$, $\Lambda+\bar\Lambda$, K$^0_{\rm S}$, and the $\phi$-meson are measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV. Results obtained with the scalar product ...

Azimuthally-differential pion femtoscopy relative to the third harmonic event plane in Pb–Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-06-22) Azimuthally-differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ...

Inclusive J/ψ production at forward and backward rapidity in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 8.16 TeV (Springer Berlin Heidelberg, 2018-07-25) Inclusive J/ψ production is studied in p-Pb interactions at a centre-of-mass energy per nucleon-nucleon collision $\sqrt{s_{\rm NN}}$ = 8.16 TeV, using the ALICE detector at the CERN LHC. The J/ψ meson is reconstructed, via its ...

Inclusive J/ψ production in Xe–Xe collisions at $\sqrt{s_{\rm NN}}$ = 5.44 TeV (Elsevier, 2018-08-31) Inclusive J/ψ production is studied in Xe–Xe interactions at a centre-of-mass energy per nucleon pair of $\sqrt{s_{\rm NN}}$ = 5.44 TeV, using the ALICE detector at the CERN LHC. The J/ψ meson is reconstructed via its decay into a muon pair, in the ...

Neutral pion and η meson production at midrapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-10-04) Neutral pion and η meson production in the transverse momentum range 1 < $p_{\rm T}$ < 20 GeV/c have been measured at midrapidity by the ALICE experiment at the Large Hadron Collider (LHC) in central and semicentral Pb-Pb collisions ...
Binary Logical Connectives with Inverse Theorem Let $\circ$ be a binary logical connective. Then there exists another binary logical connective $*$ such that: $\forall p, q \in \left\{{F, T}\right\}: \left({p \circ q}\right) * q \dashv \vdash p \dashv \vdash q * \left({p \circ q}\right)$ iff $\circ$ is either: $(1): \quad$ the exclusive or operator or: $(2): \quad$ the biconditional operator. Proof Necessary Condition Let $\circ$ be a binary logical connective such that there exists $*$ such that: $\left({p \circ q}\right) * q \dashv \vdash p$ That is, by definition (and minor abuse of notation): $\forall p, q \in \left\{{F, T}\right\}: \left({p \circ q}\right) * q = p$ $\begin{array}{|r|cccc|} \hline p & T & T & F & F \\ q & T & F & T & F \\ \hline f_T \left({p, q}\right) & T & T & T & T \\ p \lor q & T & T & T & F \\ p \impliedby q & T & T & F & T \\ \operatorname{pr}_1 \left({p, q}\right) & T & T & F & F \\ p \implies q & T & F & T & T \\ \operatorname{pr}_2 \left({p, q}\right) & T & F & T & F \\ p \iff q & T & F & F & T \\ p \land q & T & F & F & F \\ p \uparrow q & F & T & T & T \\ \neg \left({p \iff q}\right) & F & T & T & F \\ \overline {\operatorname{pr}_2} \left({p, q}\right) & F & T & F & T \\ \neg \left({p \implies q}\right) & F & T & F & F \\ \overline {\operatorname{pr}_1} \left({p, q}\right) & F & F & T & T \\ \neg \left({p \impliedby q}\right) & F & F & T & F \\ p \downarrow q & F & F & F & T \\ f_F \left({p, q}\right) & F & F & F & F \\ \hline \end{array}$ Suppose that for some $q \in \left\{{F, T}\right\}$: $\left({p \circ q}\right)_{p = F} = \left({p \circ q}\right)_{p = T}$ Then: $\left({\left({p \circ q}\right) * q}\right)_{p = F} = \left({\left({p \circ q}\right) * q}\right)_{p = T}$ and so either: $\left({\left({p \circ q}\right) * q}\right)_{p = F} \ne p$ or: $\left({\left({p \circ q}\right) * q}\right)_{p = T} \ne p$ Thus for $\circ$ to have an inverse operation it is necessary for $F \circ q \ne T \circ q$. 
This eliminates:

$f_T \left({p, q}\right)$: as $p \circ q = T$ for all values of $p$ and $q$
$p \lor q$: as $p \circ q = T$ for $q = T$
$p \impliedby q$: as $p \circ q = T$ for $q = F$
$p \implies q$: as $p \circ q = T$ for $q = T$
$\operatorname{pr}_2 \left({p, q}\right)$: as $p \circ q = T$ for $q = T$ and also $p \circ q = F$ for $q = F$
$p \land q$: as $p \circ q = F$ for $q = F$
$p \uparrow q$: as $p \circ q = T$ for $q = F$
$\overline {\operatorname{pr}_2} \left({p, q}\right)$: as $p \circ q = T$ for $q = F$ and also $p \circ q = F$ for $q = T$
$\neg \left({p \implies q}\right)$: as $p \circ q = F$ for $q = T$
$\neg \left({p \impliedby q}\right)$: as $p \circ q = F$ for $q = F$
$p \downarrow q$: as $p \circ q = F$ for $q = T$
$f_F \left({p, q}\right)$: as $p \circ q = F$ for all values of $p$ and $q$

This leaves the four candidates:

$\begin{array}{|r|cccc|} \hline p & T & T & F & F \\ q & T & F & T & F \\ \hline \operatorname{pr}_1 \left({p, q}\right) & T & T & F & F \\ p \iff q & T & F & F & T \\ \neg \left({p \iff q}\right) & F & T & T & F \\ \overline {\operatorname{pr}_1} \left({p, q}\right) & F & F & T & T \\ \hline \end{array}$

Suppose that for some $p \in \left\{{F, T}\right\}$: $\left({p \circ q}\right)_{q = F} = \left({p \circ q}\right)_{q = T}$ Then: $\left({q * \left({p \circ q}\right)}\right)_{q = F} = \left({q * \left({p \circ q}\right)}\right)_{q = T}$ and so either: $\left({q * \left({p \circ q}\right)}\right)_{q = F} \ne p$ or: $\left({q * \left({p \circ q}\right)}\right)_{q = T} \ne p$ Thus it is also necessary that $p \circ F \ne p \circ T$. This eliminates:
$\operatorname{pr}_1 \left({p, q}\right)$: as $p \circ q = T$ for $p = T$ and also $p \circ q = F$ for $p = F$
$\overline {\operatorname{pr}_1} \left({p, q}\right)$: as $p \circ q = T$ for $p = F$ and also $p \circ q = F$ for $p = T$

The only remaining candidates are $p \iff q$ and $\neg \left({p \iff q}\right)$, that is, the biconditional and the exclusive or. $\blacksquare$ Sufficient Condition Let $\circ$ be the exclusive or operator. Then by Exclusive Or is Self-Inverse it follows that: $\left({p \circ q}\right) \circ q \dashv \vdash p$ Similarly, let $\circ$ be the biconditional operator. Then by Biconditional is Self-Inverse it follows that: $\left({p \circ q}\right) \circ q \dashv \vdash p$ $\blacksquare$
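The exhaustive case analysis above can also be checked mechanically. The following brute-force sketch (not part of the original proof) encodes each of the 16 binary connectives as a truth table and keeps only those $\circ$ for which some $*$ satisfies both $(p \circ q) * q = p$ and $q * (p \circ q) = p$; exactly the exclusive or and the biconditional survive.

```python
from itertools import product

# Each binary connective is a map {F,T}^2 -> {F,T}; encode it as a
# 4-tuple of outputs for the inputs (F,F), (F,T), (T,F), (T,T).
connectives = list(product([False, True], repeat=4))

def apply(c, p, q):
    # Look up the output of connective c on inputs (p, q).
    return c[2 * p + q]

def has_inverse(c):
    # Is there a connective * with ((p o q) * q) == p and
    # (q * (p o q)) == p for all p, q?
    return any(
        all(apply(s, apply(c, p, q), q) == p and
            apply(s, q, apply(c, p, q)) == p
            for p in (False, True) for q in (False, True))
        for s in connectives
    )

invertible = [c for c in connectives if has_inverse(c)]
xor = (False, True, True, False)   # truth table of p XOR q
iff = (True, False, False, True)   # truth table of p <-> q
print(invertible)
```

Both surviving connectives are in fact their own inverses, in agreement with the sufficient condition above.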
The Nonparaxial Gaussian Beam Formula for Simulating Wave Optics In a previous blog post, we discussed the paraxial Gaussian beam formula. Today, we'll talk about a more accurate formulation for Gaussian beams, available as of version 5.3a of the COMSOL® software. This formulation, based on a plane wave expansion, can handle nonparaxial Gaussian beams more accurately than the conventional paraxial formulation. Paraxiality of Gaussian Beams The well-known Gaussian beam formula is only valid for paraxial Gaussian beams. Paraxial means that the beam mainly propagates along the optical axis. There are several papers that discuss paraxiality in a quantitative sense (see Ref. 1). Roughly speaking, if the beam waist radius is comparable to the wavelength, the beam converges toward the focus at large angles. The paraxiality assumption then breaks down and the formulation is no longer accurate. To alleviate this problem and to provide you with a more general and accurate formulation for general Gaussian beams, we introduced a nonparaxial Gaussian beam formulation. In the user interface this is referred to as Plane wave expansion. Angular Spectrum of Plane Waves Let's briefly review the paraxial Gaussian beam formula in 2D (for the sake of better visuals and understanding). We start from Maxwell's equations assuming time-harmonic fields, from which we get the following Helmholtz equation for the out-of-plane electric field with the wavelength \lambda for our choice of polarization: where k=2 \pi/\lambda. The angular spectrum of plane waves is based on the following simple fact: an arbitrary field that satisfies the above Helmholtz equation can be expressed as the following plane wave expansion: where A(k_x,k_y) is an arbitrary function. The integration path is a circle of radius k for real k_x and k_y. (For complex k_x and k_y, the integration domain extends to a complex plane.) The function A(k_x,k_y) is called the angular spectrum function.
One can prove that this E_z satisfies the Helmholtz equation by direct substitution. Now that we know that this formulation always gives exact solutions to the Helmholtz equation, let's try to understand it visually. From the constraint k_x^2+k_y^2=k^2, we can set k_x=k \cos \varphi and k_y=k \sin \varphi and rewrite the above equation as: The meaning of the above formula is that it constructs a wave as a sum, or integral, consisting of many waves propagating in various directions, all with the same wave number k. This is shown in the following figure. Visualization of the angular spectrum of plane waves. When actually solving a problem using this formula, all you have to do is find the angular spectrum function A(\varphi) that satisfies the boundary conditions. By assuming that the profile of the transverse field (perpendicular to the propagation direction, i.e., the optical axis) is also a Gaussian shape (see Ref. 4), one can derive that A(\varphi) = \exp(-\varphi^2 / \varphi_0^2), where \varphi_0 is the spectrum width. By some more mathematical manipulation, we get a relationship between the spectrum width \varphi_0 and the beam waist radius w_0. For a slow (weakly focused) Gaussian beam, the angular spectrum is narrow; a plane wave is the extreme case, where the angular spectrum function is a delta function. For a fast (tightly focused) Gaussian beam, the angular spectrum is wide. This was a quick summary of the underlying theory for nonparaxial Gaussian beams. To recap what we have shown so far, let's rewrite the formula once more by using polar coordinates, x=r \cos \theta, \ y = r \sin \theta: This is the formulation that Born and Wolf (Ref. 2) use in their book. The 3D formula is more complicated and looks different due to polarization, but the basic idea is the same as seen in the references mentioned above. It can also look different depending on whether or not you consider evanescent waves.
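To see concretely that any angular spectrum yields an exact Helmholtz solution, here is a small numerical sketch (not COMSOL code; the wavelength and the spectrum width \varphi_0 are assumed example values). It builds E_z(x, y) as a discretized plane wave sum over a Gaussian angular spectrum and then checks the 2D Helmholtz equation with a finite-difference Laplacian at an arbitrary test point.

```python
import numpy as np

lam = 1.0                  # wavelength (arbitrary units)
k = 2 * np.pi / lam
phi0 = 0.5                 # angular spectrum width (assumed example value)

# Discretize E_z(x, y) = sum_m A(phi_m) exp(i k (x cos phi_m + y sin phi_m)) dphi
phi = np.linspace(-np.pi / 2, np.pi / 2, 2001)
dphi = phi[1] - phi[0]
A = np.exp(-phi**2 / phi0**2)

def E(x, y):
    return np.sum(A * np.exp(1j * k * (x * np.cos(phi) + y * np.sin(phi)))) * dphi

# Check d2E/dx2 + d2E/dy2 + k^2 E = 0 via a five-point Laplacian stencil.
h = lam / 200
x0, y0 = 0.3, 0.2          # arbitrary test point
lap = (E(x0 + h, y0) + E(x0 - h, y0) + E(x0, y0 + h) + E(x0, y0 - h)
       - 4 * E(x0, y0)) / h**2
residual = abs(lap + k**2 * E(x0, y0)) / (k**2 * abs(E(x0, y0)))
print(f"relative Helmholtz residual: {residual:.2e}")
```

The residual is limited only by the finite-difference and quadrature errors, illustrating that the plane wave expansion satisfies the Helmholtz equation independently of the choice of A(\varphi).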
The Plane Wave Expansion method used in the Wave Optics Module and the RF Module, although based on the angular spectrum theory, is adapted for numerical computations. Plane Wave Expansion: Settings and Results Let’s compare the new feature, Plane wave expansion, with the previously available feature, Paraxial approximation. The Settings window covering both methods is shown below. The Plane Wave Expansion feature settings. With the new feature, you have two options if the Automatic setting doesn’t give you a satisfactory approximation: Wave vector count Maximum transverse wave number The first option determines the number of discretization levels, depending on how fine you want to represent the Gaussian beam. The more plane waves, the finer it gets. The second option is related to the integral bound in the previous equation; i.e., -\pi/2 \le \varphi \le \pi/2. This integral bound can be the maximum \pi/2 for the smallest possible spot size and can be more shallow for slower beams, depending on how fast the Gaussian beam is. You need more angled plane waves with a larger transverse wave number to represent faster (more focused) beams. The following results compare the two formulas for the case where the spot radius is \lambda/2, which is considerably nonparaxial. As in the previous blog post, the simulation is done with the Scattered Field formulation and the domain is surrounded by a perfectly matched layer (PML). This way, the scattered field represents the error from the exact Helmholtz solution. The left images below show the new feature, while the images on the right show the paraxial approximation. The top images show the norm of the computed Gaussian beam background field, ewfd.Ebz, while the bottom images show the scattered field norm, ewfd.relEz, which represents the error from the exact Helmholtz solution. Obviously, the error from the Helmholtz solution is greatly reduced in the nonparaxial method. 
Concluding Remarks We have discussed the theory and results for an approximation method for nonparaxial Gaussian beams using the new plane wave expansion option. Remember that this formulation is extremely accurate, but it is still an approximation under certain assumptions. First, we have made an assumption about the field shape in the focal plane. Second, we assume that the evanescent field is zero. If you are interested in the field coupling to some nanostructure near the focal region in a fast Gaussian beam, you may need to calculate the evanescent field. Next Step Learn more about the formulations and features available for modeling optically large problems in the COMSOL® software by clicking the button below: Note: This functionality can also be found in the RF Module. References P. Vaveliuk, "Limits of the paraxial approximation in laser beams", Optics Letters, vol. 32, no. 8, 2007. M. Born and E. Wolf, Principles of Optics, ed. 7, Cambridge University Press, 1999. J. W. Goodman, Introduction to Fourier Optics. G. P. Agrawal and M. Lax, "Free-space wave propagation beyond the paraxial approximation", Phys. Rev. A, vol. 27, pp. 1693–1695, 1983.
The Trivial Difference Sets Recall from the Difference Sets page that if $v$, $k$, and $\lambda$ are positive integers such that $2 \leq k < v$ and $(G, +)$ is a finite group of order $v$ then a $(v, k, \lambda)$-difference set in $G$ is a nonempty proper subset $D \subset G$ such that the multiset of differences $\{ x - y : x, y \in D \: \mathrm{and} \: x \neq y \}$ contains each element of $G \setminus \{ 0 \}$ exactly $\lambda$ times. It is often noted that if $(G, +)$ is a finite group of order $v \geq 2$ then for any $g \in G$, $D = \{ g \}$ is a difference set. However, this does not agree with the definition we made above, for if $D = \{ g \}$ then $\mid D \mid = k = 1 < 2$. Nevertheless, for completeness we will state that such a set is indeed a difference set. A more interesting (but still trivial) type of difference set is described in the proposition below. Proposition 1: If $(G, +)$ is a finite group of order $v \geq 2$ and $g \in G$ then $D = G \setminus \{ g \}$ is a difference set. Proof: Let $g \in G$ and $D = G \setminus \{ g \}$. Since $\mid G \mid = v$, $\mid D \mid = v - 1$. We denote this set by $D = \{ g_1, g_2, ..., g_{v-1} \}$. We consider the multiset of differences $\{ g_j - g_i : i \neq j \}$. This multiset of differences can be obtained from the difference table:

$\begin{array}{c|cccc} & g_1 & g_2 & \cdots & g_{v-1} \\ \hline g_1 & & g_2 - g_1 & \cdots & g_{v-1} - g_1 \\ g_2 & g_1 - g_2 & & \cdots & g_{v-1} - g_2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ g_{v-1} & g_1 - g_{v-1} & g_2 - g_{v-1} & \cdots & \end{array}$

Consider the first row of differences above, i.e., the set of differences $\{ g_2 - g_1, g_3 - g_1, ..., g_{v-1} - g_1 \}$. This set contains $v - 2$ differences. Each of these differences is also distinct, since if $g_j - g_1 = g_k - g_1$ then this would imply $g_j = g_k$, which is a contradiction since the elements of $D$ are distinct. For the second row of differences, i.e., the set of differences $\{ g_1 - g_2, g_3 - g_2, ..., g_{v-1} - g_2 \}$, the same logic applies.
This set contains $v - 2$ differences, each of which is distinct. Of course, the same applies to all $v - 1$ rows: each row is a $(v - 2)$-subset of $G \setminus \{ 0 \}$. Note that $G \setminus \{ 0 \}$ contains $v - 1$ elements. In fact, row $i$ consists of the differences $\{ h - g_i : h \in D, h \neq g_i \}$, while $G \setminus \{ 0 \} = \{ h - g_i : h \in G, h \neq g_i \}$, so row $i$ omits exactly one nonzero element, namely $g - g_i$. As $i$ ranges over the $v - 1$ rows, the omitted elements $g - g_i$ are distinct and range over all of $G \setminus \{ 0 \}$. Hence each element of $G \setminus \{ 0 \}$ is omitted from exactly one row and appears exactly once in each of the remaining $v - 2$ rows. Therefore $D$ is a difference set with $\lambda = v - 2$, that is, $D$ is a $(v, v-1, v-2)$-difference set. $\blacksquare$ Once again, it is important to note that if $v = 2$ then $D = G \setminus \{ g \}$ contains only one element, which does not fit our original definition of a difference set since we require $2 \leq k$. Regardless, we will still call such cases difference sets for completeness. The two examples mentioned above are what are known as trivial difference sets in a group. We formally define them below. Definition: If $(G, +)$ is a finite group of order $v \geq 2$ then for any $g \in G$, $\{ g \}$ and $G \setminus \{ g \}$ are called Trivial Difference Sets.
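As a quick sanity check (not part of the original page), Proposition 1 can be verified by brute force in a small cyclic group, here $\mathbb{Z}_7$ with $g = 0$, so that $D = \{1, ..., 6\}$ should be a $(7, 6, 5)$-difference set:

```python
from collections import Counter

def difference_multiset(D, v):
    # Multiset of differences x - y (mod v) over distinct pairs of D,
    # for D a subset of the cyclic group Z_v.
    return Counter((x - y) % v for x in D for y in D if x != y)

v, g = 7, 0
D = [h for h in range(v) if h != g]   # D = G \ {g}
counts = difference_multiset(D, v)
print(counts)
```

Every nonzero residue appears exactly $v - 2 = 5$ times, as the proposition predicts.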
I am interested in the following: Let $G$ be a finite group of order $n$. Is there an explicit function $f$ such that $|s(G)| \leq f(n)$ for all $G$ and for all natural numbers $n$, where $s(G)$ denotes the set of subgroups of $G$? A non-identity subgroup of a group of order $n$ can be generated by $\log_{2}(n)$ or fewer elements. There is quite a lot of duplication, but if you count the number of subsets of $G$ of cardinality at most $\log_{2}(n),$ you will have an upper bound (which could be made more precise with more care), and the case of elementary Abelian $2$-groups shows that such a bound is of the right general shape if it is to cover all groups of all orders. To be more precise, this gives an upper bound of approximately $\log_{2}(n)n^{\log_{2}(n)}$ for the total number of subgroups of a group of order $n,$ which is generically rather generous. On the other hand, the number of subgroups of an elementary Abelian group of order $n= 2^{r}$ is close to $\sum_{i=0}^{r}2^{i(r-i)},$ so generally larger than $n^{\log_{2}(n)/4}.$ A theorem of Borovik, Pyber and Shalev (Corollary 1.6) shows that the number of subgroups of a group $G$ of order $n=\lvert G\rvert$ is bounded by $n^{(\frac{1}{4}+o(1)) \log_2(n)}$. This is essentially best possible, cf. Geoff Robinson's answer above. There is a variant of this question which has received a lot of attention and which may be of interest here: namely how many maximal subgroups a finite group may have. In this context the relevant conjecture is due to Wall: Conjecture: The number of maximal subgroups of a finite group $G$ is less than the order of $G$. This has been the subject of much study with the landmark work (until recently) being the result of Liebeck, Pyber and Shalev which states that the number of maximal subgroups is at most $c|G|^{3/2}$ where $c$ is an absolute constant.
They also show that the conjecture is true if the group G is simple, up to a finite number of exceptions. In very recent work it has now been shown that Wall's conjecture is not true in general. An account of the demise of the conjecture can be found here. (This is not a paper, rather it's a very engaging description of the research which resulted in counter-examples being found.) In light of this development, the bound $c|G|^{3/2}$ mentioned above assumes greater importance. Although, as the linked document mentions, it is likely that the index $\frac32$ can be reduced a great deal. Whether one can deduce bounds on the number of non-maximal subgroups from the results of Liebeck, Pyber and Shalev, I don't know... One can refine Stefan Kohl's suggestion by taking subsets containing the identity element of $G$ and of cardinality dividing $n$. So the upper bound is $\sum_{d \mid n,\, d > 1} \binom{n-1}{d-1}$ In a similar vein to Geoff Robinson's answer, observe that any proper subgroup of a group of order $n$ can be generated by at most $\Omega(n) - 1$ elements, where $\Omega$ counts the number of prime factors (with multiplicity) of $n$. This is tight, with equality in the case of a product of prime cyclic groups. This gives an upper bound of: $$ \binom{n}{0} + \binom{n}{1} + \cdots + \binom{n}{\Omega(n) - 1} + 1$$ subgroups, where the final $+ 1$ includes $G$ itself. Now we can invoke Michael Lugo's strict upper bound from https://mathoverflow.net/a/17236/39521 to obtain a closed-form upper bound: $$ |s(G)| \leq \left\lceil \binom{n}{\Omega(n) - 1} \dfrac{n - \Omega(n) + 2}{n - 2 \Omega(n) + 2} \right\rceil \leq \dfrac{2 \times n^{\Omega(n) - 1}}{(\Omega(n) - 1)!}$$ Note that this is tight infinitely often: in the case where $n = p$ is prime (so $G = C_p$), we get: $$ \dfrac{2 \times p^0}{0!} = 2 $$ where the two subgroups are the trivial group and $G$ itself.
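The elementary abelian case in Geoff Robinson's answer can be made concrete (a sketch added here, not from the thread): the subgroups of $(\mathbb{Z}/2)^r$ are exactly the $\mathbb{F}_2$-subspaces, counted by Gaussian binomial coefficients, and their total can be compared with the $n^{\log_2(n)/4}$ shape of the Borovik-Pyber-Shalev bound (the $o(1)$ term is omitted here):

```python
from math import prod, log2

def gaussian_binom_2(r, i):
    # Number of i-dimensional subspaces of F_2^r: the Gaussian
    # binomial coefficient [r choose i]_2 (an exact integer).
    return prod(2**r - 2**j for j in range(i)) // prod(2**i - 2**j for j in range(i))

def subgroups_elem_abelian_2(r):
    # Subgroups of (Z/2)^r are exactly its F_2-subspaces.
    return sum(gaussian_binom_2(r, i) for i in range(r + 1))

for r in range(1, 7):
    n = 2**r
    count = subgroups_elem_abelian_2(r)
    shape = n ** (log2(n) / 4)   # shape of the bound, o(1) term dropped
    print(r, count, round(shape, 1))
```

For instance, $(\mathbb{Z}/2)^2$ has 5 subgroups and $(\mathbb{Z}/2)^3$ has 16, growing roughly like $2^{r^2/4}$ as the answers indicate.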
Hausdorff Topological Spaces Examples 1 Recall from the Hausdorff Topological Spaces page that a topological space $(X, \tau)$ is said to be a Hausdorff space if for every distinct $x, y \in X$ there exist open neighbourhoods $U, V \in \tau$ such that $x \in U$, $y \in V$ and $U \cap V = \emptyset$. We also looked at two notable examples of Hausdorff spaces - the first being the set of real numbers $\mathbb{R}$ with the usual topology of open intervals on $\mathbb{R}$, and the second being the discrete topology on any nonempty set $X$. We will now look at some problems regarding Hausdorff topological spaces. Example 1 Consider the set $X = \{ a, b, c, d \}$ with the topology $\tau = \{ \emptyset, \{ a \}, \{ b \}, \{ a, b \}, \{b, c \}, \{a, b, c \}, X \}$. Is $(X, \tau)$ a Hausdorff space? Notice that the only open neighbourhood of $d \in X$ is the whole space $X$. Therefore, for every open neighbourhood $U$ of $a$ and every open neighbourhood $V$ of $d$ we have $U \cap V = U \cap X = U \neq \emptyset$, so $a$ and $d$ cannot be separated by disjoint open sets. Therefore $(X, \tau)$ is not a Hausdorff space. Example 2 Prove that if $X$ is a set with at least two elements and $\tau$ is the indiscrete topology then $(X, \tau)$ is not a Hausdorff space. Let $\tau = \{ \emptyset, X \}$ and let $x, y \in X$ be distinct. Then the only open neighbourhood of $x$ is $X$ and the only open neighbourhood of $y$ is $X$. Since $X$ is nonempty, $X \cap X = X \neq \emptyset$, and so $(X, \tau)$ cannot be a Hausdorff space. Example 3 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau = \{ (-n, n) : n \in \mathbb{Z}, n \geq 1 \}$. Is $(\mathbb{R}, \tau)$ a Hausdorff space? Consider the numbers $0, \frac{1}{2} \in \mathbb{R}$. We see that for all $n \in \mathbb{Z}$, $n \geq 1$ we have $0, \frac{1}{2} \in (-n, n)$. I.e., every set in $\tau$ is an open neighbourhood of both $0$ and $\frac{1}{2}$.
Therefore, if $U$ is any open neighbourhood of $0$ and $V$ is an open neighbourhood of $\frac{1}{2}$ then $U \cap V \neq \emptyset$. Therefore, $(\mathbb{R}, \tau)$ is not a Hausdorff space.
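These finite examples are easy to check mechanically. Here is a small brute-force sketch (added for illustration, not from the original page) that tests the Hausdorff condition by searching for disjoint open neighbourhoods for every pair of distinct points; applied to the topology of Example 1 it confirms the space is not Hausdorff, while the discrete topology on the same set is:

```python
from itertools import combinations

def powerset(X):
    # All subsets of X, i.e., the discrete topology on X.
    X = list(X)
    return [frozenset(c) for r in range(len(X) + 1) for c in combinations(X, r)]

def is_hausdorff(X, tau):
    # For each pair of distinct points, search for disjoint
    # open neighbourhoods U of x and V of y.
    return all(
        any(x in U and y in V and not (U & V) for U in tau for V in tau)
        for x, y in combinations(X, 2)
    )

X = {'a', 'b', 'c', 'd'}
tau = [frozenset(s) for s in
       [(), ('a',), ('b',), ('a', 'b'), ('b', 'c'), ('a', 'b', 'c'),
        ('a', 'b', 'c', 'd')]]
print(is_hausdorff(X, tau))            # the topology from Example 1
print(is_hausdorff(X, powerset(X)))    # the discrete topology
```
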
Feynman diagrams provide a very compact and intuitive way of representing interactions between particles. These diagrams can be included in LaTeX documents thanks to a few packages. One of the older packages is feynmf, which uses MetaPost in order to generate the diagrams. More recently, a new package called TikZ-Feynman has been published which uses TikZ in order to generate Feynman diagrams. Contents TikZ-Feynman is a LaTeX package allowing Feynman diagrams to be easily generated within LaTeX with minimal user instructions and without the need of external programs. It builds upon the TikZ package and its graph drawing algorithms in order to automate the placement of many vertices. TikZ-Feynman still allows fine-tuned placement of vertices so that even complex diagrams can be generated with ease. Currently, TikZ-Feynman is too new to have made it into ShareLaTeX's installation, but we are working to get it included soon. In the meantime, it is possible to include the package files manually in a ShareLaTeX project as shown in this template. After installing the package, the TikZ-Feynman package can be loaded with \usepackage{tikz-feynman} in the preamble. It is recommended that you also specify the version of TikZ-Feynman to use with the compat package option: \usepackage[compat=1.0.0]{tikz-feynman}. This ensures that any new versions of TikZ-Feynman do not produce any undesirable changes without warning. Feynman diagrams can be declared with the \feynmandiagram command. It is analogous to the \tikz command from TikZ and requires a final semi-colon ( ;) to finish the environment. For example, a simple s-channel diagram is: \feynmandiagram [horizontal=a to b] { i1 -- [fermion] a -- [fermion] i2, a -- [photon] b, f1 -- [fermion] b -- [fermion] f2, }; Let's go through this example line by line: \feynmandiagram introduces the Feynman diagram and allows for optional arguments to be given in the brackets [<options>].
In this instance, horizontal=a to b orients the algorithm's output such that the line through vertices a and b is horizontal. Line 2 begins by defining vertices (i1, a and i2) and connecting them with edges --. Just like the \feynmandiagram command above, each edge also takes optional arguments specified in brackets [<options>]. In this instance, we want these edges to have arrows to indicate that they are fermion lines, so we add the fermion style to them. As you will see later on, optional arguments can also be given to the vertices in exactly the same way. Line 3 connects vertices a and b with an edge styled as a photon. Since there is already a vertex labelled a, the algorithm will connect it to a new vertex labelled b. Line 4 creates the final-state vertices f1 and f2. It re-uses the previously labelled b vertex. Finally, the closing semi-colon ( ;) is important. The name given to each vertex in the graph does not matter. So in this example, i1, i2 denote the initial particles; f1, f2 denote the final particles; and a, b are the end points of the propagator. The only important aspect is that what we called a in line 2 is also a in line 3 so that the underlying algorithm treats them as the same vertex. The order in which vertices are declared does not matter as the default algorithm re-arranges everything. For example, one might prefer to draw the fermion lines all at once, as with the following example (note also that the way we named vertices is completely different): \feynmandiagram [horizontal=f2 to f3] { f1 -- [fermion] f2 -- [fermion] f3 -- [fermion] f4, f2 -- [photon] p1, f3 -- [photon] p2, }; As a final remark, the calculation of where vertices should be placed is usually done through an algorithm written in Lua. As a result, LuaTeX is required in order to make use of these algorithms. If LuaTeX is not used, TikZ-Feynman will default to a more rudimentary algorithm and will warn the user instead. So far, the examples have only used the photon and fermion styles.
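Putting the pieces above together, a minimal complete document might look as follows (a sketch: it assumes tikz-feynman is installed and is compiled with LuaLaTeX so that the Lua-based placement algorithms are available):

```latex
% Compile with LuaLaTeX so TikZ-Feynman's Lua layout algorithms are used.
\documentclass{standalone}
\usepackage[compat=1.0.0]{tikz-feynman}

\begin{document}
% The s-channel example from above.
\feynmandiagram [horizontal=a to b] {
  i1 -- [fermion] a -- [fermion] i2,
  a -- [photon] b,
  f1 -- [fermion] b -- [fermion] f2,
};
\end{document}
```

The standalone class is used here only so the output is cropped to the diagram; any document class works.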
The TikZ-Feynman package comes with quite a few extra styles for edges and vertices which are all documented in the package documentation. For example, it is possible to add momentum arrows with momentum=<text>, and in the case of end vertices, the particle can be labelled with particle=<text>. To demonstrate how they are used, we take the generic s-channel diagram from earlier and make it an electron-positron pair annihilating into a muon pair: \feynmandiagram [horizontal=a to b] { i1 [particle=\(e^{-}\)] -- [fermion] a -- [fermion] i2 [particle=\(e^{+}\)], a -- [photon, edge label=\(\gamma\), momentum'=\(k\)] b, f1 [particle=\(\mu^{+}\)] -- [fermion] b -- [fermion] f2 [particle=\(\mu^{-}\)], }; In addition to the style keys documented above, style keys from TikZ can be used as well: \feynmandiagram [horizontal=a to b] { i1 [particle=\(e^{-}\)] -- [fermion, very thick] a -- [fermion, opacity=0.2] i2 [particle=\(e^{+}\)], a -- [red, photon, edge label=\(\gamma\), momentum'={[arrow style=red]\(k\)}] b, f1 [particle=\(\mu^{+}\)] -- [fermion, opacity=0.2] b -- [fermion, very thick] f2 [particle=\(\mu^{-}\)], }; For a list of all the various styles that TikZ provides, have a look at the TikZ manual; it is extremely thorough and provides many usage examples. By default, the \feynmandiagram and \diagram commands use the spring layout algorithm to place all the edges. The spring layout algorithm attempts to `spread out' the diagram as much as possible which—for most simpler diagrams—gives a satisfactory result; however in some cases, this does not produce the best diagram and this section will look at alternatives. There are three main alternatives: adding invisible edges (with draw=none) to influence the automatic placement; using a different layout algorithm, such as layered layout; and placing the vertices manually.
The underlying algorithm treats all edges in exactly the same way when calculating where to place all the vertices, and the actual drawing of the diagram (after the placements have been calculated) is done separately. Consequently, it is possible to add edges to the algorithm, but prevent them from being drawn by adding draw=none to the edge style. The algorithm treats these extra edges just like any other, but they are simply not drawn at the end. This is particularly useful if you want to ensure that the initial or final states remain closer together than they would have otherwise, as illustrated in the following example (note that opacity=0.2 is used instead of draw=none to illustrate where exactly the edge is located). % No invisible edge to keep the two photons together \feynmandiagram [small, horizontal=a to t1] { a [particle=\(\pi^{0}\)] -- [scalar] t1 -- t2 -- t3 -- t1, t2 -- [photon] p1 [particle=\(\gamma\)], t3 -- [photon] p2 [particle=\(\gamma\)], }; % Invisible edge ensures photons are parallel \feynmandiagram [small, horizontal=a to t1] { a [particle=\(\pi^{0}\)] -- [scalar] t1 -- t2 -- t3 -- t1, t2 -- [photon] p1 [particle=\(\gamma\)], t3 -- [photon] p2 [particle=\(\gamma\)], p1 -- [opacity=0.2] p2, }; The graph drawing library from TikZ has several different algorithms to position the vertices. By default, \diagram and \feynmandiagram use the spring layout algorithm to place the vertices. The spring layout attempts to spread everything out as much as possible which, in most cases, gives a nice diagram; however, there are certain cases where this does not work. A good example where the spring layout doesn't work are decays, where we have the decaying particle on the left and all the daughter particles on the right.
% Using the default spring layout \feynmandiagram [horizontal=a to b] { a [particle=\(\mu^{-}\)] -- [fermion] b -- [fermion] f1 [particle=\(\nu_{\mu}\)], b -- [boson, edge label=\(W^{-}\)] c, f2 [particle=\(\overline \nu_{e}\)] -- [fermion] c -- [fermion] f3 [particle=\(e^{-}\)], }; % Using the layered layout \feynmandiagram [layered layout, horizontal=a to b] { a [particle=\(\mu^{-}\)] -- [fermion] b -- [fermion] f1 [particle=\(\nu_{\mu}\)], b -- [boson, edge label'=\(W^{-}\)] c, c -- [anti fermion] f2 [particle=\(\overline \nu_{e}\)], c -- [fermion] f3 [particle=\(e^{-}\)], }; You may notice that in addition to adding the layered layout style to \feynmandiagram, we also changed the order in which we specify the vertices. This is because the layered layout algorithm does pay attention to the order in which vertices are declared (unlike the default spring layout); as a result, c -- f2, c -- f3 has a different meaning to f2 -- c -- f3. In the former case, f2 and f3 are both on the layer below c, as desired; whilst the latter case places f2 on the layer above c (that is, the same layer as where the W-boson originates). In more complicated diagrams, it is quite likely that none of the algorithms work, no matter how many invisible edges are added. In such cases, the vertices have to be placed manually. TikZ-Feynman allows for vertices to be manually placed by using the \vertex command. The \vertex command is available only within the feynman environment (which itself is only available inside a tikzpicture). The feynman environment loads all the relevant styles from TikZ-Feynman and declares additional TikZ-Feynman-specific commands such as \vertex and \diagram. This is inspired by PGFPlots and its use of the axis environment. The \vertex command is very much analogous to the \node command from TikZ, with the notable exception that the vertex contents are optional; that is, you need not have {<text>} at the end.
In the case where {} is specified, the vertex is automatically given the particle style, and otherwise it is a usual (zero-sized) vertex. To specify where the vertices go, it is possible to give explicit coordinates, though it is probably easiest to use the positioning library from TikZ, which allows vertices to be placed relative to existing vertices. By using relative placements, it is possible to easily tweak one part of the graph and everything will adjust accordingly—the alternative being to manually adjust the coordinates of every affected vertex. Finally, once all the vertices have been specified, the \diagram* command is used to specify all the edges. This works in much the same way as \diagram (and also \feynmandiagram), except that it uses a very basic algorithm to place new nodes and allows existing (named) nodes to be included. In order to refer to an existing node, the node must be given in parentheses. This whole process of specifying the nodes and then drawing the edges between them is shown below for the muon decay: \begin{tikzpicture} \begin{feynman} \vertex (a) {\(\mu^{-}\)}; \vertex [right=of a] (b); \vertex [above right=of b] (f1) {\(\nu_{\mu}\)}; \vertex [below right=of b] (c); \vertex [above right=of c] (f2) {\(\overline \nu_{e}\)}; \vertex [below right=of c] (f3) {\(e^{-}\)}; \diagram* { (a) -- [fermion] (b) -- [fermion] (f1), (b) -- [boson, edge label'=\(W^{-}\)] (c), (c) -- [anti fermion] (f2), (c) -- [fermion] (f3), }; \end{feynman} \end{tikzpicture} The feynmf package lets you easily draw Feynman diagrams in your LaTeX documents. All you need to do is specify the vertices, the particles and the labels, and it will automatically layout and draw your diagram for you.
Let's start with a quick example:

\begin{fmffile}{diagram}
\begin{fmfgraph}(40,25)
\fmfleft{i1,i2}
\fmfright{o1,o2}
\fmf{fermion}{i1,v1,o1}
\fmf{fermion}{i2,v2,o2}
\fmf{photon}{v1,v2}
\end{fmfgraph}
\end{fmffile}

The fmffile environment must be put around all of your Feynman diagrams. You can use one fmffile environment for multiple diagrams, so you can put one around your whole document and forget about it. The argument to the fmffile environment tells LaTeX where to write the files that it uses to store the diagram. You can name this whatever you want, but you need to run Metafont on your diagram between LaTeX runs in order for your diagram to show up (ShareLaTeX does this automatically). The fmfgraph environment starts a Feynman diagram, and the figures in brackets afterwards specify the width and height of the diagram. The first thing you need to do is specify your external vertices, and where they should be positioned. You can name your vertices anything you like, and say where they should be positioned with the commands \fmfleft, \fmfright, \fmftop, \fmfbottom. For example

% Creates two vertices on the left called i1 and i2
\fmfleft{i1,i2}
% Creates two vertices on the right called o1 and o2
\fmfright{o1,o2}

You can connect up vertices with the \fmf command, which will create new vertices if you pass in names that haven't been created yet. For example

% Will create a fermion line between i1 and
% the newly created v1, and between v1 and o1.
\fmf{fermion}{i1,v1,o1}
% Will create a photon line between v1 and the newly created v2
\fmf{photon}{v1,v2}

A vertex can be labelled using the \fmflabel command, which takes two arguments: the label to apply to the vertex, and the name of the vertex to apply it to. For example, in the above diagram, if we add in the following labels, we get the updated diagram below: Note that math mode can be used inside the vertex labels, as we have done above.
We've seen the 'photon' and 'fermion' line styles above, but the feynmf package supports many more. The original table pairs each appearance (an image, not reproduced here) with its name(s):

gluon, curly
dbl_curly
dashes
scalar, dashes_arrow
dbl_dashes
dbl_dashes_arrow
dots
ghost, dots_arrow
dbl_dots
dbl_dots_arrow
phantom
phantom_arrow
vanilla, plain
fermion, electron, quark, plain_arrow
double, dbl_plain
double_arrow, heavy, dbl_plain_arrow
boson, photon, wiggly
dbl_wiggly
zigzag
dbl_zigzag

For more information see:
Isomorphisms are very important in mathematics, and we can no longer put off talking about them. Intuitively, two objects are 'isomorphic' if they look the same. Category theory makes this precise and shifts the emphasis to the 'isomorphism' - the way in which we match up these two objects, to see that they look the same. For example, any two of these squares look the same after you rotate and/or reflect them: An isomorphism between two of these squares is a process of rotating and/or reflecting the first so it looks just like the second. As the name suggests, an isomorphism is a kind of morphism. Briefly, it's a morphism that you can 'undo'. It's a morphism that has an inverse: Definition. Given a morphism \(f : x \to y\) in a category \(\mathcal{C}\), an inverse of \(f\) is a morphism \(g: y \to x\) such that $$ g \circ f = 1_x \quad \textrm{ and } \quad f \circ g = 1_y $$ and I'm saying that \(g\) is 'an' inverse of \(f\) because in principle there could be more than one! But in fact, any morphism has at most one inverse, so we can talk about 'the' inverse of \(f\) if it exists, and we call it \(f^{-1}\). Puzzle 140. Prove that any morphism has at most one inverse. Puzzle 141. Give an example of a morphism in some category that has more than one left inverse. Puzzle 142. Give an example of a morphism in some category that has more than one right inverse. Now we're ready for isomorphisms! Definition. A morphism \(f : x \to y\) is an isomorphism if it has an inverse. Definition. Two objects \(x,y\) in a category \(\mathcal{C}\) are isomorphic if there exists an isomorphism \(f : x \to y\). Let's see some examples! The most important example for us now is a 'natural isomorphism', since we need those for our databases. But let's start off with something easier. Take your favorite categories and see what the isomorphisms in them are like! What's an isomorphism in the category \(\mathbf{3}\)?
Remember, this is a free category on a graph: The morphisms in \(\mathbf{3}\) are paths in this graph. We've got one path of length 2: $$ f_2 \circ f_1 : v_1 \to v_3 $$ two paths of length 1: $$ f_1 : v_1 \to v_2, \quad f_2 : v_2 \to v_3 $$ and - don't forget - three paths of length 0. These are the identity morphisms: $$ 1_{v_1} : v_1 \to v_1, \quad 1_{v_2} : v_2 \to v_2, \quad 1_{v_3} : v_3 \to v_3.$$ If you think about how composition works in this category you'll see that the only isomorphisms are the identity morphisms. Why? Because there's no way to compose two morphisms and get an identity morphism unless they're both that identity morphism! In intuitive terms, we can only move from left to right in this category, not backwards, so we can only 'undo' a morphism if it doesn't do anything at all - i.e., it's an identity morphism. We can generalize this observation. The key is that \(\mathbf{3}\) is a poset. Remember, in our new way of thinking a preorder is a category where for any two objects \(x\) and \(y\) there is at most one morphism \(f : x \to y\), in which case we can write \(x \le y\). A poset is a preorder where if there's a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(x = y\). In other words, if \(x \le y\) and \(y \le x\) then \(x = y\). Puzzle 143. Show that if a category \(\mathcal{C}\) is a preorder, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then \(g\) is the inverse of \(f\), so \(x\) and \(y\) are isomorphic. Puzzle 144. Show that if a category \(\mathcal{C}\) is a poset, if there is a morphism \(f : x \to y\) and a morphism \(g: y \to x\) then both \(f\) and \(g\) are identity morphisms, so \(x = y\). Puzzle 144 says that in a poset, the only isomorphisms are identities. Isomorphisms are a lot more interesting in the category \(\mathbf{Set}\). Remember, this is the category where objects are sets and morphisms are functions. Puzzle 145.
Show that every isomorphism in \(\mathbf{Set}\) is a bijection, that is, a function that is one-to-one and onto. Puzzle 146. Show that every bijection is an isomorphism in \(\mathbf{Set}\). So, in \(\mathbf{Set}\) the isomorphisms are the bijections! So, there are lots of them. One more example: Definition. If \(\mathcal{C}\) and \(\mathcal{D}\) are categories, then an isomorphism in \(\mathcal{D}^\mathcal{C}\) is called a natural isomorphism. This name makes sense! The objects in the so-called 'functor category' \(\mathcal{D}^\mathcal{C}\) are functors from \(\mathcal{C}\) to \(\mathcal{D}\), and the morphisms between these are natural transformations. So, the isomorphisms deserve to be called 'natural isomorphisms'. But what are they like? Given functors \(F, G: \mathcal{C} \to \mathcal{D}\), a natural transformation \(\alpha : F \to G\) is a choice of morphism $$ \alpha_x : F(x) \to G(x) $$ for each object \(x\) in \(\mathcal{C}\), such that for each morphism \(f : x \to y\) this naturality square commutes: Suppose \(\alpha\) is an isomorphism. This says that it has an inverse \(\beta: G \to F\). This \(\beta\) will be a choice of morphism $$ \beta_x : G(x) \to F(x) $$ for each \(x\), making a bunch of naturality squares commute. But saying that \(\beta\) is the inverse of \(\alpha\) means that $$ \beta \circ \alpha = 1_F \quad \textrm{ and } \quad \alpha \circ \beta = 1_G .$$ If you remember how we compose natural transformations, you'll see this means $$ \beta_x \circ \alpha_x = 1_{F(x)} \quad \textrm{ and } \quad \alpha_x \circ \beta_x = 1_{G(x)} $$ for all \(x\). So, for each \(x\), \(\beta_x\) is the inverse of \(\alpha_x\). In short: if \(\alpha\) is a natural isomorphism then \(\alpha\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\). But the converse is true, too! It takes a little more work to prove, but not much. So, I'll leave it as a puzzle. Puzzle 147.
Show that if \(\alpha : F \Rightarrow G\) is a natural transformation such that \(\alpha_x\) is an isomorphism for each \(x\), then \(\alpha\) is a natural isomorphism. Doing this will help you understand natural isomorphisms. But you also need examples! Puzzle 148. Create a category \(\mathcal{C}\) as the free category on a graph. Give an example of two functors \(F, G : \mathcal{C} \to \mathbf{Set}\) and a natural isomorphism \(\alpha: F \Rightarrow G\). Think of \(\mathcal{C}\) as a database schema, and \(F,G\) as two databases built using this schema. In what way does the natural isomorphism between \(F\) and \(G\) make these databases 'the same'? They're not necessarily equal! We should talk about this.
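Puzzle 148 can also be prototyped concretely. Here is a small sketch (all names and data are invented for illustration) of two functors out of the free category on the one-arrow graph v1 --f--> v2, thought of as databases, together with a componentwise bijection checked for naturality:

```python
# C is the free category on  v1 --f--> v2 ; F, G : C -> Set are "databases".
# A natural isomorphism alpha needs a bijection at each object such that the
# naturality square commutes: alpha_v2 ∘ F(f) = G(f) ∘ alpha_v1.

# Functor F: the sets F(v1), F(v2) and the function F(f)
F_v1 = ["alice", "bob"]
F_v2 = ["ny", "la"]
F_f = {"alice": "ny", "bob": "la"}

# Functor G: the "same" data with renamed elements
G_v1 = ["a", "b"]
G_v2 = ["NY", "LA"]
G_f = {"a": "NY", "b": "LA"}

# Components of alpha: bijections F(x) -> G(x)
alpha_v1 = {"alice": "a", "bob": "b"}
alpha_v2 = {"ny": "NY", "la": "LA"}

# Naturality square for the single non-identity morphism f
for x in F_v1:
    assert alpha_v2[F_f[x]] == G_f[alpha_v1[x]]

# Each component is a bijection onto G(x), so alpha is a natural isomorphism
assert sorted(alpha_v1.values()) == sorted(G_v1)
assert sorted(alpha_v2.values()) == sorted(G_v2)
print("alpha is a natural isomorphism")
```

The two databases hold the same records up to a consistent renaming, which is exactly the sense in which a natural isomorphism makes them 'the same' without making them equal.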
Answer The proof is below. Work Step by Step In equation 17.6, it states: $ \beta = \frac{\Delta V / V}{\Delta T}$ Combining this with the ideal gas law at constant pressure, $V = nRT/p$, gives $\Delta V / V = \Delta T / T$, so $\beta = 1/T$: the volume expansion coefficient of an ideal gas is inversely proportional to its absolute temperature.
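The result can be checked numerically. The sketch below (with assumed values for $n$, $p$ and $R$, which the textbook step does not specify) differentiates $V = nRT/p$ at constant pressure and compares $\beta$ with $1/T$:

```python
# Numerical check: for an ideal gas at constant pressure, V = nRT/p,
# so beta = (1/V) dV/dT should equal 1/T.
R = 8.314          # J/(mol K)
n, p = 1.0, 101325.0   # 1 mol at atmospheric pressure (assumed values)

def volume(T):
    return n * R * T / p

T = 300.0
dT = 1e-3
# central-difference estimate of (dV/dT)/V
beta = (volume(T + dT) - volume(T - dT)) / (2 * dT) / volume(T)
print(beta, 1 / T)  # both ~0.00333 per kelvin
```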
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{s_{NN}} = 5.02$ TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ...
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
In this chapter we learned about left and right adjoints, and about joins and meets. At first they seemed like two rather different pairs of concepts. But then we learned some deep relationships between them. Briefly: Left adjoints preserve joins, and monotone functions that preserve enough joins are left adjoints. Right adjoints preserve meets, and monotone functions that preserve enough meets are right adjoints. Today we'll conclude our discussion of Chapter 1 with two more bombshells: Joins are left adjoints, and meets are right adjoints. Left adjoints are right adjoints seen upside-down, and joins are meets seen upside-down. This is a good example of how category theory works. You learn a bunch of concepts, but then you learn more and more facts relating them, which unify your understanding... until finally all these concepts collapse down like the core of a giant star, releasing a supernova of insight that transforms how you see the world! Let me start by reviewing what we've already seen. To keep things simple let me state these facts just for posets, not the more general preorders. Everything can be generalized to preorders. In Lecture 6 we saw that given a left adjoint \( f : A \to B\), we can compute its right adjoint using joins: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, given a right adjoint \( g : B \to A \) between posets, we can compute its left adjoint using meets: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ In Lecture 16 we saw that left adjoints preserve all joins, while right adjoints preserve all meets. Then came the big surprise: if \( A \) has all joins and a monotone function \( f : A \to B \) preserves all joins, then \( f \) is a left adjoint!
But if you examine the proof, you'll see we don't really need \( A \) to have all joins: it's enough that all the joins in this formula exist: $$ g(b) = \bigvee \{a \in A : \; f(a) \le b \} . $$ Similarly, if \(B\) has all meets and a monotone function \(g : B \to A \) preserves all meets, then \( g \) is a right adjoint! But again, we don't need \( B \) to have all meets: it's enough that all the meets in this formula exist: $$ f(a) = \bigwedge \{b \in B : \; a \le g(b) \} . $$ Now for the first of today's bombshells: joins are left adjoints and meets are right adjoints. I'll state this for binary joins and meets, but it generalizes. Suppose \(A\) is a poset with all binary joins. Then we get a function $$ \vee : A \times A \to A $$ sending any pair \( (a,a') \in A \times A \) to the join \(a \vee a'\). But we can make \(A \times A\) into a poset as follows: $$ (a,b) \le (a',b') \textrm{ if and only if } a \le a' \textrm{ and } b \le b' .$$ Then \( \vee : A \times A \to A\) becomes a monotone map, since you can check that $$ a \le a' \textrm{ and } b \le b' \textrm{ implies } a \vee b \le a' \vee b'. $$ And you can show that \( \vee : A \times A \to A \) is the left adjoint of another monotone function, the diagonal $$ \Delta : A \to A \times A $$ sending any \(a \in A\) to the pair \( (a,a) \). This diagonal function is also called duplication, since it duplicates any element of \(A\). Why is \( \vee \) the left adjoint of \( \Delta \)? If you unravel what this means using all the definitions, it amounts to this fact: $$ a \vee a' \le b \textrm{ if and only if } a \le b \textrm{ and } a' \le b . $$ Note that we're applying \( \vee \) to \( (a,a') \) in the expression at left here, and applying \( \Delta \) to \( b \) in the expression at the right. So, this fact says that \( \vee \) is the left adjoint of \( \Delta \). Puzzle 45. Prove that \( a \le a' \) and \( b \le b' \) imply \( a \vee b \le a' \vee b' \).
Also prove that \( a \vee a' \le b \) if and only if \( a \le b \) and \( a' \le b \). A similar argument shows that meets are really right adjoints! If \( A \) is a poset with all binary meets, we get a monotone function $$ \wedge : A \times A \to A $$ that's the right adjoint of \( \Delta \). This is just a clever way of saying $$ a \le b \textrm{ and } a \le b' \textrm{ if and only if } a \le b \wedge b' $$ which is also easy to check. Puzzle 46. State and prove similar facts for joins and meets of any number of elements in a poset - possibly an infinite number. All this is very beautiful, but you'll notice that all facts come in pairs: one for left adjoints and one for right adjoints. We can squeeze out this redundancy by noticing that every preorder has an "opposite", where "greater than" and "less than" trade places! It's like a mirror world where up is down, big is small, true is false, and so on. Definition. Given a preorder \( (A , \le) \) there is a preorder called its opposite, \( (A, \ge) \). Here we define \( \ge \) by $$ a \ge a' \textrm{ if and only if } a' \le a $$ for all \( a, a' \in A \). We call the opposite preorder \( A^{\textrm{op}} \) for short. I can't believe I've gone this far without ever mentioning \( \ge \). Now we finally have a really good reason. Puzzle 47. Show that the opposite of a preorder really is a preorder, and the opposite of a poset is a poset. Puzzle 48. Show that the opposite of the opposite of \( A \) is \( A \) again. Puzzle 49. Show that the join of any subset of \( A \), if it exists, is the meet of that subset in \( A^{\textrm{op}} \). Puzzle 50. Show that any monotone function \(f : A \to B \) gives a monotone function \( f : A^{\textrm{op}} \to B^{\textrm{op}} \): the same function, but preserving \( \ge \) rather than \( \le \). Puzzle 51.
Show that \(f : A \to B \) is the left adjoint of \(g : B \to A \) if and only if \(f : A^{\textrm{op}} \to B^{\textrm{op}} \) is the right adjoint of \( g: B^{\textrm{op}} \to A^{\textrm{op}}\). So, we've taken our whole course so far and "folded it in half", reducing every fact about meets to a fact about joins, and every fact about right adjoints to a fact about left adjoints... or vice versa! This idea, so important in category theory, is called duality. In its simplest form, it says that things come in opposite pairs, and there's a symmetry that switches these opposite pairs. Taken to its extreme, it says that everything is built out of the interplay between opposite pairs. Once you start looking you can find duality everywhere, from ancient Chinese philosophy: to modern computers: But duality has been studied very deeply in category theory: I'm just skimming the surface here. In particular, we haven't gotten into the connection between adjoints and duality! This is the end of my lectures on Chapter 1. There's more in this chapter that we didn't cover, so now it's time for us to go through all the exercises.
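For readers who like to test such claims mechanically, here is a brute-force check, my own sketch rather than part of the lecture, that \( \vee \) is left adjoint to \( \Delta \) and \( \wedge \) is right adjoint to it, on the powerset lattice of a three-element set ordered by inclusion:

```python
from itertools import combinations

X = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(X) + 1)
           for c in combinations(sorted(X), r)]

for a in subsets:
    for a2 in subsets:
        for b in subsets:
            # join is left adjoint to the diagonal:
            #   a v a' <= b  iff  (a, a') <= Delta(b), i.e. a <= b and a' <= b
            assert ((a | a2) <= b) == (a <= b and a2 <= b)
            # meet is right adjoint to the diagonal:
            #   Delta(b) <= (a, a'), i.e. b <= a and b <= a', iff b <= a ^ a'
            assert (b <= a and b <= a2) == (b <= (a & a2))

print("join -| diagonal -| meet verified on P({0,1,2})")
```

Here `|`, `&` and `<=` on frozensets are union, intersection and inclusion, so the two asserted equivalences are exactly the adjunction facts stated in the lecture.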
The rule of thumb I know for the error on a count for random sampling is that it goes as the square root of the count. For instance, if I observe a radioactive source for $1$ minute and measure $100$ decay events, a reasonable uncertainty on the count is $100\pm\sqrt{100} = 100 \pm 10$. I have a different sampling problem, though. To put it simply, imagine I have a bag filled with a large number of balls which are each either red or blue. I'm interested in measuring the fraction of red balls in the bag. I draw an unbiased sample of $N_{\rm balls}$ balls and count the red ones, finding $N_{\rm red}$. My estimate for the red fraction $f_{\rm red}=N_{\rm red}/N_{\rm balls}$, but what uncertainty should I associate with this? I'm tempted to say $f_{\rm red}=\frac{N_{\rm red}}{N_{\rm balls}}\pm\frac{\sqrt{N_{\rm red}}}{N_{\rm balls}}$, but I find this unsatisfying. This is because it makes no distinction in the case where I draw $N_{\rm red}=0$; if this occurs with $N_{\rm balls}=1$, then my measurement is not especially constraining and should have a large uncertainty, but if I found $N_{\rm red}=0$ with $N_{\rm balls}=10^5$ I would have a high confidence that $f_{\rm red}\sim0$. In the particular experiment I have in mind, repeated draws are not an option. Could someone point me to a correct handling of the statistics for such an experiment? I appreciate that the physics content here is fairly minimal, so if the community prefers that this move to CV.SE or M.SE that's fair enough; I put it here because the context is a physical experiment and I would prefer answers in terms familiar to a physicist.
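One standard treatment of this situation, sketched below rather than prescribed, models the count as binomial and uses the Wilson score interval for the proportion, which stays sensible when $N_{\rm red}=0$, unlike the naive $\sqrt{N}$ rule:

```python
# Wilson score interval for a binomial proportion k/n (a sketch; other
# choices, e.g. Clopper-Pearson or a Bayesian posterior, are also common).
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """Approximate 95% confidence interval for the true fraction."""
    p = k / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# N_red = 0: one draw is barely constraining; 1e5 draws pin f_red near 0.
lo1, hi1 = wilson_interval(0, 1)
lo2, hi2 = wilson_interval(0, 10**5)
print(hi1, hi2)  # upper limits shrink from ~0.8 to ~4e-5
```

This captures exactly the asymmetry described in the question: with $N_{\rm red}=0$ the interval collapses toward zero as $N_{\rm balls}$ grows, instead of reporting a width of zero.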
Higher harmonic flow coefficients of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2016-09) The elliptic, triangular, quadrangular and pentagonal anisotropic flow coefficients for $\pi^{\pm}$, $\mathrm{K}^{\pm}$ and p+$\overline{\mathrm{p}}$ in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV were measured ...
Azimuthally differential pion femtoscopy relative to the second and third harmonic in Pb-Pb 2.76 TeV collisions from ALICE (Elsevier, 2017-11) Azimuthally differential femtoscopic measurements, being sensitive to spatio-temporal characteristics of the source as well as to the collective velocity fields at freeze-out, provide very important information on the ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ... Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
When designing an aircraft, there has to be a decision as to the aspect ratio of a wing. It's been said that having a higher aspect ratio wing will reduce drag for the same wing area; however, most of the time wings are shorter than they could be. So my question is, what exactly dictates the aspect ratio of a wing, and why don't they make them as long as possible? Aspect ratio is the ratio based on the span and chord of an aircraft's wings. The span is the length of the wings measured wingtip to wingtip; the chord is the 'depth' of the wing from the leading edge to the trailing edge, measured in a straight line. Because very few aircraft have constant chord planforms, this requires a not-very-fancy formula to calculate (source: NASA), so that we can effectively average the chord: $$AR=\frac{b^2}{S}$$ Where: $AR=$ Aspect ratio $b=$ Wingspan $S=$ Wing area Math aside, aspect ratio is chosen based on an aircraft's role or requirements. A need for agility dictates a low aspect ratio, as does a need for compactness. In both cases, fighter aircraft and bush aircraft benefit from agility and small size. High aspect ratios provide great cruise efficiency but can have poor landing characteristics (high drag at low speeds or high angles of attack due to frontal area) that are often offset by high-lift devices like flaps and slats. To the second half of your question: even when a high aspect ratio is desired, wings are not made as long as possible for two reasons. The first is structural; the bending forces associated with wings of extreme length are, well, extreme, and the materials required are pretty space-age. See high-performance gliders, or at the crazy end, solar- or human-powered aircraft, for examples of this. It's just hard to do at the size of an airliner. The second reason is more practical: space is expensive. An extremely high-aspect ratio wing takes up a ton of space relative to the rest of the aircraft.
In an attempt to offset this, early 777s (which had a larger span than 767s and 747s) were offered with folding wingtips, but nobody bought that option and it got dropped. Now I am going to commit a heresy, but keep reading to get an explanation: Increasing the aspect ratio of a wing will not change its induced drag. Increasing the span will. The induced drag coefficient of a wing is $c_{Di} = \frac{c_L^2}{\pi\cdot AR\cdot\epsilon}$, and this seems to indicate that a bigger aspect ratio AR would lower the induced drag coefficient $c_{Di}$. But only at the same lift coefficient $c_L$! Now let's look at the real numbers and compare two wings of the same span, but different aspect ratios. For simplicity, wing 1 has an AR of 5 and wing 2 has an AR of 10. Let's further assume that both wings have the same mass. Since both wings have the same span, wing 1 has twice the wing area of wing 2. To create the same lift, wing 1 needs only half the lift per area of wing 2! This means its $c_L$ is only half as big as that of wing 2, and now let's look at the induced drag again: $D_i = q_\infty\cdot S\cdot c_{Di}$ Wing 1: $D_{i_1} = q_\infty\cdot S_1\cdot\frac{c_{L_1}^2}{\pi\cdot AR_1\cdot\epsilon}$ Wing 2: $D_{i_2} = q_\infty\cdot S_2\cdot\frac{c_{L_2}^2}{\pi\cdot AR_2\cdot\epsilon} = q_\infty\cdot 0.5\cdot S_1\cdot\frac{4\cdot c_{L_1}^2}{\pi\cdot 2\cdot AR_1\cdot\epsilon} = D_{i_1}$ If both have the same span efficiency $\epsilon$, both have the same induced drag at the same lift. To reduce induced drag requires a span increase, regardless of aspect ratio. However, a higher aspect ratio wing does have advantages: Lower surface area means less friction drag Lower surface area also means less mass, at least at moderate aspect ratios.
Smaller pitching moments, requiring a smaller tailplane but also disadvantages: Less internal volume for fuel or the landing gear Needs more complex high lift devices for the same landing speed In the end, the wing chord is chosen to minimize wing mass and to yield the minimum required fuel volume, and the aspect ratio is just a consequence of the selected wing span. Driving wing mass down is also reducing induced drag, and the $\epsilon$ of modern airliner wings is only 0.75 to 0.8, which shows how little importance the induced drag coefficient has for finding an overall optimum. To answer your question it's maybe helpful to remember why a higher aspect ratio generates less drag. A higher aspect ratio causes less induced drag at the same lift than an aerofoil with a lower aspect ratio. Okay, we need a certain amount of lift and our aim is to gain this lift as efficiently as possible. Let's do it: Higher aspect ratio -> less drag, less drag -> less fuel burn, less fuel burn -> higher efficiency - perfect, but there are maybe other ways to reduce drag and save, for example, space or weight - a longer leading edge creates more form drag, and where do you park this giant aircraft, and it means a lot of weight to get sufficient strength for this giant wing - weight needs lift to fly and more lift causes more drag. Okay, I think it's clear now, that it's not sufficient to consider only one way of tuning the efficiency of your aircraft. There are also nice possibilities like winglets to reduce induced drag for only a little extra weight and also just a little extra interference drag, or a good range of possible CGs which requires less negative lift on the tail = less lift required at the wing. New materials and design techniques also allow to work on the all in all shape of the wing which increases the efficiency rapidly. Building a good aircraft is finding a good balance and so you can't just concentrate on only one possible solution.
I hope my weird talk can help you a little bit, sorry I fly this stuff and I guess all my colleagues are happy that I don't build it ;)
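The two-wing comparison from the earlier answer can be verified numerically. This sketch (all numbers assumed purely for illustration) computes the induced drag of two wings with equal span and equal lift but aspect ratios 5 and 10:

```python
# Sanity check of the claim: at equal span and equal lift, induced drag
# does not depend on aspect ratio.  All inputs are made-up example values.
from math import pi

q = 1000.0   # dynamic pressure q_inf, Pa (assumed)
b = 30.0     # span, m (assumed)
L = 5.0e5    # required lift, N (assumed)
eps = 0.8    # span efficiency, typical airliner value quoted above

def induced_drag(AR):
    S = b**2 / AR              # wing area from span and aspect ratio
    cL = L / (q * S)           # lift coefficient needed to produce lift L
    cDi = cL**2 / (pi * AR * eps)
    return q * S * cDi

d5, d10 = induced_drag(5), induced_drag(10)
print(d5, d10)  # identical values
```

Algebraically the area and aspect ratio cancel: $D_i = L^2 / (q_\infty \pi b^2 \epsilon)$, which is why only the span matters.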
Here's a statement of Yoneda's lemma for n-categories. Let C be an n-category and $C^{\wedge}=[C^o,n-1Cat]$ be the n-category of presheaves on C. $C^o$ is the opposite n-category of C and $n-1Cat$ is the n-category of (n-1)-categories. We have the Yoneda embedding: $h_C:C \rightarrow C^{\wedge}$ , X being sent to the hom-presheaf $hom_C(-,X)$. Now Yoneda's lemma says: 1) For $X \in C $ and $A \in C^{\wedge}$, $hom_{C^{\wedge}}(h_C(X),A) \approx A(X) $ in the n-category $n-1Cat$ up to n-equivalence; 2) For $A \in C^{\wedge}$, $hom_{C^{\wedge}}(h_C(-),A) \approx A $ in the n-category $C^{\wedge}$ up to n-equivalence. I want to test the Yoneda in the case n=0. Now a 0-category is, supposedly, a set and a (-1)-category is either 1 or 0 (truth values). So the 0-category of (-1)-categories is $$-1Cat=\{0,1\}$$. Now let C be a 0-category; $C^{\wedge}=[C^o,-1Cat]$ is nothing but the power set of C since a function of C in {0,1} defines a subset of C. The Yoneda embedding is $h_C:C \rightarrow C^{\wedge}$ , x being sent to the hom-presheaf $hom_C(-,x)=\{x\}$ (since $hom_C(y,x)=0$ if $y \neq x$), that is, x being sent to the singleton {x}. So the Yoneda lemma gives: $hom_{C^{\wedge}}(h_C(x),A) \approx A(x) = 0$ if $x \notin A$ and $hom_{C^{\wedge}}(h_C(x),A) \approx A(x) = 1$ if $x \in A$ in the 0-category $-1Cat$ up to 0-equivalence which is equality. Thus $hom_{C^{\wedge}}(h_C(x),A) = 1$ as long as $x \in A$, so this indeed makes $C^{\wedge}$ not just a set but a set with equivalence relation, also called setoid in nLab. I don't actually expect this: Yoneda's lemma forces us to consider setoids instead of plain sets as 0-categories. So now my question is: Given two 0-categories C and D, [C, D] (functions of C in D) should be a 0-category, i.e., a setoid. So what is the equivalence on [C,D]? In the case of [C,{0,1}], a subset A and a singleton {x} are equivalent iff x is an element of A. Two functions A and B (i.e.
two subsets of C) are equivalent iff their intersection is not empty; indeed any subset is either equivalent to the empty set or the whole set C itself (if the 0-category C is strictly a set). This is indeed rather confusing. I hope someone can perhaps clarify. Or perhaps the Yoneda should not be applied in this rather trivial case at all?
Let $T > 0$, and let $(\Omega, \mathscr F, \{\mathscr F_t\}_{t \in [0,T]}, \mathbb P)$ be a filtered probability space where $\mathbb P = \tilde{\mathbb P}$ (risk-neutral measure) and $\mathscr F_t = \mathscr F_t^{{W}} = \mathscr F_t^{\tilde{W}}$ where $W = \tilde{W} = (\tilde{W_t})_{t \in [0,T]} = ({W_t})_{t \in [0,T]}$ is standard $\mathbb P=\tilde{\mathbb P}$-Brownian motion. Define forward measure $\hat{\mathbb P}$: $$A_T := \frac{d \hat{\mathbb P}}{d \mathbb P} = \frac{\exp(-\int_0^T r_s ds)}{P(0,T)}$$ It can be shown that $\exp(-\int_0^t r_s ds)P(t,T)$ is a $(\mathscr F_t, \mathbb P)-$martingale where $r_t$ is short rate process and $P(t,T)$ is bond price. We are given that $$\frac{dP(t,T)}{P(t,T)} = r_t dt + \zeta_t dW_t$$ where $r_t$ and $\zeta_t$ are $\mathscr F_t$-adapted and $\zeta_t$ satisfies Novikov's condition. I don't think $\zeta_t$ is supposed to represent anything in particular. Define the stochastic process $\hat{W} = (\hat{W_t})_{t\in[0,T]}$ s.t. $$\hat{W_t} := W_t + \int_0^t -\zeta_s ds$$ Use Girsanov Theorem to prove $\hat{W_t}$ is standard $\hat{\mathbb P}$-Brownian motion. What I tried: Since $\zeta_t$ satisfies Novikov's condition, $\int_0^T \zeta_t^2 \, dt < \infty$ a.s. and $$L_t := \exp(-\int_0^t (-\zeta_s dW_s) - \frac{1}{2} \int_0^t \zeta_s^2 ds)$$ is a $(\mathscr F_t, \mathbb P)-$martingale. By Girsanov Theorem, $\hat{W_t}$ is standard $\mathbb P^{*}$-Brownian Motion where $$\frac{d \mathbb P^{*}}{d \mathbb P} = L_T$$ I guess we have that $\hat{W_t}$ is standard $\hat{\mathbb P}$-Brownian Motion if we can show that $$L_T = \frac{d \hat{\mathbb P}}{d \mathbb P}$$ I think I was able to show (lost my notes) that $dL_t = L_t \zeta_t dW_t$, $dA_t = A_t \zeta_t dW_t$ and then $d(\ln L_t) = d(\ln A_t)$ From $d(\ln L_t) = d(\ln A_t)$, I infer that $L_t = A_t$ and hence $L_T = A_T$ QED. Is that right?
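One way to build numerical confidence in the Girsanov step is a Monte Carlo check. The sketch below specializes to a constant $\zeta$ and a single time horizon (assumptions not made in the problem, purely for illustration) and verifies that reweighting by $L_T$ gives $\hat W_T = W_T - \zeta T$ mean $0$ and variance $T$, as a standard Brownian motion under $\hat{\mathbb P}$ should have:

```python
# Monte Carlo sketch: with constant zeta, L_T = exp(zeta*W_T - zeta^2*T/2)
# is the density dP-hat/dP, and under P-hat, W_hat_T = W_T - zeta*T
# should be N(0, T).  We check the reweighted moments under P.
import random
from math import exp, sqrt

random.seed(0)
zeta, T, N = 0.5, 1.0, 200_000

mean_L = mean_W = mean_W2 = 0.0
for _ in range(N):
    W_T = random.gauss(0.0, sqrt(T))
    L_T = exp(zeta * W_T - 0.5 * zeta**2 * T)
    W_hat = W_T - zeta * T
    mean_L += L_T / N        # E[L_T] should be 1 (L is a martingale)
    mean_W += L_T * W_hat / N    # E-hat[W_hat_T] should be 0
    mean_W2 += L_T * W_hat**2 / N  # E-hat[W_hat_T^2] should be T

print(mean_L, mean_W, mean_W2)  # approximately 1, 0, T
```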
M³: a new muon missing momentum experiment to probe $(g-2)_\mu$ and dark matter at Fermilab Abstract Here, new light, weakly-coupled particles are commonly invoked to address the persistent $\sim 4\sigma$ anomaly in $(g-2)_\mu$ and serve as mediators between dark and visible matter. If such particles couple predominantly to heavier generations and decay invisibly, much of their best-motivated parameter space is inaccessible with existing experimental techniques. In this paper, we present a new fixed-target, missing-momentum search strategy to probe invisibly decaying particles that couple preferentially to muons. In our setup, a relativistic muon beam impinges on a thick active target. The signal consists of events in which a muon loses a large fraction of its incident momentum inside the target without initiating any detectable electromagnetic or hadronic activity in downstream veto systems. We propose a two-phase experiment, M$^3$ (Muon Missing Momentum), based at Fermilab. Phase 1 with $\sim 10^{10}$ muons on target can test the remaining parameter space for which light invisibly-decaying particles can resolve the $(g-2)_\mu$ anomaly, while Phase 2 with $\sim 10^{13}$ muons on target can test much of the predictive parameter space over which sub-GeV dark matter achieves freeze-out via muon-philic forces, including gauged $U(1)_{L_\mu - L_\tau}$. Authors: Princeton Univ., Princeton, NJ (United States) Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States) Publication Date: Research Org.: Fermi National Accelerator Lab.
(FNAL), Batavia, IL (United States) Sponsoring Org.: USDOE Office of Science (SC), High Energy Physics (HEP) (SC-25) OSTI Identifier: 1439466 Report Number(s): arXiv:1804.03144; FERMILAB-PUB-18-087-A Journal ID: ISSN 1029-8479; 1667037; TRN: US1900618 Grant/Contract Number: AC02-07CH11359 Resource Type: Journal Article: Accepted Manuscript Journal Name: Journal of High Energy Physics (Online) Additional Journal Information: Journal Volume: 2018; Journal Issue: 9; Journal ID: ISSN 1029-8479 Publisher: Springer Berlin Country of Publication: United States Language: English Subject: 79 ASTRONOMY AND ASTROPHYSICS; 46 INSTRUMENTATION RELATED TO NUCLEAR SCIENCE AND TECHNOLOGY; 72 PHYSICS OF ELEMENTARY PARTICLES AND FIELDS; Fixed target experiments Citation: Kahn, Yonatan, Krnjaic, Gordan, Tran, Nhan, and Whitbeck, Andrew. "M³: a new muon missing momentum experiment to probe $(g-2)_\mu$ and dark matter at Fermilab." Journal of High Energy Physics 2018, no. 9 (2018). doi:10.1007/JHEP09(2018)153. Citation information provided by Web of Science. Figures / Tables (caption only; images not included): production of scalar (left) and vector (right) forces that couple predominantly to muons; in both cases, a relativistic muon beam is incident on a fixed target and scatters coherently off a nucleus to produce the new particle.
The Discrete and Indiscrete Topologies

Recall from the Topological Spaces page that a set $X$ and a collection $\tau$ of subsets of $X$ together denoted $(X, \tau)$ is called a topological space if:

$\emptyset \in \tau$ and $X \in \tau$, i.e., the empty set and the whole set are contained in $\tau$.

If $U_i \in \tau$ for all $i \in I$ where $I$ is some index set then $\displaystyle{\bigcup_{i \in I} U_i \in \tau}$, i.e., for any arbitrary collection of subsets from $\tau$, their union is contained in $\tau$.

If $U_1, U_2, ..., U_n \in \tau$ then $\displaystyle{\bigcap_{i=1}^{n} U_i \in \tau}$, i.e., for any finite collection of subsets from $\tau$, their intersection is contained in $\tau$.

We will now look at two rather trivial topologies known as the discrete topology and the indiscrete topology.

Definition: If $X$ is any set, then the Discrete Topology on $X$ is the collection of subsets $\tau = \mathcal P(X)$.

Here, the notation "$\mathcal P(X) = \{ Y : Y \subseteq X \}$" represents the power set of $X$ or rather, the set of all subsets of $X$.

Let's verify that $(X, \tau) = (X, \mathcal P(X))$ is indeed a topological space. Since $\emptyset \subseteq X$ and $X \subseteq X$, we clearly have that $\emptyset, X \in \mathcal P(X)$, so the first condition holds.

Now consider any arbitrary collection of subsets $\{ U_i \}_{i \in I}$ from $\mathcal P(X)$ for some index set $I$. Suppose that $\displaystyle{\bigcup_{i \in I} U_i \not \in \mathcal P(X)}$. Then $\displaystyle{\bigcup_{i \in I} U_i \not \subseteq X}$ and so there exists an element $\displaystyle{x \in \bigcup_{i \in I} U_i}$ such that $x \not \in X$. Say that $x \in U_j$ for some $j \in I$. Then $U_j \not \subseteq X$, which contradicts the fact that $\{ U_i \}_{i \in I}$ is an arbitrary collection of sets from $\mathcal P(X) = \{ U : U \subseteq X \}$. Therefore $\displaystyle{\bigcup_{i \in I} U_i \in \mathcal P(X)}$.
Lastly, consider any finite collection $U_1, U_2, ..., U_n$ of sets from $\mathcal P(X)$. Suppose that $\displaystyle{\bigcap_{i=1}^{n} U_i \not \in \mathcal P(X)}$. Then $\displaystyle{\bigcap_{i=1}^{n} U_i \not \subseteq X}$, so there exists an $\displaystyle{x \in \bigcap_{i=1}^{n} U_i}$ such that $x \not \in X$. Then $x \in U_j$ for all $j \in \{1, 2, ..., n \}$, so $U_j \not \subseteq X$ for all $j \in \{ 1, 2, ..., n \}$, which contradicts the fact that $U_1, U_2, ..., U_n$ are elements of $\mathcal P(X)$. Therefore $\displaystyle{\bigcap_{i=1}^{n} U_i \in \mathcal P(X)}$.

Since all three conditions for $\tau = \mathcal P(X)$ hold, we have that $(X, \mathcal P(X))$ is a topological space.

Definition: If $X$ is any set, then the Indiscrete Topology on $X$ is the collection of subsets $\tau = \{ \emptyset, X \}$.

Once again, let's verify that $(X, \tau) = (X, \{ \emptyset, X \})$ is indeed a topological space.

For the first condition, we clearly see that $\emptyset \in \{ \emptyset, X \}$ and $X \in \{ \emptyset, X \}$.

For the second condition, the only possible unions are $\emptyset \cup \emptyset = \emptyset \in \{ \emptyset, X \}$, $\emptyset \cup X = X \in \{ \emptyset, X \}$, and $X \cup X = X \in \{ \emptyset, X \}$.

For the third condition, the only possible intersections are $\emptyset \cap \emptyset = \emptyset \in \{ \emptyset, X \}$, $\emptyset \cap X = \emptyset \in \{ \emptyset, X \}$, and $X \cap X = X \in \{ \emptyset, X \}$.

Since all three conditions for $\tau = \{ \emptyset, X \}$ hold, we have that $(X, \{ \emptyset, X \})$ is a topological space.
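For a finite carrier set, the two verifications above can also be spot-checked mechanically. The sketch below is illustrative (the helper names powerset and is_topology are mine, not from the page); note that for a finite collection of sets, closure under pairwise unions and intersections already implies the closure conditions in the axioms.

```python
from itertools import chain, combinations

def powerset(xs):
    """All subsets of xs as frozensets -- the discrete topology P(X)."""
    return {frozenset(c) for c in chain.from_iterable(
        combinations(list(xs), r) for r in range(len(list(xs)) + 1))}

def is_topology(carrier, tau):
    """Check the three topology axioms for a finite collection tau.

    For finite tau, closure under pairwise unions/intersections implies
    closure under the arbitrary unions / finite intersections required."""
    carrier = frozenset(carrier)
    if frozenset() not in tau or carrier not in tau:
        return False
    return all(u | v in tau and u & v in tau for u in tau for v in tau)

X = {1, 2, 3}
print(is_topology(X, powerset(X)))                  # discrete topology -> True
print(is_topology(X, {frozenset(), frozenset(X)}))  # indiscrete topology -> True
```

Dropping even one required union breaks the axioms: the collection $\{ \emptyset, \{1\}, \{2\}, X \}$ fails because $\{1\} \cup \{2\} = \{1, 2\}$ is missing.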
Vector Subspaces Examples 2

Recall from the Vector Subspaces page that a subset $U$ of the vector space $V$ is said to be a vector subspace of $V$ if $U$ contains the zero vector of $V$ and is closed under both addition and scalar multiplication defined on $V$. We also saw an important lemma which says that if $V$ is a vector space, then a subset $U$ of $V$ is a subspace of $V$ if and only if $(ax + by) \in U$ for all $x, y \in U$ and for all $a, b \in \mathbb{F}$.

We will now look at some more examples and non-examples of vector subspaces.

Example 1

Determine whether or not the subset $U$ of all differentiable real-valued functions $f$ on the interval $[-2, 2]$ where $f'(1) = 2f(-1)$ is a subspace of $C[-2, 2]$.

Let $f(x) = 0$ be the zero function. Then $f'(x) = 0$ and so $f'(1) = 0$ and $2f(-1) = 0$ so $f'(1) = 2f(-1)$. Thus the zero function is in $U$.

Now let $f(x), p(x) \in U$, and let $a, b \in \mathbb{R}$. Then we have that:

\begin{align} (af + bp)'(1) = af'(1) + bp'(1) = a \cdot 2f(-1) + b \cdot 2p(-1) = 2(af + bp)(-1) \end{align}

Therefore $(af(x) + bp(x)) \in U$, so indeed $U$ is a subspace of $C[-2, 2]$.

Example 2

Determine whether or not the subset $U$ of continuous real-valued functions such that $\int_0^1 f(x) \: dx = 1$ is a subspace of $C[0, 1]$.

Consider the zero function $f(x) = 0$. We have that:

\begin{align} \int_0^1 0 \: dx = 0 \neq 1 \end{align}

Therefore the zero function is not in $U$ so $U$ is not a subspace of $C[0, 1]$.

Example 3

Show that if $U_1$ and $U_2$ are subspaces of $V$ then $U_1 \cap U_2$ is also a subspace of $V$.

We will prove this as a theorem later on. For now, suppose that $U_1$ and $U_2$ are subspaces of $V$. Then $0 \in U_1$ and $0 \in U_2$, which implies that $0 \in U_1 \cap U_2$. Now let $x, y \in U_1 \cap U_2$ and let $a, b \in \mathbb{F}$. Since $x, y \in U_1$ and $U_1$ is a subspace of $V$, we have $(ax + by) \in U_1$; similarly, since $x, y \in U_2$, we have $(ax + by) \in U_2$. Hence $(ax + by) \in U_1 \cap U_2$, so $U_1 \cap U_2$ is a subspace of $V$.
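The closure property in Example 1 can also be spot-checked numerically. The sketch below is mine, not from the page (the helper names deriv and in_U are illustrative): it tests the defining condition $f'(1) = 2f(-1)$ with a central difference for two concrete members of $U$ and an arbitrary linear combination of them. One can check directly that $f(x) = x^2$ and $p(x) = 3 + 2x$ both satisfy the condition.

```python
def deriv(f, x, h=1e-6):
    """Central-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

def in_U(f, tol=1e-6):
    """Numerically check the defining condition of U: f'(1) == 2 f(-1)."""
    return abs(deriv(f, 1.0) - 2 * f(-1.0)) < tol

f = lambda x: x * x        # f'(1) = 2 and 2 f(-1) = 2, so f is in U
p = lambda x: 3 + 2 * x    # p'(1) = 2 and 2 p(-1) = 2, so p is in U
combo = lambda x: 4 * f(x) - 7 * p(x)   # an arbitrary linear combination

print(in_U(f), in_U(p), in_U(combo))   # -> True True True
```

By contrast, $g(x) = x$ has $g'(1) = 1 \neq -2 = 2g(-1)$, and the check rejects it.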
Research | Open Access

Existence and quantum calculus of weak solutions for a class of two-dimensional Schrödinger equations in \(\mathbb{C}_{+}\)

Boundary Value Problems, volume 2018, Article number: 59 (2018)

Abstract

The aim of this paper is to investigate the existence of weak solutions for a two-dimensional Schrödinger equation with a singular potential in \(\mathbb{C}_{+}\). Under appropriate assumptions on the nonlinearity, we introduce a new type of quantum calculus via the Morse theory and variational methods. By applying Schrödinger type inequalities and the well-known Banach fixed point theorem in conjunction with the technique of measures of weak noncompactness, new and more accurate estimates of the boundary behavior of these solutions are also deduced.

Introduction

In this paper, we study the following two-dimensional Schrödinger equation (see [1]): in the upper half plane \(\mathbb{C}_{+} =\{ z=t+ix:x>0\}\), where the variables t and x are complex numbers in \(\mathbb{C}_{+}\) and \(C_{1}\) and \(C_{2}\) are real numbers. Our first aim is to construct the solution in terms of hypergeometric functions. in \(\mathbb{C}_{+}\), which shows that In general, the study of solutions of the two-dimensional Schrödinger equation and their related properties is very complicated, especially if the roots of the characteristic polynomial are double and not analytic at the origin. The main difficulty in dealing with quadratic-type nonlinearities in two dimensions is our inability to use the Strichartz inequalities. However, many authors have shown that the solution of the two-dimensional Schrödinger equations can be expressed by using a variational inequality.
In recent years, various extensions and generalizations of the classical variational inequality models and complementarity problems have emerged in quantum and fluid mechanics, nonlinear programming, physics, optimization and control, economics, transportation, finance, structural, elasticity and applied sciences (see [3–7] and the references therein for details). The classical Schrödinger solution spaces \(\mathfrak{H}^{p}(\mathbb {C_{+}})\) (see [8]), are defined to consist of solutions of (1), holomorphic in \(\mathbb{C}_{+}\) with the property that \(\mathcal{M}_{p}(u,x)\) is uniformly bounded for \(x> 0\), where Define We remark that \(\phi^{(k)}(t)\in\mathcal{L}^{p}\) and \(\phi(t)\in\mathcal {C}^{\infty}\) if and only if \(\phi(t)\) belongs to the space \(\mathcal{D}_{\mathcal{L}^{p}}\) (see [6]). Let \(\mathcal{F}\) denote the space, which consists of infinitely differentiable weak solution of (1) in \(\mathbb {C}_{+}\). Let \(\mathcal{F}'_{\mathcal{L}^{p}}\) denote the dual of the space \(\mathcal{F}_{\mathcal{L}^{q}}\), that is, \(\mathcal{F}'_{\mathcal{L}^{p}}= (\mathcal{F}_{\mathcal{L}^{q}})^{\prime}\). We also denote \(q=\frac{p}{p-1}\) and by \(D'\) the dual of the space D. So we can get \(D\subseteq\mathcal{F}_{\mathcal{L}^{p}}\) and \(\mathcal {F}'_{\mathcal{L}^{p}}\subseteq D^{\prime}\). Definition 1.1 (see [10]) If \(u \in\mathcal{F}^{\prime}\), then it has the following representation: for any test function \(\phi\in\mathcal{F}\) and any function \(g(z)\) in \(\mathbb{C}_{+}\), where \(g(z)\) is analytic on the complement of the support of u. Definition 1.2 (see [9]) Let Du be the Stokes operator defined by on \(\mathcal{F}'_{\mathcal{L}^{p}}\) for all \(\phi\in\mathcal{F}_{\mathcal {L}^{q}}\). It is obvious that where \(u \in\mathcal{F}'_{\mathcal{L}^{p}}\). Since \(u\in\mathcal{F}^{\prime}_{\mathcal{L}^{p}}\), \(D\varphi\in D_{\mathcal{L}^{p^{\prime}}}\), Du defined as above is a functional on \(D_{\mathcal{L}^{p^{\prime}}}\). 
Linearity of Du is nontrivial. If \(\{\varphi_{v}\}\to\varphi\) in \(D_{\mathcal {L}^{p^{\prime}}}\), then it is easy to see that Construction of the solutions By virtue of the weak maximum principle of superposition, it is necessary to consider the following Riemann problems: and where Substituting \(\tau_{1}^{l}\vartheta(z) \) with ϑ, it follows that \(\mathfrak {R}\vartheta=0\), from which one concludes that By a simple calculation, we know that By replacing \(\frac{\tau_{1}}{\tau_{2}}\) by \(\frac{1}{4(1-x) }\), it is obvious that which is equivalent to a hyperbolic–parabolic differential equation with iff It follows from the hypergeometric equation theory that the first and the second solutions for the hyperbolic–parabolic equation are and respectively. Let \(z=\frac{t^{2}}{4x}\), where \(| z| <1\). It is easy to see that a complete solution of the hyperbolic–parabolic equation is So \(\vartheta=x^{l}(1-x)^{\sigma}x\) is a solution of \(\mathfrak{R}\vartheta=0\). Notice that which immediately shows that Similarly, we can solve the problem (2) by letting which shows that Boundary behaviors Theorem 3.1 If \(u\in\mathcal{F}'_{\mathcal{L}^{p}}\), then is one of the representations of the solution u such that and where \(x\to\infty\) and there exists a function \(G_{k}(z)\in\mathfrak{H}^{p}(\mathbb{C}_{+})\) such that and Theorem 3.2 If g is defined in (10) and \(G_{k}\in\mathfrak{H}^{p}(\mathbb {C}_{+})\), then there exists a Schrödinger distributional solution \(u(t)\in\mathcal{F}^{\prime}_{\mathcal{L}^{p}}\) such that \(g(z)\) is one of the analytic representations of u. Corollary 1 If \(u(t)\in\mathcal{F}^{\prime}_{\mathcal{L}^{p}}\), then satisfies and as \(x\to\infty\) and there exists a function \(G_{k}\) in \(\mathfrak{H}^{p}(\mathbb{C}_{+})\) such that Lemmas The following lemmas are required in this section. Lemma 4.1 (see [11, p. 69]) If \(u\in\mathcal {L}^{p}(\mathbb{R})\) and G is defined by then Lemma 4.2 (see [11, p. 
77]) Let \(g(z)\) be any weak solution of Eq. (1) such that the following properties hold. (I) \(g(t+ix)\in\mathcal{L}^{p}\) for any fixed \(x>0\); (II) $$\lim_{x\to0^{+}}g(t+ix)=g^{+}(t) $$ in \(\mathcal{F}^{\prime}_{\mathcal{L}^{p}}\) (weakly), as \(x\to\infty\) and Then where \(\operatorname{Im}z>0\). Proofs of main results Proof of Theorem 3.1 By virtue of the fixed point theorem with respect to the stationary Schrödinger operator in [8], we have for any \(u\in\mathcal{F}^{\prime}_{\mathcal{L}^{p}}\), which shows that where r is a nonnegative integer and \(u_{l}\in L_{p}\). So which shows that from the Hölder inequality. Put So which yields where C is a positive constant. Since \(u_{l} \in\mathcal{L}^{p}\), then where M is a positive constant, and So By virtue of the structure formula, we have where So we obtain \(G_{k}(z)\in\mathfrak{H}^{p}(C_{+})\) from Lemma 4.1, which shows that Proof of Theorem 3.2 Since \(G_{k}(z)\in\mathfrak{H}^{p}(C_{+})\), where \(G_{k}(t+ix)\in\mathcal {L}^{p}\) for fixed x, there exists the solution \(u_{l}(t)\in\mathcal {L}^{p}\), where \(u_{l}\) is the nontangential limit of \(g(z)\). Since \(D_{\mathcal{L}^{q}}\in\mathcal{L}^{q}\), we see that \(u_{l}(t)\in D^{\prime}_{\mathcal{L}^{p}}\) and So which shows that where \(x>0\) and We know that \(G_{k}(z)\) can be represented as follows: from Lemma 4.2, which yields Put where \(u\in D^{\prime}_{\mathcal{L}^{p}}\), which shows that \(g(z)\) is one of the analytic representations of u. Proof of Corollary 1 By virtue of the fixed point theorem with respect to the stationary Schrödinger operator in [8], we know that The rest of the proof of the corollary is similar to the proof of Theorem 3.1. So we omit the details here for the sake of brevity. The proof of Corollary 1 is complete. Conclusions In this paper, we investigated the existence of weak solutions for a two-dimensional Schrödinger equation with a singular potential in \(\mathbb{C}_{+}\).
Under appropriate assumptions on the nonlinearity, we introduced a new type of quantum calculus via the Morse theory and variational methods. By applying the well-known Banach fixed point theorem in conjunction with the technique of measures of weak noncompactness, new and more accurate estimates of the boundary behavior of these solutions were also deduced. We significantly extended and complemented some results from the current literature.

References

1. Gil', A., Nogin, V.: Complex powers of a differential operator related to the Schrödinger operator. Vladikavkaz. Mat. Zh. 19(1), 18–25 (2017)
2. Nakao, M.L., Narazaki, T.: Existence and decay of solutions of some nonlinear wave equations in noncylindrical domains. Math. Rep. Coll. Gen. Educ. Kyushu Univ. 11(2), 117–125 (1978)
3. Abramowitz, M., Stegun, I.A. (eds.): Handbook of Mathematical Functions, 10th printing. National Bureau of Standards, Washington (1972)
4. Bardos, C., Chen, G.: Control and stabilization for the wave equation. III: domain with moving boundary. SIAM J. Control Optim. 19, 123–138 (1981)
5. Bresters, D.W.: On the equation of Euler–Poisson–Darboux. SIAM J. Math. Anal. 1, 31–41 (1973)
6. Ferreira, J.: Nonlinear hyperbolic–parabolic partial differential equation in noncylindrical domain. Rend. Circ. Mat. Palermo 44(1), 135–146 (1995)
7. Medeiros, L.A.: Nonlinear wave equations in domains with variable boundary. Arch. Ration. Mech. Anal. 47, 47–58 (1972)
8. Li, Z.: Boundary behaviors of modified Green's function with respect to the stationary Schrödinger operator and its applications. Bound. Value Probl. 2015, Article ID 242 (2015)
9. Schwartz, L.: Théorie des Distributions. Hermann, Paris (1978)
10. Marion, O.: Hilbert transform, Plemelj relation, and Fourier transform of distributions. SIAM J. Math. Anal. 4(4), 656–670 (1973)
11. Pandey, J.: The Hilbert Transform of Schwartz Distributions and Applications.
Wiley, New York (1996) Acknowledgements The authors are grateful to the anonymous reviewers for carefully reading this paper and for their comments and suggestions which have improved the paper. Availability of data and materials Not applicable. Funding This work was supported by National Natural Science Foundation of China (No. 61403356) and Zhejiang Provincial Natural Science Foundation of China (No. LY18F030012). Ethics declarations Ethics approval and consent to participate Not applicable. Competing interests The authors declare that they have no competing interests. Consent for publication Not applicable. Additional information List of abbreviations Not applicable. Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Question

Four fair six-sided dice are rolled. The probability that the sum of the results is $22$ equals $$\frac{X}{1296}.$$ What is the value of $X$?

My Approach

I reduced it to counting the solutions of the equation $x_{1}+x_{2}+x_{3}+x_{4}=22, \,\, 1 \leq x_{i} \leq 6, \,\, 1 \leq i \leq 4$.

I removed the restriction $x_{i} \geq 1$ first, as follows:

$\Rightarrow x_{1}^{'}+1+x_{2}^{'}+1+x_{3}^{'}+1+x_{4}^{'}+1=22$

$\Rightarrow x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$

$\Rightarrow \binom{18+4-1}{18}=1330$

Now I removed the restriction $x_{i} \leq 6$ by calculating the number of bad cases and then subtracting it from $1330$. Calculating the bad combinations, i.e. those with $x_{i} \geq 7$, in $x_{1}^{'}+x_{2}^{'}+x_{3}^{'}+x_{4}^{'}=18$: we can distribute $7$ to $2$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$, i.e. $\binom{4}{2}$ ways, or we can distribute $7$ to $1$ of $x_{1}^{'},x_{2}^{'},x_{3}^{'},x_{4}^{'}$ and then distribute the rest among all of them, i.e. $$\binom{4}{1} \binom{14}{11}$$ ways.

Therefore, the number of bad combinations equals $$\binom{4}{1} \binom{14}{11} - \binom{4}{2}$$

Therefore, the solution should be: $$1330-\left( \binom{4}{1} \binom{14}{11} - \binom{4}{2}\right)$$

However, I am getting a negative value. What am I doing wrong?

EDIT

I am asking about my approach, because if the question had a larger number of dice and a higher target sum, then guessing the dice values directly would not work.
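Independently of any closed-form count, a brute-force enumeration over all $6^4 = 1296$ equally likely outcomes gives the value that $X$ must equal, which is handy for checking an inclusion-exclusion attempt:

```python
from itertools import product

# Brute force over all 6^4 = 1296 equally likely outcomes of four dice.
ways = sum(1 for roll in product(range(1, 7), repeat=4) if sum(roll) == 22)
print(ways)   # -> 10, so the probability is 10/1296
```

(The same count follows by symmetry: replacing each $x_i$ by $7 - x_i$ maps sums of $22$ to sums of $6$, and there are $\binom{5}{3} = 10$ of those.)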
If I want to implement the measurement operation corresponding to filtering, i.e. $$ M_1=\left(\begin{array}{cc}1 & 0 \\ 0 & \alpha \end{array}\right)\qquad M_2=\left(\begin{array}{cc}0 & 0 \\ 0 & \sqrt{1-\alpha^2} \end{array}\right), $$ how would I do that?

These measurement operators describe a non-projective measurement. We typically convert such a measurement into a projective one by introducing ancilla qubits. In this case, define a unitary $U$ such that $$ U|0\rangle=\alpha|0\rangle+\sqrt{1-\alpha^2}|1\rangle. $$ Take the qubit that we want to measure, and introduce an ancilla in the state $|0\rangle$. Apply controlled-$U$, controlled from the qubit to be measured and targeting the ancilla. Finally, perform a standard, $Z$, measurement on the ancilla qubit. Answers 0 and 1 correspond to implementing $M_1$ and $M_2$ respectively. To see this explicitly, consider the possible inputs $|0\rangle$ and $|1\rangle$. Everything else will follow by linearity. $$ |0\rangle|0\rangle\mapsto |0\rangle|0\rangle\qquad |1\rangle|0\rangle\mapsto |1\rangle(\alpha|0\rangle+\sqrt{1-\alpha^2}|1\rangle). $$ So, input $|0\rangle$ always returns $|0\rangle$ (good, since $M_1|0\rangle=|0\rangle$ and $M_2|0\rangle=0$), while $|1\rangle$ returns either $M_1|1\rangle$ or $M_2|1\rangle$ depending on the measurement result.
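The construction can be checked with a few lines of linear algebra. The sketch below is an illustration (arbitrary example value of $\alpha$; the helper name effective_kraus is mine): it builds the controlled-$U$ dilation, projects the ancilla onto each outcome, and recovers exactly $M_1$ and $M_2$, and it also confirms the completeness relation $M_1^\dagger M_1 + M_2^\dagger M_2 = I$ that valid measurement operators must satisfy.

```python
import numpy as np

alpha = 0.6                       # any 0 < alpha < 1; an arbitrary example value
beta = np.sqrt(1 - alpha ** 2)

M1 = np.array([[1.0, 0.0], [0.0, alpha]])
M2 = np.array([[0.0, 0.0], [0.0, beta]])

# Completeness relation for measurement operators: M1†M1 + M2†M2 = I.
completeness = M1.conj().T @ M1 + M2.conj().T @ M2

# Ancilla unitary with U|0> = alpha|0> + beta|1> (any unitary with this
# first column works; here a real rotation).
U = np.array([[alpha, -beta],
              [beta,  alpha]])

# Controlled-U on |system, ancilla>, basis index 2*s + a (control = system).
CU = np.block([[np.eye(2), np.zeros((2, 2))],
               [np.zeros((2, 2)), U]])

def effective_kraus(outcome):
    """Operator applied to the system when the ancilla (prepared in |0>)
    is measured with the given outcome after the controlled-U."""
    K = np.zeros((2, 2))
    for i in range(2):                 # system input basis state |i>
        state = np.zeros(4)
        state[2 * i] = 1.0             # |i>|0>
        out = CU @ state
        K[:, i] = out[outcome::2]      # amplitudes of |j>|outcome>
    return K
```

Running effective_kraus(0) and effective_kraus(1) reproduces $M_1$ and $M_2$ exactly, matching the explicit $|0\rangle$, $|1\rangle$ computation above.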
The Connectedness of the Closure of a Set

Recall from the Connected and Disconnected Sets in Topological Spaces page that if $X$ is a topological space and $A \subseteq X$ then $A$ is said to be connected if the topological subspace $A$ is connected, and similarly, $A$ is said to be disconnected if the topological subspace $A$ is disconnected.

We will now look at a nice theorem which says that if $A$ is any connected set in a topological space then $\bar{A}$ is also a connected set. Of course, this is trivially true when $A$ itself is a closed set in $X$ (since then $A = \bar{A}$), but the result is more interesting when $A$ is not closed, since then the connected set $A$ is properly contained in the larger connected set $\bar{A}$.

Theorem 1: Let $X$ be a topological space and let $A \subseteq X$. If the subspace $A$ is connected then the closure $\bar{A}$ is also connected.

Proof: Let $A$ be a connected topological subspace and assume that $\bar{A}$ is disconnected. We will show that this leads to a contradiction.

If $\bar{A}$ is disconnected then there exist sets $B, C \subset \bar{A}$, open in the subspace $\bar{A}$, where $B, C \neq \emptyset$, $B \cap C = \emptyset$ and $\bar{A} = B \cup C$. Since $\bar{A} = B \cup C$, if we take the closure of both sides of this equality (with all closures taken in the subspace $\bar{A}$) we have that:

\begin{align} \bar{A} = \overline{B \cup C} \overset{*}{=} \bar{B} \cup \bar{C} \end{align}

The equality at $*$ comes from the fact that $\overline{B \cup C} = \bar{B} \cup \bar{C}$ as proven on the Basic Theorems Regarding the Closure of Sets in a Topological Space page. Now $B$ and $C$ are clopen sets in $\bar{A}$: each is open, and each is the complement of the other open set in $\bar{A}$, so each is also closed. Hence in fact $B = \bar{B}$ and $C = \bar{C}$.

Since $A \subseteq \bar{A}$, write $A = (A \cap B) \cup (A \cap C)$. Then $A \cap B$ and $A \cap C$ are both open in $A$. Since $A$ is connected we must have that $A \cap B = \emptyset$ or $A \cap C = \emptyset$ (otherwise $\{ A \cap B, A \cap C \}$ would be a separation of $A$). Suppose that $A \cap B = \emptyset$. Then $A = A \cap C$.
This implies that $A \subseteq C$. Taking closures in the subspace $\bar{A}$ (where the closure of $A$ is $\bar{A}$ itself, since $A$ is dense in $\bar{A}$) shows that $\bar{A} \subseteq \bar{C} = C$. But then $B = \bar{A} \setminus C = \emptyset$, which contradicts $B \neq \emptyset$.

Similarly, if we instead suppose that $A \cap C = \emptyset$ then $A = A \cap B$. This implies that $A \subseteq B$, and taking closures in $\bar{A}$ shows that $\bar{A} \subseteq \bar{B} = B$. But then $C = \bar{A} \setminus B = \emptyset$, once again a contradiction.

Therefore the assumption that $\bar{A}$ was disconnected is false. Hence if $A$ is a connected set then the closure $\bar{A}$ is also connected. $\blacksquare$
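As an illustration of the theorem (a standard example, not from the original page), consider the topologist's sine curve:

```latex
% A is the graph of sin(1/x) on (0, 1]: a continuous image of the
% connected interval (0, 1], hence connected.
A = \left\{ \left( x, \sin\tfrac{1}{x} \right) : 0 < x \leq 1 \right\}
    \subseteq \mathbb{R}^2,
\qquad
\bar{A} = A \cup \left( \{ 0 \} \times [-1, 1] \right).
% By Theorem 1, \bar{A} is connected, even though no path joins the
% limit segment {0} x [-1, 1] to the curve A.
```

So connectedness of the closure is guaranteed by the theorem even in cases where the closure is no longer path-connected.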
I) In this answer we would like to relax the conventional definition of a conservative force to include e.g. the Lorentz force. II) The standard definition of a conservative force is given on Wikipedia (October 2013) roughly as follows: A force field ${\bf F}={\bf F}({\bf r})$ is called a conservative force if it meets any of these three equivalent conditions: The force can be written as the negative gradient of a potential $U=U({\bf r})$: $$\tag{1} {\bf F} ~=~ - {\bf \nabla} U. $$ Equivalently, condition (1) means that the one-form $\phi:={\bf F}\cdot \mathrm{d}{\bf r}$ is exact: $\phi=-\mathrm{d}U$, where the exterior derivative is $\mathrm{d}:=\mathrm{d}{\bf r}\cdot{\bf \nabla}$. The position space is simply connected and the curl of ${\bf F}$ is zero: $$\tag{2} {\bf \nabla} \times {\bf F} ~=~ {\bf 0}. $$ Equivalently, condition (2) means that the one-form $\phi:={\bf F}\cdot \mathrm{d}{\bf r}$ is closed: $\mathrm{d}\phi=0$. There is zero net work $W$ done by the force ${\bf F}$ when moving a particle through a closed curve ${\rm r}: S^1 \to \mathbb{R}^3$ that starts and ends in the same position: $$\tag{3} W ~\equiv~ \oint_{S^1} \!\mathrm{d}s~ {\bf F}({\bf r}(s)) \cdot {\bf r}^{\prime}(s) ~=~ 0. $$ We stress that the parameter $s$ does not have to be actual time $t$. In fact time $t$ doesn't enter conditions (1-3) at all. The curve in condition (3) could be any virtual loop. In particular, the curve and its parametrization $s$ do in principle not have to reflect how an actual point particle would travel along a trajectory in a certain pace determined by some equations of motion, let alone move forward in time. III) Now recall that a velocity dependent potential $U=U({\bf r},{\bf v},t)$ of a force ${\bf F}$ by definition satisfies $$\tag{4} {\bf F}~=~\frac{\mathrm d}{\mathrm dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}}, \qquad {\bf v}~=~\dot{\bf r},$$ cf. Ref. 1. 
Next define the potential part of the action as $$\tag{5} S_{\rm pot}[{\bf r}]~:=~\int_{t_i}^{t_f} \!\mathrm{d}t~U({\bf r}(t),\dot{\bf r}(t),t),$$ and note that eq. (4) can be rewritten with the help of a functional derivative as $$\tag{6} F_i(t)~=~-\frac{\delta S_{\rm pot}}{\delta x^i(t)}, \qquad i~\in~\{1,2,3\}. $$ Technically at this point we need to impose pertinent boundary conditions (BC) (e.g. Dirichlet BC) at initial and final time, $t_i$ and $t_f$, respectively, in order for the functional derivative (6) to exists. These BC are implicitly assumed from now on. We dismiss the possibility that one would like to call a force with explicit time dependence for a conservative force. Let us therefore drop explicit time dependence from now on. However, see this Phys.SE post. IV) Seen in the light that velocity dependent potentials (4) are extremely useful in Lagrangian formulations, it is tempting to generalize the notion of a conservative force in the following non-standard way: A velocity dependent force field ${\bf F}={\bf F}({\bf r},{\bf v})$ is called a conservative force if it meets any of these three equivalent conditions: The force can be written as the negative functional gradient of a potential action $S_{\rm pot}[{\bf r}]=\int_{t_i}^{t_f} \!\mathrm{d}t~U({\bf r}(t),\dot{\bf r}(t))$: $$\tag{1'} {\bf F} ~=~ -\frac{\delta S_{\rm pot}}{\delta {\bf r}} ~\equiv~\frac{\mathrm d}{\mathrm dt} \frac{\partial U}{\partial {\bf v}} - \frac{\partial U}{\partial {\bf r}} . $$ Equivalently, condition (1') means that the one-form $\Phi:=\int_{t_i}^{t_f}\!\mathrm{d}t~ F_i(t)\mathrm{d}x^i(t)$ is exact in path space: $\Phi=-\mathrm{d}S_{\rm pot}$, where the exterior derivative is $\mathrm{d}:=\int_{t_i}^{t_f}\!\mathrm{d}t~ \mathrm{d}x^i(t)\frac{\delta}{\delta x^i(t)}$. The position space is simply connected and the force ${\bf F}$ satisfies a closedness condition wrt. 
functional derivatives $$\tag{2'} \frac{\delta F_i(t)}{\delta x^j(t^{\prime})} ~=~[(i,t) \longleftrightarrow (j,t^{\prime})]. $$ Equivalently, condition (2') means that the one-form $\Phi:=\int_{t_i}^{t_f}\!\mathrm{d}t~ F_i(t)\mathrm{d}x^i(t)$ is closed in path space: $\mathrm{d}\Phi=0$. The equivalent Helmholtz conditions [2] wrt. partial and total derivatives read $$ \frac{\partial F_i}{\partial x^j} -\frac{1}{2}\frac{\mathrm d}{\mathrm dt}\frac{\partial F_i}{\partial v^j} ~=~[i \longleftrightarrow j], \qquad \frac{\partial F_i}{\partial v^j}~=~-[i \longleftrightarrow j].$$ The following integral (3') over a two-cycle ${\rm r}: S^2 \to \mathbb{R}^3$ vanishes always: $$\tag{3'} \oint_{S^2}\!\mathrm{d}t \wedge \mathrm{d}s~ {\bf F}({\bf r}(t,s),\dot{\bf r}(t,s)) \cdot {\bf r}^{\prime}(t,s) ~=~ 0. $$ Here a dot and a prime mean differentiation wrt. $t$ and $s$, respectively. With this definition (1'-3') of a conservative force, e.g. the Lorentz force and the Coriolis force become conservative forces, while the friction force ${\bf F}=-k {\bf v}$ will stay a non-conservative force, cf. this and this Phys.SE answers. It should be said that there are straightforward generalizations of conditions (1'-3'): Firstly one may allow the force ${\bf F}={\bf F}({\bf r}, {\bf v}, {\bf a}, {\bf j},\ldots)$ to depend on acceleration, jerk, etc. Secondly, one can generalize to generalized positions $q^i$, generalized velocities $\dot{q}^i$, and generalized forces $Q_i$, etc. Finally, let us mention that this construction (1'-3') is in spirit related to the inverse problem for Lagrangian mechanics. References: H. Goldstein, Classical Mechanics, Chapter 1. H. Helmholtz, Ueber die physikalische Bedeutung des Prinzips der kleinsten Wirkung, J. für die reine u. angewandte Math. 100 (1887) 137.
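Condition (3) of the standard definition in section II can be illustrated numerically. The sketch below is not part of the original answer (the force fields and names are example choices of mine): it approximates $W = \oint {\bf F} \cdot \mathrm{d}{\bf r}$ around a unit circle for a spring-type force ${\bf F} = -{\bf \nabla} U$ with $U = \frac{k}{2}({x^2+y^2})$, where the work vanishes, and for the curl-carrying force ${\bf F} = (-y, x, 0)$, where it equals twice the enclosed area.

```python
import math

def loop_work(force, radius=1.0, n=20000):
    """Riemann-sum approximation of the virtual work W = loop integral of
    F . dr around a circle of the given radius, cf. condition (3)."""
    W = 0.0
    ds = 2 * math.pi / n
    for k in range(n):
        s = k * ds
        x, y = radius * math.cos(s), radius * math.sin(s)
        dx, dy = -radius * math.sin(s) * ds, radius * math.cos(s) * ds
        Fx, Fy = force(x, y)
        W += Fx * dx + Fy * dy
    return W

def spring(x, y):          # F = -grad U for U = k r^2 / 2: conservative
    return (-3.0 * x, -3.0 * y)

def swirl(x, y):           # curl F = 2 everywhere: not conservative
    return (-y, x)

print(loop_work(spring))   # approximately 0
print(loop_work(swirl))    # approximately 2*pi (twice the enclosed area)
```

Note that, as stressed above, the parameter $s$ here is a virtual loop parameter, not time.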
Disclaimer: this post is pretty much a comprehensive solution to exercise 1.19 from Structure and Interpretation of Computer Programs by Harold Abelson and Gerald Jay Sussman (the solution is written in Haskell instead of Lisp though).

Let us quickly recall the definition of Fibonacci numbers as well as a few well-known ways to calculate them. Fibonacci numbers are defined recursively, with \(n\)-th number denoted as

\( F_n = \left\{ \begin{array}{l l} 0 & \quad \text{if $n = 0$} \\ 1 & \quad \text{if $n = 1$} \\ F_{n-2} + F_{n-1} & \quad \text{otherwise.} \end{array} \right. \)

This definition can be translated to a Haskell program in a straightforward manner:

fib1 :: Int -> Integer
fib1 0 = 0
fib1 1 = 1
fib1 n = fib1 (n-2) + fib1 (n-1)

However, computation of \(n\)-th Fibonacci number with the above function requires \(O(log_2(F_n)n)\) space and \(O(log_2(F_n)2^n)\) time: recursive calls create a binary tree of depth \(n\) (hence the space requirement, as you need to keep track of \(n\) intermediate values, each taking up to \(O(log_2(F_n))\) bits), which contains at most \(2^n-1\) elements, where each node takes \(O(log_2(F_n))\) time to compute as we're using arbitrary precision arithmetic. It is worth noting that \(O(log_2(F_n))\) is in fact equal to \(O(n)\) as the growth of Fibonacci sequence is exponential, but the former will be used as it's more clear.

The other, significantly more performant way is to use the iterative approach, i.e. start with \(F_0\) and \(F_1\) and by appropriately adding them, compute your way up to \(F_n\):

fib2 :: Int -> Integer
fib2 n = go n 0 1
  where
    go k a b
      | k == 0 = a
      | otherwise = go (k - 1) b (a + b)

The initial state is \((0, 1) = (F_0, F_1)\) and each iteration we replace a state \((F_{n-k}, F_{n-k+1})\) with \((F_{n-k+1}, F_{n-k+2})\), eventually ending up with \((F_n, F_{n+1})\) and taking the first element of a pair as a result.
The performance of this variation is much better than its predecessor's, as its space complexity is \(O(log_2(F_n))\) and time complexity is \(O(log_2(F_n)n)\). Can we do better than that? As a matter of fact, we can.

First of all, observe that in the last solution, at each iteration we replaced a pair of two numbers with another pair of two numbers using elementary arithmetic operations, which is basically a transformation of a two-dimensional vector space (let us assume \(\mathbb{R}^2\)). In fact, this transformation is linear and its matrix representation is \(T = \begin{pmatrix} 0 & 1 \\ 1 & 1\end{pmatrix}\), because \(T \cdot \begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_n + F_{n+1} \end{pmatrix} = \begin{pmatrix} F_{n+1} \\ F_{n+2} \end{pmatrix}\). With this knowledge the formula for Fibonacci numbers can be written in a compact way: \(F_n = \pi_1\left( T^n \cdot \begin{pmatrix} F_0 \\ F_1 \end{pmatrix} \right)\), where \(\pi_1 : \mathbb{R}^2 \rightarrow \mathbb{R}\), \(\pi_1\left(\begin{pmatrix}v_1 \\ v_2\end{pmatrix}\right) = v_1\).

There are two routes to choose from at this point: we can either use an algorithm for fast exponentiation of matrices (such as square and multiply) to calculate \(T^n\) or try to apply more optimizations. We will go with the latter, because matrix multiplication is still a pretty heavy operation and we can improve the situation by exploiting the structure of \(T\). First, close observation of the powers of \(T\) allows us to notice the following:

Fact. \(T^n = \begin{pmatrix} F_{n-1} & F_n \\ F_n & F_{n+1} \end{pmatrix}\) for \(n \in \mathbb{N_+}\).

Proof. For \(n = 1\) it is easy to see that \(T = \begin{pmatrix}F_0 & F_1 \\ F_1 & F_2\end{pmatrix}\). Now assume that the fact is true for some \(n \in \mathbb{N_+}\).
Then \(T^{n+1} = T^n \cdot T = \begin{pmatrix} F_{n-1} & F_n \\ F_n & F_{n+1} \end{pmatrix} \cdot \begin{pmatrix} 0 & 1 \\ 1 & 1 \end{pmatrix} = \begin{pmatrix} F_n & F_{n-1} + F_n \\ F_{n+1} & F_n + F_{n+1} \end{pmatrix} = \begin{pmatrix} F_n & F_{n+1} \\ F_{n+1} & F_{n+2} \end{pmatrix}\).

Corollary. For \(n \in \mathbb{N_+}\) there exist \(p\) and \(q\) such that \(T^n = \begin{pmatrix} p & q \\ q & p+q \end{pmatrix}\).

Equipped with this knowledge, we can not only represent \(T^k\) for some \(k \in \mathbb{N_+}\) using only two numbers instead of four, but also derive a transformation of these numbers that corresponds to the computation of \(T^{2k}\).

Fact. Let \(p, q \in \mathbb{N}\). Then \(\begin{pmatrix} p & q \\ q & p+q \end{pmatrix}^2 = \begin{pmatrix} p' & q' \\ q' & p'+q' \end{pmatrix}\), where \(p' = p^2 + q^2\) and \(q' = (2p + q)q\).

Now we can put all of these together and construct the final solution:

fib3 :: Int -> Integer
fib3 n = go n 0 1 0 1
  where go k a b p q
          | k == 0    = a
          | odd k     = go (k - 1) (p*a + q*b) (q*a + (p+q)*b) p q
          | otherwise = go (k `div` 2) a b (p*p + q*q) ((2*p + q)*q)

Let us denote the \(i\)-th bit of \(n\) by \(n_i \in \{0,1\}\), where \(i \in \{0, \dots, \lfloor \log_2(n) \rfloor \}\). We start with \(\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} F_0 \\ F_1 \end{pmatrix}\) and \(\begin{pmatrix} p \\ q \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \end{pmatrix} \cong T\). Then we traverse the bits of \(n\) and return \(a\). Note that while iterating through \(n_i\):

\(\begin{pmatrix} p \\ q \end{pmatrix} \cong T^{2^i}\).

\(\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} F_m \\ F_{m+1} \end{pmatrix}\) for \(m = \displaystyle\sum_{j = 0}^{i-1} n_j \cdot 2^j\).

If \(n_i = 1\), \(\begin{pmatrix} F_m \\ F_{m+1} \end{pmatrix}\) is replaced with \(T^{2^i} \cdot \begin{pmatrix} F_m \\ F_{m+1} \end{pmatrix} = \begin{pmatrix} F_{m+2^i} \\ F_{m+2^i+1} \end{pmatrix}\).
Hence, in the end \(\begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} F_n \\ F_{n+1} \end{pmatrix}\), so the function correctly computes the \(n\)-th Fibonacci number. The space complexity of this solution is \(O(\log_2(F_n))\), whereas its time complexity is \(O(\log_2(F_n)(\log_2(n)+H(n)))\), with \(H(n)\) being the Hamming weight of \(n\).

Now, let us put all of the implementations together and measure their performance using the criterion library.

{-# OPTIONS_GHC -Wall #-}
{-# LANGUAGE BangPatterns #-}

module Main where

import Criterion.Main
import Criterion.Types

fib1 :: Int -> Integer
fib1 0 = 0
fib1 1 = 1
fib1 n = fib1 (n - 2) + fib1 (n - 1)

fib2 :: Int -> Integer
fib2 n = go n 0 1
  where go !k !a b
          | k == 0    = a
          | otherwise = go (k - 1) b (a + b)

fib3 :: Int -> Integer
fib3 n = go n 0 1 0 1
  where go !k !a b !p !q
          | k == 0    = a
          | odd k     = go (k - 1) (p*a + q*b) (q*a + (p+q)*b) p q
          | otherwise = go (k `div` 2) a b (p*p + q*q) ((2*p + q)*q)

main :: IO ()
main = defaultMainWith (defaultConfig { timeLimit = 2 })
  [ bgroup "fib1" $ map (benchmark fib1) $ [10, 20] ++ [30..42]
  , bgroup "fib2" $ map (benchmark fib2) $ 10000 : map (100000*) [1..10]
  , bgroup "fib3" $ map (benchmark fib3) $ 1000000 : map (10000000*) ([1..10] ++ [20])
  ]
  where benchmark fib n = bench (show n) $ whnf fib n

The above program was compiled with GHC 7.10.2 and run on an Intel Core i7 3770. The HTML report generated by it is available here. In particular, after substituting the main function with:

main :: IO ()
main = defaultMainWith (defaultConfig { timeLimit = 2 })
  [ bgroup "fib3" [benchmark fib3 1000000000] ]
  where benchmark fib n = bench (show n) $ whnf fib n

we can see that the final implementation is able to calculate the billionth Fibonacci number in a very reasonable time:

benchmarking fib3/1000000000
time                 30.82 s    (29.86 s .. 31.97 s)
                     1.000 R²   (0.999 R² .. 1.000 R²)
mean                 30.34 s    (29.96 s .. 30.56 s)
std dev              345.1 ms   (0.0 s .. 387.0 ms)
variance introduced by outliers: 19% (moderately inflated)
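For readers who would like to experiment with the algorithm outside of Haskell, here is a small Python port (mine, not part of the original post) of fib3's state machine, cross-checked against the plain iteration:

```python
# A Python port of fib3: (p, q) encodes the matrix T^(2^i) as
# [[p, q], [q, p+q]]; (a, b) carries the running pair (F_m, F_{m+1}).
def fib_fast(n):
    a, b = 0, 1   # (F_0, F_1)
    p, q = 0, 1   # encodes T^1
    k = n
    while k > 0:
        if k % 2 == 1:
            # apply the current matrix power to (a, b) and clear the bit
            a, b = p * a + q * b, q * a + (p + q) * b
            k -= 1
        else:
            # square the matrix: p' = p^2 + q^2, q' = (2p + q) q
            p, q = p * p + q * q, (2 * p + q) * q
            k //= 2
    return a

def fib_iter(n):
    # the straightforward O(n) iteration, mirroring fib2
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
```

Both functions agree on all inputs; only the number of arithmetic operations on big integers differs.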
Acoustic Topology Optimization with Thermoviscous Losses

Today, guest blogger René Christensen of GN Hearing discusses including thermoviscous losses in the topology optimization of microacoustic devices.

Topology optimization helps engineers design applications in an optimized manner with respect to certain a priori objectives. Mainly used in structural mechanics, topology optimization is also used for thermal, electromagnetics, and acoustics applications. One physics that was missing from this list until last year is microacoustics. This blog post describes a new method for including thermoviscous losses in microacoustics topology optimization.

Standard Acoustic Topology Optimization

A previous blog post on acoustic topology optimization outlined the introductory theory and gave a couple of examples. The description of the acoustics was the standard Helmholtz wave equation. With this formulation, we can perform topology optimization for many different applications, such as loudspeaker cabinets, waveguides, room interiors, reflector arrangements, and similar large-scale geometries. The governing equation is the standard wave equation with material parameters given in terms of the density \rho and the bulk modulus K. For topology optimization, the density and the bulk modulus are interpolated via a variable, \epsilon. This interpolation variable ideally takes binary values: 0 represents air and 1 represents a solid. During the optimization procedure, however, its value follows an interpolation scheme, such as the solid isotropic material with penalization (SIMP) model, as shown in Figure 1.

Figure 1: The density and bulk modulus interpolation for standard acoustic topology optimization. The units have been omitted to have both values in the same plot.

Using this approach will work for applications where the so-called thermoviscous losses (close to walls, in the acoustic boundary layers) are of little importance.
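The interpolation idea behind Figure 1 can be sketched in a few lines. The following Python snippet is a minimal illustration of a SIMP-style interpolation; the endpoint material values and the penalty exponent are my own assumptions for illustration, not the ones used in the article:

```python
# A minimal sketch of SIMP-style interpolation of acoustic material
# parameters between air (eps = 0) and solid (eps = 1). The endpoint
# values and the penalty exponent are illustrative assumptions.
def simp(eps, v_air, v_solid, penal=3.0):
    """Penalized interpolation: intermediate eps is pushed toward v_air."""
    return v_air + eps**penal * (v_solid - v_air)

rho_air, rho_solid = 1.2, 2700.0   # density [kg/m^3], assumed values
K_air, K_solid = 1.4e5, 7.0e10     # bulk modulus [Pa], assumed values

rho = lambda eps: simp(eps, rho_air, rho_solid)
K = lambda eps: simp(eps, K_air, K_solid)
```

The penalization (exponent above 1) is what discourages intermediate "gray" material during the optimization, nudging the design toward a crisp air/solid layout.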
The optimization domain can be coupled to narrow regions described by, for example, a homogenized model (this is the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, if the narrow regions where the thermoviscous losses occur change shape themselves, this procedure is no longer valid. An example is when the cross section of a waveguide changes shape.

Thermoviscous Acoustics (Microacoustics)

For microacoustic applications, such as hearing aids, mobile phones, and certain metamaterial geometries, the acoustic formulation typically needs to include the so-called thermoviscous losses explicitly. This is because the main losses occur in the acoustic boundary layer near walls. Figure 2 below illustrates these effects.

Figure 2: The volume field is the acoustic pressure, the surface field is the temperature variation, and the arrows indicate the velocity.

An acoustic wave travels from the bottom to the top of a tube with a circular cross section. The pressure is shown in a ¾-revolution plot. The arrows indicate the particle velocity at this particular frequency. Near the boundary, the velocity is low and tends to zero on the boundary, whereas in the bulk, it takes on the velocity expected from standard acoustics via Euler's equation. At the boundary, the velocity is zero because of viscosity, since the air "sticks" to the boundary. Adjacent particles are slowed down, which leads to an overall loss in energy, or rather a conversion from acoustic to thermal energy (viscous dissipation due to shear). In the bulk, however, the molecules move freely.

Governing Equations of Thermoviscous Acoustics

Modeling microacoustics in detail, including the losses associated with the acoustic boundary layers, requires solving the set of linearized Navier-Stokes equations with quiescent conditions.
These equations are implemented in the Thermoviscous Acoustics physics interfaces available in the Acoustics Module add-on to the COMSOL Multiphysics® software. However, this formulation is not suited for topology optimization, where certain assumptions can be used instead. A formulation based on a Helmholtz decomposition is presented in Ref. 1. The formulation is valid in many microacoustic applications and allows decoupling of the thermal, viscous, and compressible (pressure) waves. An approximate, yet accurate, expression (Ref. 1) links the velocity to the pressure gradient through the viscous field \Psi_{v}, a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions.

In the figure above, the surface color plot shows the acoustic temperature variation. The variation on the boundary is zero due to the high thermal conductivity of the solid wall, whereas in the bulk, the temperature variation can be calculated via the isentropic energy equation. Again, the relationship between the temperature variation and the acoustic pressure can be written in a general form (Ref. 1) through the thermal field \Psi_{h}, a scalar, nondimensional field that describes the variation between bulk conditions and boundary conditions. As will be shown later, these viscous and thermal fields are essential for setting up the topology optimization scheme.

Topology Optimization for Thermoviscous Acoustics Applications

For thermoviscous acoustics, there is no established interpolation scheme, as opposed to standard acoustic topology optimization. Since there is no one-equation system that accurately describes the thermoviscous physics (typically, it requires three governing equations), there are no obvious variables to interpolate. However, I will describe a novel procedure in this section. For simplicity, we look only at wave propagation in a waveguide of constant cross section.
This is equivalent to the so-called Low Reduced Frequency model, which may be known to those working with microacoustics. The viscous field can be calculated (Ref. 1) via a governing equation, referred to here as Equation (1), in which \Delta_{cd} is the Laplacian in the cross-sectional direction only. For certain simple geometries, the fields can be calculated analytically (as done in the Narrow Region Acoustics feature in the Pressure Acoustics, Frequency Domain interface). However, when used for topology optimization, they must be calculated numerically for each step in the optimization procedure.

In standard acoustic topology optimization, an interpolation variable varies between 0 and 1, where 0 represents air and 1 represents a solid. To have a similar interpolation scheme for the thermoviscoacoustic topology optimization, I came up with a heuristic approach, where the thermal and viscous fields are used in the interpolation strategy. The two typical boundary conditions for the viscous field (Ref. 1) are a no-slip condition (\Psi_{v} = 0) at a solid wall and a symmetry condition at an interface with more air. These boundary conditions give us insight into how to perform the optimization procedure, since an air-solid interface can be represented by the former boundary condition and an air-air interface by the latter.

We write the governing equation in a more general manner, with an interpolated coefficient a_v in front of the viscous field and an interpolated source term f_v. We already know that for air domains, (a_v, f_v) = (1, 1), since that gives us the original Equation (1). If we instead set a_v to a large value, so that the gradient term becomes insignificant, and set f_v to zero, we get \Psi_{v} = 0. This corresponds exactly to the boundary condition for no-slip boundaries, just as at a solid-air interface, but obtained via the governing equation. We need this property, since we have no way of applying explicit boundary conditions during the optimization. So, for solids, (a_v, f_v) should have the values ("large", 0). Thus, we have established our interpolation extremes: (a_v, f_v) = (1, 1) for air and (a_v, f_v) = ("large", 0) for solids.

I carried out a comparison between the explicit boundary conditions and the interpolation extremes, with the test geometry shown in Figure 3.
On the left side, boundary conditions are used, whereas in the adjacent domains on the right, the suggested values of a_v and f_v are input.

Figure 3: On the left, standard boundary conditions are applied. On the right, black domains indicate a modified field equation that mimics a solid boundary. White domains are air.

The field in all domains is now calculated for a frequency with a boundary layer thick enough to visually take up some of the domain. It can be seen that the field is symmetric, which means that the extreme field values can describe either air or a solid. In a sense, that is comparable to using the actual corresponding boundary conditions.

Figure 4: The resulting field with contours for the setup in Figure 3.

The actual interpolation between the extremes is done via SIMP or RAMP schemes (Ref. 2), for example, as with the standard acoustic topology optimization. The viscous field, as well as the thermal field, can be linked to the acoustic pressure variable via their governing equations. With this, the world's first acoustic topology optimization scheme that incorporates accurate thermoviscous losses has come to fruition.

Optimizing an Acoustic Loss Response

Here, we give an example that shows how the optimization method can be used for a practical case. A tube with a hexagonally shaped cross section has a certain acoustic loss due to viscosity effects. Each side length in the hexagon is approximately 1.1 mm, which gives an area equivalent to that of a circle with a radius of 1 mm. Between 100 and 1000 Hz, this acoustic loss increases by a factor of approximately 2.6, as shown in Figure 7. Now, we seek to find an optimal topology so that we obtain a flatter acoustic loss response in this frequency range, with no regard to the actual loss value. The resulting geometry looks like this:

Figure 5: The topology for a maximally flat acoustic loss response and resulting viscous field at 1000 Hz.
A simpler geometry that resembles the optimized topology was created, where explicit boundary conditions can be applied.

Figure 6: A simplified representation of the optimized topology, with the viscous field at 1000 Hz.

The normalized acoustic losses for the initial hexagonal geometry and the topology-optimized geometry are compared in Figure 7. For each tube, the loss is normalized to the value at 100 Hz.

Figure 7: The acoustic loss normalized to the value at 100 Hz for the initial cross section (dashed) and the topology-optimized geometry (solid), respectively.

For the optimized topology, the acoustic loss at 1000 Hz is only 1.5 times higher than at 100 Hz, compared to the 2.6 times for the initial geometry. The overall loss is larger for the optimized geometry, but as mentioned before, we do not consider this in the example. This novel topology optimization strategy can be expanded to a more general 1D method, where pressure can be used directly in the objective function. A topology optimization scheme for general 3D geometries has also been established, but its implementation is still ongoing. It would be very advantageous for those of us working with microacoustics to focus on improving topology optimization, in both universities and industry. I hope to see many advances in this area in the future.

References

W.R. Kampinga, Y.H. Wijnant, A. de Boer, "An Efficient Finite Element Model for Viscothermal Acoustics," Acta Acustica united with Acustica, vol. 97, pp. 618–631, 2011.

M.P. Bendsoe, O. Sigmund, Topology Optimization: Theory, Methods, and Applications, Springer, 2003.

About the Guest Author

René Christensen has been working in the field of vibroacoustics for more than a decade, both as a consultant (iCapture ApS) and as an engineer in the hearing aid industry (Oticon A/S, GN Hearing A/S). He has a special interest in the modeling of viscothermal effects in microacoustics, which was also the topic of his PhD.
René joined the hardware platform R&D acoustics team at GN Hearing as a senior acoustic engineer in 2015. In this role, he works with the design and optimization of hearing aids.
Let's look at some examples of feasibility relations! Feasibility relations work between preorders, but for simplicity suppose we have two posets \(X\) and \(Y\). We can draw them using Hasse diagrams:

Here an arrow means that one element is less than or equal to another: for example, the arrow \(S \to W\) means that \(S \le W\). But we don't bother to draw all possible inequalities as arrows, just the bare minimum. For example, obviously \(S \le S\) by reflexivity, but we don't bother to draw arrows from each element to itself. Also \(S \le N\) follows from \(S \le E\) and \(E \le N\) by transitivity, but we don't bother to draw arrows that follow from others using transitivity. This reduces clutter.

(Usually in a Hasse diagram we draw bigger elements near the top, but notice that \(e \in Y\) is not bigger than the other elements of \(Y\). In fact it's neither \(\ge\) nor \(\le\) any other element of \(Y\) - it's just floating in space all by itself. That's perfectly allowed in a poset.)

Now, we saw that a feasibility relation from \(X\) to \(Y\) is a special sort of relation from \(X\) to \(Y\). We can think of a relation from \(X\) to \(Y\) as a function \(\Phi\) for which \(\Phi(x,y)\) is either \(\text{true}\) or \(\text{false}\) for each pair of elements \( x \in X, y \in Y\). Then a feasibility relation is a relation such that:

If \(\Phi(x,y) = \text{true}\) and \(x' \le x\) then \(\Phi(x',y) = \text{true}\).

If \(\Phi(x,y) = \text{true}\) and \(y \le y'\) then \(\Phi(x,y') = \text{true}\).

Fong and Spivak have a cute trick for drawing feasibility relations: when they draw a blue dashed arrow from \(x \in X\) to \(y \in Y\) it means that \(\Phi(x,y) = \text{true}\). But again, they leave out blue dashed arrows that would follow from rules 1 and 2, to reduce clutter! Let's do an example:

So, we see \(\Phi(E,b) = \text{true}\).
But we can use the two rules to draw further conclusions from this:

Since \(\Phi(E,b) = \text{true}\) and \(S \le E\), we have \(\Phi(S,b) = \text{true}\) by rule 1.

Since \(\Phi(S,b) = \text{true}\) and \(b \le d\), we have \(\Phi(S,d) = \text{true}\) by rule 2.

and so on.

Puzzle 171. Is \(\Phi(E,c) = \text{true}\)?

Puzzle 172. Is \(\Phi(E,e) = \text{true}\)?

I hope you get the idea! We can think of the arrows in our Hasse diagrams as one-way streets going between cities in two countries, \(X\) and \(Y\). And we can think of the blue dashed arrows as one-way plane flights from cities in \(X\) to cities in \(Y\). Then \(\Phi(x,y) = \text{true}\) if we can get from \(x \in X\) to \(y \in Y\) using any combination of streets and plane flights! That's one reason \(\Phi\) is called a feasibility relation.

What's cool is that rules 1 and 2 can also be expressed by saying $$ \Phi : X^{\text{op}} \times Y \to \mathbf{Bool} $$ is a monotone function. And it's especially cool that we need the '\(\text{op}\)' over the \(X\). Make sure you understand that: the \(\text{op}\) over the \(X\) but not the \(Y\) is why we can drive to an airport in \(X\), then take a plane, then drive from an airport in \(Y\).

Here are some ways to get lots of feasibility relations. Suppose \(X\) and \(Y\) are preorders.

Puzzle 173. Suppose \(f : X \to Y \) is a monotone function from \(X\) to \(Y\). Prove that there is a feasibility relation \(\Phi\) from \(X\) to \(Y\) given by $$ \Phi(x,y) = \text{true} \text{ if and only if } f(x) \le y. $$

Puzzle 174. Suppose \(g: Y \to X \) is a monotone function from \(Y\) to \(X\). Prove that there is a feasibility relation \(\Psi\) from \(X\) to \(Y\) given by $$ \Psi(x,y) = \text{true} \text{ if and only if } x \le g(y). $$

Puzzle 175. Suppose \(f : X \to Y\) and \(g : Y \to X\) are monotone functions, and use them to build feasibility relations \(\Phi\) and \(\Psi\) as in the previous two puzzles. When is $$ \Phi = \Psi ? $$

To read other lectures go here.
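If you like checking rules 1 and 2 by brute force, here is a little Python sketch of the construction in Puzzle 173 on a made-up pair of posets (the Hasse diagrams above are pictures, so the elements and order relations below are my own stand-ins, not the ones in the figures):

```python
# A sketch of Puzzle 173 on small assumed posets:
# X = {S, E, N} with S <= E <= N; Y = {a, b} with a <= b.
# Each leq_* set lists all pairs (u, v) with u <= v, reflexive pairs included.
leq_X = {("S", "S"), ("E", "E"), ("N", "N"), ("S", "E"), ("E", "N"), ("S", "N")}
leq_Y = {("a", "a"), ("b", "b"), ("a", "b")}

f = {"S": "a", "E": "a", "N": "b"}  # a monotone function X -> Y

def Phi(x, y):
    """Phi(x, y) = true iff f(x) <= y, as in Puzzle 173."""
    return (f[x], y) in leq_Y

# Rule 1: Phi(x, y) = true and x2 <= x imply Phi(x2, y) = true.
rule1 = all(Phi(x2, y) for (x2, x) in leq_X for y in "ab" if Phi(x, y))
# Rule 2: Phi(x, y) = true and y <= y2 imply Phi(x, y2) = true.
rule2 = all(Phi(x, y2) for x in "SEN" for (y, y2) in leq_Y if Phi(x, y))
```

Both `rule1` and `rule2` come out true for this example, which is exactly what the monotonicity of \(\Phi : X^{\text{op}} \times Y \to \mathbf{Bool}\) predicts.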
Assume $n$ is an integer. If the square root of $n$ is rational, prove that $n$ is a perfect square.

To prove the above statement, I used a trick rather than the standard way of using the unique factorization theorem over the integers. My professor claims that my proof is wrong, but I don't know where it went wrong. The proof is as follows:

If $\sqrt{n}$ is rational, then there exist integers $p,q$ with $\gcd(p,q) = 1$ such that $\sqrt{n} = \dfrac{p}{q}$. Therefore, by multiplying both sides by $\sqrt{n}$, I obtain $n = \dfrac{p}{q}\sqrt{n}$, or $\sqrt{n} = \dfrac{qn}{p}$. Since $\sqrt{n} = \dfrac{p}{q}$ and $\sqrt{n} = \dfrac{qn}{p}$, we can deduce that $\dfrac{p}{q} = \dfrac{qn}{p}$. I believe the proof is fine up to this part.

Now I claim the following: Since $\gcd(p,q) = 1$, $\frac{p}{q}$ is in lowest terms. Since $p,q,n$ are all positive integers, there must exist some positive integer $r$ such that $pr = qn$ and $qr = p$. (I believe this is the fishy part. Perhaps this can't be justified straight from the definition of equivalence classes?) As such, we know $qr = p$, or $r = \dfrac{p}{q} = \sqrt{n} \implies r^2 = n$. Since $r$ is an integer, $n$ must be a perfect square.

Where did my proof go wrong?
To show two sets are equal, you should show that $A\subseteq B$ and $B\subseteq A$. This implies that $A=B$.

If $A=\varnothing$ and $B=\varnothing$, then try an element-chasing proof to show that $A=B$:

($\to$): If $x\in A$, then $x\in B$. Thus, $A\subseteq B$. $\qquad$ [Vacuously true]

($\leftarrow$): If $x\in B$, then $x\in A$. Thus, $B\subseteq A$. $\qquad$ [Vacuously true]

Thus, by mutual subset inclusion, we have that $A=B$. This conclusion is pretty lame though, as it is an example of a so-called vacuous truth. The implication $p\to q$ is only false when $p$ is true and $q$ is false. Thus, assuming anything to be in an empty set will give you all sorts of bizarre conclusions.

Addendum: Some of the confusion seems to be rooted in what it means to be a subset as opposed to an element. Thus, I am going to list several claims where the goal is to figure out whether or not each claim is true or false (hopefully this may help the OP and some other users). Answers are provided to the side of each claim.
Claims:

(a) $0\in\varnothing\qquad\qquad$ [False]

(b) $\varnothing\in\{0\}\qquad\qquad$ [False]

(c) $\{0\}\subset\varnothing\qquad\qquad$ [False]

(d) $\varnothing\subset\{0\}\qquad\qquad$ [True]

(e) $\{0\}\in\{0\}\qquad\qquad$ [False]

(f) $\{0\}\subset\{0\}\qquad\qquad$ [False]

(g) $\{\varnothing\}\subseteq\{\varnothing\}\qquad\qquad$ [True]

(h) $\varnothing\in\{\varnothing\}\qquad\qquad$ [True]

(i) $\varnothing\in\{\varnothing,\{\varnothing\}\}\qquad\qquad$ [True]

(j) $\{\varnothing\}\in\{\varnothing\}\qquad\qquad$ [False]

(k) $\{\varnothing\}\in\{\{\varnothing\}\}\qquad\qquad$ [True]

(l) $\{\varnothing\}\subset\{\varnothing,\{\varnothing\}\}\qquad\qquad$ [True]

(m) $\{\{\varnothing\}\}\subset\{\varnothing,\{\varnothing\}\}\qquad\qquad$ [True]

(n) $\{\{\varnothing\}\}\subset\{\{\varnothing\},\{\varnothing\}\}\qquad\qquad$ [False]

Note: Below, $x$ is meant simply to denote a letter, not a set (which is often indicated by writing a capital letter, as was done in the initial explanation). For (t), if $x$ did denote a set, then $x=\varnothing$ would make (t) true as opposed to false.

(o) $x\in\{x\}\qquad\qquad$ [True]

(p) $\{x\}\subseteq\{x\}\qquad\qquad$ [True]

(q) $\{x\}\in\{x\}\qquad\qquad$ [False]

(r) $\{x\}\in\{\{x\}\}\qquad\qquad$ [True]

(s) $\varnothing\subseteq\{x\}\qquad\qquad$ [True]

(t) $\varnothing\in\{x\}\qquad\qquad$ [False]

(u) $\varnothing\in\varnothing\qquad\qquad$ [False]

(v) $\varnothing\subseteq\varnothing\qquad\qquad$ [True]
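Several of these claims can be restated executably with Python frozensets, where "in" plays the role of $\in$, "<=" of $\subseteq$, and "<" of $\subset$ (proper subset). This is just an illustration, not part of the original answer:

```python
# Python frozensets mirror a few of the claims above:
# "in" is element-of, "<=" is subset, "<" is proper subset.
empty = frozenset()                   # the empty set
zero_set = frozenset({0})             # {0}
empty_in_braces = frozenset({empty})  # the set containing the empty set

checks = {
    "(a) 0 in empty":         0 in empty,                # False
    "(d) empty < {0}":        empty < zero_set,          # True
    "(f) {0} < {0}":          zero_set < zero_set,       # False
    "(h) empty in {empty}":   empty in empty_in_braces,  # True
    "(v) empty <= empty":     empty <= empty,            # True
}
```

Frozensets are used rather than plain sets because only hashable objects can be elements of another set, which is exactly what claims like (h) require.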
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09)
Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20)
The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at √s = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06)
We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ...

Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27)
The measurement of primary π±, K±, p and p̄ production at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV performed with A Large Ion Collider Experiment (ALICE) at the Large Hadron Collider (LHC) is reported. ...
Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03)
We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ...

Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09)
Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11)
The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ...

K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...
\(\newcommand{\cauchy}{\boldsymbol{\sigma}}\) \(\newcommand{\strain}{\boldsymbol{\varepsilon}}\) \(\newcommand{\uV}{\boldsymbol}\) \(\newcommand{\uT}{\boldsymbol}\) \(\newcommand{\defu}{\boldsymbol{u}}\)

$J$-controlled crack growth

Resistance curve & crack stability

In the lecture on crack growth, we introduced the concept of the resistance curve. As a reminder, in case of crack growth the active plastic zone moves with the crack tip, meaning that the crack is opening in a zone affected by permanent plastic deformations: the plastic wake. This makes the crack more difficult to open, which corresponds to an increase of the apparent fracture energy $G_C$ with the crack propagation $\Delta a$. This defines the resistance curve $J_R(a)$, and its evaluation in the case of ductile materials has been detailed in the previous section in terms of the evolution of the $J$-integral during a toughness test. It is thus possible to assess crack stability, in the same way as was done in LEFM, by comparing the evolution of $J(a)$ with the resistance curve $J_R(a)$ in terms of their derivatives: $\partial_a J(a) <?>\partial_a J_R(a)$. However, as was discussed at length, the theory behind the $J$-integral assumes proportional loading, which is not the case when a crack propagates, since this involves partial unloading. Therefore, the question which remains is: "Under which condition can the $J$-integral be evaluated during crack propagation?"

$J$-controlled region

Definition

Let us assume that the crack propagates over a distance $da$, as illustrated in Picture IX.29. Clearly, since the crack lips are stress-free, there is a "green region" in the vicinity of the crack increment, which scales with $da$, in which the material is elastically unloaded. As a consequence, there exists a "light blue" zone of radius $r^*$ in which the loading is non-proportional during the crack propagation.
Besides, remembering that the HRR solution is an asymptotic one, the stress and strain fields are governed by the $J$-integral in a bounded zone of radius $r^{**}$ from the crack tip. Therefore, for the $J$-integral to be meaningful during crack propagation, there must still exist a "dark blue" annular region, delimited by the radii $r^*$ and $r^{**}$, in which the stress and strain fields are governed by the $J$-integral: this is the $J$-controlled region.

Existence

One obvious condition for the $J$-controlled region to exist is that the crack increment remains limited in comparison with the zone in which the HRR theory is valid: \begin{equation} da \ll r^{**} .\label{eq:JcontrolledCond1}\end{equation} The other condition is assessed by evaluating the size of the zone of non-proportional loading of radius $r^*$. Assuming it exists, within the $J$-controlled region the strain follows \begin{equation}\label{eq:strain_Jintbis} \strain=\frac{\sigma_p^0\alpha}{E}\left(\frac{J E}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{n}{n+1}} \tilde{\strain}\left(\theta,\,n\right). \end{equation} Since the strain field depends on both the crack length and the $J$-integral, one has \begin{equation}d\strain = \partial_a \strain da + \partial_J \strain dJ.\label{eq:dstrain} \end{equation} The second term of the right-hand side of Eq. (\ref{eq:dstrain}) is directly obtained from Eq. (\ref{eq:strain_Jintbis}), with \begin{equation} \partial_J \strain = \frac{n}{n+1}\frac{\sigma_p^0\alpha}{E} \left(\frac{JE}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{n}{n+1}}\frac{1}{J} \tilde{\strain}\left(\theta,\,n\right).\label{eq:dstraindJ} \end{equation} To evaluate the first term of the right-hand side of Eq. (\ref{eq:dstrain}), we consider a polar frame attached to the crack tip, as illustrated in Picture IX.30, with $x'=x-a$.
Therefore, the derivative with respect to the crack advance reads \begin{equation} \partial_a = -\partial_{x'} = - \cos{\theta}\partial_r +\frac{\sin{\theta}}{r}\partial_\theta. \label{eq:chgevar} \end{equation} We thus need to evaluate the derivative of the strain field (\ref{eq:strain_Jintbis}) with respect to the distance $r$, with \begin{equation} \partial_r \strain = - \frac{n}{n+1}\frac{\sigma_p^0\alpha}{E} \left(\frac{JE}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{n}{n+1}}\frac{1}{r} \tilde{\strain}\left(\theta,\,n\right),\label{eq:dstraindr} \end{equation} and with respect to the direction $\theta$, with \begin{equation} \partial_\theta \strain = \frac{\sigma_p^0\alpha}{E} \left(\frac{JE}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{n}{n+1}} \partial_\theta\tilde{\strain}\left(\theta,\,n\right). \label{eq:dstraindtheta}\end{equation} Therefore, defining \begin{equation} \tilde{\tilde{\strain}}\left(\theta,\,n\right) = \frac{n}{n+1} \cos{\theta}\tilde{\strain}+\sin{\theta}\partial_\theta\tilde{\strain} ,\label{eq:tildetildestrain} \end{equation} the combination of Eqs. (\ref{eq:chgevar}-\ref{eq:dstraindtheta}) leads to \begin{equation} \partial_a \strain = \frac{\sigma_p^0\alpha}{E} \left(\frac{JE}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{n}{n+1}}\frac{1}{r} \tilde{\tilde{\strain}}\left(\theta,\,n\right) .\label{eq:dstrainda}\end{equation} Eventually, using Eqs. (\ref{eq:dstraindJ}) and (\ref{eq:dstrainda}), the relation (\ref{eq:dstrain}) becomes \begin{equation} d \strain = \frac{\sigma_p^0\alpha}{E} \left(\frac{JE}{r\alpha\left(\sigma_p^0\right)^2I_n}\right)^{\frac{n}{n+1}}\left(\frac{n}{n+1}\frac{dJ}{J} \tilde{\strain} +\frac{da}{r} \tilde{\tilde{\strain}}\right).
\label{eq:dstrainfinal}\end{equation} Since both $\tilde{\strain}$ and $\tilde{\tilde{\strain}}$ are independent of $r$ and of the same order of magnitude, and since only the term $\frac{da}{r}$ introduces non-proportional loading (through its $\frac{1}{r}$ dependence), the existence of the $J$-controlled region requires that this latter term be negligible, i.e. that \begin{equation} \frac{dJ}{J} >> \frac{da}{r}, \label{eq:existenceJzonetmp}\end{equation} or equivalently \begin{equation} r >> \frac{J}{\frac{dJ}{da}}. \label{eq:existenceJzone}\end{equation} During a fracture toughness test, the resistance curve $J_R(\Delta a)$ can be extracted, allowing one to evaluate at crack initiation the material characteristic dimension \begin{equation} D = J_{IC}/\left.\frac{dJ}{d\Delta a}\right|_c. \label{eq:Dtoughness} \end{equation} As a result, the second condition, besides Eq. (\ref{eq:JcontrolledCond1}), for a $J$-controlled zone to exist reads \begin{equation} D << r^{**}.\label{eq:JcontrolledCond2} \end{equation} In practice, the length $r^{**}$ is test-dependent and can be obtained by using the finite-element method. Typically it is expressed as a fraction of the ligament $L-a$ or of another characteristic length such as the plastic zone size $r_p$.

Crack stability and tearing

Assuming both conditions (\ref{eq:JcontrolledCond1}) and (\ref{eq:JcontrolledCond2}) are satisfied, there exists a $J$-controlled zone during crack propagation, and the crack stability is assessed by comparing the growth of the $J$-integral, which is geometry- and loading-dependent, with the slope of the material resistance curve, i.e. 
\begin{equation} \frac{dJ\left(a,\,Q\right)}{da} <\left.\frac{dJ_R}{d a}\right|_{\Delta a=0} .\label{eq:JstabilityTmp}\end{equation} In order to use non-dimensional quantities, one can define the geometry- and loading-dependent tearing as \begin{equation} T =\frac{E}{\left(\sigma_p^0\right)^2} \frac{dJ\left(a,\,Q\right)}{da}\,, \label{eq:tearing}\end{equation} and the critical tearing, which is a material value, although it might depend on the test specimen considered for its evaluation, \begin{equation} T_0 =\frac{E}{\left(\sigma_p^0\right)^2} \left.\frac{dJ_R}{da}\right|_{\Delta a=0}\,.\label{eq:tearing0}\end{equation} The crack stability condition (\ref{eq:JstabilityTmp}) thus becomes \begin{equation} T < T_0 .\label{eq:Jstability}\end{equation} Typical values obtained from toughness tests on a Compact Tension specimen are reported below for indication purposes only. In particular, note the difference in values between ductile materials such as steel and the more brittle aluminum alloy. Note also the effect of the temperature, which is not monotonic.

| Material | Specimen | Temperature [°C] | $\frac{\sigma_p^0+\sigma_{\text{TS}}}{2}$ [MPa] | $J_{IC}$ [MPa$\cdot$m] | $\frac{dJ}{da}$ [MPa] | $T$ [-] | $D$ [mm] |
|---|---|---|---|---|---|---|---|
| ASTM 470 (Cr-Mo-V) | CT | 149 | 621 | 0.084 | 48.5 | 25.5 | 1.78 |
| | | 260 | 621 | 0.074 | 49 | 25.8 | 1.52 |
| | | 427 | 592 | 0.088 | 84 | 48.6 | 1.01 |
| ASTM A453 (Stainless steel) | CT | 24 | 931 | 0.124 | 141 | 32.8 | |
| | | 204 | 820 | 0.107 | 84 | 25.6 | 1.27 |
| | | 427 | 772 | 0.092 | 65 | 22.4 | |
| 6061-T651 (Aluminum) | CT | 24 | 298.2 | 0.0178 | 3.45 | 2.79 | 5 |
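The tabulated values of $T$ and $D$ can be recovered with a few lines of Python. A minimal sketch, assuming a Young's modulus of 207 GPa for the steel (not given in the table, which explains the small deviations from the tabulated $T$):

```python
def tearing_modulus(E, sigma_flow, dJda):
    # Non-dimensional tearing T = E/sigma_flow^2 * dJ/da (Eq. tearing);
    # consistent units, here MPa for E, sigma_flow and dJ/da.
    return E * dJda / sigma_flow**2

def char_length_mm(J_Ic, dJda):
    # Characteristic dimension D = J_Ic/(dJ/da) (Eq. Dtoughness), m -> mm.
    return 1e3 * J_Ic / dJda

# ASTM 470 (Cr-Mo-V) at 149 C, from the table: flow stress 621 MPa,
# J_IC = 0.084 MPa*m, dJ_R/da = 48.5 MPa; E = 207 GPa is an assumed value.
T0 = tearing_modulus(E=207e3, sigma_flow=621.0, dJda=48.5)  # ~26, vs 25.5 tabulated
D = char_length_mm(J_Ic=0.084, dJda=48.5)                   # ~1.73 mm, vs 1.78 tabulated

# Stability check for a hypothetical structural loading with dJ/da = 30 MPa:
T = tearing_modulus(E=207e3, sigma_flow=621.0, dJda=30.0)
stable = T < T0   # crack propagation is stable while T < T0 (Eq. Jstability)
```

The applied $dJ/da$ of 30 MPa is purely illustrative; in a real assessment it comes from the structural analysis of the cracked component.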
Light Gathering Power of a Telescope It is necessary that a telescope have a high magnification to get a good image of astronomical objects, but having a high magnification is only part of getting a good image, not least since distortions of the atmosphere limit the size of objects that can be resolved to 1 arc second or greater. In fact, for a telescope in normal adjustment, the magnification is equal to the ratio of the focal lengths of the primary, or objective, lens and the secondary, or eyepiece, lens, so it is possible to claim a high magnification just by having an eyepiece lens of very short focal length. Some cheap telescopes claim magnifications of several hundred by this method, but this is misleading. As important, and often more important than magnification, is light gathering power. Light gathering power is proportional to the area of the telescope's aperture. If the diameter of a telescope lens doubles, the light gathering power increases by a factor of 4. If the diameter of a telescope increases by a factor of 3, the light gathering power increases by a factor of 9, and so on. Having a high magnification telescope with a small aperture, or primary lens, will only increase the size of the blurry image. To get a good image, you must increase the size of the aperture. This will increase the light gathering power, and allow smaller detail to be resolved according to Rayleigh's criterion: \[\theta \simeq \frac{1.22 \lambda}{D},\] where $\theta$ is the angular size of the detail, $\lambda$ is the wavelength of light being used to observe the object, and $D$ is the diameter of the lens or mirror.
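To put numbers on these two quantities, here is a small Python sketch; the 550 nm wavelength and the 100 mm aperture are illustrative values, not taken from the text above:

```python
import math

def rayleigh_limit_arcsec(wavelength_m, diameter_m):
    # Smallest resolvable angular detail, theta ~ 1.22*lambda/D,
    # converted from radians to arcseconds.
    theta_rad = 1.22 * wavelength_m / diameter_m
    return math.degrees(theta_rad) * 3600.0

def light_gathering_ratio(d1, d2):
    # Light gathering power scales with aperture area, i.e. with diameter squared.
    return (d2 / d1) ** 2

# Green light (550 nm) through a 100 mm aperture: ~1.4 arcsec, already
# comparable to the ~1 arcsec atmospheric limit mentioned above.
theta = rayleigh_limit_arcsec(550e-9, 0.100)
# Doubling the aperture quadruples the light gathering power.
ratio = light_gathering_ratio(0.100, 0.200)
```

This illustrates why, beyond roughly a 10 cm aperture, atmospheric seeing rather than diffraction limits ground-based resolution.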
SIF Computation This chapter introduces the different existing methods available to compute the Stress Intensity Factor (SIF), which are:

- The analytical methods based on Linear Elastic Fracture Mechanics: by establishing the full field solution, by applying the superposition of existing solutions, or by considering the energetic method related to Griffith's work;
- The numerical approaches: based on the collocation method or on FEM, by using the field approximation or the energetic method of the $J$-integral;
- The experimental approaches: based on normalized experiments or by using the strain gauge method;
- The use of tabulated solutions available in handbooks.

SIF Computation > Analytical methods: Reminder of Linear Elasticity In plane linear elasticity, the Airy stress function $\Phi$ satisfies the biharmonic equation: \begin{equation} \nabla^2 \nabla^2 \Phi = 0. \label{eq:biharmonic}\end{equation} One solution of this equation has the form: \begin{equation} \Phi = \frac{\bar{\zeta}\Omega+\zeta\bar{\Omega} + \omega + \bar{\omega}}{2} ,\label{eq:PhiCrack}\end{equation} where the functions $\omega(\zeta)$ and $\Omega(\zeta)$ of the complex coordinate $\zeta = x + iy$ have to be determined so that the boundary conditions are satisfied. The solution fields can be expressed in terms of these functions. The stress field reads: \begin{equation} \left\{ \begin{array}{r c l} \mathbf{\sigma}_{xx} &= &\Omega'+\bar{\Omega}' - \frac{\bar{\zeta}\Omega''+\omega''+ \zeta\bar{\Omega}''+ \bar{\omega}''}{2},\\ \mathbf{\sigma}_{yy} &=&\Omega'+\bar{\Omega}' + \frac{\bar{\zeta}\Omega''+\omega''+ \zeta\bar{\Omega}''+ \bar{\omega}''}{2},\\ \mathbf{\sigma}_{xy} &=& i \frac{\zeta\bar{\Omega}''+\bar{\omega}''-\bar{\zeta}\Omega''-\omega''}{2}. \end{array} \right. \label{eq:stressAiry}\end{equation} The displacement field reads: \begin{equation} \mathbf{u} = -\frac{1+\nu}{E}\left(\zeta\bar{\Omega}'+\bar{\omega}' - \kappa \Omega\left(\zeta\right)\right),\label{eq:uAiry} \end{equation} with \begin{equation} \left\{ \begin{array}{r c l l} \kappa &=&\frac{3-\nu}{1+\nu}&\text{ in Plane} \quad \sigma,\\ \kappa &=&3-4\nu&\text{ in Plane} \quad\epsilon. 
\end{array} \right. \end{equation}
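One can verify that the form (\ref{eq:PhiCrack}) satisfies the biharmonic equation (\ref{eq:biharmonic}) for any analytic choice of $\Omega$ and $\omega$. The sketch below checks this symbolically with SymPy for arbitrarily chosen polynomial potentials (the particular choices $\Omega=\zeta^3$, $\omega=\zeta^4$ are for illustration only):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
zeta = x + sp.I * y

# Arbitrary analytic (polynomial) potentials, chosen only for illustration.
Omega = zeta**3
omega = zeta**4

# Phi = (conj(zeta)*Omega + zeta*conj(Omega) + omega + conj(omega))/2,
# which is real, being Re(conj(zeta)*Omega + omega).
Phi = sp.expand((sp.conjugate(zeta) * Omega + zeta * sp.conjugate(Omega)
                 + omega + sp.conjugate(omega)) / 2)

# Apply the Laplacian twice and check that the result vanishes identically.
lap = sp.diff(Phi, x, 2) + sp.diff(Phi, y, 2)
biharmonic = sp.expand(sp.diff(lap, x, 2) + sp.diff(lap, y, 2))
assert biharmonic == 0
```

The same check goes through for any other analytic $\Omega$ and $\omega$, since $\nabla^2\Phi = 2(\Omega'+\bar{\Omega}')$ is the real part of an analytic function and hence harmonic.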
I recently came across this in a textbook (NCERT class 12, chapter: Wave Optics, pg. 367, example 10.4(d)) of mine while studying Young's double slit experiment. It says a condition for the formation of an interference pattern is $$\frac{s}{S} < \frac{\lambda}{d}$$ where $s$ is the size of ... The accepted answer is clearly wrong. The OP's textbook refers to '$s$' as "size of source" and then gives a relation involving it. But the accepted answer conveniently assumes '$s$' to be "fringe-width" and proves the relation. One of the unaccepted answers is the correct one. I have flagged the answer for mod attention. This answer wastes time, because I naturally looked at it first (it being an accepted answer) only to realise it proved something entirely different and trivial. This question was considered a duplicate because of a previous question titled "Height of Water 'Splashing'". However, the previous question only considers the height of the splash, whereas answers to the later question may consider a lot of different effects on the body of water, such as height ... I was trying to figure out the cross section $\frac{d\sigma}{d\Omega}$ for spinless $e^{-}\gamma\rightarrow e^{-}$ scattering. First I wrote the terms associated with each component. Vertex: $ie(P_A+P_B)^{\mu}$ External Boson: $1$ Photon: $\epsilon_{\mu}$ Multiplying these will give the inv... As I am now studying the history of the discovery of electricity, I am searching for each scientist on Google but I am not getting good answers on some scientists. So I want to ask you to suggest a good app for studying the history of scientists? I am working on correlation in quantum systems. Consider an arbitrary finite dimensional bipartite system $A$ with elements $A_{1}$ and $A_{2}$ and a bipartite system $B$ with elements $B_{1}$ and $B_{2}$, under the assumption that continuity is fulfilled. My question is whether it would be possib... @EmilioPisanty Sup. I finished Part I of Q is for Quantum. 
I'm a little confused why a black ball turns into a misty of white and minus black, and not into white and black? Is it like a little trick so the second PETE box can cancel out the contrary states? Also I really like that the book avoids words like quantum, superposition, etc. Is this correct? "The closer you get hovering (as opposed to falling) to a black hole, the further away you see the black hole from you. You would need an impossible rope of an infinite length to reach the event horizon from a hovering ship". From physics.stackexchange.com/questions/480767/… You can't make a system go to a lower state than its zero point, so you can't do work with ZPE. Similarly, to run a hydroelectric generator you not only need water, you need a height difference so you can make the water run downhill. — PM 2Ring3 hours ago So in Q is for Quantum there's a box called PETE that has 50% chance of changing the color of a black or white ball. When two PETE boxes are connected, an input white ball will always come out white and the same with a black ball. @ACuriousMind There is also a NOT box that changes the color of the ball. In the book it's described that each ball has a misty (possible outcomes I suppose). For example a white ball coming into a PETE box will have output misty of WB (it can come out as white or black). But the misty of a black ball is W-B or -WB. (the black ball comes out with a minus). I understand that with the minus the math works out, but what is that minus and why? @AbhasKumarSinha intriguing/ impressive! would like to hear more! :) am very interested in using physics simulation systems for fluid dynamics vs particle dynamics experiments, alas very few in the world are thinking along the same lines right now, even as the technology improves substantially... @vzn for physics/simulation, you may use Blender, that is very accurate. 
If you want to experiment with lenses and optics, then you may use Mistibushi Renderer; those are made for accurate scientific purposes. @RyanUnger physics.stackexchange.com/q/27700/50583 is about QFT for mathematicians, which overlaps in the sense that you can't really do string theory without first doing QFT. I think the canonical recommendation is indeed Deligne et al's *Quantum Fields and Strings: A Course for Mathematicians*, but I haven't read it myself @AbhasKumarSinha when you say you were there, did you work at some kind of Godot facilities/ headquarters? where? dont see something relevant on google yet on "mitsubishi renderer" do you have a link for that? @ACuriousMind thats exactly how DZA presents it. understand the idea of "not tying it to any particular physical implementation" but that kind of gets stretched thin because the point is that there are "devices from our reality" that match the description and theyre all part of the mystery/ complexity/ inscrutability of QM. actually its QM experts that dont fully grasp the idea because (on deep research) it seems possible classical components exist that fulfill the descriptions... When I say "the basics of string theory haven't changed", I basically mean the story of string theory up to (but excluding) compactifications, branes and what not. It is the latter that has rapidly evolved, not the former. @RyanUnger Yes, it's where the actual model building happens. But there's a lot of things to work out independently of that And that is what I mean by "the basics". Yes, with mirror symmetry and all that jazz, there's been a lot of things happening in string theory, but I think that's still comparatively "fresh" research where the best you'll find are some survey papers @RyanUnger trying to think of an adjective for it... nihilistic? :P ps have you seen this? think youll like it, thought of you when found it... 
Kurzgesagt, optimistic nihilism: youtube.com/watch?v=MBRqu0YOH14 The knuckle mnemonic is a mnemonic device for remembering the number of days in the months of the Julian and Gregorian calendars. Method (one handed): One form of the mnemonic is done by counting on the knuckles of one's hand to remember the numbers of days of the months. Count knuckles as 31 days, depressions between knuckles as 30 (or 28/29) days. Start with the little finger knuckle as January, and count one finger or depression at a time towards the index finger knuckle (July), saying the months while doing so. Then return to the little finger knuckle (now August) and continue for... @vzn I dont want to go to uni nor college. I prefer to dive into the depths of life early. I'm 16 (2 more years and I graduate). I'm interested in business, physics, neuroscience, philosophy, biology, engineering and other stuff and technologies. I just have constant hunger to widen my view on the world. @Slereah It's like the brain has a limited capacity on math skills it can store. @NovaliumCompany btw think either way is acceptable, relate to the feeling of low enthusiasm to submitting to "the higher establishment," but for many, universities are indeed "diving into the depths of life" I think you should go if you want to learn, but I'd also argue that waiting a couple years could be a sensible option. I know a number of people who went to college because they were told that it was what they should do and ended up wasting a bunch of time/money It does give you more of a sense of who actually knows what they're talking about and who doesn't though. While there's a lot of information available these days, it isn't all good information and it can be a very difficult thing to judge without some background knowledge Hello people, does anyone have a suggestion for some good lecture notes on what surface codes are and how are they used for quantum error correction? 
I just want to have an overview as I might have the possibility of doing a master thesis on the subject. I looked around a bit and it sounds cool but "it sounds cool" doesn't sound like a good enough motivation for devoting 6 months of my life to it
Is there a "simple" mathematical proof that is fully understandable by a 1st year university student that impressed you because it is beautiful? closed as primarily opinion-based by Daniel W. Farlow, Najib Idrissi, user91500, LutzL, Jonas Meyer Apr 7 '15 at 3:40 Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question. Here's a cute and lovely theorem. There exist two irrational numbers $x,y$ such that $x^y$ is rational. Proof. If $x=y=\sqrt2$ is an example, then we are done; otherwise $\sqrt2^{\sqrt2}$ is irrational, in which case taking $x=\sqrt2^{\sqrt2}$ and $y=\sqrt2$ gives us: $$\left(\sqrt2^{\sqrt2}\right)^{\sqrt2}=\sqrt2^{\sqrt2\sqrt2}=\sqrt2^2=2.\qquad\square$$ (Nowadays, using the Gelfond–Schneider theorem we know that $\sqrt2^{\sqrt2}$ is irrational, and in fact transcendental. But the above proof, of course, doesn't care for that.) How about the proof that $$1^3+2^3+\cdots+n^3=\left(1+2+\cdots+n\right)^2$$ I remember being impressed by this identity and the proof can be given in a picture: Edit: Substituted $\frac{n(n+1)}{2}=1+2+\cdots+n$ in response to comments. Cantor's diagonalization argument, the proof that there are infinite sets that can't be put in one-to-one correspondence with the set of natural numbers, is frequently cited as a beautifully simple but powerful proof. Essentially, given a list of infinite sequences, the sequence formed by changing each entry along the diagonal will not be in the list. I would personally argue that the proof that $\sqrt 2$ is irrational is simple enough for a university student (probably simple enough for a high school student) and very pretty in its use of proof by contradiction! Prove that if $n$ and $m$ can each be written as a sum of two perfect squares, so can their product $nm$. 
Proof: Let $n = a^2+b^2$ and $m=c^2+d^2$ ($a, b, c, d \in\mathbb Z$). Then, there exist some $x,y\in\mathbb Z$ such that $$x+iy = (a+ib)(c+id)$$ Taking the magnitudes of both sides and squaring gives $$x^2+y^2 = (a^2+b^2)(c^2+d^2) = nm$$ I would go for the proof by contradiction of an infinite number of primes, which is fairly simple: Assume that there is a finite number of primes. Let $G$ be the set of all primes $P_1,P_2,...,P_n$. Compute $K = P_1 \times P_2 \times ... \times P_n + 1$. If $K$ is prime, then it is obviously not in $G$. Otherwise, none of its prime factors are in $G$. Conclusion: $G$ is not the set of all primes. I think I learned that both in high-school and at 1st year, so it might be a little too simple... By the concavity of the $\sin$ function on the interval $\left[0,\frac{\pi}2\right]$ we deduce these inequalities: $$\frac{2}\pi x\le \sin x\le x,\quad \forall x\in\left[0,\frac\pi2\right].$$ The first player in Hex has a winning strategy. There are no draws in Hex, so one player must have a winning strategy. If player two has a winning strategy, player one can steal that strategy by placing the first stone in the center (additional pieces on the board never hurt your position) then using player two's strategy. You cannot have two dice (with numbers $1$ to $6$) biased so that when you throw both, the sum is uniformly distributed in $\{2,3,\dots,12\}$. For easier notation, we use the equivalent formulation "You cannot have two dice (with numbers $0$ to $5$) biased such that when you throw both, the sum is uniformly distributed in $\{0,1,\dots,10\}$." Proof: Assume that such dice exist. Let $p_i$ be the probability that the first die gives an $i$ and $q_i$ be the probability that the second die gives an $i$. Let $p(x)=\sum_{i=0}^5 p_i x^i$ and $q(x)=\sum_{i=0}^5 q_i x^i$. Let $r(x)=p(x)q(x) = \sum_{i=0}^{10} r_i x^i$. We find that $r_i = \sum_{j+k=i}p_jq_k$. But hey, this is also the probability that the sum of the two dice is $i$. 
Therefore, $$ r(x)=\frac{1}{11}(1+x+\dots+x^{10}). $$ Now $r(1)=1\neq0$, and for $x\neq1$, $$ r(x)=\frac{(x^{11}-1)}{11(x-1)}, $$ which clearly is nonzero when $x\neq 1$. Therefore $r$ does not have any real zeros. But because $p$ and $q$ are polynomials of odd degree $5$, they must each have a real zero. Therefore, $r(x)=p(x)q(x)$ has a real zero. A contradiction. Given a square consisting of $2n \times 2n$ tiles, it is possible to cover this square with pieces that each cover $2$ adjacent tiles (like domino bricks). Now imagine you remove two tiles from two opposite corners of the original square. Prove that it is now no longer possible to cover the remaining area with domino bricks. Proof: Imagine that the square is a checkerboard. Each domino brick will cover two tiles of different colors. When you remove tiles from two opposite corners, you will remove two tiles with the same color. Thus, it is no longer possible to cover the remaining area. (Well, it may be too "simple." But you did not state that it had to be a university student of mathematics. This one might even work for liberal arts majors...) One little-known gem at the intersection of geometry and number theory is Aubry's reflective generation of primitive Pythagorean triples, i.e. coprime naturals $\,(x,y,z)\,$ with $\,x^2 + y^2 = z^2.\,$ Dividing by $\,z^2$ yields $\,(x/z)^2+(y/z)^2 = 1,\,$ so each triple corresponds to a rational point $(x/z,\,y/z)$ on the unit circle. Aubry showed that we can generate all such triples by a very simple geometrical process. Start with the trivial point $(0,-1)$. Draw a line to the point $\,P = (1,1).\,$ It intersects the circle in the rational point $\,A = (4/5,3/5)\,$ yielding the triple $\,(3,4,5).\,$ Next reflect the point $\,A\,$ into the other quadrants by taking all possible signs of each component, i.e. $\,(\pm4/5,\pm3/5),\,$ yielding the inscribed rectangle below. 
As before, the line through $\,A_B = (-4/5,-3/5)\,$ and $P$ intersects the circle in $\,B = (12/13, 5/13),\,$ yielding the triple $\,(12,5,13).\,$ Similarly the points $\,A_C,\, A_D\,$ yield the triples $\,(20,21,29)\,$ and $\,(8,15,17).\,$ We can iterate this process with the new points $\,B,C,D,\,$ doing the same as we did for $\,A,\,$ obtaining further triples. Iterating this process generates the primitive triples as a ternary tree. Descent in the tree is given by the formula $$\begin{eqnarray} (x,y,z)\,\mapsto &&(x,y,z)-2(x\!+\!y\!-\!z)\,(1,1,1)\\ = &&(-x-2y+2z,\,-2x-y+2z,\,-2x-2y+3z)\end{eqnarray}$$ e.g. $\ (12,5,13)\mapsto (12,5,13)-8(1,1,1) = (-3,4,5),\ $ yielding $\,(4/5,3/5)\,$ when reflected into the first quadrant. Ascent in the tree is obtained by inverting this map, combined with trivial sign-changing reflections: $\quad\quad (-3,+4,5) \mapsto (-3,+4,5) - 2 \; (-3+4-5) \; (1,1,1) = ( 5,12,13)$ $\quad\quad (-3,-4,5) \mapsto (-3,-4,5) - 2 \; (-3-4-5) \; (1,1,1) = (21,20,29)$ $\quad\quad (+3,-4,5) \mapsto (+3,-4,5) - 2 \; (+3-4-5) \; (1,1,1) = (15,8,17)$ See my MathOverflow post for further discussion, including generalizations and references. I like the proof that there are infinitely many Pythagorean triples. Theorem: There are infinitely many integers $ x, y, z$ such that $$ x^2+y^2=z^2 $$ Proof: $$ (2ab)^2 + ( a^2-b^2)^2= ( a^2+b^2)^2 $$ One cannot cover a disk of diameter 100 with 99 strips of length 100 and width 1. Proof: project the disk and the strips onto a hemisphere sitting on top of the disk. The projection of each strip has area at most 1/100th of the area of the hemisphere. If you have any set of 51 integers between $1$ and $100$, the set must contain some pair of integers where one number in the pair is a multiple of the other. Proof: Suppose you have a set of $51$ integers between $1$ and $100$. If an integer is between $1$ and $100$, its largest odd divisor is one of the odd numbers between $1$ and $99$. 
There are only $50$ odd numbers between $1$ and $99$, so your $51$ integers can’t all have different largest odd divisors — there are only $50$ possibilities. So two of your integers (possibly more) have the same largest odd divisor. Call that odd number $d$. You can factor those two integers into prime factors, and each will factor as (some $2$’s)$\cdot d$. This is because if $d$ is the largest odd divisor of a number, the rest of its factorization can’t include any more odd numbers. Of your two numbers with largest odd factor $d$, the one with more $2$’s in its factorization is a multiple of the other one. (In fact, the multiple is a power of $2$.) In general, let $S$ be the set of integers from $1$ up to some even number $2n$. If a subset of $S$ contains more than half the elements in $S$, the subset must contain a pair of numbers where one is a multiple of the other. The proof is the same, but it’s easier to follow if you see it for a specific $n$ first. The proof that an isosceles triangle ABC (with AC and AB having equal length) has equal angles ABC and BCA is quite nice: Triangles ABC and ACB are (mirrored) congruent (since AB = AC, BC = CB, and CA = BA), so the corresponding angles ABC and (mirrored) ACB are equal. This congruency argument is nicer than that of cutting the triangle up into two right-angled triangles. Parity of the sine and cosine functions using Euler's formula: $e^{-i\theta} = \cos (-\theta) + i\,\sin (-\theta)$ $e^{-i\theta} = \frac 1 {e^{i\theta}} = \frac 1 {\cos \theta + i\,\sin \theta} = \frac {\cos \theta - i\,\sin \theta} {\cos^2 \theta + \sin^2 \theta} = \cos \theta - i\,\sin \theta$ $\cos (-\theta) + i\,\sin (-\theta) = \cos \theta + i\,(-\sin \theta)$ Thus $\cos (-\theta) = \cos \theta$ $\sin (-\theta) = -\sin \theta$ $\blacksquare$ The proof is actually just the first two lines. I believe Gauss was tasked with finding the sum of all the integers from $1$ to $100$ in his very early schooling years. 
He tackled it quicker than his peers or his teacher could: $$\sum_{n=1}^{100}n=1+2+3+4 +\dots+100$$ $$=100+99+98+\dots+1$$ $$\therefore 2 \sum_{n=1}^{100}n=(100+1)+(99+2)+\dots+(1+100)$$ $$=\underbrace{101+101+101+\dots+101}_{100 \text{ times}}$$ $$=101\cdot 100$$ $$\therefore \sum_{n=1}^{100}n=\frac{101\cdot 100}{2}$$ $$=5050.$$ Hence he showed that $$\sum_{k=1}^{n} k=\frac{n(n+1)}{2}.$$ If $H$ is a subgroup of $(\mathbb{R},+)$ and $H\cap [-1,1]$ is finite and contains a positive element, then $H$ is cyclic. Fermat's little theorem follows from noting that modulo a prime $p$ we have for $a\neq 0$: $$1\times2\times3\times\cdots\times (p-1) = (1\times a)\times(2\times a)\times(3\times a)\times\cdots\times \left((p-1)\times a\right)$$ Proposition (No universal set): There does not exist a set which contains all sets (even itself). Proof: Suppose to the contrary that such a set exists. Let $X$ be the universal set; then one can construct, by the axiom schema of specification, the set $$C=\{A\in X: A \notin A\}$$ of all sets in the universe which do not contain themselves. As $X$ is universal, clearly $C\in X$. But then $C\in C \iff C\notin C$, a contradiction. Edit: Assuming that one is working in ZF (as almost everywhere :P) (This proof really impressed me the first time, and it is also very simple.) Most proofs concerning the Cantor set are simple but amazing. Its total length (measure) is zero. It is uncountable. Every number in the set can be represented in ternary using just 0s and 2s; no number that requires a 1 in its ternary expansion appears in the set. The Cantor set contains as many points as the interval from which it is taken, yet itself contains no interval of nonzero length. The irrational numbers have the same property, but the Cantor set has the additional property of being closed, so it is not even dense in any interval, unlike the irrational numbers which are dense in every interval. 
The Menger sponge, which is a 3d extension of the Cantor set, simultaneously exhibits an infinite surface area and encloses zero volume. The derivation of differentiation from first principles is so amazing, easy, useful and simply outstanding in all aspects. I put it here: Suppose we have a quantity $y$ whose value depends upon a single variable $x$, and is expressed by an equation defining $y$ as some specific function of $x$. This is represented as: $y=f(x)$ This relationship can be visualized by drawing a graph of the function $y = f(x)$, regarding $y$ and $x$ as Cartesian coordinates, as shown in Figure (a). Consider the point $P$ on the curve $y = f(x)$ whose coordinates are $(x, y)$ and another point $Q$ whose coordinates are $(x + \Delta x, y + \Delta y)$. The slope of the line joining $P$ and $Q$ is given by: $\tan\theta = \frac{\Delta y}{\Delta x} = \frac{(y + \Delta y) - y}{\Delta x}$ Suppose now that the point $Q$ moves along the curve towards $P$. In this process, $\Delta y$ and $\Delta x$ decrease and approach zero; though their ratio $\frac{\Delta y}{\Delta x}$ will not necessarily vanish. What happens to the line $PQ$ as $\Delta y\to 0$, $\Delta x\to 0$? You can see that this line becomes a tangent to the curve at point $P$ as shown in Figure (b). This means that $\tan\theta$ approaches the slope of the tangent at $P$, denoted by $m$: $m=\lim_{\Delta x\to 0} \frac{\Delta y}{\Delta x} = \lim_{\Delta x\to 0} \frac{(y+\Delta y)-y}{\Delta x}$ The limit of the ratio $\Delta y/\Delta x$ as $\Delta x$ approaches zero is called the derivative of $y$ with respect to $x$ and is written as $dy/dx$. It represents the slope of the tangent line to the curve $y=f(x)$ at the point $(x, y)$. Since $y = f(x)$ and $y + \Delta y = f(x + \Delta x)$, we can write the definition of the derivative as: $\frac{dy}{dx}=\frac{d f(x)}{dx} = \lim_{\Delta x\to 0} \left[\frac{f(x+\Delta x)-f(x)}{\Delta x}\right]$, which is the required formula. This proof that $n^{1/n} \to 1$ as integral $n \to \infty$: By Bernoulli's inequality (which is $(1+x)^n \ge 1+nx$), $(1+n^{-1/2})^n \ge 1+n^{1/2} > n^{1/2} $. 
Raising both sides to the $2/n$ power, $n^{1/n} <(1+n^{-1/2})^2 = 1+2n^{-1/2}+n^{-1} < 1+3n^{-1/2} $. Can a Chess Knight starting at any corner then move to touch every space on the board exactly once, ending in the opposite corner? The solution turns out to be childishly simple. Every time the Knight moves (up two, over one), it will hop from a black space to a white space, or vice versa. Assuming the Knight starts on a black corner of the board, it will need to touch 63 other squares, 32 white and 31 black. To touch all of the squares, it would need to end on a white square, but the opposite corner is also black, making it impossible. The eigenvalues of a skew-Hermitian matrix are purely imaginary. The eigenvalue equation is $A\vec x = \lambda\vec x$, and forming the inner product with $\vec x$ gives $$\lambda \|\vec x\|^2 = \lambda\left<\vec x, \vec x\right> = \left<\lambda \vec x,\vec x\right> = \left<A\vec x,\vec x\right> = \left<\vec x, A^{T*}\vec x\right> = \left<\vec x, -A\vec x\right> = -\lambda^* \|\vec x\|^2$$ and since $\|\vec x\|^2 > 0$, we can divide it from both sides, leaving $\lambda = -\lambda^*$, i.e. $\lambda$ is purely imaginary. The second to last step uses the definition of skew-Hermitian. Using the definition for Hermitian or unitary matrices instead yields corresponding statements about the eigenvalues of those matrices. I like the proof that not every real number can be written in the form $a e + b \pi$ for some integers $a$ and $b$. I know it's almost trivial in one way; but in another way it is kind of deep.
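The skew-Hermitian eigenvalue fact above is easy to confirm numerically. A minimal NumPy sketch (matrix size and random seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Build a random skew-Hermitian matrix: A^H = -A by construction,
# since (M - M^H)^H = M^H - M = -(M - M^H).
M = rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5))
A = M - M.conj().T

eigvals = np.linalg.eigvals(A)
# The spectrum is purely imaginary: real parts vanish to machine precision.
assert np.allclose(eigvals.real, 0.0)
```

Replacing `M - M.conj().T` with `M + M.conj().T` (Hermitian) gives a purely real spectrum, mirroring the corresponding statement in the proof.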
Number of items at this level: 4.

- Hasan, Ayazul (2018) On the socles of commutator invariant submodules of {\mathversion{bold}{$QTAG$}}-modules. Armenian Journal of Mathematics, 10 (7). pp. 1-11. ISSN 1829-1163
- Bhat, V. K. (2017) Completely generalized right primary rings and their extensions. Armenian Journal of Mathematics, 9 (1). pp. 20-27. ISSN 1829-1163
- Bhat, V. K. (2013) Minimal prime ideals of $\sigma(*)$-rings and their extensions. Armenian Journal of Mathematics, 5 (2). pp. 98-104. ISSN 1829-1163
- Sujit, Kumar Sardar and Bibhas, Chandra Saha (2009) h-prime and h-semiprime ideals in $\Gamma_{N}$-semirings and Matrix Semiring $\left(\begin{array}{ll}R&\Gamma\\S&L\end{array}\right)$. Armenian Journal of Mathematics, 2 (3). pp. 105-119. ISSN 1829-1163
HINT $\rm\ \ (n,ab)\ =\ (n,nb,ab)\ =\ (n,(n,a)\:b)\ =\ (n,b)\ =\ 1\ $ using the previously mentioned GCD laws. Such exercises are easy on applying the basic GCD laws that I mentioned in your prior questions, viz. the associative, commutative, distributive and modular law $\rm\:(a,b+c\:a) = (a,b)\:.$ In fact, to make such proofs more intuitive one can write $\rm\:gcd(a,b)\:$ as $\rm\:a\dot+ b\:$ and then use familiar arithmetic laws, e.g. see this proof of the GCD Freshman's Dream $\rm\:(a\:\dot+\: b)^n =\: a^n\: \dot+\: b^n\:.$ NOTE $\ $ Also worth emphasis is that not only are proofs using GCD laws more general, they are also more efficient notationally, hence more easily comprehensible. As an example, below is a proof using the GCD laws, followed by a proof using the Bezout identity (from Gerry's answer). $\begin{eqnarray}\qquad 1&=& &\rm(a\:,\ \ n)\ &\rm (b\:,\ \ n)&=&\rm\:(ab,\ &\rm n\:(a\:,\ &\rm b\:,\ &\rm n))\ \ =\ \ (ab,n) \\1&=&\rm &\rm (ar\!\!+\!\!ns)\:&\rm(bt\!\!+\!\!nu)&=&\rm\ \ ab\:(rt)\!\!+\!\!&\rm n\:(aru\!\!+\!\!&\rm bst\!\!+\!\!&\rm nsu)\ \ so\ \ (ab,n)=1\end{eqnarray}$ Notice how the first proof using GCD laws avoids all the extraneous Bezout variables $\rm\:r,s,t,u\:,\:$ which play no conceptual role but, rather, only serve to obfuscate the true essence of the matter. Further, without such noise obscuring our view, we can immediately see a natural generalization of the GCD-law based proof, namely $$\rm\ (a,\ b,\ n)\ =\ 1\ \ \Rightarrow\ \ (ab,\:n)\ =\ (a,\ n)\:(b,\ n) $$ This quickly leads to various refinement-based views of unique factorizations, e.g. the Euclid-Euler Four Number Theorem (Vierzahlensatz) or, more generally, Schreier refinement and Riesz interpolation. See also Paul Cohn's excellent 1973 Monthly survey Unique Factorization Domains.
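Both the hinted identity and its generalization can be checked mechanically; a small brute-force Python verification over a modest range (the range bound 30 is arbitrary):

```python
from math import gcd

# Check (n, ab) = (n, b) whenever (n, a) = 1.
for n in range(1, 30):
    for a in range(1, 30):
        if gcd(n, a) != 1:
            continue
        for b in range(1, 30):
            assert gcd(n, a * b) == gcd(n, b)

# Check the generalization (ab, n) = (a, n)*(b, n) whenever (a, b, n) = 1.
for n in range(1, 30):
    for a in range(1, 30):
        for b in range(1, 30):
            if gcd(gcd(a, b), n) == 1:
                assert gcd(a * b, n) == gcd(a, n) * gcd(b, n)
```

Of course this is no substitute for the proof, but it is a quick sanity check on the statements.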
Consider a conducting wheel with $N \in \mathbb{N}$ spokes which lies completely in a homogeneous magnetic field $\vec{B}$ perpendicular to the wheel plane. Then the Lorentz force on a charge $q$ on a spoke at distance $r$ from the axis is $F(r) = qvB = q\omega rB$. Thus the electromotive force is $$ \mathcal{EMF} = \frac{1}{q} \int_0^R F(r) \, \mathrm{d}r = \omega B \int_0^R r \, \mathrm{d}r = \frac{1}{2} \omega BR^2 $$ However, since the field is homogeneous, the flux at time $t$ is $$ \Phi(t) = \int_A \vec{B} \,\mathrm{d}\vec{A} = BA, $$ where $A$ denotes the area enclosed by the circle. Thus $\Phi$ is independent of $t$ and $\dot \Phi = 0$, which would mean that the induced $\mathcal{EMF}$ is zero. How can one resolve this (apparent) contradiction in detail? Especially the second method should work, since it is more general than using the Lorentz force.
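For concreteness, the spoke integral can be checked numerically. In this sketch the values of $\omega$, $B$ and $R$ are made-up illustrative numbers, not taken from the problem statement:

```python
# Numerical check that (1/q) * integral_0^R q*omega*B*r dr = 0.5*omega*B*R^2.
# omega, B, R below are assumed example values, not from the problem.
omega, B, R = 10.0, 0.5, 0.2   # rad/s, tesla, metres
n = 100_000
dr = R / n
# midpoint rule for integral_0^R omega*B*r dr (exact for a linear integrand)
emf_numeric = sum(omega * B * (i + 0.5) * dr * dr for i in range(n))
emf_closed = 0.5 * omega * B * R**2
print(emf_numeric, emf_closed)   # both ~0.1 V
```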
According to the mathematical definition of entropy, it is defined only for reversible processes. Then how can it be defined for irreversible processes? Please explain clearly. In thermodynamics, from Clausius' theorem, the definition of entropy change is indeed $$\Delta S_{A \to B} = S(B)-S(A) = \left(\int_A^B \frac{\delta Q} T\right)_R$$ where the subscript $R$ means that the integral must be evaluated along a reversible path. It doesn't matter which reversible path we choose: the entropy difference between the states $A$ and $B$ will be $\left(\int_A^B \delta Q/T\right)_R$. Therefore, $\Delta S_{A\to B}$ is path-independent: it only depends on the states $A$ and $B$. To evaluate it, we just have to choose any reversible path connecting $A$ and $B$. When the change of a quantity only depends on the initial and final states, we say that that quantity is a state function. Now let's say that we perform an irreversible transformation from $A$ to $B$: what is the entropy change? Easy: it is $\left(\int_A^B \delta Q/T\right)_R$, where the integral is evaluated along any reversible path connecting $A$ and $B$. Notice that it would be wrong to say that $$\Delta S_{A \to B} = \left(\int_A^B \frac{\delta Q} T\right)_I \ \ \text{(wrong!)}$$ where $I$ is our irreversible path. In fact, the other part of Clausius' theorem tells us that $$\Delta S_{A \to B} > \left(\int_A^B \frac{\delta Q} T\right)_I \Rightarrow \left(\int_A^B \frac{\delta Q} T\right)_R > \left(\int_A^B \frac{\delta Q} T\right)_I $$ so if you (erroneously) computed $\Delta S_{A \to B}$ as $\left(\int_A^B \delta Q/T\right)_I$ (along the irreversible path) you would be underestimating it. Summing up: to compute the entropy change for a generic (reversible or irreversible) transformation from $A$ to $B$, choose any reversible path connecting $A$ to $B$ and compute the integral $\left(\int_A^B \delta Q/T\right)_R$.
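A standard worked example of this recipe (the scenario is my own illustration, not from the text) is the free expansion of an ideal gas: the irreversible path exchanges no heat, yet the entropy change is positive, computed along a reversible isothermal path:

```python
import math

# Assumed scenario: free expansion of n mol of ideal gas from V1 to V2 = 2*V1.
# Along the irreversible path, dQ = 0, so (int dQ/T)_I = 0.
# Along a REVERSIBLE isothermal path between the same states, dQ = nRT dV/V,
# so Delta S = n*R*ln(V2/V1), which is the true entropy change.
n, R_gas = 1.0, 8.314          # mol, J/(mol K)
V1, V2 = 1.0, 2.0              # arbitrary volume units
delta_S = n * R_gas * math.log(V2 / V1)   # reversible-path integral
q_over_T_irreversible = 0.0               # free expansion exchanges no heat
print(delta_S)                            # ~5.76 J/K
assert delta_S > q_over_T_irreversible    # Clausius inequality
```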
SIF Computation > SIF Handbooks (LEMF) Solutions of crack problems for plates of finite dimensions are tabulated in handbooks. The solutions are obtained by different means (analytical, numerical, ...) for different crack lengths. As an example we report here a method for the problem of a centered crack of size $2a$ loaded in mode I in a plate of width $2W$, see Picture IV.13, under the condition $h/W>3$. The general formula reads \begin{equation} K_I = \sigma \sqrt{\pi a}\; f\left(\tfrac{a}{W}\right)\,.\label{eq:SIFIsida1}\end{equation} The function $f\left(\tfrac{a}{W}\right)$ can be obtained by different resolution methods, leading to different accuracies. Some of the possible solutions for the function $f\left(\tfrac{a}{W}\right)$ are tabulated below. Using the periodic crack approximation: \begin{equation} f\left(\tfrac{a}{W}\right)=\sqrt{\frac{2W}{\pi a}\tan{\frac{\pi a}{2W}}}\end{equation} (<5% error for $a/W<0.5$; Irwin, 1957). Using a Laurent series expansion: solution reported in Picture IV.14; practically exact up to $a/W=0.9$; Isida, 1973. Fitting Isida's values: \begin{equation}f\left(\tfrac{a}{W}\right)=\left[ 1-0.025\left(\tfrac{a}{W}\right)^2+ 0.06\left(\tfrac{a}{W}\right)^4\right]\sqrt{\frac{1}{\cos{\frac{\pi a}{2 W}}}}\end{equation} (<0.1% error; Tada, 1973).
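The two closed-form approximations above are easy to compare numerically. The following sketch transcribes both expressions (Irwin's periodic-crack approximation and Tada's fit of Isida's values) and evaluates them at a few crack-length ratios:

```python
import math

def f_irwin(a_over_W):
    """Periodic crack approximation (Irwin, 1957)."""
    x = math.pi * a_over_W / 2.0
    return math.sqrt(math.tan(x) / x)   # sqrt((2W/(pi a)) tan(pi a / 2W))

def f_tada(a_over_W):
    """Fit of Isida's values (Tada, 1973)."""
    x = a_over_W
    return (1 - 0.025 * x**2 + 0.06 * x**4) / math.sqrt(math.cos(math.pi * x / 2))

for a_over_W in (0.1, 0.3, 0.5):
    print(a_over_W, f_irwin(a_over_W), f_tada(a_over_W))
```

As expected from the stated error bounds, the two formulas agree closely for small $a/W$ and drift apart by a few percent as $a/W$ approaches 0.5.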
Entropy is a function of the distribution. That is, the process used to generate a byte stream is what has entropy, not the byte stream itself. If I give you the bits 1011, that could have anywhere from 0 to 4 bits of entropy; you have no way of knowing that value. Here is the definition of Shannon entropy. Let $X$ be a random variable that takes on the values $x_1,x_2,x_3,\dots,x_n$. Then the Shannon entropy is defined as $$H(X) = -\sum_{i=1}^{n} \operatorname{Pr}[x_i] \cdot \log_2\left(\operatorname{Pr}[x_i]\right)$$ where $\operatorname{Pr}[\cdot]$ represents probability. Note that the definition is a function of a random variable (i.e., a distribution), not a particular value! So what is the entropy in a single flip of a coin? Let $F$ be a random variable representing such. There are two events, heads and tails, each with probability $0.5$. So, the Shannon entropy of $F$ is: $$H(F) = -(0.5\cdot\log_2 0.5 + 0.5\cdot\log_2 0.5) = -(-0.5 - 0.5) = 1.$$ Thus, $F$ has exactly one bit of entropy, as we expected. So, to find how much entropy is present in a byte stream, you need to know how the byte stream is generated and the entropy of any inputs (in the case of PRNGs). Recall that a deterministic algorithm cannot add entropy to an input, only take it away, so the entropy of all inputs to a deterministic algorithm is the maximum entropy possible in the output. If you're using a hardware RNG, then you need to know the probabilities associated with the data it gives you, otherwise you cannot formally find the Shannon entropy (though you could give it a lower bound if you know the probabilities of some, but not all, events). But note that in any case, you are dependent on knowledge of the distribution associated with the byte stream. You can do statistical tests, like you mention, to verify that the output "looks random" (from a certain perspective). But you'll never be able to say any more than "it looks pretty uniformly distributed to me!".
You'll never be able to look at a bitstream without knowing the distribution and say "there are X bits of entropy here."
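To make the definition concrete, here is a minimal sketch (Python assumed, not from the original answer) that computes Shannon entropy for a *known* distribution, reproducing the coin-flip calculation above:

```python
import math

def shannon_entropy(probs):
    """Shannon entropy in bits of a distribution given as a list of probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(shannon_entropy([0.5, 0.5]))   # fair coin: 1.0 bit
print(shannon_entropy([1.0]))        # constant source: 0.0 bits
print(shannon_entropy([0.25] * 4))   # uniform over 4 outcomes: 2.0 bits
```

Note that the function takes the distribution as input, which is exactly the point: without knowing those probabilities, no such computation is possible for a raw byte stream.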
Now showing items 1-10 of 15 A free-floating planet candidate from the OGLE and KMTNet surveys (2017) Current microlensing surveys are sensitive to free-floating planets down to Earth-mass objects. All published microlensing events attributed to unbound planets were identified based on their short timescale (below 2 d), ... OGLE-2016-BLG-1190Lb: First Spitzer Bulge Planet Lies Near the Planet/Brown-Dwarf Boundary (2017) We report the discovery of OGLE-2016-BLG-1190Lb, which is likely to be the first Spitzer microlensing planet in the Galactic bulge/bar, an assignation that can be confirmed by two epochs of high-resolution imaging of the ... OGLE-2015-BLG-1459L: The Challenges of Exo-Moon Microlensing (2017) We show that dense OGLE and KMTNet $I$-band survey data require four bodies (sources plus lenses) to explain the microlensing light curve of OGLE-2015-BLG-1459. However, these can equally well consist of three lenses ... OGLE-2017-BLG-1130: The First Binary Gravitational Microlens Detected From Spitzer Only (2018) We analyze the binary gravitational microlensing event OGLE-2017-BLG-1130 (mass ratio q~0.45), the first published case in which the binary anomaly was only detected by the Spitzer Space Telescope. This event provides ... OGLE-2017-BLG-1434Lb: Eighth q < 1 * 10^-4 Mass-Ratio Microlens Planet Confirms Turnover in Planet Mass-Ratio Function (2018) We report the discovery of a cold Super-Earth planet (m_p=4.4 +/- 0.5 M_Earth) orbiting a low-mass (M=0.23 +/- 0.03 M_Sun) M dwarf at projected separation a_perp = 1.18 +/- 0.10 AU, i.e., about 1.9 times the snow line. ... OGLE-2017-BLG-0373Lb: A Jovian Mass-Ratio Planet Exposes A New Accidental Microlensing Degeneracy (2018) We report the discovery of microlensing planet OGLE-2017-BLG-0373Lb. We show that while the planet-host system has an unambiguous microlens topology, there are two geometries within this topology that fit the data equally ...
OGLE-2017-BLG-1522: A giant planet around a brown dwarf located in the Galactic bulge (2018) We report the discovery of a giant planet in the OGLE-2017-BLG-1522 microlensing event. The planetary perturbations were clearly identified by high-cadence survey experiments despite the relatively short event timescale ... Spitzer Opens New Path to Break Classic Degeneracy for Jupiter-Mass Microlensing Planet OGLE-2017-BLG-1140Lb (2018) We analyze the combined Spitzer and ground-based data for OGLE-2017-BLG-1140 and show that the event was generated by a Jupiter-class (m_p\simeq 1.6 M_jup) planet orbiting a mid-late M dwarf (M\simeq 0.2 M_\odot) that ... OGLE-2016-BLG-1266: A Probable Brown-Dwarf/Planet Binary at the Deuterium Fusion Limit (2018) We report the discovery, via the microlensing method, of a new very-low-mass binary system. By combining measurements from Earth and from the Spitzer telescope in Earth-trailing orbit, we are able to measure the ... KMT-2016-BLG-0212: First KMTNet-Only Discovery of a Substellar Companion (2018) We present the analysis of KMT-2016-BLG-0212, a low flux-variation $(I_{\rm flux-var}\sim 20$) microlensing event, which is well-covered by high-cadence data from the three Korea Microlensing Telescope Network (KMTNet) ...
Dense Sets in Finite Topological Products Consider a finite collection of topological spaces $\{ X_1, X_2, ..., X_n \}$. If $A_i \subseteq X_i$ are dense in $X_i$ for all $i \in \{1, 2, ..., n \}$, then what can we say about the product $\displaystyle{\prod_{i=1}^{n} A_i}$? Conversely, if $\displaystyle{\prod_{i=1}^{n} A_i}$ is dense in $\displaystyle{\prod_{i=1}^{n} X_i}$, what can we say about each individual set $A_i$ in $X_i$? The following theorem gives us the desired answer. Theorem 1: Let $\{ X_1, X_2, ..., X_n \}$ be a finite collection of topological spaces and let $A_i \subseteq X_i$ for all $i \in \{ 1, 2, ..., n \}$. Then $A_i$ is dense in $X_i$ for all $i \in \{ 1, 2, ..., n \}$ if and only if $\displaystyle{\prod_{i=1}^{n} A_i}$ is dense in $\displaystyle{\prod_{i=1}^{n} X_i}$. Proof: $\Rightarrow$ Suppose that $A_i$ is dense in $X_i$ for all $i \in \{1, 2, ..., n \}$. Let $\displaystyle{U = \prod_{i=1}^{n} U_i}$ be any nonempty basic open set in $\displaystyle{\prod_{i=1}^{n} X_i}$. Then $U_i$ is a nonempty open set in $X_i$ for all $i \in \{ 1, 2, ..., n \}$. So since $A_i$ is dense in $X_i$ we have for all $i$ that $A_i \cap U_i \neq \emptyset$. So take $x_i \in A_i \cap U_i$. Then $\mathbf{x} = (x_1, x_2, ..., x_n)$ is such that $\displaystyle{\mathbf{x} \in \left ( \prod_{i=1}^{n} A_i \right ) \cap \left ( \prod_{i=1}^{n} U_i \right )}$. Thus $\displaystyle{\left ( \prod_{i=1}^{n} A_i \right ) \cap \left ( \prod_{i=1}^{n} U_i \right ) \neq \emptyset}$ for all nonempty basic open sets $\displaystyle{U = \prod_{i=1}^{n} U_i}$ in $\displaystyle{\prod_{i=1}^{n} X_i}$, which shows that $\displaystyle{\prod_{i=1}^{n} A_i}$ is dense in $\displaystyle{\prod_{i=1}^{n} X_i}$. $\Leftarrow$ Suppose that $\displaystyle{\prod_{i=1}^{n} A_i}$ is dense in $\displaystyle{\prod_{i=1}^{n} X_i}$. Consider the set $A_i$ and let $U_i$ be any nonempty open set in $X_i$. Let $\displaystyle{U = \prod_{j=1}^{n} U_j}$ where $U_j = X_j$ for all $j \neq i$. Then $U$ is a nonempty open set in $\displaystyle{\prod_{i=1}^{n} X_i}$.
Since $\displaystyle{\prod_{i=1}^{n} A_i}$ is dense in $\displaystyle{\prod_{i=1}^{n} X_i}$ we have that $\displaystyle{\left ( \prod_{i=1}^{n} A_i \right ) \cap U \neq \emptyset}$. This shows that $A_i \cap U_i \neq \emptyset$. So for all $i \in \{ 1, 2, ..., n \}$ we have that $A_i$ is dense in $X_i$. $\blacksquare$
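As a quick illustration (a standard corollary, stated here for concreteness and not part of the original page), take $X_i = \mathbb{R}$ and $A_i = \mathbb{Q}$ for each $i$:

```latex
% Applying Theorem 1 with X_i = \mathbb{R} and A_i = \mathbb{Q}:
\mathbb{Q} \text{ is dense in } \mathbb{R} \text{ for each factor}
\;\Longrightarrow\;
\prod_{i=1}^{n} \mathbb{Q} = \mathbb{Q}^n
\text{ is dense in }
\prod_{i=1}^{n} \mathbb{R} = \mathbb{R}^n .
```

This recovers the familiar fact that points with rational coordinates are dense in $\mathbb{R}^n$.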
Evaluating Double Integrals over General Domains We recently saw that if $z = f(x, y)$ was a two variable real-valued function, then we could evaluate a double integral over a rectangular region $R = [a, b] \times [c, d]$ with iterated integrals, that is $\iint_R f(x, y) \: dA = \int_a^b \int_c^d f(x, y) \: dy \: dx = \int_c^d \int_a^b f(x, y) \: dx \: dy$. However, we now need to develop a method to evaluate double integrals over general domains. There are two different types of regions over which we will need to integrate. Type 1 Region: The first type of region that we may need to integrate over would be in the form $D = \{ (x, y) : a ≤ x ≤ b , g_1(x) ≤ y ≤ g_2(x) \}$. These types of domains are sometimes called y-simple domains/regions. Note that $D$ represents the area trapped between the continuous curves $y = g_1(x)$ and $y = g_2 (x)$ for $a ≤ x ≤ b$. Type 2 Region: The second type of region that we may need to integrate over would be in the form $D = \{ (x, y) : h_1(y) ≤ x ≤ h_2(y) , c ≤ y ≤ d \}$. These types of domains are sometimes called x-simple domains/regions. Note that here, $D$ represents the area trapped between the continuous curves $x = h_1(y)$ and $x = h_2(y)$ for $c ≤ y ≤ d$. Now recall that if $D \subseteq D(f)$ and if $R = [a, b] \times [c, d]$ is a rectangle that contains $D$ then for $\hat{f} = \left\{\begin{matrix} f(x,y) & \mathrm{if} \: (x,y) \in D \\ 0 & \mathrm{if} \: (x,y) \not \in D \end{matrix}\right.$ we have that $\displaystyle{\iint_D f(x, y) \: dA = \iint_R \hat{f}(x, y) \: dA}$. Let's first consider Type 1 Regions. We note that if $(x, y)$ is such that $y < g_1(x)$ or if $y > g_2(x)$, then $(x, y) \not \in D$ and so $\hat{f} (x, y) = 0$. Since $f(x, y) = \hat{f}(x,y)$ when $g_1(x) ≤ y ≤ g_2(x)$, we have that $\displaystyle{\int_c^d \hat{f}(x, y) \: dy = \int_{g_1(x)}^{g_2(x)} f(x, y) \: dy}$. Therefore we can evaluate the double integral over $D = \{ (x, y) : a ≤ x ≤ b , g_1(x) ≤ y ≤ g_2(x) \}$ with the following formula: $\displaystyle{\iint_D f(x, y) \: dA = \int_a^b \int_{g_1(x)}^{g_2(x)} f(x, y) \: dy \: dx}$. Now let's consider Type 2 Regions.
We note that if $(x, y)$ is such that $x < h_1(y)$ or if $x > h_2(y)$, then $(x, y) \not \in D$ and so $\hat{f} (x, y) = 0$. Since $f(x, y) = \hat{f}(x, y)$ when $h_1(y) ≤ x ≤ h_2(y)$, we have that $\displaystyle{\int_a^b \hat{f}(x, y) \: dx = \int_{h_1(y)}^{h_2(y)} f(x, y) \: dx}$. Therefore we can evaluate the double integral over $D = \{ (x, y) : h_1(y) ≤ x ≤ h_2(y) , c ≤ y ≤ d \}$ with the following formula: $\displaystyle{\iint_D f(x, y) \: dA = \int_c^d \int_{h_1(y)}^{h_2(y)} f(x, y) \: dx \: dy}$. Remark 1: For type 1 regions an alternative notation for the double integral over $D$ is $\iint_D f(x, y) \: dA = \int_a^b \: dx \int_{g_1(x)}^{g_2(x)} f(x, y) \: dy$, and similarly for type 2 regions we have $\iint_D f(x, y) \: dA = \int_c^d \: dy \int_{h_1(y)}^{h_2(y)} f(x, y) \: dx$. We will now look at an example of evaluating double integrals over general domains. Once again, it is important to note the following techniques of integration from single variable calculus that we may need to apply: Integration by Parts $\int u \: dv = uv - \int v \: du$. Example 1 Evaluate the double integral $\iint_D xe^y \: dA$ where $D = \{ (x, y) : 0 ≤ x ≤ 1 , 0 ≤ y ≤ x \}$. We note that example 1 requires double integration over a type 1 region. The region $D$ is depicted in the following image: Using the formula above, we have that $\displaystyle{\iint_D xe^y \: dA = \int_0^1 \int_0^x xe^y \: dy \: dx}$. We will first evaluate the inner integral $\int_{0}^{x} xe^y \: dy$, holding $x$ fixed: $\displaystyle{\int_0^x xe^y \: dy = x \left [ e^y \right ]_0^x = xe^x - x}$. Therefore we have that $\displaystyle{\iint_D xe^y \: dA = \int_0^1 (xe^x - x) \: dx = \int_0^1 xe^x \: dx - \int_0^1 x \: dx}$. The integral $\int_0^1 xe^x \: dx$ can be solved with either Integration by Parts or Tabular Integration. We get that $\int_0^1 xe^x \: dx = \left [ xe^x - e^x \right]_0^1 = 1$. Furthermore, $\int_0^1 x \: dx = \left [ \frac{x^2}{2} \right ]_0^1 = \frac{1}{2}$. Therefore $\displaystyle{\iint_D xe^y \: dA = 1 - \frac{1}{2} = \frac{1}{2}}$.
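Example 1 can be verified symbolically. The sketch below (using SymPy, which is not part of the original page) evaluates the same iterated integral:

```python
import sympy as sp

# Symbolic check of Example 1: double integral of x*e^y over the
# type 1 region 0 <= x <= 1, 0 <= y <= x.
x, y = sp.symbols('x y')
inner = sp.integrate(x * sp.exp(y), (y, 0, x))   # inner integral: x*exp(x) - x
result = sp.integrate(inner, (x, 0, 1))          # outer integral
print(inner, result)
```

SymPy reproduces both the inner integral $xe^x - x$ and the final value $\tfrac{1}{2}$.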
This article is about the Restricted Boltzmann Machine (RBM) as a recommender model. Notation Below, the following letters are used: u denotes a user; i denotes an item; j also denotes an item (if there are two items, they are denoted by i and j); r denotes a rating (or anything else which a recommender tries to forecast, i.e. the probability of selecting an item); R denotes the maximal rating; f denotes a feature number; F denotes the total number of features; U denotes the number of users; I denotes the number of items. Model Each user has a vector v of ratings $ v = (v_{11}, ..., v_{1R}, v_{21}, ..., v_{2R}, ..., v_{I1}, ..., v_{IR}) $. Here $ v_{ir} $ is 1 if the item i received the rating r, otherwise 0. For instance, if item 7 received the rating 3 in the 1...5 rating system, then $ v_{73}=1 $, while $ v_{71}=v_{72}=v_{74}=v_{75}=0 $. Each user also has a hidden vector $ h\in \mathbb{R}^F $. The hidden vector determines the probability distribution for the vector of ratings. However, as we'll see, the RBM model does not try to restore the hidden vectors explicitly; that is why the name. The joint distribution of v and h is: $ P(v, h) = \exp(-E(v, h)) / \sum_{v', h'} \exp(-E(v', h')) $ $ (1.1a) $ where $ E(v,h) $ is called the energy and is defined as $ E(v,h)= - \sum_{ir}b_{ir}v_{ir} - \sum_{f} b_f h_f - \sum_{ifr}W_{ifr} v_{ir} h_f + \sum_i \log Z_i $ $ (1.1b) $ $ Z_i = \sum_r \exp\left(b_{ir}+\sum_f h_f W_{ifr}\right) $ $ (1.1c) $ Here the b are called biases and the W are called weights. It is the biases and weights, not the hidden vectors, that the learning process should recover. Missing (unrated) items are excluded from all sums. The term $ \sum_i \log Z_i $ is the normalization term that ensures $ \sum_r p(v_{ir}=1|h)=1 $ for each i.
From the joint distribution of v and h, we can obtain the conditional distributions and the distribution of v: $ P(v_{ir}|h) = {\exp \left( b_{ir}+\sum_f h_f W_{ifr} \right) \over \sum_r \exp \left( b_{ir}+\sum_f h_f W_{ifr} \right)} $ $ (1.2) $ $ P(h_f=1|v) = \sigma \left( b_f + \sum_i \sum_r v_{ir} W_{ifr} \right) $ $ (1.3) $, where $ \sigma(t)=1/\left(1+e^{-t}\right) $ $ P(v) = \sum_h {\exp \left( -E(v, h) \right) \over \sum_{v', h'} \exp \left( -E(v', h') \right)} $ $ (1.4) $ Learning Given the expression (1.4) for $ P(v) $, learning should maximize the likelihood with respect to $ W $ and $ b $, which is performed by gradient ascent: $ \Delta W_{ifr} = \epsilon {\partial \log P(v) \over \partial W_{ifr}} $ Denoting the gradient with respect to $ W_{ifr} $ as $ \nabla_{ifr} $, we have $ \Delta W_{ifr} = \epsilon \nabla_{ifr} \log P(v) $ $ \nabla_{ifr} \log P(v) = \nabla_{ifr} \log \sum_h \exp \left( -E(v, h) \right) - \nabla_{ifr} \log \sum_{v', h'} \exp \left( -E(v', h') \right) $ The first term is $ \nabla_{ifr} \log \sum_h \exp \left( -E(v, h) \right) = $ $ = {\nabla_{ifr} \sum_h \exp \left( -E(v, h) \right) \over \sum_h \exp \left( -E(v, h) \right)} = $ $ = - {\sum_h \exp \left( -E(v, h) \right) \nabla_{ifr} E(v, h) \over \sum_h \exp \left( -E(v, h) \right)} = $ $ = {\sum_h \exp \left( -E(v, h) \right) v_{ir} h_f \over \sum_h \exp \left( -E(v, h) \right)} = $ $ = \left\langle v_{ir} h_f \right\rangle_{data} $ The notation $ \left\langle ... \right\rangle_{data} $ means that the expression in the angle brackets is calculated for given v and averaged over the conditional distribution $ P(h|v) $.
The second term is $ \nabla_{ifr} \log \sum_{v', h'} \exp \left( -E(v', h') \right) = $ $ = {\nabla_{ifr} \sum_{v', h'} \exp \left( -E(v', h') \right) \over \sum_{v', h'} \exp \left( -E(v', h') \right)} = $ $ = - {\sum_{v', h'} \exp \left( -E(v', h') \right) \nabla_{ifr} E(v', h') \over \sum_{v', h'} \exp \left( -E(v', h') \right)} = $ $ = {\sum_{v', h'} \exp \left( -E(v', h') \right) v^\prime_{ir} h^\prime_f \over \sum_{v', h'} \exp \left( -E(v', h') \right)} = $ $ = \left\langle v_{ir} h_f \right\rangle_{model} $ The notation $ \left\langle ... \right\rangle_{model} $ means that the expression in the angle brackets is averaged over the joint distribution $ P(v, h) $. $ \Delta W_{ifr} = \epsilon \left( \left\langle v_{ir} h_f \right\rangle_{data} - \left\langle v_{ir} h_f \right\rangle_{model} \right) $ The first term can be calculated fairly easily: given v, we can calculate the distribution for each element of h. But the second term needs exponential time to be calculated exactly. It can, however, be approximated by $ \Delta W_{ifr} = \epsilon \left( \left\langle v_{ir} h_f \right\rangle_{data} - \left\langle v_{ir} h_f \right\rangle_T \right) $ This approximation is called "Contrastive Divergence" (CD) and was proposed by Hinton in 2002. The notation $ \left\langle ... \right\rangle_{T} $ means averaging over samples obtained by running the Gibbs sampler for T full steps. Below is an excerpt from the wikipedia.org article, modified: Take a training sample $ v $, compute the probabilities of the hidden units with (1.3), and sample a hidden activation vector $ h $ from this probability distribution. Compute the outer product of $ v $ and $ h $ and call this the positive gradient. Set h'=h. Repeat the following step T times: from $ h' $, sample a reconstruction $ v' $ of the visible units using (1.2), then resample the hidden activations $ h' $ from this using (1.3). Compute the outer product of $ v' $ and $ h' $ and call this the negative gradient.
Let the weight update to $ W_{ifr} $, $ b_{ir} $ and $ b_f $ be the positive gradient minus the negative gradient, times the learning rate $ \epsilon $. In practice, T=3...10 is high enough. It is possible to start learning with a small T and then increase it. Predictions After we have all the $ W $ and $ b $, we are able to make predictions. Given a user $ u $, a set of items with known ratings $ I(u) $, and a set of known ratings $ v_{ir} $, $ i \in I(u) $, we are to predict the ratings $ v_{jr} $ for some $ j \notin I(u) $: $ p(v_{jr}=1) = {e^{b_{jr}} \prod_f \left(p(h_f=1|v)e^{-W_{jfr}}+p(h_f=0|v)\right) \over \sum_{r'} e^{b_{jr'}} \prod_f \left( p(h_f=1|v)e^{-W_{jfr'}}+p(h_f=0|v) \right) } $, where $ p(h_f=1|v) $ is given by (1.3), which we reproduce here: $ P(h_f=1|v) = \sigma \left( b_f + \sum_{i \in I(u)} \sum_r v_{ir} W_{ifr} \right) $ $ \sigma(t)=1/\left(1+e^{-t}\right) $ Modifications Gaussian Hidden Units For Gaussian hidden units, the formula (1.1b) $ E(v,h)= - \sum_{ir}b_{ir}v_{ir} - \sum_{f} b_f h_f - \sum_{ifr}W_{ifr} v_{ir} h_f + \sum_i \log Z_i $ is replaced by $ E(v,h)= - \sum_{ir}b_{ir}v_{ir} - \sum_{f} {(h_f-b_f)^2 \over 2\sigma_f^2} - \sum_{ifr}W_{ifr} v_{ir} h_f + \sum_i \log Z_i $ $ (1.1b') $ This modifies (1.3) from $ P(h_f=1|v) = \sigma \left( b_f + \sum_i \sum_r v_{ir} W_{ifr} \right) $ to $ P(h_f=h|v) = {1 \over \sqrt{2\pi} \sigma_f} \exp \left( - { \left( h - b_f - \sigma_f \sum_{ir} v_{ir} W_{ifr} \right)^2 \over 2 \sigma_f^2} \right) $ In order to avoid updating the variances, they can be fixed as $ \sigma_f=1 $.
Conditional Factored RBM This approach factorizes the matrix of weights: $ W_{ifr} = \sum_{k=1}^C A_{ikr} B_{kfr} $ Parameters For the Netflix prize, the following parameters were selected: F=100 for the RBM; F=500, C=30 for the conditional factored RBM. The weights used a learning rate of 0.01/batch size, momentum of 0.9, and weight decay of 0.01. T started at 1, then was increased (to 9 for the RBM, to 3 for the conditional factored RBM). After about 40-50 iterations (epochs), the model converged.
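The CD update described above can be sketched in a few lines. Note the simplifications: this is a plain binary RBM (Bernoulli visible units rather than the per-item softmax units of (1.1)-(1.2)), and all shapes, seeds and hyperparameters are illustrative, not the Netflix settings:

```python
import numpy as np

# Minimal sketch of one CD-T weight update for a binary RBM (assumptions:
# Bernoulli visible units instead of the per-item softmax; toy sizes).
rng = np.random.default_rng(0)
n_visible, n_hidden, eps, T = 6, 3, 0.1, 1
W = 0.01 * rng.standard_normal((n_visible, n_hidden))
b_v = np.zeros(n_visible)   # visible biases
b_h = np.zeros(n_hidden)    # hidden biases

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def cd_step(v):
    # positive phase: P(h=1|v), analogue of eq. (1.3)
    p_h = sigmoid(b_h + v @ W)
    h = (rng.random(n_hidden) < p_h).astype(float)
    pos_grad = np.outer(v, p_h)
    # negative phase: T full Gibbs steps, analogue of eqs. (1.2)-(1.3)
    h_neg = h
    for _ in range(T):
        p_v = sigmoid(b_v + h_neg @ W.T)
        v_neg = (rng.random(n_visible) < p_v).astype(float)
        p_h_neg = sigmoid(b_h + v_neg @ W)
        h_neg = (rng.random(n_hidden) < p_h_neg).astype(float)
    neg_grad = np.outer(v_neg, p_h_neg)
    return eps * (pos_grad - neg_grad)   # positive minus negative gradient

v = rng.integers(0, 2, n_visible).astype(float)
W += cd_step(v)
print(W.shape)
```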
SIF Computation > Experimental methods (LEMF) Strain Gauge Method The idea is similar to the correlation method used with FE analyses. However, in this case we extract the deformation field using a strain gauge, and then try to correlate the measurements with the asymptotic solution. When considering a strain gauge, the question which arises is: where should it be located? The asymptotic solution as developed before is only valid very close to the crack tip. Due to the size of the strain gauge it is not possible to mount it in such a small region. The asymptotic solution should thus account for higher order terms. Details on the method can be found in "Dally JW and Sanford RJ (1988), Strain gauge methods for measuring the opening mode stress intensity factor $K_I$, Experimental Mechanics 27, 381-388", and only the main steps are given here. Let us go back to the asymptotic solution for mode I: \begin{equation}\begin{cases} \mathbf{\sigma}_{xx} &= & \sum_{\lambda}\left(\lambda+1\right) r^\lambda \left[\left(2C_1-C_3\right)\cos{\left(\lambda\theta\right)} - C_1\lambda \cos{\left(\left(\lambda-2\right)\theta\right)}\right] \,;\\\mathbf{\sigma}_{yy} &= & \sum_{\lambda}\left(\lambda+1\right) r^\lambda \left[\left(2C_1+C_3\right)\cos{\left(\lambda\theta\right)} + C_1\lambda \cos{\left(\left(\lambda-2\right)\theta\right)}\right] \,;\\ \mathbf{\sigma}_{xy} &= & \sum_{\lambda}\left(\lambda+1\right) r^\lambda \left[ C_1\lambda \sin{\left(\left(\lambda-2\right)\theta\right)}+C_3 \sin{\left(\lambda\theta\right)} \right]\,,\end{cases} \label{eq:stressAiryModeI} \end{equation} with $\lambda=-1/2,0,1/2,...$ The first order asymptotic solution in $r^{-1/2}$ cannot be matched accurately using a strain gauge.
So we consider the third order solution, which can be expressed in polar coordinates under the form \begin{equation} \left( \begin{array}{c} \sigma_{rr}\\ \sigma_{\theta \theta}\\ \sigma_{r\theta} \end{array}\right) =\frac{K_I}{\sqrt{2\pi r}}\mathbf{f}_{-1/2}\left(\theta\right) + C_0 \mathbf{f}_{0}\left(\theta\right) + C_{1/2} \sqrt{r} \mathbf{f}_{1/2}\left(\theta\right)\,.\label{eq:3rdorder} \end{equation} Keeping the development general, the strain gauge is located at the coordinates (r,$\theta$) linked to the crack tip, and with an orientation $\alpha$, see Picture IV.25. The third order asymptotic solution (\ref{eq:3rdorder}) leads to the expression of the measured strain: \begin{equation}\begin{array}{rcl} 2\mu \epsilon_{x'x'} &=& \frac{K_I}{\sqrt{2\pi r}} F\left(\nu,\, \theta,\,\alpha\right) + C_0 \left(\frac{1-\nu}{1+\nu}+\cos{2\alpha}\right)\\ && +\, C_{1/2}\sqrt{r} \left(\frac{1-\nu}{1+\nu}+\sin^2{\frac{\theta}{2}}\cos{2\alpha} -\frac{1}{2}\sin\theta\sin{2\alpha}\right)\end{array}\,, \end{equation} with \begin{equation} F\left(\nu,\, \theta,\,\alpha\right) = \frac{1-\nu}{1+\nu}\cos{\frac{\theta}{2}} -\frac{1}{2}\sin{\theta}\sin{\frac{3\theta}{2}}\cos{2\alpha}+\frac{1}{2}\sin{\theta}\cos{\frac{3\theta}{2}}\sin{2\alpha} \,.\end{equation} From these two expressions, one can see that by locating the strain gauge at (r, $\theta^*$, $\alpha^*$), such that \begin{equation} \cos{2\alpha^*} = -\frac{1-\nu}{1+\nu} \text{ and } -\cot{2\alpha^*} = \tan{\frac{\theta^*}{2}}\,, \end{equation} one can directly measure the SIF from \begin{equation} K_I=\frac{2\mu \epsilon_{x'x'}\sqrt{2\pi r}}{ F\left(\nu,\, \theta^*,\,\alpha^*\right)}\,. \end{equation} Remarks: Be aware that the mounting of the strain gauge is critical. Nowadays, strain gauges are often replaced by Digital Image Correlation (DIC).
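The two placement conditions are straightforward to solve numerically. In the sketch below, $\nu = 0.3$ is an assumed example value (a typical Poisson's ratio for metals, not stated in the text):

```python
import math

# Solve the single-gauge placement conditions above for an assumed nu = 0.3:
#   cos(2 alpha*) = -(1 - nu)/(1 + nu)
#   tan(theta*/2) = -cot(2 alpha*)
nu = 0.3
two_alpha = math.acos(-(1 - nu) / (1 + nu))
alpha_star = two_alpha / 2.0
theta_star = 2.0 * math.atan(-1.0 / math.tan(two_alpha))
print(math.degrees(alpha_star), math.degrees(theta_star))  # ~61.3 deg, ~65.2 deg
```

These values match the gauge orientation commonly quoted for $\nu = 0.3$ in the strain-gauge SIF literature.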
A function $f:X\to Y$ between topological spaces is called $\sigma$-continuous if there exists a countable cover $\mathcal C$ of $X$ such that for every $C\in\mathcal C$ the restriction $f{\restriction}C$ is continuous. A typical example of a function (of the first Baire class) which is not $\sigma$-continuous is the Pawlikowski function $P:(\omega+1)^\omega\to\omega^\omega$ (which is defined as the countable power $P=f^\omega$ of a bijection $f:\omega+1\to\omega$). Let $\mathcal I_P$ be the $\sigma$-ideal of subsets $X$ of the compact metrizable space $(\omega+1)^\omega$ such that $P{\restriction}X$ is $\sigma$-continuous. I am interested in evaluating the standard cardinal characteristics $\mathrm{add}(\mathcal I_P)$, $\mathrm{cov}(\mathcal I_P)$, $\mathrm{non}(\mathcal I_P)$, $\mathrm{cof}(\mathcal I_P)$ of the $\sigma$-ideal $\mathcal I_P$. It seems that among these four cardinal characteristics only the covering number $\mathrm{cov}(\mathcal I_P)$ was studied in the literature. In particular, Cichon, Morayne, Pawlikowski and Solecki proved that $\mathrm{cov}(\mathcal I_P)\ge\mathrm{cov}(\mathcal M)$ where $\mathcal M$ is the $\sigma$-ideal of meager subsets of $\mathbb R$. Steprans gave a combinatorial description of the ideal $\mathcal I_P$ and proved the consistency of the strict inequality $\mathrm{cov}(\mathcal I_P)>\mathrm{cov}(\mathcal M)$. Steprans also observed that for every $A\in\mathcal I_P$ the image $P(A)$ is a meager subset of $\omega^\omega$, which implies that $\mathrm{cov}(\mathcal I_P)\ge\mathrm{cov}(\mathcal M)$ and $\mathrm{non}(\mathcal I_P)\le\mathrm{non}(\mathcal M)$. On the other hand, it can be shown that $\mathrm{non}(\mathcal I_P)\ge\mathfrak p$. It is well-known that the strict inequality $\mathfrak p<\mathrm{non}(\mathcal M)$ is consistent. In particular, according to Table 4 in the survey paper of Blass, this strict inequality holds in the random, Hechler, Laver, and Mathias forcing models.
Problem 1.Which of the inequalities $\mathfrak p<\mathrm{non}(\mathcal I_P)$ and $\mathrm{non}(\mathcal I_P)<\mathrm{non}(\mathcal M)$ is consistent? Problem 2.What is the value of $\mathrm{non}(\mathcal I_P)$ (and other cardinal characteristics of the ideal $\mathcal I_P)$ in the random, Hechler, Laver, and Mathias forcing models?
Electronic Journal of Probability Electron. J. Probab. Volume 23 (2018), paper no. 110, 61 pp. Non-Hermitian random matrices with a variance profile (I): deterministic equivalents and limiting ESDs Abstract For each $n$, let $A_n=(\sigma _{ij})$ be an $n\times n$ deterministic matrix and let $X_n=(X_{ij})$ be an $n\times n$ random matrix with i.i.d. centered entries of unit variance. We study the asymptotic behavior of the empirical spectral distribution $\mu _n^Y$ of the rescaled entry-wise product \[ Y_n = \left (\frac 1{\sqrt{n} } \sigma _{ij}X_{ij}\right ). \] For our main result we provide a deterministic sequence of probability measures $\mu _n$, each described by a family of Master Equations, such that the difference $\mu ^Y_n - \mu _n$ converges weakly in probability to the zero measure. A key feature of our results is to allow some of the entries $\sigma _{ij}$ to vanish, provided that the standard deviation profiles $A_n$ satisfy a certain quantitative irreducibility property. An important step is to obtain quantitative bounds on the solutions to an associate system of Schwinger–Dyson equations, which we accomplish in the general sparse setting using a novel graphical bootstrap argument. Article information Source Electron. J. Probab., Volume 23 (2018), paper no. 110, 61 pp. Dates Received: 2 March 2018 Accepted: 3 October 2018 First available in Project Euclid: 30 October 2018 Permanent link to this document https://projecteuclid.org/euclid.ejp/1540865373 Digital Object Identifier doi:10.1214/18-EJP230 Mathematical Reviews number (MathSciNet) MR3878135 Zentralblatt MATH identifier 1401.60008 Citation Cook, Nicholas; Hachem, Walid; Najim, Jamal; Renfrew, David. Non-Hermitian random matrices with a variance profile (I): deterministic equivalents and limiting ESDs. Electron. J. Probab. 23 (2018), paper no. 110, 61 pp. doi:10.1214/18-EJP230. https://projecteuclid.org/euclid.ejp/1540865373
Is there a positive integer $n$ such that $2^n \equiv 1 \pmod{7}$? Consider the fractions $\frac{1}{15}, \frac{2}{15}, \frac{3}{15}, \ldots, \frac{14}{15}, \frac{15}{15}$. How many of these fractions cannot be reduced? Is $999999$ divisible by $7$? Hint: $999999 = 10^6 - 1$. What is the last digit of $3^{100}$? Which of these is congruent to $10^{100} \pmod{11}$?
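Several of these problems can be checked directly with modular exponentiation; the sketch below (Python, added for illustration) uses the three-argument form of `pow`:

```python
# Quick checks of the modular-arithmetic problems above.
print(pow(2, 3, 7))      # 2^3 = 8 ≡ 1 (mod 7), so such an n exists
print(999999 % 7)        # 0: 999999 = 10^6 - 1 is divisible by 7
print(pow(3, 100, 10))   # last digit of 3^100 (3^4 = 81 ≡ 1 mod 10)
print(pow(10, 100, 11))  # 10 ≡ -1 (mod 11), so 10^100 ≡ (-1)^100 (mod 11)
```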
The Open Neighbourhoods of Points in a Topological Space Examples 2 Recall from The Open Neighbourhoods of Points in a Topological Space page that if $(X, \tau)$ is a topological space then an open neighbourhood of the point $x \in X$ is any open set $U \in \tau$ such that $x \in U$. We will now look at some more examples of open neighbourhoods of points in topological spaces. Example 1 Consider the topological space $(\mathbb{R}, \tau)$ where $\tau$ is the countable complement topology. Give two examples of open neighbourhoods of $1 \in \mathbb{R}$. The open neighbourhoods of $1 \in \mathbb{R}$ are the open sets containing $1$. Consider the set $A = ( \mathbb{R} \setminus \mathbb{N} ) \cup \{ 1 \}$. Then $A^c = \mathbb{N} \setminus \{ 1 \}$ which is a countable set. Therefore $A \in \tau$ and $1 \in A$, so $A$ is an open neighbourhood of $1$. For another example, consider the set $B = \mathbb{R} \setminus \{ 2 \}$. Then $B^c = \{ 2 \}$ which is a countable set. Therefore $B \in \tau$ and $1 \in B$, so $B$ is an open neighbourhood of $1$. In fact, for any $b \in \mathbb{R}$ where $b \neq 1$, if we define $B_b = \mathbb{R} \setminus \{ b \}$ then $B_b^c = \{ b \}$ which is countable, so $B_b \in \tau$, and so $B_b$ is an open neighbourhood of $1$ since $1 \in B_b$. Example 2 Consider the set $X = \{ a, b, c, d, e \}$. Find a non-discrete topology $\tau$ on $X$ such that $a, b \in X$ share exactly $3$ open neighbourhoods. If $a$ and $b$ share exactly $3$ open neighbourhoods then there must be $3$ open sets containing both $a$ and $b$. Consider the following topology: $\tau = \{ \emptyset, \{ a, b, c \}, \{ a, b, c, d \}, X \}$. Then $a, b \in U_1 = \{a, b, c \} \in \tau$, $a, b \in U_2 = \{a, b, c, d \} \in \tau$ and $a, b \in U_3 = X \in \tau$ so $a$ and $b$ share exactly $3$ open neighbourhoods. Example 3 Let $X$ be a set and let $\tau_1$ and $\tau_2$ both be topologies on $X$. Consider the topological space $(X, \tau_1 \cap \tau_2)$.
Show that if $U$ is an open neighbourhood of $x \in X$ with respect to the topology $\tau_1 \cap \tau_2$ then $U$ is also an open neighbourhood of $x$ with respect to $\tau_1$ and with respect to $\tau_2$. Suppose that $U$ is an open neighbourhood of $x \in X$ with respect to the topology $\tau_1 \cap \tau_2$. Then $x \in U \in \tau_1 \cap \tau_2$. Since $U \in \tau_1 \cap \tau_2$ we have that $U \in \tau_1$ and $U \in \tau_2$. Therefore $x \in U \in \tau_1$ and $x \in U \in \tau_2$. So $U$ is an open neighbourhood of $x$ with respect to $\tau_1$ and with respect to $\tau_2$.
2002-05-15 A Study of Inclusive Double-Pomeron-Exchange in $p\overline{p} \to pX \overline{p}$ at $\sqrt{s}$ = 630 GeV / Brandt, A. (UCLA) ; Erhan, S. (UCLA) ; Kuzucu, A. (UCLA) ; Medinnis, M. (UCLA) ; Ozdes, N. (UCLA) ; Schlein, P.E. (UCLA) ; Zeyrek, M.T. (UCLA) ; Zweizig, J.G. (UCLA) ; Cheze, J.B. (Saclay) ; Zsembery, J. (Saclay) / UA8 Collaboration. We report measurements of the inclusive reaction, p pbar -> p X pbar, in events where either or both of the beam-like final-state baryons were detected in Roman-pot spectrometers and the central system was detected in the UA2 calorimeter. A Double-Pomeron-Exchange (DPE) analysis of these data and of single diffractive data from the same experiment demonstrates that, for central masses of a few GeV, the extracted Pomeron-Pomeron total cross section, sigma_PomeronPomeron, exhibits an enhancement which exceeds factorization expectations by an order of magnitude. [...] hep-ex/0205037. Geneva : CERN, 2002 - 52 p. Published in : Eur. Phys. J. C 25 (2002) 361-77

1998-08-27 Measurements of inclusive $\Lambda^0$ production with large $x_{F}$ at the $Sp\overline{p}S$ collider / Brandt, A ; Erhan, S ; Kuzucu-Polatoz, A ; Medinnis, M ; Ozdes, N ; Schlein, Peter E ; Zeyrek, M T ; Zweizig, J G ; Chèze, J B ; Zsembery, J / UA8 Collaboration. 1998. Published in : Nucl. Phys. B 519 (1998) 3-18

1997-12-15 Measurements of Inclusive $\overline{\Lambda}$ Production with Large $x_F$ at the $Sp\overline{p}S$-Collider / Brandt, A. (UCLA) ; Erhan, S. (UCLA) ; Kuzucu, A. (UCLA) ; Medinnis, M. (UCLA) ; Ozdes, N. (UCLA) ; Schlein, P.E. (UCLA) ; Zeyrek, M.T. (UCLA) ; Zweizig, J.G. (UCLA) ; Cheze, J.B. (Saclay) ; Zsembery, J. (Saclay) / UA8 Collaboration. We report results of inclusive measurements of anti-Lambda, produced in the forward direction at the SPS with sqrt(s) = 630 GeV, using the UA8 small-angle Roman-pot spectrometers. These measurements cover the ranges in Feynman $x_F$ and transverse momentum, 0.6 < x_F < 1.0 and 0.4 < p_t < 0.7 GeV, respectively. [...] hep-ex/9712017. 1998 - 23 p. Published in : Nucl. Phys. B 519 (1998) 3-18

1997-10-02 Cross-section measurements of hard diffraction at the $Sp\overline{p}S$-Collider / Brandt, A ; Erhan, S ; Kuzucu-Polatoz, A ; Medinnis, M ; Ozdes, N ; Schlein, Peter E ; Zeyrek, M T ; Zweizig, J G ; Chèze, J B ; Zsembery, J / UA8 Collaboration. The UA8 experiment previously reported the observation of jets in diffractive events containing leading protons ("hard diffraction"), which was interpreted as evidence for the partonic structure of an exchanged Reggeon, believed to be the Pomeron. An analysis of the longitudinal momentum distribution of the 2-jet system in the Pomeron-proton rest frame showed that the Pomeron structure is "hard", like $x(1-x)$, with an additional "super-hard" component near $x = 1$. [...] hep-ex/9709015; CERN-PPE-97-126. Geneva : CERN, 1998 - 19 p. Published in : Phys. Lett. B 421 (1998) 395

1992-11-26 The small angle spectrometer of experiment UA8 at the $Sp\overline{p}S$-collider / Brandt, A ; Ellett, J ; Erhan, S ; Jackson, R ; Kuzucu-Polatoz, A ; Medinnis, M ; Oillataguerre, P ; Ozdes, N ; Schlein, Peter E ; Zeyrek, M T et al. CERN-PPE-92-187. Geneva : CERN, 1993 - 34 p. Published in : Nucl. Instrum. Methods Phys. Res., A 327 (1993) 412-426

1992-11-11 Evidence for a super-hard Pomeron structure / Brandt, A (UCLA) ; Erhan, S (UCLA) ; Kuzucu-Polatoz, A (UCLA) ; Medinnis, M (UCLA) ; Ozdes, N (Cukurova U.) ; Schlein, Peter E (UCLA) ; Zeyrek, M T (Ankara U.) ; Zweizig, J G (UCLA) ; Chèze, J B (SPhN, DAPNIA, Saclay) ; Zsembery, J (SPhN, DAPNIA, Saclay) / UA8 Collaboration. CERN-PPE-92-179. Geneva : CERN, 1992 - 15 p. Published in : Phys. Lett. B 297 (1992) 417-424

1990-01-29 Evidence for transverse jets in high mass diffraction / Bonino, R ; Brandt, A ; Chèze, J B ; Erhan, S ; Ingelman, G ; Medinnis, M ; Schlein, Peter E ; Zsembery, J ; Zweizig, J G ; Clark, A G et al. CERN-EP-88-60. Geneva : CERN, 1988 - 13 p. Published in : Phys. Lett. B 211 (1988) 239-246
The Complexity of Data Aggregation in Directed Networks

Abstract We study problems of data aggregation, such as approximate counting and computing the minimum input value, in synchronous directed networks with bounded message bandwidth B = Ω(log n). In undirected networks of diameter D, many such problems can easily be solved in O(D) rounds, using O(log n)-size messages. We show that for directed networks this is not the case: when the bandwidth B is small, several classical data aggregation problems have a time complexity that depends polynomially on the size of the network, even when the diameter of the network is constant. We show that computing an ε-approximation to the size n of the network requires \(\Omega(\min \left\{n, 1/\epsilon ^2\right\} / B)\) rounds, even in networks of diameter 2. We also show that computing a sensitive function (e.g., minimum and maximum) requires \(\Omega(\sqrt{n/B})\) rounds in networks of diameter 2, provided that the diameter is not known in advance to be \(o(\sqrt{n/B})\). Our lower bounds are established by reduction from several well-known problems in communication complexity. On the positive side, we give a nearly optimal \(\tilde{O}(D + \sqrt{n/B})\)-round algorithm for computing simple sensitive functions using messages of size B = Ω(log N), where N is a loose upper bound on the size of the network and D is the diameter.

Keywords: Spanning Tree, Data Aggregation, Sensitive Function, Communication Complexity, Directed Network
Recently I have been studying statistical mechanics and reading about the Boltzmann hypothesis for entropy $$S = k \ln \Omega(E)$$ where $\Omega(E)$ is the total number of microstates, i.e. the available volume in phase space. Now if I consider an ideal gas confined in a volume $V$, then the total number of microstates is $$ \Omega \,=\, \frac{V^N}{h^{3N}} \int dp_1\,dp_2\,dp_3 \ldots dp_{3N} $$ and since for free particles $E = \frac{p^2}{2m}$, if I write the integration in momentum space in spherical polar coordinates, the radius of the sphere is $p = \sqrt{2mE}$. Now the total number of microstates $\Omega(E)$ counts the microstates with energy from $0$ to $E$, and we use this to calculate the entropy $S$. My question is: in the microcanonical ensemble, if I say the system has energy between $E$ and $E + \Delta E$, then the total number of available microstates is $$ \Delta\Omega \,=\, \Omega(E+\Delta E) - \Omega(E), $$ the microstates in the hypershell. But when writing the entropy we use $\Omega(E)$, the total number of microstates from energy $0$ to $E$, not from $E$ to $E + \Delta E$. So apparently we will get the wrong answer; we should instead calculate $$ S \,=\, k \ln(\Delta \Omega) \,=\, k \ln(\Omega(E+\Delta E) - \Omega(E)). $$ So why do we use the total number of microstates from energy $0$ to $E$ to calculate the entropy, although the system can only have energy between $E$ and $E + \Delta E$?
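For concreteness, here is a numerical sketch (my own, using the ideal-gas scaling $\Omega(E) \propto E^{3N/2}$ and made-up values of $N$, $E$, $\Delta E$) comparing $\ln \Omega(E)$ with $\ln \Delta\Omega$:

```python
import math

# Sketch: for an ideal gas the phase-space volume below energy E scales like
# Omega(E) ~ E^(3N/2), up to E-independent factors that cancel below.
def log_omega(E, N):
    return 1.5 * N * math.log(E)

def log_delta_omega(E, dE, N):
    # ln(Omega(E + dE) - Omega(E)), computed stably in log space
    a, b = log_omega(E + dE, N), log_omega(E, N)
    return a + math.log1p(-math.exp(b - a))

N, E, dE = 10 ** 5, 10.0, 0.01           # made-up values
s_total = log_omega(E, N)                # entropy / k using all states below E
s_shell = log_delta_omega(E, dE, N)      # entropy / k using only the shell
rel = abs(s_shell - s_total) / s_total
print(rel)  # a few times 1e-4 here, and it only shrinks as N grows
```

The two choices differ by about $\frac{3N}{2}\ln(1 + \Delta E/E)$ out of a total of order $\frac{3N}{2}\ln E$, a relative difference that vanishes in the thermodynamic limit; this is the usual justification for using $\Omega(E)$ and $\Delta\Omega$ interchangeably.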
Table of Contents Error Analysis of Newton's Method for Approximating Roots Recall from the Newton's Method for Approximating Roots page that if $f$ is a differentiable function with root $\alpha$, and $x_0$ is an approximation of $\alpha$, then we can obtain a sequence of approximations $\{ x_n \}$ defined by $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$ for $n \geq 0$ that may or may not converge to $\alpha$. If our initial approximation $x_0$ is too far away from $\alpha$, then this sequence may not converge to $\alpha$. We will now look at what must hold so that the error between our approximations $x_n$ and $\alpha$ converges to $0$, i.e. so that $\{ x_n \}$ converges to $\alpha$. Consider the interval $[a, b]$ and suppose that there exists a root $\alpha \in (a, b)$ ($f(\alpha) = 0$). Assume that both $f'$ and $f''$ are continuous functions and that $f'(\alpha) \neq 0$ (that is, the tangent line at $(\alpha, f(\alpha))$ is not horizontal). Using Taylor's Theorem we have that for some $c_n$ between $\alpha$ and $x_n$:(1) \begin{align} 0 = f(\alpha) = f(x_n) + (\alpha - x_n) f'(x_n) + \frac{(\alpha - x_n)^2}{2} f''(c_n) \end{align} If we divide both sides of the equation by $f'(x_n)$ we get that:(2) \begin{align} 0 = \frac{f(x_n)}{f'(x_n)} + (\alpha - x_n) + \frac{(\alpha - x_n)^2}{2} \cdot \frac{f''(c_n)}{f'(x_n)} \end{align} Now since $x_{n+1} = x_n - \frac{f(x_n)}{f'(x_n)}$, by rearranging these terms we get that $\frac{f(x_n)}{f'(x_n)} = x_n - x_{n+1}$, and substituting this into the equation above and isolating for $\alpha - x_{n+1}$ we get:(3) \begin{align} \alpha - x_{n+1} = -(\alpha - x_n)^2 \cdot \frac{f''(c_n)}{2 f'(x_n)} \end{align} Note that in the above equation for the error in the approximation $x_{n+1}$ of $\alpha$, the error $\mathrm{Error} (x_n) = \alpha - x_n$ appears squared. We can see that if the initial errors are very small, then Newton's Method converges very quickly towards $\alpha$ upon successive iterations. Now suppose that $x_n$ is very close to the root $\alpha$.
Then since $c_n$ is between $x_n$ and $\alpha$, $c_n$ is also very close to $\alpha$ and hence:(4) \begin{align} \frac{f''(c_n)}{2 f'(x_n)} \approx \frac{f''(\alpha)}{2 f'(\alpha)} = M_{\alpha} \end{align} Therefore, for $n \geq 0$ we can approximate the error of $x_{n+1}$ from $\alpha$ as:(5) \begin{align} \alpha - x_{n+1} \approx -M_{\alpha} (\alpha - x_n)^2 \end{align} We will now multiply both sides of the equation above by $M_{\alpha}$ to get that:(6) \begin{align} M_{\alpha} (\alpha - x_{n+1}) \approx -\left [ M_{\alpha} (\alpha - x_n) \right ]^2 \end{align} Now note that $\mid M_{\alpha} (\alpha - x_n) \mid \approx \mid M_{\alpha} (\alpha - x_{n-1}) \mid^2$. If we repeat this process then we get that for $n \geq 0$:(7) \begin{align} \mid M_{\alpha} (\alpha - x_n) \mid \approx \mid M_{\alpha} (\alpha - x_0) \mid^{2^n} \end{align} Now for the error $\alpha - x_n$ to converge to $0$ (once again, so that our approximations $x_n$ converge to $\alpha$), we must have that $-1 < M_{\alpha} (\alpha - x_0) < 1$, i.e., $\mid M_{\alpha}(\alpha - x_0) \mid < 1$, because if so, then as $n \to \infty$ we have that:(8) \begin{align} M_{\alpha}(\alpha - x_n) \to 0 \end{align} One important thing to note is that if we maximize over the interval $[a, b]$, that is, let $M$ be defined as:(9) \begin{align} M = \max_{a \leq x \leq b} \frac{\mid f''(x) \mid}{2 \mid f'(x) \mid} \end{align} then for any $M_{\alpha}$ we have that $\mid M_{\alpha} \mid \leq M$. If we have that $M \mid \alpha - x_0 \mid < 1$, then $\mid M_{\alpha} (\alpha - x_0) \mid < 1$, which will guarantee us convergence of Newton's Method to $\alpha$.
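The quadratic error relation above can be observed numerically. A sketch with the hypothetical example $f(x) = x^2 - 2$ (so $\alpha = \sqrt{2}$ and $M_{\alpha} = f''(\alpha)/(2 f'(\alpha)) = 1/(2\sqrt{2})$), none of which comes from the page itself:

```python
import math

# Sketch: track the Newton error alpha - x_n against the prediction
# alpha - x_{n+1} ≈ -M_alpha (alpha - x_n)^2 for f(x) = x^2 - 2.
f  = lambda x: x * x - 2
df = lambda x: 2 * x
alpha = math.sqrt(2)
M = 1 / (2 * math.sqrt(2))       # f''(alpha) / (2 f'(alpha))

x = 2.0                          # x0 chosen so that |M (alpha - x0)| < 1
for n in range(4):
    err = alpha - x
    x = x - f(x) / df(x)         # Newton step
    # actual new error vs. the prediction -M * err^2 (they agree better
    # and better as x_n approaches alpha)
    print(n, alpha - x, -M * err * err)
```

After four steps the error is already far below $10^{-9}$, matching the doubling-exponent behaviour $\mid M_{\alpha}(\alpha - x_n) \mid \approx \mid M_{\alpha}(\alpha - x_0) \mid^{2^n}$.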
Alright, I have this group $\langle x_i, i\in\mathbb{Z}\mid x_i^2=x_{i-1}x_{i+1}\rangle$ and I'm trying to determine whether $x_ix_j=x_jx_i$ or not. I'm unsure there is enough information to decide this, to be honest. Nah, I have a pretty garbage question. Let me spell it out. I have a fiber bundle $p : E \to M$ where $\dim M = m$ and $\dim E = m+k$. Usually a normal person defines $J^r E$ as follows: for any point $x \in M$ look at local sections of $p$ over $x$. For two local sections $s_1, s_2$ defined on some nbhd of $x$ with $s_1(x) = s_2(x) = y$, say $J^r_p s_1 = J^r_p s_2$ if with respect to some choice of coordinates $(x_1, \cdots, x_m)$ near $x$ and $(x_1, \cdots, x_{m+k})$ near $y$ such that $p$ is projection to first $m$ variables in these coordinates, $D^I s_1(0) = D^I s_2(0)$ for all $|I| \leq r$. This is a coordinate-independent (chain rule) equivalence relation on local sections of $p$ defined near $x$. So let the set of equivalence classes be $J^r_x E$ which inherits a natural topology after identifying it with $J^r_0(\Bbb R^m, \Bbb R^k)$ which is space of $r$-order Taylor expansions at $0$ of functions $\Bbb R^m \to \Bbb R^k$ preserving origin. Then declare $J^r p : J^r E \to M$ is the bundle whose fiber over $x$ is $J^r_x E$, and you can set up the transition functions etc no problem so all topology is set. This becomes an affine bundle. 
Define the $r$-jet sheaf $\mathscr{J}^r_E$ to be the sheaf which assigns to every open set $U \subset M$ an $(r+1)$-tuple $(s = s_0, s_1, s_2, \cdots, s_r)$ where $s$ is a section of $p : E \to M$ over $U$, $s_1$ is a section of $dp : TE \to TU$ over $U$, $\cdots$, $s_r$ is a section of $d^r p : T^r E \to T^r U$ where $T^k X$ is the iterated $k$-fold tangent bundle of $X$, and the tuple satisfies the following commutation relation for all $0 \leq k < r$ $$\require{AMScd}\begin{CD} T^{k+1} E @>>> T^k E\\ @AAA @AAA \\ T^{k+1} U @>>> T^k U \end{CD}$$ @user193319 It converges uniformly on $[0,r]$ for any $r\in(0,1)$, but not on $[0,1)$, cause deleting a measure zero set won't prevent you from getting arbitrarily close to $1$ (for a non-degenerate interval has positive measure). The top and bottom maps are tangent bundle projections, and the left and right maps are $s_{k+1}$ and $s_k$. @RyanUnger Well I am going to dispense with the bundle altogether and work with the sheaf, is the idea. The presheaf is $U \mapsto \mathscr{J}^r_E(U)$ where $\mathscr{J}^r_E(U) \subset \prod_{k = 0}^r \Gamma_{T^k E}(T^k U)$ consists of all the $(r+1)$-tuples of the sort I described It's easy to check that this is a sheaf, because basically sections of a bundle form a sheaf, and when you glue two of those $(r+1)$-tuples of the sort I describe, you still get an $(r+1)$-tuple that preserves the commutation relation The stalk of $\mathscr{J}^r_E$ over a point $x \in M$ is clearly the same as $J^r_x E$, consisting of all possible $r$-order Taylor series expansions of sections of $E$ defined near $x$. Let $M \subset \mathbb{R}^d$ be a compact smooth $k$-dimensional manifold embedded in $\mathbb{R}^d$. Let $\mathcal{N}(\varepsilon)$ denote the minimal cardinality of an $\varepsilon$-cover $P$ of $M$; that is, for every point $x \in M$ there exists a $p \in P$ such that $\| x - p\|_{2}<\varepsilon$.... The same result should be true for abstract Riemannian manifolds.
Do you know how to prove it in that case? I think there you really do need some kind of PDEs to construct good charts. I might be way overcomplicating this. If we define $\tilde{\mathcal H}^k_\delta$ to be the $\delta$-Hausdorff "measure" but instead of $diam(U_i)\le\delta$ we set $diam(U_i)=\delta$, does this converge to the usual Hausdorff measure as $\delta\searrow 0$? I think so by the squeeze theorem or something. this is a larger "measure" than $\mathcal H^k_\delta$ and that increases to $\mathcal H^k$ but then we can replace all of those $U_i$'s with balls, incurring some fixed error @BalarkaSen what is this ok but this does confirm that what I'm trying to do is wrong haha Let $a,b \in \Bbb{R}$ be fixed, and let $n \in \Bbb{Z}$.
If $[\cdot]$ denotes the greatest integer function, is it possible to bound $|[abn] - [a[bn]]|$ by a constant that is independent of $n$? Are there any nice inequalities involving the greatest integer function? I am trying to show that $n \mapsto [abn]$ and $n \mapsto [a[bn]]$ are equivalent quasi-isometries of $\Bbb{Z}$; that's the motivation.
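A quick empirical check (my own sketch, with made-up values $a = 3.7$, $b = 2.9$): for $a > 0$ we have $abn - a[bn] = a\{bn\} \in [0, a)$, so the two floors can differ by at most $[a] + 1$, independent of $n$.

```python
from math import floor

# Sample the gap |[abn] - [a[bn]]| over many n; by the argument above it
# should never exceed floor(a) + 1 = 4 for a = 3.7.
a, b = 3.7, 2.9   # hypothetical values, not from the question
diffs = {abs(floor(a * b * n) - floor(a * floor(b * n))) for n in range(1, 10001)}
print(max(diffs))  # stays at or below 4 for every n sampled
```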
Well I think it is about time we have another proof-golf question. This time we are going to prove the well known logical truth \$(A \rightarrow B) \rightarrow (\neg B \rightarrow \neg A)\$ Here is how it works: Axioms The Łukasiewicz system has three axioms. They are: \$\phi\rightarrow(\psi\rightarrow\phi)\$ \$(\phi\rightarrow(\psi\rightarrow\chi))\rightarrow((\phi\rightarrow\psi)\rightarrow(\phi\rightarrow\chi))\$ \$(\neg\phi\rightarrow\neg\psi)\rightarrow(\psi\rightarrow\phi)\$ The axioms are universal truths regardless of what we choose for \$\phi\$, \$\psi\$ and \$\chi\$. At any point in the proof we can introduce one of these axioms. When we introduce an axiom you replace each case of \$\phi\$, \$\psi\$ and \$\chi\$, with a "complex expression". A complex expression is any expression made from Atoms, (represented by the letters \$A\$-\$Z\$), and the operators implies (\$\rightarrow\$) and not (\$\neg\$). For example if I wanted to introduce the first axiom (L.S.1) I could introduce \$A\rightarrow(B\rightarrow A)\$ or \$(A\rightarrow A)\rightarrow(\neg D\rightarrow(A\rightarrow A))\$ In the first case \$\phi\$ was \$A\$ and \$\psi\$ was \$B\$, while in the second case both were more involved expressions. \$\phi\$ was \$(A\rightarrow A)\$ and \$\psi\$ was \$\neg D\$. What substitutions you choose to use will be dependent on what you need in the proof at the moment. Modus Ponens Now that we can introduce statements we need to relate them together to make new statements. The way that this is done in Łukasiewicz's Axiom Schema (L.S) is with Modus Ponens. Modus Ponens allows us to take two statements of the form \$\phi\$ \$\phi\rightarrow \psi\$ and instantiate a new statement \$\psi\$ Just like with our Axioms \$\phi\$ and \$\psi\$ can stand in for any arbitrary statement. The two statements can be anywhere in the proof, they don't have to be next to each other or any special order. Task Your task will be to prove the law of contrapositives. 
This is the statement \$(A\rightarrow B)\rightarrow(\neg B\rightarrow\neg A)\$ Now you might notice that this is rather familiar, it is an instantiation of the reverse of our third axiom \$(\neg\phi\rightarrow\neg\psi)\rightarrow(\psi\rightarrow\phi)\$ However this is no trivial feat. Scoring Scoring for this challenge is pretty simple, each time you instantiate an axiom counts as a point and each use of modus ponens counts as a point. This is essentially the number of lines in your proof. The goal should be to minimize your score (make it as low as possible). Example Proof Ok now let's use this to construct a small proof. We will prove \$A\rightarrow A\$. Sometimes it is best to work backwards: since we know where we want to be, we can figure out how we might get there. In this case since we want to end with \$A\rightarrow A\$, and this is not one of our axioms, we know the last step must be modus ponens. Thus the end of our proof will look like

φ
φ → (A → A)
A → A                                    M.P.

Where \$\phi\$ is an expression we don't yet know the value of. Now we will focus on \$\phi\rightarrow(A\rightarrow A)\$. This can be introduced either by modus ponens or L.S.3. L.S.3 requires us to prove \$(\neg A\rightarrow\neg A)\$ which seems just as hard as \$(A\rightarrow A)\$, so we will go with modus ponens. So now our proof looks like

φ
ψ
ψ → (φ → (A → A))
φ → (A → A)                              M.P.
A → A                                    M.P.

Now \$\psi\rightarrow(\phi\rightarrow(A\rightarrow A))\$ looks a lot like our second axiom L.S.2 so we will fill it in as L.S.2

A → χ
A → (χ → A)
(A → (χ → A)) → ((A → χ) → (A → A))      L.S.2
(A → χ) → (A → A)                        M.P.
A → A                                    M.P.

Now our second statement \$(A\rightarrow(\chi\rightarrow A))\$ can pretty clearly be constructed from L.S.1 so we will fill that in as such

A → χ
A → (χ → A)                              L.S.1
(A → (χ → A)) → ((A → χ) → (A → A))      L.S.2
(A → χ) → (A → A)                        M.P.
A → A                                    M.P.

Now we just need to find a \$\chi\$ such that we can prove \$A\rightarrow\chi\$.
This can very easily be done with L.S.1 so we will try that

A → (ω → A)                                          L.S.1
A → ((ω → A) → A)                                    L.S.1
(A → ((ω → A) → A)) → ((A → (ω → A)) → (A → A))      L.S.2
(A → (ω → A)) → (A → A)                              M.P.
A → A                                                M.P.

Now since all of our steps are justified we can fill in \$\omega\$ as any statement we want and the proof will be valid. We could choose \$A\$ but I will choose \$B\$ so that it is clear that it doesn't need to be \$A\$.

A → (B → A)                                          L.S.1
A → ((B → A) → A)                                    L.S.1
(A → ((B → A) → A)) → ((A → (B → A)) → (A → A))      L.S.2
(A → (B → A)) → (A → A)                              M.P.
A → A                                                M.P.

And that is a proof. Resources Verification program Here is a Prolog program you can use to verify that your proof is in fact valid. Each step should be placed on its own line. -> should be used for implies and - should be used for not, atoms can be represented by any string of alphabetic characters. Metamath Metamath uses the Łukasiewicz system for its proofs in propositional calculus, so you may want to poke around there a bit. They also have a proof of the theorem this challenge asks for, which can be found here. There is an explanation here of how to read the proofs. The Incredible Proof Machine @Antony made me aware of a tool called The Incredible Proof Machine which allows you to construct proofs in a number of systems using a nice graphical proof system. If you scroll down you will find they support the Łukasiewicz system. So if you are a more visually oriented person you can work on your proof there. Your score will be the number of blocks used minus 1.
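For what it's worth, the worked proof of \$A\rightarrow A\$ above can be machine-checked. Here is a minimal stand-alone checker (a sketch of my own, not the linked Prolog verifier); formulas are nested tuples `('->', p, q)` with atoms as strings, and L.S.3 isn't needed for this particular proof:

```python
# Recognizers for instances of the first two axiom schemas, plus a modus
# ponens check against everything proved so far.
def imp(p, q):
    return ('->', p, q)

def is_imp(f):
    return isinstance(f, tuple) and len(f) == 3 and f[0] == '->'

def is_ls1(f):
    # phi -> (psi -> phi)
    return is_imp(f) and is_imp(f[2]) and f[1] == f[2][2]

def is_ls2(f):
    # (phi -> (psi -> chi)) -> ((phi -> psi) -> (phi -> chi))
    if not (is_imp(f) and is_imp(f[1]) and is_imp(f[1][2])):
        return False
    phi, psi, chi = f[1][1], f[1][2][1], f[1][2][2]
    return f[2] == imp(imp(phi, psi), imp(phi, chi))

def check(proof):
    proved = []
    for formula, rule in proof:
        if rule == 'L.S.1':
            assert is_ls1(formula)
        elif rule == 'L.S.2':
            assert is_ls2(formula)
        else:  # M.P.: some earlier phi and phi -> formula must both exist
            assert any(imp(p, formula) in proved for p in proved)
        proved.append(formula)
    return proved[-1]

A, B = 'A', 'B'
proof = [
    (imp(A, imp(B, A)), 'L.S.1'),
    (imp(A, imp(imp(B, A), A)), 'L.S.1'),
    (imp(imp(A, imp(imp(B, A), A)),
         imp(imp(A, imp(B, A)), imp(A, A))), 'L.S.2'),
    (imp(imp(A, imp(B, A)), imp(A, A)), 'M.P.'),
    (imp(A, A), 'M.P.'),
]
print(check(proof))  # → ('->', 'A', 'A'), i.e. A → A, scored at 5 steps
```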
Cauchy Sequences of Complex Numbers Recall from the Sequences of Complex Numbers page that a sequence of complex numbers $(z_n)_{n=1}^{\infty}$ is simply an infinite ordered list of complex numbers. We will now state an important type of sequence of complex numbers. Definition: A sequence of complex numbers $(z_n)_{n=1}^{\infty}$ is a Cauchy Sequence of Complex Numbers if for all $\epsilon > 0$ there exists an $N \in \mathbb{N}$ such that if $m, n \geq N$ then $\mid z_m - z_n \mid < \epsilon$.
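As a concrete illustration (my own sketch, not part of the page), the sequence $z_n = \frac{1+i}{n}$ is Cauchy: the triangle inequality gives $\mid z_m - z_n \mid \leq \mid 1 + i \mid \left( \frac{1}{m} + \frac{1}{n} \right)$, which suggests a valid $N$ for any given $\epsilon$.

```python
# Check the Cauchy definition numerically for z_n = (1 + i)/n.
z = lambda n: (1 + 1j) / n

eps = 1e-3
N = int(2 * abs(1 + 1j) / eps) + 1      # N chosen via the triangle inequality
worst = max(abs(z(m) - z(n))
            for m in range(N, N + 50)
            for n in range(N, N + 50))
assert worst < eps                      # |z_m - z_n| < eps for all m, n >= N sampled
```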
Clopen Set Criterion for Disconnected Topological Spaces Examples 1 Recall from the Clopen Set Criterion for Disconnected Topological Spaces page that a topological space $X$ is disconnected if and only if $X$ contains a proper nonempty clopen subset $A \subset X$, $A \neq \emptyset$. We saw that if $A$ is such a proper clopen subset of $X$ then $\{ A, A^c \} = \{ A, X \setminus A \}$ forms a separation of the set $X$. Equivalently, we can say that a topological space $X$ is connected if and only if $X$ does not contain any proper nonempty clopen subsets. We will now look at some examples concerning this very nice theorem. Example 1 Let $X$ be a set containing more than $1$ element and give $X$ the discrete topology. Prove that $X$ is disconnected. If $X$ has the discrete topology then every subset of $X$ is open in $X$. But then every subset of $X$ is also closed in $X$. Hence every subset of $X$ is clopen in $X$. Since $X$ contains more than $1$ element, there exists a proper clopen set $A \subset X$, $A \neq \emptyset$. So $X$ is disconnected. Example 2 Consider the set $X = \{ a, b, c, d \}$ with the topology $\tau = \{ \emptyset, \{ a\}, \{b \}, \{a, b \}, \{a, b, c \}, X \}$. Determine whether or not this set is connected. The closed sets in $X$ with the topology $\tau$ are:(1) \begin{align} \{ X, \{ b, c, d \}, \{ a, c, d \}, \{ c, d \}, \{ d \}, \emptyset \} \end{align} So no subset of $X$ is both open and closed apart from the empty set, $\emptyset$, and the whole set, $X$. So $X$ is connected. Example 3 Consider the set $X = \{ a, b, c, d \}$. Give a non-discrete topology $\tau$ on $X$ that makes $X$ disconnected. Consider the following topology:(2) \begin{align} \tau = \{ \emptyset, \{ a, b \}, \{ c, d \}, X \} \end{align} Then the closed sets of $X$ are also $\{ \emptyset, \{a, b \}, \{c, d \}, X \}$. So $\{a, b \}$ is a proper subset of $X$ that is both open and closed, so $X$ is disconnected and $\{ \{a, b \}, \{c, d \} \}$ forms a separation of $X$.
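On a finite space the clopen-set criterion can be verified by brute force. A sketch (my own, not part of the page) checking Example 2:

```python
# Brute-force check of Example 2 on X = {a, b, c, d} with the given topology.
X = frozenset('abcd')
tau = [frozenset(s) for s in ('', 'a', 'b', 'ab', 'abc', 'abcd')]

closed = [X - U for U in tau]            # closed sets are complements of opens
clopen = [S for S in tau if S in closed]

print(sorted(len(S) for S in clopen))    # → [0, 4]: only ∅ and X are clopen,
                                         # so X is connected by the criterion
```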
I have some spare time, and a few hundred DJB2-hashed values sitting around. I thought I'd try to do something "useful" and invert DJB2, such that I could calculate the plaintext of the hashes (which has long since been lost, a fact that is often bemoaned). For those who don't know, DJB2 is implemented like this: (C#)

public int Djb2(string text)
{
    int r = 5381;
    foreach (char c in text)
    {
        r = (r * 33) + (int)c;
    }
    return r;
}

Text is a string of ASCII characters, so DJB2 has a lot of collisions, a fact that I knew full well going in to this. Fortunately, the plaintext I have has characteristics that will allow me to use heuristic filtering, so false positives shouldn't be much of an issue :) My algorithm is essentially this, plus some recursion control (pseudocode):

reverse(hash):
    y = hash mod 33
    for c in [65, 120]:
        if c mod 33 == y:
            print reverse((hash - c) / 33), c

In other words, find the remainder of hash mod 33. Then, for all the ASCII values from 65 to 120, check to see if the value mod 33 gives the same remainder. If it does, subtract it from the hash, divide the hash by 33, and continue the algorithm. In this way, we only investigate promising paths, because we know that the subtraction of c must leave a number that is evenly divisible by 33. For example, here is the algorithm working to decode a simple hash: $h = 177676$ $y = h\mod{33} = 4$ Values of $c$ where $4\equiv c\mod{33}$: $70$ and $103$ I can now pursue only those two values of $c$. (In this case, $103$, or 'g', was correct.) Thus, I know my algorithm works to reverse the hashing process. Unfortunately, I ran into a nasty problem. DJB2 will rapidly overflow the bounds of an int, often with plaintext as small as four characters. This overflow essentially results in r being implicitly modded by $2^{32}$. Normal division won't work. I asked a question on programming SE about division in this case, and was informed about the multiplicative inverse of 33.
Unfortunately, I don't need a division operation (yet), I need a remainder operation! This is proving to be much trickier, and I'm not sure it's even possible. Here's an illustrative example: $h = 2090289493$ <-- h is actually $6385256691 \pmod{2^{32}}$ because of the overflow $y = h\mod{33} = 28$ <-- Incorrect! Should be 32 Here is an example of the algorithm using the operation $\Omega$ and an $h$ that has overflowed. This is what I'd like to do, but I don't know what operation $\Omega$ represents. $h = 2090289493$ $y = h\ \Omega\ {33} = 32$ Values of $c$ where $32\equiv c\mod{33}$: $65$ and $98$ I can now pursue only those two values of $c$. Am I on the right track with reversing DJB2 (can it be reversed?)? Is there some way of finding the remainder of a large number that has been modded by $2^{32}$?
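In case it helps a future reader, here is one way around the overflow (a sketch of my own approach, not from the question): treat the C# int as its 32-bit bit pattern, i.e. work mod $2^{32}$ throughout. Since 33 is odd, it has a multiplicative inverse mod $2^{32}$, so each hashing step can be undone exactly and no special remainder operation $\Omega$ is needed, at the cost of trying every candidate character at each position.

```python
MOD = 2 ** 32
INV33 = pow(33, -1, MOD)        # multiplicative inverse of 33 mod 2**32

def djb2(text):
    # the same hash as the C# version, with the overflow made explicit
    r = 5381
    for ch in text:
        r = (r * 33 + ord(ch)) % MOD
    return r

def invert(h, max_len, suffix=''):
    # enumerate all ASCII preimages of h up to max_len characters
    found = []
    if h == 5381:               # reached the initial state: suffix is a preimage
        found.append(suffix)
    if max_len > 0:
        for c in range(65, 123):                # candidate ASCII range
            prev = ((h - c) * INV33) % MOD      # exactly undo one hash step
            found += invert(prev, max_len - 1, chr(c) + suffix)
    return found

print('Cat' in invert(djb2('Cat'), 3))  # → True (other collisions may appear too)
```

The search is exhaustive over the candidate alphabet rather than pruned by a mod-33 test, but the per-step inversion is exact, so heuristic filtering can still be applied to the recovered candidates afterwards.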
I suppose this is a terminology question. "A digital signature scheme which can sign many documents with one private key" means something like this: There are some sets $M$ (the "message space", often the set of bitstrings of any length, or some useful subset thereof), $K_{pub}$, $K_{priv}$ (the public and private "key spaces") and $S$ (the signature space). There is a tuple of (probabilistic and efficiently computable) functions $(g, s, v)$. $g : \varnothing \to K_{pub} \times K_{priv}$ ($g$ generates a random key pair). $s : M \times K_{priv} \to S$ ($s$ signs a message, i.e. generates a signature from the private key and message) $v : M \times K_{pub} \times S \to \{\mathsf{true}, \mathsf{false}\}$ ($v$ verifies a signature) If $(y, x) = g()$ and $m \in M$, then $v(m, y, s(m, x)) = \mathsf{true}$ (or at least with overwhelming probability). Normally we also want that the scheme is secure, which then adds some more properties: Given $y$ (the public key) and a message $m$, without knowledge of $x$ it is hard to create a $\sigma \in S$ such that $v(m, y, \sigma) = \mathsf{true}$, even if some (or even many) other $(m', \sigma')$ are known, or even signatures to other messages can be requested. (This is known as "universal forgery".) It should even be hard to create any message $m \in M$ and $\sigma \in S$ such that $v(m, y, \sigma) = \mathsf{true}$ (other than the $m'$s which were given/queried as examples). (This is known as "existential forgery".) (There are also variants of this definition where the number of documents which can be signed with each key before the security gets lost is limited, but these obviously are not "signature schemes which can sign many documents with one key".) Usually there also is some formalization of "hard", often with a security parameter. 
Similarly, we have the term "hash function", defined something like this: There is a message space $M$ (usually the set of bitstrings of any length, or some useful subset thereof) and a hash space $H$ (usually the set of bitstrings of some fixed length). There is a function $h : M \to H$. The "collision resistance" property then is something like: It is hard to find $m_1, m_2 \in M$ with $m_1 \neq m_2$ and $h(m_1) = h(m_2)$. The "there exists a digital signature scheme which ..." in the quoted lecture extract supposedly means as much as "there exists a secure signature scheme which ...". We already know that there are signature schemes (RSA, DSA, ECDSA, ...), but it is not proven whether they are actually secure. (We sure hope so, and nobody has publicly broken them yet, but it might not actually be possible to prove anything, or it might be that there is no secure signature scheme at all.) We also don't know whether there exists any actually collision-resistant hash function. We have some candidates (like SHA-2), and some previous candidates which turned out not to be collision resistant (MD4, MD5). The argument then is: If there is any secure signature scheme (for multiple documents with each key), then there has to be a collision-resistant hash function. (The idea is that you can actually use the signature scheme as a hash function by using a fixed key, and the signature scheme is not secure if you can find two messages with the same signature (i.e. a hash collision). I think this doesn't actually follow from my security properties mentioned above, so maybe your lecture used some other security definition.) It will also be shown (in the following paragraphs/pages of your text) that from any collision-resistant hash function you can build a secure hash-based signature scheme. So it follows: If there is any secure signature scheme (which can sign many messages) at all, then there is a secure hash-based signature scheme.
Let $c_0$ be the space of real-valued sequences $\{x_n\}$ which converge to zero, equipped with the metric $d(\{x_n\}, \{y_n\}) = \sup_n |x_n - y_n|$. I want to show that the metric space $(c_0, d)$ is complete, in other words that every Cauchy sequence converges. I need to show that every Cauchy sequence has a limit, and that this limit actually belongs to $c_0$. Right now I am having trouble with the first part. If we assume that $\{a_n\}$ in $c_0$ is Cauchy, I know by the definition of the metric $d$ that the real-valued sequence formed by the $k^\text{th}$ term of each element of $\{a_n\}$ (each element itself being a sequence of reals) is also Cauchy. In other words, the sequence $\{a_n\}$ converges term-wise (since Cauchy sequences in the reals converge), but how do I use that to show the entire sequence converges? There is a notational issue here. You need to show that a Cauchy sequence of elements of $c_0$ converges to an element of $c_0$. It might help to refer to an element of $c_0$ as $x$, and then address a specific element of the sequence as $x(k)$. So suppose you have $x_n \in c_0$ such that $x_n$ is Cauchy with the distance given above. For each $k$, you have $|x_n(k)-x_m(k)| \le d(x_n,x_m)$, so there is some $x(k)$ such that $x_n(k) \to x(k)$. This is the candidate sequence. Now you must show that $x \in c_0$ and $d(x,x_n) \to 0$. Let $\epsilon>0$, choose $N$ such that if $m,n \ge N$ then $d(x_n,x_m) < { 1\over 3} \epsilon$. Now choose $K$ such that if $k \ge K$, then $|x_N(k) | < { 1\over 3} \epsilon$. Then, for $k \ge K$ \begin{eqnarray} |x(k)| &\le& |x(k)-x_m(k)| + |x_m(k)-x_N(k)| + |x_N(k)| \\ &<& |x(k)-x_m(k)| + {2 \over 3}\epsilon \end{eqnarray} Now choose $m \ge N$ (which will, in general, depend on $k$) such that $|x(k)-x_m(k)| < {1\over 3} \epsilon$, and we see that $|x(k)| <\epsilon$. Hence $x(k) \to 0$ and so $x \in c_0$. Showing that $x_n \to x$ is similar: Let $\epsilon>0$ and choose $N$ such that if $m,n \ge N$ then $d(x_n,x_m) < { 1\over 2} \epsilon$.
Then, for $m,n \ge N$ we have \begin{eqnarray} |x(k)-x_n(k)| &\le& |x(k)-x_m(k)| + |x_m(k)-x_n(k)| \\ &<& |x(k)-x_m(k)| + {1 \over 2} \epsilon \end{eqnarray} Now choose $m \ge N$ (which will, in general, depend on $k$) such that $|x(k)-x_m(k)| < {1 \over 2} \epsilon$; then we see that $|x(k)-x_n(k)| < \epsilon$ for all $n \ge N$, and so $d(x,x_n) \le \epsilon$. Let $\{x_n\}$ be a Cauchy sequence. Then $d(x_n,x_m) \to 0$ as $n,m \to \infty$. Let $k \in \mathbb{N}$. Note that for all $m,n$, $|x_m(k)- x_n(k)| \leq d(x_m,x_n)$. Hence, $x_m(k)$ is a Cauchy sequence in $\mathbb{R}$ for each fixed $k \in \mathbb{N}$. Now, $\mathbb R$ is complete, so there exists $l_k$ such that $x_m(k) \to l_k$. Define a new sequence $x(k) = l_k$. Moreover, the convergence $x_m(k) \to x(k)$ is uniform in $k$, which is to say that given $\epsilon > 0$, there is $M$ large enough so that $|x_m(k)-x(k)| < \epsilon$ for $m > M$, for all $k \in \mathbb{N}$; this follows because the Cauchy condition in $d$ is uniform in $k$. See that $d(x_n , x) = \sup_k |x(k)-x_n(k)|$. By the uniform convergence property, this converges to zero, hence $x_n \to x$. To see that $x \in c_0$, we want that $x(k) \to 0$ as $k \to \infty$. We know that $x(k) = \lim_n x_n(k)$, so use this knowledge, along with the fact that $x_n(k) \to 0$ as $k \to \infty$ for each fixed $n$, to show that $x(k) \to 0$. (It's a limit-switching argument.)
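The limit-switching step at the end can be spelled out with a standard $\epsilon/2$ estimate (my expansion of the hint, not part of the original answers):

```latex
% Uniform convergence: pick N so that |x_N(k) - x(k)| < \epsilon/2 for all k.
% Since x_N \in c_0, pick K so that |x_N(k)| < \epsilon/2 for all k \ge K.
% Then for k \ge K:
|x(k)| \;\le\; |x(k) - x_N(k)| + |x_N(k)|
       \;<\; \tfrac{\epsilon}{2} + \tfrac{\epsilon}{2} \;=\; \epsilon,
% so x(k) \to 0 as k \to \infty, i.e. x \in c_0.
```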
At Jim's request, here's an expanded version of my comments above. I will have to use some facts from the topological theory of complex algebraic varieties, but out of stubbornness I will not use any such facts which are part of the theory of Lie groups (the maximal compact subgroup, facts specific to complex semisimple Lie algebras, etc.) Let $X$ be a smooth connected affine scheme over a field $k$ (the case of interest being a connected semisimple $k$-group). Consider the collection of connected finite \'etale covers of $X$. This is an inverse system of affine schemes (with coordinate rings that are domains). Consider the inverse limit (i.e., Spec of the direct limit of coordinate rings), call it $\widetilde{X}$. This is an algebraist's analogue of a universal cover: it is Spec of a (typically huge, not finite type over $k$, nor noetherian) domain, so it is connected, and one can show by "standard" direct limit arguments that it has no nontrivial connected finite etale cover. So every finite etale cover of $X$ is totally split by pullback over $\widetilde{X}$, "as if" it were a universal cover in topology. The automorphism group of $\widetilde{X}$ over $X$ is (by one definition) the opposite group of the etale fundamental group of $X$ (upon fixing some geometric point of $X$ as the base point, and lifting it to a geometric point of $\widetilde{X}$; if $k$ were separably closed then we could take a point in $X(k)$ as the base point and lift it to a $k$-point of $\widetilde{X}$). Is $\widetilde{X} \rightarrow X$ a finite-degree covering, say when $k$ is separably closed? This is the same as asking whether $\widetilde{X}$ is of finite type over $k$. In characteristic $p > 0$ one can use the Artin-Schreier method to make infinitely many pairwise non-isomorphic degree-$p$ connected finite etale Galois covers of $X$ if $\dim X > 0$, so in positive characteristic $\widetilde{X}$ is never of finite type over $k$ when $\dim X > 0$.
(To verify infinitude, one approach involves working with finite generically-etale maps to an affine space to essentially reduce the problem to the more familiar case of affine spaces of positive dimension.) But sometimes in char. 0 it is of finite type (such as for algebraic varieties over $\mathbf{C}$ whose complex points are simply connected in the topological sense; we'll come to some examples below). Let's call a connected scheme $S$ simply connected if it has no nontrivial connected finite etale covers; e.g., $\widetilde{X}$ above (see, no noetherian hypotheses). Now there arises a question (inspired by the topological case): is $\widetilde{X} \times_{{\rm{Spec}}(k)} \widetilde{X}$ simply connected (assuming it is at least connected, which is automatic when $k$ is separably closed)? This amounts to asking if the natural map $\pi_1(X \times X) \rightarrow \pi_1(X) \times \pi_1(X)$ is an isomorphism (with $X \times X$ connected). The Artin-Schreier method shows that this fails in char. $> 0$ when $\dim X > 0$, even if $k = k_s$. But if $k$ is alg. closed of char. 0 then it is true. (Here is a sketch of a proof. The content is to show that a cofinal system of connected finite etale covers of $X \times X$ is given by products of such covers of the factors, and then group theory handles the rest. By specialization arguments typically called the "Lefschetz principle", we can assume $k = \mathbf{C}$. Then the known result on the topological side reduces the task to checking that $E \rightsquigarrow E(\mathbf{C})$ sets up an equivalence from the category of finite \'etale covers of $X$ to the category of finite-degree covering spaces over $X(\mathbf{C})$. This is the so-called Riemann Existence Theorem, and is proved in section 5 of Exp. XII of SGA1 via resolution of singularities; it can also be proved via alterations. Maybe there is a more elementary algebraic proof of the product compatibility by exploiting tame ramification in char.
0, but if so then it is escaping my memory at the moment.) So when $k$ is alg. closed of char. 0, if $G$ is a smooth connected affine $k$-group then by simple connectedness (and connectedness) of $\widetilde{G} \times \widetilde{G}$ we can copy the same argument as for Lie groups to uniquely equip $\widetilde{G}$ with a $k$-group scheme structure over that of $G$, making a chosen $k$-rational base point on $\widetilde{G}$ over the identity of $G$ the identity point and the covering map a $k$-homomorphism. Continuing with such $k$, the coordinate ring $\widetilde{A}$ of $\widetilde{G}$ is a Hopf algebra over $k$, yet it is constructed as a directed union of $k$-subalgebras that are finite etale over the coordinate ring $A$ of $G$. By a general fact from the land of Hopf algebras (proved in Waterhouse's book on affine group schemes, for example), $\widetilde{A}$ is a directed union of finite type $k$-subalgebras $A_i$ that are also Hopf subalgebras. Since $A$ is finite type over $k$, by considering only "big enough" $i$, we may assume that every $A_i$ contains $A$. But $A_i$ is finitely generated over $k$, hence over $A$, yet $\widetilde{A}$ is a directed union of finite \'etale $A$-algebras. Thus, each $A_i$ is contained in a finite $A$-algebra and hence is itself module-finite over $A$. In other words, $G_i := {\rm{Spec}}(A_i) \rightarrow G$ is an isogeny between smooth connected affine $k$-groups. This isogeny is necessarily etale (as we're in char. 0), and hence the kernel is central by connectedness of the $k$-groups, and each $G_i$ is necessarily semisimple when $G$ is. To summarize, if $k$ is alg. closed of char. 0 and $G$ is connected semisimple, then $\widetilde{G}$ is an inverse limit of connected semisimple $k$-groups equipped with an etale isogeny over $G$.
But by the theory of connected semisimple groups over general fields, the collection of central isogenous covers of $G$ has a single maximal element that dominates all others (called the simply connected member of the central isogeny class, and characterized by the property that it admits no non-trivial central isogenous covers by another smooth connected semisimple $k$-group; more on this dude below). Voila, so for $k$ alg. closed of char. 0 the collection of $G_i$'s is actually finite and terminates at $\widetilde{G}$. That is, for such $k$ the "abstract" $\widetilde{G}$ coincides with the "simply connected" central cover of $G$ in the sense of algebraic groups, so we conclude that the etale fundamental group is actually finite and coincides with the Cartier dual of the algebraic fundamental group (as the latter is by definition the Cartier dual of the kernel of the central isogeny from the simply connected central cover; more on this over general fields below). In particular, $G$ is simply connected in the sense of algebraic groups if and only if it is simply connected as a scheme. In the special case $k = \mathbf{C}$, we recover the fact that a connected semisimple $\mathbf{C}$-group $G$ is simply connected in the sense of algebraic groups if and only if $G$ is simply connected as a scheme, and (by the Riemann Existence Theorem) also if and only if $G(\mathbf{C})$ is simply connected in the sense of topology. This latter "if and only if" rests on the fact that when $G(\mathbf{C})$ is not simply connected then it has a nontrivial connected cover of finite degree, which is a consequence of the topological fundamental group being commutative and (as for any algebraic variety over $\mathbf{C}$) finitely generated.
Meanwhile, as indicated above with Artin-Schreier coverings (with details left to the interested reader), in characteristic $p > 0$ (say assuming $k = k_s$) the etale fundamental group of a positive-dimensional smooth affine $k$-scheme is always infinite. But the etale fundamental group is an entirely different creature from the algebraic fundamental group over such $k$, as is most easily seen by noting that ${\rm{PGL}}_p$ is not simply connected in the sense of algebraic groups due to the non-etale central isogeny ${\rm{SL}}_p \rightarrow {\rm{PGL}}_p$ of degree $p$. Finally, let's address the characteristic-free theory of the "simply connected central cover" for connected semisimple groups over any field, and the related notion of "algebraic fundamental group". A connected semisimple group $G$ over a field $k$ is simply connected if any central $k$-isogeny $f:G' \rightarrow G$ from a connected semisimple $k$-group is necessarily an isomorphism. (By "central isogeny" I mean that the scheme-theoretic kernel of $f$ is contained in the scheme-theoretic center of $G'$; see Definition A.1.10 and preceding discussion in "Pseudo-reductive groups".) Since every maximal $k$-torus in a connected semisimple $k$-group is its own scheme-theoretic centralizer, the finite scheme-theoretic center of such $k$-groups is contained in a $k$-torus and hence is of multiplicative type. Together with properties of "multiplicative type" groups under central extensions, this underlies the fact that a composition of central isogenies between connected semisimple groups is again central: the crux is that even when the kernels are not etale, their automorphism schemes are always etale. (Beyond the connected reductive setting, over any field $k$ of char. $p > 0$ there exists a pair of central $k$-isogenies $G \rightarrow G'$ and $G' \rightarrow G''$ whose composition has kernel that is not central in $G$. 
For example, over $\mathbf{F}_p$ let $G$ and $G''$ be the standard upper-triangular unipotent subgroup of ${\rm{SL}}_3$, whose scheme-theoretic center is the upper-right $\mathbf{G}_a$. Take $G \rightarrow G''$ to be the Frobenius homomorphism, and take $G'$ to be the intermediate quotient of $G$ by the unique central $\alpha_p$ from the upper-right entry.) Then the real theorem is the existence and uniqueness (up to unique isomorphism) of a simply connected central cover of any connected semisimple $k$-group, and the compatibility of its formation with respect to any extension of the base field. By Galois descent, the strong uniqueness requirements reduce the proof of this assertion to the case $k = k_s$, so all connected semisimple $k$-groups are split. Hence, we can appeal to the Existence and Isomorphism/Isogeny Theorems with root data to conclude. (For further discussion, see Corollary A.4.11 in "Pseudo-reductive groups" and back-references in its proof.) If $G$ is a connected semisimple group over a field $k$ and $\pi:\widetilde{G} \rightarrow G$ is its simply connected central cover in the sense of algebraic groups, then ${\rm{ker}}(\pi)$ is a finite $k$-group scheme of multiplicative type (since it is central in the connected semisimple $\widetilde{G}$) and hence its Cartier dual is a commutative finite \'etale $k$-group. That is called the algebraic fundamental group $\pi_1(G)$ in the sense of algebraic groups. (So by definition, the algebraic fundamental group is trivial if and only if $G$ is simply connected in the sense of algebraic groups.) As we saw above, if $k$ is alg. closed of char. 0 then this is "dual" to the etale fundamental group of the variety $G$, and in characteristic $p > 0$ the example $G = {\rm{PGL}}_p$ shows that it is really quite unrelated to the usual etale fundamental group in the sense of schemes (even when $k = k_s$). Jim, what were you saying about being exhausted? :)
Bases for a Topology Definition: Let $(X, \tau)$ be a topological space. Let $x \in X$. A Base at $x$ (or Local Base at $x$) is a collection $\mathcal B_x$ of open neighbourhoods of $x$ such that for every open neighbourhood $U$ of $x$ there is a $B \in \mathcal B_x$ for which $x \in B \subseteq U$. A Base for the Topology $\tau$ is a collection of open sets $\mathcal B \subseteq \tau$ which contains a base at $x$ for each $x \in X$. Observe that $\mathcal B = \tau$ is always a base for the topology $\tau$, albeit not a very interesting one. For example, consider the set $\mathbb{R}$ with the usual Euclidean topology. There are many types of open sets in $\mathbb{R}$. For each $x \in \mathbb{R}$, an example of a base at $x$ is the collection $\mathcal B_x = \{ B(x, r) : r > 0 \}$ of open balls centered at $x$. Thus, a base for the usual topology on $\mathbb{R}$ is the collection of all open balls of various centers and radii. In general, if $(X, \| \cdot \|_X)$ is a normed linear space then the topology on $X$ induced by the norm (i.e., the metric topology on $X$ induced by the metric $d(x, y) = \| x - y \|_X$) is such that for each $x \in X$ the collection $\mathcal B_x = \{ B_X(x, r) : r > 0 \}$ of open balls, where $B_X(x, r) = \{ y \in X : \| x - y \|_X < r \}$, is a base at $x$, and the collection $\mathcal B$ of all such open balls of various centers and radii is a base for the topology induced by the norm. Proposition 1: Let $(X, \tau)$ be a topological space. Then $\mathcal B \subseteq \tau$ is a base for the topology $\tau$ if and only if every open set $U \in \tau$ can be expressed as a union of sets in $\mathcal B$. By convention, we will say that $\emptyset$ can be expressed as the empty union. Proof: $\Rightarrow$ Suppose that $\mathcal B$ is a base for the topology $\tau$. Let $U \in \tau$. For each $x \in U$ there is a base at $x$, call it $\mathcal B_x$.
Since $x \in U$ and $U \in \tau$, by definition there exists a $B_x \in \mathcal B_x$ such that $x \in B_x \subseteq U$. Then $\displaystyle{U = \bigcup_{x \in U} B_x}$ and each $B_x \in \mathcal B_x \subseteq \mathcal B$. $\Leftarrow$ Suppose that every open set can be expressed as a union of sets in $\mathcal B$. Let $x \in X$. For each $U \in \tau$ with $x \in U$, express $U$ as a union of sets from $\mathcal B$ and select a set $B_U$ from this union with $x \in B_U$ (such a set exists since $x$ belongs to the union). Let $\mathcal B_x$ be the collection of all such sets $B_U$. Then $\mathcal B_x$ is a base at $x$, since for each $U \in \tau$ with $x \in U$ there is a $B \in \mathcal B_x$ (namely $B_U$) such that $x \in B \subseteq U$. Furthermore, $\mathcal B_x \subseteq \mathcal B$. So $\mathcal B$ is a base for the topology $\tau$. $\blacksquare$ From Proposition 1 above, if a base $\mathcal B$ for a topology $\tau$ is given, then the topology $\tau$ is completely determined: it consists of $\emptyset$ and all unions of sets from $\mathcal B$. For this reason, it is extremely convenient to describe a topology by a base. The next question to ask is whether a given collection of sets is a base for some topology on $X$. The following proposition tells us when this is the case. Proposition 2: Let $X$ be a nonempty set and let $\mathcal B$ be a collection of subsets of $X$. Then $\mathcal B$ is a base for some topology $\tau$ on $X$ if and only if the following conditions hold: 1) $\displaystyle{X = \bigcup_{B \in \mathcal B} B}$. 2) For all $B_1, B_2 \in \mathcal B$ and all $x \in B_1 \cap B_2$ there exists a $B \in \mathcal B$ for which $x \in B \subseteq B_1 \cap B_2$.
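Proposition 2's two conditions can be checked mechanically on small finite examples (an illustrative script of my own; the helper names and the sample base are not from the text):

```python
from itertools import chain, combinations

def is_base(X, B):
    """Check the two conditions of Proposition 2 for a candidate base B on X."""
    X = frozenset(X)
    B = [frozenset(b) for b in B]
    # Condition 1: the base elements cover X.
    if frozenset(chain.from_iterable(B)) != X:
        return False
    # Condition 2: every point of an intersection of two base elements
    # lies in some base element contained in that intersection.
    for B1 in B:
        for B2 in B:
            inter = B1 & B2
            for x in inter:
                if not any(x in b and b <= inter for b in B):
                    return False
    return True

def topology_from_base(X, B):
    """The topology generated by B: the empty set plus all unions of base elements."""
    B = [frozenset(b) for b in B]
    opens = {frozenset()}
    for r in range(1, len(B) + 1):
        for combo in combinations(B, r):
            opens.add(frozenset(chain.from_iterable(combo)))
    return opens

X = {1, 2, 3}
B = [{1}, {2}, {1, 2, 3}]
assert is_base(X, B)
tau = topology_from_base(X, B)   # {∅, {1}, {2}, {1,2}, {1,2,3}}
assert not is_base(X, [{1, 2}, {2, 3}])  # fails condition 2 at x = 2
```

The second candidate fails because $2 \in \{1,2\} \cap \{2,3\} = \{2\}$, but no member of the candidate base fits inside $\{2\}$.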
I came across an exercise that asked to explain what "$f(n)$ is $n^{O(1)}$" means. I can't interpret the $n^{O(1)}$ syntax. I know what big $O$ notation is, it's just that this example looks odd to me. It's shorthand for "$n^{h(n)}$ for some function $h(n)\in O(1)$". In other words, the function is at most $n^c$ for some constant $c$. You can see this by directly substituting the definition of $O(1)$ into the expression: $f(n)=n^{O(1)}$ if there is a constant $c$ such that, for all large enough $n$, $f(n)\leq n^{c}$. This includes all polynomially-bounded functions. For example, for $n>0$, $$n^2+3n = n^2(1+\tfrac3n) = n^{2+\log(1+3/n)/\log n} = n^{O(1)}\,,$$ since, for $n\geq 3$, we have $1<1+\tfrac3n\leq 2$ and $\log n>1$, so the exponent is between $2$ and $2+\log 2$.
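A quick numerical sanity check of the worked example (my addition, not part of the original answer): the effective exponent $\log(n^2+3n)/\log n$ indeed stays between $2$ and $2+\log 2$ for $n \ge 3$.

```python
import math

# The effective exponent of f(n) = n^2 + 3n, i.e. log f(n) / log n,
# lies in (2, 2 + log 2] for n >= 3, witnessing f(n) = n^{O(1)}.
for n in range(3, 10_000):
    exponent = math.log(n**2 + 3 * n) / math.log(n)
    assert 2 < exponent <= 2 + math.log(2), (n, exponent)
```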
Suppose you have an action $S(\epsilon) = S_1 + S_2 + \epsilon\, S_\mathrm{int}$. Assume that $S_1$ is gauge invariant under the action of the group $G$ and $S_2$ is gauge invariant under the action of the group $H$, such that the action $S_1 + S_2$ is gauge invariant under the action of $G\times H$. Suppose that $S_\mathrm{int}$ breaks the gauge group down to $F \subset G\times H$, that is, the action $S(\epsilon)$ is gauge invariant under the action of $F$ only. This implies that $$ S(0)=S_1+S_2= \lim\limits_{\epsilon\, \rightarrow\, 0}\,S(\epsilon) $$ has a wider gauge group than $S(\epsilon)$, that is, the gauge symmetry of the action $S(\epsilon)$ is enhanced when sending $\epsilon$ to $0$. For clarity, by "gauge invariant" I mean that the theory has a redundancy of description. Does this imply that the parameter $\epsilon$ is technically natural? To clarify, I mean "natural" in the sense of 't Hooft. The question is motivated by the fact that I could only find the concept of technical naturalness associated with global symmetries in the literature. On the other hand, I did not find any statement saying that it does not hold in the case of gauge symmetries. EDIT: I can provide a simpler example to clarify even more what I mean. Consider the Proca lagrangian density for a real massive spin-1 field, $$ \mathcal{L}=-\dfrac{1}{2}F^{\mu\nu}F_{\mu\nu}+m^2A_\mu A^\mu, \qquad F_{\mu\nu}= \partial_\mu A_\nu - \partial_\nu A_\mu. $$ The corresponding Proca action is not invariant under the gauge group $U(1)$, but taking the limit $m\rightarrow 0$ gives us the action for a free photon, which is gauge invariant under $U(1)$. Hence, sending $m\rightarrow 0$ enhances the gauge symmetry of the action. In this particular case, my question becomes: is the Proca mass $m$ natural in the sense of 't Hooft? In other words, is a small Proca mass $m$ protected against large quantum corrections, the latter being proportional to the small mass itself?
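To make the symmetry enhancement in the Proca example explicit (my own worked step, not part of the original question): under a gauge transformation $A_\mu \to A_\mu + \partial_\mu \lambda$ the field strength is invariant, while the mass term is not:

```latex
F_{\mu\nu} \;\to\; F_{\mu\nu},
\qquad
m^2 A_\mu A^\mu \;\to\;
m^2 A_\mu A^\mu
+ 2\, m^2 A^\mu \partial_\mu \lambda
+ m^2 \partial_\mu \lambda\, \partial^\mu \lambda ,
```

so every gauge-violating term carries a factor of $m^2$ and drops out in the limit $m \to 0$.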
Description Parallel sessions on Neutrino physics A detailed understanding of neutrino ($\nu$)-nucleus interactions is essential for the precise measurement of neutrino oscillations at long baseline experiments, such as T2K. The T2K near detector complex, designed to constrain the T2K flux and cross section models, also provides a complementary program of neutrino interaction cross-section measurements. Through the use of multiple target materials... The experimental observation of coherent elastic neutrino-nucleus scattering (CE$\nu$NS) opened up a new window to explore different sectors from nuclear to neutrino physics, passing through electroweak parameter determination. Indeed, from the analysis of the data provided by the COHERENT experiment, we determined for the first time the average neutron rms radius of $^{133}\text{Cs}$ and... The DUNE experiment directs a neutrino beam from Fermilab towards a 40 kiloton liquid argon time-projection chamber (TPC) 1300 km away in the Sanford Underground Research Facility in South Dakota. By measuring electron neutrino and anti-neutrino appearance from the predominantly muon neutrino and anti-neutrino beams, DUNE will determine the neutrino mass ordering and explore leptonic CP... The limited knowledge of the initial flux, energy and flavor of current neutrino beams is currently the main limitation for a precise measurement of neutrino cross sections. The ENUBET ERC project (2016-2021) is studying a facility based on a narrow band neutrino beam capable of constraining the neutrino flux normalization through the monitoring of the associated charged leptons in an instrumented decay... T2K is a long baseline neutrino experiment producing a beam of muon neutrinos at the Japan Proton Accelerator Research Complex (J-PARC) on the east coast of Japan and measuring their oscillated state 295 km away at the Super-Kamiokande detector. Since 2016 T2K has doubled its data in both neutrino and antineutrino beam modes.
Coupled with improvements in analysis techniques this has enabled the... The NOvA experiment is a long-baseline neutrino oscillation experiment that uses the upgraded NuMI beam from Fermilab to measure both electron-neutrino appearance and muon-neutrino disappearance. NOvA employs two functionally identical detectors: a Near Detector, located at Fermilab, and a Far Detector, located at Ash River, Minnesota, over an 810 km baseline. NOvA's primary physics goals include precision... The Deep Underground Neutrino Experiment (DUNE) has a broad physics program, which includes measuring the CP-violating phase, determining the neutrino mass hierarchy and performing precision tests of the three-flavor paradigm in long-baseline neutrino oscillations by means of measurements of neutrino oscillation parameters. Other science goals are the detection of neutrinos from... Hyper-Kamiokande is a next generation large-scale water Cherenkov detector. Its fiducial volume will be about an order of magnitude larger than Super-Kamiokande and the detector performance is significantly improved with newly developed photo-sensors. Combination of the Hyper-Kamiokande detector with the upgraded J-PARC neutrino beam will provide unprecedented high statistics of the neutrino... The Jiangmen Underground Neutrino Observatory (JUNO) is the first multi-kton liquid scintillator detector to come on the scene, in 2021. It will have a 20 kt target mass and an overburden of 1900 m.w.e. It is currently under construction near Kaiping in the Guangdong province in southern China, at a strategic baseline of 53 km from two nuclear power plants. The main physics goal is to determine the... The Cubic Kilometre Neutrino Telescope (KM3NeT) is a next generation undersea neutrino telescope in the Mediterranean Sea, currently under deployment. Its low energy configuration ORCA (Oscillations Research with Cosmics in the Abyss) will have a low neutrino energy detection threshold of 3 GeV.
The effective mass of the fully completed detector is estimated to be around 5.8 megatonnes. The... We discuss the concept and detection prospects of general neutrino interactions (GNI) as a well-motivated generalisation of the widely-studied non-standard interactions (NSI), both encompassing effects of new physics at energies below the electroweak scale. If GNI (tensor, (pseudo)scalar, and (axial) vector interactions) arise from heavy new physics, they should be related to effective field... The small neutrino masses might be a consequence of the well-known seesaw mechanism, which requires new fields as heavy as $10^{14}$ GeV. However, there are alternative explanations. For example, neutrino masses might be generated through loops or via high-dimensional operators. In both cases, the mediating particles can have TeV-scale masses, and if so it might be possible to produce them at the LHC. ESSnuSB is a design study for an experiment which will attempt to measure CP violation in the lepton sector by observing neutrino oscillations at the second muon neutrino to electron neutrino oscillation maximum. The very intense neutrino beam will be generated by the uniquely powerful (5 MW average) ESS linear proton accelerator, which is currently under construction near Lund, Sweden. The experiment... In view of the J-PARC program of upgrades of the beam intensity, the T2K collaboration is preparing towards an increase of the exposure aimed at establishing leptonic CP violation at the 3$\sigma$ level for a significant fraction of the possible $\delta_{CP}$ values. To reach this goal, an upgrade of the T2K near detector ND280 has been launched, with the aim of reducing the overall statistical... The Accelerator Neutrino Neutron Interaction Experiment (ANNIE) is a Gadolinium-doped water Cherenkov detector located in the Booster Neutrino Beam at Fermilab with the primary goal of measuring the final state neutron multiplicity of neutrino-nucleus interactions.
The measurement of the neutron yield as a function of the outgoing lepton kinematics will be useful to constrain systematic... The Muon Ionization Cooling Experiment (MICE) at RAL has collected extensive data to study the ionization cooling of muons. This is a decisive demonstration towards new neutrino sources based on muon storage rings. Several million individual muon tracks have been recorded passing through a series of focusing magnets in a number of different configurations and a liquid hydrogen or lithium... MicroBooNE is an 85 ton active-mass liquid argon time projection chamber located in the Booster Neutrino Beam at Fermilab, at a baseline of 470 m. The primary aims of MicroBooNE are to investigate the low-energy excess observed by the MiniBooNE experiment and to make precision measurements of neutrino interactions on argon. In addition, important lessons are being learned about the performance... The ICARUS collaboration employed the 760-ton T600 detector in a successful three-year physics run at the underground LNGS laboratories, studying neutrino oscillations with the CNGS neutrino beam from CERN and searching for atmospheric neutrino interactions. ICARUS performed a sensitive search for LSND-like anomalous $\nu_e$ appearance in the CNGS beam, which contributed to the constraints the... In recent years two unsolved anomalies have appeared in the study of reactor neutrinos: one related to the neutrino spectral shape, and another to the absolute neutrino flux. The latter, known as the Reactor Antineutrino Anomaly (RAA), presents a deficit in the observed flux compared to the expected one. This anomaly could point to the existence of a light sterile neutrino... One of the hottest topics in present-day neutrino physics is provided by the hints of sterile species coming from the short-baseline (SBL) anomalies.
Waiting for a definitive (dis-)confirmation of these indications by future SBL experiments, other complementary avenues can be explored in the hunt for such elusive particles. An important opportunity is that offered by the long-baseline (LBL)... In this talk I present a global fit to $\nu_\mu$ disappearance data in the context of 3+1 neutrino oscillations. I explain the analysis method for the experiments with the most impact on the global picture, namely MINOS/MINOS+, DeepCore and Antares, before presenting the results of the combined fit. To finish, I discuss the implications of our results for the global 3+1 picture. Recently the MiniBooNE Collaboration has reported an anomalous excess in muon to electron (anti-)neutrino oscillation data. Combined with long-standing results from the LSND experiment, this amounts to 6.1 sigma evidence for new physics beyond the Standard Model. We develop a framework with 3 active and 3 sterile neutrinos with altered dispersion relations that can explain these anomalies... MicroBooNE is a liquid argon time projection chamber in the Booster Neutrino Beam at Fermilab. The large event rate and 3 mm wire spacing of the detector provide high-statistics, precise-resolution imaging of neutrino interactions, leading to low-threshold, high-efficiency event reconstruction with full angular coverage. As such, this is an ideal place to probe neutrino-argon interactions in... The European Strategy for Particle Physics classified in 2013 the long-baseline neutrino programme as one of the four highest-priority scientific objectives. The Neutrino Platform was then born as the CERN enterprise to encourage and support the next generation of accelerator-based neutrino oscillation experiments. Part of the present CERN Medium-Term Plan, the Neutrino Platform has since... A paper written by the speaker together with two colleagues in 2007 restarted the discussion of relic neutrino detection after many years of silence on the subject.
In that paper, a process that makes possible the detection of neutrinos of vanishing energy was discussed, and its cross sections with beta-unstable elements were evaluated. After this paper it took 10 years to get to... The discovery of coherent elastic neutrino-nucleus scattering (CE$\nu$NS) by the COHERENT experiment set the stage for new investigations within and beyond the Standard Model's neutrino sector. However, its detection in the fully coherent regime at low neutrino energies is still pending, since the associated low nuclear recoils are experimentally challenging in terms of detection threshold and... We discuss SO(3) as the origin of finite family symmetries such as A4, S4 and A5 in the SUSY framework for the first time. We propose a supersymmetric gauged SO(3)×U(1) flavour model. This model goes through two-step symmetry breaking, first from SO(3) to A4 and then from A4 to residual Z2 and Z3. The model is consistent with current oscillation data and predicts sum rules of mixing... Numerous experimental efforts have shown that antineutrino-based monitoring provides a non-intrusive means to estimate the fissile content and relative thermal power of nuclear reactors for nonproliferation. However, close proximity to the reactor core is required in order to collect the relatively high-statistics data needed for such applications. This has limited the focus of most studies to the... The SHiP Collaboration has proposed a general-purpose experimental facility operating in beam dump mode at the CERN SPS accelerator with the aim of searching for light, long-lived exotic particles of Hidden Sector models. The SHiP experiment incorporates a muon shield based on magnetic sweeping and two complementary apparatuses. The detector immediately downstream of the muon shield is... FASER is a new experiment at the LHC aiming to search for light, weakly-interacting new particles, complementing other experiments.
A particle detector will be located 480 m downstream of the ATLAS interaction point. In addition to searches for new particles, we also aim to study high-energy neutrinos of all flavors, as there is a huge flux of neutrinos at this location. To date, muon neutrino... The Cryogenic Underground Observatory for Rare Events (CUORE) is the first bolometric experiment searching for neutrinoless double beta decay (0νββ) that has been able to reach the one-ton scale. The detector consists of an array of 988 TeO2 crystals arranged in a compact cylindrical structure of 19 towers. The construction of the experiment was completed in August 2016 with the installation... The GERDA experiment searches for neutrinoless double-beta decay using high purity germanium detectors enriched in $^{76}$Ge, simultaneously used as source and detector. The observation of such a process would demonstrate the presence of a Majorana term in the neutrino mass and prove that lepton number is not conserved. The experimental setup is located at the LNGS underground laboratory... A convincing observation of neutrino-less double beta decay (0$\nu$DBD) relies on the possibility of operating high-energy-resolution detectors in background-free conditions. Scintillating cryogenic calorimeters are one of the most promising tools to fulfill the requirements for a next-generation experiment. Several steps have been taken to demonstrate the maturity of this technique, starting... The SNO+ experiment is a low background, liquid scintillator neutrino detector with the goal of detecting neutrinoless double beta decay in Tellurium-130. The experiment has been taking data while filled with water since early 2017, setting world-leading limits on invisible nucleon decay and making a very low background measurement of solar neutrinos. SNO+ is in the process of being filled with liquid...
EXO-200 is a neutrinoless double beta decay (0νββ) experiment using a time projection chamber filled with ~150 kg of liquid xenon, enriched in 136Xe. The experiment, located at the Waste Isolation Pilot Plant (WIPP) near Carlsbad, New Mexico, recently completed data taking that started in 2011. The last two years of data, after some hardware upgrades, resulted in improved energy resolution. ... The Neutrino Experiment with a Xenon TPC (NEXT) searches for the neutrinoless double beta decay of 136Xe using a high pressure xenon gas time projection chamber. This detector technology has several key advantages, including excellent energy resolution, powerful event classification based on track topology, and favorable mass scalability. It also offers the tantalising possibility of tagging... FERS-5200 is a Front-End Readout System designed for the readout of large detector arrays, such as SiPMs, multi-anode PMTs, Silicon Strip detectors, Wire Chambers, GEMs, Gas Tubes and others. FERS is a distributed and scalable system, where each unit is a small card that houses 32 or 64 channels with preamplifier, shaper, discriminator, A/D converter, trigger logic, synchronization, local...
Some properties of solutions for an isothermal viscous Cahn–Hilliard equation with inertial term. Boundary Value Problems, volume 2018, Article number: 62 (2018) Abstract In this paper, we study the global existence and blow-up of solutions for an isothermal viscous Cahn–Hilliard equation with inertial term, which arises in isothermal fast phase separation processes. Based on the Galerkin method and the compactness theorem, we establish the existence of the global generalized solution. Using a lemma on the ordinary differential inequality of second order, we prove the blow-up of the solution for the initial-boundary problem. Introduction In this paper, we are concerned with the following initial-boundary problem: where \(\Omega\subset{\mathbb {R}}^{n}\) (\(n\leq3\)) is a bounded domain with smooth boundary, \(\delta> 0\) is an inertial parameter, \(k \geq0\) is a viscosity coefficient, and \(f (s)\) is a given nonlinear function. Equation (1.1) was proposed in [1] to model rapid spinodal decompositions in a binary alloy. Zheng and Milani [2] proved that the dynamical systems generated by problem (1.1)–(1.3) admit exponential attractors and inertial manifolds. Zheng and Milani [3] showed that the dynamical systems admit global attractors and that these global attractors are at least upper-semicontinuous with respect to the vanishing of the perturbation parameter. Gatti et al. [4] considered problem (1.1)–(1.3); their main result is the construction of a robust family of exponential attractors whose common basin of attraction is the whole phase space. The same authors [5] also considered the same problem in the three-dimensional setting. Grasselli et al. [6] studied a differential model describing nonisothermal fast phase separation processes taking place in a three-dimensional bounded domain. where \(\sigma\in[0,1]\).
This model consists of a viscous Cahn–Hilliard equation characterized by the presence of an inertial term \(\chi_{tt}\), χ being the order parameter, which is linearly coupled with an evolution equation for the (relative) temperature ϑ. The blow-up of solutions for fourth order equations has been intensively studied. Chen and Lu [7] considered the initial-boundary value problem for the nonlinear wave equation They obtained the blow-up of the solution and the energy decay of the solutions. Wang [8] studied the equation He gave necessary and sufficient conditions for global existence and finite time blow-up of solutions. Escudero et al. [9] discussed a fourth order parabolic equation involving the Hessian The authors proved the global existence versus blow-up results. Qu and Zhou [10] studied the following: By using the method of potential wells, they obtained a threshold result of global existence and blow-up for the sign-changing weak solutions and the conditions under which the global solutions become extinct in finite time. In this paper, we consider the global existence and blow-up of solutions for problem (1.1)–(1.3). To prove the blow-up of solutions, we establish a new functional and consider the solution of a Bernoulli type equation. Based on the required estimates and using a lemma on the ordinary differential inequality of second order, we prove the blow-up of the solution for the initial-boundary problem. The main method is nontrivial because of both the nonlinearity of \(\Delta f(u)\) and the more delicate estimates needed to overcome some technical points. The plan of this paper is as follows. In Sect. 2, we prove the existence and uniqueness of the global generalized solution for the initial-boundary value problems (1.1)–(1.3) by the Galerkin method. We also give some sufficient conditions for the blow-up of the solutions for the initial-boundary value problems (1.1)–(1.3) in Sect. 3. Finally, in Sect.
4, we discuss the decay rate of the energy. For simplicity, we set \(\delta= 1\) in this paper. Existence of the global solution Let \(\{y_{i}(x)\}\) be the orthonormal basis in \(L^{2}(\Omega)\) composed of the eigenfunctions of the eigenvalue problem corresponding to eigenvalue \(\lambda_{i}\) (\(i=1,2,\ldots\)). Let be the Galerkin approximate solution for problem (1.1)–(1.3), where \(\gamma_{Ni}(t)\) are the undetermined functions and N is a natural number. Suppose that the initial value functions \(\varphi(x)\) and \(\psi(x)\) may be expressed as where \(\mu_{i}\) and \(\nu_{i}\) (\(i=1,2,\ldots\)) are constants. Substituting the approximate solution \(u_{N}(x,t)\) into Eq. (1.1), multiplying both sides by \(y_{s}(x)\), we obtain where \((\cdot,\cdot)\) denotes the inner product of \(L^{2}(\Omega)\). Substituting the approximate solution \(u_{N}(x,t)\) and the approximations of the initial value functions into the initial condition (1.3), we get where \(\dot{\gamma}_{Ns}(t)=\frac{d}{dt}\gamma_{Ns}(t)\). Lemma 2.1 Suppose that \(\varphi\in H^{2}(\Omega)\) and \(\psi\in L^{2}(\Omega)\) satisfy the boundary condition (1.2), \(f \in C^{1}(R)\), \(0\leq F(s)=\int_{0}^{s}f(\eta)\,d\eta\), and \(|f'(s)|\leq C_{1}|s|^{2}+C_{2}\), where \(C_{1}>0\), \(C_{2}>0\) are constants. Then the following estimate holds: where and in the sequel \(C>0\) is a constant which depends only on T. Proof Let \(w_{N}\) be the unique solution of the problem Substituting the approximate solution \(u_{N}(x,t)\) into Eq. (1.1), multiplying both sides by \(2w_{Nt}\), we obtain Integrating by parts with respect to x on Ω, we have Hence, we know Multiplying both sides of (2.4) by \(2\gamma_{Nst}(t)\), summing up for \(s=1,2,\ldots, N\), we have Integrating by parts with respect to x on Ω, we get Since \(|f'(s)|\leq C_{1}|s|^{2}+C_{2}\), we have Therefore, by (2.11), Then, integrating (2.12) on \([0,t]\) and using the Gronwall inequality, we deduce Lemma 2.2 Suppose that the conditions of Lemma 2.1 hold.
If \(f\in C^{3}(R)\), \(\varphi\in H^{4}(\Omega)\), \(\psi\in H^{2}(\Omega)\), and \(|f''(s)|\leq C_{3}|s|+C_{4}\), \(|f'''(s)|\leq C\), then the approximate solution for problem (1.1)–(1.3) satisfies the following estimate: Proof Multiplying both sides of (2.4) by \(2\lambda_{s}^{2}\gamma _{Nst}(t)\), summing up for \(s=1,2,\ldots, N\), we have Integrating by parts with respect to x, we get On the other hand, we know and By (2.13), we know that \(\|u\|_{\infty}\leq C\), hence Thus, By (2.13) and the Sobolev imbedding theorem, we see that Using the Gagliardo–Nirenberg inequality, we conclude On the other hand, by boundary conditions (1.2), we obtain Substituting the above inequalities into (2.15), we get Integrating the above inequality, and using the Gronwall inequality, we have Similarly, multiplying both sides of (2.4) by \(\gamma_{Nstt}(t)\), summing up for \(s=1,2,\ldots, N\), we deduce Integrating by parts with respect to x and using the Cauchy inequality, we have Therefore, we conclude Theorem 2.1 Suppose that \(\varphi\in H^{4}(\Omega)\) and \(\psi\in H^{2}(\Omega)\) satisfy the boundary conditions (1.2), \(f\in C^{3}(R)\), \(0\leq F(s)=\int_{0}^{s}f(\eta)\,d\eta\), \(|f'(s)|\leq C_{1}|s|^{2}+C_{2}\), \(|f''(s)|\leq C_{3}|s|+C_{4}\), and \(|f'''(s)|\leq C\). Then problem (1.1)–(1.3) has a unique global generalized solution Proof From (2.14) we know that \(u_{N}\in C ([0,T];H^{4}(\Omega) )\), \(u_{Nt}\in C ([0,T];H^{2}(\Omega) )\), \(u_{Ntt}\in C ([0,T];L^{2}(\Omega) )\). Using the Sobolev imbedding theorem, we have \(D^{k}u_{N}\in C ([0,T]\times\Omega )\), \(0\leq k\leq2\). It follows from the above two relations and the Arzelà–Ascoli theorem that there exist a function \(u(x,t)\) and a subsequence of \({u_{N}(x,t)}\), still denoted by \({u_{N}(x,t)}\), such that as \(N\rightarrow\infty\), \({u_{N}(x,t)}\) uniformly converges to \(u(x,t)\) in \([0,T]\times\Omega\).
The corresponding subsequence of \(\Delta u_{N}(x,t)\) also uniformly converges to \(\Delta u(x,t)\) in \([0,T]\times\Omega\). According to the compactness theorem, the subsequences \(D^{k}u_{N}(x,t)\) (\(0\leq k\leq4\)), \(D^{k}u_{Nt}(x,t)\) (\(0\leq k\leq2\)), and \(u_{Ntt}(x,t)\) weakly converge to \(D^{k}u(x,t)\) (\(0\leq k\leq4\)), \(D^{k}u_{t}(x,t)\) (\(0\leq k\leq2\)), and \(u_{tt}(x,t)\) in \(L^{2} ([0,T]\times\Omega )\), respectively. Hence, we know that \(u(x,t)\) satisfies (2.18). Therefore \(u(x,t)\) is the generalized solution for problem (1.1)–(1.3). It is easy to prove the uniqueness of the solutions for problem (1.1)–(1.3). This completes the proof of the theorem. □ Blow-up of solutions In the previous sections, we have seen that the solution of problem (1.1)–(1.3) is globally existent, provided that \(F(s)\geq0\). In this section, we will prove the blow-up of the solution for \(F(s)<0\). For this purpose, we need the following lemma. Lemma 3.1 ([7]) Assume that \(u'=G(t, u)\), \(v'\geq G(t, v)\), \(G\in C([0,\infty)\times (-\infty, \infty))\), and \(u(t_{0})=v(t_{0})\), \(t_{0}\geq0\). Then, for \(t\geq t_{0}\), \(v(t)\geq u(t)\), where \(u'=\frac{d}{dt}u(t)\). Let w be the unique solution of the problem We have the following theorem. Theorem 3.1 Suppose that (1) \(f(s)s\leq\gamma F(s)\), \(F(s)\leq-\alpha|s|^{p+1}\), where \(F(s)=\int_{0}^{s}f(u)\,du\), \(\gamma>2\), \(\alpha>0\), and \(p>1\) are constants.
(2) \(\varphi\in H^{4}(\Omega)\), \(\psi\in H^{2}(\Omega)\), and $$\begin{aligned} E(0)={}&\big\| \nabla w_{t}(0)\big\| ^{2}+\|\nabla\varphi \|^{2}+2 \int_{\Omega}F\bigl(\varphi (x)\bigr)\,dx \\ \leq{}& \frac{-2^{\frac{2p}{p-1}}}{ (\frac{\alpha(\gamma -2)}{p+3} )^{\frac{2}{p-1}} (1-e^{-\frac{p-1}{4}} )^{\frac{4}{p-1}}}< 0.\end{aligned} $$ Then there exists \(T^{*}>0\) such that $$\|\nabla w\|^{2}+ \int_{0}^{t}\big\| u(\cdot,\tau)\big\| ^{2}\,d\tau+ \int_{0}^{t}\big\| \nabla w(\cdot ,\tau) \big\| ^{2}\,d\tau+ \int_{0}^{t} \int_{0}^{\tau}\|u\|^{2}\,ds\,d\tau\to \infty,\quad \textit{as } t\to T^{*}. $$ Proof Let A simple calculation shows that Noticing equation (1.1), we know that which implies Moreover, we easily see Now, we define It is obvious that Further, we have Integrating (3.5), we conclude that Integrating (3.6), we deduce Recalling that \(H''(t)>0\) and \(H(t)\geq0\), from (3.9) we obtain On the other hand, the Hölder inequality implies that Thus Substituting the above inequalities into (3.10), and by the fact \((x+y+z)^{n}\leq2^{2(n-1)}(x^{n}+y^{n}+z^{n})\), \(x,y,z>0\), \(n>1\), we know that In addition, note from (3.6) and (3.7) that \(H'(t)\to+\infty \) and \(H(t)\to+\infty\) as \(t\to\infty\). Therefore, we see that there is \(t_{0}\geq1\) such that when \(t\geq t_{0}\), \(H'(t) > 0\) and \(H(t) > 0\). Multiplying (3.11) by \(2H'(t)\), we get where It follows from (3.12) that Integrating the above inequality over \((t_{0}, t)\), we easily see Note that when \(t\to\infty\), the right-hand side of (3.13) approaches positive infinity, hence, there is \(t_{1} > t_{0}\) such that when \(t\geq t_{1}\), the right-hand side of (3.13) is larger than or equal to zero. We thus have that is, where \(M_{1}=M^{1/2}\).
Now, we consider the initial value problem of the ordinary differential equation Therefore, we conclude that where It is obvious that \(J(t_{1})=1>0\), and By (3.7), we can take \(t_{1}\) sufficiently large such that Condition (2) of Theorem 3.1 implies Noticing the continuity of \(J (t)\), we know that there is a constant \(T^{*}\) (\(t_{1} < T^{*}< t_{1} + 1\)) such that \(J (T^{*} ) = 0\). Hence \(S(t)\to\infty\), as \(t\to T^{*}\). It follows from Lemma 3.1 that when \(t\geq t_{1}\), \(H(t)\geq S(t)\). Thus, \(H(t)\to \infty\) as \(t\to T^{*}\). Theorem 3.1 is proved. □ Decay rate of energy Lemma 4.1 ([11]) Suppose that \(J : [0,\infty)\to[0,\infty)\) is a non-increasing function and assume that there is a constant \(L > 0\) such that Then Theorem 4.1 where \(G(t)=\int_{\Omega}|\nabla w_{t}|^{2}\,dx+\int_{\Omega}|\nabla u|^{2}\, dx+2\int_{\Omega}F(u)\,dx\). Proof Recalling (3.1), we derive A simple computation gives, for any \(0\leq t_{1}\leq t_{2}<\infty\), which shows that \(G(t)\) is non-increasing. Multiplying (1.1) by \(w(x, t)\), integrating over \((t_{1},t_{2})\times \Omega\), and integrating by parts, we have which implies Recalling the assumption \(f(s)s\leq2F(s)\), we know Using the Poincaré inequality, we obtain The Cauchy inequality yields On the other hand, by the non-increasing property of \(G(t)\), we get By Lemma 4.1, we conclude that This completes the proof. □ References 1. Galenko, P.: Phase-field models with relaxation of the diffusion flux in nonequilibrium solidification of a binary system. Phys. Lett. A 287, 190–197 (2001) 2. Zheng, S., Milani, A.: Exponential attractors and inertial manifolds for singular perturbations of the Cahn–Hilliard equations. Nonlinear Anal. 57, 843–877 (2004) 3. Zheng, S., Milani, A.: Global attractors for singular perturbations of the Cahn–Hilliard equations. J. Differ. Equ. 209, 101–139 (2005) 4.
Gatti, S., Grasselli, M., Miranville, A., Pata, V.: On the hyperbolic relaxation of the one-dimensional Cahn–Hilliard equation. J. Math. Anal. Appl. 312, 230–247 (2005) 5. Gatti, S., Grasselli, M., Miranville, A., Pata, V.: Hyperbolic relaxation of the viscous Cahn–Hilliard equation in 3-D. Math. Models Methods Appl. Sci. 15, 165–198 (2005) 6. Grasselli, M., Petzeltová, H., Schimperna, G.: Asymptotic behaviour of a non-isothermal viscous Cahn–Hilliard equation with inertial term. J. Differ. Equ. 239, 38–60 (2007) 7. Chen, G., Lu, B.: The initial-boundary value problems for a class of nonlinear wave equations with damping term. J. Math. Anal. Appl. 351, 1–15 (2009) 8. Wang, Y.: Finite time blow-up and global solutions for fourth order damped wave equations. J. Math. Anal. Appl. 418, 713–733 (2014) 9. Escudero, C., Gazzola, F., Peral, I.: Global existence versus blow-up results for a fourth order parabolic PDE involving the Hessian. J. Math. Pures Appl. 103, 924–957 (2015) 10. Qu, C., Zhou, W.: Blow-up and extinction for a thin-film equation with initial-boundary value conditions. J. Math. Anal. Appl. 436, 796–809 (2016) 11. Komornik, V.: Exact Controllability and Stabilization: The Multiplier Method. Wiley, New York (1994) Acknowledgements The authors would like to express their deep thanks to the referee for valuable suggestions for the revision and improvement of the manuscript. Availability of data and materials Not applicable. Funding This work is supported by the Jilin Scientific and Technological Development Program [number 20170101143JC]. Ethics declarations Competing interests The authors declare that they have no competing interests.
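As a coda to Sect. 2 above: the Galerkin recipe (expand in the eigenfunctions of the eigenvalue problem, project onto each basis function, solve the resulting ODE system) is easiest to see on a model problem. The sketch below applies it to the simple heat equation $u_t = u_{xx}$ on $(0,\pi)$ with Dirichlet boundary conditions — not to equation (1.1) — and the grid size and initial datum are arbitrary choices for illustration.

```python
import numpy as np

# Galerkin method for u_t = u_xx on (0, pi), u(0) = u(pi) = 0.
# Eigenfunctions of -d^2/dx^2 are y_i = sin(i x) with eigenvalues i^2,
# so the projected system decouples: gamma_i'(t) = -i^2 gamma_i(t).
N = 5
x = np.linspace(0.0, np.pi, 2001)
dx = x[1] - x[0]
phi = x * (np.pi - x)                          # initial datum phi(x)

# Coefficients mu_i = (2/pi) <phi, sin(i x)>; the integrand vanishes at
# both endpoints, so a plain Riemann sum is accurate enough here.
mu = np.array([(2 / np.pi) * np.sum(phi * np.sin(i * x)) * dx
               for i in range(1, N + 1)])

def u_galerkin(t):
    # gamma_i(t) = mu_i * exp(-i^2 t) solves the projected ODE system
    return sum(mu[i - 1] * np.exp(-i**2 * t) * np.sin(i * x)
               for i in range(1, N + 1))

err0 = np.max(np.abs(u_galerkin(0.0) - phi))   # truncation error at t = 0
```

With only five modes the expansion already reproduces the initial datum to within a few percent, and the approximate solution decays in time, which is the qualitative behaviour one expects before passing to the limit as in Theorem 2.1.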
Drops, also known as Loot, are the items that monsters leave behind for the player who killed them when they are defeated. These items may then be picked up by players. Drops often include bones, coins or other items. Most monsters have "100%-chance drops", which is an item or items that are always dropped by that monster upon defeat or death. 100%-chance drops are most commonly bones or demonic ashes. Certain monsters may have more than one type of 100%-chance drop, however; a common example of this are metal dragons, who drop dragon bones as well as metal bars corresponding to their composite metal (see the image to the right). Typically, the player who attacks the monster first will see the drop before other players, and the attacked NPC is marked with an asterisk (*) to denote such "ownership". This does not apply to specific monsters, however, such as bosses. Large monsters (those that take up more than one square) will always drop their drops in the south-westernmost square. This also applies to any Ranged ammunition that falls to the ground when ranging those monsters. Drops will remain on the ground for 200 game ticks (2 minutes or 120 seconds), after which they will disappear. Drops are invisible to other players for the first 100 game ticks (1 minute or 60 seconds), after which time anyone will be able to see (and take) the item if it is tradeable. Drops tables When a monster dies, it will roll most of its drops tables to see if a player should obtain an item, and then which item they should obtain. 100% drop These are the items that a monster is guaranteed to drop when it dies. Items on this table are usually remains such as bones and ashes, but there is no strict limitation to what can appear on this drop table.
While all items on this table have a drop chance of "Always", not every item with an "Always" drop chance is a 100% drop; i.e., some items, such as the First dragonkin journal, are guaranteed as a drop on the first kill, but are not obtainable afterwards. This distinction keeps one-off items from being classed here; such items are usually tertiary drops. Charms Charms are dropped by a large selection of monsters. Charms are dropped alongside the other drops. Main drop The main drops table, if a monster has one, is guaranteed to be rolled when a monster dies. It contains a selection of items, with one randomly chosen. For a small handful of monsters, this table can be rolled multiple times in one kill. Tertiary drops Tertiary drops are a separate table that is rolled alongside the main drop. Unlike the main drop, multiple tertiary drops can be obtained in a single kill; however, similarly classed items cannot be obtained at the same time; e.g., if a spirit sapphire is obtained as a tertiary drop, it is impossible to also obtain a spirit ruby, but this roll will have no effect on obtaining a clue scroll. Rare drop table The rare drop table is an extra table of drops that can be rolled if the slot pointing to it is rolled on the main drop table. As such, the rare drop table drop will count as the main drop. The items available on the rare drop table are the same across every monster. Universal drops Universal drops are a class of items that can be obtained from nearly every monster. Usually, this drops table only includes the Key token; however, promotional items can be added during Treasure Hunter promotions. Drops on this table usually follow a time-release mechanic; e.g. if a key token was obtained as a drop, it will not be possible to obtain another key token for several minutes. This time gate is universal as well: it is also closed by, for example, obtaining a key token through skilling. Drops on this table may or may not be added directly to a player's inventory/bank.
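The way these tables compose on a single kill can be sketched in a few lines. The item names, weights, and rates below are invented purely for illustration — they are not actual game data.

```python
import random

def roll_kill(rng):
    """Simulate one kill against the hypothetical tables described above."""
    drops = ["bones"]                                  # 100% drop: always given
    # Main drop: exactly one weighted slot is chosen; one slot points at
    # the shared rare drop table.
    slots = [("coins", 70), ("steel dagger", 25), ("rare drop table", 5)]
    items, weights = zip(*slots)
    main = rng.choices(items, weights=weights)[0]
    if main == "rare drop table":                      # RDT re-roll replaces the main drop
        main = rng.choice(["loop half of a key", "dragonstone"])
    drops.append(main)
    # Tertiary drop: rolled independently, alongside the main drop.
    if rng.random() < 1 / 128:
        drops.append("clue scroll")
    return drops

rng = random.Random(7)
result = roll_kill(rng)
```

Note how the 100% drop is unconditional, the main drop always yields exactly one item, and the tertiary roll can add an extra item without displacing the main one — matching the behaviour described in the sections above.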
Drop rate All items have a chance of being dropped that is expressible as a number, their drop rate. Drop rates are not necessarily a guarantee; an item with a drop rate of "1 in 5" does not equate to "This item will be dropped after 5 kills." While each kill does nothing to increase the drop rate itself, more kills do give a higher overall chance of seeing the item. Binomial model Given a known drop rate of $ \frac{1}{x} $, the chance of receiving such an item $ k $ times in $ n $ kills can be calculated using the binomial distribution. The probability of receiving an item $ k $ times in $ n $ kills with a drop rate of $ \frac{1}{x} = p $ follows: $ \binom n k p^k(1-p)^{n-k} $ where $ \binom n k =\frac{n!}{k!(n-k)!} $ For finding the probability of obtaining an item at least once, rather than a specified number of times, we can drop the binomial coefficient and simplify the equation to: $ 1 - (1 - p)^n $ where $ (1-p)^n $ is the probability of not receiving the item in any of the $ n $ kills, and we take its complement. For example, it is known that the drop rate of the abyssal whip is $ \frac{1}{1024} \approx 0.000977 $. If we want to know the probability of receiving at least one abyssal whip in a task of 234 demons, we would plug into the equation: $ \begin{align} & 1 - (1 - 0.000977)^{234} \\ = & \ 1 - 0.999023^{234} \\ \approx & \ 1 - 0.795543 \\ \approx & \ 0.204457 \end{align} $ This gives us the answer: we have approximately a 20.45% chance of receiving an abyssal whip during this task. Distribution There are two basic types of monsters: taggable and untaggable. Taggable monsters will give their drops to the first player who attacked them. Most untaggable monsters will give their drops to the player who dealt the most damage.
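The binomial model above translates directly into code, and the abyssal whip example can be checked with it:

```python
from math import comb

def prob_exactly(k, n, p):
    """P(item drops exactly k times in n kills), binomial distribution."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def prob_at_least_once(n, p):
    """P(item drops at least once in n kills) = 1 - P(no drops at all)."""
    return 1 - (1 - p)**n

p_whip = 1 / 1024
chance = prob_at_least_once(234, p_whip)   # roughly a 20.4% chance
```

Computing with the exact $\frac{1}{1024}$ gives about 20.44%; the 20.45% in the worked example comes from rounding $p$ to 0.000977 before exponentiating.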
Some monsters, such as Vorago, have unique mechanics that distribute the drops with a more complex algorithm, while others, such as Araxxi and the Raids bosses, are looted with a coffer system, independently giving drops to all players who participated. For Ironman and Hardcore Ironman mode, players must deal the most damage to the monster to receive the drop, even if they tagged it first. Loot interface The loot tab is a way to quickly pick up a stack of items in one go. Items that are dropped by the player, or that appear for the player, will appear in this interface. Certain items, however, such as Pet droppings, can also appear on the interface despite not being dropped by the player. The most valuable items based on Grand Exchange price will appear at the top of the interface, followed by lesser-valued items. Untradeables follow their own assigned value that can be found using the Wealth evaluator. Area loot This picks up all items around the player in a 5x5 grid. Since this has the chance of exceeding the space in the loot tab, players wishing to pick up drops in areas filled with junk should either customise what to pick up or disable area looting/the loot tab. Disabled areas There are certain areas where the loot tab will not open even if enabled. The following areas have the loot interface disabled: Keyboard shortcuts While the loot interface is active: ESC: dismiss loot interface Spacebar: Loot All Shift + Spacebar: Loot custom Loot beams Since an update on 19 November 2013, valuable and various other rare drops will be highlighted by a beam of light when they're dropped. These beams will only shine for a short amount of time, however. If multiple items drop together and set off a loot beam, the displayed loot beam size will correspond to the highest-valued item within the dropped item stack. Drops with a Grand Exchange value (total value of given drops, not counting 100%-chance drops) of 1,000–1,000,000 coins or more may be given a loot beam.
A player may select the minimum Grand Exchange value of an item at which a loot beam is deployed, within the aforementioned threshold, by going into the Settings under Gameplay under Loot Settings. If the player does not select a minimum value, it will default to 500,000 until changed.
@Secret et al hows this for a video game? OE Cake! fluid dynamics simulator! have been looking for something like this for yrs! just discovered it wanna try it out! anyone heard of it? anyone else wanna do some serious research on it? think it could be used to experiment with solitons=D OE-Cake, OE-CAKE! or OE Cake is a 2D fluid physics sandbox which was used to demonstrate the Octave Engine fluid physics simulator created by Prometech Software Inc. It was one of the first engines with the ability to realistically process water and other materials in real-time. In the program, which acts as a physics-based paint program, users can insert objects and see them interact under the laws of physics. It has advanced fluid simulation, and support for gases, rigid objects, elastic reactions, friction, weight, pressure, textured particles, copy-and-paste, transparency, foreground a... @NeuroFuzzy awesome what have you done with it? how long have you been using it? it definitely could support solitons easily (because all you really need is to have some time dependence and discretized diffusion, right?) but I don't know if it's possible in either OE-cake or that dust game As far as I recall, being a long-term powder gamer myself, powder game does not really have a diffusion-like algorithm written into it. The liquids in powder game are sort of dots that move back and forth and are subjected to gravity @Secret I mean more along the lines of the fluid dynamics in that kind of game @Secret Like how in the dan-ball one air pressure looks continuous (I assume) @Secret You really just need a timer for particle extinction, and something that affects adjacent cells. Like maybe a rule for a particle that says: particles of type A turn into type B after 10 steps, particles of type B turn into type A if they are adjacent to type A. I would bet you get lots of cool reaction-diffusion-like patterns with that rule.
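The A/B rule @NeuroFuzzy sketches is easy to prototype as a cellular automaton. A minimal sketch below, where the 1D periodic grid and the per-cell age counter are my own assumptions — nothing here is taken from OE-Cake or Powder Game:

```python
def step(grid, age):
    # One synchronous update of the toy rule: an "A" cell becomes "B" once
    # its age reaches 10 steps; a "B" cell becomes "A" when an adjacent
    # cell is "A".  The grid is a 1D ring (periodic boundary).
    n = len(grid)
    new_grid, new_age = grid[:], age[:]
    for i in range(n):
        if grid[i] == "A":
            if age[i] >= 10:
                new_grid[i], new_age[i] = "B", 0   # A expires into B
            else:
                new_age[i] = age[i] + 1
        elif grid[i] == "B":
            if "A" in (grid[i - 1], grid[(i + 1) % n]):
                new_grid[i], new_age[i] = "A", 0   # B reactivates beside an A
    return new_grid, new_age

# A single A seed invades neighbouring B cells one cell per step, then the
# ageing rule kicks in -- a crude version of the excitable-media dynamics
# behind reaction-diffusion patterns.
grid, age = ["B", "B", "A", "B", "B"], [0] * 5
grid, age = step(grid, age)
```

Iterating this with the extinction timer produces travelling fronts of activation and recovery, which is the reaction-diffusion-like behaviour the chat message is betting on.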
(Those that don't understand cricket, please ignore this context, I will get to the physics...) England are playing Pakistan at Lord's and a decision has once again been overturned based on evidence from the 'snickometer'. (see over 1.4) It's always bothered me slightly that there seems to be a ... Abstract: Analyzing the data from the last replace-the-homework-policy question was inconclusive. So back to the drawing board, or really back to this question: what do we really mean when we vote to close questions as homework-like? As some/many/most people are aware, we are in the midst of a... Hi I am trying to understand the concept of dex and how to use it in calculations. The usual definition is that it is the order of magnitude, so $10^{0.1}$ is $0.1$ dex. I want to do a simple exercise of calculating the value of the RHS of Eqn 4 in this paper arxiv paper, the gammas are incompl... @ACuriousMind Guten Tag! :-) Dark Sun has also a lot of frightening characters. For example, Borys, the 30th-level dragon. Or different stages of the defiler/psionicist 20/20 -> dragon 30 transformation. It is only a tip, if you start to think on your next avatar :-) What is the maximum distance for eavesdropping on pure sound waves? And what kind of device do I need to use for eavesdropping? Actually, microphones with parabolic reflectors or laser-reflection listening devices are available on the market, but are there any other devices on the planet which would allow ... and endless whiteboards get doodled with boxes, grids circled in red marker and some scribbles The documentary then showed one of the bird's eye views of the farmlands (which pardon my sketchy drawing skills...) Most of the farmland is tiled into grids Here there are two distinct columns and rows of tiled farmlands to the left and top of the main grid.
They are the index arrays and they denote the range of indices of the tensor array In some tiles, there's a swirling mound of dirt, they represent components with nonzero curl and in others grass grew Two blue steel bars were visible laying across the grid, holding up a triangle pool of water Next in an interview, they mentioned that experimentally the process is quite simple. The tall guy is seen using a large crowbar to pry away a screw that held a road sign under a skyway, i.e. occasionally, mishaps can happen, such as too much force applied and the sign snapped in the middle. The boys will then be forced to take the broken sign to the nearest roadworks workshop to mend it At the end of the documentary, near a university lodge area I walked towards the boys and expressed interest in joining their project. They then said that you will be spending quite a bit of time on the theoretical side and doodling on whiteboards. They also ask about my recent trip to London and Belgium. Dream ends Reality check: I have been to London, but not Belgium Idea extraction: The tensor array mentioned in the dream is a multiindex object where each component can be tensors of different order Presumably one can formulate it (using an example of a 5th-order tensor) as follows: $$A^{\alpha\beta}{}_{\gamma\delta\epsilon}$$ and then allow the indices $\alpha,\beta$ to run from 0 to the size of the matrix representation of the whole array, while the indices $\gamma,\delta,\epsilon$ can be taken from a subset of the values which the $\alpha,\beta$ indices run over.
For example, to encode a patch of nonzero-curl vector field in this object, one might set $\gamma$ to be from the set $\{4,9\}$ and $\delta$ to be from $\{2,3\}$ However, even if the indices are taken to have certain values only, it is unclear whether this is of any use, since most tensor expressions have indices taken from a set of consecutive numbers rather than arbitrary integers @DavidZ in the recent meta post about the homework policy there is the following statement: > We want to make it sure because people want those questions closed. Evidence: people are closing them. If people are closing questions that have no valid reason for closure, we have bigger problems. This is an interesting statement. I wonder to what extent not having a homework close reason would simply force would-be close-voters to either edit the post, down-vote, or think more carefully about whether there is another more specific reason for closure, e.g. "unclear what you're asking". I'm not saying I think simply dropping the homework close reason and doing nothing else is a good idea. I did suggest that previously in chat, and as I recall there were good objections (which are echoed in @ACuriousMind's meta answer's comments). @DanielSank Mostly in a (probably vain) attempt to get @peterh to recognize that it's not a particularly helpful topic. @peterh That said, he used to be fairly active on physicsoverflow, so if you really pine for the opportunity to communicate with him, you can go on ahead there. But seriously, bringing it up, particularly in that way, is not all that constructive. @DanielSank No, the site mods could have caged him only on PSE, and only for a year. That he got. After that his cage was extended to a 10-year-long network-wide one; it couldn't be the result of the site mods. Only the CMs can do this, typically for network-wide bad deeds. @EmilioPisanty Yes, but I would have liked to talk to him here. @DanielSank I am only curious what he did. Maybe he attacked the whole network?
Or did he take a site-level conflict into the real world? As far as I know, network-wide bans happen for such things. @peterh That is pure fear-mongering. Unless you plan on going on extended campaigns to get yourself suspended, in which case I wish you speedy luck. Seriously, suspensions are never handed out without warning, and you will not be ten-year-banned out of the blue. Ron had very clear choices and a very clear picture of the consequences of his choices, and he made his decision. There is nothing more to see here, and bringing it up again (and particularly in such a dewy-eyed manner) is far from helpful. @EmilioPisanty Although it is already not about Ron Maimon, I can't see the meaning of "campaign" defined well enough here. And yes, it is a bit of a source of fear for me that maybe my behavior could also be construed as if "I were campaigning for my caging".
Speaker Dr Hoang Chuong LAM (Can Tho University) Description We prove the quenched central limit theorem and the law of large numbers for reversible random walks in a stationary random environment on $Z$. In this model, the conductivity of the edge $[k, k+1]$ is equal to $\alpha_{k} c(T^{k}\omega)$, where $\alpha_{k}$ is a positive number and $c$ is a positive measurable function on $\Omega$. Fixing $\omega \in \Omega$, we consider the Poisson equation $(P_{\omega}-I)f=\psi$, then use the pointwise ergodic theorem to treat the limit of solutions, and the limit theorems are then established by the convergence of moments. Depauw, J. and Derrien, J.-M. (2009). Variance limite d'une marche aléatoire réversible en milieu aléatoire sur ${Z}$. C. R. Acad. Sci. Paris, Ser. I. 347, 401-406. Lam, H.-C. (2014). Quenched central limit theorem for reversible random walks in random environment on ${Z}$. Journal of Applied Probability. 51, 1-14. Primary author Dr Hoang Chuong LAM (Can Tho University)
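The model in the abstract can be mimicked with a toy simulation (my own sketch, not from the talk): the conductances below are i.i.d. uniform stand-ins for $\alpha_k c(T^k\omega)$, and all function names are invented for illustration.

```python
import random
import statistics

def conductance(k, env_seed=12345):
    # Stand-in for alpha_k * c(T^k omega): a fixed ("quenched") positive
    # conductance for the edge [k, k+1], reproducible per edge
    return random.Random(env_seed + 1_000_003 * k).uniform(0.5, 1.5)

def walk(steps, walk_seed):
    # Reversible nearest-neighbour walk on Z: from site x, step right
    # with probability c(x) / (c(x-1) + c(x))
    rng = random.Random(walk_seed)
    x = 0
    for _ in range(steps):
        right, left = conductance(x), conductance(x - 1)
        x += 1 if rng.random() < right / (left + right) else -1
    return x

# For a fixed environment, the quenched CLT says X_n / sqrt(n) is
# asymptotically Gaussian; here we just collect an empirical sample.
positions = [walk(400, s) for s in range(200)]
spread = statistics.pstdev(positions)
```

The empirical spread of the endpoints grows like $\sqrt{n}$, in line with the central limit behaviour the talk establishes rigorously.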
A logical qubit is a very fluid concept. You could use physical qubits as logical qubits. Or, you can encode multiple physical qubits as a single logical qubit. The more physical qubits you use, the better the resistance to noise. So, I would suggest that your question isn't exactly the right one to ask, and a better question is whether something useful can ... What follows turned out to be a rather technical explanation, so I'll start with the main point: The qubit state can change the resonator's state, and the resonator's state can be easily measured only if there is a large difference in frequencies between the qubit and the resonator. Let's model a qubit as a two-level system and a resonator as a harmonic ... There is not anything that you can't do with U3, so ideally there is no reason for U1 and U2. Eventually, as the transpilers get better, we may remove them and just have U3 and CNOT. So why did we make U1 and U2? It is because of the hardware. The U1 is done using a frame change (see https://arxiv.org/abs/1612.00858), which means it is done in software (... I am going to try to give guesses that can make sense: More qubits does not mean better machines. They may be less noise-tolerant and have less connectivity between qubits. That is why, when you benchmark them (with or without error correction), you look first at the simplest implementations of state-of-the-art algorithms. Plus, you may change some calibrations ... At Xanadu, we're using integrated quantum photonics to build our photonic quantum computing chips. In this case, we have integrated chips containing waveguides --- these are coupled to lasers to generate input resource states, undergo manipulation on the chip, and then are measured via a variety of detectors available in quantum optics. These can include ...
Actually, after having researched the question over the last months, the two answers (one above and one below) are correct, but we can build upon them to get something more up to date. The first answer, however, relies on figures and data which are slightly obsolete, while the source is uncertain (it is impossible to know if the source is McKinsey or The ... So, to begin, I would point out that the 500 microsec T1 time is for a single qubit in isolation, while the GHZ results are on a 20-qubit device. This device has an average T1 time of around ~75 microsec. The GHZ results were done by Ken Wei from IBM, and will be published shortly. In short, the circuit is a standard GHZ building circuit, with a Hadamard ... There's a superconducting circuit element called the Josephson junction, which is roughly a nonlinear inductor. The inductance of a Josephson junction depends on current via the relation $$L(I) = \frac{L_0}{\sqrt{1 - (I/I_c)^2}}$$ where $L_0$ is the inductance of the junction with no bias current and $I_c$ is the so-called "critical current", which is the maximum ... When referring to the commercial quantum computers of both parties, it is that the two are based on different quantum principles. The D-Wave machine works via quantum annealing and is suited for optimization problems. The machine by IBM is a gate-based quantum computer, similar to how digital computers work at the elementary level. As the two quantum ... It's unlikely. And even if there is, they've not announced it publicly. Most of the private companies and startups in this area are still in stealth mode. This is the most complete list of quantum computing startups that I know of. Among the companies listed, Atom Computing may be working on diamond-based quantum computers, but they haven't released much ...
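The quoted inductance relation is easy to tabulate numerically; the following helper is an illustrative sketch of my own (the function name and the unit-normalized parameter values are not from the answer):

```python
import math

def josephson_inductance(I, L0, Ic):
    """Josephson junction inductance L(I) = L0 / sqrt(1 - (I/Ic)^2).

    L0 is the zero-bias inductance and Ic the critical current; the
    inductance diverges as I approaches Ic, which is the nonlinearity
    that makes the junction useful as a circuit element.
    """
    if abs(I) >= Ic:
        raise ValueError("bias current must satisfy |I| < Ic")
    return L0 / math.sqrt(1.0 - (I / Ic) ** 2)
```

For instance, at a bias of 60% of the critical current the inductance has grown by 25%: L(0.6 Ic) = L0 / sqrt(1 - 0.36) = 1.25 L0.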
By an example with a control qubit in superposition and the target in the $|0\rangle$ state: $$ \frac{|0\rangle + |1\rangle}{\sqrt{2}} |0\rangle = \frac{|0\rangle|0\rangle + |1\rangle |0\rangle}{\sqrt{2}}$$ Applying a CNOT will have the following result: $$ \frac{ CNOT(|0\rangle|0\rangle + |1\rangle |0\rangle)}{\sqrt{2}} = \frac{ CNOT(|0\rangle|0\rangle) + ... It does, unless you have a way to tune your Hamiltonian such that $\Delta$ becomes zero. Since a tunable Hamiltonian is something you usually want in a quantum computer implementation, this should not be a problem. If this term is non-switchable, it just means that the basis in which you are working is continuously rotating, and you have to keep track of ... You can look up work by Gil Kalai, who is a longstanding and outspoken critic of quantum computing (his most recent essay: Kalai, 2019). He often bases his view on assumptions that I entirely disagree with, but it's a refreshing reminder that certain ideas are taken for granted in the industry, namely that NISQ computers will yield practical applications. ... They are always rotating in the lab reference frame, but most quantum algorithms take a rotating reference frame to simplify things, so that a z rotation only happens when you want it to. The rotating reference frame spins along with the natural spinning rate of each qubit, so in general it can spin at different rates for different qubits. I think the subject matter of superconducting qubits is rather broad and diverse, making it challenging to accurately capture in a 'brief explanation'. With that said, this recent review (Krantz et al., Applied Physics Reviews 6, 021318 (2019)) - "A Quantum Engineer's Guide to Superconducting Qubits" (arXiv:1904.06560) from the MIT group may be a good ... Hope this late contribution won't be a meaningless contribution, but as mentioned in one of the comments above, by using D-Wave's version of NetworkX you can visualize the Pegasus network.
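The CNOT action in the example above can be checked numerically. This is my own sketch, assuming the standard basis ordering |00>, |01>, |10>, |11> with the first qubit as control:

```python
import numpy as np

# CNOT in the basis |00>, |01>, |10>, |11> (first qubit = control):
# flips the target when the control is |1>
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])
plus = (ket0 + ket1) / np.sqrt(2)       # (|0> + |1>)/sqrt(2)

state = np.kron(plus, ket0)             # (|00> + |10>)/sqrt(2)
bell = CNOT @ state                     # (|00> + |11>)/sqrt(2), a Bell state
```

The result is the maximally entangled state (|00> + |11>)/sqrt(2), exactly as the displayed computation concludes.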
I have attached a few images here of the Pegasus 2 (P2) and Pegasus 6 (P6) architectures using the D-Wave NetworkX. The reason that I find Pegasus interesting is that the ... In 1996, David DiVincenzo listed five key criteria to build a quantum computer: a quantum computer must be scalable; it must be possible to initialise the qubits; good qubits are needed, i.e., the quantum state cannot be lost; we need to have a universal set of quantum gates; and we need to be able to measure all qubits. Two additional criteria: the ability to ... This is certainly how theorists think of this being done. I don't know if there's an experimental reality to compare this to, whether they actually decompose it in terms of the eigenvectors or find some other terms to decompose it as. Just as an example of what I mean, let $$W=\left(\begin{array}{cccc}1 & 0 & 0 & 0 \\ 0 & 0 & 1 & ... The Quantum Volume is a benchmark for near-term, noisy quantum systems. Indeed, like other random unitary benchmarks, you need to be able to sample the ideal distribution. This distribution comes from classical simulations, so you're limited to about the ~40 or so qubit limit. However, the Quantum Volume itself was designed to benchmark not only the quantum ... Quantum volume is a bad metric for this purpose. For example, suppose you have a ten thousand by ten thousand grid of qubits with a gate error rate of 1 in one thousand. The quantum volume of this grid is basically 0, because if you pick two qubits at random, they will on average be more than one thousand steps apart. So an error will almost certainly occur ... The easiest thing is to talk about the algorithms for each architecture and the difference between physical and logical qubits. As far as I know, we do not know yet how to perform quantum error correction efficiently on an adiabatic machine. Most computations on these devices are just repeated lots and lots of times without much error correction. For the gate model ...
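The "more than one thousand steps apart" claim is easy to sanity-check with a small Monte Carlo estimate (my own sketch, not from the answer): for two uniformly random points on an N x N grid, the mean Manhattan distance is about 2N/3, far above one thousand for N = 10,000.

```python
import random

# Monte Carlo estimate of the mean Manhattan distance between two
# uniformly random qubits on an N x N grid (N = 10_000 as in the answer)
rng = random.Random(0)
N = 10_000
TRIALS = 20_000

total = 0
for _ in range(TRIALS):
    x1, y1 = rng.randrange(N), rng.randrange(N)
    x2, y2 = rng.randrange(N), rng.randrange(N)
    total += abs(x1 - x2) + abs(y1 - y2)

avg = total / TRIALS   # expected value is ~2N/3, i.e. around 6667 steps
```

With a gate error rate of 1 in 1000, a circuit spanning thousands of steps between random qubit pairs will almost certainly suffer an error, which is the point the answer is making.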
Parallel computation of the sum of two qubits. I wanted to experience parallel computation of the sum of two qubits: a superposition of 0 and "1 with phase -1", added to 1; I was inspired by Mithrandir24601's answer. The results are below. I hope my answer is within the context of what was asked. It shows how 1 is added to 1, and to 0, at the same time, ...
Search Now showing items 1-10 of 33 The ALICE Transition Radiation Detector: Construction, operation, and performance (Elsevier, 2018-02) The Transition Radiation Detector (TRD) was designed and built to enhance the capabilities of the ALICE detector at the Large Hadron Collider (LHC). While aimed at providing electron identification and triggering, the TRD ... Constraining the magnitude of the Chiral Magnetic Effect with Event Shape Engineering in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2018-02) In ultrarelativistic heavy-ion collisions, the event-by-event variation of the elliptic flow $v_2$ reflects fluctuations in the shape of the initial state of the system. This allows to select events with the same centrality ... First measurement of jet mass in Pb–Pb and p–Pb collisions at the LHC (Elsevier, 2018-01) This letter presents the first measurement of jet mass in Pb-Pb and p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV and 5.02 TeV, respectively. Both the jet energy and the jet mass are expected to be sensitive to jet ... First measurement of $\Xi_{\rm c}^0$ production in pp collisions at $\mathbf{\sqrt{s}}$ = 7 TeV (Elsevier, 2018-06) The production of the charm-strange baryon $\Xi_{\rm c}^0$ is measured for the first time at the LHC via its semileptonic decay into e$^+\Xi^-\nu_{\rm e}$ in pp collisions at $\sqrt{s}=7$ TeV with the ALICE detector. The ... D-meson azimuthal anisotropy in mid-central Pb-Pb collisions at $\mathbf{\sqrt{s_{\rm NN}}=5.02}$ TeV (American Physical Society, 2018-03) The azimuthal anisotropy coefficient $v_2$ of prompt D$^0$, D$^+$, D$^{*+}$ and D$_s^+$ mesons was measured in mid-central (30-50% centrality class) Pb-Pb collisions at a centre-of-mass energy per nucleon pair $\sqrt{s_{\rm ... 
Search for collectivity with azimuthal J/$\psi$-hadron correlations in high multiplicity p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 8.16 TeV (Elsevier, 2018-05) We present a measurement of azimuthal correlations between inclusive J/$\psi$ and charged hadrons in p-Pb collisions recorded with the ALICE detector at the CERN LHC. The J/$\psi$ are reconstructed at forward (p-going, ... Systematic studies of correlations between different order flow harmonics in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (American Physical Society, 2018-02) The correlations between event-by-event fluctuations of anisotropic flow harmonic amplitudes have been measured in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE detector at the LHC. The results are ... $\pi^0$ and $\eta$ meson production in proton-proton collisions at $\sqrt{s}=8$ TeV (Springer, 2018-03) An invariant differential cross section measurement of inclusive $\pi^{0}$ and $\eta$ meson production at mid-rapidity in pp collisions at $\sqrt{s}=8$ TeV was carried out by the ALICE experiment at the LHC. The spectra ... J/$\psi$ production as a function of charged-particle pseudorapidity density in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2018-01) We report measurements of the inclusive J/$\psi$ yield and average transverse momentum as a function of charged-particle pseudorapidity density ${\rm d}N_{\rm ch}/{\rm d}\eta$ in p-Pb collisions at $\sqrt{s_{\rm NN}}= 5.02$ ... Energy dependence and fluctuations of anisotropic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV (Springer Berlin Heidelberg, 2018-07-16) Measurements of anisotropic flow coefficients with two- and multi-particle cumulants for inclusive charged particles in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 and 2.76 TeV are reported in the pseudorapidity range |η| < 0.8 ...
Search Now showing items 1-1 of 1 Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
I’m at the Banff International Research Station this week for a conference on metric geometry. I’ve listened to several nice talks already but one that stood out for me was by Yevgeny Liokumovich on the problem of cutting a sphere in half. (It had, of course, a more official title!) Consider the sphere \(S^2\) with some Riemannian metric, scaled so that the total area is 1. Is there an upper bound to the length of a geodesic loop that divides the sphere into two disks of equal area? It seems plausible at first that the answer might be “yes”, but in fact it is “no”. To see the counterexample, think about balloon animals: specifically a “balloon starfish” that has three thin, cylindrical arms of length \(\ell\) emanating from a central core. Such a “starfish” can be constructed with area 1 and arbitrarily large \(\ell\), and the length of the shortest geodesic loop that cuts it in half is of order \(\ell\). The main point of the talk was to obtain a more delicate bound for the equal-division problem which involves both the area and the diameter of the given metric. But in the course of the proof there is a very pretty lemma that I had not seen before (but which apparently goes back to Gromov’s Filling Riemannian Manifolds). Theorem Given any Riemannian metric on the 2-sphere \(X\), with area \(A\) and diameter \(d\), and any \(\epsilon>0\), there exists a simple closed curve of length at most \(2d+\epsilon\) that divides the sphere into two disks both of which have area at least \({\frac13}A-\epsilon\). The proof goes by contradiction. Suppose not. Triangulate \(X\) with geodesic simplices of side \(<\epsilon\). Fix a base-point \(p\in X\). Now consider a 2-simplex \(s\) of the triangulation. Build three arcs, each of length \(\le d\), from the vertices of \(s\) to \(p\). The three loops, each formed by two of these arcs together with an edge of \(s\), have length \(<2d+\epsilon\); so, by assumption, each bounds a disk of area \(\le {\frac13}A-\epsilon\). 
The union of these three disks and the simplex \(s\) has area less than \(A\) so it misses a point, and thus lies in a contractible subset of \(X\). Thus we can find a map from the cone on \(s\) to \(X\) that fixes \(s\) and maps the vertex of the cone to \(p\). Do this construction inductively over the simplices of \(X\) to obtain a map from the cone \(cX \cong B^3\) to \(X\cong S^2\) which is the identity on the boundary, i.e. a retraction. This contradicts the Brouwer fixed-point theorem.
Using the Boussinesq Approximation for Natural Convection Today, we compare the Boussinesq approximation to the full Navier-Stokes equations for a natural convection problem. We also show you how to implement the Boussinesq approximation in the COMSOL Multiphysics software and discuss potential benefits of doing so. Application: Natural Convection in a Square Cavity For our example, we will use a model that couples the Navier-Stokes equations and the heat transfer equations to model natural convection in a square cavity with a heated wall. The temperature on the left and right walls is 293 K and 294 K, respectively. The top and bottom walls are insulated. The fluid is air and the length of the side is 10 cm. We will use this model to compare the computational cost of three different modeling approaches: Solving the full Navier-Stokes equations (Approach 1) Solving the full Navier-Stokes equations with pressure shift (Approach 2) Using the Boussinesq approximation with pressure shift (Approach 3) Each of these three approaches and their variables are defined here. In COMSOL Multiphysics, the model is solved with a stationary study using the Laminar Flow and Heat Transfer in Fluids interfaces and the Non-Isothermal Flow multiphysics coupling: While setting up the model, it is important to check whether the flow is laminar or turbulent. For a natural convection problem, this is done by calculating the Grashof number, Gr. For an ideal gas, it is defined as \mathrm{Gr} = \frac{g \Delta T L^3}{T_0 \nu^2}, where L is the side length, \Delta T is the wall temperature difference, T_0 is the reference temperature, and \nu is the kinematic viscosity. The Grashof number is the ratio of buoyancy to viscous forces. A value below 10^8 indicates that the flow is laminar, while a value above 10^9 indicates that the flow is turbulent. In this case, the Grashof number is around 1.5 \times \hspace{1pt} 10^5, meaning that the flow is laminar. Approach 1 When using the full Navier-Stokes equations, we set the buoyancy force to \rho \mathbf{g}: The buoyancy term is added using a volume force feature.
The terms nitf1.rho and g_const represent the temperature- and pressure-dependent density, \rho, and the gravitational acceleration, \mathbf{g}, respectively. Approach 2 When using the Navier-Stokes equations with pressure shift, we have to make three changes. First, we need to change the definition of the volume force to (\rho-\rho_0)\mathbf{g}, as such: The term rho0 refers to the reference density \rho_0. Next, we evaluate the reference density \rho_0 and the reference viscosity \mu from the material properties in a table of variables: Here, pA and T0 represent the reference pressure and temperature, respectively. The air viscosity is set to the constant \mu_{0}: Approach 3 Finally, when using the Boussinesq approximation, we need to set the buoyancy force to -\rho_0\frac{T-T_0}{T_0}\,\mathbf{g}: As with Approach 2, we also evaluate the reference density and viscosity from the material properties. A third and final step with Approach 3 is to set the fluid density to the constant reference density \rho_{0} (the Boussinesq approximation states that the density is constant except in the buoyancy term). Note: If your model includes a pressure boundary condition (open domain), set the pressure to the hydrostatic pressure -rho0*g_const*y for Approach 1 or to 0[Pa] for Approach 2 and Approach 3. The boundary conditions for models including gravitational forces are also discussed here. Results The resulting velocity magnitude and streamlines are nearly identical for all three approaches. The maximum temperature difference between Approaches 1 and 2 is less than 2 \times \hspace{1pt} 10^{-6} K, and the maximum temperature difference between Approaches 1 and 3 is around 5 \times \hspace{1pt} 10^{-4} K. The only thing that differs is the simulation time. Velocity magnitude and streamlines.
Because of the short running time of this 2D simulation (around 30 seconds), we look at the computational load by comparing the number of iterations it takes the solver to converge to the steady-state solution. The number of iterations, in this case, is nearly proportional to the CPU time. The table below compares the number of iterations across all three approaches. Number of iterations: Approach 1: 39; Approach 2: 55; Approach 3: 55. These results are very surprising! While the Boussinesq approximation is supposed to reduce the nonlinearity of the model and the number of iterations required for convergence, the full Navier-Stokes equations (39 iterations) can be solved faster than the Boussinesq approximation (55 iterations). We also note that the use of the Navier-Stokes equations with a pressure shift leads to the same number of iterations as the Boussinesq approximation. To better understand these results, we can run a second set of simulations after disabling the pseudo time-stepping algorithm. Pseudo time stepping is used for stabilizing the convergence toward steady state in transport problems. It relies on an adaptive feedback regulator that controls a Courant–Friedrichs–Lewy (CFL) number. Pseudo time stepping is often necessary to get the model to converge. In this particular case, however, it is not needed. Here’s a look at the COMSOL Multiphysics settings window for the default solver settings with pseudo time stepping: The following snapshot shows the updated solver settings without pseudo time stepping. We recommend that you always keep pseudo time stepping switched on, unless you feel comfortable tuning the solver settings. Note on the solver settings for natural convection: Due to the very strong coupling between the laminar flow and heat transfer physics in natural convection modeling, always use a fully coupled solver.
The COMSOL software automatically switches to a fully coupled solver when a volume force is added in the laminar flow physics, meaning that you are modeling natural convection. This second table shows the number of iterations without pseudo time stepping: Approach 1: 9; Approach 2: 7; Approach 3: 7. These results make more sense than the previous ones with pseudo time stepping, because Approach 3, the most linear problem, now converges faster than Approach 1. What is surprising is that Approach 2 and Approach 3 converge with the same number of iterations. Comparing these results with the first set of results, a speed-up of 8 (from 55 to 7 iterations) is observed for the third approach, the Boussinesq approximation. These results also indicate that the number of iterations in the first set of results depends not only on the linearity of the problem, but also on the tuning of the pseudo time-stepping algorithm. What Did We Learn? Here, we have discussed the implementation and benefits of the Boussinesq approximation as well as the pressure shift method. The results show that, for this particular model, there are no real benefits in terms of computational time to using the Boussinesq approximation, regardless of whether or not pseudo time stepping is enabled. This is generally the case, since the Boussinesq approximation is only valid when the nonlinearity is small. A much shorter computational time for the Boussinesq approximation with respect to the full Navier-Stokes equations would indicate that the Boussinesq approximation might not be valid. Because of the small speed-up observed with the Boussinesq approximation and the fact that it is not always easy to know a priori whether the Boussinesq approximation is valid, we generally recommend solving the full Navier-Stokes equations.
Implementing the pressure shift (Approaches 2 and 3), however, does avoid round-off errors and simplifies the implementation of time-dependent problems as well as models with open boundaries. This will be the subject of a future blog entry. Using Approach 3 (Boussinesq approximation with pressure shift) involves more implementation steps and does not reduce the number of iterations compared with Approach 2 (Navier-Stokes equations with pressure shift). The final simulation time might be slightly shorter for Approach 3, since it does not require the evaluation of the temperature- and pressure-dependent density and the temperature-dependent viscosity, but this speed-up might not be noticeable. The number of iterations is reduced by a factor of 4 to 8, depending on the chosen approach, by disabling the pseudo time-stepping algorithm. Please keep in mind, however, that most problems will not converge without pseudo time stepping or other load-ramping or nonlinearity-ramping strategies. You can set up and solve this model using the CFD Module or the Heat Transfer Module. If you have any questions about the models that I’ve presented here, contact our Technical Support team. If you are not yet a COMSOL Multiphysics user and would like to learn more about our software, please contact us via this form; we’d love to connect with you.
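As a quick sanity check on the laminar-flow claim earlier in the post, the Grashof number for this cavity can be recomputed from the stated setup (air at 293 K, a 1 K wall temperature difference, a 10 cm side). The kinematic viscosity below is an assumed textbook value for air, not a number taken from the post:

```python
import math

# Cavity data from the post; nu is an assumed textbook value for air near 293 K
g = 9.81        # m/s^2, gravitational acceleration
T0 = 293.0      # K, reference (cold wall) temperature
dT = 1.0        # K, temperature difference between the walls
L = 0.10        # m, side length of the cavity
nu = 1.5e-5     # m^2/s, kinematic viscosity of air (assumed)

beta = 1.0 / T0                      # 1/K, thermal expansion of an ideal gas
Gr = g * beta * dT * L**3 / nu**2    # Grashof number: buoyancy / viscous forces

laminar = Gr < 1e8                   # below ~1e8 the flow is laminar
```

The result comes out near 1.5e5, matching the value quoted in the post and confirming the laminar-flow regime.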
Let $\mu$ be the Mobius function. In his paper "Explicit estimates on several summatory functions involving the Moebius function", Olivier Ramaré proves the following effective bound: $$\left|\sum_{n\leq x} \frac{\mu(n)}{n}\right|\log{x}\leq 1/69,$$ when $x\geq 96955$. Unfortunately, I couldn't access his paper, since it is published by the American Mathematical Society, so I just took the result from the free abstract on the journal's web site. My question is the following: Is there an estimate of the sum $\sum_{n\leq x} \frac{\mu(n)}{n}$ in terms of $x$, that is, with a big-$O$ error term $(\sum_{n\leq x} \frac{\mu(n)}{n}=A(x)+O(B(x)))$? I guess it exists in Ramaré's paper, and from this estimate he derived the upper bound mentioned above. Many thanks. Actually Ramaré's paper is freely available at his Lille university page (link here).
The estimate that he uses to deduce his result is $$\bigg|\sum_{n\leq x} \frac{\mu(n)}{n}\bigg|\leq \bigg(\frac{3}{2} +o(1)\bigg) \exp \bigg(-\max_{x^{7/8}\leq t \leq x} \log \frac{2+|M(t)|}{t}\bigg)+O(x^{-1/4})$$ The sharpest one that works for all $x \geq 1$ seems to be (as of 2015) $$\bigg|\sum_{n\leq x} \frac{\mu(n)}{n}\bigg|\leq \frac{726}{(\log x)^2}$$ Peter Humphries has already mentioned a (much cheaper!) estimate with $A(x)=0$ in the comments.
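These explicit bounds are easy to probe numerically. The sketch below (my own; the sieve is a standard Moebius sieve, not code from the paper) evaluates the partial sum $\sum_{n\le x}\mu(n)/n$ at Ramaré's threshold $x = 96955$ and computes the two explicit bounds for comparison:

```python
import math

def mobius_upto(n):
    # Sieve the Moebius function mu(1), ..., mu(n):
    # flip the sign once per prime factor, zero out non-squarefree numbers
    mu = [1] * (n + 1)
    prime = [True] * (n + 1)
    for p in range(2, n + 1):
        if prime[p]:
            for m in range(p, n + 1, p):
                if m > p:
                    prime[m] = False
                mu[m] = -mu[m]
            for m in range(p * p, n + 1, p * p):
                mu[m] = 0
    return mu

x = 96955
mu = mobius_upto(x)
partial = sum(mu[n] / n for n in range(1, x + 1))

# Ramare's bound, rearranged: |sum| <= 1 / (69 log x) for x >= 96955
ramare = 1 / (69 * math.log(x))
# The weaker explicit bound valid for all x >= 1
cheap = 726 / math.log(x) ** 2
```

At this threshold the Ramaré bound is roughly 0.0013, dramatically sharper than the 726/(log x)^2 bound, which is about 5.5 here.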
The Weak-* Topology on X* Definition: Let $X$ be a normed linear space. The Weak-* Topology on $X^*$, denoted by $\sigma (X^*, X)$, is the topology induced by the subset $\hat{X} = \{ \hat{x} : x \in X \} \subseteq X^{**}$, where $\hat{x} : X^* \to \mathbb{R}$ is defined for each $f \in X^*$ by $\hat{x}(f) = f(x)$. Recall that for every $x \in X$ the function $\hat{x} : X^* \to \mathbb{R}$ defined for all $f \in X^*$ by $\hat{x}(f) = f(x)$ is a bounded linear functional on $X^*$ with $\| \hat{x} \| = \| x \|$. Therefore, $\hat{X}$ is a subset of $X^{**}$ and we can consider the topology on $X^*$ induced by $\hat{X}$, which we define to be the weak-* topology on $X^*$. A Local Base for f For each $f \in X^*$, a local base for $f$ in the weak-* topology on $X^*$ consists of sets of the form:(1) \begin{align} \quad \{ g \in X^* : |g(x_i) - f(x_i)| < \epsilon, \; 1 \leq i \leq n \} \end{align} where $\epsilon > 0$ and $\{ x_1, x_2, ..., x_n \} \subseteq X$ is finite. Weak-* Convergence in X* Definition: Let $X$ be a normed linear space. A sequence of bounded linear functionals $(f_n)$ in $X^*$ is said to Weak-* Converge to a bounded linear functional $f \in X^*$ if for all $\hat{x} \in \hat{X}$, $\lim_{n \to \infty} \hat{x} (f_n) = \hat{x}(f)$. Equivalently, $(f_n)$ weak-* converges to $f$ if $\lim_{n \to \infty} f_n(x) = f(x)$ for each $x \in X$. Note that a sequence of bounded linear functionals $(f_n)$ weak-* converges to $f \in X^*$ if and only if $(f_n)$ converges pointwise to $f$.
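A concrete illustration of weak-* convergence without norm convergence (my own example, not from the notes above): take $X = c_0$, so that $X^* = \ell^1$, and let $f_n = e_n$ be the $n$-th coordinate functional. Then $f_n(x) = x_n \to 0$ for every $x \in c_0$, so $(f_n)$ weak-* converges to $0$, even though $\| f_n \| = 1$ for all $n$. A small numerical sketch:

```python
# X = c_0 (sequences tending to 0, sup norm); X* = l^1.
# f_n = e_n, the n-th coordinate functional: f_n(x) = x(n).
def f(n, x):
    return x(n)

# A sample element of c_0: x(k) = 1/(k+1)
def x(k):
    return 1.0 / (k + 1)

# f_n(x) -> 0 as n -> infinity, witnessing weak-* convergence to 0,
# while each e_n has l^1-norm 1, so (f_n) does not converge in norm.
values = [f(n, x) for n in (1, 10, 100, 1000)]
```

The pairings $f_n(x)$ shrink toward $0$ along every fixed $x \in c_0$, which is exactly the pointwise convergence criterion stated above.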
Earlier this year, a colleague of mine sent me an email on Benford’s Law. He had run across it somewhere and was fascinated by it. It seems counterintuitive that small digits would occur more frequently in the leading digits of arbitrary numerical data. One is tempted to think that arbitrary data would be made up of arbitrary digits, but that turns out not to be the case. It’s a genuine numerical phenomenon, and below I have provided a couple of ways to explain it. I point out that, ultimately, this law results from the notation that we use to represent real values. Upon learning about Benford’s Law, my colleague decided to put it to a test. So he grabbed an ENDF file, endf66a, and pulled as many values as he could from it. (ENDF stands for “Evaluated Nuclear Data File.” It is a large database of nuclear data, such as cross-sections of various nuclides. For our purposes here, it is a large collection of real-world numerical data.) He collected the leading digit in the mantissa of each floating-point number in the file and examined the frequency of occurrence of each digit. The plot of the results that he produced is shown below. After seeing his empirical verification of this law, I tried to explain why this law works in a couple of ways. The first way is to consider the population of data, and here we’re talking about values that span multiple orders of magnitude. For example, if we were talking about lengths, the list could include values measured in centimeters, meters, and kilometers. Now for all of these units to be represented in our population, we’d need the number of values expressed in centimeters to be roughly equivalent to the number of values expressed in kilometers. Naturally, such a population wouldn’t even come close to being distributed uniformly over the range of values—there are simply too many centimeters in a kilometer to sample from them in the same way that one would sample over centimeters in one meter.
Mathematically, this means that, if \(f(x)\) is the probability density function (p.d.f.) of our population, then \[ \int_{10\,\text{cm}}^{1\,\text{m}} f(x)\,dx \approx \int_{100\,\text{m}}^{1\,\text{km}} f(x)\,dx \] which means that \[ \bar f_{(1\,\text{m})} \approx 1000\times\bar f_{(1\,\text{km})} \] where \(\bar f_{(a)}\) is the average value of the p.d.f. over the neighborhood where \(x \approx a\). Therefore, it makes sense to look at the value of \(f(x) \cdot x\) plotted versus \(x\) on a semi-log scale, keeping in mind that the probability of \(X\) (a random value drawn from the population) being between \(x_1\) and \(x_2\) is \[ P(x_1 < X < x_2) = \ln(10) \int_{\log_{10}(x_1)}^{\log_{10}(x_2)} f(10^\xi)\cdot 10^\xi\,d\xi \] That is, when \(f(x) \cdot x\) is plotted versus \(x\), with a logarithmic scale for \(x\), the area under the curve represents the relative probability of \(X\) being in a particular region. (\(\xi\) gives the linear distance along the logarithmic scale: \(\xi = \log_{10}x\).) A (made-up) example of such a distribution is shown below. Here the regions where the leading digit would be 1 are marked off. Compare this to the same figure with the regions where the leading digit would be 5 highlighted. As we can see, the areas marked off in the first figure are significantly larger than the areas marked off in the second, which means that 1 is more likely than 5 to show up as the first digit of a number that is randomly drawn from this distribution, and this is the case for many collections of numbers that span multiple scales. But perhaps someone objects to the p.d.f. that I used as an example. OK. Then consider this. To simplify the situation, let’s consider only integers and let’s look at the range of integers from 0 to 9. If I have a population of these integers that is uniformly distributed, then any random variable that I produce is as likely to be one digit as another.
But what happens if I double my range of possibilities by expanding to the right? Now, I’m looking at the range of numbers from 0 to 19. Once again, if the distribution is uniform, I’m equally likely to draw any of the numbers, but now the probability of drawing a number with 1 as the leading digit has changed from 10% to 55%! Over half of the time I’m going to get a number that starts with 1. This example is contrived, of course, but it readily generalizes. For example, consider what happens if I expand the range to extend from 0 to 49. Although the probability of getting a number that begins with a 1 is no longer 55%, it’s still larger than getting a number that begins with a 5, 6, 7, 8, or 9. This phenomenon is an artifact of the way we represent real values, not the real values themselves. To see this, it is instructive to consider just one number—for example, the fine-structure constant:\[ \alpha \approx 7.297\times 10^{-3} \]This is a physical constant. It’s dimensionless. There is no mathematical basis for this constant. As Richard Feynman once wrote, It’s one of the greatest damn mysteries of physics: a magic number that comes to us with no understanding by man. You might say the “hand of God” wrote that number, and “we don’t know how He pushed his pencil.” So this is about as arbitrary a number as one can find. It happens to begin with a 7, not a 1, but that’s just because we use a number system with ten digits. (Ignore, for the moment, that its reciprocal, \(\alpha^{-1} \approx 137.036\), an equally arbitrary number, does begin with a 1.) The number is what it is, regardless of how we write it. We can think of it as a single point on a line that represents all real numbers (the \(x\)-axis of the complex plane). What digits we use to write that number depend on where we lay down the grid lines that denote 1, 2, 3, and so on.
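The integer example is easy to check by brute force. A small sketch (my own helper, not part of the original argument) that counts leading digits over a uniform integer range:

```python
def leading_digit_prob(upper, digit):
    """P(first digit == digit) when drawing uniformly from 0..upper."""
    nums = range(upper + 1)
    hits = sum(1 for n in nums if str(n)[0] == str(digit))
    return hits / len(nums)

# 0..19: eleven of the twenty numbers (1, 10..19) start with 1.
print(leading_digit_prob(19, 1))                          # 0.55
# 0..49: a leading 1 is still more likely than a leading 5.
print(leading_digit_prob(49, 1), leading_digit_prob(49, 5))
```

The same counting argument works for any upper bound that is not an exact power of the base.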
If the real values are scaled logarithmically, this line looks like the following: The point that is \(\alpha\) is also shown above and falls between the ticks for 7 and 8, resulting in a number that begins with 7. But what happens when a different base (number of digits) is used to represent the same number? In octal (base 8, which is often used in computer science, because \(8 = 2^3\)), the real line looks like the following. (Keep in mind that now the octal “10” is really an “8” in decimal notation.) In this system of numbers, the numerical representation of \(\alpha\) begins with a 3. If we double the number of digits to 16 (hexadecimal), we find the following: The leading digit is now 1. It is important to note that the real line itself and all of the real values it represents, including \(\alpha\), haven’t changed. They are the same in all three diagrams. By changing the number of digits in our number system, we change only where the grid lines (tick marks) are located. All of these number systems agree on the location of 1 (and zero and infinity), but everything else changes. Below, I have tabulated the value of \(\alpha\) for number systems with 3 to 16 digits (base 3 to base 16); note that each entry's exponent is also written in the indicated base:

Base  \(\alpha\)
 3    1.210e-12
 4    1.313e-10
 5    4.240e-4
 6    1.324e-3
 7    2.334e-3
 8    3.571e-3
 9    5.278e-3
10    7.297e-3
11    9.793e-3
12    1.074e-2
13    1.305e-2
14    1.605e-2
15    1.996e-2
16    1.DE4e-2

In 8 out of the 14 number schemes (57%), the representation of \(\alpha\) begins with a 1. This is not surprising when you think about it, because although the locations of the grid lines change, the structure of the grid lines remains similar. The space along the (logarithmically scaled) real line where a 1 is the leading digit is always the largest space. Therefore, 1 is the most likely leading digit, regardless of the base.
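The leading digits in the table can be checked mechanically. Here is a sketch (my helper, not from the original post) that rescales a value's mantissa into \([1, b)\) and reads off its integer part:

```python
import math

def leading_digit(x, base):
    """Leading digit of |x| written in the given base."""
    x = abs(x)
    e = math.floor(math.log(x, base))  # x = m * base**e with 1 <= m < base
    return int(x / base**e)

alpha = 7.297e-3
digits = {b: leading_digit(alpha, b) for b in range(3, 17)}
ones = sum(1 for d in digits.values() if d == 1)
print(digits)
print(ones)  # 8 of the 14 bases give a leading 1
```

For example, in base 8 the mantissa is \(0.007297 \times 8^3 \approx 3.74\), so the leading octal digit is 3, in agreement with the table.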
For a truly arbitrary number, chosen without any restrictions, it is not difficult to predict that the likelihood of the digit \(n\) being the first digit is\[ P(n) = \log_b(n+1) - \log_b(n) \] where \(b\) is the base of the number system being used. This can be deduced geometrically from the figures above. As long as one uses a positional notation system for numbers, consisting of a series of symbols in which each successive symbol in the series constitutes a value that is \(b\) times smaller than the symbol before it (where \(b\) is the number of symbols used), the symbol denoting the smallest (non-zero) value (in our case, 1) will be the most likely symbol to appear first in the series.
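Numerically, the predicted distribution is one line of code. A quick sketch evaluating \(P(n)\) for a given base:

```python
import math

def benford(base=10):
    """First-digit probabilities P(n) = log_b(n+1) - log_b(n)."""
    return {n: math.log(n + 1, base) - math.log(n, base)
            for n in range(1, base)}

p = benford(10)
# P(1) = log10(2), about 0.301; the terms telescope and sum to 1,
# and the probabilities decrease monotonically from digit 1 to digit b-1.
print(p[1])
```

The telescoping also shows why the result is base-independent in character: digit 1 always claims the largest slice of the logarithmic scale.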
In signal processing, cross-correlation is a measure of similarity of two waveforms as a function of a time-lag applied to one of them. This is also known as a sliding dot product or sliding inner-product. It is commonly used for searching a long signal for a shorter, known feature. It has applications in pattern recognition, single particle analysis, electron tomographic averaging, cryptanalysis, and neurophysiology. For continuous functions f and g, the cross-correlation is defined as: (f \star g)(t)\ \stackrel{\mathrm{def}}{=} \int_{-\infty}^{\infty} f^*(\tau)\ g(\tau+t)\,d\tau,whe... That seems like what I need to do, but I don't know how to actually implement it... how wide of a time window is needed for the Y_{t+\tau}? And how on earth do I load all that data at once without it taking forever? And is there a better or other way to see if shear strain does cause temperature increase, potentially delayed in time? Link to the question: Learning roadmap for picking up enough mathematical know-how in order to model "shape", "form" and "material properties"? Alternatively, where could I go in order to have such a question answered? @tpg2114 For reducing data points when calculating the time correlation, you can run two copies of exactly the same simulation in parallel, separated by the time lag dt. Then there is no need to store all snapshots and spatial points. @DavidZ I wasn't trying to justify its existence here, just merely pointing out that because there were some numerics questions posted here, some people might think it okay to post more. I still think marking it as a duplicate is a good idea, then probably an historical lock on the others (maybe with a warning that questions like these belong on Comp Sci?)
The x axis is the index in the array -- so I have 200 time series Each one is equally spaced, 1e-9 seconds apart The black line is \frac{d T}{d t} and doesn't have an axis -- I don't care what the values are The solid blue line is the abs(shear strain) and is valued on the right axis The dashed blue line is the result from scipy.signal.correlate and is valued on the left axis So what I don't understand: 1) Why is the correlation value negative when they look pretty positively correlated to me? 2) Why is the result from the correlation function 400 time steps long? 3) How do I find the lead/lag between the signals? Wikipedia says the argmin or argmax of the result will tell me that, but I don't know how. Because I don't know how the result is indexed in time. Related: Why don't we just ban homework altogether? Banning homework: vote and documentation. We're having some more recent discussions on the homework tag. A month ago, there was a flurry of activity involving a tightening up of the policy. Unfortunately, I was really busy after th... So, things we need to decide (but not necessarily today): (1) do we implement John Rennie's suggestion of having the mods not close homework questions for a month (2) do we reword the homework policy, and how (3) do we get rid of the tag. I think (1) would be a decent option if we had >5 3k+ voters online at any one time to do the small-time moderating.
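For what it's worth, questions 2) and 3) can be answered with a small sketch: the "full" cross-correlation of two length-n signals has 2n - 1 points (hence the ~400-step result for 200 samples), one per possible lag, and the argmax picks out the best-aligned lag. This uses plain NumPy rather than scipy.signal.correlate, and the sign convention of the recovered lag should always be checked against a known shift, as below:

```python
import numpy as np

def best_lag(x, y):
    """Lag at which the (mean-removed) cross-correlation of x and y peaks.

    The 'full' correlation has 2n - 1 entries, one per lag in
    [-(n-1), n-1]; with this convention a negative result means
    y is a delayed copy of x.
    """
    n = len(x)
    corr = np.correlate(x - x.mean(), y - y.mean(), mode='full')
    lags = np.arange(-(n - 1), n)
    return lags[np.argmax(corr)]

# Sanity check with a known shift: y is x delayed by 10 samples.
x = np.zeros(200); x[50] = 1.0
y = np.zeros(200); y[60] = 1.0
print(len(np.correlate(x, y, mode='full')))  # 399, i.e. 2*200 - 1
print(best_lag(x, y))                        # -10
```

Subtracting the means before correlating also addresses question 1): a raw sliding dot product can come out negative or positive depending on the signals' offsets, whereas the mean-removed version behaves like a covariance.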
Between the HW being posted and (finally) being closed, there's usually some <1k poster who answers the question It'd be better if we could do it quick enough that no answers get posted until the question is clarified to satisfy the current HW policy For the SHO, our teacher told us to scale$$p\rightarrow \sqrt{m\omega\hbar} ~p$$$$x\rightarrow \sqrt{\frac{\hbar}{m\omega}}~x$$And then define the following$$K_1=\frac 14 (p^2-q^2)$$$$K_2=\frac 14 (pq+qp)$$$$J_3=\frac{H}{2\hbar\omega}=\frac 14(p^2+q^2)$$The first part is to show that$$Q \... Okay. I guess we'll have to see what people say but my guess is the unclear part is what constitutes homework itself. We've had discussions where some people equate it to the level of the question and not the content, or where "where is my mistake in the math" is okay if it's advanced topics but not for mechanics Part of my motivation for wanting to write a revised homework policy is to make explicit that any question asking "Where did I go wrong?" or "Is this the right equation to use?" (without further clarification) or "Any feedback would be appreciated" is not okay @jinawee oh, that I don't think will happen. In any case that would be an indication that homework is a meta tag, i.e. a tag that we shouldn't have. So anyway, I think suggestions for things that need to be clarified -- what is homework and what is "conceptual." Ie. 
is it conceptual to be stuck when deriving the distribution of microstates cause somebody doesn't know what Stirling's Approximation is Some have argued that is on topic even though there's nothing really physical about it just because it's 'graduate level' Others would argue it's not on topic because it's not conceptual How can one prove that$$ \operatorname{Tr} \log \cal{A} =\int_{\epsilon}^\infty \frac{\mathrm{d}s}{s} \operatorname{Tr}e^{-s \mathcal{A}},$$for a sufficiently well-behaved operator $\cal{A}?$How (mathematically) rigorous is the expression?I'm looking at the $d=2$ Euclidean case, as discuss... I've noticed that there is a remarkable difference between me in a selfie and me in the mirror. Left-right reversal might be part of it, but I wonder what is the r-e-a-l reason. Too bad the question got closed. And what about selfies in the mirror? (I didn't try yet.) @KyleKanos @jinawee @DavidZ @tpg2114 So my take is that we should probably do the "mods only 5th vote"-- I've already been doing that for a while, except for that occasional time when I just wipe the queue clean. Additionally, what we can do instead is go through the closed questions and delete the homework ones as quickly as possible, as mods. Or maybe that can be a second step. If we can reduce visibility of HW, then the tag becomes less of a bone of contention @jinawee I think if someone asks, "How do I do Jackson 11.26," it certainly should be marked as homework. But if someone asks, say, "How is source theory different from qft?" it certainly shouldn't be marked as Homework @Dilaton because that's talking about the tag. And like I said, everyone has a different meaning for the tag, so we'll have to phase it out. There's no need for it if we are able to swiftly handle the main page closeable homework clutter. @Dilaton also, have a look at the topvoted answers on both. Afternoon folks. 
I tend to ask questions about perturbation methods and asymptotic expansions that arise in my work over on Math.SE, but most of those folks aren't too interested in these kinds of approximate questions. Would posts like this be on topic at Physics.SE? (my initial feeling is no because it's really a math question, but I figured I'd ask anyway) @DavidZ Ya I figured as much. Thanks for the typo catch. Do you know of any other place for questions like this? I spend a lot of time at math.SE and they're really mostly interested in either high-level pure math or recreational math (limits, series, integrals, etc). There doesn't seem to be a good place for the approximate and applied techniques I tend to rely on. hm... I guess you could check at Computational Science. I wouldn't necessarily expect it to be on topic there either, since that's mostly numerical methods and stuff about scientific software, but it's worth looking into at least. Or... to be honest, if you were to rephrase your question in a way that makes clear how it's about physics, it might actually be okay on this site. There's a fine line between math and theoretical physics sometimes. MO is for research-level mathematics, not "how do I compute X" user54412 @KevinDriscoll You could maybe reword to push that question in the direction of another site, but imo as worded it falls squarely in the domain of math.SE - it's just a shame they don't give that kind of question as much attention as, say, explaining why 7 is the only prime followed by a cube @ChrisWhite As I understand it, KITP wants big names in the field who will promote crazy ideas with the intent of getting someone else to develop their idea into a reasonable solution (c.f., Hawking's recent paper)
Wronskian Determinants of Two Functions We are going to look more into second order linear homogeneous differential equations, but before we do, we need to first learn about a type of determinant known as a Wronskian Determinant which we define below. Definition: Let $f$ and $g$ be two differentiable functions. Then the Wronskian Determinant of $f$ and $g$ is the $2 \times 2$ determinant $W(f, g) = \begin{vmatrix} f(x) & g(x) \\ f'(x) & g'(x) \end{vmatrix} = f(x)g'(x) - f'(x)g(x)$. Sometimes the term "Wronskian" by itself is used to mean the same thing as "Wronskian Determinant". Furthermore, sometimes we can just write "$W$", or "$W(x)$" instead of $W(f, g)$ to represent the Wronskian of $f$ and $g$. Let's look at some examples of computing the Wronskian determinant of two differentiable functions. Example 1 Determine the Wronskian of the functions $f(x) = x^2$ and $g(x) = 3x^2$. For what values of $x$ is the Wronskian equal to zero? We note that $f$ and $g$ are both differentiable functions and that $f'(x) = 2x$ and $g'(x) = 6x$. Therefore the Wronskian of $f$ and $g$ is: $W(f, g) = \begin{vmatrix} x^2 & 3x^2 \\ 2x & 6x \end{vmatrix} = x^2 \cdot 6x - 2x \cdot 3x^2 = 6x^3 - 6x^3 = 0$. Therefore the Wronskian of $f$ and $g$ is equal to zero for all $x \in \mathbb{R}$. Example 2 Determine the Wronskian of the functions $f(x) = e^x \sin x$ and $g(x) = e^x \cos x$. For what values of $x$ is the Wronskian equal to zero? We note that $f$ and $g$ are both differentiable functions and that $f'(x) = e^x \sin x + e^x \cos x$ and $g'(x) = e^x \cos x - e^x \sin x$. Therefore the Wronskian of $f$ and $g$ is: $W(f, g) = e^x \sin x (e^x \cos x - e^x \sin x) - (e^x \sin x + e^x \cos x) e^x \cos x = e^{2x}(\sin x \cos x - \sin^2 x - \sin x \cos x - \cos^2 x) = -e^{2x}$. Note that $-e^{2x} < 0$ for all $x \in \mathbb{R}$ so the Wronskian of $f$ and $g$ is zero nowhere.
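As a quick numerical sanity check (my own helper, with the derivatives supplied by hand), the Wronskian of Example 2 should evaluate to $-e^{2x}$ at any point:

```python
import math

def wronskian(f, fp, g, gp, x):
    """W(f, g)(x) = f(x) g'(x) - f'(x) g(x), with derivatives given explicitly."""
    return f(x) * gp(x) - fp(x) * g(x)

# Example 2: f = e^x sin x, g = e^x cos x, so W = -e^(2x).
f  = lambda x: math.exp(x) * math.sin(x)
fp = lambda x: math.exp(x) * (math.sin(x) + math.cos(x))
g  = lambda x: math.exp(x) * math.cos(x)
gp = lambda x: math.exp(x) * (math.cos(x) - math.sin(x))

print(wronskian(f, fp, g, gp, 0.0))  # -1.0, which equals -e^0
```

The same helper confirms Example 1: with $f(x) = x^2$ and $g(x) = 3x^2$ the Wronskian vanishes at every point.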
Table of Contents Generalizing Continuity to Maps on Topological Spaces The reader should be somewhat familiar with the definition of continuity of a function $f$ mapping $\mathbb{R}$ into itself. Recall that $f$ is said to be continuous at $a \in \mathbb{R}$ if for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $\mid x - a \mid < \delta$ then $\mid f(x) - f(a) \mid < \epsilon$. Our aim is to generalize this concept of continuity of real-variable functions to functions/maps $f : X \to Y$ where $X$ and $Y$ are topological spaces. With the definition of continuity of a function $f$ mapping $\mathbb{R}$ into itself, we see an equivalent definition for the continuity of $f$ at $a$ is that for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $x \in (a - \delta, a + \delta)$ then $f(x) \in (f(a) - \epsilon, f(a) + \epsilon)$. The following theorem will make the notion of continuity at a point $a \in \mathbb{R}$ a little more abstract in preparing us for the definition of a function $f : X \to Y$ on topological spaces to be continuous at a point. Theorem 1: Consider the set $\mathbb{R}$ with the usual topology $\tau$ of open intervals on $\mathbb{R}$. Let $f$ be a function mapping $\mathbb{R}$ into itself. Then $f$ is continuous at a point $a \in \mathbb{R}$ if and only if for each open set $V$ containing $f(a)$ there exists an open set $U$ containing $a$ such that $f(U) \subseteq V$. Proof: $\Rightarrow$ Suppose that $f$ is continuous at a point $a \in \mathbb{R}$. Then for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $x \in (a - \delta, a + \delta)$ then $f(x) \in (f(a) - \epsilon, f(a) + \epsilon)$. Now let $V \in \tau$ be such that $f(a) \in V$. Provided that $V$ is not the empty set, we have that $V$ will either be an open interval containing $f(a)$ or a union of open intervals - one of which contains $f(a)$.
Hence let $(c, d)$ be the interval such that: $f(a) \in (c, d) \subseteq V$. Since $f(a) \in (c, d)$ we have that $c < f(a) < d$ so $f(a) - c, d - f(a) > 0$. Let $\epsilon$ be defined as: $\epsilon = \min \{ f(a) - c, d - f(a) \} > 0$. Then we have that: $(f(a) - \epsilon, f(a) + \epsilon) \subseteq (c, d) \subseteq V$. Since $f$ is continuous we have that for this $\epsilon$ there exists a $\delta > 0$ such that for all $x \in (a - \delta, a + \delta)$ we have that $f(x) \in (f(a) - \epsilon, f(a) + \epsilon)$. Take $U = (a - \delta, a + \delta)$. Then: $f(U) \subseteq (f(a) - \epsilon, f(a) + \epsilon) \subseteq V$. $\Leftarrow$ Suppose that for $a \in \mathbb{R}$ we have that every open set $V$ containing $f(a)$ is such that there exists an open set $U$ containing $a$ such that $f(U) \subseteq V$. For some $\epsilon > 0$ let $V = (f(a) - \epsilon, f(a) + \epsilon)$. Then there exists an open set $U$ containing $a$ such that $f(U) \subseteq V$. Since $U$ is a nonempty open set, $U$ must contain an interval containing $a$, say: $a \in (c, d) \subseteq U$. So $a \in (c, d)$ tells us that $c < a < d$. Therefore $a - c, d - a > 0$. Define: $\delta = \min \{ a - c, d - a \} > 0$. Hence we see that: $(a - \delta, a + \delta) \subseteq (c, d) \subseteq U$, so $f((a - \delta, a + \delta)) \subseteq f(U) \subseteq V = (f(a) - \epsilon, f(a) + \epsilon)$. So, for all $\epsilon > 0$ there exists a $\delta > 0$ such that if $x \in (a - \delta, a + \delta)$ then $f(x) \in (f(a) - \epsilon, f(a) + \epsilon)$, so $f$ is continuous at $a \in \mathbb{R}$. $\blacksquare$