Show that the first excited state of the 8-Be nucleus fits the experimental value of the rotational band, $E(2^+) = 92\ \text{keV}$ (this is the first excited state, which lies 92 keV above the ground state). To do so, model 8-Be as two alpha particles $4.5\ \text{fm}$ apart rotating about their common center of mass. The method I used is: 1) Get the moment of inertia of the system. For two particles rotating about their center of mass (in the same plane), one gets: $$I = m_1 r_1^2 + m_2 r_2^2 = 2m_{\alpha} (d/2)^2 = 6.728 \times 10^{-56}\ \text{kg m}^2$$ where: $$m_{\alpha} = 6.645 \times 10^{-27}\ \text{kg}$$ $$d = 4.5 \times 10^{-15}\ \text{m}$$ I've checked this value and it's correct. 2) Verify the experimental value for the rotational band by applying the rotational energy formula: $$E = J (J + 1) \frac{\hbar^2}{2I}$$ OK, let's first get what's called the "characteristic rotational energy", $\frac{\hbar^2}{2I}$: $$\frac{\hbar^2}{2I} = 8.256 \times 10^{-14}\ \text{J} = 0.516\ \text{MeV}$$ where: $$\hbar = 1.054 \times 10^{-34}\ \text{J s}$$ $$1\ \text{eV} = 1.6 \times 10^{-19}\ \text{J}$$ The $2^+$ band has $J=2$ associated with it, so it's just a matter of plugging numbers in: $$E = J (J + 1) \frac{\hbar^2}{2I} = 3096\ \text{keV}$$ which is way off from $92\ \text{keV}$. Where has my method gone wrong? Maybe the original question provided an incorrect experimental result. PS: For anyone interested in rotational energy of nuclei, this is a good video to check: https://www.youtube.com/watch?v=rwdBnwznt3s
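For anyone who wants to reproduce the arithmetic, here is a minimal Python check using the same constants quoted above (nothing new physically, just the plug-in step):

hbar = 1.054e-34       # J s
m_alpha = 6.645e-27    # kg
d = 4.5e-15            # m
eV = 1.6e-19           # J per eV

I = 2 * m_alpha * (d / 2) ** 2      # moment of inertia of the two-alpha rotor, kg m^2
char_E = hbar ** 2 / (2 * I)        # characteristic rotational energy, J
J = 2
E = J * (J + 1) * char_E            # rigid-rotor E(2+)

print(I)                            # ~6.73e-56 kg m^2
print(char_E / eV / 1e6)            # ~0.516 MeV
print(E / eV / 1e3)                 # ~3096 keV, nowhere near 92 keV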
I am reading a paper Lie superbialgebras and poisson-lie supergroups and trying to figure out how to compute a super Poisson bracket from a super $r$-matrix. Let $G$ be a Lie supergroup and $\mathfrak{g}$ its Lie superalgebra. The formula (3) on page 158 of the paper Lie superbialgebras and poisson-lie supergroups is \begin{align} \{ \phi, \psi \} = \sum_{\mu, \nu \in B} (-1)^{|\phi||\nu|} r^{\mu \nu}( R_{\mu} \phi R_{\nu} \psi - L_{\mu} \phi L_{\nu} \psi ), \end{align} where $B$ is a homogeneous basis of $\mathfrak{g}$, $r = \sum_{\mu, \nu} r^{\mu \nu} \mu \otimes \nu$. This is the super Poisson bracket of $\mathcal{O}(G)$ which comes from $r$. This formula is very similar to the formula in the end of page 60 of a guide to quantum groups by Chari and Pressley. Has the following formula \begin{align} \{T \overset{\otimes}{,} T\} = [T \otimes T, r] \end{align} on page 61 of "a guide to quantum groups" been translated to the super case? Are there some references about this? Thank you very much. This post imported from StackExchange MathOverflow at 2016-10-02 10:49 (UTC), posted by SE-user Jianrong Li
Ex.7.2 Q5 Coordinate Geometry Solution - NCERT Maths Class 10 Question Find the ratio in which the line segment joining \(A\,(1, -5)\) and \(B\,(-4, 5)\) is divided by the \(x\)-axis. Also find the coordinates of the point of division. Text Solution Reasoning: The coordinates of the point \(P(x, y)\) which divides the line segment joining the points \(A(x_1, y_1)\) and \(B(x_2, y_2)\) internally in the ratio \(m_1 : m_2\) are given by the Section Formula. What is Known? The \(x\) and \(y\) coordinates of the end points of the line segment which is divided by the \(x\)-axis. What is Unknown? The ratio in which the line segment joining \(A\,(1, -5)\) and \(B\,(-4, 5)\) is divided by the \(x\)-axis, and the coordinates of the point of division. Steps: Let the ratio be \(k : 1\), and let the line segment joining \(A\,(1, -5)\) and \(B\,(-4, 5)\) be divided by the \(x\)-axis at the point \(P\). By the Section Formula \[\begin{align}P(x,\,y) = \left[ {\frac{{m{x_2} + n{x_1}}}{{m + n}},\;\frac{{m{y_2} + n{y_1}}}{{m + n}}} \right] \qquad ...\,\rm{Equation}\,(1)\end{align}\] By substituting the values in Equation (1), the coordinates of the point of division are \(\begin{align}\left( {\frac{{ - 4k + 1}}{{k + 1}},\;\frac{{5k - 5}}{{k + 1}}} \right)\end{align}\) We know that the \(y\)-coordinate of any point on the \(x\)-axis is \(0\). \[\begin{align}∴\; \frac{{5k - 5}}{{k + 1}} &= 0\\ 5k - 5 &= 0\\ 5k &= 5 \qquad (\text{by transposing})\\ k &= 1\end{align}\] Therefore, the \(x\)-axis divides the segment in the ratio \(1:1\). \[\begin{align}{\text{Division point}} &= \left( {\frac{{ - 4(1) + 1}}{{1 + 1}},\frac{{5(1) - 5}}{{1 + 1}}} \right)\\ &= \left( {\frac{{ - 4 + 1}}{2},\frac{{5 - 5}}{2}} \right)\\ &= \left( {\frac{{ - 3}}{2},0} \right)\end{align}\]
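If it helps to double-check the arithmetic, below is a small Python sketch of the Section Formula; the helper name section_point is only for illustration.

from fractions import Fraction

def section_point(A, B, m, n):
    # Point dividing segment AB internally in the ratio m : n (Section Formula).
    (x1, y1), (x2, y2) = A, B
    return (Fraction(m * x2 + n * x1, m + n), Fraction(m * y2 + n * y1, m + n))

A, B = (1, -5), (-4, 5)
print(section_point(A, B, 1, 1))   # (Fraction(-3, 2), Fraction(0, 1)), i.e. (-3/2, 0)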
Because the slab extends infinitely in the xy plane, the electric field lies only along the z direction. The differential form of Gauss' Law for such a one-dimensional electric field is $$\frac{dE}{dz}=\frac{\rho(z)}{\epsilon_0}$$ This can be integrated, using the boundary condition that the electric field must be zero at large distances from the slab, because it is electrically neutral. A more elementary way of solving the problem is to divide the slab into infinitesimally thin layers of thickness $dz$. The electric field of each layer is uniform, independent of distance from the layer, and is $dE=\frac{d\sigma}{2\epsilon_0}$ pointing away from the layer on each side for +ve surface charge density $d\sigma=\rho(z)\,dz$. The total electric field at any point inside or outside of the slab is found by superposition of the fields from every such layer in the slab. Note that the total electric field at any point, due to all layers which are closer to the centre plane $z=0$, is zero. This is because the charge density is anti-symmetric: for every layer of +ve area charge density on one side of $z=0$ there is a layer of -ve charge with the same magnitude of area density on the other side of $z=0$. The electric fields of these two layers cancel out for points which lie outside of the two layers, in the same way that the total electric field is zero outside of a parallel plate capacitor (if the plate dimensions are very much bigger than the distance from them). From this observation you can see that the electric field outside of the slab is zero, because all layers in the slab are closer to the centre plane. The simplest way of getting the field inside the slab is to apply Gauss' Law using a "pill box" Gaussian surface which has one face A of area S at the surface of the slab $z=a$ (where $E(a)=0$) and the other face B at distance $|z|<a$ from the centre plane. The other face(s) of the pill box are parallel to the z direction so the electric flux through them is zero. There is no electric flux through face A; the flux through face B is $E(z)S=\frac{q}{\epsilon_0}$ where $$q=S\int_a^z \rho(z')\,dz'$$ is the total charge inside the pill box. See Electric field in a non-uniformly charged sheet. An answer identical to mine is given in the duplicate question Finding the electric field of a NON uniform slab?
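If a numerical sanity check is useful, here is a short Python sketch of the layer-superposition picture; the profile $\rho(z)=\rho_0 z/a$ on $|z|\le a$ is only an assumed example of an antisymmetric charge density, not necessarily the one in the original problem.

import numpy as np

eps0 = 8.854e-12
a, rho0 = 1.0, 1.0
z = np.linspace(-2 * a, 2 * a, 4001)
rho = np.where(np.abs(z) <= a, rho0 * z / a, 0.0)   # assumed antisymmetric profile

# Integrate dE/dz = rho/eps0 from the far left, where E = 0 (trapezoid rule).
dz = z[1] - z[0]
E = np.concatenate(([0.0], np.cumsum((rho[1:] + rho[:-1]) / 2 * dz))) / eps0

print(E[0], E[-1])               # both ~0: the field vanishes outside the slab
print(E[np.argmin(np.abs(z))])   # field at z = 0, here -rho0*a/(2*eps0)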
No. The most basic way to see this is to use the fact that ACF$_0$ (the theory of algebraically closed fields of characteristic $0$, of which $\overline{\mathbb{Q}}$ is a model) has quantifier elimination. So every formula $\varphi(x,\overline{a})$ (with parameters $\overline{a}$, in one free variable $x$) is equivalent to a quantifier-free formula. But any quantifier-free formula is equivalent to a boolean combination of polynomial equalities of the form $p(x) = 0$, where $p\in \mathbb{Q}[\overline{a}][x]$. Since a nonzero polynomial has at most finitely many roots, such a formula defines a finite or cofinite set. And $\mathbb{Q}$ is infinite and coinfinite in $\overline{\mathbb{Q}}$. At a higher level: What we've really shown above is that ACF$_0$ is strongly minimal (every definable set in one variable is finite or cofinite), from which it follows that it is uncountably categorical, in particular $\omega$-stable, i.e. the tamest of the tame kind of theory. On the other hand, any theory which interprets $\mathbb{Q}$ (as a field) is the wildest of the wild kind of theory: it can define arithmetic and all recursively enumerable sets and do Gödelian tricks, and it's definitely not $\omega$-stable, much less strongly minimal. So this is stronger: not only is the "standard" copy of $\mathbb{Q}$ not definable in a model of ACF$_0$, but ACF$_0$ doesn't interpret $\text{Th}(\mathbb{Q})$, by which I mean there is no definable subset $Q\subseteq (\overline{\mathbb{Q}})^n$ and definable functions $\hat{+}\colon Q^2\to Q$ and $\hat{*}\colon Q^2\to Q$ such that $(Q,\hat{+},\hat{*})\models \text{Th}(\mathbb{Q},+,*)$.
@mickep I'm pretty sure that malicious actors knew about this long before I checked it. My own server gets scanned by about 200 different people for vulnerabilities every day and I'm not even running anything with a lot of traffic.

@JosephWright @barbarabeeton @PauloCereda I thought we could create a golfing TeX extension; it would basically be a TeX format, just the first byte of the file would be an indicator of how to treat input and output or what to load by default. I thought of the name: Golf of TeX, shortened as GoT :-)

@PauloCereda Well, it has to be clever. You for instance need quick access to defining new cs, something like (I know this won't work, but you get the idea) \catcode`\@=13\def@{\def@##1\bgroup} so that when you use @Hello #1} it expands to \def@#1{Hello #1}

If you use the d'Alembert operator as well, you might find it pretty to use the symbol \bigtriangleup for your Laplace operator, in order to get a similar look to the \Box symbol that is being used for the d'Alembertian. In the following, a tricky construction with \mathop and \mathbin is used to get the...

LaTeX exports. I am looking for a hint on this. I've tried everything I could find but no solution yet. I read equations from files generated by CAS programs. I can't edit these or modify them in any way. Some of these are too long. Some are not. To make them fit in the page width, I tried \resizebox. The problem is that this will resize the small equations as well as the long ones to fit the page width, which is not what I want. I want to resize only the ones that are longer than the page width and keep the others as they are. Is there a way in LaTeX to do this? Again, I do not know beforehand the size of…

\documentclass[12pt]{article}
\usepackage{amsmath}
\usepackage{graphicx}
\begin{document}
\begin{equation*}
\resizebox{\textwidth}{!}{$\begin{split}y &= \sin^2 x + \cos^2 x\\x &= 5\end{split}$}
\end{equation*}
\end{document}

The above will resize the small equation, which I do not want. But since I do not know beforehand how long an equation is, I apply the resize to every one. Is there a way to find out in LaTeX, using some LaTeX command, whether an equation "will fit" the page width, or how long it is? If so, I can add logic to apply the resize only when needed. What I mean is: I want to resize DOWN only if needed, and never resize UP. Also, if you think I should ask this on the main board, I can, but I thought to check here first.

@egreg what other options do I have? Sometimes the CAS generates an equation which does not fit the page. Without resizing it overflows the page and one can't see the rest of it at all. With resizing, since in a pdf one can zoom in a little, at least one can see it if needed. It is impossible to edit or modify these by hand, as this is all done using a program.

@UlrikeFischer I do not generate unreadable equations. These are solutions of ODEs. The LaTeX is generated by Maple. Some of them are longer than the page width. That is all. So what is your suggestion I do? Let the long solutions flow out of the page? I can't edit these by hand; this is all generated by a program. I can add LaTeX code around them, that is all, but editing them is out of the question. I tried the breqn package, but that did not work; it broke many things as well.

@egreg That was just an example, something I added by hand to make up a long equation for illustration; it was not a real solution to an ODE. Again, thanks for the effort, but I can't edit the generated LaTeX by hand at all. It would take me a year to do, and I run the program many times each day.
Each time, all the LaTeX files are overwritten again anyway. CAS providers do not generate good LaTeX either. That is why breqn did not work: many times they add {} around large expressions, which makes breqn unable to break them. Also breqn has many other problems, so I no longer use it at all.
Let $\mathfrak{g}$ be a sub-Lie-algebra of $\mathfrak{gl}_n(\mathbb{C})$, the Lie algebra of complex $n\times n$ square matrices. Let us call $(H)$ the hypothesis: for all $x, y\in\mathbb{C}^n$, whenever $x$ and $y$ are linearly independent, we have $\langle \mathfrak{g}(x)\cup \mathfrak{g}(y)\rangle=\mathbb{C}^n$. Here, $\mathfrak{g}(x)$ is the linear subspace of $\mathbb{C}^n$ that consists of all the images of $x$ under elements $g\in \mathfrak{g}$ (likewise for $\mathfrak{g}(y)$). I think of $(H)$ as "no hyperplane is conjugate to a pair of linearly independent vectors". My problem is to determine the minimal dimension that $\mathfrak{g}$ must have in order for $\mathfrak{g}$ to be able to have the property $(H)$. It is clear that we must have $\dim(\mathfrak{g})>\frac{n-1}{2}$. Indeed, if $\dim(\mathfrak{g})\leq\frac{n-1}{2}$, we take two arbitrary linearly independent vectors $x, y$, and we have $\dim(\langle\mathfrak{g}(x)\cup \mathfrak{g}(y)\rangle)\leq \dim(\mathfrak{g}(x))+\dim(\mathfrak{g}(y))\leq 2\dim(\mathfrak{g})\leq n-1$, so the span cannot be all of $\mathbb{C}^n$. We should be able to prove in fact that $\dim(\mathfrak{g})\geq n-1$ (I know it from an assertion of Hermann Weyl). But, in order to reach this new lower bound, we cannot anymore take two arbitrary linearly independent vectors. I think, to reach that goal, that we must start from a vector basis that is correctly adapted to all the transformations of $\mathfrak{g}$. But I don't see for the moment how to get the point. Could anyone give me some advice?
I am trying to understand the logic behind the chi-squared test. The chi-squared statistic is $\chi ^2 = \sum \frac{(obs-exp)^2}{exp}$. $\chi ^2$ is then compared to a chi-squared distribution to find a p-value, in order to reject or not reject the null hypothesis. $H_0$: the observations come from the distribution we used to create our expected values. For example, we could test whether the probability of obtaining heads is given by $p$, as we expect. So we flip 100 times and find $n_H$ heads and $100-n_H$ tails. We want to compare our finding to what is expected ($100 \cdot p$ heads). We could as well use a binomial distribution, but that is not the point of the question… The question is: Can you please explain why, under the null hypothesis, $\sum \frac{(obs-exp)^2}{exp}$ follows a chi-squared distribution? All I know about the chi-squared distribution is that the chi-squared distribution with $k$ degrees of freedom is the sum of $k$ squared standard normal random variables.
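This is not a proof, but a quick simulation of the coin-flip example (a Python sketch, assuming SciPy is available) shows the quantiles of $\sum (obs-exp)^2/exp$ tracking a chi-squared distribution with $k-1=1$ degree of freedom:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
p, n, reps = 0.5, 100, 20000
heads = rng.binomial(n, p, size=reps)              # simulate under H0

obs = np.stack([heads, n - heads], axis=1)         # observed counts (H, T)
exp = np.array([n * p, n * (1 - p)])               # expected counts
chi2 = ((obs - exp) ** 2 / exp).sum(axis=1)        # the statistic, one value per replicate

print(np.quantile(chi2, [0.5, 0.9, 0.95]))         # simulated quantiles
print(stats.chi2.ppf([0.5, 0.9, 0.95], df=1))      # chi-squared(1) quantiles, very close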
Fit Cluster or Cox Point Process Model

Fit a homogeneous or inhomogeneous cluster process or Cox point process model to a point pattern.

Usage

kppm(X, …)

# S3 method for formula
kppm(X, clusters = c("Thomas","MatClust","Cauchy","VarGamma","LGCP"), …, data = NULL)

# S3 method for ppp
kppm(X, trend = ~1, clusters = c("Thomas","MatClust","Cauchy","VarGamma","LGCP"), data = NULL, ..., covariates = data, subset, method = c("mincon", "clik2", "palm"), improve.type = c("none", "clik1", "wclik1", "quasi"), improve.args = list(), weightfun = NULL, control = list(), algorithm = "Nelder-Mead", statistic = "K", statargs = list(), rmax = NULL, covfunargs = NULL, use.gam = FALSE, nd = NULL, eps = NULL)

# S3 method for quad
kppm(X, trend = ~1, clusters = c("Thomas","MatClust","Cauchy","VarGamma","LGCP"), data = NULL, ..., covariates = data, subset, method = c("mincon", "clik2", "palm"), improve.type = c("none", "clik1", "wclik1", "quasi"), improve.args = list(), weightfun = NULL, control = list(), algorithm = "Nelder-Mead", statistic = "K", statargs = list(), rmax = NULL, covfunargs = NULL, use.gam = FALSE, nd = NULL, eps = NULL)

Arguments

X: A point pattern dataset (object of class "ppp" or "quad") to which the model should be fitted, or a formula in the R language defining the model. See Details.

trend: An R formula, with no left hand side, specifying the form of the log intensity.

clusters: Character string determining the cluster model. Partially matched. Options are "Thomas", "MatClust", "Cauchy", "VarGamma" and "LGCP".

data, covariates: The values of spatial covariates (other than the Cartesian coordinates) required by the model. A named list of pixel images, functions, windows, tessellations or numeric constants.

…: Additional arguments. See Details.

subset: Optional. A subset of the spatial domain, to which the model-fitting should be restricted. A window (object of class "owin") or a logical-valued pixel image (object of class "im"), or an expression (possibly involving the names of entries in data) which can be evaluated to yield a window or pixel image.

method: The fitting method. Either "mincon" for minimum contrast, "clik2" for second order composite likelihood, or "palm" for Palm likelihood. Partially matched.

improve.type: Method for updating the initial estimate of the trend. Initially the trend is estimated as if the process is an inhomogeneous Poisson process. The default, improve.type = "none", is to use this initial estimate. Otherwise, the trend estimate is updated by improve.kppm, using information about the pair correlation function. Options are "clik1" (first order composite likelihood, essentially equivalent to "none"), "wclik1" (weighted first order composite likelihood) and "quasi" (quasi likelihood).

improve.args: Additional arguments passed to improve.kppm when improve.type != "none". See Details.

weightfun: Optional weighting function \(w\) in the composite likelihood or Palm likelihood. A function in the R language. See Details.

control: List of control parameters passed to the optimization function optim.

algorithm: The optimisation algorithm to be used by optim (see Details).

statistic: Name of the summary statistic to be used for minimum contrast estimation: either "K" or "pcf".

statargs: Optional list of arguments to be used when calculating the statistic. See Details.

rmax: Maximum value of interpoint distance to use in the composite likelihood.

covfunargs, use.gam, nd, eps: Arguments passed to ppm when fitting the intensity.

Details

This function fits a clustered point process model to the point pattern dataset X. The model may be either a Neyman-Scott cluster process or another Cox process.
The type of model is determined by the argument clusters. Currently the options are clusters="Thomas" for the Thomas process, clusters="MatClust" for the Matern cluster process, clusters="Cauchy" for the Neyman-Scott cluster process with Cauchy kernel, clusters="VarGamma" for the Neyman-Scott cluster process with Variance Gamma kernel (requires an additional argument nu to be passed through the dots; see rVarGamma for details), and clusters="LGCP" for the log-Gaussian Cox process (may require additional arguments passed through …; see rLGCP for details on argument names). The first four models are Neyman-Scott cluster processes.

The algorithm first estimates the intensity function of the point process using ppm. The argument X may be a point pattern (object of class "ppp") or a quadrature scheme (object of class "quad"). The intensity is specified by the trend argument. If the trend formula is ~1 (the default) then the model is homogeneous. The algorithm begins by estimating the intensity as the number of points divided by the area of the window. Otherwise, the model is inhomogeneous. The algorithm begins by fitting a Poisson process with log intensity of the form specified by the formula trend. (See ppm for further explanation.)

The argument X may also be a formula in the R language. The right hand side of the formula gives the trend as described above. The left hand side of the formula gives the point pattern dataset to which the model should be fitted.

If improve.type="none" this is the final estimate of the intensity. Otherwise, the intensity estimate is updated, as explained in improve.kppm. Additional arguments to improve.kppm are passed as a named list in improve.args.

The clustering parameters of the model are then fitted either by minimum contrast estimation, or by maximum composite likelihood.

Minimum contrast: If method = "mincon" (the default) the clustering parameters of the model will be fitted by minimum contrast estimation, that is, by matching the theoretical \(K\)-function of the model to the empirical \(K\)-function of the data, as explained in mincontrast. For a homogeneous model (trend = ~1) the empirical \(K\)-function of the data is computed using Kest, and the parameters of the cluster model are estimated by the method of minimum contrast. For an inhomogeneous model, the inhomogeneous \(K\) function is estimated by Kinhom using the fitted intensity. Then the parameters of the cluster model are estimated by the method of minimum contrast using the inhomogeneous \(K\) function. This two-step estimation procedure is due to Waagepetersen (2007). If statistic="pcf" then instead of using the \(K\)-function, the algorithm will use the pair correlation function pcf for homogeneous models and the inhomogeneous pair correlation function pcfinhom for inhomogeneous models. In this case, the smoothing parameters of the pair correlation can be controlled using the argument statargs, as shown in the Examples. Additional arguments … will be passed to mincontrast to control the minimum contrast fitting algorithm.

Composite likelihood: If method = "clik2" the clustering parameters of the model will be fitted by maximising the second-order composite likelihood (Guan, 2006).
The log composite likelihood is $$ \sum_{i,j} w(d_{ij}) \log\rho(d_{ij}; \theta) - \left( \sum_{i,j} w(d_{ij}) \right) \log \int_D \int_D w(\|u-v\|) \rho(\|u-v\|; \theta)\, du\, dv $$ where the sums are taken over all pairs of data points \(x_i, x_j\) separated by a distance \(d_{ij} = \| x_i - x_j\|\) less than rmax, and the double integral is taken over all pairs of locations \(u,v\) in the spatial window of the data. Here \(\rho(d;\theta)\) is the pair correlation function of the model with cluster parameters \(\theta\). The function \(w\) in the composite likelihood is a weighting function and may be chosen arbitrarily. It is specified by the argument weightfun. If this is missing or NULL then the default is a threshold weight function, \(w(d) = 1(d \le R)\), where \(R\) is rmax/2.

Palm likelihood: If method = "palm" the clustering parameters of the model will be fitted by maximising the Palm loglikelihood (Tanaka et al, 2008) $$ \sum_{i,j} w(x_i, x_j) \log \lambda_P(x_j \mid x_i; \theta) - \int_D w(x_i, u) \lambda_P(u \mid x_i; \theta) {\rm d} u $$ with the same notation as above. Here \(\lambda_P(u \mid v; \theta)\) is the Palm intensity of the model at location \(u\) given there is a point at \(v\).

In all three methods, the optimisation is performed by the generic optimisation algorithm optim. The behaviour of this algorithm can be modified using the argument control. Useful control arguments include trace, maxit and abstol (documented in the help for optim).

Fitting the LGCP model requires the RandomFields package, except in the default case where the exponential covariance is assumed.

Value

An object of class "kppm" representing the fitted model. There are methods for printing, plotting, predicting, simulating and updating objects of this class.

Log-Gaussian Cox Models

To fit a log-Gaussian Cox model with non-exponential covariance, specify clusters="LGCP" and use additional arguments to specify the covariance structure. These additional arguments can be given individually in the call to kppm, or they can be collected together in a list called covmodel. For example a Matern model with parameter \(\nu=0.5\) could be specified either by kppm(X, clusters="LGCP", model="matern", nu=0.5) or by kppm(X, clusters="LGCP", covmodel=list(model="matern", nu=0.5)). The argument model specifies the type of covariance model: the default is model="exp" for an exponential covariance. Alternatives include "matern", "cauchy" and "spheric". Model names correspond to functions beginning with RM in the RandomFields package: for example model="matern" corresponds to the function RMmatern in the RandomFields package. Additional arguments are passed to the relevant function in the RandomFields package: for example if model="matern" then the additional argument nu is required, and is passed to the function RMmatern in the RandomFields package. Note that it is not possible to use anisotropic covariance models because the kppm technique assumes the pair correlation function is isotropic.

Error and warning messages

See ppm.ppp for a list of common error messages and warnings originating from the first stage of model-fitting.

References

Guan, Y. (2006) A composite likelihood approach in fitting spatial point process models. Journal of the American Statistical Association 101, 1502--1512.

Jalilian, A., Guan, Y. and Waagepetersen, R. (2012) Decomposition of variance for spatial Cox processes. Scandinavian Journal of Statistics 40, 119--137.

Tanaka, U. and Ogata, Y. and Stoyan, D.
(2008) Parameter estimation and model selection for Neyman-Scott point processes. Biometrical Journal 50, 43--57.

Waagepetersen, R. (2007) An estimating function approach to inference for inhomogeneous Neyman-Scott processes. Biometrics 63, 252--258.

See Also

Minimum contrast fitting algorithm: mincontrast. See also ppm.

Aliases

kppm kppm.formula kppm.ppp kppm.quad

Examples

# method for point patterns
kppm(redwood, ~1, "Thomas")

# method for formulas
kppm(redwood ~ 1, "Thomas")
kppm(redwood ~ 1, "Thomas", method="c")
kppm(redwood ~ 1, "Thomas", method="p")
kppm(redwood ~ x, "MatClust")
kppm(redwood ~ x, "MatClust", statistic="pcf", statargs=list(stoyan=0.2))
kppm(redwood ~ x, cluster="Cauchy", statistic="K")
kppm(redwood, cluster="VarGamma", nu = 0.5, statistic="pcf")

# LGCP models
kppm(redwood ~ 1, "LGCP", statistic="pcf")
if(require("RandomFields")) {
  kppm(redwood ~ x, "LGCP", statistic="pcf", model="matern", nu=0.3, control=list(maxit=10))
}

# fit with composite likelihood method
kppm(redwood ~ x, "VarGamma", method="clik2", nu.ker=-3/8)

# fit intensity with quasi-likelihood method
kppm(redwood ~ x, "Thomas", improve.type = "quasi")

Documentation reproduced from package spatstat, version 1.55-1, License: GPL (>= 2)
Let $\mathcal{P}:=\mathcal{P}(\mathcal{X})$ be the $n$-dimensional manifold of all (strictly positive) probability vectors (distributions) on $\mathcal{X}=\{x_0,\dots,x_n\}$, i.e., each $p=(p(x_0),\dots,p(x_n))\in \mathcal{P}$ is such that $p(x_i)>0$ for all $i$ and $\sum_{i}p(x_i)=1$, and can be thought of as a point in $\mathbb{R}^{n+1}$. Let $\mathcal{P}=\{p_{\xi}\}$, where $\xi=(\xi_1,\dots,\xi_n)$ is the (global) coordinate system. A Riemannian metric $G(\xi) = [g_{i,j}(\xi)]$ is defined on $\mathcal{P}$, where \begin{eqnarray} g_{i,j}(\xi) & = & \sum_x \frac{\partial}{\partial\xi_i} (p_{\xi}(x))~ \frac{\partial}{\partial\xi_j}(\log p_{\xi}(x)). \end{eqnarray} An affine connection $\nabla$ is defined on $\mathcal{P}$, given by the Christoffel symbols \begin{eqnarray} \Gamma_{ij}^k({\xi}) & = & \sum_x \frac{\partial}{\partial\xi_k}(p_{\xi}(x))~\frac{\partial}{\partial\xi_i}\left(\frac{\partial}{\partial\xi_j}\log p_{\xi}(x)\right). \end{eqnarray} Suppose that $\gamma_t$ is a geodesic on $\mathcal{P}$. Having the metric and the connection coefficients on hand, can I then claim from the geodesic equation $\nabla_{\dot\gamma_t}\dot\gamma_t=0$ that the following must be true? \begin{eqnarray} \sum_x \frac{\partial}{\partial\xi_k} (p_{\xi}(x))~\frac{d^2}{dt^2}\left(\log \gamma_t(x)\right) = 0 \end{eqnarray} Update: From this article of Amari in the Annals of Statistics, I came to know that the geodesic equation (for this connection) is given by $\ddot l_t+i_t=0$, where $l_t=\log\gamma_t$ and $i_t=\sum_x \dot\gamma_t(x)\dot l_t(x)$. But he hasn't given any explanation of how he obtained this. See the Appendix of the paper; $\alpha=1$ corresponds to my question. Once this geodesic equation is obtained, my claimed equation is obvious. If anyone can help me derive this geodesic equation, it would be great. Thank you.
Imagine I was a hypothetical ant man the size of an atom, and I position myself at the exact, down to the atom, center of mass of the earth. A move in any direction will move me out of the center. Would I experience gravity pulling outward on my body in all directions? There's actually a very useful way to solve this using the Gauss law for gravity, which is given by: $$\oint\vec{g}\cdot d\vec{A}=-4\pi GM_{enc}$$ where $\vec{g}$ is the gravitational field, $d\vec{A}$ the area element of the closed surface of interest, and $M_{enc}$ the mass enclosed by the Gaussian surface. Assuming the Earth has a uniform volumetric density $\rho$, let's consider the two situations proposed: a) You're at the exact center of the planet: in this case, all forces will cancel each other due to the symmetry of the object, so you will experience zero gravity. From Gauss' Law, this is equivalent to having no enclosed mass. b) You move away a distance $r$ from the center: in this case, the area of the Gaussian surface will be $4\pi r^2$, and assuming a constant density $$\rho=3M/4\pi R^3=3M_{enc}/4\pi r^3 \ \ \rightarrow \ \ M_{enc}=4\pi r^3 \rho/3$$ Substituting in the Gauss equation, $$g(4\pi r^2)=-4\pi G(4\pi r^3 \rho/3)$$ Simplifying, $$\vec{g}=-\frac{4\pi G\rho}{3}\vec{r}$$ Or in terms of the radius of the Earth, $$\vec{g}=-GM\left (\frac{r}{R^3}\right )\hat{r}$$ So you can see that only the mass enclosed by your Gaussian surface contributes to the net acceleration you feel towards the center. Obviously the density of the Earth isn't constant (it's more concentrated in the core than near the surface), so you can get a better approximation using a more empirical model of the density.
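Here is a small Python sketch of the resulting field magnitude; the uniform-density Earth is of course the idealisation discussed above.

import numpy as np

G = 6.674e-11   # m^3 kg^-1 s^-2
M = 5.972e24    # kg
R = 6.371e6     # m

def g(r):
    # |g| grows linearly inside a uniform sphere and falls off as 1/r^2 outside.
    r = np.asarray(r, dtype=float)
    return np.where(r <= R, G * M * r / R**3, G * M / r**2)

print(g(0.0))      # 0: no enclosed mass at the centre
print(g(R))        # ~9.8 m/s^2 at the surface
print(g(2 * R))    # ~2.45 m/s^2 outside, the usual inverse-square falloff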
Abbreviation: DLOS

A distributive lattice-ordered semigroup is a structure $\mathbf{A}=\langle A,\vee,\wedge,\cdot\rangle$ of type $\langle 2,2,2\rangle$ such that

$\langle A,\vee,\wedge\rangle$ is a distributive lattice

$\langle A,\cdot\rangle$ is a semigroup

$\cdot$ distributes over $\vee$: $x\cdot(y\vee z)=(x\cdot y)\vee (x\cdot z)$ and $(x\vee y)\cdot z=(x\cdot z)\vee (y\cdot z)$

Let $\mathbf{A}$ and $\mathbf{B}$ be distributive lattice-ordered semigroups. A morphism from $\mathbf{A}$ to $\mathbf{B}$ is a function $h:A\rightarrow B$ that is a homomorphism: $h(x\vee y)=h(x) \vee h(y)$, $h(x\wedge y)=h(x) \wedge h(y)$, $h(x\cdot y)=h(x) \cdot h(y)$.

Example 1: Any collection $\mathbf A$ of binary relations on a set $X$ such that $\mathbf A$ is closed under union, intersection and composition. H. Andréka (see the reference below) proves that these examples generate the variety DLOS.

$\begin{array}{lr} f(1)= &1\\ f(2)= &6\\ f(3)= &44\\ f(4)= &479\\ f(5)= &\\ \end{array}$

Hajnal Andréka, Representations of distributive lattice-ordered semigroups with binary relations, Algebra Universalis 28 (1991), 12–25.
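As a concrete illustration of Example 1, here is a small randomised Python spot-check (evidence only, not a proof) that composition of binary relations is associative and distributes over union:

from itertools import product
import random

X = range(3)
pairs = list(product(X, X))

def compose(R, S):
    # Relational composition: (a, c) whenever a R b and b S c for some b.
    return {(a, c) for (a, b1) in R for (b2, c) in S if b1 == b2}

random.seed(0)
for _ in range(200):
    R, S, T = (set(random.sample(pairs, k=random.randrange(len(pairs)))) for _ in range(3))
    assert compose(R, S | T) == compose(R, S) | compose(R, T)
    assert compose(R | S, T) == compose(R, T) | compose(S, T)
    assert compose(compose(R, S), T) == compose(R, compose(S, T))
print("distributivity and associativity hold for all sampled relations")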
I am learning Fourier analysis without any teacher, just trying to read books on my own. I think I have made some decent progress, but there are a couple of points which are still very unclear to me and that I can't find explained in any of the books I have. One of the sources that I found is this document, which is great because it teaches the DFT without really using complex numbers. I should say that I understand complex numbers and I am aware of Euler's Formula. So what they say in this PDF/document is that, in the simple case, you can create a DFT by just using $N/2$ coefficients. I thought the choice of $N/2$ was related to the Nyquist frequency. If the signal contains $N$ samples then the Nyquist sampling theorem says that the signal can't contain a wave whose frequency is higher than half of the sampling frequency (hence the $N/2$ harmonics in the DFT). So to me, explained that way, everything made a lot of sense, and in the simplest case you just needed to do something like this: \begin{align} a[k] &= \sum_{x = 0}^{N-1} s[x] \cos\left(2 \pi k {1 \over N } x\right), \quad \text{ for } k = \left\{ 1, 2, ..., \frac N2\right\},\\ b[k] &= \sum_{x = 0}^{N-1} s[x] \sin\left(2 \pi k {1 \over N } x\right), \quad \text{ for } k = \left\{ 1, 2, ..., \frac N2\right\}. \end{align} So this seemed simple. Now it says that when $k = 0$ and when $k = N/2$ we need to divide $a$ and $b$ by $N$, and multiply them by $2/N$ otherwise. I understand why when $k = 0$, because it's the DC offset, but I didn't really understand why you had to do the same thing when $k = N/2$ until I read this post. QUESTION 1: It seems to indicate that when you use the exponential form of the DFT, then at $k = N/2$ the basis function becomes $e^{-i\pi x} = (-1)^x$, which is purely real. It then seems that in that situation the coefficient $a[N/2]$ has a particular meaning, but I don't know which one? Now this is where I am lost. In the "complete" equation for the DFT you don't compute $N/2$ coefficients but $N$ coefficients. That means that as soon as $k > N/2$ the frequency of the harmonics is greater than the Nyquist frequency. I have illustrated this with the following image: We have $N=8$ samples, thus the fundamental frequency is $\frac 18$ and we have harmonics: $1\cdot \frac 18, \ 2\cdot \frac 18, \ 3\cdot \frac 18, \ 4\cdot \frac 18$. However, as soon as we go above that, the harmonics go beyond the Nyquist frequency. QUESTION 2: why do we test the signal with harmonics whose frequencies go beyond the Nyquist frequency? Finally, and I think this is actually related to question 2, I keep reading about positive and negative frequencies, but I just can't make sense of this at all. This is my question 3. QUESTION 3: could you please briefly explain, if that's possible, why we speak of and need positive and negative frequencies? Why do we need to care about negative frequencies?
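For what it is worth, here is a small Python sketch of the "real" DFT written exactly as the two sums above, including the $1/N$ versus $2/N$ normalisation mentioned in the text; the test signal is just an assumed example.

import numpy as np

N = 8
x = np.arange(N)
s = 1.0 + 2.0 * np.cos(2 * np.pi * 1 * x / N) + 0.5 * np.sin(2 * np.pi * 3 * x / N)

a = np.array([np.sum(s * np.cos(2 * np.pi * k * x / N)) for k in range(N // 2 + 1)])
b = np.array([np.sum(s * np.sin(2 * np.pi * k * x / N)) for k in range(N // 2 + 1)])

scale = np.full(N // 2 + 1, 2.0 / N)   # 2/N for 0 < k < N/2 ...
scale[0] = scale[-1] = 1.0 / N         # ... and 1/N for k = 0 and k = N/2
a, b = a * scale, b * scale

print(np.round(a, 6))   # [1. 2. 0. 0. 0.]  -> DC offset and the cosine component at k = 1
print(np.round(b, 6))   # [0. 0. 0. 0.5 0.] -> the sine component at k = 3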
Let $A$ be an $m \times n$ matrix, and define: \begin{align*} U &= {\rm diag} \{ \frac{1}{\beta_j} \}, \beta_j = \sum_{k=1}^m |a_{kj}|, j = 1 \dots n \\ V &= {\rm diag} \{ \frac{1}{\alpha_i} \}, \alpha_i = \sum_{k=1}^n |a_{ik}|, i = 1 \dots m. \end{align*} i.e. $\beta_j$ is the 1-norm of the $j$th column of $A$, and $\alpha_i$ is the 1-norm of the $i$th row. Let $M = UA^TVA$, an $n \times n$ matrix. A direct calculation gives its $(i,j)$th element as \begin{align*} m_{ij} &= \frac{1}{\beta_i} \sum_{k=1}^m \frac{a_{ki} a_{kj}}{\alpha_k}. \end{align*} If all the elements of $A$ are positive, it's fairly straightforward to show that all the rows of $M$ sum to 1, thus $\Vert M \Vert_\infty=1$, and since $\lambda=1$ is an eigenvalue, it follows that $\rho(M)=1$. My question is: can one prove that all the eigenvalues of $M$ are positive as well (i.e. that they all lie between 0 and 1)? Empirically this seems to be the case, but I'm having a hard time proving why. $M$ is not SPD. It seems that it might be totally positive, but I'm not sure how to prove that. Any ideas? (This matrix arises in the Simultaneous Algebraic Reconstruction Technique (SART), an iterative method for solving linear systems.)
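Here is the kind of quick numerical experiment that suggests the claim (a Python sketch; purely empirical, it proves nothing):

import numpy as np

rng = np.random.default_rng(1)
m, n = 7, 5
A = rng.uniform(0.1, 1.0, size=(m, n))        # a random matrix with positive entries

U = np.diag(1.0 / A.sum(axis=0))              # 1 / column sums (1 / beta_j)
V = np.diag(1.0 / A.sum(axis=1))              # 1 / row sums (1 / alpha_i)
M = U @ A.T @ V @ A

print(np.allclose(M.sum(axis=1), 1.0))        # rows sum to 1
print(np.sort_complex(np.linalg.eigvals(M)))  # empirically: real, lying in (0, 1]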
The $x$ component of vector $A$ is $-25.0$ m and the $y$ component is 40.0 m. (a) What is the magnitude of $A$? (b) What is the angle between the direction of $A$ and the positive direction of $x$? For (b) I tried using the formula $\tan \theta = \frac{a_y}{a_x} = \frac{40}{-25} = -1.6$, thus $\arctan(-1.6) \approx -58$ degrees, which does not match the answer key: $122$ degrees. Any help is appreciated.
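A short Python check using atan2, which keeps the quadrant information that the plain arctan of the ratio throws away (assuming the $x$ component really is $-25.0$ m):

import math

ax, ay = -25.0, 40.0
print(math.hypot(ax, ay))                 # magnitude, ~47.2 m
print(math.degrees(math.atan2(ay, ax)))   # ~122 degrees from the +x axis
print(math.degrees(math.atan(ay / ax)))   # ~-58 degrees: quadrant information lost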
Here are a couple of ways to go about making some informed decisions in estimating a line integral given a picture: We note the path is a simple path; it is a semi-circle of radius $r=1$ and total arc length of $\pi$, which helps in the estimation step. We also note that $f$ is approximately monotone decreasing along the path, in such a way as to make estimation of values between the contour lines somewhat simple, or at least plausible. First let's use a trick from single variable calculus to constrain our integral between its max and min values. We will take two integrals, both treating $f$ as a constant set by the beginning and ending of the path (because $f$ is approximately monotone we have a highest and lowest point). I will take these points as $f(3,3)\approx 4.1$, the beginning of the curve above the $f=4$ contour, and $f(3,1)\approx 1.3$, the point on the curve above the $f = 1$ contour. We then compute the two approximate integrals to help us bound the answer a bit. Integral 1: $$\int_C f(x,y)ds = \int_C f(3,3)ds \approx \int_C (4.1)ds = 4.1 \cdot \pi \approx 12.88 $$ Integral 2: $$\int_C f(x,y)ds = \int_C f(3,1)ds \approx \int_C (1.3)ds = 1.3 \cdot \pi \approx 4.08 $$ At least we have cut off anything less in magnitude than $4$, so the answers $-3,0,3$ are all too small. To decide if the answer is closer to $9$ or $6$, I appeal to the drawing: the path spends more time traveling through (i.e., has greater length in) the region $3<f<4$ than the region $2<f<3$, and this pushes the value closer to $9$. And lastly, the integration path travels backwards towards decreasing values of $f$, and this could make the value of the integral negative. I would suspect that the value of the integral is most likely $\pm9$, with a preference towards $-9$.
I'm looking at algorithms to construct short paths in a particular Cayley graph defined in terms of quadratic residues. This has led me to consider a variant on Lagrange's four-squares theorem. The Four Squares Theorem is simply that for any $n \in \mathbb N$, there exist $w,x,y,z \in \mathbb N$ such that $$ n = w^2 + x^2 + y^2 + z^2 . $$ Furthermore, using algorithms presented by Rabin and Shallit (which seem to be state-of-the-art), such decompositions of $n$ can be found in $\mathrm{O}(\log^4 n)$ random time, or about $\mathrm{O}(\log^2 n)$ random time if you don't mind depending on the ERH or allowing a finite but unknown number of instances with less-well-bounded running time. I am considering a Cayley graph $G_N$ defined on the integers modulo $N$, where two residues are adjacent if their difference is a "quadratic unit" (a multiplicative unit which is also quadratic residue) or the negation of one (so that the graph is undirected). Paths starting at zero in this graph correspond to decompositions of residues as sums of squares. It can be shown that four squares do not always suffice; for instance, consider $N = 24$, where $G_N$ is the 24-cycle, corresponding to the fact that 1 is the only quadratic unit mod 24. However, finding decompositions of residues into "squares" can be helpful in finding paths in the graphs $G_N$. The only caveat is that only squares which are relatively prime to the modulus are useable. So, the question: let $p$ be prime, and $n \in \mathbb Z_p ( := \mathbb Z / p \mathbb Z)$. Under what conditions can we efficiently discover multiplicative units $w,x,y,z \in \mathbb Z_p^\ast$ such that $n = w^2 + x^2 + y^2 + z^2$? Is there a simple modification of Rabin and Shallit's algorithms which is helpful? Edit: In retrospect, I should emphasize that my question is about efficiently finding such a decomposition, and for $p > 3$. Obviously for $p = 3$, only $n = 1$ has a solution. Less obviously, one may show that the equation is always solvable for $n \in \mathbb Z_p^\ast$, for any $p > 3$ prime.
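For small $p$, a randomised brute-force search is at least easy to write down (a Python sketch only; it tabulates all nonzero squares, so it is nothing like the efficient algorithm being asked for):

import random

def four_unit_squares(n, p, max_tries=10000):
    # Find units w, x, y, z in Z_p^* with w^2 + x^2 + y^2 + z^2 = n (mod p),
    # by guessing w, x, y and checking whether the remainder is a nonzero square.
    sqrt_of = {pow(z, 2, p): z for z in range(1, p)}   # nonzero squares -> one square root
    for _ in range(max_tries):
        w, x, y = (random.randrange(1, p) for _ in range(3))
        t = (n - w * w - x * x - y * y) % p
        if t in sqrt_of:
            return w, x, y, sqrt_of[t]
    return None

p, n = 101, 5
w, x, y, z = four_unit_squares(n, p)
print((w * w + x * x + y * y + z * z) % p == n % p)   # True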
Journal of High Energy Physics, ISSN 1126-6708, 3/2018, Volume 2018, Issue 3, pp. 1 - 23 The ratios of the branching fractions of the decays Λ c + → pπ − π +, Λ c + → pK − K +, and Λ c + → pπ − K + with respect to the Cabibbo-favoured Λ c + →... Spectroscopy | Branching fraction | Charm physics | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Luminosity | Uncertainty | Large Hadron Collider | Particle collisions | Nuclear and particle physics. Atomic energy. Radioactivity | LHCb | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article

Physical Review Letters, ISSN 0031-9007, 12/2017, Volume 119, Issue 23 Journal Article

Journal of High Energy Physics, ISSN 1126-6708, 3/2018, Volume 2018, Issue 3, pp. 1 - 21 The difference between the CP asymmetries in the decays Λ c + → pK − K + and Λ c + → pπ − π + is presented. Proton-proton collision data taken at... Charm physics | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Skewed distributions | Luminosity | Statistical methods | Statistical analysis | Particle collisions | Decay | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS Journal Article

ISSN 0370-2693, 2018 The decay $\Lambda_b^0 \to \Lambda_c^+ p \overline{p} \pi^-$ is observed using $pp$ collision data collected with the LHCb detector at centre-of-mass energies...
dibaryon | experimental results | intermediate state: mass | statistical | branching ratio: ratio: measured | Sigma/c | CERN LHC Coll | LHC-B | Breit-Wigner | structure | Lambda/b0 --> Lambda/c+ p anti-p pi | 7000 GeV-cms8000 GeV-cms | phase space | mass: width | Lambda/b0: hadronic decay | mass spectrum: (Lambda/c+ pi-) | p p: colliding beams | p p: scattering | pentaquark | mass spectrum: (Lambda/c+ p anti-p pi-) | scattering [p p] | width [mass] | Hadron-Hadron scattering (experiments) | High Energy Physics - Experiment | Sigma baryons | (Lambda/c+ pi-) [mass spectrum] | Physics Institute | LHCb | 530 Physics | mass [intermediate state] | (Lambda/c+ p anti-p pi-) [mass spectrum] | Lambda baryons | Physics | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | hadronic decay [Lambda/b0] | Particles and fields | colliding beams [p p] | ratio: measured [branching ratio] Journal Article

Physical Review Letters, ISSN 0031-9007, 05/2019, Volume 122, Issue 19 Journal Article

07/2018 Phys. Lett. B 787 (2018) 124-133 A search for $C\!P$ violation in $\Lambda^0_b \to p K^-$ and $\Lambda^0_b \to p \pi^-$ decays is presented using a sample of... Physics - High Energy Physics - Experiment Journal Article

Physical Review Letters, ISSN 0031-9007, 05/2018, Volume 120, Issue 22, p. 221803 Journal Article

Journal of High Energy Physics, ISSN 1126-6708, 04/2019, Volume 2019, Issue 4, pp. 1 - 18 Journal Article

Physics Letters, Section B: Nuclear, Elementary Particle and High-Energy Physics, ISSN 0370-2693, 12/2018, Volume 787, pp. 124 - 133 A search for CP violation in Λ →pK and Λ →pπ decays is presented using a sample of pp collisions collected with the LHCb detector and corresponding to an... Journal Article

Physical Review Letters, ISSN 0031-9007, 11/2018, Volume 121, Issue 22, p. 222001 The cross section for prompt antiproton production in collisions of protons with an energy of 6.5 TeV incident on helium nuclei at rest is measured with the... Antiparticles | Large Hadron Collider | Collisions | Luminosity | Antiprotons | Nuclei (nuclear physics) | Helium | Cross sections | Cosmic rays | Física de partícules | Experiments | Particle physics Journal Article
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21) We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

Anomalous evolution of the near-side jet peak shape in Pb-Pb collisions at $\sqrt{s_{\mathrm{NN}}}$ = 2.76 TeV (American Physical Society, 2017-09-08) The measurement of two-particle angular correlations is a powerful tool to study jet quenching in a $p_{\mathrm{T}}$ region inaccessible by direct jet identification. In these measurements pseudorapidity ($\Delta\eta$) and ...

Online data compression in the ALICE O$^2$ facility (IOP, 2017) The ALICE Collaboration and the ALICE O2 project have carried out detailed studies for a new online computing facility planned to be deployed for Run 3 of the Large Hadron Collider (LHC) at CERN. Some of the main aspects ...

Evolution of the longitudinal and azimuthal structure of the near-side peak in Pb–Pb collisions at $\sqrt{s_{\rm NN}}=2.76$ TeV (American Physical Society, 2017-09-08) In two-particle angular correlation measurements, jets give rise to a near-side peak, formed by particles associated to a higher $p_{\mathrm{T}}$ trigger particle. Measurements of these correlations as a function of ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15) We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...

Enhanced production of multi-strange hadrons in high-multiplicity proton-proton collisions (Nature Publishing Group, 2017) At sufficiently high temperature and energy density, nuclear matter undergoes a transition to a phase in which quarks and gluons are not confined: the quark–gluon plasma (QGP)1. Such an exotic state of strongly interacting ...

K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...

Production of $\Sigma(1385)^{\pm}$ and $\Xi(1530)^{0}$ in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Springer, 2017-06) The transverse momentum distributions of the strange and double-strange hyperon resonances ($\Sigma(1385)^{\pm}$, $\Xi(1530)^{0}$) produced in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV were measured in the rapidity ...

Charged–particle multiplicities in proton–proton collisions at $\sqrt{s}=$ 0.9 to 8 TeV, with ALICE at the LHC (Springer, 2017-01) The ALICE Collaboration has carried out a detailed study of pseudorapidity densities and multiplicity distributions of primary charged particles produced in proton-proton collisions, at $\sqrt{s} =$ 0.9, 2.36, 2.76, 7 and ...
Energy dependence of forward-rapidity J/$\psi$ and $\psi(2S)$ production in pp collisions at the LHC (Springer, 2017-06) We present ALICE results on transverse momentum ($p_{\rm T}$) and rapidity ($y$) differential production cross sections, mean transverse momentum and mean transverse momentum square of inclusive J/$\psi$ and $\psi(2S)$ at ...
If you know the gradient and Hessian of the log-likelihood, you can write quick functions in R similar to the one you need for the LL itself. If you pass the gradient, you can use (L)BFGS in R as opposed to Nelder-Mead, which should converge a bit faster. Regardless, once you have the point of convergence, you can plug the values at the point of convergence into the function for the Hessian, and the sqrt of the diagonals of its inverse is your estimated error. Here is an example using the Pareto distribution, for which: $$f(x) = \frac{\alpha\theta^{\alpha}}{(x+\theta)^{\alpha+1}}$$

LL <- function(pars, X){
  a <- pars[[1]]
  q <- pars[[2]]
  return(-sum(a * log(q) + log(a) - (a + 1) * log(X + q)))
}

LLG <- function(pars, X){
  a <- pars[[1]]
  q <- pars[[2]]
  ga <- -sum(log(q) + 1 / a - log(X + q))
  gq <- -sum(a / q - (a + 1) / (X + q))
  Z <- c(ga, gq)
  names(Z) <- c('a', 'q')
  return(Z)
}

LLH <- function(pars, X){
  a <- pars[[1]]
  q <- pars[[2]]
  n <- length(X)
  haa <- n / a ^ 2
  hqq <- n * a / q ^ 2 - sum((a + 1) / (X + q) ^ 2)
  haq <- hqa <- sum(1 / (X + q)) - n / q
  Z <- matrix(c(haa, hqa, haq, hqq), ncol = 2)
  rownames(Z) <- colnames(Z) <- c('a', 'q')
  return(Z)
}

I tend to use nloptr for line-search optimization; the call for `optim` would be similar. So assuming your data is stored as DATA:

library(nloptr)
Fit <- nloptr(x0 = c(2, 1e6), eval_f = LL, eval_grad_f = LLG, lb = c(0, 0), X = DATA,
              opts = list(algorithm = "NLOPT_LD_LBFGS", maxeval = 1e5))

Your values are in Fit$solution. The observed Fisher information estimate is the Hessian of the negative log-likelihood at the optimum (not the negative Hessian, since we are minimizing the NLL rather than maximizing the LL), so the estimated covariance matrix is its inverse, and the standard errors can be calculated using:

sqrt(diag(solve(LLH(Fit$solution, DATA))))

and their correlation would be the off-diagonal in:

cov2cor(solve(LLH(Fit$solution, DATA)))
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...

Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...

Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...

Measurement of electrons from heavy-flavour hadron decays in p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2016-03) The production of electrons from heavy-flavour hadron decays was measured as a function of transverse momentum ($p_{\rm T}$) in minimum-bias p–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with ALICE at the LHC for $0.5 ...

Direct photon production in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2016-03) Direct photon production at mid-rapidity in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV was studied in the transverse momentum range $0.9 < p_{\rm T} < 14$ GeV/$c$. Photons were detected via conversions in the ALICE ...

Multi-strange baryon production in p-Pb collisions at $\sqrt{s_\mathbf{NN}}=5.02$ TeV (Elsevier, 2016-07) The multi-strange baryon yields in Pb--Pb collisions have been shown to exhibit an enhancement relative to pp reactions. In this work, $\Xi$ and $\Omega$ production rates have been measured with the ALICE experiment as a ...

$^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ production in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Elsevier, 2016-03) The production of the hypertriton nuclei $^{3}_{\Lambda}\mathrm H$ and $^{3}_{\bar{\Lambda}} \overline{\mathrm H}$ has been measured for the first time in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV with the ALICE ...

Multiplicity dependence of charged pion, kaon, and (anti)proton production at large transverse momentum in p-Pb collisions at $\sqrt{s_{\rm NN}}$= 5.02 TeV (Elsevier, 2016-09) The production of charged pions, kaons and (anti)protons has been measured at mid-rapidity ($-0.5 < y < 0$) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV using the ALICE detector at the LHC. Exploiting particle ...

Jet-like correlations with neutral pion triggers in pp and central Pb–Pb collisions at 2.76 TeV (Elsevier, 2016-12) We present measurements of two-particle correlations with neutral pion trigger particles of transverse momenta $8 < p_{\mathrm{T}}^{\rm trig} < 16 \mathrm{GeV}/c$ and associated charged particles of $0.5 < p_{\mathrm{T}}^{\rm ...

Centrality dependence of charged jet production in p-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 5.02 TeV (Springer, 2016-05) Measurements of charged jet production as a function of centrality are presented for p-Pb collisions recorded at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector.
Centrality classes are determined via the energy ...
Wind-Driven Gyres: Quasi-Geostrophic Limit

Contributed by Christine Kaufhold and Francis Poulin.

Building on the previous two demos that used the Quasi-Geostrophic (QG) model for the time-stepping and eigenvalue problem, we now consider how to determine a wind-driven gyre solution that includes bottom drag and nonlinear advection. This is referred to as the Nonlinear Stommel Problem. This is a classical problem going back to [Sto48]. Even though it is far too simple to describe the dynamics of the real oceans quantitatively, it does explain qualitatively why we have western intensification in the world's gyres. The curl of the wind stress adds vorticity into the gyres and the latitudinal variation in the Coriolis parameter causes a weak equatorward flow away from the boundaries (Sverdrup flow). It is because of the dissipation that arises near the boundaries that we must have western intensification. This was first shown by [Sto48] using simple bottom drag, but it was only years later, after [Mun50] did a similar calculation using lateral viscosity, that people took the idea seriously. After three quarters of a century we are still unable to parametrise the dissipative effects of the small scales, so it is very difficult to get good quantitative predictions as to the mean structure of the gyre that is generated. However, this demo aims to compute the structure of the oceanic gyre given particular parameters. The interested reader can read more about this in [Ped92] and [Val06]. In this tutorial we will consider the nonlinear Stommel problem.

Governing PDE: Stommel Problem

The nonlinear, one-layer QG model equation is driven by the winds above (say \(Q_{\textrm{winds}}\), the vorticity of the winds that drive the ocean from above). The Potential Vorticity (PV) and the geostrophic velocities are defined in terms of the stream-function \(\psi\); here \(\vec{u}=(u, v)\) is the velocity field, \(q\) is the PV, \(\beta\) is the latitudinal gradient of the Coriolis parameter, and \(F\) is the rotational Froude number. The non-conservative aspects of this model occur because of \(r\), the strength of the bottom drag, and \(Q_{\textrm{winds}}\), the vorticity of the winds. We pick the wind forcing \(Q_{\textrm{winds}} = -\tau \cos\left(\pi\left(\frac{y}{L_y} - \frac{1}{2}\right)\right)\) so as to generate a single gyre, where \(L_y\) is the length of our domain and \(\tau\) is the strength of our wind forcing. By putting a \(2\) in front of the \(\pi\) we get a double gyre [Val06]. If we only look for steady solutions in time, we can ignore the time derivative term. We can then write the model out as one equation, which is the nonlinear Stommel problem. Note that we dropped the \(-F \psi\) term in the nonlinear advection because the streamfunction does not change following the flow, and therefore we can neglect that term entirely.

Weak Formulation

To build the weak form of the problem in Firedrake we must find the weak form of this equation. We begin by multiplying the equation by a test function, \(\phi\), which is in the same space as the streamfunction, and then integrating over the domain \(\Omega\). The nonlinear term can be rewritten using the fact that the velocity is divergence free and then integrating by parts. Note that because we have no-normal-flow boundary conditions, the boundary contribution is zero. For the term with bottom drag we integrate by parts and use the fact that the streamfunction is zero on the walls: the boundary integral vanishes because we are setting the streamfunction to be zero on the boundary.
Finally we can put the equation back together again to produce the weak form of our problem. The above problem is the weak form of the nonlinear Stommel problem. The linear problem arises from neglecting the nonlinear advection, and can easily be obtained by dropping the first term on the left-hand side. Defining the Problem¶ Now that we know the weak form, we are ready to solve this using Firedrake! First, we import the Firedrake, PETSc, NumPy and UFL packages,

from firedrake import *
from firedrake.petsc import PETSc
import numpy as np
import ufl

Next, we can define the geometry of our domain. In this example, we will be using a square of length one with 50 cells in each direction.

n0 = 50   # Spatial resolution
Ly = 1.0  # Meridional length
Lx = 1.0  # Zonal length
mesh = RectangleMesh(n0, n0, Lx, Ly, reorder = None)

We can then define the Function Space within which the solution of the streamfunction will reside.

Vcg = FunctionSpace(mesh, 'CG', 3)  # CG elements for Streamfunction

We will also impose no-normal flow strongly to ensure that the boundary condition \(\psi = 0\) will be met,

bc = DirichletBC(Vcg, 0.0, 'on_boundary')

Now we will define all the parameters we are using in this tutorial.

beta = Constant('1.0')   # Beta parameter
F = Constant('1.0')      # Burger number
r = Constant('0.2')      # Bottom drag
tau = Constant('0.001')  # Wind Forcing
x = SpatialCoordinate(mesh)
Qwinds = Function(Vcg).interpolate(-tau * cos(pi * (x[1]/Ly - 0.5)))

We can now define the Test Function and the Trial Function of this problem; both must be in the same function space:

phi, psi = TestFunction(Vcg), TrialFunction(Vcg)

We must define functions that will store our linear and nonlinear solutions. In order to solve the nonlinear problem, we use the linear solution as an initial guess for the nonlinear problem.

psi_lin = Function(Vcg, name='Linear Streamfunction')
psi_non = Function(Vcg, name='Nonlinear Streamfunction')

We can finally write down the linear Stommel equation in its weak form. We will use the solution to this as the input for the nonlinear Stommel equation.

a = - r * inner(grad(psi), grad(phi)) * dx - F * psi * phi * dx + beta * psi.dx(0) * phi * dx
L = Qwinds * phi * dx

We set up an elliptic solver for this problem, and solve for the linear streamfunction,

linear_problem = LinearVariationalProblem(a, L, psi_lin, bcs=bc)
linear_solver = LinearVariationalSolver(linear_problem, solver_parameters={'ksp_type': 'preonly', 'pc_type': 'lu'})
linear_solver.solve()

We will employ the solution to the linear problem as the initial guess for the nonlinear one:

psi_non.assign(psi_lin)

And now we can define the weak form of the nonlinear problem. Note that the problem is stated in residual form so there is no trial function.

G = - inner(grad(phi), perp(grad(psi_non))) * div(grad(psi_non)) * dx \
    - r * inner(grad(psi_non), grad(phi)) * dx - F * psi_non * phi * dx \
    + beta * psi_non.dx(0) * phi * dx \
    - Qwinds * phi * dx

We now solve for the nonlinear streamfunction by setting up another elliptic solver,

nonlinear_problem = NonlinearVariationalProblem(G, psi_non, bcs=bc)
nonlinear_solver = NonlinearVariationalSolver(nonlinear_problem, solver_parameters={'snes_type': 'newtonls', 'ksp_type': 'preonly', 'pc_type': 'lu'})
nonlinear_solver.solve()

Now that we have the full solution to the nonlinear Stommel problem, we can plot it,

try:
    import matplotlib.pyplot as plt
except:
    warning("Matplotlib not imported")
try:
    plot(psi_non)
except Exception as e:
    warning("Cannot plot figure. Error msg '%s'" % e)
try:
    plt.show()
except Exception as e:
    warning("Cannot show figure. Error msg '%s'" % e)
file = File('Nonlinear Streamfunction.pvd')
file.write(psi_non)

We can also see the difference between the linear solution and the nonlinear solution. We do this by assembling the difference of the two fields. (Note: other approaches may be possible.)

tf, difference = TestFunction(Vcg), TrialFunction(Vcg)
difference = assemble(psi_lin - psi_non)
try:
    plot(difference)
except Exception as e:
    warning("Cannot plot figure. Error msg '%s'" % e)
try:
    plt.show()
except Exception as e:
    warning("Cannot show figure. Error msg '%s'" % e)
file = File('Difference between Linear and Nonlinear Streamfunction.pvd')
file.write(difference)

Below is a plot of the linear solution to the QG wind-driven Stommel gyre. Below is a plot of the difference between the linear and nonlinear solutions to the QG wind-driven Stommel gyre. This demo can be found as a Python script in qg_winddrivengyre.py.

References

Mun50 Walter H. Munk. On the wind-driven ocean circulation. Journal of Meteorology, 7:79–93, 1950. doi:10.1175/1520-0469(1950)007<0080:OTWDOC>2.0.CO;2.

Ped92 Joseph Pedlosky. Geophysical Fluid Dynamics. Springer study edition. Springer New York, 1992. ISBN 9780387963877.

Sto48(1,2) Henry Stommel. The westward intensification of wind driven ocean currents. Trans. Am. Geophys. Union, 29:202–206, 1948.

Val06(1,2) Geoffrey K. Vallis. Atmospheric and Oceanic Fluid Dynamics. Cambridge University Press, Cambridge, U.K., 2006.
The number of Dyck paths in a square is well-known to equal the Catalan numbers: http://mathworld.wolfram.com/DyckPath.html But what if, instead of a square, we ask the same question with a rectangle? If one of its sides is a multiple of the other, then again there is a nice formula for the number of paths below the diagonal, but is there a nice formula in general? What is the number of paths from the lower-left corner of a rectangle with side lengths a and b to its upper-right corner staying below the diagonal (except for its endpoint)? I am also interested in asymptotics. Since then, Mirko Visontai has told me that the answer is ${a+b\choose a}/(a+b)$ if $\gcd(a,b)=1$. The proof is the following (with k=a and l=b): The number of 0--1 vectors with $k$ 0's and $l$ 1's is ${k+l\choose k}$, so we have to prove that out of these vectors exactly a $1/(k+l)$ fraction is an element of $L(k,l)$. The set of all vectors can be partitioned into equivalence classes. Two vectors $p$ and $q$ are equivalent if there is a cyclic shift that maps one into the other, i.e., if for some $j$, $p_i = q_{i+j}$ for all $i$. We will prove that exactly one element from each equivalence class will be in $L(k,l)$. This proves the statement as each class consists of $k+l$ elements because $\gcd(k,k+l)=1$. We can view each 0--1 sequence as a walk on $\mathbb R$ where each 0 is a $-l/(k+l)$ step and each 1 is a $+k/(k+l)$ step. Each $(k,l)$ walk starts and ends at zero and each walk reaches its maximum height exactly once, otherwise $ak + bl = 0$ for some $0 < a+b < k+l$ which would imply $\gcd(k,l) \neq 1$. If we take the cyclic shift that "starts from the top", we stay in the negative region throughout the walk, which corresponds to remaining under the diagonal in the lattice path case. Any other cyclic shift goes above zero, which corresponds to going above the diagonal at some point. I heard a talk at Indiana University last March by Timothy Chow. Here's his abstract, which seems to give a negative answer to your question about rectangles whose sides have non-integer ratio: It is a classical result that if k is a positive integer, then the number of lattice paths from (0,0) to (a+1,b) taking unit north or east steps that avoid touching or crossing the line x = ky is (a+b choose b) - k (a+b choose b-1). Disappointingly, no such simple formula is known if k is rational but not an integer (although there does exist a determinant formula). We show that if we replace the straight-line boundary with a periodic staircase boundary, and if we choose our starting and ending points carefully, then the natural generalization of the above simple formula holds. By varying the boundary slightly we obtain other cases with simple formulas, but it remains somewhat mysterious exactly when a simple formula can be expected. Time permitting, we will also describe some recent related work by Irving and Rattan that provides an alternative proof of some of our results. This is joint work with Chapman, Khetan, Moulton, and Waters. Is a sum OK? I am used to a different rotation of the paths. I think the paths you are looking for can also be described as all paths above the x-axis, with steps (1,1) and (1,-1), that start at (0,0) and end on the line x=y+n for some (x,y) from (n,0) to (n+m,m). (If instead they end at the line x=n, we get the Ballot paths.) Let B(n,k) be the Ballot numbers, B(n,k)= # paths from (0,0) to (n,k). Now, all paths must pass the line x=n.
From there on it is just a binomial path, so the number of paths is $\sum_{k=0,2,4,\ldots,n} B(k,n)\binom{(n-m-k)/2}{k/2}$, where $\binom{n}{k}$ denotes the binomial coefficient $n!/(k!(n-k)!)$. If I understood your question correctly, the numbers you're looking for are called Ballot numbers. The number of paths from $(0,0)$ to $(m,n)$ (where $m>n$) which stay below the diagonal is $\frac{m-n}{m+n}\binom{m+n}{m}$. Moreover, if $m>r \cdot n$, then the number of lattice paths from $(0,0)$ to $(m,n)$ which stay below the line $x=r\cdot y$ is $\frac{m-rn}{m+n}\binom{m+n}{m}$. (I haven't worked this out, but Ira Gessel says so in Introduction to Lattice Path Enumeration.)
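The $\binom{a+b}{a}/(a+b)$ claim for coprime sides is easy to sanity-check numerically. The following sketch is my own illustration (the function name is mine, not from any of the answers above): it brute-forces the number of monotone lattice paths from $(0,0)$ to $(a,b)$ whose interior points lie strictly below the diagonal and compares the count with $\frac{1}{a+b}\binom{a+b}{a}$ for small coprime $a,b$.

from math import comb, gcd

def paths_below_diagonal(a, b):
    # Count monotone lattice paths from (0,0) to (a,b) (unit steps right/up)
    # whose interior points lie strictly below the diagonal, i.e. a*y < b*x.
    def count(x, y):
        if (x, y) == (a, b):
            return 1
        total = 0
        for nx, ny in ((x + 1, y), (x, y + 1)):
            if nx > a or ny > b:
                continue
            if (nx, ny) != (a, b) and ny * a >= nx * b:
                continue  # interior point on or above the diagonal
            total += count(nx, ny)
        return total
    return count(0, 0)

for a in range(2, 7):
    for b in range(1, a):
        if gcd(a, b) == 1:
            print((a, b), paths_below_diagonal(a, b), comb(a + b, a) // (a + b))

For every coprime pair the two printed numbers agree, consistent with the cyclic-shift argument above.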
Bernoulli, Volume 25, Number 4A (2019), 2793-2823. Self-normalized Cramér type moderate deviations for martingales Abstract Let $(X_{i},\mathcal{F}_{i})_{i\geq 1}$ be a sequence of martingale differences. Set $S_{n}=\sum_{i=1}^{n}X_{i}$ and $[S]_{n}=\sum_{i=1}^{n}X_{i}^{2}$. We prove a Cramér type moderate deviation expansion for $\mathbf{P}(S_{n}/\sqrt{[S]_{n}}\geq x)$ as $n\to +\infty $. Our results partly extend the earlier work of Jing, Shao and Wang (Ann. Probab. 31 (2003) 2167–2215) for independent random variables. Article information Source Bernoulli, Volume 25, Number 4A (2019), 2793-2823. Dates Received: February 2018 Revised: June 2018 First available in Project Euclid: 13 September 2019 Permanent link to this document https://projecteuclid.org/euclid.bj/1568362043 Digital Object Identifier doi:10.3150/18-BEJ1071 Citation Fan, Xiequan; Grama, Ion; Liu, Quansheng; Shao, Qi-Man. Self-normalized Cramér type moderate deviations for martingales. Bernoulli 25 (2019), no. 4A, 2793--2823. doi:10.3150/18-BEJ1071. https://projecteuclid.org/euclid.bj/1568362043 Supplemental materials Supplement to “Self-normalized Cramér type moderate deviations for martingales”. The supplement gives the detailed proofs of Propositions 3.1 and 3.2.
In LCAO, it is the set of atomic orbitals (AOs) that is the basis, and the coefficients are the basis expansion coefficients. For example, take the hydrogen molecule, with 1 atomic orbital on each atom: The lower energy MO, the bonding one, will be (excluding normalization): $$\psi_{\mathrm{1s}} = \phi_{\text{left}} + \phi_{\text{right}}$$ and the higher energy MO, the antibonding one, will be (excluding normalization): $$\psi_{\mathrm{1s}^{*}} = \phi_{\text{left}} - \phi_{\text{right}}$$ This means that for the bonding MO, $c_{\text{left}} = 1$ and $c_{\text{right}} = 1$, while for the antibonding MO, $c_{\text{left}} = 1$ and $c_{\text{right}} = -1$. If your basis was different (say, the atomic orbitals were of different radial extents for the two atoms), then the coefficients would be different to accommodate that. They aren't part of the basis set. Since you tagged this with computational-chemistry, I assume you also want to know about basis sets in computational chemistry. Conceptually, they are identical to LCAO-MO bases, but what may be confusing is that each atomic orbital itself may be composed of multiple functions (called primitive functions), rather than just the single function for each AO as seen above. This leads to another set of coefficients, called contraction coefficients, describing how the primitive functions are linearly combined to form a contracted function. An "implementation" detail is that the functional form of AOs is usually Gaussian functions, which have a parameter in the exponent (here, $\alpha$): $$\phi_{r}(x) = e^{-\alpha x^2}$$ This means that to fully define an atomic orbital basis composed of contracted Gaussian-type orbitals (CGTOs), one needs both the contraction coefficients and exponents. They usually look like the following. For each angular momentum type, there may be one or more contracted functions, each composed of one or more primitives. For example, in the hydrogen definition below, there are 2 functions to describe s orbitals, one of which has three primitives and the other is not contracted. The exponents are in the first column and the contraction coefficients in the second column.

****
H 0
S 3 1.00
     13.0107010      0.19682158E-01
      1.9622572      0.13796524
      0.44453796     0.47831935
S 1 1.00
      0.12194962     1.0000000
****
C 0
S 5 1.00
   1238.4016938      0.54568832082E-02
    186.29004992     0.40638409211E-01
     42.251176346    0.18025593888
     11.676557932    0.46315121755
      3.5930506482   0.44087173314
S 1 1.00
      0.40245147363  1.0000000
S 1 1.00
      0.13090182668  1.0000000
P 3 1.00
      9.4680970621   0.38387871728E-01
      2.0103545142   0.21117025112
      0.54771004707  0.51328172114
P 1 1.00
      0.15268613795  1.0000000
D 1 1.00
      0.8000000      1.0000000
****

The last note is that when performing a Hartree-Fock or similar calculation, the contraction coefficients and the exponents don't change; if they did change, that would mean the basis set itself was changing. The basis set is fixed; it is the MO coefficients that change, the same coefficients that appear in the LCAO-MO equations. Image taken from here. Basis set is Def2-SV(P).
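To make the contraction idea concrete, here is a minimal sketch (my own illustration, not tied to any particular quantum-chemistry package) that evaluates a contracted s-type Gaussian built from the exponent/coefficient pairs of the first hydrogen S block above, applying the usual s-type primitive normalization $(2\alpha/\pi)^{3/4}$:

import numpy as np

# Exponents and contraction coefficients of the first hydrogen S block above.
exponents = np.array([13.0107010, 1.9622572, 0.44453796])
coefficients = np.array([0.019682158, 0.13796524, 0.47831935])

def contracted_s(r, alphas, coeffs):
    """Value of a contracted s-type GTO at radius r (atomic units).
    Each primitive carries the s-type normalization (2*alpha/pi)**(3/4)."""
    norms = (2.0 * alphas / np.pi) ** 0.75
    return np.sum(coeffs * norms * np.exp(-alphas * r ** 2))

for r in (0.0, 0.5, 1.0, 2.0):
    print(r, contracted_s(r, exponents, coefficients))

The three primitives are combined into a single contracted function whose shape never changes during the SCF; only the MO coefficients multiplying such contracted functions are optimized.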
Quasi-Geostrophic Model¶ The Quasi-Geostrophic (QG) model is very important in geophysical fluid dynamics as it describes some aspects of large-scale flows in the oceans and atmosphere very well. The interested reader can find derivations in [Ped92] and [Val06]. In these notes we present the nonlinear equations for the one-layer QG model with a free-surface. Then, the weak form will be derived as is needed for Firedrake. Governing Equations¶ The Quasi-Geostrophic (QG) model is very similar to the 2D vorticity equation. Since the leading order geostrophic velocity is incompressible in the horizontal, the governing equations can be written as where \(\psi\) and \(q\) are the streamfunction and Potential Vorticity (PV). The Laplacian is 2D since we are only in the horizontal plane and we defined The first equation above states that the PV is conserved following the flow. The second equation forces the leading order velocity to be geostrophic and the third equation is the definition for the QG PV for this barotropic model. To solve this using Finite Elements it is necessary to establish the weak form of the model, which is done in the next subsection. Weak Form¶ Evolving the nonlinear equations consists of two steps. First, the elliptic problem must be solved to compute the streamfunction given the PV. Second, the PV equation must be integrated forward in time. This is done using a strong stability preserving Runge Kutta 3 (SSPRK3) method. Elliptic Equation¶ First, we focus on the elliptic inversion in the case of a flat bottom. If we compute the inner product of the equation with the test function \(\phi\) we obtain, where in the second equation we used the divergence theorem and the homogeneous Dirichlet boundary conditions on the test function. Evolution Equation¶ The SSPRK3 method used as explained in [Got05] can be written as To get the weak form we need to introduce a test function, \(p\), and take the inner product of the first equation with \(p\). The first and second terms on the left hand side are referred to as \(a_{mass}\) and \(a_{int}\) in the code. The first term on the right-hand side is referred to as \(a_{mass}\) in the code. The second term on the right-hand side is the extra term due to the DG framework, which does not exist in the CG version of the problem and is referred to as \(a_{flux}\). The above problem must be solved for \(q^{(1)}\) and then \(q^{(2)}\), and then these are used to compute the numerical approximation to the PV at the new time \(q^{n+1}\). We now move on to the implementation of the QG model for the case of a freely propagating Rossby wave. As ever, we begin by importing the Firedrake library.

from firedrake import *

Next we define the domain we will solve the equations on: a square domain with 50 cells in each direction that is periodic along the x-axis.

Lx = 2.*pi  # Zonal length
Ly = 2.*pi  # Meridional length
n0 = 50     # Spatial resolution
mesh = PeriodicRectangleMesh(n0, n0, Lx, Ly, direction="x", quadrilateral=True)

We define function spaces:

Vdg = FunctionSpace(mesh,"DG",1)        # DG elements for Potential Vorticity (PV)
Vcg = FunctionSpace(mesh,"CG",1)        # CG elements for Streamfunction
Vu = VectorFunctionSpace(mesh,"DG",1)   # DG elements for velocity

and initial conditions for the potential vorticity; here we use Firedrake’s ability to interpolate UFL expressions.
x = SpatialCoordinate(mesh)
q0 = Function(Vdg).interpolate(0.1*sin(x[0])*sin(x[1]))

We define some Functions to store the fields:

dq1 = Function(Vdg)   # PV fields for different time steps
qh = Function(Vdg)
q1 = Function(Vdg)
psi0 = Function(Vcg)  # Streamfunctions for different time steps
psi1 = Function(Vcg)

along with the physical parameters of the model.

F = Constant(1.0)     # Rotational Froude number
beta = Constant(0.1)  # beta plane coefficient
Dt = 0.1              # Time step
dt = Constant(Dt)

Next, we define the variational problems. First the elliptic problem for the stream function.

psi = TrialFunction(Vcg)
phi = TestFunction(Vcg)
# Build the weak form for the inversion
Apsi = (inner(grad(psi),grad(phi)) + F*psi*phi)*dx
Lpsi = -q1*phi*dx

We impose homogeneous Dirichlet boundary conditions on the stream function at the top and bottom of the domain.

bc1 = DirichletBC(Vcg, 0., (1, 2))
psi_problem = LinearVariationalProblem(Apsi,Lpsi,psi0,bcs=bc1)
psi_solver = LinearVariationalSolver(psi_problem, solver_parameters={'ksp_type':'cg', 'pc_type':'sor'})

Next we’ll set up the advection equation, for which we need an operator \(\vec\nabla^\perp\), defined as a Python anonymous function:

gradperp = lambda u: as_vector((-u.dx(1), u.dx(0)))

For upwinding, we’ll need a representation of the normal to a facet, and a way of selecting the upwind side:

n = FacetNormal(mesh)
un = 0.5*(dot(gradperp(psi0), n) + abs(dot(gradperp(psi0), n)))

Now the variational problem for the advection equation itself.

q = TrialFunction(Vdg)
p = TestFunction(Vdg)
a_mass = p*q*dx
a_int = (dot(grad(p), -gradperp(psi0)*q) + beta*p*psi0.dx(0))*dx
a_flux = (dot(jump(p), un('+')*q('+') - un('-')*q('-')))*dS
arhs = a_mass - dt*(a_int + a_flux)
q_problem = LinearVariationalProblem(a_mass, action(arhs,q1), dq1)

Since the operator is a mass matrix in a discontinuous space, it can be inverted exactly using an incomplete LU factorisation with zero fill.

q_solver = LinearVariationalSolver(q_problem, solver_parameters={'ksp_type':'preonly', 'pc_type':'bjacobi', 'sub_pc_type': 'ilu'})

q0.rename("Potential vorticity")
psi0.rename("Stream function")
v = Function(Vu, name="gradperp(stream function)")
v.project(gradperp(psi0))
output = File("output.pvd")
output.write(q0, psi0, v)

Now all that is left is to define the timestepping parameters and execute the time loop.

t = 0.
T = 10.
dumpfreq = 5
tdump = 0
v0 = Function(Vu)
while(t < (T-Dt/2)):
    # Compute the streamfunction for the known value of q0
    q1.assign(q0)
    psi_solver.solve()
    q_solver.solve()
    # Find intermediate solution q^(1)
    q1.assign(dq1)
    psi_solver.solve()
    q_solver.solve()
    # Find intermediate solution q^(2)
    q1.assign(0.75*q0 + 0.25*dq1)
    psi_solver.solve()
    q_solver.solve()
    # Find new solution q^(n+1)
    q0.assign(q0/3 + 2*dq1/3)
    # Store solutions to xml and pvd
    t += Dt
    print(t)
    tdump += 1
    if tdump == dumpfreq:
        tdump -= dumpfreq
        v.project(gradperp(psi0))
        output.write(q0, psi0, v, time=t)

A Python script version of this demo can be found here.

References

Got05 Sigal Gottlieb. On high order strong stability preserving Runge–Kutta and multi step time discretizations. Journal of Scientific Computing, 25(1):105–128, 2005. doi:10.1007/s10915-004-4635-5.

Ped92 Joseph Pedlosky. Geophysical Fluid Dynamics. Springer study edition. Springer New York, 1992. ISBN 9780387963877.

Val06 Geoffrey K. Vallis. Atmospheric and Oceanic Fluid Dynamics. Cambridge University Press, Cambridge, U.K., 2006.
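For reference, the two display equations referred to in the text can be read back from the code above (this is a reconstruction from the code, not a quotation of the original notes). The elliptic inversion implemented by psi_solver corresponds to the weak problem

$$\int_\Omega \left(\nabla\psi\cdot\nabla\phi + F\,\psi\,\phi\right)\,dx = -\int_\Omega q\,\phi\,dx \quad\text{for all } \phi,$$

and the three stages in the time loop are the standard Shu–Osher form of SSPRK3,

$$q^{(1)} = q^n + \Delta t\,L(q^n),\qquad q^{(2)} = \tfrac{3}{4}q^n + \tfrac{1}{4}\left(q^{(1)} + \Delta t\,L(q^{(1)})\right),\qquad q^{n+1} = \tfrac{1}{3}q^n + \tfrac{2}{3}\left(q^{(2)} + \Delta t\,L(q^{(2)})\right),$$

where \(L\) denotes the discrete advection operator; this is consistent with the assignments q1.assign(0.75*q0 + 0.25*dq1) and q0.assign(q0/3 + 2*dq1/3) in the loop.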
Preamble for clarity: The many worlds interpretation is usually used to explain the measurement of a 2 level system ($|0\rangle$ or $|1\rangle$) as: $$\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|\text{device ready}\rangle|\text{env}_a\rangle\to\frac{1}{\sqrt{2}}(|0\rangle|\text{device says 0}\rangle|\text{env}_b\rangle+|1\rangle|\text{device says 1}\rangle|\text{env}_c\rangle)$$ where $|\text{env}_a\rangle$, $|\text{env}_b\rangle$ and $|\text{env}_c\rangle$ are orthogonal (or nearly orthogonal states of the greater environment). The universe is then said to have essentially split into 2 "worlds", one in which the spin is in state $0$ and the device says it is in state $0$ and the other where the spin is in state $1$ and the device says it is in state $1$. My question: This picture works for an interaction with a 2 level system but it seems to me that in general one is making an arbitrary discretisation (or coarse-graining of the wavefunction). How does one describe the same process for the measurement of a continuous variable, say of the location of a particle? Secondary question: Also, there seems to be an additional difficulty (or maybe its actually the same one in disguise) in that, in reality, we should really say $$\frac{1}{\sqrt{2}}(|0\rangle+|1\rangle)|\text{device ready}\rangle|\text{env}_a\rangle\to\\\int\int(c^{(0)}_{\theta,\theta'}|0\rangle|\text{device 0}_\theta\rangle|\text{env}_{(b,\theta')}\rangle+c^{(1)}_{\theta,\theta'}|1\rangle|\text{device 1}_\theta\rangle|\text{env}_{(c,\theta')}\rangle)\mathrm{d}\theta\mathrm{d}\theta'$$ where $\theta$ is a variable which we use to enumerate the compatible device and environment states. This illustrates that in reality the device and environment will also become entangled. Now the issue seems to be that even some of these states will actually be completely decohered from each other, and may have other observables be incompatible on a macroscopic scale. Hence it seems that we have not 2 "worlds" but 2 sets of "worlds", which it seems may even be continuously connected! (i.e. connected in the sense that for large separation in $\theta$ they correspond to macroscopically distinct worlds but for small separation, they are still coherently connected). More explicitly I mean that if we project into the measured $0$ "world" in the simplified (standard) example and consider the reduced density matrix for the device we will get $$\hat{\rho}^{(0)}_{\text{device}} = |\text{device says 0}\rangle\langle \text{device says 0}|$$ i.e. the device is in a pure state of having measured $0$ given that the spin is in state $0$, and so it feels reasonable to call this a unique "world". However, in the more realistic second example, we would find that $$\hat{\rho}^{(0)}_{\text{device}} = \int\int c^{(0)}_{\theta,\theta'}|\text{device 0}_\theta\rangle\mathrm{d}\theta\mathrm{d}\theta'\int\int c^{*(0)}_{\phi,\theta'}\langle\text{device 0}_\phi|\mathrm{d}\phi $$ $$\hat{\rho}^{(0)}_{\text{device}} = \int\int \rho^{(0)}_{\theta,\phi}|\text{device 0}_\theta\rangle\langle\text{device 0}_\phi|\mathrm{d}\theta\mathrm{d}\phi $$ with $$\rho^{(0)}_{\theta,\phi} = \int\mathrm{d}\alpha c^{(0)}_{\theta,\alpha}c^{*(0)}_{\phi,\alpha}.$$ This is clearly not, in general, a pure state and so the question arises does it correspond to multiple worlds (i.e. are there parts that are totally decohered and behave separately) or is there some way to explain this away and say it is just one? Hence the question arises more generally how does one define a "world"? 
Any explanation to either question would be appreciated.
Last edited: March 15th 2018 Newton's law of gravitation states that the force between two point masses is proportional to the product of their masses and inversely proportional to the square of the distance between them. This can be written as\begin{equation} F= G\frac{m_1m_2}{r^2}, \label{eq:newton_grav} \end{equation} where $m_1$ and $m_2$ are the masses of the particles, $r$ is the distance between them and $G$ is some constant known as the gravitational constant. The force is directed along the line intersecting the point masses. It can be shown that this law holds for all spherically symmetric mass distributions, such as solid balls. The equation above can even be applied to the gravitational pull between stars and other celestial bodies with high accuracy! The current recommended value by CODATA for the gravitational constant is $(6.674 08 \pm 0.000 31)\cdot 10^{-11} \text{Nm$^2$/kg$^2$}$ [1]. This is a small number, and measuring the gravitational constant thus requires extremely precise equipment. The first direct laboratory measurement of the gravitational constant was performed in 1798 by Henry Cavendish [2]. The apparatus in the original experiment consisted of a 1.8 meter long wooden arm with two lead balls about 5 cm in diameter attached on either side. The wooden arm was suspended in a horizontal position from a 1 meter long wire. This is known as a torsion pendulum. Two large lead balls were used to exert a gravitational pull on the torsion pendulum, making it rotate. Due to the torque in the wire, the pendulum began to oscillate around its new equilibrium position. A similar experiment is performed by all first year physics students at NTNU. In this notebook we will discuss the Cavendish experiment. We will create a model that describes the oscillation of the torsion pendulum and, by using curve fitting on a set of measurements, we will estimate the period of the oscillation and its equilibrium position. This will in turn be used to estimate the gravitational constant. The theory section is to a large extent based on the laboratory manual used in the course FY1001 Mechanical Physics at NTNU (see ref. [3]). We start by briefly discussing the experimental setup (we refer to the Laboratory Manual for a more complete review of the experiment). The apparatus used at NTNU is similar to the one used by Cavendish. The experiment is in principle performed in the following way. The two large lead balls of mass $M$ are set in position 1, as shown in figure 1. The torsion pendulum will begin to oscillate around some equilibrium angle $\theta_1$. A laser beam is reflected on a mirror attached to the torsion pendulum, and hits a ruler at a position $S(t)$. The position on the ruler is recorded every 30 seconds. When the system is at rest, the lead spheres are set in position 2, making the pendulum oscillate around $\theta_2$. Figure 1. The figure to the left is a schematic diagram of the torsion pendulum used in the experiment, directed along the torsion wire. (1) Torsion wire, (2) mirror, (3) large lead balls, (4) small lead balls, (5) laser beam. The entire setup can be seen in the figure to the right. Position 1 is shown in solid lines and position 2 is shown in dashed lines. The equilibrium position in the absence of the large lead balls is along the horizontal dotted line. The figures are taken from the Laboratory Manual in the course FY1001 Mechanical Physics at NTNU (ref. [3]).
Let's import needed packages and set common figure parameters before we proceed to derive a model that describes the damped oscillation of the pendulum.

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit
%matplotlib inline
# Set some figure parameters
newparams = {'figure.figsize': (18, 9), 'axes.grid': False,
             'lines.markersize': 6, 'lines.linewidth': 2,
             'font.size': 15, 'mathtext.fontset': 'stix',
             'font.family': 'STIXGeneral'}
plt.rcParams.update(newparams)

As mentioned in the introduction, Cavendish used a torsion pendulum (also called a torsion balance) in his measurements. When the rotational angle (the torsion angle) $\theta$ is small, the torque $\vec \tau = \vec r\times\vec F$ is approximately proportional to $\theta$. That is,\begin{equation} \label{eq:torsion_pendulum} \tau_1 =-D\theta, \end{equation} for some constant $D$, called the torsion constant. This is analogous to Hooke's law for a spring. The air resistance of an object is approximately proportional to the velocity at small velocities. This relation is called Stokes' law. The torque due to air resistance is thus proportional to the angular velocity $\dot \theta$. That is,\begin{equation} \tau_2 = -b\dot\theta. \label{eq:air_resistance} \end{equation} We neglect friction in the torsion wire. Newton's second law for rotation reads,\begin{equation} \sum \tau_i = I\ddot\theta, \label{eq:N2_rotation} \end{equation} where $I$ is the moment of inertia, in our case given by $I=2mr^2$ (two spherically symmetric masses $m$ at a distance $r$ from the reference point). By combining the equations \eqref{eq:torsion_pendulum}, \eqref{eq:air_resistance} and \eqref{eq:N2_rotation}, we obtain the differential equation\begin{equation} I\ddot\theta + b\dot\theta+D\theta = 0, \label{eq:diff_eq} \end{equation} which describes the oscillation of the torsion pendulum. The general solution can in this case be written as\begin{equation} \theta(t) = \theta_0 e^{-\alpha t}\sin\left(\omega t+\phi\right), \label{eq:model_theta} \end{equation} where $\theta_0$ is the initial amplitude, $\phi$ is some phase factor, $\omega\equiv\sqrt{\omega_0^2-\alpha^2}$, $\alpha \equiv b/(2I)$ and $\omega_0 \equiv\sqrt{D/I}$. The oscillation is in our case underdamped (see e.g. [4] for more information), which means that $\omega^2 = \omega_0^2-\alpha^2>0$. This is confirmed by the measured data. In the actual experiment we do not measure $\theta$, but $S$ as a position of the laser beam on a ruler (see figure 1). The position $S$ is given by\begin{equation} S/L=\tan\theta\approx \theta \label{eq:S_approx} \end{equation} when $S\ll L$ (see exercise 1). We therefore obtain\begin{equation} S(t) = S_0 + A e^{-\alpha t}\sin\left(\omega t + \phi\right). \label{eq:model_S} \end{equation}

def osc(t, S_0, A, alpha, omega, phi):
    # Model for damped harmonic oscillations
    return S_0 + A*np.exp(-alpha*t)*np.sin(omega*t + phi)

The period of the oscillation is $T=2\pi/\omega$. The parameters $\phi$, $\omega$, $\alpha$, $A$ and $S_0$ can be estimated using curve fitting of the model on the measured data. There are several ways to perform curve fitting in Python. In this notebook we will be using optimize.curve_fit from SciPy, which uses non-linear least squares to fit a function to data. The function has three input parameters: the model function (osc), the measured $x$-data (t) and the measured $y$-data (S). In addition, we will use the optional argument p0, which is the initial guess for the parameters.
The function returns an array of the optimal values for the parameters (params) and a corresponding covariance matrix (cov). The diagonals of the covariance matrix provide the variance of the parameter estimates. The standard deviation is in turn given by the square root of the variances.

def fit(S, t, params_init=[1,1,1,1,1]):
    """ Performs curve fitting of the function osc() on the data points stored in S.
    Parameters:
        S: array-like vector, len(N). The measured displacement as a function of time.
        t: array-like vector, len(N). Time corresponding to the measurements in S.
        params_init: Initial guess for the parameters.
    Returns:
        params: Parameters which minimizes the quadratic error (best fit).
        cov: Covariance matrix for the parameters.
    """
    try:
        params, cov = curve_fit(osc, t, S, p0=params_init)
        return params, cov
    # curve_fit will return a RuntimeError if it can't estimate the parameters.
    except RuntimeError as err:
        print("Fit failed.")
        return params_init, np.zeros((len(params_init), len(params_init)))

# Read data from file with time in minutes in first column, time in
# seconds in second column and swing in third column
# Position 1
data = np.loadtxt('S1data.txt')
t1data = data[:, 0]*60 + data[:, 1]
S1data = data[:, 2]*0.001  # m
# Position 2
data = np.loadtxt('S2data.txt')
t2data = data[:, 0]*60 + data[:, 1]
S2data = data[:, 2]*0.001  # m

We are now ready to perform the curve fit!

# Initial values for fit (educated guesses)
S0_0 = 3.50e+00   # Equilibrium line
A0 = 0.3          # Amplitude, swing
Alpha0 = 0.001    # Exponential damping coefficient for the amplitude
T0 = 600          # Swing period
phi0 = 0          # Phase angle
params_init = [S0_0, A0, Alpha0, 2*np.pi/T0, phi0]
# Fit model parameters to data
params1, cov1 = fit(S1data, t1data, params_init)  # POSITION 1
err1 = np.sqrt(np.diag(cov1))
params2, cov2 = fit(S2data, t2data, params_init)  # POSITION 2
err2 = np.sqrt(np.diag(cov2))

Let's make a plot!

# Position 1
t = np.linspace(t1data[0], t1data[-1], 200)
plt.plot(t, params1[0]*np.ones(len(t)), '--', color='0.6')
plt.plot(t, osc(t, *params1), '-', color=(.5,.5,1), label='Fit position 1')
plt.plot(t1data, S1data, 'o', color=(0,0,1), label='Position 1 data')
# Position 2
t = np.linspace(t2data[0], t2data[-1], 200)
plt.plot(t, params2[0]*np.ones(len(t)), '--', color='0.6')
plt.plot(t, osc(t, *params2), '-', color=(1,.5,.5), label='Fit position 2')
plt.plot(t2data, S2data, 'o', color=(1,0,0), label='Position 2 data')
plt.xlabel('Time, (s)')
plt.ylabel('Displacement, (m)')
plt.legend(loc='best')
plt.show()

Consider for the moment only the gravitational force $F_0$ between the large lead balls and the nearest small balls (that is, neglect $f$ in figure 2). From figure 2 it is clear that the torque on the pendulum due to the gravitational force $F_0$ must be equal to the torque due to the torsion wire (equation \eqref{eq:torsion_pendulum}) at equilibrium (when the system is at rest). That is $2F_0r=D\theta_1=D\theta_2$. By making use of equation \eqref{eq:S_approx}, we obtain\begin{equation} F_0 = \frac{D}{r}\cdot\frac{\theta_1+\theta_2}{4}\approx \frac{D}{r}\cdot\frac{S}{4L}. \label{eq:F0} \end{equation} Figure 2. The large ball acts on the nearest small ball with a gravitational force $F_0$. There is also a gravitational pull $F_0'$ from the opposite large ball with a component $f$ that reduces the total torque on the pendulum. The figure is taken from the Laboratory Manual in the course FY1001 Mechanical Physics at NTNU (ref. [3]).
We assume for simplicity that $\sqrt{\omega_0^2-\alpha^2}\approx \omega_0=\sqrt{D/I}$ in equation \eqref{eq:model_theta} (see exercises 2 and 3). The torsion constant can in this case be written as\begin{equation} D = \frac{4\pi^2 I}{T^2}. \label{eq:torsion_const} \end{equation} If we insert equations \eqref{eq:F0} and \eqref{eq:torsion_const} and $I=2mr^2$ into Newton's law of Gravitation \eqref{eq:newton_grav} and solve for $G$, we obtain\begin{equation} G = \frac{F_0 b^2}{mM}=\frac{\pi^2b^2rS}{T^2LM}, \label{eq:grav_const} \end{equation} where $m$ is the mass of the small lead balls, $M$ is the mass of the large lead balls and $b$ is the distance between the masses $m$ and $M$. Note that $m$ is canceled in the final expression. The component $\vec f$ of $\vec F_0'$ parallel to $\vec F_0$ lowers the total torque on the pendulum and gives a small correction to equation \eqref{eq:grav_const}. From figure 2 it is clear that$$f = F_0'\cdot\frac{b}{r'},$$ where $r'=\sqrt{b^2+4r^2}$ is the distance between the balls. If we insert this into Newton's law of Gravitation \eqref{eq:newton_grav}, we obtain$$f = G\frac{mM}{b^2+4r^2}\cdot\frac{b}{\sqrt{b^2+4r^2}}= G\frac{mM}{b^2}\cdot\frac{b^3}{(b^2+4r^2)^{3/2}}=F_0\cdot \beta,$$ where $\beta\equiv b^3/(b^2+4r^2)^{3/2}$. The total force on each of the small balls is thus $F' = F_0-f = F_0(1-\beta)$. By comparing with equation \eqref{eq:grav_const}, we obtain a corrected expression for the gravitational constant,\begin{equation} G = \frac{1}{1-\beta}\cdot\frac{\pi^2b^2rS}{T^2LM}. \label{eq:grav_const_corr} \end{equation} $L$, $M$, $b$ and $r$ are found by measuring on the apparatus used in the experiment. The measured quantities corresponding to the data used in this notebook are defined in the following code snippet. We store these quantities as arrays of length 2, with the first position being the value and the second position being the uncertainty.

b = [0.042, 0.001 ]  # m
r = [0.050, 0.0001]  # m
L = [2.265, 0.01  ]  # m
M = [1.493, 0.002 ]  # kg

In addition, we need to extract the difference in equilibrium positions $S=|S_{01}-S_{02}|$ and the period $T$ from the curve fit performed earlier. We can extract $T_1$, $T_2$, $S_{01}$ and $S_{02}$ and their standard deviations (uncertainties) from params1, err1, params2 and err2. The period is given by $T_i=2\pi/\omega_i$, and their uncertainties are related by $\Delta T/T = \Delta \omega/\omega\Rightarrow \Delta T = \Delta\omega\cdot2\pi/\omega^2$.

S01 = (params1[0], err1[0])
S02 = (params2[0], err2[0])
print("S01 = ( %.4f ± %.5f ) m"%(S01))
print("S02 = ( %.4f ± %.5f ) m"%(S02))
T1 = (2*np.pi/params1[3], 2*np.pi/params1[3]**2*err1[3])
T2 = (2*np.pi/params2[3], 2*np.pi/params2[3]**2*err2[3])
print("T1 = ( %.2f ± %.2f ) s"%(T1))
print("T2 = ( %.2f ± %.2f ) s"%(T2))

S01 = ( 0.3587 ± 0.00025 ) m
S02 = ( 0.4471 ± 0.00020 ) m
T1 = ( 651.53 ± 3.02 ) s
T2 = ( 636.84 ± 2.88 ) s

The physical setup is the same in position 1 and position 2, and we would therefore expect that $T_1\approx T_2$. Note however that the estimated values of $T_1$ and $T_2$ differ by almost 15 seconds. This indicates that the uncertainty in the period is larger than the standard deviation in the fit. We will therefore use $$T = (T_1+T_2)/2, \qquad\Delta T = |T_2-T_1|/2,$$ as the period of the oscillations and its uncertainty. The only error estimate we have for the equilibrium positions $S_{01}$ and $S_{02}$ is the standard deviation from the fit.
We will therefore use $$S = |S_{02} - S_{01}|,\qquad \Delta S = \sqrt{\Delta S_{01}^2 + \Delta S_{02}^2}.$$ Note that we have not taken errors in the measurements into account. Better results for the uncertainties are obtained if the experiment is repeated.

S = (abs(S01[0] - S02[0]), (S01[1]**2 + S02[1]**2)**.5)
print("S = ( %.2e ± %.1e ) m"%(S))
T = ((T1[0] + T2[0])/2, abs(T1[0] - T2[0])/2)
print("T = ( %.2f ± %.2f ) s"%(T))

S = ( 8.84e-02 ± 3.2e-04 ) m
T = ( 644.19 ± 7.34 ) s

We now insert these quantities into equation \eqref{eq:grav_const_corr} in order to estimate the gravitational constant $G$.

beta = b[0]**3*(b[0]**2+4*r[0]**2)**(-3/2.)
G = 1/(1 - beta)*np.pi**2*b[0]**2*r[0]*S[0]/(T[0]**2*L[0]*M[0])
print("G = %.2e m^3/(kg s^2)"%(G))

G = 5.82e-11 m^3/(kg s^2)

This is of the same order of magnitude as the recommended value from CODATA $(6.674 08 \pm 0.000 31)\cdot 10^{-11} \text{Nm$^2$/kg$^2$}$. The gravitational constant is quite difficult to measure and the experiment is influenced by many systematic errors [5]. Moreover, the measurement was conducted during a laboratory session in a room full of people. These are not ideal conditions! Since we have acquired the uncertainty in all the quantities used in equation \eqref{eq:grav_const_corr}, we can also compute the uncertainty in $G$. This is left as an exercise for the reader (exercises 4 and 5). There are several exercises in the Laboratory Manual (ref. [3]). Check them out! [1] Mohr, Peter J., Newell, David B. & Taylor, Barry N. (2016). CODATA recommended values of the fundamental physical constants: 2014. Rev. Mod. Phys., 88, 035009. See http://www.codata.org/. [2] Cavendish, Henry. Philosophical Transactions 17, 469 (1798). Note that the goal of the original experiment was to measure the density of the Earth. However, one can express his result in terms of the gravitational constant. [3] Herland, Egil V., Sperstad, Iver B., Gjerden, Knut, et al.: Laboratorium i emne FY1001 mekanisk fysikk. NTNU 2016. URL: http://home.phys.ntnu.no/brukdef/undervisning/fy1001_lab/KompendiumMaster.pdf [4] Hyperphysics.phy-astr.gsu.edu: Damped Harmonic Oscillator [Online]. http://hyperphysics.phy-astr.gsu.edu/hbase/oscda.html [retrieved Sep. 2017] [5] Cross, William D. Systematic Error Sources in a Measurement of G using a Cryogenic Torsion Pendulum. University of California 2009. URL: http://www.physics.uci.edu/gravity/papers/WDCross%20thesis.pdf
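As a rough sketch of the uncertainty estimate that exercises 4 and 5 ask for (my own approach, not part of the notebook: it assumes the measured quantities are independent, propagates relative uncertainties in quadrature through equation \eqref{eq:grav_const_corr}, and treats the correction factor $\beta$ as exact), one could append the following to the notebook, reusing the variables b, r, L, M, S, T and G defined above:

# Relative-error propagation through G ∝ b^2 * r * S / (T^2 * L * M).
# Assumes independent errors; the beta correction is treated as exact.
rel_err_sq = ((2*b[1]/b[0])**2 + (r[1]/r[0])**2 + (S[1]/S[0])**2
              + (2*T[1]/T[0])**2 + (L[1]/L[0])**2 + (M[1]/M[0])**2)
dG = G*rel_err_sq**0.5
print("G = ( %.2e ± %.1e ) m^3/(kg s^2)"%(G, dG))

With the numbers above, the dominant contributions come from the distance $b$ and the period $T$.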
Multiplicity dependence of the average transverse momentum in pp, p-Pb, and Pb-Pb collisions at the LHC (Elsevier, 2013-12) The average transverse momentum <$p_T$> versus the charged-particle multiplicity $N_{ch}$ was measured in p-Pb collisions at a collision energy per nucleon-nucleon pair $\sqrt{s_{NN}}$ = 5.02 TeV and in pp collisions at ...
Production of light nuclei and anti-nuclei in $pp$ and Pb-Pb collisions at energies available at the CERN Large Hadron Collider (American Physical Society, 2016-02) The production of (anti-)deuteron and (anti-)$^{3}$He nuclei in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV has been studied using the ALICE detector at the LHC. The spectra exhibit a significant hardening with ...
Forward-central two-particle correlations in p-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (Elsevier, 2016-02) Two-particle angular correlations between trigger particles in the forward pseudorapidity range ($2.5 < |\eta| < 4.0$) and associated particles in the central range ($|\eta| < 1.0$) are measured with the ALICE detector in ...
K$^{*}(892)^{0}$ and $\phi(1020)$ meson production at high transverse momentum in pp and Pb-Pb collisions at $\sqrt{s_\mathrm{NN}}$ = 2.76 TeV (American Physical Society, 2017-06) The production of K$^{*}(892)^{0}$ and $\phi(1020)$ mesons in proton-proton (pp) and lead-lead (Pb-Pb) collisions at $\sqrt{s_\mathrm{NN}} =$ 2.76 TeV has been analyzed using a high luminosity data sample accumulated in ...
Production of $K*(892)^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$ =7 TeV (Springer, 2012-10) The production of K*(892)$^0$ and $\phi$(1020) in pp collisions at $\sqrt{s}$=7 TeV was measured by the ALICE experiment at the LHC. The yields and the transverse momentum spectra $d^2 N/dydp_T$ at midrapidity |y|<0.5 in ...
Directed flow of charged particles at mid-rapidity relative to the spectator plane in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (American Physical Society, 2013-12) The directed flow of charged particles at midrapidity is measured in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV relative to the collision plane defined by the spectator nucleons. Both, the rapidity odd ($v_1^{odd}$) and ...
Measurement of D-meson production versus multiplicity in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (Springer, 2016-08) The measurement of prompt D-meson production as a function of multiplicity in p–Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV with the ALICE detector at the LHC is reported. D$^0$, D$^+$ and D$^{*+}$ mesons are reconstructed ...
Centrality, rapidity and transverse momentum dependence of the J/$\psi$ suppression in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76 TeV (Elsevier, 2014-06) The inclusive J/$\psi$ nuclear modification factor ($R_{AA}$) in Pb-Pb collisions at $\sqrt{s_{NN}}$=2.76TeV has been measured by ALICE as a function of centrality in the $e^+e^-$ decay channel at mid-rapidity |y| < 0.8 ...
Multiplicity Dependence of Pion, Kaon, Proton and Lambda Production in p-Pb Collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2014-01) In this Letter, comprehensive results on $\pi^{\pm}, K^{\pm}, K^0_S$, $p(\bar{p})$ and $\Lambda (\bar{\Lambda})$ production at mid-rapidity (0 < $y_{CMS}$ < 0.5) in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV, measured ...
Long-range angular correlations of π, K and p in p–Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2013-10) Angular correlations between unidentified charged trigger particles and various species of charged associated particles (unidentified particles, pions, kaons, protons and antiprotons) are measured by the ALICE detector in ...
Lecture: HGX208, F 9:55-11:35 Section: HGX208, T 18:30-20:10 Lecture: H5116, F 8:00-9:40 Section: HGW2403, F(e) 18:30-20:10 Textbook: https://doi.org/10.1017/CBO9781107050884 Lecture: HGW2403, T 18:30-21 Section: HGW2403, R 18:30-20 Instructions for the final paper You can choose one of the following topics. An introduction to one specific forcing notion from this list (for Cohen forcing you can introduce an application not discussed in our course) and explain how it works. On one or more issues concerning the meta-theory of forcing. For example, why and how we can talk about an “outer model” or an “object” not in our universe, why we can and why we need to assume the existence of a transitive model, how to account for forcing arguments as purely constructive methods, etc. Problem set 01 Let \(\pi\) be the canonical interpretation of PA into ZF. Can we prove “for each arithmetic formula \(\varphi\), if ZFC \(\vdash\pi(\varphi)\), then PA \(\vdash\varphi\)”? Prove it if we can, explain it if we cannot. Assume ZF \(\vdash\varphi^L\) for each formula \(\varphi\in\Sigma\), and \(\Sigma\vdash\psi\). Show that ZF \(\vdash\psi^L\). Why does Con(ZF) not imply that there is a countable transitive model of ZF? Problem set 02 Kunen’s set theory (2013) Exercise I.16.6 – I.16.10, I.16.17. Problem set 03 Kunen’s set theory (2013) Exercise II.4.6, 4.8. Jech’s set theory (2002) Exercise 7.1, 7.3 – 7.5, 7.13, 7.16, 7.18 – 7.20, 7.22 – 7.33. Problem set 04 Let \(M^\mathbb{B}\) be a Boolean valued model. Prove the following statements are valid in \(M^\mathbb{B}\). \(\forall y\big(\forall x\varphi(x)\rightarrow\varphi(y)\big)\). \(\forall x(\varphi\rightarrow\psi)\rightarrow\forall x\varphi\rightarrow\forall x\psi\). \(\alpha\rightarrow\forall x\alpha\), where \(x\) does not occur freely in \(\alpha\). Jech’s set theory (2002) Exercise 14.12. Problem set 05 Let \(\sigma\) be a \(\mathbb{B}\)-name. Show that \[ |\!|\exists x\in\sigma~\varphi(x)|\!| = \sum_{\xi\in\textrm{dom}\sigma}\sigma(\xi)\cdot|\!|\varphi(\xi)|\!|.\] For any partial order \(\mathbb{P}\), there is a separative partial order \(\mathbb{Q}\) and a surjection \(h:\mathbb{P}\to\mathbb{Q}\) such that \(x\leq y\) implies \(h(x)\leq h(y)\); \(x\) and \(y\) are compatible in \(\mathbb{P}\) if and only if \(h(x)\) and \(h(y)\) are compatible in \(\mathbb{Q}\). Such \(\mathbb{Q}\) is unique up to isomorphism. We call it the separative quotient of \(\mathbb{P}\). Jech’s set theory (2002) Exercise 14.1, 14.9, 14.14, 14.16. Lemma 14.13. Lecture: HGX205, M 18:30-21 Section: HGW2403, F 18:30-20 Exercise 01 Prove that \(\neg\Box(\Diamond\varphi\wedge\Diamond\neg\varphi)\) is equivalent to \(\Box\Diamond\varphi\rightarrow\Diamond\Box\varphi\). What have you assumed? Define strategy and winning strategy for modal evaluation games. Prove the Key Lemma: \(M,s\vDash\varphi\) iff V has a winning strategy in \(G(M,s,\varphi)\). Prove that modal evaluation games are determined, i.e. either V or F has a winning strategy. And all exercises for Chapter 2 (see page 23, open minds) Exercise 02 Let \(T\) with root \(r\) be the tree unraveling of some possible world model, and \(T’\) be the tree unraveling of \(T,r\). Show that \(T\) and \(T’\) are isomorphic. Prove that the union of a set of bisimulations between \(M\) and \(N\) is a bisimulation between the two models. We define the bisimulation contraction of a possible world model \(M\) to be the “quotient model”.
Prove that the relation linking every world \(x\) in \(M\) to its equivalence class \([x]\) is a bisimulation between the original model and its bisimulation contraction. And exercises for Chapter 3 (see page 35, open minds): 1 (a) (b), 2. Exercise 03 Prove that modal formulas (under possible world semantics) have the ‘Finite Depth Property’. And exercises for Chapter 4 (see page 47, open minds): 1 – 3. Exercise 04 Prove the principle of Replacement by Provable Equivalents: if \(\vdash\alpha\leftrightarrow\beta\), then \(\vdash\varphi[\alpha]\leftrightarrow\varphi[\beta]\). Prove the following statements. “For each formula \(\varphi\), \(\vdash\varphi\) is equivalent to \(\vDash\varphi\)” is equivalent to “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable”. “For every set of formulas \(\Sigma\) and formula \(\varphi\), \(\Sigma\vdash\varphi\) is equivalent to \(\Sigma\vDash\varphi\)” is equivalent to “for every set of formulas \(\Sigma\), \(\Sigma\) being consistent is equivalent to \(\Sigma\) being satisfiable”. Prove that “for each formula \(\varphi\), \(\varphi\) being consistent is equivalent to \(\varphi\) being satisfiable” using the finite version of the Henkin model. And exercises for Chapter 5 (see page 60, open minds): 1 – 5. Exercise 05 Exercises for Chapter 6 (see page 69, open minds): 1 – 3. Exercise 06 Show that “being equivalent to a modal formula” is not decidable for arbitrary first-order formulas. Exercises for Chapter 7 (see page 88, open minds): 1 – 6. For exercise 2 (a) – (d), replace the existential modality E with the difference modality D. In clause (b) of exercise 4, “completeness” should be “correctness”. Exercise 07 Show that there are infinitely many non-equivalent modalities under T. Show that GL + Id is inconsistent and Un proves GL. Give a complete proof of the fact: in S5, every formula is equivalent to one of modal depth \(\leq 1\). Exercises for Chapter 8 (see page 99, open minds): 1, 2, 4 – 6. Exercise 08 Let \(\Sigma\) be a set of modal formulas closed under substitution. Show that \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W,R,V’),w\vDash\Sigma\] holds for any valuations \(V\) and \(V’\). Define a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\) as a “functional bisimulation”, namely a bisimulation regardless of valuation. Show that if there is a \(p\)-morphism between \((W,R),w\) and \((W’,R’),w’\), then for any valuations \(V\) and \(V’\), we have \[(W,R,V),w\vDash\Sigma~\Leftrightarrow~ (W’,R’,V’),w\vDash\Sigma.\] Exercises for Chapter 9 (see page 99, open minds). Exercise the last: exercises for Chapters 10 and 11 (see pages 117 and 125, open minds). Errata for 《数理逻辑:证明及其限度》 (Mathematical Logic: Proofs and Their Limits): corrections and suggested revisions to the book are welcome in the comments section.
It's a long time since I learned about point groups. But I'll have a go. I believe that we are talking about the matrix representation in the basis $(x,y,z)$, in which case they are the familiar looking $3\times 3$ rotation matrices. But not the set of all $3\times 3$ rotation matrices! There are (as you said) 8 symmetry operations of the $C_3$ type, which are basically clockwise and anticlockwise rotations of $120^\circ$ about the four three-fold axes of an object with tetrahedral symmetry. Each of them can be represented by a $3\times 3$ rotation matrix, and the ones about the $z$-axis take the form given in your question. It is just conventional to orient the system so that $z$ lies along this high-symmetry axis. It is not true that a general $R_z$ with arbitrary $\theta$ corresponds to the $T$ irreducible representation: just the rotation matrices corresponding to the specific angles and axes of the symmetry operations. The key point is that the character of the operation is given by the trace of the matrix representative in each case. For the $C_3^+$ and $C_3^-$ operations about $z$, $\theta=\pm120^\circ=\pm2\pi/3$, $\cos\theta=-\frac{1}{2}$ and the trace of the matrix in your question can be seen to be zero, which is what appears in the character tables. The same is true for all the other matrices representing the other $C_3^\pm$ operations about other axes: although they have a more complicated form in general, they have this property in common: the trace of a rotation matrix is always $1+2\cos\theta$ where $\theta$ is the overall angle of rotation. (In a similar way, for the $C_2$ operations for which the rotation matrices correspond to $\theta=180^\circ=\pi$, $\cos\theta=-1$, the character as calculated from the trace, is $1+2\times(-1)=-1$, and this is the number appearing in the character tables.) One applies the matrix representing the symmetry operation to the elements in the basis. In your question, you seem to be considering applying the rotation operation ($C_3$) to the rotation matrix ($R_z$) itself, which is not how things work (by which I mean, not helpful in determining the characters appearing in character tables for the various operations in the 3-dimensional $T$ irreducible representation). As the various commenters have suggested, I am sure that there is a broader general knowledge of point groups on Chemistry StackExchange. It is central to an understanding of molecular symmetry, and atomic orbitals in crystal fields, so there is bound to be plenty of expertise. So if this answer doesn't help, and nobody offers a better one, you should certainly try there. But hopefully this makes some kind of sense. [Edit, following OP comments].Several points to respond to. Yes, the main feature of the character is that it is invariant to a change of basis (similarity transform) and hence is associated with the trace of the transformation matrices, which has that property. I don't agree with the terminology such an irrep (representing the highest order rotation in the group) The set of matrices as a whole constitutes the representation (for a given basis). Subsets of those matrices will correspond to conjugate operations, i.e. operations belonging to the same class: those matrices in the same class will have the same character. Typically, they will involve the same kind of operation, but carried out with respect to symmetry elements that are (themselves) related by a symmetry operation. 
This means that it is not necessarily true that similar kinds of operations carried out with respect to "different type of rotation axis" will have the same character. It all depends on the irrep, which in turn is related to the basis: the number and kind of functions that are being interconverted by the transformations. I have only been discussing this particular case of the $(x,y,z)$ basis, in which the matrices are the familiar $3\times 3$ rotation matrices (because effectively, we are rotating vectors). For the $D_{8h}$ group, there is no corresponding 3-dimensional irrep. There are various non-equivalent twofold axes (different classes), and various one- and two-dimensional irreps: in the character table, there are various different characters depending on both irrep and axis type (class). You would need to look at the matrix for a simple example of each case, to determine the character. Coming to the icosahedral group, there is a three-dimensional $T_1$ irrep, which transforms the basis $(x,y,z)$, and as far as I can tell this fits the pattern I described above. At the Wikipedia page the character for $C_5$ is given as $2\cos\theta$ where $\theta=\pi/5$, and this is equal to $1+2\cos 2\pi/5$ (the rotation angle is $2\pi/5$). However there is another 3-dimensional irrep, $T_2$, for which the character for $C_5$ is different. So those operations are different. They are still rotations through $2\pi/5$, but the objects being transformed are not simple vectors. You would need to look into a suitable basis for this irrep, I'm not familiar enough with it. Generally, the matrices correspond to both the operation being performed, and the things being rotated. (Or, in the more general case, reflected etc). Again, I hope that this clarifies things a bit. It's not completely straightforward. You may find a book helpful: Chemical Applications of Group Theory by FA Cotton is thorough, but there may be more up to date alternatives.
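A quick numerical illustration of the trace rule discussed above (my own check, not part of the original answer): build the rotation matrix about $z$ for a given angle and confirm that its trace equals $1+2\cos\theta$, giving character $0$ for $C_3$, $-1$ for $C_2$, and $1.618$ for $C_5$ in the $(x,y,z)$ basis.

import numpy as np

def rotation_z(theta):
    """3x3 matrix rotating the (x, y, z) basis by angle theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

for name, theta in [("C3", 2*np.pi/3), ("C2", np.pi), ("C5", 2*np.pi/5)]:
    R = rotation_z(theta)
    # The character of the operation in this basis is the trace of its matrix.
    print(name, round(np.trace(R), 4), round(1 + 2*np.cos(theta), 4))

The same traces are obtained for rotations about any other axis, since the trace is invariant under the change of basis that reorients the axis.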
I'm trying to understand the answer here to the question of finding the induced maps of coordinate rings corresponding to explicit isomorphisms between $\mathcal Z(xy-z)$ and $\mathbb A^2$. A morphism $\pi\colon \mathcal Z(xy-z) \to \mathbb A^2$ is given by $\pi(a_1,a_2,a_3) = (a_1,a_2)$, and its inverse is the morphism $\psi\colon \mathbb A^2 \to \mathcal Z(xy-z)$ given by $\psi(a_1,a_2) = (a_1, a_2, a_1a_2)$. The induced maps of the coordinate rings are given by pre-composition. Since $$ (x\circ \psi)(a_1,a_2) = a_1, \quad (y\circ \psi)(a_1,a_2) = a_2, \quad \text{and} \quad (z\circ \psi)(a_1,a_2) = a_1a_2, $$ am I correct in understanding that $$ \tilde\psi(x) = x, \quad \tilde\psi(y) = y, \quad \text{and} \quad \tilde\psi(z) = xy? $$ Also, how would one go about determining whether or not $\mathcal Z(xy - z^2)$ is isomorphic to $\mathbb A^2$? I don't think the morphism $\pi\colon \mathcal Z(xy - z^2) \to \mathbb A^2$ that projects onto the first two coordinates is an isomorphism, since the third component of the inverse would involve $\sqrt{a_1a_2}$. However, this only shows that the map $\pi$ is not an isomorphism. How could one go about showing that there exists no isomorphism between the two algebraic sets?
Your alternative method uses the non-relativistic approximation twice: first when you insert a value for $v/c$ in the numerator, second when you insert a value for $v$ in the denominator. Because of this your error is compounded, so you get a result which is less accurate than if you inserted the approximation only once. Whenever you make approximations, it is most accurate to do so only once, at the last possible step, as follows: First identify a dimensionless variable $y$ which depends on the independent variable in your problem. Examples: $\beta=v/c$ if the independent variable is $v$, or $x=T/mc^2$ if it is $T=eV$ (see below). Obtain a formula for the quantity you are interested in, as a function of $y$. Expand the formula into a power series in $y$, using Taylor's Theorem. Check that the expansion is valid for the range of values of $y$ which interest you. For example, the expansion $$(1+y)^n=1+ny+\frac12 n(n-1)y^2+\frac16n(n-1)(n-2)y^3+...$$ is valid provided that $|y|\lt 1$. Finally throw out higher powers of the dimensionless variable, depending on how much accuracy you want to have. The accurate formula which you got can be written as $$\lambda=\frac{hc}{\sqrt{2Tmc^2+T^2}}=\lambda_0 (1+\frac12 x)^{-1/2}= \lambda_0 (1-\frac14 x+\frac{3}{32}x^2-\frac{5}{128}x^3-...)$$ where $\lambda_0=\frac{h}{\sqrt{2mT}}$ is the de Broglie wavelength assuming the electron is non-relativistic, and $x=\frac{T}{mc^2}=\frac{eV}{mc^2}$. This expansion is valid because $T=85keV$ and $mc^2=511keV$ (the rest mass of the electron) so $\frac12 x\lt 1$. Note that the approximation is introduced only at the last step, by ignoring the higher powers of $x$. The full infinite series expansion, and the steps before it, are 100% accurate. Using the figures given, we get $\lambda_0=4.2066pm, x=0.16634$ and $\lambda\approx4.0317pm$ when we ignore terms in $x^2$ or higher powers. Using the exact equation we get $\lambda=4.0419pm$. Your method first makes the classical approximation $T=eV\approx \frac12 mv^2$ before anything else. Then $$\frac{v^2}{c^2}\approx\frac{2T}{mc^2}=2x$$ $$mv\approx\sqrt{2mT}$$ $$\lambda=\frac{h\sqrt{1-\frac{v^2}{c^2}}}{mv}\approx \lambda_0 (1-2x)^{+1/2}\approx \lambda_0 (1-x-\tfrac12 x^2-\tfrac12 x^3-...)$$ Compare this power series with the one above. The term in $x$ now has a coefficient of $1$ instead of $\frac14$, so the correction to $\lambda_0$ is $4\times$ what it should be. The previous line was already an approximation, so the power series is also an approximation regardless of however many terms we retain, unlike the series above, which gets arbitrarily close to the exact result as we retain more and more powers. Ignoring terms in $x^2$ or higher powers, we get $\lambda \approx 3.5069pm$ instead of $\lambda \approx 4.0317pm$. The larger the value of $T=eV$, the larger $x$ is also, and the greater the difference between the correct approximation $\lambda\approx \lambda_0 (1-\frac14 x)$ and your incorrect approximation $\lambda\approx \lambda_0 (1-x)$. And as noted above, if $x\gt \frac12$ then your approximation $\lambda\approx \lambda_0 (1-2x)^{+1/2}$ gives an imaginary result, whereas the exact formula $\lambda= \lambda_0 (1+\frac12x)^{-1/2}$ gives a real result because $x\ge 0$ for all values of $V$. There is no sharp cut off point for $V$ at which the electron suddenly switches over from being classical to relativistic, or above which your approximation suddenly ceases to be a good one. The change is gradual.
At $V=33$ kV the value of $x$ is less than half of its value at $V=85$ kV, so the difference between your erroneous calculation and the correct one is correspondingly smaller. At both values of $V$, however, your method gives an error which is $8\times$ bigger than it would be if you used the correct approximate formula. In fact, if the accelerating potential is such that $T=eV \gt \frac12 mc^2$, then the classical relation gives $v \gt c$, so the factor $\sqrt{1-\frac{v^2}{c^2}}$ in the numerator would be imaginary, and so would the de Broglie wavelength.
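For a quick numerical cross-check, here is a short Python sketch (my own, using rounded textbook constants) that evaluates the non-relativistic wavelength, the exact formula quoted above, and the "approximate twice" route:

import math

h  = 6.626e-34    # Planck constant, J s
me = 9.109e-31    # electron rest mass, kg
c  = 2.998e8      # speed of light, m/s
qe = 1.602e-19    # elementary charge, C

V = 85e3                       # accelerating potential, volts
T = qe * V                     # kinetic energy, J

lam0  = h / math.sqrt(2 * me * T)                      # non-relativistic de Broglie wavelength
exact = h * c / math.sqrt(2 * T * me * c**2 + T**2)    # exact formula from above

# the "approximate twice" route: v from eV = mv^2/2, then a relativistic factor on top
v = math.sqrt(2 * T / me)
twice = h * math.sqrt(1 - (v / c)**2) / (me * v)

print(lam0 * 1e12, exact * 1e12, twice * 1e12)   # roughly 4.21, 4.04 and 3.4 picometres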
[1003.0299] The local B-polarization of the CMB: a very sensitive probe of cosmic defects

Authors: Juan Garcia-Bellido, Ruth Durrer, Elisa Fenu, Daniel G. Figueroa, Martin Kunz

Abstract: We present a new and especially powerful signature of cosmic strings and other topological or non-topological defects in the polarization of the cosmic microwave background (CMB). We show that even if defects contribute 1% or less in the CMB temperature anisotropy spectrum, their signature in the local $\tilde{B}$-polarization correlation function at angular scales of tens of arc minutes is much larger than that due to gravitational waves from inflation, even if the latter contribute with a ratio as big as $r\simeq 0.1$ to the temperature anisotropies. Proposed B-polarization experiments, with a good sensitivity on arcminute scales, may either detect a contribution from topological defects produced after inflation or place stringent limits on them. Even Planck should be able to improve present constraints on defect models by at least an order of magnitude, to the level of $\epsilon <10^{-7}$. A future full-sky experiment like CMBpol, with polarization sensitivities of the order of $1\mu$K-arcmin, will be able to constrain the defect parameter $\epsilon=Gv^2$ to a few $\times10^{-9}$, depending on the defect model.

Topological defects can source scalar, vector and tensor modes in the early universe. The vector modes have power on small scales and can generate E and B polarization; the B signal can be quite distinctive, and used to constrain defect models with future data. This paper appears to take some previous results for the B-mode power spectrum and multiply them by $\ell^4$, so e.g. in Fig 1 the power is very blue. Of course to be consistent you also have to multiply the noise and any other spectrum of interest by $\ell^4$ as well, so you seem to gain nothing by doing this. Is there some point I have missed?

The paper also defines a 'local' scalar $\tilde{B}$ by taking two derivatives of the polarization tensor. However you gain nothing by doing this; with noisy or non-band-limited data you cannot calculate derivatives on a scale L without having data available over a scale L - the non-locality just hits you in a different form (see astro-ph/0305545 and refs).

The main point is that vector components of defects' contribution to CMB polarization anisotropies peak at scales smaller than those from inflation. On the other hand, the ordinary E- and B-modes depend non-locally on the Stokes parameters, so they cannot be used to put constraints on causal sources like defects using the angular correlation function of E- and B-modes on small scales.
That is the reason why Baumann and Zaldarriaga [0901.0958] suggested using instead the local modes. Those are the true causal modes, written in terms of derivatives of the Stokes parameters. These local B-modes then have power spectra that are much bluer than the non-local ones, and hence enhance the small-scale (high-$\ell$) end of the spectrum. It is by looking at the angular correlation functions at small separations (tens of arcmin) that one has a chance to measure the defects' contribution to the local B-modes, and distinguish it from the one of inflation. Of course, the usual white noise power spectrum for polarization will also be modified by this $\ell^4$ factor, but by a suitable Gaussian smoothing of the data (following Baumann & Zaldarriaga), we can indeed obtain large signal to noise ratios for binned data at small angular scales.

Baumann & Zaldarriaga looked at the model-independent signature of inflation at angles $\theta>2$ degrees. What we have realized is that, although model-dependent, the signal at angles $\theta < 1$ degrees can be much more significant. In fact, the feature at small angles is rather universal. The differences between defect models (and we considered four different ones) lie just in the height and width of the first and second oscillations in the angular correlation functions (related to the height and position of the angular power spectrum). Therefore, with sufficient angular resolution one could not only detect defects (if they are there) but also differentiate between different models.
I think it is clear from the normal power spectra that the sourced vector mode B-polarization peaks at much smaller scales than the gravitational wave spectrum: mostly scales sub-horizon at recombination as opposed to tensor modes which decay on sub-horizon scales. I agree that with low enough noise this is an interesting signal (and has been calculated many times before), though it needs to be distinguished from other possible vector mode sources like magnetic fields. I thought the point of the Baumann paper was to make a nice picture showing visually the structure of the correlations. The E and B modes contain exactly the same information as the tilde versions; in the same way the WMAP7 papers make some nice plots of the polarization-temperature correlation to visually show a physical effect, but these constrain the same information as the usual power spectra. In the Gaussian limit the usual E/B spectra contain all the information on the defect power spectrum. Only Q and U can actually be measured locally on the sky (in one pixel you cannot calculate any spatial derivatives). The two-point Q/U correlations can be calculated from the usual E and B spectra.
Given a graph $G = (V,E,w)$, we define $G'=(V,E,w')$ with $w'(e) = aw(e) + 1$, where $a = |E| + \varepsilon$ for some $\varepsilon \geq 0$, as proposed in the comments of the question.

Lemma. Let $P$ be a path in $G$ with cost $C$, i.e. $w(P)=C$. Then $P$ has cost $aC + |P|$ in $G'$, i.e. $w'(P) = aC + |P|$.

The lemma follows directly from the definition of $w'$. Let $P$ be the path returned by Dijkstra on $G'$; it is a shortest path in $G'$. Assume $P$ was not a shortest path with fewest edges (among all shortest paths) in $G$. This can happen in one of two ways.

$P$ is not a shortest path in $G$. Then there is a path $P'$ with $w(P') < w(P)$. As $|P|,|P'| \leq |E| \leq a$, this implies that also $w'(P') < w'(P)$ by the lemma above. This contradicts that $P$ was chosen as a shortest path in $G'$.

$P$ is a shortest path, but there is a shortest path with fewer edges. Then there is another shortest path $P'$ -- i.e. $w(P) = w(P')$ -- with $|P'| < |P|$. This implies that $w'(P') < w'(P)$ by the lemma above, which again contradicts that $P$ is a shortest path in $G'$.

As both cases have led to a contradiction, $P$ is indeed a shortest path with fewest edges in $G$. That covers one half of the proposition. What about $a < |E|$, i.e. $a = |E| - \varepsilon$ with $\varepsilon \in (0,|E|)$?

Actually, we also need that $a$ or all weights in $G$ are integers. Otherwise, $w(P') < w(P)$ does not guarantee that the corresponding costs in $G'$ are at least $|E|$ apart. This is not a restriction, though; we can always scale $w$ with a constant factor so that all weights are integers, assuming we start with rational weights.
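A minimal sketch of the reduction in Python (my own illustration, assuming non-negative integer weights and an adjacency-list representation; the function name and the decoding step are not part of the original answer):

import heapq

def shortest_path_fewest_edges(adj, source):
    """adj: dict u -> list of (v, w) with non-negative integer weights w."""
    a = sum(len(nbrs) for nbrs in adj.values()) + 1   # a = |E| + 1 > |E|
    dist = {source: 0}
    pq = [(0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, ()):
            nd = d + a * w + 1                        # w'(e) = a*w(e) + 1
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    # decode: original cost = d' // a, number of edges on the path = d' % a
    return {v: (d // a, d % a) for v, d in dist.items()}

# toy example: two shortest paths of cost 2 from 0 to 3, one with fewer edges
adj = {0: [(1, 1), (3, 2)], 1: [(2, 0)], 2: [(3, 1)], 3: []}
print(shortest_path_fewest_edges(adj, 0)[3])          # (2, 1): cost 2, via one edge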
That's the continuous (differential) entropy; not the entropy of the discrete random variable that your ADC output is! You could (and seeing your profile page, you'll probably have a lot of fun doing that) look into what is called rate distortion in the field that concerns itself with information entropy, information theory. Essentially, an ADC doesn't "let through" all the entropy entering it, and there's ways of measuring that. But in this specific case, things are simpler. Remember: The entropy of a discrete source \$X\$ is the expectation of information $$H(X) = E(I(X))$$ and since your ADC output \$X\$ has very finitely many, countable amounts of output states, the expectation is just a sum of probability of an output \$x\$ times the information of that output: \begin{align}H(X) &= E(I(X))\\&= \sum_{x} P(X=x) \cdot I(X=x)\\&= \sum_{x} P(X=x) \cdot \left(-\log_2(P(X=x))\right)\\&= -\sum_{x} P(X=x) \log_2(P(X=x))\end{align} Now, first observation: \$H(X)\$ of an \$N\$-bit ADC has an upper bound: you can't ever get more than \$N\$ bits of info out of that ADC. And: you get exactly \$H(X)=N\$ if you use the discrete uniform distribution for the values over the \$2^N\$ ADC steps (try it! set \$P(X=x) = \frac1{2^N}\$ in the formula above, and remember that you sum over \$2^N\$ different possible output states). So, we can intuitively conclude that the digitized normal distribution yields fewer bits of entropy than the digitized uniform distribution. Practically, that means something immensely simple: Instead of using, say, one 16-bit ADC to digitize your normally distributed phenomenon, use sixteen 1-bit ADCs to observe 16 analog normally distributed entities, and only measure whether the observed value is smaller or larger than the mean value of the normal distribution. That "sign bit" is uniformly distributed over \$\{-1, +1\}\$, and thus, you get one full bit out of every 1-bit ADC, summing to 16 total bits. If your noise source is white (that means: one sample isn't correlated to the next), then you can just sample 16 times as fast as your 16-bit ADC with your 1-bit ADC, and get the full 16 bit of entropy in the same time you would have done one 16-bit analog-to-digital conversion¹. How many bits of entropy does the quantized normal distribution have? But you had a very important question: How many bits do you get when you digitize a normal distribution? First thing to realize: The normal distribution has tails, and you will have to truncate them; meaning that all the values above the largest ADC step will be mapped to the largest value, and all the values below the smallest to the smallest value. So, what probability do the ADC steps then get? I'll do the following model: As shown above, the sign of our \$\mathcal N(0,1)\$ distributed variable is stochastically independent from the absolute value. So, I'll just calculate the values for the ADC bins \$\ge0\$; the negative ones will be symmetrically identical. Our model ADC covers \$[-1,1]\$ analog units; it has \$N+1\$ bits. Meaning that I'll cover \$[0,1]\$ with \$N\$ bits below. The smallest non-negative ADC bin thus covers analog values \$v_0=[0,2^{-N}[\$; the one after that \$v_1=[2^{-N},2\cdot 2^{-N}[\$. The \$i\$th bin (counting from 1, \$i<2^N-1\$) covers $$v_i=[i\cdot 2^{-N},(i+1)\cdot 2^{-N}[$$. Most interesting, however, is the last bin: it covers $$v_{2^N-1}=[(2^N-1)\cdot 2^{-N},\infty[=[1-2^{-N},\infty[$$. So, to get the entropy, i.e. 
the expected information, we'll need to calculate the individual bins' information and average these, i.e. weigh each bin's information with its probability. What is the probability of each bin? \begin{align}P(X\in v_i|X>0) &= 2\int\limits_{i\cdot 2^{-N}}^{(i+1)\cdot 2^{-N}} f_X(x)\,\mathrm dx && {0\le i<2^N-1}\\&= 2\left(F_X\left((i+1)\cdot 2^{-N}\right)-F_X\left(i\cdot 2^{-N}\right)\right) &&|\text{ standard normal}\\&= 2\Phi\left((i+1)\cdot 2^{-N}\right)-2\Phi\left(i\cdot 2^{-N}\right)\\[1.5em]P(X\in v_{2^N-1}|X>0) &= 2\Phi(\infty)-2\Phi\left(1- 2^{-N}\right)\\&= 2-2\Phi\left(1- 2^{-N}\right)\end{align} So, let's plot the information of a 5-bit ADC (which means 4 bits represent the positive values). Summing up the \$P\cdot I\$ contributions from above, we see that we only get 3.47 bits.

Let's actually do an experiment: Does my ADC pay? We'll simply plot the entropy that we get over the ADC bits that we pay for. As you can see, you get diminishing returns for added cost; so don't use a high-resolution ADC to digitize a strongly truncated normal distribution.

Scaling the ADC range

As was rather intuitive from the previous sections, the problem is that the boundary bin accumulates too much probability mass. What if we scaled the ADC range to cover \$[-a\sigma, a\sigma]\$ instead of just \$[-\sigma, \sigma]\$? Plotting the entropy against that scale factor confirms our suspicion that there's a maximum entropy you'll get when you "kinda" make your normal distribution look relatively uniform.

source code

The scripts used to generate the above figures can be found on Github.

¹ in fact, you must take a close look at the architecture of your ADC: ADCs are typically not meant to digitize white noise, and thus, many ADC architectures strive to "color" the white noise e.g. by shifting the noise energy to higher frequencies when measuring the output value successively; an introduction to the Delta-Sigma ADC is a must-read here! And I'm certain you'll enjoy that :)
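For a quick cross-check, here is a small Python sketch (my own, not the author's linked script) that rebuilds the bin probabilities of the model above and sums the information of the non-negative bins:

import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def half_entropy_bits(N):
    """Entropy (bits) of the 2^N magnitude bins covering [0, 1], given X > 0."""
    step = 2.0 ** (-N)
    probs = [2.0 * (Phi((i + 1) * step) - Phi(i * step)) for i in range(2 ** N - 1)]
    probs.append(2.0 * (1.0 - Phi(1.0 - step)))        # truncated top bin
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(round(half_entropy_bits(4), 2))   # ~3.47 bits, matching the figure quoted above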
In short: wavelets and relatives are pretty good at compacting a sufficiently large class of regular-enough and useful signals and images. What follows are signal properties (and corresponding wavelet features) that make wavelets good (not best) candidates for lossy compression: piecewise-smooth (vanishing moments, or cancellation of low-order polynomials); edges or jumps (gradient-like or Laplacian behavior of wavelets); localized oscillations (zero-average and wiggling wavelet shape); noise or spurious events (orthogonality and sparsity enhancement).

As said by @hops, the efficiency of wavelets for compression depends on the good matching of a signal class and the chosen wavelets. Let us restrict here to non-redundant discrete transformations: discrete Fourier and discrete wavelets. Both are orthogonal, or close enough (biorthogonal wavelets) to skip the distinction. So both, when transform coefficients are discarded, are least-squares approximations. But least-squares is not the best metric for compression: if you double the amplitude of a sample, the energy is multiplied by $4$, but it only adds one bit to the stored data, in a $\log_2$ reasoning. In a way, a transform will help compression if it reduces the logarithmic cost or bit-budget of a discrete signal; hence, if the sorted coefficient magnitudes follow a power law $1/k^{\beta}$ that decreases fast enough (the higher the $\beta$, the better). This feature is often called the compressibility of signals. It can be assessed empirically, but also theoretically based on complicated functional analysis (Besov, Sobolev spaces).

However, consider the useful class of $C^\alpha$ piecewise regular signals, $\alpha \ge 1$. They are locally smooth, with jumps (edges). Taking the largest $M$ coefficients for compression (or nonlinear approximation), the mean squared error for the Fourier basis will asymptotically decrease only as $1/M$ (because of Gibbs ringing), while wavelet approximations can decrease as $1/M^{2\alpha}$. The smoother the signal, the better the wavelet compressibility. In other words, Fourier cannot use the regularity of the signal.

In practice, this is not so simple. The complex aspect of Fourier is somewhat difficult to handle. Quantization should be taken into account. The storage of the locations of the highest coefficients costs too. Perceptual distortion should play a role, at least for lossy compression. So at low compression rates, local Fourier decompositions like the JPEG DCT can perform as well as wavelets, since asymptotic proofs do not apply. Indeed, local cosine bases, lapped orthogonal transforms that bridge the gap between Fourier and standard wavelets, perform well too, and they are used in the MP3 standard (MDCT). For images, the 2D aspect renders separable wavelets less efficient than in 1D. For strict lossless compression though, (wavelet) transforms are not the state-of-the-art yet.
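To make the $M$-term comparison tangible, here is a small self-contained Python sketch (my own toy example, not from the answer). It uses an orthonormal Haar transform, so the squared approximation error is simply the energy of the discarded coefficients; each complex DFT coefficient is counted as a single term, which if anything flatters Fourier:

import numpy as np

def haar_forward(x):
    """Orthonormal Haar wavelet transform; len(x) must be a power of two."""
    out = x.astype(float).copy()
    n = len(out)
    while n > 1:
        half = n // 2
        a = (out[0:n:2] + out[1:n:2]) / np.sqrt(2)   # averages (next coarser level)
        d = (out[0:n:2] - out[1:n:2]) / np.sqrt(2)   # details
        out[:half], out[half:n] = a, d
        n = half
    return out

def mterm_error(coeffs, M):
    """Squared L2 error when only the M largest-magnitude coefficients are kept."""
    mags = np.sort(np.abs(coeffs))[::-1]
    return float(np.sum(mags[M:] ** 2))

n = 1024
t = np.linspace(0, 1, n, endpoint=False)
x = np.where(t < 0.37, t ** 2, np.cos(3 * t))        # smooth pieces with one jump

haar = haar_forward(x)
four = np.fft.fft(x) / np.sqrt(n)                    # orthonormal DFT (complex)
for M in (16, 64, 256):
    print(M, mterm_error(haar, M), mterm_error(four, M))
# the Haar error falls much faster with M than the Fourier error, as argued above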
Production of charged pions, kaons and protons at large transverse momenta in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (Elsevier, 2014-09) Transverse momentum spectra of $\pi^{\pm}, K^{\pm}$ and $p(\bar{p})$ up to $p_T$ = 20 GeV/c at mid-rapidity, |y| $\le$ 0.8, in pp and Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV have been measured using the ALICE detector ...
Anisotropic flow of inclusive and identified particles in Pb–Pb collisions at $\sqrt{{s}_{NN}}=$ 5.02 TeV with ALICE (Elsevier, 2017-11) Anisotropic flow measurements constrain the shear $(\eta/s)$ and bulk ($\zeta/s$) viscosity of the quark-gluon plasma created in heavy-ion collisions, as well as give insight into the initial state of such collisions and ... Investigations of anisotropic collectivity using multi-particle correlations in pp, p-Pb and Pb-Pb collisions (Elsevier, 2017-11) Two- and multi-particle azimuthal correlations have proven to be an excellent tool to probe the properties of the strongly interacting matter created in heavy-ion collisions. Recently, the results obtained for multi-particle ...
Is it possible to construct a Lorentz-invariant, rank-three Levi-Civita tensor in Minkowski spacetime? If not, why so? I am talking about something like $\epsilon_{\alpha\beta\gamma}$ or $\epsilon^{\alpha\beta\gamma}$, where each index runs from $0$ to $3$. As in this answer here, which proves the Lorentz covariance of the Levi-Civita tensor by using the determinant formula, I guess one would run into trouble with rank-three Levi-Civita tensors. Kindly elaborate on that.

You can use Young tableaux/diagrams and the permutation group to figure out the symmetries of the general rank-3 tensor. The spaces correspond to the partitions of the rank:
3=3: One 20-dimensional totally symmetric subspace.
3=2+1: Two 20-dimensional mixed symmetry subspaces.
3=1+1+1: One 4-dimensional totally antisymmetric subspace: $$ A_{\alpha\beta\gamma} = \frac 1 6 [T_{\alpha\beta\gamma} + T_{\beta\gamma\alpha} + T_{\gamma\alpha\beta} - T_{\gamma\beta\alpha} - T_{\beta\alpha\gamma} - T_{\alpha\gamma\beta}] $$
That is the only antisymmetric thing you can make according to Schur-Weyl theory. To find the dimensions, I used the Hook Length Formula (the product runs over the boxes $x$ in a diagram $Y(\lambda)$) for the Young diagram corresponding to the integer partition: $$ {\rm dim}\pi_{\lambda} = \frac {n!}{\prod_{x\in Y}{\rm hook}(x)}$$ If you consider 3 dimensions ($n=3$), you get ${\rm dim} = 1$, that is the standard Levi-Civita symbol $\epsilon_{ijk}$. If you set $n=4$, the result is ${\rm dim} = 4$. That means $A_{\alpha\beta\gamma}$ transforms like a 4-vector. So, the only antisymmetric part of a rank-3 tensor in Minkowski space rotates like a 4-vector, which means it is not invariant and is not a candidate to be Levi-Civita like. Meanwhile, the dimensions of the 3 other irreducible spaces are all 20--which are certainly not scalars, and thus not candidates to be Levi-Civita like. Note that if you consider rank-4 tensors, the partitions are as follows:
4=4: One 35-dimensional symmetric space.
4=3+1: Three 45-dimensional mixed symmetry spaces.
4=2+2: Two 20-dimensional mixed symmetry spaces.
4=2+1+1: Three 15-dimensional mixed symmetry spaces.
4=1+1+1+1: One totally antisymmetric 1-dimensional space, which is proportional to the Levi-Civita symbol $\epsilon_{\mu\nu\sigma\lambda}$.
In summary, the answer is "No", and the reason why has to do with the representations of the symmetric group on 3 letters. You partition the rank $3$, use the Robinson-Schensted correspondence to associate that partition with irreducible representations of the permutation group (the Young diagrams make this step a snap). Then, Schur-Weyl duality associates those with irreducible subspaces of a rank-N tensor (signed permutations of indices). Finally, the Hook Length Formula tells you the dimensions of those subspaces. The Levi-Civita symbol needs to be invariant (e.g., dimension 1, like a scalar) and it needs to be totally antisymmetric in all indices--and that simply does not exist for rank 3 in 4 dimensions.

Does this satisfy the requirements? In Minkowski spacetime (signature: -,+,+,+), let $\epsilon_{abcd}$ be the alternating tensor satisfying $\epsilon_{abcd}=\epsilon_{[abcd]}$ and $\epsilon_{abcd}\epsilon^{abcd}=-24$ and let $v^a$ be the 4-velocity of an observer ($v^av_a=-1$). Define the "spatial alternating tensor seen by the observer $v^a$" $$\epsilon_{abc}=\epsilon_{abcd}v^d,$$ which satisfies $\epsilon_{abc}=\epsilon_{[abc]}$, $\epsilon_{abc}\epsilon^{abc}=6$, and $v^a\epsilon_{abc}=0$.
(This is extracted from Robert Geroch's "General Relativity, 1972 Lecture Notes" [ISBN 978-0987987174].)
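As a quick numerical sanity check of the dimension counting above (my own illustration, not from either answer), one can build the rank-3 antisymmetrization projector and read off the dimension of its image, which should be $\binom{d}{3}$: 1 in three dimensions and 4 in four dimensions.

import itertools
import numpy as np

def perm_sign(perm):
    """Sign of a permutation given as a tuple of indices."""
    sign = 1
    perm = list(perm)
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                sign = -sign
    return sign

def antisym_projector_rank(d):
    """Rank of the antisymmetrizer acting on rank-3 tensors in d dimensions."""
    size = d ** 3
    P = np.zeros((size, size))
    for idx in itertools.product(range(d), repeat=3):
        row = np.ravel_multi_index(idx, (d, d, d))
        for perm in itertools.permutations(range(3)):
            col = np.ravel_multi_index(tuple(idx[p] for p in perm), (d, d, d))
            P[row, col] += perm_sign(perm) / 6.0
    return np.linalg.matrix_rank(P)

print(antisym_projector_rank(3), antisym_projector_rank(4))   # expect: 1 4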
It is an illusion that the computation rules "define" or "construct" the objects they speak about. You correctly observed that the equation for $\mathrm{ind}_{=_A}$ does not "define" it, but failed to observe that the same is true in other cases as well. Let us consider the induction principle for the unit type $1$, which seems particularly obviously "determined". According to Section 1.5 of the HoTT book we have$$\mathrm{ind}_1 : \prod_{C : 1 \to \mathtt{Type}} C(\star) \to \prod_{x : 1} C(x)$$with the equation$$\mathrm{ind}_1 (C, c, \star) = c.$$Does this "define" or "construct" $\mathrm{ind}_1$ in the sense that it leaves no doubt as to what $\mathrm{ind}_1$ "does"? For instance, set $C(x) = \mathbb{N}$ and $c = 42$, and consider what we could say about$$\mathrm{ind}_1(C, 42, e)$$for a given expression $e$ of type $1$. Your first thought might be that we can reduce this to $42$ because "$\star$ is the only element of $1$". But to be quite precise, the equation for $\mathrm{ind}_1$ is applicable only if we show $e \equiv \star$, which is impossible when $e$ is a variable, for example. We can try to wiggle out of this and say that we are only interested in computation with closed terms, so $e$ should be closed. Is it not the case that every closed term $e$ of type $1$ is judgmentally equal to $\star$? That depends on nasty details and complicated proofs of normalization, actually. In the case of HoTT the answer is "no" because $e$ could contain instances of the Univalence Axiom, and it is not clear what to do about that (this is the open problem in HoTT). We can circumvent the trouble with univalence by considering a version of type theory which does have good properties so that every closed term of type $1$ is judgmentally equal to $\star$. In that case it is fair to say that we do know how to compute with $\mathrm{ind}_1$, but:

1. The same will hold for the identity type, because every closed term of an identity type will be judgmentally equal to some $\mathrm{refl}(a)$, and so then the equation for $\mathrm{ind}_{=_A}$ will tell us how to compute.
2. Just because we know how to compute with closed terms of a type, that does not mean we have actually defined anything, because there is more to a type than its closed terms, as I tried to explain once.

For example, Martin-Löf type theory (without the identity types) can be interpreted domain-theoretically in such a way that $1$ contains two elements $\bot$ and $\top$, where $\top$ corresponds to $\star$ and $\bot$ to non-termination. Alas, since there is no way to write down a non-terminating expression in type theory, $\bot$ cannot be named. Consequently, the equation for $\mathrm{ind}_1$ does not tell us how to compute on $\bot$ (the two obvious choices being "eagerly" and "lazily").

In software engineering terms, I would say we have a confusion between specification and implementation. The HoTT axioms for the identity types are a specification. The equation $\mathrm{ind}_{=_A}(C,c,x,x,\mathrm{refl}(x)) \equiv c(x)$ is not telling us how to compute with, or how to construct, $\mathrm{ind}_{=_A}$, but rather that however $\mathrm{ind}_{=_A}$ is "implemented", we require that it satisfy the equation. It is a separate question whether such an $\mathrm{ind}_{=_A}$ can be obtained in a constructive fashion. Lastly, I am a bit wary of how you use the word "constructive". It looks as if you think that "constructive" is the same as "defined".
Under that interpretation the Halting oracle is constructive, because its behavior is defined by the requirement we impose on it (namely that it output 1 or 0 according to whether the given machine halts). It is perfectly possible to describe objects which only exist in a non-constructive setting. Conversely, it is perfectly possible to speak constructively about properties and other things that cannot actually be computed. Here is one: the relation $H \subseteq \mathbb{N} \times \{0,1\}$ defined by$$H(n,d) \iff (d = 1 \Rightarrow \text{$n$-th machine halts}) \land (d = 0 \Rightarrow \text{$n$-th machine diverges})$$is constructive, i.e., there is nothing wrong with this definition from a constructive point of view. It just so happens that constructively one cannot show that $H$ is a total relation, and its characteristic map $\chi_H : \mathbb{N} \times \{0,1\} \to \mathsf{Prop}$ does not factor through $\mathtt{bool}$, so we cannot "compute" its values.

Addendum: The title of your question is "Is path induction constructive?" After having cleared up the difference between "constructive" and "defined", we can answer the question. Yes, path induction is known to be constructive in certain cases:

1. If we restrict to type theory without Univalence so that we can show strong normalization, then path induction and everything else is constructive because there are algorithms that perform the normalization procedure.
2. There are realizability models of type theory, which explain how every closed term in type theory corresponds to a Turing machine. However, these models satisfy Streicher's Axiom K, which rules out Univalence.
3. There is a translation of type theory (again without Univalence) into constructive set theory CZF. Once again, this validates Streicher's axiom K.
4. There is a groupoid model inside realizability models which allows us to interpret type theory without Streicher's K. This is preliminary work by Steve Awodey and myself.

We really need to sort out the constructive status of Univalence.
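To make the specification/implementation distinction concrete, here is a small sketch in Lean 4 (my own illustration; the names ind₁ and indEq are mine). Both eliminators are implemented by pattern matching, so their computation rules hold judgmentally on canonical elements, and only there:

universe u v

-- The eliminator for the unit type: ind₁ c Unit.unit reduces to c by definition,
-- but nothing forces an arbitrary term of type Unit to reduce to Unit.unit.
def ind₁ {C : Unit → Sort u} (c : C Unit.unit) : (x : Unit) → C x
  | Unit.unit => c

-- Path induction: indEq C c x x rfl reduces to c x, but the equation is silent
-- about equality proofs that are not judgmentally equal to rfl.
def indEq {A : Type u} (C : (x y : A) → x = y → Sort v)
    (c : (x : A) → C x x rfl) (x y : A) (p : x = y) : C x y p :=
  match p with
  | rfl => c x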
Complex networks is a young and active area of scientific research inspired largely by the empirical study of real-world networks such as computer networks, social networks, biological networks, ecological networks, telecommunication networks, etc. A lot of interesting and important properties of complex networks were found by statistics, such as sparsity; the small-world phenomenon (also called six degrees of separation; see, for example, the 'small world' studies of the Harvard sociologist Stanley Milgram [1], the play of Guare [2], and the hand-shaking example of Watts [3], whose father remarked that he was only six handshakes away from the president of the US); and power-law degree distributions (also called long-tail or heavy-tail distributions, or scale-free networks; Barabási and Albert [4] noticed that actor collaboration networks and the World Wide Web had power-law distributions). The field continues to develop at a brisk pace, and has brought together researchers from many areas including mathematics, physics, biology, telecommunications, computer science, sociology, epidemiology, and others [5].

To set up the foundations of complex networks, the first step is to define some new metrics. There are a lot of different metrics widely used in complex networks, such as average degree, diameter, entropy, average distance, centrality, betweenness, density, clustering coefficient, transitivity, diffusion time, etc.

How to model complex networks is one big issue in this area. Traditional well-known models such as the popular Erdős-Rényi random network model do not fit real-world networks so well in most cases. For example, although the Erdős-Rényi random network has small diameter, it has very few triangles, unlike social networks, where the friends of one person tend to be friends too. Scientists are trying to find new mathematical models for complex networks. For the small-world model, Watts and Strogatz [6] started from a ring lattice with $n$ vertices and $k$ edges per vertex, and then rewired each edge with probability $p$, connecting one end to a vertex chosen at random; this can drastically reduce the diameter. Bollobás and Chung [7] added a random matching to a ring of $n$ vertices with nearest-neighbor connections and showed the resulting graph had diameter $\sim\log n$. This model is called the BC small world; however, it is a weak model of social networks. Newman and Watts [8] (the NW small world) added a Poisson number of shortcuts with mean $n\rho/2$ instead of rewiring edges, and then attached them to randomly chosen pairs of sites. This results in a Poisson mean $\rho$ number of long-distance edges per site. For the power-law distribution models, Barabási and Albert [4] found that some real-world networks such as actor collaboration networks and the World Wide Web had degree distributions $p_k\sim Ck^{-\gamma}$ as $k\rightarrow \infty$. Usually $\gamma$ is between $2$ and $3$, but Liljeros et al. [9] found $\gamma_{male}=3.3$ and $\gamma_{female}=3.5$ by analyzing data on the sexual behavior of 4781 Swedes. Ebel et al. [10] found $\gamma=1.81$ by studying the email network of Kiel University. Newman [11][12][13] found the number of collaborators was better fit by a power law with an exponential cutoff, $p_k\sim Ck^{-\tau}\exp(-k/k_c)$, in collaboration networks. Later, scholars found other values of $\gamma$. Barabási and Albert [4] introduced the preferential attachment model to give a mechanistic explanation for the power-law distribution. Does this model still keep the small-world property?
Bollobás and Riordan [14] showed that for this model the diameter is $\sim \log n/(\log \log n)$. Chung and Lu [15][16] showed that for their corresponding model (known as the Chung-Lu model) the distance between two randomly chosen vertices is $O(\log n/(\log \log n))$. Callaway, Hopcroft, Kleinberg, Newman and Strogatz [17] introduced the CHKNS model, inspired by Barabási and Albert [4]. These are just a few examples; further details and more models can be found in [18][19]. However, it is difficult to find a model which fits real-world networks perfectly in every aspect. Research in this area is now usually data- and application-driven. For example, researchers study application problems such as influence maximization, community detection, rumor detection, recommender systems, PageRank, etc., in which they design algorithms or mechanisms and use real-world network data in simulations to test the quality of the algorithms; usually they don't care too much about the model itself. But one big issue is that, without an exact mathematical model, large-scale data is hard to handle, so sparse optimization and large-scale optimization techniques were introduced. For other applications, such as the famous kidney-exchange long-chain mechanism of Al Roth, who won the Nobel Prize, dynamic homogeneous random graphs [20], random compatibility graphs [21], sparse graphs [21], etc. were used. Those models are chosen more for mathematical tractability than for fitting real-world networks. In communication networks, such as ad hoc sensor networks, which are neither small-world nor power-law, random geometric graphs or other models are used.

[1] S. Milgram, "The small world problem," Psychology Today, vol. 2, no. 1, pp. 60–67, 1967.
[2] J. Guare, Six Degrees of Separation: A Play. New York: Vintage, 1990.
[3] Watts, Duncan J. Six Degrees: The Science of a Connected Age. WW Norton & Company, 2004.
[4] Barabási, Albert-László, and Réka Albert. "Emergence of scaling in random networks." Science 286.5439 (1999): 509-512.
[5] A. E. Motter, R. Albert (2012). "Networks in Motion". Physics Today 65 (4): 43–48.
[6] Watts, Duncan J., and Steven H. Strogatz. "Collective dynamics of 'small-world' networks." Nature 393.6684 (1998): 440-442.
[7] Bollobás, Béla, and Fan R. K. Chung. "The diameter of a cycle plus a random matching." SIAM Journal on Discrete Mathematics 1.3 (1988): 328-333.
[8] Newman, Mark EJ, and Duncan J. Watts. "Renormalization group analysis of the small-world network model." Physics Letters A 263.4 (1999): 341-346.
[9] Liljeros, Fredrik, et al. "The web of human sexual contacts." Nature 411.6840 (2001): 907-908.
[10] Ebel, Holger, Lutz-Ingo Mielsch, and Stefan Bornholdt. "Scale-free topology of e-mail networks." Physical Review E 66.3 (2002): 035103.
[11] Newman, Mark EJ. "The structure of scientific collaboration networks." Proceedings of the National Academy of Sciences 98.2 (2001): 404-409.
[12] Newman, Mark EJ. "Scientific collaboration networks. I. Network construction and fundamental results." Physical Review E 64.1 (2001): 016131.
[13] Newman, Mark EJ. "Scientific collaboration networks. II. Shortest paths, weighted networks, and centrality." Physical Review E 64.1 (2001): 016132.
[14] Bollobás, Béla, and Oliver Riordan. "The diameter of a scale-free random graph." Combinatorica 24.1 (2004): 5-34.
[15] Chung, Fan, and Linyuan Lu. "Connected components in random graphs with given expected degree sequences." Annals of Combinatorics 6.2 (2002): 125-145.
[16] Chung, Fan, and Linyuan Lu.
"The average distance in a random graph with given expected degrees." Internet Mathematics 1.1 (2004): 91-113. [17] Callaway, D. S., Hopcroft, J. E., Kleinberg, J. M., Newman, M. E., & Strogatz, S. H. (2001). Are randomly grown graphs really random?. Physical Review E, 64(4), 041902. [18] Chung, Fan RK, and Linyuan Lu. Complex graphs and networks. Vol. 107. Providence: American mathematical society, 2006. [19] Durrett, Richard. Random graph dynamics. Vol. 200. No. 7. Cambridge: Cambridge university press, 2007. [20] Ashlagi, Itai, Patrick Jaillet, and Vahideh H. Manshadi. "Kidney exchange in dynamic sparse heterogenous pools." arXiv preprint arXiv:1301.3509 (2013). [21] Ashlagi, Itai, et al. The need for (long) chains in kidney exchange. No. w18202. National Bureau of Economic Research, 2012.
This vignette illustrates the basic usage of the knockoff package with Model-X knockoffs. In this scenario we assume that the distribution of the predictors is known (or that it can be well approximated), but we make no assumptions on the conditional distribution of the response. For simplicity, we will use synthetic data constructed from a linear model such that the response only depends on a small fraction of the variables.

set.seed(1234)

# Problem parameters
n = 1000          # number of observations
p = 1000          # number of variables
k = 60            # number of variables with nonzero coefficients
amplitude = 4.5   # signal amplitude (for noise level = 1)

# Generate the variables from a multivariate normal distribution
mu = rep(0,p)
rho = 0.25
Sigma = toeplitz(rho^(0:(p-1)))
X = matrix(rnorm(n*p),n) %*% chol(Sigma)

# Generate the response from a linear model
nonzero = sample(p, k)
beta = amplitude * (1:p %in% nonzero) / sqrt(n)
y.sample = function(X) X %*% beta + rnorm(n)
y = y.sample(X)

To begin, we call knockoff.filter with all the default settings.

library(knockoff)
result = knockoff.filter(X, y)

We can display the results with

print(result)
## Call:
## knockoff.filter(X = X, y = y)
##
## Selected variables:
##  [1]   3   9  40  44  46  61  67  78  85 108 148 153 172 173 177 210 223
## [18] 238 248 281 295 301 302 317 319 326 334 343 360 364 378 384 389 421
## [35] 426 428 451 494 506 510 528 534 557 559 595 596 617 668 676 682 708
## [52] 770 775 787 836 844 875 893 906 913 931 937 953 959

The default value for the target false discovery rate is 0.1. In this experiment the false discovery proportion is

fdp = function(selected) sum(beta[selected] == 0) / max(1, length(selected))
fdp(result$selected)
## [1] 0.171875

By default, the knockoff filter creates model-X second-order Gaussian knockoffs. This construction estimates from the data the mean \(\mu\) and the covariance \(\Sigma\) of the rows of \(X\), instead of using the true parameters (\(\mu, \Sigma\)) from which the variables were sampled. The knockoff package also includes other knockoff construction methods, all of which have names prefixed with knockoff.create. In the next snippet, we generate knockoffs using the true model parameters.

gaussian_knockoffs = function(X) create.gaussian(X, mu, Sigma)
result = knockoff.filter(X, y, knockoffs=gaussian_knockoffs)
print(result)
## Call:
## knockoff.filter(X = X, y = y, knockoffs = gaussian_knockoffs)
##
## Selected variables:
##  [1]   3   9  40  46  61  85 108 148 153 172 173 177 210 223 238 248 281
## [18] 295 301 302 319 326 334 343 360 364 378 384 389 421 426 428 451 494
## [35] 506 510 528 538 557 559 595 668 676 682 702 708 770 775 787 836 844
## [52] 893 906 913 931 937 953 959

Now the false discovery proportion is

fdp(result$selected)
## [1] 0.1034483

By default, the knockoff filter uses a test statistic based on the lasso. Specifically, it uses the statistic stat.glmnet_coefdiff, which computes \[W_j = |Z_j| - |\tilde{Z}_j|\] where \(Z_j\) and \(\tilde{Z}_j\) are the lasso coefficient estimates for the jth variable and its knockoff, respectively. The value of the regularization parameter \(\lambda\) is selected by cross-validation and computed with glmnet. Several other built-in statistics are available, all of which have names prefixed with stat. For example, we can use statistics based on random forests. In addition to choosing different statistics, we can also vary the target FDR level (e.g. we now increase it to 0.2).
result = knockoff.filter(X, y, knockoffs = gaussian_knockoffs, statistic = stat.random_forest, fdr=0.2)
print(result)
## Call:
## knockoff.filter(X = X, y = y, knockoffs = gaussian_knockoffs,
##     statistic = stat.random_forest, fdr = 0.2)
##
## Selected variables:
##  [1]   9  85 108 148 172 173 210 223 248 301 343 378 421 426 428 559 595
## [18] 668 708 785 931 953 959

fdp(result$selected)
## [1] 0.04347826

In addition to using the predefined test statistics, it is also possible to use your own custom test statistics. To illustrate this functionality, we implement one of the simplest test statistics from the original knockoff filter paper, namely \[ W_j = \left|X_j^\top \cdot y\right| - \left|\tilde{X}_j^\top \cdot y\right|. \]

my_knockoff_stat = function(X, X_k, y) {
  abs(t(X) %*% y) - abs(t(X_k) %*% y)
}
result = knockoff.filter(X, y, knockoffs = gaussian_knockoffs, statistic = my_knockoff_stat)
print(result)
## Call:
## knockoff.filter(X = X, y = y, knockoffs = gaussian_knockoffs,
##     statistic = my_knockoff_stat)
##
## Selected variables:
## integer(0)

fdp(result$selected)
## [1] 0

As another example, we show how to customize the grid of \(\lambda\)'s used to compute the lasso path in the default test statistic.

my_lasso_stat = function(...) stat.glmnet_coefdiff(..., nlambda=100)
result = knockoff.filter(X, y, knockoffs = gaussian_knockoffs, statistic = my_lasso_stat)
print(result)
## Call:
## knockoff.filter(X = X, y = y, knockoffs = gaussian_knockoffs,
##     statistic = my_lasso_stat)
##
## Selected variables:
##  [1]   3   9  40  44  46  61  78  85 108 148 153 172 173 177 210 223 238
## [18] 248 281 295 301 302 319 326 334 343 360 364 378 384 389 421 426 428
## [35] 451 494 506 528 538 557 559 595 596 668 682 702 708 770 775 836 844
## [52] 893 906 913 931 937 953 959

fdp(result$selected)
## [1] 0.1206897

The nlambda parameter is passed by stat.glmnet_coefdiff to glmnet, which is used to compute the lasso path. For more information about this and other parameters, see the documentation for stat.glmnet_coefdiff or glmnet.glmnet.

In addition to using the predefined procedures for constructing knockoff variables, it is also possible to create your own knockoffs. To illustrate this functionality, we implement a simple wrapper for the construction of second-order Model-X knockoffs.

create_knockoffs = function(X) {
  create.second_order(X, shrink=T)
}
result = knockoff.filter(X, y, knockoffs=create_knockoffs)
print(result)
## Call:
## knockoff.filter(X = X, y = y, knockoffs = create_knockoffs)
##
## Selected variables:
##  [1]   9  40  44  61  78  85 108 148 153 172 173 177 210 223 238 248 295
## [18] 301 319 326 334 343 360 364 378 384 389 421 426 428 451 494 506 557
## [35] 559 595 651 668 702 708 770 775 844 893 906 913 931 937 953 959

fdp(result$selected)
## [1] 0.08

The knockoff package supports two main styles of knockoff variables, semidefinite programming (SDP) knockoffs (the default) and equi-correlated knockoffs. Though more computationally expensive, the SDP knockoffs are statistically superior by having higher power. To create SDP knockoffs, this package relies on the R library Rdsdp to efficiently solve the semidefinite program. In high-dimensional settings, this program becomes computationally intractable. A solution is then offered by approximate SDP (ASDP) knockoffs, which address this issue by solving a simpler relaxed problem based on a block-diagonal approximation of the covariance matrix. By default, the knockoff filter uses SDP knockoffs if \(p<500\) and ASDP knockoffs otherwise.
In this example we generate second-order Gaussian knockoffs using the estimated model parameters and the full SDP construction. Then, we run the knockoff filter as usual.

gaussian_knockoffs = function(X) create.second_order(X, method='sdp', shrink=T)
result = knockoff.filter(X, y, knockoffs = gaussian_knockoffs)
print(result)
## Call:
## knockoff.filter(X = X, y = y, knockoffs = gaussian_knockoffs)
##
## Selected variables:
##  [1]   9  40  46  61  78  85 108 148 172 173 177 210 223 238 248 281 295
## [18] 301 302 317 319 326 334 343 360 364 378 384 389 421 426 428 451 494
## [35] 506 528 534 557 559 595 596 668 682 702 708 770 775 844 893 906 913
## [52] 931 937 953 959

fdp(result$selected)
## [1] 0.1090909
Let's say I have a normal, space-suited human who, for various reasons, must climb a very long, indestructible ladder in space. Air, heat, food, water and waste are already accounted for. What I want to know is what their rate of progress up the ladder will be. How many kilometers a day can they travel? How much rest do they need? Also, how long before ambient radiation begins to affect them? How likely are they to be hit by micro-meteors or other space debris? "The" Assumption Since your human is in a spacesuit, I will assume the ladder is in a (near) vacuum. Zero gravity v. microgravity The use of "zero gravity" versus microgravity—which is the common feeling of "weightlessness" felt by astronauts in orbit—is an important distinction. I will assume you mean microgravity. The gravity they experience is almost as high as on the surface of the Earth, and they are falling! The trick is, they're moving sideways (in orbit) fast enough that they basically fall "around" the Earth. That's kind of an oversimplification, but with the links I've provided, you can get into as much detail as you like. Now, back to your guy/gal in the space suit. Acceleration Since the human doesn't have any gravity to pull them down, or air drag to reduce their velocity, they could, in theory, accelerate to very fast speeds. Each time they push "down" on a ladder rung, the equal-and-opposite reaction force (see Newton's third law) imparts a force on them in the "upward" direction, which is felt as acceleration. The acceleration in this case is momentary; it lasts only for as long as they are pushing on the ladder in the opposite direction. Think of it like a spaceship firing its maneuvering thruster. That momentary acceleration imparts a permanent increase in velocity (or at least until some other force acts to accelerate them in a different direction). So in a sense, they're not "climbing" the ladder so much as accelerating themselves along. In real EVAs (spacewalks), astronauts always tether themselves to the frame of what they're working on, so they don't "accelerate" themselves to deep space. So, realistically, safety concerns would probably be the bottleneck. However, there's a way to get those tethers out of the way: maybe you build a cylindrical cage around the ladder to obviate the need for tethering. You still have a problem with protrusions on the suit getting caught or sheared off on the way "up" if the velocity is too high, but I'll recommend some high(er) tech solutions for that. (So far, all you need is a decent welder.) Micro meteors This risk is very minimal, at least near Earth. If you're talking about another planetary system, I would need to know more to say with certainty. But especially given the cylindrical cage idea, above, most small meteor/debris impacts would impact/deflect/ablate on the cage itself, rather than injuring the astronaut or her suit. Km/day This is where I take all of my soft descriptive text and turn it into hard numbers. A useful equation for acceleration is: $$a = \frac{\Delta v}{\Delta t}$$ Reasonable values? Let's say your astronaut can grip that rung for 0.5 seconds and push "down". The force he can exert would certainly be less than his own mass (including suit) expressed as a kilogram-force, but let's say he can manage 50 kg-force, which we round up to 500 N; with astronaut plus suit massing on the order of 100 kg, that gives $a \approx 5\ m/s^2$. With some unit conversions and basic algebra, we get: $$\Delta v = a \cdot \Delta t = 5 m/s^2 \cdot 0.5 s = 2.5 m/s$$ So every time he pushes off a rung, our astronaut's speed increases by $2.5 m/s$.
Now there's a limit to how many times he can do that, because at a certain speed, he's more likely to break a finger than actually increase his speed, plus the interval ($\Delta t$) will decrease. Think of this like trying to "speed up" a thrown baseball already in the air. Thus, it's a little hard to estimate, but let's say, from a standstill in the ladder's reference frame, the astronaut could manage ten good, equal pulls on the ladder. Her speed would increase to a final velocity of $2.5 m/s \cdot 10 = 25 m/s = 90 km/h$, which is a reasonable freeway speed in many cities (and about as fast as I would reasonably like to travel in a narrow tube, in a spacesuit!). Air, heat, food, water and waste are already accounted for. ... How many kilometers a day can they travel ? How much rest do they need? Distance: $90 \frac{km}{h} \cdot 24 \frac{h}{day} = 2,160 \frac{km}{day}$ Rest No more than usual. Those "ten pulls" would be nothing for a trained astronaut. In fact, most terrestrial office workers could pull that off. Deceleration At the end of the ladder, the same process must happen in reverse: the astronaut must find a way to decelerate (which simply means acceleration in the opposite direction), so they don't go head-first into a bulkhead at 90 km/h. There are many methods they could use, such as manual (lightly grab the rungs to slow down), or mechanical (the "station" has built-in decelerator flaps near the end that work like a series of nets to gently slow down the astronaut). High tech options To keep the astronaut aligned in the center of the tube, you could use strong (electro)magnets in a ring to repulse the astronaut, pushing her into the center of the cylinder. Her suit would also need a magnet (strong earth magnet would do well enough). Going a step further, you could even use those magnets to accelerate (and decelerate) your astronaut automatically, similar to a coilgun, which would allow for very high speeds (safer, due to the magnetic containment idea, above). If safety is sufficiently addressed, the same daily journey could easily traverse a 3-5 times the distance of a manual launch, however any failure in the magnetics could easily prove fatal. Rest No more than usual! With either solution, the manual effort required is minimal (just a few minutes of work), and only related to stopping and starting the journey. Radiation I'm not exactly sure where your ladder will be located (it makes a big difference), but for reference, have a look at the Health Threat from Cosmic Rays on Wikipedia; it has some charts that compare the levels with mean sea level dose, with durations listed. You might have to do a little arithmetic to make it relevant to your story, but you can always ask another question if you run into any uncertainty there. Conclusion I believe I've hit all of your sub-questions as accurately as I possibly can. My goal here (as with many of my answers) is not only to give you the specific answers you're looking for, but provide a little bit of the background information so you can understand the basics a little better, and be able to adapt them as needed to fit your story/work. Type_outcast has really explained the crux of the matter. Here I would comment on a few flaws in the environment presented in the question. In open space (as in, with no artificial gravity) there is no (negligible) gravity so the very idea of building a "ladder" is questionable. You can achieve the same results with simply having one metal pole of 3-4 inch diameter. 
The rungs would actually make it harder for the astronaut to traverse, because they would make it hard for him/her to tether to the ladder while moving along it. If you have one really long, smooth metallic pipe, the astronaut can easily build a loose curl with his/her rope and attach firmly to the pole without any hindrance to motion. You only need to hold the pipe once and push it down to get yourself moving upwards forever. Do it a few more times and you would be moving at a car's speed on a highway. I will not go into details as that has been sufficiently explained by Type_outcast. Rest? None is required, really. Since your astronaut is going on a free trip, all his/her time is rest already. I cannot say anything about the probability of getting hit by a meteor, since it really depends on your setting. If your dude/girl is going anywhere between Mars and Jupiter, good luck is all that could save him/her. However, if he is going between Earth and Venus, there would be other concerns (heat) to attend to, instead of meteors. One important thing to notice about meteors is that the damage done does not only depend on the speed of the falling meteor, but also on the speed of the astronaut. It is the relative speed of the two (along with their masses) which determines the damage done. Type_outcast's estimate breaks at the very first assumption, that the astronaut may grip the ladder for half a second. On the contrary, the faster you go, the less time you have to accelerate. The ultimate speed is limited precisely by how fast you can move your limbs. Some data indicate that a world-class boxer delivers a punch at 11 to 14 m/s while an average person may reach 7 m/s. In a space suit I guess 5 m/s (i.e. about 10 miles per hour) is the best one can hope for.
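For reference, the arithmetic behind Type_outcast's estimate can be reproduced in a few lines (a Python sketch; all the inputs are the answer's assumed round numbers, not measured values):

force_n = 500.0   # push force per pull, N (the answer's assumption)
mass_kg = 100.0   # astronaut plus suit, kg (implied by a = 5 m/s^2)
pull_s  = 0.5     # duration of one pull, s
pulls   = 10      # good pulls before the rungs move too fast to grip

dv = force_n / mass_kg * pull_s        # velocity gained per pull: 2.5 m/s
v = pulls * dv                         # cruise speed: 25 m/s
km_per_day = v * 3600 * 24 / 1000      # coasting distance per day
print(dv, v * 3.6, km_per_day)         # 2.5  90.0 (km/h)  2160.0 (km/day)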
Show using the Poisson distribution that $$\lim_{n \to +\infty} e^{-n} \sum_{k=1}^{n}\frac{n^k}{k!} = \frac {1}{2}$$

By the definition of the Poisson distribution, if in a given interval the expected number of occurrences of some event is $\lambda$, the probability that exactly $k$ such events happen is $$ \frac {\lambda^k e^{-\lambda}}{k!}. $$ Let $\lambda = n$. Then the probability that the Poisson variable $X_n$ with parameter $\lambda$ takes a value between $0$ and $n$ is $$ \mathbb P(X_n \le n) = e^{-n} \sum_{k=0}^n \frac{n^k}{k!}. $$ (Note that the $k=0$ term equals $e^{-n} \to 0$, so it makes no difference to the limit whether the sum starts at $k=0$ or $k=1$.) If $Y_i \sim \mathrm{Poi}(1)$ and the random variables $Y_i$ are independent, then $\sum\limits_{i=1}^n Y_i \sim \mathrm{Poi}(n) \sim X_n$, hence the probability we are looking for is actually $$ \mathbb P\left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P( Y_1 + \dots + Y_n \le n) = \mathbb P(X_n \le n). $$ By the central limit theorem, the variable $\frac {Y_1 + \dots + Y_n - n}{\sqrt n}$ converges in distribution towards the Gaussian distribution $\mathscr N(0, 1)$. The point is, since the Gaussian has mean $0$ and I want to know when it is less than or equal to $0$, the variance doesn't matter; the result is $\frac 12$. Therefore, $$ \lim_{n \to \infty} e^{-n} \sum_{k=0}^{n} \frac{n^k}{k!} = \lim_{n \to \infty} \mathbb P(X_n \le n) = \lim_{n \to \infty} \mathbb P \left( \frac{Y_1 + \dots + Y_n - n}{\sqrt n} \le 0 \right) = \mathbb P(\mathscr N(0, 1) \le 0) = \frac 12. $$ Hope that helps,
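A quick numerical sanity check (a sketch assuming scipy is available; poisson.cdf(n, n) is exactly the sum in question):

from scipy.stats import poisson

for n in (10, 100, 1000, 10000):
    print(n, poisson.cdf(n, n))
# the values drift slowly toward 0.5; the convergence is of order 1/sqrt(n)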
First I have defined the exterior algebra of a module $M$ as the quotient $T(M)/A(M)$, where $T(M)$ is the tensor algebra of $M$ and $A(M)$ is the ideal generated by all elements of the form $m\otimes m$ for $m\in M$. The exterior algebra is graded, with $k$'th homogeneous component $\bigwedge ^k(M)=T^k(M)/A^k(M)$ where $A^k(M)=A(M)\cap T^k(M)$. Elements of $A^k(M)$ are finite sums of elements of the form $m_1\otimes ...\otimes m_{i-1}\otimes m\otimes m\otimes m_{i+2}\otimes ...\otimes m_k$ for some $1\leq i<k$ (right?) Now I want to show that the $k$'th exterior power of $M$ is equal to $T^k(M)/J^k(M)$, where $J^k(M)$ is the submodule of $T^k(M)$ generated by all elements of the form $m_1\otimes ...\otimes m_k$ where $m_i=m_j$ for some $i\neq j$. The proof (Dummit & Foote) says this: The $k$-tensors $A^k(M)$ in the ideal $A(M)$ are finite sums of elements of the form $m_1\otimes ...\otimes m_{i-1}\otimes m\otimes m\otimes m_{i+2}\otimes ...\otimes m_k$ for some $1\leq i<k$, each of which is a $k$-tensor with two equal entries and so is an element of $J^k(M)$, i.e. $A^k(M)\subseteq J^k(M)$. I understand up to here. What I am struggling to see is how this implies that $T^k(M)/A^k(M)\subseteq T^k(M)/J^k(M)$. I feel like I've misinterpreted something or am missing something obvious. Many thanks!
Search Now showing items 1-10 of 20 Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10) The production of the strange and double-strange baryon resonances ((1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y|< 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ... Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09) The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of pion, kaon and proton production in proton–proton collisions at √s = 7 TeV (Springer, 2015-05-27) The measurement of primary π±, K±, p and p¯¯¯ production at mid-rapidity (|y|< 0.5) in proton–proton collisions at s√ = 7 TeV performed with a large ion collider experiment at the large hadron collider (LHC) is reported. ... Two-pion femtoscopy in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV (American Physical Society, 2015-03) We report the results of the femtoscopic analysis of pairs of identical pions measured in p-Pb collisions at $\sqrt{s_{\mathrm{NN}}}=5.02$ TeV. Femtoscopic radii are determined as a function of event multiplicity and pair ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06) The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ... Centrality dependence of high-$p_{\rm T}$ D meson suppression in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 2.76 TeV (Springer, 2015-11) The nuclear modification factor, $R_{\rm AA}$, of the prompt charmed mesons ${\rm D^0}$, ${\rm D^+}$ and ${\rm D^{*+}}$, and their antiparticles, was measured with the ALICE detector in Pb-Pb collisions at a centre-of-mass ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. 
The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ...
De Bruijn-Newman constant (revision as of 13:59, 25 January 2018)

For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]).

[math]t=0[/math]: When [math]t=0[/math], one has [math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math] where [math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math] is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function.

[math]t\gt0[/math]: For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3]. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]). In particular we have [math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math] for any [math]t[/math]. The zeroes [math]z_j(t)[/math] of [math]H_t[/math] (formally, at least) obey the system of ODE [math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math] where the sum may have to be interpreted in a principal value sense. (See for instance [CSV1994, Lemma 2.4]. This lemma assumes that [math]t \gt \Lambda[/math], but it is likely that one can extend to other [math]t \geq 0[/math] as well.)

Bibliography
[B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226.
[CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129.
[KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer
[N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251.
[RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914
Let $f(x)$ be a function which is differentiable on $[0,1]$ with $f(0)=0$ and $f(1)=1$. Show that for every $n\in \Bbb N$ there exist numbers $x_1,x_2,\ldots,x_n\in [0,1]$ such that $$ \sum_{k = 1}^n \frac{1}{f' (x_k)} = n $$ I think the mean value theorem should be applied. So there exists $x_1$ in $[0,1]$ such that $f'(x_1) = \frac{f(1) - f(0)}{1-0} =1$, and there exists $x_2$ in $[0,x_1]$ such that $f'(x_2) = \frac{f(x_1) - f(0)}{x_1-0} = \frac{f(x_1)}{x_1}$, and so on for $x_3, x_4, \ldots, x_n$, so we have the sum $$1+\frac{x_1}{f(x_1)} + \frac{x_2}{f(x_2)} +\cdots+\frac{x_{n-1}}{f(x_{n-1})},$$ and from here I have no idea what to do. I was wondering if anyone could be so kind as to help?

By the intermediate value property of continuous functions, there are $0=x_0,x_1,\ldots,x_{n-1},x_n=1\in[0,1]$ such that $$ f(x_i) = \frac{i}{n} $$ for every $i\in\{0,1,\ldots,n\}$, and we may assume WLOG $x_0<x_1<\ldots<x_n$. Moreover, there is some $\xi_i$ in the interval $(x_{i-1},x_i)$ such that: $$ \frac{f(x_{i})-f(x_{i-1})}{x_{i}-x_{i-1}}=f'(\xi_i), $$ that is: $$ \frac{1}{f'(\xi_i)} = n(x_i-x_{i-1}).$$ By summing those identities over $i\in\{1,\ldots,n\}$ we get: $$ \sum_{i=1}^{n}\frac{1}{f'(\xi_i)}= n(x_n-x_0) = \color{red}{n} $$ as wanted.
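A quick numerical illustration of the argument above, for the concrete choice $f(x)=x^2$ (this function and the value of $n$ are illustrative choices, not part of the question):

from scipy.optimize import brentq

# Illustration with f(x) = x^2, which satisfies f(0) = 0 and f(1) = 1.
f = lambda x: x ** 2
fp = lambda x: 2 * x          # f'
n = 7

# Points 0 = x_0 < x_1 < ... < x_n = 1 with f(x_i) = i/n; for this f, x_i = sqrt(i/n).
xs = [(i / n) ** 0.5 for i in range(n + 1)]

total = 0.0
for i in range(1, n + 1):
    slope = (f(xs[i]) - f(xs[i - 1])) / (xs[i] - xs[i - 1])
    # Mean value point xi in (x_{i-1}, x_i) with f'(xi) = slope; here f' is
    # continuous and strictly increasing, so brentq brackets the root.
    xi = brentq(lambda x: fp(x) - slope, xs[i - 1], xs[i])
    total += 1.0 / fp(xi)

print(total)   # prints 7.0 (up to rounding): the sum of 1/f'(xi_i) equals n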
$ \newcommand{\ra}[1]{\kern-1.5ex\xrightarrow{\ \ #1\ \ }\phantom{}\kern-1.5ex} \newcommand{\ras}[1]{\kern-1.5ex\xrightarrow{\ \ \smash{#1}\ \ }\phantom{}\kern-1.5ex} \newcommand{\da}[1]{\bigg\downarrow\raise.5ex\rlap{\scriptstyle#1}}$ Suppose we have $CW$-complex $X$. All the self-homotopy equivalences form a monoid, denote it by $G$. Question: is there any good way to construct another space $\widetilde X$, such that $\widetilde X$ is homotopy equivalent to $X$ and there exists a homomorphism from $G$ to the group $\widetilde G$ of all self-homeomorphisms of $\widetilde X$? A good way means that, (besides the functoriality) for any homotopy between elements of $G$ we should have an isotopy between corresponding elements of $\widetilde G$. Also, for every $f\in G$ the following diagram should be homotopy-commutative: $$ \begin{array}{c} X & \ra{f} & X \\ \da{e} & & \da{e} \\ \widetilde X & \ra{\widetilde f} & \widetilde X\end{array} $$ Here $e$ is a fixed homotopy equivalence between $X$ and $\widetilde X$. (I have one idea how to change all the homotopy equivalence to the homeomorphisms using mapping telescope, but I don't know what to do with homotopies and isotopies)
Here I have some confusing points about the definition of flux in the projective construction. For example, consider the same mean-field Hamiltonian in my previous question, and assume the $2\times 2$ complex matrix $\chi_{ij}$ has the form $\begin{pmatrix} t_{ij}& \Delta_{ij}\\ \Delta_{ij}^* & -t_{ij}^*\end{pmatrix}$. Consider a loop with $n$ links on the 2D lattice, the flux through this loop can be defined as the phase of $tr(\chi_1 \cdots\chi_n)$, where $\chi_i=\begin{pmatrix} t_i& \Delta_i\\ \Delta_i^* & -t_i^*\end{pmatrix},i=1,2,...,n$ representing the $i$ th link. And due to the identity $\chi_i^*=-\sigma_y\chi_i\sigma_y$, it's easy to show that $[tr(\chi_1 \cdots\chi_n)]^*=(-1)^ntr(\chi_1 \cdots\chi_n)$, which means that for an even loop, the flux is always $0$ or $\pi$; while for an odd loop, the flux is always $\pm\frac{\pi}{2}$. My questions are as follows: (1)When $\chi_{ij}=\begin{pmatrix} t_{ij}& 0\\ 0 & -t_{ij}^*\end{pmatrix}$, the mean-field Hamiltonian can be rewritten as $H_{MF}=\sum(t_{ij}f_{i\sigma}^\dagger f_{j\sigma}+H.c.)$, if we define the flux through a loop $i\rightarrow j\rightarrow k\rightarrow \cdots\rightarrow l\rightarrow i$ as the phase of $t_{ij}t_{jk}\cdots t_{li}$, then the flux may take any real value in addition to the above only allowed values $0,\pi,\pm\frac{\pi}{2}$. So which definition of flux is correct? (2)If $tr(\chi_1 \cdots\chi_n)=0$, how we define the flux(now the phase is highly uncertain)? Thank you very much.
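A quick numerical check of the parity constraint stated above (an added sketch; the random link entries are arbitrary): the phase of $tr(\chi_1\cdots\chi_n)$ comes out as $0$ or $\pi$ for even $n$ and $\pm\frac{\pi}{2}$ for odd $n$.

import numpy as np

rng = np.random.default_rng(0)

def random_link():
    # A link matrix of the stated form [[t, Delta], [Delta*, -t*]] with random complex entries.
    t, D = rng.normal(size=2) + 1j * rng.normal(size=2)
    return np.array([[t, D], [np.conj(D), -np.conj(t)]])

for n in (2, 3, 4, 5):
    prod = np.eye(2, dtype=complex)
    for _ in range(n):
        prod = prod @ random_link()
    phase = np.angle(np.trace(prod)) / np.pi
    # Expect phase/pi close to 0 or +-1 for even n, and +-1/2 for odd n (up to rounding).
    print(n, round(phase, 6))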
(Bug?) HP48 Equation Library 02-14-2017, 03:36 AM (This post was last modified: 02-14-2017 04:15 AM by Han.) Post: #21 RE: (Bug?) HP48 Equation Library Thank you all -- Brad Barton, SlideRule, and rprosperi -- for your insights. It has helped me understand the formulas much better. So from what I gathered, \[ \text{if} \quad \quad f(x) = \begin{cases} P\cdot (x-a), & x \le a \\ 0, & x>a \end{cases} \quad \quad \text{and} \quad \quad g(x) = \begin{cases} M, & x\le c\\ 0, & x> c \end{cases} \quad \quad \text{then}\quad \quad Mx = f(x) + g(x) - \frac{w}{2}\cdot (L^2-2\cdot L \cdot x + x^2) \] So \( c \) is indeed hidden in the equation that shown in the AUR and in the display of the equation within the Equation Library. I was trying to put together a list of formulas for a current project of mine, but it looks like it will be more than just copying and pasting the formulas from the AUR. I guess I will have to either learn or brush up on a lot of topics before this project is complete. Again, thank you all for your insights and patience. Graph 3D | QPI | SolveSys 02-14-2017, 03:43 AM Post: #22 RE: (Bug?) HP48 Equation Library (02-14-2017 03:09 AM)rprosperi Wrote: c does matter to understand the externally applied Moment although it is not used in the core Bending Moment calculation. Basically, if c is less than x or greater than x determines the role of the applied moment in the overall calculation. In the example, try plugging-in different values of c while holding all other variables constant and you will see the resulting Mx changes when c exceeds x. Yes, this behavior is confirmed. It is because the moment adds to the reaction forces when c exceeds x. If you send the equation to the stack, you can see the IFTE logic for that case. Thanks for making us look closer Bob. Brad 02-14-2017, 03:57 AM Post: #23 RE: (Bug?) HP48 Equation Library (02-14-2017 03:43 AM)Brad Barton Wrote: Yes, this behavior is confirmed. It is because the moment adds to the reaction forces when c exceeds x. If you send the equation to the stack, you can see the IFTE logic for that case. This would have saved so much time/headache. It never occurred to me to even look at the formula in the calculator. I just assumed it was going to reflect the formula in the book. Graph 3D | QPI | SolveSys 02-14-2017, 04:07 AM Post: #24 RE: (Bug?) HP48 Equation Library I want to say something about 1,000 monkeys on 1,000 typewriters eventually pounding out Hamlet. A similar amount of thought went into this "discovery". Lol. Hopefully you can use it to make extracting the rest of the equations much less painful. Thanks again for sharing your work with us. Brad PS. Might need a slight edit in the last term of your equation above. 02-14-2017, 04:16 AM Post: #25 RE: (Bug?) HP48 Equation Library (02-14-2017 04:07 AM)Brad Barton Wrote: PS. Might need a slight edit in the last term of your equation above. Thanks! I hope my fix was indeed just that. Graph 3D | QPI | SolveSys 02-14-2017, 03:22 PM Post: #26 RE: (Bug?) HP48 Equation Library (02-14-2017 03:36 AM)Han Wrote: Thank you all ... for your insights. It has helped me understand ... a list of formulas ... more than just copying and pasting the formulas... Again, thank you all for ... patience. Han, keep chuggin & pluggin away; we ALL benefit from honest endeavors. BEST! SlideRule 02-14-2017, 04:29 PM Post: #27 RE: (Bug?) 
HP48 Equation Library Thanks are due mostly to you for your ongoing contributions to the folks here; access to the 48 Eq Lib on Prime will likely get me to use it again, so thanks for that too! (02-14-2017 03:43 AM)Brad Barton Wrote: If you send the equation to the stack, you can see the IFTE logic for that case. Thanks for pointing that out Brad, I honestly never noticed one could retrieve the internal equations to examine. Maybe I did learn it 25 years ago and forgot since, but it seems it's something I should have recalled if so. Lots of topics discussed here are in that category lately, has anyone else noticed that? --Bob Prosperi 02-14-2017, 05:20 PM Post: #28 RE: (Bug?) HP48 Equation Library (02-14-2017 03:43 AM)Brad Barton Wrote: If you send the equation to the stack, you can see the IFTE logic for that case. I do not have the HP48G manual, but this is from P. 786 of the HP50g User's Guide: Quote:All equations have a display form and some applications also have a John 02-14-2017, 07:08 PM Post: #29 RE: (Bug?) HP48 Equation Library (02-14-2017 04:29 PM)rprosperi Wrote: Thanks are due mostly to you for your ongoing contributions to the folks here; access to the 48 Eq Lib on Prime will likely get me to use it again, so thanks for that too! In the interest of providing a useful list of equations, how should we approach the provision of these formulas? Would users find it more useful to have a single formula using piecewise functions as shown in one of the examples above? Or would it be better to separate the single formula into several "copies", and then allow the user to select the one that best fits their parameters? In other words, provide a single formula: \[ Mx = \begin{cases} P\cdot (x-a), & x \le a\\ 0, & x>a \end{cases} \quad + \quad \begin{cases} M, & x\le c\\ 0, & x>c \end{cases} \quad + \quad \frac{w}{2}\cdot (L^2−2\cdot L \cdot x+x^2) \] or provide all the cases and have the user pick one among \begin{align} Mx & = P\cdot (x-a) + M + \frac{w}{2}\cdot (L^2−2\cdot L \cdot x+x^2) \\ Mx & = P\cdot (x-a) + \frac{w}{2}\cdot (L^2−2\cdot L \cdot x+x^2) \\ Mx & = M + \frac{w}{2}\cdot (L^2−2\cdot L \cdot x+x^2) \\ Mx & = \frac{w}{2}\cdot (L^2−2\cdot L \cdot x+x^2) \end{align} These formulas, in context, are most likely going to used in the fashion of computing \( Mx \) while knowing the other parameters (not really solving as much as evaluating). However, as an equation, one could in theory provide a value for \( Mx \) and solve any of the other variables. (Again, unlikely in this context, but theoretically possible). The HP48 fails spectacularly, solving for \( c \) to be 9.999...E499 and I imagine the solver I have implemented to do no better using the first formula (the one with piecewise functions). I will definitely have to include some warnings either way about using the solver for such problems. In the former case, we have discontinuities that would make Newton's method possibly fail. In the latter case, these formulas, taken individually, imply \( Mx \) is continuous in the remaining variables (which is clearly not the case for \(x \)). Graph 3D | QPI | SolveSys 02-14-2017, 08:25 PM (This post was last modified: 02-14-2017 08:28 PM by Brad Barton.) Post: #30 RE: (Bug?) HP48 Equation Library My personal preference is the piecewise function. It's a more succinct representation of the math involved, and also points out the importance of the relative values of x, a and c. This is JMHO though, so there may be very good reasons to do it the other way. 
Perhaps a decision tree keyed to these relative values could be used to select the equation (of the 4 listed at the bottom) that is presented to the solver. You'd have to have some error handling if, for example, the user asked for a solution for P when no value for a was entered, but I'm not telling you anything new. 02-14-2017, 09:28 PM Post: #31 RE: (Bug?) HP48 Equation Library (02-14-2017 07:08 PM)Han Wrote: In the interest of providing a useful list of equations ... more useful to have a single formula ... or provide all the cases ...There are additional external loading cases not accounted for in the 48 routine, such as partially distributed load, varying uniform load, partially distributed varying load, as well as multiple loads of the same type but at additional locations. If the LAW of SUPERPOSITION is deployed, then multiple external loading's can be assessed, both as a SUM of MOMENTS at point x on length L as well as individual but varying magnitude and location contributions to ∑Mx. Just my 3¢ worth. BEST! SlideRule 02-15-2017, 01:53 AM Post: #32 RE: (Bug?) HP48 Equation Library (02-14-2017 08:25 PM)Brad Barton Wrote: My personal preference is the piecewise function. It's a more succinct representation of the math involved, and also points out the importance of the relative values of x, a and c. This is JMHO though, so there may be very good reasons to do it the other way. 1 + ... mainly because of the bolded section, but also because many (most?) users of these equations are likely to not know which best fits his/her conditions. The latter style is likely preferred by mathematicians, but as someone noted above, Engineers tend to want to 'plug values into the magic equation' worrying more about the quality of their measurements than the underlying math. I can't imagine a scenario (in the real world) where one would be trying to solve for c, but this may not apply for other equations, so agree some guidance is likely useful regarding these types of parameters. Who'da thought we'd be discussing Statics of all things. I was relatively sure by sophomore year that I'd never be discussing that again. --Bob Prosperi 02-15-2017, 10:03 PM Post: #33 RE: (Bug?) HP48 Equation Library (02-13-2017 04:03 PM)Vtile Wrote: Some fields of engineering are also extremely conservative, what comes how things are done and what formulas are used.That is pretty sure! I'm a fluid flow engineer and I use by myself all the formulas in most-useful-format - as I learned many years before. But: sometimes, for quick calculations I use some personal trick: for example the Blasius formula is f=0.3164/(Re^0.25), but 0.3164 is approximately 1/(100^0.25)=0.3162, therefore f=0.3164/(Re^0.25) = (100×Re)^(-0.25), which is easy to memorize and calculate ([EEX] [2] [×] [1/x] [SQRT] [SQRT] on my 15C). Another hydraulic or mechanical engineer specific thing is the units: I always use Pa as pressure but for the pumps or fans I use it as J/m^3 specific energy (1_Pa=1_N/m^2=1_(N×m)/(m^3)), or for acceleration of gravity is not 9.81_m/s^2, this is 9.81_N/kg (=the force which is works on every 1 kg in the Earth gravitational field). And of course, the area of the circle is D^2×PI/4, what else?! Csaba 02-19-2017, 10:53 PM Post: #34 RE: (Bug?) HP48 Equation Library Programming an equation library has lead me to learn about all sorts of stuff that I really never cared much about (and still do not -- but at least I am learning). 
Anyway, for those of you who have some background in materials science: The following formulas are for carrier concentration of silicon (intrinsic density \( n_i \)) as a function of temperature \(T\): \[ E_g = 1.17 - 4.73\times 10^{-4}\cdot \frac{T^2}{T+636} \] \[ n_i = \sqrt{ N_c \cdot N_v} \cdot e^{-E_g/(2\cdot k\cdot T)} \] The HP48 uses a value of \( 7.2756517951 \times 10^{15} \) for \( \sqrt{N_c \cdot N_v} \) -- the square root of the product of the effective density of states for the conductance and valence bands, respectively. However, I cannot seem to find any references where this is the case. The formulas I have found for the effective density states do not produce this value. Can anyone provide references to this value, or those of the appropriate \( N_c \) and \( N_v \) that form this value? Graph 3D | QPI | SolveSys 03-06-2017, 05:31 PM (This post was last modified: 03-06-2017 06:54 PM by Han.) Post: #35 RE: (Bug?) HP48 Equation Library Well, I eventually did find an actual bug (or perhaps limitation) of the HP48 MES. According to this patent: https://www.google.com/patents/US5175700 the MES basically automates the process of substituting in known values, and finding an equation that has only one unknown, solving that equation, then repeating the process after updating its list of known/solved values. The problem with this is that an inconsistent system shows up as having a solution. For example: Code: { 'X+Y=10' 'X+Y=11' } STEQ 1 'X' STO MINIT MSOLVR Proceed to solve for Y and the HP48 produces: Searching... Solving for Y... Y:9 Zero Except X=1 Y=9 is only a zero of the first equation. There is no check at all whether that solution is in fact a solution to the system. I only noticed this when I started working on a hybrid solver engine that mimics the Multiple Equation Solver (MES). In the meantime, I'm still interested in the formulas for the silicon density of states equations in the immediately previous post. Graph 3D | QPI | SolveSys 03-06-2017, 07:45 PM Post: #36 RE: (Bug?) HP48 Equation Library (02-19-2017 10:53 PM)Han Wrote: for the conductance and valence bands, respectively. However, I cannot seem to find any references where this is the case. The formulas I have found for the effective density states do not produce this value. Can anyone provide references to this value, or those of the appropriate \( N_c \) and \( N_v \) that form this value? Is this reference helpful? Effective density of states - Example BEST! SlideRule 03-06-2017, 07:51 PM Post: #37 RE: (Bug?) HP48 Equation Library (03-06-2017 07:45 PM)SlideRule Wrote:(02-19-2017 10:53 PM)Han Wrote: for the conductance and valence bands, respectively. However, I cannot seem to find any references where this is the case. The formulas I have found for the effective density states do not produce this value. Can anyone provide references to this value, or those of the appropriate \( N_c \) and \( N_v \) that form this value? None of those values for \( N_c \) and \( N_v \) in the linked reference produces 7.2756517951e15 Graph 3D | QPI | SolveSys
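For anyone who wants to experiment with the formulas quoted in post #34, here is a direct Python transcription (a sketch only: the constant 7.2756517951e15 and its interpretation as \( \sqrt{N_c \cdot N_v} \) are exactly as stated in the post, and nothing further about the HP48's internal formula is assumed):

Code:
import math

k = 8.617e-5  # Boltzmann constant in eV/K

def silicon_n_i(T, sqrt_NcNv=7.2756517951e15):
    """Intrinsic carrier concentration, using the two formulas quoted above."""
    Eg = 1.17 - 4.73e-4 * T ** 2 / (T + 636)      # band gap in eV
    return sqrt_NcNv * math.exp(-Eg / (2 * k * T))

print(silicon_n_i(300))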
Algebraic Geometry Seminar Fall 2016 The seminar meets on Fridays at 2:25 pm in Van Vleck B305. Contents Algebraic Geometry Mailing List Please join the AGS Mailing List to hear about upcoming seminars, lunches, and other algebraic geometry events in the department (it is possible you must be on a math department computer to use this link). Fall 2016 Schedule date speaker title host(s) September 16 Alexander Pavlov (Wisconsin) Betti Tables of MCM Modules over the Cones of Plane Cubics local September 23 PhilSang Yoo (Northwestern) Classical Field Theories for Quantum Geometric Langlands Dima October 7 Botong Wang (Wisconsin) Enumeration of points, lines, planes, etc. local October 14 Luke Oeding (Auburn) Border ranks of monomials Steven October 28 Adam Boocher (Utah) Bounds for Betti Numbers of Graded Algebras Daniel November 4 Lukas Katthaen Finding binomials in polynomial ideals Daniel November 11 Daniel Litt (Columbia) Arithmetic restrictions on geometric monodromy Jordan November 18 David Stapleton (Stony Brook) Hilbert schemes of points and their tautological bundles Daniel December 2 Rohini Ramadas (Michigan) Dynamics on the moduli space of pointed rational curves Daniel and Jordan December 9 Robert Walker (Michigan) Uniform Asymptotic Growth on Symbolic Powers of Ideals Daniel Abstracts Alexander Pavlov Betti Tables of MCM Modules over the Cones of Plane Cubics Graded Betti numbers are classical invariants of finitely generated modules over graded rings describing the shape of a minimal free resolution. We show that for maximal Cohen-Macaulay (MCM) modules over a homogeneous coordinate rings of smooth Calabi-Yau varieties X computation of Betti numbers can be reduced to computations of dimensions of certain Hom groups in the bounded derived category D(X). In the simplest case of a smooth elliptic curve embedded into projective plane as a cubic we use our formula to get explicit answers for Betti numbers. In this case we show that there are only four possible shapes of the Betti tables up to a shifts in internal degree, and two possible shapes up to a shift in internal degree and taking syzygies. PhilSang Yoo Classical Field Theories for Quantum Geometric Langlands One can study a class of classical field theories in a purely algebraic manner, thanks to the recent development of derived symplectic geometry. After reviewing the basics of derived symplectic geometry, I will discuss some interesting examples of classical field theories, including B-model, Chern-Simons theory, and Kapustin-Witten theory. Time permitting, I will make a proposal to understand quantum geometric Langlands and other related Langlands dualities in a unified way from the perspective of field theory. Botong Wang Enumeration of points, lines, planes, etc. It is a theorem of de Brujin and Erdős that n points in the plane determines at least n lines, unless all the points lie on a line. This is one of the earliest results in enumerative combinatorial geometry. We will present a higher dimensional generalization to this theorem. Let E be a generating subset of a d-dimensional vector space. Let [math]W_k[/math] be the number of k-dimensional subspaces that is generated by a subset of E. We show that [math]W_k\leq W_{d-k}[/math], when [math]k\leq d/2[/math]. This confirms a "top-heavy" conjecture of Dowling and Wilson in 1974 for all matroids realizable over some field. The main ingredients of the proof are the hard Lefschetz theorem and the decomposition theorem. 
I will also talk about a proof of Welsh and Mason's log-concave conjecture on the number of k-element independent sets. These are joint works with June Huh. Luke Oeding Border ranks of monomials What is the minimal number of terms needed to write a monomial as a sum of powers? What if you allow limits? Here are some minimal examples: [math]4xy = (x+y)^2 - (x-y)^2[/math] [math]24xyz = (x+y+z)^3 + (x-y-z)^3 + (-x-y+z)^3 + (-x+y-z)^3[/math] [math]192xyzw = (x+y+z+w)^4 - (-x+y+z+w)^4 - (x-y+z+w)^4 - (x+y-z+w)^4 - (x+y+z-w)^4 + (-x-y+z+w)^4 + (-x+y-z+w)^4 + (-x+y+z-w)^4[/math] The monomial [math]x^2y[/math] has a minimal expression as a sum of 3 cubes: [math]6x^2y = (x+y)^3 + (-x+y)^3 -2y^3[/math] But you can use only 2 cubes if you allow a limit: [math]6x^2y = \lim_{\epsilon \to 0} \frac{(x^3 - (x-\epsilon y)^3)}{\epsilon}[/math] Can you do something similar with xyzw? Previously it wasn't known whether the minimal number of powers in a limiting expression for xyzw was 7 or 8. I will answer this and the analogous question for all monomials. The polynomial Waring problem is to write a polynomial as linear combination of powers of linear forms in the minimal possible way. The minimal number of summands is called the rank of the polynomial. The solution in the case of monomials was given in 2012 by Carlini--Catalisano--Geramita, and independently shortly thereafter by Buczynska--Buczynski--Teitler. In this talk I will address the problem of finding the border rank of each monomial. Upper bounds on border rank were known since Landsberg-Teitler, 2010 and earlier. We use symmetry-enhanced linear algebra to provide polynomial certificates of lower bounds (which agree with the upper bounds). This work builds on the idea of Young flattenings, which were introduced by Landsberg and Ottaviani, and give determinantal equations for secant varieties and provide lower bounds for border ranks of tensors. We find special monomial-optimal Young flattenings that provide the best possible lower bound for all monomials up to degree 6. For degree 7 and higher these flattenings no longer suffice for all monomials. To overcome this problem, we introduce partial Young flattenings and use them to give a lower bound on the border rank of monomials which agrees with Landsberg and Teitler's upper bound. I will also show how to implement Young flattenings and partial Young flattenings in Macaulay2 using Steven Sam's PieriMaps package. Adam Boocher Let R be a standard graded algebra over a field. The set of graded Betti numbers of R provide some measure of the complexity of the defining equations for R and their syzygies. Recent breakthroughs (e.g. Boij-Soederberg theory, structure of asymptotic syzygies, Stillman's Conjecture) have provided new insights about these numbers and we have made good progress toward understanding many homological properties of R. However, many basic questions remain. In this talk I'll talk about some conjectured upper and lower bounds for the total Betti numbers for different classes of rings. Surprisingly, little is known in even the simplest cases. Lukas Katthaen (Frankfurt) In this talk, I will present an algorithm which, for a given ideal J in the polynomial ring, decides whether J contains a binomial, i.e., a polynomial having only two terms. For this, we use ideas from tropical geometry to reduce the problem to the Artinian case, and then use an algorithm from number theory. This is joint work with Anders Jensen and Thomas Kahle. 
David Stapleton Fogarty showed in the 1970s that the Hilbert scheme of n points on a smooth surface is smooth. Interest in these Hilbert schemes has grown since it has been shown they arise in hyperkahler geometry, geometric representation theory, and algebraic combinatorics. In this talk we will explore the geometry of certain tautological bundles on the Hilbert scheme of points. In particular we will show that these tautological bundles are (almost always) stable vector bundles. We will also show that each sufficiently positive vector bundles on a curve C is the pull back of a tautological bundle from an embedding of C into the Hilbert scheme of the projective plane. Rohini Ramadas The moduli space M_{0,n} parametrizes all ways of labeling n distinct points on P^1, up to projective equivalence. Let H be a Hurwitz space parametrizing holomorphic maps, with prescribed branching, from one n-marked P^1 to another. H admits two different maps to M_{0,n}: a ``target curve'’ map pi_t and a ``source curve map pi_s. Since pi_t is a covering map,pi_s(pi_t^(-1)) is a multi-valued map — a Hurwitz correspondence — from M_{0,n} to itself. Hurwitz correspondences arise in topology and Teichmuller theory through Thurston's topological characterization of rational functions on P^1. I will discuss their dynamics via numerical invariants called dynamical degrees. Robert Walker Symbolic powers ($I^{(N)}$) in Noetherian commutative rings are mysterious objects from the perspective of an algebraist, while regular powers of ideals ($I^s$) are essentially intuitive. However, many geometers tend to like symbolic powers in the case of a radical ideal in an affine polynomial ring over an algebraically closed field in characteristic zero: the N-th symbolic power consists of polynomial functions "vanishing to order at least N" on the affine zero locus of that ideal. In this polynomial setting, and much more generally, a challenging problem is determining when, given a family of ideals (e.g., all prime ideals), you have a containment of type $I^{(N)} \subseteq I^s$ for all ideals in the family simultaneously. Following breakthrough results of Ein-Lazarsfeld-Smith (2001) and Hochster-Huneke (2002) for, e.g., coordinate rings of smooth affine varieties, there is a slowly growing body of "uniform linear equivalence" criteria for when, given a suitable family of ideals, these $I^{(N)} \subseteq I^s$ containments hold as long as N is bounded below by a linear function in s, whose slope is a positive integer that only depends on the structure of the variety or the ring you fancy. My thesis (arxiv.org/1510.02993, arxiv.org/1608.02320) contributes new entries to this body of criteria, using Weil divisor theory and toric algebraic geometry. After giving a "Symbolic powers for Geometers" survey, I'll shift to stating key results of my dissertation in a user-ready form, and give a "comical" example or two of how to use them. At the risk of sounding like Paul Rudd from "Ant-Man," I hope this talk will be awesome.
De Bruijn-Newman constant For each real number [math]t[/math], define the entire function [math]H_t: {\mathbf C} \to {\mathbf C}[/math] by the formula [math]\displaystyle H_t(z) := \int_0^\infty e^{tu^2} \Phi(u) \cos(zu)\ du[/math] where [math]\Phi[/math] is the super-exponentially decaying function [math]\displaystyle \Phi(u) := \sum_{n=1}^\infty (2\pi^2 n^4 e^{9u} - 3 \pi n^2 e^{5u}) \exp(-\pi n^2 e^{4u}).[/math] It is known that [math]\Phi[/math] is even, and that [math]H_t[/math] is even, real on the real axis, and obeys the functional equation [math]H_t(\overline{z}) = \overline{H_t(z)}[/math]. In particular, the zeroes of [math]H_t[/math] are symmetric about both the real and imaginary axes. One can also express [math]H_t[/math] in a number of different forms, such as [math]\displaystyle H_t(z) = \frac{1}{2} \int_{\bf R} e^{tu^2} \Phi(u) e^{izu}\ du[/math] or [math]\displaystyle H_t(z) = \frac{1}{2} \int_0^\infty e^{t\log^2 x} \Phi(\log x) e^{iz \log x}\ \frac{dx}{x}.[/math] In the notation of [KKL2009], one has [math]\displaystyle H_t(z) = \frac{1}{8} \Xi_{t/4}(z/2).[/math] De Bruijn [B1950] and Newman [N1976] showed that there existed a constant, the de Bruijn-Newman constant [math]\Lambda[/math], such that [math]H_t[/math] has all zeroes real precisely when [math]t \geq \Lambda[/math]. The Riemann hypothesis is equivalent to the claim that [math]\Lambda \leq 0[/math]. Currently it is known that [math]0 \leq \Lambda \lt 1/2[/math] (lower bound in [RT2018], upper bound in [KKL2009]). The Polymath15 project seeks to improve the upper bound on [math]\Lambda[/math]. The current strategy is to combine the following three ingredients: Numerical zero-free regions for [math]H_t(x+iy)[/math] of the form [math]\{ x+iy: 0 \leq x \leq T; y \geq \varepsilon \}[/math] for explicit [math]T, \varepsilon, t \gt 0[/math]. Rigorous asymptotics that show that [math]H_t(x+iy)[/math] is non-zero whenever [math]y \geq \varepsilon[/math] and [math]x \geq T[/math] for a sufficiently large [math]T[/math]. Dynamics of zeroes results that control [math]\Lambda[/math] in terms of the maximum imaginary part of a zero of [math]H_t[/math]. [math]t=0[/math] When [math]t=0[/math], one has [math]\displaystyle H_0(z) = \frac{1}{8} \xi( \frac{1}{2} + \frac{iz}{2} ) [/math] where [math]\displaystyle \xi(s) := \frac{s(s-1)}{2} \pi^{-s/2} \Gamma(s/2) \zeta(s)[/math] is the Riemann xi function. In particular, [math]z[/math] is a zero of [math]H_0[/math] if and only if [math]\frac{1}{2} + \frac{iz}{2}[/math] is a non-trivial zero of the Riemann zeta function. Thus, for instance, the Riemann hypothesis is equivalent to all the zeroes of [math]H_0[/math] being real, and the Riemann-von Mangoldt formula (in the explicit form given by Backlund) gives [math]\displaystyle |N_0(T) - (\frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} - \frac{7}{8})| \lt 0.137 \log (T/2) + 0.443 \log\log(T/2) + 4.350 [/math] for any [math]T \gt 4[/math], where [math]N_0(T)[/math] denotes the number of zeroes of [math]H_0[/math] with real part between 0 and T. The first [math]10^{13}[/math] zeroes of [math]H_0[/math] (to the right of the origin) are real [G2004]. This numerical computation uses the Odlyzko-Schonhage algorithm. In [P2017] it was independently verified that all zeroes of [math]H_0[/math] between 0 and 61,220,092,000 were real. [math]t\gt0[/math] For any [math]t\gt0[/math], it is known that all but finitely many of the zeroes of [math]H_t[/math] are real and simple [KKL2009, Theorem 1.3].
In fact, assuming the Riemann hypothesis, all of the zeroes of [math]H_t[/math] are real and simple [CSV1994, Corollary 2]. Let [math]\sigma_{max}(t)[/math] denote the largest imaginary part of a zero of [math]H_t[/math], thus [math]\sigma_{max}(t)=0[/math] if and only if [math]t \geq \Lambda[/math]. It is known that the quantity [math]\frac{1}{2} \sigma_{max}(t)^2 + t[/math] is non-decreasing in time whenever [math]\sigma_{max}(t)\gt0[/math] (see [KKL2009, Proposition A]. In particular we have [math]\displaystyle \Lambda \leq t + \frac{1}{2} \sigma_{max}(t)^2[/math] for any [math]t[/math]. The zeroes [math]z_j(t)[/math] of [math]H_t[/math] obey the system of ODE [math]\partial_t z_j(t) = - \sum_{k \neq j} \frac{2}{z_k(t) - z_j(t)}[/math] where the sum is interpreted in a principal value sense, and excluding those times in which [math]z_j(t)[/math] is a repeated zero. See dynamics of zeros for more details. Writing [math]z_j(t) = x_j(t) + i y_j(t)[/math], we can write the dynamics as [math] \partial_t x_j = - \sum_{k \neq j} \frac{2 (x_k - x_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] [math] \partial_t y_j = \sum_{k \neq j} \frac{2 (y_k - y_j)}{(x_k-x_j)^2 + (y_k-y_j)^2} [/math] where the dependence on [math]t[/math] has been omitted for brevity. In [KKL2009, Theorem 1.4], it is shown that for any fixed [math]t\gt0[/math], the number [math]N_t(T)[/math] of zeroes of [math]H_t[/math] with real part between 0 and T obeys the asymptotic [math]N_t(T) = \frac{T}{4\pi} \log \frac{T}{4\pi} - \frac{T}{4\pi} + \frac{t}{16} \log T + O(1) [/math] as [math]T \to \infty[/math] (caution: the error term here is not uniform in t). Also, the zeroes behave like an arithmetic progression in the sense that [math] z_{k+1}(t) - z_k(t) = (1+o(1)) \frac{4\pi}{\log |z_k|(t)} = (1+o(1)) \frac{4\pi}{\log k} [/math] as [math]k \to +\infty[/math]. See asymptotics of H_t for asymptotics of the function [math]H_t[/math]. Threads Polymath proposal: upper bounding the de Bruijn-Newman constant, Terence Tao, Jan 24, 2018. Polymath15, first thread: computing H_t, asymptotics, and dynamics of zeroes, Terence Tao, Jan 27, 2018. Polymath15, second thread: generalising the Riemann-Siegel approximate functional equation, Terence Tao and Sujit Nair, Feb 2, 2018. Polymath15, third thread: computing and approximating H_t, Terence Tao and Sujit Nair, Feb 12, 2018. Other blog posts and online discussion Heat flow and zeroes of polynomials, Terence Tao, Oct 17, 2017. The de Bruijn-Newman constant is non-negative, Terence Tao, Jan 19, 2018. Lehmer pairs and GUE, Terence Tao, Jan 20, 2018. A new polymath proposal (related to the Riemann hypothesis) over Tao's blog, Gil Kalai, Jan 26, 2018. Code and data Wikipedia and other references Bibliography [B1950] N. C. de Bruijn, The roots of trigonometric integrals, Duke J. Math. 17 (1950), 197–226. [CSV1994] G. Csordas, W. Smith, R. S. Varga, Lehmer pairs of zeros, the de Bruijn-Newman constant Λ, and the Riemann hypothesis, Constr. Approx. 10 (1994), no. 1, 107–129. [G2004] Gourdon, Xavier (2004), The [math]10^{13}[/math] first zeros of the Riemann Zeta function, and zeros computation at very large height [KKL2009] H. Ki, Y. O. Kim, and J. Lee, On the de Bruijn-Newman constant, Advances in Mathematics, 22 (2009), 281–306. Citeseer [N1976] C. M. Newman, Fourier transforms with only real zeroes, Proc. Amer. Math. Soc. 61 (1976), 246–251. [P2017] D. J. Platt, Isolating some non-trivial zeros of zeta, Math. Comp. 86 (2017), 2449-2467. [P1992] G. 
Pugh, The Riemann-Siegel formula and large scale computations of the Riemann zeta function, M.Sc. Thesis, U. British Columbia, 1992. [RT2018] B. Rodgers, T. Tao, The de Bruijn-Newman constant is non-negative, preprint. arXiv:1801.05914 [T1986] E. C. Titchmarsh, The theory of the Riemann zeta-function. Second edition. Edited and with a preface by D. R. Heath-Brown. The Clarendon Press, Oxford University Press, New York, 1986. pdf
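As a quick numerical companion to the definitions above (not part of the wiki page; the truncation parameters are ad hoc choices), one can evaluate [math]H_t(x)[/math] for real [math]x[/math] by direct quadrature of the defining integral:

import numpy as np
from scipy.integrate import quad

def Phi(u, nmax=30):
    # Truncation of the super-exponentially decaying sum defining Phi.
    n = np.arange(1, nmax + 1)
    return float(np.sum((2 * np.pi**2 * n**4 * np.exp(9 * u)
                         - 3 * np.pi * n**2 * np.exp(5 * u))
                        * np.exp(-np.pi * n**2 * np.exp(4 * u))))

def H(t, x, umax=6.0):
    # H_t(x) for real x, by quadrature of the defining integral (upper limit truncated).
    val, _ = quad(lambda u: np.exp(t * u * u) * Phi(u) * np.cos(x * u), 0.0, umax, limit=200)
    return val

# Sanity check at t = 0: the first real zero of H_0 should sit near x = 28.2695,
# twice the imaginary part of the first nontrivial zeta zero, so a sign change is expected.
print(H(0.0, 28.0), H(0.0, 28.5))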
@JosephWright Well, we still need table notes etc. But just being able to selectably switch off parts of the parsing one does not need... For example, if a user specifies format 2.4, does the parser even need to look for e syntax, or ()'s? @daleif What I am doing to speed things up is to store the data in a dedicated format rather than a property list. The latter makes sense for units (open ended) but not so much for numbers (rigid format). @JosephWright I want to know about either the bibliography environment or \DeclareFieldFormat. From the documentation I see no reason not to treat these commands as usual, though they seem to behave in a slightly different way than I anticipated it. I have an example here which globally sets a box, which is typeset outside of the bibliography environment afterwards. This doesn't seem to typeset anything. :-( So I'm confused about the inner workings of biblatex (even though the source seems.... well, the source seems to reinforce my thought that biblatex simply doesn't do anything fancy). Judging from the source the package just has a lot of options, and that's about the only reason for the large amount of lines in biblatex1.sty... Consider the following MWE to be previewed in the build in PDF previewer in Firefox\documentclass[handout]{beamer}\usepackage{pgfpages}\pgfpagesuselayout{8 on 1}[a4paper,border shrink=4mm]\begin{document}\begin{frame}\[\bigcup_n \sum_n\]\[\underbrace{aaaaaa}_{bbb}\]\end{frame}\end{d... @Paulo Finally there's a good synth/keyboard that knows what organ stops are! youtube.com/watch?v=jv9JLTMsOCE Now I only need to see if I stay here or move elsewhere. If I move, I'll buy this there almost for sure. @JosephWright most likely that I'm for a full str module ... but I need a little more reading and backlog clearing first ... and have my last day at HP tomorrow so need to clean out a lot of stuff today .. and that does have a deadline now @yo' that's not the issue. with the laptop I lose access to the company network and anythign I need from there during the next two months, such as email address of payroll etc etc needs to be 100% collected first @yo' I'm sorry I explain too bad in english :) I mean, if the rule was use \tl_use:N to retrieve the content's of a token list (so it's not optional, which is actually seen in many places). And then we wouldn't have to \noexpand them in such contexts. @JosephWright \foo:V \l_some_tl or \exp_args:NV \foo \l_some_tl isn't that confusing. @Manuel As I say, you'd still have a difference between say \exp_after:wN \foo \dim_use:N \l_my_dim and \exp_after:wN \foo \tl_use:N \l_my_tl: only the first case would work @Manuel I've wondered if one would use registers at all if you were starting today: with \numexpr, etc., you could do everything with macros and avoid any need for \<thing>_new:N (i.e. soft typing). There are then performance questions, termination issues and primitive cases to worry about, but I suspect in principle it's doable. @Manuel Like I say, one can speculate for a long time on these things. @FrankMittelbach and @DavidCarlisle can I am sure tell you lots of other good/interesting ideas that have been explored/mentioned/imagined over time. @Manuel The big issue for me is delivery: we have to make some decisions and go forward even if we therefore cut off interesting other things @Manuel Perhaps I should knock up a set of data structures using just macros, for a bit of fun [and a set that are all protected :-)] @JosephWright I'm just exploring things myself “for fun”. 
I don't mean as serious suggestions, and as you say you already thought of everything. It's just that I'm getting at those points myself so I ask for opinions :) @Manuel I guess I'd favour (slightly) the current set up even if starting today as it's normally \exp_not:V that applies in an expansion context when using tl data. That would be true whether they are protected or not. Certainly there is no big technical reason either way in my mind: it's primarily historical (expl3 pre-dates LaTeX2e and so e-TeX!) @JosephWright tex being a macro language means macros expand without being prefixed by \tl_use. \protected would affect expansion contexts but not use "in the wild" I don't see any way of having a macro that by default doesn't expand. @JosephWright it has series of footnotes for different types of footnotey thing, quick eye over the code I think by default it has 10 of them but duplicates for minipages as latex footnotes do the mpfoot... ones don't need to be real inserts but it probably simplifies the code if they are. So that's 20 inserts and more if the user declares a new footnote series @JosephWright I was thinking while writing the mail so not tried it yet that given that the new \newinsert takes from the float list I could define \reserveinserts to add that number of "classic" insert registers to the float list where later \newinsert will find them, would need a few checks but should only be a line or two of code. @PauloCereda But what about the for loop from the command line? I guess that's more what I was asking about. Say that I wanted to call arara from inside of a for loop on the command line and pass the index of the for loop to arara as the jobname. Is there a way of doing that?
I'm interested in numerical analysis but I don't have experience with it. I was wondering how one can solve integral equations like $\int_0^x e^{t^3}dt=4$ numerically. I was thinking about whether there is some numerical differential equation solution method faster than finding some range $a\leq x\leq b$ and then splitting that range in half and repeating.

I will map out the approach and see if you can fill in the details. We are asked to find the value of $x$ where: $$\int_0^x e^{t^3}~dt = 4$$ We need two numerical approaches here: one to find the zeros of the function $f(x)$ (Newton's Method), and one to estimate the integral (Composite Simpson, or whichever integration rule you prefer, such as Composite Trapezoidal), where: $$f(x) = \displaystyle \int_0^{x}~ e^{t^3}~dt - 4 = 0$$ The derivative with respect to $x$ of this function is: $$f'(x) = e^{x^3}$$ The Newton-Raphson method is given by: $$\displaystyle x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)} = x_n - \dfrac{\displaystyle \int_0^{x_n} e^{t^3}~dt - 4}{e^{x_n^3}}$$ At each iteration, we have to use the Composite Simpson's Rule to find the value of that integral for the next $x_n$: $$s = \int_a^b f(x)~dx \approx \dfrac{h}{3} \left( f(a) + f(b) + 4 \sum_{i=1}^{n/2}~f(a + (2i - 1)h)+2 \sum_{i=1}^{(n-2)/2} f(a+2 ih) \right)$$ Choose an initial starting point $x_0$ with a desired accuracy of $\epsilon$. The iterations proceed as follows: start from $x_0$. Using Composite Simpson with the needed $n$: $s$ evaluated between $(0, x_0)$ gives $s = s_0$. Using Newton's iteration: $x_1 = x_0 - \dfrac{f(x_0)}{f'(x_0)}$. Then $s$ evaluated between $(0, x_1)$ gives $s = s_1$. Continue this until the iterates converge to the desired accuracy. The numerical approach should yield $x \approx 1.39821$. Next, you can compare this to the exact result and validate that we found the correct value of $x$. Curious question: is it possible to calculate the value of $n$ for the desired accuracy a priori, before doing the iterative steps, when using these two numerical approaches? Probably (actually the answer is yes), but I will leave that for you to ponder. Aside: We can also just use a numerical integrator and randomly try points to more easily bound the problem, and then use the procedure above to fine-tune to the desired accuracy. It all comes down to computational complexity.
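A short Python sketch of the procedure just described (Composite Simpson inside Newton's iteration); the number of subintervals, tolerance and starting point are illustrative choices:

import math

def g(t):
    return math.exp(t ** 3)

def simpson(fun, a, b, n=200):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = fun(a) + fun(b)
    s += 4 * sum(fun(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(fun(a + 2 * i * h) for i in range(1, n // 2))
    return h / 3 * s

def solve(target=4.0, x0=1.0, tol=1e-10, max_iter=50):
    x = x0
    for _ in range(max_iter):
        fx = simpson(g, 0.0, x) - target   # f(x) = integral_0^x e^{t^3} dt - target
        step = fx / g(x)                   # f'(x) = e^{x^3}
        x -= step
        if abs(step) < tol:
            break
    return x

print(solve())   # approximately 1.39821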
I am carefully following all the new preprints by ATLAS and CMS that are currently being presented at the Moriond 2013 conference so that you don't have to. So far, everything is compatible with the Standard Model including the \(126\GeV\) Higgs boson and the latter beast is still behaving as obediently as the Standard Model assumes. If something changes about these statements, you will probably learn about it on this blog almost instantly. However, there's an interesting 3-sigma anomaly in an otherwise obscure search so let me tell you what it is. It appears in the following preprint: Search for Type III Seesaw Model Heavy Fermions in Events with Four Charged Leptons using \(5.8\,{\rm fb}^{-1}\) of \(\sqrt{s} = 8\TeV\) data with the ATLAS Detector (ATLAS-CONF-2013-019)It is a relatively obscure search for an electroweak triplet of new fermions, \(N^\pm,N^0\), that are used in the so-called type III seesaw models. Note that all seesaw models are meant to produce the neutrino masses (equal to zero in the "truly minimal" Standard Model) – and explain why they're so small. The type I seesaw models add at least two right-handed neutrinos \(\nu_R\) with masses near the GUT scale. The type II seesaw models add a new Higgs triplet. The type III seesaw models add the triplet of fermions \(N^\pm,N^0\) that are approximately equally heavy. It is supposed that the proton-proton collisions may produce either \(N^\pm N^\mp\) or \(N^\pm N^0\) where the latter possibilities are approximately 2 times more likely than the former possibility. As you may expect, the ATLAS folks exclude the existence of these new fermions \(N^\pm,N^0\) up to some mass, namely \(245\GeV\). But there's an interesting 3-sigma excess near the (higher) mass \(m_N\sim 420\GeV\): its \(p\)-value (probability of a similarly strong signal according to the null hypothesis) is about \(p_0=0.20\), a statement whose origin I don't quite understand. I would understand \(0.20\%\) but maybe their figure is right and unimpressive due to some look-elsewhere reduction. At any rate, the picture (Figure 4) says a clear story of a rather strong excess by itself: Click to zoom in. On the \(x\)-axis, you have the assumed mass of the new fermions, \(m_N\), in the units of \(\GeV\). The \(y\)-axis contains the relevant cross section \[ \frac{\sigma(pp\!\to\! N^\pm N^0)\times BF(N^\pm \!\to\! Z \ell^\pm)\times BF(N^0\!\to\! W^\pm \ell^\mp) }{\rm fb} \] In other words, it's some cross section (in the units of one femtobarn) for the production of a pair of the new fermions (one neutral fermion and one charged fermions) using a proton pair but only the "branching fractions" in which these new fermions decay to \(W^\pm/Z^0\) gauge bosons plus leptons in the indicated way (pretty much the dominant decays expected for the new fermions) are included. The decays of these hypothetical new fermionic triplets violate the lepton flavor if not the lepton number. They can probably achieve what they can achieve – the neutrino masses – but I haven't encountered them anywhere else. In particular, I am not aware of any top-down explanation why these things should exist. But of course, it's not impossible that these otherwise unwanted beasts are employed by Mother Nature. It's more likely that the excess is a fluke. But even if it is due to new physics, I suspect that the details of the new physics could be a bit different (sleptons and sneutrinos of some kind?). This particular paper has only used \(5.8/{\rm fb}\) of the 2012 data. 
Over twenty inverse femtobarns have (already) been collected last year so when they're processed, the signals – if they're due to new physics – should grow to indisputable proportions. TBBT and women in science Last night, the latest episode of The Big Bang Theory made Leonard want to help young women enter science. Sheldon and Howard ultimately agreed to co-operate. They went to a high school to meet girls and the sitcom showed a very realistic picture of how hopeless disinterest most of the girls of this age have in science and how complete hypocritical waste of time similar attempts to "draft girls" are. New Czech president Miloš Zeman was inaugurated as the new Czech president. Lots of fun formalities at the Prague Castle and the cathedral over there. His inauguration speech was given off-hand, rather impressive. Among other things, he declared war against three main enemies of the society – mafia's godfathers, neo-Nazi guerrilla groups, and most of the journalists. ;-) The latter group (Zeman's comment about this group was the only thing that excited an applause among the audience dominated by top politicians) is composed of jealous and stupid individuals who love to criticize people for doing something they can't do at all and who love to brainwash the citizens. Fully agreed. There were things I disagreed with, too. He rewrote the history when he presented Masaryk as the guy who wanted to eliminated all traces of monarchy and introduced pure republicanism. That's rubbish. Masaryk deliberately preserved some of the royal functions and image of the kings for the Czechoslovak presidents. At any rate, Zeman surrendered his right to declare amnesties and pardons (that's like not doing a part of his job!) and promised to be an intermediary of a political dialogue, not a judge.
DBSCAN Engine

Given the graph, we extract the alarms from all the vertices and use these as the input points to the DBSCAN algorithm. DBSCAN requires a constant \(\epsilon\) and a distance function, which we define as follows: where:
\(a_{1}\) and \(a_{2}\) are the points representing the alarms
\(\alpha \in (0, \infty)\) is a scaling constant (directly related to \(\epsilon\))
\(\beta \in [0,1]\) is a weighting constant
When \(\beta\) is closer to 0, more weight is given to the temporal component
When \(\beta\) is closer to 1, more weight is given to the spatial component
\(t(a_{k})\) returns the time (timestamp in seconds) of the last occurrence of the given alarm
\(dg(a_{i}, a_{j})\) returns the normalized distance on the shortest path between the vertices for \(a_{i}\) and \(a_{j}\)
If both alarms are on the same vertex, then the distance is 0
If there is no path between both alarms, then the distance is \(\infty\)
In simpler terms, we can think of the distance function as taking a weighted combination of both the distance in time and in space. We set the constants with the following defaults: \(\epsilon = 100\), \(minpts = 1\), \(\alpha = 144.47\), \(\beta = 0.55\). These were derived empirically during our testing. Let’s assume that we have the following graph: Let’s start determining the distance between \(a_{1}\) and \(a_{2}\). We can calculate the time component with: And given that \(a_{1}\) and \(a_{2}\) are on the same vertex, the spatial component is simply zero: Placing these results in the original equation gives us: and \(d(a_{1}, a_{2}) < \epsilon\), so the alarms will be clustered together. Now let’s determine the distance between \(a_{3}\) and \(a_{4}\). We can calculate the time component with: To calculate the spatial distance between \(a_{3}\) and \(a_{4}\), we sum up the weights on the edges along the shortest path and divide this result by the default weight (=100), so: Placing these results in the original equation gives us: and \(d(a_{3}, a_{4}) < \epsilon\), so the alarms will be clustered together. Now let’s determine the distance between \(a_{2}\) and \(a_{3}\). We can calculate the time component with: The value of the spatial component is: Placing these results in the original equation gives us: and \(d(a_{2}, a_{3}) > \epsilon\), so the alarms will not be clustered together. The DBSCAN algorithm performs well when there are fewer than 500 candidate alarms. It has a worst-case complexity of \(O(n^2)\). Note that alarms are only considered to be candidates for correlation when they have been created and/or updated in the last 2 hours (configurable). This means that the engine can still be used on systems with more than 500 active alarms, since many of these will age out over time.
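The exact distance formula above did not survive as text here, so the following Python sketch uses a placeholder weighted combination of the temporal and spatial terms (the combination itself is an assumption, not the engine's actual formula); it only illustrates how such a metric can be fed to DBSCAN via a precomputed distance matrix, reusing the default constants quoted above.

import numpy as np
from sklearn.cluster import DBSCAN

ALPHA, BETA, EPS, MINPTS = 144.47, 0.55, 100.0, 1   # defaults quoted above

def distance(a1, a2, graph_dist):
    # Placeholder: weighted combination of the time difference (seconds between
    # last occurrences) and the normalized shortest-path distance. NOT the real formula.
    dt = abs(a1["t"] - a2["t"])
    return ALPHA * ((1 - BETA) * dt + BETA * graph_dist)

alarms = [{"t": 0}, {"t": 1}, {"t": 50}]               # toy alarms (last-occurrence times)
graph_dist = {(0, 1): 0.0, (0, 2): 2.0, (1, 2): 2.0}   # toy normalized graph distances

n = len(alarms)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = distance(alarms[i], alarms[j], graph_dist[(i, j)])

labels = DBSCAN(eps=EPS, min_samples=MINPTS, metric="precomputed").fit(D).labels_
print(labels)   # alarms 0 and 1 fall in one cluster; alarm 2 in another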
What is the energy limit for fibre optics? For a dual-pulsed Nd:YAG laser firing at 10 Hz with energy roughly 2x 400 mJ, and pulse width ~5-10 ns. My gut says that this is far too high to pass through fiber optics, but I have no experience in this matter. It depends. The first challenge is damage at the surface. Is your fiber single-mode or multi-mode? What is the mode-field diameter? If it is single-mode then the mode diameter will probably be ~10 µm or so, and the fluence will be very large. If you are using multi-mode fiber the mode could be quite large, or not, depending on the fibre. The second challenge, if you can get the light into the fibre without surface damage, will be nonlinearities as the light propagates. Brillouin scattering, Raman scattering, etc. will act to "modify" your pulses in some way. The longer the fiber the worse these effects get. So, this challenge depends upon the length, and the mode-field diameter. Back of the envelope: 0.4 J in 10 ns is 40 MW peak power. Put that into a 10 µm fiber, and the peak intensity is $5\cdot 10^{17}\,\mathrm{W/m^2}$. Nonlinearity sets in at field strengths above $10^8$ V/m. The relationship between intensity and field is $$E = \sqrt{\frac{I}{\epsilon_0 c}} = 1.4\cdot 10^{10}\,\mathrm{V/m},$$ so your field is two orders of magnitude above where nonlinearity sets in. It's likely that you will have trouble transmitting the power - in the nonlinear regime, fibers can get pretty weird (although exactly how they respond depends on the type of fiber). If you used a bigger fiber (multi mode) it might work. Be careful to avoid any surface where absorption could take place. Clean, clean, clean. But if you are using this kind of laser you already know that.
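For convenience, the back-of-envelope numbers above in a few lines of Python (the 10 µm mode-field diameter is the single-mode assumption stated in the answer):

import math

E_pulse = 0.4        # J per pulse
tau = 10e-9          # s pulse width
d_mode = 10e-6       # m assumed mode-field diameter (single-mode case)

P_peak = E_pulse / tau                             # ~4e7 W peak power
I_peak = P_peak / (math.pi * (d_mode / 2) ** 2)    # ~5e17 W/m^2 peak intensity
eps0, c = 8.854e-12, 2.998e8
E_field = math.sqrt(I_peak / (eps0 * c))           # ~1.4e10 V/m

print(f"P = {P_peak:.2e} W, I = {I_peak:.2e} W/m^2, E = {E_field:.2e} V/m")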
Ex.5.3 Q7 Arithmetic Progressions Solution - NCERT Maths Class 10 Question Find the sum of first \(22\) terms of an AP in which \(d = 7\) and \(22\rm{nd}\) term is \(149.\) Text Solution What is Known? \(d\) and \({a_{22}}\) What is Unknown? \({S_{22}}\) Reasoning: Sum of the first \(n\) terms of an AP is given by \({S_n} = \frac{n}{2}\left[ {2a + \left( {n - 1} \right)d} \right]\) or \({S_n} = \frac{n}{2}\left[ {a + l} \right]\), and \(nth\) term of an AP is\(\,{a_n} = a + \left( {n - 1} \right)d\) Where \(a\) is the first term, \(d\) is the common difference and \(n\) is the number of terms and \(l\) is the last term. Steps: Given, 22nd term, \(l = {a_{22}} = 149\) Common difference, \(d = 7\) We know that \(n\rm{th}\) term of AP, \({a_n} = a + \left( {n - 1} \right)d\) \[\begin{align}{{a}_{22}}&= a+\left( 22-1 \right)d \\149&=a+21\times 7 \\ 149&=a+147 \\ a &=2 \\\end{align}\] \[\begin{align}{S_n}&= \frac{n}{2}\left( {a + l} \right)\\ &= \frac{{22}}{2}\left( {2 + 149} \right)\\& = 11 \times 151\\ &= 1661\end{align}\]
EDIT: The following is (mostly) for $\alpha < 2$; scroll to the bottom for more on general $\alpha$. Kudos to Abdelmalek Abdesselam (again). As for $\mathbb{R}^d$, this is classical: the solution is given by the "fractional heat kernel": $$u(t,x) = u_0 * p_t(x),$$ where $p_t(x)$ is the inverse Fourier transform of $\exp(-t |\xi|^\alpha)$. Since $p_t$ is a probability density function, convolution with $p_t$ is a contraction on every $L^p(\mathbb{R}^d)$. The kernel $p_t$ has several alternative representations; in particular, we have Bochner's subordination formula, which asserts that $p_t(x)$ is a mixture of Gaussians: $$p_t(x) = \int_0^\infty q_s(x) \eta_t(s) ds,$$ where $$q_s(x) = (4 \pi s)^{-d/2} \exp(-|x|^2 / (4s))$$ is the Gaussian and $\eta_t(s)$ is a probability density function with Laplace transform $\exp(-t \xi^{\alpha/2})$. For the torus, all you need to do is to periodize the heat kernel: if we write $$\tilde p_t(x) = \sum_{n \in \mathbb{Z}^d} p_t(x + n),$$ then the solution on the torus is given by $$u(t, x) = u_0 * \tilde p_t(x),$$ where the convolution on the torus is defined as $$ \int_{[0,1)^d} u_0(y) \tilde p_t(x - y) dy .$$ You can find a number of references for the $\mathbb{R}^d$ case in my two survey papers: a more probabilistic view in Section 4 of M. Kwaśnicki, Fractional Laplace Operator and its Properties, in: A. Kochubei, Y. Luchko, Handbook of Fractional Calculus with Applications. Volume 1: Basic Theory, De Gruyter Reference, De Gruyter, Berlin, 2019 or an analytical perspective in Section 2.6 of M. Kwaśnicki, Ten equivalent definitions of the fractional Laplace operator, Frac. Calc. Appl. Anal. 20(1) (2017): 7–51 I do not know any reference specifically for the torus. You may search for papers on the fractional heat equation on manifolds (or even "fractals"/"$d$-sets"/"metric measure spaces"), but this will likely be too general and abstract for your needs. (I only know of a paper Fractional Laplacian on the torus by Luz Roncal and Pablo Raúl Stinga, but this one is about the extension technique, not very useful for the heat equation.) EDIT: what about general $\alpha > 0$? For general $\alpha > 0$, in $\mathbb{R}^d$, a solution is again given by the convolution with $p_t$ given as an inverse Fourier transform of $\exp(-t |\xi|^\alpha)$. This is no longer a positive function if $\alpha > 2$, but it is anyway an integrable function. Here is a short (but perhaps not the most elementary) proof of this fact. If $\alpha$ is an even integer, then the Fourier transform of $p_t$ is a Schwartz class function, and hence $p_t$ is Schwartz class. If $\alpha$ is not an even integer, then $p_t$ is still smooth, but it no longer decays rapidly. Now the result of: K. Soni, R.P. Soni, Slowly Varying Functions and Asymptotic Behavior of a Class of Integral Transforms I, II, III. J. Anal. Appl. 49 (1975): 166--179; 477--495; 612--628 applied to the $d$-dimensional Hankel transform provides an asymptotic expansion of $p_t(x)$ at infinity, which in particular implies that $p_t$ is of constant sign in a neighbourhood of infinity. This easily leads to the conclusion that $p_t$ is integrable: otherwise, its Fourier transform would necessarily diverge at zero. The convolution with $p_t$ is therefore again a bounded operator on $L^p(\mathbb{R}^d)$ for every $p \in [1, \infty]$. With no doubt it is written somewhere, but I do not have a reference at hand. Of course, periodization leads to similar results on the torus.
However, for general $\alpha > 0$ it is way easier to simply note that $\tilde{p}_t$ is given by a Fourier series with rapidly decreasing coefficients, and hence it is infinitely smooth. For this reason, the convolution with $\tilde p_t$ is a bounded operator on $L^p(\mathbb{T}^d)$.
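If a numerical illustration helps, here is a minimal spectral sketch (my own, not from the references above) of the fractional heat semigroup on the torus $[0,1)^d$: each Fourier mode $e^{2\pi i k\cdot x}$ is an eigenfunction of $(-\Delta)^{\alpha/2}$ with eigenvalue $|2\pi k|^\alpha$, so one simply damps the FFT coefficients by $\exp(-t|2\pi k|^\alpha)$. Grid size, $\alpha$ and $t$ are arbitrary choices for illustration.

```python
# Spectral sketch of exp(-t(-Δ)^{α/2}) on the torus [0,1)^d.
import numpy as np

def fractional_heat_torus(u0, t, alpha):
    """Apply the fractional heat semigroup to the initial datum u0 on [0,1)^d."""
    d = u0.ndim
    u_hat = np.fft.fftn(u0)
    # build |2πk|^2 on the frequency grid by broadcasting one axis at a time
    ksq = np.zeros(u0.shape)
    for axis, n in enumerate(u0.shape):
        k = np.fft.fftfreq(n, d=1.0 / n)      # integer frequencies 0, 1, ..., -1
        shape = [1] * d
        shape[axis] = n
        ksq = ksq + (2 * np.pi * k.reshape(shape)) ** 2
    multiplier = np.exp(-t * ksq ** (alpha / 2.0))
    return np.real(np.fft.ifftn(multiplier * u_hat))

# usage: smooth out a random initial datum on a 2-d torus
u0 = np.random.rand(64, 64)
u = fractional_heat_torus(u0, t=0.01, alpha=1.0)   # α = 1: square root of the Laplacian
print(u0.std(), u.std())   # the mean is preserved and oscillations are damped
```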
The energy loss in a hydraulic jump is still calculated with the old equation of Bresse from 1860 (i.e., equation 7 in this paper from 2017): $$ \frac{\Delta E}{E_1} = \frac{(\sqrt{1+8Fr^2}-3)^3}{16(\sqrt{1+8Fr^2}-1)(1+\frac{1}{2}Fr^2)} $$ There is no measured energy loss when $Fr<\sqrt3$, though this equation predicts at $Fr=\sqrt3$ a loss of: $$ \frac{\Delta E}{E_1} = \frac{(\sqrt{1+8*3}-3)^3}{16(\sqrt{1+8*3}-1)(1+\frac{1}{2}*3)}=\frac{2^3}{16(4)(2\frac{1}{2})}=\frac{8}{160}=5\% $$ This is obviously wrong, as it badly violates the conservation of energy, which must mean that the whole equation of Bresse is simply wrong. Is there a better way to calculate this loss, where the logic is rigorously derived from the fundamentals? Equation 15-1 from the book of Chow 1959 gives of course the same result for $Fr=\sqrt3$, as it's just another presentation of the same equation of Bresse 1860: $$ \frac{E_2}{E_1} = \frac{(1+8Fr^2)^{3/2}-4Fr^2+1}{8Fr^2(2+Fr^2)}=0.95 $$
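For what it's worth, here is a quick numerical evaluation of the quoted Bresse relation (my own snippet), which reproduces the 5% figure at $Fr=\sqrt3$ and shows the loss vanishing at $Fr=1$:

```python
# Evaluate the Bresse relation ΔE/E1 as a function of the Froude number.
import numpy as np

def bresse_loss_ratio(Fr):
    """Relative energy loss across a hydraulic jump, per the classical Bresse formula."""
    s = np.sqrt(1.0 + 8.0 * Fr**2)
    return (s - 3.0) ** 3 / (16.0 * (s - 1.0) * (1.0 + 0.5 * Fr**2))

for Fr in (1.0, np.sqrt(3.0), 2.0, 5.0):
    print(f"Fr = {Fr:.3f}  ->  dE/E1 = {bresse_loss_ratio(Fr):.3%}")
# Fr = 1 gives exactly 0 (no jump); Fr = sqrt(3) gives the 5% figure discussed above.
```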
Votes cast (58) all time by type 58 up 15 question 0 down 43 answer 3 Find all random variables $X$ such that if $Y$ is $N(0,1)$ and independent from $X$, then $X+Y$ and $\frac{1}{3}X+2Y-1$ have the same distribution 3 Find $\lim_{n \rightarrow \infty} \int_0^n (1+ \frac{x}{n})^{n+1} \exp(-2x) \, dx$ 3 Existence of some differentiable function 3 Show that $\frac{X_1+\dots+X_n}{n}$ converges to $\infty$ a.s. for $X_n \sim U([0,n])$ independent 2 Self-independent random variable
In the book "Statistical Physics, Part I ($3^{{\rm rd}}$ edition)" by Landau and Lifshitz, at $\S59$ when he treats the diamagnetic part of the magnetisation of a degenerate electron gas for weak fields ($\mu_BH\ll\varepsilon_F$, where $\varepsilon_F$ is the Fermi energy), he says that the energy levels of the orbital motion are given by $$ \varepsilon=\frac{p_z^2}{2m}+(2n+1)\mu_BH,\;n=0,1,2,\ldots $$ where $p_z$ is the component of the momentum in the direction of the field $H$ and $\mu_B=e\hbar/2m_ec$ is the Bohr magneton. That's ok! But then he says that the number of states of the particle, at a given value of $n$, such that its momentum lies in the interval $p_z$ and $p_z+{\rm d}p_z$ is $$ 2\frac{VeH}{(2\pi\hbar)^2c}{\rm d}p_z,$$ where the factor $2$ accounts for the two possible spin orientations. I tried to reach this result, but I had no success. And that's strange, because the number of states depends on the field $H$. Well, if you could help me understand this, I would be very grateful.
What is the simplest fermionic normalized quantum many-particle wavefunction, expressed in the first-quantized position representation, that you can think of? The normal single-particle examples don't seem to be that simple: Slater determinant of Gaussians times Hermite polynomials for harmonic oscillator, the free particle is not normalizable, and the infinite square well doesn't seem any better. I understand simple in a fairly intuitive way: easy to manipulate, possible to differentiate and integrate, not too many symbols etc. The best candidate I can think of (which is still not that good according to my example criteria above) is a single full Landau level for a charged particle in a magnetic field, expressed as $\displaystyle\Psi(z_1,z_2,\ldots,z_N)=\big(4\pi(N-1)\big)^{-N(N-1)/2}\prod_{i<j}^N(z_i-z_j)e^{-\frac{1}{4}\sum_{i=1}^N|z_i|^2}\ ,$ in terms of complex coordinates $z=x+iy$ and with physical constants set to 1 (normalization from Wikipedia). Do you know a simpler one? I'm asking because I would like a simple one to test out concepts and understanding.
Osaka Journal of Mathematics Osaka J. Math. Volume 54, Number 3 (2017), 499-516. Feller evolution families and parabolic equations with form-bounded vector fields Abstract We show that the weak solutions of parabolic equation $\partial_t u - \Delta u + b(t,x) \cdot \nabla u=0$, $(t,x) \in (0,\infty) \times \mathbb R^d$, $d \geqslant 3$, for $b(t,x)$ in a wide class of time-dependent vector fields capturing critical order singularities, constitute a Feller evolution family and, thus, determine a Feller process. Our proof uses an a priori estimate on the $L^p$-norm of the gradient of solution in terms of the $L^q$-norm of the gradient of initial function, and an iterative procedure that moves the problem of convergence in $L^\infty$ to $L^p$. Article information Source Osaka J. Math., Volume 54, Number 3 (2017), 499-516. Dates First available in Project Euclid: 7 August 2017 Permanent link to this document https://projecteuclid.org/euclid.ojm/1502092825 Mathematical Reviews number (MathSciNet) MR3685589 Zentralblatt MATH identifier 1377.35128 Citation Kinzebulatov, Damir. Feller evolution families and parabolic equations with form-bounded vector fields. Osaka J. Math. 54 (2017), no. 3, 499--516. https://projecteuclid.org/euclid.ojm/1502092825
The following is probably one of the weirdest unexpected bridges between abstract mathematics and theoretical physics I know of; the two weirdest features being that it's pretty simple to explain, and that the area of math it connects to is number theory. While the latter thing can occur occasionally, especially in the area of string theory, the former is basically close to impossible. A while back I was in a very ugly situation I needed to distract myself out of. I was struck by the fact that the Riemann zeta function looks a lot like a partition function from statistical mechanics. The Riemann zeta is: $$ \zeta(\beta) = \sum_{n=1}^\infty n^{-\beta}$$ for \(Re(\beta) > 1\) of course, and then one extends analytically. The partition function of a system with energy levels \( E_n\) is given by $$ Z(\beta) = \sum_n e^{-\beta E_n}$$ where \( \beta = \frac{1}{k_B T}\). Note that if you have a system with energy levels \( E_n = \log n\), its partition function will be precisely the Riemann zeta. It's curious how they even use the same letter. The bosonic primon gas After googling I found out such a system has indeed been investigated. Basically, imagine a single particle, let's call it a primon, had the available energy levels $$ \log p_0, \log p_1, \ldots $$ where the \( p_i\) are the prime numbers. Then imagine a system made by a bunch of these particles, a gas, and that they don't interact. Your system can have any number of primons, and each of these can be in any single particle state; moreover they are indistinguishable, and finally they are bosons so that you can place as many as you want in any single-particle state. The system's state could then be described by just specifying how many primons you want on each level, so the occupation numbers \( a_0,a_1,a_2,\ldots\). The energy of this state is then $$ E = a_0\log p_0 + a_1 \log p_1 + \ldots = \log \left( p_0^{a_0} p_1^{a_1} \ldots \right)$$ Of course, for the energy of the state to be finite, the total number of particles $$ N = a_0 + a_1 + \ldots$$ must be finite and so the \( a_i\) must be zero from a certain \(i\) onwards. Note how a prime decomposition just popped up. The energy is \( \log n\) for some integer \( n = p_0^{a_0} p_1^{a_1} \ldots\) decomposed as powers of primes. By the prime factorization theorem, this decomposition exists for all integers and is unique. So the physics version of this theorem is that this system has the states \( |n\rangle\) for all the integers \( n\), with energy \(\log n\), occupation numbers given by the exponents in the prime decomposition, and most importantly no degeneracy: there is one and only one way to place primons on the energy levels to make a state of energy \( \log n\). Here's a super explicit example: The state \(|40 \rangle\) has energy \(\log 40\). Since \(40 = 2^3 \cdot 5^1\), this state has \(3\) primons with energy \(\log 2\) and \(1\) primon with energy \(\log 5\). The occupation numbers are \((3,0,1,0,0,0,\ldots)\). Cool. So the (grand canonical, actually) partition function of this thing is the sum over all possible states of the system of \(e^{-\beta E_n}\); but as we saw the states are indexed by all the positive integers, with energy \( \log n\), so actually $$ Z(\beta) = \sum_n e^{-\beta E_n} = \sum_{n=1}^\infty n^{-\beta} = \zeta(\beta)$$ So yeah, the Riemann zeta is the partition function for the primon gas. But that's not all. This is still a gas of bosons, and normal statistical mechanics applies. 
In particular, we know the partition function must be $$ Z(\beta) = \prod_i \left(1 - e^{-\beta \epsilon_i }\right)^{-1} $$ where \( \epsilon_i\) are the single-particle energies. Inserting our value for the energies we get: $$ Z(\beta) = \prod_{p} (1 - p^{-\beta} )^{-1} $$ where the product is over prime \(p\)... we rediscovered Euler's celebrated product formula: $$ \zeta(\beta) = \prod_p (1- p^{-\beta})^{-1}$$ but essentially only doing physics. Now we could stop here and this would be fun all by itself. We could also muse on whether this allows for an alternative approach to the Riemann hypothesis. However, there's two main obstacles: the Riemann hypothesis is about \( \beta = \frac{1}{2} + it \); while purely imaginary \( \beta\)s would be immensely interesting physically (because of Wick rotation, relating very roughly temperature in the statistical mechanics system with time in the quantum mechanics equivalent) I wouldn't really know what to do with the shift by one-half. the Riemann hypothesis is about zeroes. We have an easy interpretation of poles: for example the divergence at $$ \beta=1$$ in the partition function means the primon gas cannot actually get any hotter than that because it would require infinite energy - this an example of a Hagedorn temperature. But I wouldn't know what the zero of big zeta would mean physically. Sad. Maybe someone more informed can shed some light. But that's not all we can squeeze out of this. Let me introduce you to: The fermionic primon gas So now we want all primons to be fermions. The only thing we need to change is that no two particles can share the same state because of Pauli exclusion, so occupation numbers must be \( 0\) or \(1\). This means that we cannot get the state \(|n\rangle\) for all integers \( n\), but only if \( n\) has a decomposition into primes where all the exponents aren't bigger than one. Equivalently, it has to be a product of distinct primes, aka a square-free number. So the partition function is a sum over square-free numbers: $$ Z(\beta) = \sum_{n \text{S.F}} n^{-\beta} = \sum_{n=1}^\infty |\mu(n)| n^{-\beta} $$ I was able to extend the sum to all integers by weighing with the absolute value of the Möbius function $$ \mu(n)$$ from number theory. This function is \( 0\) on non-square-free numbers, \( 1\) if your number is a product of an even number of primes, and \( -1\) otherwise. The Möbius function admits a crystalline interpretation as the operator giving the statistics of the state: $$ \mu(n) = (-1)^{N} $$ that is, if there is an odd number of fermions, our state will be a fermion, even, it will be a boson. $$ \mu(n)$$ will tell us that. Anyways, the partition function can also be written using the known statmech result for a system of fermions: $$ Z(\beta) = \prod_i (1+e^{-\beta \epsilon_i}) = \prod_p (1 + p^{-\beta}) = \prod_p \frac{1- p^{-2\beta}}{1-p^{-\beta}} = \frac{\zeta(\beta)}{\zeta(2\beta)} $$ So we found another famous number theory result $$ \sum_n |\mu(n)| n^{-\beta} = \frac{\zeta(\beta)}{\zeta(2\beta)} $$ Now the final step for our trascendence beyond this material hyperplane is the Supersymmetric primon gas Now we're talking. The two theories above describe respectively a bosonic and a fermionic particle species. A theory containing both would be supersymmetric and these particles would be superpartners. (Of course, we don't have to fiddle with the ultra-complex math of actual supersymmetry transformations in D-dimensional spacetime - because there's no spacetime here!). 
The supersymmetry would just be the symmetry sending bosons to fermions and viceversa. This is nothing to be scared of. I'm just taking the two systems, the bosonic and the fermionic gas, and putting them together. Simple composite system. Nothing is interacting so everything factorizes. We just consider the total energy = energy of bosons + energy of fermions. Now \( | n \rangle\) does not identify a single state. Calling the occupation numbers of bosons and fermions respectively \( a_i\) and \( b_i\), we still define the total \( n\) as $$ n = p_0^{a_0 + b_0} \cdot p_1^{a_1 + b_1} \cdot \ldots $$ and the energy is indeed \( \log n\), but now giving \(n\) only fixes the total exponents \( a_i + b_i\) by prime decomposition; to fix the occupation number we also specify the total number built only with the \( b^i\): $$ d = p_0^{b_0} \cdot p_1^{b_1} \cdot \ldots $$ so we can describe all states as \( |n,d\rangle\) where \(d\) is square-free. Note that \( d\) divides \(n\). So actually \(d\) must be a square-free divisor of n. Let's forget about the partition function (which is just the product of the previous two, since this is just a composite system of the previous two systems). Let's talk expectation values. If you have an operator \( O\), you can compute the average value of the operator at a given temperature through: $$ \langle O \rangle = \sum_{\text{states}} e^{-\beta E} O$$ and note the partition function is just the expectation value of 1. So we want to compute the expectation value of the operator $$ (-1)^{N_F}$$ counting fermions, defined above. This is $$ \Delta = \langle (-1)^{N_F} \rangle = \sum_n \sum_{d|n} \mu(d) n^{-\beta}$$ I've swapped the operator with its representation as the Möbius function, and I should sum over square-free divisors of \( n\), but actually I can just sum over all divisors since \( \mu\) projects out the square-free ones. Now the inner sum is $$ \sum_{d|n} \mu(d) \;= 1 \text{ if }n>1,\; 0 \text{ if }n=1 $$ Why? Well there's a math reason and a physics reason. Math reason, I'll let you check it out in the Wiki article for the Möbius function. Physics reason, is supersymmetry. At any given energy, the set of states is supersymmetric, and there are as many fermionic (-1) states as there are bosonic (+1). So in the sum they cancel out to 0. The only exception is the ground state \( |1,0\rangle\). So \( \Delta = 1\). But then, this is the expectation value computed in the supersymmetric theory; since this is a simple sum of the bosonic and fermionic theory, this should factorize: $$ \Delta = \langle (-1)^{N_F} \rangle_B \langle (-1)^{N_F} \rangle_F $$ first one, the operator always takes the value 1, so the expectation value is the partition function in the bosonic case, good ol' \( \zeta(\beta)\). Second one, it's the sum \( \sum_{d=1}^\infty \mu(d) d^{-\beta}\). So here's the last number theory fact for today: $$ \frac{1}{\zeta(\beta)} = \sum_{d=1}^\infty \mu(d) d^{-\beta}$$ It's not over yet: you can also prove the more substantial Möbius inversion formula, but I'm not going to talk about that. Read about it in this paper: http://projecteuclid.org/euclid.cmp/1104180135 End of transmission.
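Postscript (my own addition, not part of the original derivation): a quick numerical sanity check of the three identities above — the Euler product, the square-free sum $\zeta(\beta)/\zeta(2\beta)$, and $\sum_n \mu(n) n^{-\beta} = 1/\zeta(\beta)$. Everything is truncated at integers and primes below N, so agreement is only approximate.

```python
# Numerical check of the primon-gas identities, truncated at N terms.
from sympy import primerange, factorint, zeta

def mobius(n):
    """Möbius function via the prime factorization."""
    factors = factorint(n)
    return 0 if any(e > 1 for e in factors.values()) else (-1) ** len(factors)

beta, N = 3.0, 2000

zeta_sum   = sum(n ** -beta for n in range(1, N))
euler_prod = 1.0
fermi_prod = 1.0
for p in primerange(2, N):
    euler_prod /= 1 - p ** -beta          # bosonic partition function
    fermi_prod *= 1 + p ** -beta          # fermionic partition function
mobius_sum = sum(mobius(n) * n ** -beta for n in range(1, N))

print(zeta_sum, euler_prod)                                # both ≈ ζ(3)
print(fermi_prod, float(zeta(beta) / zeta(2 * beta)))      # ≈ ζ(3)/ζ(6)
print(mobius_sum, 1.0 / float(zeta(beta)))                 # ≈ 1/ζ(3)
```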
A limit point of a set $S$ in a metric space $X$ always lies in $X$, but it may or may not lie in $S$ itself. So you have this point, call it $x$. Now think of an arbitrary distance (using your distance metric, $d(x,y)$). Call it $r$. If $x$ is a limit point, then it means that for ANY $r>0$, you will be able to find some other point $y \in S$ such that $d(x,y) <r$. You can make $r$ arbitrarily small, but within the distance $r$, our point $x$ will not be alone. Its neighborhood will never contain only $x$: a neighborhood is just the set of points that are within the distance $r$ from $x$. This essentially means that for any neighborhood of $x$ that we select, there is an infinite number of points of $S$ in the neighborhood, 'accumulated around' $x$. Example: Consider the set $\mathbb{R}^1$, the real line. This is our metric space with the usual Euclidean norm for distance. Now consider: $$ S = \{1 + \frac1n \mid n \in \mathbb{N}\}$$ This set $S$ has a limit point at $1 \in \mathbb{R}^1$ (and only at 1) because for any $r>0$, we can always find some $n \in \mathbb{N}$ such that $1+ \frac1n < 1+r$, so that there is some $s \in S$ with a distance to $1$ that is less than $r$. Intuition: you can then expand this notion: in $\mathbb{R}^2$, we think of $r$ as a radius, and the neighborhood is a disc if you draw the set. In $\mathbb{R}^3$, the neighborhood becomes a ball, etc...
In categorical literature, the notation $f: X \rightarrow Y$ only means "$f$ is a function from $X$ to $Y$" in the case where the category you are working in is Set, that is the category of sets and functions. In the category of sets and relations, the same notation means $f$ is a relation from $X$ to $Y$. To answer (perhaps more verbosely) your question about interest, there is actually loads of interest in binary relations. With a bit of digging, one sees that they crop up in various guises all over category theory. AS A (MONOIDAL) CATEGORY For starters, the category Rel of sets and binary relations is a perfectly good category, and its properties are well understood. Perhaps the reason Rel isn't well covered in the CT classics is because it is primarily of interest as a monoidal category. These are categories that have a notion of "product" that is much weaker than the usual, in that they don't satisfy the usual universal properties, but still form an associative, unital operation one can apply to objects and arrows. While this isn't too far out of the categorical mainstream, you may not necessarily meet them in an introductory course on category theory. Axiomatically, the category of relations is much more like the category of vector spaces. To see this intuitively, picture a relation from $A$ to $B$ as a matrix $M$ whose columns are indexed by the elements of $A$ and whose rows are indexed by the elements of $B$. Then, let $M_{a,b} = 1$ if $a M b$ and $M_{a,b} = 0$ otherwise. Relational composition is just matrix multiplication, replacing $+$ with OR and $*$ with AND. From this point of view, it's easy to see that the cartesian product of relations much more closely resembles the tensor product of linear maps than it does the cartesian product of functions. The category FRel of finite sets and relations (as well as FHilb, the category of finite dimensional Hilbert spaces, and many more) forms a dagger-compact closed category. If you become familiar with this definition and the associated literature, it sums up a good chunk of what categories like FRel are like and what they are good for. There are lots of ways to get stuck into this. These categories are presented in a quite accessible, physics-oriented format in Bob Coecke's paper: "Introducing categories to the practicing physicist". You can get it here: http://arxiv.org/abs/0808.1032 And Pete Selinger introduces the whole zoo of such categories in "A survey of graphical languages for monoidal categories". http://www.mscs.dal.ca/~selinger/papers/graphical.pdf AS SPANS As @Mikola already mentioned, spans (which are a generalisation of relations) are often used in the place of binary relations in arbitrary categories. A span is just a pair of arrows $f: K \rightarrow X$, $g : K \rightarrow Y$ from a common domain $K$. In categories with products the connection to relations is especially easy to see. Because of the universal property of products, such a pair of arrows determines a unique third arrow $h : K \rightarrow X \times Y$. In the case that $h$ is monic, this is just the same as a subset of $X \times Y$, i.e. relations as you normally think of them (cf. @Andrej). Spans are a beautiful and elegant way to deal with maps that are "relation-like" in suitably rich categories. In fact, any category with pullbacks has a natural notion of a "category of spans of C", whose objects are objects of C, whose arrows are spans (i.e.
generalised relations), and where composition of spans is by a pullback-based construction that is much like how one composes relations. John Baez has some interesting papers about taking spans of groupoids rather than just spans of sets. In line with the intuition that relations are a bit like matrices over the booleans, he treats spans of groupoids as things that are a bit like matrices over the positive real numbers. The place to start on this is: http://math.ucr.edu/home/baez/groupoidification/ AS PROFUNCTORS Another way you can think of a binary relation between sets is as a function $\chi_R$ out of a cartesian product $A \times B$ of sets to the two-element set {$0, 1$}. Then let $a R b$ iff $\chi_R(a,b) = 1$. Such a function can be used to define any relation, and is called the characteristic function of the relation $R$. We can generalise this in quite a natural way from sets and functions to categories and functors. However, in the category Cat of categories and functors, it turns out the most natural thing to send "characteristic" functors to is the category of Sets. A functor $F : C^{op} \times D \rightarrow Set$ is called a profunctor from the category $C$ to the category $D$. I won't spell out too many details here (for instance, the category $C$ is "op-ed" for a good reason), but suffice it to say that such a map behaves very analogously to a binary relation and comes with natural notions of composition, etc. These become a very powerful tool when working with internal and higher categories, because certain structures you can put on profunctors let you define the notion of a category inside another category. This is a good example of a structure that is quite deep (and that people are only beginning to fully appreciate!) that essentially generalises the basic notion of a binary relation. When you have built up sufficient courage, the canonical reference for profunctors is Bénabou's "Distributors at Work." For the applications I described, they are detailed in papers by Steve Lack and Paul-Andre Mellies, and probably many others.
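To make the "relations are boolean matrices" remark above concrete, here is a tiny sketch (my own, not from the answer) where relational composition is literally matrix multiplication over the (OR, AND) semiring:

```python
# Relations as boolean matrices; composition = boolean matrix multiplication.
def compose(R, S):
    """Compose relation R ⊆ A×B with S ⊆ B×C, both given as boolean matrices
    indexed as R[b][a] (rows = codomain, columns = domain, as in the answer)."""
    rows_S, cols_R = len(S), len(R[0])
    inner = len(R)   # = |B|, the middle set
    return [[any(S[c][b] and R[b][a] for b in range(inner))
             for a in range(cols_R)] for c in range(rows_S)]

# "less than" on {0,1,2} composed with itself gives "at least 2 less than"
lt = [[a < b for a in range(3)] for b in range(3)]   # lt[b][a] == (a < b)
print(compose(lt, lt))   # only (0, 2) is related: [[False,...],[False,...],[True, False, False]]
```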
Answer $sin~C = 1$ $C = 90^{\circ}$ Triangle ABC is a $30^{\circ},60^{\circ},90^{\circ}$ triangle. Work Step by Step We can use the law of sines to find the angle $C$: $\frac{a}{sin~A} = \frac{c}{sin~C}$ $sin~C = \frac{c~sin~A}{a}$ $sin~C = \frac{(2\sqrt{5})~sin~(30^{\circ})}{\sqrt{5}}$ $sin~C = 1$ $C = arcsin(1)$ $C = 90^{\circ}$ Triangle ABC is a $30^{\circ},60^{\circ},90^{\circ}$ triangle.
Search Now showing items 1-5 of 5 Forward-backward multiplicity correlations in pp collisions at √s = 0.9, 2.76 and 7 TeV (Springer, 2015-05-20) The strength of forward-backward (FB) multiplicity correlations is measured by the ALICE detector in proton-proton (pp) collisions at s√ = 0.9, 2.76 and 7 TeV. The measurement is performed in the central pseudorapidity ... Rapidity and transverse-momentum dependence of the inclusive J/$\mathbf{\psi}$ nuclear modification factor in p-Pb collisions at $\mathbf{\sqrt{\textit{s}_{NN}}}=5.02$ TeV (Springer, 2015-06) We have studied the transverse-momentum ($p_{\rm T}$) dependence of the inclusive J/$\psi$ production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in three center-of-mass rapidity ($y_{\rm cms}$) regions, down to ... Measurement of charm and beauty production at central rapidity versus charged-particle multiplicity in proton-proton collisions at $\sqrt{s}$ = 7 TeV (Springer, 2015-09) Prompt D meson and non-prompt J/$\psi$ yields are studied as a function of the multiplicity of charged particles produced in inelastic proton-proton collisions at a centre-of-mass energy of $\sqrt{s}=7$ TeV. The results ... Coherent $\rho^0$ photoproduction in ultra-peripheral Pb-Pb collisions at $\mathbf{\sqrt{\textit{s}_{\rm NN}}} = 2.76$ TeV (Springer, 2015-09) We report the first measurement at the LHC of coherent photoproduction of $\rho^0$ mesons in ultra-peripheral Pb-Pb collisions. The invariant mass and transverse momentum distributions for $\rho^0$ production are studied ... Inclusive, prompt and non-prompt J/ψ production at mid-rapidity in Pb-Pb collisions at √sNN = 2.76 TeV (Springer, 2015-07-10) The transverse momentum (p T) dependence of the nuclear modification factor R AA and the centrality dependence of the average transverse momentum 〈p T〉 for inclusive J/ψ have been measured with ALICE for Pb-Pb collisions ...
In a paper by Joos and Zeh, Z Phys B 59 (1985) 223, they say:This 'coming into being of classical properties' appears related to what Heisenberg may have meant by his famous remark [7]: 'Die "Bahn" entsteht erst dadurch, dass wir sie beobachten.'Google Translate says this means something ... @EmilioPisanty Tough call. It's technical language, so you wouldn't expect every German speaker to be able to provide a correct interpretation—it calls for someone who know how German is used in talking about quantum mechanics. Litmus are a London-based space rock band formed in 2000 by Martin (bass guitar/vocals), Simon (guitar/vocals) and Ben (drums), joined the following year by Andy Thompson (keyboards, 2001–2007) and Anton (synths). Matt Thompson joined on synth (2002–2004), while Marek replaced Ben in 2003. Oli Mayne (keyboards) joined in 2008, then left in 2010, along with Anton. As of November 2012 the line-up is Martin Litmus (bass/vocals), Simon Fiddler (guitar/vocals), Marek Bublik (drums) and James Hodkinson (keyboards/effects). They are influenced by mid-1970s Hawkwind and Black Sabbath, amongst others.They... @JohnRennie Well, they repeatedly stressed their model is "trust work time" where there are no fixed hours you have to be there, but unless the rest of my team are night owls like I am I will have to adapt ;) I think u can get a rough estimate, COVFEFE is 7 characters, probability of a 7-character length string being exactly that is $(1/26)^7\approx 1.2\times 10^{-10}$ so I guess you would have to type approx a billion characters to start getting a good chance that COVFEFE appears. @ooolb Consider the hyperbolic space $H^n$ with the standard metric. Compute $$\inf\left\{\left(\int u^{2n/(n-2)}\right)^{-(n-2)/n}\left(4\frac{n-1}{n-2}\int|\nabla u|^2+\int Ru^2\right): u\in C^\infty_c\setminus\{0\}, u\ge0\right\}$$ @BalarkaSen sorry if you were in our discord you would know @ooolb It's unlikely to be $-\infty$ since $H^n$ has bounded geometry so Sobolev embedding works as expected. Construct a metric that blows up near infinity (incomplete is probably necessary) so that the inf is in fact $-\infty$. @Sid Eating glamorous and expensive food on a regular basis and not as a necessity would mean you're embracing consumer fetish and capitalism, yes. That doesn't inherently prevent you from being a communism, but it does have an ironic implication. @Sid Eh. I think there's plenty of room between "I think capitalism is a detrimental regime and think we could be better" and "I hate capitalism and will never go near anything associated with it", yet the former is still conceivably communist. Then we can end up with people arguing is favor "Communism" who distance themselves from, say the USSR and red China, and people who arguing in favor of "Capitalism" who distance themselves from, say the US and the Europe Union. since I come from a rock n' roll background, the first thing is that I prefer a tonal continuity. I don't like beats as much as I like a riff or something atmospheric (that's mostly why I don't like a lot of rap) I think I liked Madvillany because it had nonstandard rhyming styles and Madlib's composition Why is the graviton spin 2, beyond hand-waiving, sense is, you do the gravitational waves thing of reducing $R_{00} = 0$ to $g^{\mu \nu} g_{\rho \sigma,\mu \nu} = 0$ for a weak gravitational field in harmonic coordinates, with solution $g_{\mu \nu} = \varepsilon_{\mu \nu} e^{ikx} + \varepsilon_{\mu \nu}^* e^{-ikx}$, then magic?
The hint by Yuval in his comment is the right one. An approach to writing context-sensitive grammars is to design them like a machine. Send messages over the string. This is very close to writing a linear bounded automaton (or linear space Turing machine). In such a machine the reading/writing head scans over the tape while updating the string. This head (with the machine state) can be modelled by a production, like $ap\to qb$ for "in state $p$ while reading $a$, write $b$ and move left, changing state to $q$", where we assume the state is written to the right of the position of the head. There are differences. The CSG is more flexible as one can insert letters in the middle of the string, and moreover there can be parallel processes going on at the same time. The latter can be a nuisance when arguing the approach is correct. Let's see how the messages work in an example: $L = \{ww\mid w\in\{a,b\}^*, |w|\ge 1 \}$, or squares. Start with left and right sides of the string with the rule $S\to LR$. As markers and messengers cannot disappear in a CSG they must represent a letter in the final string. This letter can be $a$ or $b$ so we have two rules instead: $S\to L_aR_a\mid L_bR_b$. Now the left marker generates new letters and a messenger that becomes a copy at the other half of the string. $L_\sigma\to L_\sigma aM_a\mid L_\sigma bM_b$. Everywhere in these rules $\sigma,\tau$ may take the values $a,b$. Messengers move over other letters: $M_\sigma\tau\to \tau M_\sigma$. At the beginning of the second half write the letter: $M_\sigma R_\tau \to R_\tau \sigma$. We should end the derivation by removing the boundary markers $L_\sigma\to\sigma$, $R_\sigma\to \sigma$. We have a problem when the right marker is gone, while there is still a messenger under way. This can be solved by extra synchronization. However, no terminal string will be generated this way; the derivation is lost, so there is no real problem. I tried a similar approach here for the language $\{a^ib^jc^{ij} \mid i,j\ge 1\}$. Here is a hint for squares (numbers) $\{a^{n^2} \mid n\ge 1\}$. As you see, I practise a lot. PS. I hope you mean a monotonic grammar rather than a "real" context-sensitive one. Those are harder, but there is a standard (boring) transformation.
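To see the messenger mechanics in action, here is a small sketch (my own, with hypothetical helper names) that replays one derivation of abab using exactly the productions described above:

```python
# Replay one derivation of "abab" in the messenger-style grammar.
# Nonterminals are written as tuples like ("L","a"), ("R","a"), ("M","b");
# terminals are plain one-character strings.

def apply_rule(form, lhs, rhs):
    """Replace the first occurrence of the token sequence lhs by rhs."""
    for i in range(len(form) - len(lhs) + 1):
        if form[i:i + len(lhs)] == lhs:
            return form[:i] + rhs + form[i + len(lhs):]
    raise ValueError("rule not applicable")

La, Ra, Mb = ("L", "a"), ("R", "a"), ("M", "b")

form = [("S",)]
steps = [
    ([("S",)], [La, Ra]),          # S -> L_a R_a
    ([La],     [La, "b", Mb]),     # L_a -> L_a b M_b  (generate a 'b' and its messenger)
    ([Mb, Ra], [Ra, "b"]),         # M_b R_a -> R_a b  (messenger writes the copy)
    ([La],     ["a"]),             # L_a -> a          (remove left marker)
    ([Ra],     ["a"]),             # R_a -> a          (remove right marker)
]
for lhs, rhs in steps:
    form = apply_rule(form, lhs, rhs)
    print(form)                    # each intermediate sentential form

assert "".join(t for t in form if isinstance(t, str)) == "abab"
```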
This exercise is inspired by exercises 83 and 100 of Chapter 10 in Giancoli's book A uniform disk ($R = 0.85 m$; $M =21.0 kg$) has a rope wrapped around it. You apply a constant force $F = 35 N$ to unwrap it (at the point of contact ground-disk) while walking 5.5 m. Ignore friction. a) How much has the center of mass of the disk moved? Explain. Now derive a formula that relates the distance you have walked and how much rope has been unwrapped when: b) You don't assume rolling without slipping. c) You assume rolling without slipping. a) I have two different answers here, which I guess one is wrong: a.1) Here there's only one force to consider in the direction of motion: $\vec F$. Thus the center of mass should also move forward. a.2) You are unwinding the rope out of the spool and thus exerting a torque $FR$ (I am taking counterclockwise as positive); the net force exerted on the CM is zero and thus the wheel only spins and the center of mass doesn't move. The issue here is that my intuition tells me that there should only be spinning. I've been testing the idea with a paper roll and its CM does move forward, but I think this is due to the roll not being perfectly cylindrical; if the unwrapping paper were to be touching only at a point with an icy ground the CM's roll shouldn't move. 'What's your reasoning to assert that?' Tangential velocity points forwards at distance $R$ below the disk's CM but this same tangential velocity points backwards at distance $R$ above the disk's CM and thus translational motion is cancelled out. Actually, we note that opposite points on the rim have opposite tangential velocities (assuming there's no friction so that the tangential velocity is constant). My book assumes a.1) is OK. I say a.2) is OK. Who's right then? b) We can calculate the unwrapped distance noting that the arc length is related to the radius by the angle (radian) enclosed: $$\Delta s = R \Delta \theta$$ Assuming constant acceleration and zero initial angular velocity: $$\Delta \theta = 1/2 \alpha t^2 = 1/2 \frac{\omega}{t} t^2 = 1/2 \omega t$$ By Newton's second Law (rotation) we can solve for $\omega$ and then plug it into the above equation: $$\tau = FR = I \alpha = I \frac{\omega}{t} = 1/2 M R^2 \frac{\omega}{t}$$ $$\omega = \frac{2F}{M R}t$$ Let's plugg it into the other EQ. $$\Delta \theta = \frac{F}{M R}t^2$$ Mmm we still have to eliminate $t$. Assuming constant acceleration we get by the kinematic equation (note I am using the time $t$ you take to walk 5.5 m so that we know how much rope has been unwrapped in that time): $$t^2 = \frac{2M\Delta x}{F}$$ Plugging it into $\Delta \theta$ equation: $$\Delta \theta = \frac{2\Delta x}{R}$$ Plugging it into $\Delta s$ equation we get the equation we wanted: $$\Delta s = 2 \Delta x$$ If we calculate both $v$ and $\omega$ we see that $v=R\omega$ is not true so the disk doesn't roll without slipping. c) Here $v=R\omega$ must be true. We know that if that's the case the tangential velocity must be related to the center of mass' velocity as follows: $$2v_{cm} = v$$ Assuming that the person holding the rope goes at speed $2v_{cm}$ we get: $$\Delta x= 2 \Delta s$$ I get reversed equations at b) and c). How can we explain that difference in both equations beyond the fact of rolling without slipping?
I have a signal, $f(t)$. I know a function that can be used to generate this signal, such that I can determine its Fourier series. I want to express this Fourier series in simpler terms so that the waveform can be rendered by a GPU in parallel, with the added ability to change the lowpass cutoff frequency of the signal dynamically. Take for example a saw wave with a fundamental frequency of 1 Hz, represented by the function: $$f(t) = {-2\tan^{-1}(\cot(\pi t)) \over \pi}$$ I'm using this form of the saw wave because it's cyclic and easy for a GPU to calculate. Its Fourier series is rather difficult to solve, but I have the series on hand: $$g(t, n) = {-2 \over \pi} \sum_{k=1}^n {\sin(k \pi t) \over k}$$ Now, I could technically use additive synthesis to generate the waveform using this series, but my project has real-time constraints and this simply won't do. A single saw wave might have hundreds of partials up to the Nyquist frequency. This takes a lot of time to compute, and on a GPU it's a branching nightmare. Since I'm rendering all of the samples in parallel, there is no history buffer for existing filtering algorithms to operate on, and I can't do that either. What I'm trying to do is approximate this series as a function of $n \in Z$, without the summation. That way, it doesn't matter what cut-off I use, I get consistent performance across the board that's most likely a huge improvement over additive synthesis. Right now I'm trying to fit the curve manually, but this is a laborious process. Is there a way to find $h(t, n) \approx g(t, n)$, where $h(t, n)$ is the approximation of the series?
Practical and theoretical implementation discussion. Hi, I have a simple question as in the title. Why is volumetric emission proportional to the absorption coefficient? I often see the volumetric emission term is written as I can also see another representation, the volumetric emittance function (e.g. in Mark Pauly's thesis: Robust Monte Carlo Methods for Photorealistic Rendering of Volumetric Effects), which has the unit of radiance divided by metre (that is W sr^-1 m^-3). Do particles that emit light not scatter light at all? Thanks I don't quite understand the question: doesn't the first term refer to a particle density at an integration position x that emits radiance L_e in viewing direction w, and that the particle density absorbs radiance w.r.t. sigma_a(x)? The emitted radiance is not proportional to the absorption, but is scaled by sigma_a(x). Let sigma_a := 1.0 be a constant for all x with respect to the density field; then your model only accounts for emission. With L_b(xb,w)+\int_xb^x L_e(x,w) sigma_a(x), b being the position of a constantly radiating background light and \int_xb^x meaning integration over the viewing ray from the backlight to the integration position, you get the classical emission+absorption model that is e.g. used for interactive DVR in SciVis. In- and out-scattering can be incorporated in the equation. See Nelson Max's '95 paper on optical models for DVR for the specifics: https://www.cs.duke.edu/courses/cps296. ... dering.pdf Also note that those models usually don't consider individual particles, but rather particle densities, and then derive coefficients e.g. by considering the projected area of all particles inside an infinitesimally flat cylinder projected on the cylinder cap. Emission and absorption are sometimes expressed with a single coefficient in code for practical reasons, e.g. so that a single coefficient in [0..1] can be used to look up an RGBA tuple in a single, pre-computed and optionally pre-integrated transfer function texture.
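As a concrete illustration of the classical emission + absorption model mentioned in the reply (my own minimal sketch, not code from the thread; the constant coefficients are stand-ins), a front-to-back ray march of L = L_b·T(0,D) + ∫ L_e(s)·σ_a(s)·T(0,s) ds looks like this:

```python
# Front-to-back ray marching of the emission + absorption model.
import math

def march(L_b, sigma_a, L_e, depth, steps=1000):
    ds = depth / steps
    L, transmittance = 0.0, 1.0
    # accumulate emission weighted by absorption and by transmittance from the eye
    for i in range(steps):
        s = (i + 0.5) * ds
        L += transmittance * L_e(s) * sigma_a(s) * ds
        transmittance *= math.exp(-sigma_a(s) * ds)
    return L + transmittance * L_b   # attenuated background radiance

# homogeneous glowing slab: sigma_a = 0.5, L_e = 1, background radiance = 2
print(march(2.0, lambda s: 0.5, lambda s: 1.0, depth=4.0))   # ≈ 1 + e^{-2}
```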
Thanks for the reply. I can find the volumetric emission term which is proportional to the absorption coefficient for example in Jensen's Photon Mapping book, the Spectral and Decomposition Tracking paper, or Wojciech Jarosz's thesis. In the Jarosz thesis, there is the following sentence next to equation (4.12) on page 60: "Media, such as fire, may also emit radiance, Le, by spontaneously converting other forms of energy into visible light. This emission leads to a gain in radiance expressed as:" I think it is required that emitting particles should not scatter light in order for L_e^V to be represented in the decomposed form \sigma_a(x) L_e(x, w). I'm not quite sure if I understand how you come to this assumption and if I totally understand your question, but I don't see why particles that emit light shouldn't also scatter light. However, the mental model behind radiance transfer is not one that considers the interaction of individual particles. The model rather derives the radiance in a density field due to emission, absorption, and scattering phenomena at certain sampling positions and in certain directions. So the question is: for position x, how much light is emitted by particles at or near x, how much light arrives there due to other particles scattering light into direction x ("in-scattering"), and conversely: how much light is absorbed due to local absorption phenomena at x, and how much light is scattered away from x ("out-scattering", distributed w.r.t. the phase function). See e.g. Hadwiger et al. "Real-time Volume Graphics", p. 6: (https://doc.lagout.org/science/0_Comput ... aphics.pdf) Analogously, the total emission coefficient can be split into a source term q, which represents emission (e.g., from thermal excitation), and a scattering term. Out-scattering + heat dissipation etc. ==> total absorption at point x contributed to a viewing ray in direction w. In-scattering + emission ==> added radiance at point x along the viewing direction w. It is not about individual particles. The scattering equation is about the four effects contributing to the total radiance at a point x in direction w. There are no individual particles associated with the position x; you consider particle distributions and how they affect the radiance at x. The radiance increases if particles scatter light towards x, or if particles at (or near) x emit light. The radiance goes down due to absorption and out-scattering from the particle density at x. The point x is usually the sampling position that is encountered when marching a ray through the density field, and is not associated with individual particle positions. I didn't find a more general source and am working with this paper anyway - the paper also shows the scattering equation and states that it has a combined emission+in-scattering term: http://www.vis.uni-stuttgart.de/~amentm ... eprint.pdf (cf. Eq. 3 on page 3). Hope I'm not misreading your question? My current thinking process when reading the paper you lastly mentioned is like the following:
0. However, the mental model behind radiance transfer is not one that considers the interaction of individual particles. - Yes, I know. 1. eq. (1) says that the contribution from the source radiance Lm(x', w) is proportional to the extinction coefficient sigma_t(x'). - I can understand the RTE of this form. The probability density with which light interacts (one of scattering/absorption/emission) with the medium at x' is proportional to the particle density, that is sigma_t(x'). 2. eq. (3) says that once an interaction happens, it is emission with probability (1 - \Lambda) and scattering with probability \Lambda. - I can understand the latter because the scattering albedo \Lambda is the probability that scattering happens out of some interaction. This is straightforward. However I can't understand the former. The original question: Why is volumetric emission proportional to the absorption coefficient? I can understand that absorption happens with probability (1 - \Lambda), but cannot understand that emission also happens with probability (1 - \Lambda). Now my question can be paraphrased as follows: Shouldn't the probability that emission happens be independent of the absorption coefficient? I'm sorry in case the above explanation confuses you more, and thank you for your kind, detailed replies. I found an interesting lecture script. http://www.ita.uni-heidelberg.de/~dulle ... pter_3.pdf Section 3.3 Eq 3.9. As I understand, ultimately it is a matter of definition motivated by thermodynamics of a special case. I imagine that the same particles that block light along some beam also emit light of their own. So it makes sense that emission and absorption strength have a common density-related prefactor. The reverse view from the point of importance being emitted into the scene seems more intuitive to me: importance particles have a chance to interact with particles of the medium in proportion to their cross section. If they interact, the medium transfers energy to the imaging sensor.
That lecture script says: "This is Kirchhoff's law. It says that a medium in thermal equilibrium can have any emissivity jν and extinction αν, as long as their ratio is the Planck function." Which sounds like they really CAN'T have any emissivity and extinction, but have to have them in a specific ratio. For example, for green light of wavelength 570 nm and 2000 degrees K, that ratio is (from the Planck function) 6537. So the extinction is relatively small in comparison. Later it says "If the temperature is constant along the ray, then the intensity will indeed exponentially approach [the Planck function]". Anyway, the real reason I am replying is so I can share this video of a "black" flame. The flame emits light but has no shadow (it seems fires don't have shadows), but can be made to have one and even appear black under single-frequency lighting: https://www.youtube.com/watch?v=5ZNNDA2WUSU This seems to contradict the notion that media has to absorb light in order to emit it.... unless the amount absorbed is very tiny, as suggested by the lecture. Ha! Now this comes a bit late, but I appreciate you posting this experiment. It is very cool indeed. I think, in contrast to the assumptions in that part of the lecture, the lamp is not a black body. At least, obviously, its emission spectrum does not follow Planck's law. Please don't ask when in reality the idealization as a black body is justified ... Somewhere I read that good emitters are generally also good absorbers, in the sense of material properties. The experiment displays this very well since the sodium absorbs a lot of the light, whereas normal air and a normal flame do not.
Hello, I have a problem calculating the position (the angle $\theta$) of a pendulum as a function of time. For example: $\theta (t)$ is a function of time which returns the angle made by the pendulum at a particular instant wrt its equilibrium position. So, $$ T = \dfrac 12 m l^2 \dot \theta^2 $$$$ U = - mgl \cos \theta $$ $$ L(\theta, \dot \theta) = \dfrac 12 m l^2 \dot \theta^2 + mgl \cos \theta $$ Using the Euler–Lagrange equation, $$ \dfrac d{dt} \left ( \dfrac{\partial L}{\partial \dot \theta}\right) - \dfrac{\partial L}{\partial \theta} = 0 $$ We get, $$ \boxed{\ddot \theta =- \dfrac gl \sin \theta} $$ which is the equation of motion. But most of the derivations I've seen/read go this way: $$ \ddot \theta = -\dfrac gl \theta \quad \dots \quad (\text{as } \sin \theta \approx \theta \text{ for } \theta \rightarrow 0) \tag{*} $$ $$ \theta (t) = \cos \left ( \sqrt{\dfrac gl} t \right) $$ Because it satisfies $(*)$ So, I've 2 questions here. Other possible solutions of the second-order differential equation exist, like $\theta (t) = \sin \left( \sqrt{\dfrac gl} t \right)$ or $e^{i \sqrt{\frac gl} t}$. So, why do we choose only that one? One would argue that a sinusoidal solution oscillates the way a pendulum does, so it makes sense to accept that one. But in the general case, when we solve the Lagrangian and get the equation of motion in differential form, there are tons of complex situations possible. How can you determine which kind of solution is needed? How can we solve the second-order differential equation $\ddot \theta = - \dfrac gl \sin \theta$ and get an exact formula for that? Thanks :)
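As a numerical aside (my own sketch, not part of the question), one can integrate the exact equation $\ddot\theta = -(g/l)\sin\theta$ with a generic ODE solver and compare it against the small-angle solution $\theta_0\cos(\sqrt{g/l}\,t)$; for small amplitudes the two agree closely:

```python
# Exact pendulum vs. small-angle approximation.
import numpy as np
from scipy.integrate import solve_ivp

g, l = 9.81, 1.0
theta0 = 0.1          # small initial angle (rad), released from rest

def rhs(t, y):
    theta, omega = y
    return [omega, -(g / l) * np.sin(theta)]

t_eval = np.linspace(0.0, 5.0, 200)
sol = solve_ivp(rhs, (0.0, 5.0), [theta0, 0.0], t_eval=t_eval, rtol=1e-9)

small_angle = theta0 * np.cos(np.sqrt(g / l) * t_eval)
# of order 1e-3 for theta0 = 0.1; grows quickly for larger amplitudes
print(np.max(np.abs(sol.y[0] - small_angle)))
```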
Young's modulus is defined as $Y = \frac{\sigma}{\epsilon}$, where $\sigma$ is the stress, defined as $\sigma = F/A$, and $\epsilon$ is the strain, defined as $\epsilon = \frac{\Delta L}{L_0}$. In this case we require the stress, thus re-arranging the first equation gives $\sigma = \epsilon Y$. After plugging in the definition of $\epsilon$ we then arrive at $\sigma = \frac{\Delta L}{L_0}Y$. Now as you already worked out, $\Delta L = L_0\alpha\Delta T$. Therefore after plugging this in, we arrive at the required result: $\sigma = \frac{L_0\alpha\Delta T}{L_0}Y = \alpha\Delta T Y$
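As a quick numerical illustration (my own, using typical textbook values for steel; the material constants are an assumption, not part of the original problem):

```python
# Thermal stress sigma = alpha * dT * Y for a fully constrained bar.
alpha = 1.2e-5      # 1/K, linear thermal expansion coefficient of steel (assumed)
Y     = 2.0e11      # Pa, Young's modulus of steel (assumed)
dT    = 50.0        # K, temperature rise

sigma = alpha * dT * Y
print(f"thermal stress ≈ {sigma/1e6:.0f} MPa")   # ≈ 120 MPa
```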
I'm convinced that radians are, at the very least, the most convenient unit for angles in mathematics and physics. In addition to this I suspect that they are the most fundamentally natural unit for angles. What I want to know is why this is so (or why not). I understand that using radians is useful in calculus involving trigonometric functions because there are no messy factors like $\pi/180$. I also understand that this is because $\sin(x) / x \rightarrow 1$ as $x \rightarrow 0$ when $x$ is in radians. But why does this mean radians are fundamentally more natural? What is mathematically wrong with these messy factors? So maybe it's nice and clean to pick a unit which makes $\frac{d}{dx} \sin x = \cos x$. But why not choose to swap it around, by putting the 'nice and clean' bit at the unit of angle measurement itself? Why not define 1 Angle as a full turn, then measure angles as a fraction of this full turn (in a similar way to measuring velocities as a fraction of the speed of light $c = 1$). Sure, you would have messy factors of $2 \pi$ in calculus but what's wrong with this mathematically? I think part of what I'm looking for is an explanation why the radius is the most important part of a circle. Could you not define another angle unit in a similar way to the radian, but with using the diameter instead of the radius? Also, if radians are the fundamentally natural unit, does this mean that not only $\pi \,\textrm{rad} = 180 ^\circ$, but also $\pi = 180 ^\circ$, that is $1\,\textrm{rad}=1$?
First of all, your transfer function has a pole at $z=-3$, which means your filter is unstable and you will probably see your output going to infinity if you process anything with it. However, in general, you can process an impulse with your difference equation and do an FFT (or freqz) of the resulting impulse response. That should give the same result as freqz of the transfer function (if your filter is not unstable). You can also process an arbitrary input and determine the frequency response of the filter by taking the ratio of the frequency response of the output and the frequency response of the input. If you do this by FFT then, depending on your filter, choosing a window function can be important. If you get strange results then try a triangular or Parzen window (they have less leakage than other common windows). Edit: Maybe, in the first place, the idea of using an unstable filter is to prevent the use of your method (filtering some samples) to compute the frequency response. Although the filter is unstable, it gives a perfectly fine frequency response that doesn't 'blow up', because the pole is not actually on or in the near vicinity of the unit circle, if you use freqz or if you do it analytically. E.g. $|H(e^{j\omega})|^2 = H(e^{j\omega}) H^*(e^{j\omega}) = H(e^{j\omega}) H(e^{-j\omega}) = \frac{b^2}{1+a^2+2a\cos(\omega)}$, where $b=1.5$ and $a=3$. You can't really tell from this expression that the underlying filter is unstable. But let's say that you 'like' the frequency response of this transfer function: is it then possible to find another transfer function which has exactly the same frequency response (amplitude response) but with pole(s) inside the unit circle?
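As a sketch of the comparison described above: assuming the filter implied by the answer's constants, $H(z) = 1.5/(1+3z^{-1})$ (i.e. b = [1.5], a = [1, 3]; the asker's actual coefficients may differ), freqz and the closed-form magnitude agree even though the filter is unstable:

import numpy as np
from scipy import signal

b, a = [1.5], [1.0, 3.0]              # assumed: H(z) = 1.5 / (1 + 3 z^-1), pole at z = -3
w, h = signal.freqz(b, a, worN=1024)  # frequency response evaluated on the unit circle

mag_freqz = np.abs(h)
mag_analytic = np.sqrt(1.5**2 / (1 + 3**2 + 2 * 3 * np.cos(w)))  # from |H|^2 in the answer

print(np.allclose(mag_freqz, mag_analytic))  # True: same magnitude response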
What is Electronics Introduction One answer to the question posed by the title might be: "The understanding that allows a designer to interconnect electrical components to perform electrical tasks." These tasks can involve measurement, amplification, moving and storing digital data, dissipating energy, operating motors, etc. Circuit theory uses the sinusoidal relations between components, voltages, current and time to describe how a circuit functions. The parameters we can measure directly are voltage and time. This means that electronics is the art of manipulating voltages to perform various electrical tasks. I want to present a different definition. That is the subject of this article. In digital electronics, a voltage difference can represent data, or be a measure of the energy used to operate an active component. All conductor geometries have capacitance. A voltage difference associated with a capacitance means there is stored energy. To change the voltages on a circuit means that energy must be moved or dissipated. These are the fundamental processes we must consider if we are going to handle fast data. Circuit theory is not based on moving energy, yet that is nature's only objective. Playing nature's game rather than fighting her makes for designs that perform well. We often do not realize what nature does. Fortunately, she is very consistent and we can eventually figure out what she does over and over. Storing or moving energy. There is a common misconception that signals are carried in conductors. Somehow this association crosses over to the idea that conductors carry both signals and energy. A few simple calculations can show that this is a false idea. Consider a 50-ohm transmission line carrying a 5-volt logic signal. The initial current at switch closure is 500 mA. A typical trace is a gram-mole of copper that has 6 × 10^23 copper atoms (Avogadro's number). Each atom can contribute one electron to current flow. Knowing the charge on an electron makes it easy to show that the average electron velocity for 500 mA is a few centimeters per second. What is even more interesting is that only a trillion electrons are involved in this current flow. This means that only one electron in a trillion carries the current. This also says that the magnetic field that moves energy is not located in the conductors. The only explanation that makes sense is that energy in the magnetic field must be located in the space between the two conductors. Conductors end up directing energy flow - not carrying the energy. The electric field in the conductor that causes current flow presents a similar picture. For a transmission line trace 5 mils above a ground plane, the electric field strength in the space under the trace is about 49,000 V/m. The electric field inside the conductor might be 0.1 V per meter. Energy in an electric field is proportional to field strength squared. The ratio of the square of field strengths in and near a conductor is about 2.4 × 10^11. It is safe to say that there is very little electric or magnetic field energy in a trace or conducting plane. Since the energy is present and it is not in the conductors it must be in the space between the conductors. This is true for sine waves or square waves at all frequencies including dc. This one idea is not often discussed in circuit theory. This one idea solves most interference problems. This one idea is at the heart of a good circuit board layout.
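A rough back-of-the-envelope sketch of the drift-velocity argument, in Python. The trace cross-section and copper's free-electron density are my own assumed values (a 10 mil, 1 oz trace, one conduction electron per atom), not figures from the article; the exact number depends strongly on these assumptions, but it comes out at well under a metre per second either way, which is the point being made:

q = 1.602e-19           # electron charge, C
n = 8.5e28              # conduction electrons per m^3 in copper (assumed, ~1 per atom)
A = 0.254e-3 * 35e-6    # trace cross-section, m^2 (assumed: 10 mil wide, 1 oz copper)
I = 0.5                 # current, A (the article's switch-closure figure)

v = I / (n * q * A)     # from I = n * q * v * A
print(f"average drift velocity ~ {v * 100:.2f} cm/s")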
If the energy that represents information is carried in spaces it makes sense that we must keep these spaces free from interfering fields. The path should also control the characteristic impedance so there are controlled reflections. What we really need to do is supply a smooth path for logic energy flow. Field Theory In the 1860s Maxwell presented his famous field equations to the world. It is hard to believe that he did this when there were no components or even wires as we know them today. There were no circuits and no circuit theory. There were no oscilloscopes. Maxwell’s Equations are considered to be one of the greatest achievements in science. These equations do not include voltage or impedance; only electric and magnetic field intensities. We have difficulty using these field equations even in a world of advanced computing power. Engineers are very creative and they have found ways to go around most of the mathematics. The methods that have evolved are called circuit theory. When new problems occur engineers often invent a new explanation rather than use the fundamentals of physics. The idea that spaces need to be designed is simply not accepted by the engineering community. Trace spacing that controls characteristic impedance is understood but via placement is not recognized as an issue. The role of decoupling capacitors is also not fully appreciated. The way energy moves in a transmission line is usually not fully explained. I will give you a simple example. Consider that a step voltage is applied to a transmission line. The leading edge converts half of the arriving energy to static energy. Behind the leading edge there is both energy stored in the line capacitance and energy in motion. A voltage measurement cannot separate energy storage from energy motion. There is a further philosophical problem. The fields are stationary behind the wave front. We cannot detect any motion yet half the electric and magnetic fields are moving energy at the speed of light. The fact that a steady pair of fields when coupled together and coupled to conductors moves energy is also not easy to accept. After a reflection at an open circuit, energy continues to move forward in the line from the source. Energy is not reflected or dissipated at an open circuit. The leading edge of the reflected wave converts the arriving energy into electric field thus doubling the voltage. The reflected wave moves in a direction opposite to energy flow. Assuming no losses, the wave action we have described is part of an oscillation. When the return wave reaches the source no more energy is taken from the energy source and we have an oscillator. It takes two round trips of a leading edge to complete one cycle. At one point in the cycle all the stored energy is electric and in another part of the cycle the stored energy is all magnetic. This has to be the case, because the line's capacitance and inductance in parallel form the familiar LC tank circuit. Fast Circuit Boards and Energy Management Electronics is in a constant state of flux. Today, energy is moved in fast logic structures and often intentionally radiated to receiving devices. The trend is for more data and faster operations. As you can tell from my writing, I think that there can be many improvements in what we call electronics. If you wish to read more about the problems of designing modern circuit boards, read my latest book. The name of the book is the title to this section. It is published by John Wiley and Sons.
My definition of electronics is: The smooth flow of electromagnetic field energy in conducting structures to perform specific electrical tasks. This energy should not leave the circuit except at planned points. Note that a component is a conducting structure. Next post by Ralph Morrison: Voltage - A Close Look The electric field inside the conductor might be 0.1 V per meter. Hmm. This caught my eye, since the electric field inside a perfect conductor is supposed to be zero.... This is copper, which has a conductivity sigma of around 6 × 10^7 S/m, and with \( J = \sigma E = 6\times10^7 \text{S/m} \cdot 0.1 \text{V/m} = 6\times10^6 \text{A/m}^2 \). A 10 mil trace of 1 oz copper is 0.254 mm × 34.79 μm = 8.8 × 10^-9 m^2, which multiplied by the current density J gives 0.05 A. 5 V / 50 ohm = 0.1 A. Yeah, not far off. Sorry I doubted you. :-) I'm no spring chicken, but this is a new paradigm to me and I am inspired to investigate further. I look forward to reading your publications. This is a very interesting way of looking at the principles of electronic circuits. A very different viewpoint than what I received as an EE student many years ago. I was well-trained in circuit theory and the fundamentals of electromagnetics, but left most of even that behind--to my detriment--as I plunged into the nascent field of digital electronics. I would be interested in more articles by this author. Hello Ralph Morrison, thanks for your writeup. It sounds interesting! For me it's a little hard to understand it all; I think that has to do with two points: I have only the very basic theoretical education about electronics/circuits and so on (like 'this is a resistor' and 'this is a capacitor' - that is all; all other things are self-taught, and so mostly practical and not theoretical ;-) ), and English is not my mother language, so sometimes it's hard to understand the meaning behind the words ;-) For this I think some simple graphics that complement the text would be helpful :-) I will re-read your post in some days, and hopefully on the second reading things will get clearer ;-) Sunny greetings, Stefan As an inexperienced engineer, I totally agree that an understanding of this concept of what electronics is can be critical to designing electronic products that are mass manufactured, must be long-term reliable, and must be developed as fast as possible. I have found myself in very uncomfortable positions in which my understanding and preparation in electronics was not enough to solve electromagnetic issues, and I had to dive deep into books and rely on almost "magical" solutions. However, understanding how energy is transmitted and where it resides opens up the real world of how the PCB and the circuits mounted on it interact with each other. Thanks for such a nice article, definitely looking at your books.
I've heard rumors of covert acoustic room impulse response measurement being done during concerts. You would need the people in the room if you want to use the impulse response for artificial reverberation and you don't want the room to sound overly echoey like there's no-one in it. As received at the microphone, the signal would be a rather quiet noise-like test signal, and the music, the sounds from the people, and equipment noise would be the noise. Because the impulse response is in practice only a few seconds long, a repetitive test signal can be used. The recording would be accumulated into a circular buffer. Because noise has on average zero correlation with what has already been recorded in the buffer, its root mean square amplitude will grow on average as $\sqrt{N}$ for $N$ recording cycles, while the signal's root mean square amplitude will grow as $N$. At the end of the session, deconvolution is used to get from the test signal response to the impulse response. For a spectrally flat test signal such as a maximum length sequence (MLS), it suffices to convolve with the reverse of the test signal. For a buffer of $M$ sample points the deconvolution gain is $M$ for the signal and $\sqrt{M}$ for the noise. Combining the gains from repetition and deconvolution and assuming a sampling frequency of 44.1 kHz, we get a recording time $t$ dependent gain difference of $20\log_{10}\left(\frac{44.1\text{ kHz }\times\ t}{\sqrt{44.1\text{ kHz }\times\ t}}\right)$ dB between the two: Figure 1. Signal-to-noise ratio (SNR) gain for room impulse response measurement as a function of recording time. Guessing some numbers for an "easy" scenario, if we start with the signal at a sound pressure level (SPL) of 30 dB and the noise at 60 dB SPL which is a starting SNR of -30 dB, in 1 hour of recording we get a SNR of 52 dB, growing very slowly after that, by 10 dB for every 10-times increase of the recording time. 52 dB is already useful for artificial reverberation.
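A small Python sketch of the gain formula above, using the 44.1 kHz sampling rate and the assumed -30 dB starting SNR of the "easy" scenario:

import numpy as np

fs = 44100.0        # sampling rate, Hz
start_snr = -30.0   # assumed starting SNR, dB (signal 30 dB SPL, noise 60 dB SPL)

for t in (60, 600, 3600, 36000):                      # recording time, seconds
    gain = 20 * np.log10((fs * t) / np.sqrt(fs * t))  # = 10 * log10(fs * t)
    print(f"t = {t:6d} s: gain = {gain:5.1f} dB, SNR = {start_snr + gain:5.1f} dB")
# t = 3600 s gives a gain of about 82 dB, i.e. an SNR of about 52 dB, as stated above.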
I was reading the following notes on tensor products: http://www.math.uconn.edu/~kconrad/blurbs/linmultialg/tensorprod.pdf At some point (p. 39) there is the following example. In the last paragraph, he says that using exterior powers it can be proved that if $I\oplus I\simeq S^2$ as $S$-modules, then $I\otimes_S I\simeq S$ as $S$-modules. I do not know a lot about exterior powers (just the definition), but I would like to know what property is being used here and what isomorphism he obtains. Can you give me some hints? As a matter of fact, I think it really proves that $I\otimes_S I$ is isomorphic to $S\otimes_S S\simeq S$, but I cannot construct a surjective map from $S\otimes_S S$ to $I\otimes_S I$.
If $M$ is a Riemannian manifold and $f:M\to \mathbb{R}$ a Morse-Smale function (which is just a rigorous way to say "generic smooth function"), then Morse theory essentially recovers the manifold itself from relatively basic information about the gradient flow diffeomorphisms of $f$. To sketch briefly: for each pair of critical points $p$ and $q$ of $f$ (i.e., fixed points of the diffeomorphisms), we can consider the subset $S_{p,q}$ of $M$ that is attracted to $p$ and repelled from $q$ under the diffeomorphisms. ("Repelled from $q$" just means attracted to $q$ under the inverse of the diffeomorphisms.) These $S_{p,q}$ essentially constitute a decomposition of $M$ as a cell complex. If you just want the homology groups, then you can get away with just considering pairs of critical points whose indices differ by one, and if you just want the Euler characteristic, then you only need local information around each critical point (to define its index). The index of a critical point $p$ is the number of negative eigenvalues of the Hessian (which does not actually depend on the coordinates chosen or even the metric), and it is also the dimension of the submanifold of points in any small neighborhood of $p$ that are attracted to $p$ under the gradient flow diffeomorphisms. I want to know how much of that can be done if we don't have $f:M\to \mathbb{R}$, but just some transformation $F:M\to M$ homotopic to the identity, and if $M$ isn't necessarily even a manifold (but probably compact and metrizable). Given information about the fixed points of $F$ (or other dynamical information?), how much of the topology of $M$ can be recovered? (Can we still try to define the "index" of a fixed point of $F$ by looking at the set of points that are attracted to it as $F$ is iterated?) Some thoughts: For some $M$, there might well be maps $F$ that have no fixed points at all. If the Euler characteristic can be recovered from the fixed points of $F$, then such $M$ would have to have an Euler characteristic of zero. (Is that the case??) So the fixed points of $F$ are not very useful in such cases, but are there more general dynamical features of $F$ that relate to the topology of $M$? Some $M$ might admit perfectly continuous, even smooth, $F$ with chaotic dynamics. If $F$ has a unique fixed point $x_0$ and for every $x\in M$, $F^n(x)\to x_0$, then $M$ is contractible (recalling our assumption that $F$ is homotopic to the identity). Can we get better results by considering an even more restrictive class of transformations? Of course, I don't want to go as far as to say that $F$ belongs to some group of gradient flow diffeomorphisms on a manifold, but maybe we can try to relax that by supposing there exists $f:M\to \mathbb{R}$ such that $f\circ F \geq f$. (That condition makes sense even if $M$ is not a manifold.)
Calculating analytically solutions to differential equations can be hard and sometimes even impossible. Methods exist for special types of ODEs. One method is to solve the ODE by separation of variables. The idea of substitution is to replace some variable so that the resulting equation has the form of such a special type where a solution exists. In this scenario, I want to look at homogeneous ODEs which have the form\begin{equation} y'(x) = F\left( \frac{y(x)}{x} \right). \label{eq:homogeneousODE} \end{equation} They can be solved by replacing \(z=\frac{y}{x}\) followed by separation of variables. Equations of this kind have the special property of being invariant against uniform scaling (\(y \rightarrow \alpha \cdot y_1, x \rightarrow \alpha \cdot x_1\)):\begin{align*} \frac{\alpha \cdot \partial y_1}{\alpha \cdot \partial x_1} &= F\left( \frac{\alpha \cdot y_1}{\alpha \cdot x_1} \right) \\ \frac{\partial y_1}{\partial x_1} &= F\left( \frac{y_1}{x_1} \right) \end{align*} Before analysing what this means, I want to introduce the example from the corresponding Lecture 4: First-order Substitution Methods (MIT OpenCourseWare), which is the source of this article. I derive the substitution process for this example later. Imagine a small island with a lighthouse built on it. In the surrounding sea is a drug boat which tries to sail silently around the sea raising no attention. But the lighthouse spots the drug boat and targets the boat with its light ray. Panic-stricken the boat tries to escape the light ray. To introduce some mathematics to the situation, we assume that the boat always tries to escape the light ray in a 45° angle. Of course, the lighthouse reacts accordingly and traces the boat back. The following image depicts the situation. We now want to know the boat's curve, when the light ray always follows the boat directly and the boat, in turn, evades in a 45° angle. We don't model the situation as a parametric curve where the position would depend on time (so no \((x(t), y(t))\) here). This also means that we don't set the velocity of the curve explicitly. Instead, the boat position just depends on the angle of the current light ray. Mathematically, this means that in the boat's curve along the x-y-plane the tangent of the curve is always enclosed in a 45° angle with the light ray crossing the boat's position. \(\alpha\) is the angle of the light ray and when we assume that the lighthouse is placed at the origin so that the slope is just given by the fraction \(\frac{y}{x}\), \(\alpha\) is simply calculated as\begin{equation*} \tan(\alpha) = \frac{y}{x}. \end{equation*} Now we can define the tangent of the boat's curve, which is given by its slope value\begin{equation} y'(x) = f(x,y(x)) = \tan(\alpha + 45°) = \frac{\tan(\alpha) + \tan(45°)}{1 - \tan(\alpha) \cdot \tan(45°)} = \frac{\frac{y(x)}{x} + 1}{1 - \frac{y(x)}{x}} = \frac{x + y(x)}{x - y(x)}. \label{eq:slopeBoat} \end{equation} In the first simplification step, a trigonometric addition formula is used. This again can be simplified so that the result fulfils the definition of \eqref{eq:homogeneousODE}. This means that the ODE can be solved by separation of variables if the substitution \(z(x) = \frac{y(x)}{x}\) is made. We want to replace \(y'(x)\), so we first need to calculate the derivative of the substitution equation\begin{align*} y(x) &= z(x) \cdot x \\ y'(x) &= \frac{\partial y(x)}{\partial x} = z'(x) \cdot x + z(x). 
\end{align*} Note that we calculate the derivative with respect to \(x\) and not \(y(x)\) (which is a function depending on \(x\) itself). Therefore the product rule was used. Next we substitute and try to separate variables.\begin{align*} y'(x) &= \frac{x + y(x)}{x - y(x)} \\ z'(x) \cdot x + z(x) &= \frac{x + z(x) \cdot x}{x - z(x) \cdot x} \\ \frac{\partial z(x)}{\partial x} \cdot x &= \frac{1 + z(x)}{1 - z(x)} - z(x) = \frac{1 + z(x)}{1 - z(x)} - \frac{\left( 1-z(x) \right) \cdot z(x)}{1-z(x)} = \frac{1 + z(x) - z(x) + z^2(x)}{1 - z(x)} \\ \frac{\partial z(x)}{\partial x} &= \frac{1 + z^2(x)}{1 - z(x)} \cdot \frac{1}{x} \\ \frac{1 - z(x)}{1 + z^2(x)} \partial z(x) &= \frac{1}{x} \cdot \partial x \\ \int \frac{1 - z(x)}{1 + z^2(x)} \, \mathrm{d} z(x) &= \int \frac{1}{x} \, \mathrm{d} x \\ \tan^{-1}(z(x)) - \frac{1}{2} \cdot \ln \left( z^2(x)+1 \right) &= \ln(x) + C \\ 0 &= -\tan^{-1}\left(z(x)\right) + \frac{1}{2} \cdot \ln \left( z^2(x)+1 \right) + \ln(x) + C \\ 0 &= -\tan^{-1}\left(z(x)\right) + \ln \left( \sqrt{z^2(x) + 1} \cdot x \right) + C \\ 0 &= -\tan^{-1}\left(z(x)\right) + \ln \left( \sqrt{z^2(x) \cdot x^2 + x^2} \right) + C \\ \end{align*} (I used the computer for the integration step) We now have a solution, but we first need to substitute back to get rid of the \(z(x) = \frac{y(x)}{x}\)\begin{align*} 0 &= -\tan^{-1}\left(\frac{y(x)}{x}\right) + \ln \left( \sqrt{\frac{y^2(x)}{x^2} \cdot x^2 + x^2} \right) + C \\ 0 &= -\tan^{-1}\left(\frac{y(x)}{x}\right) + \ln \left( \sqrt{y^2(x) + x^2} \right) + C \end{align*} Next, I want to set \(C\) to the starting position of the boat by replacing \(x = x_0\) and \(y(x) = y_0\)\begin{align*} C &= \tan^{-1}\left(\frac{y_0}{x_0}\right) - \ln \left( \sqrt{y_0^2 + x_0^2} \right) \end{align*} The final result is then an implicit function\begin{equation} 0 = -\tan^{-1}\left( \frac{y}{x} \right) + \ln\left( \sqrt{x^2 + y^2} \right) + \tan^{-1}\left( \frac{y_0}{x_0} \right) - \ln \left( \sqrt{x_0^2 + y_0^2} \right). \label{eq:curveBoat} \end{equation} So, we now have a function where we plug in the starting point \((x_0,y_0)^T\) of the boat and then check every relevant value for \(y\) and \(x\) where the equation is fulfilled. In total, this results in the boat's curve. Since the boat's position always depends on the current light ray from the lighthouse, you can think of the curve being defined by the ray. To clarify this aspect, you can play around with the slope of the ray in the following animation. As you can see, the boat's curve originates by rotating the light ray. Also, note that there are actually two curves. This is because we can enclose a 45° angle on both sides of the light ray. The right curve encloses the angle with the top right side of the line and the left curve encloses with the bottom right side of the line. Actually, one starting point defines already both curves, but you may want to depict the situation like two drug boats starting at symmetric positions. I marked the position \((x_c,y_c) = (1,2)\) as “angle checkpoint” to see if the enclosed angle is indeed 45°. To check, we first need the angle of the light ray which is just the angle \(\alpha\) defined above for the given coordinates\begin{equation*} \phi_{ray} = \tan^{-1}\left( \frac{y_c}{x_c} \right) = \tan^{-1}\left( \frac{2}{1} \right) = 63.43°. \end{equation*} For the angle of the boat, we need its tangent at that position which is given by its slope value. 
So we only need to plug the coordinates into the ODE previously defined\begin{equation*} f(x_c,y_c) = \frac{1 + 2}{1 - 2} = -3. \end{equation*} Forwarding this to \(\tan^{-1}\) results in the angle of the tangent\begin{equation*} \phi_{tangent} = \tan^{-1}\left(-3\right) + 180° = -71.57° + 180° = 108.43°. \end{equation*} I added 180° so that the resulting angle is positive (enclosed angles can only be in the range \(\left[0°;180°\right[\)). Calculating the difference \( \phi_{tangent} - \phi_{ray} = 108.43° - 63.43° = 45°\) shows indeed the desired result. Of course, this is not only true at the marked point, but rather at any point, because that is the way we defined it in \eqref{eq:slopeBoat}. Another way of visualising \eqref{eq:curveBoat} is to switch to the polar coordinate system by using \(\theta = \tan^{-1}\left( \frac{y}{x} \right)\) respectively \( r = \sqrt{x^2 + y^2} \)\begin{align*} 0 &= -\theta + \ln\left( r \right) + \theta_0 - \ln \left( r_0 \right) \\ \ln \left( \frac{r_0}{r} \right) &= -\theta + \theta_0 \\ \frac{r_0}{r} &= e^{-\theta + \theta_0}, \end{align*} and solve for \(r\)\begin{equation} r\left( \theta \right) = \frac{r_0}{e^{\theta_0 -\theta}}. \label{eq:curveBoatPolar} \end{equation} We can now visualize this function using a polar plot where we move around a (unit) circle and adjust the radius according to \eqref{eq:curveBoatPolar}. The result is a graph which looks like a spiral. Beginning from the starting point, the light ray forces the boat to move counter-clockwise in a spiral with increasing distance from the island. So, without considering physics (infinite light ray, ...) and realistic human behaviour (escaping strategy of the boat, ...), this cat-and-mouse game lasts forever. Next, I want to analyse how the curve varies when the starting position of the boat changes. Again, each position of the curve is just given by the corresponding light ray crossing the same position. The curve in total is, therefore, the result of a complete rotation (or multiple, like in the polar plot) of the light ray (like above, just with all possible slope values). In the next animation, you can change the starting position manually. Do you remember the property of scale invariance for a homogeneous ODE introduced in the beginning? Let's have a look at what this means for the current problem. For this, it helps to analyse the different slope values which the equation \(f(x,y)\) produces. This is usually done via a vector field. At sampled positions (\(x_s,y_s\)) in the grid, a vector \((1, f(x_s,y_s))\) is drawn which points in the direction of the slope value at that position (here, the vectors are normalized). So the vector is just a visualization technique to show the value of \(f(x,y)\). Additionally, I added some isoclines: at all points on one such line the slope value is identical. This means that all vectors along a line have the same direction (easily checked on the horizontal line). You can check this if you move the boat along the line \(y=x\). This will result in different curves, but the tangent of the boat's starting point is always the same (vertical). Actually, this is already the property of scale invariance: starting from one point, you can scale your point (= moving along an isocline) and always get the same slope value.
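Here is a small numeric sketch (my own, in Python/NumPy, not part of the original article) that traces the polar solution \(r(\theta) = r_0 e^{\theta - \theta_0}\) and re-checks the 45° property at the marked point \((1, 2)\):

import numpy as np

x0, y0 = 1.0, 2.0                                  # starting point of the boat
r0, th0 = np.hypot(x0, y0), np.arctan2(y0, x0)

theta = np.linspace(th0, th0 + 4 * np.pi, 400)     # two full turns of the light ray
r = r0 * np.exp(theta - th0)                       # logarithmic spiral r = r0 * e^(theta - theta0)
x, y = r * np.cos(theta), r * np.sin(theta)        # boat curve, ready for plotting

phi_ray = np.degrees(np.arctan2(y0, x0))                           # light-ray angle at (1, 2)
phi_tangent = np.degrees(np.arctan((x0 + y0) / (x0 - y0))) + 180   # tangent angle from the ODE
print(phi_ray, phi_tangent, phi_tangent - phi_ray)                 # 63.43..., 108.43..., 45.0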
I am basically very new to this image processing field. I am presently working on edge detection on colour images. While learning the basics of edges and edge detection in images, I encountered image derivatives and spatial masks for the corresponding operations. That's where I happened to learn about the Prewitt and Sobel operators. I cannot understand the logic behind the construction of these masks and how they detect lines. Can someone help me please? A first rationale is to be very short, as there was a time when computing on images was expensive. Then, a contour or an edge often presents a fast variation in image intensities, which can be enhanced by derivatives. Sobel filters emulate such derivatives in one direction, and slightly average pixels in the complementary direction, to smooth small variations or noise. One direction implements the shortest possible centered 1D discrete derivative: $$\begin{bmatrix} -1 &0 &1 \end{bmatrix} $$ to detect variations across lines, the other the shortest non-trivial Pascal/Gaussian smoothing $$\begin{bmatrix} 1&2&1 \end{bmatrix}^T $$ to smooth along columns, resulting in, for instance: $$ \begin{bmatrix} 1&2&1 \end{bmatrix}^T\cdot \begin{bmatrix} -1 &0 &1 \end{bmatrix} $$ or $$ \begin{bmatrix} -1 &0 &1 \\ -2 &0 &2 \\ -1 &0 &1 \\ \end{bmatrix} $$ As you can see, this only involves dyadic numbers, so it can be implemented with adds and binary shifts. Of course, the 3-point derivative often has an additional $1/2$ factor: $$\begin{bmatrix} -1/2 &0 &1/2 \end{bmatrix} $$ to get the appropriate scale factor, and the Pascal smoother has a $1/4$ factor to have its coefficients sum to one $$\begin{bmatrix} 1/4&1/2&1/4 \end{bmatrix} $$ but the resulting global scaling of $1/2\times 1/4$ does not change the edge detection power for such linear filters.
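A minimal Python sketch of this construction: build the Sobel x-kernel as the outer product of the smoothing and derivative vectors and apply it to a synthetic vertical step edge (SciPy is assumed to be available; the test image is made up purely for illustration):

import numpy as np
from scipy.ndimage import convolve

smooth = np.array([1, 2, 1])        # Pascal/Gaussian smoothing along one direction
deriv = np.array([-1, 0, 1])        # shortest centered discrete derivative
sobel_x = np.outer(smooth, deriv)   # [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]

img = np.zeros((8, 8))
img[:, 4:] = 1.0                    # synthetic image: dark left half, bright right half

edges = convolve(img, sobel_x.astype(float), mode='nearest')
print(np.abs(edges).max(axis=0))    # the response peaks at the columns next to the edge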
If you need to insert cross-references to numbered elements in the document (like equations, sections and figures), there are commands to automate it in LaTeX. This article explains how. Below you can see a simple example of images cross referenced: \section{Introduction} \label{introduction} This is an introductory paragraph with some dummy text. This section will be later referenced. \begin{figure}[hbt!] \centering \includegraphics[width=0.3\linewidth]{lion-logo.png} \caption{This image will be referenced below} \label{fig:lion} \end{figure} You can reference images, for instance, figure \ref{fig:lion} shows the red lion logo. The command \label{ } is used to set an identifier that is later used in the command \ref{ } to set the reference. Below is an example of how to reference a section \section{Introduction} \label{introduction} This is an introductory paragraph with some dummy text. This section will be later referenced. \begin{figure}[h] \centering \includegraphics[width=0.3\linewidth]{lion-logo.png} \caption{This image will be referenced below} \label{fig:lion} \end{figure} You can reference images, for instance, the image \ref{fig:lion} shows the red lion logo. \section{Math references} \label{mathrefs} As mentioned in section \ref{introduction}, different elements can be referenced within a document Again, the commands \label and \ref are used for references. The label can be set either right before or after the \section statement. This also works on chapters, subsections and subsubsections. See Sections and chapters. In the introduction an example of referencing an image was shown; below, cross-referencing equations is presented. \section{Math references} \label{mathrefs} As mentioned in section \ref{introduction}, different elements can be referenced within a document \subsection{powers series} \label{subsection} \begin{equation} \label{eq:1} \sum_{i=0}^{\infty} a_i x^i \end{equation} The equation \ref{eq:1} is a typical power series. For further and more flexible examples with labels and references see Elements usually are referenced by a number assigned to them, but if you need to, you can insert the page where they appear. \section{Math references} \label{mathrefs} As mentioned in section \ref{introduction}, different elements can be referenced within a document \subsection{powers series} \label{subsection} \begin{equation} \label{eq:1} \sum_{i=0}^{\infty} a_i x^i \end{equation} The equation \ref{eq:1} is a typical power series. \section{Last section} In the subsection \ref{subsection} at the page \pageref{eq:1} an example of a power series was presented. The command \pageref will insert the page where the element whose label is used appears. In the example above, equation 1. This command can be used with all other numbered elements mentioned in this article. On Overleaf cross references work immediately, but for cross references to work properly in your local LaTeX distribution you must compile your document twice. There's also a command that can automatically run the passes needed for all the references to work. For instance, if your document is saved as main.tex, running latexmk -pdf main.tex generates the file main.pdf with all cross-references working. To change the output format use -dvi or -ps. For more information see:
A (weak) composition of a positive integer $n$ into $k$ parts is an ordered sequence of nonnegative integers $(a_1, a_2, \ldots, a_k)$ such that $ \sum_{i=1}^k a_i = n $. I am interested in the case when the parts are bounded: $a_i\in\{0, 1, \ldots, j-1\}$. The number of such compositions satisfies the two-variable recurrence \begin{equation} \kappa(n,j,k)=\sum_{i=0}^{j-1}\kappa(n-i,j,k-1) \end{equation} and can be expressed as \begin{equation} \kappa(n,j,k) = \sum_{s\geq0} (-1)^s {k \choose s} {k-sj+n-1 \choose k-1} \end{equation} [R. P. Stanley, Enumerative Combinatorics, Vol. I, p. 307]. Does anyone know how to find the asymptotics of $\kappa(n,j,k)$ when $k\to\infty$, $n\sim\lambda k$, $\lambda\in(0,j-1)$, and $j$ is fixed? In particular, I would like to determine \begin{equation} \lim_{k\to\infty,\ n = \lambda k} (\kappa(n,j,k))^{\frac{1}{k}} =\ ? \end{equation} Up to a factor of $j^k$, what you're asking for is the probability $P$ that a $k$-step random walk with steps chosen uniformly from $S = \{0, 1, ..., j-1\}$ lands on $n = \lambda k$. This is the probability of return to the origin at time $k$ of the (typically biased) random walk with steps chosen uniformly from $S - \lambda = \{-\lambda, -\lambda+1, \dots, -\lambda + j-1\}$. If we choose some $t$ and bias this random walk so that $-\lambda+r$ has probability $t^{-\lambda + r}/Z_t$, we are multiplying the probability of return to the origin by $(j/Z_t)^{k}$. But we can choose $t$ so that this new random walk is unbiased, in which case its probability of return to the origin is roughly proportional to $k^{-1/2}$. So, $P \sim k^{-1/2}j^{-k}Z_t^k$, and the number you're interested in is roughly $$k^{-1/2} Z_t^k.$$ All that remains is to find the value of $t$ such that this new random walk is unbiased, but this is the value that maximizes the probability of return to the origin, i.e. the value for which $$Z_t = \sum_{r=0}^{j-1} t^{-\lambda + r} = t^{-\lambda}\frac{t^j-1}{t-1}$$ is minimal. So the number you're interested in is roughly $$k^{-1/2} \rho^k,$$ where $$\rho = \min_t t^{-\lambda}\frac{t^j-1}{t-1}.$$ You can get an explicit upper bound on $\rho$ by noting that, by convexity of the summands, $$\frac1j Z_t = \frac1j \sum_{r=0}^{j-1} t^{-\lambda + r} \leq \frac{1}{2}(t^{-\lambda} + t^{-\lambda+j-1}).$$ The optimal value of $t$ is $((j-1)/\lambda-1)^{-1/(j-1)}$, which gives $$\rho \leq \frac{j}{2} \left( \left(\alpha^{-1}-1\right)^{\alpha} + \left(\alpha^{-1}-1\right)^{-(1-\alpha)} \right) = \frac{j}{2} \alpha^{-\alpha}(1-\alpha)^{-(1-\alpha)},$$ where $\alpha = \lambda/(j-1).$ Such problems can be solved routinely using the methods explained in the book by me and Robin Pemantle, Analytic Combinatorics in Several Variables (preprint version online at our websites, book published by Cambridge in 2013). The bivariate generating function for $\kappa(n,j,k)$ is $F_j(x,y) = 1/(1 - yf_j(x))$ where $f_j(x) = \sum_{i=0}^{j-1} x^i$. This simplifies to a nice rational function - use the standard smooth point formula for Riordan arrays (Section 12.2 as I recall) to get the desired results. I won't work out all details, because I am on vacation, but this is an absolutely standard application of our basic theory.
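A quick numerical sanity check (my own sketch in Python, not taken from either answer): compute $\kappa(n,j,k)$ from Stanley's alternating sum and compare $\kappa^{1/k}$ with $\rho$ obtained by a crude grid minimization of $Z_t$. Because of the $k^{-1/2}$ prefactor, $\kappa^{1/k}$ approaches $\rho$ from below as $k$ grows:

from math import comb

def kappa(n, j, k):
    # Number of compositions of n into k parts from {0, ..., j-1} (Stanley's formula)
    total, s = 0, 0
    while s <= k and s * j <= n:
        total += (-1) ** s * comb(k, s) * comb(k - s * j + n - 1, k - 1)
        s += 1
    return total

def rho(lam, j, steps=200000):
    # min over t > 0 of Z_t = sum_{r=0}^{j-1} t^(r - lam), on a crude grid
    return min(sum(t ** (r - lam) for r in range(j))
               for t in (0.01 + 10.0 * i / steps for i in range(1, steps)))

j, lam, k = 3, 0.7, 60
n = round(lam * k)
print(kappa(n, j, k) ** (1 / k))   # growth-rate estimate from the exact count
print(rho(lam, j))                 # predicted limit rho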
Quantitative Modeling for Algorithmic Traders – Primer
Quantitative Modeling techniques enable traders to mathematically identify what makes data “tick” – no pun intended 🙂. They rely heavily on the following core attributes of any sample data under study:
Expectation – The mean or average value of the sample
Variance – The observed spread of the sample
Standard Deviation – The observed deviation from the sample’s mean
Covariance – The linear association of two data samples
Correlation – Solves the dimensionality problem in Covariance
Why a dedicated primer on Quantitative Modeling? Understanding how to use the five core attributes listed above in practice will enable you to:
Construct diversified DARWIN portfolios using Darwinex’ proprietary Analytical Toolkit.
Conduct mean-variance analysis for validating your DARWIN portfolio’s composition.
Build a solid foundation for implementing more sophisticated quantitative modeling techniques.
Potentially improve the robustness of trading strategies deployed across multiple assets.
Hence, a post dedicated to defining these core attributes, with practical examples in R (a statistical computing language), should hopefully serve as good reference material to accompany existing and future posts. Why R? It facilitates the analysis of large price datasets in short periods of time. Calculations that would otherwise require multiple lines of code in other languages can be done much faster, as R has a mature base of libraries for many quantitative finance applications. It’s free to download here. * Sample data (EUR/USD and GBP/USD End-of-Day Adjusted Close Price) used in this post was obtained from Yahoo, where it is freely available to the public. Before progressing any further, we need to download EUR/USD and GBP/USD sample data from Yahoo Finance (time period: January 01 to March 31, 2017). In R, this can be achieved with the following code:
library(quantmod)
getSymbols("EUR=X",src="yahoo",from="2017-01-01", to="2017-03-31")
getSymbols("GBP=X",src="yahoo",from="2017-01-01", to="2017-03-31")
Note: “EUR=X” and “GBP=X” provided by Yahoo are in terms of US Dollars, i.e. the data represents USD/EUR and USD/GBP respectively. Hence, we will need to convert base currencies first. To achieve this, we will first extract the Adjusted Close Price from each dataset, convert base currency and merge both into a new data frame for use later:
eurAdj = unclass(`EUR=X`$`EUR=X.Adjusted`)
# Convert to EUR/USD
eurAdj = 1/eurAdj
gbpAdj <- unclass(`GBP=X`$`GBP=X.Adjusted`)
# Convert to GBP/USD
gbpAdj <- 1/gbpAdj
# Extract EUR dates for plotting later.
eurDates = index(`EUR=X`)
# Create merged data frame.
eurgbp_merged <- data.frame(eurAdj,gbpAdj)
Finally, we merge the prices and dates to form one single dataframe, for use in the remainder of this post:
eurgbp_merged = data.frame(eurDates, eurgbp_merged)
colnames(eurgbp_merged) = c("Dates", "EURUSD", "GBPUSD")
The mean μ of a price series is its average value. It is calculated by adding all elements of the series, then dividing this sum by the total number of elements in the series. Mathematically, the mean μ of a price series P, where elements p ∈ P, with n number of elements in P, is expressed as: \(μ = E(p) = \frac{1}{n} (p_1 + p_2 + p_3 + … + p_n)\) In R, the mean of a sample can be calculated using the mean() function.
For example, to calculate the mean price observed in our sample of EUR/USD data, ranging from January 01 to March 31, 2017, we execute the following code to arrive at mean 1.065407:
mean(eurgbp_merged$EURUSD)
[1] 1.065407
Using the plotly library in R, here’s the mean overlaid graphically on this EUR/USD sample:
library(plotly)
plot_ly(name="EUR/USD Price", x = eurgbp_merged$Dates, y = as.numeric(eurgbp_merged$EURUSD), type="scatter", mode="lines") %>%
  add_trace(name="EUR/USD Mean", y=(as.numeric(mean(eurgbp_merged$EURUSD))), mode="lines")
The variance σ² of a price series is simply the mean, or expectation, of the square of how much the price deviates from the mean. It characterises the range of movement around the mean, or “spread”, of the price series. Mathematically, the variance σ² of a price series P, with elements p ∈ P, and mean μ, is expressed as: \(σ²(p) = E[(p – μ)²]\) Standard Deviation is simply the square root of variance, expressed as σ: \(σ = \sqrt{σ²(p)} = \sqrt{E[(p – μ)²]}\) In R, the standard deviation of a sample can be calculated using the sd() function. For example, to calculate the standard deviation observed in our sample of EUR/USD data, ranging from January 01 to March 31, 2017, we execute the following code to arrive at s.d. 0.00996836:
sd(eurgbp_merged$EURUSD)
[1] 0.00996836
Using the plotly library in R again, we can overlay a single (or more) positive and negative standard deviation from the mean, as follows:
plot_ly(name="EUR/USD Price", x = eurgbp_merged$Dates, y = as.numeric(eurgbp_merged$EURUSD), type="scatter", mode="lines") %>%
  add_trace(name="+1 S.D.", y=(as.numeric(mean(eurgbp_merged$EURUSD))+sd(eurgbp_merged$EURUSD)), mode="lines", line=list(dash="dot")) %>%
  add_trace(name="-1 S.D.", y=(as.numeric(mean(eurgbp_merged$EURUSD))-sd(eurgbp_merged$EURUSD)), mode="lines", line=list(dash="dot")) %>%
  add_trace(name="EUR/USD Mean", y=(as.numeric(mean(eurgbp_merged$EURUSD))), mode="lines")
The sample covariance of two price series, in this case EUR/USD and GBP/USD, each with its respective sample mean, describes their linear association, i.e. how they move together in time. Let’s denote EUR/USD by variable ‘e’ and GBP/USD by variable ‘g‘. These price series will then have respective sample means \(\overline{e}\) and \(\overline{g}\). Mathematically, their sample covariance, Cov(e, g), where both have n number of data points \((e_i, g_i)\), can be expressed as: \(Cov(e,g) = \frac{1}{n-1}\sum_{i=1}^{n}(e_i – \overline{e})(g_i – \overline{g})\) In R, sample covariance can be calculated easily using the cov() function. Before we calculate covariance, let’s first use the plotly library to draw a scatter plot of EUR/USD and GBP/USD. To visualize linear association, we will also perform a linear regression on the two price series, followed by drawing this as a line of best fit on the scatter plot. This can be achieved in R using the following code:
# Perform linear regression on EUR/USD and GBP/USD
fit <- lm(EURUSD ~ GBPUSD, data=eurgbp_merged)
# Draw scatter plot with line of best fit
plot_ly(name="Scatter Plot", data=eurgbp_merged, y=~EURUSD, x=~GBPUSD, type="scatter", mode="markers") %>%
  add_trace(name="Linear Regression", data=eurgbp_merged, x=~GBPUSD, y=fitted(fit), mode="lines")
Based on this plot, EUR/USD and GBP/USD have a positive linear association.
To calculate the sample covariance of EUR/USD and GBP/USD between January 01 and March 31, 2017, we execute the following code to arrive at covariance 7.629787e-05: cov(eurgbp_merged$EURUSD, eurgbp_merged$GBPUSD) [1] 7.629787e-05 Problem: Being dimensional in nature, calculating just Covariance makes it difficult to compare price series with significantly different variances. Solution: Calculate Correlation, which is Covariance normalized by the standard deviations of each price series, hence making it dimensionless and a more interpretable ratio of linear association between two price series. Mathematically, Correlation ρ(e,g) of EUR/USD and GBP/USD, where \(σ_e\) and \(σ_g\) are their respective standard deviations, can be expressed as: \(ρ(e,g) = \frac{Cov(e,g)}{σ_e σ_g} = \frac{\frac{1}{n-1}\sum_{i=1}^{n}(e_i – \overline{e})(g_i – \overline{g})}{σ_e σ_g}\) Correlation = +1 indicates EXACT positive association. Correlation = -1 indicates EXACT negative association. Correlation = 0 indicates NO linear association. In R, correlation can be calculated easily using the cor() function. For example, to calculate the correlation between EUR/USD and GBP/USD, from January 01 to March 31, 2017, we execute the following code to arrive at 0.5169411: cor(eurgbp_merged$EURUSD, eurgbp_merged$GBPUSD) [1] 0.5169411 0.5169411 implies reasonable positive correlation between EUR/USD and GBP/USD, which is what we visualized earlier with our scatter plot and line of best fit. In future blog posts, we will examine how to construct diversified DARWIN Portfolios using the information above in practice. Trade safe, The Darwinex Team — Additional Resource: Learn more about DARWIN Portfolio Risk (VIDEO) * please activate CC mode to view subtitles. Do you have what it takes? – Join the Darwinex Trader Movement!
This is notation from Distribution Theory in Functional Analysis. The theory of distributions is meant to make things like the Dirac Delta rigorous. In this context, just to give you one overview, a distribution is a functional on the space of test functions. We define the space of test functions over $\mathbb{R}$ as $\mathcal{D}(\mathbb{R})$ being the space of smooth functions with compact support (that is, the set where they are not zero is bounded and closed). In that case, the space of distributions is the space of continuous linear functionals over $\mathcal{D}(\mathbb{R})$ and is denoted as $\mathcal{D}'(\mathbb{R})$. If $\eta\in \mathcal{D}'(\mathbb{R})$ and $\phi\in \mathcal{D}(\mathbb{R})$ we usually denote $\eta(\phi)$ by $(\eta,\phi)$. Since distributions are just linear functionals, we say that two distributions $\eta,\zeta$ are equal if $(\eta,\phi)=(\zeta,\phi)$ for all $\phi\in \mathcal{D}(\mathbb{R})$. The Dirac Delta, for instance, is defined as $\delta\in \mathcal{D}'(\mathbb{R})$ whose action on $\phi\in \mathcal{D}(\mathbb{R})$ is $(\delta,\phi)=\phi(0)$. Now, given $\phi\in\mathcal{D}(\mathbb{R})$ one can always build a distribution associated with it: $$(\phi,\psi)=\int_{-\infty}^{\infty}\phi(x)\psi(x)dx, \qquad \forall \ \psi\in \mathcal{D}(\mathbb{R}).$$ There are other ways, though, to make one usual function into a distribution, even if the function is not a test function. One of them is the principal value. Consider $f(x) = \frac{1}{x}$. This obviously doesn't have compact support, so $f\notin \mathcal{D}(\mathbb{R})$. We can make $f$ into a distribution, though, by considering the principal value: $$\left(\operatorname{Pv}\frac{1}{x},\phi\right)=\lim_{\epsilon\to 0^+}\left(\int_{-\infty}^{-\epsilon}\dfrac{\phi(x)}{x}dx+\int_\epsilon^\infty \dfrac{\phi(x)}{x}dx\right).$$ This is what the book means by $\operatorname{Pr}$. Now, the formula you state is the Sokhotski–Plemelj formula. It should be read in the distributional sense. Saying that: $$\lim_{\epsilon\to 0}\frac{1}{x+i\epsilon}=\operatorname{Pr}\frac1x -i\pi\delta(x).$$ Really means that for all $\phi\in \mathcal{D}(\mathbb{R})$ we have $$\lim_{\epsilon\to 0}\left(\frac{1}{x+i\epsilon},\phi\right)=\left(\operatorname{Pr}\frac1x,\phi\right) -i\pi\left(\delta(x),\phi\right),$$ where $$\left(\frac{1}{x+i\epsilon},\phi\right)=\int_{-\infty}^{\infty}\dfrac{\phi(x)}{x+i\epsilon}dx.$$
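A quick numerical illustration of the formula (my own Python sketch; a Gaussian plays the role of the test function even though it is not compactly supported). For the even function $\phi(x)=e^{-x^2}$ the principal-value term vanishes, so as $\epsilon\to0^+$ the integral $\int \phi(x)/(x+i\epsilon)\,dx$ should tend to $-i\pi\phi(0)=-i\pi$:

import numpy as np

phi = lambda x: np.exp(-x ** 2)        # stand-in test function (not compactly supported)
x = np.linspace(-50, 50, 2_000_001)    # fine grid; a simple Riemann sum is enough here
dx = x[1] - x[0]

for eps in (1e-1, 1e-2, 1e-3):
    val = np.sum(phi(x) / (x + 1j * eps)) * dx
    print(f"eps = {eps:g}: Re = {val.real:+.4f}, Im = {val.imag:+.5f}")

print("-pi =", -np.pi)                 # expected limit of the imaginary part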
July 1 (Wed)-July 4 (Sat), 2015
Confirmed Speakers:
13:00-14:00 Oshima
Break
14:15-15:15 Matumoto
15:15-16:00 Coffee Break (Fujiwara Hall)
16:00-17:00 Hirai
Break
17:15-18:15 Nishiyama
13:00-14:00 Orsted
14:15-15:15 Bianchi
15:15-16:00 Coffee Break (Fujiwara Hall)
16:00-17:00 Pevzner
Break
17:15-18:15 Kaizuka
Conference Dinner (18:30 Bus)
13:00-14:00 Vershik
14:15-15:15 Bianchi
15:30-16:30 Orsted
Speaker: Gabriele Bianchi (Universita di Firenze)
Title: The covariogram and Fourier-Laplace transform in {\mathbb C}^n
Abstract: The covariogram g_{K} of a convex body K in R^n is the function which associates to each x in R^n the volume of the intersection of K with K+x. Determining K from the knowledge of g_K is known as the covariogram problem. It is equivalent to determining the characteristic function 1_K of K from the modulus of its Fourier-Laplace transform, a particular instance of the phase retrieval problem. We will present this problem and a recent result that shows that, when K is sufficiently smooth and in any dimension n, K is determined by g_K in the class of sufficiently smooth bodies. The proof uses in an essential way a study of the asymptotic behavior at infinity of the zero set of the Fourier-Laplace transform of 1_K in C^n done by Toshiyuki Kobayashi. We also discuss the relevance for the covariogram problem of known determination results for the phase retrieval problem and the difficulty of finding explicit geometric conditions on K which guarantee that the entire Fourier-Laplace transform of 1_K cannot be factored as the product of non-trivial entire functions. This shows a connection between the covariogram problem and the Pompeiu problem.
Speaker: Takeshi Hirai (Kyoto University)
Title: A review of my work on characters of semisimple Lie groups
Abstract: Concentrating on the subject of characters of semisimple Lie groups, I try to review a series of my papers. The talk will contain subjects such as
Speaker: Koichi Kaizuka (Gakushuin University)
Title: Scattering theory for invariant differential operators on symmetric spaces of noncompact type and its application to unitary representations
Abstract: We develop the scattering theory for invariant differential operators on symmetric spaces of noncompact type. We study the asymptotic behavior of (joint) eigenfunctions in a suitable Banach space. By the scattering theory, we present three types of unitary representations of semisimple Lie groups in an explicit form as a uniform limit of representations on the Banach space.
Speaker: Masatoshi Kitagawa (the University of Tokyo)
Title: On the irreducibility of U(g)^H-modules
Abstract: I will report on the irreducibility of U(g)^H-modules arising from branching problems. It is well-known that a U(g)^K-module Hom_K(W,V) is irreducible for any irreducible (g, K)-module V and K-type W. For a non-compact subgroup H, the same statement is not true in general. In this talk, I will introduce a positive example and a negative example for the irreducibility of Hom_H(W,V).
Speaker: Toshiyuki Kobayashi (the University of Tokyo)
Title: Analysis of minimal representations---"geometric quantization" of minimal nilpotent orbits
Abstract: Minimal representations are the smallest infinite dimensional unitary representations of reductive groups. About ten years ago, I suggested a program of "geometric analysis" with minimal representations as a motif.
We have found various geometric realizations of minimal representations that interact with conformal geometry, conserved quantities of PDEs, a holomorphic model (e.g. Fock-type model), an $L^2$-model (Schr\"odinger-type model), and Dolbeault cohomology models. I plan to discuss some of these models based on works with my collaborators, Hilgert, Mano, M\"ollers, and Ørsted among others. From the viewpoint of the orbit philosophy of Kirillov-Kostant, minimal representations may be thought of as a quantization of minimal nilpotent orbits. In a certain setting, we give a "geometric quantization" of minimal representations by using certain Lagrangian manifolds. Our construction includes the Schr\"odinger model of the Segal-Shale-Weil representation of the metaplectic group, and the commutative model of the complementary series representations of O(n,1) due to A. M. Vershik and M. I. Graev.
Speaker: Toshihisa Kubo (the University of Tokyo)
Title: On the reducible points for scalar generalized Verma modules
Abstract: In the 1980s Enright-Howe-Wallach and Jakobsen individually classified the reducible points for scalar generalized Verma modules induced from parabolic subalgebras with abelian nilpotent radicals, for which the generalized Verma modules are unitarizable. Recently, Haian He classified all the reducible points for such scalar generalized Verma modules. In this talk we will discuss classifying the reducible points for scalar generalized Verma modules induced from maximal parabolic subalgebras with two-step nilpotent radicals. This is joint work in progress with Haian He and Roger Zierau.
Speaker: Hisayosi Matumoto (the University of Tokyo)
Title: Homomorphisms between scalar generalized Verma modules of ${\mathfrak gl}(n, {\mathbb C})$
Abstract: An induced module of a complex reductive Lie algebra from a one-dimensional representation of a parabolic subalgebra is called a scalar generalized Verma module. In this talk, we give a classification of homomorphisms between scalar generalized Verma modules of ${\mathfrak gl}(n,{\mathbb C})$. In fact such homomorphisms are compositions of elementary homomorphisms.
Speaker: Kyo Nishiyama (Aoyama Gakuin University)
Title: Double flag variety over reals: Hermitian symmetric case
Abstract: Let $G$ be a reductive Lie group and $L$ its symmetric subgroup, i.e., $L$ is open in $G^{\theta}$ for a certain involution $\theta$. Choose parabolic subgroups $P \subset G$ and $Q \subset L$ respectively, and put $X = G/P \times L/Q$. We call $X$ a double flag variety. $L$ acts on $X$ diagonally, and $X$ is said to be of finite type if there are only finitely many $L$-orbits. In this talk, we concentrate on the pair $(G, L) = (Sp_{2n}(\R), GL_n(\R))$ and consider $X = LGrass(\R^{2n}) \times Grass_d(\R^n)$ (product of the Lagrangian Grassmannian and the Grassmannian of $d$-dimensional subspaces). This double flag variety turns out to be of finite type, and we discuss various interesting properties of $X$, which is not fully investigated yet. This is based on ongoing joint work with Bent Ørsted.
Speaker: Hiroyuki Ochiai (Kyushu University)
Title: Covariant differential operators and Heckman-Opdam hypergeometric systems
Abstract: This is joint work with Tomoyoshi Ibukiyama and Takako Kuzumaki. We consider holomorphic linear differential operators with constant coefficients acting on Siegel modular forms, which preserve the automorphy when restricted to a subdomain.
We give a characterization of the symbols of such differential operators, and mention an explicit form in terms of hypergeometric functions with respect to root systems introduced by G. Heckman and E. Opdam.
Speaker: Bent Ørsted (Århus University)
Title: Generalized Fourier transforms
Abstract: In these lectures, based on joint work with Salem Ben Said and Toshiyuki Kobayashi, we shall define a natural family of deformations of the usual Fourier transform in Euclidean space. The main idea is to replace the standard Laplace operator by a two-parameter family of deformations in such a way that it is still a member of a triple generating the three-dimensional simple Lie algebra. In particular we shall describe
Speaker: Michael Pevzner (Reims University)
Title: Symmetry breaking operators and resonance phenomena for branching laws
Abstract: We shall explain the fundamental role of the Gauss hypergeometric equation in the explicit realization of symmetry breaking operators for reductive pairs and the control of multiplicities of the corresponding branching laws for singular parameters.
Speaker: Anatoly Vershik (St. Petersburg State University)
Title: Representations of current groups and the theory of special representations
Abstract: At the beginning of the 1970s H. Araki gave a general scheme of the construction of representations, in the Fock space (= Ito-Wiener space), of groups of functions with values in Lie groups. Independently, in a 1973 paper, Gelfand-Graev-Vershik gave the first example of such irreducible representations for the case of $SL(2,R)$, and later for semi-simple groups of rank one — $O(n,1), U(n,1)$. Many authors worked in this direction (VGG, Ismagilov, Delorme, Guichardet et al.). The main point is a cocycle of the group with values in an irreducible faithful representation. During the last 10 years, in papers by Graev and Vershik, the following progress was obtained:
1) New models (an "integral model", Poisson and quasi-Poisson models instead of the Fock model) of the representations of current groups were constructed;
2) A systematic approach to the study of cohomology in the special unitary representations, in particular for the Iwasawa subgroups of semisimple groups like $U(p,q), O(p,q)$ and other solvable groups;
3) A recent attempt to extend the theory to nonunitary representations.
There are many open problems and links with other areas.
© Toshiyuki Kobayashi
Let $p$ be an odd prime. Prove that $\frac{p-1}{2}$ is a primitive root modulo $p$ if and only if $2(-1)^{(p-1)/2}$ is a primitive root modulo $p$. I was thinking that since $\frac{p-1}{2}$ is a primitive root modulo $p$ that its order is $p-1$.

If $g$ is a primitive root modulo $p$ then its order is $p-1$. Since $\frac{p-1}{2}\equiv -\frac{1}{2} \mod p$, we have
$$\left(\frac{p-1}{2}\right)^n\equiv 1\mod p\ \ \ \Leftrightarrow (-1)^n \equiv 2^n\mod p\ \Leftrightarrow 1 \equiv (-2)^n\mod p$$
hence $\frac{p-1}{2}$ is a primitive root if and only if $-2$ is a primitive root modulo $p$. If $\frac{p-1}{2}$ is odd this is what we need, since then $2(-1)^{(p-1)/2}=-2$. If $\frac{p-1}{2}$ is even, we must show that $-2$ is a primitive root if and only if $2$ is; here is one direction (the other follows by exchanging the roles of $2$ and $-2$). Suppose $2$ is a primitive root, and let $t$ be the order of $-2$; then $(-2)^t=1$, hence $2^{2t}=1$, but the order of $2$ is $p-1$, so $p-1$ divides $2t$, hence $2t\geq p-1$, and because the order of any element is at most $p-1$ we have:
$$ \frac{p-1}{2} \leq t \leq p-1$$
But if $t=\frac{p-1}{2}$ then, since $t$ is even, $(-2)^t=2^t=1$, which is absurd because the order of $2$ is $p-1$; hence $t=p-1$.
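As a quick numerical sanity check of the claimed equivalence for small odd primes, here is a minimal Python sketch; the helper names `multiplicative_order`, `is_primitive_root`, and `primes_up_to` are mine, not from the post.

```python
# Check: (p-1)/2 is a primitive root mod p  <=>  2*(-1)^((p-1)/2) is.
def multiplicative_order(a, p):
    """Order of a modulo p, assuming gcd(a, p) = 1."""
    a %= p
    k, x = 1, a
    while x != 1:
        x = (x * a) % p
        k += 1
    return k

def is_primitive_root(a, p):
    return a % p != 0 and multiplicative_order(a, p) == p - 1

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i::i] = [False] * len(sieve[i * i::i])
    return [i for i, ok in enumerate(sieve) if ok]

for p in primes_up_to(200):
    if p == 2:
        continue
    lhs = is_primitive_root((p - 1) // 2, p)
    rhs = is_primitive_root(2 * (-1) ** ((p - 1) // 2), p)
    assert lhs == rhs, p

print("claim holds for all odd primes up to 200")
```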
Let V be the universe (the class of all sets), let W(0)=V, let W(1) be the class of all singletons whose unique member is a member of W(0), and for n>0 let W(n+1) be the class of all singletons whose unique member is a member of W(n). For every n>=0, let S(n)=W(n+1)/W(n) be the class that is the difference of the class W(n+1) and the class W(n). We are interested in the proposition (S): "For every set x, there exists a unique natural number n such that x is a member of S(n)".

Question 1: Let ZFC be our set theory; does ZFC prove (S)?
Question 2: Suppose the answer to Question 1 is YES, and let our set theory now be ZF- (I mean ZF with the omission of the axiom of Regularity (or Foundation)); does ZF- prove (S)?
Question 3: Suppose the answer to Question 2 is NO; does ZF- prove the equivalence of (S) with the axiom of Regularity?

Gérard Lang

I assume that you meant to write $S(n)=W(n)-W(n+1)$, rather than what you have written, since inductively one can show $W(n+1)\subseteq W(n)$. With this understanding, all the $S(n)$ are disjoint, and the question is whether every set eventually falls out, or whether there can be a set in every $W(n)$. In ZFC, there can be no set $x$ in every $W(n)$, since the transitive closure of such an $x$ would contain the unique element of $x$, the unique element of that element, and so on, and thus would have no $\in$-minimal element, contrary to the foundation axiom. So the answer to Question 1 is Yes. Meanwhile, it is relatively consistent with ZF- that there is a set $x$ with $x=\{x\}$; for example, such sets exist under the Anti-Foundation axiom. Such a set is in every $W(n)$, and so is not in any $S(n)$, and so the answer to Question 2 is No. Question 3 is quite interesting, but I don't know the answer. You may want to add the axiom of Dependent Choices DC, a mild version of AC, for with this axiom a violation of the foundation axiom will give rise to an $\in$-descending $\omega$-sequence $x_0,x_1,\ldots$ with $x_{n+1}\in x_n$, and from such a sequence one might hope to build a set that is in every $W(n)$. But I don't see how to complete this idea just yet...

Since Joel has already answered Questions (1) and (2), I will only offer an answer for Question 3. This is a revised version of my answer; thanks to Joel Hamkins for pointing out that my previous construction was not quite right. Start with a simple graph with 3 elements {$a,b,c$}, where each of the three nodes has an edge to the other two. So in this 3-element model of "set theory", $a$ = {$b$, $c$}, $b$ = {$c$, $a$}, and $c$ = {$a$, $b$}. Given an extensional digraph $G=(X,E)$, with $X$ as the vertex set and $E$ as the edge set, define the deficiency set $D(G)$ of $G$ to be the collection of subsets $S$ of $X$ that are not "coded" in $G$, i.e., for which there is no element $a$ in $X$ such that $S$ = {$x \in X : xEa$}. We can now define by recursion a digraph $G_\alpha = (X_\alpha, E_\alpha)$ for each ordinal $\alpha$ as follows: $G_0 = G$; $G_{\alpha+1} = (X_{\alpha+1}, E_{\alpha+1})$, where $X_{\alpha+1} = X_{\alpha} \cup D(G_{\alpha})$, and $E_{\alpha+1}$ is $E_{\alpha}$ together with edges of the form $(x,X)$, where $x\in X_{\alpha}$, $X \in D(G_{\alpha})$, and $x\in X$. For limit $\alpha$, $G_\alpha$ is the union of $G_\beta$ for $\beta<\alpha$. The model/digraph we are interested in is the union of all the $G_\alpha$, as $\alpha$ ranges over the ordinals, where $G$ is the 3-element digraph on {$a,b,c$} mentioned earlier.
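As a small illustration of the construction (a sketch of my own, not from the answer), the following Python snippet computes the deficiency set $D(G)$ of the 3-element digraph and forms the first successor stage $G_1$; of course it only shows one finite step, not the full ordinal-length recursion.

```python
# One step of the construction: a = {b, c}, b = {c, a}, c = {a, b}.
from itertools import combinations

X0 = ["a", "b", "c"]
E0 = {("b", "a"), ("c", "a"),     # b E a, c E a  (i.e. a = {b, c})
      ("c", "b"), ("a", "b"),     # b = {c, a}
      ("a", "c"), ("b", "c")}     # c = {a, b}

def members(v, X, E):
    """The set coded by vertex v, i.e. {x in X : x E v}."""
    return frozenset(x for x in X if (x, v) in E)

def deficiency(X, E):
    coded = {members(v, X, E) for v in X}
    all_subsets = [frozenset(s) for r in range(len(X) + 1)
                   for s in combinations(X, r)]
    return [S for S in all_subsets if S not in coded]

D0 = deficiency(X0, E0)
print("subsets of X0 not coded in G0:", [set(S) for S in D0])

# G1: each missing subset becomes a new vertex, with edges from its members.
X1 = X0 + D0
E1 = set(E0) | {(x, S) for S in D0 for x in S}
print("G1 has", len(X1), "vertices and", len(E1), "edges")
```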
Let's call this model $V(a,b,c)$. It satisfies all the axioms of $ZF$ with the exception of Foundation. $V(a,b,c)$ also satisfies $S$, since any infinite descending epsilon chain must eventually hit $a$, $b$, or $c$. So this shows that Question 3 has a negative answer. Since {$a,b,c$} are indiscernibles in $V(a,b,c)$, I suspect that $DC$ fails in $V(a,b,c)$, but a variation on this theme might produce a model with enough asymmetry for $DC$ to hold as well.

PS. Models of $ZF$ in which Foundation fails are often constructed using the so-called Bernays-Rieger permutation method (not to be confused with the Fraenkel-Mostowski permutation method of constructing models of $ZF$ in which the axiom of choice fails). The model constructed above is based on a different idea, explored in detail for models of finite set theory in the following paper:

A. Enayat, J. Schmerl, and A. Visser, "Omega Models of Finite Set Theory", in Set Theory, Arithmetic, and Foundations of Mathematics: Theorems, Philosophies (edited by J. Kennedy and R. Kossak), Cambridge University Press, to appear October 2011. A preprint can be found here.
Interested in the following function:
$$ \Psi(s)=\sum_{n=2}^\infty \frac{1}{\pi(n)^s}, $$
where $\pi(n)$ is the prime counting function. When $s=2$ the sum becomes the following:
$$ \Psi(2)=\sum_{n=2}^\infty \frac{1}{\pi(n)^2}=1+\frac{1}{2^2}+\frac{1}{2^2}+\frac{1}{3^2}+\frac{1}{3^2}+\frac{1...

Consider a random binary string where each bit can be set to 1 with probability $p$. Let $Z[x,y]$ denote the number of arrangements of a binary string of length $x$ in which the $x$-th bit is set to 1. Moreover, $y$ bits are set to 1 including the $x$-th bit, and there are no runs of $k$ consecutive zer...

The field $\overline F$ is called an algebraic closure of $F$ if $\overline F$ is algebraic over $F$ and if every polynomial $f(x)\in F[x]$ splits completely over $\overline F$. Why, in the definition of algebraic closure, do we need $\overline F$ to be algebraic over $F$? That is, if we remove the '$\overline F$ is algebraic over $F$' condition from the definition of algebraic closure, do we get a different result?

Consider an observer located at radius $r_o$ from a Schwarzschild black hole of radius $r_s$. The observer may be inside the event horizon ($r_o < r_s$). Suppose the observer receives a light ray from a direction which is at angle $\alpha$ with respect to the radial direction, which points outwa...

@AlessandroCodenotti That is a poor example, as the algebraic closure of the latter is just $\mathbb{C}$ again (assuming choice). But starting with $\overline{\mathbb{Q}}$ instead and comparing to $\mathbb{C}$ works.

Seems like everyone is posting character formulas for simple modules of algebraic groups in positive characteristic on arXiv these days. At least 3 papers with that theme in the past 2 months.

Also, I have a definition that says that a ring is a UFD if every element can be written as a product of irreducibles which is unique up to units and reordering. It doesn't say anything about this factorization being finite in length. Is that often part of the definition or obtained from the definition (I don't see how it could be the latter)?

Well, that then becomes a chicken-and-egg question. Did we have the reals first and simplify from them to more abstract concepts, or did we have the abstract concepts first and build them up to the idea of the reals?

I've been told that the rational numbers from zero to one form a countable infinity, while the irrational ones form an uncountable infinity, which is in some sense "larger". But how could that be? There is always a rational between two irrationals, and always an irrational between two rationals, ...

I was watching this lecture, and in reference to the above screenshot, the professor there says: $\frac1{1+x^2}$ has a singularity at $i$ and at $-i$, and power series expansions are limits of polynomials, and limits of polynomials can never give us a singularity and then keep going on the other side.

On page 149 Hatcher introduces the Mayer-Vietoris sequence, along with two maps $\Phi : H_n(A \cap B) \to H_n(A) \oplus H_n(B)$ and $\Psi : H_n(A) \oplus H_n(B) \to H_n(X)$. I've searched through the book, but I couldn't find the definitions of these two maps. Does anyone know how to define them or where their definition appears in Hatcher's book?
Suppose $\sum a_n z_0^n = L$, so $a_n z_0^n \to 0$, so $|a_n z_0^n| < \dfrac12$ for sufficiently large $n$. Then for $|z| < |z_0|$, $|a_n z^n| = |a_n z_0^n| \left|\dfrac{z}{z_0}\right|^n < \dfrac12 \left|\dfrac{z}{z_0}\right|^n$, so $\sum a_n z^n$ is absolutely summable, so $\sum a_n z^n$ is summable.

Let $g : [0,\frac{1}{2}] → \mathbb R$ be a continuous function. Define $g_n : [0,\frac{1}{2}] → \mathbb R$ by $g_1 = g$ and $g_{n+1}(t) = \int_0^t g_n(s)\, ds$ for all $n ≥ 1$. Show that $\lim_{n→∞} n!\,g_n(t) = 0$ for all $t ∈ [0,\frac{1}{2}]$. Can you give some hint? My attempt: for $t\in [0,1/2]$, consider the sequence $a_n(t)=n!\,g_n(t)$. If $\lim_{n\to \infty} \frac{a_{n+1}}{a_n}<1$, then it converges to zero.

I have a bilinear functional that is bounded from below. I try to approximate the minimum by an ansatz function that is a linear combination of arbitrary independent functions from the proper function space. I then obtain an expression that is bilinear in the coefficients. Using the stationarity condition (all derivatives of the functional w.r.t. the coefficients equal to 0), I get a set of $n$ equations, with $n$ the number of coefficients: a set of $n$ linear homogeneous equations in the $n$ coefficients. Now, instead of "directly attempting to solve" the equations for the coefficients, I rather look at the secular determinant, which should be zero, since otherwise no non-trivial solution exists. This "characteristic polynomial" directly yields all permissible approximation values for the functional from my linear ansatz, avoiding the necessity of solving for the coefficients. I have trouble formulating the question, but it strikes me that a direct solution of the equations can be circumvented, and instead the values of the functional are directly obtained by using the condition that the determinant is zero. I wonder if there is something deeper in the background, or, so to say, a more general principle.

If $x$ is a prime number and a number $y$ exists which is the digit reverse of $x$ and is also a prime number, then there must exist an integer $z$ midway between $x$ and $y$ which is a palindrome and digitsum(z) = digitsum(x).

> Bekanntlich hat P. du Bois-Reymond zuerst die Existenz einer überall stetigen Funktion erwiesen, deren Fouriersche Reihe an einer Stelle divergiert. Herr H. A. Schwarz gab dann ein einfacheres Beispiel.

(Translation: It is well known that Paul du Bois-Reymond was the first to demonstrate the existence of a continuous function whose Fourier series diverges at a point. Afterwards, Hermann Amandus Schwarz gave a simpler example.)

It's discussed very carefully (but with no formula explicitly given) in my favorite introductory book on Fourier analysis, Körner's Fourier Analysis; see pp. 67-73. Right after that is Kolmogoroff's result that you can have an $L^1$ function whose Fourier series diverges everywhere!!
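Returning to the iterated-integral exercise above ($g_1 = g$, $g_{n+1}(t)=\int_0^t g_n(s)\,ds$): here is a small Python sketch, with an arbitrary sample $g$ and ad hoc helper names of my own, that numerically illustrates $n!\,g_n(t)\to 0$ on $[0,\tfrac12]$ using repeated trapezoidal cumulative integration. It is an illustration, not a proof.

```python
# Numerically illustrate that sup_t |n! g_n(t)| -> 0 on [0, 1/2] for a sample g.
import math
import numpy as np

t = np.linspace(0.0, 0.5, 2001)
g = np.cos(5 * t) + 2.0            # an arbitrary continuous sample g

def cumulative_integral(values, grid):
    """Trapezoidal approximation of s -> integral of `values` from 0 to s."""
    out = np.zeros_like(values)
    dt = grid[1] - grid[0]
    out[1:] = np.cumsum(0.5 * (values[1:] + values[:-1])) * dt
    return out

gn = g.copy()                      # gn holds g_n at the start of each loop pass
for n in range(1, 21):
    sup = math.factorial(n) * np.max(np.abs(gn))
    if n == 1 or n % 5 == 0:
        print(f"n = {n:2d}   sup_t |n! g_n(t)| = {sup:.3e}")
    gn = cumulative_integral(gn, t)    # advance to g_{n+1}
```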
In a nutshell: exchange integrals are two-electron integrals, and two-electron integrals yield positive values. Note that the "kind" or "meaning" of the input functions is irrelevant, because in practice you will always have linear combinations of primitives, and in most cases Gaussians. For the proof of the claim about positive values, I will defer to the experts [1, HJO], who cite previous work [2]. As taken from the book:

The two-electron integrals can be viewed as a matrix with the electron distributions [($\Omega_{ab}, \Omega_{cd}$)] as row and column labels [using AO labels $a,b,c,d$, see above]
$$ g_{abcd} = \int \int \frac{\Omega_{ab}(\mathbf{r}_1) \Omega_{cd}(\mathbf{r}_2)}{r_{12}} \mathrm{d}\mathbf{r}_1 \mathrm{d}\mathbf{r}_2 $$
Assuming that the orbitals are real, we shall demonstrate that this matrix is positive definite [2]. Let us consider the interaction between two electrons in the same distribution $\rho(\mathbf{r})$:
$$ I[\rho] = \int \int \frac{\rho(\mathbf{r}_1) \rho(\mathbf{r}_2)}{r_{12}} \mathrm{d}\mathbf{r}_1 \mathrm{d}\mathbf{r}_2 $$
Inserting the Fourier transform of the interaction operator
$$ \frac{1}{r_{12}} = \frac{1}{2\pi^{2}} \int k^{-2} \exp[\mathrm{i}\mathbf{k} \cdot(\mathbf{r}_1 - \mathbf{r}_2)] \mathrm{d}\mathbf{k} $$
and carrying out the integration over the Cartesian coordinates, we obtain
$$ I[\rho] = \frac{1}{2\pi^{2}} \int k^{-2} \vert \rho(\mathbf{k}) \vert^2 \mathrm{d}\mathbf{k} \quad\quad \text{(eq. 4)} $$
where we have introduced the distributions
$$ \rho(\mathbf{k}) = \int \exp(-\mathrm{i}\mathbf{k}\cdot\mathbf{r}) \rho(\mathbf{r}) \mathrm{d}\mathbf{r} $$
Since the integrand in [(eq. 4)] is always positive or zero, we obtain the inequality
$$ I[\rho] > 0 $$

HJO go on to expand the charge distribution $\rho$ in one-electron orbital distributions and get back to the original $g_{abcd}$, noting afterwards that two-electron integrals thus satisfy the conditions for inner products, in a metric defined by $r^{-1}_{12}$. Therefore, Schwarz-style inequalities hold and are used extensively in integral screening to throw out insignificant integrals before evaluating them.

[1] T. Helgaker, P. Jørgensen, J. Olsen, Molecular Electronic-Structure Theory, Wiley (2002), p. 403f.
[2] C. C. J. Roothaan, Rev. Mod. Phys., 23, 69 (1951).
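To make the last point concrete, here is a toy Python sketch of Cauchy-Schwarz screening, $|g_{abcd}| \le \sqrt{g_{abab}\,g_{cdcd}}$. A random positive semidefinite matrix stands in for the real matrix of two-electron integrals (that positivity is exactly what the quoted derivation establishes); the matrix and names are illustrative assumptions only, not actual integrals.

```python
# Toy illustration of Schwarz-style integral screening.
import numpy as np

rng = np.random.default_rng(0)
n_pairs = 6                        # number of charge distributions Omega_P
B = rng.normal(size=(40, n_pairs))
G = B.T @ B                        # positive semidefinite stand-in for g_{PQ}

# Schwarz bound: |g_PQ| <= sqrt(g_PP * g_QQ) for every pair of distributions.
diag = np.sqrt(np.diag(G))
bound = np.outer(diag, diag)
assert np.all(np.abs(G) <= bound + 1e-12)

# Screening: skip "integrals" whose Schwarz bound falls below a threshold,
# without ever touching the (expensive) entries themselves.
threshold = np.median(bound)
kept = np.abs(G)[bound >= threshold]
print(f"kept {kept.size} of {G.size} entries after screening")
```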
I've had a question that I couldn't answer myself and haven't found any formal derivation on the topic. If one has a point light with position \(\vec{c}\) and intensity \(I\), then the irradiance at point \(\vec{p}\) at some surface can be derived as:

\(E(\vec{p}) = \frac{d\Phi}{dA} = \frac{d\Phi}{d\omega}\frac{d\omega}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2}\frac{dA}{dA} = I \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)

Assuming that the brdf is constant, \(f(\omega_o,\vec{p},\omega_i) = C = \text{const}\), the outgoing radiance from that surface point due to that point light can be computed as:

\(L_o(\vec{p},\omega_o) = L_e(\vec{p},\omega_o) + \int_{\Omega}{f(\omega_o,\vec{p},\omega_i)L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} =\)
\( L_e(\vec{p},\omega_o) + C\int_{\Omega}{L_i(\vec{p},\omega_i)\cos\theta_i\,d\omega_i} = L_e(\vec{p},\omega_o) + CE(\vec{p}) = \)
\(L_e(\vec{p},\omega_o) + CI \frac{\cos\theta}{||\vec{c}-\vec{p}||^2} \)

which agrees with what one is used to seeing in implementations in real-time graphics. How does one motivate similar expressions for more complex brdfs, considering the fact that radiance for a point light is not defined?

Point and directional light sources are not physical and are commonly defined via delta functions/distributions. For a delta directional light, you take the solid angle version of the rendering integral (which is what you have above) and replace \(L_i\) by an angular delta function, which makes the integral degenerate to a simple integrand evaluation. For a point light, which is a delta volume light, you would take the volume version of the rendering equation, i.e. the one that integrates over 3D space, and then again replace \(L_i\) by a positional delta function.

I am aware that a point light is not physical, I am just wondering how the commonly accepted formulae in real-time CG are motivated. In most cases they use the intensity as if it were radiance, with brdfs defined in terms of radiance. Do you have a reference for the part where you mention that you're replacing the radiance with a delta function? That will also cause even more problems mathematically, as it will turn out that you're multiplying distributions in some cases. What I am looking for is a reference with a mathematically robust derivation that is consistent with radiometry definitions. Papers/books suggestions are welcome.
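For readers who want the constant-BRDF case derived at the top of this thread as code, here is a minimal Python sketch of \(L_o = C\,I\cos\theta/\lVert\vec{c}-\vec{p}\rVert^2\) with \(L_e = 0\). The function and variable names are mine, and the choice \(C=\rho/\pi\) is the usual Lambertian convention, not something stated in the post.

```python
# Constant-BRDF (Lambertian) point-light shading: L_o = C * I * cos(theta) / |c - p|^2.
import numpy as np

def outgoing_radiance(p, n, c, intensity, albedo):
    """p: shading point, n: unit surface normal, c: point-light position,
    intensity: point-light intensity I, albedo: diffuse reflectance rho."""
    to_light = c - p
    dist2 = float(np.dot(to_light, to_light))
    wi = to_light / np.sqrt(dist2)
    cos_theta = max(np.dot(n, wi), 0.0)       # clamp directions below the horizon
    C = albedo / np.pi                        # constant BRDF value (Lambertian choice)
    E = intensity * cos_theta / dist2         # irradiance due to the point light
    return C * E

p = np.array([0.0, 0.0, 0.0])
n = np.array([0.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 2.0])
print(outgoing_radiance(p, n, c, intensity=10.0, albedo=0.8))
```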
Well, you have an integral over an angle that is non-zero if and only if it includes a singular direction. That's a delta function, isn't it? If you really need academic respectability to be convinced, maybe you should take a look at ingenious' publications.

Yup. You know what else causes problems? Perfectly specular reflectors and refractors, which also have deltas in their BSDFs. You have two choices when dealing with them -- approximate them with spikey but non-delta BSDFs, or sample them differently. In general, you don't try to evaluate those for arbitrary directions; you just cast one ray in the appropriate direction. It's exactly the same for point and directional lights. Either use light sources that subtend a small but finite solid angle, or special-case them. You evaluate the integral (for a single light source) by ignoring everything but the delta direction and then ignoring the delta factor in the integrand. You can think of it as using a Monte Carlo estimator f(x)/pdf(x), where both the f and the pdf have identical delta functions that cancel out. I don't have a specific reference for this, but the first place I'd look for one is the PBRT book.

I understand it in the sense that you want just that one direction; however, I don't see how that solves the intensity vs radiance issue. After all, the rendering equation considers radiance and not intensity. I would be very grateful if you could refer me to a publication of his that tackles this issue, I am not exactly sure how to find his publications from his username.

I mean purely theoretical problems, not specifically the ones you are referring to, though I meant precisely the case where you have a perfect mirror/refraction and a point light, since then you get a product of distributions.

PBRT doesn't really go much further beyond implementation details; most of the arguments are of "an intuitive" nature, and I want something a bit more formal. The closest thing I could find on the topic was from http://www.oceanopticsbook.info/view/li ... f_radiance, namely:

Likewise, you cannot define the radiance emitted by the surface of a point source because \( \Delta A\) becomes zero even though the point source is emitting a finite amount of energy.

And seeing as the rendering equation uses radiance, I want to understand how a point light source, for which radiance is not defined, fits into this framework.

Here's what I found in PBRT after a brief search: http://www.pbr-book.org/3ed-2018/Light_ ... eIntegrand
It classifies point sources as producing delta distributions. It doesn't try to evaluate the incoming radiance directly, but instead evaluates the integral over incoming radiance, which is well-defined even with deltas.

Thank you, I have looked through pbrt already, but just saying - "it's a Dirac delta" is hardly mathematically robust - there's no derivation. It doesn't even derive central relationships like \(\frac{\cos\theta}{r^2}dA = d\omega\). The issue is that I am not convinced that it is well-defined, hence why I want to see a formal proof (be it through measure theory, differential geometry, or even starting with Maxwell's equations). For one thing the brdf is defined in terms of radiance, not in terms of intensity. And if it is indeed well-defined I want to understand the details of why it is so, and not just rely on hand-waving arguments. As in, have a similar proof to what I did above for the diffuse case; obviously the thing I derived above does not apply to a brdf that actually depends on \(\omega_i\), however.

If your concern is with undefined quantities, you might need to start by defining exactly what you mean by a point light, since it's already a physically implausible entity. E.g., is it meaningful to have the dw or dA terms from your Lambertian derivation? Does it have a normal? Is it a singular point, or is it the limit of an arbitrarily small sphere? If it were me, I'd just start with a definition that is consistent with a delta distribution, because it's consistent with what I want to represent, I know it makes the math work, and I can actually start implementing something. Beyond that, I'm not sure there's anything else I can say that will convince you without a lot more work than I have time for. Best of luck.
I mostly agree with you, vchizhov. My understanding is that the delta function is an ad-hoc construct, often defined as

\(\int_{-\infty}^\infty f(x) \delta(x - x_0) \, dx = f(x_0)\)

Note that the above is not a valid Lebesgue integral. A more rigorous, Lebesgue-compatible definition is as a measure:

\(\int_{-\infty}^\infty f(x) \, \mathrm{d} \delta_{x_0}(x) = f(x_0)\)

The whole point of this exercise is, in my understanding, notational convenience: to write a specific discrete value \(f(x_0)\) as an integral in order to avoid special-casing when you're working in an integral framework. So if you really don't want to deal with delta functions/measures, you can explicitly special-case. For the rendering equation, this means avoiding the reflectance integral altogether and evaluating the integrand at the chosen location. So in effect you are defining the outgoing radiance at the shading point rather than defining some infinitesimally small light source. To also account for the contribution of other light sources, you'll need to sum that just-defined radiance with a proper reflectance integral. You'd do the same for plastic-like BRDFs that are the sum of a perfect specular lobe and a smooth lobe. This seems like a more mathematically correct approach to me. It's more cumbersome though, which is why people prefer the more convenient (and controversial) approach of just thinking of it as defining a point light source instead.

Honestly, I like this discussion and I'd like to have a bullet-proof definition of everything that doesn't involve ad-hoc constructs. I myself try to avoid the use of delta stuff whenever I can. Volumetric null scattering is one example where I don't like the introduction of a forward-scattering delta phase function – there's a cleaner way to do it.
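One way to see the "small but finite solid angle vs. special-case" point numerically: a diffuse surface lit by a small, uniformly emitting sphere, estimated by sampling the cone of directions it subtends, approaches the special-cased point-light value \((\rho/\pi)\,I\cos\theta/d^2\) as the sphere shrinks with its power held fixed. The Python sketch below is my own; the geometry, names, and the Lambertian-emitter conversion \(L_e = \Phi/(4\pi^2 R^2)\), \(I = \Phi/(4\pi)\) are illustrative assumptions, not something from the thread.

```python
# Small spherical emitter sampled over its subtended cone vs. the point-light limit.
import numpy as np

rng = np.random.default_rng(1)

def sample_cone(axis, cos_max, n):
    """Uniformly sample n unit directions in the cone of half-angle acos(cos_max)."""
    u, v = rng.random(n), rng.random(n)
    cos_t = 1.0 - u * (1.0 - cos_max)
    sin_t = np.sqrt(1.0 - cos_t ** 2)
    phi = 2.0 * np.pi * v
    a = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    t1 = np.cross(axis, a); t1 /= np.linalg.norm(t1)
    t2 = np.cross(axis, t1)
    return (np.outer(sin_t * np.cos(phi), t1)
            + np.outer(sin_t * np.sin(phi), t2)
            + np.outer(cos_t, axis))

p = np.zeros(3); n_surf = np.array([0.0, 0.0, 1.0])
c = np.array([0.0, 1.0, 2.0]); power = 4.0 * np.pi * 10.0   # chosen so that I = 10
rho = 0.8
d = np.linalg.norm(c - p); axis = (c - p) / d

for R in [0.5, 0.1, 0.01]:
    L_e = power / (4.0 * np.pi ** 2 * R ** 2)      # uniform Lambertian sphere radiance
    cos_max = np.sqrt(1.0 - (R / d) ** 2)          # cone subtended by the sphere
    omega = 2.0 * np.pi * (1.0 - cos_max)          # its solid angle
    dirs = sample_cone(axis, cos_max, 200_000)
    cos_i = np.clip(dirs @ n_surf, 0.0, None)
    L_o = np.mean((rho / np.pi) * L_e * cos_i) * omega   # MC estimate, pdf = 1/omega
    print(f"R = {R:4.2f}  L_o = {L_o:.4f}")

I = power / (4.0 * np.pi)
print("point-light special case:", (rho / np.pi) * I * (axis @ n_surf) / d ** 2)
```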
Suppose we have the derived category $\mathcal{D}(A)$ of some algebra $A$ over some commutative ring $R$. I would like an example of two objects $X,Y$ in $\mathcal{D}(A)$ such that for all $n \in \mathbb{Z}$, $H^n(X) \cong H^n(Y)$, but they are not isomorphic in $\mathcal{D}(A)$. Though I am not sure such an example exists, the converse statement seems even more untrue to me.

Consider $A$, the algebra of dual numbers, i.e. $A = k[\epsilon]/(\epsilon^2) = k \oplus k\epsilon$. Then the following complexes have isomorphic homology, but there is no quasi-isomorphism between them, since $k$ as an $A$-module is isomorphic to the submodule $k\epsilon$ of $A$.
$$ 0 \xrightarrow{} A \xrightarrow{\cdot \epsilon} A \rightarrow 0 \rightarrow k \xrightarrow{0} k \rightarrow0 $$
$$ 0\rightarrow k \xrightarrow{0} k\rightarrow 0\rightarrow A \xrightarrow{\cdot \epsilon} A \rightarrow0 $$
However I am not sure how to prove that there is no zigzag of quasi-isomorphisms connecting the two.
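As a quick sanity check of the "isomorphic cohomology in every degree" claim (a sketch of my own, checking dimensions only): realize $A = k[\epsilon]/(\epsilon^2)$ as $k^2$ with multiplication by $\epsilon$ given by a nilpotent matrix, and compute the cohomology dimensions of both complexes term by term.

```python
# Both complexes above have cohomology of dimension [1, 1, 0, 1, 1] over k.
import numpy as np

N = np.array([[0.0, 0.0],
              [1.0, 0.0]])         # multiplication by eps on the basis {1, eps}
Z1 = np.zeros((1, 1))              # the zero map k -> k

def rank(M):
    return 0 if M.size == 0 else np.linalg.matrix_rank(M)

def cohomology_dims(dims, maps):
    """dims[i] = dim of term i; maps[i] : term i -> term i+1 (None means zero map)."""
    out = []
    for i, d in enumerate(dims):
        r_in = rank(maps[i - 1]) if i > 0 and maps[i - 1] is not None else 0
        r_out = rank(maps[i]) if i < len(maps) and maps[i] is not None else 0
        out.append(d - r_out - r_in)    # dim ker - dim im
    return out

# complex 1:  A --eps--> A --> 0 --> k --0--> k
dims1, maps1 = [2, 2, 0, 1, 1], [N, None, None, Z1]
# complex 2:  k --0--> k --> 0 --> A --eps--> A
dims2, maps2 = [1, 1, 0, 2, 2], [Z1, None, None, N]

print(cohomology_dims(dims1, maps1))   # -> [1, 1, 0, 1, 1]
print(cohomology_dims(dims2, maps2))   # -> [1, 1, 0, 1, 1]
```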
This question is from the Abstract Algebra book by Pierre Grillet (2nd edition). Let $\phi:R\to S$ be a homomorphism of rings with identity and let $A$ be a unital left $S$-module. Make $A$ a unital left $R$-module.

The aim is to find a map $g:R \times A \to A$ that satisfies the module conditions. We have $\phi(1_R)=1_S$, and since $A$ is a unital left $S$-module, we have a function $f: S \times A \to A$ with $f(s,a)=sa$ for $s \in S$, $a\in A$; since the module is unital, we also have $f(1_S,a)=a$ for all $a\in A$.

Can I define the function $g$ as the composition $g = f\circ(\phi\times\mathrm{id}_A)$, i.e. $g(r,a)=f(\phi(r),a)$? If this is possible, we have $(1_R,a)\mapsto (1_S,a)\mapsto a$, which gives unitality. Distributivity also holds, since for $r_1,r_2\in R$, $a\in A$, the $R$-action gives $(r_1+r_2)a=\phi(r_1+r_2)a=[\phi(r_1)+\phi(r_2)]a=\phi(r_1)a+\phi(r_2)a$, which is equal to $r_1a+r_2a$.

But $r_1(r_2a)=(r_1r_2)a$ doesn't seem to hold, because $\phi$ is a ring homomorphism and we can't have $\phi(r_1r_2)=r_1\phi(r_2)$. I might be totally wrong in my work; if so, can someone help me find the function?
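Not an answer, just a numerical sanity check of the proposed action $r\cdot a := \phi(r)a$ on one concrete example of my own choosing: $\phi:\mathbb{Z}\to\mathbb{Z}[i]$ the inclusion, with $A=\mathbb{Z}[i]$ acting on itself. The sketch checks the unital module axioms, including $(r_1r_2)a = r_1(r_2a)$, on random inputs.

```python
# Sanity-check the restriction-of-scalars action r . a := phi(r) * a
# for phi : Z -> Z[i] (inclusion) and A = Z[i] as a module over itself.
import random

def phi(r):                 # ring homomorphism Z -> Z[i], here the inclusion
    return complex(r, 0)

def act(r, a):              # the proposed R-action g(r, a) = f(phi(r), a)
    return phi(r) * a

random.seed(0)
for _ in range(1000):
    r1, r2 = random.randint(-50, 50), random.randint(-50, 50)
    a = complex(random.randint(-50, 50), random.randint(-50, 50))
    b = complex(random.randint(-50, 50), random.randint(-50, 50))
    assert act(1, a) == a                                  # unital
    assert act(r1 + r2, a) == act(r1, a) + act(r2, a)      # (r1+r2)a = r1 a + r2 a
    assert act(r1, a + b) == act(r1, a) + act(r1, b)       # r(a+b) = ra + rb
    assert act(r1 * r2, a) == act(r1, act(r2, a))          # (r1 r2)a = r1(r2 a)

print("all module axioms hold on the sampled inputs")
```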
I'm reading this book about electrical properties of materials where the electron is introduced as a wave. Using the equation of a wave, they bring about the "envelope" of a wave. So here is how the derivation goes: Consider the equation of a wave traveling in one dimension:
$ u = ae^{-i(\omega t - kz)} $
where $\omega = 2 \pi f$ is the angular frequency, $k$ is the wave number, and $a$ is the amplitude. In addition to this information, the book refers to the phase velocity defined as ${\partial z}/{\partial t} = {\omega}/{k}$.

Instead of 1 single wave, let there be multiple waves that are superimposed such that
$$u = \sum a_n e^{-i(\omega_n t - k_n z)}.$$
Consider the continuous case:
$$ u(z) = \int_{-\infty}^\infty a(k)e^{-i(\omega t - kz)}dk. $$
Make two more assumptions. First assumption: the waves are zero everywhere except on a small interval $\Delta k$, where the amplitude is unity, such that $a(k)=1$. This brings the equation to
$$ u(z) = \int_{k_0 - \Delta k /2}^{k_0 + \Delta k /2} e^{-i(\omega t - kz)}dk .$$
Second, suppose that $t=0$, which yields
$$ u(z) = \int_{k_0 - \Delta k /2}^{k_0 + \Delta k /2} e^{i kz}dk .$$
When the integration is carried out, the result is
$$ u(z) = \Delta k e^{ik_0 z} \frac {\sin\left(\tfrac12(\Delta kz)\right)}{\tfrac12(\Delta k z)}.$$
My question is: how is the integration done here? I do notice that $u$ is a function of $z$ and that the integration is over the wave number $k$. Is there some change of variables going on here? By the way, I only possess knowledge of ODEs and only know how to do basic separation of variables for PDEs.
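As a quick check (a sketch of my own, not from the book) that the stated closed form really is the value of that integral, one can compare numerical quadrature of $\int_{k_0-\Delta k/2}^{k_0+\Delta k/2} e^{ikz}\,dk$ against $\Delta k\, e^{ik_0 z}\,\sin(\Delta k z/2)/(\Delta k z/2)$; the substitution $k = k_0 + \kappa$ is what reduces the integral to a symmetric one when doing it by hand.

```python
# Compare numerical quadrature of the wave-packet integral with the sinc closed form.
import numpy as np

k0, dk, z = 5.0, 0.4, 3.7

ks = np.linspace(k0 - dk / 2, k0 + dk / 2, 20001)
numeric = np.trapz(np.exp(1j * ks * z), ks)

x = 0.5 * dk * z
closed_form = dk * np.exp(1j * k0 * z) * np.sin(x) / x

print(numeric, closed_form)         # the two values should agree
print(abs(numeric - closed_form))   # difference limited only by quadrature error
```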