The point "I" marks the transition from a low-current glow discharge, where the gas characteristics determine the behavior, to a high current arc, where electrode thermionic or high-field emission and even evaporation dominate. Below is a "first-order" analysis of the transitions at points "D" and "I". At low levels of current ("B" - "D"), what's happening here is that an electron emitted from the cathode collides with and ionizes gas atoms on its way to the anode, producing additional electrons, which can ionize additional atoms, etc. For $n_0$ initial electrons emitted by the cathode, with the cathode-anode spacing $d$, the number of electrons reaching the anode $n_a$ is:$$ n_a = n_0 e^{\alpha d} $$ Here $\alpha$ is the first Townsend ionization coefficient; $1/\alpha$ is the average distance between ionizing collisions. On average, each emitted electron produces $ \left( e^{\alpha d} - 1 \right)$ new electrons. It turns out that: $$ \frac{\alpha}{p} = f \left( \frac{E}{p} \right) $$ where $p$ is the gas pressure, $E$ is the applied electric field, and $f$ denotes a functional relationship. (So at constant pressure, as the applied voltage increases, $\alpha$ does also.) Of course, each ionized gas atom results in a positively charged ion as well as an electron; these ions drift back to the cathode. (Total current $I=I_0e^{\alpha d}$ is the sum of the electron and ion currents across any plane between the electrodes.) What happens when these ions reach the cathode is crucial to the dark current / glow discharge transition. When ions collide with the cathode, some fraction $\gamma$ will "kick out" a new electron from the cathode; $\gamma$ is the second Townsend ionization coefficient. With this effect included, the total number of electrons emitted from the cathode $n_{0t}$ is: $$ n_{0t} = n_0 + \gamma n_{0t} \left(e^{\alpha d} -1 \right) $$ Note the feedback effect now present: each electron emitted by the cathode can produce additional cathode electrons via the two effects. Solving: $$ n_{0t} = n_0 \frac{e^{\alpha d}}{ 1 - \gamma \left(e^{\alpha d} -1 \right)} $$ $\gamma$ is also a (different) function of $E/p$: $$\gamma=g (E/p)$$ From this formula one can identify the Townsend criterion for a spark breakdown, where the total number of electrons (and the current) go to infinity: $$ \gamma \left( e^{\alpha d} -1 \right) = 1 $$ What happens next depends on the source of electricity. If we were gradually increasing the voltage of a "stiff" source (one with low series resistance) connected directly across the tube, the "infinite" current when the Townsend criterion is met, would run away directly to the arc region beyond "I", bypassing the glow discharge region entirely. Suppose instead that we have a fixed voltage source connected to the tube via a large variable series resistor. (See this nice analysis, for example.) The operating point is varied by adjusting the resistor. Then, after breakdown at "D", the tube can operate stably within the glow discharge region, with the tube operating point automatically adjusting to the source (voltage supply + variable resistor). [The operating point is stable if the total resistance (variable resistor + incremental resistance of tube) is positive.] The glow discharge current can be increased by reducing the external variable resistor. However, glow discharge current cannot be increased indefinitely. As the tube current and voltage increase, the bombardment of that current heats the electrodes, leading to increased thermionic emission from the cathode. 
With sufficient heating (i.e. at point "I"), cathode thermionic emission and material evaporation create a lower-voltage "alternative" to the glow discharge, and a full-blown arc results. In this mode the original gas in the tube is, in a sense, redundant; the arc can create its own plasma from vaporized electrode material. [Note: there appears to still be uncertainty about the relative importance of thermionic and field emission in arc behavior.]
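For concreteness, here is a minimal Python sketch of the feedback formula above; the values of $\alpha$, $\gamma$ and $d$ are invented for illustration (in reality they depend on the gas, pressure and field):

import math

def total_cathode_electrons(alpha, gamma, d, n0=1.0):
    # n_0t = n0 / (1 - gamma * (e^{alpha d} - 1)); the denominator reaching
    # zero is exactly the Townsend criterion gamma * (e^{alpha d} - 1) = 1.
    feedback = gamma * math.expm1(alpha * d)
    if feedback >= 1.0:
        raise ValueError("Townsend criterion met: discharge is self-sustaining")
    return n0 / (1.0 - feedback)

# Illustrative numbers only: alpha = 4 ionizations/cm, d = 1 cm, gamma = 0.01.
n0t = total_cathode_electrons(alpha=4.0, gamma=0.01, d=1.0)
print(n0t)                  # ~2.2 electrons leave the cathode per seed electron
print(n0t * math.exp(4.0))  # ~118 electrons reach the anode per seed electron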
Empirical evidence: First of all, to fix our ideas, I assume that you are talking about a tangential wind profile that looks something like this: [plot omitted] The plot shows the (azimuthally averaged, i.e. averaged along circles) tangential wind profile as a function of radius at a fixed height. Looking at this picture one is naturally reminded of a Rankine vortex, which consists of a forced vortex core surrounded by a free vortex. You said: [...] a forced vortex has a velocity profile u∝r (r is radial distance from centre of vortex), concluding at some outer boundary r=R to avoid fluid particles travelling at infinite speed. At this outer boundary it requires an external torque to be constantly supplied to keep going. The misconception here is that in a tropical cyclone, there is no solid (nor an effectively solid) boundary at the point where the "forced" vortex transitions into the "free" part. There is no wall that imparts momentum to the fluid by wall friction as in the cylinder. So why does the air spin in a cyclone? Because it is a low-pressure system. To good approximation (assuming we can neglect friction), there is a force balance between the pressure gradient force, the Coriolis force and the centrifugal force: $$ -\frac{1}{\rho}\frac{\partial p}{\partial r}+fv+\frac{v^2}{r} = 0, \tag 1$$ where I denote by $v$ the tangential velocity and $f$ is the Coriolis parameter. It may be interesting to you that the approximate balance (1) follows from a scale analysis of the radial Navier-Stokes equation in cylindrical coordinates. The balance (1) is often called "gradient wind balance". From (1) it is clear that in a low-pressure system (where $\frac{\partial p}{\partial r}>0$, since pressure increases outward from the central low), there will be cyclonic motion ($v>0$) to sustain the balance, hence the name 'cyclone' for low-pressure systems. So, plainly speaking, the tropical cyclone should not (or cannot) be compared to water in a rotating cylindrical tank which is maintained in solid-body rotation by an external torque on the cylinder. Rather, the air spins because of the pressure deficit between environment and cyclone eye. Hence the question of what provides the torque on the "outer boundary" to maintain solid-body rotation is misleading, because no such boundary exists in a cyclone. Why the tropical cyclone wind profile is more complicated than a Rankine vortex: The Rankine vortex model (just like the free and forced vortex models individually) describes an axisymmetric vortex in a two-dimensional fluid (all motion occurs in the horizontal plane and motion is independent of $z$). A real tropical cyclone is not exactly symmetric and, more importantly, it is very much a three-dimensional flow structure. Before I go on, let me just say a few words about what that structure looks like: [schematic of the cyclone's vertical structure omitted] Due to the low pressure in the centre of the cyclone, the air is set in circling motion according to (1). However, there is a layer close to the surface called the 'planetary boundary layer' where the balance (1) is broken by frictional effects. Here, the flow is no longer along constant radius but has a component pointing from high to low pressure (i.e. a radial component). The air masses converge in the centre of the cyclone and hence are forced (by mass conservation) to move upward in the so-called eyewall. As the air rises, water vapour condenses, releasing latent heat and giving the air parcels further buoyancy so they can keep rising up to the tropopause.
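As an aside, balance (1) is easy to play with numerically: solving it as a quadratic in $v$ and keeping the cyclonic root gives a wind estimate from a prescribed pressure gradient. All numbers below are illustrative assumptions, not data:

import numpy as np

def gradient_wind(r, dpdr, rho=1.2, f=5e-5):
    # Solve -(1/rho) dp/dr + f v + v^2/r = 0 for v, keeping the cyclonic
    # (v > 0) root; in a low, pressure rises outward, so dp/dr > 0.
    return 0.5 * (-f * r + np.sqrt((f * r) ** 2 + 4.0 * r * dpdr / rho))

# 2 hPa per 10 km at r = 50 km from the centre (illustrative values):
print(gradient_wind(r=50e3, dpdr=0.02))  # ~28 m/s, a plausible tangential wind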
The three-dimensional structure just described, together with thermodynamic effects, makes the cyclone a more complicated problem than the theoretical forced or free vortices. A first (attempted) explanation: Nonetheless, we can explain why the velocity profile looks like it does. To begin with, we remember that (even approximately) conserved quantities are useful when trying to understand physics problems, so let's see if we can find one here. The azimuthal component of the NS equations in cylindrical polars reads: $$\frac{1}{r}\frac{D(vr)}{Dt}+fu = -\frac{1}{\rho r}\frac{\partial p}{\partial \theta} + F, \tag{2}$$ where $D/Dt$ is the material derivative, $(u,v)$ are the radial and azimuthal wind respectively, $\theta$ is the azimuthal angle and $F$ is friction. Defining the absolute angular momentum $M=rv+\frac{1}{2}fr^2$ and treating $f$ as a constant, it follows from (2) that $$\frac{DM}{Dt} = -\frac{1}{\rho}\frac{\partial p}{\partial \theta}+ Fr. \tag{3}$$ Now if frictional effects are negligible and the flow is axisymmetric, the RHS can be set to 0 and absolute angular momentum is materially conserved. Keeping this in mind, consider an air parcel in the ambient atmosphere which is "sucked" into the cyclone. As the air parcel travels inward, approximately conserving its absolute angular momentum, its tangential velocity $v$ has to increase. When $M$ is dominated by the $rv$ term (which is likely because $f$ is very small), $v$ needs to increase approximately as $v \propto \frac{1}{r}$, i.e. like a free vortex (isn't this pretty ^^). However, the free vortex cannot extend all the way to $r=0$, since the tangential velocity must go to zero at the origin due to frictional effects. Hence we expect that there will be a maximum at some finite radius. The only question remaining is why the fluid inside the eye is in solid-body rotation, or equivalently: why is the vertical component of vorticity homogeneous inside the cyclone eye? Assuming axisymmetry, this question is equivalent to your original question because $v=\omega r \Rightarrow \zeta = \frac{1}{r}\frac{\partial(rv)}{\partial r} =2\omega = \textrm{const.}$ for $\omega=\textrm{const.}$ I will offer two explanations: the first one is more hand-wavy (and possibly not entirely correct), the other more mathematical, but maybe less transparent for you, unless you have time to go through the maths. Since the velocity needs to go to zero between the radius of maximum wind and $r=0$, there is a significant shear (meaning a large velocity gradient). Such a shear flow in the eye may fuel turbulence; see the "shear production" term in the turbulence kinetic energy equation. Turbulent diffusion in turn is a mechanism by which vorticity might be homogenised inside the eye. This argument is hand-wavy because, although the shear production term in the TKE equation is (empirically) almost always positive in the atmosphere, it is impossible to tell whether this will be the case here, since we don't know the sign of the eddy correlation term. Also, there are many other terms in the TKE equation that could potentially cancel out the effects of shear production. More formally, we can do the following: in addition to the gradient wind balance (1), which holds in the horizontal, there is (again to good approximation) a hydrostatic balance in the vertical between gravity and the vertical pressure gradient: $$ -\frac{1}{\rho}\frac{\partial p}{\partial z}-g=0. \tag 4$$
Based on these two balances, a form of the 1st law of thermodynamics, and two conservation laws, Emanuel (1986) (it's a very famous paper in meteorology) derived (on page 3 and the top of p. 4) an equation relating $M$ and the distribution of specific entropy $s$ in the cyclone, his equation (13). When doing my project at Imperial, I found out that the specific entropy at fixed height very closely follows a Gaussian distribution: $$s(r) = \Delta s\, e^{-r^2/\left(2\lambda^2\right)}+s_\textrm{env}, \tag 5$$ with amplitude $\Delta s$, offset $s_\textrm{env}$ and width $\lambda$. Why exactly it has this functional form has not yet been explained theoretically, but the fact that it has a maximum at/near the center makes sense, because most of the diabatic heating due to condensation occurs in the eyewall and entropy is closely related to heat (given that temperature doesn't change much). (Aside: in fact, the entropy needs to be monotonically decreasing with radius, otherwise (13) from Emanuel's paper implies that the vortex is "inertially unstable".) If you use this Gaussian distribution to solve (13) from Emanuel's paper, you can derive formulae for the wind profile, angular momentum and pressure distributions (cf. the following paper, which I co-authored :)). If you want you can check the maths, it's simple algebra. In particular, the velocity distribution at fixed height that one obtains is: $$v(r) = \sqrt{2\Delta p\, \alpha}\sqrt{\frac{2\lambda^2}{r^2}\left(1-e^{-r^2/\left(2\lambda^2\right)}\right)-e^{-r^2/\left(2\lambda^2\right)}}-\frac{1}{2}fr,\tag 6$$ where $\Delta p$ is the pressure deficit between environment and cyclone center and $\alpha$ is the specific volume. In the paper, we show that this profile is indeed a good fit to wind profiles simulated by solving the three-dimensional Navier-Stokes equations on a supercomputer. Taylor expanding (6) for small radii, and using that the $f$ term is much smaller than $\frac{\sqrt{\Delta p \alpha}}{\lambda}$ for typical parameter values, gives $$v(r) = \left(\frac{\sqrt{\Delta p \alpha}}{\sqrt{2}\lambda}-\frac{1}{2}f\right) r + O(r^2) \approx \frac{\sqrt{\Delta p \alpha}}{\sqrt{2}\lambda}r.\tag 7$$ This is indeed a linear profile whose slope is governed by thermodynamic factors: $dW \equiv \Delta p\, \alpha$ is the work done by an air parcel as it expands during the inflow from ambient pressure to the lower central pressure. Within the scope of the model it may be shown that the radius of maximum wind depends (to good approximation) purely on $\lambda$, while the value of the maximum wind is governed by $dW$. Hence it is consistent that the slope of $v$ increases with increasing $dW$ for fixed $\lambda$. The physical interpretation of the result, however, is not clear to me at this point. Hope this helps; if you have any further questions, let me know and I'll try to improve my answer. If anyone else has any ideas about the constant vorticity in the cyclone eye I'd be very interested. EDIT 1: Breaking Rossby waves outside the eyewall. I forgot to mention in the foregoing answer another interesting dynamical aspect that contributes to the observed "sharpness" of the transition between the regions of constant vorticity (in the eye) and varying vorticity outside. The mean swirling flow in a tropical cyclone provides a background vorticity profile on which so-called "vortex Rossby waves" (VRWs) can propagate. Such VRWs can reach large amplitudes and "break" (the term has a specific definition in this context).
This breaking of VRWs is very efficient at mixing vorticity (or more precisely "potential vorticity" (PV)). This leads to a steepening of the vorticity jump at the eyewall and a homogenisation just outside of it (in the so-called "surf zone"), because a breaking VRW mixes high-PV air from the eye interior with lower-PV air outside. EDIT 2: Concentric eyewalls. It may be of interest to you that the velocity profile at the beginning of this post is by no means always present in TCs. In fact there is a phenomenon called "concentric eyewalls" or "secondary eyewall", where a second maximum occurs in the tangential wind profile. This affects the storm structure immensely, both in its horizontal extent and in its intensity. Therefore, it is an important issue to reproduce them correctly in supercomputer simulations (e.g. to make better forecasts). This is just to illustrate that there are many interesting phenomena associated with TCs :). EDIT 3: Barotropic instability and potential vorticity mixing inside the eye. A dynamical explanation for why the vorticity is constant: I have learned just recently that, from a dynamical point of view, there is an answer to why vorticity in the TC eye is constant. The explanation goes something like this: consider some nonlinear velocity profile close to the origin, say $V\sim x^\alpha$, $\alpha\neq 1$. If $\alpha<1$ then the vorticity is infinite at the origin, so let's exclude this case. If $\alpha >1$, then the profile will be "U-shaped", i.e. the vorticity will be smallest in the centre of the eye, then large in a region extending up to the radius of maximum wind, and low outside the eye. In other words, there is a ring of high vorticity, rather than a disk. Now the critical realisation, backed up by numerical experiments, is that such a ring of vorticity is prone to a process called barotropic instability, which gives rise to disturbances (VRWs) growing in amplitude by taking energy out of the mean shear. These growing disturbances reach finite amplitudes and break inside the eye, mixing and thus homogenising the (potential) vorticity. See Fig. 3 in this paper by Schubert for an illustration of what I mean. This paper is also cool because it gives a derivation of the equilibrium vortex configuration via a maximum entropy method. EDIT 4: I found a paper by Glenn Shutts (now at the Met Office UK, written during his time at Imperial College) which explores the bathtub-hurricane analogy further. It seems to be not well known, but it is a very interesting read.
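As a closing footnote to the main answer: the profile (6) above is easy to evaluate numerically. Here is a short sketch; the parameter values ($\Delta p$, $\alpha$, $\lambda$, $f$) are invented for illustration and not fitted to any storm:

import numpy as np

dp, alpha, lam, f = 5000.0, 0.85, 30e3, 5e-5  # Pa, m^3/kg, m, 1/s (illustrative)

def v_profile(r):
    # Equation (6): linear near the centre, maximum near r ~ 2*lam,
    # slow 1/r-like decay outside, minus the small Coriolis term f r / 2.
    g = np.exp(-r**2 / (2 * lam**2))
    core = (2 * lam**2 / r**2) * (1.0 - g) - g
    return np.sqrt(2 * dp * alpha) * np.sqrt(core) - 0.5 * f * r

for r in (5e3, 30e3, 60e3, 120e3, 300e3):
    print(f"r = {r/1e3:5.0f} km   v = {v_profile(r):5.1f} m/s")
# rises ~linearly to ~49 m/s near r = 60 km, then decays slowly outside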
Why is the equation $\frac{D_n}{\mu_n}=\frac{D_p}{\mu_p}= V_t$ called the Einstein equation, where $D_n$ is the diffusion coefficient of electrons, $D_p$ the diffusion coefficient of holes, $\mu_n, \mu_p$ the mobilities of electrons and holes respectively, and $V_t$ the thermal voltage? In his 1905 paper on Brownian motion, Einstein derived the equation $$D=\frac{RT}{N}\frac{1}{6\pi kP}\tag{7}$$ where $T$ is the temperature, $k$ is the viscosity, $P$ is the radius of a spherical molecule (the Stokes radius), and $R$ and $N$ are constants (the gas constant and Avogadro's number). A more familiar form, used by Wikipedia, is $$D=\frac{k_BT}{6\pi\eta r}$$ where $k_B$ is Boltzmann's constant, $\eta$ is the viscosity, and $r$ is the radius. The variant you have, $$\frac{D}{\mu}=V_t,$$ arises from defining the thermal voltage as $$V_t\equiv\frac{k_BT}{q}$$ and writing the electrical mobility $\mu$ in terms of $q$ and $6\pi\eta r$. Sutherland and Smoluchowski also did similar work to arrive at variations of the equation, so they, too, deserve some credit. That said, neither Smoluchowski nor Einstein (I don't have access to the full text of Sutherland's work) used the thermal voltage to compactify the diffusion equation. In short, the relation carries Einstein's name because it resembles the equations he wrote down when studying diffusion in the context of Brownian motion. See, e.g., https://en.wikipedia.org/wiki/Einstein_relation_(kinetic_theory) for more.
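For a quick numerical feel (the numbers below - room temperature, a 1 µm sphere in water - are illustrative only):

import math

k_B = 1.380649e-23     # J/K
q   = 1.602176634e-19  # C

T = 300.0
V_t = k_B * T / q                      # thermal voltage k_B T / q
print(f"V_t = {1e3 * V_t:.2f} mV")     # ~25.85 mV at room temperature

eta, r = 1.0e-3, 0.5e-6                # water viscosity (Pa s), radius (m)
D = k_B * T / (6 * math.pi * eta * r)  # Stokes-Einstein diffusion coefficient
print(f"D = {D:.2e} m^2/s")            # ~4.4e-13 m^2/s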
In his celebrated paper "Conjugate Coding" (written around 1970), Stephen Wiesner proposed a scheme for quantum money that is unconditionally impossible to counterfeit, assuming that the issuing bank has access to a giant table of random numbers, and that banknotes can be brought back to the bank for verification. In Wiesner's scheme, each banknote consists of a classical "serial number" $s$, together with a quantum money state $|\psi_s\rangle$ consisting of $n$ unentangled qubits, each one either $$|0\rangle,\ |1\rangle,\ |+\rangle=(|0\rangle+|1\rangle)/\sqrt{2},\ \text{or}\ |-\rangle=(|0\rangle-|1\rangle)/\sqrt{2}.$$ The bank remembers a classical description of $|\psi_s\rangle$ for every $s$. And therefore, when $|\psi_s\rangle$ is brought back to the bank for verification, the bank can measure each qubit of $|\psi_s\rangle$ in the correct basis (either $\{|0\rangle,|1\rangle\}$ or $\{|+\rangle,|-\rangle\}$), and check that it gets the correct outcomes. On the other hand, because of the uncertainty relation (or alternatively, the No-Cloning Theorem), it's "intuitively obvious" that, if a counterfeiter who doesn't know the correct bases tries to copy $|\psi_s\rangle$, then the probability that both of the counterfeiter's output states pass the bank's verification test can be at most $c^n$, for some constant $c<1$. Furthermore, this should be true regardless of what strategy the counterfeiter uses, consistent with quantum mechanics (e.g., even if the counterfeiter uses fancy entangled measurements on $|\psi_s\rangle$). However, while writing a paper about other quantum money schemes, my coauthor and I realized that we'd never seen a rigorous proof of the above claim anywhere, or an explicit upper bound on $c$: neither in Wiesner's original paper nor in any later one. So, has such a proof (with an upper bound on $c$) been published? If not, then can one derive such a proof in a more-or-less straightforward way from (say) approximate versions of the No-Cloning Theorem, or results about the security of the BB84 quantum key distribution scheme? Update: In light of the discussion with Joe Fitzsimons below, I should clarify that I'm looking for more than just a reduction from the security of BB84. Rather, I'm looking for an explicit upper bound on the probability of successful counterfeiting (i.e., on $c$)---and ideally, also some understanding of what the optimal counterfeiting strategy looks like. I.e., does the optimal strategy simply measure each qubit of $|\psi_s\rangle$ independently, say in the basis $$\{ \cos(\pi/8)|0\rangle+\sin(\pi/8)|1\rangle, \sin(\pi/8)|0\rangle-\cos(\pi/8)|1\rangle \}?$$ Or is there an entangled counterfeiting strategy that does better? Update 2: Right now, the best counterfeiting strategies that I know are (a) the strategy above, and (b) the strategy that simply measures each qubit in the $\{|0\rangle,|1\rangle\}$ basis and "hopes for the best." Interestingly, both of these strategies turn out to achieve a success probability of $(5/8)^n$. So, my conjecture of the moment is that $(5/8)^n$ might be the right answer. In any case, the fact that $5/8$ is a lower bound on $c$ rules out any security argument for Wiesner's scheme that's "too" simple (for example, any argument to the effect that there's nothing nontrivial that a counterfeiter can do, and therefore the right answer is $c=1/2$). Update 3: Nope, the right answer is $(3/4)^n$! See the discussion thread below Abel Molina's answer.
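For what it's worth, the claim in Update 2 that both strategies achieve $(5/8)^n$ is easy to verify per qubit numerically; the helper below is a throwaway sketch, not anything from the literature:

import numpy as np

ZERO, ONE = np.array([1.0, 0.0]), np.array([0.0, 1.0])
PLUS, MINUS = (ZERO + ONE) / np.sqrt(2), (ZERO - ONE) / np.sqrt(2)
STATES = [ZERO, ONE, PLUS, MINUS]

def per_qubit_success(basis):
    # Measure the qubit in `basis`, emit two copies of the observed basis
    # state. Both copies pass the bank's test with probability
    # sum_b |<b|psi>|^2 * |<psi|b>|^4 = sum_b |<b|psi>|^6, averaged over
    # the four equally likely Wiesner states.
    return np.mean([sum(abs(b @ psi) ** 6 for b in basis) for psi in STATES])

t = np.pi / 8
rotated = [np.array([np.cos(t), np.sin(t)]), np.array([np.sin(t), -np.cos(t)])]
print(per_qubit_success([ZERO, ONE]))  # 0.625 = 5/8
print(per_qubit_success(rotated))      # 0.625 = 5/8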
I have answered my question at SE. Of course, any comments and answers are still welcome. I think I had a misunderstanding of KK theory. In KK theory, we are living in, say, a 5-dimensional spacetime with one dimension compactified. What is different from the brane-world theory is that, in brane-world theory, we are living on a 4-dimensional brane which is embedded in the 5-dimensional spacetime. So in the post, I cannot assume that the extended world is at $y=y_0$. Actually, in KK theory, the particles can in principle be everywhere. There is no 4-dimensional subset which can be identified with our observed 4-dimensional spacetime. But since we observe the world by exchanging momenta and energy with objects, and the compactified dimension is very small, all the low-energy particles are frozen in the extra small dimension, so that there is no exchange of momentum in the extra dimension. In that case, the particles cannot feel the existence of the small extra dimension. Note that since the extra dimension is compactified, the allowed momenta of a particle moving in the extra dimension follow from the periodicity condition $$e^{ipL}=1 \Rightarrow p=\frac{2\pi n}{L}.$$ So if $L$ is very small, the first excitation energy $\frac{2\pi}{L}$ needed to set the particle moving in the extra dimension is very large. Equivalently, the low-energy particles are frozen in that direction and cannot feel the extra dimension. In a word, momentum excitations are gapless along extended dimensions but gapped along compactified ones. This changes in string theory. Besides carrying momentum along the extra small dimension, the string can wind around the compactified dimension, and the winding number becomes a quantum number after quantization. When the extra dimension becomes smaller and smaller, the excitation spectrum of the winding modes becomes continuous. Gapless excitations emerge again. In that sense, the extra dimension is effectively becoming big again.
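A quick order-of-magnitude check of the "frozen" claim (the two sizes for $L$ are arbitrary examples; the constants are standard CODATA values):

import math

HBAR = 1.054571817e-34  # J s
C    = 2.99792458e8     # m/s
EV   = 1.602176634e-19  # J per eV

def kk_gap_eV(L):
    # First KK excitation, E ~ p c = (2 pi / L) * hbar * c (massless estimate).
    return 2 * math.pi * HBAR * C / (L * EV)

print(f"{kk_gap_eV(1e-18):.2e} eV")  # ~1.2e12 eV: attometre-sized dimension
print(f"{kk_gap_eV(1e-35):.2e} eV")  # ~1.2e29 eV: near the Planck length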
Direct Standardization

Introduction

Consider a set of observations \((x_i,y_i)\) drawn non-uniformly from an unknown distribution. We know the expected value of the columns of \(X\), denoted by \(b \in {\mathbf R}^n\), and want to estimate the true distribution of \(y\). This situation may arise, for instance, if we wish to analyze the health of a population based on a sample skewed toward young males, knowing the average population-level sex, age, etc. The empirical distribution that places equal probability \(1/m\) on each \(y_i\) is not a good estimate. So, we must determine the weights \(w \in {\mathbf R}^m\) of a weighted empirical distribution, \(y = y_i\) with probability \(w_i\), which rectifies the skewness of the sample (Fleiss, Levin, and Paik 2003, 19.5). We can pose this problem as \[ \begin{array}{ll} \underset{w}{\mbox{maximize}} & \sum_{i=1}^m -w_i\log w_i \\ \mbox{subject to} & w \geq 0, \quad \sum_{i=1}^m w_i = 1,\quad X^Tw = b. \end{array} \] Our objective is the total entropy, which is concave on \({\mathbf R}_+^m\), and our constraints ensure \(w\) is a probability distribution that implies our known expectations on \(X\). To illustrate this method, we generate 1000 data points \(x_{i,1} \sim \mbox{Bernoulli}(0.5)\), \(x_{i,2} \sim \mbox{Uniform}(10,60)\), and \(y_i \sim N(5x_{i,1} + 0.1x_{i,2},1)\). Then we construct a skewed sample of \(m = 100\) points that overrepresents small values of \(y_i\), thus biasing its distribution downwards. This can be seen in the plots below, where the sample probability distribution peaks around \(y = 2.0\), and its cumulative distribution is shifted left from the population's curve. Using direct standardization, we estimate \(w_i\) and reweight our sample; the new empirical distribution cleaves much closer to the true distribution shown in red. In the CVXR code below, we import data from the package and solve for \(w\).

## Import problem data
data(dspop)    # Population
data(dssamp)   # Skewed sample
ypop <- dspop[,1]
Xpop <- dspop[,-1]
y <- dssamp[,1]
X <- dssamp[,-1]
m <- nrow(X)

## Given population mean of features
b <- as.matrix(apply(Xpop, 2, mean))

## Construct the direct standardization problem
w <- Variable(m)
objective <- sum(entr(w))
constraints <- list(w >= 0, sum(w) == 1, t(X) %*% w == b)
prob <- Problem(Maximize(objective), constraints)

## Solve for the distribution weights
result <- solve(prob)
weights <- result$getValue(w)
result$value

## [1] 4.223305

We can plot the density functions using linear approximations for the range of \(y\).

## Plot probability density functions
dens1 <- density(ypop)
dens2 <- density(y)
dens3 <- density(y, weights = weights)
yrange <- seq(-3, 15, 0.01)
d <- data.frame(x = yrange,
                True = approx(x = dens1$x, y = dens1$y, xout = yrange)$y,
                Sample = approx(x = dens2$x, y = dens2$y, xout = yrange)$y,
                Weighted = approx(x = dens3$x, y = dens3$y, xout = yrange)$y)
plot.data <- gather(data = d, key = "Type", value = "Estimate",
                    True, Sample, Weighted, factor_key = TRUE)
ggplot(plot.data) +
  geom_line(mapping = aes(x = x, y = Estimate, color = Type)) +
  theme(legend.position = "top")

## Warning: Removed 300 rows containing missing values (geom_path).

Followed by the cumulative distribution function.
## Return the cumulative distribution function
get_cdf <- function(data, probs, color = 'k') {
    if(missing(probs))
        probs <- rep(1.0/length(data), length(data))
    distro <- cbind(data, probs)
    dsort <- distro[order(distro[,1]),]
    ecdf <- base::cumsum(dsort[,2])
    cbind(dsort[,1], ecdf)
}

## Plot cumulative distribution functions
d1 <- data.frame("True", get_cdf(ypop))
d2 <- data.frame("Sample", get_cdf(y))
d3 <- data.frame("Weighted", get_cdf(y, weights))
names(d1) <- names(d2) <- names(d3) <- c("Type", "x", "Estimate")
plot.data <- rbind(d1, d2, d3)
ggplot(plot.data) +
  geom_line(mapping = aes(x = x, y = Estimate, color = Type)) +
  theme(legend.position = "top")

Session Info

sessionInfo()
## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices datasets utils methods base
##
## other attached packages:
## [1] tidyr_0.8.3 ggplot2_3.1.1 CVXR_0.99-6
##
## loaded via a namespace (and not attached):
##  [1] gmp_0.5-13.5      Rcpp_1.0.1        highr_0.8
##  [4] compiler_3.6.0    pillar_1.4.1      plyr_1.8.4
##  [7] R.methodsS3_1.7.1 R.utils_2.8.0     tools_3.6.0
## [10] digest_0.6.19     bit_1.1-14        evaluate_0.14
## [13] tibble_2.1.2      gtable_0.3.0      lattice_0.20-38
## [16] pkgconfig_2.0.2   rlang_0.3.4       Matrix_1.2-17
## [19] yaml_2.2.0        blogdown_0.12.1   xfun_0.7
## [22] withr_2.1.2       dplyr_0.8.1       Rmpfr_0.7-2
## [25] ECOSolveR_0.5.2   stringr_1.4.0     knitr_1.23
## [28] tidyselect_0.2.5  bit64_0.9-7       grid_3.6.0
## [31] glue_1.3.1        R6_2.4.0          rmarkdown_1.13
## [34] bookdown_0.11     purrr_0.3.2       magrittr_1.5
## [37] scales_1.0.0      htmltools_0.3.6   scs_1.2-3
## [40] assertthat_0.2.1  colorspace_1.4-1  labeling_0.3
## [43] stringi_1.4.3     lazyeval_0.2.2    munsell_0.5.0
## [46] crayon_1.3.4      R.oo_1.22.0

References

Fleiss, J. L., B. Levin, and M. C. Paik. 2003. Statistical Methods for Rates and Proportions. Wiley-Interscience.
Algorithms for systems of nonlinear equations that use trust-region and line-search strategies require the condition \[ \| f(x_{k+1}) \| \leq \| f(x_k) \| \] to hold for each iteration. Consequently, they can become trapped in a region in which the function \(\|f\|\) has a local minimizer \(z^*\) for which \(f(z^*) \neq 0\), and they can therefore fail to converge to a solution \(x^*\) of the original nonlinear system \[f(x) = 0.\] To be more certain of convergence to \(x^*\) from arbitrary starting points, we need to turn to a more complex class of algorithms known as homotopy, or continuation, methods. These methods are usually slower than line-search and trust-region methods, but they are useful on difficult problems for which a good starting point is hard to find. Continuation methods define an easy problem for which the solution is known, and a path between the easy problem and the original (hard) problem. The solution to the easy problem is gradually transformed to the solution of the hard problem by tracing this path. The path may be defined by introducing an additional scalar parameter \(\lambda\) into the problem and defining a function \(h(x, \lambda)\): \[h(x, \lambda) = \lambda \, f(x) + (1 - \lambda) \, (x - x_0), \quad (1)\] which is solved for values of \(\lambda\) between 0 and 1. When \(\lambda = 0\), the solution to (1) is clearly \(x=x_0\). When \(\lambda=1\), \(h(x, \lambda) = f(x)\), so the solution of (1) coincides with the solution of the original problem \(f(x)=0\). The path that takes \((x, \lambda)\) from \((x_0, 0)\) to \((x^*, 1)\) may have turning points, where it is not possible to follow the path smoothly by insisting on an increasing \(\lambda\) at every step. Instead, practical methods allow \(\lambda\) to decrease where necessary. Some implementations of the method use the more robust approach of expressing both \(x\) and \(\lambda\) in terms of a third parameter, \(s\), which represents arc length along the solution path. Differentiating \(h(x(s), \lambda(s)) = 0\) with respect to \(s\) yields the ordinary differential equation \[ \partial_x h(x, \lambda)\, x^\prime (s) + \partial_\lambda h(x, \lambda)\, \lambda^\prime (s) = 0.\] Sophisticated ODE solvers may then be applied to this problem with initial condition \[ (x(0), \lambda(0)) = (x_0, 0) \] and the side condition \[\|x^\prime (s)\|^2_2 + \lambda^\prime (s)^2 = 1.\] Another practical method obtains a new iterate \((x_{k+1}, \lambda_{k+1})\) from the current iterate \((x_k, \lambda_k)\) by solving an augmented system of the form \[\left [ \begin{array}{c} h(x, \lambda) \\ w^T_k x + \mu_k \lambda - t_k \end{array} \right] = 0.\] The additional (linear) equation is usually chosen so that \((w_k, \mu_k)\) is one of the unit vectors in \(R^{n+1}\) and \(t_k\) is a target value for the corresponding component of \((x_{k+1}, \lambda_{k+1})\), whose value has been fixed by a predictor step.
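The basic idea can be prototyped in a few lines. The sketch below performs naive natural-parameter continuation with a Newton corrector at each step; real implementations parameterize by arc length precisely because this naive version cannot follow a path through turning points:

import numpy as np

def continuation_solve(f, jac, x0, steps=50, newton_iters=20, tol=1e-10):
    # Trace solutions of h(x, lam) = lam f(x) + (1 - lam)(x - x0) = 0
    # as lam goes from 0 to 1, correcting with Newton's method at each lam.
    x = np.asarray(x0, dtype=float)
    n = len(x)
    for lam in np.linspace(0.0, 1.0, steps + 1)[1:]:
        for _ in range(newton_iters):
            h = lam * f(x) + (1.0 - lam) * (x - x0)
            if np.linalg.norm(h) < tol:
                break
            J = lam * jac(x) + (1.0 - lam) * np.eye(n)
            x = x - np.linalg.solve(J, h)
    return x

# Example: f(x) = x^2 - 4 (scalar, wrapped as a 1-vector); path leads to x = 2.
f = lambda x: np.array([x[0]**2 - 4.0])
jac = lambda x: np.array([[2.0 * x[0]]])
print(continuation_solve(f, jac, np.array([0.0])))  # -> [2.]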
There are many applications in which the goal is to find values for the variables that satisfy a set of given constraints, without the need to optimize a particular objective function. When there are \(n\) variables and \(n\) equality constraints, the problem is one of solving a system of nonlinear equations. Mathematically, the problem is \[f(x) = 0,\] where \(f : \cal{R}^n \rightarrow \cal{R}^n\) is a vector function, \[f(x) = \left[ \begin{array}{c} f_1(x) \\ f_2(x) \\ \vdots \\ f_n(x) \end{array} \right], \] where each \(f_i : \cal{R}^n \rightarrow \cal{R}\), \(i=1, 2, \cdots, n\), is smooth. A vector \(x^*\) satisfying \(f(x)=0\) is called a solution or a root of the nonlinear equations. In general, a system of nonlinear equations may have no solution, a unique solution, or many solutions. Many algorithms for nonlinear equations are related to algorithms for unconstrained optimization and nonlinear least squares. There are close connections to the nonlinear least-squares problem, since a number of algorithms for nonlinear equations proceed by minimizing the sum of squares of the equations: \[\min_x \sum_{i=1}^n f_i^2(x).\] Despite the similarities, there are important differences between algorithms for the two problems: in nonlinear equations, the number of equations equals the number of variables, and all of the equations must be satisfied at a solution point. Newton's method forms the basis for many of the algorithms to solve systems of nonlinear equations. We give a brief overview of Newton's method and outline some of the related algorithms. For more details, see the sources in the references, for example, Chapter 11 in Nocedal and Wright (1999). Newton's method for nonlinear equations uses a linear approximation of the function \(f\) around the current iterate. Given an initial point \(x_0\), \[f(x) \approx f(x_0) + J(x_0)(x - x_0),\] where \(J(x_0)\) is the Jacobian matrix, the matrix of all first-order partial derivatives of the components of \(f\). To find the vector \(x\) that makes \(f(x) = 0\), we choose the next iterate \(x_1\) so that \[f(x_0) + J(x_0)(x_1 - x_0) = 0.\] Let \(p_0 = x_1 - x_0\). To find \(p_0\), we solve the system of linear equations \[J(x_0)\, p_0 = -f(x_0)\] and then set \(x_1 = x_0 + p_0\). Newton's Method for Nonlinear Equations: Given an iterate \(x_k\), Newton's method computes \(f(x_k)\) and its Jacobian matrix and finds a step \(p_k\) by solving the system of linear equations \[J(x_k) \, p_k = - f(x_k). \quad (1)\] Then, the new iterate is \(x_{k+1} = x_k + p_k\). Most of the computational cost of Newton's method is associated with two operations: evaluation -- of both the function \(f\) and the Jacobian matrix -- and solution of the linear system of equations (1). For the Jacobian, the computation of the \(i^{th}\) column requires the partial derivative of each component of \(f\) with respect to \(x_i\). The solution of the linear system (1) requires on the order of \(n^3\) operations when the Jacobian is dense. Newton's method is guaranteed to converge if the starting point is sufficiently close to the solution and the Jacobian is nonsingular at the solution. Under these conditions, the rate of convergence is quadratic: \[\| x_{k+1} - x^* \| \leq \beta \| x_k - x^* \|^2,\] for some positive constant \(\beta\). This rapid local convergence is the main advantage of Newton's method.
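A bare-bones implementation of the iteration (1), applied to a two-equation example (intersecting the unit circle with the line \(y = x\)); this is a sketch without the safeguards a production solver would include:

import numpy as np

def newton_system(f, jac, x0, tol=1e-12, max_iter=50):
    # Solve J(x_k) p_k = -f(x_k), then set x_{k+1} = x_k + p_k.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        fx = f(x)
        if np.linalg.norm(fx) < tol:
            break
        x = x + np.linalg.solve(jac(x), -fx)
    return x

f = lambda v: np.array([v[0]**2 + v[1]**2 - 1.0, v[1] - v[0]])
jac = lambda v: np.array([[2.0 * v[0], 2.0 * v[1]], [-1.0, 1.0]])
print(newton_system(f, jac, [2.0, 0.5]))  # -> [0.70710678, 0.70710678]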
The disadvantages of Newton's method include the need to compute the Jacobian matrix and potentially erratic behavior when the starting point is not close to a solution (no guaranteed global convergence). Modifications and enhancements of Newton's method have been developed to deal with these two difficulties:

Trust Region and Line-Search Methods
Truncated Newton Methods
Broyden's Method
Tensor Methods
Homotopy Methods

Nocedal, J. and Wright, S. J. 1999. Numerical Optimization. Springer-Verlag, New York.
I often wondered about these things - then I came up with a simple experiment that works for me because I have a simple bike computer (a thing with a magnetic pickup on the spokes that updates my speed every second). I find a flat piece of road and ride at a certain speed (say 20 mph on my road bike, or 15 mph on my mountain bike). I then stop pedaling at a specific point and take note of the speed dropping (it conveniently updates every second: use an iPhone or other voice recorder and just read out the numbers as you see them: 20.0, 19.5, 19.1, 18.7, 18.2, 17.8, 17.4, etc.). Now comes the fun part: turning this into the power needed to keep a certain speed going. You should have a pretty good idea of your mass plus that of your bike. At a given speed, this gives a certain kinetic energy ($\frac12mv^2$). The drop in your speed means your kinetic energy is being dissipated (road friction, air drag, slope of the road...). To be accurate, you need to take account of the fact that your wheels also have rotational kinetic energy - almost all the mass is at a certain radius $r$ (typically 35 cm for a road bike, variable for a mountain bike). For a wheel of mass $m$ rolling at velocity $v$, we have $I = mr^2$ and $\omega = v/r$, so the total energy is $$\begin{align}KE&=\frac12mv^2 + \frac12I\omega^2\\&=\frac12mv^2 + \frac12(mr^2)\left(\frac{v}{r}\right)^2\\&=mv^2,\end{align}$$ exactly double what it would be if you had not taken the rotational energy into account. The simplest way to account for this is just to double the mass of the wheels, then use $\frac12mv^2$. The correction is quite small, given that you are probably much heavier than your wheels. Now you create a table (Excel works well for this) with columns for time (sec) and speed (mph) - these are the data columns. You then compute speed (m/s), KE (J), and change in KE (J) in the next three columns. Now you can create a plot of the power needed at a given speed. Using the numbers above, I came up with a plot which shows that maintaining 20 mph on my road bike on that day (gentle tail wind) required about 225 W of sustained power - which is quite comfortable. According to the website calculator at http://www.tribology-abc.com/calculators/cycling.htm I should have expected about 275 W with no wind when going 32 km/h; this is certainly in the right ballpark. The same calculator shows that the power needed drops to 141 W at 15 mph (24 km/h) - again, quite close to what my simple experiment gave. Another look at the breakdown of the bike calculator shows that the rolling resistance is independent of speed, and that the factor that changes quickly is the wind resistance. This tells me a few things: at low speeds (below 12 mph) the rolling resistance is critical - this is where pumping up the tires of your mountain bike can really help. At higher speeds, the wind resistance dominates the power dissipation: a good posture helps to streamline your body. This is where recumbent and TT bikes excel, and where the mountain bike really loses out. I don't have the same data for a mountain bike as I collected for my road bike, but I'm sure you could do the experiment yourself - and it's more fun...
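Here is the spreadsheet computation as a short script, using the speed readings quoted above; the masses are placeholders you would replace with your own:

import numpy as np

speeds_mph = np.array([20.0, 19.5, 19.1, 18.7, 18.2, 17.8, 17.4])  # one per second
mass = 90.0        # rider + bike, kg (replace with your own)
wheel_mass = 2.0   # wheel rim + tire mass, kg, counted a second time for rotation

v = speeds_mph * 0.44704                # mph -> m/s
ke = 0.5 * (mass + wheel_mass) * v**2   # doubled wheel mass handles rotation
power = -np.diff(ke)                    # J lost per 1 s interval = W dissipated
v_mid = 0.5 * (v[:-1] + v[1:])          # speed at each interval midpoint

for vm, p in zip(v_mid / 0.44704, power):
    print(f"{vm:5.1f} mph: {p:6.0f} W")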
EDIT: a bit more about rolling resistance. Rolling resistance is poorly understood by many people. There are different factors that come into play: tire dimensions (radius, width, curvature), tire pressure, and road condition (smoothness, hardness). For example, if the road is rough, a small tire keeps "having to ride up hill" while a larger tire will "glide over the bumps". A soft surface (like sand) creates a dip, and again the tire keeps having to "climb out of the hole". This climbing is felt as rolling friction. A mountain bike tire, being wider, digs less of a hole - and so fat tires are best on soft surfaces. But the really interesting thing is friction on a smooth road. Here, the key factor is the shape and size of the contact patch - specifically, the length of the contact patch. There is a nice diagram (from http://velonews.competitor.com/2012/03/bikes-and-tech/technical-faq/tech-faq-seriously-wider-tires-have-lower-rolling-resistance-than-their-narrower-brethren_209268) that helps to show this: [diagram of contact patches omitted] The thing that matters most is the difference between the length of the contact patch and the corresponding arc of the tire that is touching this patch. For a patch length $l$ and a wheel radius $r$, the angle $\theta$ (from start of contact to end of contact, as measured about the axle) is given by $$\tan\frac{\theta}{2}= \frac{l}{2r}$$ This means that the amount of rubber that is confined along the length $l$ is in fact a little bit bigger - the excess amount of rubber is $$e = r\theta - 2r\sin\frac{\theta}{2}$$ A small-angle expansion ($\sin x = x - \frac{x^3}{6} + \dots$) tells us that for small $\theta$ this difference is roughly $$e = \frac{r \theta^3}{24}$$ Further, if we assume an elliptical patch with a constant aspect ratio (this is approximately true for a given tire dimension), then the length will scale with the inverse square root of the pressure (since $F = P \cdot A$: force is the product of pressure and area); and since $\theta$ is approximately linear in the length, you see that the motion of the rubber (and thus the energy dissipation) goes down with pressure. Now comes the "knobby" mountain bike tire. Because much of the tire is not touching the road, the "effective pressure" is lower than you think it is - more specifically, the tire will start to touch the ground earlier, and leave later, for a much longer effective contact length, and thus $\theta$. And that means much higher friction. How much higher? I have no measurements, but here is an estimation. The pressure in a mountain bike tire is typically in the 30s of psi - let's say 1/4 of the pressure in a road bike tire. But it's also much wider - say 3x wider than a road bike tire - which will shorten the contact length for a given pressure. Finally, with the knobbly nature of the profile, it might have an "effective contact length" that is 20% longer than it would have been for a smooth tire (because the stiffness of the tire will offset some of the knobbly nature of the tire). With all those assumptions, you get a contact length ratio (vs. a road bike) of $(4/3)\times 1.2 = 1.6$. Now we computed earlier that rubber friction goes as the third power of the contact length, so this is roughly $1.6^3 \approx 4$ times greater. Rolling friction on a road bike is around 5 N (see link above). Four times more rolling friction corresponds to an additional 15 N, which at 15 mph is about 90 W. That's a lot of power - by the plot I derived above, it would drop your speed by about 3 mph for the same power. That is quite similar to the value quoted by Lubos. Note that at 15 mph your wind resistance is quickly dropping, and the position of the body (upright vs. dropped) doesn't really have a huge impact (although more so if you are riding into a stiff headwind).
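(As a quick symbolic check of the small-angle claim above, where e is the excess rubber length just defined:)

import sympy as sp

theta, r = sp.symbols("theta r", positive=True)
e = r * theta - 2 * r * sp.sin(theta / 2)  # arc length minus chord
print(sp.series(e, theta, 0, 6))           # r*theta**3/24 - r*theta**5/1920 + O(theta**6)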
This just goes to show that you really need to pay attention to your tires - you pay a price for having tires that cannot sustain a high pressure (and you pay even more for not inflating the tires appropriately for the road surface...)
When I apply the pumping lemma to this language: ${L=\{010^n:n\ge0\}}$ over the alphabet ${\Sigma =\{0,1\}}$, I get that it is non-regular, despite the fact that it is regular. Let ${n=4}$; then $w=010000$. Write $w=xyz$ with ${\mid xy\mid \leq n}$ and ${\mid y\mid \geq 1}$: take $x=0$, $y=10$, $z=000$. Let $i=2$; then $xy^2z = 01010000 \not\in L$, so $L$ is non-regular. So, what am I missing?
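For concreteness, a quick check that $L$ is the regular language $010^*$ and that the strings above behave as claimed:

import re

def in_L(w):
    # L = { 01 0^n : n >= 0 } is exactly the regular expression 010*
    return re.fullmatch("010*", w) is not None

print(in_L("010000"))    # True:  the chosen w (n = 4)
print(in_L("01010000"))  # False: the pumped string x y^2 z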
Category Theory for Programmers Chapter 7: Functors

This helps me because the book was not super rigorous about the definition of a functor, which is the following. Let $\mathscr{A}$ and $\mathscr{B}$ be categories; a functor $F:\mathscr{A} \to \mathscr{B}$ consists of: a function $\mathrm{ob}(\mathscr{A}) \to \mathrm{ob}(\mathscr{B})$, written as $A \mapsto F(A)$; for each $A, A' \in \mathscr{A}$, a function $\mathscr{A}(A,A') \to \mathscr{B}(F(A), F(A'))$, written as $f \mapsto F(f)$, satisfying the following axioms: $F(f' \circ f) = F(f') \circ F(f)$ whenever $A \xrightarrow{f} A' \xrightarrow{f'} A''$ in $\mathscr{A}$; $F(1_A) = 1_{F(A)}$ whenever $A \in \mathscr{A}$.

Another thing that ended up being really helpful was considering: what are our categories? When thinking about Haskell, we start with the category of Haskell types, Hask. The morphisms then are just Haskell functions, as they're all from some type to another type. So when thinking about functors in Haskell, our objects are elements of Hask and our morphisms are just Haskell functions.

The exercises. Can we turn the Maybe type constructor into a functor by defining fmap _ _ = Nothing? To verify this, all we need to do is check whether the functor laws (axioms) hold. First start with identity. The Maybe type constructor takes types from Hask, e.g. x, and turns them into new types, Maybe x, which form a subset of Hask. The identity function on these new types is the same identity function from Hask, which is why we can say fmap id = id in Haskell. So for us to verify our "functor", we need to verify this equality.

1. fmap id (Just x) = Nothing       (by definition of fmap)
2. fmap id (Just x) = id Nothing    (by definition of id)
3. fmap id (Just x) = id (Just x)   (by the law fmap id = id, if it held)
4. id (Just x) = id Nothing         (by 2 and 3)
5. (Just x) /= Nothing              (contradiction)

So we do not get that this definition of fmap gives us a functor.

Prove the functor laws for the reader functor. From the book: we turn the type constructor (->) r into a functor by defining fmap :: (a -> b) -> (r -> a) -> (r -> b), which was given as fmap = (.) (note the r -> a and a -> b in the first two arguments). The first step is showing that composition holds. So, given two functions f :: a -> b and g :: b -> c, with g . f :: a -> c, we have the following for some h :: r -> a:

1. fmap (g . f) h = (.) (g . f) h                     (by definition of fmap)
2. fmap (g . f) h = (g . f) . h = g . (f . h)         (by associativity of composition)
3. fmap (g . f) h = (g . f) . h = g . (fmap f h)      (by def'n of fmap)
4. fmap (g . f) h = (g . f) . h = fmap g (fmap f h)   (by def'n of fmap)

The next step is to show identity is preserved. It is useful to know (or remember, or learn) that id . x = x and x . id = x.

fmap id h = (.) id h = id . h = h = id h

So we're good. $\blacksquare$ Implementing the reader functor is equivalent to implementing a compose function, as fmap = (.). And that we've already done in chapter 1!

Prove the functor laws for the list functor, which was defined in the chapter as

data List a = Nil | Cons a (List a)

instance Functor List where
  fmap _ Nil = Nil
  fmap f (Cons x t) = Cons (f x) (fmap f t)

The first step is showing composition. Starting with our base case, Nil, we have

fmap (f . g) Nil = Nil = fmap g Nil = fmap f (fmap g Nil)

Because we've shown the base case Nil, all that remains is to demonstrate that the law holds for Cons x t assuming it holds for t (induction):

fmap (f . g) (Cons x t)
  = Cons ((f . g) x) (fmap (f . g) t)
  = Cons ((f . g) x) (fmap f (fmap g t))   (induction hypothesis)
  = Cons (f (g x)) (fmap f (fmap g t))     (def'n of composition)
  = fmap f (Cons (g x) (fmap g t))         (def'n of fmap)
  = fmap f (fmap g (Cons x t))             (def'n of fmap)

$\blacksquare$
We can describe a two-dimensional (i.e. planar), inviscid, irrotational, free line vortex in cylindrical coordinates with the stream function $\psi = -K\ln{r}$, velocity potential $\phi= K\theta$, tangential velocity component $v_{\theta} = \frac{1}{r}\frac{\partial \phi}{\partial \theta} = K/r$, and radial velocity component $v_r = \frac{\partial \phi}{\partial r} = 0$, where $K$ is a constant. The motion of mutually perpendicular lines in a fluid element is given by $$ \dot{\gamma} = \frac{1}{r} \frac{\partial v_r}{\partial \theta} + \frac{\partial v_{\theta}}{\partial r} - \frac{v_{\theta}}{r}$$ where $\dot{\gamma}$ is the rate of angular deformation of the angle between the lines. In addition, because the flow is irrotational, $$ \frac{\partial (r v_{\theta})}{\partial r} = \frac{\partial v_r}{\partial \theta}\,, $$ such that $\dot{\gamma}\neq 0$. However, to construct the stream function and velocity potential, we must assume that the flow is inviscid and the only forces acting on the fluid element are the normal stresses (i.e. the pressure) and any body forces. My understanding is that normal stresses and body forces cannot cause angular deformation, and the shear stresses are zero due to the neglect of viscous terms. Thus, what force is causing the angular deformation of the fluid elements in this flow? This Phys.SE question is related, but does not answer my question: When is a flow vortex free?
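For what it's worth, both computations (zero vorticity, nonzero deformation rate) can be verified symbolically; gamma_dot below implements the deformation-rate formula quoted above:

import sympy as sp

r, theta, K = sp.symbols("r theta K", positive=True)
v_r = sp.Integer(0)  # radial velocity
v_t = K / r          # tangential velocity of the free line vortex

omega_z = (sp.diff(r * v_t, r) - sp.diff(v_r, theta)) / r  # z-component of vorticity
gamma_dot = sp.diff(v_r, theta) / r + sp.diff(v_t, r) - v_t / r

print(sp.simplify(omega_z))    # 0          (irrotational, as constructed)
print(sp.simplify(gamma_dot))  # -2*K/r**2  (nonzero angular deformation)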
$Z_c(4430)$: $I^G(J^{PC}) = 1^+(1^{+-})$ (G and C need confirmation). Was $X(4430)^{\pm}$. Properties incompatible with a $q\bar{q}$ structure (exotic state); see the review on non-$q\bar{q}$ states. First seen by CHOI 2008 in $B \rightarrow K\pi^{+}\psi(2S)$ decays, confirmed by AAIJ 2014AG, and confirmed in a model-independent way by AAIJ 2015BH. Also seen by CHILIKIN 2014 in $B \rightarrow K^{+}\pi\, J/\psi$ decays. $J^{P}$ was determined by CHILIKIN 2013 and AAIJ 2014AG.
Hello one and all! Is anyone here familiar with planar prolate spheroidal coordinates? I am reading a book on dynamics and the author states: If we introduce planar prolate spheroidal coordinates $(R, \sigma)$ based on the distance parameter $b$, then, in terms of the Cartesian coordinates $(x, z)$ and also of the plane polars $(r, \theta)$, we have the defining relations $$r\sin\theta=x=\pm\sqrt{R^2-b^2}\,\sin\sigma, \qquad r\cos\theta=z=R\cos\sigma$$ I am having a tough time visualising what this is.

Consider the function $f(z) = \sin\left(\frac{1}{\cos(1/z)}\right)$. The point $z = 0$ is: (a) a removable singularity, (b) a pole, (c) an essential singularity, (d) a non-isolated singularity. Since $\cos(1/z) = 1- \frac{1}{2z^2}+\frac{1}{4!z^4} - \dots = (1-y)$, where $y=\frac{1}{2z^2}+\dots$

I am having trouble understanding non-isolated singularity points. An isolated singularity point I do kind of understand: a point $z_0$ is said to be isolated if $z_0$ is a singular point and has a neighborhood throughout which $f$ is analytic except at $z_0$. For example, why would ...

No worries. There's currently some kind of technical problem affecting the Stack Exchange chat network. It's been pretty flaky for several hours. Hopefully, it will be back to normal in the next hour or two, when business hours commence on the east coast of the USA...

The absolute value of a complex number $z=x+iy$ is defined as $\sqrt{x^2+y^2}$. Hence, when evaluating the absolute value of $x+i$ I get the number $\sqrt{x^2 +1}$; but the answer to the problem says it's actually just $x^2 +1$. Why?

mmh, I probably should ask this on the forum. The full problem asks me to show that we can choose $\log(x+i)$ to be $$\log(x+i)=\log(1+x^2)+i\left(\frac{\pi}{2} - \arctan x\right)$$ So I'm trying to find the polar coordinates (absolute value and an argument $\theta$) of $x+i$ to then apply the $\log$ function to it.

Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. Then are the following true: (1) If $x=y$ then $x\sim y$. (2) If $x=y$ then $y\sim x$. (3) If $x=y$ and $y=z$ then $x\sim z$. Basically, I think that all three properties follow if we can prove (1), because if $x=y$ then, since $y=x$, by (1) we would have $y\sim x$, proving (2). (3) will follow similarly. This question arose from an attempt to characterize equality on a set $X$ as the intersection of all equivalence relations on $X$. I don't know whether this question is too trivial, but I have not yet seen any formal proof of the following statement: "Let $X$ be any nonempty set and $\sim$ be any equivalence relation on $X$. If $x=y$ then $x\sim y$."

That is definitely a new person; not going to classify as RHV yet, as other users have already put the situation under control, it seems... (comment on many many posts above)

In other news:
C -2.5353672500000002 -1.9143250000000003 -0.5807385400000000
C -3.4331741299999998 -1.3244286800000000 -1.4594762299999999
C -3.6485676800000002 0.0734728100000000 -1.4738058999999999
C -2.9689624299999999 0.9078326800000001 -0.5942069900000000
C -2.0858929200000000 0.3286240400000000 0.3378783500000000
C -1.8445799400000003 -1.0963522200000000 0.3417561400000000
C -0.8438543100000000 -1.3752198200000001 1.3561451400000000
C -0.5670178500000000 -0.1418068400000000 2.0628359299999999
probably the weirdest bunch of data I have ever seen, with so many 000000s and 999999s

But I think that to prove the implication for transitivity, the inference rule and a use of MP seem to be necessary.
But that would mean that for logics in which MP fails we wouldn't be able to prove the result. Also, in set theories without the Axiom of Extensionality the desired result will not hold. Am I right @AlessandroCodenotti?

@AlessandroCodenotti A precise formulation would help in this case, because I am trying to understand whether a proof of the statement which I mentioned at the outset really depends on the equality axioms or only on the FOL axioms (without equality axioms). This would allow one, in some cases, to define an "equality-like" relation for set theories in which we don't have the Axiom of Extensionality.

Can someone give an intuitive explanation why $\mathcal{O}(x^2)-\mathcal{O}(x^2)=\mathcal{O}(x^2)$? The context is Taylor polynomials, so $x\to 0$. I've seen a proof of this, but intuitively I don't understand it.

@schn: The minus is irrelevant (for example, the thing you are subtracting could be negative). When you add two things that are of the order of $x^2$, of course the sum is the same (or possibly smaller). For example, $3x^2-x^2=2x^2$. You could have $x^2+(x^3-x^2)=x^3$, which is still $\mathscr O(x^2)$.

@GFauxPas: You only know $|f(x)|\le K_1 x^2$ and $|g(x)|\le K_2 x^2$, so that won't be a valid proof, of course.

Let $f(z)=z^{n}+a_{n-1}z^{n-1}+\cdots+a_{0}$ be a complex polynomial such that $|f(z)|\leq 1$ for $|z|\leq 1$. I have to prove that $f(z)=z^{n}$. I tried it as: as $|f(z)|\leq 1$ for $|z|\leq 1$, we must have the coefficients $a_{0},a_{1},\cdots$ be zero, because by triangul...

@GFauxPas @TedShifrin Thanks for the replies. Now, why is it we're only interested in $x\to 0$? When we do a Taylor approximation centered at $x=0$, aren't we interested in all the values of our approximation, even those not near 0?

Indeed, one thing a lot of texts don't emphasize is this: if $P$ is a polynomial of degree $\le n$ and $f(x)-P(x)=\mathscr O(x^{n+1})$, then $P$ is the (unique) Taylor polynomial of degree $n$ of $f$ at $0$.
Fractions and binomial coefficients are common mathematical elements with similar characteristics - one number goes on top of another. This article explains how to typeset them in LaTeX. Using fractions and binomial coefficients in an expression is straightforward. The binomial coefficient is defined by the next expression: \[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \] For these commands to work you must import the package amsmath by adding the next line to the preamble of your file:

\usepackage{amsmath}

The appearance of the fraction may change depending on the context. Fractions can be used alongside the text, for example \( \frac{1}{2} \), and in a mathematical display style like the one below: \[\frac{1}{2}\] As you may have guessed, the command \frac{1}{2} is the one that displays the fraction. The text inside the first pair of braces is the numerator and the text inside the second pair is the denominator. Also, the text size of the fraction changes according to the text around it. You can set this manually if you want. When displaying fractions in-line, for example \(\frac{3x}{2}\), you can set a different display style: \( \displaystyle \frac{3x}{2} \). This is also true the other way around: \[ f(x)=\frac{P(x)}{Q(x)} \ \ \textrm{and} \ \ f(x)=\textstyle\frac{P(x)}{Q(x)} \] The command \displaystyle will format the fraction as if it were in mathematical display mode. On the other side, \textstyle will change the style of the fraction as if it were part of the text. The usage of fractions is quite flexible; they can be nested to obtain more complex expressions: \[ \frac{1+\frac{a}{b}}{1+\frac{1}{1+\frac{1}{a}}} \] Now a wild example: \[ a_0+\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3+\cdots}}} \] The second fraction displayed in the previous example uses the command \cfrac{}{} provided by the package amsmath (see the introduction); this command displays nested fractions without changing the size of the font. Especially useful for continued fractions. Binomial coefficients are common elements in mathematical expressions; the command to display them in LaTeX is very similar to the one used for fractions. The binomial coefficient is defined by the next expression: \[ \binom{n}{k} = \frac{n!}{k!(n-k)!} \] And of course this command can be included in the normal text flow: \(\binom{n}{k}\). As you see, the command \binom{}{} will print the binomial coefficient using the parameters passed inside the braces. A slightly different and more complex example of continued fractions:

\newcommand*{\contfrac}[2]{%
  {\rlap{$\dfrac{1}{\phantom{#1}}$}%
   \genfrac{}{}{0pt}{0}{}{#1+#2}%
  }
}
\[
a_0 + \contfrac{a_1}{
        \contfrac{a_2}{
          \contfrac{a_3}{
            \genfrac{}{}{0pt}{0}{}{\ddots}
      }}}
\]
Another method, not covered by the answers above, is finite automaton transformation. As a simple example, let us show that the regular languages are closed under the shuffle operation, defined as follows:$$L_1 \mathop{S} L_2 = \{ x_1y_1 \ldots x_n y_n \in \Sigma^* : x_1 \ldots x_n \in L_1, y_1 \ldots y_n \in L_2 \}$$(here the $x_i, y_i \in \Sigma$ are single symbols). You can show closure under shuffle using closure properties, but you can also show it directly using DFAs. Suppose that $A_i = \langle \Sigma, Q_i, F_i, \delta_i, q_{0i} \rangle$ is a DFA that accepts $L_i$ (for $i=1,2$). We construct a new DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ as follows: The set of states is $Q_1 \times Q_2 \times \{1,2\}$, where the third component remembers whether the next symbol is an $x_i$ (when 1) or a $y_i$ (when 2). The initial state is $q_0 = \langle q_{01}, q_{02}, 1 \rangle$. The accepting states are $F = F_1 \times F_2 \times \{1\}$. The transition function is defined by $\delta(\langle q_1, q_2, 1 \rangle, \sigma) = \langle \delta_1(q_1,\sigma), q_2, 2 \rangle$ and $\delta(\langle q_1, q_2, 2 \rangle, \sigma) = \langle q_1, \delta_2(q_2,\sigma), 1 \rangle$. A more sophisticated version of this method involves guessing. As an example, let us show that regular languages are closed under reversal, that is,$$ L^R = \{ w^R : w \in L \}. $$(Here $(w_1\ldots w_n)^R = w_n \ldots w_1$.) This is one of the standard closure operations, and closure under reversal easily follows from manipulation of regular expressions (which may be regarded as the counterpart of finite automaton transformation for regular expressions) – just reverse the regular expression. But you can also prove closure using NFAs. Suppose that $L$ is accepted by a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, where The set of states is $Q' = Q \cup \{q'_0\}$. The initial state is $q'_0$. The unique accepting state is $q_0$. The transition function is defined as follows: $\delta'(q'_0,\epsilon) = F$, and for any states $q, q' \in Q$ and $\sigma \in \Sigma$, $\delta'(q', \sigma) = \{ q \in Q : \delta(q,\sigma) = q' \}$. (We can get rid of $q'_0$ if we allow multiple initial states.) The guessing component here is the final state of the word after reversal. Guessing often also involves verifying. One simple example is closure under rotation:$$ R(L) = \{ yx \in \Sigma^* : xy \in L \}. $$Suppose that $L$ is accepted by the DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$. We construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$, which operates as follows. The NFA first guesses $q=\delta(q_0,x)$. It then verifies that $\delta(q,y) \in F$ and that $\delta(q_0,x) = q$, moving from $y$ to $x$ non-deterministically. This can be formalized as follows: The states are $Q' = \{q'_0\} \cup (Q \times Q \times \{1,2\})$. Apart from the initial state $q'_0$, the states are $\langle q,q_{curr}, s \rangle$, where $q$ is the state that we guessed, $q_{curr}$ is the current state, and $s$ specifies whether we are at the $y$ part of the input (when 1) or at the $x$ part of the input (when 2). The final states are $F' = \{\langle q,q,2 \rangle : q \in Q\}$: we accept when $\delta(q_0,x)=q$. The transitions $\delta'(q'_0,\epsilon) = \{\langle q,q,1 \rangle : q \in Q\}$ implement guessing $q$. The transitions $\delta'(\langle q,q_{curr},s \rangle, \sigma) = \langle q,\delta(q_{curr},\sigma),s \rangle$ (for every $q,q_{curr} \in Q$ and $s \in \{1,2\}$) simulate the original DFA.
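For readers who like to see constructions executed, here is a minimal sketch of the alternating shuffle product above. The dict-based DFA encoding (keys 'states', 'alphabet', 'delta', 'start', 'accept') is an assumption of the sketch, not part of the answer, and both DFAs are assumed to share an alphabet:

```python
from itertools import product

def shuffle_dfa(A1, A2):
    """Product construction for the alternating shuffle of two DFAs.

    The third state component records whose turn it is, exactly as in
    the construction described above.
    """
    states = set(product(A1['states'], A2['states'], (1, 2)))
    delta = {}
    for (q1, q2, turn), s in product(states, A1['alphabet']):
        if turn == 1:
            delta[((q1, q2, 1), s)] = (A1['delta'][(q1, s)], q2, 2)
        else:
            delta[((q1, q2, 2), s)] = (q1, A2['delta'][(q2, s)], 1)
    return {
        'states': states,
        'alphabet': A1['alphabet'],
        'delta': delta,
        'start': (A1['start'], A2['start'], 1),
        'accept': {(f1, f2, 1) for f1 in A1['accept'] for f2 in A2['accept']},
    }

def accepts(A, word):
    """Run a DFA (same dict encoding) on a word."""
    q = A['start']
    for s in word:
        q = A['delta'][(q, s)]
    return q in A['accept']
```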
The transitions $\delta'(\langle q,q_f,1 \rangle, \epsilon) = \langle q,q_0,2 \rangle$, for every $q \in Q$ and $q_f \in F$, implement moving from the $y$ part to the $x$ part. This is only allowed if we have reached a final state on the $y$ part. Another variant of the technique incorporates bounded counters. As an example, let us consider closure under bounded edit distance:$$ E_k(L) = \{ x \in \Sigma^* : \text{ there exists $y \in L$ whose edit distance from $x$ is at most $k$} \}. $$Given a DFA $\langle \Sigma, Q, F, \delta, q_0 \rangle$ for $L$, we construct an NFA $\langle \Sigma, Q', F', \delta', q'_0 \rangle$ for $E_k(L)$ as follows: The set of states is $Q' = Q \times \{0,\ldots,k\}$, where the second component counts the number of changes made so far. The initial state is $q'_0 = \langle q_0,0 \rangle$. The accepting states are $F' = F \times \{0,\ldots,k\}$. For every $q,\sigma,i$ we have transitions $\langle \delta(q,\sigma), i \rangle \in \delta'(\langle q,i \rangle, \sigma)$. Insertions are handled by transitions $\langle q,i+1 \rangle \in \delta'(\langle q,i \rangle, \sigma)$ for all $q,\sigma,i$ such that $i < k$. Deletions are handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \epsilon)$ for all $q,\sigma,i$ such that $i < k$. Substitutions are similarly handled by transitions $\langle \delta(q,\sigma), i+1 \rangle \in \delta'(\langle q,i \rangle, \tau)$ for all $q,\sigma,\tau,i$ such that $i < k$.
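The counter construction can also be simulated directly. Below is a sketch: the pairs $(q,i)$ are the NFA states, deletions are the $\epsilon$-moves (handled by a closure computation), and the dict encoding of the DFA is again an assumption of the sketch:

```python
def eclose(states, dfa_delta, alphabet, k):
    """Epsilon-closure: deletions advance the DFA without consuming input."""
    stack, seen = list(states), set(states)
    while stack:
        q, i = stack.pop()
        if i < k:
            for s in alphabet:
                nxt = (dfa_delta[(q, s)], i + 1)
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
    return seen

def in_edit_ball(word, dfa, k):
    """Is `word` within edit distance k of some word in L(dfa)?"""
    d, sigma = dfa['delta'], dfa['alphabet']
    cur = eclose({(dfa['start'], 0)}, d, sigma, k)
    for t in word:
        nxt = set()
        for q, i in cur:
            nxt.add((d[(q, t)], i))          # exact match, no change
            if i < k:
                nxt.add((q, i + 1))          # insertion: extra symbol in word
                for s in sigma:              # substitution: word has t, L-word had s
                    nxt.add((d[(q, s)], i + 1))
        cur = eclose(nxt, d, sigma, k)
    return any(q in dfa['accept'] for q, _ in cur)
```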
We have shown that a holomorphic map \(f: G\to \mathbb{C}\) can be expressed as a power series, which bears a certain similarity to polynomials, and a feature of polynomials is that if \(a\) is a root, or zero, of a polynomial \(p\), we can factor \(p\) such that \(p(z)=(z-a)^n q(z)\), where \(q\) is another polynomial with the property that \(q(a)\neq 0\). Now, does this similarity with polynomials extend to factorization? In fact it does, as we shall see. Let \(f: G\to \mathbb{C}\) be a holomorphic map that is not identically zero, with \(G\subseteq \mathbb{C}\) a domain and \(f(a)=0\). It is our claim that there exists a smallest natural number \(n\) such that \(f^{(n)}(a)\neq 0\). So suppose that there is no such \(n\), i.e. that \(f^{(k)}(a)=0\) for all \(k\in\mathbb{N}\). Let \(B_\rho(a)\) be the largest open ball with center \(a\) contained in \(G\); since we have that \[f(z)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k\] we then have that \(f\) is identically zero on \(B_\rho(a)\). Fix a point \(z_0\in G\) and let \(\gamma : [0,1]\to G\) be a continuous curve from \(a\) to \(z_0\). By the paving lemma there is a finite partition \(0=t_1 < t_2 <\cdots <t_m=1\) and an \(r>0\) such that \(B_r(\gamma(t_k))\subseteq G\) for all \(k\) and \(\gamma([t_{k-1},t_k])\subseteq B_r(\gamma(t_k))\). Note that \(B_r(\gamma(t_1))=B_r(a)\subseteq B_\rho(a)\), so \(f\) is identically zero on \(B_r(\gamma(t_1))\); but since \(\gamma([t_1,t_2])\subseteq B_r(\gamma(t_1))\) we must have that \(f\) is identically zero on \(B_r(\gamma(t_2))\), and so on finitely many times until we reach \(\gamma(t_m)\) and conclude that \(f\) is identically zero on \(B_r(\gamma(t_m))=B_r(z_0)\), and since \(z_0\) was chosen to be arbitrary we must conclude that \(f\) is identically zero on all of \(G\). A contradiction. Now, let \(n\) be the smallest natural number such that \(f^{(n)}(a)\neq 0\); then we must have that \(f^{(k)}(a)=0\) for \(k < n\). We then get, for \(z\in B_\rho(a)\): \[\begin{split} f(z) &=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=n}\frac{f^{(k)}(a)}{k!}(z-a)^k \\ &= \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{n+k} \\&=(z-a)^n \sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}. \end{split}\] Now, let \(\tilde{f}(z)=\sum^\infty_{k=0}\frac{f^{(n+k)}(a)}{(n+k)!}(z-a)^{k}\) and note that \(\tilde{f}\) is holomorphic on \(B_\rho(a)\) with \(\tilde{f}(a)=\frac{f^{(n)}(a)}{n!}\neq 0\). We then define a map \(g\) given by \[g(z)=\begin{cases} \tilde{f}(z), & z\in B_\rho(a) \\ \frac{f(z)}{(z-a)^n}, & z\in G\setminus \{a\}\end{cases}\] and note that \[f(z)=(z-a)^n g(z),\] showing the existence of a factorization with our desired properties. Showing that this representation is unique is left as an exercise 😉 References Complex analysis. Copenhagen: Department of Mathematical Sciences, University of Copenhagen.
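As a concrete instance of the factorization just proved (an added example, not from the notes): take \(f(z)=1-\cos z\) on \(G=\mathbb{C}\) with \(a=0\). Then \(f(0)=0\), \(f'(0)=\sin 0=0\), and \(f''(0)=\cos 0=1\neq 0\), so \(n=2\), and indeed \[1-\cos z = \frac{z^2}{2}-\frac{z^4}{24}+\frac{z^6}{720}-\cdots = z^2\left(\frac{1}{2}-\frac{z^2}{24}+\frac{z^4}{720}-\cdots\right) = z^2 g(z),\] with \(g\) holomorphic and \(g(0)=\tfrac{1}{2}\neq 0\).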
There are two positive integers c for which the equation 5x^2 + 11x + c = 0 has rational solutions. What is the product of those two values of c? I'm not quite sure about how to solve a problem like this, but I'll mess around with it and see what I get. Let's try to factor it by grouping. For the grouping to work nicely, the 11x should split up with one of the parts a multiple of 5. 5x^2+5x+6x+c 5x(x+1)+6(x+c/6) If this is going to be factored, the second part needs to match the first part. For this to happen c/6=1, so c=6. (5x+6)(x+1) The solutions of this are x=-1 and x=-6/5. These are both rational. There is one more way 11x can be split up with a multiple of 5. 5x^2+10x+x+c 5x(x+2)+1(x+c/1) For this one to factor, c/1=2, so c=2. (5x+1)(x+2) The solutions to this one are x=-2 and x=-1/5, and they are also rational. So when c=6 and c=2 the quadratic has rational solutions, and the answer you are looking for is \(2\cdot6=\boxed{12}\). There are two positive integers c for which the equation \(5x^2 + 11x+c=0\) has rational solutions. What is the product of those two values of \(c\)? \(\begin{array}{|rcll|} \hline \mathbf{ 5x^2 + 11x+c} &=& \mathbf{0} \\\\ x &=& \dfrac{-11\pm \sqrt{121-4\cdot 5c} }{2\cdot 5} \\ \hline 121-4\cdot 5c &>& 0 \\ 121-20c&>& 0 \\ 121&>&20c \\ \mathbf{\dfrac{121}{20}} &>& \mathbf{c} \\ \hline \end{array}\) The possible values of positive integers \(c\) are \(\{1,2,3,4,5,6\}\). (For the solutions to be rational, the discriminant \(121-20c\) must in fact be a perfect square, which the table below checks.) \(\begin{array}{|c|l|c|} \hline c & \sqrt{121-20c} & \text{rational solutions} \\ \hline \mathbf{6} & \sqrt{121-20\cdot 6}=\sqrt{1}= 1 & \checkmark \\ \hline 5 & \sqrt{121-20\cdot 5}=\sqrt{21} & \\ \hline 4 & \sqrt{121-20\cdot 4}=\sqrt{41} & \\ \hline 3 & \sqrt{121-20\cdot 3}=\sqrt{61} & \\ \hline \mathbf{2} & \sqrt{121-20\cdot 2} =\sqrt{81} = 9 & \checkmark\\ \hline 1 & \sqrt{121-20\cdot 1}=\sqrt{101} & \\ \hline \end{array}\) So \(c=6\) and \(c=2\) and \(2\cdot 6 = 12\)
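The table search can also be automated; a few lines of Python confirm the two values using the perfect-square discriminant test from the second solution:

```python
from math import isqrt

# For which positive integers c does 5x^2 + 11x + c = 0 have rational
# roots?  Rational roots require 121 - 20c to be a nonnegative perfect
# square; 121/20 ≈ 6.05 bounds c at 6.
good = []
for c in range(1, 7):
    disc = 121 - 20 * c
    if disc >= 0 and isqrt(disc) ** 2 == disc:
        good.append(c)

print(good)               # [2, 6]
print(good[0] * good[1])  # 12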
A reader asks: I know that a square matrix $\mathbf{M}$ maps point $\mathbf{x}$ to point $\mathbf{y}$. Do I have enough information to work out $\mathbf{M}$? In a word: no, unless you're working in one dimension! In general, to work out a square transformation matrix in $n$ dimensions, you need to… "A $z$-score of 1.4," said the student, reaching for his tables. "0.92," said the Mathematical Ninja, without skipping a beat. "0.9192," said the student, with a hint of annoyance. "How on earth..." "Oh, it's terribly simple," said the Mathematical Ninja. "It turns out, for smallish values of $z$, the normal… In this month's Wrong, But Useful, @icecolbeveridge (Colin Beveridge in real life) and @reflectivemaths (Dave Gale when he's at home)... ... completely forget about the Maths Book Club, which was going on during the recording; ... get all excited about the MathsJam conference on the weekend of November 2nd-3rd… "You would not be certain that $17 \times 24$ is not 568." - Daniel Kahneman, Thinking Fast And Slow. Thanks to Alice for pointing out that yes, she bloody well would. Most people under 50 in the UK would reach for a calculator, or possibly a pen and paper to… "... Evidently not," said the student, with a look of sheer terror that was music to the Mathematical Ninja's eyes. He smiled a nasty smile. "No," he said, "you categorically do not add probabilities as you go through the tree." "You... multiply?" The student was cautious. The Mathematical Ninja hadn't… Ask virtually any maths teacher what $\sec(\alpha)$ means, and the chances are they'll say "it's $\frac{1}{\cos(\alpha)}$," without missing a beat. Ask them what it means geometrically... well, I don't want to speak for the teaching profession as a whole, but I'd have been stumped until the other day. As with the… This is an odd, out-of-sequence post, but I just saw this and thought it needed sharing. - John Halpern in real life - is one of my heroes. I'd rate him comfortably among the top five crossword compilers in the UK (possibly the world), and not just because he has… Chair: If 'good' requires pupil performance to exceed the national average, and if all schools must be good, how is this, how is this mathematically possible? Michael Gove: By getting better all the time. Chair: So it is possible, is it? Michael Gove: It is possible to get better all… In honour of 's birthday this week, here's a post with a vaguely Douglas-Adams-related theme. The student looked at the Mathematical Ninja and decided this was a moment where reaching for the calculator would be appropriate. "$\frac{29}{42}$..." she said aloud. "0.69," said the Mathematical Ninja. She threw the calculator down… At a recent MathsJam, there was a puzzle. This is nothing out of the ordinary. It went something like: If an absent-minded professor takes his umbrella into a classroom, there's a probability of $\frac{1}{4}$ that he'll absent-mindedly leave it there. One day, he sets off with his umbrella, teaches in…
My favorite connection in mathematics (and an interesting application to physics) is a simple corollary of Hodge's decomposition theorem, which states: On a (compact and smooth) Riemannian manifold $M$ with its Hodge-de Rham-Laplace operator $\Delta$, the space of $p$-forms $\Omega^p$ can be written as the orthogonal sum (relative to the $L^2$ product) $$\Omega^p = \Delta \Omega^p \oplus \cal H^p = d \Omega^{p-1} \oplus \delta \Omega^{p+1} \oplus \cal H^p,$$ where $\cal H^p$ are the harmonic $p$-forms, and $\delta$ is the adjoint of the exterior derivative $d$ (i.e. $\delta = \text{(some sign)} \star d\star$, where $\star$ is the Hodge star operator). (The theorem follows from the fact that $\Delta$ is a self-adjoint, elliptic differential operator of second order, and so it is Fredholm with index $0$.) From this it is now easy to prove that every nontrivial de Rham cohomology class $[\omega] \in H^p$ has a unique harmonic representative $\gamma \in \cal H^p$ with $[\omega] = [\gamma]$. Please note the equivalence $$\Delta \gamma = 0 \Leftrightarrow d \gamma = 0 \wedge \delta \gamma = 0.$$ Besides implying easy proofs of Poincaré duality and whatnot, this statement motivates an interesting viewpoint on electrodynamics: Please be aware that from now on we consider the Lorentzian manifold $M = \mathbb{R}^4$ equipped with the Minkowski metric (so $M$ is neither compact nor Riemannian!). We are going to interpret $\mathbb{R}^4 = \mathbb{R} \times \mathbb{R}^3$ as a foliation of spacelike slices and the first coordinate as a time function $t$. So every point $(t,p)$ is a position $p$ in space $\mathbb{R}^3$ at the time $t \in \mathbb{R}$. Consider the lifeline $L \simeq \mathbb{R}$ of an electron in spacetime. Because the electron occupies a position which can't be occupied by anything else, we can remove $L$ from the spacetime $M$. Though the theorem of Hodge does not hold for Lorentzian manifolds in general, it holds for $M \setminus L \simeq \mathbb{R}^4 \setminus \mathbb{R}$. The only non-vanishing cohomology space is $H^2$, with dimension $1$ (this statement has nothing to do with the metric on this space, it's pure topology - we just cut out the lifeline of the electron!). And there is a harmonic generator $F \in \Omega^2$ of $H^2$ that solves $$\Delta F = 0 \Leftrightarrow dF = 0 \wedge \delta F = 0.$$ But we can write every $2$-form $F$ as a unique decomposition $$F = E + B \wedge dt.$$ If we interpret $E$ as the classical electric field and $B$ as the magnetic field, then $d F = 0$ is equivalent to the first two Maxwell equations and $\delta F = 0$ to the last two. So cutting out the lifeline of an electron gives you automagically the electro-magnetic field of the electron as a generator of the non-vanishing cohomology class.
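To unpack the last equivalence a little (a sketch in one common sign convention, with the spatial/temporal split made explicit; the answer's $E$ and $B$ are the corresponding spatial forms, though it labels the summands slightly differently): write $F = B + E \wedge dt$, with $E$ a time-dependent spatial $1$-form and $B$ a spatial $2$-form, and split the exterior derivative as $d = d_s + dt \wedge \partial_t$. Then $$dF = d_s B + dt \wedge \left(\partial_t B + d_s E\right),$$ so $dF = 0$ decouples into $d_s B = 0$ (i.e. $\nabla\cdot\vec B = 0$) and $\partial_t B + d_s E = 0$ (i.e. $\nabla\times\vec E = -\partial_t\vec B$) - the two homogeneous Maxwell equations. The remaining, source-free pair comes from $\delta F = 0$ in the same way.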
Journal of High Energy Physics, ISSN 1126-6708, 3/2018, Volume 2018, Issue 3, pp. 1 - 23. The ratios of the branching fractions of the decays Λ c + → pπ − π +, Λ c + → pK − K +, and Λ c + → pπ − K + with respect to the Cabibbo-favoured Λ c + →... Subjects: Spectroscopy | Branching fraction | Charm physics | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Luminosity | Uncertainty | Large Hadron Collider | Particle collisions | Nuclear and particle physics. Atomic energy. Radioactivity | LHCb | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article

Journal of High Energy Physics, ISSN 1126-6708, 3/2018, Volume 2018, Issue 3, pp. 1 - 21. The difference between the CP asymmetries in the decays Λ c + → pK − K + and Λ c + → pπ − π + is presented. Proton-proton collision data taken at... Subjects: Charm physics | CP violation | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | Skewed distributions | Luminosity | Statistical methods | Statistical analysis | Particle collisions | Decay | PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article

Physical Review Letters, ISSN 0031-9007, 05/2017, Volume 118, Issue 18. The Ξc+K- mass spectrum is studied with a sample of pp collision data corresponding to an integrated luminosity of 3.3 fb-1, collected by the LHCb experiment.... Subjects: PHYSICS OF ELEMENTARY PARTICLES AND FIELDS. Journal Article

Journal of High Energy Physics, ISSN 1029-8479, 7/2017, Volume 2017, Issue 7, pp. 1 - 33. A study of B s 0 → η c ϕ and B s 0 → η c π + π − decays is performed using pp collision data corresponding to an integrated luminosity of 3.0 fb−1, collected... Subjects: B physics | Branching fraction | Hadron-Hadron scattering (experiments) | Flavor physics | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory. Journal Article

PHYSICAL REVIEW LETTERS, ISSN 0031-9007, 05/2017, Volume 118, Issue 18. Journal Article

6. Measurement of the η c (1S) production cross-section in proton–proton collisions via the decay η c (1S) → pp. European Physical Journal C, ISSN 1434-6044, 07/2015, Volume 75, Issue 7, pp. 1 - 12. The production of the η[subscript c](1S) state in proton-proton collisions is probed via its decay to the p[bar over p] final state with the LHCb detector, in... Subjects: Physics and Astronomy (miscellaneous) | 13.25.Gv | 13.85.Ni | ROOT-S=7 TEV | 14.40.Pq | High Energy Physics - Experiment | 12.38.Qk | Transverse Momentum, Systematic Uncertainty, Kinematic Region, Uncertainty Component, Hadron Decay | Heavy quarkonia | [PHYS.HEXP]Physics [physics]/High Energy Physics - Experiment [hep-ex] | Hadronic decays of J/ψ Υ and other quarkonia | LHCb | Quantum chromodynamics: Experimental tests | Engineering (miscellaneous) | Hadron-induced high- and super-high-energy interactions (energy > 10 GeV): Inclusive production with identified hadrons. Journal Article

7. Evidence for an $\eta_c(1S)\pi^-$ resonance in $B^0 \rightarrow \eta_c(1S) K^+\pi^-$ decays. The European Physical Journal C, ISSN 1434-6044, 12/2018, Volume 78, Issue 12, pp. 1 - 23. A Dalitz plot analysis of $B^0 \rightarrow \eta_c(1S) K^+\pi^-$ decays is performed using data samples of pp collisions... Subjects: Nuclear Physics, Heavy Ions, Hadrons | Measurement Science and Instrumentation | Nuclear Energy | Quantum Field Theories, String Theory | Physics | Elementary Particles, Quantum Field Theory | Astronomy, Astrophysics and Cosmology. Journal Article

European Physical Journal C, ISSN 1434-6044, 12/2018, Volume 78, Issue 12, p. 1019. A Dalitz plot analysis of decays is performed using data samples of collisions collected with the detector at centre-of-mass energies of and, corresponding to... Journal Article

Journal of High Energy Physics, ISSN 1029-8479, 1/2018, Volume 2018, Issue 1, pp. 1 - 18. A search is performed in the invariant mass spectrum of the B c +π+π− system for the excited B c + states B c (21 S 0)+ and B c (23 S 1)+ using a data sample... Subjects: Spectroscopy | B physics | Hadron-Hadron scattering (experiments) | Quantum Physics | Quantum Field Theories, String Theory | Classical and Quantum Gravitation, Relativity Theory | Physics | Elementary Particles, Quantum Field Theory | MESONS | DECAY | ROOT-S=1.8 TEV | MODEL | P(P)OVER-BAR COLLISIONS | WAVE | QCD | LHCB | B-C SPECTROSCOPY | PROSPECTS | PHYSICS, PARTICLES & FIELDS | Confidence intervals | Luminosity | Large Hadron Collider | Cross sections | Physics - High Energy Physics - Experiment. Journal Article

Physics Letters B, ISSN 0370-2693, 06/2019, Volume 793, pp. 212 - 223. A measurement of the production of prompt baryons in Pb–Pb collisions at TeV with the ALICE detector at the LHC is reported. The and were reconstructed at... Journal Article

Journal of High Energy Physics, ISSN 1126-6708, 04/2019, Volume 2019, Issue 4, pp. 1 - 18. Journal Article
ISSN: 1531-3492, eISSN: 1553-524X. Discrete & Continuous Dynamical Systems - B, September 2015, Volume 20, Issue 7. Abstract: Linear scalar differential equations with distributed delays appear in the study of the local stability of nonlinear differential equations with feedback, which are common in biology and physics. Negative feedback loops tend to promote oscillations around steady states, and their stability depends on the particular shape of the delay distribution. Since in applications the mean delay is often the only reliable information available about the distribution, it is desirable to find conditions for stability that are independent from the shape of the distribution. We show here that for a given mean delay, the linear equation with distributed delay is asymptotically stable if the associated differential equation with a discrete delay is asymptotically stable. We illustrate this criterion on a compartment model of hematopoietic cell dynamics to obtain sufficient conditions for stability. Abstract: Let \( C[0,1] \) be the space of continuous functions on the unit interval \( [0,1] \). A cosine family $\{C(t), t \in \mathbb{R}\}$ in $C[0,1]$ is said to be Laplace-operator generated if its generator is a restriction of the Laplace operator $L\colon f \mapsto f''$ to a suitable subset of $C^2[0,1].$ The family is said to preserve a functional $F \in (C[0,1])^*$ if for all $f \in C[0,1]$ and $t \in \mathbb{R}, $ $FC(t)f = Ff.$ We study a class of pairs of functionals such that for each member of this class there is a unique Laplace-operator generated cosine family that preserves both functionals in the pair. Abstract: We study the long-time behavior of the solution of a damped BBM equation $u_t + u_x - u_{xxt} + uu_x + \mathscr{L}_{\gamma}(u) = 0$. The proposed dampings $\mathscr{L}_{\gamma}$ generalize standard ones, such as parabolic ($\mathscr{L}_{\gamma}(u)=-\Delta u$) or weak damping ($\mathscr{L}_{\gamma}(u)=\gamma u$), and allow us to consider a greater range. After establishing local well-posedness in the energy space, we investigate some numerical properties. Abstract: Isagi et al. introduced a model for masting, that is, the intermittent production of flowers and fruit by trees. A tree produces flowers and fruit only when the stored energy exceeds a certain threshold value. If flowers and fruit are not produced, the stored energy increases by a certain fixed amount; if flowers and fruit are produced, the energy is depleted by an amount proportional to the excess stored energy. Thus a one-dimensional model is derived for the amount of stored energy. When the ratio of the amount of energy used for flowering and fruit production in a reproductive year to the excess amount of stored energy before that year is small, the stored energy approaches a constant value as time passes. However, when this ratio is large, the amount of stored energy varies unpredictably, and as the ratio increases the range of possible values for the stored energy increases also. In this article we describe this chaotic behavior precisely with complete proofs. Abstract: In this paper, we study bifurcation of the damped Kuramoto-Sivashinsky equation on an odd periodic interval of period $2\lambda$. We fix the control parameter $\alpha \in (0,1)$ and study how the equation bifurcates to attractors as $\lambda$ varies.
Using the center manifold analysis, we prove that the bifurcated attractors are homeomorphic to $S^1$ and consist of four or eight singular points and their connecting orbits. We verify the structure of the bifurcated attractors by investigating the stability of each singular point. Abstract: This paper is concerned with the stability of the orbits for a nearly integrable volume-preserving mapping. We prove that the nearly integrable volume-preserving mapping possesses quasi-effective stability under the classical KAM-type nondegeneracy; that is, there is an open subset of the phase space whose measure is nearly full, such that the considered mapping is effectively stable on this subset. This establishes a connection between the Nekhoroshev theory and KAM theory. Abstract: In this paper, a discretized multigroup SIR epidemic model is constructed by applying a nonstandard finite difference scheme to a class of continuous-time multigroup SIR epidemic models. This discretization scheme has the same dynamics as the original differential system independent of the time step, such as positivity of the solutions and the stability of the equilibria. A discrete-time analogue of Lyapunov functions is introduced to show that the global asymptotic stability is fully determined by the basic reproduction number $R_0$. Abstract: We consider the de Gennes smectic A free energy with a complex order parameter in order to study the influence of magnetic fields on the smectic layers in the strong field limit as well as near the critical field. In previous work by the authors [6], the critical field and a description of the layer undulations at the instability were obtained using $\Gamma$-convergence and bifurcation theory. It was proved that the critical field is lowered by a factor of $\sqrt{\pi}$ compared to the classical Helfrich-Hurault theory by using natural boundary conditions for the complex order parameter, but still with a strong anchoring condition for the director. In this paper, we present numerical simulations for undulations at the critical field as well as the layer and director configurations well above the critical field. We show that the estimate of the critical field and the layer configuration at the critical field agree with the analysis in [6]. Furthermore, the changes in smectic order density as well as in the layer and director will be illustrated numerically as the field increases well above the critical field. This captures the melting of the smectic layers along the bounding plates, where the layers are fixed. In the natural case, at a high field, we prove that the directors align with the applied field and the layers are homeotropically aligned in the domain, keeping the smectic order density constant in $L^2$. Abstract: We consider a model originally introduced to study layer-undulated structures in bent-core molecule liquid crystals. We first prove existence of minimizers, then analyze a simplified version used to study how, in columnar phases, the width of the column affects the type of switching which occurs under an applied electric field. We show via $\Gamma$-convergence that as the width of the column tends to infinity, rotation around the tilt cone is favored, provided the coefficient of the coupling term between the polar parameter, the nematic parameter, and the layer normal is large.
Abstract: This paper studies the dynamic behavior of solutions to a modified Lotka-Volterra reaction-diffusion system with homogeneous Neumann boundary conditions, for which a protection zone should be created to prevent the extinction of the prey only if the prey's growth rate is small. We find a critical size of the protection zone, determined by the ratio of the predation rate and the refuge ability, to ensure the existence, uniqueness and global asymptotic stability of positive steady states for a general predator's growth rate $\mu>0$. Below the critical size, the dynamics of the model are similar to the case without protection zones. The known uniqueness results for protection problems with other functional responses, e.g., the Holling II model, the Leslie model, and the Beddington-DeAngelis model, all required the predator's growth rate $\mu>0$ to be large enough. Such a large-$\mu$ assumption is not needed for the uniqueness and asymptotic results for the modified Lotka-Volterra reaction-diffusion system considered in this paper. Abstract: The reaction-diffusion system for an $SIR$ epidemic model with a free boundary is studied. This model describes the transmission of diseases. The existence, uniqueness and estimates of the global solution are discussed first. Then some sufficient conditions for the disease vanishing are given. With the help of investigating the long-time behavior of the solution to the initial and boundary value problem in half space, the long-time behavior of the susceptible population $S$ is obtained for the disease-vanishing case. Abstract: The current paper is devoted to the asymptotic behavior of the stochastic fractional Boussinesq equations (SFBE). The global well-posedness of the SFBE is proved, and the existence of a random attractor for the random dynamical system generated by the SFBE is also provided. Abstract: Using stochastic differential equations with Lévy jumps, this paper studies the effect of environmental stochasticity and random catastrophes on the permanence of Lotka-Volterra facultative systems. Under certain simple assumptions, we establish sufficient conditions for weak permanence in the mean and extinction of the non-autonomous system, respectively. In particular, a necessary and sufficient condition for permanence and extinction of the autonomous system with jump-diffusion is obtained. We generalize some former results under weaker assumptions. Finally, we discuss the biological implications of the main results. Abstract: This paper is concerned with a system of semilinear parabolic equations with two free boundaries, which describe the spreading fronts of an invasive species in a mutualistic ecological model. The advection term is introduced to model the behavior of the invasive species in one-dimensional space. The local existence and uniqueness of a classical solution are obtained and the asymptotic behavior of the free boundary problem is studied. Our results indicate that for small advection, the two free boundaries tend monotonically to finite limits or infinities at the same time, and a spreading-vanishing dichotomy holds; namely, either the expanding environment is limited and the invasive species dies out, or the invasive species spreads to all the new environment and establishes itself in the long run. Moreover, some rough estimates of the spreading speed are also given when spreading happens.
Abstract: The existence and uniqueness of solutions to the boundary-value problem for steady Poiseuille flow of an isothermal, incompressible, nonlinear bipolar viscous fluid in a cylinder of arbitrary cross-section is established. Continuous dependence of solutions, in an appropriate norm, is also established with respect to the constitutive parameters of the bipolar fluid model, as these parameters converge to zero, under the additional assumption that the cylinder has a circular cross-section. Abstract: In this paper, we study a PDE model of two species competing for a single limiting nutrient resource in a chemostat, in which one microbial species excretes a toxin that increases the mortality of the other. Our goal is to understand the role of spatial heterogeneity and allelopathy in blooms of harmful algae. We first demonstrate that the two-species system and its single-species subsystem satisfy a mass conservation law that plays an important role in our analysis. We investigate the possibilities of bistability and coexistence for the two-species system by appealing to the method of topological degree in cones and the theory of uniform persistence. Numerical simulations confirm the theoretical results. Abstract: In this paper, we establish the $p$-th moment exponential stability and quasi-sure exponential stability of the solutions to impulsive stochastic differential equations driven by $G$-Brownian motion (IGSDEs in short) by means of the $G$-Lyapunov function method. An example is presented to illustrate the efficiency of the obtained results. Abstract: In petroleum engineering, the well is usually treated as a point or line source, since its radius is much smaller than the scale of the whole reservoir. In this paper, we consider the modeling error of this treatment for unsteady flow in porous media. Abstract: Over 40 years ago, M. Budyko and W. Sellers independently introduced low-order climate models that continue to play an important role in the mathematical modeling of climate. Each model has one spatial variable, and each was introduced to investigate the role ice-albedo feedback plays in influencing surface temperature. This paper serves in part as a tutorial on the Budyko-Sellers model, with particular focus placed on the coupling of this model with an ice sheet that is allowed to respond to changes in temperature, as introduced in recent work by E. Widiasih. We review known results regarding the dynamics of this coupled model, with both continuous ("Sellers-type") and discontinuous ("Budyko-type") equations. We also introduce two new Budyko-type models that are highly effective in modeling the extreme glacial events of the Neoproterozoic Era. We prove in each case the existence of a stable equilibrium solution for which the ice sheet edge rests in tropical latitudes. Mathematical tools used in the analysis include geometric singular perturbation theory and Filippov's theory of differential inclusions. Abstract: In this paper, we further investigate the global stability of dengue transmission models. Using persistence theory, it is shown that the disease persists uniformly in the system when the basic reproduction number is larger than unity. By constructing suitable Lyapunov functions and using LaSalle's Invariance Principle, we show that the unique endemic equilibrium of the model is globally asymptotically stable whenever it exists.
Abstract: This work concerns the problem associated with an averaging principle for two-time-scale stochastic partial differential equations (SPDEs) driven by cylindrical Wiener processes and Poisson random measures. Under suitable dissipativity conditions, the existence of an averaging equation eliminating the fast variable for the coupled system is proved, and as a consequence, the system can be reduced to a single SPDE with a modified coefficient. Moreover, it is shown that the slow component converges strongly in mean square to the solution of the corresponding averaging equation. Abstract: This paper investigates the stochastic averaging of slow-fast dynamical systems driven by fractional Brownian motion with Hurst parameter $H$ in the interval $(\frac{1}{2},1)$. We establish an averaging principle by which the obtained simplified systems (the so-called averaged systems) can be used to approximate the original systems through their solutions. Here, the solutions of the averaged equations for the slow variables, which no longer involve the fast variables, converge in mean square to the slow components of the original slow-fast dynamical systems. Therefore, dimension reduction is achieved, since the solutions of the uncoupled averaged equations can substitute for those of the coupled equations of the original slow-fast dynamical systems; namely, the asymptotic dynamics of the solutions are obtained by the proposed stochastic averaging approach. Abstract: The paper is concerned with a diffusive food chain model subject to homogeneous Robin boundary conditions, which models the trophic interactions of three levels. Using fixed point index theory, we obtain existence and uniqueness results for coexistence states. Moreover, the existence of the global attractor and the extinction for the time-dependent model are established under certain assumptions. Some numerical simulations are done to complement the analytical results.
If I understand correctly, you are just asking about the relation between energy and distances in both the radiation and matter (and cosmological constant) dominated eras of the expansion of the universe. Consider the Einstein equation $$ G_{\mu\nu} = 8\pi G T_{\mu\nu} \ ,$$ where $G$ is Newton's constant. In a FLRW universe $G_{\mu\nu}$ is diagonal and, using standard cosmology (homogeneity and isotropy), the energy-momentum tensor takes the form $T_{\mu\nu}=\text{diag}(\rho,-p,-p,-p)$, where $\rho$ is the energy density and $p$ the pressure. The $00$ component of the Einstein equation is the sum of the inverse squared spatial curvature radius $R_c=\pm \frac{a}{\sqrt{|k|}}$ and the inverse squared Hubble radius $R_H$: $$ G_{00} = 3 \left[\frac{1}{R_c^2} + \frac{1}{R_H^2} \right]=3\left[\frac{k}{a^2} + \left(\frac{\dot{a}}{a}\right)^2\right]=8\pi G\rho \ , $$ where $k$ is the (constant) curvature, $a$ the scaling factor, and the factor $3$ comes from the three spatial dimensions. The ratio $\frac{\dot{a}}{a}=H$ is called the Hubble constant (it is not really a constant, since $a=a(t)$). We rewrite $$ H^2= \frac{8\pi G}{3} \rho - \frac{k}{a^2} \ . $$ From the Friedmann equations we get the energy conservation equation $$\dot{\rho}=-3\frac{\dot{a}}{a}(\rho+p) \ . $$ Now we solve this equation in two scenarios: Radiation domination The system behaves like ultrarelativistic matter, where we know from statistical thermodynamics that $p=\frac{\rho}{3}$. Solving $$ \dot{\rho} = -3\frac{\dot{a}}{a}\left(1+\frac{1}{3}\right)\rho = -4 \frac{\dot{a}}{a}\rho $$ we see that $\rho \sim a^{-4}$ (the energy density scales as $a^{-4}$). And since $\rho \sim H^2$ (for a flat universe) we get $a(t) \sim t^{1/2}$. Matter domination The system consists of nonrelativistic matter, where the negligible kinetic energy is expressed as $p=0$. The solution to $$ \dot{\rho} = -3 \frac{\dot{a}}{a}\rho $$ is $\rho \sim a^{-3}$ and thus $a(t) \sim t^{2/3}$. $\Lambda$ domination In a cosmological constant, or $\Lambda$, dominated universe, the Einstein equations take the form $$ G_{\mu\nu} + \Lambda g_{\mu\nu} = 8 \pi G T_{\mu\nu} \ , $$ where $g_{\mu\nu}$ is the metric. We follow the same steps as above and get $$ H^2= \frac{8\pi G}{3}\rho - \frac{k}{a^2} + \frac{\Lambda}{3} \ . $$ If the last term dominates, we have $$ \frac{\dot{a}}{a} \sim \text{constant} \ , $$ which means that $\dot{a}\sim a$ and thus $a(t) \sim \exp\left[\sqrt{\tfrac{\Lambda}{3}}\, t\right]$. The last thing you probably want to know is how to convert energies into temperatures and vice versa. Energy is often measured in electronvolts, where $1\ eV= 1.602 \cdot 10^{-19}\ J$. Using the Boltzmann constant and $1 \frac{eV}{k_B} = 11\,602\ K$, we can now calculate temperatures from energies and vice versa.
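If it helps, the power-law scalings can be checked numerically. Below is a small sketch (toy units chosen for the sketch: $8\pi G/3 = 1$, $\rho_0 = 1$, flat $k=0$, so that $\dot a = a\sqrt{\rho} = a^{1-n/2}$ for $\rho \sim a^{-n}$), which fits the growth exponent of $a(t)$:

```python
import numpy as np
from scipy.integrate import solve_ivp

# For rho ~ a^(-n), the flat Friedmann equation gives adot = a^(1 - n/2),
# whose large-t solution is a ~ t^(2/n): t^(1/2) for radiation (n = 4)
# and t^(2/3) for matter (n = 3).
for n, label in [(4, "radiation"), (3, "matter")]:
    sol = solve_ivp(lambda t, a: a ** (1 - n / 2), (1.0, 1000.0), [1.0],
                    rtol=1e-10, dense_output=True)
    t = np.array([10.0, 1000.0])
    a = sol.sol(t)[0]
    slope = np.log(a[1] / a[0]) / np.log(t[1] / t[0])  # fitted exponent
    print(f"{label}: measured exponent ~ {slope:.3f}, expected {2 / n:.3f}")
```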
Given an integer $N$ that you want to factor, GNFS starts by selecting a monic irreducible polynomial $f \in \mathbb{Z}[X]$ and an integer $m$ such that $f(m) \equiv 0 \text{ mod } N$. In practice, if $m$ is chosen first then $f$ can just be chosen to be the base-$m$ expansion of $N$, which is simple to compute. But how is $m$ chosen relative to $N$? For anyone who's interested - from this paper I found that it's best to pick the degree of the polynomial you're going to be using based on the number of bits in $N$. Let $k = \log_2 N$. Briggs gives experimental bounds $d= 5$ for $k \geq 110$, $d= 4$ for $110 > k \geq 80$, $d= 3$ for $80 > k \geq 50$, and doesn't say for $k < 50$, probably because GNFS is not the fastest for numbers that small. Then you pick an $m$ such that $m^d$ is close to $N$ (so $m \approx N^{1/d}$). Then you find the base-$m$ expansion of $N$ and obtain $f$ from that. This guarantees that $f(m) \equiv 0$ mod $N$
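For illustration, here is a toy version of that recipe (the degree cutoffs are the Briggs bounds quoted above; the float-based root is adequate at these toy sizes, but a serious implementation would compute an exact integer $d$-th root):

```python
def select_poly(N):
    """Pick d from the bit length of N, m ~ N^(1/d), then read off the
    coefficients of f as the base-m digits of N."""
    k = N.bit_length()                  # k ~ log2(N)
    d = 5 if k >= 110 else 4 if k >= 80 else 3
    m = round(N ** (1.0 / d))           # m^d close to N (toy-size precision)
    digits = []                         # base-m expansion of N
    n = N
    while n:
        digits.append(n % m)            # coefficient of m^i
        n //= m
    return m, digits                    # f(X) = sum(digits[i] * X^i)

N = 2 ** 64 + 1
m, coeffs = select_poly(N)
# By construction f(m) = N, hence f(m) = 0 mod N:
assert sum(c * m ** i for i, c in enumerate(coeffs)) % N == 0
```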
The standard deviation represents dispersion due to random processes. Specifically, many physical measurements which are expected to be due to the sum of many independent processes have normal (bell curve) distributions. The normal probability distribution is given by:$$\Large Y = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{\left(x-\mu\right)^2}{2\sigma^2}}$$ where $Y$ is the probability density at a value $x$ given a mean $\mu$ and $\sigma$… the standard deviation! In other words, the standard deviation is a term that arises out of independent random variables being summed together. So, I disagree with some of the answers given here - standard deviation isn't just an alternative to mean deviation which "happens to be more convenient for later calculations". Standard deviation is the right way to model dispersion for normally distributed phenomena. If you look at the equation, you can see that the standard deviation more heavily weights larger deviations from the mean. Intuitively, you can think of the mean deviation as measuring the actual average deviation from the mean, whereas the standard deviation accounts for a bell-shaped aka "normal" distribution around the mean. So if your data are normally distributed, the standard deviation tells you that if you sample more values, ~68% of them will be found within one standard deviation around the mean. On the other hand, if you have a single random variable, the distribution might look like a rectangle, with an equal probability of values appearing anywhere within a range. In this case, the mean deviation might be more appropriate. TL;DR if you have data that are due to many underlying random processes or which you simply know to be distributed normally, use standard deviation.
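A quick empirical check of that ~68% figure (a sketch; the mean and standard deviation below are arbitrary illustrative choices):

```python
import random

random.seed(0)
mu, sigma = 10.0, 2.5
xs = [random.gauss(mu, sigma) for _ in range(100_000)]
# Fraction of samples within one standard deviation of the mean:
inside = sum(1 for x in xs if abs(x - mu) <= sigma) / len(xs)
print(f"fraction within one sigma: {inside:.3f}")  # ~0.683
```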
Chanchal Kumar Articles written in Proceedings – Mathematical Sciences Volume 120 Issue 2 April 2010 pp 163-168 The aim of this paper is to study homological properties of deficiently extremal Cohen–Macaulay algebras. Eagon–Reiner showed that the Stanley–Reisner ring of a simplicial complex has a linear resolution if and only if the Alexander dual of the simplicial complex is Cohen–Macaulay. An extension of a special case of the Eagon–Reiner theorem is obtained for deficiently extremal Cohen–Macaulay Stanley–Reisner rings. Volume 124 Issue 1 February 2014 pp 1-15 An Alexander dual of a multipermutohedron ideal has many combinatorial properties. The standard monomials of an Artinian quotient of such a dual correspond bijectively to some 𝜆-parking functions, and many interesting properties of these Artinian quotients are obtained by Postnikov and Shapiro ( Volume 126 Issue 4 October 2016 pp 479-500 Research Article Multipermutohedron ideals have rich combinatorial properties. An explicit combinatorial formula for the multigraded Betti numbers of a multipermutohedron ideal and their Alexander duals are known. Also, the dimension of the Artinian quotient of an Alexander dual of a multipermutohedron ideal is the number of generalized parking functions. In this paper, monomial ideals which are certain variants of multipermutohedron ideals are studied. Multigraded Betti numbers of these variant monomial ideals and their Alexander duals are obtained. Further, many interesting combinatorial properties of multipermutohedron ideals are extended to these variant monomial ideals. Volume 129 Issue 1 February 2019 Article ID 0010 Research Article Let $S$ (or $T$) be the set of permutations of $[n] = \{1, . . . , n\}$ avoiding 123 and 132 patterns (or avoiding 123, 132 and 213 patterns). The monomial ideals $I_{S} = \langle\rm{x}^\sigma = \prod^{n}_{i=1}x^{\sigma(i)}_{i} : \sigma \in S\rangle$ and $I_{T} = \langle\rm{x}^{\sigma} : \sigma \in T \rangle$ in the polynomial ring $R = k[x_{1}, . . . , x_{n}]$ over a field $k$ have many interesting properties. The Alexander dual $I^{[n]}_{S}$ of $I_{S}$ with respect to $\bf{n} = (n, . . . , n)$ has the minimal cellular resolution supported on the order complex $\Delta(\Sigma_{n})$ of a poset $\Sigma_{n}$. The Alexander dual $I^{[n]}_{T}$ also has the minimal cellular resolution supported on the order complex $\Delta(\tilde{\Sigma}_{n})$ of a poset $\tilde{\Sigma}_{n}$. The number of standard monomials of the Artinian quotient $\frac{R}{I^{[n]}_{S}}$ is given by the number of
Homework Statement: Two identical audio speakers, connected to the same amplifier, produce monochromatic sound waves with a frequency that can be varied between 300 and 600 Hz. The speed of sound is 340 m/s. You find that, where you are standing, you hear minimum-intensity sound. a) Explain why you hear minimum-intensity sound. b) If one of the speakers is moved 39.8 cm toward you, the sound you hear has maximum intensity. What is the frequency of the sound? c) How much closer to you from the position in part (b) must the speaker be moved to the next position where you hear maximum intensity? Homework Equations: interference I have no idea how to proceed. I started with ## frequency=\frac {speed\space of\space sound} \lambda \space = \frac {340 \frac m s} \lambda ## then ##d \space sin\alpha \space = \space \frac \lambda 2\space ## but now I'm stuck. Any help please?
I'm trying to compute the antiderivative $$\int \frac{y^2}{\sqrt{r^2 - y^2}} \, dy.$$ It is proving fairly tricky (for me). Here is Wolfram|Alpha's solution: $$\int \frac{y^2}{\sqrt{r^2 - y^2}} \, dy = \frac{1}{2} \left( r^2 \arctan \left( \frac{y}{\sqrt{r^2 - y^2}} \right) - y\sqrt{r^2 - y^2} \right).$$ The provided step-by-step explanation is: Substitute $y = r \sin u$ and $dy = r \cos u \, du$. Then $\sqrt{r^2 - y^2} = \sqrt{r^2 - r^2 \sin^2 u} = r \cos u$ and $u = \arcsin(y/r)$. The integral becomes $$\int r^2 \sin^2 u \, du = r^2 \int \sin^2 u \, du.$$ From here, the antiderivative is pretty commonly known; it can be derived with the double-angle formula for cosine. This becomes $$\frac{1}{2} r^2 u - \frac{1}{4} r^2 \sin(2u).$$ Use $\cos^2 u = 1 - \sin^2 u$ and $\sin(2u) = 2\sin u\cos u$ to express this as $$\frac{1}{2} r^2 u - \frac{1}{2} r^2 \sin(u) \sqrt{1 - \sin^2 u}.$$ Back-substitute $u = \arcsin(y/r)$ to get $$\frac{1}{2} r^2 \arcsin(y/r) - \frac{1}{2} r y \sqrt{1 - \frac{y^2}{r^2}}.$$ For positive reals, this is equivalent to $$\frac{1}{2} \left(r^2 \arctan{\frac{y}{\sqrt{r^2 - y^2}}} - y\sqrt{r^2 - y^2}\right).$$ In my opinion, this is a mess, both in derivation and result, and it's not the first time I've seen Wolfram|Alpha overcomplicate an integral. However, I can't figure out how to solve this myself. I tried $u$-substitution, but the $\Phi(x)$ that might have worked was not injective on the domain, so that didn't work. So, is there either or both of a cleaner solution to this antiderivative? or a cleaner method of derivation? …it would be really nice if it didn't have any trigonometric substitution…
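One route that avoids starting with a trigonometric substitution (sketched here; the $\arcsin$ still enters through the standard antiderivative of $\sqrt{r^2-y^2}$) is integration by parts. Take $u = y$ and $dv = \frac{y}{\sqrt{r^2-y^2}}\,dy$, so $v = -\sqrt{r^2-y^2}$: $$\int \frac{y^2}{\sqrt{r^2-y^2}}\,dy = -y\sqrt{r^2-y^2} + \int \sqrt{r^2-y^2}\,dy.$$ Using the known $\int \sqrt{r^2-y^2}\,dy = \frac{y}{2}\sqrt{r^2-y^2} + \frac{r^2}{2}\arcsin\frac{y}{r}$, this collapses to $$\frac{r^2}{2}\arcsin\frac{y}{r} - \frac{y}{2}\sqrt{r^2-y^2},$$ which matches the back-substituted result above.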
Electronic Journal of Statistics, Volume 9, Number 2 (2015), 1799-1825. Consistency of the drift parameter estimator for the discretized fractional Ornstein–Uhlenbeck process with Hurst index $H\in(0,\frac{1}{2})$. Abstract: We consider the Langevin equation which contains an unknown drift parameter $\theta$ and where the noise is modeled as fractional Brownian motion with Hurst index $H\in(0,\frac{1}{2})$. The solution corresponds to the fractional Ornstein–Uhlenbeck process. We construct an estimator, based on discrete observations in time, of the unknown drift parameter, that is similar in form to the maximum likelihood estimator for the drift parameter in the Langevin equation with standard Brownian motion. It is assumed that the interval between observations is $n^{-1}$, i.e. tends to zero (high-frequency data), and the number of observations increases to infinity as $n^{m}$ with $m>1$. It is proved that for strictly positive $\theta$ the estimator is strongly consistent for any $m>1$, while for $\theta\leq0$ it is consistent when $m>\frac{1}{2H}$. Received: January 2015. doi:10.1214/15-EJS1062. MR3391120. Zbl 1326.60048. Subjects: Primary 60G22 (fractional processes, including fractional Brownian motion), 60F15 (strong theorems), 60F25 ($L^p$-limit theorems), 62F10 (point estimation), 62F12 (asymptotic properties of estimators). Citation: Kubilius, Kęstutis; Mishura, Yuliya; Ralchenko, Kostiantyn; Seleznjev, Oleg. Consistency of the drift parameter estimator for the discretized fractional Ornstein–Uhlenbeck process with Hurst index $H\in(0,\frac{1}{2})$. Electron. J. Statist. 9 (2015), no. 2, 1799-1825. doi:10.1214/15-EJS1062. https://projecteuclid.org/euclid.ejs/1440507394
Rank Abundance Graphs Species abundance distribution can also be expressed through rank abundance graphs. A common approach is to plot some measure of species abundance against their rank order of abundance. Such a plot allows the user to compare not only relative richness but also evenness. Species abundance models (also called abundance curves) use all available community information to create a mathematical model that describes the number and relative abundance of all species in a community. These models include the log normal, geometric, logarithmic, and MacArthur's broken-stick model. Many ecologists use these models as a way to express resource partitioning, where the abundance of a species is equivalent to the percentage of space it occupies (Magurran 1988). Abundance curves offer an alternative to single-number diversity indices by graphically describing community structure. Figure \(\PageIndex{1}\). Generic rank-abundance diagram of three common mathematical models used to fit species abundance distributions: Motomura's geometric series, Fisher's logseries, and Preston's log-normal series (modified from Magurran 1988) by Aedrake09. Let's compare the indices and a very simple abundance distribution in two different situations. Stands A and B both have the same number of species (same richness), but the number of individuals in each species is more similar in Stand A (greater evenness). In Stand B, species 1 has the most individuals, with the remaining nine species having a substantially smaller number of individuals per species. Richness, the complement of Simpson's D, and Shannon's H' are computed for both stands. These two diversity indices incorporate both richness and evenness. In the abundance distribution graph, richness can be compared on the x-axis and evenness by the shape of the distribution. Because Stand A displays greater evenness, it has greater overall diversity than Stand B. Notice that Stand A has higher values for both Simpson's and Shannon's indices compared to Stand B. Figure \(\PageIndex{2}\). Two stands comparing richness, Simpson's D, and Shannon's index. Indices of diversity vary in computation and interpretation, so it is important to make sure you understand which index is being used to measure diversity. It is unsuitable to compare diversity between two areas when different indices are computed for each area. However, when multiple indices are computed for each area, the sampled areas will rank similarly in diversity as measured by the different indices. Notice in this previous example both Simpson's and Shannon's index rank Stand A as more diverse and Stand B as less diverse. Similarity between Sites There are also indices that compare the similarity (and dissimilarity) between sites. The ideal objective is to express the ecological similarity of different sites; however, it is important to identify the aim or focus of the investigation in order to select the most appropriate index. While many indices are available, van Tongeren (1995) states that most of the indices do not have a firm theoretical basis and suggests that practical experience should guide the selection of available indices. The Jaccard index (1912) compares two sites based on the presence or absence of species and is used with qualitative data (e.g., species lists). It is based on the idea that the more species both sites have in common, the more similar they are.
The Jaccard index is the proportion of species, out of the total species list of the two sites, which is common to both sites: $$SJ = \frac {c} {(a + b + c)}$$ where SJ is the similarity index, c is the number of shared species between the two sites, and a and b are the number of species unique to each site. Sørenson (1948) developed a similarity index that is frequently referred to as the coefficient of community (CC): $$CC = \frac {2c} {(a + b + 2c)}$$ As you can see, this index differs from Jaccard's in that the number of species shared between the two sites is divided by the average number of species instead of the total number of species for both sites. For both indices, the higher the value, the more ecologically similar two sites are. If quantitative data are available, a similarity ratio (Ball 1966) or a percentage similarity index, such as Gauch (1982), can be computed. Not only do these indices compare the number of similar and dissimilar species present between two sites, but they also incorporate abundance. The similarity ratio is: $$SR_{ij} = \dfrac {\sum y_{ki}y_{kj}}{\sum y_{ki}^2 +\sum y_{kj}^2 -\sum(y_{ki}y_{kj})}$$ where \(y_{ki}\) is the abundance of the kth species at site i (sites i and j are compared). Notice that this equation resolves to Jaccard's index when just presence or absence data are available. The percent similarity index is: $$PS_{ij} = \dfrac {200\sum min (y_{ki},y_{kj})} {\sum y_{ki}+\sum y_{kj}}$$ Again, notice how this equation resolves to Sørenson's index with qualitative data only. So let's look at a simple example of how these indices allow us to compare similarity between three sites. The following example presents hypothetical data on species abundance from three different sites containing seven different species (A-G):

Species:  A  B  C  D  E  F  G
Site 1:   4  0  0  1  1  3  1
Site 2:   0  1  0  0  4  1  0
Site 3:   1  0  0  1  0  1  3

Let's begin by computing Jaccard's and Sørenson's indices for the three comparisons (site 1 vs. site 2, site 1 vs. site 3, and site 2 vs. site 3). \(SJ_{1,2}=\frac {2}{(3+1+2)}=0.33\) \(SJ_{1,3} = \frac {4}{(1+0+4)}=0.80\) \(SJ_{2,3} =\frac {1}{(2+3+1)} = 0.17\) \(CC_{1,2}=\frac {2(2)}{(3+1+2(2))} = 0.50\) \(CC_{1,3} =\frac {2(4)}{(1+0+2(4))} = 0.89\) \(CC_{2,3} =\frac {2(1)}{(2+3+2(1))} = 0.29\) Both of these qualitative indices declare that sites 1 and 3 are the most similar and sites 2 and 3 are the least similar. Now let's compute the similarity ratio and the percent similarity index for the same site comparisons.
$$SR_{1,2}=\dfrac {(4\times 0)+(0\times 1) +(0\times 0)+(1\times 0)+(1\times 4)+(3\times 1)+(1\times 0)}{(4^2+0^2+0^2+1^2+1^2+3^2+1^2)+(0^2+1^2+0^2+0^2+4^2+1^2+0^2)-\left[(4\times 0)+(0\times 1)+(0\times 0)+(1\times 0)+(1\times 4)+(3\times 1)+(1\times 0)\right]}=\dfrac{7}{28+18-7}= 0.18$$

$$SR_{1,3}=\dfrac {(4\times 1)+(0\times 0)+(0\times 0)+(1\times 1)+(1\times 0)+(3\times 1)+(1\times 3)}{(4^2 +0^2+0^2+1^2+1^2+3^2+1^2)+(1^2+0^2+0^2+1^2+0^2+1^2+3^2)-\left[(4\times 1)+(0\times 0)+(0\times 0)+(1\times 1)+(1\times 0)+(3\times 1)+(1\times 3)\right]}=\dfrac{11}{28+12-11}= 0.38$$

$$SR_{2,3}=\dfrac {(0\times 1)+(1\times 0)+(0\times 0)+(0\times 1) +(4\times 0) +(1\times 1) +(0\times 3)}{(0^2+1^2+0^2+0^2+4^2+1^2+0^2)+(1^2+0^2+0^2+1^2+0^2+1^2+3^2)-\left[(0\times 1)+(1\times 0)+(0\times 0)+(0\times 1)+(4\times 0)+(1\times 1)+(0\times 3)\right]}=\dfrac{1}{18+12-1}= 0.03$$

$$PS_{1,2}=\dfrac {200(0+0+0+0+1+1+0)}{(4+0+0+1+1+3+1)+(0+1+0+0+4+1+0)}=\frac{400}{16}=25.0$$

$$PS_{1,3}=\dfrac {200(1+0+0+1+0+1+1)}{(4+0+0+1+1+3+1)+(1+0+0+1+0+1+3)} = \frac{800}{16}=50.0$$

$$PS_{2,3}=\dfrac {200(0+0+0+0+0+1+0)}{(0+1+0+0+4+1+0)+(1+0+0+1+0+1+3)} = \frac{200}{12}=16.7$$

A matrix of percent similarity values allows for easy interpretation (especially when comparing more than three sites).

Table \(\PageIndex{1}\). A matrix of percent similarity for three sites.

| | Site 1 | Site 2 | Site 3 |
|--------|--------|--------|--------|
| Site 1 | — | 25.0 | 50.0 |
| Site 2 | 25.0 | — | 16.7 |
| Site 3 | 50.0 | 16.7 | — |

The quantitative indices reach the same conclusions as the qualitative indices: sites 1 and 3 are the most similar ecologically, sites 2 and 3 are the least similar, and site 2 is the most unlike the other two sites.

Habitat Suitability Index (HSI)

In 1980, the U.S. Fish and Wildlife Service (USFWS) developed a procedure for documenting predicted impacts to fish and wildlife from proposed land and water resource development projects. The Habitat Evaluation Procedures (HEP) (Schamberger and Farmer 1978) were developed in response to the need to document the non-monetary value of fish and wildlife resources. HEP incorporates population and habitat theories for each species and is based on the assumption that habitat quality and quantity can be numerically described so that changes to the area could be assessed and compared. It is a species-habitat approach to impact assessment, in which habitat quality for a specific species is quantified using a habitat suitability index (HSI). Habitat suitability index (HSI) models provide a numerical index of habitat quality for a specific species (Schamberger et al. 1982) and in general assume a positive, linear relationship between carrying capacity (number of animals supported by some unit area) and HSI. Today's natural resource manager often faces economically and socially important decisions that will affect not only timber but wildlife and its habitat. HSI models provide managers with tools to investigate the requirements necessary for survival of a species. Understanding the relationships between animal habitat and forest management prescriptions is vital to a more comprehensive approach to managing our natural resources. An HSI model synthesizes habitat use information into a framework appropriate for fieldwork and is scaled to produce an index value between 0.0 (unsuitable habitat) and 1.0 (optimum habitat), with each increment of change being identical to another. For example, a change in HSI from 0.4 to 0.5 represents the same magnitude of change as from 0.7 to 0.8. The HSI values are multiplied by the area of available habitat to obtain Habitat Units (HUs) for individual species. The U.S. Fish and Wildlife Service (USFWS) has documented a series of HSI models for a wide variety of species (FWS/OBS-82/10).
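Before turning to the marten example, the two quantitative indices worked out above can be verified the same way; the abundance vectors are the columns of the example table:

```python
def similarity_ratio(y_i, y_j):
    """SR_ij = sum(y_ki*y_kj) / (sum(y_ki^2) + sum(y_kj^2) - sum(y_ki*y_kj))."""
    cross = sum(a * b for a, b in zip(y_i, y_j))
    return cross / (sum(a * a for a in y_i) + sum(b * b for b in y_j) - cross)

def percent_similarity(y_i, y_j):
    """PS_ij = 200 * sum(min(y_ki, y_kj)) / (sum(y_ki) + sum(y_kj))."""
    return 200 * sum(min(a, b) for a, b in zip(y_i, y_j)) / (sum(y_i) + sum(y_j))

# Abundances of species A-G at the three sites (columns of the table above).
y1 = [4, 0, 0, 1, 1, 3, 1]
y2 = [0, 1, 0, 0, 4, 1, 0]
y3 = [1, 0, 0, 1, 0, 1, 3]

print(round(similarity_ratio(y1, y2), 2),
      round(similarity_ratio(y1, y3), 2),
      round(similarity_ratio(y2, y3), 2))   # 0.18 0.38 0.03
print(round(percent_similarity(y1, y2), 1),
      round(percent_similarity(y1, y3), 1),
      round(percent_similarity(y2, y3), 1)) # 25.0 50.0 16.7
```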
Let's examine a simple HSI model for the marten (Martes americana), which inhabits late successional forest communities in North America (Allen 1982). An HSI model must begin with habitat use information, understanding the species' needs in terms of food, water, cover, reproduction, and range. For this species, the winter cover requirements are more restrictive than the cover requirements for any other season, so it was assumed that if adequate winter cover was available, habitat requirements for the rest of the year would not be limiting. Additionally, all winter habitat requirements are satisfied in boreal evergreen forests. Given this, the researchers identified four crucial winter cover variables to be included in the model.

Figure \(\PageIndex{3}\). Habitat requirements for the marten.

For each of these four winter cover variables (V1, V2, V3, and V4), suitability index graphs were created to examine the relationship between various conditions of these variables and suitable habitat for the marten. A reproduction of the graph for % tree canopy closure is presented below.

Figure \(\PageIndex{4}\). Suitability index graph for percent canopy cover.

Notice that any canopy cover less than 25% results in unacceptable habitat based on this variable alone. However, once 50% canopy cover is reached, the suitability index reaches 1.0 and optimum habitat for this variable is achieved. The following equation combines the life requisite values for the marten using these four variables:

$$(V_1 \times V_2 \times V_3 \times V_4)^{1/2}$$

Since winter cover was the only life requisite considered in this model, the HSI equals the winter cover value. As you can see, the more life requisites included in the model, the more complex the model becomes. While HSI values identify the quality of the habitat for a specific species, wildlife diversity as a whole is a function of the size and spatial arrangement of the treated stands (Porter 1986). Horizontal and structural diversity are important. Generally speaking, the more stands of different character an area contains, the greater the wildlife diversity. The spatial distribution of differing types of stands supports animals that need multiple cover types. In order to promote wildlife species diversity, a manager must develop forest management prescriptions that vary the spatial and temporal patterns of timber reproduction, thereby providing greater horizontal and vertical structural diversity.

Figure \(\PageIndex{5}\): Bird species diversity nesting across a forest to field gradient (After Strelke and Dickson 1980).

Typically, even-aged management reduces vertical structural diversity, but options such as the shelterwood method tend to mitigate this problem. The selection system tends to promote both horizontal and vertical diversity. Integrated natural resource management can be a complicated process, but not an impossible one. Vegetation response to silvicultural prescriptions provides the foundation for understanding the wildlife response. By examining the present characteristics of the managed stands, understanding the future response due to management, and comparing those with the requirements of specific species, we can achieve habitat manipulation together with timber management.
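To make the arithmetic concrete, here is a small sketch of the winter-cover HSI and the Habitat Unit computation. The linear ramp between 25% and 50% canopy closure and the example values for V2–V4 are illustrative assumptions, not values taken from Allen (1982):

```python
def canopy_closure_si(pct):
    """Suitability index V1 for % tree canopy closure: 0 at or below 25%,
    1.0 at or above 50%, with an assumed linear ramp in between."""
    if pct <= 25:
        return 0.0
    if pct >= 50:
        return 1.0
    return (pct - 25) / 25

def winter_cover_hsi(v1, v2, v3, v4):
    """HSI = (V1*V2*V3*V4)**0.5; a zero in any one variable forces HSI = 0."""
    return (v1 * v2 * v3 * v4) ** 0.5

def habitat_units(hsi, area):
    """Habitat Units = HSI x area of available habitat."""
    return hsi * area

v1 = canopy_closure_si(60)                  # 1.0 (above the 50% threshold)
hsi = winter_cover_hsi(v1, 0.8, 0.9, 0.5)   # sqrt(0.36) = 0.6
print(hsi, habitat_units(hsi, 100))         # 0.6, 60.0 HUs over 100 area units
```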
ISSN: 1078-0947 eISSN: 1553-5231

Discrete & Continuous Dynamical Systems - A, March 2007, Volume 19, Issue 1

Abstract: We are interested in a remarkable property of certain nonlinear diffusion equations, which we call blow-down or delayed regularization. The following happens: a solution of one of these equations is shown to exist in some generalized sense, and it is also shown to be non-smooth for some time $ 0 < t < t_1$, after which it becomes smooth and still nontrivial. We use the logarithmic diffusion equation to examine an example of occurrence of this phenomenon starting from data that contain Dirac deltas, which persist for a finite time. The interpretation of the results in terms of diffusion is also unusual: if the process starts with one or several point masses surrounded by a continuous distribution, then the masses decay into the medium over a finite period of time. The study of the phenomenon implies consideration of a new concept of measure solution which seems natural for these diffusion processes.

Abstract: The initial value problem for the $L^{2}$ critical semilinear Schrödinger equation with periodic boundary data is considered. We show that the problem is globally well-posed in $H^{s}(T^{d})$, for $s>4/9$ and $s>2/3$ in 1D and 2D respectively, confirming in 2D a statement of Bourgain in [4]. We use the "$I$-method''. This method allows one to introduce a modification of the energy functional that is well defined for initial data below the $H^{1}(T^{d})$ threshold. The main ingredient in the proof is a "refinement" of the Strichartz estimates that hold true for solutions defined on the rescaled space, $T^{d}_\lambda = R^{d}/{\lambda Z^{d}}$, $d=1,2$.

Abstract: We provide a dynamical portrait of singular-hyperbolic transitive attractors of a flow on a 3-manifold. Our Main Theorem establishes the existence of unstable manifolds for a subset of the attractor which is visited infinitely many times by a residual subset. As a consequence, we prove that the set of periodic orbits is dense, that it is the closure of a unique homoclinic class of some periodic orbit, and that there is an SRB-measure supported on the attractor.

Abstract: In this paper, we attempt to clarify an open problem related to a generalization of the snap-back repeller. Constructing a semi-conjugacy from the finite product of a transformation $f:\mathbb{R}^{n}\rightarrow \mathbb{R}^{n}$ on an invariant set $\Lambda$ to a subshift of finite type on a $w$-symbolic space, we show that the corresponding transformation associated with the generalized snap-back repeller on $\mathbb{R}^{n}$ exhibits chaotic dynamics in the sense of having a positive topological entropy. The argument leading to this conclusion also shows that a certain kind of degenerate transformations, admitting a point in the unstable manifold of a repeller mapping back to the repeller, have positive topological entropies on the orbits of their invariant sets. Furthermore, we present two feasible sufficient conditions for obtaining an unstable manifold. Finally, we provide two illustrative examples to show that chaotic degenerate transformations are omnipresent.

Abstract: In this paper, the dynamics of transcendental meromorphic functions in the one-parameter family $\mathcal{M} = \{ f_{\lambda}(z) = \lambda f(z) : f(z) = \tanh(e^{z}) \ \mbox{for } z \in \mathbb{C},\ \lambda \in \mathbb{R} \setminus \{ 0 \} \}$ is studied.
We prove that there exists a parameter value $\lambda^* \approx -3.2946$ such that the Fatou set of $f_{\lambda}(z)$ is a basin of attraction of a real fixed point for $\lambda > \lambda^*$ and is a parabolic basin corresponding to a real fixed point for $\lambda = \lambda^*$. It is a basin of attraction or a parabolic basin corresponding to a real periodic point of prime period $2$ for $\lambda < \lambda^*$. If $\lambda > \lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ is connected and infinitely connected. Consequently, the singleton components are dense in the Julia set of $f_{\lambda}$ for $\lambda > \lambda^*$. If $\lambda \leq \lambda^*$, it is proved that the Fatou set of $f_{\lambda}$ contains infinitely many pre-periodic components and each component of the Fatou set of $f_{\lambda}$ is simply connected. Finally, it is proved that the Lebesgue measure of the Julia set of $f_{\lambda}$ for $\lambda \in \mathbb{R} \setminus \{ 0 \}$ is zero.

Abstract: This note is a shortened version of my dissertation paper, defended at Stony Brook University in December 2004. It illustrates how dynamic complexity of a system evolves under deformations. The objects I considered are quartic polynomial maps of the interval that are compositions of two logistic maps. In the parameter space $P^{Q}$ of such maps, I considered the algebraic curves corresponding to the parameters for which critical orbits are periodic, and I called such curves left and right bones. Using quasiconformal surgery methods and rigidity, I showed that the bones are simple smooth arcs that join two boundary points. I also analyzed in detail, using kneading theory, how the combinatorics of the maps evolve along the bones. The behavior of the topological entropy function of the polynomials in my family is closely related to the structure of the bone-skeleton. The main conclusion of the paper is that the entropy level-sets in the parameter space that was studied are connected.

Abstract: The upper semi-continuous convergence of approximate attractors for an infinite delay differential equation of logistic type is proved, first for the associated truncated delay equation with finite delay and then for a numerical scheme applied to the truncated equation.

Abstract: We consider second order periodic systems with a nonsmooth potential and an indefinite linear part. We impose conditions under which the nonsmooth Euler functional is unbounded. Then, using a nonsmooth variant of the reduction method and the nonsmooth local linking theorem, we establish the existence of at least two nontrivial solutions.

Abstract: This paper is concerned with the existence and nodal character of the nontrivial solutions for the following equations involving critical Sobolev and Hardy exponents:

$$-\Delta u + u - \mu \frac{u}{|x|^2} = |u|^{2^*-2}u + f(u), \qquad u \in H^1_r (\mathbb{R}^N), \qquad (1)$$

where $2^* = \frac{2N}{N-2}$ is the critical Sobolev exponent for the embedding $H^1_r (\mathbb{R}^N) \rightarrow L^{2^*}(\mathbb{R}^N)$, $\mu \in [0, \ (\frac{N-2}{2})^2)$ and $f: \mathbb{R} \rightarrow \mathbb{R}$ is a function satisfying some conditions. The main results obtained in this paper are that there exists a nontrivial solution of equation (1) provided $N\ge 4$ and $\mu \in [0, \ (\frac{N-2}{2})^2-1]$, and that there exists at least a pair of nontrivial solutions $u^+_k$, $u^-_k$ of problem (1) for each $k \in \mathbb{N} \cup \{0\}$ such that both $u^+_k$ and $u^-_k$ possess exactly $k$ nodes provided $N\ge 6$ and $\mu \in [0, \ (\frac{N-2}{2})^2-4]$.
In this note, we prove that all $2 \times 2$ monotone grid classes are finitely based, i.e., defined by a finite collection of minimal forbidden permutations. This follows from a slightly more general result about certain $2 \times 2$ (generalized) grid classes having two monotone cells in the same row. Section: Permutation Patterns Permutations that avoid given patterns have been studied in great depth for their connections to other fields of mathematics, computer science, and biology. From a combinatorial perspective, permutation patterns have served as a unifying interpretation that relates a vast array of combinatorial structures. In this paper, we introduce the notion of patterns in inversion sequences. A sequence $(e_1,e_2,\ldots,e_n)$ is an inversion sequence if $0 \leq e_i<i$ for all $i \in [n]$. Inversion sequences of length $n$ are in bijection with permutations of length $n$; an inversion sequence can be obtained from any permutation $\pi=\pi_1\pi_2\ldots \pi_n$ by setting $e_i = |\{j \ | \ j < i \ {\rm and} \ \pi_j > \pi_i \}|$. This correspondence makes it a natural extension to study patterns in inversion sequences much in the same way that patterns have been studied in permutations. This paper, the first of two on patterns in inversion sequences, focuses on the enumeration of […] Section: Permutation Patterns We have extended classical pattern avoidance to a new structure: multiple task-precedence posets whose Hasse diagrams have three levels, which we will call diamonds. The vertices of each diamond are assigned labels which are compatible with the poset. A corresponding permutation is formed by reading these labels by increasing levels, and then from left to right. We used Sage to form enumerative conjectures for the associated permutations avoiding collections of patterns of length three, which we then proved. We have discovered a bijection between diamonds avoiding 132 and certain generalized Dyck paths. We have also found the generating function for descents, and therefore the number of avoiders, in these permutations for the majority of collections of patterns of length three. An interesting application of this work (and the motivating example) can be found when task-precedence posets represent warehouse package fulfillment by robots, in which case avoidance of both 231 and 321 […] Section: Permutation Patterns Let $\mathcal{C}$ be a permutation class that does not contain all layered permutations or all colayered permutations. We prove that there is a constant $c$ such that every permutation in $\mathcal{C}$ of length $n$ contains a monotone subsequence of length $cn$. Section: Permutation Patterns Caffrey, Egge, Michel, Rubin and Ver Steegh recently introduced snow leopard permutations, which are the anti-Baxter permutations that are compatible with the doubly alternating Baxter permutations. Among other things, they showed that these permutations preserve parity, and that the number of snow leopard permutations of length $2n-1$ is the Catalan number $C_n$. In this paper we investigate the permutations that the snow leopard permutations induce on their even and odd entries; we call these the even threads and the odd threads, respectively. We give recursive bijections between these permutations and certain families of Catalan paths. We characterize the odd (resp. even) threads which form the other half of a snow leopard permutation whose even (resp. 
odd) thread is layered in terms of pattern avoidance, and we give a constructive bijection between the set of permutations of length $n$ which are both even threads and odd threads and the set of peakless Motzkin paths of length […] Section: Permutation Patterns A permutation $\tau$ in the symmetric group $S_j$ is minimally overlapping if any two consecutive occurrences of $\tau$ in a permutation $\sigma$ can share at most one element. Bóna \cite{B} showed that the proportion of minimal overlapping patterns in $S_j$ is at least $3 - e$. Given a permutation $\sigma$, we let $\text{Des}(\sigma)$ denote the set of descents of $\sigma$. We study the class of permutations $\sigma \in S_{kn}$ whose descent set is contained in the set $\{k, 2k, \ldots, (n-1)k\}$. For example, up-down permutations in $S_{2n}$ are the permutations $\sigma$ such that $\text{Des}(\sigma) = \{2,4, \ldots, 2n-2\}$. There are natural analogues of the minimal overlapping permutations for such classes of permutations and we study the proportion of minimal overlapping patterns for each such class. We show that the proportion of minimal overlapping permutations in such classes approaches $1$ as $k$ goes to infinity. We also study the proportion of […] Section: Permutation Patterns In this paper, we present two new results of layered permutation densities. The first one generalizes theorems from Hästö (2003) and Warren (2004) to compute the permutation packing of permutations whose layer sequence is~$(1^a,\ell_1,\ell_2,\ldots,\ell_k)$ with~$2^a-a-1\geq k$ (and similar permutations). As a second result, we prove that the minimum density of monotone sequences of length~$k+1$ in an arbitrarily large layered permutation is asymptotically~$1/k^k$. This value is compatible with a conjecture from Myers (2003) for the problem without the layered restriction (the same problem where the monotone sequences have different lengths is also studied). Section: Permutation Patterns We investigate pattern avoidance in permutations satisfying some additional restrictions. These are naturally considered in terms of avoiding patterns in linear extensions of certain forest-like partially ordered sets, which we call binary shrub forests. In this context, we enumerate forests avoiding patterns of length three. In four of the five non-equivalent cases, we present explicit enumerations by exhibiting bijections with certain lattice paths bounded above by the line $y=\ell x$, for some $\ell\in\mathbb{Q}^+$, one of these being the celebrated Duchon's club paths with $\ell=2/3$. In the remaining case, we use the machinery of analytic combinatorics to determine the minimal polynomial of its generating function, and deduce its growth rate. Section: Permutation Patterns In 2000 Klazar introduced a new notion of pattern avoidance in the context of set partitions of $[n]=\{1,\ldots, n\}$. The purpose of the present paper is to undertake a study of the concept of Wilf-equivalence based on Klazar's notion. We determine all Wilf-equivalences for partitions with exactly two blocks, one of which is a singleton block, and we conjecture that, for $n\geq 4$, these are all the Wilf-equivalences except for those arising from complementation.
If $\tau$ is a partition of $[k]$ and $\Pi_n(\tau)$ denotes the set of all partitions of $[n]$ that avoid $\tau$, we establish inequalities between $|\Pi_n(\tau_1)|$ and $|\Pi_n(\tau_2)|$ for several choices of $\tau_1$ and $\tau_2$, and we prove that if $\tau_2$ is the partition of $[k]$ with only one block, then $|\Pi_n(\tau_1)| <|\Pi_n(\tau_2)|$ for all $n>k$ and all partitions $\tau_1$ of $[k]$ with exactly two blocks. We conjecture that this result holds for all partitions $\tau_1$ of $[k]$. Finally, we enumerate […] Section: Permutation Patterns We determine the structure of permutations avoiding the patterns 4213 and 2143. Each such permutation consists of the skew sum of a sequence of plane trees, together with an increasing sequence of points above and an increasing sequence of points to its left. We use this characterisation to establish the generating function enumerating these permutations. We also investigate the properties of a typical large permutation in the class and prove that if a large permutation that avoids 4213 and 2143 is chosen uniformly at random, then it is more likely than not to avoid 2413 as well. Section: Permutation Patterns The Permutation Pattern Matching problem, asking whether a pattern permutation $\pi$ is contained in a permutation $\tau$, is known to be NP-complete. In this paper we present two polynomial time algorithms for special cases. The first algorithm is applicable if both $\pi$ and $\tau$ are $321$-avoiding; the second is applicable if $\pi$ and $\tau$ are skew-merged. Both algorithms have a runtime of $O(kn)$, where $k$ is the length of $\pi$ and $n$ the length of $\tau$. Section: Permutation Patterns We study the iteration of the process "a particle jumps to the right" in permutations. We prove that the set of permutations obtained in this model after a given number of iterations from the identity is a class of pattern avoiding permutations. We characterize the elements of the basis of this class and we enumerate these "forbidden minimal patterns" by giving their bivariate exponential generating function: we achieve this via a catalytic variable, the number of left-to-right maxima. We show that this generating function is a D-finite function satisfying a nice differential equation of order~2. We give some congruence properties for the coefficients of this generating function, and we show that their asymptotics involves a rather unusual algebraic exponent (the golden ratio $(1+\sqrt 5)/2$) and some unusual closed-form constants. We end by proving a limit law: a forbidden pattern of length $n$ has typically $(\ln n) /\sqrt{5}$ left-to-right maxima, with Gaussian […] Section: Permutation Patterns Let $S_n$ denote the symmetric group. For any $\sigma \in S_n$, we let $\mathrm{des}(\sigma)$ denote the number of descents of $\sigma$, $\mathrm{inv}(\sigma)$ denote the number of inversions of $\sigma$, and $\mathrm{LRmin}(\sigma)$ denote the number of left-to-right minima of $\sigma$. For any sequence of statistics $\mathrm{stat}_1, \ldots \mathrm{stat}_k$ on permutations, we say two permutations $\alpha$ and $\beta$ in $S_j$ are $(\mathrm{stat}_1, \ldots \mathrm{stat}_k)$-c-Wilf equivalent if the generating function of $\prod_{i=1}^k x_i^{\mathrm{stat}_i}$ over all permutations which have no consecutive occurrences of $\alpha$ equals the generating function of $\prod_{i=1}^k x_i^{\mathrm{stat}_i}$ over all permutations which have no consecutive occurrences of $\beta$. 
We give many examples of pairs of permutations $\alpha$ and $\beta$ in $S_j$ which are $\mathrm{des}$-c-Wilf equivalent, $(\mathrm{des},\mathrm{inv})$-c-Wilf equivalent, […] Section: Permutation Patterns Given permutations $\sigma$ of size $k$ and $\pi$ of size $n$ with $k < n$, the permutation pattern matching problem is to decide whether $\sigma$ occurs in $\pi$ as an order-isomorphic subsequence. We give a linear-time algorithm in case both $\pi$ and $\sigma$ avoid the two size-3 permutations 213 and 231. For the special case where only $\sigma$ avoids 213 and 231, we present an $O(\max(kn^2, n^2 \log \log n))$-time algorithm. We extend our research to bivincular patterns that avoid 213 and 231 and present an $O(kn^4)$-time algorithm. Finally we look at the related problem of the longest subsequence which avoids 213 and 231. Section: Permutation Patterns We consider a sorting machine consisting of two stacks in series where the first stack has the added restriction that entries in the stack must be in decreasing order from top to bottom. The class of permutations sortable by this machine is known to be enumerated by the Schröder numbers. In this paper, we give a bijection between these sortable permutations of length $n$ and Schröder paths -- the lattice paths from $(0,0)$ to $(n-1,n-1)$ composed of East steps $(1,0)$, North steps $(0,1)$, and Diagonal steps $(1,1)$ that travel weakly below the line $y=x$. Section: Permutation Patterns
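As a concrete illustration of one of the definitions used above, the bijection sending a permutation to its inversion sequence, $e_i = |\{j \ | \ j < i \ {\rm and} \ \pi_j > \pi_i \}|$, takes one line of code:

```python
def inversion_sequence(perm):
    """Inversion sequence of a permutation given in one-line notation:
    e_i counts the earlier entries that exceed the i-th entry,
    so 0 <= e_i < i when positions are numbered from 1."""
    return [sum(1 for j in range(i) if perm[j] > perm[i]) for i in range(len(perm))]

print(inversion_sequence([3, 1, 4, 2]))  # [0, 1, 0, 2]
```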
CDS 212, Homework 3, Fall 2010 J. Doyle Issued: 12 Oct 2010 Due: 21 Oct 2010

Reading: DFT, Chapter 4

Problems

[DFT 4.1, page 62] Consider a unity feedback system. True or false: If a controller internally stabilizes two plants, they have the same number of poles in Re <amsmath>s \geq 0</amsmath>.

[DFT 4.2, page 62] Unity-feedback problem: Let <amsmath>P_{\alpha}(s)</amsmath> be a plant depending on a real parameter <amsmath>\alpha</amsmath>. Suppose that the poles of <amsmath>P_{\alpha}</amsmath> move continuously as <amsmath>\alpha</amsmath> varies over the interval <amsmath>[0, \ 1]</amsmath>. True or false: If a controller internally stabilizes <amsmath>P_{\alpha}</amsmath> for every <amsmath>\alpha</amsmath> in <amsmath>[0, \ 1]</amsmath>, then <amsmath>P_{\alpha}</amsmath> has the same number of poles in Re <amsmath>s \geq 0</amsmath> for every <amsmath>\alpha</amsmath> in <amsmath>[0, \ 1]</amsmath>.

[DFT 4.6, page 63] Consider the unity feedback system with <amsmath>C(s) = 10</amsmath> and plant <amsmath>P(s) = \frac{1}{s-a},</amsmath> where <amsmath>a</amsmath> is real. Find the range of <amsmath>a</amsmath> for the system to be internally stable. For <amsmath>a = 0</amsmath> the plant is <amsmath>P(s) = 1/s</amsmath>. Regarding <amsmath>a</amsmath> as a perturbation, we can write the plant as <amsmath>\widetilde P = \frac{P}{1 + \Delta W_2 P}</amsmath> with <amsmath>W_2(s) = -a</amsmath>. Then <amsmath>\widetilde P</amsmath> equals the true plant when <amsmath>\Delta(s) = 1</amsmath>. Apply robust stability theory to see when the feedback system with plant <amsmath>\widetilde{P}</amsmath> is internally stable for all <amsmath>\| \Delta\|_\infty \leq 1</amsmath>. Compare this to your result for part (a).

[DFT 4.10, page 64] Suppose that the plant transfer function is <amsmath>\tilde{P}(s) = [1 + \Delta(s)W_2(s)]P(s),</amsmath> where <amsmath>W_2(s) = \frac{2}{s + 10}, \ P(s) = \frac{1}{s-1},</amsmath> and the stable perturbation <amsmath>\Delta</amsmath> satisfies <amsmath>\| \Delta \|_{\infty} \leq 2</amsmath>. Suppose that the controller is the pure gain <amsmath>C(s) = k</amsmath>. We want the feedback system to be internally stable for all such perturbations. Determine over what range of <amsmath>k</amsmath> this is true.

Consider the feedback system in the figure below. Uncertainty in the plant is described using two stable weighting functions <amsmath>W_1</amsmath> and <amsmath>W_2</amsmath>, and stable <amsmath>\Delta_1</amsmath> and <amsmath>\Delta_2</amsmath> satisfying <amsmath>\| \Delta_1 \|_{\infty} \leq 1</amsmath> and <amsmath>\| \Delta_2 \|_{\infty} \leq 1</amsmath>. We will assume all plants in the uncertainty set have the same unstable poles. Write down an expression for <amsmath>\tilde{P}</amsmath> (dotted box). If <amsmath>L = PC</amsmath> and <amsmath>\tilde{L} = \tilde{P}C</amsmath>, write an expression for <amsmath>1 + \tilde{L}</amsmath> with <amsmath>1 + L</amsmath> as a factor. State and prove a necessary and sufficient condition for robust stability of the closed loop system as a function of <amsmath>P</amsmath>, <amsmath>C</amsmath> and the sensitivity functions. Apply the condition to the case <amsmath>P=1/(s-1)</amsmath>, <amsmath>C=0.5</amsmath> and <amsmath>W_1 = W_2 = 0.1</amsmath>.
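For the first part of DFT 4.6, the closed-loop pole location can be checked symbolically. This sketch assumes the standard unity-feedback characteristic equation 1 + P(s)C(s) = 0 and relies on there being no unstable pole-zero cancellation (there is none here, since C is a constant gain):

```python
import sympy as sp

s = sp.symbols('s')
a = sp.symbols('a', real=True)

C = sp.Integer(10)
P = 1 / (s - a)

# Closed-loop poles are the zeros of the numerator of 1 + P(s)C(s).
numer = sp.numer(sp.together(1 + P * C))   # s - a + 10
pole = sp.solve(sp.Eq(numer, 0), s)[0]
print(pole)   # a - 10, so internal stability requires a - 10 < 0, i.e. a < 10
```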
A tweet from @GregSchwanbeck some time back asked: — Greg Schwanbeck (@GregSchwanbeck) March 16, 2015 The setup is: one side of a square is tangent to a circle, and two corners of the square lie on the same circle. Which is larger: the perimeter of the square or the circumference of the circle? (My labels are different to Greg's. That's his fault, obviously.) The typical approach is to adopt some sort of algebraic approach, but I -- having been trawling around for sneaky geometrical tricks to approximate $\pi$ -- had a more elegant trick up my sleeve. It involves a circle theorem that crops up in IGCSE but not at GCSE, relating to the lengths of the intersections of chords. If you have two intersecting chords of a circle (as pictured), the products of their parts are equal -- that is to say, $mp = nq$ as drawn. To prove this, note that triangle $LIJ$ is similar to $LKH$, by comparing angles, which means $\frac mn = \frac qp$, so $mp = nq$. How does it help? Consider the diameter to the circle at the point where the square is tangent, $BM$ in the picture below. That's a chord; so is $CD$, so $|CO| \times |OD| = |BO| \times |OM|$. If we assume for the sake of simplicity that the square has a side length of 8, then $|CO| = |OD| = 4$, while $|BO| = 8$. Using the theorem, $|OM| = \frac{4 \times 4}{8} = 2$, so the diameter of the circle is 10 and the radius is 5. The circumference of the circle is then $10\pi \approx 31.4$, while the perimeter of the square is 32 -- so the square's perimeter is larger. That puzzle wasn't enough for @tombutton, though: he asked: Sphere goes through 4 vertices of a cube and midpoint of the opposite face. Which has greater surface area? pic.twitter.com/fUnSWgTfgu — Tom Button (@tombutton) March 17, 2015 Happily, this falls to the same analysis, although it's a bit harder to see (things in 3d usually are). The tangent point and two incident points that don't share an edge (i.e., they're diagonally opposite on one face) are on a circle which shares its centre with the sphere (and hence, its diameter is the sphere's diameter). Again, if the side length is 8, the diagonal of the face is $8\sqrt 2$, so the product of the half-chords is $4\sqrt 2 \times 4 \sqrt 2 = 32$; the diameter splits up as 8 (from one face of the cube to the other) and 4 (using the theorem). In this case, the radius of the sphere is 6, so the surface area is $4\pi r^2 = 144\pi$; the surface area of the cube is $6 \times 8^2 = 384$. Which is greater? The ratio is $144\pi : 384$, which has a common factor of 48, making it the same as $3\pi : 8$. Since $3\pi > 8$, the sphere has the larger surface area.
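Both puzzles reduce to the same one-line application of the intersecting-chords theorem, which a short script can confirm:

```python
from math import pi, sqrt

def diameter_from_tangent_chord(half_chord, known_part):
    """At the tangency point, the chord's two halves multiply to the product
    of the two pieces of the diameter: half_chord**2 = known_part * other_part."""
    return known_part + half_chord ** 2 / known_part

# Square of side 8: half-chord 4, known piece of the diameter 8.
d = diameter_from_tangent_chord(4, 8)                 # 10
print(pi * d, 4 * 8)                                  # circumference ~31.4 vs perimeter 32

# Cube of side 8: half-chord 4*sqrt(2), known piece 8.
r = diameter_from_tangent_chord(4 * sqrt(2), 8) / 2   # radius 6
print(4 * pi * r ** 2, 6 * 8 ** 2)                    # sphere ~452.4 vs cube 384
```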
Research Open Access Published: Construction of Fourier expansion of Apostol Frobenius–Euler polynomials and its applications. Advances in Difference Equations, volume 2018, Article number: 67 (2018)

Abstract In the present paper, we find the Fourier expansion of the Apostol Frobenius–Euler polynomials. By using a Fourier expansion of the Apostol Frobenius–Euler polynomials, we derive some new and interesting results.

Introduction When Fourier was trying to solve a problem in heat conduction, he needed to express a function \(f\) as an infinite series of sine and cosine functions:

$$f(x) = \frac{a_0}{2} + \sum_{n=1}^{\infty} \left( a_n \cos nx + b_n \sin nx \right).$$

Earlier, Bernoulli and Euler had used such series while investigating problems concerning vibrating strings and astronomy. Note that a Fourier series is widely known as an expansion of a periodic function \(f ( x ) \) in terms of an infinite sum of sine and cosine functions. Fourier series make use of the orthogonality relationships of the sine and cosine functions. A Fourier series of a function with period T can be written in an exponential form as

$$f(x) = \sum_{n=-\infty}^{\infty} a_n e^{2\pi i n x / T},$$

where the coefficients \(a_{n}\) and \(a_{-n}\) are computed by

$$a_n = \frac{1}{T} \int_0^T f(x)\, e^{-2\pi i n x / T}\,dx.$$

The Fourier expansions of some well-known polynomials have been studied by some mathematicians; see, for details, [2–4, 6]. For example, in [3], Luo derived Fourier series and integral representations for the classical Genocchi polynomials and Apostol–Genocchi polynomials by using the Lipschitz summation formula. In [4], by making use of the Cauchy residue theorem in the complex plane, Bayad obtained a Fourier series for the Apostol–Bernoulli, Apostol–Genocchi and Apostol–Euler polynomials. Also the Fourier series of sums of products of some well-known special polynomials have been investigated extensively by Agarwal et al. [1] and Kim et al. [6–8, 11]. With this motivation, we are going to focus on obtaining the Fourier expansion of the Apostol Frobenius–Euler polynomials. After that, we are going to derive some useful results arising from this Fourier expansion. Before giving the main results mentioned above, we need some useful properties of Apostol Frobenius–Euler polynomials, which will be given in the next section as preliminaries.

Preliminaries The Frobenius–Euler polynomials and their various generalizations, such as the Apostol Frobenius–Euler polynomials, have been studied intensively by some mathematicians. For example, Kim [2] obtained linear differential equations for Frobenius–Euler polynomials by using their generating function. From those differential equations, he gave the sums of products of Frobenius–Euler polynomials. We now recall some known definitions and properties of Frobenius–Euler polynomials and Apostol Frobenius–Euler polynomials which will be useful in deriving the main results of this paper.

Definition 1 For \(u \in \mathbb{C}\) with \(u \ne 1\), the Frobenius–Euler polynomials \(H_n(x,u)\) are defined by the generating function

$$\frac{1-u}{e^{t} - u}\,e^{xt} = \sum_{n=0}^{\infty} H_n(x,u)\,\frac{t^{n}}{n!}.$$

Definition 2 For \(u, \lambda \in \mathbb{C}\) with \(u \ne 1\), the Apostol Frobenius–Euler polynomials \(H_n(x,u,\lambda)\) are defined by the generating function

$$\frac{1-u}{\lambda e^{t} - u}\,e^{xt} = \sum_{n=0}^{\infty} H_n(x,u,\lambda)\,\frac{t^{n}}{n!}.$$

Observe that \(H_n(u) := H_n(0,u)\) and \(H_n(u,\lambda) := H_n(0,u,\lambda)\), which are called the Frobenius–Euler numbers and Apostol Frobenius–Euler numbers, respectively.

Definition 3 The Frobenius–Genocchi polynomials are defined by means of the following generating function:

$$\frac{(1-u)\,t}{e^{t} - u}\,e^{xt} = \sum_{n=0}^{\infty} G_n^{F}(x,u)\,\frac{t^{n}}{n!}.$$

The Apostol–Euler polynomials are defined by means of the following generating series:

$$\frac{2}{\lambda e^{t} + 1}\,e^{xt} = \sum_{n=0}^{\infty} E_n(x;\lambda)\,\frac{t^{n}}{n!}.$$

From Eq. (2), one may get the following useful corollary.
Corollary 1

Remark 1 Substituting \(\lambda = 1\) in Definition 2, one can easily see that \(H_n(x,u,1) = H_n(x,u)\).

Remark 2 Putting \(u = - 1\) in Definition 2, one recovers the Apostol–Euler polynomials, \(H_n(x,-1,\lambda) = E_n(x;\lambda)\).

Remark 3 Putting \(u = - 1\) in Definition 2, one also obtains the Apostol–Genocchi connection \(H_n(x,-1,\lambda) = \frac{G_{n+1}(x;\lambda)}{n+1}\).

Remark 4 Taking \(u = - 1\) and \(\lambda = 1\) in Definition 2, one can see that \(H_n(x,-1,1) = E_n(x)\), the classical Euler polynomials.

Remark 5 Taking \(u = - 1\) and \(\lambda = 1\) in Definition 2, one can see that \(H_n(x,-1,1) = \frac{G_{n+1}(x)}{n+1}\), where \(G_n(x)\) are the classical Genocchi polynomials.

Proposition 1 The following identity holds true:

Proof It is proved by using Definition 2 as follows: Matching the coefficients of \(\frac{t^{n}}{n!}\) gives the required result. □

Proposition 2 The following identity holds true:

Proof From Definition 2, we have Comparing the coefficients of \(t^{n}\) yields the desired result. □

Proposition 3 Let \(n\) be a natural number. Then we have

Proof Thus we complete the proof of the proposition. □

We are now in a position to state our main results in the next section. Also we derive their special cases.

Main results We begin with the following theorem, which is a Fourier series expansion of the Apostol Frobenius–Euler polynomials. For the following theorem, we will give two proofs. The first proof uses the Cauchy residue theorem and a complex integral over a circle C, following Bayad's method in [4]. The second proof uses the Lipschitz summation formula, following Luo's method in [3].

Theorem 1 Let \(u,\lambda \in \mathbb{C}\) with \(u \ne 1\), \(\lambda \ne 1\), \(u \ne \lambda \) and \(0 < x < 1\). We have

Proof 1 of Theorem 1 We first consider the integral of the function \(f_{n} ( t ) = \frac{1}{t^{n + 1}}\frac{1 - u}{\lambda e^{t} - u}e^{xt}\) over the circle \(C = \{ t\mid \vert t \vert \le ( 2N + \varepsilon ) \pi,\ \varepsilon \in \mathbb{R},\ \varepsilon \pi i \pm \log ( \frac{ \lambda }{u} ) \ne 0 \ ( \operatorname {mod}2\pi i ) \} \). The poles of the function \(f_{n} ( t ) \) are the simple poles \(t_k\) at which \(\lambda e^{t} = u\), and \(t = 0\), which is a pole of order \(n + 1\). From the Cauchy residue theorem, we write We then compute \(\operatorname{Res} ( f_{n} ( t ) ,t = 0 ) \) and \(\operatorname{Res} ( f_{n} ( t ) ,t = t_{k} ) \) as follows: and Combining these residues with Eq. (5) yields From this, taking \(N \to \infty \), we get \(\int_{C} f_{n} ( t ) \,dt = 0\). So we have Therefore, we complete the proof. □

Before giving the second proof of Theorem 1, we need the following definition.

Definition 4 The Lipschitz summation formula is given by where \(\operatorname{Re} ( \alpha ) > 1\) if \(\mu \in \mathbb{Z}\) and \(\operatorname{Re} ( \alpha ) > 0\) if \(\mu \in \mathbb{R} \setminus \mathbb{Z}\); \(\tau \in H\), where H denotes the complex upper half plane, and Γ denotes the Euler Gamma function; cf. [3].

Proof 2 of Theorem 1 By writing \(t = 2\pi i\tau \) in Definition 2, we have Differentiating \(n\) times with respect to τ gives From Definition 4, if we substitute \(\alpha = n + 1\), \(\mu = x\), and if τ is replaced by \(\tau + \log ( \frac{\lambda }{u} ) \), we derive From this, we reach the following expression: Taking \(\tau \to 0\) in (8), we arrive at which is the desired result. □

Corollary 2

Proof Since with the logarithmic property over the complex plane, we can write which completes the proof of this corollary. □

Corollary 3 By making use of the relation \(H_n ( x, - 1,1 ) = \frac{G_{n + 1} ( x ) }{n + 1}\) in Theorem 1, we have

Corollary 4 Putting \(\lambda = 1\) in Theorem 1, we have

Corollary 5 Substituting \(u = - 1\) in Theorem 1 yields

From the Fourier expansion of the Apostol Frobenius–Euler polynomials, we derive the following interesting identity.

Theorem 2 Let L be a positive integer.
Then we have

Proof From Theorem 1, we derive the following applications: Under the following condition: we have Thus we complete the proof of the theorem. □

Theorem 3 Let \(0 < x < 1\). We have with the following coefficients \(c_{k,n}\): where \((n) _{l}\) is the falling factorial.

Proof Let with the following coefficients: By integration by parts, we have From this, we find the following recurrence relation: By the iteration method, we arrive at the following expression: It now suffices to compute \(c_{k,1}\). Since we have Also Thus we end the proof. □

In [9], Kim et al. defined the Hurwitz type λ-zeta function as follows: Note that \(\zeta_{\lambda } ( s,x ) \) with \(\lambda = - 1\) is the Hurwitz–Euler zeta function; cf. [10]. Recall from Eq. (8) that From this we have Then Eq. (9) can be written Thus we have the following theorem.

Theorem 4 The following equality holds true:

In [9], Kim et al. introduced the λ-partial zeta function as follows: From this we have the following applications:

Theorem 5 The following identity holds true:

Setting \(\lambda = e^{2\pi ix}\), \(x = - \frac{\log ( \frac{\lambda }{u} ) }{2\pi i}\) and \(s = n + 1\) in Eq. (9), we see that Now we write the Fourier expansion of the Apostol Frobenius–Euler polynomials as follows: which is closely related to Eq. (11). So we have Thus we state the following theorem.

Theorem 6 Let \(u,\lambda \in \mathbb{C}\) with \(u \ne 1\), \(\lambda \ne 1\), \(u \ne \lambda \) and \(0 < x < 1\). We have

Further remarks Based on Definition 3, we introduce here the Apostol Frobenius–Genocchi polynomials \(G_{n}^{F} ( x,u,\lambda ) \) by the following definition.

Definition 5 Let \(u \in \mathbb{C}\) with \(u \ne 1\). We define the Apostol Frobenius–Genocchi polynomials by the generating function

$$\frac{(1-u)\,t}{\lambda e^{t} - u}\,e^{xt} = \sum_{n=0}^{\infty} G_{n}^{F}(x,u,\lambda)\,\frac{t^{n}}{n!}.$$

The Apostol Frobenius–Genocchi polynomials are closely related to the Apostol Frobenius–Euler polynomials by the relation

$$G_{n+1}^{F}(x,u,\lambda) = (n+1)\,H_{n}(x,u,\lambda).$$

We now give some fundamental properties of the Apostol Frobenius–Genocchi polynomials. We will omit the proofs, since they follow from Definition 5.

Theorem 7 The derivative property of the Apostol Frobenius–Genocchi polynomials is as follows:

Theorem 8 The difference property of the Apostol Frobenius–Genocchi polynomials is as follows:

Theorem 9 The integral of the Apostol Frobenius–Genocchi polynomials from \(a\) to \(b\), where \(a\) and \(b\) are real numbers, is as follows:

Theorem 10 For \(\vert \frac{\lambda e^{t}}{u} \vert < 1\), the generating function of the Apostol Frobenius–Genocchi polynomials can be written in the following form:

Theorem 11 By the relation \(H_{n} ( x,u,\lambda ) = \frac{G_{n + 1}^{F} ( x,u,\lambda ) }{n + 1}\) in Theorem 1, we have which represents a Fourier expansion of the Apostol Frobenius–Genocchi polynomials.

Conclusion and observation In this paper, we have derived the Fourier expansion of the Apostol Frobenius–Euler polynomials as Theorem 1. We have investigated special cases of Theorem 1, which turn into Fourier expansions of the Euler polynomials, Genocchi polynomials, Frobenius–Euler polynomials, Apostol–Euler polynomials, and Apostol–Genocchi polynomials. With the motivation of the work [12], we have introduced the Apostol Frobenius–Genocchi polynomials. We saw that the Apostol Frobenius–Genocchi polynomials are closely related to the Apostol Frobenius–Euler polynomials by the relation \(G_{n+1}^{F}(x,u,\lambda) = (n+1)H_{n}(x,u,\lambda)\). By this relation, we have obtained a Fourier expansion for the Apostol Frobenius–Genocchi polynomials. By Eq.
(12), the theorems obtained here and in other sources concerning the Apostol Frobenius–Euler polynomials turn into theorems concerning the Apostol Frobenius–Genocchi polynomials.

References

1. Agarwal, R.P., Kim, D.S., Kim, T., Kwon, J.: Sums of finite products of Bernoulli functions. Adv. Differ. Equ. 2017, 237 (2017)
2. Kim, T.: Identities involving Frobenius–Euler polynomials arising from non-linear differential equations. J. Number Theory 132, 2854–2865 (2012)
3. Luo, Q.-M.: Fourier expansions and integral representations for the Apostol–Bernoulli and Apostol–Euler polynomials. Math. Comput. 78(268), 2193–2208 (2009)
4. Bayad, A.: Fourier expansions for Apostol–Bernoulli, Apostol–Euler and Apostol–Genocchi polynomials. Math. Comput. 80(276), 2219–2221 (2011)
5. Bayad, A., Kim, T.: Identities for Apostol-type Frobenius–Euler polynomials resulting from the study of a nonlinear operator. Russ. J. Math. Phys. 23(2), 164–171 (2016)
6. Kim, T., Kim, D.S., Jang, G.-W., Kwon, J.: Fourier series of sums of products of Genocchi functions and their applications. J. Nonlinear Sci. Appl. 10, 1683–1694 (2017)
7. Jang, G.-W., Kim, T., Kim, D.S., Mansour, T.: Fourier series of functions related to Bernoulli polynomials. Adv. Stud. Contemp. Math. 27, 49–62 (2017)
8. Kim, T., Kim, D.S., Rim, S.-H., Dolgy, D.: Fourier series of higher-order Bernoulli functions and their applications. J. Inequal. Appl. 2017, 8 (2017)
9. Kim, T., Rim, S.-H., Simsek, Y., Kim, D.: On the analogs of Bernoulli and Euler numbers, related identities and zeta and L-functions. J. Korean Math. Soc. 45(2), 435–453 (2008)
10. Kim, T.: Euler numbers and polynomials associated with zeta functions. Abstr. Appl. Anal. 2008, Article ID 581582 (2008)
11. Kim, T., Kim, D.S., Dolgy, D.V., Park, J.-W.: Fourier series of sums of products of ordered Bell and poly-Bernoulli functions. J. Inequal. Appl. 2017, 84 (2017)
12. Yilmaz, B., Ozarslan, M.A.: Frobenius–Euler and Frobenius–Genocchi polynomials and their differential equations. New Trends Math. Sci. 3, 172–180 (2015)

Acknowledgements The authors are grateful to the Editor, Prof. Dr. Taekyun Kim, and to the referees for their valuable suggestions, which have improved the paper substantially.

Competing interests The authors declare that they have no competing interests.

MSC: 11B68; 11S80; 05A19; 42B05

Keywords: Fourier series; Apostol Frobenius–Euler polynomials; Generating function; Lipschitz summation formula; Hurwitz zeta type function
The Pauli-Lubanski pseudo-vector is defined as: $$W_{\mu}=\frac{1}{2}\epsilon_{\mu \nu \lambda \rho}J^{\nu \lambda}P^{\rho}$$ where the Lorentz and translation generators transform as: \begin{align} U(\Lambda,a)P^{\mu}U^{-1}(\Lambda,a)&=\Lambda^{\mu}_{\nu}P^{\nu}\\ U(\Lambda,a)J^{\mu \nu}U^{-1}(\Lambda,a)&=(\Lambda^{-1})^{\mu}_{\lambda}(\Lambda^{-1})^{\nu}_{\rho}(J^{\lambda \rho}-a^{\lambda}P^{\rho}+a^{\rho}P^{\lambda}) \end{align} I'm working out how the Pauli-Lubanski vector transforms under Lorentz transformations, to show that it is in fact Lorentz covariant. I saw the following in one of the answers to the post Calculating the commutator of Pauli-Lubanski operator and generators of Lorentz group: \begin{align} U(\Lambda,a)W_{\mu}U^{-1}(\Lambda,a)&=\frac{1}{2}\epsilon_{\mu \nu \lambda \rho}(\Lambda^{\nu}_{\alpha}\Lambda^{\lambda}_{\beta}J^{\alpha \beta}+a^{\nu}\Lambda^{\lambda}_{\alpha}P^{\alpha}-a^{\lambda}\Lambda^{\nu}_{\alpha}P^{\alpha})\Lambda^{\rho}_{\delta}P^{\delta}\\ &=\frac{1}{2}\Lambda^{\alpha}_{\mu}\epsilon_{\alpha \nu \rho \sigma}J^{\nu \rho}P^{\sigma}\end{align} However, I'm having trouble handling the $\Lambda$'s in order to get from the first step to the second. Could you offer me some guidance?
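(For what it's worth, this kind of step usually goes through the determinant identity for the Levi-Civita symbol, $\epsilon_{\alpha\beta\gamma\delta}\,\Lambda^{\alpha}_{\mu}\Lambda^{\beta}_{\nu}\Lambda^{\gamma}_{\rho}\Lambda^{\delta}_{\sigma} = \det(\Lambda)\,\epsilon_{\mu\nu\rho\sigma}$, which for proper Lorentz transformations, $\det\Lambda = 1$, lets one trade four $\Lambda$'s contracted into $\epsilon$ for a single $\Lambda$ on the remaining free index. A quick numerical sanity check of the identity — the boost parameters are arbitrary choices:)

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol in four dimensions.
eps = np.zeros((4, 4, 4, 4))
for p in permutations(range(4)):
    eps[p] = np.linalg.det(np.eye(4)[list(p)])  # +1 / -1 by permutation parity

# A sample proper Lorentz boost along x with rapidity 0.7.
eta = 0.7
L = np.eye(4)
L[0, 0] = L[1, 1] = np.cosh(eta)
L[0, 1] = L[1, 0] = np.sinh(eta)

# Check eps_{abcd} L^a_mu L^b_nu L^c_rho L^d_sigma = det(L) eps_{mu nu rho sigma}.
lhs = np.einsum('abcd,am,bn,cr,ds->mnrs', eps, L, L, L, L)
print(np.allclose(lhs, np.linalg.det(L) * eps))  # True
```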
One of the implicit assumptions that the question is using is that the mass of the solid is constant, and that the density of the solid is constant. If the mass weren't constrained, you could simply set it to zero. If the density weren't constant, you could indeed crush the mass into a thin axis of infinite density, but that wouldn't be characteristic of many solids! Constant mass means $$\int_0^h \rho \pi r^2 \, \mathrm{d}z = M = \text{const.}$$ or, equivalently, as a constraint equation, $$G(r) = \int_0^h \left( \rho \pi r^2 - \frac{M}{h} \right) \mathrm{d}z = 0$$ (Constant density is easier to enforce: simply let the variable $\rho$ be constant.) Therefore, the problem is now: Minimise $I(r) = \int_0^h \frac{1}{2}\rho \pi r^4 \, \mathrm{d}z$ such that $G(r) = \int_0^h \left( \rho \pi r^2 - \frac{M}{h} \right) \mathrm{d}z = 0$. That is, you now have a constrained variational problem, which requires the use of a Lagrange multiplier $\lambda$. So the cost function is instead $$J(r,\lambda) = I(r) + \lambda G(r) = \int_0^h \frac{1}{2} \rho \pi r^4 + \lambda \left(\rho \pi r^2 - \frac{M}{h}\right) \, \mathrm{d}z = \int_0^h F(r,\lambda) \, \mathrm{d}z$$ The Lagrange multiplier $\lambda$ now behaves as an additional variable to minimise over, so the solution is obtained by evaluating $$\frac{\mathrm{d}}{\mathrm{d}z}\left(\frac{\partial F}{\partial r'}\right) - \frac{\partial F}{\partial r} = 0$$ $$\frac{\mathrm{d}}{\mathrm{d}z}\left(\frac{\partial F}{\partial \lambda'}\right) - \frac{\partial F}{\partial \lambda} = 0$$ where the prime notation denotes differentiation with respect to $z$. (Since no $r'$ or $\lambda'$ terms are present, the first terms of the above equations are equal to zero.) Solving this then yields the cylinder as the optimal solid.
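A quick symbolic check confirms that the stationarity condition forces a $z$-independent radius; this is a sketch in sympy, treating $r$ as an ordinary variable since no $r'$ survives in the Euler–Lagrange equation:

```python
import sympy as sp

# F contains no r'(z), so the Euler-Lagrange equation collapses to dF/dr = 0,
# an algebraic condition that holds pointwise in z.
r, lam = sp.symbols('r lambda', real=True)
rho, M, h = sp.symbols('rho M h', positive=True)

F = sp.Rational(1, 2) * rho * sp.pi * r**4 + lam * (rho * sp.pi * r**2 - M / h)
print(sp.solve(sp.diff(F, r), r))   # [0, -sqrt(-lambda), sqrt(-lambda)]
```

The nontrivial root $r = \sqrt{-\lambda}$ does not depend on $z$, so the optimal profile is a cylinder; $\lambda$ (necessarily negative here) is then fixed by the mass constraint $G(r) = 0$.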
This is simply proportional to the 3-tachyon correlation function. They warn you above the equation that the momentum conservation factor such as $$(2\pi)^D\delta^{(D)}(k_1-k_2-k_3)$$ is omitted everywhere, and I am not even sure whether their normalization of the states includes the powers of $2\pi$. Pick up Polchinski's textbook if you want every formula to be much more robust and reliable about similar details. At any rate, this delta function is what you get from all the factors of the form $\exp(ik_j\cdot x)$ for $j=1,2,3$. You got the first power of $g$, which is a good thing, a part of the result. One of the two last excessive things you got in your calculation is the exponential of $\sum \alpha_{-n}z^n/n$. But this exponential may simply be replaced by $1$ because all the creation operators $\alpha_{-n}$ for positive $n$ annihilate the bra vector $\langle 0;k_1|$ on the left side – the Hermitian conjugate of the statement that the annihilation operators $\alpha_{n}$ for positive $n$ annihilate ket vectors on the right. So from the Taylor expansion of the exponential, only the leading term $1$ survives. That's great, and the only excessive factor you're left with is $z^{k_1\cdot k_2-1}$. But this is also equal to $1$ because the exponent vanishes when all the physical conditions are satisfied. Note that $$k_1\cdot k_2 = \frac 12[(k_1+k_2)^2 - k_1^2-k_2^2] =\frac 12( k_3^2-k_1^2-k_2^2) $$ where, in the second $=$ step, I used the momentum conservation because everything is multiplied by the delta-function for the sum of momenta, anyway. However, the calculation is also meaningful for on-shell $k_1,k_2,k_3$ only. But the squared mass of the open-string tachyon is $$-k^2=m_T^2=-\alpha'$$ where the minus sign in front of $k^2$ arises because they use the mostly-plus convention for the metric. So in the units $\alpha'=2$ they selectively use for open strings (one has $\alpha'=1/2$ for closed strings, to make things more confusing), $$k_1\cdot k_2 = \frac 12(k_3^2-k_1^2 - k_2^2) =-\frac 12 (-2+1)m_T^2 = \frac 12\alpha' =1$$ so the exponent above $z$ is actually $k_1\cdot k_2-1=0$ and all the factors except for $g$ are equal to one. Note that the string amplitudes are on-shell (scattering amplitudes) and the calculations only simplify when the on-shell conditions are imposed. In fact, the general string amplitudes don't have any natural or canonical extension to off-shell momenta (although, of course, when you write down an effective low-energy field theory for string theory, that theory gives you formulae for the off-shell amplitudes, too)! If you want a more pedagogical treatment that doesn't use these "inconsistent" conventions for $\alpha'=1/2$ or $2$, doesn't omit the momentum-conservation delta-functions, and is more explicit about the moments when the on-shell conditions are used and how, try e.g. Polchinski's book or one of the newer competitors. On the other hand, if you start to calculate as many amplitudes as Green and Schwarz did in the early 1980s, it may be pretty helpful to use all the "seemingly sloppy" simplifications in the notation that still capture all the physical essence.
ISSN: 1078-0947 eISSN: 1553-5231

Discrete & Continuous Dynamical Systems - A, December 2007, Volume 19, Issue 4

Abstract: We investigate stationary solutions of the one-dimensional Cahn-Hilliard equation with the diffusion coefficient and the total mass of the density as two given parameters. We solve the equation completely in the whole parameter space by using the Jacobi elliptic functions and complete elliptic integrals. In addition to counting the stationary solutions, which was studied by Grinfeld and Novick-Cohen, we provide an exact expression of the solutions. We also illustrate global bifurcation diagrams together with the asymptotic behavior of the solutions as the diffusion coefficient vanishes.

Abstract: The paper deals with the bifurcation of relaxation oscillations in two dimensional slow-fast systems. The most generic case is studied by means of geometric singular perturbation theory, using blow up at contact points. It reveals that the bifurcation goes through a continuum of transient canard oscillations, controlled by the slow divergence integral along the critical curve. The theory is applied to polynomial Liénard equations, showing that the cyclicity near a generic coalescence of two relaxation oscillations does not need to be limited to two, but can be arbitrarily high.

Abstract: We present three simple regular one-dimensional variational problems that present the Lavrentiev gap phenomenon, i.e., $$\inf\Big\{\int_a^b L(t,x,\dot x): x\in W_0^{1,1}(a,b)\Big\} < \inf\Big\{\int_a^b L(t,x,\dot x): x\in W_0^{1,\infty}(a,b)\Big\}$$ (where $W_0^{1,p}(a,b)$ denote the usual Sobolev spaces with zero boundary conditions), in which in the first example the two infima are actually minima, in the second example the infimum in $W_0^{1,\infty}(a,b)$ is attained while the infimum in $W_0^{1,1}(a,b)$ is not, and in the third example neither infimum is attained. We also discuss how to construct energies with a gap between any space, and energies with multi-gaps.

Abstract: We give a condition on a piecewise constant roof function and an irrational rotation by $\alpha$ on the circle to give rise to a special flow having the mild mixing property. Such flows will also satisfy Ratner's property. As a consequence we obtain a class of mildly mixing singular flows on the two-torus that arise from quasi-periodic Hamiltonian flows by velocity changes.

Abstract: We study the asymptotic behavior of complex discrete evolution equations of Ginzburg–Landau type. Depending on the nonlinearity and the data of the problem, we find different dynamical behavior ranging from global existence of solutions and global attractors to blow-up in finite time. We provide estimates for the blow-up time, depending not only on the initial data but also on the size of the lattice. Some of the theoretical results are tested by numerical simulations.

Abstract: The aim of this paper is to formulate Campanato-type boundary estimates for solutions of the Rothe approximate scheme to parabolic partial differential systems with constant coefficients. The core observation is that such estimates hold independently of the approximate systems.

Abstract: We know that two different homoclinic classes contained in the same hyperbolic set are disjoint [12]. Moreover, a connected singular-hyperbolic attracting set with dense periodic orbits and a unique equilibrium is either transitive or the union of two different homoclinic classes [6].
These results motivate the questions of whether two different homoclinic classes contained in the same singular-hyperbolic set are disjoint, and whether the second alternative in [6] cannot occur. Here we give a negative answer to both questions. Indeed, we prove that every compact $3$-manifold supports a vector field exhibiting a connected singular-hyperbolic attracting set which has dense periodic orbits, a unique singularity, and is the union of two homoclinic classes but is not transitive.

Abstract: We consider a mesoscopic model of phase transitions and investigate the geometric properties of the interfaces of the associated minimal solutions. We provide density estimates for level sets and, in the periodic setting, we construct minimal interfaces at a universal distance from any given hyperplane.

Abstract: The geometry of self-similar sets $K$ has been studied intensively during the past 20 years, frequently assuming the so-called Open Set Condition (OSC). The OSC guarantees the existence of an open set $U$ satisfying various natural invariance properties, and is instrumental in the study of self-similar sets for the following reason: a careful analysis of the boundaries of the iterates of $\overline U$ is the key technique for obtaining information about the geometry of $K$. In order to obtain a better understanding of the OSC, and because of the geometric significance of the boundaries of the iterates of $\overline U$, it is clearly of interest to provide quantitative estimates for the "number" of points close to the boundaries of the iterates of $\overline U$. This motivates a detailed study of the rate at which the distance between a point in $K$ and the boundaries of the iterates of $\overline U$ converges to $0$. In this paper we show that for each $t\in I$ (where $I$ is a certain interval defined below) there is a significant number of points for which the rate of convergence equals $t$. In fact, for each $t\in I$, we show that the set of points whose rate of convergence equals $t$ has positive Hausdorff dimension, and we obtain a lower bound for this dimension. Examples show that this bound is, in general, the best possible and cannot be improved.

Abstract: In this paper, we give a positive answer to an open problem of Li–Nirenberg under weaker conditions, and we prove a new variation of the boundary point lemma for second order fully nonlinear ODEs by a new method. A simpler proof of the Li–Nirenberg theorem is also presented.
The Ising model is a mathematical model of ferromagnetism in statistical mechanics. The model consists of discrete variables that represent magnetic dipole moments of atomic spins that can be in one of two states (+1 or −1). The spins are arranged in a graph, usually a lattice, allowing each spin to interact with its neighbours. The model allows the identification of phase transitions, as a simplified model of reality [Source] The formula I am using to calculate the energy is: $$ E = -\sum\limits_{\langle i,j \rangle}J_{ij} s_i s_j - \mu H\sum\limits_{i} s_i $$ where $E$ is the energy, $J$ is the exchange constant, and $s_i$, $s_j$ run over nearest-neighbour pairs (like $s_{\langle 1,0 \rangle} s_{\langle 2,0 \rangle}$ or $s_{\langle 5,4 \rangle} s_{\langle 5,5 \rangle}$). $H$ is the external magnetic field and $\mu$ is the magnetic moment. Usually the field term is lumped into a single parameter and $J$ is assumed to be isotropic, so the equation simplifies to: $$ E = -J\sum\limits_{\langle i,j \rangle} s_i s_j - m\sum\limits_{i} s_i $$ where $m = \mu H$. This model can be used to predict the temperature at which a substance loses its ferromagnetism, which for cobalt is 1388 K, for iron 1043 K, and for nickel 627 K. This change from ferromagnetic to paramagnetic is found by observing a second-order phase transition (a continuous but dramatic change in the slope of the magnetisation). If you use $J=1$, $T_C \approx 2.27$ in units of $J/k_B$. How could I determine the exchange constants for other substances, if I know their internal molecular lattice structure, or must I determine them numerically, knowing $T_C$?
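For reference, here is a minimal sketch of the energy evaluation on a periodic square lattice; each nearest-neighbour bond is counted once by pairing every site with its right and down neighbours:

```python
import numpy as np

def ising_energy(spins, J=1.0, m=0.0):
    """E = -J * sum over nearest-neighbour pairs of s_i*s_j - m * sum of s_i,
    with periodic boundary conditions; m lumps together mu*H."""
    right = np.roll(spins, -1, axis=1)   # right neighbour of each site
    down = np.roll(spins, -1, axis=0)    # down neighbour of each site
    return -J * np.sum(spins * (right + down)) - m * np.sum(spins)

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))   # a random 16x16 configuration
print(ising_energy(spins))
```

Sweeping the temperature with a Metropolis update and watching the magnetisation per site is then the usual way to locate the second-order transition near $T_C \approx 2.27\,J/k_B$.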
Condensed Matter > Strongly Correlated Electrons Title: Effects of magnetic anisotropy on spin and thermal transports in classical antiferromagnets on the square lattice (Submitted on 19 Aug 2019) Abstract: Transport properties of the classical antiferromagnetic XXZ model on the square lattice have been theoretically investigated, putting emphasis on how the occurrence of a phase transition is reflected in spin and thermal transports. As is well known, the anisotropy of the exchange interaction $\Delta\equiv J_z/J_x$ controls the universality class of the transition of the model, i.e., either a second-order transition at $T_N$ into a magnetically ordered state or the Kosterlitz-Thouless (KT) transition at $T_{KT}$, which respectively occur for the Ising-type ($\Delta >1$) and $XY$-type ($\Delta <1$) anisotropies, while for the isotropic Heisenberg case of $\Delta=1$ a phase transition does not occur at any finite temperature. It is found by means of hybrid Monte Carlo and spin-dynamics simulations that the spin current probes the difference in the ordering properties, while the thermal current does not. For the $XY$-type anisotropy, the longitudinal spin-current conductivity $\sigma^s_{xx}$ ($=\sigma^s_{yy}$) exhibits a divergence at $T_{KT}$ of the exponential form, $\sigma^s_{xx} \propto \exp\big[ B/\sqrt{T/T_{KT}-1 }\, \big]$ with $B={\cal O}(1)$, while for the Ising-type anisotropy, the temperature dependence of $\sigma^s_{xx}$ is almost monotonic without showing a clear anomaly at $T_{N}$, and such a monotonic behavior is also the case in the Heisenberg-type spin system. The significant enhancement of $\sigma^s_{xx}$ at $T_{KT}$ is found to be due to the exponentially rapid growth of the spin-current-relaxation time toward $T_{KT}$, which can be understood as a manifestation of the topological nature of a vortex whose lifetime is expected to get longer toward $T_{KT}$. Possible experimental platforms for the spin-transport phenomena associated with the KT topological transition are discussed. Submission history: Kazushi Aoyama, Mon, 19 Aug 2019.
Dynamic pressure, abbreviated as q or Q and also known as velocity pressure, is the part of the total pressure that results from the motion of the fluid. Dynamic pressure formula: \(\large{ q = \frac {1} {2}\; \rho\; v^2 }\) Where: \(\large{ q }\) = dynamic pressure, \(\large{ \rho }\) (Greek symbol rho) = fluid density, \(\large{ v }\) = fluid velocity. Solve for the velocity: \(\large{ v = \sqrt {\frac {2 \;q} {\rho} } }\)
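As a quick illustration, the formula and its inversion translate directly into code (a sketch of my own; the function names are invented for the example):

def dynamic_pressure(rho, v):
    # q = rho * v**2 / 2; SI units (kg/m^3, m/s) give pascals.
    return 0.5 * rho * v**2

def velocity_from_q(q, rho):
    # Invert q = rho * v**2 / 2 for the flow speed.
    return (2.0 * q / rho) ** 0.5

print(dynamic_pressure(1.225, 30.0))  # air at sea level at 30 m/s: ~551 Pa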
How to compute multiplicative inverses for elements in any simple (not extended) finite field? I mean an algorithm which can be implemented in software. The unit group of the finite field of order $q$ is a cyclic group of order $q-1.$ Thus, for any $a \in \mathbb{F}_q^{\times},$ $$a^{-1} = a^{q-2}.$$ In both cases (prime fields and extension fields) one may employ the extended Euclidean algorithm, on integers or on polynomials respectively, to compute inverses. See here for an example. Alternatively, employ repeated squaring to compute $\rm\:a^{-1} = a^{q-2}\:$ for $\rm\:a \in \mathbb F_q^*\:,\:$ which is conveniently recalled by writing the exponent in binary Horner form. A useful reference is Knuth: TAoCP, vol 2: Seminumerical Algorithms. If 'simple' means a prime field $\mathbf{Z}/p\mathbf{Z}$ to you, then, given an integer $x$ coprime to $p$, you simply need to find an integer $y$ such that $xy\equiv1\pmod{p}.$ Look up the section on multiplicative inverses in the Wikipedia page on the Euclidean algorithm.
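Here is a minimal Python sketch of both suggestions (my own illustration; ext_gcd and inverse_mod are hypothetical helper names). The extended Euclidean algorithm covers the prime-field case, and Python's built-in pow performs the $a^{q-2}$ computation by repeated squaring:

def ext_gcd(a, b):
    # Return (g, x, y) with a*x + b*y == g == gcd(a, b).
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def inverse_mod(a, p):
    # Multiplicative inverse of a in Z/pZ via the extended Euclidean algorithm.
    g, x, _ = ext_gcd(a % p, p)
    if g != 1:
        raise ValueError("a is not invertible modulo p")
    return x % p

p, a = 101, 37
print(inverse_mod(a, p), pow(a, p - 2, p))  # the two methods agree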
The determination of the domain of attraction and of the related constants $a_n$ and $b_n$ uses several functions related to the survival function, here given by $S(x) = \exp\{-(\lambda x)^k\}$ for $x > 0$. Of major importance are the tail-quantile function $U(t)$ and the hazard rate function $h(x)$. The tail-quantile function is obtained by solving $S(U) = 1/t$ for $U$, which leads to $U(t) = \lambda^{-1} (\log t)^{1/k}$ for $t> 1$. The hazard rate is $h(x) = -\text{d}\log S(x)/\text{d}x = \lambda^k \, k\, x^{k-1}$ for $x>0$. The derivative of the inverse hazard rate $1/h(x)$ is proportional to $x^{-k}$, hence tends to zero for large $x$ since $k>0$. So the von Mises condition holds. We know that the distribution is in the Gumbel domain of attraction and that we have the following convergence in distribution to the standard Gumbel: \begin{equation}\frac{M_n - U(n)}{a_n} \to \text{Gumbel}.\end{equation} Moreover we know that we can choose $a_n$ as $e(U(n))$, where $e(x)$ denotes the mean residual life function given by $e(x) := A(x) / S(x)$ with $A(x) := \int_{x}^\infty S(t) \,\text{d}t$ for $x >0$. We now proceed to the evaluation of $e(U(n))$, or to the determination of a quantity which is equivalent to it for large $n$. Using the change of variable $u := (\lambda t)^k$ we get $$ A(x) = \int_{x}^\infty \exp\{-(\lambda t)^k\}\,\text{d}t = \frac{1}{\lambda k} \int_{(\lambda x)^k}^{\infty} u^{1/k -1} e^{-u}\, \text{d}u = (\lambda k)^{-1} \, \Gamma(s,\, v),$$ where $\Gamma(s,\,v)$ stands for the incomplete gamma function evaluated at $s:= 1/k$ and $v := (\lambda x)^k$. We can use the following known result about the incomplete gamma function: $\Gamma(s,\, v) \sim v^{s-1} e^{-v}$ for $v \to \infty$, which can be shown using integration by parts. So $$e(x) \sim \frac{(\lambda k)^{-1}\, \left\{ (\lambda x)^k \right\}^{1/k - 1}\exp\{- (\lambda x)^k \}}{S(x)} = \frac{x}{k} \, (\lambda x)^{-k}.$$ Note that $h(x) \times e(x)$ tends to $1$ for large $x$, which is clear from the last equivalence; this limit condition is both necessary and sufficient for attraction to the Gumbel when $h(x)$ is monotonic for large $x$, as is the case here - see Theorem 1 in Galambos and Obretenov. We can choose $a_n$ as $1 / h(U(n))$, and our constants can be $$ a_n = \dfrac{1}{\lambda k} \, (\log n)^{1/k -1}, \qquad b_n = \dfrac{1}{\lambda} \, (\log n )^{1/k}.$$ A precise statement of the von Mises condition is found in the classical book Modelling Extremal Events by Embrechts P., Klüppelberg C. and Mikosch T. In this book (up to a change in notation), the couple of constants is given in Table 3.4.4.

## Weibull parameters
k <- 2.5; lambda <- 10

## simulate
set.seed(123)
n <- 40; nsim <- 10000
X <- array(rweibull(n * nsim, shape = k, scale = 1 / lambda), dim = c(nsim, n))
M <- apply(X, 1, max)
bn <- log(n)^(1 / k) / lambda
an <- log(n)^(1 / k - 1) / lambda / k
Mscale <- scale(M, center = bn, scale = an)
hist(Mscale, breaks = 100, probability = TRUE, col = "lightyellow",
     main = sprintf("Simulated maxima for n = %d", n), xlab = "")
require(evd)
curve(dgumbel, add = TRUE, col = "orangered", lwd = 2)
This is not supposed to be a 'how-to-do' question, but rather a 'why' question. I came across the following problem: Let $f: \mathbb{R}^2 \rightarrow \mathbb{R}$, $f(x,y)=x^3+y^3$. Find the extrema of $f$ subject to $x^2+y^2=1$. The solution for the minimum works in the same way as that for finding the maximum, so I'll just consider the maximum when explaining my approach. I took the Lagrange function, set its partial derivatives equal to $0$, and found $\lambda$ so that $g(x,y)=x^2+y^2-1=0$, which resulted in some possible solutions. What I find strange is that, when checking the solution of this problem, $(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2})$ is given as a maximum point, whereas $(1,0)$ (and, of course, $(0,1)$) clearly yields a greater $f$ while also satisfying $g(x,y)=0$ and all the partial-derivative tests for $\lambda=-\frac{3}{2}$. Still, $(\frac{\sqrt{2}}{2},\frac{\sqrt{2}}{2})$ passes the Hessian test while $(1,0)$ doesn't, but this just leaves the Hessian inconclusive for $(1,0)$, which shouldn't be a problem. So which is the maximum of $f$ on $x^2+y^2=1$? $$(1,0)\ \text{and}\ (0,1),\ \text{or}\ (\tfrac{\sqrt{2}}{2},\tfrac{\sqrt{2}}{2})?$$
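A quick numeric scan along the constraint (my own check, not part of the original question) supports the asker's suspicion: parametrizing the circle as (cos t, sin t), the global maximum of f is 1, attained at (1,0) and (0,1), while (sqrt(2)/2, sqrt(2)/2) gives only about 0.707:

import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 100001)
f = np.cos(theta) ** 3 + np.sin(theta) ** 3  # f restricted to the unit circle

print(f.max())                     # 1.0, attained at theta = 0 and theta = pi/2
print(np.cos(np.pi / 4) ** 3 * 2)  # f at (sqrt(2)/2, sqrt(2)/2): ~0.7071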
I am trying to teach myself Electrodynamics by following Griffiths' book. This is probably what's considered a "homework question", but as I don't have an instructor to ask for help, I'm hoping someone here can help. If this is really not permitted here, please close this with my apologies. Griffiths, Introduction to Electrodynamics, 4th ed., Problem 2.50 asks to compute the electric field and then the charge distribution based on a potential $V(r)=A \frac{e^{-\lambda r}}{r}$. I found the electric field: $$E=\frac{Ae^{-\lambda r}(1+\lambda r)}{r^2}\hat r$$ without too much trouble. To get the charge distribution from the electric field one applies Gauss' law in differential form: $$\rho=\epsilon _0 \nabla \cdot E = \epsilon _0 \nabla \cdot \Biggl(\frac{Ae^{-\lambda r}(1+\lambda r)}{r^2}\hat r\Biggr) $$ Since $\nabla\cdot E$ in spherical coordinates is $\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2E_r\right) + \ldots E_\theta + \ldots E_\phi$, and since $E$ doesn't have theta or phi terms, I simply applied the formula, getting: $$\rho=\epsilon _0 \frac{1}{r^2}\frac{\partial}{\partial r}\Biggl(r^2 \frac{Ae^{-\lambda r}(1+\lambda r)}{r^2}\Biggr)$$ The $r^2$ cancels and, moving $A$ outside the derivative, I get: $$\rho=\frac{A\epsilon _0}{r^2}\frac{\partial}{\partial r}\Bigl[e^{-\lambda r}(1+\lambda r)\Bigr] $$ Running the derivative through Wolfram Alpha and rearranging gives: $$\rho=-\frac{A\epsilon _0}{r}e^{-\lambda r}\lambda ^2 $$ However, this isn't what the solution manual (or Chegg) had. Rather, instead of taking the divergence of $E$ directly, they applied a product rule: $$\rho=\epsilon _0 \Biggl(Ae^{-\lambda r}(1+\lambda r)\,\nabla \cdot \biggl( \frac{\hat r}{r^2}\biggr)+ \frac{\hat r}{r^2}\cdot \nabla \Bigl(Ae^{-\lambda r}(1+\lambda r)\Bigr) \Biggr)$$ and proceeded from there to get $$\rho=A \epsilon_0 \Biggl(4 \pi \delta^3(r)-\frac{\lambda^2}{r}e^{-\lambda r} \Biggr)$$ I can see what they did, and can follow the computation, but I don't understand why my method was incorrect. Can anyone explain where I erred? Thank you!
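For what it's worth, a quick symbolic check (my own sketch, not from Griffiths or the solution manual) reproduces the asker's result away from the origin; the naive radial formula is valid only for r > 0 and cannot see the delta-function contribution hiding in the singular r-hat/r^2 factor at r = 0:

import sympy as sp

r = sp.symbols('r', positive=True)
A, lam = sp.symbols('A lambda', positive=True)

E_r = A * sp.exp(-lam * r) * (1 + lam * r) / r**2
# Radial divergence in spherical coordinates, valid only for r > 0:
rho_over_eps0 = sp.simplify(sp.diff(r**2 * E_r, r) / r**2)
print(rho_over_eps0)  # -A*lambda**2*exp(-lambda*r)/r, the asker's result for r > 0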
Higgs boson pair production at colliders: status and perspectives / Di Micco, Biagio (Universita e INFN Roma Tre (IT)) ; Gouzevitch, Maxime (Centre National de la Recherche Scientifique (FR)) ; Mazzitelli, Javier (University of Zurich) ; Vernieri, Caterina (SLAC National Accelerator Laboratory (US)) ; Alison, John (Carnegie-Mellon University (US)) ; Androsov, Konstantin (INFN Sezione di Pisa, Universita' e Scuola Normale Superiore, Pisa (IT)) ; Baglio, Julien Lorenzo (CERN) ; Bagnaschi, Emanuele Angelo (Paul Scherrer Institut (CH)) ; Banerjee, Shankha (University of Durham (GB)) ; Basler, P (Karlsruhe Institute of Technology) et al. This document summarises the current theoretical and experimental status of the di-Higgs boson production searches, and of the direct and indirect constraints on the Higgs boson self-coupling, with the wish to serve as a useful guide for the next years. The document discusses the theoretical status, including state-of-the-art predictions for di-Higgs cross sections, developments on the effective field theory approach, and studies on specific new physics scenarios that can show up in the di-Higgs final state. [...] LHCHXSWG-2019-005.- Geneva : CERN, 2019 - 213 p. Simplified Template Cross Sections – Stage 1.1 / Delmastro, Marco (Centre National de la Recherche Scientifique (FR)) ; Berger, Nicolas (Centre National de la Recherche Scientifique (FR)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Duehrssen-Debling, Michael (CERN) ; Kivernyk, Oleh (Centre National de la Recherche Scientifique (FR)) ; Langford, Jonathon Mark (Imperial College (GB)) ; Milenovic, Predrag (University of Belgrade (RS)) ; Pandini, Carlo Enrico (CERN) ; Tackmann, Frank (Deutsches Elektronen-Synchrotron (DE)) ; Tackmann, Kerstin (Deutsches Elektronen-Synchrotron (DE)) et al. Simplified Template Cross Sections (STXS) have been adopted by the LHC experiments as a common framework for Higgs measurements. Their purpose is to reduce the theoretical uncertainties that are directly folded into the measurements as much as possible, while at the same time allowing for the combination of the measurements between different decay channels as well as between experiments. [...] arXiv:1906.02754; LHCHXSWG-2019-003; DESY-19-070.- Geneva : CERN, 2019 - 14 p. Recommended predictions for the boosted-Higgs cross section / Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Caola, Fabrizio (University of Durham (GB)) ; Massironi, Andrea (CERN) ; Mistlberger, Bernhard (Massachusetts Inst. of Technology (US)) ; Monni, Pier (CERN) ; Chen, Xuan (Zurich U.) ; Frixione, Stefano (INFN e Universita Genova (IT)) ; Gehrmann, Thomas Kurt (Universitaet Zuerich (CH)) ; Glover, Nigel (IPPP Durham) ; Hamilton, Keith Murray (University of London (GB)) et al. In this note we study the inclusive production of a Higgs boson with large transverse momentum. We provide a recommendation for the inclusive cross section based on a combination of state of the art QCD predictions for the gluon-fusion and vector-boson-fusion channels. [...] LHCHXSWG-2019-002.- Geneva : CERN, 2019 - 14 p.
Higgs boson cross sections for the high-energy and high-luminosity LHC: cross-section predictions and theoretical uncertainty projections / Calderon Tazon, Alicia (Universidad de Cantabria and CSIC (ES)) ; Caola, Fabrizio (University of Durham (GB)) ; Campbell, John (Fermilab (US)) ; Francavilla, Paolo (Universita & INFN Pisa (IT)) ; Marchiori, Giovanni (Centre National de la Recherche Scientifique (FR)) ; Becker, Kathrin (Albert Ludwigs Universitaet Freiburg (DE)) ; Bertella, Claudia (Chinese Academy of Sciences (CN)) ; Bonvini, Marco (Sapienza Universita e INFN, Roma I (IT)) ; Chen, Xuan (Zuerich University (CH)) ; Frederix, Rikkert (Technische Universität Muenchen (DE)) et al. This note summarizes the state-of-the-art predictions for the cross sections expected for Higgs boson production in the 27 TeV proton-proton collisions of a high-energy LHC, including a full theoretical uncertainty analysis. It also provides projections for the progress that may be expected on the timescale of the high-luminosity LHC and an assessment of the main limiting factors to further reduction of the remaining theoretical uncertainties. LHCHXSWG-2019-001.- Geneva : CERN, 2019 - 17 p. Analytical parametrization and shape classification of anomalous HH production in EFT approach / Carvalho Antunes De Oliveira, Alexandra (Universita e INFN, Padova (IT)) ; Dall'Osso, Martino (Universita e INFN, Padova (IT)) ; De Castro Manzano, Pablo (Universita e INFN, Padova (IT)) ; Dorigo, Tommaso (Universita e INFN, Padova (IT)) ; Goertz, Florian (CERN) ; Gouzevitch, Maxime (Universite Claude Bernard-Lyon I (FR)) ; Tosi, Mia (CERN) In this document we study the effect of anomalous Higgs boson couplings on non-resonant pair production of Higgs bosons (HH) at the LHC. We explore the space of the five parameters $\kappa_\lambda$, $\kappa_t$, $c_2$, $c_{g}$, and $c_{2g}$ in terms of the corresponding kinematics of the final state, and describe a suggested partition of the space into a limited number of regions featuring similar phenomenology in the kinematics of HH final state, along with a corresponding set of representative benchmark points. [...] LHCHXSWG-2016-001.- Geneva : CERN, 2016. Benchmark scenarios for low $\tan \beta$ in the MSSM / Bagnaschi, Emanuele (DESY) ; Frensch, Felix (Karlsruhe, Inst. Technol.) ; Heinemeyer, Sven (Cantabria Inst. of Phys.) ; Lee, Gabriel (Technion) ; Liebler, Stefan Rainer (DESY) ; Muhlleitner, Milada (Karlsruhe, Inst. Technol.) ; Mc Carn, Allison Renae (Michigan U.) ; Quevillon, Jeremie (King's Coll. London) ; Rompotis, Nikolaos (Seattle U.) ; Slavich, Pietro (Paris, LPTHE) et al. The run-1 data taken at the LHC in 2011 and 2012 have led to strong constraints on the allowed parameter space of the MSSM. These are imposed by the discovery of an approximately SM-like Higgs boson with a mass of $125.09\pm0.24$~GeV and by the non-observation of SUSY particles or of additional (neutral or charged) Higgs bosons. [...] LHCHXSWG-2015-002.- Geneva : CERN, 2015 - 24 p. Recommendations for the interpretation of LHC searches for $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ in vector boson fusion with decays to vector boson pairs / Zaro, Marco (Paris U., IV ; Paris, LPTHE) ; Logan, Heather (Ottawa Carleton Inst. Phys.)
We provide theory input for the interpretation of the LHC searches for the production of Higgs bosons $H_5^0$, $H_5^{\pm}$, and $H_5^{\pm\pm}$ that transform as a fiveplet under the custodial symmetry. We choose as a benchmark the Georgi-Machacek model, in which isospin-triplet scalars are added to the Standard Model Higgs sector in such a way as to preserve custodial SU(2) symmetry. [...] LHCHXSWG-2015-001.- Geneva : CERN, 2015 - 19 p.
If $f(x)\leq K$ a.e. in $\Omega$, where $|\Omega|= +\infty$, $K$ is a constant, and $f \in L^2$, why can we say $K\geq 0$? (From Haim Brezis, Functional Analysis, Sobolev Spaces and Partial Differential Equations, p. 308, chapter 9.) $$ f(x)\le K < 0,\ \mu(\Omega) = \infty\implies \infty = \int_\Omega K^2\,d\mu\le\int_\Omega f^2\,d\mu, $$ so if $K<0$ we would have $f\notin L^2$, a contradiction; hence $K\geq 0$. Note that $L^2(\mathbb{R})$ contains unbounded functions. Consider the piecewise constant function $f$ defined as follows. For any positive integer $n$ put $f(x) = n$ for $x\in(n, n+1/n^4)$; elsewhere we set $f(x)=0$. Now $$\int|f(x)|^2\,dx =\sum_{n=1}^\infty \frac{n^2}{n^4}=\sum_{n=1}^\infty \frac{1}{n^2}<\infty.$$
In this chapter, we consider the problem of assigning specific probabilities to outcomes in a sample space. As we saw in Section 1.2, the axiomatic definition of probability (Definition 1.2.1) does not tell us how to compute probabilities. So in this section we consider the commonly encountered scenario referred to as equally likely outcomes and develop methods for computing probabilities in this special case. Finite Sample Spaces Before focusing on equally likely outcomes, we consider the more general case of finite sample spaces. In other words, suppose that a sample space \(S\) has a finite number of outcomes, which we can denote as \(N\). In this case, we can represent the outcomes in \(S\) as follows: $$S = \{s_1, s_2, \ldots, s_N\}.$$ Suppose further that we denote the probability assigned to each outcome in \(S\) as \(P(s_i) = p_i\), for \(i=1, \ldots, N\). Then the probability of any event \(A\) in \(S\) is given by adding the probabilities corresponding to the outcomes contained in \(A\), and we can write $$P(A) = \sum_{s_i \in A} p_i. \label{finitess}$$ This follows from the third axiom of probability (Definition 1.2.1), since we can write any event as a disjoint union of the outcomes contained in the event. For example, if event \(A\) contains three outcomes, then we can write \(A = \{s_1, s_2, s_3\} = \{s_1\} \cup \{s_2\} \cup \{s_3\}\). So the probability of \(A\) is given by simply summing up the probabilities assigned to \(s_1, s_2, s_3\). This fact will be useful in the special case of equally likely outcomes, which we consider next. Equally Likely Outcomes First, let's state a formal definition of what it means for the outcomes in a sample space to be equally likely. Definition \(\PageIndex{1}\) The outcomes in a sample space \(S\) are equally likely if each outcome has the same probability of occurring. In general, if outcomes in a sample space \(S\) are equally likely, then computing the probability of a single outcome or an event is very straightforward, as the following exercise demonstrates. You are encouraged to first try to answer the questions for yourself, and then click "Answer" to see the solution. Exercise \(\PageIndex{1}\) Suppose that there are \(N\) outcomes in the sample space \(S\) and that the outcomes are equally likely. What is the probability of a single outcome in \(S\)? What is the probability of an event \(A\) in \(S\)? Answer First, note that we can represent the outcomes in \(S\) as follows: $$S = \{s_1, s_2, \ldots, s_N\}.$$ For each outcome in \(S\), note that we can denote its probability as $$P(s_i) = c,\ \text{for}\ i=1, 2, \ldots, N,$$ where \(c\) is some constant. This follows from the fact that the outcomes of \(S\) are equally likely and so have the same probability of occurring. With this set-up and using the axioms of probability (Definition 1.2.1), we have the following: \begin{align} 1 = P(S) & = P(\{s_1\}\cup\cdots\cup\{s_N\}) \notag \\ & = P(s_1) + \cdots + P(s_N) \notag \\ & = c + \cdots + c \notag \\ & = N\times c \notag \\ \Rightarrow c & = \frac{1}{N}. \end{align} Thus, the probability of a single outcome is given by \(1\) divided by the number of outcomes in \(S\). Now, for an event \(A\) in \(S\), suppose it has \(n\) outcomes, where \(n\) is an integer such that \(0\leq n\leq N\).
We can represent the outcomes in \(A\) as follows: $$A = \{a_1, \ldots, a_n\}.$$ Using equation \ref{finitess}, we compute the probability of \(A\) as follows: \begin{align} P(A) & = \sum^n_{i=1} P(a_i) = \sum^n_{i=1}\frac{1}{N} \notag \\ & = \frac{1}{N} + \cdots + \frac{1}{N} \notag \\ & = n\left(\frac{1}{N}\right) \notag \\ & = \frac{n}{N}. \end{align} Thus, the probability of an event in \(S\) is equal to the number of outcomes in the event divided by the total number of outcomes in \(S\). We have already seen an example of a sample space with equally likely outcomes in Example 1.2.1. You are encouraged to revisit that example and connect it to the results of Exercise 2.1.1. In general, Exercise 2.1.1 shows that if a sample space \(S\) has equally likely outcomes, then the probability of an event \(A\) in the sample space is given by $$\boxed{P(A) = \frac{\text{number of outcomes in}\ A}{\text{number of outcomes in}\ S}.}\label{eqlik}$$ From this result, we see that in the context of equally likely outcomes, calculating probabilities of events reduces to simply counting the number of outcomes in the event and the sample space. So, we now turn to techniques for counting outcomes. Counting Techniques First, let's consider the general context of performing multi-step experiments. The following tells us how to count the number of outcomes in such scenarios. Multiplication Principle If one probability experiment has \(m\) outcomes and another has \(n\) outcomes, then there are \(m \times n\) total outcomes for the two experiments. More generally, if there are \(k\) many probability experiments with the first experiment having \(n_1\) outcomes, the second with \(n_2\), etc., then there are \(n_1 \times n_2 \times \cdots \times n_k\) total outcomes for the \(k\) experiments. Example \(\PageIndex{1}\) To demonstrate the Multiplication Principle, consider again the example of tossing a coin twice (see Example 1.2.1). Each toss is a probability experiment, and on each toss there are two possible outcomes: \(h\) or \(t\). Thus, for two tosses, there are \(2 \times 2 = 4\) total outcomes. If we toss the coin a third time, there are \(2\times 2 \times 2 = 8\) total outcomes. Next we define two commonly encountered situations, permutations and combinations, and consider how to count the number of ways in which they can occur. Definition \(\PageIndex{2}\) A permutation is an ordered arrangement of objects. For example, "MATH'' is a permutation of four letters from the alphabet. A combination is an unordered collection of \(r\) objects from \(n\) total objects. For example, a group of three students chosen from a class of 10 students. In order to count the number of possible permutations in a given setting, the Multiplication Principle is applied. For example, if we want to know the number of possible permutations of the four letters in "MATH'', we compute $$4\times3\times2\times1 = 4! = 24,\notag$$ since there are four letters to select for the first position, three letters for the second, two for the third, leaving only one letter for the last. In other words, we treat each letter selection as an experiment in a multi-step process. Counting Permutations The number of permutations of \(n\) distinct objects is given by the following: $$n\times(n-1)\times\cdots\times 2\times1 = n!$$ Counting combinations is a little more complicated, since we are not interested in the order in which objects are selected and so the Multiplication Principle does not directly apply. Consider the example that a group of three students are chosen from a class of 10.
The group is the same regardless of the order in which the three students are selected. This implies that if we want to count the number of possible combinations, we need to be careful not to include permutations, i.e., rearrangements, of a certain selection. This leads to the following result: the number of possible combinations of size \(r\) selected from a total of \(n\) objects is given by binomial coefficients. Counting Combinations The number of combinations of \(r\) objects selected without replacement from \(n\) distinct objects is given by $$\binom{n}{r} = \frac{n!}{r!\times(n-r)!}.$$ Note that \(\binom{n}{r}\), read as "\(n\) choose \(r\)", is also referred to as a binomial coefficient, since it appears in the Binomial Theorem. Using the above, we can compute the number of possible ways to select three students from a class of 10: $$\binom{10}{3} = \frac{10!}{3!\times7!} = 120 \notag$$ Example \(\PageIndex{2}\) Consider the example of tossing a coin three times. Note that an outcome is a sequence of heads and tails. Suppose that we are interested in the number of outcomes with exactly two heads, not in the actual sequence. To find the number of outcomes with exactly two heads, we need to determine the number of ways to select positions in the sequence for the heads; then the remaining position will be a tails. If we toss the coin three times, there are three positions to select from, and we want to select two. Since the order in which we make the selection of placements does not matter, we are counting the number of combinations of 2 positions from a total of 3 positions, i.e., $$\binom{3}{2} = \frac{3!}{2!\times1!} = 3.\notag$$ Of course, this example is small enough that we could have arrived at the answer of 3 using brute force by just listing the possibilities. However, if we toss the coin a higher number of times, say 50, then the brute force approach becomes infeasible and we need to make use of binomial coefficients. Strategies for Analyzing a Counting Problem Before we return to our discussion of probability, the following outlines an approach for tackling problems in which we need to count the number of possible outcomes. Are there cases? To describe all possible outcomes, must one consider specific ways the desired event can occur? If so, make a note of each case and add the number of possibilities for each case. Example: How many ways can a hand of 5 cards have at least 3 hearts? Solution: To examine the ways this can happen, we observe that there are 3 specific ways a hand could have at least 3 hearts: (i) a hand could have exactly 3 hearts and 2 other cards (that are not hearts) OR (ii) a hand could have exactly 4 hearts and 1 other card (that is not a heart) OR (iii) a hand could have exactly 5 hearts. So there are 3 cases. The total number of ways a hand of 5 cards has at least 3 hearts \(=\) the number of ways to have exactly 3 hearts and 2 other cards \(+\) the number of ways to have exactly 4 hearts and 1 other card \(+\) the number of ways to have 5 hearts. Non-Example: How many groups of 10 students have 4 members from Indiana and 6 from Michigan? This event is clearly described and already very specific: 4 from one state and 6 from the other. There are no other options, possibilities, or cases. Are repetitions allowed? If the answer is yes, then use the Multiplication Rule. Are there steps? If so, put a slot for each step and a dot (multiplication symbol) between the slots (Multiplication Principle).
Example: How many ways can a hand of 5 cards have exactly 3 hearts? Solution: There are two steps: 1. get 3 hearts AND 2. get 2 other cards (that are not hearts). Then the number of hands \(=\) # of ways to get 3 hearts \(\cdot\) # of ways to get 2 other cards (not hearts). Non-Example: How many groups of 4 books can be selected from a shelf that has 12 books? There is only one step: grab the 4 books. Does order matter? If so, use permutations or the Multiplication Rule. Example: There are 4 distinct cleaning jobs: wash the windows, vacuum, dust, wash the kitchen floor. In how many ways could the jobs be assigned to 4 people from a set of 6? Solution: Order matters. A person is assigned a particular job. Non-Example: See the next point. If order doesn't matter and the end result is a clump or group, then you are counting combinations. Example: How many ways can one choose 5 people from a set of 9 to help with some chores? Solution: Order doesn't matter. We are merely choosing the lucky folks who will help us get some work done. We have NOT assigned particular tasks.
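The card-hand case analysis above translates directly into code. Here is a minimal sketch using Python's math.comb (my own illustration; the only assumptions are the 13 hearts and 39 non-hearts of a standard deck):

from math import comb

# At least 3 hearts in a 5-card hand: add up the three cases.
at_least_3 = (comb(13, 3) * comb(39, 2)   # exactly 3 hearts
              + comb(13, 4) * comb(39, 1) # exactly 4 hearts
              + comb(13, 5))              # exactly 5 hearts
print(at_least_3)                # 241098 such hands
print(at_least_3 / comb(52, 5))  # probability under equally likely hands, ~0.0928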
There are seven basic atom types: Ord, Op, Bin, Rel, Open, Close and Punct. There are other atom types, but they're treated like Ord ones as far as spacing is concerned (a somewhat special case is Inner). An atom of type Bin is turned into Ord if it isn't surrounded on both sides by atoms compatible with its nature. The following table shows what happens when the atom to the right of the Bin (here +) is an Ord; there are 36 possible combinations, and you can try the other 30. For instance, "Ord Bin Open" is the counterpart of "Close Bin Ord", where the Bin remains such, as it also does in "Close Bin Open".

\baselineskip=1.2\baselineskip
\halign{#\unskip\hfil &\qquad #\unskip\hfil\cr
$x+a$ & Ord Bin Ord \cr
$\sum+a$ & Op Bin Ord \cr
$++a$ & Bin Bin Ord \cr
$=+a$ & Rel Bin Ord \cr
$(+a$ & Open Bin Ord \cr
$)+a$ & Close Bin Ord \cr
$,+a$ & Punct Bin Ord \cr
}
\bye

Compile with pdftex. As you see from the picture, only in lines 1, 3 and 6 does the middle + behave like a binary operator; in the other lines it becomes an Ord. Note also that a trailing Bin will always be treated like an Ord (say, when you type \lim_{x\to0-}, because it has nothing after it), similarly to a leading one, like in $-1$. This is not available at the macro level and is decided at a deep stage, when TeX is interpreting a math list for later transformation into a horizontal list.
In the coordinate plane, let A be a point on the x-axis, and let B be a point on the y-axis, so that AB is tangent to the unit circle. Find the minimum value of AB. Let P be the point of tangency and O the origin of the coordinate plane. If \(P=(x,y)\) with \(y=\sqrt{1-x^2}\), the tangent line at P is \(xX+yY=1\), so \(A=(1/x,\,0)\) and \(B=(0,\,1/y)\). Hence \(AB^2=\frac{1}{x^2}+\frac{1}{y^2}=\frac{1}{x^2}+\frac{1}{1-x^2}.\) Setting the derivative to zero: \(-\frac{2}{x^3}+\frac{2x}{(1-x^2)^2}=0\implies (1-x^2)^2=x^4\implies 1-x^2=x^2,\) so \(\color{blue}x=\frac{1}{\sqrt{2}}\) \(\approx 0.7071067811865476.\) Then \(AB^2=2+2=4\), so the minimum length of AB is equal to 2. ! Denote the point of tangency as T. To attain the minimum, AO and OB must be as short as they can be. As the unit circle is symmetric, AO must be equal to OB when the minimum of AB is attained. \(\text{AO} = \text{OB}\) and \(\angle \text{AOB} = 90^{\circ}\), so when the minimum of AB is attained, \(\triangle \text{AOB}\) is a 90-45-45 triangle. OT = 1 unit. As \(\triangle \text{AOB}\) is an isosceles triangle, the radius joining O to the point of tangency is a perpendicular bisector and it divides \(\triangle \text{AOB}\) into 2 equal halves, which are two 90-45-45 triangles. (If you can't understand this, go to the kitchen, grab a square piece of bread, cut it along the main diagonal, and then cut one of the two parts in half.) As \(\triangle \text{AOT}\) is a 90-45-45 triangle with angle OTA = 90 degrees, TA = TO = 1 unit. As the shape is symmetric along the line OT, TB = TA = 1 unit. So when the minimum is attained, AB = TA + TB = 1 + 1 = 2 units.
Editor’s note: This article was written by Valentin Fadeev, a physics PhD candidate in the UK. He also co-authored this article on equivalences and isomorphisms in category theory. You can find him on Twitter via the handle @ValFadeev. The delta function was introduced by P.A.M. Dirac, one of the founders of quantum electrodynamics. The delta function belongs to the abstract concepts of function theory. In a rigorous sense it is a functional that picks out the value of a given function at a given point. However, it also arises as the result of the differentiation of discontinuous functions. One of the most appealing properties of the delta function is the fact that it allows us to work with discretely and continuously distributed quantities in the same way, replacing discrete sums with integrals. Delta and its derivative match physical intuition in cases where we deal with quantities of large magnitude whose impact is limited to a small region of space or a very short amount of time (impulses). The inspiration for this particular article came during a course on mechanics at university. In structural mechanics, it is often desired to understand the relationship between bending moment and shear force for objects, particularly beams, under applied loads. An explanation of the typical method is given here. In the text I was using at the time, the proof of the relation between bending moment and shear force did not make explicit use of the delta function. The final result was valid, but there was a serious fault in the proof. Point forces were considered as mere constant terms in the sum and differentiated away, despite the fact that their contribution is accounted for only starting from certain points (suggesting delta terms). I gave this problem a mathematically rigorous treatment which led to some nice closed formulae applicable to a very general distribution of load. Relationship Between Shear Force and Bending Moment Consider a rigid horizontal beam of length L, and introduce a horizontal axis x directed along it. Assume the following types of forces are applied: point forces F^{p}_{i} at x_i, i=1,2,…,n, 0<x_i<L; distributed forces with linear density given by continuous functions \phi^{d}_{j}(x) on intervals (a_j,b_j), (j=1,2,…,m), 0<a_{j-1}<b_{j-1}<a_j<b_j<L; moments (pairs of forces) M_k with axes of rotation at x_k, (k=1,2,…,p), 0<x_k<L. Note that we ignore the beam's own mass for the sake of clarity, although it can easily be accounted for if necessary. Point Forces Although the point forces F^{p}_{i} are applied at certain points by definition, in reality the force is always applied to a certain finite area. In this case we can consider the density of the force as being very large within this area and dropping to 0 outside it. Hence, it is convenient to define the distribution density as follows: \phi^{p}_{i}(x)=F^{p}_{i}\delta(x-x_i). Shear force F at point x is defined as the sum of all forces applied before that point (we are moving from the left end of the beam to the right one): F=\sum_{x_i<x}F_{i} Hence: F^{p}(x)=\sum_{i}\int_0^x \phi^{p}_{i}(z)\mathrm{d}z=\sum_{i}F^{p}_{i}e(x-x_i), where e(x)=\begin{cases}1,\qquad x>0\\0, \qquad x\leq 0\end{cases} is the step function. Distributed Forces Now we shall find the contribution from the distributed forces. \phi^{d}_j(x) may be formally defined on the whole axis, so we must cut off the unnecessary intervals outside (a_j,b_j). Consider the following expression: \phi^{d}_{j}(x,a_j,b_j)=\phi^{d}_j(x)[e(x-a_j)-e(x-b_j)].
Indeed, it is easy to ascertain that the right-hand side is equal to \phi^{d}_j(x) within (a_j,b_j) and vanishes everywhere else. Calculating the shear force due to distributed forces demonstrates some useful properties of the delta function: \begin{aligned}F^d&=\sum_{j}\int_{0}^{x}\phi_{j}^{d}(z)[e(z-a_j)-e(z-b_j)]\mathrm{d}z\\&=\sum_{j}\left[\int_0^x\phi^{d}_j(z)e(z-a_j)\mathrm{d}z-\int_0^x \phi^{d}_j(z)e(z-b_j)\mathrm{d}z\right]\\&=\sum_{j}\left[\int_{a_j}^x\phi^{d}_{j}(z)e(z-a_j)\mathrm{d}z-\int_{b_j}^{x}\phi^{d}_j(z)e(z-b_j)\mathrm{d}z\right]\\&=\sum_{j}\left[\left.\left(e(z-a_j)\int_{a_j}^z\phi^{d}_j(t)\mathrm{d}t\right)\right|_{a_j}^x-\int_{a_j}^x\left(\int_{a_j}^z\phi^{d}_j(t)\mathrm{d}t\right)\delta(z-a_j)\mathrm{d}z\right.\\&\left.\quad-\left.\left(e(z-b_j)\int_{b_j}^z\phi^{d}_j(t)\mathrm{d}t\right)\right|_{b_j}^x+\int_{b_j}^x\left(\int_{b_j}^z\phi^{d}_j(t)\mathrm{d}t\right)\delta(z-b_j)\mathrm{d}z\right]\\&=\sum_{j}\left[e(x-a_j)\int_{a_j}^x\phi^{d}_j(z)\mathrm{d}z+e(x-b_j)\int_{x}^{b_j}\phi^{d}_j(z)\mathrm{d}z\right]\end{aligned} Here we used the defining property of delta: f(x)\delta(x-x_0)=f(x_0)\delta(x-x_0), from which it follows, in particular, that \left(\int_{a_j}^x\phi^{d}_j(z)\mathrm{d}z\right)\delta(x-a_j)=\left(\int_{a_j}^{a_j}\phi^{d}_j(z)\mathrm{d}z\right)\delta(x-a_j)=0. Bending Moments Now we shall calculate the bending moments created by all types of forces involved. Consider F^{p}_{i} applied at x_i. The bending moment created by this force, evaluated at x>x_i, can be determined as follows: \begin{aligned}M_{i}^{p}(x)&=F_{i}^{p}e(x-x_i)(x-x_i),\\M_{i}^{p}(x+\mathrm{d}x)&=F_{i}^{p}e(x+\mathrm{d}x-x_i)(x+\mathrm{d}x-x_i)=F^{p}_{i}e(x-x_i)(x-x_i+\mathrm{d}x),\\\mathrm{d}M_{i}^{p}&=F_{i}^{p}e(x-x_i)\mathrm{d}x,\\M_{i}^{p}&=F_{i}^{p}\int_0^{x}e(z-x_i)\mathrm{d}z=F^{p}_{i}\frac{x-x_i+|x-x_i|}{2}.\end{aligned} Hence, the total moment due to point forces is M^{p}=\sum_{i}F_{i}^{p}\frac{x-x_i+|x-x_i|}{2}. To calculate the moment created by the distributed forces we use the following approach adopted in mechanics. Replace the distributed force to the left of the point of summation with a point force applied at the center of gravity of the figure enclosed by the graph of \phi_j(x) and the lines x=a_j, x=b_j. If a_j<x<b_j: \begin{aligned}F^{d}_{j}(x)&=\int_{a_j}^x\phi^{d}_{j}(z)\mathrm{d}z,\\ M^{d}_{j}&=\int_{a_j}^x\phi^{d}_{j}(z)\mathrm{d}z\left(x-\frac{\int_{a_j}^x z\phi^{d}_{j}(z)\,\mathrm{d}z}{\int_{a_j}^x\phi^{d}_{j}(z)\mathrm{d}z}\right)\\&=x\int_{a_j}^x\phi^{d}_{j}(z)\mathrm{d}z-\int_{a_j}^x z\phi^{d}_{j}(z)\mathrm{d}z.\end{aligned} Differentiating both sides with respect to x we obtain \frac{\mathrm{d}M^{d}_{j}}{\mathrm{d}x}=\int_{a_j}^x\phi^{d}_{j}(z)\mathrm{d}z+x\phi^{d}_{j}(x)-x\phi^{d}_{j}(x)=\int_{a_j}^x\phi^{d}_{j}(z)\mathrm{d}z=F^{d}_{j}(x) In fact, we could just as well include point forces in the above derivation, considering the total density \phi(x)=\phi^{p}(x)+\phi^{d}(x), which is the correct way to account for point forces in this proof. We can therefore derive the general relation between shear force and bending moment: F=\frac{\mathrm{d}M}{\mathrm{d}x}. Now we need to calculate the contribution made by moments of pairs of forces. Each of these moments can be considered as a pair of equal large forces F applied at a small distance h from each other and oppositely directed. They will make the following contribution to the expression for total density: \mu(x)=F\delta(x+h)-F\delta(x)=Fh\frac{\delta(x+h)-\delta(x)}{h}=M\frac{\delta(x+h)-\delta(x)}{h}.
Taking the limit as h \to 0: \mu(x)=M\delta'(x) Note that the derivative of \delta(x) is called the unit doublet. This does not imply that the expression for shear force will contain terms of the form M\delta(x); shear force consists of the vertical components of all forces, and for pairs of forces those components cancel each other out. Therefore, the total bending moment of the pairs of forces is expressed as follows: M=\sum_{k}M_ke(x-x_k). Finally, we can write the expressions for shear force and bending moment in the most general case: \begin{aligned}F(x)&=\sum_{i}F^{p}_{i}e(x-x_i)\\&\qquad+\sum_{j}\left[e(x-a_j)\int_{a_j}^x\phi^{d}_j(z)\mathrm{d}z+e(x-b_j)\int_{x}^{b_j}\phi^{d}_j(z)\mathrm{d}z\right],\\M(x)&=\sum_{k}M_ke(x-x_k)+\sum_{i}F^{p}_{i}\frac{x-x_i+|x-x_i|}{2}\\&\qquad+\sum_{j}\int_0^x\left[e(t-a_j)\int_{a_j}^t \phi^{d}_j(z)\mathrm{d}z+e(t-b_j)\int_{t}^{b_j}\phi^{d}_j(z)\mathrm{d}z\right]\mathrm{d}t.\end{aligned} In the important particular case where \phi^{d}_j(x)\equiv \phi^{d}_j=\text{const}, these expressions reduce to the following: \begin{aligned}F(x)&=\sum_{i}F^{p}_{i}e(x-x_i)+\sum_{j}\phi^{d}_j\left[(x-a_j)e(x-a_j)-(x-b_j)e(x-b_j)\right], \\ M(x)&=\sum_{k}M_ke(x-x_k)+\sum_{i}F^{p}_{i}\frac{x-x_i+|x-x_i|}{2}\\ &\qquad+\sum_{j}\frac{\phi^{d}_j}{2}\left[(x-a_j)^2e(x-a_j)-(x-b_j)^2e(x-b_j)\right].\end{aligned} Conclusion In this example we have demonstrated the application of the Dirac delta and related functions to a problem where the input data combine continuous and discrete distributions. Singular functions allow us to work with such data in a uniform and relatively rigorous way. The advantage is that we get the final results in closed form, as opposed to an algorithmic step-by-step method of solution. We can also obtain concise proofs, such as that for the relation between shear force and bending moment, and extend the results to very general inputs. This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
I was confronted with a similar problem; here's one possible solution. \begin{equation}\mathbb{E}\left[\exp\left(k\int_0^T[B(t)]^{2}\,dt\right)\right] <\infty\text{ ?}\end{equation} First of all, we introduce the following sequence of processes \begin{equation} B_{t}^{n}= \sum_{i=0}^{n}\sqrt{\lambda_{i}}\,\phi_{i}(t)\,\eta_{i},\end{equation} where \begin{equation}\eta_{i} \sim N(0,1)\ \text{i.i.d.},\qquad \phi_{i}(t)=\sqrt{2}\sin\left(\left(i+\tfrac{1}{2}\right)\pi t\right),\qquad \lambda_{i} = \frac{4}{\pi^2}\frac{1}{(2i+1)^{2}}.\end{equation} We will show that this sequence of processes is a representation of Brownian motion on [0,1]. Then, thanks to a change of variables and the invariance properties of Brownian motion, it becomes "easier" to compute the expectation. The sequence of processes converges; to show that, you can show that it is a Cauchy sequence in $L^2$. We denote by X_t that limit. You can easily show that X_t is a Gaussian process. Now, a Gaussian process is characterized by its covariance function and its mean function. Thanks to Mercer's theorem you can calculate the covariance function, and the calculation of the mean function is very simple because it is identically 0: \begin{equation}K(s,t)=\operatorname{cov}(X_{t},X_{s}) = \min(s,t),\qquad \mathbb{E}[X_{t}]=0.\end{equation} Hence \begin{equation}X_{t}= \sum_{i=0}^{\infty}\sqrt{\lambda_{i}}\,\phi_{i}(t)\,\eta_{i}\ \text{ is a representation of Brownian motion on } [0,1].\end{equation} We can use that representation to calculate \begin{equation}\int_{0}^{T}[B(t)]^{2}dt= T^{2}\int_{0}^{1} \left[\frac{B_{Tt}}{\sqrt{T}}\right]^{2}dt,\end{equation} where $\frac{B_{Tt}}{\sqrt{T}}$ is a Brownian motion; this is just the invariance of Brownian motion under rescaling of time. Therefore \begin{equation}\int_{0}^{T}[B(t)]^{2}dt \stackrel{d}{=} T^{2}\sum_{i=0}^{\infty}\lambda_{i}\eta_{i}^2.\end{equation} Now you "just" have to calculate the Laplace transform of a squared normal variable, which is painful (it is basically the calculation of a Laplace transform for a random variable following a $\chi^2$ law): \begin{equation}\mathbb{E}\left[\exp\left(k\int_0^T[B(t)]^{2}\,dt\right)\right] = \prod_{i=0}^{\infty}\mathbb{E}\left[\exp\left(T^{2}k\lambda_{i}\eta_{i}^2\right)\right].\end{equation} If I have made any mistake or if you need more explanations on some particular points, don't hesitate.
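As a numerical sanity check (my own addition, with example values k = 0.1 and T = 1): each factor is the Laplace transform of a chi-square(1) variable, E[exp(s*eta^2)] = (1-2s)^(-1/2) for s < 1/2, so the truncated product can be compared with the known closed form (cos(sqrt(2k)*T))^(-1/2), valid while sqrt(2k)*T < pi/2:

import numpy as np

k, T = 0.1, 1.0
i = np.arange(200000)
lam = 4.0 / (np.pi**2 * (2 * i + 1)**2)  # the Karhunen-Loeve eigenvalues above

# Product of the chi-square Laplace transforms over the independent eta_i:
product = np.prod(1.0 / np.sqrt(1.0 - 2.0 * k * T**2 * lam))

closed_form = 1.0 / np.sqrt(np.cos(np.sqrt(2.0 * k) * T))
print(product, closed_form)  # the two values agree to several decimals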
Sorry for this type of question, but I've forgotten the math basics from middle school; maybe someone can help me out. If I know the result and the base, how can I calculate the exponent? $2.5 = 10^x$ — how would I get the $x$ value of this? Do you remember the definition of the logarithm? That's exactly what you need, because the logarithm is defined as the inverse operation of exponentiation (raising a number to a power): $$ 2.5=10^x \Longleftrightarrow x=\log_{10}{2.5}. $$ The statement $x=\log_{b}{a}$ is asking the question: to what power should I raise $b$ to get $a$? And that's equivalent to saying $b^x=a$. The other answers lead you to $x=\log_{10}(2.5) \approx 0.3979$, though it is possible to get a close mental-arithmetic approximation without explicitly using logarithms: $2^{10}=1024 \approx 1000 = 10^3$, so $2 \approx 10^{3/10}$ or slightly more; $2.5 = \dfrac{10}{2^2} \approx \dfrac{10^1}{10^{6/10}}=10^{2/5}$ or slightly less, making $x \approx \dfrac{2}{5} = 0.4$ or slightly less. You can do something similar for other values, and this can give reasonably good approximations for some other $\log_{10}(x)$:

x       log_10(x)  approx
1       0          0
1.25    0.09691    0.1-
1.6     0.20412    0.2+
2       0.30103    0.3+
2.5     0.39794    0.4-
3.125   0.49485    0.5-
3.2     0.50515    0.5+
4       0.60206    0.6+
5       0.69897    0.7-
6.25    0.79588    0.8-
8       0.90309    0.9+
10      1          1

Hint: Use the natural logarithm on both sides of the equation. $$\ln 2.5 = x \ln 10 \implies x = \dfrac{\ln 2.5}{\ln 10}$$ I used the logarithm law $\log_a b^r = r\log_a b$ on the right-hand side of the first equation. If you know the base and the exponent, then the operation to get the value of the power is exponentiation. For instance, if the base is $10$ and the exponent is $3$, then the value of the power is $10^3 = 1000$. If you know the power and the exponent, then the operation to get the base is the root. For instance, if the exponent is $3$ and the power is $1000$, then the base is $\sqrt[3]{1000} = 10$. If you know the power and the base, the operation to get the exponent is the logarithm. For instance, if the power is $1000$ and the base is $10$, then the exponent is $\log_{10}(1000) = 3$. These three operations are so closely related, yet their names and notations (and teaching methods) are entirely different. This is a shame, but there is little one can realistically do about it. In this case, the answer you're looking for is $\log_{10}(2.5)$. (Make sure you use logarithms base $10$, as $10$ is the base of the power. On some calculators, for instance, logarithm base $10$ is denoted by $\log$ while logarithm base $e\approx 2.72$ is denoted by $\ln$. But sometimes $\log$ refers to base-$e$ logarithms instead. You will have to test this on your calculator to figure out which convention it follows.)
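In code this is a one-liner; for instance, in Python:

import math

x = math.log10(2.5)  # solve 2.5 == 10**x
print(x)             # 0.3979...
print(10 ** x)       # recovers 2.5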
I have the following tikzpicture:

\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc} % Support for french language
\usepackage{tikz}
\usetikzlibrary{positioning,calc}
\usetikzlibrary{decorations.markings,shapes,arrows}
\begin{document}
\tikzstyle{block} = [draw, rectangle, minimum height=3em, minimum width=6em]
\tikzstyle{sum} = [draw, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-, thin, black, dashed}, pin distance = 2cm]
\tikzstyle{amp} = [regular polygon, regular polygon sides=3, draw, fill=white, text width=1em, inner sep=0.35mm, outer sep=0mm, shape border rotate=-90]
\tikzstyle{line} = [draw, thick, -]
% The block diagram code is probably more verbose than necessary
\begin{tikzpicture}[auto, node distance=2cm, >=latex', every text node part/.style={align=center}]
  \node[block, pin={[pinstyle]above:p}, node distance=3cm] (gp) {Générateur \\ périodique};
  \node[block, below = 2cm of gp] (ga) {Générateur \\ aléatoire};
  \node[amp, pin={[pinstyle]above:$\sigma$}, yshift = -1.5cm, right = 2 cm of gp] (g) {};
  \draw[line] (g.west) -- ++(-0.5cm,0) coordinate(r1){};
  \draw[line] (ga.east) -- ++(1cm,0) -- ++(0,1cm) coordinate(b1){};
  \draw[line] (gp.east) -- ++(1cm,0) -- ++(0,-1cm) coordinate(a1){};
  \fill (a1) circle[radius=2pt];
  \fill (b1) circle[radius=2pt];
  \fill (r1) circle[radius=2pt];
  \draw[line, rounded corners = 2pt] (r1) -- (a1);
  \draw[thick, ->, >=stealth] (g.east) -- ++(1cm,0);
  \node[block, pin={[pinstyle]above:$|a_p(i)|$}, right = 1cm of g] (tf) {$\frac{1}{1+\sum_{i=1}^p a_p(i)z^{-i}}$};
  \node [output, right of=tf] (output) {};
  \draw[thick, ->, >=stealth] (tf.east) -- ++(1cm, 0) node[above] () {$x(n)$};
\end{tikzpicture}
\end{document}

That gives the following image. There remain, however, a few operations that I don't know how to do:

- I would like to align the dashed lines carrying the symbols \sigma and |a_p(i)| vertically with the dashed line carrying the p symbol.
- I would like to replace the dashed single line carrying |a_p(i)| with a double-line arrow.
- I'd like to add a fourth vertical dashed line, starting at the switch symbol (between the r1 and a1 coordinates), carrying the symbol V/NV and aligned vertically with the three other ones.

Any comments or suggestions are welcome.
Dear Uncle Colin, When I differentiate $y=2x^2 + 7x + 2$ and apply the $nx^{n-1}$ rule, why do I only apply it to the $2x^2$ and the $7x$ but not the 2? -- Nervous Over Rules, Mathematically A Liability Hi, NORMAL, and thanks for your message! There are several ways… "Isn't it somewhere around $\phi$?" asked the student, brightly. "That number sure crops up in a lot of places!" The Mathematical Ninja's eyes narrowed. "Like shells! And body proportions! And arrawk!" Hands dusted. The Mathematical Ninja stood back. "The Vitruvian student!" The student arrawked again as the circular machine he… Dear Uncle Colin, I have an equation $3y \frac{\mathrm{d}y}{\mathrm{d}x} = x$. When I separate and integrate both sides, I end up with $\frac{3}{2}y^2 = \frac{1}{2}x^2$, which reduces to $y = x\sqrt{\frac{1}{3}}+c$. With the initial condition $y(3) = 11$, I get $y = x\sqrt{\frac{1}{3}}+11-3\sqrt{\frac{1}{3}}$, but apparently this is incorrect. What am I… I'm a big advocate of error logs: notebooks in which students analyse their mistakes. I recommend a three-column approach: in the first, write the question; in the second, what went wrong; and in the last, how to do it correctly. Oddly, that's the format for this post, too. The question… Dear Uncle Colin, There is a famous puzzle where you're asked to form 100 by inserting basic mathematical operations at strategic points in the string of digits 123456789. This can be achieved, for example, by writing $1 + 2 + 3 - 4 + 5 + 6 + 78 +$… On this month's episode of Wrong, But Useful, @icecolbeveridge and @reflectivemaths are joined by special guest co-host @christianp. This time, we talk about: Christian, who is involved in @mathsjam and the @aperiodical, and has a number of the podcast: 13. He dislikes it because of its times table; I like… Every so often, a puzzle comes along and is just right for its time. Not so hard that you waste hours on it, but not so easy that it pops out straight away. I heard this from Simon at Big MathsJam last year and thought it'd be a good one… Dear Uncle Colin, Apparently, you can use L'Hôpital's rule to find the limit of $\left(\tan(x)\right)^x$ as $x$ goes to 0 - but I can't see how! - Fractions Required, Example Given Excepted Hi, FREGE, and thanks for your question! As it stands, you can't use L'Hôpital - but you can… This is a guest post from @ImMisterAl, who prefers to remain anonymous in real life. It refers to the problem in this post: a semi-circle is inscribed in a 3-4-5 triangle as shown; find $X$. As with any mathematical problem, my first thought was to sort out exactly what I… Dear Uncle Colin, I don't understand why the normal gradient is the negative reciprocal of the tangent gradient. What's the logic there? -- Pythagoras Is Blinding You To What's Obvious Hi, PIBYTWO, and thanks for your message! My favourite way to think about perpendicular gradients is to imagine a line…
Skills to Develop Confidence limits tell you how accurate your estimate of the mean is likely to be. Introduction After you've calculated the mean of a set of observations, you should give some indication of how close your estimate is likely to be to the parametric ("true") mean. One way to do this is with confidence limits. Confidence limits are the numbers at the upper and lower end of a confidence interval; for example, if your mean is \(7.4\) with confidence limits of \(5.4\) and \(9.4\), your confidence interval is \(5.4\) to \(9.4\). Most people use \(95\%\) confidence limits, although you could use other values. Setting \(95\%\) confidence limits means that if you took repeated random samples from a population and calculated the mean and confidence limits for each sample, the confidence interval for \(95\%\) of your samples would include the parametric mean. To illustrate this, here are the means and confidence intervals for \(100\) samples of \(3\) observations from a population with a parametric mean of \(5\). Of the \(100\) samples, \(94\) (shown with \(X\) for the mean and a thin line for the confidence interval) have the parametric mean within their \(95\%\) confidence interval, and \(6\) (shown with circles and thick lines) have the parametric mean outside the confidence interval. Fig. 3.4.1 Mean and confidence intervals for 100 samples of 3 observations With larger sample sizes, the \(95\%\) confidence intervals get smaller: Fig. 3.4.2 Mean and confidence intervals for 100 samples of 20 observations When you calculate the confidence interval for a single sample, it is tempting to say that "there is a \(95\%\) probability that the confidence interval includes the parametric mean." This is technically incorrect, because it implies that if you collected samples with the same confidence interval, sometimes they would include the parametric mean and sometimes they wouldn't. For example, the first sample in the figure above has confidence limits of \(4.59\) and \(5.51\). It would be incorrect to say that \(95\%\) of the time, the parametric mean for this population would lie between \(4.59\) and \(5.51\). If you took repeated samples from this same population and repeatedly got confidence limits of \(4.59\) and \(5.51\), the parametric mean (which is \(5\), remember) would be in this interval \(100\%\) of the time. Some statisticians don't care about this confusing, pedantic distinction, but others are very picky about it, so it's good to know. Confidence limits for measurement variables To calculate the confidence limits for a measurement variable, multiply the standard error of the mean times the appropriate t-value. The \(t\)-value is determined by the probability (\(0.05\) for a \(95\%\) confidence interval) and the degrees of freedom (\(n-1\)). In a spreadsheet, you could use =(STDEV(Ys)/SQRT(COUNT(Ys)))*TINV(0.05, COUNT(Ys)-1), where \(Ys\) is the range of cells containing your data. You add this value to and subtract it from the mean to get the confidence limits. Thus if the mean is \(87\) and the \(t\)-value times the standard error is \(10.3\), the confidence limits would be \(76.7\) and \(97.3\). You could also report this as "\(87\pm 10.3\) (\(95\%\) confidence limits)." People report both confidence limits and standard errors as the "mean \(\pm \) something," so always be sure to specify which you're talking about. All of the above applies only to normally distributed measurement variables. 
For measurement data from a highly non-normal distribution, bootstrap techniques, which I won't talk about here, might yield better estimates of the confidence limits. Confidence limits for nominal variables There is a different, more complicated formula, based on the binomial distribution, for calculating confidence limits of proportions (nominal data). Importantly, it yields confidence limits that are not symmetrical around the proportion, especially for proportions near zero or one. John Pezzullo has an easy-to-use web page for confidence intervals of a proportion. To see how it works, let's say that you've taken a sample of \(20\) men and found \(2\) colorblind and \(18\) non-colorblind. Go to the web page and enter \(2\) in the "Numerator" box and \(20\) in the "Denominator" box," then hit "Compute." The results for this example would be a lower confidence limit of \(0.0124\) and an upper confidence limit of \(0.3170\). You can't report the proportion of colorblind men as "\(0.10\pm something\)," instead you'd have to say "\(0.10\) with \(95\%\) confidence limits of \(0.0124\) and \(0.3170\)." An alternative technique for estimating the confidence limits of a proportion assumes that the sample proportions are normally distributed. This approximate technique yields symmetrical confidence limits, which for proportions near zero or one are obviously incorrect. For example, if you calculate the confidence limits using the normal approximation on \(0.10\) with a sample size of \(20\), you get \(-0.03\) and \(0.23\), which is ridiculous (you couldn't have less than \(0\%\) of men being color-blind). It would also be incorrect to say that the confidence limits were \(0\) and \(0.23\), because you know the proportion of colorblind men in your population is greater than \(0\) (your sample had two colorblind men, so you know the population has at least two colorblind men). I consider confidence limits for proportions that are based on the normal approximation to be obsolete for most purposes; you should use the confidence interval based on the binomial distribution, unless the sample size is so large that it is computationally impractical. Unfortunately, more people use the confidence limits based on the normal approximation than use the correct, binomial confidence limits. The formula for the \(95\%\) confidence interval using the normal approximation is \(p\pm 1.96\sqrt{\left [ \frac{p(1-p)}{n} \right ]}\), where \(p\) is the proportion and \(n\) is the sample size. Thus, for \(P=0.20\) and \(n=100\), the confidence interval would be \(\pm 1.96\sqrt{\left [ \frac{0.20(1-0.20)}{100} \right ]}\), or \(0.20\pm 0.078\). A common rule of thumb says that it is okay to use this approximation as long as \(npq\) is greater than \(5\); my rule of thumb is to only use the normal approximation when the sample size is so large that calculating the exact binomial confidence interval makes smoke come out of your computer. Statistical testing with confidence intervals This handbook mostly presents "classical" or "frequentist" statistics, in which hypotheses are tested by estimating the probability of getting the observed results by chance, if the null is true (the \(P\) value). An alternative way of doing statistics is to put a confidence interval on a measure of the deviation from the null hypothesis. For example, rather than comparing two means with a two-sample t–test, some statisticians would calculate the confidence interval of the difference in the means. 
This approach is valuable if a small deviation from the null hypothesis would be uninteresting, when you're more interested in the size of the effect than in whether it exists. For example, if you're doing final testing of a new drug that you're confident will have some effect, you'd be mainly interested in estimating how well it worked, and how confident you were in the size of that effect. You'd want your result to be "This drug reduced systolic blood pressure by \(10.7\ \mathrm{mm\ Hg}\), with a confidence interval of \(7.8\) to \(13.6\)," not "This drug significantly reduced systolic blood pressure (\(P=0.0007\))."

Using confidence limits this way, as an alternative to frequentist statistics, has many advocates, and it can be a useful approach. However, I often see people saying things like "The difference in mean blood pressure was \(10.7\ \mathrm{mm\ Hg}\), with a confidence interval of \(7.8\) to \(13.6\); because the confidence interval on the difference does not include \(0\), the means are significantly different." This is just a clumsy, roundabout way of doing hypothesis testing, and they should just admit it and do a frequentist statistical test.

There is a myth that when two means have confidence intervals that overlap, the means are not significantly different (at the \(P<0.05\) level). Another version of this myth is that if each mean is outside the confidence interval of the other mean, the means are significantly different. Neither of these is true (Schenker and Gentleman 2001, Payton et al. 2003); it is easy for two sets of numbers to have overlapping confidence intervals yet still be significantly different by a two-sample t–test; conversely, each mean can be outside the confidence interval of the other, yet the means can still fail to be significantly different. Don't try to compare two means by visually comparing their confidence intervals; just use the correct statistical test.

Similar statistics

Confidence limits and standard error of the mean serve the same purpose: to express the reliability of an estimate of the mean. When you look at scientific papers, sometimes the "error bars" on graphs or the \(\pm\) number after means in tables represent the standard error of the mean, while in other papers they represent \(95\%\) confidence intervals. I prefer \(95\%\) confidence intervals. When I see a graph with a bunch of points and error bars representing means and confidence intervals, I know that most (\(95\%\)) of the error bars include the parametric means. When the error bars are standard errors of the mean, only about two-thirds of the bars are expected to include the parametric means; I have to mentally double the bars to get the approximate size of the \(95\%\) confidence interval (because \(t(0.05)\) is approximately \(2\) for all but very small values of \(n\)). Whichever statistic you decide to use, be sure to make it clear what the error bars on your graphs represent. A surprising number of papers don't say what their error bars represent, which means that the only information the error bars convey to the reader is that the authors are careless and sloppy.

Examples

Measurement data

The blacknose dace data from the central tendency web page have an arithmetic mean of \(70.0\). The lower confidence limit is \(45.3\) (\(70.0-24.7\)), and the upper confidence limit is \(94.7\) (\(70.0+24.7\)).
Nominal data

If you work with a lot of proportions, it's good to have a rough idea of confidence limits for different sample sizes, so you have an idea of how much data you'll need for a particular comparison. For proportions near \(50\%\), the confidence intervals are roughly \(\pm 30\%,\ 10\%,\ 3\%\), and \(1\%\) for \(n=10,\ 100,\ 1000,\) and \(10,000\), respectively. This is why the "margin of error" in political polls, which typically have a sample size of around \(1,000\), is usually about \(3\%\). Of course, this rough idea is no substitute for an actual power analysis.

n        proportion=0.10     proportion=0.50
10       0.0025, 0.4450      0.1871, 0.8129
100      0.0490, 0.1762      0.3983, 0.6017
1000     0.0821, 0.1203      0.4685, 0.5315
10,000   0.0942, 0.1060      0.4902, 0.5098

How to calculate confidence limits

Spreadsheets

The descriptive statistics spreadsheet descriptive.xls calculates \(95\%\) confidence limits of the mean for up to \(1000\) measurements. The confidence intervals for a binomial proportion spreadsheet confidence.xls calculates \(95\%\) confidence limits for nominal variables, using both the exact binomial and the normal approximation.

Web pages

This web page calculates confidence intervals of the mean for up to \(10,000\) measurement observations. The web page for confidence intervals of a proportion handles nominal variables.

R

Salvatore Mangiafico's R Companion has sample R programs for confidence limits for both measurement and nominal variables.

SAS

To get confidence limits for a measurement variable, add CIBASIC to the PROC UNIVARIATE statement, like this:

data fish;
   input location $ dacenumber;
   cards;
Mill_Creek_1                76
Mill_Creek_2               102
North_Branch_Rock_Creek_1   12
North_Branch_Rock_Creek_2   39
Rock_Creek_1                55
Rock_Creek_2                93
Rock_Creek_3                98
Rock_Creek_4                53
Turkey_Branch              102
;
proc univariate data=fish cibasic;
run;

The output will include the \(95\%\) confidence limits for the mean (and for the standard deviation and variance, which you would hardly ever need):

Basic Confidence Limits Assuming Normality

Parameter        Estimate     95% Confidence Limits
Mean             70.00000     45.33665     94.66335
Std Deviation    32.08582     21.67259     61.46908
Variance         1030         469.70135    3778

This shows that the blacknose dace data have a mean of \(70\), with confidence limits of \(45.3\) and \(94.7\).

You can get the confidence limits for a binomial proportion using PROC FREQ. Here's the sample program from the exact test of goodness-of-fit page:

data gus;
   input paw $;
   cards;
right
left
right
right
right
right
left
right
right
right
;
proc freq data=gus;
   tables paw / binomial(P=0.5);
   exact binomial;
run;

And here is part of the output:

Binomial Proportion for paw = left
----------------------------------
Proportion               0.2000
ASE                      0.1265
95% Lower Conf Limit     0.0000
95% Upper Conf Limit     0.4479

Exact Conf Limits
95% Lower Conf Limit     0.0252
95% Upper Conf Limit     0.5561

The first pair of confidence limits shown is based on the normal approximation; the second pair is the better one, based on the exact binomial calculation. Note that if you have more than two values of the nominal variable, the confidence limits will only be calculated for the value whose name is first alphabetically. For example, if the Gus data set included "left," "right," and "both" as values, SAS would only calculate the confidence limits on the proportion of "both." One clumsy way to solve this would be to run the program three times, changing the name of "left" to "aleft," then changing the name of "right" to "aright," to make each one first in one run.
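If you'd rather not run SAS, here is a rough sketch in Python (assuming SciPy is available) of the exact binomial (Clopper-Pearson) limits; it reproduces the exact confidence limits in the SAS output above for Gus's \(2\) left paws out of \(10\):

from scipy import stats

k, n, alpha = 2, 10, 0.05  # 2 "left" out of 10 trials, 95% limits
lower = stats.beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
upper = stats.beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
print(lower, upper)        # about 0.0252 and 0.5561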
References

Payton, M. E., M. H. Greenstone, and N. Schenker. 2003. Overlapping confidence intervals or standard error intervals: what do they mean in terms of statistical significance? Journal of Insect Science 3: 34.

Schenker, N., and J. F. Gentleman. 2001. On judging the significance of differences by examining the overlap between confidence intervals. American Statistician 55: 182-186.

Contributor

John H. McDonald (University of Delaware)
Reading up on Lagrangian mechanics, it's fascinating. Entirely different view, one single rule, a complete alternative to Newton's laws. But how do you actually find the path of least action? Let's say I were trying to find the path taken by a projectile launched by a cannon (image omitted; courtesy of Dr. Thomas Gibson, Texas Tech University).

Let's say the above cannon launches a projectile off a cliff at $0^\circ$ from the horizontal. The mass doesn't actually affect the path if we disregard air resistance, but it's needed for Lagrangian mechanics, so let's say it's a $10\ \mathrm{kg}$ projectile. The projectile is fired from $10\ \mathrm{m}$ off the ground, and the velocity imparted to the projectile is $10\ \mathrm{m/s}$, entirely in the positive $x$ direction.

Newtonian

Running through this Newtonian-style, if we wanted to find the location of the projectile at any given point in its path, we know that the horizontal component of its position can be found using the initial velocity, and the vertical component (measuring the drop below the launch height as positive) can be found using the acceleration due to gravity, so:

$$\vec{s}(t) = \left \langle v_xt,\ \frac{gt^2}{2} \right \rangle$$

Lagrangian

For the Lagrangian approach, we know that the kinetic energy of the projectile at the start is $\frac{mv^2}{2}$, and ignoring air resistance the potential energy of the projectile at the start is $mgh$. So, within the context of a given path,

$${\mathcal {S}}(L)=\int_{t_i}^{t_f}{\left[ \frac{mv_x^2}{2} - mgh \right]} \,dt$$

And the path of least action $L_{LA}$ from all possible paths $L$ is defined as:

$$\{L_{LA} \in L \mid {\mathcal {S}}(L_{LA}) = \min_{L_k \in L}{\mathcal {S}}(L_k)\}$$

So I've got my definition down, but how do I actually find the curve followed by the projectile:

$$\vec{s}(t) = \left \langle ?,\ ? \right \rangle$$

Example

Let's say I wanted to find the position of the projectile at the middle of its path, by time. In the Newtonian model, I know that the projectile's path will end upon hitting the ground, so with $d$ the height of the drop, the total time in the air can be expressed as $t_f = \sqrt{\frac{2d}{g}}$, and at half of $t_f$, $t_{mid} = \sqrt{\frac{d}{2g}}$. Substitute $t_{mid}$ in for $t$ and we have the position mid-path by time:

$$\vec{s}(t_{mid}) = \left \langle v_x\sqrt{\frac{d}{2g}},\ \frac{d}{4} \right \rangle$$

But I can't quite figure out how to arrive at the same result from the Lagrangian form. How would I use the action to actually find details of the favored path?

$${\mathcal {S}}(L)=\int_{t_i}^{t_f}{\left[ \frac{mv_x^2}{2} - mgh \right]} \,dt \quad\quad \Longrightarrow \quad\quad \vec{s}(t_{mid}) = \left \langle v_x\sqrt{\frac{d}{2g}},\ \frac{d}{4} \right \rangle$$
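In case it helps, here's a numerical experiment I can get to work (a minimal Python/NumPy sketch; the flight time T, the step count N, and the use of scipy.optimize.minimize are my own choices, not given by the problem): discretize a candidate path with fixed endpoints, sum up the Lagrangian, and let an optimizer make the action stationary. The interior points come out on the Newtonian parabola $\langle v_x t,\ gt^2/2\rangle$.

import numpy as np
from scipy.optimize import minimize

m, g, v_x = 10.0, 9.8, 10.0   # projectile mass, gravity, launch speed
T, N = 1.0, 40                # assumed flight time and number of time steps
dt = T / N

# Endpoints fixed at the known start and (Newtonian) end of the path;
# y measures the drop below the launch height, so V = -m*g*y.
x0, y0 = 0.0, 0.0
xf, yf = v_x * T, 0.5 * g * T**2

def action(interior):
    x = np.concatenate(([x0], interior[:N - 1], [xf]))
    y = np.concatenate(([y0], interior[N - 1:], [yf]))
    vx, vy = np.diff(x) / dt, np.diff(y) / dt
    kinetic = 0.5 * m * (vx**2 + vy**2)
    potential = -m * g * 0.5 * (y[:-1] + y[1:])   # midpoint rule for V
    return np.sum((kinetic - potential) * dt)      # S = integral of T - V

guess = np.concatenate((np.linspace(x0, xf, N + 1)[1:-1],
                        np.linspace(y0, yf, N + 1)[1:-1]))
best = minimize(action, guess).x
y_mid = best[N - 1:][N // 2 - 1]                   # y at t = T/2
print(y_mid, 0.5 * g * (T / 2)**2)                 # both about d/4 = 1.225

(For this potential, which is linear in position, the stationary path really is a minimum of the discretized action, so a plain minimizer finds it.)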
CZ_PROB1 - Summing to a Square Prime

$S_{P2} = \{p \mid p\ \mathrm{prime} \wedge (\exists x_1, x_2 \in \mathbb{Z},\ p = x_1^2 + x_2^2) \}$ is the set of all primes that can be represented as the sum of two squares. The function $S_{P2}(n)$ gives the $n$th prime number from the set $S_{P2}$.

Now, given two integers $n$ ($0 < n < 501$) and $k$ ($0 < k < 4$), find $p(S_{P2}(n), k)$, where $p(a, b)$ gives the number of unordered ways to sum to the given total $a$ using parts no larger than $b$. For example: $p(5, 2) = 3$ (i.e. $2+2+1$, $2+1+1+1$, and $1+1+1+1+1$). Here $5$ is the total with $2$ as its largest possible part.

Input

The first line gives the number of test cases $T$, followed by $T$ lines of integer pairs, $n$ and $k$.

Constraints

$0 < T < 501$
$0 < n < 501$
$1 < S_{P2}(n) < 7994$
$0 < k < 4$

Output

The value $p(S_{P2}(n), k)$ for each $n$ and $k$. Append a newline character to every test case's answer.

Example

Input:
3
2 2
3 2
5 3

Output:
3
7
85

Comments

untitledtitled: 2019-01-16 23:58:12
There seems to be a problem with the rendering of the mathematical notation. The first line reads:

tanmayak99: 2018-05-31 18:37:24
Good question..

deadpool_18: 2017-06-19 18:39:42
Do not forget to consider 2 in your set, although it's not congruent to 1 modulo 4.

harsh_verma: 2017-06-15 14:05:19
Due to the small constraints, this can be solved without DP also ;) #PNC

shubham: 2017-04-27 14:00:24
Sometimes even the easy ones get you.. wasted 1.5 hrs on this.

singhsauravsk: 2017-04-10 04:31:08
Nice problem :D

minhbk1861: 2016-11-04 07:16:38
Wrong input constant

surayans tiwari (http://bit.ly/1EPzcpv): 2016-06-26 14:51:52
Coin change :)

hash7: 2016-06-24 18:47:46
Nice question :) Bottom up + precomputation

:.Mohib.:: 2015-08-01 12:18:57
Nice one..!!
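A rough solution sketch in Python (my own, not an official one): it leans on Fermat's two-square theorem, by which an odd prime is a sum of two squares exactly when it is congruent to 1 mod 4 (plus the special case 2 = 1 + 1 that a commenter above warns about), and on the classic coin-change DP that other comments hint at.

import sys

LIMIT = 8000  # the constraints guarantee S_P2(n) < 7994

def primes_below(limit):
    is_p = [True] * limit
    is_p[0] = is_p[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if is_p[i]:
            for j in range(i * i, limit, i):
                is_p[j] = False
    return [i for i in range(limit) if is_p[i]]

# Fermat: an odd prime is a sum of two squares iff p % 4 == 1; also 2 = 1 + 1.
sp2 = [p for p in primes_below(LIMIT) if p == 2 or p % 4 == 1]

# ways[k][a] = number of partitions of a into parts of size at most k
ways = [[0] * LIMIT for _ in range(4)]
ways[0][0] = 1
for k in range(1, 4):
    ways[k] = ways[k - 1][:]
    for a in range(k, LIMIT):
        ways[k][a] += ways[k][a - k]

data = sys.stdin.read().split()
t = int(data[0])
answers = []
for i in range(t):
    n, k = int(data[1 + 2 * i]), int(data[2 + 2 * i])
    answers.append(str(ways[k][sp2[n - 1]]))
print("\n".join(answers))

On the sample input this prints 3, 7, and 85, matching the example above.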
I want to show that the family of normal distributions is not a single-parameter exponential family, i.e. there aren't functions $h,g,\eta,T$ such that $$h(x)g(\mu,\sigma^2)\exp(\eta(\mu,\sigma^2) T(x)) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{1}{2\sigma^2}(x-\mu)^2\right)$$ for all $x\in\mathbb R$ and $(\mu,\sigma^2)\in\mathbb R\times\mathbb R_{>0}$. I tried fixing one argument, in order to get some information about the other functions. For example $\mu=0$ looks much simpler but it didn't lead me anywhere. Can anybody give me a hint?
Edit (11/12/12): I added an explanation of the phrase "this is essentially equivalent to $X$ being $S_2$" at the end to answer aglearner's question in the comments. [See also here and here]

Dear Jesus, I think there are several problems with your question/desire to define a canonical divisor on any algebraic variety.

First of all, what is any algebraic variety? Perhaps you mean a quasi-projective variety (=reduced and of finite type) defined over some (algebraically closed) field. OK, let's assume that $X$ is such a variety. Then what is a divisor on $X$? Of course, you could just say it is a formal linear combination of prime divisors, where a prime divisor is just a codimension 1 irreducible subvariety. OK, but what if $X$ is not equidimensional? Well, let's assume it is, or even that it is irreducible.

Still, if you want to talk about divisors, you would surely want to say when two divisors are linearly equivalent. OK, we know what that is: $D_1$ and $D_2$ are linearly equivalent iff $D_1-D_2$ is a principal divisor. But what is a principal divisor? Here it starts to become clear why one usually assumes that $X$ is normal even to just talk about divisors, let alone to define the canonical divisor. In order to define principal divisors, one would need to define something like the order of vanishing of a regular function along a prime divisor. It's not obvious how to define this unless the local ring of the general point of any prime divisor is a DVR. Well, then this leads one to want to assume that $X$ is $R_1$, that is, regular in codimension $1$, which is equivalent to those local rings being DVRs.

OK, now once we have this we might also want another property: if $f$ is a regular function, we would expect that the zero set of $f$ should be 1-codimensional in $X$. In other words, we would expect that if $Z\subset X$ is a closed subset of codimension at least $2$, then if $f$ is nowhere zero on $X\setminus Z$, then it is nowhere zero on $X$. In (yet) other words, if $1/f$ is a regular function on $X\setminus Z$, then we expect that it is a regular function on $X$. In the language of sheaves, this means that we expect that the push-forward of $\mathscr O_{X\setminus Z}$ to $X$ is isomorphic to $\mathscr O_X$. Now this is essentially equivalent to $X$ being $S_2$.

So we get that in order to define divisors as we are used to them, we would need that $X$ be $R_1$ and $S_2$, that is, normal. Now, actually, one can work with objects that behave very much like divisors even on non-normal varieties/schemes, but one has to be very careful about which of the usual properties still hold for them. As far as I can tell, the best way is to work with Weil divisorial sheaves, which are really reflexive sheaves of rank $1$. On a normal variety, the sheaf associated to a Weil divisor $D$, usually denoted by $\mathcal O_X(D)$, is indeed a reflexive sheaf of rank $1$, and conversely every reflexive sheaf of rank $1$ on a normal variety is the sheaf associated to a Weil divisor (in particular a reflexive sheaf of rank $1$ on a regular variety is an invertible sheaf), so this is indeed a direct generalization. One word of caution here: $\mathcal O_X(D)$ may be defined for Weil divisors that are not Cartier, but then it is (obviously) not an invertible sheaf.

Finally, to answer your original question about canonical divisors: indeed, it is possible to define a canonical divisor (=Weil divisorial sheaf) for all quasi-projective varieties.
If $X\subseteq \mathbb P^N$ and $\overline X$ denotes the closure of $X$ in $\mathbb P^N$, then the dualizing complex of $\overline X$ is $$\omega_{\overline X}^\bullet=R{\mathscr H}om_{\mathbb P^N}(\mathscr O_{\overline X}, \omega_{\mathbb P^N}[N])$$ and the canonical sheaf of $X$ is $$\omega_X=h^{-n}(\omega_{\overline X}^\bullet)|_X=\mathscr Ext^{N-n}_{\mathbb P^N}(\mathscr O_{\overline X},\omega_{\mathbb P^N})|_X$$ where $n=\dim X$. (Notice that you may disregard the derived category stuff and the dualizing complex, and just make the definition using $\mathscr Ext$.) Notice further that if $X$ is normal, this is the same as the one you are used to, and otherwise it is a reflexive sheaf of rank $1$.

As for your formula, I am not entirely sure what you mean by "where the $D_i$ are representatives of all divisors in the Class Group". For toric varieties this can be made sense of as in Josh's answer, but otherwise I am not sure what you had in mind.

(Added on 11/12/12):

Lemma. A scheme $X$ is $S_2$ if and only if for any $\iota:Z\to X$ closed subset of codimension at least $2$, the natural map $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism.

Proof. Since both statements are local, we may assume that $X$ is affine. Let $x\in X$ be a point and $Z\subseteq X$ its closure in $X$. If $x$ is a point of codimension at most $1$, there is nothing to prove, so we may assume that $Z$ is of codimension at least $2$. Considering the exact sequence (recall that $X$ is affine): $$0\to H^0_Z(X,\mathscr O_X) \to H^0(X,\mathscr O_X) \to H^0(X\setminus Z,\mathscr O_X) \to H^1_Z(X,\mathscr O_X) \to 0$$ shows that $\mathscr O_X\to \iota_*\mathscr O_{X\setminus Z}$ is an isomorphism if and only if $H^0_Z(X,\mathscr O_X)=H^1_Z(X,\mathscr O_X)=0$, and the latter condition is equivalent to $$\mathrm{depth}\,\mathscr O_{X,x}\geq 2,$$ which, given the assumption on the codimension, is exactly the condition that $X$ is $S_2$ at $x\in X$. $\qquad\square$
This task is more complex than the task of solving a quadratic equation, for example, and one must master a significant portion of a textbook – such as Georgi's textbook – and perhaps something beyond it to have everything one needs. For the 8-dimensional representation of $SU(3)$, things simplify because it's the "adjoint rep" of $SU(3)$ – the vector space that formally coincides with the Lie algebra itself. And the action of the generator $G_i$ on the basis vector $V_j=G_j$ of the adjoint representation is given by $$ G_i (V_j) = [G_i,V_j]= \sum_k f_{ij}{}^k G_k $$ This implies that the structure constants $f$ directly encode the matrix elements of the generator $G_i$ with respect to the adjoint representation – $j$ and $k$ label the row and the column, respectively.

The structure constants $f$ determining the commutators may be extracted from all the roots. The whole mathematical structure is beautiful, but the decomposition of the generators under the Cartan subalgebra has several pieces, and therefore an even greater number of different types of "pairs of pieces" that appear as the commutators. Some of the generators $G_i$ – $r$ of them, where $r$ is the rank – are identified with the Cartan generators $h_a$. The rest of the generators $G_j$ are uniquely associated with all the roots. If you only have the Cartan matrix, you effectively have the inner products of the simple roots only. You first need to get all the roots, and those are connected with the $d-r$ (dimension minus rank) root vectors $r_j$.

The commutators of two Cartan generators vanish, $$[h_i,h_j]=0$$ The commutator of a Cartan generator with a non-Cartan generator is given by $$[h_i,G_{r(j)}] = r_i G_{r(j)}$$ because we organized the non-Cartan generators as simultaneous eigenstates under all the Cartan generators. Finally, the commutator $$[G_{r(i)},G_{r(j)}]$$ is zero if $r_i=r_j$. It is a natural linear combination of the $h_i$ generators if the root vectors obey $r_i=-r_j$. If $r_i+r_j$ is a vector that isn't a root vector, the commutator has to vanish. And if $r_i+r_j$ is a root vector but $r_i\neq \pm r_j$, then the commutator is proportional to $G_{r(i)+r(j)}$, the generator corresponding to this "sum" root vector. The coefficient (mostly a sign) in front of this commutator is subtle. Once you have all these commutators, you have effectively restored all the structure constants $f$, and therefore all the matrix entries with respect to the adjoint representation.

To find matrix elements for a general representation is much more complex. You must first figure out what the representations are. Typically, you want to start with the fundamental (and/or antifundamental) rep; all others may be obtained as summands in the direct-sum decomposition of tensor products of many copies of the fundamental (and/or antifundamental, if it is different) representation. All the representations may be obtained from the weight lattice, which contains the root lattice as a sublattice and looks similar. In fact, the weight lattice is the lattice dual (in the "form" vector space sense) to the coroot lattice under the natural inner product; for a simply-laced algebra such as $su(3)$, the coroots may be identified with the roots.

In practice, physicists don't ever do the procedures in this order because that's not how Nature asks us to solve problems. We learn how to deal with the groups we need – which, at some moment, includes all the compact simple Lie groups as the "core" (special unitary, orthogonal, symplectic, and five exceptional ones) – and we learn the reps of these Lie groups: the obvious fundamental ones, the adjoint, and the pattern for how to get the more complicated ones.
I am afraid that it doesn't make much sense to fill in the "gaps" here: if one had to elaborate on everything, one would gradually be forced to write another textbook on Lie groups and representation theory as this answer, and I don't think that such a work would be appropriately rewarded – even this answer wasn't. ;-)
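To make the adjoint-representation statement above concrete, here is a small numerical sketch (Python/NumPy; I assume the standard Gell-Mann basis with generators $G_a=\lambda_a/2$ and the physics convention $[G_a,G_b]=if_{abc}G_c$, which carries an explicit $i$ relative to the real-structure-constant convention used above). It extracts the $f_{abc}$ from commutators and checks that the matrices $(T^{\mathrm{adj}}_a)_{bc}=-if_{abc}$ close on the same algebra.

import numpy as np

# The eight Gell-Mann matrices
lam = np.array([
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[1, 0, 0], [0, 1, 0], [0, 0, -2]],
], dtype=complex)
lam[7] /= np.sqrt(3)
G = lam / 2                  # generators, normalized so tr(G_a G_b) = delta_ab / 2

# Structure constants from [G_a, G_b] = i f_abc G_c, i.e. f_abc = -2i tr([G_a,G_b] G_c)
f = np.zeros((8, 8, 8))
for a in range(8):
    for b in range(8):
        comm = G[a] @ G[b] - G[b] @ G[a]
        for c in range(8):
            f[a, b, c] = np.real(-2j * np.trace(comm @ G[c]))

# Adjoint representation: the matrix elements ARE the structure constants
T_adj = -1j * f              # (T_adj[a])[b, c] = -i f_abc
for a in range(8):
    for b in range(8):
        lhs = T_adj[a] @ T_adj[b] - T_adj[b] @ T_adj[a]
        rhs = 1j * np.tensordot(f[a, b], T_adj, axes=1)   # i f_abc T_adj[c]
        assert np.allclose(lhs, rhs)

print(f[0, 1, 2])            # f_123 = 1: the su(2) subalgebra sitting inside su(3)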
Overview

For the moment, please treat this as a kind of proof-of-concept. It's intended to have enough detail to make the design concept clear, without bogging down too much in details at this point. For example, I'm disregarding literals for now, because they are inessential to the basic deductive system. I guess the basic idea is that we ought to be able to use some variation of Natural Deduction to express just about everything needed to define RDF and RDFS clearly and concisely. Among other things this should allow us to eliminate stuff like "instance of a graph" and "lean graph" (described in RDF Semantics); the former is just not needed (being handled by derivation rules involving variables), while the latter seems to correspond to the concept of normalization, in the sense of a set (sequence) of independent triads, no one of which can be derived from the others.

To get started I'm just sketching out a basic calculus to suggest what shape the final thing would have. Also, the notes amount to a sort of tutorial/cry-for-help in that I try to explain what I'm doing and why, rather than just stating definitions as one might do in a formal definition.

Meta-language

Constants: All constants in the object language are also constants in the meta-language.

\(\exists\): existential quantifier
\(\land\): logical and
\(\phi\): a formula
\(\Phi\): set of formulae
\(\vdash\): derivation relation
\(\models\): entailment (logical consequence) relation
\(\leadsto\): informal justification
\(\Rightarrow\): rewrite relation; like derivation, only from object language to meta-language.

Formulae

Quantification: \(\exists x.\phi\) has its usual FOL meaning.

Inference. We treat inference as a genus with three species:

\(\Phi \vdash \phi\): the formulae \(\Phi\) derive the formula \(\phi\); \(\phi\) is derivable or deducible from \(\Phi\).
\(\Phi \models \phi\): the formulae \(\Phi\) entail the formula \(\phi\); \(\phi\) is a (logical) consequence of \(\Phi\).
\(\Phi \leadsto \phi\): the meanings of \(\Phi\) justify or lead to the meaning \(\phi\); \(\phi\) follows from \(\Phi\) (semantically). (NB: this is an informal notion we will use to explicate the consequence relation.)

Note that the meta-language contains a few constants from First-Order Logic (FOL): \(\exists, \land, \vdash, \models\). They allow us to say useful things about object languages and how they work. As it stands it is not complete; this is just enough to get the ideas across.

\(\mathbb{T}_\varnothing\) (T-null)

Symbols

Constants: all upper- and lower-case letters, plus the punctuation symbols "." and ","

Term calculus: every symbol is a term.

Formula calculus:

If \(A\), \(B\), and \(C\) are terms, then \(ABC\) is a formula.
If \(A\), \(B\), and \(C\) are terms, then each of the following is a formula: \(xBC\), \(AxC\), \(ABx\); \(xyC\), \(xBy\), \(Axy\); \(xyz\).
If \(ABC\) and \(DEF\) are formulae, then \(ABC, DEF.\) is a formula.

So far, \(\mathbb{T}_\varnothing\) – call it T-null – is a minimal language, to say the least. It has a very limited number of forms, and it has no semantics whatsoever. In other words, it's not really a language at all; we could call it an uninterpreted calculus, except that you can't do any calculating in it.
However we care to classify it, we can attach any semantics we care to it by assigning meanings to its symbols, but since no such assignments are fixed, the result has no stability - you can always change any assignment, so there isn't much point in making one in the first place. We can express this in terms of expressive strength in a way that will turn out to be useful as we proceed. \(\mathbb{T}_\varnothing\) is the weakest possible language, because it has no expressive stability. You can't say anything in it that is guaranteed to mean the same thing under different interpretations.

\(\mathbb{T}_\varnothing\) corresponds to RDF without blank nodes and literals. Formulae like \(ABC\) obviously correspond structurally to RDF triples; to maintain a clean separation between this language and RDF languages, we will call such forms triads. We might occasionally be tempted to call the constants trions, but we hope not.

\(\mathbb{T}_0\) (T-basic)

What can we do to remedy this sorry state of affairs? Let's start by giving the speakers of this little language the ability to express things like "somebody killed Caesar" and "Cicero said something". While we're at it, let's allow them to say "Somebody killed Caesar and Cicero said something".

As to "somebody" ("something", etc.): the usual way this is done is by extending the language. One adds some symbols and the axioms that fix their meanings. In this case, we could add the existential quantifier \(\exists\) to the object language and, if we wanted to be fastidious, define rules in the meta-language to determine its meaning ("there exists"). This would allow users of the language to "say" things of the form \(\exists x.xBC\); if you set B = killed and C = Caesar, this expresses "somebody killed Caesar". But since we want to track RDF, whose symbol set does not include anything corresponding to \(\exists\), we will take a different tack, one that corresponds to the way things are done in the RDF ecology: instead of extending the language, we strengthen it.

Note: there's more to say about extending/strengthening; in particular such changes must be conservative, that is, they cannot break what is already there. More generally: we have a language, a world, and a stable but partial interpretation. Fix any two and vary the third.

There are two ways to do this. One is to pick some constants that are already in the language and give them a fixed meaning. This is the technique we will use later on when we accommodate the various "entailment regimes" of RDF. This kind of strengthening is equivalent to extending the set of constants in the language with fixed meanings (call them the fixed constants for short).
Having more fixed constants means more expressive power; it allows the speaker to say more (kinds of) things. The other way to strengthen a language, which we will now use, is to reclassify some subset of its constants. Notice that \(\mathbb{T}_\varnothing\) contains only one kind of symbol. We will change the language not by adding symbols, but by reclassifying the symbols \(x, y,\) and \(z\) into a new class of Variables. Unfortunately, that alone would have no effect; something additional needs to be introduced in order to make such reclassification meaningful. In this case, that means adding rules to the meta-language that fix the meanings of the formulae in which they appear. That last clause - "meanings of the formulae" - is critical, since with this technique we do not fix the meanings of the variable symbols themselves. On the contrary, quantified variables of this sort will never be assigned a determinate meaning. Their meaning is essentially tied up with the meaning of quantification, which in turn is essentially connected with the composite forms in which the variables appear.

The new meta-language rules fixing the meanings of formulae containing variables will be expressed as rewrite rules. A rewrite rule is essentially the same as a syntactic derivation or transformation rule; it says how one form can be mapped to another. The difference is that derivation rules work within the object language (i.e. they're closed), whereas rewrite rules take forms from the object language to forms in the meta-language.

Symbols

Constants: all upper- and lower-case letters except \(x\), \(y\), and \(z\), plus the punctuation symbols "." and ","
Variables: \(\{x, y, z\}\)

Meta-language Rewrite (Sublimation) Rules

\(ABC\Rightarrow ABC\)
\(, \Rightarrow\land\)
\(xBC \Rightarrow \exists x.xBC\), and similarly for \(AxC\) and \(ABx\). We treat any formula in the object language that contains variables as an existentially quantified formula.
\(xyC \Rightarrow \exists x\exists y.xyC\)
\(xyz \Rightarrow \exists x\exists y\exists z.xyz\)
\(xBC, xEF \Rightarrow \exists x.(xBC\land xEF)\), and similar for two and three variables.
\(xBC, yEF \Rightarrow \exists x\exists y.(xBC\land yEF)\), and similar for two and three variables.
etc. (existential rewrites shown by example rather than general rule)

CAVEAT: it may be tempting to view the rewrite rules as a kind of semantic interpretation. That would be a tragic mistake. They are purely syntactic rules. For example, if we have \(ABC, xEF\), rewrite rules 1-3 "sublimate" this to \(ABC\land\exists x.xEF\).

Note: as it stands this calculus is not complete, because it does not handle the scope of quantification for all cases. For example it does not tell us how to get from \(xBC, DEF, GHy\) to \(\exists x\exists y.(xBC\land DEF\land GHy)\). But I hope what's there is sufficient to get the idea across.

This is a stronger language, but it still lacks a highly desirable bit of logic: it does not yet support any form of inference. We cannot go from a premise to a conclusion. In particular, even though we know how to construct (legal) formulae containing "," and \(\land\), nothing tells us when it is appropriate to do so. For example, if you know that grass is green, and you also know (separately, as it were) that snow is white, then you ought to be able to package those two bits of knowledge into a single whole and say "grass is green and snow is white". Intuitively that's obviously what we do; but for a calculus, we need an explicit rule.
Similarly for \(\exists\): if we know the specific person (thing) that exists, then we can say that some person or thing exists. But we need a formal rule for this. We could strengthen \(\mathbb{T}_0\) and call it a new language, but in the interest of conciseness let's just define \(\mathbb{T}_0\) as including support for a basic form of inference.

It is intuitively obvious that if "Brutus killed Caesar" is true, then so is "somebody killed Caesar". Going from the former to the latter is a basic species of inference. Similarly, if you know "Somebody killed Caesar", and you also know (perhaps you were told) that Brutus killed Caesar, then you no longer need "Somebody killed Caesar". Since Brutus is somebody, you lose nothing if you discard "Somebody killed Caesar" and retain "Brutus killed Caesar". This too is a form of inference.

To support this in \(\mathbb{T}_0\) we add inference rules governing the introduction and elimination of terms like "somebody". To start, we have only two such terms: \(\exists\) ("there exists"), corresponding to "somebody" (or "something", etc.), and \(\land\) ("and"). [In the case of \(\exists\) we're abusing the terminology a little bit, since \(\exists\) is not part of the object language. Our introduction rule will actually license introduction of variables in triad constructions, which in turn are mapped to \(\exists\) expressions in the meta-language.]

\(\land\)-Introduction: \(\displaystyle{\Phi\quad \phi\over \Phi,\ \phi}\)

\(\land\)-Elimination: \(\displaystyle{\Phi,\ \phi\over\Phi}\) \(\qquad\) \(\displaystyle{\Phi,\ \phi\over\phi}\)

\(\exists\)-Introduction:

\(ABC\vdash xBC\), and similar for \(AxC\), \(ABx\), etc.
\(ABC\vdash xyC\) ...
\(ABC\vdash xyz\)

\(\exists\)-Elimination:

NOTE: I'm not sure how to write \(\exists\)-Elimination yet. Maybe something like one of these?

\({ABC\quad xBC\over ABC}\qquad or\qquad {\Phi\quad \Phi x\over\Phi}\)

RDF: \(\exists\)-Elimination is basically the counterpart to "instancing" in RDF-Semantics.

TODO: explicate the significance of elimination rules for readers unfamiliar with this style of logic.

[NB: if we were being really fastidious we would give each of these rules a distinct name, but that seems like too much bother for now. And why not just call them \(\exists\)-I rules, since that's what they mean?]

Alternative format:

CAVEAT: this works fine for one triad, but does not work for a sequent of triads. The single-triad rule would allow e.g. \(ABC,ADE\vdash xBC,yDE\). We need a rule to express the requirement of uniform substitution, e.g. \(ABC,ADE\vdash xBC,xDE\).

\({ABC\over xBC}\quad {ABC\over AxC}\quad {ABC\over ABx}\quad {ABC\over xyC}\quad {ABC\over Axy}\quad {ABC\over xBy}\quad {ABC\over xyz}\)

[NB: can we generalize? E.g. \({\Phi a\over\Phi x}\), reading \(\Phi a\) to mean a collection of triads containing one or more occurrences of the constant \(a\), so the rule says: replace them all with a variable. For a single triad, \({\phi a\over\phi x}\).]

The intuitive way to interpret such rules is simple: if you have \(ABC\), then you are licensed to introduce a var in place of any of the constants. So "introduce a var" is the formal counterpart of "say 'somebody' (instead of 'Brutus')". [NB: "Var-introduction" actually doesn't quite work, since it involves substitution, but it's close enough.]

[TODO: explicate: are the inference rules part of the language definition, or are they in the meta-language?
Intuitively they seem to be like the rules of the formula calculus; but they use the symbol \(\vdash\), which we do not want to admit to the language. So they must be part of the meta-language. Remember, the meta-language "includes" the object language.]

Historical note: this rule is cribbed directly from the great Gerhard Gentzen, who first came up with the idea of introduction and elimination rules as determining the meanings of the logical constants. But his \(\exists\)-Introduction rule is different from our Var-Introduction rule, since his object language included the symbol \(\exists\) and ours does not. His looked something like this: \[{Pa\over\exists x.Px}\ {\scriptstyle\exists\!-\!I}\] where the \(\exists\!-\!I\) on the right names the rule.

Normal Form

Informally, a collection of triads is in normal form if the triads are independent; that is, if none of them can be derived from the others.

RDF: as far as I can tell, the concept of "leanness" introduced in RDF-Semantics boils down to normal form. It is expressed in terms of graphs, instances, and proper subgraphs, but what a "lean" graph amounts to is one whose triples are independent - the semantic-domain counterpart of normal form.

Literals

If we want our little Triad calculus to reflect RDF, we obviously need to support literals. I'll get to it eventually.

\(\mathbb{T}_1\) (RDF-entailment)

...todo

\(\mathbb{T}_2\) (RDFS-entailment)

...todo

Commentary: So far, this has nothing to do with RDF per se, except insofar as it reflects its structure. But the symbol set has been chosen with an eye toward the RDF family of languages we will define using this calculus. Informally:

Our set of constant symbols \(\{A, B,\ldots\}\cup \{a, b,\ldots\}\) corresponds to the set of IRIs.
To define a logical language corresponding to RDFS, OWL, etc. we will fix the meanings of subsets of the constants, e.g. \(t\) corresponds to \(\mathrm{rdf:type}\).
Variable symbols will be used to capture the RDF concept of "blank node".
We intentionally limit the symbol set so that we can write triads concisely. If our term calculus allowed e.g. "Foo" as a term, we would need to be able to write "Foo Bar Baz". Which isn't all that bad, but \(ABC\) is a whole lot more concise, and since we're only concerned with inference we don't need to be able to construct human-readable terms like "Brutus". This is supposed to be a meta-language, after all.

\(AtB, BkC\vdash AtC\).
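As a sanity check on the uniform-substitution caveat above, here is a toy sketch in Python (entirely my own scaffolding; the names Triad, var_intro, etc. are hypothetical) of Var-Introduction as an operation on sequents of triads:

from typing import List, Tuple

Triad = Tuple[str, str, str]
VARIABLES = {"x", "y", "z"}

def var_intro(triads: List[Triad], const: str, var: str) -> List[Triad]:
    """Uniformly replace every occurrence of one constant with one variable,
    as the sequent-level rule requires (ABC, ADE |- xBC, xDE)."""
    assert var in VARIABLES and const not in VARIABLES
    return [tuple(var if term == const else term for term in t) for t in triads]

print(var_intro([("A", "B", "C"), ("A", "D", "E")], "A", "x"))
# [('x', 'B', 'C'), ('x', 'D', 'E')] -- one shared variable, not two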
Ok, so, this is a pretty late answer, but I was thinking about this and I believe I figured it out - though I'm not even sure whether the original NEB was implemented with this exact justification in mind, seeing how cavalier it seems to be with its notation. But nevertheless, it's a good justification. Strap in, because it's a bit of a ride - one that involves quantum mechanics and Feynman path integrals, no less!

So, let's first begin with the notion of 'action'. Action is a quantity defined over a path. If we have a motion in time from $t=0$ to $t=T$ following a path $\mathbf{x}(t)$, then the action is the integral in time of the Lagrangian, $\mathcal{L}$:

$$S = \int_0^T \mathcal{L}(t)\,dt= \int_0^T\left[\frac{1}{2}m\dot{\mathbf{x}}^2-V(\mathbf{x})\right]dt$$

This quantity figures in an interesting property of quantum mechanics. If we ask ourselves "how likely is it for a particle that can be found at $\mathbf{x}_0$ at $t=0$ in a potential $V$ to then be found at $\mathbf{x}_N$ at $t=T$?", the answer can be expressed either by a propagator or by a path integral. Specifically:

$$P(\mathbf{x}_N, T;\mathbf{x}_0, 0) = \langle\mathbf{x}_N|e^{-i\frac{H}{\hbar}T}|\mathbf{x_0}\rangle=A\int D[\mathbf{x}(t)]e^{i\frac{S}{\hbar}}$$

Ok. Explanation time! The first formulation is your typical quantum bra-ket notation: you pick your initial state, you evolve it using a time propagator that is the exponential of the Hamiltonian (aka the energy of the system) times $-iT/\hbar$, where $\hbar$ is the reduced Planck constant, then you project it against the desired final state. The overlap between the evolved state and the desired one is how likely it is for that specific process to happen. Still with me? Good.

The second formulation is the Feynman path integral notation. It says something slightly different: it says that the probability of finding the system in that final state is proportional (there's a normalization factor $A$ that we just won't worry about for now) to the sum of an imaginary exponential of the action, divided by the Planck constant, over all possible paths that connect the two states in that time. That's what $D[\mathbf{x}(t)]$ means: it's not an integral over a variable, it's an integral over functions. And as you may guess, that's mighty hard to compute. More on that in a minute.

First, let's consider what this path integral formalism means. The action, like the energy, isn't an absolute value; it's defined up to a constant, being an integral of a very energy-like quantity, the Lagrangian, and all. So let's assume that there exists one path that connects our two events that has minimal action, and all other paths have a bigger one. Note, this assumption isn't always true, and that means this reasoning and all its consequences can get messed up sometimes; but then again, so can the NEB if there are two equally possible saddle points, so bear with me. If we work under that assumption, then we can set $S=0$ for that path, and $S > 0$ for all the others. These other paths then will contribute with oscillatory terms to the overall integral, and the bigger $S$, the faster the oscillations, and the more likely they will just cancel each other out.
If we imagine shrinking $\hbar$ (which is a big no-no, it's not called a constant for nothing, but let's fashion ourselves gods and create our own versions of the universe for a moment), then these oscillations become wilder and wilder; and in the limit of $\hbar \rightarrow 0$, that is, in the limit of a perfectly classical, Newtonian, totally-not-quantum universe, they go completely out of control, and only one path remains to contribute: the one with the minimal action. We have just retrieved the Principle of Least Action, which says that the (Newtonian) path between two points in space and time is always the one with the minimal possible action.

So, what does this have to do with NEB? Well, we need a few more steps, and a trick. Let's suppose we have a classical system, and want to compute the least action path between two points in space and time. The thing is, all Newtonian trajectories are the least action path between where they start and where they arrive; but we don't know where they will arrive before we try them. Here, instead, we know both initial and final conditions, and we don't know anything about the path itself (including the initial velocity). So, how do we go about doing that, especially with a computer?

Well, I'd say, we discretize the integral to compute the action into $N$ steps, with a time step $dt = T/N$, separating it into a sum over intermediate steps $\mathbf{x}_1, \mathbf{x}_2, \ldots$. In this way, the action becomes:

$$S = \sum_{i=0}^N\left[\frac{1}{2}m\dot{\mathbf{x}}_i^2-V(\mathbf{x}_i)\right]dt$$

This discretization, by the way, is also an excellent way of computing the integral above, and is often used for example in quantum field theory.

So how do we compute those velocities? Well, let's just assume that they are constant between each pair of steps, so

$$\dot{\mathbf{x}}_i = \frac{\mathbf{x}_i-\mathbf{x}_{i-1}}{dt}$$

and

$$S = \sum_{i=1}^N\frac{1}{2}m \frac{(\mathbf{x}_i-\mathbf{x}_{i-1})^2}{dt}-\sum_{i=0}^N V(\mathbf{x}_i)\,dt$$

Is this starting to look like your original NEB objective function, if you take $k=m/dt$? But see, there's still a problem, namely, that pernicious "minus" sign in front of the potential. In your original formula, it's a plus! That's a Hamiltonian, not a Lagrangian. So, what makes it disappear?

Last trick, I swear. Wick Rotation time. Another favourite of purveyors of QFT. This one sounds a bit like magic, really. See that quantum propagator, above? I mean this:

$$e^{-i\frac{H}{\hbar}t}$$

Now, if you know the Hamiltonian is basically the system's energy, this looks a lot like a partition function. So let's make it look like one. Let's make a change of parameters: $t \rightarrow -i\tau$.

$$e^{-i\frac{H}{\hbar}t} \rightarrow e^{-\frac{H}{\hbar}\tau}$$

Ok, that's gotta be cheating, right? But in fact, it's perfectly okay; we're simply redefining one parameter in our maths, nothing's changed. We call $\tau$ the "imaginary time" and the only thing that's really important to remember is that it has nothing to do with real time, and we should never relate the two as if they were the same; they're not. Now let's look at what that does to our action. We have to change its element of time, so $dt \rightarrow -i\,d\tau$, but see what happens...

$$S = i\sum_{i=1}^N\frac{1}{2}m \frac{(\mathbf{x}_i-\mathbf{x}_{i-1})^2}{d\tau}+i\sum_{i=0}^N V(\mathbf{x}_i)\,d\tau$$

Well, here we have it!
There are also a lot of convenient consequences: for example, if we go back to the path integral, the non-minimal actions now don't just oscillate, they vanish exponentially, and that makes the integral converge much better. But in the process we lost the original connection between paths and dynamics! These paths we get out of optimising this action aren't real paths; they're paths in "imaginary time", which frankly sounds like something out of a bad Doctor Who episode. So what does this imaginary time have to do with anything?

Well, check the part where we first performed the Wick rotation. That looks a lot like a partition function, right? In fact, it would totally be a partition function if we set $\tau = \hbar\beta$ ($\beta$ here being the usual inverse of the temperature times the Boltzmann constant). So there you have it: imaginary time is inverse temperature. When you compute that path above, you're not looking for a specific path in time, you're looking for a path at a given temperature, and the higher $T$ (the final time), the lower that temperature... uh... $T$, I guess (ok, I realise here I actually used some slightly confusing notation. Sorry for that).

Turns out, your $k$ in the NEB objective function is exactly proportional to the temperature. Set it high, and particles will cut corners: they've got enough kinetic energy to do that. Set it low, and particles will just slide back into their potential basins: they can't leave them. And that's why the NEB uses that objective function, and what its physical sense is.
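To see that last paragraph in action, here is a rough numerical sketch (Python/NumPy with SciPy; the double-well potential, the endpoints, and the parameter values are all invented for illustration) of the discretized objective $\sum_i\left[\frac{k}{2}(\mathbf{x}_i-\mathbf{x}_{i-1})^2+V(\mathbf{x}_i)\right]$ with fixed endpoints:

import numpy as np
from scipy.optimize import minimize

def V(x):
    return (x**2 - 1.0)**2          # double well with minima at x = -1, +1

def objective(inner, k, x_start=-1.0, x_end=1.0):
    x = np.concatenate(([x_start], inner, [x_end]))
    spring = 0.5 * k * np.sum(np.diff(x)**2)   # the "kinetic" / spring term
    return spring + np.sum(V(x))               # plus the potential term

n_images = 15
guess = np.linspace(-1.0, 1.0, n_images + 2)[1:-1]
for k in (0.05, 20.0):              # low vs. high "temperature"
    band = minimize(objective, guess, args=(k,)).x
    print(k, np.round(band, 2))
# Low k: the images slide down into the two basins near +/-1.
# High k: the band stays taut and evenly spaced, cutting straight
# over the barrier at x = 0 -- exactly the behaviour described above.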
Newton's universal law of gravitation was first published by Isaac Newton in his Principia in 1687 and it states that $$\vec{F}_g=-G\frac{m_1m_2}{r^2}\hat{r}_{1,2},\tag{1}$$ where \(\vec{F}_g\) is the gravitational force exerted by the mass \(m_1\) on the mass \(m_2\), \(r\) is the separation distance between the two masses, \(G\) is a constant called the gravitational constant, and \(\hat{r}_{1,2}\) is a unit vector whose tail coincides with \(m_1\) and which points at \(m_2\) (the minus sign makes the force on \(m_2\) point back toward \(m_1\): gravity is attractive). The gravitational constant was experimentally determined to be $$G=6.67408(31)\times 10^{-11}\ \mathrm{m^3\,kg^{-1}\,s^{-2}}.\tag{2}$$

According to Equation (1) and Newton's third law of motion, any pair of two objects that have masses exert the forces \(\vec{F}_g\) and \(-\vec{F}_g\) on each other. This is true for any pair of two objects, no matter how light they are and no matter how far apart they are. Two grains of dust on opposite sides of the universe would have very small masses \(m_1\) and \(m_2\), and their separation distance \(r\) would be vast; despite this, if you plugged those values into Equation (1), you would still get some nonzero value (albeit a very tiny one). Thus, according to Newton's law of gravity and third law, two grains of dust on opposite sides of the universe are pulling on one another and attracted to one another. Since all of the galaxies, stars, planets, life, and grains of dust in the universe have mass, according to Newton's laws all of those things are pulling on each other. This must have been an astonishing realization for Newton: that everything in the universe is attracted to everything else.

Shortly after the birth of the universe, all of the matter in the universe was distributed very smoothly. But there were slight non-uniformities in this distribution of matter: certain regions were denser than others. According to Newton's law of gravity, since the regions of space with denser matter distributions have more mass packed into them than the surrounding regions of space with comparatively rarefied distributions of matter, it follows that the denser mass distributions will exert greater gravitational forces than the rarefied distributions of mass. Thus, the matter in the rarefied regions will tend to be drawn to those denser regions. As time rolls forward, the dense regions become denser and denser, and the matter in the rarefied regions becomes more and more sparse. After many eons, the first generations of stars and galaxies were formed through this process. And smaller conglomerates of matter grew bigger and bigger, pulling in more and more matter, until eventually the first planets were formed.

Not only has Newton's law of gravity taught us a great deal about the processes which led to the formation of galaxies, stars, and planets, but, together with Newton's laws of motion, it has also taught us the fundamental nature of motion in the universe. These laws attempted to answer the question: why does everything move the way it does? Why, for example, does a thrown rock trace out a parabolic trajectory and eventually fall to the ground? Why does the Moon revolve around the Earth? According to Newton's law of gravity and laws of motion, the rock falls towards the ground; but, counterintuitively, the Moon is also falling towards the ground. Both the rock and the Moon fall towards the ground for the same reason and due to the same cause: the mass of the Earth is enormous, and it exerts a force on both the rock and the Moon which causes them to fall.
But why, unlike the rock, does the Moon never actually hit the ground? The answer is that the Earth is so massive that only objects traveling at fantastically high speeds can move fast enough for the Earth to curve away underneath them as they fall, thereby allowing them to avoid hitting the ground. But the strength of the gravitational force \(F_g\) exerted by a world (like the Earth) depends on how massive that world is. Humans will likely one day live on Mars' moon Phobos. This world has very little mass and thus exerts a much smaller gravitational force than the Earth does. Thus, an object falling towards Phobos could avoid hitting the ground while traveling at a much slower speed. According to Newton's laws, if you threw a small rock in a straight line parallel to Phobos's surface, that tiny velocity would be sufficient for the rock to travel in a circle all the way around Phobos and hit you in the back of the head.

Why do the planets revolve around the Sun? Or, more precisely, what causes them to go around the Sun? Rocks, or even a big rock like the Moon, can fall to the Earth because, according to the law of gravity, the mass of the Earth is so big that it exerts a vast force on rocks and the Moon, causing them to fall towards the Earth. The Sun is hundreds of thousands of times more massive than the Earth; so massive that, according to the law of gravity, it causes all of the planets to fall towards it. And because the planets are moving at just the right speed, they fall towards the Sun in the shape of a circle.

Newton's law of gravity and laws of motion have taught us some very surprising and unexpected things about the nature of motion and the universe. There is therefore little doubt why they are viewed as among the greatest achievements in human thought of all time.
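To put rough numbers on the Phobos example, here is a quick sketch in Python (the masses and radii are approximate reference values supplied for illustration; they do not come from the article) of the circular orbital speed \(v=\sqrt{GM/r}\) just above each surface:

import math

G = 6.67408e-11  # m^3 kg^-1 s^-2, as in Equation (2)

bodies = [
    ("Earth", 5.972e24, 6.371e6),   # approximate mass (kg) and mean radius (m)
    ("Phobos", 1.07e16, 1.1e4),
]
for name, mass, radius in bodies:
    v = math.sqrt(G * mass / radius)  # speed of a circular orbit at the surface
    print(f"{name}: about {v:,.0f} m/s")
# Earth: about 7,910 m/s; Phobos: about 8 m/s -- slow enough that a thrown
# rock really could circle Phobos, just as described above.

This article is licensed under a CC BY-NC-SA 4.0 license.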
Past Probability Seminars Spring 2019

Thursdays in 901 Van Vleck Hall at 2:25 PM, unless otherwise noted. We usually end for questions at 3:15 PM. If you would like to sign up for the email list to receive seminar announcements, please send an email to join-probsem@lists.wisc.edu.

January 31, Oanh Nguyen, Princeton

Title: Survival and extinction of epidemics on random graphs with general degrees

Abstract: We establish the necessary and sufficient criterion for the contact process on Galton-Watson trees (resp. random graphs) to exhibit the phase of extinction (resp. short survival). We prove that the survival threshold $\lambda_1$ for a Galton-Watson tree is strictly positive if and only if its offspring distribution has an exponential tail, settling a conjecture by Huang and Durrett. On the random graph with degree distribution $D$, we show that if $D$ has an exponential tail, then for small enough $\lambda$ the contact process with the all-infected initial condition survives for polynomial time with high probability, while for large enough $\lambda$ it runs over exponential time with high probability. When $D$ is subexponential, the contact process typically displays long survival for any fixed $\lambda>0$. Joint work with Shankar Bhamidi, Danny Nam, and Allan Sly.

Wednesday, February 6 at 4:00pm in Van Vleck 911, Li-Cheng Tsai, Columbia University

Title: When particle systems meet PDEs

Abstract: Interacting particle systems are models that involve many randomly evolving agents (i.e., particles). These systems are widely used in describing real-world phenomena. In this talk we will walk through three facets of interacting particle systems, namely the law of large numbers, random fluctuations, and large deviations. Within each facet, I will explain how Partial Differential Equations (PDEs) play a role in understanding the systems.

February 7, Yu Gu, CMU

Title: Fluctuations of the KPZ equation in $d\geq 2$ in a weak disorder regime

Abstract: We will discuss some recent work on the Edwards-Wilkinson limit of the KPZ equation with a small coupling constant in $d\geq 2$.

February 14, Timo Seppäläinen, UW-Madison

Title: Geometry of the corner growth model

Abstract: The corner growth model is a last-passage percolation model of random growth on the square lattice. It lies at the nexus of several branches of mathematics: probability, statistical physics, queueing theory, combinatorics, and integrable systems. It has been studied intensely for almost 40 years.
This talk reviews properties of the geodesics, Busemann functions and competition interfaces of the corner growth model, and presents some new qualitative and quantitative results. Based on joint projects with Louis Fan (Indiana), Firas Rassoul-Agha and Chris Janjigian (Utah).

February 21, Diane Holcomb, KTH

Title: On the centered maximum of the Sine beta process

Abstract: There has been a great deal of recent work on the asymptotics of the maximum of characteristic polynomials of random matrices. Other recent work studies the analogous result for log-correlated Gaussian fields. Here we will discuss a maximum result for the centered counting function of the Sine beta process. The Sine beta process arises as the local limit in the bulk of a beta-ensemble, and was originally described as the limit of a generalization of the Gaussian Unitary Ensemble by Valko and Virag, with an equivalent process identified as a limit of the circular beta ensembles by Killip and Stoiciu. A brief introduction to the Sine process as well as some ideas from the proof of the maximum will be covered. This talk is on joint work with Elliot Paquette.

Probability related talk in PDE Geometric Analysis seminar: Monday, February 22, 3:30pm to 4:30pm, Van Vleck 901, Xiaoqin Guo, UW-Madison

Title: Quantitative homogenization in a balanced random environment

Abstract: Stochastic homogenization of discrete difference operators is closely related to the convergence of random walk in a random environment (RWRE) to its limiting process. In this talk we discuss non-divergence form difference operators in an i.i.d. random environment and the corresponding process: a random walk in a balanced random environment on the integer lattice Z^d. We first quantify the ergodicity of the environment viewed from the point of view of the particle. As consequences, we obtain algebraic rates of convergence for the quenched central limit theorem of the RWRE and for the homogenization of both elliptic and parabolic non-divergence form difference operators. Joint work with J. Peterson (Purdue) and H. V. Tran (UW-Madison).

Wednesday, February 27 at 1:10pm, Jon Peterson, Purdue

Title: Functional Limit Laws for Recurrent Excited Random Walks

Abstract: Excited random walks (also called cookie random walks) are a model for self-interacting random motion where the transition probabilities depend on the local time at the current location. While self-interacting random walks are typically very difficult to study, many results for (one-dimensional) excited random walks are remarkably explicit. In particular, one can easily (by hand) calculate a parameter of the model that will determine many features of the random walk: recurrence/transience, non-zero limiting speed, limiting distributions and more. In this talk I will prove functional limit laws for one-dimensional excited random walks that are recurrent. For certain values of the parameters in the model, the random walks under diffusive scaling converge to a Brownian motion perturbed at its extremum. This was known previously for the case of excited random walks with boundedly many cookies per site, but we are able to generalize this to excited random walks with periodic cookie stacks. In this more general case, it is much less clear why perturbed Brownian motion should be the correct scaling limit. This is joint work with Elena Kosygina.

March 21, Spring Break, No seminar

March 28, Shamgar Gurevitch, UW-Madison

Title: Harmonic Analysis on GLn over finite fields, and Random Walks

Abstract: There are many formulas that express interesting properties of a group G in terms of sums over its characters.
For evaluating or estimating these sums, one of the most salient quantities to understand is the character ratio: $$ \text{trace}(\rho(g))/\text{dim}(\rho), $$ for an irreducible representation $\rho$ of G and an element g of G. For example, Diaconis and Shahshahani stated a formula of this type for analyzing G-biinvariant random walks on G. It turns out that, for classical groups G over finite fields (which provide most examples of finite simple groups), there is a natural invariant of representations that provides strong information on the character ratio. We call this invariant rank. This talk will discuss the notion of rank for $GL_n$ over finite fields, and apply the results to random walks. This is joint work with Roger Howe (Yale and Texas A&M).

April 4, Philip Matchett Wood, UW-Madison

Title: Outliers in the spectrum for products of independent random matrices

Abstract: For fixed positive integers m, we consider the product of m independent n by n random matrices with iid entries in the limit as n tends to infinity. Under suitable assumptions on the entries of each matrix, it is known that the limiting empirical distribution of the eigenvalues is described by the m-th power of the circular law. Moreover, this same limiting distribution continues to hold if each iid random matrix is additively perturbed by a bounded rank deterministic error. However, the bounded rank perturbations may create one or more outlier eigenvalues. We describe the asymptotic location of the outlier eigenvalues, which extends a result of Terence Tao for the case of a single iid matrix. Our methods also allow us to consider several other types of perturbations, including multiplicative perturbations. Joint work with Natalie Coston and Sean O'Rourke.

April 11, Eviatar Procaccia, Texas A&M

Title: Stabilization of Diffusion Limited Aggregation in a Wedge

Abstract: We prove a discrete Beurling estimate for the harmonic measure in a wedge in $\mathbf{Z}^2$, and use it to show that Diffusion Limited Aggregation (DLA) in a wedge of angle smaller than $\pi/4$ stabilizes. This allows one to consider the infinite DLA and questions about the number of arms, growth and dimension. I will present some conjectures and open problems.

April 18, Andrea Agazzi, Duke

Title: Large Deviations Theory for Chemical Reaction Networks

Abstract: The microscopic dynamics of well-stirred networks of chemical reactions are modeled as jump Markov processes. At large volume, one may expect in this framework to have a straightforward application of large deviation theory. This is not at all true, for the jump rates of this class of models are typically neither globally Lipschitz, nor bounded away from zero, with both blowup and absorption as quite possible scenarios. In joint work with Amir Dembo and Jean-Pierre Eckmann, we utilize Lyapunov stability theory to bypass these challenges and to characterize a large class of network topologies that satisfy the full Wentzell-Freidlin theory of asymptotic rates of exit from domains of attraction. Under the assumption of positive recurrence these results also allow for the estimation of transition times between metastable states of this class of processes.
April 25, Kavita Ramanan, Brown

Title: Beyond Mean-Field Limits: Local Dynamics on Sparse Graphs

Abstract: Many applications can be modeled as a large system of homogeneous interacting particle systems on a graph in which the infinitesimal evolution of each particle depends on its own state and the empirical distribution of the states of neighboring particles. When the graph is a clique, it is well known that the dynamics of a typical particle converges in the limit, as the number of vertices goes to infinity, to a nonlinear Markov process, often referred to as the McKean-Vlasov or mean-field limit. In this talk, we focus on the complementary case of scaling limits of dynamics on certain sequences of sparse graphs, including regular trees and sparse Erdos-Renyi graphs, and obtain a novel characterization of the dynamics of the neighborhood of a typical particle. This is based on various joint works with Ankan Ganguly, Dan Lacker and Ruoyu Wu.

Friday, April 26, Colloquium, Van Vleck 911 from 4pm to 5pm, Kavita Ramanan, Brown

Title: Tales of Random Projections

Abstract: The interplay between geometry and probability in high-dimensional spaces is a subject of active research. Classical theorems in probability theory such as the central limit theorem and Cramer’s theorem can be viewed as providing information about certain scalar projections of high-dimensional product measures. In this talk we will describe the behavior of random projections of more general (possibly non-product) high-dimensional measures, which are of interest in diverse fields, ranging from asymptotic convex geometry to high-dimensional statistics. Although the study of (typical) projections of high-dimensional measures dates back to Borel, only recently has a theory begun to emerge, which in particular identifies the role of certain geometric assumptions that lead to better behaved projections. A particular question of interest is to identify what properties of the high-dimensional measure are captured by its lower-dimensional projections. While fluctuations of these projections have been studied over the past decade, we describe more recent work on the tail behavior of multidimensional projections, and associated conditional limit theorems.

Tuesday, May 7, Van Vleck 901, 2:25pm, Duncan Dauvergne (Toronto)

Title: The directed landscape

Abstract: I will describe the construction of the full scaling limit of (Brownian) last passage percolation: the directed landscape. The directed landscape can be thought of as a random scale-invariant `directed' metric on the plane, and last passage paths converge to directed geodesics in this metric. The directed landscape is expected to be a universal scaling limit for general last passage and random growth models (i.e. TASEP, the KPZ equation, the longest increasing subsequence in a random permutation). Joint work with Janosch Ortmann and Balint Virag.
I am trying to find necessary and sufficient conditions for a nondegenerate lattice in one of the real division algebras $\mathbb{K}$ to admit the structure of a ring with identity (alternative algebra with identity in the case of $\mathbb{K} = \mathbb{O}$), with addition, multiplication and their identities the same as in $\mathbb{K}$. I have found these conditions for the simplest cases $\mathbb{K} = \mathbb{R}$ and $\mathbb{K} = \mathbb{C}$ (if I haven't made any mistake), but I am having trouble determining them for the other two cases; see the section below for more details. In summary, my question is: What are the necessary and sufficient conditions for a nondegenerate lattice in $\mathbb{H}$ or $\mathbb{O}$ to also be an alternative algebra with identity? I would also appreciate any reference that deals with this question. Additional details The cases of $\mathbb{R}$ and $\mathbb{C}$ are easy: the only lattice in the reals which is also a ring is $\mathbb{Z}$, while in $\mathbb{C}$ it is easy to see that the lattice is generated by $1$ and the element with smallest norm that is not an integer, say $a$. Then, since $a^2$ is in the ring and hence in the lattice, we have $$a^2 = m\cdot a + n\cdot 1,$$ with $m, n$ integers. This is a monic quadratic equation, which means that the lattice can only be an order in the quadratic imaginary field $\mathbb{Q}[\sqrt{-d}]$, with $d$ positive (else the lattice would be degenerate). Since all orders of this form generate lattices in $\mathbb{C}$ we are done. Two well-known examples with exceptionally large unit group are the Gaussian and Eisenstein integers $\mathbb{Z}[i]$ and $\mathbb{Z}[\omega]$ with $\omega = (1+\sqrt{3}i)/2$; they generate the square and hexagonal lattices respectively. Over the quaternions and octonions, we can again choose a basis for the lattice consisting of $1$, an element $a_1$ of smallest norm that is not a multiple of $1$, an element $a_2$ of smallest norm that is not in the plane spanned by $1$ and $a_1$, and so on. Each one of the basis elements (excluding $1$) satisfies a monic quadratic equation like the one above, since $a_i^2$ is inside $\mathbb{R}[a_i]$ (any 2D slice of $\mathbb{H}$ or $\mathbb{O}$ containing $\mathbb{R}$ is isomorphic to the complex plane). This again forces the $a_i$ to be quadratic integers, though not necessarily with the same discriminant. In the case of $\mathbb{H}$, this condition is necessary but not sufficient for the lattice to be a ring, since the product of distinct basis elements $a_i a_j$ is in the lattice and thus imposes new conditions relating their respective discriminants; for example, from the expansion of $a_1a_2$ as a combination of basis elements, I found that the cosine of the angle between $\Im(a_1)/|\Im (a_1)|$ and $\Im(a_2)/|\Im (a_2)|$ (thought of as vectors in the unit sphere $S^2$ of purely imaginary elements) must be in $\mathbb{Q}[\sqrt{d_1 d_2}]$, where $d_1$ and $d_2$ are the respective discriminants of $a_1$ and $a_2$ as quadratic integers. At this point the algebra gets messy and I haven't been able to find all the necessary conditions. My current guess is that all ring-lattices come from orders in quaternion algebras $\mathbb{Q}[\alpha, \beta]$ with $\alpha^2=-d_1, \beta^2=-d_2, \alpha\beta=-\beta\alpha$ (they are the only working examples I have been able to produce), but I know very little about these algebras and I haven't so far managed to prove whether this is the case. 
Perhaps this result is well known, but I haven't found it anywhere I have looked (though maybe I am missing the correct terms to look for). Examples with exceptional symmetry are the Hurwitz and duoprismatic integers, spanned (up to conjugacy) by $1,i,j,\frac{1+i+j+k}{2}$ and $1,\omega, j, j\omega$. They generate the 24-cell and 6,6-duoprismatic lattices respectively. In the case of $\mathbb{O}$, every 4D slice containing $\mathbb{R}$ is isomorphic to the quaternions, but I again expect new conditions that paste together these copies of $\mathbb{H}$ to form an octonionic lattice structure. Here I don't even know where to start. As an example, Conway and Smith's book On Quaternions and Octonions describes the "octavian integers", which generate a copy of the $E_8$ lattice.

Possibly useful facts

Here are some facts I found that could be useful in answering the question.

- First, since in all cases the basis elements are quadratic integers, the real part of any element of the lattice is in $\frac12\mathbb{Z}$.
- This implies that the conjugate $\bar{a}=2\Re(a)-a$ is in the lattice, and thus the squared norm $|a|^2 = a\bar{a}$ is in the intersection of the lattice and $\mathbb{R}_{>0}$, i.e. a positive integer. This means that every lattice element is a quadratic integer, not just the basis elements, since every number in $\mathbb{K}$ satisfies the equation $a^2 = 2\Re(a)\cdot a - |a|^2 \cdot 1$.
- Thanks to the polarization identity, the previous fact implies that the scalar product of any two elements, if we think of the division algebra as a Euclidean vector space over the reals, must be in $\frac12\mathbb{Z}$.
- The commutator of two lattice elements and the associator of three lattice elements must also belong to the lattice, as they can be expressed as sums of products.
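For concreteness, here is a small numerical sketch (mine, not part of the question) that checks the closure property for the Hurwitz integers: it multiplies all pairs of the basis elements $1, i, j, \frac{1+i+j+k}{2}$ and verifies that each product has integer coordinates in that basis, i.e. lands back in the lattice.

```python
import numpy as np
from itertools import product

def qmul(p, q):
    """Quaternion product, components ordered (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

# Hurwitz basis: 1, i, j, (1+i+j+k)/2, as coordinate rows in (w, x, y, z).
B = np.array([
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.5, 0.5, 0.5, 0.5],
])

closed = True
for s, t in product(range(4), repeat=2):
    coords = np.linalg.solve(B.T, qmul(B[s], B[t]))  # coordinates in the basis
    if not np.allclose(coords, np.round(coords)):    # integer <=> in the lattice
        closed = False
print(closed)  # True: products of basis elements stay in the Hurwitz lattice
```

Closure of the basis products is exactly what bilinearity needs for the whole lattice to be a ring, so a check like this is a quick filter when hunting for further examples.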
One shouldn't expect to have a "good" formula for the local isometric embeddings of a constant negative curvature surface in Euclidean $\mathbb{R}^3$. This is due to a little theorem proved by David Hilbert around 1901:

Theorem. There does not exist a smooth immersion of the hyperbolic plane into Euclidean 3-space.

The theorem has been further studied in the years following. In the 1960s, Efimov showed that any complete surface with curvature strictly bounded above by a negative constant (that is to say, if there exists a negative number $K_0 < 0$ such that the Gaussian curvature is always strictly less than $K_0$) cannot admit a smooth (twice continuously differentiable) isometric immersion into Euclidean 3-space. That is to say, if you try to "extend" any surface in Euclidean 3-space that has constant negative curvature, you are guaranteed to hit a singularity. In particular, you cannot expect the surface to be described by $F(x,y,z) = 0$ where $F(x,y,z)$ has a nice algebraic expression (say, polynomial) and has smooth level sets.

Typically the image one usually uses to illustrate the notion of negative (but not constant) curvature is the graph $$ z = x^2 - y^2 $$ which produces a classical saddle, or the catenoid, whose Gaussian curvature, while everywhere negative, is not constant. (Though it has constant [in fact everywhere vanishing] mean curvature.)

Lastly, however, despite the above, it is possible to embed "patches" of the hyperbolic plane into Euclidean 3-space. There are many ways of doing so (one can search for the term pseudosphere; though some people use the same term for the hyperboloid/de Sitter spaces embedded in higher-dimensional Minkowski space), but one of the more well-known is the tractricoid. (See the Wiki entry here.) Parametrically in cylindrical coordinates $(z,r,\theta)$ the surface can be described by: $$ \mathbb{R}_+\times\mathbb{S}^1 \ni (t,\omega) \mapsto \left(z = t-\tanh t,\; r=\frac{1}{\cosh t},\; \theta = \omega\right) $$ and has constant Gaussian curvature $-1$.
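One can verify the constant-curvature claim symbolically; the following is a quick sympy sketch (my check, not from the original text) that computes the Gaussian curvature of the tractricoid from the first and second fundamental forms:

```python
import sympy as sp

t, w = sp.symbols('t w', positive=True)
# Tractricoid: radius sech(t), height t - tanh(t), angle w.
X = sp.Matrix([sp.sech(t)*sp.cos(w), sp.sech(t)*sp.sin(w), t - sp.tanh(t)])

Xt, Xw = X.diff(t), X.diff(w)
E, F, G = Xt.dot(Xt), Xt.dot(Xw), Xw.dot(Xw)    # first fundamental form

n = Xt.cross(Xw)
n = n / sp.sqrt(n.dot(n))                        # unit normal
L = X.diff(t, 2).dot(n)                          # second fundamental form
M = Xt.diff(w).dot(n)
N = X.diff(w, 2).dot(n)

K = sp.simplify((L*N - M**2) / (E*G - F**2))     # Gaussian curvature
print(K)  # should simplify to -1 for t > 0
```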
I'm stuck with this problem. I've got the following matrix: $$A = \begin{bmatrix} 4& 6 & 10\\ 3& 10 & 13\\ -2&-6 &-8 \end{bmatrix}$$ Subtracting $\lambda$ along the diagonal gives the matrix $A - \lambda I$: $$\begin{bmatrix} 4 - \lambda& 6 & 10\\ 3& 10 - \lambda & 13\\ -2&-6 & -8 - \lambda \end{bmatrix}$$ I'm looking for the roots of the characteristic polynomial, i.e. the values of $\lambda$ for which $\det(A - \lambda I) = 0$. I can do this with pen and paper, but I want to make this into an algorithm which can work on any given 3x3 matrix. I can compute the determinant by cofactor expansion along the first row; the first term is $$ (4 - \lambda) \begin{vmatrix} 10 - \lambda&13 \\ -6 & -8 - \lambda \end{vmatrix} = (4-\lambda)\left[ (10 - \lambda)(-8 - \lambda) - (-6 \cdot 13) \right], $$ and I repeat this process for the other two entries of the first row ($6$ and $10$). Watching this video: here the guy factorises the sum of the three cofactor terms into the equation $$ \lambda (\lambda^{2} - 6\lambda+8) = 0$$ and then finds the roots $0$, $2$, $4$, which I understand perfectly. Now, putting this into code and factorising the equation would prove to be difficult. So, I'm asking whether or not there is a simple way to calculate the determinant (using the method given here) and find the roots without having to factorise the equation. My aim is to be left with the 3 roots of the characteristic polynomial.
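One possible approach (a minimal Python sketch, not from the original post; the helper name char_poly_coeffs is made up): assemble the characteristic polynomial of a 3x3 matrix directly from the trace, the sum of the principal 2x2 minors, and the determinant, then hand the cubic to a numerical root finder instead of factorising symbolically.

```python
import numpy as np

def char_poly_coeffs(A):
    """Coefficients of l^3 - c2*l^2 + c1*l - c0, whose roots are the eigenvalues of A."""
    A = np.asarray(A, dtype=float)
    c2 = np.trace(A)                      # sum of the eigenvalues
    # Sum of the principal 2x2 minors (delete row k and column k).
    c1 = sum(np.linalg.det(np.delete(np.delete(A, k, 0), k, 1)) for k in range(3))
    c0 = np.linalg.det(A)                 # product of the eigenvalues
    return [1.0, -c2, c1, -c0]

A = [[4, 6, 10], [3, 10, 13], [-2, -6, -8]]
coeffs = char_poly_coeffs(A)
print(np.roots(coeffs))          # numerical roots of the cubic: 4, 2, 0
print(np.linalg.eigvals(A))      # sanity check: the eigenvalues directly
```

np.roots finds the roots of the cubic numerically (internally via a companion matrix), so no symbolic factorisation is needed; for this matrix it returns 4, 2 and 0.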
In the current model, during inflation $H$ remains nearly constant, where $ H = \frac{ \dot{a}}{a} $, but the scale factor $a$ grows exponentially, which requires a large number of e-folds $N$, where $ N = \ln\frac{a(t_f)}{a(t_i)} $. But according to Loop Quantum Cosmology, in super-inflation a smaller number of e-folds is required. Does anyone know of an accessible published article that shows how you need fewer e-folds? The one I read, which can be accessed here, says you have a smaller $N$, but it doesn't explain very well how the Hubble rate $H$ increases rapidly while the scale factor $a$ remains nearly constant, and it gives no estimate for $N$. I need an article that I can cite. This is for a class final project in my upper-level undergrad course.

I found an article by E. J. Copeland, D. J. Mulryne, N. J. Nunes, M. Shaeri that explains this; it's called Super-inflation in Loop Quantum Cosmology. This is part of the answer I wrote; most of the equations come from this article unless I cite otherwise.

According to Loop Quantum Cosmology, in super-inflation a smaller number of e-folds is required. This is due to the equations which describe the inflaton field in LQC, as well as their solutions where scale invariance occurs. During inflation a modified Friedmann equation is given by $ H^2 = \left(\frac{ \dot{a}}{a}\right)^2 = \frac{8\pi G}{3}S\left(\frac{\dot{\phi}^2 }{2D} + V(\phi)\right) $ [eq. 1, abs/0708.1261], and they set $8\pi G = 1$. The variable $ \rho $ is defined to be $ \rho = \frac{\dot{\phi}^2 }{2D} + V(\phi) $. A set of complicated differential equations describes the behavior of the system; the derivations and solutions to these equations are outside the scope of my answer and can be found in the referenced article. $D(a)$ is approximated by $D_* a^n$, where $n = \frac{3(3-l)}{1-l}$, and $S(a)$ is approximated by $S_* a^r$, with both $S_* \approx D_* \approx 1$. $\alpha$ is defined as $\alpha = 1- \frac{n}{6} < 0$, and we define the variables $ V_\phi = \frac{dV}{d\phi}$ and $ \lambda = -\frac{\sqrt{D}}{\sqrt{S}}\frac{V}{V_\phi} $ [sec. A, abs/0708.1261]. The solution for the inflaton field $\phi$ is given by $\phi = \frac{2\lambda}{\alpha(n-r)}\frac{\sqrt{D}}{\sqrt{S}} $, in units of energy [eq. 15, abs/0708.1261]. The field potential is defined as $V = V_0\phi^{\beta}$, where $\beta = \frac{-2\lambda^2}{\alpha(n-r)} > 0 $, in units of energy density [eq. 16, abs/0708.1261]. The scale factor during inflation is then expressed in the form $a(\tau) = (-\tau)^p$; "for an expanding universe $\tau$ is negative and increasing towards zero" [p. 3, eq. 18, abs/0708.1261]. The CMB inflationary fluctuations have the property of being scale invariant, so to reproduce them we must also produce scale-invariant fluctuations in LQC [pg. 274, Introduction to Cosmology, Barbara Ryden].
Scale invariance occurs "whenever $p \to 0$, which, as we referred to, does indeed imply that $ \overline{\epsilon} \ll 1 $ and consequently $V < 0$". The parameter $ \overline{\epsilon}$ is given as $ \overline{\epsilon} = \frac{\lambda^2}{2} $ [pg. 4, abs/0708.1261], and $p$ is given as $ p = \frac{2\alpha}{2\overline{\epsilon} -\alpha(2 + r)} $ [eq. 9, abs/0708.1261]. $N = -68p$, and since $p$ must be small and negative to have scale invariance as stated above, we thus have fewer e-folds required [pg. 274, Introduction to Cosmology, Barbara Ryden]. $H$, in contrast to standard inflation, is not constant: $ \dot{H} \neq 0 $. The Hubble rate is related to the inflaton field via $\dot{H} = -\frac{\dot{\phi}^2}{2}\left(1- \frac{\rho}{2\sigma}\right) $ [eq. 44, abs/0708.1261], where "$2\sigma$ represents the critical energy density arising from quantum geometry effects which leads to the scale factor undergoing a bounce as $\rho$ approaches it" [p. 6, abs/0708.1261]. As we can see, in order for LQC to reproduce the effects of inflation, the scale factor must remain nearly constant while the Hubble rate varies during the inflationary phase of the universe.
What is the measure, in degrees, of the acute angle formed by the hour hand and the minute hand of a 12-hour clock at 6:48? \(\text{let's use the navigational assignment of degrees to the clock, i.e. }\\ \text{12 is 0 degrees increasing as we move clockwise back to 360 degrees again at 12}\\ \phi_m = 360\dfrac{min}{60} = 360\dfrac{48}{60} = 360 \dfrac 4 5 = 72\cdot 4 = 288^\circ\\ \phi_h = 360 \dfrac{60hr + min}{720} = 360 \dfrac{60\cdot 6 + 48}{720} = \dfrac 1 2(360+48) = 204^\circ\\ |288^\circ - 204^\circ| = 84^\circ\). You can use the clock angle formula, which is \(|0.5\times (60\times H-11\times M)|\), where H is the number of hours and M is the number of minutes. \(|0.5\times (60\times 6-11\times 48)| = |0.5\times (360-528)| = |0.5\times (-168)|=|-84|=84\) Therefore, the angle between the hands is 84 degrees. You are very welcome! :P The shortened version of the formula is: \(|30H-5.5M|\), where \(H\) stands for hours and \(M\) stands for minutes. Thus, we have \(|30(6)-5.5(48)|=|180-264|=\boxed{84}\) degrees.
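The clock-angle formula drops straight into code; here is a tiny Python sketch (mine, not from the thread) that also folds reflex angles back below 180 degrees:

```python
def clock_angle(hour, minute):
    """Angle in degrees between the hands of a 12-hour clock."""
    angle = abs(30 * hour - 5.5 * minute)  # |30H - 5.5M|
    return min(angle, 360 - angle)         # take the smaller of the two arcs

print(clock_angle(6, 48))  # 84.0
print(clock_angle(12, 0))  # 0.0
```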
The lower bound argument I gave for $\sqrt{n}$ points in a square works here, too. I have tried to simplify it. The idea is to use the union bound: The probability that a random path with $m=\lfloor \sqrt[3]{n} \rfloor$ steps has length less than some constant is small. It is so small that the expected number of ways to choose a short path is less than $1$, so the probability is less than $1$, in fact with probability going to $1$ there is no such short path using $m$ vertices out of $n$. It helps to suppose the points are chosen from a fine lattice $\{1/\ell,2/\ell,...\}^3$ with $\ell \gg m$, and to estimate the probability that the sum of the $L^1$ distances along the path is small rather than the $L^2$ distance. Pushing the points to be on a fine lattice does not change the length much, less than $2\sqrt{3}m/\ell$. The count of sequences of $m$ nonnegative steps of total $L^1$ length up to $d$ is the number of ways of distributing $d$ objects among $3m+1$ categories, $d+3m \choose 3m$. There are at most $2^{3m}$ choices of signs. So, the probability that a random path with $m$ steps has $L^1$ length at most $c \ell$ is at most $2^{3m}{c \ell +3m \choose 3m} /\ell^{3m} \le \frac{2^{3m}(c \ell + 3m)^{3m}}{(3m)!\ell^{3m}} = \frac{(2c+6m/\ell)^{3m}}{(3m)!}$. If $c \ell \gt 3m$ we can estimate this as less than $\frac{(4c)^{3m}}{(3m)!}$. (Actually, we don't need to accept this factor of $2$.) Using $x! \gt (x/3)^x$, this is less than $\left(\frac{4c}{m}\right)^{3m}$. The number of ways to choose an $m$ step path from $n$ points is at most $n\times(n-1)\times(n-2)...\times(n-m) \le n^{m+1} \approx m^{3m+3}.$ The expected number of paths with $m$ steps of $L^1$ length less than $c$ is at most $m^{3m+3}\left(\frac{4c}{m} \right)^{3m} =m^3 (4c)^{3m}$. So, if we choose $c \lt 1/4$ then as $m,n \to \infty$, the probability that there is a path with $L^1$ length smaller than $c$ goes to $0$. The Euclidean length is up to $\sqrt{3}$ times smaller than the $L^1$ length, so the probability that there is a path through $\sqrt[3]{n}$ points with Euclidean length less than $1/(4\sqrt{3})$ goes to $0$ as $n \to \infty$. The constant can be improved easily by a factor of $2$, and with a little work one can estimate the probability that the $L^2$ distance is small directly instead of the $L^1$ distance, and this should improve the constant significantly, too. The convenient estimate $x! \gt (x/3)^x$ can be improved to a lower bound of roughly $(x/e)^x$ from Stirling's formula which improves the constant a bit more.
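To see the union bound kick in numerically, one can just evaluate the final estimate $m^3 (4c)^{3m}$; a minimal Python sketch (my illustration, not part of the argument) shows it collapsing for any $c < 1/4$ and not at $c = 1/4$:

```python
# Expected number of short paths is at most m^3 * (4c)^(3m).
for c in (0.20, 0.24, 0.25):
    for m in (10, 50, 100):
        bound = m**3 * (4 * c) ** (3 * m)
        print(f"c={c:.2f}, m={m}: bound = {bound:.3e}")
# For c < 1/4 the bound tends to 0 as m grows; at c = 1/4 it equals m^3, which grows.
```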
Most often, the $S$-matrix is defined as an operator between asymptotic initial and final Hilbert spaces for a time-dependent scattering process, i.e. between $t\to-\infty$ and $t\to\infty$. There unitarity encodes conservation of probabilities over time. On the other hand, the book that OP mentions, Ref. 1, talks about a time-independent scattering process. For a discussion of the connection between time-dependent and time-independent scattering, see this Phys.SE question. In this answer we will only consider time-independent scattering. Ref. 1 defines for a 1D system (divided into three regions $I$, $II$, and $III$, with a localized potential $V(x)$ in the middle region $II$), a $2\times 2$ scattering matrix $S(k)$ as a matrix that tells how two asymptotic incoming (left- and right-moving) waves (of wave number $\mp k$ with $k>0$) are related to two asymptotic outgoing (left- and right-moving) waves. In formulas, $$\left. \psi(x) \right|_{I}~=~ \underbrace{A(k)e^{ikx}}_{\text{incoming right-mover}} + \underbrace{B(k)e^{-ikx}}_{\text{outgoing left-mover}}, \qquad k>0, \tag{1} $$$$\left. \psi(x)\right|_{III}~=~ \underbrace{F(k)e^{ikx}}_{\text{outgoing right-mover}} + \underbrace{G(k)e^{-ikx}}_{\text{incoming left-mover}}, \qquad\qquad\qquad\tag{2}$$ $$ \begin{pmatrix} B(k) \\ F(k) \end{pmatrix}~=~ S(k) \begin{pmatrix} A(k) \\ G(k) \end{pmatrix}.\tag{3}$$ To show that a finite-dimensional matrix $S(k)$ is unitary, it is enough to show that $S(k)$ is an isometry, $$ S(k)^{\dagger}S(k)~\stackrel{?}{=}~{\bf 1}_{2\times 2} \quad\Leftrightarrow\quad|A(k)|^2+ |G(k)|^2~\stackrel{?}{=}~|B(k)|^2+ |F(k)|^2,\tag{4}$$ or equivalently, $$ |A(k)|^2-|B(k)|^2 ~\stackrel{?}{=}~|F(k)|^2-|G(k)|^2.\tag{5} $$ Equation (5) can be justified by the following comments and reasoning. $\psi(x)$ is a solution to the time-independent Schrödinger equation (TISE)$$ \hat{H} \psi(x) ~=~ E \psi(x), \qquad \hat{H}~:=~\frac{\hat{p}^2}{2m}+V(x),\qquad \hat{p}~:=~\frac{\hbar}{i}\frac{\partial}{\partial x},\tag{6}$$ for positive energy $E>0$. The solution space for the Schrödinger eq. $(6)$, which is a second-order linear ODE, is a two-dimensional vectors space. It follows from eq. $(6)$ that the wave numbers $\pm k$, $$k ~:=~\frac{\sqrt{2mE}}{\hbar} ~\geq~ 0,\tag{7} $$ must be the same in the two asymptotic regions $I$ and $III$. This will imply that the $M$-matrix (to be defined below) and the $S$-matrix are diagonal in $k$-space. Moreover, it follows that there exists a bijective linear map $$ \begin{pmatrix} A(k) \\ B(k) \end{pmatrix} ~\mapsto~ \begin{pmatrix} F(k) \\ G(k) \end{pmatrix}.\tag{8} $$In Ref. 2, the transfer matrix $M(k)$ is defined as the corresponding matrix$$ \begin{pmatrix} F(k) \\ G(k) \end{pmatrix}~=~ M(k) \begin{pmatrix} A(k) \\ B(k) \end{pmatrix}.\tag9$$ The $S$-matrix $(3)$ is a rearrangement of eq. $(9)$. One may use the Schrödinger eq. $(6)$ (and the reality of $E$ and $V(x)$) to show that the Wronskian $$ W(\psi,\psi^{\ast})(x)~=~\psi(x)\psi^{\prime}(x)^{\ast}-\psi^{\prime}(x)\psi(x)^{\ast},\tag{10}$$or equivalently the probability current$$ J(x)~=~\frac{i\hbar}{2m} W(\psi,\psi^{\ast})(x),\tag{11}$$does not depend on the position $x$,$$ \frac{\mathrm dW(\psi,\psi^*)(x)}{\mathrm dx}~=~\psi(x)\psi^{\prime\prime}(x)^{\ast}-\psi^{\prime\prime}(x)\psi(x)^{\ast}~\stackrel{(6)}{=}~0.\tag{12}$$Unitarity (5) is equivalent to the statement that$$\left. W(\psi,\psi^*)\right|_{I}~=~\left. W(\psi,\psi^*) \right|_{III}.\tag{13}$$Ref. 3 mentions that eq. $(12)$ encodes conservation of energy in the scattering. 
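The unitarity statement (4) can also be checked numerically. Below is a small Python sketch (my illustration, not from the references): it builds the transfer matrix $M(k)$ for a square barrier by matching plane waves at the two interfaces, rearranges it into the $S$-matrix as in eq. (3), and verifies $S^{\dagger}S = {\bf 1}$. Units $\hbar = m = 1$ and the barrier parameters are arbitrary choices of mine.

```python
import numpy as np

def W(k, x):
    """Wave matrix: (psi, psi') = W(k, x) @ (A, B) for A e^{ikx} + B e^{-ikx}."""
    return np.array([[np.exp(1j*k*x),        np.exp(-1j*k*x)],
                     [1j*k*np.exp(1j*k*x), -1j*k*np.exp(-1j*k*x)]])

def s_matrix(E, V0=2.0, a=1.0):
    """S-matrix for a square barrier of height V0 on [0, a], with hbar = m = 1."""
    k = np.sqrt(2*E + 0j)          # wave number outside the barrier
    q = np.sqrt(2*(E - V0) + 0j)   # wave number inside (imaginary if E < V0)
    # Transfer matrix: (F, G) = M @ (A, B), from matching psi, psi' at x=0 and x=a.
    M = np.linalg.solve(W(k, a), W(q, a)) @ np.linalg.solve(W(q, 0), W(k, 0))
    # Rearrange (F, G) = M (A, B) into (B, F) = S (A, G), as in eq. (3).
    S = np.array([[-M[1, 0]/M[1, 1],                    1/M[1, 1]],
                  [M[0, 0] - M[0, 1]*M[1, 0]/M[1, 1],   M[0, 1]/M[1, 1]]])
    return S

for E in (0.5, 2.5, 10.0):   # below, just above, and well above the barrier
    S = s_matrix(E)
    print(E, np.allclose(S.conj().T @ S, np.eye(2)))  # True: S is unitary
```

The check passes both in the tunnelling regime ($E < V_0$) and above the barrier, which is the content of eq. (5): the probability current entering the potential region equals the current leaving it.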
References: D.J. Griffiths, Introduction to Quantum Mechanics; Section 2.7 in 1st edition from 1994 and Problem 2.52 in 2nd edition from 1999. D.J. Griffiths, Introduction to Quantum Mechanics; Problem 2.49 in 1st edition from 1994 and Problem 2.53 in 2nd edition from 1999. P.G. Drazin & R.S. Johnson, Solitons: An Introduction, 2nd edition, 1989; Section 3.2.
Some Tauberian conditions on logarithmic density

Advances in Difference Equations, volume 2019, Article number: 424 (2019)

Abstract

This article is based on the study of λ-statistical convergence with respect to the logarithmic density and the de la Vallee Poussin mean, and generalizes some results on logarithmic λ-statistical convergence and logarithmic \((V,\lambda )\)-summability theorems. Hardy’s and Landau’s Tauberian theorems for statistical convergence, which was introduced by Fast long back in 1951, have been extended by J.A. Fridy and M.K. Khan (Proc. Am. Math. Soc. 128:2347–2355, 2000) in recent years. In this article we try to generalize some Tauberian conditions on logarithmic statistical convergence and logarithmic \((V,\lambda )\)-statistical convergence, and we find some new results on them.

Introduction and preliminary concepts

In 1951, Fast [2] and Steinhaus [3] independently introduced the concept of statistical convergence for sequences of real numbers, and since then this concept has been generalized and investigated in different ways by different authors. Likewise, summability theory and convergence of sequences have been studied actively in the area of pure mathematics for the last several decades. Extensive works on the topic are applicable in topology, functional analysis, Fourier analysis, measure theory, applied mathematics, mathematical modeling, computer science, analytic number theory, etc. One may refer to [4,5,6,7,8,9], etc.

Let \(A \subseteq \mathbb{N}\) and \(A_{n}=\{\psi \leq n: \psi \in A \}\). We say that A has natural density \(\delta (A)= \lim_{n} \frac{1}{n} \vert A_{n} \vert \), if the limit exists, where \(\vert A_{n} \vert \) denotes the cardinality of \(A_{n}\). By the concept of statistical convergence, we mean that a sequence \(\tilde{x}=(x_{\psi })\) of real numbers statistically converges to ℓ if for every \(\varepsilon >0\) the set \(A_{\varepsilon }= \{ \psi \in \mathbb{N}:\vert x_{\psi }- \ell \vert \geq \varepsilon \}\) has natural density zero, i.e., for each \(\varepsilon >0\), \(\lim_{n} \frac{1}{n} \vert \{ \psi \leq n : \vert x_{\psi }- \ell \vert \geq \varepsilon \} \vert = 0\).

Let \(\lambda =(\lambda _{n})\) be a non-decreasing sequence of positive numbers tending to ∞ such that \(\lambda _{n+1} \leq \lambda _{n}+1\) and \(\lambda _{1}=0\). The generalized de la Vallee Poussin mean of a sequence \(\tilde{x}=(x _{\psi })\) is defined by \(T_{n}(x)=\frac{1}{\lambda _{n}} \sum_{\psi \in I_{n}} x_{\psi }\), where \(I_{n}=[n-\lambda _{n}+1,n]\). Now, a sequence \(\tilde{x}=(x_{\psi })\) is said to be \((V,\lambda )\)-summable to ℓ if \(T_{n}(x)\) converges to ℓ, i.e., \(\lim_{n} T_{n}(x) = \ell \). Also a sequence \(\tilde{x}=(x_{\psi })\) is said to be statistically λ-convergent to ℓ if, for every \(\varepsilon >0\), \(\lim_{n} \frac{1}{\lambda _{n}} \vert \{ \psi \in I_{n} : \vert x_{\psi }- \ell \vert \geq \varepsilon \} \vert = 0\).

By logarithmic density, we mean \(\delta _{\log _{n}}(E)=\frac{1}{\log _{n}} \sum_{\psi =1}^{n} \frac{\chi _{E}(\psi )}{\psi }\) for \(E \subseteq \mathbb{N}\), where \(\log _{n}= \sum_{\psi =1}^{n} \frac{1}{\psi } \approx \log n\), \(n \in \mathbb{N}\) [8]. A sequence \(\tilde{x}=(x_{\psi })\) is logarithmic statistically convergent to ℓ if, for every \(\varepsilon > 0\), \(\lim_{n} \frac{1}{\log _{n}} \sum_{\psi =1}^{n} \frac{\chi _{A_{\varepsilon }}(\psi )}{\psi } = 0\). A sequence \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically convergent to ℓ if, for every \(\varepsilon > 0\), \(\lim_{n} \frac{1}{\log _{\lambda _{n}}} \sum_{\psi \in I_{n},\, \vert x_{\psi } - \ell \vert \geq \varepsilon } \frac{1}{\psi } = 0\), where \(\log _{\lambda _{n}}= \sum_{\psi =1}^{\lambda _{n}} \frac{1}{\psi } \approx \log \lambda _{n}\) (\(n=1,2,3,\ldots\)).

Let \(\mu _{n} =\frac{1}{\log _{\lambda _{n}}} \sum_{\psi \in I_{n}} \frac{T_{\psi }(x)}{\psi }\), where \(\log _{\lambda _{n}}= \sum_{\psi =1}^{\lambda _{n}} \frac{1}{\psi } \approx \log \lambda _{n} \) (\(n=1,2,3,\ldots\)).
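As a quick numerical illustration of the logarithmic density defined above (a minimal Python sketch of mine, not from the paper), the set of multiples of 3 has logarithmic density 1/3, matching its natural density:

```python
def log_density(indicator, n):
    """Approximate the logarithmic density of {psi : indicator(psi)} up to n."""
    log_n = sum(1.0 / k for k in range(1, n + 1))              # harmonic sum ~ log n
    weighted = sum(1.0 / k for k in range(1, n + 1) if indicator(k))
    return weighted / log_n

# Multiples of 3: both the natural and the logarithmic density equal 1/3.
print(log_density(lambda k: k % 3 == 0, 10**6))  # ~0.3333
```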
A sequence \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-summable to ℓ if \((\mu _{n})\) is convergent to ℓ, i.e., \(\lim_{n} \frac{1}{\log _{\lambda _{n}}} \sum_{\psi \in I_{n}} \frac{ \vert T_{\psi }(x) -\ell \vert }{\psi } =0\). A sequence \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically summable to ℓ if \((\mu _{n})\) is λ-statistically convergent to ℓ. We denote this by \(st_{\log _{\lambda _{n}}}- \lim_{n} T_{n}=\ell \).

Moricz [10] studied Tauberian conditions under which statistical convergence follows from statistical summability \((C,1)\). Braha [11] extended these results using Tauberian conditions for λ-statistical convergence following from statistical summability \((V,\lambda )\). Braha [12] also explained the Tauberian theorems for the generalized Norlund–Euler summability method. One may refer to [13,14,15]. In this paper, we study the Tauberian theorems for logarithmic \((V,\lambda )\)-statistical convergence which follows from the de la Vallee Poussin mean. We also try to establish some results involving the logarithmic density.

Main results

Theorem 2.1 Let λ be a real-valued sequence defined in (1). Then:

1. If \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically summable to ℓ, then it is logarithmic \((V,\lambda )\)-statistically convergent to ℓ, provided \(\liminf_{n} \frac{\lambda _{n}}{n}>0\).

2. If \(\tilde{x}=(x_{\psi })\) is bounded, then logarithmic \((V,\lambda )\)-statistical convergence implies logarithmic \((V,\lambda )\)-statistical summability.

3. \(\varOmega (\log _{n},\lambda ) \cap \ell _{\infty }=\varPi (\log _{n},\lambda )\), where \(\varOmega (\log _{n},\lambda ) \) is the collection of all logarithmic \((V,\lambda )\)-statistically convergent sequences, \(\ell _{\infty }\) is the collection of all bounded sequences, and \(\varPi (\log _{n},\lambda )\) is the collection of all logarithmic \((V,\lambda )\)-summable sequences.

Proof (1) Let \(\tau _{n}= \{ \psi \in I_{n}: \frac{1}{\log _{\lambda _{n}}} \sum_{\psi \in I_{n}} \frac{1}{\psi } \vert T_{\psi }(x)-\ell \vert \geq \varepsilon \} \). Since \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically summable to ℓ, \(\tau _{n}\) is λ-statistically convergent to ℓ. Since \(\liminf_{n} \frac{\lambda _{n}}{n} >0\) and \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically summable to ℓ, by taking \(n \rightarrow \infty \) we get that \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically convergent to ℓ. This completes the proof. □

Proof (2) Let \(\tilde{x}=(x_{\psi })\) be bounded and logarithmic \((V,\lambda )\)-statistically convergent to ℓ. Then there exists \(M>0\) such that \(\vert x_{\psi }-\ell \vert \leq M\). Now, for any \(\varepsilon >0\), split the relevant sum over the set \(B(n)= \{\psi \in I_{n} : \frac{1}{\psi } \vert T_{\psi }(x)- \ell \vert \geq \varepsilon \}\) and its complement. If \(\psi \notin B(n)\), then \(K_{1}(n) < \varepsilon \). For \(\psi \in B(n)\), the contribution is bounded by a constant multiple of the logarithmic density of \(B(n)\), which tends to zero as \(n \rightarrow \infty \). Since the logarithmic density of \(B(n)\) is zero, we can say that \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically summable. This completes the proof. □

Proof (3) follows from the proofs of (1) and (2), so it is omitted here.
□

Tauberian theorems

Theorem 3.1 Let \((\lambda _{n})\) be a sequence of real numbers with \(st_{ \log _{\lambda _{n}}}- \liminf_{n} \frac{\lambda _{t_{n}}}{\lambda _{n}} >1 \) for all \(t>1\), where \(t_{n}\) denotes the integral part of \(t\cdot n\) for every \(n \in \mathbb{N}\), and let \((T_{\psi })\) be a sequence of real numbers such that \(st_{\log _{\lambda _{n}}}- \lim_{n} T_{n} =\ell \). Then \(\tilde{x}=(x_{\psi })\) is \(st_{ \log _{\lambda _{n}}}\)-convergent to ℓ iff the following two conditions hold.

Remark Let us suppose that these conditions are satisfied. Then for every \(t>1\) it follows that \(st_{\lambda }- \lim_{n} \frac{1}{\log _{(\lambda _{t_{\psi }}-\lambda _{\psi })}} \sum_{\psi =n+1}^{t_{n}} \frac{1}{\psi } x_{\psi }=0\) holds for \(t>1\); and for \(0< t<1\), we have \(st_{\lambda }- \lim_{n} \frac{1}{\lambda _{\psi }-\lambda _{t_{\psi }}} \sum_{\psi =t_{n}+1}^{n} \frac{x_{\psi }}{\psi }=0\).

Lemma 3.1

Lemma 3.2 If \(st_{\log _{\lambda _{n}}}- \lim_{n} x_{n} =\ell \) and \(st_{\log _{\lambda _{n}}}- \lim_{n} T_{n} =\ell \) are satisfied, and \(\tilde{x}=(x_{\psi })\) is a sequence of complex numbers which is logarithmic \((V,\lambda )\)-statistically convergent to ℓ, then for any \(t>1\), \(st_{\log _{\lambda _{n}}}- \lim_{n} T_{t_{n}} = \ell \).

Proof Case I: Suppose that \(t >1\). Then, from the construction of the sequence \(\lambda =(\lambda _{n})\) and for every \(\varepsilon >0\), following Eq. (3) we can say that \(st_{\log _{\lambda }}- \lim T_{t_{n}}=\ell \).

Case II: Now suppose that \(0< t<1\). From the definition of \(t_{n}=[t\cdot n]\), for any natural number n, we can conclude that \((T_{t_{n}})\) does not appear more than \([1+t^{-1}]\) times in the sequence \((T_{n})\). This yields an inequality which gives that \(st_{\log _{\lambda _{n}}}- \lim_{n} T_{t_{n}}=\ell \). □

Lemma 3.3 If \(st_{\log _{\lambda _{n}}}- \lim_{n} x_{n} =\ell\) and \(st_{\log _{\lambda _{n}}}- \lim_{n} T_{n} =\ell \) are satisfied and \(\tilde{x}=(x_{\psi })\) is logarithmic \((V,\lambda )\)-statistically convergent to ℓ, then the corresponding limsup relations hold.

Proof (i) Let us suppose that \(t>1\). From the definition of the sequence \((\lambda _{n})\) and logarithmic density, suppose that \(st_{\log _{\lambda _{n}}}- \lim_{n} \sup \sum_{j=n-\lambda _{n}+1}^{t_{n}} x_{j} =L\); then for every \(\varepsilon >0\) it follows that \(st_{\log _{\lambda _{n}}}- \lim_{n} \sup \sum_{j=t_{n}-\lambda _{t_{n}}+1}^{t_{n}} x_{j} =L\). Also, since \(st_{\lambda }- \lim_{n} \sup \frac{\lambda _{t_{n}}}{\lambda _{t_{n}}-\lambda _{n}} < \infty \) and \(st_{\lambda }- \lim_{n} \sup \frac{1}{\lambda _{t_{n}}-\lambda _{n}} < \infty \), we get the claim.

(ii) The case \(0< t<1\) is similar. This completes the proof. □

Following the above procedure, we can get the proof of Theorem 3.1.

Proof of Theorem 3.1 Let us suppose that \(st_{\log _{\lambda }}- \lim_{\psi }x_{\psi }=L\) and \(st_{\log _{\lambda }}- \lim_{\psi }T_{\psi }=\ell \). For every \(t>1\) we apply Lemma 3.2, and similarly for \(0< t<1\). Now assume that \(st_{\log _{\lambda }}- \lim_{n} T_{n} =\ell \) and the two conditions are satisfied. We have to prove that \(st_{\log _{\lambda }}- \lim_{n} x_{n} =\ell \), or equivalently \(st_{\log _{\lambda }}- \lim_{n} (T_{n}-x_{n})=0\).
Case I: If \(t>1\), then for any \(\varepsilon >0\) it follows from relation (9) that, for any arbitrary \(\gamma >0\), there exists \(t>1\) with the required property. Also, following Lemma 3.2 and the relations \(st_{\lambda }- \lim_{n} \sup \frac{\lambda _{t_{n}}}{\lambda _{t_{n}}-\lambda _{n}} < \infty \) and \(st_{\lambda }- \lim_{n} \sup \frac{1}{\lambda _{t_{n}}-\lambda _{n}} < \infty \), and combining these relations, since γ is arbitrary we conclude the claim for every \(\varepsilon >0\).

Case II: If \(0< t<1\), then for any \(\varepsilon >0\), proceeding in the same way as above, we get the result. This completes the proof of the theorem. □

Theorem 3.2 Let \((\lambda _{n})\) be a sequence of complex numbers which satisfies the stated condition, and also consider that \(st_{\log _{\lambda }}-\lim T_{\psi }=\ell \). Then \((x_{\psi })\) is \(st_{\log _{\lambda }}\)-statistically convergent to the same number ℓ if and only if the two stated conditions hold for every \(\varepsilon >0\).

Proof The proof can be obtained by following Theorem 3.1. □

Conclusion

In this paper, Tauberian conditions under which logarithmic statistical convergence follows from \((V,\lambda )\)-summability are studied. The Tauberian conditions can be further applied in probabilistic normed linear spaces with f-density. They can also be studied from the approximation theory point of view in more extended forms.

References

1. Fridy, J.A., Khan, M.K.: Statistical extensions of some classical Tauberian theorems. Proc. Am. Math. Soc. 128, 2347–2355 (2000)
2. Fast, H.: Sur la convergence statistique. Colloq. Math. 2, 241–244 (1951)
3. Steinhaus, H.: Sur la convergence ordinaire et la convergence asymptotique. Colloq. Math. 2, 73–84 (1951)
4. Savaş, E., Borgohin, S.: On strongly almost lacunary statistical A-convergence and lacunary A-statistical convergence. Filomat 30(3), 689–697 (2016). https://doi.org/10.2298/FIL1603689S
5. Fridy, J.A.: On statistical convergence. Analysis 24, 127–145 (2004)
6. Mursaleen, M., Alotaibi, A.: Statistical summability and approximation by de la Vallee-Poussin mean. Appl. Math. Lett. 24, 320–324 (2011)
7. Mursaleen, M.: λ-Statistical convergence. Math. Slovaca 50, 111–115 (2000)
8. Alghamdi, M.A., Mursaleen, M., Alotaibi, A.: Logarithmic density and logarithmic statistical convergence. Adv. Differ. Equ. 2013, 227 (2013). https://doi.org/10.1186/1687-1847-2013-227
9. Šalát, T.: On statistically convergent sequences of real numbers. Math. Slovaca 30, 139–150 (1980)
10. Moricz, F.: Tauberian conditions, under which statistical convergence follows from statistical summability (C,1). J. Math. Anal. Appl. 275, 277–287 (2002)
11. Braha, N.L.: Tauberian conditions under which λ-statistical convergence follows from statistical summability \((V,\lambda )\). Miskolc Math. Notes 16(20), 695–703 (2015)
12. Braha, N.L.: A Tauberian theorem for the generalized Norlund–Euler summability method. J. Inequal. Spec. Funct. 7(4), 137–142 (2016)
13. Belen, C.: Some Tauberian conditions obtained through weighted generator sequences. Georgian Math. J. 21(4), 407–413 (2014). https://doi.org/10.1515/gmj-2014-0040
14. Chen, C.P., Chang, C.T.: Tauberian conditions under which the original convergence of double sequences follows from the statistical convergence of their weighted means. J. Math. Anal. Appl. 332, 1242–1248 (2007)
15. Edeley, O., Mursaleen, M.: Tauberian theorems for statistically convergent double sequences. Inf. Sci. 176(7), 875–886 (2006). https://doi.org/10.1016/j.ins.2005.01.006
Acknowledgements The authors thank the referees for their constructive comments that improved the quality of the work.
(This is Problem 1.56 from Michael Sipser's Introduction to the Theory of Computation.) Let $A_k(S)= \{ w \mid w \text{ is the base-}k \text{ representation,}$ $\text{without leading 0s, of some natural number in the set } S\} $. Example: $A_2 (\{3,5\}) = \{11,101\} $. Let $P$ be the statement: $\exists S$, a set of natural numbers, such that $A_2(S)$ is regular and $A_3(S)$ isn't. Is $P$ true?

I think that $P$ is false. Let's consider two cases:

$S$ is finite: Write a (possibly very long) regular expression which is the union of all of the base-$k$ representation strings of the numbers in $S$, no matter what $k$ is.

$S$ is infinite: Construct an NFA $N$ that recognizes $A_k(S)$ with: $\text{For each symbol }e\in\Sigma\smallsetminus \{0\}, \space$ $\delta(q_{initial},e) = q_{begin},\space q_{begin} \in F $; $\text{for each symbol }e\in\Sigma,\space \delta(q_{begin},e) = q_e ,\space \delta(q_e,\varepsilon) = q_{begin},\space q_e \in F $.

Am I correct? (I think I'm not, but I'm stuck with this; maybe I don't get the definition of $A$ correctly.)
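To experiment with the definition, here is a small Python sketch (mine, not from the problem) that computes $A_k(S)$ for a finite set $S$; it reproduces the example $A_2(\{3,5\}) = \{11, 101\}$:

```python
def to_base(n, k):
    """Base-k representation of n without leading zeros (digits assumed < 10)."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, d = divmod(n, k)
        digits.append(str(d))
    return "".join(reversed(digits))

def A(S, k):
    """A_k(S): the set of base-k representations of the numbers in S."""
    return {to_base(n, k) for n in S}

print(A({3, 5}, 2))                     # {'11', '101'}
print(A({2**i for i in range(6)}, 2))   # powers of two in base 2: 1, 10, 100, ...
print(A({2**i for i in range(6)}, 3))   # the same set in base 3 looks far less regular
```

For an infinite $S$ such as the powers of two, $A_2(S)$ is the regular language $10^*$, while the base-3 strings show no obvious pattern; this is exactly the kind of set the problem is probing, and it suggests the NFA argument for infinite $S$ deserves a second look.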
This question already has an answer here: Let $A$ be a local ring, $M$ and $N$ finitely generated $A$-modules. Show that if $M \otimes_A N = 0$ then $M=0$ or $N=0$.

I read one proof from my book and it goes as follows: First, show that $(A/I)\otimes M \simeq M/IM$ by tensoring the exact sequence $0\to I \to A \to A/I \to 0 $ with $M$. Then, let $m$ be the maximal ideal, $k=A/m$ the residue field. Let $M_k := k\otimes_A M \simeq M/mM$. By Nakayama's lemma, $M_k = 0 \Rightarrow M=0$. But $M\otimes_A N = 0 \Rightarrow (M\otimes_A N)_k = 0 \Rightarrow M_k \otimes_k N_k =0 \Rightarrow M_k = 0$ or $N_k =0$.

I could not understand 2 parts of the above proof. First, how can we deduce $(A/I)\otimes M \simeq M/IM$ by tensoring the exact sequence $0\to I \to A \to A/I \to 0 $ with $M$? Is there another easy argument to show $(A/I)\otimes M \simeq M/IM$? Secondly, how is it possible that $(M\otimes_A N)_k = 0 \Rightarrow M_k \otimes_k N_k = 0$? Can anyone give me a clarification? Thank you in advance!
In a mixture model the probability of an event \(x\) is written \(P(x=X) =\sum_{i}\pi_{i}P_{i}(x=X) \), where \(\pi_{i}\) is the probability of the point belonging to mixture component \(i\) and \(P_{i}(x=X) \) is the probability of the event \(x=X\) under the \(i\)-th component. The problem is usually that the \(P_i\) are small, which makes underflow happen when you multiply many of them together or sum them up. Underflow is simply your computer not having enough precision to handle all those tiny numbers, so rounding errors happen. For example, \(1+\epsilon = 1\) if \(\epsilon\) is really small. To fix underflow one usually operates in the log domain, i.e. \(\log P(x=X) = \log [ \sum_{i}\pi_{i}P_{i}(x=X) ]\). The problem with this is that the log cannot decompose sums, so we still get underflow. To fix this (somewhat) we can write: \(\log P(x) = \log [ \sum_{i}\pi_{i}P_{i}(x=X) ] = \log [ \sum_{i} \exp\big\{ \log\pi_{i} + \log P_{i}(x=X) \big\} ]\)

Now by finding the max value of the different summands \(\log\pi_{i} + \log P_{i}(x=X) \) and subtracting it, we can pull most of the probability mass out of the equation. It is simple if one looks at the following calculations for a mixture model with \(2\) mixture components:

\(\log p = \log [p_1 + p_2] = \log [ \exp(\log p_1) + \exp(\log p_2) ]\)

\(pMax = \max [\log p_1 ,\log p_2 ]\)

\(\log p = \log[ \exp(pMax) \cdot ( \exp\big\{\log p_1 -pMax\big\} +\exp\big\{\log p_2-pMax\big\}) ]\)

\(\log p = pMax + \log [ \exp(\log p_1 - pMax) + \exp( \log p_2 - pMax) ]\)

Now if we for example assume \(\log p_1 > \log p_2\), then

\(\log p = pMax + \log [ \exp(0) + \exp\big\{\log p_2 -pMax\big\} ] = pMax + \log [ 1 + \exp\big\{\log p_2 - pMax\big\} ]\)

This means that we have pulled most of the probability mass out of the sum, and we have made the exponent in the exponential closer to zero and thus less extreme. This of course doesn't mean numerical issues are no longer a concern, but I believe we are more in the clear than before. Below is an implementation in Matlab. Instructions in the file.
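Since the Matlab file itself isn't reproduced here, below is a minimal Python sketch of the same log-sum-exp trick (my illustration, not the original implementation; the function name logsumexp_mixture is made up):

```python
import numpy as np

def logsumexp_mixture(log_pi, log_p):
    """log( sum_i pi_i * p_i ) computed stably from log(pi_i) and log(p_i)."""
    a = log_pi + log_p            # log of each summand: log(pi_i) + log(p_i)
    m = np.max(a)                 # pull the largest term out of the sum
    return m + np.log(np.sum(np.exp(a - m)))

# Two components with very tiny densities: the naive sum underflows to 0,
# but the log-domain computation survives.
log_pi = np.log(np.array([0.3, 0.7]))
log_p = np.array([-1000.0, -1002.0])            # log densities of order e^-1000
naive = np.sum(np.exp(log_pi) * np.exp(log_p))  # underflows
print(naive)                                    # 0.0
print(logsumexp_mixture(log_pi, log_p))         # ~ -1000.93, still finite
```

SciPy ships the same primitive as scipy.special.logsumexp, which can be used instead of the hand-rolled version.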
Cube (a three-dimensional figure) is a regular polyhedron with square faces. All edges are the same length. All faces are squares. Diagonal is a line from one vertex to another vertex that is non-adjacent. Circumscribed sphere of a polyhedron is a sphere that contains the polyhedron and touches each of the polyhedron's vertices. Inscribed sphere of a convex polyhedron is a sphere that is contained within the polyhedron and tangent to each of the polyhedron's faces. Midsphere of a polyhedron is a sphere that is tangent to every edge of the polyhedron. 12 edges 6 faces 8 vertices 12 face diagonals (2 per face) 4 space diagonals Circumscribed Sphere Radius of a Cube formula \(\large{ R = a \; \frac{ \sqrt {3} }{2} }\) Where: \(\large{ R }\) = circumscribed sphere radius \(\large{ a }\) = edge Circumscribed Sphere Volume of a Cube formula \(\large{ C_v = \frac{4}{3} \; \pi \; \left( a\; \frac{ \sqrt {3} }{2} \right) ^3 }\) Where: \(\large{ C_v }\) = circumscribed sphere volume \(\large{ a }\) = edge \(\large{ \pi }\) = Pi Edge of a Cube formula \(\large{ a = \sqrt { \frac { A_{surface} } { 6 } } }\) \(\large{ a = V^{1/3} }\) \(\large{ a = \sqrt { 3 } \; \frac { D' } {3} }\) Where: \(\large{ a }\) = edge \(\large{ A_{surface} }\) = surface area \(\large{ V }\) = volume \(\large{ D' }\) = space diagonal Face Area of a Cube formula \(\large{ A_{area} = a^2 }\) Where: \(\large{ A_{area} }\) = face area \(\large{ a }\) = edge Inscribed Radius of a Cube formula \(\large{ r = \frac{a}{2} }\) Where: \(\large{ r }\) = inscribed radius \(\large{ a }\) = edge Inscribed Sphere Volume of a Cube formula \(\large{ I_v = \frac{4}{3} \; \pi \; \left( \frac{ a }{2} \right) ^3 }\) Where: \(\large{ I_v }\) = inscribed sphere volume \(\large{ a }\) = edge \(\large{ \pi }\) = Pi Midsphere Radius of a Cube formula \(\large{ r_m = \frac{a}{2} \sqrt {2} }\) Where: \(\large{ r_m }\) = midsphere radius \(\large{ a }\) = edge Space Diagonal of a Cube formula \(\large{ D' = \sqrt {3} \;a }\) Where: \(\large{ D' }\) = space diagonal \(\large{ a }\) = edge Surface Area of a Cube formula \(\large{ A_{surface} = 6\;a^2 }\) Where: \(\large{ A_{surface} }\) = surface area \(\large{ a }\) = edge Surface to volume ratio of a Cube formula \(\large{ S_v = \frac{6}{a} }\) Where: \(\large{ S_v }\) = surface to volume ratio \(\large{ a }\) = edge Volume of a Cube formula \(\large{ V = a^3 }\) Where: \(\large{ V }\) = volume \(\large{ a }\) = edge
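As a quick numerical check of the formulas above (including the 4/3 factor in the sphere volumes, corrected here from 3/4), a short Python sketch with an example edge length:

import math

a = 2.0                                  # edge length (example value)
R  = a * math.sqrt(3) / 2                # circumscribed sphere radius
r  = a / 2                               # inscribed sphere radius
rm = a * math.sqrt(2) / 2                # midsphere radius
C_v = 4 / 3 * math.pi * R**3             # circumscribed sphere volume
I_v = 4 / 3 * math.pi * r**3             # inscribed sphere volume
V  = a**3                                # cube volume
A_surface = 6 * a**2                     # total surface area
D_space   = math.sqrt(3) * a             # space diagonal
print(r < rm < R)                        # True: inscribed < midsphere < circumscribed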
Short default answer: come up with an LBA which accepts the language and use the simulation used to prove that context-sensitive grammars and LBA define the same set of languages. But that is of course not what you are after. In this specific case, try to think of using a right-linear grammar for $\Sigma^*$ twice, one for the left and one for the right half. All you have to ensure is that both grammars derive "in sync". This can be done by swapping around a control token. That is to say, the left grammar picks a rule, generates the fitting control token and passes it to the right grammar. The right grammar sees the control token and executes the fitting rule. Note that you can also implement two-way communication in this way, but it's not necessary here. There is one problem with context-sensitive grammars: they can never delete non-terminals (except $S \to \varepsilon$ if the empty word is in the language). Therefore, we have to create only as many non-terminals as we are going to need; none can be redundant. One way to achieve this is to use the same trick as for certain proofs about LBA: generate all non-terminals you are going to need first, i.e. prepare the "tape". Later, "move around" on that tape. Only "at the end", replace all non-terminals with terminals. So let $G=(N, \Sigma, \delta, S)$ with $\Sigma = \{a,b\}$ (the construction readily extends to larger alphabets) and $N$, $\delta$ given by the following rules. $\qquad \begin{align} S &\to \hat{X}_l S' X_r \mid aaaa \mid abab \mid baba \mid bbbb \mid aa \mid bb \mid \varepsilon \\ S' &\to X_l S' X_r \mid X_l \hat{X}_r \end{align}$ are the rules for generating the "tape". Note that the hat denotes the "head position" and the indices $l,r$ denote which half of the word a non-terminal belongs to. The short words are generated directly in order to save some rules below. Now we need rules to derive one symbol in the left part: $\qquad \begin{align} \hat{X}_l X_l &\to X_\gamma \hat{X}^\gamma_l \\ \hat{X}_l X_\alpha &\to X_\gamma X^\gamma_\alpha \end{align}$ for all $(\alpha, \gamma) \in \Sigma^2$. Note how we use the upper index to carry the generated symbol to the right. $X_a$ and $X_b$ are "final" non-terminals which will only be used for moving the control token around and to derive terminals later. Note furthermore that the second rule is (only) used for the last symbol of the left half. For moving the carry to the right half, we have to move past both remaining $X_l$ and already generated $X_\alpha$: $\qquad \begin{align} \hat{X}^\gamma_l X_l &\to \hat{X}_l X^\gamma_l \\ \hat{X}^\gamma_l X_\alpha &\to \hat{X}_l X^\gamma_\alpha \\ X^\gamma_l X_l &\to X_l X^\gamma_l \\ X^\gamma_l X_\alpha &\to X_l X^\gamma_\alpha \\ X^\gamma_\alpha X_\beta &\to X_\alpha X^\gamma_\beta \end{align}$ for all $(\alpha, \beta, \gamma) \in \Sigma^3$. Now, once the carry reaches the right control token, we have to mimic the rule used on the left: $\qquad\begin{align} X^\gamma_l \hat{X}_r &\to X_l \hat{X}^\gamma_r \\ X^\gamma_\alpha \hat{X}_r &\to X_\alpha \hat{X}^\gamma_r \\ \hat{X}^\gamma_r X_r &\to X_\gamma \hat{X}_r \\ \hat{X}^\gamma_r &\to X_\gamma \end{align}$ for all $(\alpha, \gamma) \in \Sigma^2$. Note that the first rule is used for the first symbol of the right half, and that the last rule can only be used for the very last symbol, otherwise the derivation never terminates. Now we only need the terminating rules $\qquad X_\alpha \to \alpha$ for all $\alpha \in \Sigma$ and we are done.
These rules, too, can only be applied after everything (to the left) is done, otherwise the derivation will not terminate. Note that this grammar is ambiguous. Not only can $X_\alpha \to \alpha$ (safely) be applied anywhere to the left of the left "head" at any time, but there can also be multiple carries underway at the same time. Since they can never overtake each other, the correct order is maintained. One remark has to be made still: the above grammar is not context-sensitive, as many rules change both of the symbols on the left-hand side. This is not allowed for context-sensitive grammars. Luckily, we can simulate any rule $R$ of the form $\qquad A B \to C D$ by $\qquad \begin{align} A B &\to A Y_R \\ A Y_R &\to X_R Y_R \\ X_R Y_R &\to X_R D \\ X_R D &\to C D \end{align}$ so we are good and can keep working with the more compact grammar. Showing that interference between multiple such simulations does not hurt is left as an exercise. Do you see how to extend this to $L_k = \{ w^k \mid w \in \Sigma^*\}$? Does it also work for $L = \bigcup_{k\geq 1} L_k$? Can you use the same construction for any $L^k$ for regular $L$?
Given $a,b$ and $c$ are positive real numbers, prove that $$a^5 + b^5 + c^5 \ge abc(ab+bc+ca).$$ I assume you mean $\geq$. Hint: rewrite the LHS as $$\frac{a^5}5+\frac{a^5}5+\frac{a^5}5+\frac{a^5}5+\frac{a^5}5+\frac{b^5}5+\frac{b^5}5+\frac{b^5}5+\frac{b^5}5+\frac{b^5}5+\frac{c^5}5+\frac{c^5}5+\frac{c^5}5+\frac{c^5}5+\frac{c^5}5,$$ then rearrange the terms into three groups of five and apply AM-GM to each group. While most people would resort to using manipulation to ...subdue the problem, I find the calculus approach still appealing and no less "elegant" in its own way... so let's start. Divide both sides by $a^5$, and put $x = \dfrac{b}{a}, y = \dfrac{c}{a}$; then the "original" statement becomes: $1+x^5+y^5 \ge xy(xy+x+y)$, with $x, y > 0$. To this end, consider the two-variable function $f(x,y) = 1+x^5+y^5 - x^2y-x^2y^2 - xy^2$. Our aim is to show $f(x,y) \ge 0$ by showing that $f_{\text{min}} = 0$. We proceed by finding the critical points: $f_x = 0 = f_y\implies 5x^4-2xy-2xy^2-y^2 = 0 = 5y^4-2xy-2yx^2-x^2$. We subtract the latter equation from the former, and factor: $(x-y)(5(x^3+x^2y+xy^2+y^3)+2xy+x+y)=0\implies x - y = 0\implies x = y$. Substituting these values into the system $f_x = 0 = f_y$ we have: $5x^4-2x^3-3x^2 = 0\implies x^2(5x+3)(x-1) = 0\implies x = 1$ since $x > 0$. Thus the only critical point is $(1,1)$. We have: $f_{xx}(1,1) = 16 > 0, f_{xy}(1,1) = -8, f_{yy}(1,1) = 16\implies D = (f_{xx}f_{yy} - f^2_{xy})|_{(1,1)} = 16^2 - 8^2 = 256-64 = 192 > 0$. Thus by the second-derivative test (D-test), $(x,y) = (1,1)$ is a relative minimum, and since the domain is open in $\mathbb{R^2}$ (the first quadrant), this point is also the global minimum, which means $f_{\text{min}} = f(1,1) = 0\implies f(x,y) \ge 0\implies 1+x^5+y^5 \ge xy(xy+x+y)$, which is the desired claim we sought to show. $$\sum_{cyc}(a^5-a^2b^2c)=\frac{1}{2}\sum_{cyc}(2a^5-a^3b^2-a^3c^2+a^3b^2+a^3c^2-2a^2b^2c)=$$ $$=\frac{1}{2}\sum_{cyc}(a^5-a^3b^2-a^2b^3+b^5+c^2a^3+c^2b^3-c^2a^2b-c^2ab^2)=$$ $$=\frac{1}{2}\sum_{cyc}\left((a^3-b^3)(a^2-b^2)+c^2(a^2-b^2)(a-b)\right)=$$ $$=\frac{1}{2}\sum_{cyc}(a-b)^2\left((a^2+ab+b^2)(a+b)+c^2(a+b)\right)=$$ $$=\frac{1}{2}\sum_{cyc}(a-b)^2(a+b)(a^2+ab+b^2+c^2)\geq0.$$ Done! We see that your inequality is true even for all reals $a$, $b$ and $c$ such that $a+b\geq0$, $a+c\geq0$ and $b+c\geq0.$
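For completeness, here is one grouping that makes the AM-GM hint work (a sketch; other groupings succeed too). Apply AM-GM to the five terms $a^5, a^5, b^5, b^5, c^5$:
$$\frac{2a^5+2b^5+c^5}{5} \ \ge\ \sqrt[5]{a^{10}b^{10}c^{5}} \ =\ a^2b^2c,$$
and sum this with its two cyclic shifts: the left-hand sides add up to $a^5+b^5+c^5$, while the right-hand sides add up to
$$a^2b^2c+ab^2c^2+a^2bc^2 = abc(ab+bc+ca).$$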
Each cell of an $\;8\times8\;$ table is colored either black or white such that every column has the same number of black cells and no two rows have the same number of black cells. Find the maximum possible number of pairs of adjacent cells that have distinct colors. The cells are adjacent if they share a common edge. My attempt: Suppose that the $\;8\times8\;$ table has $k$ black cells; since every column has the same number, we have $8\mid k$. The rows' black-cell counts are eight distinct values from $\{0,1,\dots,8\}$, so $k$ equals $0+1+\cdots+8=36$ minus the one omitted value, and hence the number of black cells in the $\;8\times8\;$ table is between $28$ and $36$. Since $8\mid k\;$ and $28\leq k\leq36$, we get $k=32$ (the omitted row count is $4$) and each column has $4$ black cells. Number of pairs of adjacent cells that have distinct colors = Number of pairs of adjacent cells - Number of pairs of adjacent cells that have the same color. Number of pairs of adjacent cells $= 7 \times 8 \times 2 = 112$. Number of pairs of adjacent cells that have the same color: Consider adjacent cells in the same row. The rows that have 0 or 8 black squares have 7 same-color adjacent pairs. The rows that have 1 or 7 black squares have at least 5 same-color adjacent pairs. The rows that have 2 or 6 black squares have at least 3 same-color adjacent pairs. The rows that have 3 or 5 black squares have at least 1 same-color adjacent pair. So the total number of same-color adjacent pairs within rows is at least $7+5+3+1+1+3+5+7=32$. Consider adjacent cells in the same column. I don't know how to find the number of same-color adjacent pairs within columns. I constructed a table and found that it was 3, but I don't know how to prove it.
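When experimenting with candidate tables, it helps to score them automatically. A small Python sketch (the 0/1 grid encoding is my own convention) that checks both constraints and counts the distinct-color adjacent pairs:

import numpy as np

def score(grid):
    g = np.asarray(grid)                      # 8x8 array of 0 (white) / 1 (black)
    assert g.shape == (8, 8)
    cols_ok = len(set(g.sum(axis=0))) == 1    # all columns have the same count
    rows_ok = len(set(g.sum(axis=1))) == 8    # all row counts distinct
    distinct = (g[:, 1:] != g[:, :-1]).sum() + (g[1:, :] != g[:-1, :]).sum()
    return cols_ok, rows_ok, int(distinct)    # distinct-color adjacent pairs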
I've wondered why the tape/tapes are not part of the formal definition of a Turing Machine. Consider, for example, the formal definition of a Turing machine on the Wikipedia page. The definition, following Hopcroft and Ullman, includes: the finite set of states $Q$, the tape alphabet $\Gamma$, the blank symbol $b \in \Gamma$, the initial state $q_0\in Q$, the set of final states $F\subseteq Q$, and the transition function $\delta:(Q\backslash F)\times \Gamma\rightarrow Q\times\Gamma\times\{L,R\}$. None of these is the tape itself. A Turing Machine is always considered to work on a tape, and the transition function is interpreted as moving its head, substituting a symbol, and changing state. So, why is the tape left out of the mathematical definition of a Turing machine? From what I can see, the formal definition in itself doesn't seem to imply that the Turing Machine operates like it's often described informally (with a head moving around on a tape). Or does it?
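One way to see the division of labor in code: the tuple $(Q, \Gamma, b, q_0, F, \delta)$ is the program, while the tape, head position, and current state together form the run-time configuration. A minimal Python sketch (names and the toy machine are my own) of a simulator:

def run(delta, q0, accept, reject, word, blank="_", max_steps=10_000):
    # delta maps (state, symbol) -> (state, symbol_to_write, "L" or "R")
    tape = dict(enumerate(word))   # the tape lives here, in the configuration,
    q, head = q0, 0                # not in the machine definition itself
    for _ in range(max_steps):
        if q in (accept, reject):
            return q, tape
        sym = tape.get(head, blank)
        q, write, move = delta[(q, sym)]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within step bound")

# toy machine: replace every 'a' by 'b', then accept on the blank
delta = {("q0", "a"): ("q0", "b", "R"), ("q0", "_"): ("qacc", "_", "R")}
print(run(delta, "q0", "qacc", "qrej", "aaa"))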
Introduction

Consider a simple linear regression problem where it is desired to estimate a set of parameters using a least squares criterion. We generate some synthetic data where we know the model completely, that is \[ Y = X\beta + \epsilon \] where \(Y\) is a \(100\times 1\) vector, \(X\) is a \(100\times 10\) matrix, \(\beta = [-4,\ldots ,-1, 0, 1, \ldots, 5]\) is a \(10\times 1\) vector, and \(\epsilon \sim N(0, 1)\).

set.seed(123)
n <- 100
p <- 10
beta <- -4:5 # beta is just -4 through 5.
X <- matrix(rnorm(n * p), nrow = n)
colnames(X) <- paste0("beta_", beta)
Y <- X %*% beta + rnorm(n)

Given the data \(X\) and \(Y\), we can estimate the \(\beta\) vector using the lm function in R that fits a standard regression model.

ls.model <- lm(Y ~ 0 + X) # There is no intercept in our model above
m <- matrix(coef(ls.model), ncol = 1)
rownames(m) <- paste0("$\\beta_{", 1:p, "}$")
library(kableExtra)
knitr::kable(m, format = "html") %>%
  kable_styling("striped") %>%
  column_spec(1:2, background = "#ececec")

\(\beta_{1}\) -3.9196886
\(\beta_{2}\) -3.0117048
\(\beta_{3}\) -2.1248242
\(\beta_{4}\) -0.8666048
\(\beta_{5}\) 0.0914658
\(\beta_{6}\) 0.9490454
\(\beta_{7}\) 2.0764700
\(\beta_{8}\) 3.1272275
\(\beta_{9}\) 3.9609565
\(\beta_{10}\) 5.1348845

These are the least-squares estimates and can be seen to be reasonably close to the original \(\beta\) values -4 through 5.

The CVXR formulation

The CVXR formulation states the above as an optimization problem: \[ \begin{array}{ll} \underset{\beta}{\mbox{minimize}} & \|y - X\beta\|_2^2, \end{array}\] which directly translates into a problem that CVXR can solve as shown in the steps below.

Step 0. Load the CVXR library

suppressWarnings(library(CVXR, warn.conflicts=FALSE))

## Registered S3 method overwritten by 'R.oo':
##   method        from
##   throw.default R.methodsS3

Step 1. Define the variable to be estimated

betaHat <- Variable(p)

Step 2. Define the objective to be optimized

objective <- Minimize(sum((Y - X %*% betaHat)^2))

Notice how the objective is specified using functions such as sum, %*% and ^ that are familiar to R users, despite the fact that betaHat is no ordinary R expression but a CVXR expression.

Step 3. Create a problem to solve

problem <- Problem(objective)

Step 4. Solve it!

result <- solve(problem)

Step 5. Extract solution and objective value

## Objective value: 97.847586

We can indeed satisfy ourselves that the results we get match those from lm.

m <- cbind(result$getValue(betaHat), coef(ls.model))
colnames(m) <- c("CVXR est.", "lm est.")
rownames(m) <- paste0("$\\beta_{", 1:p, "}$")
knitr::kable(m, format = "html") %>%
  kable_styling("striped") %>%
  column_spec(1:3, background = "#ececec")

CVXR est. | lm est.
\(\beta_{1}\) -3.9196887 -3.9196886
\(\beta_{2}\) -3.0117041 -3.0117048
\(\beta_{3}\) -2.1248257 -2.1248242
\(\beta_{4}\) -0.8666045 -0.8666048
\(\beta_{5}\) 0.0914653 0.0914658
\(\beta_{6}\) 0.9490453 0.9490454
\(\beta_{7}\) 2.0764693 2.0764700
\(\beta_{8}\) 3.1272271 3.1272275
\(\beta_{9}\) 3.9609564 3.9609565
\(\beta_{10}\) 5.1348848 5.1348845

Wait a minute! What have we gained?

On the surface, it appears that we have replaced one call to lm with at least five or six lines of new R code. On top of that, the code actually runs slower, and so it is not clear what was really achieved. So suppose we knew that the \(\beta\)s were nonnegative and we wish to take this fact into account. This is nonnegative least squares regression, and lm would no longer do the job. In CVXR, the modified problem merely requires the addition of a constraint to the problem definition.
problem <- Problem(objective, constraints = list(betaHat >= 0))
result <- solve(problem)
m <- matrix(result$getValue(betaHat), ncol = 1)
rownames(m) <- paste0("$\\beta_{", 1:p, "}$")
knitr::kable(m, format = "html") %>%
  kable_styling("striped") %>%
  column_spec(1:2, background = "#ececec")

\(\beta_{1}\) 0.0000000
\(\beta_{2}\) 0.0000000
\(\beta_{3}\) 0.0000000
\(\beta_{4}\) 0.0000000
\(\beta_{5}\) 1.2374544
\(\beta_{6}\) 0.6234659
\(\beta_{7}\) 2.1230714
\(\beta_{8}\) 2.8035606
\(\beta_{9}\) 4.4448008
\(\beta_{10}\) 5.2073465

We can verify once again that these values are comparable to those obtained from another R package, say nnls.

library(nnls)
nnls.fit <- nnls(X, Y)$x
m <- cbind(result$getValue(betaHat), nnls.fit)
colnames(m) <- c("CVXR est.", "nnls est.")
rownames(m) <- paste0("$\\beta_{", 1:p, "}$")
knitr::kable(m, format = "html") %>%
  kable_styling("striped") %>%
  column_spec(1:3, background = "#ececec")

CVXR est. | nnls est.
\(\beta_{1}\) 0.0000000 0.0000000
\(\beta_{2}\) 0.0000000 0.0000000
\(\beta_{3}\) 0.0000000 0.0000000
\(\beta_{4}\) 0.0000000 0.0000000
\(\beta_{5}\) 1.2374544 1.2374488
\(\beta_{6}\) 0.6234659 0.6234665
\(\beta_{7}\) 2.1230714 2.1230663
\(\beta_{8}\) 2.8035606 2.8035640
\(\beta_{9}\) 4.4448008 4.4448016
\(\beta_{10}\) 5.2073465 5.2073521

Okay that was cool, but…

As you no doubt noticed, we have done nothing that other R packages could not do. So now suppose further, for some extraneous reason, that the sum of \(\beta_2\) and \(\beta_3\) is known to be negative but all other \(\beta\)s are positive. It is clear that this problem would not fit into any standard package. But in CVXR, this is easily done by adding a few constraints.

To express the fact that \(\beta_2 + \beta_3\) is negative, we construct a row matrix with zeros everywhere, except in positions 2 and 3 (for \(\beta_2\) and \(\beta_3\) respectively).

A <- matrix(c(0, 1, 1, rep(0, 7)), nrow = 1)
colnames(A) <- paste0("$\\beta_{", 1:p, "}$")
knitr::kable(A, format = "html") %>%
  kable_styling("striped") %>%
  column_spec(1:10, background = "#ececec")

\(\beta_{1}\) \(\beta_{2}\) \(\beta_{3}\) \(\beta_{4}\) \(\beta_{5}\) \(\beta_{6}\) \(\beta_{7}\) \(\beta_{8}\) \(\beta_{9}\) \(\beta_{10}\)
0 1 1 0 0 0 0 0 0 0

The sum constraint is nothing but \[ A\beta < 0 \] which we express in R as

constraint1 <- A %*% betaHat < 0

NOTE: The above constraint can also be expressed simply as

constraint1 <- betaHat[2] + betaHat[3] < 0

but it is easier working with matrices in general with CVXR.

For the positivity of the rest of the variables, we construct a \(10\times 10\) matrix \(B\) with 1's along the diagonal everywhere except rows 2 and 3, and zeros elsewhere.

B <- diag(c(1, 0, 0, rep(1, 7)))
colnames(B) <- rownames(B) <- paste0("$\\beta_{", 1:p, "}$")
knitr::kable(B, format = "html") %>%
  kable_styling("striped") %>%
  column_spec(1:11, background = "#ececec")

\(\beta_{1}\) 1 0 0 0 0 0 0 0 0 0
\(\beta_{2}\) 0 0 0 0 0 0 0 0 0 0
\(\beta_{3}\) 0 0 0 0 0 0 0 0 0 0
\(\beta_{4}\) 0 0 0 1 0 0 0 0 0 0
\(\beta_{5}\) 0 0 0 0 1 0 0 0 0 0
\(\beta_{6}\) 0 0 0 0 0 1 0 0 0 0
\(\beta_{7}\) 0 0 0 0 0 0 1 0 0 0
\(\beta_{8}\) 0 0 0 0 0 0 0 1 0 0
\(\beta_{9}\) 0 0 0 0 0 0 0 0 1 0
\(\beta_{10}\) 0 0 0 0 0 0 0 0 0 1

The constraint for positivity is \[ B\beta > 0 \] which we express in R as

constraint2 <- B %*% betaHat > 0

Now we are ready to solve the problem just as before.
problem <- Problem(objective, constraints = list(constraint1, constraint2))
result <- solve(problem)

And we can get the estimates of \(\beta\).

m <- matrix(result$getValue(betaHat), ncol = 1)
rownames(m) <- paste0("$\\beta_{", 1:p, "}$")
knitr::kable(m, format = "html") %>%
  kable_styling("striped") %>%
  column_spec(1:2, background = "#ececec")

\(\beta_{1}\) 0.0000000
\(\beta_{2}\) -2.8447019
\(\beta_{3}\) -1.7109799
\(\beta_{4}\) 0.0000000
\(\beta_{5}\) 0.6641321
\(\beta_{6}\) 1.1780936
\(\beta_{7}\) 2.3286068
\(\beta_{8}\) 2.4144816
\(\beta_{9}\) 4.2119206
\(\beta_{10}\) 4.9483132

This demonstrates the chief advantage of CVXR: flexibility. Users can quickly modify and re-solve a problem, making the package ideal for prototyping new statistical methods. Its syntax is simple and mathematically intuitive. Furthermore, CVXR combines seamlessly with native R code as well as several popular packages, allowing it to be incorporated easily into a larger analytical framework. The user is free to construct statistical estimators that are solutions to a convex optimization problem where there may not be a closed-form solution or even an implementation. Such solutions can then be combined with resampling techniques like the bootstrap to estimate variability.

Further Reading

We hope we have whetted your appetite. You may wish to read a longer introduction with more examples. We also have a number of tutorial examples available to study and mimic.

Session Info

sessionInfo()

## R version 3.6.0 (2019-04-26)
## Platform: x86_64-apple-darwin18.5.0 (64-bit)
## Running under: macOS Mojave 10.14.5
##
## Matrix products: default
## BLAS/LAPACK: /usr/local/Cellar/openblas/0.3.6_1/lib/libopenblasp-r0.3.6.dylib
##
## locale:
## [1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8
##
## attached base packages:
## [1] stats graphics grDevices datasets utils methods base
##
## other attached packages:
## [1] nnls_1.4 CVXR_0.99-6 kableExtra_1.1.0
##
## loaded via a namespace (and not attached):
## [1] gmp_0.5-13.5 Rcpp_1.0.1 pillar_1.4.1
## [4] compiler_3.6.0 highr_0.8 R.methodsS3_1.7.1
## [7] R.utils_2.8.0 tools_3.6.0 bit_1.1-14
## [10] digest_0.6.19 evaluate_0.14 tibble_2.1.2
## [13] viridisLite_0.3.0 lattice_0.20-38 pkgconfig_2.0.2
## [16] rlang_0.3.4 Matrix_1.2-17 rstudioapi_0.10
## [19] yaml_2.2.0 blogdown_0.12.1 xfun_0.7
## [22] Rmpfr_0.7-2 ECOSolveR_0.5.2 httr_1.4.0
## [25] stringr_1.4.0 xml2_1.2.0 knitr_1.23
## [28] hms_0.4.2 bit64_0.9-7 webshot_0.5.1
## [31] grid_3.6.0 glue_1.3.1 R6_2.4.0
## [34] rmarkdown_1.13 bookdown_0.11 readr_1.3.1
## [37] magrittr_1.5 scs_1.2-3 scales_1.0.0
## [40] htmltools_0.3.6 rvest_0.3.4 colorspace_1.4-1
## [43] stringi_1.4.3 munsell_0.5.0 crayon_1.3.4
## [46] R.oo_1.22.0
A carnival Ferris wheel with a radius of 8 m makes one complete revolution every 20 seconds. The bottom of the wheel is 1 m off the ground. The ride lasts 40 seconds and starts at the bottom of the wheel. a) Find an equation of a function that models the height of a person riding the Ferris wheel throughout their 40 second ride and give the corresponding graph. b) Little Bobby is a bit afraid of heights and gets really nervous once above 15 meters. During what time intervals on his ride should we expect little Bobby to be really nervous? Answer: h(t) = -8 cos (π/10 t) + 9, between 7.7 and 12.3 seconds and between 27.7 and 32.3 seconds. Note: I have the answer, I am looking entirely for the process! Thank you! Rewritable as: -8 cos (π t /10) + 9 [if it looked confusing]

a) Maybe this drawing will help... The thing at the bottom right corner of the triangle is supposed to be somebody riding the Ferris wheel x) All the lengths are in meters, and t is the number of seconds after the ride starts.

height of person = b + 1
height of person = 8 - 8 cos θ + 1
height of person = 9 - 8 cos θ
height of person = 9 - 8 cos( \(\frac{\pi t}{10}\) )
h(t) = -8 cos( \(\frac{\pi t}{10}\) ) + 9

If you have a question about where any part of this came from, please ask. Here is a cool graph: https://www.desmos.com/calculator/csy92axqeq You can choose the value of t using the slider and see where the point representing the person is at. Also, you can turn on the folder called "graph of height" to see a graph of y = -8 cos( \(\frac{\pi x}{10}\) ) + 9

b) You can try looking at that graph and moving the point until the y-value goes just above 15. But to find the answer without using that.... Let's find the values of t which make h(t) = 15: \(15\ =\ -8\cos(\frac{\pi t}{10})+9\\~\\ 6\ =\ -8\cos(\frac{\pi t}{10})\\~\\ -\frac34\ =\ \cos(\frac{\pi t}{10})\) There are two values of \(\frac{\pi t}{10}\) in the interval [0, 2π) and thus two values of t which make \(\cos(\frac{\pi t}{10})=-\frac34\) They are: \(\begin{array}{} \frac{\pi t}{10}\ =\ \arccos(-\frac34)&\quad&\frac{\pi t}{10}\ =\ 2\pi-\arccos(-\frac34)\\~\\ t\ =\ \frac{10}{\pi}\arccos(-\frac34)&&t\ =\ \frac{10}{\pi}(2\pi-\arccos(-\frac34))\\~\\ t\ \approx\ 7.7&&t\ =\ 20-\frac{10}{\pi}\arccos(-\frac34)\\~\\ &&t\ \approx\ 20-7.7\\~\\ &&t\ \approx\ 12.3 \end{array}\)

The height is 15 meters 7.7 seconds after the starting position and again 7.7 seconds before the wheel returns to the starting position. So we can see that Bobby will be really nervous when t is in the interval (7.7, 12.3). Bobby will be really nervous again 20 seconds later, when the Ferris wheel is back at the same spot, so Bobby will also be really nervous when t is in the interval (27.7, 32.3).

Here is another diagram: The person is sitting at point B.

height of person = CD + DE
And DE = 1 because the bottom of the wheel is 1 m off the ground.
height of person = CD + 1
Now we just have to find the length of CD.
CD = AD - AC
And we know AD = 8 because the radius of the Ferris wheel is 8 m.
CD = 8 - AC
And we can find the length of AC by looking at △ABC.
cos θ = adjacent / hypotenuse
cos θ = AC / AB
And we know AB = 8 because the radius of the Ferris wheel is 8 m.
cos θ = AC / 8
Multiply both sides of the equation by 8.
8 cos θ = AC

So........
height of person = CD + DE = (AD - AC) + DE = ( 8 - 8 cos θ ) + 1 = -8 cos θ + 9

And.... if you don't understand how to relate θ and t..... this might help: try to imagine what that angle would be after 10 seconds. (10 seconds is half of 20 seconds!) What would that angle be after 5 seconds?
(5 seconds is a fourth of 20 seconds.)
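To double-check the nervous intervals numerically, a small Python sketch of the solution above:

import math

h = lambda t: 9 - 8 * math.cos(math.pi * t / 10)   # height model from part a)
t1 = 10 / math.pi * math.acos(-3 / 4)              # first crossing of 15 m
t2 = 20 - t1                                       # symmetric crossing on the way down
print(round(t1, 1), round(t2, 1))                  # 7.7 12.3
print(round(t1 + 20, 1), round(t2 + 20, 1))        # 27.7 32.3 (second revolution)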
I thought I would share some interesting things about first order differential operators, acting on functions on a supermanifold. One can reduce the theory to operators on manifolds by simply dropping the sign factors and ignoring the parity. First order differential operators naturally include vector fields as their homogeneous "top component". The lowest order component is left multiplication by a smooth function. I will attempt to demonstrate that from an algebraic point of view first order differential operators are quite natural and in some sense more fundamental than just the vector fields. Geometrically, vector fields are key as they represent infinitesimal diffeomorphisms and are used to construct Lie derivatives as "geometric variations". This is probably why in introductory geometry textbooks first order differential operators are not described. I do not think anything I am about to say is in fact new. I assume the reader has some idea what a differential operator is and that they form a Lie algebra under the commutator bracket. Everything here will be done on supermanifolds. I won't present full proofs; hopefully anyone interested can fill in any gaps. If there are any serious mistakes then let me know. Let \(M\) be a supermanifold and let \(C^{\infty}(M)\) denote its algebra of functions. Definition A differential operator \(D\) is said to be a first order differential operator if and only if \(\left[ \left[ D,f \right],g \right]1=0\), for all \(f,g \in C^{\infty}(M)\). We remark that we have a filtration here rather than a grading (nothing to do with the supermanifold grading), as we include zero order operators (left multiplication by a function). Let us denote the vector space of first order differential operators as \(\mathcal{D}^{1}(M)\). Theorem The first order differential operator \(D \in\mathcal{D}^{1}(M) \) is a vector field if and only if \(D(1)=0\). Proof Writing out the definition of a first order differential operator gives \(D(fg) = D(f)g + (-1)^{\widetilde{D}\widetilde{f}}f D(g) - D(1)fg\), which reduces to the strict Leibniz rule when \(D(1)=0\). QED. Lemma First order differential operators always decompose as \(D = (D-D(1)) + D(1)\). The above lemma says that we can write any first order differential operator as the sum of a vector field and a function. Theorem A first order differential operator \(D\) is a zero order operator if and only if \(D(1) \neq 0\) and \(\left[ D,f\right]1 = 0\), for all \(f \in C^{\infty}(M)\). Proof Writing out the definition of a first order differential operator and using the above Lemma we get \(\left[ D,f\right]1 = (D(f) - D(1)f) - (-1)^{\widetilde{D}\widetilde{f}}f (D - D(1)) = 0\). Thus we decompose the condition into the sum of a function and a vector field. As these are different they must both vanish separately. In particular \(D - D(1)\) must be the zero vector field. Then \(D = D(1)\) and we have "just" a non-zero function. QED We assume that the function is not zero; otherwise we can simply consider it to be the zero vector field. This avoids the obvious "degeneracy". Theorem The space of first order differential operators \(\mathcal{D}^{1}(M) \) is a bimodule over \(C^{\infty}(M)\). Proof Let \(D\) be a first order differential operator and let \(k,l \in C^{\infty}(M)\) be functions. Then using all the definitions one arrives at \(kDl = k \left( (-1)^{\widetilde{l} \widetilde{D}}(D- D(1)) + D(l) \right)\), which clearly shows that we have a first order differential operator.
QED Please note that this is different from the case of vector fields, which only form a left module. That is, \(f \circ X\) is a vector field but \(X \circ f\) is not. Theorem The space of first order differential operators is a Lie algebra with respect to the commutator bracket. Proof Let us assume the basic results for the commutator; that is, we take for granted that it forms a Lie algebra. The non-trivial thing is that the space of first order differential operators is closed with respect to the commutator. By the definitions we get \(\left[ D_{1}, D_{2} \right] = \left[(D_{1}-D_{1}(1)) , (D_{2} - D_{2}(1)) \right] + (D_{1}-D_{1}(1))(D_{2}(1)) - (-1)^{\widetilde{D}_{1} \widetilde{D}_{2}} (D_{2}- D_{2}(1)) (D_{1}(1))\), which remains a first order differential operator. QED Note that the above commutator contains the standard Lie bracket between vector fields. So, as one expects, vector fields are closed with respect to the commutator. The commutator bracket between first order differential operators is often known as THE Jacobi bracket. So in conclusion we see that the first order differential operators have a privileged place in geometry. They form a bimodule over the smooth functions and are closed with respect to the commutator. No other order of differential operators has both these properties. They are also very important from other angles, including Jacobi algebroids and related structures like Courant algebroids and generalised geometry. But these remain topics for discussion another day.
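For concreteness, a simple sketch of the decomposition on the real line (an example of my own, ordinary rather than super): take \(D = a(x)\partial_x + b(x)\), acting as \(D(f) = af' + bf\). Then
$$D(1) = b, \qquad D - D(1) = a(x)\,\partial_x,$$
so the Lemma's splitting recovers the vector field part \(a(x)\partial_x\) and the zero order part \(b(x)\). One also checks directly that \(\left[D, f\right]\) is just multiplication by the function \(af'\), so \(\left[\left[D,f\right],g\right]1 = 0\) for all \(f, g\), as the Definition requires.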
Why All These Stresses and Strains? In structural mechanics you will come across a plethora of stress and strain definitions. It may be a Second Piola-Kirchhoff Stress or a Logarithmic Strain. In this blog post we will investigate these quantities, discuss why there is a need for so many variations of stresses and strains, and illuminate the consequences for you as a finite element analyst. The defining tensor expressions and transformations can be found in many textbooks, as well as through some web links at the end of this blog post, so they will not be given in detail here. The Tensile Test When evaluating the mechanical data of a material, it is common to perform a uniaxial tension test. What is actually measured is a force versus displacement curve, but in order to make these results independent of specimen size, the results are usually presented as stress versus strain. If the deformations are large enough, one question then is: do you compute the stress based on the original cross-sectional area of the specimen, or based on the current area? The answer is that both definitions are used, and are called Nominal stress and True stress, respectively. A second, and not so obvious, question is how to measure the relative elongation, i.e. the strain. The engineering strain is defined as the ratio between the elongation and the original length, $\epsilon_{eng} = \frac{L-L_0}{L_0}$. For larger stretches, however, it is more common to use either the stretch $\lambda=\frac{L}{L_0}$ or the true strain (logarithmic strain) $\epsilon_{true} = \log\frac{L}{L_0} = \log \lambda$. The true strain is more common in metal testing, since it is a quantity suitable for many plasticity models. For materials with a very large possible elongation, like rubber, the stretch is a more common parameter. Note that for the undeformed material, the stretch is $\lambda=1$. In order to make use of the measured data in an analysis, you must make sure of the following two things: How the stress and strain are defined in the test In what form your analysis software expects it for a specific material model The transformation of the uniaxial data is not difficult, but it must not be forgotten. Stress-strain curves for the same tensile test. Geometric Nonlinearity Most structural mechanics problems can be analyzed under the assumption that the deformations are so small compared to the dimensions of the structure, that the equations of equilibrium can be formulated for the undeformed geometry. In this case, the distinctions between different stress and strain measures disappear. If displacements, rotations, or strains become large enough, then geometric nonlinearity must be taken into account. This is when we start to consider that area elements actually change, that there is a distinction between an original length and a deformed length, and that directions may change during the deformation. There are several mathematically equivalent ways of representing such finite deformations. For the uniaxial test above, the different representations are rather straightforward. In real life however, geometries are three-dimensional, have multiaxial stress states, and might rotate in space. Even if we just consider the same tensile test, keep the stress and strain fixed at a certain level, and then rotate the specimen, questions arise. What results can we expect? Are the values of the stress and strain components expected to change or not?
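Returning to the uniaxial test for a moment: the nominal and true quantities are related by simple formulas, which a short Python sketch makes explicit. The stress conversion assumes incompressibility and a uniform cross-section (a common approximation up to necking), and the numbers below are made up for illustration.

import numpy as np

eps_eng   = np.array([0.00, 0.05, 0.10, 0.20])    # engineering strain
sigma_nom = np.array([0.0, 180.0, 205.0, 230.0])  # nominal stress in MPa (made-up data)

stretch    = 1.0 + eps_eng                        # lambda = L / L0
eps_true   = np.log(stretch)                      # logarithmic (true) strain
sigma_true = sigma_nom * stretch                  # true stress, assuming constant volume
print(eps_true, sigma_true)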
Stress Measures The most fundamental and commonly used stress quantity is the Cauchy stress, also known as the true stress. It is defined by studying the forces acting on an infinitesimal area element in the deformed body. Both the force components and the normal to the area have fixed directions in space. This means that if a stressed body is subjected to a pure rotation, the actual values of the stress components will change. What was originally a uniaxial stress state might be transformed into a full tensor with both normal and shear stress components. In many cases, this is neither what you want to use nor what you would expect. Consider for example an orthotropic material with fibers having a certain orientation. It is much more plausible that you want to see the stress in the fiber direction, even if the component is rotated. The Second Piola-Kirchhoff stress has this property. It is defined along the material directions. In the figure below, an originally straight cantilever beam has been subjected to bending by a pure moment at the tip. The xx-component of the Cauchy stress (top) and Second Piola-Kirchhoff stress (below) are shown. Since the stress is physically directed along the beam, the xx-component of the Cauchy stress (which is related to the global x-direction) decreases with the deflection. The Second Piola-Kirchhoff stress however, has the same through-thickness distribution all along the beam, even in the deformed configuration. Cauchy and Second Piola-Kirchhoff stress for an initially straight beam with constant bending moment. Another stress measure that you may encounter is the First Piola-Kirchhoff stress. It is a multiaxial generalization of the nominal (or engineering) stress. The stress is defined as the force in the current configuration acting on the original area. The First Piola-Kirchhoff is an unsymmetric tensor, and is for that reason less attractive to work with. Sometimes you may also encounter the Kirchhoff stress. The Kirchhoff stress is just the Cauchy stress scaled by the volume change. It has little physical significance, but can be convenient in some mathematical and numerical operations. Unfortunately, even without a rotation, the actual values of all these stress representations are not the same. All of them scale differently with respect to local volume changes and stretches. This is illustrated in the graph below. The xx-component of several stress measures are plotted at the fixed end of the beam, where the beam axis coincides with the x-axis. In the center of the beam, where strains, and thereby volume changes are small, all values approach each other. So for a case with large rotation but small strains, the stress representations can be seen as pure rotations of the same stress tensor. The distribution of axial stress at the fixed end of the beam. If you want to compute the resulting force or a moment on a certain boundary, there are really only two possible choices: Either integrate the Cauchy stress over the deformed boundary, or integrate the First Piola-Kirchhoff stress over the same boundary in the undeformed configuration. In COMSOL Multiphysics this corresponds to selecting either “Spatial frame” or “Material frame” in the settings for the integration operator. Strain Measures When investigating the uniaxial tensile test above, three different representations of the strain were introduced. It is possible to generalize all of them to multiaxial cases, but for the true strain this is not trivial. 
It has to be done through a representation in the principal strain directions, because that is the only way to take the logarithm of a tensor. The general tensor representation of the logarithmic strain is often called the Hencky strain. There are also many other possible representations of the deformation. Any reasonable representation, however, must be able to represent a rigid rotation of an unstrained body without producing any strain. The engineering strain fails here, thus it cannot be used for general geometrically nonlinear cases. One common choice for representing large strains is the Green-Lagrange strain. It contains derivatives of the displacements with respect to the original configuration. The values therefore represent strains in material directions, similar to the behavior of the Second Piola-Kirchhoff stress. This allows a physical interpretation, but it must be realized that even for a uniaxial case, the Green-Lagrange strain is strongly nonlinear with respect to the displacement. If an object is stretched to twice its original length, the Green-Lagrange strain is 1.5 in the stretching direction. If the object is compressed to half its length, the strain would read -0.375. An even more fundamental quantity is the deformation gradient, $\mathbf F$, which contains the derivatives of the deformed coordinates with respect to the original coordinates, $$\mathbf F = \frac{\partial \mathbf x}{\partial \mathbf X}.$$ The deformation gradient contains all information about the local deformation in the solid, and can be used to form many other strain quantities. As an example, the Green-Lagrange strain is $\frac{1}{2} (\mathbf{F}^T \mathbf F-\mathbf I)$. A similar strain tensor, but based on derivatives with respect to coordinates in the deformed configuration, is the Almansi strain tensor, $\frac{1}{2} ( \mathbf I-( \mathbf{F} \mathbf F^T)^{-1})$. The Almansi strain tensor will then refer to directions fixed in space. Conjugate Quantities A general way to express the continuum mechanics problem is by using a weak formulation. In mechanics this is known as the principle of virtual work, which states that the internal work done by an infinitesimal strain variation operating on the current stresses equals the external work done by a corresponding virtual displacement operating on the loads. The stress and strain measures must then be selected so that their product gives an accurate energy density. This energy density may be related either to the undeformed or deformed volume, depending on whether the internal virtual work is integrated over the original or the deformed geometry. In the table below, some corresponding conjugate stress-strain pairs are summarized:

Strain | Stress | Symmetry | Volume | Orientation
Engineering strain (based on deformed geometry); true strain; Almansi strain | Cauchy (true stress) | Symmetric | Deformed | Spatial
Engineering strain (based on deformed geometry); true strain; Almansi strain | Kirchhoff | Symmetric | Original | Spatial
Deformation gradient | First Piola-Kirchhoff (nominal stress) | Non-symmetric | Original | Mixed
Green-Lagrange strain | Second Piola-Kirchhoff (material stress) | Symmetric | Original | Material

In the Solid Mechanics interface in COMSOL Multiphysics, the principle of virtual work is always expressed in the undeformed geometry (the "Material frame"). Green-Lagrange strains and Second Piola-Kirchhoff stresses are then used. Such a formulation is sometimes called a "Total Lagrangian" formulation.
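To make the numbers above concrete, a small NumPy sketch (the deformation gradients are made-up isochoric uniaxial stretches) computing the Green-Lagrange and Almansi strains directly from $\mathbf F$:

import numpy as np

I = np.eye(3)

def strains(lam):
    # incompressible uniaxial stretch: det(F) = 1
    F = np.diag([lam, lam**-0.5, lam**-0.5])
    E = 0.5 * (F.T @ F - I)                  # Green-Lagrange strain
    e = 0.5 * (I - np.linalg.inv(F @ F.T))   # Almansi strain
    return E, e

E, e = strains(2.0)      # stretched to twice the length
print(E[0, 0])           # 1.5, as quoted in the text
E, e = strains(0.5)      # compressed to half the length
print(E[0, 0])           # -0.375, as quoted in the text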
A formulation that is instead based on quantities in the current configuration is called an "Updated Lagrangian" formulation. Additional Resources on Stresses and Strains
Recall that continuous random variables have uncountably many possible values (think of intervals of real numbers). Just as for discrete random variables, we can talk about probabilities for continuous random variables using density functions. Definition \(\PageIndex{1}\) The probability density function (pdf), denoted \(f\), of a continuous random variable \(X\) satisfies the following: \(f(x) \geq 0\), for all \(x\in\mathbb{R}\) \(f\) is piecewise continuous \(\displaystyle{\int\limits^{\infty}_{-\infty}\! f(x)\,dx = 1}\) \(\displaystyle{P(a\leq X\leq b) = \int\limits_a^b\! f(x)\,dx}\) Example \(\PageIndex{1}\): Let the random variable \(X\) denote the time a person waits for an elevator to arrive. Suppose the longest one would need to wait for the elevator is 2 minutes, so that the possible values of \(X\) (in minutes) are given by the interval \([0,2]\). A possible pdf for \(X\) is given by $$f(x) = \left\{\begin{array}{l l} x, & \text{for}\ 0\leq x\leq 1 \\ 2-x, & \text{for}\ 1< x\leq 2 \\ 0, & \text{otherwise} \end{array}\right.\notag$$ The graph of \(f\) is given below. The reader is encouraged to verify that \(f\) satisfies the first three conditions in Definition 4.1.1. Figure 1: Graph of f So, if we wish to calculate the probability that a person waits less than 30 seconds (or 0.5 minutes) for the elevator to arrive, then we calculate the following probability using the pdf and the fourth property in Definition 4.1.1: $$P(0\leq X\leq 0.5) = \int\limits^{0.5}_0\! f(x)\,dx = \int\limits^{0.5}_0\! x\,dx = 0.125\notag$$
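A quick numerical sanity check of this example (a sketch using scipy.integrate.quad):

from scipy.integrate import quad

def f(x):
    # the triangular pdf from the example
    if 0 <= x <= 1:
        return x
    if 1 < x <= 2:
        return 2 - x
    return 0.0

total, _ = quad(f, -1, 3)   # should be ~1.0 (condition 3)
p, _ = quad(f, 0, 0.5)      # P(0 <= X <= 0.5)
print(total, p)             # ~1.0, ~0.125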
As a central goal of his MS in Aerospace Engineering thesis, Michael Waddington developed the Wave Drag tool in OpenVSP. This replaced the AWAVE drag tool available in earlier OpenVSP versions, which provided the cross-sectional area calculations necessary for an AWAVE analysis. For details of wave drag methodology, tool development path, implementation details, and validation studies, Michael Waddington's thesis is available here: Development of an Interactive Wave Drag Capability for the OpenVSP Parametric Geometry Tool For additional information, Rob McDonald's presentation at the 2016 OpenVSP Workshop can be viewed here: OpenVSP Workshop 2016: Wave Drag Presentation Wave drag is a phenomenon experienced during transonic/supersonic flight due to the presence of shock waves, which leads to a sharp increase in the drag coefficient. In 1952, Richard Whitcomb of NACA discovered the area-ruling technique, where the cross-sectional area distribution is managed to reduce wave drag. This leading approach to wave drag minimization is known as the Whitcomb area rule, often referred to simply as 'area-ruling'. By carefully managing the cross-sectional area distribution longitudinally so as to avoid deviation from a smooth profile, a designer can prevent strong shock waves. The goal of reducing the wave drag over a body can be accomplished by minimizing the following integral: $$ I=- \frac{1}{2\pi} \int_0^1 \int_0^1 S''(x)S''(y) \log|x-y|\,dx\,dy $$ where x and y are axial stations along the body, S represents the area distribution, and the body length has been normalized to unity. A Fourier analysis of the equation, as proposed by Eminton and Lord [1], allows the minimum value of the integral to be found. The Wave Drag GUI is accessed by selecting "Wave Drag…" from the Analysis menu. Until an analysis has been run, the data does not yet exist to populate either the results fields or the cross-sectional area plot. Once the wave drag calculation has run, results from the tool are available for the remainder of the OpenVSP session. Firstly, under the "Case Setup" header, the user is permitted to select the geometry set on which to run the analysis. By default, the analysis will run on all sets. Secondly, user controls exist for the number of slices per θ rotation and the number of θ rotations. Also shown is a toggle button permitting the tool to be run with or without X−Z symmetry. When this symmetry option is turned on, as is the default, the wave drag tool rescales the distribution of the θ rotations on a 0−180° basis rather than 0−360°. The intent here is to utilize this feature when the two X−Z halves of the geometry are identical. The advantage of applying this option is achieving the same fidelity with fewer rotations and, thus, quicker calculations. It is worth noting that the symmetry option will achieve the same result as the full calculation when the user enters an even number of rotations; the impossibility of having a fraction of a rotation precludes exactly equal results when the user enters an odd number of rotations. The middle portion of the "Setup" tab is dedicated to the "Flow Conditions" section, in which the Mach number and reference area are set. Mach angle is computed internally, as Mach number is typically more intuitive for users. The reference area that is used may be determined from the geometry or may be user-defined. By default, the manual option is selected, with an arbitrary value of 100 in the text field.
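To illustrate the integral itself (not OpenVSP's actual implementation, which uses the Eminton-Lord Fourier approach), here is a naive numerical sketch in Python. The midpoint rule simply skips the integrable log singularity on the diagonal, which is adequate for a rough check.

import numpy as np

def wave_drag_integral(S, n=800):
    # I = -(1/2pi) * double integral of S''(x) S''(y) ln|x-y| over [0,1]^2
    x = (np.arange(n) + 0.5) / n
    h = 1.0 / n
    Spp = (S(x + h) - 2 * S(x) + S(x - h)) / h**2            # finite-difference S''
    K = np.log(np.abs(x[:, None] - x[None, :]) + np.eye(n))  # ln|x-y|, 0 on the diagonal
    return -(h * h / (2 * np.pi)) * Spp @ K @ Spp

# check on S(x) = x(1-x), where S'' = -2 and the exact value works out to 3/pi
print(wave_drag_integral(lambda x: x * (1 - x)), 3 / np.pi)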
Lastly, a file navigator gives the option to save the resulting cross-sectional areas as a text output. The "…" button opens a file browser and file naming window from which the user dictates a *.txt file in which the calculated data will be saved. The file string is displayed in the "File" text field. With these setup values, the zero-lift wave drag coefficient, CD0_w, can be calculated. Handling flow-through components can be achieved directly by building a component with a hole or indirectly by building a solid component and using sub-surfaces to designate the flow faces; the wave drag tool provides the functionality to designate sub-surfaces as flow-through. Accommodating solid flow-through components was accomplished by extending flow faces into stream tubes that are intersected by each Mach cutting plane. Adding flow faces onto solid components for the wave drag tool begins with placing sub-surfaces on the components intended to be flow-through. This is done using the sub-surface interface for the component. The user must dictate whether the sub-surface lies outside or inside the sub-surface line by selecting "Greater" or "Less" from the sub-surface menu. With the Line sub-surface, these options are with respect to x-location: with the "Greater" option, the portion of the component located in the positive x-direction from the sub-surface line will be designated as the sub-surface, while the opposite is true for the "Less" option. For example, to create inlet and exit sub-surfaces on an engine component, the user would add a sub-surface Line with "Less" at the inlet location and a sub-surface Line with "Greater" at the exit location. The user then communicates to the wave drag tool the sub-surfaces that are to be considered flow-through by using the "Inflow/Outflow" tab of the wave drag tool GUI. This tab contains a checklist window of all sub-surfaces in the vehicle. The default condition of the check boxes is unchecked, meaning that no surfaces are considered flow surfaces. Checking the boxes next to the sub-surfaces to be used as flow faces is the only action required by the user. The wave drag tool uses the geometry of the model to determine whether the sub-surface is an inlet or an exit and performs the analysis accordingly. Controls for managing the visual interaction tools were segregated into the "Plot" tab of the Wave Drag GUI. A rotation index selector under the "Displayed Rotation" header allows the user to select which of the available θ rotation cross-sectional area plots to visualize. An additional text field to the right of the header is provided to display the value, in degrees, of the currently selected θ. A visual indicator of the current x-location of interest is shown on the cross-sectional area plot, as well as in the form of a translucent cutting plane on the geometry window. The slider under the "Slice Reference" header controls the x-location of these indicators. The cutting plane visualizer is discussed in additional detail later in this section, and can be toggled on and off using the "Plane" button to the right of the "Slice Reference" header. Additional functionality is built into this visual indicator pertaining to the locations of maximum wave drag on the body. As stated by Eminton and Lord [1], the equation for S′(x) allows for differentiation such that the x-value corresponding to the location of maximum wave drag contribution may be determined. In the wave drag tool, these x-values are determined for each set of Mach cutting planes over the dictated θ rotations.
The "X" button immediately right of the visual indicator control relocates the visual indicator to the x-location of the maximum wave drag contribution on the presently selected θ rotation. The "X, Rot" button relocates to the global maximum wave drag contribution by changing the GUI selections to the value of θ that contains the maximum wave drag contribution, as well as moving the visual indicator to the x-location of that maximum drag contribution. The "Optimal Distribution" dropdown menu allows the user to select from a given list of available bodies of revolution whose cross-sectional area distributions will appear on the cross-sectional area plot along with the distribution for the existing aircraft model. The default is to display none of these curves, but the list contains: 1) Sears-Haack body; 2) von Karman ogive; and 3) Lighthill's body. All three are bodies that use length and one other parameter to distribute area from nose to tail. These curves are useful for comparing to the area distribution of the aircraft model. Under the "Plot Style" header, the user is given the opportunity to dictate the manner in which the cross-sectional area plot is constructed. By default, the plot will show the total cross-sectional area distribution, and show the plot line with data points. However, the user is also permitted to view the data by parts or by buildup. The user may also elect to show the plot lines without the data points. The "Legend" section at the bottom of the tab menu shows the relevant information for the Parts and Buildup selections. On the right side of the GUI, visible on all tabs, is the results cross-sectional area plot. Each time the wave drag GUI is updated, the plot is redrawn to reflect any changes. The minimum of the y-axis is zero, for zero cross-sectional area, and the maximum value is the global maximum value of cross-sectional area for all θ, plus a buffer to disallow any data from being plotted at the very top of the plot. Using the axis location and cross-sectional area as (x,y) coordinates, the data for the current θ is then plotted as black points on the canvas. The Fourier terms of the solution approach are used to create the smooth curve that approximates the discrete values from the area calculation, shown as the black line connecting the points on the cross-sectional area plot. Selecting a body of revolution curve from the "Optimal Distribution" menu plots the selected curve in blue on top of the existing data. The Sears-Haack body is the minimum wave drag shape for a given length and volume [2]. In the wave drag tool, the length is obtained from the x-wise span of the current θ value; the volume is calculated as the integral of the area results on the current θ. The result is an equivalent Sears-Haack body created to match the geometry data from the aircraft model. The von Karman ogive is most commonly referenced in nose cone design. This body uses a given length and maximum diameter to produce the minimum wave drag shape for when the maximum diameter is located at the base. The equation for the von Karman ogive is given in Reference [2]; again, the length parameter used is the x-wise span of the current θ value. Lighthill's body is very similar to the von Karman ogive, with the length and diameter being specified. However, the maximum diameter location is moved to the midpoint. Certain assumptions are made about the slenderness of the body and sufficiently low supersonic Mach numbers in generating Lighthill's body equation and are covered in Reference [2].
If I want to know the azimuth (initial heading) to another point on a sphere I use the formula $$\tag{1} \tan(\theta) = \frac{\sin(\Delta\lambda)\cos(\varphi_2)}{\cos(\varphi_1)\sin(\varphi_2)-\sin(\varphi_1)\cos(\varphi_2)\cos(\Delta\lambda)} $$ where $\theta$ is the initial bearing to point 2, $\varphi_1$ and $\varphi_2$ are the latitudes of point one and two respectively and $\Delta\lambda$ is the difference in longitudes of the two points. To get the azimuth I then use the $atan2$-function and insert $$\tag{2} \sin(\Delta\lambda)\cos(\varphi_2) $$ for $X$ and $$\tag{3} \cos(\varphi_1)\sin(\varphi_2)-\sin(\varphi_1)\cos(\varphi_2)\cos(\Delta\lambda) $$ for $Y$. I know that $\tan(\alpha) = \frac{X}{Y}$ is used to calculate the heading $\alpha$ of a vector. However, I can't figure out why equation 2 is my $X$ and equation 3 my $Y$. Where do these formulae come from? Let $p$, $p'$ be two unit vectors, directed from the Earth center to points $P$ and $P'$ on the sphere, and $n$ the analogous unit vector for the North Pole. To compute the heading from $P$ to $P'$ you must set up a coordinate system in a plane perpendicular to $p$, with the $y$-axis pointing towards the North Pole. This can be simply done by constructing two unit vectors $x$ and $y$ as follows: $$ y={p\times n\over|p\times n|}\times p,\quad x=y\times p. $$ A vector $t$ in the same plane pointing towards $P'$ is: $$ t=(p\times p')\times p $$ and its coordinates are then its projections along vectors $x$ and $y$: $$ t_x=t\cdot x,\quad t_y=t\cdot y. $$ If you now express the coordinates of $p$, $p'$ as a function of their latitude and longitude (and of course $n=(0,0,1)$), you can find explicit expressions for $t_x$ and $t_y$ and should recover your formulas. EDIT. If $\phi$, $\phi'$ are the latitudes, and $\lambda$, $\lambda'$ the longitudes of points $P$ and $P'$, we have: $$ p=(\cos\phi\cos\lambda,\cos\phi\sin\lambda,\sin\phi),\quad p'=(\cos\phi'\cos\lambda',\cos\phi'\sin\lambda',\sin\phi'),\quad n=(0,0,1). $$ It follows that: $$ p\times n=(\cos\phi\sin\lambda,-\cos\phi\cos\lambda,0), \quad |p\times n|=\cos\phi, $$ whence $$ y=(-\cos\lambda\sin\phi,-\sin\lambda\sin\phi,\cos\phi), \quad x=(-\sin\lambda,\cos\lambda,0). $$ Moreover: $$ t=(p\times p')\times p=-p(p'\cdot p)+p'=\\ \big( \sin ^2\phi \cos \phi' \cos \lambda' +\cos^2\phi\cos \phi' \sin \lambda \sin (\lambda-\lambda') -\sin\phi\cos \phi \sin \phi' \cos \lambda,\\ \sin ^2\phi\cos \phi' \sin \lambda' -\cos^2 \phi\cos \phi' \cos \lambda \sin (\lambda-\lambda') -\sin\phi \cos \phi \sin \phi' \sin \lambda,\\ \cos \phi (\cos \phi \sin \phi'-\sin \phi \cos \phi' \cos (\lambda-\lambda'))\big), $$ and finally: $$ t_x=-\cos\phi'\sin(\lambda-\lambda'), \quad t_y=-\cos\phi'\sin\phi\cos(\lambda-\lambda')+\cos\phi\sin\phi'. $$
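A short Python sketch (function names are my own) that evaluates both the closed-form bearing and the vector construction from the answer, so one can check that they agree, up to the usual convention of measuring azimuth clockwise from north:

import numpy as np

def bearing_formula(phi1, lam1, phi2, lam2):
    # equations (2) and (3) fed into atan2; result in degrees clockwise from north
    dlam = lam2 - lam1
    x = np.sin(dlam) * np.cos(phi2)
    y = np.cos(phi1) * np.sin(phi2) - np.sin(phi1) * np.cos(phi2) * np.cos(dlam)
    return np.degrees(np.arctan2(x, y)) % 360

def bearing_vectors(phi1, lam1, phi2, lam2):
    p  = np.array([np.cos(phi1)*np.cos(lam1), np.cos(phi1)*np.sin(lam1), np.sin(phi1)])
    pp = np.array([np.cos(phi2)*np.cos(lam2), np.cos(phi2)*np.sin(lam2), np.sin(phi2)])
    n  = np.array([0.0, 0.0, 1.0])
    pn = np.cross(p, n)
    y_ax = np.cross(pn / np.linalg.norm(pn), p)   # local north direction
    x_ax = np.cross(y_ax, p)                      # local east direction
    t = np.cross(np.cross(p, pp), p)              # tangent at P pointing toward P'
    return np.degrees(np.arctan2(t @ x_ax, t @ y_ax)) % 360

phi1, lam1, phi2, lam2 = np.radians([48.0, 11.0, 52.0, 13.0])
print(bearing_formula(phi1, lam1, phi2, lam2), bearing_vectors(phi1, lam1, phi2, lam2))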
Sorry this answer got too long. I have categorized it into three points. (1) I think the reason Kohmoto stresses the importance of the Brillouin zone being a torus $BZ = T^2$, is because he wants to say that BZ is compact and has no boundary. This is important because of the subtlety that makes everything work. The Hall conductance is given by $\sigma_{xy} = -\frac{e^2}h C_1$ (eq. 4.9), where the first Chern number is (eq. 4.8) $C_1 = \frac i{2\pi}\int_{BZ} F = \frac i{2\pi}\int_{BZ} dA$. However, naively using Stokes' theorem $\int_M dA = \int_{\partial M} A$, where $\partial M$ is the boundary of $M$, and the fact that the torus has no boundary ($\partial T^2 = \emptyset$) would seem to imply that $\int_{\partial BZ} A = 0$ and thus $\sigma_{xy}=0$. There is however an important subtlety here: our use of Stokes' theorem is only correct if $A$ can be constructed globally on all of $BZ$, and this cannot be done in general. One has to split the $BZ$ torus into smaller patches and construct $A$ locally on each patch, which now do have boundaries (see figure 1). The mismatch between the values of the $A$'s on the boundaries of the patches will make $\sigma_{xy}$ non-zero (see eq. 3.13). In terms of de Rham cohomology one can say that $F$ belongs to a non-trivial second cohomology class of the torus, or in other words the equation $F = dA$ is only true locally, not globally. And that's why our use of Stokes' theorem was wrong. In this case, you can actually replace the torus with a sphere with no problem (why that is requires some arguments from algebraic topology, but I will shortly give a more physical picture of this). In higher dimensions and in other types of topological insulators there can be a difference between taking $BZ$ to be a torus or a sphere. The difference is that with the sphere you only get what people call strong topological insulators, while with $BZ=T^2$ you also get the so-called weak topological insulators. The difference is that the weak topological insulators correspond to stacks of lower-dimensional systems, and these exist only if there is translational symmetry; in other words, they are NOT robust against impurities and disorder. People therefore usually pretend $BZ$ is a sphere, since the strong topological insulators are the most interesting anyway. For example, the table for the K-theoretic classification of topological insulators people usually show (see table I here) corresponds to using the sphere instead of the torus; otherwise the table would be full of less interesting states. Let me briefly give you some physical intuition about what $C_1$ measures by making an analogy to electromagnetism. In a less differential geometric notation, one can write (eq. 3.9) $C_1 = \frac i{2\pi}\oint_M \mathbf B\cdot d\mathbf S$, where $\mathbf B = \nabla_k\times \mathbf A$ can be thought of as a magnetic field in k-space. This is nothing but a magnetic version of the Gauss law and it measures the total magnetic flux through the closed surface $M$. In other words, it measures the total magnetic charge enclosed by the surface $M$ (see also here). Take $M=S^2$, the sphere. If $C_1 = n$ is non-zero, that means that there are magnetic monopoles inside the sphere with total charge $n$. In conventional electromagnetism $C_1$ is always zero, since we assume there are no magnetic monopoles! This is the content of the Gauss law for magnetism, which in differential form is $\nabla\cdot\mathbf B = 0$.
The analogous equation for our k-space "magnetic field" would be $\nabla\cdot\mathbf B = \rho_m$, where $\rho_m$ is the magnetic charge density (see here). If $M=BZ=T^2$ the intuition is the same: $C_1$ is the total magnetic charge inside the torus. Another way to say the above is that the equation $\mathbf B = \nabla\times\mathbf A$, as we always use and love, is only correct globally if there are no magnetic monopoles around!

(2) Now let me address the next point, about the Gauss-Bonnet theorem. Actually the Gauss-Bonnet theorem does not play any role here; it is just an analogy. For a two-dimensional manifold $M$ with no boundary, the theorem says that $\int_M K\, dA = 2\pi (2-2g)$. Here $K$ is the Gauss curvature and $g$ is the genus. For example, for the torus, $g=1$ and the integral is zero, as you also mention. This is not the same as $C_1$, however. The Gauss-Bonnet theorem is about the topology of the manifold (for example the $BZ$ torus), but $\sigma_{xy}$ is related to the topology of the fiber bundle over the torus, not the torus itself. Or in other words, to how the Bloch wavefunctions behave globally. What plays a role for us is Chern-Weil theory, which is in a sense a generalization of the Gauss-Bonnet theorem. The magnetic field $\mathbf B$, or equivalently the field strength $F$, is geometrically the curvature of a so-called $U(1)$ bundle over $BZ$. Chern-Weil theory says that the integral over the curvature, $C_1 = \frac i{2\pi}\int_{BZ} F$, is a topological invariant of the $U(1)$ bundle. This is analogous to Gauss-Bonnet, which says that the integral over the curvature is a topological invariant of the manifold. Thus this connection is mainly an analogy people use to give a little intuition about $C_1$, since it is easier to visualize the curvature $K$ than the more abstract curvature $F$.

(3) The comment of Xiao-Gang Wen is correct, and explaining it requires going into certain deep issues about what topological order is, what a topological insulator is, and what the relation between them is. The distinction between these two notions is very important, and there is a lot of misuse of terminology in the literature where they are mixed together. The short answer is that both notions are related to topology, but topological order is a much deeper and richer class of states of matter, and topology (and quantum entanglement) plays a much bigger role there compared to topological insulators. In other words, topological order is topological in a very strong sense, while a topological insulator is topological in a very weak sense. If you are very interested, I can post another answer with more details on the comment of Xiao-Gang Wen, since this one is already too big.
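To make the abstract discussion concrete, here is a minimal numerical sketch of how $C_1$ is computed in practice, using the gauge-invariant lattice (Fukui-Hatsugai-Suzuki) method on a discretized $BZ = T^2$; the two-band Qi-Wu-Zhang model is an assumed toy example, not something from the text above. Because each plaquette's Berry flux is extracted from a closed loop of overlaps, no global choice of $A$ is ever needed, which is exactly the subtlety discussed above (signs depend on orientation conventions):

```python
import numpy as np

def h(kx, ky, m):
    # Qi-Wu-Zhang two-band Hamiltonian d(k).sigma (an assumed toy model)
    dx, dy, dz = np.sin(kx), np.sin(ky), m + np.cos(kx) + np.cos(ky)
    return np.array([[dz, dx - 1j * dy], [dx + 1j * dy, -dz]])

def lower_band(kx, ky, m):
    # np.linalg.eigh sorts eigenvalues ascending, so column 0 is the lower band
    _, v = np.linalg.eigh(h(kx, ky, m))
    return v[:, 0]

def chern_number(m, N=60):
    ks = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
    u = np.array([[lower_band(kx, ky, m) for ky in ks] for kx in ks])
    C = 0.0
    for i in range(N):
        for j in range(N):
            i2, j2 = (i + 1) % N, (j + 1) % N   # periodic: the BZ really is a torus
            # Product of link overlaps around one plaquette; its phase is the
            # Berry flux through the plaquette, independent of any gauge choice.
            loop = (np.vdot(u[i, j], u[i2, j]) * np.vdot(u[i2, j], u[i2, j2])
                    * np.vdot(u[i2, j2], u[i, j2]) * np.vdot(u[i, j2], u[i, j]))
            C += np.angle(loop)
    return C / (2.0 * np.pi)

# Expect C = +/-1 for 0 < |m| < 2 and C = 0 for |m| > 2 (up to sign conventions)
print(round(chern_number(1.0)), round(chern_number(3.0)))
```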
Production of Σ(1385)± and Ξ(1530)0 in proton–proton collisions at √s = 7 TeV (Springer, 2015-01-10)
The production of the strange and double-strange baryon resonances (Σ(1385)±, Ξ(1530)0) has been measured at mid-rapidity (|y| < 0.5) in proton–proton collisions at √s = 7 TeV with the ALICE detector at the LHC. Transverse ...

Inclusive photon production at forward rapidities in proton-proton collisions at $\sqrt{s}$ = 0.9, 2.76 and 7 TeV (Springer Berlin Heidelberg, 2015-04-09)
The multiplicity and pseudorapidity distributions of inclusive photons have been measured at forward rapidities ($2.3 < \eta < 3.9$) in proton-proton collisions at three center-of-mass energies, $\sqrt{s}=0.9$, 2.76 and 7 ...

Charged jet cross sections and properties in proton-proton collisions at $\sqrt{s}=7$ TeV (American Physical Society, 2015-06)
The differential charged jet cross sections, jet fragmentation distributions, and jet shapes are measured in minimum bias proton-proton collisions at centre-of-mass energy $\sqrt{s}=7$ TeV using the ALICE detector at the ...

K*(892)$^0$ and $\Phi$(1020) production in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV (American Physical Society, 2015-02)
The yields of the K*(892)$^0$ and $\Phi$(1020) resonances are measured in Pb-Pb collisions at $\sqrt{s_{NN}}$ = 2.76 TeV through their hadronic decays using the ALICE detector. The measurements are performed in multiple ...

Production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV (Elsevier, 2015-01)
We report on the production of inclusive $\Upsilon$(1S) and $\Upsilon$(2S) in p-Pb collisions at $\sqrt{s_{\rm NN}}=5.02$ TeV at the LHC. The measurement is performed with the ALICE detector at backward ($-4.46< y_{{\rm ...

Elliptic flow of identified hadrons in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Springer, 2015-06-29)
The elliptic flow coefficient ($v_{2}$) of identified particles in Pb-Pb collisions at $\sqrt{s_\mathrm{{NN}}} = 2.76$ TeV was measured with the ALICE detector at the LHC. The results were obtained with the Scalar Product ...

Measurement of electrons from semileptonic heavy-flavor hadron decays in pp collisions at $\sqrt{s}$ = 2.76 TeV (American Physical Society, 2015-01-07)
The $p_{\rm T}$-differential production cross section of electrons from semileptonic decays of heavy-flavor hadrons has been measured at midrapidity in proton-proton collisions at $\sqrt{s} = 2.76$ TeV in the transverse momentum range ...

Multiplicity dependence of jet-like two-particle correlations in p-Pb collisions at $\sqrt{s_{NN}}$ = 5.02 TeV (Elsevier, 2015-02-04)
Two-particle angular correlations between unidentified charged trigger and associated particles are measured by the ALICE detector in p–Pb collisions at a nucleon–nucleon centre-of-mass energy of 5.02 TeV. The transverse-momentum ...
In the context of string theory and world sheets, the Dirichlet boundary conditions can be written as: $$\frac{\partial X^\mu(\tau,\sigma_1)}{\partial \tau}=0$$ where $\sigma_1$ is the value of the parameter $\sigma$ at the end of the string. This, however, seems to imply that $$\delta X^\mu(\tau,\sigma_1)=0$$ but I cannot see why, so please can you explain? Here are my thoughts (which are wrong, since I get the wrong outcome): It is my assumption that $\delta X^\mu \equiv dX^\mu$ in this context, although I could be wrong. This therefore means that: $$\delta X^\mu(\tau,\sigma_1) =\frac{\partial X^\mu(\tau,\sigma_1)}{\partial \tau}d\tau + \frac{\partial X^\mu(\tau,\sigma_1)}{\partial \sigma}d\sigma$$ Substituting in my first equation we get: $$\delta X^\mu (\tau,\sigma_1)=\frac{\partial X^\mu(\tau,\sigma_1)}{\partial \sigma}d\sigma$$ which is generally not equal to $0$. Thus my first equation does not necessarily imply my second, as it should.

References: A First Course in String Theory by Barton Zwiebach (2nd ed.), p. 114; http://www.damtp.cam.ac.uk/user/tong/string/three.pdf
In this section, and the next, we look at various numerical characteristics of random variables. These give us a way of classifying and comparing random variables.

Expected Value of Discrete Random Variables

We begin with the formal definition.

Definition \(\PageIndex{1}\)

If \(X\) is a discrete random variable with possible values \(x_1, x_2, \ldots, x_i, \ldots\), and probability mass function \(p(x)\), then the mean (or expected value) of \(X\) is denoted \(E[X]\) and given by $$E[X] = \sum_i x_i\cdot p(x_i).\label{expvalue}$$ The expected value of \(X\) may also be denoted as \(\mu_X\) or simply \(\mu\) if the context is clear.

The expected value of a random variable has many interpretations. First, looking at the formula in Definition 3.6.1 for computing expected value (Equation \ref{expvalue}), note that it is essentially a weighted average. Specifically, for a discrete random variable, the expected value is computed by "weighting", or multiplying, each value of the random variable, \(x_i\), by the probability that the random variable takes that value, \(p(x_i)\), and then summing over all possible values. This interpretation of the expected value as a weighted average explains why it is also referred to as the mean of the random variable.

The expected value of a random variable is also interpreted as the long-run average value of the random variable, i.e., a measure of center. Finally, the expected value of a random variable has a graphical interpretation: it gives the center of mass of the probability mass function, which the following example demonstrates.

Example \(\PageIndex{1}\)

Consider again the context of Example 1.1.1, where we recorded the sequence of heads and tails in two tosses of a fair coin. In Example 3.1.1 we defined the discrete random variable \(X\) to denote the number of heads obtained. In Example 3.2.2 we found the pmf of \(X\). We now apply Equation \ref{expvalue} from Definition 3.6.1 and compute the expected value of \(X\): $$E[X] = 0\cdot p(0) + 1\cdot p(1) + 2\cdot p(2) = 0\cdot(0.25) + 1\cdot(0.5) + 2\cdot(0.25) = 0.5 + 0.5 = 1.\notag$$ Thus, we expect that the number of heads obtained in two tosses of a fair coin will be 1 in the long run, or on average. Figure 1 demonstrates the graphical representation of the expected value as the center of mass of the probability mass function.

Figure 1: Histogram of \(X\): the red arrow represents the center of mass, or the expected value of \(X\).

Example \(\PageIndex{2}\)

Suppose we toss a fair coin three times and define the random variable \(X\) to be our winnings on a single play of a game where we win $\(x\) if the first heads is on the \(x^{th}\) toss, for \(x=1,2,3\), and we lose $1 if we get no heads in all three tosses. Then \(X\) is a discrete random variable, with possible values \(x=-1,1,2,3\), and pmf given by the following table:

\(x\): \(-1\) \(1\) \(2\) \(3\)
\(p(x) = P(X=x)\): \(\frac{1}{8}\) \(\frac{1}{2}\) \(\frac{1}{4}\) \(\frac{1}{8}\)

Applying Definition 3.6.1, the expected winnings are $$E[X] = (-1)\cdot\tfrac{1}{8} + 1\cdot\tfrac{1}{2} + 2\cdot\tfrac{1}{4} + 3\cdot\tfrac{1}{8} = \tfrac{10}{8} = 1.25,\notag$$ so on average we win $1.25 per play.

For many of the common probability distributions, the expected value is given by a parameter of the distribution. For example, if discrete random variable \(X\) has a Poisson distribution with parameter \(\lambda\), then \(E[X] = \lambda\). This can be derived directly from Definition 3.6.1, but we will derive it another way in Section 3.8 below.

Expected Value of Functions of Random Variables

In many applications, we may not be interested in the value of a random variable itself, but rather in a function applied to the random variable or a collection of random variables.
For example, we may be interested in the value of \(X^2\). The following theorems, which we state without proof, demonstrate how to calculate the expected value of functions of random variables.

Theorem \(\PageIndex{1}\)

Let \(X\) be a random variable and let \(g\) be a real-valued function. Define the random variable \(Y = g(X)\). If \(X\) is a discrete random variable with possible values \(x_1, x_2, \ldots, x_i, \ldots\), and frequency function \(p(x)\), then the expected value of \(Y\) is given by $$E[Y] = \sum_i g(x_i)\cdot p(x_i).\notag$$

To put it simply, Theorem 3.6.1 states that to find the expected value of a function of a random variable, just apply the function to the possible values of the random variable in the definition of expected value. Before stating an important special case of Theorem 3.6.1, a word of caution regarding order of operations. Note that, in general, $$E[g(X)] \neq g\left(E[X]\right)\text{!}\notag$$ However, as the next theorem states, there are exceptions.

Special Case of Theorem 3.6.1

Let \(X\) be a random variable. If \(g\) is a linear function, i.e., \(g(x) = ax + b\), then $$E[g(X)] = E[aX + b] = aE[X] + b.\notag$$ The above special case is referred to as the linearity of expected value.

Linearity of Expectation

Suppose \(X_1, \ldots, X_n\) are jointly distributed random variables, and let \(Y = g(X_1, \ldots, X_n)\). If \(X_1, \ldots, X_n\) are discrete random variables with joint frequency function \(p(x_1, \ldots, x_n)\), then the expected value of \(Y\) is given by $$E[Y] = \sum_{x_1, \ldots, x_n} g(x_1, \ldots, x_n)\cdot p(x_1, \ldots, x_n),\notag$$ where the sum is over all possible combinations of values for the random variables \(X_1, \ldots, X_n\).

Theorem 3.7.2 allows us to extend the linearity property of expected value to linear combinations of jointly distributed random variables.

Extension of Special Case of Theorem 3.6.1: Let \(X_1, \ldots, X_n\) be jointly distributed random variables, and let \(a_1, \ldots, a_n, b\) be constants. Then the following holds: $$E[a_1X_1 + \cdots + a_nX_n + b] = a_1E[X_1] + \cdots + a_nE[X_n] + b.\notag$$

As a corollary to Theorem 3.7.2, we obtain an easy way of finding the expected value of products of functions of independent random variables.

Corollary 3.7.1

If \(X\) and \(Y\) are independent random variables, then $$E[g(X)\cdot h(Y)] = E[g(X)] \cdot E[h(Y)].\notag$$ Corollary 3.7.1 implies that, for independent random variables, \(E[XY] = E[X]E[Y]\).
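These definitions translate directly into a few lines of code. A small sketch (the pmf is the one from Example 2 above; the helper name is my own) computing \(E[X]\), \(E[g(X)]\), and checking linearity:

```python
from fractions import Fraction as F

# pmf of Example 2: winnings X with P(X = x) as given in the table above
pmf = {-1: F(1, 8), 1: F(1, 2), 2: F(1, 4), 3: F(1, 8)}

def expected_value(pmf, g=lambda x: x):
    """E[g(X)] = sum_i g(x_i) * p(x_i), per Definition 3.6.1 / Theorem 3.6.1."""
    return sum(g(x) * p for x, p in pmf.items())

EX  = expected_value(pmf)                      # E[X] = 10/8 = 1.25
EX2 = expected_value(pmf, lambda x: x**2)      # E[X^2] = 11/4; note E[X^2] != (E[X])^2
lin = expected_value(pmf, lambda x: 3*x + 2)   # equals 3*E[X] + 2 by linearity
assert lin == 3 * EX + 2
print(EX, EX2, lin)
```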
Kaon femtoscopy in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV (Elsevier, 2017-12-21)
We present the results of three-dimensional femtoscopic analyses for charged and neutral kaons recorded by ALICE in Pb-Pb collisions at $\sqrt{s_{\rm{NN}}}$ = 2.76 TeV. Femtoscopy is used to measure the space-time ...

J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}}$ = 5.02 TeV (American Physical Society, 2017-12-15)
We report a precise measurement of the J/$\psi$ elliptic flow in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV with the ALICE detector at the LHC. The J/$\psi$ mesons are reconstructed at mid-rapidity ($|y| < 0.9$) ...

Highlights of experimental results from ALICE (Elsevier, 2017-11)
Highlights of recent results from the ALICE collaboration are presented. The collision systems investigated are Pb–Pb, p–Pb, and pp, and results from studies of bulk particle production, azimuthal correlations, open and ...

Event activity-dependence of jet production in p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV measured with semi-inclusive hadron+jet correlations by ALICE (Elsevier, 2017-11)
We report the measurement of the semi-inclusive distribution of charged-particle jets recoiling from a high transverse momentum ($p_{\rm T}$) hadron trigger, for p-Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV, in p-Pb events ...

System-size dependence of the charged-particle pseudorapidity density at $\sqrt{s_{NN}}$ = 5.02 TeV with ALICE (Elsevier, 2017-11)
We present the charged-particle pseudorapidity density in pp, p–Pb, and Pb–Pb collisions at $\sqrt{s_{\rm NN}} = 5.02$ TeV over a broad pseudorapidity range. The distributions are determined using the same experimental apparatus and ...

Photoproduction of heavy vector mesons in ultra-peripheral Pb–Pb collisions (Elsevier, 2017-11)
Ultra-peripheral Pb-Pb collisions, in which the two nuclei pass close to each other, but at an impact parameter greater than the sum of their radii, provide information about the initial state of nuclei. In particular, ...

Measurement of $J/\psi$ production as a function of event multiplicity in pp collisions at $\sqrt{s} = 13\,\mathrm{TeV}$ with ALICE (Elsevier, 2017-11)
The availability at the LHC of the largest collision energy in pp collisions allows a significant advance in the measurement of $J/\psi$ production as a function of event multiplicity. The interesting relative increase ...

Multiplicity dependence of jet-like two-particle correlations in pp collisions at $\sqrt{s}$ = 7 and 13 TeV with ALICE (Elsevier, 2017-11)
Two-particle correlations in relative azimuthal angle ($\Delta\varphi$) and pseudorapidity ($\Delta\eta$) have been used to study heavy-ion collision dynamics, including medium-induced jet modification. Further investigations also showed the ...

Electroweak boson production in p–Pb and Pb–Pb collisions at $\sqrt{s_\mathrm{NN}}=5.02$ TeV with ALICE (Elsevier, 2017-11)
W and Z bosons are massive weakly-interacting particles, insensitive to the strong interaction. They therefore provide a medium-blind probe of the initial state of heavy-ion collisions. The final results for the W and ...

Investigating the Role of Coherence Effects on Jet Quenching in Pb-Pb Collisions at $\sqrt{s_{NN}} = 2.76$ TeV using Jet Substructure (Elsevier, 2017-11)
We report measurements of two jet shapes, the ratio of 2-subjettiness to 1-subjettiness ($\it{\tau_{2}}/\it{\tau_{1}}$) and the opening angle between the two axes of the 2-subjettiness jet shape, which is obtained by ...
People with an interest in date coincidences are probably already getting themselves slightly over-excited about the fact that this month will include what can only be described as Ultimate π Day. That is, on 14th March 2015, written under certain circumstances by some people as 3/14/15, we'll be celebrating the closest that the date can conceivably get to the exact value of π (in that format). Of course, sensible people would take this as an excuse to have a party, so here are my top $\tau$ recommendations for having a π party on π day.

1. Have a π (pie) bake-off
Baking pies to honour π day is a long-standing tradition, and surely this year you can pull out all the stops. Pies with digits of π on, pies in the shape of π, three whole pies and a little pie – the possibilities are as endless as the string of digits after the decimal point in π.

2. Decorate your house/office/classroom with π
Of course, no party is complete without decorations – cut out the symbol π and hang it around the room, or find some spherical balloons and write the formulae for the surface area and volume of a sphere on them with a Sharpie. Or, if you're feeling productive, why not print out Think Maths's Mile of Pi (or, you know, the first few metres of it) and stick it up around the walls?

3. Decorate yourself with π
There are plenty of π-related t-shirts out there, and the impending date-coincidence-pocalypse has caused an explosion in the availability of π t-shirt designs. Aside from the official Pi Day website, there's a special page on Spreadshirt, one on Zazzle, and even one on Etsy, which includes lovely badges, wall hangings and jewellery adorned with the magic number. If you're feeling really serious about your love for π, why not get a tattoo? (Or a temporary one, for you and all your friends.)

4. Watch some YouTube videos about π
Good old Numberphile has plenty of videos about π, all collected onto one handy page, while Vi Hart has a playlist of all her pro- and anti-π videos.

5. Pun relentlessly about π
Get some friends ROUND, sit in a CIRCLE, and CONSTANTly make puns about π. It might seem IRRATIONAL, but I assure you it's completely NORMAL. (Proof pending)

6. Have a π-related movie night
As well as the obvious, Darren Aronofsky's surrealist psychological thriller π, you could watch other mathematical movies, or even just settle for things with the word π in their name, like Life of π, Sπderman, or πrates of the Caribbean. Or anything with Brad πtt.

6.283… Post loads of cool content on The Aperiodical
This only counts as 0.283… of a suggestion, since it's more something we're doing than you – but we here at The Aperiodical will be posting a selection of π-related pieces, including articles, videos and interactive posts, from our own team and a selection of special guests. They'll be going online in the lead-up to π day itself, and on the day we'll post a round-up so you can find them all in one handy place.

So there you have my top $\tau$ suggestions for what to do on π day. Hope you have a ball (with volume $\frac{4}{3}\pi r^3$)!
When adding two points and calculating the lambda, is $x_3 \equiv \lambda^2 - 2x_1 \pmod p$ read as $x_3 \equiv (\lambda^2 - 2x_1) \pmod p$, or is the mod applied only to the $2x_1$?

In general, the equation $$A \equiv B \pmod n$$ means that there exists some $k$ such that $$A = B + kn.$$ The $\cdots \pmod n$ part applies to the entire equation, not to one side or the other. It does not necessarily imply that $$A = B \bmod n,$$ where $B \bmod n$ is an expression in its own right, usually meaning the smallest nonnegative remainder that can be obtained from dividing $B$ by $n$; conversely, though, $A = B \bmod n$ does imply that $A \equiv B \pmod n$.

Every elliptic curve can be represented in the form $y^2+a_1xy+a_3y=x^3+a_2x^2+a_4x+a_6$. Let $P$ and $Q$ be two points on this curve. In this case, $$x_{P+Q}\equiv\lambda^2+a_1\lambda-a_2-x_P-x_Q\pmod p.$$ Your question relates to a curve of the form $y^2=x^3+Ax+B$ in the case $P=Q$. So we have: $$x_{2P}\equiv\lambda^2-2x_P\pmod p.$$
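As a sketch of how this is implemented, with the reduction applied to the whole expression at each step, here is point doubling on a small toy curve (the curve, point, and function name are assumptions for illustration; Python 3.8+ for the modular inverse via pow):

```python
def ec_double(P, A, p):
    """Double a point on y^2 = x^3 + A*x + B over F_p (affine coordinates)."""
    x1, y1 = P
    lam = (3 * x1 * x1 + A) * pow(2 * y1, -1, p) % p  # slope of the tangent at P
    x3 = (lam * lam - 2 * x1) % p   # mod p applies to the whole of lambda^2 - 2*x1
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Toy example: y^2 = x^3 + 2x + 2 over F_17 with P = (5, 1); then 2P = (6, 3).
print(ec_double((5, 1), 2, 17))
```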
It turns out that storage of primes can be compressed by an arbitrary amount, though the storage needed grows at least exponentially in the compression factor.

Note: this has been corrected and made more precise.

Let $p_n$ be the $n$-th prime (with $p_1 = 2$), and let $P_n$ be the product of the first $n$ primes, so that $P_1 = 2$, $P_2 = 6$, $P_3 = 30$, $P_4 = 210$. If we do a sieve of Eratosthenes, sieving out multiples of the first $n$ primes, all that are left are the first $n$ primes and the numbers relatively prime to $P_n$. For example, for $n=2$, the numbers left are $2, 3$, and the forms $6m+1$ and $6m+5$. For $n=3$, the numbers left are $2, 3, 5$ and the forms $30m+1, 7, 11, 13, 17, 19, 23,$ and $29$. For general $n$, the numbers remaining are of the form $P_n m+q_i$, where the $q_i$ are the numbers from $1$ to $P_n-1$ relatively prime to $P_n$.

To illustrate, I will use the case $n=3$. Each block of 30 numbers in a range $30m+1$ to $30m+29$ can have at most $8$ primes, as shown above. Therefore, only one bit is needed for each of these $8$ possibilities, to indicate whether or not that value is actually prime. Therefore, storage of primes can be compressed by a factor of $\frac{8}{30} \approx 0.267$.

Here is what happens for general $n$. The $P_n$ values in the range from $mP_n$ to $(m+1)P_n-1$ are compressed to $\phi(P_n)$ bits, where $\phi(m)$ is Euler's phi function (though he lets me use it) that counts the number of integers from $1$ to $m$ relatively prime to $m$.

Since $\phi(P_n) = \prod_{i=1}^n (p_i-1)$, the compression factor is $\dfrac{\phi(P_n)}{P_n}=\prod_{i=1}^n \left(1-\dfrac1{p_i}\right)$. Since this goes to zero (because $\sum_{i=1}^n \dfrac1{p_i}\sim \ln \ln n$; this is Mertens' theorem), the amount of compression can be arbitrarily large, though the block length $P_n$ grows very rapidly with $n$.

To see this, note that by the prime number theorem $\ln P_n \sim p_n \sim n \ln n$, and $\dfrac{\phi(P_n)}{P_n}=\prod_{i=1}^n \left(1-\dfrac1{p_i}\right)\approx e^{-\ln \ln n}= \dfrac1{\ln n}$. Therefore, each block of $P_n$ values can be represented by about $\dfrac{P_n}{\ln n}$ bits.

Therefore, using a sieve with the first $n$ primes and the numbers relatively prime to $P_n$ compresses the primes by about a factor of $\ln n$. Therefore the primes can be compressed by an arbitrary amount, though the amount of storage needed grows at least exponentially in $n$.

A number of years ago, I used this idea with $n=4$, so I got a compression of $\frac{1\cdot 2\cdot 4\cdot 6}{2\cdot 3\cdot 5\cdot 7}=\frac{8}{35}$.
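The $n=3$ wheel above is easy to put into code. Here is a minimal sketch (helper names and the trial-division primality test are my own, for illustration): each block of 30 integers packs into one byte, one bit per residue coprime to 30, giving the $8/30$ factor derived above. The primes $2, 3, 5$ themselves are not representable in this scheme and must be stored separately.

```python
RESIDUES = [1, 7, 11, 13, 17, 19, 23, 29]  # the 8 residues coprime to 30 = 2*3*5

def is_prime(n):
    """Plain trial division; enough for a small illustration."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def pack_block(m):
    """One byte encoding primality of the 8 candidates 30m + q in [30m, 30m + 29]."""
    byte = 0
    for bit, q in enumerate(RESIDUES):
        if is_prime(30 * m + q):
            byte |= 1 << bit
    return byte

# 8 bits per 30 integers, i.e. ~0.267 bits per integer: the 8/30 compression factor
table = bytes(pack_block(m) for m in range(100))

def is_prime_from_table(n):
    """Look up primality of n (must be coprime to 30) from the packed table."""
    m, q = divmod(n, 30)
    return q in RESIDUES and bool(table[m] >> RESIDUES.index(q) & 1)

assert is_prime_from_table(97) and not is_prime_from_table(49)
```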
We know that $d = \gcd(a, b)$ can be written as $sa + tb$, where $s, t \in \mathbb{Z}$. Apparently, $d$ is the smallest positive number that can be written in this form. Why is this so?

Here is a conceptual way to prove Bezout's gcd identity. The set $\rm\,S\,$ of all integers of the form $\rm\,a\,x + b\,y,\ x,y\in \mathbb Z,\,$ is closed under subtraction: $\ ax+by-(a\bar x+b\bar y)\, =\, a(x\!-\!\bar x)+b(y\!-\!\bar y).\ $ By the Lemma below, every positive $\rm\,n\in S\,$ is divisible by $\rm\,d = $ least positive $\rm\in S.\,$ Therefore $\rm\,a,b\in S\,$ $\Rightarrow$ $\rm\,d\mid a,b,\,$ i.e. $\rm\,d\,$ is a common divisor of $\rm\,a,b,\,$ necessarily the greatest common divisor, by $\rm\ c\mid a,b\,$ $\Rightarrow$ $\rm\,c\mid d = a\,x_1\!+\! b\,y_1\Rightarrow$ $\rm\,c\le d.$

Lemma $\ \ $ Let $\,\rm S\ne\emptyset \,$ be a set of positive integers closed under positive subtraction, i.e. for all $\rm\,n,m\in S,\,$ $\rm\ n > m\ \Rightarrow\ n-m\, \in\, S.\,$ Then every element of $\rm\,S\,$ is a multiple of the least element $\rm\:\ell = \min\, S.$

Proof ${\bf\ 1}\,\ $ If not, there is a least nonmultiple $\rm\,n\in S,\,$ contra $\rm\,n-\ell \in S\,$ is a smaller nonmultiple of $\rm\,\ell.$

Proof ${\bf\ 2}\,\ $ $\rm\,S\,$ closed under subtraction $\rm\,\Rightarrow\,S\,$ closed under remainder (mod), when it is $\ne 0,$ since mod may be computed by repeated subtraction, i.e. $\rm\, a\ mod\ b\, =\, a - k b\, =\, a-b-b-\cdots -b.\,$ Thus $\rm\,n\in S\,$ $\Rightarrow$ $\rm\, (n\ mod\ \ell) = 0,\,$ else it is $\rm\,\in S\,$ and smaller than $\rm\,\ell,\,$ contra minimality of $\rm\,\ell.$

Remark $\ $ In a nutshell, two applications of induction yield the following inferences $\ \ \rm\begin{eqnarray} S\ closed\ under\ {\bf subtraction} &\:\Rightarrow\:&\rm S\ closed\ under\ {\bf mod} = remainder = repeated\ subtraction \\ &\:\Rightarrow\:&\rm S\ closed\ under\ {\bf gcd} = repeated\ mod\ (Euclid's\ algorithm) \end{eqnarray}$

Interpreted constructively, this yields the extended Euclidean algorithm for the gcd. Namely, starting from the two elements of $\rm\,S\,$ that we know: $\rm\ a \,=\, 1\cdot a + 0\cdot b,\ \ b \,=\, 0\cdot a + 1\cdot b,\ $ we search for the least element of $\rm\,S\,$ by repeatedly subtracting elements of $\rm\,S\,$ to produce smaller elements of $\rm\,S\,$ (while keeping track of each element's linear representation in terms of $\rm\,a\,$ and $\rm\,b$). This is essentially the subtractive form of the Euclidean algorithm (vs. the mod/remainder form).

I think I've figured it out. Suppose $\gcd(a, b) = m$ where $m \geq d$. By Bezout's identity $m$ can be written as $va + ub$. Clearly $m \mid d$ (since $m$ divides both $a$ and $b$, it divides $d = sa + tb$), implying $m \leq d$ since $m, d \geq 0$. Thus $m = d$.
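Read constructively as in the Remark, the argument becomes the extended Euclidean algorithm. A minimal sketch in the mod/remainder form (variable names are my own; assumes $a, b > 0$ for simplicity), returning $d = \gcd(a,b)$ together with the Bezout coefficients $s, t$:

```python
def ext_gcd(a, b):
    """Return (d, s, t) with d = gcd(a, b) = s*a + t*b."""
    # Start from the two known representations: a = 1*a + 0*b, b = 0*a + 1*b.
    s0, t0, s1, t1 = 1, 0, 0, 1
    while b:
        q = a // b
        # One mod step = repeated subtraction, tracking each element's representation.
        a, b = b, a - q * b
        s0, s1 = s1, s0 - q * s1
        t0, t1 = t1, t0 - q * t1
    return a, s0, t0

d, s, t = ext_gcd(240, 46)
assert (d, s * 240 + t * 46) == (2, 2)   # d = 2 = (-9)*240 + 47*46
```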
Let $f: X \rightarrow X'$ be a morphism of schemes, and let $\mathcal{I}^{\bullet}$ be a complex of $\mathcal{O}_X$-modules. There are two spectral sequences (well, more than that, but these are the two I care about) abutting to the hypercohomology $\mathbb{H}^n(X, \mathcal{I}^{\bullet})$. The first is the second spectral sequence of hypercohomology, which has $E_2$-term $E_2^{p,q} = H^p(X, H^q(\mathcal{I}^{\bullet}))$ (where $H^q(\mathcal{I}^{\bullet})$ just means the $q^{th}$ cohomology object of the complex $\mathcal{I}^{\bullet}$), and the second is the Leray spectral sequence associated with $f$, which has $E_2$-term $E_2^{p,q} = H^p(X', R^q f_* \mathcal{I}^{\bullet})$. (Here, since the terms in the spectral sequence are $\Gamma(X', \mathcal{O}_{X'})$-modules, the abutment $\mathbb{H}^n(X, \mathcal{I}^{\bullet})$ must be viewed as a $\Gamma(X', \mathcal{O}_{X'})$-module by restricting scalars.) Are there conditions on $X, X', f$, and/or $\mathcal{I}^{\bullet}$ under which we can say that these $E_2$-terms are the same, that is, that $H^p(X, H^q(\mathcal{I}^{\bullet})) \simeq H^p(X', R^q f_* \mathcal{I}^{\bullet})$ as $\Gamma(X', \mathcal{O}_{X'})$-modules? In this (non-degenerate) case, it's not really a spectral sequence question at all, but a question of somehow comparing two iterated cohomology objects. In the case I care about, in which I suspect for other reasons that there will be such an isomorphism but the spectral sequences will almost surely not degenerate at $E_2$, there are lots of nice properties that can be assumed: $X$ and $X'$ are smooth and projective over a field, $f$ is smooth and projective (hence, in particular, proper and flat), and the complex $\mathcal{I}^{\bullet}$ is a complex of injective $\mathcal{O}_X$-modules (but not, however, an injective resolution of any single $\mathcal{O}_X$-module). (Edited 7/2: as Karl Schwede points out, even if both degenerate at the $E_2$-term there's no reason to assume all the terms are the same.)