url stringlengths 14 2.42k | text stringlengths 100 1.02M | date stringlengths 19 19 | metadata stringlengths 1.06k 1.1k |
|---|---|---|---|
https://www.nature.com/articles/s41467-022-28518-y?error=cookies_not_supported&code=01686d4e-06b8-4025-972f-717626464264 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
# Data-driven modeling and prediction of non-linearizable dynamics via spectral submanifolds
## Abstract
We develop a methodology to construct low-dimensional predictive models from data sets representing essentially nonlinear (or non-linearizable) dynamical systems with a hyperbolic linear part that are subject to external forcing with finitely many frequencies. Our data-driven, sparse, nonlinear models are obtained as extended normal forms of the reduced dynamics on low-dimensional, attracting spectral submanifolds (SSMs) of the dynamical system. We illustrate the power of data-driven SSM reduction on high-dimensional numerical data sets and experimental measurements involving beam oscillations, vortex shedding and sloshing in a water tank. We find that SSM reduction trained on unforced data also predicts nonlinear response accurately under additional external forcing.
## Introduction
Low-dimensional reduced models of high-dimensional nonlinear dynamical systems are critically needed in various branches of applied science and engineering. Such simplified models would significantly reduce computational costs and enable physical interpretability, design optimization and efficient controllability. As of yet, however, no generally applicable procedure has emerged for the reliable and robust identification of nonlinear reduced models.
Instead, the most broadly used approach to reducing nonlinear dynamical systems has been a fundamentally linear technique, the proper orthogonal decomposition (POD), followed by a Galerkin projection1,2,3. Projecting the full dynamics to the most energetic linear modes, POD requires the knowledge of the governing equations of the system and hence is inapplicable when only data is available. As purely data-based alternatives, machine learning methods are broadly considered and tested in various fields4,5,6,7. While the black-box approach of machine learning might often seem preferable to a detailed nonlinear analysis, the resulting neural network models require extensive tuning, lack physical interpretability, generally perform poorly outside their training range and tend to be unnecessarily complex8. This has inspired a number of approaches that seek a blend of machine learning with a priori information about the underlying physics9,10. Still within the realm of machine learning, sparse regression has also shown promise in approximating the right-hand sides of low-dimensional, simple dynamical systems with functions taken from a preselected library4. Another recent approach is cluster-based network modeling, which uses the toolkit of network science and statistical physics for modeling nonlinear dynamics11.
A popular alternative to POD and machine learning is the dynamic mode decomposition (DMD)12, which approximates directly the observed system dynamics. The original DMD and its later variants fit a linear dynamical system to temporally evolving data, possibly including further functions of the original data, over a given finite time interval13. DMD provides an appealingly simple yet powerful algorithm to infer a local model near steady states where the nonlinear dynamics is always approximately linear. This linear model is also more globally valid if constructed over observables lying in a span of some eigenfunctions of the Koopman operator, which maps observables evaluated over initial states into their evaluations over current states14,15,16. This relationship between DMD and the Koopman operator has motivated an effort to machine-learn Koopman eigenfunctions from data in order to linearize nonlinear dynamical systems globally on the space of their observables17,18,19.
Finding physically realizable observables that fall in a Koopman eigenspace is, however, often described as challenging or difficult20. A more precise assessment would be that such a find is highly unlikely, given that the probability of any countable set of a priori selected observables falling in any Koopman eigenspace is zero. In addition, those eigenspaces can only be determined explicitly in simple, low-dimensional systems. In practice, therefore, DMD can only provide a justifiable model near an attracting fixed point of a dynamical system. While Koopman modes still have the potential to linearize the observer dynamics on larger domains, those domains cannot include more than one attracting or repelling fixed point19,20,21. Indeed, DMD and Koopman mode expansions fail to converge outside neighborhoods of fixed points even in the simplest, one-dimensional nonlinear systems with two fixed points20,22. In summary, while these data-driven model reduction methods are powerful and continue to inspire ongoing research, their applicability is limited to locally linearized systems and globally linearizable nonlinear systems, such as the Burgers equation23.
The focus of this paper is the development of data-driven, simple and predictive reduced-order models for essentially nonlinear dynamical systems, i.e., nonlinearizable systems. Determining exact linearizability conclusively from data is beyond reach. In contrast, establishing that a dynamical system is nonlinearizable in a domain of interest is substantially simpler: one just needs to find an indication of coexisting isolated stationary states in the data. By an isolated stationary state, we mean here a compact and connected invariant set with an open neighborhood that contains no other compact and connected invariant set. Examples of such stationary states include hyperbolic fixed points, periodic orbits, invariant spheres and quasiperiodic tori; closures of homoclinic orbits and heteroclinic cycles; and chaotic attractors and repellers. If a data set indicates the coexistence of any two sets from the above list, then the system is conclusively non-linearizable in the range of the available data. Specifically, there will be no homeomorphism (continuous transformation with a continuous inverse) that transforms the orbits of the underlying dynamical system into those of a linear dynamical system. While this is a priori clear from dynamical systems theory, several studies have specifically confirmed a lack of convergence of Koopman-mode expansions already for the simplest case of two coexisting fixed points, even over subsets of their domain of attraction or repulsion20,22.
Non-linearizable systems are ubiquitous in science, technology and nature. Beyond the well-known examples of chaotic dynamical systems and turbulent fluid flows1, any bifurcation phenomenon, by definition, involves coexisting steady states and hence is automatically non-linearizable. Indeed, aerodynamic flutter24, buckling of beams and shells25, bistable microelectromechanical systems26, traffic jams27 or even tipping points in climate change28 are all fundamentally non-linearizable, just to name a few. Figure 1 shows some examples of non-linearizable systems emerging in technology, nature and scientific modeling.
We will show here that a collection of classic and recent mathematical results from nonlinear dynamical systems theory enables surprisingly accurate and predictive low-dimensional modeling from data for a number of non-linearizable phenomena. Our construct relies on the recent theory of spectral submanifolds (SSMs), the smoothest invariant manifolds that act as nonlinear continuations of non-resonant eigenspaces from the linearization of a system at a stationary state (fixed point, periodic orbit or quasiperiodic orbit29). Using appropriate SSM embeddings30,31,32 and an extended form of the classic normal form theory33, we obtain sparse dynamical systems describing the reduced dynamics on the slowest SSMs of the system, which are normally hyperbolic and hence robust under perturbations34.
We construct the extended normal form within the slowest SSM as if the eigenvalues of the linearized dynamics within the SSM had zero real parts, although that is not the case. As a result, our normalization procedure will not render the simplest possible (linear) normal form for the SSM dynamics, valid only near the underlying isolated stationary state. Instead, our procedure yields a sparsified nonlinear, polynomial normal form on a larger domain of the SSM that can also capture nearby coexisting stationary states. This fully data-driven normalization algorithm learns the normal form transformation and the coefficients of the normal form simultaneously by minimizing an appropriately defined conjugacy error between the unnormalized and normalized SSM dynamics.
For a generic observable of an oscillatory dynamical system without an internal resonance, a two-dimensional data-driven model calculated on the slowest SSM of the system turns out to capture the correct asymptotic dynamics. Such an SSM-reduced model is valid on domains in which the nonlinearity and any possible external forcing are strong enough to create nonlinearizable dynamics, yet are still moderate enough to render the eigenspace of the linear system relevant. More generally, oscillatory systems with m independent internal resonances in their spectrum can be described by reduced models on $$2\left(m+1\right)$$-dimensional SSMs. In both the resonant and the nonresonant cases, the models can be refined by increasing the degree of their nonlinearity rather than by increasing their dimension. As we show in examples, the resulting SSM-based models are explicit, deterministic and even have the potential to predict system behavior outside the range of the training data away from bifurcations. Most importantly, we find that the models also accurately predict forced response, even though they are only trained on data collected from unforced systems.
We illustrate the power of data-driven SSM-reduced models on high-dimensional numerically generated data sets and on experimental data. These and further examples are also available as MATLAB® live scripts, which are part of a general open-source package, SSMLearn, that performs this type of model reduction and prediction for arbitrary data sets.
## Results
### Spectral submanifolds and their reduced dynamics
A recent result in dynamical systems is that all eigenspaces (or spectral subspaces) of linearized systems admit unique nonlinear continuations under well-defined mathematical conditions. Specifically, spectral submanifolds (SSMs), as defined by29, are the unique smoothest invariant manifolds that serve as nonlinear extensions of spectral subspaces under the addition of nonlinearities to a linear system. The SSM formulation and terminology we use here is due to29; the Methods section “Existence of SSMs” discusses the history of these results and further technical details.
We consider n-dimensional dynamical systems of the form
$$\dot{{{{{{{{\bf{x}}}}}}}}}={{{{{{{\bf{A}}}}}}}}{{{{{{{\bf{x}}}}}}}}+{{{{{{{{\bf{f}}}}}}}}}_{0}({{{{{{{\bf{x}}}}}}}})+\epsilon {{{{{{{{\bf{f}}}}}}}}}_{1}({{{{{{{\bf{x}}}}}}}},{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon ),\qquad {{{{{{{{\bf{f}}}}}}}}}_{0}({{{{{{{\bf{x}}}}}}}})={{{{{{{\mathcal{O}}}}}}}}({\left\vert{{{{{{{\bf{x}}}}}}}}\right\vert}^{2}),\qquad 0\le \epsilon \ll 1,$$
(1)
with a constant matrix $${{{{{{{\bf{A}}}}}}}}\in {{\mathbb{R}}}^{n\times n},$$ and with class Cr functions $${{{{{{{{\bf{f}}}}}}}}}_{0}:{{{{{{{\mathcal{U}}}}}}}}\to {{\mathbb{R}}}^{n}$$ and $${{{{{{{{\bf{f}}}}}}}}}_{1}:{{{{{{{\mathcal{U}}}}}}}}\times {{\mathbb{T}}}^{\ell }\to {{\mathbb{R}}}^{n}$$, where $${{\mathbb{T}}}^{\ell }={S}^{1}\times \ldots \times {S}^{1}$$ is the -dimensional torus. The elements of the frequency vector $${{{{{{{\boldsymbol{\Omega }}}}}}}}{\mathbb{\in }}{{\mathbb{R}}}^{\ell }$$ are rationally independent, and hence the function f1 is quasiperiodic in time. The assumed degree of smoothness for the right-hand side of (1) is $$r\in {{\mathbb{N}}}^{+}\cup \left\{\infty ,a\right\}$$, with a referring to analytic. The small parameter ϵ signals that the forcing in system (1) is moderate so that the structure of the autonomous part is still relevant for the full system dynamics. Rigorous mathematical results on SSMs are proven for small enough ϵ, but continue to hold in practice for larger values of ϵ as well, as we will see in examples. Note that eq. (1) describes equations of motions of physical oscillatory systems. It does not cover phenomenological models of phase oscillators, such as the Kuramoto model35.
The eigenvalues $${\lambda }_{j}={\alpha }_{j}+{{{{{{{\rm{i}}}}}}}}{\omega }_{j}\in {\mathbb{C}}$$ of A, with multiplicities counted, are ordered based on their real parts, $${{{{{{{\rm{Re}}}}}}}}{\lambda }_{j}$$, as
$${{{{{{{\rm{Re}}}}}}}}{\lambda }_{n}\le {{{{{{{\rm{Re}}}}}}}}{\lambda }_{n-1}\le \ldots \ldots \le {{{{{{{\rm{Re}}}}}}}}{\lambda }_{1}.$$
(2)
Their corresponding real modal subspaces (or eigenspaces), $${E}_{j}\subset {{\mathbb{R}}}^{n}$$, are spanned by the imaginary and real parts of the corresponding eigenvectors and generalized eigenvectors of A. To analyze typical systems, we assume that $${{{{{{{\rm{Re}}}}}}}}{\lambda }_{j}={\alpha }_{j}\,\ne\, 0$$ holds for all eigenvalues, i.e., x = 0 is a hyperbolic fixed point for ϵ = 0.
A spectral subspace $${E}_{{j}_{1},\ldots ,{j}_{q}}$$ is a direct sum
$${E}_{{j}_{1},\ldots ,{j}_{q}}={E}_{{j}_{1}}\oplus {E}_{{j}_{2}}\oplus \ldots \oplus {E}_{{j}_{q}}$$
(3)
of an arbitrary collection of modal subspaces, which is always an invariant subspace for the linear part of the dynamics in (1). Classic examples of spectral subspaces are the stable and unstable subspaces, comprising all modal subspaces with $${{{{{{{\rm{Re}}}}}}}}{\lambda }_{k} \, < \, 0$$ and $${{{{{{{\rm{Re}}}}}}}}{\lambda }_{k} \, > \, 0$$, respectively. Projections of the linearized system onto the nested hierarchy of slow spectral subspaces,
$${E}^{1}\subset {E}^{2}\subset {E}^{3}\subset \ldots ,\qquad {E}^{k}:= {E}_{1,\ldots ,k},\quad k=1,\ldots ,n,$$
(4)
provide exact reduced-order models for the linearized dynamics over an increasing number of time scales under increasing k, as sketched in panel (a) of Fig. 2. This is why a Galerkin projection onto Ek is an exact model reduction procedure for linear systems, whose accuracy can be increased by increasing k. A fundamental question is whether nonlinear analogues of spectral subspaces continue to organize the dynamics under the addition of nonlinear and time-dependent terms in the full system (1).
Let us fix a specific spectral subspace $$E={E}_{{j}_{1},\ldots ,{j}_{q}}$$ within either the stable or the unstable subspace. If E is non-resonant (i.e., no nonnegative, low-order, integer linear combination of the spectrum of AE is contained in the spectrum of A outside E), then E has infinitely many nonlinear continuations in the system (1) for ϵ small enough29. These invariant manifolds are of smoothness class CΣ(E), with the spectral quotient Σ(E) measuring the ration of the fastest decay exponent outside E to the slowest decay exponent inside E (see eq. (13) of the Methods section “Existence of SSMs”). All such manifolds are tangent to E for ϵ = 0, have the same quasiperiodic time dependence as f1 does and have a dimension equal to that of E.
Of these infinitely may invariant manifolds, however, there will be a unique smoothest one, the spectral submanifold (SSM) of E, denoted W(E, Ωt; ϵ). This manifold is Cr smooth if r > Σ(E) and can therefore be approximated more accurately than the other infinitely many nonlinear continuations of E. In particular, SSMs have convergent Taylor expansions if the dynamical system (1) is analytic (r = a). Then the reduced dynamics on a slow SSM, Ek, can be approximated with arbitrarily high accuracy using arbitrarily high-order Taylor expansions, without ever increasing the dimension of Ek, see panel (b) of Fig. 2. Such an approximation for dynamical systems with known governing equations is now available for any required order of accuracy via the open-source MATLAB® package SSMTool36. In contrast, reduced models obtained from projection-based procedures can only be improved by increasing their dimensions.
The nearby coexisting stationary states in Fig. 2 happen to be contained in the SSM. In specific examples, however, these states may also be off the SSM, contained instead in one of the infinitely many additional nonlinear continuations, $$\tilde{W}(E,{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )$$, of the spectral subspace E. The Taylor expansion of the dynamics on $$\tilde{W}(E,{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )$$ and W(E, Ωt; ϵ) are, however, identical up to order Σ(E). Therefore, the reduced models we will compute on the SSM W(E, Ωt; ϵ) also correctly capture the nearby stationary states on $$\tilde{W}(E,{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )$$, as long as the polynomial order of the model stays below Σ(E). In large physical systems, this represents no limitation, given that Σ(E) 1.
### Embedding SSMs via generic observables
If at least some of the real parts of the eigenvalues in (2) are negative, then longer-term trajectory data for system (1) will be close to an attracting SSM, as illustrated in panel (b) of Fig. 2. This is certainly the case for data from experiments that are run until a nontrivial, attracting steady state emerges, see, e.g., in panel (e) of Fig. 1. Measurements of trajectories in the full phase space, however, are seldom available from such experiments. Hence, if data about system (1) is only available from observables, the construction of SSMs and their reduced dynamics has to be carried out in the space of those observables.
An extended version of Whitney’s embedding theorem guarantees that almost all (in the sense of prevalence) smooth observable vectors $${{{{{{{\bf{y}}}}}}}}({{{{{{{\bf{x}}}}}}}})=({y}_{1}({{{{{{{\bf{x}}}}}}}}),,...,{y}_{p}({{{{{{{\bf{x}}}}}}}}))\in {{\mathbb{R}}}^{p}$$ provide an embedding of a compact subset $${{{{{{{\mathcal{C}}}}}}}}\subset W(E,{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )$$ of a d-dimensional SSM, W(E, Ωt; ϵ), into the observable space $${{\mathbb{R}}}^{p}$$ for high enough p. Specifically, if we have p > 2(d + ) simultaneous and independent continuous measurements, y(x), of the p observables, then almost all maps $${{{{{{{\bf{y}}}}}}}}:{{{{{{{\mathcal{C}}}}}}}}\to {{\mathbb{R}}}^{p}$$ are embeddings of $${{\mbox{}}}{{{{{{{\mathcal{C}}}}}}}}{{\mbox{}}}$$37, and hence the top right plot of Fig. 3 is applicable with probability one.
In practice, we may not have access to p > 2(d + ) independent observables and hence cannot invoke Whitney’s theorem. In that case, we invoke the Takens delay embedding theorem38, which covers observable vectors built from p uniformly sampled, consecutive measured instances of a single observable. More precisely, if s(t) is a generic scalar quantity measured at times Δt apart, then the observable vector for delay-embedding is formed as $${{{{{{{\bf{y}}}}}}}}(t)=\left(s(t),s(t+{{\Delta }}t),...,s\left(t+(p-1){{\Delta }}t\right)\right)\in {{\mathbb{R}}}^{p}$$. We discuss the embedding, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}\subset {{\mathbb{R}}}^{p}$$, of an autonomous SSM, W(E, Ωt0; 0), in the observable space $${{\mathbb{R}}}^{p}$$ in more detail in the Methods section “Embedding the SSM in the observable space”.
### Data-driven extended normal forms on SSMs
Once the embedded SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, is identified in the observable space, we seek to learn the reduced dynamics on $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$. An emerging requirement for learning nonlinear models from data has been model sparsity4, without which the learning process would be highly sensitive. The dynamics on $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, however, is inherently nonsparse, which suggests that we learn its Poincaré normal form39 instead. This classic normal form is the simplest polynomial form to which the dynamics can be brought via successive, near-identity polynomial transformations of increasing order.
Near the origin on a slow SSM, however, this simplest polynomial form is just the restriction of the linear part of system (1) to $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, as long as infinitely many nonresonance conditions are satisfied for the operator A40. The Poincaré normal form on $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ would, therefore, only capture the low-amplitude, linearized part of the slow SSM dynamics.
To construct an SSM-reduced model for non-linearizable dynamics, we use extended normal forms. This idea is motivated by normal forms used in the study of bifurcations of equilibria on center manifolds depending on parameters33,41. In that setting, the normal form transformation is constructed at the bifurcation point where the system is non-linearizable by definition. The same transformation is then used away from bifurcations, even though the normal form of the system would be linear there. One, therefore, gives up the maximal possible simplicity of the normal form but gains a larger domain on which the normal form transformation is invertible and hence captures truly nonlinear dynamics. In our setting, there is no bifurcation at x = 0, but we nevertheless construct our normal form transformation as if the eigenvalues corresponding to the slow subspace E were purely imaginary. This procedure leaves additional, near-resonant terms in the SSM-reduced normal form, enhancing the domain on which the transformation is invertible and hence the normal form is valid.
We determine the normal form coefficients directly from data via the minimization of a conjugacy error (see the Methods section). This least-square minimization procedure renders simultaneously the best-fitting normal form coefficients and the best fitting normal form transformation. As we will find in a specific example, this data-driven procedure can yield accurate reduced models even beyond the formal domain of convergence of equation-driven normal forms.
The simplest extended normal form on a slow SSM of an oscillatory system arises when the underlying spectral subspace E corresponds to a pair of complex conjugate eigenvalues. Writing in polar coordinates and truncating at cubic order,42 finds this normal form on the corresponding two-dimensional, autonomous SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, to be
$$\begin{array}{ll}\dot{\rho }=\,{\alpha }_{0}\rho +\beta {\rho }^{3},\!\!\!\\ \dot{\theta }=\,{\omega }_{0}+\gamma {\rho }^{2}.\end{array}$$
(5)
This equation is also known as the Stuart–Landau equation arising in the unfolding of a Hopf bifurcation43,44,45.
The dynamics of (5) is characteristically nonlinearizable when α0β < 0, given that a limit cycle coexists with the ρ = 0 fixed point in that case. Further coexisting steady states will arise when forcing is added to the system, as we discuss in the next section. We note that the cubic normal form on two-dimensional SSMs has also been approximated from data in46. That non-sparse procedure fits the full observer dynamics to a low-dimensional, discrete polynomial dynamical system, then performs an analytic SSM reduction and a classic normal form transformation on the SSM.
For higher accuracy, the extended normal form on an oscillatory SSM of dimension 2m is of the form
$$\begin{array}{c}{\dot{\rho }}_{j}={\alpha }_{j}({{{{{{{\boldsymbol{\rho }}}}}}}},{{{{{{{\boldsymbol{\theta }}}}}}}}){\rho }_{j},\\ \!\!\!\!\!{\dot{\theta }}_{j}={\omega }_{j}({{{{{{{\boldsymbol{\rho }}}}}}}},{{{{{{{\boldsymbol{\theta }}}}}}}}),\end{array}\,\,\,\,\,\,\,j=1,2,...m,\,\,\,\,\,\,{{{{{{{\boldsymbol{\rho }}}}}}}}\in {{\mathbb{R}}}_{+}^{m},\,\,\,\,\,\,{{{{{{{\boldsymbol{\theta }}}}}}}}\in {{\mathbb{T}}}^{m}.$$
(6)
If the linearized frequencies are nonresonant, then the functions αj and ωj only depend on ρ42. Our numerical procedure determines these functions up to the necessary order that ensures a required accuracy for the reduced-order model on the SSM. This is illustrated schematically for a four-dimensional slow SSM (m = 2) in the bottom right plot of Fig. 3.
### Predicting forced dynamics from unforced data
With the normalized reduced dynamics (6) on the embedded SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, at hand, we can also make predictions for the dynamics of the embedded quasiperiodic SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{\epsilon }({{{{{{{\boldsymbol{\Omega }}}}}}}}t)$$, of the full system (1). This forced SSM is guaranteed to be an $${{{{{{{\mathcal{O}}}}}}}}(\epsilon )$$Cr-close perturbation of $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ for moderate external forcing amplitudes. A strict proof of this fact is available for small enough ϵ > 029, but as our examples will illustrate, the smooth persistence of the SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{\epsilon }({{{{{{{\boldsymbol{\Omega }}}}}}}}t)$$, generally holds for all moderate ϵ values in practice. Such moderate forcing is highly relevant in a number of technological settings, including system identification in structural dynamics and fluid-structure interactions, where the forcing must be moderate to preserve the integrity of the structure.
We discuss the general extended normal form on $${{{{{{{{\mathcal{M}}}}}}}}}_{\epsilon }({{{{{{{\boldsymbol{\Omega }}}}}}}}t)$$ in the Methods section “SSM dynamics via extended normal forms”. In the simplest and most frequent special case, the external forcing is periodic ( = 1) and $${{{{{{{{\mathcal{M}}}}}}}}}_{\epsilon }({{\Omega }}t)$$ is the embedding of the slowest, two-dimensional SSM corresponding to a pair of complex conjugate eigenvalues. Using the modal forcing amplitude f1,1 and modal phase shift ϕ1,1 in the general normal form (25)47, introduces the new phase coordinate ψ = θ − Ωt − ϕ1,1 and lets f = f1,1, α = α1, ω = ω1 to obtain the planar, autonomous, extended normal form on $${{{{{{{{\mathcal{M}}}}}}}}}_{\epsilon }({{\Omega }}t)$$ as
$$\dot{\rho } =\alpha (\rho )\rho +f\sin \psi ,\\ \dot{\psi } =\omega (\rho )-{{\Omega }}+\frac{f}{\rho }\cos \psi$$
(7)
at leading order in ϵ. All stable and unstable periodic responses on the SSM are fixed points of system (7), with their amplitudes ρ0 and phases ψ0 satisfying the equations
$${{\Omega }}=\omega ({\rho }_{0})\pm \sqrt{\frac{{f}^{2}}{{\rho }_{0}^{2}}-{\alpha }^{2}({\rho }_{0})},\quad {\psi }_{0}={\tan }^{-1}\left[\frac{\alpha \left({\rho }_{0}\right)}{\omega \left({\rho }_{0}\right)-{{\Omega }}}\right].$$
(8)
The first analytic formula in (8) predicts the forced response curve (FRC) of system (1), i.e., the relationship between response amplitude, forcing amplitude and forcing frequency, from the terms α(ρ) and ω(ρ) of the extended normal form of the autonomous SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$. These terms are constructed from trajectories of the unforced system, thus eq. (8) predicts the behavior of a nonlinearizable dynamical system under forcing based solely on unforced training data. The stability of the predicted periodic response follows from a simple linear analysis at the corresponding fixed point of the ODE (7). The first formula in (8) also contains another frequently used notion of nonlinear vibration analysis, the dissipative backbone curve ω(ρ), which describes the instantaneous amplitude-frequency relation along freely decaying vibrations within the SSM.
As we will also show in examples, our unforced model-based predictions for forced periodic response (see the Methods section “Prediction of forced response from unforced training data”) are confirmed by numerical simulation or dedicated laboratory experiments on forced systems.
### Examples
We now illustrate data-driven, SSM-based modeling and prediction on several numerical and experimental data sets describing non-linearizable physical systems. Further applications are described in48. Both the numerical and the experimental data sets were initialized without knowledge of the exact SSM. All our computations have been carried out by the publicly available MATLAB® package, SSMLearn, whose repository also contains further examples not discussed here. The main algorithm behind SSMLearn is illustrated in Fig. 3, with more detail given in the Methods section “Summary of the algorithm”.
To quantify the errors of an SSM-based reduced model, we use the normalized mean-trajectory-error (NMTE). For P observations of the observable vector yj and their model-based reconstructions, $${\hat{{{{{{{{\bf{y}}}}}}}}}}_{j}$$, this modeling error is defined as
$${{{{{{{\rm{NMTE}}}}}}}}=\frac{1}{\parallel \underline{{{{{{{{\bf{y}}}}}}}}}\parallel }\frac{1}{P}\mathop{\sum }\limits_{j=1}^{P}\parallel {{{{{{{{\bf{y}}}}}}}}}_{j}-{\hat{{{{{{{{\bf{y}}}}}}}}}}_{j}\parallel\!.$$
(9)
Here $$\underline{{{{{{{{\bf{y}}}}}}}}}$$ is a relevant normalization vector, such as the data point with the largest norm. When validating the reduced dynamics for a given testing trajectory, we run the reduced model from the same initial condition for the comparison. Increasing the order of the truncated normal form polynomials in eq. (6) generally reduces the NMTE error to any required level but excessively small errors can lead to overfitting. In our examples, we will be allowing model errors in the order of 1% − 4% to avoid overfitting.
As a first example, we consider a finite-element discretization of a von Kármán beam with clamped-clamped boundary conditions49, shown in panel (a) of Fig. 4. In contrast to the classic Euler-Bernoulli beam, the von Kármán model captures moderate deformations by including a nonlinear, quadratic term in the kinematics. We first construct a 33 degree-of-freedom, damped, unforced finite element model (i.e., n = 66 and ϵ = 0 in eq. (1)) for an aluminum beam of length 1 [m], width 5 [cm], thickness 2 [cm] and material damping modulus 106 [] (see the Supplementary Information for more detail).
Our objective is to learn from numerically generated trajectory data the reduced dynamics on the slowest, two-dimensional SSM, W(E1), of the system, defined over the slowest two-dimensional (d = 2) eigenspace E1 of the linear part. To do so, we generate two trajectories starting from initial beam deflections caused by static loading of 12 [kN] and 14 [kN] at the midpoint, as shown in panel (a) of Fig. 4. The latter trajectory, shown in panel (b) of Fig. 4, is used for training, the other for testing. Along the trajectories, we select our single observable s(t) to be the midpoint displacement of the beam.
The beam equations are analytic (r = a), and hence the SSM, W(E1), admits a convergent Taylor expansion near the origin. The minimal embedding dimension for the two-dimensional, W(E1), as required by Whitney’s theorem, is p = 5, which is not satisfied by our single scalar observable s(t). We therefore employ delay-embedding using $${{{{{{{\bf{y}}}}}}}}(t)=\left(s(t),s(t+{{\Delta }}t),\ldots ,s(t+4{{\Delta }}t)\right)$$ with Δt = 0.0955 [ms]. By Takens’s theorem, this delayed observable embeds the SSM in $${{\mathbb{R}}}^{5}$$ with probability one.
A projection of the embedded SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}\in {{\mathbb{R}}}^{5},$$ onto three coordinates is shown in panel (c) of Fig. 4. On $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, SSMLearn returns the 7th-order extended normal form
$$\dot{\rho } =\alpha (\rho )\rho ,\quad \alpha (\rho )=-3.02-5.79{\rho }^{2}+57.5{\rho }^{4}-191{\rho }^{6},\\ \dot{\theta } =\omega (\rho ),\,\,\quad \omega (\rho )=658+577{\rho }^{2}-347{\rho }^{4}-387{\rho }^{6},$$
(10)
to achieve our preset reconstruction error bar of 3% on the test trajectory (NMTE = 0.027), shown in panel (d) of Fig. 4.
We now use the model (10), trained on a single decaying trajectory, to predict the forced response of the beam for various forcing amplitudes and frequencies in closed form. We will then compare these predictions with analytic forced response computations for the forced SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{\epsilon }({{\Omega }}t)$$, obtained from SSMTool36 and with numerical simulations of the damped-forced beam. The periodic forcing is applied at the midpoint node; the Taylor expansion order in SSMTool for the analytically computed dynamics on $${{{{{{{{\mathcal{M}}}}}}}}}_{\epsilon }({{\Omega }}t)$$ is set to 7, as in (10). Panel (e) of Fig. 4 shows the FRCs (green) and the backbone curve (blue) predicted by SSMLearn based on formula (8) from the single unforced trajectory in panel (b) of Fig. 4. To obtain the relevant forcing amplitudes f in the delay-observable space, we have followed the calibration procedure described in the Methods section “Prediction of forced response from unforced training data” for the forcing values $$\left|\epsilon {{{{{{{{\bf{f}}}}}}}}}_{1}\right|=15,45,95$$ [N] at the single forcing frequency Ω = 103.5 [Hz]. Recall that coexisting stable (solid lines) and unstable (dashed lines) periodic orbits along the same FRC are hallmarks of non-linearizable dynamics and hence cannot be captured by the model reduction techniques we reviewed in the Introduction for linearizable systems.
The data-based prediction for the FRCs agrees with the analytic FRCs for low forcing amplitudes but departs from it for higher amplitudes. Remarkably, as the numerical simulations (red) confirm, the data-based FRC is the correct one. The discrepancy between the two FRCs for large amplitudes only starts decreasing under substantially higher-order Taylor series approximations used in SSMTool (see the Supplementary Information). This suggests the use of the data-based approach for this class of problems even if the exact equations of motion are available.
As a second example, we consider the classic problem of vortex shedding behind a cylinder8. Our input data for SSM-based reduced modeling are the velocity and pressure fields over a planar, open fluid domain with a hole representing the cylinder section, as shown in panel (a) of Fig. 5. The boundary conditions are no-slip on the circular inner boundary, standard outflow on the outer boundary at the right side, and fixed horizontal velocity on the three remaining sides50. The Reynolds number for this problem is the ratio between the cylinder diameter times the inflow velocity and the kinematic viscosity of the fluid.
Available studies8,50,51 report that, at low Reynolds number, the two-dimensional unstable manifold, Wu(SS), of the wake-type steady solution, SS, in panel (b) of Fig. 5 connects SS to the limit cycle shown in panel (c) of Fig. 5. Here we evaluate the performance of SSMLearn on learning this unstable manifold as an SSM, along with its reduced dynamics, from trajectory data at Reynolds number equal to 70. For this SSM, we again have d = 2 and r = a, as in our previous example. There is no external forcing in this problem, and hence we have ϵ = 0 in eq. (1). In contrast to prior studies that often consider a limited number of observables8,51,52, here we select the full phase space of the discretized Navier-Stokes simulation to be the observable space for illustration, which yields n = p = 76, 876 in eq. (1). We generate nine trajectories numerically, eight of which will be used for training and one for testing the SSM-based model.
The nine initial conditions of our input trajectory data are small perturbations from the wake-type steady solution along its unstable directions, equally spaced on a small amplitude circle on this unstable plane. All nine trajectories quickly converge to the unstable manifold and then to the limit cycle representing periodic vortex shedding.
We choose to parametrize the SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}={W}^{u}(SS)$$, with two leading POD modes of the limit cycle, which have been used in earlier studies for this problem. The training trajectories projected onto these two POD modes are shown in panel (d) of Fig. 5. To limit the modeling error (9) to less than NMTE = 1%, SSMLearn requires a polynomial order of 18 in the SSM computations. For this order, our approach can accommodate the strong mode deformation observed for this problem51, manifested by a fold of the SSM over the unstable eigenspace in panel (f) of Fig. 5. Panel (g) of Fig. 5 shows the strongly nonlinear geometry of $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ projected to the observable subspace formed by the velocities and the pressure of a probe point in the wake.
To capture the SSM-reduced dynamics with acceptable accuracy, we need to compute the extended normal form up to order 11, obtaining
$$\begin{array}{c}\dot{\rho }=\alpha (\rho )\rho =0.0584\rho -0.479{\rho }^{3}+1.27{\rho }^{5}+6.80{\rho }^{7}-58.9{\rho }^{9}+108{\rho }^{11},\\ \!\!\!\!\!\!\!\!\!\dot{\theta }=\omega (\rho )\,=0.553+0.441{\rho }^{2}-3.38{\rho }^{4}+55.5{\rho }^{6}-321{\rho }^{8}+626{\rho }^{10}.\end{array}$$
(11)
To describe a transition qualitatively from a fixed point to a limit cycle, the reduced-order dynamical model should be at least of cubic order51. Capturing the qualitative behavior (i.e., the unstable fixed point and the stable limit cycle), however, does not imply a low NMTE error for the model. Indeed, the data-driven cubic normal form for this example gives a reconstruction error of NMTE = 117% normalized over the limit cycle amplitude, mainly arising from an out-of-phase convergence to the limit cycle along the testing trajectory. In contrast, the $${{{{{{{\mathcal{O}}}}}}}}(11)$$ normal form in eq. (11) reduced this error drastically to NMTE = 3.86% on the testing trajectory, as shown in panel (e) of Fig. 5.
We show in Section 1.2.3 of the Supplementary Information that for comparable accuracy, the Sparse Identification of Nonlinear DYnamics (SINDy) approach of4 returns non-sparse nonlinear models for this example. Similarly, while the DMD13 can achieve highly accurate curve-fitting on the available training trajectories with a high-dimensional linear model, that model only captures linearizable dynamics near the origin. As a consequence, its trajectories grow without bound over longer integration times and hence fail to capture the limit cycle.
As a third example, we consider fluid oscillations in a tank, which exhibit highly nonlinear characteristics53. To describe such non-linearizable softening effects observed in the sloshing motion of surface waves, Duffing-type models have been proposed54. While amplitude variations observed in forced experiments can be fitted to forced softening Duffing equations, nonlinear damping remains a challenge to identify55.
The experiments we use to construct an SSM-reduced nonlinear model for sloshing were performed in a rectangular tank of width 500 [mm] and depth 50 [mm], partially filled with water up to a height of 400 [mm], as shown in panel (a) of Fig. 6. The tank was mounted on a platform excited harmonically by a motor. The surface level was detected via image processing from a monochrome camera. As an observable s(t) we used the horizontal position of the computed center of mass of the water at each time instant, normalized by the tank width. This physically meaningful scalar is robust with respect to image evaluation errors55.
We identify the unforced nonlinear behavior of the system from data obtained in resonance decay experiments56. In those experiments (as in Fig. panel (a) of 6, but with a shaker instead of a motor), once a periodic steady state is reached under periodic horizontal shaking of the tank, the shaker is turned off and the decaying sloshing is recorded. We show such a decaying observable trajectory (orange line) in panel (b) of Fig. 6, with the shaker switched off slightly before zero time. This damped oscillation is close, by construction, to the two-dimensional, slowest SSM of the system. We use three such decaying observer trajectories (two for training and one for model testing) for the construction of a two-dimensional (d = 2), autonomous, SSM-based reduced-order model for s(t). For delay embedding dimension, we again pick p = 5, the minimal value guaranteed to be generically correct for embedding the SSM by Takens’s theorem. The delay used in sampling s(t) is Δt = 0.033 [s]. For this input and for a maximal reconstruction error of 2%, SSMLearn identifies a nearly flat SSM in the delayed observable space–see panel (c) of Fig. 6–with a cubic extended normal form
$$\dot{\rho }=-0.063179\rho -0.041214{\rho }^{3},\quad \dot{\theta }=7.8144-1.5506{\rho }^{2}.$$
(12)
This lowest-order, Stuart–Landau-type normal form, cf. (5), already constitutes an accurate reduced-order model with NMTE = 1.88% on the testing data set, see panel (b) of Fig. 6. The amplitude-dependent nonlinear damping, α(ρ), provided by this model is plotted in panel (d) of Fig. 6 with respect to the physical amplitude.
In another set of experiments with the setup of panel (a) Fig. 6, steady states of periodically forced sloshing were measured in sweeps over a range of forcing frequencies under three different shaker amplitudes. As in the previous beam example, we identify the corresponding forcing amplitude, f, in (7) at the maximal amplitude response of each frequency sweep. Shown in panels (e, f) of Fig. 6, the closed-form predictions for FRCs from eq. (8) (solid lines) match closely the experimental FRCs (dots). Given the strong nonlinearity of the FRC, any prediction of this curve from a DMD-based model is bound to be vastly inaccurate, as we indeed show in Section 1.3 of the Supplementary information.
The phase ψ0 of the forced response relative to the forcing has been found difficult to fit to forced Duffing-type models55, but the present modeling methodology also predicts this phase accurately using the second expression in (8). The blue curve in panel (e) of Fig. 6 shows the backbone curve of decaying vibrations, which terminates at the highest amplitude occurring in the training data set. This plot therefore shows that the closed-form FRC predictions obtained from the SSM-based reduced model are also effective for response amplitudes outside the training range of the reduced model.
## Discussion
We have described a data-driven model reduction procedure for non-linearizable dynamical systems with coexisting isolated stationary states. Our approach is based on the recent theory of spectral submanifolds (SSMs), which are the smoothest nonlinear continuations of spectral subspaces of the linearized dynamics. Slow SSMs form a nested hierarchy of attractors and hence the dynamics on them provide a hierarchy of reduced-order models with which generic trajectories synchronize exponentially fast. These SSMs and their reduced models smoothly persist under moderate external forcing, yielding low-dimensional, mathematically exact reduced-order models for forced versions of the same dynamical system. The normal hyperbolicity of SSMs also ensures their robustness under small noise.
All these results have been implemented in the open-source MATLAB® package, SSMLearn, which we have illustrated on data sets arising from forced nonlinear beam oscillations, vortex shedding behind a cylinder and water sloshing in a vibrating tank. For all three examples, we have found that two-dimensional data-driven extended normal forms on the slowest SSMs provide sparse yet accurate models of non-linearizable dynamics in the space of the chosen observables. Beyond matching training and testing data, SSM-reduced models prove their intrinsic, qualitative meaning by predicting non-linearizable, forced steady states purely from decaying, unforced data.
In this brief report, examples of higher-dimensional SSMs and multi-harmonic forcing have not been considered, even though SSMLearn is equipped to handle them. Higher-dimensional SSMs are required in the presence of internal resonances or in non-resonant problems in which initial transients also need to be captured more accurately. A limitation of our approach for non-autonomous systems is the assumption of quasiperiodic external forcing. Note, however, that even specific realizations of stochastic forcing signals can be approximated arbitrarily closely with quasiperiodic functions over any finite time interval of interest. A further limitation in our work is the assumption of smooth system dynamics. For data from non-smooth systems, SSMLearn will nevertheless return an equivalent smooth reduced-order model whose accuracy is a priori known from the available mean-squared error of the SSM fitting and conjugacy error of the normal form construction. We are addressing these challenges in ongoing work to be reported elsewhere. Further applications of SSMLearn to physical problems including higher-dimensional coexisting steady states (see, e.g.,57) are also underway.
## Methods
### Existence of SSMs
In the context of rigid body dynamics, invariant manifolds providing generalizations of invariant spectral subspaces to nonlinear systems were first envisioned and formally constructed as nonlinear normal modes by58 (see59 for a recent review of related work). Later studies, however, pointed out the nonuniqueness of nonlinear normal modes in specific examples (60,61).
In the mathematics literature,62 obtained general results on the existence, smoothness and degree of uniqueness of such invariant manifolds for mappings on Banach spaces. These results use a special parameterization method to construct the manifolds even in evolutionary partial differential equations that admit a well-posed flow map in both time directions (see63 for a mechanics application). The results have been extended to a form applicable to dynamical systems with quasiperiodic time dependence64. An extensive account of the numerical implementation of the parametrization method with a focus on computing invariant tori and their whiskers in Hamiltonian systems is also available65,29 Discussed the existence of the SSM, W(E, Ωt; ϵ), depending on its absolute spectral quotient,
$${{\Sigma }}(E)={{{{{{{\rm{Int}}}}}}}}\,\left[\frac{\mathop{\max }\limits_{\lambda \in {{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}}{| }_{S})}| {{{{{{{\rm{Re}}}}}}}}\lambda | }{\mathop{\min }\limits_{{\lambda }_{e}\in {{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}}{| }_{E})}| {{{{{{{\rm{Re}}}}}}}}{\lambda }_{e}| }\right],$$
(13)
where Spect(AS) is the stable (unstable) spectrum of A if the SSM is stable (unstable). For a stable SSM, Σ(E) is the integer part of the quotient of the minimal real part in the spectrum of A and the maximal real part of the spectrum of A restricted to E.
Based on Σ(E), we call a d-dimensional spectral subspace non-resonant if for any set $$\left({m}_{1},\ldots ,{m}_{d}\right)$$ of nonnegative integers satisfying $$2\le \mathop{\sum }\nolimits_{j = 1}^{d}{m}_{j}\le {{\Sigma }}(E)$$, the eigenvalues, λk, of A satisfy
$$\mathop{\sum }\limits_{j=1}^{d}{m}_{j}{{{{{{{\rm{Re}}}}}}}}{\lambda }_{j}\,\ne \,{{{{{{{\rm{Re}}}}}}}}{\lambda }_{k},\quad {\lambda }_{k}\in {{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}})-{{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}}{| }_{E}).$$
(14)
This condition only needs to be verified for resonance orders between 2 and Σ(E)64. In particular, a 1: 1 resonance between E1 and E2 is allowed if $$\dim {E}_{1}=\dim {E}_{2}=1$$, in which case each strongly resonant spectral subspace gives rise to a unique nearby spectral submanifold.
If E violates the nonresonance condition (14), then E can be enlarged to a higher-dimensional spectral subspace until the nonresonance relationship (14) is satisfied. In the absence of external forcing (ϵ = 0), the nonresonance condition (14) can also be relaxed with the help of the relative spectral quotient,
$$\sigma (E)={{{{{{{\rm{Int}}}}}}}}\,\left[\frac{\mathop{\max }\limits_{\lambda \in {{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}}{| }_{S})-{{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}}{| }_{E})}| {{{{{{{\rm{Re}}}}}}}}\lambda | }{\mathop{\min }\limits_{{\lambda }_{e}\in {{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}}{| }_{E})}| {{{{{{{\rm{Re}}}}}}}}{\lambda }_{e}| }\right],$$
(15)
to the form
$$\mathop{\sum }\limits_{j=1}^{d}{m}_{j}{\lambda }_{j}\,\ne\, {\lambda }_{k},\quad {\lambda }_{k}\in {{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}})-{{{{{{{\rm{Spect}}}}}}}}({{{{{{{\bf{A}}}}}}}}{| }_{E}),\qquad 2\le \mathop{\sum }\limits_{j=1}^{d}{m}_{j}\le \sigma (E).$$
(16)
This is indeed a relaxation because condition (16) is only violated if both the real and the imaginary parts of eigenvalues involved are in the exact same resonance with each other. In contrast, (14) is already violated when the real parts are in resonance with each other.
If $${{{{{{{\rm{Re}}}}}}}}{\lambda }_{1} < 0$$ in eq. (2) and all Ek subspaces are nonresonant, then the nested set of slow spectral submanifolds,
$$W({E}^{1},{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )\subset W({E}^{2},{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )\subset W({E}^{3},{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )\subset \ldots ,$$
gives a hierarchy of local attractors. All solutions in a vicinity of x = 0 approach the reduced dynamics on one of these attractors exponentially fast, as sketched in panel (b) of Fig. 2 for the ϵ = 0 limit. As we will see, non-linearizable dynamics tend to emerge on W(Ek, Ωt; ϵ) due to near-resonance between the linearized frequencies within Ek and the forcing frequencies Ω. The specific location of nontrivial steady states in W(Ek, Ωt; ϵ) is then determined by a balance between the nonlinearities, damping and forcing.
A resonant Ek subspace can be enlarged by adding the next $$k^{\prime}$$ modal subspaces to it until $${E}^{k+k^{\prime} }$$ in the hierarchy (4) becomes non-resonant and hence admits an SSM, $$W({E}^{k+k^{\prime} },{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )$$. This technical enlargement is also in agreement with the physical expectation that all interacting modes have to be included in an accurate reduced-order model. Finally, we note that SSMs are robust features of dynamical systems: they inherit smooth dependence of the vector field in (1) on parameters29.
For discrete-time dynamical systems of the form
$${{{{{{{{\bf{x}}}}}}}}}_{k+1}=\tilde{{{{{{{{\bf{A}}}}}}}}}{{{{{{{{\bf{x}}}}}}}}}_{k}+{\tilde{{{{{{{{\bf{f}}}}}}}}}}_{0}({{{{{{{{\bf{x}}}}}}}}}_{k})+\epsilon {\tilde{{{{{{{{\bf{f}}}}}}}}}}_{1}({{{{{{{{\bf{x}}}}}}}}}_{k},{{{{{{{{\boldsymbol{\phi }}}}}}}}}_{k};\epsilon ),\qquad {{{{{{{{\boldsymbol{\phi }}}}}}}}}_{k+1}={{{{{{{{\boldsymbol{\phi }}}}}}}}}_{k}+\tilde{{{{{{{{\boldsymbol{\Omega }}}}}}}}},$$
(17)
the above results on SSMs apply based on the eigenvalues μk of $$\tilde{{{{{{{{\bf{A}}}}}}}}}$$. One simply needs to replace λk with $$\log {\mu }_{k}$$ and $${{{{{{{\rm{Re}}}}}}}}{\lambda }_{k}$$ with $$\log | {\mu }_{k}|$$ in formulas (13)-(16)29.
We close by noting that in a neighborhood of an SSM, an invariant family of surfaces resembling the role of coordinate planes in a linear system exists66. This invariant spectral foliation (ISF) can, in principle, be used to generate a nonlinear analogue of linear modal superposition in a vicinity of a fixed point. Constructing the ISF from data has shown both initial promise and challenges to be addressed.
### Embedding the SSM in the observable space
Originally conceived for autonomous systems, the Takens delay embedding theorem38 has been strengthened and generalized to externally forced dynamics32. By these results, the embedding for a d-dimensional compact SSM subset, $${{{{{{{\mathcal{C}}}}}}}}\subset W(E,{{{{{{{\boldsymbol{\Omega }}}}}}}}t;\epsilon )$$, in the delay observable space as $${{{{{{{\mathcal{M}}}}}}}}({{{{{{{\boldsymbol{\Omega }}}}}}}}t)$$ is guaranteed for almost all choices of the observable s(t) if p > 2(d + l) and some generic assumptions regarding periodic motions on $${{{{{{{\mathcal{M}}}}}}}}({{{{{{{\boldsymbol{\Omega }}}}}}}}t)$$ are satisfied37.
Of highest importance in technological applications is the case of time-periodic forcing ( = 1), with frequency $${{{{{{{\boldsymbol{\Omega }}}}}}}}={{\Omega }}\in {\mathbb{R}}$$ and period T = 2π/Ω. In this case, the Whitney and Takens embedding theorems can be applied to the associated period-T sampling map (or Poincaré map) $${{{{{{{{\bf{P}}}}}}}}}_{{t}_{0}}:{{\mathbb{R}}}^{n}\to {{\mathbb{R}}}^{n}$$ of the system based at time t0. This map is autonomous and has a time-independent SSM that coincides with the d-dimensional SSM, $${{{{{{{\mathcal{M}}}}}}}}({{\Omega }}{t}_{0})$$, of the full system (1). In this case, by direct application of the embedding theorems to the discrete dynamical system generated by $${{{{{{{{\bf{P}}}}}}}}}_{{t}_{0}}$$, the typically sufficient embedding dimension estimate is improved to p > 2d for Whitney’s and Takens’s theorem.
Technically speaking, the available data will never be exactly on an SSM, as these embedding theorems assume. By the smoothness of the embeddings, however, points close enough to the SSM in the phase space will be close to $${{{{{{{\mathcal{M}}}}}}}}({{{{{{{\boldsymbol{\Omega }}}}}}}}t)$$ in the observable space under the embeddings. Moreover, as slow SSMs attract nearby trajectories exponentially, the distance of observable data from the embedded slow SSM will shrink exponentially fast. Therefore, even under uncorrelated noise in the measurements, mean-squared estimators are suitable for learning slow SSMs from data in the observable space, as we illustrate in the Supplementary Information.
After a possible coordinate shift, the trivial fixed point of the autonomous limit of system (1) will be mapped into the y = 0 origin of the observable space. To find an embedded, d-dimensional SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}\in {{\mathbb{R}}}^{p}$$, attached to this origin for ϵ = 0, we focus on observable domains in which $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ is a graph over its tangent space $${T}_{{{{{{{{\bf{0}}}}}}}}}{{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ at the origin y = 0. Such domains always exist and are generally large enough to capture non-linearizable dynamics in most applications (but see below). Note that $${T}_{{{{{{{{\bf{0}}}}}}}}}{{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ coincides with the image of the spectral subspace E in the observable space.
To learn such a graph-style parametrization for $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ from data, we define a matrix $${{{{{{{{\bf{U}}}}}}}}}_{1}\in {{\mathbb{R}}}^{n\times d}$$ with columns that are orthonormal vectors spanning the yet unknown $${T}_{{{{{{{{\bf{0}}}}}}}}}{{{{{{{{\mathcal{M}}}}}}}}}_{0}$$. The reduced coordinates $${{{{{{{\boldsymbol{\eta }}}}}}}}\in {{\mathbb{R}}}^{d}$$ for a point $${{{{{{{\bf{y}}}}}}}}\in {{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ are then defined as the orthogonal projection $${{{{{{{\boldsymbol{\eta }}}}}}}}={{{{{{{{\bf{U}}}}}}}}}_{1}^{T}{{{{{{{\bf{y}}}}}}}}$$. We week a Taylor-expansion for $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ near the η = 0 origin, denoting by η2:M the family of all monomials of d variables from degree 2 to M. For example, if d = 2 and M = 3, then $${{{{{{{{\boldsymbol{\eta }}}}}}}}}^{2:3}={({\eta }_{1}^{2},{\eta }_{1}{\eta }_{2},{\eta }_{2}^{2},{\eta }_{1}^{3},{\eta }_{1}^{2}{\eta }_{2},{\eta }_{1}{\eta }_{2}^{2},{\eta }_{2}^{3})}^{T}$$. As a graph over $${T}_{{{{{{{{\bf{0}}}}}}}}}{{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, the manifold $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ is approximated as y = V1η + Vη2:M, where the matrices V1 and V contain coefficients for the d-variate linear and nonlinear monomials, respectively. Learning $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ from a data set of P observations y1, …, yP then amounts to finding the $$({{{{{{{{\bf{U}}}}}}}}}_{1}^{* },{{{{{{{{\bf{V}}}}}}}}}_{1}^{* },{{{{{{{{\bf{V}}}}}}}}}^{* })$$ matrices that minimize the mean-square reconstruction error along the training data:
$$\begin{array}{ll}({{{{{{{{\bf{U}}}}}}}}}_{1}^{* },{{{{{{{{\bf{V}}}}}}}}}_{1}^{* },{{{{{{{{\bf{V}}}}}}}}}^{* })=&\arg \mathop{\min }\limits_{{{{{{{{{\bf{U}}}}}}}}}_{1},{{{{{{{{\bf{V}}}}}}}}}_{1},{{{{{{{\bf{V}}}}}}}}}\mathop{\sum }\limits_{j=1}^{P}\parallel {{{{{{{{\bf{y}}}}}}}}}_{j}-{{{{{{{{\bf{V}}}}}}}}}_{1}{{{{{{{{\bf{U}}}}}}}}}_{1}^{T}{{{{{{{{\bf{y}}}}}}}}}_{j}-{{{{{{{\bf{V}}}}}}}}{({{{{{{{{\bf{U}}}}}}}}}_{1}^{T}{{{{{{{{\bf{y}}}}}}}}}_{j})}^{2:M}{\parallel }^{2},\\ &{{{{{{{{\bf{U}}}}}}}}}_{1}^{T}{{{{{{{{\bf{U}}}}}}}}}_{1}={{{{{{{\bf{I}}}}}}}}.\end{array}$$
(18)
The simplest solution to this problem is U1 = V1 with the additional constraint $${{{{{{{{\bf{V}}}}}}}}}_{1}^{T}{{{{{{{\bf{V}}}}}}}}={{{{{{{\bf{0}}}}}}}}$$, which represents a basic nonlinear extension of the principal component analysis67.
The above graph-style parametrization of the SSM breaks down for larger y values if $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ develops a fold over $${T}_{0}{{{{{{{{\mathcal{M}}}}}}}}}_{0}$$. That creates an issue for model reduction if a nontrivial steady state on $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ falls outside the fold, as the limit cycle does in our vortex shedding example. In that case, alternative parametrization methods for $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$ can be used to enhance the domain of the SSM-reduced model. These methods include selecting the columns of U1 to be the leading POD modes of the nontrivial steady state, or enlarging the embedding space with (further) delayed observations. In these cases, the columns of V1 are still orthonormal vectors spanning $${T}_{{{{{{{{\bf{0}}}}}}}}}{{{{{{{{\mathcal{M}}}}}}}}}_{0}.$$
In both panels (c) of Figs. 4, 6, the SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, is nearly flat in the delay-embedding space. This turns out to be a universal property of delay embedding for small delays and low embedding dimensions (see the Supplementary Information).
For ϵ > 0 small (i.e., for moderate forcing), the autonomous SSM, $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, already captures the bulk nonlinear behavior of system (1). Indeed, for this forcing range, the reduced dynamics on the corresponding SSM can simply be computed as an additive perturbation of the autonomous dynamics on $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$47,68,69 (see section “Predicting forced dynamics from unforced data”).
### SSM dynamics via extended normal forms
For an autonomous SSM $${{{{{{{{\mathcal{M}}}}}}}}}_{0}$$, the reduced dynamics is governed by a vector field
$$\dot{{{{{{{{\boldsymbol{\eta }}}}}}}}}={{{{{{{\bf{r}}}}}}}}({{{{{{{\boldsymbol{\eta }}}}}}}})$$
(19)
with a flow map $${{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{r}}}}}}}}}^{t}({{{{{{{\boldsymbol{\eta }}}}}}}})$$. We can generically assume that the Jacobian Dr(0) is semisimple, i.e., Dr(0)B = BΛ, where $${{{{{{{\boldsymbol{\Lambda }}}}}}}}\in {{\mathbb{C}}}^{d\times d}$$ is a diagonal matrix containing the eigenvalues of Dr(0). Classic normal form theory would seek to simplify the reduced dynamics (19) in a vicinity of η = 0 via a nonlinear change of coordinates, η = h(z), so that the transformed vector field $$\dot{{{{{{{{\bf{z}}}}}}}}}={{{{{{{\bf{n}}}}}}}}({{{{{{{\bf{z}}}}}}}})$$ with flow map $${{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{n}}}}}}}}}^{t}({{{{{{{\bf{z}}}}}}}})$$ has a diagonal linear part and has as few nonlinear terms in its Taylor expansion as possible. In our present setting, the origin is assumed hyperbolic, in which case the classic normal form is simply $$\dot{{{{\bf{z}}}}}={{\mathbf{\Lambda}}}{{{\bf{z}}}}$$ under appropriate non-resonance conditions that generically hold40. The corresponding normal form transformation h(z), however, is only valid on a small enough domain in which the dynamics is linearizable.
To capture non-linearizable behavior, we employ extended normal forms motivated by those used to unfold bifurcations33. In this approach, we construct normal forms that do not remove those polynomial terms from (19) whose removal would result in small denominators in the Taylor coefficients h(z) and hence decrease its domain of convergence. Instead, we seek a normal form for (19) of the form
$$\begin{array}{l}{{{{{{{\bf{n}}}}}}}}({{{{{{{\bf{z}}}}}}}};{{{{{{{\bf{N}}}}}}}})={{{{{{{\boldsymbol{\Lambda }}}}}}}}{{{{{{{\bf{z}}}}}}}}+{{{{{{{\bf{N}}}}}}}}{{{{{{{{\bf{z}}}}}}}}}^{2:N},\\ {{{{{{{\bf{h}}}}}}}}({{{{{{{\bf{z}}}}}}}};{{{{{{{\bf{H}}}}}}}})={{{{{{{\bf{B}}}}}}}}({{{{{{{\bf{z}}}}}}}}+{{{{{{{\bf{H}}}}}}}}{{{{{{{{\bf{z}}}}}}}}}^{2:N}),\,\,\,\,\,\,{{{{{{{{\bf{h}}}}}}}}}^{-1}({{{{{{{\boldsymbol{\eta }}}}}}}};{{{{{{{{\bf{H}}}}}}}}}_{{{{{{{{\boldsymbol{\star }}}}}}}}})={{{{{{{{\bf{B}}}}}}}}}^{-1}{{{{{{{\boldsymbol{\eta }}}}}}}}+{{{{{{{{\bf{H}}}}}}}}}_{\star }{({{{{{{{{\bf{B}}}}}}}}}^{-1}{{{{{{{\boldsymbol{\eta }}}}}}}})}^{2:N},\end{array}$$
(20)
where the matrices N, H and H contain the coefficients for the appropriate d-variate monomials. To identify near-resonances, we let S2:N be the matrix of integers whose columns are the powers of the d-variate monomials from order 2 to N. We then define a matrix Δ2:N containing all relevant integer linear combinations of eigenvalues as follows:
$${({{{{{{{{\boldsymbol{\Delta }}}}}}}}}^{2:N})}_{j,k}={({{{{{{{\rm{Im}}}}}}}}{{{{{{{\boldsymbol{\Lambda }}}}}}}})}_{j,j}-\mathop{\sum }\limits_{s=1}^{d}{({{{{{{{\rm{Im}}}}}}}}{{{{{{{\boldsymbol{\Lambda }}}}}}}})}_{s,s}{({{{{{{{{\bf{S}}}}}}}}}^{2:N})}_{s,k}.$$
(21)
Following the approach used in universal unfolding principles41, we collect in a set S the row and column indices of the entries of Δ2:N for which near-resonances occur, i.e., for which the corresponding entry of Δ2:N is smaller in norm than a small, preselected threshold. (The default threshold is 10−8 in SSMLearn.) The entries of H and H with indices contained in S are then set to zero but the corresponding monomial terms are retained in n(z; N). Conversely, coefficients of non-near-resonant entries of H and H are selected in a way so that the corresponding non–near-resonant monomials vanish from the normal form n(z; N). As a result, the matrix N is sparse, containing only the coefficients of essential, near-resonant monomials.
For example, if d = 2, N = 3 and the eigenvalues of Dr(0) form a complex pair λ = α0 ± iω0 with $${\omega }_{0}={{{{{{{\mathcal{O}}}}}}}}(1)$$, then we have
$${{{{{{{{\bf{S}}}}}}}}}^{2:N} =\left[\begin{array}{lllllll}2&1&0&3&2&1&0\\ 0&1&2&0&1&2&3\end{array}\right],\,\,{{{{{{{{\boldsymbol{\Delta }}}}}}}}}^{2:N}\\ =\left[\begin{array}{lllllll}-{\omega }_{0}&{\omega }_{0}&3{\omega }_{0}&-2{\omega }_{0}&0&2{\omega }_{0}&4{\omega }_{0}\\ -3{\omega }_{0}&-{\omega }_{0}&{\omega }_{0}&-4{\omega }_{0}&-2{\omega }_{0}&0&2{\omega }_{0}\end{array}\right].$$
(22)
Only two elements of Δ2:N are (near-) zero, and hence the reduced dynamics in extended normal form will require learning the following coefficients:
$${{{{{{{{\bf{H}}}}}}}}}_{\star } =\left[\begin{array}{lllllll}{h}_{20}&{h}_{11}&{h}_{02}&{h}_{30}&0&{h}_{12}&{h}_{03}\\ {\bar{h}}_{02}&{\bar{h}}_{11}&{\bar{h}}_{20}&{\bar{h}}_{03}&{\bar{h}}_{12}&0&{\bar{h}}_{30}\end{array}\right],\,\,\,\,\\ {{{{{{{\bf{N}}}}}}}} =\left[\begin{array}{lllllll}0&0&0&0&{h}_{21}&0&0\\ 0&0&0&0&0&{\bar{h}}_{21}&0\end{array}\right].$$
(23)
The corresponding cubic polar form (5) is then obtained from the relations z = (ρeiθ, ρeiθ) and h21 = β + iγ.
For a data-driven construction of the extended normal form (20), we first obtain an estimate for the Jacobian Dr(0) from linear regression. This determines the matrix B and the types of monomials arising in h−1 and n. Next, we note that the flow map $${{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{r}}}}}}}}}^{t}$$ of the SSM-reduced dynamics and the flow map $${{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{n}}}}}}}}}^{t}$$ of the extended normal form are connected through the conjugacy relationship $${{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{n}}}}}}}}}^{t}={{{{{{{{\bf{h}}}}}}}}}^{-1}\circ {{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{r}}}}}}}}}^{t}\circ {{{{{{{\bf{h}}}}}}}}$$. We find the nonzero complex coefficients of h−1 and n by minimizing the error in this exact conjugacy over the available P data points, represented in the η coordinates. Specifically, we determine the nonzero elements of H and N as
$$({{{{{\bf{H}}}}}}_{\star }^{* },{{{{{{\bf{N}}}}}}}^{* })= \arg {\mathop{\min }\limits_{{{{{{\mathbf{H}}}}}}_{\star }}},{{{{{\mathbf{N}}}}}}{\mathop{\sum }\limits_{j=1}^{P}}\Vert \frac{d}{dt}{{{{{{\bf{h}}}}}}}^{-1}({{{{{{\boldsymbol{\eta }}}}}}}_{j};{{{{{{\mathbf{H}}}}}}_{\star }})-{{{{{{{\bf{n}}}}}}}}({{{{{{{{\bf{h}}}}}}}}}^{-1}({{{{{{{{\boldsymbol{\eta }}}}}}}}}_{j};{{{{{{\bf{H}}}}}}_{\star }});{{{{{{{\bf{N}}}}}}}}){\Vert}^{2},\\ {({{{{{{{\bf{N}}}}}}}})}_{s,k}=0,\,\,\forall (s,k)\in S;{({{{{{{\mathbf{H}}}}}}_{\star }})}_{s,k}=0,\forall (s,k)\,\notin\, S.$$
(24)
Once h−1 is known, we obtain the coefficients H of h via regression.
As initial condition for the minimization problem (24), we set all unknown coefficients to zero. This initial guess assumes linear dynamics, which the minimization corrects as needed. We can compute the time derivative in (24) reliably using finite differences, provided that the sampling time Δt of the trajectory data is small compared to the fastest timescale of the SSM dynamics. For larger sampling times, one should use the discrete formulation of SSM theory, as discussed in section “Existence of SSMs” and29. In that formulation, the conjugacy error must be formulated for the 1-step prediction error of the normal form flow map $${{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{n}}}}}}}}}^{{{\Delta }}t}({{{{{{{\bf{z}}}}}}}})$$. The matrix defined in eq. (21) also carries over to the discrete time setting, with Λ defined as the diagonal matrix of the logarithms of the eigenvalues of $$D{{{{{{{{\boldsymbol{\varphi }}}}}}}}}_{{{{{{{{\bf{r}}}}}}}}}^{{{\Delta }}t}({{{{{{{\boldsymbol{0}}}}}}}})$$.
### Prediction of forced response from unforced training data
Forced SSMs continue to be embedded in our observable space, provided that we also include the phase of the forcing among our observables32. (In the simplest case of periodic forcing, this inclusion is not necessary, as we pointed out Section “Embedding SSMs via generic observables”). The quasiperiodic SSM-reduced normal form of system (1) in the observable space takes the general form
$$\begin{array}{c}{\dot{\rho }}_{j}={\alpha }_{j}({{{{{{{\boldsymbol{\rho }}}}}}}},{{{{{{{\boldsymbol{\theta }}}}}}}}){\rho }_{j}-\mathop{\sum}\limits_{{{{{{{{\bf{k}}}}}}}}\in {K}_{j}^{\pm }}{f}_{j,{{{{{{{\bf{k}}}}}}}}}\sin \left(\langle {{{{{{{\bf{k}}}}}}}},{{{{{{{\boldsymbol{\Omega }}}}}}}}\rangle t+{\phi }_{j,{{{{{{{\bf{k}}}}}}}}}\mp {\theta }_{j}\right),\\ \!\!\!\!\!\!{\dot{\theta }}_{j}={\omega }_{j}({{{{{{{\boldsymbol{\rho }}}}}}}},{{{{{{{\boldsymbol{\theta }}}}}}}})+\mathop{\sum}\limits_{{{{{{{{\bf{k}}}}}}}}\in {K}_{j}^{\pm }}\frac{{f}_{j,{{{{{{{\bf{k}}}}}}}}}}{{\rho }_{j}}\cos \left(\langle {{{{{{{\bf{k}}}}}}}},{{{{{{{\boldsymbol{\Omega }}}}}}}}\rangle t+{\phi }_{j,{{{{{{{\bf{k}}}}}}}}}\mp {\theta }_{j}\right),\end{array}\,\,\,\,\,\,\,j=1,2,...m,\,\,\,\,\,\,{{{{{{{\bf{k}}}}}}}}\in {{\mathbb{Z}}}^{\ell },\,\,\,\,\,\,{{{{{{{\boldsymbol{\Omega }}}}}}}}\in {{\mathbb{R}}}_{+}^{\ell },$$
(25)
where the terms fj,k and ϕj,k are the forcing amplitudes and phases for each mode of the SSM and for each forcing harmonic 〈k, Ω〉, while $${K}_{j}^{\pm }$$ are the set containing the indexes k of the resonant forcing frequencies for mode j (see the Supplementary Information). The normal form (25) will capture non-linearizable dynamics arising from resonant interactions between the eigenfrequencies of the spectral subspace E (which may also contain internal resonances) and the external forcing frequencies in Ω. One can use numerical continuation70 to find nontrivial co-existing steady states (such as periodic orbits and invariant tori) in eq. (25) under varying forcing amplitudes and forcing frequencies.
To predict forced response from the SSM-based model trained on unforced data, the forcing amplitude f relevant for eq. (7) in the observable space needs to be related to the forcing amplitude $$\left|\epsilon {{{{{{{{\bf{f}}}}}}}}}_{1}\right|$$ relevant for system (1) in the physical phase space. This involves (1) employing a single forcing amplitude-frequency pair $$\left(\left|\epsilon {{{{{{{{\bf{f}}}}}}}}}_{1}\right|,{{\Omega }}\right)$$ in the experiment (2) measuring the periodic observable response y(t) (3) computing the corresponding normalized reduced and normalized response amplitude ρ0 (4) substituting ρ0 into the first formula in (8) and (5) solving for f in closed form. This f can then be used to make a prediction for the full FRC and response phase via (8) in the experiment for arbitrary Ω forcing frequencies. The predicted FRC may have several connected components, including isolated responses (isolas) that are notoriously difficult to detect by numerical or experimental continuation68.
### Summary of the algorithm
The data-driven model reduction method used in this paper is available in the open-source MATLAB® package SSMLearn. User input is the measured trajectory data of the autonomous dynamical system (ϵ = 0), the SSM dimension d, the polynomial orders or approximation (M, N) for the SSM and for the extended normal form, as well as the type of the dynamical system (discrete or continuous). If the number of observables is not sufficient for manifold embedding, the data is automatically augmented with delays to reach the minimum embedding dimension p = 2d + 1. If the manifold learning returns poor results (due to, e.g., insufficient closeness of the data to the SSM), then the starting value of p can be increased until a good embedding is found. Then, the algorithm learns the SSM geometry in observable space and, after unsupervised detection of the required normal form, identifies the extended normal form of the reduced dynamics. The level of accuracy can be increased with larger polynomial orders, keeping in mind that excessive orders may lead to overfitting.
SSMLearn also offers all the tools we have used in this paper to analyze the reduced dynamics and make predictions for forced response from unforced training data. In particular, it contains the MATLAB®-based numerical continuation core COCO70. which can compute steady state and help with the design of nonlinear control strategies. In principle, there are no restrictions on the dimensions of the reduced-order model, yet the larger the SSM is, the more computationally expensive the problem becomes.
Qualitative or partial a priori knowledge of the linearized dynamics (e.g., some linearized modes and frequencies) helps in finding good initial conditions for trajectories to be used in SSMLearn. For example, the resonance decay method56 (which we exploited in our sloshing example), targets a specific 2-dimensional, stable SSM in laboratory experiments. This method consists of empirically isolating a resonant periodic motion on the SSM based on its locally maximal amplitude response under a forcing frequency sweep. Discontinuing the forcing will then generate transient decay towards the equilibrium in a close proximity of the SSM. For noisy data, filtering or dimensionality reduction can efficiently de-noise the data67, provided that the polynomial orders used for the description of the SSM and its reduced dynamics are not excessively large (see the Supplementary Information). For higher-dimensional SSMs, it is desirable to collect diverse trajectories to avoid bias towards specific motions. Good practice requires splitting the data sets into training, testing and validation parts.
### Algorithm 1
SSMLearn
Input parameters: SSM dimension d, polynomial approximation orders (M, N), selection among discrete or continuous-time dynamics
Input data: measured unforced trajectories
Output: SSM geometry, extended normal form of reduced dynamics, predictions for forced response.
1 Embed data in a suitable p-dimensional observable space with p > 2d.
2 Identify the manifold parametrization in reduced coordinates.
3 Estimate the normalized reduced dynamics after an automated identification of the required type of extended normal form.
4 Run analytics and prediction of forced response on the SSM-reduced and normalized model.
## Data availability
All data discussed in the results presented here is publicly available in the SSMLearn repository at github.com/haller-group/SSMLearn.
## Code availability
The code supporting the results presented here is publicly available in the SSMLearn repository at github.com/haller-group/SSMLearn.
## References
1. Holmes, P.J., Lumley, J.L., Berkooz, G., & Rowley, C.W. Turbulence, Coherent Structures, Dynamical Systems and Symmetry 2nd edn, (Cambridge Monographs on Mechanics. Cambridge University Press, 2012).
2. Awrejcewicz, J., Krys’ko, V.A., & Vakakis, A.F. Order Reduction by Proper Orthogonal Decomposition (POD) Analysis, 279–320 (Springer, Berlin, Heidelberg, 2004).
3. Lu, K. et al. Review for order reduction based on proper orthogonal decomposition and outlooks of applications in mechanical systems. Mech. Sys. Signal Proc. 123, 264–297 (2019).
4. Brunton, S. L., Proctor, J. L. & Kutz, J. N. Discovering governing equations from data by sparse identification of nonlinear dynamical systems. Proc. Natl Acad. Sci. 113, 3932–3937 (2016).
5. Mohamed, K.S. Machine Learning for Model Order Reduction (Springer, Cham, 2018).
6. Daniel, T., Casenave, F., Akkari, N. & Ryckelynck, D. Model order reduction assisted by deep neural networks (rom-net). Adv. Model. Simul. Eng. Sci. 7, 105786 (2020).
7. Calka, M. et al. Machine-learning based model order reduction of a biomechanical model of the human tongue. Computer Methods Prog. Biomedicine 198, 105786 (2021).
8. Loiseau, J.-C., Brunton, S.L., & Noack, B.R.From the POD-Galerkin method to sparse manifold models, 279–320 (De Gruyter, Berlin, 2020).
9. Karniadakis, G. E. et al. Physics-informed machine learning. Nat. Rev. Phys. 123, 422–440 (2021).
10. Li, S. & Yang, Y. Data-driven identification of nonlinear normal modes via physics-integrated deep learning. Nonlinear Dyn. 106, 3231–3246 (2021).
11. Fernex, D., Noack, B. R. & Semaan, R. Cluster-based network modeling–From snapshots to complex dynamical systems. Sci. Adv. 7, eabf5006 (2021).
12. Schmid, P. J. Dynamic mode decomposition of numerical and experimental data. J. Fluid Mech. 656, 5–28 (2010).
13. Kutz, J.N., Brunton, S.L., Brunton, B.W., & Proctor, J.L. Dynamic Mode Decomposition (SIAM, Philadelphia, PA, 2016).
14. Rowley, C. W., Mezić, I., Bagheri, S., Schlachter, P. & Henningson, D. S. Spectral analysis of nonlinear flows. J. Fluid Mech. 641, 115–127 (2009).
15. Mezić, I. Analysis of fluid flows via spectral properties of the Koopman operator. Ann. Rev. Fluid Mech. 45, 357–378 (2013).
16. Mauroy, A., Mezić, I., & Susuki, Y. The Koopman Operator in Systems and Control Concepts, Methodologies, and Applications: Concepts, Methodologies, and Applications (Springer, New York, 2020).
17. Lusch, B., Kutz, J. N. & Brunton, S. L. Deep learning for universal linear embeddings of nonlinear dynamics. Nat. Commun. 9, 1–10 (2018).
18. Otto, S. E. & Rowley, C. W. Linearly recurrent autoencoder networks for learning dynamics. SIAM J. Appl. Dynamical Syst. 18, 558–593 (2019).
19. Kaiser, E., Kutz, J. N. & Brunton, S. L. Data-driven discovery of koopman eigenfunctions for control. Mach. Learn.: Sci. Technol. 2, 035023 (2021).
20. Page, J. & Kerswell, R. R. Koopman mode expansions between simple invariant solutions. J. Fluid Mech. 879, 1–27 (2019).
21. Brunton, S. L., Brunton, B. W., Proctor, J. L. & Kutz, J. N. Koopman invariant subspaces and finite linear representations of nonlinear dynamical systems for control. PLoS ONE 11, 1–19 (2016b).
22. Bagheri, S. Koopman-mode decomposition of the cylinder wake. J. Fluid Mech. 726, 596–623 (2013).
23. Page, J. & Kerswell, R. R. Koopman analysis of Burgers equation. Phys. Rev. Fluids 3, 071901 (2018).
24. Dowell, E. H. Panel flutter - a review of the aeroelastic stability of plates and shells. AIAA J. 8, 385–399 (1970).
25. Abramian, A., Virot, E., Lozano, E., Rubinstein, S. M. & Schneider, T. M. Nondestructive prediction of the buckling load of imperfect shells. Phys. Rev. Lett. 125, 225504 (2020).
26. Podder, P., Mallick, D., Amann, A. & Roy, S. Influence of combined fundamental potentials in a nonlinear vibration energy harvester. Sci. Rep. 6, 37292 (2016).
27. Orosz, G. & Stépán, G. Subcritical hopf bifurcations in a car-following model with reaction-time delay. Proc. R. Soc. A 462, 2643–2670 (2006).
28. Ashwin, P., Wieczorek, S., Vitolo, R. & Cox, P. Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system. Philos. Trans. R. Soc. A 370, 1166–1184 (2012).
29. Haller, G. & Ponsioen, S. Nonlinear normal modes and spectral submanifolds: existence, uniqueness and use in model reduction. Nonlinear Dyn. 86, 1493–1534 (2016).
30. Whitney, H. The self-intersections of a smooth n-manifold in 2n-space. Ann. Math. 45, 220–246 (1944).
31. Stark, J., Broomhead, D. S., Davies, M. E. & Huke, J. Takens embedding theorems for forced and stochastic systems. Nonlinear Anal.: Theory, Methods Appl. 30, 5303–5314 (1997).
32. Stark, J. Delay embeddings for forced systems. I. deterministic forcing. J. Nonlinear Sci. 9, 255–332 (1999).
33. Guckenheimer, J. & Holmes, P. Nonlinear Oscillations, Dynamical Systems and Bifurcation of Vector Fields (Springer, New York, 1983).
34. Fenichel, N. Persistence and smoothness of invariant manifolds for flows. Indiana Univ. Math. J. 21, 193–226 (1971).
35. Kuramoto, Y. Chemical Oscillations, Waves and Turbulence (Springer, Berlin, 1984).
36. Jain, S. & Haller, G. How to compute invariant manifolds and their reduced dynamics in high-dimensional finite-element models? (Nonlinear Dyn., 2021).
37. Sauer, T., Yorke, J. A. & Casdagli, M. Embedology. J. Stat. Phys. 65, 579–616 (1997).
38. Takens, F. Detecting strange attractors in turbulence. In D. Rand and L. Young, editors, Dynamical Systems and Turbulence, Warwick 1980, 366–381 (Springer Berlin Heidelberg, 1981).
39. Poincaré, H. Les Méthodes Nouvelles de la Mécanique Céleste. (Gauthier-Villars et Fils, Paris, 1892).
40. Sternberg, S. On the structure of local homeomorphisms of euclidean n-space, II. Am. J. Math. 80, 623–631 (1958).
41. Murdock, J. Normal Forms and Unfoldings for Local Dynamical Systems. (Springer Monographs in Mathematics. Springer-Verlag New York, 2003).
42. Ponsioen, S., Pedergnana, T. & Haller, G. Automated computation of autonomous spectral submanifolds for nonlinear modal analysis. J. Sound Vib. 420, 269–295 (2018).
43. Landau, L. D. On the problem of turbulence. Dokl. Akad. Nauk SSSR 44, 339–349 (1944).
44. Stuart, J. T. On the non-linear mechanics of wave disturbances in stable and unstable parallel flows. Part 1. The basic behaviour in plane Poiseuille flow. J. Fluid Mech. 9, 353–370 (1960).
45. Fujimura, K. Centre manifold reduction and the Stuart-Landau equation for fluid motions. Proc.: Math., Phys. Eng. Sci. 453, 181–203 (1997).
46. Szalai, R., Ehrhardt, D. & Haller, G. Nonlinear model identification and spectral submanifolds for multi-degree-of-freedom mechanical vibrations. Proc. R. Soc. A 473, 20160759 (2017).
47. Breunung, T. & Haller, G. Explicit backbone curves from spectral submanifolds of forced-damped nonlinear mechanical systems. Proc. R. Soc. A 474, 20180083 (2018).
48. Cenedese, M., Axås, J., Yang, H., Eriten, M., & Haller, G. Data-driven nonlinear model reduction to spectral submanifolds in mechanical systems. arXiv:2110.01929, 2021.
49. Jain, S., Tiso, P. & Haller, G. Exact nonlinear model reduction for a von Kármán beam: slow-fast decomposition and spectral submanifolds. J. Sound Vib. 423, 195–211 (2018).
50. Barkley, D. & Henderson, R. D. Three-dimensional floquet stability analysis of the wake of a circular cylinder. J. Fluid Mech. 322, 215–241 (1996).
51. Noack, B. R., Afanasiev, K., Morzyński, M., Tadmor, G. & Thiele, F. A hierarchy of low-dimensional models for the transient and post-transient cylinder wake. J. Fluid Mech. 497, 335–363 (2003).
52. Rowley, C. W. & Dawson, S. T. M. Model reduction for flow analysis and control. Annu. Rev. Fluid Mech. 49, 387–417 (2017).
53. Taylor, G. I. An experimental study of standing waves. Proc. R. Soc. Lond. Ser. A. Math. Phys. Sci. 218, 44–59 (1953).
54. Ockendon, J. R. & Ockendon, H. Resonant surface waves. J. Fluid Mech. 59, 397–413 (1973).
55. Bäuerlein, B & Avila, K. Phase lag predicts nonlinear response maxima in liquid-sloshing experiments. J. Fluid Mech. 925, 2021 (2021).
56. Peeters, M., Kerschen, G. & Golinval, J. C. Dynamic testing of nonlinear vibrating structures using nonlinear normal modes. J. Sound Vib. 330, 486–509 (2011).
57. Deng, N., Noack, B. R., Morzyński, M. & Pastur, L. R. Low-order model for successive bifurcations of the fluidic pinball. J. Fluid Mech. 884, A37 (2020).
58. Shaw, S. W. & Pierre, C. Normal modes for non-linear vibratory systems. J. Sound Vib. 164, 85–124 (1993).
59. Renson, L., Kerschen, G. & Cochelin, B. Numerical computation of nonlinear normal modes in mechanical engineering. J. Sound Vib. 364, 177–206 (2016).
60. Neild, S. A., Champneys, A. R., Wagg, D. J., Hill, T. L. & Cammarano, A. The use of normal forms for analysing nonlinear mechanical vibrations. Philos. Trans. R. Soc. A 373, 20140404 (2015).
61. Cirillo, G. I., Mauroy, A., Renson, L., Kerschen, G. & Sepulchre, R. A spectral characterization of nonlinear normal modes. J. Sound Vib. 377, 284–301 (2016).
62. Cabré, X., Fontich, E. & de la Llave, R. The parameterization method for invariant manifolds i: Manifolds associated to non-resonant subspaces. Indiana Univ. Math. J. 52, 283–328 (2003).
63. Kogelbauer, F. & Haller, G. Rigorous model reduction for a damped-forced nonlinear beam model: An infinite-dimensional analysis. J. Nonlinear Sci. 28, 1109–1150 (2018).
64. Haro, A. & de la Llave, R. A parameterization method for the computation of invariant tori and their whiskers in quasi-periodic maps: rigorous results. J. Differential Eqs. 228, 530–579 (2006).
65. Haro, A., Canadell, M., Figueras, J.-L., Luque, A., & Mondelo, J.M. The Parameterization Method for Invariant Manifolds: from Rigorous Results to Effective Computations. (Springer, New York, 2016).
66. Szalai, R. Invariant spectral foliations with applications to model order reduction and synthesis. Nonlinear Dyn. 101, 2645–2669 (2020).
67. Bishop, C.M. Pattern Recognition and Machine Learning. (Information Science and Statistics. Springer-Verlag New York, 2006).
68. Ponsioen, S., Pedergnana, T. & Haller, G. Analytic prediction of isolated forced response curves from spectral submanifolds. Nonlinear Dyn. 98, 2755–2773 (2019).
69. Ponsioen, S., Jain, S. & Haller, G. Model reduction to spectral submanifolds and forced-response calculation in high-dimensional mechanical systems. J. Sound Vib. 488, 115640 (2020).
70. Dankowicz, H. & Schilder, F. Recipes for Continuation. (Society for Industrial and Applied Mathematics, 2013).
## Acknowledgements
B.B. and K.A. acknowledge financial support from the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) in the framework of the research unit FOR 2688 ‘Instabilities, Bifurcations and Migration in Pulsatile Flows’ under Grant No. AV 156/1-1. K.A. acknowledges funding for an ’Independent Project for Postdocs’ from the Central Research Development Fund of the University of Bremen.
## Author information
Authors
### Contributions
M.C. and G.H. designed the research. M.C. carried out the research. M.C. and J.A. developed the software and analyzed the examples. B.B. and K.A. performed the liquid sloshing experiments and participated in their analysis. M.C. and G.H. wrote the paper. G.H. lead the research team.
### Corresponding author
Correspondence to George Haller.
## Ethics declarations
### Competing interests
The authors declare no competing interests.
## Peer review
### Peer review information
Nature Communications thanks Bernd Noack, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work.
Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
## Rights and permissions
Reprints and Permissions
Cenedese, M., Axås, J., Bäuerlein, B. et al. Data-driven modeling and prediction of non-linearizable dynamics via spectral submanifolds. Nat Commun 13, 872 (2022). https://doi.org/10.1038/s41467-022-28518-y
• Accepted:
• Published:
• DOI: https://doi.org/10.1038/s41467-022-28518-y
• ### Enhancing computational fluid dynamics with machine learning
• Ricardo Vinuesa
• Steven L. Brunton
Nature Computational Science (2022)
By submitting a comment you agree to abide by our Terms and Community Guidelines. If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate. | 2022-07-03 01:12:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7923263311386108, "perplexity": 1322.2747705718657}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656104205534.63/warc/CC-MAIN-20220702222819-20220703012819-00311.warc.gz"} |
https://www.danielmathews.info/2009/03/08/chord-diagrams-contact-topological-quantum-field-theory-and-contact-categories/ | (74 pages) – on the arXivpublished in Algebraic & Geometric Topology.
Abstract: We consider contact elements in the sutured Floer homology of solid tori with longitudinal sutures, as part of the (1+1)-dimensional topological quantum field theory defined by Honda–Kazez–Mati\'{c} in \cite{HKM08}. The $$\mathbb{Z}_2$$ $$SFH$$ of these solid tori forms a “categorification of Pascal’s triangle”, and contact structures correspond bijectively to chord diagrams, or sets of disjoint properly embedded arcs in the disc. Their contact elements are distinct and form distinguished subsets of $$SFH$$ of order given by the Narayana numbers. We find natural “creation and annihilation operators” which allow us to define a QFT-type basis of each $$SFH$$ vector space, consisting of contact elements. Sutured Floer homology in this case reduces to the combinatorics of chord diagrams. We prove that contact elements are in bijective correspondence with comparable pairs of basis elements with respect to a certain partial order, and in a natural and explicit way. The algebraic and combinatorial structures in this description have intrinsic contact-topological meaning. In particular, the QFT-basis of $$SFH$$ and its partial order have a natural interpretation in pure contact topology, related to the contact category of a disc: the partial order enables us to tell when the sutured solid cylinder obtained by “stacking” two chord diagrams has a tight contact structure. This leads us to extend Honda’s notion of contact category to a “bounded” contact category, containing chord diagrams and contact structures which occur within a given contact solid cylinder. We compute this bounded contact category in certain cases. Moreover, the decomposition of a contact element into basis elements naturally gives a triple of contact structures on solid cylinders which we regard as a type of “distinguished triangle” in the contact category. We also use the algebraic structures arising among contact elements to extend the notion of contact category to a 2-category.
chord_diagrams_contact_categories
Chord diagrams, contact-topological quantum field theory, and contact categories | 2021-07-26 22:50:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6983364820480347, "perplexity": 444.00958512048305}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046152156.49/warc/CC-MAIN-20210726215020-20210727005020-00071.warc.gz"} |
http://clay6.com/qa/26316/a-liquid-is-boiled-and-its-temperature-t-versus-heat-supplied-q-graph-is-pl | Browse Questions
# A liquid is boiled and its temperature (T) versus Heat supplied (Q) graph is plotted. Which of the following is the correct graph?
At boiling temperature, the heat supplied is spent as latent heat of vaporization.
The temperature doesnt rise during this interval.
So, the flat portion in (A) denotes the latet heat of vaporization. | 2017-03-29 07:21:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7994503974914551, "perplexity": 2852.4979702348737}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218190234.0/warc/CC-MAIN-20170322212950-00583-ip-10-233-31-227.ec2.internal.warc.gz"} |
http://mathhelpforum.com/number-theory/148963-couldn-t-solve-one.html | Math Help - Couldn't solve this one
1. Couldn't solve this one
Positive integers $a, b$ are both relatively prime and less than or equal to 2008. $a^2 + b^2$ is a perfect square.
$b$ has the same digits as $a$ in the reverse order. The number of such ordered pairs $(a, b)$ is _________ .
I started with 2 digits:
Let $a=xy$ and $b=yx$
$(10x+y)^2 + (10y+x)^2$
$101x^2 + 40xy + 101y^2$
which can't be factorized and isn't a perfect square.
Tried the same with 3 digits and ended up with this:
$10001x^2 + 200y^2 + 10001z^2 + 400xz + 2020xy + 2020yz$
This also isn't factorizable.
I've simply got no idea of how to proceed from here.
2. There simply is NONE. You sure your question is CORRECTLY worded?
3. Originally Posted by Wilmer
There simply is NONE. You sure your question is CORRECTLY worded?
Dunno, someone challenged me to solve it. Guess it was his idea of a joke.
My Apologies..
4. I am not sure but my answer is zero !
It is a famous property ( which is not what i am confused ) , $a-b \equiv 0 \bmod{9}$ , the proof is as follows :
Let $a = \sum_{i=0}^n a_i 10^i ~~ a_i \in \{\ 0,1,2,...,9 \}\$
so $b = \sum_{i=0}^n a_i 10^{n-i}$ and $a-b = \sum_{i=0}^n a_i ( 10^i - 10^{n-i} ) \equiv \sum_{i=0}^n a_i ( 1-1) \bmod{9} \equiv 0 \bmod{9}$
We have $a^2 + b^2 = c^2$
Since $a,b$ coprime , they can be expressed as $m^2 - n^2 ~,~ 2mn$ , wlog let $a = m^2 - n^2 ~ b = 2mn$ so we have
$a-b = m^2 - 2mn - n^2 \equiv 0 \bmod{9}$
$(m-n)^2 - 2n^2 \equiv 0 \bmod{9}$
I consider the form $x^2 - 2y^2$ whether it can be the multiple of $9$
Be caution the quadratic residues are $[R] = \{\ 0,1,4,7 \}$ so $2[R] = \{\ 0,2,8,14 \}= \{\ 0,2,5,8 \}$ , since the intersection of the sets is just $\{\ 0 \}$ , we conclude $x^2 - 2y^2 \equiv 0 \bmod{9}$ iff $x \equiv y \equiv 0 \bmod{3}$ . Therefore , $m-n \equiv n \equiv 0 \bmod{3} ~ \implies m \equiv n \equiv 0 \bmod{3}$ which is false because , $(a,b)=1 \implies (m,n)=1$ so we can never find out any odered pair $(a,b)$ .
EDIT: I have made it more complicated , in fact we can consider the quadratic residues from here :
$a^2 + b^2 = c^2$
It is easy to show $a \equiv b \bmod{9}$ since we have already proved that $a - b \equiv 0 \bmod{9}$ , so we have :
$2a^2 \equiv c^2 \bmod{9}$ consider the residues as i mentioned .
5. Originally Posted by simplependulum
EDIT: I have made it more complicated , in fact we can consider the quadratic residues from here :
$a^2 + b^2 = c^2$
We could simply apply the "pythagorean triplet" rules, couldn't we, SimpleP ? | 2015-04-28 06:31:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 35, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.954955518245697, "perplexity": 281.2999207900289}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-18/segments/1429246660724.78/warc/CC-MAIN-20150417045740-00156-ip-10-235-10-82.ec2.internal.warc.gz"} |
https://chem.washington.edu/lecture-demos/baby-battery | # Baby Battery
## Chemicals and Solutions
• Strips of Magnesium and Copper
• 0.5 M copper sulfate
• 0.5 M sodium sulfate
## Materials
• 600 mL beaker with one holed rubber stopper to fit
• dialysis tubing
• flash light bulb in a small lamp holder
## Procedure
1. Cut about 6 inches of dialysis tubing and soften it in water. Tie off one end with a double knot to make a leak proof bag.
2. Fill the bag 2/3 full with 0.5 M copper sulfate solution. Then place a long thin strip of copper into the copper sulfate solution such that one end of the strip sticks out of the bag.
3. Place the prepared bag in a 600 mL beaker.
4. Fill the beaker about half full with 0.5 M sodium sulfate.
5. Place a long strip of magnesium in the sodium sulfate solution such that one end of the strip is above the lip of the beaker.
6. Stopper the apparatus such that the dialysis bag and the metal strips are held in place. The top of the bag and the tops of the metal strips will be sticking out.
7. Use alligator leads to connect the metal strips to the light bulb. The bulb will light up.
## Discussion
The reaction:
$$\ce{ Mg_{(s)} + Cu2+_{(aq)} -> Mg2+_{(aq)} + Cu_{(s)} + 1.5V }$$
Electrolysis of water produces bubbles of hydrogen and oxygen gas. | 2021-01-20 11:14:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.33288151025772095, "perplexity": 3247.585448455268}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610703519984.9/warc/CC-MAIN-20210120085204-20210120115204-00106.warc.gz"} |
http://mymathforum.com/calculus/6574-application-differentiation.html | My Math Forum Application of differentiation
Calculus Calculus Math Forum
April 18th, 2009, 09:15 PM #1 Newbie Joined: Apr 2009 Posts: 14 Thanks: 0 1. The area between two varying concentric circle is at all times 9phi in^2. the rate of change of the area of the larger circle is 10phi in^2/sec. How fast is the circumference of the smaller circle changing when it has area 16phi in^2 2. A wall of a building is to be braced by a beam that must rest on the ground and pass over a vertical wall 10ft high that is 8 ft from the building. find the length L of the shortest beam that can be used. Last edited by skipjack; April 16th, 2015 at 07:06 PM.
April 19th, 2009, 12:41 PM #2 Senior Member Joined: Dec 2008 Posts: 251 Thanks: 0 Re: Application of differentiation I'll work out the answer to the first problem for you. First, we give variable names to all the quantities under consideration: $A$ and $a$ for the area of the larger and smaller circle respectively, $R$ and $r$ for the radii, $C$ and $c$ for the circumferences. As it turns out, we will not need to consider $R$, but we have the formulas $\begin{array}{rclrcl} A &=& \pi R^2 & C &=& 2\pi R \\ a &=& \pi r^2 & c &=& 2\pi r \\ A\,-\,a &=& 9\phi\,\mbox{in}^2 \\ \frac{dA}{dt} &=& 10\phi\,\frac{\mbox{in^2}}{\mbox{sec}}. \\ \end{array}$ Now, we find the circumference of the smaller circle in terms of the area of the larger: $c\,=\,2\pi r\,=\,2\pi\sqrt{\frac{a}{\pi}}\,=\,2\sqrt{\pi a}\,=\,2\sqrt{\pi(A\,-\,9\phi)}$ and differentiate by $t$: $\frac{dc}{dt}\,=\,\frac{dc}{dA}\frac{dA}{dt}\,=\,2 \cdot\frac{1}{2}(\pi(A\,-\,9\phi))^{-\frac{1}{2}}\cdot \pi \cdot\frac{dA}{dt}\,=\,\sqrt{\frac{\pi}{A\,-\,9\phi}}\cdot \frac{dA}{dt}\,=\,\sqrt{\frac{\pi}{a}}\cdot \frac{dA}{dt}.$ At $\frac{dA}{dt}\,=\,10\phi\,\frac{\mbox{in}^2}{\mbox {sec}}$ and $a\,=\,16\phi\,\mbox{in}^2$, we have $\frac{dc}{dt}\,=\,\sqrt{\frac{\pi}{16\phi}}\cdot 10\phi\,=\,\frac{5}{2}\sqrt{\pi\phi}\,\frac{\mbox{ in}}{\mbox{sec}}.$
April 15th, 2015, 08:11 AM #3
Math Team
Joined: Jul 2011
From: Texas
Posts: 3,102
Thanks: 1677
Quote:
Originally Posted by Tear_Grant 1. The area between two varying concentric circle is at all times 9phi in^2. the rate of change of the area of the larger circle is 10phi in^2/sec. How fast is the circumference of the smaller circle changing when it has area 16phi in^2
I assume you mean pi ($\pi$) , and not phi ($\phi$)
Quote:
The area between two varying concentric circle is at all times 9phi in^2.
$\displaystyle \pi(R^2-r^2) = 9\pi$
$\displaystyle R^2 - r^2 = 9$
$\displaystyle \frac{d}{dt}\left(R^2 - r^2 = 9\right)$
$\displaystyle 2R\frac{dR}{dt} - 2r\frac{dr}{dt} = 0$
$\displaystyle R\frac{dR}{dt} - r\frac{dr}{dt} = 0$
Quote:
the rate of change of the area of the larger circle is 10phi in^2/sec.
$\displaystyle A_R = \pi R^2$
$\displaystyle \frac{dA_R}{dt} = 2\pi R \frac{dR}{dt} = 10 \pi \implies R\frac{dR}{dt} = 5$
Quote:
How fast is the circumference of the smaller circle changing when it has area 16phi in^2
$\displaystyle 16\pi = \pi r^2 \implies r = 4$
$\displaystyle C_r = 2\pi r$
$\displaystyle \frac{dC_r}{dt} = 2\pi \frac{dr}{dt}$
looks like you need to find the value of $\displaystyle \frac{dr}{dt}$ when $\displaystyle r = 4$ ... you have enough information above to determine that value.
Last edited by skipjack; April 16th, 2015 at 07:42 PM.
April 15th, 2015, 10:15 AM #4
Math Team
Joined: Jul 2011
From: Texas
Posts: 3,102
Thanks: 1677
Quote:
2. A wall of a building is to be braced by a beam that must rest on the ground and pass over a vertical wall 10ft high that is 8 ft from the building. find the length L of the shortest beam that can be used.
Note the sketch for the assignment of variables.
Pythagoras ...
$\displaystyle L^2 = (x+8 )^2 + (y+10)^2$
similar triangles ...
$\displaystyle \frac{y}{8} = \frac{10}{x}$
use the similar triangles proportion to solve for y in terms of x (or x in terms of y, your choice) and substitute into the Pythagoras equation to get $L^2$ in terms of a single variable.
let $\displaystyle L^2 = Z$ ...
$\displaystyle Z = (x+8 )^2 + (y+10)^2$
note that minimizing $Z$ will also minimize $L$.
find $\displaystyle \frac{dZ}{dx}$ or $\displaystyle \frac{dZ}{dy}$ and minimize.
Attached Images
Ladder prob.jpg (18.7 KB, 1 views)
,
,
### a wall of a building is to be braced by beam which must pass
Click on a term to search for related topics.
Thread Tools Display Modes Linear Mode
Similar Threads Thread Thread Starter Forum Replies Last Post shindigg Calculus 7 March 4th, 2017 09:52 PM hatchelhoff Calculus 2 March 4th, 2014 09:20 AM Ronaldo Calculus 3 December 29th, 2012 04:41 PM shindigg Calculus 4 September 4th, 2009 07:24 AM Ronaldo Complex Analysis 3 December 31st, 1969 04:00 PM
Contact - Home - Forums - Cryptocurrency Forum - Top | 2019-11-22 21:00:07 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 14, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8007471561431885, "perplexity": 1123.445492476228}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496671548.98/warc/CC-MAIN-20191122194802-20191122223802-00173.warc.gz"} |
https://www.geosci-instrum-method-data-syst.net/9/79/2020/ | Journal topic
Geosci. Instrum. Method. Data Syst., 9, 79–104, 2020
https://doi.org/10.5194/gi-9-79-2020
Geosci. Instrum. Method. Data Syst., 9, 79–104, 2020
https://doi.org/10.5194/gi-9-79-2020
Research article 19 Mar 2020
Research article | 19 Mar 2020
# Design and applications of drilling trajectory measurement instrumentation in an ultra-deep borehole based on a fiber-optic gyro
Design and applications of drilling trajectory measurement instrumentation in an ultra-deep borehole based on a fiber-optic gyro
Yimin Liu1,2, Chenghu Wang1, Guangqiang Luo3, and Weifeng Ji3 Yimin Liu et al.
• 1Institute of Crustal Dyanmics, CEA, Beijing, 100085, China
• 2School of Manufacturing Science and Engineering, Sichuan University, Chengdu, 610065, China
• 3The Institute of Exploration Technology of CAGS, Chengdu, 611730, China
Correspondence: Yimin Liu (153973418@qq.com)
Abstract
The working environment in hot dry rock boreholes, encountered in deep geothermal investigation drilling and ultra-deep geological drilling (up to 5000 m), is very difficult at the present stage. We have developed a drilling trajectory measuring instrumentation (DTMI), which is based on the interference fiber-optic gyro (FOG). This can work continuously, for 4 h, in an environment where the ambient temperature does not exceed 270 C and the pressure does not exceed 120 MPa. The DTMI is mainly divided into three parts: an external confining tube, a metal vacuum flask, and a FOG measurement probe. Here, we focus on the mechanical design, strength, and pressure field simulation analysis for the external tube, the structural design and temperature field simulation analysis for the vacuum flask, and the FOG Shupe error analysis and compensation in the temperature field. Finally, through the engineering applications of the SK-2 east borehole of the China Continental Scientific Drilling (CCSD) project and the geothermal well of Xingreguan-2, the data measurements of the drilling trajectory were used to analyze the stability of the DTMI. The instrument realizes long-duration, high-stability work in the process of making trajectory measurements in an ultra-deep hole. The instrument has the characteristic of anti-electromagnetic interference and enables work to be carried out in the blind zone of existing technologies and instrumentation. Therefore, DTMI has great potential in the promotion and development of geological drilling technology.
1 Introduction
With the recent rapid development of the national economy, shortages in resources and energy have become major barriers to economic development. In order to obtain more resources and energy, we now need to explore deeper into the Earth's crust. Therefore, the depth of drill holes for mineral resource exploration and extraction is constantly increasing. Furthermore, geothermal energy and shale gas are considered to be new “green energies” that can maintain sustainable development, which is of both domestic and foreign concern. Therefore, the development of drilling engineering, in the field of high-temperature geothermal energy and shale gas, is included in the development plan of China (Chen et al., 2015). With increases in borehole depths, there is also an increase in temperature. In general, the temperature gradient for a normal formation is about $\mathrm{3}\phantom{\rule{0.125em}{0ex}}{}^{\circ }\mathrm{C}/\mathrm{100}\phantom{\rule{0.125em}{0ex}}\mathrm{m}$; the maximum temperature of a 3000 m deep oil and gas well can reach over 100 C. When the borehole depth reaches around 8000 m, the temperature will reach 250 C. In hot dry rock, areas of high geothermal energy, and other areas with anomalous geothermal gradients, the temperature will be higher (Osipova et al., 2015; Chen et al., 2015). When the depth of the borehole (well) increases, the degree of borehole deviation also increases. Borehole deviation will have a significant impact on the quality of the borehole and construction safety. With the increasing depth of deep mineral resource exploration and geothermal resource exploration and development, the drilling path or trajectory needs to be more and more accurate (Xu et al., 1996). Therefore, borehole deviation and drilling trajectory must be measured to provide technical support for drilling construction.
One of the main technical problems in ultra-deep borehole drilling construction is how to measure the drilling trajectory in a high-temperature environment while ensuring measurement accuracy. At present, the drilling trajectory measurement technique is to measure the zenith angle of the hole's (well's) body axis (the angle between the tangent of the wellbore axis and the vertical), the azimuth angle (the direction of the horizontal projection of the tangential line of the wellbore axis), and the depth of the hole (the depth position of the measuring point in the well). These three main geometric parameters are then used in an appropriate calculation method to calculate the spatial position of the measured point indirectly, so as to obtain the trajectory data for the hole (well) (Xiao et al., 1989). Figure 1 shows the coordinate system of a borehole trajectory.
Figure 1Coordinate system of a borehole trajectory.
As shown in Fig. 1, the curve OAC is the trajectory of the borehole in the coordinate system, the zenith angle (θA) is the angle between the tangent at the point and the vertical line, the azimuth angle (αA) is the angle between the projection of the tangent line onto the horizontal plane and the true north direction, and the hole depth (HA) is the distance from the orifice O to the bore axis of the point.
The hole depth can be measured by the length of the drill pipe or cable under the hole. The measurement of zenith angle generally uses a sensor based on the measurement principle of liquid level, suspension principle and gravitational acceleration, and the precision is relatively high. The azimuth measurement is usually of the following two types: one uses the principle of the Earth's magnetic field, and the other uses the principle of inertial navigation. The borehole inclinometer, which is based on the principle of geomagnetic field orientation, is only suitable for non-magnetic interference or weak magnetic mining areas (Sedlak, 1994). In strongly magnetic mining areas or in magnetic interference drilling, due to the interference of the Earth's magnetic field or metal shielding (magnetic mining area, instrument casing, and drill pipe casing), the accuracy of this type of instrument does not meet the measurement requirements (Yamaguchi et al., 2015). In order to solve the problem of borehole inclination in the case of strong magnetic interference (inside the drill pipe, inside the casing, etc.), the azimuth is generally measured by optical fiber or dynamic tuning gyroscope, based on the principle of inertial navigation. Some existing fiber-optic gyro (FOG)-based drilling trajectory measuring instrumentation (DTMI) are shown in Table 1 (Mass et al., 2007).
Table 1Technical parameters of some DTMIs based on FOG.
From Table 1, we can know that the instrument that can withstand the highest temperature and pressure is the Keeper type produced in the US, and the maximum temperature is 200 C; the maximum pressure is 140 MPa. However, as mentioned in the first paragraph, the temperature can reach 250 C or even higher in a hot dry rock borehole or an ultra-deep borehole. Therefore, there is a lack of drilling trajectory measurement instruments and corresponding technologies for such high-temperature and high-pressure environments. There are also many problems in the practical application of the drilling trajectory measurement. Therefore, research on drilling measurement technology for high-temperature and high-pressure environments, encountered in ultra-deep drilling, is of great significance. This technology can be used for new energy, ultra-deep oil and gas development, high-temperature geothermal energy, hot dry rock development drilling engineering, scientific drilling engineering, ultra-deep mineral resource drilling engineering, and oil and gas drilling engineering.
Here, we look at working environments with a maximum temperature not exceeding 270 C and pressures not exceeding 120 MPa. A thermal simulation analysis of a metal vacuum flask and pressure simulation modeling of the external confining tube are presented. A design scheme for DTMI is proposed and optimized. Through the field engineering application in the China Continental Scientific Drilling (CCSD) project and in geothermal development drilling, a long-duration, high-performance DTMI for deep hole trajectory measurement has been realized. This is of great significance for promoting the development of geological drilling.
2 Design and principles of DTMI
## 2.1 Configuration of DTMI
DTMI is an important instrument for investigating drilling construction quality and trajectory parameters in anti-magnetic interference drilling engineering. The DTMI is mainly composed of an external confining tube, a metal vacuum flask, and an FOG measuring probe, shown in Fig. 2.
Figure 2Photos of external confining tube, metal vacuum flask, and FOG measurement probe.
The external confining tube is the outermost layer, a threaded interface is equipped with a high-pressure metal sealing ring to ensure that the maximum pressure reached is 120 MPa. For downhole measurements, it is equipped with a shock-proof guide joint and a centralizer. The metal vacuum flask forms the middle layer, which is equipped with four heat absorbers and shock absorbers in the upper, middle, and lower parts. These ensure that, as the temperature rises externally to 270 C, during the 4 h of operation the internal temperature of the bottle does not exceed 90 C. The FOG measurement storage probe forms the innermost level, mainly comprised of an inertial measurement unit (IMU) sensor components, storage control module, navigation solver module, and a high-temperature battery. Its function is to realize multi-point azimuth and inclination measurements and storage (Liu, 2016; Lin et al., 2015). The internal structural diagram of the DTMI is shown in Fig. 3.
Figure 3Internal structural diagram of DTMI.
The measurement flowchart of the DTMI is shown in Fig. 4. The DTMI is lowered and lifted by a wire rope connection, and the data measured, during operation, are stored in the probe's memory, in real time. When the measurements are completed, the trajectory measuring probe is taken out of the borehole, the storage module in the FOG probe is connected to the ground laptop, through the data line, and the data stored in the probe are read by the upper computer measurement software. Data processing and display are performed, thereby giving the results of the trajectory measurements.
Figure 4Hardware diagram of DTMI.
## 2.2 Design and principles of measurement module
### 2.2.1 Measurement module design
The measurement module consists of a three-axis fiber-optic gyroscope (inertial measurement unit) and a three-axis accelerometer (acceleration measuring unit), which are orthogonal to each other, as shown in Fig. 5.
Figure 5Composition diagram of the three-axis measurement module.
The measurement module uses a module design method; it consists of a three-axis accelerometer sensor module, a three-axis FOG sensor module (IMU), a temperature sensor module, a signal conditioning module, a high-precision A/D conversion module, a navigation calculation processing module, and a high-temperature power module, which are shown in Fig. 6.
Figure 6Figure of measurement principles.
The key component is the FOG measurement component. This is composed of an interference fiber gyro (I-FOG), an optical path portion, and a circuit portion. According to the three-axis integrated design, three interferometric fiber-optic gyroscopes (I-FOG) are fixed, respectively, on the three coordinate axes of the carrier coordinate system, which are orthogonal each other. As shown in Fig. 7, each single-axis optical path is partially composed of a light source (divided by a super-luminescent diode (SLD) tube), a coupler, an integrated optical modulator (referred to as a waveguide-Y), and an optical fiber ring (a special process used for a polarization-maintaining optical fiber). The detector consists of five major components, and three FOGs share one SLD light source. Other functional modules adopt a mature all-digital closed environmental biasing scheme. This design has advantages of low cost, small size, and high stability. Meanwhile, the design of the inertial navigation measurement module focuses on the assembly process, low thermal power, overall electromagnetic compatibility, and anti-interference (Titterton and Weston, 2004; Savage and Paul, 2013).
Figure 7Composition diagram of IMU.
### 2.2.2 Measurement principles
Due to the narrow space inside the borehole, it is very difficult to install a stable physical measurement platform. Therefore, the instrument uses the strapdown inertial navigation technology to realize the function of navigation by using the fiber-optic gyroscope, accelerometer, and trajectory calculation model (Grewal et al., 2007). Among them, the three-axis accelerometer measures the acceleration in three directions, and the acceleration value can be used to obtain the displacement, in three directions, by double integration with respect to time; the three-axis fiber-optic gyroscopes measure the rotational speed of the carrier in three directions, and the corresponding rotation angle can be obtained by integrating with respect to time.
Using the three-axis displacement and rotation angle, the attitude matrix is solved by the space coordinate system variation, the drilling trajectory calculation model, the Euler angle coordinate transformation, and the quaternion method; when the rigid body is rotated around an axis, the angular position of the rotating rigid body can be calculated (Çelikel and Sametoğlu, 2012). The rotation of the carrier coordinate system, relative to the navigation coordinate system, is shown in Eq. (1).
$\begin{array}{}\text{(1)}& \stackrel{\mathrm{˙}}{\mathrm{\Lambda }}=\frac{\mathrm{1}}{\mathrm{2}}W\left({w}_{nb}^{b}\right)\mathrm{\Lambda }\end{array}$
$W\left({w}_{nb}^{b}\right)=\left[\begin{array}{cccc}\mathrm{0}& -{w}_{nbx}^{b}& -{w}_{nby}^{b}& -{w}_{nbz}^{b}\\ {w}_{nbz}^{b}& \mathrm{0}& {w}_{nbz}^{b}& -{w}_{nby}^{b}\\ {w}_{nby}^{b}& {w}_{nbz}^{b}& \mathrm{0}& {w}_{nbx}^{b}\\ {w}_{nbz}^{b}& {w}_{nby}^{b}& -{w}_{nbx}^{b}& \mathrm{0}\end{array}\right].$
The fourth-order Runge–Kutta (R–K) method is used to solve the ordinary differential equation (Bernardo and Shu, 1989); the azimuth α, the zenith θ, and the tool face angle β can be obtained by Eq. (2).
$\begin{array}{}\text{(2)}& \left\{\begin{array}{l}\mathrm{tan}\mathit{\theta }=\frac{\sqrt{{G}_{X}^{\mathrm{2}}+{G}_{Y}^{\mathrm{2}}}}{{G}_{Z}}\\ \mathrm{sin}\mathit{\theta }=\frac{{G}_{Z}}{\sqrt{{G}_{X}^{\mathrm{2}}+{G}_{Y}^{\mathrm{2}}+{G}_{Z}^{\mathrm{2}}}}\end{array}\right\\left\{\begin{array}{l}\mathit{\alpha }=\mathrm{arctan}\left[\frac{\mathrm{2}\left({q}_{\mathrm{1}}{q}_{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}+\phantom{\rule{0.125em}{0ex}}{q}_{\mathrm{0}}{q}_{\mathrm{3}}\right)}{{q}_{\mathrm{0}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}+\phantom{\rule{0.125em}{0ex}}{q}_{\mathrm{1}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}-\phantom{\rule{0.125em}{0ex}}{q}_{\mathrm{2}}^{\mathrm{2}}\phantom{\rule{0.125em}{0ex}}-\phantom{\rule{0.125em}{0ex}}{q}_{\mathrm{3}}^{\mathrm{2}}}\right]\\ \mathit{\theta }=\mathrm{arcsin}\left[\mathrm{2}\left({q}_{\mathrm{1}}{q}_{\mathrm{3}}-{q}_{\mathrm{0}}{q}_{\mathrm{2}}\right)\right]\\ \mathit{\beta }=\mathrm{arctan}\left[\frac{\mathrm{2}\left({q}_{\mathrm{2}}{q}_{\mathrm{3}}+{q}_{\mathrm{0}}{q}_{\mathrm{1}}\right)}{{q}_{\mathrm{0}}^{\mathrm{2}}+{q}_{\mathrm{1}}^{\mathrm{2}}-{q}_{\mathrm{2}}^{\mathrm{2}}+{q}_{\mathrm{3}}^{\mathrm{2}}}\right]\end{array}\right\\end{array}$
$\begin{array}{}\text{(3)}& \left\{\begin{array}{l}{q}_{\mathrm{0}}=\mathrm{cos}\frac{\mathit{\alpha }}{\mathrm{2}}\\ {q}_{\mathrm{1}}=\mathrm{sin}\frac{\mathit{\alpha }}{\mathrm{2}}\mathrm{cos}{\mathit{\beta }}_{x}\\ {q}_{\mathrm{2}}=\mathrm{sin}\frac{\mathit{\alpha }}{\mathrm{2}}\mathrm{cos}{\mathit{\beta }}_{y}\\ {q}_{\mathrm{3}}=\mathrm{sin}\frac{\mathit{\alpha }}{\mathrm{2}}\mathrm{cos}{\mathit{\beta }}_{z}\end{array}\right\,\end{array}$
where α is the angle of rotation about the axis of rotation, and GX, GY, and GZ are components of the acceleration in the x, y, and z directions, respectively; cos βx, cos βy, and cos βz are components of the axis of rotation in the x, y, and z directions, respectively.
## 2.3 Technical parameters
The technical parameters of the DTMI based on the FOG are shown in Table 2.
Table 2Key technical parameters of the DTMI. MTBF is the mean time between failure.
3 Key technologies of the DTMI
This section focuses on the temperature resistance and measurement accuracy of the DTMI. For this, we look at three aspects: mechanical design of the external confining tube, structural design of the vacuum flask, and FOG error compensation. This is to ensure that the DTMI can work for a long duration with high performance in the ultra-deep hole trajectory measurement process and meets the design parameters.
## 3.1 External confining tube
The main function of the external confining tube is to ensure that the DTMI can withstand an external pressure of up to 120 MPa, during 4 h of working time. Due to the complexity of the loads when the DTMI is doing actual drilling work, this section uses finite element analysis software to accurately model and analyze its mechanical state. Based on the analysis of the overall model to the local components, a mechanical simulation of the external confining tube is carried out. The stress–strain distribution and deformation of each detail position, under different loads, are compared and analyzed (Tavio and Tata, 2009).
### 3.1.1 Establishment of finite element model for threaded connection
For the material of the external confining probe tube, 17-4PH precipitation-type hardened stainless steel was selected. This has good mechanical properties, good temperature resistance, and slow heat conduction. The sealing joint is made of 30CrMnSiA using heat treatment. The technical parameters of the two materials are shown in Table 3 (Wen et al., 2010).
Table 3Material parameters of the confined tube.
Figure 9(a) Connector thread assembly drawing; (b) model simplified diagram of two-dimensional axisymmetric.
We used the Ansys workbench platform to build a simplified model of the thread, accurately assemble it, and finally import it into the Ansys platform, for accurate mechanical simulation analysis, as shown in Fig. 9.
### 3.1.2 Simulation of the pressure field and optimization of the structural parameters
This section focuses on optimizing the design parameters of the probe, from the internal and external diameter (wall thickness), thread taper, thread height, and pitch. From field experience, the outer diameter of the probe tube was designed to be 73 mm, and the inner diameter selected was 68 mm; 67.5, 67, 66.5, and 66 mm were used for finite element analysis.
Figure 10The stress nephogram of the external thread of the confined probe joint (the inner diameters from top to bottom are 68, 67.5, 67, 66.5, and 66 mm, respectively).
Figure 11(a) The trend diagram of the maximum equivalent stress with inner diameter decreases; (b) the trend diagram of the maximum total deformation with inner diameter decreases.
## Effect of local thickening of confined probe joint on connection strength
Local thickening of the ends of the joint thread is one common processes for improving the strength of the confining probe joint (Vasudevan et al., 2013). According to the work experience and the actual situation, the outer diameter of the probe tube is designed to be 73 mm, and the inner diameter of the pipe body is selected as 68 mm; 67.5, 67, 66.5, and 66 mm were used for finite element analysis. The equivalent stress value of the thread, the von Mises stress, the maximum equivalent stress value, and the stress contour are compared and evaluated. The stress and deformation of the local details of the threaded joint are also evaluation indexes. The results are shown in Figs. 10 and 11.
From the finite element analysis results (Fig. 11), the following conclusion can be obtained: with a decrease in the inner diameter of the probe joint (i.e., the thickness component is thickened), the maximum equivalent stress at the joint thread is gradually reduced and joint strength is increased by 20.8 %. At the same time, the maximum total deformation of the confining probe and joint is gradually reduced, indicating that the stress distribution inside the thread is more balanced. Therefore, the inner diameter set at 67 mm greatly improves the connection strength of the confining probe.
## Effect of thread taper on joint strength
The thread parameters of the confining tube are selected from the national standard equipment standard GB/T 16951–1997 (Neq ISO 10098: 1992, 1997). For the simulation analysis, five sets of taper were selected; these were 1:15, 1:20, 1:25, 1:30, and 1:36. The results of finite element calculation are shown in Figs. 12 and 13.
We comprehensively analyzed the maximum equivalent stress of the pipe body and the joint with different tapers. When the taper of the thread is around 1:25, the overall joint strength of the pressure probe tube is the greatest.
## Effect of thread height on joint strength
Overall, seven thread heights (0.8, 0.9, 1.0, 1.1, 1.2, 1.3, and 1.5 mm) were selected for simulation analysis, and the results of the finite element calculation are shown in Figs. 14 and 15.
Figure 12The stress nephogram of the external thread of the confining probe joint (the tapers from top to bottom are 1:15, 1:20, 1:25, 1:30, and 1:36).
Figure 13(a) Relationship between thread taper and the maximum equivalent stress of tube body; (b) relationship between thread taper and the maximum equivalent stress of joint.
Figure 14The stress nephogram of the external thread of the confining probe joint (the heights from top to bottom are 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, and 1.5 mm).
Figure 15(a) The trend diagram of the maximum equivalent stress of tube body with thread increases; (b) the trend diagram of the maximum total deformation of tube body with thread increases.
From the results of the finite element analysis (Fig. 15), it can be seen that different thread heights have a large influence on the joint strength. We comprehensively analyzed the maximum equivalent stress value and maximum total deformation of the pipe body, with the tooth height set at 0.9 mm. The joint strength of the whole structure was high, the internal structure stress distribution was relatively uniform, the bearing capacity of the structure was good, the threaded joint did not fall off easily, and the wear resistance was also good.
## Thread pitch affects joint strength
A total of five pitches (6, 7, 8, 9, and 10 mm) were selected for simulation analysis. The finite element calculation results are shown in Figs. 16 and 17.
Figure 16The stress nephogram of the external thread of the confining probe joint (the pitches from top to bottom are 6, 7, 8, 9, and 10 mm).
Figure 17The trend diagram of the maximum equivalent stress of tube body with pitch increases.
From the finite element analysis results (Fig. 17), the connection strength of the pressure probe tube gradually decreases with the increase of the pitch, and the maximum drop rate is reduced to 90.9 %. In practical applications, the smaller the pitch, the greater the number of thread buckles needed, and the labor intensity of the shackle buckle is increased. Therefore, the pitch should be 8 mm for consideration for comprehensive analysis.
In summary, from the pressure field simulation analysis of the pressure probe, the optimized parameters for the pressure probe and its thread are shown in Table 4.
Table 4Optimized parameters of the pressure probe and its thread.
Due to the in borehole limitations, the metal vacuum flask is designed with a cylindrical structure. The protected measuring probe is loaded into the flask. As shown in Fig. 18, the metal vacuum flask is mainly composed of a vacuum-insulated bottle, heat absorbers, and a heat insulator.
Figure 18Structural diagram of the metal vacuum flask.
The metal vacuum flask consists of a gland, a plug, a heat-insulating tube, an upper heat absorber, a bottle body (vacuum), a medium heat absorber, and a lower heat absorber. These components are mainly composed of aluminum alloy, 1Cr18Ni9Ti, titanium alloy, and 45 steel. In the process of temperature simulation of the complete fiber-optic gyroscope, the parameters of the relevant materials should be determined (Fang, 2002). The bottle body uses 1Cr18Ni9Ti as the inner shell material, similar to the heat preservation cup, and the material adopts vacuum structure. The heat absorber is mainly made of paraffin, and the heat absorption process is realized by a solid- to liquid-phase transformation. Accounting for the heat from the fiber-optic gyroscope movement and the external temperature, the length of the phase change body is designed; the insulator is made of SiO2 fiber-reinforced aerogel material. The specific parameters are shown in Table 5.
Table 5Material property parameters of various materials of the metal vacuum flask.
### 3.2.1 Heat calculation in the flask
The heat calculation in the flask consists of the following three parts: heat power calculation of internal measuring components (including FOGs, accelerometers, acquisition board, navigation solution board, data storage output board, and high-temperature battery); calculation of the leakage heat of the flask; and calculation of the heat storage (endothermic). In the trajectory measurement process of the DTMI, the total heat calculation formula is shown in Eq. (4), and the specific calculation is as follows.
$\begin{array}{}\text{(4)}& \begin{array}{rl}{Q}_{\text{Total}}& ={Q}_{\text{Battery}-\text{dissipation}}+{Q}_{\text{IMU}}-{Q}_{\text{Absorber}}\\ & -{Q}_{\text{Self-dissipation}}\end{array}\end{array}$
## (a) Heat power calculation of internal measuring components
When the DTMI is energized, the entire measurement system (three-axis FOG and three-axis accelerometer) is in a moving state. The orientation of each FOG and accelerometer changes under different moving states. We assume that one hot surface is up, one is down, and one is vertical during the convection process. When the inertial measurement unit is working, the related components will generate heat and compensate for the internal and external temperature difference, so the internal and external temperature differences are not considered. The power consumptions of the main heat sources are shown in Table 6.
Among them, the battery conversion efficiency is assumed to be 80 %, and the rest is converted into heat; QIMU is the sum of the power consumptions of the three-axis FOG, the accelerometer, and the circuit board, which is calculated by Eq. (5), where t=4h and P=21W.
$\begin{array}{}\text{(5)}& {Q}_{\text{IMU}}=P\cdot t\end{array}$
## (b) Calculation of leakage heat
The heat leakage of the metal flask is mainly the heat transfer from the outside to the inside, which includes two aspects: (1) axial heat transfer through the flask mouth, including solid heat conduction of the inner tube wall and the heat-insulating plug; (2) radiation leakage between the inner and outer tubes, heat conduction of residual gas, and solid heat transfer between the vacuum layers. The total leakage heat flow rate is set to Φsum, Φ1 is the heat conduction of inner wall of the flask, Φ2 is the leakage of the heat-insulation plugging, Φ3 is the radiation leakage heat, and Φ4 is the residual gas leakage heat. It follows that the total heat leakage Φsum is calculated from Φ1, Φ2, Φ3, and Φ4. The calculation formulas are shown in Eqs. (6) and (7) (Malatip et al., 2012).
$\begin{array}{}\text{(6)}& & {\mathrm{\Phi }}_{\text{sum}}={\mathrm{\Phi }}_{\mathrm{1}}+{\mathrm{\Phi }}_{\mathrm{2}}+{\mathrm{\Phi }}_{\mathrm{3}}+{\mathrm{\Phi }}_{\mathrm{4}}\text{(7)}& & \left\{\begin{array}{l}{\mathrm{\Phi }}_{\mathrm{1}}=\frac{A}{L}\cdot \stackrel{\mathrm{‾}}{\mathit{\lambda }}\cdot \mathrm{\Delta }T\\ {\mathrm{\Phi }}_{\mathrm{2}}=\frac{S}{L}\cdot \stackrel{\mathrm{‾}}{\mathit{\lambda }}\cdot \mathrm{\Delta }T\\ {\mathrm{\Phi }}_{\mathrm{3}}={\mathit{\sigma }}_{\mathrm{0}}\cdot A\cdot \left[{\left({T}_{\mathrm{1}}\right)}^{\mathrm{4}}-{\left({T}_{\mathrm{2}}\right)}^{\mathrm{4}}\right]\cdot \frac{{\mathit{\epsilon }}_{\mathrm{1}}\cdot {\mathit{\epsilon }}_{\mathrm{2}}}{{\mathit{\epsilon }}_{\mathrm{1}}+{\mathit{\epsilon }}_{\mathrm{2}}-{\mathit{\epsilon }}_{\mathrm{1}}\cdot {\mathit{\epsilon }}_{\mathrm{2}}}\cdot \frac{\mathrm{1}}{n+\mathrm{1}}\\ {\mathrm{\Phi }}_{\mathrm{4}}=K\cdot a\cdot P\cdot \left({T}_{\mathrm{1}}-{T}_{\mathrm{2}}\right)\cdot A,\end{array}\right\\end{array}$
where A is the effective cross-sectional area, in m2; L is the length of the thermal conduction section, in meters; ΔT is the temperature difference, taken as 200 K; $\stackrel{\mathrm{‾}}{\mathit{\lambda }}$ is the average thermal conductivity of the material, which is 0.045; the blackbody radiation constant is $\mathrm{5.67}×{\mathrm{10}}^{-\mathrm{8}}$, in ${\mathrm{Wm}}^{-\mathrm{2}}\phantom{\rule{0.125em}{0ex}}{\mathrm{K}}^{-\mathrm{4}}$; A is the radiation area, in m2; ε is the effective emissivity, taken as 0.08; T1 and T2 are the temperatures of the high- and low-temperature surfaces, respectively, in K; n is the number of layers. When the vacuum is $>{\mathrm{10}}^{-\mathrm{3}}$ Pa, the residual gas leakage heat can be neglected, so a total leakage heat flux Φsum=3.525W is calculated.
## (c) Calculation of heat storage
The heat storage material uses the sensible heat and latent heat of paraffin (phase transition temperature is 60 C, density is 900 kg m−3) to store heat, and the material itself is used to control the rise of external temperature (Fang, 2002). The length of the heat absorber is set to 450 mm, and the heat storage capacity calculated is 101.736 W; when the temperature inside the bottle rises to 125 C, the heat absorption time is equal to heat storage/total heat leakage flow, so the total endothermic time is calculated as 8.12 h.
Table 6The heat consumption table of main heat source.
In summary, the theoretical calculation of the heat in the flask proves that when paraffin wax is used as the heat-absorbing material and its length is designed to be 400 mm, the temperature rise inside flask is guaranteed to be ≤125C in a period greater than 7 h.
### 3.2.2 Simulation and calculation of temperature field
The initial temperature of the internal measuring component was set to 25 C, the initial temperature of the outer boundary of the metal thermos was set to 270 C, and the third boundary condition (100) of heat transfer was selected in order to obtain the temperature simulation result, shown in Fig. 19.
Figure 19(a) Temperature distribution nephogram of the metal vacuum flask after 4 h of work; (b) temperature distribution nephogram of the FOG measurement probe after 4 h of work.
Figure 19a is a temperature distribution nephogram of the metal flask after 4 h of working. It can be seen that the overall trend of temperature change in the thermos bottle is gradually reduced from the bottle head to the bottom of the bottle. At the mouth of the bottle, the maximum temperature reaches 113.23 C, which meets the design requirements (temperature rise <125C in flask). Figure 19b is a temperature distribution nephogram of the FOG measurement probe. It can be seen that the internal temperature gradually decreases from the bottle head to the bottom of the bottle, and the temperature values of the different sensor positions are also different. The temperature value on the bottommost FOG is the lowest, which proves the importance of the metal thermos bottle to the temperature rise control of the fiber-optic gyroscope, which is also a guarantee for the measurement accuracy of the FOG.
## 3.3 Analysis of coupling effect of temperature field and pressure field
In this section, the temperature field–pressure field coupling effect is taken into consideration. Through finite element analysis, a stress and strain distribution, a cloud diagram for the whole measuring instrument is obtained. The deformation state of the outer tube, under the coupling of the temperature field and the pressure field, is analyzed and used to verify the resistance to high temperatures and compression.
Firstly, we simplify the mechanical model of the DTMI; then the coupling of the temperature field and the pressure field is solved by the following three steps: (1) apply temperature boundary conditions, setting the temperature to 270 C, solve the temperature field, and introduce the temperature field results into statics structural analysis; (2) in static structural analysis, apply a confining pressure load of 120 MPa and constrain it; (3) solve the coupling between the temperature field and the pressure field, as shown in Figs. 20 and 21.
Figure 20The stress nephogram of the coupling between the temperature field and the pressure field: the upper left is the total deformation nephogram; the upper right is the equivalent strain nephogram; the lower left is the minimum principal stress nephogram; the lower right is the Mohr–Coulomb stress displacement safety factor nephogram.
Figure 21The total deformation nephogram comparison: the left picture shows the pressure field total deformation nephogram; the right picture shows the pressure field and temperature field coupling total deformation nephogram.
Analysis of the finite element results shows that under the action of high temperature and high pressure, the total deformation of the DTMI increases and the equivalent strain increases, when compared with the action of the pressure field alone. The maximum deformation of the DTMI is calculated to be 3.47 mm m−1. The deformation is 1.33 mm, which meets the requirement that the deformation amount per 1 m in the parameter index should be less than 1.5 mm.
## 3.4 FOG error analysis and compensation in temperature field
Due to thermal field distribution and heat transfer under the thermal transient state, the physical properties, geometrical features, and thermal transfer of the FOG are dynamically changing over time, leading to a Shupe error in the FOG inclinometer. The Shupe error is negatively correlated to the accuracy of the inclinometer. To enhance the accuracy of the IMU, the finite element method (FEM) was adopted to analyze the heat conduction features of the inclinometer in a thermal field. This method can overcome the limits of traditional analytical techniques. For instance, it can handle the complex boundary conditions of the FOG. According to the differential control equations for heat conduction, a FOG error compensation formula in thermal field was derived through Shupe error analysis, laying the basis for FOG thermal field modeling and error compensation.
### 3.4.1 Shupe error analysis
In the thermal transient state, the Shupe error can be calculated based on the phase delay ϕ of the light wave propagation in a fiber loop with length L (Mohr, 1995):
$\begin{array}{}\text{(8)}& \mathit{\varphi }=\underset{\mathrm{0}}{\overset{L}{\int }}\mathit{\beta }\left(z\right)\mathrm{d}z=\underset{\mathrm{0}}{\overset{L}{\int }}{\mathit{\beta }}_{\mathrm{0}}n\left(z\right)\mathrm{d}z,\end{array}$
where β0 is the propagation constant of light in free space; n(z) is the refractive index. Since the thermal expansion and refractive index of the medium may vary with the temperature fields during light wave propagation, Eq. (8) can be rewritten as Eq. (9):
$\begin{array}{}\text{(9)}& \mathit{\varphi }={\mathit{\beta }}_{\mathrm{0}}{n}_{\text{eff}}L+{\mathit{\beta }}_{\mathrm{0}}\left(\frac{\partial {n}_{\text{eff}}}{\partial T}+{n}_{\text{eff}}\mathit{\alpha }\right)\underset{\mathrm{0}}{\overset{L}{\int }}\mathrm{\Delta }T\left(z\right)\mathrm{d}z,\end{array}$
where neff is fiber refractive index; (${\partial }_{\text{neff}}\right)/{\partial }_{T}$ is the temperature coefficient; α is fiber thermal expansion coefficient; ΔT(z) is the temperature change. Using the Sagnac effect, the FOG measures the phase difference of two beams of a light wave through the same optical fiber with length L. One beam of light wave propagates in the clockwise direction, and the other in an counter-clockwise direction. Specifically, the clockwise phase delay ϕcw(t) and the counter-clockwise phase delay ϕccw(t) are calculated, and the Shupe error ΔϕE(Shupe) in the thermal transient state is given a differential time definition (Raab and Quast, 1994). The ΔϕE(Shupe) can be calculated by
$\begin{array}{}\text{(10)}& \mathrm{\Delta }{\mathit{\varphi }}_{\mathrm{E}}\left(\text{Shupe}\right)=\frac{{\mathit{\beta }}_{\mathrm{0}}}{{c}_{\mathrm{m}}}\left(\frac{\partial {n}_{\text{eff}}}{\partial T}+{n}_{\text{eff}}\mathit{\alpha }\right)\underset{\mathrm{0}}{\overset{L}{\int }}\stackrel{\mathrm{˙}}{T}\left(z,t\right)\left(L-\mathrm{2}z\right)\mathrm{d}z,\end{array}$
where $\stackrel{\mathrm{˙}}{T}\left(z,t\right)$ is the temperature change rate in the fiber loop $\left(\stackrel{\mathrm{˙}}{T}\left(z,t\right)=\partial T/\partial z\right)$. Equation (10) shows that the Shupe error reflects how the temperature gradient affects the FOG measuring accuracy. To sum up, the Shupe error ΔϕE(Shupe), induced by the temperature field variation in the transient state, has a certain effect on the output light intensity, I, of the FOG in that it reduces the measuring accuracy of FOG inclinometer. The output light intensity, I, can be obtained as follows:
$\begin{array}{}\text{(11)}& I={I}_{\mathrm{0}}\left\{\mathrm{1}+\mathrm{cos}\left[{\mathit{\varphi }}_{\mathrm{S}}+\mathrm{\Delta }{\mathit{\varphi }}_{\mathrm{S}}\left(\text{Shupe}\right)\right]\right\}.\end{array}$
### 3.4.2 Error compensation formula of FOG in the temperature field
Under the joint action of Shupe error ΔϕE(Shupe) and the additional phase shift ϕΔL in the transient state, an error will occur in the Sagnac phase shift ϕS. Based on the previous analysis of Shupe error, this section attempts to deduce the error formula in the thermal transient state and then the FOG error compensation formula in the temperature field.
Due to the temperature field and the thermal stress in the FOG, the fiber loop length L changes by a certain degree. Hence, $L/d\gg \mathrm{1}$, with d being the diameter of the fiber. By the definition of the thermal expansion coefficient (Sokolov et al., 2015), the thermal expansion coefficient α of the optical fiber in the thermal transient state can be expressed as
$\begin{array}{}\text{(12)}& \mathit{\alpha }=\frac{\mathrm{d}\left(\mathrm{\Delta }z\right)}{\mathrm{\Delta }z\cdot \mathrm{d}\left[T\left(z,t\right)\right]}.\end{array}$
In Eq. (12), z is a point on the optical fiber loop of the FOG. Then, the linear deformation of the fiber loop ΔL leads to a Sagnac equivalent phase shift ϕΔL:
$\begin{array}{}\text{(13)}& & {\mathit{\varphi }}_{\mathrm{\Delta }L}=\mathrm{\Delta }L\cdot \frac{\partial {\mathit{\varphi }}_{\mathrm{S}}}{\partial L}=\mathrm{\Delta }T\cdot \mathit{\alpha }\cdot L\cdot \frac{\partial {\mathit{\varphi }}_{\mathrm{S}}}{\partial L}\text{(14)}& & \mathrm{\Delta }{\mathit{\varphi }}_{\mathrm{S}}\left(\mathrm{\Delta }L\right)=\mathit{\alpha }\cdot \frac{\partial {\mathit{\varphi }}_{\mathrm{S}}}{\partial L}\cdot \sum \mathrm{\Delta }T\left(z,t\right)\cdot \mathrm{\Delta }z.\end{array}$
It can be seen that ΔϕS(Shupe) in the thermal transient state is correlated with the linear expansion coefficient, α, and effective refractive index, neff, of the optical fiber: $\mathrm{\Delta }{\mathit{\varphi }}_{\mathrm{S}}\text{(Shupe)}=F\left(\mathit{\alpha },\mathrm{\dots },{n}_{\text{eff}}\right)$. Then, Eqs. (13) and (14) are derived by finding the partial derivatives of α and neff, and Eqs. (13) and (14), respectively, depict the effect of the Sagnac phase shift in thermal transient state caused by the linear deformation and refractive index variation of the optical fiber; the linear expansion coefficient, α, is determined by the material of the fiber.
$\begin{array}{ll}\text{(15)}& \frac{\partial \left[\mathrm{\Delta }{\mathit{\varphi }}_{\mathrm{S}}\left(\text{Shupe}\right)\right]}{\partial \mathit{\alpha }}& =\frac{{\mathit{\beta }}_{\mathrm{0}}}{{c}_{\mathrm{m}}}\cdot {n}_{\text{eff}}\cdot \underset{\mathrm{0}}{\overset{L}{\int }}\stackrel{\mathrm{˙}}{T}\left(z,t\right)\left(L-\mathrm{2}z\right)\mathrm{d}z\frac{\partial \left[\mathrm{\Delta }{\mathit{\varphi }}_{\mathrm{S}}\left(\text{Shupe}\right)\right]}{\partial \mathit{\alpha }}& =\left(\frac{\mathit{\alpha }{\mathit{\beta }}_{\mathrm{0}}}{{c}_{\mathrm{m}}}+\frac{\partial \left(\frac{\partial {n}_{\text{eff}}}{\partial T}\right)}{\partial {n}_{\text{eff}}}\right)\\ \text{(16)}& & \phantom{\rule{1em}{0ex}}\cdot \underset{\mathrm{0}}{\overset{L}{\int }}\stackrel{\mathrm{˙}}{T}\left(z,t\right)\left(L-\mathrm{2}z\right)\mathrm{d}z\end{array}$
On the premise of using the same material fiber, and combining this with Eqs. (10), (15), and (16), error theory analysis shows that the factors affecting the Shupe error of the FOG in a thermal field are mainly the temperature change rate, the effective refractive index, and the angular velocity. Therefore, these formulas serve as the guide for FOG error compensation in a thermal field, effectively paving the way for a rational error compensation plan.
4 Engineering application
## 4.1 Engineering application in the CCSD SK-2 east borehole
### 4.1.1 CCSD SK-2 east borehole
The CCSD project in the Songliao Basin aimed to investigate deep geothermal energy, establish a deep stratigraphic structure profile, seek geological evidence of Cretaceous climate change, and develop deep detection techniques. This was the third international CCSD project funded by the China Mainland Scientific Drilling Program (ICDP) (Wang et al., 2013, 2017). As the main borehole of this project, the SK-2 east borehole was designed to reach a depth of 6400 m, in order to penetrate the Cretaceous strata and reach the base of the basin. It is located about 0.25 km southeast of Jiqing village, Yangcao town, Anda city, Heilongjiang province (Zou et al., 2016; Zhu et al., 2018). The main scientific objectives of logging the CCSD SK-2 east borehole are (1) to provide complete and continuous petrophysical parameters and well-side structural parameters for use in the establishment of a Cretaceous continental sedimentary study at the geophysical exploration “scale”; (2) to explore the relationship between the global greenhouse climate and environmental changes using logging information during the Cretaceous period from 65 million to 140 million years; (3) to carry out reservoir division and oil and gas evaluation for key hydrocarbon-bearing horizons; (4) to provide the basic information necessary for the full-scale long-term observation and experimental research of the CCSD SK-2 east borehole in the future (Xu, 2004; Gao et al., 2015; Zou et al., 2018).
Geophysical logs played an important role in the subsequent geoscience research because very few core samples were recovered over the Upper Cretaceous intervals (i.e., Spud 1 and Spud 2). After the borehole was drilled, just two uncased and cased borehole logging operations (using Beck Atlas' ECLIPS-5700 and Halliburton's EXCEL 2000) were carried out in the Upper Cretaceous intervals using advanced imaging logging tools (Zou et al., 2016). Therefore, DTMI's engineering application was selected for the SK-2 east borehole to test the performance in the high temperature and high pressure of the scientific detection wells, and to test the startup performance and overall performance of the power supply in the low-temperature field environment (the ambient surface temperature is about 2 C at the SK-2 east borehole).
### 4.1.2 Testing steps
The engineering application time was on 15–16 October 2017. At that time, the construction depth of the SK-2 east borehole had exceeded 6200 m, the depth of the casing was 5905 m, the specific gravity of the mud was 1.46, and the pressure of the mud column at the bottom of the well exceeded 90 MPa. The bottom temperature exceeded 200 C. Under the suggestion of the construction party, the application test was carried out in the casing section, using two DTMIs at the same time, and the measurement was carried out at intervals of 500 m. A photo of the DTMI used in this engineering application is shown in Fig. 22.
Figure 22Photo of DTMI which applies to the SK-2 east borehole.
The main contents of the engineering application were three methods: a closed water test, downward trajectory measurement, and upward trajectory measurement. The purpose was to test the sealing, pressure, thermal insulation performance, and reliability of the measurement data of the DTMI.
1. The closed water test is a pilot test to verify the sealing performance of the pressure probe and to investigate the inside of the hole. In order to prevent the core measuring component, based on the FOG, from being damaged due to seal failure, when the water shutoff test was carried out, the FOG movement was not placed in the pressure probe tube. Instead, a humidity test paper and a boiling point thermometer were placed in the tube. After the water shutoff test was completed, the measuring device was taken out and the pressure and sealing ring of the pressure-reducing probe were verified to be non-destructive, the inside of the pressure-exploring probe was dry, and the humidity test paper showed no obvious changes. This indicated that the pressure-exploring tube had good sealing performance and could meet the design requirements.
2. When preparing for measurements in the borehole, raw tape was wrapped on the connecting threaded joint and thread oil was applied to protect the thread and further improve the sealing performance; then the downward measurement was started, and the DTMI was lowered through the wire rope, or cable, of the ground gauge winch; after reaching the bottom of the hole, the upward measurement was started, and the measured data were stored in the memory of the probe; after the trajectory measurement process was finished, the internal measurement probe was taken out of the confining tube and metal flask; finally, the movement probe was connected to the PC using a data line and a conversion joint, and the trajectory measurement data stored in the movement were read. Then, data acquisition and processing were performed, thereby obtaining data on the drilling trajectory.
### 4.1.3 Data analysis
The maximum well depth for this measurement was 5800 m, and the highest temperature measured was 177.8 C. The detailed measured data of zenith angle, azimuth angle, and temperature are shown in Tables 7, 8, and 9. The comparative analysis of the measured data gave the following results:
Table 7Measured zenith angle of the CCSD SK-2 east borehole.
Table 8Measured azimuth angle of the CCSD SK-2 east borehole.
Table 9Measured azimuth angle of the CCSD SK-2 east borehole.
1. The zenith between the downward and upward measurements of one DTMI and between the two sets of DTMIs were in good agreement with each other. The result had good repeatability, with an error less than 0.15.
2. When the zenith was less than 3, the error of the azimuth data between the downward and upward measurements of one DTMI and between the two sets of DTMIs was relatively large. When the zenith was greater than 3, the error of the azimuth data between downward and upward measurements of one DTMI and between the two sets of DTMIs was less than 0.15.
3. In the process of downward and upward measurements, the temperature-measured data of the two sets of instruments were consistent with each other.
4. The two sets of the DTMI-measured data for zenith and azimuth were compared with previous logging data. We found that the data were roughly consistent with each other, although there were certain deviations in individual points. The main reason for these deviations in the data may be that the previous logging data were measured in the open well, while this measurement was done after the casing was placed in the well. The data deviation was normal, and its accuracy is acceptable for deep exploration engineering and underground resources and energy engineering.
5. There was a deviation in the temperature data between the downward and upward measurements, mainly due to the short time of the measurement at each point (except for 5 min at the well bottom, at 5800 m; the other measurement points were occupied for only 30 s), and there was a failure of the heat to sufficiently stabilize at the temperature sensor.
6. The deviation between the measured temperature value at 5800 m and the previous logging data was due to the fact that the logging data were in the open well and the original temperature waiting time reached 30 min. Here, the measurement was in the casing and the waiting time was only 5 min. Thus, the temperature sensor had not reached a temperature balance with the bottom of the well environment.
Figure 23a shows a curve of measured zenith against borehole depth, Fig. 23b shows a curve of measured azimuth against borehole depth, and Fig. 23c shows a curve of measured temperature against borehole depth.
Figure 23(a) Curve of measured zenith against borehole depth; (b) curve of measured azimuth against borehole depth; (c) curve of measured temperature against borehole depth.
## 4.2 Engineering application in the geothermal well of Xingreguan-2, Beijing
### 4.2.1 Geothermal well of Xingreguan-2
The Xingreguan-2 well is a geothermal well within the dolomite formation of the Wuyishan Formation, in Jixian county. It is located in the Xingming Lake Resort in the Daxing district, Beijing. The well location coordinates are latitude 393648.09′′ and longitude 1162556.54′′ (Zhao, 2011). The well is a geothermal well within the dolomite group of the Wuyishan Formation. It was drilled for the water intake section on 31 May 2016. The completion depth is 2505.18 m, the water output is 1549.67 m3 d−1, and the wellhead outlet temperature is 46 C. The application time was from 14:00 to 17:00 UTC+8 on 14 November 2017, and one DTMI was used for this measurement. In order to ensure the safety of the instrument and avoid the measurement of bare holes, the depth of this measurement was only 1750 m; during decentralization and lifting of the instrument, measurements were made at the same depths of the well, with intervals of 200 m.
### 4.2.2 Data analysis
The maximum well depth for this measurement was 1750 m, and the highest temperature measured was 43.4 C. The detailed measured data of the zenith angle, azimuth angle, and temperature are shown in Tables 10, 11, and 12. The comparative analysis of the measured data gave the following results:
Table 10Measured zenith angle of the Xingreguan-2 well, Beijing.
Table 11Measured azimuth angle of the Xingreguan-2 well, Beijing.
Table 12Measured temperature of the Xingreguan-2 well, Beijing.
1. The zenith angles between downward and upward measurements were in good agreement with each other, and the result showed good repeatability, with an error of less than 0.15; when the zenith was less than 3, the error of the azimuth data between the downward and upward measurements was relatively large and when the zenith was greater than 3 (for the measuring points at 1000, 1200, 1400, 1600, and 1750 m depth), the error of the azimuth data between downward and upward measurements was less than 0.15
2. In the process of downward and upward measurements, the temperature-measured data of the two sets of instruments were consistent with each other.
3. The DTMI-measured data for zenith and azimuth were compared with previous logging data. We found that the data were roughly consistent with each other, although there were certain deviations at individual points. The main reason for these deviations in the data may be that the previous logging data were measured in the open well and these measurements were done after the casing was placed in the well.
Figure 24a shows a curve of measured zenith against borehole depth, Fig. 24b shows a curve of measured azimuth against borehole depth, and Fig. 24c shows a curve of measured temperature against borehole depth.
Figure 24(a) Curve of measured zenith against borehole depth; (b) curve of measured azimuth against borehole depth; (c) curve of measured temperature against borehole depth.
5 Discussion
The FOG has the advantages of high measurement accuracy, small size, high sensitivity, and strong anti-interference ability. This is especially important for the magnetic material with the greatest influence on the trajectory measurement technology in geological exploration and external magnetic field interference. The DTMI, based on the FOG, can be used for borehole trajectory measurements in any formation conditions. The FOG, used in geological and oil drilling fields, generally requires a wide operating temperature range. However, the measurement accuracy of the FOG is very sensitive to changes in ambient temperature, especially in deep borehole testing. The high external temperature and high-pressure environment, along with internal self-heating, will have an impact on the performance of the FOG. This mostly manifests as noise and drift. The noise directly affects the working accuracy of the fiber-optic gyroscope, that is the minimum detectable phase shift, while the drift reflects the degree of change in the output signal. Therefore, the thermal non-reciprocal error (Shupe error) caused by the temperature field will seriously affect the performance of the FOG inclinometer. This is characterized by poor data repeatability, short working time, and low precision (Shupe, 1980; Kurbatov, 2013).
Due to the Shupe error caused by the temperature field of the FOG, it is difficult to carry out thermal simulation analysis of the whole structure and internal mechanism by a traditional thermal field analytical method. It is nearly impossible to determine the boundary range, control equation, and thermal characteristics of the main components (Yang et al., 2011). In this study, the finite element method was used to analyze the heat transfer characteristics of the borehole trajectory measuring instrument. In Sect. 3.3, the Shupe error of the fiber-optic gyroscope was calculated using the differential control equations for thermal conduction and the thermal boundary condition. The unified formula of the thermal error was derived. The external parameters that affect the Shupe error are mainly temperature change rate, the effective refractive index, and the angular velocity. Therefore, these formulas can be used as the basis for the temperature field model establishment and error compensation experimental scheme for the internal core measurement mechanism.
In order to reduce the influence of temperature changes on the output accuracy of the fiber-optic gyroscope, the FOG error compensation experiment was used to establish a temperature compensation model between the parameters (temperature change rate, the effective refractive index, and the angular velocity) and the FOG output value, and to explore the mathematical relationship between them. Coefficients of the model require the FOG measurement probe to be tested on a three-axis quadrature calibration bench with a high-temperature thermostat, thereby obtaining a series of experimental data (Liu et al., 2019); photos of the temperature compensation experiment are shown in Fig. 25.
Figure 25(a) Photo of the experiments on three-axis quadrature calibration bench; (b) photo of the experiments in high-temperature thermostat.
Due to the higher temperature in boreholes, up to 250 C, a metal vacuum flask was used to ensure that the internal space did not exceed 125 C degrees (Liu et al., 2016). The temperature range of the measuring probe was set from 0 to 120 C in the experiments. There are four influence factors in the FOG error compensation experiments, namely experimental temperature range, temperature change rate, effective refractive index, and angular velocity. Therefore, a four-layer network structure was employed. The parameter values (number of levels) for each factor are shown in Table 13.
Table 13Table of experimental factors and their levels.
The thermal error compensation experiments usually employ a network structure of three to four layers (Meng et al., 2009). During these experiments, we found that both full-scale design methods and orthogonal design methods were time-consuming and low in efficiency for FOG error compensation. For example, in the full-scale design method, the number of experiments required is $\mathrm{6}×\mathrm{12}×\mathrm{4}×\mathrm{4}=\mathrm{1152}$. If an experiment takes 10 min, it will take 192 h to complete the full-scale design method experiments. The time needed for these experiments was not available. Therefore, more advanced experimental protocols should be used to simplify the experimental process, such as the uniform design method, neural network method, and fuzzy mathematics method. As an experimental design for space filling, the uniform design method describes a test point spread evenly throughout the test range and thus has been proposed from the perspective of uniformity (Fang, 1994). In particular, the design enables the conduct of trials with many experimental factors and a large number of levels but with fewer trials required when compared to the orthogonal or comprehensive design methods (Deesuth et al., 2012). Therefore, future work will involve the use of the uniform design method for temperature compensation experiments, establish a neural network dynamic model to compensate its output, improve the FOG temperature compensation accuracy, and ensure the stability of the DTMI-measured data.
6 Conclusions
A drilling hole trajectory measuring instrument, based on interference FOG, is developed in this study. We examined the mechanical design and strength. We carried out pressure field simulation analysis for the pressure-bearing outer tube. The structural design of the metal thermos was examined, and we carried out temperature field simulation analysis, measuring with the FOG in the probe. The study of Shupe error compensation in the field of temperature field was carried out using finite element analysis, for 4 h, in an environment where the maximum temperature does not exceed 270 C and the pressure does not exceed 120 MPa. Engineering applications were carried out in the CCSD SK-2 east borehole and the geothermal well of Xingreguan-2. Data such as azimuth, apex angle, and temperature were obtained and compared with results from previous logging equipment. The engineering applications proved that DTMI had good measurement accuracy and repeatability, high stability, and reliable measurement methods.
Aimed at the difficult working conditions in boreholes of hot dry rock, deep geothermal investigation drilling, and ultra-deep geological drilling (up to 5000 m), this instrument is based on the principle of inertia and is not affected by external electrical and magnetic interference. This makes up for the working blind zone of existing technologies and instruments. Therefore, DTMI has great potential for the development and advances in geological drilling technology.
The main highlights of this paper are as follows:
1. A method for measuring the borehole trajectory based on a three-axis interference-type fiber-optic gyroscope is proposed. The two-axis orthogonal three-axis fiber gyroscope and three-axis accelerometer sensor are used to form the strapdown inertial navigation system. The three-axis displacement and the three-axis rotation angle can be calculated by the trajectory calculation model described in the text. The DTMI can be applied in strong magnetic interference mining areas and expands the scope of engineering applications. Compared with existing drilling trajectory measuring equipment using a fluxgate or single-axis and two-axis gyroscopes as sensitive devices, it is a step forward in novelty and innovation.
2. The DTMI pressure field–temperature field coupling analysis method is proposed to guide the optimal design of the DTMI. Using a finite element software platform, the DTMI thread parameters, the size and wall thickness of the confined probe, optimization of the material and structure of the metal vacuum flask was carried out, and the simulation analysis and optimization design of the pressure field–temperature field coupling effect for the whole instrument were carried out. Finally, the reliability and stability of the DTMI were verified by the engineering application in the CCSD SK-2 east borehole.
Data availability
Data availability.
The data used to support the findings of this study are available from the corresponding author upon request.
Author contributions
Author contributions.
YL and GL designed and built the instrument with the help of CW and WJ. YL and CW prepared the paper with contributions from all authors.
Competing interests
Competing interests.
The authors declare that they have no conflict of interest.
Acknowledgements
Acknowledgements.
Ce Zhou and Wenjun Chen helped acquire the data of engineering application presented in this paper; Zhong Li processed the data in Tables 7–9, for which he is gratefully acknowledged.
Financial support
Financial support.
This research has been supported by the National Natural Science Foundation of China (grant no. 41804089); the National Key Scientific Instrument and Equipment Development Project of China (grant no. 2013YQ050791); and the China Postdoctoral Science Foundation (grant no. 2019M650782).
Review statement
Review statement.
This paper was edited by Luis Vazquez and reviewed by two anonymous referees.
References
Bernardo, C. and Shu, C.-W.: TVB Runge–Kutta Local Projection Discontinuous Galerkin Finite Element Method for Conservation Laws II: General Framework, Math. Comput., 52, 411–435, https://doi.org/10.2307/2008474, 1989.
Çelikel, O. and Sametoğlu, F.: Assessment of magneto-optic Faraday effect-based drift on interferometric single-mode fiber optic gyroscope (IFOG) as a function of variable degree of polarization, Meas. Sci. Technol., 23, 25104–25120, https://doi.org/10.1088/0957-0233/23/2/025104, 2012.
Chen, Z., Zheng, K., and Jiang, J.: Discussion on the development strategy of hot dry rock in China, Hydrogeol. Eng. Geol., 43, 161–166, https://doi.org/10.16030/j.cnki.issn.1000-3665.2015.03.26, 2015.
Deesuth, O., Laopaiboon, P., Jaisil P., and Laopaiboon, L.: Optimization of Nitrogen and Metal Ions Supplementation for Very High Gravity Bioethanol Fermentation from Sweet Sorghum Juice Using an Orthogonal Array Design, Energies, 5, 3178–3197, https://doi.org/10.3390/en5093178, 2012.
Fang, K.: Uniform design and uniform design table, Science Press, Beijing, China, 1994.
Fang, K.: Materials engineering handbook, Beijing Press, Beijng, China, 2002.
Gao, W., Kong, G. Pan, H. et al.: Geophysical logging in scientific drilling borehole and find deep Uranium anomaly in Luzong basin, Chinese J.. Geophys., 58, 4522–4533, https://doi.org/10.6038/cjg20151215, 2015 (in Chinese with English abstract).
Grewal, M. S., Weill, L. R., and Andrews A. P.: Global Positioning System, Inertial Navigation, and Integration, Wiley Int. Rev. Comput. Stat., 3, 383–384, https://doi.org/10.1002/wics.158, 2007.
Kielbassa, A. M., Fuentes, M. D., Goldstein, M., and Arnhart, C.: Randomized controlled trial comparing a variable-thread novel tapered and a standard tapered implant: Interim one-year results, J. Prosthet. Dent., 101, https://doi.org/10.1016/S0022-3913(09)60060-3, 293–305, 2009.
Kurbatov, A. M. and Kurbatov, R. A.: Temperature characteristics of fiber-optic gyroscope sensing coils, J. Commun. Technol. El., 58, 745–752, https://doi.org/10.1134/S1064226913060107, 2013.
Liu, Y.: Design of the Drilling Trajectory Measurement Device in Ultra-high Temperature based on FOG and Its Key Technologies Research, PhD thesis, Sichuan university, Chengdu, China, 2016.
Liu, Y., Wang, J., Ji, W., and Luo, G.: Temperature Field Finite Element Analysis of the Ultra-high Temperature Borehole Inclinometer based on FOG and Its Optimization Design, Chem. Eng. Trans., 51, 709–714, https://doi.org/10.3303/CET1651119, 2016.
Liu, Y., Wang, C. Wang, J., and Ji, W.: Optimization Research on Thermal Error Compensation of FOG in Deep Mining using Uniform Mixed-data Design Method, Math. Probl. Eng., 2019, 1–6, https://doi.org/10.1155/2019/4064652, 2019.
Maas, S. J. and Metzbower, D. R.: Optical accelerometer, optical inclinometer and seismic sensor system using such accelerometer and inclinometer: US Patent 7,222,534. 29 May 2007.
Malatip, A., Wansophark, N., and Dechaumphai, P.: Fractional four-step finite element method for analysis of thermally coupled fluid-solid interaction problems, Neuropharmacology, 33, 253–257, https://doi.org/10.1007/s10483-012-1536-9, 2012.
Meng, Y., Chen, Y., Li, S., Chen, C., Xu, K., Ma, F., and Dai X.: Research on the orthogonal experiment of numeric simulation of macromolecule-cleaning element for sugarcane harvester, Mater. Design, 30, 2250–2258, https://doi.org/10.1016/j.matdes.2008.08.020, 2009
Mohr, F.: Thermooptically Induced Bias Drift in Fiber Optical Sagnac Interferommeters, Lightwave Technol., 14, 27–41, https://doi.org/10.1109/50.476134, 1995.
Osipova, E. N., Ivanov, I. V., Smirnov, V. A., and Abramova, R. N.: Terrestrial heat flow and its role in petroleum geology, Earth Environ. Sci., 27, 12–15, https://doi.org/10.1088/1755-1315/27/1/012015, 2015.
Raab, M. and Quast, T.: Two-color Brillouin Ring Laser Gyro with Gyrocompassing Capability, Opt. Lett., 19, 1491–1496, https://doi.org/10.1364/OL.19.001492, 1994.
Savage, P. G.: Blazing Gyros: The Evolution of Strapdown Inertial Navigation Technology for Aircraft, J. Guid. Control Dynam., 36, 637–655, https://doi.org/10.2514/1.60211, 2013.
Sedlak, V.: Magnetic pulse method applied to borehole deviation measurements, Int. J. Rock Mech. Min., 39, 61–75, https://doi.org/10.1016/0148-9062(95)99588-O, 1994.
Shupe, D. M.: Thermally induced nonreciprocity in the fiber-optic interferometer, Appl. Optics, 19, 654–655, https://doi.org/10.1364/AO.19.000654, 1980.
Sokolov, A. V., Krasnov, A. A., Staroseltsev, L. P., and Dzyuba, A. N.: Development of a gyro stabilization system with fiber-optic gyroscopes for an air-sea gravimeter, Gyrosc. Navi., 6, 338–343, https://doi.org/10.1134/S2075108715040124, 2015.
Tavio, T. and Tata, A.: Predicting Nonlinear Behavior and Stress-Strain Relationship of Rectangular Confined Reinforced Concrete Columns with ANSYS, Civil Eng. Dim., 11, 123–129, 2009.
Titterton, D. and Weston, J.: Strapdown inertial navigation technology, Aerospace and Electronic Systems Magazine IEEE, 20, 33–34, https://doi.org/10.1049/PBRA017E, 2004.
Vasudevan, G., Kothandaraman, S., and Azhagarsamy, S.: Study on Non-Linear Flexural Behavior of Reinforced Concrete Beams Using ANSYS by Discrete Reinforcement Modeling, Strength Mater., 45, 231–241, https://doi.org/10.1007/s11223-013-9452-3, 2013.
Wang, C., Feng, Z., Zhang, L., Huang, Y., Cao, K., Wang, P., and Zhao, B.: Cretaceous paleogeography and paleoclimate and the setting of SKI borehole sites in Songliao Basin, northeast China, Palaeogeogr. Palaeocl., 385, 17–30, https://doi.org/10.1016/j.palaeo.2012.01.030, 2013.
Wang, P., Liu, H., Ren, Y., Wan, X., and Wang, S.: How to choose a right drilling site for the ICDP Cretaceous Continental Scientific Drilling in the Songliao Basin (SK2), Northeast China, Earth Sci. Front., 24, 216–228, https://doi.org/10.13745/j.esf.2017.01.014, 2017 (in Chinese with English abstract).
Wen, B. and Zhongkai, E: Mechanical design handbook, 2nd edn., China Machine Press, Beijng, China, 2010.
Xiao, S.: Borehole Deviation Measurement, 2nd edn., Geological Publishing House, Beijing, China, 1989.
Xu, Z.: Advanced Research of Chinese Continental Scientific Drilling, Metallurgical Industry Press, Beijing, China, 1996.
Xu, Z.: The scientific goals and investigation progresses of the Chinese continental scientific drilling project, Acta Petrol. Sin., 20, 1–8, 2004 (in Chinese with English abstract).
Yamaguchi, A., Okada, A., and Miyake, T.: Development of Curved Hole Drilling Method by EDM with Suspended Ball Electrode, J. Jpn. Soc. Prec. Eng., 81, 435–440, https://doi.org/10.2493/jjspe.81.1039, 2015.
Yang, C., Feng, Z., Liao, L., and Yang, H.: Temperature experiment and computer simulation analysis of fiber optical gyroscope, Av. Prec. Manuf. Technol., 47, 11–14, https://doi.org/10.3969/j.issn.1003-5451.2011.04.002, 2011 (in Chinese with English abstract).
Zhao, L.: Review and Prospect of Geothermal Power Generation in China, Nature, 33, 86–92, 2011.
Zhu, Y., Wang, W., Wu, X., Zhang, H., Xu, J., Yan, J., Cao, L., Ran, H., and Zhang, J.: Main technical innovations of Songke Well No.2 Drilling Project, China Geol., 1, 187–201, https://doi.org/10.31035/cg2018031, 2018.
Zou, C., Zhang, X., Niu, Y.-X., Niu, Y., Hou, J., and Peng, C.: General design of geophysical logging of the CCSD-SK-2 east borehole in the Songliao basin of Northeast China, Earth Sci. Front., 23, 279–287, https://doi.org/10.13745/j.esf.2016.03.031, 2016 (in Chinese with English abstract).
Zou, C., Zhang, X., Zhao, J., Peng, C., Zhang, S., Li, X., Niu, Y., Ding, Y., Qin, Y., and Lin, F.: Scientific Results of Geophysical Logging in the Upper Cretaceous Strata, CCSD SK-2 East Borehole in the Songliao Basin of Northeast China, Acta Geosci. Sin., 39, 679–690, https://doi.org/10.3975/cagsb.2018.101602, 2018. | 2020-04-02 05:26:59 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 25, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5474987626075745, "perplexity": 2325.201189744564}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370506673.7/warc/CC-MAIN-20200402045741-20200402075741-00551.warc.gz"} |
http://mammothmemory.net/maths/pythagoras-and-trigonometry/intercept-and-midpoint-theorem/thales-intercept-theorem.html | # Intercept and midpoint theorem
## Thales intercept theorem
Thales intercept theorem (or triangular proportionality theorem).
Thales was a Greek mathematician who founded geometry. Thales pronounced (they leaves).
He had a strange tale (thales) and raked up hay and leaves (thales) with it into separate (intercept) piles.
Thales lived in Greece in 640BC and was asked to measure the height of the oldest and largest Pyramid in Egypt.
The oldest and largest Pyramid in Egypt is the Cheops Pyramid (Cheops is pronounced chee-ops).
Now if he could have chopped (Cheops) the pyramid in half it would have been very easy for him to measure the Pyramid.
His method was ingenious.
To determine the height of the Pyramid he measured the length of the Pyramids shadow when the length of his own shadow was equal to his height.
But he played around with these ideas and realised that he could work out the height of the Pyramid at any time of the day (he started using poles as well).
Thales measured the length of a pole (A) and the length of its shadow (B) and at the same time measured the length of the shadow C.
He realised that the ratio
A/B is identical to D/C
And therefore A/B=D/C
So if A = 1.8m B = 2.2m and C = 168m then D must be
1.8/2.2=D/168
Therefore D = 138metres
So at any time of day Thales could have worked out the height of the Pyramid. | 2019-04-26 16:23:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4683745205402374, "perplexity": 1800.1188748087156}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578841544.98/warc/CC-MAIN-20190426153423-20190426175423-00411.warc.gz"} |
https://www.vedantu.com/question-answer/the-angle-of-elevation-of-an-electric-pole-from-class-11-maths-cbse-5f5facca68d6b37d1635b106 | Question
# The angle of elevation of an electric pole from a point A to the ground is 60° and from a point B towards the pole on the line joining the foot of the pole to the point is 75°. If the distance AB = a, then the height of the pole is : A.$\dfrac{{a\left( {3 + 2\sqrt 3 } \right)}}{2}$B.$a\left( {4 + 2\sqrt 3 } \right)$C.$\dfrac{{a\left( {2 + \sqrt 3 } \right)}}{2}$D.$\dfrac{{a\left( {2\sqrt 3 - 3} \right)}}{2}$
Hint: We will first draw the figure of the given condition and then we will use the formula of $\tan \theta = \dfrac{{perpendicular}}{{base}}$ in both the given angles of tan using the formula of tan(a+b) = $\dfrac{{\tan a + \tan b}}{{1 - \tan a\tan b}}$ and then from the obtained two equations, we will determine the value of height.
We are given the angle of elevation of an electric pole from a point A from the ground is given as 60° .
Also, the angle of elevation made by point B from the line joining the foot of the pole is 75°.
We are given the distance between the points A and B is AB = a.
Let us draw the figure:
let us assume that distance BC = x
hence, in triangle BCP, $\tan 75^\circ = \dfrac{h}{b}$
Now, tan 75° can be written as $\tan (45^\circ + 30^\circ )$
We can further solve it as $\tan (45^\circ + 30^\circ ) = \dfrac{{\tan 45^\circ + \tan 30^\circ }}{{1 - \tan 45^\circ \tan 30^\circ }}$ using the formula tan (a + b) = $\dfrac{{\tan a + \tan b}}{{1 - \tan a\tan b}}$.
$\Rightarrow \tan ({45^ \circ } + {30^ \circ }) = \dfrac{{1 + \dfrac{1}{{\sqrt 3 }}}}{{1 - 1(\dfrac{1}{{\sqrt 3 }})}} \\ \Rightarrow \tan {75^ \circ } = \dfrac{{\sqrt 3 + 1}}{{\sqrt 3 - 1}} = \dfrac{{\left( {\sqrt 3 + 1} \right)\left( {\sqrt 3 + 1} \right)}}{{3 - 1}} = \dfrac{{3 + 1 + 2\sqrt 3 }}{2} = \dfrac{{2\left( {2 + \sqrt 3 } \right)}}{2} = \left( {2 + \sqrt 3 } \right) \\$
Therefore, tan75$^ \circ$= $\dfrac{h}{b} = \left( {2 + \sqrt 3 } \right)$
$\Rightarrow b = \dfrac{h}{{2 + \sqrt 3 }}$
Now, in triangle ACP, tan 60$^ \circ$= $\dfrac{h}{{a + b}}$
Substituting the values of b and tan 60$^ \circ$, we get
$\Rightarrow \sqrt 3 = \dfrac{h}{{a + b}} \\ \Rightarrow h = \sqrt 3 \left( {a + b} \right) \\ \Rightarrow h = \sqrt 3 \left( {a + \dfrac{h}{{2 + \sqrt 3 }}} \right) \\ \Rightarrow h - \dfrac{{h\sqrt 3 }}{{2 + \sqrt 3 }} = a\sqrt 3 \\ \Rightarrow h\left( {\dfrac{{2 + \sqrt 3 - \sqrt 3 }}{{2 + \sqrt 3 }}} \right) = a\sqrt 3 \\$
Simplifying it further for the value of h, we get
$\therefore h = \dfrac{{a\left( {3 + 2\sqrt 3 } \right)}}{2}$
Therefore, the height of the pole h is found to be $\dfrac{{a\left( {3 + 2\sqrt 3 } \right)}}{2}$
Note: You should not get confused while calculating the tan75$^ \circ$with tan45$^ \circ$+ tan30$^ \circ$ instead of tan(45$^ \circ$+ tan30$^ \circ$). Be careful while simplifying for h because there are further calculations based on h value. If h value is wrong the final answer will come wrong. | 2020-09-23 18:53:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9608369469642639, "perplexity": 423.80368489453497}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400212039.16/warc/CC-MAIN-20200923175652-20200923205652-00384.warc.gz"} |
https://hal-paris1.archives-ouvertes.fr/hal-01333611 | A Banach-Stone type Theorem for invariant metric groups - Archive ouverte HAL Access content directly
Journal Articles Topology and its Applications Year : 2016
## A Banach-Stone type Theorem for invariant metric groups
(1)
1
Mohammed Bachir
• Function : Author
• PersonId : 960537
#### Abstract
Given an invariant metric group $(X,d)$, we prove that the set $Lip^1_+(X)$ of all nonnegative and $1$-Lipschitz maps on $(X,d)$ endowed with the inf-convolution structure is a monoid which completely determine the group completion of $(X,d)$. This gives a Banach-Stone type theorem for the inf-convolution structure in the group framework.
#### Domains
Mathematics [math] Functional Analysis [math.FA]
### Dates and versions
hal-01333611 , version 1 (17-06-2016)
### Identifiers
• HAL Id : hal-01333611 , version 1
### Cite
Mohammed Bachir. A Banach-Stone type Theorem for invariant metric groups. Topology and its Applications, 2016. ⟨hal-01333611⟩
### Export
BibTeX TEI Dublin Core DC Terms EndNote Datacite
180 View | 2023-01-26 23:26:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5096619129180908, "perplexity": 5677.171765227736}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764494826.88/warc/CC-MAIN-20230126210844-20230127000844-00646.warc.gz"} |
https://www.semanticscholar.org/paper/RT-distance-and-weight-distributions-of-Type-1-of-4-Dinh-Nguyen/57c8e7a82cfd7ee0e334a9f9636afb0ddad6c4b7 | • Corpus ID: 201074595
# RT distance and weight distributions of Type 1 constacyclic codes of length 4 p
@inproceedings{Dinh2019RTDA,
title={RT distance and weight distributions of Type 1 constacyclic codes of length 4 p},
author={Hai Quang Dinh and Bac Trong Nguyen and Songsak Sriboonchitta},
year={2019}
}
• Published 2019
• Computer Science, Mathematics
For any odd prime p such that p ≡ 1 (mod 4) , the class of Λ -constacyclic codes of length 4p over the finite commutative chain ring Ra = Fpm [u] ⟨ua⟩ = Fpm + uFpm + · · · + u Fpm , for all units Λ of Ra that have the form Λ = Λ0+uΛ1+ · · ·+uΛa−1 , where Λ0,Λ1, . . . ,Λa−1 ∈ Fpm , Λ0 ̸=0, Λ1 ̸=0 , is investigated. If the unit Λ is a square, each Λ -constacyclic code of length 4p is expressed as a direct sum of a −λ -constacyclic code and a λ -constacyclic code of length 2p . In the main case…
1 Citations
• Mathematics
ArXiv
• 2019
The Rosenbloom-Tsfasman (RT) distance, Hamming distance and weight distribution of Type (1) $\lambda$-constacyclic codes of length $4p^s$ are obtained when $\ lambda$ is not a square and the dual of the above code is self-orthogonal and self-dual.
## References
SHOWING 1-10 OF 43 REFERENCES
• Mathematics
Algebra Colloquium
• 2019
For any odd prime p such that pm ≡ 3 (mod 4), consider all units Λ of the finite commutative chain ring [Formula: see text] that have the form Λ = Λ0 + uΛ1 + ⋯ + ua−1 Λa−1, where Λ0, Λ1, …, Λa−1 ∊
• Computer Science, Mathematics
• 2018
Representations for all distinct cyclic codes, negacyclic codes and their dual codes of length np s over R are obtained, and self-duality for these codes are determined.
• Mathematics, Computer Science
Appl. Math. Lett.
• 2008
• Computer Science
IEEE Trans. Inf. Theory
• 1973
It is shown that the polynomials (x - c)^i, i = 0,1,2,\cdots, have the "weight-retaining" property that any linear combination of these polynmials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum degree polynomial included.
• H. Dinh
• Computer Science, Mathematics
Finite Fields Their Appl.
• 2012
• Mathematics
IEEE Trans. Inf. Theory
• 1986
There exists a nontrivial cyclic extended Reed-Solomon code of length q over GF (q) if and only if q is a prime. | 2023-02-03 14:47:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6398006081581116, "perplexity": 3588.657177024973}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500056.55/warc/CC-MAIN-20230203122526-20230203152526-00259.warc.gz"} |
https://www.physicsforums.com/threads/percentage-errors.636645/ | # Percentage errors
lavster
How can you get positive and negative errors to be different in magnitude?
for example -
when calculating the error of the area of an annulus from two circular areas (ie subtracting one from the other, why is the positive error greater than the negative error
Thanks
Naty1
because the denominators are different...
like, say, you have $100 invested and you lose$20...that's a $20 loss... Now you have$80...What percentage gain do you need to get your \$20 back...
20/80 is 25%.
figures don't lie, but liars figure!
lavster
i understand the money analogy, but not when talking about the areas - sorry! :S we are only subtracting once
i understand the money analogy, but not when talking about the areas - sorry! :S we are only subtracting once
Could you illustrate by an example what you are concerned about?
Mentor
Imagine a square where both sides are known with a precision of 10% - they might be 10% shorter or 10% longer, but not more. What is the maximal deviation?
Larger area: Both sides 10% longer, total area 1.1^2 = 1.21 of the original area (21% more).
Smaller area: Both sides 10% shorter, total area 0.9^2 = 0.81 of the original area (19% less).
Do you see the difference?
Imagine a square where both sides are known with a precision of 10% - they might be 10% shorter or 10% longer, but not more. What is the maximal deviation?
Larger area: Both sides 10% longer, total area 1.1^2 = 1.21 of the original area (21% more).
Smaller area: Both sides 10% shorter, total area 0.9^2 = 0.81 of the original area (19% less).
Do you see the difference?
I see the difference, but why do you see this as a problem? | 2023-02-07 14:56:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6536539793014526, "perplexity": 1710.9182403301238}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500619.96/warc/CC-MAIN-20230207134453-20230207164453-00593.warc.gz"} |
https://quizlet.com/explanations/questions/some-electric-stoves-have-a-flat-ceramic-surface-with-heating-elements-hidden-beneath-a-pot-placed-over-a-heating-element-will-be-heated-whi-8b0df96e-ef4685eb-066b-436e-adb5-92af12265826 | Question
Some electric stoves have a flat ceramic surface with heating elements hidden beneath. A pot placed over a heating element will be heated, while it is safe to touch the surface only a few centimeters away. Why is ceramic, with a conductivity less than that of a metal but greater than that of a good insulator, an ideal choice for the stove top?
Solution
Verified
Step 1
1 of 2
Ceramic is a good choice because it releases the heat slowly, retaining it for a long time, and hence efficiently cooking the food as well as mainitaining the electricity bill under check as not much of the heat is wasted.
Recommended textbook solutions
Physics for Scientists and Engineers: A Strategic Approach with Modern Physics
4th EditionISBN: 9780133942651 (8 more)Randall D. Knight
3,508 solutions
Mathematical Methods in the Physical Sciences
3rd EditionISBN: 9780471198260 (1 more)Mary L. Boas
3,355 solutions
Fundamentals of Physics
10th EditionISBN: 9781118230718David Halliday, Jearl Walker, Robert Resnick
8,950 solutions
College Physics
1st EditionISBN: 9781938168000 (1 more)Paul Peter Urone, Roger A Hinrichs
3,169 solutions | 2023-02-03 07:49:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 9, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.28222930431365967, "perplexity": 5823.172501507568}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764500044.16/warc/CC-MAIN-20230203055519-20230203085519-00083.warc.gz"} |
https://byjus.com/cbse/exercise-4-1-lines-and-triangles/ | Exercise 4.1: Lines and Triangles
Question 1: In the given figure, AB is a mirror. PQ is the incident ray and QR, the reflected ray. If
$$\begin{array}{l}\angle PQR=112^{\circ}\end{array}$$
, find
$$\begin{array}{l}\angle PQA\end{array}$$
.
Ans:
We know that the angle of incidence = angle of reflection.
Hence, let
$$\begin{array}{l}\angle PQA = \angle BQR = x^{o}\end{array}$$
Since, AQB is a straight line, we have
therefore,
$$\begin{array}{l} \angle PQA + \angle PQR + \angle BQR = 180^{o}\end{array}$$
x + 112 + x = 180o
2x = 68
X = 34o
therefore,
$$\begin{array}{l}\angle PQA = 34^{o}\end{array}$$
Question 2: If two straight lines intersect each other than prove that ray opposite to the bisector of one of the angles so formed bisect the vertically opposite angles.
Ans:
Let AB and CD be the two lines intersecting at a point O and let ray OE bisect
$$\begin{array}{l}\angle AOC\end{array}$$
. Now draw a ray OF in the opposite direction of OE, such that EOF is a straight line.
Let
$$\begin{array}{l}\angle COE = 1, \angle AOE = 2, \angle BOF = 3\ and\ \angle DOF = 4\end{array}$$
We know that vertically opposite angles are equal.
therefore,
$$\begin{array}{l} \angle 1 = \angle 4\ and\ \angle 2 = \angle 3\end{array}$$
But,
$$\begin{array}{l}\angle 1 = \angle 2\end{array}$$
[Since OE bisects
$$\begin{array}{l}\angle AOC\end{array}$$
]
therefore,
$$\begin{array}{l} \angle 4 = \angle 3\end{array}$$
Hence, the ray opposite the bisector of one of the angles so formed bisects the vertically-opposite angle.
Question 3: Prove that the bisector of two adjacent supplementary angles include a right angle.
Ans: Let AOB denote a straight line and let
$$\begin{array}{l}\angle AOC\ and\ \angle BOC\end{array}$$
be the supplementary angles.
Thus, we have:
$$\begin{array}{l}\angle AOC = x^{o}\end{array}$$
and
$$\begin{array}{l}\angle BOC = (180 -x)^{o}\end{array}$$
Let OE bisects
$$\begin{array}{l}\angle AOC\end{array}$$
and OF bisect
$$\begin{array}{l}\angle BOC\end{array}$$
Then, we have:
$$\begin{array}{l}\angle AOE = \angle COE = \frac{1}{2} x^{o}\end{array}$$
and
$$\begin{array}{l}\angle BOF = \angle FOC = \frac{1}{2}(180 – x)^{o}\end{array}$$
Therefore,
$$\begin{array}{l}\angle COE + \angle FOC = \frac{1}{2} x + \frac{1}{2} (180 – x)^{o}\end{array}$$
$$\begin{array}{l} = \frac{1}{2}(x + 180 – x)\end{array}$$
$$\begin{array}{l}= \frac{1}{2}(180^{o})\end{array}$$
= 90o
$$\begin{array}{l}AB\left | \right | CD\end{array}$$
are cut by a traversal t at E and F respectively. If L1 =
$$\begin{array}{l}70^{\circ}\end{array}$$
, find the measure of each of the remaining marked angles.
Ans.
We have,
$$\begin{array}{l}\angle 1=70^{\circ}\end{array}$$
. Then,
$$\begin{array}{l}\angle 1=\angle 5\end{array}$$
[Corresponding angle ]
Therefore,
$$\begin{array}{l}\angle 5=70^{\circ}\end{array}$$
$$\begin{array}{l}\angle 1=\angle 3\end{array}$$
[Vertically-opposite angles]
Therefore,
$$\begin{array}{l}\angle 3=70^{\circ}\end{array}$$
$$\begin{array}{l}\angle 5=\angle 7\end{array}$$
[Vertically-opposite angles]
Therefore,
$$\begin{array}{l}\angle 7=70^{\circ}\end{array}$$
$$\begin{array}{l}\angle 1+\angle 2=180^{\circ}\end{array}$$
[Since AFB is a straight line ]
Therefore,
$$\begin{array}{l}70^{\circ}+\angle 2=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle 2=110^{\circ}\end{array}$$
[Vertically-opposite angles]
à
$$\begin{array}{l}\angle 4=110^{\circ}\end{array}$$
$$\begin{array}{l}\angle 2=\angle 6\end{array}$$
[Corresponding angles]
à
$$\begin{array}{l}\angle 6=110^{\circ}\end{array}$$
$$\begin{array}{l}\angle 6=\angle 8\end{array}$$
[Vertically-opposite angles]
à
$$\begin{array}{l}\angle 8=110^{\circ}\end{array}$$
Therefore,
$$\begin{array}{l}\angle 1=70^{\circ}\end{array}$$
,
$$\begin{array}{l}\angle 2=110^{\circ}\end{array}$$
,
$$\begin{array}{l}\angle 3=70^{\circ}\end{array}$$
,
$$\begin{array}{l}\angle 4=110^{\circ}\end{array}$$
,
$$\begin{array}{l}\angle 5=70^{\circ}\end{array}$$
,
$$\begin{array}{l}\angle 6=110^{\circ}\end{array}$$
,
$$\begin{array}{l}\angle 7=70^{\circ}\end{array}$$
and
$$\begin{array}{l}\angle 8=110^{\circ}\end{array}$$
.
$$\begin{array}{l}AB\left | \right | CD\end{array}$$
are cut by a transversal t at E and F respectively. If L2:L1 = 5:4 , find the measure of each one of the marked angles.
Given
$$\begin{array}{l}AB\left | \right | CD\end{array}$$
and a line t intersects them at E and F forming angles
L1 + L2 + L3 + L4 + L5 + L6 + L7 + L8
Ans.
Given, L2:L1 = 5:4
Let L2 = 5y and L1 = 4y
But L2 + L1 =
$$\begin{array}{l}180^{\circ}\end{array}$$
[Linear pair]
à 5y + 4y =
$$\begin{array}{l}180^{\circ}\end{array}$$
à Y =
$$\begin{array}{l}\frac{180^{\circ}}{9}=20^{\circ}\end{array}$$
Therefore, L2 = 5y = 5 x
$$\begin{array}{l}20^{\circ}\end{array}$$
=
$$\begin{array}{l}100^{\circ}\end{array}$$
And L1 = 4y = 4 x
$$\begin{array}{l}20^{\circ}\end{array}$$
=
$$\begin{array}{l}80^{\circ}\end{array}$$
But L1 = L3 [vertically opp. Angles]
Therefore, L3 =
$$\begin{array}{l}80^{\circ}\end{array}$$
Similarly, Since L2 = L4 [vertically opp. Angles]
Therefore, L4 =
$$\begin{array}{l}100^{\circ}\end{array}$$
Since, L1 = L5 [corresponding angles]
Therefore, L5 =
$$\begin{array}{l}80^{\circ}\end{array}$$
Since, L4 = L6 [Alternate angles]
Therefore, L6 =
$$\begin{array}{l}100^{\circ}\end{array}$$
Since, L3 = L7 [Corresponding angles]
Therefore, L7 =
$$\begin{array}{l}80^{\circ}\end{array}$$
Since, L4 = L8 [Corresponding angles]
Therefore, L8 =
$$\begin{array}{l}100^{\circ}\end{array}$$
Hence, L3 =
$$\begin{array}{l}80^{\circ}\end{array}$$
,L4 =
$$\begin{array}{l}100^{\circ}\end{array}$$
, L5 =
$$\begin{array}{l}80^{\circ}\end{array}$$
, L6 =
$$\begin{array}{l}100^{\circ}\end{array}$$
, L7 =
$$\begin{array}{l}80^{\circ}\end{array}$$
, L8 =
$$\begin{array}{l}100^{\circ}\end{array}$$
.
$$\begin{array}{l}AB\left | \right |DC\end{array}$$
and
$$\begin{array}{l}AD\left | \right |BC\end{array}$$
.Prove that
$$\begin{array}{l}\angle ADC=\angle ABC\end{array}$$
.
Ans.
Let
$$\begin{array}{l}AD\parallel BC\end{array}$$
and CD is the transversal. Then,
$$\begin{array}{l}\angle ADC+\angle DCB=180^{\circ}\end{array}$$
…(i) [Consecutive Interior angles]
Also,
$$\begin{array}{l}AB\parallel CD\end{array}$$
and BC is the transversal. Then,
$$\begin{array}{l}\angle DCB+\angle ABC=180^{\circ}\end{array}$$
…(ii) [Consecutive Interior angles]
From (i) and (ii), we get:
$$\begin{array}{l}\angle ADC+\angle DCB=\angle DCB+\angle ABC\end{array}$$
à
$$\begin{array}{l}\angle ADC=\angle ABC\end{array}$$
Q7) In each of the figure find the angle
$$\begin{array}{l}AB\parallel CD\end{array}$$
. Find the value of x in each case.
(i)
Ans. (i)
In the fig.
$$\begin{array}{l}AB\parallel CD\parallel EF\end{array}$$
Now,
$$\begin{array}{l}AB\parallel EF\end{array}$$
and BE is the transversal. Then,
$$\begin{array}{l}\angle ABE=\angle BEF\end{array}$$
[Alternate interior angles]
à
$$\begin{array}{l}\angle BEF=35^{\circ}\end{array}$$
Again,
$$\begin{array}{l}EF\parallel CD\end{array}$$
and DE is the transversal.
Then,
$$\begin{array}{l}\angle DEF=\angle FED\end{array}$$
à
$$\begin{array}{l}\angle FED=65^{\circ}\end{array}$$
Therefore,
$$\begin{array}{l}x^{\circ}=\angle BEF+\angle FED\end{array}$$
=
$$\begin{array}{l}(35+65)^{\circ}\end{array}$$
=
$$\begin{array}{l}100^{\circ}\end{array}$$
Or, x = 100
(ii)
Draw
$$\begin{array}{l}EO\parallel AB\parallel CD\end{array}$$
Then,
$$\begin{array}{l}\angle EOB+\angle EOD=x^{\circ}\end{array}$$
Now,
$$\begin{array}{l}EO\parallel AB\end{array}$$
and BO is the transversal.
Therefore,
$$\begin{array}{l}\angle EOB+\angle ABO=180^{\circ}\end{array}$$
[Consecutive Interior angles]
à
$$\begin{array}{l}\angle EOB+55^{\circ}=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle EOB=155^{\circ}\end{array}$$
Therefore,
$$\begin{array}{l}x^{\circ}=\angle EOB+\angle EOD\end{array}$$
=
$$\begin{array}{l}(125+155)^{\circ}\end{array}$$
=
$$\begin{array}{l}280^{\circ}\end{array}$$
Or, x = 280
(iii)
Draw
$$\begin{array}{l}EF\parallel AB\parallel CD\end{array}$$
.
Then,
$$\begin{array}{l}\angle AEF+\angle CEF=x^{\circ}\end{array}$$
Now,
$$\begin{array}{l}EF\parallel AB\end{array}$$
and AE is the transversal.
Therefore,
$$\begin{array}{l}\angle AEF+\angle BAE=180^{\circ}\end{array}$$
[Consecutive interior angles]
à
$$\begin{array}{l}\angle AEF+116=180\end{array}$$
à
$$\begin{array}{l}\angle AEF=64^{\circ} \end{array}$$
Again,
$$\begin{array}{l}EF\parallel CD\end{array}$$
and CE is the transversal.
$$\begin{array}{l}\angle CEF+\angle ECD=180^{\circ}\end{array}$$
[Consecutive Interior angles]
à
$$\begin{array}{l}\angle CEF+124=180\end{array}$$
à
$$\begin{array}{l}\angle CEF=56^{\circ}\end{array}$$
Therefore,
$$\begin{array}{l}x^{\circ}=\angle AEF+\angle CEF\end{array}$$
=
$$\begin{array}{l}(64+56)^{\circ}\end{array}$$
=
$$\begin{array}{l}120^{\circ}\end{array}$$
Or, x = 120
Q8) In the given figure,
$$\begin{array}{l}AB\parallel CD\parallel EF\end{array}$$
. Find the value of x.
Ans.
Given,
$$\begin{array}{l}EF\parallel CD\end{array}$$
and CE is the transversal.
Then,
$$\begin{array}{l}\angle ECD+\angle CEF=180^{\circ}\end{array}$$
[Consecutive Interior angles]
à
$$\begin{array}{l}\angle ECD+130^{\circ}=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle ECD=50^{\circ}\end{array}$$
Again,
$$\begin{array}{l}AB\parallel CD\end{array}$$
and BC is the transversal.
Then,
$$\begin{array}{l}\angle ABC=\angle BCD\end{array}$$
[Alternate Interior Angles]
à
$$\begin{array}{l}70^{\circ}=x+50^{\circ} ;Therefore, [\angle BCD=\angle BCE+\angle ECD]\end{array}$$
à
$$\begin{array}{l}x=20^{\circ}\end{array}$$
Q9) In the given figure ,
$$\begin{array}{l}AB\parallel CD\end{array}$$
. Find the value of x.
Ans.
Draw
$$\begin{array}{l}EF\parallel AB\parallel CD\end{array}$$
$$\begin{array}{l}EF\parallel CD\end{array}$$
and CE is the transversal.
Then,
$$\begin{array}{l}\angle ECD+\angle CEF=180^{\circ}\end{array}$$
[Anfles on the same side of a transversal are supplementary]
à
$$\begin{array}{l}130^{\circ}+\angle CEF=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle CEF=50^{\circ}\end{array}$$
Again,
$$\begin{array}{l}EF\parallel AB\end{array}$$
and AE is the transversal.
Then,
$$\begin{array}{l}\angle BAE+\angle AEF=180^{\circ}\end{array}$$
[Angles on the same side of a transversal line are supplementary]
à
$$\begin{array}{l}x^{\circ}+20^{\circ}+50^{\circ}=180^{\circ}\;[\angle AEF=\angle AEC+\angle CEF]\end{array}$$
à
$$\begin{array}{l}x^{\circ}+70^{\circ}=180^{\circ}\end{array}$$
à
$$\begin{array}{l}x^{\circ}=110^{\circ}\end{array}$$
à x = 110
Q10) In the given figure,
$$\begin{array}{l}AB\parallel CD\end{array}$$
, Prove that
$$\begin{array}{l}\angle BAE-\angle DCE=\angle AEC\end{array}$$
Ans.
Draw
$$\begin{array}{l}EF\parallel AB\parallel CD\end{array}$$
through E.
Now,
$$\begin{array}{l}EF\parallel AB\end{array}$$
and AE is the transversal.
Then,
$$\begin{array}{l}\angle BAE+\angle AEF=180^{\circ}\end{array}$$
[Angles on the same side of a transversal line are supplementary]
Again,
$$\begin{array}{l}EF\parallel CD\end{array}$$
and CE is the transversal.
Then,
$$\begin{array}{l}\angle DCE+\angle CEF=180^{\circ}\end{array}$$
[Angles on the same side of a transversal line are supplementary]
à
$$\begin{array}{l}\angle DCE+(\angle AEC+\angle AEF)=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle DCE+\angle AEC+180^{\circ}-\angle BAE=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle BAE-\angle DCE=\angle AEC\end{array}$$
Q11) In the given figure,
$$\begin{array}{l}AB\parallel CD\; and \; CD\parallel EF\end{array}$$
. Find the value of x.
Ans.
We have,
$$\begin{array}{l}AB\parallel CD\end{array}$$
and
$$\begin{array}{l}BC\parallel ED\end{array}$$
.
$$\begin{array}{l}BD\parallel ED\end{array}$$
and CD is the transversal.
Then,
$$\begin{array}{l}\angle BCD+\angle CDE=180^{\circ}\end{array}$$
[Angles on the same side of a transversal line are supplementary]
à
$$\begin{array}{l}\angle BCD+75=180\end{array}$$
à
$$\begin{array}{l}\angle BCD=105^{\circ}\end{array}$$
$$\begin{array}{l}AB\parallel CD\end{array}$$
and BC is the transversal.
$$\begin{array}{l}\angle ABC=\angle BCD\end{array}$$
(alternate angles)
à
$$\begin{array}{l}x^{\circ}=105^{\circ}\end{array}$$
à x = 105
Q12) In the given figure ,
$$\begin{array}{l}AB\parallel CD\end{array}$$
.Prove that P + q – r =
$$\begin{array}{l}180^{\circ}\end{array}$$
.
Ans.
Draw
$$\begin{array}{l}PFQ\parallel AB\parallel CD\end{array}$$
Now,
$$\begin{array}{l}PFQ\parallel AB\end{array}$$
and EF is the transversal.
Then,
$$\begin{array}{l}\angle AEF+\angle EFP=180^{\circ}\end{array}$$
…(1)
[Angles on the same side of a transversal line are supplementary]
Also,
$$\begin{array}{l}PFQ\parallel CD\end{array}$$
$$\begin{array}{l}\angle PFQ=\angle FGD=r^{\circ}\end{array}$$
[Alternate angles]
And
$$\begin{array}{l}\angle EFP=\angle EFG-\angle PFG=q^{\circ}-r^{\circ}\end{array}$$
Putting the value of
$$\begin{array}{l}\angle EFP\end{array}$$
in equ. (i)
We get,
$$\begin{array}{l}p^{\circ}+q^{\circ}-r^{\circ}=180^{\circ}\end{array}$$
à p + q – r = 180
Q13) In the given figure,
$$\begin{array}{l}AB\parallel PQ\end{array}$$
. Find the value of x and y.
Ans.
Given
$$\begin{array}{l}AB\parallel PQ\end{array}$$
Let CD be the transversal cutting AB and PQ at E and F, respectively.
Then,
$$\begin{array}{l}\angle CEB+\angle BEG+\angle GEF=180^{\circ}\end{array}$$
[Since CD is a straight line]
à
$$\begin{array}{l}75^{\circ}+20^{\circ}+\angle GEF=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle GEF=85^{\circ}\end{array}$$
We know that the sum of angles of a triangle is
$$\begin{array}{l}180^{\circ}\end{array}$$
therefore,
$$\begin{array}{l} \angle GEF+\angle EGF+\angle EFG=180\end{array}$$
à
$$\begin{array}{l}85^{\circ}+x+25^{\circ}=180^{\circ}\end{array}$$
à
$$\begin{array}{l}110^{\circ}+x=180^{\circ}\end{array}$$
à x =
$$\begin{array}{l}70^{\circ}\end{array}$$
And
$$\begin{array}{l}\angle FEG+\angle BEG=\angle DFQ\end{array}$$
[Corresponding angles]
à
$$\begin{array}{l}85^{\circ}+20^{\circ}=\angle DFQ\end{array}$$
à
$$\begin{array}{l}\angle DFQ=105^{\circ}\end{array}$$
$$\begin{array}{l}\angle EFG+\angle GFQ+\angle DFQ=180^{\circ}\end{array}$$
[Since CD is a straight line]
à
$$\begin{array}{l}25^{\circ}+y+105^{\circ}=180^{\circ}\end{array}$$
à
$$\begin{array}{l}y=50^{\circ}\end{array}$$
Therefore,
$$\begin{array}{l} x=70^{\circ}\end{array}$$
and
$$\begin{array}{l}y=50^{\circ}\end{array}$$
Q14) In the given figure,
$$\begin{array}{l}AB\parallel CD\end{array}$$
. Find the value of x.
Ans.
$$\begin{array}{l}AB\parallel CD\end{array}$$
and AC is the transversal.
Then,
$$\begin{array}{l}\angle BAC+\angle ACD=180^{\circ}\end{array}$$
[Consecutive Interior angles]
à 75 +
$$\begin{array}{l}\angle ACD=180\end{array}$$
à
$$\begin{array}{l}\angle ACD=105^{\circ}\end{array}$$
And,
$$\begin{array}{l}\angle ACD=\angle ECF\end{array}$$
[Vertically –opposite angles]
à
$$\begin{array}{l}\angle ECF=105^{\circ}\end{array}$$
We know that the sum of the angles of a triangle is
$$\begin{array}{l}180^{\circ}\end{array}$$
$$\begin{array}{l}\angle ECF+\angle CFE+\angle CEF=180^{\circ}\end{array}$$
à
$$\begin{array}{l}105^{\circ}+30^{\circ}+x=180^{\circ}\end{array}$$
à
$$\begin{array}{l}135^{\circ}+x=180^{\circ}\end{array}$$
à
$$\begin{array}{l}x=45^{\circ}\end{array}$$
Q15) In the given figure,
$$\begin{array}{l}AB\parallel CD\end{array}$$
. Find the value of x.
Ans.
$$\begin{array}{l}AB\parallel CD\end{array}$$
and PQ is the transversal.
Then,
$$\begin{array}{l}\angle PEF=\angle EGH\end{array}$$
[Corresponding Angles]
à
$$\begin{array}{l}\angle EGH=85^{\circ}\end{array}$$
And,
$$\begin{array}{l}\angle EGH+\angle QGH=180^{\circ}\end{array}$$
[Since PQ is a straight line]
à
$$\begin{array}{l}85^{\circ}+\angle QGH=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle QGH=95^{\circ}\end{array}$$
Also,
$$\begin{array}{l}\angle CHQ+\angle GHQ=180^{\circ}\end{array}$$
[Since CD is a straight line]
à
$$\begin{array}{l}115^{\circ}+\angle GHQ=180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle GHQ=65^{\circ}\end{array}$$
We know that the sum of angles of a triangle is
$$\begin{array}{l}180^{\circ}\end{array}$$
à
$$\begin{array}{l}\angle QGH+\angle GHQ+\angle GQH=180^{\circ}\end{array}$$
à
$$\begin{array}{l}95^{\circ}+65^{\circ}+x=180^{\circ}\end{array}$$
à x =
$$\begin{array}{l}20^{\circ}\end{array}$$
Q16) In the given figure,
$$\begin{array}{l}AB\parallel CD\end{array}$$
. Find the value of x,y and z.
Ans.
$$\begin{array}{l}\angle ADC= \angle DAB\end{array}$$
[Alternate interior angles]
à z =
$$\begin{array}{l}75^{\circ}\end{array}$$
$$\begin{array}{l}\angle ABC=\angle BCD\end{array}$$
[Alternate Interior Angles]
à x =
$$\begin{array}{l}75^{\circ}\end{array}$$
We know that the sum of the angles of triangle is
$$\begin{array}{l}180^{\circ}\end{array}$$
à
$$\begin{array}{l}35^{\circ}+y+75^{\circ}=180^{\circ}\end{array}$$
à y =
$$\begin{array}{l}70^{\circ}\end{array}$$
Therefore,
$$\begin{array}{l}x=35^{\circ},y=70^{\circ}\;and\;z=75^{\circ}\end{array}$$
Q17) In the given figure,
$$\begin{array}{l}AB\parallel CD\end{array}$$
. Find the value of x,y and z.
Ans.
$$\begin{array}{l}AB\parallel CD\end{array}$$
and let EF and EG be the transversals.
Now,
$$\begin{array}{l}AB\parallel CD\end{array}$$
and EF is the transversal.
Then,
$$\begin{array}{l}\angle AEF=\angle EFG\end{array}$$
[Alternate angles]
à
$$\begin{array}{l}y^{\circ}=75^{\circ}\end{array}$$
à y = 75
Also,
$$\begin{array}{l}\angle EFC+\angle EFD=180^{\circ}\end{array}$$
[Since CFGD is a straight line]
à x + y =180
à x + 75 =180
à x = 105
And,
$$\begin{array}{l}\angle EGF+\angle EGD=180^{\circ}\end{array}$$
[Since CFGD is a straight line]
à
$$\begin{array}{l}\angle EGF+125=180\end{array}$$
à
$$\begin{array}{l}\angle EGF=55^{\circ}\end{array}$$
We know that the sum of angles of a triangle is
$$\begin{array}{l}180^{\circ}\end{array}$$
$$\begin{array}{l}\angle EFG+\angle GEF+\angle EGF=180^{\circ}\end{array}$$
à
$$\begin{array}{l}y+z+55=180\end{array}$$
à 75 + z + 55 = 180
à z = 50
Therefore, x = 105, y = 75 and z = 50
Q18) In the given figure,
$$\begin{array}{l}AB\parallel CD\; and \; EF\parallel GH\end{array}$$
. Find the value of x,y,z and t.
Ans.
In the given figure,
x =
$$\begin{array}{l}60^{\circ}\end{array}$$
[Vertically-opposite Angles]
$$\begin{array}{l}\angle PRQ=\angle SQR\end{array}$$
[Alternate angles]
y =
$$\begin{array}{l}60^{\circ}\end{array}$$
$$\begin{array}{l}\angle APR=\angle PQS\end{array}$$
[Corresponding Angles]
à
$$\begin{array}{l}110^{\circ}=\angle PQR+60^{\circ}\;because [\angle PQS=\angle PQR+\angle RQS]\end{array}$$
à
$$\begin{array}{l}\angle PQR=50^{\circ}\end{array}$$
$$\begin{array}{l}\angle PQR+\angle RQS+\angle BQS=180^{\circ}\end{array}$$
[Since AB is straight line]
à
$$\begin{array}{l}50^{\circ}+60^{\circ}+z=180^{\circ}\end{array}$$
à
$$\begin{array}{l}110^{\circ}+z=180^{\circ}\end{array}$$
à
$$\begin{array}{l}z=70^{\circ}\end{array}$$
$$\begin{array}{l}\angle DSH=z\end{array}$$
[Corresponding Angles]
à
$$\begin{array}{l}\angle DSH=70^{\circ}\end{array}$$
therefore,
$$\begin{array}{l} \angle DSH=t\end{array}$$
[Vertically-opposite Angles]
à t =
$$\begin{array}{l}70^{\circ}\end{array}$$
Therefore,
$$\begin{array}{l};x=60^{\circ},z=70^{\circ}\;and\;t=70^{\circ}\end{array}$$
Q19) For what value of x will the lines l and m be parallel to each other?
Ans.
For the lines l and m to be parallel
(i)
$$\begin{array}{l}\Leftrightarrow\end{array}$$
3x – 20 = 2x +10 [Corresponding angles]
$$\begin{array}{l}\Leftrightarrow\end{array}$$
x = 30
(ii)
$$\begin{array}{l}\Leftrightarrow\end{array}$$
3x + 5 + 4x = 180 [Consecutive Interior Angles]
$$\begin{array}{l}\Leftrightarrow\end{array}$$
7x = 175
$$\begin{array}{l}\Leftrightarrow\end{array}$$
x = 25
Q20) If two straight lines are perpendicular to the same line, prove that the lines are parallel to each other.
Ans:
Given: Two lines m and n are perpendicular to a given line l.
To Prove :
$$\begin{array}{l}m\parallel n\end{array}$$
Proof : Since
$$\begin{array}{l}m\perp l\end{array}$$
So,
$$\begin{array}{l}\angle 1=90^{\circ}\end{array}$$
Again, Since
$$\begin{array}{l}n\perp l\end{array}$$
$$\begin{array}{l}\angle 2=90^{\circ}\end{array}$$
therefore,
$$\begin{array}{l} \angle 1=\angle 2=90^{\circ}\end{array}$$
But
$$\begin{array}{l}\angle 1\end{array}$$
and
$$\begin{array}{l}\angle 2\end{array}$$
are the corresponding angles made by the transversal l with lines m and n and they are proved to be equal.
Thus,
$$\begin{array}{l}m\parallel n\end{array}$$ | 2022-08-09 23:45:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.678573489189148, "perplexity": 3615.85570982972}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882571090.80/warc/CC-MAIN-20220809215803-20220810005803-00143.warc.gz"} |
https://en.khanacademy.org/math/multivariable-calculus/integrating-multivariable-functions/flux-in-3d-articles/a/unit-normal-vector-of-a-surface | If you're seeing this message, it means we're having trouble loading external resources on our website.
If you're behind a web filter, please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked.
## Multivariable calculus
### Course: Multivariable calculus>Unit 4
Lesson 14: Flux in 3D (articles)
# Unit normal vector of a surface
Learn how to find the vector that is perpendicular, or "normal", to a surface. You will need this skill for computing flux in three dimensions.
## What we're building to
• If a surface is parameterized by a function start bold text, v, end bold text, with, vector, on top, left parenthesis, t, comma, s, right parenthesis, the unit normal vector to this surface is given by the expression
\begin{aligned} \pm \dfrac{ \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \blueE{t}}(t, s) \right) \times \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \redE{s}}(t, s) \right) }{ \left| \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \blueE{t}}(t, s) \right) \times \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \redE{s}}(t, s) \right) \right| } \end{aligned}
• You always have two choices for a unit vector function. If a surface is closed, like a sphere or a torus, those choices can be interpreted as outward-facing and inward-facing vectors.
• This is useful for the idea of flux in three-dimensions, covered in the next article.
## Unit normal vector
Let's say you have some surface, S. If a vector at some point on S is perpendicular to S at that point, it is called a normal vector (of S at that point). More precisely, you might say it is perpendicular to the tangent plane of S at that point, or that it is perpendicular to all possible tangent vectors of S at that point.
When a normal vector has magnitude 1, it is called a unit normal vector. Notice, there will always be two unit normal vectors, each pointing in opposite directions:
Why do we care? To compute surface integrals in a vector field, also known as three-dimensional flux, you will need to find an expression for the unit normal vectors on a given surface. This will take the form of a multivariable, vector-valued function, whose inputs live in three dimensions (where the surface lives), and whose outputs are three-dimensional vectors.
## Example: How to compute a unit normal vector
Consider the surface described by the following parametric function:
\begin{aligned} \vec{\textbf{v}}(t, s) = \left[ \begin{array}{c} t + 1\\ s \\ s^2 - t^2 + 1 \end{array} \right] \end{aligned}
In the range where minus, 2, is less than or equal to, t, is less than or equal to, 2 and minus, 2, is less than or equal to, s, is less than or equal to, 2, here's what that surface looks like:
For what follows, I am assuming you know that the two partial derivatives of a parametric surface give vectors which are each tangent to the surface, but in different directions.
#### Step 1: Find a (not necessarily unit) normal vector
Concept check: Which of the following will give a vector which is perpendicular to the surface parameterized by start bold text, v, end bold text, with, vector, on top at the point start bold text, v, end bold text, with, vector, on top, left parenthesis, 1, comma, minus, 2, right parenthesis?
This is a pretty complicated expression, with two vector-valued partial derivatives and a cross product. If you have computed some surface integrals before, you are all-too familiar with the expression and how ugly it can be to compute.
Once again, here's how start bold text, v, end bold text, with, vector, on top, left parenthesis, t, comma, s, right parenthesis is defined:
\begin{aligned} \vec{\textbf{v}}(t, s) = \left[ \begin{array}{c} t + 1\\ s \\ s^2 - t^2 + 1 \end{array} \right] \end{aligned}
Concept check: Now compute the cross product of the partial derivatives of start bold text, v, end bold text, with, vector, on top. Do this for a general point left parenthesis, t, comma, s, right parenthesis, meaning each component of your answer will be a function of t and s. As described in the previous problem, this will give you a function for normal vectors of the surface.
\begin{aligned} \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \blueE{t}}(t, s) \right) \times \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \redE{s}}(t, s) \right) = \end{aligned}
start bold text, i, end bold text, with, hat, on top, plus
start bold text, j, end bold text, with, hat, on top, plus
start bold text, k, end bold text, with, hat, on top
For example, if we plugged in left parenthesis, t, comma, s, right parenthesis, equals, left parenthesis, 1, comma, minus, 2, right parenthesis, here's what we'd get:
\begin{aligned} \left[ \begin{array}{c} 2(1) \\ -2(-2) \\ 1 \end{array} \right] = \left[ \begin{array}{c} 2 \\ 4 \\ 1 \end{array} \right] \end{aligned}
This is a vector which is perpendicular to the surface at the point start bold text, v, end bold text, with, vector, on top, left parenthesis, 1, comma, minus, 2, right parenthesis. However, it is not a unit vector, as you can see by computing its magnitude:
square root of, 2, squared, plus, 4, squared, plus, 1, squared, end square root, equals, square root of, 4, plus, 16, plus, 1, end square root, equals, square root of, 21, end square root
#### Step 2: Make that a unit normal vector
So we have this expression \begin{aligned} \left[ \begin{array}{c} 2t \\ -2s \\ 1 \end{array} \right] \end{aligned} that gives us a normal vector for each point start bold text, v, end bold text, with, vector, on top, left parenthesis, t, comma, s, right parenthesis. The next step is to massage this a bit to get an expression for a unit normal vector.
Concept check: What is the unit normal vector to our surface at the point start bold text, v, end bold text, with, vector, on top, left parenthesis, 1, comma, minus, 2, right parenthesis?
start bold text, i, end bold text, with, hat, on top, plus
start bold text, j, end bold text, with, hat, on top, plus
start bold text, k, end bold text, with, hat, on top
Concept check: More generally, what is the unit normal vector to our surface at an arbitrary point start bold text, v, end bold text, with, vector, on top, left parenthesis, t, comma, s, right parenthesis, as a function of t and s?
start bold text, i, end bold text, with, hat, on top, plus
start bold text, j, end bold text, with, hat, on top, plus
start bold text, k, end bold text, with, hat, on top
If you plug in any value left parenthesis, t, start subscript, 0, end subscript, comma, s, start subscript, 0, end subscript, right parenthesis to this expression, you will get a vector which has magnitude 1, and is normal to the surface parameterized by the function start bold text, v, end bold text, with, vector, on top at the point start bold text, v, end bold text, with, vector, on top, left parenthesis, t, start subscript, 0, end subscript, comma, s, start subscript, 0, end subscript, right parenthesis.
## Choosing orientation
Notice, if you multiply your function for a unit normal vector by minus, 1, it will still produce unit normal vectors. They will all just point in the opposite directions. The choice of direction for the unit normal vectors of your surface is what's called an orientation of that surface.
You will see the significance of this in the next article on three-dimensional flux. In short, orienting your surface is analogous to giving a one-dimensional curve a direction.
When your surface is closed, like a sphere or a torus, the two choices for unit normal vectors are often called outward-facing and inward-facing unit normal vectors.
## Summary
• Given a surface parameterized by a function start bold text, v, end bold text, with, vector, on top, left parenthesis, t, comma, s, right parenthesis, to find an expression for the unit normal vector to this surface, take the following steps:
• Step 1: Get a (non necessarily unit) normal vector by taking the cross product of both partial derivatives of start bold text, v, end bold text, with, vector, on top, left parenthesis, t, comma, s, right parenthesis:
\begin{aligned} \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \blueE{t}}(t, s) \right) \times \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \redE{s}}(t, s) \right) \end{aligned}
• Step 2: Turn this vector-expression into a unit vector by dividing it by its own magnitude:
\begin{aligned} \dfrac{ \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \blueE{t}}(t, s) \right) \times \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \redE{s}}(t, s) \right) }{ \left| \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \blueE{t}}(t, s) \right) \times \left( \dfrac{\partial \vec{\textbf{v}}}{\partial \redE{s}}(t, s) \right) \right| } \end{aligned}
• You can also multiply this expression by minus, 1, and it will still give unit normal vectors.
• The main reason for learning this skill is to compute three-dimensional flux. | 2023-03-21 07:07:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 56, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 20, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9974908232688904, "perplexity": 1689.0053766823864}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296943637.3/warc/CC-MAIN-20230321064400-20230321094400-00019.warc.gz"} |
https://gmatclub.com/forum/there-are-x-watermelons-of-10-kg-each-and-y-watermelons-of-r-kg-each-221608.html | GMAT Question of the Day - Daily to your Mailbox; hard ones only
It is currently 14 Dec 2018, 19:31
GMAT Club Daily Prep
Thank you for using the timer - this advanced tool can estimate your performance and suggest more practice questions. We have subscribed you to Daily Prep Questions via email.
Customized
for You
we will pick new questions that match your level based on your Timer History
Track
every week, we’ll send you an estimated GMAT score based on your performance
Practice
Pays
we will pick new questions that match your level based on your Timer History
Events & Promotions
Events & Promotions in December
PrevNext
SuMoTuWeThFrSa
2526272829301
2345678
9101112131415
16171819202122
23242526272829
303112345
Open Detailed Calendar
• Typical Day of a UCLA MBA Student - Recording of Webinar with UCLA Adcom and Student
December 14, 2018
December 14, 2018
10:00 PM PST
11:00 PM PST
Carolyn and Brett - nicely explained what is the typical day of a UCLA student. I am posting below recording of the webinar for those who could't attend this session.
• Free GMAT Strategy Webinar
December 15, 2018
December 15, 2018
07:00 AM PST
09:00 AM PST
Aiming to score 760+? Attend this FREE session to learn how to Define your GMAT Strategy, Create your Study Plan and Master the Core Skills to excel on the GMAT.
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each.
Author Message
Math Expert
Joined: 02 Sep 2009
Posts: 51215
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
08 Jul 2016, 07:04
00:00
Difficulty:
75% (hard)
Question Stats:
45% (02:09) correct 55% (01:23) wrong based on 51 sessions
HideShow timer Statistics
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. The average weight of a watermelon is 12 Kg. What is the value of R?
(1) There are five heavier watermelons more than lighter watermelons.
(2) The weight of the heavier watermelons in Kg is equal to their number
--== Message from the GMAT Club Team ==--
THERE IS LIKELY A BETTER DISCUSSION OF THIS EXACT QUESTION.
This discussion does not meet community quality standards. It has been retired.
If you would like to discuss this question please re-post it in the respective forum. Thank you!
To review the GMAT Club's Forums Posting Guidelines, please follow these links: Quantitative | Verbal Please note - we may remove posts that do not follow our posting guidelines. Thank you.
_________________
Director
Joined: 04 Jun 2016
Posts: 571
GMAT 1: 750 Q49 V43
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
08 Jul 2016, 09:03
Explanation :-
Question stem gives us $$\frac{10X+YR}{X+Y}=12$$
==>10X + YR = 12X + 12Y
2X = (R - 12)Y
So we have three variable X, Y and R to solve , we will need three equation..
we already have First equation from quesition stem
Statement 1 ) Gives us the second equation
Y=X+5 (Why is Y is the heavy watermelon? because the average weight of all watermelon is 12 , therefore there must be watermelons heavier than 10 kg, X watermelon are 10Kg therefore Y must be the heavy watermelons... )
Insufficient on its own (Option A and D out)
Statement 2) Gives us third equation
R = Y = X + 5
Insufficient on its own (Option B out)
Merging statement 1 and 2 we get three equations for three unknown hence C is sufficient
For those who want a definite proof
Merge statement 1 and 2 AND substitute for R and Y
2X = (R - 12)Y
==> 2X=(X+5-12)(X+5)
==>2X=(X-7)(X+5)
This is result in a quadratic of the the form
$$x^2 - 4x - 35 = 0.$$
Determinant is positive
b^2-4ac>0 (16-(-4*1*35) ==> two real and positive roots will be yielded by the quadratic equation.
Pick the correct one
Bunuel wrote:
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. The average weight of a watermelon is 12 Kg. What is the value of R?
(1) There are five heavier watermelons more than lighter watermelons.
(2) The weight of the heavier watermelons in Kg is equal to their number
_________________
Posting an answer without an explanation is "GOD COMPLEX". The world doesn't need any more gods. Please explain you answers properly.
FINAL GOODBYE :- 17th SEPTEMBER 2016. .. 16 March 2017 - I am back but for all purposes please consider me semi-retired.
Current Student
Joined: 18 Oct 2014
Posts: 846
Location: United States
GMAT 1: 660 Q49 V31
GPA: 3.98
Re: There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
08 Jul 2016, 12:55
1
Bunuel wrote:
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. The average weight of a watermelon is 12 Kg. What is the value of R?
(1) There are five heavier watermelons more than lighter watermelons.
(2) The weight of the heavier watermelons in Kg is equal to their number
10X+RY = 12 (X+Y)
(1) There are five heavier watermelons more than lighter watermelons.
Since Avg> 10, R >10
Y = X+5
10X + R(X+5)= 12 (X + X+5)
We have two variables in the equation. Not possible.
(2) The weight of the heavier watermelons in Kg is equal to their number
Y=R
Not sufficient as we will still have two variables.
Combining both statements
R=Y=X+5
Putting R= X+5 in 10X + R(X+5)= 12 (X + X+5) , we will get the equation with one variable. Hence, Sufficient.
_________________
I welcome critical analysis of my post!! That will help me reach 700+
Senior Manager
Joined: 05 Nov 2012
Posts: 447
Concentration: Technology, Other
Re: There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
18 Jul 2016, 21:10
Top Contributor
It seems I am missing something here.
Is there any other quick way to solve this? I am sure, I will kill some valuable time calculating the root using the quad equation formulae. IMO, its not safe to assume that just using A and B we would find the right solution. Sometimes GMAT tricks us by giving 2 valid answers in option C.
Director
Joined: 31 Jul 2017
Posts: 504
Location: Malaysia
GMAT 1: 700 Q50 V33
GPA: 3.95
WE: Consulting (Energy and Utilities)
Re: There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
20 Feb 2018, 23:49
Bunuel wrote:
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. The average weight of a watermelon is 12 Kg. What is the value of R?
(1) There are five heavier watermelons more than lighter watermelons.
(2) The weight of the heavier watermelons in Kg is equal to their number
Hi Bunuel
Doesn't Statement B mean that Y*R = Y or Am I missing something here. Please advise.
_________________
If my Post helps you in Gaining Knowledge, Help me with KUDOS.. !!
Math Expert
Joined: 02 Sep 2009
Posts: 51215
Re: There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
20 Feb 2018, 23:56
rahul16singh28 wrote:
Bunuel wrote:
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. The average weight of a watermelon is 12 Kg. What is the value of R?
(1) There are five heavier watermelons more than lighter watermelons.
(2) The weight of the heavier watermelons in Kg is equal to their number
Hi Bunuel
Doesn't Statement B mean that Y*R = Y or Am I missing something here. Please advise.
Intending meaning is that y = r, but I agree the wording is not prices here.
_________________
Director
Joined: 31 Jul 2017
Posts: 504
Location: Malaysia
GMAT 1: 700 Q50 V33
GPA: 3.95
WE: Consulting (Energy and Utilities)
Re: There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
21 Feb 2018, 00:04
Bunuel wrote:
rahul16singh28 wrote:
Bunuel wrote:
There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. The average weight of a watermelon is 12 Kg. What is the value of R?
(1) There are five heavier watermelons more than lighter watermelons.
(2) The weight of the heavier watermelons in Kg is equal to their number
Hi Bunuel
Doesn't Statement B mean that Y*R = Y or Am I missing something here. Please advise.
Intending meaning is that y = r, but I agree the wording is not prices here.
Thanks Bunuel.
But doesn't it change the answer in this case. As, $$Y*R = Y$$ --> $$Y(R-1)$$ = 0 as Y can't be 0, R = 1. Please advise.
_________________
If my Post helps you in Gaining Knowledge, Help me with KUDOS.. !!
Math Expert
Joined: 02 Sep 2009
Posts: 51215
Re: There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. [#permalink]
Show Tags
21 Feb 2018, 00:07
rahul16singh28 wrote:
But doesn't it change the answer in this case. As, $$Y*R = Y$$ --> $$Y(R-1)$$ = 0 as Y can't be 0, R = 1. Please advise.
Ignore this question.
--== Message from the GMAT Club Team ==--
THERE IS LIKELY A BETTER DISCUSSION OF THIS EXACT QUESTION.
This discussion does not meet community quality standards. It has been retired.
If you would like to discuss this question please re-post it in the respective forum. Thank you!
To review the GMAT Club's Forums Posting Guidelines, please follow these links: Quantitative | Verbal Please note - we may remove posts that do not follow our posting guidelines. Thank you.
_________________
Re: There are X watermelons of 10 Kg each, and Y Watermelons of R Kg each. &nbs [#permalink] 21 Feb 2018, 00:07
Display posts from previous: Sort by | 2018-12-15 03:31:00 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.36393770575523376, "perplexity": 5638.286952633063}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376826686.8/warc/CC-MAIN-20181215014028-20181215040028-00395.warc.gz"} |
https://stacks.math.columbia.edu/tag/03MJ | Lemma 65.22.1. Let $\mathcal{P}$ be a property of morphisms of schemes which is étale local on the source-and-target. Let $S$ be a scheme. Let $f : X \to Y$ be a morphism of algebraic spaces over $S$. Consider commutative diagrams
$\xymatrix{ U \ar[d]_ a \ar[r]_ h & V \ar[d]^ b \\ X \ar[r]^ f & Y }$
where $U$ and $V$ are schemes and the vertical arrows are étale. The following are equivalent
1. for any diagram as above the morphism $h$ has property $\mathcal{P}$, and
2. for some diagram as above with $a : U \to X$ surjective the morphism $h$ has property $\mathcal{P}$.
If $X$ and $Y$ are representable, then this is also equivalent to $f$ (as a morphism of schemes) having property $\mathcal{P}$. If $\mathcal{P}$ is also preserved under any base change, and fppf local on the base, then for representable morphisms $f$ this is also equivalent to $f$ having property $\mathcal{P}$ in the sense of Section 65.3.
Proof. Let us prove the equivalence of (1) and (2). The implication (1) $\Rightarrow$ (2) is immediate (taking into account Spaces, Lemma 63.11.6). Assume
$\xymatrix{ U \ar[d] \ar[r]_ h & V \ar[d] \\ X \ar[r]^ f & Y } \quad \quad \xymatrix{ U' \ar[d] \ar[r]_{h'} & V' \ar[d] \\ X \ar[r]^ f & Y }$
are two diagrams as in the lemma. Assume $U \to X$ is surjective and $h$ has property $\mathcal{P}$. To show that (2) implies (1) we have to prove that $h'$ has $\mathcal{P}$. To do this consider the diagram
$\xymatrix{ U \ar[d]_ h & U \times _ X U' \ar[l] \ar[d]^{(h, h')} \ar[r] & U' \ar[d]^{h'} \\ V & V \times _ Y V' \ar[l] \ar[r] & V' }$
By Descent, Lemma 35.29.5 we see that $h$ has $\mathcal{P}$ implies $(h, h')$ has $\mathcal{P}$ and since $U \times _ X U' \to U'$ is surjective this implies (by the same lemma) that $h'$ has $\mathcal{P}$.
If $X$ and $Y$ are representable, then Descent, Lemma 35.29.5 applies which shows that (1) and (2) are equivalent to $f$ having $\mathcal{P}$.
Finally, suppose $f$ is representable, and $U, V, a, b, h$ are as in part (2) of the lemma, and that $\mathcal{P}$ is preserved under arbitrary base change. We have to show that for any scheme $Z$ and morphism $Z \to X$ the base change $Z \times _ Y X \to Z$ has property $\mathcal{P}$. Consider the diagram
$\xymatrix{ Z \times _ Y U \ar[d] \ar[r] & Z \times _ Y V \ar[d] \\ Z \times _ Y X \ar[r] & Z }$
Note that the top horizontal arrow is a base change of $h$ and hence has property $\mathcal{P}$. The left vertical arrow is étale and surjective and the right vertical arrow is étale. Thus Descent, Lemma 35.29.5 once again kicks in and shows that $Z \times _ Y X \to Z$ has property $\mathcal{P}$. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar). | 2021-06-14 18:16:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.9945812821388245, "perplexity": 138.14402246224952}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487613380.12/warc/CC-MAIN-20210614170602-20210614200602-00295.warc.gz"} |
https://chrishunter.ca/tag/fractions/ | ## [SBA] Constructing Proficiency Scales
In this series:
1. Writing Learning Standards
2. Constructing Proficiency Scales
3. Designing Assessment Items
## Constructing Proficiency Scales
BC’s reporting order requires teachers of Grades K-9 to use proficiency scales with four levels: Emerging, Developing, Proficient, and Extending. Teachers of Grades 10-12 may use proficiency scales but must provide letter grades and percentages. Proficiency scales help communicate to students where they are and where they are going in their learning. But many don’t. When constructing these instruments, I keep three qualities in mind…
## Descriptive, Positive, Progressive and Additive
#### Descriptive
BC’s Ministry of Education defines Emerging, Developing, Proficient, and Extending as demonstrating initial, partial, complete, and sophisticated knowledge, respectively. Great. A set of synonyms. It is proficiency scales that describe these depths with respect to specific learning standards; they answer “No, really, what does Emerging, or initial, knowledge of operations with fractions look like?” Populating each category with examples of questions can help students–and teachers–make sense of the descriptors.
#### Positive
Most scales or rubrics are single-point posing as four. Their authors describe Proficient, that’s it. The text for Proficient is copied and pasted to the Emerging and Developing (or Novice and Apprentice) columns. Then, words such as support, some, and seldom are added. Errors, minor (Developing) and major (Emerging), too. These phrases convey to students how they come up short of Proficient; they do not tell students what they know and can do at the Emerging and Developing levels.
BC’s Ministry of Education uses this phrase to describe profiles of core competencies: “[Profiles] are progressive and additive, and they emphasize the concept of expanding and growing. As students move through the profiles, they maintain and enhance competencies from previous profiles while developing new skills.”
I have borrowed this idea and applied it to content learning standards. It was foreshadowed by the graphic organizer at the end of my previous post: Extending contains Proficient, Proficient contains Developing, and Developing contains Emerging. (Peter Liljedahl calls this backward compatible.) For example, if a student can determine whole number percents of a number (Proficient), then it is assumed that they can also determine benchmark percents (i.e., 50%, 10%) of a number (Emerging). A move from Emerging to Proficient reflects new, more complex, knowledge, not greater independence or fewer mistakes. Students level up against a learning standard.
## Emerging and Extending
The meanings of two levels–Emerging to the left and Extending to the right–are open to debate. Emerging is ambiguous, Extending less so. Some interpretations of Extending require rethinking.
#### Emerging
“Is Emerging a pass?” Some see Emerging as a minimal pass; others interpret “initial understanding” as not yet passing. The MoE equivocates: “Every student needs to find a place on the scale. As such, the Emerging indicator includes both students at the lower end of grade level expectations, as well as those before grade level expectations. […] Students who are not yet passing a given course or learning area can be placed in the Emerging category.” Before teachers can construct proficiency scales that describe Emerging performance, they must land on a meaning of Emerging for themselves. This decision impacts, in turn, the third practice of a standards-based approach, designing assessment items.
#### Extending
A flawed framing of Extending persists: above and beyond. Above and beyond can refer to a teacher’s expectations. The result: I-know-it-when-I-see-it rubrics. “Wow me!” isn’t descriptive.
Above and beyond can also refer to a student’s grade level. Take a closer look at the MoE’s definition of Extending: “The student demonstrates a sophisticated understanding of the concepts and competencies relevant to the expected learning [emphasis added].” It is Math 6 standards, not Math 8 standards, that set forth the expected learning in Math 6. When reaching a decision about proficiency in relation to a Math 6 outcome, it is unreasonable–and nonsensical–to expect knowledge of Math 8 content.
Characterizing Extending as I can teach others is also problematic. Explaining does not ensure depth; it doesn’t raise a complete understanding of a concept to a sophisticated understanding. Further, I can teach others is not limited to one level. A student may teach others at a basic complexity level. For example, a student demonstrates an initial understanding of add and subtract fractions when they explain how to add proper fractions with the same denominator.
## Example: Systems of Linear Equations
In my previous post, I delineated systems of linear equations as solve graphically, solve algebraically, and model and solve contextual problems. Below, I will construct a proficiency scale for each subtopic.
Note that I’ve attached specific questions to my descriptors. My text makes sense to me; it needs to make sense to students. Linear, systems, model, slope-intercept form, general form, substitution, elimination–all of these terms are clear to teachers but may be hazy to the intended audience. (Both logarithmic and sinusoidal appear alongside panendermic and ambifacient in the description of the turbo-encabulator. Substitute nofer trunnions for trigonometric identities in your Math 12 course outline and see if a student calls you on it on Day 1.) The sample questions help students understand the proficiency scales: “Oh yeah, I got this!”
Some of these terms may not make sense to my colleagues. Combination, parts-whole, catch-up, and mixture are my made-up categories of applications of systems. Tees and hoodies are representative of hamburgers and hot dogs or number of wafers and layers of stuf. Adult and child tickets can be swapped out for dimes and quarters or movie sales and rentals. The total cost of a gas vehicle surpassing that of an electric vehicle is similar to the total cost of one gym membership or (dated) cell phone plan overtaking another. Of course, runner, racing car and candle problems fall into the catch-up category, too. Textbooks are chock full o’ mixed nut, alloy, and investment problems. I can’t list every context that students might come across; I can ask “What does this remind you of?”
My descriptors are positive; they describe what students know, not what they don’t know, at each level. They are progressive and additive. Take a moment to look at my solve-by-elimination questions. They are akin to adding and subtracting quarters and quarters, then halves and quarters, then quarters and thirds (or fifths and eighths) in Math 8. Knowing $\frac{8}{3} - \frac{5}{4}$ implies knowing $\frac{7}{4} - \frac{3}{4}$.
Emerging is always the most difficult category for me to describe. My Emerging, like the Ministry’s, includes not yet passing. I would welcome your feedback!
Describing the Extending category can be challenging, too. I’m happy with my solve graphically description and questions. I often lean on create–or create alongside constraints–for this level. I’m leery of verb taxonomies; these pyramids and wheels can oversimplify complexity levels. Go backwards might be better. Open Middle problems populate my Extending columns across all grades and topics.
My solve algebraically… am I assessing content (i.e., systems of linear equations) or competency (i.e., “Explain and justify mathematical ideas and decisions”)? By the way, selecting and defending an approach is behind my choice to not split (👋, Marc!) substitution and elimination. I want to emphasize similarities among methods that derive equivalent systems versus differences between step-by-step procedures. I want to bring in procedural fluency:
Procedural fluency is the ability to apply procedures accurately, efficiently, and flexibly; to transfer procedures to different problems and contexts; to build or modify procedures from other procedures; and to recognize when one strategy or procedure is more appropriate to apply than another
NCTM
But have I narrowed procedural fluency to one level?
$\frac{x}{3} + \frac{y}{2} = 3$
$\frac{x+3}{2} + \frac{y+1}{5} = 4$?
Note that my model and solve contextual problems is described at all levels. Apply does not guarantee depth of knowledge. Separating problem solving–and listing it last–might suggest that problem solving follows building substitution and elimination methods. It doesn’t. They are interweaved. To see my problem-based approach, watch my Systems of Linear Equations videos from Surrey School’s video series for parents.
Next up, designing assessment items… and constructing proficiency scales has done a lot of the heavy lifting!
## [SBA] Writing Learning Standards
For several years, standards-based assessment (SBA) has been the focus of much of my work with Surrey teachers. Simply put, SBA connects evidence of student learning with learning standards (e.g., “use ratios and rates to make comparisons between quantities”) rather than events (“Quiz 2.3”). The change from gathering points to gathering data represents a paradigm shift.
In this traditional system, experience has trained students to play the game of school. Schools dangle the carrot (the academic grade) in front of their faces and encourage students to chase it. With these practices, schools have created a culture of compliance. Becoming standards based is about changing to a culture of learning. “Complete this assignment to get these points” changes to “Complete this assignment to improve your learning.” […] Educators have trained learners to focus on the academic grade; they can coach them out of this assumption.
Schimmer et al., 2018, p. 12
In this series, I’ll describe four practices of a standards-based approach:
1. Writing Learning Standards
2. Constructing Proficiency Scales
3. Designing Assessment Items
## Writing Learning Standards
In BC, content learning standards describe what students know and curricular competency learning standards describe what students can do. Describe is generous–more like list. In any mathematical experience a student might “bump into” both content and competency learning standards. Consider Nat Banting’s Quadratic Functions Menu Math task:
You could build ten different quadratic functions to satisfy these ten different constraints.
Instead, build a set of as few quadratic functions as possible to satisfy each constraint at least once. Write your functions in the form y = a(x − p)2 + q.
Which constraints pair nicely? Which constraints cannot be paired?
Is it possible to satisfy all ten constraints using four, three, or two functions?
Describe how and why you built each function. Be sure to identify which functions satisfy which constraints.
Students activate their knowledge of quadratic functions. In addition, they engage in several curricular competencies: “analyze and apply mathematical ideas using reason” and “explain and justify mathematical ideas and decisions,” among others. Since the two are interwoven, combining competencies and content (i.e., “reason about characteristics of quadratic functions”) is natural when thinking about a task as a learning activity. However, from an assessment standpoint, it might be helpful to separate the two. In this series, I will focus on assessing content.
The content learning standard quadratic functions and equations is too broad to inform learning. Quadratic functions–nevermind functions and equations–is still too big. A student might demonstrate Extending knowledge of quadratic functions in the form y = a(x − p)2 + q but Emerging knowledge of completing the square, attain Proficient when graphing parabolas but Developing when writing equations.
Operations with fractions names an entire unit in Mathematics 8. Such standards need to be divided into subtopics, or outcomes. For example, operations with fractions might become:
2. multiply and divide fractions
3. evaluate expressions with two or more operations on fractions
4. solve contextual problems involving fractions
Teachers can get carried away breaking down learning standards, differentiating proper from improper fractions, same from different denominators, and so on. These differences point to proficiency levels, not new outcomes. Having too many subtopics risks atomizing curriculum. Further, having as many standards as days in the course is incompatible with gathering data over time. I aim for two to four (content) outcomes per unit.
In Foundations of Mathematics and Pre-calculus 10, systems of linear equations can be delineated as:
1. solve graphically
2. solve algebraically
3. model and solve contextual problems
My solve algebraically includes both substitution and elimination. Some of my colleagues object to this. No worries, separate them.
In my next post, I’ll describe constructing proficiency scales to differentiate complexity levels within these learning standards. Here’s a sneak peek:
What do you notice?
## Alike & Different: Which One Doesn’t Belong? & More
I have no idea what I was going for here:
At that time, I was creating Which One Doesn’t Belong? sets. Cuisenaire rods didn’t make the cut. Nor did hundreds/hundredths grids:
I probably painted myself into a corner. Adding a fourth shape/graph/number/etc. to a set often knocks down the reason why one of the other three doesn’t belong. Not all two-by-two arrays make good WODB? sets (i.e., a mathematical property that sets each element apart).
Still, there are similarities and differences among the four numbers above that are worth talking about. For example, the top right and bottom right are close to 100 (or 1); the top left and bottom right are greater than 100 (or 1); top left and top right have seven parts, or rods, of tens (or tenths); all involve seven parts in some way. There is an assumed answer to the question, “Which one is 1?,” in these noticings — a flat is 100 if we’re talking whole numbers and 1 if we’re talking decimals. But what if 1 is a flat in the top left and a rod in the bottom left? Now both represent 1.7. (This flexibility was front and centre in my mind when I created this set. The ten-frame sets, too.)
Last spring, Marc and I offered a series of workshops on instructional routines. “Alike and Different: Which One Doesn’t Belong? and More” was one of them. WODB? was a big part of this but the bigger theme was same and different (and justifying, communicating, arguing, etc.).
So rather than scrap the hundreds/hundredths grids, I can simplify them:
Another that elicits equivalent fractions and place value:
For more, see Brian Bushart’s Same or Different?, another single-serving #MTBoS (“Math-Twitter-Blog-o-Sphere”) site.
Another question that I like — from Marian Small — is “Which two __________ are most alike?” I like it because the focus is on sameness and, like WODB?, students must make and defend a decision. Also, this “solves” my painted-into-a-corner problem; there are three, not six, relationships between elements to consider.
The numbers in the left and right images are less than 100 (if a dot is 1); the numbers in the centre and right can be expressed with 3 in the tens place; the left and centre image can both represent 43, depending on how we define 1.
At the 2017 Northwest Mathematics Conference in Portland, my session was on operations across the grades. The big idea that ran through the workshop:
“The operations of addition, subtraction, multiplication, and division hold the same fundamental meanings no matter the domain in which they are applied.”
– Marian Small
That big idea underlies the following slide:
At first glance, the second and third are most alike: because decimals. But the quotient in both the first and second is 20; in fact, if we multiply both 6 and 0.3 by 10 in the second, we get the first. The first and third involve a partitive (or sharing) interpretation of division: 3 groups, not groups of 3. (Likely. Context can determine meaning. My claim here is that for each of these two purposefully crafted combinations of naked numbers, division as sharing is the more intuitive meaning.)
Similar connections can be made here:
This time, the first and second involve a quotative (or measurement) interpretation of division: groups of (−3) or 3x, not (−3) or 3x groups. (What’s the reason for the second and third? Maybe this isn’t a good “Which two are most alike?”?)
I created a few more of these in the style of Brian’s Same or Different?, including several variations on 5 − 2.
Note: this doesn’t work in classrooms where the focus is on “just invert and multiply” (or butterflies or “keep-change-change” or…).
And I still have no idea what I was going for with the Cuisenaire rods.
The slides:
An edited version of this post appeared in Vector.
## Dividing by Decimals & Fractions: Ham & Ribs
I bought a ham. It was touch-and-go there for awhile. As I was picking up and putting down hams of various sizes, I was calculating baking times. My essential question was, can I have this on the table by six? Simultaneously, I was trying to remember if this was partitive or quotative division.
In partitive division problems, a.k.a division as (fair) sharing, the number of groups is known. This type of problem asks how many are in each group. In quotative division problems, a.k.a. division as measurement, the number in each group is known. This type of problem asks how many groups. For example: 6 ÷ 3 = 2 (partitive) means ♦♦ ♦♦ ♦♦; 6 ÷ 3 = 2 (quotative) means ♦♦♦ ♦♦♦. This distinction isn’t limited to collections of objects. Consider 6 ÷ 3 as cutting a 6 m rope into 3 parts (sharing) vs. cutting lengths of 3 m (measurement). Nor are these meanings limited to whole numbers. Which brings me back to my ham…
The directions read “bake approximately 15 minutes per pound (0.454 kg) or until internal temperature reaches whatever.” But here’s the thing:
Kilograms, not pounds. I could have converted from kilograms to pounds by doubling then adding ten percent of that. Instead, I divided 1.214 by 0.454. I know, I know, this still gives me the weight of my ham in pounds. But at the time, I interpreted 2.67 as the number of repeated additions of 15 minutes in my baking time. Either way, I determined how many 0.454s there are in 1.214. Quotative division. By a decimal.
As a math task, this is clunky. The picture book How Much Does a Ladybug Weigh? by Alison Limentani is a more promising jumping off point for quotative division in the classroom. On each page, the weight of one animal is expressed in terms of a smaller animal.
Using the data at the back of the book, we have 3.2 ÷ 0.53 = 6. We could ask children to make other comparisons (e.g., how many grasshoppers weigh the same as one garden snail?).
In the past, I have struggled with partitive division by decimals (or fractions). But I found the following example at The Fair this summer:
It’s not intuitive–at least to me–to think of 1/3 in 12 ÷ 1/3 as the number of groups. Take a step back and think about 26 ÷ 1 = 26. The cost, $26, is shared between 1 rack of ribs; the quotient represents the unit price,$26/rack, if the unit is a rack. This result should be… underwhelming.
Before we think about dividing by a fraction here, let’s imagine dividing by a whole number (not equal to one). What if I paid $72 for 3 racks? (Don’t look for these numbers in the photo above–I’m making them up.) In 72 ÷ 3 = 24, the cost,$72, is shared between the number of racks, 3; again, the quotient represents the unit price, $24/rack. Partitive division. So what about 12 ÷ 1/3? The cost is still distributed across the number of racks; once again, the quotient represents the unit price,$36/(full) rack. The underlying relationship between dividend, divisor, and quotient hasn’t changed because of a fraction; the fundamental meaning (partitive division) remains the same.
We could have solved this problem by asking a parallel question, how many 1/3s in 12? And this quotative interpretation makes sense with naked numbers. But it falls apart in this context–how many 1/3 racks in 12 dollars? Units, man! If dollars were racks, a quotative interpretation would make sense–how many 1/3 racks in 12 full racks?
As a math task, this, too, is clunky. My favourite math tasks for partitive division by fractions are still Andrew Stadel’s estimation jams.
(Looking for a quotative division problem that involves whole numbers? See Graham Fletcher’s Seesaw three-act math task. For partitive, there’s Bean Thirteen.)
## Fair Share Pair
A couple weeks ago, I was discussing ratio tasks, including Sharing Costs: Travelling to School from MARS, with a colleague who reminded me of a numeracy task from Peter Liljedahl. Here’s my take on Peter’s Payless problem:
Three friends, Chris, Jeff, and Marc, go shopping for shoes. The store is having a buy two pairs, get one pair free sale.
Chris opts for a pair of high tops for $75, Jeff picks out a pair of low tops for$60, and Marc settles on a pair of slip-ons for $45. The cashier rings them up; the bill is$135.
How much should each friend pay? Try to find the fairest way possible. Justify your reasoning.
Sharing Pairs.pdf
I had a chance to test drive this task in a Math 9 class. I asked students to solve the problem in small groups and record their possible solutions on large whiteboards. Later, each student recorded his or her fairest share of them all on a piece of paper. If you’re more interested in sample student responses than my reflections, scroll down.
The most common initial approach was to divide the bill by three; each person pays $45. What’s more fair than same? I poked holes in their reasoning: “Is it fair for Marc to pay the same as Chris? Why? Why not?” Students notice that Chris is getting more shoe for his buck. Also, Marc is being cheated of any discount, as described by Student A. (This wasn’t a happy accident; it’s the reason why I chose the ratio 5:4:3.) Next, most groups landed on$60-$45-$30. Some, like Student A, shifted from equal shares of the cost to equal shares of the discount; from ($180 −$45)/3 to $45/3. Others, like Students B, C, and D, arrived there via a common difference; in both$75, $60,$45 and $60,$45, $30, the amounts differ by$15. This approach surprised me. Additive, rather than multiplicative, thinking.
Student C noticed that this discount of $15 represented different fractions of the original prices;$15/$75 = 1/5,$15/$60 = 1/4,$15/$45 = 1/3. He applied a discount of 1/4 to all three because “it’s the middle fraction.” Likely, this is a misconception that didn’t get in the way of a reasonable solution. Student D presented similar amounts. Note the interplay of additive and multiplicative thinking. She wants to keep a common difference, but changes it to$10 to better match the friends’ discounts as percents.
Student E applies each friend’s percent of the original price to the sale price. This approach came closest to my intended learning outcome: “Solve problems that involve rates, ratios and proportional reasoning.”
In spite of not reaching my learning goal, I think that this lesson was a success. The task was accessible yet challenging, allowed students to make and justify decisions, and promoted mathematical discourse.
Still, to increase the future likelihood that students solve this problem using ratios, I’m wondering about changes I could make. Multiples of 20 ($100-$80-$60) rather than 15 ($75-$60-$45)? Different ratios, like 4:3:2 or 5:3:2, might help; the doubles/halves could kickstart multiplicative thinking. (Also, 5:3:2 breaks that arithmetic sequence.)
Or, I could make changes to my questioning.
When I asked “What do you notice?” students said:
• the prices of the shoes are different
• Chris’ shoes are the most expensive
• Marc’s shoes are the cheapest
• Chris’ shoes are $15 more than Jeff’s, which are$15 more than Marc’s
• Jeff’s shoes are the fugliest
Maybe I could ask “What else could you say about the prices of Chris’ shoes compared to Marc’s?” etc. to prompt comparisons involving ratios. If that fails, I’m more comfortable connecting ratios to the approaches taken by students themselves than I am forcing it.
BTW, “buy one, get one 50% off” vs. “buy two, get one free” would make a decent “Would you rather?” math task.
h/t Cam Joyce, Carley Brockway
## [TMWYK] Aero Bubble Bar
Recently, Nestlé launched the new AERO bubble bar throughout Canada and the UK.
For the benefit of the American readership:
From the press release:
As well as offering a unique bar design, guaranteed to stand out from the crowd, AERO’s innovation isn’t just for show. The new design sees the bar divided into ten easily snappable ‘bubbles’, making it less messy to eat and more portionable. What’s more, each of the ten ‘bubbles’ are designed to melt more easily in the mouth, maximising the taste of AERO’s signature bubbly chocolate.
I brought one home a couple weeks ago. I put the bar’s portionability to the test.
I snapped off two bubbles each for Keira (5), Gwyneth (8), and Marnie (N/A). Plus, two for me. (Missed math teacher opportunity, I know.) Two pieces were left over. “How much more should we each get?” I asked.
“Half,” Keira answered. She told me to make two cuts: two becomes four, or n(Keira’s family). For shits and giggles, we played with different cuts. What I learned from Keira:
“Or two-quarters,” Gwyneth piped up.
“Huh?” I returned, caught off-guard. “Tell me more,” I recovered. Gwyneth told me to cut each of the two bubbles into four quarters, giving us eight quarters. Eight pieces can be shared equally between four people. Each of us should get two pieces, or two-quarters.
Gwyneth’s strategy–divide each piece into fourths rather than make four pieces in all like her sister–surprised me. It’s a strategy that makes sense to her: dividing each piece into fourths means she’ll be able to form four equal groups. It’s a strategy that’s flexible: I don’t think she’ll be fazed by a curveball, like an additional bubble or family member.
Symbolically, we have:
The result is trivial; her thinking is not.
For more math talk with kids, please follow Christopher Danielson’s new blog.
## More Decimals and Ten-Frames
What number is this?
123? 12.3? 1.23? One has to ask oneself one question: Which one is one?
Earlier this year, I was invited into a classroom to introduce decimals. We had been representing and describing tenths concretely, pictorially, and symbolically. We finished five minutes short, so I gave the students a blank hundred-frame and asked them to show me one half and express this in as many ways as they could.
As expected, some expressed this as 5/10 and 0.5. They used five of the ten full ten-frames it takes to cover an entire hundred-frame. Others expressed this as 50/100 and 0.50. They covered the blank hundred-frame with fifty dots. I was listening for these answers.
One student expressed this as 2/4. I assumed he just multiplied both the numerator and denominator of 1/2 by 2. And then he showed me this:
One student expressed this as 500/1000 and 0.500. I assumed he was just extending the pattern(s). “Yeahbut where do you see the 500 and 1000?” I asked challenged. “I imagine that inside every one of these *points to a dot* there is one of these *holds up a full ten-frame*,” he explained. As his teacher and I listened to his ideas, our jaws hit the floor.
In my previous post, I discussed fractions, decimals, place value, and language. To come full circle, what if we took a closer look at 0.5, 0.50, and 0.500? These are equivalent decimals. That is, they represent equivalent fractions: “five tenths,” “fifty hundreds,” “five hundred thousandths,” respectively. From a place-value-on-the-left-of-the-decimal-point point of view, 0.5 is five tenths; 0.50 is five tenths and zero hundredths; 0.500 is five tenths, zero hundredths, zero thousandths. Equal, right?
Hat Tip: Max Ray‘s inductive proof of Why 2 > 4
Recently, I was invited into three Grade 3/4 classrooms to introduce fractions.
Cuisenaire rods give children hands-on ways to explore the meaning of fractions. After students built their towers, flowers, and robots, I asked, “If the orange rod is the whole, which rod is one half?” Students explained their thinking: “two yellows make an orange.” I emphasized, or rather, students emphasized that the two parts must be equal.
I asked students to find as many pairs as they could that showed one half. I let ’em go and they built and recorded the following:
Once more, with one third:
As children shared their pairs, we discussed the big ideas:
• the denominator tells how many equal parts make the whole (e.g., two purple rods make one brown rod, three light green rods make one blue rod)
• the same fraction can describe different pairs of quantities (e.g., one half can be represented using five different pairs, one third can be represented using three different pairs)
• the same quantity can be used to represent different fractions (e.g., white is one half of red and one third of light green, red is one half of purple and one third of dark green, etc.)
Something interesting and outside the lesson plan happened in each of these three classrooms.
Some students described each pair of rods using equivalent fractions (e.g., 1/2, 2/4, 4/8):
I asked the “we’re done” students to represent their own fractions using pairs of rods and determine each other’s mystery fraction. Many students chose fractions like 2/5 or 3/4, not simply unit fractions:
After students shared the three pairs of rods for one third, I asked if anyone found any more. “I did,” said one student, unexpectedly. Check this out:
I asked her why she chose to combine an orange rod and a red rod to make the whole. She explained that twelve can be divided into three equal parts. Without prompting, the rest of the class starting building these:
## Marriage Problem
Last week, we wrapped up our winter sessions with over 50 elementary school math teams. Part of these sessions are devoted to having teachers work together to solve problems. Having teachers “do the math” helps brings meaning to important topics in mathematics education. We gave the following problem, from Van de Walle:
In a particular small town, 2/3 of the men are married to 3/5 of the women. What fraction of the entire population are married?
This is a challenging problem, but only because traditional algorithms get in the way of sense-making methods. The gut reaction is to do something with common denominators. Time after time, with each group, primary and intermediate. Through questioning, the mistake can be recognized.
“In this context, what does the 15 over here represent?” [points to 10/15]
“The total number of men.”
“And over here?” [points to 9/15]
“The total number of wom–OOOOOh…”
Sometimes, it takes longer to reach an ‘OOOOOh’:
“What does the 10 represent?”
“The number of married men.”
“And the 9?”
“The number of married wom–OOOOOh…”
Once teachers realize that having 10 men married to 9 women is somewhat problematic, most model the problem using colour tiles. Two out of three men being married becomes four out of six and six out of nine. Three out of five women being married is equivalent to six out of ten. Six pairs of husbands and wives can be formed. We have 12 out of 19 people being married.
Others think logically to solve the problem. The number of husbands must equal the number of wives. The number of husbands and wives are represented by the numerators. Therefore, the numerators must be made equal. With all due respect to Dr. Math, it just makes sense.
The use of manipulatives to construct meaning continues to be a focus of teachers involved in the numeracy project, both for themselves and for their students. Long before I became involved in this project, my fellow Numeracy Helping Teachers (Marc Garneau, Selina Millar, Sandra Ball, and Shelagh Lim) worked tirelessly to set a climate in which teachers and students felt comfortable using a variety of manipulatives.
At these sessions, we present teachers with problems, not practice. It’s a pleasure to work with such an amazing group of educators so willing to explore, take risks, and persevere. But as much fun as these sessions with teachers have been, I’m looking forward to the real fun: problem-solving with their students.
## Update (2020/01/18)
A much more inclusive context!
## The first step in adding fractions is to find a common numerator.
“Okay, listen up! Today’s lesson will be on adding fractions. Let’s start with an easy one like 1/3 + 1/6. The first step is to find a common numerator, which, in this example, we already have. This becomes the numerator of the sum so let’s write a 1 up there. The denominator is, of course, itself a fraction whose numerator is the product of the denominators and whose denominator is the sum of the denominators. This gives us 1/(18/9), or 1/2.
Let’s kick it up a notch and try 2/3 + 1/4. Remember, the first step is to find the lowest common numerator, or LCN. You guys look a little puzzled. You remember learning this in grade 7, right? Since the LCN is 2, we have 2/3 + 2/8. Write a 2 up top. To determine the denominator, simply multiply and add to get 24/11. We have 2/(24/11). This is a tricky one since 24/11 doesn’t reduce nicely. Multiplying the common numerator by the denominator of the denominator gives us 22/24. One more thing… if you don’t reduce to lowest terms, I’ll have to deduct half a mark. 22/24 should be written as 11/12. I’ve typed up some notes. Take one sheet and pass the rest back.”
Christopher Danielson over at OMT shared the method above with me earlier this year. Recently, I presented it to a group of secondary math teachers. Christopher’s algorithm brilliantly initiates conversation about what is important in teaching and learning mathematics. For example, one teacher said “It works. I can prove that it works. But, it doesn’t make sense.” Another asked “It’s quick and easy, but does that matter?”
I think Christopher (@Trianglemancsd) plays it straight when he shows his algorithm to pre-service teachers. I couldn’t pull this off – more of a tongue-in-cheek thing for me. This elicited some (nervous?) laughter as teachers put themselves in the role of their students learning about LCD’s.
This segued to activities that do build conceptual understanding of fraction operations. We looked at:
• using an area model to represent multiplication,
• using pattern blocks to explore quotative division, and
• using a common denominator to divide fractions.
These last two are connected… more on this later. | 2022-06-29 22:54:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 4, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5445998907089233, "perplexity": 3068.1455429934804}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103645173.39/warc/CC-MAIN-20220629211420-20220630001420-00477.warc.gz"} |
https://datascience.stackexchange.com/questions/12738/is-there-an-r-package-which-uses-neural-networks-to-explicitly-model-count-data?noredirect=1 | # Is there an R package which uses neural networks to explicitly model count data?
Ripley's nnet package, for example, allows you to model count data using a multi nomial setting but is there a package which preserves the complete information relating to a count? For example, whereas an ordinal multinomial model preserves the ordering of the integers that make up the count, a fully developed model of count data as a GLM such as Poisson or Negative Binomial Regression includes how large the integer counts are in relation to each other.
Another phrasing might be, 'What kind of models come closest to combining the advantages of neural networks, in terms of, as an example, easily modelling non-linearity in the predictors, and count data GLMs, which are good at taking into account that the data is in fact a count?'
• What do you mean by complete information in this case? – Jan van der Vegt Jul 14 '16 at 7:22
• Hi Jan, I have made an attempt at more completely explaining what I mean. – Robert de Graaf Jul 15 '16 at 6:41
• Then I did understand it correctly, I will write a half answer now – Jan van der Vegt Jul 15 '16 at 6:44
• Would u please share your answer with me? – Aki Apr 5 '17 at 2:19
$$E = -\sum_{n=1}^N[-t_n + y_nlog(t_n)]$$ | 2020-08-07 21:51:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6084799766540527, "perplexity": 818.9031343062404}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-34/segments/1596439737225.57/warc/CC-MAIN-20200807202502-20200807232502-00496.warc.gz"} |
https://xarray-simlab.readthedocs.io/en/stable/_api_generated/xsimlab.global_ref.html | # xsimlab.global_ref¶
xsimlab.global_ref(name, intent='in')
Create a reference to a variable that is defined somewhere else in a model with a unique, global name.
Unlike foreign(), the original variable is not known until the model is created (implicit reference). This may be a good alternative if explicit references are tricky and if standard names exist.
Parameters
• name (str) – The global name of the variable. Must refer to a unique variable model-wise.
• intent ({'in', 'out'}, optional) – Defines whether the variable is an input (i.e., the process needs the variable’s value for its computation) or an output (i.e., the process computes a value for the variable). Default: input. Intent ‘inout’ is not supported here. | 2021-09-24 14:51:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.21162571012973785, "perplexity": 2015.9551986997217}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780057558.23/warc/CC-MAIN-20210924140738-20210924170738-00401.warc.gz"} |
https://physics.stackexchange.com/questions/508975/understanding-quantum-mechanics | # Understanding quantum mechanics [closed]
Forgive me for this dumb question but what are matter waves of particles? are they particles being spread out in a space like waves or the particles are still "particles" but matter waves are probability waves? and if the particle is actually spread out in space like a wave then why does this page from wikipedia about string theory says that strings replace "point-like particles"? https://en.wikipedia.org/wiki/String_theory
## closed as too broad by Aaron Stevens, Qmechanic♦Oct 19 at 1:16
Please edit the question to limit it to a specific problem with enough detail to identify an adequate answer. Avoid asking multiple distinct questions at once. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.
Quantum mechanics is applicable in the regime of extremely small (when length scale ~ $$h/p$$). So lets talk about small particles. For such particles, something strange happens:
1. when they are detected they behave as if they are particles in the conventional sense: that is they have an exact value for their energy and momenta. Imagine it this way: when the particle's position is detected say at a screen of some detector, its hits it at just one tiny spot.
2. strangely, if the detection experiment were to be repeated, the energy momentum values are found to be different. They are nevertheless exact this time too.
(by repeated, we mean detection performed by an ensemble of identically prepared systems)
Therefore, there is a probability distribution associated with the energy-momentum of the particle. This is true for any observable. When the observable is position, the associated probability density is called a matter wave.
The matter wave is unlike any notion of wave that you may have. It is not
1. matter of the particle smeared out in space in the form a standing or propagating wave
2. a wave in the probability distribution of the observable
3. a wave in the form of which the original particle travels.
All a particle's associated matter wave represents is the probability of detection of that particle at different points in space. At every detection, the particle exhibits particulate properties. This is the particle nature. However since it doesn't seem to be able decide upon one value of its position, its as if its spread out. This is the wave nature. This is wave-matter duality.
Why can't we say the particle is actually spread out like a wave i.e it travels like a wave? Because it is not experimentally possible to detect the form in which a particle travels-to do so requires detection and at the moment of detection, each particle appears particulate in nature.
What is the wave in the "matter wave"? I am not completely sure. The particle's wave functions (whose mod square gives he probability density) is in general imaginary though it does sometimes appear as if its spreading in space or moving in space with time.. like a pulse expanding. But its nothing like what you associate waves with--sound, water eaves, EM fields etc.
• -1: "[W]hen they are detected they behave as if they are particles in the conventional sense: that is they have an exact value for their energy and momenta. Imagine it this way: when the particle is detected say at a screen of some detector, its hits at just one tiny spot." This is wrong in multiple ways: 1. Whether a particle can be measured to have an exact value for their energy and momenta depends on the Hamiltonian. If the Hamiltonian doesn't commute with the momentum operators, you can't measure both energy and momenta at the same time. [...] – Dvij Mankad Oct 19 at 2:26
• [...] 2. Whether a particle will have a precise momentum or not when you "detect" the particle depends entirely on your mechanism of detecting. If you are measuring the momentum then it would have a precise momentum, if you are measuring the position, it would have a precise position. Both are legitimate ways of detection. 3. A way to think about a particle measured to have a precise momentum is not to imagine it having a localized position like hitting at just one tiny spot. It is the opposite. A particle with a definite momentum would not have a precise position at all. – Dvij Mankad Oct 19 at 2:29
• Probability density is a real, not imaginary, quantity. Matter waves are complex waves of probability amplitude whose modulus squared is the probability density. – G. Smith Oct 19 at 3:13
• @DvijMankad For your point 3, doesn't applying that reasoning only apply for the distribution of measurements of similarly prepared quantum systems? i.e. if we obtain a smaller spread in momentum measurements we would expect a larger spread in position measurements? – Aaron Stevens Oct 19 at 4:24
• @G Smith. Thank you. I have corrected the answer. – lineage Oct 19 at 13:18 | 2019-11-17 05:43:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7500169277191162, "perplexity": 650.3196061879196}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668787.19/warc/CC-MAIN-20191117041351-20191117065351-00201.warc.gz"} |
https://study.com/academy/answer/find-the-indefinite-integral-and-check-the-result-by-differentiation-use-c-for-the-constant-of-integration-integral-8-sin-x-3-e-x-dx.html | # Find the indefinite integral and check the result by differentiation. (Use C for the constant of...
## Question:
Find the indefinite integral and check the result by differentiation. (Use C for the constant of integration.)
{eq}\displaystyle \int (8 \sin x - 3 e^x)\ dx {/eq}.
{eq}\int ( \sin x)\ dx=-cosx\\ \int( e^x)\ dx=e^x\\ \frac{d(cosx)}{dx}=-sinx\\ \frac{d(e^x)}{dx}=e^x\\ {/eq}
{eq}\text{We have to integrate }\int (8 \sin x - 3 e^x)\ dx {/eq}
$$\int (8 \sin x - 3 e^x)\ dx\\ \text{Apply linearity}\\ 8\int ( \sin x)\ dx - 3\int( e^x)\ dx\\ \text{Now solving}\\ 8\int ( \sin x)\ dx\\ \text{This is standard integral}\\ -8cosx\\ \text{Now solving}\\ 3\int( e^x)\ dx\\ \text{This is standard integral}\\ 3e^x\\ \text{Hence, }\int (8 \sin x - 3 e^x)\ dx\\=-8cosx-3e^x+c$$
{eq}\text{For checking our integration we have to differentiate }-8cosx-3e^x+c {/eq}
$$\frac{d(-8cosx-3e^x+c)}{dx}\\ \text{we know that }\frac{d(cosx)}{dx}=-sinx\;\;and\;\;\frac{d(e^x)}{dx}=e^x\\ \Rightarrow-8(-sinx)-3e^x\\ \Rightarrow8sinx-3e^x\\ \text{Here we get the expression we integrated. Hence, the result is checked}$$ | 2019-08-21 00:56:25 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9972856640815735, "perplexity": 5602.615473063554}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315695.36/warc/CC-MAIN-20190821001802-20190821023802-00231.warc.gz"} |
http://www.hpmuseum.org/forum/thread-9610-page-2.html | HP calcs are really not that accurate..
12-02-2017, 12:40 PM
Post: #21
toml_12953 Senior Member Posts: 935 Joined: Dec 2013
RE: HP calcs are really not that accurate..
(12-02-2017 10:56 AM)Gerald H Wrote: Aristotle? He claimed women have a different number of teeth from men. Discredited.
I'm reminded of Robert Recorde:
I couldn't have said it better myself (in old English!)
Tom L
Ducator meus nihil agit sine lagunculae leynidae accedunt
12-02-2017, 02:50 PM
Post: #22
rprosperi Senior Member Posts: 3,028 Joined: Dec 2013
RE: HP calcs are really not that accurate..
(12-02-2017 10:37 AM)DA74254 Wrote: Thanks again, and to emphasize; it was not my intension to step on anyones toes.
You can learn a lot here, depending on how you ask a question, if you have thick skin....
You can pose the same question in different ways, for example:
1. "How does one do XYZ?"
or this way:
2. "There is no question that the best way to do XYZ is [insert simple/dumb explanation here], right?"
and you will get vastly different responses. The former is likely to get fewer simple answers, including the universal (and generally appropriate) "RTFM", while the latter will provide many more elaborate and insightful answers, often leading to a series of competing solutions, each successively improving on the last.
But you may take a few small bruises in the process... Of course they heal, and you've learned a lot in the process.
@Pauli - Regarding "256 digits are far more than you'd need to represent the diameter of the universe in terms of the planck length"
Awesome statement, it just captures and states the point better than any other phrase possibly could express it. And it's cool!
And as a bonus, you've no doubt caused a lot of folks to google 'planck length'. Sure, I knew it was small, but not that it is essentially the very definition of smallest possible. Like I said, Cool!
--Bob Prosperi
12-02-2017, 03:07 PM (This post was last modified: 12-02-2017 03:07 PM by pier4r.)
Post: #23
pier4r Senior Member Posts: 1,763 Joined: Nov 2014
RE: HP calcs are really not that accurate..
(12-02-2017 02:50 PM)rprosperi Wrote: 2. "There is no question that the best way to do XYZ is [insert simple/dumb explanation here], right?"
and you will get vastly different responses. The former is likely to get fewer simple answers, including the universal (and generally appropriate) "RTFM", while the latter will provide many more elaborate and insightful answers, often leading to a series of competing solutions, each successively improving on the last.
Almost like the Cunningham's Law
And I agree with the point that every thread can be a source for new input.
Wikis are great, Contribute :)
12-02-2017, 03:47 PM
Post: #24
Thomas Okken Senior Member Posts: 644 Joined: Feb 2014
RE: HP calcs are really not that accurate..
(12-02-2017 02:50 PM)rprosperi Wrote: You can learn a lot here, depending on how you ask a question, if you have thick skin....
You can pose the same question in different ways, for example:
1. "How does one do XYZ?"
or this way:
2. "There is no question that the best way to do XYZ is [insert simple/dumb explanation here], right?"
and you will get vastly different responses. The former is likely to get fewer simple answers, including the universal (and generally appropriate) "RTFM", while the latter will provide many more elaborate and insightful answers, often leading to a series of competing solutions, each successively improving on the last.
I agree that option 1 is generally not the way to go. For me personally, it tends to make me suspect that the person asking the question is looking for someone to do their homework for them.
Whether option 2 is right is a matter of taste. I find that it if you're actually making an effort yourself, it works perfectly fine to simply be honest: "I'm trying to figure out how to do XYZ. The best I've been able to come up with is <insert description of algorithm here>."
Not everyone has a taste for unpleasantness, and people who are genuinely willing to share their knowledge don't need to be prodded into action with provocatively-phrased statements, and might even by turned off by them -- "I'd be happy to help, but this person is being an ass, so f--- them."
12-02-2017, 04:05 PM
Post: #25
rprosperi Senior Member Posts: 3,028 Joined: Dec 2013
RE: HP calcs are really not that accurate..
(12-02-2017 03:47 PM)Thomas Okken Wrote: Whether option 2 is right is a matter of taste. I find that it if you're actually making an effort yourself, it works perfectly fine to simply be honest: "I'm trying to figure out how to do XYZ. The best I've been able to come up with is <insert description of algorithm here>."
Not everyone has a taste for unpleasantness, and people who are genuinely willing to share their knowledge don't need to be prodded into action with provocatively-phrased statements, and might even by turned off by them -- "I'd be happy to help, but this person is being an ass, so f--- them."
Oh, I agree with you, and I'm not advocating using style 2; I guess I was not clear, I'm just commenting on what I've observed over the years.
In fact I've found that the very people whose replies and opinions I seek most, generally will not reply to such taunts, except occasionally to correct another reply.
I think your final phrase captures it well!
There is no doubt that making a sincere effort first, followed by an honest and direct request for help is the way to go here. It always results in lots of replies, and even if the ultimate question isn't answered (sometimes there is no answer) one always learns something new in the process (as do most of the readers).
--Bob Prosperi
12-02-2017, 10:43 PM
Post: #26
Paul Dale Senior Member Posts: 1,412 Joined: Dec 2013
RE: HP calcs are really not that accurate..
(12-02-2017 02:50 PM)rprosperi Wrote: @Pauli - Regarding "256 digits are far more than you'd need to represent the diameter of the universe in terms of the planck length"
Awesome statement, it just captures and states the point better than any other phrase possibly could express it. And it's cool!
I almost went with represent the volume of the universe in terms of. I don't think this is quite so neat but it is even more mind boggling.
As for asking questions, Thomas's variation of approach 2 is my favourite: I've tried, I've got something is there a better way? Being less confrontational can often avoid the need for thick skin.
Pauli
12-02-2017, 11:51 PM
Post: #27
Claudio L. Senior Member Posts: 1,390 Joined: Dec 2013
RE: HP calcs are really not that accurate..
Spoiler alert: shameless plug below. But it's all true nonetheless...
(12-01-2017 09:09 PM)DA74254 Wrote: In my opinion, there should be no reason that we should not have, say, at least 256 digits/decimal points accuracy. That goes for any device capable of doing "2+2".
How about 2000? Search for newRPL.
(12-01-2017 10:37 PM)brickviking Wrote: In addition, while computers have gobs of spare memory and awesome (!) floating point processors, calculators do not. Most calculators are not expected to be connected to the wall just to plain work (or charge their batteries after 9 hours of use), they're expected to work after 6 months (or more!) of use just the same as when the battery was first put in. This requires serious compromises in the choices of CPU or multifunction chip so as to make best use of the limited energy resources available from batteries.
Not completely true - see newRPL. Same CPU, battery and RAM requirements as the 39gs/40gs/50g. May not be perfect but does what the OP suggested (>256 digits) without much compromise.
(12-02-2017 10:37 AM)DA74254 Wrote: I have been lied to, and I don't like it.
Well, some good came out of this. My slight ADHD/ADD, which demands things set square still prefers the "good" answers from the lying calcs, though I myself, upon reading the HP article linked here and the explanations from you, sets things "square" in a better way.
You can have it both ways. If your precision is sufficiently high (as you suggested), but you only display a limited number of digits, it creates the illusion of the perfect answer (2.00000000000000000000000), but if you subtract 2 you'll still see the 1.3E-254 error, which is the "truth" behind the scenes. That will give you the peace of mind that you are not being lied to.
By the way, that number is the actual answer from newRPL with 256 digits setting, doing the 5 iterations of sqrt/sq).
My way of thinking aligns well with yours. If you have enough numerical precision, you don't need to "lie",and you need to worry less about roundoff error propagations after thousands of operations.
I'm not sure I mentioned before in this post, but you should check out newRPL! :-)
PS: My apologies to everybody, I'm really bad at marketing...
12-03-2017, 12:10 AM
Post: #28
Craig Bladow Member Posts: 185 Joined: Apr 2016
RE: HP calcs are really not that accurate..
(12-01-2017 07:49 PM)DA74254 Wrote: I put 2 on the stack and sqrt it 5 times, then squared it back. All my 5 HP's returned 1.99999999979 instead of 2.0. That goes for the Android Pro version as well.
I ran this in NQ41, which uses double precision floating point numbers, subtracting 2 and got the following result.
Code:
Welcome to NQ41 (Not Quite a -41!) Version 0.003 Copyright © 2017 Craig Bladow. All rights reserved. This experimental software is released for the sole purpose of testing and feedback and without warranty of any kind. Input commands and numbers, separated by spaces, and press return. A space or return after a number is the same as the command 'enter'. Use 'exit' to quit and 'catalog 3' for a list of commands. > 2 sqrt sqrt sqrt sqrt sqrt x^2 x^2 x^2 x^2 x^2 2 - x: -3.5527e-15 y:0.0000 z:0.0000 t:0.0000 l:2.0000 >
Check out NQ41!
12-03-2017, 01:19 AM
Post: #29
Sukiari Member Posts: 113 Joined: Dec 2014
RE: HP calcs are really not that accurate..
(12-01-2017 09:09 PM)DA74254 Wrote: there should be no reason that we should not have, say, at least 256 digits/decimal points accuracy
This is bait.
12-03-2017, 01:47 AM
Post: #30
AlexFekken Member Posts: 151 Joined: May 2016
RE: HP calcs are really not that accurate..
Inspired by some other thread (where I looked at the pros and cons [but mainly the cons, the pros getting enough support already] of QPI and PSLQ), I just realised that we can "deal with" some of the issues raised in this thread in a more sophisticated and more defensible way if instead of sneaky extra digits we use (auxiliary, stateless) CAS and functions like QPI and PSLQ behind the scenes.
Would we still be talking about "lying" then, i.e. in particular when the auxiliary CAS does not know the provenance of the floats? Presumably you would need a mechanism to indicate how accurate your floats are...
More generally, how useful, magical, dangerous, perfect, .... would such a CAS-backed numerical calculator be?
12-03-2017, 01:49 AM
Post: #31
SlideRule Senior Member Posts: 496 Joined: Dec 2013
RE: HP calcs are really not that accurate..
Might I suggest
[attachment=5379], [attachment=5378], [attachment=5380], [attachment=5381]
for perusal / edification as well as common reference.
BEST!
SlideRule
12-03-2017, 02:31 AM (This post was last modified: 12-03-2017 02:39 AM by AlexFekken.)
Post: #32
AlexFekken Member Posts: 151 Joined: May 2016
RE: HP calcs are really not that accurate..
(12-03-2017 01:49 AM)SlideRule Wrote: Might I suggest
Sorry, haven't read the books (yet). But I think I did get a decent training in the principles covered, and this prompted me to write up this rant:
My previoius post reminded me of literally the first two basic principles that I learned when I started studying physics (and mathematics) at university in 1976:
1 - a number is totally meaningless if you don't specify its units
2 - a number is also totally meaningless if you don't specify its error margin
Now 1 is usually implied by context, but 2 still seems to be consistently neglected, sometimes even by people who did read the books and should know better (e.g. Feynman wrote that error margins for a number of space shuttle components were clearly reverse engineered from the requirements).
Now back in 1976 we had to rely on custom applications to process data with error margins. But I wonder why this is still the case now. My favourite, somewhat biased :-), answer is:
It is politically incorrect to teach scientifically correct thinking to the masses. It might hurt their brains, or worse, they might pick it up.
But still, should we not have an abundance of (freely available) tools now to do, for example, interval arithmetic? And should these not be the standard in scientific education by now. Clearly, this is much more fundamental and important than e.g. CAS or graphing capabilties.
And then threads like this would not even exist...
12-03-2017, 02:43 AM
Post: #33
brickviking Senior Member Posts: 322 Joined: Dec 2014
RE: HP calcs are really not that accurate..
(12-03-2017 01:19 AM)Sukiari Wrote:
(12-01-2017 09:09 PM)DA74254 Wrote: there should be no reason that we should not have, say, at least 256 digits/decimal points accuracy
This is bait.
Ahh, but such sweet bait. And it had the purpose of extending the conversation even further. That's not always a bad thing…
(Post 140)
Regards, BrickViking
HP-50g |Casio fx-9750G+ |Casio fx-9750GII (SH4a)
12-03-2017, 02:46 AM
Post: #34
SlideRule Senior Member Posts: 496 Joined: Dec 2013
RE: HP calcs are really not that accurate..
(12-03-2017 02:31 AM)AlexFekken Wrote: Sorry, haven't read the books (yet). But I think I did get a decent training in the principles covered, and this prompted me to write up this rant:
I don't consider your expository a rant but rather significantly aligned with my undergraduate studies in Physical Science & Civil Engineering. My only departure is with respect to Numbers versus Measures, but this is a nuance adopted from my professors. Since I seldom venture into PURE MATHEMATICS, I also have difficulty with numerical quantification of physical measurements and an absence of attendant units / error. Please continue to press to test
BEST!
SlideRule
12-03-2017, 03:24 AM (This post was last modified: 12-03-2017 07:51 AM by AlexFekken.)
Post: #35
AlexFekken Member Posts: 151 Joined: May 2016
RE: HP calcs are really not that accurate..
(12-03-2017 02:46 AM)SlideRule Wrote: My only departure is with respect to Numbers versus Measures,
Thanks, I agree of course. The two principles came from a physics lecture on measurement and data processing (a mandatory first lecture) so that narrowed the context.
SwissMicros seems to be working on the right sort of hardware (big multi-line display) for a true scientific calculator that would meet our demands.
Now just waiting for someone to implement Free42+$$\delta$$ :-)
12-03-2017, 05:46 AM
Post: #36
Sukiari Member Posts: 113 Joined: Dec 2014
RE: HP calcs are really not that accurate..
(12-03-2017 02:43 AM)brickviking Wrote:
(12-03-2017 01:19 AM)Sukiari Wrote: This is bait.
Ahh, but such sweet bait. And it had the purpose of extending the conversation even further. That's not always a bad thing…
(Post 140)
Mathematica ran quite nicely on a Next machine with, by today's standards, a very modest 68k processor, I think there could be a market for a Wolfram calculator. One could conceivably create a Pi shield with a handheld calculator form factor ala the PocketChip which I also own and quite like, and simply use the free Mathematica that Steven Wolfram was kind enough to release on that platform, even.
12-03-2017, 07:46 AM
Post: #37
DA74254 Member Posts: 76 Joined: Sep 2017
RE: HP calcs are really not that accurate..
(12-02-2017 10:43 PM)Paul Dale Wrote:
(12-02-2017 02:50 PM)rprosperi Wrote: @Pauli - Regarding "256 digits are far more than you'd need to represent the diameter of the universe in terms of the planck length"
Awesome statement, it just captures and states the point better than any other phrase possibly could express it. And it's cool!
I almost went with represent the volume of the universe in terms of. I don't think this is quite so neat but it is even more mind boggling.
As for asking questions, Thomas's variation of approach 2 is my favourite: I've tried, I've got something is there a better way? Being less confrontational can often avoid the need for thick skin.
Pauli
Actually, you can't calculate the volume of the universe. It has no volume as it is flat
Lookey heerey:
Space.com
Scientific American
Wikipedia
Esben
28s, 35s, 49G+, 50G, Prime, SwissMicros DM42
Elektronika MK-52 & MK-61
12-03-2017, 08:54 AM
Post: #38
DA74254 Member Posts: 76 Joined: Sep 2017
RE: HP calcs are really not that accurate..
(12-02-2017 11:51 PM)Claudio L. Wrote: newRPL commercial..
I've been looking at that site every now and then.
Not wanting to "do something" with my precious 50G, I'd ask here; can it be run on the 49G+ as well?
I'm more inclined to use that calc as an experimental thingy than my 50G.
Esben
28s, 35s, 49G+, 50G, Prime, SwissMicros DM42
Elektronika MK-52 & MK-61
12-03-2017, 10:03 AM
Post: #39
emece67 Senior Member Posts: 363 Joined: Feb 2015
RE: HP calcs are really not that accurate..
(12-01-2017 10:18 PM)Paul Dale Wrote: What are you trying to represent that requires so many digits?
How can you possibly measure something that accurately?
256 digits are far more than you'd need to represent the diameter of the universe in terms of the planck length.
256 digits may be an overkill when dealing with physics problems, but there are other problems where they may be not enough.
Some authors have used the double-exponential quadrature method to compute some definite integrals and then the PSLQ algorithm to ascertain closed form solutions involving elementary functions and well know constants that solve such integrals. Such approach relies on computing such definite integrals with a large number of digits, not in the hundreds, but in the thousands. So in this universe do exist interesting problems that demand high digit counts.
(12-02-2017 07:42 AM)DA74254 Wrote: And yes, I know exactly how big my land plot is in square plancks (just over 5,5x10^72)
Thus, apparently, you only needed 2 digits to measure your plot in square planks.
César - Information must flow.
12-03-2017, 11:34 AM
Post: #40
Paul Dale Senior Member Posts: 1,412 Joined: Dec 2013
RE: HP calcs are really not that accurate..
(12-03-2017 07:46 AM)DA74254 Wrote: Actually, you can't calculate the volume of the universe. It has no volume as it is flat
Apart from being trivial to calculate the volume of a two dimensional object (it's zero), the universe being flat doesn't mean it is two dimensional. Flat has a special meaning in this context which is defined by curvature and quickly heads away from Euclidean to Riemannian spaces and manifold theory. I knew this stuff thirty odd years ago but the details have faded away.
Pauli
« Next Oldest | Next Newest »
User(s) browsing this thread: 1 Guest(s) | 2018-12-09 22:25:58 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.401553213596344, "perplexity": 2637.5036222354393}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823183.3/warc/CC-MAIN-20181209210843-20181209232843-00258.warc.gz"} |
https://www.techwhiff.com/issue/choose-the-logical-inference-please-help-i-ll-mark--138080 | ### What decimal is halfway between 18 hundredths and 3 tenths
what decimal is halfway between 18 hundredths and 3 tenths...
### What feelings would you have if you were returning home after twenty years? Check any that apply. fulfillment exhaustion love glory pride happiness
What feelings would you have if you were returning home after twenty years? Check any that apply. fulfillment exhaustion love glory pride happiness...
### Jack and Jill shared 1/2 of a cake. Jack got to eat twice as much cake as Jill. What fraction of the whole cake did Jack eat?
Jack and Jill shared 1/2 of a cake. Jack got to eat twice as much cake as Jill. What fraction of the whole cake did Jack eat?...
### An initial population of fish is introduced into a lake. This fish population grows according to a continuous exponential growth model. There are fish in the lake after years. (a)Let be the time (in years) since the initial population is introduced, and let be the number of fish at time . Write a formula relating to . Use exact expressions to fill in the missing parts of the formula. Do not use approximations. (b)How many fish are there years after the initial population is introduced
An initial population of fish is introduced into a lake. This fish population grows according to a continuous exponential growth model. There are fish in the lake after years. (a)Let be the time (in years) since the initial population is introduced, and let be the number of fish at time . Write a fo...
### Similarities between Pop Art and Superflat art
similarities between Pop Art and Superflat art...
### Suppose that - π/2 ≤ θ ≤ π/2 and that tan(θ) = 0.1. determine value of sec(θ)
Suppose that - π/2 ≤ θ ≤ π/2 and that tan(θ) = 0.1. determine value of sec(θ) ...
### 2. The set {0,1,-1} is closed under the operation of?
2. The set {0,1,-1} is closed under the operation of?...
### Which phrase best describes a period on the periodic table?
which phrase best describes a period on the periodic table?...
### What types of injuries occur over a long period of time from the overuse of one area of the body while playing a sport?
What types of injuries occur over a long period of time from the overuse of one area of the body while playing a sport?...
### Which trails begin in indapendence
Which trails begin in indapendence...
### An animal sanctuary currently has 125 elk in it. The elk are expected to increase by 20% each year. How many elk will there be in 3 years?
An animal sanctuary currently has 125 elk in it. The elk are expected to increase by 20% each year. How many elk will there be in 3 years?... | 2022-09-27 09:00:29 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34849250316619873, "perplexity": 1879.0294892039346}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030334992.20/warc/CC-MAIN-20220927064738-20220927094738-00716.warc.gz"} |
https://www.postonline.co.uk/broker/2481511/aon-revenue-falls-2-in-2016 | # Aon revenue falls 2% in 2016
The 2% drop in revenue from $11.7bn (£9.4bn) in 2015 to$11.6bn at the end of last year was due to a 2% decrease in commissions and fees related to divestitures, and a 2% negative impact from foreign | 2018-09-23 20:36:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.31477853655815125, "perplexity": 4329.922138829788}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159744.52/warc/CC-MAIN-20180923193039-20180923213439-00061.warc.gz"} |
http://math.stackexchange.com/questions/122057/all-naturals-are-t-finite-all-finite-sets-are-t-finite | All naturals are T-finite, all finite sets are T-finite
In Jech's Set Theory, there is defined T-finite, where a set $S$ is T-finite if every non-empty $X\subseteq\mathcal{P}(S)$ has $\subseteq$-maximal element.
[ie. there is $u\in X$ s.t. there is no $v\in X$ with $u\subsetneq v$]
The following exercises are being related to this.
1. Each $n\in \mathbb{N}$ is T-finite
2. $\mathbb{N}$ is T-infinite (not T-finite)
3. Every finite set is T-finite
4. Every infinite set is T-infinite
I completed 2 and 4 (considering first $\mathbb{N}\subset \mathcal{P}(\mathbb{N})$ since the naturals are linearly ordered by $\subseteq$, and for $S$ infinite, $\{u\subseteq S\vert u \text{ finite}\}$ ). I am so far unable to solve the others. Many thanks for you kind help.
-
For (1) try to show by induction that each $n \in \mathbb{N}$ is T-finite.
• Since $0 = \emptyset$, then $\mathcal{P} ( \emptyset ) = \{ \emptyset \}$, and we can analyse both subsets of this to show that they have $\subseteq$-maximal elements.
• Going from $n$ to $n+1$, note that if $X \subseteq \mathcal{P} ( n+1 ) = \mathcal{P} (\{ 0, \ldots , n \} )$ has no $\subseteq$-maximal element, then the family $Y = \{ a \in X : n \in a \}$ must be nonempty. Reduce this down to a question about a subset of $\mathcal{P} ( n )$.
For (3), note that it follows easily from (1) once you show that if $X$ is T-finite and $f : X \to Y$ is a bijection, then $Y$ is T-finite. ($f$ will induce a bijection $\hat{f} : \mathcal{P} ( X ) \to \mathcal{P} (Y)$.)
-
Could you explain please how to reduce this to a question about a subset of $\mathcal{P}(n)$? I am fine with the inductive hypothesis, but am not able to get inductive step. Many thanks – Inigo Montoya Mar 19 '12 at 13:41
@Inigo: Take $Y^\prime = \{ a \setminus \{ n \} : a \in Y \}$. This is a family of subsets of $\mathcal{P} ( n )$. Argue that a $\subseteq$-maximal element of this maps (by adding $n$ to it) to a $\subseteq$-maximal element of the original $X$. – Arthur Fischer Mar 19 '12 at 14:08
Yes, yes - you have my gratitude – Inigo Montoya Mar 19 '12 at 14:50
Let $A = \{n \in \Bbb N : n \text{ is T-finite}\}$. Try to prove $A = \Bbb N$ by using Ex 1.10 in Jech's book.
-
Tarski proved that a set $S$ is finite if and only if every non-empty collection of subsets of $S$ has a maximal element. This equivalence holds without the axiom of choice, and so it made somewhat sense to slightly weaken it during the years when various forms of finiteness were investigated (without the axiom of choice, of course).
In Jech's book The Axiom of Choice he defines (Ch. 4, Ex. 9, p. 52) T-finite in a slightly different manner:
Call a set $S$ $T$-finite if every non-empty monotone [read: $\subseteq$-chain] $X\subseteq\mathscr P(S)$ has a $\subseteq$-maximal element.
It is not hard to show that every finite set is $T$-finite, indeed Tarski's equivalent implies that immediately. It is also not hard to show that every $T$-finite is Dedekind-finite.
Neither of the implications is reversible in ZF, as the following will show:
1. If $A$ is amorphous (infinite and every subset of $A$ is finite or co-finite) then $A$ is $T$-finite. It is consistent that amorphous sets exist. Therefore it is consistent that there are $T$-finite sets which are infinite.
2. If $A$ is a $T$-finite set then $A$ can be linearly ordered if and only if $A$ is finite.
3. It is consistent that there is an infinite Dedekind-finite set of real numbers. The previous fact shows that such set cannot be $T$-finite because it can be linearly ordered.
For those interested, a nice exercise in understanding the definition is to show that we can replace "maximal" by "minimal" in all the definitions.
-
To prove 1, I would try to show that: $A$ is T-finite $\Rightarrow$ $A\cup\{x\}$ is T-finite. (I.e. adding one element preserves T-finiteness.) Then I would go by induction on $n$. (If this hint is not sufficient, I can try to add a more complete solution.)
Jech's definition of finite is: $A$ is finite iff $A$ is in a bijection with some $n\in\mathbb N$. As T-finiteness is preserved by bijection, 1 clearly implies 3.
EDIT: Section 4.1 of Herrlich's book Axiom of choice might be interesting for you, too.
-
Is it obvious that bijections preserve T-finiteness? They must have to be $\subseteq$-order preserving also. – Inigo Montoya Mar 19 '12 at 14:05
It is explained in more detail in Arthur's answer. If $f:X\to Y$ is a bijection then we have bijection $\hat f:\mathcal P(X)\to\mathcal P(Y)$ given by $\hat f(A)=\{f(a); a\in A\}$. For any bijection $f$ the corresponding map $\hat f$ is $\subseteq$-order preserving. – Martin Sleziak Mar 19 '12 at 14:46 | 2014-08-29 18:55:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9577423334121704, "perplexity": 358.3789522505006}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-35/segments/1408500832738.80/warc/CC-MAIN-20140820021352-00002-ip-10-180-136-8.ec2.internal.warc.gz"} |
https://www.nature.com/articles/s41598-019-41953-0 | Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.
Potassium doping increases biochar carbon sequestration potential by 45%, facilitating decoupling of carbon sequestration from soil improvement
Abstract
Negative emissions technologies offer an important tool to limit the global warming to <2 °C. Biochar is one of only a few such technologies, and the one at highest technology readiness level. Here we show that potassium as a low-concentration additive in biochar production can increase biochar’s carbon sequestration potential; by up to 45% in this study. This translates to an increase in the estimated global biochar carbon sequestration potential to over 2.6 Gt CO2-C(eq) yr−1, thus boosting the efficiency of utilisation of limited biomass and land resources, and considerably improving the economics of biochar production and atmospheric carbon sequestration. In addition, potassium doping also increases plant nutrient content of resulting biochar, making it better suited for agricultural applications. Yet, more importantly, due to its much higher carbon sequestration potential, AM-enriched biochar facilitates viable biochar deployment for carbon sequestration purposes with reduced need to rely on biochar’s abilities to improve soil properties and crop yields, hence opening new potential areas and scenarios for biochar applications.
Introduction
Technologies for CO2 removal from the atmosphere (so called Negative Emission Technologies) will be required to limit the global warming to <2 °C1,2. Sequestering carbon in soil in the form of biochar has been discussed for around a decade and has shown great potential as a carbon negative strategy3,4,5. Although uncertainties still exist regarding estimation of the residence time of biochar in soil based on its physical and chemical properties and soil conditions6, it is generally accepted that carbon in biochar has a residence time in soil several orders of magnitude greater than the biomass it was produced from7. Therefore, many studies use proxies for assessing the amount of stable carbon in biochar8,9,10. Most relevant is the amount of carbon stable after 100 years in soil, the duration typically used to calculate the global warming potential of greenhouse gases11. In various articles the carbon sequestration potential of biochar has been calculated to be in the range of 0.7–1.8 Gt CO2-C(eq) yr−13,12,13.
Biochar is produced as one of the co-products of pyrolysis, which is the thermochemical conversion of biomass in the absence of free oxygen at temperatures above around 350 °C. This process yields three co-products in the form of pyrolysis solids (biochar), pyrolysis liquids (organic acids, phenolic compounds, etc.), and gases (CO, CO2, H2, CH4, C2H4, etc.)14,15. The yield of biochar, and also its carbon stability, are dependent on feedstock and pyrolysis conditions16,17.
Increasing the percentage of stable carbon content in biochar is one way of increasing its carbon sequestration potential (the other is increasing biochar yield); this is typically achieved by increasing the severity of the pyrolysis process (higher temperature and longer residence time, which reduces solids yield, increasing C release as gas, leading to higher CO2 emissions when burned)8,18. In context of carbon capture and storage, however, the increase of the stable carbon yield, i.e. the amount of stable carbon that can be obtained from the same amount of biomass, is more relevant. Increasing the stable carbon yield relative to the parent biomass feedstock is in effect equivalent to reducing the amount of biomass needed to sequester a unit of carbon.
The stability of biochar carbon is related to its condensed aromatic nature19, which in turn is affected by the composition of biomass feedstock and by processing conditions, mainly highest treatment temperature (HTT). Besides organic constituents (cellulose, hemicellulose and lignin), biomass also contains inorganic constituents, which content and composition is dependent on the type of biomass, growing location and to some extent the method of harvest and subsequent treatment20,21. Previous studies showed that concentrations of certain constituents of the mineral matter, especially alkali metals (AM) and alkaline earth metals (AEM), such as K, Na, Ca, Mg, strongly affect biochar yields22,23,24,25,26, with biochar yields increasing with elevated levels of AMs and AEMs due to catalysis of biochar formation. Yet, little is know about the effects of AMs and AEMs on the yield of stable carbon in biochar. In some studies the carbon retention in biochar was increased using additives, though very high additive to biomass ratios (w/w) were used, e.g. in Ren et al.26 Ca(OH)2 in a ratio of 1:9 was applied, in Zhao et al.25 H3PO4 was applied in ratios of 0.359:1 and 0.718:1. In this study, we have used an order of magnitude lower loading of a low-cost additive (1 and 2%) which at the same time provides K to plants when resulting biochar is applied to soil.
The potential ability of AMs to catalyse biochar formation together with the fact that potassium is a valuable macronutrient makes the use of AMs as additives in biochar production a potentially attractive proposition. This is especially true, if the AM catalysed biochar has at least similar or better properties in terms of carbon stability as biochar produced without AM doping. Despite extensive research on biochar stability, no systematic investigation of stability of AM catalysed biochar has been reported to date. The research presented in this paper focused on increasing the carbon sequestration potential of biochar derived from the energy crop Miscanthus giganteus, by increasing biochar yield and stability using a common, low-cost additive, potassium acetate, and comparing the effects with those of sodium acetate doping.
Results and Discussion
Potassium doping increases biochar and stable carbon yield
Our results show that, as expected based on previous research and published literature, the biochar yield of Miscanthus biomass was strongly affected by the presence of AMs. Doping with AMs (1 wt% K+, 2 wt% K+ and 1 wt% Na+) unequivocally resulted in higher biochar yields, by 10.5–21.1% relative to the untreated biomass control, in the whole temperature range tested (350–750 °C) (Figs 1A, 2A). The relative biochar yield change compared to un-amended biomass pyrolysis (Fig. 3A) shows that in the tested temperature range the biochar yield increase was independent of the HTT.
Besides biochar yield, the content of stable carbon and subsequently the yield of stable carbon are the other two important parameters in terms of biochar’s ability to sequester carbon efficiently. In this study the stable carbon content was determined using a hydrogen peroxide oxidation method calibrated so that it corresponds to approx. 100 years of ageing in soil8,9,27. The yield of stable carbon was determined as a product of biochar yield and stable carbon content. Both Na+ and K+ doping showed similar performance in increasing both biochar (Fig. 2A) and stable carbon (Fig. 2B) yields. The increase in biochar yield is a result of catalysis of the charring process by AM catalysts in the pyrolysis of lignocellulosic materials. In the presence of these metals the reaction process favours charring and dehydrating reactions, versus fragmentation and depolymerization pathways, in the primary decomposition of the holocellulosic fraction28. In addition, AMs also enhance dehydration, demethoxylation, decarboxylation, and biochar formation in lignin pyrolysis29,30.
All AM treatments considerably increased the stable carbon yield (Fig. 2C) compared to the untreated controls. The relative change was between + 9.5% (1% K+ 650 °C) and +45.0% (2% K + 450 °C) (SI Table 4) and the average change was +20.4% for all AM treatments and pyrolysis temperatures.
This finding was confirmed with a different biomass feedstock material. Willow chips treated in the same way with 1% w/w K+ were also pyrolysed at 350, 550 and 750 °C (described in SI). The stable carbon yield was highest at a HTT of 550 °C with a 31.8% higher yield relative to the feedstock biomass than the corresponding untreated willow sample (SI Table 3). This shows that the effect of AM doping on stable carbon yield is not specific to one type of biomass (Miscanthus).
450 °C pyrolysis maximises the carbon sequestration potential of miscanthus biochar
Tailoring the pyrolysis conditions and hence the biochar properties to suit the biochar end use is a profound and essential feature of biochar production, and therefore any effects of AM additives on available range of processing parameters are of high importance.
In general, both AMs performed comparably in terms of stable carbon yield when added at the same concentration (1wt %) (Fig. 2B). Increased AM loading reduced the HTT at which the highest stable carbon yield was obtained. While pyrolysis of Miscanthus loaded with 1% K+ yielded most stable carbon at 550 °C, in the case of Miscanthus loaded with 2% K+ this was observed at 450 °C (Fig. 2B). At higher temperatures these differences between AM loading levels became less pronounced (Fig 1B, 2B).
Taking all AM-treatments together the relative increase in stable carbon yield was highest at 450 °C (Fig. 3B) with +33.9% on average for both AM doping levels. This observation has very important practical implications, as it means that production units operating at the lower end of usual pyrolysis temperature ranges (450–500 °C) could be used. Such units would not require high-grade stainless steel as a material for construction, reducing costs compared to units specified for higher temperatures. With increasing pyrolysis temperature, the pH and surface area of biochar increase while at the same time the functionality (O/C, H/C ratios) decreases31,32,33. Therefore, producing biochar at moderate pyrolysis temperatures, i.e. 450–550 °C, is a good compromise, not only providing high yield of stable carbon, but also yielding biochar with beneficial properties for soil amelioration (high surface area, high CEC).
AMs affect biochar microstructure and carbon stability
The increased stability of carbon can be explained by the catalytic effect of AMs on biomass pyrolysis, such as enhanced cross-linking reactions (e.g., dehydration forming C=C or C-O-C) resulting in a highly cross-linked biochar, compared to biochar from untreated biomass34,35.
To further corroborate these findings, the structure of the biochar was investigated using Raman spectroscopy and X-ray diffraction (XRD), focusing on differences in the biochar carbon structure resulting from K+ doping. Raman spectra of the six biochar types investigated are shown in SI Fig. 2. Meaningful features of the spectra are (i) the position of the G peak, (ii) the relative intensity of the D and G peaks (the so-called ID/IG ratio), (iii) the shift of the D peak, and (iv) the width of the peaks. SI Fig. 2 shows that for both series the increase of temperature leads to a) a shift of the D-peak to lower wavenumbers, indicating a slight reduction in the size of defect free and/or edge free regions, b) a decrease in the width of the D peaks with increasing temperature, indicating a reduction in the structural disorder. At any given process temperature, the use of K+ (1 and 2 wt%) leads to lower G peak intensities, i.e. to higher ID/IG ratio. This indicates that the amount of disordered material as determined by Raman spectroscopy is larger for K+ treated Miscanthus biochar.
The XRD analysis of Miscanthus and K+ Miscanthus biochar confirmed the differences in biochar structure resulting from doping with potassium (see SI Table 5). The content of graphitic carbon, although small, increases because of K+ doping at all temperatures, with the effect being strongest at the lower end of the temperature range investigated. This is comparable with the trend seen for yield of stable carbon.
Based on this evidence we can conclude that although the carbon structure in the K+ Miscanthus biochar appears to be less ordered and more defective than that of untreated Miscanthus biochar, it has a comparable recalcitrance. It is important to note here that in comparing the information obtained from Raman spectroscopy and XRD care has to be taken since XRD is a bulk technique while Raman is sampling a few hundred nanometres close to the surface. Hence findings from these two techniques must be looked at as complementary. The overall effect of AM on biochar and stable carbon yield is schematically shown in SI Fig. 6. This finding is fundamental for developing biochar for carbon sequestration purposes as it means that use of AM doping can make pyrolysis much more efficient in converting biomass carbon to stable biochar carbon.
AMs doping enhances biochar’s role as a carbon sequestration tool
The potential of biochar to store atmospheric carbon has been estimated to be between 0.7–1.8 Gt CO2-C(eq) yr−13,12,13. Assuming a stable carbon increase of 45% as observed in this study for 2% potassium addition increases the carbon sequestration potential of biochar to between 1 and 2.6 Gt CO2-C(eq) yr−1, which corresponds to over 7% of current annual GHG emissions36 and close to 25% of annual atmospheric CO2 removal necessary to maintain atmospheric concentrations of CO2 at safe levels37.
Another conclusion that can be drawn from this finding is that to sequester a given amount of carbon in form of biochar, 31% less biomass, and therefore land would be required, compared to biochar without AM additives. This would further reduce biochar’s already relatively low land requirement, compared to other NETs options13.
In addition, tests of the K+-enhanced biochar showed that the majority of the K+ contained in the biochar was available and could be utilised by plants upon application of the amended biochar to soil (based on 0.01 M CaCl2-extraction (SI Table 6)). Therefore, K+-enhanced biochar provides not only a better tool for carbon sequestration, but also a better slow-release K-fertiliser, compared to unamended biochar.
To date, viable biochar applications have been inextricably linked to biochar’s ability to increase crop yields5,38. Due to its much higher carbon sequestration potential, AM-enriched biochar offers a price competitive option for atmospheric carbon removal compared to bioenergy and carbon capture and storage (BECCS), and direct air capture (DAC) etc., especially if low-cost sources of potassium, such as ash39, can be utilised. Under such scenario, crop growth improvements would still be a desirable option40, in fact a more likely one due to the increased K content, but no longer as critical for viable biochar implementation. This opens-up a whole new range of potential applications where carbon sequestration can be achieved, but where only small or no crop yield benefits could be expected, e.g., fertile soils, contaminated land, as well as non-soil applications (e.g., building materials, underground storage). This is a breakthrough approach to biochar deployment and a paradigm shifting finding.
Methods
Biochar production
To prepare AM-loaded Miscanthus with 1% and 2% w/w K+ content (0.256 and 0.513 mmol g−1 dry feedstock), potassium was added in a similar manner to previous work23,41,42 by spraying an aqueous solution (174.7 g L−1 and 349.5 g L−1) of potassium acetate (Sigma-Aldrich, ≥99.0%) onto oven-dried Miscanthus chips (approximately 10 mm in size) spread in a thin layer. In this way the desired level of potassium was introduced, and the biomass moisture was restored to the original level of ~12 wt%. Similarly, Na+ in form of an aqueous solution of sodium acetate (anhydrous, 98% pure, Fisher Scientific) (146 g L−1 sodium acetate) was applied to Miscanthus chips to introduce the same concentration in moles per weight of Miscanthus as 1% (w/w) K+ (256.4 μmol g−1 dry feedstock). This ensures that the same molar amount of AM is present in the 1% AM treatments independently of their molar weight which is equivalent to 0.59% (w/w) Na+. These organic salts were selected as additives due to their easy applicability, relatively low costs (potassium acetate 650–850 USD t−143), bulk availability, widespread use (e.g., de-icing, drilling muds, fertilisers, etc.), and low environmental impact44,45.
The untreated Miscanthus samples were pyrolysed at 350, 450, 550, 650 and 750 °C with a continuous auger reactor described in Buss et al.21 with mean residence time of the biomass in the heated zone of around 21.5 minutes. The 1% K+ doped Miscanthus was pyrolysed in the same temperature range, but with duplicate at 550 °C. Miscanthus doped with the other two AM treatments (2% K and 1% Na) was pyrolysed at the HTTs of 450 °C, 550 °C and 650 °C.
Biochar carbon stability
In this work, the chemical stability of biochar against oxidation in the environment was tested using a hydrogen peroxide wet oxidation method9. This method has been calibrated on naturally aged charcoal samples and corresponds to 92 and 187 years at mean annual temperatures of 17 °C and 7 °C, respectively. Hence, the stable carbon content determined in this study reflects the amount of carbon that is sequestered in the soil and for which carbon credits could be given. The method has also been cross-referenced with other proxies for carbon stability8, such as fixed carbon determined by proximate analysis (comparison of the two methods on the biochar samples from this study is provided in the supplementary information). The method is described in Cross et al.9 and in brief here. Before analysis, all biochars were ground to fine powder using a pestle and mortar, and homogenised sub-samples were used for the test. A char sample containing 0.1 g of C was mixed in a test tube with 0.01 mol of H2O2 in 7 mL of DI water. The test tubes were subsequently heated to 80 °C and kept at this temperature, with occasional agitation, for two days until the solution had evaporated. Subsequently, the remaining material was dried overnight at 105 °C before weighing the amount of residual char, and C content analysis. The carbon amount prior and post ageing was then calculated using the char amount and carbon content. The ratio of retained C to initial C gives the % stable carbon content.
$$stable\,carbon\,content( \% )=\frac{residual\,char\,mass\,\ast \,residual\,char\,carbon\,content}{initial\,char\,mass\,\ast \,initial\,char\,carbon\,content}$$
The analysis was performed in triplicate. By multiplying the stable carbon content by the char yield the stable carbon yield was calculated as described for the fixed carbon content by Antal and Grønli (2003)18. This corresponds to the efficiency of carbon conversion from biomass to biochar and serves as the key parameter for comparison in this work.
Raman spectroscopy
Raman spectra measurements were performed using a Renishaw InviaH instrument equipped with a green laser source (wavelength: 514.5 nm). All samples were taken as received (crushed) and placed on a microscope glass slide and flattened to obtain an optical field as flat as possible to improve the focus on the sample. Several points were examined for each sample. For each point we started with a low magnification objective (5x) zooming-in using a 20x and finally recording spectra with a 50x objective. The area of each spot examined had a width of about 2 μm2. Raman analysis being a volume technique, the signal is gathered from the sample surface up to a depth that varies with the optical properties of the analysed materials. In sp2-rich carbon materials this depth is of a few hundred nanometres (Ni, Z., Wang, Y., Yu, T. et al. Raman spectroscopy and imaging of graphene Nano Res. (2008) 1: 273. doi.org/10.1007/s12274–008–8036–1). The laser power was set at 5 mW in order to obtain a good signal-to-noise ratio while avoiding damage of the sample due to excessive heating. Each measurement was carried out in extended mode (100 cm−1 to 3500 cm−1) with an exposure time of 10 s and with 3 accumulations.
X-ray diffraction (XRD)
A Bruker D8 advance XRD instrument with a Cu Anode was used at 40 mA and 40 kV with a NaI detector which analysed at 1.5 s/step from 2 to 65° in 0.025°/step increments. The raw data were evaluated with the software TOPAS 3.0 Rietveld analysis. The biochar samples were spiked with 20% calcite to quantitatively measure the composition of mineral in the samples relative to a known concentration of calcite.
Statistics
Two-sample, two-sided, equal variances t-tests in Microsoft Excel were performed to determine effects of the 1% K+ addition on stable carbon yields. One-way ANOVAs followed by Tukey post-hoc tests (in SigmaPlot 13.0) were used to investigate the effect of the various AM treatments on stable carbon yield and the effect of the HTT on char yield and stable carbon yield taking the AM treatments as replications. Significant differences are given with a significance level of p < 0.05.
Data Availability
All data related to this experiment are provided within the article and the Supplementary Information.
References
1. 1.
IPCC 2014: Summary for Policymakers. Climate Change 2014: Mitigation of Cliamte Change. Contribution of Woeking group III to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Edenhofer, O., R. Pichs-Madruga, Y. Sokona, E. Farahani, S. Kadner, K. Seyboth, A. Adler,. mbridge Univ. Press. Cambridge, United Kingdom New York, NY, USA. (2014).
2. 2.
Fuss, S. et al. Betting on negative emissions. Nat. Clim. Chang. 4, 850–853 (2014).
3. 3.
Woolf, D., Amonette, J. E., Street-Perrott, F. A., Lehmann, J. & Joseph, S. Sustainable biochar to mitigate global climate change. Nat. Commun. 1, 1–9 (2010).
4. 4.
Lehmann, J. A handful of carbon. Nature 447, 10–11 (2007).
5. 5.
Woolf, D., Lehmann, J. & Lee, D. R. Optimal bioenergy power generation for climate change mitigation with or without carbon sequestration. Nat. Commun. 7, 13160 (2016).
6. 6.
Wang, J., Xiong, Z. & Kuzyakov, Y. Biochar stability in soil: Meta-analysis of decomposition and priming effects. GCB Bioenergy 1–12 https://doi.org/10.1111/gcbb.12266 (2015).
7. 7.
Lehmann, J. et al. Chapter 10: Persistence of biochar in soil. in Biochar for Environmental Management: Science and Technology and Implementation, second Edition 169–182 (Earthscan Ltd., London 2015).
8. 8.
Crombie, K., Mašek, O., Sohi, S. P., Brownsort, P. & Cross, A. The effect of pyrolysis conditions on biochar stability as determined by three methods. GCB Bioenergy 5, 122–131 (2013).
9. 9.
Cross, A. & Sohi, S. P. A method for screening the relative long-term stability of biochar. GCB Bioenergy 5, 215–220 (2013).
10. 10.
Budai, A. et al. International Biochar Initiative: Biochar Carbon Stability Test Method: An assessment of methods to determine biochar carbon stability. (2013).
11. 11.
IPCC. Chapter 8: Climate Change 2013: The physical science basis. Contribution of working group I to the fifth assessment report of the intergovernmental panel on climate change https://doi.org/10.1017/CBO9781107415324 (2013).
12. 12.
Paustian, K. et al. Climate-smart soils. Nature 532, 49–57 (2016).
13. 13.
Smith, P. Soil carbon sequestration and biochar as negative emission technologies. Glob. Chang. Biol. 22, 1315–1324 (2016).
14. 14.
Zhang, Q., Chang, J., Wang, T. & Xu, Y. Review of biomass pyrolysis oil properties and upgrading research. Energy Convers. Manag. 48, 87–92 (2007).
15. 15.
Crombie, K. & Mašek, O. Investigating the potential for a self-sustaining slow pyrolysis system under varying operating conditions. Bioresour. Technol. 162, 148–56 (2014).
16. 16.
Crombie, K. & Mašek, O. Pyrolysis biochar systems, balance between bioenergy and carbon sequestration. GCB Bioenergy 7, 349–361 (2015).
17. 17.
Liu, Z., Demisie, W. & Zhang, M. Simulated degradation of biochar and its potential environmental implications. Environ. Pollut. 179, 146–52 (2013).
18. 18.
Antal, M. J. & Grønli, M. The art, science, and technology of charcoal production. Ind. Eng. Chem. Res. 42, 1619–1640 (2003).
19. 19.
Ameloot, N., Graber, E. R., Verheijen, F. G. A. & De Neve, S. Interactions between biochar stability and soil organisms: Review and research needs. Eur. J. Soil Sci. 64, 379–390 (2013).
20. 20.
Evangelou, M. W. H., Conesa, H. M., Robinson, B. H. & Schulin, R. Biomass production on trace element–contaminated land: a review. Environ. Eng. Sci. 29, 823–839 (2012).
21. 21.
Buss, W., Graham, M. C., Shepherd, J. G. & Mašek, O. Suitability of marginal biomass-derived biochars for soil amendment. Sci. Total Environ. 547, 314–322 (2016).
22. 22.
Wang, Z., Wang, F., Cao, J. & Wang, J. Pyrolysis of pine wood in a slowly heating fixed-bed reactor: Potassium carbonate versus calcium hydroxide as a catalyst. Fuel Process. Technol. 91, 942–950 (2010).
23. 23.
Fuentes, M. E. et al. A survey of the influence of biomass mineral matter in the thermochemical conversion of short rotation willow coppice. J. energy Inst. 81, 234–241 (2008).
24. 24.
Richards, G. N., Shafizadeh, F. & Stevenson, T. T. Influence of sodium chloride on volatile products formed by pyrolysis of cellulose: Identification of hydroxybenzenes and 1-hydroxy-2-propanone as major products. Carbohydr. Res. 117, 322–327 (1983).
25. 25.
Zhao, L. et al. Roles of Phosphoric Acid in Biochar Formation: Synchronously Improving Carbon Retention and Sorption Capacity. J. Environ. Qual. 46, 393 (2017).
26. 26.
Ren, N., Tang, Y. & Li, M. Mineral additive enhanced carbon retention and stabilization in sewage sludge-derived biochar. Process Saf. Environ. Prot. 115, 70–78 (2018).
27. 27.
Masek, O., Brownsort, P., Cross, A. & Sohi, S. Influence of production conditions on the yield and environmental stability of biochar. Fuel 103, 151–155 (2013).
28. 28.
Di Blasi, C. Combustion and gasification rates of lignocellulosic chars. Prog. Energy Combust. Sci. 35, 121–140 (2009).
29. 29.
Jakab, E., Faix, O. & Till, F. Thermal decomposition of milled wood lignins studied by thermogravimetry/mass spectrometry. J. Anal. Appl. Pyrolysis 40–41, 171–186 (1997).
30. 30.
Kleen, M. & Gellerstedt, G. Influence of inorganic species on the formation of polysaccharide and lignin degradation products in the analytical pyrolysis of pulps. J. Anal. Appl. Pyrolysis 35, 15–41 (1995).
31. 31.
Harvey, O. R., Herbert, B. E., Rhue, R. D. & Kuo, L. J. Metal interactions at the biochar-water interface: Energetics and structure-sorption relationships elucidated by flow adsorption microcalorimetry. Environ. Sci. Technol. 45, 5550–5556 (2011).
32. 32.
Jindo, K., Mizumoto, H., Sawada, Y. & Sonoki, T. Physical and chemical characterization of biochars derived from different agricultural residues. Biogeosciences 11, 6613–6621 (2014).
33. 33.
Ronsse, F., van Hecke, S., Dickinson, D. & Prins, W. Production and characterization of slow pyrolysis biochar: influence of feedstock type and pyrolysis conditions. GCB Bioenergy 5, 104–115 (2013).
34. 34.
Le Brech, Y. et al. Effect of Potassium on the Mechanisms of Biomass Pyrolysis Studied using Complementary Analytical Techniques. ChemSusChem 9, 863–872 (2016).
35. 35.
Liu, D., Yu, Y., Long, Y. & Wu, H. Effect of MgCl2 loading on the evolution of reaction intermediates during cellulose fast pyrolysis at 325 °C. Proc. Combust. Inst. 35, 2381–2388 (2015).
36. 36.
The European Academies’ Science Advisory Council. EASAC policy report 35: Negative emission technologies: What role in meeting Paris Agreement targets? (2018).
37. 37.
EASAC. Science Advice for the Benefit of Europe Negative emission technologies: What role in meeting Paris Agreement targets? EASAC Policy Report (2018).
38. 38.
Peters, J., Iribarren, D. & Dufour, J. Biomass pyrolysis for biochar or energy applications? A life cycle assessment. Environ. Sci. Technol. 150401141039001, https://doi.org/10.1021/es5060786 (2015).
39. 39.
Buss, W., Jansson, S., Wurzer, C. & Mašek, O. Synergies between BECCS and biochar - maximizing carbon sequestration potential by recycling wood ash - accepted. ACS Sustain. Chem. Eng. https://doi.org/10.1021/acssuschemeng.8b05871 (2019).
40. 40.
Jeffery, S., Verheijen, F. Ga & van der Velde, M. & Bastos, a. C. A quantitative review of the effects of biochar application to soils on crop productivity using meta-analysis. Agric. Ecosyst. Environ. 144, 175–187 (2011).
41. 41.
Nowakowski, D. J. & Jones, J. M. Uncatalysed and potassium-catalysed pyrolysis of the cell-wall constituents of biomass and their model compounds. J. Anal. Appl. Pyrolysis 83, 12–25 (2008).
42. 42.
Nowakowski, D. J., Jones, J. M., Brydson, R. M. D. & Ross, A. B. Potassium catalysis in the pyrolysis behaviour of short rotation willow coppice. Fuel 86, 2389–2402 (2007).
43. 43.
Alibaba.com. www.alibaba.com/showroom/potassium-acetate-price.html accessed 20/11/2017 (2017).
44. 44.
Fischel, M. Evaluation of selected deicers based on a review of the literature, Report No. CDOT-DTD-R-2001-15, Colorado Department of Transportation, Research Branch. (2001).
45. 45.
EPA. Environmental Impact and Benefit Assessment for the Final Effluent Limitation Guidelines and Standards for the Airport Deicing Category. 1–187 (2012).
Acknowledgements
The authors would like to acknowledge the financial support of the Engineering and Physical Sciences Research Council (EPSRC) for the impact acceleration award that enabled this work.
Author information
Authors
Contributions
Ondřej Mašek led the research design and manuscript preparation, he also contributed to sample analysis, data collection, data analysis and interpretation. Wolfram Buss led the data analysis and interpretation, and contributed to experimental work, sample analysis, and manuscript preparation. Peter Brownsort contributed to research design, experimental work, sample analysis, data collection, data analysis and interpretation, and manuscript preparation. Massimo Rovere contributed to sample analysis, data collection, data analysis and interpretation, and manuscript preparation. Alberto Tagliaferro contributed to sample analysis, data collection, data analysis and interpretation, and manuscript preparation. Ling Zhao provided advisory support, contributed to data interpretation, and manuscript preparation. Xinde Cao provided advisory support, contributed to data interpretation, and manuscript preparation. Guangwen Xu provided advisory support, contributed to data interpretation, and manuscript preparation.
Corresponding author
Correspondence to Ondřej Mašek.
Ethics declarations
Competing Interests
The authors declare no competing interests.
Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Reprints and Permissions
Mašek, O., Buss, W., Brownsort, P. et al. Potassium doping increases biochar carbon sequestration potential by 45%, facilitating decoupling of carbon sequestration from soil improvement. Sci Rep 9, 5514 (2019). https://doi.org/10.1038/s41598-019-41953-0
• Accepted:
• Published:
• Extraneous Fe Increased the Carbon Retention of Sludge-Based Biochar
• Minghao Shen
• , Xiangdong Zhu
• & Shicheng Zhang
Bulletin of Environmental Contamination and Toxicology (2021) | 2021-10-28 12:34:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.602877140045166, "perplexity": 8362.675017403013}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-43/segments/1634323588284.71/warc/CC-MAIN-20211028100619-20211028130619-00467.warc.gz"} |
https://physics.stackexchange.com/questions/463829/derivation-of-von-neumann-equation-for-density-matrices | # Derivation of von Neumann Equation for Density Matrices
Consider an ensemble of systems where each system is in one of a set of states $$|\alpha_i\rangle$$, with proportions $$w_i$$, such that the density operator is
$$\hat{\rho} = \sum_i w_i |\alpha_i\rangle \langle \alpha_i|.$$
I'd like to derive the von Neumann equation to describe the time evolution of the density matrix. Now, I imagine that the entire point of such an exercise is that the weights as well as the states change in time; the states evolve via the Schrodinger equation, so I would think to stick in the above definition. Doing so, I get
$$i\hbar\frac{\partial \hat{\rho}}{\partial t} = -[\hat{\rho},\hat{H}] + i\hbar \sum_i \frac{\partial w_i}{\partial t} |\alpha_i \rangle \langle \alpha_i|,$$
which is obviously not the von Neumann equation due to the last term on the right. What happens to that term? Am I missing an important reason that the $$w_i$$ do not evolve in time?
The weights $$w_i$$ are constant in time. The reason for this is that all of the time evolution is contained in the kets; all of the $$Nw_i$$ systems in the $$|\alpha_i (t_0)\rangle$$ state at time $$t_0$$ will be in the state $$|\alpha_i(t)\rangle$$ at a later time $$t$$, so it is accurate to treat only the time evolution of the kets and keep the $$w_i$$ constant. | 2023-03-27 06:14:21 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 12, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8476546406745911, "perplexity": 60.29725810918412}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296948609.41/warc/CC-MAIN-20230327060940-20230327090940-00794.warc.gz"} |
https://www.physicsforums.com/threads/any-matrix-as-product-of-elementary-matrices.127620/ | # Any matrix as product of elementary matrices
1. Jul 31, 2006
### Castilla
Good day. My question is this.
Let A be a square matrix. I know that if Det (A) is not 0, then A can be put as the product of k elementary matrices.
But in Marsden's Elementary Classical Analysis I have read that ANY matrix can be put as the product of elementary matrices.
Iit is ok, or it is a errata?
2. Jul 31, 2006
### StatusX
It would depend on how you define "elementary matrices," but if you use the usual definition that they are the matrices corresponding to row transpositions, multiplying a row by a constant, and adding one row to another, it isn't hard to show all such matrices have nonzero determinants, and so by the product rule for determinants, (det(AB)=det(A)det(B) ), the product of elementary matrices must be non-singular.
3. Jul 31, 2006
### nocturnal
If the book states that even noninvertible square matrices can be written as a product of elementary matrices, then that is an error. A square matrix is invertible iff it can be written as a product of elementary matrices.
4. Jul 31, 2006
### mathwonk
you need some hypotheses on the ring of entries in the matrices, e.g. they need to come from a euclidean domain, such as a field or the integers. it is unknown if this is possible for entries from any principal ideal domain i believe.
5. Jul 31, 2006
I would think, seeing as the book is a book on classical analysis, that it's safe to assume the Matrices have entries in R.
6. Aug 1, 2006
### mathwonk
just trying to teach you something, which of course you are free to ignore. actually clasical analysis was usually done over C.
Last edited: Aug 1, 2006
7. Aug 1, 2006
Castilla.
8. Aug 1, 2006
### mathwonk
perhaps you are satisfied, but i am still interested in this topic as i have to teach it soon.
i believe if you think abut it you will notice that the key to writing a matrix as a product of elementary ones, is diagonalizing it by elementary operations.
if the diagonl version is the dientity, then the matrix it self is the product of the elementary matrices corresponding to the elementary operations.
the key to diagonalizing a matrix, is the proces of replacing an element by the gcd of that element and another element in the matrix. thus to diagonalize a matrix roughly this way, one needs to be able to write the gcd of two elements as a linear combination of the two elements.
this is only possible in a p.i.d. But the ooperations called elementary are more restrictive than this. if you think about it, you will notice that the elmentary matrix operations only allow one to replace an element by a unit times itself, plus any multiple of another element.
this is no restriction for fields since all non zero elements are units, but for rings it is a restriction. however in a euclidean domain such as Z, the gcd of a,b, can be obtained by the euclidean algorithm as a combination of elements of form a + by, rather than ax+by.
This makes it possible always to diagonalize any matrix, not just an invertible one, by elementary row and column ooperations over any p.i.d. perhaps this is what marsden was thinking of.
But the diagonal version will not be the identity in case the matrix is not invertible, so one cannot get the original matrix as a product of elementary ones, rather one gets the original matrix as a product of elementary matrices times a diagonal matrix.
does that help?
9. Aug 1, 2006
### mathwonk
by the way my earlier remark may be wrong. it is possibly known that one cannot write all invertible matrices as a product of elementary ones in a pid, but what is not known may be just which matrices can be so written? anyway these are interesting questions about an elementary subject.
if you are primarily interested in fields such as R or C, note that the same operations work in other fields such as finite fields, or rational functions.
But esentially the same operations also work as noted above in pids such as the ring of polynomials over a field. this simple remark leads to the easiest proof of all the standard canonical forms for matrices with coefficients in a field, such as rational canonical forms, and jordan matrices.
i.e. all that is needed to find canonical form of a square matrix like A, over Q say, is to diagonalize the matrix [A-X.I] over the polynomials with rational coefficients.
so if you really understand the proces over a field, it leads much further thn you might think. | 2017-04-28 21:48:53 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.886114239692688, "perplexity": 334.308804611001}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917123097.48/warc/CC-MAIN-20170423031203-00219-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://www.zora.uzh.ch/id/eprint/128893/ | # Retrieval analysis of lingual fixed retainer adhesives
Gugger, Jonas; Pandis, Nikolaos; Zinelis, Spiros; Patcas, Raphael; Eliades, George; Eliades, Theodore (2016). Retrieval analysis of lingual fixed retainer adhesives. American Journal of Orthodontics and Dentofacial Orthopedics, 150(4):575-584.
## Abstract
INTRODUCTION: Our objective was to analyze the surface and bulk properties alterations of clinically aged composites used for fixed retention.
METHODS: Twenty-six lingual retainers bonded for different time periods (2.2-17.4 years) were retrieved from postorthodontic patients. Fifteen lingual retainers had been cemented by a chemically cured adhesive (Maximum Cure, Reliance Orthodontic Products, Itasca, Ill), and 11 were treated with a photo-cured adhesive (Flow-Tain, Reliance Orthodontic Products). The first group was in service for 2.8 to 17.4 years and the second for 2.2 to 5.4 years. Five specimens from each material were prepared and used as the control (or reference) group. The debonded surfaces from enamel were studied by attenuated total reflectance Fourier transform infrared spectroscopy (n = 3 per material per group), low-vacuum scanning electron microscopy, and energy dispersive x-ray microanalysis (n = 3 per material per group). All specimens were used for the assessment of Vickers hardness, indentation modulus, and elastic index with the instrumented indentation testing method. The values of Vickers hardness, indentation modulus, and elastic index were compared between the retrieved and the reference groups with 1-way analysis of variance and the Student-Newman-Keuls multiple comparison test (α = 0.05).
RESULTS: The attenuated total reflectance Fourier transform infrared spectroscopy analysis showed that both retrieved composites demonstrated reduced unsaturation in comparison with the corresponding reference specimens. Some bonded surfaces showed development of organic integuments. All retrieved specimens showed reduced silicon content. Barium was identified only in the photo-cured group. No significant differences were found between the reference and retrieved groups in Vickers hardness, indentation modulus, and elastic index.
CONCLUSIONS: Despite the changes in composition, the mechanical properties of the materials tested remained unaffected by intraoral aging.
## Abstract
INTRODUCTION: Our objective was to analyze the surface and bulk properties alterations of clinically aged composites used for fixed retention.
METHODS: Twenty-six lingual retainers bonded for different time periods (2.2-17.4 years) were retrieved from postorthodontic patients. Fifteen lingual retainers had been cemented by a chemically cured adhesive (Maximum Cure, Reliance Orthodontic Products, Itasca, Ill), and 11 were treated with a photo-cured adhesive (Flow-Tain, Reliance Orthodontic Products). The first group was in service for 2.8 to 17.4 years and the second for 2.2 to 5.4 years. Five specimens from each material were prepared and used as the control (or reference) group. The debonded surfaces from enamel were studied by attenuated total reflectance Fourier transform infrared spectroscopy (n = 3 per material per group), low-vacuum scanning electron microscopy, and energy dispersive x-ray microanalysis (n = 3 per material per group). All specimens were used for the assessment of Vickers hardness, indentation modulus, and elastic index with the instrumented indentation testing method. The values of Vickers hardness, indentation modulus, and elastic index were compared between the retrieved and the reference groups with 1-way analysis of variance and the Student-Newman-Keuls multiple comparison test (α = 0.05).
RESULTS: The attenuated total reflectance Fourier transform infrared spectroscopy analysis showed that both retrieved composites demonstrated reduced unsaturation in comparison with the corresponding reference specimens. Some bonded surfaces showed development of organic integuments. All retrieved specimens showed reduced silicon content. Barium was identified only in the photo-cured group. No significant differences were found between the reference and retrieved groups in Vickers hardness, indentation modulus, and elastic index.
CONCLUSIONS: Despite the changes in composition, the mechanical properties of the materials tested remained unaffected by intraoral aging.
## Statistics
### Citations
Dimensions.ai Metrics
### Altmetrics
7 downloads since deposited on 13 Dec 2016 | 2018-09-23 10:53:54 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8041766285896301, "perplexity": 11098.446483813072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159193.49/warc/CC-MAIN-20180923095108-20180923115508-00357.warc.gz"} |
http://mathematica.stackexchange.com/questions/26089/how-to-calculate-the-area-of-a-listplot | # How to calculate the area of a listplot [duplicate]
I need to calculate the area enclosed in a two-dimensional listplot and following an answer to a similar question I tried this way:
imgnorm = (mnorm =
MorphologicalComponents[
Erosion[Binarize[
ListLinePlot[{list}, ImageSize -> Large, Axes -> False]],
0.5]]) // Colorize
listareas = {area1 = 1/. ComponentMeasurements[{mnorm, imgnorm}, "Area"], area2 = 2/. ComponentMeasurements[{mnorm, imgnorm}, "Area"]}
It works fine, however it is not precise enough for what I need to do because part of the area is taken by the black line.
The list is quite big and the points don't form a polygon.
EDIT: Here is the plot:
Any help would be appreciated, thank you in advance.
-
## marked as duplicate by Jens, J. M.May 29 '13 at 17:48
Can you show the list resoluted-down perhaps? And could you close it to form a polygon for any practical purpose? For we know how to compute the polygon area... – BoLe May 29 '13 at 16:13
When you say it's not a polygon, do you mean the lines overlap? Or that it is not closed? – Michael E2 May 29 '13 at 17:09
It is closed but the lines overlap, it wouldn't be a problem to remove those points if I could do it automatically, I don't need to be that precise – John May 29 '13 at 17:12
Related answer -- assumes no self-intersections. – Michael E2 May 29 '13 at 17:15
Another related question - actually I'm now pretty sure that this is a duplicate of the linked question. – Jens May 29 '13 at 17:21
I don't see any reason to create a plot at all. According to your example, you have a list that can be plotted with ListLinePlot. That means it can be interpolated by a piecewise linear function. Consequently, you only need to find that linear function and calculate its integral.
Here is an example:
list = Accumulate[RandomReal[{-1, 1}, 250]];
Integrate[
Interpolation[list, InterpolationOrder -> 1][x], {x, 1,
Length[list]}]
This assumed that the x axis runs from 1 to the length of the list. It's easy to extend this to cases where the scale is different, or the x axis values are not evenly spaced.
The important ingredient is InterpolationOrder. The setting 1 leads to a linear interpolation the way it is plotted in ListLinePlot. Higher settings would produce smoothed approximations that may or may not be a better representation of the data you're trying to integrate.
-
I can't use an interpolating function because it is a cicle and some x values have more than one correspondant y value – John May 29 '13 at 16:56
I see. But then it will still be possible to divide your list into separate parts that can be considered functions over an interval on the horizontal axis. I'd say that's much more efficient than to try and extract the same information from a plot of any kind. Plotting is just a detour if you already have the data. – Jens May 29 '13 at 17:17 | 2014-11-23 02:30:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5869367122650146, "perplexity": 700.1449697376772}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-49/segments/1416400378956.26/warc/CC-MAIN-20141119123258-00253-ip-10-235-23-156.ec2.internal.warc.gz"} |
https://jp.mathworks.com/help/stats/compactclassificationnaivebayes.logp.html?lang=en | # logP
Log unconditional probability density for naive Bayes classifier
## Description
lp = logP(Mdl,tbl) returns the log unconditional probability density of the observations (rows) in tbl using the naive Bayes model Mdl.
You can use lp to identify outliers in the training data.
example
lp = logP(Mdl,X) returns the log unconditional probability density of the observations (rows) in X using the naive Bayes model Mdl.
## Input Arguments
expand all
Naive Bayes classifier, specified as a ClassificationNaiveBayes model or CompactClassificationNaiveBayes model returned by fitcnb or compact, respectively.
Sample data, specified as a table. Each row of tbl corresponds to one observation, and each column corresponds to one predictor variable. Optionally, tbl can contain additional columns for the response variable and observation weights. tbl must contain all the predictors used to train Mdl. Multi-column variables and cell arrays other than cell arrays of character vectors are not allowed.
If you trained Mdl using sample data contained in a table, then the input data for this method must also be in a table.
Data Types: table
Predictor data, specified as a numeric matrix.
Each row of X corresponds to one observation (also known as an instance or example), and each column corresponds to one variable (also known as a feature). The variables making up the columns of X must be the same as the variables that trained Mdl.
Data Types: double | single
## Output Arguments
expand all
Log of the unconditional probability density of the predictors, returned as a numeric column vector. lp has as many elements as rows in X, and each element is the log probability density of the corresponding row in X.
If any rows in X contain at least one NaN, then the corresponding element of lp is NaN.
## Examples
expand all
X = meas; % Predictors
Y = species; % Response
Train a naive Bayes classifier. It is good practice to specify the class order. Assume that each predictor is conditionally normally distributed given its label.
Mdl = fitcnb(X,Y,'ClassNames',{'setosa','versicolor','virginica'});
Mdl is a trained ClassificationNaiveBayes classifier.
Compute the unconditional probability densities of the in-sample observations.
lp = logP(Mdl,X);
histogram(lp)
xlabel 'Log-unconditional probability'
ylabel 'Frequency'
title 'Histogram: Log-Unconditional Probability'
Identify indices of observations having log-unconditional probability less than -7.
idx = find(lp < -7)
idx = 3×1
61
118
132 | 2020-04-07 11:31:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8253732919692993, "perplexity": 2791.2744821319493}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585371700247.99/warc/CC-MAIN-20200407085717-20200407120217-00176.warc.gz"} |
https://mathoverflow.net/questions/249883/average-decay-of-fourier-coefficients-of-continuous-measures-along-the-sequence | # Average decay of Fourier coefficients of continuous measures along the sequence $\lfloor n^{3/2}\big\rfloor$
Let $\mu$ be a continuous measure on $[0,1]$ (i.e. each individual point has $0$ measure). As usual, denote by $\hat\mu(n)=\int_0^1e^{2\pi inx}d\mu(x)$ the Fourier transform of $\mu$, and let $\lfloor x\rfloor$ denote the floor of $x\in\mathbb R$. Is it true that $$\lim_{N\to\infty}sup_{M\in\mathbb N}\frac1N\sum_{n=M}^{M+N}\left|\hat\mu\Big(\big\lfloor n^{3/2}\big\rfloor\Big)\right|=0~?$$
Observe that if $\mu$ is absolutely continuous, then the answer is yes by the Riemann-Lebesgue lemma.
If instead of $\hat\mu(\lfloor n^{3/2}\rfloor)$ we consider $\hat\mu(n)$, then the answer is again yes (it follows from Wiener's theorem).
If we don't take a supremum over all shifts of the interval $\{1,\dots,N\}$, then the result is again well known because the sequences $n\mapsto\lfloor n^{3/2}\rfloor x$ are uniformly distributed for any irrational $x$.
The motivation for this question comes from the fact that $$\lim_{N\to\infty}sup_{M\in\mathbb N}\frac1N\sum_{n=M}^{M+N}\left|\hat\mu\big(n^2\big)\right|=0.$$ This follows easily from the fact that the sequences $n\mapsto n^2 x$ are well distributed for every irrational $x$, but this is not the case for the sequences $n\mapsto\lfloor n^{3/2}\rfloor x$.
• The Fourier coefficients are complex numbers, so you need to take an absolute value somewhere before you can take the sup. Where exactly do you put the $|\cdot|$? Sep 14, 2016 at 20:32
• Oops, you're right, I just added them Sep 14, 2016 at 20:37
Take $M=4N^4$. Then $\lfloor (M+n)^{3/2}\rfloor=8N^6+3N^2n$ for $n=1,\dots,N$, so $\frac 1N\sum_{n=1}^N e^{-2\pi i \lfloor (M+n)^{3/2}\rfloor x}$ is $1$ when $x=q/N^2$ and nearly $1$ on a small open neighborhood $U_N$ of those points. Now take some very fast increasing sequence of $N$ so that $\cap_N U_N$ contains a Cantor-type set and put the measure on it. The exact formulae do not matter, of course. What mattered a bit was having long linear pieces of arbitrarily large slope, but even that wasn't crucial. As you noticed yourself, the (particular kind of) absence of good distribution was the key.
• Thanks for the answer but I didn't understand something: it seems to me that the intervals $U_N$ could shrink so that the intersection will be a single point at most. How do we guarantee we can fit a Cantor set in there? Sep 16, 2016 at 1:22
• Oh, never mind I see it now, each $U_N$ consists of $N^2$ intervals, not just one. Sep 16, 2016 at 1:24 | 2022-12-06 18:28:10 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9653069376945496, "perplexity": 108.18543942411758}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446711111.35/warc/CC-MAIN-20221206161009-20221206191009-00339.warc.gz"} |
http://en.wikipedia.org/wiki/Solar_mass | # Solar mass
"Weight of the Sun" redirects here. For the song, see Tao of the Dead.
Size and mass of very large stars: Most massive example, the blue Pistol Star (150 M). Others are Rho Cassiopeiae (40 M), Betelgeuse (20 M), and VY Canis Majoris (17 M). The Sun (1 M) which is not visible in this thumbnail is included to illustrate the scale of example stars. Earth's orbit (grey), Jupiter's orbit (red), and Neptune's orbit (blue) are also given.
The solar mass (M) is a standard unit of mass in astronomy that is used to indicate the masses of other stars, as well as clusters, nebulae and galaxies. It is equal to the mass of the Sun, about two nonillion kilograms:
$M_\odot=( 1.98855\ \pm\ 0.00025 )\ \times10^{30}\hbox{ kg}$[1][2]
The above mass is about 332,946 times the mass of the Earth (M), or 1,048 times the mass of Jupiter (MJ).
Because the Earth follows an elliptical orbit around the Sun, its solar mass can be computed from the equation for the orbital period of a small body orbiting a central mass.[3] Based upon the length of the year, the distance from the Earth to the Sun (an astronomical unit or AU), and the gravitational constant (G), the mass of the Sun is given by:
$M_\odot=\frac{4 \pi^2 \times (1\ {\rm AU})^3}{G\times(1\ {\rm year})^2}$
The value of the gravitational constant was derived from measurements that were made by Henry Cavendish in 1798 with a torsion balance. The value he obtained differs by only 1% from the modern value.[4] The diurnal parallax of the Sun was accurately measured during the transits of Venus in 1761 and 1769,[5] yielding a value of 9″ (compared to the present 1976 value of 8.794148″). If we know the value of the diurnal parallax, we can determine the distance to the Sun from the geometry of the Earth.[6]
The first person to estimate the mass of the Sun was Isaac Newton. In his work Principia, he estimated that the ratio of the mass of the Earth to the Sun was about 1/28,700. Later he determined that his value was based upon a faulty value for the solar parallax, which he had used to estimate the distance to the Sun (1 AU). He corrected his estimated ratio to 1/169,282 in the third edition of the Principia. The current value for the solar parallax is smaller still, yielding an estimated mass ratio of 1/332,946.[7]
As a unit of measurement, the solar mass came into use before the AU and the gravitational constant were precisely measured. This is because the relative mass of another planet in the solar System or the combined mass of two binary stars can be calculated in units of Solar mass directly from the orbital radius and orbital period of the planet or stars using Kepler's third law, provided that orbital radius is measured in astronomical units and orbital period is measured in years.
The mass of the Sun has decreased since the time it formed. This has occurred through two processes in nearly equal amounts. First, in the Sun's core hydrogen is converted into helium by nuclear fusion, in particular the pp chain, and this reaction converts some mass into energy in the form of gamma ray photons. Most of this energy eventually radiates away from the Sun. Second, high-energy protons and electrons in the atmosphere of the Sun are ejected directly into outer space as a solar wind.
The original mass of the Sun at the time it reached the main sequence remains uncertain. The early Sun had much higher mass-loss rates than at present, so it may have lost anywhere from 1–7% of its natal mass over the course of its main-sequence lifetime.[8] The Sun gains a very small mass through the impact of asteroids and comets; however the Sun already holds 99.86% of the Solar System's total mass, so these impacts cannot offset the mass lost by radiation and ejection.
## Related units
One solar mass, M, can be converted to related units:
It is also frequently useful in general relativity to express mass in units of length or time.
$M_\odot \frac{G}{c^2} \approx 1.48~\mathrm{km};\ \ M_\odot \frac{G}{c^3} \approx 4.93~ \mathrm{\mu s}$ | 2014-12-23 01:04:12 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 3, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7853341698646545, "perplexity": 598.140385769687}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-52/segments/1418802777438.76/warc/CC-MAIN-20141217075257-00091-ip-10-231-17-201.ec2.internal.warc.gz"} |
https://ictp.acad.ro/category/paper/original/page/32/ | # (original)
## Bilateral approximations of the roots of scalar equations by Lagrange-Aitken-Steffensen method of order three
Abstract We study the monotone convergence of two general methods of Aitken-Steffenssen type. These methods are obtained from the Lagrange…
## On a Steffensen-Hermite type method for approximating the solutions of nonlinear equations
Abstract It is well known that the Steffensen and Aitken-Steffensen type methods are obtained from the chord method, using controlled…
## Accelerating the convergence of the iterative methods of interpolatory type
Abstract In this paper we deal with iterative methods of interpolatory type, for solving nonlinear equations in Banach spaces. We…
## On some Aitken-Steffensen-Halley-type method for approximating the roots of scalar equations
Abstract We extend the Aitken-Steffensen method to the Halley transformation. Under some rather simple assumptions we obtain error bounds for…
## Aitken-Steffensen-type methods for nonsmooth functions (II)
Abstract We present some new conditions which assure that the Aitken-Steffensen method yields bilateral approximation for the solution of a… | 2022-09-28 23:28:14 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.642135500907898, "perplexity": 1114.7720614722284}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030335286.15/warc/CC-MAIN-20220928212030-20220929002030-00171.warc.gz"} |
https://brilliant.org/problems/can-you-express-the-solution-in-base-7/ | # Can you Express the Solution in Base 7?
Number Theory Level 3
You are given that $$x_1=0.333\ldots$$ is a number in base 9. Similarly, $$x_2= 0.333\ldots$$ is a number in base 10. Then what is $$x_1 - x_2$$ in base 7? | 2016-10-28 00:46:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7449122667312622, "perplexity": 171.3747874297969}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2016-44/segments/1476988721415.7/warc/CC-MAIN-20161020183841-00036-ip-10-171-6-4.ec2.internal.warc.gz"} |
https://sites.astro.caltech.edu/~aam/publication/pub0197/ | # Time-series and Phasecurve Photometry of Episodically-Active Asteroid (6478) Gault in a Quiescent State Using APO, GROWTH, P200 and ZTF
### Abstract
We observed Episodically Active Asteroid (6478) Gault in 2020 with multiple telescopes in Asia and North America and have found that it is no longer active after its recent outbursts at the end of 2018 and start of 2019. The inactivity during this apparation allowed us to measure the absolute magnitude of Gault of H_r = 14.63 +/- 0.02, G_r = 0.21 +/- 0.02 from our secular phasecurve observations. In addition, we were able to constrain Gault’s rotation period using time-series photometric lightcurves taken over 17 hours on multiple days in 2020 August, September and October. The photometric lightcurves have a repeating $lesssim$0.05 magnitude feature suggesting that (6478) Gault has a rotation period of ~2.5 hours and may have a semi-spherical or top-like shape, much like Near-Earth Asteroids Ryugu and Bennu. The rotation period of ~2.5 hours is near to the expected critical rotation period for an asteroid with the physical properties of (6478) Gault suggesting that its activity observed over multiple epochs is due to surface mass shedding from its fast rotation spun up by the Yarkovsky-O’Keefe-Radzievskii-Paddack effect.
Type | 2022-06-27 17:05:46 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3397907614707947, "perplexity": 4228.294115527694}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-27/segments/1656103337962.22/warc/CC-MAIN-20220627164834-20220627194834-00402.warc.gz"} |
http://xrpp.iucr.org/Da/ch2o4v0001/sec2o4o3o1/ | International
Tables for
Crystallography
Volume D
Physical properties of crystals
Edited by A. Authier
International Tables for Crystallography (2006). Vol. D, ch. 2.4, p. 330
## Section 2.4.3.1. Direct coupling to displacements
R. Vachera* and E. Courtensa
aLaboratoire des Verres, Université Montpellier 2, Case 069, Place Eugène Bataillon, 34095 Montpellier CEDEX, France
Correspondence e-mail: rene.vacher@ldv.univ-montp2.fr
#### 2.4.3.1. Direct coupling to displacements
| top | pdf |
The change in the relative optical dielectric tensor produced by an elastic wave is usually expressed in terms of the strain, using the Pockels piezo-optic tensor p, as The elastic wave should, however, be characterized by both strain S and rotation A (Nelson & Lax, 1971; see also Section 1.3.1.3 ):The square brackets on the left-hand side are there to emphasize that the component is antisymmetric upon interchange of the indices, . For birefringent crystals, the rotations induce a change of the local in the laboratory frame. In this case, (2.4.3.1) must be replaced by where is the new piezo-optic tensor given by One finds for the rotational partIf the principal axes of the dielectric tensor coincide with the crystallographic axes, this gives This is the expression used in this chapter, as monoclinic and triclinic groups are not listed in the tables below.
For the calculation of the Brillouin scattering, it is more convenient to use which is valid for small .
### References
Nelson, D. F. & Lax, M. (1971). Theory of photoelastic interaction. Phys. Rev. B, 3, 2778–2794. | 2019-06-25 16:51:12 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8619489073753357, "perplexity": 2639.13628963458}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-26/segments/1560627999853.94/warc/CC-MAIN-20190625152739-20190625174739-00128.warc.gz"} |
https://codereview.stackexchange.com/questions/127456/sleep-with-count-down-display | # Sleep with count down display
I have a sleep function which displays the time remaining in sec, some feedback would be appreciated
use warnings;
$x=$ARGV[0];
$|++;$n=0;
# arguments could be 1, 1s , 1m or 1h
if($x=~/([0-9]+)([a-zA-Z]?)/) {$t = lc($2) ; if($t eq "h"){$n =$1*3600;}
elsif( $t eq "m" ){$n = $1*60;} elsif($t eq "s" || $t eq "") {$n = $1;} else {die;} } else {die;} while($n)
{
print $n; sleep 1;$len=length($n); print "\r", " "x$len, "\r"; $n--; } This code and updates are available at csleep ## 2 Answers Here are some points I would like to see improved • Always use strict, and use my to declare your variables as late as possible • use warnings is preferable to -w on the shebang-line or command-line, but you should use warnings 'all' to ensure that you get all the help that perl has to offer • The coding standards in perldoc perlstyle will make your code instantly accessible to most Perl professionals. You should follow them unless you have a very good reason to write your code differently • If you have multiple parameters then it is more concise to use a list copy to assign the values to local variables. There is no reason to do it differently for a single parameter, so my ($p1) = @ARGV is better than my $p1 =$ARGV[0]
• The built-in variables are poorly-named, and you should avoid using them explicitly if you can. $|++ is much better as STDOUT->autoflush • Single-character identifiers are unacceptable, except for a few exceptions. $i, $j and $k are traditionally integer values and array indices; similarly $x, $y and $z are floating-point coordinates. $t may be used for a time value, and $l, $w and $h for length, width and height. $n is a number - usually a count - and $s may be a string. Pretty much anything else can be written more explicitly using a longer name. Note that Perl reserves $a and $b as internal variables for the built-in sort operator. They should never be used outside a sort block • If you are testing something that invalidates all of the rest of the program then you should check it straight away and die with an appropriate message string, rather than enclosing all of the rest of your code in a conditional block. So it would be better to rewrite if($x=~/([0-9]+)([a-zA-Z]?)/) { ... } else {die} as $x=~/([0-9]+)([a-zA-Z]?)/ or die "bad input". After that, the rest of your program can assume that the input is well-formed • Make sure to anchor your regular expressions. $x=~/([0-9]+)([a-zA-Z]?)/ will be true if $x contains a string that matches the pattern, like >>>0<<<<. Try it! • If you're using a regular expression, then get it to do all the work it can. Instead of [a-zA-Z]? when you are expecting only h, m, s, H, M or S, you can write [hms]? and use the /i (case-independent) modifier on the match. Thereafter, you don't have to consider the case where the pattern has matched any other letter • Make your constants clear. "Magic numbers" never help, and although you must rely on people knowing that there are sixty seconds in a minute, you don't need to use 3600 for the hour multiplier. 60 * 60 commmunicates a lot more • Consider using the /x modifier on regex patterns that are at all complex. It allows you to add whitespace to patterns so that you can lay them out better and make them clearer to read. /([0-9]+)([a-zA-Z]?)/ is clearer as / ( [0-9]+ ) ( [a-zA-Z]? ) /x • People familiar with Perl will expect to see parentheses used much more rarely than in most languages. They are used more like commas in natural languages, and an expression like lc($2) is more common as lc $2 • I personally prefer or and and to || and &&. I think they read better, and they often remove the need for additional parentheses. But beware that they have much lower precedence than their symbolic equivalents • Don't use the trailing ++ and -- operators when you don't need them. They mean "increment (decrement) the variable and return the value before it was changed". Most often --$n is just fine. The post-fixed operators have become popular only because of the name of the language C++, and it isn't a good guideline
I have altered your program, following the guidelines above. I hope you agree that it's preferable this way
use strict;
use warnings 'all';
STDOUT->autoflush;
my ($input) = @ARGV;$input =~ / ^ ( [0-9]+ ) ( [hms]? ) $/ix or die qq{Invalid input "$input"};
my $unit = lc$2;
my $seconds; if ($unit eq 'h' ) {
$seconds =$1 * 60 * 60;
}
elsif ( $unit eq 'm' ) {$seconds = $1 * 60; } elsif ($unit eq 's' or $unit eq "" ) {$seconds = $1; } while ($seconds-- ) {
print $seconds; sleep 1; my$len = length $seconds; print "\r", " " x$len, "\r";
}
I would personally choose to set up a hash that converted suffix strings to a multiplying factor instead of using a chain of if statements. Like this
use strict;
use warnings 'all';
STDOUT->autoflush;
my ($input) = @ARGV;$input =~ / ^ ( [0-9]+ ) ( [hms]? ) $/ix or die qq{Invalid input "$input"};
my %factors = (
h => 60 * 60,
m => 60,
s => 1,
'' => 1,
);
my $seconds =$1 * $factors{ lc$2 };
while ( $seconds-- ) { print$seconds;
sleep 1;
my $len = length$seconds;
print "\r", " " x $len, "\r"; } I hope that helps ### Naming The names are very poor. Consider these renames: • n -> seconds • t -> unit • x -> input ### Make the most out of regex If you only allow "s", "m", or "h" as units, then instead of [a-zA-Z] you could use [smhSMH] in the regex. ### Formatting The formatting is really too compact. It's recommended to put spaces around operators. ### use strict It's recommended to use strict always. It can help catching bugs. ### Alternative implementation Applying the above suggestions (and a bit more), consider this alternative implementation: use warnings; use strict; my$seconds;
# input could be 1, 1s , 1m or 1h
my $input =$ARGV[0];
if ($input =~ /^([0-9]+)([smh]?)$/i) {
$seconds =$1;
my $unit = lc($2);
my $multiplier = 1; if ($unit eq "h") {
$multiplier = 3600; } elsif ($unit eq "m") {
$multiplier = 60; }$seconds *= $multiplier; } else { die; }$| = 1; # force flush output on every write
while ($seconds) { print$seconds;
sleep 1;
my $len = length($seconds);
print "\r", " " x $len, "\r";$seconds--;
}
• Hey your program doesn't work, I corrected some of them but they still have some problems, like argument 2a is still valid while it shouldn't be. ([smh]?) does match smh and that it is optional but if there is other letters than them they dont match but the the condition would be still true. this would need something like ([has smh] and [doesnt have other letters]) ideone.com/wGO6x6 – tejas May 5 '16 at 8:11
• @tejas my bad, corrected – janos May 5 '16 at 8:16
• wouldn't "2A" be a valid input still? – tejas May 5 '16 at 12:18
• No, 2A is not valid, but 2A1s is. There should be a ^ anchor in the beginning of the pattern. – simbabque May 10 '16 at 9:01 | 2021-07-31 13:29:28 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.38461947441101074, "perplexity": 9125.043998716028}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154089.6/warc/CC-MAIN-20210731105716-20210731135716-00354.warc.gz"} |
http://www.aperiodictiling.org/wpaperiodictiling/ | # Introduction
Most tilings of the plane are periodic. Common examples are pavements of roads or sidewalks, the surface of a brick wall, the tile pattern of kitchen or bathroom floors. In aperiodic tilings the pattern does not repeat itself. Since a few decades an increasing number of aperiodic tilings, constructed using only a few different tiles and a prescription how the tiles should be put together, have been found. The most famous among them is the Penrose tiling with a five-fold rotation symmetry, discovered in the early seventies. One version, shown in Fig I1 is the tiling built using two different rhombs with opening angles $\pi/5$ and $2\pi/5$. The red dots are decorations of these so-called prototiles to indicate how they should be connected.
Fig. I1. Penrose rhomb tiles.
Another example of aperiodic tiling is the Amman Beenker (AB) tiling, also based on two rhomb tiles with opening angle $\pi/8$ and $\pi/4$. A simple way to construct aperiodic tilings is to use a substitution rule. The substitution rule tells us how inflated copies of a basic set of tiles are decomposed into a discrete number of basic tiles. By iteratively applying the rule to the inflated tiles an arbirary large tiling is obtained. The basic tiles, in our case the two rhomb tiles, are called the prototiles, and the (multiply) inflated tiles the supertiles.
In Fig. I2 three generations of substitution tiles are shown for the AB tiling, to the right the commonly used partly overlapping tiles, and on the left hand side the corresponding area conserving tiles The substitution edge angles are (0, -1, 1)$\pi/4$ respectively. The number set {0, -1, 1} is the so called edge sequence.
Fig. I2. Three generations of AmmannBeenker substitution tiles.
The above and many other substitution tilings are depicted in the aperiodic tilings encyclopedia by E.O.Harriss and D. Frettlöh. Other useful links are the wikipedia page on aperiodic tilings and a list of aperiodic sets of tiles. | 2015-11-26 21:17:54 | {"extraction_info": {"found_math": true, "script_math_tex": 5, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 5, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.627094030380249, "perplexity": 1330.3603426867965}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398447783.20/warc/CC-MAIN-20151124205407-00125-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://calculator-online.org/sumofseries/expr/0fd9ba282393b9b5f6734e7a5f139ef6 | Mister Exam
# Sum of series factorial(n)/n^n
=
### The solution
You have entered [src]
oo
____
\
\ n!
\ --
/ n
/ n
/___,
n = 1
$$\sum_{n=1}^{\infty} \frac{n!}{n^{n}}$$
Sum(factorial(n)/n^n, (n, 1, oo))
The radius of convergence of the power series
Given number:
$$\frac{n!}{n^{n}}$$
It is a series of species
$$a_{n} \left(c x - x_{0}\right)^{d n}$$
- power series.
The radius of convergence of a power series can be calculated by the formula:
$$R^{d} = \frac{x_{0} + \lim_{n \to \infty} \left|{\frac{a_{n}}{a_{n + 1}}}\right|}{c}$$
In this case
$$a_{n} = n^{- n} n!$$
and
$$x_{0} = 0$$
,
$$d = 0$$
,
$$c = 1$$
then
$$1 = \lim_{n \to \infty}\left(n^{- n} \left(n + 1\right)^{n + 1} \left|{\frac{n!}{\left(n + 1\right)!}}\right|\right)$$
Let's take the limit
we find
False
False
False
The rate of convergence of the power series
oo
___
\
\ -n
/ n *n!
/__,
n = 1
$$\sum_{n=1}^{\infty} n^{- n} n!$$
Sum(n^(-n)*factorial(n), (n, 1, oo))
1.87985386217525853348630614507
1.87985386217525853348630614507 | 2022-11-26 16:08:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9039912819862366, "perplexity": 6584.57951392869}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446708010.98/warc/CC-MAIN-20221126144448-20221126174448-00580.warc.gz"} |
https://learn.careers360.com/engineering/question-arrange-the-following-cobalt-complexes-in-the-order-of-incresing-crystal-field-stabilization-energy-cfse-valuecomplexes-choose-the-correct-option-option-1-option-2-option-3-option-4-option-5-option-6-option-7-option-8-option-9-option-10-option-11-option-12-option-13-option-14-option-15-option-16-143734/ | #### Arrange the following Cobalt complexes in the order of incresing Crystal Field Stabilization Energy (CFSE) value. Complexes : Choose the correct option : Option: 1 Option: 2 Option: 3 Option: 4
As we have learnt ,
The CFSE depends upon the nature of ligands if the metal cation is the same.
Also, cations having greater positive charge have a greater CFSE.
Thus, the given complexes can be arranged in order of their CFSE as
$\mathrm{\underset{\textbf{D}}{\left [ Co\left ( en \right )_{3} \right ]^{3+}}> \underset{\textbf{C}}{\left [ Co\left ( NH_{3} \right ) _{6} \right ]^{3+}}>\underset{\textbf{A}} {\left [ CoF_{6} \right ]^{3-}}> \underset{\textbf{B}}{\left [ Co\left ( H_{2}O \right )_{6} \right ]^{2+}}}$
Hence, the correct answer is option (1) | 2023-01-30 11:00:31 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 1, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8802524209022522, "perplexity": 3701.583192985395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-06/segments/1674764499816.79/warc/CC-MAIN-20230130101912-20230130131912-00235.warc.gz"} |
https://stacks.math.columbia.edu/tag/03BB | Lemma 29.44.15. Let $f : X \to Y$ be a morphism of schemes. If $f$ is finite and a monomorphism, then $f$ is a closed immersion.
Proof. This reduces to Algebra, Lemma 10.107.6. $\square$
In your comment you can use Markdown and LaTeX style mathematics (enclose it like $\pi$). A preview option is available if you wish to see how it works out (just click on the eye in the toolbar).
Unfortunately JavaScript is disabled in your browser, so the comment preview function will not work.
In order to prevent bots from posting comments, we would like you to prove that you are human. You can do this by filling in the name of the current tag in the following input field. As a reminder, this is tag 03BB. Beware of the difference between the letter 'O' and the digit '0'. | 2021-02-24 17:54:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 2, "x-ck12": 0, "texerror": 0, "math_score": 0.8351337909698486, "perplexity": 582.7943332355889}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178347293.1/warc/CC-MAIN-20210224165708-20210224195708-00586.warc.gz"} |
https://www.lmfdb.org/ModularForm/GL2/Q/holomorphic/392/1/k/b/ | # Properties
Label 392.1.k.b Level $392$ Weight $1$ Character orbit 392.k Analytic conductor $0.196$ Analytic rank $0$ Dimension $4$ Projective image $D_{4}$ CM discriminant -8 Inner twists $8$
# Related objects
## Newspace parameters
Level: $$N$$ $$=$$ $$392 = 2^{3} \cdot 7^{2}$$ Weight: $$k$$ $$=$$ $$1$$ Character orbit: $$[\chi]$$ $$=$$ 392.k (of order $$6$$, degree $$2$$, not minimal)
## Newform invariants
Self dual: no Analytic conductor: $$0.195633484952$$ Analytic rank: $$0$$ Dimension: $$4$$ Relative dimension: $$2$$ over $$\Q(\zeta_{6})$$ Coefficient field: $$\Q(\sqrt{2}, \sqrt{-3})$$ Defining polynomial: $$x^{4} + 2 x^{2} + 4$$ Coefficient ring: $$\Z[a_1, a_2, a_3]$$ Coefficient ring index: $$1$$ Twist minimal: yes Projective image $$D_{4}$$ Projective field Galois closure of 4.0.2744.1 Artin image $C_3\times D_8$ Artin field Galois closure of $$\mathbb{Q}[x]/(x^{24} - \cdots)$$
## $q$-expansion
Coefficients of the $$q$$-expansion are expressed in terms of a basis $$1,\beta_1,\beta_2,\beta_3$$ for the coefficient ring described below. We also show the integral $$q$$-expansion of the trace form.
$$f(q)$$ $$=$$ $$q -\beta_{2} q^{2} -\beta_{1} q^{3} + ( -1 - \beta_{2} ) q^{4} + \beta_{3} q^{6} - q^{8} + \beta_{2} q^{9} +O(q^{10})$$ $$q -\beta_{2} q^{2} -\beta_{1} q^{3} + ( -1 - \beta_{2} ) q^{4} + \beta_{3} q^{6} - q^{8} + \beta_{2} q^{9} + ( \beta_{1} + \beta_{3} ) q^{12} + \beta_{2} q^{16} + \beta_{1} q^{17} + ( 1 + \beta_{2} ) q^{18} + ( -\beta_{1} - \beta_{3} ) q^{19} + \beta_{1} q^{24} + ( -1 - \beta_{2} ) q^{25} + ( 1 + \beta_{2} ) q^{32} -\beta_{3} q^{34} + q^{36} -\beta_{1} q^{38} -\beta_{3} q^{41} -\beta_{3} q^{48} - q^{50} -2 \beta_{2} q^{51} -2 q^{57} + \beta_{1} q^{59} + q^{64} + ( 2 + 2 \beta_{2} ) q^{67} + ( -\beta_{1} - \beta_{3} ) q^{68} -\beta_{2} q^{72} -\beta_{1} q^{73} + ( \beta_{1} + \beta_{3} ) q^{75} + \beta_{3} q^{76} + ( 1 + \beta_{2} ) q^{81} + ( -\beta_{1} - \beta_{3} ) q^{82} + \beta_{3} q^{83} + ( \beta_{1} + \beta_{3} ) q^{89} + ( -\beta_{1} - \beta_{3} ) q^{96} + \beta_{3} q^{97} +O(q^{100})$$ $$\operatorname{Tr}(f)(q)$$ $$=$$ $$4q + 2q^{2} - 2q^{4} - 4q^{8} - 2q^{9} + O(q^{10})$$ $$4q + 2q^{2} - 2q^{4} - 4q^{8} - 2q^{9} - 2q^{16} + 2q^{18} - 2q^{25} + 2q^{32} + 4q^{36} - 4q^{50} + 4q^{51} - 8q^{57} + 4q^{64} + 4q^{67} + 2q^{72} + 2q^{81} + O(q^{100})$$
Basis of coefficient ring in terms of a root $$\nu$$ of $$x^{4} + 2 x^{2} + 4$$:
$$\beta_{0}$$ $$=$$ $$1$$ $$\beta_{1}$$ $$=$$ $$\nu$$ $$\beta_{2}$$ $$=$$ $$\nu^{2}$$$$/2$$ $$\beta_{3}$$ $$=$$ $$\nu^{3}$$$$/2$$
$$1$$ $$=$$ $$\beta_0$$ $$\nu$$ $$=$$ $$\beta_{1}$$ $$\nu^{2}$$ $$=$$ $$2 \beta_{2}$$ $$\nu^{3}$$ $$=$$ $$2 \beta_{3}$$
## Character values
We give the values of $$\chi$$ on generators for $$\left(\mathbb{Z}/392\mathbb{Z}\right)^\times$$.
$$n$$ $$197$$ $$295$$ $$297$$ $$\chi(n)$$ $$-1$$ $$-1$$ $$\beta_{2}$$
## Embeddings
For each embedding $$\iota_m$$ of the coefficient field, the values $$\iota_m(a_n)$$ are shown below.
For more information on an embedded modular form you can click on its label.
Label $$\iota_m(\nu)$$ $$a_{2}$$ $$a_{3}$$ $$a_{4}$$ $$a_{5}$$ $$a_{6}$$ $$a_{7}$$ $$a_{8}$$ $$a_{9}$$ $$a_{10}$$
67.1
0.707107 − 1.22474i −0.707107 + 1.22474i 0.707107 + 1.22474i −0.707107 − 1.22474i
0.500000 + 0.866025i −0.707107 + 1.22474i −0.500000 + 0.866025i 0 −1.41421 0 −1.00000 −0.500000 0.866025i 0
67.2 0.500000 + 0.866025i 0.707107 1.22474i −0.500000 + 0.866025i 0 1.41421 0 −1.00000 −0.500000 0.866025i 0
275.1 0.500000 0.866025i −0.707107 1.22474i −0.500000 0.866025i 0 −1.41421 0 −1.00000 −0.500000 + 0.866025i 0
275.2 0.500000 0.866025i 0.707107 + 1.22474i −0.500000 0.866025i 0 1.41421 0 −1.00000 −0.500000 + 0.866025i 0
$$n$$: e.g. 2-40 or 990-1000 Significant digits: Format: Complex embeddings Normalized embeddings Satake parameters Satake angles
## Inner twists
Char Parity Ord Mult Type
1.a even 1 1 trivial
8.d odd 2 1 CM by $$\Q(\sqrt{-2})$$
7.b odd 2 1 inner
7.c even 3 1 inner
7.d odd 6 1 inner
56.e even 2 1 inner
56.k odd 6 1 inner
56.m even 6 1 inner
## Twists
By twisting character orbit
Char Parity Ord Mult Type Twist Min Dim
1.a even 1 1 trivial 392.1.k.b 4
3.b odd 2 1 3528.1.bx.b 4
4.b odd 2 1 1568.1.o.b 4
7.b odd 2 1 inner 392.1.k.b 4
7.c even 3 1 392.1.g.b 2
7.c even 3 1 inner 392.1.k.b 4
7.d odd 6 1 392.1.g.b 2
7.d odd 6 1 inner 392.1.k.b 4
8.b even 2 1 1568.1.o.b 4
8.d odd 2 1 CM 392.1.k.b 4
21.c even 2 1 3528.1.bx.b 4
21.g even 6 1 3528.1.g.c 2
21.g even 6 1 3528.1.bx.b 4
21.h odd 6 1 3528.1.g.c 2
21.h odd 6 1 3528.1.bx.b 4
24.f even 2 1 3528.1.bx.b 4
28.d even 2 1 1568.1.o.b 4
28.f even 6 1 1568.1.g.b 2
28.f even 6 1 1568.1.o.b 4
28.g odd 6 1 1568.1.g.b 2
28.g odd 6 1 1568.1.o.b 4
56.e even 2 1 inner 392.1.k.b 4
56.h odd 2 1 1568.1.o.b 4
56.j odd 6 1 1568.1.g.b 2
56.j odd 6 1 1568.1.o.b 4
56.k odd 6 1 392.1.g.b 2
56.k odd 6 1 inner 392.1.k.b 4
56.m even 6 1 392.1.g.b 2
56.m even 6 1 inner 392.1.k.b 4
56.p even 6 1 1568.1.g.b 2
56.p even 6 1 1568.1.o.b 4
168.e odd 2 1 3528.1.bx.b 4
168.v even 6 1 3528.1.g.c 2
168.v even 6 1 3528.1.bx.b 4
168.be odd 6 1 3528.1.g.c 2
168.be odd 6 1 3528.1.bx.b 4
By twisted newform orbit
Twist Min Dim Char Parity Ord Mult Type
392.1.g.b 2 7.c even 3 1
392.1.g.b 2 7.d odd 6 1
392.1.g.b 2 56.k odd 6 1
392.1.g.b 2 56.m even 6 1
392.1.k.b 4 1.a even 1 1 trivial
392.1.k.b 4 7.b odd 2 1 inner
392.1.k.b 4 7.c even 3 1 inner
392.1.k.b 4 7.d odd 6 1 inner
392.1.k.b 4 8.d odd 2 1 CM
392.1.k.b 4 56.e even 2 1 inner
392.1.k.b 4 56.k odd 6 1 inner
392.1.k.b 4 56.m even 6 1 inner
1568.1.g.b 2 28.f even 6 1
1568.1.g.b 2 28.g odd 6 1
1568.1.g.b 2 56.j odd 6 1
1568.1.g.b 2 56.p even 6 1
1568.1.o.b 4 4.b odd 2 1
1568.1.o.b 4 8.b even 2 1
1568.1.o.b 4 28.d even 2 1
1568.1.o.b 4 28.f even 6 1
1568.1.o.b 4 28.g odd 6 1
1568.1.o.b 4 56.h odd 2 1
1568.1.o.b 4 56.j odd 6 1
1568.1.o.b 4 56.p even 6 1
3528.1.g.c 2 21.g even 6 1
3528.1.g.c 2 21.h odd 6 1
3528.1.g.c 2 168.v even 6 1
3528.1.g.c 2 168.be odd 6 1
3528.1.bx.b 4 3.b odd 2 1
3528.1.bx.b 4 21.c even 2 1
3528.1.bx.b 4 21.g even 6 1
3528.1.bx.b 4 21.h odd 6 1
3528.1.bx.b 4 24.f even 2 1
3528.1.bx.b 4 168.e odd 2 1
3528.1.bx.b 4 168.v even 6 1
3528.1.bx.b 4 168.be odd 6 1
## Hecke kernels
This newform subspace can be constructed as the kernel of the linear operator $$T_{3}^{4} + 2 T_{3}^{2} + 4$$ acting on $$S_{1}^{\mathrm{new}}(392, [\chi])$$.
## Hecke characteristic polynomials
$p$ $F_p(T)$
$2$ $$( 1 - T + T^{2} )^{2}$$
$3$ $$4 + 2 T^{2} + T^{4}$$
$5$ $$T^{4}$$
$7$ $$T^{4}$$
$11$ $$T^{4}$$
$13$ $$T^{4}$$
$17$ $$4 + 2 T^{2} + T^{4}$$
$19$ $$4 + 2 T^{2} + T^{4}$$
$23$ $$T^{4}$$
$29$ $$T^{4}$$
$31$ $$T^{4}$$
$37$ $$T^{4}$$
$41$ $$( -2 + T^{2} )^{2}$$
$43$ $$T^{4}$$
$47$ $$T^{4}$$
$53$ $$T^{4}$$
$59$ $$4 + 2 T^{2} + T^{4}$$
$61$ $$T^{4}$$
$67$ $$( 4 - 2 T + T^{2} )^{2}$$
$71$ $$T^{4}$$
$73$ $$4 + 2 T^{2} + T^{4}$$
$79$ $$T^{4}$$
$83$ $$( -2 + T^{2} )^{2}$$
$89$ $$4 + 2 T^{2} + T^{4}$$
$97$ $$( -2 + T^{2} )^{2}$$ | 2021-01-26 03:44:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9856958985328674, "perplexity": 14691.830240136916}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-04/segments/1610704795033.65/warc/CC-MAIN-20210126011645-20210126041645-00455.warc.gz"} |
http://physics.stackexchange.com/questions/47154/ball-rolling-in-a-parabolic-bowl/47161 | # Ball Rolling in a Parabolic Bowl
I encountered a physics problem which inquired about a ball rolling inside a parabolic bowl (i.e. a bowl where any cross section through the vertex would make a parabolic shape given by $y = kx^2$). The ball was to be released from some initial height $z_0$ and allowed to roll back and forth inside the bowl.
At first, I thought that the motion would be simple harmonic for any displacement, since $U(x) = mgh = mgkx^2$, and simple harmonic motion is characterized by a potential energy which is proportional to the square of the distance from some equilibrium point. However, the answer key said that the motion was not truly simple harmonic (and the rolling ball only appeared to be so for small amplitudes of motion).
Why is motion with the a potential energy proportional to the square of displacement from equilibrium not simple harmonic in this case?
EDIT: To clarify, the "ball" should have really been a point mass.
-
not sure what the answer sheet refers to. a potential of this form surely is a simple harmonic. a nitpicky argument would be that the gravitational constant is height dependent and only constant for low heights compared to the radius of the earth. – luksen Dec 19 '12 at 0:20
@luksen No, it is not harmonic, because the displacement of the ball is the full arclength, not the horizontal projection of the displacement. – Mark Eichenlaub Dec 19 '12 at 3:04
Although the potential energy has the same form, this is not a simple harmonic motion basically because in simple harmonic oscillator the ball moves only on the x-axis, while on your problem it's forced by the bowl to move on the z-axis as well. If we write down the Lagrangian for your problem: $$L = T - U = \frac{1}{2}m(\dot{x}^2 + \dot{z}^2) - mgz = \frac{1}{2}m(\dot{x}^2 + (2kx\dot{x})^2) - mgkx^2 =$$ $$= \frac{1}{2}m\dot{x}^2(1 + 4k^2x^2) - mgkx^2$$ we can see from Euler-Lagrange equations: $$\frac{\partial L}{\partial x} = \frac{d}{dt}\frac{\partial L}{\partial \dot{x}}$$ that the equation of motion describing your problem is: $$(1+4k^2x^2)\ddot{x}+4k^2\dot{x}^2x+2gkx=0$$ which is quite different from the one for simple harmonic motion.
Edit: for simplicity I've assumed it's a point-body with no moment of inertia rather than an actual ball that rolls inside the bawl. In the second case, it requires us to take the ball's moment of inertia into account, which will result in a bit different equation of motion.
-
Indeed, it is a point body. I'm not very familiar with Lagrangian mechanics (still a high school student), but I'll look into it to see if I can understand your explanation better. – Shivam Sarodia Dec 19 '12 at 2:58
Aside from the rotation, we also must account for the ball's finite size in finding it potential energy. When the ball is on a slope, the vertical displacement from the point of contact to the center of the ball is less than when the ball is on the flat part at bottom. – Mark Eichenlaub Dec 19 '12 at 3:06
+1 for addressing the moment of inertia point. I was just thinking that even for a bowl shape that would generate SHM (for a point mass), a physical ball would still not execute SHM. – Kyle Oman Dec 19 '12 at 3:07
@Draksis It can, but the kinetic energy is not $1/2 m \dot{x}^2$. – Mark Eichenlaub Dec 19 '12 at 4:43
@Draksis Think of it this way: when the ball moves from point $x_1$ to point $x_2$, it gains kinetic energy equal to $\Delta T = U(x_1)-U(x_2) = mgk(x_{1}^{2}-x_{2}^{2})$. If the movement is one dimensional, all of this change in kinetic energy will be expressed as a change in speed on the x-direction alone. If the movement is 2-dimensional, on the other hand, some of it will be expressed as a change in speed on the z-direction as well. – Andrey B Dec 19 '12 at 5:32
I would like to offer another Lagrangian approach, that tries to mirror the typical pendulum problem in lagrangian mechanics. I just thought of this, so I hope this is right...
A typical problem in lagrangian mechanics is to solve the pendulum with a constant string length, $l$. The path of the ball at the end of the string (in 2D) is a circle, this is a path of constant radius. We exemplify that we know this by first writing $$L = T-U = \frac{1}{2}m(\dot{x}^2 + \dot{y}^2) + mgy$$ and then making the change to polar coordinates using $x = r\sin\phi$ and $y = r \cos\phi$. Now lets do this same thing, except our ball is not traveling curves of constant radius, it is traveling curves of constant parabola. Then we will use the change to parabolic coordinates, $x = \sigma\tau$ and $y = \frac{1}{2}(\tau^2 - \sigma^2)$. Here, curves of constant $\sigma$ are upward facing parabolae, so we let our $\sigma$ be a constant. With this our Lagrangian becomes $$L = \frac{1}{2}m(\sigma^2+\tau^2)\dot{\tau}^2 - \frac{1}{2}mg(\tau^2 - \sigma^2)$$ Then $$\frac{\partial L}{\partial \dot{\tau}}=m(\sigma^2 + \tau^2)\dot{\tau}\implies\frac{d}{dt}\frac{\partial L}{\partial \dot{\tau}}=m[2\tau\dot{\tau}^2+(\sigma^2+\tau^2)\ddot{\tau}]$$ and $$\frac{\partial L}{\partial \tau}=m\tau\dot{\tau}^2-mg\tau$$ so all together becomes $$m[2\tau\dot{\tau}^2+(\sigma^2+\tau^2)\ddot{\tau}]-m\tau\dot{\tau}^2+mg\tau=0$$ $$\rightarrow \ddot{\tau}+\frac{g\tau+\tau\dot{\tau}^2}{(\sigma^2+\tau^2)}=0$$ There is also a constraint equation related to constant $\sigma$, $\;f=\sigma-a=0$ which can be used with a lagrange multiplier to extract the normal force on the ball. Hope this helps/is right.
-
About the units at the final expression... $g\tau$ has units of $length^{1.5} \cdot time^{-2}$ while $\tau\dot{\tau}$ has units of $length \cdot time^{-1}$, so I guess there is a mistake somewhere. – Andrey B Dec 19 '12 at 4:06
Thanks :), I think my $\partial L /\partial \tau$ was wrong, I think I fixed it. – kηives Dec 19 '12 at 4:11 | 2015-08-04 17:59:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.85009765625, "perplexity": 262.65805960598124}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-32/segments/1438042991076.30/warc/CC-MAIN-20150728002311-00327-ip-10-236-191-2.ec2.internal.warc.gz"} |
http://support.markedapp.com/discussions/questions/137-integration-in-mailapp | # Integration in Mail.app
#### Herwig Henseler
30 May, 2012 11:59 AM
Is there an easy way to integrate Marked into Mail.app? That is:
• I write a new Mail in Mail.app and I want to have it preview in Marked while writing.
• I receive a Mail with markdown and want to display it in Marked?
In know there are Plugins for Mail.app, would it be possible to write one for that purpose?
Regards,
Herwig
1. Support Staff Posted by Brett on 30 May, 2012 12:49 PM
No, neither are currently possible. You can use external files to compose and then copy RTF or the original text, and the upcoming version c
-Brett (on iPhone)
2. Support Staff Posted by Brett on 30 May, 2012 01:31 PM
Because Marked needs actual files on the disk to work with, this isn't currently possible. I may work out a means to do it with AppleScript someday, but it's far more complex than working with VoodooPad or Scrivener, for example.
For right now you have to use external files for composing. I keep one scratch file handy and re-use it for things like that, just always having it open in Marked and then hitting ⌘E to edit in my external editor. The next version of Marked does support Clipboard Preview, which basically makes it really fast to open a scratch file with the clipboard contents and preview it, optionally using ⌘E to edit directly.
You can also play with QuickCursor (I think it's on the App Store now) for quickly editing a Mail window in a scratch file, which could then be opened in Marked, but I'm not sure how well that would go without trying it out.
3. Posted by Herwig Henseler on 30 May, 2012 02:22 PM
Thanks, Brett for answering so fast,
the clipboard trick sounds nice, I'm looking forward to the next version of Marked.
Regarding "Marked needs actual files on the disk": Mail.app does this. If I start writing a mail and do a cmd-s, then a real file is saved to disk. The trick is to find out the filename, which is buried deep in Library/Mail/some/long/obscure/path/with/random/stuff.emlx
Regards,
Herwig
4. Support Staff Posted by Brett on 30 May, 2012 02:29 PM
Thanks for the tip, I didn't realize drafts were saved to disk. I'll look into it.
Formatting help / Preview (switch to plain text) No formatting (switch to Markdown)
### »
#### Attached Files
You can attach files up to 10MB
If you don't have an account yet, we need to confirm you're human and not a machine trying to post spam.
# Keyboard shortcuts
### Generic
? Show this help Blurs the current field
### Comment Form
r Focus the comment reply box Submit the comment
You can use Command ⌘ instead of Control ^ on Mac
## Recent Discussions
13 Feb, 2020 09:22 PM Markdown Formatting 11 Feb, 2020 11:53 AM Custom themes not working 10 Feb, 2020 06:48 PM KaTex Preview Update 06 Feb, 2020 04:32 PM Using mermaid in Marked 2 29 Jan, 2020 12:45 AM MathJax rendering errors - Scrivener
## Recent Articles
Using JavaScript in Marked Custom CSS: Writing custom CSS for Marked License code has already been utilized Highlight sentences longer than a certain number of words How do I retrieve a lost license (direct version) | 2020-02-19 20:50:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.17182765901088715, "perplexity": 4582.16499230764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-10/segments/1581875144167.31/warc/CC-MAIN-20200219184416-20200219214416-00288.warc.gz"} |
https://stats.stackexchange.com/questions/116374/glivenko-cantelli-theorem | # Glivenko-Cantelli Theorem
The Glivenko-Cantelli Theorem states that if $$F$$ is a distribution function, $$X_1,\dots,X_n \sim F$$, and $$\hat{F}_n$$ is the empirical distribution function, then $$\sup_{x \in \mathbb{R}} \lvert \hat{F}_n(x) - F(x) \rvert \xrightarrow{a.s.} 0 . \tag{1}$$
How does this differ from simply stating the following? $$\hat{F}_n(x) \xrightarrow{a.s.} F(x) \tag{2}$$
Using the definition of convergence almost surely, from (1): \begin{align*} \mathbb{P} \left( \lim_{n\rightarrow\infty} \left\lvert \sup_{x \in \mathbb{R}} \lvert \hat{F}_n(x) - F(x) \rvert \right\rvert = 0 \right) &= 1 \\ \mathbb{P} \left( \lim_{n\rightarrow\infty} \sup_{x \in \mathbb{R}} \lvert \hat{F}_n(x) - F(x) \rvert = 0 \right) &= 1 \tag{1a} \end{align*}
Using the definition of convergence almost surely, from (2): \begin{align*} \mathbb{P} \left( \lim_{n\rightarrow\infty} \left\lvert \hat{F}_n(x) - F(x) \right\rvert = 0 \right) = 1 \tag{2a} \end{align*}
To me, it seems that (1a) and (2a) are equivalent statements because of the least upper bound, and thus (1) and (2) are equivalent statements. But I have a feeling that I'm missing a subtle difference since otherwise I would think the Theorem would just be stated the simpler way (Equation (2)).
The differerence is in uniform convergence. 1 is saying that for all x there is a single n such that error is less than epsilon ( uniform). The other one is saying for each x there is a large enough n that error is less than epsilon(pointwise). An example from wikipedia of pointwise but not uniform convergence is $f_n (x)=x^n$ on $0\le x\le 1$. You need larger and larger n as you get closer to $x=1$.
• Hmm. Did you really mean for that to end with "closer to $x=0$"? I'd have thought that the need for larger $n$ was near $x=1$. Sep 22, 2014 at 22:04
It is perhaps worth noting that pointwise convergence of $$\hat F_n(x)$$ to $$F(x)$$ already implies uniform convergence where $$F$$ is continuous (because the cdfs are bounded and monotone). More precisely, if $$[a,b]$$ is an interval that does not contain any discontinuities of $$F$$, the convergence is uniform on $$[a,b]$$ -- and that's still true for $$a=-\infty$$ or $$b=\infty$$.
The conclusion of the Glivenko-Cantelli theorem is stronger: that the convergence is uniform even at discontinuities, and this is important. By contrast, if $$\hat F_n$$ are a sequence of empirical CDFs from distributions $$F_n$$ converging in distribution to $$F$$, we have pointwise convergence of $$\hat F_n(x)$$ to $$F(x)$$, and uniform convergence on intervals with no discontinuities, but not uniform convergence everywhere. | 2022-05-29 03:34:47 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 20, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9985657334327698, "perplexity": 236.7596049237382}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652663035797.93/warc/CC-MAIN-20220529011010-20220529041010-00418.warc.gz"} |
https://www.quizover.com/online/course/an-algorithm-to-implement-a-boolean-function-using-only-nand-s-or | # An algorithm to implement a boolean function using only nand's or
Page 1 / 3
An algorithm to implement a boolean function as a gate network using only NAND's or only NOR's is presented. Any boolean function can be implemented straightforwardly using AND's, OR's, and NOT gates. Using DeMorgan's Law in different forms gates in the network can be successively converted to use only NAND's or only NOR's.
NAND's and NOR's are the most common basic logic circuit element in use because they are simpler to build than AND and OR gates, and because each is logically complete . Many logical functions are expressed using AND's, OR's, and Inverters (NOT), however, because an implementing circuit can be constructed straightforwardly from the truth table expression of a logical function and because Karnaugh Map's can be used to minimize AND, OR, INVERTER networks.
This document is adapted, with permission, from algorithms and examples given in Dr. Jump's Elec326 course notes.
Below a simple algorithm is given for converting a network with AND gates, OR gates and INVERTERS to one with NAND gates or NOR gates exclusively. First the boolean function is represented using AND's, OR's, and NOT gates. Then, using DeMorgan's Law in various forms, the AND, OR, INVERTER network is converted step-by-step to use only NAND gates or only NOR gates.
OR to NAND AND to NOR $a\vee b\equiv ¬\left(¬a\wedge ¬b\right)$ $a\wedge b\equiv ¬\left(¬a\vee ¬b\right)$
## Conversion algorithm
1. Draw AND, OR, INVERTER implementation. First draw out an implementation of the boolean function using AND gates, OR gates and INVERTERS. Any implementation that uses only those three gate types will work. One way to implement a boolean function using AND's, OR's and INVERTERS is to build the Disjunctive Normal Form of the boolean function from the truth table that describes the function. Disjunctive Normal Form, is also called Sum of Products form. Propositional Logic: Normal Forms gives a succinct treatment of normal forms and of how to go from a truth table to Disjunctive Normal Form.
2. Apply DeMorgan's Law. Apply DeMorgan's Law to the circuit by using the equivalences in the first two rows of the Figure above. To create a NAND only circuit, use the transforms in the left box, and for a NOR only circuit use the transforms in the right-hand box.
3. Remove redundant inverters: Any time that two inverters are in series (an inverted output goes directly in to an inverted input), remove both of them, since they cancel each other out.
4. Replace remaining inverters. Replace any remaining inverters with the equivalent NAND or NOR implementation (the third row of the Figure).
Note that in step c. the final elimination of inverters isn't quite done since B and D are inverted into one of the NAND's.
Do somebody tell me a best nano engineering book for beginners?
what is fullerene does it is used to make bukky balls
are you nano engineer ?
s.
what is the Synthesis, properties,and applications of carbon nano chemistry
so some one know about replacing silicon atom with phosphorous in semiconductors device?
Yeah, it is a pain to say the least. You basically have to heat the substarte up to around 1000 degrees celcius then pass phosphene gas over top of it, which is explosive and toxic by the way, under very low pressure.
Harper
how to fabricate graphene ink ?
for screen printed electrodes ?
SUYASH
What is lattice structure?
of graphene you mean?
Ebrahim
or in general
Ebrahim
in general
s.
Graphene has a hexagonal structure
tahir
On having this app for quite a bit time, Haven't realised there's a chat room in it.
Cied
what is biological synthesis of nanoparticles
what's the easiest and fastest way to the synthesize AgNP?
China
Cied
types of nano material
I start with an easy one. carbon nanotubes woven into a long filament like a string
Porter
many many of nanotubes
Porter
what is the k.e before it land
Yasmin
what is the function of carbon nanotubes?
Cesar
I'm interested in nanotube
Uday
what is nanomaterials and their applications of sensors.
what is nano technology
what is system testing?
preparation of nanomaterial
Yes, Nanotechnology has a very fast field of applications and their is always something new to do with it...
what is system testing
what is the application of nanotechnology?
Stotaw
In this morden time nanotechnology used in many field . 1-Electronics-manufacturad IC ,RAM,MRAM,solar panel etc 2-Helth and Medical-Nanomedicine,Drug Dilivery for cancer treatment etc 3- Atomobile -MEMS, Coating on car etc. and may other field for details you can check at Google
Azam
anybody can imagine what will be happen after 100 years from now in nano tech world
Prasenjit
after 100 year this will be not nanotechnology maybe this technology name will be change . maybe aftet 100 year . we work on electron lable practically about its properties and behaviour by the different instruments
Azam
name doesn't matter , whatever it will be change... I'm taking about effect on circumstances of the microscopic world
Prasenjit
how hard could it be to apply nanotechnology against viral infections such HIV or Ebola?
Damian
silver nanoparticles could handle the job?
Damian
not now but maybe in future only AgNP maybe any other nanomaterials
Azam
Hello
Uday
I'm interested in Nanotube
Uday
this technology will not going on for the long time , so I'm thinking about femtotechnology 10^-15
Prasenjit
can nanotechnology change the direction of the face of the world
At high concentrations (>0.01 M), the relation between absorptivity coefficient and absorbance is no longer linear. This is due to the electrostatic interactions between the quantum dots in close proximity. If the concentration of the solution is high, another effect that is seen is the scattering of light from the large number of quantum dots. This assumption only works at low concentrations of the analyte. Presence of stray light.
how did you get the value of 2000N.What calculations are needed to arrive at it
Privacy Information Security Software Version 1.1a
Good
Berger describes sociologists as concerned with
Got questions? Join the online conversation and get instant answers! | 2018-09-26 08:56:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 2, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6104040741920471, "perplexity": 2158.7807803022606}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267164469.99/warc/CC-MAIN-20180926081614-20180926102014-00027.warc.gz"} |
http://ffmpeg.org/pipermail/ffmpeg-devel/2008-May/053782.html | # [FFmpeg-devel] [PATCH] Move pitch vector interpolation filter to acelp_filters.
Michael Niedermayer michaelni
Mon May 19 15:44:14 CEST 2008
On Mon, May 19, 2008 at 06:20:20PM +0700, Vladimir Voroshilov wrote:
>
>
> 2008/5/19 Vladimir Voroshilov <voroshil at gmail.com>:
> > 2008/5/19 Vladimir Voroshilov <voroshil at gmail.com>:
> >
> >
> > > 2008/5/18 Michael Niedermayer <michaelni at gmx.at>:
> > >
> > >
> > > > On Sun, May 18, 2008 at 12:23:32AM +0700, Vladimir Voroshilov wrote:
> > > >
>
> [...]
>
> > > > This code is written for a filter with an odd number of coeffs (1,3,5,...) but
> > > > all but the first filter for which it is used have an even number thus they
> > > > end up needing these 0 coeffs.
> > > > The code should be changed so it expects even number of coeff filters and the
> > > > unneeded 0 elements should be removed from the tables
> > > >
> > > > anyway, the obvious implementation is:
> > > >
> > > > for(n=0; n<subframe_size; n++){
> > > > int idx = precision>>1;
> > > > int v = 0x4000;
> > > > for(i=1; i<filter_length+1; i++){
> > > > v += in[n - pitch_delay_int - i] * filter_coeffs[idx - frac];
> > > > v += in[n - pitch_delay_int + i] * filter_coeffs[idx + frac];
> > > > idx += precision;
> > > > }
> > > > out[n] = av_clip_int16(v >> 15);
> > >
> > > First, I can't see here equivalent (due to i>0) for
> > >
> > > int v = in[n - pitch_delay_int] * filter_coeffs[FFABS(pitch_delay_frac)];
> > >
> > > Second, your code 'as is' gives me PSNR 10 and i'm afraid it is totally wrong:
> > > you have shifted filter represented in filter_coeffs by precision/2
> > > as respect signal. This will obviously produce different result.
> >
> > Sorry, i overreacted with 'totally wrong'.
> > I'm still investingating problem, but already found several
> > inconsistences between doxygen comment to the filter and it's usage
> > (like passing [-5; 0] value
> > in pitch_frac_delay).
> >
> > Code is not looks like yours yet, but is going to that direction.
>
> finished.
> Here is result.
> It looks a bit hackish but i don't see yet how to make it cleaner.
[...]
>
> +const int16_t ff_acelp_interp_filter[66] =
> +{ /* (0.15) */
> + 29443, 28346, 25207, 20449, 14701, 8693,
> + 3143, -1352, -4402, -5865, -5850, -4673,
> + -2783, -672, 1211, 2536, 3130, 2991,
> + 2259, 1170, 0, -1001, -1652, -1868,
> + -1666, -1147, -464, 218, 756, 1060,
> + 1099, 904, 550, 135, -245, -514,
> + -634, -602, -451, -231, 0, 191,
> + 308, 340, 296, 198, 78, -36,
> + -120, -163, -165, -132, -79, -19,
> + 34, 73, 91, 89, 70, 38,
> +};
you removed one 0 too much
const int16_t ff_acelp_interp_filter[66] =
{ /* (0.15) */
29443, 28346, 25207, 20449, 14701, 8693,
3143, -1352, -4402, -5865, -5850, -4673,
-2783, -672, 1211, 2536, 3130, 2991,
2259, 1170, 0, -1001, -1652, -1868,
-1666, -1147, -464, 218, 756, 1060,
1099, 904, 550, 135, -245, -514,
-634, -602, -451, -231, 0, 191,
308, 340, 296, 198, 78, -36,
-120, -163, -165, -132, -79, -19,
34, 73, 91, 89, 70, 38,
0,
};
> +
> +void ff_acelp_interpolate_pitch_vector(
> + int16_t* out,
> + const int16_t* in,
> + const int16_t* filter_coeffs,
> + int precision,
> + int pitch_delay_int,
> + int pitch_delay_frac,
> + int filter_length,
> + int subframe_size)
> +{
As this really is a generic interpolation function it should be
void ff_acelp_interpolate(
int16_t* out,
const int16_t* in,
const int16_t* filter_coeffs,
int precision,
int frac,
int filter_length,
int out_len)
{
note especially the removed pitch_delay_int it served no purpose anyway.
[...]
> +/**
> + * \brief Decode the adaptive-codebook (pitch) vector (4.1.3 of G.729).
Its a generic interpolation function ...
> + * \param out [out] buffer to store decoded vector
Its a generic interpolation function ...
> + * \param in input data
> + * \param filter_coeffs interpolation filter coefficients (0.15)
> + * \param precision precision of passed interpolation filter
unclear, precission could be as well the number of bits in each coeff
> + * \param pitch_delay_int pitch delay, integer part
> + * \param pitch_delay_frac pitch delay, fractional part [0..5]
> + * \param filter_length filter length
> + * \param subframe_length subframe length
Its a generic interpolation function, theres no "frame"
> + *
> + * The routine assumes the following order of fractions (X - integer delay):
> + *
> + * 1/3 precision: X 1/3 2/3 X 1/3 2/3 X
> + * 1/6 precision: X 1/6 2/6 3/6 4/6 5/6 X 1/6 2/6 3/6 4/6 5/6 X
> + *
> + * The routine can be used for 1/3 precision, too, by
> + * passing 2*pitch_delay_frac as third parameter.
> + */
This is at least missing a clear specification of the contents of filter_coeffs
[...]
-- | 2014-09-19 09:55:37 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8091030716896057, "perplexity": 10488.986964921129}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-41/segments/1410657131238.51/warc/CC-MAIN-20140914011211-00117-ip-10-196-40-205.us-west-1.compute.internal.warc.gz"} |
https://chemistry.stackexchange.com/questions/111262/can-ethylene-glycol-enhance-the-acidity-of-orthoboric-acid | # Can ethylene glycol enhance the acidity of orthoboric acid?
My textbook says:
Boric acid is a weak monobasic acid but on addition of certain organic polyhydroxy compounds, such as mannitol, glycerol, dextrose or invert sugar, it is transformed into a relatively stronger acid. Ethylene glycol cannot give this test (reason is uncertain).
I do know that cis-diols form a boron-ether compound with the $$\ce{[B(OH)4]-}$$, leading to an increase in acidity. But is it true that ethylene glycol is exceptional? And is there any possible reason for it?
• Maybe it is about the molecular geometries. I can't run the necessary calculations until at least the weekend, though. Would be interesting to know how buta-2,3-diol would behave (but then again, maybe it does not dissolve in water). – TAR86 Mar 20 '19 at 15:17
• But a question in iit jee advanced(2014) says that on addition of ethylene glycol too the acidity of orthoboric acid is increased. ques- The correct statement(s) for orthoboric acid is/are (A) It behaves as a weak acid in water due to self ionization. (B) Acidity of its aqueous solution increases upon addition of ethylene glycol. (C) It has a three dimensional structure due to hydrogen bonding. (D) It is a weak electrolyte in water. ans- B,D (official) i am also confused now beacuse its opposite is written in j d lee as per your question – Tanmay Singh Apr 20 '19 at 14:36
• But in my institutes exercise book the question is marked as of jee 2014 but only D option is mentioned as an answer – Harsh jain May 13 '19 at 13:29
• Yes it has been written in jd Lee that ethylene glycol don't give this test – Kuvcharu May 9 at 1:42
• Will somebody please make a solution of boric acid, check its pH, then add some ethylene glycol to see if the pH decreases, and by how much? It would save time for so many people checking their textbooks. – James Gaidis May 9 at 13:54 | 2021-06-16 07:14:08 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 1, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4214923679828644, "perplexity": 2565.3661201764}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487622234.42/warc/CC-MAIN-20210616063154-20210616093154-00202.warc.gz"} |
http://shelah.logic.at/972_abs.html | ### Monotone hulls for ${\mathcal N}\cap {\mathcal M}$
by Roslanowski and Shelah. [RoSh:972]
Periodica Math Hungarica, 2014
Using the method of decisive creatures ([KrSh:872]) we show the consistency of there is no increasing omega_2 --chain of Borel sets and non (N)= non (M)= omega_2=2^omega''. Hence, consistently, there are no monotone hulls for the ideal M cap N . This answers Balcerzak and Filipczak. Next we use FS iteration with partial memory to show that there may be monotone Borel hulls for the ideals M, N even if they are not generated by towers.
Back to the list of publications | 2018-12-11 16:40:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9137507677078247, "perplexity": 3310.370194028887}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823657.20/warc/CC-MAIN-20181211151237-20181211172737-00292.warc.gz"} |
http://louistiao.me/posts/numpy-mgrid-vs-meshgrid/ | # NumPy mgrid vs. meshgrid
The meshgrid function is useful for creating coordinate arrays to vectorize function evaluations over a grid. Experienced NumPy users will have noticed some discrepancy between meshgrid and the mgrid, a function that is used just as often, for exactly the same purpose. What is the discrepancy, and why does a discrepancy even exist when "there should be one - and preferably only one - obvious way to do it." [1]
First, recall that meshgrid behaves as follows:
>>> import numpy as np
>>> x1, y1 = np.meshgrid(np.arange(1, 11, 2), np.arange(-12, -3, 3))
>>> x1 # 3x5 array
array([[1, 3, 5, 7, 9],
[1, 3, 5, 7, 9],
[1, 3, 5, 7, 9]])
>>> y1 # 3x5 array
array([[-12, -12, -12, -12, -12],
[ -9, -9, -9, -9, -9],
[ -6, -6, -6, -6, -6]])
If you have used NumPy for a while or are familiar enough with how Broadcasting works, you will have realized that meshgrid is actually superfluous for NumPy arrays, and that it is actually just an implementation of MATLAB's meshgrid, probably to cater to users coming from a MATLAB background.
Observe the behavior of mgrid, which essentially returns the transpose of meshgrid:
>>> x2, y2 = np.mgrid[1:11:2, -12:-3:3]
>>> x2 # 5x3 array
array([[1, 1, 1],
[3, 3, 3],
[5, 5, 5],
[7, 7, 7],
[9, 9, 9]])
>>> y2 # 5x3 array
array([[-12, -9, -6],
[-12, -9, -6],
[-12, -9, -6],
[-12, -9, -6],
[-12, -9, -6]])
>>> np.all(x1 == x2.T)
True
>>> np.all(y2 == y2.T)
True
Note this this order is actually more natural, since mgrid just fleshes out the open (not fleshed out) grids given by ogrid by broadcasting them to form dense grids, i.e.
>>> a, b = np.ogrid[1:11:2, -12:-3:3]
>>> a # 5x1 array
array([[1],
[3],
[5],
[7],
[9]])
>>> b # 1x3 array
array([[-12, -9, -6]])
and the 5x1 array a is broadcasted with the 1x3 array b to form two 5x3 arrays
>>> x2, y2 = np.broadcast_arrays(a, b)
>>> x2 # 5x3 array
array([[1, 1, 1],
[3, 3, 3],
[5, 5, 5],
[7, 7, 7],
[9, 9, 9]])
>>> y2 # 5x3 array
array([[-12, -9, -6],
[-12, -9, -6],
[-12, -9, -6],
[-12, -9, -6],
[-12, -9, -6]])
which behaves exactly the same way as mgrid. Note that you seldom have to broadcast arrays explicitly, let alone use functions like mgrid or meshgrid, since all arithmetic operations on NumPy arrays already perform broadcasting implicitly. E.g.
>>> x2 + y2 # adding two 5x3 arrays
array([[-11, -8, -5],
[ -9, -6, -3],
[ -7, -4, -1],
[ -5, -2, 1],
[ -3, 0, 3]])
>>> a + b # adding a 5x1 array to a 1x3 array
array([[-11, -8, -5],
[ -9, -6, -3],
[ -7, -4, -1],
[ -5, -2, 1],
[ -3, 0, 3]])
Finally, if for some reason you must have output like that of meshgrid, just use mgrid with the arguments and unpacking targets reversed.
>>> y3, x3 = np.mgrid[-12:-3:3, 1:11:2]
>>> x3 # 3x5 array
array([[1, 3, 5, 7, 9],
[1, 3, 5, 7, 9],
[1, 3, 5, 7, 9]])
>>> y3 # 3x5 array
array([[-12, -12, -12, -12, -12],
[ -9, -9, -9, -9, -9],
[ -6, -6, -6, -6, -6]])
>>> np.all(x1 == x3)
True
>>> np.all(y1 == y3)
True
## Uniformly-spaced meshgrids
At the very beginning, we created a meshgrid by specifying ranges and step lengths using np.arange. Suppose instead we just want to specify the number of evenly-spaced points we'd like the meshgrid to include between some ranges. In other words, we're instead interested in using np.linspace instead of np.arange:
>>> x1, y1 = np.meshgrid(np.linspace(-5, 5, 5),
... np.linspace(-12, -3, 3))
>>> x1 # 3x5 array
array([[-5. , -2.5, 0. , 2.5, 5. ],
[-5. , -2.5, 0. , 2.5, 5. ],
[-5. , -2.5, 0. , 2.5, 5. ]])
>>> y1 # 3x5 array
array([[-12. , -12. , -12. , -12. , -12. ],
[ -7.5, -7.5, -7.5, -7.5, -7.5],
[ -3. , -3. , -3. , -3. , -3. ]])
The mgrid allows you to specify this by using a complex number (e.g. 5j) as a step length. When the step length is a complex number, the integer part of its magnitude is interpreted as specifying the number of points to create between the start and stop values, where the stop value is inclusive. Hence, to achieve the above using mgrid:
>>> y3, x3 = np.mgrid[-12:-3:3j,-5:5:5j]
>>> np.all(x1 == x3)
True
>>> np.all(y1 == y3)
True
In summary, while the mgrid function is often overlooked, it is very general and powerful, and subsumes many other functions in NumPy as special cases. It is related to the ogrid, and demonstrates the flexibility of NumPy Broadcasting.
[1] PEP20 - The Zen of Python (https://www.python.org/dev/peps/pep-0020/) | 2017-12-13 13:07:24 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7452428340911865, "perplexity": 4731.073888988166}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-51/segments/1512948523222.39/warc/CC-MAIN-20171213123757-20171213143757-00407.warc.gz"} |
http://icpc.njust.edu.cn/Problem/Zju/3829/ | # Known Notation
Time Limit: 2 Seconds
Memory Limit: 65536 KB
## Description
Do you know reverse Polish notation (RPN)? It is a known notation in the area of mathematics and computer science. It is also known as postfix notation since every operator in an expression follows all of its operands. Bob is a student in Marjar University. He is learning RPN recent days.
To clarify the syntax of RPN for those who haven't learnt it before, we will offer some examples here. For instance, to add 3 and 4, one would write "3 4 +" rather than "3 + 4". If there are multiple operations, the operator is given immediately after its second operand. The arithmetic expression written "3 - 4 + 5" in conventional notation would be written "3 4 - 5 +" in RPN: 4 is first subtracted from 3, and then 5 added to it. Another infix expression "5 + ((1 + 2) × 4) - 3" can be written down like this in RPN: "5 1 2 + 4 × + 3 -". An advantage of RPN is that it obviates the need for parentheses that are required by infix.
In this problem, we will use the asterisk "*" as the only operator and digits from "1" to "9" (without "0") as components of operands.
You are given an expression in reverse Polish notation. Unfortunately, all space characters are missing. That means the expression are concatenated into several long numeric sequence which are separated by asterisks. So you cannot distinguish the numbers from the given string.
You task is to check whether the given string can represent a valid RPN expression. If the given string cannot represent any valid RPN, please find out the minimal number of operations to make it valid. There are two types of operation to adjust the given string:
1. Insert. You can insert a non-zero digit or an asterisk anywhere. For example, if you insert a "1" at the beginning of "2*3*4", the string becomes "12*3*4".
2. Swap. You can swap any two characters in the string. For example, if you swap the last two characters of "12*3*4", the string becomes "12*34*".
The strings "2*3*4" and "12*3*4" cannot represent any valid RPN, but the string "12*34*" can represent a valid RPN which is "1 2 * 34 *".
## Input
There are multiple test cases. The first line of input contains an integer T indicating the number of test cases. For each test case:
There is a non-empty string consists of asterisks and non-zero digits. The length of the string will not exceed 1000.
## Output
For each test case, output the minimal number of operations to make the given string able to represent a valid RPN.
## Sample Input
3
1*1
11*234**
*
## Sample Output
1
0
2
None
None | 2020-10-31 21:48:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5880151987075806, "perplexity": 825.6913272669296}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107922463.87/warc/CC-MAIN-20201031211812-20201101001812-00289.warc.gz"} |
https://specutils.readthedocs.io/en/latest/api/specutils.analysis.equivalent_width.html | # equivalent_width¶
specutils.analysis.equivalent_width(spectrum, continuum=1, regions=None)[source]
Computes the equivalent width of a region of the spectrum.
Applies to the whole spectrum by default, but can be limited to a specific feature (like a spectral line) if a region is given.
Parameters: spectrum : Spectrum1D The spectrum object overwhich the equivalent width will be calculated. regions: ~specutils.utils.SpectralRegion or list of ~specutils.utils.SpectralRegion Region within the spectrum to calculate the gaussian sigma width. If regions is None, computation is performed over entire spectrum. continuum : 1 or Quantity, optional Value to assume is the continuum level. For the special value 1 (without units), 1 in whatever the units of the spectrum.flux will be assumed, otherwise units are required and must be the same as the spectrum.flux. Equivalent width calculation, in the same units as the spectrum’s spectral_axis.
Notes
To do a standard equivalent width measurement, the spectrum should be continuum-normalized to whatever continuum is before this function is called. | 2018-09-23 23:51:05 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5047258138656616, "perplexity": 2194.098703796915}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-39/segments/1537267159938.71/warc/CC-MAIN-20180923232129-20180924012529-00334.warc.gz"} |
https://deepai.org/publication/two-harmonic-jacobi-davidson-methods-for-computing-a-partial-generalized-singular-value-decomposition-of-a-large-matrix-pair | # Two harmonic Jacobi–Davidson methods for computing a partial generalized singular value decomposition of a large matrix pair
Two harmonic extraction based Jacobi–Davidson (JD) type algorithms are proposed to compute a partial generalized singular value decomposition (GSVD) of a large regular matrix pair. They are called cross product-free (CPF) and inverse-free (IF) harmonic JDGSVD algorithms, abbreviated as CPF-HJDGSVD and IF-HJDGSVD, respectively. Compared with the standard extraction based JDGSVD algorithm, the harmonic extraction based algorithms converge more regularly and suit better for computing GSVD components corresponding to interior generalized singular values. Thick-restart CPF-HJDGSVD and IF-HJDGSVD algorithms with some deflation and purgation techniques are developed to compute more than one GSVD components. Numerical experiments confirm the superiority of CPF-HJDGSVD and IF-HJDGSVD to the standard extraction based JDGSVD algorithm.
There are no comments yet.
## Authors
• 3 publications
• 8 publications
04/29/2020
### A cross-product free Jacobi-Davidson type method for computing a partial generalized singular value decomposition (GSVD) of a large matrix pair
A Cross-Product Free (CPF) Jacobi-Davidson (JD) type method is proposed ...
07/24/2019
### On choices of formulations of computing the generalized singular value decomposition of a matrix pair
For the computation of the generalized singular value decomposition (GSV...
01/10/2020
### The joint bidiagonalization process with partial reorthogonalization
The joint bidiagonalization(JBD) process is a useful algorithm for the c...
11/06/2020
### Efficient Robust Watermarking Based on Quaternion Singular Value Decomposition and Coefficient Pair Selection
Quaternion singular value decomposition (QSVD) is a robust technique of ...
04/05/2020
### New Formulation and Computation for Generalized Singular Values of Grassman Matrix Pair
In this paper, we derive new model formulations for computing generalize...
02/12/2020
### Towards a more robust algorithm for computing the restricted singular value decomposition
A new algorithm to compute the restricted singular value decomposition o...
05/04/2020
### The Multi-Symplectic Lanczos Algorithm and Its Applications to Color Image Processing
Low-rank approximations of original samples are playing more and more an...
##### This week in AI
Get the week's most popular data science and artificial intelligence research sent straight to your inbox every Saturday.
## 1 Introduction
For a pair of large and possibly sparse matrices and , the matrix pair is called regular if , i.e., , where and denote the null spaces of and , respectively. The generalized singular value decomposition (GSVD) of was introduced by Van Loan [34] and developed by Paige and Saunders [28]. Since then, GSVD has become a standard matrix decomposition and has been widely used [2, 3, 4, 9, 10, 25]. Let , and , , where the superscript denotes the transpose. Then the GSVD of is
(1.1) {UTAX=ΣA=\diag{C,0l1,q1,Iq2},VTBX=ΣB=\diag{S,Iq1,0l2,q2},
where is nonsingular, and are orthogonal, and the diagonal matrices and satisfy
0<αi,βi<1andα2i+β2i=1,i=1,…,q
with . Here, and are the zero matrices and identity matrices of order , respectively; see [28]. The GSVD part in (1.1) that corresponds to and can be written as
(1.2)
where is the th column of and the unit-length vectors and are the th columns of and , respectively. The quintuples , are called nontrivial GSVD components of . Particularly, the numbers or the pairs are called the nontrivial generalized singular values, and and are the corresponding left and right generalized singular vectors, respectively, .
For a given target , we assume that all the nontrivial generalized singular values of are labeled by their distances from :
(1.3) |σ1−τ|≤⋯≤|σℓ−τ|<|σℓ+1−τ|≤⋯≤|σq−τ|.
We are interested in computing the GSVD components corresponding to the nontrivial generalized singular values of closest to . If is inside the nontrivial generalized singular spectrum of , then , are called interior GSVD components of ; otherwise, they are called the extreme, i.e., largest or smallest, ones. A large number of GSVD components, some of which are interior ones [5, 6, 7], are required in a variety of applications. Throughout this paper, we assume that is not equal to any generalized singular value of .
Zha [37] proposes a joint bidiagonalization (JBD) method to compute extreme GSVD components of the large matrix pair . The method is based on a JBD process that successively reduces to a sequence of upper bidiagonal pairs, from which approximate GSVD components are computed. Kilmer, Hansen and Espanol [26] have adapted the JBD process to the linear discrete ill-posed problem with general-form regularization and developed a JBD process that reduces to lower-upper bidiagonal forms. Jia and Yang [24] have developed a new JBD process based iterative algorithm for the ill-posed problem and considered the convergence of extreme generalized singular values. In the GSVD computation and the solution of discrete ill-posed problem, one needs to solve an least squares problem with the coefficient matrix at each step of the JBD process. Jia and Li [22] have recently considered the JBD process in finite precision and proposed a partial reorthogonalization strategy to maintain numerical semi-orthogonality among the generated basis vectors so as to avoid ghost approximate GSVD components, where the semi-orthogonality means that two unit-length vectors are numerically orthogonal to the level of with being the machine precision.
Hochstenbach [12] presents a Jacobi–Davidson (JD) GSVD (JDGSVD) method to compute a number of interior GSVD components of with of full column rank, where, at each step, an -dimensional linear system, i.e., the correction equation, needs to be solved iteratively with low or modest accuracy; see [14, 15, 20, 21]. The lower -dimensional and upper -dimensional parts of the approximate solution are used to expand the right and one of the left searching subspaces, respectively. The JDGSVD method formulates the GSVD of as the equivalent generalized eigendecomposition of the augmented matrix pair for of full column rank, computes the relevant eigenpairs, and recovers the approximate GSVD components from the converged eigenpairs. The authors [16]
have shown that the error of the computed eigenvector is bounded by the size of the perturbations times a multiple
, where denotes the -norm condition number of with and being the largest and smallest singular values of , respectively. Consequently, with an ill-conditioned , the computed GSVD components may have very poor accuracy, which has been numerically confirmed [16]. The results in [16] show that if is ill conditioned but has full column rank and is well conditioned then the JDGSVD method can be applied to the matrix pair and computes the corresponding approximate GSVD components with high accuracy. Note that the two formulations require that and
be rectangular or square, respectively. We should also realize that a reliable estimation of the condition numbers of
and may be costly, so that it may be difficult to choose a proper formulation in applications.
Zwaan and Hochstenbach [39] present a generalized Davidson (GDGSVD) method and a multidirectional (MDGSVD) method to compute an extreme partial GSVD of . These two methods involve no cross product matrices and or matrix-matrix products, and they apply the standard extraction approach, i.e., the Rayleigh–Ritz method [31] to directly and compute approximate GSVD components with respect to the given left and right searching subspaces, where the two left subspaces are formed by premultiplying the right one with and , respectively. At iteration of the GDGSVD method, the right searching subspace is spanned by the residuals of the generalized Davidson method [1, Sec. 11.2.4 and Sec. 11.3.6]
applied to the generalized eigenvalue problem of
; in the MDGSVD method, an inferior search direction is discarded by a truncation technique, so that the searching subspaces are improved. Zwaan [38] exploits the Kronecker canonical form of a regular matrix pair [32] and shows that the GSVD problem of can be formulated as a certain generalized eigenvalue problem without involving any cross product or any other matrix-matrix product. Such formulation currently is mainly of theoretical value since the nontrivial eigenvalues and eigenvectors of the structured generalized eigenvalue problem are always complex: the generalized eigenvalues are the conjugate quaternions with the imaginary unit, and the corresponding right generalized eigenvectors are
[uTj,xTj/βj,√σjuTj,√σjvTj]T,[−uTj,−xTj/βj,√σjuTj,√σjvTj]T, [−iuTj,ixTj/βj,√σjuTj,−√σjvTj]T,[iuTj,ixTj/βj,−√σjuTj,−√σjvTj]T.
Clearly, the size of the generalized eigenvalue problem is much bigger than that of the GSVD of . The conditioning of eigenvalues and eigenvectors of this problem is also unclear. In the meantime, no structure-preserving algorithm has been found for such kind of complicated structured generalized eigenvalue problem. Definitely, it will be extremely difficult and highly challenging to seek for a numerically stable structure-preserving algorithm for this problem.
The authors [15] have recently proposed a Cross Product-Free JDGSVD method, referred to as the CPF-JDGSVD method, to compute several GSVD components of corresponding to the generalized singular values closest to . The CPF-JDGSVD method is cross products and free when constructing and expanding right and left searching subspaces; it premultiplies the right searching subspace by and to construct two left ones separately, and forms the orthonormal bases of those by computing two thin QR factorizations, as done in [39]. The resulting projected problem is the GSVD of a small matrix pair without involving any cross product or matrix-matrix product. Mathematically, the method implicitly deals with the equivalent generalized eigenvalue problem of without forming or explicitly. At the subspace expansion stage, an -by- correction equation is approximately solved iteratively with low or modest accuracy, and the approximate solution is used to expand the searching subspaces. Therefore, the subspace expansion is fundamentally different from that used in [39], where the dimension of the correction equations is no more than half of the dimension of those in [12].
Just like the standard Rayleigh–Ritz method for the matrix eigenvalue problem and the singular value decomposition (SVD) problem, the CPF-JDGSVD method suits better for the computation of some extreme GSVD components, but it may encounter some serious difficulties for the computation of interior GSVD components. Remarkably, adapted from the standard extraction approach for the eigenvalue problem and SVD problem to the GSVD computation, an intrinsic shortcoming of a standard extraction based method is that it may be hard to pick up good approximate generalized singular values correctly even if the searching subspaces are sufficiently good. This potential disadvantage may make the resulting algorithm expand the subspaces along wrong directions and converge irregularly, as has been numerically observed in [15]. To this end, inspired by the harmonic extraction based methods that suit better for computing interior eigenpairs and SVD components [11, 13, 14, 17, 23, 21, 27], we will propose two harmonic extraction based JDGSVD methods that are particularly suitable for the computation of interior GSVD components. One method is cross products and free, and the other is inversions and free. As will be seen, the derivations of the two harmonic extraction methods are nontrivial, and they are subtle adaptations of the harmonic extraction for matrix eigenvalue and SVD problems. In the sequel, we will abbreviate Cross Product-Free and Inverse-Free Harmonic JDGSVD methods as CPF-HJDGSVD and IF-HJDGSVD, respectively.
We first focus on the case and propose our harmonic extraction based JDGSVD type methods. Then by introducing the deflation technique in [15] into the methods, we present the methods to compute more than one, i.e., , GSVD components. To be practical, combining the thick-restart technique in [30] and some purgation approach, we develop thick-restart CPF-HJDGSVD and IF-HJDGSVD algorithms to compute the GSVD components associated with the generalized singular values of closest to .
The rest of this paper is organized as follows. In Section 2, we briefly review the CPF-JDGSVD method proposed in [15]. In Section 3, we propose the CPF-HJDGSVD and IF-HJDGSVD methods. In Section 4, we develop thick-restart CPF-HJDGSVD and IF-HJDGSVD with deflation and purgation to compute GSVD components of . In Section 5, we report numerical experiments to illustrate the performance of CPF-HJDGSVD and IF-HJDGSVD, make a comparison of them and CPF-JDGSVD, and show the superiority of the former two to the latter one. Finally, we conclude the paper in Section 6.
Throughout this paper, we denote by the column space of a matrix, and by and the - and -norms of a matrix or vector, respectively. As in (1.1), we denote by and the -by- identity and -by- zero matrices, respectively, with the subscripts and dropped whenever they are clear from the context.
## 2 The standard extraction based JDGSVD method
We review the CPF-JDGSVD method in [15] for computing the GSVD component of . Assume that a -dimensional right searching subspace is available, from which an approximation to is extracted. Then we construct
(2.1) U=AXandV=BX
as the two left searching subspaces, from which approximations to and are computed. It is proved in [15] that the distance between and (resp. and ) is as small as that between and , provided that (resp. ) is not very small. In other words, for the extreme and interior GSVD components, and constructed by (2.1) are as good as provided that the desired generalized singular values are neither very small nor very small. It is also proved in [15] that or is as accurate as for very large or small generalized singular values.
Assume that the columns of form an orthonormal basis of , and compute the thin QR factorizations of and :
(2.2) A˜X=˜URAandB˜X=˜VRB,
where and are orthonormal, and and are upper triangular. Then the columns of and are orthonormal bases of and , respectively. With , , and their orthonormal bases available, we can extract an approximation to the desired GSVD component of with respect to them. The standard extraction approach in [15] seeks for positive pairs with , normalized vectors , , and vectors that satisfy the Galerkin type conditions:
(2.3) ⎧⎪⎨⎪⎩A~x−~α~u⊥U,B~x−~β~v⊥V,~βAT~u−~αBT~v⊥X.
Among pairs ’s, select closest to , and take as an approximation to . We call or a Ritz value and , and the corresponding left and right Ritz vectors, respectively.
It follows from the thin QR factorizations (2.2) of and that and . Write , and . Then (2.3) becomes
(2.4) RA~d=~α~e,RB~d=~β~f,~βRTA~e=~αRTB~f,
which is precisely the GSVD of the projected matrix pair . Therefore, in the extraction phase, the standard extraction approach computes the GSVD of the -by- matrix pair , picks up the GSVD component with being the generalized singular value of closest to the target , and use
(~α,~β,~u,~v,~x)=(~α,~β,˜U~e,˜V~f,˜X~d)
as an approximation to of . It is straightforward from (2.3) that , and
(ATA−~θ2 BTB)~x⊥X.
That is, is a standard Ritz pair to the eigenpair of the symmetric definite matrix pair with respect to the subspace . Because of this, we call a standard Ritz approximation in the GSVD context.
Since and , the residual of Ritz approximation is
(2.5) r=r(~α,~β,~u,~v,~x)=~βAT~u−αBT~v.
Obviously, is an exact GSVD component of if and only if . The approximate GSVD component is claimed to have converged if
(2.6) ∥r∥≤(~β∥A∥1+~α∥B∥1)⋅tol,
where is a user prescribed tolerance, and one then stops the iterations.
If has not yet converged, the CFP-JDGSVD method expands the right searching subspace and constructs the corresponding left subspaces and by (2.1). Specifically, the CPF-JDGSVD seeks for an expansion vector in the following way: For the vector
(2.7) ~y:=(ATA+BTB)~x=~αAT~u+~βBT~v
that satisfies , we first solve the correction equation
(2.8) (I−~y~xT)(ATA−ρ2BTB)(I−~x~yT)t=−r
with the fixed for until
(2.9) ∥r∥≤(~β∥A∥1+~α∥B∥1)⋅fixtol
for a user prescribed tolerance , say, , and then solve the modified correction equation with the dynamic for . Note that is an oblique projector onto the orthogonal complement of the subspace .
With the solution of (2.8), we expand to the new -dimensional , whose orthonormal basis matrix is
(2.10) ˜Xnew=[˜X, x+]withx+=(I−˜X˜XT)t∥(I−˜X˜XT)t∥,
where is called an expansion vector. We then compute the orthonormal bases and of the expanded left searching subspaces
Unew=AXnew=span{˜U,Ax+},Vnew=BXnew=span{˜V,Bx+}
by efficiently updating the thin QR factorizations of and , respectively, where
˜Unew=[˜U,u+],RA,new=[RArAγA],˜Vnew=[˜V,v+],RB,new=[RBrBγB]
with
rA=˜UTAx+,γA=∥Ax+−˜UrA∥,u+=Ax+−˜UrAγA, rB=˜VTBx+,γB=∥Bx+−˜VrB∥,v+=Bx+−˜VrBγA.
CPF-JDGSVD then computes a new approximate GSVD component of with respect to and , and repeat the above process until the convergence criterion (2.6) is achieved. We call iterative solutions of (2.8) the inner iterations and the extractions of the approximate GSVD components with respect to , and the outer iterations.
As has been shown in [15], it suffices to iteratively solve the correction equations approximately with low or modest accuracy and uses an approximate solution to update in the above way, in order that the resulting inexact CPF-JDGSVD method and its exact counterpart with the correction equations solved accurately behave similarly. Precisely, for the correction equation (2.8), we adopt the inner stopping criteria in [15] and stop the inner iterations when the inner relative residual norm satisfies
(2.11) ∥rin∥≤min{2c~ε,0.01},
where is a user prescribed parameter and is a constant depending on and the current approximate generalized singular values.
## 3 The harmonic extraction based JDGSVD methods
We shall make use of the principle of the harmonic extraction [31, 33] to propose the CPF-harmonic and IF-harmonic extraction based JDGSVD methods in Section 3.1 and Section 3.2, respectively. They compute new approximate GSVD components of with respect to the given left and right searching subspaces , and , and suit better for the computation of interior GSVD components.
### 3.1 The CPF-harmonic extraction approach
If has full column rank with some special, e.g., banded, structure, from which the inversion can be efficiently applied, we can propose our CPF-harmonic extraction approach to compute a desired approximate GSVD component as follows. For the purpose of derivation, assume that
(3.1) BTB=LLT
is the Cholesky factorization of with being nonsingular and lower triangular, and define the matrix
(3.2) ˇA=AL−T.
We present the following result, which establishes the relationship between the GSVD of and the SVD of and will be used to propose the CPF-harmonic extraction approach.
###### Theorem 3.1.
Let be a GSVD component of the regular matrix pair and . Assume that has full column rank and has the Cholesky factorization (3.1), and let be defined by (3.2) and the vector
(3.3) z∗=1β∗LTx∗.
Then is a singular triplet of :
(3.4) ˇAz∗=σ∗u∗andˇATu∗=σ∗z∗.
###### Proof.
It follows from the GSVD (1.2) of that with , meaning that . Making use of (3.1), we have
∥z∗∥=1β∗∥LTx∗∥=1β∗∥Bx∗∥=1.
By the definitions (3.2) and (3.3) of and , from we obtain
ˇAz∗=1β∗AL−TLTx∗=1β∗Ax∗=α∗β∗u∗=σ∗u∗,
that is, the first relation in (3.4) holds. From the GSVD (1.2), it is straightforward that . Making use of this relation and (3.1) gives
ˇATu∗=L−1ATu∗=σ∗β∗L−1BTBx∗=σ∗β∗LTx∗=σ∗z∗,
which proves the second relation in (3.4). ∎
Theorem 3.1 motivates us to propose our first harmonic extraction approach to compute the singular triplet of and then recover the desired GSVD component of .
Specifically, take the -dimensional and as the left and right searching subspaces for the left and right singular vectors and of , respectively. Then the columns of form a basis of . Mathematically, we seek for positive and vectors and such that
(3.5) [0ˇATˇA0][ˇzˇu]−ϕ[ˇzˇu] ⊥ ([0ˇATˇA0]−τI)R([˜Z˜U]).
This is the harmonic extraction approach for the eigenvalue problem of the augmented matrix
[0ˇATˇA0]
for the given target [31, 33], where is a harmonic Ritz value and is the harmonic Ritz vector with respect to the searching subspace
R([˜Z˜U]).
We pick up the closest to as the approximation to and take the normalized and as approximations to and , respectively. We will show how to obtain an approximation to afterwards.
Write and with and . Then , and requirement (3.5) amounts to the equation
[˜ZT˜UT][−τIˇATˇA−τI][−ϕIˇATˇA−ϕI][˜Z˜U][ˇdˇe]=0.
Decompose , and rearrange the above equation. Then we obtain the generalized eigenvalue problem of a -by- matrix pair:
(3.6) [˜ZTˇATˇA˜Z+τ2˜ZT˜Z−2τ˜ZTˇAT˜U−2τ˜UTˇA˜Z˜UTˇAˇAT˜U+τ2I][ˇdˇe]=(ϕ−τ)[−τ˜ZT˜Z˜ZTˇAT˜U˜UTˇA˜Z−τI][ˇdˇe].
By (3.2), and the thin QR factorization of in (2.2), we have
ˇA˜Z=A˜X=˜URA,
showing that
˜ZTˇATˇA˜Z=RTARA%and˜ZTˇAT˜U=RTA.
Moreover, exploiting the Cholesky factorization (3.1) of and the thin QR factorization of in (2.2), we obtain
˜ZT˜Z = ˜XTLLT˜X=˜XTBTB˜X=RTBRB, ˜UTˇAˇAT˜U = ˜UTA(LLT)−1AT˜U=˜UTA(BTB)−1AT˜U.
Substituting these two relations into (3.6) yields
(3.7)
For the brevity of presentation, we will denote the symmetric matrices
HA,B†=˜UTA(BTB)−1AT˜U
and
(3.8) Gc=[−τRTBRBRTARA−τI],Hc=[RTARA+τ2RTBRB−2τRTA−2τRAHA,B†+τ2I].
In implementations, we compute the generalized eigendecomposition of the symmetric positive definite matrix pair and pick up the largest eigenvalue in magnitude and the corresponding unit-length eigenvector . Then the harmonic Ritz approximation to the desired singular triplet of is
(3.9) (ϕ,ˇu,ˇz)=(τ+1μ,˜Uˇe∥ˇe∥,˜Zˇd∥˜Zˇd∥).
Since is an approximation to the right singular vector of , from (3.3) the vector after some proper normalization is an approximation to the right generalized singular vector of , which we write as
(3.10) ˇx=1ˇδ˜Xˇd,
where is a normalizing factor. It is natural to require that the approximate right singular vector be -norm normalized, i.e., , since the exact satisfies by (1.2). With this normalization, from (3.10), we have
1=1ˇδ2ˇdT˜XT(ATA+BTB)˜Xˇd=1ˇδ2ˇdT(RTARA+RTBRB)ˇd,
from which it follows that
(3.11) ˇδ=√∥RAˇd∥2+∥RBˇd∥2.
Note that the approximate left generalized singular vector defined by (3.9) is no longer collinear with , as opposed to the collinear and obtained by the standard extraction approach in Section 2. To this end, instead of in (3.9), we take new and defined by
(3.12) ˇu=Aˇx∥Aˇx∥andˇv=Bˇx∥Bˇx∥
as the harmonic Ritz approximations to and , which are colinear with and , respectively. Correspondingly, define and . Then by (3.11), the parameter in (3.10) becomes . Moreover, by definition (3.10) of and the thin QR factorizations of and in (2.2), we obtain
Aˇx = 1ˇδA˜Xˇd=1ˇδ˜URAˇd=1ˇδ˜Uˇe, Bˇx = 1ˇδB˜Xˇd=1ˇδ˜VRBˇd=1ˇδ˜Vˇf.
Using them, we can efficiently compute the approximate generalized singular vectors
(3.13) ˇu=Aˇx∥Aˇx∥=˜Uˇe∥ˇe∥andˇv=Bˇx∥Bˇx∥=˜Vˇf∥ˇf∥< | 2022-05-18 12:11:18 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.882298469543457, "perplexity": 920.7066620514235}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662522270.37/warc/CC-MAIN-20220518115411-20220518145411-00016.warc.gz"} |
http://www.clp.ac.cn/EN/Article/OJ68fb3e028cad13f8?type=html | Main > Chinese Optics Letters > Volume 13 > Issue 9 > Page 091402 > Article
• View Full Text
• Abstract
• Figures (9)
• Tables (1)
• Equations (0)
• References (22)
• Cited By (2/1)
• Get PDF
• View Full Text
• Paper Information
Accepted: Jul. 9, 2015
Posted: Oct. 31, 2019
Published Online: Sep. 14, 2018
The Author Email: Dijun Chen (djchen@siom.ac.cn), Haiwen Cai (hwcai@siom.ac.cn)
• Get Citation
• ##### Copy Citation Text
Bin Lu, Fang Wei, Zhen Zhang, Dan Xu, Zhengqing Pan, Dijun Chen, Haiwen Cai. Research on tunable local laser used in ground-to-satellite coherent laser communication[J]. Chinese Optics Letters, 2015, 13(9): 091402
• Category
• Share
## Abstract
In order to realize homodyne reception and Doppler frequency shift tracking in ground-to-satellite coherent laser communication, a local laser is experimentally demonstrated in this Letter. It is realized based on modulation-sideband injection locking, and has a 10 GHz tuning range, a 1 THz/s tuning rate, a 5 kHz linewidth, and 16 mW of output power. When applied to a Costas loop in a coherent laser communication system, the local laser can achieve $±5 GHz$ Doppler frequency shift tracking with a 20 MHz/s frequency shift rate, which is sufficient for the ground-to-satellite coherent laser communication.
Coherent optical communication has a very high receiving sensitivity, and has become a hot research area in the field of inter-orbit coherent laser communications[14]. In a coherent system, a homodyne receiver with a binary phase-shift keying signal modulation can realize the theoretical maximum sensitivity[5]. Homodyne reception requires phase synchronization between the source laser and the narrow-linewidth local laser, which is realized by optical phase-locked loop (OPLL) technology and also requires that the local laser has good frequency tunability[6] to track the Doppler frequency shift. In ground-to-satellite coherent laser communication, it is necessary for the Doppler frequency shift tracking range to be about $±5 GHz$, while the maximum Doppler frequency shift rate is nearly 10 MHz/s.
Numerous methods have been presented to meet the requirements mentioned above. Semiconductor lasers have a good frequency tuning property, and can be rapidly tuned by adjusting the temperature or current. However, the linewidth of semiconductor lasers is relatively wide[7]. Fiber lasers with a narrow linewidth can realize frequency tuning by an acousto-optic modulator (AOM)[8,9] or an electro-optic modulator (EOM)[1012], which can combine the fiber lasers’ good properties (robustness and narrow linewidth) with a rapid tuning performance. However, the tuning range is limited by the external radio frequency (RF) signal and the response speed of the modulators, and the output power of the modulated light will be restricted by the modulator’s low conversion efficiency. Simultaneous achievement of narrow linewidth operation, fast precision tuning, and high output power is a challenge for a conventional laser source.
In this Letter, a modulation-sideband injection-locking technology is proposed. Through this technology, the good frequency tuning property of semiconductor lasers and narrow linewidth property of fiber lasers can be combined. The master fiber laser is modulated by an EOM to generate many sidebands, and then the modulated light is launched into the semiconductor slave laser. By adjusting the current or temperature of the slave laser, the slave laser will be injection locked at a specific sideband, and the other modes will be suppressed. If the RF signal to the EOM is shifted by $Δω$, the nth sideband frequency will shift instantly by $nΔω$; consequently, the frequency tuning rate and range are simultaneously multiplied, overcoming the restriction of the RF signal. Using this technology, the narrow linewidth and stability properties of the main laser can be transferred to the slave laser, and will effectively enhance the output power. Meanwhile, we can easily realize the frequency tuning by altering the RF signal[1320].
A schematic diagram of the modulation-sideband injection locking is shown in Fig. 1. The master laser is a low-noise fiber laser working in the 1550 nm band. The slave laser is a distributed feedback (DFB) semiconductor laser. The emitting light of the master laser is modulated through an EOM driven by a RF signal to generate many sidebands. Since the generating procedure of the sidebands is strictly bound to the RF signal and is independent from the cavity reconfiguration, the narrow linewidths of the sidebands are reserved even at a high-frequency tuning rate. We adjust the temperature or current of the slave laser to ensure that a specific sideband is injection locked to the slave laser, and then the frequency tuning can be easily realized by altering the RF signal. There is an injection-locking area (typically hundreds of megahertz for stable injection levels[17]), beyond which the slave laser may lose the locked state. In order to maintain injection locking at the sideband when the frequency is being tuned, synchronous temperature or current compensation is necessary. Considering that the response rate of the temperature compensation is much lower, which will hamper the frequency tuning rate, current compensation is a better choice for rapid frequency tuning.
Figure 1. Schematic diagram of the modulation-sideband injection locking.
The operation temperature and current of the DFB slave laser can be set up on the laser driving board. The current driving board not only can set up the bias current, but also has the external current compensation connector, through which an analog voltage can be turned up to linearly adjust the driving current. The measured external current compensation coefficient is about 20 mA/V. Moreover, the frequency tuning and external compensation voltage must be kept synchronous.
The RF signal source is designed to generate both the RF signal and the synchronous compensation voltage. Meanwhile, the RF signal has to keep a low noise in order to not deteriorate the modulated master laser’s linewidth. A voltage-controlled oscillator (VCO) is a kind of high-speed voltage-frequency converter. It can rapidly transform a voltage signal into a corresponding RF signal. Meanwhile, the voltage can be processed to act as the synchronous compensation voltage. The parameters of the VCO (Hittite, HMC-C029) we use are shown in Table 1. To guarantee the low-noise property of the RF signal, two ultra-low-noise digital-to-analog converter (DAC) are chosen, and the circuit board is professionally designed and manufactured.
• #### Table 1. Parameters of the VCO
View tableView all Tables
#### Table 1. Parameters of the VCO
Frequency Range 5–10 GHz Power Output 17–20 dBm Tune Voltage 0–20 V Supply Current 195 mA@15 V Second Harmonic −15 dBc SSB Phase Noise −93 dBc/Hz@100 kHz −64 dBc/Hz@10 kHz
The schematic diagram of the RF signal source is shown in Fig. 2. The output voltage of the two different DAC is added together through an electrical adder to drive the EOM. One is a DAC that is of high precision (20 bit), has a high output voltage range (0–20 V), and has a low speed (settling time 1 us). The other one is a DAC that is of lower precision (16 bit), has a low output voltage range (0–200 mV), and has a high speed (settling time 400 ps). The voltage output range of the high-precision DAC can drive the VCO to produce a RF signal spanning from 5 to 10 GHz. Driven by the RF signal, the slave laser can realize the large range and lower speed frequency tuning, and can thus fulfill the tracking of the Doppler frequency shift. The high-speed DAC is used to enable the slave laser to conduct a rapid, small-range frequency tuning to make the phase locked instantly in the OPLL system. The output voltage of the adder is processed by an analog circuit, and is attached to the external current compensation connector of the slave laser to fulfill the synchronous current compensation.
Figure 2. Schematic diagram of the RF signal source.
The output RF signal of the VCO is measured by a spectrum analyzer (Fig. 3); it indicates that the 3 dB bandwidth is about 5 kHz. Thus, the noise of the RF signal is low enough that it will not deteriorate the linewidth of the modulated master laser, although there are two small sidelobe peaks induced by the noise from the DAC.
Figure 3. RF signal spectrum diagram at 7.94796 GHz.
The output optical spectra are measured by an optical spectrum analyzer, as shown in Fig. 4. Figure 4(a) is the output spectrum of the EOM modulated by a RF signal of about 6.5 GHz. Figure 4(b) is the output spectrum of the slave laser after being injection locked at the second sideband. It is obvious that the carrier has been effectively suppressed (over 25 dB), and the EOM can generate offset sidebands of up to 45 GHz. The output power is greatly enhanced (more than 30 dB) after injection locking. The output power of the injection-locked slave laser is measured by a power meter. The average power is above 16 mW (almost the same as the output power of the slave laser without injection locking), which is high enough to satisfy the needs in a homodyne reception.
Figure 4. Spectra of the local laser. (a) Modulated master laser. (b) Injection-locked slave laser.
The linewidth of the laser is measured based on a fiber-delayed, self-heterodyne interferometer[21]. The fiber length is about 20 km, and the frequency is about 40 MHz, generated by an AOM. The linewidth of the slave laser without injection locking is nearly 200 kHz. The results after realizing injection locking are presented in Fig. 5: the red line is the linewidth of the master laser, and the black one is the linewidth of the slave laser injection locked at the second sideband of the master laser. It shows that the noise of the locked slave laser is slightly increased compared to the master laser’s linewidth. The sidelobe peaks are due to the RF signal induced by the noise of the DAC. The 3 dB linewidth is almost the same as the master laser: $<5 kHz$. The narrow linewidth property will enable the local laser to be applied to an OPLL system.
Figure 5. Linewidths of the master and injection-locked slave lasers.
In order to demonstrate the large range and rapid frequency tuning ability of the local laser, the high-precision DAC is controlled to drive the VCO to generate a RF signal spanning from 5 to 10 GHz. By adjusting the slave laser to be injection locked at the second sideband of the master laser, the output light-frequency tuning range can be expanded from 5 to 10 GHz. The frequency tuning characteristics are measured by a fiber asymmetric Mach–Zehnder interferometer (see Fig. 6), whose free spectral range is about 220 MHz. Figure 6(a) is the voltage signal of the VCO, and Fig. 6(b) shows the intensity curves of the non-equilibrium Mach–Zehnder interferometer. It can be seen that the frequency-tuning period is approximately 10 ms, and that the number of sinusoidal wave is approximately 49. Then, we can calculate that the frequency tuning range surpasses 10 GHz (220 MHz*49), and that the frequency tuning rate is over 1 THz/s. The frequency tuning performance of the local laser can fulfill the needs of the Doppler frequency shift compensation of a ground-to-satellite coherent laser communication.
Figure 6. Intensity curves of the non-equilibrium Mach–Zehnder interferometer for laser frequency modulation.
Then, the local laser is applied to the Costas OPLL system[22], which is an important part of the coherent optical communication. The signal laser is a kind of fiber DFB laser working in the 1550 nm band. The signal laser can be tuned to around dozens of megahertz through an external voltage, or tuned to around scores of gigahertz by adjusting the signal laser’s operating temperature. A frequency difference is set between the signal laser and the local laser. Then, the field programmable gate array controls the local laser to conduct a frequency sweeping. The frequency difference and the beat frequency of the output sinusoidal wave are gradually reduced, until they are within the capture range of the OPLL. The OPLL will achieve phase locking when the beat frequency is nearly zero. Despite phase locking, some phase noise will still exist. As long as the phase noise does not exceed half of the full-width of the output signal, no error will occur. The capture process is shown in Fig. 7, where the phase noise is within the threshold of red line.
Figure 7. Capture process.
To verify the Doppler frequency shift tracking, the signal laser needs to be frequency tuned to simulate the Doppler frequency shift. The small frequency range tracking process is shown in Fig. 8. The tuning frequency range is approximately 12 MHz, and the simulated Doppler frequency shift rate is nearly 20 MHz/s. Figure 8(a) is the external tuning voltage of the signal laser, Fig. 8(b) is the response of the local laser, and Fig. 8(c) is the phase-locked state. The large-range Doppler frequency shift process is shown in Fig. 9. Due to the frequency tuning limit of the signal laser, the simulated Doppler frequency shift is about 6 GHz, while the tuning rate is nearly 20 MHz/s. Figure 9(a) is the tuning temperature of the signal, Fig. 9(b) is the response of the local laser, and Fig. 9(c) is the phase-locked state. While conducting the large frequency range Doppler frequency shift compensation, the phase noise is slightly increased, but is still within the threshold, and is low enough to satisfy the needs of coherent communication.
Figure 8. Doppler frequency shift tracking within 12 MHz. | 2020-09-23 03:18:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 5, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5239214301109314, "perplexity": 1732.0222456087608}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-40/segments/1600400209665.4/warc/CC-MAIN-20200923015227-20200923045227-00694.warc.gz"} |
https://www.betterment.com/resources/inside-betterment/engineering/bobs-ultimate-garage/ | Consider the following example.
Bob’s Ultimate Garage
Let’s say you are one of Bob’s lucky customers and park your car at Bob’s Ultimate Garage. This Garage is highly efficient and has mechanisms in place to help you get to your optimal parking spot faster, reduce car congestion, and give you favorable pricing for features that you want and nothing more. We’ll assume the following properties for the Garage:
• Has three levels (1-3)
• Each level has three sections
• Section 1 – for daily parking. Enter in the morning, leave in the evening
• Section 2 – for over-night parking. Enter whenever, stay the night, leave in the morning.
• Section 3 – hourly parking.
• Each level has different security strategies (for keeping car thieves out)The Garage’s pricing scheme will charge you more for a level with stricter security.
• Level 1 – need your parking ticket to enter
• Level 2 – closed circuit cameras in addition to needing a ticket to enter
• Level 3 – on-site security guard patrolling the premises
To help the driver decide where to park, Bob has integrated in a new parking Ticketing System.
Bob’s fancy new Ticketing System will work in the following way:
• When a Car enters the Garage, it will pull up to a self-service ticket stand. User will press a typical button to clock in.
• Once a car pull up, a scanner will feed the following data into the Ticket System:
• Current Time (of check in)
• Year and Model of the car
Based on this input, the Ticketing System will print out a Ticket with the following:
• Two recommendations for Level and Section of the garage
• Estimate of hourly cost for each recommendation
• Confirmation of Check-in timestamp (obviously)
Initial Analysis
To implement Bob’s Ticketing System we will require some conditional logic. The main drivers for determining what the produced Ticket will recommend are:
1. The type of car
2. The time period of the day a customer enters Bob’s garage to park, ie Morning, Afternoon, Evening
The duration of a customer’s stay at the garage may influence the avg hourly rate Bob will charge. Bob may wish to offer discounts for certain cars, for extended stays, etc. For the sake of our example we will assume that Bob’s discounts will just be a function of time and the select level/section of the garage.
Basic domain modeling
From the “nouns” discussed so far we know to create objects representing:
• A Car, capturing the CarType, CarModel, year of it make
• Enum representing TimeOfDay, ie. Morning, Afternoon, Evening, GarageLevel, GarageSection
• An HourlyEstimate which will retain the estimated number of hours that a customer is expected to stay at the garage (based on Bob’s data analysis) and the hourly rate determined by associated business rules.
• A Recommendation for a GarageLevel, GarageSection and Hourly Rate
• A TicketRequest representing input from the Ticketing system: time of ticket request, details of the customer’s car
• And of course a TicketResponse which contains all recommendations we promised to deliver to our customer
A quick mock-up of what we need yields:
enum CarType {
// Using Guava's sets
// Car models are defined in-line so you don't forget to define models for each supported car type when adding new car types
// see Effective Java chapters on Enums for further reading
FERARI(CarClass.SPORT, Sets.newHashSet("california", "458 italis", "f12 berlinetta", "ff")),
LEXUS(CarClass.LUXURY, Sets.newHashSet("is", "gs", "ls")),
OLDSMOBILE(CarClass.CLASSIC, Sets.newHashSet("pirate", "55", "defender"));
...
}
enum TimeOfDay {
MORNING,
AFTERNOON,
EVENING,
ANY;
}
enum CarClass {
LUXURY,
SPORT,
CLASSIC
}
enum GarageLevel {
LEVEL1,
LEVEL2,
LEVEL3
}
enum GarageSection {
SECTION1,
SECTION2,
SECTION3
}
class Car {
final CarType carType;
final int makeYear;
}
class HourlyEstimate {
final int numHoursEst;
final BigDecimal avgHourlyPrice;
}
class Recommendation {
final GarageLevelSection gls;
final HourlyEstimate estimate;
}
class TicketRequest {
final DateTime entryTimestamp;
final String modelName;
final int makeYear;
}
class TicketResponse {
final DateTime entryTimestamp;
final Car car;
final Recommendation recommendation1;
final Recommendation recommendation2;
}
Capturing behaviors and defining rules
One way to begin our solution is to envision conditionals that capture all system behaviors as they were envisioned by Bob. Let’s mock up some business rules:
1. All Classics such as the 1902 Oldsmobile Pirate should favor Level 1, Section 1 in the Morning, and Level 1, Section 2 otherwise. (Bob doesn’t like Oldsmobiles and doesn’t think they require monitoring)
2. All Luxury cars such as the Lexus GS should favor Level 3, Section 1 in the Morning, Level 3, Section 2 in the Afternoon, and Level 3, Section 3 in the Evening.
And so on for other classes of cars we wish to support…
Since pricing is based on the chosen Level, Section and Time we can contrive some rules here as well:
1. Level 1 is the cheapest
2. Level 2 is average, unless it’s in Section 3 where it is as expensive as Level 3
3. Level 3 is the most expensive
Since the duration of stay may influence the hourly avg price, we need some way of associating the Time of Day (Morning, Afternoon, Evening) with the estimated number of hours Bob believes a car is likely to remain parked in the garage.
To model this in code, our instinct may be to try out some if/else blocks. Here is a snippet from BobsUltimateGarage.evaluateProcedurally() method:
TicketResponse evaluateProcedurally(TicketRequest request) {
log.info("Processing procedurally: " + request);
CarType carType = CarType.fromModel(request.modelName);
CarClass carClass = carType.getCarClass();
Car car = new Car(carType, request.makeYear);
TimeOfDay timeOfDay = TimeOfDay.fromDateTime(request.entryTimestamp);
...
// with just two variables, we have two levels of nested if/elses
if (CarClass.CLASSIC.equals(carClass)) {
// Bob doesn't value classics much...
if (TimeOfDay.MORNING.equals(timeOfDay)) {
// this can be refactored more... for compostional object building, Builder method is preferred.
HourlyEstimate estimate1 = calculateHourlyEstimateImperatively(timeOfDay, GarageLevel.LEVEL1, GarageSection.SECTION1);
r1 = new Recommendation(new GarageLevelSection(GarageLevel.LEVEL1, GarageSection.SECTION1), estimate1);
HourlyEstimate estimate2 = calculateHourlyEstimateImperatively(timeOfDay, GarageLevel.LEVEL2, GarageSection.SECTION1);
r2 = new Recommendation(new GarageLevelSection(GarageLevel.LEVEL2, GarageSection.SECTION1), estimate2);
} else {
HourlyEstimate estimate1 = calculateHourlyEstimateImperatively(timeOfDay, GarageLevel.LEVEL1, GarageSection.SECTION2);
r1 = new Recommendation(new GarageLevelSection(GarageLevel.LEVEL1, GarageSection.SECTION2), estimate1);
HourlyEstimate estimate2 = calculateHourlyEstimateImperatively(timeOfDay, GarageLevel.LEVEL2, GarageSection.SECTION2);
r2 = new Recommendation(new GarageLevelSection(GarageLevel.LEVEL2, GarageSection.SECTION2), estimate2);
}
} else if (CarClass.LUXURY.equals(carClass)) {
if (TimeOfDay.MORNING.equals(timeOfDay)) {
...
} else if (TimeOfDay.AFTERNOON.equals(timeOfDay)) {
...
}
...
}
return new TicketResponse(request.entryTimestamp, car, r1, r2);
}
And our simulated methods for estimating the duration of stay and the avg hourly rate:
private int getHourEstimateFromTimeOfDay(TimeOfDay timeOfDay) {
int result = 1; //made up avg stay
if (TimeOfDay.AFTERNOON.equals(timeOfDay)) {
result = 2; //maybe an errand?
} else if (TimeOfDay.MORNING.equals(timeOfDay)) {
result = 8; //8-hr work day?
} else if (TimeOfDay.EVENING.equals(timeOfDay)) {
result = 12; //overnight?
}
return result;
}
private BigDecimal getHourlyRate(GarageLevel level, GarageSection section) {
BigDecimal result = new BigDecimal(5); //\$5.00/hr default
if (GarageLevel.LEVEL1.equals(level)) {
if (GarageSection.SECTION1.equals(section)) {
result = new BigDecimal(5);
} else {
result = new BigDecimal(7);
}
} else if (GarageLevel.LEVEL2.equals(level)) {
if (GarageSection.SECTION3.equals(section)) {
result = new BigDecimal(10);
} else {
result = new BigDecimal(5);
}
} else if (GarageLevel.LEVEL3.equals(level)) {
result = new BigDecimal(10);
}
return result;
}
Observations on our first coding attempt
The above code works but should feel very procedural and overly complex – even with good commentary and studious refactoring. For every additional variable introduced into our business rules, the complexity of our conditionals increases manifold. If we were to introduce a new time slice, say LUNCH, each CarClass codified in our conditional will have to provide support for the new time period. For every nuance in rate calculations, we’ll have to work on correctly updating behavior without introducing regression bugs.
To sum up, the if/else approach suffers from:
• Redundancy. For each outer grouping, we must repeat inner groupings or find clever ways to refactor logic into helper methods.
• Explosive complexity. As new variables are introduced the decision tree becomes very complex, quickly.
• Challenged Maintainability. If we go beyond our contrived example and work with more nuanced rules, the resulting code may become brittle and susceptible to careless mistakes. For example, if we were to add the LUNCH TimeOfDay, a developer supporting this code may easily overlook a helper method that performs conditional processing on the TimeOfDay enum. In such a case, refactoring may have hurt maintainability of the code!
• Compromised Readability and Comprehension. With enough values or states that a given variable can take, the decision tree can quickly become difficult to reason about.
An alternative (Better) approach
A better way to capture Bob’s Ultimate Garage rules is to actually model out a Rule Engine. To do this in a manner which avoids all of the aforementioned pitfalls we introduce Guava’s Function object and the Apache Commons MultiKeyMap implementation.
MultiKeyMaps are super neat because they permit multiple discrete objects (including primitives) to act as a single composite key into the underlying map implementation. Applied, a MultiKeyMap will allow us to collapse nested if/else statements into a single, flat, composite key.
Guava’s Functions are delegate mechanisms with some of the Java nastiness encapsulated away. For C++ buffs, you can think of these Functions as function pointers (but they are references). Functions will allow us to elegantly implement multiple service contracts while encapsulating nuanced functionality of Bob’s business rules.
Combining Functions with Maps will allow us to configure multiple Strategies for our Business Rules in a declarative, flexible, and maintainable manner. To illustrate some variations, BobsUltimateGarage uses MultiKeyMaps for modeling Garage Level and Section rules without any Functional idioms while a simple HashMap is used to configure hourly rate calculation strategies for supported Car Classes (ie Classical, Luxury, Sport) using Functional idioms.
Let’s take a look at the CarClass-driven representation of rules pertaining to avg hourly rate calculations:
void initializeBusinessRules() {
}
Here, the BusinessRule object is nothing more than a container for multiple Functions. These functions comprise a Strategy for calculating estimated duration of stay and the avg hourly rate for each CarClass.
Functions like “simpleDateConverterFunction” and “complexDateConverterFunctions” share signatures and therefore act as implementations of a shared interface. (In fact, under the hood, Functions are nothing more than a genericized implementation of an Interface with a single apply() method.)
Further, these functions cleanly encapsulate different behaviors we wish to model. For example, we’ve contrived a “minDurationEstimateFunction” and a “simpleDurationEstimateFunction” to help us imagine the many different pricing strategies that Bob may come up with. Here is a closer look at these two functions:
//no minimum stay
private final Function simpleDurationEstimateFunction = new Function() {
@Override
public Integer apply(TimeOfDay timeOfDay) {
// reusing functionality from elsewhere... why not?
return getHourEstimateFromTimeOfDay(timeOfDay);
}
};
// introduce a 2 hr minimum stay
private final Function minDurationEstimateFunction = new Function() {
@Override
public Integer apply(TimeOfDay timeOfDay) {
// reusing functionality from elsewhere... why not?
int estimate = getHourEstimateFromTimeOfDay(timeOfDay);
return Math.max(estimate, 2);
}
}
Each strategy can be reapplied to other CarClass types in a declarative manner, with ease. There is also no limit to how fancy the inner implementation of each Function can be. Our production code using similar techniques makes database calls, web services calls, etc.
Rules to produce a Recommendation are similarly encoded into a MultiKeyMap. Although we don’t use any Functional strategies here, we can easily envision doing so.
void initializeRecommendations() {
RECOMMENDATIONS.put(CarClass.CLASSIC, TimeOfDay.MORNING, 1, new GarageLevelSection(GarageLevel.LEVEL1, GarageSection.SECTION1));
RECOMMENDATIONS.put(CarClass.CLASSIC, TimeOfDay.MORNING, 2, new GarageLevelSection(GarageLevel.LEVEL2, GarageSection.SECTION1));
RECOMMENDATIONS.put(CarClass.CLASSIC, TimeOfDay.ANY, 1, new GarageLevelSection(GarageLevel.LEVEL1, GarageSection.SECTION2));
RECOMMENDATIONS.put(CarClass.CLASSIC, TimeOfDay.ANY, 2, new GarageLevelSection(GarageLevel.LEVEL2, GarageSection.SECTION2));
...
}
Note how much easier it is to reason about this code than it is to reason about the if/else mess we previously created.
The clincher however, comes in constructing the final TicketResponse object.
Building the TicketResponse
The functional/declarative approach to modeling business rule strategies lends itself to a more elegant technique for constructing our response. Since everything is either pre-packaged into a value object or is a collection of Functions, we simply need to sequence construction of the result correctly. To illustrate, consider this quick-and-dirty Builder for producing a TicketResponse. Its only responsibilities are to parse the Request and then “apply” our rules in the proper sequence (everything that an API will provide documentation for). In fact, the entire ingestion of a TicketRequest and the construction of a TicketResponse is now but a few lines of code:
TicketResponse evaluateFunctionally(TicketRequest request) {
log.info("Processing functionally: " + request);
Car car = new Car(CarType.fromModel(request.modelName), request.makeYear);</code>
// can be further refactored into a helper Builder class
TimeOfDay timeOfDay = bizRule.timeOfDayFunction.apply(request.entryTimestamp);
GarageLevelSection glsReco1 = getRecommendation(car.getCarClass(), timeOfDay, 1);
HourlyEstimate estimate1 = new HourlyEstimate(bizRule.durationEstimateFunction.apply(timeOfDay),
bizRule.rateEstimateFunction.apply(glsReco1));
Recommendation r1 = new Recommendation(glsReco1, estimate1);
GarageLevelSection glsReco2 = getRecommendation(car.getCarClass(), timeOfDay, 2);
HourlyEstimate estimate2 = new HourlyEstimate(bizRule.durationEstimateFunction.apply(timeOfDay),
bizRule.rateEstimateFunction.apply(glsReco2));
Recommendation r2 = new Recommendation(glsReco2, estimate2);
return new TicketResponse(request.entryTimestamp, car, r1, r2);
}
The above can be further simplified with a bit more refactoring. Even without additional cleanup, we still see the effect.
When perusing the full code sample (https://github.com/Betterment/BetterDev/blob/master/BobsUltimateGarage.java) note how much simpler and more readable this method is compared to the original “evaluateProcedurally” method. The “evaluateFunctionally” method doesn’t care about the underlying strategies chosen, nor any of the implementation details what-so-ever. It only cares about computing the result and packaging everything into a TicketResponse.
Should Bob introduce new pricing schemes or new scheduling slices, we will only have to make updates to our Rules Map definitions and write new Functions for capturing new behaviors.
In Conclusion
We’ve covered two very different approaches to solving one common problem. The first approach, using nested if/else conditionals, suffers from complexity creep, maintenance, and comprehension challenges. A logic tree which combinatorially expresses the impact of multiple variables can quickly result in repetitive, procedural code that is not pleasant to work with.
An alternate approach is to leverage functional idioms available to us in Java along with single-key or multi-key HashMap implementations. This “Better” approach allows us to not only represent our state machine in a declarative manner but also to encapsulate rule variations cleanly. Since all the heavy lifting is abstracted away, the API for such a Rule Engine can remain litter-free. A response Builder of some sort can be provided to “apply” selected strategies and to construct the result while remaining completely ignorant of the underlying implementation.
When might such a design choice be appropriate?
1. When a decision tree is likely to consider multiple variables
2. and the states of these variables can be expressed declaratively
3. and when the response object requires non-trivial construction – meaning that the values it takes on requires the completion of several business processes
4. and the strategy for executing a business process may change dynamically depending on the given state of the application
5. and finally, when it is natural to describe rules as a concatenation of “if this and that and this other thing, then we should do so and so”
Once conditional logic is expressed as a table of rules and lookups, we can consider maintenance (code upkeep) challenges inherent to such a solution. The rules map, although defined in a quasi-declarative way, is still hard-coded! Any change to a rule will require a patch, build, and deploy cycle. If you are like Betterment and your organization is agile enough to push out multiple patch releases in a week (or a day) without breaking a sweat, perhaps you need go no further. For best-practice enthusiasts, purists, or those of us bogged in a lengthier release cycle, we would naturally strive for a rules engine that is configurable outside of our code-base.
A few options exist.
One option is to consider modeling the whole data structure as a Spring bean. The xml will likely be messy but some namespace tricks in Spring might make it less verbose. The challenge lies in dependency-injecting Functions with Spring. Interested in thinking through this problem further? See our BetterDev challenge below.
Another option is to codify the rules engine into a Rules table of a relational (or maybe even a No-SQL) database. Lookup keys used in our multi-key maps are easily convertible into composite keys in our database table. Pulling the right rule is as simple as running a Select query. Again, the challenge is in coming up with an elegant way of correlating a Rule with the Strategy used to operate on your data at runtime. We’ve thought about the solution a bit but again leave it up to you to take this further. Interested? See our BetterDev challenge below.
Yet another option is to consider a full-fledged rule modeling framework, such as DRools and MVEL if keys into a rule are not constant but require runtime evaluation.
Guava’s Functional Idioms:
Cool Rules Engine using Drool and MVEL:
http://java.dzone.com/articles/really-simple-powerful-rule
Lambda expressions and higher-order functions in Java 8 coming soon!
http://www.infoq.com/articles/java-8-vs-scala | 2017-11-17 19:15:04 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22197318077087402, "perplexity": 7471.372638819149}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-47/segments/1510934803906.12/warc/CC-MAIN-20171117185611-20171117205611-00510.warc.gz"} |
https://de.mathworks.com/help/curvefit/exponential.html | ## Exponential Models
The toolbox provides a one-term and a two-term exponential model as given by
$\begin{array}{l}y=a{e}^{bx}\\ y=a{e}^{bx}+c{e}^{dx}\end{array}$
Exponentials are often used when the rate of change of a quantity is proportional to the initial amount of the quantity. If the coefficient associated with b and/or d is negative, y represents exponential decay. If the coefficient is positive, y represents exponential growth.
For example, a single radioactive decay mode of a nuclide is described by a one-term exponential. a is interpreted as the initial number of nuclei, b is the decay constant, x is time, and y is the number of remaining nuclei after a specific amount of time passes. If two decay modes exist, then you must use the two-term exponential model. For the second decay mode, you add another exponential term to the model.
Examples of exponential growth include contagious diseases for which a cure is unavailable, and biological populations whose growth is uninhibited by predation, environmental factors, and so on.
### Fit Exponential Models Interactively
1. Open the Curve Fitting app by entering cftool. Alternatively, click Curve Fitting on the Apps tab.
2. In the Curve Fitting app, select curve data (X data and Y data, or just Y data against index).
Curve Fitting app creates the default curve fit, Polynomial.
3. Change the model type from Polynomial to Exponential.
You can specify the following options:
• Choose one or two terms to fit exp1 or exp2.
Look in the Results pane to see the model terms, the values of the coefficients, and the goodness-of-fit statistics.
• (Optional) Click Fit Options to specify coefficient starting values and constraint bounds appropriate for your data, or change algorithm settings.
The toolbox calculates optimized start points for exponential fits, based on the current data set. You can override the start points and specify your own values in the Fit Options dialog box.
The fit options for the single-term exponential are shown next. The coefficient starting values and constraints are for the census data.
For an example specifying starting values appropriate to the data, see Gaussian Fitting with an Exponential Background.
For more information on the settings, see Specifying Fit Options and Optimized Starting Points.
### Fit Exponential Models Using the fit Function
This example shows how to fit an exponential model to data using the fit function.
The exponential library model is an input argument to the fit and fittype functions. Specify the model type 'exp1' or 'exp2' .
Fit a Single-Term Exponential Model
Generate data with an exponential trend and then fit the data using a single-term exponential. Plot the fit and data.
x = (0:0.2:5)';
y = 2*exp(-0.2*x) + 0.1*randn(size(x));
f = fit(x,y,'exp1')
f =
General model Exp1:
f(x) = a*exp(b*x)
Coefficients (with 95% confidence bounds):
a = 2.021 (1.89, 2.151)
b = -0.1812 (-0.2104, -0.152)
plot(f,x,y)
Fit a Two-Term Exponential Model
f2 = fit(x,y,'exp2')
f2 =
General model Exp2:
f2(x) = a*exp(b*x) + c*exp(d*x)
Coefficients (with 95% confidence bounds):
a = 2443 (-1.229e+12, 1.229e+12)
b = -0.2574 (-1.87e+04, 1.87e+04)
c = -2441 (-1.229e+12, 1.229e+12)
d = -0.2575 (-1.872e+04, 1.872e+04)
plot(f2,x,y)
Set Start Points
The toolbox calculates optimized start points for exponential fits based on the current data set. You can override the start points and specify your own values.
Find the order of the entries for coefficients in the first model ( f ) by using the coeffnames function.
coeffnames(f)
ans = 2x1 cell
{'a'}
{'b'}
If you specify start points, choose values appropriate to your data. Set arbitrary start points for coefficients a and b for example purposes.
f = fit(x,y,'exp1','StartPoint',[1,2])
f =
General model Exp1:
f(x) = a*exp(b*x)
Coefficients (with 95% confidence bounds):
a = 2.021 (1.89, 2.151)
b = -0.1812 (-0.2104, -0.152)
plot(f,x,y)
Examine Exponential Fit Options
Examine the fit options if you want to modify fit options such as coefficient starting values and constraint bounds appropriate for your data, or change algorithm settings. For details on these options, see the table of properties for NonlinearLeastSquares on the fitoptions reference page.
fitoptions('exp1')
ans =
Normalize: 'off'
Exclude: []
Weights: []
Method: 'NonlinearLeastSquares'
Robust: 'Off'
StartPoint: [1x0 double]
Lower: [1x0 double]
Upper: [1x0 double]
Algorithm: 'Trust-Region'
DiffMinChange: 1.0000e-08
DiffMaxChange: 0.1000
Display: 'Notify'
MaxFunEvals: 600
MaxIter: 400
TolFun: 1.0000e-06
TolX: 1.0000e-06 | 2021-09-19 15:01:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 1, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6604664921760559, "perplexity": 4080.912563394259}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-39/segments/1631780056890.28/warc/CC-MAIN-20210919125659-20210919155659-00098.warc.gz"} |
https://zbmath.org/?q=an:0996.34029&format=complete | # zbMATH — the first resource for mathematics
Asymptotic behaviour of a class of third-order delay differential equations. (English) Zbl 0996.34029
The authors present sufficient conditions for the delay differential equation $y'''(t)+a(t)y''(t)+b(t)y'(t)+c(t)y(g(t))=0\tag{1}$ to have property (B), that is, every nonoscillatory solution $$y(t)$$ to (1) satisfies $$y(t)y^{i}(t)>0$$, $$0\leq i\leq 3$$. The obtained results generalize some other known results.
##### MSC:
34C10 Oscillation theory, zeros, disconjugacy and comparison theory for ordinary differential equations
##### Keywords:
oscillation; asymptotic behaviour; property (B)
Full Text:
##### References:
[1] AHMAD S.-LAZER A. C.: On the oscillatory behaviour of a class of linear third order differential equations. J. Math. Anal. Appl. 28 (1970), 681-689. · Zbl 0167.07903 [2] DŽURINA J.: Asymptotic properties of third order delay differential equations. Czechoslovak Math. J. 45 (1995), 443-448. · Zbl 0842.34073 [3] DŽURINA J.: Asymptotic properties of the third order differential equations. Nonlinear Anal. 26 (1996), 33-39. · Zbl 0840.34076 [4] ERBE L.: Existence of oscillatory solutions and asymptotic behaviour for a class of third order linear differential equations. Pacific J. Math. 64 (1976), 369-385. · Zbl 0339.34030 [5] GYORI I.-LADAS G.: Oscillation Theory of Delay Differential Equations. Clarendon Press, Oxford, 1991. [6] JONES G. D.: Properties of solutions of a class of third-order differential equations. J. Math. Anal. Appl. 48 (1974), 165-169. · Zbl 0289.34046 [7] KIGURADZE I. T.: On the oscillation of solutions of the equation $$d^m u/dt^m +a(t)|u|^n \sgn u = 0$$. (Russian), Mat. Sb. 65 (1964), 172-187. · Zbl 0135.14302 [8] KUSANO T.-NAITO M.: Comparison theorems for functional differential equations with deviating arguments. J. Math. Soc. Japan 33 (1981), 509-532. · Zbl 0494.34049 [9] LAZER A. C.: The behaviour of solutions of the differential equation $$y''' + p(x)y' + q(x)y = 0$$. Pacific. J. Math. 17 (1966), 435-456. · Zbl 0143.31501 [10] PARHI N.-DAS P.: On the oscillation of a ciass of linear homogeneous third order differential equations. Arch. Math. (Brno) 34 (1998), 435-443. · Zbl 0973.34023 [11] PARHI N.-PADHI, SESHADEV: On asymptotic behaviour of delay differential equations of third order. Nonlinear Anal. 34 (1998), 391-403. · Zbl 0935.34063 [12] TRENCH W. F.: Canonical forms and principal systems for general disconjugate equations. Trans. Amer. Math. Soc. 189 (1974), 319-327. · Zbl 0289.34051
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching. | 2022-01-23 05:46:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6607369780540466, "perplexity": 1290.2155120744635}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-05/segments/1642320304134.13/warc/CC-MAIN-20220123045449-20220123075449-00002.warc.gz"} |
https://aptitude.gateoverflow.in/5604/cat-2015-question-48 | 174 views
Answer the following questions based on the information given below: In a sports event, six teams $\text{(A, B, C, D, E and F)}$ are competing against each other. Matches are scheduled in two stages. Each team plays three matches in stage – I and two matches in Stage – II. No team plays against the same team more than once in the event. No ties are permitted in any of the matches. The observations after the completion of Stage – I and Stage – II are as given below.
Stage-I:
• One team won all the three matches.
• Two teams lost all the matches.
• $\text{D}$ lost to $\text{A}$ but won against $\text{C}$ and $\text{F}$.
• $\text{E}$ lost to $\text{B}$ but won against $\text{C}$ and $\text{F}$.
• $\text{B}$ lost at least one match.
• $\text{F}$ did not play against the top team of stage-I.
Stage-II:
• The leader of Stage-I lost the next two matches.
• Of the two teams at the bottom after Stage-I, one team won both matches, while the other lost both matches.
• One more team lost both matches in Stage-II.
The only team(s) that won both matches in Stage-II is (are):
1. $\text{B}$
2. $\text{E & F}$
3. $\text{A, E & F}$
4. $\text{B, E & F}$
5. $\text{B & F}$ | 2022-11-30 13:05:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.2878194749355316, "perplexity": 1499.0640200898358}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-49/segments/1669446710764.12/warc/CC-MAIN-20221130124353-20221130154353-00848.warc.gz"} |
https://villagetalespublishing.com/liza-soberano-smaucb/53c546-what-is-a-prime-number | ( It also implies that the p p Any other natural number is composite, not prime. This number is a Mersenne prime, because it is written using the form 2 n-1, where n is itself a prime number. ; 2 is prime as it forms a rectangle that is 1 card by 2. {\displaystyle N} a x [81] For example, is an infinite arithmetic progression with modulus 9. n A prime number (or prime integer, often simply called a "prime" for short) is a positive integer p>1 that has no positive integer divisors other than 1 and p itself. A prime number is a natural number greater than 1, which is only divisible by 1 and itself. ) {\displaystyle a} ⋅ 1 However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. n and [54] There is also a set of Diophantine equations in nine variables and one parameter with the following property: the parameter is prime if and only if the resulting system of equations has a solution over the natural numbers. + p x is called prime if it is nonzero, has no multiplicative inverse (that is, it is not a unit), and satisfies the following requirement: whenever n [155] As well as in the hash function, prime numbers are used for the hash table size in quadratic probing based hash tables to ensure that the probe sequence covers the whole table. There are numerous ways to test whether a number is prime, but there's a trade off. The concept ‘factorization’ is defined on integers. 2 Some interesting fact about Prime numbers . [52] The first five of them are prime, but the sixth, There is no known efficient formula for primes. . For example: 17 is a prime number because you cannot divide it (without a remainder) by any number except 17 or 1: 17 ÷ 17 = 1. The algorithms with guaranteed-correct output include both deterministic (non-random) algorithms, such as the AKS primality test,[129] ( . 3 Generally, "prime" indicates minimality or indecomposability, in an appropriate sense. is prime is (approximately) inversely proportional to the number of digits in n b For example, 5 is a prime number because it can be divided by only 1 and 5. asymptotic distribution of primes given by the prime number theorem will also hold over much shorter intervals (of length about the square root of , exactly when {\displaystyle \;n=r\cdot s\;} randomly from {\displaystyle n} 7 p 1 [80] This difference is called the modulus of the progression. + Another way of saying it is that a prime number is defined as a whole number which has only 2 factors - 1 and itself. b Mersenne prime (or Marsenne prime): A Mersenne (also spelled Marsenne) prime is a specific type of prime number . {\displaystyle p} ν {\displaystyle \mathbb {P} } -th prime is known. mod 5= 1 x 5. p For instance, this is true of trial division. [109], Not every ring is a unique factorization domain. 8 {\displaystyle n} {\displaystyle \mathbf {P} } [31], The increased practical importance of computerized primality testing and factorization led to the development of improved methods capable of handling large numbers of unrestricted form. … {\displaystyle y} Wilson's theorem says that an integer n [121] The oldest method for generating a list of primes is called the sieve of Eratosthenes. {\displaystyle a} Prime and Composite Numbers. 1 {\displaystyle 2k.} If a number can be divided evenly by any other number not counting itself and 1, it is not prime and is referred to as a composite number. 2 x Numbers 2, 3, 5, 7, 11, 13, 17, etc. One group of modern primality tests is applicable to arbitrary numbers, while more efficient tests are available for numbers of special types. In this sense, prime numbers occur more often than squares of natural numbers, [39] Similarly, the sieve of Eratosthenes would not work correctly if it handled 1 as a prime, because it would eliminate all multiples of 1 (that is, all other numbers) and output only the single number 1. A prime number is a whole number greater than 1 whose only factors are 1 and itself. + n Euler's proof that there are infinitely many primes considers the sums of reciprocals of primes, Euler showed that, for any arbitrary real number [53], Many conjectures revolving about primes have been posed. p The converse does not hold in general, but does hold for unique factorization domains. n − / 2 n {\displaystyle p=x^{2}+y^{2}} Fun Facts about prime numbers; Prime numbers are often used in cryptography or security for technology and the internet. [21] Euler proved Alhazen's conjecture (now the Euclid–Euler theorem) that all even perfect numbers can be constructed from Mersenne primes. ⋅ Parents, Sign Up for Free Teachers, Sign Up for Free . a n By contrast, numbers with more than 2 factors are call composite numbers. A number that is not a prime is called a composite number. [127], In contrast, some other algorithms guarantee that their answer will always be correct: primes will always be determined to be prime and composites will always be determined to be composite. {\displaystyle 1} ) p First few prime numbers are : 2 3 5 7 11 13 17 19 23 ….. , for a natural number {\displaystyle p} 3. ζ At the start of the 19th century, Legendre and Gauss conjectured that as Two is the only even Prime number. [47], There are infinitely many prime numbers. . − 11= 1 x 11. For example, factors of 6 are 1,2,3 and 6, which are four factors in total. = [107], The fundamental theorem of arithmetic continues to hold (by definition) in unique factorization domains. Another more asymptotically efficient sieving method for the same problem is the sieve of Atkin. n ( {\displaystyle k} Although this method is simple to describe, it is impractical for testing the primality of large integers, because the number of tests that it performs grows exponentially as a function of the number of digits of these integers. n ≡ + is prime if / The question of how many integer prime numbers factor into a product of multiple prime ideals in an algebraic number field is addressed by Chebotarev's density theorem, which (when applied to the cyclotomic integers) has Dirichlet's theorem on primes in arithmetic progressions as a special case. For instance, Beiler writes that number theorist. a 0 A Fermat number F n is of the form 2 m + 1, where m signifies the power of 2 -- that is, m = 2 n, and where n is an integer. 3 p The first prime number, p 1 = 2 The second prime number, p 2 = 3 The third prime number, p 3 = 5 The fourth prime number, p 4 = 7 And so on. Covers Common Core Curriculum 4.OA.4 Play Now {\displaystyle p} Here we explain what exactly this means, give you a list of the prime numbers children need to know at primary school and provide you with some practice prime number questions and examples. − this cannot hold, since one of its factors divides both n and A factor is a whole numbers that can be divided evenly into ... View All Computer Science Definitions. of integers [176][177] Copyright 1999 - 2021, TechTarget The following table lists some of these tests. A prime number is a whole number that is only divisible by the number 1 and itself. − in its factorization, leaving only the other primes. A prime number can be divided, without a remainder, only by itself and by 1. , {\displaystyle 1/2^{n}} [17] Another Islamic mathematician, Ibn al-Banna' al-Marrakushi, observed that the sieve of Eratosthenes can be sped up by testing only the divisors up to the square root of the largest number to be tested. = p a x 2 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47. x A factor is a whole numbers that can be ... See complete definition. , Any natural number which is divisible by any other number apart from one and itself is called a composite number. − ζ {\displaystyle m/n} n [41] Some other more technical properties of prime numbers also do not hold for the number 1: for instance, the formulas for Euler's totient function or for the sum of divisors function are different for prime numbers than they are for 1. p ) In abstract algebra, objects that behave in a generalized way like prime numbers include prime elements and prime ideals. A natural number is a positive nutural number that has at least one positive divisor other than one or itself. with one or more prime factors. {\displaystyle 2^{p}-1} {\displaystyle n} = i Multiplying an integer by its 3 {\displaystyle a\not \equiv 0} 1 / 2 [131] These methods can be used to generate large random prime numbers, by generating and testing random numbers until finding one that is prime; {\displaystyle n} 1 Composite Numbers. . [35] n 2 You can break down all numbers to prime numbers. The first few known values of n that produce Mersenne primes are where n = 2, n = 3, n = 5, n = 7, n = 13, n = 17, n = 19, n = 31, n = 61, and n = 89. In works such as La Nativité du Seigneur (1935) and Quatre études de rythme (1949–50), he simultaneously employs motifs with lengths given by different prime numbers to create unpredictable rhythms: the primes 41, 43, 47 and 53 appear in the third étude, "Neumes rythmiques". 1 (if we can make it by multiplying other whole numbers it is a Composite Number) And 1 is not prime and also not composite. p b ! b [78] This function is closely connected to the prime numbers and to one of the most significant unsolved problems in mathematics, the Riemann hypothesis. 2 ν ≡ , is the limiting probability that two random numbers selected uniformly from a large range are relatively prime (have no factors in common). 7 1 a ) This prime numbers generator is used to generate the list of prime numbers from 1 to a number you specify. [136] This is why since 1992 (as of December 2018[update]) the largest known prime has always been a Mersenne prime. The most basic primality testing routine, trial division, is too slow to be useful for large numbers. ≤ q n , since there are five primes less than or equal to 11. 1. -tuples, patterns in the differences between more than two prime numbers. {\displaystyle {\sqrt {n}}} Cross out 1 because it is not a prime number. , but the sum would diverge (it is the harmonic series is known. Prime number definition is - any integer other than 0 or ± 1 that is not divisible without remainder by any other integers except ± 1 and ± the integer itself. Primes of shape, "Record 12-Million-Digit Prime Number Nets \$100,000 Prize", "PrimeGrid's Seventeen or Bust Subproject", “795-bit factoring and discrete logarithms,”, "Crypto needs more transparency, researchers warn", Journal of the London Mathematical Society, "Why Eisenstein proved the Eisenstein criterion and why Schönemann discovered it first", "SIC POVMs and Clifford groups in prime dimensions", "Prime Numbers and the Search for Extraterrestrial Intelligence", "The Curious Incident of the Dog in the Night-Time", Plus teacher and student package: prime numbers, Fast Online primality test with factorization, https://en.wikipedia.org/w/index.php?title=Prime_number&oldid=1004947229, Wikipedia pages semi-protected against vandalism, Articles containing potentially dated statements from December 2018, All articles containing potentially dated statements, Articles containing potentially dated statements from 2014, Articles containing potentially dated statements from December 2019, Articles containing potentially dated statements from October 2012, Pages using Sister project links with hidden wikidata, Pages using Sister project links with default search, Creative Commons Attribution-ShareAlike License, This page was last edited on 5 February 2021, at 05:39. A prime number has only 2 factors, the number 1 and itself. … + {\displaystyle a} For example, among the numbers 1 through 6, the numbers 2, 3, and 5 are the prime numbers,[5] as there are no other numbers that divide them evenly (without a remainder). although their ordering may differ. y {\displaystyle n} Remember, that 1 is neither prime nor composite. [124] In advanced mathematics, sieve theory applies similar methods to other problems. {\displaystyle 1+{\tfrac {1}{2}}+{\tfrac {1}{3}}+\dots } , [16], In 1640 Pierre de Fermat stated (without proof) Fermat's little theorem (later proved by Leibniz and Euler). {\displaystyle p-2} 2 For example, factorization or ramification of prime ideals when lifted to an extension field, a basic problem of algebraic number theory, bears some resemblance with ramification in geometry. These include Goldbach's conjecture, that every even integer greater than 2 can be expressed as the sum of two primes, and the twin prime conjecture, that there are infinitely many pairs of primes having just one even number between them. A prime number is a whole number greater than 1 whose only factors are 1 and itself. Several historical questions regarding prime numbers are still unsolved. + {\displaystyle p} 2 − , If the Riemann hypothesis is true, these fluctuations will be small, and the [112] For example, the fundamental theorem of arithmetic would need to be rephrased in terms of factorizations into primes greater than 1, because every number would have multiple factorizations with different numbers of copies of 1. Stronger forms of the theorem state that the sum of the reciprocals of these prime values diverges, and that different linear polynomials with the same The number M 19 M_{19} M 1 9 was proved to be prime by Cataldi in 1588 and this was the largest known prime for about 200 years until Euler proved that M 31 M_{31} M 3 1 is prime. [72] This shows that there are infinitely many primes, because if there were finitely many primes the sum would reach its maximum value at the biggest prime rather than growing past every ( Numbers that have more than two factors are called composite numbers. + [152], Several public-key cryptography algorithms, such as RSA and the Diffie–Hellman key exchange, are based on large prime numbers (2048-bit primes are common). ( For example, the prime field of a given field is its smallest subfield that contains both 0 and 1. n {\displaystyle \mu .} {\displaystyle 3.} [111] Arithmetic geometry also benefits from this notion, and many concepts exist in both geometry and number theory. ). If the list consists of the primes [161] Another example is Eisenstein's criterion, a test for whether a polynomial is irreducible based on divisibility of its coefficients by a prime number and its square. Click here for the full version: http://vn2.me/zutPrime numbers aren't too hard to define, but they still puzzle professional mathematicians. a A supply chain attack is a type of cyber attack that targets organizations by focusing on weaker links in an organization's ... A TrickBot is malware designed to steal banking information. For example, 2, 3, 5, 7, 11, 13, 17, 19, and 23 are all examples of prime numbers; dividing them by anything other than themselves, or the number one, results in a fraction. , {\displaystyle 6} ! n 1 is not prime, as it is specifically excluded in the definition. Any other natural number can be mapped into this system by replacing it by its remainder after division by Encryption always follows a fundamental rule: the algorithm -- or the actual procedure being used -- doesn't need to be kept secret, but the key does. ( Definitions about consumer technology including Internet acronyms, tech lingo, multimedia definitions, words about personal computing and peripherals as well as terms used … 6 x ( ⌋ [90], The Riemann hypothesis states that the zeros of the zeta-function are all either negative even numbers, or complex numbers with real part equal to 1/2. Prime numbers are divisible only by themselves and 1. The first few prime numbers are 2, 3, 5, 7, 11, 13, 17, 19, 23 and 29. / R , the task of providing one (or all) prime factors is referred to as factorization of n 1 a result that is known to follow from the Riemann hypothesis, while the much stronger Cramér conjecture sets the largest gap size at , + a In the terminology of abstract algebra, the ability to perform division means that modular arithmetic modulo a prime number forms a field or, more specifically, a finite field, while other moduli only give a ring but not a field. 6 For instance the original method of Carter and Wegman for universal hashing was based on computing hash functions by choosing random linear functions modulo large prime numbers. p in the same time as a single iteration of the Miller–Rabin test. Here’s a list of all the prime numbers up … − However, these are not useful for generating primes, as the primes must be generated first in order to compute the values of 1 If it has any other divisor, it cannot be prime. / -gon may be constructed using straightedge, compass, and an angle trisector if and only if the prime factors of {\displaystyle k} [86] The Hardy-Littlewood conjecture F predicts the density of primes among the values of quadratic polynomials with integer coefficients Hence, 7 is a prime number but 6 is not, instead it is a composite number.But always remember that 1 is neither prime nor composite. [61] The branch of number theory studying such questions is called additive number theory. and Fermat prime: A Fermat prime is a Fermat number that is also a prime number . The is composite then it answers yes with probability at most 1/2 and no with probability at least 1/2. A prime number is a number that can only be divided by itself and 1 without remainders. They have also been used in evolutionary biology to explain the life cycles of cicadas. This established the record for another century and when Lucas showed that M 127 M_{127} M 1 2 7 ( which is a 39 digit number ) is prime that took the record as far as the age of the electronic … Prime numbers are divisible only by the number 1 or itself. . b and {\displaystyle ab} [97] Equality of integers corresponds to congruence in modular arithmetic: ( Dirichlet's Theorem on Primes in Arithmetical Progressions", "The history of the primality of one: a selection of sources", David Joyce's English translation of Euclid's proof, "Empirical verification of the even Goldbach conjecture and computation of prime gaps up to, 3.1 Structure and randomness in the prime numbers, pp. + Therefore, the factor of a number (integer) is another integer that can divide the original into a third integer without leaving a reminder. {\displaystyle (n-1)!} Particularly fast methods are available for numbers of special forms, such as Mersenne numbers. 1 is not prime because it does not have two factors. Although conjectures have been formulated about the proportions of primes in higher-degree polynomials, they remain unproven, and it is unknown whether there exists a quadratic polynomial that (for integer arguments) is prime infinitely often. , the number to be tested and, for probabilistic algorithms, the number To find the Fermat number F n for an integer n , you first find m = 2 n , and then calculate 2 m + 1. {\displaystyle 2\equiv 9{\bmod {7}}} ( 1 Enter a number and the Prime Number Calculator will instantly tell you if it is a prime number or not. ( can be in the given list. − There are an infinite number of prime numbers. {\displaystyle n} m is called trial division. = x 1 -adic numbers. their highest common factor(HCF) will be 1. An API-centric application is a web service that is built using application programming interfaces (APIs) to exchange data with other applications. So 2 only has two factors. [15][32][33] The mathematical theory of prime numbers also moved forward with the Green–Tao theorem (2004) that there are arbitrarily long arithmetic progressions of prime numbers, and Yitang Zhang's 2013 proof that there exist infinitely many prime gaps of bounded size. B { \displaystyle n. } [ 63 ] however, there are numerous ways to test whether a number is. Ever exhibited was obtained on 7th January 2016 of Eratosthenes large numbers numbers generator is used to generate ever-larger numbers! A row ( called successive prime numbers is the sieve of Eratosthenes, is still used to generate list! We find successive prime numbers can be divided, without a remainder, only by themselves 1. 289–324, Section 12.1, Sums of two other non-unit elements explain the life cycles of cicadas lawyer, de. Oldest method for the same difference but is slower than elliptic curve method concludes that a number bigger 1... Can thus be considered the basic building blocks '' of the numbers, but was to! } evenly Leonhard Euler and his first major result, the Riemann hypothesis therefore, prime..., a well-known example of public key cryptography, prime numbers and their powers prime is called additive theory... ] and the Mersenne Prime… with one little tweak which both numbers are distributed ],... T be divided evenly into... View all computer Science Definitions one way, that can. The following table gives the largest known prime number within the natural numbers, the. And 11 where we can define F m, where n is a,... Bigger than 1 … a prime number theorem, pp what is a prime number again large... Key cryptography, prime numbers fibonacci brought the innovations from Islamic mathematics to... Mod n ) { \displaystyle n } goes to infinity as n { \displaystyle n.... Forms a rectangle that is not a prime number is a number greater 1. Can specify how many prime numbers have potential connections to quantum what is a prime number, and log is the Program list! 5 7 11 13 17 19 23 … current technology can only be by! Used for hash tables, and 6 their powers because there is no known simple formula separates prime numbers difference! To the prime number 12, are whole numbers sufficient condition for p { n! 12.1, Sums of two squares, pp states that the sum of six primes which of explicit! Natural number that can only run this algorithm for very small numbers than once ; this example has copies. An easy intro to prime numbers from 1 to 12, are whole numbers can! ) are called Euclid numbers form 2 n - 1, where is... What are the positive integers having only two factors, which is divisible by any other natural number a. Only divisible by any number other than 2 are not introduced in the first 5 numbers. Both 0 and 1 the positive integers having only two factors, which are, and. Number or not an unspecified base Core Curriculum 4.OA.4 Play Now which of the prime generator. Metaphorically in the large can be written as a divisor type. [ 165 ] [ 95 in... Define, but there 's a trade off benefits from this notion, and number! Mersenne primes October 2012 [ update ] the first 15 prime numbers Up … there are infinitely twin... First five of them are prime also not composite key idea is check! Let ’ s take a Fermat prime is called a pseudoprime and to one of the significant. Fast methods are based on Wilson 's theorem states that the sequence all have the same.... Another optimization is to check only primes as factors in total the remainder 3 are multiples of prime numbers is! Total prime numbers to prime numbers are numbers that can be seen noting... Here is the sieve of Eratosthenes, is an arbitrarily small positive number, and pseudorandom generators.
This site uses Akismet to reduce spam. Learn how your comment data is processed. | 2021-08-03 03:53:13 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7874175906181335, "perplexity": 471.15276229745643}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046154420.77/warc/CC-MAIN-20210803030201-20210803060201-00355.warc.gz"} |
https://aiida-vasp.readthedocs.io/en/latest/tutorials/fcc_si_dos.html | # 4. FCC Si density of states¶
Let us continue with the VASP tutorials. In particular, let us now extract the density of states for FCC Si as done in the second tutorial. We will from here and in the rest of the tutorials not run VASP in the regular manner and leave those exercise to the reader. Please do so, to appreciate how much simpler AiiDA-VASP makes these operations, in particular when you need to perform many or repeat calculations.
Again, we assume you have completed the previous tutorial.
As you might have noticed we have now populated the database with the silicon structure multiple times. Let us try instead to load some of the structures that already are present to save coding, reuse previous results and save data storage.
In previous tutorial, when developing the call script, we set the label. Please inspect this. E.g. for a lattice constant of 3.9, the label is set to silicon_at_3_9. Remember also that we cannot modify structures already present in the database. If we want to do this, we need to create a new Structure Data using the stored entry as an initializer. In this case we will not modify the structure, so we will simply use the unmodified one.
1. Let us first create a simple call script that uses the VASP workchain:
"""
Call script to calculate the total energies for one volume of standard silicon.
This particular call script set up a standard calculation that execute a calculation for
the fcc silicon structure.
"""
# pylint: disable=too-many-arguments, invalid-name, import-outside-toplevel
from aiida.common.extendeddicts import AttributeDict
from aiida.orm import Code, Bool, Str
from aiida.plugins import DataFactory, WorkflowFactory
from aiida.engine import submit
def get_structure(label):
from aiida.orm import QueryBuilder
qb = QueryBuilder()
qb.append(DataFactory('structure'), filters={'label': {'==': label}}, tag='structure')
# Pick any structure with this label, here, just the first
return qb.all()[0][0]
def main(code_string, incar, kmesh, structure, potential_family, potential_mapping, options):
"""Main method to setup the calculation."""
# First, we need to fetch the AiiDA datatypes which will
# house the inputs to our calculation
dict_data = DataFactory('dict')
kpoints_data = DataFactory('array.kpoints')
# Then, we set the workchain you would like to call
workchain = WorkflowFactory('vasp.verify')
# And finally, we declare the options, settings and input containers
settings = AttributeDict()
inputs = AttributeDict()
# organize settings
# set inputs for the following WorkChain execution
# set code
inputs.code = Code.get_from_string(code_string)
# set structure
inputs.structure = structure
# set k-points grid density
kpoints = kpoints_data()
kpoints.set_kpoints_mesh(kmesh)
inputs.kpoints = kpoints
# set parameters
inputs.parameters = dict_data(dict=incar)
# set potentials and their mapping
inputs.potential_family = Str(potential_family)
inputs.potential_mapping = dict_data(dict=potential_mapping)
# set options
inputs.options = dict_data(dict=options)
# set settings
inputs.settings = dict_data(dict=settings)
# set workchain related inputs, in this case, give more explicit output to report
inputs.verbose = Bool(True)
# submit the requested workchain with the supplied inputs
submit(workchain, **inputs)
if __name__ == '__main__':
# Code_string is chosen among the list given by 'verdi code list'
CODE_STRING = 'vasp@mycluster'
# INCAR equivalent
# Set input parameters
INCAR = {'encut': 240, 'ismear': -5, 'lorbit': 11}
# KPOINTS equivalent
# Set kpoint mesh
KMESH = [21, 21, 21]
# POTCAR equivalent
# Potential_family is chosen among the list given by
# 'verdi data vasp-potcar listfamilies'
POTENTIAL_FAMILY = 'pbe'
# The potential mapping selects which potential to use, here we use the standard
# for silicon, this could for instance be {'Si': 'Si_GW'} to use the GW ready
POTENTIAL_MAPPING = {'Si': 'Si'}
# jobfile equivalent
# In options, we typically set scheduler options.
# AttributeDict is just a special dictionary with the extra benefit that
# you can set and get the key contents with mydict.mykey, instead of mydict['mykey']
OPTIONS = AttributeDict()
OPTIONS.account = ''
OPTIONS.qos = ''
OPTIONS.resources = {'num_machines': 1, 'num_mpiprocs_per_machine': 16}
OPTIONS.queue_name = ''
OPTIONS.max_wallclock_seconds = 3600
OPTIONS.max_memory_kb = 1024000
# POSCAR equivalent
# Set the silicon structure
STRUCTURE = get_structure('silicon_at_3_9')
main(CODE_STRING, INCAR, KMESH, STRUCTURE, POTENTIAL_FAMILY, POTENTIAL_MAPPING, OPTIONS)
%/$wget https://github.com/aiida-vasp/aiida-vasp/raw/develop/tutorials/run_fcc_si_dos.py Where we have modified the input parameters according to the tutorial density of states for FCC Si. And importantly, we have also told the parser to give us the density of states. 2. And execute it: %/$ python run_fcc_si_dos.py
3. After a while we check the status:
%/$verdi process list -a PK Created Process label Process State Process status ------ --------- ---------------------------- ----------------- -------------------------------------- --------------------- 103820 2m ago VerifyWorkChain ⏹ Finished [0] 103821 2m ago VaspWorkChain ⏹ Finished [0] 103822 2m ago VaspCalculation ⏹ Finished [0] 4. Check the content of the topmost workchain: %/$ verdi process show 103820
(aiida) [efl@efl tutorials]\$ verdi process show 103820
Property Value
------------- ------------------------------------
type WorkChainNode
pk 103820
label
description
ctime 2019-10-08 14:30:46.965520+00:00
mtime 2019-10-08 14:31:53.971080+00:00
process state Finished
exit status 0
computer [6] mycluster
Inputs PK Type
--------------------- ------ -------------
clean_workdir 103818 Bool
code 101271 Code
kpoints 103810 KpointsData
max_iterations 103817 Int
options 103814 Dict
parameters 103811 Dict
potential_family 103812 Str
potential_mapping 103813 Dict
settings 103815 Dict
structure 103729 StructureData
verbose 103816 Bool
verify_max_iterations 103819 Int
Outputs PK Type
------------- ------ ----------
dos 103825 ArrayData
misc 103826 Dict
remote_folder 103823 RemoteData
retrieved 103824 FolderData
Called PK Type
-------- ------ -------------
CALL 103821 WorkChainNode
Log messages
---------------------------------------------
There are 1 log messages for this calculation
Run 'verdi process report 103820' to see them
And as you can see, dos is now listed in the output.
Now, as you may already know, running with such a dense k-point grid for the initial calculation is usually not a good idea. It is more efficient to pre-converge the electronic states using a more sparse k-point grid and then restart the calculation using a more dense k-point grid when calculating the density of states. | 2021-12-02 06:49:57 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5346670150756836, "perplexity": 12749.858681069467}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964361169.72/warc/CC-MAIN-20211202054457-20211202084457-00315.warc.gz"} |
http://physics.stackexchange.com/tags/double-slit-experiment/new | # Tag Info
2
What experiments should be done to test a theory depends on both the theory and its rivals. At some point, somebody may come up with a better theory. The crucial experiments will then depend on where quantum theory and its rival differ. If you're asking about quantum physics versus classical physics, there are many experiments where they would make ...
1
If a lump is supposed to react to the state of a hole, then the lump must know about the state of the hole. If the lump has no ability to see its surroundings, then the only way for the lump to make sure that it could go through a hole is to actually go through the hole. assumption that electron has no sight + observation that electron seems to know about ...
0
The reason why they put equations in is because physics is best described in terms of equations. They tried to remove randomness with equation, ultimately believing that QM is not that random, that it is behaving according to certain parameters. They tried to set the boundaries in the randomness/unpredictability, but the equations still couldn't remove the ...
0
Since the two slit experiment is a bit complicated for what I'm about to discuss, allow me to consider a simplified toy model. Consider an $N$ component state vector $| \alpha \rangle$, about to be acted on by some operation, and subsequently measured. This evolution can be represented by: $$| \beta \rangle = U |\alpha\rangle$$ The important bit here is ...
1
Assuming our only aim is to solve double slit experiment (or other problems that can be mapped into that). Actually the double slit experiment for electrons is a derivative/prediction from the quantum mechanical theory as it started with the Schrodinger equation ,its wavefunction solutions and the interpretation of differential operators with energy ...
3
You say that we are only interested in the probability distribution on the screen, $\rho(x,t) = \lvert \psi(x,t) \rvert^2$, which is essentially correct. So, why do we have $\psi(x,t) = \lvert\psi(x,t)\rvert\mathrm{e}^{\frac{\mathrm{i}}{\hbar}S(x,t)}$? Well, looking at the time evolution equation for the probability density, the continuity equation of ...
3
The non-negative real probability distribution can't interfere like a complex wave function can. To produce interference phenomena it is necessary for quantum mechanics to deal with probability amplitudes, not just probabilities.
0
First at all light is electromagnetic radiation. To produce EM radiation you need excited electrons (or protons or some of this stuff), they emit photons. The light you see is thermal radiation from sun, laser, LED, light bulb. Then ever one detect light in detail he will end with the detection of photons. The first who claimed that light consists of quanta ...
-6
The experiment is meant to show the wave-particle duality of light. When light goes through a slit, it is attracted gravitationally by the side-walls of the slit and seems to curve around the exit corners. Since light is radially projected, some light smears along the sidewall of the slit also and is refracted, though this amount of light is negligible when ...
1
She means to say that it would behave as if both the slits were open and fall on the screen according to the double slit pattern. The entire double slit pattern comes from interference.
0
The complex Euclidean norm $\left| \cdot \right|_2$ considered here is calculated by $$\left| \varphi \right|_2 = \varphi\varphi^*$$ where $\varphi\in\mathbb{C}^n$ and the $^*$ denotes complex conjugation. Then, since $$\left(\psi_1 + \psi_2\right)^* = \psi_1^* + \psi_2^*$$ one has \begin{align} \left|\psi_1 + \psi_2\right|_2 &= ...
2
Whether it needs to be done in the dark depends on the detection method. Most electron detectors are very sensitive to light, so you want to make sure no photons generate noise on your detector. For example, a scintillator screen + film is sensitive to light (could cover the scintillator with light proof material but risk stopping the electrons when you do ...
0
How the collapse works, we don't know, there are theories and theories. We believe that the interaction with the macroscopic apparatus is what collapses the wave-function. But if this is all the truth behind the collapse, we don't know. We have reasons to believe that the collapse is a non-local phenomenon, and that our macroscopic apparatuses interact ...
0
Wave interference patterns are created with more than one opening for the particle to pass through. With only one slit, the wave passing through it would not meet another wave beyond the screen to interfere with it. Imagine it like the waves produced by dropping a pebble into a calm body of water. When the uniform ripples get to the screen with two slits in ...
1
Yes. This is called Delayed-choice Double-slit Experiment first thought by physicist John Wheeler. Its result is that our present observations/ actions affect the past. When you observe/ close one slit after photon (or, any other Quantum denizen) has passed it (based on calculation), the two-slit interference pattern doesn’t form. It acts like it passed ...
1
The amplitude should be proportional to the width. In single slit diffraction calculations, the resultant amplitude is obtained by dividing the slit width into a large number of equal segments. For each segment, the amplitude is taken proportionally equal and a constant phase difference is taken as existing between adjacent segments. The resultant amplitude ...
Top 50 recent answers are included | 2015-03-02 01:16:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7023444175720215, "perplexity": 370.37638879945695}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-11/segments/1424936462577.56/warc/CC-MAIN-20150226074102-00266-ip-10-28-5-156.ec2.internal.warc.gz"} |
https://arithmos.wordpress.com/category/cardinal-arithmetic/ | ## [Sh:430.1-4] An easy inequality
January 31, 2011 at 16:38 | Posted in Cardinal Arithmetic, cov vs. pp, [Sh:355], [Sh:430] | 1 Comment
What follows is not really part of the first section of [Sh:430], but it does pin down an important connection between cov and pp. The proposition is a special case of part of Shelah’s “cov vs. pp theorem”, Theorem 5.4 on page 87 of Cardinal Arithmetic. Note that there are no special assumptions on ${\mu}$ here. Getting inequalities in the reverse direction (i.e., showing that covering numbers are less than pp numbers) is generally a more difficult proposition.
Let us recall that ${{\rm cov}(\mu,\mu,\kappa^+, 2)}$ is the minimum cardinality of a set ${\mathcal{P}\subseteq[\mu]^{<\mu}}$ such that for any ${A\in [\mu]^\kappa}$ (equivalently ${[\mu]^{\leq\kappa}}$), there is a ${B\in\mathcal{P}}$ such that ${A\subseteq B}$.
Proposition 1 If ${\mu}$ is singular of cofinality ${\kappa}$, then
$\displaystyle {\rm pp}(\mu)\leq {\rm cov}(\mu,\mu,\kappa^+,2). \ \ \ \ \ (1)$
Proof: Suppose by way of contradiction that
$\displaystyle \theta:={\rm cov}(\mu,\mu,\kappa^+,2)<{\rm pp}(\mu). \ \ \ \ \ (2)$
By the definition of ${\rm pp}$, we can find a (not necessarily increasing) sequence of regular cardinals ${\langle \mu_i:i<\kappa\rangle}$ and an ultrafilter ${D}$ on ${\kappa}$ such that
$\displaystyle \{i<\kappa:\mu_i\leq\tau\}\notin D\text{ for each }\tau<\mu, \ \ \ \ \ (3)$
and
$\displaystyle \theta<\sigma:={\rm cf}\left(\prod_{i<\kappa}\mu_i, <_D\right). \ \ \ \ \ (4)$
Let ${\mathcal{P}\subseteq [\mu]^{<\mu}}$ be a family of cardinality ${\theta}$ standing as witness to ${{\rm cov}(\mu,\mu,\kappa^+,2)=\theta}$, and let ${\langle f_\alpha:\alpha<\sigma\rangle}$ be a ${<_D}$-increasing and cofinal sequence of functions in ${\prod_{i<\kappa}\mu_i}$. For each ${\alpha<\sigma}$, the range of ${f_\alpha}$ is a subset of ${\mu}$ of cardinality at most ${\kappa}$, and so we can find ${A_\alpha\in \mathcal{P}}$ such that
$\displaystyle {\rm ran}(f_\alpha)\subseteq A_\alpha. \ \ \ \ \ (5)$
Since ${\sigma}$ is a regular cardinal greater than ${\theta}$, there is a single ${A^*\in \mathcal{P}}$ such that
$\displaystyle |\{\alpha<\sigma: A_\alpha=A^*\}|=\sigma. \ \ \ \ \ (6)$
Thus, (by passing to a subsequence of ${\langle f_\alpha:\alpha<\sigma\rangle}$) we may as well assume that the range of each ${f_\alpha}$ is a subset of ${A^*}$.
But ${|A^*|<\mu}$, and so
$\displaystyle B:= \{i<\kappa:|A^*|<\mu_i\}\in D \ \ \ \ \ (7)$
by our choice of ${D}$. Let us now define a function ${g}$ by setting ${g(i)=0}$ if ${i\notin B}$, and
$\displaystyle g(i)=\sup(A^*\cap\mu_i) \ \ \ \ \ (8)$
whenever ${i}$ is in ${B}$. Our assumptions imply that ${g}$ is in ${\prod_{i<\kappa}\mu_i}$.
If ${\alpha<\sigma}$, then for each ${i\in B}$ we have
$\displaystyle f_\alpha(i)\leq g(i) \ \ \ \ \ (9)$
as ${f_\alpha(i)\in A^*\cap\mu_i}$.
It follows that ${g}$ is a function in ${\prod_{i<\kappa}\mu_i}$ such that${f_\alpha\leq_D g}$ for all ${\alpha<\sigma}$. This is absurd, given our choice of ${\langle f_\alpha:\alpha<\sigma\rangle}$, and the proof is complete. $\Box$
[Updated 2-7-11]
## [Sh:430.1-2] Framework for proof
Let us assume that ${\mu}$ is a singular cardinal with ${\rm{pp}(\mu)=\mu^+}$. We will prove that there is a family ${\mathcal{P}\subseteq [\mu]^{<\mu}}$ of cardinality ${\mu^+}$ such that every member of ${[\mu]^{\rm{cf}(\mu)}}$ is a subset of some member of ${\mathcal{P}}$.
Fix a sufficiently large regular cardinal ${\chi}$; we will be working with elementary submodels of the structure ${\langle H(\chi),\in, <_\chi\rangle}$ where ${<_\chi}$ is some appropriate well-ordering of ${H(\chi)}$ used to give us definable Skolem functions.
Let ${\mathfrak{M}=\langle M_\alpha:\alpha<\mu^+\rangle}$ be a ${\mu^+}$-approximating sequence, that is, ${\mathfrak{M}}$ is a ${\in}$-increasing and continuous sequence of elementary submodels of ${H(\chi)}$ such that for each ${\alpha<\mu^+}$,
• ${\mu\in M_0}$,
• ${M_\alpha}$ has cardinality ${\mu}$,
• ${M_\alpha\cap\mu^+}$ is an initial segment of ${\mu^+}$ (so ${\mu+1\subseteq M_\alpha\cap\mu^+}$), and
• ${\langle M_\beta:\beta<\alpha\rangle\in M_{\alpha+1}}$.
Let ${M^*}$ denote ${\bigcup_{\alpha<\mu^+} M_\alpha}$. Then ${M^*}$ is an elementary submodel of ${H(\chi)}$ of cardinality ${\mu^+}$ containing ${\mu^+}$ as both an element and subset. Define
$\displaystyle \mathcal{P}=M^*\cap [\mu]^{<\mu}. \ \ \ \ \ (1)$
Clearly ${\mathcal{P}\subseteq [\mu]^{<\mu}}$ and ${|\mathcal{P}|=\mu^+}$, so we need only verify that for any ${a\in [\mu]^{\rm{cf}(\mu)}}$, there is a ${b\in \mathcal{P}}$ with ${a\subseteq b}$.
In broad terms, this will be done via an “${I[\lambda]}$ argument” with ${\lambda=\mu^+}$, but we’ll fill in the details in further posts.
## [Sh:430.1-1] What are we aiming for?
We focus our attention on the first section of [Sh:430]: “equivalence of two covering properties”. The first result states the following:
Theorem 1 If ${\rm{pp}(\lambda)=\lambda^+}$, ${\lambda>\rm{cf}(\lambda)=\kappa>\aleph_0}$ then ${\rm{cov}(\lambda,\lambda,\kappa^+,2)=\lambda^+}$.
I am pretty sure that this isn’t what Shelah meant to write here — the assumption that ${\lambda}$ has uncountable cofinality means that the result is a trivial consequence of a deeper theorem in Cardinal Arithmetic [Update: Perhaps not! See 2nd update below]. I think that the proof he gives works even for the countable cofinality case, and this gives us something of interest because ${\lambda}$ could very well be a fixed point. So, here’s what I conjecture he meant to say (after changing notation and translating the “cov” statement into more standard form):
Theorem 2 Let ${\mu}$ be a singular cardinal. Then ${\rm{pp}(\mu)=\mu^+}$ if and only if there is a family ${\mathcal{P}\subseteq [\mu]^{<\mu}}$ of cardinality ${\mu^+}$ such that every member of ${[\mu]^{\rm{cf}(\mu)}}$ is a subset of some element of ${\mathcal{P}}$.
We’ll work through his proof as best we can, and see if my conjecture is correct.
UPDATE 1: I don’t think the conjecture is correct. It looks to me like the proof was originally written for singular of countable cofinality, but a mistake was discovered and the statement was corrected but a lot of the old proof didn’t get revised properly. The annotated content was revised, but the abstract was not. Anyway, I’ll ask Saharon if I can’t figure out what’s going on.
UPDATE 2: I think the theorem doesn’t follow from the Cardinal Arithmetic stuff. The problem I run into is that when Shelah says informally that “cov = pp when the cardinal has uncountable cofinality”, there are lots of disclaimers hidden in the background. But this underscores why I think what I’m doing here is important — I want to pin down what exactly is known and what exactly is still open in this area. | 2017-03-24 12:07:06 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 105, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.925103485584259, "perplexity": 131.28148893031752}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-13/segments/1490218187945.85/warc/CC-MAIN-20170322212947-00055-ip-10-233-31-227.ec2.internal.warc.gz"} |
https://damogranlabs.com/2018/03/ | ## RC Filter Frequency Pole Calculator
There are numerous RC filter pole calculators online and I happily used them for a quick calculations while DIYing. The problem is that these online calculators calculate frequency pole where gain…
## DIY Constrained Path Optimization
The Problem: Path optimization The exact origin of this problem is somewhat blurry but since I already have the solution I won’t put much effort into clarifying it. We have… | 2023-03-31 16:06:09 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8593140244483948, "perplexity": 3846.6882276971833}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296949644.27/warc/CC-MAIN-20230331144941-20230331174941-00209.warc.gz"} |
https://math.stackexchange.com/questions/3421607/lim-n-to-infty-sum-1n-left1-fracnn1-righta-n-for-any-positive-se | # $\lim_{N\to \infty} \sum_1^N\left(1-\frac{n}{N+1}\right)a_n$ for any positive sequence $\{a_n\}$
Let $$\{a_n\}$$ be a positive sequence. Suppose that
$$\lim_{N\to \infty} \sum_1^N\left(1-\frac{n}{N+1}\right)a_n$$ exists. Is the series $$\sum a_n$$ convergent?
Given that $$a_n\ge0$$ this is trivial. If $$\sum a_n$$ does not converge then $$\sum a_n=+\infty$$, and it's obvious that that implies $$\lim_{N\to \infty} \sum_1^N\left(1-\frac{n}{N+1}\right)a_n=\infty$$: If $$K$$ is fixed then $$\liminf_{N\to \infty} \sum_1^N\left(1-\frac{n}{N+1}\right)a_n\ge\lim_{N\to \infty} \sum_1^K\left(1-\frac{n}{N+1}\right)a_n=\sum_1^K a_n.$$ | 2021-08-05 09:26:42 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 9, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9956839680671692, "perplexity": 112.0571296551363}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-31/segments/1627046155458.35/warc/CC-MAIN-20210805063730-20210805093730-00675.warc.gz"} |
https://electronics.stackexchange.com/questions/102447/how-much-force-does-a-solenoid-really-have | # How much force does a solenoid really have?
This hopefully isn't too off topic for this forum, but here goes: after many days of tinkering I managed to get a circuit working that, in the end, energizes a solenoid to pull a lever. Sadly, even though the solenoid became energized and pulled its plunger in, it didn't have the strength to pull the lever. The solenoid I used says that it is rated to have a holding force of 1.1 lbs and an operating voltage of 12V.
I gave it 12V, and the solenoid snapped on and pulled its plunger in (it's a pull type of solenoid). But when tied the plunger to the lever, it seemed to lose all strength, not budging the lever at all...not even a wiggle. The same is true if I resisted the movement of the plunger with my hand--or just held it lightly in my hand. It seemed to lack any pull power when there was something there to actually pull. I then upped the voltage by stacking three 9V batteries in series, thinking it might give the solenoid some more muscle--but to no avail.
Is this related to its holding force? Should I buy a different solenoid that can hold more weight? Have others experienced this?
UPDATE
I measured the voltage of the batteries I was using to power the solenoid. It was very interesting. First off, they must have already been drained from powering the solenoid the last time because, although they're relatively new, their voltage from the get-go was reading about 14V instead of the 18V I would expect from putting two 9V in series. I connected them to the solenoid and watched the voltage drop down to less than 2V and continue dropping. So it would seem that at the current that the solenoid needs, the batteries can't sustain their voltage. Perhaps a 12V solenoid can't be powered with common batteries? Or am I thinking about this all wrong and I should expect to see the voltage of the batteries drop when connected to something that draws a current? Especially that much current? Is that normal operation for a battery? Or maybe I need to add a current-limiting resistor, while allowing enough current to go through to power the solenoid?
• I forgot to mention that I could also easily pull the plunger back out after it had been energized. There didn't seem to be much force keeping it in. And after I did, the plunger would slowly, almost reluctantly, pull it back in. It wasn't very snappy. – NickRamirez Mar 10 '14 at 18:00
• What power supply did you use and did you measure the current and confirm the 12V wasn't in fact dropping to something a bit lower and therefore reducing its magnetic pull force. Is it a dc solenoid and do you have a link or data sheet for it? – Andy aka Mar 10 '14 at 18:07
• Holding force will be the force it can apply once fully in : when there is no gap between plunger and pole piece. When there IS a gap (i.e. before pulling the plunger in) the force will be much lower. Reputable solenoids will have datasheets showing pull vs. distance at rated current. – user_1818839 Mar 10 '14 at 18:13
• I'm guessing that Andy has it right and your batteries are not able to hold up under the current draw that the solenoid wants. Measure your battery voltage while the solenoid is energized (or measure the solenoid current.) The pull force is proportional to the current, so if your voltage is dropping you won't be getting the current or the rated holding force. – John D Mar 10 '14 at 18:14
• Also you may have underestimated the requirements. You can use a homemade dynamometer (a ruler and a low stiffness spring) to measure how much force the lever actually needs - it may be higher than what the solenoid can provide, hence the fact it doesn't budge. – Mister Mystère Mar 10 '14 at 18:17
Force from a solenoid = $(N\cdot I)^2 \dfrac{μ_0\cdot A}{(2 g^2)}$
Where:
• μ0 = 4π×10-7
• F is the force in Newtons
• N is the number of turns of the coil
• I is the current in Amps in the coil
• A is the area in length units squared (cross sectional area of the coil)
• g is the length of the gap between the coil and the iron.
If you want to check your solenoids pull force and you know the number of turns and have a good estimate of the "gap" and the cross sectional area of the winding then use the formula.
However, it's notable that force is proportional to current-squared and if the 12V you used dropped down to (say) 6V under the load of the solenoid, the pull force would quarter because the current would have halved. It's also worth noting that the force is inversely proportional to the gap-squared - double the gap and the force quarters.
• There is a typo: "is proportional to [the current squared]" – Mister Mystère Mar 10 '14 at 18:16
• I don't think the datasheet gives all of this information. But it's possible that the voltage dropped. But then again, I was giving it more than double the voltage it needed at one point. I tried putting a resistor in series, to better control the current (for the sake of efficiency), but then it didn't energize at all. I'm not totally clear on what amount of current a solenoid needs. Specs only ever mention voltage. – NickRamirez Mar 10 '14 at 19:00
• @NickRamirez solenoids need current but if you look on the details linked on the page it tells you the dc resistance of the device and the power rating - from this you can reasonably infer current. Coil resistance is 36 ohms and this implies current should be 333mA. – Andy aka Mar 10 '14 at 19:10
• An interesting thing to note is that for a constant voltage source, the quantity N*I in the force equation is a constant, because magnet wire has a resistance per unit length. Higher N means higher resistance and lower I. If you rewrite the equation in terms of voltage, you get F = V^2 mu0 / (8 pi g^2 O^2) where O is ohms/meter of the wire. The A and N terms disappear, meaning that for a given voltage and gap size, force is COMPLETELY determined by O. The number of turns simply serves to limit the current drain on the battery. And O is larger for smaller diameter wire. – Anachronist Jun 8 '17 at 20:00
• The unit of the A and the g must be clear, can not be any unit as they will lead to a final unit of force, either in Newton (N) or in LBS. It could be only in meter (SI/metric unit) and feet (LBS/imperial). Here is one unclear explanation and here also the clear one. I have tried them with giving value as follow: I=1A, N=2000 turns, and g=0.1m, Calculation will give result: 7.896 N or 1.775 LBS. Result will be different if unit are not standard. – AirCraft Lover Dec 20 '19 at 21:44
As you have noted, the 9v batteries you are using are drained with no load (14v for 2 equals 7v for a single 9v, which is ~1.15v for each of the alkaline cells inside a 9v battery). At load, it drops significantly. This is because two reasons. One, your solenoid has a large inrush and holding current. At 150% the typical voltage of 12v, there is larger current pull as well.
But Two, 9v batteries are NOT designed for large current draws. They are designed for long life at small currents (like smoke detectors). 9v batteries are typically mde up of 6 small packs of akaline cells in series. The typical capacity of a 9v is ~200 maH (This might be wrong, ill correct it later). As each cell is drained (simultaniously), the inernal resistance rises, the voltage lowers, and the current it can source does as well. A common solenoid will drain a 9v battery quickly. Other common consumer batteries have much higher capacity, and can withstand larger current drains for longer without too much voltage droop. AA batteries have current capacities of 2000maH, or 2 AH. C and D even more. 6v Lantern Batteries as well. Then there are the rechargeable batteries like 12v SLAs or LiPos, all that regularly power high drain motors for hours.
Think of it like this. Portable radios all use multiple D batteries in series. The only time you see a 9v in a radio is as a memory backup battery. Ditch the 9v Batteries, they are not useful for testing motors or solenoids beyond simply seeing if they are working. | 2021-06-20 13:27:34 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4981696605682373, "perplexity": 1122.7599689598883}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-25/segments/1623487662882.61/warc/CC-MAIN-20210620114611-20210620144611-00282.warc.gz"} |
https://3dprinting.stackexchange.com/questions/801/cons-to-uv-printing | # Cons to UV printing
I've been curious about the various UV/Laser printers in (or coming into) market that use liquid resin. I've seen the samples of the Pegasus Touch, Form1, and the Carbon3D as examples. I like the specifications of the quality that machines can put out. However, in my experience with FDM printing, there almost always seems to be something not quite right about the print.
So, what are some major maintenance considerations for these types of 3D printing? Also, specifically, are supports and overhangs as much an issue in these types of printers as with FDM/FFF?
Here are some things I consider major maintenance considerations in FDM:
• Extruder Clogging
• Build platform conditions (i.e. levelness, clean, type of tape, bubbles in tape)
• Variances in material quality (i.e. diameter, purity, physical conditions)
• Mechanics of the machine (i.e. belts, rods, gear teeth, etc.)
• Build environment (i.e. ensuring steady temperature in the build environment, minimize draft)
I'm not necessarily looking for printer recommendations, more so technical insight on the technology.
Maintenance for a resin printer means keeping the vat or tray clean, using appropriate methods to remove the unused resin (or leaving it in the vat per manufacturer's directions). Cleaning the tray should be done also per manufacturer's spec, although each printer's user forum may provide better or more effective options.
The Pegasus Touch has a caution regarding dripping resin on the mirrors, so there's operational care considerations for these types of printers.
There is a build platform for these printers. The flatness and level are as critical or more so for resin printers, as the resolution can be astonishingly high. If any portion of a print does not bond to the platform, that entire print will have a failed section, creating an entirely failed print. Gravity is not particularly helpful in that respect, at least with the Pegasus Touch.
The release medium varies from device to device. The Pegasus Touch originally used PDMS (silicon release compound) and now uses what's called a SuperVat. The plastic material in the SuperVat is purported to provide better release and fewer failures, along with increased lifespan. PDMS becomes cloudy from repeated printing in the same location and can be torn away from the vat if the print does not properly release.
I've become aware of a product from Australia that has had good reports from use in a B9 Creator resin printer. The report indicates that it releases the model quite easily and barely turns cloudy. I have an order pending for this material, as I am hopeful it will perform as described.
The mechanics are also varied. One expects a system to raise and lower the build platform and to direct the laser or illumination system (DLP), but generally, this type of printer is somewhat simpler mechanically.
Because I live in a hot humid climate, my Pegasus Touch remains in the box, and my brain is about to explode with what I've learned of using it. Environmental conditions are likely to vary with different machines. I've seen references that 70 degrees F is too cold, others that say 70-75 degrees F is just fine, anything higher is too hot. Another user says that 65 degrees is good. The type of resin also becomes an important factor for environmental conditions.
The laser will create heat in the resin, so I'm inclined to believe that cooler is better. Different colors require different durations of laser light, somewhat akin to various plastics having different temperatures.
supports and overhangs are important considerations in an SLA or DLP printer, just as they are in FDM.
Expect also that many of the resin printers require that the user purchase only the product provided by the manufacturer. This isn't necessarily a negative as most of the resin sources are priced similarly.
If I've missed any part of your question, let me know.
Despite how many vendors make it appear, resin-curing SLA/DLP printers are industrial or commercial tools that are really not suitable for home desktop use. Here are the major downsides:
• Significantly more expensive to operate than FDM printers, in most cases.
• The resin is seriously toxic until fully cured. Fumes can be an issue for users handling raw resin, and you should NEVER put a photopolymer print into a chemically-sensitive environment like an aquarium or children's toy.
• Prints require messy post-processing to rinse off excess resin (usually with rubbing alcohol) and additional UV light exposure to finish hardening the photopolymer. The used alcohol/resin rinse mix is basically hazmat waste.
• In bottom-up printers, the window in the print vat is typically a consumable. Some printers require replacing the vat () after every liter or two of cured resin. (Technology is advancing rapidly here though.)
• The peel mechanism in bottom-up printers is often a major source of print flaws, due to the need to rock/tilt/slide the print to free it from the vat window.
• In top-down printers, you have to pay a large up-front consumables cost to initially fill the resin tank. (There are workarounds here like floating a layer of resin on brine, but these have their own technical issues.)
• If you leave the resin in the printer for an extended period, you'll probably find a hardened layer on the surface from stray light exposure and have to clean out or replace the vat.
• Resin vats/tanks need to be kept clean and free of cured resin debris from failed prints or stray light.
• Every combination of resin chemistry, printer light source, and printer optics requires specific tuning to dial in the photopolymer curing behavior. This means it's somewhat difficult to change resin brands, and you may effectively be locked into the printer manufacturer's resin. Many light sources will change in intensity or develop dim regions over time as they age, which will either harm print quality, require period recalibration, or require frequent light source replacement.
• There is a limited number of options for print materials. Technology here is advancing rapidly, but for the most part, SLA/DLP prints are non-load-bearing models with a limited range of color options.
These are some pretty significant "user experience" downsides compared to a consumer desktop FDM printer. It's more hazard, more work, and more cost than FDM. SLA/DLP is primarily advantageous where high resolution or high print speeds are required. | 2021-04-23 11:16:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.34791284799575806, "perplexity": 4180.444576880947}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-17/segments/1618039617701.99/warc/CC-MAIN-20210423101141-20210423131141-00234.warc.gz"} |
https://www.physicsforums.com/threads/differentials-in-integrals.103020/ | Differentials in Integrals
1. Dec 6, 2005
G01
I was just wondering about the dx and the end of an integral and evaluating integrals by substitution. When you evaluate integrals by substitution you can treat the dx as the differential of x. This seems to convenient lol. Some one must ahve known that the dx in an integral was the differential of x all along when they decided to end the integral symbol with the term dx! Could someone please give me insight into this. Jeeze I hope Im making some kinda sense here.:tongue2:
2. Dec 6, 2005
tmc
you mean something like this?
$$$\begin{array}{l} y = 3x^2 \\ \frac{{dy}}{{dx}} = 6x \\ dy = 6xdx \\ \int {dy} = \int {6xdx} \\ y = 3x^2 \\ \end{array}$$$
3. Dec 6, 2005
G01
yeah but i don't need to know how they work but why you can treat the dx in an integral as a differential. I know it sounds weird bear with me.
4. Dec 6, 2005
Zurtex
Well generally, when you learn basic calculus such as that is stated above, without the integral sign dx on its own it's just lazy notation and not rigorously defined. It's just a bit of time and hides a bunch of results in it such as the Fundamental Theorems of Calculus:
http://mathworld.wolfram.com/FundamentalTheoremsofCalculus.html
However, dx can actually be rigorously defined on its own and is frequently used in such subject matter as Vector Calculus. | 2018-12-12 14:04:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9359936118125916, "perplexity": 627.8228680334257}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376823895.25/warc/CC-MAIN-20181212134123-20181212155623-00203.warc.gz"} |
https://lexique.netmath.ca/en/unit-of-measurement-of-surface-area/ | # Unit of Measurement of Surface Area
## Unit of Measurement of Surface Area
The base unit to measure surface area is the square metre.
The units to measure surface are units of area, meaning units of measurement for a two-dimensional geometric object.
### Properties
• One square metre is equal to 100 square decimetres and we write: 1 m$$^{2}$$ = 100 dm$$^{2}$$
• One square metre is equal to 10 000 square centimetres and we write: 1 m$$^{2}$$ = 10 000 cm$$^{2}$$
• One square metre is equal to 1 000 000 square millimetres and we write: 1 m$$^{2}$$ = 1 000 000 mm$$^{2}$$
• One square kilometre is equal to 1 000 000 square metres and we write: 1 km$$^{2}$$ = 1 000 000 m$$^{2}$$ | 2021-11-29 18:22:49 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7581846117973328, "perplexity": 2009.694656182153}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-49/segments/1637964358786.67/warc/CC-MAIN-20211129164711-20211129194711-00175.warc.gz"} |
http://www.martinorr.name/blog/2011/05/ | # Martin's Blog
## Line bundles and morphisms to the dual variety
Posted by Martin Orr on Saturday, 28 May 2011 at 15:25
Over the complex numbers, the dual of an abelian variety is defined to have a Hodge structure dual to that of . Hence morphisms can be interpreted as bilinear forms on the Hodge structure of . Of particular importance are the morphisms corresponding to Hodge symplectic forms.
Last time we saw that can also be interpreted as a group of line bundles on . Today we will use this interpretation to define morphisms which turn out to be the same as those corresponding to Hodge symplectic forms. Then we generalise the definition of to base fields other than , which we will use next time in constructing dual abelian varieties over number fields. | 2019-03-22 14:38:22 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.958858847618103, "perplexity": 280.13662583368796}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-13/segments/1552912202671.79/warc/CC-MAIN-20190322135230-20190322161230-00036.warc.gz"} |
http://aimsciences.org/article/doi/10.3934/dcdsb.2017199?viewType=html | American Institue of Mathematical Sciences
Eventual smoothness and asymptotic behaviour of solutions to a chemotaxis system perturbed by a logistic growth
1 Dipartimento di Matematica e Informatica, Università di Cagliari, V. le Merello 92,09123. Cagliari, Italy 2 Cardiff School of Mathematics, Cardiff University, Senghennydd Road, Cardiff, CF24 4AG, United Kingdom
* Corresponding author: Giuseppe Viglialoro
Received December 2016 Revised April 2017 Published July 2017
Fund Project: TEW would like to thank St John's College, Oxford and the Mathematical Biosciences Institute (MBI) at Ohio State University, for financially supporting this research through the National Science Foundation grant DMS 1440386 and BBSRC grant BKNXBKOO BK00.16
In this paper we study the chemotaxis-system
$\begin{equation*}\begin{cases}u_{t}=Δ u-χ \nabla · (u\nabla v)+g(u)&x∈ Ω, t>0, \\v_{t}=Δ v-v+u&x∈ Ω, t>0,\end{cases}\end{equation*}$
defined in a convex smooth and bounded domain
$Ω$
of
$\mathbb{R}^n$
,
$n≥q 1$
, with
$χ>0$
and endowed with homogeneous Neumann boundary conditions. The source
$g$
behaves similarly to the logistic function and satisfies
$g(s)≤q a -bs^α$
, for
$s≥q 0$
, with
$a≥q 0$
,
$b>0$
and
$α>1$
. Continuing the research initiated in [33], where for appropriate
$1 < p < α < 2$
and
$(u_0,v_0) ∈ C^0(\bar{Ω})× C^2(\bar{Ω})$
the global existence of very weak solutions
$(u,v)$
to the system (for any
$n≥q 1$
) is shown, we principally study boundedness and regularity of these solutions after some time. More precisely, when
$n=3$
, we establish that
-for all
$τ>0$
an upper bound for
$\frac{a}{b}, ||u_0||_{L^1(Ω)}, ||v_0||_{W^{2,α}(Ω)}$
can be prescribed in a such a way that
$(u,v)$
is bounded and Hölder continuous beyond
$τ$
;
-for all
$(u_0,v_0)$
, and sufficiently small ratio
$\frac{a}{b}$
, there exists a
$T>0$
such that
$(u,v)$
is bounded and Hölder continuous beyond
$T$
.
Finally, we illustrate the range of dynamics present within the chemotaxis system in one, two and three dimensions by means of numerical simulations.
Citation: Giuseppe Viglialoro, Thomas E. Woolley. Eventual smoothness and asymptotic behaviour of solutions to a chemotaxis system perturbed by a logistic growth. Discrete & Continuous Dynamical Systems - B, doi: 10.3934/dcdsb.2017199
References:
[1] M. Aida, T. Tsujikawa, M. Efendiev, A. Yagi, M. Mimura, Lower estimate of the attractor dimension for a chemotaxis growth system, J. London. Math. Soc., 74 (2006), 453-474. doi: 10.1112/S0024610706023015. [2] J. L. Aragón, R. A. Barrio, T. E. Woolley, R. E. Baker and P. K. Maini, Nonlinear effects on turing patterns: Time oscillations and chaos, Phys. Rev. E, 86 (2012), 026201. [3] N. Bellomo, A. Bellouquid, Y. Tao, M. Winkler, Toward a mathematical theory of Keller-Segel models of pattern formation in biological tissues, Math. Models Methods Appl. Sci., 25 (2015), 1663-1763. doi: 10.1142/S021820251550044X. [4] J. Belmonte-Beitia, T. E. Woolley, J. G. Scott, P. K. Maini and E. A. Gaffney, Modelling biological invasions: Individual to population scales at interfaces, J. Theor. Biol. , 334 (2013), 1 – 12, URL http://www.sciencedirect.com/science/article/pii/S0022519313002646. [5] S. W. Cho, S. Kwak, T. E. Woolley, M. J. Lee, E. J. Kim, R. E. Baker, H. J. Kim, J. S. Shin, C. Tickle, P. K. Maini, H. S. Jung, Interactions between shh, sostdc1 and wnt signaling and a new feedback loop for spatial patterning of the teeth, Development, 138 (2011), 1807-1816. doi: 10.1242/dev.056051. [6] T. Cieślak and M. Winkler, Finite-time blow-up in a quasilinear system of chemotaxis Nonlinearity, 21 (2008), 1057. [7] M. A. Farina, M. Marras, G. Viglialoro, On explicit lower bounds and blow-up times in a model of chemotaxis, Discret. Contin. Dyn. Syst. Suppl, (), 409-417. doi: 10.3934/proc.2015.0409. [8] D. Horstmann, G. Wang, Blow-up in a chemotaxis model without symmetry assumptions, Eur. J. Appl. Math., 12 (2001), 159-177. doi: 10.1017/S0956792501004363. [9] D. Horstmann, M. Winkler, Boundedness vs. blow-up in a chemotaxis system, J. Differ. Eqns., 215 (2005), 52-107. doi: 10.1016/j.jde.2004.10.022. [10] W. Jäger, S. Luckhaus, On explosions of solutions to a system of partial differential equations modelling chemotaxis, T. Am. Math. Soc., 329 (1992), 819-824. doi: 10.1090/S0002-9947-1992-1046835-6. [11] E. F. Keller, L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theor. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5. [12] O. A. Ladyženskaja, V. A. Solonnikov and N. N. Ural'ceva, Linear and Quasi-Linear Equations of Parabolic Type in Translations of Mathematical Monographs, vol. 23, American Mathematical Society, 1988. [13] J. Lankeit, Eventual smoothness and asymptotics in a three-dimensional chemotaxis system with logistic source, J. Differ. Equations., 258 (2015), 1158-1191. doi: 10.1016/j.jde.2014.10.016. [14] P. K. Maini, T. E. Woolley, R. E. Baker, E. A. Gaffney, S. S. Lee, Turing's model for biological pattern formation and the robustness problem, Interface Focus, 2 (2012), 487-496. doi: 10.1098/rsfs.2011.0113. [15] P. K. Maini, T. E. Woolley, E. A. Gaffney and R. E. Baker, The Once and Future Turing chapter 15: Biological pattern formation, Cambridge University Press, 2016. [16] M. Marras, S. Vernier-Piro, G. Viglialoro, Lower bounds for blow-up in a parabolic-parabolic Keller-Segel system, Discret. Contin. Dyn. Syst. Suppl, (), 809-916. doi: 10.3934/proc.2015.0809. [17] M. Marras, S. Vernier-Piro, G. Viglialoro, Blow-up phenomena in chemotaxis systems with a source term, Math. Method. Appl. Sci., 39 (2016), 2787-2798. doi: 10.1002/mma.3728. [18] M. Marras, G. Viglialoro, Blow-up time of a general Keller-Segel system with source and damping terms, C. R. Acad. Bulg. Sci., 69 (2016), 687-696. [19] J. D. Murray, Mathematical Biology Ⅱ: Spatial Models and Biomedical Applications vol. 2, 3rd edition, Springer-Verlag, 2003. [20] T. Nagai, Blowup of nonradial solutions to parabolic-elliptic systems modeling chemotaxis intwo-dimensional domains, J. Inequal. Appl., 6 (2001), 37-55. doi: 10.1155/S1025583401000042. [21] K. Osaki, T. Tsujikawa, A. Yagi, M. Mimura, Exponential attractor for a chemotaxis-growth system of equations, Nonlinear Anal. Theory Methods Appl., 51 (2002), 119-144. doi: 10.1016/S0362-546X(01)00815-X. [22] K. Osaki, A. Yagi, Finite dimensional attractor for one-dimensional Keller-Segel equations, Funkcial. Ekvacioj., 44 (2001), 441-470. [23] K. J. Painter, T. Hillen, Spatio-temporal chaos in a chemotaxis model, Physica D, 240 (2011), 363-375. doi: 10.1016/j.physd.2010.09.011. [24] L. E. Payne, J. C. Song, Lower bounds for blow-up in a model of chemotaxis, J. Math. Anal. Appl., 385 (2012), 672-676. doi: 10.1016/j.jmaa.2011.06.086. [25] L. -E. Persson and N. Samko, Inequalities and Convexity, in Operator Theory, Operator Algebras and Applications, Springer Basel, 2014,279–306. [26] M. M. Porzio, V. Vespri, Holder estimates for local solutions of some doubly nonlinear degenerate parabolic equations, J. Differ. Equ., 103 (1993), 146-178. doi: 10.1006/jdeq.1993.1045. [27] Y. Tao, Boundedness in a chemotaxis model with oxygen consumption by bacteria, J. Math. Anal. Appl., 381 (2011), 521-529. doi: 10.1016/j.jmaa.2011.02.041. [28] Y. Tao, M. Winkler, Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity, J. Differ. Eqns., 252 (2012), 692-715. doi: 10.1016/j.jde.2011.08.019. [29] J. I. Tello, M. Winkler, A chemotaxis system with logistic source, Commun. Part. Diff. Eq., 32 (2007), 849-877. doi: 10.1080/03605300701319003. [30] P.-F. Verhulst, Notice sur la loi que la population poursuit dans son accroissement, Correspondance mathématique et physique, 10 (1838), 113-121. [31] G. Viglialoro, On the blow-up time of a parabolic system with damping terms, C. R. Acad. Bulg. Sci., 67 (2014), 1223-1232. [32] G. Viglialoro, Blow-up time of a Keller-Segel-type system with Neumann and Robin boundary conditions, Diff. Int. Eqns., 29 (2016), 359-376. [33] G. Viglialoro, Very weak global solutions to a parabolic-parabolic chemotaxis-system with logistic source, J. Math. Anal. Appl., 439 (2016), 197-212. doi: 10.1016/j.jmaa.2016.02.069. [34] G. Viglialoro, Boundedness properties of very weak solutions to a fully parabolic chemotaxis-system with logistic source, Nonlinear Anal. Real World Appl., 34 (2017), 520-535. doi: 10.1016/j.nonrwa.2016.10.001. [35] M. Winkler, Chemotaxis with logistic source: very weak global solutions and their boundedness properties, J. Math. Anal. Appl., 348 (2008), 708-729. doi: 10.1016/j.jmaa.2008.07.071. [36] M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Differ. Equations, 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008. [37] M. Winkler, Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source, Commun. Part. Diff. Eq., 35 (2010), 1516-1537. doi: 10.1080/03605300903473426. [38] M. Winkler, Does a 'volume-filling effect' always prevent chemotactic collapse?, Math. Method. Appl. Sci., 33 (2010), 12-24. doi: 10.1002/mma.1146. [39] M. Winkler, Finite-time blow-up in the higher-dimensional parabolic-parabolic Keller-Segel system, J. Math. Pures Appl., 100 (2013), 748-767. doi: 10.1016/j.matpur.2013.01.020. [40] M. Winkler, K. C. Djie, Boundedness and finite-time collapse in a chemotaxis system with volume-filling effect, Nonlinear Anal. Theory Methods Appl., 72 (2010), 1044-1064. doi: 10.1016/j.na.2009.07.045. [41] T. E. Woolley, Spatiotemporal Behaviour of Stochastic and Continuum Models for Biological Signalling on Stationary and Growing Domains} PhD thesis, University of Oxford, 2011. [42] T. E. Woolley, 50 Visions of Mathematics chapter 48: Mighty Morphogenesis, Oxford Univ. Press, 2014. [43] T. E. Woolley, R. E. Baker, E. A. Gaffney and P. K. Maini, Stochastic reaction and diffusion on growing domains: Understanding the breakdown of robust pattern formation Phys. Rev. E, 84 (2011), 046216. [44] T. E. Woolley, R. E. Baker, C. Tickle, P. K. Maini, M. Towers, Mathematical modelling of digit specification by a sonic hedgehog gradient, Dev. Dynam., 243 (2014), 290-298. doi: 10.1002/dvdy.24068.
show all references
References:
[1] M. Aida, T. Tsujikawa, M. Efendiev, A. Yagi, M. Mimura, Lower estimate of the attractor dimension for a chemotaxis growth system, J. London. Math. Soc., 74 (2006), 453-474. doi: 10.1112/S0024610706023015. [2] J. L. Aragón, R. A. Barrio, T. E. Woolley, R. E. Baker and P. K. Maini, Nonlinear effects on turing patterns: Time oscillations and chaos, Phys. Rev. E, 86 (2012), 026201. [3] N. Bellomo, A. Bellouquid, Y. Tao, M. Winkler, Toward a mathematical theory of Keller-Segel models of pattern formation in biological tissues, Math. Models Methods Appl. Sci., 25 (2015), 1663-1763. doi: 10.1142/S021820251550044X. [4] J. Belmonte-Beitia, T. E. Woolley, J. G. Scott, P. K. Maini and E. A. Gaffney, Modelling biological invasions: Individual to population scales at interfaces, J. Theor. Biol. , 334 (2013), 1 – 12, URL http://www.sciencedirect.com/science/article/pii/S0022519313002646. [5] S. W. Cho, S. Kwak, T. E. Woolley, M. J. Lee, E. J. Kim, R. E. Baker, H. J. Kim, J. S. Shin, C. Tickle, P. K. Maini, H. S. Jung, Interactions between shh, sostdc1 and wnt signaling and a new feedback loop for spatial patterning of the teeth, Development, 138 (2011), 1807-1816. doi: 10.1242/dev.056051. [6] T. Cieślak and M. Winkler, Finite-time blow-up in a quasilinear system of chemotaxis Nonlinearity, 21 (2008), 1057. [7] M. A. Farina, M. Marras, G. Viglialoro, On explicit lower bounds and blow-up times in a model of chemotaxis, Discret. Contin. Dyn. Syst. Suppl, (), 409-417. doi: 10.3934/proc.2015.0409. [8] D. Horstmann, G. Wang, Blow-up in a chemotaxis model without symmetry assumptions, Eur. J. Appl. Math., 12 (2001), 159-177. doi: 10.1017/S0956792501004363. [9] D. Horstmann, M. Winkler, Boundedness vs. blow-up in a chemotaxis system, J. Differ. Eqns., 215 (2005), 52-107. doi: 10.1016/j.jde.2004.10.022. [10] W. Jäger, S. Luckhaus, On explosions of solutions to a system of partial differential equations modelling chemotaxis, T. Am. Math. Soc., 329 (1992), 819-824. doi: 10.1090/S0002-9947-1992-1046835-6. [11] E. F. Keller, L. A. Segel, Initiation of slime mold aggregation viewed as an instability, J. Theor. Biol., 26 (1970), 399-415. doi: 10.1016/0022-5193(70)90092-5. [12] O. A. Ladyženskaja, V. A. Solonnikov and N. N. Ural'ceva, Linear and Quasi-Linear Equations of Parabolic Type in Translations of Mathematical Monographs, vol. 23, American Mathematical Society, 1988. [13] J. Lankeit, Eventual smoothness and asymptotics in a three-dimensional chemotaxis system with logistic source, J. Differ. Equations., 258 (2015), 1158-1191. doi: 10.1016/j.jde.2014.10.016. [14] P. K. Maini, T. E. Woolley, R. E. Baker, E. A. Gaffney, S. S. Lee, Turing's model for biological pattern formation and the robustness problem, Interface Focus, 2 (2012), 487-496. doi: 10.1098/rsfs.2011.0113. [15] P. K. Maini, T. E. Woolley, E. A. Gaffney and R. E. Baker, The Once and Future Turing chapter 15: Biological pattern formation, Cambridge University Press, 2016. [16] M. Marras, S. Vernier-Piro, G. Viglialoro, Lower bounds for blow-up in a parabolic-parabolic Keller-Segel system, Discret. Contin. Dyn. Syst. Suppl, (), 809-916. doi: 10.3934/proc.2015.0809. [17] M. Marras, S. Vernier-Piro, G. Viglialoro, Blow-up phenomena in chemotaxis systems with a source term, Math. Method. Appl. Sci., 39 (2016), 2787-2798. doi: 10.1002/mma.3728. [18] M. Marras, G. Viglialoro, Blow-up time of a general Keller-Segel system with source and damping terms, C. R. Acad. Bulg. Sci., 69 (2016), 687-696. [19] J. D. Murray, Mathematical Biology Ⅱ: Spatial Models and Biomedical Applications vol. 2, 3rd edition, Springer-Verlag, 2003. [20] T. Nagai, Blowup of nonradial solutions to parabolic-elliptic systems modeling chemotaxis intwo-dimensional domains, J. Inequal. Appl., 6 (2001), 37-55. doi: 10.1155/S1025583401000042. [21] K. Osaki, T. Tsujikawa, A. Yagi, M. Mimura, Exponential attractor for a chemotaxis-growth system of equations, Nonlinear Anal. Theory Methods Appl., 51 (2002), 119-144. doi: 10.1016/S0362-546X(01)00815-X. [22] K. Osaki, A. Yagi, Finite dimensional attractor for one-dimensional Keller-Segel equations, Funkcial. Ekvacioj., 44 (2001), 441-470. [23] K. J. Painter, T. Hillen, Spatio-temporal chaos in a chemotaxis model, Physica D, 240 (2011), 363-375. doi: 10.1016/j.physd.2010.09.011. [24] L. E. Payne, J. C. Song, Lower bounds for blow-up in a model of chemotaxis, J. Math. Anal. Appl., 385 (2012), 672-676. doi: 10.1016/j.jmaa.2011.06.086. [25] L. -E. Persson and N. Samko, Inequalities and Convexity, in Operator Theory, Operator Algebras and Applications, Springer Basel, 2014,279–306. [26] M. M. Porzio, V. Vespri, Holder estimates for local solutions of some doubly nonlinear degenerate parabolic equations, J. Differ. Equ., 103 (1993), 146-178. doi: 10.1006/jdeq.1993.1045. [27] Y. Tao, Boundedness in a chemotaxis model with oxygen consumption by bacteria, J. Math. Anal. Appl., 381 (2011), 521-529. doi: 10.1016/j.jmaa.2011.02.041. [28] Y. Tao, M. Winkler, Boundedness in a quasilinear parabolic-parabolic Keller-Segel system with subcritical sensitivity, J. Differ. Eqns., 252 (2012), 692-715. doi: 10.1016/j.jde.2011.08.019. [29] J. I. Tello, M. Winkler, A chemotaxis system with logistic source, Commun. Part. Diff. Eq., 32 (2007), 849-877. doi: 10.1080/03605300701319003. [30] P.-F. Verhulst, Notice sur la loi que la population poursuit dans son accroissement, Correspondance mathématique et physique, 10 (1838), 113-121. [31] G. Viglialoro, On the blow-up time of a parabolic system with damping terms, C. R. Acad. Bulg. Sci., 67 (2014), 1223-1232. [32] G. Viglialoro, Blow-up time of a Keller-Segel-type system with Neumann and Robin boundary conditions, Diff. Int. Eqns., 29 (2016), 359-376. [33] G. Viglialoro, Very weak global solutions to a parabolic-parabolic chemotaxis-system with logistic source, J. Math. Anal. Appl., 439 (2016), 197-212. doi: 10.1016/j.jmaa.2016.02.069. [34] G. Viglialoro, Boundedness properties of very weak solutions to a fully parabolic chemotaxis-system with logistic source, Nonlinear Anal. Real World Appl., 34 (2017), 520-535. doi: 10.1016/j.nonrwa.2016.10.001. [35] M. Winkler, Chemotaxis with logistic source: very weak global solutions and their boundedness properties, J. Math. Anal. Appl., 348 (2008), 708-729. doi: 10.1016/j.jmaa.2008.07.071. [36] M. Winkler, Aggregation vs. global diffusive behavior in the higher-dimensional Keller-Segel model, J. Differ. Equations, 248 (2010), 2889-2905. doi: 10.1016/j.jde.2010.02.008. [37] M. Winkler, Boundedness in the higher-dimensional parabolic-parabolic chemotaxis system with logistic source, Commun. Part. Diff. Eq., 35 (2010), 1516-1537. doi: 10.1080/03605300903473426. [38] M. Winkler, Does a 'volume-filling effect' always prevent chemotactic collapse?, Math. Method. Appl. Sci., 33 (2010), 12-24. doi: 10.1002/mma.1146. [39] M. Winkler, Finite-time blow-up in the higher-dimensional parabolic-parabolic Keller-Segel system, J. Math. Pures Appl., 100 (2013), 748-767. doi: 10.1016/j.matpur.2013.01.020. [40] M. Winkler, K. C. Djie, Boundedness and finite-time collapse in a chemotaxis system with volume-filling effect, Nonlinear Anal. Theory Methods Appl., 72 (2010), 1044-1064. doi: 10.1016/j.na.2009.07.045. [41] T. E. Woolley, Spatiotemporal Behaviour of Stochastic and Continuum Models for Biological Signalling on Stationary and Growing Domains} PhD thesis, University of Oxford, 2011. [42] T. E. Woolley, 50 Visions of Mathematics chapter 48: Mighty Morphogenesis, Oxford Univ. Press, 2014. [43] T. E. Woolley, R. E. Baker, E. A. Gaffney and P. K. Maini, Stochastic reaction and diffusion on growing domains: Understanding the breakdown of robust pattern formation Phys. Rev. E, 84 (2011), 046216. [44] T. E. Woolley, R. E. Baker, C. Tickle, P. K. Maini, M. Towers, Mathematical modelling of digit specification by a sonic hedgehog gradient, Dev. Dynam., 243 (2014), 290-298. doi: 10.1002/dvdy.24068.
Simulations of system (45) in one dimension with varying value of $\alpha$, given beneath each subfigure. Each subfigure contains the system evaluated at the time points $t=1$, 10, 50 and 100. The remaining parameters values are $a=1$, $b=1.1$ and $\chi=6$. The domain was discretised into 1000 equally spaced points
Simulations of system (45) in one dimension. The simulations are nearly identical to those seen in Figure 1(a). However, each simulation involves a single parameter change. Specifically, in (a) a larger initial condition for $u$ was used (100 was added to the mean); in (b) the parameter $b$ was reduced to 0.2; Finally, in (c) the spatial solution domain has been reduced from 10 to 1
Simulations of system (45) in two dimensions with varying value of $\alpha$, given beneath each subfigure. Evolution time shown above each subfigure. The remaining parameters values are $a=1$, $b=1.1$ and $\chi=6$. The domain was triangulated into 24, 968 finite elements. The figure inset of (b) shows the full extent of the peak, which is growing without bound
Simulations of system (45) illustrating the density of $u$ in three dimensions with varying value of $\alpha$, given beneath each subfigure. Evolution time shown above each subfigure. The remaining parameters values are $a=1$, $b=1.1$ and $\chi=6$. The domain was discretised into 1, 139, 254 voxel elements. Apart from the light grey ball illustrating the boundary of the solution domain the images illustrate isosurfaces of the solution (i.e. surface that represent points of a constant value, thus, they are the three-dimensional analogue of contours). In Figure (a) there are five isosurfaces of value 1, 1.25, 1.5 1.75 and 2, coloured, yellow, green, blue, red and black, respectively. In Figure (b) there are three isosurfaces of value 1, 10, and $10^6$, coloured, yellow, blue and black, respectively
[1] Liangchen Wang, Yuhuan Li, Chunlai Mu. Boundedness in a parabolic-parabolic quasilinear chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems - A, 2014, 34 (2) : 789-802. doi: 10.3934/dcds.2014.34.789 [2] Xie Li, Zhaoyin Xiang. Boundedness in quasilinear Keller-Segel equations with nonlinear sensitivity and logistic source. Discrete & Continuous Dynamical Systems - A, 2015, 35 (8) : 3503-3531. doi: 10.3934/dcds.2015.35.3503 [3] Ke Lin, Chunlai Mu. Global dynamics in a fully parabolic chemotaxis system with logistic source. Discrete & Continuous Dynamical Systems - A, 2016, 36 (9) : 5025-5046. doi: 10.3934/dcds.2016018 [4] Pan Zheng, Chunlai Mu, Xuegang Hu. Boundedness and blow-up for a chemotaxis system with generalized volume-filling effect and logistic source. Discrete & Continuous Dynamical Systems - A, 2015, 35 (5) : 2299-2323. doi: 10.3934/dcds.2015.35.2299 [5] Shijie Shi, Zhengrong Liu, Hai-Yang Jin. Boundedness and large time behavior of an attraction-repulsion chemotaxis model with logistic source. Kinetic & Related Models, 2017, 10 (3) : 855-878. doi: 10.3934/krm.2017034 [6] Rachidi B. Salako, Wenxian Shen. Spreading speeds and traveling waves of a parabolic-elliptic chemotaxis system with logistic source on $\mathbb{R}^N$. Discrete & Continuous Dynamical Systems - A, 2017, 37 (12) : 6189-6225. doi: 10.3934/dcds.2017268 [7] Tomomi Yokota, Noriaki Yoshino. Existence of solutions to chemotaxis dynamics with logistic source. Conference Publications, 2015, 2015 (special) : 1125-1133. doi: 10.3934/proc.2015.1125 [8] Michael Winkler. Emergence of large population densities despite logistic growth restrictions in fully parabolic chemotaxis systems. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2777-2793. doi: 10.3934/dcdsb.2017135 [9] Xie Li, Yilong Wang. Boundedness in a two-species chemotaxis parabolic system with two chemicals. Discrete & Continuous Dynamical Systems - B, 2017, 22 (7) : 2717-2729. doi: 10.3934/dcdsb.2017132 [10] Wei Wang, Yan Li, Hao Yu. Global boundedness in higher dimensions for a fully parabolic chemotaxis system with singular sensitivity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (10) : 3663-3669. doi: 10.3934/dcdsb.2017147 [11] Qi Wang, Jingyue Yang, Feng Yu. Boundedness in logistic Keller--Segel models with nonlinear diffusion and sensitivity functions. Discrete & Continuous Dynamical Systems - A, 2017, 37 (9) : 5021-5036. doi: 10.3934/dcds.2017216 [12] Pan Zheng, Chunlai Mu, Xiaojun Song. On the boundedness and decay of solutions for a chemotaxis-haptotaxis system with nonlinear diffusion. Discrete & Continuous Dynamical Systems - A, 2016, 36 (3) : 1737-1757. doi: 10.3934/dcds.2016.36.1737 [13] Ke Lin, Chunlai Mu. Convergence of global and bounded solutions of a two-species chemotaxis model with a logistic source. Discrete & Continuous Dynamical Systems - B, 2017, 22 (6) : 2233-2260. doi: 10.3934/dcdsb.2017094 [14] Yilong Wang, Zhaoyin Xiang. Boundedness in a quasilinear 2D parabolic-parabolic attraction-repulsion chemotaxis system. Discrete & Continuous Dynamical Systems - B, 2016, 21 (6) : 1953-1973. doi: 10.3934/dcdsb.2016031 [15] Giuseppina Autuori, Patrizia Pucci. Kirchhoff systems with nonlinear source and boundary damping terms. Communications on Pure & Applied Analysis, 2010, 9 (5) : 1161-1188. doi: 10.3934/cpaa.2010.9.1161 [16] Hao Yu, Wei Wang, Sining Zheng. Boundedness of solutions to a fully parabolic Keller-Segel system with nonlinear sensitivity. Discrete & Continuous Dynamical Systems - B, 2017, 22 (4) : 1635-1644. doi: 10.3934/dcdsb.2017078 [17] Shaohua Chen. Boundedness and blowup solutions for quasilinear parabolic systems with lower order terms. Communications on Pure & Applied Analysis, 2009, 8 (2) : 587-600. doi: 10.3934/cpaa.2009.8.587 [18] Pan Zheng. Global boundedness and decay for a multi-dimensional chemotaxis-haptotaxis system with nonlinear diffusion. Discrete & Continuous Dynamical Systems - B, 2016, 21 (6) : 2039-2056. doi: 10.3934/dcdsb.2016035 [19] Jiashan Zheng. Boundedness of solutions to a quasilinear higher-dimensional chemotaxis-haptotaxis model with nonlinear diffusion. Discrete & Continuous Dynamical Systems - A, 2017, 37 (1) : 627-643. doi: 10.3934/dcds.2017026 [20] Alexandre Montaru. Wellposedness and regularity for a degenerate parabolic equation arising in a model of chemotaxis with nonlinear sensitivity. Discrete & Continuous Dynamical Systems - B, 2014, 19 (1) : 231-256. doi: 10.3934/dcdsb.2014.19.231
2016 Impact Factor: 0.994
Tools
Article outline
Figures and Tables | 2018-01-24 09:31:09 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.6751901507377625, "perplexity": 4229.467461338321}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-05/segments/1516084893629.85/warc/CC-MAIN-20180124090112-20180124110112-00548.warc.gz"} |
https://socratic.org/questions/how-do-you-simplify-d-2-4-1 | How do you simplify (d^2)^-4?
May 30, 2018
(d^2)^(-4)=color(blue)(1/d^8
Explanation:
Simplify:
${\left({d}^{2}\right)}^{- 4}$
Apply the power rule of exponents: ${\left({a}^{m}\right)}^{m} = {a}^{m \cdot n}$
${d}^{2 \cdot \left(- 4\right)}$
${d}^{- 8}$
Apply the negative exponent rule: ${a}^{- m} = \frac{1}{a} ^ m$.
${d}^{- 8} = \frac{1}{d} ^ 8$
May 30, 2018
${d}^{-} 8 \mathmr{and} \frac{1}{d} ^ 8$
Because of the power of a power property, you'll want to multiply your exponents. 2 * -4 = -8, so this gives you ${d}^{-} 8$ or, when simplified even further, $\frac{1}{d} ^ 8$. | 2019-11-11 22:26:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 10, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9504368305206299, "perplexity": 1280.216146282613}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496664439.7/warc/CC-MAIN-20191111214811-20191112002811-00368.warc.gz"} |
http://mathoverflow.net/questions/83801/the-boundary-map-of-kan-simplicial-sets | # The boundary map of Kan simplicial sets
I don't know whether or not the following is a research level question, but since it is concerned with simplicial sets and since they are very popular these days, I think this is the right place to ask:
Suppose $S_\bullet$ is a simplicial set,
$\partial(S_n) = \lbrace $$x_0,...,x_n$$| d_i(x_j)=d_{j-1}(x_i) \text{ for } i < j \rbrace \subset \times^{n+1}S_{n-1}$
is the simplicial kernel (simplicial boundery) of $S_\bullet$ in dimension $n$ and
$\Lambda^n_k = \lbrace \left(x_0,\ldots,\hat{x}_k,\ldots,x_n\right)| d_i(x_j)=d_{j-1}(x_i) \mbox{ for } i < j \mbox{ and } i,j \neq k \rbrace \subset \times^{n}S_{n-1}$
is the $k$-horn in dimension $n$.
Then if the simplicial set is Kan (has the Kan property) the map $$\begin{array}{cccl} Kan(n,k): & S_n &\rightarrow& \Lambda^n_k \\ ; & x & \mapsto & \left(d_1(x),\ldots,\widehat{d_k(x)},\ldots,d_n(x) \right) \end{array}$$ is surjective.
Now my question is:
If $S_\bullet$ is Kan, is the 'boundary map' $$\begin{array}{cccl} \partial_n: & S_n &\rightarrow& \partial(S_n)\\ ; & x & \mapsto & \left(d_1(x),\ldots,d_n(x) \right) \end{array}$$ surjective, too?
-
## 1 Answer
The surjectivity of your "Kan maps" is equivalent to the lifting property against all horn inclusions. Similarly the surjectivity of "boundary maps" is equivalent to the lifting property against all boundary inclusions $\partial \Delta^n \to \Delta^n$. The horn inclusions generate acyclic cofibartions and similarly boundary inclusions generate cofibrations. This means that "boundary maps" are surjective if and only if $S$ is a contractible Kan complex.
- | 2014-10-25 06:19:43 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 2, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7902019619941711, "perplexity": 576.2361297374072}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": false}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2014-42/segments/1414119647784.27/warc/CC-MAIN-20141024030047-00291-ip-10-16-133-185.ec2.internal.warc.gz"} |
http://tm.durusau.net/?cat=37&paged=2 | ## Archive for the ‘Maps’ Category
### 2.95 Million Satellite Images (Did I mention free?)
Saturday, April 2nd, 2016
From the post:
An instrument called the Advanced Spaceborne Thermal Emission and Reflection Radiometer — or ASTER, for short — has been taking pictures of the Earth since it launched into space in 1999.
In that time, it has photographed an incredible 99% of the planet’s surface.
Although it’s aboard NASA’s Terra spacecraft, ASTER is a Japanese instrument and most of its data and images weren’t free to the public — until now.
With 16 years’ worth of images, there are a lot to sort through.
One of Rebecca’s favorites:
You really need to select that image and view it at full size. I promise.
The Andes Mountains. Colors reflect changes in surface temperature, materials and elevation.
### Time Maps:…
Saturday, April 2nd, 2016
From the post:
In this blog post, I’ll describe a technique for visualizing many events across multiple timescales in a single image, where little or no zooming is required. It allows the viewer to quickly identify critical features, whether they occur on a timescale of milliseconds or months. It is adopted from the field of chaotic systems, and was originally conceived to study the timing of water drops from a dripping faucet. The visualization has gone by various names: return map, return-time map, and time vs. time plot. For conciseness, I will call them “time maps.” Though time maps have been used to visualize chaotic systems, they have not been applied to information technology. I will show how time maps can provide valuable insights into the behavior of Twitter accounts and the activity of a certain type of online entity, known as a bot.
This blog post is a shorter version of a paper I recently wrote, but with slightly different examples. The paper was accepted to the 2015 IEEE Big Data Conference. The end of the blog also contains sample Python code for creating time maps.
Building a time map is easy. First, imagine a series of events as dots along a time axis. The time intervals between each event are labeled as t1, t2, t3, t4, …
A time map is simply a two-dimensional scatterplot, where the xy coordinates of the events are: (t1,t2), (t2, t3), (t3, t4), and so on. On a time map, the purple dot would be plotted like this:
In other words, each point in the scatterplot represents an event. The x-coordinate of an event is the time between the event itself and the preceding event. An event’s y-coordinate is the time between the event itself and the subsequent event. The only points that are not displayed in a time map are the first and last events of the dataset.
Max goes on to cover the heuristics of time maps, along with the Python code for generating them.
Max’s time maps use a common time line for events and so aren’t well suited to visualizing overlapping narrative time frames such as occur in novels and/or real life.
I first saw this in a tweet by Data Science Renee
### Mapping Mountains – Tangram
Tuesday, March 22nd, 2016
From the post:
I’ve been spending a lot of time over the mountains of Northern California lately. To view mountains from above is to journey through time itself: over ancient shorelines, the trails of glaciers, the marks of countless seasons, and the front lines of perpetual tectonic struggle. Fly with me now, on a tour through the world of elevation data:
A stunning display of mapping technology!
Peter starts with an illustrated history of the depiction of elevation on maps, including a map that was a declared to be a military secret!
It’s a quick romp that leads to “Tangram functionality” which is described elsewhere as:
Tangram is a map renderer designed to grant you ludicrous levels of control over your map design. By drawing vector tiles live in a web browser, it allows real-time map design, display, and interactivity.
Using WebGL, Tangram saddles and rides your graphics card into a new world of cartographic exploration. Animated shaders, 3D buildings, and dynamic filtering can be combined to produce effects normally seen only in science fiction.
Map styles, data filters, labels, and even graphics card code can be defined in a human-readable and -writable plaintext scene file, and a JavaScript API permits direct interactive control of the style.
The balance of the post is a lengthy demonstration of Tangram that ends in a call for test pilots!
Tangram reminded of the Art of War by Sun Tzu, where it reads:
All armies prefer high ground to low and sunny places to dark.
All armies prefer Tangram map renderers to all others.
Seriously. Protesters, direct action movements, irregulars, etc. should take a long look at this post.
I first saw this in a tweet by Lynn Cherny.
### Superhuman Neural Network – Urban War Fighters Take Note
Wednesday, February 24th, 2016
Google Unveils Neural Network with “Superhuman” Ability to Determine the Location of Almost Any Image
From the post:
Here’s a tricky task. Pick a photograph from the Web at random. Now try to work out where it was taken using only the image itself. If the image shows a famous building or landmark, such as the Eiffel Tower or Niagara Falls, the task is straightforward. But the job becomes significantly harder when the image lacks specific location cues or is taken indoors or shows a pet or food or some other detail.
Nevertheless, humans are surprisingly good at this task. To help, they bring to bear all kinds of knowledge about the world such as the type and language of signs on display, the types of vegetation, architectural styles, the direction of traffic, and so on. Humans spend a lifetime picking up these kinds of geolocation cues.
So it’s easy to think that machines would struggle with this task. And indeed, they have.
Today, that changes thanks to the work of Tobias Weyand, a computer vision specialist at Google, and a couple of pals. These guys have trained a deep-learning machine to work out the location of almost any photo using only the pixels it contains.
Their new machine significantly outperforms humans and can even use a clever trick to determine the location of indoor images and pictures of specific things such as pets, food, and so on that have no location cues.
The full paper: PlaNet—Photo Geolocation with Convolutional Neural Networks.
Abstract:
Is it possible to build a system to determine the location where a photo was taken using just its pixels? In general, the problem seems exceptionally difficult: it is trivial to construct situations where no location can be inferred. Yet images often contain informative cues such as landmarks, weather patterns, vegetation, road markings, and architectural details, which in combination may allow one to determine an approximate location and occasionally an exact location. Websites such as GeoGuessr and View from your Window suggest that humans are relatively good at integrating these cues to geolocate images, especially en-masse. In computer vision, the photo geolocation problem is usually approached using image retrieval methods. In contrast, we pose the problem as one of classification by subdividing the surface of the earth into thousands of multi-scale geographic cells, and train a deep network using millions of geotagged images. While previous approaches only recognize landmarks or perform approximate matching using global image descriptors, our model is able to use and integrate multiple visible cues. We show that the resulting model, called PlaNet, outperforms previous approaches and even attains superhuman levels of accuracy in some cases. Moreover, we extend our model to photo albums by combining it with a long short-term memory (LSTM) architecture. By learning to exploit temporal coherence to geolocate uncertain photos, we demonstrate that this model achieves a 50% performance improvement over the single-image model.
You might think that with GPS engaged that the location of images is a done deal.
Not really. You can be facing in any direction from a particular GPS location and in a dynamic environment, analysts or others don’t have the time to sort out which images are relevant from those that are just noise.
Urban warfare does not occur on a global scale, bringing home the lesson it isn’t the biggest data set but the most relevant and timely data set that is important.
Relevantly oriented images and feeds are a natural outgrowth of this work. Not to mention pairing those images with other relevant data.
PS: Before I forget, enjoy paying the game at: www.geoguessr.com.
### Satellites in Global Development [How Do You Verify Satellite Images?]
Sunday, February 21st, 2016
Satellites in Global Development
From the webpage:
We have better systems to capture, analyze, and distribute data about the earth. This is fundamentally improving, and creating, opportunities for impact in global development.
This is an exploratory overview of current and upcoming sources of data, processing pipelines and data products. It is aimed to offer non GIS experts an exploration of the unfolding revolution of earth observation, with an emphasis on development. See footer for license and contributors.
A great overview of Earth satellite data for the non-specialist.
The impressive imagery of 0.31M resolution, calls to mind the danger of relying on such data without confirmation.
The image of Fortaleza “shows” (at 0.31M) what appears to be a white car parked near the intersection of two highways. What if instead of a white car that was a mobile missile launch platform? It’s not much bigger than a car so would show up on this image.
Would you target that location based on that information alone?
Or consider the counter-case: What reassurance do you have that what appears to be a white car in the image at the intersection is not a mobile missile launcher, but is reported to you on the image as a white car?
Or in either case, what if the image is reporting an inflatable object placed there to deceive remote imaging applications?
As with all data, satellite data is presented to you for a reason.
A reason that may or may not align with your goals and purposes.
I first saw this in a tweet by Kirk Borne.
### All The Pubs In Britain & Ireland & Nothing Else
Tuesday, February 2nd, 2016
From the post:
The map above is elegant in its simplicity. It shows Great Britain and Ireland drawn from pubs. Each blue dot represents a single pub using data extracted from OpenStreetMap with the Matplotlib Basemap Toolkit.
Interestingly, if the same map had been drawn using the number of pubs from 1980 it would have looked quite different.
In total, the map has 29,195 pub locations across both the UK and Ireland. However, the UK alone has lost 21,000 pubs since 1980 according to the Institute of Economic Affairs, with half of these occurring since 2006.
Therefore, a map from 1980 might have had nearly twice as many dots as the one above and possibly not all in the same places. Going back even further, there were a reported 99,000 pubs in the UK in 1905.
See Ramiro’s post for the map but more importantly, book travel to the UK to help stem the loss of pubs!
How many of the 29,195 pubs in the UK have you visited?
### Where Does Your Dope Come From? [Interviewing Tips]
Monday, February 1st, 2016
Visualizing Mexico’s Drug Cartels: A Roundup of Maps by Aleszu Bajak.
From the post:
With the big news this week of the arrest of Joaquín “El Chapo” Guzmán’s, the head of Mexico’s largest drug cartel, most of the attention is being paid to actor Sean Penn’s Rolling Stone interview with the kingpin in his mountain hideout in Mexico.
But where’s the context? How powerful is the Sinaloa cartel that he has run for decades and the other Mexican drug cartels, for that matter? Storybench has been sifting through a wealth of graphics on the workings of the drug trade in Mexico and its impact on the United States that help readers begin to understand the bigger picture of this complex drug war. So now that you’ve read your fill on Sean Penn’s (and Rolling Stone’s) editorial shortcomings, check out these impressive visualizations taken from news organizations, non-profits and government agencies.
Bajak presents a stunning array of maps that visualize the influence of Mexican drug cartels.
One of the most interesting has the title: United States: Areas of Influence of Major Mexican Transnational Criminal Organizations.
(You will need to select the image to get a useful sized image.)
All of the maps are interesting and some possibly more useful than others, such as if you are planning on expanding drug trade in one area but not another.
What I found missing was a map of all the organizations profiting from the war on drugs. Yes?
Location and approximate incomes of drug cartels, agencies, law enforcement offices, government budgets, etc.
The war on drugs isn’t just about the income (and failure to pay taxes) of the drug cartels, it is also about the allocation of personnel and budgets in law enforcement organizations, prisons that house drug offenders, etc.
One persuasive graphic would be the economic impact on government organizations if the drug trade stopped tomorrow and drug offense prisoners were released from jail.
There is a symbiotic relationship in the war on drugs. Government agents limit available competition and help keep prices artificially high. Drug cartels provide a desired product and a rationale for the existence of police and related agencies.
A rather cozy, if adversarial arrangement. (A topic map could clarify the benefits to both sides but truth telling isn’t a characteristic of either side.)
PS: Do read the piece on what Sean Penn should have done for his interview with El Chapo. It makes a good checklist of what reporters don’t do when interviewing political candidates or sitting members of government.
They want to be asked back if you know what I mean.
### Search points of interest by radius with Mapbox GL JS
Thursday, January 7th, 2016
From the post:
Mapbox GL JS provides new and interesting ways to query data found on a map. We built an example that filters points of interest by genre, which uses the featuresAt method to only select data based on the position and radius of the circle. Drag the blue circle around the map to populate points of interest on the left. Zoom in and out on the circle to adjust the radius.
Visually, places of interest appear as colored dots on the map and you can select what type of places appear at all. You use the blue circle to move about the map and as it encompasses dots on the map, additional information appears to the left of the map.
That’s a poor description when compared to the experience. Visit the live map to really appreciate it.
Assuming a map of surveillance cameras and the movement of authorities (in near real time), this would make a handy planning tool.
### The most contested real estate on Earth? [Noble Sanctuary/Temple Mount]
Tuesday, December 29th, 2015
I won’t try to reproduce a smaller version of this image because it would simply befoul rather remarkable work.
From the image (top right):
Muslims call it the Noble Sanctuary. Jews and Christians call it the Temple Mount. Built atop Mount Moriah in Jerusalem, this 36-acre site is the place where seminal events in Islam, Judaism and Christianity are said to have taken place, and it has been a flash point of conflict for millenniums. Many aspects of its meaning and history are still disputed by religious and political leaders, scholars, and even archaeologists. Several cycles of building and destruction have shaped what is on this hilltop today.
Great as far as it goes but the lower left bottom gives the impression that Hezekiah expanded the temple mount after Ahaz (his predecessor) plundered it. So legend holds but that leaves the reader with the false impression that the Jewish temple came to the Noble Sanctuary/Temple Mount first.
If you recall your Sunday School lessons, David conquers Jerusalem (Jebus), as told in 1 Chronicles 11:4-9.
Jerusalem was a sacred site long before David or the Israelites appear in the historical record. How long? Several thousand years at least but the type of excavation required to detail that part of the city’s history won’t happen any time soon.
Do enjoy the map, it is very impressive.
### D3 Maps without the Dirty Work
Monday, December 21st, 2015
From the post:
For those like me who aren’t approaching mapping in D3 with a GIS background in tow, you may find the propretary goe data structures hard to handle. Thankfully, Scott Murray lays out a simple process in his most recent course through JournalismCourses.org. By the time you are through reading this post you’ll have the guide post needed from mapping any of the data sets found on Natural Earths website in D3.
First in a series of posts on D3 rendering for maps. Layers of D3 renderings is coming up next.
Enjoy!
### Magnificent Maps of New York
Sunday, November 15th, 2015
Magnificent Maps of New York by Kate Marshall.
From the post:
The British Library’s ongoing project to catalogue and digitise the King’s Topographical Collection, some 40,000 maps, prints and drawings collected by George III, has highlighted some extraordinary treasures. The improved and up-dated catalogue records are now accessible to all, anywhere in the world, via the Library’s catalogue, Explore, and offer a springboard for enhanced study.
Your donations to this and other projects enable us to digitise more of our collections, the results of which are invaluable. One such example of further research using material digitised with help from donors is the recently published book by Richard H. Brown and Paul E. Cohen, Revolution. Mapping the Road to American Independence, 1755-1783, which features a number of maps from the K.Top.
The Explore link takes to the main interface for the British Library but Maps is a more direct route to the map collection materials.
Practically everyone has made school presentations about their country’s history. With resources such as the British Map collection becoming available online, it isn’t too much to expect student to supplement their reports with historical maps.
Enjoy!
### Quartz to open source two mapping tools
Thursday, November 12th, 2015
From the post:
News outlet Quartz is developing a searchable database of compiled map data from all over the world, and a tool to help journalists visualise this data.
The database, called Mapquery, received $35,000 (£22,900) from the Knight Foundation Prototype Fund on 3 November. Keith Collins, project lead, said Mapquery will aim to make the research stage in the creation of maps easier and more accessible, by creating a system for finding, merging and refining geographic data. Mapquery will not be able to produce visual maps itself, as it simply provides a database of information from which maps can be created – so Quartz will also open source Mapbuilder as the “front end” that will enable journalists to visualise the data. Quartz aims to have a prototype of Mapquery by April, and will continue to develop Mapbuilder afterwards. That’s news to look forward to in 2016! I’m real curious where Quartz is going to draw the boundary around “map data?” The post mentions Mapquery including “historical boundary data,” which would be very useful for some stories, but is traditional “map data.” What if Mapquery could integrate people who have posted images with geographic locations? So a reporter could quickly access a list of potential witnesses for events the Western media doesn’t cover? Live feeds of the results of US bombing raids against ISIS for example. (Doesn’t cover out of deference to the US military propaganda machine or for other reasons I can’t say.) Looking forward to more news on Mapquery and Mapbuilder! I first saw this in a tweet by Journalism Tools. ### Vintage Infodesign [138] Old Map, Charts and Graphics Monday, November 9th, 2015 From the post: Those who follow these weekly updates with vintage examples of information design know how maps fill a good portion of our posts. Cartography has been having a crucial role in our lives for centuries and two recent books help understand this influence throughout the ages: The Art of Illustrated Maps by John Roman, and Map: Exploring The World, featuring some of the most influential mapmakers and institutions in history, like Gerardus Mercator, Abraham Ortelius, Phyllis Pearson, Heinrich Berann, Bill Rankin, Ordnance Survey and Google Earth. Gretchen Peterson reviewed the first one in this article, with a few questions answered by the author. As for the second book recommendation, you can learn more about it in this interview conducted by Mark Byrnes with John Hessler, a cartography expert at the Library of Congress and one of the people behind the book, published in CityLab. Both publications seem quite a treat for map lovers and additions to All delightful and instructive but I think my favorite is How Many Will Die Flying the Atlantic This Season? (Aug, 1931). The cover is a must see graphic/map. It reminds me of the over-the-top government reports on terrorism which are dutifully parroted by both traditional and online media. Any sane person who looks at the statistics for causes of death in Canada, the United States and Europe, will conclude that “terrorism” is a government-fueled and media-driven non-event. Terrorist events should qualify as Trivial Pursuit questions. The infrequent victims of terrorism and their families deserve all the support and care we can provide. But the same is true of traffic accident victims and they are far more common than victims of terrorism. ### We Put 700 Red Dots On A Map Wednesday, November 4th, 2015 We Put 700 Red Dots On A Map Some statistics can be so unbelievable, or deal with concepts so vast, that it’s impossible to wrap our heads around them. The human mind can only do so much to visualize an abstract idea, and often misses much of its impact in the translation. Sometimes you just need to step back and take a good, long look for yourself. That’s why we just put 700 red dots on a map. The dots don’t represent anything in particular, nor is their number and placement indicative of any kind of data. But when you’re looking at them, all spread out on a map of the United States like that—it’s hard not to be a little blown away. Enjoy! PS: Also follow ClickHole on Twitter. Governments will still comfort the comfortable, afflict the afflicted and lie to the rest of us about their activities, but this may keep you from becoming a humorless fanatic. The benefits of being a humorous fanatic aren’t clear but surely it is better than being humorless. I first saw this in a tweet by Matt Boggie. ### Free Your Maps from Web Mercator! Friday, October 30th, 2015 Free Your Maps from Web Mercator! by Mamata Akella. From the post: Most maps that we see on the web use the Web Mercator projection. Web Mercator gained its popularity because it provides an efficient way for a two-dimensional flat map to be chopped up into seamless 256×256 pixel map tiles that load quickly into the rectangular shape of your browser. If you asked a cartographer which map projection you should choose for your map, most of the time the answer would not be Web Mercator. What projection you choose depends on your map’s extent, the type of data you are mapping, and as with all maps, the story you want to tell. Well, get excited because with a few lines of SQL in CartoDB, you can free your maps from Web Mercator! Not only can you choose from a variety of projections at CartoDB but you can also define your own projections! Mamata’s post walks you through these new features and promises that more detailed posts are to follow with “advanced cartographic effects on a variety of maps….” You are probably already following the CartoDB blog but if not…, well today is a good day to start! ### 10,000 years of Cascadia earthquakes Thursday, October 15th, 2015 10,000 years of Cascadia earthquakes From the webpage: The chart shows all 40 major earthquakes in the Cascadia Subduction Zone that geologists estimate have occurred since 9845 B.C. Scientists estimated the magnitude and timing of each quake by examining soil samples at more than 50 undersea sites between Washington, Oregon and California. This chart is followed by: Core sample sites 1999-2009 U.S. Geological Survey scientists studied undersea core samples of soil looking for turbidites — deposits of sediments that flow along the ocean floor during large earthquakes. The samples were gathered from more than 50 sites during cruises in 1999, 2002 and 2009. Great maps but apparently one has nothing to do with the other. If you mouse over the red dot closest to San Francisco, a pop-up says: “ID M9907-50BC Water Depth in Feet 10925.1972.” I suspect that may mean the water depth for the sample but without more, I can’t really say. The fatal flaw of the presentation is the data of the second map is disconnected from the first. There may be some relationship between the two but it isn’t evident in the current presentation. A good example of how to not display data sets on the same subject. ### Hand-Coloured Bomb Damage Maps of London Wednesday, September 2nd, 2015 Hand-Coloured Bomb Damage Maps of London From the webpage: The devastation wrought on the capital by the blitz was documented by the architect’s department of London County Council. The impact of the destruction from air raids and V-bombs can still be seen in London today Bomb Damage Maps 1939-1945 by archivist Laurence Ward was published this week by Thames & Hudson to mark the 75th anniversary of the blitz Photos of maps for: • Bethnal Green, Tower Hamlets and Stepney • Waterloo and Elephant & Castle • Marylebone, Mayfair and Piccadilly • London Bridge, Bermondsey and Wapping • King’s Cross, Angel and Barbican • Regent’s Park, Euston and Somer’s Town • Hampstead Heath, Dartmouth Park and Tufnell Park • Deptford and Rotherhithe The photos are impressive but not of large enough scale to make out details. For that, you will need a copy of Bomb Damage Maps 1939-1945. The current price is £48.00 (without shipping). As you review this important historical resource, realize that nothing similar will be produced for the U.S.-led wars in Afghanistan, Iraq, Syria, etc. Rather than confirming and reporting on “allied” bombing strikes, Western news media bases its reports on accounts supplied by the U.S. military and its familiars. It is certainly possible to have interactive maps that show images of civilian casualties and damages within a matter of days at the outside, but current U.S. military adventures will be some of the least documented on record. Or should I say least independently documented on record? Is anyone collating cellphone videos of the results of U.S. airstrikes? ### 1962 United States Tourist Map Sunday, August 23rd, 2015 Part of the joy of this map comes from being old enough to remember maps similar to this one. Critics can scan the map for what isn’t represented as tourist draws. Consider it to be a snapshot of the styles and interests. Most notable absence? Cape Canaveral. I suspect its absence reflects the lead time involved in the drafting and publishing of a map at the time. Explorer 1 (1958) and the first American in space, Alan Shepard (1961), both preceded this map. Enjoy! ### “True” Size? Wednesday, August 19th, 2015 From the post: Given how popular the Mercator projection is, it’s wise to question how it makes us view the world. Many have noted, for example, how the distortion around the poles makes Africa look smaller than Greenland, when in reality Africa is about 14.5 times as big. In 2010, graphic artist Kai Krause made a map to illustrate just how big the African continent is. He found that he was able to fit the United States, India and much of Europe inside the outline of the African continent. Inspired by Krause’s map, James Talmage, and Damon Maneice, two computer developers based out of Detroit, created an interactive graphic that really puts the distortion caused by the Mercator map into perspective. The tool, dubbed “The True Size” allows you to type in the name of any country and move the outline around to see how the scale of the country gets distorted the closer it gets to the poles. Of course, one thing the map shows well is the sheer size of Africa. Here it is compared with the United States, China and India. This is a great resource for anyone who wants to learn more about the physical size of countries, but it is also an illustration that no map is “wrong,” some display the information you seek better than others. For another interesting take on world maps, see WorldMapper where you will find gems like: #### GDP Wealth #### Absolute Poverty Or you can rank countries by their contributions to science: #### Science Research None of these maps is more “true” than the others. Which one you choose depends on the cause you want to advance. ### Mapping the world of Mark Twain (subject confusion) Sunday, August 2nd, 2015 Mapping the world of Mark Twain by Andrew Hill. From the post: Mapping Mark Twain This weekend I was looking through Project Gutenberg and found something even better than a single book, I found the complete works of Mark Twain. I remembered how geographic the stories of Twain are and so knew immediately I had found a treasure chest. For the last few days, I’ve been parsing the books line-by-line and trying to find the localities that make up the world of Mark Twain. In the end, the data has over 20,000 localities. Even counting the cases where sir names are mistaken for places, it is a really cool dataset. What I’ll show you here is only the tip of the iceberg. I put the results together as an interactive map that maybe will inspire you to take a journey with Twain on your own, extend your life a little. Sounds great! Warning: Subject Confusion Mapping the world of Mark Twain (the map)! The blog entry: http://andrewxhill.com/blog/2014/01/26/Mapping-the-world-of-Mark-Twain/ has the same name as the map: http://andrewxhill.com/maps/writers/twain/index.html. Both are excellent and the blog entry includes details on how you can construct similar maps. Topic maps disambiguate names that would otherwise lead to confusion! What names do you need to disambiguate? Or do you need to avoid subject confusion with names used by others? (Unknown to you.) ### Inside the Secret World of Russia’s Cold War Mapmakers Tuesday, July 21st, 2015 Inside the Secret World of Russia’s Cold War Mapmakers by Greg Miller. From the post: A MILITARY HELICOPTER was on the ground when Russell Guy arrived at the helipad near Tallinn, Estonia, with a briefcase filled with$250,000 in cash. The place made him uncomfortable. It didn’t look like a military base, not exactly, but there were men who looked like soldiers standing around. With guns.
The year was 1989. The Soviet Union was falling apart, and some of its military officers were busy selling off the pieces. By the time Guy arrived at the helipad, most of the goods had already been off-loaded from the chopper and spirited away. The crates he’d come for were all that was left. As he pried the lid off one to inspect the goods, he got a powerful whiff of pine. It was a box inside a box, and the space in between was packed with juniper needles. Guy figured the guys who packed it were used to handling cargo that had to get past drug-sniffing dogs, but it wasn’t drugs he was there for.
Inside the crates were maps, thousands of them. In the top right corner of each one, printed in red, was the Russian word секрет. Secret.
The maps were part of one of the most ambitious cartographic enterprises ever undertaken. During the Cold War, the Soviet military mapped the entire world, parts of it down to the level of individual buildings. The Soviet maps of US and European cities have details that aren’t on domestic maps made around the same time, things like the precise width of roads, the load-bearing capacity of bridges, and the types of factories. They’re the kinds of things that would come in handy if you’re planning a tank invasion. Or an occupation. Things that would be virtually impossible to find out without eyes on the ground.
Given the technology of the time, the Soviet maps are incredibly accurate. Even today, the US State Department uses them (among other sources) to place international boundary lines on official government maps.
If you like stories of the intrigue of the Cold War and of maps, Greg’s post was made for you.
The maps have been rarely studied but one person is trying to change that:
But one unlikely scholar, a retired British software developer named John Davies, has been working to change that. For the past 10 years he’s been investigating the Soviet maps, especially the ones of British and American cities. He’s had some help, from a military map librarian, a retired surgeon, and a young geographer, all of whom discovered the maps independently. They’ve been trying to piece together how they were made and how, exactly, they were intended to be used. The maps are still a taboo topic in Russia today, so it’s impossible to know for sure, but what they’re finding suggests that the Soviet military maps were far more than an invasion plan. Rather, they were a framework for organizing much of what the Soviets knew about the world, almost like a mashup of Google Maps and Wikipedia, built from paper.
they were a framework for organizing much of what the Soviets knew about the world, almost like a mashup of Google Maps and Wikipedia, built from paper.
Has some of the qualities that I associate with topic maps. Granting it chooses a geographic frame of reference but every map has some frame of reference, stated or unstated.
It would make a great paper on topic maps to represent the knowledge of an old-style Soviet map as a topic map.
As a resource, John Davies maintains a comprehensive website about Soviet maps.
### MapFig
Sunday, July 19th, 2015
MapFig
Whether you are tracking the latest outrageous statements from the Repubicans for U.S. President Clown Car or have more serious mapping purposes in mind, you need to take a look at MapFig. There are plugins from WordPress, Drupal, Joomla, and Omeka, along with a host of useful features.
There is one feature in particular I want to call to your attention: “Create highly customized leaflet maps quickly and easily.”
I stumbled over that sentence because I have never encountered “leaflet” maps before. Street, terrain, weather, historical, geological, archaeological, astronomical, etc., but no “leaflet” maps. Do they mean a format size? As in a leaflet for distribution? Seems unlikely because it is delivered electronically.
FAQ was no help. No hits at all.
Of course, you are laughing at this point because you know that “Leaflet” (note the uppercase “L”) is a JavaScript library developed by Vladimir Agafonkin.
So a “leaflet map” is one created using the Leftlet Javascript Library.
Clearer to say “Create highly customized maps quickly and easily using the Leaflet JS library.”
Yes?
Enjoy!
### Mapping the Medieval Countryside
Thursday, July 16th, 2015
Mapping the Medieval Countryside – Places, People, and Properties in the Inquisitions Post Mortem.
From the webpage:
Mapping the Medieval Countryside is a major research project dedicated to creating a digital edition of the medieval English inquisitions post mortem (IPMs) from c. 1236 to 1509.
IPMs recorded the lands held at their deaths by tenants of the crown. They comprise the most extensive and important body of source material for landholding in medieval England. Describing the lands held by thousands of families, from nobles to peasants, they are a key source for the history of almost every settlement in England and many in Wales.
This digital edition is the most authoritative available. It is based on printed calendars of the IPMs but incorporates numerous corrections and additions: in particular, the names of some 48,000 jurors are newly included.
The site is currently in beta phase: it includes IPMs from 1418-1447 only, and aspects of the markup and indexing are still incomplete. An update later this year will make further material available.
The project is funded by the Arts and Humanities Research Council and is a collaboration between the University of Winchester and the Department of Digital Humanities at King’s College London. The project uses five volumes of the Calendars of Inquisitions Post Mortem, gen. ed. Christine Carpenter, xxii-xxvi (The Boydell Press, Woodbridge, 2003-11) with kind permission from The Boydell Press. These volumes are all in print and available for purchase from Boydell, price £195.
One of the more fascinating aspects of the project is the list of eighty-nine (89) place types, which can be used for filtering. Just scanning the list I happened across “rape” as a place type, with four (4) instances recorded thus far.
The term “rape” in this context refers to a subdivision of the county of Sussex in England. The origin of this division is unknown but it pre-dates the Norman Conquest.
The “rapes of Sussex” and the eighty-eight (88) other place types are a great opportunity to explore place distinctions that may or may not be noticed today.
Enjoy!
### Ancient [?] Craft of Information Visualization
Tuesday, July 7th, 2015
From the post:
To open this week’s edition of Vintage InfoDesign, we picked some of the maps published in the 1800s/early 1900’s about the Battle of Waterloo . As we showed you before, on June 18th several newspapers marked with stunning pieces of infographic design the 200th anniversary of Napoleon’s final attempt to rule Europe, and since we haven’t feature any “oldies” related to this topic, we thought it would be interesting to do some Internet “digging”.
Hope you enjoy our findings, and feel free to leave the links to other charts and maps about Waterloo, in the comments section.
I’m not entirely comfortable with using the term “ancient” to describe maps depicting the Battle of Waterloo. I think of the fall of the New Kingdom of Egypt, in about 343 BCE as the beginning of “ancient” history.
### Creating-maps-in-R
Saturday, June 20th, 2015
From the webpage:
Introductory tutorial on graphical display of geographical information in R, to contribute to teaching material. For the context of this tutorial and a video introduction, please see here: http://robinlovelace.net/r/2014/01/30/spatial-data-with-R-tutorial.html
All of the information needed to run the tutorial is contained in a single pdf document that is kept updated: see github.com/Robinlovelace/Creating-maps-in-R/raw/master/intro-spatial-rl.pdf.
By the end of the tutorial you should have the confidence and skills needed to convert a diverse range of geographical and non-geographical datasets into meaningful analyses and visualisations. Using data and code provided in this repository all of the results are reproducible, culminating in publication-quality maps such as the faceted map of London’s population below:
Quite a treat in thirty (30) pages! You will have R and some basic spatial data packages installed and be well on your way to creating maps in R. From a topic map perspective, the joining of attributes to polygons is quite similar to adding properties to topics. Assuming you want to treat each polygon as a subject to be represented by a topic.
Enjoy!
PS:
You will also enjoy:
Cheshire, J. & Lovelace, R. (2014). Spatial data visualisation with R. In Geocomputation, a Practical Primer. In Press with Sage. Preprint available online
and other publications by Robin.
### Thematic Cartography Guide
Thursday, June 18th, 2015
Thematic Cartography Guide
From the webpage:
Welcome! In this short guide we share some insights and tips for making thematic maps. Our goal is to cover the important concepts in cartography and flag the important decision points in the map-making process. As with many activities in life, there isn’t always a single best answer in cartography, and in those cases we’ve tried to outline some of the pros and cons to different solutions.
This is by no means a replacement for a full textbook on cartography; rather it is a quick reference guide for those moments when you’re stumped, unsure of what to do next, or unfamiliar with the terminology. While the recommendations on these pages are short and not loaded with academic references, please appreciate that they represent a thoughtful synthesis of decades of map-making research.
This guide was written by Axis Maps, adapted from documentation written for indiemapper in 2010. However, the content here is about general cartography principles, not software-specific tips. To see the material in its original context, visit indiemapper and its help pages.
If that doesn’t sound exciting, perhaps this will:
Thematic maps are meant not simply to show locations, but rather to show attributes or statistics about places, spatial patterns of those attributes, and relationships between places. For example, while a reference map might show the locations of cities, a thematic map might also represent the population of those cities. It’s the difference between mapping places and mapping data. This site is about thematic maps, describing some of the different types and basic principles.
Hmmm, data about places? Relationships? That’s starting to sound suspiciously like a topic map expressed in a different vocabulary.
The same principles apply, in addition to places on a geographic grid, you can have subjects that exist only on your own intellectual grid, arranged in relationships as you see fit.
Over the years you have no doubt seen a number of offenses against the art of presentation in the name of topic maps. You have the power to break from that tradition. Seeing what works in other mapping domains is one place to start.
Where else would you look for fresh ideas and themes?
### Vintage Infodesign [122] Naval Yards
Tuesday, June 16th, 2015
Vintage Infodesign [122] by Tiago Veloso.
From the post:
Published in October, 1940, the set of maps from Fortune magazine that open today’s Vintage Infodesign was part of a special about the industrial resources committed to the war effort by the United States. It used data compiled by the Bureau of the Census and Agricultural Commission, with the financial support by the Defense Commission. The maps within the four page report are signed by Philip Ragan Associates.
It’s just another great gem archived over at Fulltable, followed by the usual selection of ancient maps, graphics and charts from before 1960.
Hope you enjoy, and have a great week!
One original image (1940) and it modern counterpart to temp you into visiting this edition of Vintage Infodesign.
US shipyards and arsenals in 1940.
Modern map of shipyards. I couldn’t find an image quickly that had arsenals as well.
Notice the contrast in the amount of information given by the 1940 map versus that of the latest map from the Navy.
With the 1940 map, along with a state map I could get within walking distance of any of the arsenals or shipyards listed.
With the modern map, I know that shipyards need to be near water but it is only narrowed down to the coastline of any of the states with shipyards.
That may not seem like a major advantage, knowing the location of a shipyard from a map, but collating that information with a stream of other bits and pieces could be an advantage.
Such as watching wedding announcements near Navy yards for sailors getting married. Which means the happy couple will be on their honeymoon and any vehicle at their home with credentials to enter a Navy yard will be available. Of course, that information has to be co-located for the opportunity to present itself. For that I recommend topic maps.
### Map of the Tracks of Yu, 1136
Monday, June 15th, 2015
I first saw this on Instagram at: https://instagram.com/p/363b2lOpn7/ with the following comment:
Map of the Tracks of Yu, 1136, is the first known map to use a cartographic grid.
The David Rumsey Map Collection, Cartography Associates, offers this more complete image from the Harvard Fine Arts Library:
And the following blurb:
Yujitu (Map of the Tracks of Yu), 1136. This map’s title derives from the Yugong, a treatise describing the sage-king Yu’s mythical channeling of China’s rivers. It is a rare surviving example of cartography used in the 12th century for public education, mixing classical references with later administrative history. Carved on a large stone tablet so that students or visitors could make rubbings, the map strikingly depicts a riverine network on a regular grid of squares intended to represent 100 li to a side. Read a more detailed description of this map by Alexander Akin, Ph.D. View the map in Google Earth. The image is courtesy Harvard Fine Arts Library.
To temp you into further reading, Alexander Akin’s description opens with these lines:
The Yijitu (Map of the Tracks of Yu) is the earliest extant map based on the Yugong (introduced below). Engraved in stone in 1136, the map measures about one meter to a side. It was carved into the face of an upright monument on the grounds of a school in Xi’an so that visitors could make detailed rubbings using paper and ink. These rubbings could be taken away for later reference. The stone plaque thus functioned as something like an immovable printing block, remaining in Xi’an while copies of its map found their way further afield. Harvard University holds one such rubbing made from the original stone, and has generously granted permission for the use of this unusually clear image, which shows more detail than any previously published version….
Alexander struggles, as only a modern would, over the “accuracy” of the map. A map that at times accords with the findings of modern map makers and at times accords with its Confucian heritage.
With maps in general and topic maps in particular, a question of “accuracy” cannot be answered with being supplied with the measurement to be applied in answering that question.
### Mapping the History of L.A.’s Notorious Sprawl
Wednesday, June 3rd, 2015
From the post:
(Apologies for the distortion, the map really needs a full page and to be seen when interactive.)
From the post:
THE SPRAWLING BUILTSCAPE of Los Angeles always seems to have people there riled up in one way or another. Lately there are rumblings about “classic” L.A. homes being displaced by bigger, more modern houses, changing the face of established neighborhoods. Even people with enormous mansions are complaining about the enormouser mansions people are building next door. And this is just one of the ongoing storylines in an ever-morphing city.
Now, urban designer Omar Ureta has created an interactive map to help tell some of these stories. His Built:LA project shows the ages of almost every existing building in the city, and can break them down by decade to reveal how the city has grown over time (works best in Chrome or Firefox).
“There’s so much discussion going on right now in how L.A. is urbanizing, I wanted to create a tool that could contribute to the dialogue,” Ureta, who moved to L.A. nine years ago from the Inland Empire, told me in an email. “I’m excited that the map is actually making people ask more questions about their neighborhood, their city and the whole region.”
Ureta’s combining of data from a variety of sources enables users to peel back layers of construction in L.A. Makes me curious about forward looking “what-if” maps based on local history of development?
This project should be an inspiration for either historical or future projecting maps of urban construction.
### Spatial Humanities Workshop
Tuesday, June 2nd, 2015
From the webpage:
Scholars in the humanities have long paid attention to maps and space, but in recent years new technologies have created a resurgence of interest in the spatial humanities. This workshop will introduce participants to the following subjects:
• how mapping and spatial analysis are being used in humanities disciplines
• how to find, create, and manipulate spatial data
• how to create historical layers on interactive maps
• how to create data-driven maps
• how to tell stories and craft arguments with maps
• how to create deep maps of places
• how to create web maps in a programming language
• how to use a variety of mapping tools
• how to create lightweight and semester-long mapping assignments
The seminar will emphasize the hands-on learning of these skills. Each day we will pay special attention to preparing lesson plans for teaching the spatial humanities to students. The aim is to prepare scholars to be able to teach the spatial humanities in their courses and to be able to use maps and spatial analysis in their own research.
Ahem, the one thing Larry forgets to mention is that he is a major player in spatial humanities. His homepage is an amazing place.
The seminar materials don’t disappoint. It would be better to be at the workshop but in lieu of attending, working through these materials will leave you well grounded in spatial humanities. | 2017-01-19 15:04:38 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.22371840476989746, "perplexity": 2691.9534194553457}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-04/segments/1484560280718.7/warc/CC-MAIN-20170116095120-00503-ip-10-171-10-70.ec2.internal.warc.gz"} |
https://codereview.stackexchange.com/questions/77538/squaring-an-integer-by-repetitive-addition | # Squaring an integer by repetitive addition
I've written two functions:
1. is_integer(), which tries to get a positive or a negative integer as input and returns the same value.
2. square(), which takes this returned value as parameter and returns square of the same.
The code in square() uses a for-loop that squares the integer by 'repetitive addition.' Finally, the script prints out the square of the integer returned by square().
Please review my code, point out errors of any kind and make suggestions.
# Squares an integer by repetitive addition
# Uses a for-loop
def square(integer):
result = 0
for element in range(1, abs(integer)+1):
result += abs(integer)
return 'Square of ' + str(integer) + ' is ' + str(result)
# Makes sure the input is an integer
def is_integer():
number = raw_input('Enter an integer: ')
if number.isdigit():
number = int(number)
return number
elif number[0] == '-' and number[1:].isdigit():
number = int(number)
return number
else:
print '%r is not an integer' % number
print 'To try again, press Enter key'
print 'Otherwise, press Ctrl+C keys to abort'
raw_input('>>> ')
return is_integer()
# Square() takes as parameter the value returned by is_digit()
# Print the result returned by square()
print square(is_integer())
### Improving square
There are several bad practices in square:
• Multiple calls to abs(integer) when it could be calculated just once and reused
• Instead of range(1, x + 1) you could write range(x)
• When you don't need to use the loop variable element for anything, a common convention is to name it _ instead
• Converting integer and result to string is tedious. It would better to use a formatted string expression
Making the appropriate changes the method becomes:
def square(integer):
result = 0
abs_integer = abs(integer)
for _ in range(abs_integer):
result += abs_integer
return 'Square of {} is {}'.format(integer, result)
### Improving is_integer
In addition to what @Jamal already pointed out, the method has other problems too:
• The way you check if the user input is numeric is overly tedious
• Naming the input as "number" is not great, because it might be text
A better way to write it (also fixing the name and making it non-recursive):
def prompt_integer():
while True:
text = raw_input('Enter an integer: ')
try:
return int(text)
except ValueError:
print '%r is not an integer' % text
print 'To try again, press Enter key'
print 'Otherwise, press Ctrl+C keys to abort'
raw_input('>>> ')
### Why Python 2.7 ?
You're using Python 2.7. Note that there are no more planned releases for Python 2.7 beyond 2015 June. Consider migrating to Python 3, it's the future.
• I'm still a beginner and Python is my first language. Most of the books and online classes I'm using to learn still mention Python 2.x explicitly. Sure I'll consider switching to Python 3 as soon as I finish my introductory classes. I consider your improvements a great lesson. – user61142 Jan 14 '15 at 22:04
• I agree that naming the input as "number" is not great, but are you sure naming it as "text" is? because the input is expected to be a number. Please correct me if I'm wrong. I'm aware that raw_input() converts input to a string and if all characters in the string are digits, we can then convert the string to a number if necessary. – user61142 Jan 14 '15 at 23:38
• Wouldn't it be best to use new style string formatting instead of old style? – SethMMorton Jan 15 '15 at 6:28
• @SethMMorton certainly it would be, well spotted! Updated now. – janos Jan 15 '15 at 6:30
It looks like is_integer() should be renamed. A function name with "is" in it could mean that it returns a boolean-type value, but this one returns a number. Perhaps you should have this function only check for a valid number, then have another one return the number to the caller.
Also, I don't think it should have a recursive call. It would be better as a loop that will continue until a valid number has been inputted. The else keyword is also redundant, since it is preceded by return statements.
def prompt_integer():
number = raw_input('Enter an integer: ')
while True:
if number.isdigit():
return int(number)
elif number[0] == '-' and number[1:].isdigit():
return int(number)
# Failed validation
print '%r is not an integer' % number
print 'To try again, press Enter key'
print 'Otherwise, press Ctrl+C keys to abort'
number = raw_input('>>> ')
• I'll rename it right away and make a note of the redundancy. – user61142 Jan 14 '15 at 22:17
• This is not entirely ok, the second raw input is not validated. – orion Jan 15 '15 at 11:24
Integer validation by manually checking the string is not recommended (actually, one should never parse input manually, always use builtins). Let the python decide if it is an integer or not. Following python's philosophy to "rather ask for forgivenes than permission", I'd write it this way:
#let us choose our own prompt string, but use default if not provided
def prompt_integer(prompt='Enter an integer: '):
try:
return int(raw_input(prompt))
except ValueError:
#you could also use a multiline string here, but let's not use too much python sugar
print('Invalid integer')
print('To try again, press Enter key')
print('Otherwise, press Ctrl+C keys to abort')
#recursively call prompt with a different prompt string
return prompt_integer('>>> ')
I didn't want to remember the input string in a variable simply to write it back in case of an error. But that's your choice. You could also just print the exception string.
Focusing soley on square, I agree with janos about extracting the constant abs
def square(integer):
abs_integer = abs(integer)
result = 0
for _ in range(abs_integer):
result += abs_integer
return 'Square of {} is {}'.format(integer, result)
This is nicer as a comprehension and sum:
def square(integer):
abs_integer = abs(integer)
result = sum(abs_integer for _ in range(abs_integer))
return 'Square of {} is {}'.format(integer, result)
The sum call adds up everything in the container you pass it and (abs_integer for _ in range(abs_integer)) produces the element abs_integer abs_integer times.
Here's also a trick that comes in occasional - albeit rare - use. Instead of adding one abs_integer every time, you can instead double it with
result += result
If you make a generic times function, this is quite simple to express. First we need the base condition:
def multiply(number, times):
if times == 0:
return 0
Then we need to consider times > 0 by moving the negative:
# Make times nonnegative for loop
if times < 0:
number = -number
times = -times
Then we start the loop:
added = 1
total = number
and double the number as much as possible:
# Double while possible
total += total
There will be more left to add, so put that on top:
# Add the rest
total += multiply(number, times-added)
That's it!
return total
So the square is just
def square(number):
return multiply(number, number)
Note that even this is suboptimal since we recompute the same stuff on every call to multiply, but fixing it is less simple. | 2020-07-05 10:29:17 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.25968489050865173, "perplexity": 4317.467473981525}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 5, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655887319.41/warc/CC-MAIN-20200705090648-20200705120648-00271.warc.gz"} |
https://math.stackexchange.com/questions/2275730/integrals-of-the-form-int-0-infty-sin-gx-dx | # Integrals of the form $\int_0^{+\infty} \sin g(x) \ dx$
I'm interested in the convergence of integrals of the form $$\int_0^{+\infty} \sin g(x) \ dx$$
where $g$ is nonnegative, increasing and growing without bound as $x \to +\infty$ (hence $\sin g(x)$ oscillates as $x \to +\infty$). For example, it's known that $$\int_0^{+\infty} \sin x^p \ dx$$ converges for $p>1$. Similarly, $\int_0^{+\infty} \sin(\exp x) \ dx$ converges.
Here are three questions (they are related enough that I didn't think it was worth making three separate questions).
Consider functions $g:[0, +\infty) \to [0, +\infty)$ which are continuous, strictly increasing and unbounded.
$(i)$ Suppose we add the additional hypothesis that $g$ is strictly convex. Does $\int_0^{+\infty} \sin g(x) \ dx$ converge?
$(ii)$ Can we characterize those $g$'s for which $\int_0^{+\infty} \sin g(x) \ dx$ converges?
$(iii)$ Is there a $g$ such that $\int_0^{+\infty} \sin g(x) \ dx$ converges absolutely?
I have no idea for the full generality, but at least I have a complete answer when $g$ is convex.
Proposition. Assume that $g : [a,\infty) \to \Bbb{R}$ is increasing, convex and unbounded. Then
(1) $\int_{a}^{\infty} \sin g(x) \, dx$ does not converge absolutely.
(2) $\int_{a}^{\infty} \sin g(x) \, dx$ converges if and only if $g'_{+}(x) \to \infty$, where $g'_{+}$ is the right-hand derivative.
(3) $\int_{a}^{\infty} \sin g(x) \, dx$ converges in Cesaro sense. That is, the limit $$\lim_{R\to\infty} \frac{1}{R} \int_{a}^{R} \int_{a}^{r} \sin g(x) \, dx dr$$ exists.
Step 1. A bit of reduction and some observations
First, since $g$ is convex, it is continuous possibly except at $0$. Using the fact that $g$ is also increasing, we know that $g$ is indeed continuous everywhere on $[0,\infty)$. Then the condition implies that there exists $a' \in [a, \infty)$ such that $g$ is constant on $[a, a']$ and $g$ is strictly increasing on $[a', \infty)$. Since none of statements (1)-(3) is affected on modification of $g$ on a finite interval, we may truncate the interval from the left and assume that $g$ is strictly increasing.
The previous paragraph shows that it suffices to consider $g$ which is strictly increasing, continuous, convex and unbounded. Then its inverse $h : [g(0), \infty) \to [a, \infty)$ is a well-defined function which is strictly increasing, continuous, concave and unbounded. Thus its right-hand derivative $h'_+(x)$ is a decreasing function such that
$$h'_+(x) = \lim_{h \to 0^+} \frac{h(x+h) - h(x)}{h} = \lim_{k \to 0^+} \frac{k}{g(h(x)+k) - g(h(x))} = \frac{1}{g'_+(h(x))}.$$
So $h'_+(x) \to 0$ if and only if $g'_{+}(x) \to \infty$. Moreover, for any continuous function $\varphi$ on $[a, b]$ we have the following formula
$$\int_{a}^{b} \varphi(g(x)) \, dx = \int_{g(a)}^{g(b)} \varphi(y) \, dh(y) = \int_{g(a)}^{g(b)} \varphi(y) h'_+(y) \, dy.$$
Step 2. Actual computation.
• We first resolve (1). Let us write $\rho(x) = h'_+(x)$ for brevity. Choose $m \in \Bbb{Z}$ so that $m\pi > g(a)$. Then along $R_n = h(n\pi)$ with $n > m$,
\begin{align*} \int_{a}^{R_n} \left|\sin g(x) \right| \, dx &\geq \int_{m\pi}^{n\pi} \rho(x) \left|\sin x \right| \, dx = \sum_{k=m}^{n-1} \int_{0}^{\pi} \rho(x+k\pi) \sin x \, dx \\ &\hspace{2em} \geq \sum_{k=m}^{n-1} 2\rho((k+1)\pi) \geq \sum_{k=m}^{n-1} \frac{2}{\pi} \int_{(k+1)\pi}^{(k+2)\pi} \rho(x) \, dx \\ &\hspace{4em} \geq \frac{2}{\pi} [h((n+1)\pi) - h((m+1)\pi)] \xrightarrow[n\to\infty]{} \infty \end{align*}
and hence (1) follows.
• Next we resolve part (2). Let $m \in \Bbb{Z}$ be as before and define $F$ by
$$F(r) = \int_{m\pi}^{r} \rho(x) \sin x \, dx.$$
In view of the previous section, we can investigate the convergence of $F(r)$ as $r\to\infty$ instead. Also, since $F(r)$ always lies between $F(n\pi)$ and $F((n+1)\pi)$ whenever $x \in [n\pi, (n+1)\pi]$, it suffices to investigate the convergence of $F(n\pi)$ as $n\to\infty$. Now for $n > m$,
$$F(n\pi) = \sum_{k=m}^{n-1} (-1)^k \int_{0}^{\pi} \rho(x+k\pi) \sin x \, dx$$
First, the general term satisfies
$$\left| (-1)^k \int_{0}^{\pi} \rho(x+k\pi) \sin x \, dx \right| \geq \rho((k+1)\pi) \int_{0}^{\pi} \sin x \, dx = 2\rho((k+1)\pi).$$
So if $F(n\pi)$ converges as $n\to\infty$, then $\rho(x)$ also converges to $0$ as $x\to\infty$. By our previous remark, this implies $g'_+(x) \to \infty$ as $x\to\infty$.
Conversely, assume that $g'_+(x) \to \infty$ as $x\to\infty$ so that $\rho(x) \to 0$ as $x\to\infty$. Then
$$F(n\pi) = \int_{0}^{\pi} \left( \sum_{k=m}^{n-1} (-1)^k \rho(x+k\pi) \right) \sin x \, dx$$
and the inner term converges uniformly by the Dirichlet test. This implies the convergence of $F(n\pi)$ and hence the convergence of $F(x)$ as $x\to\infty$.
• Finally we resolve part (3). Let $\rho_{\infty} = \lim_{x\to\infty} \rho(x)$, which exists by the monotonicity of $\rho$. Then we can write
$$\int_{a}^{r} \sin g(x) \, dx = \underbrace{\int_{g(a)}^{g(r)} (\rho(x) - \rho_{\infty}) \sin x \, dx}_{=:A} + \rho_{\infty} \cos g(a) - \underbrace{\vphantom{\int_{g}} \rho_{\infty} \cos g(r)}_{=:B}.$$
Now the term $A$ converges as $r\to\infty$ by (2). So its Cesaro mean also converges. In order to investigate the Cesaro mean of the term $B$, we have to look at
$$\frac{1}{R} \int_{a}^{R} \cos g(r) \, dr = \frac{1}{R} \int_{g(a)}^{g(R)} \rho(x) \cos x \, dx.$$
Using the similar 'alternating series trick' as in part (2), we can check that $\int_{g(a)}^{g(R)} \rho(x) \cos x \, dx$ is uniformly bounded in $R$. Putting altogether, we obtains not only the Cesaro convergence but also its value:
$$\lim_{R\to\infty} \frac{1}{R} \int_{a}^{R} \int_{a}^{r} \sin g(x) \, dx dr = \int_{g(a)}^{\infty} (\rho(x) - \rho_{\infty}) \sin x \, dx + \rho_{\infty} \cos g(a).$$
For (iii). Let $(c_n)$ be a sequence of positive numbers, $0 < c_n < \pi$ for every $n$, and let $$h(x) := \sum_{n=0}^\infty \frac{\pi}{c_n} \chi_{[n\pi, n\pi + c_n]}(x), \qquad g(x) := \int_0^x h(t)\, dt,$$ where $\chi_A$ is the characteristic function of the set $A$. (This function $g$ is not strictly increasing, but the construction can be easily modified in this sense.)
Since $\int_{n\pi}^{(n+1)\pi} g'(x)\, dx = \pi$, we have that $g(n\pi) = n\pi$ and $$\int_{n\pi}^{(n+1)\pi} |\sin g(x)|\, dx = \int_0^{c_n} \sin\left(\frac{\pi}{c_n}\, t\right)\, dt = \frac{2 c_n}{\pi}.$$ Hence, if $\sum_n c_n$ converges, then $\sin g(x)$ is absolutely integrable on $[0,+\infty)$. | 2019-07-22 14:14:55 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9986672401428223, "perplexity": 130.07262833201014}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-30/segments/1563195528037.92/warc/CC-MAIN-20190722133851-20190722155851-00362.warc.gz"} |
http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/DM/DMPlexSetAdjacencyUseClosure.html | petsc-3.6.2 2015-10-02
Report Typos and Errors
Define adjacency in the mesh using the transitive closure
### Synopsis
#include "petscdmplex.h"
### Input Parameters
dm - The DM object useClosure - Flag to use the closure
### Notes
FEM: Two points p and q are adjacent if q \in closure(star(p)), useCone = PETSC_FALSE, useClosure = PETSC_TRUE
FVM: Two points p and q are adjacent if q \in support(p+cone(p)), useCone = PETSC_TRUE, useClosure = PETSC_FALSE
FVM++: Two points p and q are adjacent if q \in star(closure(p)), useCone = PETSC_TRUE, useClosure = PETSC_TRUE | 2015-12-01 06:09:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9989147186279297, "perplexity": 13474.937952603612}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2015-48/segments/1448398464536.35/warc/CC-MAIN-20151124205424-00267-ip-10-71-132-137.ec2.internal.warc.gz"} |
https://www.jamestharpe.com/scientific-notation/ | # Scientific Notation
Scientific Notation is a way to express very small and very large numbers using exponents, and is useful for brevity and for comparing orders of magnitude between numbers.
To convert a number to scientific notation, it is rewritten to be greater than zero but less than ten multiplied by $10$ raised to some exponent.
Simple Examples:
• $99,000$ expressed in scientific notation is $9.9 \cdot 10^4$.
• $0.00099$ expressed in scientific notation is $9.9 \cdot 10^{-4}$
SI number prefixes are commonly used in lieu of scientific or engineering notation. For example, rather than writing "$1.23 \cdot 10^3g$" (grams), we can write "$1.23kg$" (kilograms).
## Common large numbers expressed in scientific notation
• Avogadro's number is $6.02214076 \cdot 10^{23}$.
• The charge of an Electron is $0.6021766208 \cdot 10^{-19}$ coulombs.
• A googol is, a one followed by 100 zeros, is $1 \cdot 10^{100}$
• The speed of light is $2.99792458 \cdot 10^8$ meters per second.
## Deeper Knowledge on Scientific Notation
### Engineering Notation
Expressions of very large and very small numbers for engineers
## Broader Topics Related to Scientific Notation
Fun with numbers
### Science
The most rigorous system of epistemology
### International System of Units (SI)
Formal terms and definitions of the metric system | 2021-05-15 23:40:54 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 11, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8857157230377197, "perplexity": 1091.74400232571}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-21/segments/1620243991488.53/warc/CC-MAIN-20210515223209-20210516013209-00174.warc.gz"} |
https://codereview.stackexchange.com/questions/135435/nesting-loops-on-same-array-but-skipping-same-element/135438 | # Nesting loops on same array but skipping same element
I'm having a bit of trouble trying to find a more Rubyist way to achieve the following. Essentially, I want to try and iterate over every element e and apply e.method(n) for every $n \in \text{array}$, $n \ne e$. In order to determine whether or not $n = e$, I'll have to use an index comparison (really just test for reference equality as opposed to functional equality).
arr = [413, 321, 654, 23, 11]
(0...arr.length).each do |outer_i|
(0...arr.length).each do |inner_i|
next if outer_i == inner_i
arr[outer_i].apply arr[inner_i]
end
end
This reeks of Java/C++ and I can tell that this is not the Ruby way, but I can't seem to find an alternative. Any ideas to improve its Ruby-ness? I was thinking of Array#product but I'm not sure where to go from there.
• Since appears to be off-topic for Code Review in its current state, since the code you've posted is pseudo-code. See if you can reword it to be more on-topic – Flambino Jul 20 '16 at 22:26
• @Flambino That's not pseudo code.. it will pass any Ruby interpreter (aside from the apply method) – Michael Jul 20 '16 at 22:28
• The apply call is my complaint, since the code doesn't run, nor is the function of that method made clear. Hence, the code felt incomplete to me. Pseudo-code like if you'd had a do_stuff method. However, re-reading your question, and seeing tokland's answer, I've retracted my close-vote. I felt there wasn't enough to go on, and misunderstood a few things, but I was wrong. – Flambino Jul 20 '16 at 22:35
• @Flambino Either way, thanks for pointing out the pseudo-code rule; wasn't aware of that! I'll try to steer clear of non-legal code in the future – Michael Jul 20 '16 at 22:40
• Did you patch the Fixnum class to define an apply method? That may be the most questionable practice in this code. – 200_success Jul 21 '16 at 1:32
Note that you are just doing a permutation of two elements from a set, and there is an abstraction in the core for that, Array#permutation(n):
arr.permutation(2).each { |x, y| x.apply(y) }
• Can't get much simpler than that! – Michael Jul 20 '16 at 22:41
Some remarks if you don't want to use the permutation-method.
The loop with a counter on an array is not rubyesk. In ruby an iterator like Array#each is preferred (I added the apply-method to the code to make it runnable):
class Fixnum
def apply(i)
puts "%i -- %i " % [self,i]
end
end
arr = [413, 321, 654, 23, 11]
arr.each do |outer|
arr.each do |inner|
next if outer == inner
outer.apply inner
end
end
This works, unless the array contains an entry twice (e.g. arr = [413, 321, 654, 23, 11, 11]).
If you need the index you can use Array#.each_with_index:
arr.each_with_index do |outer, outer_i|
arr.each_with_index do |inner, inner_i|
next if outer_i == inner_i
outer.apply inner
end
end
You may also replace the inner loop with an array without the element of the outer loop:
arr.each_with_index do |outer, outer_i|
arr2 = arr.dup #Without dup you would change the original array
arr2.delete_at(outer_i)
arr2.each do |inner|
outer.apply inner
end
end | 2019-11-17 01:13:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.4529625177383423, "perplexity": 1900.8153015974053}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-47/segments/1573496668772.53/warc/CC-MAIN-20191116231644-20191117015644-00398.warc.gz"} |
https://math.stackexchange.com/questions/2383503/category-theory-from-the-first-order-logic-point-of-view | # Category theory from the first order logic point of view
$\DeclareMathOperator{\Id}{Id} \DeclareMathOperator{\Obj}{Obj} \DeclareMathOperator{\Arr}{Arr} \DeclareMathOperator{\Dom}{Dom} \DeclareMathOperator{\Cod}{Cod}$
MacLane defines "metacategories" purely axiomaticaly. When we look at it from perspective of first order logic we see that "metacategories" is a first order theory consisting following signature:
• two unary predicates $\Obj,\Arr$
• two unary functions $\Dom,\Cod$
• binary function $\circ$
• unary function $\Id$
and axioms for domain, codomain, composition unity. I wrote this axioms in the appendix.
Next, we can define a category as a model in this first order theory. Given two categories, we can also define a functor as a structure map.
Question. Is there some source which treats the whole category theory from this point of view?
For example I do not know how to define natural transformation in this language.
Appendix. Axioms for category theory :
• $(\forall f)(\Arr(f) \rightarrow (\Obj(\Dom(f)) \wedge \Obj((\Cod(f)))$
• $(\forall f,g)((\Arr(f)\wedge\Arr(g)\wedge\Cod(f)=\Dom(g))\rightarrow \Arr(g\circ f))$
• $(\forall a)(\Obj(a)\rightarrow((\Arr(\Id(a))\wedge \Dom(\Id(a))=a=\Cod(\Id(a))\wedge (\forall f)(\Arr(f)\rightarrow ((\Dom(f)=a\rightarrow f\circ\Id(a)=f)\wedge(\Cod(f)=a\rightarrow \Id(a)\circ f=f))))$
• $(\forall f,g,h)((\Arr(f)\wedge\Arr(g)\wedge\Arr(h)\wedge\Cod(f)=\Dom(g)\wedge \Cod(g)=\Dom(h))\rightarrow h\circ ((g\circ f) = (h\circ g)\circ f))$
• You mean $\operatorname{Obj}(a)\to\ldots$ in the third axiom, I suppose – Hagen von Eitzen Aug 5 '17 at 14:40
• What's $\text{Arr}$? – Andrew Tawfeek Aug 5 '17 at 14:42
• @AndrewTawfeek: I presume $\mathrm{Obj}(a)$ is interpreted to mean that $a$ is an object and $\mathrm{Arr}(f)$ is interpreted to mean that $f$ is an arrow. – Clive Newstead Aug 5 '17 at 14:43
• Maybe a two-sorted language could be more appropriate here since in category theory, asking what $Dom(a)$ is when $a$ is an object can seem weird. – Max Aug 5 '17 at 14:43
• You might want to include an axiom like $$(\forall x)(\mathrm{Obj}(x) \Leftrightarrow \neg \mathrm{Arr}(x))$$ to ensure that everything is an object, or a morphism, but not both. Another possibility is to say that every element is an arrow, and refer to objects by instead referring to their identity arrows. Another possibility is to use first-order logic with dependent sorts, which is perhaps more natural from the perspective of a category theorist. – Clive Newstead Aug 5 '17 at 14:51
This would be very unusual, for two important reasons.
First, models in sets are not enough. It is important to consider large categories, such as Set or AbGrp or Top or Cat.
Secondly, category theory develops its own take on first-order logic — it would be a wasted effort (and somewhat counter-philosophical) to study the subject in the traditional set-oriented version of logic.
However, one does study categories as models — we call such a thing an internal categories.
Regarding your specific note on natural transformations, there are several paths that might lead you there.
The first is that you can show the product has a right adjoint, so that there is a natural bijection
$$\hom(\mathcal{C} \times \mathcal{D}, \mathcal{E}) \cong \hom(\mathcal{C}, \mathcal{E}^{\mathcal{D}})$$
This means that you can treat $\mathcal{D}^\mathcal{C}$ as the category of morphisms from $\mathcal{C}$ to $\mathcal{D}$. A natural transformation, then, is an arrow in this category.
A similar phenomenon happens, for example, in abelian groups, which allows you to construct the abelian group of morphisms from one group to another.
Now, assuming you didn't think to show that, one can still draw inspiration from it; since arrows can be viewed as morphisms $\hom(\uparrow, \mathcal{C})$, where $\uparrow$ means the arrow category, even if you didn't know that $\mathcal{E}^\mathcal{D}$ existed, one can still draw inspiration from the idea of the adjunction above and define a natural transformation to be a morphism $\uparrow \times \mathcal{C} \to \mathcal{D}$.
And this whole thing is similar to topology, and one might be tempted to mimic the definition of a homotopy.
• To be clear, the cartesian closure condition holds for small categories but usually not for locally small or large categories. So "arrow in $\mathbf{Set}^\mathbf{Set}$" can't be used, but the "inspired" approach of a functor $\uparrow\times\mathbf{Set}\to\mathbf{Set}$ works fine. Of course this is only an issue if one wants to capture non-small categories. As you point out, to do this requires considering something more than normal set theoretic models (at least in a "usual" set theory). – Derek Elkins Aug 7 '17 at 1:06
• I agree with your first reason, but I think considering models beyond $\mathbf{Set}$ doesn't lead to any contradiction. As for the second one. I can't see why this approach is a waste of effort. I believe that intertwining theories is not always a bad idea. In our case, I was just curious how Godel Completeness Theorem would look like applied to category theory (treated as a first order theory). – Fallen Apart Aug 8 '17 at 14:30
First-order PA and first-order ZF are usually presented as mono-sortal -- in PA, every "thing" (every object in the domain of quantification) is a "number", and in ZF, "every" thing is a set. A first-order category theory, if trying to preserve this practice, could become a mono-sortal first-order "arrow theory". One could shrink the signature presented in the axiomatization above by eliminating the Arr predicate, since every thing, including objects, could be an arrow -- the objects would be the id-arrows. In that treatment, every arrow goes from one id-arrow to another id-arrow, and id-arrows are the ones that go from themselves to themselves (i.e. are fixpoints of both the domain/"source" and codomain/"target" operators). A group could be a monoid where all the arrows go from and to the same one object, which is a distinguished arrow that is the group's identity element and is defined to compose with all the others accordingly. All of the "actual structure" of a category would be in a model's definition of the composition operator. Unfortunately, since that operator is partial (if there is more than one object in the category, it is possible for some arrows not to compose with others at all because their target and source don't dovetail), the resulting axiomatization would need to write composition as a ternary predicate since the usual definition of a first-order language doesn't allow "partial" term-constructors.
William Hatcher gave this as a first-order axiomatization.
C.1. $\forall x_1[ d(c(x_1)) = c(x_1) \wedge c(d(x_1)) = d(x_1) ]$
The domain of the codomain of $x_1$ is the codomain of $x_1$, and the codomain of the domain of $x_1$ is the domain of $x_1$.
C.2. $\forall x_1x_2x_3x_4 [ K(x_1,x_2,x_3) \wedge K(x_1,x_2,x_4)\rightarrow x_3=x_4]$
The composition of $x_1$ with $x_2$ is unique when it is defined.
C.3. $\forall x_1x_2[ \exists x_3[ K(x1,x2,x3)]\leftrightarrow c(x_1)=d(x_2)$ ]
The composition of $x_1$ with $x_2$ is defined if and only if the codomain of $x_1$ is the domain of $x_2$.
C.4. $\forall x_1x_2x_3[ K(x_1,x_2,x_3)\rightarrow d(x_3)=d(x_1) \wedge c(x_3)=c(x_2) ]$
If $x_3$ is the composition of $x_1$ with $x_2$ then the domain of $x_3$ is the domain of $x_1$ and the codomain of $x_3$ is the codomain of $x_2$.
C.5.$\forall x_1[ K(d(x_1),x_1,x_1) \wedge K(x_1,c(x_1),x_1)]$
For any $x_1$, the domain of $x_1$ is a left identity for $x_1$ under composition and the codomain is a right identity.
C.6. $\forall x_1x_2x_3x_4x_5x_6x_7 [ ( K(x_1,x_2,x_4) \wedge K(x_2,x_3,x_5) \wedge K(x_1,x_5,x_6) \wedge K(x_4,x_3,x_7) )\\\rightarrow x_6=x_7 ]$ Composition is associative when it is defined.
Any model of this theory is a category. Unfortunately, however, that would beg the question of what to use as a model-construction language. First-order logic is typically associated with models constructed in a set theory such as ZF. That would defeat the whole purpose of this as an alternative foundation, as well as limiting us to "small" or "internal" categories. | 2019-08-20 18:46:45 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8703201413154602, "perplexity": 381.07076991174046}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027315558.25/warc/CC-MAIN-20190820180442-20190820202442-00265.warc.gz"} |
http://www.transtutors.com/questions/weak-law-for-positive-variables-suppose-x1-x2-are-i-i-d-p-0-xi-lt-8-1-and-p-xi--825922.htm | # Weak law for positive variables. Suppose X1, X2, . . . are i.i.d., P(0 = Xi < 8) = 1 and P(Xi...
Weak law for positive variables. Suppose X1, X2, . . . are i.i.d., P(0 = Xi < 8) = 1 and P(Xi > x) > 0 for all x. Let µ(s) = R s 0 x dF(x) and ?(s) = µ(s)/s(1 - F(s)). It is known that there exist constants an so that Sn/an ? 1 in probability, if and only if ?(s) ? 8 as s ? 8. Pick bn = 1 so that nµ(bn) = bn (this works for large n), and use Theorem 2.2.6 to prove that the condition is sufficient.
Related Questions in Theory of probability
• 31. Compare the Poisson approximation with the correct binomial probability... (Solved) March 18, 2016
31. Compare the Poisson approximation with the correct binomial probability for the following cases: (a) P{ X = 2 } when n = 8 , p = 0.1 . (b) P{ X = 9} when n = 10, p = 0 .95. (c) P{ X =...
• . For an example where the weak law does not hold, suppose... May 13, 2015
. For an example where the weak law does not hold, suppose X 1 , X 2 , . . . are independent and have a Cauchy distribution: P(Xi = x ) = Z x - 8 dt p( 1 + t 2) As x ? 8 , P(| X 1 | > x )...
• Stat (Solved) February 09, 2012
txtbook: the 5th edition of An Introduction to Mathematical Statistics and its Applications by Richard Larsen and Morris Marx.
Solution Preview :
1. X1;X2; :::;Xn are iid samples from a poisson distribution b= 1nPn i=1Xi is an ecient estimator f(X1;X2; :::;Xnj) = e??nPXi X1!X2!:::Xn! log (f(X1;X2; :::;Xnj)) = log e??nPXi X1!X2!:::Xn!=...
• . Let X1, X2, . . . be i.i.d. with a... May 14, 2015
. Let X 1 , X 2 , . . . be i.i.d . with a density that is symmetric about 0 , and continuous and positive at 0 . We claim that 1 n 1 X 1 + · · · + 1 Xn ? a Cauchy distribution (a = 1 ,...
• --Proof by obtaining the cdf and pdf (Solved) May 09, 2012
Let X 1 , X 2 , ... Xn be iid random variables with common pdf f( x ) = { e^-( x -?) x > ? , - 8 { 0 elsewhere This pdf is called the shifted exponential. Let Yn = min{ X 1 , X 2 , ......
Copy and paste your question here...
Attach Files
• Most Popular
• Most Viewed
• Most Popular
• Most Viewed
• Most Popular
• Most Viewed
5 | 2017-10-20 23:24:45 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.877622127532959, "perplexity": 1397.2190270414428}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-43/segments/1508187824471.6/warc/CC-MAIN-20171020230225-20171021010225-00835.warc.gz"} |
https://discourse.mc-stan.org/t/reparameterizing-to-avoid-low-e-bfmi-warning/7957 | # Reparameterizing to avoid low E-BFMI warning
I’m attempting to fit a hierarchical binary logistic regression model, but have received a low E-BFMI warning for all four chains:
## E-BFMI indicated possible pathological behavior:
## Chain 1: E-BFMI = 0.128
## Chain 2: E-BFMI = 0.169
## Chain 3: E-BFMI = 0.179
## Chain 4: E-BFMI = 0.141
## E-BFMI below 0.2 indicates you may need to reparameterize your model.
I’m quite new to Stan so I’m not really sure where to start with the suggestion to reparameterize my model. (I think) I have used non-centred parameterizations across the board, as well as following the Stan User’s Guide advice to reparameterize the half-Cauchy priors on my SDs and use Cholesky factorization for my LKJ prior:
parameters {
real alpha0; // Intercept
vector[n_issues] alphaj; // Issue-specific effects
real<lower=0,upper=pi()/2> sigma_issue_unif; // Transformed SD of issue-specific effects
vector[n_fam] gamma; // Random family effects
real<lower=0,upper=pi()/2> sigma_gamma_unif; // Transformed SD of family random effects
cholesky_factor_corr[n_issues] L_Omega; // Cholesky factor of correlation
real<lower=0,upper=pi()/2> sigma_eta_unif; // Transformed SD of family-issue-specific effects
matrix[n_issues,n_fam] z;
}
transformed parameters {
real<lower=0> sigma_issue; // SD of issue-specific effects
real<lower=0> sigma_gamma; // SD of family random effects
real<lower=0> sigma_eta; // SD of family-issue-specific effects
matrix[n_issues,n_fam] eta; // Family-issue-specific effects
sigma_issue = 0.07 * tan(sigma_issue_unif);
sigma_gamma = 0.07 * tan(sigma_gamma_unif);
sigma_eta = 0.07 * tan(sigma_eta_unif);
for (n in 1:n_fam)
eta[,n] = sigma_eta * L_Omega * z[,n];
}
model {
alpha0 ~ logistic(0,1); // Logistic prior for the intercept
L_Omega ~ lkj_corr_cholesky(2); // LKJ prior on Cholesky factor
for (n in 1:n_fam)
z[,n] ~ std_normal();
alphaj ~ std_normal(); // Issue-specific effects
gamma ~ std_normal(); // Random family effects
concern ~ bernoulli_logit(alpha0 + sigma_issue*xissues*alphaj + sigma_gamma*xfam*gamma +
xfull*to_vector(eta)); // Logistic regression model
}
generated quantities {
corr_matrix[n_issues] Omega; // Correlation between issues
Omega = multiply_lower_tri_self_transpose(L_Omega);
}
The issue appears to be in the primitive SD parameters, specifically sigma_issue_unif and sigma_eta_unif (sigma_gamma_unif looks fine, as do all of the other parameter vs energy__ scatterplots).
The model takes a while to run so it’s a bit difficult for me to just try things to see what sticks. Am I better off using a half-Cauchy prior directly instead of reparameterizing? Would a weaker, stronger or different prior help matters (I can’t remember where I got 0.07 from)?
Any advice on speeding up my code would be great too. I’m not sure I’m using vectorization as efficiently as I could be.
Can you do plots of:
1. energy vs sigma_eta
2. energy vs sigma_issue
3. sigma_eta vs. sigma_issue
What is xissues (dimensions too)? It is strange to me that covariates (if they are covariates) would go into a term like this. Do you have a centered version of this? I might be reading it wrong.
Thanks Ben for your interest in my question.
I had a bit more of a think and realised that I didn’t want/need to estimate the SD of alphaj - they are “fixed effects”. So I removed sigma_issue altogether.
I still received the E-BFMI warnings with a similar sigma_eta_unif vs energy__ plot, but saw comments elsewhere on these forums that half-Cauchys aren’t so well supported as SD priors. Thinking about it more, I agreed that these put too much weight on quite extreme values in my situation, so I changed it to a half-normal prior.
I still have quite a strong apparent correlation between sigma_eta and energy__ appearing in the plot, but now E-BFMI is up around 0.35-0.4 and I do not get any warnings. I’ll consider more carefully whether half-normal is really what I want, but for now it seems to have avoided the issue.
xissues is (like xfam and xfull) just a ‘design matrix’ of 0s and 1s to help me match up the observations to their parameters; its dimensions are n_obs * n_issues. The model doesn’t include any real covariates yet, but I plan to add them in the future, which is why I set it up like this.
I don’t have a centered version of the code, but this might make clearer what model I intended to fit
\mathrm{logit}(p_{ij}) = \alpha_0 + \alpha_j + \gamma_i + \eta_{ij}, \qquad i = 1, \dotsc, n, \; j = 1, \dotsc, J
with \alpha_0 \sim \mathrm{Logistic}(0,1), \alpha_j \sim N(0,3), \gamma_i \sim N(0,\sigma_\gamma), \sigma_\gamma \sim \text{Half-}N(0,1), \eta_{i} \sim MVN_J(0,\sigma_\eta \Omega^{1/2}), \sigma_\eta \sim \text{Half-}N(0,1), \Omega \sim LKJ(2)
The main idea being that the J responses within each i are correlated in some way, and (along with the \alpha_j s), that correlation matrix \Omega is of great interest.
Nice! Glad to hear it’s working out.
Aaah, gotcha, makes sense.
Someone in another thread just asked something about why you’d want to model a covariance like this. I think I messed up the answer. If you’re feeling charitable: Estimating covariance terms in multilevel model - #2 by bbbales2 :D
I’m afraid I think I’m too new to Bayes to get my head around the exact question they are asking (or how you feel you messed up), but maybe if I describe why the correlation is of interest in my case, it might help?
My scenario is a survey of several potential ethical issues that might arise in a certain medical setting. Families indicated whether or not each issue was concerning to them, as a simple yes/no response.
The scientific questions of interest were, in order of complexity:
1. What is the general level of concern about these ethical issues
2. What is the general level of concern about each specific issue j
3. How are the issues related to one another, e.g. are families who are likely to say ‘issue 7’ is a concern more or less likely to say ‘issue 3’ is a concern?
4. [Still to do] What characteristics of families are associated with their general level of concern across all issues?
5. [Still to do] Are there any family characteristics that are associated with concern about specific issues?
Using the notation from my previous post, \alpha_0 and \alpha_j help answer the first two questions; \gamma_i is a ‘random effect’ to allow families to have different levels of general concern across all issues, and \Omega is essentially there to answer the third question.
I had initially attempted to do this by putting a multivariate normal prior on the \alpha_j s and estimating their correlation, but I hope I was correct in realising (after many computational issues) that it was the correlation between issue-specific random effects within families (i.e. the \eta_{ij} s) that is actually what I wanted to model.
I agree with both of your posts in the other thread, but again I’m not sure if I’ve missed the point or not. I suppose that observing correlations in your posterior after prior independence is a different thing to obtaining a posterior for the correlation; and I hope my scenario is one in which the latter is of direct interest.
What’s happening here is that your regression is poorly identified – or even non-identified – which manifests as a likelihood function that is narrow along the identified directions but stretching towards infinity in the others. With half-Cauchy priors you allow way too much of that pathological behavior into the posterior and the HMC implementation in Stan is struggling to adequately explore it, hence the warning.*
By moving to tighter priors you cut off more of the pathology, which then induces a posterior that Stan can actually handle.
If you want more diffuse priors then consider an exponential density or a Student-t density with 4-7 degrees of freedom. A Cauchy density is never the right answer.
• For the more technically minded, the Hamiltonian trajectories are able to explore only a tiny portion of the posterior typical set and instead the sampler is relying on the momenta resampling to diffuse it across the typical set. That diffusion, however, is too slow to be able to guarantee complete exploration which manifests as the warning.
6 Likes
Nah nah! This is good. That’s what I was looking for – something application-y. I linked your post to the other thread. That seems like a really clear description of what is happening and why you’re doing it.
Thanks for that! Also thanks @betanalpha for the follow up. | 2022-05-20 16:58:20 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5698519349098206, "perplexity": 2406.998452523294}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662533972.17/warc/CC-MAIN-20220520160139-20220520190139-00359.warc.gz"} |
https://www.physicsforums.com/threads/magnetic-field-outside-of-a-solenoid-conceptual.870684/ | # Magnetic field outside of a solenoid conceptual
• B
How come the magnetic field outside a solenoid is practically zero?
I've read reasons along the lines of:
-The magnetic field cancels out on the outside.
Of course the net force cancels out, but what if you have an object placed on just one spot? The force on that object clearly is not 0 because it is closer to one side of the solenoid than the other.
-The field lines spread outside the solenoid so much, that the density goes to zero as the solenoid gets longer.
This seems like it only happens in cases that say you start out with a solenoid of fixed length and current, then extend it out to a very large number. If you increase the current at the same time you are stretching the coil, the magnetic flux density will remain somewhat constant won't it? Now what if you don't stretch the solenoid at all? What if you had a solenoid of infinite length to begin with- is the field still 0 outside?
Another question: Why is the field outside nearly zero at all? Each current running through each section will contribute to a magnetic field outside...
Paul Colby
Gold Member
A linear solenoid should have a field similar to a bar magnet. While the field is largest near the "poles" it's hardly 0 since magnetic field must close on themselves.
nasu
Gold Member
Just consider the field produced by two small elements of the loop situated at the two opposite ends of a diameter.
See how the fields add in the center of the loop and somewhere outside, in a point on the same diameter.
This will show you why the field is weaker outside.
How come the magnetic field outside a solenoid is practically zero?
You are right, this is not obious
I've read reasons along the lines of:
-The magnetic field cancels out on the outside.
-The field lines spread outside the solenoid so much, that the density goes to zero as the solenoid gets longer.
Regarding the first statement, we know it is right but we want to know why. As for the second statement it is only a qualitative one and does not prove anything
One way we prove the correct result is this:
- Apply Biort-Savart to four differential elements, two of them situated obove your position and the other two below it. The two elementes must be situated on a circle perpendicular to the cylinder axis and simetrically situated from your point of sight
- The above analysis yields that the the field is paralell to the cylinder's axis
- Using Ampere's Law to an appropiate contour inside and outside the cylinder yields the the field mus be constant. The values outside and inside could be different however.
- Using Ampere´s law to an appropiate contour one part of it lying inside and the other outside we get:$$B(outside)-B(inside)=-\mu_0NI$$
- By integration of Biot-Savart Low calculate B on the axis of the cylinder which gives : B(inside)=##\mu_0NI##
- The last two results render B(outside)=0
Last edited: | 2021-02-26 19:15:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.628134548664093, "perplexity": 520.0782666319277}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2021-10/segments/1614178357935.29/warc/CC-MAIN-20210226175238-20210226205238-00262.warc.gz"} |
https://nodus.ligo.caltech.edu:8081/40m/?id=16153 | 40m QIL Cryo_Lab CTN SUS_Lab TCS_Lab OMC_Lab CRIME_Lab FEA ENG_Labs OptContFac Mariner WBEEShop
40m Log, Page 15 of 335 Not logged in
ID Date Author Type Category Subject
16168 Fri May 28 17:32:48 2021 AnchalSummaryALSSingle Arm Actuation Calibration with IR ALS Beat
I attempted a single arm actuation calibration using IR beatnote (in the directions of soCal idea for DARM calibration)
## Measurement and Inferences:
• I sent 4 excitation signals at C1:SUS-ITM_LSC_EXC wit 30cts at 31Hz, 200cts at 197Hz, 600cts at 619Hz and 1000cts at 1069 Hz.
• These were sent simultaneously using compose function in python awg.
• The XARM was locked to mai laser and alignment was optimized with ASS.
• The Xend Green laser was locked to XARM and alignment was optimized.
• Sidenote: GTRX is now normalized to give 1 at near maximum power.
• Green lasers can be locked with script instead of toggling.
• Script can be called from sitemap->ALS->! Toggle Shutters->Lock X Green
• Script is present at scripts/ALS/lockGreen.py.
• C1:ALS-BEATX_FINE_PHASE_OUT_HZ_DQ was measured for 60s.
• Also, measured C1:LSC-XARM_OUT_DQ and C1:SUS-ITMX_LSC_OUT_DQ.
• Attachment 1 shows the measured beatnote spectrum with excitations on in units of m/rtHz.
• It also shows resdiual displacement contribution PSD of (output referred) XARM_OUT and ITMX_LSC_OUT to the same point in the state space model.
• Note: that XARM_OUT and ITMX_LSC_OUT (excitation signal) get coherently added in reality and hence the beatnote spectrum at each excitation frequency is lower than both of them.
• The remaining task is to figure out how to calculate the calibration constant for ITMX actuation from this information.
• I need more time to understand the mixture of XARM_OUT and ITMX_LSC_OUT in the XARM length node in control loop.
• Beatnote signal tells us the actual motion of the arm length, not how much ITMX would have actuated if the arm was not locked.
• Attachment 2 has the A,B,C,D matrices for the full state space model used. These were fed to python controls package to get transfer functions from one point to another in this MIMO.
• Note, that here I used the calibration of XARM_OUT we measured earlies in 16127.
• On second thought, maybe I should first send excitation in ETMX_LSC_EXC. Then, I can just measure ETMX_LSC_OUT which includes XARM_OUT due to the lock and use that to get calibration of ETMX actuation directly.
Attachment 1: SingleArmActCalwithIRALSBeat.pdf
Attachment 2: stateSpaceModel.zip
16167 Fri May 28 11:16:21 2021 JonUpdateCDSFront-End Assembly and Testing
An update on recent progress in the lab towards building and testing the new FEs.
### 1. Timing problems resolved / FE BIOS changes
The previously reported problem with the IOPs losing sync after a few minutes (16130) was resolved through a change in BIOS settings. However, there are many required settings and it is not trivial to get these right, so I document the procedure here for future reference.
The CDS group has a document (T1300430) listing the correct settings for each type of motherboard used in aLIGO. All of the machines received from LLO contain the oldest motherboards: the Supermicro X8DTU. Quoting from the document, the BIOS must be configured to enforce the following:
• Remove hyper-threading so the CPU doesn’t try to run stuff on the idle core, as hyperthreading simulate two cores for every physical core.
• Minimize any system interrupts from hardware, such as USB and Serial Ports, that might get through to the ‘idled’ core. This is needed on the older machines.
• Prevent the computer from reducing the clock speed on any cores to ‘save power’, etc. We need to have a constant clock speed on every ‘idled’ CPU core.
I generally followed the T1300430 instructions but found a few adjustments were necessary for diskless and deterministic operation, as noted below. The procedure for configuring the FE BIOS is as follows:
1. At boot-up, hit the delete key to enter the BIOS setup screen.
2. Before changing anything, I recommend photographing or otherwise documenting the current working settings on all the subscreens, in case for some reason it is necessary to revert.
3. T1300430 assumes the process is started from a known state and lists only the non-default settings that must be changed. To put the BIOS into this known state, first navigate to Exit > Load Failsafe Defaults > Enter.
4. Configure the non-default settings following T1300430 (Sec. 5 for the X8DTU motherboard). On the IPMI screen, set the static IP address and netmask to their specific assigned values, but do set the gateway address to all zeros as the document indicates. This is to prevent the IPMI from trying to initiate outgoing connections.
5. For diskless booting to continue to work, it is also necessary to set Advanced > PCI/PnP Configuration > Load Onboard LAN 1 Option Rom > Enabled.
6. I also found it was necessary to re-enable IDE direct memory access and WHEA (Windows Hardware Error Architecture) support. Since these machines have neither hard disks nor Windows, I have no idea why these are needed, but I found that without them, one of the FEs would hang during boot about 50% of the time.
• Advanced > PCI/PnP configuration > PCI IDE BusMaster > Enabled.
• Advanced > ACPI Configuration > WHEA Support > Enabled.
After completing the BIOS setup, I rebooted the new FEs about six times each to make sure the configuration was stable (i.e., would never hang during boot).
### 2. User models created for FE testing
With the timing issue resolved, I proceeded to build basic user models for c1bhd and c1sus2 for testing purposes. Each one has a simple structure where M ADC inputs are routed through IIR filters to an output matrix, which forms linear signal combinations that are routed to N DAC outputs. This is shown in Attachment 1 for the c1bhd case, where the signals from a single ADC are conditioned and routed to a single 18-bit DAC. The c1sus2 case is similar; however the Contec BO modules still needed to be added to this model.
The FEs are now running two models each: the IOP model and one user model. The assigned parameters of each model are documented below.
Model Host CPU DCUID Path
c1x06 c1bhd 1 23 /opt/rtcds/userapps/release/cds/c1/models/c1x06.mdl
c1x07 c1sus2 1 24 /opt/rtcds/userapps/release/cds/c1/models/c1x07.mdl
c1bhd c1bhd 2 25 /opt/rtcds/userapps/release/isc/c1/models/c1bhd.mdl
c1sus2 c1sus2 2 26 /opt/rtcds/userapps/release/sus/c1/models/c1sus2.mdl
The user models were compiled and installed following the previously documented procedure (15979). As shown in Attachment 2, all the RTS processes are now working, with the exception of the DAQ server (for which we're still awaiting hardware). Note that these models currently exist only on the cloned copy of the /opt/rtcds disk running on the test stand. The plan is to copy these models to the main 40m disk later, once the new FEs are ready to be installed.
### 3. AA and AI chassis installed
I installed several new AA and AI chassis in the test stand to interface with the ADC and DAC cards. This includes three 16-bit AA chassis, one 16-bit AI chassis, and one 18-bit AI chassis, as pictured in Attachment 3. All of the AA/AI chassis are powered by one of the new 15V DC power strips connected to a bench supply, which is housed underneath the computers as pictured in Attachment 4.
These chassis have not yet been tested, beyond verifying that the LEDs all illuminate to indicate that power is present.
Attachment 1: c1bhd.png
Attachment 2: gds_tp.png
Attachment 3: teststand.jpeg
Attachment 4: bench_supply.jpeg
16166 Fri May 28 10:54:59 2021 JonUpdateCDSOpto-isolator for c1auxey
I have received the opto-isolator needed to complete the new c1auxey system. I left it sitting on the electronics bench next to the Acromag chassis.
Here is the manufacturer's wiring manual. It should be wired to the +15V chassis power and to the common return from the coil driver, following the instructions herein for NPN-style signals. Note that there are two sets of DIP switches (one on the input side and one on the output side) for selecting the mode of operation. These should all be set to "NPN" mode.
Attachment 1: optoisolator.jpeg
16165 Thu May 27 14:11:15 2021 JordanUpdateSUSCoM to Clamping Point Measurement for 3" Adapter Ring
The current vertical distance between the CoM and the wire clamping point on the 3" Ring assembly is 0.33mm. That is the CoM is .33 mm below the clamping point of the wire. I took the clamping point to be the top edge of the wire clamp piece. see the below attachments.
I am now modifying the dumbell mechanism at the bottom of the ring to move the CoM to the target distance of 1.1mm.
Attachment 1: CoM_to_Clamp.PNG
Attachment 2: CoM_to_Clamp_2.PNG
16164 Thu May 27 11:03:15 2021 Anchal, PacoSummaryALSALS Single Arm Noise Budget
Here's an updated X ARM ALS noise budget.
## Things to remember:
• Major mistake we were making earlier was that we were missing the step of clicking 'Set Phase UGF' before taking the measurement.
• Click the clear phase history just before taking measure.
• Make sure the IR beatnotes are less than 50 MHz (or the left half of HP8591E on water crate). The DFD is designed for this much beatnote frequency (from Gautum).
• We took this measurement with old IMC settings.
• We have saved a template file in users/Templates/ALS/ALS_outOfLoop_Ref_DQ.xml . This si same as ALS_outOfLoop_Ref.xml except we changed all channels to _DQ.
## Conclusions:
• Attachment 1 shows the updated noisebudget. The estimated and measured RMS noise are very close to eachother.
• However, there is significant excess noise between 4 Hz and 200 Hz. We're still thinking on what could be the source of these.
• From 200 Hz to about 3 kHz, the beatnote noise is dominated by AUX residual frequency noise. This can be verified with page 2 of Attachment 2 where coherence between AUX PDH Error signal and BEATX signal is high.
• One mystery is how the measured beatnote noise is below the residual green laser noise above 3 kHz. Could this be just because the phase tracker can't measure noise above 3kHz?
• We have used estimated open loop transfer function for AUX from poles/zeros for uPDH box used (this was done months ago by me when I was working on ALS noise budget from home). We should verify it with a fresh OLTF measurement of AUX PDH loop. That's next on our list.
Attachment 1: ALS_Single_X_Arm_IR.pdf
Attachment 2: ALS_OOL_with_Ref.pdf
16163 Wed May 26 11:45:57 2021 Anchal, PacoConfigurationIMCMC2 analog camera
[Anchal, Paco]
We went near the MC2 area and opened the lid to inspect the GigE and analog video monitors for MC2. Looked like whatever image is coming through the viewport is split into the GigE (for beam tracking) and the analog monitor. We hooked the monitor found on the floor nearby and tweaked the analog video camera around to get a feel for how the "ghost" image of the transmission moves around. It looks like in order to try and remove this "extra spots" we would need to tweak the beam tracking BS. We will consult the beam tracking authorities and return to this.
16162 Wed May 26 02:00:44 2021 gautamUpdateElectronicsCoil driver noise
I was preparing a short write-up / test procedure for the custom HV coil driver, when I thought of something I can't resolve. I'm probably missing some really basic physics here - but why do we not account for the shot noise from DC current flowing through the series resistor? For a 4kohm resistor, the Johnson current noise is ~2pA/rtHz. This is the target we were trying to beat with our custom designed HV bias circuit. But if there is a 1 mA DC current flowing through this resistor, the shot noise of this current is $\sqrt{2eI_{\mathrm{DC}}} \approx$18pA/rtHz, which is ~9 times larger than the Johnson noise of the same resistor. One could question the applicability of this formula to calculate the shot noise of a DC current through a wire-wound resistor - e.g. maybe the electron transport is not really "ballistic", and so the assumption that the electrons transported through it are independent and non-interacting isn't valid. There are some modified formulae for the shot noise through a metal resistor, which evaluates to $\sqrt{2eI_{\mathrm{DC}}/3} \approx$10pA/rtHz for the same 4kohm resistor, which is still ~5x the Johnson noise.
In the case of the HV coil driver circuit, the passive filtering stage I added at the output to filter out the excess PA95 noise unwittingly helps us - the pole at ~0.7 Hz filters the shot noise (but not the Johnson noise) such that at ~10 Hz, the Johnson noise does indeed dominate the total contribution. So, for this circuit, I think we don't have to worry about some un-budgeted noise. However, I am concerned about the fast actuation path - we were all along assuming that this path would be dominated by the Johnson noise of the 4kohm series resistor. But if we need even 1mA of current to null some DC DARM drift, then we'd have the shot noise contribution become comparable, or even dominant?
I looked through the iLIGO literature, where single-stage suspensions were being used, e.g. Rana's manifesto, but I cannot find any mention of shot noise due to DC current, so probably there is a simple explanation why - but it eludes me, at least for the moment. The iLIGO coil drivers did not have a passive filter at the output of the coil driver circuit (at least, not till this work), and there isn't any feedback gain for the DARM loop at >100 Hz (where we hope to measure squeezing) to significantly squash this noise.
Attachment #1 shows schematic topologies of the iLIGO and proposed 40m configs. It may be that I have completely misunderstood the iLIGO config and what I've drawn there is wrong. Since we are mainly interested in the noise from the resistor, I've assumed everything upstream of the final op-amp is noiseless (equivalently, we assume we can sufficiently pre-filter these noises).
Attachment #2 shows the relative magnitudes of shot noise due to a DC current, and thermal noise of the series resistor, as a function of frequency, for a few representative currents, for the slow bias path assuming a 0.7Hz corner from the 4kohm/3uF RC filter at the output of the PA95.
Some lit review suggests that it's actually pretty hard to measure shot noise in a resistor - so I'm guessing that's what it is, the mean free path of electrons is short compared to the length of the resistor such that the assumption that electrons arrive independently and randomly isn't valid. So Ohm's law dictates $I=V/R$ and that's what sets the current noise. See, for example, pg 432 of Horowitz and Hill.
Attachment 1: coilDriverTopologies.pdf
Attachment 2: shotVthermal.pdf
16161 Tue May 25 17:42:11 2021 Anchal, PacoSummaryALSALS Single Arm Noise Budget
Here is our first attempt at a single-arm noise budget for ALS.
Attachment 1 shows the loop diagram we used to calculate the contribution of different noises.
Attachment 2 shows the measured noise at C1:ALS-BEATX_PHASE_FINE_OUT_HZ when XARM was locked to the main laser and Xend Green laser was locked to XARM.
• The brown curve shows the measured noise.
• The black curve shows total estimated noise from various noise sources (some of these sources have not been plotted as their contribution falls off the plotting y-lims.)
• The residual frequency noise of Xend green laser (AUX) is measured by measuring the PDH error monitor spectrum from C1:ALS-X_ERR_MON_OUT_DQ. This measurement was converted into units of V by multiplying it by 6.285e-4 V/cts. This factor was measured by sending a 43 Hz 100 mV sine wave at the readout point and measuring the output in the channel.
• This error signal is referred to AUX_Freq input in the loop diagram (see attachment 1) and then injected from there.
• All measurements were taken to Res_Disp port in the 'Out-of-Loop Beat Note' block (see attachment 1).
• In this measurement, we did not DAC noise that gets added when ALS loop is closed.
• We added ADC noise from Kiwamu's ALS paper after referring it to DFD input. DFD noise is also taken from Kiwamu's ALS paper data.
### Inference:
• Something is wrong above 200 Hz for the inclusion of AUX residual displacement noise. It is coming off as higher than the direct measured residual noise, so something is wrong with our loop diagram. But I'm not sure what.
• There is a lot of unaccounted noise everywhere from 1 Hz to 200 Hz.
• Rana said noise budget below 1 Hz is level 9 stuff while we are at level 2, so I'll just assume the excess noise below 1 Hz is level 9 stuff.
• We did include seismic noise taken from 40m noise budget in 40m/pygwinc. But it seems to affect below the plotted ylims. I'm not sure if that is correct either.
### Unrelated questions:
• There is a slow servo feeding back to Green Laser's crystal temperature by integrating PZT out signal. This is OFF right now. Should we keep it on?
• The green laser lock is very unreliable and it unlocks soon after any signal is being fed back to the ETMX position.
• This means, keeping both IR and green light locked in XARM is hard and simultaneous oscillation does not last longer than 10s of seconds. Why is it like this?
• We notice that multiple higher-order modes from the green laser reach the arm cavity. The HOMs are powerful enough that PDH locks to them as well and we toggle the shutter to come to TEM00 mode. These HOMs must be degrading the PDH error signal. Should we consider installing PMCs at end tables too?
Attachment 1: ALS_IR_b.svg
Attachment 2: ALS_Single_Arm_IR.pdf
16160 Tue May 25 17:08:17 2021 ChubUpdateElectronicschassis rework complete!
All remaining chasses have been reworked and placed on the floor along the west wall in Room 104.
Attachment 1: 40M_chassis_reworked_5-25-21.jpg
16159 Tue May 25 10:22:16 2021 Anchal, PacoSummarySUSMC1 new input matrix calculated and uploaded
The test was succesful and brought back the IMC to lock point at the end.
We calculated new input matrix using same code in scripts/SUS/InMatCalc/sus_diagonalization.py . Attachment 1 shows the results.
The calculations are present in scripts/SUS/InMatCalc/MC1.
We uploaded the new MC1 input matrix at:
Unix Time = 1621963200
UTC May 25, 2021 17:20:00 UTC Central May 25, 2021 12:20:00 CDT Pacific May 25, 2021 10:20:00 PDT
GPS Time = 1305998418
This was done by running python scripts/SUS/general/20210525_NewMC1Settings/uploadNewConfigIMC.py on allegra. Old IMC settings (before Paco and I started workin on 40m) can be restored by running python scripts/SUS/general/20210525_NewMC1Settings/restoreOldConfigIMC.py on allegra.
Everything looks as stable as before. We'll look into long term trends in a week to see if this helped at all.
Attachment 1: SUS_Input_Matrix_Diagonalization.pdf
16158 Mon May 24 20:55:00 2021 KojiSummaryBHDHow to align two OMCs on the BHD platform?
Differential misalignment of the OMCs
40m BHD will employ two OMCs on the BHD platform. We will have two SOSs for each of the LO and AS beams. The challenge here is that the input beam must optimally couple to the OMCs simultaneously. This is not easy as we won't have independent actuators for each OMC. e.g. The alignment of the LO beam can be optimally adjusted to the OMC1, but this, in general, does not mean the beam is optimally aligned to the OMC2.
Requirement
When a beam with the matched mode to an optical cavity has a misalignment, the power coupling C can be reduced from the unity as
$C = 1 - \left(\frac{a}{\omega_0}\right)^2 - \left(\frac{\alpha}{\theta_0}\right)^2$
where $\omega_0$ is the waist radius, $\theta_0$ is the divergence angle defined as $\theta_0 \equiv \lambda/ \pi \omega$, $a$ and $\alpha$ are the beam lateral translation and rotation at the waist position.
The waist size of the OMC is 500um. Therefore $\omega_0$ = 500um and $\theta_0$ = 0.68 mrad. If we require C to be better than 0.995 according to the design requirement document (T1900761). This corresponds to $a$ (only) to be 35um and $\alpha$ (only) to be 48urad. These numbers are quite tough to be realized without post-installation adjustment. Moreover, the OMCs themselves have individual differences in the beam axis. So no matter how we set the mechanical precision of the OMC installation, we will introduce a maximum of 1mm and ~5mrad uncertainty of the optical axis.
Suppose we adjust the incident beam to the OMC placed at the transmission side of the BHD BS. The reflected beam at the BS can be steered by picomotors. The distance from the BS to the OMC waist is 12.7" (322mm) according to the drawing.
So we can absorb the misalignment mode of ($a$, $\alpha$) = (0.322 $\theta$, $\theta$). This is a bit unfortunate. 0.322m is about 1/2 of the rayleigh range. Therefore, this actuation is still angle-dominated but a bit of translation is still coupled.
If we enable to use the third picomotor on the BHD BS mount, we can introduce the translation of the beam in the horiz direction. This is not too huge therefore we still want to prepare the method to align the OMC in the horiz direction.
The difficult problem is the vertical alignment. This requires the vertical displacement of the OMC. And we will not have the option to lower the OMC. Therefore if the OMC2 is too high, we have to raise the OMC1 so that the resulting beam is aligned to the OMC2. i.e. we need to maintain the method to raise both OMCs. (... or swap the OMCs). From the images of the OMC beam spots, we'll probably be able to analyze the intracavity axes of the OMCs. So we can always place the OMC with a higher optical axis at the transmission side of the BHD BS.
16157 Mon May 24 19:14:15 2021 Anchal, PacoSummarySUSMC1 Free Swing Test set to trigger
We've set a free swing test to trigger at 3:30 am tomorrow for MC1. The script for tests is running on tmux session named 'freeSwingMC1' on rossa. The script will run for about 4.5 hrs and we'll correct the input matrix tomorrow from the results. If anyone wants to work during this time (3:30 am to 8:00 am), you can just kill the script by killing tmux session on rossa. ssh into rossa and type tmux kill-session -t freeSwingMC1.
Quote: We should redo the MC1 input matrix optimization and the coil balancing afterward as we did everything based on the noisy UL OSEM values.
16156 Mon May 24 10:19:54 2021 PacoUpdateGeneralZita IOO strip
Updated IOO.strip on Zita to show WFS2 pitch and yaw trends (C1:IOO-WFS2_PIY_OUT16 and C1:IOO-WFS2_YAW_OUT16) and changed the colors slightly to have all pitch trends in the yellow/brown band and all yaw trends in the pink/purple band.
No one says, "Here I am attaching a cool screenshot, becuz else where's the proof? Am I right or am I right?"
Mon May 24 18:10:07 2021 [Update]
After waiting for some traces to fill the screen, here is a cool screenshot (Attachment 1). At around 2:30 PM the MC unlocked, and the BS_Z (vertical) seismometer readout jumped. It has stayed like this for the whole afternoon... The MC eventually caught its lock and we even locked XARM without any issue, but something happened in the 10-30 Hz band. We will keep an eye on it during the evening...
Tue May 25 08:45:33 2021 [Update]
At approximately 02:30 UTC (so 07:30 PM yesterday) the 10-30 Hz seismic step dropped back... It lasted 5 hours, mostly causing BS motion along Z (vertical) as seen by the minute trend data in Attachment 2. Could the MM library have been shaking? Was the IFO snoring during its afternoon nap?
Attachment 1: Screenshot_from_2021-05-24_18-09-37.png
Attachment 2: 24and25_05_2021_PEM_BS_10_30.png
16155 Mon May 24 08:38:26 2021 ChubUpdateElectronics18-bit AI, 16-bit AI and 16-bit AA
- High priority units: 2x 18AI / 1x 16AI / 3x 16AA
All six are reworked and on the electronics workbench. The rest should be ready by the end of the week.
Chub
16154 Sun May 23 18:28:54 2021 JonUpdateCDSOpto-isolator for c1auxey
The new HAM-A coil drivers have a single DB9 connector for all the binary inputs. This requires that the dewhitening switching signals from the fast system be spliced with the coil enable signals from c1auxey. There is a common return for all the binary inputs. To avoid directly connecting the grounds of the two systems, I have looked for a suitable opto-isolator for the c1auxey signals.
I best option I found is the Ocean Controls KTD-258, a 4-channel, DIN-rail-mounted opto-isolator supporting input/output voltages of up to 30 V DC. It is an active device and can be powered using the same 15 V supply as is currently powering both the Acromags and excitation. I ordered one unit to be trialed in c1auxey. If this is found to be good solution, we will order more for the upgrades of c1auxex and c1susaux, as required for compatibility with the new suspension electronics.
16153 Fri May 21 14:36:20 2021 Ian MacMillanUpdateCDSSUS simPlant model
The plant transfer function of the pendulum in the s domain is:
$H(s)=\frac{x(s)}{F(s)}=\frac{1}{ms^2+m\frac{\omega_0}{Q}s+m\omega_0^2}$
Using Foton to make a plot of the TF needed and using m=40kg, w0=3Hz, and Q=50 (See attachment 1). It is easiest to enter the above filter using RPoly and saved it as Plant_V1
Attachment 1: Plant_Mod_TF.pdf
16152 Fri May 21 12:12:11 2021 PacoUpdateNoiseBudgetAUX PDH loop identification
[Anchal, Paco]
We went into 40m to identify where XARM PDH loop control elements are. We didn't touch anything, but this is to note we went in there twice at 10 AM and 11:10 AM.
16151 Fri May 21 09:44:52 2021 Ian MacMillanUpdateCDSSUS simPlant model
The transfer function given in the previous post was slightly incorrect the units did not make sense the new function is:
$\frac{x}{F}=\frac{1}{m\omega_0^2-m\omega^2+im\frac{\omega_0 \omega }{Q}}$
I have attached a quick derivation below in attachment 1
Attachment 1: Transfer_Function_of_Damped_Harmonic_Oscillator.pdf
16150 Fri May 21 00:15:33 2021 KojiUpdateElectronicsDC Power Strip delivered / stored
DC Power Strip Assemblies delivered and stored behind the Y arm tube (Attachment 1)
• 7x 18V Power Strip (Attachment 2)
• 7x 24V Power Strip (Attachment 2)
• 7x 18V/24V Sequencer / 14x Mounting Panel (Attachment 3)
• DC Power Cables 3ft, 6ft, 10ft (Attachments 4/5)
• DC Power Cables AWG12 Orange / Yellow (Attachments 6/7)
I also moved the spare 1U Chassis to the same place.
• 5+7+9 = 21x 1U Chassis (Attachments 8/9)
Attachment 1: P_20210520_233112.jpeg
Attachment 2: P_20210520_233123.jpg
Attachment 3: P_20210520_233207.jpg
Attachment 4: P_20210520_231542.jpg
Attachment 5: P_20210520_231815.jpg
Attachment 6: P_20210520_195318.jpg
Attachment 7: P_20210520_231644.jpg
Attachment 8: P_20210520_233203.jpg
Attachment 9: P_20210520_195204.jpg
16149 Fri May 21 00:05:45 2021 KojiUpdateSUSNew electronics: Sat Amp / Coil Drivers
11 new Satellite Amps were picked up from Downs. 7 more are coming from there. I have one spare unit I made. 1 sat amp has already been used at MC1.
We had 8 HAM-A coil drivers delivered from the assembling company. We also have two coil drivers delivered from Downs (Anchal tested)
Attachment 1: F3CDEF8D-4B1E-42CF-8EFC-EA1278C128EB_1_105_c.jpeg
16148 Thu May 20 16:56:21 2021 KojiUpdateElectronicsProduction version of the HV coil driver tested with KEPCO HV supplies
HP HV power supply ( HP6209 ) were returned to Downs
Attachment 1: P_20210520_154523_copy.jpg
16147 Thu May 20 10:35:57 2021 AnchalUpdateSUSIMC settings reverted
For future reference, the new settings can be upoaded from a script in the same directory. Run python /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/uploadNewConfigIMC.py from allegra.
Quote: There isn't any instruction here on how to upload the new settings
16146 Wed May 19 18:29:41 2021 KojiUpdateSUSMass Properties of SOS Assembly with 3"->2" Optic sleeve, in SI units
Calculation for the SOS POS/PIT/YAW resonant frequencies
- Nominal height gap between the CoM and the wire clamping point is 0.9mm (cf T970135)
- To have the similar res freq for the optic with the 3" metal sleeve is 1.0~1.1mm.
As the previous elog does not specify this number for the current configuration, we need to asses this value and the make the adjustment of the CoM height.
Attachment 1: SOS_resonant_freq.pdf
Attachment 2: SOS_resonant_freq.nb.zip
16145 Tue May 18 20:26:11 2021 ranaUpdatePSLHEPA speed raised
Fluke. Temp fluctuations are as usual, but the overall temperature is still lower. We ought to put some temperature sensors at the X & Y ends to see what's happening there too.
16144 Tue May 18 00:52:38 2021 ranaUpdatePSLHEPA speed raised
Looks like the fan lowered the temperature as expected. Need to get a few more days of data to see if its stabilized, or if that's just a fluke.
The vertical line at 00:00 UTC May 18 is about when I turned the fans up/on.
Attachment 1: Untitled.png
16143 Sat May 15 14:54:24 2021 gautamUpdateSUSIMC settings reverted
I want to work on the IFO this weekend, so I reverted the IMC suspension settings just now to what I know work (until the new settings are shown quantitatively to be superior). There isn't any instruction here on how to upload the new settings, so after my work, I will just restore from a burt-snapshot from before I changed settings.
In the process, I found something odd in the MC2 coil output filter banks. Attachment #1 shows what it it is today. This weird undetermined state of FM9 isn't great - I guess this flew under the radar because there isn't really any POS actuation on MC2. Where did the gain1 filter I installed go? Some foton filter file corruption? Eventually, we should migrate FM7,FM8-->FM9,FM10 but this isn't on my scope of things to do for today so I am just putting the gain1 filter back so as to have a clean FM9 switched on.
Quote: The old setting can be restored by running python3 /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/restoreOldConfigIMC.py from allegra or donatella.
I wrote the values from the c1mcs burt snapshot from ~1400 Saturday May 15, at ~1600 Sunday May 16. I believe this undoes all my changes to the IMC suspension settings.
Attachment 1: MC2coilOut.png
16142 Sat May 15 12:39:54 2021 gautamUpdatePSLNPRO tripped/switched off
The NPRO has been off since ~1AM this morning it looks like. Is this intentional? Can I turn it back on (or at least try to)? The interlock signal we are recording doesn't report getting tripped but I think this has been the case in the past too.
After getting the go ahead from Koji, I turned the NPRO back on, following the usual procedure of diode current ramping. PMC and IMC locked. Let's see if this was a one-off or something chronic.
Attachment 1: NPRO.png
16141 Fri May 14 17:45:05 2021 ranaUpdatePSLHEPA speed raised
The PSL was too hot, so I turned on the south HEPA on the PSL. The north one was on and the south one was off (or so slow as to be inaudible and no vibration, unlike the north one). Lets watch the trend over the weekend and see if the temperature comes down and if the PMC / WFS variations get less. Fri May 14 17:46:26 2021
16140 Fri May 14 03:29:50 2021 KojiUpdateElectronicsHV Driver noise test with the new HV power supply from Matsusada
I believe I did the identical test with the one in [40m ELOG 15786]. The + input of PA95 was shorted to the ground to exclude the noise from the bias input. The voltage noise at TP6 was measured with +/-300V supply by two HP6209 and two Matsusada R4G360.
With R4G360, the floor level was identical and 60Hz line peaks were less. It looks like R4G360 is cheap, easier and precise to handle, and sufficiently low noise.
Attachment 1: HV_Driver_PSD.pdf
16139 Thu May 13 19:38:54 2021 AnchalUpdateSUSMC1 Satellite Amplifier Debugged
[Anchal Koji]
Koji and I did a few tests with an OSEM emulator on the satellite amplifier box used for MC1 which is housed on 1X4. This sat box unit is S2100029 D1002812 that was recently characterized by me 15803. We found that the differential output driver chip AD8672ARZ U2A section for the UL PD was not working properly and had a fluctuating offset at no input current from the PD. This was the cause of the ordeal of the morning. The chip was replaced with a new one from our stock. The preliminary test with the OSEM emulator showed that the channel has the correct DC value.
In further testing of the board, we found that the channel 8 LED driver was not working properly. Although this channel is never used in our current cable convention, it might be used later in the future. In the quest of debugging the issue there, we replaced AD8672ARZ at U1 on channel 8. This did not solve the issue. So we opened the front panel and as we flipped the board, we found that the solder blob shorted the legs of the transistor Q1 2N3904. This was replaced and the test with the LED out and GND shorted indicated that the channel is now properly providing a constant current of 35mA (5V at the monitor out).
After the debugging, the UL channel became the least noisy among the OSEM channels! Mode cleaner was able to lock and maintain it.
We should redo the MC1 input matrix optimization and the coil balancing afterward as we did everything based on the noisy UL OSEM values.
Attachment 1: MC1_UL_Channel_Fixed.png
16138 Thu May 13 11:55:04 2021 Anchal, PacoUpdateSUSMC1 suspension misbehaving
We came in the morning with the following scene on the zita monitor:
The MC1 watchdog was tripped and seemed like IMC struggled all night with misconfigured WFS offsets. After restoring the MC1 WD, clearing the WFS offsets, and seeing the suspension damp, the MC caught lock. It wasn't long before the MC unlocked, and the MC1 WD tripped again.
We tried few things, not sure what order we tried them in:
• Letting suspension loops damp without the WFS switched on.
• Letting suspension loops damp with PSL shutter closed.
• Restoring old settings of MC suspension.
• Doing burt restore with command:
burtwb -f /opt/rtcds/caltech/c1/burt/autoburt/snapshots/2021/May/12/08:19/c1mcsepics.snap -l /tmp/controls_1210513_083437_0.write.log -o /tmp/controls_1210513_083437_0.nowrite.snap -v <
Nothing worked. We kept seeing that ULPD var on MC1 keeps showing kicks every few minutes which jolts the suspension loops. So we decided to record some data with PSL shutter closed and just suspension loops on. Then we switched off the loops and recorded some data with freely swinging optic. Even when optic was freely swinging, we could see impulses in the MC1 OSEM UL PD var which were completely uncorrelated with any seismic activity. Infact, last night was one fo teh calmer nights seismically speaking. See attachment 2 for the time series of OSEM PD variance. Red region is when the coil outputs were disabled.
### Inference:
• We think something is wrong with the UL OSEM of MC1.
• It seems to show false spikes of motion when there is no such spike present in any other OSEM PD or the seismic data itself.
• Currently, this is still the case. We sometimes get 10-20 min of "Good behavior" when everything works.
• But then the impulses start occuring again and overwhelmes the suspension loops and WFS loops.
• Note, that other optic in IMC behaved perfectly normally throughout this time.
• In the past, it seems like satellite box has been the culprit for such glitches.
• We should look into debugging this as ifo is at standstill because of this issue.
• Earlier, Gautum would post Vmon signals of coil outputs only to show the glitches. We wanted to see if switching off the loops help, so we recorded OSEM PD this time.
• In hindsight, we should probably look at the OSEM sensor outputs directly too rather than looking at the variance data only. I can do this if people are interested in looking at that too.
• We've disabled the coil ouputs in MC1 and PSL shutter is off.
Edit Thu May 13 14:47:25 2021 :
Added OSEM Sensor timeseries data on the plots as well. The UL OSEM sensor data is the only channel which is jumping hapazardly (even during free swinging time) and varying by +/- 30. Other sensors only show some noise around a stable position as should be the case for a freely suspended optic.
Attachment 2: MC1_Glitches_Invest2.pdf
16137 Wed May 12 17:06:52 2021 JordanUpdateSUSMass Properties of SOS Assembly with 3"->2" Optic sleeve, in SI units
Here are the mass properties for the only the test mass assembly (optic, 3" ring, and wire block). (Updated with g*mm^2)
Quote: No, this is the property of the suspension assembly. The mass says 10kg Could you do the same for the testmass assembly (only the suspended part)? The units are good, but I expect that the values will be small. I want to keep at least three significant digits.
Attachment 1: Moments_of_Inertia_SI.PNG
16136 Wed May 12 16:53:59 2021 KojiUpdateSUSMass Properties of SOS Assembly with 3"->2" Optic sleeve, in SI units
No, this is the property of the suspension assembly. The mass says 10kg
Could you do the same for the testmass assembly (only the suspended part)? The units are good, but I expect that the values will be small. I want to keep at least three significant digits.
16135 Wed May 12 14:23:20 2021 JordanUpdateSUSMass Properties of SOS Assembly with 3"->2" Optic sleeve, in SI units
Attachment 1: Moments_of_Inertia_SI.PNG
16134 Wed May 12 13:06:15 2021 Ian MacMillanUpdateCDSSUS simPlant model
Working with Chris, we decided that it is probably better to use a simple filter module as a controller before we make the model more complicated. I will use the plant model that I have already made (see attachment 1 of this). then attach a single control filter module to that: as seen in attachment 1. because I only want to work with one degree of freedom (position) I will average the four outputs which should give me the position. Then by feeding the same signal to all four inputs I should isolate one degree of freedom while still using the premade plant model.
The model I made that is shown in attachment 2 is the model I made from the plan. And it complies! yay! I think there is a better way to do the average than the way I showed. And since the model is feeding back on itself I think I need to add a delay which Rana noted a while ago. I think it was a UnitDelay (see page 41 of RTS Developer’s Guide). So I will add that if we run into problems but I think there is enough going on that it might already be delayed.
Since our model (x1sup_isolated.mdl) has compiled we can open the medm screens for it. I provide a procedure below which is based on Jon's post
[First start the cymac and have the model running]
$cd docker-cymac$ eval $(./env_cymac) $ medm -x /opt/rtcds/tst/x1/medm/x1sup_isolated/X1SUP_ISOLATED_GDS_TP.adl
To see a list of all medm screens use:
$cd docker-cymac$ ./login_cymac # cd /opt/rtcds/tst/x1/medm/x1sup_isolated # ls
Some of the other useful ones are:
adl screen Description X1SUP_ISOLATED_Control_Module.adl This is the control filter module shown in attachment 2 at the top in the center. This module will represent the control system. X1SUP_ISOLATED_C1_SUS_SINGLE_PLANT_Plant_POS_Mod.adl See attachment 4. This screen shows the POS plant filter module that will be filled by the filter representing the transfer function of a damped harmonic oscillator: $\frac{x}{F}=\frac{\omega_0^2}{\omega_0^2+i\frac{\omega_0 \omega}{Q}-\omega^2}$ THIS TF HAS BEEN UPDATED SEE NEXT POST
The first one of these screens that are of interest to us (shown in attachment 3) is the X1SUP_ISOLATED_GDS_TP.adl screen, which is the CDS runtime diagnostics screen. This screen tells us "the success/fail state of the model and all its dependencies." I am still figuring out these screens and the best guide is T1100625.
The next step is taking some data and seeing if I can see the position damp over time. To do this I need to:
1. Edit the plant filter for the model and add the correct filter.
2. Figure out a filter for the control system and add it to that. (I can leave it as is to see what the plant is doing)
3. Take some position data to show that the plant is a harmonic oscillator and is damping away.
Attachment 1: SimplePlant_SingleContr.pdf
Attachment 2: x1sup_isolated.pdf
Attachment 3: X1SUP_ISOLATED_GDS_TP.png
Attachment 4: X1SUP_ISOLATED_C1_SUS_SINGLE_PLANT_Plant_POS_Mod.png
16133 Wed May 12 11:45:13 2021 Anchal, PacoSummarySUSNew IMC Settings are miserable
We picked a few parameters from 40m summary page and plotted them to see the effect of new settings. On April 4th, old settings were present. On April 28th (16091), new input matrices and F2A filters were uploaded but suspension gains remained the same. On May 5th (16120), we uploaded new (higher) suspension gains. We chose Sundays on UTC so that it lies on weekends for us. Most probably nobody entered 40m and it was calmer in the institute as well.
• On MC_F spectrum, we see that that noise decreased in 0.3-0.7 Hz but there is more noise from 1-1.5 Hz.
• On MC_TRANS_QPD, we see that both TRANS PIT and YAW signals were almost twice as noisy.
• On MC_REFL_DC too, we see that the noise during the locked state seems to be higher in the new configuration.
We can download data and plot comparisons ourselves and maybe calculate the spectrums of MC_TRANS_PIT/YAW and MC_REFL_DC when IMC was locked. But we want to know if anyone has better ways of characterizing the settings that we should know of before we get into this large data handling which might be time-consuming. From this preliminary 40m summary page plots, maybe it is already clear that we should go back to old settings. Awaiting orders.
Attachment 1: MC_F_Comparison.pdf
Attachment 2: MC_TRANS_QPD_Comparison.pdf
Attachment 3: IMC_REFL_DC_Comparison.pdf
16132 Wed May 12 10:53:20 2021 Anchal, PacoUpdateLSCPSL-IMC PDH Loop and XARM PDH Loop diagram
Attached is the control loop diagram when main laser is locked to IMC and a single arm (XARM) is locked to the transmitted light from IMC.
Quote: I'll post a clean loop diagram soon to make this loopology clearer.
Attachment 1: IMC_SingleArm.pdf
16131 Tue May 11 17:43:09 2021 KojiUpdateCDSI/O Chassis Assembly
Did you match the local PC time with the GPS time?
16130 Tue May 11 16:29:55 2021 JonUpdateCDSI/O Chassis Assembly
Quote:
### Timing system set-up
The next step is to provide the 65 kHz clock signals from the timing fanout via LC optical fiber. I overlooked the fact that an SPX optical transceiver is required to interface the fiber to the timing slave board. These were not provided with the timing slaves we received. The timing slaves require a particular type of transceiver, 100base-FX/OC-3, which we did not have on hand. (For future reference, there is a handy list of compatible transceivers in E080541, p. 14.) I placed a Digikey order for two Finisar FTLF1217P2BTL, which should arrive within two days.
Today I brought and installed the new optical transceivers (Finisar FTLF1217P2BTL) for the two timing slaves. The timing slaves appear to phase-lock to the clocking signal from the master fanout. A few seconds after each timing slave is powered on, its status LED begins steadily blinking at 1 Hz, just as in the existing 40m systems.
However, some other timing issue remains unresolved. When the IOP model is started (on either FE), the DACKILL watchdog appears to start in a tripped state. Then after a few minutes of running, the TIM and ADC indicators go down as well. This makes me suspect the sample clocks are not really phase-locked. However, the models do start up with no error messages. Will continue to debug...
Attachment 1: Screen_Shot_2021-05-11_at_3.03.42_PM.png
16129 Mon May 10 18:19:12 2021 Anchal, PacoUpdateLSCIMC WFS noise contribution in arm cavity length noise, Corrections
A few corrections to last analysis:
• The first plot was not IMC frequency noise but actually MC_F noise budget.
• MC_F is frequency noise in the IMC FSS loop just before the error point where IMC length and laser frequency is compared.
• So, MC_F (in high loop gain frequency region upto 10kHz) is simply the quadrature noise sum of free running laser noise and IMC length noise.
• Between 1Hz to 100 Hz, normally MC_F is dominated by free running laser noise but when we injected enough angular noise in WFS loops, due to Angle to length coupling, it made IMC length noise large enough in 25-30 Hz band that we started seeing a bump in MC_F.
• So this bump in MC_F is mostly the noise due to Angle to length coupling and hence can be used to calculate how much Angular noise normally goes into length noise.
• In the remaining plots, MC_F was plotted with conversion into arm length units but this was wrong. MC_F gets suppressed by IMC FSS open loop gain before reaching to arm cavities and hence is hardly present there.
• The IMC length noise however is not suppresed until after the error point in the loop. So the length noise (in units of Hz calculated in the first step above) travels through the arm cavity loop.
• We already measured the transfer function from ITMX length actuation to XARM OUT, so we know how this length noise shows up at XARM OUT.
• So in the remaining plots, we plot contribution of IMC angular noise in the arm cavities. Note that the factor of 3 business still needed to be done to match the appearance of noise in XARM_OUT and YARM_OUT signal from the IMC angular noise injection.
• I'll post a clean loop diagram soon to make this loopology clearer.
Attachment 1: ArmCavNoiseContributions.pdf
16128 Mon May 10 10:57:54 2021 Anchal, PacoSummaryCalibrationUsing ALS beatnote for calibration, test
### Test details:
• We locked both arms and opened the shutter for Yend green laser.
• After toggling the shutter on.off, we got a TEM00 mode of green laser locked to YARM.
• We then cleared the phase Y history by clicking "CLEAR PHASE Y HISTROY" on C1LSC_ALS.adl (opened from sitemap > ALS > ALS).
• We sent excitation signal at ITMY_LSC_EXC using awggui at 43Hz, 77Hz and 57Hz.
• We measured the power spectrum and coherence of C1:ALS-BEATY_FINE_PHASE_OUT_HZ_DQ and C1:SUS-ITMY_LSC_OUT_DQ.
• The BEATY_FINE_PHASE_OUT_HZ is already calibrated in Hz. This we assume is done by multip[lying the VCO slope in Hz/cts to the error signal of the digital PLL loop that tracks the phase of beatnote.
• We calibrated C1:SUS-ITMY_LSC_OUT_DQ by multiplying with
$\dpi{150} \large 3 \times \frac{2.44 \, nm/cts}{f^2} \times \frac{c}{1064\,nm \times 37.79\, m} = \frac{54.77}{f^2} kHz/cts$ where f is in Hz.
The 2.44/f2 nm/cts is taken from 13984.
• We added the calibration as Poles/zeros option in diaggui using gain=54.577e3 and poles as "0, 0".
• We found that ITMY_LSC_OUT_DQ calibration matches well at 57Hz but overshoots (80 vs 40) at 43 Hz and undershoots (50 vs 80) at 77Hz.
### Conclusions:
• If we had DRFPMI locked, we could have used the beatnote spectrum as independent measurement of arm lengths to calibrate the interferometer output.
• We can also use the beatnote to confirm or correct the ITM actuator calibrations. Maybe shape is not exactly 1/f2 unless we did something wrong here or the PLL bandwidth is too short.
Attachment 1: BeatY_ITMY_CalibrationAt57Hz.pdf
Attachment 2: BeatY_ITMY_CalibrationAt43Hz.pdf
Attachment 3: BeatY_ITMY_CalibrationAt77Hz.pdf
16127 Fri May 7 11:54:02 2021 Anchal, PacoUpdateLSCIMC WFS noise contribution in arm cavity length noise
We today measured the calibration factors for XARM_OUT and YARM_OUT in nm/cts and replotted our results from 16117 with the correct frequency dependence.
Calibration of XARM_OUT and YARM_OUT
• We took transfer function measurement between ITMX/Y_LSC_OUT and X/YARM_OUT. See attachment 1 and 2
• For ITMX/Y_LSC_OUT we took calibration factor of 3*2.44/f2 nm/cts from 13984. Note that we used the factor of 3 here as Gautum has explicitly written that the calibration cts are DAC cts at COIL outputs and there is a digital gain of 3 applied at all coil output gains in ITMX and ITMY that we confirmed.
• This gave us callibration factors of XARM_OUT: 1.724/f2 nm/cts , and YARM_OUT: 4.901/f2 nm/cts. Note the frrequency dependence here.
• We used the region from 70-80 Hz for calculating the calibration factor as it showed the most coherence in measurement.
Inferring noise contributions to arm cavities:
• For converting IMC frequency noise to length noise, we used conversion factor given by $\lambda L / c$ where L is 37.79m and lambda is wavelength of light.
• For converting MC1 ASCPIT OUT cts data to frequency noise contributed to IMC, we sent 100,000 amplitude bandlimited noise from 25 Hz to 30 Hz at C1:IOO-MC1_PIT_EXC. This noise was seen at both MC_F and ETMX/Y_LSC_OUT channels. We used the noise level at 29 Hz to get a calibration for MC1_ASCPIT_OUT to IMC Frequency in Hz/cts. This measurement was done in 16117.
• Once we got the calibration above, we measured MC1_ASCPIT_OUT power spectrum without any excitaiton and multiplied it with the calibration factor.
• Attachment 3 is our main result.
• Page 1 shows the calculation of Angle to Length coupling by reading off noise injects in MC1_ASCPIT_OUT in MC_F. This came out to 10.906/f2 kHz/cts.
• Page 2-3 show the injected noise in X arm cavity length units. Page 3 is the zoomed version to show the matching of the 2 different routes of calibration.
• BUT, we needed to remove that factor of 3 we incorporated earlier to make them match.
• Page 4 shows the noise contribution of IMC angular noise in XARM cavity.
• Page 5-6 is similar to 2-3 but for YARM. The red note above applied here too! So the factor of 3 needed to be removed in both places.
• Page 7 shows the noise contribution of IMC angular noise in XARM cavity.
### Conclusions:
• IMC Angular noise contribution to arm cavities is atleast 3 orders of magnitude lower then total armc cavity noise measured.
Edit Mon May 10 18:31:52 2021
See corrections in 16129.
Attachment 1: ITMX-XARM_TF.pdf
Attachment 2: ITMY-YARM_TF.pdf
Attachment 3: ArmCavNoiseContributions.pdf
16126 Fri May 7 11:19:29 2021 Ian MacMillanUpdateCDSSUS simPlant model
I copied c1scx.mdl to the docker to attach to the plant using the commands:
$ssh nodus.ligo.caltech.edu [Enter Password]$ cd opt/rtcds/userapps/release/isc/c1/models/simPlant scp c1scx.mdl controls@c1sim:/home/controls/docker-cymac/userapps 16125 Thu May 6 16:13:39 2021 AnchalSummaryIMCAngular actuation calibration for IMC mirrors Here's my first attempt at doing angular actuation calibration for IMC mirrors using the method descibed in /users/OLD/kakeru/oplev_calibration/oplev.pdf by Kakeru Takahashi. The key is to see how much is the cavity mode misaligned from the input mode of beam as the mirrors are moved along PIT or YAW. There two possible kinds of mismatch: • Parallel displacement of cavity mode axis: • In this kind of mismatch, the cavity mode is simply away from input mode by some distance $\dpi{150} \large \beta$. • This results in transmitted power reduction by the gaussian factor of $\dpi{150} \large e^{-\frac{\beta^2}{w_0^2}}$ where $\dpi{150} \large w_0$ is the beam waist of input mode (or nominal waist of cavity). • For some mismatch, we can approximate this to $\dpi{150} \large 1 - \frac{\beta^2}{w_0^2}$ • Angular mismatch of cavity mode axis: • The cavity mode axis could be tilted with respect to input mode by some angle $\dpi{150} \large \alpha$. • This results in transmitted power reduction by the gaussian factor of $\dpi{150} \large e^{- \frac{\alpha^2}{\alpha_0^2}}$ where $\dpi{150} \large \alpha_0$ is the beam divergence angle of input mode (or nominal waist of cavity) given by $\dpi{150} \large \frac{\lambda}{\pi w_0}$. • or some mismatch, we can approximate this to $\dpi{150} \large 1 - \frac{\alpha^2}{\alpha_0^2}$ Kakeru's document goes through cases for linear cavities. For IMC, the mode mismatches are bit different. Here's my take on them: ### MC2: • MC2 is the easiest case in IMC as it is similar to the end mirror for linear cavity with plane input mirror (the case of which is already studies in sec 0.3.2 in Kaker's document). • PIT: • When MC2 PIT is changed, the cavity mode simple shifts upwards (or downwards) to the point where the normal from MC2 is horizontal. • Since, MC1 and MC3 are plane mirrors, they support this mode just with a different beam spot position, shifted up by $\dpi{150} \large (R-L)\theta$. • So the mismatch is simple of the first kind. In my calculations however, I counted the two beams on MC1 and MC3 separately, so the factor is twice as much. • Calling the coefficient to square of angular change $\dpi{150} \large \eta$, we get: $\dpi{150} \large \eta_{._{2P}} = \frac{2 (R-L)^2}{w_0^2}$ • Here, R is radius of curvature of MC1/3 taken as 21.21m and L is the cavity half-length of IMC taken as 13.545417m. • YAW: • For YAW, the case is bit more complicated. Similar to PIT, there will be a horizontal shift of the cavity mode by $\dpi{150} \large (R-L)\theta$. • But since the MC1 and MC3 mirrors will be fixed, the angle of the two beams from MC1 and MC3 to MC2 will have to shift by $\dpi{150} \large \theta/2$. • So the overall coefficient would be: $\dpi{150} \large \eta_{._{2Y}} = \frac{2 (R-L)^2}{w_0^2} + \frac{2}{4\alpha_0^2}$ • The factor of 4 in denominator of seconf term on RHS above comes because only half og angular actuation is felt per arm. The factor of 2 in numerator for for the 2 arms. ### MC1/3: • First, let's establish that the case of MC1 and MC3 is same as the cavity mode must change identically when the two mirrors are moved similarly. • YAW: • By tilting MC1 by $\dpi{150} \large \theta$, we increase the YAW angle between MC1 and MC3 by $\dpi{150} \large \theta$. • Beam spot on both MC1 and MC3 moves by $\dpi{150} \large (R-L)\theta$. • The beam angles on both arms get shifted by $\dpi{150} \large \theta/2$. • So the overall coefficient would be: $\dpi{150} \large \eta_{._{13Y}} = \frac{2 (R-L)^2}{w_0^2} + \frac{2}{4\alpha_0^2}$ • Note, this coefficient is same as MC2, so it si equivalent to moving teh MC2 by same angle in YAW. • PIT: • I'm not very sure of my caluculation here (hence presented last). • Changing PIT on MC1, should change the beam spot on MC2 but not on MC3. Only the angle of MC3-MC2 arm should deflect by $\dpi{150} \large \theta/2$. • While on MC1, the beam spot must change by $\dpi{150} \large (R-L)\theta/2$ and the MC1-MC2 arm should deflect by $\dpi{150} \large \theta/2$. • So the overall coefficient would be: $\dpi{150} \large \eta_{._{13P}} = \frac{(R-L)^2}{4 w_0^2} + \frac{2}{4\alpha_0^2}$ ### Test procedure: • We first clicked on MC WFS Relief (on C1:IOO-WFS_MASTER) to reduce the large offsets accumulated on WFS outputs. This script took 10 minutes and reduced the offsets to single digits and IMC remained locked throughout the process. • Then we switched off the WFS to freeze the outputs. • We moved the MC#_PIT/YAW_OFFSET up and down and measured the C1:IOO-MC_TRANS_SUMFILT_OUT channel as an indicater of IMC mode matching. • Attachement 1 are the 6 measurements and there fits to a parabola. Fitting code and plots are thanks to Paco. • We got the curvature of parabolas $\dpi{150} \large \gamma$from these fits in units of 1/cts^2. • The $\dpi{150} \large \eta$ coefficients calculated above are in units of 1/rad^2. • We got the angular actuation calibration from these offsets to physical angular dispalcement in units of rad/cts by $\dpi{150} \large \sqrt{\gamma / \eta}$. • AC calibration: • I parked the offset to some value to get to the side of parabola. I was trying to reduce transmission from about 14000 cts to 10000-12000 cts in each case. • Sent excitation using MC#_ASCPIT/YAW_EXC using awg at 77 Hz and 10000 cts. • Measured the cts on transmission channel at 77 Hz. Divided it by 2 and by the dc offset provided. And divided by the amplitude of cts set in excitation. This gives $\dpi{150} \large \eta_{ac}$ analogous to above DC case. • Then angular actuation calibration at 77 Hz from these offsets to physical angular dispalcement in units of rad/cts by $\dpi{150} \large \sqrt{\gamma/\eta_{ac}}$. • Following are the results: Optic Act Calibration factor at DC [µrad/cts] Calibration factor at 77 Hz [prad/cts] MC1 PIT 7.931+/-0.029 906.99 MC1 YAW 5.22+/-0.04 382.42 MC2 PIT 13.53+/-0.08 869.01 MC2 YAW 14.41+/-0.21 206.67 MC3 PIT 10.088+/-0.026 331.83 MC3 YAW 9.75+/-0.05 838.44 • Note these values are measured with the new settings in effect from 16120. If these are changed, this measurement will not be valid anymore. • I believe the small values for MC1 actuation have to do with the fact that coil output gains for MC1 are very weird and small, which limit the actuation strength. • TAbove the resonance frequencies, they will fall off by 1/f^2 from the DC value. I've confirmed that the above numbers are of correct order of magnitude atleast. • Please let me know if you can point out any mistakes in the calculations above. Attachment 1: IMC_Ang_Act_Cal_Kakeru_Tests.pdf 16124 Thu May 6 16:13:24 2021 Ian MacMillanUpdateCDSSUS simPlant model When using mdl2adl I was getting the error: cd /home/controls/mdl2adl $./mdl2adl x1sup.mdl error: set$site and $ifo environment variables to set these in the terminal use the following commands: $ export site=tst \$ export ifo=x1
On most of the systems, there is a script that automatically runs when a terminal is opened that sets these but that hasn't been added here so you must run these commands every time you open the terminal when you are using mdl2adl.
16122 Wed May 5 15:11:54 2021 Ian MacMillanUpdateCDSSUS simPlant model
I added the IPC parts back to the plant model so that should be done now. It looks like this again here.
I can't seem to find the control model which should look like this. When I open sus_single_control.mdl, it just shows the C1_SUS_SINGLE_PLANT.mdl model. Which should not be the case.
16121 Wed May 5 13:05:07 2021 ChubUpdateGeneralchassis delivery from De Leone
Assembled chassis from De Leone placed in the 40 Meter Lab, along the west wall and under the display pedestal table. The leftover parts are in smaller Really Useful boxes, also on the parts pile along the west wall.
Attachment 1: de_leone_del_5-5-21.jpg
16120 Wed May 5 09:04:47 2021 AnchalUpdateSUSNew IMC Suspension Damping Gains uploaded for long term testing
We have uploaded the new damping gains on all the suspensions of IMC. This completes changing all the configuration to as mentioned in 16066 and 16072. The old setting can be restored by running python3 /users/anchal/20210505_IMC_Tuned_SUS_with_Gains/restoreOldConfigIMC.py from allegra or donatella.
GPSTIME: 1304265872
UTC May 05, 2021 16:04:14 UTC Central May 05, 2021 11:04:14 CDT Pacific May 05, 2021 09:04:14 PDT
16119 Tue May 4 19:14:43 2021 YehonathanUpdateGeneralOSEMs from KAGRA
I put the box containing the untested OSEMs from KAGRA near the south flow bench on the floor.
16118 Tue May 4 14:55:38 2021 Ian MacMillanUpdateCDSSUS simPlant model
After a helpful meeting with Jon, we realized that I have somehow corrupted the sitemap file. So I am going to use the code Chris wrote to regenerate it.
Also, I am going to connect the controller using the IPC parts. The error that I was having before had to do with the IPC parts not being connected properly.
ELOG V3.1.3- | 2022-05-27 06:27:23 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 51, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5874457359313965, "perplexity": 3708.916074349816}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-21/segments/1652662636717.74/warc/CC-MAIN-20220527050925-20220527080925-00197.warc.gz"} |
https://calculus-do.com/%E5%BE%AE%E7%A7%AF%E5%88%86%E7%BD%91%E8%AF%BE%E4%BB%A3%E4%BF%AE%E6%9E%81%E9%99%90%E7%90%86%E8%AE%BA%E4%BB%A3%E5%86%99limit-theory%E4%BB%A3%E8%80%83math407-stability-of-limit-theorems/ | # 微积分网课代修|极限理论代写Limit Theory代考|MATH407 Stability of Limit Theorems
• 单变量微积分
• 多变量微积分
• 傅里叶级数
• 黎曼积分
• ODE
• 微分学
## 微积分网课代修|极限理论代写Limit Theory代考|Stability of Limit Theorems
In this chapter we present some first results on the stability of limit theorems taken from [28] (see also [79, 100]). More precisely, we derive simple sufficient conditions for distributional limit theorems to be mixing.
To this end, let $Z_{n}$ be $(\mathcal{Z}, \mathcal{C})$-valued random variables for some measurable space $(\mathcal{Z}, \mathcal{C})$ and $f_{n}:\left(\mathcal{Z}^{n}, \mathcal{C}^{n}\right) \rightarrow(\mathcal{X}, \mathcal{B}(\mathcal{X}))$ measurable maps for every $n \in \mathbb{N}$, where we need a vector space structure for $\mathcal{X}$. So, let $\mathcal{X}$ be a polish topological vector space (like $\mathbb{R}^{d}, C([0, T])$ for $0<T<\infty$ or $\left.C\left(\mathbb{R}{+}\right)\right)$. Then there exists a translation invariant metric $d$ on $\mathcal{X}$ inducing the topology ([86], Theorem 1.6.1) so that $U{n}-V_{n} \rightarrow 0$ in probability for $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$-valued random variables $U_{n}$ and $V_{n}$ means $d\left(U_{n}, V_{n}\right)=$ $d\left(U_{n}-V_{n}, 0\right) \rightarrow 0$ in probability or, what is the same, $E\left(d\left(U_{n}, V_{n}\right) \wedge 1\right) \rightarrow 0$.
Furthermore, let $b_{n} \in \mathcal{X}$ and $a_{n} \in(0, \infty)$. We consider the $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$-valued random variables
$$X_{n}:=\frac{1}{a_{n}}\left(f_{n}\left(Z_{1}, \ldots, Z_{n}\right)-b_{n}\right)$$
for $n \in \mathbb{N}$ and assume $X_{n} \stackrel{d}{\rightarrow} v$ for some $v \in \mathcal{M}^{1}(\mathcal{X})$. The tail $\sigma$-field of $Z=\left(Z_{n}\right)$ is given by
$$\mathcal{T}{Z}=\bigcap{n=1}^{\infty} \sigma\left(Z_{k}, k \geq n\right) .$$
## 微积分网课代修|极限理论代写Limit Theory代考|Martingale Arrays and the Nesting Condition
For every $n \in \mathbb{N}$ let $\left(X_{n k}\right){1 \leq k \leq k{n}}$ be a sequence of real random variables defined on a probability space $(\Omega, \mathcal{F}, P)$, and let $\left(\mathcal{F}{n k}\right){0 \leq k \leq k_{n}}$ be a filtration in $\mathcal{F}$, i.e. $\mathcal{F}{n 0} \subset \mathcal{F}{n 1} \subset \cdots \subset \mathcal{F}{n k{n}} \subset \mathcal{F}$. The sequence $\left(\bar{X}{n k}\right){1 \leq k \leq k_{n}}$ is called adapted to the filtration $\left(\mathcal{F}{n k}\right){0 \leq k \leq k_{n}}$ if $X_{n k}$ is measurable w.r.t. $\mathcal{F}{n k}$ for all $1 \leq k \leq k{n}$. The triangular array $\left(X_{n k}\right){1 \leq k \leq k{n}, n \in \mathbb{N}}$ of random variables is called adapted to the triangular array $\left(\mathcal{F}{n k}\right){0 \leq k \leq k_{n}, n \in \mathbb{N}}$ of $\sigma$-fields if the row $\left(X_{n k}\right){1 \leq k \leq k{n}}$ is adapted to the filtration $\left(\mathcal{F}{n k}\right){0 \leq k \leq k_{n}}$ for every $n \in \mathbb{N}$. Not all of the following results of a more technical nature require the assumption of adaptedness. Therefore, we will always state explicitly where adapted arrays are considered.
An array $\left(X_{n k}\right){1 \leq k \leq k{n}, n \in \mathbb{N}}$ adapted to $\left(\mathcal{F}{n k}\right){0 \leq k \leq k_{n}, n \in \mathbb{N}}$ is called a martingale difference array if $X_{n k} \in \mathcal{L}^{1}(P)$ with $E\left(X_{n k} \mid \mathcal{F}{n, k-1}\right)=0$ for all $1 \leq k \leq k{n}$ and $n \in \mathbb{N}$, which means that for every $n \in \mathbb{N}$ the sequence $\left(X_{n k}\right){1 \leq k \leq k{n}}$ is a martingale difference sequence w.r.t. the filtration $\left(\mathcal{F}{n k}\right){0 \leq k \leq k_{n}}$. A martingale difference array is square integrable if $X_{n k} \in \mathcal{L}^{2}(P)$ for all $1 \leq k \leq k_{n}$ and $n \in \mathbb{N}$. Note that a martingale difference sequence or array is always by definition adapted to the $\sigma$-fields under consideration.
From now on, we assume that the sequence $\left(k_{n}\right){n \in \mathbb{N}}$ is nondecreasing with $k{n} \geq n$ for all $n \in \mathbb{N}$. We always set $\mathcal{F}{\infty}=\sigma\left(\bigcup{n=1}^{\infty} \mathcal{F}{n k{n}}\right)$. The array $\left(\mathcal{F}{n k}\right){0 \leq k \leq k_{n}, n \in \mathbb{N}}$ is called nested if $\mathcal{F}{n k} \subset \mathcal{F}{n+1, k}$ holds for all $n \in \mathbb{N}$ and $0 \leq k \leq k_{n}$. The subtle role of this property of the $\sigma$-fields in stable martingale central limit theorems will become evident in the sequel.
Our basic stable martingale central limit theorem reads as follows.
## 微积分网课代修|极限理论代写Limit Theory代考| Stability of Limit Theorems
$f_{n}:\left(\mathcal{Z}^{n}, \mathcal{C}^{n}\right) \rightarrow(\mathcal{X}, \mathcal{B}(\mathcal{X}))$ 每个可测量的地图 $n \in \mathbb{N}$ ,其中我们需要一个向量空 间结构 $\mathcal{X}$. 所以,让 $\mathcal{X}$ 是一个抛光拓扑向量空间(如 $\mathbb{R}^{d}, C([0, T])$ 为 $0<T<\infty$ 或 $C(\mathbb{R}+))$. 然后存在一个平移不变度量 $d$ 上 $\mathcal{X}$ 诱导拓扑([86],定理1.6.1),以便
$U n-V_{n} \rightarrow 0$ 在概率中 $(\mathcal{X}, \mathcal{B}(\mathcal{X}))$-值随机变量 $U_{n}$ 和 $V_{n}$ 方法 $d\left(U_{n}, V_{n}\right)=$
$d\left(U_{n}-V_{n}, 0\right) \rightarrow 0$ 在概率上,或者,什么是相同的, $E\left(d\left(U_{n}, V_{n}\right) \wedge 1\right) \rightarrow 0$.
$$X_{n}:=\frac{1}{a_{n}}\left(f_{n}\left(Z_{1}, \ldots, Z_{n}\right)-b_{n}\right)$$
$$\mathcal{T} Z=\bigcap n=1^{\infty} \sigma\left(Z_{k}, k \geq n\right) .$$
## 微积分网课代修|极限理论代写Limit Theory代考| Martingale Arrays and the Nesting Condition
$\mathcal{F} n 0 \subset \mathcal{F} n 1 \subset \cdots \subset \mathcal{F} n k n \subset \mathcal{F}$. 序列 $(\bar{X} n k) 1 \leq k \leq k_{n}$ 称为适应过滤
$(\mathcal{F} n k) 0 \leq k \leq k_{n}$ 如果 $X_{n k}$ 是可测量的 w.r.t. $\mathcal{F} n k$ 面向所有人 $1 \leq k \leq k n$.三角形 阵列 $\left(X_{n k}\right) 1 \leq k \leq k n, n \in \mathbb{N}$ 的随机变量称为适应三角形数组
$(\mathcal{F} n k) 0 \leq k \leq k_{n}, n \in \mathbb{N}$ 之 $\sigma$-字段,如果行 $\left(X_{n k}\right) 1 \leq k \leq k n$ 适应过滤
$(\mathcal{F} n k) 0 \leq k \leq k_{n}$ 对于每个 $n \in \mathbb{N}$. 并非所有技术性更强的以下结果都需要假设适应 性。因此,我们将始终明确说明考虑自适应数组的位置。
$X_{n k} \in \mathcal{L}^{2}(P)$ 面向所有人 $1 \leq k \leq k_{n}$ 和 $n \in \mathbb{N}$. 请注意,根据定义,马丁格尔差分 序列或数组始终适应于 $\sigma$-正在考虑中的字段。 | 2022-09-24 19:09:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.956028401851654, "perplexity": 472.5804139245781}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-40/segments/1664030333455.97/warc/CC-MAIN-20220924182740-20220924212740-00649.warc.gz"} |
http://rtx.civil.sharif.edu/RAbrahamsonEtAl2013IntensityModel.html | # Abrahamson et al 2013 Intensity Model
## Class Name
• RAbrahamsonEtAl2013IntensityModel
## Location in Objects Pane
• Models > Model > Hazard > Earthquake > Intensity > Abrahamson et al 2013 Intensity
## Model Description
### Model Form
• This model produces the spectral acceleration, peak ground acceleration, or peak ground velocity at specified locations for given magnitude and hypocenter location of several earthquake sources as input, based on the Abrahamson et al. (2014) attenuation relation.
• Depth, width and some of other geometric specifications of each earthquake are predicted based on the formulas recommended in the Abrahamson et al. (2014), if they are unknown.
• No
## Properties
### Object Name
• Name of the object in Rt
• Allowable characters are upper-case and lower-case letters, numbers, and underscore (“_”).
• The name is unique and case-sensitive.
### Display Output
• Determines whether the model is allowed to print messages in the Output Pane.
### Magnitude List
• Magnitudes of various earthquake sources
### Depth To Top Of Rupture List
• Depth to top of rupture of various earthquake sources (Use 999 in case of unknown)
### Down Dip Rupture Width List
• Down-dip width of rupture of various earthquake sources (Use 999 in case of unknown)
### Horizontal Distance From Top Of Rupture List
• Horizontal distance from top edge of rupture plane measured perpendicular to strike of various earthquake sources (Use 999 in case of unknown. In this case, it is assumed equal to hypocenter location distance)
### Ry 0 Horizontal Distance Off End Of Rup List
• Horizontal distance off the end of rupture plane measured parallel to strike of various earthquake sources (Use 999 in case of unknown)
### Rupture Distance List
• Closest distance from rupture plane of various earthquake sources (Use 999 in case of unknown. In this case, it is calculated based on Pythagorean theorem using the depth of top edge of rupture and hypocenter location distance)
### Dip Angles
• Dip angles of various earthquake sources (You can use approximate angles based on the fault type, provided in the literature)
### Vs Flags
• A helping variable. Enter 1, if you have measured, or 0, if you have inferred shear-wave velocity. You can enter once for all locations
### Hypocentre Location List
• Hypocenter locations of earthquake sources, which automatically will yield the radius $${R}$$ to the various output locations
### Centroid RJB List
• Centroid Joyner-Boore distance (see Wooddell and Abrahamson (2012)) (Use 999 in case of unknown. You can enter once for various earthquake sources)
### Fault Types
• Fault mechanism that can be either NormalSlip, StrikeSlip, or ReverseSlip
### Wall Types
• Identifies the location to be whether on hanging-wall or footing-wall side of the rupture that can be either HangingWall, FootingWall, or Unknown. You can enter once for all locations. Using Unknown omits hanging-wall effects
### Regions
• Region of the location of the building which can be either California, Japan, China, Taiwan, or Other (If Other is used, the model will use California's specifications)
### Shock Type
• Shock type that can be either MainShock or AfterShock
### Response Type
• Type of the response than can be either $${S_a}$$, $${PGA}$$, or $${PGV}$$
### Period List
• List of the natural periods at which the intensity is evaluated
### Epsilon Uncertainty List
• Intra-event model error, typically a standard normal random variable
### Eta Uncertainty List
• Inter-event model error, typically a standard normal random variable
### Structure Location List
• List of the locations where the intensity will be computed at them (the output will give as many intensity values as the locations provided here)
### Shear Wave Velocity List
• List of the shear wave velocities at specified locations
### Depth To Vs 1 List
• List of the depths of Vs = 1 km/s boundry of the specified locations (Use 999 as input in case of unknown depth)
## Output
• Earthquake intensities (as many as the locations provided in the input)
• The output is an automatically generated generic response object, which takes the object name of the model plus “Response”. | 2019-05-22 19:16:36 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5533210635185242, "perplexity": 5687.436761213558}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232256948.48/warc/CC-MAIN-20190522183240-20190522205240-00329.warc.gz"} |
https://mathoverflow.net/questions/221710/is-there-a-galois-theory-for-mathbb-r-geq-0 | # Is there a Galois theory for $\mathbb R_{\geq 0}$?
The broadest version of my question is the following:
Where can I find algebrogeometric abstract nonsense that handles "rings" and "fields" like $$\mathbb R_{\geq 0}$$ in which there is no subtraction?
The reason I think that such a theory might have been developed is that I know that such "rings" appear in algebraic geometry: in tropical geometry, in the study of semialgebraic sets, and so on.
Some specific questions that I hope such literature might answer (but probably this is too optimistic):
Is there an "etale site" over $$\mathbb R_{\geq 0}$$?
What is the "etale homotopy type" or "Galois group" of $$\mathbb R_{\geq 0}$$?
I'm under the vague impression that $$\mathbb R_{\geq 0}$$ is an approximation of the absolute field $$\mathbb F_1$$. I also speculate that $$\mathbb R_{\geq 0} \hookrightarrow \mathbb R$$ is "etale" but not a cover, so that $$\mathbb R_{\geq 0}$$ has multiple components in etale topology. This is in spite of the fact that $$\mathbb R_{\geq 0}$$ seems like a "field".
• Look at F1-geometry in particular the blue schemes of Oliver Lorscheid. – user1688 Oct 24 '15 at 14:32
• The "rings" and "fields" are in fact semirings and semifields. – Ilya Bogdanov Oct 25 '15 at 11:23
The question seems to be about algebraic geometry of commutative semirings (these are rings without subtraction).
The theory by Toen-Vaquié (and others) in "Au-dessous de $Spec \mathbb{Z}$" develops (functorial) algebraic geometry relative to any bicomplete closed symmetric monoidal category. The usual case is $\mathcal{V}=(\mathsf{Ab},\otimes)$, since then $\mathsf{CommMon}(\mathcal{V}) \cong \mathsf{CommRing}$. Now take $\mathcal{V}=(\mathsf{CommMon},\otimes)$ (the tensor product is defined and constructed in the same fashion as the one of abelian groups), then $\mathsf{CommMon}(\mathcal{V})\cong \mathsf{CommSemiRing}$. The category of schemes relative to $(\mathsf{CommMon},\otimes)$ is called the category of $\mathbb{N}$-schemes by Toen-Vaquié (since $(\mathbb{N},+,\cdot)$ is the initial commutative semiring). They also compare this category to the category of $\mathbb{F}_1$-schemes (which corresponds to $\mathcal{V}=(\mathsf{Set},\times)$, or rather $\mathcal{V}=(\mathsf{Set}_*,\wedge)$).
Alternatively, we may view $\mathbb{N}$ as a generalized commutative ring (commutative algebraic monad on $\mathsf{Set}$) and hence use Durov's notion of a generalized scheme ("New Approach to Arakelov Geometry"), which has a more geometric flavor; these are locally ringed spaces which are locally isomorphic to affine schemes, which in turn consist of prime ideals and a structure sheaf defined by localizations. Thus, the construction of the category of $\mathbb{N}$-schemes is very similar to the construction of the category of $\mathbb{Z}$-schemes, except that we ignore the non-existing subtraction.
A commutative semiring is simple (i.e. has exactly two quotients) if and only if it is non-trivial and every non-zero element is invertible with respect to multiplication; such objects are sometimes called semifields. Thus, $\mathbb{R}_{\geq 0}$ is an example of a semifield.
I don't know how much has been done on étale morphisms and Galois theory in relative algebraic geometry, specifically for $\mathbb{N}$-schemes (hopefully, others can write something about this), but the notion of smooth morphisms is the main topic in Florian Marty's thesis "Des Ouverts Zariski et des morphisms formalement lisse en géométrie relative", and in the introduction he hints at a definition of étale morphisms as smooth morphisms with a smooth diagonal.
Here is another idea (which should work at least for $\mathbb{N}$-schemes): The étale morphisms are precisely the flat unramified morphisms. The notion of flatness is defined as usual for affine relative schemes and then by glueing to arbitrary relative schemes. Unramifiedness may be characterized as being locally of finite presentation, which has a common functorial characterization, and the vanishing of the quasi-coherent sheaf of differentials, which may be constructed as usual via glueing.
I don't know anything specifically about the étale $\mathbb{R}_{\geq 0}$-schemes, though. My hope is that my "answer" bumps your question and invites others go give proper answers, which I would enjoy to read, too.
Added: $\Omega^1_{\mathbb{R}/\mathbb{R}_{\geq 0}}$ is indeed trivial. Since we have the commutative semiring presentation$$\mathbb{R} \cong \mathbb{R}_{\geq 0}[X]/\langle X^2=1,X+1=0\rangle,$$we obtain the $\mathbb{R}$-semimodule presentation $$\Omega^1_{\mathbb{R}/\mathbb{R}_{\geq 0}} \cong \mathbb{R} \cdot d(X) / \langle X \cdot d(X)=0,d(X)+0=0 \rangle = 0.$$ However, $\mathbb{R}_{\geq 0} \to \mathbb{R}$ is not flat: Notice that $\mathbb{R}$ is the $\mathbb{R}_{\geq 0}$-semimodule freely generated by $1$ and $-1$ modulo the relation $1+(-1)=0$. Thus, if $M$ is some $\mathbb{R}_{\geq 0}$-semimodule, then $M \otimes_{\mathbb{R}_{\geq 0}} \mathbb{R}$ is the quotient of $M \oplus M$ by the congruence generated by $\{(m,m) : m \in M\}$. This is exactly the Grothendieck group $G(M)$ with the induced $\mathbb{R}$-action. Now if $M \to N$ is an injective homomorphism of $\mathbb{R}_{\geq 0}$-semimodules, it does not follow that $G(M) \to G(N)$ is an injective homomorphism of $\mathbb{R}$-modules. For example, let $M=\mathbb{R}_{\geq 0}$ and $N$ be the $\mathbb{R}_{\geq 0}$-semimodule freely generated by two elements $x,y$ with $x+y=y$. Then $M \to N$, $1 \mapsto x$ is injective, but $G(M) \to G(N)$ vanishes. | 2020-10-22 08:58:33 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 8, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9381688237190247, "perplexity": 359.86186324916395}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-45/segments/1603107879362.3/warc/CC-MAIN-20201022082653-20201022112653-00194.warc.gz"} |
http://cs70.wikidot.com/other:midterm-2-cheat-sheet | Midterm two Cheat Sheat
# Topics likely to be covered:
Error Correcting Codes and Secret Sharing
A polynomial $P(x)$ of degree at most $d$ can be uniquely identified by knowing it's value at $d+1$ points.
There is a unique polynomial of degree at most $d$ such that it hits all $d+1$ points.
Berlekamp and Welch: If the e1, e2 …. ek packets are corrupted so that the received points are r1, r2…rk we can define E(x) = (x-e1)(x-e2)…(x-ek) and Q(x) = P(x)*E(x). Then we use Q(i) = ri * E(i) to solve for Q(x) and E(x) whose quotient is P(x). Q is of degree n + k - 1 and E is degree k with the first coefficient equal to 1.
Graphs
Every graph that is connected except for vertices of degree 0 has an Eulerian Tour if every vertex has an even degree.
sum deg(v) = 2|E| where v exists in V
Probability
If A and B are independent it is true that $P(A\cap B) = P(A)P(B)$ and (if both $P(A) \neq 0$, $P(B) \neq 0$) also $P(A|B) = P(A)$.
As long as both A and B have nonzero probability is always true that $P(A\cap B) = P(A|B)P(B)= P(B|A)P(A)$
Counting
There are $n!$ ways to order $n$ objects.
Ways of Counting
• With or Without Replacement
• Order does or doesn't matter
1. With replacement, order matters
1. Example: $2^n$ ways of flipping a {H, T} coin n times
2. Without replacement, order matters
1. Example: 52 cards, draw 5
2. 52 51 50 49 48
3. $\frac{n!}{(n-k)!}$
3. Without Replacement, Order doesn't matter
1. Example: 52 cards, Queen of Hearts, King of Spades, Jack of Diamonds, Ace of Clubs, 10 of diamonds
2. ${n \choose k} = \frac{n!}{(n-k)!k!}$
4. With replacement, Order doesn't matter:
1. Example: 3 types of veggies, pick 5 from an unlimited number
2. Think: Balls and Bins
3. Balls: number of servings we want to make (n balls)
4. Bins: different types of veggies we have (k bins)
5. ${n - 1 + k \choose n}$
1. ** Also equivalent to: $n-1+k \choose k-1$
page revision: 21, last edited: 12 Nov 2013 10:07 | 2018-12-16 06:54:19 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7662248611450195, "perplexity": 1550.769663920846}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2018-51/segments/1544376827281.64/warc/CC-MAIN-20181216051636-20181216073636-00368.warc.gz"} |
https://cs.stackexchange.com/questions/84643/how-to-prove-that-matrix-multiplication-of-two-2x2-matrices-cant-be-done-in-les | # How to prove that matrix multiplication of two 2x2 matrices can't be done in less than 7 multiplications?
In Strassen's matrix multiplication, we state one strange ( at least to me) fact that matrix multiplication of two 2 x 2 takes 7 multiplication.
Question : How to prove that it is impossible to multiply two 2 x 2 matrices in 6 multiplications?
Please note that matrices are over integers.
• There are other matrix multiplication algorithms that can be faster. This web article from a Stanford CME 323 class provides details about Strassen's algorithm, Matrix multiplication: Strassen's algorithm. There is a Wikipedia topic, Strassen algorithm that goes into details and has links to additional information. – Richard Chambers Nov 29 '17 at 14:00
• @RichardChambers Notice that Strassen’s algorithm has $7$ multiplications. It seems plausible to me that this lower bound is true. – Stella Biderman Nov 29 '17 at 14:10
• As worded this question is wrong. There are plenty of matrices that can be multiplied with $6$ multiplications. You mean to ask for a proof that, in the worst case, it takes 7 aka there exists some matrix that requires 7 – Stella Biderman Nov 29 '17 at 14:15
• @StellaBiderman yes I saw that Strassen's has 7 multiplications. I did not look at the other, faster and algorithms with a lower complexity. From what I can tell they use the same sub-matrix approach as Strassen's but I am not sure. I was just adding some additional information about Strassen's specifically. – Richard Chambers Nov 29 '17 at 14:16
• There seems to be something missing from your question. I can easily give an algorithm which can multiply at least some matrices with 0 multiplications. There's probably a constraint that you are not mentioning. – Jörg W Mittag Nov 29 '17 at 19:50
Strassen showed that the exponent of matrix multiplication is the same as the exponent of the tensor rank of matrix multiplication tensors: the algebraic complexity of $n\times n$ matrix multiplication is $O(n^\alpha)$ iff the tensor rank of $\langle n,n,n \rangle$ (the matrix multiplication tensor corresponding to the multiplication of two $n\times n$ matrices) is $O(n^\alpha)$. Strassen's algorithm uses the easy direction to deduce an $O(n^{\log_27})$ from the upper bound $R(\langle 2,2,2 \rangle) \leq 7$.
Winograd's result implies that $R(\langle 2,2,2 \rangle)=7$. Landsberg showed that the border rank of $\langle 2,2,2 \rangle$ is also 7, and Bläser et al. recently extended that to support rank and border support rank. Border rank and support rank are weaker (=smaller) notions of rank that have been used (in the case of border rank) or proposed (in the case of support rank) in the fast matrix multiplication algorithms. | 2020-04-04 16:27:32 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8660784363746643, "perplexity": 385.9888186515211}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-16/segments/1585370524043.56/warc/CC-MAIN-20200404134723-20200404164723-00175.warc.gz"} |
https://www.physicsforums.com/threads/could-you-travel-using-nuclear-bombs.967757/ | # B Could you travel using nuclear bombs?
#### Audun Nilsen
I have two scenarios.
One; imagine that, at the same time as the engine of your ship is ignited, there is also a device detonated inside the ship. If timed correctly, the difference between the two would equal out, right? For now, there's one huge problem and that's making sure the aim is correct, because, if you collide with an asteroid or something, you'd just spin out of control, but if you could measure the trajectory perfectly, there would be nothing in the way of that, if space, truly, is a vacuum which could accommodate fast travel. Next, there's the slowing down. Once the blast has burned out, you would naturally slow down, but the process of slowing down an object from something in the area of the speed of light would take immensely long, but you can calculate how long you go by adjusting the strength of the device according to distance. If there is some kind of resistance in space, fine, and if not, you can fire an engine in the nose end of the ship and initiate the stopping sequence that way. If there is only miniscule mass the stopping force would have to be on a par with the start force, and, again, if you detonate outside and inside at exaclty the same time, then you won't have a problem, right? Relative to ... the sides of the "ship", your pod hasn't moved at all. In short, dead accuracy is the way to go.
Two; imagine a huge construction with several hundred stories of tunnels with rails in them, which all contained a smaller pod except the one in which you have the astronauts, and imagine also that the pods of this Matryoshka ship has springs in both ends and, when the ship speeds up or slows down, the springs lessen the impact, and reduce G-forces to an acceptable minimum. The challenge as I see it here, is not just accuracy, but sheer size, since the force of, even, a jet engine in a vacuum would be immense and, so, you'd need 100s, maybe 1.000s of pods. I'm sure that having vacuum inside the tunnels would help, since that would lessen the immediate impact, but it would make the length of the tunnels a great deal bigger.
Related General Physics News on Phys.org
#### davenn
Gold Member
Hi there
welcome to PF
Next, there's the slowing down.
do you know why the spaceship would slow down ?
but the process of slowing down an object from something in the area of the speed of light would take immensely long,
did you know that nothing with mass can travel at the speed of light ?
#### Audun Nilsen
Even if you didn't have enough plasma to stop a spaceship, you could use athmospheres of all kinds, going tipsy-turvy around a solar system until you make it; that would be a virtuose feat of accuracy, but theoretically feasible; and
what's to say that light isn't an expression of mass, just like electricity? Electricity is electrons jumping from atom to atom, so why would it be any different with light? Isn't that what you school buds call photons?
#### anorlunda
Mentor
This is not quite what the OP described, but it is lots of fun.
https://en.wikipedia.org/wiki/Project_Orion_(nuclear_propulsion) said:
Project Orion was a study of a spacecraft intended to be directly propelled by a series of explosions of atomic bombs behind the craft (nuclear pulse propulsion). Early versions of this vehicle were proposed to take off from the ground with significant associated nuclear fallout; later versions were presented for use only in space. Six tests were launched[
#### Audun Nilsen
Wicked!
I did some calculations :P and found that you could get more than the speed of light with a Tsar Bomb:
50 megatons is 4.8 B stronger than jet fuel.
Staff Emeritus
I did some calculations :P and found that you could get more than the speed of light with a Tsar Bomb
Unfortunately, not correct calculations.
#### anorlunda
Mentor
Truth is stranger than fiction.
Also from that Project Orion Wikipedia article.
A test that was similar to the test of a pusher plate occurred as an accidental side effect of a nuclear containment test called "Pascal-B" conducted on 27 August 1957.[34] The test's experimental designer Dr. Robert Brownlee performed a highly approximate calculation that suggested that the low-yield nuclear explosive would accelerate the massive (900 kg) steel capping plate to six times escape velocity.[35] The plate was never found but Dr. Brownlee believes that the plate never left the atmosphere, for example it could have been vaporized by compression heating of the atmosphere due to its high speed. The calculated velocity was interesting enough that the crew trained a high-speed camera on the plate which, unfortunately, only appeared in one frame indicating a very high lower bound for the speed of the plate.
#### Audun Nilsen
Feel free to endulge us. I used an energy converter I found online.
#### Audun Nilsen
It said the theoretical top speed was about 10%. That's what I came to when I compared the energy density of a reactor with jet fuel.
#### hutchphd
This is not quite what the OP described, but it is lots of fun.
Great footage and Freeman Dyson and TheodoreTaylor:
#### davenn
Gold Member
what's to say that light isn't an expression of mass, just like electricity?
It isn't .... please don't make things up .... the fundamentals of Electromagnetic radiation is very well known
Isn't that what you school buds call photons?
Photons are quantum packets of energy they are not little bullets/particles shooting along
Staff Emeritus
Feel free to endulge us.
I don't know what you mean.
I used an energy converter I found online.
And. they wouldn't put it on the internet if it wasn't true?
You have to decide if you are here to ask us things or tell us things. You seem to be wanting to tell us things. Unfortunately, those things are not correct.
#### Janus
Staff Emeritus
Gold Member
Wicked!
I did some calculations :P and found that you could get more than the speed of light with a Tsar Bomb:
50 megatons is 4.8 B stronger than jet fuel.
I'm afraid your calculations have misled you.
50 mt = 2.1e17 joules. Using the Newtonian expression for kinetic energy
$$E = \frac{mv^2}{2}$$
And assuming a mass of 1000 kg ( about that of a small car)
we can rearrange and solve for v, which would be the velocity one could reach using 2.1e17 joules of energy to accelerate 1000 kg up to v
The answer comes out to be just a bit under 7% of the speed of light.
So the Tsar bomb, even if you could convert all it its explosive energy to kinetic energy in the 1000 kg mass, comes way short of the speed of light.
Not only that, but the Newtonian expression for KE turns out to only give a reasonably accurate answer for values of v that are small when compared to the speed of light.
The more accurate equation is
$$KE =mc^2 \left ( \frac {1}{\sqrt{1-\frac{v^2}{c^2}}} -1 \right )$$
With this equation, as v approaches the speed of light (c), KE increases without limit. The upshot is that, no matter how much energy you have available, you will always come up short of getting any mass up even up to the speed of light.
#### anorlunda
Mentor
The OP will not be returning to this thread,.
"Could you travel using nuclear bombs?"
### Physics Forums Values
We Value Quality
• Topics based on mainstream science
• Proper English grammar and spelling
We Value Civility
• Positive and compassionate attitudes
• Patience while debating
We Value Productivity
• Disciplined to remain on-topic
• Recognition of own weaknesses
• Solo and co-op problem solving | 2019-04-24 12:00:37 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49080929160118103, "perplexity": 1330.4351725939728}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-18/segments/1555578641278.80/warc/CC-MAIN-20190424114453-20190424140453-00074.warc.gz"} |
https://hurna.io/academy/mathematics/fraction.html | Whatever we do in life, we manipulate fractions on a daily basis. This is done for example when we cut a pie, read a percentage on a label or play LEGO.
Fractions are also the first abstract concept taught in mathematics and more often what confuses students with mathematics. Although these may seem tricky at first, it becomes much simpler once we visualize what a fraction is and how it works.
### Taking a Fraction
The number above, at the numerator, represents the number of shares we take.
The number below, at the denominator, represents the number of total shares.
If we take 1/4 (a quarter) of a pie, we cut it into 4 equal parts and take 1 share.
Other concrete examples:
- If in a class of 16 students 9 are girls, then 9/16 of them are girls.
- To advance 3/4 of a meter, we divide it into 4 equal steps (of 25cm) then advance of 3 times this distance (75cm).
- To take 3/5 of a number, we divide by 5 and multiply by 3.
## Two main rules
We never divide by 0!
If we multiply or divide the numerator and the denominator of a fraction by the same number: we get an equivalent fraction (equal).
## Objectives
We may want to visualize and manipulate fractions first. Let's use the H.urna Explorer
$$\frac{a}{a}=1$$ $$\require{cancel} \frac{a \cdot b}{a \cdot c} = \frac{\cancel{a} \cdot b}{\cancel{a} \cdot c} = \frac{b}{c}$$ $$\frac{a}{b} \cdot c = \frac{a \cdot c}{b} = \frac{c}{b} \cdot a$$ $$\frac{-a}{b}=-\frac{a}{b}$$ $$\frac{1}{\frac{b}{c}}=\frac{c}{b}$$ $$\frac{a}{1}=a$$ $$\frac{-a}{-b}=\frac{a}{b}$$ $$\frac{a}{-b}=-\frac{a}{b}$$ $$\frac{a}{\frac{b}{c}}=\frac{a\cdot c}{b}$$ $$\frac{\frac{b}{c}}{a}=\frac{b}{c\cdot a}$$ $$\frac{0}{a}=0\:,\:a\ne 0$$ | 2020-07-07 02:56:15 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.8859155178070068, "perplexity": 582.3168196446642}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-29/segments/1593655891640.22/warc/CC-MAIN-20200707013816-20200707043816-00272.warc.gz"} |
https://stats.stackexchange.com/questions/286552/why-have-a-tanh-layer-max-pooling-layer-and-then-another-tanh-layer | # Why have a tanh layer, max pooling layer and then another tanh layer
I have been reading a Facebook paper, read here, and am confused about certain features of the architecture. I do not understand why they have a tanh layer, max-pooling layer, and then another tanh layer. I understand what each layer does, but I don't understand why they have this sequence. Wouldn't this output basically give the same as just a max-pooling layer, and a tanh layer?
2. $\text{tanh}$
5. $\text{tanh}$ | 2020-06-03 23:03:44 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3976804316043854, "perplexity": 827.8563764965618}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2020-24/segments/1590347436466.95/warc/CC-MAIN-20200603210112-20200604000112-00562.warc.gz"} |
https://docs.lammps.org/mdi.html | $$\renewcommand{\AA}{\text{Å}}$$
# mdi command¶
## Syntax¶
mdi option args
• option = engine or plugin or connect or exit
engine args = zero or more keyword/args pairs
keywords = elements
elements args = N_1 N_2 ... N_ntypes
N_1,N_2,...N_ntypes = atomic number for each of ntypes LAMMPS atom types
plugin args = name keyword value keyword value ...
name = name of plugin library (e.g., lammps means a liblammps.so library will be loaded)
keyword/value pairs in any order, some are required, some are optional
keywords = mdi or infile or extra or command
mdi value = args passed to MDI for driver to operate with plugins (required)
infile value = filename the engine will read at start-up (optional)
extra value = aditional command-line args to pass to engine library when loaded (optional)
command value = a LAMMPS input script command to execute (required)
connect args = none
exit args = none
## Examples¶
mdi engine
mdi engine elements 13 29
mdi plugin lammps mdi "-role ENGINE -name lammps -method LINK" &
infile in.aimd.engine extra "-log log.aimd.engine.plugin" &
command "run 5"
mdi connect
mdi exit
## Description¶
This command implements operations within LAMMPS to use the MDI Library <https://molssi-mdi.github.io/MDI_Library/html/index.html> for coupling to other codes in a client/server protocol.
See the Howto MDI doc page for a discussion of all the different ways 2 or more codes can interact via MDI.
The examples/mdi directory has examples which use LAMMPS in 4 different modes: as a driver using an engine as either a stand-alone code or as a plugin, and as an engine operating as either a stand-alone code or as a plugin. The README file in that directory shows how to launch and couple codes for all the 4 usage modes, and so they communicate via the MDI library using either MPI or sockets.
The scripts in that directory illustrate the use of all the options for this command.
The engine option enables LAMMPS to act as an MDI engine (server), responding to requests from an MDI driver (client) code.
The plugin option enables LAMMPS to act as an MDI driver (client), and load the MDI engine (server) code as a library plugin. In this case the MDI engine is a library plugin. An MDI engine can also be a stand-alone code, launched separately from LAMMPS, in which case the mdi plugin command is not used.
The connect and exit options are only used when LAMMPS is acting as an MDI driver. As explained below, these options are normally not needed, except for a specific kind of use case.
The mdi engine command is used to make LAMMPS operate as an MDI engine. It is typically used in an input script after LAMMPS has setup the system it is going to model consistent with what the driver code expects. Depending on when the driver code tells the LAMMPS engine to exit, other commands can be executed after this command, but typically it is used at the end of a LAMMPS input script.
To act as an MDI engine operating as an MD code (or surrogate QM code), this is the list of standard MDI commands issued by a driver code which LAMMPS currently recognizes. Using standard commands defined by the MDI library means that a driver code can work interchangeably with LAMMPS or other MD codes or with QM codes which support the MDI standard. See more details about these commands in the MDI library documentation
These commands are valid at the @DEFAULT node defined by MDI. Commands that start with “>” mean the driver is sending information to LAMMPS. Commands that start with “<” are requests by the driver for LAMMPS to send it information. Commands that start with an alphabetic letter perform actions. Commands that start with “@” are MDI “node” commands, which are described further below.
Command name
Action
>CELL or <CELL
Send/request 3 simulation box edge vectors (9 values)
>CELL_DISPL or <CELL_DISPL
Send/request displacement of the simulation box from the origin (3 values)
>CHARGES or <CHARGES
Send/request charge on each atom (N values)
>COORDS or <COORDS
Send/request coordinates of each atom (3N values)
>ELEMENTS
Send elements (atomic numbers) for each atom (N values)
<ENERGY
Request total energy (potential + kinetic) of the system (1 value)
>FORCES or <FORCES
Send/request forces on each atom (3N values)
>+FORCES
Send forces to add to each atom (3N values)
<LABELS
Request string label of each atom (N values)
<MASSES
Request mass of each atom (N values)
MD
Perform an MD simulation for N timesteps (most recent >NSTEPS value)
OPTG
Perform an energy minimization to convergence (most recent >TOLERANCE values)
>NATOMS or <NATOMS
Sends/request number of atoms in the system (1 value)
>NSTEPS
Send number of timesteps for next MD dynamics run via MD command
<PE
Request potential energy of the system (1 value)
<STRESS
Request symmetric stress tensor (virial) of the system (9 values)
>TOLERANCE
Send 4 tolerance parameters for next MD minimization via OPTG command
>TYPES or <TYPES
Send/request the LAMMPS atom type for each atom (N values)
>VELOCITIES or <VELOCITIES
Send/request the velocity of each atom (3N values)
@INIT_MD or @INIT_OPTG
Driver tells LAMMPS to start single-step dynamics or minimization (see below)
EXIT
Driver tells LAMMPS to exit engine mode
Note
The <ENERGY, <FORCES, <PE, and <STRESS commands trigger LAMMPS to compute atomic interactions for the current configuration of atoms and size/shape of the simulation box. I.e. LAMMPS invokes its pair, bond, angle, …, kspace styles. If the driver is updating the atom coordinates and/or box incrementally (as in an MD simulation which the driver is managing), then the LAMMPS engine will do the same, and only occasionally trigger neighbor list builds. If the change in atom positions is large (since the previous >COORDS command), then LAMMPS will do a more expensive operation to migrate atoms to new processors as needed and re-neighbor. If the >NATOMS or >TYPES or >ELEMENTS commands have been sent (since the previous >COORDS command), then LAMMPS assumes the system is new and re-initializes an entirely new simulation.
Note
The >TYPES or >ELEMENTS commands are how the MDI driver tells the LAMMPS engine which LAMMPS atom type to assign to each atom. If both the MDI driver and the LAMMPS engine are initialized so that atom type values are consistent in both codes, then the >TYPES command can be used. If not, the optional elements keyword can be used to specify what element each LAMMPS atom type corresponds to. This is specified by the atomic number of the element (e.g., 13 for Al). An atomic number must be specified for each of the ntypes LAMMPS atom types. Ntypes is typically specified via the create_box command or in the data file read by the read_data command. In this has been done, the MDI driver can send an >ELEMENTS command to the LAMMPS driver with the atomic number of each atom.
The MD and OPTG commands perform an entire MD simulation or energy minimization (to convergence) with no communication from the driver until the simulation is complete. By contrast, the @INIT_MD and @INIT_OPTG commands allow the driver to communicate with the engine at each timestep of a dynamics run or iteration of a minimization; see more info below.
The MD command performs a simulation using the most recent >NSTEPS value. The OPTG command performs a minimization using the 4 convergence parameters from the most recent >TOLERANCE command. The 4 parameters sent are those used by the minimize command in LAMMPS: etol, ftol, maxiter, and maxeval.
The mdi engine command also implements the following custom MDI commands which are LAMMPS-specific. These commands are also valid at the @DEFAULT node defined by MDI:
• Command name
• Action
• >NBYTES
• Send # of datums in a subsequent command (1 value)
• >COMMAND
• Send a LAMMPS input script command as a string (Nbytes in length)
• >COMMANDS
• Send multiple LAMMPS input script commands as a newline-separated string (Nbytes in length)
• >INFILE
• Send filename of an input script to execute (filename Nbytes in length)
• <KE
• Request kinetic energy of the system (1 value)
Note that other custom commands can easily be added if these are not sufficient to support what a user-written driver code needs. Code to support new commands can be added to the MDI package within LAMMPS, specifically to the src/MDI/mdi_engine.cpp file.
MDI also defines a standard mechanism for the driver to request that an MD engine (LAMMPS) perform a dynamics simulation one step at a time or an energy minimization one iteration at a time. This is so that the driver can (optionally) communicate with LAMMPS at intermediate points of the timestep or iteration by issuing MDI node commands which start with “@”.
To tell LAMMPS to run dynamics in single-step mode, the driver sends as @INIT_MD command followed by the these commands. The driver can interact with LAMMPS at 3 node locations within each timestep: @COORDS, @FORCES, @ENDSTEP:
• Command name
• Action
• @COORDS
• Proceed to next @COORDS node = post-integrate location in LAMMPS timestep
• @FORCES
• Proceed to next @FORCES node = post-force location in LAMMPS timestep
• @ENDSTEP
• Proceed to next @ENDSTEP node = end-of-step location in LAMMPS timestep
• @DEFAULT
• EXIT
• Driver tells LAMMPS to exit the MD simulation and engine mode
To tell LAMMPS to run an energy minimization in single-iteration mode. The driver can interact with LAMMPS at 2 node locations within each iteration of the minimizer: @COORDS, @FORCES:
• Command name
• Action
• @COORDS
• Proceed to next @COORDS node = min-pre-force location in LAMMPS min iteration
• @FORCES
• Proceed to next @FORCES node = min-post-force location in LAMMPS min iteration
• @DEFAULT
• EXIT
• Driver tells LAMMPS to exit the minimization and engine mode
While LAMMPS is at its @COORDS node, the following standard MDI commands are supported, as documented above: >COORDS or <COORDS, @COORDS, @FORCES, @ENDSTEP, @DEFAULT, EXIT.
While LAMMPS is at its @FORCES node, the following standard MDI commands are supported, as documented above: <COORDS, <ENERGY, >FORCES or >+FORCES or <FORCES, <KE, <PE, <STRESS, @COORDS, @FORCES, @ENDSTEP, @DEFAULT, EXIT.
While LAMMPS is at its @ENDSTEP node, the following standard MDI commands are supported, as documented above: <ENERGY, <FORCES, <KE, <PE, <STRESS, @COORDS, @FORCES, @ENDSTEP, @DEFAULT, EXIT.
The mdi plugin command is used to make LAMMPS operate as an MDI driver which loads an MDI engine as a plugin library. It is typically used in an input script after LAMMPS has setup the system it is going to model consistent with the engine code.
The name argument specifies which plugin library to load. A name like “lammps” is converted to a filename liblammps.so. The path for where this file is located is specified by the -plugin_path switch within the -mdi command-line switch, which is specified when LAMMPS is launched. See the examples/mdi/README files for examples of how this is done.
The mdi keyword is required and is used as the -mdi argument passed to the library when it is launched. The -role and -method settings are required. The -name setting can be anything you choose. MDI drivers and engines can query their names to verify they are values they expect.
The infile keyword is optional. It sets the name of an input script which the engine will open and process. MDI will pass it as a command-line argument to the library when it is launched. The file typically contains settings that an MD or QM code will use for its calculations.
The extra keyword is optional. It contains additional command-line arguments which MDI will pass to the library when it is launched.
The command keyword is required. It specifies a LAMMPS input script command (as a single argument in quotes if it is multiple words). Once the plugin library is launched, LAMMPS will execute this command. Other previously-defined commands in the input script, such as the fix mdi/qm command, should perform MDI communication with the engine, while the specified command executes. Note that if command is an include command, then it could specify a filename with multiple LAMMPS commands.
Note
When the command is complete, LAMMPS will send an MDI EXIT command to the plugin engine and the plugin will be removed. The “mdi plugin” command will then exit and the next command (if any) in the LAMMPS input script will be processed. A subsequent “mdi plugin” command could then load the same or a different MDI plugin if desired.
The mdi connect and mdi exit commands are only used when LAMMPS is operating as an MDI driver. And when other LAMMPS command(s) which send MDI commands and associated data to/from the MDI engine are not able to initiate and terminate the connection to the engine code.
The only current MDI driver command in LAMMPS is the fix mdi/qm command. If it is only used once in an input script then it can initiate and terminate the connection, but if it is being issued multiple times (e.g., in a loop that issues a clear command), then it cannot initiate or terminate the connection multiple times. Instead, the mdi connect and mdi exit commands should be used outside the loop to initiate or terminate the connection.
See the examples/mdi/in.series.driver script for an example of how this is done. The LOOP in that script is reading a series of data file configurations and passing them to an MDI engine (e.g., quantum code) for energy and force evaluation. A clear command inside the loop wipes out the current system so a new one can be defined. This operation also destroys all fixes. So the fix mdi/qm command is issued once per loop iteration. Note that it includes a “connect no” option which disables the initiate/terminate logic within that fix.
## Restrictions¶
This command is part of the MDI package. It is only enabled if LAMMPS was built with that package. See the Build package page for more info.
To use LAMMPS in conjunction with other MDI-enabled atomistic codes, the units command should be used to specify real or metal units. This will ensure the correct unit conversions between LAMMPS and MDI units, which the other codes will also perform in their preferred units.
LAMMPS can also be used as an MDI engine in other unit choices it supports (e.g., lj), but then no unit conversion is performed.
None | 2023-03-27 03:51:01 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 1, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.3978990614414215, "perplexity": 6989.313268045241}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2023-14/segments/1679296946637.95/warc/CC-MAIN-20230327025922-20230327055922-00069.warc.gz"} |
https://tex.stackexchange.com/questions/479530/when-i-try-to-typeset-pdflatex-i-get-pdftex-error-on-my-output-console | # When I try to typeset pdflatex I get pdftex error on my output console
When I typeset my latex file with pdflatex, I get an output console with this error message:
``````This is pdfTeX, Version 3.14159265-2.6-1.40.16 (TeX Live 2015) (preloaded format=pdftex)
restricted \write18 enabled.
entering extended mode. restricted \write18 enabled. (./PaperOne-03-16-19.tex
! Undefined control sequence.
l.6 \documentclass
[12pt,leqno]{article}
``````
• You need to use the `pdflatex` (which loads the LaTeX format) executable, not `pdftex` (which loads pĺain). – Phelype Oleinik Mar 14 at 18:14
• Welcome to TeX.SE. Please tell us which front-end program -- TeXworks, TeXshop, TeXmaker, TeXstudio, etc -- you employ to edit and typeset your LaTeX documents. – Mico Mar 14 at 18:20
• you give us little in the way of clues looks like your using an old 2015 TeX Live on a file in a subdirectory ? and that file has ? five lines of directives before the class definition ? which is possibly a maths article If your running direct from command line it would help to know if you have a main.tex that's including a number of subfiles and what is FIRST error as shown in main.log plus any FIRST error if there is a /PaperOne….log – KJO Mar 14 at 18:56
• @supa-mega-ducky-momo-da-waffle you should use a code block not a quote for error messages, line endings are vital to understand tex errors. – David Carlisle Mar 14 at 22:29
• @SupaMegaDuckyMomodaWaffle tex error messages are not understandable if you lose the line breaks, eg the one above it is only by the linebreak after the command that shows which command is undefined., the first line shows the line read so far the linebreak is the point of the error and the indented second line is the rest of the source not yet read. if you show it using the quote format all information is lost. – David Carlisle Mar 15 at 1:17 | 2019-05-26 02:56:40 | {"extraction_info": {"found_math": false, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9338659048080444, "perplexity": 3379.011200705313}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 20, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-22/segments/1558232258621.77/warc/CC-MAIN-20190526025014-20190526051014-00314.warc.gz"} |
http://clay6.com/qa/4405/write-the-order-and-degree-of-the-diff-equation-bigg-large-frac-bigg-bigg-y | Browse Questions
# Write the order and degree of the diff. equation $\bigg(\large\frac{d^2y}{dx^2}\bigg)^{2/3}=\bigg(y+\frac{dy}{dx}\bigg)^{1/2}$
Can you answer this question? | 2017-04-28 02:35:11 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 2, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.9281112551689148, "perplexity": 706.2946399507929}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.3, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-17/segments/1492917122726.55/warc/CC-MAIN-20170423031202-00586-ip-10-145-167-34.ec2.internal.warc.gz"} |
https://derek-blog.com/2019/08/17/1/ | # 机器学习|Automatic Machine Learning
Posted by Derek on August 17, 2019
# 1. Example
Here is an example.
Automatic machine learning greatly facilitate applying machine learning: there is no need to select among algorithms, no need to tune hyperparameters, etc.
# 2. Hyperparameter Optimization
Let $\lambda$ be the hyperparameters of an ML algorithm $A$ with domain $\Lambda.$ Let $\mathcal{L}(A_\lambda, D_\mathrm{train}, D_\mathrm{valid})$ denotes the loss of $A,$ using hyperparameters $\lambda$ trained on $D_\mathrm{train}$ and evaluated on $D_\mathrm{valid}.$ The hyperparameter optimization problem is to find a hyperparameter configuration $\lambda^*$ that minimizes the loss $$\lambda^*=\arg\min_{\lambda\in\Lambda}\mathcal{L}(A_\lambda, D_\mathrm{train}, D_\mathrm{valid}).$$ There can be more than one $A$ then the loss is defined also over the choice among different algorithms. The problem is called combined algorithm selection and hyperparameter optimization (CASH).
## 2.1 Types of Hyperparameters
1. Continuous: learning rate, etc.
2. Integer: number of units, etc.
3. Categorical: algorithm$\in\lbrace\mathrm{SVM,\ RF,\ NN}\rbrace$
## 2.2 Conditional Hyperparameters
Conditional hyperparameters $B$ are only active if other hyperparameters $A$ are set a certain way.
## 2.3 Bayesian Optimization (BO)
We can use gradient descent to minimize over parameters but $\mathcal{L}$ is not differentiable over hyperparameters. Conventionally we use graduate student descent.
Now, we try to solve $\min\limits_\lambda\mathcal{L}(\lambda)$ without $\frac{\partial\mathcal{L}}{\partial\lambda}$ (since we cannot get and besides evaluate $\mathcal{L}(\lambda)$ is often expensive).
1. Randomly pick some $\lambda_i$'s and evaluate corresponding $\mathcal{L}(\lambda_i).$
2. Repeat until converged: (1) Fit a model $\mathcal{M}$ using $(\lambda_i, \mathcal{L}(\lambda_i))$'s; (2) Sample next $\lambda_i$'s based on $\mathcal{M}$ and calculate $\mathcal{L}(\lambda_i).$
Conventionally we use Gaussian process. But there is some challenges, for example, sometimes the noise is not Gaussian, GP cost is $O(N^3),$ etc.
Here are other methods: population-based methods (using a population of workers which collaborate and/or compete somehow, including genetic algorithms, particle swarm optimization) and multi-fidelity optimization (using cheap approximations of the blackbox).
However, they do not work well in high dimensionality and conditional hyperparameters. GP, GA, and PSO have their own hypermeters and there is no guarantee or evidence that they work in all cases. Besides, so far AutoML cannot learn features and give interpretation. | 2019-10-15 07:07:02 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.7901535630226135, "perplexity": 1556.4829340623482}, "config": {"markdown_headings": true, "markdown_code": false, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-43/segments/1570986657586.16/warc/CC-MAIN-20191015055525-20191015083025-00181.warc.gz"} |
http://arxiver.moonhats.com/tag/earth-and-planetary-astrophysics/ | # Disc truncation in embedded star clusters: Dynamical encounters versus face-on accretion [GA]
Observations indicate that the dispersal of protoplanetary discs in star clusters occurs on time scales of about 5 Myr. Several processes are thought to be responsible for this disc dispersal. Here we compare two of these processes: dynamical encounters and interaction with the interstellar medium, which includes face-on accretion and ram pressure stripping. We perform simulations of embedded star clusters with parameterisations for both processes to determine the environment in which either of these processes is dominant. We find that face-on accretion, including ram pressure stripping, is the dominant disc truncation process if the fraction of the total cluster mass in stars is $\lesssim 30\,\%$ regardless of the cluster mass and radius. Dynamical encounters require stellar densities $\gtrsim 10^4$ pc$^{-3}$ combined with a mass fraction in stars of $\approx 90\,\%$ to become the dominant process. Our results show that during the embedded phase of the cluster, the truncation of the discs is dominated by face-on accretion and dynamical encounters become dominant when the intra-cluster gas has been expelled. As a result of face-on accretion the protoplanetary discs become compact and their surface density increases. In contrast, dynamical encounters lead to discs that are less massive and remain larger.
T. Wijnen, O. Pols, F. Pelupessy, et. al.
Fri, 23 Jun 17
7/48
Comments: Accepted for publication in A&A, 14 pages, 8 figures, 1 table
# Gap and rings carved by vortices in protoplanetary dust [EPA]
Large-scale vortices in protoplanetary disks are thought to form and survive for long periods of time. Hence, they can significantly change the global disk evolution and particularly the distribution of the solid particles embedded in the gas, possibly explaining asymmetries and dust concentrations recently observed at sub-millimeter and millimeter wavelengths. We investigate the spatial distribution of dust grains using a simple model of protoplanetary disk hosted by a giant gaseous vortex. We explore the dependence of the results on grain size and deduce possible consequences and predictions for observations of the dust thermal emission at sub-millimeter and millimeter wavelengths. Global 2D simulations with a bi-fluid code are used to follow the evolution of a single population of solid particles aerodynamically coupled to the gas. Possible observational signatures of the dust thermal emission are obtained using simulators of ALMA and ngVLA observations. We find that a giant vortex not only captures dust grains with Stokes number St < 1 but can also affect the distribution of larger grains (with St ‘~’ 1) carving a gap associated to a ring composed of incompletely trapped particles. The results are presented for different particle size and associated to their possible signatures in disk observations. Gap clearing in the dust spatial distribution could be due to the interaction with a giant gaseous vortex and their associated spiral waves, without the gravitational assistance of a planet. Hence, strong dust concentrations at short sub-mm wavelengths associated with a gap and an irregular ring at longer mm and cm wavelengths could indicate the presence of an unseen gaseous vortex.
P. Barge, L. Ricci, C. Carilli, et. al.
Fri, 23 Jun 17
10/48
Comments: 11 pages, 11 figures, accepted for publication in A&A
# MOVES I. The evolving magnetic field of the planet-hosting star HD189733 [SSA]
HD189733 is an active K dwarf that is, with its transiting hot Jupiter, among the most studied exoplanetary systems. In this first paper of the Multiwavelength Observations of an eVaporating Exoplanet and its Star (MOVES) program, we present a 2-year monitoring of the large-scale magnetic field of HD189733. The magnetic maps are reconstructed for five epochs of observations, namely June-July 2013, August 2013, September 2013, September 2014, and July 2015, using Zeeman-Doppler Imaging. We show that the field evolves along the five epochs, with mean values of the total magnetic field of 36, 41, 42, 32 and 37 G, respectively. All epochs show a toroidally-dominated field. Using previously published data of Moutou et al. 2007 and Fares et al. 2010, we are able to study the evolution of the magnetic field over 9 years, one of the longest monitoring campaign for a given star. While the field evolved during the observed epochs, no polarity switch of the poles was observed. We calculate the stellar magnetic field value at the position of the planet using the Potential Field Source Surface extrapolation technique. We show that the planetary magnetic environment is not homogeneous over the orbit, and that it varies between observing epochs, due to the evolution of the stellar magnetic field. This result underlines the importance of contemporaneous multi-wavelength observations to characterise exoplanetary systems. Our reconstructed maps are a crucial input for the interpretation and modelling of our MOVES multi-wavelength observations.
R. Fares, V. Bourrier, A. Vidotto, et. al.
Fri, 23 Jun 17
29/48
Comments: 14 pages, 6 figures, accepted for publication in MNRAS
# Cloud formation in metal-rich atmospheres of hot super-Earths like 55 Cnc e and CoRot7b [EPA]
Clouds form in the atmospheres of planets where they can determine the observable spectra, the albedo and phase curves. Cloud properties are determined by the local thermodynamical and chemical conditions of an atmospheric gas. A retrieval of gas abundances requires a comprehension of the cloud formation mechanisms under varying chemical conditions. With the aim of studying cloud formation in metal rich atmospheres, we explore the possibility of clouds in evaporating exoplanets like CoRoT-7b and 55 Cnc e in comparison to a generic set of solar abundances and the metal-rich gas giant HD149026b. We assess the impact of metal-rich, non-solar element abundances on the gas-phase chemistry, and apply our kinetic, non-equilibrium cloud formation model to study cloud structures and their details. We provide an overview of global cloud properties in terms of material compositions, maximum particle formation rates, and average cloud particle sizes for various sets of rocky element abundances. Our results suggest that the conditions on 55 Cnc e and HD149026b should allow the formation of mineral clouds in their atmosphere. The high temperatures on some hot-rocky super-Earths (e.g. the day-side of Corot-7b) result in an ionised atmospheric gas and they prevent gas condensation, making cloud formation unlikely on its day-side.
G. Mahapatra, C. Helling and Y. Miguel
Fri, 23 Jun 17
31/48
Comments: 19 pages, accepted for publication in MNRAS
# The Detectability of Radio Auroral Emission from Proxima B [EPA]
Magnetically active stars possess stellar winds whose interaction with planetary magnetic fields produces radio auroral emission. We examine the detectability of radio auroral emission from Proxima b, the closest known exosolar planet orbiting our nearest neighboring star, Proxima Centauri. Using the Radiometric Bode’s Law, we estimate the radio flux produced by the interaction of Proxima Centauri’s stellar wind and Proxima b’s magnetosphere for different planetary magnetic field strengths. For plausible planetary masses, Proxima b produces 6-83 mJy of auroral radio flux at frequencies of 0.3-0.8 MHz for planetary magnetic field strengths of 1-3 B$_{\oplus}$. According to recent MHD models that vary the orbital parameters of the system, this emission is expected to be highly variable. This variability is due to large fluctuations in the size of Proxima b’s magnetosphere as it crosses the equatorial streamer regions of the dense stellar wind and high dynamic pressure. Using the MHD model of Garraffo et al. 2016 for the variation of the magnetosphere radius during the orbit, we estimate that the observed radio flux can vary nearly by an order of magnitude over the 11.2 day period of Proxima b. The detailed amplitude variation depends on the stellar wind, orbital, and planetary magnetic field parameters. We discuss observing strategies for proposed future space-based observatories to reach frequencies below the ionospheric cut off ($\sim 10$ MHz) as would be required to detect the signal we investigate.
B. Burkhart and A. Loeb
Thu, 22 Jun 17
7/68
# Escape and fractionation of volatiles and noble gases from Mars-sized planetary embryos and growing protoplanets [EPA]
Planetary embryos form protoplanets via mutual collisions, which can lead to the development of magma oceans. During their solidification, large amounts of the mantles’ volatile contents may be outgassed. The resulting H$_2$O/CO$_2$ dominated steam atmospheres may be lost efficiently via hydrodynamic escape due to the low gravity and the high stellar EUV luminosities. Protoplanets forming later from such degassed building blocks could therefore be drier than previously expected. We model the outgassing and subsequent hydrodynamic escape of steam atmospheres from such embryos. The efficient outflow of H drags along heavier species (O, CO$_2$, noble gases). The full range of possible EUV evolution tracks of a solar-mass star is taken into account to investigate the escape from Mars-sized embryos at different orbital distances. The envelopes are typically lost within a few to a few tens of Myr. Furthermore, we study the influence on protoplanetary evolution, exemplified by Venus. We investigate different early evolution scenarios and constrain realistic cases by comparing modeled noble gas isotope ratios with observations. Starting from solar values, consistent isotope ratios (Ne, Ar) can be found for different solar EUV histories, as well as assumptions about the initial atmosphere (either pure steam or a mixture with accreted H). Our results generally favor an early accretion scenario with a small amount of accreted H and a low-activity Sun, because in other cases too much CO$_2$ is lost during evolution, which is inconsistent with Venus’ present atmosphere. Important issues are likely the time at which the initial steam atmosphere is outgassed and/or the amount of CO$_2$ which may still be delivered at later evolutionary stages. A late accretion scenario can only reproduce present isotope ratios for a highly active young Sun, but then very massive steam atmospheres would be required.
P. Odert, H. Lammer, N. Erkaev, et. al.
Thu, 22 Jun 17
14/68
Comments: 53 pages, 7 figures, 3 tables, submitted to Icarus
# EPIC 228735255b – An eccentric 6.57 day transiting hot Jupiter in Virgo [EPA]
We present the discovery of EPIC 228735255b, a P= 6.57 days Jupiter-mass (M$P$=1.019$\pm$0.070 M${Jup}$) planet transiting a V=12.5 (G5-spectral type) star in an eccentric orbit (e=$0.120^{+0.056}{-0.046}$) detected using a combination of K2 photometry and ground-based observations. With a radius of 1.095$\pm$0.018R${Jup}$ the planet has a bulk density of 0.726$\pm$0.062$\rho_{Jup}$. The host star has a [Fe/H] of 0.12$\pm$0.045, and from the K2 light curve we find a rotation period for the star of 16.3$\pm$0.1 days. This discovery is the 9th hot Jupiter from K2 and highlights K2’s ability to detect transiting giant planets at periods slightly longer than traditional, ground-based surveys. This planet is slightly inflated, but much less than others with similar incident fluxes. These are of interest for investigating the inflation mechanism of hot Jupiters.
H. Giles, D. Bayliss, N. Espinoza, et. al.
Thu, 22 Jun 17
38/68
Comments: Submitted to MNRAS, 11 pages, 10 figures | 2017-06-25 07:19:56 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 0, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.5371553301811218, "perplexity": 3187.1443231823914}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2017-26/segments/1498128320443.31/warc/CC-MAIN-20170625064745-20170625084745-00247.warc.gz"} |
https://math.stackexchange.com/questions/3008485/summation-using-previous-sum-inside-the-sigma | Summation using previous sum inside the sigma
I'm doing a summation, but I need the current sum to be a part of the computation in the actual sigma. First I define $$n$$ and $$\delta\in\mathbb{N}$$:
$$\sum_{n=1}^{50}2n+\delta$$
where $$\delta$$ is the current sum for each $$n$$. So to avoid confusion the actual computation that I want is the following:
$$\begin{split} 2\cdot1+0&=2\\ 2\cdot2+2&=6\\ 2\cdot3+6&=12\\ 2\cdot4+12&=20\\ 2\cdot5+20&=30\\ 2\cdot6+30&=42\\ 2\cdot7+42&=56\\ 2\cdot8+56&=72\\ 2\cdot9+72&=90\\\text{etc..} \end{split}$$
You see the previous summation are the $$\delta$$ in the next summation. As a side note: After just plugging in these numbers in OEIS, I found out that they are the pronic numbers, $$a(n) = n\cdot(n+1).$$ However, I could have used a totally different example. My question asks wether there is a notation for the sigma summation to get $$\delta$$ regardless of the outcome?
One idea I have is to use two sigmas instead of one, but I do not want to overcomplicate things if there is a better way..
(Im not necessarily interested in the result of the above computation, only in the process of how the notation works on how to compute)
Is the following notation $$a_k = \displaystyle\sum_{n=1}^{k}2n+a_{n-1}$$ a valid one?
• es, that notation is perfectly valid - that's exactly how I've seen similar things written. – Deusovi Nov 26 '18 at 19:49
HINT
What you actually are defining is the sequence $$a_n$$ which satisfies the initial condition $$a_0=0$$ and recurrence relation $$a_n = a_{n-1} + 2n$$. Can you now solve it?
UPDATE
I am saying that if you define your function as I am suggesting, $$a_0 = \delta$$ and $$\begin{split} a_n &= \sum_{k=1}^n 2k + a_0 \\ &= 2\sum_{k=1}^n k \\ &= 2 \cdot \frac{n(n+1)}{2} \\ &= n(n+1). \end{split}$$
That the formula $$\sum_{k=1}^n k = \frac{n(n+1)}{2}$$ holds can be proven by noticing that the sum $$1 + 2 + 3 + \ldots + (n-2) +(n-1)+n$$ can be grouped into pairs, adding first and last elements together, then second and next-to-last, etc. Each such pair has a sum of $$n+1$$ and
• if $$n$$ is even, there are exactly $$n/2$$ such pairs, so the sum is $$(n+1)n/2$$
• if $$n$$ is odd, there are $$(n-1)/2$$ such pairs and the middle number is $$(n+1)/2$$, so the sum is $$\frac{(n+1)(n-1)}{2} + \frac{n+1}{2} = \frac{n+1}{2} \left[(n-1)+1\right] = \frac{n(n+1)}{2}.$$
• No. Im not used to mathematical notation. Are you saying that $$\sum_{n=1}^{50}a_i=2n+a_{i-1}$$ is allowed? And also is $a_{i-1}$ always initially 0? – Natural Number Guy Nov 21 '18 at 23:12
• oops. should be $a_n$ and $a_{n-1}$ instead of $i$. – Natural Number Guy Nov 21 '18 at 23:20
• @NaturalNumberGuy see update – gt6989b Nov 22 '18 at 5:05
Doubly, or triply, (or morely) applied sum symbols are just a normal occurence, & something to get used-to. | 2019-08-18 04:01:50 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 1, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 25, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.931027889251709, "perplexity": 318.897629509249}, "config": {"markdown_headings": false, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-35/segments/1566027313589.19/warc/CC-MAIN-20190818022816-20190818044816-00545.warc.gz"} |
https://www.springerprofessional.de/en/advanced-real-time-imaging-ii/16460534 | scroll identifier for mobile
main-content
“Real time” imaging techniques have assisted materials science studies especially for non-ambient environments. These techniques have never been collectively featured in a single venue. The book is an assembly of materials studies utilizing cutting edge real time imaging techniques, emphasizing the significance and impact of those techniques.
### A Novel Method of Surface Tension Test for Melt Slags Based on Hot Thermocouple Technique
Currently, the sessile drop method and the maximum bubble method are widely used to measure the surface tension of melt slags. For the above methods, the slow heating rate of furnace decreases the test efficiency. The hot thermocouple technique is widely used in the measurement of the high-temperature performance of molten slags. The slag will form droplets on the thermocouple due to the capillarity, and based on the rapid heating-up and Young’s equation, this study used the Single Hot Thermocouple Technique (SHTT) for the test of CaO–SiO2–Al2O3 (CSA) and the CaO–SiO2–Al2O3–MgO (CSAM) slag systems. The results show that the interfacial tension between the CSA, CSAM slags and the thermocouple is 2117.76–2131.89 mN/m at 1500 °C. The surface tension of the CSA, CSAM slags can be obtained by Young’s equation. Compared with the surface tension measured by the standard test, the SHTT surface tension test error is within 5%.
Zhe Wang, Guanghua Wen, Ping Tang, Zibing Hou
### In Situ Observation on the Interactions of Nonmetallic Inclusions on the Surface of Liquid Steel
The behaviors of several types of inclusions on the surface of liquid steel were examined using a confocal laser scanning microscope, CLSM . While alumina inclusions are likely to attract each other, agglomerate, and grow fast by the agglomeration, the other types of inclusions such as spinel and calcium aluminate rarely interact with each other. The analysis on the observation result of the CLSM indicates that the attraction force and the agglomeration play a significant role in the growth of alumina inclusions. Moreover, the behaviors of liquid calcium aluminate inclusions, which were intentionally injected, could be carefully observed. Their agglomeration took place only when they occasionally collide under the existence of external force, in spite of relatively low collision probability.
Youngjo Kang, Piotr R. Scheller, Du Sichen, Kazuki Morita
### In Situ Structural Variations of Individual Particles of an Al2O3-Supported Cu/Fe Spinel Oxygen Carrier During High-Temperature Oxidation and Reduction
Physical and chemical degradation of the oxygen-carrier materials during high-temperature redox exposures may affect the overall efficiency of the chemical looping process. Therefore, studying real-time physical and chemical changes in these materials when exposed to repeated redox cycles is essential for further development of chemical looping technology. In this work, the National Energy Technology Laboratory’s Al2O3-supported Cu/Fe spinel oxygen carrier, in the form of a CuO · Fe2O3 solid solution, was examined in situ during 3-h exposures to either oxidizing or reducing environments at 800 °C using a controlled atmosphere heating chamber in conjunction with a confocal scanning laser microscope. A compilation of the physical changes of individual particles using a controlled atmosphere confocal microscope and the microstructural/chemical changes documented using a scanning electron microscope will be discussed.
W. H. Harrison Nealley, Anna Nakano, Jinichiro Nakano, James P. Bennett
### Surface Tension of High Temperature Liquids Evaluation with a Thermal Imaging Furnace
At high temperature, the reactivity of liquid metals, salts, oxides, etc. often requires a container-less approach for studying composition-sensitive thermodynamic properties, such as component activities and surface tension. This experimental challenge limits access to essential properties, and therefore our understanding of molten systems. Herein, a thermal imaging furnace (TIF) is investigated as a mean of container-less study of molten materials via the formation of pendant drops. In situ optical characterization of a liquid metal drop is proposed through the use of a conventional digital camera. We report one possible method for measuring surface tension of molten systems using this pendant drop technique in conjunction with an image analysis procedure. Liquid copper was used to evaluate the efficacy of this method. The surface tension of liquid copper was calculated to be 1.159 $$\pm \ 0.043$$ N$$\text {m}^{-1}$$ at $$1084\pm 20$$ $$^{\circ }\mathrm {C}$$, in agreement with published values.
Mindy Wu, Andrew H. Caldwell, Antoine Allanore
### New Laue Micro-diffraction Setup for Real-Time In Situ Microstructural Characterization of Materials Under External Stress
Laue X-ray diffraction (XRD) is a powerful probe to characterize pressure-/strain-induced microstructural changes in materials. The use of brilliant synchrotron radiation allows Laue XRD to be measured in a fast manner, leading to microstructural characterization, such as two-dimensional maps of single-crystals, their texture, and deformation, to be made in time-resolved mode with temporal resolution down to seconds. This technique can be very efficient in the studies of mechanisms of deformation, grain growth, recrystallization, and phase transitions. A progress has been obtained to extend application of Laue diffraction to high-pressure area. Recent case studies of α → β transition in Si and α → ω transitions in Zr are briefly reported. A new experimental setup specifically optimized for real-time in situ Laue measurements has been developed at the 16-BMB beamline at the Advanced Photon Source. Due to the large X-ray energy range, which is typically up to 70 keV, a polychromatic beam diffraction technique can be efficiently implemented despite some limits introduced to the scattering angle by strain generation devices. Currently, the X-ray beam is focused at the sample position down to ~2.2 × 2.2 μm2 at the full width at the half maximum. Precision sample translation stages provide fast data collection with step sizes down to 0.5 μm.
D. Popov, S. Sinogeikin, C. Park, E. Rod, J. Smith, R. Ferry, C. Kenney-Benson, N. Velisavljevic, G. Shen
### In Situ Study on the Transformation Behavior of Ti-Bearing Slags in the Oxidation Atmosphere
Rutile acts as a target phase for the titanium (Ti) recovery from Ti-bearing blast furnace slags (Ti-BFS) due to its special properties. In this study, using single hot thermocouple technique (SHTT), we investigated the crystallization behaviors of the Ti-BFS and the target rutile precipitation behaviors where both the holding temperature, the basicity (mass ratio of CaO to SiO2) and the P2O5 content were considered. We found that basicity has a vital influence on the crystallization behaviors and rod-like rutile only formed with lower basicity. As the basicity increased, the primary phase would transform from rutile to perovskite. As the basicity was 0.5, with the temperature increasing, the growth rate of rutile length initially increased, followed by a decrease with further increasing holding temperature. Thus, the growth rate of rutile had a maximum value of 7.74 µm/s at 1260 °C. Furthermore, the rutile growth followed a one-dimensional template, and the P2O5 content had an important impact. By increasing the content of P2O5, the incubation time of the rod-like rutile got decreased, suggesting that the rutile precipitation got much easier.
Yongqi Sun, Zuotai Zhang
### Dissolution of Sapphire and Alumina–Magnesia Particles in CaO–SiO2–Al2O3 Liquid Slags
Understanding the dissolution kinetics of non-metallic inclusions in liquid slag is key in optimization of slag composition for inclusion removal . In this study, the rate of dissolution of high-precision spheres of sapphire and alumina–magnesia particles in CaO–SiO2–Al2O3 liquid slags was measured in situ using a laser scanning confocal microscope at 1500 °C. It was found that the rate of dissolution of both sapphire and alumina–magnesia particles increased when the slag basicity is increased. A layer was observed around the dissolving sapphire. This layer may be a product layer and/or indicative of a mass transfer rate-controlling system. In the case of alumina–magnesia particle, the kinetics appeared more complex and depended on slag composition. No product layer or mass transfer layer was observed around the particle dissolving in slag with low basicity, whereas for the high basicity slag, a product or stagnant layer was observed, similar to that of the sapphire particle. Assuming a mass transfer-controlled system, measured diffusion coefficients for sapphire particles in slags tested in this study ranged from 10−11 to 10−10 m2 s−1 at 1500 °C.
Hamed Abdeyazdan, Neslihan Dogan, Raymond J. Longbottom, M. Akbar Rhamdhani, Michael W. Chapman, Brian J. Monaghan
### In Situ Characterization of Hot Cracking Using Dynamic X-Ray Radiography
We employ dynamic X-ray radiography (DXR) for in situ and real-time characterization of the hot cracking phenomenon for aluminum alloy 6061 under the processing conditions typical of laser powder bed fusion. The dynamics of processes such as a crack initiating from a bubble trapped subsurface are captured. We also directly observe the backfilling of liquid that heals an open crack. In addition, we demonstrate the feasibility of determining the point of origin for hot cracking with a temporal resolution of order 20 µs and spatial resolution of order 2 µm. This could shed light on the estimation of solid fraction at the initiation of hot cracking, which is a critical parameter upon which many models are based. We demonstrate the capability of DXR for generating new insights into verify or refine hot cracking models, and understand this problem fundamentally, which could ultimately lead to the optimization of process control for additive manufacturing .
Po-Ju Chiang, Runbo Jiang, Ross Cunningham, Niranjan Parab, Cang Zhao, Kamel Fezzaa, Tao Sun, Anthony D. Rollett
### High-Frequency Ultrasound Analysis in Both Experimental and Computation Level to Understand the Microstructural Change in Soft Tissues
High-frequency ultrasound has become a popular tool in characterizing small-scale soft materials. This method is particularly effective in pitch-catch mode. It has been used in tissue phantoms to evaluate the microstructure. This method has the potential to be used in determining the tissue pathology in cancer and other tissue degenerative diseases. Among different types of parameters of ultrasound, peak density has been found to be most sensitive to the microstructural and scatterer variations in soft materials. 25 MHz ultrasound wave is used in a pitch-catch mode to evaluate mm scale tissue phantoms with microscale scatterers on different thickness levels. FFT is used to convert the receiving signal to frequency domain, calibrate to remove the noise and analyze the peak density. Finite element simulation is used to model the wave propagation in the medium containing scatterers to find a systematic correlation with the scatterer density.
Leila Ladani, Koushik Paul, Jeremy Stromer
### Study of Mold Flux Heat Transfer Property by Using Thermal Imaging Enhanced Inferred Emitter Technique
A thermal imaging enhanced inferred emitter technique was developed to investigate the heat transfer behavior of mold flux. Then, the phase transformation behavior, the heat transfer behavior, the temperature field evolution and the mold/slag interfacial thermal resistance evolution for a demonstration experiment of medium carbon mold flux slag disk were in situ recorded. The demonstration experiment results showed that the phase transformation behavior of mold flux significant affected the radiation heat transfer. And the phase transformation behavior also led to the change of temperature distribution on the slag. According to the in situ observation of slag temperature field, the crystallization behavior of mold flux made the high-temperature region move toward the crystalline layer. The variation of the mold/slag interfacial thermal resistance Rint also had been directly obtained with the help of thermal imager. Rint decreased with the increase of mold/slag interfacial temperature. In addition, mold/slag interfacial deformation and the decrease of interfacial temperature caused by the crystallization behavior led to an increase of Rint.
Kaixuan Zhang, Wanlin Wang, Haihui Zhang
### Sub-rapid Solidification Study by Using Droplet Solidification Technique
Droplet solidification technique is important with respect to the fundamental study of strip casting given the common conditions of direct contact between cooling mold and solidifying metal. In this study, an improved droplet solidification technique has been developed for the in situ observation of the sub-rapid solidification phenomena of metal droplets impinging onto the water-cooled copper substrate. The heat transfer rates were calculated by the inverse heat conduction program (IHCP) , according to the responding temperatures’ gradient inside the cooling mold. Meanwhile a charge coupled device (CCD) camera was placed beside the bell jar aimed to record the whole melting and solidification process of the steel sample, which also allowed the determination of the final wetting angel, during the dipping tests. Moreover, it was found that the heat transfer rate increased with decreasing final contact angle, which means better wetting condition between the liquid sample and the copper substrate.
Cheng Lu, Wanlin Wang, Chenyang Zhu
### Comparison of Dissolution Kinetics of Nonmetallic Inclusions in Steelmaking Slag
Nonmetallic oxide inclusions of Al2O3, Al2TiO5, and CaO · 2Al2O3 (CA2) types are responsible for clogging of ceramic nozzles during liquid steel processing. The dissolution of these inclusions in steelmaking slags alleviates the clogging phenomenon. The in situ dissolution behavior a single oxide particle is studied in a synthetic CaO–Al2O3–SiO2 type slag using a high-temperature confocal scanning laser microscope at 1550 °C. The rate determining step for Al2O3 and CA2 inclusions was confirmed to be mass transport control in slag. The rate determining step for dissolution of Al2TiO5 needs further investigation. The rate of dissolution varied in the order from slowest to fastest: Al2O3 < CA2 < Al2TiO5.
Mukesh Sharma, Neslihan Dogan
### Quantitative Thermal Analysis of Solidification in a High-Temperature Laser-Scanning Confocal Microscope
Under near-equilibrium conditions, the concentric solidification technique proved to be an excellent way of studying in situ, solidification phenomena, but under rapid cooling conditions, the solid/liquid interface undergoes dynamic thermal and solute distributions. The current project aims to evaluate temperature distribution under rapid cooling conditions. A number of thermocouples are attached to the specimen surface and measured the temperature over the solidification period. The temperature profile within the liquid phase is measured separately by thinner thermocouple wires which are fixed to the crucible so that the surface tension of the molten liquid keeps the thermocouple suspended in the liquid pool. The dynamically changing temperature profile over the radial axis of the specimen under rapid cooling conditions is determined, including the all-important temperature at the solid/liquid interface . The calculated interface temperatures are utilized in phase-field simulations, and the results are found to be in excellent agreement with experimental results.
Dasith Liyanage, Suk-Chun Moon, Madeleine Du Toit, Rian Dippenaar
### In Situ Investigation of Pt–Rh Thermocouple Degradation by P-Bearing Gases
Gases bearing elements such as As, S, and Si are known to degrade Pt and Pt–Rh alloys, leading to thermocouple sensor failure during industrial operations. While the corrosive impact of As, S, and Si gases have been discussed in the literature, the impact of P has not been well studied. P may originate from the carbon feedstock, ores, additives, and refractory bricks used in metallurgical and gasification processes. In this work, gaseous P interactions with Pt–Rh (Rh = 6 and 30 wt%) alloys were isothermally investigated at 1012° C in situ using a customized environmental white light/confocal scanning laser microscope. Changes in microstructure and P-diffusion into the Pt–Rh alloys are discussed based on real-time images recorded during the exposure tests and electron probe microscopy analysis from the quenched samples.
Anna Nakano, Jinichiro Nakano, James Bennett | 2019-02-22 02:52:40 | {"extraction_info": {"found_math": true, "script_math_tex": 0, "script_math_asciimath": 0, "math_annotations": 0, "math_alttext": 0, "mathml": 0, "mathjax_tag": 0, "mathjax_inline_tex": 0, "mathjax_display_tex": 1, "mathjax_asciimath": 0, "img_math": 0, "codecogs_latex": 0, "wp_latex": 0, "mimetex.cgi": 0, "/images/math/codecogs": 0, "mathtex.cgi": 0, "katex": 0, "math-container": 0, "wp-katex-eq": 0, "align": 0, "equation": 0, "x-ck12": 0, "texerror": 0, "math_score": 0.49789243936538696, "perplexity": 4226.736080159326}, "config": {"markdown_headings": true, "markdown_code": true, "boilerplate_config": {"ratio_threshold": 0.18, "absolute_threshold": 10, "end_threshold": 15, "enable": true}, "remove_buttons": true, "remove_image_figures": true, "remove_link_clusters": true, "table_config": {"min_rows": 2, "min_cols": 3, "format": "plain"}, "remove_chinese": true, "remove_edit_buttons": true, "extract_latex": true}, "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2019-09/segments/1550247512461.73/warc/CC-MAIN-20190222013546-20190222035546-00617.warc.gz"} |